Found 100 AI tools
CodeAnt AI is an AI code review tool designed to help developers improve the efficiency and accuracy of code reviews through artificial intelligence. It reviews code changes in real time, scans for potential security vulnerabilities, and provides recommendations for code quality improvements. CodeAnt AI supports multiple programming languages, can automatically fix issues, and integrates into existing version control systems. CodeAnt AI has been recognized by industry experts and adopted by multiple high-value enterprises, demonstrating its value in improving development efficiency and code quality. The product is priced at $10 per user per month, with a 30-day free trial.
Prisma Optimize is a tool that uses artificial intelligence to analyze and optimize database queries. It accelerates applications by providing in-depth insights and actionable recommendations that make database queries more efficient. Prisma Optimize supports a variety of databases, including PostgreSQL, MySQL, SQLite, SQL Server, CockroachDB, PlanetScale, and Supabase, and can be seamlessly integrated into existing technology stacks without large-scale modifications or migrations. Its main advantages include improved database performance, reduced query latency, and optimized query patterns, making it a powerful tool that helps developers and database administrators manage and optimize databases more effectively.
Packmind is a platform designed to improve team learning speed and engineering performance through artificial intelligence technology. It helps accelerate your team's skill growth and improve code quality by integrating best coding practices and standards directly into development tools and AI coding assistants. Packmind helps technical teams increase productivity, reduce technical debt, and promote knowledge sharing and collaboration among teams through features such as its Tech Coach IDE plug-in, practice review engine, and AI coding assistant integration.
AgentStack is a command line tool for quickly creating AI agent projects. It is based on Python 3.10+, supports a variety of popular agent frameworks, such as CrewAI, Autogen, and LiteLLM, and integrates a variety of tools to simplify the development process. AgentStack is designed to simplify building AI agents from scratch, so that agent projects can be up and running quickly without complex configuration. It also provides an interactive test runner, a live development server, and production build scripts. AgentStack is open source under the MIT license and is suitable for developers who want to get started quickly with AI agent development.
Batteries Included is a full-featured platform designed for modern service development, providing a one-stop solution with source code available. It is built on open source software, supports deployment from Docker to Knative, and offers automated security and updates, intelligent automation, high reliability, advanced AI capabilities, and easy-to-integrate SSO. The platform is designed to help developers build, deploy, and easily scale projects while ensuring data privacy and cost-effectiveness.
Gait is an AI-native version control tool that helps teams more easily understand and edit AI-generated code by storing prompts, context, and code together. Gait automatically saves AI code generation conversations and shares development context with the team through version control. It supports GitHub Copilot and Cursor, and provides a variety of functions including AI Blame, Codegen Analytics, and Team Collaboration. Gait aims to increase developer productivity through AI while ensuring that the copyright and intellectual property rights of the code are protected.
Code2.AI is an innovative online platform that uses artificial intelligence technology to help developers quickly transform ideas into code. The platform compresses the code base so that AI can understand and program alongside developers. Key benefits of Code2.AI include accelerated development processes, unlimited coding capabilities, and seamless integration with existing projects. It supports any programming language, whether for web or mobile development, providing complete functional code, not just code snippets. In addition, Code2.AI also provides detailed usage guides to help users use AI for programming more effectively.
DevKit is an AI assistant designed specifically for developers. It combines leading large language models (LLMs) with more than 30 mini tools to help developers quickly build software and significantly improve development efficiency. DevKit supports quickly generating public API configurations, querying Postgres databases in plain English, generating and executing code within the chat interface, and sparking creativity through code generation and p5.js tools for artistic creation and mini-game development. DevKit has been widely recognized by the developer community for its powerful functionality and ease of use, and has been rated as one of the top development tools by the Product Hunt community.
This is a command line tool that uses AI to generate Git commit messages, reducing developer workload. It supports multiple commit conventions and customization options, is free, and is aimed at developers.
Sparrow is a comprehensive API management solution that provides a complete set of tools covering the entire API life cycle, guiding R&D teams toward excellence in API design-first development. It supports API requests, WebSockets, API testing workflows, and AI assistance, and is a collaborative open source solution designed to simplify the complexity of API development. Sparrow provides powerful tools to protect and manage API data and offers self-hosting, giving users complete control over their testing environments.
Octomind QA Agent is an automated testing tool based on artificial intelligence. It can automatically analyze web applications, generate test cases, execute tests, and maintain test code. Its main advantage is that it does not require users to have programming knowledge, significantly lowering the barrier to testing and improving testing efficiency. It is suitable for developers and teams who want to improve software quality and reduce testing costs and time. Octomind QA Agent offers a free trial so users can try out its features without providing credit card information.
Swarm is an experimental framework managed by the OpenAI Solutions team for building, orchestrating, and deploying multi-agent systems. It achieves coordination and execution between agents by defining the abstract concepts of agents and handoffs. The Swarm framework emphasizes lightweight, high controllability, and ease of testing. It is suitable for scenarios that require a large number of independent functions and instructions, allowing developers to have complete transparency and fine-grained control over context, steps, and tool calls. The Swarm framework is currently in the experimental stage and is not recommended for use in production environments.
Cline is an autonomous coding agent integrated into the IDE that uses artificial intelligence to help developers with code writing, editing, file creation, and command execution. By combining powerful API providers and models such as OpenRouter, Anthropic, and OpenAI, Cline provides a secure and easy-to-use graphical interface that lets users review and approve file changes and terminal commands at every step. This not only improves development efficiency but also ensures operational safety. Key benefits of Cline include support for multiple APIs and models, executing commands directly in the terminal, creating and editing files, analyzing images and browser screenshots, and enhancing its functionality with contextual information such as URLs, issue panels, and file and folder contents.
CursorCore is a series of open source models designed to assist programming through programming instruction alignment, supporting features such as automated editing and inline chat. These features mimic the core capabilities of closed-source AI-assisted programming tools like Cursor. This project promotes the application of AI in the field of programming through the power of the open source community, allowing developers to write and edit code more efficiently. The project is currently in its early stages, but has already demonstrated its potential to improve programming efficiency and assist with code generation.
Coframe is a platform that leverages artificial intelligence for website optimization and personalization. Working with OpenAI, it has developed a model that can generate high-quality UI code that is visually consistent with a brand. The main advantage of this technology is that it speeds up website optimization, making it faster and more cost-effective while enabling experimentation and personalization that were not possible before. The collaboration with OpenAI is also described on Coframe's blog. Pricing and positioning are not clearly stated on the page.
LlamaIndex.TS is a framework designed for building applications based on large language models (LLMs). It focuses on helping users ingest, structure, and access private or domain-specific data. The framework provides a natural language interface between humans and their data, allowing developers to enhance their software with LLMs without becoming experts in machine learning or natural language processing. LlamaIndex.TS supports popular runtimes such as Node.js, Vercel Edge Functions, and Deno.
Firebender is an AI programming assistant plug-in designed specifically for Android Studio, created by Android developers Aman and Kevin. It prioritizes privacy and focuses on Android development, aiming to help developers code more efficiently and become 10x developers.
Fragments is an open source template based on Next.js for building fully AI-generated applications. It integrates E2B Sandbox SDK and Code Interpreter SDK, supports multiple programming languages and frameworks, such as Python, Next.js, Vue.js, etc., and supports multiple artificial intelligence large language model (LLM) providers, such as OpenAI, Anthropic, etc. This template is especially suitable for developers who want to quickly start and leverage AI for application development.
devActivity is a tool that provides software engineering teams with data-driven performance assessments, AI-driven retrospective insights, contribution and work quality analysis, and operational bottleneck alerts. It is based on commit/pull request/code review/issue/comment events and aims to enhance software engineering projects by providing actionable insights and engaging gamification features.
realtime-playground is an interactive platform built on LiveKit Agents, allowing users to experience OpenAI's real-time API directly in the browser. The platform provides users with a place to experiment and explore the real-time interaction capabilities of artificial intelligence by integrating the latest API technology.
gptme is a personal AI assistant that runs in the terminal. It is equipped with local tools and can write code, use the terminal, browse the web, and perform visual recognition. It is a local alternative to the ChatGPT "code interpreter" that is not restricted by software limitations, internet access, timeouts, or privacy concerns.
firecrawl-openai-realtime is an OpenAI real-time API console integrated with Firecrawl, designed to provide developers with an interactive API reference and checker. It includes two utility libraries, openai/openai-realtime-api-beta as a reference client (for browsers and Node.js), and /src/lib/wavtools, which allows simple management of audio in the browser. This product is a React project created using create-react-app and packaged with Webpack.
o1 is an experimental project that aims to help models solve otherwise intractable logic problems by using large language models (LLMs) to create reasoning chains. It supports Groq, OpenAI, and Ollama backends, allowing models to "think" through problems via dynamic reasoning chains. o1 demonstrates that the logical reasoning capabilities of existing models can be significantly improved through prompting alone, without additional training.
o1-engineer is a command-line tool designed to help developers efficiently manage and interact with projects through OpenAI's API. It provides code generation, file editing, project planning and other functions to simplify the development workflow.
Canvas is a new interface launched by OpenAI designed to improve writing and coding projects through collaboration with ChatGPT. It allows users to work with ChatGPT in a separate window, going beyond a simple chat interface. Canvas leverages the GPT-4o model to better understand the user's context and provide inline feedback and suggestions. It supports direct editing of text or code and provides a shortcut menu to help users adjust writing length, debug code, and more. Canvas also supports restoring earlier versions, helping users manage different versions of a project.
torchao is a PyTorch library focused on custom data types and optimization, supporting quantized and sparsified weights, gradients, optimizer states, and activations for inference and training. It is compatible with torch.compile() and FSDP2 and can accelerate most PyTorch models. torchao aims to improve model inference speed and memory efficiency while minimizing accuracy loss, using techniques such as quantization-aware training (QAT) and post-training quantization (PTQ).
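As a rough illustration of how a post-training quantization API of this kind is typically applied, the sketch below assumes torchao exposes a `quantize_` helper and an `int8_weight_only` configuration; the exact import paths and config names are assumptions and should be checked against the torchao documentation for your installed version.

```python
# Hypothetical sketch of weight-only int8 post-training quantization with torchao.
# The names quantize_ and int8_weight_only are assumptions; verify against the docs.
import torch
from torchao.quantization import quantize_, int8_weight_only

model = torch.nn.Sequential(
    torch.nn.Linear(1024, 4096),
    torch.nn.ReLU(),
    torch.nn.Linear(4096, 1024),
).eval()

# Replace the weights of supported layers with int8-quantized versions in place.
quantize_(model, int8_weight_only())

# The quantized model remains compatible with torch.compile() for further speedups
# (meaningful acceleration generally requires a GPU with suitable kernels).
compiled = torch.compile(model)
out = compiled(torch.randn(1, 1024))
```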
gradio-bot is a tool that turns Hugging Face Space or Gradio apps into Discord bots. It allows developers to quickly deploy existing machine learning models or applications to the Discord platform through simple command line operations to achieve automated interaction. This not only improves the accessibility of the application, but also provides developers with a new channel for direct interaction with users.
Llama Stack is an API collection that defines and standardizes the building blocks required for generative AI application development. It covers the entire development lifecycle from model training and fine-tuning, to product evaluation, to building and running AI agents in production environments. Llama Stack aims to accelerate innovation in the AI field by providing consistent, interoperable components.
Show-Me is an open source application designed to provide a visual and transparent alternative to traditional large language model interactions such as ChatGPT. It enables users to understand the step-by-step thought process of language models by decomposing complex problems into a series of reasoning subtasks. The application uses LangChain to interact with language models and visualize the inference process through a dynamic graphical interface.
Repopack is a powerful tool that packages your entire codebase into a single, AI-friendly file, ideal for feeding your codebase to large language models (LLMs) or other AI tools such as Claude, ChatGPT, and Gemini.
promptic is a lightweight, decorator-based Python library that simplifies interacting with large language models (LLMs) through litellm. With promptic, you can easily create prompts, handle input parameters, and receive structured output from LLMs with just a few lines of code.
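A minimal sketch of the decorator pattern described above, assuming promptic exposes an `llm` decorator that interpolates a function's arguments into its docstring and sends the result to the model via litellm; the decorator name and behavior are assumptions meant only to illustrate the style.

```python
# Hypothetical usage sketch of a decorator-based prompt: the docstring acts as the
# prompt template and the function arguments fill its placeholders.
# Requires an API key for the underlying provider (e.g. OPENAI_API_KEY) in the environment.
from promptic import llm  # decorator name assumed

@llm  # a model= argument is typically accepted; the default model is an assumption
def summarize_diff(diff: str):
    """Summarize the following git diff in one sentence:

    {diff}
    """

print(summarize_diff("diff --git a/app.py b/app.py\n+print('hello')"))
```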
WaveCoder is a large code language model developed by Microsoft Research Asia. It enhances the breadth and versatility of code LLMs through instruction fine-tuning, and demonstrates excellent performance on multiple programming tasks such as code summarization, generation, translation, and repair. WaveCoder's innovation lies in its data synthesis framework and two-stage instruction data generation strategy, which ensure high data quality and diversity. The open-source release of this model provides developers with a powerful programming aid that helps improve development efficiency and code quality.
RD-Agent is an automated research and development tool launched by Microsoft Research Asia. Built on the capabilities of large language models, it establishes a new, AI-driven model of R&D process automation. By integrating data-driven R&D systems, it uses AI to automate innovation and development. It not only improves R&D efficiency but also uses intelligent decision-making and feedback mechanisms to open up possibilities for cross-field innovation and knowledge transfer.
Copilot Arena is an open source AI programming assistant that provides users with code auto-completion functions by integrating a variety of the latest large-scale language models (LLMs), such as GPT-4o, Codestral, Llama-3.1, etc. This plug-in is designed to evaluate the performance of different language models in programming assistance and help developers find the model that best suits their programming style. It's free to use and available as a Visual Studio Code plug-in, making it easy to install and use.
MemoryScope is a framework that provides long-term memory capabilities for large language model (LLM) chatbots. It enables chatbots to store and retrieve memory fragments through a memory database and worker library, delivering a personalized user interaction experience. Through operations such as memory retrieval and memory consolidation, it enables a bot to understand and remember a user's habits and preferences, providing a more personalized and coherent conversation experience. MemoryScope supports multiple model APIs, including OpenAI and DashScope, and can be used with existing agent frameworks such as AutoGen and AgentScope, offering rich customization and extensibility.
awesome-cursorrules is a collection of .cursorrules files customized for the Cursor AI editor. Cursor AI is an AI-powered code editor that allows developers to define project-specific instructions through .cursorrules files, allowing the AI to generate code based on the project's specific needs and preferences. These files help improve the relevance and accuracy of code generation, ensure that code is consistent with the project's style guide, improve development efficiency, and promote consistency in coding practices across team projects.
Sentient is a framework/SDK that allows developers to build intelligent agents that can control browsers in 3 lines of code. It leverages the latest artificial intelligence technology to enable complex network interactions and automation tasks through simple code. Sentient supports a variety of AI models, including OpenAI, Together AI, etc., and can provide customized solutions according to users' specific needs.
AgentRE is an agent-based framework specifically designed for relationship extraction in complex information environments. It can efficiently process and analyze large-scale data sets by simulating the behavior of intelligent agents to identify and extract relationships between entities. This technology is of great significance in the fields of natural language processing and information retrieval, especially in scenarios where large amounts of unstructured data need to be processed. The main advantages of AgentRE include its high scalability, flexibility and ability to handle complex data structures. The framework is open source, allowing researchers and developers to freely use and modify it to suit different application needs.
The Shire is an AI programming agent language designed to enable communication between large language models (LLMs) and integrated development environments (IDEs) to support automated programming. It originated from the AutoDev project, which aims to provide developers with an AI-driven IDE and includes DevIns, the predecessor of Shire. Shire enables users to build an AI-driven development environment that meets their individual needs by providing customized AI agents.
RAGLAB is a modular, research-oriented open source framework focused on Retrieval Augmented Generation (RAG) algorithms. It provides replications of 6 existing RAG algorithms, as well as a comprehensive evaluation system with 10 benchmark datasets, enabling fair comparison of different RAG algorithms and facilitating the efficient development of new algorithms, datasets, and evaluation metrics.
GenAgent is a framework for building collaborative AI systems by creating workflows and converting them into code that large language model (LLM) agents can better understand. GenAgent can learn from human-designed workflows and create new ones, and the generated workflows can be interpreted as collaborative systems that complete complex tasks.
Replit Agent is an AI-driven tool designed to help users build software projects. Its ability to understand natural language prompts and assist in creating applications from scratch makes software development more accessible to users of all skill levels. Replit Agent is Replit's latest attempt to democratize AI coding tools, pushing human-machine collaboration to a new level, allowing AI agents and humans to complement each other, fill each other's gaps, and learn from each other.
pr-agent is an AI assistant tool launched by CodiumAI, designed to help developers review code more quickly and efficiently. It automatically analyzes commits and PRs and provides a variety of feedback, such as automatically generated PR descriptions, review feedback, security issues, and code suggestions. The tool supports multiple programming languages and is open source and available on GitHub. By simplifying the code review process, it improves software quality and is a powerful assistant for development teams and individual developers.
Yi-Coder is a family of open-source large language models (LLMs) that provide state-of-the-art coding performance with fewer than 10 billion parameters. It comes in two sizes—1.5B and 9B parameters—in base and chat versions and is designed for efficient inference and flexible training. Yi-Coder-9B was trained on an additional 2.4 trillion high-quality tokens drawn from a repository-level code corpus on GitHub and code-related data filtered from CommonCrawl. Yi-Coder excels at a variety of programming tasks, including basic and competitive programming, code editing and repository-level completion, long-context understanding, and mathematical reasoning.
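Since Yi-Coder is distributed as open weights, it can presumably be loaded with the Hugging Face transformers library in the usual way. The sketch below assumes the chat variant is published under the `01-ai/Yi-Coder-9B-Chat` identifier; that model ID and the generation settings are assumptions to be checked against the model card.

```python
# Hypothetical sketch: loading the 9B chat variant with transformers.
# The model ID "01-ai/Yi-Coder-9B-Chat" is an assumption; check the model card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "01-ai/Yi-Coder-9B-Chat"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Write a Python function that checks whether a string is a palindrome."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```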
JetBrains is a well-known provider of software development tools and services, offering a series of integrated development environments (IDEs) and tools for different programming languages and development platforms. Known for powerful code analysis, intelligent code suggestions, quick navigation, and a rich plugin ecosystem, these tools are designed to improve developer productivity and code quality. JetBrains products are widely used in enterprise software development, helping teams improve development efficiency, reduce errors, and accelerate time to market.
How Much VRAM is an open source project designed to help users estimate the amount of video memory their models require during training or inference. This project enables users to decide on the desired hardware configuration without having to try multiple configurations. This project is very important for developers and researchers who need to train deep learning models because it can reduce the trial and error cost of hardware selection and improve efficiency. The project is licensed under the MPL-2.0 license and is provided free of charge.
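The kind of estimate such a project automates can be illustrated with a back-of-envelope calculation: for inference, weight memory is roughly the parameter count times the bytes per parameter, plus overhead for activations, the KV cache, and the framework. The sketch below is a simplified illustration of that idea, not the project's actual formula.

```python
# Back-of-envelope VRAM estimate for inference (illustrative only; a real estimator
# also accounts for KV cache size, activations, batch size, and framework overhead).
BYTES_PER_PARAM = {"fp32": 4, "fp16": 2, "bf16": 2, "int8": 1, "int4": 0.5}

def estimate_weight_vram_gib(num_params_billions: float, dtype: str = "fp16", overhead: float = 1.2) -> float:
    """Estimate GPU memory for model weights, with a flat multiplier for overhead."""
    bytes_total = num_params_billions * 1e9 * BYTES_PER_PARAM[dtype]
    return bytes_total * overhead / 1024**3

# A 7B model in fp16 needs about 13 GiB for weights alone, ~15.6 GiB with a 20% margin.
print(f"{estimate_weight_vram_gib(7, 'fp16'):.1f} GiB")
```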
QAbot-zh/query-key is a purely front-end API testing tool. It supports multiple API formats, such as oneapi/newapi, and can test OpenAI-format APIs. Its main advantage is its pure front-end implementation: users avoid gateway timeouts and their data stays secure. It also provides a complete view of test results, including response time and model consistency, so users can intuitively understand an API's performance. In addition, it supports one-click local use and hosted online pages, making it easy to deploy and use quickly.
query-key-app is a server-side application for checking interface status. It supports availability testing of APIs in the OpenAI standard format. The application was built with the help of GPT, provides a simple query interface, and supports local operation and serverless deployment. Its main advantages are easy deployment, convenient use, and the ability to quickly check interface status, making it suitable for developers who need to quickly verify interface availability.
ComfyUI-Nexus is a node customized for ComfyUI, designed to achieve seamless integration of multi-person collaboration workflows. It allows multiple users to work on the same workflow simultaneously, supports local and remote access, and enhances team collaboration with live chat capabilities. The plug-in also has administrator permission control, workflow backup and other functions to ensure smooth and efficient team workflow.
Hey is a command line AI assistant driven by a ChatGPT-style AI model accessed through MindsDB. The project was built for the Hashnode X MindsDB hackathon. Hey can interact with any other large language model (LLM) service URL, not just mdb.ai. It provides a fast and convenient way to get answers to programming questions, supporting code snippets and multiple configuration options.
Ape is an open source AI prompt engineer developed by Weavel, aiming to improve efficiency by optimizing AI interaction methods. It is a prompt engineering library specially designed for AI, supporting customized and automated AI interaction processes, helping developers and users utilize AI technology more efficiently. Ape's core advantages lie in its open source, flexibility and ease of use, making it suitable for scenarios that require complex interaction with AI.
MLE-Agent is an intelligent companion designed for machine learning engineers and researchers. Its features include autonomous baseline creation, integration with arXiv and Papers with Code, intelligent debugging, file system integration, comprehensive tool integration, and interactive command line chat. It supports AI/ML APIs such as OpenAI and Ollama and integrates with MLOps tools to support a seamless workflow.
Zed AI is an AI assistant integrated into the programming workflow that enhances code generation, transformation, and analysis through direct dialogue with large language models (LLMs). It provides a variety of interaction methods, including an assistant panel, slash commands, inline assists, and a prompt library, to improve development efficiency. Zed AI also supports multiple LLM providers, allowing developers to choose different models according to their needs. In addition, Zed AI offers a new hosted service that is free for the first month and is paired with an Anthropic API designed for fast transformation of existing text.
reTerminal by Seeed Studio is an all-in-one development board based on the Raspberry Pi. It has an IPS multi-touch screen, dual-band Wi-Fi, and Bluetooth 5.0, and comes pre-installed with a compatible Linux system. It adopts a modular design and provides a wealth of interfaces and components, making it suitable for personalized IoT and artificial intelligence projects, and it can also implement industrial-grade monitoring and control functions.
RAG_Techniques is a collection of technologies focused on Retrieval-Augmented Generation (RAG) systems, aiming to improve the accuracy, efficiency, and context richness of the system. It provides a hub for cutting-edge technology, driving the development and innovation of RAG technology through community contributions and a collaborative environment.
Parsera is a lightweight Python library specifically designed to be combined with large language models (LLMs) to simplify the process of website data scraping. It makes data scraping more efficient and cost-effective by using minimal tokens to increase speed and reduce costs. Parsera supports multiple chat models and can be customized to use different models, such as OpenAI or Azure.
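A minimal sketch of what extraction with Parsera might look like, assuming a `Parsera` class with a `run(url=..., elements=...)` method where `elements` maps output field names to natural-language descriptions of what to extract; the class and method names, and the environment-variable setup, are assumptions to be verified against the project's documentation.

```python
# Hypothetical usage sketch: extract structured fields from a web page with an LLM.
# Class/method names (Parsera, run) and the elements schema are assumptions.
import os
from parsera import Parsera

os.environ["OPENAI_API_KEY"] = "sk-..."  # Parsera is described as supporting OpenAI or Azure models

elements = {
    "Title": "Title of the article",
    "Date": "Publication date",
    "Author": "Name of the author",
}

scraper = Parsera()
result = scraper.run(url="https://example.com/some-article", elements=elements)
print(result)
```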
avante.nvim is a Neovim plugin designed to mimic the behavior of the Cursor AI IDE, with AI-driven code suggestions, helping users apply these suggestions directly to their source files with minimal effort. The plug-in is currently in the early stages of development, and the code may be unstable and prone to problems, but the project is undergoing rapid iterations and many exciting new features will be added one after another.
This is a project built with Next.js that leverages Upstash Vector to provide semantic search over Wikipedia, enabling efficient search and retrieval of Wikipedia content. It also optimizes and loads the custom Google font Inter.
Omni Engineer is a console tool with integrated artificial intelligence capabilities designed to enhance development workflows. It provides features such as intelligent responses to programming queries, file management, web search, and image processing. Compared to its predecessor, Claude Engineer, Omni Engineer offers more control while simplifying operation, and is suitable for developers who want a more capable AI assistant while coding.
The AI Scientist is a comprehensive system designed to enable fully automated, open-ended scientific discovery. It enables foundation models, such as large language models (LLMs), to conduct research independently. The system takes on a major challenge for artificial intelligence in scientific research, assisting human scientists with ideation and coding through automation while reducing reliance on human supervision.
agent-service-toolkit is a complete toolkit for running AI agent services based on LangGraph, including LangGraph agent, FastAPI service, client and Streamlit application, providing complete settings from agent definition to user interface. It leverages the high degree of control and rich ecosystem of the LangGraph framework to support advanced features such as concurrent execution, graph looping, and streaming results.
IntelliJ IDEA is an integrated development environment launched by JetBrains. It is specially designed for Java and Kotlin languages. It provides powerful code automatic completion, code analysis, flexible navigation, and rich plug-in ecosystem. The latest version 2024.2 introduces new terminal (Beta) features, including AI-driven command generation, enhanced command completion, and custom prompts, aiming to improve developers’ coding efficiency and experience.
LangGraph Engineer is an alpha version of the agent designed to help quickly launch LangGraph applications. It focuses on creating the correct nodes and edges, but does not try to write the logic to populate the nodes and edges, leaving it to the user.
labelU-Kit is an open source front-end labeling component library that provides labeling functions for images, videos, and audios, and supports multiple labeling methods such as 2D boxes, points, lines, polygons, and three-dimensional boxes. It is provided as an NPM package, which is convenient for developers to integrate into their own annotation platform to improve the efficiency and flexibility of data annotation.
RAGFoundry is a library designed to improve the ability of large language models (LLMs) to use external information by fine-tuning the model on a specially created RAG augmented dataset. The library helps users easily train models through Parameter Efficient Fine-Tuning (PEFT) and measure performance improvements using RAG-specific metrics. It has a modular design and the workflow is customizable through configuration files.
This is an open source project for remote control operation of the humanoid robot Unitree H1_2. It utilizes Apple Vision Pro technology to allow users to control the robot through a virtual reality environment. The project was tested on Ubuntu 20.04 and Ubuntu 22.04, and detailed installation and configuration guides are provided. The main advantages of this technology include the ability to provide an immersive remote control experience and support testing in a simulated environment, providing a new solution for the field of robot remote control.
AI Artifacts is an open source version of the Anthropic Claude Artifacts interface that uses E2B's code interpreter SDK and core SDK to execute AI code. E2B provides a cloud sandbox to safely run AI-generated code and can handle installing libraries, running shell commands, running Python, JavaScript, R and Nextjs applications, etc.
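To give a sense of how a cloud code-execution sandbox of this kind is used, the sketch below assumes E2B's Python code interpreter SDK exposes a `Sandbox` class with `run_code` and `kill` methods and a `logs` field on the result; these names are assumptions based on the SDK's general design and should be verified against E2B's documentation.

```python
# Hypothetical sketch of running AI-generated code in an E2B cloud sandbox.
# Class/method names (Sandbox, run_code, kill) are assumptions; check the
# e2b-code-interpreter docs. Requires an E2B API key in the environment.
from e2b_code_interpreter import Sandbox

ai_generated_code = """
import math
print(sum(math.sqrt(i) for i in range(10)))
"""

sandbox = Sandbox()                         # start an isolated cloud sandbox
execution = sandbox.run_code(ai_generated_code)
print(execution.logs)                       # stdout/stderr captured from the sandbox
sandbox.kill()                              # tear the sandbox down when done
```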
Llama Coder is an AI-based code generator powered by Llama 3.1 and Together AI. It can understand users' ideas and convert them into working application code, greatly improving development efficiency and the pace of innovation. Backed by powerful AI models, it is highly intelligent and flexible, and represents a significant advance for programming tools.
GitHub Models is a new generation AI model service launched by GitHub, aiming to help developers become AI engineers. It integrates industry-leading large and small language models directly into the GitHub platform, allowing more than 100 million users to access and use these models directly on GitHub. GitHub Models provides an interactive model playground where users can test different prompts and model parameters without paying a fee. In addition, GitHub Models integrates with Codespaces and VS Code, allowing developers to seamlessly use these models in the development environment and achieve production deployment through Azure AI, providing enterprise-level security and data privacy protection.
JoyCoder is an intelligent programming assistant independently developed by JD.com. Based on a large language model and compatible with a variety of IDEs, it provides code prediction, intelligent Q&A, and other functions. It can improve developers' programming efficiency and code quality, reduce programming errors, and lower the frequency of bug fixes. The product is suitable for all kinds of developers, especially for rapid development and testing needs. As intelligent programming takes off, JoyCoder provides developers with an efficient and smooth programming environment that meets diverse needs. For pricing, contact a pre-sales consultant.
knowledge_graph_maker is a Python library that converts arbitrary text into a knowledge graph based on a given ontology. A knowledge graph is a semantic network that represents real-world entities and the relationships between them. The library uses graph algorithms and centrality calculations to help users analyze text content in depth, perform connectivity analysis between concepts, and deepen engagement with text through graph retrieval-augmented generation (GRAG).
Husky-v1 is an open source language agent model focused on solving complex multi-step reasoning tasks involving numerical, tabular, and knowledge-based reasoning. It performs reasoning using expert models for tool use, code generation, query generation, and mathematical reasoning. The model supports CUDA 11.8, requires downloading the corresponding model files, and can run all expert models in parallel through an optimized inference process.
lmms-finetune is a unified code base designed to simplify the fine-tuning of large multimodal models (LMMs). It provides a structured framework that lets users easily integrate the latest LMMs and fine-tune them, supporting strategies such as full fine-tuning and LoRA. The code base is simple and lightweight, easy to understand and modify, and supports multiple models including LLaVA-1.5, Phi-3-Vision, Qwen-VL-Chat, LLaVA-NeXT-Interleave, and LLaVA-NeXT-Video.
SuperCoder is an open source autonomous software development system that leverages advanced AI tools and agents to simplify and automate coding, testing and deployment tasks, improving efficiency and reliability. It supports multiple programming languages and frameworks to meet different development needs.
Composio is a platform that provides high-quality tools and integrations for AI agents. It simplifies agent authentication, accuracy, and reliability, allowing developers to integrate multiple tools and frameworks with a single line of code. It supports more than 100 tools across more than 90 platforms such as GitHub, Notion, and Linear, and provides a variety of functions including software operations, operating system interaction, browser capabilities, search, software engineering (SWE) environments, and retrieval-augmented generation (RAG). Composio also supports six different authentication protocols, which can significantly improve the accuracy of agent tool calls. Additionally, Composio can be embedded into applications as a backend service to manage authentication and integration for all users and agents, maintaining a consistent experience.
RouteLLM is a framework for serving and evaluating large language model (LLM) routers. It intelligently routes queries to models of varying cost and performance to save costs while maintaining response quality. It delivers router functionality out of the box and has shown cost reductions of up to 85% while retaining 95% of GPT-4's performance on widely used benchmarks.
Claude Engineer is an advanced command line interface that leverages the capabilities of Anthropic's Claude 3 and Claude 3.5 models to assist with a wide range of software development tasks. This tool seamlessly combines the power of state-of-the-art large language models with practical file system operations, web search capabilities, intelligent code analysis and execution capabilities.
AgentScope is an innovative multi-agent platform designed to empower developers to build multi-agent applications using large models. It is easy to use, highly robust, and features actor-based distribution, and it supports custom fault-tolerance controls and retry mechanisms to enhance application stability.
Typebot is an open source chatbot builder that allows users to visually create advanced chatbots, embed them into any web/mobile application, and collect results in real time. It provides more than 34 building blocks, such as text, pictures, videos, audio, conditional branches, logic scripts, etc., and supports multiple integration methods, such as Webhook, OpenAI, Google Sheets, etc. Typebot supports custom themes to match brand identity and provides in-depth analysis capabilities to help users gain insight into the chatbot's performance.
ai-renamer is a Node.js-based command line tool that leverages Ollama and LM Studio models (such as Llava, Gemma, Llama, etc.) to intelligently rename files based on their content. It supports multiple file types such as videos and pictures, and can optimize the renaming process through custom parameters. This tool enables users to automate file management and improve efficiency, especially for developers and content creators who need to batch process file names.
SaltAI Language Toolkit is a project that integrates the retrieval-augmented generation (RAG) tool Llama-Index, Microsoft's AutoGen, and LLaVA-NeXT, enhancing the functionality and user experience of the platform through ComfyUI's adaptable node interface. Agent functionality was added on May 9, 2024.
Praison AI is a low-code, centralized framework designed to simplify the creation and orchestration of multi-agent systems for a variety of large language model (LLM) applications. It emphasizes ease of use, customizability and human-computer interaction. Praison AI leverages AutoGen and CrewAI or other agent frameworks to enable complex automation tasks with predefined roles and tasks. Users can interact with the agent through the command line interface or user interface, create custom tools, and extend its functionality in a variety of ways.
Mamba-Codestral-7B-v0.1 is an open source code model based on the Mamba2 architecture developed by the Mistral AI Team, with performance comparable to state-of-the-art Transformer-based code models. It performs well on multiple industry-standard benchmarks, providing efficient code generation and understanding capabilities for programming and software development domains.
Codestral Mamba is a language model released by the Mistral AI team that focuses on code generation. It is based on the Mamba2 architecture and offers linear-time inference and, in theory, the ability to model sequences of unbounded length. The model is trained with advanced coding and reasoning capabilities that rival current state-of-the-art Transformer-based models.
Datalore is an AI-driven data analysis tool that integrates Anthropic's Claude API and multiple data analysis libraries. It provides an interactive interface that enables users to perform data analysis tasks using natural language commands.
Claude Dev is a VSCode extension that leverages the agentic coding capabilities of Anthropic's Claude 3.5 Sonnet to handle complex software development tasks step by step. Not only does it support reading and writing files, creating projects, and executing terminal commands (with user permission), it also provides an intuitive GUI that allows users to safely and easily explore the potential of agentic AI.
exo is an experimental software project that unifies existing devices at home, such as iPhone, iPad, Android, Mac, and Linux machines, into one powerful GPU for running AI models. It supports a variety of popular models, such as LLaMA, and has dynamic model partitioning that optimally splits models based on the current network topology and device resources. In addition, exo provides a ChatGPT-compatible API, so running models through exo in an application requires only a one-line code change.
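Because exo exposes a ChatGPT-compatible endpoint, the one-line change mentioned above typically amounts to pointing an existing OpenAI client at the local exo server. The sketch below assumes a locally running exo instance; the port and model name are placeholder assumptions, not values taken from the source.

```python
# Hypothetical sketch: reuse the standard OpenAI client against exo's
# ChatGPT-compatible API. The base_url port and model name are placeholder assumptions.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # local exo endpoint (port is an assumption)
    api_key="not-needed-locally",         # local servers usually ignore the key
)

response = client.chat.completions.create(
    model="llama-3.1-8b",                 # placeholder name for a model served by exo
    messages=[{"role": "user", "content": "Say hello from my home GPU cluster."}],
)
print(response.choices[0].message.content)
```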
Tribe AI is a low-code tool that leverages the langgraph framework to allow users to easily customize and coordinate teams of agents. By allocating complex tasks to agents who specialize in different areas, each agent can focus on what it does best and solve problems faster and better.
llm-graph-builder is an application that uses large language models (such as OpenAI, Gemini, etc.) to extract nodes, relationships and their attributes from unstructured data (PDF, DOCS, TXT, YouTube videos, web pages, etc.), and uses the Langchain framework to create a structured knowledge graph. It supports uploading files from local machines, GCS or S3 buckets, or network resources, selecting LLM models and generating knowledge graphs.
Semantic Chunkers is a multi-modal chunking library for intelligently chunking text, video, and audio to improve the efficiency and accuracy of AI and data processing.
MInference is an inference acceleration framework for long-context large language models (LLMs). It takes advantage of the dynamic sparsity characteristics in the LLMs attention mechanism, significantly improves the speed of pre-filling through static pattern recognition and online sparse index approximate calculation, achieving a 10x acceleration in processing 1M context on a single A100 GPU, while maintaining the accuracy of inference.
gpt-frontend-code-gen is a front-end project built based on React and Vite, combined with Koa back-end service, to realize the function of generating and previewing front-end pages. It uses the GPT-4 model and supports Chakra UI and ShadcnUI component generation, allowing developers to continuously iterate and modify the page through dialogue until satisfactory results are achieved.
FlashAttention is an open source attention mechanism library designed for Transformer models in deep learning to improve computing efficiency and memory usage efficiency. It optimizes attention calculation through IO-aware methods, reducing memory usage while maintaining accurate calculation results. FlashAttention-2 further improves parallelism and work distribution, while FlashAttention-3 is optimized for Hopper GPUs and supports FP16 and BF16 data types.
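For reference, the library's attention kernel is typically called on half-precision tensors laid out as (batch, sequence, heads, head_dim) on a CUDA device. The sketch below assumes the `flash_attn_func` entry point; the exact signature should be checked against the installed flash-attn version.

```python
# Hypothetical sketch of calling the FlashAttention kernel directly.
# Requires a CUDA GPU and fp16/bf16 tensors; flash_attn_func is assumed as the entry point.
import torch
from flash_attn import flash_attn_func

batch, seqlen, nheads, headdim = 2, 1024, 16, 64
q = torch.randn(batch, seqlen, nheads, headdim, dtype=torch.float16, device="cuda")
k = torch.randn_like(q)
v = torch.randn_like(q)

# Causal self-attention computed without materializing the full attention matrix.
out = flash_attn_func(q, k, v, causal=True)
print(out.shape)  # (batch, seqlen, nheads, headdim)
```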
AWS App Studio is a service powered by generative artificial intelligence that uses natural language to build enterprise-grade applications, enabling technical professionals without deep software development skills, such as IT project managers, data engineers, and enterprise architects, to quickly develop business applications that meet the needs of the organization in minutes. The service provides highly secure, scalable and high-performance applications without having to consider the underlying code or infrastructure. App Studio handles all deployment, operation and maintenance work, freeing technical professionals to focus on innovation rather than application management.
Anthropic Power Artifacts is an open source project that replicates the Artifacts user interface from Anthropic's Claude chat application. The project uses E2B's Code Interpreter SDK to safely execute AI-generated code. E2B provides a cloud sandbox environment that can safely run AI-generated code and handle installing libraries, running shell commands, and executing Python, JavaScript, R, and Next.js applications.
maestro is an intelligent framework for coordinating sub-agents. It uses two AI models from the Anthropic API, Claude Opus and Claude Haiku, to decompose target tasks, perform sub-tasks, and finally integrate the results. The framework supports multiple APIs, including Anthropic, Gemini, and OpenAI, and simplifies model selection through LiteLLM.
LAMDA-TALENT is a comprehensive tabular data analysis toolbox and benchmarking platform that integrates more than 20 deep learning methods, more than 10 traditional methods, and more than 300 diverse tabular data sets. Designed to improve model performance on tabular data, the toolbox provides powerful preprocessing capabilities, optimizes data learning, and supports user-friendly and adaptable operations for both novice and expert data scientists.
CodeGeeX4-ALL-9B is the latest open source version of the CodeGeeX4 series. Continually trained from GLM-4-9B, it significantly improves code generation capabilities. It supports code completion, code generation, code interpretation, web search, function calling, code Q&A, and other functions, covering multiple software development scenarios. It performs well on public benchmarks such as BigCodeBench and NaturalCodeBench, and is the strongest code generation model with fewer than 10 billion parameters, achieving the best balance between inference speed and model performance.
APIGen is an automated data generation pipeline designed to produce verifiable, high-quality datasets for function-calling applications. It ensures data reliability and correctness through a three-stage verification process: format checking, actual function execution, and semantic verification. APIGen can generate diverse datasets in a large-scale, structured manner and verifies the correctness of generated function calls by actually executing the APIs, which is crucial to improving the performance of function-calling agent models.
Meta Large Language Model Compiler (LLM Compiler-13b-ftd) is an advanced large language model built on Code Llama, focusing on compiler optimization and code reasoning. It shows excellent performance in predicting LLVM optimization effects and assembly code decompilation, which can significantly improve code efficiency and reduce code size.