English | 简体中文 | 日本語 | Español | Klingon | Français
Dify is an LLM application development platform that has helped build over 100,000 applications. It integrates BaaS and LLMOps, covering the essential tech stack for building generative AI-native applications, including a built-in RAG engine. With Dify, you can deploy your own version of Assistants API and GPTs, based on any LLM.
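Every app you publish on Dify exposes an HTTP API. Below is a minimal sketch of calling a chat app, assuming the Dify Cloud base URL and a placeholder `YOUR_APP_API_KEY`; point the URL at your own server for self-hosted deployments, and check the API reference of your version for the exact request schema.

```bash
# Minimal sketch: send one question to a published Dify chat app.
# YOUR_APP_API_KEY is a placeholder; the base URL assumes Dify Cloud and
# should be replaced with your own server's address if you self-host.
curl -X POST 'https://api.dify.ai/v1/chat-messages' \
  -H 'Authorization: Bearer YOUR_APP_API_KEY' \
  -H 'Content-Type: application/json' \
  -d '{
        "inputs": {},
        "query": "What can this app do?",
        "response_mode": "blocking",
        "user": "demo-user"
      }'
```

In `blocking` mode the response is a single JSON object containing the answer; a streaming mode is also available for token-by-token output.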
Dify.AI Cloud provides all the capabilities of the open-source version, and includes 200 free OpenAI GPT-3.5 requests to get you started.
Dify is model-agnostic and offers a more complete tech stack than code libraries such as LangChain. Unlike OpenAI's Assistants API, Dify allows full local deployment of its services.
| Feature | Dify.AI | Assistants API | LangChain |
| --- | --- | --- | --- |
| Programming Approach | API-oriented | API-oriented | Python Code-oriented |
| Ecosystem Strategy | Open Source | Closed and Commercial | Open Source |
| RAG Engine | Supported | Supported | Not Supported |
| Prompt IDE | Included | Included | None |
| Supported LLMs | Rich Variety | Only GPT | Rich Variety |
| Local Deployment | Supported | Not Supported | Not Applicable |
1. LLM Support: Integration with OpenAI's GPT family of models or the open-source Llama2 family. Dify supports mainstream commercial models as well as open-source models, whether locally deployed or accessed through MaaS.
2. Prompt IDE: Visually orchestrate applications and services based on LLMs together with your team.
3. RAG Engine: Various RAG capabilities based on full-text indexing or vector database embeddings, with direct upload of PDFs, TXTs, and other text formats (see the sketch after this list).
4. Agents: A Function Calling based agent framework that lets users configure agents in a what-you-see-is-what-you-get way. Dify ships with basic plugin capabilities such as Google Search.
5. Continuous Operations: Monitor and analyze application logs and performance, and continuously improve prompts, datasets, or models using production data.
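As a hedged illustration of the RAG engine (item 3 above), the call below pushes raw text into an existing knowledge dataset for chunking, embedding, and indexing. The endpoint path, field names, and the `{dataset_id}` and `YOUR_DATASET_API_KEY` placeholders are assumptions based on recent Dify releases; verify them against the dataset API reference of your version.

```bash
# Hedged sketch: add a plain-text document to a knowledge dataset so the
# RAG engine can index it. Endpoint and fields are assumptions; check the
# dataset API docs of your Dify release before relying on them.
curl -X POST 'https://api.dify.ai/v1/datasets/{dataset_id}/document/create_by_text' \
  -H 'Authorization: Bearer YOUR_DATASET_API_KEY' \
  -H 'Content-Type: application/json' \
  -d '{
        "name": "product-faq",
        "text": "Q: Does Dify support local deployment? A: Yes.",
        "indexing_technique": "high_quality",
        "process_rule": {"mode": "automatic"}
      }'
```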
Star us, and you'll get instant notifications for all new releases on GitHub!
Before installing Dify, make sure your machine meets the following minimum system requirements:
- CPU >= 2 cores
- RAM >= 4 GB
The easiest way to start the Dify server is to run our docker-compose.yml file. Before running the installation command, make sure that Docker and Docker Compose are installed on your machine:
```bash
cd docker
docker compose up -d
```
After running, you can access the Dify dashboard in your browser at http://localhost/install and begin the initialization process.
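If the dashboard does not come up, the commands below help narrow down what failed; the `api` service name follows the shipped docker-compose.yml and may differ if you have customized it:

```bash
# All services should report as running/healthy.
docker compose ps
# Tail the backend logs for startup errors.
docker compose logs -f api
```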
Big thanks to @BorisPolonsky for providing a Helm Chart, which allows Dify to be deployed on Kubernetes. See https://github.com/BorisPolonsky/dify-helm for deployment information.
If you need to customize the configuration, please refer to the comments in our docker-compose.yml file and set the environment variables manually. After making the changes, run `docker compose up -d` again. You can find the full list of environment variables in our docs.
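For example, one variable worth changing on any real deployment is `SECRET_KEY`, which is used to sign sessions and is commented in docker-compose.yml; the snippet below assumes the default compose layout:

```bash
# Generate a strong value, paste it into the SECRET_KEY entries of the
# api/worker environment sections in docker-compose.yml, then recreate
# the stack so the change takes effect.
openssl rand -base64 42
docker compose up -d
```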
We welcome contributions to Dify in all forms: submitting code, filing issues, proposing new ideas, or sharing the interesting and useful AI applications you have built on Dify. We also welcome you to share Dify at events, conferences, and on social media.
- Roadmap and Feedback. Best for: sharing feedback and checking out our feature roadmap.
- GitHub Issues. Best for: bugs and errors you encounter using Dify.AI; see also the Contribution Guide.
- Email Support. Best for: questions you have about using Dify.AI.
- Discord. Best for: sharing your applications and hanging out with the community.
- Twitter. Best for: sharing your applications and hanging out with the community.
- Business License. Best for: business inquiries of licensing Dify.AI for commercial use.
To protect your privacy, please avoid posting security issues on GitHub. Instead, send your questions to [email protected] and we will provide you with a more detailed answer.
This repository is available under the Dify Open Source License, which is essentially Apache 2.0 with a few additional restrictions.