Welcome to the LLM Models and RAG Hands-on Guide repository! This guide is designed for technical teams interested in developing basic conversational AI solutions using Retrieval-Augmented Generation (RAG).
This repository provides a comprehensive guide for building conversational AI systems using large language models (LLMs) and RAG techniques. The content combines theoretical knowledge with practical code implementations, making it suitable for those with a basic technical background.
This guide is aimed primarily at technical teams building basic conversational AI solutions with RAG. It offers an introduction to the technical fundamentals and helps anyone with a basic technical background get started in the AI domain, combining theoretical foundations with hands-on code implementations. Note that most of the content is compiled from various online resources, reflecting an extensive effort to curate and organize this information.
- Introduction
  - What is Conversational AI?
  - The Technology Behind Conversational AI
- LLM Basics
  - What is a large language model (LLM)?
  - How do LLMs work?
  - What are the Relations and Differences between LLMs and Transformers?
  - What are Pipelines in Transformers?
  - What are Hugging Face Transformers?
- Chains
  - What are chains?
  - Foundational chain types in LangChain
    - LLMChain
    - Creating an LLMChain
    - Sequential Chains
    - SimpleSequentialChain
    - SequentialChain
    - Transformation
- Prompt Engineering
  - What is Prompt Engineering?
- Embeddings
- Vector Stores
- Chunking
- Quantization
  - What is Quantization?
  - How does quantization work?
  - Hugging Face and bitsandbytes Usage
    - Loading a Model in 4-bit Quantization
    - Loading a Model in 8-bit Quantization
    - Changing the Compute Data Type
    - Using the NF4 Data Type
    - Nested Quantization for Memory Efficiency
    - Loading a Quantized Model from the Hub
  - Exploring Advanced Techniques and Configuration
- Temperature
- LangChain Memory
- Agents & Tools
- Walkthrough: Project Utilizing LangChain
- RAG
- Groq
- What is LlamaParse?
- Use Case 1
- Use Case 2
- Source Code
An introduction to the technology behind conversational AI, covering its fundamentals and applications.
Understand what LLMs are, how they work, and their role in conversational AI. This section also explores the differences between LLMs and transformers.
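As a rough intuition for how LLMs generate text, the toy sketch below (plain Python, with a hand-built bigram lookup table standing in for a neural network) runs the same autoregressive loop a real model does: repeatedly predict the most likely next token and append it to the sequence.

```python
# Toy illustration of autoregressive generation. A real LLM replaces this
# hand-built bigram table with a neural network scoring a huge vocabulary.
bigram_model = {
    "the": "cat",
    "cat": "sat",
    "sat": "on",
    "on": "the",
}

def generate(prompt_tokens, max_new_tokens=4):
    """Greedily pick the most likely next token until none is known."""
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        next_token = bigram_model.get(tokens[-1])
        if next_token is None:
            break
        tokens.append(next_token)
    return tokens

print(generate(["the"]))  # → ['the', 'cat', 'sat', 'on', 'the']
```

The key takeaway is that generation is one token at a time, each conditioned on everything generated so far; sampling strategies (temperature, top-k) only change how the next token is picked from the model's scores.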
Detailed explanation of transformers, including their pipelines and the Hugging Face library.
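For a first taste of pipelines, the `pipeline` API from Hugging Face `transformers` bundles tokenization, model inference, and post-processing into a single call. The checkpoint named below is the library's standard sentiment model; any compatible checkpoint works.

```python
from transformers import pipeline

# Build a sentiment-analysis pipeline; the checkpoint is downloaded
# from the Hugging Face Hub on first use.
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)
result = classifier("This hands-on guide is really helpful!")
print(result)  # e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```

Other task strings such as `"text-generation"` or `"summarization"` follow the same pattern, which is what makes pipelines a convenient entry point before dropping down to tokenizers and models directly.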
Learn about different types of prompts, prompt engineering techniques, and best practices for using the OpenAI API.
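To make the idea concrete, here is a minimal plain-Python sketch of a few-shot prompt template: instructions first, then worked examples, then the query for the model to complete. The task and example labels are hypothetical.

```python
def build_few_shot_prompt(task, examples, query):
    """Assemble a few-shot prompt: instructions, worked examples, then the query."""
    lines = [f"Task: {task}", ""]
    for text, label in examples:
        lines.append(f"Input: {text}")
        lines.append(f"Output: {label}")
        lines.append("")  # blank line separates examples
    lines.append(f"Input: {query}")
    lines.append("Output:")  # the model continues from here
    return "\n".join(lines)

prompt = build_few_shot_prompt(
    "Classify the sentiment as positive or negative.",
    [("I loved it", "positive"), ("Terrible experience", "negative")],
    "The docs were clear and concise",
)
print(prompt)
```

The resulting string can be sent as-is to a completion endpoint; the trailing `Output:` cues the model to answer in the same format as the examples.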
Explore the use of embeddings in LLMs, vector databases, and various chunking methods for document splitting.
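As one concrete chunking strategy, the sketch below splits a document into fixed-size character chunks with overlapping windows, the same idea behind LangChain's character-based text splitters. The sizes are illustrative; in practice they are tuned to the embedding model's context limits.

```python
def chunk_text(text, chunk_size=200, overlap=50):
    """Split text into fixed-size character chunks with overlapping windows,
    so content cut at a boundary still appears intact in a neighbor chunk."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

# A 500-character synthetic document with a recognizable pattern.
doc = "".join(chr(65 + i % 26) for i in range(500))
chunks = chunk_text(doc, chunk_size=200, overlap=50)
print(len(chunks))  # → 4
```

Each chunk would then be embedded and stored in a vector database; the overlap trades some storage for better recall when a relevant passage straddles a chunk boundary.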
Implementation details for the first use case, including benchmark results and performance analysis. Refer to the usecase-1 directory for code and documentation.
A detailed walkthrough of integrating actions with a chatbot, such as fetching weather information. See the usecase-2 directory for more information.
This project is licensed under the MIT License. See the LICENSE file for details.
Contributions to enrich the content are welcome!
For any questions or feedback, please feel free to contact me directly @zahaby.
Happy coding!