[NeurIPS 2023] LLM-Pruner: On the Structural Pruning of Large Language Models. Supports Llama-3/3.1, Llama-2, LLaMA, BLOOM, Vicuna, Baichuan, TinyLlama, etc.
kani (カニ) is a highly hackable microframework for chat-based language models with tool use/function calling. (NLP-OSS @ EMNLP 2023)
Introducing Project Zephyrine: a plug-and-play, GPU-accelerated local graphical user interface for elevating your interaction with language models.
AI-powered email management system that automates email categorization, sentiment analysis, summarization, and response generation using NLP models and machine learning, enhancing communication efficiency.
This repository contains the supplementary material / appendix to accompany the paper "Is Temperature the Creativity Parameter of Large Language Models?" by Max Peeperkorn, Tom Kouwenhoven, Dan Brown, and Anna Jordanous.
Retrieval-Augmented Generation (RAG) to analyze and respond to queries about President Biden's 2023 State of the Union (SOTU) Address
Fine-tuning LLaMA 2 for toxicity classification using a balanced Kaggle dataset, with a focus on overcoming class imbalance, optimizing computational efficiency through PEFT and QLoRA, and achieving high accuracy in detecting toxic content across multiple classes.
[NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.
Repository dedicated to an undergraduate thesis project.
Fine-tuning LLMs on a conversational medical dataset.
This package simplifies your interaction with various GPT models, removing the need for API tokens or other access methods.
InsightSolver: Colab notebooks for exploring and solving operational issues using deep learning, machine learning, and related models.
A cybersecurity chatbot built using open-source LLMs namely Falcon-7B and Llama-2-7b-chat-hf. The models are fine-tuned on a custom question-answer dataset compiled from the OWASP Top 10 and CVEs from NVD.
The lexical simplification baseline for the MLSP Shared Task at BEA 2024.
📚 Local PDF-Integrated Chat Bot: Secure Conversations and Document Assistance with LLM-Powered Privacy
Chinese LLaMA & Alpaca large language models with local CPU/GPU training and deployment (Chinese LLaMA & Alpaca LLMs).