Stars
KoLLaVA: Korean Large Language-and-Vision Assistant (feat. LLaVA)
General technology for enabling AI capabilities w/ LLMs and MLLMs
✨✨Latest Advances on Multimodal Large Language Models
An Open-Ended Embodied Agent with Large Language Models
[TLLM'23] PandaGPT: One Model To Instruction-Follow Them All
Large language models (LLMs) made easy: EasyLM is a one-stop solution for pre-training, fine-tuning, evaluating, and serving LLMs in JAX/Flax.
Data and code for NeurIPS 2022 Paper "Learn to Explain: Multimodal Reasoning via Thought Chains for Science Question Answering".
A collection of public Korean instruction datasets for training language models.
An open collection of implementation tips, tricks and resources for training large language models
Code and documentation to train Stanford's Alpaca models, and generate the data.
Official implementation for "Multimodal Chain-of-Thought Reasoning in Language Models" (stay tuned and more will be updated)
Run large language models on a single GPU in throughput-oriented scenarios.
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
Easily turn large sets of image URLs into an image dataset. Can download, resize, and package 100M URLs in 20h on one machine.
Power CLI and Workflow manager for LLMs (core package)
Recent Advances in Vision and Language Pre-training (VLP)
Multimodal AI Story Teller, built with Stable Diffusion, GPT, and neural text-to-speech
This repo curates ChatGPT prompts to help you use ChatGPT more effectively.
This is the implementation of the paper AdaMix: Mixture-of-Adaptations for Parameter-efficient Model Tuning (https://arxiv.org/abs/2205.12410).
Pecab: Pure python Korean morpheme analyzer based on Mecab
Implementation of 🦩 Flamingo, DeepMind's state-of-the-art few-shot visual question answering attention network, in PyTorch
Implementation of CoCa ("Contrastive Captioners are Image-Text Foundation Models") in PyTorch
A trend starts from "Chain of Thought Prompting Elicits Reasoning in Large Language Models".
SRT (Super Rapid Train, https://etk.srail.kr/) wrapper for Python