
Awesome resources for in-context learning and prompt engineering: mastery of LLMs such as ChatGPT, GPT-3, and Flan-T5, kept up to date with cutting-edge developments.


An Open-Source Engineering Guide for Prompt-in-context-learning from EgoAlpha Lab.

๐Ÿ“ Papers | โšก๏ธ Playground | ๐Ÿ›  Prompt Engineering | ๐ŸŒ ChatGPT Prompt ๏ฝœ โ›ณ LLMs Usage Guide


โญ๏ธ Shining โญ๏ธ: This is fresh, daily-updated resources for in-context learning and prompt engineering. As Artificial General Intelligence (AGI) is approaching, letโ€™s take action and become a super learner so as to position ourselves at the forefront of this exciting era and strive for personal and professional greatness.

The resources include:

🎉Papers🎉: The latest papers on In-Context Learning, Prompt Engineering, Agents, and Foundation Models.

🎉Playground🎉: Large language models (LLMs) that enable prompt experimentation.

🎉Prompt Engineering🎉: Prompt techniques for leveraging large language models (see the brief few-shot sketch after this list).

🎉ChatGPT Prompt🎉: Prompt examples that can be applied in our work and daily lives.

🎉LLMs Usage Guide🎉: How to get started quickly with large language models by using LangChain.
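To give a concrete taste of what the Prompt Engineering and In-Context Learning resources cover, here is a minimal few-shot prompting sketch in Python. It is only a sketch: it assumes the `openai` package (v1 API) with an `OPENAI_API_KEY` in the environment, and the task, examples, and model name are illustrative rather than taken from this repository.

```python
# A minimal few-shot (in-context learning) sketch: the model infers the task
# from two worked examples in the prompt; no fine-tuning is involved.
# Assumptions: `pip install openai` and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

few_shot_prompt = """Classify the sentiment of each review as Positive or Negative.

Review: "The battery lasts all day." -> Positive
Review: "It broke after one week." -> Negative
Review: "Setup was effortless and fast." ->"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": few_shot_prompt}],
)
print(response.choices[0].message.content)  # expected: Positive
```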

In the future, there will likely be two types of people on Earth (perhaps even on Mars, but that's a question for Musk):

  • Those who enhance their abilities through the use of AIGC;
  • Those whose jobs are replaced by AI automation.

💎EgoAlpha: Hello, human 👤, are you ready?


📢 News

โ˜„๏ธ EgoAlpha releases the TrustGPT focuses on reasoning. Trust the GPT with the strongest reasoning abilities for authentic and reliable answers. You can click here or visit the Playgrounds directly to experience itใ€‚

👉 Complete news history 👈


📜 Papers

You can click directly on a title to jump to the corresponding PDF.

Survey

Motion meets Attention: Video Motion Prompts (2024.07.03)

Towards a Personal Health Large Language Model (2024.06.10)

Husky: A Unified, Open-Source Language Agent for Multi-Step Reasoning (2024.06.10)

Towards Lifelong Learning of Large Language Models: A Survey (2024.06.10)

Towards Semantic Equivalence of Tokenization in Multimodal LLM (2024.06.07)

LLMs Meet Multimodal Generation and Editing: A Survey (2024.05.29)

Tool Learning with Large Language Models: A Survey (2024.05.28)

When LLMs step into the 3D World: A Survey and Meta-Analysis of 3D Tasks via Multi-modal Large Language Models (2024.05.16)

Uncertainty Estimation and Quantification for LLMs: A Simple Supervised Approach (2024.04.24)

A Survey on the Memory Mechanism of Large Language Model based Agents (2024.04.21)

👉Complete paper list 🔗 for "Survey"👈

Prompt Engineering

Prompt Design

LLaRA: Supercharging Robot Learning Data for Vision-Language Policy (2024.06.28)

Dataset Size Recovery from LoRA Weights (2024.06.27)

Dual-Phase Accelerated Prompt Optimization (2024.06.19)

From RAGs to rich parameters: Probing how language models utilize external knowledge over parametric information for factual queries (2024.06.18)

VoCo-LLaMA: Towards Vision Compression with Large Language Models (2024.06.18)

LaMDA: Large Model Fine-Tuning via Spectrally Decomposed Low-Dimensional Adaptation (2024.06.18)

The Impact of Initialization on LoRA Finetuning Dynamics (2024.06.12)

An Empirical Study on Parameter-Efficient Fine-Tuning for MultiModal Large Language Models (2024.06.07)

Cross-Context Backdoor Attacks against Graph Prompt Learning (2024.05.28)

Yuan 2.0-M32: Mixture of Experts with Attention Router (2024.05.28)

👉Complete paper list 🔗 for "Prompt Design"👈

Chain of Thought

An Empirical Study on Parameter-Efficient Fine-Tuning for MultiModal Large Language Models (2024.06.07)

Cantor: Inspiring Multimodal Chain-of-Thought of MLLM (2024.04.24)

nicolay-r at SemEval-2024 Task 3: Using Flan-T5 for Reasoning Emotion Cause in Conversations with Chain-of-Thought on Emotion States (2024.04.04)

Visualization-of-Thought Elicits Spatial Reasoning in Large Language Models (2024.04.04)

Can Small Language Models Help Large Language Models Reason Better?: LM-Guided Chain-of-Thought (2024.04.04)

Visual CoT: Unleashing Chain-of-Thought Reasoning in Multi-Modal Language Models (2024.03.25)

A Chain-of-Thought Prompting Approach with LLMs for Evaluating Students' Formative Assessment Responses in Science (2024.03.21)

NavCoT: Boosting LLM-Based Vision-and-Language Navigation via Learning Disentangled Reasoning (2024.03.12)

ERA-CoT: Improving Chain-of-Thought through Entity Relationship Analysis (2024.03.11)

Bias-Augmented Consistency Training Reduces Biased Reasoning in Chain-of-Thought (2024.03.08)

👉Complete paper list 🔗 for "Chain of Thought"👈

In-context Learning

LaMDA: Large Model Fine-Tuning via Spectrally Decomposed Low-Dimensional Adaptation (2024.06.18)

The Impact of Initialization on LoRA Finetuning Dynamics (2024.06.12)

An Empirical Study on Parameter-Efficient Fine-Tuning for MultiModal Large Language Models (2024.06.07)

Leveraging Visual Tokens for Extended Text Contexts in Multi-Modal Learning (2024.06.04)

Learning to grok: Emergence of in-context learning and skill composition in modular arithmetic tasks (2024.06.04)

Long Context is Not Long at All: A Prospector of Long-Dependency Data for Large Language Models (2024.05.28)

Efficient Prompt Tuning by Multi-Space Projection and Prompt Fusion (2024.05.19)

MAML-en-LLM: Model Agnostic Meta-Training of LLMs for Improved In-Context Learning (2024.05.19)

Improving Diversity of Commonsense Generation by Large Language Models via In-Context Learning (2024.04.25)

Stronger Random Baselines for In-Context Learning (2024.04.19)

👉Complete paper list 🔗 for "In-context Learning"👈

Retrieval Augmented Generation

Retrieval-Augmented Mixture of LoRA Experts for Uploadable Machine Learning (2024.06.24)

Enhancing RAG Systems: A Survey of Optimization Strategies for Performance and Scalability (2024.06.04)

Enhancing Noise Robustness of Retrieval-Augmented Language Models with Adaptive Adversarial Training (2024.05.31)

Accelerating Inference of Retrieval-Augmented Generation via Sparse Context Selection (2024.05.25)

DocReLM: Mastering Document Retrieval with Language Model (2024.05.19)

UniRAG: Universal Retrieval Augmentation for Multi-Modal Large Language Models (2024.05.16)

ChatHuman: Language-driven 3D Human Understanding with Retrieval-Augmented Tool Reasoning (2024.05.07)

REASONS: A benchmark for REtrieval and Automated citationS Of scieNtific Sentences using Public and Proprietary LLMs (2024.05.03)

Superposition Prompting: Improving and Accelerating Retrieval-Augmented Generation (2024.04.10)

Untangle the KNOT: Interweaving Conflicting Knowledge and Reasoning Skills in Large Language Models (2024.04.04)

👉Complete paper list 🔗 for "Retrieval Augmented Generation"👈

Evaluation & Reliability

CELLO: Causal Evaluation of Large Vision-Language Models (2024.06.27)

PrExMe! Large Scale Prompt Exploration of Open Source LLMs for Machine Translation and Summarization Evaluation (2024.06.26)

Revisiting Referring Expression Comprehension Evaluation in the Era of Large Multimodal Models (2024.06.24)

OR-Bench: An Over-Refusal Benchmark for Large Language Models (2024.05.31)

TimeChara: Evaluating Point-in-Time Character Hallucination of Role-Playing Large Language Models (2024.05.28)

Subtle Biases Need Subtler Measures: Dual Metrics for Evaluating Representative and Affinity Bias in Large Language Models (2024.05.23)

HW-GPT-Bench: Hardware-Aware Architecture Benchmark for Language Models (2024.05.16)

Multimodal LLMs Struggle with Basic Visual Network Analysis: a VNA Benchmark (2024.05.10)

Vibe-Eval: A hard evaluation suite for measuring progress of multimodal language models (2024.05.03)

Causal Evaluation of Language Models (2024.05.01)

👉Complete paper list 🔗 for "Evaluation & Reliability"👈

Agent

Cooperative Multi-Agent Deep Reinforcement Learning Methods for UAV-aided Mobile Edge Computing Networks (2024.07.03)

Symbolic Learning Enables Self-Evolving Agents (2024.06.26)

Adversarial Attacks on Multimodal Agents (2024.06.18)

DigiRL: Training In-The-Wild Device-Control Agents with Autonomous Reinforcement Learning (2024.06.14)

Transforming Wearable Data into Health Insights using Large Language Model Agents (2024.06.10)

Neuromorphic dreaming: A pathway to efficient learning in artificial agents (2024.05.24)

Fine-Tuning Large Vision-Language Models as Decision-Making Agents via Reinforcement Learning (2024.05.16)

Learning Multi-Agent Communication from Graph Modeling Perspective (2024.05.14)

Smurfs: Leveraging Multiple Proficiency Agents with Context-Efficiency for Tool Planning (2024.05.09)

Unveiling Disparities in Web Task Handling Between Human and Web Agent (2024.05.07)

👉Complete paper list 🔗 for "Agent"👈

Multimodal Prompt

InternLM-XComposer-2.5: A Versatile Large Vision Language Model Supporting Long-Contextual Input and Output (2024.07.03)

LLaRA: Supercharging Robot Learning Data for Vision-Language Policy (2024.06.28)

Web2Code: A Large-scale Webpage-to-Code Dataset and Evaluation Framework for Multimodal LLMs (2024.06.28)

LLaVolta: Efficient Multi-modal Models via Stage-wise Visual Context Compression (2024.06.28)

Cambrian-1: A Fully Open, Vision-Centric Exploration of Multimodal LLMs (2024.06.24)

VoCo-LLaMA: Towards Vision Compression with Large Language Models (2024.06.18)

Beyond LLaVA-HD: Diving into High-Resolution Large Multimodal Models (2024.06.12)

An Empirical Study on Parameter-Efficient Fine-Tuning for MultiModal Large Language Models (2024.06.07)

Leveraging Visual Tokens for Extended Text Contexts in Multi-Modal Learning (2024.06.04)

DeCo: Decoupling Token Compression from Semantic Abstraction in Multimodal Large Language Models (2024.05.31)

👉Complete paper list 🔗 for "Multimodal Prompt"👈

Prompt Application

IncogniText: Privacy-enhancing Conditional Text Anonymization via LLM-based Private Attribute Randomization (2024.07.03)

Web2Code: A Large-scale Webpage-to-Code Dataset and Evaluation Framework for Multimodal LLMs (2024.06.28)

OMG-LLaVA: Bridging Image-level, Object-level, Pixel-level Reasoning and Understanding (2024.06.27)

Adversarial Search Engine Optimization for Large Language Models (2024.06.26)

VideoLLM-online: Online Video Large Language Model for Streaming Video (2024.06.17)

Regularizing Hidden States Enables Learning Generalizable Reward Model for LLMs (2024.06.14)

Autoregressive Model Beats Diffusion: Llama for Scalable Image Generation (2024.06.10)

Language models emulate certain cognitive profiles: An investigation of how predictability measures interact with individual differences (2024.06.07)

PaCE: Parsimonious Concept Engineering for Large Language Models (2024.06.06)

Yuan 2.0-M32: Mixture of Experts with Attention Router (2024.05.28)

👉Complete paper list 🔗 for "Prompt Application"👈

Foundation Models

TheoremLlama: Transforming General-Purpose LLMs into Lean4 Experts (2024.07.03)

Pedestrian 3D Shape Understanding for Person Re-Identification via Multi-View Learning (2024.07.01)

Token Erasure as a Footprint of Implicit Vocabulary Items in LLMs (2024.06.28)

OMG-LLaVA: Bridging Image-level, Object-level, Pixel-level Reasoning and Understanding (2024.06.27)

Fundamental Problems With Model Editing: How Should Rational Belief Revision Work in LLMs? (2024.06.27)

Efficient World Models with Context-Aware Tokenization (2024.06.27)

The Remarkable Robustness of LLMs: Stages of Inference? (2024.06.27)

ResumeAtlas: Revisiting Resume Classification with Large-Scale Datasets and Large Language Models (2024.06.26)

AITTI: Learning Adaptive Inclusive Token for Text-to-Image Generation (2024.06.18)

Unveiling Encoder-Free Vision-Language Models (2024.06.17)

👉Complete paper list 🔗 for "Foundation Models"👈

๐Ÿ‘จโ€๐Ÿ’ป LLM Usage

Large language models (LLMs) are a revolutionary technology shaping the development of our era. By building on LLMs, developers can create applications that were previously possible only in our imaginations. However, using LLMs often comes with technical barriers, and even at the introductory stage people may be intimidated by cutting-edge technology. Do you have questions like the following?

  • โ“ How can LLM be built using programming?
  • โ“ How can it be used and deployed in your own programs?

💡 A tutorial accessible to all audiences, not just computer science professionals, would provide detailed and comprehensive guidance for getting started quickly, with the ultimate goal of using LLMs flexibly and creatively to build the programs you envision. And now, just for you: the most detailed and comprehensive LangChain beginner's guide, sourced from the official LangChain website but with the content further adjusted, accompanied by thoroughly annotated code examples that walk all audiences through the code line by line.
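As a preview of the guide, here is a minimal LangChain "hello world". It is a sketch, assuming the `langchain-openai` and `langchain-core` packages are installed and an `OPENAI_API_KEY` is set; the model name and prompt are illustrative:

```python
# A minimal LangChain chain: a prompt template piped into a chat model.
# Assumptions: `pip install langchain-openai langchain-core` and
# OPENAI_API_KEY set in the environment; the model name is illustrative.
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate

llm = ChatOpenAI(model="gpt-4o-mini")

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    ("human", "{question}"),
])

# The | operator composes prompt -> model into a single runnable chain.
chain = prompt | llm

print(chain.invoke({"question": "What is in-context learning?"}).content)
```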

Click 👉here👈 to take a quick tour of getting started with LLMs.

โœ‰๏ธ Contact

This repo is maintained by EgoAlpha Lab. Questions and discussions are welcome via [email protected].

We are happy to engage in discussions with friends from academia and industry, and to explore the latest developments in prompt engineering and in-context learning together.

๐Ÿ™ Acknowledgements

Thanks to the PhD students from EgoAlpha Lab and the other contributors to this repo. We will keep improving the project and maintaining this community. We also express our sincere gratitude to the authors of the resources collected here; your efforts have broadened our horizons and enabled us to perceive a more wonderful world.