
Aligning ML Models with Human Feedback

(Figure: Stages of model alignment)

What this repository contains:

This repository contains a collection of tutorials, best practices, and references for developers, data scientists, and machine learning professionals of all skill levels.

This repo may be of interest to you if:

If you have any questions about this repo, or need a hand:

Establishing a Supervised Model Baseline

In this step, we collect labeled text data to train an initial Large Language Model (LLM), focusing on task-specific performance improvements. This stage involves gathering instructions and responses to adapt the base model to a broad range of tasks, enhancing its ability to generate accurate and contextually relevant responses. Typically this step involves:

  • Selecting a baseline foundation model (FM) that performs reasonably well on general tasks (such as GPT-2 or GPT-J)
  • Generating a dataset of prompt-response pairs. You can label the dataset manually or generate it synthetically, as shown in the provided example
  • Performing supervised fine-tuning of the model (see the sketch below)
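
A minimal sketch of the supervised fine-tuning step, assuming a GPT-2 checkpoint from Hugging Face transformers; the prompt-response pairs below are hypothetical placeholders standing in for your labeled dataset:

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Hypothetical prompt/response pairs; in practice these come from your labeling project.
pairs = [
    ("Summarize: The cat sat on the mat.", "A cat rested on a mat."),
    ("Translate to French: Hello.", "Bonjour."),
]
texts = [f"{prompt}\n{response}{tokenizer.eos_token}" for prompt, response in pairs]
batch = tokenizer(texts, padding=True, truncation=True, max_length=256, return_tensors="pt")

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
model.train()
for epoch in range(3):
    # Causal LM objective: labels mirror the inputs, with padding masked out as -100.
    labels = batch["input_ids"].masked_fill(batch["attention_mask"] == 0, -100)
    out = model(input_ids=batch["input_ids"],
                attention_mask=batch["attention_mask"],
                labels=labels)
    out.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    print(f"epoch {epoch}: loss={out.loss.item():.3f}")
```

For clarity this sketch trains on the full prompt-plus-response text; in practice you may want to mask the prompt tokens out of the loss so the model is only penalized on the response.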

Gathering and Incorporating Human Feedback

This stage involves collecting comparison data to establish human preferences for the responses generated by the supervised model. By ranking multiple responses based on quality, we can train a reward model that effectively captures human preferences. This reward model plays a crucial role in reinforcement learning, optimizing the performance of the fine-tuned foundational model.

Gathering Human Feedback Tutorial - This Jupyter Notebook tutorial will guide you through the process of collecting comparison data, establishing human preferences, and incorporating this feedback into the reward model training.
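
As an illustration of how a reward model can be trained from comparison data, here is a minimal pairwise-loss sketch. It assumes a GPT-2 backbone with a scalar head; the comparison example is a hypothetical placeholder, not the tutorial's data:

```python
import torch
from transformers import GPT2TokenizerFast, GPT2ForSequenceClassification

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token
reward_model = GPT2ForSequenceClassification.from_pretrained("gpt2", num_labels=1)
reward_model.config.pad_token_id = tokenizer.pad_token_id

# Hypothetical comparison data: for each prompt, a human-preferred ("chosen")
# response and a less-preferred ("rejected") one.
comparisons = [
    {"prompt": "Explain RLHF briefly. ",
     "chosen": "RLHF fine-tunes a model using human preference signals.",
     "rejected": "RLHF is a thing."},
]

optimizer = torch.optim.AdamW(reward_model.parameters(), lr=1e-5)
reward_model.train()
for ex in comparisons:
    chosen = tokenizer(ex["prompt"] + ex["chosen"], return_tensors="pt")
    rejected = tokenizer(ex["prompt"] + ex["rejected"], return_tensors="pt")
    r_chosen = reward_model(**chosen).logits.squeeze(-1)
    r_rejected = reward_model(**rejected).logits.squeeze(-1)
    # Bradley-Terry style pairwise loss: push the chosen reward above the rejected one.
    loss = -torch.nn.functional.logsigmoid(r_chosen - r_rejected).mean()
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```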

Training and Assessing the Final Model with Reinforcement Learning

The training stage is a challenging process, and the final model assessment is a critical component in evaluating your model's quality. It is essential to determine whether the model adheres to the provided instructions, avoids biases, and maintains a high standard of performance.
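
To make the optimization target concrete, the sketch below shows the KL-shaped reward commonly used in this PPO-style stage: the reward model's score for a generated response, minus a penalty for drifting away from the frozen supervised baseline. All tensors here are hypothetical placeholders standing in for real model outputs:

```python
import torch

def shaped_reward(rm_score: torch.Tensor,
                  policy_logprobs: torch.Tensor,
                  ref_logprobs: torch.Tensor,
                  kl_coef: float = 0.1) -> torch.Tensor:
    """reward = RM score - kl_coef * sum_t (log pi(a_t|s_t) - log pi_ref(a_t|s_t))."""
    kl_penalty = (policy_logprobs - ref_logprobs).sum(dim=-1)
    return rm_score - kl_coef * kl_penalty

# Example with dummy values for a single generated response of 5 tokens.
rm_score = torch.tensor([1.8])              # scalar score from the reward model
policy_logprobs = torch.randn(1, 5) - 2.0   # per-token log-probs under the tuned policy
ref_logprobs = torch.randn(1, 5) - 2.0      # per-token log-probs under the frozen SFT model
print(shaped_reward(rm_score, policy_logprobs, ref_logprobs))
```

The `kl_coef` weight trades off reward maximization against staying close to the supervised baseline; tuning it is part of the assessment work described above.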

About

Collection of links, tutorials, and best practices for collecting data and building an end-to-end RLHF system to fine-tune generative AI models.
