RLHF (Beta)

Overview

Reinforcement Learning from Human Feedback (RLHF) is a method whereby a language model is optimized using human preference feedback. Training approaches include, but are not limited to:

  • Proximal Policy Optimization (PPO) (not yet supported in axolotl)
  • Direct Preference Optimization (DPO)
  • Identity Preference Optimization (IPO)

RLHF using Axolotl

Important

This is a BETA feature and many features are not fully implemented. You are encouraged to open new PRs to improve the integration and functionality.

The RL training methods are implemented in trl and wrapped via axolotl. Below are examples showing how you can use different preference datasets to train models that use ChatML.

DPO

```yaml
rl: dpo
datasets:
  - path: Intel/orca_dpo_pairs
    split: train
    type: chatml.intel
  - path: argilla/ultrafeedback-binarized-preferences
    split: train
    type: chatml.argilla
```

IPO

```yaml
rl: ipo
```
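
In trl, IPO is implemented as a variant of the DPO loss, so it consumes the same preference-pair datasets as DPO. A fuller sketch, assuming the Intel/orca_dpo_pairs dataset from the DPO example above is also suitable here:

```yaml
rl: ipo
datasets:
  - path: Intel/orca_dpo_pairs
    split: train
    type: chatml.intel
```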

Using local dataset files

```yaml
datasets:
  - ds_type: json
    data_files:
      - orca_rlhf.jsonl
    split: train
    type: chatml.intel
```
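
Each line of the JSONL file is expected to hold one preference pair matching the dataset type. A minimal sketch of a record for `chatml.intel`, with field names following Intel/orca_dpo_pairs and made-up values for illustration:

```json
{"system": "You are a helpful assistant.", "question": "What is the capital of France?", "chosen": "The capital of France is Paris.", "rejected": "France is in Europe."}
```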

TRL auto-unwrap for PEFT

TRL supports auto-unwrapping PEFT models, so a separate reference model does not need to be loaded, which reduces VRAM usage. This is on by default. To turn it off, pass the following config:

```yaml
# load a separate reference model when training with an adapter
rl_adapter_ref_model: true
```
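
For context, a minimal sketch of how this flag sits alongside an adapter configuration; the LoRA keys are standard axolotl options, and their values here are only illustrative:

```yaml
rl: dpo
adapter: lora
lora_r: 16
lora_alpha: 32
# disable TRL's auto-unwrap: load an explicit frozen reference model
# instead of reusing the adapter's frozen base weights
rl_adapter_ref_model: true
```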