
The official implementation of "ICDPO: Effectively Borrowing Alignment Capability of Others via In-context Direct Preference Optimization".


ICDPO: Effectively Borrowing Alignment Capability of Others via In-context Direct Preference Optimization

Authors: Feifan Song, Yuxuan Fan, Xin Zhang, Peiyi Wang, Houfeng Wang

arXiv: Abstract / PDF

(Figure: model overview)

Large Language Models (LLMs) rely on Human Preference Alignment (HPA) to ensure the generation of safe content. Due to the heavy cost associated with fine-tuning, fine-tuning-free methods have emerged, typically modifying LLM decoding with external auxiliary methods. However, these methods do not essentially enhance the LLM itself. In this paper, we rethink the derivation procedures of DPO, based on which we conversely build an instant scorer using the states of the LLM before and after In-context Learning (ICL). Accordingly, we propose a novel approach called In-Context Direct Preference Optimization (ICDPO). It enables LLMs to borrow the HPA capabilities from superior LLMs with ICL, generating well-aligned responses as estimated by the aforementioned instant scorer, thereby enhancing the final performance. ICDPO can be further enhanced with a two-stage retriever and an upgraded scorer, both offering benefits. Extensive experiments show its effectiveness, particularly in outperforming two fine-tuning-free baselines, and it exhibits competitiveness with SFT + LoRA. We also conduct detailed analyses to offer comprehensive insights into ICDPO.
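For intuition, below is a minimal sketch of the contrastive "instant scorer" idea described above, written in Python with transformers. This is not the repository's actual implementation; the model path, helper names, and the exact prompt/demonstration formatting are placeholders, and tokenizer boundary effects are ignored for simplicity.

# Sketch only: score each candidate response by the change in its likelihood
# before vs. after the model is conditioned on ICL demonstrations.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "YOUR_MODEL_PATH"  # placeholder, as in exec.sh
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

@torch.no_grad()
def response_logprob(context: str, response: str) -> float:
    """Sum of token log-probabilities of `response` conditioned on `context`."""
    ctx_ids = tokenizer(context, return_tensors="pt").input_ids
    full_ids = tokenizer(context + response, return_tensors="pt").input_ids
    logits = model(full_ids).logits
    log_probs = torch.log_softmax(logits[:, :-1], dim=-1)
    targets = full_ids[:, 1:]
    token_lp = log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    # Keep only the log-probs of the response tokens (boundary is approximate).
    resp_start = ctx_ids.shape[1] - 1
    return token_lp[:, resp_start:].sum().item()

def icdpo_score(prompt: str, response: str, demonstrations: str) -> float:
    """Contrastive score: LM state after ICL (expert) minus before ICL (amateur)."""
    expert = response_logprob(demonstrations + prompt, response)
    amateur = response_logprob(prompt, response)
    return expert - amateur

def select_best(prompt: str, candidates: list, demonstrations: str) -> str:
    """Return the candidate with the highest contrastive score."""
    return max(candidates, key=lambda y: icdpo_score(prompt, y, demonstrations))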

Results

Main Results

(Figures: main results on HH; other main results)

GPT-4 Evaluation

(Figure: GPT-4 evaluation results)

Implementation

Package Requirements

python==3.9.13
torch==1.13.1
transformers==4.28.1
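One possible way to set up a matching environment (assuming conda and pip are available; adapt to your own toolchain):

conda create -n icdpo python=3.9.13
conda activate icdpo
pip install torch==1.13.1 transformers==4.28.1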

Usage

We provide a script, exec.sh, for a quick start. Before using it, you need to specify a few parameters. Below is a sample:

script=hh
id=1
task=hh
pos_mode=icl
neg_mode=base
generator=llama
retrieval=random
model_name_or_path=YOUR_MODEL_PATH

sh scripts/sample_generate_${script}.sh $id $task $pos_mode $neg_mode $generator $retrieval $model_name_or_path
sh scripts/evaluate_${script}.sh $id $task

Then, just run the script:

sh exec.sh

You can change the value of each slot and use a different id to try other settings.
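For example, substituting the sample values above into the two stages (with id=2 chosen purely for illustration; the accepted options for each slot depend on the scripts under scripts/), the direct invocations would expand to:

sh scripts/sample_generate_hh.sh 2 hh icl base llama random YOUR_MODEL_PATH
sh scripts/evaluate_hh.sh 2 hh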

⚠️ Caution

ICDPO is a research project and is not intended for other uses. The data involved in this work may inevitably contain sensitive, offensive, or misleading content; its presence does not represent our attitudes, and it is provided solely for research and should not be used or distributed outside of research contexts. We do not guarantee the safety of the generated content. Please use it at your own risk.

We are committed to establishing a more inclusive and ethically sound era of AI technology, which can be applied to legitimate needs and generate content that aligns with universally positive human values.

Citation

If this work is helpful to you, please cite our paper as:

@misc{song2024icdpo,
      title={ICDPO: Effectively Borrowing Alignment Capability of Others via In-context Direct Preference Optimization}, 
      author={Feifan Song and Yuxuan Fan and Xin Zhang and Peiyi Wang and Houfeng Wang},
      year={2024},
      eprint={2402.09320},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
