
# reinforcement-learning-from-human-feedback

An Easy-to-use, Scalable and High-performance RLHF Framework (70B+ PPO Full Tuning & Iterative DPO & LoRA & RingAttention & RFT)

Python
6333
15 hours ago
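The framework above supports both PPO and iterative DPO training. As a rough illustration of the DPO objective (not this framework's actual code), a per-pair loss can be sketched from precomputed sequence log-probabilities; the function name and `beta` value are illustrative assumptions:

```python
import math

def dpo_loss(policy_logp_chosen, policy_logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """Per-pair Direct Preference Optimization loss (illustrative sketch).

    Inputs are total log-probabilities of the chosen/rejected responses
    under the policy being trained and a frozen reference model.
    """
    # Implicit reward margin: how much more the policy prefers the chosen
    # response over the rejected one, relative to the reference model.
    margin = (policy_logp_chosen - ref_logp_chosen) - \
             (policy_logp_rejected - ref_logp_rejected)
    # -log sigmoid(beta * margin): shrinks as the policy widens the margin.
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))

# A policy that has learned to prefer the chosen response incurs lower loss
# than one that is indifferent between the two responses.
print(dpo_loss(-10.0, -14.0, -12.0, -12.0) < dpo_loss(-12.0, -12.0, -12.0, -12.0))
```

With a zero margin the loss is exactly `log 2`, the value for an indifferent policy.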

A simulation framework for RLHF and alternatives. Develop your RLHF method without collecting human data.

Python
805
10 months ago

A repo for RLHF training and BoN over LLMs, with support for reward model ensembles.

Python
42
3 months ago
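Best-of-n (BoN) with a reward-model ensemble, as in the entry above, means scoring each candidate response with several reward models and keeping the highest-scoring one. A minimal sketch, assuming reward models are callables mapping text to a float (the mean-aggregation rule and toy scorers are assumptions, not this repo's code):

```python
def best_of_n(candidates, reward_models):
    """Pick the candidate with the highest mean score across a
    reward-model ensemble (illustrative sketch)."""
    def ensemble_score(text):
        return sum(rm(text) for rm in reward_models) / len(reward_models)
    return max(candidates, key=ensemble_score)

# Toy "reward models": one favors longer answers, one penalizes filler.
rms = [lambda t: float(len(t)), lambda t: -float(t.count("um"))]
print(best_of_n(["um well", "a clear detailed answer", "ok"], rms))
# → a clear detailed answer
```

Averaging over an ensemble is one way to blunt reward hacking against any single reward model; other aggregations (e.g., the minimum) are more conservative.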

Official code for the ICML 2024 Spotlight paper "RIME: Robust Preference-based Reinforcement Learning with Noisy Preferences"

Python
28
6 months ago

An annotated tutorial of the Hugging Face TRL repo for reinforcement learning from human feedback, connecting the equations of PPO and GAE to the lines of code in the PyTorch implementation

Jupyter Notebook
18
15 days ago
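The tutorial above maps the GAE equations onto TRL's code. As a rough standalone illustration (not the repo's implementation), the backward recursion A_t = δ_t + γλA_{t+1}, with δ_t = r_t + γV(s_{t+1}) − V(s_t), can be sketched as:

```python
def gae_advantages(rewards, values, gamma=0.99, lam=0.95, last_value=0.0):
    """Generalized Advantage Estimation via the backward recursion
    A_t = delta_t + gamma*lam*A_{t+1}, where
    delta_t = r_t + gamma*V(s_{t+1}) - V(s_t).  Illustrative sketch."""
    values = list(values) + [last_value]  # bootstrap value after the last step
    advantages = [0.0] * len(rewards)
    running = 0.0
    for t in reversed(range(len(rewards))):
        delta = rewards[t] + gamma * values[t + 1] - values[t]
        running = delta + gamma * lam * running
        advantages[t] = running
    return advantages

# With lam=0 the estimate reduces to the one-step TD error.
print(gae_advantages([1.0, 1.0], [0.5, 0.5], gamma=1.0, lam=0.0))  # → [1.0, 0.5]
```

Setting `lam=1` instead recovers the full return-to-go minus the value baseline, so λ trades off bias (low λ) against variance (high λ).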

RLHF-Blender: A Configurable Interactive Interface for Learning from Diverse Human Feedback

Python
12
2 days ago

[TSMC] Ask-AC: An Initiative Advisor-in-the-Loop Actor-Critic Framework

Python
8
10 months ago

[AAMAS 2025] Privacy-preserving and personalized RLHF, with convergence guarantees. The code contains experiments for training multiple instances of GPT-2 for personalized sentiment-aligned text generation.

Python
6
3 days ago

This repository contains the implementation of a Reinforcement Learning from Human Feedback (RLHF) system using custom datasets. The project uses the trlX library to train a preference model that integrates human feedback directly into the optimization of language models.

Python
3
8 months ago

LMRax is a framework built on JAX to train transformer language models with reinforcement learning, along with reward model training.

Python
2
2 years ago

Code for the Bachelor thesis "The Human Factor: Addressing Diversity in Reinforcement Learning from Human Feedback"

Python
0
8 months ago