reinforcement-learning-from-human-feedback

An easy-to-use, scalable, and high-performance RLHF framework based on Ray (supports PPO, GRPO, REINFORCE++, vLLM, dynamic sampling, and async agentic RL)

Python
8060
13 days ago

A simulation framework for RLHF and alternatives. Develop your RLHF method without collecting human data.

Python
825
1 year ago

CodeUltraFeedback: aligning large language models to coding preferences (TOSEM 2025)

Python
71
1 year ago

A repo for RLHF training and BoN over LLMs, with support for reward model ensembles.

Python
44
9 months ago

Official code for the ICML 2024 Spotlight paper "RIME: Robust Preference-based Reinforcement Learning with Noisy Preferences"

Python
34
1 year ago

Annotated tutorial of the Hugging Face TRL repo for reinforcement learning from human feedback, connecting the equations of PPO and GAE to the lines of code in the PyTorch implementation

Jupyter Notebook
20
6 months ago

RLHF-Blender: A Configurable Interactive Interface for Learning from Diverse Human Feedback

Python
13
2 days ago

[AAMAS 2025] Privacy-preserving and personalized RLHF with convergence guarantees. The code contains experiments for training multiple instances of GPT-2 for personalized, sentiment-aligned text generation.

Python
10
6 months ago

[TSMC] Ask-AC: An Initiative Advisor-in-the-Loop Actor-Critic Framework

Python
8
1 year ago

This repository contains the implementation of a Reinforcement Learning from Human Feedback (RLHF) system using custom datasets. The project uses the trlX library to train a preference model that integrates human feedback directly into the optimization of language models.

Python
5
1 year ago

LMRax is a JAX-based framework for training transformer language models with reinforcement learning, including reward-model training.

Python
2
3 years ago

Code for the Bachelor's thesis "The Human Factor: Addressing Diversity in Reinforcement Learning from Human Feedback".

Python
0
1 year ago