
Tutorials · 12 min read

CS336 Notes: Lecture 17 - Alignment, RL 2

RL foundations for LLMs: policy gradients, baselines for variance reduction, GRPO implementation details, and practical training considerations for reasoning models.

Tutorials · 16 min read

CS336 Notes: Lecture 16 - Alignment, RL 1

Advanced RL for alignment: PPO implementation details, GRPO as a simpler alternative, overoptimization risks, and case studies from DeepSeek R1, Kimi K1.5, and Qwen 3.

Tutorials · 10 min read

CS336 Notes: Lecture 15 - Alignment, SFT and RLHF

Post-training for helpful assistants: supervised fine-tuning on instructions, safety tuning, RLHF with preference data, PPO vs DPO, and the challenges of learning from human feedback.
