CS336 Notes: Lecture 17 - Alignment, RL 2
RL foundations for LLMs: policy gradients, baselines for variance reduction, GRPO implementation details, and practical training considerations for reasoning models.