# Qwen3-4B-GRPO-KL-math-reasoning
This model is a fine-tuned version of Qwen3-4B, trained with GRPO (Group Relative Policy Optimization) and a KL penalty for mathematical reasoning. Training was done with PipelineRL.
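As a rough illustration of the objective above (a minimal sketch, not the PipelineRL training code), the per-token GRPO loss combines a PPO-style clipped surrogate with a KL penalty toward the reference model. The function and its inputs below are hypothetical; the `eps` and `kl_coef` defaults match the hyperparameter table further down, and the k3 estimator is one common choice for the KL term.

```python
import math

# Sketch of a per-token GRPO loss: PPO-style clipped surrogate plus a
# KL penalty toward the reference policy. Hypothetical inputs; eps and
# kl_coef default to the values in the hyperparameter table below.
def grpo_token_loss(logp, logp_old, logp_ref, advantage,
                    eps=0.02, kl_coef=0.001):
    ratio = math.exp(logp - logp_old)
    clipped = max(min(ratio, 1 + eps), 1 - eps)
    # Surrogate: pessimistic minimum of the unclipped and clipped objectives.
    policy_loss = -min(ratio * advantage, clipped * advantage)
    # k3 KL estimator toward the reference policy (always >= 0).
    log_r = logp_ref - logp
    kl = math.exp(log_r) - log_r - 1
    return policy_loss + kl_coef * kl

# Example: token slightly more likely under the new policy, positive advantage.
loss = grpo_token_loss(logp=-1.0, logp_old=-1.05, logp_ref=-1.1, advantage=0.75)
print(round(loss, 4))
```

With a positive advantage and a ratio above `1 + eps`, the clipped branch caps the update, which is what keeps each step close to the old policy.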
## Datasets

| Split | Datasets |
|---|---|
| Train | gsm8k_train, math_train |
| Test | gsm8k_test, math_500 |
## GRPO Hyperparameters

| Parameter | Value |
|---|---|
| Algorithm | GRPO (Group Relative Policy Optimization) |
| Policy Loss | ppo |
| KL Coefficient | 0.001 |
| Epsilon (clip) | 0.02 |
| Divide Advantage by Std | False |
| Filter Zero Advantage Groups | False |
| Rollouts per Problem | 16 |
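The group-relative advantage implied by the settings above can be sketched as follows: rewards for the 16 rollouts of one problem are centered by their group mean, and, per the table (`Divide Advantage by Std = False`), not divided by the group standard deviation. This is an illustrative sketch, not the training code.

```python
# Group-relative advantage, sketched under the settings in the table
# above: 16 rollouts per problem, advantages centered by the group mean
# but NOT divided by the group std (Divide Advantage by Std = False).
def group_advantages(rewards, divide_by_std=False):
    """rewards: scalar rewards for one group of rollouts of the same problem."""
    mean = sum(rewards) / len(rewards)
    adv = [r - mean for r in rewards]
    if divide_by_std:
        std = (sum((r - mean) ** 2 for r in rewards) / len(rewards)) ** 0.5
        adv = [a / (std + 1e-8) for a in adv]
    return adv

# Example: a group of 16 rollouts where 4 answers were correct (reward 1).
rewards = [1.0] * 4 + [0.0] * 12
adv = group_advantages(rewards)
print(adv[0], adv[-1])  # 0.75 -0.25
```

Note that a group where every rollout gets the same reward yields all-zero advantages; the table shows such groups were kept (`Filter Zero Advantage Groups = False`).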
## Training Configuration

| Parameter | Value |
|---|---|
| Base Model | Qwen/Qwen3-4B |
| Learning Rate | 1e-06 |
| LR Scheduler | cosine |
| Warmup Steps | 25 |
| Max Training Steps | 1500 |
| Micro Batch Size | 2 |
| Gradient Accumulation | 128 |
| Effective Batch Size | 256 |
| Sequence Length | 8192 |
| Gradient Clipping | 0.3 |
| Weight Decay | 0.01 |
| Optimizer | adamw_torch |
| Precision | bf16 |
| DeepSpeed | ZeRO Stage 3 |
Full training logs: [W&B run](https://wandb.ai/jaygala24-team/rl-post-training/runs/qwen3_4b_grpo_with_kl_2a1p1f_4xh100_197342_finetune_d0a43ea2)
## Usage

### Transformers

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Intermediate checkpoints live on branches, e.g. "step-0200",
# "step-0400", "step-0600"; pick one via `revision`.
model = AutoModelForCausalLM.from_pretrained(
    "jaygala24/Qwen3-4B-GRPO-KL-math-reasoning", revision="step-0200"
)
tokenizer = AutoTokenizer.from_pretrained(
    "jaygala24/Qwen3-4B-GRPO-KL-math-reasoning", revision="step-0200"
)

prompt = (
    "Please reason step by step, and put your final answer within \\boxed{}.\n\n"
    "What is the sum of 123 and 456?"
)
inputs = tokenizer(prompt, return_tensors="pt")
# do_sample=True so the temperature setting actually takes effect.
outputs = model.generate(**inputs, max_new_tokens=4096, temperature=0.7, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
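Since the prompt asks for the final answer inside `\boxed{...}`, a small helper (not part of the model card's tooling; shown only for convenience) can pull it out of the generated text:

```python
import re

# Extract the contents of the first \boxed{...} span in generated text.
# Handles one level of nested braces, which covers typical answers
# such as \boxed{\frac{1}{2}}.
def extract_boxed(text):
    m = re.search(r"\\boxed\{((?:[^{}]|\{[^{}]*\})*)\}", text)
    return m.group(1) if m else None

print(extract_boxed(r"... so the sum is \boxed{579}."))  # 579
```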
### vLLM

```python
from vllm import LLM, SamplingParams

# Pick a checkpoint branch via `revision`, e.g. "step-0200", "step-0400", "step-0600".
llm = LLM(model="jaygala24/Qwen3-4B-GRPO-KL-math-reasoning", revision="step-0200")
sampling_params = SamplingParams(temperature=0.7, max_tokens=4096)

prompt = (
    "Please reason step by step, and put your final answer within \\boxed{}.\n\n"
    "What is the sum of 123 and 456?"
)
outputs = llm.generate([prompt], sampling_params)
print(outputs[0].outputs[0].text)
```