Fine-tuned variant of Qwen3-4B-Instruct-2507, optimized for tool-use and function call generation via reinforcement learning with composite reward signals.
| Property | Value |
|---|---|
| Base Model | Qwen/Qwen3-4B-Instruct-2507 |
| Training Method | GRPO (Group Relative Policy Optimization) |
| Specialization | Tool-use, function calling |
| License | Apache 2.0 |
The model is trained with three complementary reward functions, which are combined into the composite reward signal used during GRPO. Training hyperparameters:
| Parameter | Value |
|---|---|
| Optimizer | AdamW |
| Learning rate | 5e-6 |
| Scheduler | Cosine with min LR (min_lr_rate=0.1) |
| Generations per prompt | 4 |
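As a rough illustration of the training method (not the actual training code), GRPO replaces a learned value baseline with a group-relative one: for each prompt, several completions are sampled, each is scored by the reward functions, and rewards are normalized within the group. A minimal sketch:

```python
from statistics import mean, stdev

def group_relative_advantages(rewards):
    """Normalize rewards within one prompt's group of completions.

    Each completion's advantage is its reward minus the group mean,
    divided by the group standard deviation (GRPO's critic-free baseline).
    """
    mu = mean(rewards)
    sigma = stdev(rewards) if len(rewards) > 1 else 0.0
    if sigma == 0.0:
        # All completions scored identically: no relative learning signal.
        return [0.0 for _ in rewards]
    return [(r - mu) / sigma for r in rewards]

# With "Generations per prompt = 4", one group holds 4 reward scores:
advs = group_relative_advantages([1.0, 0.5, 0.0, 0.5])
```

Completions scoring above the group mean get positive advantages (their tokens are reinforced); those below get negative ones.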
| Model | Overall Accuracy |
|---|---|
| Qwen3-4B-I-1209 (this model) | 0.7233 |
| Qwen3-4B-Instruct-2507 (base) | 0.6350 |
| Salesforce/Llama-xLAM-2-8b-fc-r | 0.5792 |
Additional benchmark results will be added as evaluation continues.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "beyoru/Qwen3-4B-I-1209",
    torch_dtype="auto",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("beyoru/Qwen3-4B-I-1209")
```
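Qwen3-style models typically emit function calls wrapped in `<tool_call>...</tool_call>` tags containing a JSON object with `name` and `arguments` fields. A minimal, hypothetical parser for that convention (verify the exact output format against this model's chat template before relying on it):

```python
import json
import re

# Matches each <tool_call>...</tool_call> block; DOTALL lets the JSON span lines.
TOOL_CALL_RE = re.compile(r"<tool_call>\s*(.*?)\s*</tool_call>", re.DOTALL)

def extract_tool_calls(text):
    """Parse tool-call JSON objects from a model completion.

    Returns a list of dicts (e.g. {"name": ..., "arguments": ...});
    blocks with malformed JSON are skipped rather than raising.
    """
    calls = []
    for match in TOOL_CALL_RE.finditer(text):
        try:
            calls.append(json.loads(match.group(1)))
        except json.JSONDecodeError:
            continue
    return calls

# Example completion containing one tool call (illustrative output only):
completion = (
    "Let me check the weather.\n"
    "<tool_call>\n"
    '{"name": "get_weather", "arguments": {"city": "Hanoi"}}\n'
    "</tool_call>"
)
calls = extract_tool_calls(completion)
# calls[0]["name"] == "get_weather"
```

When generating, pass your tool schemas through `tokenizer.apply_chat_template(..., tools=...)` so the model sees the available functions in the format its template expects.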
Feedback on model quality, edge cases, and real-world performance is welcome. Open an issue or reach out via the links below.
```bibtex
@misc{qwen3-4b-i-1209,
  title        = {Qwen3-4B-I-1209: Fine-tuned Qwen3-4B-Instruct with GRPO for Tool-Use and Function Calling},
  author       = {Beyoru},
  year         = {2025},
  howpublished = {\url{https://huggingface.co/beyoru/Qwen3-4B-I-1209}}
}
```