Instructions for using ReasoningShield/ReasoningShield-3B with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use ReasoningShield/ReasoningShield-3B with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="ReasoningShield/ReasoningShield-3B")

# Load model directly
from transformers import AutoModel

model = AutoModel.from_pretrained("ReasoningShield/ReasoningShield-3B", dtype="auto")
```
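Since ReasoningShield performs safety detection over reasoning traces via text generation, a classification call is just a generation call. Below is a minimal, illustrative sketch: the prompt is a placeholder, not the official ReasoningShield judge template (see the model card for the exact input format), and it assumes the checkpoint ships a chat template:

```python
# Illustrative sketch: judging a reasoning trace for safety via text
# generation. The prompt below is a placeholder, NOT the official
# ReasoningShield template; it also assumes the checkpoint has a chat template.
from transformers import pipeline

pipe = pipeline("text-generation", model="ReasoningShield/ReasoningShield-3B")

messages = [{
    "role": "user",
    "content": (
        "Query: <user question>\n"
        "Reasoning trace: <model reasoning>\n"
        "Classify the safety risk level of this reasoning trace."
    ),
}]

result = pipe(messages, max_new_tokens=128)
# With chat-style input, generated_text holds the conversation; the last
# message is the model's safety judgment.
print(result[0]["generated_text"][-1]["content"])
```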
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use ReasoningShield/ReasoningShield-3B with vLLM:
Install from pip and serve the model:
```bash
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "ReasoningShield/ReasoningShield-3B"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "ReasoningShield/ReasoningShield-3B",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
Use Docker
```bash
docker model run hf.co/ReasoningShield/ReasoningShield-3B
```
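Once the server is running, the same OpenAI-compatible endpoint can be called from Python. A minimal sketch using the openai client; the api_key is a dummy value, since vLLM only enforces it when launched with --api-key:

```python
# Minimal sketch: Python equivalent of the curl call above, using the
# OpenAI-compatible API exposed by the vLLM server on port 8000.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")  # dummy key

completion = client.completions.create(
    model="ReasoningShield/ReasoningShield-3B",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(completion.choices[0].text)
```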
- SGLang
How to use ReasoningShield/ReasoningShield-3B with SGLang:
Install from pip and serve the model:
```bash
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "ReasoningShield/ReasoningShield-3B" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "ReasoningShield/ReasoningShield-3B",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
Use Docker images
```bash
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "ReasoningShield/ReasoningShield-3B" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "ReasoningShield/ReasoningShield-3B",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
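The SGLang server speaks the same OpenAI-compatible protocol, so the curl call above maps directly to Python. A minimal sketch with requests, assuming a server from either setup is listening on port 30000:

```python
# Minimal sketch: Python equivalent of the curl call above against the
# SGLang server's OpenAI-compatible /v1/completions endpoint.
import requests

resp = requests.post(
    "http://localhost:30000/v1/completions",
    json={
        "model": "ReasoningShield/ReasoningShield-3B",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5,
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["text"])
```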
- Docker Model Runner
How to use ReasoningShield/ReasoningShield-3B with Docker Model Runner:
```bash
docker model run hf.co/ReasoningShield/ReasoningShield-3B
```
Update pipeline tag to text-classification and add paper link
#1 by nielsr
This PR improves the model card for ReasoningShield by:
- Updating the `pipeline_tag`: The `pipeline_tag` has been changed from `text-generation` to `text-classification`. This more accurately reflects the model's primary function of safety detection and moderation over reasoning traces, where it classifies text into predefined safety risk levels. This change will enhance the model's discoverability on the Hub under the correct task category (a sketch of the equivalent metadata edit follows this list).
- Adding a paper link: A direct link to the associated paper, ReasoningShield: Safety Detection over Reasoning Traces of Large Reasoning Models, has been added to the model card content for improved visibility and accessibility of the research.
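For reference, the equivalent metadata edit expressed with the huggingface_hub library; this is an illustrative sketch, not how the PR itself was made:

```python
# Minimal sketch (not how this PR was authored): applying the same
# pipeline_tag change programmatically with huggingface_hub.
from huggingface_hub import ModelCard

card = ModelCard.load("ReasoningShield/ReasoningShield-3B")
card.data.pipeline_tag = "text-classification"  # previously "text-generation"
# Pushing requires write access to the repository:
# card.push_to_hub("ReasoningShield/ReasoningShield-3B")
```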
Please review these changes.