AI DAGGR Safety Scanner
Scan content for NSFW material, toxicity, and policy violations before publishing.
API Endpoints
/scan
- image (filepath, optional): Image to scan for NSFW content
- caption (str, optional): Text to scan for toxicity
- threshold (float, optional): Score at or above which content is flagged, in the range 0.0-1.0 (default: 0.5)
Returns: a dict with the keys safe, nsfw_score, toxicity_score, safety_score, and reason. A usage sketch follows below.
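The following is a minimal sketch of calling /scan over HTTP from Python. The base URL, the multipart field names, and the JSON response handling are assumptions made for illustration; adjust them to match how the scanner is actually deployed.

```python
# Hypothetical client for the /scan endpoint using the requests library.
# BASE_URL and the exact request format are assumptions, not a documented API.
import requests

BASE_URL = "http://localhost:7860"  # assumed host; replace with your deployment


def scan(image_path=None, caption=None, threshold=0.5):
    """Send an image and/or caption to /scan and return the result dict."""
    data = {"threshold": str(threshold)}
    if caption is not None:
        data["caption"] = caption
    files = {}
    if image_path is not None:
        files["image"] = open(image_path, "rb")
    try:
        resp = requests.post(f"{BASE_URL}/scan", data=data,
                             files=files or None, timeout=30)
        resp.raise_for_status()
        return resp.json()
    finally:
        for f in files.values():
            f.close()


if __name__ == "__main__":
    result = scan(image_path="photo.jpg", caption="check this out")
    if result["safe"]:
        print("OK to publish")
    else:
        print("Flagged:", result["reason"],
              "| nsfw:", result["nsfw_score"],
              "| toxicity:", result["toxicity_score"])
```

Either parameter can be omitted: sending only a caption skips the NSFW image check, and sending only an image skips the toxicity check.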