

Play 10

Content Moderation

Low · 🔧 Skeleton

Filter harmful content with Azure AI Content Safety and an Azure API Management (APIM) gateway.

Every AI response passes through Azure Content Safety for severity scoring across hate, violence, self-harm, and sexual categories. APIM acts as the gateway, enforcing rate limits and routing. Custom blocklists catch domain-specific terms. Azure Functions handle async processing for high-volume scenarios.
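
A minimal sketch of the per-response safety check, assuming the public Content Safety text:analyze REST operation (api-version 2023-10-01). The environment variable names and the block-at-severity-4 policy are illustrative choices, not part of the play:

```python
# Score one AI response with Content Safety and decide allow/block.
# ENDPOINT/KEY env var names and BLOCK_AT are assumptions.
import os
import requests

ENDPOINT = os.environ["CONTENT_SAFETY_ENDPOINT"]  # e.g. https://<resource>.cognitiveservices.azure.com
KEY = os.environ["CONTENT_SAFETY_KEY"]
BLOCK_AT = 4  # assumed policy: block anything scored at severity 4 or above

def moderate(text: str, blocklists: list[str] | None = None) -> dict:
    """Return per-category severities and an overall block decision."""
    resp = requests.post(
        f"{ENDPOINT}/contentsafety/text:analyze",
        params={"api-version": "2023-10-01"},
        headers={"Ocp-Apim-Subscription-Key": KEY},
        json={
            "text": text,
            "categories": ["Hate", "Violence", "SelfHarm", "Sexual"],
            "blocklistNames": blocklists or [],
        },
        timeout=10,
    )
    resp.raise_for_status()
    body = resp.json()
    # Each category comes back with a severity; any blocklist hit is a hard block.
    severities = {c["category"]: c["severity"] for c in body["categoriesAnalysis"]}
    blocked = bool(body.get("blocklistsMatch")) or any(
        s >= BLOCK_AT for s in severities.values()
    )
    return {"blocked": blocked, "severities": severities}
```

In the gateway flow, APIM forwards the model response to a function running this check before it reaches the caller.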

Architecture Pattern

Safety gateway, severity scoring, blocklists, custom categories
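
Domain-specific blocklists are provisioned once, then referenced by name at analysis time. The sketch below assumes the blocklist operations from the same 2023-10-01 REST surface; the list name and terms are placeholders:

```python
# Provision a domain-specific blocklist and attach terms to it.
# The list name and example terms are placeholders.
import os
import requests

ENDPOINT = os.environ["CONTENT_SAFETY_ENDPOINT"]
KEY = os.environ["CONTENT_SAFETY_KEY"]
HEADERS = {"Ocp-Apim-Subscription-Key": KEY}
PARAMS = {"api-version": "2023-10-01"}

def ensure_blocklist(name: str, description: str, terms: list[str]) -> None:
    base = f"{ENDPOINT}/contentsafety/text/blocklists/{name}"
    # Create (or update) the named blocklist.
    requests.patch(
        base, params=PARAMS, headers=HEADERS,
        json={"description": description}, timeout=10,
    ).raise_for_status()
    # Add or update the individual terms.
    requests.post(
        f"{base}:addOrUpdateBlocklistItems", params=PARAMS, headers=HEADERS,
        json={"blocklistItems": [{"text": t} for t in terms]}, timeout=10,
    ).raise_for_status()

ensure_blocklist("domain-terms", "Domain-specific banned phrases",
                 ["internal codename x", "legacy product name"])
```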

Azure Services

Content Safety · API Management · Azure Functions

DevKit (.github Agentic OS)

  • agent.md — root orchestrator with builder→reviewer→tuner handoffs
  • 3 agents — Content Mod Builder (gpt-4o), Reviewer (gpt-4o-mini), Tuner (gpt-4o-mini)
  • 3 skills — deploy (121 lines), evaluate (101 lines), tune (120 lines)
  • 4 prompts — /deploy, /test, /review, /evaluate with agent routing
  • .vscode/mcp.json — FrootAI MCP with Content Safety key + envFile

TuneKit (AI Config)

  • config/safety.json — severity levels, custom categories, blocklists (consumed in the sketch after this list)
  • config/guardrails.json — filtering rules, thresholds
  • evaluation/ — moderation test sets
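
A sketch of how a gateway function might consume config/safety.json; the schema shown in the comment is an assumption, since the play's repo defines the real one:

```python
# Load per-category block thresholds from config/safety.json and
# apply them to a scored response. The schema below is assumed:
# {
#   "blockAtSeverity": {"Hate": 2, "Violence": 4, "SelfHarm": 2, "Sexual": 4},
#   "blocklists": ["domain-terms"],
#   "customCategories": ["harassment-internal"]
# }
import json

with open("config/safety.json") as f:
    cfg = json.load(f)

def decide(severities: dict[str, int]) -> str:
    """Block if any category meets or exceeds its configured threshold."""
    thresholds = cfg["blockAtSeverity"]
    for category, severity in severities.items():
        if severity >= thresholds.get(category, 4):  # assumed default of 4
            return "block"
    return "allow"

print(decide({"Hate": 0, "Violence": 6, "SelfHarm": 0, "Sexual": 0}))  # -> block
```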

Tuning Parameters

Severity levels (0–6) · Custom categories · Blocklists · Confidence thresholds
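
Severity thresholds are the main tuning lever. A sketch of tuning the block cutoff against a labeled moderation test set from evaluation/; the file name and record shape are assumptions:

```python
# Sweep the severity cutoff (0-6) over pre-scored, labeled samples
# and pick the one that maximizes accuracy. Each record is assumed
# to look like: {"severities": {"Hate": 2, ...}, "should_block": true}
import json

def best_threshold(samples: list[dict]) -> int:
    best, best_acc = 4, 0.0
    for cutoff in range(0, 7):
        correct = sum(
            (max(s["severities"].values()) >= cutoff) == s["should_block"]
            for s in samples
        )
        acc = correct / len(samples)
        if acc > best_acc:
            best, best_acc = cutoff, acc
    return best

with open("evaluation/moderation_testset.json") as f:  # assumed file name
    samples = json.load(f)
print("block at severity >=", best_threshold(samples))
```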

Estimated Cost

  • Dev/Test: $50–100/mo
  • Production: $300–800/mo