FrootAI — AmpliFAI your AI Ecosystem


Play 75

Exam Generation Engine

Difficulty: Medium · Status: Ready

Auto-generate exams with difficulty calibration, rubric generation, and anti-cheating variation.

Automated exam generation system that creates assessments from curriculum materials with precise difficulty calibration. Azure OpenAI generates questions across Bloom's taxonomy levels, produces detailed rubrics and answer keys, and creates anti-cheating question variations. Blob Storage manages curriculum documents and exam banks, Cosmos DB stores question metadata and difficulty analytics, and Functions orchestrate the generation pipeline with batch processing support.

Architecture Pattern

Curriculum → question generation → difficulty calibration → rubric + answer key → anti-cheat variations
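The five stages above can be sketched as plain functions. This is a minimal illustration only: the function names, the `Question` model, and the stub generation logic are assumptions, standing in for the Azure OpenAI calls and Functions orchestration the play actually uses.

```python
from dataclasses import dataclass, field

# Hypothetical data model for one generated exam item; the real
# Cosmos DB schema is not part of this play's public description.
@dataclass
class Question:
    text: str
    bloom_level: str            # e.g. "remember" .. "create"
    difficulty: float           # calibrated 0.0 (easy) .. 1.0 (hard)
    rubric: str = ""
    answer_key: str = ""
    variants: list = field(default_factory=list)

def generate_questions(curriculum: str, n: int) -> list:
    """Stage 1: draft n questions from curriculum text (LLM call elided)."""
    return [Question(text=f"Q{i} about: {curriculum[:30]}",
                     bloom_level="understand", difficulty=0.5)
            for i in range(n)]

def calibrate_difficulty(qs: list, target: float) -> list:
    """Stage 2: nudge each item's difficulty toward the target curve."""
    for q in qs:
        q.difficulty = round((q.difficulty + target) / 2, 2)
    return qs

def attach_rubric_and_key(qs: list) -> list:
    """Stage 3: produce a rubric and answer key per item."""
    for q in qs:
        q.rubric = f"Award full marks for a correct answer to: {q.text}"
        q.answer_key = "model answer (elided)"
    return qs

def add_anti_cheat_variants(qs: list, k: int) -> list:
    """Stage 4: create k surface-level rewordings per item."""
    for q in qs:
        q.variants = [f"{q.text} (variant {i + 1})" for i in range(k)]
    return qs

exam = add_anti_cheat_variants(
    attach_rubric_and_key(
        calibrate_difficulty(
            generate_questions("Photosynthesis unit", 3), target=0.6)),
    k=2)
print(len(exam), exam[0].difficulty, len(exam[0].variants))  # -> 3 0.55 2
```

In production each stage would be a separate Function invocation, so curriculum batches can fan out and failed stages can retry independently.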

Azure Services

Azure OpenAI · Azure Blob Storage · Azure Cosmos DB · Azure Functions

DevKit (.github Agentic OS)

  • agent.md — root orchestrator with builder→reviewer→tuner handoffs
  • 3 agents — Exam Builder (gpt-4o), Reviewer (gpt-4o-mini), Tuner (gpt-4o-mini)
  • 3 skills — deploy (203 lines), evaluate (129 lines), tune (245 lines)
  • 4 prompts — /deploy, /test, /review, /evaluate with agent routing
  • .vscode/mcp.json — FrootAI MCP with OpenAI key input + envFile
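The builder→reviewer→tuner handoff in agent.md can be pictured as a simple orchestration loop. Everything below is a hedged sketch: the function names, the issue-list contract between reviewer and tuner, and the stub logic are assumptions, not the actual DevKit agents.

```python
# Sketch of a builder -> reviewer -> tuner handoff loop; the "revised"
# marker used to signal a passing draft is an assumption for illustration.

def build(request: str) -> str:
    """Exam Builder (gpt-4o in the DevKit): produce a first draft."""
    return f"draft exam for: {request}"

def review(draft: str) -> list:
    """Reviewer (gpt-4o-mini): return a list of issues; empty means pass."""
    return [] if "revised" in draft else ["difficulty too uniform"]

def tune(draft: str, issues: list) -> str:
    """Tuner (gpt-4o-mini): apply a fix for each issue found."""
    return f"revised {draft} (fixed: {', '.join(issues)})"

def orchestrate(request: str, max_rounds: int = 3) -> str:
    """Root orchestrator: loop builder -> reviewer -> tuner until clean."""
    draft = build(request)
    for _ in range(max_rounds):
        issues = review(draft)
        if not issues:
            break
        draft = tune(draft, issues)
    return draft

print(orchestrate("Algebra midterm"))
```

Splitting the roles this way lets the cheaper gpt-4o-mini models handle the high-volume review and tuning passes while gpt-4o is reserved for drafting.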

TuneKit (AI Config)

  • config/openai.json — question generation and rubric prompts
  • config/exam.json — difficulty distribution, question types, Bloom's levels
  • config/guardrails.json — fairness, bias detection, content appropriateness
  • evaluation/eval.py — Difficulty accuracy >85%, Rubric coverage >90%
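The two eval gates above (difficulty accuracy > 85%, rubric coverage > 90%) could be computed as below. This is an assumption about what eval.py measures, with made-up sample data; the real evaluation harness and its data format are not shown in this play.

```python
def difficulty_accuracy(predicted, observed, tolerance=0.1):
    """Fraction of items whose calibrated difficulty lands within
    `tolerance` of the empirically observed difficulty."""
    hits = sum(1 for p, o in zip(predicted, observed) if abs(p - o) <= tolerance)
    return hits / len(predicted)

def rubric_coverage(rubric_points, answer_key_points):
    """Fraction of answer-key points addressed by at least one rubric point."""
    covered = sum(1 for point in answer_key_points if point in rubric_points)
    return covered / len(answer_key_points)

# Illustrative sample data, not real calibration results.
predicted = [0.55, 0.60, 0.72, 0.30]
observed  = [0.50, 0.58, 0.90, 0.33]
acc = difficulty_accuracy(predicted, observed)

key_points = {"defines photosynthesis", "names reactants", "names products"}
rubric = key_points | {"cites chlorophyll"}
cov = rubric_coverage(rubric, key_points)

print(f"difficulty accuracy {acc:.0%}, rubric coverage {cov:.0%}")
# -> difficulty accuracy 75%, rubric coverage 100%  (75% fails the >85% gate)
```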

Tuning Parameters

  • Difficulty distribution curve
  • Bloom's level targeting
  • Question type ratios
  • Anti-cheat variation count
  • Rubric granularity
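One plausible shape for these tuning parameters, plus how a difficulty distribution curve turns into per-exam question buckets. The key names and values here are assumptions for illustration, not the shipped config/exam.json.

```python
import random

# Hypothetical tuning config; names and values are assumptions.
TUNING = {
    "difficulty_distribution": {"easy": 0.3, "medium": 0.5, "hard": 0.2},
    "bloom_targets": ["remember", "understand", "apply", "analyze"],
    "question_type_ratios": {"mcq": 0.6, "short_answer": 0.3, "essay": 0.1},
    "anti_cheat_variants": 3,
    "rubric_granularity": "per_criterion",   # vs. "holistic"
}

def sample_difficulties(n, dist, seed=0):
    """Draw n difficulty buckets following the configured distribution."""
    rng = random.Random(seed)        # seeded so exam plans are reproducible
    buckets = list(dist)
    weights = [dist[b] for b in buckets]
    return rng.choices(buckets, weights=weights, k=n)

plan = sample_difficulties(10, TUNING["difficulty_distribution"])
print({b: plan.count(b) for b in TUNING["difficulty_distribution"]})
```

Seeding the sampler means two anti-cheat variants of the same exam can share an identical difficulty plan while varying question surface forms.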

Estimated Cost

Dev/Test

$40–100/mo

Production

$500–2K/mo