FrootAI — AmpliFAI your AI Ecosystem


Play 13

Fine-Tuning Workflow

Complexity: High · 🔧 Skeleton

End-to-end fine-tuning with data prep, LoRA training, evaluation, and deployment.

Curate training data, configure LoRA parameters, train on Azure ML with GPU compute, evaluate with automated metrics, then deploy the fine-tuned model. MLflow tracks experiments. The pipeline handles data validation, train/val splitting, hyperparameter sweeps, and model versioning.
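The data-prep stage described above can be sketched in a few lines. This is an illustrative sketch, not the pipeline's actual code: the JSONL-style record schema (`prompt`/`completion`), function names, and the 90/10 split ratio are all assumptions.

```python
# Hypothetical sketch of the data validation + train/val split steps.
import random

REQUIRED_KEYS = {"prompt", "completion"}  # assumed record schema

def validate(records):
    """Drop records missing required keys or containing empty values."""
    return [
        rec for rec in records
        if REQUIRED_KEYS <= rec.keys() and all(rec[k] for k in REQUIRED_KEYS)
    ]

def train_val_split(records, val_ratio=0.1, seed=42):
    """Deterministic shuffle-and-split so runs are reproducible."""
    shuffled = records[:]
    random.Random(seed).shuffle(shuffled)
    n_val = max(1, int(len(shuffled) * val_ratio))
    return shuffled[n_val:], shuffled[:n_val]

data = [{"prompt": f"q{i}", "completion": f"a{i}"} for i in range(100)]
train, val = train_val_split(validate(data))
print(len(train), len(val))  # 90 10
```

Seeding the shuffle keeps the split stable across pipeline re-runs, which matters when comparing hyperparameter sweeps against the same validation set.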

Architecture Pattern

LoRA fine-tuning, dataset curation, evaluation, MLOps

Azure Services

Azure ML Workspace · GPU Compute · Storage · MLflow · Azure OpenAI (base models)

DevKit (.github Agentic OS)

  • agent.md — root orchestrator with builder→reviewer→tuner handoffs
  • 3 agents — Fine-Tuning Builder (gpt-4o), Reviewer (gpt-4o-mini), Tuner (gpt-4o-mini)
  • 3 skills — deploy (111 lines), evaluate (100 lines), tune (120 lines)
  • 4 prompts — /deploy, /test, /review, /evaluate with agent routing
  • .vscode/mcp.json — FrootAI MCP with Azure ML + HuggingFace inputs + envFile
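A `.vscode/mcp.json` along the lines described above might look like the following. This is a hedged sketch: the `frootai-mcp` package name, input IDs, and environment variable names are assumptions; only the overall VS Code MCP config shape (`inputs`, `servers`, `envFile`) follows the editor's documented format.

```json
{
  "inputs": [
    {
      "id": "azure-ml-workspace",
      "type": "promptString",
      "description": "Azure ML workspace name"
    },
    {
      "id": "hf-token",
      "type": "promptString",
      "password": true,
      "description": "Hugging Face access token"
    }
  ],
  "servers": {
    "frootai": {
      "command": "npx",
      "args": ["-y", "frootai-mcp"],
      "envFile": "${workspaceFolder}/.env",
      "env": {
        "AZURE_ML_WORKSPACE": "${input:azure-ml-workspace}",
        "HF_TOKEN": "${input:hf-token}"
      }
    }
  }
}
```

Using `inputs` with `password: true` keeps the HuggingFace token out of the checked-in config, while `envFile` supplies any remaining local secrets.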

TuneKit (AI Config)

  • config/training.json — LoRA rank, learning rate, epochs, batch size
  • config/dataset.json — train/val split, preprocessing
  • config/evaluation.json — eval metrics, thresholds
  • evaluation/eval.py — automated scoring
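As a sketch of what `evaluation/eval.py` might do, the snippet below averages a metric over prediction/reference pairs and gates on the thresholds from `config/evaluation.json`. The exact-match scorer and metric name are illustrative assumptions, not the script's actual contents.

```python
# Hypothetical automated-scoring sketch: score outputs, then pass/fail
# against configured thresholds.
import json

def exact_match(prediction: str, reference: str) -> float:
    """1.0 if the trimmed strings match exactly, else 0.0 (assumed metric)."""
    return 1.0 if prediction.strip() == reference.strip() else 0.0

def evaluate(pairs, thresholds):
    """Average each metric over (prediction, reference) pairs and compare
    against its configured threshold."""
    score = sum(exact_match(p, r) for p, r in pairs) / len(pairs)
    results = {"exact_match": score}
    passed = all(results[m] >= t for m, t in thresholds.items())
    return results, passed

# Thresholds as they might appear in config/evaluation.json (assumed shape).
thresholds = json.loads('{"exact_match": 0.8}')
pairs = [("yes", "yes"), ("no", "no"), ("maybe", "no"), ("yes", "yes ")]
results, passed = evaluate(pairs, thresholds)
print(results, passed)  # 3 of 4 match -> 0.75, below the 0.8 gate
```

Returning a boolean gate alongside the raw scores lets the pipeline block deployment automatically when a fine-tuned checkpoint regresses below threshold.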

Tuning Parameters

LoRA rank (8–64) · Learning rate · Epochs · Batch size · Eval metric thresholds
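To give a sense of scale for the LoRA rank range: for a d-by-k weight matrix, LoRA trains two low-rank factors totalling r·(d + k) parameters instead of the full d·k. The layer shape below is illustrative, not specific to this pipeline.

```python
# Back-of-envelope: trainable parameters per adapted layer at each rank.
def lora_params(d: int, k: int, r: int) -> int:
    """Parameters in the rank-r factors A (r x k) and B (d x r)."""
    return r * (d + k)

d = k = 4096  # e.g. one attention projection in a ~7B-parameter model
full = d * k
for r in (8, 16, 64):
    frac = lora_params(d, k, r) / full
    print(f"rank {r:2d}: {lora_params(d, k, r):,} trainable ({frac:.2%} of full)")
```

Even at the top of the 8–64 range, the adapter trains only a few percent of the layer's parameters, which is why rank is the primary cost/quality knob in the sweep.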

Estimated Cost

  • Dev/Test: $200–400/mo
  • Production: $1.5K–5K/mo (training)