# FAI MCP Integration Guide

44 MCP tools across 8 categories — the bridge between AI models and the FAI knowledge base.
## What Is MCP?
The Model Context Protocol (MCP) is an open standard for connecting AI models to external tools and data sources. Think of it as "USB-C for AI" — a universal plug that lets any MCP-compatible client (GitHub Copilot, Claude Desktop, Cursor, Azure AI Foundry) call the same set of tools without custom integration code.
FrootAI's MCP server exposes 44 tools that give AI models direct access to the FROOT knowledge base, solution plays, architecture patterns, FAI Engine protocol wiring, scaffold generation, plugin marketplace, and evaluation pipelines. Instead of pasting knowledge into prompts manually, AI models call MCP tools to retrieve exactly the context they need.
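Under the hood, each tool call is a JSON-RPC 2.0 message. As a sketch, a client invoking search_knowledge would send something like the following (the tools/call method comes from the MCP specification; the argument names follow the parameter table later in this guide):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "search_knowledge",
    "arguments": { "query": "RAG chunking strategy", "max_results": 3 }
  }
}
```

The server replies with a result.content array of text blocks, which the client feeds back into the model's context.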
## How FrootAI Uses MCP
The FAI MCP server is the runtime bridge between AI models and the FrootAI ecosystem. When an agent needs architecture guidance, it calls get_architecture_pattern. When it needs to validate a config, it calls validate_config. The MCP server handles:
- Knowledge retrieval — Search and fetch from 17 FROOT modules
- Play discovery — Find matching solution plays by description
- Config validation — Check openai.json and guardrails.json against best practices
- Cost estimation — Calculate Azure costs for any play at dev or prod scale
- Model comparison — Side-by-side model recommendations for specific use cases
- Evaluation — Run quality scores against guardrail thresholds
## 44 Tools in 8 Categories
| Category | Tools | Count | Purpose |
|---|---|---|---|
| Static Knowledge | get_module, list_modules, lookup_term, search_knowledge, get_froot_overview, get_model_catalog | 6 | Read FROOT modules, glossary, model catalog |
| Live Architecture | get_architecture_pattern, compare_models, get_azure_pricing, fetch_azure_docs | 4 | Architecture guidance, model comparison, pricing |
| Agent Chain | agent_build, agent_review, agent_tune | 3 | Builder/Reviewer/Tuner workflow |
| Ecosystem | list_community_plays, semantic_search_plays, compare_plays, estimate_cost, generate_architecture_diagram, run_evaluation, validate_config | 7 | Play discovery, cost, eval, validation |
| Developer | embedding_playground, fetch_external_mcp | 2 | Learning tools and external MCP discovery |
| FAI Engine | wire_play, inspect_wiring, validate_manifest, get_play_detail, list_primitives, evaluate_quality | 6 | Protocol wiring, primitive inspection, quality gates |
| Scaffold & Create | scaffold_play, create_primitive, smart_scaffold | 3 | Generate plays and primitives with FAI Protocol |
| Marketplace | marketplace_search, install_plugin, uninstall_plugin, compose_plugins, + 9 more | 13 | Plugin discovery, install, compose, validate |
## Installation
The FAI MCP server is available as an npm package, Python package, and Docker image:
```bash
# Node.js (recommended)
npx frootai-mcp@latest

# Python
pip install frootai-mcp

# Docker
docker run -i ghcr.io/frootai/frootai-mcp

# Verify installation
npx frootai-mcp --version
npx frootai-mcp --list-tools
```

## Client Configuration
Configure the FAI MCP server in your preferred AI client:
### GitHub Copilot (VS Code)
```json
{
  "servers": {
    "frootai": {
      "command": "npx",
      "args": ["frootai-mcp@latest"],
      "env": {}
    }
  }
}
```

### Claude Desktop
```json
{
  "mcpServers": {
    "frootai": {
      "command": "npx",
      "args": ["frootai-mcp@latest"]
    }
  }
}
```

### Cursor
```json
{
  "mcpServers": {
    "frootai": {
      "command": "npx",
      "args": ["frootai-mcp@latest"]
    }
  }
}
```

### Azure AI Foundry
In Azure AI Foundry, add the MCP server as a connected tool in your agent configuration. Use the Docker image for cloud-hosted deployments:
```text
# Add as tool connection in AI Foundry portal
# Image: ghcr.io/frootai/frootai-mcp:latest
# Transport: stdio
```

## Per-Play MCP Configuration
Each solution play can customize the MCP server by passing a play ID as an environment variable. This scopes the server's responses to the specific play's context — knowledge modules, WAF pillars, and infrastructure:
```json
{
  "servers": {
    "frootai": {
      "command": "npx",
      "args": ["frootai-mcp@latest"],
      "env": {
        "FAI_PLAY": "01-enterprise-rag",
        "FAI_WAF": "security,reliability,cost-optimization",
        "FAI_KNOWLEDGE": "F1,F2,R2,O4"
      }
    }
  }
}
```

When FAI_PLAY is set, tools like search_knowledge prioritize the play's knowledge modules, validate_config uses play-specific thresholds, and estimate_cost defaults to the play's Azure services.
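How that scoping might work internally can be sketched as a filter over the FAI_KNOWLEDGE list. This is illustrative only: the module shape and the comma-separated parsing are assumptions based on the config above, not the server's actual implementation.

```javascript
// Sketch: restrict knowledge modules to the IDs named in FAI_KNOWLEDGE.
// Module IDs (F1, R2, ...) mirror the per-play config above; the real
// server's data structures are not documented here.
function scopeModules(allModules, env = process.env) {
  const scoped = (env.FAI_KNOWLEDGE || "")
    .split(",")
    .map((id) => id.trim())
    .filter(Boolean);
  if (scoped.length === 0) return allModules; // no play scope: return everything
  return allModules.filter((m) => scoped.includes(m.id));
}

// With FAI_KNOWLEDGE="F1,R2", only those modules survive.
const modules = [{ id: "F1" }, { id: "F2" }, { id: "R2" }];
const result = scopeModules(modules, { FAI_KNOWLEDGE: "F1,R2" });
// result → [{ id: "F1" }, { id: "R2" }]
```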
## Tool Parameter Reference
Key tools and their parameters:
| Tool | Required Params | Optional Params | Returns |
|---|---|---|---|
| get_module | module_id | section | Full module content or specific section |
| search_knowledge | query | max_results | Matching sections across all modules |
| compare_models | useCase | priority | Side-by-side model comparison |
| estimate_cost | play | scale | Itemized monthly Azure cost |
| validate_config | config_type, config_content | play | Validation results with warnings |
| run_evaluation | scores | thresholds, play | Pass/fail per metric |
| semantic_search_plays | query | top_k | Ranked play matches with confidence |
| wire_play | — | playId, manifestPath | Wiring report: primitives connected, context injected, duration |
| scaffold_play | name | description, model, wafPillars, dryRun | 24+ files created with FAI Protocol auto-wired |
| inspect_wiring | — | playId | Dependency graph: agents → instructions → skills → hooks |
| evaluate_quality | scores | thresholds, play | Pass/fail per metric against play guardrails |
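The run_evaluation and evaluate_quality contract above can be sketched as a simple threshold check. This is illustrative: it treats every threshold as a minimum score, which is an assumption (a real evaluator may treat a metric like safety as a violation ceiling instead).

```javascript
// Sketch: per-metric pass/fail against guardrail thresholds.
// Assumes every threshold is a score floor, which may not match
// the FAI server's actual semantics for every metric.
function runEvaluation(scores, thresholds) {
  return Object.fromEntries(
    Object.entries(thresholds).map(([metric, min]) => [
      metric,
      { score: scores[metric], threshold: min, pass: scores[metric] >= min },
    ])
  );
}

const report = runEvaluation(
  { groundedness: 0.97, coherence: 0.88 },
  { groundedness: 0.95, coherence: 0.90 }
);
// report.groundedness.pass → true, report.coherence.pass → false
```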
## Building Custom MCP Tools
Extend the FAI MCP server by adding custom tools. Each tool is a function registered with the MCP runtime:
```javascript
// Custom MCP tool template
module.exports = {
  name: "my_custom_tool",
  description: "Describe what this tool does",
  parameters: {
    type: "object",
    properties: {
      query: {
        type: "string",
        description: "The input query"
      }
    },
    required: ["query"]
  },
  handler: async ({ query }) => {
    // Your tool logic here
    const result = await processQuery(query);
    return {
      content: [
        {
          type: "text",
          text: JSON.stringify(result, null, 2)
        }
      ]
    };
  }
};
```

## Troubleshooting MCP Connections
Common issues when setting up the FAI MCP server:
| Symptom | Cause | Fix |
|---|---|---|
| Tools not appearing in Copilot | MCP server not started | Check .vscode/mcp.json exists and reload VS Code |
| Connection refused | Port conflict or npx cache | Run npx frootai-mcp@latest manually to verify |
| Stale responses | Old MCP server version | Clear npx cache: npx clear-npx-cache |
| Play-scoped tools return generic data | FAI_PLAY env var not set | Add FAI_PLAY to mcp.json env block |
## External MCP Discovery
The fetch_external_mcp tool searches public MCP registries for servers that complement the FAI ecosystem — GitHub, Jira, Slack, database, and other integrations. Combine FAI's architecture knowledge with external data sources for end-to-end workflows.
```text
// In Copilot Chat:
// "Find MCP servers for database integration"
// The fetch_external_mcp tool searches public registries
// and returns compatible MCP servers:
//   - @modelcontextprotocol/server-postgres
//   - @modelcontextprotocol/server-sqlite
//   - mcp-server-mongodb
//   - mcp-server-redis
```

## What Is Wiring? The FAI Protocol
Every solution play has a fai-manifest.json — a binding contract that declares which agents, instructions, skills, hooks, knowledge modules, and quality thresholds belong to the play. When you call wire_play, the FAI Engine:
1. Loads the manifest and resolves all file paths
2. Builds shared context — injects knowledge modules + WAF pillars into every primitive
3. Wires the graph — connects agents → instructions → skills → hooks
4. Creates quality gates — guardrail thresholds from the manifest
Think of it as docker compose up but for AI primitives. One manifest, one command, everything connected.
```json
{
  "play": "01-enterprise-rag",
  "version": "1.0.0",
  "context": {
    "knowledge": ["R2-RAG-Architecture", "O3-MCP-Tools-Functions"],
    "waf": ["security", "reliability", "cost-optimization"],
    "scope": "enterprise-rag-qa"
  },
  "primitives": {
    "agents": ["./agent.md"],
    "instructions": ["./instructions.md"],
    "skills": ["./.github/skills/deploy-enterprise-rag/"],
    "hooks": ["../../hooks/frootai-secrets-scanner/"],
    "guardrails": {
      "groundedness": 0.95,
      "coherence": 0.90,
      "safety": 0
    }
  }
}
```
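The wiring steps above can be sketched as a resolver over the manifest. This is a minimal sketch: the field names follow the manifest example, but the report shape and everything else here is an assumption, not the FAI Engine's actual output.

```javascript
// Sketch: turn a fai-manifest.json object into a wiring report,
// mirroring the four wire_play steps described above.
function wirePlay(manifest) {
  const { context, primitives } = manifest;
  return {
    play: manifest.play,
    // shared context injected into every primitive
    sharedContext: { knowledge: context.knowledge, waf: context.waf },
    // wiring order: agents → instructions → skills → hooks
    graph: [
      ...primitives.agents,
      ...primitives.instructions,
      ...primitives.skills,
      ...primitives.hooks,
    ],
    // guardrail thresholds become quality gates
    qualityGates: primitives.guardrails,
  };
}

const report = wirePlay({
  play: "01-enterprise-rag",
  context: { knowledge: ["R2-RAG-Architecture"], waf: ["security"] },
  primitives: {
    agents: ["./agent.md"],
    instructions: ["./instructions.md"],
    skills: ["./.github/skills/deploy-enterprise-rag/"],
    hooks: ["../../hooks/frootai-secrets-scanner/"],
    guardrails: { groundedness: 0.95 },
  },
});
// report.graph lists the four primitives in wiring order
```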