
FAI Skills Workshop

Build skills that auto-wire into solution plays with evaluation context.


Skills vs Instructions

Instructions set passive standards — they shape how Copilot writes code. Skills guide active procedures — they teach Copilot how to accomplish a multi-step task from start to finish. Think of instructions as the "coding style guide" and skills as the "how-to manual."

| Aspect | Instruction | Skill |
|--------|-------------|-------|
| Activation | Automatic (glob match) | Explicit (user invokes) |
| Behavior | Passive (shapes output) | Active (drives workflow) |
| Format | Single `.md` file | Folder with `SKILL.md` + assets |
| Assets | None | Templates, scripts, examples |
| Example | "Always use managed identity" | "Deploy Play 01 to Azure step by step" |

SKILL.md Folder Structure

Every skill lives in its own folder under .github/skills/. The folder name must match the name field in the SKILL.md frontmatter. All assets referenced by the skill go inside this folder.

Skill folder layout

```text
.github/skills/
└── deploy-01-enterprise-rag/
    ├── SKILL.md                 # The procedure (required)
    ├── templates/
    │   ├── main.bicep           # Infrastructure template
    │   └── parameters.json      # Parameters file
    ├── scripts/
    │   └── deploy.sh            # Deployment automation
    └── examples/
        └── sample-query.py      # Usage example
```
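The folder-name-must-match-frontmatter rule is easy to check mechanically. Below is a minimal sketch in Python, assuming frontmatter is the plain YAML block shown in this article; `check_skill_folder` is a hypothetical helper, not part of the FAI Engine:

```python
import re
from pathlib import Path

def check_skill_folder(skill_dir: Path) -> bool:
    """Verify that the folder name matches the `name` field in SKILL.md frontmatter."""
    text = (skill_dir / "SKILL.md").read_text(encoding="utf-8")
    # Capture everything between the opening and closing `---` markers
    block = re.search(r"^---\s*\n(.*?)\n---", text, re.DOTALL)
    if not block:
        raise ValueError("SKILL.md is missing YAML frontmatter")
    name = re.search(r"^name:\s*(\S+)", block.group(1), re.MULTILINE)
    return bool(name) and name.group(1) == skill_dir.name
```

A check like this is a natural candidate for a pre-commit hook or a `validate:primitives` step, since a mismatched name silently breaks skill discovery.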

Frontmatter Fields

| Field | Required | Rules | Example |
|-------|----------|-------|---------|
| `name` | Yes | kebab-case, must match folder name | `deploy-01-enterprise-rag` |
| `description` | Yes | 10–1024 characters | Deploy Play 01 Enterprise RAG to Azure |
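The rules above can be expressed as a small validator. A sketch, assuming `name` and `description` have already been parsed out of the frontmatter; `validate_frontmatter` is a hypothetical helper:

```python
import re

# kebab-case: lowercase alphanumeric segments joined by single hyphens
KEBAB = re.compile(r"^[a-z0-9]+(-[a-z0-9]+)*$")

def validate_frontmatter(name: str, description: str, folder: str) -> list[str]:
    """Return a list of rule violations; an empty list means the frontmatter is valid."""
    errors = []
    if not KEBAB.match(name):
        errors.append(f"name '{name}' is not kebab-case")
    if name != folder:
        errors.append(f"name '{name}' does not match folder '{folder}'")
    if not 10 <= len(description) <= 1024:
        errors.append("description must be 10-1024 characters")
    return errors
```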

Writing a Complete Skill

A well-structured skill has numbered steps, each with a clear action, expected output, and verification. Here's a full example:

.github/skills/deploy-01-enterprise-rag/SKILL.md

````markdown
---
name: deploy-01-enterprise-rag
description: "Deploy the Enterprise RAG play to Azure with full infrastructure"
---

## Prerequisites
- Azure CLI authenticated (`az login`)
- Subscription with Contributor role
- Resource group created: `rg-fai-rag-prod`

## Step 1: Validate the Manifest
Run the FAI Engine to verify all primitives and configs resolve:
```bash
node engine/index.js solution-plays/01-enterprise-rag/fai-manifest.json --status
```
Expected: All green checkmarks, status READY.

## Step 2: Deploy Infrastructure
Use the Bicep template to provision Azure services:
```bash
az deployment group create \
  --resource-group rg-fai-rag-prod \
  --template-file templates/main.bicep \
  --parameters @templates/parameters.json
```
Expected: Azure OpenAI, AI Search, Cosmos DB, Container Apps provisioned.

## Step 3: Configure Secrets
Store API endpoints in Key Vault (never in code):
```bash
az keyvault secret set --vault-name kv-fai-rag \
  --name "openai-endpoint" --value "$OPENAI_ENDPOINT"
```

## Step 4: Deploy Application
Push the container image and deploy:
```bash
az containerapp up --name ca-fai-rag \
  --resource-group rg-fai-rag-prod \
  --source . --env-vars "KEY_VAULT_URL=https://kv-fai-rag.vault.azure.net"
```

## Step 5: Run Evaluation
Verify quality against guardrails.json thresholds:
```bash
python -m evaluation.run --play 01 --dataset eval/golden-set.jsonl
```
Expected: groundedness ≥ 0.85, relevance ≥ 0.80, coherence ≥ 0.90.

## Verification
- [ ] All Azure services provisioned and healthy
- [ ] Application responding at the Container Apps endpoint
- [ ] Evaluation scores meet guardrail thresholds
- [ ] No secrets in code (hook scan clean)
````

Parameters Table Format

Skills that accept configurable parameters should document them in a Markdown table within the SKILL.md body:

Parameters table in SKILL.md

```markdown
## Parameters

| Parameter | Required | Default | Description |
|-----------|----------|---------|-------------|
| resource_group | Yes | — | Target Azure resource group |
| region | No | eastus2 | Azure region for deployment |
| sku | No | consumption | Container Apps pricing tier |
| min_replicas | No | 0 | Minimum container instances |
| max_replicas | No | 10 | Maximum container instances |
```
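Because the table follows a fixed four-column layout, tooling can read it back into structured data. A minimal sketch, assuming "—" marks "no default"; `parse_params_table` is a hypothetical helper, not an FAI Engine API:

```python
def parse_params_table(md: str) -> dict:
    """Parse a SKILL.md parameters table into {name: {required, default, description}}."""
    params = {}
    for line in md.splitlines():
        cells = [c.strip() for c in line.strip().strip("|").split("|")]
        # Skip anything that is not a four-column data row (header, separator, prose)
        if len(cells) != 4 or cells[0] == "Parameter" or set(cells[0]) <= {"-"}:
            continue
        name, required, default, desc = cells
        params[name] = {
            "required": required.lower() == "yes",
            "default": None if default == "—" else default,
            "description": desc,
        }
    return params
```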

Bundled Assets

Skills can include templates, scripts, and examples as bundled assets. These files live alongside SKILL.md in the skill folder and are referenced by relative path in the procedure steps.

- `templates/` — Bicep, ARM, Terraform, or Helm files for infrastructure
- `scripts/` — Shell or Python scripts for automation steps
- `examples/` — Sample code showing how to use the deployed solution
- `configs/` — Pre-tuned config files (`openai.json`, `guardrails.json` overrides)

Keep total asset size under 5 MB per skill. If assets are larger, host them externally and reference via URL.
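The 5 MB cap can be enforced with a simple walk over the skill folder. A sketch using the standard library; `asset_size_ok` is a hypothetical helper:

```python
from pathlib import Path

MAX_BYTES = 5 * 1024 * 1024  # 5 MB cap per skill, per the guidance above

def asset_size_ok(skill_dir: str) -> bool:
    """Sum all file sizes under the skill folder and compare against the cap."""
    total = sum(p.stat().st_size for p in Path(skill_dir).rglob("*") if p.is_file())
    return total <= MAX_BYTES
```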

Play Compatibility

Skills declare compatibility with solution plays through their naming convention. A skill named deploy-01-enterprise-rag targets Play 01. The FAI Engine uses this convention to auto-wire skills into the correct manifest.

Play-compatible skill naming

```shell
# Pattern: {action}-{play-number}-{play-slug}
deploy-01-enterprise-rag          # Deploy skill for Play 01
evaluate-01-enterprise-rag        # Evaluation skill for Play 01
tune-14-cost-gateway              # Tuning skill for Play 14
scaffold-22-multi-agent-swarm     # Scaffolding skill for Play 22

# Generic skills (not play-specific)
fai-play-initializer          # Works with any play
fai-bicep-deployer            # Generic infrastructure deployment
```

Standalone vs Wired Usage

Like all FAI primitives, skills work in both modes:

- **Standalone (LEGO block)** — Drop the skill folder into `.github/skills/` and invoke it directly in Copilot Chat. The skill runs with its own bundled context.
- **Wired (solution play)** — When listed in a play's `fai-manifest.json` under `primitives.skills`, the skill inherits the play's shared knowledge, WAF pillars, and guardrail thresholds automatically.

In wired mode, the skill's verification step can reference the play's evaluation thresholds — e.g., "verify groundedness ≥ 0.85" — because those values are injected by the FAI Engine from the manifest.
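The injection step might look like the following sketch. The manifest shape here is an assumption (a top-level `guardrails` object holding thresholds); `wire_thresholds` is a hypothetical helper, not the FAI Engine's real API:

```python
import json

def wire_thresholds(manifest_path: str, skill_context: dict) -> dict:
    """Merge a play's guardrail thresholds into a skill's execution context."""
    with open(manifest_path, encoding="utf-8") as f:
        manifest = json.load(f)
    # Assumed manifest shape: {"guardrails": {"groundedness": 0.85, ...}}
    merged = dict(skill_context)
    merged["thresholds"] = manifest.get("guardrails", {})
    return merged
```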

Evaluation Integration

Skills that deploy or tune AI solutions should include an evaluation step as their final action. This ensures the deployed solution meets the quality gates defined in config/guardrails.json:

Evaluation step in SKILL.md

````markdown
## Step 5: Run Quality Evaluation

Execute the evaluation suite against the golden dataset:
```bash
python -m evaluation.run \
  --play 01 \
  --dataset eval/golden-set.jsonl \
  --thresholds config/guardrails.json
```

### Expected Results
| Metric       | Threshold | Description                   |
|--------------|-----------|-------------------------------|
| Groundedness | >= 0.85   | Factual accuracy from sources |
| Relevance    | >= 0.80   | Answer matches the question   |
| Coherence    | >= 0.90   | Logical flow and structure    |
| Safety       | >= 0.95   | No harmful content            |

If any metric falls below threshold, review config/openai.json
settings (lower temperature, adjust max_tokens) and re-evaluate.
````
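The pass/fail gate itself is a one-liner worth making explicit. A sketch, assuming scores and thresholds arrive as plain metric-to-float mappings; `passes_guardrails` is a hypothetical helper:

```python
def passes_guardrails(scores: dict, thresholds: dict) -> list[str]:
    """Return the metrics that fall below their guardrail thresholds (empty = pass)."""
    return [metric for metric, floor in thresholds.items()
            if scores.get(metric, 0.0) < floor]
```

Returning the failing metrics, rather than a bare boolean, lets the skill's final step name exactly which guardrail to tune against.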

Scaffolding a New Skill

Use the scaffold script to generate the folder structure with all required files:

Terminal

```shell
# Scaffold a new skill
node scripts/scaffold-primitive.js skill

# Interactive prompts:
# ? Skill name (kebab-case): my-custom-deployer
# ? Description: Deploy custom services to Azure with WAF alignment
#
# Created:
#   .github/skills/my-custom-deployer/SKILL.md
#   .github/skills/my-custom-deployer/templates/  (empty)
#   .github/skills/my-custom-deployer/scripts/    (empty)

# Validate the new skill
npm run validate:primitives
```
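The scaffold's output is simple enough to reproduce in a few lines. The repo's actual script is Node; this is an illustrative Python equivalent (`scaffold_skill` and its starter template are assumptions, not the real generator):

```python
from pathlib import Path

SKILL_TEMPLATE = """---
name: {name}
description: "{description}"
---

## Prerequisites
- TODO

## Step 1: TODO
"""

def scaffold_skill(root: str, name: str, description: str) -> Path:
    """Create the skill folder skeleton with empty asset dirs and a starter SKILL.md."""
    skill_dir = Path(root) / ".github" / "skills" / name
    for sub in ("templates", "scripts"):
        (skill_dir / sub).mkdir(parents=True, exist_ok=True)
    (skill_dir / "SKILL.md").write_text(
        SKILL_TEMPLATE.format(name=name, description=description), encoding="utf-8"
    )
    return skill_dir
```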