Agent Skills in .NET and the Microsoft Agent Framework
Using skills for progressive disclosure and composable abilities
Suppose your agent handles five different workflows. You have a few options:
One enormous system prompt with all five sets of instructions stuffed inside it.
Five separate agents, each deployed and maintained independently.
Or something better.
The Microsoft Agent Framework’s Skills pattern is a clean approach. One agent with capabilities externalized into composable, versioned markdown files. The agent loads the instruction set it needs, only when it needs it, and ignores the rest.
That last part is important: it keeps your context window small.
CodeGuardAI is a portfolio project I built to explore multi-agent architecture in .NET: an automated code review system with separate Quality and Security agents, built on Azure OpenAI with basic RAG and MCP integration. (GitHub)
When I was building CodeGuardAI, I made the call early to separate Quality and Security into distinct agents rather than one agent with a combined prompt. It’s easy to see what the alternative would have looked like: quality rules and security guidance competing for the model’s attention, ambiguous findings where both concerns apply, and a prompt that grows every time you want to add a new check. The moment I started thinking about what comes next (a performance reviewer, a docs analyzer), the single-prompt path became obviously untenable. Skills help break up those responsibilities and make the solution more composable.
What a Skill Actually Is
A Skill is a markdown file. Specifically, a SKILL.md file with YAML frontmatter:
---
name: Code Quality Assessor
description: Reviews .NET code for quality issues, naming violations, SOLID principles, and test coverage gaps.
version: 1.0.0
category: engineering
tags: [dotnet, quality, code-review]
---
## Instructions
You are a .NET code quality specialist. When reviewing code:
1. Check for SOLID principle violations — especially SRP and DIP.
2. Flag methods longer than 20 lines as candidates for extraction.
3. Look for missing or inadequate test coverage signals.
4. Use the `analyze_file` tool to inspect the code before proceeding.
The frontmatter is the advertisement: name, description, tags. The markdown body is the full instruction set. The agent sees the frontmatter at startup. The body only gets loaded when the agent decides this skill is relevant to the task at hand.
The Progressive Disclosure Mechanic
Here are the three phases of progressive disclosure:
1. Advertise: the agent sees name + description only. Token cost: ~100 tokens per skill.
2. Load: the full SKILL.md body is injected when relevant. Token cost: the full file.
3. Read Resource: supplementary data files are loaded on demand. Token cost: on demand.
An agent advertising 10 skills costs roughly 1,000 tokens at startup. Loading a single skill’s full instructions costs whatever that file contains, but only when the agent decides the task requires it. Supplementary data files (severity criteria, naming conventions, example patterns) stay on disk until explicitly referenced.
Two tools are auto-registered by the framework to drive this: load_skill and read_skill_resource. These come with FileAgentSkillsProvider.
The agent reads the descriptions, picks the relevant skill, loads it, and proceeds. If an instruction references a supplementary file, the agent fetches it via read_skill_resource exactly when it needs it. Context stays lean until the work demands otherwise.
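Concretely, a single request might trace through those tools like this (the skill name, file names, and `analyze_file` tool below are illustrative, not fixed by the framework):

```text
User: Review UserService.cs for quality issues.
→ load_skill("code-quality")           # SKILL.md body enters the context
→ analyze_file("UserService.cs")       # ordinary registered tool call
→ read_skill_resource("code-quality", "references/naming-conventions.md")
Agent: findings, formatted per the skill's instructions
```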
FileAgentSkillsProvider in C#
Here’s the built-in implementation. Point it at a directory and you’re up and running.
Packages:
dotnet add package Microsoft.Agents.AI --prerelease
dotnet add package Azure.AI.OpenAI
dotnet add package Azure.Identity
Wire up the provider:
// Program.cs
var builder = WebApplication.CreateBuilder(args);
IChatClient chatClient = new AzureOpenAIClient(
new Uri(builder.Configuration["AZURE_OPENAI_ENDPOINT"]),
new DefaultAzureCredential())
.GetChatClient(builder.Configuration["AZURE_OPENAI_DEPLOYMENT_NAME"])
.AsIChatClient();
var skillsProvider = new FileAgentSkillsProvider(
skillPath: Path.Combine(AppContext.BaseDirectory, "skills"));
var agent = chatClient.AsAIAgent(new ChatClientAgentOptions
{
AIContextProviders = [skillsProvider],
Tools = [ /* callable tools registered here — more on this below */ ]
});
Directory structure:
skills/
├── code-quality/
│ ├── SKILL.md
│ └── references/
│ └── naming-conventions.md
└── tech-debt-assessor/
├── SKILL.md
└── references/
    └── severity-criteria.md
One subdirectory per skill. The references/ folder is optional. Use it for supplementary data the skill might need but shouldn’t load unconditionally. The provider discovers everything from the directory tree at startup.
Each skill is a discrete responsibility, so it makes sense for the structure of the directory to reflect that. Adding, removing, or modifying a skill is a discrete operation. The agent binary doesn’t change. That separation opens up the possibility of other teams owning their own skill files, or pulling from a shared skills marketplace.
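As a sketch of what such a resource might contain (this file’s content is hypothetical, not from CodeGuardAI), references/severity-criteria.md could be a short rubric the skill pulls in only when it needs to assign severities:

```markdown
# Severity Criteria
- Critical: exploitable security flaw or data-corruption bug
- High: incorrect behavior on a common code path
- Medium: maintainability risk (e.g. SRP violation, method over 20 lines)
- Low: naming or style inconsistency
```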
A Critical Distinction: Skills vs Tools
Let’s clarify the terminology as there is some overlap here.
Skills guide the LLM. They’re instruction sets: markdown files telling the agent how to approach a task, what to check, what format to use for output. They’re not callable functions. They’re prompts in a composable, on-demand package.
Tools give the agent hands. A tool is a callable function: analyze_directory, run_tests, read_file. These return results the model reasons about. Registered separately in ChatClientAgentOptions.Tools.
A SKILL.md file can reference tools by name in its instructions: “use the analyze_file tool to inspect the code before forming a conclusion.” That works because the tool is registered separately. The skill names it. The tool delivers the capability.
The two are independent by design. You can swap skill files without touching tool registration. You can add new tools without touching skills. Neither knows about the other except by name.
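To make that independence concrete, here’s a minimal sketch of registering an `analyze_file` tool via `AIFunctionFactory` from Microsoft.Extensions.AI so a skill can name it. The tool body and its description are illustrative assumptions, not the framework’s API surface:

```csharp
using Microsoft.Extensions.AI;

// Illustrative tool body; a real implementation would do actual analysis.
static string AnalyzeFile(string path) =>
    File.Exists(path) ? File.ReadAllText(path) : $"File not found: {path}";

var analyzeFileTool = AIFunctionFactory.Create(
    AnalyzeFile,
    name: "analyze_file",
    description: "Reads a source file so the model can inspect its contents.");

var agent = chatClient.AsAIAgent(new ChatClientAgentOptions
{
    AIContextProviders = [skillsProvider],
    Tools = [analyzeFileTool] // the skill names it; this registration delivers it
});
```

Swapping the skill file never touches this registration, and vice versa.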
Where Python goes further, and why C# doesn’t yet
In the Python MAF SDK, a skill can bundle its own callable logic via @skill.script. Because Python is interpreted, the framework can load and execute a script file in-process at runtime with no compilation step. A skill becomes genuinely self-contained: instructions and the tools they depend on, packaged together.
@unit_converter_skill.script(name="convert", description="Convert a value: result = value × factor")
def convert_units(value: float, factor: float) -> str:
    result = round(value * factor, 4)
    return json.dumps({"result": result})
C# doesn’t appear to have this yet. To execute arbitrary C# code at runtime you’d need Roslyn scripting, subprocess execution, or dynamically loaded assemblies. Each of those carries real complexity and security risk. The MAF docs show a subprocess approach (for Python) and immediately flag it as “demonstration purposes only.” Roslyn-based in-process execution is the likely path forward, but it will require a safe sandboxing model first.
For now in C#: skills are instructions, tools are separate. That constraint is actually fine for most agent architectures. The pattern still gives you composability and progressive disclosure. But it’s worth knowing that the Python SDK is ahead here.
One more thing worth flagging: if you’ve read the A2A posts, you’ll have seen the word “skills” used there too. In an AgentCard, an agent advertises its capabilities as AgentSkill objects. These are completely different things. A2A skills are discovery metadata telling other agents what to send you. SKILL.md skills are internal instruction sets. Same term, different layer entirely.
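For comparison, an A2A AgentSkill is just a metadata entry in the AgentCard, along these lines (the field values here are illustrative):

```json
{
  "id": "code-review",
  "name": "Code Review",
  "description": "Reviews .NET code for quality and security issues",
  "tags": ["dotnet", "code-review"]
}
```

Nothing in this object is loaded into the agent’s own context as instructions; it exists so other agents know what to send.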
In CodeGuardAI, the Quality and Security agents are already the separation of concerns, but Skills would sit below that level. A Quality agent might load a naming conventions skill for one task and a SOLID principles skill for another. A Security agent might switch between an injection vulnerabilities skill and a dependency audit skill. The agents stay focused; the skills make each agent’s capabilities composable within that focus.
The MCP + Skills Pattern
FileAgentSkillsProvider gets you started. For production, the natural pairing is Skills + MCP.
Skills define what the agent should do. The MCP server exposes what tools it can do it with. Both are independently swappable. Neither knows the other’s implementation details.
Three-project structure that works well:
SkillsCore: shared orchestration library
SkillsExecutor: console or API entry point
SkillsMcpServer: custom MCP server exposing your domain tools
MCP server setup:
dotnet add package ModelContextProtocol --prerelease
// SkillsMcpServer/Tools/ProjectAnalysisTools.cs
[McpServerToolType]
public static class ProjectAnalysisTools
{
[McpServerTool, Description("Analyzes a .NET project directory structure")]
public static string AnalyzeDirectory(string path, int maxDepth = 3)
{
// real implementation goes here
throw new NotImplementedException();
}
}
// SkillsMcpServer/Program.cs
builder.Services
.AddMcpServer()
.WithStdioServerTransport()
.WithToolsFromAssembly();
MCP client config (appsettings.json):
{
"McpServers": {
"ProjectAnalysis": {
"command": "path/to/SkillsMcpServer",
"args": []
}
}
}
The orchestration loop: the client discovers available tools from the MCP server via ListToolsAsync(), passes them to the model alongside the advertised skills, and the model decides which tools to call. Tool calls route to CallToolAsync(), results feed back into the conversation, and the model continues until it has nothing left to call.
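Here’s a hedged sketch of that wiring using the ModelContextProtocol client package. Because MCP client tools derive from AIFunction, they can be handed straight to the agent; the server name and path are illustrative:

```csharp
using Microsoft.Extensions.AI;
using ModelContextProtocol.Client;

// Connect to the MCP server over stdio.
var mcpClient = await McpClientFactory.CreateAsync(
    new StdioClientTransport(new StdioClientTransportOptions
    {
        Name = "ProjectAnalysis",
        Command = "path/to/SkillsMcpServer"
    }));

// Discover the server's tools; each McpClientTool is an AIFunction.
var mcpTools = await mcpClient.ListToolsAsync();

var agent = chatClient.AsAIAgent(new ChatClientAgentOptions
{
    AIContextProviders = [skillsProvider], // skills: the "what"
    Tools = [.. mcpTools]                  // MCP tools: the "how"
});
```

Tool invocation and the feed-results-back loop then happen through the normal function-calling machinery; the agent code never hardcodes which tools the server exposes.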
Swap the MCP server (new tools, new domain) without touching a single skill file. Swap a skill (new workflow, new guidance) without touching tools. That’s a clean boundary.
CodeGuardAI’s Quality agent is an obvious candidate for this. Drop in a documentation reviewer skill, a performance auditor, an API contract checker. Each one extends what the agent can do without touching the agent itself.
When Skills Aren’t the Answer
Not every agent benefits from this pattern.
Probably don’t need Skills if:
Your agent handles one workflow. A clear, stable system prompt is simpler.
The instruction set is short and fits comfortably in the context window.
You’re the only person who will ever touch the agent’s instructions.
Skills start earning their complexity when:
One agent needs to handle multiple distinct workflows cleanly.
Multiple people or teams contribute to the agent’s capabilities. Skills are independently ownable files.
The combined instruction set is large enough that progressive disclosure meaningfully reduces context load.
You want to version and ship capabilities independently without redeploying the agent binary.
The pattern shines in platform scenarios. One agent shared across teams. Each team owns one or more skill files. The binary doesn’t change; the skills directory does. This is a different deployment model from what most .NET developers are used to.
The Takeaway
The insight isn’t “use markdown files instead of system prompts.”
It’s the architecture: progressive disclosure keeps context lean by design. Skill files make capabilities ownable, versionable, and deployable independently. The MCP pairing decouples the execution layer on the same principle. Three independent parts, composed at runtime.
For a single-workflow agent with a stable prompt: skip it. For anything that needs to flex across domains, or anything maintained by more than one person, Skills give you a pattern that scales nicely.
Here's a Microsoft GitHub demo of Skills in .NET! https://github.com/Azure-Samples/agent-skills-dotnet-demo