Semantic Kernel & Microsoft Agent Framework 1.0 — Building AI Agents With C#

Posted on: 4/26/2026 8:12:36 PM

27,770+ GitHub Stars
1.0 GA Production-Ready Release (04/2026)
5 Orchestration Patterns
12+ AI Model Connectors

Context: Why .NET Needs a Dedicated AI Framework

In 2026, building AI applications is no longer just about calling a chat completion API and returning the result. Real-world systems require autonomous agents — capable of reading data from multiple sources, calling external APIs, planning multi-step actions, and coordinating with other agents to complete complex tasks.

In the Python ecosystem, LangChain and CrewAI captured early adoption. But for .NET developers — those running enterprise systems in C# — there was no native AI framework with tight integration into dependency injection, middleware pipelines, and the familiar microservices architecture. Microsoft Semantic Kernel was built to fill that gap.

From Semantic Kernel to Microsoft Agent Framework 1.0

March 2023
Microsoft announces Semantic Kernel — a lightweight SDK for integrating LLMs into .NET and Python applications. Architecture centered around Kernel + Plugin + Planner.
October 2025
Microsoft unifies Semantic Kernel and AutoGen (multi-agent framework from Microsoft Research) under the umbrella name Microsoft Agent Framework.
April 2026
Agent Framework 1.0 GA ships — production-ready, with stabilized multi-agent orchestration patterns and MCP/A2A protocol support.

Key Takeaway

Agent Framework 1.0 does not replace Semantic Kernel — it builds on top of it. Semantic Kernel remains the foundation layer (Kernel, Plugin, Connector), while AutoGen provides the multi-agent orchestration engine. If you're already using Semantic Kernel, all your existing plugins and connectors continue to work.

Architecture Overview

graph TD
    subgraph "Microsoft Agent Framework 1.0"
        MAF["Agent Framework API"]

        subgraph "Orchestration Layer"
            SEQ["Sequential"]
            CON["Concurrent"]
            HO["Handoff"]
            GC["Group Chat"]
            MAG["Magentic-One"]
        end

        subgraph "Semantic Kernel Core"
            K["Kernel"]
            P["Plugins"]
            M["Memory / RAG"]
            C["AI Connectors"]
        end

        subgraph "Runtime"
            IP["InProcess Runtime"]
            DR["Distributed Runtime"]
        end
    end

    MAF --> SEQ
    MAF --> CON
    MAF --> HO
    MAF --> GC
    MAF --> MAG

    SEQ --> K
    CON --> K
    HO --> K
    GC --> K
    MAG --> K

    K --> P
    K --> M
    K --> C

    K --> IP
    K --> DR

    C --> OAI["OpenAI / Azure OpenAI"]
    C --> HF["Hugging Face"]
    C --> NV["NVIDIA"]
    C --> OL["Ollama (Local)"]

    style MAF fill:#e94560,stroke:#fff,color:#fff
    style K fill:#2c3e50,stroke:#fff,color:#fff
    style SEQ fill:#f8f9fa,stroke:#e94560,color:#2c3e50
    style CON fill:#f8f9fa,stroke:#e94560,color:#2c3e50
    style HO fill:#f8f9fa,stroke:#e94560,color:#2c3e50
    style GC fill:#f8f9fa,stroke:#e94560,color:#2c3e50
    style MAG fill:#f8f9fa,stroke:#e94560,color:#2c3e50
  
Layered architecture of Microsoft Agent Framework 1.0

The architecture is split into 3 clear layers:

  • Semantic Kernel Core: Kernel handles AI model invocation, Plugins provide extensibility (native functions, OpenAPI, MCP tools), Memory/RAG connects knowledge bases.
  • Orchestration Layer: 5 multi-agent coordination patterns, inherited from AutoGen research.
  • Runtime: Manages agent lifecycle — InProcess for simplicity, Distributed for production scale.

Semantic Kernel Core: Kernel, Plugin, Connector

Kernel — The Heart of the System

The Kernel is the central hub connecting AI models with business logic. It manages AI services (which model, what configuration), the plugin registry, and memory providers. Kernel integrates directly with Microsoft.Extensions.DependencyInjection — extremely familiar territory for .NET developers:

using Microsoft.SemanticKernel;

var builder = Kernel.CreateBuilder();

// Add AI service
builder.AddAzureOpenAIChatCompletion(
    deploymentName: "gpt-4o",
    endpoint: "https://your-resource.openai.azure.com/",
    apiKey: "your-api-key"
);

// Add plugins from classes
builder.Plugins.AddFromType<WeatherPlugin>();
builder.Plugins.AddFromType<DatabasePlugin>();

// Build kernel
Kernel kernel = builder.Build();

Plugins — Extending AI Capabilities

Plugins are how you "teach" the AI model to call real code. Each plugin is a C# class with methods marked [KernelFunction]:

using System.ComponentModel;

public class OrderPlugin
{
    [KernelFunction, Description("Look up order information by order ID")]
    public async Task<OrderInfo> GetOrderAsync(
        [Description("The order ID")] string orderId,
        IOrderService orderService)  // resolved from the kernel's service provider
    {
        return await orderService.GetByIdAsync(orderId);
    }

    [KernelFunction, Description("Create a new order from a list of products")]
    public async Task<string> CreateOrderAsync(
        [Description("List of product IDs")] List<int> productIds,
        [Description("Shipping address")] string shippingAddress,
        IOrderService orderService)
    {
        var order = await orderService.CreateAsync(productIds, shippingAddress);
        return $"Created order #{order.Id}";
    }
}

When a user asks "Where is my order ORD-1234?", the AI model automatically recognizes it needs to call GetOrderAsync with parameter "ORD-1234". The entire function calling process is handled automatically by Semantic Kernel.
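As a sketch of how that automatic function calling is switched on — using the `FunctionChoiceBehavior` setting from current Semantic Kernel releases, and assuming the `kernel` built earlier with `OrderPlugin` registered:

```csharp
using Microsoft.SemanticKernel;

// FunctionChoiceBehavior.Auto() lets the model decide which registered
// plugin function to call; Semantic Kernel executes it and feeds the
// result back to the model automatically.
PromptExecutionSettings settings = new()
{
    FunctionChoiceBehavior = FunctionChoiceBehavior.Auto()
};

var answer = await kernel.InvokePromptAsync(
    "Where is my order ORD-1234?",
    new KernelArguments(settings));

Console.WriteLine(answer);
```

Without `FunctionChoiceBehavior.Auto()`, the model can only describe what it would do; with it, the GetOrderAsync call actually runs.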

Plugins from Multiple Sources

Beyond native C# classes, Semantic Kernel supports importing plugins from OpenAPI specs (Swagger), Prompt templates (YAML/JSON), and MCP Servers — meaning you can turn any REST API into an AI tool without writing wrapper code.
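For the OpenAPI case, the import is a one-liner in current Semantic Kernel builds (the `Microsoft.SemanticKernel.Plugins.OpenApi` package); the URL below is a placeholder for your own Swagger document:

```csharp
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Plugins.OpenApi;

// Every operation in the OpenAPI spec becomes a callable kernel function —
// no wrapper code required.
KernelPlugin ordersApi = await kernel.ImportPluginFromOpenApiAsync(
    pluginName: "OrdersApi",
    uri: new Uri("https://api.example.com/swagger/v1/swagger.json"));
```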

AI Connectors — No Vendor Lock-in

Provider     | NuGet Package                                   | Supported Models
Azure OpenAI | Microsoft.SemanticKernel.Connectors.AzureOpenAI | GPT-4o, GPT-4.1, o3, o4-mini
OpenAI       | Microsoft.SemanticKernel.Connectors.OpenAI      | GPT-4o, GPT-4.1, DALL·E
Hugging Face | Microsoft.SemanticKernel.Connectors.HuggingFace | Mistral, Llama, Phi
NVIDIA       | Microsoft.SemanticKernel.Connectors.Nvidia      | NIM microservices
Ollama       | Microsoft.SemanticKernel.Connectors.Ollama      | Any local model (Llama 3, Phi-3, Gemma)
Google       | Microsoft.SemanticKernel.Connectors.Google      | Gemini Pro, Gemini Ultra

5 Orchestration Patterns for Multi-Agent Systems

This is the strongest part of Agent Framework 1.0 — inherited from Microsoft Research's AutoGen work, providing 5 agent coordination patterns:

graph LR
    subgraph "Sequential"
        S1["Agent A"] --> S2["Agent B"] --> S3["Agent C"]
    end

    style S1 fill:#e94560,stroke:#fff,color:#fff
    style S2 fill:#2c3e50,stroke:#fff,color:#fff
    style S3 fill:#4CAF50,stroke:#fff,color:#fff
  
Sequential: output from one agent becomes input for the next

1. Sequential — Step-by-step Pipeline

Agent A finishes, passes the result to Agent B, then Agent C. Ideal for workflows with a clear ordering: analyze → process → review.

// Analyze requirements → Write code → Review code
var analyst = new ChatCompletionAgent
{
    Name = "Analyst",
    Instructions = "Analyze requirements and create a technical spec.",
    Kernel = kernel
};

var coder = new ChatCompletionAgent
{
    Name = "Coder",
    Instructions = "Write C# code based on the technical spec.",
    Kernel = kernel
};

var reviewer = new ChatCompletionAgent
{
    Name = "Reviewer",
    Instructions = "Review code for security, performance, and best practices.",
    Kernel = kernel
};

SequentialOrchestration pipeline = new(analyst, coder, reviewer);

InProcessRuntime runtime = new();
await runtime.StartAsync();

var result = await pipeline.InvokeAsync(
    "Create a REST API endpoint for product management with pagination and caching",
    runtime);
string output = await result.GetValueAsync();

2. Concurrent — Parallel Processing

Broadcasts the same task to all agents, collecting results independently. Ideal for multi-perspective analysis or ensemble decision making:

var securityAnalyst = CreateAgent("SecurityAnalyst", "Analyze code for security vulnerabilities.");
var perfAnalyst = CreateAgent("PerfAnalyst", "Analyze code for performance issues.");
var uxAnalyst = CreateAgent("UXAnalyst", "Analyze API design for developer experience.");

ConcurrentOrchestration ensemble = new(securityAnalyst, perfAnalyst, uxAnalyst);

// All 3 agents run in parallel, each providing their own perspective
var result = await ensemble.InvokeAsync(codeToReview, runtime);

3. Handoff — Dynamic Routing

Agents autonomously decide when to pass the task to another agent based on context. Similar to a support escalation system:

sequenceDiagram
    participant U as User
    participant T as Triage Agent
    participant B as Billing Agent
    participant S as Senior Support

    U->>T: "I was charged incorrectly this month"
    T->>B: Handoff → Billing Agent
    B->>B: Check invoice, detect anomaly
    B->>S: Handoff → Senior Support (needs refund approval)
    S->>U: Confirm refund and send email
  
Handoff pattern: agents self-escalate based on context

HandoffOrchestration handoff = new(
    triageAgent, billingAgent, seniorSupport)
{
    // Agents decide handoff dynamically via function calls
    InteractionMode = InteractionMode.Dynamic
};

4. Group Chat — Collaborative Discussion

Multiple agents participate in a shared conversation, coordinated by a Group Manager. Suited for brainstorming, collaborative problem solving, or consensus building. The Group Manager decides which agent speaks next based on conversation context.
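No snippet is shown for this pattern in the other sections' style, so here is a minimal sketch — assuming a GroupChatOrchestration type that mirrors the constructors above, and using the round-robin manager from current Semantic Kernel orchestration previews (agent names are illustrative):

```csharp
// Round-robin sketch: each agent speaks in turn, capped at six turns total.
// An LLM-based manager could instead pick the next speaker from context.
GroupChatOrchestration brainstorm = new(
    new RoundRobinGroupChatManager { MaximumInvocationCount = 6 },
    architectAgent, securityAgent, productAgent);

var result = await brainstorm.InvokeAsync(
    "Propose a caching strategy for the product catalog API",
    runtime);
string transcript = await result.GetValueAsync();
```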

5. Magentic-One — Generalist Multi-Agent

Inspired by Microsoft Research's Magentic-One paper — a "generalist" multi-agent system capable of solving complex tasks. It includes an Orchestrator agent coordinating Specialist agents (WebSurfer, FileSurfer, Coder, Terminal). This is the most powerful but also the most complex pattern.
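A hedged sketch of what wiring this up looks like, based on the MagenticOrchestration API in current Semantic Kernel previews (the manager and agent names here are illustrative and may differ in Agent Framework 1.0):

```csharp
using Microsoft.SemanticKernel.ChatCompletion;
using Microsoft.SemanticKernel.Connectors.OpenAI;

// The Magentic manager plans the task, delegates steps to specialist
// agents, and re-plans when a step fails.
MagenticOrchestration research = new(
    new StandardMagenticManager(
        kernel.GetRequiredService<IChatCompletionService>(),
        new OpenAIPromptExecutionSettings()),
    webSurferAgent, coderAgent);

var result = await research.InvokeAsync(
    "Compare hosting costs for this workload across three cloud providers",
    runtime);
```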

Pattern      | When to Use                                  | Complexity
Sequential   | Ordered pipelines (ETL, CI/CD automation)    | Low
Concurrent   | Parallel analysis, ensemble voting           | Low
Handoff      | Customer support, dynamic task routing       | Medium
Group Chat   | Brainstorming, collaborative review          | Medium
Magentic-One | Complex tasks requiring multiple specialists | High

MCP and A2A Protocol Integration

Agent Framework 1.0 supports two crucial protocols in the AI agent ecosystem:

Model Context Protocol (MCP)

MCP allows agents to dynamically discover and invoke tools from external MCP servers. Instead of hardcoding plugins, agents can connect to an MCP server and auto-discover available tools:

// Connect to MCP server for dynamic tool import
var mcpTools = await McpClientFactory.CreateAsync(
    new SseClientTransport(new Uri("http://localhost:3001/sse")));

kernel.Plugins.AddFromMcpServer("ExternalTools", mcpTools);

Agent-to-Agent Protocol (A2A)

A2A (introduced by Google) enables agents in different runtimes to communicate via HTTP. A .NET agent can call a Python or Java agent through a standardized A2A endpoint. This opens up the ability to build polyglot multi-agent systems — each agent written in the most suitable language.

graph TD
    A["C# Agent<br/>(Semantic Kernel)"] -->|A2A Protocol| B["Python Agent<br/>(LangChain)"]
    A -->|A2A Protocol| C["Java Agent<br/>(Spring AI)"]
    A -->|MCP| D["MCP Server<br/>(Database Tools)"]
    A -->|MCP| E["MCP Server<br/>(File System)"]

    style A fill:#e94560,stroke:#fff,color:#fff
    style B fill:#2c3e50,stroke:#fff,color:#fff
    style C fill:#2c3e50,stroke:#fff,color:#fff
    style D fill:#f8f9fa,stroke:#e94560,color:#2c3e50
    style E fill:#f8f9fa,stroke:#e94560,color:#2c3e50
MCP provides tools, A2A connects agents across runtimes

Declarative Agents — Define Agents in YAML

A new feature in 1.0 is Declarative Agents — instead of hardcoded C#, you can define agent instructions, tool bindings, and orchestration topology in version-controlled YAML files:

# agents/order-support.yaml
name: OrderSupport
instructions: |
  You are an order support specialist. Look up order information,
  handle return requests, and escalate to senior if needed.
model: gpt-4o
tools:
  - plugin: OrderPlugin
  - plugin: ShippingPlugin
  - mcp: http://localhost:3001/sse
memory:
  provider: azure-ai-search
  index: order-knowledge-base
orchestration:
  pattern: handoff
  members:
    - agent: agents/billing-agent.yaml
    - agent: agents/senior-support.yaml

// Load and run agent from YAML
var agent = await AgentFactory.CreateFromYamlAsync("agents/order-support.yaml");
var runtime = new InProcessRuntime();
await runtime.StartAsync();

var result = await agent.InvokeAsync("Order ORD-5678 is 3 days late", runtime);

Benefits of Declarative Agents

YAML agent definitions can be version controlled, reviewed in PRs, and deployed via CI/CD pipelines — no recompilation needed when changing agent behavior. This pattern is ideal for large teams where AI engineers and backend developers work in parallel.

Built-in Memory and RAG

Agent Framework includes native RAG (Retrieval-Augmented Generation) — allowing agents to query their own knowledge base instead of relying solely on the LLM's training data:

// Configure vector store for memory
builder.AddAzureAISearchVectorStore(
    endpoint: new Uri("https://your-search.search.windows.net"),
    apiKey: "your-key");

// Or use in-memory for development
builder.AddInMemoryVectorStore();

// Agent automatically searches memory when context is needed
var agent = new ChatCompletionAgent
{
    Name = "KnowledgeAgent",
    Instructions = "Answer based on internal knowledge base.",
    Kernel = kernel,
    // Memory is injected automatically via DI
};

Semantic Kernel supports multiple vector store backends: Azure AI Search, Qdrant, Pinecone, Weaviate, PostgreSQL (pgvector), Redis — choose the backend that fits your existing infrastructure.
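Whichever backend you pick, records are described with the same attribute-based model from the `Microsoft.Extensions.VectorData` abstractions — a sketch below; attribute names have shifted between preview versions, so treat them as indicative:

```csharp
using Microsoft.Extensions.VectorData;

// A record model the vector store maps to its backend schema.
// The same class works against Azure AI Search, Qdrant, pgvector, etc.
public sealed class KnowledgeChunk
{
    [VectorStoreRecordKey]
    public string Id { get; set; } = string.Empty;

    [VectorStoreRecordData]                    // stored, filterable payload
    public string Text { get; set; } = string.Empty;

    [VectorStoreRecordVector(1536)]            // embedding dimensions
    public ReadOnlyMemory<float> Embedding { get; set; }
}
```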

Semantic Kernel vs LangChain — When to Choose What?

Criteria               | Semantic Kernel / Agent Framework       | LangChain
Primary Language       | C# (.NET), Python                       | Python, JavaScript
Architecture           | Plugin model + DI container             | Chain/Agent abstraction
Multi-agent            | 5 built-in orchestration patterns       | LangGraph (graph-based)
Enterprise Integration | Native Azure, Microsoft 365, Dynamics   | Community integrations
Ecosystem              | Smaller, .NET-focused                   | Larger, many third-party tools
Best For               | .NET teams, Microsoft enterprise stack  | Python teams needing many pre-built integrations

Migration Note

If you're using Semantic Kernel < 1.0, check the breaking changes in the Agent Orchestration API — particularly how agents are created (from AgentBuilder to ChatCompletionAgent constructor) and how invocation works (from direct agent.InvokeAsync to Orchestration + Runtime).

Practical Example: Multi-Agent Customer Support System

To demonstrate the framework's power, let's build a customer support system with 3 agents coordinating via the Handoff pattern:

graph TD
    U["Customer"] --> T["Triage Agent"]
    T -->|FAQ| F["FAQ Agent<br/>(knowledge base lookup)"]
    T -->|Technical| TE["Technical Agent<br/>(debug + log lookup)"]
    T -->|Billing| B["Billing Agent<br/>(invoice + refund)"]
    F --> R["Response → Customer"]
    TE --> R
    B --> R

    style T fill:#e94560,stroke:#fff,color:#fff
    style F fill:#f8f9fa,stroke:#e94560,color:#2c3e50
    style TE fill:#f8f9fa,stroke:#e94560,color:#2c3e50
    style B fill:#f8f9fa,stroke:#e94560,color:#2c3e50
Multi-agent customer support system with Handoff orchestration

// 1. Create specialized agents
var triageAgent = new ChatCompletionAgent
{
    Name = "TriageAgent",
    Instructions = """
        Classify customer requests into one of 3 categories:
        - FAQ: common questions about the product
        - Technical: technical issues, bugs, errors
        - Billing: invoices, payments, refunds
        Hand off to the appropriate agent.
        """,
    Kernel = kernel
};

var faqAgent = new ChatCompletionAgent
{
    Name = "FAQAgent",
    Instructions = "Answer FAQs based on the product knowledge base.",
    Kernel = kernelWithRAG  // Kernel with vector store
};

var technicalAgent = new ChatCompletionAgent
{
    Name = "TechnicalAgent",
    Instructions = "Provide technical support: look up logs, check service status.",
    Kernel = kernelWithTools  // Kernel with LogPlugin, ServiceStatusPlugin
};

var billingAgent = new ChatCompletionAgent
{
    Name = "BillingAgent",
    Instructions = "Handle billing issues: look up invoices, process refund requests.",
    Kernel = kernelWithBilling  // Kernel with InvoicePlugin
};

// 2. Create Handoff orchestration
HandoffOrchestration support = new(triageAgent, faqAgent, technicalAgent, billingAgent);

// 3. Run
InProcessRuntime runtime = new();
await runtime.StartAsync();

var result = await support.InvokeAsync(
    "I was charged twice this month for the Premium plan, please check.",
    runtime);

Console.WriteLine(await result.GetValueAsync());
await runtime.RunUntilIdleAsync();

Production Best Practices

1. Limit Scope Per Agent

Each agent should have 3-5 tools maximum. Too many tools make it harder for the LLM to choose the correct function call, increasing latency and reducing accuracy. If you need more capabilities, split into multiple agents and use Handoff.

2. Streaming for Better UX

All orchestration patterns support streaming responses. For long conversations, enable streaming instead of waiting for the entire response — the user experience difference is significant.
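At the single-agent level, this is the `InvokeStreamingAsync` method on ChatCompletionAgent in current Semantic Kernel builds — a sketch, assuming an `agent` configured as in the earlier examples:

```csharp
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.ChatCompletion;

// Chunks arrive as the model generates them; write each one immediately
// instead of buffering the full response.
await foreach (var chunk in agent.InvokeStreamingAsync(
    new ChatMessageContent(AuthorRole.User, "Summarize today's incidents")))
{
    Console.Write(chunk.Message.Content);
}
```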

3. Human-in-the-Loop Checkpoints

For actions with side effects (sending emails, processing refunds, deploying code), always set checkpoints requiring human approval before the agent executes. Agent Framework supports this natively via HumanInTheLoop middleware.
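One way to build such a checkpoint with today's Semantic Kernel primitives is a function invocation filter — `IFunctionInvocationFilter` is an existing SK interface; the console prompt below is a stand-in for a real approval workflow (ticket, Teams message), and the function names are illustrative:

```csharp
using Microsoft.SemanticKernel;

public sealed class ApprovalFilter : IFunctionInvocationFilter
{
    // Illustrative list of side-effecting functions that need sign-off.
    private static readonly HashSet<string> Sensitive = ["CreateRefund", "SendEmail"];

    public async Task OnFunctionInvocationAsync(
        FunctionInvocationContext context,
        Func<FunctionInvocationContext, Task> next)
    {
        if (Sensitive.Contains(context.Function.Name))
        {
            Console.Write($"Approve call to {context.Function.Name}? (y/n): ");
            if (Console.ReadLine()?.Trim().ToLowerInvariant() != "y")
            {
                // Short-circuit: the function never runs; the agent
                // receives this result instead.
                context.Result = new FunctionResult(
                    context.Function, "Rejected by operator.");
                return;
            }
        }
        await next(context);
    }
}

// Registration:
// builder.Services.AddSingleton<IFunctionInvocationFilter, ApprovalFilter>();
```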

4. Observability with OpenTelemetry

Agent Framework has built-in OpenTelemetry traces. Each agent invocation creates its own span, letting you track latency, token usage, and error rates on Jaeger/Grafana Tempo. This is a must-have for production.
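Exporting those traces is standard OpenTelemetry SDK wiring — Semantic Kernel emits activities under sources prefixed `Microsoft.SemanticKernel`, so a sketch of the setup (the OTLP endpoint default points at a local collector):

```csharp
using OpenTelemetry;
using OpenTelemetry.Trace;

// Subscribe to all Semantic Kernel activity sources and ship spans
// via OTLP to Jaeger / Grafana Tempo (default endpoint localhost:4317).
using var tracerProvider = Sdk.CreateTracerProviderBuilder()
    .AddSource("Microsoft.SemanticKernel*")
    .AddOtlpExporter()
    .Build();
```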

5. Use Distributed Runtime When Scaling

InProcess runtime is sufficient for prototypes and small services. When agent workload grows or you need fault tolerance, switch to Distributed Runtime — agents are deployed as independent microservices, communicating via message bus.

Conclusion

Microsoft Agent Framework 1.0 marks a significant maturity milestone for AI agent development on .NET. By unifying Semantic Kernel (solid foundation) with AutoGen (multi-agent orchestration), Microsoft delivers a single SDK from prototype to production.

With 5 orchestration patterns, extensible plugin architecture, native MCP/A2A support, and deep integration into the .NET ecosystem — this is the framework to master if you're building AI applications in C#. The greatest strength is that developers don't need to leave their familiar DI/middleware architecture to enter the world of AI agents.
