---
title: Getting Started with AG-UI
description: Step-by-step tutorial to build your first AG-UI server and client with Agent Framework
zone_pivot_groups: programming-languages
author: moonbox3
ms.topic: tutorial
ms.author: evmattso
ms.date: 04/01/2026
ms.service: agent-framework
---
This tutorial demonstrates how to build both server and client applications using the AG-UI protocol with .NET or Python and Agent Framework. You'll learn how to create an AG-UI server that hosts an AI agent and a client that connects to it for interactive conversations.
By the end of this tutorial, you'll have:
- An AG-UI server hosting an AI agent accessible via HTTP
- A client application that connects to the server and streams responses
- An understanding of how the AG-UI protocol works with Agent Framework
::: zone pivot="programming-language-csharp"
Before you begin, ensure you have the following:
- .NET 8.0 or later
- Azure OpenAI service endpoint and deployment configured
- Azure CLI installed and authenticated
- The `Cognitive Services OpenAI Contributor` role assigned for the Azure OpenAI resource
> [!NOTE]
> These samples use Azure OpenAI models. For more information, see how to deploy Azure OpenAI models with Microsoft Foundry.
> [!NOTE]
> These samples use `DefaultAzureCredential` for authentication. Make sure you're authenticated with Azure (for example, via `az login`). For more information, see the Azure Identity documentation.
> [!WARNING]
> The AG-UI protocol is still under development and subject to change. We will keep these samples updated as the protocol evolves.
The AG-UI server hosts your AI agent and exposes it via HTTP endpoints using ASP.NET Core.
> [!NOTE]
> The server project requires the `Microsoft.NET.Sdk.Web` SDK. If you're creating a new project from scratch, use `dotnet new web` or ensure your `.csproj` file uses `<Project Sdk="Microsoft.NET.Sdk.Web">` instead of `Microsoft.NET.Sdk`.
Install the necessary packages for the server:

```shell
dotnet add package Microsoft.Agents.AI.Hosting.AGUI.AspNetCore --prerelease
dotnet add package Azure.AI.Projects --prerelease
dotnet add package Azure.Identity
dotnet add package Microsoft.Agents.AI.Foundry --prerelease
```

> [!NOTE]
> The `Microsoft.Agents.AI.Foundry` package is required for the `AsAIAgent()` extension method that creates an Agent Framework agent from an `AIProjectClient`.
Create a file named Program.cs:

```csharp
// Copyright (c) Microsoft. All rights reserved.

using Azure.AI.Projects;
using Azure.Identity;
using Microsoft.Agents.AI;
using Microsoft.Agents.AI.Hosting.AGUI.AspNetCore;

WebApplicationBuilder builder = WebApplication.CreateBuilder(args);

builder.Services.AddHttpClient().AddLogging();
builder.Services.AddAGUI();

WebApplication app = builder.Build();

string endpoint = builder.Configuration["AZURE_OPENAI_ENDPOINT"]
    ?? throw new InvalidOperationException("AZURE_OPENAI_ENDPOINT is not set.");
string deploymentName = builder.Configuration["AZURE_OPENAI_DEPLOYMENT_NAME"]
    ?? throw new InvalidOperationException("AZURE_OPENAI_DEPLOYMENT_NAME is not set.");

// Create the AI agent
AIAgent agent = new AIProjectClient(
    new Uri(endpoint),
    new DefaultAzureCredential())
    .AsAIAgent(
        model: deploymentName,
        name: "AGUIAssistant",
        instructions: "You are a helpful assistant.");

// Map the AG-UI agent endpoint
app.MapAGUI("/", agent);

await app.RunAsync();
```

> [!WARNING]
> `DefaultAzureCredential` is convenient for development but requires careful consideration in production. In production, consider using a specific credential (for example, `ManagedIdentityCredential`) to avoid latency issues, unintended credential probing, and potential security risks from fallback mechanisms.
- **AddAGUI**: Registers AG-UI services with the dependency injection container
- **MapAGUI**: Extension method that registers the AG-UI endpoint with automatic request/response handling and SSE streaming
- **AsAIAgent**: Creates an Agent Framework agent from an `AIProjectClient` with a specified model and instructions
- **ASP.NET Core Integration**: Uses ASP.NET Core's native async support for streaming responses
- **Instructions**: The agent is created with default instructions, which can be overridden by client messages
- **Configuration**: `AIProjectClient` with `DefaultAzureCredential` provides secure authentication
Set the required environment variables:

```shell
export AZURE_OPENAI_ENDPOINT="https://your-resource.openai.azure.com/"
export AZURE_OPENAI_DEPLOYMENT_NAME="gpt-4o-mini"
```

Run the server:

```shell
dotnet run --urls http://localhost:8888
```

The server will start listening on `http://localhost:8888`.
> [!NOTE]
> Keep this server running while you set up and run the client in Step 2. Both the server and client need to run simultaneously for the complete system to work.
The AG-UI client connects to the remote server and displays streaming responses.
> [!IMPORTANT]
> Before running the client, ensure the AG-UI server from Step 1 is running at `http://localhost:8888`.
Install the AG-UI client library:

```shell
dotnet add package Microsoft.Agents.AI.AGUI --prerelease
dotnet add package Microsoft.Agents.AI --prerelease
```

> [!NOTE]
> The `Microsoft.Agents.AI` package provides the `AsAIAgent()` extension method.
Create a file named Program.cs:

```csharp
// Copyright (c) Microsoft. All rights reserved.

using Microsoft.Agents.AI;
using Microsoft.Agents.AI.AGUI;
using Microsoft.Extensions.AI;

string serverUrl = Environment.GetEnvironmentVariable("AGUI_SERVER_URL") ?? "http://localhost:8888";
Console.WriteLine($"Connecting to AG-UI server at: {serverUrl}\n");

// Create the AG-UI client agent
using HttpClient httpClient = new()
{
    Timeout = TimeSpan.FromSeconds(60)
};
AGUIChatClient chatClient = new(httpClient, serverUrl);
AIAgent agent = chatClient.AsAIAgent(
    name: "agui-client",
    description: "AG-UI Client Agent");

AgentSession session = await agent.CreateSessionAsync();
List<ChatMessage> messages =
[
    new(ChatRole.System, "You are a helpful assistant.")
];

try
{
    while (true)
    {
        // Get user input
        Console.Write("\nUser (:q or quit to exit): ");
        string? message = Console.ReadLine();
        if (string.IsNullOrWhiteSpace(message))
        {
            Console.WriteLine("Request cannot be empty.");
            continue;
        }

        if (message is ":q" or "quit")
        {
            break;
        }

        messages.Add(new ChatMessage(ChatRole.User, message));

        // Stream the response
        bool isFirstUpdate = true;
        string? threadId = null;
        await foreach (AgentResponseUpdate update in agent.RunStreamingAsync(messages, session))
        {
            ChatResponseUpdate chatUpdate = update.AsChatResponseUpdate();

            // First update indicates run started
            if (isFirstUpdate)
            {
                threadId = chatUpdate.ConversationId;
                Console.ForegroundColor = ConsoleColor.Yellow;
                Console.WriteLine($"\n[Run Started - Thread: {chatUpdate.ConversationId}, Run: {chatUpdate.ResponseId}]");
                Console.ResetColor();
                isFirstUpdate = false;
            }

            // Display streaming text content
            foreach (AIContent content in update.Contents)
            {
                if (content is TextContent textContent)
                {
                    Console.ForegroundColor = ConsoleColor.Cyan;
                    Console.Write(textContent.Text);
                    Console.ResetColor();
                }
                else if (content is ErrorContent errorContent)
                {
                    Console.ForegroundColor = ConsoleColor.Red;
                    Console.WriteLine($"\n[Error: {errorContent.Message}]");
                    Console.ResetColor();
                }
            }
        }

        Console.ForegroundColor = ConsoleColor.Green;
        Console.WriteLine($"\n[Run Finished - Thread: {threadId}]");
        Console.ResetColor();
    }
}
catch (Exception ex)
{
    Console.WriteLine($"\nAn error occurred: {ex.Message}");
}
```

- **Server-Sent Events (SSE)**: The protocol uses SSE for streaming responses
- **AGUIChatClient**: Client class that connects to AG-UI servers and implements `IChatClient`
- **AsAIAgent**: Extension method on `AGUIChatClient` to create an agent from the client
- **RunStreamingAsync**: Streams responses as `AgentResponseUpdate` objects
- **AsChatResponseUpdate**: Extension method to access chat-specific properties like `ConversationId` and `ResponseId`
- **Session Management**: The `AgentSession` maintains conversation context across requests
- **Content Types**: Responses include `TextContent` for messages and `ErrorContent` for errors
Optionally set a custom server URL:

```shell
export AGUI_SERVER_URL="http://localhost:8888"
```

Run the client in a separate terminal (ensure the server from Step 1 is running):

```shell
dotnet run
```

With both the server and client running, you can now test the complete system.
```console
$ dotnet run
Connecting to AG-UI server at: http://localhost:8888

User (:q or quit to exit): What is 2 + 2?

[Run Started - Thread: thread_abc123, Run: run_xyz789]
2 + 2 equals 4.
[Run Finished - Thread: thread_abc123]

User (:q or quit to exit): Tell me a fun fact about space

[Run Started - Thread: thread_abc123, Run: run_def456]
Here's a fun fact: A day on Venus is longer than its year! Venus takes
about 243 Earth days to rotate once on its axis, but only about 225 Earth
days to orbit the Sun.
[Run Finished - Thread: thread_abc123]

User (:q or quit to exit): :q
```
The client displays different content types with distinct colors:
- Yellow: Run started notifications
- Cyan: Agent text responses (streamed in real-time)
- Green: Run completion notifications
- Red: Error messages
On the server:

- Client sends HTTP POST request with messages
- ASP.NET Core endpoint receives the request via `MapAGUI`
- Agent processes the messages using Agent Framework
- Responses are converted to AG-UI events
- Events are streamed back as Server-Sent Events (SSE)
- Connection closes when the run completes

On the client:

- `AGUIChatClient` sends HTTP POST request to server endpoint
- Server responds with SSE stream
- Client parses incoming events into `AgentResponseUpdate` objects
- Each update is displayed based on its content type
- `ConversationId` is captured for conversation continuity
- Stream completes when the run finishes
The AG-UI protocol uses:
- HTTP POST for sending requests
- Server-Sent Events (SSE) for streaming responses
- JSON for event serialization
- Thread IDs (as `ConversationId`) for maintaining conversation context
- Run IDs (as `ResponseId`) for tracking individual executions
Now that you understand the basics of AG-UI, you can:
- Add Backend Tools: Create custom function tools for your domain
::: zone-end
::: zone pivot="programming-language-python"
Before you begin, ensure you have the following:
- Python 3.10 or later
- Azure OpenAI service endpoint and deployment configured
- Azure CLI installed and authenticated
- The `Cognitive Services OpenAI Contributor` role assigned for the Azure OpenAI resource
> [!NOTE]
> These samples use Azure OpenAI models. For more information, see how to deploy Azure OpenAI models with Foundry.
> [!NOTE]
> These samples use `DefaultAzureCredential` for authentication. Make sure you're authenticated with Azure (for example, via `az login`). For more information, see the Azure Identity documentation.
> [!WARNING]
> The AG-UI protocol is still under development and subject to change. We will keep these samples updated as the protocol evolves.
The AG-UI server hosts your AI agent and exposes it via HTTP endpoints using FastAPI.
Install the necessary packages for the server:
```shell
pip install agent-framework-ag-ui --pre
```

Or using uv:

```shell
uv pip install agent-framework-ag-ui --prerelease=allow
```

This will automatically install `agent-framework-core`, `fastapi`, and `uvicorn` as dependencies.
Create a file named server.py:
"""AG-UI server example."""
import os
from agent_framework import Agent
from agent_framework.openai import OpenAIChatCompletionClient
from agent_framework_ag_ui import add_agent_framework_fastapi_endpoint
from azure.identity import AzureCliCredential
from fastapi import FastAPI
# Read required configuration
endpoint = os.environ.get("AZURE_OPENAI_ENDPOINT")
deployment_name = os.environ.get("AZURE_OPENAI_CHAT_COMPLETION_MODEL")
if not endpoint:
raise ValueError("AZURE_OPENAI_ENDPOINT environment variable is required")
if not deployment_name:
raise ValueError("AZURE_OPENAI_CHAT_COMPLETION_MODEL environment variable is required")
chat_client = OpenAIChatCompletionClient(
model=deployment_name,
azure_endpoint=endpoint,
api_version=os.getenv("AZURE_OPENAI_API_VERSION"),
credential=AzureCliCredential(),
)
# Create the AI agent
agent = Agent(
name="AGUIAssistant",
instructions="You are a helpful assistant.",
client=chat_client,
)
# Create FastAPI app
app = FastAPI(title="AG-UI Server")
# Register the AG-UI endpoint
add_agent_framework_fastapi_endpoint(app, agent, "/")
if __name__ == "__main__":
import uvicorn
uvicorn.run(app, host="127.0.0.1", port=8888)add_agent_framework_fastapi_endpoint: Registers the AG-UI endpoint with automatic request/response handling and SSE streamingAgent: The Agent Framework agent that will handle incoming requests- FastAPI Integration: Uses FastAPI's native async support for streaming responses
- Instructions: The agent is created with default instructions, which can be overridden by client messages
- Configuration:
OpenAIChatCompletionClientaccepts explicit Azure routing inputs such asmodel,azure_endpoint,api_version, andcredential, and can also read from environment variables
Set the required environment variables:
```shell
export AZURE_OPENAI_ENDPOINT="https://your-resource.openai.azure.com/"
export AZURE_OPENAI_CHAT_COMPLETION_MODEL="gpt-4o-mini"
```

Run the server:

```shell
python server.py
```

Or using uvicorn directly:

```shell
uvicorn server:app --host 127.0.0.1 --port 8888
```

The server will start listening on `http://127.0.0.1:8888`.
The AG-UI client connects to the remote server and displays streaming responses.
The AG-UI package is already installed, which includes the `AGUIChatClient`:

```shell
# Already installed with agent-framework-ag-ui
pip install agent-framework-ag-ui --pre
```

Create a file named client.py:
"""AG-UI client example."""
import asyncio
import os
from agent_framework import Agent
from agent_framework_ag_ui import AGUIChatClient
async def main():
"""Main client loop."""
# Get server URL from environment or use default
server_url = os.environ.get("AGUI_SERVER_URL", "http://127.0.0.1:8888/")
print(f"Connecting to AG-UI server at: {server_url}\n")
# Create AG-UI chat client
chat_client = AGUIChatClient(server_url=server_url)
# Create agent with the chat client
agent = Agent(
name="ClientAgent",
client=chat_client,
instructions="You are a helpful assistant.",
)
# Get a thread for conversation continuity
thread = agent.create_session()
try:
while True:
# Get user input
message = input("\nUser (:q or quit to exit): ")
if not message.strip():
print("Request cannot be empty.")
continue
if message.lower() in (":q", "quit"):
break
# Stream the agent response
print("\nAssistant: ", end="", flush=True)
async for update in agent.run(message, session=thread, stream=True):
# Print text content as it streams
if update.text:
print(f"\033[96m{update.text}\033[0m", end="", flush=True)
print("\n")
except KeyboardInterrupt:
print("\n\nExiting...")
except Exception as e:
print(f"\n\033[91mAn error occurred: {e}\033[0m")
if __name__ == "__main__":
asyncio.run(main())- Server-Sent Events (SSE): The protocol uses SSE format (
data: {json}\n\n) - Event Types: Different events provide metadata and content (UPPERCASE with underscores):
RUN_STARTED: Agent has started processingTEXT_MESSAGE_START: Start of a text message from the agentTEXT_MESSAGE_CONTENT: Incremental text streamed from the agent (withdeltafield)TEXT_MESSAGE_END: End of a text messageRUN_FINISHED: Successful completionRUN_ERROR: Error information
- Field Naming: Event fields use camelCase (e.g.,
threadId,runId,messageId) - Thread Management: The
threadIdmaintains conversation context across requests - Client-Side Instructions: System messages are sent from the client
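The SSE framing described above can be sketched with a small parser. This helper is illustrative only, not part of the AG-UI client library; it assumes each event arrives as a single `data:` line containing JSON, as in the examples that follow.

```python
import json


def parse_sse_events(raw: str) -> list[dict]:
    """Parse a raw SSE body into a list of JSON event dicts.

    Assumes AG-UI framing: each event is one `data: {json}` line
    followed by a blank line.
    """
    events = []
    for line in raw.splitlines():
        line = line.strip()
        if line.startswith("data:"):
            events.append(json.loads(line[len("data:"):].strip()))
    return events


stream = (
    'data: {"type":"RUN_STARTED","threadId":"t1","runId":"r1"}\n\n'
    'data: {"type":"TEXT_MESSAGE_CONTENT","messageId":"m1","delta":"Hi"}\n\n'
    'data: {"type":"RUN_FINISHED","threadId":"t1","runId":"r1"}\n\n'
)
events = parse_sse_events(stream)
print([e["type"] for e in events])
# ['RUN_STARTED', 'TEXT_MESSAGE_CONTENT', 'RUN_FINISHED']
```

In practice `AGUIChatClient` does this parsing for you; the sketch just shows how little machinery the wire format needs.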
Optionally set a custom server URL:
```shell
export AGUI_SERVER_URL="http://127.0.0.1:8888/"
```

Run the client (in a separate terminal):

```shell
python client.py
```

With both the server and client running, you can now test the complete system.
```console
$ python client.py
Connecting to AG-UI server at: http://127.0.0.1:8888/

User (:q or quit to exit): What is 2 + 2?

[Run Started - Thread: abc123, Run: xyz789]
2 + 2 equals 4.
[Run Finished - Thread: abc123, Run: xyz789]

User (:q or quit to exit): Tell me a fun fact about space

[Run Started - Thread: abc123, Run: def456]
Here's a fun fact: A day on Venus is longer than its year! Venus takes
about 243 Earth days to rotate once on its axis, but only about 225 Earth
days to orbit the Sun.
[Run Finished - Thread: abc123, Run: def456]

User (:q or quit to exit): :q
```
The client displays different content types with distinct colors:
- Yellow: Run started notifications
- Cyan: Agent text responses (streamed in real-time)
- Green: Run completion notifications
- Red: Error messages
Before running the client, you can test the server manually using curl:
```shell
curl -N http://127.0.0.1:8888/ \
  -H "Content-Type: application/json" \
  -H "Accept: text/event-stream" \
  -d '{
    "messages": [
      {"role": "user", "content": "What is 2 + 2?"}
    ]
  }'
```

You should see Server-Sent Events streaming back:
data: {"type":"RUN_STARTED","threadId":"...","runId":"..."}
data: {"type":"TEXT_MESSAGE_START","messageId":"...","role":"assistant"}
data: {"type":"TEXT_MESSAGE_CONTENT","messageId":"...","delta":"The"}
data: {"type":"TEXT_MESSAGE_CONTENT","messageId":"...","delta":" answer"}
...
data: {"type":"TEXT_MESSAGE_END","messageId":"..."}
data: {"type":"RUN_FINISHED","threadId":"...","runId":"..."}
On the server:

- Client sends HTTP POST request with messages
- FastAPI endpoint receives the request
- `AgentFrameworkAgent` wrapper orchestrates the execution
- Agent processes the messages using Agent Framework
- `AgentFrameworkEventBridge` converts agent updates to AG-UI events
- Responses are streamed back as Server-Sent Events (SSE)
- Connection closes when the run completes

On the client:

- Client sends HTTP POST request to server endpoint
- Server responds with SSE stream
- Client parses incoming `data:` lines as JSON events
- Each event is displayed based on its type
- `threadId` is captured for conversation continuity
- Stream completes when the `RUN_FINISHED` event arrives
The AG-UI protocol uses:
- HTTP POST for sending requests
- Server-Sent Events (SSE) for streaming responses
- JSON for event serialization
- Thread IDs for maintaining conversation context
- Run IDs for tracking individual executions
- Event type naming: UPPERCASE with underscores (e.g., `RUN_STARTED`, `TEXT_MESSAGE_CONTENT`)
- Field naming: camelCase (e.g., `threadId`, `runId`, `messageId`)
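Because the protocol uses camelCase while Python code conventionally uses snake_case, bridging code often needs a field-name mapping. A small illustrative helper (not part of the AG-UI libraries):

```python
import re


def camel_to_snake(name: str) -> str:
    """Convert a camelCase protocol field name to snake_case."""
    return re.sub(r"(?<!^)(?=[A-Z])", "_", name).lower()


event = {"type": "RUN_STARTED", "threadId": "t1", "runId": "r1"}
converted = {camel_to_snake(k): v for k, v in event.items()}
print(converted)  # {'type': 'RUN_STARTED', 'thread_id': 't1', 'run_id': 'r1'}
```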
To serve web clients from another origin, add CORS middleware before registering the endpoint:

```python
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware

app = FastAPI()

# Add CORS for web clients
app.add_middleware(
    CORSMiddleware,
    allow_origins=["*"],
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)

add_agent_framework_fastapi_endpoint(app, agent, "/agent")
```

You can also host multiple agents in one app by registering each under its own path:

```python
app = FastAPI()

weather_agent = Agent(name="weather", ...)
finance_agent = Agent(name="finance", ...)

add_agent_framework_fastapi_endpoint(app, weather_agent, "/weather")
add_agent_framework_fastapi_endpoint(app, finance_agent, "/finance")
```

To handle errors, watch for `RUN_ERROR` events in the stream and catch HTTP failures:

```python
import httpx

try:
    async for event in client.send_message(message):
        if event.get("type") == "RUN_ERROR":
            error_msg = event.get("message", "Unknown error")
            print(f"Error: {error_msg}")
            # Handle error appropriately
except httpx.HTTPError as e:
    print(f"HTTP error: {e}")
except Exception as e:
    print(f"Unexpected error: {e}")
```

Ensure the server is running before starting the client:
```shell
# Terminal 1
python server.py

# Terminal 2 (after server starts)
python client.py
```

Make sure you're authenticated with Azure:

```shell
az login
```

Verify you have the correct role assignment on the Azure OpenAI resource.

Check that your client timeout is sufficient:

```python
httpx.AsyncClient(timeout=60.0)  # 60 seconds should be enough
```

For long-running agents, increase the timeout accordingly.
The client automatically manages thread continuity. If context is lost:
- Check that `threadId` is being captured from `RUN_STARTED` events
- Ensure the same client instance is used across messages
- Verify the server is receiving the `thread_id` in subsequent requests
Now that you understand the basics of AG-UI, you can:
- Add Backend Tools: Create custom function tools for your domain
::: zone-end