A Framework for Building MCP-Based Agents: Introducing mcp-agent
Summary
mcp-agent is a composable framework designed to make it easy to build effective, robust AI agents using the Model Context Protocol (MCP). It implements a range of agent design patterns, including those from Anthropic's 'Building Effective Agents', and simplifies everything from connecting LLMs to managing complex workflows. Built on Temporal for scalability, it allows agents to be paused, resumed, and recovered.
Key Points
- Uses the Model Context Protocol (MCP) as its core to maximize simplicity and robustness when building agents.
- Advanced agent patterns such as map-reduce, orchestrator, and evaluator-optimizer can be composed and chained together.
- Supports Temporal as a backend runtime, enabling workflow state management (pause, resume, recover) without any API changes.
- The CLI (`uvx mcp-agent`) streamlines project scaffolding and deployment, speeding up development.
Show HN: Mcp-Agent – Build effective agents with Model Context Protocol
Build effective agents with Model Context Protocol using simple, composable patterns.
[Examples | Building Effective Agents | MCP]
mcp-agent is a simple, composable framework to build effective agents using Model Context Protocol.
Note
mcp-agent's vision is that MCP is all you need to build agents, and that simple patterns are more robust than complex architectures for shipping high-quality agents.
mcp-agent gives you the following:
- Full MCP support: It fully implements MCP, and handles the pesky business of managing the lifecycle of MCP server connections so you don't have to.
- Effective agent patterns: It implements every pattern described in Anthropic's Building Effective Agents in a composable way, allowing you to chain these patterns together.
- Durable agents: It works for simple agents and scales to sophisticated workflows built on Temporal so you can pause, resume, and recover without any API changes to your agent.
Altogether, this is the simplest and easiest way to build robust agent applications.
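As an illustration of one of the patterns mentioned above, here is a minimal, framework-agnostic sketch of an evaluator-optimizer loop. Note this is not mcp-agent's API: `call_llm` is a stub standing in for any LLM client, and the `APPROVED` convention is an assumption made for the sketch.

```python
def call_llm(prompt: str) -> str:
    # Stub: in a real agent this would call a model API
    # (e.g. via an attached LLM in mcp-agent).
    return f"response to: {prompt[:40]}"

def evaluator_optimizer(task: str, max_rounds: int = 3) -> str:
    """Draft an answer, have an evaluator critique it, and refine
    the draft until the evaluator accepts or the budget runs out."""
    draft = call_llm(f"Solve this task: {task}")
    for _ in range(max_rounds):
        critique = call_llm(f"Critique this answer to '{task}': {draft}")
        if "APPROVED" in critique:  # evaluator signals acceptance
            break
        draft = call_llm(
            f"Improve the answer using this critique: {critique}\nAnswer: {draft}"
        )
    return draft
```

The value of a framework like mcp-agent is that loops like this come prebuilt and composable, so the optimizer step could itself be a router or orchestrator.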
We welcome all kinds of contributions, feedback and your help in improving this project.
Quick Start Examples
Basic Agent Example:
```python
import asyncio

from mcp_agent.app import MCPApp
from mcp_agent.agents.agent import Agent
from mcp_agent.workflows.llm.augmented_llm_openai import OpenAIAugmentedLLM

app = MCPApp(name="hello_world")

async def main():
    async with app.run():
        agent = Agent(
            name="finder",
            instruction="Use filesystem and fetch to answer questions.",
            server_names=["filesystem", "fetch"],
        )
        async with agent:
            llm = await agent.attach_llm(OpenAIAugmentedLLM)
            answer = await llm.generate_str("Summarize README.md in two sentences.")
            print(answer)

if __name__ == "__main__":
    asyncio.run(main())
```
Add your LLM API key to `mcp_agent.secrets.yaml` or set it in the environment. The Getting Started guide walks through configuration and secrets in detail.
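For reference, a minimal secrets file might look like the following. The exact schema shown here (an `openai` section with an `api_key` field) is an assumption; check the Getting Started guide for the authoritative format.

```yaml
# mcp_agent.secrets.yaml — keep this file out of version control.
openai:
  api_key: "sk-..."   # or set OPENAI_API_KEY in the environment
```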
Key Features & Capabilities
- Connect LLMs to MCP servers in simple, composable patterns like map-reduce, orchestrator, evaluator-optimizer, router, and more.
- Create MCP servers with a FastMCP-compatible API. You can even expose agents as MCP servers.
- Core MCP support: Tools ✅ Resources ✅ Prompts ✅
- Scales to production workloads using Temporal as the agent runtime backend without any API changes.
- Beta: Deploy agents yourself, or use mcp-c for a managed agent runtime. All apps are deployed as MCP servers.
mcp-agent's complete documentation is available at docs.mcp-agent.com, including full SDK guides, CLI reference, and advanced patterns. This readme gives a high-level overview to get you started.
Getting Started with the CLI
The CLI is available via `uvx mcp-agent`.
To get up and running:
- Scaffold a project: `uvx mcp-agent init`
- Deploy: `uvx mcp-agent deploy my-agent`
You can get up and running in 2 minutes by running these commands:
```shell
mkdir hello-mcp-agent && cd hello-mcp-agent
uvx mcp-agent init
uv init
uv add "mcp-agent[openai]"
# Add openai API key to `mcp_agent.secrets.yaml` or set `OPENAI_API_KEY`
uv run main.py
```
We recommend using uv to manage your Python projects (uv init).
Alternatively, install with pip:
```shell
pip install mcp-agent
# Also add optional extras for LLM providers, e.g.:
pip install "mcp-agent[openai,anthropic,google,azure,bedrock]"
```
Advanced Usage Examples
1. Basic Agent Example (Finder):
This example shows a basic "finder" agent that connects to the filesystem and fetch MCP servers to answer questions, as in the Quick Start above.
AI-Generated Content
This content is an automated AI summary, translation, and analysis of the original post from HN AI Engineering. Copyright remains with the original author; please refer to the original for accurate details.