April 21, 2025

The Model Context Protocol: Unifying AI Integrations for Better Software Development

Introduction

The Model Context Protocol (MCP), announced by Anthropic on November 24, 2024, is an open standard designed to standardize how AI applications interact with external systems and data sources. MCP aims to break down information silos and unlock legacy systems, allowing AI models to access real-time, contextual data from various sources and thereby enhancing their performance and relevance. As AI assistants gain mainstream adoption, the industry has seen rapid advances in model capabilities, but even the most sophisticated models are constrained by their isolation from data. MCP addresses this challenge by providing a universal protocol for connecting AI systems with data sources, replacing fragmented, one-off integrations with a single, standardized approach.

This blog post explores MCP in depth, focusing on four main topics: what MCP is, how to build with it, its role in agent systems, and what the future holds for this groundbreaking protocol. The content is primarily based on a detailed presentation by Anthropic, supplemented with insights from recent articles and official documentation.

Table of Contents

  1. What is MCP?
  2. Build With MCP
  3. MCP & Agents
  4. What’s Next for MCP?

1. What is MCP?

MCP, or Model Context Protocol, is an open protocol developed by Anthropic to standardize and facilitate the interaction between AI applications and external systems, tools, and data sources. It aims to provide a seamless way for AI models to access and utilize context from various sources, enhancing their capabilities and personalization. Often compared to a USB-C port for AI applications, MCP acts as a universal adapter that eliminates the need for custom code for each connection, making it easier for developers to integrate AI with diverse systems.

The protocol is inspired by existing standards: web APIs, which standardized how applications talk to backends, and the Language Server Protocol (LSP), which standardized how IDEs interact with language-specific tooling. MCP introduces three primary interfaces to achieve this standardization:

  • Tools: Functionalities exposed by the server that the AI model can invoke when needed. These can include reading data, writing data, updating databases, or performing other actions on external systems. Tools are model-controlled, meaning the AI model decides when and how to use them based on the task at hand.
  • Resources: Data provided by the server to the application. These can be static (e.g., a fixed text file) or dynamic (e.g., a JSON file updated with user actions). Resources are application-controlled, meaning the application decides how to use them, such as attaching them to a chat interface or incorporating them into the model’s context.
  • Prompts: Predefined templates that users can invoke to interact with the server in specific ways. These prompts are sent to the AI model to generate responses or perform tasks based on user input. Prompts are user-controlled, offering a way for users to directly influence the interaction.
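Under the hood, MCP messages are JSON-RPC 2.0, and each of the three interfaces maps to its own request method family. Below is a minimal sketch of the wire-level shapes; the method names follow the MCP specification, while the specific tool, resource, and prompt names are hypothetical:

```python
import json

# MCP messages are JSON-RPC 2.0; each primitive has its own method family.
tool_call = {
    "jsonrpc": "2.0", "id": 1,
    "method": "tools/call",  # model-controlled: the LLM decides to invoke this
    "params": {"name": "list_issues", "arguments": {"repo": "octocat/hello-world"}},
}
resource_read = {
    "jsonrpc": "2.0", "id": 2,
    "method": "resources/read",  # application-controlled: the app attaches data
    "params": {"uri": "file:///notes/todo.txt"},
}
prompt_get = {
    "jsonrpc": "2.0", "id": 3,
    "method": "prompts/get",  # user-controlled: triggered e.g. by a slash command
    "params": {"name": "summarize_pr", "arguments": {"pr": "octocat/hello-world#1"}},
}

for msg in (tool_call, resource_read, prompt_get):
    print(json.dumps(msg))
```

The comments mark which party controls each primitive, mirroring the distinctions in the list above.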

By standardizing these interactions, MCP allows for greater flexibility, reusability, and efficiency in building AI applications that require access to external data and tools. It addresses the “NxM problem,” where N AI applications each need to connect to M tools: instead of building N×M bespoke integrations, developers build N MCP clients and M MCP servers that all interoperate, eliminating redundant development effort. This standardization fosters a collaborative ecosystem where developers can build and share connectors, accelerating the development of intelligent, context-aware AI applications.

2. Build With MCP

Building with MCP involves creating both clients and servers that adhere to the protocol’s standards. On the client side, developers integrate MCP to enable their AI applications to interact with external systems by invoking tools, querying resources, and using prompts. On the server side, developers expose these capabilities—tools, resources, and prompts—in a way that is consumable by any MCP-compatible client. This separation allows for a modular ecosystem where different components can be developed independently yet work together seamlessly.

Core Components

  • Tools: These are functionalities that the AI model can call to perform actions or retrieve data from external systems. For example, a tool might allow an AI application to fetch issues from a GitHub repository or add tasks to an Asana project. The server exposes these tools, and the model within the client application decides when to invoke them based on the task. Tools are versatile, supporting actions like reading data, writing to databases, or interacting with local file systems.
  • Resources: These are data structures provided by the server that the application can use as needed. Resources can be static (e.g., a predefined text file) or dynamic (e.g., a JSON file that incorporates user-specific data). In applications like Claude for Desktop, resources manifest as attachments that users can select via the UI or that the model can automatically attach to a task based on relevance.
  • Prompts: These are predefined interaction templates that users can trigger to engage with the server in specific ways. For instance, in the IDE Zed, users can use slash commands (e.g., /gpr to summarize a pull request) to invoke predefined prompts that are sent to the AI model. Prompts allow users to initiate complex interactions with minimal input, as the server interpolates the prompt with relevant context.
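As a dependency-free sketch of the server side, a registry can map each primitive to its handlers and dispatch incoming tool calls. The real SDKs provide decorator-based APIs for this; every name below is illustrative:

```python
# Toy server-side registry: one table per MCP primitive.
registry = {"tools": {}, "resources": {}, "prompts": {}}

def tool(fn):
    """Register a function as an invokable tool (decorator)."""
    registry["tools"][fn.__name__] = fn
    return fn

@tool
def fetch_issues(repo: str) -> list:
    """Pretend to fetch open issues for a repository."""
    return [f"{repo}#1: fix login bug", f"{repo}#2: add dark mode"]

# Resources are data the application pulls in; prompts are user-invoked templates.
registry["resources"]["notes://today"] = lambda: "data the app can attach to context"
registry["prompts"]["summarize_pr"] = "Summarize the pull request at {url} in three bullets."

def handle_tool_call(name: str, arguments: dict):
    """Dispatch a tools/call request to the registered handler."""
    return registry["tools"][name](**arguments)

print(handle_tool_call("fetch_issues", {"repo": "octocat/hello-world"}))
```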

Practical Examples

The power of MCP lies in its ability to allow any client to connect to any server without additional customization. Several applications demonstrate this capability:

  • Claude for Desktop: An MCP client that integrates with servers like GitHub and Asana. For example, a user can provide Claude for Desktop with a GitHub repository URL and ask it to triage issues. The model autonomously invokes the “list issues” tool, summarizes the issues, and prioritizes them based on user context. Similarly, it can interact with an Asana server to add tasks to a project by invoking tools like “list workspaces” and “search projects.”
  • Windsurf and Goose: Other MCP clients that leverage the protocol to enhance their functionality. Windsurf has its own UI for interacting with MCP tools, while Goose refers to them as “extensions,” showing how different applications can integrate MCP in their own way while still benefiting from the standardized protocol.
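In practice, Claude for Desktop discovers local servers through a JSON configuration file; each entry tells the client how to launch a server as a subprocess. A typical entry for a GitHub server might look like the sketch below, where the package name and environment variable are assumptions based on the community reference servers:

```json
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": { "GITHUB_PERSONAL_ACCESS_TOKEN": "<your-token>" }
    }
  }
}
```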

Deployment with Docker

Containerization technologies like Docker can simplify the deployment and distribution of MCP servers. By encapsulating servers into containers, developers can ensure consistency across different environments, making it easier to share and utilize servers across teams and systems. For example, Docker’s integration with MCP allows developers to package servers with all necessary dependencies, streamlining the setup process (Docker Blog).
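A containerized MCP server can be as small as a base image plus the server's dependencies. The sketch below assumes a Python server with a server.py entry point communicating over standard I/O; file names and versions are illustrative:

```dockerfile
FROM python:3.12-slim
WORKDIR /app

# Bake dependencies into the image so every environment runs the same stack.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY server.py .

# Stdio-based MCP servers speak JSON-RPC over stdin/stdout,
# so the container just runs the process in the foreground.
CMD ["python", "server.py"]
```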

Benefits for Developers

MCP’s standardized approach reduces the complexity of integrating AI applications with external systems. Developers can build a server once and have it work with any MCP-compatible client, or create a client that can connect to any MCP server without additional coding. This modularity fosters a collaborative ecosystem where community-built servers, such as those for GitHub or Asana, can be easily adopted, often requiring just a few hundred lines of code to implement.

3. MCP & Agents

MCP is poised to become a foundational protocol for building AI agent systems, which are autonomous AI systems designed to perform tasks like researching, coding, or managing workflows. Agents rely heavily on accessing external data and tools to function effectively, and MCP provides a standardized way to facilitate these interactions, making it easier to develop and scale agent-based applications.

Agents as Augmented LLMs

At its core, an agent can be thought of as an augmented large language model (LLM) that runs in a loop, using tools and data to achieve specific goals. MCP serves as the “bottom layer” for these agents, providing a unified interface to invoke tools, query resources, and utilize prompts from external servers. This abstraction allows agent developers to focus on the logic and behavior of the agent without needing to implement custom integrations for each tool or data source.
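The "augmented LLM running in a loop" idea can be reduced to a few lines. In this toy sketch the model is a deterministic stub and the tool table is a plain dict; in a real agent, the model would be an LLM and the tools would be discovered from MCP servers:

```python
def fake_model(observations: list) -> dict:
    """Stand-in for an LLM: inspects history and picks the next action."""
    if not observations:
        return {"action": "call_tool", "tool": "web_search",
                "args": {"q": "quantum computing"}}
    return {"action": "finish",
            "answer": f"Report based on {len(observations)} observation(s)."}

# In a real agent these would come from MCP servers via tools/list.
tools = {"web_search": lambda q: f"results for '{q}'"}

def run_agent(max_steps: int = 5) -> str:
    observations = []
    for _ in range(max_steps):  # the agent loop
        decision = fake_model(observations)
        if decision["action"] == "finish":
            return decision["answer"]
        observations.append(tools[decision["tool"]](**decision["args"]))
    return "step budget exhausted"

print(run_agent())
```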

Example: MCP Agent Framework

The MCP Agent framework, developed by LastMile AI, illustrates how MCP enhances agent capabilities. In a demo, an agent was tasked with researching quantum computing’s impact on cybersecurity. The agent was defined with access to MCP servers for web search (e.g., Brave), data fetching, and file system management. It autonomously formed a plan, invoked the search tool to gather information, verified facts using a fact-checking agent, and synthesized the data into a report using a writer agent. This process was streamlined by MCP’s standardized interfaces, allowing the agent to focus on the task rather than the mechanics of server integration.

Composability

MCP supports composability, meaning that agents can be built as compositions of multiple client-server interactions. This enables the creation of complex, hierarchical agent systems where one agent can delegate tasks to other specialized agents or servers. For example, a user might interact with a general-purpose coding agent via Claude for Desktop. If tasked with checking Grafana logs, the agent can dynamically discover a verified Grafana server via the MCP registry, invoke its tools, and complete the task without prior programming for that specific system.

Sampling and Intelligence

Another powerful feature is sampling, which allows servers to request LLM inference calls from the client. This enables servers to leverage the client’s AI model for intelligent decision-making, such as formulating questions to gather more user input. This federated approach ensures that clients maintain control over privacy and cost parameters while allowing servers to incorporate intelligence into their interactions.
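At the wire level, sampling reverses the usual direction of requests: the server sends the client a sampling/createMessage request (the method name comes from the MCP specification), and the client, which owns the model, decides whether and how to fulfill it. A minimal sketch of such a request:

```python
import json

# Server-to-client request: "please run one LLM inference on my behalf."
sampling_request = {
    "jsonrpc": "2.0",
    "id": 7,
    "method": "sampling/createMessage",
    "params": {
        "messages": [
            {"role": "user",
             "content": {"type": "text",
                         "text": "What time window should I search logs for?"}}
        ],
        "maxTokens": 100,  # the client may clamp this to enforce cost limits
    },
}
print(json.dumps(sampling_request, indent=2))
```

Because the client fulfills the call, it can apply its own privacy filtering, model selection, and spending caps before returning a completion.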

Benefits for Agent Development

By providing a standardized interface, MCP enables agents to dynamically discover and utilize various capabilities, enhancing their flexibility and effectiveness. Developers can focus on defining the agent’s core logic and tasks, while MCP handles the integration with external systems. This is particularly valuable for building self-evolving agents that can adapt to new tools and data sources as they become available.

4. What’s Next for MCP?

The future of MCP is promising, with several developments on the horizon that aim to enhance its functionality and adoption. As of this writing, MCP is still in its early stages, but its roadmap includes features that could solidify its position as a foundational protocol for AI development.

Remote Servers and OAuth 2.0

One of the most anticipated features is support for remote servers, which allows MCP servers to be hosted remotely and accessed via URLs. This is facilitated by Server-Sent Events (SSE) for real-time communication and OAuth 2.0 for secure authentication and authorization. For example, a Slack MCP server can orchestrate authentication by guiding the user through an OAuth flow, with the server holding the token and providing the client with a session token for future interactions. This advancement simplifies deployment and accessibility, enabling servers to be used without local setup.
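The flow described above can be caricatured in a few lines of Python: the server keeps the provider's OAuth token private and hands the client only an opaque session token. All names and the token format here are illustrative:

```python
import secrets

class RemoteServer:
    """Toy remote MCP server that brokers OAuth on the client's behalf."""

    def __init__(self):
        self._oauth_tokens = {}  # provider tokens; these never leave the server
        self._sessions = {}      # opaque session token -> user

    def complete_oauth(self, user: str, provider_token: str) -> str:
        """Finish the OAuth flow and mint a session token for the client."""
        self._oauth_tokens[user] = provider_token
        session = secrets.token_urlsafe(16)
        self._sessions[session] = user
        return session

    def call_tool(self, session: str, tool: str) -> str:
        """Use the stored credential; raises KeyError for unknown sessions."""
        user = self._sessions[session]
        return f"ran {tool} for {user} using their stored credential"

server = RemoteServer()
session = server.complete_oauth("alice", "xoxb-slack-token")
print(server.call_tool(session, "list_channels"))
```

The point of the design is visible in the data: the client only ever holds `session`, never the Slack token itself.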

MCP Registry

The introduction of an official MCP registry is a critical development to address the current fragmentation in the MCP ecosystem. With over 1,100 community-built servers and various ecosystems emerging, discoverability has been a challenge. The registry will provide a centralized metadata service for discovering, verifying, and managing MCP servers. It will allow developers to find trusted servers, such as those officially verified by companies like Shopify or Cloudflare, and integrate them into their applications. The registry will also support versioning, capturing changes in server capabilities to ensure compatibility.

Additional Enhancements

MCP is exploring several other enhancements to improve its functionality:

  • Stateful vs. Stateless Connections: Currently, MCP servers maintain stateful connections with clients. Future developments aim to support short-lived, stateless connections, allowing clients to disconnect and resume interactions later without re-providing data.
  • Streaming: Supporting real-time data streaming from servers to clients is essential for applications requiring continuous updates, such as live data feeds or interactive workflows.
  • Namespacing: To manage conflicts when multiple servers expose tools with similar names, MCP plans to introduce namespacing, ensuring clarity and avoiding errors. This could also enable logical groupings of tools, such as “Finance tools” for specific services.
  • Proactive Server Behavior: Enabling servers to initiate interactions with clients based on events or deterministic triggers will make systems more responsive and autonomous. For example, a server could notify a client about a new resource or request additional user input without being prompted.
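To picture namespacing concretely: if each tool name is qualified by its server, two servers that both expose a "search" tool no longer collide. The prefixing scheme below is an illustration, not the syntax the protocol will adopt:

```python
def merge_toolsets(servers: dict) -> dict:
    """Qualify every tool with its server name so names cannot collide."""
    merged = {}
    for server, tools in servers.items():
        for tool in tools:
            merged[f"{server}/{tool}"] = server  # qualified name -> origin
    return merged

merged = merge_toolsets({
    "github": ["search", "list_issues"],
    "asana": ["search", "create_task"],
})
print(sorted(merged))
```

Both "search" tools survive the merge because each carries its server prefix; the same idea extends to logical groupings like a "finance/" namespace.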

Industry Adoption and Collaboration

The success of MCP will largely depend on widespread industry adoption and collaboration among AI organizations. As noted in a Forbes article, MCP’s potential to become a foundational protocol hinges on its acceptance by major industry players, similar to how protocols like SOAP and WSDL became standards for web services. Anthropic is engaging with other foundational model providers and companies to promote MCP, fostering a cohesive and efficient AI ecosystem.

Community and Resources

For developers interested in exploring MCP, Anthropic provides official documentation and SDKs, as well as a community-driven ecosystem with numerous open-source servers and tools. The MCP website offers resources for contributing, reporting bugs, and engaging in discussions. Community enthusiasm, as seen in an X post, highlights MCP’s potential to revolutionize AI integration by streamlining data access and boosting performance.

Challenges and Considerations

While MCP shows significant promise, its full impact is not yet realized, as best practices for debugging, governance, and security are still emerging. For instance, ensuring trust in servers and managing permissions effectively will be critical as the ecosystem grows. The registry and verification mechanisms will help address these concerns, but developers must remain judicious about which servers they connect to, similar to how they approach web applications today.

Conclusion

The Model Context Protocol (MCP) represents a significant step forward in standardizing how AI applications interact with external systems and data sources. By providing a unified interface through tools, resources, and prompts, MCP enables developers to build more powerful, flexible, and context-rich AI systems. Its integration with agent frameworks enhances its utility, allowing for the creation of autonomous agents that can dynamically access and utilize external context.

Looking ahead, with features like remote servers, a centralized registry, and advanced interaction patterns, MCP is set to become an indispensable tool in the AI developer’s toolkit. Its success will depend on widespread adoption and collaboration across the industry, but as of this writing, the protocol is already showing signs of transforming AI development. Developers are encouraged to explore MCP through Anthropic’s official documentation and contribute to its growing ecosystem, as the possibilities for innovation are vast and the potential impact transformative.


MCP Frequently Asked Questions
Why are resources and prompts separate from tools in MCP? Can’t tools handle all context needs?

MCP distinguishes tools, resources, and prompts to enable nuanced control across different parts of the system. Tools are controlled by the AI model, which decides when to use them based on the task. Resources, managed by the application, provide data like files or JSON structures that can be static or dynamically updated. Prompts, user-initiated, allow predefined interaction templates. This separation ensures that models, applications, and users each have tailored ways to interact with servers, offering richer and more flexible integrations than tools alone could provide.

Is it appropriate to use tools to connect a vector database to an AI model?

It depends on the scenario. Tools are well-suited when the decision to access a vector database is context-dependent, allowing the model to determine when a query is needed or if additional user input is required. If access is predictable and routine, a direct call to the database might suffice without needing a tool, simplifying the integration.

How does MCP manage authentication for secure interactions?

MCP supports secure authentication through OAuth 2.0, particularly for remote servers. This allows servers to manage authentication handshakes with external systems (e.g., Slack), securely holding tokens while providing clients with session tokens for ongoing interactions. This ensures secure, standardized access to external systems without requiring clients to handle complex authentication logic.

How does MCP integrate with existing agent frameworks?

MCP complements agent frameworks by standardizing access to external tools, resources, and prompts. For example, frameworks like LangGraph can use adapters to connect to MCP servers, enabling agents to leverage external systems without altering their core logic. MCP focuses on context delivery, while agent frameworks handle the agent’s decision-making and looping processes, creating a synergistic relationship.

Does MCP replace the need for agent frameworks?

No, MCP does not replace agent frameworks. It provides a standardized layer for accessing external data and tools, while frameworks manage the agent’s internal logic, such as knowledge management and task orchestration. MCP enhances frameworks by simplifying context integration, allowing developers to focus on agent behavior rather than custom integrations.

Can MCP be used with proprietary or sensitive data?

Absolutely. MCP’s open design allows servers to be hosted within secure environments, such as a company’s Virtual Private Cloud (VPC) or on individual devices, making it suitable for proprietary data. This flexibility ensures that sensitive information can be accessed securely by AI applications.

How does MCP separate agent logic from external system capabilities?

MCP enables developers to concentrate on an agent’s core functionality—such as selecting the right model or managing task orchestration—by abstracting the complexity of connecting to external systems. Developers can leverage community-built MCP servers to access data or tools, allowing agents to expand their capabilities without requiring bespoke integrations.

What makes the MCP Agent framework unique?

The MCP Agent framework, developed by LastMile AI, simplifies agent development by providing modular components for building agents as augmented LLMs that operate in a loop. It supports workflows like orchestration, where a primary agent coordinates sub-agents, and integrates seamlessly with MCP servers. Its open-source nature makes it an accessible, flexible option for developers.

How do resources and prompts fit into agent-based workflows?

In agent workflows, resources and prompts enhance user interaction within a UI, such as a chat interface. Resources can display task plans or data as attachments, while prompts enable users to trigger specific actions (e.g., summarizing progress with a command). These elements are particularly useful in interactive settings, though they may not always be central to automated agent loops.

How does MCP support evaluation of tool calls?

MCP streamlines tool call evaluations by providing a consistent server interface, allowing developers to test tools across different evaluation systems. While the evaluation process itself remains similar to traditional methods, MCP simplifies comparing server versions (e.g., 1.0 vs. 1.1) against the same evaluation framework, enhancing efficiency.

Where should logic like retries or authentication reside in MCP systems?

Typically, logic such as retry handling and authentication is best placed on the server side, as servers are closer to the external systems they interact with. This aligns with MCP’s design, where clients may not know servers initially, so servers manage these interactions. However, debates persist about whether clients or servers should handle certain logic, and practices are still evolving.

Is there a limit to how many servers an LLM can interact with via MCP?

There’s no strict limit, with models like Claude handling up to a few hundred tools effectively. To manage larger numbers, strategies like tool-search tools (using retrieval-augmented generation or fuzzy search) or grouping tools hierarchically (e.g., by category like Finance) help prevent context overload. Best practices for scaling are still developing.
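A "tool-search tool" can be as simple as fuzzy matching over tool names, surfacing a handful of candidates to the model instead of injecting hundreds of definitions into context. A stdlib sketch using difflib (the tool names are invented):

```python
from difflib import get_close_matches

# Imagine hundreds of registered tools; only a few names are shown here.
all_tools = ["list_issues", "create_issue", "list_invoices", "send_invoice",
             "search_files", "read_file", "list_channels", "post_message"]

def tool_search(query: str, k: int = 3) -> list:
    """Return up to k tool names closest to the query, keeping context small."""
    return get_close_matches(query, all_tools, n=k, cutoff=0.3)

print(tool_search("invoices"))
```

Only the shortlist (rather than all tool schemas) is then placed in the model's context, which is what prevents overload as the tool count grows.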

Can AI models generate MCP servers automatically?

Yes, for simple servers that wrap existing APIs, tools like Cline can autogenerate MCP servers dynamically (e.g., for GitLab). Complex servers requiring logging or data transformations still need manual development, but autogeneration is effective for straightforward integrations.

Is Anthropic collaborating with service providers for official MCP servers?

Yes, companies like Cloudflare and Stripe have developed official MCP servers, and Anthropic is engaging with other major organizations to create and host servers. These may be publicly accessible or remotely hosted, simplifying integration for developers.

How does MCP handle server versioning to prevent disruptions?

MCP servers, distributed as packages (e.g., npm, pip), include version numbers to support upgrades. While tool or resource changes might affect workflows, adherence to the MCP protocol ensures compatibility. The upcoming registry will track version changes, aiding developers in managing updates.

How will MCP support server discovery and extension distribution?

The forthcoming MCP registry API will provide a centralized, open metadata service for discovering and verifying servers, akin to an artifactory. Companies can host their own registries, and existing marketplaces (e.g., VS Code) can integrate with the registry, enhancing server accessibility.

How does MCP address error handling in complex, multi-layered systems?

Error handling in MCP mirrors that of other hierarchical systems. Each layer (client or server) validates and structures data before passing it to the next, ensuring reliability. MCP’s standardized interfaces simplify these interactions, but error management depends on system design.

Why use MCP servers instead of standard HTTP servers?

MCP servers offer advanced features like resource notifications and server-to-client communication, enabling intelligent, multi-step interactions. Unlike HTTP servers, which focus on stateless data exchange, MCP servers can act as autonomous agents, enhancing AI-driven tasks.

How is control flow managed in multi-layered MCP systems?

Control flow, including rate limiting, is typically handled by the application layer hosting the LLM, which oversees model interactions. Alternatively, servers can manage these aspects if they host their own models, offering flexibility based on system architecture.

How are decisions made in networked MCP systems?

MCP doesn’t dictate decision-making hierarchies; this is left to system designers. The protocol enables networked architectures, allowing developers to define how primary nodes or agents coordinate tasks across servers.

How does MCP ensure observability of server interactions?

Observability isn’t mandated by the protocol, similar to traditional APIs. Servers act as black boxes, and clients may not see all downstream interactions. Developers can use metadata to enhance observability, but it’s up to the system builders to implement these features.

How can MCP servers be made debuggable, especially for complex tasks?

While MCP doesn’t enforce debugging standards, servers should provide metadata and logs to clients for better usability. Tools like the MCP Inspector help developers monitor connections and logs, and community-driven best practices are emerging to improve debuggability.

Are there established guidelines for building and debugging MCP servers?

Best practices for server construction and debugging are still evolving. Anthropic and the community are working toward standardized guidelines, drawing on microservices patterns to integrate intelligence and improve developer experience.

How can clients control server behavior, such as limiting tool actions?

Clients can influence servers through prompts or tool annotations, which allow parameters like limiting the number of actions (e.g., web pages searched). Future protocol updates may standardize annotations for read vs. write actions, giving clients more granular control.

Can MCP tools be used to debug other servers?

Yes, tools like the MCP Inspector enable developers to view logs and verify server connections. Servers can also be built to analyze logs or configure settings for debugging, leveraging MCP’s flexibility to create diagnostic tools.

Who manages governance and security in MCP systems?

Server builders are responsible for governance and security, using OAuth 2.0 to control access to external systems. Servers act as gatekeepers, ensuring clients only access authorized data, aligning with the principle that servers are closest to the end application.

Does MCP’s OAuth support allow for changing permission scopes?

The initial OAuth 2.0 implementation doesn’t support dynamic scope changes, but Anthropic plans to enhance OAuth to allow permission elevation, enabling more flexible access control.

Is it secure for MCP servers to hold OAuth tokens?

Servers holding tokens is a design choice, as they are closer to the external application (e.g., Slack). This allows secure management of authentication handshakes. Clients must carefully select trusted servers, similar to trusting web applications, to mitigate risks.

How does MCP compare to RESTful APIs?

MCP excels in scenarios requiring intelligent data transformations or context-aware interactions for AI models, such as formatting data for LLMs. RESTful APIs are better for simple, stateless data exchanges, while MCP supports complex, stateful interactions.

How does MCP handle regressions when server tools change?

Versioned packages (e.g., npm, pip) help manage regressions, and the MCP registry will track changes in tools or resources. Evaluations for tool calls remain consistent, with MCP simplifying testing by providing a standardized interface for comparing server versions.

Does MCP require Claude, or is it compatible with other models?

MCP is model-agnostic, designed to work with any LLM. While Claude is optimized for tool use and agent tasks, MCP’s benefits stem from its standardized protocol, encouraging adoption by other model providers.

Can MCP servers initiate actions proactively?

MCP supports server-initiated notifications for resource updates. Future enhancements will allow servers to start interactions based on events or deterministic triggers, and composability enables servers to act as clients with their own LLMs for proactive behavior.

What are the guidelines for choosing between standard I/O and SSE transports?

MCP is transport-agnostic, but standard I/O is typically used for local, in-memory communication, while Server-Sent Events (SSE) are preferred for remote servers. Developers can create custom transports, but these are the prevalent patterns.
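Over the standard I/O transport, each JSON-RPC message travels as one newline-terminated line of JSON on the child process's stdin/stdout. Below is a self-contained sketch using an in-memory stream in place of a real pipe, with framing simplified to newline-delimited JSON:

```python
import io
import json

def write_message(stream, msg: dict) -> None:
    """Frame one JSON-RPC message as a single newline-terminated JSON line."""
    stream.write(json.dumps(msg) + "\n")

def read_message(stream) -> dict:
    """Read and parse the next newline-delimited JSON-RPC message."""
    return json.loads(stream.readline())

pipe = io.StringIO()  # stands in for the server process's stdin pipe
write_message(pipe, {"jsonrpc": "2.0", "id": 1, "method": "tools/list"})
pipe.seek(0)
print(read_message(pipe))
```

With SSE, the same JSON-RPC payloads ride over HTTP event streams instead of a local pipe, which is why the choice of transport doesn't change the protocol itself.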

Must the LLM be involved in all MCP client-server interactions?

No, clients can directly call server functions (e.g., list tools, call resources) without involving the LLM, allowing deterministic interactions for specific tasks.

Does MCP support direct server-to-server communication?

Direct server-to-server communication isn’t a core feature, as interactions typically route through the client. However, MCP’s flexibility allows developers to implement such communication, though it’s not yet a first-class capability.