MCP IT definition

Model Context Protocol: an open standard published by Anthropic in 2024 to securely connect AI models to external tools, data, and applications.

MCP (Model Context Protocol) is an open standard published by Anthropic in November 2024 to connect AI models — typically LLMs — to enterprise tools, data, and applications in a unified and secure way. Often described as "the USB-C of LLMs", MCP standardizes how a model discovers and uses external resources, instead of re-implementing a custom integration per model/tool pair.

The protocol was adopted in under a year by the major players: Anthropic (Claude), OpenAI (ChatGPT, Agents SDK), Google DeepMind (Gemini), Microsoft (Copilot Studio), Cursor, Zed, Replit. It is one of the hottest topics in the AI ecosystem in 2025-2026, and a major lever to expose IT-estate context to AI agents without rebuilding integrations per model.

The problem MCP solves

Before MCP, every AI vendor had its own tool format:

  • OpenAI: function calling, proprietary JSON format.
  • Google: Gemini tool use, a different format.
  • Anthropic: Claude tool use, yet another format.
  • Cursor, Continue, Cline: each with its own connectors.

Consequence: every application vendor (Notion, Slack, GitHub, Linear, Kabeen) had to write one integration per AI vendor, and every AI vendor had to maintain one connector per application: an M×N problem. Unsustainable.

MCP introduces a single format: an application vendor writes one MCP server, and any MCP-compatible AI client can connect to it.
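To make the "write one MCP server" idea concrete, here is a sketch of how a server might describe a single tool. The `create_ticket` name and its parameters are invented for illustration; the shape follows the protocol's tool-description format (a name, a description, and a JSON Schema for the arguments):

```python
import json

# Hypothetical tool definition, as an MCP server would advertise it.
# The tool name and parameters are invented for illustration.
create_ticket_tool = {
    "name": "create_ticket",
    "description": "Create a ticket in the issue tracker.",
    "inputSchema": {  # JSON Schema describing the tool's arguments
        "type": "object",
        "properties": {
            "title": {"type": "string"},
            "priority": {"type": "string", "enum": ["low", "medium", "high"]},
        },
        "required": ["title"],
    },
}

# Every MCP-compatible client receives this same self-describing
# definition, so the vendor writes it once — not once per AI model.
print(json.dumps(create_ticket_tool, indent=2))
```

Because the argument schema travels with the tool, the model can fill in parameters without any vendor-specific glue code on the client side.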

MCP architecture

MCP follows a client-server model:

  • The MCP client: embedded in the AI application (Claude Desktop, Cursor, ChatGPT, custom agent).
  • The MCP server: exposed by a vendor or service. It exposes three primitives:
    - Resources: read-only data sources (files, DB rows, APIs).
    - Tools: functions the model can call to act (create a ticket, send an email).
    - Prompts: predefined templates the end user can invoke.
  • Transport: stdio (local process) or HTTP/SSE (network).

The client asks the server "what resources and tools do you have?", then the model decides when to use them during the conversation.
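That discovery step can be sketched as a JSON-RPC 2.0 exchange. `tools/list` is a method the protocol defines; the `send_email` tool in the reply is hypothetical:

```python
# Sketch of MCP discovery: the client asks the server for its tools
# over JSON-RPC 2.0. The "send_email" tool below is hypothetical.
request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# The server's reply lists every tool it exposes, each with a
# machine-readable argument schema.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "send_email",
                "description": "Send an email.",
                "inputSchema": {"type": "object"},
            }
        ]
    },
}

# The model now knows which tools exist and can decide, turn by turn,
# when to call them during the conversation.
tool_names = [tool["name"] for tool in response["result"]["tools"]]
print(tool_names)
```

The same pattern applies to resources and prompts, each with its own listing method.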

Security and permissions

MCP ships native guardrails:

  • Authentication: standard OAuth 2.0 for remote servers.
  • Explicit permissions: the user approves each sensitive tool.
  • Scopes: each tool declares what it does (read / write / execute).
  • Audit: every tool call can be logged.
  • Sandboxing: MCP servers can run in isolated containers.

These guardrails matter: an AI agent with MCP access to a server that both reads and writes emails can be hijacked by a prompt injection hidden in an incoming message, then made to send or exfiltrate mail on the attacker's behalf.
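A minimal sketch of the explicit-permission guardrail, assuming a hypothetical `approve` callback standing in for the client's approval UI (real clients such as Claude Desktop or Cursor ship their own prompts):

```python
# Sketch: sensitive scopes require explicit user approval before a
# tool runs. Tool names, scopes and the approve() callback are
# hypothetical stand-ins for a real MCP client's permission flow.
SENSITIVE_SCOPES = {"write", "execute"}

def call_tool(tool, args, approve):
    """Run `tool` only if its scope is read-only or the user approves."""
    if tool["scope"] in SENSITIVE_SCOPES and not approve(tool["name"]):
        return {"error": "denied by user"}
    return tool["fn"](**args)

read_tool = {"name": "read_inbox", "scope": "read", "fn": lambda: ["mail1"]}
send_tool = {"name": "send_email", "scope": "write",
             "fn": lambda to: f"sent to {to}"}

# Read-only tools pass through; write tools are gated on approval.
print(call_tool(read_tool, {}, approve=lambda name: False))
print(call_tool(send_tool, {"to": "a@b.c"}, approve=lambda name: False))
```

Gating on the declared scope, rather than on the tool name, is what makes the policy enforceable across servers the client has never seen before.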

MCP vs classic REST APIs

| Aspect | REST API | MCP |
|---|---|---|
| Discovery | Manual (read the docs) | Automatic (model queries the server) |
| Format | Vendor-specific | Unified, JSON-RPC |
| Intended audience | Humans and applications | LLMs and AI agents |
| Documentation | Swagger, OpenAPI | Schema built into the protocol |
| Permissions | App-specific | Standardized, granular |

MCP does not replace REST APIs — it wraps them to make them discoverable and usable by models.
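A sketch of that wrapping, with a stub standing in for the real HTTP call (the endpoint, helper names, and returned fields are hypothetical; a production server would use an MCP SDK and a real HTTP client):

```python
# Sketch: an MCP tool as a thin adapter over an existing REST API.
# All names and the fake endpoint below are hypothetical.

def rest_create_issue(title):
    # Stand-in for e.g. POST /api/issues on the existing REST backend.
    return {"id": 42, "title": title}

def mcp_tool_create_issue(arguments):
    """MCP-side handler: validates arguments, delegates to the REST API."""
    if "title" not in arguments:
        # MCP tool results can signal errors back to the model.
        return {"isError": True, "content": "missing required field: title"}
    issue = rest_create_issue(arguments["title"])
    return {"content": f"created issue #{issue['id']}"}

print(mcp_tool_create_issue({"title": "Renew TLS cert"}))
```

The REST API keeps doing the work; the MCP layer only adds discovery, a schema, and a model-friendly result format on top.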

Enterprise MCP use cases

  • Connect Claude / ChatGPT to the IT estate: expose Jira, Confluence, Slack, GitHub to AI via MCP.
  • Provide context to agents: an MCP server that exposes the application catalogue, owners, and dependencies lets an AI agent answer employee questions accurately.
  • Automate development: Cursor, Continue, Cline use MCP to plug in dev tools.
  • Expose the whole IT estate to an IT copilot: Kabeen ships an MCP server that makes the live application graph queryable by any compatible LLM.

The MCP ecosystem

Hundreds of open-source MCP servers already exist:

  • GitHub: create a PR, read issues.
  • Linear: manage tickets.
  • Slack: read and send messages.
  • Notion: query a knowledge base.
  • Google Drive, Box, SharePoint: read documents.
  • PostgreSQL, BigQuery: run read-only queries.
  • Kabeen: expose the application graph, usage, and IT-estate cost.

Governing MCP in the enterprise

Rolling out MCP requires specific governance:

  • Catalogue of approved MCP servers, maintained company-wide.
  • Permission policy: who can connect which servers.
  • Centralized audit of MCP calls.
  • Alignment with [IAM](/en/glossary/iam) and least-privilege access.
  • Monitoring of prompt injection and attacks targeting MCP.

Without a frame, MCP becomes a new vector for Shadow AI — anyone plugging tools into ChatGPT or Claude without approval.

Frequently asked questions

What is MCP?

MCP (Model Context Protocol) is an open standard published by Anthropic in November 2024 to connect AI models to external tools, data, and applications in a unified and secure way. Often described as "the USB-C of LLMs", MCP standardizes how a model discovers and uses external resources instead of re-implementing a custom integration per model/tool pair.

What is MCP useful for in the enterprise?

MCP lets you wire a service once to any MCP-compatible AI. Concretely: expose Jira, Slack, Confluence, GitHub to Claude or ChatGPT through a single MCP server; provide application context to an AI agent (catalogue, owners, dependencies); automate development with Cursor or Cline. One MCP server per service replaces one integration per model/tool pair, cutting integration cost for AI vendors and application vendors alike.

How is MCP different from a REST API?

A REST API is designed for humans and applications: Swagger docs to read, vendor-specific formats. MCP is designed for LLMs: automatic discovery of resources and tools, unified JSON-RPC format, schema built into the protocol, standardized granular permissions. MCP does not replace REST APIs — it wraps them to make them discoverable by models.

What security risks does MCP introduce?

Three main risks: (1) prompt injection — an attacker can hijack the agent through an instruction hidden in a resource read via MCP, (2) over-permissioning — an MCP server with too many rights enlarges the attack surface, (3) Shadow AI — without governance, employees plug their own tools into ChatGPT or Claude without approval. Guardrails: OAuth 2.0, explicit per-tool permissions, centralized audit, IAM alignment, least privilege.
