Omar Santos, Cisco Employee

The Model Context Protocol (MCP) is an open standard that provides a universal way to connect AI models and agentic applications to various data sources and tools. It is how applications supply context (documents, database records, API data, web search results, etc.) to AI models. This capability is very powerful, but it also raises important cybersecurity questions.

Note: I wrote a detailed article titled “Integrating Agentic RAG with MCP Servers: Technical Implementation Guide”.

When AI assistants gain access to sensitive files, databases, or services via MCP, organizations must ensure those interactions are secure, authenticated, and auditable.

The MCP architecture (host, client, servers) inherently creates defined points where security controls can be applied. Let’s explore how MCP can be leveraged for security purposes, from securing model interactions and logging their actions to guarding against adversarial inputs and ensuring compliance with data protection requirements.

Overview of MCP and Its Security Architecture

MCP follows a client-server model with clear separation of roles.

An MCP Host (the AI application or agent) connects via an MCP Client library to one or more MCP Servers.

[Image: OmarSantos_0-1742780256382.png]

Each server exposes a specific set of capabilities (such as reading files, querying a database, or calling an API) through a standardized protocol. This is why people refer to MCP as the “USB-C port for AI applications”.

By design, this architecture introduces security boundaries: the host and servers communicate only via the MCP protocol, which means security policies can be enforced at the protocol layer. For example, an MCP server can restrict which files or database entries it will return, regardless of what the AI model requests. Likewise, the host can decide which servers to trust and connect to. This clear delineation of components makes it easier to apply the Zero Trust principle (treating each component and request as potentially untrusted until verified).
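
To make this concrete, the following is a minimal sketch of an MCP server that enforces its own file allow-list regardless of what the model requests. It assumes the official `mcp` Python SDK (FastMCP); the server name and the allow-listed directory are illustrative, not prescribed by MCP.

```python
# Minimal sketch: an MCP server that enforces its own file-access policy at the
# protocol boundary. Assumes the `mcp` Python SDK (FastMCP); paths are examples.
from pathlib import Path

from mcp.server.fastmcp import FastMCP

ALLOWED_DIR = Path("/srv/shared-docs").resolve()  # hypothetical allow-listed root

mcp = FastMCP("secure-file-server")

@mcp.tool()
def read_document(relative_path: str) -> str:
    """Return the contents of a file under the approved directory only."""
    target = (ALLOWED_DIR / relative_path).resolve()
    # Reject path-traversal attempts even if the model's request looks legitimate.
    # Path.is_relative_to() requires Python 3.9+.
    if not target.is_relative_to(ALLOWED_DIR):
        raise ValueError("Access denied: path is outside the allow-listed directory")
    return target.read_text(encoding="utf-8")

if __name__ == "__main__":
    mcp.run()  # defaults to the stdio transport
```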

Security Considerations when Using MCP

The following table outlines a few security concerns to consider when implementing MCP.

 

| Security Concern | Description |
| --- | --- |
| Unmonitored Access | Without proper visibility, AI assistants could access or modify sensitive data without detection. A compromised prompt or malicious instruction might extract confidential information using legitimate MCP connections. |
| Lack of Built-in Approval Workflows | Standard MCP implementations do not include built-in approval workflows, making it challenging to enforce human-in-the-loop requirements for critical operations such as database modifications or financial transactions. |
| Limited Audit Trails and Monitoring | Although MCP enables powerful integrations, it doesn't inherently offer comprehensive monitoring of prompts. This can hamper security investigations and compliance reporting by leaving gaps in the recorded interactions. |
| Privilege Management Challenges | Managing access across multiple MCP servers with differing security needs becomes complex. Without proper controls, ensuring that every component operates with least privilege can be difficult, increasing the risk of overexposure to sensitive operations. |
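
The approval-workflow and audit-trail gaps above are typically closed on the host side. The sketch below is one illustration of such a wrapper: it requires explicit human approval before sensitive tool calls and records every invocation. The tool names, prompt, and logger are hypothetical, not part of MCP.

```python
# Illustrative host-side wrapper: human-in-the-loop gate plus audit record
# around sensitive tool calls. Names below are hypothetical examples.
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("mcp.audit")

SENSITIVE_TOOLS = {"update_database", "transfer_funds"}  # example tool names

def call_tool_with_controls(tool_name: str, arguments: dict, invoke, approver=input):
    """Require explicit approval for sensitive tools and log every invocation."""
    if tool_name in SENSITIVE_TOOLS:
        answer = approver(f"Approve call to '{tool_name}' with {arguments}? [y/N] ")
        if answer.strip().lower() != "y":
            audit_log.warning("DENIED %s %s", tool_name, json.dumps(arguments))
            raise PermissionError(f"Human approver rejected '{tool_name}'")
    result = invoke(tool_name, arguments)  # e.g., your MCP client's tool-call function
    audit_log.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool_name,
        "arguments": arguments,
        "status": "executed",
    }))
    return result
```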

 

Secure and Authenticated Model Interaction

One of MCP’s primary benefits is that it standardizes interactions between AI models, MCP servers, tools, and data sources. Traditional “plug-in” style integrations or custom scripts might expose data without robust authentication; MCP provides a well-defined layer where authentication and authorization can be enforced.

[Image: OmarSantos_3-1742781114086.png]

The following table includes some of the key security considerations when implementing the MCP transport mechanisms, along with best practices and implementation notes:

| Category | Best Practices / Considerations | Description / Implementation Notes |
| --- | --- | --- |
| Authentication & Authorization | Adopt standardized protocols: use established protocols such as OAuth 2.0/OAuth 2.1 or OpenID Connect (e.g., a Node.js service using Passport.js with an OAuth 2.0 strategy).<br>Store and validate client credentials securely using a secrets manager (e.g., CyberArk Conjur, HashiCorp Vault).<br>Use secure token handling: JWTs with expiration and token rotation, secure token storage (HttpOnly cookies, secure mobile storage), and token revocation mechanisms.<br>Implement authorization checks: Role-Based Access Control (RBAC) and Access Control Lists (ACLs), enforced through policy middleware. | Standard protocols provide a secure framework for managing tokens. Secure storage and handling of credentials reduce the risk of exposure. Enforcing authorization through RBAC and ACLs ensures that only permitted users can access sensitive operations or data. |
| Data Security | Use TLS for network transport: configure HTTPS with valid TLS certificates (e.g., Let's Encrypt or an organizational CA), enforce strong cipher suites, and disable outdated protocols.<br>Sanitize input data: use input validation libraries (e.g., Python's bleach, JavaScript's DOMPurify). | TLS encryption protects data in transit from eavesdropping and man-in-the-middle attacks. Sanitizing inputs prevents injection attacks and ensures data integrity. |
| Network Security | Implement rate limiting: use middleware or API gateways (e.g., express-rate-limit for Node.js, Nginx settings) to restrict requests per IP/client, with burst control for short spikes.<br>Use appropriate timeouts: set connection, read, and write timeouts on servers and clients.<br>Handle DoS scenarios: implement circuit breakers and throttling logic.<br>Monitor for unusual patterns: integrate logging with SIEM tools (Splunk, Graylog, ELK).<br>Implement proper firewall rules: configure network and application firewalls (WAF) and apply network segmentation. | Rate limiting and timeouts help mitigate resource exhaustion and DoS attacks. Continuous monitoring with SIEM integration assists in detecting anomalies. Firewall configurations and network segmentation limit the attack surface and contain potential breaches. |
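
As one illustration of the authentication and authorization practices above, the sketch below validates a bearer JWT and checks a role claim before an MCP server (or a gateway in front of it) serves a request. It assumes the PyJWT library; the signing key, algorithm, and role claim are placeholders you would replace with your identity provider's settings.

```python
# Sketch of a bearer-token check run before serving an MCP request.
# Uses PyJWT (`pip install pyjwt`); key, audience, and role claim are assumptions.
import jwt  # PyJWT

SIGNING_KEY = "replace-with-a-key-from-your-secrets-manager"  # never hard-code in production
REQUIRED_ROLE = "mcp:read"  # hypothetical role granted by your identity provider

def authorize_request(auth_header: str) -> dict:
    """Validate the JWT and confirm the caller holds the required role."""
    if not auth_header.startswith("Bearer "):
        raise PermissionError("Missing bearer token")
    token = auth_header.removeprefix("Bearer ")
    try:
        claims = jwt.decode(
            token,
            SIGNING_KEY,
            algorithms=["HS256"],          # pin allowed algorithms explicitly
            options={"require": ["exp"]},  # reject tokens without an expiration
        )
    except jwt.PyJWTError as exc:
        raise PermissionError(f"Invalid token: {exc}") from exc
    if REQUIRED_ROLE not in claims.get("roles", []):
        raise PermissionError("Caller lacks the required role")
    return claims
```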

 

Security Considerations when Exposing Tools

When exposing tools, it's very important to implement robust security measures to safeguard against malicious inputs, unauthorized access, and other vulnerabilities related to MCP-enabled AI applications.

[Image: OmarSantos_1-1742780256397.png]

 

The following table shows the best practices, considerations, and implementation notes.

| Category | Best Practices / Considerations | Description / Implementation Notes |
| --- | --- | --- |
| Input Validation | Validate all parameters against the schema; sanitize file paths and system commands; validate URLs and external identifiers; check parameter sizes and ranges; prevent command injection. | Use JSON Schema validators to enforce input structures. Sanitize and whitelist file paths and commands to avoid unintended execution. Ensure URLs conform to expected formats. Set limits on parameter sizes to prevent resource exhaustion. Escape or reject suspicious command inputs. |
| Access Control | Implement authentication where needed; use appropriate authorization checks; audit tool usage; rate limit requests; monitor for abuse. | Use robust methods (OAuth, JWT) to authenticate users. Enforce role-based access control (RBAC) for tool operations. Log all tool invocations with detailed context. Implement rate limiting to mitigate DoS attacks. Continuously monitor logs and usage patterns for anomalies. |
| Error Handling | Don’t expose internal errors to clients; log security-relevant errors; handle timeouts appropriately; clean up resources after errors; validate return values. | Return generic error messages to prevent leakage of internal system details. Securely log detailed error data for investigation. Set proper timeout values to avoid hanging processes. Ensure all resources (files, connections) are properly released after errors. Verify that tool outputs conform to expected formats. |
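
As an example of the input-validation guidance above, the sketch below checks a tool's arguments against a JSON Schema and rejects path-traversal attempts before anything executes. It assumes the `jsonschema` package; the schema, parameter names, and sandbox directory are illustrative, not part of MCP itself.

```python
# Sketch of parameter validation for an exposed tool: JSON Schema checks plus
# path sanitization. Uses `jsonschema`; schema and base directory are examples.
from pathlib import Path

from jsonschema import ValidationError, validate

READ_FILE_SCHEMA = {
    "type": "object",
    "properties": {
        "path": {"type": "string", "maxLength": 255},
        "max_bytes": {"type": "integer", "minimum": 1, "maximum": 1_048_576},
    },
    "required": ["path"],
    "additionalProperties": False,
}

BASE_DIR = Path("/srv/tool-data").resolve()  # hypothetical sandbox directory

def validate_read_file_args(args: dict) -> Path:
    """Reject malformed arguments and path-traversal attempts before execution."""
    try:
        validate(instance=args, schema=READ_FILE_SCHEMA)
    except ValidationError as exc:
        # Return a generic message to the client; log the details server-side.
        raise ValueError("Invalid tool arguments") from exc
    target = (BASE_DIR / args["path"]).resolve()
    if not target.is_relative_to(BASE_DIR):
        raise ValueError("Invalid tool arguments")
    return target
```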

 

Additional Security Considerations and Best Practices

The following table lists additional security concerns and key requirements for implementing MCP.

| Area | Key Requirements | Implementation Considerations |
| --- | --- | --- |
| User Consent & Control | Users must explicitly consent to and understand all data access and operations. Users must retain control over what data is shared and which actions are taken. Implement clear UIs for reviewing and authorizing activities. | Provide intuitive interfaces that transparently show which data is being accessed and why. Ensure granular consent options, allowing users to adjust permissions for different operations. |
| Data Privacy | Hosts must obtain explicit user consent before exposing user data to servers. Hosts must not transmit resource data without user consent. Protect user data with appropriate access controls. | Enforce encryption and strong access control policies to safeguard sensitive data. Incorporate consent workflows that prevent unauthorized data transmission, and document data flows for compliance purposes. |
| Tool Safety | Tools represent arbitrary code execution and require caution. Hosts must obtain explicit user consent before invoking any tool. Users should clearly understand what each tool does before authorizing its use. | Clearly document each tool’s functionality and potential impact to help users make informed decisions. Implement safeguards that restrict tool operations to pre-approved scenarios, reducing the risk of unintended code execution. |
| LLM Sampling Controls | Users must explicitly approve any LLM sampling requests. Users should control whether sampling occurs, what prompt is sent, and what results the server can see. The protocol intentionally limits server visibility into prompts. | Ensure the sampling process is transparent by exposing critical details to users before execution. Provide configuration options that let users set limits on data exposure, maintaining control over both input and output in LLM interactions. |
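
MCP leaves consent UX to the host, so one hedged way to implement granular, revocable consent for tools and data is a simple per-user scope registry, sketched below. The scope names and in-memory storage are purely illustrative.

```python
# Host-side sketch of granular, revocable user consent for tools and data
# scopes. Scope names and storage are illustrative only.
from dataclasses import dataclass, field

@dataclass
class ConsentRegistry:
    granted: dict[str, set[str]] = field(default_factory=dict)  # user -> consented scopes

    def grant(self, user: str, scope: str) -> None:
        self.granted.setdefault(user, set()).add(scope)

    def revoke(self, user: str, scope: str) -> None:
        self.granted.get(user, set()).discard(scope)

    def check(self, user: str, scope: str) -> None:
        if scope not in self.granted.get(user, set()):
            raise PermissionError(f"User '{user}' has not consented to '{scope}'")

# Example: the host checks consent before invoking a tool or sharing a resource.
consents = ConsentRegistry()
consents.grant("alice", "tool:query_database")
consents.check("alice", "tool:query_database")      # allowed
# consents.check("alice", "resource:payroll_files") # would raise PermissionError
```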

 

Monitoring Agentic Implementations

MCP supports a uniform method for servers to transmit structured log messages to clients. Clients can adjust the verbosity of these logs by setting a minimum severity level, while servers send notifications that include the log severity, an optional logger identifier, and additional JSON-serializable information.
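
For reference, these are the two JSON-RPC message shapes behind MCP's logging utility (the client's `logging/setLevel` request and the server's `notifications/message` notification), shown here as Python dictionaries. The field values are examples.

```python
# MCP logging utility message shapes, expressed as Python dicts for illustration.

# Client -> server: only deliver log messages at "warning" severity or higher.
set_level_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "logging/setLevel",
    "params": {"level": "warning"},
}

# Server -> client: a structured log notification with severity, an optional
# logger identifier, and arbitrary JSON-serializable details.
log_notification = {
    "jsonrpc": "2.0",
    "method": "notifications/message",
    "params": {
        "level": "error",
        "logger": "database",
        "data": {"error": "Connection failed", "host": "db.example.internal"},
    },
}
```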

However, when AI agents and applications have the ability to retrieve or manipulate data, maintaining a traceable record of those actions is essential. Cisco AI Defense enhances the security of modern AI implementations (like agents powered by MCP) by providing robust, real‑time monitoring and threat detection that go beyond the protocol’s structured logging. You can learn more about Cisco AI Defense here.
