How to Secure an MCP Server Before You Trust It With Your Code
MCP is useful because it lets an AI coding assistant do more than autocomplete text. It can read project files, call external tools, reach databases, and interact with systems your developers use every day.
That is exactly why you should treat MCP as a security boundary, not a convenience setting.
If an MCP-connected agent can touch your repo, shell, tickets, secrets, or internal APIs, the important question is no longer "Is the model smart?" It is:
What can this tool chain do when the prompt is wrong, the output is malicious, or the approval model is too loose?
This guide gives you a practical hardening checklist for MCP servers used in engineering workflows.
What changes when you enable MCP
Without MCP, an AI coding tool mostly produces text.
With MCP, it can:
- read files and project metadata
- invoke tools with structured arguments
- query systems outside the repo
- receive large tool outputs that may contain sensitive data
- take actions based on instructions embedded in repo content, tickets, docs, or tool responses
In both Claude Code and Cursor, MCP is explicitly designed to connect assistants to external tools and data sources. The official MCP specification also calls out security controls around authorization, roots, tool confirmation, validation, and output handling.
That means your security posture depends on both the model and the tool surface you expose to it.
The short checklist
| Control | Why it matters |
|---|---|
| Prefer local stdio when possible | Smaller network and auth surface |
| Limit roots and working directories | Reduces file exposure and path traversal risk |
| Keep approval on for sensitive tools | Prevents silent high-impact actions |
| Use environment variables for secrets | Avoids hardcoded tokens in checked-in config |
| Audit and pin MCP servers | Third-party server code becomes part of your trust chain |
| Validate and sanitize tool outputs | Prevents accidental leakage and prompt poisoning |
| Verify all generated changes with static analysis | Catches insecure code and removed controls before merge |
1. Prefer the smallest transport that solves the problem
If a workflow only needs local repo access, prefer a local stdio server over a remote networked server.
Why:
- fewer auth flows to maintain
- less chance of accidental external exposure
- simpler debugging and incident scope
- easier local isolation
Cursor's MCP docs describe support for local stdio, SSE, and streamable HTTP transports. Use the more complex transports only when you genuinely need multi-user or remote access.
If you do need remote transport:
- require HTTPS
- use short-lived credentials
- rotate tokens
- validate redirect URIs and OAuth configuration
- log access to sensitive tools
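A minimal pre-flight check can enforce the first of these requirements before a remote server is ever registered. This is a hypothetical helper, not part of any MCP SDK; the URL shown is an illustrative placeholder.

```python
from urllib.parse import urlparse


def validate_remote_server_url(url: str) -> str:
    """Reject remote MCP server URLs that do not use HTTPS.

    A hypothetical pre-flight check run before registering a remote server
    in client config. Catches plain-HTTP endpoints and malformed URLs.
    """
    parsed = urlparse(url)
    if parsed.scheme != "https":
        raise ValueError(f"remote MCP servers must use HTTPS, got: {url}")
    if not parsed.hostname:
        raise ValueError(f"URL has no hostname: {url}")
    return url


# Accepted:
validate_remote_server_url("https://mcp.internal.example.com/sse")
# Rejected with ValueError:
# validate_remote_server_url("http://mcp.internal.example.com/sse")
```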
2. Scope roots and file access aggressively
The MCP spec's roots model exists for a reason: the client should expose only the paths the server actually needs.
In practice:
- expose the project root, not your home directory
- do not mount shared secrets directories into project-scoped tooling
- separate production infrastructure repos from application repos
- avoid a single all-powerful server that sees everything
If your agent can browse multiple roots, it can also blend instructions and data from multiple contexts. That makes accidental leakage and prompt poisoning easier.
For AI coding, the safe default is:
- one repo
- one bounded working directory
- one job-specific tool set
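Inside a server, the same scoping discipline applies to every path a tool accepts. A sketch of a traversal guard, assuming a single hypothetical project root (`/srv/projects/app` is a placeholder):

```python
from pathlib import Path

# Hypothetical single root exposed to this server.
PROJECT_ROOT = Path("/srv/projects/app").resolve()


def resolve_inside_root(requested: str) -> Path:
    """Resolve a tool-supplied path and refuse anything outside the root.

    Blocks '../' traversal and absolute paths that escape the project root.
    """
    candidate = (PROJECT_ROOT / requested).resolve()
    if not candidate.is_relative_to(PROJECT_ROOT):
        raise PermissionError(f"path escapes project root: {requested}")
    return candidate
```

Note that `Path.is_relative_to` requires Python 3.9+, and symlinks inside the root still need separate handling.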
3. Keep approval on for sensitive operations
Both Claude Code and Cursor document approval controls for tool usage. Leave those controls enabled for actions such as:
- shell-backed tools
- database access
- secret lookups
- deployment systems
- ticketing or production admin integrations
- any tool that writes outside the repo
Do not normalize broad auto-run until you have a tight allowlist and a clear audit trail.
The common failure mode here is not a Hollywood-style compromise. It is a developer approving too much because the workflow is noisy, repetitive, and feels routine.
If a tool can cause production impact, the approval step is part of the product, not friction to remove.
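A tight allowlist is easiest to reason about as default-deny: a short list of read-only tools may auto-run, and everything else pauses for a human. A sketch, with hypothetical tool names:

```python
# Hypothetical allowlist: read-only tools that may auto-run without approval.
AUTO_RUN_ALLOWLIST = {"read_file", "list_directory", "search_code"}


def requires_approval(tool_name: str) -> bool:
    """Default-deny: anything not explicitly allowlisted pauses for approval.

    Shell, database, secret, and deploy tools never make the allowlist,
    so they always hit the approval step.
    """
    return tool_name not in AUTO_RUN_ALLOWLIST
```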
4. Treat MCP server code as part of your supply chain
An MCP server is not just a connector. It is executable logic with:
- its own dependencies
- its own auth behavior
- its own logging and error handling
- its own assumptions about safe inputs and outputs
Before adopting a third-party MCP server:
- review its source
- pin versions instead of tracking a floating main branch
- understand what environment variables it expects
- verify what it logs
- check whether it enforces auth, rate limits, and path boundaries
If a server has access to customer data, billing systems, infra, or source code, give it the same review bar you would give an internal service with similar privilege.
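One way to make "pin versions" concrete for a locally vendored server is to record a checksum at review time and refuse to launch anything that has drifted. A sketch, where the pinned digest is whatever you recorded when you reviewed the code (the value below is illustrative: it is the sha256 of the string `test`):

```python
import hashlib
from pathlib import Path

# Hypothetical pin: the sha256 of the reviewed server script, recorded at review time.
PINNED_SHA256 = "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"


def verify_pinned_server(script_path: str) -> None:
    """Refuse to launch an MCP server whose code changed since it was reviewed."""
    digest = hashlib.sha256(Path(script_path).read_bytes()).hexdigest()
    if digest != PINNED_SHA256:
        raise RuntimeError(
            f"{script_path} does not match the reviewed pin; re-review before use"
        )
```

For servers installed from a package index, lockfiles and hash-pinned requirements serve the same purpose.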
5. Keep secrets out of checked-in MCP config
Anthropic's project-scoped MCP config and Cursor's server definitions are convenient, but convenience is where teams accidentally commit credentials.
Use environment-variable expansion instead of inline values:
```json
{
  "mcpServers": {
    "internal-tools": {
      "command": "python",
      "args": ["tools/mcp_server.py"],
      "env": {
        "API_BASE_URL": "${API_BASE_URL}",
        "SERVICE_TOKEN": "${SERVICE_TOKEN}"
      }
    }
  }
}
```
Also:
- scope tokens to the minimum permissions required
- prefer separate tokens per server, not one shared admin token
- rotate credentials on server changes or team changes
- never pass production secrets into experimental servers
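On the server side, pair that config with a fail-fast read so a missing token surfaces at startup rather than mid-session. A minimal sketch (the variable name is an assumption matching the config example above):

```python
import os


def require_env(name: str) -> str:
    """Read a required secret from the environment, failing fast if unset.

    Keeps tokens out of checked-in config: the value lives only in the
    environment and never appears in the repo.
    """
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"required environment variable {name} is not set")
    return value


# SERVICE_TOKEN = require_env("SERVICE_TOKEN")
```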
6. Plan for prompt injection through tools and tool output
Prompt injection is not limited to user chat messages.
In an MCP workflow, the model can receive instructions from:
- markdown files in the repo
- issue tracker tickets
- pasted logs
- database records
- design docs
- tool responses
That means an MCP server can become a delivery path for malicious or manipulative text, even if the transport itself is perfectly authenticated.
Defensive defaults:
- sanitize high-risk tool output before returning it
- truncate oversized responses
- remove secrets and tokens from logs and tool results
- require confirmation before executing secondary actions derived from tool output
- avoid "take whatever the tool returned and immediately act on it" workflows
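The first three defaults can be combined into a single output filter. This is a rough sketch, not a complete redaction solution: the patterns cover a few common credential shapes and the size cap is an arbitrary placeholder, both of which you would tune for your own token formats.

```python
import re

MAX_OUTPUT_CHARS = 8_000  # hypothetical cap before output reaches the model

# Rough patterns for common credential shapes; tune for your own token formats.
SECRET_PATTERNS = [
    re.compile(r"ghp_[A-Za-z0-9]{36}"),         # GitHub personal access tokens
    re.compile(r"AKIA[0-9A-Z]{16}"),            # AWS access key IDs
    re.compile(r"(?i)bearer\s+[a-z0-9._\-]+"),  # Authorization headers
]


def sanitize_tool_output(text: str) -> str:
    """Redact token-shaped strings and truncate oversized tool responses."""
    for pattern in SECRET_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    if len(text) > MAX_OUTPUT_CHARS:
        text = text[:MAX_OUTPUT_CHARS] + "\n[truncated]"
    return text
```

Redaction by regex is best-effort; it reduces accidental leakage but does not replace scoping secrets away from the server in the first place.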
7. Verify the code after the agent acts
MCP controls help you constrain what the assistant can do. They do not prove the code changes are safe.
That is where Skylos fits.
For repo-level verification:
```shell
skylos . -a
```
For AI-generated PRs or local diffs:
```shell
skylos diff main..HEAD --danger
```
For LLM-integrated applications:
```shell
skylos defend .
```
This gives you three layers:
- MCP controls to bound the tool surface
- approval controls to review sensitive actions
- static analysis and diff analysis to catch insecure or regressive code before merge
A practical setup for engineering teams
If you want a default setup that is hard to regret, use this:
- Local stdio server when possible
- Project-scoped roots only
- Environment variables for credentials
- Approval required for high-impact tools
- Pinned server versions
- Skylos local scan before commit
- Skylos diff scan in CI for PRs
That setup will not make MCP risk-free. It will make it understandable, reviewable, and much harder to misuse by accident.
Where to go next
- Need a repo-specific workflow for Claude Code? Read How to Review Claude Code Output for Python Security Regressions
- Need a Cursor-specific workflow? Read How to Use Skylos as a Cursor Security Scanner for Python
- Need to catch removed decorators and missing checks in PRs? Read How to Catch Removed Auth Checks and Security Regressions in AI-Generated PRs