Authors
Sonu Kumar 1, Anubhav Girdhar 2, Ritesh Patil 3, Divyansh Tripathi 4, Nishanth Veduruvada 5, Madhukar Anugu 6, Sneha Roy 7, Venkata Talatam 8, 1 R&D, USA, 2 Involead, India, 3 Gen AI CoE, India, 4 IIT Roorkee, India, 5 Arc Steam Technologies, India, 6 H&R Block, USA, 7 Deloitte USI, India, 8 Kodryx AI, India
Abstract
As Agentic AI gains mainstream adoption, the industry invests heavily in model capabilities, achieving rapid leaps in reasoning and quality. However, these systems remain largely confined to data silos, and each new integration requires custom logic that is difficult to scale. The Model Context Protocol (MCP) addresses this challenge by defining a universal, open standard for securely connecting AI-based applications (MCP clients) to data sources (MCP servers). Yet the flexibility of MCP introduces new risks, including malicious tool servers and compromised data integrity. We present MCP Guardian, a framework that strengthens MCP-based communication with authentication, rate limiting, logging, tracing, and Web Application Firewall (WAF) scanning. Through real-world scenarios and empirical testing, we demonstrate that MCP Guardian effectively mitigates attacks and ensures robust oversight with minimal overhead. Our approach fosters secure, scalable data access for AI assistants and underscores the importance of a defense-in-depth strategy that enables safer, more transparent innovation in AI-driven environments.
Keywords
model context protocol, mcp, agentic ai, artificial intelligence, generative ai