AI Integration
🔌

ModelContextProtocol

An open protocol that connects AI models to data sources and tools with a standardized interface

Tags: intermediate, open-source, self-hosted, LLM, tools, integration

Alternative To

  • Custom API Integrations
  • RAG Frameworks
  • Function Calling

Difficulty Level

Intermediate

Requires some technical experience. Moderate setup complexity.

Overview

Model Context Protocol (MCP) is an open standard that creates secure, two-way connections between AI assistants and external data sources, tools, and systems. Developed by Anthropic and released as an open-source project, MCP replaces fragmented, one-off tool integrations with a single standardized protocol.

MCP is designed to solve the common challenge of connecting AI models to the systems where data lives, including content repositories, business tools, and development environments. By standardizing these connections, MCP allows developers to build more robust AI applications that can access relevant context and resources when needed, while maintaining proper security controls and user consent mechanisms.

System Requirements

  • Python: 3.9 or higher (for Python SDK)
  • Node.js: 16 or higher (for TypeScript/JavaScript SDK)
  • Java: 17 or higher (for Java SDK)
  • CPU: 2+ cores
  • RAM: 4GB+
  • Storage: 100MB+ for base installation
  • Operating System: Windows, macOS, or Linux

Installation Guide

MCP offers multiple SDKs to accommodate different development preferences. Here's how to install each one:

Python SDK

# Using pip
pip install mcp

# Using uv (recommended)
uv init myproject
cd myproject
uv venv
# Activate virtual environment
source .venv/bin/activate  # On macOS/Linux
.venv\Scripts\activate     # On Windows
uv add mcp

TypeScript/JavaScript SDK

# Using npm
npm install @modelcontextprotocol/sdk

# Using yarn
yarn add @modelcontextprotocol/sdk

# Using pnpm
pnpm add @modelcontextprotocol/sdk

Java SDK

# Add to pom.xml (Maven)
<dependency>
    <groupId>io.modelcontextprotocol</groupId>
    <artifactId>mcp-sdk</artifactId>
    <version>latest.version</version>
</dependency>

# Or build.gradle (Gradle)
implementation 'io.modelcontextprotocol:mcp-sdk:latest.version'

Spring Boot Starter

# Add to pom.xml (Maven)
<dependency>
    <groupId>org.springframework.ai</groupId>
    <artifactId>spring-ai-mcp-client-spring-boot-starter</artifactId>
    <version>latest.version</version>
</dependency>

Practical Exercise: Building a Code Analysis MCP Server

Let's create an MCP server that analyzes Python code for quality issues and suggests improvements. This example demonstrates real-world utility for developers by combining static analysis tools with AI-driven recommendations.

Step 1: Set Up Project Structure

First, create a new project directory:

mkdir code-analyzer-mcp
cd code-analyzer-mcp

Create a virtual environment and install dependencies:

python -m venv venv
source venv/bin/activate  # On macOS/Linux
venv\Scripts\activate     # On Windows
pip install mcp pylint radon mypy black

Step 2: Create the Server Code

Create a file named server.py with the following content:

import asyncio
import os
import tempfile
import json
import sys
from typing import Any, Dict, List

from mcp.server import Server, NotificationOptions
from mcp.server.models import InitializationOptions
import mcp.server.stdio
import mcp.types as types

# Initialize the server
server = Server("code-analyzer")

@server.list_tools()
async def handle_list_tools() -> List[types.Tool]:
    """List available code analysis tools."""
    return [
        types.Tool(
            name="analyze-code",
            description="Analyze Python code for quality issues and provide suggestions",
            inputSchema={
                "type": "object",
                "properties": {
                    "code": {
                        "type": "string",
                        "description": "Python code to analyze"
                    },
                    "analysis_types": {
                        "type": "array",
                        "items": {
                            "type": "string",
                            "enum": ["lint", "complexity", "type"]
                        },
                        "description": "Types of analysis to perform"
                    }
                },
                "required": ["code"]
            }
        ),
        types.Tool(
            name="fix-common-issues",
            description="Automatically fix common code issues",
            inputSchema={
                "type": "object",
                "properties": {
                    "code": {
                        "type": "string",
                        "description": "Python code to fix"
                    }
                },
                "required": ["code"]
            }
        )
    ]

@server.call_tool()
async def handle_call_tool(name: str, arguments: Dict[str, Any]) -> List[types.TextContent]:
    """Dispatch a tool call to the matching handler and return its result as JSON text."""
    if name == "analyze-code":
        result = await handle_analyze_code(arguments)
    elif name == "fix-common-issues":
        result = await handle_fix_common_issues(arguments)
    else:
        raise ValueError(f"Unknown tool: {name}")
    return [types.TextContent(type="text", text=json.dumps(result, indent=2))]

async def handle_analyze_code(args: Dict[str, Any]) -> Any:
    """Handle code analysis tool execution."""
    code = args.get("code", "")
    analysis_types = args.get("analysis_types", ["lint", "complexity", "type"])

    if not code:
        return {"error": "No code provided"}

    results = {}

    # Create a temporary file for the code
    with tempfile.NamedTemporaryFile(suffix=".py", delete=False) as temp_file:
        temp_file_path = temp_file.name
        temp_file.write(code.encode('utf-8'))

    try:
        # Run different analysis based on requested types
        if "lint" in analysis_types:
            results["lint_issues"] = await run_pylint(temp_file_path)

        if "complexity" in analysis_types:
            results["complexity"] = await run_complexity_analysis(temp_file_path)

        if "type" in analysis_types:
            results["type_issues"] = await run_type_analysis(temp_file_path)

        # Add a summary of the results
        total_issues = 0
        if "lint_issues" in results:
            total_issues += len(results["lint_issues"])
        if "type_issues" in results:
            total_issues += len(results["type_issues"])

        results["summary"] = {
            "total_issues": total_issues,
            "health_score": calculate_health_score(results)
        }

        return results

    finally:
        # Clean up the temporary file
        if os.path.exists(temp_file_path):
            os.unlink(temp_file_path)

async def handle_fix_common_issues(args: Dict[str, Any]) -> Any:
    """Handle automatic code fixing tool execution."""
    code = args.get("code", "")

    if not code:
        return {"error": "No code provided"}

    # Create a temporary file for the code
    with tempfile.NamedTemporaryFile(suffix=".py", delete=False) as temp_file:
        temp_file_path = temp_file.name
        temp_file.write(code.encode('utf-8'))

    try:
        # Apply autoformatting using Black
        # Run Black from the current interpreter's environment
        process = await asyncio.create_subprocess_exec(
            sys.executable, "-m", "black", "--quiet", temp_file_path,
            stdout=asyncio.subprocess.PIPE,
            stderr=asyncio.subprocess.PIPE
        )

        stdout, stderr = await process.communicate()

        # Read the modified file
        with open(temp_file_path, 'r', encoding='utf-8') as f:
            fixed_code = f.read()

        # Perform initial analysis to compare
        before_analysis = await handle_analyze_code({"code": code, "analysis_types": ["lint"]})
        after_analysis = await handle_analyze_code({"code": fixed_code, "analysis_types": ["lint"]})

        # Calculate improvement
        before_issues = len(before_analysis.get("lint_issues", []))
        after_issues = len(after_analysis.get("lint_issues", []))

        return {
            "fixed_code": fixed_code,
            "improvement": {
                "issues_before": before_issues,
                "issues_after": after_issues,
                "issues_fixed": before_issues - after_issues,
                "percent_improved": round(((before_issues - after_issues) / max(before_issues, 1)) * 100, 2)
            }
        }

    finally:
        # Clean up the temporary file
        if os.path.exists(temp_file_path):
            os.unlink(temp_file_path)

async def run_pylint(file_path):
    """Run Pylint on the given file."""
    process = await asyncio.create_subprocess_exec(
        "pylint", "--output-format=json", file_path,
        stdout=asyncio.subprocess.PIPE,
        stderr=asyncio.subprocess.PIPE
    )

    stdout, stderr = await process.communicate()

    try:
        issues = json.loads(stdout.decode())
        # Transform the data for better usability
        return [{
            "line": issue.get("line"),
            "column": issue.get("column"),
            "type": issue.get("type"),
            "symbol": issue.get("symbol"),
            "message": issue.get("message")
        } for issue in issues]
    except json.JSONDecodeError:
        return []

async def run_complexity_analysis(file_path):
    """Run complexity analysis using Radon."""
    process = await asyncio.create_subprocess_exec(
        "radon", "cc", "--json", file_path,
        stdout=asyncio.subprocess.PIPE,
        stderr=asyncio.subprocess.PIPE
    )

    stdout, stderr = await process.communicate()

    try:
        complexity_data = json.loads(stdout.decode())
        functions = []
        for filename, file_data in complexity_data.items():
            for func in file_data:
                functions.append({
                    "name": func.get("name"),
                    "complexity": func.get("complexity"),
                    "rank": func.get("rank"),
                    "line_number": func.get("lineno"),
                    "end_line": func.get("endline")
                })

        return {
            "functions": functions,
            "average_complexity": sum(f["complexity"] for f in functions) / max(len(functions), 1)
        }
    except json.JSONDecodeError:
        return {"functions": [], "average_complexity": 0}

async def run_type_analysis(file_path):
    """Run type analysis using Mypy."""
    process = await asyncio.create_subprocess_exec(
        "mypy", "--show-column-numbers", "--no-error-summary", file_path,
        stdout=asyncio.subprocess.PIPE,
        stderr=asyncio.subprocess.PIPE
    )

    stdout, stderr = await process.communicate()

    errors = []
    # Mypy prints one diagnostic per line: "file.py:line:column: severity: message"
    for line in stdout.decode().splitlines():
        parts = line.split(":", 4)
        if len(parts) == 5 and parts[3].strip() in ("error", "warning", "note"):
            try:
                errors.append({
                    "line": int(parts[1]),
                    "column": int(parts[2]),
                    "severity": parts[3].strip(),
                    "message": parts[4].strip()
                })
            except ValueError:
                continue
    return errors

def calculate_health_score(results):
    """Calculate a health score from 0-100 based on analysis results."""
    # Start with a perfect score
    score = 100

    # Deduct points for lint issues
    lint_issues = results.get("lint_issues", [])
    if lint_issues:
        score -= min(40, len(lint_issues) * 2)

    # Deduct points for complexity
    complexity = results.get("complexity", {})
    avg_complexity = complexity.get("average_complexity", 0)
    if avg_complexity > 5:
        score -= min(30, (avg_complexity - 5) * 5)

    # Deduct points for type issues
    type_issues = results.get("type_issues", [])
    if type_issues:
        score -= min(30, len(type_issues) * 3)

    return max(0, score)

async def main():
    # Run the server using stdin/stdout streams
    async with mcp.server.stdio.stdio_server() as (read_stream, write_stream):
        await server.run(
            read_stream,
            write_stream,
            InitializationOptions(
                server_name="code-analyzer",
                server_version="0.1.0",
                capabilities=server.get_capabilities(
                    notification_options=NotificationOptions(),
                    experimental_capabilities={},
                ),
            ),
        )

if __name__ == "__main__":
    asyncio.run(main())
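
Before wiring the server into Claude Desktop, you can smoke-test it with the SDK's stdio client. A minimal sketch (save as test_client.py in the project directory, with the virtual environment active):

import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main():
    # Launch server.py as a subprocess and talk to it over stdio
    params = StdioServerParameters(command="python", args=["server.py"])
    async with stdio_client(params) as (read_stream, write_stream):
        async with ClientSession(read_stream, write_stream) as session:
            await session.initialize()
            tools = await session.list_tools()
            print("Tools:", [tool.name for tool in tools.tools])
            result = await session.call_tool("analyze-code", {"code": "x=1\n"})
            print("Result:", result)

if __name__ == "__main__":
    asyncio.run(main())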

Step 3: Configure MCP in Claude Desktop

Add the server to Claude Desktop's configuration file, claude_desktop_config.json (on macOS: ~/Library/Application Support/Claude/claude_desktop_config.json; on Windows: %APPDATA%\Claude\claude_desktop_config.json), with the following content:

{
  "mcpServers": {
    "code-analyzer": {
      "command": "python",
      "args": ["/path/to/your/code-analyzer-mcp/server.py"]
    }
  }
}

Replace /path/to/your/code-analyzer-mcp/server.py with the absolute path to your server.py file, then restart Claude Desktop so it picks up the new server.

Step 4: Try It Out with Claude Desktop

  1. Run Claude Desktop
  2. Start a new conversation
  3. Ask Claude to analyze some Python code, for example:
    • "Can you analyze this Python code for quality issues?" followed by some code
    • "What improvements can I make to this function?" with a Python function

Claude will request permission to use the MCP server. Once granted, it will provide detailed analysis and suggestions to improve your code.

Example Usage

Here's an example Python function to analyze:

def process_data(data):
    results = []
    for i in range(len(data)):
        item = data[i]
        if item['status'] == 'active':
            if item['value'] > 100:
                results.append({
                    'id': item['id'],
                    'normalized': item['value'] / 100,
                    'category': item.get('category', 'unknown')
                })
    return results

Claude can analyze this code and provide feedback on:

  • Lint issues (like using enumerate instead of range/len)
  • Complexity metrics for the function
  • Type annotations that could be added
  • Suggestions for better error handling
  • Performance improvements
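
For reference, a cleaned-up version that addresses several of these points might look like the following (a sketch of possible improvements, not Claude's literal output):

from typing import Any

def process_data(data: list[dict[str, Any]]) -> list[dict[str, Any]]:
    """Normalize active items whose value exceeds 100."""
    results = []
    for item in data:  # iterate directly instead of range(len(...))
        if item.get('status') == 'active' and item.get('value', 0) > 100:
            results.append({
                'id': item['id'],
                'normalized': item['value'] / 100,
                'category': item.get('category', 'unknown')
            })
    return results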

Key Features

The Model Context Protocol offers several key features:

  1. Standardized Interface: Common protocol for AI applications to access external data and tools
  2. Security-Focused: Built-in permission model and consent flows for secure AI interactions
  3. Multiple Capabilities:
    • Resources: Allows reading file-like data such as documents or API responses
    • Tools: Function-like capabilities that can be called by AI models
    • Prompts: Pre-written templates to help users accomplish specific tasks
  4. Transport Options: Supports multiple transport mechanisms like stdio, HTTP, and Server-Sent Events
  5. Multiple SDKs: Available for Python, TypeScript/JavaScript, Java, Kotlin, and more
  6. Open Source: Fully open source with active community development
  7. Integration with AI Hosts: Works with hosts like Claude Desktop, AI-enhanced IDEs, and more
  8. Two-Way Communication: Supports bidirectional communication between AI models and data sources
  9. Capability Discovery: Dynamic discovery of available tools and resources
  10. Consent Framework: Built-in mechanisms for user permission and consent
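
The exercise above only uses the Tools capability. For comparison, here is a minimal sketch of the Resources capability using the same Python SDK; the analyzer:// URI and its contents are made up for illustration:

@server.list_resources()
async def handle_list_resources() -> List[types.Resource]:
    """Advertise one file-like resource that clients may read."""
    return [
        types.Resource(
            uri="analyzer://readme",
            name="Analyzer README",
            mimeType="text/plain"
        )
    ]

@server.read_resource()
async def handle_read_resource(uri) -> str:
    """Return the contents of the requested resource."""
    if str(uri) == "analyzer://readme":
        return "Code Analyzer MCP server: exposes analyze-code and fix-common-issues."
    raise ValueError(f"Unknown resource: {uri}")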

Architecture

The MCP architecture consists of several key components:

  • MCP Hosts: Applications that run AI models and need access to external data (e.g., Claude Desktop)
  • MCP Clients: Protocol clients that maintain connections with servers
  • MCP Servers: Lightweight programs that expose specific capabilities through the protocol
  • Local Data Sources: Local files, databases, and services accessed by MCP servers
  • Remote Services: External APIs and services that MCP servers can connect to

MCP follows a client-server model where:

  1. The host initializes a client connection to an MCP server
  2. The client and server negotiate capabilities
  3. The client can then access tools, resources, and prompts provided by the server
  4. All communication follows the JSON-RPC 2.0 specification
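
To make the wire format concrete, here is roughly what step 3 looks like when a client invokes the analyze-code tool from the exercise above. The method name (tools/call) comes from the MCP specification; the id and argument values are illustrative:

{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "analyze-code",
    "arguments": {"code": "def f(): pass", "analysis_types": ["lint"]}
  }
}

The server replies with a result whose content array carries the tool output:

{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "content": [{"type": "text", "text": "{ ...analysis results... }"}]
  }
}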

Resources

Official Resources

  • Documentation: https://modelcontextprotocol.io
  • Specification: https://spec.modelcontextprotocol.io
  • GitHub organization: https://github.com/modelcontextprotocol
  • Python SDK: https://github.com/modelcontextprotocol/python-sdk
  • TypeScript SDK: https://github.com/modelcontextprotocol/typescript-sdk

Pre-built MCP Servers

Several pre-built MCP servers are available for common systems, including Google Drive, Slack, GitHub, Git, PostgreSQL, and Puppeteer; see https://github.com/modelcontextprotocol/servers for the full catalog.

Suggested Projects

You might also be interested in these similar projects:

🕸️

Crawl4AI

Blazing-fast, AI-ready web crawler and scraper designed specifically for LLMs, AI agents, and data pipelines

Difficulty: Beginner to Intermediate
Updated: Mar 23, 2025
🤖

CrewAI

CrewAI is a standalone Python framework for orchestrating role-playing, autonomous AI agents that collaborate intelligently to tackle complex tasks through defined roles, tools, and workflows.

Difficulty: Intermediate
Updated: Mar 23, 2025
🤖

PydanticAI

PydanticAI is a Python agent framework designed to make it less painful to build production-grade applications with Generative AI, featuring strong type safety and validation.

Difficulty: Intermediate
Updated: Mar 23, 2025