Welcome, developers and AI enthusiasts! Are you ready to dive into the next frontier of Large Language Model (LLM) integration? If you’ve been grappling with how to provide your LLMs with rich, real-time context and enable them to perform complex, multi-step actions, then you’re in the right place. This MCP Python Tutorial is your definitive guide to understanding, building, and leveraging the Model Context Protocol (MCP) to create truly intelligent and autonomous AI applications.
This comprehensive guide is inspired by the insightful “Let’s Learn MCP Python” session with Gwen and Marlene from the Python on Azure team. You can watch the full session here: https://www.youtube.com/watch?v=qQZFvz4BTCY.
Why MCP Matters: The “USB-C for AI” Revolution
In the rapidly evolving world of artificial intelligence, LLMs like GPT-4 and Claude (including models such as Claude 3.5 Sonnet) have transformed what’s possible. However, their true power is unleashed when they can interact seamlessly with external systems and draw upon relevant, dynamic context. This is where the Model Context Protocol (MCP) steps in.
Coined by Anthropic, the creators of Claude, MCP is an open protocol that standardizes how applications provide context to LLMs. Imagine it as the USB-C for AI: a universal standard for connecting diverse components. Or, as some in the community suggest, consider it the REST for LLMs: a structured way for AI models to consume and interact with functionality, much as RESTful APIs standardized web services.
This standardization is crucial. It moves us beyond ad-hoc prompting to a world where LLMs can reliably invoke functions, access data, and manage complex workflows. If you’re looking to build sophisticated agentic coding solutions or simply make your LLMs more effective, understanding MCP is non-negotiable.
The Core Pillars of MCP: Hosts, Clients, and Servers
Before we jump into the code, let’s establish the fundamental architecture of an MCP interaction:
- Hosts: These are the environments or platforms where MCP operations take place. Think of your Integrated Development Environment (IDE) like VS Code, or even a specialized agent orchestration framework.
- Clients: These are the tools that interact with an MCP server to request information or actions. A prime example is GitHub Copilot Chat, which acts as a powerful client, leveraging the capabilities exposed by MCP servers.
- Servers: The heart of MCP, these are the applications or services that expose specific functionalities, data, and predefined instructions to clients. Our focus in this MCP Python Tutorial will be on building robust Python-based MCP servers.
This clear separation of concerns allows for modular, scalable, and highly effective AI solutions. Now, let’s break down the essential components that your MCP server will expose.
Diving Deep: Prompts, Tools, and Resources
Within the MCP framework, there are three primary types of contextual information or functionality that a server can expose to an LLM client:
- Prompts: These are static, predefined templates for specific tasks. They act as reusable placeholders, ensuring consistent instruction delivery to the LLM. Think of them as pre-engineered queries designed for optimal LLM output.
- Tools: These are dynamic functions that an LLM can invoke to perform specific actions. Tools enable the LLM to interact with external systems – sending emails, updating databases, running tests, or even orchestrating other LLM calls.
- Resources: These are read-only data sources (like files, documents, or database schemas) that provide the LLM with relevant information. Resources are identified by URIs, allowing the LLM to access and understand contextual data without it being directly part of the prompt.
Ready to see how these come to life? Let’s begin our step-by-step MCP Python Tutorial.
Step 1: Setting Up Your First MCP Python Tutorial Server
Our journey begins by setting up the foundational Python server and configuring VS Code to recognize it.
1.1 Prerequisites and Installation
First, ensure you have Python installed. We highly recommend using uv for blazing-fast dependency management, especially if you’re working with a pyproject.toml file.
```shell
# Install the core MCP library
pip install mcp

# If you prefer using uv for project setup
# (assuming you have a pyproject.toml in your project):
# uv sync
```
1.2 Crafting Your Basic Server File (prompt_server.py)
Create a new Python file, prompt_server.py, which will house our first simple MCP server. We’ll start by defining a prompt.
```python
from mcp.server.fastmcp import FastMCP

# Initialize your MCP server with a unique name
mcp = FastMCP(name="learn_python_mcp")

@mcp.prompt()
async def generate_python_topics(level: str) -> str:
    """
    Generates five Python topics for someone new to programming,
    depending on their level of experience (beginner or intermediate).
    """
    return f"Generate five Python topics for someone who is a {level} to programming."

if __name__ == "__main__":
    # Run the MCP server (FastMCP manages its own event loop)
    mcp.run()
```
Notice the @mcp.prompt() decorator (note the parentheses). This is how the FastMCP library identifies functions that expose prompts to the client.
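Because a prompt handler is ultimately just a Python function that returns a string, you can sanity-check the template logic without any MCP client in the loop. Here is a dependency-free sketch of the same template (a standalone copy for illustration, not the registered server function):

```python
def generate_python_topics(level: str) -> str:
    """Mirror of the server's prompt template, for quick local checks."""
    return f"Generate five Python topics for someone who is a {level} to programming."

# Quick check that both supported levels produce sensible prompt text
for level in ("beginner", "intermediate"):
    print(generate_python_topics(level))
```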
1.3 Configuring VS Code with mcp.json
For VS Code and GitHub Copilot to discover and interact with your MCP server, you need a special configuration file.
- Create a folder named `.vscode` in the root directory of your project.
- Inside the `.vscode` folder, create a new file called `mcp.json`.
```json
{
  "inputs": [],
  "servers": {
    "learn_python_mcp": { // Keep this consistent with the name in your Python script
      "command": "/path/to/your/uv", // Replace with the actual path (e.g., /usr/local/bin/uv)
      "args": [
        "--directory",
        ".",
        "run",
        "prompt_server.py" // Path to your Python server file from the root
      ]
    }
  }
}
```
Important Notes for mcp.json:
- The server entry’s name (here `learn_python_mcp`) should match the `name` you provided when initializing `FastMCP` in your Python script.
- `"command"`: the full path to your `uv` executable. You can find this by running `which uv` (Linux/macOS) or `where uv` (Windows) in your terminal. If you don’t use `uv`, you can use `python` and adjust the arguments accordingly (e.g., `"command": "python"` with `["prompt_server.py"]`).
- `"args"`: the command-line arguments passed to your `command`. `"--directory", "."` tells `uv` to run from the current directory; `"run", "prompt_server.py"` executes your server file.
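If you would rather launch the server with plain Python instead of `uv`, a minimal variant of the configuration might look like this (the `python` command assumes your interpreter and dependencies are on the path; adjust to your environment):

```json
{
  "inputs": [],
  "servers": {
    "learn_python_mcp": {
      "command": "python",
      "args": ["prompt_server.py"]
    }
  }
}
```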
1.4 Launching Your MCP Server
Once you save the mcp.json file, VS Code should detect it and prompt you to start the MCP server. If it doesn’t, you might need to restart VS Code or manually ensure the server is running by executing the command from your terminal.
Step 2: Crafting Powerful Prompts for Your LLM
With our server running, let’s explore how to leverage the prompt we defined. Prompts are excellent for standardizing common LLM queries or providing consistent instructions.
2.1 Using Your Prompt in VS Code
- Open the GitHub Copilot Chat window in VS Code.
- Type a forward slash `/` into the chat input.
- You should now see your custom prompt, `/generate_python_topics`, appear in the list of available commands.
- Select `/generate_python_topics`. Copilot Chat will then prompt you to provide the `level` parameter.
- Enter `beginner` or `intermediate` and press Enter.
Copilot Chat will automatically fill out the prompt text based on your function’s return value. You can then send this to Copilot, and it will generate Python topics tailored to the specified level. This seamless interaction empowers users to trigger complex LLM behaviors with simple, predefined commands.
Step 3: Building Intelligent Tools for Dynamic Actions
While prompts are excellent for static instructions, tools enable your LLM to perform dynamic, action-oriented tasks. Think of tools as functions the LLM can “call” to interact with the outside world.
3.1 Creating Your Tools Server (tools_server.py)
Let’s expand our functionality by creating a new server file, tools_server.py, that includes both a prompt and a tool. This example will generate and store Python exercises.
```python
import json
from dataclasses import dataclass
from typing import Dict

from mcp.server.fastmcp import Context, FastMCP
from mcp.types import SamplingMessage, TextContent

mcp = FastMCP(name="learn_python_mcp")

@dataclass
class Exercise:
    title: str
    description: str
    hint: str
    solution: str
    difficulty: int

# A simple in-memory "database" for exercises
exercise_db: Dict[str, Exercise] = {}

@mcp.prompt()
def generate_exercise_prompt(topic: str, level: str) -> str:
    """
    Generates a Python exercise prompt given a topic and level, requesting JSON output.
    """
    return (
        f"Generate a Python exercise about {topic} for {level} developers. "
        "Return as JSON with keys: title, description, hint, solution, difficulty (1-5)."
    )

@mcp.tool()
async def generate_and_create_exercise(topic: str, level: str, ctx: Context) -> str:
    """
    Generates a Python exercise using the LLM and stores it in our "database."
    """
    prompt_text = generate_exercise_prompt(topic, level)
    # Use ctx.session.create_message to send the prompt to the client's LLM
    # (Copilot), which generates the exercise based on the prompt
    result = await ctx.session.create_message(
        messages=[
            SamplingMessage(
                role="user",
                content=TextContent(type="text", text=prompt_text),
            )
        ],
        max_tokens=500,
    )
    if result.content.type != "text":
        return "Failed to get a valid response from the LLM for exercise generation."
    try:
        # Assuming the LLM responds with a JSON string
        exercise_data = json.loads(result.content.text)
    except json.JSONDecodeError:
        return "Failed to parse exercise data from LLM response."
    exercise = Exercise(
        title=exercise_data.get("title", "Untitled Exercise"),
        description=exercise_data.get("description", ""),
        hint=exercise_data.get("hint", ""),
        solution=exercise_data.get("solution", ""),
        difficulty=exercise_data.get("difficulty", 1),
    )
    exercise_db[exercise.title] = exercise
    return (
        f"Successfully generated and stored exercise: "
        f"'{exercise.title}' (Difficulty: {exercise.difficulty})"
    )

@mcp.tool()
def list_exercises() -> str:
    """
    Lists all currently stored Python exercises.
    """
    if not exercise_db:
        return "No exercises stored yet. Generate some!"
    exercise_list = "\n".join(
        f"- {ex.title} (Difficulty: {ex.difficulty})" for ex in exercise_db.values()
    )
    return f"Here are the exercises I have:\n{exercise_list}"

if __name__ == "__main__":
    mcp.run()
```
Here, we introduce @mcp.tool(). The generate_and_create_exercise tool declares a Context parameter, which gives it access to MCP client functionality. Crucially, await ctx.session.create_message(...) allows our tool to send a prompt directly to the client’s underlying LLM (Copilot’s, in this case) and receive its response. This enables powerful LLM orchestration within your custom tools.
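One practical wrinkle when asking an LLM for JSON: models often wrap their answer in a Markdown code fence, which makes a naive `json.loads` fail. A small, dependency-free helper like the hypothetical `extract_json` below (not part of the MCP SDK) can make the parsing step in a tool like `generate_and_create_exercise` more robust:

```python
import json
import re

def extract_json(text: str) -> dict:
    """Parse a JSON object from an LLM reply, tolerating Markdown code fences."""
    # Strip a ```json ... ``` (or bare ```) fence if the model added one
    match = re.search(r"```(?:json)?\s*(.*?)\s*```", text, re.DOTALL)
    if match:
        text = match.group(1)
    return json.loads(text)

# A fenced reply, as LLMs commonly produce
reply = '```json\n{"title": "FizzBuzz", "difficulty": 2}\n```'
print(extract_json(reply)["title"])  # -> FizzBuzz
```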
3.2 Updating mcp.json for Tools
Modify your mcp.json file to point to your new tools_server.py file:
```json
{
  "inputs": [],
  "servers": {
    "learn_python_mcp": {
      "command": "/path/to/your/uv",
      "args": [
        "--directory",
        ".",
        "run",
        "tools_server.py" // Changed to your tools server file
      ]
    }
  }
}
```
Restart your MCP server in VS Code. You should now see multiple tools and prompts available when you interact with Copilot Chat.
3.3 Leveraging Tools in VS Code
Tools are designed for the LLM to call autonomously. You typically prompt Copilot Chat in natural language, and it determines which tool to use.
- In Copilot Chat, try phrases like: “Generate a challenging Python exercise on decorators for an experienced developer.”
- Or: “List all the Python exercises you currently have.”
Copilot Chat will likely recognize your intent and invoke the appropriate generate_and_create_exercise or list_exercises tool. You’ll see the tool being invoked in the chat interface.
Step 4: Leveraging Resources for Rich Context
Resources are file-like data sources that provide read-only information to the LLM. They are perfect for exposing structured data, documentation, or historical records that the LLM might need to answer questions or complete tasks.
4.1 Preparing Resource Files
Create some simple JSON files that our resource server will expose.
beginner_exercises.json:
```json
[
  {"title": "Basic Syntax", "description": "Write a 'Hello World' program.", "difficulty": 1},
  {"title": "Variables and Data Types", "description": "Declare variables of different types.", "difficulty": 1}
]
```
study_progress.json:
```json
{
  "Marlene": {"exercises_completed": 5, "last_activity": "2024-10-27", "favorite_topic": "Classes"},
  "Gwen": {"exercises_completed": 10, "last_activity": "2024-10-28", "favorite_topic": "Concurrency"}
}
```
4.2 Creating the Resource Server (resources_server.py)
Now, let’s create resources_server.py to expose these files.
```python
import json

from mcp.server.fastmcp import FastMCP

mcp = FastMCP(name="learn_python_mcp")

@mcp.resource("progress://{username}")
def get_study_progress(username: str) -> str:
    """
    Retrieves the study progress for a given username from 'study_progress.json'.
    """
    try:
        with open("study_progress.json", "r") as f:
            data = json.load(f)
        if username in data:
            return json.dumps(data[username], indent=2)
        return "User not found."
    except FileNotFoundError:
        return "Study progress file not found."
    except Exception as e:
        return f"Error reading study progress: {e}"

@mcp.resource("exercises://{level}")
def list_exercises_for_level(level: str) -> str:
    """
    Lists exercises from 'beginner_exercises.json' for a specific difficulty level.
    """
    try:
        with open("beginner_exercises.json", "r") as f:
            data = json.load(f)
        # Filter by the 'difficulty' key; pass "all" to return everything
        exercises = [
            ex for ex in data
            if str(ex.get("difficulty")) == level or level.lower() == "all"
        ]
        return json.dumps(exercises, indent=2)
    except FileNotFoundError:
        return "Exercises file not found."
    except Exception as e:
        return f"Error reading exercises: {e}"

@mcp.tool()
def check_user_progress(username: str) -> str:
    """
    A tool to check study progress using the 'get_study_progress' resource.
    """
    progress_data = get_study_progress(username)
    return f"Current progress for {username}:\n{progress_data}"

@mcp.tool()
def find_exercises_by_level(level: str) -> str:
    """
    A tool to find exercises by difficulty level using the 'list_exercises_for_level' resource.
    """
    exercises = list_exercises_for_level(level)
    return f"Exercises for {level} level:\n{exercises}"

if __name__ == "__main__":
    mcp.run()
```
The @mcp.resource decorator registers a function at a URI template (e.g., `progress://{username}`), whose parameters map onto the function’s arguments. While resources can be directly selected as context in Copilot Chat, it’s often more natural for an LLM to access them via a tool, as demonstrated by check_user_progress and find_exercises_by_level.
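To build intuition for how a client maps a concrete URI like `progress://Marlene` onto a templated handler, here is a minimal, dependency-free sketch of template matching (illustrative only; the MCP SDK’s actual routing is more involved):

```python
import re
from typing import Optional

def match_template(template: str, uri: str) -> Optional[dict]:
    """Match a URI such as 'progress://Marlene' against 'progress://{username}'."""
    # Turn each {param} placeholder into a named regex capture group
    pattern = re.sub(r"\{(\w+)\}", r"(?P<\1>[^/]+)", template)
    m = re.fullmatch(pattern, uri)
    return m.groupdict() if m else None

print(match_template("progress://{username}", "progress://Marlene"))  # -> {'username': 'Marlene'}
```

The extracted parameters (here `username`) would then be passed as keyword arguments to the decorated handler.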
4.3 Updating mcp.json for Resources
Change your mcp.json to point to resources_server.py:
```json
{
  "inputs": [],
  "servers": {
    "learn_python_mcp": {
      "command": "/path/to/your/uv",
      "args": [
        "--directory",
        ".",
        "run",
        "resources_server.py" // Changed to your resource server file
      ]
    }
  }
}
```
Restart your MCP server.
4.4 Accessing Resources in VS Code
You can expose resources to Copilot Chat in two main ways:
- Manually as Context: Click the “Add context” button in Copilot Chat, then select “MCP resources.” Choose your desired resource (e.g., `get_study_progress`) and provide any required parameters (like `username`). The resource’s content will be added to the chat’s context.
- Via a Tool (Recommended for Agentic Flow): Ask Copilot in natural language: “What is Marlene’s study progress?” or “Show me all beginner exercises.” Copilot (if properly configured and if your prompt is clear) should invoke the `check_user_progress` or `find_exercises_by_level` tool, which then accesses the resource.
Step 5: The Power of Sampling for Client-Side LLM Invocation
Sampling is an advanced MCP feature that allows your server to instruct the client (e.g., GitHub Copilot) to make an LLM call directly using its own configured model. This shifts the computational burden and token usage to the client side.
5.1 Implementing a Tool with Sampling
Let’s add a tool that uses sampling to generate research tips. Add this to your tools_server.py (or a new file, ensuring mcp.json is updated).
```python
# ... (existing imports and FastMCP initialization, including
#      Context, SamplingMessage, and TextContent) ...

@mcp.tool()
async def generate_research_tips(topic: str, ctx: Context) -> str:
    """
    Generates three insightful tips on researching a given topic by sampling the client's LLM.
    """
    prompt_for_llm = (
        f"Generate three actionable tips on how to effectively research the topic "
        f"of '{topic}'. Focus on practical advice for a developer."
    )
    # Sampling: ask the client (Copilot) to invoke its own LLM
    result = await ctx.session.create_message(
        messages=[
            SamplingMessage(
                role="user",
                content=TextContent(type="text", text=prompt_for_llm),
            )
        ],
        max_tokens=400,
    )
    if result.content.type == "text":
        return f"Here are some research tips for '{topic}':\n{result.content.text}"
    return "Sorry, I couldn't generate research tips at this time."

# ... (rest of your server code) ...
```
When this tool is invoked, Copilot Chat will ask the user for permission to make an LLM call (due to token usage implications). Once approved, Copilot’s integrated LLM will process the prompt_for_llm and return the generated tips.
5.2 Using Sampling in VS Code
- Ensure your `mcp.json` points to the server file containing `generate_research_tips`.
- In Copilot Chat, type: “Give me three tips on researching ‘Quantum Computing’.”
- Copilot will inform you that the MCP server is requesting an LLM call. Grant permission.
- Observe as the LLM generates the tips directly within Copilot Chat.
Step 6: Putting It All Together – The Study Buddy App (Your Homework!)
The true power of MCP shines when you combine prompts, tools, resources, and sampling into a cohesive application. The original session demonstrated a “Study Buddy” application that guides users through learning Python. While the full implementation wasn’t shown live due to time constraints, the core concepts are available.
Your challenge for this MCP Python Tutorial is to build out or complete this application.
You can find the starting code and examples in the official GitHub repository for this session: https://github.com/Azure-Samples/learn-mcp-python.
Specifically, explore the server_part_two.py file within the study_buddy folder. This file is designed to integrate the concepts of generating topics (prompts), creating and listing exercises (tools), and tracking user progress (resources). Your task is to ensure all these pieces work together seamlessly, potentially refining the copilot_instructions.py file to guide Copilot Chat’s behavior in agentic mode.
Best Practices for Your MCP Python Tutorial Journey
As you delve deeper into MCP development, keep these best practices in mind:
- Be Opinionated with Prompts: Don’t be afraid to design your prompts with specific instructions and expected output formats. This guides the LLM to provide more reliable and consistent results.
- Iterate and Experiment: The field of LLM engineering is nascent. You won’t get the perfect prompt, tool, or resource design on the first try. Continuously refine your implementations based on the LLM’s responses. Find the “sweet spot” balance between your model, your prompt, and the context you provide.
- Manage Your Tools: In your `mcp.json` configuration, you can select which specific tools from connected MCP servers (like GitHub MCP or Hugging Face MCP) you want to enable. Only enable what’s necessary to reduce LLM confusion and improve performance.
- Prioritize Security: Especially when dealing with resources that access sensitive data (like databases), implement robust security measures. Always apply the same security principles you would to any other application development. Be cautious when exposing write access to critical systems via LLM-invoked tools, and always validate inputs.
Beyond Python: The Growing MCP Ecosystem
While this MCP Python Tutorial focused on Python, the MCP ecosystem is expanding rapidly. Microsoft offers various options for hosts and clients, including Visual Studio, Windows, and Teams, alongside tools for C# (with a dedicated C# MCP SDK), JavaScript, Azure API Management, and Azure Functions to build your servers. The future of intelligent applications is cross-platform and language-agnostic.
Conclusion: Empowering Your AI with Context and Action
The Model Context Protocol represents a powerful leap forward in how we build and interact with AI. By mastering MCP, you gain the ability to infuse your LLM applications with dynamic context, enable them to perform complex actions, and create truly intelligent workflows that save time and enhance efficiency.
This MCP Python Tutorial has equipped you with the foundational knowledge and practical steps to get started. Now, it’s your turn to experiment, build, and innovate. The possibilities are limitless. Dive into the provided code, complete the Study Buddy app, and start transforming your AI projects today!