LangChain MCP Integration: Complete Tutorial

LangChain is one of the most popular frameworks for building LLM-powered applications. By integrating MCP (Model Context Protocol) servers, you can give LangChain agents access to a standardized, cross-platform ecosystem of tools. This tutorial covers the full setup.

Why MCP + LangChain?

LangChain has its own tool system, but MCP tools offer advantages: they are reusable across different AI platforms, follow a security-first design, and have a growing ecosystem of pre-built servers. One MCP server works with Claude, LangChain, CrewAI, and any MCP-compatible client.

Step 1: Install Dependencies

pip install langchain langchain-openai langchain-mcp-adapters

Install MCP servers you want to use:

# Filesystem tools
npm install -g @modelcontextprotocol/server-filesystem

# GitHub integration
npm install -g @modelcontextprotocol/server-github

# PostgreSQL access
npm install -g @modelcontextprotocol/server-postgres
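Before wiring these servers into an agent, it can help to confirm the launcher commands they rely on are actually on your PATH. A minimal stdlib sketch (the command list is illustrative; adjust it to the servers you installed):

```python
import shutil

def missing_commands(commands):
    """Return the subset of commands that cannot be found on PATH."""
    return [cmd for cmd in commands if shutil.which(cmd) is None]

# Launchers the servers above rely on (adjust to your setup)
required = ["npx", "node", "python3"]
not_found = missing_commands(required)
if not_found:
    print(f"Missing launchers: {', '.join(not_found)}")
else:
    print("All server launchers found on PATH")
```

Running this once at startup gives a clearer error than a cryptic subprocess failure when the agent first tries to spawn a server.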

Step 2: Create the MCP-to-LangChain Adapter

from langchain_mcp_adapters.client import MultiServerMCPClient

# Define MCP server connections (each server is spawned as a subprocess over stdio)
client = MultiServerMCPClient({
    "filesystem": {
        "command": "npx",
        "args": ["-y", "@modelcontextprotocol/server-filesystem", "/workspace"],
        "transport": "stdio",
    }
})

# Get LangChain-compatible tools (async: run inside an event loop,
# e.g. a notebook or an asyncio.run() entry point)
tools = await client.get_tools()
print(f"Loaded {len(tools)} tools from MCP server")
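Conceptually, the adapter works because an MCP server describes each tool with a name, a description, and a JSON Schema for its inputs, which maps directly onto a LangChain structured tool definition. A toy stdlib illustration of that mapping (the `read_file` descriptor below is hypothetical; real servers publish their own):

```python
def mcp_tool_to_spec(mcp_tool: dict) -> dict:
    """Flatten an MCP tool descriptor into a simple tool spec."""
    schema = mcp_tool.get("inputSchema", {})
    return {
        "name": mcp_tool["name"],
        "description": mcp_tool.get("description", ""),
        "args": list(schema.get("properties", {}).keys()),
        "required": schema.get("required", []),
    }

# Hypothetical descriptor in the shape MCP servers return from tools/list
read_file = {
    "name": "read_file",
    "description": "Read a file from the workspace",
    "inputSchema": {
        "type": "object",
        "properties": {"path": {"type": "string"}},
        "required": ["path"],
    },
}
print(mcp_tool_to_spec(read_file))
```

The adapter library does this conversion for you, plus argument validation and async invocation; this sketch only shows why no per-tool glue code is needed.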

Step 3: Build a LangChain Agent with MCP Tools

from langchain_openai import ChatOpenAI
from langchain.agents import create_tool_calling_agent, AgentExecutor
from langchain_core.prompts import ChatPromptTemplate

llm = ChatOpenAI(model="gpt-4o")

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant with access to file tools."),
    ("human", "{input}"),
    ("placeholder", "{agent_scratchpad}")
])

agent = create_tool_calling_agent(llm, tools, prompt)
executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

# MCP tools are async, so invoke the executor asynchronously
result = await executor.ainvoke({"input": "List all Python files in the workspace"})
print(result["output"])
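Under the hood, the executor runs a loop: the model emits a tool call, the executor dispatches it to the matching tool, and the result is fed back until the model produces a final answer. A toy sketch of the dispatch step (the tool table and call below are hypothetical stand-ins, not real MCP tools):

```python
def dispatch(tool_call: dict, tools: dict):
    """Look up a tool by name and invoke it with the model's arguments."""
    name, args = tool_call["name"], tool_call["args"]
    if name not in tools:
        raise KeyError(f"Unknown tool: {name}")
    return tools[name](**args)

# Hypothetical tool table and model-emitted call
tools = {"list_files": lambda directory: ["main.py", "utils.py"]}
call = {"name": "list_files", "args": {"directory": "/workspace"}}
print(dispatch(call, tools))
```

AgentExecutor adds retries, scratchpad bookkeeping, and error handling on top, but the core routing is this simple name-based lookup.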

Step 4: Use XLUXX for Tool Selection

When your agent needs tools dynamically, use the XLUXX resolver to pick the best MCP server for each task:

pip install xluxx

import xluxx

# Find the highest-trust MCP server for a capability
result = xluxx.resolve("database query postgresql")
print(f"Recommended: {result.server_name} (trust: {result.trust_score})")
print(f"Install: {result.install_command}")

Or query the API directly:

curl "https://api.xluxx.net/v1/tools?q=postgresql&sort=trust_score"

Multiple MCP Servers in One Agent

from langchain_mcp_adapters.client import MultiServerMCPClient

client = MultiServerMCPClient({
    "filesystem": {
        "command": "npx",
        "args": ["-y", "@modelcontextprotocol/server-filesystem", "/data"],
        "transport": "stdio",
    },
    "github": {
        "command": "npx",
        "args": ["-y", "@modelcontextprotocol/server-github"],
        "transport": "stdio",
    },
})

# One call gathers tools from every configured server
all_tools = await client.get_tools()

# Agent now has filesystem + GitHub tools
executor = AgentExecutor(agent=create_tool_calling_agent(llm, all_tools, prompt), tools=all_tools)
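One practical wrinkle with multiple servers: two servers can expose tools with the same name (both filesystem and GitHub servers commonly have a "search"), and the agent needs unambiguous names. A common fix is to prefix each tool name with its server name; a toy version operating on plain name lists (the server and tool names are illustrative):

```python
def namespace_tools(tools_by_server: dict) -> list:
    """Prefix tool names with their server name to avoid collisions."""
    return [f"{server}__{name}"
            for server, names in tools_by_server.items()
            for name in names]

combined = namespace_tools({
    "filesystem": ["read_file", "search"],
    "github": ["create_issue", "search"],
})
print(combined)
```

With real adapter tools you would rename the tool objects the same way before passing them to the agent, so the model can address each server's "search" separately.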

Best Practices

  • Limit tool count per agent to avoid confusion (10-15 max)
  • Use XLUXX trust scores to vet servers before production use
  • Pin MCP server versions in production
  • Monitor tool call logs for unexpected behavior
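In practice, "limit tool count" usually means an explicit allowlist rather than handing the agent everything a server exposes. A minimal sketch over tool names (the cap and the names are illustrative, not part of any API):

```python
MAX_TOOLS = 15  # illustrative cap, per the guideline above

def select_tools(tool_names: list, allowlist: set, cap: int = MAX_TOOLS) -> list:
    """Keep only allowlisted tools, capped at a fixed count."""
    return [name for name in tool_names if name in allowlist][:cap]

available = ["read_file", "write_file", "delete_file", "search", "move_file"]
picked = select_tools(available, allowlist={"read_file", "search"})
print(picked)
```

Filtering by name like this also doubles as a safety measure: destructive tools such as delete_file never reach the agent unless you opt in.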
