COMPREHENSIVE GUIDES
AI Setup Guides
Step-by-step tutorials for setting up every major AI tool. From local models to cloud APIs, get started in minutes.
Table of Contents
- Set Up Claude Code
- Set Up Claude Desktop with MCP
- Install and Use Ollama
- Use DeepSeek (Free AI)
- Set Up OpenClaw Monitoring
- Run Llama Locally
- Use GitHub Copilot
- Use Cursor AI
- Use GPT-4 API
- Set Up Google Gemini API
- Use Hugging Face Models
- Install LM Studio
- Use Perplexity AI
- Use OpenRouter (Unified AI API)
- Set Up MCP Servers
How to Set Up Claude Code
Steps
- Install Node.js 18+ from nodejs.org if you don’t already have it.
- Open your terminal and run: npm install -g @anthropic-ai/claude-code
- Get an API key from console.anthropic.com and set it: export ANTHROPIC_API_KEY=your-key
- Navigate to your project directory and run claude to start an interactive session.
- Use natural language to describe what you want to build, fix, or refactor. Claude Code reads your codebase, edits files, and runs commands autonomously.
- For advanced usage, explore slash commands like /init to create a CLAUDE.md project file, and MCP server integrations for connecting external tools.
XLUXX Tip: XLUXX can score the MCP servers you connect to Claude Code, helping you verify third-party tool reliability before granting them access to your codebase.
How to Set Up Claude Desktop with MCP
Steps
- Download Claude Desktop from claude.ai/download for macOS or Windows.
- Install and sign in with your Anthropic account.
- Open Settings (gear icon) and navigate to the Developer section.
- Click “Edit Config” to open claude_desktop_config.json in your text editor.
- Add MCP server configurations. Example for a filesystem server:
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/dir"]
    }
  }
}
- Save the config file and restart Claude Desktop. Your MCP tools will appear in the tools menu (hammer icon).
XLUXX Tip: Before connecting any MCP server, use the XLUXX API to check its trust score. Run pip install xluxx && xluxx score server-name to verify safety and reliability.
How to Install and Use Ollama
Steps
- Download Ollama from ollama.com/download for your OS (macOS, Linux, or Windows).
- Run the installer. On Linux, use: curl -fsSL https://ollama.com/install.sh | sh
- Pull a model to run locally: ollama pull llama3.1 (or mistral, codellama, gemma2, etc.)
- Start chatting: ollama run llama3.1 launches an interactive session in your terminal.
- For API access, Ollama runs a local server at http://localhost:11434. Use it with any OpenAI-compatible client (see the example after this list).
- Manage models with ollama list and ollama rm model-name, and create custom models with a Modelfile.
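Because that endpoint is OpenAI-compatible, the official openai Python library can talk to a local model. A minimal sketch, assuming Ollama is running and llama3.1 is pulled; the api_key value is a placeholder, since Ollama ignores it:
from openai import OpenAI

# Point the OpenAI client at the local Ollama server.
client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")  # key is unused locally

response = client.chat.completions.create(
    model="llama3.1",
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response.choices[0].message.content)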
XLUXX Tip: Running AI locally? XLUXX helps you compare model reliability across providers so you can choose the most consistent model for your use case.
How to Use DeepSeek (Free AI)
Steps
- Visit chat.deepseek.com and create a free account.
- Start chatting immediately in the web interface. DeepSeek-V3 and DeepSeek-R1 (reasoning model) are available for free.
- For API access, go to platform.deepseek.com and generate an API key.
- Install the Python client: pip install openai (DeepSeek uses an OpenAI-compatible API).
- Point your client to DeepSeek's endpoint: base_url="https://api.deepseek.com" with your API key (see the example after this list).
- DeepSeek-R1 excels at math and reasoning tasks. Use it when you need step-by-step logical analysis.
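A minimal sketch of that setup, assuming a key from platform.deepseek.com stored in the DEEPSEEK_API_KEY environment variable; deepseek-chat maps to DeepSeek-V3 and deepseek-reasoner to DeepSeek-R1:
import os
from openai import OpenAI

# DeepSeek exposes an OpenAI-compatible endpoint, so the openai library works as-is.
client = OpenAI(base_url="https://api.deepseek.com", api_key=os.environ["DEEPSEEK_API_KEY"])

response = client.chat.completions.create(
    model="deepseek-chat",  # use "deepseek-reasoner" for R1-style step-by-step reasoning
    messages=[{"role": "user", "content": "What is 17 * 23?"}],
)
print(response.choices[0].message.content)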
XLUXX Tip: Use XLUXX to monitor DeepSeek API uptime and response quality over time, ensuring consistent performance for production workloads.
How to Set Up OpenClaw Monitoring
Steps
- Visit openclaw.org and review the open-source AI monitoring framework.
- Clone the repository: git clone https://github.com/openclaw/openclaw.git
- Install dependencies: cd openclaw && pip install -e .
- Configure your monitoring targets in config.yaml — specify which AI APIs and MCP servers to track.
- Launch the dashboard: openclaw serve and open http://localhost:8080 in your browser.
- Set up alerts for anomalies in response quality, latency spikes, or trust score changes.
XLUXX Tip: OpenClaw integrates natively with XLUXX trust scores. Enable the XLUXX plugin to overlay runtime trust data on your monitoring dashboards.
How to Run Llama Locally
Steps
- Choose your runtime: Ollama (easiest), llama.cpp (fastest), or Hugging Face Transformers (most flexible).
- For Ollama: ollama pull llama3.1:70b (or :8b for smaller GPUs).
- For llama.cpp: clone the repo, build with make, download a GGUF model from Hugging Face, and run with ./llama-cli -m model.gguf -p "Your prompt"
- Ensure you have enough VRAM: 8B models need ~6GB, 70B models need ~40GB (or use quantized versions).
- For a web UI, pair with Open WebUI: docker run -p 3000:8080 ghcr.io/open-webui/open-webui:main
- Connect Open WebUI to your local Ollama instance for a ChatGPT-like interface running entirely on your hardware.
XLUXX Tip: XLUXX benchmarks quantized vs. full-precision model outputs. Check trust scores to understand how quantization affects reliability for your specific tasks.
How to Use GitHub Copilot
Steps
- Sign up for GitHub Copilot at github.com/features/copilot (free for students and open-source maintainers).
- Install the GitHub Copilot extension in VS Code from the Extensions marketplace.
- Sign in with your GitHub account when prompted.
- Start coding — Copilot will suggest completions as you type. Press Tab to accept a suggestion.
- Use Ctrl+Enter (or Cmd+Enter) to open the Copilot suggestions panel with multiple alternatives.
- Try Copilot Chat (sidebar) for conversational coding help: ask it to explain code, write tests, or refactor functions.
XLUXX Tip: XLUXX can help you evaluate whether Copilot’s suggestions meet your code quality standards by scoring AI-generated code patterns across your repositories.
How to Use Cursor AI
Steps
- Download Cursor from cursor.com — it’s a VS Code fork with built-in AI.
- Install and import your VS Code settings, extensions, and keybindings during setup.
- Open a project and press Ctrl+K (or Cmd+K) to edit code with AI — describe what you want changed.
- Use Ctrl+L (or Cmd+L) to open the AI chat panel for longer conversations about your codebase.
- Tag files with @filename in chat to give the AI context about specific files.
- Configure your preferred AI model (Claude, GPT-4, etc.) in Settings > Models. Cursor supports multiple providers.
XLUXX Tip: Use XLUXX to compare the reliability of different AI backends available in Cursor, helping you choose the most consistent model for code generation.
How to Use GPT-4 API
Steps
- Create an account at platform.openai.com and add billing information.
- Generate an API key from the API Keys section in your dashboard.
- Install the Python library: pip install openai
- Set your key: export OPENAI_API_KEY=your-key
- Make your first API call:
from openai import OpenAI
client = OpenAI()
response = client.chat.completions.create(model="gpt-4o", messages=[{"role": "user", "content": "Hello"}])
print(response.choices[0].message.content)
- Explore function calling, vision (image input), and structured outputs for production applications (a function-calling sketch follows this list).
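To give a taste of function calling, a minimal sketch, assuming OPENAI_API_KEY is set; the get_weather tool, its description, and its parameters are hypothetical placeholders for your own function schema:
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Describe a hypothetical tool the model is allowed to call.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical example function
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
)
# If the model decided to call the tool, its name and JSON arguments appear here.
print(response.choices[0].message.tool_calls)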
XLUXX Tip: XLUXX monitors GPT-4 API reliability metrics including uptime, latency percentiles, and response quality — critical data for production deployments.
How to Set Up Google Gemini API
Steps
- Go to aistudio.google.com and sign in with your Google account.
- Click “Get API Key” to generate a free Gemini API key.
- Install the SDK: pip install google-genai
- Set your key: export GEMINI_API_KEY=your-key
- Make a request:
from google import genai
client = genai.Client()
response = client.models.generate_content(model="gemini-2.0-flash", contents="Hello")
print(response.text)
- Gemini supports multimodal inputs (text, images, video, audio) and has a generous free tier of 15 RPM for Gemini Pro. A multimodal sketch follows this list.
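For multimodal input, a minimal sketch, assuming a local image file; the name photo.jpg is a placeholder:
from google import genai
from google.genai import types

client = genai.Client()  # reads GEMINI_API_KEY from the environment

# Read raw image bytes; photo.jpg stands in for any local image.
with open("photo.jpg", "rb") as f:
    image_bytes = f.read()

response = client.models.generate_content(
    model="gemini-2.0-flash",
    contents=[types.Part.from_bytes(data=image_bytes, mime_type="image/jpeg"), "Describe this image."],
)
print(response.text)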
XLUXX Tip: XLUXX tracks Gemini API trust scores alongside other providers, giving you an apples-to-apples reliability comparison for multi-provider architectures.
How to Use Hugging Face Models
Steps
- Create a free account at huggingface.co.
- Install Transformers: pip install transformers torch
- Load any model in a few lines:
from transformers import pipeline
gen = pipeline("text-generation", model="meta-llama/Llama-3.1-8B-Instruct")
print(gen("Hello world"))
- For hosted inference, get an API token from Settings > Access Tokens.
- Use the Inference API: pip install huggingface_hub and call InferenceClient("model-name").text_generation("prompt") (see the example after this list).
- Browse 500,000+ models at huggingface.co/models filtered by task, language, and license.
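A minimal sketch of the hosted route, assuming a token stored in the HF_TOKEN environment variable and a model enabled for serverless inference:
import os
from huggingface_hub import InferenceClient

# Token comes from huggingface.co Settings > Access Tokens.
client = InferenceClient(model="meta-llama/Llama-3.1-8B-Instruct", token=os.environ["HF_TOKEN"])

# Runs on Hugging Face's hosted infrastructure instead of your own GPU.
print(client.text_generation("Explain MCP in one sentence.", max_new_tokens=80))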
XLUXX Tip: XLUXX catalogs thousands of Hugging Face model servers. Check trust scores before deploying community models in production.
How to Install LM Studio
Steps
- Download LM Studio from lmstudio.ai — available for macOS, Windows, and Linux.
- Install and launch the app. Browse the built-in model catalog to find models.
- Search for a model (e.g., “Llama 3.1” or “Mistral”) and click Download. LM Studio handles GGUF format automatically.
- Go to the Chat tab, select your downloaded model, and start chatting with a local AI.
- For API access, go to the Server tab and click “Start Server.” It runs an OpenAI-compatible endpoint at http://localhost:1234.
- Use LM Studio’s server with any OpenAI-compatible client by changing the base URL to your local endpoint (see the example after this list).
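A minimal sketch, assuming the server is running with a model loaded; LM Studio ignores the API key, and the model name below is an assumption, so copy the exact ID shown in the Server tab:
from openai import OpenAI

# LM Studio's local server speaks the OpenAI protocol; the key is a placeholder.
client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

response = client.chat.completions.create(
    model="llama-3.1-8b-instruct",  # assumed name; use the exact ID from the Server tab
    messages=[{"role": "user", "content": "Hello from my own hardware!"}],
)
print(response.choices[0].message.content)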
XLUXX Tip: XLUXX can benchmark your local LM Studio models against cloud APIs, helping you understand the reliability trade-offs of running AI locally.
How to Use Perplexity AI
Steps
- Visit perplexity.ai and start asking questions — no account required for basic use.
- Create a free account for conversation history, collections, and file uploads.
- Use Focus modes to control search scope: All, Academic, Writing, Math, Video, or Social.
- For the API, sign up at docs.perplexity.ai and get an API key.
- Install and use: pip install openai with base_url="https://api.perplexity.ai" and model "sonar" (see the example after this list).
- Perplexity excels at research tasks with cited sources. Use it when you need fact-checked, referenced answers.
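A minimal sketch of that setup, assuming a key from the Perplexity dashboard stored in the PERPLEXITY_API_KEY environment variable:
import os
from openai import OpenAI

# Perplexity's API is OpenAI-compatible, so only the base URL and model change.
client = OpenAI(base_url="https://api.perplexity.ai", api_key=os.environ["PERPLEXITY_API_KEY"])

response = client.chat.completions.create(
    model="sonar",
    messages=[{"role": "user", "content": "What is the Model Context Protocol? Cite sources."}],
)
print(response.choices[0].message.content)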
XLUXX Tip: XLUXX monitors Perplexity API endpoints for uptime and citation accuracy, ensuring the research data you depend on is reliably sourced.
How to Use OpenRouter (Unified AI API)
Steps
- Sign up at openrouter.ai and add credits to your account.
- Generate an API key from the Keys section in your dashboard.
- Install the OpenAI library: pip install openai
- Point to OpenRouter:
from openai import OpenAI
client = OpenAI(base_url="https://openrouter.ai/api/v1", api_key="your-key")
response = client.chat.completions.create(model="anthropic/claude-sonnet-4", messages=[...])
- Access 200+ models from every major provider through one API. Switch models by changing the model string (see the sketch after this list).
- Use the rankings page to compare model performance and pricing.
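To illustrate switching, a minimal sketch; the two model IDs are examples and should be verified against openrouter.ai/models:
from openai import OpenAI

client = OpenAI(base_url="https://openrouter.ai/api/v1", api_key="your-key")

# One client, many providers: only the model string changes per request.
for model in ["anthropic/claude-sonnet-4", "openai/gpt-4o"]:  # example IDs; verify on openrouter.ai/models
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": "Summarize MCP in one sentence."}],
    )
    print(model, "->", response.choices[0].message.content)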
XLUXX Tip: XLUXX trust scores complement OpenRouter’s rankings. Use both together to make informed decisions about which models to route your traffic to.
How to Set Up MCP Servers
Steps
- MCP (Model Context Protocol) lets AI assistants connect to external tools and data sources securely.
- Browse available MCP servers at the official MCP servers repository.
- Install a server using npx (no install needed): npx -y @modelcontextprotocol/server-filesystem /path
- Configure it in your AI client (Claude Desktop, Claude Code, etc.) by editing the MCP config file.
- Popular servers include: filesystem, GitHub, Slack, PostgreSQL, Google Drive, and Brave Search.
- For custom integrations, build your own MCP server using the MCP SDK in Python or TypeScript.
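For the Python route, a minimal sketch using the official mcp SDK's FastMCP helper (pip install mcp); the add tool is a hypothetical example:
from mcp.server.fastmcp import FastMCP

# Name the server as it will appear to connected AI clients.
mcp = FastMCP("demo-server")

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two numbers."""  # hypothetical example tool
    return a + b

if __name__ == "__main__":
    mcp.run()  # defaults to stdio transport, which MCP clients launch as a subprocess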
XLUXX Tip: XLUXX is purpose-built for MCP. Score any server before connecting with npx xluxx-mcp-server, or use our API to get real-time trust scores for 15,000+ cataloged servers.
Built by XLUXX — Runtime Trust Scoring for AI Agents
Last updated: April 2026
