Why Claude for Your AI Applications
Anthropic's Claude consistently ranks among the top AI models for code, writing, and complex instruction following. In 2026, claude-opus-4-6 and claude-sonnet-4-6 are the preferred choices for many production applications, particularly where:
- Long documents need to be processed (200K context window)
- Complex, multi-step instructions must be followed precisely
- Writing quality is a primary concern — Claude's output sounds more natural
- Safety and refusal behavior needs to be predictable and calibrated
- Code review and generation require deep code understanding
You don't have to choose between Claude and OpenAI — many production applications use both, routing tasks to the best model for each use case.
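Many teams encode that routing decision in a small lookup table. A minimal sketch — the task labels and model assignments below are illustrative assumptions, not recommendations:

```python
# Illustrative router: map task categories to the model suited for them.
# The category names and assignments are assumptions for demonstration.
ROUTES = {
    "long_document_analysis": "claude-sonnet-4-6",   # large context window
    "customer_chat": "claude-sonnet-4-6",            # writing quality matters
    "bulk_classification": "gpt-4o-mini",            # cheap, high volume
    "complex_reasoning": "claude-opus-4-6",          # most demanding tasks
}

def pick_model(task: str) -> str:
    """Return the model routed for a task, defaulting to the workhorse tier."""
    return ROUTES.get(task, "claude-sonnet-4-6")
```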
Getting an API Key
To access the Claude API:
- Go to console.anthropic.com and create an account
- Navigate to API Keys and click "Create Key"
- Give it a name and copy the key (starts with sk-ant-...)
- Add a payment method to unlock full API access beyond the free tier
New accounts get $5 in free API credits — enough to test all the examples in this tutorial.
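Avoid hard-coding the key in source files — read it from an environment variable instead (the anthropic SDK also picks up ANTHROPIC_API_KEY automatically if you pass no key). A small sketch with a sanity check:

```python
import os

def load_api_key() -> str:
    """Read the Anthropic API key from the environment and sanity-check it.

    Keeping keys out of source code avoids committing them by accident.
    """
    key = os.environ.get("ANTHROPIC_API_KEY", "")
    if not key.startswith("sk-ant-"):
        raise RuntimeError("Set ANTHROPIC_API_KEY to a key starting with sk-ant-")
    return key
```

You can then construct the client with `anthropic.Anthropic(api_key=load_api_key())`.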
Understanding Claude Models in 2026
Anthropic's model lineup is organized around three tiers:
- claude-opus-4-6 — Flagship model. Best reasoning, writing, and complex tasks. $15/M input tokens. Use for your most demanding applications.
- claude-sonnet-4-6 — Best balance of capability and cost. $3/M input tokens. The workhorse for most production applications.
- claude-haiku-4-5 — Fastest and cheapest. $0.25/M input tokens. Perfect for high-volume tasks where speed matters more than depth.
Rule of thumb: Start with claude-sonnet-4-6 for most tasks. If output quality isn't sufficient, move up to claude-opus-4-6. If you need to cut costs on high-volume operations, try claude-haiku-4-5.
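Those input-token prices translate directly into per-request cost. A back-of-the-envelope calculator (input prices from the tier list above; output-token prices, which differ, are omitted for simplicity):

```python
# Input-token prices per million tokens, from the tier list above.
INPUT_PRICE_PER_M = {
    "claude-opus-4-6": 15.00,
    "claude-sonnet-4-6": 3.00,
    "claude-haiku-4-5": 0.25,
}

def input_cost(model: str, input_tokens: int) -> float:
    """Estimated input cost in dollars for one request."""
    return INPUT_PRICE_PER_M[model] * input_tokens / 1_000_000

# For 10,000 requests at ~2K input tokens each:
# Haiku ≈ $5, Sonnet ≈ $60, Opus ≈ $300 — a 60x spread across tiers.
```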
Messages API: The Basics
Claude uses the Messages API — similar to OpenAI's Chat Completions but with some differences.
import anthropic
client = anthropic.Anthropic(api_key="sk-ant-your-key")
message = client.messages.create(
model="claude-sonnet-4-6-20250219",
max_tokens=1024,
messages=[
{"role": "user", "content": "Explain quantum entanglement simply."}
]
)
print(message.content[0].text)
Key Differences from OpenAI
- No "system" role in messages — Claude uses a separate system parameter
- Required max_tokens — you must specify max output tokens (OpenAI has a default)
- Response format — message.content[0].text vs OpenAI's choices[0].message.content
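To make the response-shape difference concrete, here is a small helper that pulls the reply text out of either format. Plain dicts stand in for the SDK response objects (which use attribute access) so the comparison is self-contained:

```python
def extract_text(response: dict) -> str:
    """Pull the reply text from a Claude- or OpenAI-shaped response.

    Claude Messages API: response["content"][0]["text"]
    OpenAI Chat Completions: response["choices"][0]["message"]["content"]
    Dicts stand in here for the SDKs' attribute-access objects.
    """
    if "content" in response:  # Claude Messages API shape
        return response["content"][0]["text"]
    return response["choices"][0]["message"]["content"]  # OpenAI shape

claude_style = {"content": [{"type": "text", "text": "hi"}]}
openai_style = {"choices": [{"message": {"content": "hi"}}]}
```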
System Prompts and Instructions
Claude's system parameter is where you define the AI's role, constraints, and context. Claude is particularly good at following detailed, multi-part instructions:
message = client.messages.create(
model="claude-sonnet-4-6-20250219",
max_tokens=2048,
system="""You are an expert code reviewer for Python applications.
Your review process:
1. Check for security vulnerabilities (SQL injection, XSS, auth issues)
2. Identify performance bottlenecks
3. Note style and readability issues
4. Suggest specific improvements
Format your response as:
## Security Issues (critical)
[issues if any]
## Performance Concerns
[issues if any]
## Code Quality
[issues if any]
## Recommended Changes
[specific code suggestions]""",
messages=[
{"role": "user", "content": f"Review this code:\n{code_to_review}"}
]
)
Tool Use (Function Calling)
Claude supports tool use — the ability to call external functions and use the results in responses. This is how you build Claude-powered agents:
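The walkthrough below executes a get_stock_price function that the snippets never define. A minimal stand-in — hypothetical, with a fixed quote table; a real version would query a market-data API — keeps the loop runnable end to end:

```python
def get_stock_price(ticker: str) -> float:
    """Hypothetical stub: return a price for a ticker symbol.

    A real implementation would call a market-data API; this fixed table
    exists only so the tool-use example below can run end to end.
    """
    fake_quotes = {"AAPL": 227.50, "TSLA": 250.10}
    if ticker not in fake_quotes:
        raise ValueError(f"Unknown ticker: {ticker}")
    return fake_quotes[ticker]
```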
tools = [
{
"name": "get_stock_price",
"description": "Get the current stock price for a given ticker symbol",
"input_schema": {
"type": "object",
"properties": {
"ticker": {
"type": "string",
"description": "Stock ticker symbol (e.g., AAPL, TSLA)"
}
},
"required": ["ticker"]
}
}
]
response = client.messages.create(
model="claude-sonnet-4-6-20250219",
max_tokens=1024,
tools=tools,
messages=[
{"role": "user", "content": "What's Apple's current stock price?"}
]
)
# Check if Claude wants to use a tool
if response.stop_reason == "tool_use":
tool_use = next(b for b in response.content if b.type == "tool_use")
tool_name = tool_use.name # "get_stock_price"
tool_input = tool_use.input # {"ticker": "AAPL"}
# Execute the tool
result = get_stock_price(tool_input["ticker"])
# Send result back to Claude
final_response = client.messages.create(
model="claude-sonnet-4-6-20250219",
max_tokens=1024,
tools=tools,
messages=[
{"role": "user", "content": "What's Apple's current stock price?"},
{"role": "assistant", "content": response.content},
{"role": "user", "content": [{
"type": "tool_result",
"tool_use_id": tool_use.id,
"content": str(result)
}]}
]
)
Vision: Analyzing Images
Claude can analyze images — screenshots, photos, diagrams, charts — with high accuracy:
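The image block in the request below requires an explicit media_type. A small helper can derive it from the filename — assuming the formats Anthropic's docs list as accepted at the time of writing (JPEG, PNG, GIF, WebP):

```python
import mimetypes

# Image formats the Claude API accepts (per Anthropic's docs at time of writing).
SUPPORTED = {"image/jpeg", "image/png", "image/gif", "image/webp"}

def image_media_type(path: str) -> str:
    """Guess the media_type for an image file, rejecting unsupported formats."""
    guessed, _ = mimetypes.guess_type(path)
    if guessed not in SUPPORTED:
        raise ValueError(f"Unsupported image type for {path}: {guessed}")
    return guessed
```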
import base64
# Read and encode image
with open("screenshot.png", "rb") as f:
image_data = base64.standard_b64encode(f.read()).decode("utf-8")
message = client.messages.create(
model="claude-sonnet-4-6-20250219",
max_tokens=1024,
messages=[
{
"role": "user",
"content": [
{
"type": "image",
"source": {
"type": "base64",
"media_type": "image/png",
"data": image_data,
},
},
{
"type": "text",
"text": "What errors or issues do you see in this screenshot?"
}
],
}
],
)
# Also supports URLs directly:
{
"type": "image",
"source": {
"type": "url",
"url": "https://example.com/chart.png"
}
}
Document Processing with PDFs
Claude's 200K context window makes it exceptional for document processing. Unlike RAG-based approaches, you can often pass entire documents directly:
import base64
# Encode PDF
with open("contract.pdf", "rb") as f:
pdf_data = base64.standard_b64encode(f.read()).decode("utf-8")
response = client.messages.create(
model="claude-sonnet-4-6-20250219",
max_tokens=2048,
messages=[
{
"role": "user",
"content": [
{
"type": "document",
"source": {
"type": "base64",
"media_type": "application/pdf",
"data": pdf_data
}
},
{
"type": "text",
"text": """Analyze this contract and provide:
1. Key parties and their obligations
2. Payment terms
3. Termination conditions
4. Any unusual or risky clauses
Format as bullet points."""
}
]
}
]
)
This works for contracts, invoices, research papers, financial reports — any document where you need intelligent analysis. At 200K tokens, you can process documents up to ~500 pages.
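The ~500-page figure follows from a common rule of thumb of roughly 400 tokens per typical page of English text (an approximation — dense or code-heavy pages run higher):

```python
def max_pages(context_tokens: int = 200_000, tokens_per_page: int = 400) -> int:
    """Rough page capacity of a context window.

    Assumes ~400 tokens per typical page of English prose — a common
    approximation, not an exact figure; dense pages consume more.
    """
    return context_tokens // tokens_per_page
```

With the defaults, `max_pages()` gives 500, matching the estimate above; budget lower for dense documents.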
Claude vs GPT-4o: Honest Comparison
Based on extensive testing across hundreds of use cases:
| Task | claude-sonnet-4-6 | GPT-4o |
|---|---|---|
| Long-form writing | Better | Good |
| Code generation | Similar | Similar |
| Document analysis | Better (200K ctx) | Good (128K ctx) |
| Following instructions | Better | Good |
| Image generation | N/A | Via DALL-E 3 |
| Pricing (Sonnet vs GPT-4o) | $3/M input tokens | $2.50/M input tokens |
| Speed | Fast | Fast |
| API ecosystem | Growing | More mature |
Streaming and Cost Optimization
# Streaming responses
with client.messages.stream(
model="claude-haiku-4-5-20250219",
max_tokens=1024,
messages=[{"role": "user", "content": "Write a product description"}]
) as stream:
for text in stream.text_stream:
print(text, end="", flush=True)
# Cost optimization tips:
# 1. Use Haiku for high-volume, simple tasks (10x cheaper than Sonnet)
# 2. Cache system prompts using prompt caching (beta feature)
# 3. Use max_tokens wisely - don't set too high
# 4. For classification, use short prompts + constrained responses
# 5. Batch API for offline processing (50% discount)
n8n Claude Integration
Claude is available natively in n8n — no custom HTTP requests needed:
- Go to Settings → Credentials → Add Credential
- Search for Anthropic API
- Paste your API key
- In any AI Agent or Chat Model node, select Anthropic Chat Model
- Choose your model (claude-sonnet-4-6-20250219 for most cases)
You can now build n8n workflows with Claude as the intelligence layer — all the automation power of n8n with the superior writing and analysis quality of Claude.
Practical tip: Use claude-sonnet-4-6 for customer-facing chatbots where quality matters (better writing, fewer refusals on edge cases). Use GPT-4o-mini when you need a cheaper option for high-volume classification or processing tasks. Our AI Chatbot course (€59) covers both APIs and teaches you when to use each model for maximum quality and minimum cost.