What You Need Before Starting
Connecting n8n to OpenAI is one of the most powerful combinations in modern automation. You get the logic and connectivity of n8n combined with the intelligence of GPT-4o, DALL-E, Whisper and Embeddings — all without writing a single line of Python or JavaScript.
Before you start, make sure you have:
- n8n installed — either self-hosted via Docker or n8n Cloud account
- OpenAI API key — create one at platform.openai.com (requires a paid account)
- Basic n8n knowledge — understand what nodes and workflows are (see our n8n Beginner Guide)
- A use case in mind — chatbot, text processor, image generator, or AI agent
Pro tip: Set a monthly spending limit in OpenAI's dashboard before you start. Go to platform.openai.com → Settings → Billing → Usage limits. Start with $10–$20 to avoid surprises.
Step 1: Adding OpenAI Credentials in n8n
Before using any OpenAI node, you need to store your API key as a credential in n8n. This is done once and reused across all workflows.
- Open n8n and go to Settings → Credentials
- Click Add Credential and search for "OpenAI"
- Select OpenAI API
- Paste your API key (starts with sk-...)
- Click Save — n8n encrypts and stores it securely
Now every OpenAI node in your workspace can use this credential. You never have to paste the key again.
Step 2: The OpenAI Node — Core Operations
The n8n OpenAI node supports multiple operations. Here are the most important ones:
Chat Completions (most common)
This calls GPT-4o, GPT-4o-mini, or the o3 reasoning models. It's what you use for chatbots, text classification, summarization, translation, and almost every NLP task.
- Operation: Message a Model
- Model: gpt-4o-mini (or gpt-4o for complex tasks)
- Messages: System message + User message
- Output: choices[0].message.content
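In a downstream Code node, the reply text is pulled out of that response object. A minimal sketch, assuming the standard Chat Completions response shape (the function name is just for illustration):

```javascript
// Extract the assistant's reply from a Chat Completions response object.
function extractReply(response) {
  // Guard against an empty or malformed response before indexing in.
  const choice = response.choices && response.choices[0];
  if (!choice || !choice.message) {
    throw new Error("No completion returned");
  }
  return choice.message.content;
}

// Example with a mock response:
const mockResponse = {
  choices: [
    { message: { role: "assistant", content: "Paris is the capital of France." } },
  ],
};
console.log(extractReply(mockResponse)); // "Paris is the capital of France."
```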
Image Generation (DALL-E 3)
Generate images from text descriptions. Use this for product mockups, social media images, or creative assets.
- Operation: Generate an Image
- Model: dall-e-3
- Prompt: Your text description
- Size: 1024x1024, 1792x1024 (landscape), or 1024x1792 (portrait)
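Because DALL-E 3 only accepts those three sizes, it's worth validating the size before the node fires. A sketch of the request body the node sends (the helper function is hypothetical):

```javascript
// Build a DALL-E 3 request body, rejecting sizes the model doesn't support.
const DALLE3_SIZES = ["1024x1024", "1792x1024", "1024x1792"];

function buildImageRequest(prompt, size = "1024x1024") {
  if (!DALLE3_SIZES.includes(size)) {
    throw new Error(`Unsupported DALL-E 3 size: ${size}`);
  }
  return { model: "dall-e-3", prompt, size, n: 1 };
}

console.log(buildImageRequest("A minimalist robot logo", "1792x1024").size); // "1792x1024"
```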
Audio Transcription (Whisper)
Convert audio files to text. Essential for voice message processing, podcast transcription, and meeting notes.
Embeddings
Convert text to vectors for semantic search and RAG pipelines. Use text-embedding-3-small for most use cases — it's fast, cheap, and performs excellently.
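Semantic search over embeddings boils down to comparing vectors. A toy sketch with hand-made 3-dimensional vectors standing in for embedded sentences (real text-embedding-3-small vectors have 1536 dimensions):

```javascript
// Cosine similarity: close to 1 = similar meaning, close to 0 = unrelated.
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Toy vectors: "refund" and "money back" should land closer together
// than "refund" and "weather".
const refund = [0.9, 0.1, 0.0];
const moneyBack = [0.85, 0.15, 0.05];
const weather = [0.0, 0.2, 0.95];

console.log(cosineSimilarity(refund, moneyBack) > cosineSimilarity(refund, weather)); // true
```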
Building a Chat Completions Workflow
Let's build the most fundamental n8n + OpenAI workflow: a text processor that takes input and returns an AI-generated response.
Workflow: Summarize Any Text
- Add a Manual Trigger node (for testing)
- Add a Set node with a field: text = "Your long article goes here..."
- Add an OpenAI node:
  - Operation: Message a Model
  - Model: gpt-4o-mini
  - System message: You are a concise summarizer. Always respond in 3 bullet points.
  - User message: {{$json.text}}
- The output {{$json.choices[0].message.content}} contains the summary
This same pattern works for translation, sentiment analysis, keyword extraction, FAQ generation — any text transformation task you can think of.
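Under the hood, the OpenAI node assembles a standard Chat Completions request from those fields. A sketch of the equivalent payload, using the values from the workflow above (the builder function is just for illustration):

```javascript
// Build the request body the "Message a Model" operation sends,
// from the Set node's "text" field.
function buildSummaryRequest(text) {
  return {
    model: "gpt-4o-mini",
    messages: [
      {
        role: "system",
        content: "You are a concise summarizer. Always respond in 3 bullet points.",
      },
      { role: "user", content: text },
    ],
  };
}

const req = buildSummaryRequest("Your long article goes here...");
console.log(req.messages[1].content); // "Your long article goes here..."
```

Swapping the system message is all it takes to turn this into a translator or sentiment classifier.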
Function Calling in n8n (Tool Use)
Function calling (now called "tool use") lets the AI decide to call external functions based on the user's request. In n8n, this is implemented through the AI Agent node.
How the AI Agent Node Works
The AI Agent node has three slots:
- Chat Model — connect an OpenAI Chat Model sub-node (GPT-4o)
- Memory — connect a Simple Memory or Redis Memory for conversation history
- Tools — connect any n8n nodes as callable tools (HTTP Request, Google Sheets, etc.)
When a user asks "What's the weather in Rome?", the agent decides to call the weather tool, gets the data, and incorporates it into a natural-language response. All of this happens automatically.
// Example tool definition for the AI Agent
// The agent receives this as a "tool" it can call:
{
  "name": "get_customer_info",
  "description": "Look up customer information by email address",
  "parameters": {
    "type": "object",
    "properties": {
      "email": {
        "type": "string",
        "description": "The customer's email address"
      }
    },
    "required": ["email"]
  }
}
Building a Customer Support Bot
Here's a complete customer support bot workflow using n8n and OpenAI. This handles FAQ questions, escalates to humans when needed, and maintains conversation memory.
Architecture
- Trigger: Telegram Trigger or Webhook (from your website)
- Memory: Simple Memory (last 10 messages per user)
- Knowledge: HTTP Request tool to your FAQ API or Notion database
- AI Agent: GPT-4o with system prompt + tools
- Escalation: IF node — if AI confidence is low, notify human agent
- Reply: Telegram/Webhook response node
System Prompt for Customer Support
You are a helpful customer support agent for [Company Name].
Your job is to answer questions about our products and services.
Guidelines:
- Be friendly and concise
- If you don't know the answer, say so honestly
- Use the search_faq tool to look up relevant information
- If the customer is upset or the issue is complex,
respond with "ESCALATE" to transfer to a human agent
Company info: [Insert your company details here]
Real result: One of our students built this exact bot for an e-commerce client and reduced support tickets by 67% in the first month. The bot handled 340 conversations per week autonomously. Learn to build production-ready chatbots in our AI Chatbot Development course (€59).
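The IF node's escalation check can be as simple as scanning the reply for the ESCALATE marker the system prompt asks the model to emit. A sketch, assuming that marker convention:

```javascript
// Route a reply either back to the customer or to a human agent,
// based on the ESCALATE marker from the system prompt.
function routeReply(reply) {
  if (reply.includes("ESCALATE")) {
    return { target: "human", message: reply.replace("ESCALATE", "").trim() };
  }
  return { target: "customer", message: reply };
}

console.log(routeReply("Your order ships tomorrow.").target);           // "customer"
console.log(routeReply("ESCALATE Customer requests a refund review.").target); // "human"
```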
Image Generation Workflow
Build a workflow that generates product images, social media visuals, or blog thumbnails on demand.
Workflow: Auto-Generate Blog Images
- Trigger: HTTP Webhook receiving blog post title and topic
- OpenAI (GPT-4o-mini): Generate an optimized DALL-E prompt from the title
- OpenAI (DALL-E 3): Generate the image using the refined prompt
- HTTP Request: Upload the image URL to your CMS or S3 bucket
- Response: Return the image URL to the caller
// Step 2 - Prompt optimization system message:
"Convert the blog title into a professional DALL-E prompt.
Style: photorealistic, modern tech aesthetic, 16:9 format.
No text in image. Return ONLY the prompt, nothing else."
// Step 2 - User message:
"Blog title: {{$json.title}}"
Cost Optimization Tips
OpenAI costs can add up quickly if you're not careful. Here are the key optimizations:
1. Choose the Right Model
- GPT-4o-mini: Use for 80% of tasks — classification, summarization, simple chat. 10x cheaper than GPT-4o.
- GPT-4o: Reserve for complex reasoning, code generation, or when output quality matters.
- o3-mini: For math, logic and step-by-step reasoning tasks.
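The price gap compounds fast at volume. A back-of-the-envelope sketch — the per-million-token prices below are illustrative placeholders, so check OpenAI's pricing page for current numbers:

```javascript
// Estimate a monthly bill from request volume and token counts.
// Prices are per 1M tokens and purely illustrative.
function monthlyCost(requests, inTokens, outTokens, priceInPerM, priceOutPerM) {
  const totalIn = requests * inTokens;
  const totalOut = requests * outTokens;
  return (totalIn / 1e6) * priceInPerM + (totalOut / 1e6) * priceOutPerM;
}

// 10,000 requests/month, 500 input + 200 output tokens each,
// with a hypothetical 10x price gap between the two models:
const miniCost = monthlyCost(10000, 500, 200, 0.15, 0.60);
const fullCost = monthlyCost(10000, 500, 200, 1.50, 6.00);
console.log(miniCost.toFixed(2), fullCost.toFixed(2)); // "1.95 19.50"
```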
2. Optimize Your Prompts
- Keep system prompts concise — every token costs money
- Use structured output (JSON mode) to avoid verbose AI responses
- For classification tasks, instruct the model to respond with a single word
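Single-word replies are cheap, but they still need validating before you branch on them. A sketch with a hypothetical sentiment label set:

```javascript
// Validate a single-word classification reply against an allowed
// label set, falling back to "unknown" for anything off-script.
const LABELS = ["positive", "negative", "neutral"];

function parseLabel(reply) {
  const word = reply.trim().toLowerCase().replace(/[.!]/g, "");
  return LABELS.includes(word) ? word : "unknown";
}

console.log(parseLabel("Positive."));          // "positive"
console.log(parseLabel("I think it's mixed")); // "unknown"
```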
3. Cache Common Responses
For FAQ bots, pre-generate answers for the top 50 questions and store them in a database. Only call OpenAI for questions that don't match existing answers. This can reduce costs by 70–80%.
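The cache check itself fits in a Code node before the OpenAI node. A minimal sketch with exact-match lookup on normalized questions (the FAQ entries here are made up; real matching would usually use embeddings):

```javascript
// Answer from a pre-generated FAQ cache when possible; only flag
// a call to OpenAI for questions with no cached match.
const faqCache = new Map([
  ["what are your shipping times", "Orders ship within 2 business days."],
  ["how do i reset my password", "Use the 'Forgot password' link on the login page."],
]);

function answer(question) {
  const key = question.trim().toLowerCase().replace(/[?!.]/g, "");
  if (faqCache.has(key)) {
    return { source: "cache", text: faqCache.get(key) };
  }
  return { source: "openai", text: null }; // fall through to the OpenAI node
}

console.log(answer("What are your shipping times?").source); // "cache"
console.log(answer("Do you ship to Mars?").source);          // "openai"
```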
4. Use Streaming for Better UX
Enable streaming responses so users see text appearing in real-time rather than waiting. This doesn't reduce cost but dramatically improves perceived performance.
Putting It All Together: Next Steps
You now have everything you need to build powerful AI workflows with n8n and OpenAI. The most important thing is to start with a real use case — pick something you do manually today and automate it.
Common starting points:
- Email triage and auto-reply with GPT-4o-mini
- Customer FAQ bot for Telegram or your website
- Social media content generator with DALL-E 3
- Meeting transcription and summary pipeline with Whisper
- Lead qualification bot that scores and routes inquiries
If you want to build client-facing chatbots and charge €500–€2,000 per project, our AI Chatbot Development course walks through the complete process from setup to delivery and pricing.