How to Build an AI Agent in 10 Minutes
AI agents are everywhere right now — from coding assistants to smart customer support bots. The good news? You don’t need to be an ML researcher to build one.
In this guide, you’ll build a simple AI agent using LangChain and Node.js in about 10 minutes. Along the way you’ll also learn the basics of prompt engineering so your agent actually behaves the way you want.
We’ll cover:
- What an AI agent actually is (in simple words)
- Setting up a tiny Node.js project with LangChain
- Writing a basic agent that can respond intelligently
- Adding structure with good prompts (prompt engineering 101)
- Ideas to extend this into something real
What Is an AI Agent? (Without Buzzwords)
Let’s keep it simple.
An AI agent is:
A program that uses an LLM (like GPT-4, GPT-4o, GPT-3.5, etc.) to decide what to do next, possibly use tools (code, APIs, databases), and then respond to the user.
It’s not just “call the model once and print the answer”.
It’s more like:
- Receive a user message
- Think about it using an LLM
- Decide what info or tools it needs
- Use those tools (if any)
- Return a final, helpful answer
In this post, we’ll build a minimal agent: one LLM + a clear prompt + a simple “persona”. Think of it as a foundation you can grow later with tools and memory.
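The loop above can be sketched in plain JavaScript with a stubbed-out "LLM". Everything here (the fakeLLM function, the tools object) is a stand-in of my own to show the control flow, not a real model call:

```javascript
// A toy agent loop: receive -> think -> maybe use a tool -> respond.
const tools = {
  calculator: (expr) => String(Function(`"use strict"; return (${expr})`)()),
};

function fakeLLM(message) {
  // A real agent would ask the model which tool (if any) to use.
  if (/^[\d\s+\-*/().]+$/.test(message)) {
    return { tool: "calculator", input: message };
  }
  return { tool: null, answer: `You said: ${message}` };
}

function agent(message) {
  const decision = fakeLLM(message);          // 1–2: receive the message + "think"
  if (decision.tool) {                        // 3: decide what it needs
    const toolResult = tools[decision.tool](decision.input); // 4: use the tool
    return `The answer is ${toolResult}.`;    // 5: return a final answer
  }
  return decision.answer;
}

console.log(agent("2 + 2")); // -> "The answer is 4."
```

The real version replaces fakeLLM with an actual model call, but the shape of the loop stays the same.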
Prerequisites
You’ll need:
- Node.js (v18+ recommended)
- npm or yarn
- An OpenAI API key (or any LangChain-supported LLM; we’ll use OpenAI in this example)
If you don’t have an OpenAI API key yet, create an account on their platform and generate a key. Keep it secret and don’t commit it to GitHub.
Step 1: Set Up the Project
Create a new folder and initialize a Node.js project:
mkdir ai-agent-10-min
cd ai-agent-10-min
npm init -y
Now install the dependencies:
npm install langchain @langchain/openai dotenv
We’ll use:
We’ll use:
- langchain – the framework that orchestrates LLMs, prompts, tools, etc.
- @langchain/openai – the OpenAI integration for LangChain
- dotenv – to load our API key from a .env file
Create a .env file in the project root:
touch .env
Add your OpenAI key:
OPENAI_API_KEY=your_openai_api_key_here
Important: Never hardcode your API key in code or commit .env to Git.
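As a safety net, you can also fail fast at startup when the key is missing. This small helper is my own addition (not part of LangChain or dotenv), but it turns a confusing mid-request error into an obvious one:

```javascript
// Fail fast if a required environment variable is missing.
function requireEnv(name) {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Usage (after `import 'dotenv/config'` has run):
// const apiKey = requireEnv("OPENAI_API_KEY");
```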
Finally, in package.json, change the "type" to "module" so we can use import:
{
  "name": "ai-agent-10-min",
  "version": "1.0.0",
  "main": "index.js",
  "type": "module",
  ...
}
Step 2: Prompt Engineering 101 (The Brain of Your Agent)
Before writing code, let’s fix one thing:
Your agent is only as good as your prompt.
A good system prompt (or “agent instruction”) usually includes:
- Role – Who is the agent?
- Goal – What should it help the user do?
- Style – How should it respond (tone, length, format)?
- Constraints – What it must avoid (hallucinations, off-topic tangents, etc.).
- Examples (optional) – A few example Q&As or behaviors.
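Those ingredients can also be assembled programmatically. Here's a small helper of my own (not a LangChain API) that composes role, goal, style, and constraints into one system prompt string:

```javascript
// Compose a system prompt from the ingredients above.
function buildSystemPrompt({ role, goal, style, constraints = [] }) {
  const lines = [
    `Role: ${role}`,
    `Goal: ${goal}`,
    `Style: ${style}`,
  ];
  if (constraints.length > 0) {
    lines.push("Constraints:", ...constraints.map((c) => `- ${c}`));
  }
  return lines.join("\n");
}

const systemPrompt = buildSystemPrompt({
  role: "Friendly AI assistant for developers",
  goal: "Help users build AI agents with LangChain and Node.js",
  style: "Concise, practical, with short code examples",
  constraints: ["Admit when you don't know something"],
});
```

Keeping the pieces separate like this makes it easy to tweak one part (say, the style) without rewriting the whole prompt.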
For this tutorial, let’s define a simple persona:
“You are a friendly AI assistant that helps developers understand and build AI agents using LangChain and Node.js.”
We’ll embed this into our code as the system message.
Step 3: Create a Simple AI Agent Script
Create a file named agent.js:
touch agent.js
Now add this code:
import 'dotenv/config';
import readline from 'readline';
import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";

// 1. Initialize the model
const model = new ChatOpenAI({
  model: "gpt-4o-mini", // or gpt-4o / gpt-4.1 / gpt-3.5-turbo, depending on access
  temperature: 0.4, // lower = more focused, higher = more creative
});

// 2. Define the prompt template (our “agent brain”)
const prompt = ChatPromptTemplate.fromMessages([
  [
    "system",
    `
You are an AI Agent Coach.

Your job:
- Help developers build and understand AI agents using LangChain and Node.js.
- Explain concepts clearly with short, practical examples.
- When showing code, use JavaScript (Node.js).
- If the user asks something unrelated to AI agents, briefly answer but gently bring it back to the topic.

Rules:
- Be honest if you don't know something.
- Prefer simple language over jargon.
- Keep answers under 300 words unless the user asks for more detail.
`.trim(),
  ],
  ["human", "{input}"],
]);

// 3. Create a simple "chain": prompt -> model
const chain = prompt.pipe(model);

// 4. Small CLI loop so we can chat with the agent
const rl = readline.createInterface({
  input: process.stdin,
  output: process.stdout,
});

async function ask(question) {
  const response = await chain.invoke({ input: question });
  console.log("\nAgent:\n", response.content, "\n");
}

function startChat() {
  rl.question("You: ", async (question) => {
    if (question.toLowerCase() === "exit") {
      rl.close();
      return;
    }
    await ask(question);
    startChat();
  });
}

console.log("🤖 AI Agent ready! Type your question (or 'exit' to quit):\n");
startChat();
Run it:
node agent.js
Try asking things like:
- “What is an AI agent in LangChain?”
- “Show me a simple LangChain Node.js example.”
- “How can I add tools to my agent?”
You just built a minimal AI agent CLI: it has a clear role, goal, and style defined by your prompt.
Step 4: Give Your Agent a Simple “Tool”
Right now, your “agent” is basically a smart chatbot. Let’s make it a little more agent-like by giving it a tool.
For simplicity, we’ll add a calculator tool: whenever the user’s question looks like a math problem, we’ll handle it with code instead of the LLM.
Update agent.js to add a tiny router before calling the model:
// ... top part remains the same

async function ask(question) {
  // Very naive math detection – only digits, operators, and parentheses – just for demo
  const isMath = /^[\d\s+\-*/().]+$/.test(question) && /[+\-*/]/.test(question);

  if (isMath) {
    try {
      // Don't use eval in real apps; use a proper math parser library.
      const result = Function(`"use strict"; return (${question})`)();
      console.log("\nAgent (Calculator Tool):\n", `Result: ${result}\n`);
      return;
    } catch (e) {
      console.log("\nAgent:\n I tried to compute that, but something went wrong. I'll ask the LLM instead.\n");
    }
  }

  // Fall back to the LLM
  const response = await chain.invoke({ input: question });
  console.log("\nAgent:\n", response.content, "\n");
}
Now if you run:
node agent.js
and type:
2 + 3 * 5
You’ll see the calculator tool respond instead of the LLM.
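Because the expression is evaluated as real JavaScript, normal operator precedence applies, so this input produces 17, not 25:

```javascript
// The same Function-based evaluation the calculator tool uses.
const result = Function(`"use strict"; return (${"2 + 3 * 5"})`)();
console.log(result); // -> 17 (multiplication binds tighter than addition)
```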
This is the core idea of agents:
The model doesn’t need to solve everything itself. It can decide when to use tools (APIs, functions, databases), then combine the results with its reasoning.
In “proper” LangChain agents, tool selection and calling can be delegated to the LLM, but this small example gives you the flavor without too much complexity.
Step 5: Make the Prompt Smarter
Let’s improve our agent’s behavior using prompt engineering.
You can tweak the system prompt to:
- Force step-by-step explanations
- Control output format (like bullet points or sections)
- Add examples of good answers
For example, replace the system text with this:
You are an AI Agent Coach.
Your mission:
- Help developers build and understand AI agents using LangChain and Node.js.
- When answering, follow this structure:
1) Short summary (2–3 sentences)
2) Step-by-step explanation
3) Optional code snippet (if relevant)
Guidelines:
- Use simple, friendly language.
- Prefer concrete examples over abstract theory.
- If the user is a beginner, avoid advanced jargon or explain it in plain English.
- If you don't know something, say so honestly.
You can assume the user knows basic JavaScript and Node.js, but is new to AI agents.
This change alone will make your agent feel more focused, more helpful, and more consistent.
Step 6: Where to Go Next
In 10 minutes, you’ve:
- Set up a small Node.js + LangChain project
- Built a basic chat-based agent with a clear role & rules
- Added a tiny “tool” (calculator) to show how agents can use code
- Practiced prompt engineering to control the agent’s behavior
To turn this into a more powerful, real-world agent, you could:
- Add more tools
  - A documentation search tool (load your own markdown/HTML docs)
  - A database query tool
  - A web search tool
- Add memory
  - Store previous messages or user preferences so the agent remembers context across sessions.
- Build a web UI
  - Wrap this logic in an Express or Next.js API and add a React/HTML frontend.
- Deploy it
  - Host on a small server or serverless function and expose it via API.
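For "add memory", the core idea can be sketched without any framework: keep a rolling window of past messages and prepend them to each model call. This ChatMemory class is my own illustration; in real LangChain you'd use its message-history utilities instead:

```javascript
// A minimal rolling-window chat memory: keeps only the last `limit` messages.
class ChatMemory {
  constructor(limit = 10) {
    this.limit = limit;
    this.messages = [];
  }

  add(role, content) {
    this.messages.push({ role, content });
    if (this.messages.length > this.limit) {
      this.messages = this.messages.slice(-this.limit); // drop the oldest
    }
  }

  // What you'd prepend to the next model call.
  history() {
    return [...this.messages];
  }
}

const memory = new ChatMemory(3);
memory.add("human", "Hi");
memory.add("ai", "Hello!");
memory.add("human", "What is an agent?");
memory.add("ai", "A program that uses an LLM to decide what to do.");
console.log(memory.history().length); // -> 3 (the oldest message was dropped)
```

A fixed window like this keeps the prompt from growing without bound; fancier setups summarize old messages instead of dropping them.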
Key Prompt Engineering Tips (Quick Recap)
Before we close, here’s a quick cheat sheet:
- Be explicit: Tell the agent who it is and what success looks like.
- Set constraints: Word limits, tone, format, examples.
- Iterate: If the output feels off, don’t immediately blame the model — improve the prompt first.
- Think step-by-step: Ask the model to explain its thinking or structure answers.
- Start simple: One clear role + one clear task beats a messy, overloaded prompt.
If you want, in a future post we can:
- Turn this CLI agent into a web-based AI agent
- Or build a LangChain agent that calls real tools (like an API, database, or your own dev docs)
For now, you’ve got your first AI agent running in Node.js. Not bad for 10 minutes. 🚀
If you found this helpful, follow along as I share more dev + AI tutorials and experiments. You can also find me on YouTube and LinkedIn for deeper breakdowns.

