Stop Writing Prompts That Fail: The Contract-Based Approach That Actually Works

TL;DR

Your AI prompts are breaking because they’re conversations, not contracts. In production (especially in n8n), AI must behave like a component with a strict interface. I’ll show you how to write prompts as binding agreements with clear inputs, constraints, and outputs, the same way you’d write API documentation, so outputs become predictable and machine-parseable.

Keywords: prompt engineering, contract-based prompting, n8n AI automation, structured prompts, JSON output prompts, reliable AI workflows

Another Monday morning debugging session. My content automation had posted the same tweet three times. The AI followed my prompt exactly: “create unique social media content.”

The problem wasn’t the model. The problem was me. My definition of “unique” wasn’t in the contract.

This happens to every developer using AI in production. We write prompts like we’re chatting with a helpful colleague. But automation doesn’t work on conversations. It works on contracts.

Here’s the shift that fixed my broken workflows: prompts in production aren’t dialogue. They’re binding agreements with clear terms.


Why the Conversation Model Fails in Automation

Most prompts look like this:

“Hey, can you take this blog and turn it into a Twitter thread? Make it engaging and include some hashtags.”

This feels natural. But AI inside an automation pipeline isn’t a person; it’s a component sitting between two deterministic steps.

The conversational approach fails because:

  • Vague words are untestable: “engaging”, “good”, “unique”
  • Hidden requirements are never enforced: character limits, structure, CTA, formatting
  • Downstream nodes depend on exact shape: if JSON breaks, your workflow breaks
  • No defined failure behavior: what happens if output is invalid?

In n8n, the failure chain usually looks like this:

  1. You send a “friendly” prompt
  2. AI returns a response with extra text, markdown, or unexpected fields
  3. Your parser fails (JSON.parse() breaks)
  4. Your routing logic receives null/undefined
  5. Workflow retries or repeats → duplicate posts / incorrect actions

That’s not an AI problem. That’s an interface problem.
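
Here’s that chain reduced to code. A minimal sketch, assuming the model wrapped its answer in a markdown fence (a common failure mode); the response text itself is illustrative:

// A "friendly" prompt often returns text that looks right but isn't parseable
const aiResponse = 'Here is your thread!\n```json\n{"posts": ["..."]}\n```';

let data;
try {
  data = JSON.parse(aiResponse); // throws: the preamble and fence break parsing
} catch (e) {
  data = null; // downstream routing now receives null
}

// An IF/Switch node checking data.posts misroutes, the workflow retries,
// and the retry is what produces the duplicate posts.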


The Contract-Based Solution

Instead of conversations, write contracts. Every reliable prompt contract has three mandatory sections:

  • Input Definitions (what the AI receives)
  • Constraint Clauses (rules it must follow)
  • Output Specifications (exact schema it must return)

Think of it like API documentation: if the request and response are not strict, the system is not reliable.


The Minimal Contract Template (Copy-Paste)

Use this whenever you’re starting:


PROMPT CONTRACT

INPUTS:
- content: string (plain text only)
- platform: "twitter" | "linkedin"
- goal: "education" | "awareness" | "conversion"

CONSTRAINTS:
- No markdown, no backticks, no extra commentary
- Keep tone: practical, builder-oriented, no hype
- Must include 2-3 hashtags
- Must be unique: do not repeat phrases from content verbatim
- Must self-check compliance before returning

OUTPUT (JSON ONLY):
{
  "platform": "string",
  "text": "string",
  "hashtags": ["string"],
  "checks": {
    "character_count": number,
    "constraints_passed": boolean
  }
}

That alone eliminates 80% of “random” failures.


1) Input Definitions

Inputs define what data the AI receives. No assumptions. No guessing.


INPUT DEFINITIONS:
- source_content: string (plain text, max 1800 chars)
- target_platforms: array (e.g., ["Twitter", "LinkedIn"])
- content_intent: "education" | "awareness" | "conversion"
- brand_parameters: object (tone, audience, constraints)

I learned this the hard way when HTML fragments started appearing in posts. Now I explicitly say: plain text only.

In n8n, I validate inputs before the AI node (Step 1 below shows the pattern). If inputs don’t match the contract, AI never runs.


2) Constraint Clauses

Constraints are guardrails. They remove interpretation.


CONSTRAINT CLAUSES:
- Generate exactly 2 variations per platform
- Twitter: 240–280 characters, thread-compatible
- LinkedIn: 1200–1500 characters
- Tone: practical, example-focused, no marketing language
- Prohibited: emojis, superlatives, vague claims
- Required: 2–3 hashtags (primary first)
- Format: JSON.parse() compatible output
- Structure: problem → solution → implementation pattern
- Self-validation: check compliance with all clauses before output

This is where reliability is created. “Good content” becomes measurable rules.

I maintain constraint libraries (in Sheets/Airtable/Git). Every production failure becomes a new clause. That’s how the system improves.
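
A constraint library doesn’t need tooling; a keyed object you render into the prompt works. A sketch, with illustrative clause values:

// Hypothetical constraint library: one list per platform, extended
// every time a production failure reveals a missing clause
const constraintLibrary = {
  twitter: [
    "240-280 characters, thread-compatible",
    "2-3 hashtags, primary first",
    "No emojis, no superlatives",
  ],
  linkedin: [
    "1200-1500 characters, concrete example required",
    "No marketing language",
  ],
};

// Render one platform's clauses into the CONSTRAINT CLAUSES section
const clauses = constraintLibrary["twitter"].map(r => `- ${r}`).join("\n");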


3) Output Specifications

Outputs define exactly what the AI must return. No “best effort.” No “as requested.”


OUTPUT SPECIFICATIONS (JSON ONLY):
{
  "generated_content": [
    {
      "platform": "string",
      "content": "string",
      "compliance_check": {
        "character_count": number,
        "hashtag_count": number,
        "constraints_passed": boolean
      }
    }
  ]
}

Once you enforce structured JSON output, n8n becomes effortless. No regex hacks. No manual cleanup.
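
Downstream consumption becomes plain field access. A sketch, assuming an n8n Code node in “Run Once for Each Item” mode receiving the already-parsed object:

// No regex, no cleanup: the contract guarantees these fields exist
for (const item of $json.generated_content) {
  if (!item.compliance_check.constraints_passed) {
    throw new Error(`Contract violation on ${item.platform}`);
  }
}

return $input.item;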


Real Example: A Production Content Contract


CONTENT GENERATION CONTRACT

INPUT DEFINITIONS:
- validated_content: {{$json.clean_input}} (plain text)
- distribution_targets: {{$json.platforms}}
- campaign_goal: {{$json.goal}}

CONSTRAINT CLAUSES:
- Output 1 primary piece per platform
- Variations: 2–3 per platform
- Tone: direct, actionable, builder-oriented
- Prohibited: hype, fluff, empty promises
- Required: concrete examples + clear CTA
- Output: machine-parseable JSON only
- Self-check: validate constraints before output

OUTPUT SPEC:
{
  "campaign_output": [
    {
      "platform": "string",
      "primary_content": "string",
      "variations": ["string"],
      "quality_metrics": {
        "actionability_score": number,
        "constraints_passed": boolean
      }
    }
  ]
}

This contract runs through an AI node, then the output flows into platform actions. The system stays predictable.


How to Implement Contracts in n8n (The Production Pattern)

Step 1: Validate Inputs (Before AI)

Use a Function/Code node to validate incoming data. Fail fast.


// Basic input validation example (n8n Code node, "Run Once for Each Item" mode)
const content = $json.clean_input;

if (!content || typeof content !== 'string') {
  throw new Error("Contract violation: clean_input must be a string");
}
if (content.length > 1800) {
  throw new Error("Contract violation: clean_input exceeds 1800 chars");
}

// Input matches the contract: pass the item through unchanged
return $input.item;

Step 2: Assemble the Contract Prompt

Create a contract template in a file/variable and inject dynamic values.
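
A sketch of that assembly in a Code node. The {{...}} placeholder tokens here are replaced in code and are my own convention, separate from n8n’s {{$json...}} expressions:

// Inject validated inputs into a stored contract template.
// Keep the template under version control next to your workflow.
const CONTRACT_TEMPLATE = `CONTENT GENERATION CONTRACT

INPUT DEFINITIONS:
- validated_content: {{content}} (plain text)
- distribution_targets: {{platforms}}

CONSTRAINT CLAUSES:
...`; // truncated; the full contract text goes here

const prompt = CONTRACT_TEMPLATE
  .replace("{{content}}", $json.clean_input)
  .replace("{{platforms}}", JSON.stringify($json.platforms));

return { json: { prompt } };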

Step 3: Parse & Validate Output (After AI)

This is the most important node in production. Strip markdown fences. Parse JSON. Verify required keys.


// Parse AI output safely (n8n Code node, "Run Once for Each Item" mode)
let raw = $json.ai_output || $json.text || '';

// Strip markdown fences the model may have wrapped around the JSON
raw = raw.replace(/```json|```/g, '').trim();

let parsed;
try {
  parsed = JSON.parse(raw);
} catch (e) {
  throw new Error("Contract violation: output is not valid JSON");
}

// Required structure checks
if (!parsed.campaign_output || !Array.isArray(parsed.campaign_output)) {
  throw new Error("Contract violation: missing campaign_output array");
}

return { json: parsed };

Step 4: Define the Escape Hatch

When the contract fails:

  • Retry once with a stricter “repair prompt” (optional; sketch below)
  • Route to manual review
  • Or return safe defaults (never post garbage)

A contract without a failure path is still fragile.
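
A minimal sketch of the repair-prompt branch, assuming the failed parse stored the invalid output and an attempt counter on the item (raw_output and attempt are names I’ve made up):

// One stricter retry, then hand off to manual review. Never loop forever.
const attempt = ($json.attempt || 0) + 1;
if (attempt > 1) {
  throw new Error("Contract violation: repair failed, route to manual review");
}

const repairPrompt =
  "Your previous output violated the contract.\n" +
  "Return ONLY valid JSON matching this schema - no markdown, no commentary:\n" +
  '{ "campaign_output": [ { "platform": "string", "primary_content": "string" } ] }\n\n' +
  "Previous invalid output:\n" + $json.raw_output;

return { json: { repair_prompt: repairPrompt, attempt } };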


Common Contract Pitfalls

Pitfall 1: Vague Quality Rules

Don’t: “Make it high quality”
Do: “Include a concrete example. Use active voice. Provide steps. Avoid hype.”

Pitfall 2: Missing Edge Cases

Handle empty input, max length input, malformed JSON, special characters. Every failure reveals a missing clause.

Pitfall 3: Assuming Shared Context

AI doesn’t know your audience or brand voice unless you state it. Treat it like a stateless component.

Pitfall 4: Over-specifying Creativity

Avoid micro-managing. Give structure and constraints, not prison bars.


The Engineering Mindset Shift

Contract-based prompting is not “better prompting.” It’s engineering.

You’re not asking AI for help. You’re defining a strict interface a system must obey.

That’s how you move from random outputs to reliable workflows. That’s how you ship AI automations without constant babysitting.

Start your next prompt as a contract: Inputs. Constraints. Outputs. Then validate on both sides.

Your AI automations will stop behaving like experiments — and start behaving like systems.

- Avnish Yadav (avnishyadav.com | GitHub | @avnish.codes)
