Campaign-Grade AI · Essay #4

I Replaced a $75K Analyst With a $20/Month Agent

Not metaphorically. Here's the exact workflow, what it actually costs, what it can't do, and what happened when I stress-tested it on a breaking news event — in real time, while I was asleep.


Before we get into the workflow, I want to be precise about what I mean — because "replaced" is a word that usually ends conversations before they start.

I didn't fire a person. I don't have a junior analyst on staff. What I mean is: I audited what a junior analyst would actually do in my operation — the specific tasks, the specific outputs, the specific hours — and I built a system that does those things. The market rate for that work, at a junior analyst level in a mid-sized marketing org, is somewhere between $55K and $85K depending on your city. Call it $75K.

The system I built costs about $20 a month to run.

The gap between those two numbers is where I want to spend this essay — not to make a provocative point, but because understanding exactly what's in that gap is more useful than either the utopian or the dystopian version of the story.

What the Analyst Was Actually Doing

I run SMS and MMS fundraising campaigns for political and nonprofit organizations. It's a high-volume, fast-moving discipline: dozens of clients, thousands of messages, real-time performance data that determines which creative gets scaled and which gets killed. The decisions are mine. But feeding those decisions requires a specific category of work that I was either doing myself or not doing at all.

Here's what the hypothetical analyst role looked like before I built the system:

- Daily performance monitoring across every client, classifying creatives by lifecycle and flagging exceptions
- Breaking-news research: finding, verifying, and summarizing sources relevant to each client category
- Creative research packages: briefs, variant generation, and benchmark comparisons
- Weekly pattern analysis against the full historical campaign dataset

None of this requires strategic judgment. All of it requires time, consistency, and the ability to synthesize information from multiple sources without introducing bias or errors. That's the profile of a good junior analyst. It's also, increasingly, the profile of a well-prompted AI agent.

The System — What I Actually Built

The architecture is less complicated than most people assume. There's no custom model. No proprietary infrastructure. It's a set of agents built on top of existing AI APIs and tools, connected by a set of workflows that I designed once and now run automatically.

The Analyst Agent Stack — What Runs Daily

1. Performance Intelligence Engine

Pulls live data from the campaign API across all clients. Classifies every creative by lifecycle phase (rising / stable / fading / fatigued). Surfaces the 6 things that need attention today. Runs at 7 AM. Costs: API calls, roughly $0.40/day.
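For the technically inclined: the phase logic is nothing exotic, just a trend rule over recent yields. Here's a minimal Python sketch (the thresholds and the input shape are illustrative, not my production values):

```python
def classify_phase(daily_yields, fatigue_floor=0.004):
    """Classify a creative's lifecycle phase from recent daily yields.

    daily_yields: yield-per-send figures, oldest first (e.g. last 7 days).
    fatigue_floor: hypothetical absolute floor below which a creative is
    considered fatigued regardless of trend.
    """
    recent = sum(daily_yields[-3:]) / 3  # trailing 3-day average
    prior = sum(daily_yields[:-3]) / max(len(daily_yields) - 3, 1)
    if recent < fatigue_floor:
        return "fatigued"
    if recent > prior * 1.15:   # trending up past a 15% band
        return "rising"
    if recent < prior * 0.85:   # trending down past the band
        return "fading"
    return "stable"

# A creative whose yield is climbing past the threshold:
print(classify_phase([0.006, 0.007, 0.007, 0.009, 0.010, 0.011]))  # rising
```

The real engine works off the campaign API rather than hand-fed lists, but the classification itself is this simple.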

2. Breaking News Briefing Agent

Scans news sources by client category — law enforcement, veterans, political — identifies relevance, verifies sources, produces a structured brief with confirmed quotes, usage guidance, and compliance flags. Triggered by event or schedule.
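The brief itself is just a fixed schema. A plausible sketch of its shape in Python (the field names are illustrative, not the actual template):

```python
from dataclasses import dataclass, field

@dataclass
class NewsBrief:
    """One structured briefing produced by the news agent (illustrative schema)."""
    event_summary: str
    verified_quotes: list[str] = field(default_factory=list)   # quote + attribution
    sources: list[str] = field(default_factory=list)           # cross-referenced outlets
    usage_guidance: str = ""                                   # how the material may be used
    compliance_flags: list[str] = field(default_factory=list)  # claims needing review

brief = NewsBrief(
    event_summary="US-Israel strikes on Iranian nuclear infrastructure",
    verified_quotes=["\"It's not a war — it's payback.\" (Brenda May)"],
    sources=["Reuters", "NYT Live Blog"],
    compliance_flags=["casualty claims require sourcing"],
)
print(len(brief.compliance_flags))  # 1
```

Whatever the exact fields, the point is that the output format is fixed before any automation runs, which is also step two of the implementation pattern later in this essay.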

3. Multi-Model Creative Generator

Takes a brief and runs it through Claude, GPT-4, and Gemini simultaneously. Each model generates SMS and MMS variants. Output: a comparison dashboard with all variants scored against historical performance benchmarks. I pick winners.
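The simultaneous part is a plain fan-out. A sketch using only the standard library, where call_model is a stand-in for the real vendor SDK calls (stubbed here for illustration):

```python
from concurrent.futures import ThreadPoolExecutor

def call_model(model_name, brief):
    """Hypothetical wrapper: send the brief to one vendor's API and return
    generated SMS/MMS variants. Stubbed for illustration."""
    return [f"{model_name} variant {i}: {brief[:20]}" for i in range(1, 7)]

def fan_out(brief, models=("claude", "gpt-4", "gemini")):
    # One thread per model: the three API calls run concurrently,
    # so total latency is the slowest call, not the sum of all three.
    with ThreadPoolExecutor(max_workers=len(models)) as pool:
        futures = {m: pool.submit(call_model, m, brief) for m in models}
        return {m: f.result() for m, f in futures.items()}

variants = fan_out("Operation Epic Fury briefing")
print(sum(len(v) for v in variants.values()))  # 18
```

Three models times six variants each is how the March 1st run arrived at 18 creatives.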

4. Pattern Forensics (Weekly)

Runs against the full historical campaign dataset. Identifies which hook structures, character patterns, and emotional angles are producing the highest yield. Builds the "creative genome" that feeds the briefing agent's scoring benchmarks.
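Mechanically, the forensics pass is a group-and-compare over the campaign history. A pure-Python sketch, assuming each record carries a hook type, send count, and revenue (the field names are illustrative):

```python
from collections import defaultdict

def yield_by_hook(campaigns):
    """Aggregate yield-per-send for each hook structure across the
    historical dataset, highest-yield hooks first."""
    sends = defaultdict(int)
    revenue = defaultdict(float)
    for c in campaigns:
        sends[c["hook"]] += c["sends"]
        revenue[c["hook"]] += c["revenue"]
    ranked = {h: revenue[h] / sends[h] for h in sends}
    return dict(sorted(ranked.items(), key=lambda kv: kv[1], reverse=True))

history = [
    {"hook": "anger", "sends": 100_000, "revenue": 1500.0},
    {"hook": "deadline", "sends": 100_000, "revenue": 400.0},
    {"hook": "anger", "sends": 50_000, "revenue": 900.0},
]
print(yield_by_hook(history))  # anger ranked above deadline
```

Aggregating sends and revenue before dividing (rather than averaging per-campaign yields) is what controls for list size, as the essay describes.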

What It Costs

This is where people either lean in or check out, depending on their priors. Let me be specific, because the numbers are genuinely surprising.

Component | Tool / Service | Monthly Cost
AI model API access (Claude + GPT-4 + Gemini) | Anthropic / OpenAI / Google APIs | $12–18
Automation orchestration (cron scheduling, workflow triggers) | Self-hosted via existing dev environment | $0
Data storage and dashboard hosting | GitHub Pages (static HTML outputs) | $0
News API / source verification | Brave Search API + web fetch tools | $3–5
Total monthly cost | | ~$20
Junior analyst (market rate, fully loaded) | | $6,250/mo

The honest caveat: that $20 doesn't include my time designing the workflows. I spent probably 40 hours building the initial system. That's a one-time investment — and it's what a good analyst would spend in their first month getting oriented anyway. Every subsequent month, the system runs itself.

$20 · monthly operating cost for the full stack
165 · clients monitored daily by the performance engine
4,000+ · campaigns analyzed in the pattern forensics dataset

The Real Test: What Happened on March 1st

Claims are cheap. Here's what actually happened.

On February 28th, 2026, the United States and Israel launched Operation Epic Fury — strikes on Iranian nuclear infrastructure that resulted in the confirmed death of Supreme Leader Ali Khamenei. It was the biggest geopolitical event in years. By the time I woke up on March 1st, the story had been developing for hours.

For a political fundraising operation, this is exactly the scenario that separates prepared organizations from reactive ones. Within 24–48 hours of an event like this, the organizations that have relevant creative in front of their donors will outperform those scrambling to produce it by 3–5x in yield. The window is short and unforgiving.

What I woke up to was this:

🤖 Autonomous Agent Build Log — March 1, 2026 · 3:12 AM PT

03:12 · Event detected: US-Israel strikes on Iran. Multiple wire services confirmed. Khamenei death confirmed by Iranian state media.
03:14 · Source verification begun: Reuters, NYT Live Blog, Guardian, Times of Israel, Politico, CBS 6 Richmond cross-referenced. Gold Star mother Brenda May quote located and verified: "It's not a war — it's payback."
03:21 · Client relevance scored: Senate Leadership Fund (SLF) flagged HIGH — active political fundraising client, military/foreign policy angle. Source citations compiled with usage guidance and compliance flags.
03:28 · Brief delivered to Claude, GPT-4, Gemini simultaneously: background, verified quotes, compliance rules, creative format specs (SMS/MMS), scoring benchmarks from historical SLF performance data.
03:41 · Creative output received: 18 complete creatives — 4 SMS + 2 MMS variants from each model. All checked against client-specific rules (character limits, merge field syntax, claim boundaries).
03:49 · Comparison dashboard built and deployed: all 18 creatives displayed side-by-side with model attribution, character counts, and performance predictions. Full shootout ready for review at 6 AM.

Total elapsed time: 37 minutes. I was asleep for all of it.
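The client-rule checks in that 03:41 step are mechanical, which is why an agent can run them unattended. A minimal validator sketch, assuming a 160-character single-segment limit and a {{FirstName}}-style merge field syntax (both assumptions, not the client's actual rules):

```python
import re

MERGE_FIELD = re.compile(r"\{\{(\w+)\}\}")
ALLOWED_FIELDS = {"FirstName", "City"}  # hypothetical allowed merge fields
MAX_CHARS = 160                         # single SMS segment (GSM-7)

def check_creative(text):
    """Return a list of rule violations for one SMS creative."""
    problems = []
    if len(text) > MAX_CHARS:
        problems.append(f"over {MAX_CHARS} chars ({len(text)})")
    for fld in MERGE_FIELD.findall(text):
        if fld not in ALLOWED_FIELDS:
            problems.append(f"unknown merge field: {fld}")
    if text.count("{{") != text.count("}}"):
        problems.append("unbalanced merge field braces")
    return problems

print(check_creative("{{FirstName}}, it's payback time. Chip in $5 now."))  # []
```

What the validator can't do is the claim-boundary judgment described later, like the vote-margin flag the agent missed. That stays human.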

When I opened my phone in the morning, I had a fully verified intelligence brief with source citations, 18 production-ready creatives across three AI models, a comparison dashboard, and a recommended pick from the agent — all calibrated to that specific client's historical performance data.

In the old workflow, a breaking event like this meant a 4–6 hour sprint: find and verify sources, write the brief, produce creative variants, circulate for review. The window where donors are most emotionally engaged with the news would already be closing by the time we had anything to send. The agent delivered a complete package in 37 minutes — while I was asleep.

The comparison is concrete. Before building this system, here's what a breaking-news response looked like:

Before: Manual Process

4–6 hours of source research and verification. Brief written by hand, sent to a copywriter. One or two creative variants produced. Limited time for iteration. By the time creative was ready, the emotional peak of the news cycle had passed. Full-time analyst required to do this reliably.

After: Agent Workflow

37 minutes. Multi-source verification with explicit citation database. 18 creative variants across 3 models. Historical performance benchmarks applied automatically. Comparison dashboard ready before the news cycle peaks. I reviewed it over coffee.

What It Can't Do

This is the part most people skip. I'm not going to.

The system is good at pattern work — finding what's there, synthesizing information from multiple sources, applying known frameworks to new inputs. It is not good at judgment that requires context the system doesn't have.

On the Iran brief: the agent produced 18 creatives. I looked at them for about 12 minutes, picked three, flagged one compliance issue the agent missed (a claim about Senate vote margins that was technically accurate but would have required a footnote under our client's guidelines), and sent the package for final review. The 12 minutes of my judgment is worth something the $20/month doesn't capture.

The agent also has no intuition about what a client relationship can bear. It doesn't know that a particular donor file is fatigued, that a specific signer has been overused, or that a compliance officer at the client is particularly sensitive about casualty statistics. That institutional knowledge lives in my head and in my team's notes — not in the agent's context window.

The honest accounting: The agent handles maybe 80% of what a junior analyst does. The remaining 20% is judgment, relationship context, and the kind of institutional knowledge that accumulates over years. My 12 minutes of review on the Iran brief was that 20%. The agent's 37 minutes was the 80%. That ratio is what makes the economics work.

The Pattern Forensics Example

The breaking-news workflow is the most dramatic example. But the one I lean on most is the pattern analysis — the "what's working right now" picture that used to take me a day a week to maintain manually.

I run SMS campaigns across roughly 40 active clients at any given time. Each client has a creative library with dozens of tested messages. Understanding which structures are producing the highest yield — across all clients, across all lists, after controlling for list size and timing — used to require a level of analysis I could only do occasionally, not continuously.

The pattern forensics agent runs weekly against a dataset of 4,000+ campaigns. It surfaces things like: anger/outrage hooks outperform deadline/urgency hooks by roughly 15x in political fundraising contexts. Or: the "test/surveillance conceit" opening structure — where the message frames itself as a confirmation or verification rather than a fundraising ask — has been producing $0.013–0.020 per send in controlled tests across veteran and law enforcement clients, against a baseline of $0.004–0.008.
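To make those per-send figures concrete, here's the arithmetic on a 300,000-send file (a hypothetical round number), using midpoints of the ranges above:

```python
def projected_revenue(yield_per_send, sends=300_000):
    """Project gross revenue for one send at a given yield-per-send."""
    return yield_per_send * sends

baseline = projected_revenue(0.006)   # midpoint of the $0.004–0.008 baseline
conceit = projected_revenue(0.0165)   # midpoint of the $0.013–0.020 range
print(round(baseline), round(conceit))  # 1800 4950
```

Roughly $3,000 of difference per send, from an opening-structure choice. Multiply that across dozens of clients and weekly sends and the value of continuous pattern analysis is obvious.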

These are the kinds of findings that a good analyst would surface after weeks of careful work. The agent surfaces them overnight, every week, against the freshest data available. My job is to decide what to do with the finding — which is where the judgment lives.

The agent doesn't tell me what to do. It tells me what's true. The distance between those two things is where my value as a strategist either shows up or doesn't.

How to Think About This If You're a CMO

The instinct, when you hear something like this, is to start imagining where you'd apply it — and then to get stuck on all the reasons your situation is different. Your data is messier. Your compliance requirements are stricter. Your stakeholders would never approve it. Your IT team controls the stack.

Some of those objections are real. But here's the framing shift I'd offer:

Don't start by asking "where can I deploy an AI analyst?" Start by asking: "what work is my team currently doing that requires consistency and synthesis, not judgment?" That's the work the agent is actually good at. The judgment work — the strategic calls, the relationship management, the institutional context — stays with the people.

In my operation, the split shook out roughly like this: about 80% of the workload was synthesis (monitoring, verification, brief production, variant generation), which the agent now handles, and about 20% was judgment (strategic calls, relationship context, institutional knowledge), which stays with me.

The 80/20 split may shift in your context. If you're in a heavily regulated industry, the compliance judgment layer may be larger. If your data is cleaner and your client relationships are more standardized, the agent may handle more. But the split exists everywhere — the question is just whether you've mapped it yet.

What This Changes — and What It Doesn't

The economics of analysis work are changing in a way that's hard to fully price right now. A task that cost $75K annually in salary and overhead now costs $20 in API calls. That's not a marginal improvement — it's a structural change in the cost curve of a function that almost every marketing organization carries.

What doesn't change: the value of judgment. If anything, what I've found is that the agent work has increased the premium on strategic thinking, because the synthesis layer is no longer the bottleneck. When you have to produce the analysis yourself, analysis becomes the constraint. When the agent produces it, you find out quickly whether your judgment is actually any good — because that's the only thing left.

The $75K I'm "saving" isn't going back to the bottom line. It's being reinvested in the judgment layer — in the time I can now spend on the actual strategic calls, the client relationships, the 12-minute review sessions that determine which of the 18 creatives actually goes to a donor file of 300,000 people.

That's the version of the story I want to tell — not the cost savings, but what it makes possible when the cost savings are reinvested correctly.

The Three-Step Implementation Pattern

1. Audit the Analysis Work First

Map every task your analyst (real or hypothetical) does by week. Classify each as: synthesis-heavy (agent-suitable) or judgment-heavy (human-required). Be honest — most orgs are surprised how much falls in the synthesis bucket.

2. Build the Briefing Format First

Before automating anything, design the output you want. What does the perfect daily brief look like? What does the ideal creative research package contain? Get the format right manually — then automate the production of it. Order matters here.

3. Invest the Savings in the Judgment Layer

The ROI isn't in the cost savings — it's in what you do with the freed capacity. If you build an agent and then just do the same amount of analysis faster, you've missed the point. Reinvest the freed time in strategic work that was previously crowded out by the synthesis bottleneck.

A Final Note on the Human Side

I said at the start that I didn't fire anyone. That's true in my case. But I want to be clear-eyed about what this technology means at scale.

The junior analyst role — as it has been practiced for the past 20 years — is in genuine transition. Not the judgment layer. Not the strategic layer. Not the relationships. But the synthesis and production work that comprises most of a junior analyst's first two years is exactly what these systems are now competent to do.

The organizations that respond well to this will redesign the junior role: less synthesis production, more synthesis evaluation and strategic apprenticeship. The ones that respond poorly will either try to deny it (and fall behind) or use it as an excuse to reduce headcount without reinvesting in the judgment layer (and also fall behind, just more profitably for a quarter or two).

The $20/month isn't the story. What you do with the other $74,980 is the story.


The Operator's Prompt Pack

The system prompts, briefing templates, and workflow architecture behind the case studies in this series — including the breaking-news briefing format and the creative forensics framework. Built for marketing operators running real campaigns at scale.
