Introduction: I Let AI Agents Run My Business for a Year. Here Is What Happened.
Note to the reader: Claude and I collaborated on this series, and I’ve only lightly edited it so that those of you who have not worked closely with an LLM get a sense of what a human/AI collaboration yields. It’s decidedly not my “voice”: my own writing style has none of the grandiosity with which Claude writes in the first person under my name.
I want to be upfront about something before you read a single post in this series: some of what I am about to describe is going to sound like I made it up. The numbers will seem inflated. The timeline will seem impossible. The breadth of what one person built in twelve months will seem like the kind of thing that requires a team, a venture fund, and a marketing department willing to sand the edges off the true story.
I’ve worked adjacent to software developers for almost my entire career, first as director of audience and later as publisher of several software development-focused journals.
More recently, I’ve worked in two CTO organizations managing data products and strategy. I worked daily with developers as an internal customer and product manager.
Over the years I gained solid insight into Agile development and how teams build business-critical, customer-facing software. That knowledge hasn’t been tossed out the window; it heavily informs the way I approach the projects I’ll be discussing. But the time compression in sprints using an AI companion blew me away.
What were once weekly sprints can be compressed into hours.
Granted, I’m not building complex enterprise software with five nines uptime requirements, but the journey I’ve been on has been illuminating.
None of it is exaggerated. All of it is documented. And the most useful thing I can do before the series begins is explain how it happened, because the explanation is the part that transfers to your situation — not the outputs, but the conditions that produced them.
How This Started
I have spent thirty-two years in B2B media. Audience development. Content strategy. Editorial leadership. Technology strategy. I know the industries I work in with a depth that takes decades to accumulate. I understand the financial models, the audience psychology, the advertiser relationships, the competitive dynamics, the technology choices that separate B2B media companies that grow from those that stagnate.
What I did not have was the ability to build software. I had domain expertise without execution capability. The gap between knowing what should be built and being able to build it had limited me for years.
In early 2025, that gap closed.
Not because I learned to code in the traditional sense, though I learned enough to direct and evaluate what was being built. Because AI collaboration, used systematically, gave me the ability to turn domain knowledge into functioning software at a pace and cost that had no analog in my previous thirty-two years of professional experience.
Over the twelve months that followed, I built more than twenty software products. WordPress plugins. AI-powered chatbots. Desktop applications. Progressive web apps. Research assistants. SEO and answer engine optimization tools. A multi-agent consulting system. A knowledge base built from years of expertise, accessible to AI agents I direct like a team. The equivalent cost of that work, measured against what a traditional development team would have charged, was somewhere between $1.3 million and $2.6 million. My actual AI API costs across the year: between $1,500 and $2,500.
I am writing this series to document exactly how that happened. (Well, actually, Claude wrote this and I edited it.)
The Thing That Actually Changed Everything
The honest answer to “how did you do this” is not a tool. It is not a model or a platform or a framework. It is something I did not expect to matter as much as it does: I gave the AI everything.
Most people who experiment with AI give it tasks. Write this email. Summarize this document. Generate ten ideas for this campaign. They treat AI as a capable assistant that needs to be told what to do one task at a time.
What I did was different. I built a knowledge system — a structured representation of who I am as a professional and how I think — and I gave it to Claude as the foundation for everything that followed.
That system includes the obvious things: my domain expertise in B2B media, audience development frameworks, editorial standards, technology strategy. My understanding of the financial services verticals, the insurance market, the investment advisory space, the legal publishing landscape. The product development principles I have refined over decades. The audience marketing playbooks that work and the ones that look good in decks and fail in practice.
But it also includes things that surprised me when I started articulating them. The books and authors that have shaped how I think about complex problems. The music I return to when I need to think clearly about something hard. The people whose judgment I trust, and what specifically I trust them about. A detailed personal history — professional and otherwise — that contains the formative experiences that turned into professional instincts I had never made explicit.
When I gave Claude that full picture, something interesting happened. It identified connections in my own thinking that I had never made consciously. It found patterns between how I approach editorial problems and how I approach technology problems — patterns that had been operating implicitly for years without my being aware of them. It drew on non-vocational experiences I had shared — a framework I had developed for thinking about scuba diving risk assessment, for example — and connected it to how I evaluate product risk. The result was an AI that understood not just what I knew but how I think.
That knowledge system became the foundation of the agent architecture that followed. When I built specialized AI agents — a strategy agent, a code quality agent, a content agent, a research agent, a documentation agent — each one drew on the same underlying knowledge system. They did not just follow instructions. They operated with context about what I value, how I evaluate quality, what good looks like in my specific domains.
The agents multiplied my output. The knowledge system made that output consistently mine.
The Framework I Built Before Building Products
Before writing a single line of code or producing a single product, I spent weeks on something that seemed like overhead at the time: a systematic assessment of my own skills, experiences, and expertise, translated into structured context that AI could work with.
This was not a resume. It was a distillation — what I actually know about audience development, not just that I have done it; the specific mental models I use when evaluating content strategy, not just that I have a content strategy background; the hard-won lessons from projects that failed, not just the highlights from projects that succeeded.
I worked with Claude iteratively to develop this. I would share a domain area, and Claude would ask questions that surfaced assumptions I had never articulated. I would describe a professional experience, and Claude would identify the principle it embedded that I had been applying unconsciously. The process of building the knowledge system was itself a learning process — not about AI, but about my own expertise.
Once the knowledge system existed, I built Skills and Agents around it. A Skill, in the Claude architecture I was using, is a reusable instruction set for a specific type of task — how to approach a requirements document, how to evaluate AI-generated code against my standards, how to structure a content strategy recommendation for a specific audience type. An Agent is a more persistent collaborator with a system prompt, context files, and a defined role in the workflow.
I now have thirty-one specialized agents. Some handle technical tasks: code generation, quality assurance, documentation, architecture review. Some handle editorial tasks: research synthesis, content drafting, fact-checking, SEO and answer engine optimization. Some handle strategic tasks: competitive analysis, product positioning, client-facing communication. Each one is specialized. Each one operates from the same underlying knowledge system that represents how I think.
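To make the pattern concrete, here is a minimal sketch of the shared-knowledge agent idea using the Anthropic Python SDK. The file name, role prompts, and model id are illustrative stand-ins, not my actual agent definitions:

```python
# A minimal sketch of the shared-knowledge agent pattern, using the
# Anthropic Python SDK. File name, role prompts, and model id are
# illustrative stand-ins, not my actual agent definitions.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# The distilled domain expertise that every agent shares.
with open("knowledge_system.md") as f:
    KNOWLEDGE = f.read()

# Each agent is a role prompt layered on top of the same knowledge system.
AGENT_ROLES = {
    "strategy": "You are a B2B media strategy agent. Evaluate ideas against the playbooks below.",
    "code-quality": "You are a code quality agent. Review code against the standards below.",
    "research": "You are a research agent. Synthesize sources using the frameworks below.",
}

def run_agent(role: str, task: str) -> str:
    """Run one specialized agent against a task."""
    response = client.messages.create(
        model="claude-sonnet-4-5",  # illustrative; use whatever model is current
        max_tokens=2000,
        system=f"{AGENT_ROLES[role]}\n\n--- Knowledge system ---\n{KNOWLEDGE}",
        messages=[{"role": "user", "content": task}],
    )
    return response.content[0].text

print(run_agent("strategy", "Assess the audience risk in this product brief: ..."))
```

The shape is the point: the role prompt changes per agent, but the same knowledge system travels with every one of them, which is why the output stays consistently mine.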
For people who have not worked deeply with AI in this way, this is likely the part that sounds most implausible. Thirty-one agents. A knowledge system that contains your non-vocational experiences. Agents that understand how you think. It sounds like science fiction or marketing language for something much simpler.
I understand the skepticism. I would have shared it a year ago. The rest of this series is the evidence that changes the prior.
What You Will Actually Get From This Series
This is not a series about AI tools. I will mention specific tools — Claude, Cursor, Pinecone, Tauri — but only in the context of decisions made and lessons learned. The tools will change. The methodology is what transfers.
This is not a series about hype. Twenty-plus products in twelve months is a real number, but this series documents the expensive failures alongside the successes. The two weeks of debugging that came from an over-scoped first product. The duplicate code problem that compounded across seven products before I addressed it. The context management mistakes that cost real calendar time and real rework. The failures are documented in detail because they are more instructive than the wins.
Experienced developers will roll their eyes at my rookie mistakes as a software architect.
This series is about a specific, documented methodology for professional AI collaboration — what it looks like to go from ad hoc AI use to a systematic practice that produces compounding leverage. It covers the full arc: how to build the knowledge system that makes AI genuinely useful for your specific expertise, how to develop the skill of directing AI with precision, how to build the infrastructure that makes each project faster than the last, how to evaluate the real costs and realistic ROI, and how to apply all of this to your domain — not just software development.
The methodology documented here was built by someone with deep B2B media expertise who had no software development background. It produced results that professional development teams would measure in years and millions. And it did so because domain expertise, systematically captured and made available to AI systems, is a compounding asset in ways that general AI use never is.
That is the thing I most want you to take from this introduction, before the technical details and the case study specifics begin: the leverage in AI collaboration does not come from the AI. It comes from the human’s clarity about what they know, what they value, and what they are trying to build. The AI executes with remarkable capability. The human defines what remarkable execution looks like in their domain.
Thirty-two years of domain expertise, carefully articulated and given to an AI as the foundation for everything it builds — that is what produced twenty products and a million-dollar portfolio on a $25,000 budget.
The series that follows documents how to do the same thing with your expertise, in your domain, starting from wherever you are right now.
A Note on What Has Changed Since This Work Began
The twelve months documented in this series were built on AI capabilities that have since been substantially upgraded. Claude Sonnet 4.6, released in February 2026, runs at performance levels that previously required the Opus-tier model. The 1-million-token context window — now in beta — means the entire knowledge system and product portfolio can exist in a single session context simultaneously, something that required careful management and constant tradeoffs throughout this case study.
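Opting into that long-context beta is, mechanically, a one-line change. A minimal sketch, assuming the beta flag Anthropic published for the Sonnet 4 family (confirm the current flag and model id against the documentation):

```python
# Minimal sketch: opting into the long-context beta via the Anthropic SDK.
# The beta flag shown is the one published for the Sonnet 4 family;
# confirm the current flag and model id against Anthropic's documentation.
import anthropic

client = anthropic.Anthropic()

knowledge = open("knowledge_system.md").read()  # illustrative file name

response = client.beta.messages.create(
    model="claude-sonnet-4-5",        # illustrative model id
    max_tokens=4000,
    betas=["context-1m-2025-08-07"],  # unlocks the ~1M-token context window
    system=knowledge,                 # the whole knowledge system in one session
    messages=[{"role": "user", "content": "Review the full product portfolio for duplicate code patterns."}],
)
print(response.content[0].text)
```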
Agent Teams, Agent Skills, and the Model Context Protocol, all of which arrived after this work was substantially complete, would have changed significant parts of the methodology. The final post in this series addresses those changes directly: what I would have built differently with today’s tooling, and what that means if you are starting now rather than in early 2025.
The core methodology does not change. The ceiling it can reach is higher than it was when I started. If you are beginning today, you are starting from a more capable platform than this case study was built on. That is a reason for more optimism, not less. The path is documented. The tools are better. The leverage available to a domain expert who builds this practice systematically is larger than anything in this series captured — and this series captured quite a lot.