The Real Numbers — What Twenty Products Actually Cost, and What a Human Team Would Have Cost
Discussions of AI productivity often stay at the level of anecdote and percentage improvement. This post provides something more specific: a full cost analysis of a twenty-product portfolio, with estimated AI API costs, estimated human time investment, and the equivalent human team cost that would have been required to produce the same output. The numbers have ranges — the AI API costs were not tracked as a primary metric and must be estimated — but the estimates are informed by session records, token usage patterns, and AI pricing data.
The AI API Cost
Over twelve months, the development practice involved an estimated two hundred to four hundred active build sessions. Session length ranged from twenty minutes to several hours, with an average estimated at forty-five to sixty minutes of active AI interaction per session. Distinct token exchange per session is estimated at fifty thousand to one hundred fifty thousand tokens (input plus output combined), with higher usage in early sessions when context re-establishment consumed a larger fraction and lower usage in later sessions after context infrastructure reduced overhead.
Total distinct token estimate: ten million to sixty million tokens over the twelve months. Billed volume is substantially higher, because a chat API re-transmits the accumulated conversation context on every turn; over a long multi-turn session, billed tokens run one to two orders of magnitude above the distinct tokens exchanged. Using Claude's published pricing of approximately three dollars per million input tokens and fifteen dollars per million output tokens, with a roughly three-to-one input-to-output ratio in typical development sessions, the estimated total AI API cost falls in the range of five thousand to twenty-five thousand dollars.
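As a sanity check on the arithmetic, the estimation method can be reproduced in a few lines of Python. The session counts, per-session token figures, and pricing come from the estimates above; the re-transmission multiplier, which converts distinct tokens into billed tokens, is an assumption chosen to reflect long multi-turn sessions. Treat this as a sketch of the method, not an audited bill.

```python
# Back-of-envelope model of the twelve-month API bill. Pricing is the
# published per-token rate cited above; the re-transmission multiplier is
# an assumption about how far billed volume exceeds distinct content.
PRICE_IN = 3.00 / 1_000_000    # USD per input token
PRICE_OUT = 15.00 / 1_000_000  # USD per output token

def annual_api_cost(sessions, tokens_per_session, retransmit_mult, input_share=0.75):
    """Estimate the yearly API cost from session counts and token volumes."""
    distinct = sessions * tokens_per_session  # distinct tokens exchanged
    billed = distinct * retransmit_mult       # context is re-sent every turn
    blended = input_share * PRICE_IN + (1 - input_share) * PRICE_OUT
    return billed * blended                   # blended rate works out to $6/million

low = annual_api_cost(sessions=200, tokens_per_session=50_000, retransmit_mult=75)
high = annual_api_cost(sessions=400, tokens_per_session=150_000, retransmit_mult=75)
print(f"${low:,.0f} to ${high:,.0f}")  # roughly $4,500 to $27,000
```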
A conservative central estimate is eight thousand to twelve thousand dollars for the year. The actual cost was likely higher in the first half — before context optimization, before the Shared Library reduced the tokens required per session for equivalent output — and lower in the second half. The context optimization work documented in Post 4 likely reduced per-session token consumption by thirty to fifty percent in the second half of the year compared to the first.
Human time invested: approximately six hundred forty to eight hundred twenty hours over twelve months. This is the human judgment time — requirements writing, architecture decisions, session direction, testing, documentation, and infrastructure maintenance. It does not include the AI’s time, which was essentially instantaneous. At a consultant’s time value of two hundred to three hundred dollars per hour, this represents one hundred twenty-eight thousand to two hundred forty-six thousand dollars of time investment.
Product-by-Product Summary
Organized by complexity tier with human direction hours and equivalent human development time:
High-complexity products (AI News Cafe, GD Chatbot, Backyard Gardener, Estate Manager): four products. Human direction: 60–100 hours each. Equivalent human development: 35–60 developer-days each, meaning the time a full-time developer would need working in a traditional team environment. Total equivalent for these four products: 140–240 developer-days.
Medium-high products (Career Coach, Journey Mapper, Executive Advisor, My TravelPlanner, Scuba GPT): five products. Human direction: 30–50 hours each. Equivalent human development: 15–25 developer-days each. Total: 75–125 developer-days.
Medium-complexity products (five, including Factchecker, SEO Assistant, AEO Optimizer, and Farmers Bounty). Human direction: 20–35 hours each. Equivalent human development: 10–18 developer-days each. Total: 50–90 developer-days.
Infrastructure (Shared Library, Agent System, Coding Playbook, Knowledge Base): total human direction 105–140 hours. Equivalent human time: 53–70 days at senior-level rates given the architectural and organizational value.
Other products and assets (IT Influentials website, Local SEO, supporting materials): 30–40 human direction hours. 15–20 days equivalent.
Total equivalent developer-days: approximately 333–545 days. That is between 1.3 and 2.2 full-time developer-years of production work, across a range of specialties from mobile development to AI/ML engineering to front-end development.
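The tier totals above are straightforward range sums. A minimal sketch that reproduces them, using the tier figures exactly as stated and assuming roughly 250 working days per developer-year:

```python
# Developer-day ranges per tier (low, high), copied from the summary above.
tiers = {
    "high-complexity (4 products)": (140, 240),
    "medium-high (5 products)": (75, 125),
    "medium-complexity (5 products)": (50, 90),
    "infrastructure": (53, 70),
    "other products and assets": (15, 20),
}

low = sum(lo for lo, _ in tiers.values())
high = sum(hi for _, hi in tiers.values())
print(f"{low}-{high} developer-days")                 # 333-545
print(f"{low / 250:.1f}-{high / 250:.1f} FTE-years")  # 1.3-2.2 at ~250 days/year
```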
The Human Team Equivalent Cost
To produce this portfolio with a traditional team would require:
A VP of Engineering providing technical leadership and architecture oversight: $18,000–22,000 per month.
A senior full-stack developer specializing in WordPress/PHP for the core plugin work: $14,000–18,000 per month.
An AI and ML engineer for the Claude integrations, RAG architecture, and agent systems: $16,000–20,000 per month.
A front-end developer for the React, Swift, and UI work: $10,000–14,000 per month.
A product manager to translate requirements and manage the backlog: $9,000–12,000 per month.
A QA engineer part-time: $5,000–7,000 per month.
A technical writer for documentation part-time: $4,000–6,000 per month.
Total monthly payroll: $76,000–99,000.
At this staffing level, with competent execution, the portfolio would require twelve to eighteen months to produce — traditional teams have coordination overhead, context-switching, and velocity ramp-up time that extend timelines relative to the compressed AI-assisted timelines described in Post 3.
Total payroll cost: $912,000–$1,782,000. Adding benefits, overhead, tooling, management time, and reasonable contingency at thirty to fifty percent: $1,200,000–$2,700,000.
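The payroll bounds compound three ranges: monthly cost, timeline, and overhead. A sketch that reproduces them with the role figures from the staffing plan above; pairing the cheapest payroll with the shortest timeline and lowest overhead (and the opposite at the top) is what produces the wide bracket:

```python
# Monthly payroll ranges per role (low, high), from the staffing plan above.
roles = {
    "VP of Engineering": (18_000, 22_000),
    "Senior full-stack developer": (14_000, 18_000),
    "AI/ML engineer": (16_000, 20_000),
    "Front-end developer": (10_000, 14_000),
    "Product manager": (9_000, 12_000),
    "QA engineer (part-time)": (5_000, 7_000),
    "Technical writer (part-time)": (4_000, 6_000),
}

monthly_low = sum(lo for lo, _ in roles.values())   # $76,000
monthly_high = sum(hi for _, hi in roles.values())  # $99,000

total_low = monthly_low * 12 * 1.30    # 12 months, 30% overhead: ~$1.19M
total_high = monthly_high * 18 * 1.50  # 18 months, 50% overhead: ~$2.67M
print(f"${total_low:,.0f} to ${total_high:,.0f}")
```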
The Comparison
AI API cost: $5,000–25,000. Human time investment at consulting rates: $128,000–246,000. Total AI-assisted cost: $133,000–271,000, where the upper bound prices every hour of human time at full consultant rates.
Traditional team equivalent: $1,200,000–2,700,000.
The cost ratio is between four-to-one and twenty-to-one in favor of the AI-assisted approach, depending on how you account for the human time investment and which estimates you use. Using the most conservative AI-assisted estimate and the most conservative human team estimate, the ratio is still approximately four-to-one. Using the central estimates, it is closer to seven-to-one to ten-to-one.
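The ratio bounds follow mechanically from the totals. A short sketch making the bracketing explicit, using only numbers already stated in this post:

```python
# Cost bounds (USD) as totaled above.
ai_api = (5_000, 25_000)         # twelve-month API estimate
human_time = (128_000, 246_000)  # 640-820 hours at $200-300/hour
team = (1_200_000, 2_700_000)    # traditional team, fully loaded

ai_total = (ai_api[0] + human_time[0], ai_api[1] + human_time[1])  # 133k-271k

def midpoint(r):
    return (r[0] + r[1]) / 2

# Conservative: the cheapest team against the priciest AI-assisted total.
print(f"conservative: {team[0] / ai_total[1]:.1f}:1")           # ~4.4:1
# Favorable: the priciest team against the cheapest AI-assisted total.
print(f"favorable: {team[1] / ai_total[0]:.1f}:1")              # ~20.3:1
# Central: midpoint against midpoint.
print(f"central: {midpoint(team) / midpoint(ai_total):.1f}:1")  # ~9.7:1
```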
The more practically useful framing: the AI-assisted approach produced work equivalent to what a well-staffed team would charge one to three million dollars to produce, at a total all-in cost of under three hundred thousand dollars — and that upper bound assumes consultant-level hourly rates for the human time invested.
What the Numbers Do Not Capture
An honest analysis acknowledges the limitations of the comparison.
A human team with dedicated QA, architectural review, and security auditing would produce different output: higher quality on the dimensions those dedicated roles address, at lower velocity and higher cost. The AI-assisted portfolio carries more technical debt than a fully human-staffed portfolio would carry. That debt is real, visible, and documented in Post 7.
A human team also produces institutional knowledge in the form of experienced people whose skills compound over time. The AI-assisted practice produces products and documentation. The documentation is better maintained than many human-staffed projects produce, but it is not equivalent to having experienced professionals whose contextual judgment deepens with each project completed.
These limitations inform where AI-assisted development is the right choice and where traditional human expertise is worth the additional cost. For the use cases represented in this portfolio, the trade-off was clearly favorable. For higher-stakes domains with strict quality, security, or regulatory requirements, the calculus is different. Knowing which situation you are in is the judgment call that the cost comparison informs but does not determine.
How to Calculate Your Own Equivalent Numbers
The numbers in this post are specific to software development. The analytical framework — what did the AI-assisted approach cost, and what would the equivalent human-only approach have cost — is applicable to any knowledge work domain. Here is how to apply it to your own situation.
Step one: identify the task categories you perform where AI could produce a useful draft. Estimate how long each category currently takes per instance. Estimate how long it would take with AI collaboration — including the time to define, contextualize, direct, evaluate, and iterate. The ratio of current time to AI-assisted time is your leverage ratio for that task category.
Step two: estimate the dollar value of the time saved. Take your effective hourly rate (total compensation divided by working hours) and multiply it by the hours saved per year across all AI-assisted task categories. This is the human time value of the AI collaboration.
Step three: estimate the AI API cost for your expected usage level. For most knowledge work professionals using AI for document and analysis work — as opposed to high-volume software development — the AI API cost is likely $50 to $500 per month at current pricing. Compare that cost to the human time value calculated in step two.
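A minimal sketch of the three-step calculation. Every input below is a hypothetical placeholder; substitute your own task categories, hours, rates, and expected API spend:

```python
# Three-step ROI estimate for one professional. All inputs are hypothetical
# placeholders to illustrate the method, not figures from this case study.
tasks = {
    # task category: (hours today, hours with AI, instances per year)
    "client report": (6.0, 2.0, 40),
    "market analysis": (10.0, 4.0, 12),
}
hourly_rate = 150.0       # step two: total compensation / working hours
monthly_api_cost = 200.0  # step three: assumed usage level

for name, (before, after, _) in tasks.items():
    print(f"{name}: leverage ratio {before / after:.1f}x")  # step one

hours_saved = sum((b - a) * n for b, a, n in tasks.values())
time_value = hours_saved * hourly_rate  # step two: dollar value of time saved
api_cost = monthly_api_cost * 12        # step three: annual API cost

print(f"hours saved per year: {hours_saved:.0f}")
print(f"time value ${time_value:,.0f} vs API cost ${api_cost:,.0f}")
```

With these placeholder inputs, the time value comes out near $35,000 against $2,400 of annual API cost, which is the shape of result the next paragraph describes.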
For most professionals who complete this calculation honestly, the AI API cost is a rounding error relative to the human time value. The investment question is not whether the AI API cost is justified — it almost always is, easily. The investment question is whether the time required to build and maintain the AI collaboration infrastructure (context files, templates, specialist profiles) is justified by the leverage it produces. The answer, for any professional spending more than ten hours per week on production tasks that AI can assist, is almost always yes — usually within the first month of systematic use.
The Cost Trajectory Over Time
One aspect of the cost analysis worth examining separately: the cost trajectory over the twelve months, not just the totals. The first quarter of the year was the most expensive per unit of output: context re-establishment overhead was high, no Shared Library existed, few established patterns were available to speed builds, and more corrective iteration was required per feature built.
The second quarter improved as the methodology matured: better requirements documents reduced first-pass correction needs, some patterns were established and could be referenced in session context, context files were being maintained more consistently. The third quarter improved further as the Shared Library started to take shape and the Agent System was introduced. The fourth quarter, with the full infrastructure in place, was substantially more efficient than the first.
The practical implication: cost estimates for early-phase AI collaboration should assume a higher cost-per-unit-of-output than the steady-state practice. The methodology investment during the first three to six months is real and necessary. It produces the infrastructure and patterns that make the later-phase efficiency possible. Organizations or individuals evaluating the economics of AI collaboration should model the full lifecycle — including the methodology ramp-up phase — rather than projecting steady-state efficiency from the beginning.
The Comparison to Traditional Development Augmented With AI
The comparison presented in this post is between solo vibe coding and a full traditional development team. A more practically relevant comparison for many organizations is between traditional development augmented with AI tools versus vibe coding as the primary development methodology.
Traditional development augmented with AI — developers using GitHub Copilot or similar tools to speed up code writing while maintaining traditional team structure, code review, and project management — typically produces productivity improvements of fifteen to thirty percent. The coordination overhead remains. The cycle time improvement is meaningful but incremental.
Vibe coding as the primary methodology, in a solo or very small team context, produces productivity improvements measured in multiples rather than percentages — as the numbers in this post demonstrate. The trade-off is the quality disciplines that a traditional team’s review structure provides versus the velocity disciplines that vibe coding requires to maintain quality without that structure. Both are achievable; they require different organizational forms and different discipline investments.
How Recent AI Innovations Change This Picture
The cost analysis described in this post reflected the pricing and capability profile of AI models available in 2024 and early 2025. The model landscape has shifted substantially since then, and the financial case for AI-augmented development has strengthened.
Claude Opus 4.6, released February 2026, offers a 1-million-token context window with prompt caching discounts of 90% on cached tokens and batch processing discounts of 50%. For the workflow described in this case study — where large, frequently reused context files were loaded into many sessions — prompt caching is a direct cost reduction on the most expensive part of the token budget. The shared library context that was loaded repeatedly across sessions becomes a cached asset rather than a fresh input cost every time.
Extended prompt caching, maintaining context for up to 60 minutes across session breaks, also affects the cost structure. Shorter breaks no longer require re-establishing full context, which means the token cost of session overhead is lower. The effective cost per unit of productive work decreases.
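The caching effect on session economics is easy to model. A sketch under stated assumptions: the 90% cached-read discount described above, a hypothetical 50,000-token shared context, and illustrative turn counts; it also ignores the one-time premium charged when the cache is first written:

```python
# Input-side cost of a session that reloads a large shared context each turn.
# The 90% cached-read discount is from the pricing described above; context
# size, turn count, and per-turn tokens are illustrative assumptions, and
# the one-time cache-write premium is ignored for simplicity.
PRICE_IN = 3.00 / 1_000_000  # USD per fresh input token
CACHED_READ_FACTOR = 0.10    # cached tokens billed at ~10% of the input rate

def session_input_cost(context_tokens, turns, per_turn_tokens, cached):
    context_rate = PRICE_IN * (CACHED_READ_FACTOR if cached else 1.0)
    context_cost = context_tokens * turns * context_rate  # re-sent every turn
    fresh_cost = per_turn_tokens * turns * PRICE_IN       # new prompt content
    return context_cost + fresh_cost

uncached = session_input_cost(50_000, turns=30, per_turn_tokens=2_000, cached=False)
cached = session_input_cost(50_000, turns=30, per_turn_tokens=2_000, cached=True)
print(f"${uncached:.2f} vs ${cached:.2f} per session")  # ~$4.68 vs ~$0.63
```

Multiplied across hundreds of sessions, that difference on the context portion of the bill is where the caching discount lands for a workflow like this one.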
On the capability side, Claude Sonnet 4.6 now performs at levels that previously required Opus-class pricing, while charging at Sonnet pricing. For the volume of development work described in this case study, the ability to use Sonnet for a larger proportion of tasks without sacrificing output quality is a meaningful cost reduction. Tasks that previously required Opus — complex architecture design, multi-file refactoring, integration planning — can now be handled at Sonnet pricing as model capability has risen. The build cost per product in a portfolio like the one described here would be materially lower with current pricing than with the 2024 pricing referenced in this post’s numbers.