Twenty Products in Twelve Months — The Full Portfolio and What the Numbers Reveal

Claims about AI productivity are everywhere. Concrete documentation of what AI-augmented work actually produces over a sustained period — with specific products, specific time estimates, and honest analysis — is rarer. This post provides exactly that: a complete inventory of the twenty-plus products built over twelve months, with time estimates, complexity assessments, and the pattern analysis that makes the inventory instructive rather than just a list of projects.

The Complete Portfolio

AI News Cafe — WordPress plugin aggregating content from multiple news sources, generating AI editorial summaries at multiple lengths, classifying by topic, and providing editorial workflow. Uses Claude for summarization. Complexity: high. Estimated directed hours: 60–80 over 4–6 weeks.

Career Coach — WordPress plugin providing AI-powered career guidance, portfolio positioning, and career transition support for professionals at career inflection points. Complexity: medium-high. Estimated directed hours: 40–50 over 3–4 weeks.

Journey Mapper — WordPress plugin for building and visualizing customer journey maps with stage management, database storage, and export capabilities. Complexity: medium-high. Estimated directed hours: 40–50 over 3–4 weeks.

Executive Advisor — WordPress plugin providing strategic advisory guidance through an AI interface, with a four-pillar consulting framework and knowledge base integration. Complexity: medium. Estimated directed hours: 30–40 over 2–3 weeks.

GD Chatbot — WordPress-based AI chatbot with full Retrieval-Augmented Generation architecture: knowledge base search, Pinecone vector database queries, Tavily web search, and Claude synthesis. The reference RAG implementation for the portfolio. Complexity: high. Estimated directed hours: 60–80 over 5–6 weeks.
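The retrieval-then-synthesis flow described above can be sketched as a simple pipeline. This is an illustrative sketch only: the real plugin calls Pinecone, Tavily, and the Claude API from PHP, and none of the function names below are taken from its actual code.

```python
# Illustrative RAG flow: retrieve from several sources, merge the results,
# then hand the combined context to the model for synthesis. All function
# bodies are stand-ins for the real knowledge-base, Pinecone, Tavily, and
# Claude calls.

def search_knowledge_base(query: str) -> list[str]:
    # Stand-in for a full-text search over the local knowledge base.
    return [f"kb result for: {query}"]

def query_vector_db(query: str) -> list[str]:
    # Stand-in for a Pinecone similarity query over embedded documents.
    return [f"vector match for: {query}"]

def web_search(query: str) -> list[str]:
    # Stand-in for a Tavily web search for fresh, out-of-corpus facts.
    return [f"web result for: {query}"]

def synthesize(query: str, context: list[str]) -> str:
    # Stand-in for a Claude call that answers the query grounded in context.
    return f"Answer to '{query}' grounded in {len(context)} retrieved passages."

def answer(query: str) -> str:
    context = (
        search_knowledge_base(query)
        + query_vector_db(query)
        + web_search(query)
    )
    return synthesize(query, context)

print(answer("How do I configure the chatbot?"))
```

The design point is that each retrieval source is independent and additive; the synthesis step is the only place the sources meet.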

Backyard Gardener (now Farmers Bounty) — Multi-platform application: macOS desktop app in Swift/SwiftUI, React PWA, and WordPress plugin, integrating Weather Underground real-time data, soil and plant databases, and AI planting guidance. The most technically complex product in the portfolio. Complexity: very high. Estimated directed hours: 80–100 over 6–8 weeks.

Estate Manager — Application for estate administration: asset tracking, document management, professional advisor coordination. Built as desktop app and PWA. Complexity: high. Estimated directed hours: 60–80 over 5–6 weeks.

Factchecker Plugin — WordPress plugin integrating AI fact-checking into editorial workflows: source verification, claim analysis, confidence scoring, editorial flagging. Complexity: medium. Estimated directed hours: 25–30 over 2 weeks.

SEO Assistant Plugin — WordPress plugin providing real-time SEO analysis, keyword recommendations, and content optimization guidance integrated into editorial workflows. Complexity: medium. Estimated directed hours: 20–30 over 2 weeks.

AEO Optimizer — WordPress plugin optimizing content for Answer Engine Optimization — structuring content to appear in AI-generated answer responses from ChatGPT Search, Perplexity, and Google AI Overviews. Complexity: medium. Estimated directed hours: 25–30 over 2 weeks.

Scuba GPT — AI agent specialized in scuba diving knowledge: dive site recommendations, certification guidance, equipment selection, safety protocols. Includes fine-tuning data preparation. Complexity: medium. Estimated directed hours: 30–40 over 3 weeks.

My TravelPlanner — Travel planning application with AI-powered destination recommendations, itinerary building, and logistical research. WordPress plugin and PWA components. Complexity: medium-high. Estimated directed hours: 35–45 over 3 weeks.

Local SEO — Research and strategy deliverable with AI-generated analysis of local SEO opportunities for nonprofit organizations. Research-driven, not code-driven. Estimated directed hours: 10–15.

IT Influentials Website — Marketing website built on WordPress with full brand implementation and service positioning. Complexity: low-medium. Estimated directed hours: 20–25.

ITI Shared Library — Centralized code library eliminating 50–70% of duplicate code: Claude API client, Tavily client, Pinecone client, database base class, patterns library, project templates. Infrastructure that multiplied all subsequent product development. Estimated directed hours: 30–40.
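The duplicate-code elimination works through a pattern like the following, sketched here in Python. The actual ITI Shared Library is PHP and is not public; the class names and structure below are assumptions chosen to illustrate the base-class-plus-thin-subclass shape, not the library's real API.

```python
# Hypothetical sketch: a shared base client centralizes request/retry/error
# plumbing so each product-specific client only supplies what differs.

class BaseClient:
    """One shared place for retry and error handling across all API clients."""

    def __init__(self, api_key: str, max_retries: int = 3):
        self.api_key = api_key
        self.max_retries = max_retries

    def request(self, payload: dict) -> dict:
        last_error = None
        for _attempt in range(self.max_retries):
            try:
                return self._send(payload)
            except ConnectionError as exc:  # retry only transient failures
                last_error = exc
        raise RuntimeError(f"failed after {self.max_retries} attempts") from last_error

    def _send(self, payload: dict) -> dict:
        raise NotImplementedError  # subclasses provide the service-specific call

class ClaudeClient(BaseClient):
    # A product-specific client is reduced to the service-specific send logic.
    def _send(self, payload: dict) -> dict:
        return {"service": "claude", "echo": payload}

client = ClaudeClient(api_key="sk-example")
print(client.request({"prompt": "hello"}))
```

Because every plugin imports the same base class, a retry or error-handling fix lands once and propagates everywhere, which is where the 50–70% duplication saving comes from.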

Agent System — Multi-agent orchestration system with Orchestrator, Architecture, API Integration, Database, QA, and Documentation agents with defined roles and handoff protocols. Estimated directed hours: 40–50.
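A handoff protocol of this kind can be reduced to a toy sketch: an orchestrator passes each agent's artifact to the next agent in a defined sequence. The agent roles follow the list above, but the handoff logic here is invented for illustration and is not the actual Agent System.

```python
# Toy orchestration sketch: each role-specific agent receives the previous
# agent's output as its input, so the "handoff protocol" is just the
# artifact flowing through a fixed pipeline.

def architecture_agent(task: str) -> str:
    return f"design for {task}"

def api_agent(design: str) -> str:
    return f"endpoints per {design}"

def qa_agent(build: str) -> str:
    return f"tests covering {build}"

PIPELINE = [architecture_agent, api_agent, qa_agent]

def orchestrate(task: str) -> str:
    artifact = task
    for agent in PIPELINE:  # each step consumes the prior handoff
        artifact = agent(artifact)
    return artifact

print(orchestrate("journey mapper v1"))
```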

Personalized Coding Playbook — Living cross-project reference documenting full technology stack, architectural patterns, coding conventions, and development workflows. V1 then V2. Estimated directed hours: 15–20.

Knowledge Base and Agents JSON System — Organizational knowledge infrastructure: structured JSON agent definitions, skills library, context files, and agent prompts across 50+ domain-specific AI advisors. Complexity: medium. Estimated directed hours: 20–30.
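A structured JSON agent definition of the kind described might look like the following. The field names are assumptions for illustration; the actual schema used in the ITI knowledge base is not documented here.

```python
import json

# Hypothetical shape of one of the 50+ domain-specific agent definitions:
# a name, a role description, a skills list, context files, and a prompt.
agent_definition = {
    "name": "career-coach",
    "role": "AI advisor for career transitions",
    "skills": ["resume-review", "portfolio-positioning"],
    "context_files": ["career-frameworks.md"],
    "prompt": "You advise professionals at career inflection points...",
}

# Round-trip through JSON, as a loader for such definitions would.
serialized = json.dumps(agent_definition, indent=2)
loaded = json.loads(serialized)
print(loaded["name"], len(loaded["skills"]))
```

Keeping definitions as data rather than prose is what makes a library of 50+ advisors manageable: they can be validated, versioned, and loaded programmatically.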

Expat Advisor — Comprehensive tool for overseas relocation, with knowledge bases for France, Spain, and Italy, plus content files and agents for six professional domains involved in relocation. Estimated directed hours: 25–30.

Personal Assistant — Superset of all the portfolio's agents, with additional context about Peter's tastes in food, music, and art, plus agent orchestration. Estimated directed hours: 15–20.

What the Portfolio Reveals

Time compression was the dominant trend. The first complex product took four to six weeks of active direction. By the end of the year, a product of equivalent complexity took five to ten days. The improvement was not from the AI getting dramatically faster — the code generation was always fast. It was from better requirements documents, shared components, and a context infrastructure that meant each session started from an accurate, productive baseline rather than from zero. The trajectory shows what twelve months of consistent methodology improvement looks like.

Infrastructure investment was the highest-ROI work. The Shared Library, Agent System, and Coding Playbook represent approximately 85–140 hours of directed work — roughly 12–17% of the total estimated hours across the full portfolio. But every product built after the Shared Library was completed benefited from it. Every session that used the Agent System was more efficient because of it. The infrastructure investment was not overhead; it was the work that made all subsequent work faster and better.

Technical complexity and human time do not scale linearly. Backyard Gardener, the most technically complex product, took roughly twice as long in human direction time as a simpler plugin — not ten times as long. This is because the AI handles the raw complexity of code generation; the human time concentrates in requirements, evaluation, and judgment, which scale more gently with technical complexity than code production does. The AI does not slow down when the problem gets harder. The human does, but less than proportionally.

The bottleneck was always human judgment, not AI capability. Across twenty products, the delays and quality problems were consistently traceable to the same human-side issues: unclear requirements, scope that expanded past what could be tested, missing constraints that produced contextually wrong implementations, and delayed decisions about v1 scope. The AI consistently delivered technically correct code when given clear direction. When direction was unclear, output was wrong in ways that reflected the ambiguity in the direction. The AI was rarely the bottleneck. Human clarity was almost always the bottleneck.

The Implication for Non-Technical Professionals

The inventory above documents a software portfolio. Most readers of this series are not building software portfolios. The professional implication is not that everyone should become a software builder. It is that the leverage ratio demonstrated — one person directing AI producing what ten people would produce without AI — is available in domains beyond software, and understanding what it looks like in a concrete example makes it easier to recognize the equivalent leverage in your own domain.

What does a twenty-product equivalent look like for a marketing professional? It might be thirty campaign strategies, fifty content briefs, fifteen competitive analyses, and twelve brand frameworks — produced in the time it previously took to produce a third of that output with the same quality. What does it look like for a financial analyst? Twice as many client analyses completed at the same depth, with more scenario modeling, more comprehensive due diligence, and faster turnaround on each engagement. What does it look like for a consultant? More engagements supported simultaneously, with richer research, more comprehensive deliverables, and faster response to client questions.

The specific products in the ITI portfolio are illustrations. The pattern they illustrate — systematic AI collaboration producing leverage ratios of five to ten to one on specific task categories — is the point. That pattern is available wherever knowledge work can be systematically AI-augmented, which is a growing proportion of what most white-collar professionals spend their time on.

The Infrastructure Ratio

One specific observation worth highlighting from the portfolio inventory: the infrastructure products — the Shared Library, the Agent System, the Coding Playbook, and the Knowledge Base — represent approximately twelve to seventeen percent of the total estimated hours invested but produced a disproportionate share of the total value generated.

The Shared Library alone, which took thirty to forty hours to build, produced a seventy percent reduction in build time for every subsequent standard product. Applied across ten subsequent products at an average of one week of saved time each, the Shared Library returned its investment roughly twelve to fifteen times over in the products built after it. The Agent System and Coding Playbook produced similar returns: each hour invested in those infrastructure components produced multiple hours of saved time in every subsequent session that referenced them.
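The return multiple above can be checked with back-of-envelope arithmetic. The invested-hours range comes from the inventory; the hours-per-saved-week figure is an assumption (roughly 40–50 directed hours per week), so treat the result as a range, not a measurement.

```python
# Back-of-envelope check on the Shared Library ROI claim.
invested_hours = (30, 40)          # cost to build the Shared Library (from inventory)
products_after = 10                # subsequent standard products that used it
hours_per_saved_week = (40, 50)    # ASSUMED working hours in one saved week

# Worst case: least time saved per week, most hours invested.
low = products_after * hours_per_saved_week[0] / invested_hours[1]
# Best case: most time saved per week, fewest hours invested.
high = products_after * hours_per_saved_week[1] / invested_hours[0]
print(f"return multiple: {low:.0f}x to {high:.0f}x")
```

Under these assumptions the multiple lands between roughly 10x and 17x, consistent with the twelve-to-fifteen-times figure in the text.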

This ratio — twelve to seventeen percent of total investment producing disproportionate total return — is not unique to this case study. It reflects a general principle: infrastructure investment in any compounding practice produces outsized returns relative to direct production investment, because infrastructure multiplies all future production while direct production produces only one thing. The challenge is that infrastructure investment requires deferring immediate output for future leverage, and that trade-off is psychologically difficult when there is immediate production work available. Understanding the ratio makes the trade-off easier to make deliberately.

Applying the Portfolio Lens to Your Practice

Consider what a portfolio lens would reveal about your own professional output over the past twelve months. What were the most significant deliverables? How much of the work that went into them was direct production of the deliverable versus overhead, coordination, and rework? How much was reconstruction of context you had previously established but hadn’t captured in reusable form? How much was rebuilding analytical frameworks or document structures you had built before?

The answers to these questions reveal where AI collaboration could produce the most leverage in your specific practice. The direct production components — the first drafts, the data compilations, the framework applications — are where AI produces immediate time savings. The context reconstruction and framework rebuilding are where context files and template libraries produce additional leverage. Together, these represent the opportunity for your own practice to demonstrate the kind of output trajectory this portfolio documents.

How Recent AI Innovations Change This Picture

The portfolio described in this post — twenty-plus products built in twelve months — was achieved with AI tooling that was substantially less capable than what exists today. That is not a caveat; it is a data point about what becomes possible with current tooling.

Agent Teams, now available experimentally with Opus 4.6, change the parallel development calculus fundamentally. The limitation in the methodology described here was sequential: each product had to be built in sequence because one human was directing one AI session at a time. Agent Teams allow multiple Claude Code instances to work simultaneously on different parts of a system — one handling frontend, one handling backend, one handling tests — with direct coordination between them. A single human directing an agent team could theoretically maintain the oversight role across parallel development tracks, something the original one-session-at-a-time methodology could not support.

For a portfolio-building approach, the implication is that the products that took multiple weeks to develop sequentially could now be developed in parallel across agent teams. The calendar compression available in the original methodology — itself substantial — compounds further when the bottleneck of sequential AI sessions is partially removed.

The 1-million-token context window also changes the portfolio management picture. At the scale of twenty-plus products, maintaining context across all of them — the shared library, the individual product architectures, the integration patterns — was a significant documentation and context-management challenge. A context window large enough to hold the entire shared library plus active product context means that cross-product consistency checking becomes a single-session operation rather than a multi-session coordination problem.
