The Framework — How Any Professional Can Apply This to Their Work

The vibe coding practice documented in this series — twelve months, twenty products, documented outcomes — reduces to a methodology. Not a software development methodology, but an AI collaboration methodology that happens to have been applied to software. The five-phase structure, the context infrastructure practices, the specialist routing model, the quality disciplines — all of these transfer directly to any knowledge work domain where the work involves defining what you need, directing an AI to produce a draft, evaluating that draft, and iterating toward a final output.

That description covers most of what white-collar professionals do: research and analysis, writing and editing, strategic planning, financial modeling, legal drafting, marketing campaign development, data interpretation. In every one of these domains, the same methodology produces leverage when applied systematically. This post makes that methodology explicit and provides a translation layer for applying it outside of software.

The Five-Phase Methodology

Phase 1: Define

Before any AI interaction, define what you are trying to produce. Specifically and concretely — not “I need a competitive analysis” but “I need a competitive analysis of the US enterprise project management software market that identifies the five most significant players, describes their differentiated positioning on three dimensions (ease of implementation, enterprise scalability, and AI feature depth), and identifies the two gap opportunities most actionable for a mid-market entrant. The audience is a six-person executive team who are market-aware but not deep in project management software. Output should be eight to twelve pages with an executive summary and a competitive positioning matrix.”

Every element of that definition — the scope, the dimensions, the audience, the format, the length — changes what the AI produces. Each element omitted is a dimension on which the AI will make an assumption that may or may not match your intent. The definition investment is the highest-ROI activity in any AI collaboration session. Every minute spent sharpening the definition saves three to five minutes of corrective iteration.
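To make this concrete, the definition can be captured as a structured record before the session starts. The sketch below is illustrative, not part of any documented tooling, and the field names are hypothetical; the point is that each field corresponds to an assumption the AI would otherwise make on your behalf.

```python
from dataclasses import dataclass

# Illustrative sketch: the field names are hypothetical, chosen to mirror
# the elements above (scope, dimensions, audience, format, length).
@dataclass
class TaskDefinition:
    scope: str               # the bounded question, not the general topic
    dimensions: list[str]    # the axes the output must cover
    audience: str            # who reads it and what they already know
    output_format: str       # structure of the deliverable
    length: str              # expected size, so the AI does not guess

definition = TaskDefinition(
    scope="US enterprise project management software: five most significant players",
    dimensions=["ease of implementation", "enterprise scalability",
                "AI feature depth"],
    audience="six-person executive team, market-aware, not deep in PM software",
    output_format="executive summary plus competitive positioning matrix",
    length="eight to twelve pages",
)
```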

Phase 2: Contextualize

Load the relevant context before the first request. Context is everything the AI needs to know about your specific situation, constraints, preferences, and history that it cannot infer from the task definition alone. For a one-off task, this might be a paragraph. For an ongoing project, it is the project context file updated after each session.

For the competitive analysis: your organization and its specific market position (not generic industry context). Your audience’s specific knowledge gaps and sensitivities. Your conventions: what format the executive team prefers for this type of analysis. Your constraints: what competitive information is already known and need not be repeated. Your history: what prior analyses have covered and what gaps they left.

Phase 3: Direct

Give the AI one bounded, specific objective to start. “Write the executive summary first — three paragraphs covering the key market finding, the competitive dynamic most relevant to our market entry decision, and the top gap opportunity we should investigate further. Maximum one page. I will review this before you continue to the full analysis sections.” One objective. Clear scope boundaries. Explicit success criteria.

Review and approve before proceeding. The review at this stage is the cheapest point to catch a fundamental misalignment: if the executive summary is framing the problem incorrectly, correct that before three more sections are written in the wrong frame. Catching a frame error here costs one redirect. Catching it after the full analysis is written costs multiple corrective iterations across multiple sections.
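As a sketch of what a bounded request looks like when assembled, the hypothetical helper below puts the pieces in order: context first, one objective, explicit success criteria, and an explicit stop point for review.

```python
def build_request(context: str, objective: str, success_criteria: str) -> str:
    """Assemble one bounded request (hypothetical helper, illustrative only)."""
    return (
        f"{context}\n\n"
        f"Objective (this one step only): {objective}\n"
        f"Success criteria: {success_criteria}\n"
        "Stop after this step. I will review before you continue."
    )

request = build_request(
    context="[contents of the project context file]",
    objective=(
        "Write the executive summary: three paragraphs covering the key "
        "market finding, the most relevant competitive dynamic, and the top "
        "gap opportunity. Maximum one page."
    ),
    success_criteria=(
        "Frames the market entry decision correctly for an executive "
        "audience; contains nothing beyond the summary."
    ),
)
```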

Phase 4: Evaluate

Evaluate against the original definition, not against whether the output seems generally good. Not “does this look professional?” but “does this correctly answer the question I defined, with the right scope, for the right audience, at the right depth?” The evaluation requires domain expertise — which is why AI collaboration works better for people who know their domain than for people who do not. Your expertise is what makes evaluation possible. Without it, you cannot distinguish correct output from plausible-but-wrong output.

Specific evaluation questions: Is every factual claim verifiable from the sources used or from my domain knowledge? Does the structure address the actual decision the audience needs to make, or a different but easier question? What is missing that the definition required? What is present that was not requested and should be removed or relocated?
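These questions can live as a standing checklist rather than being re-derived each session. A minimal sketch, assuming nothing beyond standard Python; the judgment each question demands remains yours.

```python
# Mirrors the evaluation questions above. A checklist, not an automation:
# each item demands domain judgment the AI cannot supply about its own output.
EVALUATION_QUESTIONS = [
    "Is every factual claim verifiable from the sources used or from my "
    "domain knowledge?",
    "Does the structure address the actual decision the audience needs to "
    "make, or a different but easier question?",
    "What is missing that the definition required?",
    "What is present that was not requested and should be removed or "
    "relocated?",
]

for question in EVALUATION_QUESTIONS:
    print(f"[ ] {question}")
```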

Phase 5: Iterate

Direct corrections starting with the highest-impact issues. A correction to the overall analytical framework affects everything in the analysis. A correction to the word choice in a specific sentence affects only that sentence. Make framework corrections first, then structural corrections, then stylistic corrections. Verify each correction before moving to the next. Iterate until the output meets the original definition of done.
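The ordering discipline can be made explicit by tagging each open issue with its tier and always taking the widest-impact tier first. A hypothetical sketch:

```python
# Hypothetical triage: corrections ordered by blast radius. A framework issue
# affects everything downstream; a stylistic issue affects one sentence.
TIER_ORDER = {"framework": 0, "structural": 1, "stylistic": 2}

def next_correction(open_issues):
    """open_issues: list of (tier, description) pairs.
    Returns the widest-impact open issue, or None when the output is done."""
    if not open_issues:
        return None
    return min(open_issues, key=lambda issue: TIER_ORDER[issue[0]])

issues = [
    ("stylistic", "weak verb in section two, paragraph three"),
    ("framework", "analysis framed around features rather than the buyer decision"),
]
print(next_correction(issues))  # the framework issue comes back first
```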

The Context File for Any Domain

Every ongoing AI project in any domain benefits from a context document with this structure:

Purpose: what this project is and what it is trying to achieve. Two to three sentences.

Conventions: format standards, tone, level of detail, citation approach, length guidelines. Everything the AI needs to produce output that matches your quality standards and your audience’s expectations without being told explicitly in each session.

Constraints: what is explicitly out of scope, what has already been addressed and should not be duplicated, what is politically or competitively sensitive and should be handled with care. Constraints prevent the most common form of technically correct but contextually wrong output.

Current state: what has been produced so far, what key decisions have been made, what is outstanding. Updated after every session. The current state element is what makes the context file a living document rather than a static brief.

Do not repeat: specific content already covered that should not be readdressed. This field guards against a common failure mode of a maintained context file: the AI drawing on stale entries in the current state field and duplicating work already done.
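A minimal skeleton for such a file, with the five sections above, might look like the sketch below. The format is a working convention, not a platform requirement; any structure the AI can parse reliably works.

```python
# Hypothetical generator for a new project's context file. Section names
# mirror the structure described above; the file itself is plain markdown.
CONTEXT_FILE_TEMPLATE = """\
# Project Context: {project_name}

## Purpose
{purpose}

## Conventions
- Format, tone, level of detail, citation approach, length guidelines.

## Constraints
- Out of scope; already addressed; sensitive topics.

## Current State (update after every session)
- Produced so far; key decisions; outstanding items.

## Do Not Repeat
- Specific content already covered.
"""

def new_context_file(project_name: str, purpose: str) -> str:
    return CONTEXT_FILE_TEMPLATE.format(project_name=project_name, purpose=purpose)

print(new_context_file("Market entry analysis",
                       "Support the executive team's market entry decision."))
```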

The Template Library

The Shared Library principle — build a reusable component once and load it for every subsequent use — applies to any knowledge work domain as a template library. Reusable document structures, analysis frameworks, research approaches, argument patterns that appear repeatedly in your work.

A management consultant: standard problem-solution-evidence structure for recommendations. Standard competitive analysis framework. Standard risk matrix. Standard stakeholder communication format. Each template built once, maintained when better approaches emerge, loaded as context whenever the relevant task type arises.

A financial analyst: standard discounted cash flow model structure. Standard due diligence checklist. Standard investment memo format. Standard sensitivity analysis framework. Each built as a reference that travels into sessions on the relevant task type.

Templates loaded as context produce consistently more structured, more on-format outputs than templates described anew in each session. The AI applies the template structure reliably when it is loaded; it approximates it when it must infer the structure from a description. The difference in first-pass output quality is measurable in reduced corrective iterations.
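A sketch of the load-rather-than-describe discipline: templates live as files and travel into the session verbatim. The paths and names below are hypothetical.

```python
from pathlib import Path

# Hypothetical library layout: one file per reusable structure, e.g.
# templates/investment_memo.md, templates/competitive_analysis.md.
TEMPLATE_DIR = Path("templates")

def load_template(task_type: str) -> str:
    """Return the template verbatim, so the AI applies it rather than
    approximating it from a prose description."""
    return (TEMPLATE_DIR / f"{task_type}.md").read_text()

def templated_request(task_type: str, objective: str) -> str:
    """Prepend the loaded template to a bounded objective."""
    return (
        f"Apply this template structure exactly:\n\n{load_template(task_type)}\n\n"
        f"Objective: {objective}"
    )
```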

The Two Practices That Define the Difference

After twelve months of documented practice across twenty products, two behaviors consistently separate professionals who achieve exceptional AI leverage from those who achieve marginal improvement.

First: they invest in context infrastructure before it feels necessary. They create the context file on day one of a project rather than day fifteen. They build the template before the third use rather than the tenth. They document decisions immediately rather than retrospectively. These investments always feel premature. They consistently produce returns that justify the apparent prematurity within a handful of subsequent sessions. The professionals who defer them pay for the deferral in every session that follows.

Second: they direct rather than approve. They think through the output before asking for it — what it should contain, what constraints bound it, what success looks like. They load context that calibrates the AI before assigning the task. They evaluate against a specific definition rather than a general impression. They correct one specific thing at a time and verify each correction. This posture requires investing judgment at the beginning of the workflow rather than at the end. The front-end investment pays back at a ratio of three to five to one in reduced correction effort at the back end.

Build the infrastructure. Be the director. The leverage follows consistently, across any domain where knowledge work can be AI-augmented.

Common Application Mistakes and How to Avoid Them

The five-phase methodology is straightforward in principle and easy to short-circuit in practice. The three most common short-circuits, and what each one costs:

Skipping Phase 1 (Define) and going straight to Phase 3 (Direct): this is the most common mistake. The output of a session without adequate definition is usually plausible but wrong in ways that require extensive corrective iteration. The cost is three to five times the time that Phase 1 would have taken, paid by every session that skips it. Most professionals new to AI collaboration make this mistake consistently in their first month and stop making it after experiencing the cost directly a few times.

Skipping Phase 2 (Contextualize) by not maintaining a context file: this is the infrastructure debt problem. Without a context file, every session pays the re-establishment overhead. Without templates, every session of a given type starts from scratch. The cumulative cost of this short-circuit across a hundred sessions on a given project or task type is measured in hours, not minutes.

Evaluating in Phase 4 against “does this look good?” rather than against the Phase 1 definition: this is the evaluation depth problem. Evaluating against general impression misses specific gaps that would be caught by systematic evaluation against the original definition. The cost is that requirements gaps slip through to later phases where they are more expensive to correct.

The practical mitigation for all three: build a pre-session checklist that covers the elements of a complete Phase 1 definition (scope, dimensions, audience, format, length), a prompt to load the context file at the start of Phase 2, and a reminder to evaluate against the original definition at the start of Phase 4. The checklist is a cognitive prosthetic against the short-circuits. Use it until the methodology is fluent enough that the steps are automatic.
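A sketch of that checklist as a literal artifact, mirroring the mitigation above. The wording is illustrative; the point is that the check runs before the session, not after.

```python
# Hypothetical pre-session checklist. Use it until the steps are automatic.
PRE_SESSION_CHECKLIST = [
    "Definition written: scope, dimensions, audience, format, length?",
    "Context file loaded at the start of the session?",
    "First request bounded to one objective with success criteria?",
    "Evaluation planned against the original definition, not impressions?",
]

def run_checklist() -> bool:
    """Walk the checklist interactively; pass only if every item is a yes."""
    return all(
        input(f"{item} [y/n] ").strip().lower() == "y"
        for item in PRE_SESSION_CHECKLIST
    )
```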

Adapting the Methodology to Different Time Scales

The five-phase methodology scales across different time scales of work. For a single task taking thirty minutes, it might look like: two minutes of definition writing, one minute of context file loading, a single bounded request, a quick evaluation against the definition, one or two corrective iterations. Total overhead from the methodology: three to five minutes on a thirty-minute task — well worth it for the improvement in first-pass quality.

For a project taking several weeks, the methodology scales up: Phase 1 becomes a full requirements document (thirty to sixty minutes). Phase 2 becomes setting up and populating the project context file (one to two hours initial investment). Phases 3, 4, and 5 repeat across many sessions, each focused on a bounded objective. The total methodology investment is real — perhaps fifteen to twenty percent of total project time — and the return justifies it: better first-pass outputs, fewer correction cycles, and a completed project with documentation maintained to support future work on the same project.

The ratio of methodology investment to direct production work should be roughly fifteen to twenty-five percent for complex, multi-session projects. Lower ratios produce quality problems from skipped definition and context work. Higher ratios produce efficiency problems from over-engineering the process. The target range is a guideline, not a rule — adjust based on the complexity of the specific project and the maturity of the methodology in your practice.

How Recent AI Innovations Change This Picture

The framework described in this post was designed to be domain-agnostic — applicable to professionals across industries and roles, not just software development. Recent AI innovations have made this applicability more concrete and the entry barriers lower.

Agent Skills mean that the “context infrastructure” component of the framework — which required significant custom work in this case study — is now a platform feature with minimal setup cost. A marketing professional building their first AI collaboration practice can start with a shared skills library rather than building context infrastructure from scratch. The framework’s infrastructure step becomes days of setup rather than weeks.

MCP connections mean that the AI can integrate with the tools professionals already use — CRM systems, project management platforms, analytics dashboards, document repositories — without manual data export and re-import. A lawyer can give Claude MCP access to their case management system. A financial analyst can give Claude MCP access to their data platform. The integration work that previously required custom development or manual workarounds is increasingly available through the MCP connector ecosystem.

Computer use capabilities — Claude’s ability to operate real applications on a screen — extend the framework’s applicability to workflows that cannot be API-integrated. If a professional’s tools don’t have MCP servers or APIs, Claude can now interact with those tools directly through their graphical interfaces. The scope of what can be automated within a professional workflow, without custom integration work, has expanded substantially.
