The Think-First Workflow

A Guide to Building Ambitious Projects with AI

I want to share a workflow that has fundamentally changed how I approach complex projects. I'm not a traditional software developer, but I have extensive experience architecting and planning within Unreal Engine, and this process has empowered me to build things I once thought were only possible with a future, fully autonomous AGI.

This guide is for those who are serious about their projects. It's a method that prioritizes deep, front-loaded thinking to enable non-coders to tackle projects of nearly any complexity. It's not a shortcut, but it is the best approach I currently know for turning a powerful AI into a precise and effective tool.

Guiding Principles

Before diving into the steps, it's crucial to understand the philosophy that makes this process work.

Front-Load the "Thinking"

The core strategy is to do almost all of the cognitive work—the planning, research, and architectural design—before a single line of code is written. You are creating an exhaustive blueprint that allows the AI to "do" rather than "think" during the implementation phase.

Documentation IS the Project

All planning, research, and architectural documents are saved directly within my project's source control. This treats the planning phase with the same importance as the final code, creating a holistic and well-documented project history.

Format for the AI

Since most documentation is for the AI's consumption, it must be structured for clarity. I use Markdown exclusively, with clearly labeled files, functions, and code snippets to remove any guesswork for the AI assistant.
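
As a concrete illustration only (every path, class, and function name below is a placeholder I invented, not a real project or engine API), a single task entry in one of these documents might look like this:

```markdown
<!-- Illustrative sketch: all file, class, and function names are placeholders. -->
## Task 3: Add stamina drain to sprinting

- **File:** `Source/MyGame/Characters/MyCharacterMovement.cpp`
- **Function:** `UMyCharacterMovement::TickComponent`
- **Change:** While sprinting, reduce `CurrentStamina` by `StaminaDrainRate * DeltaTime`;
  when it reaches zero, call `StopSprinting()`.
- **Snippet to include:** the current body of `TickComponent`, pasted directly
  below this task so the AI never has to guess at the existing code.
```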

My Tools of Choice

Gemini (in Google AI Studio)

The engine for all my planning and research. Its ability to manage and synthesize vast contexts is unparalleled for the kind of deep work this workflow requires, especially with Unreal Engine C++.

Roo Code

Reserved for implementation when a project's complexity truly warrants an automated, multi-agent approach, especially during iterative coding cycles that follow the main planning phase.

Copy-Paste & Diff Method

My default workflow, offering absolute control. It hinges on a meticulous plan that directs the AI to produce clean code. I never blindly paste AI code; instead, I use a "Compare with Clipboard" feature in my editor. This shows the exact changes, allowing me to make final edits and ensure I am the final gatekeeper for all code entering my project.

The Workflow: A Step-by-Step Guide

1. The Initial Spark

Create a low-effort, rough draft of your plan to establish a baseline understanding.

2. Deep Dive

Write an exhaustive document explaining your vision in as much detail as possible; the depth of this document is what multiplies the AI's capabilities later.

3. Laying the Groundwork

Provide the AI with relevant source code (from the engine, GitHub, etc.) and instruct it to produce a high-level research document summarizing the key systems.
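
The exact wording matters less than the structure you ask for. A minimal sketch of such a request, with a hypothetical file name and headings you would adapt to your own project, might be:

```markdown
<!-- Illustrative prompt sketch; `Research_V1.md` and the sections are placeholders. -->
I've attached the relevant engine and plugin source files. Produce a high-level
research document in Markdown (call it `Research_V1.md`) with these sections:

1. **System overview** - what each major class or module is responsible for.
2. **Key data flows** - how the systems interact at runtime.
3. **Extension points** - virtual functions, delegates, or settings we could build on.
4. **Open questions** - anything the source alone does not make clear.
```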

4. Refining the Research

Distill the AI's analysis into a focused V2 research document that will serve as a core reference.

5. The Blueprint (A Two-Draft Architectural Plan)

This is the most intensive planning stage, broken into four parts (an illustrative prompt sketch follows the list):

  • Create Draft 1: Instruct the AI to build the first draft, leaving room for improvement.
  • Request Improvement Plan: Have the AI generate a formal list of 5-10 potential improvements.
  • Provide Human Feedback: Critically review every suggestion to correct the AI's course and second-guess your own assumptions.
  • Create Draft 2: Instruct the AI to generate the final architectural plan incorporating all of your feedback.
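
A minimal sketch of the improvement-plan request from part two; the document names and wording are hypothetical and would be adapted to your own project:

```markdown
<!-- Illustrative prompt sketch; `Architecture_Draft1.md` and `Research_V2.md` are placeholder names. -->
Review `Architecture_Draft1.md` against `Research_V2.md` and my design document.
List 5-10 concrete potential improvements. For each one, give:

- A short title and the section of the draft it applies to.
- Why it might be an improvement, and what it would cost in added complexity.
- How confident you are that it is worth doing.

Do not rewrite the draft yet - wait for my feedback on this list first.
```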

6. The Roadmap

With the final architecture complete, create one or more detailed, step-by-step implementation plans.
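
To make "step-by-step" concrete, here is a sketch of what a single roadmap entry might look like; the component, files, and completion criteria are invented purely for illustration:

```markdown
<!-- Illustrative roadmap entry; all class, file, and step names are placeholders. -->
### Step 4: Create the `UInventoryComponent` skeleton

- **Depends on:** Step 3 (item data asset definitions).
- **Files to add:** `InventoryComponent.h` / `.cpp` under `Source/MyGame/Inventory/`.
- **Scope:** implement `AddItem`, `RemoveItem`, and an `OnInventoryChanged`
  delegate; no UI work in this step.
- **Done when:** the project compiles and the component can be attached to the
  test character without errors.
```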

Managing Context: A Dialogue with Your AI

Your role in this workflow is to be the curator of the AI's attention. Understanding how to manage the conversation context is key to getting high-quality results.

The 100k Token Rule of Thumb

For critical decision-making phases like creating the architectural plan (Phase 5), I try to keep the Gemini conversation context under 100,000 tokens. When the context grows too large, the AI's attention can become stretched, leading to less consistent output.

Embrace Large Context for Research

During the research phases (Phases 1-3), this limit doesn't apply. Feeding Gemini vast amounts of source code (even 400k-600k tokens) is perfectly fine, as you are leveraging its core strength: synthesizing broad information.

Strategic Conversation Management

Structure your conversations deliberately:

  • For most projects: Work through Phases 1-5 in one continuous chat if the context stays manageable.
  • For large projects: Compile research, then start a new, clean conversation for the architectural plan to avoid high token counts.
  • For implementation: Always start a fresh conversation for Phase 6, providing only the final architecture and necessary files.
"This workflow is a testament to the power of human-AI collaboration. As AI becomes more capable, our role will increasingly shift from implementation to orchestration. Even with true AGI, the responsibility of deciding what to build and why will remain ours. We are, and will continue to be, the architects of the vision."

The Payoff: Deciding How to "Do"

With your plans complete, you must decide if the AI still needs to "think."

If NO Thinking is Required

Your plan is a checklist. You can use a cost-effective orchestrator in Roo Code to simply execute the steps.

If YES Thinking is Required

Your plan needs minor problem-solving during implementation. A capable "thinking" model should act as orchestrator. While Gemini is excellent, models specifically tuned for agentic coding (like Horizon models) can be very effective. Their strength is less in top-tier "thinking" and more in correctly passing context to sub-tasks, ensuring the plan is followed with precision.