White Paper

Architect & Assemblers AI Coding Methodology

A comprehensive methodology for building robust, scalable, and maintainable software using AI assistance.

Written & Published By: Kenneth Alge | Feb 2025

About the Author

Kenneth Alge is the founder of Mental Alchemy IO, an AI consulting and software development firm based in Charleston, SC. The Architect & Assemblers framework emerged from building LOOM itself—a project that required orchestrating multiple AI models across a large codebase. Kenneth applies these methodologies daily and continues to refine them through real-world software development.

Abstract

This framework outlines how to leverage Large Language Models in software development while keeping human oversight and architectural integrity intact. You act as the Architect—providing high-level direction and strategic oversight. AI models act as Assemblers—generating and refining code under your guidance.

The methodology builds on the principles from Cognitive Mirror, applying collaborative AI thinking specifically to software development. If you haven't read that yet, it's worth your time—the iterative refinement and multi-AI techniques there are the foundation for everything here.

What follows is a process of thorough planning, modular design, and iterative refinement, one that mitigates the weaknesses of AI-generated code while leveraging its strengths.

Introduction: The Architect Mindset

AI coding tools are powerful but undirected. They excel at generating code for well-defined tasks but struggle with architectural decisions, long-term maintainability, and understanding how pieces fit into a larger system. Hand an AI a vague request and you'll get vague code. Hand it a precise specification with clear context, and you'll get something you can actually use.

The Architect & Assemblers framework addresses this by establishing clear roles:

The Architect (You)

  • Define project vision and scope
  • Design system architecture
  • Make strategic decisions
  • Review and approve all code
  • Resolve conflicts between AI suggestions

The Assemblers (AI)

  • Generate code for defined tasks
  • Review code for bugs and style
  • Suggest optimizations
  • Provide architectural feedback
  • Generate tests and documentation

Core Principle: AI isn't just a tool. Prompted well, it can help you figure out what to do and how to proceed. However, you remain in control, guiding the process and making final decisions. The goal isn't to automate development—it's to amplify your capability. This mirrors the Cognitive Mirror's core loop: you articulate, the AI reflects, you correct, you refine. The same dynamic applies here, just focused on code.

Phase 1: Project Genesis & Blueprinting

This phase is entirely about planning and design. No code is written here. The goal is to create a comprehensive blueprint that guides your AI Assemblers. Rush this phase and you'll pay for it later—AI-generated spaghetti is still spaghetti.

Step 1: Create the Genesis Document

Define your project's core purpose, scope, target audience, technical requirements, and success criteria. This document becomes the single source of truth—everything else flows from it.

Your Genesis Document should cover:

  • Project Overview: Name, description, target audience, core problem solved, success metrics, monetization strategy (if applicable)
  • Technical Foundation: Platform(s), technology stack, performance requirements, scalability needs, security considerations
  • Reference Analysis: Similar existing systems, features to emulate or improve, potential technical challenges

Be as specific as possible. The more clarity you provide here, the better the AI will perform later. Vague requirements produce vague code.

Step 2: System Identification and Decomposition

Break the project into manageable, independent Systems, then decompose into Components and Micro-Features. This hierarchical breakdown is where the methodology earns its keep.

The Hierarchy:

  • Systems: Major functional areas (e.g., User Authentication, Recipe Management, Search)
  • Components: Major features within each system (e.g., Recipe Storage, Recipe Search, Recipe Display)
  • Micro-Features: The smallest individually codable units—think single function level

For each Micro-Feature, document: Name, Description, Inputs, Outputs, Expected Behavior, and Error Conditions. This level of detail might feel tedious, but it's exactly what AI needs to generate useful code.

Think of it this way: if you can't describe what a function should do in plain English, you're not ready to have AI write it.
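One way to keep that documentation honest is to put the Micro-Feature spec directly in the code as a docstring stub. A minimal sketch, assuming a recipe-app context—the function name and behavior here are hypothetical examples, not part of any prescribed API:

```python
def normalize_ingredient_name(raw_name: str) -> str:
    """Micro-Feature: Ingredient Name Normalization (hypothetical example).

    Description: Canonicalize a user-entered ingredient name.
    Inputs:  raw_name -- arbitrary user text, e.g. "  Fresh BASIL "
    Outputs: lowercase, single-spaced string, e.g. "fresh basil"
    Expected Behavior: trims whitespace, collapses internal runs of
        spaces, and lowercases the result.
    Error Conditions: raises ValueError if the cleaned name is empty.
    """
    cleaned = " ".join(raw_name.split()).lower()
    if not cleaned:
        raise ValueError("ingredient name must not be empty")
    return cleaned
```

A spec written this way doubles as the prompt context you hand the AI: name, inputs, outputs, expected behavior, and error conditions, all in one place.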

Step 3: Architectural Diagramming and Relationship Mapping

Visualize the system architecture and define relationships between components. This is the deep mapping that prevents integration nightmares later.

Create these maps:

  • System & Component Relationship Map: Which components belong to which systems
  • Cross-System Communication Map: How different systems interact through their Gatekeeper APIs (more on this below)
  • Component Import & Function Call Map: Which files import which, which functions call which
  • Variable & Data Structure Map: All global variables, data structures, and their scope

The Gatekeeper Pattern: For each System, designate a "Gatekeeper" file (e.g., api.py) that handles all external interactions. Systems only talk to each other through these Gatekeepers. This keeps coupling manageable and makes testing straightforward.
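A minimal sketch of what a Gatekeeper file might look like, assuming a Recipe Management system—the function names and the in-memory dict (standing in for a real storage layer) are illustrative, not a prescribed interface:

```python
# api.py -- hypothetical Gatekeeper for a Recipe Management system.
# Internal modules (storage, search, ...) are never imported by other
# systems directly; all external interaction goes through these functions.

_RECIPES: dict[int, dict] = {}   # stand-in for the internal storage layer
_NEXT_ID = 1

def create_recipe(title: str, ingredients: list[str]) -> int:
    """Public entry point: validate input, delegate to internal storage."""
    global _NEXT_ID
    if not title:
        raise ValueError("title is required")
    recipe_id = _NEXT_ID
    _NEXT_ID += 1
    _RECIPES[recipe_id] = {"title": title, "ingredients": ingredients}
    return recipe_id

def get_recipe(recipe_id: int) -> dict:
    """Public entry point: the only sanctioned way to read a recipe."""
    return _RECIPES[recipe_id]
```

Because every other system calls only these functions, you can swap the storage internals or mock the whole system in tests by replacing one file.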

Step 4: AI-Assisted Architecture Review

Here's where the Multi-AI workflow comes in. Get feedback from multiple AI models on your proposed designs.

Useful prompts:

  • "Critique this architecture for potential bottlenecks, scalability issues, or security vulnerabilities."
  • "Suggest alternative architectural approaches for this system."
  • "Identify any potential tight coupling or dependencies that could cause problems."
  • "Evaluate this technology stack for a project with these requirements."

Use different LLMs for diverse perspectives—Claude might catch something GPT misses, and vice versa. Document the feedback and your rationale for changes. This creates a decision trail you'll appreciate later.

Step 5: File/Folder Structure Setup

Create the physical directory structure and empty files matching your architecture. Add basic import statements reflecting planned dependencies.

Why this matters: Pre-emptive setup prevents naming inconsistencies and ensures AI generates code in correct locations. When you prompt an AI to write a function, you can specify exactly where it goes and what it can access. No ambiguity.
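The scaffolding itself can be scripted. A short sketch, assuming a hypothetical two-system layout (the folder and file names are placeholders for whatever your architecture maps specify):

```python
from pathlib import Path

# Hypothetical layout: one folder per System, each with a Gatekeeper
# api.py plus empty Component files, so later prompts can name exact
# file locations with no ambiguity.
STRUCTURE = {
    "auth":    ["api.py", "login.py", "sessions.py"],
    "recipes": ["api.py", "storage.py", "search.py", "display.py"],
}

def scaffold(root: str) -> list[Path]:
    """Create the directory tree and empty files; return what was made."""
    created = []
    for system, files in STRUCTURE.items():
        for name in files:
            path = Path(root) / system / name
            path.parent.mkdir(parents=True, exist_ok=True)
            path.touch(exist_ok=True)   # empty file, ready for AI output
            created.append(path)
    return created
```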

Phase 2: AI-Assisted Code Construction

Now we write code—micro-feature by micro-feature. The heavy planning in Phase 1 pays off here. You'll be giving AI specific, well-defined tasks instead of vague requests.

Step 1: AI Role Assignment

Not all AIs are equal for all tasks. Figure out which model works best for what.

Process:

  • Create test micro-features from different systems
  • Prompt each AI to generate code for these tests
  • Evaluate output on: Correctness, Efficiency, Readability, Style adherence, Documentation quality
  • Assign roles: Primary Coder, Style Reviewer, Logic Reviewer, Security Auditor

This is the AI Capability Profiling from the Cognitive Mirror, applied to code. Document what each AI does well—you'll reference this constantly.

Step 2: The Micro-Feature Implementation Cycle

This is the core loop. Repeat for each Micro-Feature:

  1. Initial Prompting: Craft a detailed prompt for your Primary Coder. Include the micro-feature description, file location, inputs/outputs, relevant global variables, style requirements, and error handling expectations. The more context, the better the output.
  2. Code Review: Share the generated code with your Reviewer AIs. Have them critique for correctness, efficiency, style, and edge cases. This is AI Critique AI in action.
  3. Refinement: Based on feedback, either prompt the Primary Coder to refine or modify manually. Sometimes human judgment is faster.
  4. Iteration: Repeat until you're satisfied with code quality. Usually takes 2-3 passes.

Step 2.5: Creating the Basic Framework

Before diving deep into features, get your files communicating. Create basic functions in each file, have them call each other, run to verify. This catches structural issues early—before you've invested hours in detailed implementation.
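A sketch of what that early wiring check can look like. The stubs are shown in one block to keep the example runnable, but in practice each would live in its planned file; all names are hypothetical:

```python
# In a real project each stub lives in its own planned file; they are
# collapsed into one block here only so the sketch runs standalone.

def storage_ping() -> str:          # would live in recipes/storage.py
    return "storage ok"

def search_ping() -> str:           # would live in recipes/search.py
    return "search ok"

def api_ping() -> str:              # the Gatekeeper calls into both
    return f"{storage_ping()} | {search_ping()}"

def smoke_check() -> bool:
    """Verify the planned call paths exist before any real logic is written."""
    return api_ping() == "storage ok | search ok"
```

If the smoke check runs, your imports resolve and your call graph matches the plan; structural mistakes surface now instead of mid-implementation.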

Step 3: Unit Testing

Write unit tests for each Micro-Feature before integration. Use AI to help generate test cases covering various inputs, edge cases, and error conditions.

Aim for high test coverage. Tests catch bugs early and make future changes safer. They also serve as documentation—a test that shows expected behavior is worth a thousand comments.
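As a hedged illustration of what AI-generated test cases can look like, here is a hypothetical micro-feature with `unittest` coverage of normal inputs, edge cases, and error conditions (the function and its behavior are invented for this example):

```python
import unittest

def parse_servings(text: str) -> int:
    """Hypothetical Micro-Feature: parse a servings count from user text."""
    value = int(text.strip())        # raises ValueError on non-numeric input
    if value <= 0:
        raise ValueError("servings must be positive")
    return value

class TestParseServings(unittest.TestCase):
    def test_plain_number(self):
        self.assertEqual(parse_servings("4"), 4)

    def test_surrounding_whitespace(self):
        self.assertEqual(parse_servings("  2 "), 2)

    def test_zero_rejected(self):
        with self.assertRaises(ValueError):
            parse_servings("0")

    def test_garbage_rejected(self):
        with self.assertRaises(ValueError):
            parse_servings("lots")
```

Note how the test names read as a behavior spec: whitespace is tolerated, zero and garbage are rejected. That is the "documentation" value tests provide.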

Prompting Best Practice: Keep prompts focused and unambiguous. Always specify: the file it belongs to, related functions it should call or be called by, global variables it can access, coding style requirements, and error handling expectations. A good prompt is a mini-specification.

Phase 3: System Integration & Testing

Now we combine individually tested components and ensure the entire system works together. This is where architectural decisions from Phase 1 prove their worth—or reveal their flaws.

Integration Sequence

  1. Component Integration: Combine Micro-Features within a Component. Test interactions between Micro-Features. Use version control to track changes—you'll want rollback capability.
  2. System Integration: Connect Components within a System via your Gatekeeper APIs. Focus testing on interfaces between Components.
  3. Cross-System Integration: Connect different Systems according to your Communication Map. Perform end-to-end testing simulating real user scenarios.

Each level catches different types of bugs. Component integration catches logic errors. System integration catches interface mismatches. Cross-system integration catches architectural flaws.

System-Wide Testing

  • Functional Testing: Verify all features work as expected
  • Performance Testing: Measure response times, resource usage, scalability
  • Security Testing: Identify and address potential vulnerabilities
  • User Acceptance Testing: If possible, have real users test and provide feedback

Document all test results and track identified issues. This becomes your quality baseline for future releases.

Phase 4: Iteration & Expansion

The software is built. Now you maintain, improve, and scale it. The methodology doesn't end at launch—it's a continuous cycle.

Bug Fixing

Use the same AI-assisted process for fixes. Always write a test case that reproduces the bug before fixing it. This prevents regressions and documents what went wrong.
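A small sketch of the test-first fix, using an invented bug: suppose slugs generated from titles with doubled spaces kept an empty segment ("pesto--pasta"). The reproducing test goes in first, fails against the old code, and then locks in the fix (all names here are hypothetical):

```python
# Hypothetical bug report: titles with doubled spaces produced slugs
# with empty segments, e.g. "pesto--pasta". Write the reproducing test
# FIRST, watch it fail against the old implementation, then fix.

def make_slug(title: str) -> str:
    # Fixed version: split() with no separator collapses runs of
    # whitespace, which is exactly what the regression test locks in.
    return "-".join(title.lower().split())

def test_regression_double_space():
    assert make_slug("Pesto  Pasta") == "pesto-pasta"
```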

New Features

Return to Phase 1 for significant features. Update your Genesis Document and system maps. Treat new features as mini-projects following the entire methodology—it maintains consistency and quality.

Performance Optimization

Use monitoring tools to identify bottlenecks. Use AI to suggest optimizations—code refactoring, algorithm improvements, database query optimization. Test thoroughly before deploying.

Documentation

Maintain living documentation that evolves with the project. Use AI to help generate docs from code and specs. Out-of-date documentation is worse than no documentation.

Scaling and Infrastructure

When you need to handle more load, you'll need infrastructure changes:

  • Horizontal Scaling: Adding more instances of your application servers
  • Vertical Scaling: Increasing resources (CPU, RAM, storage) of existing servers
  • Database Scaling: Replication, sharding, or distributed database systems
  • Caching: Redis, Memcached to reduce database load
  • CDN: Distribute static assets closer to users

AI can help here too. Try: "We're expecting 10,000 concurrent users. Suggest a cloud infrastructure setup using AWS for a Python/Django web application with a PostgreSQL database."
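The cache-aside pattern behind Redis-style caching is worth understanding before you add the infrastructure. A minimal sketch, with a plain dict standing in for Redis and the fetch function simulating a database hit (all names are illustrative):

```python
import time

# Stand-in for Redis: key -> (expiry_timestamp, value)
CACHE: dict[str, tuple[float, str]] = {}
TTL_SECONDS = 60.0

def fetch_recipe_title(recipe_id: int) -> str:
    """Pretend database hit; in production this is the expensive query."""
    return f"recipe-{recipe_id}"

def cached_recipe_title(recipe_id: int) -> str:
    """Cache-aside: check the cache, fall back to the source, store the result."""
    key = f"recipe:{recipe_id}:title"
    entry = CACHE.get(key)
    now = time.monotonic()
    if entry and entry[0] > now:            # fresh cache hit
        return entry[1]
    value = fetch_recipe_title(recipe_id)   # cache miss: hit the database
    CACHE[key] = (now + TTL_SECONDS, value)
    return value
```

Swapping the dict for a real Redis client changes the storage calls, not the pattern: read-through on miss, write-back with a TTL.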

Key Principles

These are the ideas that make the methodology work. Violate them at your peril.

Planning is Paramount

The success of this methodology hinges on the thoroughness of Phase 1. Rushing into code generation without proper architecture produces AI-generated spaghetti. The blueprinting phase might feel slow, but it's where you save the most time overall.

Granularity is Key

Breaking the project into Micro-Features is what makes AI collaboration effective. LLMs perform best on well-defined, scoped tasks. A prompt for "build me a user authentication system" will disappoint. A prompt for "write a function that validates an email format and returns true/false" will deliver.
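That email-validation prompt is the kind of scoped task that delivers. A minimal sketch of what it might return—the regex is deliberately simple (one "@", a dotted domain, no whitespace), not RFC-complete validation:

```python
import re

# Deliberately simple pattern: exactly one "@", a dot in the domain,
# no whitespace. Full RFC-compliant validation is far messier; this
# matches the narrowly scoped task as prompted.
_EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def is_valid_email(address: str) -> bool:
    """Return True if `address` looks like a plausible email, else False."""
    return bool(_EMAIL_RE.match(address))
```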

AI is a Tool, Not a Replacement

You remain in control, guiding the process and making strategic decisions. Never blindly accept AI output. Review everything. The AI is an Assembler; you're the Architect. The building stands or falls on your decisions, not theirs.

Iteration is Essential

The methodology is designed for continuous refinement. First-pass code is rarely final code. The Iterative Deepening loop from the Cognitive Mirror applies directly: generate, review, correct, refine, repeat.

Documentation is Crucial

Clear documentation is what makes long-term maintainability possible—and it's what provides context to AI in future sessions. Your Genesis Document, your maps, your decision logs—they're not bureaucracy, they're assets.

Advanced Techniques

Once you're comfortable with the core methodology, these techniques can accelerate your workflow further:

AI-Assisted Test Generation

Use AI to generate not just application code, but unit tests, integration tests, and performance tests. Provide the function signature and expected behavior; let AI generate the test cases.

AI-Assisted Debugging

Provide the code, the error message, and the context. Ask AI to suggest possible causes and solutions. Often faster than Stack Overflow diving.

AI-Assisted Refactoring

Use AI to suggest code improvements for readability, maintainability, and performance. Particularly useful for cleaning up prototype code before it becomes production code.

AI-Assisted Security Audits

Have AI scan code for potential vulnerabilities. Not a replacement for professional security review, but good for catching obvious issues early.

Put This Methodology into Practice

LOOM provides the codebase context that makes AI-assisted development actually work. See your architecture, map dependencies, and export structural data to your AI tools.