The first question teams ask when they start using AI for software development is usually about speed. How much faster can we ship? The question they should ask first is different: who decides what we build and how?
In the synthesis engineering framework, the first pillar is human architectural authority. Humans make strategic architectural decisions. AI implements within those constraints. This is the foundation everything else rests on.
Why Architecture Cannot Be Delegated
AI operates conversation by conversation. It does not carry forward an architectural vision from last month’s database migration to this week’s API refactor to next quarter’s scaling plan. Each conversation starts fresh, or at best with a summary of prior context. That is sufficient for implementing a component. It is not sufficient for maintaining coherence across a system that evolves over months or years.
When an engineer asks AI to “build a user authentication system,” the AI will produce something functional. It might choose JWTs one day and session tokens the next, depending on the prompt. It might use a relational database for user records or a document store, depending on what seems natural for the conversation. Each individual choice might be reasonable. The aggregate of uncoordinated choices across dozens of conversations is an incoherent system.
Architecture is the set of decisions that are expensive to change later. Database schema, service boundaries, communication protocols, security models, deployment topology. These decisions constrain everything built on top of them. Getting them right requires understanding the business domain, the team’s capabilities, the expected scale, and the tradeoffs between competing concerns over time. AI can help evaluate options. It cannot hold the full context that makes one option right for your specific situation.
What Stays Human
The decisions that remain with the engineer:
Technology stack and framework choices. Not because AI cannot suggest good stacks, but because stack decisions carry organizational consequences. Choosing a language your team does not know creates a training burden. Choosing a framework with a small community creates a hiring burden. These are human organizational judgments.
System boundaries and component interactions. Where you draw the line between services, what talks to what, which data lives where. These decisions encode your understanding of the business domain. A payment service separate from the order service is an architectural opinion about how your business works. AI does not have that opinion.
Data modeling and database architecture. How you model your data determines what queries are fast and what queries are slow for years to come. This is a bet on what questions your business will ask of its data. AI can optimize a schema for today’s queries. It cannot anticipate which queries matter next year.
Security models and authentication approaches. Security architecture requires understanding your threat model, your regulatory environment, and your tolerance for friction in the user experience. These are judgment calls that depend on context AI does not have.
Scaling strategies and performance targets. How much traffic you expect, how you plan to handle spikes, what your latency budget is. These come from business knowledge and operational experience, not from code patterns.
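The data-modeling bet can be made concrete. In this minimal sketch (the table, column, and index names are invented for illustration), a SQLite schema indexes orders by user. That choice is the bet: the query the schema anticipated uses the index, while a query it did not anticipate falls back to a full table scan.

```python
import sqlite3

# Hypothetical orders table. Indexing on user_id is a bet that
# "orders for a user" is the question the business will keep asking.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER, "
    "status TEXT, created_at TEXT)"
)
conn.execute("CREATE INDEX idx_orders_user ON orders (user_id)")

# The anticipated query: SQLite's planner searches via the index.
plan_by_user = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE user_id = ?", (42,)
).fetchall()

# The unanticipated query: no index helps, so the planner scans the table.
plan_by_status = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE status = ?", ("shipped",)
).fetchall()

print(plan_by_user)    # plan detail mentions idx_orders_user
print(plan_by_status)  # plan detail mentions a table SCAN
```

Next year, if "all shipped orders" becomes the hot query, the schema needs to change, and that is exactly the kind of expensive-to-change decision an engineer, not a per-conversation AI, has to own.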
What AI Does Within Those Constraints
Once the architectural decisions are made, AI is remarkably effective at implementing within them. Given a clear architecture, AI generates components that follow the chosen patterns. It writes integration code that respects the specified boundaries. It produces tests that validate the architectural decisions. It optimizes implementations within the defined constraints.
The key insight is that AI’s speed is most valuable when the direction is already set. An AI that generates code in the wrong architectural direction generates technical debt at the speed of autocomplete.
The Failure Mode
I have watched teams lose architectural coherence by treating AI as an architect rather than an implementer. The pattern is predictable. An engineer asks AI to build a feature. The AI makes architectural choices embedded in the implementation. The engineer ships it because it works. Another engineer does the same thing for a different feature. After a few months, the codebase has three different approaches to data access, two conflicting authentication mechanisms, and no one can explain why.
The fix is not to use AI less. The fix is to be explicit about what decisions belong to humans and what decisions AI can make. Write down your architectural decisions. Put them in ADRs, in CLAUDE.md files, in whatever format your team uses. Make the constraints visible so that AI implementations stay within them.
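As a hypothetical illustration of what writing a decision down can look like (the ADR number, title, and wording here are invented, following the common lightweight ADR format), an entry might read:

```markdown
# ADR 007: All database access goes through the repository pattern

## Status
Accepted

## Context
AI-assisted sessions have produced multiple inconsistent data-access
styles across features.

## Decision
Application code never issues queries directly. Each aggregate gets a
repository interface; concrete implementations live behind it.

## Consequences
AI-generated features implement against the repository interfaces.
Direct database calls in handlers fail review.
```

The format matters less than the visibility: a constraint that exists only in one engineer's head cannot keep dozens of AI conversations coherent.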
Practicing This
If you are starting to work with AI on a codebase, here is a practical approach. Before asking AI to build anything, write down the architectural decisions that already exist in your system. If they are not written down, that is the first problem to solve, with or without AI.
Then, when you start a session with AI, establish those constraints at the beginning. “We use PostgreSQL. Our services communicate via gRPC. Authentication goes through the gateway service. All database access goes through the repository pattern.” These are not prompting tricks. They are the architectural rails that keep AI implementations coherent with the rest of your system.
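To show what staying on one of those rails looks like, here is a sketch of the "all database access goes through the repository pattern" constraint in Python (the class and method names are hypothetical). The interface is the human-owned boundary; the concrete class is the part AI can safely fill in.

```python
from dataclasses import dataclass
from typing import Optional, Protocol


@dataclass
class User:
    id: int
    email: str


# Human-owned architectural boundary: callers depend on this interface,
# never on a concrete database client.
class UserRepository(Protocol):
    def get(self, user_id: int) -> Optional[User]: ...
    def add(self, user: User) -> None: ...


# An implementation AI can generate within the constraint. Swapping in a
# PostgreSQL-backed version later does not touch any calling code.
class InMemoryUserRepository:
    def __init__(self) -> None:
        self._users: dict[int, User] = {}

    def get(self, user_id: int) -> Optional[User]:
        return self._users.get(user_id)

    def add(self, user: User) -> None:
        self._users[user.id] = user


repo: UserRepository = InMemoryUserRepository()
repo.add(User(id=1, email="a@example.com"))
print(repo.get(1))
```

The point of the rail is not the pattern itself but the division of labor: the interface encodes a decision that is expensive to change, while the implementation behind it is cheap to regenerate.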
The engineer who does this well ships faster than the engineer who lets AI make architectural choices. Not because the code is generated faster, but because the code does not need to be rewritten when it turns out to conflict with the rest of the system.
Human architectural authority is not about distrust of AI. It is about recognizing what each party does best.
Humans hold the long-term vision. AI executes within it. That division of labor produces better systems than either party working alone.