Try this: open any AI tool and type "Design a great onboarding flow that gets users to see the value the product provides." What comes back will look like a design. Screens, labels, a plausible sequence. It'll be confident and reasonably coherent.
It'll also be useless. Because it doesn't know your users, your product's actual value, your business constraints, or the specific moment where people tend to drop off. It's a pattern that resembles something that worked somewhere else — reassembled for no one in particular.
I had to learn this the hard way. When I started building with AI and the results got good, I made the wrong inference. I thought AI had gotten better at design. What had actually happened was that I had gotten better at directing it — and I mistook the output for the intelligence.
What Actually Happened When It Worked
When I began building in Google AI Studio, I wasn't handing AI a design problem. I was specifying every constraint, one by one. "Show this field only when trainer mode is selected." "This role can see all entries; this role can only see their own." "When the user completes this step, advance to this state, not that one." Each prompt was a design decision I had already made — I was just using AI to execute it.
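Constraints like those can be read as design decisions already expressed in code form. A minimal sketch in TypeScript, with field names, roles, and states that are purely illustrative (not from the actual app), might look like:

```typescript
// Hypothetical sketch of the constraints described above.
// Roles, fields, and state names are illustrative only.

type Role = "trainer" | "member";

interface Entry {
  ownerId: string;
}

// "Show this field only when trainer mode is selected."
function isFieldVisible(mode: string): boolean {
  return mode === "trainer";
}

// "This role can see all entries; this role can only see their own."
function visibleEntries(role: Role, userId: string, entries: Entry[]): Entry[] {
  return role === "trainer" ? entries : entries.filter(e => e.ownerId === userId);
}

// "When the user completes this step, advance to this state, not that one."
const nextState: Record<string, string> = {
  intro: "profile",
  profile: "goals",
  goals: "done",
};

function advance(current: string): string {
  return nextState[current] ?? current;
}
```

Each function is one sentence of direction made executable, which is the point: the decisions precede the code.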
What surprised me was how faithfully it executed. When the direction was precise, it didn't improvise. It didn't average across what it had seen before. It just did exactly what I described. That's when I realized: the quality of the output was a direct reflection of the quality of the direction. AI didn't contribute design intelligence. It contributed reliable execution.
Direction Is the Design
This is the thing I had wrong. I thought design was the construction — the act of building the thing. So when AI started building it, I assumed design had transferred to AI. But construction was never the design. The design was every decision that preceded it: what the user needs, what states the system has, what logic governs the transitions, what the edge cases are, and why any of it matters.
None of that came from the AI. All of it came from me. The prompts weren't shortcuts — they were the work. The effort it took to specify conditional logic precisely, to anticipate role conflicts, to describe what should happen in states I hadn't built yet — that effort is design. AI just made it possible to test those decisions immediately, against something that ran.
The test is simple: if AI could design, the vague prompt would work. "Design a great onboarding flow" would be enough. It never is. The effort it takes to direct AI toward something useful is itself proof of where the design actually lives.
How the Workflow Changed
What did shift — genuinely — was the sequence. I now start by building the functional skeleton before ever opening Figma. I work through logic, states, and conditional behavior first. Once that's stable, I move into Figma to design the component library and system around it. The visual design reflects actual functionality instead of anticipating it.
That's a real change. In the old workflow, you'd design screens that implied behavior, then hand them to a developer to interpret. Now the behavior is proven first, and the visual design gives it form. It's a better order of operations — not because AI designed anything, but because AI made it cheap to test functional logic before committing to visual design.
An Unexpected Layer: Designing the Conversation
One thing I didn't anticipate was having to design the AI chat experience inside the app itself. It became more than a utility: it acted as an interface to the data. Users could query, manage, and navigate functionality through language. That made the chat a living UI, one that required its own design thinking: what questions it should answer, what it should decline, and how it surfaces information without overwhelming the user.
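Those answer-or-decline decisions can be sketched as an intent router. This is a hypothetical illustration, not the app's actual implementation; a naive keyword matcher stands in for the model, and the intent names are invented:

```typescript
// Hypothetical sketch of the chat's answer/decline design decisions.
// Intent names and keyword rules are illustrative only.

type Intent = "query_data" | "navigate" | "out_of_scope";

// A naive keyword classifier stands in for the model here.
function classify(message: string): Intent {
  const m = message.toLowerCase();
  if (m.includes("show") || m.includes("how many")) return "query_data";
  if (m.includes("go to") || m.includes("open")) return "navigate";
  return "out_of_scope";
}

// The design decision: which intents the chat answers, which it declines.
function route(intent: Intent): "answer" | "decline" {
  return intent === "out_of_scope" ? "decline" : "answer";
}
```

The classifier is replaceable; the routing policy is the design. Deciding which intents fall in scope is exactly the kind of judgment the essay argues AI doesn't supply.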
Designing that interaction felt like shaping a dialogue between logic and user intent. Which is, when I think about it, just design — applied to a new surface.
What AI Still Can't Provide
AI doesn't conduct research. It doesn't know what your users actually struggle with, what competitors are doing, or what business constraints are non-negotiable. It can't decide which features matter or why the product should exist. It doesn't care if the output serves anyone — it just produces what the prompt describes.
It also doesn't notice its own mistakes. It follows conflicting instructions without flagging the conflict. It rebuilds something you already decided to remove without questioning why. The clarity about what matters, the restraint to stop when the direction isn't right — that's still entirely on the designer.
AI got better at executing design. That's genuinely useful. But execution was never the hard part.