Product Design & AI

I look forward to revisiting this entry in six months to see how things have changed. Evolution is, and has always been, a natural part of life. The generation cycles are just quicker now, and so must we be.

Zapier
ChatGPT
Figma
Jan. 2026

Intro

At its simplest, product design is the understanding of a problem and the context the problem lives in, expressed through a UI.

A design’s “goodness” is a function of how well the problem and its context were understood.
Context composition is problem-dependent, but elements like the competitive landscape a business operates in and the goals that drive it, user data and user goals, user attention type, customer acquisition methods, an existing design system, and a product’s codebase are shared by most problem spaces.

These are (non-exhaustive) inputs to a product/eng team’s ultimate solution. 

The moat for human intelligence is the ability to unify disparate information and process the composite. Connecting an AI system to every point of context needed to achieve an output indistinguishable from a human’s is far from trivial.

For now.

Until such a time when outputs are indeed indistinguishable, it is sensible to progressively load AI into the systems that make your work possible. 

Below are two of mine. Collaboration and a pulse are essential.

Custom GPTs

Collaboration is an essential part of product design and delivering meaningful solutions.

The value of collaboration is typically, though not always, commensurate with the level of understanding participants bring to a discussion. A richer understanding of the context the object of discussion lives in, and the things it is impacted by, often leads to more useful outcomes.

I contend that collaboration, at the time of writing, is still best reserved for humans, but powerful insights can be uncovered when thoughtfully collaborating with an AI companion equipped with context and bounded intent.

I am able to realize such value through the creation and integration of custom, app-specific GPTs.

In Practice
These GPTs are equipped with a persistent body of context that I routinely audit and update. The goal is to externalize the information I do not want to continually restate, while ensuring the system remains aligned with the reality of the product and the organization it serves. This context typically includes screenshots of production UI, a current inventory of the design system and its components, spacing and layout rules, user personas, a company and product overview, organizational goals, and the idiosyncrasies that meaningfully affect design decisions. I also encode the prevailing design ethos that governs trade-offs and taste.
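As a rough illustration, the persistent context and the routine audit can be thought of as a structured bundle plus a small check for staleness. This is a hypothetical sketch in Python; the category names and the `audit` helper are my own illustration, not an actual GPT configuration schema.

```python
# Hypothetical sketch: the kinds of persistent context a custom GPT
# might be configured with. All keys and values are illustrative.

gpt_context = {
    "ui_screenshots": ["dashboard.png", "settings.png"],
    "design_system": {
        "components": ["Button", "Card", "Modal"],
        "spacing_scale_px": [4, 8, 12, 16, 24, 32],
    },
    "personas": ["ops manager", "field technician"],
    "company_overview": "overview.md",
    "org_goals": ["reduce onboarding time", "increase retention"],
    "design_ethos": "Clarity over density; defaults over configuration.",
}

def audit(context: dict) -> list[str]:
    """Flag context categories that are empty and due for an update."""
    return [key for key, value in context.items() if not value]

# A routine audit surfaces anything that has drifted out of date.
stale_categories = audit(gpt_context)
```

The point is less the data structure than the discipline: every category is named, owned, and periodically re-verified against the product’s reality.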

The configuration is intentionally broad. It is designed to function as a stable foundation for any design request related to a specific application, rather than a narrowly scoped tool for a single task or feature.

In practice, this broad foundation is complemented by more granular, solution-specific context at the chat level. Once an initial design draft exists, I attach artifacts such as solution screenshots, PRDs, or exploratory notes directly to the conversation. This allows the GPT to reason from its persistent understanding of the product while incorporating the specifics of the problem at hand.

My prompts vary depending on intent. Sometimes they are structured, sometimes conversational. At times I am thinking aloud; at others I have a precise request. The framing generally includes additional problem context, a description of the proposed solution, relevant constraints, and a clear request. Importantly, I do not use the GPT as a designer. I use it as a thinking partner. The value lies in its ability to interrogate assumptions, surface edge cases, and stress-test decisions, not in producing UI directly.

Because the contextual groundwork persists, significant time is saved by not re-establishing the environment each time a new problem is discussed. Designs emerge more polished and resilient. Edge cases are identified earlier, and implicit assumptions are made explicit. This materially reduces the effort required to make work engineer-ready and often replaces multiple rounds of cross-functional clarification. It also affords the ability to run lightweight synthetic user evaluations as part of the design process.

User Intelligence Layer

High-quality decisions require current, reliable information about what users are doing and what they are saying. Static artifacts and periodic research snapshots are often insufficient to meet this need.

Through automation, I maintain a user intelligence layer that continuously ingests customer signals. By integrating tools such as Zapier, I am able to pull in data from session tracking systems, support calls, inbound emails, support tickets, and related sources. This information is normalized and fed into a dedicated GPT that I can query semantically, as well as configure to produce regularly scheduled analyses of user behavior and sentiment.
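The normalization step can be sketched as a set of per-source adapters mapping raw payloads onto one shared schema before anything reaches the GPT. This is an assumed shape, not a real Zapier payload or API; the field names and source keys are illustrative.

```python
# Hypothetical sketch: signals from different sources (tickets, emails,
# session tracking) are normalized into one schema. Field names are
# assumptions for illustration, not a real integration contract.

from dataclasses import dataclass

@dataclass
class UserSignal:
    source: str     # e.g. "support_ticket", "email", "session_replay"
    user_id: str
    timestamp: str  # ISO 8601
    text: str       # the content the GPT will reason over

def normalize_ticket(raw: dict) -> UserSignal:
    """Map one support-ticket payload onto the shared schema."""
    return UserSignal(
        source="support_ticket",
        user_id=raw["requester"],
        timestamp=raw["created_at"],
        text=f"{raw['subject']}: {raw['body']}",
    )

def normalize_email(raw: dict) -> UserSignal:
    """Map one inbound-email payload onto the shared schema."""
    return UserSignal(
        source="email",
        user_id=raw["from"],
        timestamp=raw["received_at"],
        text=raw["body"],
    )

# Each source gets its own adapter; downstream, the GPT only ever
# sees UserSignal records, regardless of where they originated.
NORMALIZERS = {
    "support_ticket": normalize_ticket,
    "email": normalize_email,
}

def ingest(source: str, raw: dict) -> UserSignal:
    return NORMALIZERS[source](raw)
```

Keeping the adapters thin means adding a new signal source is a one-function change, and the reasoning layer never has to know about source-specific quirks.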

This system changes the role traditionally occupied by UX research. Rather than relying on intermittent, manually synthesized studies, much of the baseline understanding is continuously refreshed. In most cases, this allows the product team to operate effectively without a dedicated researcher, reserving deeper qualitative work for moments when it is genuinely required.

A key distinction is that a GPT, by itself, is static. An automation layer is live. The combination allows the reasoning system to remain grounded in up-to-date reality rather than historical context. Together, they form a continuously evolving picture of user needs and behavior.

Conclusion

These two systems, custom GPTs for contextual reasoning and an automated user intelligence layer, are primary ways I leverage AI and automation to increase both the speed and quality of my work as a designer.

They allow me to spend less time reconstructing context and more time making decisions that are informed, deliberate, and defensible.