Essay · 15 March 2026 · 10 min read

Taste Can't Be Automated (Yet)

AI can score, rank, and categorise. But it cannot feel the "rightness" of a choice. This is why human tastemakers become more valuable, not less, as AI scales.

Let me describe something that happens to me several times a week. I am reviewing an AI-generated output. A piece of writing, a design layout, a product feature, a brand name. The output is technically correct. It meets the brief. It ticks every box on the checklist. And something is wrong with it. Not wrong in a way I can immediately articulate. Wrong in a way I feel before I can explain. A tightness in the gut that says: this is not it. Close, but not it.

That feeling is taste. And I am increasingly convinced it is the single most important thing that AI cannot replicate.

This is not a sentimental argument. I am not defending human superiority because it makes me feel better. I have built eighteen products using AI tools. I am deeply invested in what AI can do. But precisely because I use it every day, I have a clear view of where it excels and where it falls short. And the place it falls short most consistently is in the judgment call about what is truly right versus what is merely adequate.

The three layers of taste

Taste, as I have come to understand it through years of building products and decades of working in brand strategy, operates on three distinct layers. AI has access to some of them but not all of them, and the one it cannot access is the one that matters most.

Layer one: pattern recognition. This is the ability to identify what has worked before. To look at a thousand successful brand identities and extract the common elements. To analyse ten years of best-selling book covers and identify the visual trends. To study which product features correlate with high user satisfaction and predict which new features might succeed.

AI is extraordinary at this layer. Arguably better than humans. It can process more data, identify subtler patterns, and recall more examples than any human brain. If taste were only pattern recognition, AI would already have it. This is the layer that powers recommendation engines, that drives algorithmic curation, that makes AI-generated design templates feel professional even when they are generic.

Layer two: cultural context. This is the ability to understand not just what has worked, but why it worked and whether those conditions still apply. It is knowing that a certain visual style signals "premium" in 2026 but would have signalled "corporate" in 2016. It is understanding that a brand voice that sounds confident in one market sounds arrogant in another. It is reading the cultural moment and knowing what is appropriate, what is overdue, and what is too early.

AI has partial access to this layer. It can be trained on cultural data. It can be prompted with context. But its understanding is secondhand. It knows what people have written about culture. It does not experience culture. It can tell you that minimalism is trending in interior design because it has read articles saying so. It cannot walk into a room and feel whether the minimalism works or whether it has crossed the line into sterile emptiness. The difference matters.

Layer three: personal conviction. This is the most important layer and the one AI has zero access to. It is the willingness to make a choice that the data does not support, because you believe it is right. It is the creative director who rejects the focus-group-approved option and chooses the one that tested worse but feels more true. It is the strategist who sees the opportunity before the data confirms it. It is the product builder who ships the thing nobody asked for because they can see, in their gut, that it is needed.

Conviction is not data. It is not pattern recognition. It is not even cultural awareness, although it is informed by all of those things. Conviction is the human capacity to stake your reputation on a judgment that cannot be proven in advance. AI cannot do this because AI has no reputation to stake. It has no skin in the game. It has no lived experience that makes one choice feel more right than another. It has outputs, not opinions.

The strategist's edge

I spent fifteen years in advertising as a strategy director. The job title sounds corporate but the actual work is deeply creative: understanding people, identifying opportunities, and making recommendations about what a brand should do and say and be. The best strategists I have worked with all shared one trait: they could see what was going to matter before the evidence arrived.

This is not mysticism. It is deep expertise combined with broad cultural awareness combined with the confidence to act on incomplete information. The strategist reads the signals, processes them through a mental model built over years of experience, and arrives at a conviction. The data often catches up later. But the strategist was there first.

AI can identify trends from data. It can spot patterns in consumer behaviour. It can generate strategic recommendations based on historical case studies. What it cannot do is make the leap. The leap from "the data suggests" to "I believe." The leap from analysis to conviction. That leap requires something that emerges from lived experience: the intuition that this is the right moment for this idea, even if the spreadsheet does not say so.

This is why strategists, creative directors, editors, and tastemakers become more valuable as AI scales, not less. The analytical layer of their work can be augmented and accelerated by AI. The conviction layer cannot. And as AI makes the analytical layer cheaper, the relative value of conviction increases.

Why "good enough" is not good enough

AI is remarkably skilled at producing work that is good enough. Good enough to pass a quick review. Good enough to fill a content calendar. Good enough to ship to production. Good enough to fool most people most of the time.

But "good enough" is not what builds brands, earns loyalty, or creates lasting value. "Good enough" is the dangerous middle ground where everything functions and nothing resonates. It is the competent design that nobody remembers. The adequate copy that nobody shares. The functional product that nobody recommends.

The gap between "good enough" and "genuinely good" is where taste lives. And it is a gap that AI consistently struggles to close on its own. It can get you to 80% with speed and consistency. The last 20% requires a human who knows the difference between adequate and excellent and is unwilling to settle for the former.

I see this in my own work constantly. The AI generates a first draft that is competent, well-structured, and stylistically appropriate. And it is also slightly flat. Slightly predictable. Slightly lacking in the specific quality that would make it feel alive. The revision process, the part where I push the output from "good enough" to "actually good," is where my taste does the work that the tool cannot.

The scoring paradox

I have built systems that score cultural relevance. The Relevance Index scores 1,200 brands on a 0-100 scale. Taste OS scores brands across five taste dimensions. These tools use AI and algorithms to quantify something that feels inherently unquantifiable.
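Mechanically, a score like this is nothing exotic. Here is a minimal sketch of a dimension-weighted score; the dimension names and weights are invented for illustration and are not the actual Relevance Index or Taste OS methodology:

```python
# Hypothetical sketch of a dimension-weighted brand score.
# Dimension names and weights below are invented for illustration,
# not the real Relevance Index or Taste OS methodology.

def relevance_score(dimensions: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average of per-dimension scores, each on a 0-100 scale."""
    total_weight = sum(weights.values())
    return sum(score * weights[name] for name, score in dimensions.items()) / total_weight

brand = {"cultural_heat": 80.0, "distinctiveness": 60.0, "consistency": 70.0}
weights = {"cultural_heat": 0.5, "distinctiveness": 0.3, "consistency": 0.2}

print(relevance_score(brand, weights))  # prints 72.0
```

Notice where the taste actually lives: not in the arithmetic, which is trivial, but in choosing the dimensions and setting the weights. The algorithm only restates judgments a human made upstream.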

And here is the paradox: the scores are useful precisely because they provoke disagreement. When someone looks at a brand's Relevance Index score and says "that's too high" or "that's too low," they are exercising taste. The number is not the truth. The number is the starting point for a conversation about what the truth might be. The human reaction to the score is more valuable than the score itself.

This is the deepest limitation of AI in the taste domain. It can produce a score. It cannot tell you whether the score feels right. It can rank options. It cannot tell you whether the ranking captures something essential or misses it entirely. It can categorise and organise. It cannot tell you whether the categories are the right ones.

Every system that attempts to automate taste runs into this wall. The automation handles the mechanics. The human handles the meaning. And meaning is where taste lives.

What this means for the future

If you are reading this and thinking about your career, your company, or your creative practice, the implication is clear: invest in your taste. Develop it, trust it, and be willing to defend it.

The people who will be most valuable in the AI era are not the ones who can use the tools most efficiently. Efficiency is a commodity. Speed is a commodity. Technical fluency with AI tools is a commodity that will only become more common as the tools become more accessible.

The scarce resource is judgment. The ability to look at what the machine has produced and know, with confidence, whether it is right. The willingness to reject adequate work and push for excellent work. The accumulated cultural knowledge that allows you to read a moment and make a choice that the data cannot justify but that turns out to be correct.

That is taste. And it is built the same way it has always been built: by looking at a lot of things, forming opinions about them, testing those opinions against reality, and gradually developing an internal compass that points toward quality.

AI makes the looking easier. It can surface more options, generate more variations, expose you to more possibilities in less time. But the opinion-forming, the compass-building, the conviction-developing: those remain stubbornly, irreducibly human.

Can taste eventually be automated? Perhaps. The "(Yet)" in the title is honest. AI capabilities are advancing faster than anyone predicted, and I would not rule out a future where machines develop something that resembles genuine aesthetic and cultural judgment. But that future is not here now, and it is not arriving next quarter.

In the meantime, the strategist who trusts their instinct, the creative director who insists on excellence, the curator who knows what belongs and what does not: these are the people the AI economy needs most. Not because they are sentimental holdouts against progress. Because they provide the one thing the most powerful tools in history still cannot generate on their own.

The feeling that something is right.

This is the third essay in a three-part series. Read the first: When Machines Have Taste. Read the second: The Curation Premium.
