The render is not the product

5 min read
  • craft
  • ai
  • judgment

Everyone's deliverables look good now.

Polished decks, crisp renders, clean reports, persuasive first drafts — all of it sits within reach of anyone with a laptop and ten minutes. The visual floor has risen across nearly every craft at once, and most people are still using it to judge quality.

That yardstick no longer works.

The floor, not the ceiling

When everyone in a field has access to the same tools, the same reference libraries, the same AI-assisted drafts, the output stops being a differentiator. It becomes the floor. What used to impress now just qualifies you to be in the conversation.

This is the pattern that will play out across nearly every craft over the next decade, if it hasn't already. The output of a mid-tier practitioner becomes indistinguishable from the output of a top-tier practitioner when you're looking only at the output. And most buyers look only at the output, because that's what's in front of them.

Which means the game shifts to everything that isn't the output. The thinking behind the decisions. The framing of the problem before the tool was applied. The judgement about what to leave out. The specific distinctions the work is organised around. None of those travel with the file.

What Ericsson actually found

Anders Ericsson spent decades studying expert performance. His finding wasn't that experts know more. It was that experts perceive more. They see distinctions that others experience as sameness.

The chess master doesn't have a bigger memory for pieces. She sees chunks where an amateur sees individual pieces. The radiologist doesn't look at more of the scan — he looks at less of it, because he knows exactly where to focus. The experienced sommelier isn't trying to hold a bottle's full profile in mind; she's tracking three or four specific markers that most people have never even been told to taste for.

In each case, the expert is running a different job than the amateur on the same inputs. They're doing a filtering operation the amateur doesn't know exists. The output — move, diagnosis, description — is a consequence of the perception. And it was the perception that took a decade to develop, not the output.

Tools can replicate outputs. They cannot replicate perception, because the thing worth noticing has to be noticed first. You can't prompt for what you don't know to ask about.

The uncomfortable question

The uncomfortable question isn't whether your tools are good enough. It's whether you've been confusing your tools' output with your own judgement.

This is worth sitting with. A decade ago, producing a given piece of polished work required a real understanding of the material, because producing it was hard enough that only people with that understanding bothered. The output was a reliable proxy for the perception behind it. Now the output is a commodity, and the proxy has broken. Plenty of people producing polished work have no underlying perception at all.

Plenty of practitioners have also quietly drifted into this category without noticing. The tools got more generous. The output looked better. The client seemed satisfied. The part of the job that used to require sharpening — the framing, the recommendation, the judgement about what the specific situation actually calls for — stopped getting exercised. The atrophy isn't visible from the output. It shows up later, when the question is genuinely hard and the tool can't rescue you.

That's the moment that separates people with underlying perception from people whose perception was always downstream of their tools.

What the perception layer actually does

When a tool produces a polished first draft of anything — a strategy deck, a piece of code, a design, a legal argument, a financial model — the draft already incorporates a set of default assumptions about what the problem is. Those defaults are correct often enough that you can run most of your career inside them.

They're wrong when the specific situation has an edge the defaults don't cover. Which is most of the situations worth charging for. A standard answer to a standard question is a commodity. The premium work is applied to questions that are almost but not quite standard, where the tool will happily produce a confident, well-formatted, subtly wrong answer.

Knowing where the specific situation diverges from the default is perception. It's what distinguishes a professional from a well-prompted amateur, and it's the part of the job that won't be commoditised soon, because the divergence always depends on context the tool wasn't looking at.

The last thing that travels

Look at the person in your industry whose work keeps getting chosen even though their presentation isn't the flashiest. There's something they're seeing that others aren't. Some set of distinctions they treat as obvious that the rest of the field treats as sameness. Their judgement gets recommended by word of mouth long before their deliverables circulate.

That perception is the last thing that can't be commoditised. Everything on top of it will be. The renders, the proposals, the presentations, the first drafts of the thinking — all of it will converge toward a similar, polished floor. What remains is the thing underneath, which is just: do you actually know what you're looking at.

The new baseline doesn't reward the people with the best tools. It exposes the ones who had nothing underneath.