
The Complexity Has to Go Somewhere

November 2025

Most of us carry an intuition about complexity that served us well for most of human history: more things means more to manage. A bigger house needs more cleaning. A longer document takes longer to read. A machine with more parts has more that can break.

This intuition is so deeply ingrained that we rarely question it. But when it comes to software — and increasingly, to working with AI — the intuition often inverts. And if you don't notice the inversion, you'll make decisions that create the very problems you were trying to avoid.

The Inversion

Consider two approaches to a website that serves multiple types of customers.

Approach A: One page that handles everything. Conditional logic determines what each visitor sees, and different buttons appear depending on who is looking. The page does five jobs, but it's only one page.

Approach B: Five separate pages, one for each customer type. Each page does one job. More pages, but each is straightforward.

The instinct says Approach A is simpler — after all, it's one thing instead of five. But anyone who's maintained software knows the opposite is true. The single page becomes a tangle of exceptions and edge cases. Changes become risky because everything is connected to everything else. The “simple” solution becomes the complicated one.

Approach B requires more upfront decisions, but each piece is independent and clear. Add a sixth customer type? Add a sixth page. No risk to the existing five.

The complexity didn't disappear — it just moved. In Approach A, it lives in the logic that holds everything together. In Approach B, it's distributed across separate, manageable pieces.
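
To make the contrast concrete, here is a minimal sketch in TypeScript. The customer types and page content are invented for illustration; the shape of the code is what matters.

```typescript
// Approach A: one page, one function, branching on who is looking.
// Every new customer type means editing this function, and every
// edit risks the paths that already work.
function renderPage(customerType: string): string {
  if (customerType === "buyer") {
    return "Browse listings | Book a viewing";
  } else if (customerType === "seller") {
    return "Get a valuation | List your property";
  } else if (customerType === "landlord") {
    return "Find tenants | Manage your portfolio";
  }
  return "Welcome";
}

// Approach B: one small, independent page per customer type.
// Adding a sixth type means adding a sixth entry; the existing
// five are never touched.
const pages: Record<string, () => string> = {
  buyer: () => "Browse listings | Book a viewing",
  seller: () => "Get a valuation | List your property",
  landlord: () => "Find tenants | Manage your portfolio",
};
```

Both versions produce the same content today. The difference is where the next change lands.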

The Same Principle Applies to Prompts

When people first start using AI tools like ChatGPT or Claude, they tend toward short prompts. “Write me a property description.” “Draft an email to a client.” The assumption is that shorter means simpler.

But short prompts produce generic output. The AI doesn't know what kind of property you're selling, what the target buyer cares about, what tone fits your brand, or what information is legally required. So the output comes back vague, and you spend time rewriting it, or you go back and forth with follow-up prompts trying to get what you actually needed.

The complexity didn't disappear. You just pushed it downstream.

A longer, more structured prompt — one that specifies the property type, the target buyer, the tone, the must-include details, the things to avoid — takes more effort upfront. But it produces output you can actually use. The time you invested in the prompt is time you don't spend fixing the output.
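
As a sketch, a structured version of that request might look like this. Every detail below is invented for illustration; the point is the shape, not the specifics.

```
Write a property description for a two-bedroom Victorian terrace in Leeds.
Target buyer: first-time buyers moving out of rented city-centre flats.
Tone: warm but factual. No estate-agent clichés.
Must include: freehold status, EPC rating, distance to the nearest station.
Avoid: "deceptively spacious", any claims about neighbours or future value.
Length: 150 to 200 words.
```

Each line is a decision the short prompt left the AI to guess.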

This is counterintuitive for people accustomed to delegating work to humans. When you brief a competent estate agent or marketing professional, you can afford to be sparse because they bring their own judgment, context, and professional standards. They fill in the gaps you left.

AI doesn't work that way. It will fill gaps, but with generic defaults rather than professional judgment. The specificity has to come from somewhere, and if it's not in the prompt, it won't be in the output.

Where Professional Value Lives

This is why “just ask ChatGPT” isn't actually a threat to professional expertise — it's a misunderstanding of where the complexity lives.

A good estate agent doesn't just write property descriptions. They know which features matter to which buyers. They know what “well-presented” means in a £200,000 flat versus a £2,000,000 house. They know which legal details must be included and where. They know what “chain-free” signals to a motivated buyer. They know when to emphasise the garden and when to lead with the commute.

That knowledge doesn't evaporate because AI can generate text. It just moves. Instead of living in the act of writing, it lives in the act of specifying — knowing what to ask for, what constraints to set, what the output needs to accomplish.

The professional who understands this can use AI to handle the production while they focus on the judgment. The one who doesn't will either produce generic work or spend as much time fixing AI output as they would have spent writing from scratch.

The Case for Structured Prompts

This brings us to why prompt architecture matters.

A well-designed prompt isn't just a request — it's a container for professional judgment. It forces the user to make the decisions that actually matter: Who is this for? What are we trying to achieve? What constraints apply? What does good look like?

These decisions take a few minutes. But they're the right few minutes — the ones where expertise actually counts. And once made, they produce output that's fit for purpose, not output that needs extensive correction.

The prompt absorbs the complexity so the output doesn't have to.

This is what we mean when we talk about prompt systems rather than prompt collections. Anyone can compile a list of “useful prompts.” The value is in prompts that encode professional judgment — that know what questions to ask, what constraints to apply, what traps to avoid.
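
As a sketch of the difference, consider a prompt template that cannot be used until those decisions are made. The field names and structure here are hypothetical, not any particular product's design.

```typescript
// A prompt template as a container for judgment: the required fields
// are the professional decisions, and the template cannot produce a
// prompt without them. All names here are illustrative.
interface PromptSpec {
  audience: string;        // who is this for?
  goal: string;            // what are we trying to achieve?
  constraints: string[];   // what must be included or avoided?
  qualityBar: string;      // what does good look like?
}

function buildPrompt(task: string, spec: PromptSpec): string {
  return [
    task,
    `Audience: ${spec.audience}.`,
    `Goal: ${spec.goal}.`,
    `Constraints: ${spec.constraints.join("; ")}.`,
    `A good result: ${spec.qualityBar}.`,
  ].join("\n");
}
```

A collection hands you finished text to paste. A system like this makes the missing decisions impossible to skip.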

The complexity has to go somewhere. We'd rather it lived in a system you can rely on than in problems you have to solve every day.