LLMs are shortsighted and that's your fault

A teammate asked me something last week that stuck with me longer than it should have.

“Don’t you think LLMs are incredibly shortsighted? They make whatever decision looks optimal right now, but zoom out to the full project and it’s often the wrong call.”

He was right. But not about where the blame lands.

The three hardcoded ifs

Ask a model for a component with three statuses. You’ll get three if-statements. Hardcoded. Working. Clean. Zero flexibility for the fourth status that product is already sketching on a whiteboard somewhere.
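Concretely, it tends to look something like this (a sketch with hypothetical status names, not output from any particular model):

```typescript
// A hypothetical status badge: three statuses, three hardcoded branches.
type Status = "active" | "pending" | "archived";

function statusLabel(status: Status): string {
  if (status === "active") return "Active";
  if (status === "pending") return "Pending review";
  if (status === "archived") return "Archived";
  return "Unknown"; // unreachable today; status number four lands here
}
```

Correct, readable, and done. Also a dead end: every new status means hunting down every branch like this across the codebase.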

And it’s not because the model can’t write something more resilient. It’s because nothing in your prompt told it to.

It doesn’t have the business context you didn’t share. It wasn’t in the planning meeting. It doesn’t know the PM has five new variants on the roadmap. It sees three statuses because you showed it three statuses.

Training rewards favor “works now.” For years, users complained about over-engineered output — abstract factory patterns for hello worlds. So the pendulum swung hard the other way. Models learned to take the shortest path to working code.

The model deliberately avoids YAGNI ("you aren't gonna need it") violations. It won't build flexibility you didn't ask for. That's not a limitation. That's the model doing exactly what it was trained to do.

One sentence changes everything

This isn’t a “dumb model” problem. It’s a default setting. And you change it with a single sentence of context.

Tell it: “this domain is uncertain, we’ll probably add more types, design it so a new variant doesn’t require touching fifteen files” — and you get fundamentally different code. Strategy patterns. Discriminated unions. Config-driven architecture. The kind of structure that bends instead of breaking.
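With that hint, the same component tends to come back as data, not branches. A sketch of the config-driven shape, using the same hypothetical status names:

```typescript
// Hypothetical: statuses as a config table. A new variant is one new entry.
type Status = "active" | "pending" | "archived";

interface StatusConfig {
  label: string;
  color: string;
}

const STATUS_CONFIG: Record<Status, StatusConfig> = {
  active:   { label: "Active",         color: "green" },
  pending:  { label: "Pending review", color: "yellow" },
  archived: { label: "Archived",       color: "gray" },
};

function statusLabel(status: Status): string {
  return STATUS_CONFIG[status].label;
}
```

Now adding a fourth status means extending the `Status` union and adding one config entry, and the `Record<Status, ...>` type makes the compiler point at every table that's missing it.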

The model was always capable of this. It just needed the signal.

Where the experienced developer actually matters

Not in typing code faster — the model does that better. Not in framework knowledge — the model knows more of it. Not even in “best practices” — the model has those memorized.

In knowing where you need slack and where you can go straight.

That’s the thing no model can infer from a prompt. The awareness that this part of the system will change three times before launch, while that part has been stable for two years and will stay stable. That intuition comes from sitting in the meetings, hearing the product debates, watching the domain shift under your feet.

One hint — “this is certain,” “this will change,” “this is uncertain” — and the shortsightedness disappears. The model gets decision context and writes code on a completely different level.

An experienced developer paired with an LLM has absurd leverage. But only if the developer stops complaining about the “dumb model” and starts passing along what they already know.

The next time you prompt a model, ask yourself: did you give it business context, or just technical requirements?