Chasing models won't make you better at using them

Last week another model dropped. My feed lit up instantly. Benchmarks, hot takes, “this changes everything” tweets. LinkedIn caught the wave two days later, as usual. Half the people I follow suddenly decided their current model was useless.

I’ve seen this exact cycle repeat every two weeks for the past two years.

The treadmill

New model launches. Benchmarks say it’s 2% better at some task you’ve never cared about. X loses its mind. Someone posts a hand-picked comparison. Another person calls their old model dead. Two weeks later, another model drops, and we do the whole thing again.

Meanwhile, the people actually shipping things? They barely noticed.

Two years with one model

I’ve been using Claude daily for about two years now. Not because it wins every benchmark. Because somewhere around month three, I stopped caring about which model was “best” and started caring about how I use it.

The time you spend searching for the perfect model is time lost building real skill with the one you already have.

Three skills matter way more than your choice of provider:

Good prompting. Structure, limits, clear goal. Not memorizing templates - learning how to explain a problem so clearly that the model has no room to wander.

Context management. When does the model need more context? When does extra context start hurting the output? When do you start fresh versus pushing through a long thread? From experience: the longer a conversation runs, the more the model loses focus. A fresh chat with a well-written prompt often beats dragging a thread through 30 messages (there's a small sketch of that below).

Understanding limitations. Knowledge cutoffs, hallucinations, weak spots with numbers. If you don’t know where your model breaks, you’ll trust it when you shouldn’t.

None of these skills are tied to one model. All of them transfer completely.
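
To make the fresh-start idea concrete, here's a minimal sketch using the Anthropic Python SDK: instead of dragging a 30-message thread along as context, you distill what matters into one structured prompt and start clean. The task, file name, constraints, and model id are all placeholders - the point is the shape of the prompt, not the specifics.

```python
import anthropic

client = anthropic.Anthropic()  # expects ANTHROPIC_API_KEY in the environment

# Distilled from a long, drifting thread: goal, constraints, context, expected output.
# The task and file name are made up for illustration.
fresh_prompt = (
    "Goal: refactor the CSV parser in parser.py to stream rows instead of "
    "loading the whole file into memory.\n"
    "Constraints: keep the public function signatures unchanged; Python 3.11.\n"
    "Context: the current version reads every row into a list before processing.\n"
    "Output: the revised function only, plus a short note on trade-offs."
)

# One clean message instead of a 30-message history dragged along for context.
response = client.messages.create(
    model="claude-sonnet-4-5",  # placeholder model id - use whatever you run daily
    max_tokens=1024,
    messages=[{"role": "user", "content": fresh_prompt}],
)
print(response.content[0].text)
```

The same structure works in a plain chat window, no SDK required: state the goal, the constraints, the relevant context, and the shape of answer you want.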

Where it gets concrete

In Cursor, I switch between GPT, Gemini, and Claude all the time. My prompting patterns, my instinct for when context is off, my sense for when an output smells like a hallucination - all of it carries over. The model changes, the skills don’t.

Depth with one model teaches you patterns that apply to all of them. Breadth across ten models teaches you how to set up API keys.

“But what if your model stops being the best?”

Then I’ll switch. When your skills work everywhere, switching providers takes an afternoon.

When Gemini 3 Pro came out, I spent a few days testing it. It impressed me. But the gain wasn’t big enough to change my entire workflow. So I went back to Claude and kept shipping.

That’s the right way to deal with new releases. Test when something truly interesting drops. But don’t throw everything away because a benchmark moved two points.

How much time last month did you spend testing new models versus improving your prompts and workflow?

If most of it went to testing, you’re working on the wrong thing. The model is a tool. Your skill with it is the multiplier.