AI and the Concorde problem: when cost starts to matter more than capability

A personal take on why AI might become increasingly expensive to run, why efficiency will likely matter more than raw capability, and what the Concorde can teach us about sustainability.

ai · technology · product thinking · economics · software engineering

Alongside programming, I’ve always had a strong interest in aviation.
That’s probably why the Concorde analogy keeps coming up when I think about modern AI.


The Concorde lesson

Concorde is often remembered as a technological triumph.

It flew faster than any commercial passenger aircraft before or since. It proved that supersonic travel was possible, safe, and repeatable. From a pure engineering standpoint, it worked.

It still failed.

Not because the technology was flawed, but because the economics never truly made sense. Concorde was expensive to operate, expensive to maintain, constrained by regulation, and viable only for a very small group willing to pay a premium.

That distinction matters. Concorde didn’t fail at capability. It failed at sustainability.

If you’re curious about the details behind that failure, this video does a good job of breaking it down in plain terms:
This plane could cross the Atlantic in 3.5 hours. Why did it fail?

When people talk about AI today, I hear echoes of that same pattern.


A personal observation about AI costs

This isn’t a prediction that “AI will fail.” It’s closer to a concern based on what we’re already seeing.

Running modern AI systems is expensive in ways that are easy to underestimate early on:

  • compute-heavy workloads
  • specialized hardware
  • growing energy usage
  • infrastructure that scales faster than expected
  • costs that become visible only once systems reach production

During experimentation, many of these costs are softened by cloud credits, venture funding, or internal budgets that aren’t under immediate pressure. Over time, those buffers disappear.

At that point, AI stops being a demo and becomes a recurring expense.


Capability is easy to celebrate. Cost is harder to face.

There’s a natural bias in tech toward what’s impressive.

Large, general-purpose models that can do “a bit of everything” are fascinating. They demo well. They create momentum. But fascination doesn’t pay invoices.

The harder question is whether the value created by AI consistently outweighs the cost of running it, especially at scale.

In some cases, the answer already seems to be yes. In others, it’s still unclear. And in many, it probably depends on how deliberately the system is designed.

This is where the Concorde parallel feels useful, not as a prophecy, but as a warning.


Why efficiency will probably matter more over time

If AI remains expensive and its costs remain hard to predict, teams will likely shift their priorities. Not because efficiency is fashionable, but because it becomes unavoidable.

There are already early signals of this shift:

  • growing interest in smaller, more specialized models
  • closer tracking of cost per inference, not just accuracy or capability
  • discussions around self-hosting or hybrid setups to reduce long-term spend
  • tighter collaboration between engineering and finance to understand AI cost structure

None of this means “less AI.” It means more intentional AI.
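To make the cost-per-inference idea concrete, here is a minimal Python sketch of how a team might track it alongside accuracy metrics. The class, field names, and per-token prices are all hypothetical, invented for illustration; they are not any real provider's pricing.

```python
# Minimal cost-per-inference tracker. All prices are hypothetical
# placeholders, not real provider rates.
from dataclasses import dataclass


@dataclass
class InferenceCostTracker:
    """Accumulates spend for one model endpoint, in dollars."""
    price_per_1k_input_tokens: float   # assumed price per 1,000 prompt tokens
    price_per_1k_output_tokens: float  # assumed price per 1,000 completion tokens
    requests: int = 0
    total_cost: float = 0.0

    def record(self, input_tokens: int, output_tokens: int) -> float:
        """Record one request and return its individual cost."""
        cost = (input_tokens / 1000) * self.price_per_1k_input_tokens \
             + (output_tokens / 1000) * self.price_per_1k_output_tokens
        self.requests += 1
        self.total_cost += cost
        return cost

    @property
    def cost_per_request(self) -> float:
        """Average cost per request so far (0.0 before any traffic)."""
        return self.total_cost / self.requests if self.requests else 0.0


# Example: two requests at made-up prices of $0.01 / $0.03 per 1k tokens.
tracker = InferenceCostTracker(price_per_1k_input_tokens=0.01,
                               price_per_1k_output_tokens=0.03)
tracker.record(input_tokens=1200, output_tokens=400)
tracker.record(input_tokens=800, output_tokens=200)
print(f"average cost per request: ${tracker.cost_per_request:.4f}")
```

Even a toy tracker like this changes the conversation: once every feature has a dollar figure attached per call, "is this worth it at scale?" stops being rhetorical.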

The conversation moves from "What can this model do?" to "Is this the most cost-efficient way to solve this problem?"

That shift feels increasingly likely as AI moves from novelty to infrastructure.


The real risk isn’t failure. It’s misalignment.

Concorde didn’t disappear because it was useless. It disappeared because the world decided it wasn’t worth paying for at scale.

AI could face a similar moment, not globally, but in specific forms:

  • overly general systems used where simpler tools would suffice
  • AI features added for optics rather than measurable impact
  • solutions that move cost around instead of actually removing it

Those versions of AI may quietly fade, not because they’re bad, but because they’re inefficient.

Other forms will likely persist and grow: the ones that are focused, boring, and economically defensible.


Final thoughts

I don’t think AI is headed for collapse. I do think it’s heading toward a phase where cost, efficiency, and discipline matter more than raw capability.

Concorde is a reminder that engineering brilliance alone isn’t enough. Technology has to earn its place every day it runs.

AI probably will too.

The interesting question isn’t whether AI can do incredible things.
It’s whether we’re using it in ways that still make sense once the bill arrives.

Vlad Moraru