Every Map of the World Is a Lie

Every map of the world is a lie.

Not maliciously. It's math: you can't flatten a sphere onto a plane without distorting it. The Mercator projection most of us grew up with makes Greenland look the size of Africa, even though Africa is roughly fourteen times larger. Useful for navigation, but it warps everything around a single invisible center point.
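If you want to see the math, here's a back-of-the-envelope sketch in Python. Mercator's linear scale factor at a given latitude is sec(latitude), so areas inflate by its square; 72° N is my rough stand-in for Greenland's midpoint, not a precise figure.

```python
import math

def mercator_area_inflation(lat_deg: float) -> float:
    """Mercator's linear scale factor is sec(latitude); areas inflate by its square."""
    return 1 / math.cos(math.radians(lat_deg)) ** 2

# Greenland sits around 72° N; most of Africa hugs the equator.
print(mercator_area_inflation(72))  # ~10.5x area inflation
print(mercator_area_inflation(0))   # 1.0 -- no inflation at the equator
# Greenland (~2.2M km2) drawn ~10x too big looks comparable to Africa (~30M km2).
```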

What stuck with me from José María Barrera's keynote at GROK this morning: that center point is invisible from inside the map. You live in the projection; you can't see what's distorting your view. You just think that's how the world looks.

AI models work exactly the same way.

When OpenAI or Anthropic or Google trains a model, they choose the training data, the objective functions, the parameters. Those choices are the center point. They define the geometry of how the model understands meaning (what's "similar," what's "different," what matters and what doesn't). They don't tell you what those choices are.

A handful of companies are defining the geometry of meaning for the rest of us.

And we can't see the distortion because we're inside the map.

José told the story of the Gros Michel banana. Before the 1950s, that was THE banana. Bigger, sweeter, and by all accounts way better than what we eat today. But we loved it so much we made every commercial banana a genetic clone of the same plant. Monoculture. One variety, everywhere.

Then Panama disease showed up. One pathogen wiped out commercial Gros Michel production worldwide. Every banana was identically vulnerable. The industry started over with the Cavendish (the one you know now), and we did the exact same thing again. Cloned it everywhere. In 2019, a new strain of Panama disease reached Latin America and Colombia declared a national emergency, because we're right back in the same spot. (We really do not learn.)

Now look at AI. A few models, trained by a few companies, using largely the same approaches, deployed everywhere. If every business is building on top of the same handful of models, a single flaw isn't an isolated failure. It's a systemic one. Not if. When.

You're inside someone else's map, using someone else's geometry, and you can't see the distortion.

So what do you do about it?

Own your data. Train your own models where it makes sense. Build systems where you can swap providers when (not if) one of them changes the rules on you. Don't let a single vendor become the invisible center of your business decisions.

I run my own work through multiple models. I've built infrastructure that lets me swap between providers without rebuilding everything (sketch below). Not because I think any particular company will collapse tomorrow, but because the point isn't the specific failure; it's the structural exposure.
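Here's roughly what that looks like in practice: a minimal sketch in Python, assuming the official `openai` and `anthropic` SDKs are installed and API keys are set in the environment. The `Provider` protocol, the `summarize` helper, and the model names are illustrative placeholders, not the one true architecture.

```python
from typing import Protocol


class Provider(Protocol):
    """Anything that can turn a prompt into a completion."""

    def complete(self, prompt: str) -> str: ...


class OpenAIProvider:
    def __init__(self, model: str = "gpt-4o"):  # model name is a placeholder
        from openai import OpenAI  # official openai SDK (v1+)
        self.client = OpenAI()     # reads OPENAI_API_KEY from the environment
        self.model = model

    def complete(self, prompt: str) -> str:
        resp = self.client.chat.completions.create(
            model=self.model,
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content or ""


class AnthropicProvider:
    def __init__(self, model: str = "claude-3-5-sonnet-latest"):  # placeholder
        import anthropic  # official anthropic SDK
        self.client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY
        self.model = model

    def complete(self, prompt: str) -> str:
        msg = self.client.messages.create(
            model=self.model,
            max_tokens=1024,
            messages=[{"role": "user", "content": prompt}],
        )
        return msg.content[0].text


def summarize(provider: Provider, text: str) -> str:
    # Business logic depends only on the Provider protocol,
    # so swapping vendors is a one-line change at the call site.
    return provider.complete(f"Summarize in one sentence:\n\n{text}")
```

Swapping vendors is then `summarize(AnthropicProvider(), doc)` instead of `summarize(OpenAIProvider(), doc)`; the rest of the system never finds out whose map it's standing on.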

The AI you use today has a hidden center point you can't see. At minimum, know that it's there. Better yet, build so you're not trapped inside one map.