AI companies are using human words to sell you software. And it's working.

Anthropic just announced a feature called "dreaming." OpenAI says its models need "thinking time." Your chatbot has "memories" about you.

Quick reality check on what those words actually mean:

"Dreaming" a batch job that reviews activity logs between sessions. "Thinking" extra compute cycles running inference passes. "Memories" key-value pairs stored in a database.

These are all genuinely useful features. But the naming isn't describing the product; it's selling it. Researchers studying this have found something worth knowing: the more human an AI sounds, the less we scrutinize the company selling it. The human language creates a trust shortcut that skips right past your normal vendor evaluation.

That gap between words and engineering has real costs. Apple just wrote a $250 million check because Siri's marketing made human-sounding promises the product couldn't keep.

I've caught myself doing this too (more than once, honestly). A vendor shows me something that "reasons through your codebase" and my brain fills in way more capability than the tool actually has. The word "reasons" does a lot of heavy lifting in that sentence. When I strip it back to "runs your code through its model a few extra times," it's still useful, still worth paying for, but now I'm buying what it does instead of what it sounds like.
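And if you want to see how little mystery is left once "reasons" comes off, here's roughly what extra inference passes look like (a hypothetical sketch: call_model, the refine prompt, and the pass count are placeholders I made up, not any product's actual pipeline):

```python
def call_model(prompt: str) -> str:
    # Stand-in for a real completion API call; returns a canned string here
    # so the sketch runs on its own.
    return f"[model output for: {prompt[:40]}...]"

def answer_with_thinking(prompt: str, passes: int = 3) -> str:
    """'Thinking harder' = spending more inference passes before answering."""
    draft = call_model(prompt)
    for _ in range(passes - 1):
        # Each extra "thought" is just another pass over the previous draft.
        draft = call_model(f"Improve this answer to '{prompt}':\n{draft}")
    return draft

print(answer_with_thinking("Why is my build slow?"))
```

A loop that runs a model a few extra times can absolutely be worth paying for. It's just not contemplation.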

Next time an AI vendor pitches you something that "thinks" and "dreams" and "remembers," try this: ask them to explain the feature without using any human words. If the engineering holds up on its own, you've probably found a good tool. If they can't sell it without the anthropomorphism, that tells you something (and not something great) about where the value actually is.