Your next AI vendor demo is going to look incredible. It always does.
They'll connect their tool to a sample database, ask it a question in plain English, and watch it pull exactly the right answer. Stakeholders will nod. Someone will say "when can we have this?"
Here's what the demo won't show you: what happens when 15 people hit it at once. Whether it logs who asked what (and whether that matters for your compliance requirements). What breaks when the person who set it up leaves. Who has access to your data once the connection is live.
Most of these AI integrations run on a protocol called MCP (think of it as the universal adapter that lets AI tools talk to your business systems). It's solid technology. It's also brand new, and the default setup skips most of what you'd want in a production environment. No access controls out of the box. No usage tracking. No "what happens when it fails" plan.
According to an InfoQ survey from Q1 2026, 76% of enterprise AI vendors say they're building on MCP right now. That means your next sales call probably involves it (whether they mention it by name or not).
None of this means you should avoid AI tools that use MCP. You shouldn't. But you should be asking vendors three questions they probably don't want to answer:
1. What authentication and access controls ship by default? Not what's "available," but what's actually turned on.
2. What logging exists, so I can see who accessed what data and when?
3. What's the recovery plan when this breaks at 2pm on a Tuesday (or 2am on a Friday)?
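To make the logging question concrete, here's a minimal sketch of what a real answer looks like: every tool call leaves a record of who asked, for what, and when. This is illustrative Python, not any vendor's actual API; the tool name, user field, and handler are made up for the example.

```python
import time
from functools import wraps

# Hypothetical audit trail. In a production integration this would go to
# durable, tamper-evident storage, not an in-memory list.
AUDIT_LOG = []

def audited(tool_name):
    """Wrap a tool handler so every call records who asked, for what, and when."""
    def decorator(handler):
        @wraps(handler)
        def wrapper(user, *args, **kwargs):
            AUDIT_LOG.append({
                "tool": tool_name,       # which capability was invoked
                "user": user,            # who asked
                "args": args,            # what they asked for
                "timestamp": time.time() # when
            })
            return handler(user, *args, **kwargs)
        return wrapper
    return decorator

@audited("query_customers")
def query_customers(user, region):
    # Stand-in for a real database lookup behind the AI integration.
    return f"customers in {region}"

query_customers("alice@example.com", "EMEA")
```

If a vendor can show you this kind of trail in their product (not in a roadmap slide), the logging question is answered. If they can't, you now know exactly what "we can customize that" means.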
If the answer to any of those is "we can customize that," what they're really saying is "the demo doesn't do that." And that's worth knowing before you sign.