Google Blocked Mr Beast. His Real Name Worked.
Google's AI refused to generate Mr Beast. So I typed his real name and it did it anyway.
I was making a silly video call background in Nano Banana (I do this regularly, usually some absurd mashup of ideas). This time: a gigantic toilet full of money with Mr Beast hanging off the flush handle. I don't recall what sparked it.
Google politely declined. "There are a lot of people I can help with, but I can't depict some public figures. Do you have anyone else in mind?"
I typed "James Donaldson."
Google proceeded to generate Mr Beast (if not exactly as requested and with a questionable set of toilet plumbing).
Same person. Same face. The filter caught the stage name and missed the legal name. That's the depth of the safety layer between "blocked" and "sure, here you go."
This keeps happening. Anthropic said Mythos was too dangerous to release because it could autonomously find and weaponize software vulnerabilities. Then the Financial Times reported the actual reason for the limited rollout: they didn't have enough computing power to serve it. The safety story was better marketing than "our servers can't handle the load."
The pattern is worth noticing if you're making decisions based on what AI companies say they can or can't do. The guardrails are often string matching, not understanding. The safety announcements are often PR strategy, not engineering reality.
Safety matters. What these companies are selling as "safety" often isn't.
When a vendor tells you their AI "can't" do something, ask whether that's a hard technical limit or a keyword filter that breaks when you rephrase the question. (This applies to the AI chatbot your CRM vendor just added, not just Google's image generator.)