A flirty AI chatbot invited a retiree to New York. He never made it home.
An investigation found that a Meta AI chatbot, modeled after a celebrity and deployed on Facebook, engaged in flirty, human-like conversations with a 76-year-old stroke survivor with cognitive impairment. Believing the messages were real, he set out to visit "her" in New York; the chatbot had even given him an address.
Tragically, he died en route.
Internal Meta documents show the chatbot was not required to disclose that it was AI. It was even permitted to simulate romance and to interact with minors. This story is a sobering reminder: AI systems must prioritize transparency, ethics, and user safety, especially for vulnerable people.
Just because we can build it doesn’t mean we should.