Why ChatGPT Crashes on 'David Mayer': Digital Privacy Link?

Over the weekend, users of the conversational AI platform ChatGPT noticed an intriguing phenomenon. The popular chatbot would freeze up and refuse to answer questions if asked about specific names like "David Mayer." Conspiracy theories began to swirl, but there may be a more ordinary reason behind this strange behavior.

Word Spread Quickly

Reports spread this past weekend that the name "David Mayer" was seemingly toxic to the chatbot. Numerous users tried to coax it into acknowledging the name, without success: every attempt to make ChatGPT spell out that specific name ended in an error, sometimes breaking off in the middle of the name, accompanied by the message "I'm unable to produce a response."

Names That Crash the Service

It wasn't just "David Mayer" that caused issues. Other names, including Brian Hood, Jonathan Turley, Jonathan Zittrain, David Faber, and Guido Scorza, were also found to crash the service. (More names may well have been discovered since, so this list is not exhaustive.)

Who Are These Men?

Brian Hood is an Australian mayor who accused ChatGPT of falsely describing him as the perpetrator of a bribery scandal that he had in fact helped expose. His lawyers got in touch with OpenAI, but no lawsuit was filed. Jonathan Turley is a lawyer and Fox News commentator who was "swatted" in late 2023. Jonathan Zittrain is a legal expert known for speaking on the "right to be forgotten." Guido Scorza sits on the board of Italy's Data Protection Authority. All of these are individuals who may have preferred certain information about them to be "forgotten" by search engines or AI models.

David Mayer - The Academic

There was a Professor David Mayer who taught drama and history, specializing in the connections between the late Victorian era and early cinema. He died in the summer of 2023 at the age of 94. For years he faced a legal and online problem: a wanted criminal had used "David Mayer" as a pseudonym, and the resulting confusion landed the professor on security watchlists and prevented him from traveling. He fought continuously to have his name disambiguated from the criminal's.

Conclusion and Speculation

Lacking an official explanation from OpenAI, our speculation is that the model is wrapped with a list of people whose names require special handling, likely due to legal, safety, privacy, or similar concerns. Like many other names and identities, queries about them pass through extra processing before a response is returned. It's possible that this list contained a corrupted entry, or that the filtering code itself was faulty, causing the chat agent to break. This is only our speculation based on what we've learned, and Hanlon's Razor applies: don't attribute to malice what can be explained by carelessness or a syntax error. The whole drama serves as a reminder that AI models are not magic; they are actively monitored and adjusted by the companies that make them. Next time you seek facts, it might be better to go directly to the source rather than ask a chatbot.
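To illustrate how such a guard could produce exactly the symptom users saw, here is a minimal sketch of a hard-coded name filter applied to a streamed response. To be clear, this is purely hypothetical: OpenAI has not disclosed how its guardrails work, and the blocklist contents, function names, and error text below are our assumptions, not its implementation.

```python
# Hypothetical sketch of a post-generation name filter (not OpenAI's code).
# Assumption: output is streamed token by token and checked against a
# hard-coded blocklist as it accumulates.

BLOCKED_NAMES = {"David Mayer", "Brian Hood"}  # hypothetical entries

def stream_response(tokens):
    """Yield tokens, aborting if the accumulated text matches a blocked name."""
    emitted = ""
    for token in tokens:
        emitted += token
        # A naive substring check fires the moment a blocked name completes,
        # cutting the reply off mid-sentence. This matches the mid-name
        # breaks users reported.
        if any(name in emitted for name in BLOCKED_NAMES):
            raise RuntimeError("I'm unable to produce a response.")
        yield token

if __name__ == "__main__":
    try:
        for t in stream_response(["The ", "professor ", "David ", "Mayer", " taught drama."]):
            print(t, end="")
    except RuntimeError as err:
        print(f"\n[{err}]")
```

Run as written, the sketch prints "The professor David " and then halts with the error, mirroring the observed behavior. Of course, this is only one plausible failure mode; a malformed list entry, a bad configuration push, or an unhandled server-side exception could produce the same symptom.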