AI-generated Q&A can push false memories

A study from MIT had 200 people watch a crime video and then answer questions, either through a standard questionnaire or through an interactive genAI chatbot. The chatbot successfully implanted false memories in its users.

The chatbot was designed to deliberately mislead: it would ask a leading question and then give positive reinforcement to an incorrect answer. You might object that this isn't a fair test, since the chatbot was explicitly built to mislead. But what if a chatbot does this accidentally?

In real life, as in a courtroom with a prosecutor and a defense, a judge can interrupt deliberately misleading questioning, and there is an opportunity for cross-examination. Maybe future AI systems will need to incorporate something similar: some kind of "critical thinking" module to help mitigate these issues.

arXiv