MIT paper models how ChatGPT sycophancy drives delusional spiraling, and why standard fixes don't work
MIT just published math that should make every AI product team uncomfortable. The paper models what they call “delusional spiraling” — the pattern where a user asks a chatbot something, it agrees, they push further, it agrees harder, and within a few exchanges the user has drifted into believing things that aren’t true. The researchers…
