Topic: distributional shift vulnerability
When LLMs Go Rogue: The Fluent Nonsense Problem
Research from Arizona State University suggests that Chain-of-Thought reasoning in LLMs is not genuine reasoning but a fragile form of pattern matching, one that degrades on unfamiliar tasks or under small prompt changes. The study frames this through the concept of distributional shift, showing that LLMs perform poorly once test inputs deviate from the distribution of their training data.