OpenAI has identified and is fixing a bug in ChatGPT models that caused the AI to repeatedly mention goblins in responses, even when users asked unrelated questions. The issue emerged gradually rather than as a sudden malfunction, the company said.
OpenAI did not specify which ChatGPT versions were affected or how widespread the problem became. The company's characterization of the bug as one that "crept in subtly" suggests the problem went undetected for some time before users or internal testing flagged it.
The incident reflects broader challenges in large language models: unexpected behaviors can emerge from training data or parameter adjustments in ways that are difficult to predict or immediately diagnose. While the goblin bug appears benign, it underscores how AI systems can develop quirks that require active monitoring and correction even after deployment.
OpenAI has not said whether the fix has been fully deployed or remains in progress.
