Kjølv Egeland
In recent months, there have been surges of speculation online that seismic events in Asia were caused by clandestine nuclear tests or military exchanges involving nuclear arms. An earthquake in Iran last October and a series of seismic events in Pakistan in April and May stimulated frantic theory-crafting by social media users and sensationalist news organizations. Both waves of speculation took place against a backdrop of intense conflicts in the regions concerned.
The spread of hearsay about nuclear or other strategic weapons tests is not new. During the Cold War, speculation about atomic tests, secret superweapons, and exotic arms experiments flourished in print magazines and popular culture. But novel digital technologies have added a new layer of complexity to the grapevine, boosted by ever-pervasive and invasive social media platforms.
Social media and AI-powered large language models certainly offer valuable sources of information. But they also risk facilitating the spread of misinformation more widely—and faster—than traditional modes of communication.
Worse, large language models could also end up validating false information.
‘Grok’ was lucky this time. Following a seismic event in Pakistan on May 12, numerous users of Elon Musk’s social media platform X (formerly known as Twitter) turned to its AI chatbot, Grok, to ask whether the event might have been produced by an underground nuclear test. Grok joins a growing list of chatbots that includes OpenAI’s ChatGPT, Anthropic’s Claude, Google’s Gemini, and China’s DeepSeek.
Grok’s answer to X users curious about the May 12 event was that the quake was due to natural causes. To support its answer, Grok cited the event’s reported depth of 10 kilometers, too deep for a nuclear test.
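The reasoning Grok offered amounts to a simple plausibility check: compare an event’s reported focal depth with the depths at which underground nuclear tests are actually emplaced, which historically have been at most a few kilometers below the surface, whereas tectonic earthquakes routinely occur at 10 kilometers or deeper. The sketch below illustrates that check; the 2-kilometer cutoff is an assumption chosen for illustration, not a figure from any monitoring authority.

```python
# Minimal sketch of a depth-based plausibility check, assuming a 2 km cutoff.
# Historical underground nuclear tests were emplaced at shallow depths,
# while natural earthquakes are often catalogued at 10 km or deeper.

MAX_PLAUSIBLE_TEST_DEPTH_KM = 2.0  # illustrative assumption, not an official threshold

def depth_consistent_with_nuclear_test(reported_depth_km: float) -> bool:
    """Return True if the reported focal depth leaves a test plausible."""
    return reported_depth_km <= MAX_PLAUSIBLE_TEST_DEPTH_KM

# The May 12 event was reported at roughly 10 km depth.
print(depth_consistent_with_nuclear_test(10.0))  # False: the depth alone argues against a test
```

Depth is, of course, only one of several indicators seismologists use; waveform characteristics and location also matter, but the depth argument is the one Grok reportedly relied on.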