How AI hoaxes could move markets

Dec 11, 2025 - 05:02

Image generated by ChatGPT

If a fake image of a collapsed bridge can stop trains, imagine what else AI-generated scams could do, asks John Oxley

Last week, a few dozen trains around Carlisle were delayed. On its face, an unremarkable story, especially for anyone familiar with the state of British railways. The cause, however, was novel: a manipulated image. Following reports of a minor earthquake in the county, someone produced an AI-generated image showing a local bridge partially collapsed. This was reported to Network Rail, which suspended services to conduct a physical inspection. Although the hoax was resolved quickly, it signals an emerging threat.

When we think of disinformation, we tend to think of long, complex political influence campaigns. Such campaigns exist, and AI makes them easier by churning out messages and images at scale. We see this in Russia’s attempts to push its narratives around the invasion of Ukraine, with bot networks and duped humans amplifying dubious claims supported by confected evidence. The Carlisle bridge affair points to how short, sharp fakes can be just as destabilising.

This matters to both security and markets. In 2013, the Associated Press’s Twitter account was hacked, and a message was posted claiming there had been an explosion at the White House that injured then-President Barack Obama. It triggered a market panic, briefly wiping around $136bn off the S&P 500 within minutes before prices rapidly corrected. Responsibility was claimed by the Syrian Electronic Army, a pro-Assad hacking group.

There is a real vulnerability in fast-moving economies like ours. Sufficiently authentic fakes, seeded in the right places, can move markets. Imagine an image showing a hazard on a plane, aping the real incidents with Boeing, or a photo of a FTSE chief executive taken seriously ill at an event. Such fakes are easy to conjure up and quick to spread before any rebuttal can be mounted. Just as in 2013, they could be a cheap way to create chaos.

The events of last week show the risk of smaller-scale incidents. Hoaxes have always been a drain on the resources of emergency services – but now they can be more believable than ever. Easily accessible AI can create the appearance of a crash or fire that needs investigation, pulling responders away from real incidents. At low levels, it can be a nuisance, but a coordinated deployment could pose a serious security threat, reducing readiness for real emergencies and sowing distrust in official reports. Such fakes could be particularly dangerous if timed to coincide with an actual emergency or attack, when information is fuzzy and rumours take hold swiftly. 

The technology cannot be rolled back. Instead, our approach to resilience needs to account for it. From infrastructure to markets, we need robust protocols built on an understanding of how quickly and easily dramatic images can now be fabricated. Decision-making needs to balance the demand for a rapid response against a healthy dose of scepticism about the evidence prompting it.

Likewise, the public needs a healthier scepticism: a recognition that digital evidence is no longer proof but suggestion, something to be weighed rather than swallowed. That doesn’t mean cynicism; it means maturity. In fast-moving economies, resilience will depend on cultivating a culture that prizes verification over velocity, and steadiness over the drama of the feed. If AI makes fabrication effortless, then our counterweight must be a system – and a society – that takes a beat before reacting.

A few trains being delayed because of a fake photo is not the worst thing in the world. It is, however, a warning. This is a new threat, one that ne’er-do-wells can exploit to cause a little disruption, but which real villains with far worse intentions could also weaponise. Our institutions have a responsibility to build guardrails against it, and so do we, by being careful about what we believe and share. Just as we defend against real threats, we now have to guard against the unreal, too.