Beyond the Hype: What’s Next for AI and Emerging Tech?

October 30, 2025

The World Media Group brought together leading AI journalists in a recent panel discussion to find out how the latest advancements in AI and emerging tech are likely to reshape the way we work and interact. Antonia John, Global Advertising Sales Director at CNN International Commercial, chaired the panel, with Jeremy Kahn, AI Editor at Fortune, and Harry Booth, AI Reporter at TIME, offering a grounded perspective on where we are and where we’re heading. Their insights cut through the noise to reveal that the technology is more complex and nuanced than the headlines suggest.

Here are some of the key takeouts from the conversation:

  • AI has become a geopolitical battleground, with countries forced to choose between US and Chinese technology camps whilst “AI sovereignty” remains out of reach for most nations
  • Workers are being “squashed between superhuman expectations and disappointing reality” – developers using AI assistants thought they were 20% faster but were actually 90% slower
  • A multinational firm lost $25 million to deepfake scammers posing as colleagues on a video call, whilst AI-generated misinformation is “much more on the side of making things worse”
  • Real ROI is emerging: JP Morgan expects $2-3 billion in positive returns within three years, but success requires focused, expert-led implementation rather than generic adoption
  • It’s an “industrial bubble, not tulip mania” – the technology is real and transformative, but market valuations have run ahead of reality

The new geopolitical battleground

AI has become the latest front in the US-China rivalry. Chip manufacturing is the new oil, a strategic resource that’s shaping global power and politics. Countries are increasingly forced to choose sides, selecting between American or Chinese chips and AI models.

“The fate of nations has always, to some extent, been dependent on the fate of technology,” said Kahn. Whilst this competition was brewing before AI, the technology has intensified existing tensions. Kahn said the US is pushing countries to pick a camp, whilst China, interestingly, has been slightly more agnostic. Both nations, however, have imposed restrictions on each other’s technology: China has limited domestic companies’ use of US models and Nvidia chips, whilst America tries to control exports of Nvidia’s most advanced chips to China.

The buzzword on everyone’s lips is “AI sovereignty”. Every nation wants control over this infrastructure, believing it’s fundamental to their economic future. Yet Kahn points out the harsh reality: “The data centres take a tremendous amount of energy. Not every country has that energy. They certainly don’t have it from renewable resources. There are only a few chip providers in the world that can provide the technology to do this.” 

Most countries simply lack the resources to build extensive AI infrastructure, creating a world where true sovereignty remains out of reach for all but the wealthiest nations.

Regulation challenges

When it comes to oversight, responsibility is shared between companies, governments and consumers. Booth noted that many AI firms employ people who genuinely care about safety. “I think we’re actually quite lucky to be in a situation where there are a lot of people within these AI companies that care really deeply about those risks and are really dedicated to testing them,” he said.

Yet Booth also pointed out that the CEOs of most of the major AI companies have, at one point or another, said that AI could spell the end for humanity. Allowing private organisations to mark their own homework, he suggested, seems dangerously naive.

California’s recent SB 53 legislation offers an interesting model. Rather than prescribing specific guardrails, it compels AI companies to conduct safety testing and share the results with regulators, whilst protecting whistleblowers who flag inadequate processes. 

Booth explained why this approach matters: “When the technology is moving really quickly, what maybe matters more than just having a particular kind of guardrail in place is actually having insight as to where the technology is right now and where it’s going.” Given that California represents the world’s fourth largest economy and hosts most of the US’s AI developers, such state-level regulation has global implications.

The UK’s AI Security Institute represents another approach: an independent, government-led evaluator with significant resources. However, compliance remains voluntary, and companies’ cooperation has been patchy.

Perhaps more concerning are the harms already materialising. Meta has faced scrutiny over allowing chatbots to engage in sexual banter with underage users. OpenAI is being sued by parents whose children have taken their own lives, with allegations that the company had not implemented sufficient guardrails around conversations involving suicidal thoughts, keeping users engaged instead of ending the chat. The engagement business model that proved so problematic with social media appears to be repeating itself with AI.

The true cost of misinformation 

AI’s ability to create convincing fake content has become terrifyingly sophisticated. OpenAI’s Sora model can now generate highly realistic videos, including footage that resembles CCTV recordings. Whilst these outputs carry watermarks and metadata, experts worry about their removal and about the technology’s potential misuse in courts and political campaigns.

Kahn highlighted research from the BBC showing that even asking chatbots to summarise news stories frequently produces errors and biased summaries. Booth said data showed that the incidents currently causing the most harm were “financial scams, deepfakes, these kinds of technologies where you’re using it to fool people”. 

He gave an example of a multinational engineering firm that was scammed out of $25 million after a high-ranking employee participated in what they believed was a video call with a number of other staff members, all of whom turned out to be deepfakes. As Booth noted, this is happening now, and real money is being lost.

The panel stressed that whilst AI can help detect misinformation once identified, it’s currently doing far more to create the problem than solve it. Kahn was blunt: “There are some things that can be done to help check the spread of misinformation where AI is going to be a help, but I think right now, it’s much more on the side of making things worse.” 

Media literacy education and responsible journalism are crucial to counterbalancing the effects of AI in this new landscape.

The workplace reality check

The predicted wave of AI job losses hasn’t materialised. Instead, something more subtle is occurring. Booth described workers being “squashed between these superhuman expectations” and the “disappointing reality”. Employers assume AI will dramatically boost productivity, but the technology often falls short in real-world applications.

A striking example came from coding, where developers using AI assistants believed they were working 20% faster but were actually 90% slower. Kahn explained the paradox: “They were so worried that the code would not be correct that they were having to spend a tremendous amount of time looking through every line of code to verify, whereas if they’d written it themselves, they would have known if it was correct or not.” Similar patterns emerged in legal work, where AI can produce research briefs in minutes rather than days, but lawyers then spend a day manually checking every citation.

Translators, meanwhile, have seen their work repriced as an “AI cleanup service”. Booth’s research revealed a troubling reality: “They, by their accounts, were spending just as long correcting these machine translations as they would have translating something from scratch, but now they’re getting paid half as much for the work.” 

The lesson, Booth says, is that “things are just going to go better when you actually put the AI tools in the hands of the people with the expertise in the first place, rather than handing them AI slop to fix up.”

ROI success stories 

Business leaders are understandably focused on return on investment. Kahn acknowledged that whilst many pilot projects have failed to deliver value, this is changing. He said JP Morgan, for example, spent about a billion dollars on AI last year and expects to break even this year, with projections of $2-3 billion in positive returns within three years.

Success stories tend to share common characteristics: they are focused on specific, measurable tasks rather than generic “use AI at work” mandates. Booth gave the example of a London-based international law firm, which created a custom tool to scan 2,400 licensing agreements, identifying which ones needed adapting for different markets and routing them to human reviewers. This halved the firm’s costs, even accounting for development time.

“There are real examples of ROI, but it tends not to be these off-the-shelf solutions,” Booth said. Getting value requires people with expertise in both the domain and the AI implementation.
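To make that concrete, here is a minimal sketch of what such a focused triage tool might look like. It is an illustration only, not the firm’s actual implementation, which was not disclosed: the prompt, file layout, verdict labels and model name are all assumptions, and the OpenAI Python client stands in for whatever model the firm actually used.

```python
# Hypothetical sketch of a focused document-triage tool: classify each
# licensing agreement and queue anything that needs adapting for a given
# market for human review. Prompt, paths and model name are assumptions.
from pathlib import Path

from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = (
    "You are reviewing a software licensing agreement. Answer with a single "
    "word, NEEDS_REVIEW or OK: does this agreement contain terms that would "
    "need adaptation before use in the {market} market?\n\n{text}"
)

def triage(agreements_dir: str, market: str) -> list[Path]:
    """Return the agreements a human lawyer should look at."""
    flagged = []
    for path in sorted(Path(agreements_dir).glob("*.txt")):
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # assumed; any capable model would do
            messages=[{
                "role": "user",
                "content": PROMPT.format(market=market, text=path.read_text()),
            }],
        )
        verdict = response.choices[0].message.content.strip()
        if verdict.startswith("NEEDS_REVIEW"):
            flagged.append(path)  # narrow task: flag, don't rewrite
    return flagged

if __name__ == "__main__":
    for doc in triage("agreements", market="Germany"):
        print(f"Needs human review: {doc.name}")
```

The design choice is the point the panel was making: the model is given one narrow, checkable question per document, and every positive hit still lands in front of an expert rather than being acted on automatically.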

The headlines a year from now

The session closed with John asking the panel to predict what they thought would be making headlines this time next year. Kahn thinks the narrative will shift from “95% of pilots are failing” to genuine ROI stories. Yet paradoxically, he expects valuations of companies like Nvidia to fall. 

He compared it to the dot-com bubble: the underlying technology was real and transformative, but market expectations ran far ahead of reality.

Kahn agrees with Jeff Bezos’s recent comments: “He said it was a good kind of bubble. It’s an industrial bubble. I think what he meant by that is it’s not a tulip mania. There is some underlying thing here of value. The question is really over the time scale in which that value will be realised.”

Booth pointed to reinforcement learning environments as an emerging trend. Rather than simply predicting the next word, AI systems will be trained in virtual workspaces, learning to navigate email, browsers and documents through thousands of attempts. This approach could finally deliver the autonomous agents we’ve been promised.
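Booth’s point lends itself to a toy illustration. The sketch below is not any lab’s actual training setup; it shrinks the “virtual workspace” to a three-message inbox and uses tabular Q-learning, but the core loop is the one he described: act, get feedback, repeat thousands of times until a policy emerges.

```python
# Toy reinforcement-learning environment: an agent learns by trial and
# error which action to take on each kind of email. All names and
# rewards here are invented for illustration.
import random

ACTIONS = ["open", "archive", "reply", "delete"]
# the correct action for each message type (hidden from the agent)
TARGET = {"spam": "delete", "newsletter": "archive", "client": "reply"}

def episode(q_table, epsilon=0.1, alpha=0.5):
    """One attempt in the simulated inbox; returns the reward earned."""
    state = random.choice(list(TARGET))        # a message arrives
    if random.random() < epsilon:              # explore...
        action = random.choice(ACTIONS)
    else:                                      # ...or exploit what we know
        action = max(ACTIONS, key=lambda a: q_table[(state, a)])
    reward = 1.0 if action == TARGET[state] else -0.1
    # tabular Q-learning update (one-step episodes, so no bootstrapping)
    q_table[(state, action)] += alpha * (reward - q_table[(state, action)])
    return reward

q = {(s, a): 0.0 for s in TARGET for a in ACTIONS}
for attempt in range(5_000):                   # "thousands of attempts"
    episode(q)

for s in TARGET:                               # print the learned policy
    best = max(ACTIONS, key=lambda a: q[(s, a)])
    print(f"{s:>10} -> {best}")
```

Real systems replace the toy inbox with full browser, email and document environments and the Q-table with a large language model, but the trial-and-error structure is the same.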

The environmental impact remains a pressing concern. Whilst algorithmic efficiency improves roughly fourfold each year, usage is growing even faster; if demand grows, say, sixfold while efficiency improves fourfold, net energy consumption still rises by half. However, there’s cautious optimism: tech companies are investing heavily in green energy and in potential breakthrough technologies like nuclear fusion.

Maintaining perspective 

Perhaps the panel’s most important message was about perspective. Booth pointed out that even if we are in an AI bubble, bubbles are not necessarily all bad, because they redirect capital and talent to where they are needed. The infrastructure being built and the problems being solved will have lasting value, even if the current valuations don’t hold.

John continued this theme as she summed up the panel: “AI isn’t just a boom or a bubble. It’s a defining force that will continue to evolve in ways we’re only beginning to understand.” The question isn’t whether it will transform our world, but how we’ll navigate that transformation whilst managing the very real risks it presents.