We need intelligent regulation to grow AI responsibly

By Joakim Reiter, Chief External Affairs Officer, and Scott Petty, Chief Technology Officer, Vodafone

When a first child is born, the new parents receive lots of bewildering advice. The UK has invited the leaders in artificial intelligence to Bletchley Park for the AI Safety Summit, a landmark at the birth of a new industry in which the UK can lead. But will the conference receive the best parenting advice it can get?

Prime Minister Rishi Sunak is right to grasp the moment. He wants the UK to become a global hub for AI, reflecting how generative artificial intelligence has captured the public imagination. ChatGPT set the record for the fastest-growing consumer service in the history of the web, according to analysts at UBS. By August, it was attracting up to 1.5bn visits.

Underlying this marvel are large language models – the “foundation models” that make it possible – able to produce art, poetry and code. Novelty aside, these models also have some radically transformative uses. As the Secretary of State for Science, Innovation and Technology Michelle Donelan rightly said, we could see AI “unlocking gains in productivity and efficiency that could never have been imagined before”.

With the persistent productivity gap in many European countries – all the more concerning given the ageing populations of our societies – it is no surprise that governments and businesses feel a real sense of urgency about exploiting AI’s potential. The importance of AI also extends well beyond the economic realm – to medical research and health, environmental protection and energy efficiency, and much, much more.

For many leading technology and telecommunications companies, however, AI is an evolution – not a revolution – in the way we work. Vodafone has deployed artificial intelligence for years; it is now used in 120 different functions across the organisation. Machine learning is helping us plan and run our networks better, reaping sustainability benefits for the nation and efficiency wins for our customers.

AI must be trustworthy

With such benefits at stake, governments should create an environment where experimentation is encouraged. Of course, experimentation also comes with risks.

For the promises of AI to be kept, the systems and their operators must be trustworthy and reflect what users want. The onus is on everyone to act responsibly, exercise adequate care and be considerate in the development and application of AI. At Vodafone, everything we do with this technology is guided by our framework for responsible AI. It grew out of our own experience of deploying machine learning tools, and was among the first corporate AI frameworks of its kind when introduced in 2019. A team involving all relevant functions at senior managerial level keeps it up to date with new developments.

As organisations across the technology community actively look at the opportunities and risks of AI, in discussion with government and academia, telcos like Vodafone can offer wise counsel.

As mobile operators, we have a history of bringing disruptive technology – and every tech wave in the modern era has been disruptive, with winners and losers – to the masses. Our industry created innovative business models that allowed the least advantaged to benefit from the revolution in mobile technology. Retailers like Vodafone also became wholesalers, hosting virtual operators that compete with us – a delicate platform balancing act that Silicon Valley struggles with today.

Our industry also became adept at managing competing demands and engineering around them. Networks once designed only for voice calls now carry great floods of data, at gigabit speeds and using technology engineers once thought impossible, bringing benefits to businesses and millions of people.

We learned how to avoid mistakes, to minimise risks and to partner with customers, governments and other stakeholders on how to mitigate potentially negative impacts of novel technologies. A similar approach would be advisable for AI today.

Politicians attending the AI Safety Summit should address some fundamental questions. Are they getting an honest picture of the capabilities, limitations and, indeed, risks of generative AI? Are they engaging with the full spectrum of stakeholders who can inform them? There are signs that, so far, they have not been – and that has implications for trust in the technology.

Focus on immediate risks, not apocalyptic threats

The Department for Science, Innovation and Technology (DSIT) says talks at the summit “will explore and build consensus on rapid, international action to advance safety at the frontier of AI technology.”

In doing so, we believe that attendees must avoid the temptation to follow recent trends, where headlines and political statements have focused on distant, apocalyptic threats.

The real dangers of generative AI are unlikely to resemble anything from a sci-fi film. For example, since the dial-up days of the internet, we have been spammed by fraudsters. We could often tell something was wrong with phishing emails because the English wasn’t quite right, or the grammar gave the game away. But generative AI allows fraudsters to perfect their craft. Similarly, AI-generated ‘deepfake’ videos and images are already making it harder for us to distinguish truth from disinformation.

We should learn from the collective mistakes made with social media platforms, where we are only now – after the damage to the fabric of our societies has already been done – trying to address the risks posed by disinformation, hate speech, cyberbullying, child pornography and other illegal content.

The summit discussions should focus on such real and immediate risks of AI, and the routes to finding effective mitigations.

So, what might smart regulation look like?

The idea that we can regulate all the possible use cases of AI is clearly a non-starter. Innovation by permission is the death of innovation. Governments can and should nurture AI experimentation, while working with industry to ensure this is done responsibly.

We need to strike the right balance between, on the one hand, capturing AI’s potential to greatly increase productivity for businesses and improve everyday life for citizens, and on the other, tackling the risk that it is exploited by wrongdoers. Lawmakers must therefore take a clear-eyed, proportionate view when putting the necessary guardrails in place. They must embrace a broad, multi-stakeholder approach in assessing the challenges of AI. And they must find a flexible, risk-based approach – elements of the EU’s AI Act, or the UK Government’s Online Safety Act, offer good examples – while ensuring any regulation is agile enough to cope with an unpredictable future.

Those gathering at Bletchley Park understand that AI is still in its infancy. The ideal outcome of this meeting would be consensus on how best to collaborate on the future regulation, development and adoption of AI in ways that truly benefit society – not in ways that threaten it.

