Written by

Bernard Marr

Bernard Marr is a world-renowned futurist, influencer and thought leader in the fields of business and technology, with a passion for using technology for the good of humanity. He is a best-selling author of over 20 books, writes a regular column for Forbes and advises and coaches many of the world’s best-known organisations. He has a combined following of 4 million people across his social media channels and newsletters and was ranked by LinkedIn as one of the top 5 business influencers in the world.

Bernard’s latest books are ‘Future Skills’, ‘The Future Internet’, ‘Business Trends in Practice’ and ‘Generative AI in Practice’.



World Leaders Weigh Tech’s Use Of ‘Good Or Evil’ At AI Summit

3 January 2024

Was the first international summit on AI safety an important stepping stone toward ensuring safe and productive AI for all? A political opportunity for U.K. Prime Minister Rishi Sunak to position his government as a global leader in the field? Or a show put on to convince us all that international tech giants can be trusted to act responsibly as they roll out increasingly powerful AI tools?

Well, I think it’s fair to say that it was a bit of all three.

High on the agenda at the two-day event at Bletchley Park in Milton Keynes were discussions around the potential for AI to be put to use for malicious purposes. Threats ranged from AI-enhanced cyber attacks that can learn to defeat cyber defense technologies to autonomous weapons that could be programmed to hunt and kill.

"AI is the most powerful technology ever created and like any powerful technology, it can be used for good or evil,” Stuart Russell, professor of AI at the University of California, Berkley, told the audience, which included delegations from over 30 countries and political leaders including U.S. Vice President Kamala Harris and European Commission President Ursula von der Leyen. “It is essential we develop AI safety measures to prevent it from being misused.

There were some concrete results, too. Those behind the event will say that the Bletchley Declaration represents an important step forward. The first formal international agreement on developing a framework around safe AI was signed by 28 countries, including the U.K., EU, U.S., India and China.

Of course, it’s one thing to get 28 nations to sign a document and another to get them to play nicely together while competing for their share of the $15 trillion that could be up for grabs across the global economy.

The overarching focus of the event, attended by leaders from tech giants including Alphabet, Meta and OpenAI, was on safety concerns around frontier AI.

This is a term that's only been loosely defined and is often applied to the most powerful generative AI models, as well as AI that’s capable of doing many jobs (generalized or strong AI) rather than simply carrying out one task (narrow AI).

Rishi And Elon

One of the most anticipated events, which wasn’t part of the official agenda, was when Sunak and Elon Musk came together for a live-streamed interview following the conference’s conclusion.

It’s fair to say that Sunak seemed slightly wrong-footed by some of Musk’s comments, such as his prediction that AI would mean “there will come a point when no job is needed.” This is not exactly in line with the belief of Sunak’s grassroots supporters in the value of honest, hard work.

Musk talked about the fears in Silicon Valley that government intervention of the type being pushed for by the summit would “crush innovation and slow them down,” admitting that “it will be annoying.”

He also said that he believes governments aren't used to moving at the speed that will be necessary to keep up with the lightning-fast pace of AI advancement.

But describing the role of government, he said, “We’ve learned over the years that having a referee is a good thing,” and that the government's role could be as “a referee to make sure we have sportsman-like conduct, and the public safety is addressed.”

On balance, he said he believed AI will be a force for good, “but the probability of it going bad is not zero percent.”

Addressing the international aspects of the summit and the assertion of the need for international collaboration, Sunak asked Musk whether he believed it was right that China had been asked to attend, adding that there were questions around “Can we trust them?”

Musk replied that China's involvement is critical to the process of mitigating threats globally, adding that he believes they are “willing to participate” in ensuring the safety of AI.

“Having them here is essential,” he said. “If they are not participants, it’s pointless.”

The AI Safety Challenge

During discussions around security, Microsoft principal AI researcher Kate Crawford drew attention to the complexity and opaqueness of many AI systems, which makes it difficult to understand how (and when) they may make mistakes.

And the global nature of the challenge, with the need to foster international cooperation as well as healthy competition, was addressed by others, including Fei-Fei Li, professor of computer science at Stanford University. “AI safety is a global challenge, and it requires a global response,” Li said. “We need to work together to develop shared safety standards and to ensure that AI is developed and used in a way that benefits all of humanity.”

The representative of the Chinese government—often considered by commentators in the West to be a wild card when it comes to global AI governance—echoed these sentiments. “We call for global collaborations to share knowledge and make AI technologies available to the public,” Vice Minister Wu Zhaohui told the conference.

The Bletchley Declaration

The declaration itself was hailed as the single biggest success of the conference. Although there was some criticism that it represented an idea more than a concrete plan, there were signs that individual nations are ready to follow through.

As the host nation, the U.K. was the first to put its money where its mouth is, announcing the formation of an official AI Safety Institute that will test the safety of emerging forms of AI.

The declaration itself, though fairly wordy for something agreed to by 28 countries less than a day after the event started, mainly reiterates points that those advocating for AI safety have been raising for some time. These include the need for AI to be “trustworthy, human-centric and responsible.”

It affirms that signatory states and nations will construct an agenda for managing AI risk based on identifying and developing an “evidence-based” understanding of the risk, as well as supporting collaborative scientific research.

You can read the whole thing here, but I thought, why not let AI itself tell us what it’s about? So here’s ChatGPT’s one-sentence summary: “The Bletchley Declaration stresses safe, responsible AI development, urges international cooperation to mitigate risks, and promotes global benefits.”

To be honest, having read the whole thing through myself, there’s not really that much more to it than that.

Criticism Of The Summit

Not everyone was happy with the summit. Some media coverage criticized the relatively small amount of time dedicated to considering the impact of AI’s energy-intensive compute and data centers on the environment. It’s true that this side of the discussion seemed to be downplayed by comparison to the more far-fetched—some might say alarmist—worries about rogue AIs harming humanity.

Others said there was too heavy a focus on the way AI would be used by tech giants, ignoring the impact it will have on everyday jobs and smaller businesses.

Writing for The Guardian, Chris Stokel-Walker, author of the forthcoming How AI Ate The World, commented that the event seemed as much about Sunak’s eagerness to pal up with the tech industry as it was about positive action. “I'm not holding my breath for positive results and a new AI accord that meets the challenges we face,” he wrote.

And women’s charity Refuge said that safety issues involving women and girls seemed to have been ignored at the summit, with no discussion of the dangers posed by deepfake technology that can be used to create fake pornographic images of anyone without their consent. The charity drew attention to an incident in which images of 20 underage girls at one Spanish school were distributed on social media.

Was It Worth It?

For a first event of its type, gathering over 100 political and technology leaders around the table was certainly a good start. While it’s inevitable that the event prompted a certain amount of political posturing and grandstanding, it was also a necessary first step toward taking more decisive action.

Over the coming weeks and months, we’ll hopefully see more governments and corporations laying out plans to transform some of the aspirations into action. It’s also certain that the conference will be the first of many; another has already been announced, to be held in France in the near future.

Hopefully, organizers will learn lessons from this one and be more inclusive of marginalized voices next time. With a greater emphasis on the impact that AI is going to have on everyone, it's likely that more concrete progress will be achieved.


