Written by

Bernard Marr

Bernard Marr is a world-renowned futurist, influencer and thought leader in the fields of business and technology, with a passion for using technology for the good of humanity. He is a best-selling author of over 20 books, writes a regular column for Forbes and advises and coaches many of the world’s best-known organisations. He has a combined following of 4 million people across his social media channels and newsletters and was ranked by LinkedIn as one of the top 5 business influencers in the world.

Bernard’s latest books are ‘Future Skills’, ‘The Future Internet’, ‘Business Trends in Practice’ and ‘Generative AI in Practice’.


What Every CEO Needs To Know About The New AI Act

2 April 2024

Having recently passed the Artificial Intelligence Act, the European Union is about to bring into force some of the world’s toughest AI regulations.

Potentially dangerous AI applications have been designated “unacceptable” and will be illegal, with narrow exceptions for government, law enforcement and scientific study under specific conditions.

As was true with the EU’s General Data Protection Regulation, this new legislation will add obligations for anyone who does business within the 27 member states, not just the companies based there.

Those responsible for writing it have said that the aim is to protect citizens’ rights and freedoms while also fostering innovation and entrepreneurship. But the 460-odd published pages of the act contain a lot more than that.

If you run a business that operates in Europe or sells to European consumers, there are some important things you need to know. Here’s what stands out to me as the key takeaways for anyone who wants to be prepared for potentially significant changes.


When Does It Come Into Force?

The Artificial Intelligence Act was adopted by the EU Parliament on March 13 and is expected to become law soon, once it receives formal approval from the Council of the European Union. It will take up to 24 months for all of its provisions to be enforced, but enforcement of certain aspects, such as the newly banned practices, could start in as little as six months.

As was the case with GDPR, this delay is designed to give companies time to ensure they’re compliant. After that, they could face significant penalties for any breach. Penalties are tiered, with the most serious reserved for those breaking the “unacceptable uses” ban. At the top end are fines of up to 35 million euros, or 7% of the company’s global annual turnover (whichever is higher).
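The “whichever is higher” rule is simple arithmetic. Here’s a quick sketch; the cap and percentage are passed in as parameters because the exact figures vary by penalty tier, and the turnover figure is purely hypothetical:

```python
def max_penalty(fixed_cap_eur: float, pct_of_turnover: float,
                global_turnover_eur: float) -> float:
    """Return the applicable fine under a tiered model: the greater of
    a fixed cap or a percentage of global annual turnover."""
    return max(fixed_cap_eur, pct_of_turnover * global_turnover_eur)

# A hypothetical company with 2 billion euros of global turnover,
# against an illustrative 35M-euro / 7% top tier:
fine = max_penalty(35_000_000, 0.07, 2_000_000_000)
print(f"{fine:,.0f}")  # 140,000,000 -- the percentage dominates for large firms
```

For large firms the percentage term will almost always dominate, which is the point: the fine scales with the size of the business rather than stopping at a flat cap.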

Potentially even more damaging, though, would be the impact on a business’ reputation if it’s found to be breaking the new law. Trust is everything in the world of AI, and businesses that show they can’t be trusted are likely to be further punished by consumers.

Some Uses Of AI Will Be Banned

The act states that “AI should be a human-centric technology. It should serve as a tool for people, with the ultimate aim of increasing human well-being.”

In order to do that, the EU has prohibited the use of AI for a number of potentially harmful purposes, including:

  • Using AI to influence or change behaviors in ways that are harmful.
  • Biometric classification to infer political and religious beliefs or sexual preference or orientation.
  • Social scoring systems that could lead to discrimination.
  • Remotely identifying people via biometrics in public places (facial recognition systems, for example).

There are some exemptions. There’s a list of situations for which law enforcement organizations can deploy “unacceptable” AIs, including preventing terrorism and locating missing people. There are also exemptions for scientific study.

So it’s good to see that limiting the ways AI could cause harm has been put at the heart of the new laws.

However, there is a fair amount of ambiguity and openness around some of the wording, which could potentially leave things open to interpretation. Could the use of AI to target marketing for products like fast food and high-sugar soft drinks be considered to influence behaviors in harmful ways? And how do we judge whether a social scoring system will lead to discrimination in a world where we’re used to being credit-checked and scored by a multitude of government and private bodies?

This is an area where we will have to wait for more guidance or information on how enforcement will be applied to understand the full consequences.

High-Risk AI

Aside from the uses deemed unacceptable, the act breaks down AI tools into three further categories: high, limited and minimal risk.

High-risk AI includes use cases like self-driving cars and medical applications. Businesses involved in these or similarly risky fields will find themselves facing stricter rules as well as a greater obligation around data quality and protection.

Limited and minimal-risk use cases could include applications of AI purely for entertainment, such as in video games, or in creative processes such as generating text, video or sounds.

There will be fewer requirements here, although there will still be expectations regarding transparency and ethical use of intellectual property.
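In practice, a sensible first compliance step is an inventory that maps each AI system an organization runs to a risk tier. A minimal sketch of such a register follows; the tier assignments here are my own illustrations, not legal determinations, which would require review against the act’s actual annexes:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "strict obligations"
    LIMITED = "transparency obligations"
    MINIMAL = "few or no obligations"

# Illustrative inventory -- real classifications need legal review
# against the act's text, not a lookup table like this.
ai_inventory = {
    "public-space facial recognition": RiskTier.UNACCEPTABLE,
    "medical diagnosis assistant": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "in-game NPC behaviour": RiskTier.MINIMAL,
}

for system, tier in ai_inventory.items():
    print(f"{system}: {tier.name} ({tier.value})")
```

Even a simple register like this forces the useful question: for each system, which tier applies, and what obligations follow from it?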

Transparency

The act makes it clear that AI should be as transparent as possible. Again, there’s some ambiguity here—at least in the eyes of someone like me who isn’t a lawyer. Stipulations are made, for example, around cases where there is a need to “protect trade secrets and confidential business information.” But it’s uncertain right now how this would be interpreted when cases start coming before courts.

The act covers transparency in two ways. First, it decrees that AI-generated images must be clearly marked to limit the damage that can be done by deception, deepfakes and disinformation.
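What “clearly marked” means technically is still being worked out; standards efforts such as C2PA’s content credentials are one direction. As a toy sketch using only the standard library, a disclosure label could be a small manifest bound to the specific image bytes by a hash. The field names here are my own illustration, not any standard:

```python
import hashlib
import json
from datetime import datetime, timezone

def provenance_label(image_bytes: bytes, generator: str) -> str:
    """Build a toy disclosure manifest tied to specific image bytes
    via a SHA-256 digest (illustrative only, not C2PA)."""
    manifest = {
        "ai_generated": True,
        "generator": generator,
        "sha256": hashlib.sha256(image_bytes).hexdigest(),
        "created": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(manifest, indent=2)

fake_image = b"\x89PNG..."  # stand-in for real image data
print(provenance_label(fake_image, "example-model-v1"))
```

The hash binding matters: a label detached from the bytes it describes can simply be stripped or reattached to other content, which is exactly the enforcement problem discussed below.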

It also covers the models themselves in a way that seems particularly aimed at big tech AI providers like Google, Microsoft and OpenAI. Again, this is tiered by risk, with developers of high-risk systems becoming obliged to provide extensive information on what they do, how they work and what data they use. Stipulations are also put in place around human oversight and responsibility.

Requiring AI-generated images to be marked as such seems like a good idea in theory, but it might be difficult to enforce, as criminals and spreaders of deception are unlikely to comply. On the other hand, it could help establish a framework of trust, which will be critical to enabling effective use of AI.

As far as big tech goes, I expect this will likely come down to a question of how much they are willing to divulge. If regulators accept the likely objections that documenting algorithms, weightings and data sources is confidential business information, then these provisions could turn out to be fairly toothless.

It’s important to note, though, that even smaller businesses building bespoke systems for niche industries and markets could, in theory, be affected by this. Unlike the tech giants, they may not have the legal firepower to argue their way in court, putting them at a disadvantage when it comes to innovating. Care should be taken to ensure that this doesn’t become an unintended consequence of the act.

What Does This Mean For The Future Of AI Regulation?

First, it shows that politicians are starting to make moves when it comes to tackling the huge regulatory challenges thrown up by AI. While I’m generally positive about the impact I expect AI to have on our lives, we can’t ignore that it also has huge potential to cause harm, deliberately or accidentally. So any application of political will toward addressing this is a good thing.

But writing and publishing laws is the relatively easy part. It’s putting in place the regulatory, enforcement and cultural frameworks to support the change that takes real effort.

The EU AI Act is the first of its kind, but it’s widely expected to be followed by further regulation across the globe, including in the United States and China.

This means that it’s essential for business leaders, wherever they are in the world, to take steps to ensure they’re prepared for the changes that are coming.

Two key takeaways from the EU AI Act are that every organization will have to understand where their own tools and applications sit on the risk scale and take steps to ensure that their AI operations are as transparent as possible.

On top of that, there’s a real need to stay informed on the ever-changing regulatory landscape of AI. Law moves relatively slowly, so there’s no excuse for being taken by surprise.

Above all, though, I believe the key message is the importance of building a positive culture around ethical AI. Ensuring that your data is clean and unbiased, your algorithms are explainable and any potential for causing harm is clearly identified and mitigated is the best way to make sure you’re prepared for whatever legislation might appear in the future.


