Written by

Bernard Marr

Bernard Marr is a world-renowned futurist, influencer and thought leader in the fields of business and technology, with a passion for using technology for the good of humanity. He is a best-selling author of 20 books, writes a regular column for Forbes and advises and coaches many of the world’s best-known organisations. He has a combined following of 4 million people across his social media channels and newsletters and was ranked by LinkedIn as one of the top 5 business influencers in the world.

Bernard’s latest books are ‘Future Skills’, ‘The Future Internet’ and ‘Business Trends in Practice’.



Why Companies Are Vastly Underprepared For The Risks Posed By AI

30 June 2023

In the last year, artificial intelligence has arrived with a bang. Due to the emergence of generative tools like ChatGPT, businesses across every industry are realizing its immense potential and starting to put it to use.


We know that there are challenges – a threat to human jobs, the potential implications for cyber security and data theft, or perhaps even an existential threat to humanity as a whole. But we certainly don’t yet have a full understanding of all of the implications. In fact, a World Economic Forum report recently stated that organizations “may currently underappreciate AI-related risks,” with just four percent of leaders considering the risk level to be “significant.”

In May, Samsung became one of the latest companies to ban the use of ChatGPT after it discovered staff had been feeding it data from sensitive documents. ChatGPT’s operator, the AI research organization OpenAI, openly states that there is no guarantee of privacy when data is submitted in this way, as it involves uploading the data to the cloud, where it can be accessed by OpenAI’s employees and potentially others.

This is just one of what are likely to be many examples we will see in the coming months of businesses shutting the stable door after the horse has bolted. The speed with which this technology has arrived on the scene, combined with the huge excitement around its transformative potential and the well-documented power of the fear-of-missing-out (FOMO) effect, has left many organizations unprepared for what is to come.

What Are The Risks?

The first step towards managing the risks posed by generative AI is understanding what they are. For businesses, they can largely be segmented into four categories:

Accuracy

A common problem with generative AI at this stage is that we can’t always rely on its results to be accurate. Anyone who has used ChatGPT or similar tools for research or to answer complex questions will know that it can sometimes give incorrect information. This isn’t helped by the fact that AI is often opaque about its sources, making it difficult to check facts. Making mistakes or taking action based on inaccurate information could easily lead to operational or reputational damage to businesses.

Security threats

This can come in the form of both internal and external threats. Internally, unaware or improperly trained users could expose sensitive corporate information, or protected customer information, by feeding it into cloud-based generative platforms such as ChatGPT. Externally, generative AI enables cybercriminals to engage in new and sophisticated forms of social engineering and phishing attacks, including using generative AI to create fake voice messages from business leaders that ask employees to share sensitive information.
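One way organizations reduce the internal-leakage risk described above is to screen prompts before they leave the company. The sketch below is a hypothetical, minimal illustration only – the pattern names and placeholder format are my own, and a real deployment would rely on a proper data-loss-prevention service rather than a couple of regular expressions:

```python
import re

# Hypothetical pre-filter: redact obvious sensitive patterns (email
# addresses, card-like digit runs) before a prompt is sent to an
# external generative-AI service. A sketch, not a complete DLP tool.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(prompt: str) -> str:
    """Replace each match of a sensitive pattern with a placeholder tag."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

# The filtered prompt, not the raw one, is what would go to the API.
print(redact("Contact jane.doe@example.com re card 4111 1111 1111 1111"))
```

In practice, a wrapper like this would sit between employees and the external API, so that raw documents never reach the cloud service in the first place.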

Bias

AI systems are only as good as the data they are trained on, and there is a great deal of concern about the implications this has for creating biased outcomes. If data is collected in a biased way (for example, over- or under-representing particular population segments), this can lead to skewed results that affect decision-making. An example is a tool designed to automatically scan the resumes of job applicants and filter out those who are unsuitable. If the tool doesn’t have enough data on applicants from a particular segment, it may be unable to assess applications from that segment accurately. Bias can also lead to unfavorable outcomes and reputational damage when AI is used to handle customer inquiries and after-sales support.

Culture and trust

The introduction of new tools, technologies, and processes can often cause anxiety among workers. With AI, and all the discussion about it replacing humans, this is understandably more intense than usual. Employees may fear that AI systems have been brought into their jobs to make them redundant, which can breed apprehension, mistrust, and disgruntlement. It could cause them to feel that their own human skills are less valuable, creating toxicity in the workplace and increasing employee turnover. There may also be concerns that certain AI systems, such as those used for workplace monitoring, have been brought in to surveil workers or track their activity in an intrusive way.

How Prepared Are Organizations?

A survey carried out by the law firm Baker McKenzie concluded that many C-level leaders are over-confident in their assessments of organizational preparedness in relation to AI. In particular, it exposed concerns about the potential implications of biased data when it is used to make HR decisions.

It also proposed that it would be sensible for companies to think about appointing a Chief AI Officer (CAIO) with overall responsibility for assessing the impact and opportunities on the horizon. So far, companies have been slow to do this, with just 41% reporting that they have AI expertise at the board level. In my experience, it’s uncommon to find companies with specific policies in place around the use of generative AI tools. They often lack a framework to ensure that information generated by AI is accurate and trustworthy, and that AI decision-making is not skewed by bias or by a lack of transparency around AI systems. Another significant gap in AI preparedness at many organizations is that the impact of this disruption on culture, job satisfaction, and trust is often underestimated.

Improving Corporate Preparedness

There's no quick fix for a societal shift as seismic and disruptive as AI, but any strategy should include developing a framework aimed at identifying and addressing the threats covered here. It should also cover keeping an eye on the horizon for new threats that will emerge as the technology matures.

Certainly, a good start is to ensure AI expertise is present at the board level, for example, through the appointment of a CAIO or similar. As well as mitigating threats, this is a person who can ensure opportunities are identified and exploited. Their job should then include ensuring that awareness permeates throughout the organization at all levels. Every employee should be aware of the risks regarding accuracy, bias, and security. On top of that, they should also have an understanding of how AI is likely to impact their own role and how it can augment their skills to make them more efficient and effective. Companies should make efforts to ensure there is an open, ongoing dialogue, including reassurance over its impact on human jobs and education on the new opportunities that are opening up for AI-skilled humans.

If AI is used for information-gathering or decision-making, policies should be in place to assess the accuracy and identify areas of operation that could be impacted by AI bias. Particularly for those using AI at scale, this could mean investing in rigorous testing and quality assurance systems.

Identifying and mitigating AI cyber threats will also increasingly become a part of organizational cyber-security strategies. This can be as simple as ensuring employees are aware of the threats of AI-enhanced phishing and social engineering attacks, right up to deploying AI-based cyber defense systems to protect against AI-augmented hacking attempts.

Last but by no means least, companies should make efforts to engage with regulators and government bodies in discussions around AI regulation and legislation. As the technology matures, industry bodies and organizations such as trade unions will be involved in drafting and implementing codes of practice, regulations, and standards. It’s essential that the organizations that are at the forefront of using this technology provide their input and expertise.

By failing to understand and react to these threats, any individual or organization runs the risk of falling foul of one of the greatest of all threats posed by AI – failing to exploit its opportunities and, by doing so, being left behind by more forward-thinking competitors.


