Why Companies Are Vastly Underprepared For The Risks Posed By AI
30 June 2023
In the last year, artificial intelligence has arrived with a bang. Due to the emergence of generative tools like ChatGPT, businesses across every industry are realizing its immense potential and starting to put it to use.
We know that there are challenges – a threat to human jobs, the potential implications for cyber security and data theft, or perhaps even an existential threat to humanity as a whole. But we certainly don’t yet have a full understanding of all of the implications. In fact, a World Economic Forum report recently stated that organizations “may currently underappreciate AI-related risks,” with just four percent of leaders considering the risk level to be “significant.”
In May, Samsung became one of the latest companies to ban the use of ChatGPT after it discovered staff had been feeding it data from sensitive documents. ChatGPT’s operator, the AI research organization OpenAI, openly states that there is no guarantee of privacy when this happens, as it involves uploading data to the cloud, where it may be accessed by OpenAI employees and potentially others.
This is just one of what are likely to be many examples we will see in the coming months of businesses shutting the stable door after the horse has bolted. The speed with which this technology has arrived on the scene, combined with the huge excitement around its transformative potential and the well-documented power of the fear-of-missing-out (FOMO) effect, has left many organizations unprepared for what is to come.
What Are The Risks?
The first step towards managing the risks posed by generative AI is understanding what they are. For businesses, they can largely be segmented into four categories:
Accuracy
A common problem with generative AI at this stage is that we can’t always rely on its results being accurate. Anyone who has used ChatGPT or similar tools for research or to answer complex questions will know that they can confidently present incorrect information (a phenomenon often referred to as hallucination). This isn’t helped by the fact that AI is often opaque about its sources, making it difficult to check facts. Making mistakes, or taking action based on inaccurate information, could easily lead to operational or reputational damage for businesses.
Security threats
These threats can be internal as well as external. Internally, unaware or improperly trained users could expose sensitive corporate information, or protected customer data, by feeding it into cloud-based generative platforms such as ChatGPT. Externally, generative AI enables cyber-criminals to mount new and sophisticated forms of social engineering and phishing attacks. This has already included the use of generative AI to create fake voice messages from business leaders, asking employees to share or expose sensitive information.
Bias
AI systems are only as good as the data they are trained on, and there is a great deal of concern about the implications this has for biased outcomes. If data is collected in a biased way (for example, over- or under-representing particular segments of the population), this can lead to skewed results that distort decision-making. Consider a tool designed to automatically scan the resumes of job applicants and filter out those who are unsuitable. If the tool doesn’t have enough data on applicants from a particular segment, it may be unable to assess applications from that segment accurately. Bias can also lead to unfavorable outcomes and reputational damage when AI is used to respond to customer inquiries and provide after-sales support.
Culture and trust
The introduction of new tools, technologies, and processes can often cause anxiety among workers. With AI and all the discussion about replacing humans, this is understandably more intense than usual. Employees may fear that AI systems have been brought into their jobs to potentially make them redundant. This can lead to apprehension, mistrust, and disgruntlement. It could cause workers to feel that their own human skills are less valuable, creating toxicity in the workplace and increasing employee turnover. There could also be concerns that certain AI systems, such as those used for workplace monitoring, have been brought in to surveil human workers or monitor their activity in an intrusive way.
How Prepared Are Organizations?
A survey carried out by the law firm Baker McKenzie concluded that many C-level leaders are overconfident in their assessments of organizational preparedness in relation to AI. In particular, it exposed concerns about the potential implications of biased data being used to make HR decisions.
It also proposed that companies would be wise to consider appointing a Chief AI Officer (CAIO) with overall responsibility for assessing the impacts and opportunities on the horizon. So far, companies have been slow to do this, with just 41% reporting that they have AI expertise at the board level. In my experience, it’s uncommon to find companies with specific policies in place around the use of generative AI tools. They often lack a framework to ensure that information generated by AI is accurate and trustworthy, that AI decision-making is not affected by bias, and that there is transparency around AI systems. The impact of disruption on culture, job satisfaction, and trust is another significant, and often underestimated, gap in AI preparedness.
Improving Corporate Preparedness
There’s no quick fix for a societal shift as seismic and disruptive as AI, but any strategy should include developing a framework to identify and address the threats covered here. It should also involve watching the horizon for new threats that will emerge as the technology matures.
Certainly, a good start is to ensure AI expertise is present at the board level, for example through the appointment of a CAIO or similar. As well as mitigating threats, this person can ensure that opportunities are identified and exploited. Their job should then include making sure that awareness permeates the organization at all levels. Every employee should be aware of the risks around accuracy, bias, and security. On top of that, they should also understand how AI is likely to impact their own role and how it can augment their skills to make them more efficient and effective. Companies should make efforts to ensure there is an open, ongoing dialogue, including reassurance about AI’s impact on human jobs and education on the new opportunities opening up for AI-skilled workers.
If AI is used for information-gathering or decision-making, policies should be in place to assess the accuracy of its outputs and to identify areas of operation that could be affected by AI bias. Particularly for those using AI at scale, this could mean investing in rigorous testing and quality assurance systems.
Identifying and mitigating AI cyber threats will also increasingly become a part of organizational cyber-security strategies. This can be as simple as ensuring employees are aware of the threats of AI-enhanced phishing and social engineering attacks, right up to deploying AI-based cyber defense systems to protect against AI-augmented hacking attempts.
Last but by no means least, companies should make efforts to engage with regulators and government bodies in discussions around AI regulation and legislation. As the technology matures, industry bodies and organizations such as trade unions will be involved in drafting and implementing codes of practice, regulations, and standards. It’s essential that the organizations that are at the forefront of using this technology provide their input and expertise.
By failing to understand and react to these threats, any individual or organization runs the risk of falling foul of one of the greatest threats posed by AI – failing to exploit its opportunities and, by doing so, being left behind by more forward-thinking competitors.
Bernard Marr is a world-renowned futurist, influencer and thought leader in the fields of business and technology, with a passion for using technology for the good of humanity.
He is a best-selling author of over 20 books, writes a regular column for Forbes and advises and coaches many of the world’s best-known organisations.
He has a combined following of 4 million people across his social media channels and newsletters and was ranked by LinkedIn as one of the top 5 business influencers in the world.
Bernard’s latest book is ‘Generative AI in Practice’.