5 Bad ChatGPT Mistakes You Must Avoid
31 May 2023
Generative AI applications like ChatGPT and Stable Diffusion are incredibly useful tools that can help us with many day-to-day tasks. Many of us have already found that when used effectively, they can make us more efficient, productive, and creative.
However, what’s also becoming increasingly apparent is that there are both right ways and wrong ways to use them. If we aren’t careful, it’s easy to develop bad habits that could quickly turn into problems.
So, here’s a quick list of five pitfalls that can easily be overlooked. Being aware of these dangers should make it fairly simple to avoid them and ensure we’re always using these powerful new tools in a way that’s helpful to us rather than setting us up for embarrassment or failure.
Believing everything it tells you
Unfortunately, you only need to play around with ChatGPT for a short time to realize that, far from being an all-knowing robot overlord, it can be prone to being a bit dim at times. It has a tendency to "hallucinate" – a term borrowed from human psychology that makes its errors sound more relatable, but which really just means it makes things up, gets things wrong, and sometimes does so with an air of confidence that borders on the comical.
Of course, it's constantly being updated, and we can expect it to get better. But as of now, it has a particular propensity to make up non-existent citations or to cite research and papers that bear no relationship to the topic at hand.
The key lesson is to check and double-check anything factual that it tells you. The internet (and the world) is already full of enough misinformation, and we certainly don't need to be adding to it. Particularly if you’re using it to create business content, it’s important to have stringent editing and reviewing processes in place for everything you publish. Of course, this is important for human-created content, too. But putting too much trust in the capabilities of AI can easily lead to mistakes that can make you look silly and could even damage your reputation.
Using it to replace original thinking
It's important to remember that, in some ways, AI – particularly language-based generative AI like ChatGPT – is similar to a search engine. Specifically, it's entirely reliant on the data it can access, which, in this case, is the data it's been trained on. One consequence of this is that it will only regurgitate or reword existing ideas; it won't create anything truly innovative or original like a human can.
If you’re creating content for an audience, then it’s likely they come to you to learn from your unique experiences, to benefit from your expertise in your field, or because there's something about your personality or the way you communicate that appeals to them. You can’t replace this with generic AI-generated common knowledge. Emotions, feelings, random thoughts, and lived experiences feed into our ideas, and AI doesn’t replicate any of this. AI can certainly be a very useful tool for research and for helping us to organize our thoughts and working processes, but it won't generate that "spark" that enables successful businesses (and people) to distinguish themselves and excel at what they do.
Forgetting about privacy
When we’re working with cloud-based AI engines like ChatGPT or DALL-E 2, we don’t have any expectation of privacy. OpenAI – the creator of those specific tools – is upfront about this in its terms of use (you did read them, right?). It’s also worth noting that its privacy policy has been called "flimsy."
All of our interactions, including the data we input and the output it generates, are considered fair game for its systems to ingest, store, and learn from. Microsoft, for example, has admitted that it monitors and reads conversations between Bing and its users. This means we have to be careful about entering personal and sensitive information, and the same caution applies to content such as business strategies, communications with clients, or internal company documents. There’s simply no guarantee they won’t be exposed in some way. An early public version of Microsoft’s ChatGPT-powered Bing was briefly pulled offline when it was found to be occasionally sharing details of private conversations with other users.
Many companies (and at least one country – Italy, albeit temporarily) have banned the use of ChatGPT due to concerns over privacy. If you do use it in a professional capacity, it is important to have safeguards in place, as well as to keep up to date on the legal obligations that come with handling such data. Solutions already exist for running local instances of these models, allowing data to be processed without ever leaving your own infrastructure or jurisdiction. These could soon become essential for businesses in fields such as healthcare or finance, where handling private data is routine.
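As a rough illustration of what such a local setup could look like, here is a minimal sketch that runs an open-source model on your own hardware using the Hugging Face transformers library, so prompts and responses never leave your environment. The specific model (google/flan-t5-small) is purely an assumption for the example – the point is the pattern, not any particular tool.

```python
# A minimal sketch of local inference: prompts and outputs stay on your own
# hardware rather than being sent to a third-party API. The model choice
# (google/flan-t5-small) is an illustrative assumption only.
from transformers import pipeline

# Model weights are downloaded once; all subsequent inference runs locally.
generator = pipeline("text2text-generation", model="google/flan-t5-small")

prompt = "Summarize: Our Q3 client report shows revenue grew 12% year on year."
result = generator(prompt, max_new_tokens=60)

# Sensitive business content is processed entirely within your own environment.
print(result[0]["generated_text"])
```

In practice, a business would pair a setup like this with access controls and logging, but even this simple pattern shows how private data can be kept inside your own walls.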
Becoming over-reliant
Developing an excessive reliance on AI could easily become a problem for a number of reasons. There are numerous situations where these services may become unavailable: users or service providers can be hit by technical issues, tools and applications can be pulled offline for security or administrative reasons (such as applying updates), or they can be targeted by hackers with denial-of-service attacks that leave them unreachable.
Just as critically, over-reliance on AI could prevent us from developing and honing the very skills those tools are filling in for. This might include research, writing and communicating, summarizing, translating content for different audiences, or structuring information. These skills are important for professional growth and development, and neglecting to practice them could leave us at a disadvantage when we need them and AI assistance isn’t available.
Losing the human touch
In a recent episode of South Park, the kids use ChatGPT to automate the “boring” aspects of their lives – such as interacting with their loved ones (as well as cheating on their schoolwork). Obviously, this is played for laughs, but as with all good comedy, it’s also a commentary on life. Generative AI tools make it easy to automate emails, social messaging, content creation, and many other aspects of business and communications. At the same time, this kind of automation can make it difficult to convey nuance and can become an obstacle to empathy and relationship-building.
It’s essential to remember that the idea is to use AI to augment our humanity – by freeing up time spent on mundane and repetitive tasks so that we can concentrate on what makes us human. This means interpersonal relationships, creativity, innovative thinking, and fun. If we start trying to automate those parts of our lives, we will be building a future for ourselves that’s just as damaging as the worst that the AI doom-mongers are predicting.