When Machines Know How You’re Feeling: The Rise Of Affective Computing
2 July 2021
The clinical, emotionless computer or robot is a staple of science fiction, but science fact is starting to look rather different: computers are getting much better at understanding emotions.
As we turn to computers, smart devices and robots to take on more and more tasks that have always been the exclusive domain of humans, this emotion-detecting technology will become increasingly important. Automated customer service “bots” will be better able to tell whether a customer is getting the help they need. Robot caregivers involved in telemedicine may be able to detect pain or depression even if the patient doesn’t explicitly mention it. One insurance company I am working with is even experimenting with call voice analytics that can detect when someone is lying to their claims handlers.
IBM has developed a Watson ‘Tone Analyzer’ that can detect sarcasm and a multitude of other emotions in your writing. It also offers an Emotion Analysis API to help users understand the emotions of the people they’re chatting with.
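To give a flavour of what this looks like in practice, here is a minimal sketch of how a developer might call the Tone Analyzer service using IBM’s ibm-watson Python SDK. The API key, service URL and sample text below are placeholders, and the exact tones returned depend on the service version.

```python
from ibm_watson import ToneAnalyzerV3
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

# Authenticate with your own IBM Cloud credentials (placeholders below).
authenticator = IAMAuthenticator("YOUR_API_KEY")
tone_analyzer = ToneAnalyzerV3(version="2017-09-21", authenticator=authenticator)
tone_analyzer.set_service_url("YOUR_SERVICE_URL")

# Ask the service to analyse the emotional tone of a short piece of writing.
text = "I have been waiting three weeks for a reply and I am beyond frustrated."
result = tone_analyzer.tone({"text": text}, content_type="application/json").get_result()

# Print the tones detected in the document (e.g. anger, sadness, tentative).
for tone in result["document_tone"]["tones"]:
    print(tone["tone_name"], tone["score"])
```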
These sorts of advances are important for computers and robots that aim to interact seamlessly with humans. They may not yet pass the Turing test, but recognizing emotions gets them a step closer.
Affective computing
This particular branch of computer science is known as affective computing, that is, the study and development of systems and devices that can recognize, interpret, process, and simulate human experiences, feelings or emotions.
But it’s also closely related to deep learning, because complex algorithms are required for the computer to perform facial recognition, detect emotional speech, recognize body gestures, and interpret other data points. The computer compares the incoming data (in this case, readings from the human it is interacting with) against the patterns it has learned from its training data to make a judgement about the person’s emotions.
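As a rough illustration of that compare-against-what-it-has-learned step, here is a deliberately simplified sketch using scikit-learn. In a real system the feature vectors would come from face images, voice recordings or body-pose sensors; here they are random placeholder numbers and the emotion labels are stand-ins.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Placeholder data: in a real system each row would be features extracted from
# a face image, a voice clip or a body-pose reading; here it is just random.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))           # 500 samples, 20 features each
emotions = ["happy", "sad", "angry", "surprised"]
y = rng.choice(emotions, size=500)       # placeholder emotion labels

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A small neural network learns to associate feature patterns with emotion labels.
model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
model.fit(X_train, y_train)

# Accuracy on held-out samples (meaningless here because the data is random).
print("Held-out accuracy:", model.score(X_test, y_test))

# A new reading (say, from a camera frame) is compared against what the model
# has learned and assigned the most likely emotion label.
new_reading = rng.normal(size=(1, 20))
print("Predicted emotion:", model.predict(new_reading)[0])
```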
An Ohio State University team programmed a computer to recognize 21 ‘compound’ emotions, including ‘happily surprised’ and ‘sadly disgusted’. And, in tests, the computer was more accurate at recognizing these subtle emotions than the human subjects were.
That’s partly because the computer’s pattern-recognition capabilities are superior to ours, and partly because most people use the same facial muscle movements to express the same emotions.
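The study itself worked from facial muscle movements, but purely as a toy illustration of the idea of compound emotions, here is a sketch that flags a compound category when both of its constituent basic emotions score highly. The scores, threshold and category list are all made up for the example.

```python
# Illustrative scores for basic emotions, as a hypothetical facial-expression
# model might report them (all values made up).
basic_scores = {"happy": 0.81, "surprised": 0.74, "sad": 0.05, "disgusted": 0.03}

# A few of the 21 compound categories, expressed as pairs of basic emotions.
compound_emotions = {
    "happily surprised": ("happy", "surprised"),
    "sadly disgusted": ("sad", "disgusted"),
    "angrily surprised": ("angry", "surprised"),
}

THRESHOLD = 0.6  # arbitrary cut-off for "this basic emotion is clearly present"

for name, (first, second) in compound_emotions.items():
    if basic_scores.get(first, 0) >= THRESHOLD and basic_scores.get(second, 0) >= THRESHOLD:
        print("Detected compound emotion:", name)
```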
Potential applications
Other than avoiding a HAL 9000 scenario, the potential applications of this technology are enormous.
- In e-learning situations, the presentation can be adapted to suit the learner, speeding things up before they get bored and slowing down when they seem confused.
- Digital pets and companions, like the robot companions already being developed in Japan, will become more common and more lifelike.
- Psychological health services could benefit from programs that can recognize a patient’s emotional state.
- Companies could use the technology to conduct market research, analyzing a product tester’s actual emotions rather than simply their statements.
- The same could be done to judge the impact of advertising or political speeches and statements.
- Security companies could use the technology to identify individuals in crowds who seem nervous as potential threats.
- Your computer might even be able to warn you to pause before you send an angry email (a toy sketch of this idea appears after this list), change the music track to fit your mood, or disable your car if you’re in an emotionally volatile state.
- The technology is also being used to help people with autism and other disabilities to interact with the rest of the world.
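Picking up the angry-email example from the list above: a very rough sketch of that idea could be built with NLTK’s VADER sentiment scorer, which rates how negative a piece of text sounds rather than modelling emotions in full. The helper name, the -0.5 threshold and the messages are all arbitrary choices for illustration.

```python
# Requires: pip install nltk, plus a one-off download of the VADER lexicon.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)

def check_before_sending(draft: str) -> None:
    """Warn the writer if a draft email reads as strongly negative."""
    scores = SentimentIntensityAnalyzer().polarity_scores(draft)
    # The 'compound' score runs from -1 (very negative) to +1 (very positive).
    if scores["compound"] < -0.5:
        print("This email reads as angry. Maybe step away for an hour before sending?")
    else:
        print("Tone looks fine. Sending.")

check_before_sending("This is completely unacceptable and I am furious about the delay.")
```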
Clearly this technology could have many potential benefits. But, as with any technological advance, there could also be pitfalls. Woe to the person who seems nervous in an airport when he or she is simply running late. And don’t let your computer catch you making angry or mocking expressions just after a meeting with your boss.
(If you’re not sure how you feel about this, why not check out one of the many emotion recognition apps that will tell you how you’re feeling?)