No matter how excited people like me get about the idea of Big Data, artificial intelligence and an always-connected world, we can’t forget that there are negative connotations to these terms.
Fear manifests for a reason. From speaking to smart people, I know it isn’t just technophobes and those inclined to paranoia who get the shivers. When people hear about how much their bank or supermarket knows about them, or that their illness will soon be diagnosed by a robot, or that cars can now drive themselves, it sets off alarm bells – and with good cause.
The scale of data-gathering carried out by both industry and governments has gone through the roof in the last decade, and those doing it haven’t always been great at communicating exactly what they are up to, and why. At times, they haven’t even bothered to communicate it at all.
It’s for this reason that, despite being a Big Data fanatic, I have to concede that for every person like me – who hears the terms and immediately thinks about the ground-breaking work being done to tackle cancer, explore space, or defeat terrorists – there is at least one person who immediately thinks of hackers, snoopers and of their sensitive personal data being sold to the highest bidder.
This is a critical issue that is too often overlooked by commentators, as well as the $200 billion-plus analytics industry itself. Because Big Data just won’t work without trust – both that machines will operate the way they are supposed to, and that data will be collected and used responsibly.
The first of these worries is likely to be easier to overcome than the second. Experts who have assessed the technology have predicted that autonomous cars will be safer than human-operated ones, and that AI will diagnose illness more accurately than human doctors.
This suggests that once people start to see the positive effect on statistics such as human mortality and accident rates, trust is likely to develop.
But for either of those applications of Big Data to even get off the ground, consent will have to be gained to gather and share a lot of personal data. Machine learning-driven cancer detection relies on using image recognition technology across huge databases of x-ray and ultrasound scans, as well as patient records and doctors’ notes. The technology powering autonomous cars is clearly reliant on our willingness to share our location and destination at all times with the systems plotting our journeys.
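To make the "more data means more reliable" point concrete, here is a deliberately toy sketch of image-based classification, assuming hypothetical, invented feature vectors and labels. Real diagnostic systems use deep neural networks trained on vast labelled scan archives; this nearest-neighbour version only illustrates why the size of that labelled pool matters.

```python
# Illustrative sketch only: a toy nearest-neighbour "scan classifier".
# The feature vectors and labels below are invented for illustration;
# real systems learn from millions of labelled medical images.
import math

# Hypothetical features extracted from labelled training scans
TRAINING_SCANS = [
    ([0.9, 0.8, 0.1], "needs review"),
    ([0.8, 0.9, 0.2], "needs review"),
    ([0.1, 0.2, 0.9], "clear"),
    ([0.2, 0.1, 0.8], "clear"),
]

def euclidean(a, b):
    """Distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify(scan_features):
    """Label a new scan by its nearest labelled neighbour.

    The more labelled scans sit in TRAINING_SCANS, the more reliable
    this lookup becomes -- which is why these systems need consent to
    pool so much patient data in the first place.
    """
    _, label = min(
        TRAINING_SCANS,
        key=lambda item: euclidean(item[0], scan_features),
    )
    return label

print(classify([0.85, 0.75, 0.15]))  # resembles the "needs review" examples
```

With only four training examples the classifier is fragile; every additional labelled scan shrinks the gaps in its coverage, which is the trade-off the article describes.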
The more data they have available, the more reliable and accurate these systems – and others such as those that tackle financial crime and help prevent terrorist attacks – will become, and the more value we will gain from them.
But they will be hampered from the outset if people grow so scared of losing control of their own data that they refuse to share it – and with global-scale data breaches and system failures making the news on a weekly basis, who could blame them?
As far as personal data goes, there are concerns both that it will be used inappropriately, and that through being lost or stolen it will fall into the hands of someone else who will use it inappropriately.
Politicians have a part to play here. They are more than happy to use our data to get themselves elected, so perhaps we should not be surprised that they have not been overly proactive at proposing legislation to govern the way industry uses whatever personal information it can gather on us.
The European Union’s General Data Protection Regulation – due to come into force next year – will hopefully be a step in the right direction – with strict rules about how personal data can be used, and stiff penalties for those who break them.
The simple change of making it necessary for customers to “opt in” before their data can be used should go some way to alleviating fears of misuse. After all, in theory this means we can withdraw our consent, if a business is found to be misusing personal data.
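The opt-in model described above can be sketched as a small piece of logic. This is a minimal illustration, assuming an invented `ConsentRegister` class and method names; it is not how any real compliance system is built, but it captures the two properties the article highlights: use is forbidden by default, and consent can be withdrawn.

```python
# Illustrative sketch only: GDPR-style "opt in" consent, reduced to a toy
# register. The class and method names are invented for illustration.
class ConsentRegister:
    """Tracks explicit opt-in consent per customer and purpose."""

    def __init__(self):
        self._consents = set()  # (customer_id, purpose) pairs

    def opt_in(self, customer_id, purpose):
        self._consents.add((customer_id, purpose))

    def withdraw(self, customer_id, purpose):
        # The key point: consent can be taken back at any time.
        self._consents.discard((customer_id, purpose))

    def may_use(self, customer_id, purpose):
        # Default is "no": data may only be used after an explicit opt-in.
        return (customer_id, purpose) in self._consents

register = ConsentRegister()
print(register.may_use("alice", "marketing"))  # False: no consent given yet
register.opt_in("alice", "marketing")
print(register.may_use("alice", "marketing"))  # True: explicit opt-in recorded
register.withdraw("alice", "marketing")
print(register.may_use("alice", "marketing"))  # False: consent withdrawn
```

The design choice worth noting is that the register stores only positive grants, so absence of a record means no permission – the opposite of the old "opt out" default.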
So legislation will play its part, but a bigger decider will be whether the tech industry as a whole is willing to join in. There are encouraging signs, certainly among the big players, that the trend is towards more openness about what goes on “behind the scenes” with our data, and towards giving us a say in how it is used. The growing number of privacy options provided by the likes of Facebook and Google are evidence of this, as are the increasing efforts to provide transparent terms and conditions and user agreements. Undoubtedly this is due to the scrutiny from the media and consumer groups that the big online service providers have come under since Amazon started tracking buying behavior and Google started to sell slots in our search results to advertisers.
Until all of this comes together in a way which means we feel secure in the technological as well as ethical integrity of the data service providers, it’s likely that Big Data will remain a dirty word to some people. Intrinsically they will link it to government and corporate surveillance of our personal lives, and the potential damage that could be caused by data breaches. That is a shame, as it has the potential to be a wonderful thing, and to change our world for the better in ways that we are only just starting to understand.
Bernard Marr is a bestselling author, keynote speaker, and advisor to companies and governments. He has worked with and advised many of the world's best-known organisations. LinkedIn has recently ranked Bernard as one of the top 10 Business Influencers in the world (in fact, No 5 – just behind Bill Gates and Richard Branson). He writes on the topics of intelligent business performance for various publications including Forbes, HuffPost, and LinkedIn Pulse. His blogs and SlideShare presentations have millions of readers.