Today, Big Data gives us unprecedented insights and opportunities across every industry, from healthcare to finance to manufacturing and beyond. But it also raises concerns and questions that must be addressed. The relentless pace of change in technology and Big Data keeps everyone on their toes, and the reality is that organisations, tech departments, government agencies, consumer protection groups and consumers are all struggling to keep up. For me, there are three Big Data concerns that should keep people up at night: Data Privacy, Data Security and Data Discrimination.
When the Fourth Amendment was ratified in 1791, giving US citizens protection against unreasonable searches (the basis of today's "reasonable expectation of privacy"), there was no way those who wrote it could have imagined the complications of 21st-century technology.
There’s no doubt we benefit from many conveniences and breakthroughs thanks to Big Data-powered apps and services, but at what risk to our privacy? Do we have any control over how much of our personal information is used? We’re now at the point where even a total technology boycott may no longer fully protect us. Unless, of course, you choose to walk everywhere you go, wear a different mask every day (to foil face-recognition technology) and use only cash (that you never deposit in a financial institution). Navigating the modern world without technology is tricky at best, and even then your privacy isn’t fully guaranteed.
It’s true that much of this information is used in benign ways, but the potential for sensitive data to be used maliciously is frightening. And the US government is still trying to determine how best to regulate internet privacy. Congress is currently debating rules adopted last year by the Federal Communications Commission that require internet service providers to tell their customers what information is collected and how it will be used or shared.
While the outcome of pending legislation is anyone’s guess, American lawmakers could follow the lead of the European Union and create an environment that protects people. In 2018, the EU’s General Data Protection Regulation (GDPR) will be fully implemented with the primary goal to “give citizens back control of their personal data.” The regulation applies to any company holding data about an EU citizen, so it covers Google, Facebook and other international companies “processing and holding the personal data of data subjects residing in the European Union, regardless of the company’s location.” And, it turns out, the EU isn’t messing around: penalties for non-compliance run up to 4% of annual global turnover, a big stick to keep companies in line.
Let’s not forget that ethical business practices are just good business. I advise all the companies I work with that transparency and the ethical use of data are vital, not only because it’s the right thing to do, but because stronger regulation is coming. Companies should do what they can, where they can, to be transparent and help consumers understand what data they are collecting and for what purpose. The Big Data ecosystem is becoming increasingly complex with the Internet of Things and connected devices, and companies that are forthright and build trust will become increasingly valuable to their customers.
So, you clicked and agreed to your data being used (and ultimately analysed) because you felt the benefits of the product or service from that organisation outweighed the loss to your privacy, but can you trust that organisation to keep your data safe? The answer to that gets more difficult every single day.
As Big Data grows in size and the web of connected devices explodes, more of our data is exposed to potential security breaches. Many organisations struggled with data security even before the complexities Big Data added, and many of them are now drowning trying to keep up.
We’re still playing catch-up on the Big Data skills gap: there are simply too few data security professionals with the expertise needed to feel confident that all businesses have a handle on their data security.
The biggest security solution might ultimately come from Big Data-style analysis itself: threats can be detected and possibly prevented by, you guessed it, analysing the data!
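To give one concrete flavour of what “analysing the data” can mean here, consider a minimal anomaly-detection sketch. The data, threshold and scenario below are entirely hypothetical; this is an illustration of the general idea, not a production security tool:

```python
from statistics import mean, stdev

def flag_anomalies(daily_logins, threshold=2.5):
    """Return the indices of days whose login count deviates from the
    historical mean by more than `threshold` standard deviations."""
    mu = mean(daily_logins)
    sigma = stdev(daily_logins)
    return [i for i, n in enumerate(daily_logins)
            if sigma > 0 and abs(n - mu) / sigma > threshold]

# Hypothetical login counts for one account: the spike on the last
# day stands out against an otherwise steady pattern.
history = [12, 14, 11, 13, 12, 15, 13, 12, 14, 95]
print(flag_anomalies(history))  # the index of the suspicious day
```

Real systems use far richer signals and models, but the principle is the same: the more behavioural data you hold, the easier it is to spot activity that doesn’t fit the pattern.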
When everything is known, will it become acceptable to discriminate against people based on data we have on their lives? We already use credit scoring to decide who can borrow money, and insurance is heavily data-driven. While Big Data helps businesses become better marketers and service providers, it can also allow them to discriminate.
Consumers currently accept, by and large, that being analysed and assessed in greater detail buys them a better experience. But what if all this insight makes it more difficult for some people to get the information or resources they need? That is exactly the question posed by a Federal Trade Commission report, “Big Data: A Tool for Inclusion or Exclusion?”
There are consumer protection laws in place, such as the Fair Credit Reporting Act and the Federal Trade Commission Act, that are applicable to Big Data analysis. Companies need to keep these acts and equal opportunity laws in mind to be sure they are compliant.
In addition, companies should check their data to ensure:
- It is a representative sample of consumers
- The algorithms prioritise fairness
- They are aware of the biases in the data
- They are checking their Big Data outcomes against traditionally applied statistics practices
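As one small, concrete illustration of the first check on that list, a team could compare the demographic mix of its dataset against reference population shares and flag under-represented groups. The group names, figures and tolerance below are made up for the example:

```python
def representation_gaps(sample_counts, population_shares, tolerance=0.05):
    """Return the groups whose share of the sample falls short of their
    population share by more than `tolerance` (absolute difference)."""
    total = sum(sample_counts.values())
    gaps = {}
    for group, pop_share in population_shares.items():
        sample_share = sample_counts.get(group, 0) / total
        if pop_share - sample_share > tolerance:
            gaps[group] = round(pop_share - sample_share, 3)
    return gaps

# Hypothetical sample vs. census-style population shares: older
# consumers are badly under-represented in this dataset.
sample = {"18-34": 700, "35-54": 250, "55+": 50}
population = {"18-34": 0.35, "35-54": 0.35, "55+": 0.30}
print(representation_gaps(sample, population))
```

A simple check like this won’t catch every bias, but it makes the first item on the list actionable rather than aspirational: if a group barely appears in your data, any model trained on it will serve that group poorly.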
As the evolution of Big Data continues, these three Big Data concerns—Data Privacy, Data Security and Data Discrimination—will be priority items to reconcile for federal and state governments, business owners, Big Data specialists and consumers.
There is no easy answer or quick fix for any one of them. The deluge of information being collected and the rapid pace of technological change make solving these issues even more challenging, so I anticipate they will remain the three big concerns of Big Data for quite some time.