Consumer data is increasingly the fuel that enterprises run on, but if it isn't contained properly this critical resource, like any accelerant, can evaporate or, worse, explode.

Given this fact of life in the surveillance economy, you might expect the specter of data insecurity to inspire extreme caution in the handling of sensitive consumer data. Recent news suggests otherwise.

Facebook Monetizes Data Used for Security and/or Grabbed from Other Users

A researcher decided to test Facebook's use of contact data to serve ads, and the social media giant predictably failed.

Kashmir Hill, a reporter at Gizmodo, placed an ad that targeted Northeastern Professor Alan Mislove. "I directed the ad to display to a Facebook account connected to the landline number for Alan Mislove's office," she reported. "A number Mislove has never provided to Facebook."

Hill continued, "He saw the ad within hours."

The ad was delivered to Mislove because Facebook is very good at finding and using data, whether or not a consumer wants it to. Advertisers on Facebook have long "found" customers by cross-referencing phone numbers and other contact information on file with information Facebook users have provided in the course of using the social media site. This isn't that. The information Hill used to target Mislove was not intended for advertising purposes.
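
For readers who want the mechanics, here is a minimal sketch of how contact-list ad matching generally works on platforms like Facebook: an advertiser uploads hashed phone numbers or email addresses, and the platform matches those hashes against contact data it already holds for its accounts. Every name and value below is hypothetical and purely illustrative; this is not Facebook's code or its advertising API.

```python
import hashlib

def normalize_phone(raw: str) -> str:
    """Strip punctuation so '617-555-0123' and '617.555.0123' compare as the same number.
    (Real matching pipelines also normalize country codes; that's omitted here.)"""
    return "".join(ch for ch in raw if ch.isdigit())

def hash_contact(value: str) -> str:
    """Matching is typically done on hashed identifiers rather than raw ones."""
    return hashlib.sha256(value.encode("utf-8")).hexdigest()

# Hypothetical platform-side index: hashed phone number -> account ID.
# Numbers given for two-factor authentication, or harvested from other users'
# address books, can end up in an index like this one.
platform_index = {
    hash_contact(normalize_phone("617-555-0123")): "user_misl0ve",
}

# Hypothetical advertiser upload: a targeting list containing one office landline.
advertiser_list = ["617.555.0123"]

matched = [
    platform_index[h]
    for h in (hash_contact(normalize_phone(n)) for n in advertiser_list)
    if h in platform_index
]
print(matched)  # ['user_misl0ve']: the ad can now be shown to that account
```

The point of the sketch is simple: once a number lands in the platform's index, for whatever reason it got there, it becomes targetable.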

Mislove had been working with two other researchers at Northeastern University and a colleague at Princeton University to determine how Facebook was using data to serve ads, and, without going into too much detail, there were two big surprises.

First, Facebook was using phone numbers that users had provided for a security measure--two-factor authentication--to identify and target ads to those very same users. In other words, a security measure gave rise to a privacy issue. No bueno.

Second, Facebook was mining the address books of "friends" on the social network and linking phone numbers and addresses found there to existing users who had never shared that information with Facebook themselves--and, yes, using it to target ads. Super no bueno.
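
The "shadow contact" finding works in the other direction: a number can become attached to you because other people uploaded it. The toy example below, again entirely hypothetical, shows how address books synced by friends could be merged into a lookup that ties a phone number to a user who never provided it.

```python
from collections import defaultdict

# Hypothetical address-book uploads: each friend who syncs contacts contributes
# (label, phone) pairs. None of these numbers came from the person they describe.
uploaded_address_books = {
    "friend_a": [("Alan M.", "617-555-0199")],
    "friend_b": [("Prof. Mislove", "617-555-0199"), ("Dentist", "617-555-0111")],
    "friend_c": [("Alan (office)", "617-555-0199")],
}

# Shadow index: phone number -> the set of uploaders whose contacts list it.
shadow_index = defaultdict(set)
for uploader, entries in uploaded_address_books.items():
    for _label, phone in entries:
        shadow_index[phone].add(uploader)

# A number that shows up in several friends' books can be linked to an existing
# profile with some confidence, even though that user never shared it.
for phone, sources in shadow_index.items():
    if len(sources) >= 2:
        print(f"{phone}: linkable via {len(sources)} friends' uploads")
```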

Whether you call this data "shadow contact information" or stolen information doesn't matter much. What matters is that this sort of data grab creates potential problems for consumers because Facebook's data mining is not sufficiently policed, and in a perfect world the practice would not be legal (NB: in Europe, it already isn't).

Google+ Announces a Minus

Google shut down consumer use of the long-ailing social platform Google+ after it was revealed that a security bug, discovered and patched more than six months earlier, had never been disclosed by the company. The underlying flaw had sat in the platform's code, undetected, for roughly three years.

According to the Wall Street Journal, Google may have opted not to disclose the bug at least in part to avoid regulatory scrutiny. The platform, originally launched to compete with Facebook, never achieved much adoption among users and may well have been slated for the digital dust heap long before the security issue came to Google's attention.

When Google finally announced that it had discovered and immediately patched the bug back in March 2018, seven months earlier, it was owning up to a serious lapse in disclosure practice. The bug affected about half a million users, and the company claimed in a blog post that it had found no evidence the flaw had been exploited. Regardless, consumers had the right to know. Not disclosing the problem when it was discovered was a massive failure on Google's part.

According to Google, at issue were "static, optional Google+ Profile fields including name, email address, occupation, gender and age." In other words, content that is often public-facing on social media sites. That said, developers could in theory have accessed data intended to be private, such as a user's date of birth. Had the discovery come after the GDPR went into effect in May, the matter might have been more problematic for Google.

That data not intended for prying eyes was accessible is a major problem. The public reaction? An inaudible yawn. (A shrug would have been nice.) Apparently, breach notification is still a work in progress in some organizations' cybersecurity strategies.

Amazon Hosts a Parasite

Amazon revealed a breach of customer data last week, but it wasn't a data breach of the usual variety. The company didn't fall prey to a cyberattack, and no hacker exploited unsecured code; instead, an employee leaked customer email addresses to an online reseller in exchange for money.

What you need to know: 1.) A crime was committed, and 2.) It still counts as a data compromise.

While this public dismissal of an Amazon employee for the theft of customer data isn't a common occurrence, it's unlikely to be an isolated incident. The e-commerce giant has been investigating suspected leaks from its databases since mid-September amid allegations that resellers were paying to get an advantage over competitors.

The Wall Street Journal reported incidents of employees accepting bribes to share sales data (including customer email addresses), remove negative reviews, and green-light banned seller accounts, particularly in China.

The value of customer email addresses to resellers is considerable. Resellers don't typically have access to customer contact information; with it, they can reach out to customers who have left negative reviews and ask them to revise or remove those reviews in exchange for discounts on future purchases.

Customer reviews are thought to be a key data point in Amazon's search algorithms; online feedback from customers is a factor in how prominently a seller's products appear in searches. Competition for high-ranking results is a matter of survival for resellers since the top three results in searches account for 64% of sales, and 70% of Amazon customers don't bother scrolling past the first page of results. As the number of resellers increases, so too does the motivation to game the site's algorithm.
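
To see why a few deleted reviews are worth a bribe, consider a deliberately crude ranking model. Amazon's actual ranking algorithm and weights are not public, so everything in the snippet below is an illustrative assumption rather than a description of how Amazon works.

```python
# A toy ranking score: the weights and inputs are illustrative assumptions only.
def toy_rank_score(avg_rating: float, review_count: int, sales_velocity: float) -> float:
    review_signal = min(review_count, 500) / 500 * 5          # cap the volume signal
    return 0.5 * avg_rating + 0.2 * review_signal + 0.3 * sales_velocity

# The same product before and after a paid-off insider removes a few 1-star reviews.
before = toy_rank_score(avg_rating=3.9, review_count=240, sales_velocity=3.2)
after = toy_rank_score(avg_rating=4.4, review_count=232, sales_velocity=3.2)
print(round(before, 2), round(after, 2))  # 3.39 -> 3.62: a small rating bump lifts the score
```

Even in a toy model, a modest rating bump moves the score, and on a real marketplace a small move in rank can be the difference between the first page and obscurity.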

Algorithms used by online services such as Amazon have become increasingly sophisticated at tracking and predicting user behavior. While most of the process of serving search results to the consumer is machine-made (i.e., AI), user feedback and the administration of it still require a human touch, and as such they present a potential Achilles' heel in the way promotions work on the site.

B.O.: Bad Operations, Bad Optics 

We have not yet reached the tipping point at which consumers refuse to accept the current risk profile that comes with sharing their data online. That point will come, and when it does, the backlash will require speedy and decisive action on the part of companies large and small.

With data breaches, zero-day exploits and the sundry vulnerabilities that come about through updates, feature creep and upgrades of every stripe, there is no failsafe position. When it comes to the way data is handled by trusted employees, there will always be the possibility of crime. But when it comes to the way information is handled by companies that traffic in data for their daily bread, a sea change is coming, and Google+ is proof that nothing is too big to fail when it starts to happen.

What can we do while the tide turns? It's crucial to address culture rather than strategy when it comes to the use of sensitive consumer data. A culture of caution remains a utopian ideal in cybersecurity circles, and until it becomes part of general practice, needless exposures and vulnerabilities will persist.

The needed sea change in cybersecurity is cultural, and it is still nowhere in sight. If you think there's another way to read these three cautionary tales, I'll be happy to hear from you.

Published on: Oct 10, 2018