Last week, the Wall Street Journal revealed that Google had concealed a data breach related to its Google+ product. The mostly defunct product had exposed more user data to developers than intended - similar to Facebook's problem in the Cambridge Analytica scandal.
What's different this time is that Google intentionally covered up the breach, which was only made public due to the report. Per the WSJ, Google hid the breach because it feared attracting negative press or regulatory scrutiny - scrutiny it's almost certain to attract now.
By the end of the week, Google had attracted the ire and attention of Congress. Several senators wrote to the FTC to ask it to take another look at Google. Meanwhile, Senator Chuck Grassley (R-Iowa) wrote directly to Google CEO Sundar Pichai on Friday, pressing him on why Google had declined to participate in earlier Congressional hearings in April that focused on Facebook.
"Despite your contention that Google did not have the same data protection failures as Facebook, it appears from recent reports that Google+ had an almost identical feature to Facebook, which allowed third party developers to access information from users as well as private information of those users' connections," Grassley wrote. "Moreover, it appears that you were aware of this issue at the time I invited you to participate in the hearing and sent you the letter regarding Google's policies."
Google's problems come on the heels of many months of Facebook being in the spotlight. But even after Facebook's latest data breach, which affected an estimated 50 million users, that spotlight is almost certain to widen now.
It's increasingly clear that the problems that Facebook has faced are not unique to the social networking platform. Nor, in fact, are they unique to just platform businesses. But given the scale of data and interactions they facilitate, platforms significantly up the stakes and amplify potential problems.
The lack of transparency into how platforms handle user data lies at the heart of the challenge. There's a massive asymmetry of information between the platform and users, and the platforms are able to exploit that gap with little oversight.
There are a number of potential proposals for how to solve this problem, but each has tradeoffs. Enforcing total transparency and data portability comes with potential security risks, as we've seen recently that everyone from competitors to bots, fraudsters and foreign governments is looking to exploit user data for their own ends. We want the platforms to be open - but not too open. Or rather, open in some ways and closed in others.
However, the trade-off of not enforcing transparency, as would likely be the case if the big platform monopolies were designated as utilities, is that it would be difficult to stop the platforms from continuing to exploit their information advantage. The platforms have a large information and resource advantage over any regulator - similar to what we've seen evolve in the financial industry, where regulators are almost always playing catch-up with the companies they're supposed to regulate. Additionally, regulatory capture, another issue familiar from the financial industry, is also a risk here, given how active the big tech monopolies have become in political lobbying.
Unfortunately, there are no easy solutions. But the status quo is clearly not working, as the big platforms have shown themselves incapable of self-regulating. Government has yet to find its role, but it's becoming increasingly clear that it will have to soon.