A week after it landed with a curious (and most likely spurious) thud, Zuckerberg's announcement about a new tack on consumer privacy still has the feel of an unexpected message from some parallel universe where surveillance (commercial and/or spycraft) isn't the new normal.
"I believe a privacy-focused communications platform will be even more important than today's open platforms," Zuckerberg said. "Privacy gives people the freedom to be themselves and connect more naturally, which is why we build social networks." And maybe share more freely their inmost wants and needs, thereby making it easier to serve them ads that convert.
While Facebook has a lengthy history of leaks, gaffes, and outright violations of privacy for users and non-users alike, and Zuckerberg has made unfulfilled promises to remedy its problematic and unpopular practices, one need look no further than recent news to view this pivot in company policy with deep skepticism:
- Facebook's lobbying against data privacy laws worldwide: Leaked internal memos revealed an extensive lobbying effort against data privacy laws on Facebook's part, targeting the U.S., U.K., Canada, India, Vietnam, Argentina, Brazil, and every member state of the European Union.
- Facebook's Two-Factor Authentication phone numbers exposed: After prompting users to provide phone numbers to secure their accounts, Facebook allows anyone to look up those accounts using the numbers. The phone numbers are publicly searchable by default, and users have no way of opting out once they've provided them. (The company has also used security information for advertising in the past.)
- Mobile apps send user data to Facebook (even for non-Facebook users): A study by Privacy International showed that several Android apps, including Yelp, Duolingo, Indeed, the King James Bible app, Qibla Connect, and Muslim Pro all transmit users' personal data back to Facebook. A later update showed that iPhone users were similarly affected: iOS versions of OKCupid, Grindr, Tinder, Migraine Buddy, Kwit, Muslim Pro, Bible, and others were also found to eavesdrop on Facebook's behalf.
- Hundreds of millions of user passwords left exposed to Facebook employees: News recently broke that Facebook left the passwords of between 200 million and 600 million users unencrypted and available to the company's 20,000 employees going back as far as 2012.
Facebook has had more than its share of bad press in recent years, including Russian meddling in U.S. elections and complicity in a genocide campaign in Myanmar, but the company's antipathy toward user privacy seems to betray a wider disdain for the public interest, which leads to a bigger question.
Facebook has become one of the most profitable, debt-free businesses in the world by selling the private information of its users. Do you really think it's going to stop? Privacy is increasingly important to consumers, but Facebook is proof that a company need not respect the privacy of the lives it comes in contact with in order to thrive--quite the contrary.
When Did You Stop Beating Your Users?
It seems fair to say that Facebook has not earned the benefit of the doubt when it comes to being open and transparent with the public, and I'm not just saying that because I've been betting on the company's stock (I own a fair amount, and, possibly perversely, I think it's still a sound investment).
I bring this up because Facebook could be doing something to make itself an even better investment. In fact, any business can do it, and increase its value in the process. Put simply, companies can make themselves a harder target for hackers, and less prone to compromise. While it's impossible to know for certain whether a company has been compromised or not, organizations have reputations. Reputations tend to color the way we read events. And finally, reputation management in this day and age of near-constant compromise and breach requires transparency--or at least the perception of transparency.
This was the cybersecurity song stuck in my head when Facebook, Instagram, and WhatsApp experienced widespread service outages on March 13, marking the company's longest ever downtime.
A little context: MySpace recently disclosed a major migration gaffe: "As a result of a server migration project, any photos, videos, and audio files you uploaded more than three years ago may no longer be available on or from Myspace." People in the know have estimated the mistake affected 53 million songs from 14 million artists.
The same day as the MySpace buzzkill, Zoll Medical reported it had experienced a data breach during an email server migration that exposed select confidential patient data, including patient names, addresses, dates of birth, limited medical information, and some Social Security numbers.
While Facebook's statement regarding its server configuration change may have been accurate, there may have been more to the story. The problem here is that we're not dealing with a company that releases reliable information (beyond data about its users as marketing targets).
While the outage may indeed have been caused by an honest sort of epic fail, Facebook has earned a dose of healthy skepticism. Indeed, scandals and overall wrongdoing sometimes seem the way of the world at Facebook, and as a result of this perception--true, false, or truth-y--there is a significant deficit of trust among the general public. While Facebook is too large to fail as a result of this situation, small- to medium-size companies cannot afford the luxury of being perceived as untrustworthy.
Perception Is Everything
Gustave Flaubert said, "There is no truth. There is only perception." It mattered when he wrote that, and it still matters today.
When a company doesn't report a cyberattack--or only reports the more harmless aspects of an incident--that needn't always be ascribed to sinister motives. Consider what would have happened to Facebook if 1) the recent downtime was caused by an attack (possibly made possible by the configuration change the company reported), and 2) it admitted as much. Admitting publicly that a cyberattack effectively brought a multibillion-dollar business to a halt for the better part of a day would, first and foremost, have the potential to encourage further attacks. Denying anything happened gives system administrators more time to identify and patch newly discovered vulnerabilities. Then there are the repercussions to the company's stock price. In short, there is no upside.
Regardless of whether the Facebook outage was the result of a cyberattack or internal error, one factor that's been largely overlooked is the company's plan to integrate all of its platforms--specifically to make the previously separate Messenger, WhatsApp, and Instagram applications interoperable.
This cross-platform integration represents a monumental undertaking. Each of these services has, at a minimum, hundreds of millions of active users, all of them with different security protocols, data structures, and network requirements. Changing the architecture of three separate applications at a fundamental level not only opens the door to human error and system glitches but also presents a golden opportunity for hackers, and that should be what we're talking about--before anything bad happens.
The primary means of detecting cyber incidents for trained experts or artificial intelligence is to look for inconsistent or unexpected behavior in a system: An influx of traffic could mean a major news event, but it could also mean a DDoS attack. An unexpected delay in network connections could mean a hardware failure, but it could also signify a hijacked DNS server.
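To make the idea concrete, here is a minimal sketch of that kind of detection: flagging a traffic sample when it deviates sharply from its recent baseline. The function name, the sample data, and the 3-sigma threshold are illustrative assumptions, not any vendor's actual detection logic.

```python
# Minimal sketch of statistical anomaly detection on traffic counts.
# Assumption: a sliding window of recent samples defines "normal,"
# and a deviation beyond `threshold` standard deviations is flagged.
from statistics import mean, stdev

def find_anomalies(counts, window=10, threshold=3.0):
    """Return indices whose value deviates from the trailing window's
    mean by more than `threshold` standard deviations."""
    anomalies = []
    for i in range(window, len(counts)):
        baseline = counts[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma == 0:
            continue  # perfectly flat baseline: z-score is undefined
        if abs(counts[i] - mu) / sigma > threshold:
            anomalies.append(i)
    return anomalies

# Steady traffic around 100 requests/sec, then a sudden surge:
traffic = [100, 102, 98, 101, 99, 103, 97, 100, 102, 98,
           101, 99, 100, 102, 98, 450]
print(find_anomalies(traffic))  # → [15], the surge
```

The same ambiguity the paragraph describes applies here: the flagged index could be a viral news story or a DDoS in progress; the statistics only say "this is abnormal," not why.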
It doesn't matter what caused Facebook's recent day-long inter-platform outage. There is a valuable takeaway for businesses regardless: As Facebook trundles toward platform unification, it will be increasingly vulnerable to attack. While all companies are easier to breach when they are making a major change, Facebook and its holdings may represent a clear and present danger the likes of which we've never seen, and one that can help lead the way to better cyber solutions, no matter how big a company is.