Facebook has been receiving criticism once again for how it handled users’ personal data. Here is a quick summary: in 2013, a third-party developer acquired large amounts of data from about 50 million users through an old platform capability (which Facebook itself removed one year later to prevent abuse); this data was then used to target US voters during the 2016 Presidential Election. The issue runs deep, and it highlights a bigger underlying problem: users’ privacy expectations are not aligned with the commitments most tech companies actually make.
Zuckerberg said in a recent interview with Wired, “early on […] we had this very idealistic vision around how data portability would allow all these different new experiences, and I think the feedback that we’ve gotten from our community and from the world is that privacy and having the data locked down is more important to people.”
Regardless, Facebook never committed to fully locking down users’ data, and its business model was in fact built around the value that data holds for advertisers through interest relevance and demographic targeting. Google and Facebook accounted for 73% of all US digital ad revenue in the second quarter of FY18, up from 63% two years earlier.
I can nonetheless relate to that idealistic vision of how data sharing and technology can work together. The more the Google Assistant knows about the music I like, the better it can personalize my listening experience. Richer actions become available too, like controlling the Nest thermostat or the lights by voice. At the end of the day, I’m trusting Google with my music taste and the devices installed in my house, and I get the benefit of convenience in return.