Making Your Privacy Vote Count on Facebook
As Facebook has grown from a dorm room project to a publicly-traded company, its users have repeatedly challenged the service on privacy issues, drawing attention from the media and governments as a result. And while Facebook is often perceived as acting like some constitution-less nation doing whatever it wants without regard to user concerns, its very existence as a social networking site depends on users uploading information and trusting the site with that information. Facebook may not be a nation, but it does have a social contract with its users, albeit one driven by revenue and functionality rather than governance. And while user efforts to renegotiate this contract haven't always succeeded, even failed attempts provide some insight into how users might effectively achieve their aims when their wishes conflict with Facebook's actions.
One such failed attempt was the recent Facebook privacy notice meme, which saw many Facebook users post a "license" on their feeds purporting to limit use of their content. While its sheer chain-letter-like annoyance factor alone could have derailed the campaign, the idea was doomed from its inception because it was founded on misinformation and unrealistic goals. Although the license claimed to limit third parties' rights to a user's information, in reality it had no such effect. The most that can be said is that the meme drew attention to users' frustration with the way Facebook works and their concerns about changes related to the recent IPO. But the demand created no opportunity for Facebook to respond reasonably and satisfactorily, so its direct impact was negligible.
Facebook users also tried – and failed – to take advantage of Facebook's voting mechanism to force the company to address their concerns. Part of the problem here was a flawed mechanism for enacting change: although Facebook nominally allows users to vote on changes to its Terms of Use and other policies, the actual voting process has instead inspired heavy-handed accusations analogizing Mark Zuckerberg to some autocratic mastermind. It didn't help that the campaign had no specific criticism or direction: it aimed at rejecting the proposed changes outright without explaining what its actual concerns were. Of course, the campaign's chances of reaching the required 300 million vote threshold were always slim, in part because many users were never aware of the vote. In the end, the campaign fell far short of the votes needed to prevent the acceptance of the new Terms of Use, although it drew much-deserved attention to the inherent limitations of Facebook's voting mechanism and led many commenters and users to blast Facebook for its lack of promotion in the face of the company's supposed "significant efforts to encourage users to vote."
So what went right and wrong in these efforts? Both efforts did try to emulate past successful campaigns by leveraging the powerful nature of social networking itself. However, building user engagement is not enough. A successful effort to impact Facebook's policy also needs to focus on educating users about specific issues, and it needs to propose effective solutions to perceived problems. For example, the user outcry about the invasiveness of then-widespread quizzes, spread through Facebook itself, called for specific changes that opened the door for Facebook to respond.
Despite its site governance policies, Facebook is not a democracy and is not likely to become one. But it is based on a social contract. By highlighting issues that break this social contract and organizing specifically around solutions to them – rather than fostering misunderstandings or backing demands with no realistic chance of success – users and advocates can enter into a meaningful dialog with the service and stand a better chance of accomplishing their aims.
Chris Conley is the Technology and Civil Liberties Fellow with the ACLU of Northern California.