When Social Networks Get Political

Facebook users in the United States who logged in on Election Day (November 4th) were met with a message urging them to vote and a tool to guide them to their polling place. The feature was displayed to all users and was non-partisan; Facebook has not disclosed any involvement by the United States government or any politically affiliated group in developing it. This foray into influencing voter turnout raises three questions: is Facebook’s call to action effective? If so, might Facebook monetize this capability in the future, and would users be informed? And lastly, could this replace or augment current corporate political speech (such as issue advocacy through lobbying, or financial donations to campaigns)? (Questions of lawfulness are omitted from this post, as I lack legal expertise.)

Regarding effectiveness, Facebook deployed similar features in 2010 and 2012, with measurable impact. The 2012 effort was marred by code bugs, but the 2010 experiment was run in partnership with data scientists who published a paper in Nature concluding that Facebook’s message caused an additional 0.14% of the US population to vote. (See the URL below for details; the key finding is that these were truly incremental voters, not people who would have voted anyway.) An increase in turnout of this size is potentially large enough to change the outcome of close state and national elections. Whether users will become inured to Facebook-initiated advocacy is debatable. For example, Facebook is currently promoting donations to charities fighting Ebola in West Africa, and initiatives like these will likely see diminishing returns if Facebook over-saturates users.
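As a rough sanity check on the scale of that 0.14% figure, the short calculation below shows the implied number of additional voters. (The population figure is an assumed, approximate 2010 US voting-age population, not a number taken from the Nature paper.)

```python
# Rough scale check on the 0.14% turnout effect cited above.
# The population figure is an assumption (approximate 2010 US
# voting-age population), not a value from the Nature paper.
voting_age_population = 235_000_000
share_mobilized = 0.0014  # the 0.14% effect

additional_voters = voting_age_population * share_mobilized
print(f"~{additional_voters:,.0f} additional voters")  # ~329,000
```

A few hundred thousand votes is comfortably larger than the margin of victory in many statewide races, which is why even a fraction of a percent can matter.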

Assuming that Facebook’s get-out-the-vote efforts are effective, the company risks public backlash under two scenarios. First, it is conceivable that a Facebook engineer might “go rogue” and attempt to influence the actual outcome of an election, not just the level of voter turnout. This could be achieved by displaying the voting message only to select populations, or by displaying different (and less effective) messages to targeted demographic segments. Such manipulation might go undiscovered, and even if spotted, there may be no redress available. Given Facebook’s recently revealed experiment on whether it could affect users’ emotions (see the link to a summary below), it is not clear that the Facebook engineering team can be trusted to make decisions in line with commonly accepted ethical practices in the social sciences. Second, Facebook could evolve this “call to action” feature into a native advertising format sold to political actors, giving them access to voters who may not realize they are viewing a sponsored feature rather than Facebook-generated or user-generated content. (“Native advertising” refers to ad units that look like actual site content; although they are generally marked “sponsored,” many users do not understand that they are advertisements.) An uproar over “subliminal” political advertising could force Facebook to kill the feature. Given the brand risk, Facebook is unlikely to monetize the feature, but the risk of internal teams selectively deploying the tool remains real.

Even if Facebook prevents “unauthorized” selective deployment of the voting tool, it may still choose to intentionally influence election results. Many corporations speak publicly (through their CEOs, press releases, and corporate donations) to influence electoral or legislative outcomes, and Facebook could do the same to support an issue or a candidate. For example, the CEOs of Starbucks, Whole Foods, and Aetna, among others, spoke publicly against the Affordable Care Act. Mark Zuckerberg has already campaigned personally for immigration reform through his advocacy group FWD.us, and Facebook contributes directly to political campaigns, including to three politicians who supported the Stop Online Piracy Act and the Protect IP Act. Facebook has developed and proven the capability to increase voter turnout, and it has deep insight into its users’ political leanings (based on user-provided signals such as “liking” a political party’s Facebook page, or on predictive data such as age, gender, zip code, and job title). Facebook’s ability to insidiously influence election results is almost unlimited: users cannot opt out of the messaging, and may not even realize it is being targeted specifically at them.

Although Facebook appears to want to influence the American democratic process positively and in a non-partisan manner, the risks of abuse outweigh the benefits of increased voter participation. Facebook could permanently lose user trust, substantially damaging its revenues. It should restrict its political speech to currently accepted channels and avoid speaking directly to users through Facebook.com.


Nature paper on the 2010 experiment: http://fowler.ucsd.edu/massive_turnout.pdf

Description of 2014 initiative in TechCrunch: http://techcrunch.com/2014/11/04/facebook-vote/

Description of emotional influence experiment: http://www.slate.com/articles/health_and_science/science/2014/06/facebook_unethical_experiment_it_made_news_feeds_happier_or_sadder_to_manipulate.html