The role and responsibility of social media as a news delivery outlet

During the latest round of the long, seemingly never-ending conflict between Israel and Palestine, after closely following the news, links, tweets, and posts circulating mainly on Facebook and Twitter, an old concern of mine inevitably crawled back: what role is social media playing, beyond its social-fun aspect, in shaping the way we inform ourselves?

Since the irruption of social media into our lives, there has been ample talk about the democratization of information and the empowerment of all of us as civilians. I believe the praise is well deserved in many respects, and it is difficult to question the positive role social media has played in connecting people and allowing a free, fast, high-volume flow of information. Getting information has never been easier, but at the same time, in my opinion, it has never been harder to tell real news from fake or ill-intentioned posts.

The fact that any of us can legitimately post on social media has also opened a new outlet for people to misinform, or to express potentially dangerous ideas that can affect different groups in our society. Don’t get me wrong: I value the freedom of speech we enjoy in many countries as a key right, but that doesn’t mean it is unimportant to analyze the potential negative effects of these new technologies.

The news/media industry has been heavily disrupted by the strong growth of social media and of technology in general. Print media is suffering and has had to make a difficult transition toward mobile platforms. TV news outlets have perhaps suffered to a lesser extent, but they too have had to rethink their business models, incorporating online video, Twitter accounts, and Facebook pages. Even though news companies have always been subject to editorial lines, coming from the owners, the editor-in-chief, or the journalists’ own biases, it used to be easier to trace where information was coming from and from whom I was getting it. That is getting harder, given the amount of content we receive and the sheer number of outlets, sources, and users it comes from.

Seeing the glass half-full: the fact that social media enabled people to express themselves freely in countries like Iran during the 2009 election protests [1], Egypt during the 2011 uprising against Mubarak [2], and Venezuela during the 2012 presidential elections [3] was a significant development, with bigger implications than anyone might have expected. Even if it lasted only a short period of time (Iran [4] and Egypt [5] blocked Twitter and social media in general), it enabled people to express their opinions and share them with the rest of the world, generating awareness of their realities. It was one of the first times I can remember seeing relevant events “live” through the eyes of a country’s population, with no filters. Considering the low levels of press freedom in these particular countries (Egypt ranks 159/180, Iran 173/180, and Venezuela 116/180 in the worldwide press freedom rankings [6]), the value generated by social media is even greater.

However, the empty half of the glass shows a grimmer view. During the latest Israel-Palestine outburst I saw first hand how people from both sides were, almost without reviewing their sources, posting links to old, fake, or inaccurate pictures [7], news, and videos that supported their opinions. Worse, I saw how two dear friends, one with Jewish roots and one with Palestinian roots, created a Facebook group to try to unite both communities back in Chile, only to receive severe backlash from people with radical views on both sides. They received personal threats, which ultimately led them to close the group, even though the aggressiveness came from a very small number of people. Social media provided the perfect outlet for people to express discriminatory, racist, and xenophobic views against people from Israel or Palestine, and even against the local communities associated with both places. Comment boxes in news outlets were dominated by irrational, insulting comments. People from both communities suffered virtual and real-life physical attacks: the Arab School of Santiago, Chile was defaced with graffiti, and Jewish families were attacked in the streets and at their homes.

One is left wondering, especially compared with previous outbursts of the conflict like those in 2008 and 2012: did social media amplify the impact of this international conflict outside its borders? Did social media actually play a role in informing in an unbiased way, or did it only polarize and confront communities that are thousands of miles away from the point of origin? What effect did social media have in shaping the views of the general population that is not part of these two communities? Is social media simply enabling people hiding behind a computer, sometimes behind an alias, to express views they don’t dare to express publicly? I believe the jury is still out on these questions, and I don’t have the answers either.

There are reports suggesting that social media in general, and one YouTube video in particular, played a relevant role in the 2012 deaths of US personnel at the embassy in Libya [8]. Even if the video wasn’t the main cause, the fact that it is discussed as a possible enabler of a tragic event validates the idea of at least asking these questions.

If social media is enabling democratization and empowerment, one could expect that this shift in the power to inform would also shift a sense of responsibility toward the now-empowered civilian population. However, this is not necessarily happening. Maybe it has to do with the fact that this is still an industry in its infancy. Is it fair to put the blame only on the people doing the posting? Or should the companies providing the outlets take more responsibility? I believe both parties have to be more responsible; yet Facebook [9], Twitter [10], and YouTube [11], for example, argue that they are not liable for potentially defamatory content posted on their sites. We as users can report content that seems inappropriate, but that puts the burden on the user base. Considering these are successful businesses, where Facebook, measured by members, would be the third-largest country in the world, shouldn’t they approach this proactively? Hiding behind the argument that they are mere platforms and not publishers [12] might serve them well for now, but I strongly believe they should analyze the ramifications and worldwide impact of their networks, and take a more proactive approach as they keep growing in relevance and in members. This decision might affect their business models and imply investments in different areas, but that doesn’t mean it is something they should or can ignore forever.



  1. Yasmin Sanie-Hay

    Hi David, thanks for sharing your thoughts on this very important topic – it’s one that concerns me a great deal too. Although one might argue that, on the surface, the broad idea of a social media network represents a golden opportunity for more democratized creation of, access to, and sharing of news, in reality the particular way in which Facebook is designed to function as a platform does not really facilitate this. In fact, I find it does quite the opposite – it serves to reinforce underlying biases people may already have, and to limit the type of people, and views, they are exposed to. This, amongst other things, is one reason I personally stopped using Facebook altogether – the preponderance of self-perpetuating bias in sub-networks of people is boring and useless at best, and potentially quite harmful at worst.

    Although there are certainly problems surrounding false information, intentional misinformation, too much information, and people commenting with no sense of accountability, I believe these are simply ‘old media’ problems magnified through the vehicle of social media, as you noted. These are problems people have always faced, and will always fundamentally face, with ‘news’. Perhaps the heightened fashion in which we have been exposed to these challenges through social media has actually served to make us more cognizant of these problems existing at all, regardless of the format our news is delivered in.

    What concerns me more, however, are the design intricacies of how Facebook actually works, and how these define the flow of information a given user becomes part of. While it’s true that a user can go on Facebook and search for any term, the feeling of ‘information overflow’ means most users mostly come into contact with the information Facebook has chosen for them to see immediately in their ‘News Feed.’ The way this information is selected is the first layer of the problem. Facebook filters according to what it thinks I want to see as a user, and also – although it would never be explicitly stated – according to content that falls at the intersection of what “I like” and what will optimize its own ad revenues. The decision on how to filter what it thinks I like or want is very subtle, but has a tremendous impact.

    In August 2013, Facebook engineer Lars Backstrom explained that “Facebook’s news feed algorithm boils down the 1,500 posts that could be shown a day in the average news feed into around 300 that it ‘prioritises’.” How does this algorithm work? Backstrom explained that factors include: “how often you interact with a friend, page or public figure; how many likes, shares and comments individual posts have received; how much you have interacted with that kind of post in the past; and whether it’s being hidden and/or reported a lot.” If these are the metrics by which information is shown, or not shown, to me, this can indeed lead to the creation of very biased sub-communities, rather than a very democratized super-community.
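    To make the mechanics concrete, here is a toy sketch of the kind of prioritisation Backstrom describes. Every weight, field name, and the scoring formula itself are invented for illustration – this is not Facebook's actual algorithm, only a minimal model of "rank candidate posts by engagement signals, keep the top fraction":

```python
# Hypothetical engagement-weighted feed ranker. The signals mirror the
# factors Backstrom lists; the weights are made up for illustration.

def score(post, user):
    s = 0.0
    # how often you interact with this friend/page/public figure
    s += 3.0 * user["friend_affinity"].get(post["author"], 0.0)
    # global engagement: likes, shares, comments the post has received
    s += 1.0 * (post["likes"] + post["shares"] + post["comments"])
    # how much you have engaged with this kind of post in the past
    s += 2.0 * user["type_affinity"].get(post["type"], 0.0)
    # penalty if the post is being hidden/reported a lot
    s -= 5.0 * post["reports"]
    return s

def prioritise(posts, user, keep=0.2):
    """Keep only the top-scoring fraction of candidate posts."""
    ranked = sorted(posts, key=lambda p: score(p, user), reverse=True)
    return ranked[: max(1, int(len(ranked) * keep))]
```

    Boiling 1,500 candidates down to around 300 then corresponds to `prioritise(posts, user, keep=0.2)` – and note that nothing in the score rewards novelty or viewpoint diversity.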

    If what I see can depend on how many other people ‘like’ the story, this creates a reinforcing loop in the flow of information – the more people like a story, the more likely I am to see it, the more likely I am to engage with it, and therefore, even more people will see it. If what I see also depends on how often I have engaged with the same type of content in the past, this also creates a potentially harmful feedback loop in the information flow – the more someone looks at a certain type of content or engages with a certain type of person, the less likely they are to be exposed to any different content, and perhaps any different ideas.
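    That feedback loop can be demonstrated with a deliberately simplified, deterministic simulation (all numbers invented; this is not Facebook's model): two topics, a user with only a mild 60/40 preference for topic A, and a feed that shows each topic in proportion to past engagement. Even that mild preference compounds:

```python
# Illustrative feedback-loop simulation: the feed shows each topic in
# proportion to its accumulated engagement, and every engagement feeds
# back into that proportion. All parameters are invented.

def feed_share(rounds=200):
    engagement = {"A": 1.0, "B": 1.0}   # feed's learned affinity per topic
    engage_prob = {"A": 0.6, "B": 0.4}  # user's mild initial preference
    for _ in range(rounds):
        total = engagement["A"] + engagement["B"]
        for topic in ("A", "B"):
            shown = engagement[topic] / total          # share of feed shown
            engagement[topic] += shown * engage_prob[topic]  # reinforcement
    # fraction of the feed now devoted to topic A
    return engagement["A"] / (engagement["A"] + engagement["B"])
```

    Running `feed_share()` shows topic A's share of the feed climbing well past the user's own 60% preference, and it keeps drifting upward the longer the loop runs – the feed ends up more biased than the user ever was.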

    This bias in the flow of information, already a flagrant one, is just the first layer of the problem. Once Facebook filters your information, users serve to reinforce this with their own ability to filter. It’s human nature to filter ‘people’ and ideas in a homophilic fashion – we end up engaging more with similar people. Once users feed this additional layer of filtering into the system, Facebook’s algorithm ‘learns from this input’ and only becomes more biased in how it filters the information delivered to you in the future.

  2. Yasmin Sanie-Hay

    Due to the very nature of how information flows on Facebook, I think it ends up creating incredibly thick ‘connections’ and subnetworks between similar users, and creates an information bias in what they are exposed to outside their sub-network of views.
    As you said, “One is left wondering, especially compared with previous outbursts of the conflict like those in 2008 and 2012: did social media amplify the impact of this international conflict outside its borders? Did social media actually play a role in informing in an unbiased way, or did it only polarize and confront communities that are thousands of miles away from the point of origin?” I think it could very well serve to polarize people who are already homophilic in nature – which is most people – because it just shows them more and more of the same view, rather than showing different, or less ‘popular’, ones.

    So while, on the surface, a social network seems very democratic and unbiased in terms of content creation and delivery, digging deeper into how information flows are controlled and influenced – in an inherently biased and bias-forming way – reveals otherwise.