Do digital platforms contribute as much to creating as to solving crises?

Photo by Adem AY on Unsplash

This blog post was contributed by Birgit Plöchl, a student in the MSc Leading Innovative Organizations at JKU Linz

The Russian invasion of Ukraine has had a profound impact on day-to-day life across Europe. Millions of Ukrainians are fleeing the country, infrastructure is being destroyed, and people lack basic necessities such as food and hygiene products. Europe is thus facing one of its biggest crises in recent years. In contrast to earlier wars in European history, this one is also waged online. Soldiers fight over territory, while hackers attack governmental systems and media platforms, aiming to extract as much internal information from the opponent as possible and thereby accumulate leverage. In addition, social media channels are used to share information about current events: people organise humanitarian aid, document life at the front lines, and share their personal opinions.

In this essay, I argue that digital platforms contribute as much to creating crises as to solving them. In the following paragraphs, I present three main aspects of digital platforms and explain how and why each not only helps solve a crisis but also intensifies it. While there are many kinds of crises, this essay addresses the specific crisis of war, using the current Russian invasion of Ukraine as its example, and focuses on social media platforms. It is important to note that I address problems only with regard to social media platforms; however, knowledge-sharing, media-sharing, and service-oriented platforms such as Quora, Spotify, and Airbnb are also of interest for understanding how digital platforms contribute to crises. I use an inverted-pyramid approach: for each aspect, I state an argument and then support it with facts and examples.

First, social media platforms are super-spreaders of fake news. Since anyone can post anything at any time, people often mistake personal opinions for facts. This allows people without sufficient knowledge of a topic to share unverified information with their followers. Many content creators use their social media channels to showcase their lifestyle, values, and beliefs, and most content consumers follow influencers because they share those values or even aspire to be like them. This becomes a problem once content creators use their reach to convince people of questionable opinions based on fake or only partially true information. In addition, influencers often have hidden agendas or are paid to share particular information. Online profiles are very opaque, and it is often unclear who manages a given account. For example, David Gilbert, a VICE News reporter, reports that Russian TikTok influencers were paid to spread Russian propaganda: according to his article, an anonymous administrator shared detailed instructions on how and what to post on individual TikTok channels via a Telegram chat group (Gilbert, 2022). Especially critical are the channels of state-owned enterprises, governmental associations, and state-affiliated media (HRW, 2022). By now, many globally known social media platforms have taken action to counter the spread of propaganda and false news. However, identifying and deleting false information takes too long; both governments and social media platforms are too slow to ban fake news (Susi et al., 2022). Yet fake news is not the only fabricated content on social media platforms.

Another significant way social media can be problematic is that people can create fake profiles to stay anonymous. Such profiles are mostly used to share socially unacceptable ideas, ask embarrassing questions, or spread hate speech, abusive comments, and conspiracy theories (Wani et al., 2017). Coscia and Rossi (2022) state that the use of social media increases ideological segregation and enhances polarisation. In the case of the Russian invasion of Ukraine, people were able to transmit their own ideologies even when these were based on low-quality news or personal opinions instead of facts. Consequently, people both inside and outside the war zone adopted more extreme opinions in favour of either party. Researchers have found that the Kremlin used social media to radicalise individuals long before it invaded Ukraine. In addition, BBC reporters discovered that a pro-Russian network steals pictures of famous influencers to create fake profiles, which then share posts linked to hashtags such as #IStandWithPutin or #IStandWithRussia (Gragnani et al., 2022). To reach as many people as possible, the algorithms of digital platforms are crucial. This brings me to the next main reason why digital platforms contribute as much to creating crises as to solving them.

Digital platforms such as Facebook and Instagram rely on algorithms to tailor each user's news feed. Artificial intelligence (AI) is used to identify a person's preferences and optimise the news feed or home page of their social media account. By drawing on digital traces, the algorithms detect personal preferences and create a personalised loop of information shown to the person (Calice et al., 2021). This technology creates an information bias that leads people to reach conclusions more quickly; hence, people form opinions before having enough knowledge to make a well-informed judgement. The fake news and conspiracy theories mentioned above can be distributed more easily through these algorithms. For example, if a person already doubts the media or is vulnerable to conspiracy thinking, they might seek out certain keywords or hashtags online. The AI recognises a pattern in the person's digital footprint, and the news feed shows more and more similar content. This creates the "bubble" social media consumers live in and clearly shapes their values and points of view. In addition, there are software robots, called bots, that work either fully automatically or with little human intervention. While some are used for the public good, many spread malware or spam. Social bots are newer and particularly critical, as the software interacts with people and is very hard to detect; these bots aim to influence people's behaviour (Himelein-Wachowiak et al., 2021). The same researchers found that bots supported and spread conspiracy theories and misinformation during the Covid-19 crisis, so it is possible that bots are being used to interact with people during the Russia-Ukraine war too. Hence, AI and bots can intensify crises because sharing false information is very easy.
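As a toy illustration only (not any platform's actual recommendation system), the feedback loop described above can be sketched in a few lines of Python: a feed ranks items by the user's past engagement with each topic, and every click on the top item reinforces that ranking. All topics, titles, and starting values here are invented for demonstration.

```python
def recommend(engagement_counts, catalog, k=3):
    """Rank catalog items by how often the user engaged with their topic."""
    return sorted(catalog,
                  key=lambda item: -engagement_counts.get(item["topic"], 0))[:k]

# Hypothetical content catalog for illustration.
catalog = [
    {"title": "Cooking tips", "topic": "food"},
    {"title": "Conspiracy clip", "topic": "conspiracy"},
    {"title": "Travel vlog", "topic": "travel"},
    {"title": "Another conspiracy clip", "topic": "conspiracy"},
]

# One initial click on conspiracy content seeds the loop.
engagement = {"conspiracy": 1}

# Simulate five sessions: the user clicks the top-ranked item each time,
# which further boosts that topic's score in the next ranking.
for _ in range(5):
    feed = recommend(engagement, catalog)
    clicked = feed[0]
    engagement[clicked["topic"]] = engagement.get(clicked["topic"], 0) + 1

print(engagement)  # → {'conspiracy': 6}
```

After only five simulated sessions, the single seed click has grown into a feed dominated by one topic, which is the "bubble" dynamic the paragraph above describes: the ranking and the clicks reinforce each other without any new information entering the loop.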

This essay has explored the effects of social media platforms on the current Ukrainian crisis. Beyond that, there are more general ways in which these platforms can harm individual lives and behaviours. One reason digital platforms contribute to crises is the general information overload we face today. Wherever we go and whenever we want, we can go online and find information on almost any topic. This also means we are more aware of the bad things happening globally. In his book "Humankind", Bregman (2021) points out that the world has never been as peaceful as it is today; nevertheless, people all over the world believe there is more crime and violence than ever before. A large part of this misperception stems from the media coverage of crime (John Howard, 2019). Crime is often the most covered topic in the news: as stated in the John Howard (2019) blog entry, Harper and Hogue (2017) found that sex offences made up 2% of crimes in the UK but received 20% of media coverage, a clearly disproportionate share. Further, people tend towards "hometown favouritism" (Duffy & Wake, 2008), meaning they perceive their own social community as better than others. People are thus biased towards their familiar social bubble, while receiving ever more information about negative incidents elsewhere in their home country and globally. Consequently, the media shapes a misleading picture of the global state of peace. The more crime news people consume, the more they fear becoming victims of crime themselves (Näsi et al., 2021), and this growing fear leads them to look for comfort in stricter rules or authoritarian leadership (Gelfand, 2020).

Looking once again at the Russia-Ukraine war, this explains how people who lack a supportive environment or fear the future are easily manipulated by the conspiracy theories and fake news they consume on digital platforms. As stated before, the Russian government used social media to mobilise individuals towards pro-Russian beliefs years before it invaded Ukraine. I believe that without an information overload that weighs on many people's mental health, this would not have been possible to the same extent. Furthermore, the so-called Weltschmerz, the pain people feel out of empathy for the suffering of others, grows with the enormous number of news sources available. To conclude, I have presented three main aspects that show that digital platforms contribute as much to creating crises as to solving them: first, the problem of fake news; second, the complexity of the algorithms that power the digital platforms we use; and third, the overall global mood caused by information overload. While digital platforms certainly make great positive contributions to solving crises, such as enabling humanitarian aid, it is important to also understand their negative impact. Governments and social media companies must address the problems of fake profiles, bots, and misinformation as soon as possible, to help de-escalate the current situation or at least stop intensifying it. In writing this essay, I realised that there are still many areas to explore in order to better understand the specific implications of digital platforms.


Bregman, R. (2021). Humankind: A Hopeful History. Bloomsbury.

Calice, M. N., Bao, L., Freiling, I., Howell, E., Xenos, M. A., Yang, S., Brossard, D., Newman, T. P., & Scheufele, D. A. (2021). Polarized platforms? How partisanship shapes perceptions of “algorithmic news bias.” New Media & Society, 14614448211034160.

Coscia, M., & Rossi, L. (2022). How minimizing conflicts could lead to polarization on social media: An agent-based model investigation. PLOS ONE, 17(1), e0263184.

Duffy, B., & Wake, R. (2008). Crime and Public Perceptions, 76.

Gelfand, M. (2020, January 2). Authoritarian leaders thrive on fear. We need to help people feel safe. The Guardian.

Gilbert, D. (2022, March 11). Russian TikTok Influencers Are Being Paid to Spread Kremlin Propaganda. Vice.

Gragnani, J., Medhavi, A., & Seraj, A. (2022, May 9). Ukraine war: The stolen faces used to promote Vladimir Putin. BBC News.

Harper, C. A., & Hogue, T. E. (2017). Press coverage as a heuristic guide for social decision-making about sexual offenders. Psychology, Crime & Law, 23(2), 118–134.

Himelein-Wachowiak, M., Giorgi, S., Devoto, A., Rahman, M., Ungar, L., Schwartz, H. A., Epstein, D. H., Leggio, L., & Curtis, B. (2021). Bots and Misinformation Spread on Social Media: Implications for COVID-19. Journal of Medical Internet Research, 23(5), e26933.

HRW. (2022). Russia, Ukraine, and Social Media and Messaging Apps: Questions and Answers on Platform Accountability and Human Rights Responsibilities. Human Rights Watch.

John Howard. (2019, January 7). Media portrayals of crime create problems. The John Howard Society of Canada.

Näsi, M., Tanskanen, M., Kivivuori, J., Haara, P., & Reunanen, E. (2021). Crime News Consumption and Fear of Violence: The Role of Traditional Media, Social Media, and Alternative Information Sources. Crime & Delinquency, 67(4), 574–600.

Susi, M., Benedek, W., Fischer-Lessiak, G., Kettemann, M. C., Schippers, B., & Viljanen, J. (2022). Governing Information Flows During War: A Comparative Study of Content Governance and Media Policy Responses After Russia’s Attack on Ukraine. GDHRNet Working Paper.

Wani, S. Y., Wani, M., & Sofi, M. (2017). Why Fake Profiles: A study of Anomalous users in different categories of Online Social Networks. International Journal of Engineering, Technology, Science and Research, 4, 320–329.
