Social Media Platforms Face a Difficult Task in Dealing With Misinformation

Big Tech companies, and social media platforms in particular, face a situation in which content moderation and the removal of misinformation are of utmost importance. The military conflict in Ukraine is yet another event in which digital platforms may have to play an important role.

Companies are better equipped to deal with disinformation and misinformation now than they were a few years ago, during the 2016 U.S. election or the Brexit referendum. However, content moderation in a military conflict like the one in Ukraine is easier said than done.

There are solid arguments for removing content that misleads people about the current situation and for preventing the publication of false pretexts used to justify an invasion. On the other hand, especially in these circumstances, people also need unimpeded access to these channels to document and report what is happening and to communicate with their loved ones.

Perhaps the closest precedent occurred last year in Myanmar, where, after a military coup, YouTube removed channels run by the military in the hope of preventing further incitement to violence. In the end, the removals also created problems for the humanitarian and legal efforts to bring the perpetrators to justice.

Content moderation, and the question of whether to hold digital platforms accountable for content posted on their platforms, has been debated in Europe, the U.S. and elsewhere for years, and the first legislative proposals in this space are now emerging.

In the U.S., Senators Richard Blumenthal (D., Conn.) and Marsha Blackburn (R., Tenn.) introduced bipartisan legislation in February, dubbed the Kids Online Safety Act, aimed at holding social media platforms responsible for the harm they cause to children.

The proposed bill would also require tech companies to provide regular assessments of how their algorithms, design features and targeted advertising systems might contribute to harm to minors. Companies would also have to give minors the ability to opt out of algorithmic recommendations.

If the proposal becomes law, it will represent a shift away from the immunity that Big Tech companies have enjoyed since the adoption of Section 230 of the Communications Decency Act, which essentially shields internet companies from liability for harmful content posted by their users.

Read More: US Lawmakers Propose Bill To Impose Platform Content Moderation 

Europe recently approved the Digital Services Act (DSA), which will hold Big Tech companies accountable for illegal content posted on their platforms; they will be required to put mechanisms in place to ensure such content is removed in a timely fashion. Even content that is legal but harmful will have to be removed quickly.

Read More: EU Parliament Approves Digital Service Act Targeting Big Tech 

The U.K. is also proposing new legislation, the Online Safety Bill, with requirements similar to those of the European DSA, but it adds new criminal offences to ensure that companies do their best to remove harmful content.

Google, Meta, Twitter and others won't have an easy task in Ukraine, but it is their latest opportunity to prove that they are prepared to deal with misinformation.

 
