Photo: Gordon Johnson/Pixabay
The long-standing confrontation between Donald Trump and social media has entered a new stage. While still head of the White House, he is ready to block the military budget until the relevant laws are changed.
Social networks as a threat to national security
A few days ago, Donald Trump threatened to veto the national defense bill unless Section 230, the provision that governs liability for social media platforms, is changed.
That bill determines funding for US defense programs and the Armed Forces. Trump, who fought so zealously against social platforms in general and Twitter in particular this summer, is ready to sing his swan song and force changes to the legislation governing this issue.
His main argument connecting the army with Facebook, Twitter, and YouTube goes as follows: the alleged permissiveness of social platforms threatens US national security, as Trump announced at the end of November.
And until this problem is resolved, that is, until a new rule is written into the defense bill, Trump will not sign it.
On December 4, Trump confirmed his decision to veto the defense bill until the notorious Section 230 is revised.
What is Section 230
Section 230 is a part of the Communications Decency Act (CDA).
It establishes that users themselves, not the platforms on which they post, are responsible for the content they publish on social networks.
This provision is called one of the most important rules that allowed social media to exist in its current form: it lets platforms avoid legal claims over the content of their users.
According to Section 230, if a user posts fake content, or content containing threats, lies, or insults, on Facebook, Twitter, or Instagram, the author, not the platform, is responsible for it.
Placing responsibility on the authors of content, rather than on the site where they posted it, is on the whole logical.
Otherwise, one could just as well blame the power company or the internet provider whenever their infrastructure is used to hack accounts and steal money, to coordinate terrorist attacks, or to organize car theft.
At the same time, platform administrations can moderate content according to their own rules and remove or restrict access to it, for example, if the content is manipulative or blatantly deceitful.
Prohibited content is usually listed in the user agreement and terms of service.
Therefore, by agreeing to these terms, users effectively accept that their posts can be deleted and that they themselves can be blocked on the platform.
How the war between Trump and Twitter can impact all social media
Donald Trump has always been a problematic "client" for Twitter.
His posts were often deceitful and manipulative, yet the Twitter administration did not act on them, explaining that the words of a head of state, even in a clearly manipulative style, are socially important and should therefore reach the audience.
By choosing the tactic of an outside observer, Twitter's management found itself in the crosshairs of market analysts and of disgruntled bloggers who had been punished for lesser offenses on the platform.
As an experiment, one Twitter user copied some of Trump's posts to the @SuspendThePres account; those tweets were blocked, while the original posts on the account of the head of the White House remained available.
That changed in May 2020, when Twitter's executives lost patience and labeled one of Trump's tweets as manipulative and misleading.
The post called voting by mail in the US presidential election fraudulent, asserting that this method of voting opened the door to rigging.
From that moment, an open war began between the Twitter administration and Donald Trump, who vehemently disagreed with the microblogging service's actions. He continued posting contentious claims, and the administration reacted either by adding a manipulation label or by deleting the post entirely.
These tweets included claims that the coronavirus is not dangerous for children, statements about shooting by the National Guard in cities protesting against police violence, repeated attacks on voting by mail, and many other controversial subjects.
At first, Facebook did not support Twitter's demarche against content posted by the head of the United States. But after protests by its own employees and accusations from market participants, the largest social network synchronized its actions with Twitter's.
At some point, both services, where Donald Trump had been actively and freely running accounts for many years, began reacting to his content the same way they would react to a regular user's.
Obviously, this approach does not suit Donald Trump, and he threatened to amend Section 230 back in the summer, arguing that social platforms had too much power in deciding what content can appear on their pages and what cannot.
How do they want to rewrite Section 230
Back in June of this year, Donald Trump issued an executive order demanding that the content of Section 230 be changed.
Arguing his case, Trump noted that, on the one hand, Section 230 removes responsibility for user content from online platforms.
At the same time, platforms have nearly unlimited power over that content: they can delete it, block it, limit its visibility, or, conversely, promote it. It is these actions that can be perceived as censorship and bad-faith behavior by the platforms. The need to change the "main law on social networks" was partially supported by the heads of the largest technology companies.
Mark Zuckerberg, the head of Facebook, proposes changing the law so that it "really works."
And Jack Dorsey is confident that if platforms are stripped of their immunity over user content, they will have to review (censor) everything their users post.
And if large companies can still somehow manage that requirement, small services will not cope with the task.
How social media can change
This norm will be changed sooner or later, argues Sergey Petrenko, author of the BloGnot channel.
Republicans and Democrats insist on it, each from their own side. Moreover, the norm is becoming vaguer: the platforms are already taking proactive measures to moderate user content, yet they still want to count on immunity if they happen to miss something.
Founder and CEO of Terminal 42, author of BloGnot channel
But it is unlikely that the norm will be changed right now, the expert notes in his commentary. The problem is complex, the legislators' positions are practically opposite, and the Congress that would have to change the norm is currently openly hostile to Trump.
If the changes are introduced, social networks will moderate user content more strictly.
It cannot be ruled out that, in addition to moderators manually reviewing suspicious messages, the platforms will introduce automatic analysis applied to every post published on their pages.
Of course, this concerns above all the large platforms that host user content: Facebook, Twitter, YouTube, Instagram, TikTok, WordPress.com, Vimeo, and so on.
But go one level down, and a comment under an article on a news site is also user-generated content.
And if it is not posted through a third-party widget like Facebook Comments, responsibility for implementing a specific moderation policy will lie with the site itself.
In the worst-case scenario, we can forget about services where anything could be published instantly and the administration could react only after the fact.
More likely is something in between: platforms will rely on certain markers signaling that a post should be sent for pre-moderation.
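Such markers can be imagined as a simple triage rule: a post matching enough risk signals is held for human review before publication. Below is a minimal, purely hypothetical sketch in Python; real platforms would use machine-learning classifiers, user reputation, and report history rather than a bare keyword list, and the markers and thresholds here are invented for illustration only.

```python
# Hypothetical risk markers -- illustrative only, not any platform's real list.
RISK_MARKERS = ["election fraud", "miracle cure", "shooting"]


def needs_premoderation(text: str, author_flagged_before: bool = False) -> bool:
    """Return True if the post should be held for human review before publishing."""
    lowered = text.lower()
    # Count how many risk markers appear in the post.
    hits = sum(1 for marker in RISK_MARKERS if marker in lowered)
    # Authors who were flagged before get a stricter threshold.
    threshold = 1 if author_flagged_before else 2
    return hits >= threshold


# A post hitting two markers is held even from an author with a clean record.
print(needs_premoderation("Election fraud is rampant, only a miracle cure helps"))  # True
# An ordinary post from a clean account goes through instantly.
print(needs_premoderation("Nice weather today"))  # False
```

Under such a scheme, most posts would still publish instantly, which matches the "something in between" scenario described above: only posts crossing the marker threshold would wait in a review queue.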
And the job of SMM specialists will include not only publishing on social networks, but also knowing the quirks of the pre-moderation algorithms well enough to get content published instantly and without delay.
In addition, many media outlets that have already restricted user comments may stop publishing them on their sites altogether, leaving only comments on social networks so as not to violate the new law.
In any case, it is obvious that American lawmakers have decided to tidy up the rules governing internet services. It is also clear that these laws will, if not change, then at least be carefully reviewed in light of the new realities.