Content moderation in Trump’s America is a political minefield
Historically, social media platforms have moderated content the same way a parent regulates a roomful of teenagers: If you live under my roof, you follow my rules. But as social media becomes more pervasive in our offline lives—and more inherently political—the questions multiply: Who really owns the roof? Who makes these rules? And are our civil liberties at risk?
This debate is likely to intensify under the administration of President-elect Donald Trump, as the politicization of content moderation reaches a fever pitch.
How did we get here?
The evolution of content moderation started slowly and accelerated as the influence of social media grew. As Facebook, Twitter, and YouTube played key roles in the Arab Spring of the early 2010s, a series of protests against government corruption across the Arab world, it became increasingly clear that something had to be done. Facebook was used as a tool for activist organizing, but it quickly became controversial. YouTube reconsidered its rules on violent videos with educational or documentary value after activists in Egypt and Libya used the platform to expose police torture. Around the same time, Twitter launched a policy allowing tweets to be withheld in specific countries.
In 2013, documents leaked from Facebook's moderation office revealed the specifics of the company's moderation guidelines. The following year, the issue of online radicalization came to the fore on social media platforms: YouTube reversed its policy of allowing certain violent videos after footage of the beheading of journalist James Foley went viral, and Twitter faced backlash over the uncontrolled harassment of the female leads of the Ghostbusters reboot, which resulted in changes to its content moderation.
Behind the scenes, the workers who moderate this content reported poor working conditions. Then came 2016.
Misinformation and disinformation plagued the US presidential election between Hillary Clinton and Trump. Even as Facebook launched its fact-checking program, the platform struggled to stop the spread of misinformation and election interference. In Myanmar, the Rohingya people faced mass ethnic violence fueled by content on Facebook. Meanwhile, Facebook Live became a venue for broadcasts of suicides and shootings, including the aftermath of the police shooting of Philando Castile. In 2018, TikTok launched in China, the same year Twitter removed 70 million bots in an effort to curb the influence of political misinformation. Later that year, YouTube released its first transparency report, and Facebook announced an oversight board to allow users to appeal its decisions. In 2019, the Christchurch terror attacks, broadcast on Facebook Live, sparked the Christchurch Call, a "call to action to eliminate terrorist and violent extremist content online," in which countries pledged to work together to prevent terrorists and violent extremists from exploiting the internet. Later that year, Twitter allowed users to appeal content removals, and TikTok expanded internationally.
By this point, Trump was president. He signed an executive order on preventing online censorship, which targeted Section 230 of the Communications Decency Act and was intended to curb what he saw as bias against himself and other conservatives in how platforms moderate content. Twitter had previously labeled many of Trump's tweets as misleading. He and others in his party accused platforms like Twitter, Facebook, and Google of anti-conservative bias, which led to congressional hearings and investigations into content moderation, a campaign that Katie Harbath, founder and CEO of the tech policy firm Anchor Change and a former Facebook executive, describes as a battle over "reputation."
The pandemic, and peak politicization on January 6
Then, COVID-19 hit. Misinformation about the global pandemic ran rampant, contributing to deaths. Rules governing online content expanded globally to combat the growing spread of hate speech, election misinformation, and health misinformation. Facebook rolled out policies targeting Holocaust denial, hate groups, organized militias, and conspiracy theories, while Twitter launched a transparency center.
But January 6, 2021, marked a turning point. Platforms including Facebook, Twitter and YouTube banned or locked accounts of then-President Trump for inciting violence during the Capitol attack.
"I would say Trump's deplatforming was the peak swing of the pendulum," Katie Harbath, founder and CEO of the tech policy firm Anchor Change and a former Facebook executive, told Mashable. "From there, over the next four years, [platforms have] come back toward the center in terms of how much content they're willing to remove. [And] they became quieter about it. They're not as transparent about it because they don't want to put a political target on their backs."
Where are we now?
Trump's profiles have since been restored on all of the major social media platforms. But the tension remains: Republicans claim content moderation suppresses conservative voices. As TechFreedom president Berin Szóka told Mashable: "Censorship is just moderation of content that someone doesn't like."
Elon Musk, a self-proclaimed "free speech absolutist," acquired Twitter in late 2022 and has fueled this rhetoric. In January 2023, House Republicans established a subcommittee on the "Weaponization of the Federal Government" to target so-called censorship of conservative views. Meanwhile, a lawsuit brought by state attorneys general accused President Joe Biden's administration of pressuring platforms to suppress COVID-19 misinformation, framing that pressure as a form of speech suppression.
In a notable shift, Meta is reducing its focus on political content, particularly on its Twitter competitor Threads, which Harbath said is "not necessarily content moderation, but deciding what type of content they show people."
What do we see in the future of content moderation?
President-elect Trump has made content moderation a campaign issue. His pick for FCC chairman, Brendan Carr, has echoed this agenda, calling for the dismantling of what he calls the "censorship cartel" and pledging to "restore the free speech rights of ordinary Americans."
"To do that, they'd have to bully or require tech companies to spread speech they don't want to spread," Szóka said. "Republicans are waging a war on content moderation."
As Harbath puts it, this "war" could be fought on two fronts: legislation and reputation. On the reputational front, we'll see more congressional hearings with tech executives, more posts from Trump on X, and more heated rhetoric about content moderation in general. The legislative front promises an interesting road ahead.
As Szóka sees it, Carr may, at Trump's direction, target the eligibility standards for Section 230 immunity, which shields platforms from being treated as the "publisher or speaker" of user content, regardless of whether the speech in question is illegal. This means Facebook is not legally responsible for misinformation, hate speech, or other harmful content that appears on the platform it owns and operates.
"[Republicans will] use Section 230, because by doing that, they can say, 'We're not requiring anything,'" Szóka said. "As a private company, you are free to do what you want. But if you want Section 230 immunity, you have to remain neutral, and we decide what neutrality is."
Harbath sees chaos ahead but questions whether Section 230 will actually change: "There may be debates and discussions around it, but whether Section 230 will actually change, I doubt it."
At the same time, the rise of artificial intelligence is reshaping the future of content moderation. "In the next four years, how people consume information [will change so much that] what we're talking about today will be completely irrelevant and look completely different," Harbath said. "AI alone will change how we think about news feeds, what motivates people to post, what content looks like, and it will create new politicization challenges for tech companies."
Should we panic? Probably not. Harbath said it's too early to predict what content moderation will look like in a second Trump term. But we should keep our eyes open. Content moderation rules, and who sets them, are increasingly shaped by political power, public perception, and technological developments, laying the groundwork for a struggle over free speech, corporate responsibility, and the role of government in regulating cyberspace.
"Overall, it's too early to know exactly what this will look like," Harbath said.