Why Social Media Keeps Being a Haven for Hate Speech

In late-stage capitalism, every aspect of human life is monetized, or is about to be monetized, for maximum profit extraction for the benefit of the capitalist class. This stretches from the proliferation of subscription models that demand payment in perpetuity to the push for people to take on multiple streams of income just to make a liveable wage. It might seem that the social media platforms we use are an exception, as all of them can be used without paying the company any money. (This ignores services like Twitter Blue and Youtube Premium. Although both Youtube and Twitter punish users who don't buy a subscription by pushing ads, the platforms can still be used and curated without paying.) In reality, social media companies extract user data and try to maximize use time in order to grow their profits from selling data and enticing advertisers.

Almost all social media algorithms emphasize use time and watch time as the basis for a content creator's growth on the platform. This is why Youtube and Instagram have adopted the Tiktok style of short-form content that adapts to the user's preferences and can be scrolled through endlessly. On top of that, many social media companies have figured out that inflammatory or radical content helps maximize their retention rates. The push for more users and more use time, and in turn more profit, has made social media algorithms promote bigotry, disinformation, and hate.

Although there were (and still are) corners of the internet that house the downright genocidal and White/Hindu supremacists, mainstream discussion of people being led further right by social media started with the "alt-right pipeline" on Youtube circa 2016. Before that, Youtube saw a surge in anti-feminist content thanks to Gamergate, in which a few women video game journalists faced intense vitriol for discussing the sexism and misogyny baked into the video game industry and community. This content primed a group of Youtube users, mostly men, to be receptive to racist and White supremacist lectures. It was shown that, following the thread of Youtube's recommendations, one could go from a video bemoaning "the feminists ruining Ghostbusters" to a video calling for the genocide of Black people and Muslims.

There have been detailed debates about whether the alt-right pipeline was merely a catalyst for people who already harboured bigoted ideas or a tool for radicalising the apolitical and the centrist towards the far right. Either way, the far-reaching effects and the constant backlash made Youtube reconfigure its algorithm and ban prominent right-wing and neo-Nazi creators. But just because the creators and the major spaces were disrupted and destroyed doesn't mean that the members of the community stopped being Nazis. They still spread their hatred on social media, often targeting whichever minority groups they pleased. The concept of the BJP's IT Cell, a group of BJP members and supporters who organise mass online harassment campaigns and hashtag trends to spread their fascist ideas, is well known to everyone who is even somewhat online in India. These spaces of hate learned to hide their tracks better.

In 2021-2022, all major social media platforms, including Tiktok, Instagram, Twitter, and Youtube Shorts, were being used by the self-proclaimed alpha male Andrew Tate to spread his vitriolic misogyny and to recruit men and boys as the downline for his pyramid scheme. The scheme involved fan accounts mass-reposting edits and snippets of his interviews and redirecting others to join the downline. This phenomenon, just like the earlier alt-right movement, led to a widespread uptick in men expressing their misogyny, with the hate reaching boys as young as 10. The success of Andrew Tate's violent misogyny model inspired multiple copycats, spreading and cementing his ideals further. Although most of Tate's social media accounts have been deleted for violating terms of service, he is still held in high regard by his fans, who now downplay or dismiss his history of human trafficking and sexual abuse of vulnerable women.

Around the same time, people started to notice that on Youtube Shorts, Google's "competitor" to Tiktok, it was nearly inevitable to land on right-wing content while scrolling, even when users had never engaged with any right-wing content and had reported it whenever it appeared. This observation, however, has remained anecdotal.

Last year, Elon Musk promised free speech when he took over Twitter. In keeping with that pro-"free speech" promise, he reinstated the accounts of Andrew Tate, Donald Trump, and others who had been removed for egregious violations of Twitter's ToS. This, along with the right-wing ideals Musk expressed on his own account, made Twitter a viable space for the far right to congregate. These accounts could then push their tweets above others' by subscribing to Twitter Blue. This blatant display of bigotry made many Twitter users, mostly racial, religious, caste, and gender minorities, leave the app, while others had to curate their timelines to avoid platforming hate. Despite pushback and criticism from a huge fraction of the user base, Musk continues to reshape Twitter to fit his ideal of a right-wing social media utopia, with accounts whose usernames call for the sexual assault of racial minorities able to buy premium subscriptions, and "documentaries" promoting transphobic ideals shown as mandatory ads to all users.

In this new uptick of bigotry, it is mostly Twitter and Youtube Shorts that draw the ire of people criticising platforms for hosting and pushing such content. But there is another major platform allowing hate to fester in its own way. Instagram Reels, Facebook/Meta's Tiktok alternative, has been noted by some users for the notoriously gross comment sections under its videos. People throw the N-word around with zero regard, as a silly joke, some even mashing it together with other slurs to fit the person whose video they are commenting under. It's expected to see a "you okay lil-" under the comments of every child doing something "cool". Minority creators almost always get comments that attack them or invalidate their experiences, be it trans people simply existing or non-White people sharing their cultures. Sometimes sparse but persistent hate comments snowball into hate campaigns. A few days ago, on November 21st, Pranshu, a queer 16-year-old, took their own life after being subjected to homophobic bullying for wearing a saree.

I looked up the news about Pranshu on Twitter to better write this article, and under a tweet announcing their death, there were Twitter Blue users expressing thoughts ranging from "we do not care" to flagrant queerphobia. These comments overshadowed those from other users expressing grief and rage over the death of the queer teen. A similar fate befell Brianna Ghey, a trans girl, also 16, who was murdered in a transphobic hate crime: users mocked her name and deadnamed her, disrespecting her in death. Twitter is also now the epicenter of the Islamophobic and anti-Palestinian "Pallywood" conspiracy theory, which claims that the videos showing the plight of Palestinians are faked by a group of crisis actors. Supported by the Hindu Right in India, this conspiracy theory is also spread by Israel's official Twitter account.

One can decide to stop using social media altogether, but unless a mass deletion campaign is agreed upon, that decision remains a personal solution rather than a systemic one. Social media like Twitter are still used to mobilise and spread news about activism and the world at large, and Palestinian reporters and civilians are using Twitter and Instagram to show the world their lives under ethnic cleansing. Perhaps the best "solution" is to heavily curate one's social media experience and hope for systemic change; for as long as social media companies prioritise profit over user experience and are run by billionaires with their own agendas, surges in hate speech will be a regular affair.

Keith (he/him) is a queer trans man who describes himself as 'chronically online'. An expert on niche Twitter 'dramas', Keith has seen how online spaces warp and distort with time, capital, and ideological influences, and how social media discourse influences leftist praxis. As someone who also works in a retail pharmacy, Keith has an interest in making healthcare more accessible for the trans and queer community.
