From violent insurrections to digital propaganda, the rise of far-right extremism reveals how misinformation can evolve into a serious national security threat.
An insurrection might seem like a relic of the past, reminiscent of days when public executions were a routine spectacle. Yet in January 2021, a crowd of Trump supporters violently stormed the US Capitol in an audacious attempt to overturn a presidential election. Explosives, including pipe bombs and Molotov cocktails, were discovered during the chaos.
The former US president praised these rioters as “patriots,” while the current president rightfully labeled them “terrorists.”
It’s difficult to imagine such events unfolding in Singapore, where security is tight, rebellion is rare, and citizens are largely obedient. However, the lesson here is that extremism now extends beyond the typical stereotypes of radical Islamism.
The modern face of terrorism no longer involves traditional attire like keffiyehs or turbans; instead, it may come wrapped in ironic memes, 4chan references, and symbols like Pepe the Frog.
Hate in the Internet Age
The Alt-Right is often described as a “modern hate group,” one that blends far-right conservatism with internet culture. Unlike earlier movements that relied on Nazi memorabilia or white hoods, this group thrives on provocative ideas, cold rationalism, and bigotry. The internet has made its anti-feminist, anti-immigration, and anti-multiculturalist ideology more pervasive than ever, spreading it across national borders.
They disguise hate as free speech, often hiding behind internet trolling and pranks. It’s a mix of bigotry and humor, making it difficult to take their words seriously at first.
Although it might seem unlikely that these US-born extremist ideas would find footing in Singapore, the threat they pose is very real. White nationalist violence surged in the US during Trump’s presidency, surpassing Islamist extremism in its domestic impact. Movements like QAnon, with millions of followers, have even become national security concerns, as evidenced by the Capitol attack.
Should we, then, treat online trolling with political agendas as national security threats? This question was partly answered in December 2020, when a 16-year-old Singaporean boy was arrested for plotting attacks on mosques after being deeply influenced by far-right ideologies online.
Radicalization doesn’t happen overnight. This teenager, who should have been focused on schoolwork and growing up, was instead plotting an attack after shopping for supplies on Carousell and researching targets on Google Maps. How potent must misinformation and conspiracy theories be to push someone so young into planning such violence?
Victims of Misinformation?
But are all conspiracy believers simply misguided fools? Perhaps they’re victims themselves. Research shows that once a person forms a belief, it is remarkably resistant to change, even when confronted with evidence to the contrary.
In the case of self-radicalized individuals in Singapore, many did not actively seek extremist ideas. One national serviceman, arrested at 19, first encountered extremist content during his secondary school years while watching videos on the Israeli-Palestinian conflict. Over time, YouTube’s algorithm began suggesting ISIS-related content, leading to his eventual radicalization.
Many other cases of self-radicalization in Singapore also have an online component, mirroring patterns seen abroad. Social media plays a major role in spreading extremist ideologies and turning them into national security threats.
One former alt-right filmmaker based in London recounted how social media, especially YouTube, significantly contributed to his growing anger toward Muslims. As he consumed more videos, the platform’s algorithm began recommending increasingly extreme content, pushing him further into a skewed worldview.
As in the movie Inception, where ideas are planted in the subconscious, scrolling through social media can subtly brainwash individuals into accepting false information as truth.
While we can’t ban algorithms, we must be aware that online communities can create echo chambers that reinforce dangerous ideas. In several local cases, the troubling online activity of self-radicalized individuals went unnoticed even by friends and family.
The current anti-vaccine movement in Singapore is another example of how conspiracy theories take root in times of fear and uncertainty. Misleading information offers comfort and an illusion of control over one’s circumstances. Unfortunately, once people latch onto misinformation, they become trapped by their own cognitive biases.
Weaponized Irony
It doesn’t take much digging to find misinformation and conspiracy theories circulating within local communities. Anti-vaccine groups on Telegram are still very active, and they continue to spread misleading claims.
If people can be convinced that COVID-19 vaccines are a government plot to harm them, it’s easy to believe that malicious actors are already sowing misinformation to undermine official pandemic responses.
While not all government measures are beyond critique, there’s a difference between healthy skepticism and believing that microchips are being implanted in people. As cognitive scientists Hugo Mercier and Dan Sperber put it, “A mouse that’s convinced there are no cats around is likely to become dinner.”
Disinformation campaigns are nothing new in extremist propaganda, but far-right groups have mastered the art of weaponized irony — using humor and internet culture to spread their ideas. Did you know that milk, of all things, has become a symbol of white supremacy?
The result is that we’re often confused about what’s actually racist or bigoted, dismissing serious ideologies as mere jokes. Irony lets them spread their hateful messages while deflecting criticism.
As academic Viveca S. Greene notes, alt-right memes might seem like edgy humor, but they’re often designed to convey deeper, more dangerous messages. Even seemingly harmless pop culture references can be used to mask overt racism or bigotry.
Take Brenton Tarrant, the Christchurch shooter, for example. His manifesto was littered with internet jokes like “Fortnite trained me to be a killer,” alongside serious plans to commit mass murder.
Blaming memes for terrorism would be an oversimplification, but it’s clear that many of us don’t fully understand how internet culture can be used to radicalize individuals.
Real-World Violence
So, back to the question: Should we view misinformation as a form of terrorism? Can we still treat ideas as harmless in today’s world?
The answer is already apparent, and it’s much closer to home than we’d like to admit.
In the US, conspiracy theories and misinformation have fueled far-right violence. In Singapore, we’ve seen how online sentiment has incited xenophobic attacks, like the incident where a man kicked an Indian woman exercising without a mask.
Another incident involved a man accusing an Indian family of spreading the virus, just days after the Delta variant was identified in Singapore.
As biases harden in social media echo chambers, we risk a vicious cycle of violence between far-right and religious extremists in Singapore.
The internet, governed by algorithms, reflects our preferences back at us and reinforces our existing biases. It’s becoming increasingly difficult to distinguish original thought from the influence of online misinformation. Before these ideas spill over into real-world violence, we must recognize the psychological forces that shape our beliefs.