Social media is an environment where we share content with each other, consume news, follow our friends, pursue our interests, tell jokes and share memes, and engage in political discussion (through text, images, videos, GIFs and emoji). In social media, public figures and influencers vie for attention and engagement in an environment of high visibility and social contention.
Researchers talk about social media as a space for political engagement, where people discuss and engage with politicians and political issues as well as undertaking social activism (sometimes referred to as connective action and, perhaps more cynically, clicktivism).
However, it’s not only people who are active in this space. All sorts of technologies influence our discussions and information exposure online, ranging from algorithms that recognise patterns and target ads to bots that recirculate content or interact with users. There are also more uncanny or malicious user-produced experiences that shape our information and news environment, such as deep fakes and disinformation.
Beyond this, political dialogue and activity takes place within the context of digital cultures. Some of these cultures are toxic and, despite the many benefits, there are risks in being active online. For public figures, activity on social media brings visibility but also makes them a lightning rod for the hate and extremism associated with social instability.
In summary, the online environment is reactive and aware and characterised by an attention economy, with a side serving of backlash. Attention across social media occurs when people, monitoring and tracking technologies, and bots recognise and respond to online interactions, events and public figures. Public figures such as politicians use social media to be visible, engage with constituents, share their positions on topical debates and build their reputation in the public domain. Both their rise in public esteem and their downfall are captured on social media, along with the public backlash.
Online risks are quite broad and differ in some ways from those we face in person. Risks faced by people active online include cyberbullying (including death threats), trolling, impersonation, deep fakes, pile-ons, bot attacks, disinformation, targeting and scapegoating, hacking and leaks, and image-based abuse. These are intensified, and occur at scale, for public figures such as politicians.
In the light of Jacinda Ardern’s recent resignation as New Zealand’s prime minister, we must consider how online abuse affects our public figures and politicians.
Researchers at the University of Auckland found that she faced online vitriol at a rate 50 to 90 times higher than any other high-profile figure in NZ over the period of social media they observed. In their analysis of what was said, they pointed out that misogyny was a key part of it, particularly because Ardern attracted backlash for being a left-wing woman in power who “symbolically or otherwise was taking a number of steps to undermine structures of patriarchy, racial hierarchies and structures within society”. How ugly is that? Yes, toxic masculinity and the manosphere have a lot to answer for.
NZ Police reported that threats against Ardern had nearly tripled over three years and that anti-vaccination sentiment was a driving force behind a number of those threats. In Australia, the Australian Federal Police reported receiving more than 500 reports of threats to the safety of politicians – including online threats – last year.
The pandemic only exacerbated online hate and conspiracism. Within the social media ecologies studied by the authors of the report on mis- and disinformation in Aotearoa New Zealand, key individuals and groups producing mis- and disinformation capitalise on growing uncertainty and anxiety amongst communities – related to Covid-19 public health interventions, including vaccination and lockdowns – to build fear, disenfranchisement and division.
With rhetoric from international groups trickling into Australia – particularly in Melbourne – lockdowns put the population into a pressure cooker that was intensified by a media environment of uncertainty, disinformation and misinformation. Mis- and disinformation is transmitted within and across platforms, often very rapidly, reaching large audiences who have likely been targeted. We know that this occurs in the political domain because of the revelations from the Cambridge Analytica scandal.
The authors of the report on mis- and disinformation observe that mis- and disinformation also particularly targets and scapegoats already marginalised or vulnerable communities – for whom distrust of the state is the result of intergenerational trauma and lived experience of discrimination or harm – which can increase engagement with conspiratorial explanations and disinformation.
In Australia we saw this rent in the social fabric come to light as the far left and far right converged on issues, which inflamed social media activity around these topics. We also saw online polarisation spill into the streets, with protests organised through social media.
The point I wanted to make clear in a recent RMIT expert alert on this topic is that, despite the ‘new normal’ and ‘post-pandemic’ messaging, we are still experiencing the pandemic and its intensity is still very much alive.
This means we still have a great deal of social instability, which increases the risk that public figures will receive online abuse (beyond the usual disagreements and name-calling). Legally, the ‘serious harm’ threshold for adult cyber abuse investigations is set deliberately high so that it balances freedom of speech, or legitimate expressions of opinion, against the need to protect everyone’s ability to participate online.
People are fatigued by the pandemic and facing rising mental health issues – and that includes our politicians. However, the public also have a low tolerance for government intervention, so our politicians will continue to be in the firing line.
There are many strategies we can take to deal with online abuse targeting adults, but for politicians it’s more complex as they are public figures whose real names and professional (and personal) lives are in the (social) media. Platforms may deploy content moderation strategies and enforce their terms and conditions by banning people, but how real is the threat that online abuse poses to the personal and emotional safety of those on the receiving end?
The Dangerous Speech project defines “dangerous speech” as any form of expression (e.g. speech, text or images) that can increase the risk that its audience will condone or participate in violence against members of another group. So that’s a fair bit more than an ego battering or an attempt to take someone down a notch or two (as is the Australian way).
We need to collectively keep this discussion going about how we can support a healthy digital political domain and curtail its more toxic aspects. Politicians – and the quality people we would like to see enter political life – need preparation before stepping into the social media ring. Many may have grown up in the digital environment, sometimes referred to as digital natives, but operating as a public figure brings vastly different risks.