Digital misogyny: Why gendered disinformation undermines democracy
To mark International Women’s Day, 8 March, we shed light on the profound impact of gendered disinformation for a broader audience and suggest how the issue can be addressed. We #ChooseToChallenge, raise awareness of bias and take action for equality.
IMS recognises the threat of gender-specific and sexualised disinformation often targeted at women to exploit widespread misogyny. In contributing to the UN Special Rapporteur’s upcoming annual thematic report for the UN Human Rights Council at its 47th session in June 2021, IMS has pledged to continue to combat disinformation through support for media and digital literacy initiatives and quality, gender-sensitive content.
In 2014, the Iranian journalist Yeganeh Rezaian woke up to a hacked Facebook account and an email whose senders threatened to spread “dirty photos” of her on social media. She had recently married The Washington Post’s Tehran bureau chief, Jason Rezaian. Shortly after the email arrived, Rezaian was locked out of her account, Iranian security services raided her home, and she and her husband were arrested and placed in solitary confinement. She was released on bail; her husband, convicted of espionage, spent 544 days in prison before the couple moved to the U.S. Yet the gendered and sexualised disinformation campaign against her continued.
“It portrays me as a young, uneducated woman…who is so fascinated with living abroad,” Rezaian told the research team of the report “Malign Creativity: How Gender, Sex, and Lies are Weaponized Against Women Online.”
Rezaian’s experience – of being the target of a state actor that used a false gendered narrative to defame her and her husband – is a classic example of state-sponsored gendered disinformation, which has significant implications for women’s equal participation in society. But state actors are not the only abusers, and the motives and targets of gendered disinformation vary.
With social distancing policies due to the Covid-19 pandemic, most people are spending more time online. Journalists use social media more than ever: to report on evolving stories, to connect with sources and readers and to publicise their work. Today, many journalists regard maintaining their presence on social media as a prerequisite for professional success.
Many social media platforms promise to improve their detection tools, to become safer and to provide an opportunity for everyone to be heard in public. Yet all too often, this is not the case. As a member of a media development organisation, I know that women journalists, already living under a double threat due to their gender and their profession, are often targeted violently on three fronts: for being female, for being journalists and for being online.
Women face many types of attacks online; one of these is gendered and sexualised disinformation. It can be defined as “a subset of online gendered abuse that uses false or misleading gender and sex-based narratives against women, often with some degree of coordination, aimed at deterring women from participating in the public sphere. It combines three defining characteristics of online disinformation: falsity, malign intent, and coordination”. This type of harassment is planned and set in motion deliberately to silence women who raise their voices on social media, and abusers often deploy both sex- and race-based narratives, compounding the threat for women of colour.
This happens not only to women in media but also to female human rights defenders, politicians, entrepreneurs and countless other women who use social media for personal or professional reasons. For example, a 2016 survey of female parliamentarians from 107 countries found that more than 85 percent use social media to engage with their voters, and analysis of the 2020 U.S. primaries shows that female candidates were attacked by fake news accounts more often than male candidates were. Technical innovations expand both the ways abusers can attack and the reach of those attacks. A recent study found that a bot on the messaging app Telegram had created over 668,000 fabricated, pornographic images of women without their consent. Similarly alarming, research found that 96 percent of all deepfakes depict women in fabricated, non-consensual pornography.
As adversaries attempt to exploit widespread misogyny, women may become less likely to participate in public life. Yet while this recurring problem affects women all over the world, few if any resources are dedicated to understanding how profoundly it affects our democratic processes. The impact of gendered and sexualised disinformation on women in public life, as well as its corresponding effect on national security and democratic participation, is conspicuously absent from the discourse on disinformation.
The use of coded language, of iterative, context-based visual and textual memes, and of other tactics to avoid detection on social media platforms has been termed “malign creativity”. This is perhaps the greatest challenge to detecting, challenging and denormalising online abuse, because it is less likely to trigger automated detection and often requires moderate-to-deep situational knowledge to understand. Two examples of malign creativity are spelling a word as ‘b!tch’ to evade automatic detection, and sending an image of an empty egg carton to direct fertility-related abuse at a woman.
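To illustrate why such obfuscated spellings slip past simple filters, here is a minimal sketch of a keyword-based detector with and without character normalisation. It is not any platform’s actual moderation code: the word list, the substitution map and the function names are illustrative assumptions, and real systems must also weigh context, multiple languages and false positives.

```python
# Minimal sketch: why naive keyword matching misses "malign creativity".
# Illustrative only; not representative of real content-moderation systems.

ABUSIVE_TERMS = {"bitch"}  # hypothetical placeholder word list

# Common character substitutions used to evade filters (leetspeak-style).
SUBSTITUTIONS = str.maketrans({"!": "i", "1": "i", "0": "o", "@": "a", "$": "s", "3": "e"})

def naive_filter(text: str) -> bool:
    """Flags a post only if an abusive term appears verbatim."""
    return any(term in text.lower() for term in ABUSIVE_TERMS)

def normalising_filter(text: str) -> bool:
    """Normalises common character substitutions before matching."""
    normalised = text.lower().translate(SUBSTITUTIONS)
    return any(term in normalised for term in ABUSIVE_TERMS)

post = "what a b!tch"
print(naive_filter(post))        # False: the obfuscated spelling slips through
print(normalising_filter(post))  # True: normalisation recovers the slur
```

Even a normalisation step like this does nothing against image-based tactics such as the egg-carton example, or against context-dependent memes, which is why purely automated detection falls short and situational knowledge remains essential.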
Gendered disinformation is rooted in societal patriarchal structures; thus, to be effective, any response must be holistic, cutting across related fields such as safety and engaging multiple levels of society, e.g. internet intermediaries, lawmakers and policymakers, and employers. There is no quick fix, and to echo the thematic report to the UN Human Rights Council at its 47th session, IMS suggests the following:
- As safety measures link closely to the issue of gendered disinformation, internet intermediaries must offer incident reporting tools that allow women to report multiple abusive posts promptly and thus provide appropriate context and a more holistic view of the abuse they are experiencing. In the same vein, employers need to develop robust support policies for those facing online harassment and abuse, including clear protocols to identify and report it. National mechanisms for journalists’ safety should treat gendered disinformation as a genuine threat.
- Automated detection methods must be improved, and internet intermediaries should introduce “nudges” to discourage users from posting abusive content. Third-party fact-checkers, and those crowdsourcing and setting up datasets to identify disinformation, must incorporate a gendered perspective in their training to identify and respond to gendered disinformation.
- Online gendered disinformation must be monitored, and its data gathered, so that we can understand its scope, prevalence and societal impact, and can use these findings in advocacy work.
- Independent media must be supported to produce high-quality, gender-sensitive content that can function as timely counter-narratives to gendered disinformation.
- A gendered perspective should be fully integrated into media and information literacy efforts, since these can help establish gender issues as important and legitimate social, political and cultural matters, and can also help audiences recognise gendered disinformation narratives.