Cutting through the hype

IMS partners are beginning to use AI and exploring what it can – and can’t – do.

The launch of ChatGPT at the end of 2022 raised major questions about the opportunities and threats artificial intelligence (AI) poses for journalism and media organisations. Over the past decade and a half, IMS has witnessed and dealt with how data-driven processes – such as automated social media moderation and Google search, both of which rely on AI components – are rapidly and radically altering our partners’ local digital realities and their ability to create change.

The introduction and widescale adoption of generative AI systems are forcing our partners to cut through the hype, identify and develop solutions, and mitigate risks to their local communities at lightning speed.

An AI-generated news presenter

Currently, most AI solutions are produced by dominant tech companies based in the US. One challenge this poses for our partners is the scarcity of models trained in their local languages – a problem that can be seen in the limited options generative AI products offer.

Centre for Innovation and Technology (CITE), an IMS partner in Zimbabwe, ran up against this when designing their AI-generated news presenter, Alice. Alice was modified from an off-the-shelf solution that did not offer a Zimbabwean person as a pre-programmed option. As a result, Alice’s point of departure was South African, and she mispronounced local names. While audiences felt she lacked a human touch, they were positive about the new technology. And with Alice presenting the news, CITE staff had more time to research and produce stories for her to present, which in turn increased CITE’s output.

Fighting disinformation at scale

In Asia, an IMS Disinformation Learning Forum and subsequent Public Interest Tech Innovation Lab brought together partners from across the region. A notable outcome, funded by a pilot grant, was the development and implementation of an AI-driven add-on to existing data-driven counter-disinformation efforts in local languages, which boosted the efficiency of human factcheckers.

AI-generated deepfake pornography makes up 98 percent of all deepfake videos online, according to Home Security Heroes – and, according to the same source, 99 percent of the individuals targeted by deepfake pornography are women. There is therefore a significant need for countermeasures. IMS partner JOSA, in Jordan, is using an AI tool to flag hate speech on Facebook, which it then reports as violations. However, we cannot expect the tech companies to take action if they are not incentivised and enabled by policy change.

Changing laws, policies and algorithms

The Palestinian social media monitoring organisation 7amleh, a long-time IMS partner, has for years been one of the leading actors documenting the discriminatory realities Palestinian social media users face because of tech company policies and AI-driven moderation.

After the war in Gaza broke out, 7amleh’s research showed an intensifying and disproportionate moderation of and limitation on Palestinian voices, including journalists. As a board member – along with Google, Meta and Microsoft – of the multistakeholder organisation Global Network Initiative (GNI), IMS helped facilitate 7amleh’s presentations of its findings to the full GNI board, as well as direct senior-level meetings with Meta and Google to discuss those findings and advocate for change, both in concrete cases and at the policy level. However, the leading companies behind our AI-driven internet often do not listen to our partners or give them access to the data needed.

That is why IMS collaborated with Research ICT Africa (RIA) in South Africa and Sida’s Africa democracy team on multiple efforts. This culminated in four workshops in November 2022 that brought IMS journalistic partners together with data scientists, African digital rights organisations, electoral experts, diplomats and others to identify the potential of coalitions and concrete steps forward at the nexus of freedom of expression and digitalisation.

“If we want to understand what is going on in social media platforms, we need to access their data. Some big tech companies offer data access to Europe and America, but not to Africa, and this needs to be addressed,” said Professor Guy Berger, distinguished fellow at RIA and now IMS board member, speaking at the event.

Similar challenges are faced by partners in Ukraine and neighbouring countries. IMS is currently working with local partners in Ukraine to implement UNESCO’s Guidelines for the Governance of Digital Platforms to safeguard freedom of expression and access to information online. This builds on the momentum from the Ukraine War and Disinformation Roundtable that IMS launched with Ukrainian partners and the Danish Tech Ambassador in 2022, and wider regional efforts.

Public interest infrastructure

The above work is guided by a long-term vision that our digital environments are enabled and upheld by public interest infrastructures – sets of digital tools, including AI, that are explicitly designed to serve the public interest rather than any political, commercial or factional interest.

In 2023, we launched the report Public interest infrastructure: Digital alternatives in our data-driven world and journalism’s role getting there. At the global launch, a representative from Microsoft – arguably the most dominant company in the field of AI – told the audience: “Technology companies have a responsibility to be aware of public interest infrastructure and how to be a part of building them.” The challenge with policy change and the creation of digital alternatives is that both are slow and demand an understanding of the problems at hand and of potential solutions. As our partners experience daily in their local communities, when a society is faced with “unknown unknowns” – whether created by reckless tech companies or otherwise – that is when we need good journalism.