This programmatic option describes the conduct of training and the provision of technological resources that national stakeholders need to assess potential deepfakes and synthetic media and ascertain their veracity.
Deepfakes and other synthetic media – including video, photos and audio – present complex challenges to election integrity. Generative AI technologies are increasingly capable of crafting convincing yet false content. There are broad fears that these technologies will be used to generate electoral information pollution to spread misinformation, sow discord and manipulate public perception of candidates or of the integrity of the electoral process. Conversely, there is a concern that when a candidate, for example, views a video or image as unfavourable, they may seek to discredit it by falsely claiming that it is synthetic. Ultimately, the rise of synthetic media has made the capacity to discern the authenticity of information more vital than ever.
While synthetic media is not a new phenomenon, generative AI technologies now make it possible to produce such content with minimal expertise and cost. The resulting content is more convincing than before, and disproving its veracity can be highly challenging. This programmatic option offers insights into how the capacity of key national stakeholders can be built to assess the veracity of media, and how they can be provided with tools that support this work.
Just as vulnerable communities, often already marginalized and disenfranchised, are disproportionately affected by information pollution, they are also likely to be the subjects of deceitful synthetic media. Women in public life, particularly candidates, are especially vulnerable to deepfakes and other synthetic media. Women candidates receive a disproportionate volume of hate speech and other violent online abuse, and deepfakes present a new and abhorrent set of options for abusers. In particular, content depicting candidates in sexual contexts can undermine the integrity of a political campaign or deter women from participating in elections.
As generative AI content becomes more prevalent, the rapid assessment and identification of deepfakes may become a vital and frequent activity. As noted, women may disproportionately find themselves the targets of such attacks; yet they often lack the capacity and resources to analyse or discredit such content.
While the ability to assess media will be an important skill for fact-checking organizations, this may become a concern for all actors in the election process, since they may also be targets of such content.
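By way of illustration, one lightweight triage technique is to compare a circulating image against a known authentic original using a perceptual hash: a large hash distance signals that the image has been altered or replaced. The sketch below is a minimal, hypothetical example, assuming Python with the Pillow and ImageHash libraries; the file names and threshold are placeholders, and perceptual hashing cannot detect a wholly fabricated image that has no authentic counterpart.

```python
# Minimal triage sketch: compare a circulating image to a known authentic
# original using perceptual hashing (assumes: pip install Pillow ImageHash).
# File names and the threshold are hypothetical placeholders.
from PIL import Image
import imagehash

def hash_distance(original_path: str, suspect_path: str) -> int:
    """Return the Hamming distance between the perceptual hashes of two images.

    0 means perceptually identical; larger values indicate greater divergence.
    """
    original = imagehash.phash(Image.open(original_path))
    suspect = imagehash.phash(Image.open(suspect_path))
    return original - suspect  # ImageHash overloads '-' as Hamming distance

if __name__ == "__main__":
    distance = hash_distance("authentic_photo.jpg", "circulating_photo.jpg")
    if distance <= 5:  # illustrative threshold, not authoritative
        print("Images are perceptually similar; likely the same photo.")
    else:
        print(f"Hash distance {distance}: the circulating image diverges "
              "from the original and warrants closer forensic review.")
```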
In many cases, of course, the fake nature of the content is self-evident. However, as AI continues to become more sophisticated, the approaches required to verify the authenticity of content may become more complex. The type of media – for example, video, imagery or audio – will influence the skills and tools required to assess provenance.
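To illustrate how the media type shapes the tooling, the sketch below shows one elementary check for still images: reading embedded EXIF metadata, which AI generators typically omit and re-encoding often strips. This is a hypothetical training example assuming Python with the Pillow library; the file name is a placeholder, and the absence of metadata is a prompt for further investigation rather than proof of fabrication, since metadata can also be forged or legitimately removed.

```python
# Minimal first-pass provenance check: inspect an image's EXIF metadata
# (assumes: pip install Pillow). The file name is a hypothetical placeholder.
# Caveat: missing EXIF does not prove an image is synthetic, and EXIF can be forged.
from PIL import Image
from PIL.ExifTags import TAGS

def summarize_exif(path: str) -> dict:
    """Return a dict of human-readable EXIF tags, or an empty dict if none."""
    with Image.open(path) as img:
        exif = img.getexif()
        return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

if __name__ == "__main__":
    metadata = summarize_exif("suspect_image.jpg")  # hypothetical file
    if not metadata:
        print("No EXIF metadata found - common in AI-generated or re-encoded images.")
    else:
        for tag, value in metadata.items():
            print(f"{tag}: {value}")
```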
The following may be considered when designing an activity:
Understanding the capacity of potential threat actors can help determine how advanced the training needs to be, what tools may be required and the potential need to engage specialist capabilities.
Ascertaining the capacity of the potential trainees is important for understanding what types of training they can absorb and which activities they can execute.
Proving that content is false is important, but unless this is convincingly communicated to the public, it will not address the harm caused. Determining how information flows in the country, and which entities are most credible, will guide how the activity should be designed and deployed.
The legal framework – specifically the legality of fake imagery within the political context as well as pornographic imagery – will vary from country to country; in each case, it will guide who should be prioritized for capacity-building. Part of this will depend on whether any authorities have a legal remit to regulate the information environment.
Journalists, fact checkers, candidates, human rights defenders and regulatory bodies are groups that would be best placed to benefit from capacity-building in these areas, allowing them to better combat information pollution from their distinct areas of responsibility.
The legal framework around elections will have a significant impact on how the project is designed. In particular, where a campaign regulatory body exists, it may be a natural entity to engage; in this case, efforts may call for providing specialist skills beyond direct capacity-building. However, expanding these capacities across a broader and more diverse cohort of groups will strengthen the all-of-society approach to countering information pollution.
As with any type of investigative work, consideration should be given to the protection concerns of actors who work to discredit political or security narratives. In a contested or conflict environment, measures would be warranted to ensure that investigative skills are applied equitably to all candidates, regardless of partisan affiliation.
When considering partners and trainees, involving youth in capacity-building activities is important. This may take the form of youth-focused civil society organizations, human rights defenders or journalists.
A focus on women's civil society organizations, or organizations with a history of defending the human rights of women, can help ensure that these tools advance gender equality.
Training can be publicized through typical project outreach activities, with partners also choosing to publicize their involvement. It is possible – though not necessarily likely – that making it clear that these capacities exist in the country's information ecosystem will deter political actors from using deepfakes.
As with fact-checking exercises, identifying a falsehood is vital but not sufficient unless the harm it caused is also broadly addressed. For this reason, coordination with other credible actors to promulgate the message is important.
In some cases, the content will be illegal, and while this may call for engagement with a regulatory body, it may also require direct communication with law enforcement by the trainees or their institutions.
New techniques for creating convincing synthetic media are constantly emerging, and the means of identifying them are also shifting. It is important to establish structures that keep trainees up to date with these developments.
To build the capability of a greater number of people to practise these skills, a training-of-trainers methodology can also be deployed.
UNDP Libya runs a digital research fellowship programme to build national capacities in various counter-disinformation techniques, including assessing deepfakes.
For more information, contact: [email protected]