Building national capacities to identify deepfakes and other deceptive generative AI audio-visual content

This programmatic option describes the conduct of training and the provision of technological resources needed for national stakeholders to assess potential deepfakes and synthetic media and ascertain their veracity.

ACTIVITY

DESCRIPTION

Deepfakes and other synthetic media – including video, photos and audio – present intricate challenges to election integrity. Generative AI technologies are increasingly capable of crafting convincing yet false content. There are broad fears that these technologies will be applied to generate electoral information pollution to spread misinformation, sow discord and manipulate public perception of candidates or the integrity of the electoral process. Conversely, there is a concern that when a candidate, for example, views a video or image as unfavorable, they may seek to discredit it by falsely claiming that it is synthetic. Ultimately, the rise of synthetic media has made the capacity to discern the authenticity of information more vital than ever.

While synthetic media is not a new phenomenon, generative AI technologies have provided the means to produce such content with minimal expertise and cost. The resulting content is more convincing than before, and demonstrating that it is not genuine can be highly challenging. This programmatic option offers insights into how the capacity of key national stakeholders can be built to assess the veracity of media, and into the tools that can support them in their activities.

Just as vulnerable communities, often already marginalized and disenfranchised, are disproportionately impacted by information pollution, they are also likely to be the subject of deceitful synthetic media. Women in public life, particularly candidates, are especially vulnerable to deepfakes and other synthetic media. Women candidates receive a disproportionate volume of hate speech and other violent online abuse, and deepfakes present a new and abhorrent set of options for abusers. In particular, the creation of content presenting candidates in sexual contexts can undermine the integrity of a political campaign or deter women from participating in elections.

As generative AI content becomes more prevalent, the ability to rapidly assess and identify deepfakes may become a vital and frequent activity. As noted, women are expected to be disproportionately targeted by such attacks; however, they often lack the capacities and resources to analyse or discredit such content.

While the ability to assess media will be an important skill for fact-checking organizations, this may become a concern for all actors in the election process, since they may also be targets of such content.

In many cases, of course, the fake nature of the content is self-evident. However, as AI continues to become more sophisticated, the approaches needed to assess the authenticity of content may become more complex. The type of media – for example, video, imagery or audio – will influence the skills and tools required to assess provenance.

The following may be considered when designing an activity:

  1. The activity can take the form of training key national stakeholders on how to assess the veracity of media, combined with the provision of tools to support them in their activities. There are various forensic means of analysing content, and tools exist to assist in the task. Some tools assess the content itself, for example checking for inconsistencies, unexpected artifacts or anomalous metadata (a minimal illustration follows this list). Other approaches try to assess the context of the content or verify the reliability of the source.
  2. In more complex environments or where sophisticated actors may be at play, the activity may require the engagement of a professional organization to conduct a more forensic analysis. Such an engagement may take the form of a partnership with a well-resourced media organization or a contract with a private company.
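
To illustrate the metadata-checking approach referenced in point 1, the sketch below is a minimal example, assuming Python 3 with the Pillow library; the file name deepfake_suspect.jpg and the list of expected fields are illustrative assumptions rather than part of this guidance. Absent or inconsistent metadata is a signal worth teaching trainees to check, not proof of manipulation.

```python
# Minimal sketch: inspect an image's EXIF metadata for gaps that may warrant closer scrutiny.
# Assumes Python 3 with the Pillow library installed (pip install Pillow).
# The file name and the list of "expected" fields below are illustrative only.
from PIL import Image
from PIL.ExifTags import TAGS

def summarize_exif(path):
    """Print an image's EXIF tags and flag commonly expected camera fields that are missing."""
    image = Image.open(path)
    exif = image.getexif()

    if not exif:
        print(f"{path}: no EXIF metadata at all (common for AI-generated or re-encoded images)")
        return

    # Map numeric tag IDs to human-readable names.
    tags = {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}
    for name, value in tags.items():
        print(f"  {name}: {value}")

    # Fields a genuine camera photo would usually carry; their absence is a signal, not proof.
    expected = ["Make", "Model", "DateTime"]
    missing = [field for field in expected if field not in tags]
    if missing:
        print(f"{path}: missing expected fields: {', '.join(missing)}")

summarize_exif("deepfake_suspect.jpg")
```

In practice, a training session would pair such simple checks with contextual verification, such as reverse image searches and source verification, since metadata can easily be stripped or forged.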

 

IMPLEMENTATION CONSIDERATIONS

1.

What are important considerations prior to initiating the activity?

Understanding the capacity of potential threat actors can help determine how advanced the training needs to be, what tools may be required and the potential need to engage specialist capabilities.

Ascertaining the capacity of the potential trainees is important to understanding what types of training they will be capable of absorbing and which activities they are able to execute.

Proving that content is false is important, but unless this is convincingly communicated to the public, it will not address the harm caused. The determination of how information flows in the country and which entities are most credible will guide how the activity should be designed and deployed.

The legal framework – specifically the legality of fake imagery within the political context as well as pornographic imagery – will vary from country to country; however, this will guide who should be prioritized for capacity-building. Part of this will depend on whether any authorities have a legal remit to regulate the information environment.

2.

Who is best placed to implement the activity?

Journalists, fact checkers, candidates, human rights defenders and regulatory bodies are groups that would be best placed to benefit from capacity-building in these areas, allowing them to better combat information pollution from their distinct areas of responsibility.

3.

How to ensure context specificity and sensitivity?

The legal framework around elections will have a significant impact upon how the project is designed. In particular, the existence of any campaign regulatory body will create an entity that may benefit from engagement. In this case, efforts may call for providing specialist skills, beyond direct capacity-building. However, expanding the capacities in a broader and more diverse cohort of groups will strengthen the all-of-society approach to countering information pollution.

As with any type of investigative work, consideration should be given to the protection concerns of actors who work to discredit political or security narratives. In a contested or conflict environment, measures would be warranted to ensure that the investigative skills are applied equitably to all candidates, regardless of partisan affiliation.

4.

How to involve youth?

When considering partners and trainees, involving youth in capacity-building activities is important. This may take the form of youth-focused civil society organizations, human rights defenders or journalists.

5.

How to ensure gender sensitivity/inclusive programming?

A focus on women's civil society organizations, or organizations with a history of defending the human rights of women, can support these tools in advancing gender equality.

6.

How to communicate about these activities?

Training can be communicated in typical project-outreach activities, with partners also choosing to publicize their involvement. It is possible – though not necessarily likely – that making it clear that these capacities exist in the country’s information ecosystem will deter political actors from using deepfakes.

7.

How to coordinate with other actors/Which other stakeholders to involve?

As with fact-checking exercises, identifying a falsehood is vital but not sufficient unless the harm it has caused is also addressed. For this reason, coordination with other credible actors to promulgate the message is important.

In some cases, the content will be illegal, and while this may call for engagement with a regulatory body, it may also require direct communication by the trainees/trainees’ institutions with law enforcement.

8.

How to ensure sustainability?

New techniques for creating convincing synthetic media are constantly emerging, and the means of identifying them are also shifting. It is important to establish structures that keep trainees up to date with these developments.

To build out the capability for a greater number of people to practice these skills, a training-of-trainers methodology can also be deployed.

COST CENTRES

  • Technological infrastructure: Implementing and maintaining advanced computational resources and software tools capable of detecting and analysing deepfakes can be a significant cost centre. This includes the initial investment in hardware and software, as well as ongoing expenses for maintenance, upgrades and licensing fees.
  • Specialized expertise: Hiring or contracting experts in fields such as digital research or investigations can be costly, given the competitive job market in these fields.
  • Data acquisition and annotation: Access to high-quality datasets of authentic and manipulated media is crucial for training and testing deepfake detection algorithms. Acquiring and annotating these datasets can involve expenses related to data licensing, collection, preprocessing and annotation by human annotators.

LIMITATIONS AND CHALLENGES

  • Complexity of deepfakes: Deepfake technology is becoming increasingly sophisticated, making it challenging for individuals without specialized expertise to accurately detect deepfakes. Understanding the intricacies of how deepfakes are created and the subtle differences they exhibit compared to authentic media requires a high level of technical knowledge. All of this points to a demanding training regime.
  • Rapid evolution: Deepfake techniques are evolving rapidly, with new methods and advancements continually being developed. This means that training programmes may quickly become outdated, requiring constant updates and ongoing education to keep participants informed about the latest developments in deepfake technology.
  • Accessibility of training resources: Access to high-quality training materials and resources for identifying deepfakes may be limited, particularly for individuals without a background in technology or digital media. Developing comprehensive and accessible training programmes that cater to a diverse audience with varying levels of expertise can be challenging.
  • Subjectivity and bias: Identifying deepfakes can be subjective and prone to biases, as individuals may interpret visual and auditory cues differently based on their background, experience and personal biases. Training programmes must address these potential biases and provide participants with objective criteria and methodologies for assessing media authenticity.
  • Time and resource constraints: Training individuals to identify deepfakes requires time, resources and dedicated effort. Many individuals, especially those with busy schedules or limited access to training opportunities, may not have the luxury of dedicating significant time and effort to develop expertise in deepfake detection.
  • Overcoming misinformation and disinformation: Deepfakes are often used as a tool for spreading misinformation and disinformation, making it challenging for individuals to discern truth from falsehood. Training programmes must equip participants not only with the technical skills to identify deepfakes but also with critical thinking skills to evaluate the credibility of media sources and content.

RESOURCES

EXAMPLES

UNDP Libya runs a digital research fellowship programme to build national capacities in various counter-disinformation techniques, including assessing deepfakes.

 

IMPLEMENTATION PROCESS

COUNTRY DEPLOYMENTS

ADDITIONAL INFORMATION


Information Integrity E-learning

Coming soon