The issue of deepfake detection extends beyond the technology itself to the quality of the media being analyzed. In many parts of the world, including much of Africa, lower-quality photos and videos produced by cheap smartphones can significantly reduce the effectiveness of detection tools. Models trained on high-quality media may struggle to accurately classify deepfakes that circulate as lower-quality content. Background noise in audio, or the compression applied when videos are uploaded to social media, can likewise produce false positives or false negatives, underscoring how poorly these tools can perform under real-world conditions.
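To make that failure mode concrete, here is a minimal sketch of the robustness check it implies: the same image is re-encoded at the low JPEG quality typical of cheap phones and social media pipelines, and a detector is scored on both versions. The detector_score function, the file name, and the quality setting are all hypothetical placeholders, not any specific tool discussed here.

```python
from io import BytesIO
from PIL import Image

def detector_score(image):
    """Placeholder for a real deepfake classifier; returns P(AI-generated).
    Substitute an actual model here -- this stub only lets the sketch run."""
    return 0.5  # dummy constant, NOT a real prediction

def recompress(image, quality=30):
    """Simulate the heavy re-encoding done by social platforms and cheap phones."""
    buffer = BytesIO()
    image.convert("RGB").save(buffer, format="JPEG", quality=quality)
    buffer.seek(0)
    return Image.open(buffer)

original = Image.open("sample_frame.png")  # hypothetical input frame
score_clean = detector_score(original)
score_degraded = detector_score(recompress(original))

# A large gap between these two scores is exactly the false-positive /
# false-negative risk described above for lower-quality media.
print(f"clean: {score_clean:.2f}  recompressed: {score_degraded:.2f}")
```

Audio detectors face the analogous test when background noise or lossy codecs are applied before scoring.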
One of the major obstacles for those working on deepfake detection in the Global South is inequity in training data. Most free, public-facing tools available to journalists and fact-checkers are not only inaccurate but are also trained on data that does not reflect the diversity of people and content found outside Western countries. As a result, these tools are ill-equipped to handle the lower-quality media common in regions like Africa.
Beyond generative AI, cheapfakes, media manipulated with simple editing techniques rather than AI, are another common form of manipulated media in the Global South. Faulty detection models can wrongly flag such content as AI-generated, with potential policy repercussions: inflated counts of AI-generated content risk prompting policymakers to crack down on a problem that may not exist at the claimed scale. That could have serious implications for freedom of speech and access to information in these regions.
Building and running deepfake detection models requires access to energy and data centers, resources that are lacking in many parts of the world. Researchers in countries like Ghana face significant challenges in developing local solutions because of this shortfall in computing infrastructure, forcing them to rely on expensive off-the-shelf tools, inaccurate free options, or partnerships with academic institutions in other regions. The scarcity of local alternatives hinders progress on deepfake detection across the Global South.
Sending content to external entities for verification can also introduce significant delays in confirming whether a piece of media is AI-generated. That lag allows misleading information to spread unchecked, potentially causing irreparable harm before any action can be taken, and it underscores the need for local resources that can address deepfake detection in real time.
While deepfake detection is crucial, some worry that prioritizing it too heavily could divert funding and support away from initiatives that build a more resilient information ecosystem. Rather than investing solely in detection technology, funders should also direct resources to news outlets and civil society organizations that build public trust and promote media literacy. A stronger information ecosystem leaves societies better able to withstand the threats posed by deepfakes and misinformation.
The challenges of detecting deepfakes in the Global South are multifaceted and demand a holistic approach. Addressing inequities in training data, improving access to detection tools, shortening verification lag times, and balancing detection with resilience-building are all crucial steps toward combating the spread of manipulated media beyond Western countries. Collaboration among local researchers, international partners, and policymakers is essential to developing effective solutions that protect the integrity of information in an increasingly digital world.