Over the past decade, the importance of information integrity for democracy has become clearer: it underpins societal trust and enables citizens to make informed choices and participate meaningfully in democratic life. Information integrity is also key to ensuring meaningful accountability on the part of representatives, yet as the information space has become more polluted with false narratives and disinformation in recent years, the public space for discussion has been increasingly disrupted. The availability and sophistication of tools driven by artificial intelligence (AI) have grown dramatically over the past few years, potentially creating more acute dangers for participation and trust. Generative AI (systems that go beyond processing data to create novel synthetic content based on that data) has changed the nature of online communications. Crucially, a dynamic is emerging in which citizens may find it increasingly difficult to differentiate between what is real and what is fake, and in which bad actors manipulate the online information ecosystem to produce alternative realities.

Discussions continue globally about how governments, oversight institutions, civil society, international bodies and the private sector should respond to the challenges that the pollution of information ecosystems poses to democracy and elections. Given the growing demand for support, facilitating a resilient response can create opportunities to build international partnerships with a positive impact.

This report investigates the key risks and opportunities associated with AI in developing contexts, the mechanisms of information pollution and the countermeasures being pursued, and, most importantly, how developing contexts differ from developed ones.

The purpose of this report is to support development practitioners in identifying opportunities to promote information integrity and the ethical use of AI, while maintaining a long-term vision rooted in holistic support for democracy. The research for this report therefore took a non-prescriptive approach, drawing on insights from case studies to highlight key actors to collaborate with, principles to pursue, and components of a long-term vision for developing standards and engaging the public in finding solutions. Insights are drawn from case studies on Bangladesh, Ghana, Indonesia, Mexico, Mongolia, Pakistan and South Africa.

In the context of electoral periods, pollution of the online information space with synthetic and misleading content could create challenges for voters who wish to educate themselves and make an informed choice. Coordinated influence campaigns aimed at manipulating online information ecosystems were identified in all seven of the case studies. Between electoral periods, the misuse of these tools can contribute to increased polarization and extremism, a decay in societal trust and a reduction in people’s ability to evaluate the truth of the information they receive. Moreover, there are challenges concerning personal data privacy, with cybersecurity risks also providing openings for manipulation by bad-faith actors. In addition, false narratives frequently target women and other vulnerable minorities, multiplying dangers for these groups, a phenomenon identified especially in Mexico and Pakistan. Generative AI has the potential to accelerate these risks, as tools powered by this new technology can increase the scale and sophistication of harmful content considerably.

In spite of these risks, new communications technologies and AI also provide a number of important opportunities. They have lowered the barriers to and costs of engaging with large audiences, which can be especially helpful for new or disadvantaged actors with fewer resources. Where personal data is used transparently and responsibly, actors can tailor their communications based on advanced data analytics, while ensuring that sensitive data—for example, on race and sexual orientation—is off-limits. AI-powered tools can also help to translate communications and provide better accessibility, giving actors the means to campaign in highly repressive contexts.

Importantly, as the studies show, the manipulation of online narratives is not just about spreading advantageous false information and slandering one’s opponents; rather, it is about polluting the flow of information both online and offline with false narratives that are often spread by local elites to hold on to power.

Developing contexts can be more vulnerable to information manipulation, by both domestic and foreign actors, where they lack the state, technical and civic capacity to engage in meaningful oversight and disrupt the chain of influence. Generative AI increases the scale of these narratives, not least by making it possible to create far more believable synthetic media.

It is important to note, however, that online communication tools and generative AI have also been used by those resisting repression, and the use of generative AI has not yet had a serious impact on election campaigning. AI-powered tools also show promise in enhancing oversight and monitoring of political communications by flagging disinformation and other harmful content.

The risks associated with pollution of the information ecosystem are significant: public trust may be seriously weakened, and people may become more susceptible to polarization and violent extremism and feel less motivated to vote or participate in democratic debate. Moreover, false narratives are frequently aimed at delegitimizing the electoral administration and results, as well as undermining oversight institutions and other key mechanisms of democracy intended to ensure fair competition and checks and balances on power. The dissemination of such narratives helps to create an environment in which people are increasingly unable to differentiate between real and false information, diminishing their ability to make an informed choice and participate in political life in a meaningful way.

Amid these challenges, development practitioners should support information integrity along two main avenues: (a) by helping to establish new rules for and enforce oversight of political communications online; and (b) by complementing these rules with collaborative measures aimed at enhancing ethical behaviour and societal trust. Actions should target not only electoral management bodies (EMBs) but also oversight agencies, encouraging meaningful public engagement more broadly through multistakeholder discussions to agree upon new standards in a whole-of-society approach.

It is vital to involve diverse groups of actors in discussions to agree upon new standards and, in the long term, to support solutions that address the root drivers of information pollution, working to ensure the integrity of the information space. These discussions should focus on sharing knowledge and developing legislation; increasing the capacity of oversight institutions is also crucial. These efforts should look beyond the online space, since traditional media are still major news sources in many countries, and should focus not only on electoral periods but also on the information space between elections. This constructive approach can be especially helpful in the context of international discussions and ongoing efforts to establish agreed-upon standards for the ethical use of AI, including tools for detecting AI-generated content, by creating opportunities for further action. To build greater societal resilience and knowledge, strengthening electoral integrity, independent and well-funded media, and citizen engagement can help create a more receptive environment by the time these mechanisms are ready to be implemented.

Specific recommendations stemming from this analysis for development practitioners working for the European Union or its member states can be found below. A priority across these recommendations is the inclusion of women, youth and other vulnerable groups in all such efforts. Monitoring and evaluation should collect disaggregated data wherever possible, and solutions should be targeted at increasing the safety and inclusion of these groups. It is also important to ensure that these solutions are accessible across groups, including by encouraging the development of tools in local languages. These factors are particularly important for generative AI, as such systems are often trained on biased data that can reinforce inequalities (Tuohy-Gaydos 2024; Juneja 2024).
