Can Democracy Survive AI?


As part of Fordham University’s Civic & Civility Initiative, the Center for Ethics Education hosted a discussion on the ethics of generative AI and the future of democracy on Thursday, October 17, 2024. Mathias Risse, PhD, Director of the Carr Center for Human Rights Policy and Berthold Beitz Professor in Human Rights, Global Affairs, and Philosophy at the Harvard Kennedy School, and Mekela Panditharatne, Senior Counsel for Elections and Government at the Brennan Center for Justice, spoke with Fordham’s Laura Specker Sullivan, PhD, before taking questions from the audience. Both Risse and Panditharatne have focused extensively in their work on the impact of Artificial Intelligence on democracy.

AI and the democratic process intersect in a growing number of ways, making democracy increasingly vulnerable to interference by domestic and foreign adversaries. Examples include misinformation campaigns that use deepfake images, videos, and voicemails, as well as the spread of false information through AI-supported search engines and chatbots. Additionally, election boards are increasingly using AI to assist in voter roll purges. Together, these AI-generated or AI-supported technologies have the potential to lead to unprecedented levels of voter disenfranchisement and suppression in the United States.

These developments are a significant concern for the 2024 presidential election. However, efforts by the federal government and large tech companies to curb the impact of AI-generated political content are currently insufficient, according to Panditharatne’s analysis. Labeling restrictions and limitations on generatable assets alone are inadequate to counteract the dangers posed by AI, an open-source technology, to the democratic process. Although the impact of generative AI on the 2024 election cycle may be limited by current technological sophistication, forward-looking measures are essential given AI’s rapid evolution.

Comprehensive legislation at both the state and federal levels should include stricter disclosure and labeling requirements, restrictions on the use of AI-generated content, and voter education to promote awareness and epistemic humility among those encountering AI. Additionally, the rapid advancement of generative AI is fueled in part by the vast amounts of consumer data collected from nearly all digital behaviors and interactions. This data not only enables front-end sophistication but also allows for refined targeting of vulnerable populations with AI-generated misinformation. Stronger data protection laws in the United States are a necessary first step, with the European Union’s General Data Protection Regulation (GDPR) offering a potential model.

Taiwan presents an innovative approach to counter civic disengagement. The country has implemented a model of digital democracy that includes an online civic infrastructure designed to encourage broad policy engagement. Through the government’s website join.gov.tw, citizens can submit petitions that, if they receive 5,000 signatures, are reviewed in bi-monthly ministry meetings. This initiative aims to increase civic engagement in policymaking by providing a straightforward way for citizens’ opinions to influence new policies. Taiwan leverages this approach to combat the rising threat of misinformation campaigns and their detrimental effects on democratic processes.






For more ‘Ethics in the News’ and to keep updated with the latest posts, please consider subscribing to The Ethics and Society Blog today!
