Much of the recent philosophical, design and policy agenda regarding AI has focused on democratizing AI, that is, on opening up the processes of AI development and decision-making. International cases of AI application in governmental systems keep suggesting that, contrary to hopes of more objective and efficient decision-making, AI is poorly equipped for the just allocation of tax benefits and returns, fails to fairly predict the chances of recidivism in parolees and promotes unfair treatment in cities when deployed for better safety and surveillance regimes. It is thus not surprising that addressing the problems of algorithmic transparency, bias, fairness and accountability tops the agendas of stakeholders across the world. While that problem is very important, this article focuses on one that goes hand-in-hand with it but has to date received little attention, namely what deploying AI for democracy could mean. As I will argue, apart from making AI development processes more transparent and fairer, it is equally important to understand how AI is subtly transforming the nature of our democratic processes and changing the values that underlie them and, with that in mind, to use AI's potential to strengthen our capacities for engaging with one another.
Democracy and AI
Taking a cue from the political philosopher Hannah Arendt, I suggest that the value and foundation of democratic societies lie in their plurality. As conditions for democracy, plurality and political activity are less concerned with seeking compromise among different parties than they are valued as enactors of agency and enablers of the human capacities for deliberation and reflective judgment. Democracy here means finding a way of being together amid multiple perspectives, whereas plurality is about crafting a space for public interaction and deliberation. As we have seen above, in recent years AI has both mediated our capacity to interact with one another and co-shaped our infrastructure for interaction and deliberation. I would like to focus on two specific instances of how AI mediates the human capacity for critical engagement with one another: by affecting how we perceive and understand each other through language, and by helping to shape the visual aspect of reality.
Language is crucial to making sense of the everyday world and to how we understand and voice ourselves in democratic societies. The recent introduction of GPT-3 and similar algorithmic text-generation models showed that AI can produce text virtually indistinguishable from that written by a human. While this generates excitement over potential efficiency gains and additional creativity, GPT-3 also raises questions for democratic participation related to the values of comprehension, verification and informed engagement where written communication is concerned. The pervasive spread of voice assistants raises similar questions, now about spoken language. While voice assistants make interaction with technologies more natural, with users simply speaking to them, they also invoke a top-down, command-based model of conversation and narrow the space for dialogue by presupposing language proficiency and standard user capabilities. Current practices with Natural Language Processing and Automatic Speech Recognition algorithms raise the question of how AI could instead promote trust, dialogue and mutual understanding, receptive to diverse positions.
Apart from language, to participate in democracy we also rely on what we see, driven by an ‘ocularcentric’ bias inherent to humans. Recent developments in synthetic media complicate our tendency toward “seeing is believing,” making us question our trust in the observed world. AI algorithms increasingly help to superimpose a face of one’s choice onto specific images and videos, making the people in those videos appear to say or do things they never did in reality (e.g. a deepfake of Queen Elizabeth), to recreate and manipulate 3D virtual models of a real person (e.g. Holocaust victims “speaking” in museums) or even to create a fully synthetic digital being that looks like a real person. Among the abundance of ethical issues, e.g. the contribution of altered media to the rise of fake news, democratic concerns take center stage. How does synthetic media affect the concepts of trust, truth and self, and what kind of democratic practices does this foster? Given our visual bias, the obscured mediating qualities of synthetic media can have far-reaching consequences for how we understand each other and the world around us.
There is thus a new challenge for AI systems: to facilitate and enhance democratic practices, helping us express ourselves and engage with one another. Such systems need, on the one hand, to remain open to the changing conditions of the world and, on the other, to uphold core democratic values, so as to facilitate meaningful engagement with AI and promote informed decision-making regarding it. Answering these questions requires ethical accompaniment and the responsible design of multimodal AI systems that combine the visual and textual modalities.
Beyond these calls for responsible design lurks a deeper democratic implication concerning the role of AI in society. I suggest that AI should be considered not merely as a neutral instrument in the hands of the people who design and deploy it. Rather, AI should be considered as demos, one of the units that composed a democratic government in ancient societies. Originating from Ancient Greek, δῆμος or dêmos means the common members, or representatives, of one area. Designating AI as demos not only makes explicit the daily interactions people have with AI systems in democratic contexts but also underscores that AI gives people particular access to engagement with others, mediating their self-expression through visual, written and spoken modes of activity, e.g. by generating textual narratives (e.g. GPT-3), digital personas (e.g. deepfakes) and dialogical interfaces (e.g. voice assistants).
With all of these considerations in mind, the questions regarding AI and democracy seem twofold: How does AI mediate our practice of democracy and the way we understand it? And how can we facilitate the responsible design and use of AI systems for meaningful democratic engagement? For AI to facilitate and strengthen democratic practices, a responsible way forward seems to consist in developing multimodal AI systems that foster deliberation and the critical engagement of citizens, enabling them to make informed decisions under the conditions of deep uncertainty that these same AI developments bring forth. Multidisciplinary research and development practices, spanning at least the fields of engineering, design and ethics of technology, suggest that although achieving this is not an easy task, it is possible, and necessary to try.
Developing AI for Democracy
From the side of ethics of technology, a combination of approaches is needed that (a) positions AI systems not as neutral instruments but as mediators of democratic practices that actively participate in and co-shape such experiences; and (b) examines these AI systems not merely from a technological standpoint but as complex and dynamic sociotechnical systems, entangling social, technological, cultural and institutional components. The relevant cases for considering how AI can foster democratic practices span from individual to more collective ones. On a micro-level, for instance, we could examine how synthetic media (e.g. deepfakes and GPT-3) impacts, and can foster, human interrelation and trust, a precondition for deliberating with one another; or how social media algorithms co-shape the ideal of a good political subject, one who can thrive in plurality and critically engage with others, and can contribute to realizing it. Scaling up to the collective level, one could inquire how voice assistants and chatbots affect, and can improve, representation and inclusiveness, enabling plurality; or how algorithmic systems in the city affect political deliberation and can enlarge the space for critical and informed engagement with the AI systems that support our collective life, thus facilitating active citizenship. Let us consider some of these cases in more detail.
Some of the concerns surrounding the application of synthetic media in the context of democracy center on its impact on human interrelations and trust: trust regarding what one is seeing and hearing, and trust in the others one meets online. Such concerns are warranted, as the online world has recently become permeated by AI-based technologies that manipulate textual (e.g. GPT-3), photo and video content (e.g. deepfakes) to make it seem as if the people in that content say or do things they never did originally. While such synthetic media has gained a negative reputation for facilitating the spread of fake news and enabling identity fraud and misrepresentation online, it has also enabled therapeutic practices, allowing one to “see” the future one could have or to make the past “speak” in new ways. Nascent practices with synthetic media also suggest a blurring boundary between the “digital” and the “real,” “physical” worlds, for synthetic media produces deep uncertainty both online and offline, allowing for an unbounded ontological flexibility of everything and altering the sense of self and others. From the standpoint of philosophy and ethics, some of the crucial tensions to clarify here include how synthetic media affects interpersonal relations as people’s self-image comes under pressure, and the value and role of trust as a strategy for dealing with uncertainty in everyday life. The responsible design of such AI systems, which precondition democratic engagement, would need to explore what is required of us to maintain good-quality relations with one another in the context of synthetic media. It needs to consider the development of synthetic media beyond transparency and verification methods, focusing additionally on novel media-literacy strategies that deal with the limits of knowledge (e.g. the fact that even when people know they are dealing with manipulated content, they can still be affected by it).
From a responsible-design standpoint, one could inquire into public reactions to deepfake media, reflect on possible vision-centered bias and consider how to facilitate interpersonal trust in a world of which deepfakes are a part. Combining the ethics of technology and responsible design perspectives would allow for a systemic analysis of AI systems and a proactive attitude in their design and implementation.
Another case for designing AI systems that could facilitate engaging and reflective democratic practices concerns the use of social media and its relation to shaping a good political subject. Algorithmic systems invite what gets on the political agenda and the ways in which people can engage with it, and as such, AI gives rise to particular modes of political agency. Some argue that the incessant rich data flows and the deep uncertainty regarding what one sees and reads online undermine our possibility of being together, suppressing dialogue and our ways of expressing ourselves. However, such discourses are shortsighted in that they consider AI predominantly as a deterministic force that only limits democratic expression. The technological mediation lens allows us to take a more nuanced look at AI systems. While AI delimits some aspects of democracy, it also opens up new avenues for engagement and self-expression and promotes value change that reflects societal dynamics. Consequently, what it means to be a good citizen in the age of AI, especially with the pervasive use of social media (e.g. Facebook, TikTok, Instagram), merits closer philosophical attention. Here, one could explore how people appropriate social media to make it work for them in the democratic context and study how social media affects the responsibilities of political subjects. These findings, in turn, could contribute to the critical design of social media and foster societal debate through, e.g., design provocations: an add-on to social media feeds that prompts pausing and reflecting before posting, flags specific content and gives users more control over their algorithmically curated content. The combination of the ethics of technology and design fields can thus be fruitful ground for establishing what is at stake with social media regarding being a good citizen and for providing innovative ways to make people more reflective about their use of social media.
The implementation of AI in the governing structures of society has so far revealed the ineptitude of these technological systems at dealing with public matters in a just and fair manner. At a more individual level, AI systems give shape to spaces of interaction that tend to promote bias, discord and aggression over a plurality of positions and critical engagement with information. Not surprisingly, much of the global discussion surrounding AI concerns its democratic impact, and specifically focuses on democratizing the process of AI development: demanding that it be open, inclusive, explainable and accountable. In this paper, I have suggested focusing on the other side of the democratic impact of AI, namely exploring how the use of such systems transforms the very nature of democratic engagement and changes its underlying values and processes, and how to design better and more responsible AI systems with that in mind. In short, I have proposed to consider what AI can mean for democracy and how to coordinate the complex interdisciplinary efforts this requires.
Additionally, I proposed positioning AI as demos, a unit of participation in democratic societies, in view of the different ways in which AI already co-shapes democracy. Such a conceptualization of AI as demos allows us to foreground its mediating capacities in our engagement with one another and to explicitly consider the complexity of its sociotechnical impact. I have shown how AI systems already challenge our multimodal capacities for expression, for instance by reducing the space for interaction and for formulating one’s own opinions when GPT-3 language models are in play, or by fostering default distrust in others and in everything we see through the unrestricted use of deepfakes and other forms of synthetic media. Inspired by Arendt, I suggested that AI can also enhance plurality and the ability to express it, on the one hand, and promote spaces that allow for an informed, respectful and critical exchange of opinions, on the other. Achieving this requires a comprehensive AI-for-democracy approach, powered by an interdisciplinary effort spanning at least the ethics of technology, design and governance.
I have also suggested that such a coordinated interdisciplinary effort toward developing and using AI as deliberative multimodal systems is challenging but possible. As more formal initiatives spring up to address this challenge, it is also important to remember that democracy starts with an individual presenting herself to others and interacting with them. In this regard, each of us has the capacity to become aware of the role AI systems play in our daily activities, questioning their seemingly trivial role in sorting the news we see or suggesting which groups to join on social media. Becoming critical and informed users of AI is something we can do already now to contribute to the long-term quest of developing AI for democracy.
 Kate Crawford and Trevor Paglen, "Excavating AI: The politics of images in machine learning training sets," AI and Society (2019), doi: 10.1007/s00146-021-01162-8; Andreas Holzinger, Chris Biemann, Constantinos S. Pattichis and Douglas B. Kell, "What do we need to build explainable AI systems for the medical domain?," arXiv preprint (2017), arXiv:1712.09923; Nicholas Diakopoulos, “Transparency.” in The Oxford Handbook of Ethics of AI, eds. Markus D. Dubber, Frank Pasquale and Sunit Das (New York: Oxford University Press, 2020), p. 197-214.
 Hannah Arendt, The Human Condition (Chicago: University of Chicago Press, 2013. Originally published in 1958).
 Olya Kudina, “’Alexa, who am I?’: Voice Assistants and Hermeneutic Lemniscate as the Technologically Mediated Sense-Making,” Human Studies, Vol. 44, No. 2 (2021): p. 233-253.
 Evgeni Aizenberg and Jeroen van den Hoven, "Designing for human rights in AI," Big Data & Society, Vol. 7, No. 2 (2020): https://doi.org/10.1177/2053951720949566; Aimee van Wynsberghe, "Sustainable AI: AI for sustainability and the sustainability of AI," AI and Ethics, Vol. 1, No. 3 (2021): p. 213-218.
 Peter-Paul Verbeek, Moralizing Technology (Chicago: University of Chicago Press, 2011).
 Olya Kudina, "Bridging Privacy and Solidarity in COVID-19 Contact-tracing Apps through the Sociotechnical Systems Perspective," Glimpse, Vol. 22, No. 2 (2021): p. 43-54.
 Deborah G. Johnson and Nicholas Diakopoulos, "What to do about Deepfakes," Communications of the ACM, Vol. 64, No. 3 (2021): p. 33-35.
 Nicholas Caporusso, "Deepfakes for the Good: A beneficial application of contentious artificial intelligence technology," in International Conference on Applied Human Factors and Ergonomics (Cham: Springer, 2020) p. 235-241.
 Sherry Turkle, Reclaiming Conversation: The Power of Talk in a Digital Age (New York: Penguin Random House, 2015).
 Luciano Floridi and Massimo Chiriatti, "GPT-3: Its nature, scope, limits, and consequences," Minds and Machines, Vol. 30, No. 4 (2020): p. 681-694.