7 February 2024

Artificial Intelligence in Connection with Democracy

Critical reserve for not acting according to the algorithm

Daniel Torrado Rial

Manager of Innovation and Product for PPAA Market

As citizens, we should understand the current context in which Artificial Intelligence is used to produce and disseminate information, and be aware of its capacity to influence our behaviors, convictions and decisions, which ultimately affect our democratic system of organization and government. Starting from the individual and extending to collective dynamics, we should question what others, whether humans or algorithms, expect of us or have pre-established for us, rather than becoming passive consumers of information, lacking critical capacity and contaminated by standardization and confirmation biases.


Whether we are aware of it or not, Artificial Intelligence (AI), as a technological concept in its multiple forms and applications, is having a decisive impact on many aspects of our personal and professional lives. If we consider its use in the dissemination of information of public interest and in social networks, it is enough to observe how our footprint in the digital world conditions and modulates the content that algorithms offer or promote to us, based on the profile in which they have classified us from the data we provide and our activity history. This kind of personalization, which selects information, products and services to satisfy our interests and tastes without our even requesting it, also has a, let us say, less innocent application.

The enormous capacity for influence that this process of automatic segmentation and filtering of information has on us is obvious: in addition to directing our purchasing decisions as consumers, it also modulates our perception of political reality and the formation of opinions on decisions or proposals in the public sphere, and consequently the orientation of our voting decisions, among other effects.

All this activity takes place on a massive scale, at great speed and with global reach, favoring a thematic mainstream or predominant current within what we know as the digital world. It is undoubtedly the ideal vehicle for directly or indirectly influencing large groups of people. This is not new as an approach or objective, whatever that objective may be, including manipulation: a rather coarse word, but one that, if we stick to its meaning (trying to gain control or dominion over another person or group), is perfectly valid here.

One of the best-known and most effective ways of influencing large groups of people, and one that is still very much in force, is through the media, both in their traditional format and in their digital version. If we focus on the effectiveness and efficiency of exerting such influence, traditional, analogue formats and channels are limited when it comes to applying the refined segmentation needed to direct specific, personalized messages to particular profiles or groups with sufficient agility. The solution came with the Internet and the creation of digital versions, growing in sophistication with the combination of apps and social-network-style digital platforms which, together with AI, have brought about a radical change in how we are informed and how we inform ourselves.

Nor can we lose sight of the decline in the credibility of classical journalism and its fall from grace as a democratic check on power and a generator of more or less objective information. This situation undoubtedly also reduces our ability to make informed decisions.

The current combination of the Internet, AI and the widespread use of mobile devices such as smartphones is a tool that perfects and maximizes the possibilities of influencing large audiences and thus acting on their perception or opinion (positive, negative or neutral) of something or someone. Make no mistake: this is not a new evil brought to us by digitalization or AI. It is something that, as we have already mentioned, has occurred throughout the ages, and if we are looking for well-known culprits, Gutenberg and his printing press would also be good candidates.

It is also worth highlighting the "democratic" potential of, for example, social networks, which give people a voice of their own within the global mass, as well as the ability to access plural and diverse information.

All these possibilities and benefits of the digital world (we will mention some undesirable effects later) should also make us reflect on whether the supposedly free and open information we consume actually makes us better able to exercise our democratic rights and to make our own decisions in genuine freedom.

Let us take as a starting point that nothing, or very little, is free on the Internet, and that we ourselves have a value based on our personal data and our activity there. This data, with greater or lesser accuracy, generates a kind of digitized version of each of us which, through automated processing, places us in predefined profiles and groups according to the interests at play.

This puts us at risk of becoming "controlled" citizens, thanks to the large amount of information stored about our likes, dislikes, phobias, political orientation, location, economic capacity, profession and so on. Technology and data are all that is needed to influence us. How? There are several ways. One of them, as we anticipated, is by personalizing and orienting the information directed at us according to our profile and previously assigned group, something the algorithm manages very efficiently. The objective may be to promote a product or service we had not yet considered acquiring, or to tell us something we will supposedly find interesting or useful, although in reality it may be that others are interested in our receiving that information in order to expand and stoke debate or controversy on some topic. Among the AI techniques applied, especially in social networks, we can mention so-called social bots, created to impersonate human users through fake accounts with fictitious profile data, with the aim of disseminating and feeding opinions or encouraging public discussion about something.
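
By way of illustration, here is a minimal Python sketch of segment-based targeting. The profiles, interest tags and messages are invented for the example; no real platform works this simply, but the basic idea is the same: content is delivered only to the users whose inferred profile matches its intended audience.

```python
# Illustrative sketch only: a toy model of profile-based message targeting.
# Profiles, rules and messages are invented; real systems use far more data
# and far more sophisticated models.

from dataclasses import dataclass

@dataclass
class UserProfile:
    interests: set[str]        # inferred from browsing and activity history
    region: str
    estimated_age_band: str

# Hypothetical message catalogue, each entry tagged with its target audience.
MESSAGES = [
    {"text": "New local tax proposal explained", "target_interests": {"politics", "economy"}},
    {"text": "Weekend hiking gear on sale",       "target_interests": {"outdoors", "sport"}},
    {"text": "Debate: should voting be online?",  "target_interests": {"politics", "technology"}},
]

def select_messages(profile: UserProfile) -> list[str]:
    """Return only the messages whose target audience overlaps the user's profile."""
    return [m["text"] for m in MESSAGES
            if m["target_interests"] & profile.interests]

user = UserProfile(interests={"politics", "technology"},
                   region="Madrid", estimated_age_band="30-45")
print(select_messages(user))
# ['New local tax proposal explained', 'Debate: should voting be online?']
```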

Of course, there are side effects, in principle undesirable and not evident to most people. To exemplify them we can cite so-called standardization biases, which are very common in AI models generated from the processing of massive data and machine learning, including the new LLM models used by generative AI. This effect occurs when, in processing a large volume of data, the generated model prioritizes the answer or result with the highest frequency of occurrence in the available data, as a matter of statistical probability. Something that in principle makes sense may mean that, when we consult an AI-based system, the answer it gives us is likely to be correct according to this probability criterion, although this does not guarantee its validity or veracity. On the other hand, even if the answer is entirely valid, the model may have discarded other options that are also valid and true but occur less frequently. The consequence is that a standard thought or solution ends up prevailing and spreading over others that could be correct and even more adequate.
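
A minimal sketch of this effect, using an invented set of observed answers: if a system simply returns the most frequent answer in its data, equally valid but rarer answers are silently discarded.

```python
# Toy illustration of the "highest-frequency answer" effect described above.
# The data and answers are invented; the point is only that choosing the
# statistically most common answer drops rarer answers that may be just as valid.

from collections import Counter

# Hypothetical answers observed in the source data for the same question.
observed_answers = [
    "renewable energy", "renewable energy", "renewable energy",
    "nuclear power", "energy storage",
]

counts = Counter(observed_answers)
most_common_answer, freq = counts.most_common(1)[0]

print(f"Model-style answer: {most_common_answer} (seen {freq} times)")
print("Discarded but possibly valid answers:",
      [a for a in counts if a != most_common_answer])
```

The standard answer wins by frequency alone, which is exactly the mechanism by which a standard thought or solution can end up crowding out other adequate ones.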

Another effect we should not overlook in the field of social networks is the tendency of the algorithm to deliver and promote content, contacts or opinions that perfectly match our preferences, tastes and convictions, or those of the profile in which we have been classified. Something that in principle is a great help puts us at risk of a cognitive effect that psychology calls confirmation bias, because most of the input we receive does nothing more than reinforce our already established thoughts or opinions.
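
A deliberately simplified sketch of such a feedback loop follows. The topics and scoring rule are invented and are not any real platform's algorithm: content is ranked by how well it matches the current interest profile, each interaction reinforces that profile, and divergent content is progressively pushed out of view.

```python
# Toy confirmation-bias loop: invented items and topics, not a real recommender.
ITEMS = [
    {"id": "a", "topics": {"party_x"}},
    {"id": "b", "topics": {"party_x", "economy"}},
    {"id": "c", "topics": {"party_y"}},   # rarely surfaced once the loop closes
    {"id": "d", "topics": {"climate"}},
]

def rank(items, interest_weights):
    """Score each item by the summed weight of its topics in the user profile."""
    return sorted(items,
                  key=lambda it: sum(interest_weights.get(t, 0.0) for t in it["topics"]),
                  reverse=True)

profile = {"party_x": 1.0}                # initial inferred leaning
for step in range(3):
    top = rank(ITEMS, profile)[0]         # show the best-matching item
    for topic in top["topics"]:           # a click reinforces those topics
        profile[topic] = profile.get(topic, 0.0) + 1.0
    print(f"step {step}: shown {top['id']}, profile now {profile}")
# Items "c" and "d" are never shown: the profile only ever confirms itself.
```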

In both cases, this automated "orientation" leads us to exclude from our knowledge and analysis certain kinds of non-standard information, or information not aligned with our thinking. The consequence is that we solidify our convictions, certainties and opinions to such a degree that the emergence of any critical spirit, or any questioning of what is happening around us, is greatly hindered. Immovable blocks of thought are generated, turning us into predictable and manageable communities that "let themselves go", conditioning individual thinking and the decisions that affect us all. We must be aware of the dynamics of construction and the rules of dissemination of the opinions and news presented to us in digital spaces. Building on this knowledge, we must add to the way we process information a kind of individual and, by extension, collective critical reserve that, before adopting a position or decision, questions what is pre-established and commonly accepted when we are guided blindly by the proposals of the algorithm.

We need more outliers, and we need to observe them, because sometimes they help to explain the whole context. The other possibility is that they are just that: outliers that separate themselves from the rest and only generate noise.

The current state of technological development linked to AI represents enormous transformative potential for any sector, activity or technology. As happened with the global expansion of the Internet infrastructure, we must differentiate its positive potential from the inappropriate or perverse uses that people and organizations may make of it.

We need to understand these advances and analyze them, detect their undesired effects, and ensure they are accompanied by rules and regulations that establish controls and good practices in the development and application of AI. An example of the latter is the Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act), recently approved by the European Union.
