
Artificial intelligence. Paolo Benanti: “Algor-ethics to ensure that machines remain at the service of man”

Combining ethics and technology ensures that man remains the master of an artificial intelligence placed at the service of authentic development. But new criteria, categories and languages are needed: hence the need for an ethics of algorithms. Interview with Paolo Benanti, Professor of Ethics of Technology and AI Ethics and member of the Group of Experts of the Ministry for Economic Development (MISE)

(Photo Siciliani-Gennari/SIR)

The creation of forms of artificial intelligence (AI) designed so that the “way of thinking” of a machine resembles human reasoning is probably one of the most challenging tasks that mankind is called to address in the near future. Artificial intelligence basically corresponds to algorithms: the degree of development of artificial intelligence depends on the complexity and power of those algorithms and their sequences. Drones, robotics and 3D printing; logistics, construction and biotechnology; diagnostics, increasingly advanced robotic surgery, wearable exoskeletons for motor rehabilitation and unmanned vehicles are the main areas of application. “The purpose of AI is not to create something new,” Franciscan Father Paolo Benanti, Professor of Ethics of Technology and AI Ethics at the Pontifical Gregorian University and member of the Pontifical Academy for Life, said in this interview with SIR, describing the key feature of this new evolutionary process. “It’s a technology that changes the way in which we do things. It can be compared to the inception of steam energy or electricity.” Although AI “can substitute for the human presence in many actions, it certainly cannot replace man,” he assured. Benanti is also a member of the Group of Experts recently created within the Ministry for Economic Development (MISE) to support the government in drafting a “National Artificial Intelligence Strategy.”

Professor, what is the impact of the gradual development of artificial intelligence on human self-understanding?
The technological artifact can change man’s understanding of himself and of the world. It already happened in the past, for example in the seventeenth century, when the telescope and the microscope were developed from the convex lens. The infinitely distant and the infinitely small brought about a new understanding of the universe and of the human body. Today, data-processing computers have given rise to an instrument that could be called a “macroscope,” and our understanding of the world and of ourselves is changing. The inception of artificial intelligence is already changing our self-perception: suffice it to mention neuroscience or the models of theoretical physics and astrophysics.

What’s the difference between human intelligence and that of a machine, however intelligent it may be?
There’s a radical difference. AI should not be confused with a human analog, as it is designed to perform specific tasks only. The relationship that man can have with artificial intelligence is comparable to that of our forefathers with animals, or to that of first responders with rescue dogs today.

AI is a very specific form of intelligence programmed to perform very specific tasks.

Will these machines ever be able to reach the stage of conscious self-determination?
No. Consciousness is a human quality that requires a total, general intelligence, not the specific form for which AI is designed. It entails the possibility of creating not something but someone. This doesn’t mean that we won’t reach the stage of developing machines capable of doing things that we cannot explain, whose complex operational ability could exceed our capacity of comprehension; but this certainly would not turn them into human beings.

The European Parliament has raised the issue of granting “electronic personality”, namely rights and responsibilities, to robots capable of taking decisions independently, but 156 experts are against it. What is your opinion?
In this debate we should distinguish three separate levels. The first is the technological level: trying to understand how to create machines whose features have a high degree of unpredictability. The second question is of an ethical nature, namely, how to handle this unpredictability. Can a machine that takes decisions autonomously make a mistake? And if it does, who is accountable? On a third level we find legal regulation, which concerns how to manage these machines in everyday life, in a society that will increasingly be made up of human agents (people) and autonomous robotic agents. The debate of the European regulating body was confined to the juridical aspects, namely how to regulate the use of these machines in society. Some claim that ascribing legal personality to robots does not automatically turn them into persons, but that it would make it possible to insure them and thus ensure compensation for machine-made damage to people or property. Others claim that this would release manufacturers from certain responsibilities. The bottom line is that the extent of the innovation introduced by these machines is such that

traditional categories are no longer sufficient; new solutions must be identified.

What’s the starting point?
First of all, the legal, ethical and technical aspects should not be viewed separately. You cannot speak of ethics without knowing all the technical aspects involved, nor is it possible to adopt legal regulations irrespective of ethical principles or without knowledge of the technological substructure. Furthermore, it is necessary to introduce third-party certification bodies into the relationship with manufacturers, to certify the use of these machines. In the MISE Expert Group we are working on the design of feasible solutions in this respect. The second aspect to be considered is that these machines operate on the basis of algorithms, codes that determine the machine’s performance. To date these algorithms are “black boxes” protected by copyright. So the question is:

is it convenient to preserve these black boxes or should they be turned into crystal – i.e. transparent – boxes?

In Laudato si’ the Pope warns against technocratic pragmatism. What kind of ethics is needed to ensure that this innovation is truly at the service of the human person?
Not every form of progress amounts to development. When we speak of technological progress we are speaking of innovation; when we use the term “development” we are referring to an innovation guided by the principles of the common good enshrined in the Church’s social doctrine. Thus the answer is to bind progress and development together through ethical values. In the case of artificial intelligence this is a very challenging task, because the values underlying the decisions taken by machines are numerical values. Thus we need to create new paradigms to

transform ethical values into something that machines will be able to understand.

My proposal is

to formulate a new understanding of algor-ethics

What does it mean?
Just as ethics encompasses principles, evaluations and norms, algor-ethics will have to incorporate

tables of values, principles and norms translated into the language of machines.


A possible model could be to

“insinuate” a kind of uncertainty inside the machine.

From the algor-ethics perspective this means that, when faced with a doubt, the machine will ask the bearer of that ethical code, that is, the human person, to validate its decisions. This leads to the creation of a “Human Centered AI” and to the development of machines that don’t answer only with a Yes or a No, but that are integrated with man and, together with him, seek the best solution.
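The deferral model Benanti describes can be sketched in a few lines of code (a hypothetical illustration, not taken from the interview; the names `decide`, `human_validator` and `THRESHOLD` are invented for this sketch): the machine answers on its own only when it is sufficiently confident, and otherwise hands the decision to the human bearer of the ethical code.

```python
# Hypothetical sketch of the "Human Centered AI" deferral model described
# above: below a confidence threshold the machine does not answer Yes or No
# on its own, but defers the decision to a human validator. All names here
# are illustrative, not from any real library.

THRESHOLD = 0.8  # below this confidence, the machine must ask a human


def human_validator(question: str) -> str:
    """Stand-in for the human person, the 'bearer of the ethical code'."""
    # In a real system this would route the case to a person for review.
    return f"human decision for: {question}"


def decide(question: str, confidence: float) -> str:
    """Answer autonomously only when confident; otherwise defer to a human."""
    if confidence >= THRESHOLD:
        return "machine decision"
    return human_validator(question)


print(decide("approve the request?", 0.95))  # confident: machine decides
print(decide("approve the request?", 0.40))  # uncertain: human validates
```

The design choice is the point of the sketch: uncertainty is deliberately built in, so that the machine is integrated with the human decision-maker rather than replacing him.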
