In robot we trust. Or maybe not?

There's no doubt: the numbers are clear. 1.7 against 42 is enough to understand that the 4.0 revolution will concern our homes as much as, if not more than, our factories. According to the International Federation of Robotics, by 2020 1.7 million robots will be introduced into factories around the world, while 42 million robots for domestic or personal use will be bought. These robots will have to interact with human beings, particularly in assisting the elderly. That is why health care applications will be the driving force behind Social Robotics.

But how do we imagine the Social Robots of the near future? What qualities should robots have, if we are to entrust them with the assistance and care of the grandfather of the house? The crucial issue, when speaking of Social Robots, becomes that of trust. Can we trust robots? The question is not the technical reliability of robots, but moral trust: not a trivial "relying on" but an authentic "trusting in". In other words, roboethics is becoming the most urgent and necessary science, to the point that even Pope Francis has summoned the experts of the Pontifical Academy for Life, challenging them in a plenary entitled "Roboethics. People, Machines and Health".

[Image: Giampaolo Ghilardi, researcher of Moral Philosophy]

Giampaolo Ghilardi, researcher of moral philosophy at the Institute of Philosophy of Scientific and Technological Research of the Campus Bio-Medico University of Rome, retraces the beginnings of roboethics, first recognized as a discipline in 2005. "The first issues were linked to military drones or self-driving cars," recalls Prof. Ghilardi. "If a person is run over, whose responsibility is it? How should I program the algorithm to decide how the machine should behave? Is it better to avoid an elderly person or a child? These are all questions the robotics engineer has to ask." Can an algorithm be moral? Is it possible to implant a moral conscience in robots? "Someone said yes, but then development took a different path," says Ghilardi. "There is no such thing as the ethical robot; it is the designer and the programmer who have an ethical sense or not, where having an ethical sense means seeing the whole complex of problems, and also the fact that every answer generates further problems. The word robot comes from a Czech term for the slave, who by definition has no freedom and therefore cannot be the subject of a moral decision. A slave morality is a contradictio in adiecto, as Kant would say, like talking about 'wooden iron'."

There is no ethical robot. It is those who design it who can have an ethical sense or not

Giampaolo Ghilardi, researcher at the Campus Bio-Medico University of Rome

The question then flips: are producers today careful to design with an ethical mindset? Ghilardi explains: "The topic is a hot one because robotics is beginning to suffer a loss of credibility, partly due to the dystopian idea that robots take jobs away from humans. As a result, a very strong need to build confidence in machines is emerging." A symptom of change is the emphasis on the spiritualization of work, or the fact that European calls for proposals are now very much tuned to "building trust in robotics".

What, then, is the central issue? "That trust can only be granted to a similar being, so to develop trust in robots I have to create trust in those who design them: working on transparency between designer and user, making the supply chain clear, making visible the human face behind the robot. The point is not reliability, the machine is reliable, but trusting in: the thought that whoever built this machine had my happiness and my well-being at heart... otherwise how can I entrust my elderly or sick family member to it?"

I am not talking about reliability but about trusting in. Otherwise, how can I entrust my grandfather to the robot? For producers it is a hot topic

Giampaolo Ghilardi, researcher at the Campus Bio-Medico University of Rome

It is evident that roboethical reflection comes forth strongly with the imminent, massive diffusion of Social Robots into our homes. Does having a slave at home make sense? "Yes, if it frees us from the most burdensome aspects of domestic work, as a Roomba does, but not for delegating those personal functions that cannot be delegated, such as the care of the elderly. It is clear that the machine can assist us, and that the machine in its impersonality solves some problems better, but what cannot be delegated is control. The ethical line drawn for assistant robots for the elderly or disabled is that human control must always be certain, whether it is a family member or an operator; that is not a vulnus," Ghilardi observes, for "it is one thing to be cared for and another to be fixed. This is a much-discussed point, but I do not know if it has truly been assimilated. On the other hand, we are at an embryonic stage: assistant robots are at the prototype level. It is true, however, that in the field of assistance a completely autonomous robot would find little acceptance on the market, and the market is what directs sales."

The ethical line drawn for the assistant robot is that human control must be certain

Giampaolo Ghilardi, researcher at the Campus Bio-Medico University of Rome

For similar reasons, even anthropomorphic robots, popularly fictionalized only a few years ago, now seem passé. The phenomenon of the uncanny valley has in fact been known since the 1970s: in certain situations, a robot's resemblance to a human being is disturbing rather than reassuring. An assistant robot with a human shape gives off a sense of alienation, of deception, which makes it unacceptable and does nothing to encourage familiarity of use. Keep in mind, finally, Ghilardi warns, that although "you can train the machine to catalogue facial expressions, to recognize emotions and even to reproduce them, this does not mean that the machine is feeling anything, in the same way that the machine's computational capacity does not mean that the machine is thinking. The robot that beats us at chess is not, for that reason, intelligent."

February 8th, 2019 | English, Innovation, MF, Welfare