The problem is Dr. Frankenstein, not his monster, Ramón López de Mántaras


Some AI researchers believe that computers can make rational and conclusive decisions based on explicit ethical criteria. For example, Susan Anderson and Michael Anderson (in Machine Ethics, 2011) say: “Ideally, we would like to be able to trust machines to make correct ethical decisions on their own, and for that we need to develop an ethics for machines.”

Some researchers believe AI should allow computers to reason about ethics



Their argument is that if these machines are able to make thousands of intelligent decisions on their own (for example, in the case of autonomous vehicles, when to brake, when to stop, and so on), they can also make ethical decisions. The trouble is that these researchers do not distinguish between factual questions and moral questions because, in my opinion, they believe that moral decisions can always be rationalized. Indeed, as early as 1863, John Stuart Mill, in his work on utilitarianism, claimed that the ability to make moral decisions is a component of our reason.

Moral decisions cannot always be rationalized

Two general approaches have been proposed for enabling intelligent machines to make ethical decisions on their own: the top-down approach and the bottom-up approach. In the top-down approach, ethical principles are explicitly programmed into the machine, for example into a vehicle’s autonomous driving system.

Thus, machines can obey laws in the style of the Three Laws of Robotics devised by Asimov in 1942 in the short story “Runaround”, or a general moral philosophy such as Kant’s categorical imperative, Stuart Mill’s utilitarianism, or some other ethical theory. The key feature is that the programmer encodes the rules so that the machine behaves in the most ethical way possible under specific conditions. That is, the machine makes ethical decisions according to the moral philosophy embedded in its software.
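To make the top-down idea concrete, here is a minimal sketch of what such a hand-written rule layer could look like. The rule names, actions, and function are invented for illustration; they do not correspond to any real driving system.

```python
# Purely illustrative sketch of a "top-down" rule layer: ethical/legal rules
# are written explicitly by a programmer and override whatever action the
# driving policy proposes. All names here are invented for the example.

RULES = [
    # (detected condition, mandatory action)
    ("pedestrian_ahead", "brake"),
    ("red_light", "stop"),
    ("obstacle_in_lane", "slow_down"),
]


def decide(situation, proposed_action):
    """Return the action to take: a hard-coded rule wins over the planner."""
    for condition, mandated_action in RULES:
        if condition in situation:
            return mandated_action
    return proposed_action


# The planner proposes to keep cruising, but a pedestrian is detected,
# so the top-down rule forces the car to brake.
print(decide({"pedestrian_ahead"}, "keep_cruising"))  # -> brake
```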

Critics of the top-down approach point to the inherent difficulty of committing to any single moral philosophy, since many of them, at one point or another, will lead to morally unacceptable actions and decisions, given the absence of universal ethical principles. It is therefore practically impossible to program a machine to make ethical decisions based on one particular moral philosophy. We humans, by contrast, first receive moral values from our educators, then adapt them through the contributions of the social groups and cultures we belong to, and gradually develop our own personal ethics.

You cannot program a machine that can make ethical decisions

In the second, bottom-up approach, machines are expected to learn to make ethical decisions by observing human behavior in real situations, without being taught any rules or equipped with any particular moral philosophy. The problem is that learning to make ethical decisions this way would require the machine to observe human behavior for a very long time.

To speed up this process, some have suggested that machines could learn from the ethical decisions of millions of people. In the case of driverless vehicles, they would learn from the decisions of millions of human drivers.
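As a rough illustration of what “learning from millions of drivers” amounts to, here is a toy sketch in which the machine simply imitates the majority decision for each situation. The log format and the situation/action labels are assumptions invented for the example, and the result already hints at the objection discussed next.

```python
# Purely illustrative sketch of the "bottom-up" idea: the policy is derived
# from logged human decisions rather than programmed rules. The data format
# and labels are invented for this example.
from collections import Counter, defaultdict

# Hypothetical log of (situation, action chosen by a human driver)
observations = [
    ("solid_line_ahead", "overtake"),
    ("solid_line_ahead", "overtake"),
    ("solid_line_ahead", "stay_in_lane"),
    ("speed_limit_90", "drive_at_100"),
    ("speed_limit_90", "drive_at_100"),
    ("speed_limit_90", "drive_at_90"),
]


def learn_policy(obs):
    """For each situation, imitate the action humans chose most often."""
    counts = defaultdict(Counter)
    for situation, action in obs:
        counts[situation][action] += 1
    return {situation: c.most_common(1)[0][0] for situation, c in counts.items()}


print(learn_policy(observations))
# {'solid_line_ahead': 'overtake', 'speed_limit_90': 'drive_at_100'}
# The learner reproduces what is common, not what is ethical.
```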


Artificial intelligence image courtesy of graphic artist Karl Sims

However, it should be noted that self-driving cars could also learn unethical behavior, since the conduct of many drivers is clearly far from exemplary. By learning what the crowd does, smart vehicles would pick up habits such as overtaking dangerously or ignoring speed limits. In other words, observing people does not teach them what is ethical, but what is common.

Learning from human behavior does not teach machines what is ethical, only what is common

Another major difficulty in building ethics into machines is the concept of autonomy. Autonomy is, in fact, a matter of degree: machines equipped with AI can operate more autonomously than machines without it. Indeed, machine learning techniques already help us make more and more decisions, for example about granting credit, clinical diagnoses, personalized recommendations, and selecting candidates for jobs. Some machines can reach even greater degrees of autonomy.

An extreme example of this, and the worst one, is lethal autonomous weapons, which select their targets without human intervention and whose decisions cannot be overridden. Yet even lethal autonomous weapons are limited to the tasks assigned to them by humans and are “free” only to choose among a set of predefined, human-programmed targets.

Lethal autonomous weapons’ decisions are limited to a set of human-defined targets

Military ethicist George Lucas Jr., in the book Killing by Remote Control: The Ethics of an Unmanned Military, points out that in discussions of machine ethics, machine autonomy is often confused with moral autonomy, citing the examples of autonomous vacuum cleaners and Patriot missiles. Both can carry out their tasks, adapting to unforeseen circumstances and acting with minimal human supervision, but neither, Lucas argues, can change or abort its mission on the basis of moral objections. Machines are, ultimately, tools of the humans who design and build them.

In short, it is we humans who possess the qualities required for moral agency. The ethics of AI are the ethics of the people who design the machines. AI depends on people at every stage, from basic research to deployment. The problem is not Frankenstein’s monster; the problem is Dr. Frankenstein!

The ethics of artificial intelligence are the ethics of the people who design the machines
