
Case Study Report
In this report, I will discuss the ethical and legal issues raised by the second case study, which concerns a car's autopilot feature. The actors involved are car makers, consumers and law makers. I will discuss whether it is appropriate for car makers to release imperfect autopilot software, who is liable when an accident happens in autopilot mode, and what self-driving software should do when forced to choose between several harmful options.
Tesla released such autopilot software. Its CEO said consumers "should exercise caution while using it", and that "if the car is involved in a collision, the driver is still liable". In other words, Tesla offered its customers an autopilot function, but because the software is not yet perfect, Tesla told its customers to use the function carefully, and declared that if something bad happens, it is not Tesla's liability.
First, look at the case from the utilitarian view. For the car maker Tesla, the benefit of releasing the software early, before it is perfect, is that Tesla can observe how its autopilot function performs in real situations, collect the relevant data, and use machine learning to improve the software. Another benefit is that the release may boost its stock price by showing the market its competence in the self-driving area. The loss for Tesla is that if accidents happen in autopilot mode, they may do great harm to its technical reputation and drive down its stock price.
For Tesla's customers, the benefit is the convenience the autopilot function gives them while driving. The loss is that the software is unstable in some situations and not totally trustworthy, so it may lead to accidents. For other people on and around the road, whether they are safer depends on whether the autopilot function drives better than a human driver. Whether the action generates more happiness or more pain therefore depends to a large degree on how well Tesla's self-driving software performs. So Tesla should have fully tested its software and shown the public how it conducted its experiments and how its autopilot function performed.
Now we discuss the case from a deontological perspective. Tesla's motives are:
1. Provide more functionality in its cars and improve sales.
2. Show the capital market its technical prowess in self-driving and boost the stock price.
3. Test its self-driving software in real situations and facilitate its improvement.
Motives 1 and 2 are fine. Motive 3 is controversial. If a company involves its customers in its experiments, it must control the harm of those experiments and make the customers fully aware of the potential harm. Tesla does admit that in some situations its software is not very good, and it urges customers to use the autopilot function with caution. But I doubt whether a caution is enough. Customers should know well how the autopilot function performs under various scenarios and how the software makes decisions for them.
Obviously, autopilot software should be good enough before it is used in real situations. But how good is good enough? The law should give a sensible and concrete standard for self-driving software. Another legal issue is how to divide responsibility when a traffic accident involves a self-driving car. How do we assess whether the software made a sensible choice? If the mistake is the software's, is the car owner still liable? Such questions need to be thought through carefully by law makers. In the United States, only Nevada, California and Florida have so far passed regulations allowing autonomous vehicles to be tested on the road. These regulations carry additional terms: for example, the car owner must hold a driver's license, know how to control the vehicle, and take over control of the car in emergency situations. The EU headquarters in Brussels is also currently discussing how to amend existing driving regulations (ECE R79) to support the healthy and rapid development of autopilot technology.

The self-driving car will unavoidably face situations similar to the trolley problem [1], where it has to choose among options that all have unpleasant consequences. Suppose the self-driving car faces three choices: choice 1 kills one person in the car, choice 2 kills two people, and choice 3 kills another four people. Is it ethical for the software to make choice 1, choice 2, or choice 3? From the utilitarian view, we should cause as little pain as possible, so the right choice may be choice 1: better to let one person die than two or four. But should all people have equal weight in the software's consideration? Should the software put the car owner first? Should the software give children's lives higher priority? Will customers buy a self-driving car that may sacrifice its owner's life to minimize the overall losses? So even from the utilitarian view, the answer is complicated. A minimal sketch of the naive utilitarian policy follows below.
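To make the utilitarian choice concrete, here is a minimal sketch in Python of a policy that simply minimizes expected deaths. Everything in it (the Maneuver type, the casualty numbers, the choose_maneuver function) is a hypothetical illustration of the idea, not any manufacturer's actual decision logic.

```python
from dataclasses import dataclass

@dataclass
class Maneuver:
    """A hypothetical candidate action with its predicted casualties."""
    name: str
    expected_deaths: float  # predicted fatalities if this maneuver is taken

def choose_maneuver(options: list[Maneuver]) -> Maneuver:
    """Naive utilitarian policy: pick the option with the fewest
    expected deaths, weighting every life equally."""
    return min(options, key=lambda m: m.expected_deaths)

if __name__ == "__main__":
    # The three choices from the example above (numbers are illustrative).
    options = [
        Maneuver("choice 1: one occupant dies", expected_deaths=1),
        Maneuver("choice 2: two people die", expected_deaths=2),
        Maneuver("choice 3: four people die", expected_deaths=4),
    ]
    print(choose_maneuver(options).name)  # -> "choice 1: one occupant dies"
```

Note how every contested question raised above (should the owner count more, should children count more) disappears into the single equal-weight field expected_deaths: the hard part is not the minimization but deciding what to count.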
The results of a survey [2] show that people generally agree that, under certain circumstances, a driverless vehicle's passengers may be sacrificed to save many more passers-by, but most people are not willing to ride in or buy such a car.
The survey, published in the US journal Science, points out that autonomous driving systems are expected to reduce the number of accidents by 90%, to the benefit of the world. But not all accidents can be avoided, and in the remaining crashes driverless vehicles will need to make difficult ethical decisions, for example when forced to choose between killing a passenger or a passer-by.
Azim Shariff, an assistant professor at the University of Oregon who participated in the study, said in a telephone news conference convened for the research: "The overall objective is to improve traffic safety with driverless vehicles, but they also face barriers to widespread use. Aside from a number of technical obstacles, there are some psychological barriers."
To understand these psychological barriers, the researchers conducted six online surveys of US residents between June and November 2015. The results show that, when faced with these ethical problems, respondents generally agreed that a driverless vehicle should make the benefit-maximizing "utilitarian" choice, that is, the option that minimizes the loss of life.
For example, 76% of respondents believed that sacrificing one driverless-car passenger rather than 10 passers-by is the more ethical behavior. At the same time, 81% of respondents said they would choose to buy a car that ensures the safety of its passengers rather than that of passers-by.
The researchers see this contradictory attitude among respondents as the "social dilemma" of promoting driverless vehicles. One author, Iyad Rahwan, an associate professor at MIT, explained: "Most people want to live in a world in which cars minimize casualties, but they want their own car to protect them at all costs. If everyone does so, the result will be a tragedy."
The survey also shows that the majority of Americans strongly oppose the idea that the government should require driverless cars to follow the "utilitarian" principle, namely to make decisions by calculating the greatest happiness of the greatest number.
In general, three different approaches can be taken. First, keep ethical decisions firmly in human hands: when the dilemma occurs, the driver makes the decision directly. Second, humans encode ethics into a computer algorithm, which then makes decisions according to those ethical requirements. Third, give the computer its own human-like ethical consciousness, so that it makes its own decisions accordingly.
The advantage of the algorithm is that it can react beyond the limits of human reaction time. Autonomous vehicles should be able to avoid most "selection dilemmas": they respond faster, brake more promptly, and steer more smartly than people, so the algorithm might avoid the disaster altogether. For example, if two vehicles are about to collide and a human driver is in control, he may not respond quickly enough to avoid the crash. The computer, however, can save one car, even though it has to choose between the two. The algorithm may not always make a satisfactory choice, but that is better than letting both vehicles collide.

The drawback of the algorithm is that it is pre-set by people; however thorough the design, it will always encounter situations that were not previously considered, and a fixed algorithm is too mechanical and too inflexible. The ideal solution would therefore be a computer that can think with a sense of ethics, using artificial intelligence to make judgments: one that can make ethical choices when no human can be present and that is also sufficiently "humane". However, artificial intelligence at its current stage of development is not yet competent for this.
Although no perfect algorithm can be determined, an ethics-algorithm strategy for self-driving cars remains a practical and relatively good approach. So far, however, we have discussed only the idealized ethical dilemmas of autonomous vehicles and their countermeasures, and this is not just a thought experiment: a concrete algorithm involves more complex cases, and only when these complex situations are taken into account can the algorithm be practical. These complications include, but are not limited to: 1) whether to give certain vehicles (school buses, vehicles carrying important figures) a higher level of protection in the algorithm; 2) the number of occupants in the self-driving car; 3) how animals should be considered in the algorithm; 4) if the algorithm is not unique, what consequences will follow when autonomous vehicles running different algorithms share the road. A sketch of where points 1) to 3) would enter such an algorithm follows below.
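Extending the earlier sketch, the hypothetical code below shows where these contested parameters would live in a weighted-cost algorithm. The weight values and the SCHOOL_BUS_PENALTY constant are placeholders invented for illustration; they mark the ethical choices, they do not answer them.

```python
from dataclasses import dataclass, field

# Placeholder weights: the numbers are deliberately arbitrary and only
# mark where the contested ethical parameters would sit.
WEIGHTS = {
    "adult": 1.0,
    "child": 1.0,   # should children weigh more? left open in the essay
    "animal": 0.1,  # point 3): how should animals count? left open
}
SCHOOL_BUS_PENALTY = 5.0  # point 1): extra protection for school buses? left open

@dataclass
class Outcome:
    """Predicted result of one candidate maneuver (hypothetical)."""
    name: str
    casualties: dict[str, int] = field(default_factory=dict)  # e.g. {"adult": 2}
    hits_school_bus: bool = False

def weighted_cost(o: Outcome) -> float:
    """Weighted harm of an outcome under the placeholder weights."""
    cost = sum(WEIGHTS[kind] * count for kind, count in o.casualties.items())
    if o.hits_school_bus:
        cost += SCHOOL_BUS_PENALTY
    return cost

def choose(outcomes: list[Outcome]) -> Outcome:
    """Pick the outcome with the lowest weighted harm."""
    return min(outcomes, key=weighted_cost)
```

Point 4) cannot be captured inside one such function at all: it concerns the interaction of many vehicles each running a different weighted_cost, which is one reason the transparency of these strategies matters, as argued below.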
Even when a variety of complex situations are considered, there will still be circumstances that were not taken into account, and the algorithm may fail to capture the full complexity of some cases, so that its presumed best choice is not really the best choice. However, the computer algorithm may improve over time through machine learning.
To resolve the dilemma, the self-driving software may adopt a rule that minimizes the loss of life. By the same analogy, it is better to let one person die than to let five people die. But how do we put this into practice? How do we configure an algorithm that decides the life and death of others? Are the lives of a family of three more valuable than that of a lone driver? When we drive a car ourselves, any decision made by instantaneous instinct is understood as an inadequate response under pressure, and for this reason we may forgive the decision made by a human. However, when a car is programmed in advance, a choice made to minimize overall loss may be seen as killing on purpose. Many people find it unacceptable to let innocent people die, even though the self-driving car's motive is to let as few people die as possible.
Because how the self-driving algorithm reacts in different situations has a large impact on our lives and on our ethical values, it is very important for the algorithm to be as transparent as possible. The law should require car makers to disclose the strategies of their algorithms, and only after strict safety and ethical evaluation should these be put into practice. The law should also specify the general principles to follow in extreme circumstances and the allocation of responsibility when traffic accidents happen.
In general, as the technology improves, self-driving cars will make driving safer. But self-driving technology encounters many ethical and legal problems that usually have no straightforward answers. We must solve these problems to make the technology acceptable to society. In the next few years, the self-driving car is bound to become a topic of discussion among philosophers, artificial intelligence experts, legal scholars and the general public. Hopefully, we will reach a sensible consensus in the end.
[1] https://en.wikipedia.org/wiki/Trolley_problem
[2] Bonnefon, J. F., Shariff, A., & Rahwan, I. The social dilemma of autonomous vehicles. Science, 2016, 352(6293): 1573–1576.