Philosophy & Ethics
Module 3: Utilitarianism and Deontology

This material has been reproduced and communicated to you by or on behalf of the University of Melbourne in accordance with section 113P of the Copyright Act 1968 (Act).
The material in this communication may be subject to copyright under the Act.
Any further reproduction or communication of this material by you may be the subject of copyright protection under the Act.
Do not remove this notice

Learning outcomes
At the end of this module, you should be able to:
• Explain what ethics is and understand some basic features of ethical thinking
• Describe the ethical theories of utilitarianism and deontology
• Begin to apply these ethical theories to a case study involving AI

What are the ethics of using AI decision-making to:
• Determine if someone goes to jail?
• Help in allocation of police?
• Write your essays for you?
• Make medical diagnoses?
• Exceed human intelligence?

Ethics and religion
• Buddhism, Monotheism (Christianity, Islam, Judaism), Confucianism, Hinduism, Jainism, Shintoism, African religions, the Dreamtime, etc.
• Source of moral authority
• Christians: loving God and each other
• Buddhists: universal compassion, nirvana
• Confucianism: respect for one’s parents and ancestors

What is ethics?
• Socrates: How should one live?
• Wittgenstein: “Ethics is the enquiry into what is valuable, or, into what is really important, or into the meaning of life, or into what makes life worth living, or into the right way of living.”
Ethics is about values, morals, good and bad, right and wrong:
1. How should I act? What standards of behaviour should I adopt?
2. What sort of person should I be?
3. What sort of professional computer scientist should I be?

Ancient and modern ethics – west
Ancient Greece: Ethics as rational activity
Modern ethics – e.g.:
• Is it acceptable to end the lives of suffering people?
• When is it OK or not OK to go to war?
• Is it ever justified for a doctor to lie to a patient?
• Can we favour our family over strangers in desperate need?
• Is it right to eat nonhuman animals?
• Were Inuit people wrong to expose their unwanted infants to the cold?
• Do we have obligations to unborn generations?

Dismissing ethics
• Amoralism: no standards
• Psychopathy
• Egoism: ‘I ought to do only what does the most good for me’
• Not the same as having no standards
• Ethics is about respecting or caring for others as well as self
• Ethics and human life

Relativism
• No universal right and wrong
• Right/wrong just means what a culture believes is right/wrong
• Subjectivism
• E.g. human rights
• Problem: Does right/wrong mean those things?
• People can disagree with other people, cultures
• Give reasons: strong or weak (not just ‘gut feelings’)
• So: the relativist hasn’t shown there are no universal rights and wrongs

Moral machine

Trolley problem

While ethics is a rational process and potentially universal:
• Ethical answers not always clear cut
• People disagree with each other
• Respect other perspectives and diverse insights
• Socrates: We can test our ethical beliefs through open-minded reflection and dialogue
• e.g. in tutorials!

Why should I be ethical?
• Society expects it
• Successful, co-operative team player
• To gain respect
• Inner peace, avoid guilt
• Fulfilling life
• Just because it is right
• “Better to suffer evil than to do it”
• “Unexamined life is not worth living for human beings”

Principles in AI ethics
• Fairness
• Safety
• Accountability
• Transparency
• Benefit
• Explainability
• Privacy

Ethical theory
Utilitarianism – consequence based
Deontology – rule based
Virtue ethics – character based
Ethics of care – relationship based

Utilitarianism
Historical background:
• Religious ideas dominated
• Christianity was strict and looked to happiness in the next world
• Utilitarianism was revolutionary
• Progressive – social change, e.g. more liberty
• Not opposed to pleasure
• Against abstract rules “written in the heavens”
• Partial return to Greek philosophers (Socrates, Plato, Aristotle) – reason

Bentham (1748-1832)

Utilitarianism
Consequentialism: consequences alone determine action’s rightness
What consequences?
Greatest-Happiness Principle: “actions are right in proportion as they tend to promote happiness, wrong as they tend to produce the reverse of happiness” (Mill)
“Greatest happiness of the greatest number of people”
Principle of utility: Right = maximise total net happiness/wellbeing

Mill (1806-1873)

• Utility: of value to beings
• Teleological theory: the ends (the good) determine the right
• Interests: Harms and benefits
• Psychological, physical, emotional, social, economic, spiritual
• Benefits (goods) = positive utility
• Harms (bads) = negative utility
• Maximise utility = increase benefits, decrease harms

Hedonism: Bentham vs Mill
What is utility? What is intrinsically good and bad for us? (not just instrumentally good or bad)
Hedonism: two “sovereign masters”: pleasure & pain
Bentham: all pleasures are equal – “pushpin is as good as poetry”; pain/suffering is bad
What counts: intensity, duration
Mill: higher and lower pleasures
Poetry better than pushpin
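
To make the contrast concrete, here is a minimal toy sketch (not from the lecture; the scores and the ‘quality’ multiplier are invented for illustration) of how Bentham and Mill might score pushpin against poetry:

```python
# Toy scoring of pleasures in the spirit of Bentham and Mill.
# All numbers, and the 'quality' multiplier, are invented for illustration.

def bentham_value(intensity, duration):
    """Bentham: all pleasures count the same; only intensity and duration matter."""
    return intensity * duration

def mill_value(intensity, duration, quality):
    """Mill: 'higher' pleasures get extra weight via a quality multiplier."""
    return intensity * duration * quality

pushpin = {"intensity": 6, "duration": 2}  # a simple game
poetry = {"intensity": 4, "duration": 2}   # a 'higher' pleasure

# For Bentham, pushpin here scores better than poetry.
print(bentham_value(**pushpin), bentham_value(**poetry))  # 12 8

# For Mill, poetry's higher quality (say x3) outweighs its lower intensity.
print(mill_value(**pushpin, quality=1), mill_value(**poetry, quality=3))  # 12 24
```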

Preference utilitarianism
Intrinsic good = satisfied preferences
Intrinsic bad = thwarted preferences
Happiness = overall balance

The utilitarian calculation:
• Best overall state of affairs (net vs gross)
• All pleasure/pain matters equally
• In that sense: all individuals are equal (including you)
• Consider ALL the consequences: good and bad, near and far, probability
• Calculate the best outcome
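
As a rough illustration of this procedure, the sketch below weighs each option’s consequences by probability and picks the highest expected net utility; the options and all the numbers are assumptions made for the example:

```python
# A minimal sketch of the utilitarian decision procedure: for each option,
# sum the probability-weighted utilities to everyone affected, then pick
# the option with the highest expected net utility.
# The options, probabilities, and utilities are all invented.

options = {
    "option_a": [  # list of (probability, net utility for an affected party)
        (0.9, +10),   # likely benefit to one group
        (0.1, -50),   # small chance of serious harm to another
    ],
    "option_b": [
        (1.0, +3),    # certain but modest benefit
    ],
}

def expected_net_utility(consequences):
    """Sum of probability-weighted utilities over all consequences."""
    return sum(p * u for p, u in consequences)

for name, cons in options.items():
    print(name, expected_net_utility(cons))  # option_a 4.0, option_b 3.0

best = max(options, key=lambda name: expected_net_utility(options[name]))
print("Utilitarian choice:", best)  # option_a
```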

Attractions of utilitarianism:
• Surely consequences matter
• Surely happiness/wellbeing matters
• If they matter, isn’t more happiness better than less?
• Simple, clear decision procedure: the Principle of Utility
• Rational (cf. accepting authority)
• Equality: all count for 1 – no class, race, gender, intelligence, etc. favouritism

Trolley problem

Deontology
• Non-teleological (not about ends)
• Deontic = duty, obligation
• Rule-based
• Non-consequentialist
We learn rules and principles from childhood
These capture best what morality is about
Can refine and alter rules via reflection and argument
• Keep promises
• Don’t steal
• Be honest, don’t deceive
• Be just and fair
• Repay kindnesses
• Avoid doing harm
• Look after your friends and family
• Don’t deliberately harm or kill the innocent

Trolley problem
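
To see how utilitarianism and deontology come apart on this case, here is a toy sketch of the ‘push the large man’ version; the encoding of the case and the single absolute rule are simplifications assumed for illustration:

```python
# A toy encoding of the 'push the large man' trolley case, contrasting
# the two theories. The case representation and the rule are
# simplifications invented for illustration.

case = {
    "lives_saved_if_push": 5,
    "lives_lost_if_push": 1,
    "deliberately_kills_an_innocent": True,
}

def utilitarian_verdict(case):
    """Only the balance of consequences matters."""
    net = case["lives_saved_if_push"] - case["lives_lost_if_push"]
    return "push" if net > 0 else "don't push"

def deontological_verdict(case):
    """An absolute rule blocks the action regardless of the numbers."""
    if case["deliberately_kills_an_innocent"]:
        return "don't push"
    return "push"

print("Utilitarian:", utilitarian_verdict(case))      # push
print("Deontologist:", deontological_verdict(case))   # don't push
```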

Attacks on utilitarianism
For some deontologists, consequences can matter – e.g. generosity requires calculating whether your action will benefit someone. But there is more to ethics than consequences and calculation!
• The maximising ethic is too demanding – we must give up much of our own wellbeing for the sake of strangers
• Evil-doing – e.g. pushing the large man
• Not as helpful a decision-making procedure as utilitarians think – the calculation is difficult to impossible
• Justice and fairness – although each person’s similar interests count equally, maximising wellbeing can cause apparent injustice

Angry mob example

Prima facie vs. absolute rules
Prima facie rules: ‘on the face of it’
• Rule applies presumptively
• Rules can conflict: need to use judgement to resolve (e.g. break promise to save a life)
• Some rules win out, others are overridden
Absolute rules: unconditional
• Don’t have exceptions
• Don’t yield to other rules
• Greatest proponent: German philosopher Immanuel Kant (1724-1804)

Kant’s ethics
A special kind of deontology
Absolute duties
Despised “the serpent-windings” of utilitarianism
Aiming to produce good consequences ≠ right
Right = a “good will”; acting for right reasons; acting for duty’s sake
Morality is rational
But rationality is opposed to consequentialism

Categorical imperative – first
• Actions must be universalisable
• We act on rules
• But rules don’t just apply to you
• Can’t make exceptions for ourselves
• “Act only according to that maxim (rule) whereby you can at the same time will that it should become a universal law”
• E.g.: Rule: it’s OK for me to lie
• This means: it’s OK for anyone to lie
• But if everyone lies when it suits them, trust in what people say collapses, so successful lying becomes impossible
• Hence: lying is irrational
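
The universalisability test can be modelled very loosely in code. The sketch below is a toy construction (the belief threshold is an arbitrary assumption, not anything Kant says): a lie succeeds only while people generally believe each other, so universalising the maxim makes it self-defeating:

```python
# A very loose toy model of Kant's universalisability test for the maxim
# 'lie when it suits me'. The 0.5 belief threshold is an arbitrary
# assumption made for illustration.

def lying_succeeds(share_who_lie):
    """A lie 'works' only while listeners still generally believe speakers.
    Assume general belief survives while fewer than half of people lie."""
    return share_who_lie < 0.5

# The maxim works for a lone liar...
print(lying_succeeds(share_who_lie=0.01))  # True

# ...but universalised (everyone lies when it suits them), belief collapses
# and the maxim defeats itself: successful lying is no longer possible.
print(lying_succeeds(share_who_lie=1.0))   # False
```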

Categorical imperative – second
Second part of the Moral Law
Connected to the first
All rational beings can grasp the moral law
They have autonomy
“Act in such a way that you treat humanity, whether in your own person or in the person of any other, never merely as a means to an end, but always at the same time as an end”

Ends and merely means
• All autonomous beings are ends-in- themselves
• Sublime and equal dignity (infinite worth)
• Never treat them as merely means
• Hiring a plumber uses them as a means – that is allowed
• But treating people as mere means and not ends-in-themselves – e.g. deceiving, breaking promises, not gaining consent, manipulating – is never allowed
• E.g. a murderer asking if you have hidden his intended victim – may you lie?
• Autonomy must be respected
• Utilitarianism: consequences matter more than rules
• But: rules matter if they affect consequences! (e.g. a social rule against punishing the innocent may do so)

One example:
AI headbands
Morally justified or not?

Wall Street Journal video (https://www.wsj.com/articles/chinas-efforts-to-lead-the-way-in-ai-start-in-its-classrooms-11571958181)

AI headbands example
• Study in selected classrooms to collect data, train AI, and improve the headbands
• Uses electroencephalography (EEG) sensors to measure brain signals and AI algorithm to translate signals into real-time focus levels
• Colours displayed on band
• Also designed to help students focus through neurofeedback
• Results of classrooms with and without headbands compared
• Data from students kept on company server
• Students and parents not told about details of the study
• What might utilitarians and Kant say about this?
