Philosophy & Ethics
Module 3: Utilitarianism and Deontology
This material has been reproduced and communicated to you by or on behalf of the University of Melbourne in accordance with section 113P of the Copyright Act 1968 (Act).
The material in this communication may be subject to copyright under the Act.
Any further reproduction or communication of this material by you may be the subject of copyright protection under the Act.
Do not remove this notice
Learning outcomes
At the end of this module, you should be able to:
• Explain what ethics is and understand some basic features of ethical thinking
• Describe the ethical theories of utilitarianism and deontology
• Begin to apply these ethical theories to a case study involving AI
Power of AI
• Potential for great good
• Potential for great harm
What are the ethics of using AI decision-making to:
• Determine if someone goes to jail?
• Help in allocation of police?
• Write your essays for you?
• Make medical diagnoses?
• Exceed human intelligence?
Ethics and religion
• Buddhism, monotheism (Christianity, Islam, Judaism), Confucianism, Hinduism, Jainism, Shintoism, African religions, the Dreaming, etc.
• Source of moral authority
• Christians: loving God and each other
• Buddhists: universal compassion, nirvana
• Confucianism: respect for parents and ancestors
What is ethics?
• Socrates: How should one live?
• Wittgenstein: “Ethics is the enquiry into what is valuable, or into what is really important, or into the meaning of life, or into what makes life worth living, or into the right way of living.”
Ethics is about values, morals, good and bad, right and wrong:
1. How should I act? What standards of behaviour should I adopt?
2. What sort of person should I be?
3. What sort of professional computer scientist should I be?
Normative ethics (how we should act) versus descriptive ethics (how people in fact act)
Ancient and applied ethics in the West
Modern applied ethics – e.g.:
• Is it acceptable to end the lives of suffering people?
• When is it OK or not OK to go to war?
• Is it ever justified for a doctor to lie to a patient?
• Can we favour our family over strangers in desperate need?
• Is it right to eat nonhuman animals?
• Do we have obligations to unborn generations?
Ancient Greece: Ethics as rational activity
Self and others
• Nihilism: there are no ethical standards
• But: that outlook is self-centred
• Egoism: ‘I ought to do only what does the most good for me’
• Ethics is about respecting or caring for others as well as self
• Ethics and human life
Relativism
• No universal right and wrong
• Cultural relativism: right/wrong just means what a culture believes is right/wrong
• Subjectivism: the same claim at the level of the individual
• E.g. human rights
• Problem: Does right/wrong mean those things?
• People can disagree with other people, cultures
• Give reasons: strong or weak (not just ‘gut feelings’)
• So: the relativist hasn’t proved there are no universal rights and wrongs
Moral machine
• Old person vs. young person
• Man vs. pregnant woman
• Famous cancer scientist vs. homeless person
• Intelligent animal vs. serial killer
Trolley problem: What would you do and why?
While ethics is a rational process and potentially universal:
• Ethical answers not always clear cut
• We can disagree with each other
• Respect other perspectives and diverse insights
• Socrates: We can test our ethical beliefs through open-minded reflection and dialogue
• e.g. in tutorials!
Why should I be ethical?
• Society expects it
• Successful, co-operative team player
• To gain respect
• Inner peace, avoid guilt
• Just because it is right
• Socrates: “Unexamined life is not worth living for human beings”
• “Better to suffer evil than to do it”
Principles in AI ethics
• Fairness
• Safety
• Accountability
• Transparency
• Benefit
• Explainability
• Privacy
Ethical theory
Utilitarianism – consequence based
Deontology – rule based
Virtue ethics – character based
Ethics of care – relationship based
Utilitarianism
Religious ideas dominated pre-Enlightenment thought: Christianity stressed strict prohibitions and looked to happiness in the next world.
Utilitarianism was revolutionary:
• Against abstract rules “written in the heavens”
• Progressive – for social change, e.g. more liberty
• Not opposed to pleasure
• A partial return to the Greek philosophers (Socrates, Aristotle, Plato) – reason
Bentham (1748-1832)
Utilitarianism
Consequentialism: consequences alone determine action’s rightness
What consequences?
Greatest-Happiness Principle: “actions are right in proportion as they tend to promote happiness, wrong as they tend to produce the reverse of happiness” (Mill, 1806-1873)
“Greatest happiness of the greatest number of people”
Principle of utility: Right = maximise total net happiness/wellbeing
Mill (1806-1873)
• Teleological theory: the ends (the good) determine the right
• Utility: value to individuals
• Interests: Harms and benefits
• Psychological, physical, emotional, social, economic, spiritual
• Benefits (goods) = positive utility
• Harms (bads) = negative utility
• Maximise utility = increase benefits, decrease harms
Hedonism: Bentham vs Mill
Utility: what is intrinsically good or bad for us (not merely instrumentally good, e.g. money, exercise)
Hedonism: nature’s “two masters”: pleasure and pain
Bentham: pleasure is good, pain/suffering is bad
• Measured by intensity and duration
• All pleasures are equal: “pushpin as good as poetry”
Mill: higher and lower pleasures
• Poetry is better than pushpin
• Socrates dissatisfied > fool satisfied
Preference utilitarianism
• Intrinsic good = satisfied preferences
• Intrinsic bad = thwarted preferences
• Happiness = overall balance
• Preference calculus
Best overall state of affairs
Net (not gross) pleasure/preference satisfaction
All pleasure/pain matters equally
In that sense: all individuals are equal (including you)
Consider ALL the consequences:
• Good and bad
• Near and far
• Their probability
Calculate the best outcome: the hedonic calculus / preference calculus (a sketch follows below)
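A minimal sketch of the calculus as a decision procedure (not from the lecture; it assumes, contentiously, that benefits and harms can be scored on a single numeric scale and weighted by probability; the names Outcome and expected_utility are illustrative):

from dataclasses import dataclass

@dataclass
class Outcome:
    description: str
    utility: float      # positive = benefit (pleasure), negative = harm (pain)
    probability: float  # chance the outcome occurs (0.0 to 1.0)

def expected_utility(outcomes: list[Outcome]) -> float:
    # Net expected utility: sum of probability-weighted benefits and harms
    return sum(o.utility * o.probability for o in outcomes)

# Trolley problem, crudely scored: each life lost = -10 utility
actions = {
    "pull lever": [Outcome("one person on the side track dies", -10.0, 1.0)],
    "do nothing": [Outcome("five people on the main track die", -50.0, 1.0)],
}

best = max(actions, key=lambda a: expected_utility(actions[a]))
print(best)  # "pull lever": -10 beats -50, so the calculus says divert

The hard part, as the objections below suggest, is everything the numbers hide: whose utility counts, how to score it, and whether such scoring is possible at all.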
Attractions of utilitarianism:
• Surely consequences matter
• Surely happiness/wellbeing matters
• If they matter, isn’t more happiness better than less?
• A simple, clear decision procedure: the Principle of Utility
• Rational (cf. accepting authority)
• Equality: all count for one – no favouritism by class, race, gender, intelligence, etc.
Trolley problem – utilitarianism
Deontology
• Non-teleological (not about ends)
• Deontic = duty, obligation
• Rule-based
• Non-consequentialist
We learn rules and principles from childhood
These capture best what morality is about
Can refine and alter rules via reflection and argument
Keep promises
Don’t steal
Be honest, don’t deceive
Be just and fair
Repay kindnesses
Avoid doing harm
Look after your friends and family
Don’t deliberately harm/kill the innocent
Trolley problem – deontology
Deontology (D) attacks utilitarianism (U)
For some deontologists, consequences can matter – e.g. generosity requires calculating benefits. But there is more to ethics than calculating consequences!
A maximising ethic is too demanding – it requires giving up much of your own wellbeing for strangers (e.g. Peter Singer: give away a high percentage of your income).
Evil-doing – e.g. pushing the large man off the bridge in the trolley case
Not as helpful a decision-making procedure as utilitarians think – the calculation is difficult to impossible
Fairness – although each person’s similar interests count equally, maximising wellbeing can cause apparent injustice
Angry mob example
Murder in town
• Townsfolk want justice
• They have captured a suspect – a homeless loner
• As sheriff, you know the suspect is innocent
• But if the man is not hanged – rampage!
What would a U do?
What would a D do?
Prima facie vs. absolute rules
Prima facie rules: ‘on the face of it’
• Rule applies presumptively
• Rules can conflict: need judgement to resolve (e.g. break promise to save a life)
• Some rules win out, others are overridden
Absolute rules: unconditional
• Don’t have exceptions
• Don’t yield to other rules
• Greatest protagonist: the German philosopher Immanuel Kant (1724-1804)
Kant’s ethics
A special kind of deontology
Absolute duties
Despised “the serpent-windings” of U
Aiming to produce good consequences ≠ right
Right = a “good will”; acting for right reasons; acting for duty’s sake
Morality is rational
But rationality is opposed to consequentialism
Kant (1724-1804)
Categorical imperative – first
• Actions must be universalisable
• We act on rules in ethics
• But moral rules don’t just apply to you
• Can’t make exceptions for ourselves
• “Act only according to that maxim (rule) whereby you can at the same time will that it should become a universal law”
• E.g.: Rule: it’s OK for me to lie
• This means: it’s OK for anyone to lie
• But if everyone lied when it suited them, truth-telling would collapse – lying would become impossible, because no one would be believed
• Hence: lying is irrational
• Same for promise-breaking
Categorical imperative – second
Second part of Moral Law
Connected to the first
All rational beings can grasp the moral law
They have autonomy
“Act in such a way that you treat humanity, whether in your own person or in the person of any other, never merely as a means to an end, but always at the same time as an end”
Ends and merely means
• All autonomous beings are ends-in- themselves
• Sublime and equal dignity (infinite worth)
• Never treat them as merely means
• Hiring a plumber uses them as a means – that is allowed
• But treating people as mere means and not as ends-in-themselves – e.g. deceiving, breaking promises, manipulating – is never allowed
• E.g. a murderer asks if you have hidden his intended victim – may you lie?
• Autonomy must be respected
Modern notion of autonomy
Autonomy is the ability to think for ourselves, plan our lives, act on our values
To respect autonomy, we need people’s informed consent (e.g. before collecting private information about them)
We should aim to:
• Respect the autonomy of others
• Try to understand people’s values and beliefs and get their consent
• Respect control over personal information
• Remember both powerful and weak have autonomy
• Be honest with people, including when things go wrong (deceiving people disrespects their autonomy)
U and rules
• U: consequences matter more than rules
• But: rules matter if they affect consequences!
• E.g. Social rule against punishing innocent – good U rule?
• Some rules, laws, basic rights are important (e.g. don’t kill, don’t torture etc.)
• But: they must be changed if they don’t produce the best consequences!
• U: ‘Morality made for people, not people for morality’
AI headbands
Morally justified or not?
Wall Street Journal video (https://www.wsj.com/articles/chinas-efforts-to-lead-the-way-in-ai-start-in-its-classrooms-11571958181)
AI headbands example
• Study in selected classrooms to collect data, train AI, and improve the headbands
• Uses electroencephalography (EEG) sensors to measure brain signals and AI algorithm to translate signals into real-time focus levels
• Colours displayed on band
• Also designed to help students focus through neurofeedback
• Results of classrooms with and without the headbands were compared
• Data from students kept on company server
• Participation was compulsory, and students and parents were not told the details of the study
• What might U and Kant say about this?
Summary
• Nature of ethics
• Religion
• Egoism, relativism
• Moral reason
• Utilitarianism
• Deontology
• How to apply these to AI