Image courtesy Unsplash / @timmossholder
Week 8/S1/2022
Accessibility & Equity
Marc Cheong
School of Computing and Information Systems / Centre for AI & Digital Ethics
The University of Melbourne
marc.cheong [at] unimelb.edu.au
Learning Outcomes
1. Define the concept of accessibility and universal usability in computing (especially in HCI and related fields) and understand how it is promoted by computing best practices as well as by law.
2. Define the concept of equity, in relation to a machine’s idea of purported fairness.
3. Understand how complex systems can sometimes neglect accessibility and equity in their design process – even though on the surface they seem ‘neutral’ – and ways to mitigate this.
4. Learn about the conflicting technical definitions of fairness as well as ideas on how to ameliorate issues in the design process.
Related Reading
This module has two readings corresponding to the two broad themes within.
Accessibility: Social Biases in NLP Models as Barriers for Persons with Disabilities.
Hutchinson, B., Prabhakaran, V., Denton, E., Webster, K., Zhong, Y., Denuyl, S. arXiv [cs.CL], 2 May 2020. https://arxiv.org/abs/2005.00813
Equity: Ethical Implications of AI Bias as a Result of Workforce Gender Imbalance.
Cheong, M., Lederman, R., McLoughney, A., Njoto, S., Ruppanner, L., Wirth, A. University of Melbourne / UniBank. 2020.
This research report is the result of an interdisciplinary collaboration between the University of Melbourne and UniBank in uncovering sources of bias – both human and algorithmic – to consider when deploying any form of automated system in the recruitment/shortlisting of job candidates.
Read only pp. 5-34 inclusive – the appendices are optional!
[Link updated on Canvas 20220428]
1. About accessibility.
2. About equity.
3. Complexity, complex systems, and unintended consequences!
4. 💡 Case Study & Reflection: Natural Language Processing: Sexist? Ableist?
5. 💡 Case Study & Reflection: AI-based Hiring: Neutral from the outset, but not equitable?
6. Conclusion: Can a machine determine what is fair and equitable?
About accessibility.
Image courtesy Unsplash / @timmossholder
What is accessibility? (1/2)
“Basically, technology is accessible if it can be used as effectively by people with disabilities as by those without” (Thatcher, 2004).
“Accessibility refers to the degree to which an interactive product is accessible by as many people as possible. A focus is on people with disabilities.” (Sharp, 2011)
Sources: Thatcher, J. (2004) “Web Accessibility for Section 508”, http://www.jimthatcher.com/webcourse1.htm; Preece, J., Sharp, H., Rogers, Y. (2015). Interaction Design: Beyond Human-Computer Interaction. John Wiley & Sons.
What is accessibility? (2/2)
Accessibility is the degree to which a system is usable by as many people as possible without modification. Its goal: equality of access and removal of barriers to access based on disability, technical or environmental limitations. Usability and accessibility are compatible design approaches – sharing a concern for universal design as a foundation for good design.
(Alexander, 2004a).
Source: Alexander, D. (2004a) What is the relationship between usability and accessibility, and what should it be? http://deyalexander.com/presentations/usability-accessibility
Credits: Adapted from material by , built upon earlier material shared by Sheard, J; Lay, W; Fleming, R; Kathpalia, M.; Linger, H. and others.
Universal Usability and HCI
Usability and Human-Computer Interaction were the first fields to notice this – cf. the design of tech artifacts and user interfaces.
• Hardware Products
• Software Interfaces etc.
Universal Usability = a “design for all” approach which is about making a product as accessible as possible to as wide a group of people as possible. The term originated from architecture (consider stairs vs. ramps/elevators/escalators).
Credits: Adapted from material by , built upon earlier material shared by Sheard, J; Lay, W; Fleming, R; Kathpalia, M.; Linger, H. and others.
Image source: Wikipedia / The Verge (Owen, 2020).
Audience activity I [2-5 mins]
Facilitator: Head Tutor, Maddie.
Online: please use Canvas Chat to share your ideas.
In-person: chat with your neighbour, then share your views with the class. Incentive:
Image source: Cadbury
Besides accessibility for, say, wheelchair users, how else does this design feature promote universal usability?
Misconceptions
Accessibility goes beyond just ‘catering for those with disabilities’.
Situational impairments
Consider: a busy parent during the breakfast rush
Consider: defense personnel during deployments in a humanitarian crisis
Consider: remote learning/work during the Covid-19 pandemic
Temporary disability/temporary impairment
Consider: a student who broke their arm after a bicycle accident
Consider: a lecturer who has a spinal injury
See: https://www.w3.org/WAI/EO/wiki/Situational_terminology
Credits thanks to Lay, W.
Accessibility and the Law
Landmark case: Maguire versus SOCOG (Sydney Organising Committee for the Olympic Games)
“Maguire made a complaint to the human rights and equal opportunity commission (HREOC)… (SOCOG) had discriminated against him as a person disabled, in contravention of the Disability Discrimination Act 1992…”
Main point: “failure to provide a website which was accessible to Maguire…”
“SOCOG said that it did not discriminate unlawfully … cost and effort in retraining staff and redrawing entire development methods was an unjustifiable hardship in providing an accessible website…”
Basically: SOCOG gave excuses (too much time needed, etc.), which were refuted by expert witnesses!
“The Commissioner found that SOCOG had engaged in unlawful discrimination against Maguire in violation of Section 24 of the DDA 1992”
SOCOG was stubborn; “The Commissioner found that SOCOG only partially complied and as a result, by section 103(1)(b)(iv) of the DDA, the Commissioner awarded Maguire $20,000.”
Verbatim quotes taken from Wikipedia Contributors (2020)
https://en.wikipedia.org/wiki/Maguire_v_Sydney_Organising_Committee_for_the_Olympic_Games_(2000)
Reflection.
Human-Computer Interaction (HCI) and, specifically, usability studies were the first subfields of CS to notice issues with accessibility.
→ Designing tech artifacts (e.g. how to design the hardware); user interfaces (the software).
Reflection: But wait – where does this factor into AI/ML?
Image source: HowToGeek / Imagentle/Shutterstock
About equity.
Image courtesy Unsplash / @timmossholder
What is equity?
equity | ˈɛkwɪti | noun (plural equities) [mass noun]
1 the quality of being fair and impartial: equity of treatment.
• Law a branch of law that developed alongside common law and is concerned with fairness and justice, formerly administered in special courts: if there is any conflict between the principles of common law and equity, equity prevails.
Source: Oxford Dictionary of English, via Apple Dictionary.app
Focus of the module
“1 the quality of being fair and impartial: equity of treatment.” Source: Oxford Dictionary of English, via Apple Dictionary.app
There are many other interrelated concepts, such as fairness (philosophy → ethics), that you may have encountered before.
This module continues the discussion from Simon’s Lecture 5, but takes a more ‘applied’ view from the perspective of the technology. As such, there will be slides linking the concepts found in Lecture 5 with this one.
Focus of the module: Recall Lecture 5?
“1 the quality of being fair and impartial: equity of treatment.” Source: Oxford Dictionary of English, via Apple Dictionary.app
Recall Simon’s discussion on how Northpointe justified their allegedly fair COMPAS design using statistics?
Audience activity II [2-5 mins]
Facilitator: Head Tutor, Maddie.
Online: please use Canvas Chat to share your ideas.
In-person: chat with your neighbour, then share your views with the class. Incentive:
Image source: Cadbury
Recall Simon’s Lecture 5…
… how many ‘mathematical definitions of fairness’ do you think there are, based on research?
Narayanan (2018): 21 definitions!
Recommended viewing (very accessible, not too math-y)
A simplified example of the ‘tensions of fairness’:
The decision-maker wants to hire the best person for the job; gender is not important.
The machine/algorithm wants to optimise for prior precedent, even if that means excluding certain genders!
Key Point (Cheong et al, 2020)
Let’s focus on equity in computer science, especially in algorithmic design.
“In academic papers discussing the notion of fairness … researchers have found that different ideas of fairness can co-exist … (Chouldechova 2017; Kleinberg et al. 2016). …
Importantly, these different notions of fairness are known in some scenarios to be incompatible: a single model cannot meet every reasonable or accepted definition of fairness, and therefore bias must exist in one way or another inside the model…”
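To make this concrete, here is a minimal Python sketch (the confusion-matrix counts are invented for illustration, not taken from COMPAS or any real system): two groups can have identical positive predictive value yet very different false positive rates, so when base rates differ, equalising one fairness metric leaves the other unequal.

# Invented confusion-matrix counts per group: (tp, fp, fn, tn).
groups = {
    "Group 1": (40, 20, 10, 30),
    "Group 2": (10, 5, 25, 60),
}
for name, (tp, fp, fn, tn) in groups.items():
    ppv = tp / (tp + fp)  # P(actually +ve | predicted +ve): 'calibration'-style fairness
    fpr = fp / (fp + tn)  # P(predicted +ve | actually -ve): error-rate-style fairness
    print(f"{name}: PPV = {ppv:.2f}, FPR = {fpr:.2f}")
# Output: both groups get PPV = 0.67, but FPRs of 0.40 vs 0.08 --
# with different base rates, at least one fairness notion must give.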
Thought Experiment 1:
job hiring and 3 fairness definitions?
Assume we have an algorithm that predicts how likely an individual is to succeed in a job.
Each individual belongs to one of two groups (A/a or B/b) – e.g. ATAR (high/low).
Suppose that (unknown to the algorithm) the “real world” situation is as follows:
• UPPERCASE versions represent the true positives (actually likely to succeed).
• lowercase versions represent the true negatives (actually likely to not succeed).
• In the “real world”, we have different numbers of a’s and b’s with the following ground truth:
A’s (15 total, 10 +ve, 5 -ve): A A A A A A A A A A a a a a a
B’s (6 total, 2 +ve, 4 -ve): B B b b b b
This is where we have a tension: the total numbers of B:A are disproportionate (2 to 5), and the ratio of true positives per group differs (A is 2 to 1, B is 1 to 2). How, then, do we propose to be equitable to all? (See the sketch after this list.)
• Do we, say, follow a 2:1 ratio (per A), and hire two more ‘b’s who are at risk of not succeeding?
• Or do we, say, follow a 1:2 ratio (per B), and NOT hire five ‘A’s who are denied the chance to succeed?
• Or do we, say, pick the 12 out of 21 people with e.g. the best scores, while ignoring their groupings (A vs B)?
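As a minimal sketch (the group counts come from the slide above; the policy labels and the 'at-risk'/'denied' bookkeeping are ours), the trade-off can be computed directly:

# Ground truth from the slide: 15 A's (10 true positives), 6 B's (2 true positives).
positives = {"A": 10, "B": 2}
totals = {"A": 15, "B": 6}

# Apply each group's own success ratio to BOTH groups and see who loses out.
for label, hire_rate in [("A's 2:1 ratio", 2 / 3), ("B's 1:2 ratio", 1 / 3)]:
    for g in ("A", "B"):
        hired = round(totals[g] * hire_rate)
        at_risk = max(0, hired - positives[g])  # hired beyond the group's true positives
        denied = max(0, positives[g] - hired)   # true positives left unhired
        print(f"{label} | group {g}: hire {hired}, at-risk {at_risk}, denied {denied}")
# Under A's ratio, 2 extra 'b's at risk get hired; under B's ratio, 5 qualified 'A's are denied.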
Thought Experiment 2: EqualShareAlgorithm
Even if the design is well-intentioned, and the code is written in a way that is mathematically and logically sound, inequity can arise – because there are many (mathematical/social) definitions of equity that can be embedded in the logic (and models) we employ.
For now, let’s turn to one very naïve case, to reflect on.
Create an algorithm to divide a finite pool of resources (X) equitably across N participants (P1, … PN).
Example answer: EqualShareAlgorithm
Calculate share = (X / N)
For each person in participant pool {P1, … PN}:
Allocate current person their equal allocation (share)
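A direct Python transcription of this pseudocode might look like the following (a minimal sketch; the function name and type hints are ours):

def equal_share(resources: float, participants: list) -> dict:
    """Divide a finite pool equally among participants, ignoring need."""
    share = resources / len(participants)  # share = X / N
    return {person: share for person in participants}

# e.g. equal_share(100.0, ["P1", "P2", "P3", "P4"]) allocates 25.0 to each.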
☕️ Break time! See you in 5
Image courtesy Unsplash / @timmossholder
Audience activity III [5-10 mins]
Facilitator: Head Tutor, Maddie.
Online: please use Canvas Chat to share your ideas.
In-person: chat with your neighbour, then share your views with the class. Incentive:
Image source: Cadbury
In what condition(s) does this algorithm become unfair?
EqualShareAlgorithm to divide a finite pool of resources (X) equitably across N participants (P1, … PN).
• Calculate share = (X / N)
• For each person in participant pool {P1, … PN}:
Allocate current person their equal allocation (share)
Discussion: EqualShareAlgorithm
EqualShareAlgorithm to divide a finite pool of resources (X) equitably across N participants (P1, … PN).
Calculate share = (X / N)
For each person in participant pool {P1, … PN}:
Allocate current person their equal allocation (share)
Now consider that the algorithm is to be deployed in the real world to automate the allocation of resources to different communities.
For a given affluent community, assume everyone is sufficiently well-off and has more than enough resources, money, etc., EXCEPT for two people (only P1 and P2).
P1 and P2 are the only ones who need access to resources (food, water, etc.) due to (hunger, health conditions, etc.).
Is EqualShareAlgorithm still equitable???
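One possible mitigation, sketched below, is to weight allocations by need rather than splitting equally. This is only an illustration: the need scores are an assumption we invent here, and deciding how to measure ‘need’ is itself a contested, value-laden design choice.

def need_weighted_share(resources: float, need: dict) -> dict:
    """Allocate in proportion to each participant's need score."""
    total_need = sum(need.values())
    if total_need == 0:
        # Nobody registers any need: fall back to an equal split.
        return {p: resources / len(need) for p in need}
    return {p: resources * n / total_need for p, n in need.items()}

# Slide scenario: only P1 and P2 have any need, so they split the pool:
# need_weighted_share(100.0, {"P1": 1, "P2": 1, "P3": 0, "P4": 0}) -> 50.0 each to P1 and P2.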
Reflection.
Suddenly, your equitable algorithm doesn’t seem so equitable after all.
Reflection: Can we predict these things from the outset? How can we plan for them beforehand so that we don’t fail?
Image source: HowToGeek / Imagentle/Shutterstock
Complexity, complex systems, and unintended consequences!
Image courtesy Unsplash / @timmossholder
Complexity = the enemy.
Unintended consequences after deployment.
The design of an automated / computerised / AI-driven system can seem fair…
Again, consider an EqualShareAlgorithm which divides a finite pool of resources equally (by simply getting the average share per person, without fear or favour)… reviewing it at face value, we may gain some trust (cf Jacovi, Marasović, Miller, Goldberg, 2021)
Yet, these algos might violate equity (and accessibility) AFTER they are deployed.
We only notice the problem when we deploy it…
and only then find out that it doesn’t work in certain cases.
Systems are inherently complex: what works in isolation may not work ‘as a whole’, or when deployed in circumstances (external factors, e.g. social factors) we did not foresee.
Image source: Giphy/The Masked Singer
Complexity = the enemy.
Unintended consequences after deployment.
Let’s revisit the ‘provocation’ or thought experiment for this module.
Examples: Accessibility issues after deployment?
Image source: Bioshock / 2K (Via NeoGAF forums, Engadget/NegativeGamer)
Audience activity IV [2-5 mins]
Facilitator: Head Tutor, Maddie.
Online: please use Canvas Chat to share your ideas.
In-person: chat with your neighbour, then share your views with the class. Incentive:
Image source: Cadbury
How is this (hopefully) fixed in modern games?
Image source: Bioshock / 2K (Via NeoGAF forums, Engadget/NegativeGamer)
Examples: Equity issues after deployment?
Image source: Time Magazine – Rose (2010)
Examples: Equity issues after deployment?
Image source: danah boyd – Quartz (2014)
ML/AI: Issues even before deployment?
The examples – EqualShareAlgorithm, Bioshock ‘Hacking’ Minigame (2010), Nikon Cameras, Oculus:
We only notice the problem when we deploy it… and only then find out that there are issues in certain cases which we didn’t test sufficiently for.
With machine learning, we need vast amounts of complex data when building the systems as well. Feedback loops + complexity = bad.
Analogy: what if we build an ensemble face-detector classification system, using the face detection capability of many consumer-grade cameras on a set of training data?
→ The problems ‘after deployment’ get fed back into the system, entrenching these issues. Again, the complexity of modern systems makes these hard to untangle!
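To illustrate the feedback-loop worry, here is a toy simulation (all numbers and the update rule are invented for this sketch): if a detector ‘collects’ proportionally more data from the groups it already represents well, the under-represented group’s share of the training data keeps shrinking with each generation.

# Invented starting data: group_b is slightly under-represented.
data = {"group_a": 5000, "group_b": 4000}

for generation in range(5):
    total = sum(data.values())
    # Crude assumption: better-represented groups are detected better,
    # so they contribute disproportionately more to the next training set.
    data = {g: int(n * (n / total + 0.5)) for g, n in data.items()}
    share_b = data["group_b"] / sum(data.values())
    print(f"generation {generation}: group_b data share = {share_b:.3f}")
# The share falls each round (0.42 -> 0.38 -> 0.32 -> ...):
# the deployed system entrenches its own bias.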
💡Case Study: Natural Language Processing: Sexist? Ableist?
C/W: discriminatory language might be found within.
Image courtesy Unsplash / @timmossholder
Reading: Hutchinson et al (2020)
Reading: Cheong et al (2020)
Case Study: NLP Models (drawing upon both readings)
Machine learning models are trained on large volumes of data
(we focus on Natural Language Processing / NLP here as it is the easiest to discuss, and widely applicable in systems involving large amounts of textual data).
Where does the data come from?
It has to learn by starting somewhere.
That ‘somewhere’: lots of websites, news, blog posts, Wikipedia, etc. The statistical patterns of words are captured in a language model.
E.g. en_core_web_md in SpaCy:
– https://spacy.io/models/en#en_core_web_md
“trained on OntoNotes, with GloVe vectors trained on Common Crawl”.
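Assuming the model has been downloaded (python -m spacy download en_core_web_md), it can be loaded and inspected in a few lines. This is only a minimal sketch of where these pretrained vectors enter a pipeline:

import spacy

nlp = spacy.load("en_core_web_md")  # medium English model with pretrained word vectors

doc = nlp("The nurse spoke to the engineer.")
for token in doc:
    # has_vector / vector_norm show whether a pretrained vector backs this word.
    print(token.text, token.has_vector, round(float(token.vector_norm), 3))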
Slide adapted from “Gender Bias: From language models to disparate impact” – credits to [CAIDE] G. Bush, S. Coghlan, K. Leins, A. Lodders, T. Miller, J. Paterson; and [CIS/Policy Lab] L. Frermann; S. Njoto; L. Ruppanner (in alphabetical order).
Case Study: NLP Models (drawing upon both readings)
Gender bias in word embeddings
(Duman, Kalai, Leiserson, Mackey, Sursesh, 2017) http://wordbias.umiacs.umd.edu/
“as the daughter of an attorney Mrs. Bennet married up when she captivated the landed Mr. Bennet”
– Pride and Prejudice, as cited in
http://www.diva-portal.org/smash/get/diva2:207053/FULLTEXT01.pdf
(extrapolated to ‘big data’…)
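A rough probe of this phenomenon (a sketch only: the word choices are ours, and a real audit would use curated word sets, e.g. WEAT-style tests) compares how occupation words sit relative to gendered pronouns in the vector space:

import spacy

nlp = spacy.load("en_core_web_md")  # assumes the model is installed
he, she = nlp.vocab["he"], nlp.vocab["she"]

for word in ["doctor", "nurse", "engineer", "homemaker"]:
    lex = nlp.vocab[word]
    # Lexeme.similarity gives cosine similarity between the word vectors.
    print(word, round(float(lex.similarity(he)), 3), round(float(lex.similarity(she)), 3))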
Slide adapted from “Gender Bias: From language models to disparate impact” – credits to [CAIDE] G. Bush, S. Coghlan, K. Leins, A. Lodders, T. Miller, J. Paterson; and [CIS/Policy Lab] L. Frermann; S. Njoto; L. Ruppanner (in alphabetical order).
Audience activity V [2-5 mins]
Facilitator: Head Tutor, Maddie.
Online: please use Canvas Chat to share your ideas.
In-person: chat with your neighbour, then share your views with the class. Incentive:
Image source: Cadbury
What are your thoughts on the current state of the art in language models (e.g. GPT-3)?
Do you think they are less biased?
Reflection.
Current state of the art (GPT-3)? Point to ponder.→
Chan, A. (2022). GPT-3 and InstructGPT: technological dystopianism, utopianism, and “Contextual” perspectives in AI ethics and industry. AI and Ethics.