Emerging from AI utopia
A future driven by artificial intelligence (AI) is often depicted as one paved with improvements across every aspect of life—from health, to jobs, to how we connect. But cracks in this utopia are starting to appear, particularly as we glimpse how AI can also be used to surveil, discriminate, and cause other harms. What existing legal frameworks can protect us from the dark side of this brave new world of technology?
Facial recognition is a good example of an AI-driven technology that is starting to have a dramatic human impact. When facial recognition is used to unlock a smartphone, the risk of harm is low, but the stakes are much higher when it is used for policing. In well over a dozen countries, law enforcement agencies have started using facial recognition to identify "suspects" by matching photos scraped from the social media accounts of 3 billion people around the world. Recently, the London Metropolitan Police used the technology to identify 104 suspects, 102 of whom turned out to be "false positives." In a policing context, the human rights risk is highest because a person can be unlawfully arrested, detained, and ultimately subjected to wrongful prosecution. Moreover, facial recognition errors are not evenly distributed across the community. In Western countries, where there are more readily available data, the technology is far more accurate at identifying white men than any other group, in part because it tends to be trained on datasets of photos that are disproportionately made up of white men. Such uses of AI can cause old problems—like unlawful discrimination—to appear in new forms.

Right now, some countries are using AI and mobile phone data to track people in self-quarantine because of the coronavirus disease 2019 pandemic. The privacy and other impacts of such measures might be justified by the scale of the current crisis, but even in an emergency, human rights must still be protected. Moreover, we will need to ensure that extreme measures do not become the new normal when the period of crisis passes.

It's sometimes said that existing laws in Western countries don't apply in the new world of AI. But this is a myth—laws apply to the use of AI, as they do in every other context. Imagine if a chief executive officer of a company preferred to recruit people of a particular race, unfairly disadvantaging other people, or if a bank offered credit more readily to men than women. Clearly, this is unlawful discrimination. So, why would the legal position be any different if discrimination occurred because these people were similarly disadvantaged by the use of an algorithm?

The laws that many countries already use to protect citizens—including laws that prohibit discrimination—need to be applied more rigorously and effectively in the new technology context. There has been a proliferation of AI ethics frameworks that provide guidance in identifying the ethical implications of new technologies and propose ways to develop and use these technologies for the better. The Australian Human Rights Commission's Human Rights and Technology Discussion Paper acknowledges an important role for ethics frameworks but notes that to date, their practical impact has been limited and cannot be a substitute for applying the law. Although this project has considered how Australia specifically should respond to the challenges of emerging technologies such as AI, the recommendations are general. The Commission sets out practical steps that researchers, government, industry, and regulators should take to ensure that AI is accountable in its development and use. It also suggests targeted reform to fill the gaps that have been exposed by the unprecedented adoption of AI and related technologies. For example, our laws should make crystal clear that momentous decisions—from sentencing in criminal cases to bank loan decisions—cannot be made in a "black box," whether or not AI is used in the decision-making process. And where the risk of harm is particularly severe, such as in the use of facial recognition for policing, the Commission proposes a moratorium in Australia until proper human rights safeguards are in place.

The proposals in the discussion paper are written in pencil rather than ink and are open for public comment until the end of this month (tech.humanrights.gov.au) before the final report is released later this year. AI offers many exciting possibilities and opportunities for humanity, but we need to innovate for good and ensure that what we create benefits everyone.

Edward Santow is Australia's Human Rights Commissioner. humanrights.commissioner@humanrights.gov.au
Science, 3 April 2020, Vol 368, Issue 6486, p. 9
Published by AAAS
DOI: 10.1126/science.abb9369
Credits: photo of Human Rights Commissioner Edward Santow © Australian Human Rights Commission; illustration by William Duke/photo by Daxiao Productions/Shutterstock
Copyright © 2020 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original U.S. Government Works