
Paterson & Maker: to be published in and (eds), The Cambridge Handbook of Private Law and
Artificial Intelligence (Cambridge University Press, Forthcoming)
AI IN THE HOME: ARTIFICIAL INTELLIGENCE AND CONSUMER PROTECTION LAW
Jeannie Paterson* and Yvette Maker**
This chapter considers the role of consumer protection law in responding to the risks to consumers’ autonomy and welfare posed by the burgeoning use of artificial intelligence (AI) in ordinary consumer products. It focuses on voice-activated digital assistants as a prime example of this kind of AI-informed product. While such products may offer assistance with everyday tasks around the home, they also carry risks of harm to consumers, including eroding privacy, perpetuating discrimination, and interfering with consumers’ decision-making autonomy. AI consumer products may also not work very well, or may be unreliable or unsafe. This chapter considers the ways in which consumer protection law may address these risks and draws on insights from principles of AI ethics to identify a more effective response by consumers and regulators. It further notes the complementary contributions of other fields of law and policy in addressing matters on which consumer protection law has less to say, particularly equity, accessibility and fair treatment.


Consumer protection, reasonable care and skill, accessibility, accountability, digital assistants, human-computer interactions
A INTRODUCTION
As this collection demonstrates, there is ongoing debate about the ability of private law to adapt to technological change. One concern is whether private law, which typically develops in a cautious manner by reference to precedent and analogy, can prove capable of responding to the burgeoning use of artificial intelligence (AI) in social, business and government applications.1 These questions also apply to statutory regimes, albeit in a somewhat different form, including to statutory consumer protection regimes, which are the focus of this chapter. In many ways, consumers are at the forefront of market uses of AI.2 The data-driven, algorithmic processes associated with AI increasingly inform targeted advertising,3 differential pricing4 and the automated decision-making that
* Professor of Law, Co-Director, Centre for Artificial Intelligence and Digital Ethics, The University of Melbourne.
** Senior Research Fellow, Centre for Artificial Intelligence and Digital Ethics and Melbourne Social Equity Institute, The University of Melbourne.
1 See ch xxx.
2 There are different views about what kinds of software are included in the description ‘artificial intelligence’. Russell and Norvig, Artificial Intelligence: A Modern Approach (4th ed, Pearson 2021) focus on the extent to which computer programs act as rational agents, which means being able to perform actions autonomously, perceive their environment and pursue goals to achieve the best possible outcomes (para 1.1.4) — ideally with those outcomes beneficial to humans (para 1.5). This approach avoids the philosophical question of whether machines merely replicate intelligence or can actually think: see Searle, ‘Is the Brain’s Mind a Computer Program?’ (1990) 262 Scientific American 26. In a consumer protection context, we consider that performance of actions is precisely the right focus. For consumers, what is or is not strictly AI is less important than how the product performs and the claims made about it. Accordingly, our use of the term in this chapter refers to the kinds of performance claims made about the product in question.
3 See further Calo, ‘Digital Market Manipulation’ (2014) 82 Geo Wash L Rev 995; Wagner and Eidenmüller, ‘Down by Algorithms? Siphoning Rents, Exploiting Biases, and Shaping Preferences: Regulating the Dark Side of Personalized Transactions’ (2019) 86 U Chi L Rev 581. See also Paterson and others, ‘The Hidden Harms of Targeted Advertising by Algorithm and Interventions from the Consumer Protection Toolkit’ (forthcoming).
4 Borgesius and Poort, ‘Online Price Discrimination and EU Data Privacy Law’ (2017) 40 J Consum Policy 347, 351; Stucke and Ezrachi, ‘How Digital Assistants Can Harm Our Economy, Privacy, and Democracy’ (2017) 32 Berkeley Tech LJ 1239, 1264; , and , ‘Transparency to Contest Differential Pricing’ (2021) 93 Computers & Law 49.
Electronic copy available at: https://ssrn.com/abstract=3973179

determines access to a variety of services,5 including insurance6 and credit.7 Additionally, consumers are now able to purchase products that use AI to assist them in their day-to-day lives.8 Indeed, these products provide a good exemplar of the risks to consumers and challenges to consumer protection law arising from emerging digital technologies.
Prominent examples of AI consumer products include voice-activated digital assistants, such as Siri or Alexa, as well as ‘internet of things’ or ‘smart’ devices, which embed computing capacity, sensors and internet connectivity in everyday devices.9 AI consumer products create opportunities to free consumers from mundane tasks and to assist consumers in making more informed decisions about all sorts of matters, from shopping to investing, and gaining new skills.10 AI consumer products also give rise to a number of risks of harm to consumer autonomy and welfare. Such risks include the potential to erode privacy, and to perpetuate the undesirable bias and unlawful discrimination that are inherent in most data-driven technologies.11 AI consumer products also carry risks that arise from their status as products: they may not work very well, they may prove unreliable or unsafe, or they may fail to live up to representations about their utility. Although AI consumer products have been promoted for their potential to facilitate or improve social, market and other forms of participation for some people, such as people with disability, there has been little scrutiny of the degree to which such products are actually accessible or promote greater social equity.
Our view is that the challenges for consumer protection law in responding to the risks of harm arising from AI consumer products are primarily technical and evidential, rather than reflecting a fundamental mismatch between the challenges raised by this technology and the precepts of consumer protection law.12 At least in principle, the open-textured standards relied upon by most statutory consumer protection regimes should be sufficiently flexible to adapt to AI and emerging digital technologies. However, there will need to be some recalibration of the way in which those principles are understood and applied in order to respond to the kinds of harms presented by AI consumer products13 and to the ways in which responsibility for those harms is established.14
We also think that it is important to recognise the limits of this body of law. As we shall see, consumer protection law has less to say about important issues of bias, equity and the kinds of relationships we should have with AI. Some concerns around AI consumer products need to be addressed in different legal domains, in particular through human rights and anti-discrimination law, and through responses driven by policy and values rather than law. In all of these inquiries, we suggest that the principles developed in the field of AI ethics15 may prove useful and, in some ways, unavoidable.
5 See Danielle Citron and Frank Pasquale, ‘The Scored Society: Due Process for Automated Predictions’ (2014) 89 Wash L Rev 1.
8 cf Russell and Norvig (n 2) vii.
9 See further US Congressional Research Service, ‘The Internet of Things (IoT): An Overview’ (2020) accessed 15 October 2021.
10 Danaher, ‘Towards an Ethics of AI Assistants: An Initial Framework’ (2018) 31 Philos Technol 629, 636. See also The Treasury, Commonwealth of Australia, ‘Inquiry into Future Directions for the Consumer Data Right’ (Final Report, 2020) 20 accessed 15 October 2021.
11 Hildebrandt and Bert-Jaap Koops, ‘The Challenges of Ambient Law and Legal Protection in the Profiling Era’ (2010) 73 MLR 428; Stucke and Ezrachi (n 4).
12 See more generally Moses, ‘How to Think about Law, Regulation and Technology: Problems with Technology as a Regulatory Target’ (2013) 5 Law Innovation & Tech 1.
13 See Kayleen Manwaring, ‘Emerging Information Technologies: Challenges for Consumers’ (2017) 17 OUCLJ 265.
14 ibid 287-88.
15 See ch XX.
6 See eg and , ‘Choosing How to Discriminate: Navigating Ethical Trade-Offs in Fair Algorithmic Design for the Insurance Sector’ [2021].
7 See eg , ‘The Norms of Algorithmic Credit Scoring’ (2021) 80 CLJ 42;
Pasquale, ‘Humans Judged by Machines: The Rise of Artificial Intelligence in Finance, Insurance, and Real Estate’ in Joachim von Braun and others (eds), Robotics, AI, and Humanity: Science, Ethics, and Policy (Springer 2021).

These principles may provide a lens for identifying the risks inherent in AI consumer products, as well as technical and strategic approaches for establishing a contravention of consumer protection law in the use of AI in consumer products.
Part B sets the scene for this discussion. We consider the core imperatives of consumer protection law in reducing the risk of harm to consumers in the use of everyday products, and the possible role of frameworks of AI ethics in informing legal responses to the risks associated with AI consumer products. We then outline the nature of AI consumer products, focusing on in-home internet of things devices and digital assistants, before turning to the responses that consumer protection law may provide to three identified categories of possible harm arising from AI consumer products. Part C considers harms associated with the use of personal data. Part D considers harms to consumers arising from the status of AI consumer products as consumer goods and services. Part E returns to ethics, noting that the deepest and most philosophical concerns about AI in the home arise from questions of human character and the nature of being. These may be beyond the scope of the law in most respects, though even here they may point to the need for specific legislative responses.
B SETTING THE SCENE
1 Consumer protection law
Statutory consumer protection law has its origins in private law but provides additional protections for consumers in transactions for goods and services with traders.16 Statutory consumer protection regimes usually contain provisions that scrutinise the full life of a transaction. Thus, statutory prohibitions on unfair commercial practices,17 misleading practices18 and aggressive commercial practices19 scrutinise the contracting process to invalidate contracts tainted by conduct that undermines any notion of free and informed consent.20 Provisions that invalidate unfair terms address concerns about the substantive fairness of the terms of consumer contracts.21 A baseline level of quality, ensuring products are reasonably safe and meet reasonable consumer expectations, is provided through implied standards of satisfactory quality,22 liability for product defects,23 and provisions rendering void any terms that purport to exclude responsibility for product failings.24
Statutory consumer protection interventions are typically premised on an assessment of the risks to consumers and the degree to which consumers may themselves be expected to respond to those risks, recognising the information asymmetries and inequalities of bargaining power that make it difficult for consumers themselves to protect their own best interests.25 Greater responses are justified for interactions involving significant risk and, equally, relevant vulnerability on the part of consumers, which magnifies their exposure to the risks.26 Both of these elements are present in the current and predicted uses of AI. AI consumer products carry a number of potential risks for consumer autonomy
16 The consumer protected by consumer protection statutes is commonly defined by reference to the purpose of the transaction: see eg Consumer Rights Act 2015 (UK) (CRA 2015) s 3.
17 Consumer Protection from Unfair Trading Regulations 2008 (UK) (CPUTR 2008), SI 2008/1277, reg 3. See also Jeannie Paterson and , ‘Should Australia Introduce a Prohibition on Unfair Trading? Responding to Exploitative Business Systems in Person and Online’ (2020) 55 J Consum Policy 1.
18 See CPUTR 2008, reg 5.
19 See ibid reg 7.
21 CRA 2015, pt 2.
22 ibid s 9.
23 See chapter xx.
24 CRA 2015, ss 63 and 65.
25 Paterson, ‘The Australian Unfair Contract Terms Law: The Rise of Substantive Unfairness as a Ground for Review of Standard Form Consumer Contracts’ (2009) 33 Melb U L Rev 934.
26 See eg Federal Trade Commission Act 1914 (US) s 45.

and welfare. Yet consumers are at a significant disadvantage in assessing the relative merits or the veracity of claims made about such products. AI consumer products are a relatively recent arrival on the market, which means consumers may have little experience with the products; their innermost workings are often opaque and complex; and they are typically governed by unreadable and incomprehensible terms.27 Moreover, the capacities of the digital technologies in consumer products may be considerably overestimated in the promotional material that surrounds them.
Consumer protection statutes typically contain a combination of precise rules and open-textured standards.28 The standards perform a ‘safety net’ function, catching conduct that is not covered by the specific rules yet is nevertheless judged unacceptable by reference to the values embedded in the legislation.29 This feature should, in principle, ensure that consumer protection law is capable of responding to the risks to consumers raised by new uses of AI in consumer products. Indeed, we suggest that a key challenge lies not in the ‘fit’ of the law to AI but in establishing a contravention of that law. It will often not be straightforward for consumers and, more particularly, regulators to investigate the operations of AI products to prove a breach of the statutory standards, largely because of the combined impact of the novelty and complexity of the products, referred to above.30 Here, we suggest that the field of AI ethics may provide some assistance in both framing the kinds of concerns about AI consumer products to which consumer law should be sensitive and providing insights into the technical and practical processes that may assist in enforcing the law more effectively.
2 AI ethics frameworks
Principles of AI ethics have been put forward as a way of responding to the risks of increasing human reliance on AI. There are many formulations of these principles, and they may be expressed in different ways.31 Nonetheless, there are some common themes.32 AI ethical principles typically emphasise the need for emerging technologies to respect values of privacy and fairness, or an absence of bias, along with equity and accessibility. AI should be safe and reliable. The principles typically require AI to be transparent, or explainable/explicable, and to provide mechanisms for ensuring accountability and contesting adverse outcomes. AI ethical principles usually emphasise an overriding goal of non- maleficence or, ideally, beneficence, meaning AI should enrich rather than harm human lives.
Some scholars and activists have been concerned that these ethical frameworks may be co-opted by firms deploying AI to further entrench their market power and cloak the need for stronger controls over high-risk developments of AI.33 Codes of AI ethics have also been criticised as being too general to
27 See eg Noto La Diega and Walden, ‘Contracting for the “Internet of Things”: Looking into the Nest’ (2016) 7(2) EJLT 3.
28 and , ‘Misrepresentation, Misleading Conduct and Statute through the Lens of Form and Substance’ in and (eds), Form and Substance in the Law of Obligations (Hart Publishing 2019).
29 Jeannie Paterson and , ‘“Safety Net” Consumer Protection: Using Prohibitions on Unfair and Unconscionable Conduct to Respond to Predatory Business Models’ (2015) 38 J Consum Policy 3.
30 See Manwaring (n 13) 285.
31 See eg European Commission, ‘On Artificial Intelligence: A European Approach to Excellence and Trust’ (White Paper, February 2020); and others, ‘AI Ethics Framework’ (Australian Government Department of Industry, Innovation and Science, 2019) accessed 15 October 2021; Independent High-Level Expert Group on Artificial Intelligence, ‘Ethics Guidelines for Trustworthy AI’ (European Commission 2019) accessed 15 October 2021; Select Committee on Artificial Intelligence, AI in the UK: Ready, Willing and Able? (HL 2017–19, 100) 38.
32 See Brent Mittelstadt and others, ‘The Ethics of Algorithms: Mapping the Debate’ [2016] Big Data & Society 1; Jobin, Ienca and Vayena, ‘The Global Landscape of AI Ethics Guidelines’ (2019) 1 Nat Mach Intell 389.
33 , and Hoffmann, ‘Critical Perspectives on Governance Mechanisms for AI/ML Systems’ in & (eds), The Cultural Life of Machine Learning: An Incursion into Critical AI Studies ( 2020) 257; Australian Human Rights Commission, Artificial Intelligence and Human Rights (Discussion Paper, 2019) 54. See also Carly Kind, ‘The Term “Ethical AI” is Finally Starting to Mean Something’ (VentureBeat, 23 August 2020) accessed 15 October 2021.

provide effective guidance or sanction.34 We do not suggest that principles of AI ethics should be treated as the only or even the main mechanism for responding to the risks of AI consumer products. However, we do think that principles of AI ethics have a useful role to play in this inquiry. Moreover, it is in some ways necessary to engage with the principles of AI ethics because they dominate current ways of thinking about the design and oversight of AI. Treated as merely one form of regulatory intervention, the principles of AI ethics are, we suggest, useful as a way of identifying the risks of harm that arise from AI uses, the possible technical responses to concerns about those risks, and the contexts where hard boundaries should be placed on the use of particular techniques.35 In this way, principles of AI ethics can complement a legal
