
COMP90087 Assignment 1 Critical Evaluation Essay
Task 1 – Utilitarian Response to Option 1 (760 Words)
The hedonic calculus of utilitarianism holds that the most ethical path is the course of action that produces the greatest overall good. Implementing an AI system in Meadowlands to prevent abuse of patients and identify those responsible would be a valuable tool in creating a safer environment for vulnerable residents. Because the proposed AI system drastically improves residents’ quality of life by protecting them from physical and emotional harm, at negligible risk to innocent staff, it is therefore an ethically acceptable option from a utilitarian perspective.
To show why the AI system is an ethically acceptable solution, we must consider the potential harm this system might introduce and weigh its impact against the benefits.


As with other AI systems that use facial recognition to report illegal or unwanted behavior, the accuracy of the AI is a significant factor. What matters is not only whether the system makes errors but what kind of errors it makes. Consideration must also be given to the repercussions of these errors when determining the ethical acceptability of the new system. If the AI reports an innocent staff member, this is called a false-positive error. The risk faced by the staff member could range from disciplinary action, such as termination, up to legal action. While these are adverse outcomes for innocent staff members, they have ready means of recourse via human intervention and auditing of the flagged footage to mitigate these negative outcomes. These avenues of review will ensure that the AI and those behind it can be held accountable for its decisions and will protect innocent staff from unfair punishment. Hence, the AI system presents minimal risk, and thus minimal chance of harm, to the innocent staff of Meadowlands.
Alternatively, if the AI does not pick up a staff member who abuses patients, this is called a false negative. While this is unfortunate, it is the same situation Meadowlands faced before implementing the AI; therefore, no new harm has been introduced by the system. It is reasonable to assume that the AI system, as a commercial product, has been audited and meets some required level of accuracy. Similar models have reported accuracies of 91% (Nievas et al., 2011) and 97.1% (Sudhakaran and Lanz, 2017). Thus, it will catch most perpetrators within the staff even if it misses a few due to false negatives. So while the system will miss a few abusers, overall the AI will create a safer environment for most Meadowlands residents.
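To make the utilitarian tally concrete, the reported accuracy can be read as a split between expected detections and expected misses. The sketch below uses a hypothetical incident count (the 100 is invented for illustration; only the 91% figure comes from Nievas et al., 2011):

```python
# Illustrative sketch only: the incident count is hypothetical, not a figure
# from the Meadowlands case study; the 0.91 accuracy is from Nievas et al. (2011).

def expected_outcomes(incidents, accuracy):
    """Split a number of abuse incidents into expected detections and misses."""
    detected = incidents * accuracy
    missed = incidents * (1 - accuracy)
    return detected, missed

# Under these assumptions, most incidents are caught and few slip through.
detected, missed = expected_outcomes(100, 0.91)
print(f"Detected: {detected:.0f}, missed: {missed:.0f}")
```

On these assumed numbers the expected misses are a small fraction of the total, which is the arithmetic behind the claim that false negatives leave residents no worse off than the status quo while most perpetrators are caught.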
While the system will introduce some mitigable risk of staff being incorrectly flagged for abuse, the additional oversight and resulting safety it creates is an overwhelming net positive. Victims of abuse, particularly those in residential care, given their dependency on their abusers, are susceptible to severe long-term effects. These include depression, anxiety, a withdrawn personality, and other detrimental mental illnesses and behavioral changes (www.nursinghomeabuseguide.org, n.d.). By removing abusers from positions of power over these victims and preventing them from acquiring new victims, the AI system will help avoid considerable suffering.
Concerns over residents’ privacy are valid but minimal in the grand scheme of the initiative. We can assume that common areas are already monitored via CCTV, so installing additional cameras there is of little concern. Although there is some concern over residents’ right to privacy in their bedrooms, the AI system would be considerably less effective if it did not have access to these sensitive, secluded areas where abuse is most likely to occur. This concern could, however, be addressed by only installing cameras in bedrooms where residents have given consent. Even so, the sacrifice of privacy is not enough of a detriment to outweigh the protection the AI system gives from abusers.
The new system will hold staff accountable and help rebuild trust in Meadowlands. By introducing an external means of keeping staff accountable for their treatment of the patients in their care, the AI system will encourage them to refrain from mistreating patients in the future. These changes will go a long way toward reassuring the concerned residents, family members, and other community members who are losing trust in the safety and dignity of those entrusted to Meadowlands’ care.

In conclusion, the AI system introduces several benefits that outweigh the potential harm associated with it in both quantity and severity. The proposed system offers substantial protection to the vulnerable with little to no additional risk to them or to those in authority over them. The utilitarian calculus, summing the good and bad of the proposed system, leans heavily toward the system achieving the greater ethical good.
Essay 1 – References
Bermejo Nievas, E., Deniz Suarez, O., Bueno García, G. and Sukthankar, R., 2011, August. Violence detection in video using computer vision techniques. In International Conference on Computer Analysis of Images and Patterns (pp. 332-339). Springer, Berlin, Heidelberg.
Sudhakaran, S. and Lanz, O., 2017, August. Learning to detect violent videos using convolutional long short-term memory. In 2017 14th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS) (pp. 1-6). IEEE.
Nursing Home Abuse Guide, n.d. Effects of physical abuse. Available at: https://www.nursinghomeabuseguide.org/physical-abuse/effects

Task 2 – Care Ethics Response to Option 1 (820 Words)
In response to the essay above, I will show that the utilitarian assessment of the proposed AI system’s ethical acceptability fails to consider the cause of the abuse and the unethical impacts the AI will have at Meadowlands. In this essay, I explore the relationship between the vulnerable residents of Meadowlands and their caregivers through the lens of care ethics.
The most obvious oversight in the utilitarian argument for the new AI system concerns the disregard for the residents’ right to privacy. It claims that the invasion of the residents’ privacy in their rooms is of “minimal” concern and a small sacrifice for the overall good. The author has failed to address that these residents are a historically marginalized group with severe mental illnesses, potentially not fully aware of their situation. Thus, gaining their informed consent must be handled with the utmost care and respect for their person. This vulnerable group of people often struggles to maintain their dignity and autonomy due to their condition and, at times, extreme reliance on others to care for them. The proposed AI system is another case of stripping the “autonomy of patients with mental disorder … due to paternalism” (Zenelaj, 2018). The system is focused on providing care without considering the person in its care. It forces the residents (or their families) to choose between protection from bodily and mental harm and their right to privacy. The residents will likely feel that they have no choice and will unwillingly sacrifice their privacy to protect themselves from abuse. While the proposed system is likely to catch abusers, it fails to respect the human dignity of an already vulnerable group that faces continual insults to their person by the institutions that care for them.
The proposed AI system acts as a reactive deterrent that fails to address the underlying causes of patient abuse. Meadowlands claims it is difficult to recruit good staff. Instead of attempting to fix the abuse problem by introducing an invasive AI system, Meadowlands should investigate the relationship dynamics among its existing staff and management. It should question why the current group of employees fosters an environment that promotes abuse and deters good new employees. The working conditions of the care facility and staff morale should be considered contributing factors that might motivate the poor treatment of residents by staff. While these are not excuses for the abuse, they can be an explanation.

Suppose some staff members feel they are being treated poorly by management or those in positions of authority above them. In that case, it is common for “targets of abusive behavior [to] often reciprocate by engaging in negative behaviors of their own” (Bowling and Michel, 2011). These negative behaviors are unlikely to be directed at those with the power to punish the staff members, so it is not unexpected if they are instead directed toward patients who cannot retaliate. Unless these interpersonal concerns are addressed, Meadowlands is likely to continue to experience patient abuse, albeit in a more subtle or hard-to-detect form. While the utilitarian approach claims that the deterrent nature of the AI system is an acceptable solution, it fails to provide competent and successful care for the entire institution, which, as I have argued above, is necessary to resolve the root causes of the abuse.
Finally, the author above has not considered the potential for further discrimination against marginalized groups due to biased historical training data. The AI model was trained on data from cases of abuse in international residences and hospitals. Unless the demographic makeup of the training data fairly represents the demographics within Meadowlands, underrepresented groups are likely to account for a significant portion of the falsely reported or missed cases of abuse. In the case of staff, this will likely result in certain individuals facing false and repeated reports of abusing patients due to AI bias. While the author above claimed that any harm from false accusations against staff could be mitigated by auditing the flagged footage, they do not consider the risk of such reports incorrectly affirming negative biases held against certain minority groups. Consideration should be given to the insult and stress a staff member would experience from continually defending themselves against false reports. In the case of the abused patients, it is possible that certain groups within Meadowlands not found in the training data will be ignored by the AI and will receive no protection from future abuse. So the proposed AI is likely to end up as another institutional means of protecting and serving the majority at the expense of historically oppressed minorities.
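The disparity described above can be illustrated with a small worked example. All group sizes and error rates below are invented for illustration; the case study provides no such figures:

```python
# Illustrative sketch only: group sizes and false-positive rates are hypothetical,
# invented to show how uneven per-group error rates produce disproportionate harm.

def false_accusations(group_sizes, fp_rates):
    """Expected number of falsely flagged staff per demographic group."""
    return {g: group_sizes[g] * fp_rates[g] for g in group_sizes}

# Hypothetical: the minority group is a quarter the size of the majority
# but faces a five-times-higher false-positive rate.
sizes = {"majority": 80, "minority": 20}
rates = {"majority": 0.02, "minority": 0.10}
print(false_accusations(sizes, rates))
```

Under these assumed numbers, the smaller group absorbs more expected false accusations in absolute terms despite being a quarter of the staff, which is precisely the pattern of repeated false reports against particular individuals that the paragraph above warns of.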
In conclusion, the proposed AI model cannot be considered ethically acceptable under care ethics. It fails to consider the relationships within Meadowlands that foster the unsafe environment, and the potential negative ramifications its impassive approach will have on marginalized groups struggling to overcome historical bias and oppression.
Essay 2 – References
Zenelaj, B., 2018. Human dignity, autonomy and informed consent for patients with a mental disorder under biomedicine convention. Med. & L., 37, p.297.
Bowling, N.A. and Michel, J.S., 2011. Why do you treat me badly? The role of attributions regarding the cause of abuse in subordinates’ responses to abusive supervision. Work & Stress, 25(4), pp.309-320.
