Machine Bias: Investigating the algorithms that control our lives.
by Julia Angwin, Jeff Larson, Surya Mattu and Lauren Kirchner, ProPublica, May 23, 2016
ON A SPRING AFTERNOON IN 2014, Brisha Borden was running late to pick up her god-sister from school when she spotted an unlocked kid’s blue Huffy bicycle and a silver Razor scooter. Borden and a friend grabbed the bike and scooter and tried to ride them down the street in the Fort Lauderdale suburb of Coral Springs.
Just as the 18-year-old girls were realizing they were too big for the tiny conveyances — which belonged to a 6-year-old boy — a woman came running after them saying, “That’s my kid’s stuff.” Borden and her friend immediately dropped the bike and scooter and walked away.

But it was too late — a neighbor who witnessed the heist had already called the police. Borden and her friend were arrested and charged with burglary and petty theft for the items, which were valued at a total of $80.
Compare their crime with a similar one: The previous summer, 41-year-old Vernon Prater was picked up for shoplifting $86.35 worth of tools from a nearby Home Depot store.
Prater was the more seasoned criminal. He had already been convicted of armed robbery and attempted armed robbery, for which he served five years in prison, in addition to another armed robbery charge. Borden had a record, too, but it was for misdemeanors committed when she was a juvenile.
Yet something odd happened when Borden and Prater were booked into jail: A computer program spat out a score predicting the likelihood of each committing a future crime. Borden — who is black — was rated a high risk. Prater — who is white — was rated a low risk.
Two years later, we know the computer algorithm got it exactly backward. Borden has not been charged with any new crimes. Prater is serving an eight-year prison term for subsequently breaking into a warehouse and stealing thousands of dollars’ worth of electronics.
Scores like this — known as risk assessments — are increasingly common in courtrooms across the nation. They are used to inform decisions about who can be set free at every stage of the criminal justice system, from assigning bond amounts — as is the case in Fort Lauderdale — to even more fundamental decisions about defendants’ freedom. In Arizona, Colorado, Delaware, Kentucky, Louisiana, Oklahoma, Virginia, Washington and Wisconsin, the results of such assessments are given to judges during criminal sentencing.
Rating a defendant’s risk of future crime is often done in conjunction with an evaluation of a defendant’s rehabilitation needs. The Justice Department’s National Institute of Corrections now encourages the use of such combined assessments at every stage of the criminal justice process. And a landmark sentencing reform bill currently pending in Congress would mandate the use of such assessments in federal prisons.
In 2014, then U.S. Attorney General Eric Holder warned that the risk scores might be injecting bias into the courts. He called for the U.S. Sentencing Commission to study their use. “Although these measures were crafted with the best of intentions, I am concerned that they inadvertently undermine our efforts to ensure individualized and equal justice,” he said, adding, “they may exacerbate unwarranted and unjust disparities that are already far too common in our criminal justice system and in our society.”

📄 Northpointe document collection
📄 Sentencing reports that include risk assessments
📓 Read about how we analyzed the risk assessments algorithm
Download the full data used in our analysis

Borden was rated high risk for future crime after she and a friend took a kid’s bike and scooter that were sitting outside. She did not reoffend.
The sentencing commission did not, however, launch a study of risk scores. So ProPublica did, as part of a larger examination of the powerful, largely hidden effect of algorithms in American life.
We obtained the risk scores assigned to more than 7,000 people arrested in Broward County, Florida, in 2013 and 2014 and checked to see how many were charged with new crimes over the next two years, the same benchmark used by the creators of the algorithm.
The score proved remarkably unreliable in forecasting violent crime: Only 20 percent of the people predicted to commit violent crimes actually went on to do so.
When a full range of crimes were taken into account — including misdemeanors such as driving with an expired license — the algorithm was somewhat more accurate than a coin flip. Of those deemed likely to re-offend, 61 percent were arrested for any subsequent crimes within two years.
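For readers who want to check figures like these against the data linked above, the arithmetic is simple: flag everyone scored at or above a cutoff as likely to re-offend, then ask how many of them were actually charged with a new crime within two years. The sketch below is illustrative only; the file name, column names and the cutoff of 5 on the 10-point scale are assumptions based on the dataset ProPublica released, not a reproduction of the published analysis.

```python
# Illustrative sketch, not ProPublica's published notebook.
# Assumed file and columns: compas-scores-two-years.csv with decile_score
# (general risk, 1-10), v_decile_score (violent risk, 1-10), two_year_recid
# and is_violent_recid (0/1 outcomes over the two-year follow-up window).
import pandas as pd

df = pd.read_csv("compas-scores-two-years.csv")

# Assumption: scores of 5 or higher ("medium" and "high") count as a
# prediction that the defendant will re-offend.
flagged_any = df["decile_score"] >= 5
flagged_violent = df["v_decile_score"] >= 5

# Of those flagged, what share actually picked up new charges within two years?
hit_rate_any = df.loc[flagged_any, "two_year_recid"].mean()
hit_rate_violent = df.loc[flagged_violent, "is_violent_recid"].mean()

print(f"Flagged for any recidivism who re-offended:     {hit_rate_any:.0%}")
print(f"Flagged for violent recidivism who re-offended: {hit_rate_violent:.0%}")
```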
We also turned up significant racial disparities, just as Holder feared. In forecasting who would re-offend, the algorithm made mistakes with black and white defendants at roughly the same rate but in very different ways.
The formula was particularly likely to falsely flag black defendants as future criminals, wrongly labeling them this way at almost twice the rate as white defendants.
White defendants were mislabeled as low risk more often than black defendants.
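Those two findings are statements about error rates: among defendants who did not go on to re-offend, how often were they flagged as higher risk (false positives), and among those who did, how often were they labeled lower risk (false negatives)? A minimal sketch of that comparison follows, again assuming the released Broward County file, its column names and a cutoff of 5; it is an illustration, not the published analysis.

```python
# Illustrative sketch of comparing error rates by race.
# Assumed columns: decile_score, two_year_recid, race (with the values
# "African-American" and "Caucasian" as in the released dataset).
import pandas as pd

df = pd.read_csv("compas-scores-two-years.csv")

def error_rates(group: pd.DataFrame) -> pd.Series:
    flagged = group["decile_score"] >= 5          # assumed "higher risk" cutoff
    reoffended = group["two_year_recid"] == 1
    return pd.Series({
        # Flagged as higher risk but not charged with a new crime in two years.
        "false_positive_rate": (flagged & ~reoffended).sum() / (~reoffended).sum(),
        # Labeled lower risk but charged with a new crime in two years.
        "false_negative_rate": (~flagged & reoffended).sum() / reoffended.sum(),
    })

subset = df[df["race"].isin(["African-American", "Caucasian"])]
print(subset.groupby("race")[["decile_score", "two_year_recid"]].apply(error_rates))
```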
Could this disparity be explained by defendants’ prior crimes or the type of crimes they were arrested for? No. We ran a statistical test that isolated the effect of race from criminal history and recidivism, as well as from defendants’ age and gender. Black defendants were still 77 percent more likely to be pegged as at higher risk of committing a future violent crime and 45 percent more likely to be predicted to commit a future crime of any kind. (Read our analysis.)
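A test of this kind is, in essence, a logistic regression: model the odds of receiving a higher risk score as a function of race while controlling for criminal history, recidivism, age and gender, then read the race coefficient as an odds ratio. The sketch below shows that general shape. The file, column names and exact specification (for example, using raw age rather than age categories) are assumptions, so the coefficients it produces will not exactly match the numbers quoted above.

```python
# Illustrative logistic regression in the spirit of the test described above;
# assumed file, columns and specification, not ProPublica's published model.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("compas-scores-two-years.csv")

# Assumption: a decile score of 5 or higher counts as a "higher risk" label.
df["high_score"] = (df["decile_score"] >= 5).astype(int)

model = smf.logit(
    "high_score ~ C(race, Treatment('Caucasian')) + age + C(sex)"
    " + priors_count + two_year_recid",
    data=df,
    missing="drop",
).fit()

# exp(coefficient) is an odds ratio; a value of about 1.45 on the
# African-American term would correspond to "45 percent more likely."
print(np.exp(model.params))
```

Re-running the same kind of regression with the violent-risk score as the outcome is how a disparity like the 77 percent figure for violent recidivism would be estimated.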
The algorithm used to create the Florida risk scores is a product of a for-profit company, Northpointe. The company disputes our analysis.
In a letter, it criticized ProPublica’s methodology and defended the accuracy of its test: “Northpointe does not agree that the results of your analysis, or the claims being made based upon that analysis, are correct or that they accurately reflect the outcomes from the application of the model.”
Northpointe’s software is among the most widely used assessment tools in the country. The company does not publicly disclose the calculations used to arrive at defendants’ risk scores, so it is not possible for either defendants or the public to see what might be driving the disparity. (On Sunday, Northpointe gave ProPublica the basics of its future-crime formula — which includes factors such as education levels, and whether a defendant has a job. It did not share the specific calculations, which it said are proprietary.)
Northpointe’s core product is a set of scores derived from 137 questions that are either answered by defendants or pulled from criminal records. Race is not one of the questions. The survey asks defendants such things as: “Was one of your parents ever sent to jail or prison?” “How many of your friends/acquaintances are taking drugs illegally?” and “How often did you get in fights while at school?” The questionnaire also asks people to agree or disagree with statements such as “A hungry person has a right to steal” and “If people make me angry or lose my temper, I can be dangerous.”
The appeal of risk scores is obvious: The United States locks up far more people than any other country, a disproportionate number of them black. For more than two centuries, the key decisions in the legal process, from pretrial release to sentencing to parole, have been in the hands of human beings guided by their instincts and personal biases.
If computers could accurately predict which defendants were likely to commit new crimes, the criminal justice system could be fairer and more selective about who is incarcerated and for how long. The trick, of course, is to make sure the computer gets it right. If it’s wrong in one direction, a dangerous criminal could go free. If it’s wrong in another direction, it could result in someone unfairly receiving a harsher sentence or waiting longer for parole than is appropriate.
The first time Paul Zilly heard of his score — and realized how much was riding on it — was during his sentencing hearing on Feb. 15, 2013, in court in Barron County, Wisconsin. Zilly had been convicted of stealing a push lawnmower and some tools. The prosecutor recommended a year in county jail and follow-up supervision that could help Zilly with “staying on the right path.” His lawyer agreed to a plea deal.
But Judge James Babler had seen Zilly’s scores. Northpointe’s software had rated Zilly as a high risk for future violent crime and a medium risk for general recidivism. “When I look at the risk assessment,” Babler said in court, “it is about as bad as it could be.”
Then Babler overturned the plea deal that had been agreed on by the prosecution and defense and imposed two years in state prison and three years of supervision.
CRIMINOLOGISTS HAVE LONG TRIED to predict which criminals are more dangerous before deciding whether they should be released. Race, nationality and skin color were often used in making such predictions until about the 1970s, when it became politically unacceptable, according to a survey of risk assessment tools by Columbia University law professor Bernard Harcourt.
In the 1980s, as a crime wave engulfed the nation, lawmakers made it much harder for judges and parole boards to exercise discretion in making such decisions. States and the federal government began instituting mandatory sentences and, in some cases, abolished parole, making it less important to evaluate individual offenders.
But as states struggle to pay for swelling prison and jail populations, forecasting criminal risk has made a comeback.
Dozens of risk assessments are being used across the nation — some created by for-profit companies such as Northpointe and others by nonprofit organizations. (One tool being used in states including Kentucky and Arizona, called the Public Safety Assessment, was developed by the Laura and John Arnold Foundation, which also is a funder of ProPublica.)
There have been few independent studies of these criminal risk assessments. In 2013, researchers Sarah Desmarais and Jay Singh examined 19 different risk methodologies used in the United States and found that “in most cases, validity had only been examined in one or two studies” and that “frequently, those investigations were completed by the same people who developed the instrument.”
Their analysis of the research through 2012 found that the tools “were moderate at best in terms of predictive validity,” Desmarais said in an interview. And she could not find any substantial set of studies conducted in the United States that examined whether risk scores were racially biased. “The data do not exist,” she said.
Dylan Fugett was rated low risk after being arrested with cocaine and marijuana. He was arrested three times on drug charges after that.
Since then, there have been some attempts to explore racial disparities in risk scores. One 2016 study examined the validity of a risk assessment tool, not Northpointe’s, used to make probation decisions for about 35,000 federal convicts. The researchers, Jennifer Skeem at University of California, Berkeley, and Christopher T. Lowenkamp from the Administrative Office of the U.S. Courts, found that blacks did get a higher average score but concluded the differences were not attributable to bias.
The increasing use of risk scores is controversial and has garnered media coverage, including articles by the Associated Press, and the Marshall Project and FiveThirtyEight last year.
Most modern risk tools were originally designed to provide judges with insight into the types of treatment that an individual might need — from drug treatment to mental health counseling.
“What it tells the judge is that if I put you on probation, I’m going to need to give you a lot of services or you’re probably going to fail,” said Edward Latessa, a University of Cincinnati professor who is the author of a risk assessment tool that is used in Ohio and several other states.
But being judged ineligible for alternative treatment — particularly during a sentencing hearing — can translate into incarceration. Defendants rarely have an opportunity to challenge their assessments. The results are usually shared with the defendant’s attorney, but the calculations that transformed the underlying data into a score are rarely revealed.
“Risk assessments should be impermissible unless both parties get to see all the data that go into them,” said Christopher Slobogin, director of the criminal justice program at Vanderbilt Law School. “It should be an open, full-court adversarial proceeding.”
[Charts: counts of defendants at each risk score, 1 through 10, shown separately for white and black defendants; x-axis: Risk Score, y-axis: Count.] These charts show that scores for white defendants were skewed toward lower-risk categories. Scores for black defendants were not. (Source: ProPublica analysis of data from Broward County, Fla.)
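The shape of those distributions can be reproduced from the released data with a few lines of plotting code. The sketch below assumes the same file and column names as the earlier snippets and simply counts defendants at each decile score, one panel per group.

```python
# Illustrative sketch of the score-distribution charts; assumed file and columns.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("compas-scores-two-years.csv")

fig, axes = plt.subplots(1, 2, figsize=(10, 4), sharey=True)
for ax, race in zip(axes, ["Caucasian", "African-American"]):
    counts = (
        df.loc[df["race"] == race, "decile_score"]
        .value_counts()
        .reindex(range(1, 11), fill_value=0)   # deciles run 1 through 10
    )
    ax.bar(counts.index, counts.values)
    ax.set_title(f"{race} defendants")
    ax.set_xlabel("Risk Score")
axes[0].set_ylabel("Count")
plt.tight_layout()
plt.show()
```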
Proponents of risk scores argue they can be used to reduce the rate of incarceration. In 2002, Virginia became one of the first states to begin using a risk assessment tool in the sentencing of nonviolent felony offenders statewide. In 2014, Virginia judges using the tool sent nearly half of those defendants to alternatives to prison, according to a state sentencing commission report. Since 2005, the state’s prison population growth has slowed to 5 percent from a rate of 31 percent the previous decade.
In some jurisdictions, such as Napa County, California, the probation department uses risk assessments to suggest to the judge an appropriate probation or treatment plan for
individuals being sentenced. Napa County Superior Court Judge Mark Boessenecker said he finds the recommendations helpful. “We have a dearth of good treatment programs, so filling a slot in a program with someone who doesn’t need it is foolish,” he said.
However, Boessenecker, who trains other judges around the state in evidence-based sentencing, cautions his colleagues that the score doesn’t necessarily reveal whether a person is dangerous or if they should go to prison.
“A guy who has molested a small child every day for a year could still come out as a low risk because he probably has a job,” Boessenecker said. “Meanwhile, a drunk guy will look high risk because he’s homeless. These risk factors don’t tell you whether the guy ought to go to prison or not; the risk factors tell you more about what the probation conditions ought to be.”
Sometimes, the scores make little sense even to defendants.
James Rivelli, a 54-year-old Hollywood, Florida, man, was arrested two years ago for shoplifting seven boxes of Crest Whitestrips from a CVS drugstore. Despite a criminal record that included aggravated assault, multiple thefts and felony drug trafficking, the Northpointe algorithm classified him as being at a low risk of reoffending.
“I am surprised it is so low,” Rivelli said when told by a reporter he had been rated a 3 out of a possible 10. “I spent five years in state prison in Massachusetts. But I guess they don’t count that here in Broward County.” In fact, criminal records from across the nation are supposed to be included in risk assessments.
“I’m surprised [my risk score] is so low. I spent five years in state prison in Massachusetts.”
Less than a year later, he was charged with two felony counts for shoplifting about $1,000 worth of tools from Home Depot. He said his crimes were fueled by drug addiction and that he is now sober.
NORTHPOINTE WAS FOUNDED in 1989 by Tim Brennan, then a professor of statistics at the University of Colorado, and Dave Wells, who was running a corrections program in Traverse City, Michigan.
Wells had built a prisoner classification system for his jail. “It was a beautiful piece of work,” Brennan said in an interview conducted before ProPublica had completed its analysis. Brennan and Wells shared a love for what Brennan called “quantitative taxonomy” — the measurement of personality traits such as intelligence, extroversion and introversion. The two decided to build a risk assessment score for the corrections industry.
Brennan wanted to improve on a leading risk assessment score, the LSI, or Level of Service Inventory, which had been developed in Canada. “I found a fair amount of weakness in the LSI,” Brennan said. He wanted a tool that addressed the major theories about the causes of crime.
Brennan and Wells named their product the Correctional Offender Management Profiling for Alternative Sanctions, or COMPAS. It assesses not just risk but also nearly two dozen so-called “criminogenic needs” that relate to the major theories of criminality, including “criminal personality,” “social isolation,” “substance abuse” and “residence/stability.” Defendants are ranked low, medium or high risk in each category.
Gregory Lugo crashed his car into a Toyota Camry while drunk. He was rated as a low risk of reoffending despite the fact that it was at least his fourth DUI.
As often happens with risk assessment tools, many jurisdictions have adopted Northpointe’s software before rigorously testing whether it works. New York State, for instance, started using the tool to assess people on probation in a pilot project in 2001 and rolled it out to the rest of the state’s probation departments — except New York City — by 2010. The state didn’t publish a comprehensive statistical evaluation of the tool until 2012. The study of more than 16,000 probationers found the tool was 71 percent accurate, but it did not evaluate racial differences.
A spokeswoman for the New York state division of criminal justice services said the study did not examine race because it only sought to test whether the tool had been properly calibrated to fit New York’s probation population. She also said judges in nearly all counties are given defendants’ Northpointe assessments during sentencing.
In 2009, Brennan and two colleagues published a validation study that found that Northpointe’s risk of recidivism score had an accuracy rate of 68 percent in a sample of 2,328 people. Their study also found that the score was slightly less predictive for black men than white men — 67 percent versus 69 percent. It did not examine racial disparities beyond that, including whether some groups were more likely to be wrongly labeled higher risk.
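Accuracy rates in validation studies like this one are commonly reported as the area under the ROC curve (AUC), a measure of how often the score ranks an eventual re-offender above a non-re-offender. The sketch below shows how that kind of group-by-group check could be run on the released Broward County data; it uses the same assumed file and columns as the earlier snippets and is not a reproduction of the 2009 study.

```python
# Illustrative AUC-by-group check on the released data; assumed file and
# columns, not the sample used in the 2009 validation study.
import pandas as pd
from sklearn.metrics import roc_auc_score

df = pd.read_csv("compas-scores-two-years.csv")

for race in ["African-American", "Caucasian"]:
    group = df[df["race"] == race]
    auc = roc_auc_score(group["two_year_recid"], group["decile_score"])
    print(f"{race}: AUC = {auc:.2f}")
```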
Brennan said it is difficult to construct a score that doesn’t include items that can be correlated with race — such as poverty, joblessness and social marginalization. “If those are omitted from your risk assessment, accuracy goes down,” he said.
In 2011, Brennan and Wells sold Northpointe to Toronto-based conglomerate Constellation Software for an undisclosed sum.
Wisconsin has been
