I- Introduction
Studies suggest that well-designed algorithms may be far more accurate than a judge alone. Fairness is a subjective concept, and human judges, swayed by emotion, inevitably make mistakes. Letting judges make critical decisions based on their personal experience, intuition, and whatever else they deem relevant is itself unfair.
a. Can machine learning improve human decisions? (Kleinberg, 7)
b. How can we make sure that algorithms are fair?
II- Algorithms and risk assessment tools in the criminal justice system
Black defendants were about twice as likely as White defendants to be made ineligible for the supervised release program based on the risk assessment, and Hispanic defendants were about 1.5 times as likely as White defendants. Thus, the tool has the potential to disproportionately impact communities of color by denying them access to a potentially beneficial program at a higher rate than White communities.
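To make those ratios concrete, the short Python sketch below works through the relative-risk arithmetic. The counts are hypothetical, chosen only so the output reproduces the roughly 2x and 1.5x figures quoted above; they are not the actual study data.

# Relative risk of ineligibility by group. All counts below are
# hypothetical, invented purely to illustrate the arithmetic; the real
# figures come from the cited risk-assessment study.
ineligible = {"Black": 400, "Hispanic": 300, "White": 200}
assessed = {"Black": 1000, "Hispanic": 1000, "White": 1000}

def ineligibility_rate(group):
    """Share of assessed defendants in `group` deemed ineligible."""
    return ineligible[group] / assessed[group]

baseline = ineligibility_rate("White")
for group in ("Black", "Hispanic"):
    ratio = ineligibility_rate(group) / baseline
    print(f"{group} defendants: {ratio:.1f}x as likely as White defendants")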
a. Discrimination in the age of algorithms (The Marshall Project)
b. Racist algorithms
c. Predictive accuracy (Douglas, 134-137)
d. Discrimination and stigmatization
e. In defense of risk assessment tools
III- Measures of fairness in the NYC risk assessment tool
The most important criterion for evaluating a risk assessment tool is transparency in its development. The data may itself encode racial bias into the tool, since it comes from the peak years of “Stop, Question, and Frisk,” a policing practice found by courts to be racially discriminatory.
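For background on what “measures of fairness” can mean in practice, here is a minimal Python sketch of two metrics commonly used to audit tools of this kind: demographic parity (how often each group is flagged high risk) and false-positive-rate parity (how often people who did not reoffend are flagged). The data is synthetic and purely illustrative; it is not drawn from the NYC CJA assessment or any cited source.

def high_risk_rate(labels):
    """Fraction of a group flagged high risk (1 = high risk, 0 = low)."""
    return sum(labels) / len(labels)

def false_positive_rate(labels, reoffended):
    """Among people who did NOT reoffend, the fraction flagged high risk."""
    flags = [flag for flag, outcome in zip(labels, reoffended) if outcome == 0]
    return sum(flags) / len(flags)

# Synthetic risk flags and observed outcomes for two groups (hypothetical).
labels_a, reoff_a = [1, 1, 0, 1, 0, 0, 1, 0], [1, 0, 0, 1, 0, 0, 0, 0]
labels_b, reoff_b = [0, 1, 0, 0, 1, 0, 0, 0], [0, 1, 0, 0, 0, 0, 0, 0]

print("Demographic parity gap:",
      abs(high_risk_rate(labels_a) - high_risk_rate(labels_b)))
print("False-positive-rate gap:",
      abs(false_positive_rate(labels_a, reoff_a)
          - false_positive_rate(labels_b, reoff_b)))

A large gap on either measure would be one signal of the kind of disparity discussed in section II; the two measures can also conflict with each other, which is part of why fairness cannot be reduced to a single number.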
a. Release assessment questions (NYC CJA)
b. Racial bias
c. Data (Lum, 3)
d. Fair data collection
e. How use of a risk assessment tool affects racial disparities in pretrial outcomes (Picard, 5)
IV- Conclusion