
Imaginary Interview

For this imaginary interview, I chose two authors from my annotated bibliography:

• Kristian Lum is the Lead Statistician at the Human Rights Data Analysis Group (HRDAG), where she leads the HRDAG project on criminal justice in the United States.
• Elizabeth Glazer is the director of the New York City Mayor’s Office of Criminal Justice.

Question #1

Juan: How do you evaluate the specific risk assessment tool used in New York City?

Lum: The most important thing in evaluating a risk assessment tool is transparency in how it was developed. The data itself may encode racial bias into the tool, since it comes from the peak years of “Stop, Question, and Frisk,” a policing practice that courts found to be racially discriminatory.

Glazer: New York City has achieved the lowest incarceration rate of any big city in the nation, even as we’ve kept our crime rates below national averages. Risk-assessment instruments have been a major factor in that achievement. In New York, they help judges make fair decisions while keeping the city safe. So you evaluate the tool by its results.

     

Question #2

Juan: Do you think that the risk assessment model reinforces racial inequalities in the criminal justice system?

Lum: Black defendants were about twice as likely as white defendants to be deemed ineligible for the supervised release program based on the risk assessment, and Hispanic defendants were about 1.5 times as likely as white defendants to be ineligible. Thus, the tool has the potential to disproportionately impact communities of color by denying them access to a potentially beneficial program at a higher rate.

Glazer: We know that certain communities, especially communities of color, are disproportionately over-policed, more likely to be over-charged by prosecutors, and forced into pleas that result in convictions. However, through the smart use of data and technology, New York City hopes to continue its path to creating a criminal justice system that is smaller, safer, and fairer.
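To make the arithmetic behind Lum’s figures concrete, here is a minimal sketch of how such relative ineligibility rates are computed. The counts are hypothetical placeholders chosen only to reproduce the stated ratios; they are not HRDAG’s actual data.

```python
# Minimal sketch of how relative ineligibility rates ("X times as likely")
# are computed. The counts below are hypothetical placeholders, not real data.

# Hypothetical numbers of defendants screened and deemed ineligible, by group.
screened = {"Black": 1000, "Hispanic": 800, "white": 1200}
ineligible = {"Black": 300, "Hispanic": 180, "white": 180}

# Ineligibility rate for each group.
rates = {g: ineligible[g] / screened[g] for g in screened}

# Ratio of each group's rate to the white defendants' rate.
for group, rate in rates.items():
    ratio = rate / rates["white"]
    print(f"{group}: rate={rate:.2f}, ratio vs. white={ratio:.1f}x")
```

With these invented counts, the script prints a 2.0x ratio for Black defendants and a 1.5x ratio for Hispanic defendants, matching the disparities Lum describes.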

     

Question #3

Juan: When is a risk assessment considered ‘fair’?

Lum: It depends on the definition of fairness used, and different definitions could lead to different conclusions about whether the risk assessment model is unfair, and to whom.

Glazer: Studies suggest that well-designed algorithms may be far more accurate than a judge alone. Fairness is a subjective concept, and as human beings we make mistakes driven by our emotions. Letting judges make critical decisions based on their personal experience, intuition, and whatever else they decide is relevant is itself unfair.

Lum: I believe that existing fair machine learning algorithms are weak in many ways. The rational and logical part of the brain is essential to making optimal decisions, and that is something a machine cannot replicate.
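To illustrate Lum’s point that different definitions of fairness can support different conclusions, here is a minimal sketch comparing two common definitions, demographic parity and false-positive-rate parity, on the same hypothetical predictions. The numbers are invented for illustration and do not come from any real tool.

```python
# Minimal sketch showing that two common definitions of fairness can
# disagree about the same risk tool. All data below are hypothetical.

# For each group: model flags (1 = labeled high-risk) and true outcomes
# (1 = later reoffended). These toy numbers are illustrative only.
groups = {
    "A": {"flag": [1, 1, 1, 1, 1, 0, 0, 0, 0, 0],
          "true": [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]},
    "B": {"flag": [1, 1, 1, 1, 1, 0, 0, 0, 0, 0],
          "true": [1, 0, 0, 0, 0, 0, 0, 0, 0, 0]},
}

for name, g in groups.items():
    flags, truth = g["flag"], g["true"]
    # Definition 1: demographic parity compares overall flag rates.
    flag_rate = sum(flags) / len(flags)
    # Definition 2: false-positive-rate parity compares how often people
    # who did NOT reoffend were nonetheless flagged as high-risk.
    negatives = [f for f, t in zip(flags, truth) if t == 0]
    fpr = sum(negatives) / len(negatives)
    print(f"group {name}: flag rate={flag_rate:.2f}, false positive rate={fpr:.2f}")
```

In this toy example both groups are flagged at the same rate, so demographic parity holds, yet non-reoffenders in group B are flagged far more often, so false-positive-rate parity fails. Whether the tool counts as ‘fair’ depends entirely on which definition you pick, which is exactly Lum’s point.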