Aequitas

An open source bias audit toolkit that helps machine learning developers, analysts, and policymakers audit machine learning models for discrimination and bias, and make informed, equitable decisions about developing and deploying predictive risk-assessment tools.

Why we created Aequitas

Machine Learning, AI, and Data Science based predictive tools are increasingly being used in problems that can have a drastic impact on people’s lives, in policy areas such as criminal justice, education, public health, workforce development, and social services. Recent work has raised concerns about the risk of unintended bias in these models, which can affect individuals from certain groups unfairly. While many bias metrics and fairness definitions have been proposed, there is no consensus on which definitions and metrics should be used in practice to evaluate and audit these systems. Further, there has been very little empirical work on using and evaluating these measures on real-world problems, especially in public policy.

Aequitas, an open source bias audit toolkit developed by the Center for Data Science and Public Policy at the University of Chicago, can be used to audit the predictions of machine learning-based risk assessment tools, understand different types of biases, and make informed decisions about developing and deploying such systems.

Different bias and fairness criteria are appropriate for different types of interventions. Aequitas allows audits across multiple metrics; a short sketch of how the underlying per-group rates can be computed follows the list below.

Equal Parity

Also known as Demographic or Statistical Parity

When do you care?

If you want each group represented equally among the selected set.

Proportional Parity

Also known as Impact Parity or Minimizing Disparate Impact

When do you care?

If you want each group represented in proportion to its share of the overall population.

False Positive Parity

Desirable when your interventions are punitive

When do you care?

If you want all groups to have equal False Positive Rates.

False Negative Parity

Desirable when your interventions are assistive/preventative

When do you care?

If you want all groups to have equal False Negative Rates.
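
To make these criteria concrete, here is a minimal, illustrative sketch (not Aequitas itself) of how the per-group rates behind these parity checks could be computed with pandas. The column names group, score, and label_value are assumptions chosen for this example.

    import pandas as pd

    def group_rates(df: pd.DataFrame) -> pd.DataFrame:
        """Per-group selection rate, False Positive Rate, and False Negative Rate.

        Assumes binary columns: score (1 = selected for intervention) and
        label_value (1 = actual outcome), plus a group attribute column.
        """
        def rates(g: pd.DataFrame) -> pd.Series:
            selected = g["score"] == 1
            positive = g["label_value"] == 1
            return pd.Series({
                # Equal / Proportional Parity compare this rate across groups.
                "selection_rate": selected.mean(),
                # False Positive Parity compares this rate across groups.
                "fpr": (selected & ~positive).sum() / max((~positive).sum(), 1),
                # False Negative Parity compares this rate across groups.
                "fnr": (~selected & positive).sum() / max(positive.sum(), 1),
            })
        return df.groupby("group").apply(rates)

A parity check then compares each group's rate to a reference group's rate, for example fpr_disparity = fpr[group] / fpr[reference group].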

What do you need to do an Audit?

You can audit your risk assessment system for two types of biases:

  1. Biased actions or interventions that are not allocated in a way that’s representative of the population.
  2. Biased outcomes through actions or interventions that are a result of your system being wrong about certain groups of people.

For both of those audits, you need the following data (one possible data layout is sketched after this list):

  • Data about the overall population considered for interventions, along with the protected attributes you want to audit for each individual (race, gender, age, and income, for example).

  • The set of individuals in the above population that your risk assessment system recommended/selected for intervention or action. It’s important that this set comes from assessments made after the system was built, not from the data the machine learning system was “trained” on. You can also audit the training set, but it’s critical to run the audit on the population going forward.

  • If you want to audit for biases due to disparate errors of your system, then you also need to collect (and provide) actual outcomes for both the individuals who were selected and those who were not. To collect this information, you may need to run a trial and/or hold out part of the data from the recent past when building your machine learning system.
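
As an illustration, these inputs can be arranged as one row per individual, with the system’s decision, the actual outcome (when available), and the attributes to audit. The column names below (score, label_value, race, sex) follow the convention used by the Aequitas Python package, but treat the exact schema as an assumption to check against the documentation.

    import pandas as pd

    # One row per individual considered for intervention (toy values for illustration).
    audit_df = pd.DataFrame({
        "score": [1, 0, 1, 0],        # 1 = recommended/selected by the system
        "label_value": [1, 0, 0, 1],  # actual outcome; needed only for error-based audits
        "race": ["white", "black", "black", "white"],
        "sex": ["male", "female", "male", "female"],
    })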

How can you use Aequitas?
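
One way to run an audit programmatically is through the aequitas Python package, applying its Group, Bias, and Fairness classes in sequence. The sketch below follows that flow; the class and method names come from the package, but the file name and reference groups are assumptions for illustration.

    import pandas as pd
    from aequitas.group import Group
    from aequitas.bias import Bias
    from aequitas.fairness import Fairness

    # Predictions plus protected attributes, in the layout sketched above.
    df = pd.read_csv("predictions.csv")

    # Confusion-matrix counts and rates (FPR, FNR, selection rate, ...) per group.
    g = Group()
    xtab, _ = g.get_crosstabs(df)

    # Disparities of each group's rates relative to chosen reference groups.
    b = Bias()
    bdf = b.get_disparity_predefined_groups(
        xtab,
        original_df=df,
        ref_groups_dict={"race": "white", "sex": "male"},
    )

    # Parity determinations (e.g. Equal Parity, FPR Parity, FNR Parity) per group.
    f = Fairness()
    fdf = f.get_group_value_fairness(bdf)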

What does Aequitas produce?
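
Each step above returns a pandas DataFrame, so the audit results can be inspected, plotted, or exported directly. The column names used here (attribute_name, attribute_value) reflect the package’s conventions but should be treated as assumptions.

    # Inspect per-group parity results and save the full audit table.
    print(fdf[["attribute_name", "attribute_value"]].head())
    fdf.to_csv("audit_results.csv", index=False)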

The Team

Aequitas was created by the Center for Data Science and Public Policy at the University of Chicago. Our goal is to further the use of data science in policy research and practice. Our work includes educating current and future policymakers, doing data science projects with government, nonprofit, academic, and foundation partners, and developing new methods and open-source tools that support and extend the use of data science for public policy and social impact in a measurable, fair, and equitable manner.

To contact the team, please email us at aequitas at lists dot uchicago dot edu

Pedro Saleiro
Abby Stevens
Ari Anisfeld
Rayid Ghani

Interested in creating data-driven policies and systems that are fair and equitable?

Talk to us. We’re building a series of bias, fairness, and equity audit tools, trainings, and methodologies for governments, non-profits, and corporations.