By Sarah Gillespie

Published October 4, 2021

Overview

The paper On Formalizing Fairness in Prediction with Machine Learning discusses a number of definitions of what it means for an algorithm to be “fair,” specifically by breaking the concept down into five ways that an outcome can fail fairness. These are just the most popular of many ways to look at a “fair” algorithm. It cannot be emphasized enough that fairness is not a simple accuracy metric that a model can easily be optimized to achieve. It is crucial to look at a model through different fairness lenses, because a model can satisfy one specific fairness measure while still producing negative externalities that fail others.


Comparison: Algorithm Fairness Categories

Counterfactual fairness
  Definition: A decision is fair towards an individual if the decision is the same in both the actual world and a counterfactual world where the individual belongs to a different demographic group.
  Example: Amazon’s resume-screening algorithm rejecting resumes that contained the word “women’s.”

Group fairness
  Definition: Individuals in different demographic groups should receive a given outcome prediction with almost equal probability. Associated with collectivist egalitarianism.
  Example: Northpointe’s sentencing algorithm failed group fairness between racial groups. The formula was particularly likely to falsely flag black defendants as future criminals, wrongly labeling them this way at almost twice the rate of white defendants, while white defendants were mislabeled as low risk more often than black defendants.

Individual fairness
  Definition: A predictor is fair if it produces similar outputs for similar individuals. Associated with individual egalitarianism.
  Example: A victim of the Northpointe algorithm, Brisha Borden, was given a disproportionately harsh risk score compared to another individual in the data, Vernon Prater.

Equality of opportunity
  Definition: In a scenario where there is only one beneficial outcome, the true positive rate should be the same for all groups.
  Example: Using standardized test scores as a predictor without accounting for the systemic discrimination or economic circumstances reflected in one group’s data, which can have an adverse impact.

Preference-based fairness
  Definition: An algorithm satisfies an individual’s preferred treatment, matching its predictions to individuals’ real preferences at the highest possible rate.
  Example: An algorithm with a high error rate for preference matching, such as an unsuccessful roommate-matching algorithm at a large university.
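
To make the group fairness and equality of opportunity entries above concrete, here is a minimal sketch in Python of how the two gaps could be measured. The data, function names, and numbers are invented purely for illustration; they do not come from the paper or from any real system.

```python
# Minimal sketch: measuring group fairness (demographic parity) and
# equality of opportunity (true positive rate gap) for two groups.
# The data below is made up purely for illustration.

def positive_rate(preds):
    """Fraction of individuals predicted to receive the beneficial outcome."""
    return sum(preds) / len(preds)

def true_positive_rate(preds, labels):
    """Among individuals who truly deserve the beneficial outcome (label == 1),
    the fraction the model actually predicts positively."""
    positives = [p for p, y in zip(preds, labels) if y == 1]
    return sum(positives) / len(positives)

# Hypothetical binary predictions (1 = invited to interview) and true labels
# (1 = qualified) for two demographic groups.
group_a_preds, group_a_labels = [1, 1, 0, 1, 0, 1], [1, 1, 0, 1, 1, 1]
group_b_preds, group_b_labels = [1, 0, 0, 0, 0, 1], [1, 1, 0, 1, 1, 1]

# Group fairness (demographic parity): positive prediction rates should match.
parity_gap = abs(positive_rate(group_a_preds) - positive_rate(group_b_preds))

# Equality of opportunity: true positive rates should match.
opportunity_gap = abs(true_positive_rate(group_a_preds, group_a_labels)
                      - true_positive_rate(group_b_preds, group_b_labels))

print(f"Demographic parity gap: {parity_gap:.2f}")
print(f"Equal opportunity gap:  {opportunity_gap:.2f}")
```

A gap close to zero on one of these measures says nothing about the other, which is exactly why the paper treats them as distinct definitions.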


Common confusion points

Group fairness vs. individual fairness

One common confusion point when comparing fairness definitions is the difference between group fairness and individual fairness. One might wonder how they are different, since treating groups differently means being unfair to all the individuals in those groups.

First, let our two groups be students studying computer science at Smith College and students studying computer science at Mount Holyoke College. All of the students are applying for jobs through the same career fairs, the same LinkedIn posts, and the same company websites.

If there were group fairness, then Smith College computer science students would receive interview invitations at the same rate as Mount Holyoke College computer science students. If Smith students received noticeably more or fewer interview invitations than the Mount Holyoke students, the process would fail group fairness.

Individual fairness focuses on individual points in the data set, checking that similar or identical data points receive the same treatment. So, if two Smith computer science students had the same academic background, grades, and experiences and applied to the same jobs, the interview process would be individually fair if the two students received the same number of interview invitations. If they did not, the process would not be individually fair.

A company might offer more interview opportunities to people from groups underrepresented in its employee population in order to achieve a group fairness goal. This decreases individual fairness in the applicant population but has the potential to increase group fairness in the company’s employee population.
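
As a toy illustration of the distinction, the sketch below runs both checks on a handful of hypothetical applicants from the two colleges. Every profile and invitation count is made up; the point is only that a process can raise individual fairness concerns (identical profiles treated differently) independently of how the group averages compare.

```python
# Toy sketch of group fairness vs. individual fairness, using the
# Smith / Mount Holyoke scenario above. All applicants are hypothetical.

applicants = [
    # (college, GPA, internships, interview invitations received)
    ("Smith",         3.8, 2, 5),
    ("Smith",         3.8, 2, 1),   # identical profile to the applicant above
    ("Smith",         3.2, 1, 2),
    ("Mount Holyoke", 3.8, 2, 5),
    ("Mount Holyoke", 3.2, 1, 2),
    ("Mount Holyoke", 3.5, 0, 3),
]

# Group fairness check: compare average invitations per college.
def average_invitations(college):
    invites = [a[3] for a in applicants if a[0] == college]
    return sum(invites) / len(invites)

print("Smith average:        ", average_invitations("Smith"))
print("Mount Holyoke average:", average_invitations("Mount Holyoke"))

# Individual fairness check: applicants with identical profiles
# (same GPA and internships) should receive similar treatment.
for i, a in enumerate(applicants):
    for b in applicants[i + 1:]:
        same_profile = a[1:3] == b[1:3]
        if same_profile and a[3] != b[3]:
            print("Individual fairness concern:", a, "vs", b)
```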

Equality of opportunity, expanded

Not accounting for systemic discrimination or economic circumstances that may affect one group’s data can have an adverse impact. Standardized testing is a useful example: lower-income students may not have the same resources as higher-income students, such as access to private tutors, the luxury of time to study for an optional test (rather than working or caring for family), or better-funded schools for the prior twelve years. Using standardized test scores to predict college and career performance is problematic when students did not have equal opportunity to prepare for the test. Using SAT scores in a model is not an apples-to-apples approach for different students.

Relying on these scores as a metric also speaks to how people’s attributes can be oversimplified in models. Expanding on the SAT score example, using only numeric attributes to rate a candidate may ignore qualitative attributes, such as real-world skills, unique perspectives, or determination.

One way to address a model that neglects equality of opportunity is to include additional metrics that showcase someone’s useful skills and perspectives, rather than relying only on metrics that ignore systemic discrimination or economic circumstances.
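
Applying the equality of opportunity definition from the table above to this scenario, one simple diagnostic is to compare true positive rates across income groups. The records and group labels below are invented for illustration; a real audit would use the actual model’s predictions and outcomes.

```python
# Sketch: comparing true positive rates across income groups for a
# hypothetical admissions model. 1 = favorable prediction / actual success.

records = [
    # (income_group, model_prediction, actually_succeeded)
    ("lower_income", 0, 1), ("lower_income", 1, 1), ("lower_income", 0, 1),
    ("lower_income", 0, 0), ("lower_income", 1, 1),
    ("higher_income", 1, 1), ("higher_income", 1, 1), ("higher_income", 0, 0),
    ("higher_income", 1, 1), ("higher_income", 1, 0),
]

def tpr(group):
    """True positive rate: of the students who actually succeeded,
    how many did the model predict favorably?"""
    hits = [pred for g, pred, label in records if g == group and label == 1]
    return sum(hits) / len(hits)

for group in ("lower_income", "higher_income"):
    print(f"{group}: TPR = {tpr(group):.2f}")
# A large gap suggests the model is denying opportunity to qualified
# students in one group, i.e., it fails equality of opportunity.
```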

Equity vs. equality

Equality means giving everyone the same resources; equity means giving people the resources they need to reach a comparable outcome. The U.S. government sending out a maximum of four free at-home COVID tests to each household is an example of how equality does not achieve equity. While every household gets the same number of tests, this approach does not consider the many factors that might make a household need more tests, such as a multigenerational household, a large collection of roommates, or differing levels of household cooperation. The four-tests-per-household approach benefits people who can afford to live alone and who have infrequent exposure to COVID risk.


Treatment vs. impact

This relates to another point the Formalizing Fairness paper discusses: the difference between an algorithm’s treatment and its impact. Treatment and impact are not exclusive to algorithms.

Treatment is the specific type of action that an algorithm suggests for each data point, while impact is the effect on that data point’s well-being. Since each person is unique, with their own set of skills and resources, providing the exact same treatment to different people is unlikely to lead to the exact same outcome. For example, suppose an educational algorithm provides the same treatment by suggesting that two first-grade students read the same chapter of a book. If one student has an older sibling around to help look up challenging words and explain deeper literary elements like foreshadowing and social context, that student will gain a larger benefit than a student who reads the chapter without that support system. Despite receiving the same treatment, the two students experienced very different impacts.


ACM’s perspective on algorithm accountability

The Association for Computing Machinery published 7 Principles for Algorithmic Transparency and Accountability, which is written for the creators of algorithms and emphasizes that creators should fully understand the algorithms they build and make them auditable.

  1. Awareness
    Owners, designers, builders, users, and other stakeholders of analytic systems should be aware of the possible biases involved in their design, implementation, and use and the potential harm that biases can cause to individuals and society.
  2. Access and Redress
    Regulators should encourage the adoption of mechanisms that enable questioning and redress for individuals and groups that are adversely affected by algorithmically informed decisions.
  3. Accountability
    Institutions should be held responsible for decisions made by the algorithms that they use, even if it is not feasible to explain in detail how the algorithms produce their results.
  4. Explanation
    Systems and institutions that use algorithmic decision-making are encouraged to produce explanations regarding both the procedures followed by the algorithm and the specific decisions that are made. This is particularly important in public policy contexts.
  5. Data Provenance
    A description of the way in which the training data was collected should be maintained by the builders of the algorithms, accompanied by an exploration of the potential biases induced by the human or algorithmic data-gathering process. Public scrutiny of the data provides maximum opportunity for corrections. However, concerns over privacy, protecting trade secrets, or revelation of analytics that might allow malicious actors to game the system can justify restricting access to qualified and authorized individuals.
  6. Auditability
    Models, algorithms, data, and decisions should be recorded so that they can be audited in cases where harm is suspected.
  7. Validation and Testing
    Institutions should use rigorous methods to validate their models and document those methods and results. In particular, they should routinely perform tests to assess and determine whether the model generates discriminatory harm. Institutions are encouraged to make the results of such tests public.
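
As one hypothetical illustration of what principles 6 and 7 might look like in practice, an institution could routinely run and record a check like the one below. The metric, threshold, and log file name are my own assumptions, not ACM requirements.

```python
# Hypothetical fairness audit check: record the parity gap between two
# groups and fail loudly if it exceeds a chosen threshold. The threshold,
# groups, and file name are illustrative assumptions, not ACM guidance.
import json
from datetime import datetime, timezone

MAX_PARITY_GAP = 0.10  # illustrative policy threshold

def positive_rate(preds):
    return sum(preds) / len(preds)

def audit(group_a_preds, group_b_preds, log_path="fairness_audit_log.jsonl"):
    gap = abs(positive_rate(group_a_preds) - positive_rate(group_b_preds))
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "metric": "demographic_parity_gap",
        "value": round(gap, 4),
        "threshold": MAX_PARITY_GAP,
        "passed": gap <= MAX_PARITY_GAP,
    }
    # Principle 6 (auditability): keep a durable record of every check.
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")
    # Principle 7 (validation and testing): surface the failure immediately.
    assert record["passed"], f"Parity gap {gap:.2f} exceeds {MAX_PARITY_GAP}"
    return record

# Example run with made-up predictions:
audit([1, 0, 1, 1, 0, 1], [1, 0, 1, 1, 0, 1])
```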

The ACM’s principles differ from the On Formalizing Fairness in Prediction with Machine Learning paper, which places more focus on how the subjects of an algorithm should be treated.

The take-away is that individuals should know the purpose and impact of the algorithms they are responsible for. This may include documentation of an algorithm’s goals and its economic and social impact. When an algorithm affects actual people’s lives, rather than just being a gimmick, it is important to consider its real-world effects alongside its computing efficiency.


Going forward

Katrina Ligett’s keynote speech at the Conference on Fairness, Accountability, and Transparency (FAccT), titled In Praise of Flawed Mathematical Models, discusses the value of using math to model these fairness values. It is an excellent talk, and I cannot state things as eloquently as Dr. Ligett, but she explains how to use the economics concepts of game theory and Pareto-efficient outcomes to consider algorithmic trade-offs, like trading off privacy for efficiency, or trading different fairness definitions against each other. Dr. Ligett notes that mathematical models are excellent at defining problems and setting bounds, but it is up to people to clearly define what should be valued in an algorithm. This includes deciding who in society gets to decide what an algorithm should value, and she addresses how mathematical models can serve as cover for malicious or careless actions that hurt other people.

Adrin Jalali best summed up the current algorithm fairness terrain during a NeurIPS 2020 Expo Talk Panel titled “From scikit-learn and fairness, tools and challenges,” when he said, “If you take an algorithm, it’s not too hard for me to find a fairness definition for which my algorithm does not perform badly. So we need to be really careful about that.”