Bias in AI systems

Despite their incredible ability to drive innovation, automate mundane tasks, and potentially improve the accuracy of numerous decision-making processes, AI systems can also reinforce social biases on an unprecedented scale. To ensure that AI-driven systems flourish without jeopardizing core American values, any AI-focused legislation should consider how bias comes about and what technical and social solutions have been proposed to mitigate harm without unduly hampering innovation.

How does bias occur in the first place? 

As demonstrated in our first two posts in this series, (ML explainer) and (Deep learning explainer), all machine learning tools are created using training data. While unsupervised learning and reinforcement learning can unintentionally lead to unfair outcomes depending on their implementation, most documented AI bias cases are attributed to supervised learning.  

In a supervised learning task, an analyst attempts to predict a particular variable of interest using a collection of features about a set of observations. Consider a hiring tool that attempts to automate the video interview process. Such a tool prompts an applicant with the questions that an interviewer would typically ask – “how do you address conflict in the workplace,” for example – and generates an ‘employability’ metric for that applicant.

To determine a candidate’s employability, the tool needs examples of interviews for similar jobs and the result of those interviews – whether the person was hired or not. Then, using additional tools (some of which will be introduced in later AI case studies), features of these interviews are extracted. Some of these features might include tone of voice, facial expression, and the actual responses given to each question. These features, as well as the outcome of whether a candidate was hired or not, allow the algorithm to be trained to predict employability and generate a score for any applicant in the future.   
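
To make this setup concrete, below is a minimal sketch of such a supervised model in Python, assuming the interview features (tone of voice, facial expression, relevance of answers) have already been extracted into numeric columns. The data, column names, and model choice are hypothetical stand-ins rather than a description of any real hiring tool.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Toy stand-in for historical interviews; in practice these columns would come from
# the audio/video feature-extraction step described above.
interviews = pd.DataFrame({
    "tone_score":       [0.8, 0.4, 0.9, 0.3, 0.7, 0.2, 0.6, 0.5],
    "smile_rate":       [0.6, 0.2, 0.7, 0.1, 0.5, 0.3, 0.4, 0.6],
    "answer_relevance": [0.9, 0.5, 0.8, 0.4, 0.7, 0.3, 0.6, 0.5],
    "was_hired":        [1,   0,   1,   0,   1,   0,   1,   0],
})

X = interviews[["tone_score", "smile_rate", "answer_relevance"]]
y = interviews["was_hired"]

# Train on past hiring outcomes.
model = LogisticRegression().fit(X, y)

# The 'employability' score for a future applicant is the predicted probability that
# a candidate with these features would have been hired in the past.
applicant = pd.DataFrame({"tone_score": [0.65], "smile_rate": [0.45], "answer_relevance": [0.7]})
print(model.predict_proba(applicant)[:, 1])
```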

There are a few ways that this sort of model might produce biased results:

  • Error in feature generation. The hiring tool might not be as effective at extracting features from some interviews as others. For example, an applicant with a regional dialect might have some of their words misinterpreted by the algorithm, causing some of their responses to lose their original meaning. Similarly, an applicant with facial hair might have their facial expressions obscured or misinterpreted. Depending on the feature, this may introduce noise into the candidate’s employability score.
  • Non-representative training data. The interviews used to train the model might over-represent a particular skin color, accent, or regional dialect. Even if the actual words said by the applicant were perfectly captured, if every successful applicant refers to their carbonated drinks as a coke, a potential applicant referring to it as a pop or a soda might be considered less favorably by the model. A quick check of group representation, sketched after this list, can help surface such gaps.
  • Biased social contexts. The examples of successful interviews used to train the model on what a ‘good’ candidate looks like may not represent the entire population of applicants. Whether or not this is the result of affinity bias on behalf of the employer or a broader social context where a particular demographic group is hired more frequently for a specific job, the model may pick up on these demographic indicators and give applicants a bonus or demerit depending on whether they look like successful applicants of the past.  
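
As a hedged illustration of the second point, the non-representative data problem can sometimes be surfaced by comparing how often each group appears in the training interviews versus the broader applicant pool. The 'dialect' column and the tiny datasets below are invented stand-ins for whatever demographic or linguistic attribute an analyst actually has available.

```python
import pandas as pd

# Invented examples: the dialect observed in training interviews vs. the applicant pool.
train = pd.DataFrame({"dialect": ["coke", "coke", "coke", "coke", "soda", "coke"]})
pool = pd.DataFrame({"dialect": ["coke", "soda", "pop", "soda", "coke", "pop"]})

representation = pd.concat(
    {
        "training_share": train["dialect"].value_counts(normalize=True),
        "applicant_share": pool["dialect"].value_counts(normalize=True),
    },
    axis=1,
).fillna(0)

# Large gaps flag groups the model will rarely (or never) see during training.
print(representation)
```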

Dangers of ‘online’ systems 

In a vacuum, these models may do little more than codify historical biases into the present. In practice, however, many modern machine learning tools used in production are ‘online’ – they are trained once on the original data, with subsequent decisions fed back into the algorithm to make the model more accurate and more representative of current decision-making. There are numerous benefits: the data does not need to be stored long-term, the computation involved in training is less costly, and the model can adapt to change. For example, an online model predicting consumer demand for lumber that is allowed to adapt to COVID-19 lockdowns and the subsequent appetite for home remodeling would likely outperform a static model.
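
As a rough sketch of what 'online' means in practice, scikit-learn's partial_fit interface updates an existing model batch by batch instead of retraining on the full history. The toy arrays below stand in for interview features and hiring decisions.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
X_hist = rng.normal(size=(1000, 3))        # historical interview features (toy data)
y_hist = rng.integers(0, 2, 1000)          # historical hiring outcomes

online_model = SGDClassifier()
online_model.partial_fit(X_hist, y_hist, classes=[0, 1])   # initial training pass

# Later, each new batch of real-world decisions updates the same model in place.
# The historical data never needs to be stored or revisited, but whatever bias the
# new decisions carry is learned along with everything else.
X_new = rng.normal(size=(50, 3))
y_new = rng.integers(0, 2, 50)
online_model.partial_fit(X_new, y_new)
```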

In and of itself, such a system is not inherently prone to more bias. If the hiring model were trained using historically biased data, but the hiring managers using the model in practice are conscious of this bias, they may override what the model projects to be the best candidate. If, however, the opposite occurs – a biased model is used by hiring managers who are prone to the same bias in their decision-making – the model may become more biased over time as these already-biased decisions are fed back into the model. Due to their dynamic nature, online systems may more closely reflect modern decision-making, but whether that modern decision-making is perpetuating or ameliorating bias depends on social context.
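
To illustrate how such a feedback loop can compound, here is a deliberately stylized simulation in which every number is invented: two equally skilled applicant groups, a model that starts with a small penalty against group 1, managers who hire strictly by model score, and an 'online' update that re-learns the group offsets from each round of hires.

```python
import numpy as np

rng = np.random.default_rng(0)

skill = rng.normal(size=(2, 5000))            # two equally skilled groups of applicants
offset = np.array([0.0, -0.3])                # model's initial learned per-group offset

for step in range(5):
    scores = skill + offset[:, None]          # model score = true skill + group offset
    cutoff = np.quantile(scores, 0.8)         # managers hire the top 20% by model score
    hire_rate = (scores > cutoff).mean(axis=1)
    # Online update: the offsets are re-learned from who was just hired, so each
    # round's shortfall for group 1 is added to its existing penalty.
    offset = offset + (hire_rate - hire_rate.mean())
    print(f"round {step}: hire rates {hire_rate.round(3)}, offsets {offset.round(3)}")
```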

Technical solutions 

There are many potential technical solutions to the issues outlined above, though experts in the field debate their efficacy and feasibility. For example:

  • Dynamic upsampling. Similar to how statisticians employ weighting to generate unbiased statistics, dynamic upsampling allows models to give extra weight to underrepresented observations in the training data. In practice, this means that while all observations in the data will be used to train a given model, the model will be optimized to make correct predictions regarding underrepresented observations at the expense of majority observations. An inherent advantage of this approach is that the analyst does not need to know which sensitive features need to be dynamically upsampled – it occurs automatically while training. The example employability model might use this technique to prefer candidates with accents or other unique attributes during training (a simplified, weight-based version is sketched after this list).
  • Adversarial de-biasing. Adversarial learning, a modern development in deep learning, pits a secondary model against a primary model during training. In adversarial de-biasing, the primary model attempts to predict the target value while the secondary model attempts to recover a sensitive feature of each observation from the primary model’s output; the primary model is penalized whenever the secondary model succeeds. Applied to hiring, adversarial de-biasing might yield a model that produces a good prediction of employability but an uncertain prediction of race or sex. While these tools can all but guarantee that a given model does not make predictions based on specified features, those sensitive features need to be specified before training, potentially leaving unforeseen bias due to other features (such as age). A minimal sketch follows this list.
  • Synthetic data generation. Drawing on elements of both approaches above, synthetic data generation allows novel data to be generated that resembles the original data it is trained on, potentially filling in for underrepresented groups to create more representative datasets or creating an entirely new dataset that accurately reflects the variety of the original data. Synthetic data might allow an employability model to accurately identify the facial expressions of applicants with unique physical characteristics by synthesizing a representative and balanced training dataset of faces (a toy version is sketched below).
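
A much-simplified sketch of the upsampling idea: here the rebalancing is done with explicit sample weights keyed to a known group label, whereas the dynamic variants described above infer which observations to up-weight automatically during training. All data and labels below are invented.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                                     # toy interview features
y = rng.integers(0, 2, 200)                                       # toy hiring outcomes
group = np.where(rng.random(200) < 0.9, "majority", "minority")   # 90/10 representation

def balanced_weights(groups):
    """Weight each observation inversely to its group's frequency in the data."""
    values, counts = np.unique(groups, return_counts=True)
    freq = dict(zip(values, counts))
    return np.array([len(groups) / (len(values) * freq[g]) for g in groups])

# Rare-group interviews now count for more in the loss, so the model cannot minimize
# its error by fitting the majority group alone.
model = GradientBoostingClassifier().fit(X, y, sample_weight=balanced_weights(group))
```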
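
A minimal sketch of the two-model setup, assuming PyTorch and toy tensors: the predictor learns to score employability while a small adversary tries to recover a sensitive attribute from the predictor’s output, and the predictor is rewarded for making that recovery fail. Real adversarial de-biasing systems are considerably more careful about architecture and training dynamics.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy stand-ins: interview features X, hiring outcome y, sensitive attribute s.
X = torch.randn(400, 8)
y = (X[:, 0] + 0.5 * torch.randn(400) > 0).float()
s = torch.randint(0, 2, (400,)).float()

predictor = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
adversary = nn.Sequential(nn.Linear(1, 8), nn.ReLU(), nn.Linear(8, 1))
opt_pred = torch.optim.Adam(predictor.parameters(), lr=1e-2)
opt_adv = torch.optim.Adam(adversary.parameters(), lr=1e-2)
bce = nn.BCEWithLogitsLoss()
lam = 1.0                                    # strength of the de-biasing penalty

for epoch in range(200):
    # 1) Adversary: learn to recover the sensitive attribute from the predictor's output.
    adv_loss = bce(adversary(predictor(X).detach()).squeeze(1), s)
    opt_adv.zero_grad()
    adv_loss.backward()
    opt_adv.step()

    # 2) Predictor: predict employability while making the adversary's task impossible.
    out = predictor(X)
    pred_loss = bce(out.squeeze(1), y) - lam * bce(adversary(out).squeeze(1), s)
    opt_pred.zero_grad()
    pred_loss.backward()
    opt_pred.step()
```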
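
And a toy version of synthetic data generation: fit a simple generative model to the underrepresented group’s interviews and sample new rows to rebalance the training set. Production systems typically rely on far more capable generative models (generative adversarial networks, for example), so this is only the simplest possible stand-in.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
minority_X = rng.normal(loc=0.5, size=(40, 3))        # toy features for the rare group

# Fit a small generative model to the minority group and draw synthetic interviews.
generator = GaussianMixture(n_components=2, random_state=1).fit(minority_X)
synthetic_X, _ = generator.sample(160)

# Appending the synthetic rows (with appropriate labels) rebalances the training data
# so the minority group is no longer underrepresented.
augmented_X = np.vstack([minority_X, synthetic_X])
print(augmented_X.shape)
```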

Non-technical solutions 

In addition to a host of non-technical solutions to biased AI systems, which can be found in BPC’s AI Ethics Whitepaper, experts have proposed:  

  • Standardized data documentation. The quality of training data is crucially important when designing AI systems. The federal government, in tandem with industry and non-profit stakeholders, can support voluntary documentation standards to ensure that the end-users of open online data understand how the data were collected, the motivations for its creation, and its recommended uses. To support these efforts, federal agencies can release well-documented benchmark datasets with programmatic access for developers and researchers.
  • Auditing of outcomes. Regular audits can be performed on the decisions made by these algorithms to determine how they treat specific demographics compared to others. For the employability model, an analyst can calculate whether the proportion of hired applicants who were deemed employable differs based on race and sex. This is referred to as equality of true positive rate, one of many approaches to quantifying fairness in a decision-making process (a minimal version of this calculation is sketched after this list). Care must be taken to ensure these measures of ‘fairness’ are observable in the first place. For example, a reasonable fairness metric for the employability model would determine whether employable applicants were rejected at different rates based on race and sex. However, an analyst cannot know whether an applicant deemed unemployable by the model is genuinely employable, since the in-person interview would not be performed if the hiring manager trusts the model’s initial judgment.
  • Risk-based approaches. Due to the self-reinforcing nature of AI-based decision-making systems, some uses of AI (including those supporting lending decisions, benefits distribution, or employment screening, for example) pose a greater risk to American civil liberties than others. Regulators, model creators, and eventual end-users of these systems should consider not only whether model bias could lead to illegal discrimination against protected classes but also whether the decisions made by these models could deepen inequality. 
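
A minimal version of the audit described above, with invented outcomes: among applicants who were actually hired, compare how often the model had flagged them as employable in each group. The column names are hypothetical.

```python
import pandas as pd

audit = pd.DataFrame({
    "group":     ["A", "A", "A", "B", "B", "B", "B"],
    "hired":     [1,   1,   0,   1,   1,   0,   0],    # actual hiring outcome
    "predicted": [1,   0,   0,   1,   1,   1,   0],    # model's employability call
})

# True positive rate per group: of those actually hired, what share did the model flag?
tpr_by_group = audit[audit["hired"] == 1].groupby("group")["predicted"].mean()

# A large gap between groups indicates unequal true positive rates.
print(tpr_by_group)
```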

Ultimately, there is no quick solution to the issue of biased AI systems, especially when interrogating the decisions made by those systems requires trained machine learning experts. BPC’s AI blog post series will now focus primarily on case studies of AI in industry and government, as well as opportunities and challenges that may arise from them now and in the future.
