Fighting AI Bias by Obtaining High-Quality Training Data

Audrey Boguchwal

February 4, 2020

5 Minute Read

I recently presented a talk at the ReWork Deep Learning Summit titled "Fighting AI Bias: How to Obtain Secure, High-Quality Training Data," and I think this knowledge is worth sharing beyond the summit as well.


Bias can make its way into your model at any stage of the training data lifecycle, potentially compromising the accuracy and performance of your algorithms. And as more organizations develop their own AI and ML programs, the need for superior-quality data only grows.

Impact of Biased Data in Computer Vision

AI bias most commonly presents itself in three categories: dataset bias, training bias, and algorithmic bias.

Dataset bias is as you might expect—the data does not provide enough information for the model to learn the problem, or it’s unrepresentative of reality in some way. Training bias is the result of poor quality or inconsistent data labeling, and lastly, algorithmic bias occurs when the algorithm itself makes poor predictions or produces poor results. 
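One simple way to probe for dataset bias is to check whether any class is badly underrepresented in your labels. The sketch below is a minimal illustration of that idea, not Samasource's method; the class names and the 5% threshold are illustrative assumptions.

```python
from collections import Counter

def flag_underrepresented(labels, min_share=0.05):
    """Return classes whose share of a labeled dataset falls below
    min_share -- a crude proxy for dataset bias. The 5% threshold
    is an illustrative assumption, not a standard."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {cls: n / total for cls, n in counts.items() if n / total < min_share}

# Hypothetical labels for a driving dataset: "truck" is only 2% of items.
labels = ["car"] * 60 + ["pedestrian"] * 38 + ["truck"] * 2
print(flag_underrepresented(labels))  # {'truck': 0.02}
```

A check like this catches only the crudest imbalance; representativeness across attributes such as lighting, geography, or demographics requires metadata beyond the class label.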

Models trained on biased data not only produce inaccurate algorithms, they also present ethical, legal and safety problems. And in some cases, biased data in computer vision can perpetuate historical, negative stereotypes across race and gender. 

Left unchecked, algorithms trained on biased data greatly impact the lives of people using the very technologies meant to enhance their everyday experiences.

Download the Data Quality Solution Brief

Countering Bias in Training Data

Countering bias in training data starts by having an effective training data strategy.

Last year at Embedded Vision Summit, I presented a talk on practical training data strategies to avoid bias, sharing four ways to mitigate unwanted bias in training data.  

I want to echo my thoughts here: an effective training data strategy makes for a strong defense against AI bias. Fighting bias in training data means determining your data needs, developing training rules to cover known use cases, and diversifying data to cover edge cases.

As your model learns, countering bias means evolving rules and sourcing more data when needed, all while staying apprised of legal and ethical sourcing considerations.
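Training bias, as noted above, often stems from inconsistent labeling, and a quick way to surface it is to measure how often two annotators agree on the same items. The sketch below is a minimal illustration under that assumption; production pipelines typically use a chance-corrected statistic such as Cohen's kappa instead of raw agreement.

```python
def pairwise_agreement(annotations):
    """Fraction of items on which two annotators assigned the same
    label -- a simple labeling-consistency check. Raw agreement does
    not correct for chance, unlike Cohen's kappa."""
    a, b = annotations
    matches = sum(1 for x, y in zip(a, b) if x == y)
    return matches / len(a)

# Hypothetical labels from two annotators on the same five images.
ann_a = ["cat", "dog", "dog", "cat", "bird"]
ann_b = ["cat", "dog", "cat", "cat", "bird"]
print(pairwise_agreement([ann_a, ann_b]))  # 0.8
```

A low agreement rate is a signal to tighten the labeling instructions or retrain annotators before the inconsistency propagates into the model.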


Obtaining Superior Quality Datasets

Top organizations understand that if they want smarter models, they need ethically sourced, quality data. Your quality requirements might vary, depending on your model, but the fact remains that diverse, high-quality data helps counter AI bias. 

For over a decade, hundreds of organizations, including 25% of the Fortune 50, have relied on Samasource to deliver secure, high-quality training data and model validation for machine learning.

We’ve helped organizations like Walmart improve their retail item coverage, and others like Vulcan Inc. improve turnaround time for processing training datasets. We’ve even partnered with organizations like Cornell Tech to produce an open-source dataset of our own.

Here are a few things to keep in mind when sourcing superior quality datasets:

  • Be aware of local privacy and property laws as you collect data. 
  • Ensure you have legal user consent for data capture.
  • Stay informed of the security protocols of facilities processing your training data.
  • When possible, stay informed of the working conditions of the workers labeling your data, and support solutions that pay living wages and benefits.

From pilots to multi-year projects, Samasource securely trains and validates computer vision and NLP models. We work on use cases ranging from e-commerce to autonomous transportation, manufacturing, navigation, retail, AR/VR, and biotech. If your goal is to build smarter AI, contact our team to discuss your training data needs.

Contact Us


Audrey Boguchwal

Currently a Senior Product Manager at Samasource, Audrey guides cross-functional teams to create thoughtful product solutions. She has guided teams of designers and engineers at HUGE Inc. and NBCUniversal, and monitored user analytics at the Wall Street Journal. With a BA in history from Harvard, an MA in anthropology from Columbia, and an MBA from UNC Chapel Hill KFBS, Audrey is passionate about using technology and data analytics to facilitate social impact and environmental solutions.