November 10 – 12, 2020

Online Edition!

Testing AI for Unwanted Bias

Tariq King

Different types of bias in AI systems and the AI bias cycle.

A set of testing heuristics (mnemonics, checklists) to aid testing AI systems for unwanted bias

A freely available tool that quantifies the risk of unwanted bias in AI systems.

Open-source toolkits for detecting and mitigating unwanted bias in AI systems.


Testing AI for Unwanted Bias

Testing will play a key role in preventing and detecting unwanted bias in AI and machine learning systems.

A quick Internet search on bias reveals a list of nearly 200 cognitive biases that psychologists have classified based on human beliefs, decisions, behaviors, social interactions, and memory patterns. Since the world is filled with bias, it follows that any data we collect from it contains biases. If we then use that data to train AI, the machines will reflect those biases. So how do we start to engineer AI-based systems that are fair and inclusive? Is it even practical to remove bias from AI-based systems, or is it too daunting a task? Join Tariq King as he describes the different types of bias in AI systems and explains the AI bias cycle. Tariq will share a set of heuristics developed to help engineers prevent and detect unwanted AI bias. Based on these heuristics, Tariq and his team developed and released a freely available tool for assessing AI bias risk, codenamed the AI BRAT.
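The claim that biased training data yields biased models can be made concrete with a simple fairness check. The sketch below computes a demographic parity difference, one common way open-source fairness toolkits quantify unwanted bias; the metric choice, function name, and toy data here are illustrative assumptions, not material from the talk or from AI BRAT.

```python
# Illustrative sketch: quantifying unwanted bias with demographic parity.
# All names and data below are hypothetical, not from the session.

def demographic_parity_difference(outcomes, groups, positive=1):
    """Gap in positive-outcome rates across groups.

    A value near 0 suggests parity; a large value flags potential
    unwanted bias worth investigating.
    """
    rates = {}
    for g in set(groups):
        selected = [o for o, grp in zip(outcomes, groups) if grp == g]
        rates[g] = sum(1 for o in selected if o == positive) / len(selected)
    lowest, highest = min(rates.values()), max(rates.values())
    return highest - lowest

# Toy model predictions for two demographic groups (illustrative only):
# group "A" receives the positive outcome 3 times out of 4,
# group "B" only 1 time out of 4.
preds  = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(preds, groups))  # 0.75 - 0.25 = 0.5
```

A tester could run a check like this as a heuristic gate: if the gap exceeds an agreed threshold, the model's training data and features get a closer look before release.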

