Testing AI for Unwanted Bias

30-minute Talk

Testing will play a key role in preventing and detecting unwanted bias in AI and machine learning systems.

Virtual Pass session

Timetable

3:45 p.m. – 4:25 p.m. Wednesday 11th

Audience

Tester, Developer, Manager, Product Owner

Key Learnings

  • Different types of bias in AI systems and the AI bias cycle.
  • A set of testing heuristics (mnemonics, checklists) to aid in testing AI systems for unwanted bias.
  • A freely available tool that quantifies the risk of unwanted bias in AI systems.
  • Open-source toolkits for detecting and mitigating unwanted bias in AI systems.

A quick Internet search on bias reveals a list of nearly 200 cognitive biases that psychologists have classified based on human beliefs, decisions, behaviors, social interactions, and memory patterns. Since the world is filled with bias, it follows that any data we collect from it contains biases. If we then use that data to train AI, the machines will reflect those biases. How, then, do we start to engineer AI-based systems that are fair and inclusive? Is it even practical to remove bias from AI-based systems, or is it too daunting a task? Join Tariq King as he describes the different types of bias in AI systems and explains the AI bias cycle. Tariq will share a set of heuristics developed to help engineers prevent and detect unwanted AI bias. Based on these heuristics, Tariq and the team at test.ai developed and released a freely available tool for assessing AI bias risk, codenamed AI BRAT.
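The specifics of AI BRAT and the toolkits are covered in the talk itself, but to give a flavor of what automated bias detection involves, here is a minimal sketch of one widely used fairness check, the disparate impact ratio. The function, data, and the 0.8 threshold below are illustrative assumptions, not the talk's actual tooling; open-source toolkits such as IBM's AIF360 and Microsoft's Fairlearn implement this metric and many others, together with mitigation algorithms.

```python
# Minimal sketch of one common bias check: the disparate impact ratio.
# This is NOT AI BRAT and not necessarily what the talk's toolkits
# compute; all names, data, and thresholds here are illustrative.

def disparate_impact(predictions, groups, privileged, unprivileged):
    """Ratio of favorable-outcome rates: unprivileged / privileged.

    Values below roughly 0.8 (the "four-fifths rule") are a common
    heuristic signal of unwanted bias.
    """
    def favorable_rate(group):
        outcomes = [p for p, g in zip(predictions, groups) if g == group]
        return sum(outcomes) / len(outcomes)

    return favorable_rate(unprivileged) / favorable_rate(privileged)

# Toy predictions: 1 = favorable outcome (e.g., loan approved).
preds = [1, 1, 0, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

ratio = disparate_impact(preds, groups, privileged="A", unprivileged="B")
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.67 -> below 0.8, flag for review
```

A single metric like this is only a starting point; part of the motivation for the heuristics in the talk is that different fairness definitions can conflict, so detecting unwanted bias requires testing from multiple angles rather than relying on one threshold.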
