AI-powered solutions can introduce bias into outcomes, skewing results and potentially triggering a cascade of negative effects at massive scale. For example, your company’s AI-powered recruiting tool may be valuable for whittling down the pool of candidates for an open position, but have its algorithms introduced bias into the candidate recommendations? In this session, we’ll explore how technology can be used to examine and prevent, rather than propagate, bias within AI-powered processes. We’ll also touch on how a culture of diversity and inclusion encourages DevOps teams to ask uncomfortable questions about why things are done the way they are, both inside and outside of technology.