CXOInsights by CXOCIETY

PodChats for FutureCIO: Best practices to ensure AI fairness

28 Dec 2021

Description

In 2016, Microsoft pulled the plug on Tay (short for "Thinking About You"), a chatbot designed to mimic the language patterns of a 19-year-old American girl and to learn from interacting with human users on Twitter. According to Microsoft CEO Satya Nadella, Tay "was an important influence on how Microsoft is approaching AI" and taught the company the importance of taking accountability.

It can be argued that as we come to depend on data and on technology to make decisions, we also need to consider the implications such dependence has on the outcomes.

With us today is Brandon Purcell, VP, Principal Analyst with Forrester.

1. One of the attributes of machines is that they are "supposedly" unbiased, executing based on a pre-defined set of "rules". And yet, studies from the World Economic Forum and commentaries in Harvard Business Review suggest that AI is biased. Where does the fault, if any, lie? In the code? In the algorithms?
2. Would you consider these concerns about AI bias to have a significant impact on how AI will be adopted in commercial environments?
3. What should leadership ask of their data science/AI research teams to mitigate the risks that may come from perceived AI bias?
4. In your view, how far away are we from achieving ethical AI?
5. You contributed to the Forrester report, How to Measure AI Fairness. What was the conclusion of the report?
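The description does not summarize the report's methodology, but as a rough, generic illustration of what "measuring AI fairness" can look like in practice, the Python sketch below computes demographic parity (the gap in favorable-outcome rates between groups), one commonly used fairness metric. The function name and data are hypothetical and are not drawn from the Forrester report.

```python
# Illustrative only: a generic demographic-parity check, not the methodology
# from Forrester's "How to Measure AI Fairness" report. All data is made up.
from collections import defaultdict

def demographic_parity(predictions, groups):
    """Return the favorable-outcome rate per group and the largest gap.

    predictions: iterable of 0/1 model outputs (1 = favorable outcome)
    groups:      iterable of group labels aligned with predictions
    """
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    rates = {g: positives[g] / totals[g] for g in totals}
    gap = max(rates.values()) - min(rates.values())
    return rates, gap

# Hypothetical loan-approval predictions for two demographic groups.
preds  = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
rates, gap = demographic_parity(preds, groups)
print(rates)                     # {'A': 0.6, 'B': 0.4}
print(f"parity gap: {gap:.2f}")  # 0.20
```

A gap near zero suggests the model grants favorable outcomes at similar rates across groups. Which metric is appropriate (demographic parity, equalized odds, and so on) depends on the use case, which is exactly the kind of question leadership should put to its data science teams.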
