
Veriff Voices

Ep. 16 Addressing AI bias using reinforcement learning from human feedback

16 Nov 2023

Description

In the second Veriff Voices episode with our Senior Vice President of Product, Suvrat Joshi, we tackle the thorny topic of AI bias. With more than two decades of product leadership experience at major tech companies including Amazon, Dropbox, Meta, Microsoft, and Yahoo, Suvrat is well versed in how algorithmic bias can affect technological applications. In this conversation, we focus on how Veriff uses reinforcement learning from human feedback (RLHF) to address bias in our systems. A brief illustrative sketch of the core RLHF idea follows the timestamps below.

[01:11] The biggest threats and issues resulting from AI bias in general
[02:37] What causes AI bias in the first place
[03:44] Can AI bias ever be beneficial?
[04:34] Is it possible to have an unbiased AI model?
[05:56] Specific issues relating to AI bias in identity verification (IDV)
[07:09] What organizations can do to address AI bias
[08:42] How Veriff identifies and addresses AI bias in its applications
[10:21] How reinforcement learning from human feedback (RLHF) can be used to address bias
[11:22] How RLHF can provide reassurance to customers in relation to the use of AI
[12:18] Some specific use cases for RLHF by sector
[13:08] How RLHF works
[14:41] Can RLHF be used to effectively remove bias?
[15:27] RLHF as a differentiator for Veriff
[17:47] How RLHF enables Veriff to offer a superior product
[18:31] What to expect in terms of developments in generative AI in 2024

You can learn more on our website: https://bit.ly/3FUZFMV
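For listeners curious about the mechanics behind the "[13:08] How RLHF works" segment, the core idea is that human reviewers compare pairs of model outputs and mark which one they prefer (for example, the less biased decision), and a reward model is trained so that the preferred output scores higher. The PyTorch sketch below is purely illustrative and does not describe Veriff's actual systems; the RewardModel, feature size, and random data are hypothetical placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy reward model: scores a feature vector describing a model decision.
# In a real RLHF pipeline the input would be a full model output
# (e.g. a verification decision plus its context), not 8 random features.
class RewardModel(nn.Module):
    def __init__(self, n_features: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 32),
            nn.ReLU(),
            nn.Linear(32, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x).squeeze(-1)  # one scalar score per example

def preference_loss(preferred: torch.Tensor, rejected: torch.Tensor) -> torch.Tensor:
    # Bradley-Terry pairwise loss: push the human-preferred output's score
    # above the rejected one. F.logsigmoid is the numerically stable form
    # of log(sigmoid(x)).
    return -F.logsigmoid(preferred - rejected).mean()

model = RewardModel(n_features=8)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Placeholder batch of 16 human-labeled comparison pairs.
chosen = torch.randn(16, 8)    # outputs reviewers preferred (e.g. less biased)
rejected = torch.randn(16, 8)  # outputs reviewers rejected

loss = preference_loss(model(chosen), model(rejected))
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

In a full RLHF loop, the trained reward model would then guide fine-tuning of the underlying model, typically via a policy-gradient method such as PPO, so that future outputs align with the preferences the human reviewers expressed.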


