Company
Date Published
Author: Aaron Schneider - Associate Solutions Engineer, Couchbase
Word count: 1835
Language: English
Hacker News points: None

Summary

Google's conversational AI LaMDA has appeared in public demos, but Google withheld a general release, citing reputational risk around accuracy and bias. ChatGPT's virality led people to ask why Google had not shipped a similar product, with some fearing Google's answer would be inferior; Google CEO Sundar Pichai pointed to the importance of factuality, bias, and safety in search applications as the reason for holding back. The article explores how AI engines like ChatGPT can produce offensive or biased results because they are trained on large datasets that contain problematic content. It discusses two approaches to mitigating this risk: Shift Right (putting a gate at the end of the process that filters out biased output) and Shift Left (removing bias from the dataset before training). As an example of Shift Left, it describes SalaryAdvise, a company building an AI model that suggests fair salaries based on employee details, which removes biased data points before training to avoid encoding bias. Finally, it shows how Couchbase's Eventing service can automatically remove protected information from an AI dataset, empowering researchers to build unbiased models.
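The Shift Left idea described above can be sketched in a few lines of Python: strip protected attributes from each record before it ever reaches the training pipeline. This is a minimal illustration, not code from the article; the field names (`gender`, `age`, etc.) and the `scrub` helper are assumptions chosen for the example.

```python
# Shift Left sketch: remove protected attributes from records *before* training,
# so the model never sees them. Field names here are illustrative assumptions.
PROTECTED_FIELDS = {"gender", "age", "ethnicity", "marital_status"}

def scrub(record: dict) -> dict:
    """Return a copy of the record with protected fields removed."""
    return {k: v for k, v in record.items() if k not in PROTECTED_FIELDS}

employees = [
    {"name": "A", "role": "engineer", "years_experience": 5, "gender": "F"},
    {"name": "B", "role": "engineer", "years_experience": 5, "age": 42},
]

# Only the scrubbed records are handed to the training step.
training_set = [scrub(e) for e in employees]
```

In the article's Couchbase setup, this kind of scrubbing is performed server-side: an Eventing function reacts to document mutations and strips the protected information automatically, so the dataset stays clean without a manual preprocessing pass.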