
When “Private” Isn't: The Security Risk of GPT Chats Leaking to Search Engines

Blog post from Snyk

Post Details
Company: Snyk
Date Published: -
Author: Sonya Moisset
Word Count: 729
Language: English
Hacker News Points: -
Summary

In July 2025, users discovered that ChatGPT conversations they had shared via links were appearing in search results on Google, Bing, and DuckDuckGo, exposing personal and sensitive content without any data breach. The exposure traced back to a feature that allowed shared chats to be indexed by search engines; OpenAI de-indexed the conversations and removed the feature once privacy concerns surfaced, but cached copies can persist, posing ongoing privacy risks. The incident highlights insecure default settings and inadequate consent design, since users often assume their interactions with language models are private. It underscores the need for platforms to adopt secure-by-design principles, with clearer user warnings, explicit consent mechanisms, and automatic expiration for shared links, and it serves as a cautionary tale for other AI platforms to audit their public link-sharing architectures before similar privacy pitfalls emerge.
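To make the recommended mitigations concrete, here is a minimal sketch of a share-link endpoint that bakes in the post's secure-by-design suggestions: unguessable tokens, automatic expiration, and crawler-blocking headers. It is written in Python with Flask under assumed names (`create_share_link`, `SHARE_TTL_SECONDS`, the `/share/<token>` route are all hypothetical) and is not OpenAI's actual implementation.

```python
# Hypothetical sketch of a privacy-preserving share-link endpoint.
# None of these names come from OpenAI's service; they illustrate the
# mitigations the post recommends: no indexing, automatic expiration.
import html
import secrets
import time

from flask import Flask, abort, make_response

app = Flask(__name__)

SHARE_TTL_SECONDS = 7 * 24 * 3600  # auto-expire shared links after 7 days

# In-memory store for demonstration; a real service would persist this.
_shared_chats: dict[str, dict] = {}


def create_share_link(chat_text: str) -> str:
    """Create an unguessable, time-limited share token for a chat."""
    token = secrets.token_urlsafe(32)  # high-entropy, non-enumerable ID
    _shared_chats[token] = {"text": chat_text, "created": time.time()}
    return f"/share/{token}"


@app.route("/share/<token>")
def view_shared_chat(token: str):
    entry = _shared_chats.get(token)
    if entry is None:
        abort(404)
    # Automatic expiration: dead links bound the window of exposure.
    if time.time() - entry["created"] > SHARE_TTL_SECONDS:
        del _shared_chats[token]
        abort(410)  # Gone: signals crawlers to drop the page
    # Escape chat content so shared text cannot inject markup.
    resp = make_response(f"<pre>{html.escape(entry['text'])}</pre>")
    # Secure-by-design default: tell crawlers never to index, archive,
    # or follow this page, instead of opting shared chats into indexing.
    resp.headers["X-Robots-Tag"] = "noindex, noarchive, nofollow"
    resp.headers["Cache-Control"] = "no-store"
    return resp


if __name__ == "__main__":
    app.run()
```

Two design notes, both in line with the summary's argument: sending `noindex` as an `X-Robots-Tag` response header (rather than only an HTML meta tag) keeps the directive attached to the response itself, and answering expired links with 410 rather than 404 gives search engines a stronger signal to purge any cached copies, which addresses the persistence problem the incident exposed.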