ChatGPT Conversations Found on Google: OpenAI Responds

In a surprising turn of events, users discovered that some shared ChatGPT conversations were appearing in Google search results. The discovery sparked widespread privacy concerns and pushed OpenAI to take swift action: the company has now disabled the feature that made this possible, saying it created too much room for accidental oversharing.

Here’s everything you need to know about the situation, what went wrong, and what OpenAI is doing in response.

What Was the ChatGPT Sharing Feature?

Earlier this year, OpenAI introduced a feature that allowed users to share specific ChatGPT conversations with others through a public link. The intent was to make it easy for users to share interesting, helpful, or funny chats without exposing their private account.

These shared conversations were published under the domain chat.openai.com/share and were publicly accessible. Importantly, this feature was opt-in; users had to manually click “Share” and confirm the action.

However, there was a catch: once shared publicly, these links became visible to search engines like Google and Bing. That’s when the issue began.

How Did ChatGPT Conversations End Up on Google?

In July 2025, tech users and researchers noticed that thousands of ChatGPT chats were showing up in Google search results. A simple search like site:chat.openai.com/share returned a list of indexed conversations—some of which appeared to contain sensitive or personal information.

Even though shared chats removed usernames and direct identifiers, many still included private stories, emotional confessions, or professional data, which users likely didn’t realize could go public.

This visibility was the result of standard web crawling. Because the shared links were not marked as “noindex,” search engines treated them like any other public webpage and indexed them accordingly.
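To illustrate the mechanics (a generic sketch, not OpenAI's actual markup), a page normally opts out of indexing with a `robots` meta tag such as `<meta name="robots" content="noindex">`. A crawler-side check for that directive might look like this:

```python
from html.parser import HTMLParser


class RobotsMetaParser(HTMLParser):
    """Collects the directives from any <meta name="robots"> tags on a page."""

    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and attrs.get("name", "").lower() == "robots":
            self.directives.extend(
                d.strip().lower() for d in attrs.get("content", "").split(",")
            )


def is_indexable(html: str) -> bool:
    """A page is treated as indexable unless a robots meta tag says 'noindex'."""
    parser = RobotsMetaParser()
    parser.feed(html)
    return "noindex" not in parser.directives
```

A page with no `robots` meta tag at all, like the shared-chat pages described above, is indexable by default. Note that in practice crawlers also honor an equivalent `X-Robots-Tag` HTTP response header, which this HTML-only sketch does not check.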

User Reactions and Privacy Concerns

The reaction was swift. Across Reddit, X (formerly Twitter), and Hacker News, users expressed shock and frustration, especially those who believed their shared conversations were only visible to the people they sent the link to.

Security researchers and privacy advocates raised alarms about the risk of data exposure—especially for users who may have shared sensitive medical, legal, or workplace information via ChatGPT.

OpenAI’s Official Response

On July 31, 2025, OpenAI disabled the feature entirely. In a post on X, OpenAI chief information security officer Dane Stuckey explained that the company had removed the option that let shared conversations be indexed by search engines, acknowledging that it created too many opportunities for users to accidentally share information they didn't intend to make public.

OpenAI confirmed that they are working with search engines to remove previously indexed shared links. While users had to opt in to make chats public, the company acknowledged that the experience didn't do enough to highlight the risks of making a conversation searchable online.

Sources: Tom’s Guide, Business Insider, Search Engine Land

What This Means for ChatGPT Users

If you didn’t share any ChatGPT conversation using the “Share” feature, your chats were never made public. This issue only affected those who actively clicked the share button and generated a public link.

However, for users who did share conversations, it’s a wake-up call. Once a link is indexed by search engines, it can remain searchable even after deletion—unless a formal removal request is made.

How to Check and Remove Your ChatGPT Shared Links

If you’ve used the share feature before, here are steps you can take:

  1. Search for your links: Use Google with site:chat.openai.com/share and keywords you may have used in the chat.
  2. Delete the shared chats from your ChatGPT history or sharing dashboard.
  3. Use Google’s Removal Tool: Submit the shared URL via Google’s Remove Outdated Content Tool to get it removed from search results.
  4. Avoid sharing sensitive content using public sharing features—AI-generated or not.
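The search in step 1 can be scripted. As a convenience (a hypothetical helper, not an official tool), here is a small function that builds the Google query URL combining the `site:` operator with your keywords:

```python
from urllib.parse import urlencode


def shared_chat_query_url(keywords: str) -> str:
    """Build a Google search URL scoped to ChatGPT shared links.

    Combines the site: operator with the user's keywords, as in step 1 above.
    """
    query = f"site:chat.openai.com/share {keywords}"
    return "https://www.google.com/search?" + urlencode({"q": query})
```

For example, `shared_chat_query_url("tax advice")` produces a URL you can open in a browser to see whether any of your shared chats mentioning those terms were indexed.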

The Bigger Picture: Privacy and Generative AI

This incident highlights a broader concern around transparency and user awareness in AI tools. While the feature was technically opt-in, OpenAI admits that not enough was done to warn users about the searchability of shared conversations.

As generative AI becomes more common in workplaces, education, and personal life, privacy-by-default needs to be a standard—not an option.

Conclusion

OpenAI acted quickly to shut down a feature that unintentionally exposed user conversations to search engines. While the exposure was limited to chats users manually shared, the incident has reignited discussions about AI privacy, user education, and the responsibilities of AI companies.

For now, the best advice is simple: if you’re going to share something generated by AI, think twice, because the internet never forgets.
