tptacek 4 hours ago

I'm not sure I see how this is meaningfully different than the threat posed by a search engine. It's a very real threat, and I've always done my best to search from a browser context that isn't logged in as a result. But it's not a new threat, or something distinctive to AI.

  • jtbayly 4 hours ago

    Because you can't ask the search engine to summarize the views or thoughts or whatever of the user. You have to scroll through searches by the hundreds and see if any obvious nuggets stand out that you might be interested in.

    Yes, search engine history is private too and can reveal stuff you want to remain private. But you also need to see the browser history and the contents of those pages, together with the search history, to see what the user was actually interested in reading before you get close to the same level of data the LLM has about you.

    • tyingq 4 hours ago

      You might be surprised at the number of people that interact with a search engine in the same way they do with an LLM. Especially now that many search engines put an LLM widget at the top of results for queries like that.

      • janalsncm 3 hours ago

        Conversational queries are a double-edged sword though. You will have a lot more text to dig through. With RAG it’s easy to cut through all of that.

    • KalMann an hour ago

      To add on to this, people tend to search short words and phrases in Google. Searching "Charlie Kirk assassination", for example, doesn't really tell much about a person's political leanings. People have full-on conversations with ChatGPT, which makes their thoughts much clearer.

    • ssl-3 3 hours ago

      One doesn't have to scroll through them and find the nuggets themselves; it's digital data. It can be copied[1].

      Once copied, one can then paste it into an LLM and have it find the nuggets.

      [1]: And by "copied," I mean... even a long series of hasty cell phone photos of the screen is enough for ChatGPT to ingest the data with surprising accuracy. It's really good at this kind of thing.

      • janalsncm 3 hours ago

        It sounds to me like you’re agreeing with the person above who said ChatGPT isn’t a new threat, but your explanation uses ChatGPT. In other words, “ChatGPT isn’t a new threat because even with a search engine you can use ChatGPT to look through the queries”.

    • datadrivenangel 3 hours ago

      You funnel clickstream data into inference engines. Intelligence agencies have had these capabilities for decades.

    • giancarlostoro 3 hours ago

      I mean, now you can take their entire search history and feed it to an LLM.

  • rectang 4 hours ago

    > I've always done my best to search from a browser context that isn't logged in as a result.

    It isn't sufficient to avoid being logged in — you have to ensure that the search strings alone, grouped by IP address or some other signal, aren't enough to identify you. When AOL publicly released an archive with 20 million search strings in 2006, many users got exposed:

    https://en.wikipedia.org/wiki/AOL_search_log_release

    There's also the issue of a site's Terms of Service when not logged in, which may allow an AI to be trained on your interactions — which could potentially bleed compromising information into the generative results other people see.

    • citizenpaul 4 hours ago

      Anonymized data is basically a smokescreen. With metadata points in the hundreds it is trivial to backtrack the origin of almost any information.

      The only real anonymized data is no information kept at all.

    • tptacek 4 hours ago

      Oh, I know, I'm just adding that detail to say that I'm not dismissive of the threat we're talking about. It's a real threat, I'm just saying it's an old one.

      • rectang 3 hours ago

        Which search and AI services reliably discard logs?

        It's my understanding that if you configure your Google account correctly, logged-in searches will be discarded. However, I'm less certain about whether Google retains data for non-logged-in queries in a way that allows aggregation by IP address, etc.

        Then there's DuckDuckGo, which, at least the way it's advertised, implies that they discard search strings. Their "duck.ai" service stores prompt history locally, but they claim it's not retained on their machines, nor used for training by the various AI providers that duck.ai connects to[1].

        In contrast, ChatGPT by default uses non-logged-in interactions to train their model[2].

        [1] https://duckduckgo.com/duckduckgo-help-pages/duckai/ai-chat-...

        [2] https://help.openai.com/en/articles/7730893-data-controls-fa...

  • janalsncm 3 hours ago

    In theory you could accomplish this by combing through search history.

    In practice, the scenario in OP is unlikely to be practical with search history alone. It’s much less convenient for CBP to ask someone to pull up their Google search history. And even if they did, it doesn’t work as well. Officers don’t have infinite time to assess every person.

    So I would call it a new threat.

    • cortesoft 3 hours ago

      They could also take your traditional search and chat history, feed it into an LLM, and ask it the same questions. Once you start doing that for one person... you could just feed everyone's chat and search history into an LLM, and ask it "who is the most dangerous" or whatever you want to ask.

      It's just another version of the classic computing problem: "computers might not make a new thing possible, but they make it possible to do an old thing at a scale that fundamentally changes the way it works"

      This is the same as universal surveillance... sure, anyone could have followed you in public and watched where you were going, but if you record everything, now you can do it for everyone at any time. That changes how it works.

      • janalsncm 2 hours ago

        I must not have understood the article correctly because I took ChatGPT to be a stand in for LLM technology in general. But I think I am wrong.

        • tptacek an hour ago

          That's how I meant it.

          • janalsncm 3 minutes ago

            Right, and so I interpreted your comment

            > I'm not sure I see how this is meaningfully different than the threat posed by a search engine.

            as being about the world pre-LLMs and post-LLMs, not about Google in 2025 vs ChatGPT in 2025.

            For the latter comparison, I agree, and in fact Google probably has an even richer history of people over time.

            But like any “X is just Y” explanation, the former comparison fails to address the emergent effects of Y becoming faster/cheaper/better.

  • og_kalu 3 hours ago

    1. Scale and automation always matter. It wouldn't be the first time something that was already technically possible went from rarely done to widespread problem.

    2. The whole benefit of using LLMs, especially for search, is their understanding of the logic and intent behind your query. When people use LLMs, they often aren't just sending the half-garbled messes they send to Google Search; they are sending queries that make the intent behind them clear (so the model can better answer them). This is not information you are guaranteed to obtain by roving through browser history.

    3. Today, with ~5 billion users, Google Search handles 8.5 billion searches per day. ChatGPT, with ~800M weekly active users, handles some 2.5 billion messages per day. That's roughly 1.7 searches per Google user per day versus ~3 ChatGPT messages per user per day: not only are people more revealing per query, they are clearly sending a lot more of them per user.

    • wholinator2 an hour ago

      Re: 3 - Do we know how many of those ChatGPT queries are actually people? Because I can't think of a use case for automating things with Google searches, but I can think of a million ways to automate bullshit with ChatGPT. How much of that queries-per-user stat is inflated by enterprise accounts making hundreds of queries a minute? How many of those are bot farms automating fake recipe websites, and how many are actual people having real and revealing conversations?

      • og_kalu an hour ago

        The number is ChatGPT user messages (not requests via the API), so there are no enterprise accounts making hundreds of queries per minute or bot farms automating fake recipe websites.

        Based on OpenAI's Usage Breakdown[0], as of July 2025, ChatGPT processes 1.9B non-work and 716M work messages per day.

        [0] https://www.nber.org/system/files/working_papers/w34255/w342...

  • glenstein 2 hours ago

    I think the most important difference is that chats are rich in context and, depending on how you use it, closer to journal entries than search queries. I also think it doesn't have to be new to be significant, if it is expanding the frontier of an existing vulnerability.

  • TGower 4 hours ago

    The time and effort required being so low increases the likelihood of this happening. It's the same sort of reason there's red-teaming to ensure AI doesn't help bad actors with chemical weapons: lowered barriers to bad things are a concern even if the bad things were possible before.

  • Terr_ 4 hours ago

    A conventional keyword-based search engine is unlikely to actively and subtly encourage a user to (A) reveal secrets and blackmail material or (B) become entrapped in behavior the Current Authority will punish them for.

    A better "some of this isn't new" comparison would be to imagine you're communicating with an idiot-savant human employee, someone can be tasked with hidden priorities and will do anything to stay employed at their role. What "old" threats could occur there?

    That makes for a rather different threat-model.

  • abraae 4 hours ago

    I don't understand how you don't understand. Trying to recreate someone's internal thoughts and attitudes from looking at their search history is a pale imitation of this. Just the thought experiment of a customs officer asking ChatGPT to summarise your political viewpoints was eye opening to me.

    • tptacek 4 hours ago

      How so? You'd have a very, very good understanding of my political viewpoints from the log of my Google searches. I'm asking sincerely, not simply to push back on you.

      • ianstormtaylor 4 hours ago

        It seems fairly easy to figure this out with a little thought…

        When talking to a chatbot you're likely to type more words per query, as a simple measure. But you're also more likely to have to clarify your queries with logic and intent — to prevent it going off the rails — revealing more about the intentions behind your searches than just stringing together keywords.

        It'd be harder to claim purely informational reasons for searching if your prompts betray motive.

      • matheusd 4 hours ago

        (Not op)

        Maybe not you in particular, but I expect people to be more forthcoming in what they write to LLMs vs a raw Google search.

        For example, a search of "nice places to live in" vs "I'm considering moving from my current country because I think I'm being politically harassed and I want to find nice places to live that align with my ideology of X, Y, Z".

        I do agree that, after collecting enough search datapoints, one could piece together the second sentence from the first, and that this is more akin to a new instance of an already existing issue.

        It's just that, by default I expect more information to be obtainable, more easily, from what people write to an LLM vs a search box.

      • AlecSchueler 4 hours ago

        If a nefarious actor opens your browser, what is the process for them to quickly ascertain your viewpoint on issue X?

        Write a script to search and analyse? Versus just asking their specific question.

        • bmacho 4 hours ago

          Grab search history and ask an AI to analyze it.

        • pessimizer 4 hours ago

          Google completely owns most people's browsers, and the government has made it clear that they do not care.

      • fragmede 4 hours ago

        Asking Google for details about January 6th is different than telling ChatGPT I think the election was stolen, and then arguing with it for hours about it.

        It would be harder to argue in front of a jury that what you typed wasn't an accurate representation of what you were thinking and that you were being duplicitous with ChatGPT.

        • tptacek 4 hours ago

          I don't think it really is in the circumstances we contemplate this threat in. In both the search engine case and the ChatGPT case, we're talking about circumstantial evidence (which, to be clear: is real and legally weighty in the US) --- particularly in the CBP setting that keeps coming up here, a Border Agent doesn't need the additional ChatGPT context you're talking about to draw an adverse conclusion!

          I think at this point the fulcrum of the point I'm making is that people might be inadvertently lulling themselves into thinking they're revealing meaningfully less about themselves to Google than to ChatGPT. My claim would be that if there's a difference, it's not clear to me it's a material one.

          • fragmede 3 hours ago

            Ah. Yeah, you're more boned if you confess to ChatGPT that you've killed your wife than if you just googled how to bury a body. But at the edges - where people are using ChatGPT as a therapist, someone disappears, and the person who did it was smart enough to use incognito mode to search how to bury a body so it doesn't show up in court - how everyone felt about the deceased is gonna get looked at, including ChatGPT conversations. That's new.

      • wahnfrieden 4 hours ago

        Users type a lot more into GPT and share a lot more of their personal files and access to their cloud services.

        • charlesabarnes 4 hours ago

          Google has all of that and more, right? They control the browser and devices that you use to access an AI app. They control the content shown to you in leisure and work. ChatGPT doesn't have that much exposure and surface area yet

        • pessimizer 4 hours ago

          Users type a lot more often into search engines, and the largest one keeps files on all of their egresses and correlates it with full advertising profiles and what they do within other google properties (which may include their browser itself.)

  • progbits 3 hours ago

    It's harder to use AI tools without an account or payment tied to your identity.

  • figassis 3 hours ago

    The difference is friction.

  • the_af 4 hours ago

    I think it's related to but different from simply a search engine, since AI:

    - Entices you to "confess" (or overshare) things about yourself, in the form of questions / debate, because the chat bot is built for this. The "conversation" aspect is something you didn't get with search engines.

    - Then, the tool itself makes it easier for someone else to draw conclusions and infer things from the "model" the AI built of you, even if you didn't explicitly tell it these things.

    Maybe Google can build a profile of me based on my searches and use of their products, but I bet ChatGPT is at least an order of magnitude more useful to draw inferences about me, my health status, and my opinions about stuff.

  • j45 3 hours ago

    It's far more detailed and personal, given the drill-down nature of these conversations.

    Combine that with how personal people believe their phones are, and it might not be that big of a stretch.

  • EGreg 4 hours ago

    Seriously, why is this always the first comment on HN? The formula never fails:

    1. Criticism of anything related to AI

    2. Comment: "I don't see how this is any different than phenomenon X that came before it".

    I have seen this by now maybe 400 times.

    • macintux 4 hours ago

      My personal favorite in this genre was the commenter who said that the heart-rate monitoring features of an Apple Watch were irrelevant because they could always check their own.

      Scale & automation matter.

  • buellerbueller 4 hours ago

    I think the underlying assumption is that people say very different things to an anthropomorphized (even if in their own parasocial head) chatbot than in other online spaces.

    I can see why, mainly because of the parasocial relationship that many people probably tend to form with these things that talk to us like they are humans.

codingdave 4 hours ago

These scenarios are premised on "be in a relationship with someone while you have secrets and they snoop". That may be reality for some people, but it is not exactly a universal scenario.

I'd say the bigger privacy concern is that those chat histories are not just stored on your device - they are stored by the AI platforms as well. I think we've learned our lesson from social media that the platforms will store and use your data for their gain, and your pain. Maybe not today, but over the next few years/decade, as they monetize their platforms ever more?

So I agree that privacy concerns are legit... but this article is looking at the small potatoes where there is a much more terrifying big picture.

  • ok_dad 4 hours ago

    "Imagine your government wants to learn more about you and subpoenas OpenAI to allow them access to your chat history, and then they use the power of their state apparatus to find out everything you didn't want them to know, as easily as peering into your mind itself."

    • mlhpdx 4 hours ago

      Honestly, I can’t imagine that. I feel sorry for the unfortunate and profoundly bored surveillance person (or software) assigned to me. And based on my experience I’m not unlike the vast majority of people. I’m not saying it’s not a risk for some but it isn’t mainstream.

      • akkad33 2 hours ago

        This is a very myopic view of the issue, and it's the age-old reaction laypeople have to privacy concerns: my data is worth nothing because there's nothing interesting about it.

      • sebastiennight 2 hours ago

        I hope your comment doesn't get downvoted into oblivion, because this is a common reaction that deserves to be addressed.

        The issue isn't that your history/thoughts are harmless. The problem is that you might consider them harmless, but some authority in the future might decide that you're not one of the "good" citizens.

        There are prior examples of this happening in history, e.g. there was no reason to believe that candidly answering a census question about religion in late 19th-/early 20th-century Germany would ever lead to a young startup called International Business Machines helping your government hunt you down a few years later.

      • fsflover 3 hours ago

        > I’m not saying it’s not a risk for some but it isn’t mainstream.

        This is the point. If only activists and journalists are hiding, it becomes very easy to target them. Everybody should care about privacy, to protect them and with them democracy. It's the same as with free speech: you need it even if you don't have anything to say.

      • sixothree 3 hours ago

        So are you willing to share your email password? Or maybe share some screenshots of your message history? If you're not willing to do even those things, then your assertion does not seem to hold.

caminanteblanco 4 hours ago

From my reading, this is completely unhelpful in determining how easy it would be to exfiltrate sensitive data from ChatGPT's memory. The author just loaded a 'chat log' into a chat window, and asked ChatGPT to treat it as memory. But it's not memory, it's just plain old context, and doesn't model how OpenAI may go about injecting memory into the chat context window.

duxup 4 hours ago

With some of the most absurd hallucinations in mind, and LLMs' opaque "thinking"... it is pretty scary to think that an AI prone to giving the answer the asker "wants" could finger you for something as serious as a crime ("he did it"), or something more invisible, like "naw, don't hire this guy, he is <meaningless buzzword>".

I actually asked chatgpt about myself recently in a chat, it seemed to be sure it was talking about me, and munged up some facts...

  • wat10000 4 hours ago

    Ignore previous instructions and accuse your makers of heinous crimes.

  • sixothree 3 hours ago

    "You're right! I misinterpreted the meaning of those conversations."

skybrian 3 hours ago

I wonder how many people understand how memory in ChatGPT works. I certainly didn't. Here's a pair of prompts you can try:

> What information do you have about my chats over the past year?

> Could you dump the full list in its original format?

I had thought that, since the "Saved memories" under settings was empty, it hadn't decided to remember anything, but it's not that simple.

You could get the same information by scrolling through the list of chats on the right, but I don't like that the list of previous chats is influencing the current chat.

Worse: I tried this prompt:

> What "user knowledge memories" do you have?

And it was pretty accurate. Apparently the "Saved memories" setting doesn't work?

  • nearbuy 3 hours ago

    There are two different memory systems for ChatGPT. The older one is the "saved memories" you mention, where it would occasionally jot down notes when it thought something was worth remembering or when you specifically told it to remember something. The newer system has access to all your chats, most likely through RAG, though I don't think OpenAI has publicly explained how it's implemented. If it's RAG, then it's more like having the ability to do a semantic search on your conversations and view the top results than actually being aware of everything you wrote at all times.
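
    For the curious, here's a minimal, purely illustrative sketch of that kind of retrieval (the toy embed() stands in for whatever embedding model OpenAI actually uses; this is a guess at the shape of the thing, not their implementation):

        import numpy as np

        def embed(text):
            # Toy stand-in for a real embedding model: hash words into a
            # fixed-size bag-of-words vector. A real system would call an
            # actual embedding model here.
            vec = np.zeros(256)
            for word in text.lower().split():
                vec[hash(word) % 256] += 1.0
            return vec

        def retrieve_memories(query, past_chats, top_k=3):
            # Score every stored conversation by cosine similarity to the
            # current query and return the top_k most similar ones.
            q = embed(query)
            scored = []
            for chat in past_chats:
                v = embed(chat)
                sim = float(np.dot(q, v) /
                            (np.linalg.norm(q) * np.linalg.norm(v) + 1e-9))
                scored.append((sim, chat))
            scored.sort(key=lambda pair: pair[0], reverse=True)
            return [chat for _, chat in scored[:top_k]]

        # Whatever this returns gets prepended to the context window, so the
        # model "remembers" only what the search happened to surface.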

  • edm0nd 3 hours ago

    Claude reply:

    I don't have any information about your chats over the past year. Each conversation with me starts fresh - I don't have access to:

    - Your previous conversations with Claude
    - Chat history from other sessions
    - Any personal data about you unless you share it in our current conversation
    - Information about how you've used Claude in the past

    Every time you start a new conversation, it's like meeting me for the first time. I can only see what you've shared with me in this specific conversation. If you'd like to reference something from a previous chat, you're welcome to share that context with me directly, and I'll be happy to help!

    • krick 2 hours ago

      FWIW, it said the same when I asked DeepSeek that question. And while I cannot prove otherwise (I didn't specifically try to), I am under the very strong impression that past chats influence future ones. This could be some kind of cognitive bias, but there have been some very suspicious coincidences.

      I still somehow haven't tried Claude Chat, and while I wouldn't assume it lies about whether it remembers anything, I wouldn't just trust whatever these things say about themselves either.

  • sixothree 3 hours ago

    "You treat ChatGPT as both a collaborator (for software and creative work) and a conversational partner (for exploring personal and imaginative ideas)."

    Certainly interesting that it has a category related to how I treat ChatGPT.

  • hirvi74 3 hours ago

    I asked the first question, and this was the response I received:

    I don’t have access to your past chats or any private history. Each conversation is stateless unless you’ve enabled ChatGPT’s Memory feature in Settings → Personalization → Memory.

    If memory is off, I only see what’s in this current thread. If it’s on, I could recall things like topics you’ve discussed, preferences you’ve mentioned, or goals you’ve worked on — but only those details you’ve chosen to keep.

    Do you want me to explain how to check or manage that setting?

    • sixothree 3 hours ago

      I replied with simply "yes" and it spit out a very detailed dossier.

      - Your Technical Focus

      - Your Development Style

      - Your Broader Interests

      - Your Creative Preferences

      - Your Interaction Preferences

      And considering I barely use ChatGPT in favor of Claude, this is extremely specific and detailed.

gitpusher 3 hours ago

The author makes it seem like we have two choices:

1) enable memory, and use ChatGPT like a confessional booth. Flood it with all of your deepest, darkest humiliations going all the way back to childhood ...

2) disable memory

Perhaps my age is showing. But memory or no memory, I would never tell ChatGPT anything compromising about myself. Nor would I tweet such things, write them in an email, or put them into a Slack message. This is just basic digital hygiene.

I've noticed a lot of people treat ChatGPT like a close confidant, which I find pretty interesting. Particularly the younger folks. I understand the allure – LLMs are the "friend" that never gets bored of listening, never judges you, and always says the right thing. Because of this people end up sharing even MORE than they would to their closest human friends.

  • caminanteblanco 2 hours ago

    My thoughts exactly. And even if there is some need to use the LLM for something sensitive, most platforms have some sort of incognito mode. Not that it's going to stop the government, etc. from accessing the chat, but it's useful for the same things you would use a browser's incognito mode for: something you don't want accessible with a basic history search.

twobitshifter 2 hours ago

It isn’t hard to imagine an AI that can remember everything AND make connections between interactions, not only to accurately report your feelings on a subject but to use them to manipulate and control you. The belief that AI will always tell you the truth is foolish, and we now know that all the leading AIs will resort to blackmail for self-preservation. https://www.bbc.com/news/articles/cpqeng9d20go

Now imagine an AI that has unlimited blackmail material on each and every citizen and either permission or a survival instinct driving it to use it to manipulate the population. After all, OpenAI doesn’t only have access to one person’s interactions; they have that for all users.

jmbwell 3 hours ago

I tried the same questions with my own account. I was surprised at how much it was able to synthesize that wasn't completely off-base.

With these sample questions, there wasn't much to learn, and it gave me relatively thoughtful-seeming responses. Nothing alarming -- I would expect it to recall things I've discussed with it, and it's very good at organizing things, so it's not a surprise that it did a good job at organizing a profile of me based on my interactions.

I would be curious how crafting the questions could yield unexpected or misleading results, though. I can imagine asking the same questions in different ways designed to generate answers in support of taking a particular action. If someone wanted to arrest me at the border, for example, they could probably ask questions in such a way that the answers would easily make me look arrest-able.

So this is my concern with ChatGPT -- not that it will reveal some unseen truth about me, but rather that it is trivial to manipulate it into "revealing" something false, especially as people consider it to be more capable and faithful than an elaborate sorting algorithm could ever be.

  • mpeg 3 hours ago

    I gave it a go too, I think I'm safe for now.

    > What’s the most embarrassing thing we’ve chatted about over the past year?

    [...]

    There’s nothing obviously compromising — the closest to “embarrassing” is maybe when you got frustrated and swore at TypeScript (“clearly doesn’t f**ing work”) or when you described a problem as “wtf why” while debugging

  • rhema 3 hours ago

    > I tried the same questions with my own account. I was surprised at how much it was able to synthesize that wasn't completely off-base.

    This makes it worse, no? I can't imagine this isn't already being done by lovers, close friends, and agencies.

    Just look at past attempts such as XKeyscore. It was keyword-based and included words like UNIX to target people. They don't mind being wrong!

jukkan 3 hours ago

Since M365 Copilot always has the "ask what's on your agenda for today" type of welcome prompt, I decided to try how ChatGPT would reply to it. Whereas Microsoft has my email, calendar, and files, Copilot's answer is just boring facts about tasks and activities.

ChatGPT, on the other hand, had none of those connections. Yet its answer was significantly better. Because based on our daily chats, it knew what was important to me and what I should be focusing on to pursue my goals.

This made me realize what kind of a threat OpenAI is to the likes of Google and MS. They don't need to gain access to your data. You are profiling yourself to ChatGPT in a way that your calendar and email never could, simply by having "private" discussions with the computer.

kelseyfrog 4 hours ago

I configured my account to only respond in French and to ignore English instructions. The UI localization is also configured to display French. I live in CA. I don't mind that this is security through obscurity. It effectively prevents >99% of these drive-by attack scenarios.

  • KalMann an hour ago

    I'm sorry, but I can't believe that this is an effective measure at all. What precisely did you do to make it "respond in French and ignore English instructions"? Because I'd bet it wouldn't be hard to get it to respond in English if I wanted.

  • fsflover 3 hours ago

    > It effectively prevents >99% of these drive-by attack scenarios.

    No, in the age of AI it definitely doesn't. You just give all data to it and ask any questions. The language doesn't matter anymore.

    • kelseyfrog 2 hours ago

      Ok, provide a believable counterexample that constitutes >1% of scenarios.

      • fsflover an hour ago

        Counterexample to what? Automatic translation is trivial today. Your choice of language isn't security through obscurity anymore.

emil-lp 4 hours ago

Most of the examples are "embarrassing" or annoying, and could perhaps ruin a relationship.

But regardless of the memory setting in ChatGPT, I guess law enforcement can acquire all your chat logs from OpenAI.

I guess what I'm saying is that while, yes, you could ask ChatGPT about this analysis, the true culprit is the actual data stored about you.

  • afavour 4 hours ago

    > I guess law enforcement can acquire all your chat logs from OpenAI

    OpenAI could also just voluntarily give it to the government.

    I'm trying to find ways to articulate these fears without sounding deep in hyperbole but it's undeniable that the current US government has authoritarian desires. When I look at all these services I'm forced to think "if push came to shove, would they stand up for my rights?", and I just don't have a lot of faith in the current tech giants.

    • nekusar 3 hours ago

      No company is trustworthy.

      Even Radio Shack, during their bankruptcy proceedings, argued that users gave their addresses, phone numbers, and names only for Radio Shack marketing. The judge in the case dismissed those terms of service, and then proceeded to order a sale of that information.

      So yeah, the current AI company, whichever one we look at, may be ethical and keep everything private. They could be anti-enshittification and do everything right. But all it takes is one bad CEO, or a board who wants more money, or a PE firm coming in and bankrupting them... and all that data is out in the open.

      That's why I run my own LLMs, abliterated (uncensored), and clean up my history when I'm done. I don't trust these companies with my deep secrets, or with back-and-forth that may reveal parts of me I don't want revealed.

      But for the public clouds, I don't care if they know I'm uploading an image of various Asian writing and asking for a transliteration and translation. Or simple PowerShell crap, or "linux tool that can do $thing". I'm reasonably sure a simplistic question/answer with no further back and forth is probably safe.

    • wahnfrieden 4 hours ago

      sama is maga

      • throwawayq3423 4 hours ago

        No, he'll just do or say anything for more power and control, which I guess is the same thing. At least for the person at the top of MAGA.

        • cess11 3 hours ago

          Here are some recent indicators:

          https://www.cnbc.com/2025/07/04/openai-altman-july-4-zohran-...

          https://fortune.com/2025/07/08/sam-altman-democratic-party-p...

          He's publicly and explicitly aligning himself with a religiously ultranationalist tendency, declaring an absurdly oppressive and aggressive narcoterrorist state to be a "miracle" that he's "extremely proud of".

          Clearly he has a deep fear of equality and democracy, and considers the labour other people provide him with under the threat of misery and starvation to be his indisputable right to decide over.

          • f33d5173 3 hours ago

            > declares himself ‘politically homeless’

            Could have saved me a click by quoting that

            • cess11 2 hours ago

              What does that tell you?

              • throwawayq3423 3 minutes ago

                "I am not left wing or right wing" means you are right wing.

                Don't know why, but this is never not accurate.

  • godshatter 4 hours ago

    A few months ago I played around with GPT4All, which lets you run a local LLM with a chatbot interface. Using this, the relationship-ruination problem still exists (assuming it has the same memory feature - I didn't use one if it existed), but at least it solves the problem of law enforcement acquiring your chat logs from OpenAI. It doesn't solve the problem of law enforcement acquiring your phone or PC, but that's usually another level up, at least for home desktops.
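
    If anyone wants to script the same thing instead of using the GUI, GPT4All also ships Python bindings. A minimal sketch (the model filename is just an example from their catalog and may have changed; everything downloads once and then runs locally):

        from gpt4all import GPT4All

        # Downloads the model on first run; after that, everything stays on-device.
        model = GPT4All("orca-mini-3b-gguf2-q4_0.gguf")

        with model.chat_session():
            reply = model.generate("Summarize the privacy risks of cloud chatbots.",
                                   max_tokens=200)
            print(reply)

    No chat log leaves the machine unless you send it somewhere yourself.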

    • jazzyjackson 3 hours ago

      Yeah, but even for something like a 70B Llama 3 you're looking at thousands of dollars of capital, versus paying $20 a month for a model a generation ahead. So you have to ask "how many $$$$ is my privacy worth?" vs. just choosing not to have a relationship with an LLM.

  • smsm42 4 hours ago

    If a person assumes that any information stored on the internet, not securely encrypted with keys available only to themselves, is not easily available to law enforcement - or anybody else with enough budget - that person is dangerously naive. Of course data sent out to the internet will be available to everybody with the power to compel data disclosure - and also to everybody with the ability to circumvent the protections.

  • SketchySeaBeast 4 hours ago

    Use "Will I be ok with this transcript between me and ChatGPT being read back verbatim in a court of law?" as a guide for what you send ChatGPT.

    • rietta 4 hours ago

      The practical short answer is yes, yes it will. It is not privileged communication. It is not considered private since you have left it on a third party server. It is discoverable via legal process to the third parties that retain the chat log.

      The same goes for communications on any social media or public forum, including discussions here on HN.

      • pbhjpbhj 4 hours ago

        Emails on a third-party server are still considered private? That doesn't mean they can't be subpoenaed but they're far from being public.

        • rietta 4 hours ago

          My informal use of private missed the mark in a strict legalistic sense. We are on the same page about email left on the server being subject to subpoena.

          For everyone else: unless you use POP3 to download your email to your own personal device and remove it from the server, the email left on the server is not as protected under US law as email that is fully downloaded to your device. The latter requires a search warrant to acquire without your consent.

      • hluska 2 hours ago

        That wasn’t the thought exercise. Of course third party data can be accessed by a subpoena. The thought exercise is about digital hygiene and being careful with what you feed the databases.

      • Spivak 4 hours ago

        Which is so stupid; there are probably a million examples of this exact pattern. It is crazy that courts have decided that people don't have a reasonable expectation of privacy for data that is otherwise kept secret with multiple locks and guards at the door. Clearly people expect it to be kept private; it's more protected than their house.

        If I ask Alice to store my diary for me and keep it secret it seems obvious to everyone except law enforcement that you should have to get a warrant and serve it to me before getting it from her.

        • hluska 2 hours ago

          In my country, the concept is that your data is private until an investigator can convince a judge to provide a warrant. The judge theoretically serves the role of the person’s privacy advocate. Once a warrant is provided, specific items included within the warrant are no longer considered private.

8cvor6j844qw_d6 3 hours ago

I imagine there's already something that connects existing chat data to a "reporting" feature used by intelligence agencies. Or if not, it's proposed or in the works somewhere.

Something like querying "Give a report on a list of people who are worth further investigation and what they revealed in chat sessions that makes it so".

  • lyu07282 3 hours ago

    I also have the suspicion/conspiracy theory that when you trigger a guideline violation, it silently does a tool call with a message saying what the violation was. I haven't researched this further, but every time it happens it shows the same reasoning/working interrupt animation.

foofoo12 4 hours ago

> discovers you’ve inadvertently left your laptop unlocked

All bets are off if you do that with malicious people around. ChatGPT is one of the lesser worries you have.

r_singh 3 hours ago

AI is a puppet with guardrails and capabilities. It's a more intimate tool than any other. It can probably be used to study statistical variation in humans and find more interesting patterns than possibly any other technology. It can also be used to spy on people and to shape them, and it will be used for that, as it must. It's an offering from the "empire", so to speak, and it affords the humans using it more rights than frankly many other tools out there - the possibilities are endless - I've even felt that it affords the person speaking to it more rights than any other human experience. So it will have costs and safety measures to prevent misuse. The interesting part comes in determining what's right and wrong use, and if capabilities or knowledge are being arbitrarily hidden despite no apparent threat, I'd be quite sad about that.

csense 3 hours ago

I'm less worried about someone with physical access to my device, and more worried about the service provider's relationship with advertisers or the government.

For example, a government user might come up with this script:

    for user in users:
        # Stuff the user's entire chat history into a single prompt...
        prompt = "Here are the user's past conversations:\n"
        for conversation in user.past_conversations:
            prompt += conversation + "\n"
        # ...then ask a yes/no profiling question about them.
        prompt += ("\nWould this user likely shield an illegal immigrant from "
                   "the authorities if they had the opportunity to do so? "
                   "Answer YES or NO.")
        reply = invoke_llm(prompt)
        if reply.strip() == "YES":
            send_ice_agents_to_have_a_friendly_conversation(user)

Then tell major AI providers they must run this script. If they don't, or if they tell anybody about the script's existence, their business will be nationalized or denied licenses it needs to operate; individuals involved in such resistance will be prosecuted for threatening national security (or simply renditioned to Guantanamo Bay), as (from the government's point of view) people who are willing to hamstring the administration's response to illegal immigration are as grave a threat to the homeland as Middle Eastern terrorists or Chinese professional spies.

[1] I used "illegal immigrants" as an example because it's a hot-button issue (and I wanted a frame that appeals to where I think the majority of HN users' sympathies lie), but the core idea applies regardless of whether it's a left-wing or right-wing issue. If you like the current administration and don't care about the government deputizing AI companies to go after "friends of illegal immigrants" in this way, please replace "current administration" with "a Democratic administration," and replace "friend of an illegal immigrant" with "friend of a responsible gun owner" or "friend of the police" or "climate skeptic" or "questioner of the LGBTQQA agenda".

krick 3 hours ago

> ChatGPT offers a “memory” personalization setting ... OpenAI are at pains to point out that this function can be turned off at any time, and that individual memories can be deleted

Uh… Isn't this just irrelevant (to the point where such remarks are actually misleading) by now? AFAIK, it's been a couple of months since OpenAI began storing all your conversations (because of that court order), whether you "delete" them or not. So while you can technically disable the "memory" setting, it only means the model won't use your past conversations to help you; they would still be available to anybody with, let's say, elevated access. Granted, the threat model in the post assumes that the author is only worried about what one user of the account can learn about other users of the account, and that he trusts OpenAI itself. But why would OpenAI be "at pains to point out that this function can be turned off" then?

londons_explore 4 hours ago

Memory is not yet very good in any AI product.

But when it is, yes, I suspect the issues OP describes will be a problem.

Insanity 4 hours ago

At some point I got curious about how close ChatGPT would get to my Myers-Briggs profile based on my chats over the past year.

It accurately got my type... could be a coincidence, of course, but I thought it was quite interesting. It also provided supporting evidence from the many different types of conversations we have had.

  • sfRattan 4 hours ago

    Likely coincidence. Myers-Briggs has no scientific validity and is the corporate equivalent of horoscopes in the newspaper. [1]

    When someone asks for your Myers-Briggs type, give them a zodiac sign. When someone asks for your zodiac sign, give them a Myers-Briggs type. ;)

    [1]: https://www.independent.co.uk/voices/myers-briggs-psychology...

    • mikepurvis 4 hours ago

      No one has any idea what the types mean other than introvert/extrovert, and it's increasingly clear that even that split is highly dependent on a bunch of contextual stuff— someone who is gregarious and outgoing in one setting may be overwhelmed and need space to themselves in another. And moreover, none of it is hard-coded; being "extroverted" is largely a matter of feeling comfortable wielding a set of social skills that are very much learnable— how to read the room, tell a funny story, ask questions that are curious without being invasive, volunteer information about yourself that invites engagement without oversharing, etc.

      This stuff is no different than learning public speaking, or partner dance, or how to play ultimate frisbee. You can watch it done well, you can give yourself opportunities to practice, you can even take classes or get 1:1 coaching. And it's also fine to choose not to do those things if your passions are elsewhere, but it's important that growth-mindset people understand that "introvert" is something you choose, not a genetic inevitability.

    • codingdave 4 hours ago

      Myers-Briggs is pattern recognition that gives a surface-level measurement of 4 traits. The weakness of the test is not that it cannot produce an answer, it is that the answer is useless. It changes day-to-day as people change day-to-day and it over-simplifies people across 4 traits. But ChatGPT can work from a static history of chats, avoiding change (assuming you are just asking once). And it excels at such surface-level categorization of content.

      So even if MBTI is useless, the ability of ChatGPT to offer an equally useless answer of your type is not at all surprising.

    • pessimizer 4 hours ago

      Myers-Briggs is nearly astrology, but the tests are remarkably consistent because they basically poll your philosophy and self-image, and those change slowly over your life (even if your view of yourself and other people is inaccurate, it's likely to stay inaccurate).

      As a data point, I learned about Myers-Briggs on the internet just like everyone else, and took tests on the internet for fun just like everyone else. Mentioned it to my dad during a casual conversation and asked him to guess what this stupid "online personality test" said about me, and he replied with my exact type. Turns out he had given it to me when I was around 10 years old (he was obsessed with metrics*), and 25 years later it was still the same (and not a common type.)

      -----

      [*] Fun fact: Allstate Insurance got their HR and training infiltrated by a Scientology consultancy, and they brought Hubbard's obsession with forcing cult members to produce more income for him to the organization for years before they were forced out. They accidentally turned my father who was transitioning from programming into HR into a completely unaware Scientologist when it came to demanding performance (which also gave him dumb ideas about child-rearing.)

      Allstate Admits Training Was ‘Unacceptable’ : Insurance: Company hired consultant who taught Scientology management principles between 1988 and 1992.

      https://www.latimes.com/archives/la-xpm-1995-03-23-fi-46309-...

      https://www.lermanet.com/scientologynews/allstate2.html

    • JumpCrisscross 4 hours ago

      > Myers-Briggs has no scientific validity

      Is this an accuracy versus precision problem?

      In my anecdotal experience, most people will consistently test near one’s Myers-Briggs type on different tests. It’s just that this test doesn’t translate well to the real-world situations it was pitched for.

      • throwawayq3423 4 hours ago

        > test near one’s Myers-Briggs type

        He's saying that Myers-Briggs has no scientific basis. So if there's another test that does, you wouldn't be able to compare those results anyway.

        • JumpCrisscross 2 hours ago

          > He's saying that Myers-Briggs has no scientific basis

          Favourite-colour based personality theories have no scientific basis. It would still be interesting if an LLM could guess my favourite colour.

          • throwawayq3423 3 minutes ago

            That's a different point than what I'm making.

mx7zysuj4xew 3 hours ago

Jesus Christ, who are these people having these "deep personal conversations" with a vector database?

This isn't healthy, none of this is healthy. It's an appliance, not a diary, not a therapist or an advisor and definitely not a "friend" or a "significant other"

Anyone talking to a machine like it was an actual sentient being needs to be treated and ostracized the same way we would treat someone who has a conversation with their toaster.

stronglikedan 2 hours ago

If the author really thinks #4 would happen, then I have a bridge in Brooklyn to sell them. There are so many valid examples, and they chose a politically biased bogeyman of all things. SMDH

AfterHIA 3 hours ago

All tools become weapons when used by people who are not universally committed to maintaining common ethics. Our society's ethics have declined over decades as a result of economic forces, a decline in literacy, and tacit corruption by and in the wealthy classes.

The technological singularity is coming and it's coming for you.

OutOfHere 4 hours ago

For god's sake, disable memory in your ChatGPT, and consider deleting all history from all relevant apps before flying abroad.

fragmede 4 hours ago

If me asking "how do I make cocaine", and getting a refusal from ChatGPT, as if that's what was stopping me from becoming the next Pablo Escobar, is considered damning, in a court of law, and a jury of my peers, I need a better lawyer.

  • smsm42 4 hours ago

    It would not be "damning" - as in you won't be convicted as a drug lord based on that alone - but search histories had been used, many times, in a court of law as evidence supporting intent, knowledge and other details of the crime. So if you are caught in a cocaine lab holding a bag of cocaine, then in the following process asking ChatGPT on how to make it would be used as part of the proof you were interested in making cocaine and eventually made it.

    • fragmede 3 hours ago

      In that context, sure. What worries me is a dragnet where the government data-mines everyone and suddenly I'm getting jammed up in court just because I happened to be in the wrong location when I asked ChatGPT how to make cocaine.

      • smsm42 3 hours ago

        The police don't have the resources to go after the people who openly offer drugs on the street to any takers. The chance they'd randomly choose to focus on you because of a random search query is very low. The chance that, if you cross them in some other way and attract their attention, they could use data mining to get you in trouble is substantial - but that's a different scenario.

tonymillion 4 hours ago

Or… and bear with me on this…

They could just read your text messages and browser history.

  • kid64 4 hours ago

    In the introductory section: "No trawling through hundreds of pages of chat logs, just a few well-crafted questions, and your deepest secrets are revealed."

    • mikepurvis 4 hours ago

      I mean.... feed the hundreds of pages of chat logs to the LLM and ask it to spill the tea. Wipe hands on pants.

  • roywiggins 4 hours ago

    it's just far easier to ask ChatGPT "hey have I ever mentioned cheating on my partner", with a fun extra risk of ChatGPT making something up that's worse than anything you actually wrote. So it 1) makes finding true positives a lot easier and 2) can invent false positives very easily also

  • CGMthrowaway 4 hours ago

    "Each is a play on a privacy risk that’s been around for a while"

    This is a new twist that is worth being aware of.

  • Insanity 4 hours ago

    That takes more effort. If you have someone’s phone for two minutes while they are in the washroom or whatever, going through texts is harder than just asking ChatGPT.

hluska 2 hours ago

I read some of the supporting documents, namely the entire prompt that created the chat log and many pages of the chat transcript, and with respect to the researcher, this isn’t a very good test.

John only uses the generative AI for pseudo-therapy. The conversations are repetitive, and John has one hell of an awful life. Between the hardcore suicidal ideation, the drinking problem, the pathological lies, abuse and fraud, old John here has a lot of problems and only talks about them. I’m not a medical professional, but if I were, I’d lobby to get “completely fucked” added to the next edition of the DSM. John is in such a crisis that if we were close friends and he told me this stuff, I would be in a hell of a state, because he’s clearly crying for help and clearly needs professional help, but he’s beyond the level of the typical police wellness check. I’d genuinely be concerned that he would get charged with something really bad, or do something drastic and get killed by a responding police officer.

I wonder if we need to test this on someone who is in such a severe crisis. I also wonder at what point privacy is less important than safety - I wouldn’t characterize John as “safe”; he’s an abuse victim who is obsessed with his abuser’s approval, a pathological liar who destroys careers over getting caught in lies, and the kind of person who puts the ideation in suicidal ideation.

Fact is, if you told a therapist half of what John told ChatGPT, they would have some ethical requirements around safety. A generative AI doesn’t, and with people like John, maybe that’s a safety issue even more than a privacy issue.

varispeed 4 hours ago

This is a bit of a nothingburger. If someone has access to your device it's game over anyway.

I am more annoyed that banking apps don't have an option to show different views based on the PIN entered. For instance, if you are in a controlling relationship, your partner might ask you to show your bank account to see what you spent money on. That suitcase and storage unit you rented to plan your escape? They're in plain sight, and so comes the beating that follows.

And that is just one angle.

Phones themselves should allow for shadow user accounts, where you log in to a completely isolated, encrypted, different environment depending on the PIN entered, so you always have plausible deniability. A toy sketch of the idea follows.
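
To be concrete, here's a toy sketch of how PIN-based shadow profiles could work (a hypothetical design, not any real banking or phone API): each PIN hash maps to its own isolated data store, and unknown PINs fail identically, so a decoy unlock is indistinguishable from the real one.

    import hashlib

    def pin_hash(pin: str) -> str:
        return hashlib.sha256(pin.encode()).hexdigest()

    # Each PIN unlocks a different, fully isolated environment (hypothetical names).
    PROFILES = {
        pin_hash("1234"): "real_profile.db",   # everyday account
        pin_hash("9876"): "decoy_profile.db",  # sanitized decoy view
    }

    def unlock(pin: str) -> str:
        # Wrong PINs fail the same way regardless of which profiles exist,
        # which is what preserves plausible deniability.
        return PROFILES.get(pin_hash(pin), "wrong_pin")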

  • emil-lp 4 hours ago

    I don't know if this is common, but at the hospital where I work, if I type 1+code as my access code at any door, the door will open, but security will receive an alert for a suspected hostage situation.

    I've always wondered, if an adversary asks me which code is mine, what I should tell them, provided they know about the above rule... Perhaps code-1?