Great project! I’ve had a similar experience with Rewind and the related privacy concerns. A quick thought: if I recall correctly, Rewind performs OCR locally, so it only needs to send textual data. Since you’re focusing on macOS, you could rely on VNRecognizeTextRequest and skip the extra OCR complexity. It might also help to detect and mask sensitive information with lightweight models (e.g., BERT), especially when leveraging cloud-based AI.
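A minimal sketch of the local-masking idea, using regexes as a stand-in for the NER model (a real pipeline would run something like a small BERT PII tagger; the patterns and labels here are illustrative):

```python
import re

# Toy PII masker: detect spans locally, replace them before any text
# leaves the device. A real system would use an NER model instead of
# regexes, but the overall shape is the same.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def mask_pii(text: str) -> str:
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(mask_pii("Contact jane.doe@example.com, SSN 123-45-6789"))
# -> Contact [EMAIL], SSN [SSN]
```

The nice property is that only the masked text ever reaches the cloud model, while the original screen capture and OCR output stay on the device.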
Self‑hosting swaps one risk for another. A cleaner option may be local‑first apps that speak an open sync protocol: your data lives on your phone/laptop/homeserver; an optional, end‑to‑end‑encrypted relay just moves opaque blobs for backup and multi‑device access.
Why this matters: the No. 1 failure I see isn’t servers getting hacked. It’s users losing the only copy of their files.
* ~70 million phones are lost annually; most are never recovered. https://snohomishcountywa.gov/Archive.asp?ADID=543
* Consumer hard-drive AFR (annualized failure rate) hovers around 1-2%; roughly 7% fail within five years. https://blocksandfiles.com/2024/05/02/disk-failure-rates-in-...
* 29 % of people have already suffered data loss from deletion, ransomware, or disk death. https://www.worldbackupday.com/en
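For what it's worth, the five-year figure in the second bullet is roughly what a constant annual rate compounds to (assuming a 1.5% AFR for illustration):

```python
# Cumulative failure probability over n years, assuming a constant
# annualized failure rate (AFR). A 1.5% AFR compounds to about the
# ~7% five-year figure cited above.
def cumulative_failure(afr: float, years: int) -> float:
    return 1 - (1 - afr) ** years

print(f"{cumulative_failure(0.015, 5):.1%}")  # -> 7.3%
```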
With odds like these, an off‑device, unreadable backup is basic hygiene, not a luxury.
Disclaimer: I work on Anytype, which follows this model: local ownership, open‑source sync protocol, source‑available clients.
Thank you! I updated the original post with your explanation, and since you stated the output was pure hallucination, I also attached one of the responses.
In one of the responses, it provided a financial analysis of a little-known company with a non-Latin name, located in a small country. I found this company; it is real, and the numbers in the response are real. When I asked my own ChatGPT to provide a financial report for this company without using web tools, it responded: `Unfortunately, I don’t have specific financial statements for “xxx” for 2021 and 2022 in my training data, and since you’ve asked not to use web search, I can’t pull them live.`
Reported a flaw to OpenAI that lets users peek at others' chat responses. Got an auto-reply on May 29th, radio silence since. Issue remains unpatched :(
Avoided their bug bounty due to permanent NDAs preventing disclosure even after fixes. Following standard 45-day disclosure window—users should avoid sharing sensitive data until this is resolved.
No, definitely not the empty string hallucination bug. These are clearly real user conversations. They start like proper replies to requests, sometimes reference the original question, and appear in different languages.
I had the exact same behavior back in 2023, and it looked like a clear leak of user conversations, but it turned out to be just a bug in the API calls made by the software I was using.
It was the classic "oh no we did caching wrong" bug that many startups bump into. It didn't expose actual conversations though, only their titles: https://openai.com/index/march-20-chatgpt-outage/
It's not just about sensitive data like passwords, contracts, or IP. It's also about the personal conversations people have with ChatGPT. Some are depressed, some are dealing with bullying, others are trying to figure out how to come out to their parents. For them, this isn't just sensitive, it's life-changing if it gets leaked. It's like Meta leaking their WhatsApp messages.
I really hope they fix this bug and start taking security more seriously. Trust is everything.
After some hemming and hawing, my most cromulent thought is: having a good security posture isn't synonymous with accepting every claim you get from the firehose.
Everything is vulnerable. The question is: has this researcher demonstrated that they have discovered and successfully exploited such a vulnerability? What exactly in this post makes you believe that is the case?
This is going to be subject to the legal discovery process with the usual safeguards to prevent leaks; in particular, the judge will directly supervise the decision of who needs access to these logs, and if someone discloses information derived from them for an improper purpose, there's a very good chance they'll go to jail for contempt of court, which is much more stringent than you can usually expect for data privacy. You can still quite reasonably be against it, but you cannot reasonably call it "plain text logs available for everyone at the company to view".
My understanding is that all Bugcrowd bounties do by default.
You can shame it all you want, but you can also just publish your bugs directly. Nobody has to use the Bugcrowd platform. You don't even have to wait 45 days; I don't buy these "CERT/CC" rules.
You said it was pretty standard for bug bounty programs, and I disagreed, pointing to several of the largest and longest-lived bug bounty programs, none of which do that. And your response is to point out that one particular platform does it?
Even among third-party platforms, of which there are several big ones, the NDAs are not a platform requirement, just an option for participating firms.
NDAs are not the norm. Don't mislead people who would otherwise get into this game with non-issues they need not worry over.
OpenAI's security team commented on the thread themselves that they believe they simply accepted the Bugcrowd defaults. I think you're trying to find a controversy that just isn't here.
The bug bounty world is a funny one. I remember one researcher complaining that their bug was dismissed and fixed after they signed an NDA: no payout, nothing. Another got $100 instead of $5,000 because the company downgraded the severity from high to low. So they ended up with little or no money, and no recognition either. Not sure if these were edge cases, but it does make you wonder how fair the process really is.
If you're dealing with large companies, a good rule of thumb is that the bounty program is incentivized to pay you out. Their internal metrics improve the more they pay; the point is to turn up interesting bugs, and the figure of merit for that is "how much did we have to spend". At a large company, a bounty that isn't paying anything out is a failure.
All bets are off with small random startups that do bug bounties because they think they're supposed to (most companies should not run bounties). But that's not OpenAI. Dave Aitel works at OpenAI. They're not trying to stiff you.
Simultaneous discovery (either with other researchers or, even more often, with internal assessments) is super common. What's more, you're not going to get any corroboration or context for them (sets up a crazy bad incentive with bounty seekers, who litigate bounty results endlessly). When you get a weird and unfair-seeming response to a bounty from a big tech company, for the sake of your own sanity (and because you'll probably be right), just assume someone internal found the bug before you did, and you reported it in the (sometimes long) window during which they were fixing it.
We do releases in GitHub Actions CI, so you can inspect the CI logs and the published artifacts (desktop/Android), then compare the binaries' checksums. I would appreciate ideas on how we can make this more transparent.
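For example, a user could verify a download against the checksum printed in the CI log like this (the filename and expected digest in the comment are hypothetical):

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 so large artifacts need not fit in RAM."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

# Compare against the checksum published in the CI log
# (filename and expected digest here are hypothetical):
# assert sha256_of("anytype-desktop.dmg") == "3a9f..."
```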
Hello, I’m Roman and I’m a technical co-founder at Anytype. We are building a new operating environment that breaks down barriers between apps and gives back privacy and data ownership to users.
Anytype runs locally and exchanges data directly in a peer-to-peer way without exposing it to intermediaries even when users work across devices and with each other. Because of this, Anytype is free without storage and upload limits. We plan to open-source it with the public release.
This is our early demo and I’d like to get feedback on what we are building, so we can incorporate it into Anytype. Also, if you have any questions, I’d love to answer them.
It is indeed in a closed test right now, with new testers onboarded regularly. I think the selection is random from the pool of people who have filled out interest forms on Telegram.
1. For public pages: as long as any node has your page saved, it will remain available.
2. If the data is shared with someone (e.g., a teammate or family member), you will always be able to restore it with your seed phrase. For private-only data, our plan is to encourage users to keep multiple devices (mobile + desktop) as a backup.
3. We plan to implement per-device private keys (like Keybase does) and use them to decrypt files' private keys. If a device's private key is stolen, you can remove that device from the list, and it will no longer be able to receive new changes.
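On point 2, here is a sketch of why a seed phrase alone is enough to restore access, assuming the key is derived deterministically from the phrase (the KDF, salt, and iteration count are illustrative, not Anytype's actual parameters):

```python
import hashlib

# Deterministic key recovery from a seed phrase: the key is a pure
# function of the phrase, so any device that knows the phrase can
# re-derive it and decrypt restored data. KDF parameters are
# hypothetical, for illustration only.
def key_from_seed(phrase: str, salt: bytes = b"demo-salt") -> bytes:
    normalized = " ".join(phrase.lower().split())  # collapse case/whitespace
    return hashlib.pbkdf2_hmac("sha256", normalized.encode(), salt, 100_000)

k1 = key_from_seed("ribbon orbit cactus mimic velvet tundra")
k2 = key_from_seed("Ribbon  orbit cactus mimic velvet tundra")
assert k1 == k2  # same phrase -> same key, on any device
```

Per-device keys (point 3) would then only hold wrapped copies of keys derived this way, so revoking a device just means deleting its wrapped copies.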