Hacker News: 3s's comments

Not to mention their recent integration of Persona ID verification - that was the last straw for me.

But they already have PII on nearly all users. Many users upload documents with their name, or pictures of themselves, or have chats that mention their home address. All of this is information Anthropic already has on its users (voluntarily provided via chats or the API), and it's equivalent to what Persona collects during verification. It's just more convenient to use a third-party SaaS product for this than to vibe-code their own identity verification platform, I guess.

This might be conflating two things: what data exists somewhere, and how many independent parties hold it. The risk is not the same.

Put it this way: I sort of already trust Anthropic with some of my PII. And that's ... maybe not OK, actually. But it's a single failure surface.

But that's definitely not the same thing as trusting Anthropic, AND Persona, AND all of Persona's partners, AND their partners, ad infinitum.

And let's say Persona is actually ok; who knows, they might be? But it's still an extra surface; and if they share again, that's another extra surface again.

It's fairly common sense blast radius minimization. This is part of the actual theory behind GDPR.

"We already seem to be accidentally leaking some data through channel A" doesn't mean it's a good idea to open channels B through Z as well. It means you might want to tighten down channel A.


Yes, it appears your personal data IS being sent to OpenRouter and the model provider here. The problem, I think, is that a lot of people (especially in the openclaw community) mistake "I run it on my Mac mini" for "my data is private." Meanwhile, all the data is being shipped off to Anthropic via OpenRouter for training, and both of those parties see everything.

I guess you could theoretically plug in a local model here, but the README should be more precise about privacy.


The attestation report is produced ahead of time and verified on each connection (before the prompt is sent). Every time a client connects to make an inference request via one of the Tinfoil SDKs, the attestation report is checked against a known-good public configuration to ensure the connection is to a server running the right model.


The attestation is tied to the Modelwrap root hash (the root hash is included in the attestation report), so you know that the machine serving the model has the right model weights.
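As a rough sketch of the check described above (hypothetical names and values, not the actual Tinfoil SDK API), a client would compare the attested code measurement and model root hash against pinned known-good values before sending any prompt:

```python
# Hypothetical client-side attestation check (illustrative only).
# The known-good values would be published out of band, e.g. pinned
# in the client against a public configuration.
KNOWN_GOOD_MEASUREMENT = "a3f1c0de"  # expected enclave code measurement (made up)
KNOWN_GOOD_MODEL_ROOT = "9c2eb10b"   # expected model weights root hash (made up)

def verify_attestation(report: dict) -> bool:
    """Accept the connection only if the server attests to both the
    expected code measurement and the expected model root hash."""
    return (
        report.get("measurement") == KNOWN_GOOD_MEASUREMENT
        and report.get("model_root_hash") == KNOWN_GOOD_MODEL_ROOT
    )
```

Running this on every connection is what makes the guarantee per-request: a server with the right code but the wrong weights (or vice versa) fails the check.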


The absence of solutions for LLM privacy on that list is telling. We've figured out how to have private communications with other humans via end-to-end encryption, but arguably we leak more about ourselves to chatbots in a few sessions than we do to even our closest friends and family over WhatsApp.


It uses confidential computing primitives like Intel TDX and NVIDIA CC, available on the latest generations of GPUs. Secure hardware like this is a building block to enable verifiably private computation without having to trust the operator. While Confer hasn’t released the technical details yet, you can see in the web inspector that they use TDX in the backend by examining the attestation logs. This is a similar architecture to what we’ve been developing at Tinfoil (https://tinfoil.sh) if you’re curious to learn more!


Reminds me of a story called "A Disneyland Without Children" about a planet overtaken by AI pursuing meaningless "inbred" GDP goals while completely neglecting the humans in the process: https://open.substack.com/pub/nosetgauge/p/a-disneyland-with...


Not to mention the privacy concerns of connecting my entire life to OpenAI or Anthropic. If you have the memory feature enabled, it's scary how much ChatGPT already knows about you; it can even infer implicit thoughts and patterns about you as a person.


I'm sure it already knows a lot regardless of the memory feature, as long as you're sharing your chat history / have history enabled, but I agree, memory only makes it worse.


This is really neat! Didn’t realize it could be this simple to run RL on models. Quick question: How would I specify the reward function for tool use? or is this something you automatically do for me when I specify the available tools and their uses?


Thanks! Our goal is to make RL "just work" with completely automated GPU provisioning, algorithm selection, and SFT warm-up, while giving people the ability to switch away from the defaults if they want to.

The way tools currently work in the beta: you add tools via MCP to the configuration, and they get passed in as additional context for the model. The model might then choose to use a tool during inference; the tool is automatically called, and its output is returned as a tool message. If you really want to, you could parse the tool output as part of the reward calculation, but I'd expect you'd usually base the reward just on the model's completion. I can give more details if there's a specific tool setup you're envisioning!
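To make the two options concrete, here is a minimal sketch of a reward function over a rollout transcript. The message shape ({"role": "tool", ...} entries) and the correctness check are hypothetical, not the beta's actual API:

```python
def reward(messages: list[dict]) -> float:
    """Score one rollout. `messages` is the chat transcript; tool results
    are assumed to appear as {"role": "tool", "content": ...} entries
    (hypothetical shape). The reward is based mainly on the final assistant
    completion, with a small bonus if the model actually invoked a tool."""
    completion = next(
        (m["content"] for m in reversed(messages) if m["role"] == "assistant"),
        "",
    )
    correct = 1.0 if "42" in completion else 0.0  # toy correctness check
    used_tool = any(m["role"] == "tool" for m in messages)
    return correct + (0.1 if used_tool else 0.0)
```

The usual setup would be just the `correct` term; the `used_tool` bonus illustrates how tool messages could feed into the reward if you wanted to shape tool-use behavior directly.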


To add to this, you can currently manually parse tool calls in your environment's step function, but we'll be rolling out a UI that makes this easier soon.

