I wondered the same! FWIW I'm currently migrating from managed Postgres to self-managed on Hetzner with [autobase](https://autobase.tech/). Though of course for high availability it requires more than one server.
Beware of Hetzner Cloud volumes: they're too slow to be usable for a database. I'm not sure what workloads people run on Hetzner, but the low-performance volumes and unreliable load balancers don't seem like a good fit for real production stuff with traffic.
I ran some benchmarks a couple of years ago. I don't have them at hand unfortunately, but off the top of my head, 4k sequential reads produced around 1,500 IOPS, while sequential writes were about a third of that. The practical performance was abysmal: I moved PostgreSQL storage to a volume and it was very noticeably slower just from browsing the web app (compared to NVMe SSD storage).
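For anyone who wants to reproduce this kind of test, a fio run along these lines measures what I described (the file path and sizes here are just placeholders; point it at the volume's mount point):

```shell
# Sequential 4k reads against the mounted volume (path is an example).
fio --name=seqread --rw=read --bs=4k --size=1G \
    --ioengine=libaio --iodepth=32 --direct=1 \
    --filename=/mnt/volume/fio.test --runtime=30 --time_based

# Same, for sequential 4k writes.
fio --name=seqwrite --rw=write --bs=4k --size=1G \
    --ioengine=libaio --iodepth=32 --direct=1 \
    --filename=/mnt/volume/fio.test --runtime=30 --time_based
```

--direct=1 bypasses the page cache so you're measuring the volume, not RAM; results will obviously vary with instance type and volume size.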
For comparison, I'm now using UpCloud which uses network-attached storage for all volumes and easily hits 10k IOPS (up to 100k with some tuning).
I may well have missed something while testing this, so I'm happy to be corrected if someone else wants to weigh in.
I'm not the OP but I came here to voice the same concern. I would love to use something like this. I also signed up for rewind.ai and Limitless and pre-ordered the pendant. But ultimately I cancelled it out of privacy concerns.
I wonder if it could use local storage and let you provide your own OpenRouter endpoint? That way it could be a local model, or your own deployment of GPT/Claude in Azure/Bedrock/Vertex etc., where you control the retention policies.
Basically, I want to know that you guys don't have access to view my stuff. I get that that limits your ability to improve the product and support issues, but when I'm sending everything it really starts to matter. Just thought I'd share what held me back from immediately signing up despite really wanting to use a product like this!
Maybe you could try AirJelly, a context-aware, proactive AI companion we've been building recently. All data is stored locally, on your Mac. Local LLM models would be difficult, as we use Claude/Gemini as the model provider. But as the software builder, we don't collect any user information; anything used to improve the product is sent voluntarily by users (just send feedback in the app). If you'd like to try it, you can visit our website airjelly.ai.
Just what I'm looking for, but I'm reluctant to trust a third party with full time screen recording of my device... Any chance of local processing (or even custom LLM endpoints) in the future?
I'm having a hard time understanding how this is different from a bastion server, where you're tunneling through an intermediary server that you've deployed in the target network.
I guess the difference is that the intermediary server doesn't need a port open (since standard NAT punching will work)? Or are there other big differences?
We've set up and used peer relays since they were first announced and they've been great, but they do solve a somewhat specific problem.
Some of our users experienced fairly limited throughput from time to time. Under certain circumstances (e.g. always with IPv4 NAT/double NAT, never with IPv6) their Tailscale client couldn't establish a direct connection to the Tailscale node in the datacenter, so data was relayed through Tailscale's public relay nodes, which at times were rate-limited or a bottleneck. In all fairness, that is to be expected according to their docs.
The first mitigation was to "ban" the specific public relay they were using in the policy. That helped, but it's still not a great solution, and in the long run we might just end up in a weird whack-a-mole ban game with the public relays.
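For reference, that "ban in the policy" step is done in the tailnet policy file's derpMap section; a sketch (the region ID is just an example, check which region your clients actually land on first):

```jsonc
{
  // Tailnet policy file (HuJSON). Setting a DERP region to null
  // removes it from the map, so clients stop using those public relays.
  // "1" is an example region ID, not a recommendation.
  "derpMap": {
    "Regions": {
      "1": null
    }
  }
  // ...rest of the policy (acls, tagOwners, etc.) unchanged
}
```

The downside is exactly the whack-a-mole problem: clients just fail over to the next-nearest public region.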
So we set up a peer relay, which networking-wise sits in a DMZ sort of network (more open) but location-wise is still in the datacenter, and allowed it to easily reach the internal (more network-restricted) Tailscale nodes. That solved all throughput problems, since we no longer have users connecting through the public relays.
Also, the peer relays feel a little bit magic: once you allow their use in the Tailscale policy, it just works(tm); there is basically zero fiddling with them.
EDIT: I'll happily provide more details if interested - we did a fair amount of testing and debugging along the way :)
I think the biggest difference is that your client applications don't need to be explicitly configured to use the bastion server. For example, SSH, web browsers, RDP, Samba and so on can just pretend you are inside the target network. Doubly useful if it's a "customer" network and you're working with multiple customers.
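To make the contrast concrete: with a classic bastion, every tool needs its own plumbing, e.g. for SSH something like this in ~/.ssh/config (hostnames and IPs here are made up):

```
# ~/.ssh/config -- explicit per-tool bastion setup (example names)
Host internal-db
    HostName 10.0.2.15
    ProxyJump bastion.customer.example.com

# With Tailscale there is no per-application config:
# you just ssh/browse/RDP to the node's address or MagicDNS name directly.
```

And SSH is the easy case; getting a browser or an SMB client through a bastion usually means SOCKS proxies or port forwards on top.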
Not the OP, but I've got a "Names to remember" evergreen note in my Reference folder. Within it, I have a few headings (e.g. neighbours, or locations), and a bullet point for each person, with context that will trigger the memory. That might sound like there's a lot of structure, but it's really the act of writing it down in the first place that helps me remember.
From what I remember, the key differentiating features were:
- a heartbeat, so it was able to 'think'/work throughout the day, even if you weren't interacting with it
- a clever and simple way to retain 'memory' across sessions (though maybe claude code has this now)
- a 'soul' text file, which isn't necessarily innovative in itself, but the ability for the agent to edit its own configuration on the fly is pretty neat
It's a coding agent in a loop (coding agents usually reject infinite loops) with access to your computer and some memory, and it can communicate through Telegram. That's it. It's brilliant though, and he was the first to put it out there.
I see, so there's actually an additional outer loop here, essentially `sleep(n); check_all_conversations()`, which is definitely not something Claude Code does.
As for the 'soul' file, Claude Code does have CLAUDE.md and skill files that it can edit with config changes.
One thing I'm curious about is whether there was significant innovation around tools for interacting with websites/apps. Their wiki calls out about 10 apps (WhatsApp, Teams, etc.) that OpenClaw can integrate with, so IDK if it just made interacting with those apps easier? Having agents use websites is notoriously a shitty experience right now.
Would this be any better than just pasting links to the appropriate documentation for the technology you want to use into your AGENTS.md file? I suppose it's better if it's a single giant text file, so there are fewer agent iterations navigating links within the docs; but then doc sites could just provide that themselves, like https://docs.avohq.io/3.0/llm-support.html
This should work out of the box with MagicDNS (part of Tailscale's feature set). If machine A is named larrys-laptop and is running a service on :8080, then from sandras-laptop just navigate to http://larrys-laptop:8080 and it should work, provided both machines are on the same tailnet.