Cool, just checked out dlgo. Looks like you're targeting Go bindings for on-device inference? Different approach but same conviction that this should run locally. Happy to compare notes if you want to chat about Metal optimization or pipeline architecture.
I agree. I met Knuth briefly after a guest lecture at my university a few years ago and although you could tell his body was getting old, his mind was incredibly fresh.
Although I'm not as bright as he is, I can only hope to be as intellectually curious at that age.
I don't think this is even controversial, and I don't think it's without causation: not remaining curious, not keeping the mind stimulated, and so on, accelerates one's decline.
If you work in something labour-intensive, you should retire young while your body is in good health; if you work in academia you should (strive for emeritus and) never leave! (And if you work in SWE, I don't know; we should probably retire, but then spend more time on our own projects, experiments, and reading HN.) (All assuming, for the sake of argument, that we're optimising for longevity without considering time with family, having the funds to retire, etc.)
The government is already able to access your conversations, data, and connections with E2EE in place. I don't see how removing E2EE would affect that ability in any way.
Myth: End-to-end encryption (E2EE) is the only way to ensure robust cybersecurity.
Reality: E2EE carries its own risks and vulnerabilities. No single, standalone method achieves bulletproof cybersecurity.
Robust cybersecurity requires layering multiple, diligently managed security measures and best practices. Malevolent actors can exploit E2E encryption to avoid critical data security scanning, to allow malware inside a network or onto a device, and to evade law enforcement.
You actually choose to believe that these trillion-dollar tech monsters, run by some of the most despicable people on the planet, are being forthright when they claim they have no ability to do this on behalf of a government request? For something that isn't open source, can't be audited, and can be changed at the next upgrade without any oversight? I find it far more likely that they can, and that informs my normie use, mostly.
As someone who desperately wants to use local models, I lament that there is no way to use them on consumer hardware for serious coding work. I have an RTX 4070 Ti Super, and I cannot run any large model with enough context and tokens/sec to compete with a remote offering.
Long tool outputs and command outputs: everything in my harness is spilled over to the filesystem. Context messages are truncated and split to the filesystem, with a breadcrumb for retrieving the full message.
Works really well.
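The core of this can be sketched in a few lines. This is a minimal illustration, not the commenter's actual harness: the directory, threshold, and breadcrumb format are all hypothetical; the only assumption from the comment is "truncate, write the full text to disk, leave a pointer behind."

```python
import hashlib
import os
import tempfile

# Hypothetical spill location and truncation threshold.
SPILL_DIR = os.path.join(tempfile.gettempdir(), "harness_spill")
MAX_CHARS = 2000

def spill_long_output(text: str) -> str:
    """Return `text` unchanged if it is short enough; otherwise write the
    full text to a file and return a truncated copy ending in a breadcrumb
    that tells the model where to retrieve the rest."""
    if len(text) <= MAX_CHARS:
        return text
    os.makedirs(SPILL_DIR, exist_ok=True)
    # Content-addressed filename so identical outputs share one file.
    digest = hashlib.sha256(text.encode("utf-8")).hexdigest()[:12]
    path = os.path.join(SPILL_DIR, f"{digest}.txt")
    with open(path, "w", encoding="utf-8") as f:
        f.write(text)
    breadcrumb = f"\n[... truncated; full output saved to {path} ...]"
    return text[:MAX_CHARS] + breadcrumb
```

A tool-retrieval step on the other end can then read the referenced file back when the model asks for the full message.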