Hacker News | anp's comments

> any [] property can be [taken by the state] from its [original] owners simply by [those owners becoming more powerful than the state wants]

When rephrased like the above, I think what you’re describing is pretty common in history. Many industries and assets have been nationalized when it serves the state’s interests.

IMO the moral justification is that there is no ownership or private property except that which is sanctioned by the state (or someone state-like) applying violence in its defense. In this framing, there’s little moral justification for the state letting private actors accrue outsized power that harms consumers/citizens.


Brutal, but understandable and well-argued. Thank you.

People outsource the brutality (to the government), so that they don't need to deal with it in their daily life. If we couldn't force companies to act in ways we want through a formal system, then the world would look much more brutal.

or alternatively we can just stop using products/services of said companies

I can ban persons from doing things I'd rather not have them do. Companies are legal persons, so why shouldn't this apply to them? At some point ignoring behaviour doesn't make it go away; it needs to be actively worked against, otherwise it will become (practically) mandatory.

the core problem with banning is who is doing it and why, right? once we allow it, it goes into the hands of the “politicians” and then books get banned today, ice cream gets banned tomorrow, math gets banned the next day…

Which is why the more serious a law's consequences are, the harder it should be to change and the more people need to sign off on it. There is stuff that needs simple majorities, stuff that is in the constitution and requires a super majority, stuff that can't be changed short of abolishing the current state, and stuff that can't be changed at all, because it is just an assertion that holds independently of anyone asserting it.

This is kind of a "solved*" thing in theory, not so much in practice of course.

*solved meaning we have a proper process established


I’m not sure I see that assumption in the statement above. The fact that no prompt or alignment work is a perfect safeguard doesn’t change who is responsible for the outcomes. LLMs can’t be held accountable, so it’s the human who deploys them towards a particular task who bears responsibility, including for things the agent does that diverge from the prompting. It’s part of the risk of using imperfect probabilistic systems.

FWIW I understood GP to mean that it suddenly makes sense to them, not that there’s been a sudden focus shift at google.


This has mostly been my experience as well although I don’t tend to run yolo mode outside of an isolated VM (I’m setting them up manually still, need to try vagrant for it). That said, it seems like some of the people who are more concerned about isolation are working with more untrusted inputs than I’ve been dealing with on my projects. It’s rare for me to ask an agent to e.g. read text from a random webpage that could bring its own prompt injection, but there are a lot of things one might ask an agent to do that risk exposure to “attack text”.


Anyone who finds this relatable (like me) might benefit from learning more about the last couple of decades of research on emotional regulation, trauma, and the nervous system. I have a great “trauma informed” therapist and over time this tendency of mine feels much less compulsive and more like a choice I can make because I know I’m good at something. At least for me having a calmer internal life has made it way easier to pick my battles and it usually means I end up feeding my desire to be useful on more satisfying and impactful things than I would have chased in more obsessive times in my life.


> Others think someone from the Rust (programming language, not video game) development community was responsible due to how critical René has been of that project, but those claims are entirely unsubstantiated.


I find a lot of these points persuasive (and I’m a big Rust fan so I haven’t spent much time with Zig myself because of the memory safety point), but I’m a little skeptical about the bug report analysis. I could buy the argument that Zig is more likely to lead to crashy code, but the numbers presented don’t account for the possibility that the relative proportions of bug “flavors” might shift as a project matures. I’d be more persuaded on the reliability point if it were comparing the “crash density” of bug reports at comparable points in those projects’ lifetimes.

For example, it would be interesting to compare how many Rust bugs mentioned crashes back when there were only 13k bugs reported, and the same for the JS VM comparison. Don’t get me wrong, as a Rust zealot I have my biases and still expect a memory safe implementation to be less crashy, but I’d be much happier concluding that based on stronger data and analysis.


I had the same thought. But one comparison actually was very useful for "bug densities": deno vs bun. They have comparable codebase sizes as well as comparable ages (7y vs 4y). I'd like to see the same stats for tigerbeetle, which is very carefully designed: if segfaults were relatively high on that one too, well...


> I'd like to see the same stats for tigerbeetle

Actual SIGSEGVs are pretty rare, even during development. There was a pretty interesting one that affected our fuzzing infra a little bit ago: https://ziggit.dev/t/stack-probe-puzzle/10291

Almost all of the time we hit either asserts or panics or other things which trigger core dumps intentionally!



> Don’t get me wrong, as a Rust zealot I have my biases and still expect a memory safe implementation to be less crashy

That is a bias. You want all your "memory safety" to be guaranteed at compile time. Zig is willing to move some of that "memory safety" to run time.

Those choices involve tradeoffs. Runtime checks make Zig programs more "crashy", but the language is much smaller, the compiler is vastly faster, "debug" code isn't glacially slow, and programs can be compiled even if they might have an error.
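
To make the tradeoff concrete, here is a minimal Rust sketch (illustrative only, not taken from either compiler): the first issue is rejected at compile time, while the second compiles fine and only "crashes" once the program runs.

    fn main() {
        let v = vec![1, 2, 3];

        // Compile-time check: the borrow checker rejects this use-after-move
        // before the program ever runs.
        // let moved = v;
        // println!("{}", v[0]); // error[E0382]: borrow of moved value: `v`

        // Run-time check: indexing is only validated while the program runs,
        // so this compiles and then panics ("crashes") with
        // "index out of bounds" instead.
        let i: usize = 10;
        println!("{}", v[i]);
    }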

My personal take is that if I need more abstraction than Zig, I need something with managed memory--not Rust or C++. But, that is also a bias.


I understand that I have a bias, which is why I was disclosing it. I think it strengthens my question since naively I'd expect a self-professed zealot to buy into the narrative in the blog post without questioning the data.


> My personal take is that if I need more abstraction than Zig, I need something with managed memory--not Rust or C++

You may potentially like D. Its tooling leaves much to be desired but the language itself is pretty interesting.


Might be worth noting that npm didn’t have lock files for quite a long time, which is the era during which I formed my mental model of npm hell. The popularity of yarn (again importing bundler/cargo-isms) seems like maybe the main reason npm isn’t as bad as it used to be.


npm has evolved, slowly, but evolved, thanks to yarn and pnpm.

It even has some (I feel somewhat rudimentary) support for workspaces and isolated installs (what pnpm does)


Lock files are only needed because of version ranging.

Maven worked fine without semantic versioning and lock files.

Edit: Changed "semantic versioning" to "version ranging"


> Maven worked fine without semantic versioning and lock files.

No, it actually has the exact same problem. You add a dependency, and that dependency specifies a sub-dependency against, say, version `[1.0,)`. Now you install your dependencies on a new machine and nothing works. Why? Because the sub-dependency released version 2.0 that's incompatible with the dependency you're directly referencing. Nobody likes helping to onboard the new guy when he goes to install dependencies on his laptop and stuff just doesn't work because the versions of sub-dependencies are silently different. Lock files completely avoid this.


It is possible to set version ranges, but it is hard to see this in the real world. Everyone is using pinned dependencies.

Version ranges are a really bad idea, as we can see in NPM.


My apologies, I should have said "version ranging" instead of "semantic versioning".

Before version ranging, maven dependency resolution was deterministic.


Always using exact versions avoids this (your pom.xml essentially is the lock file), but it effectively means you can never upgrade anything unless every dependency and transitive dependency also supports the new version. That could mean upgrading dozens of things for a critical patch. And it's surely one of the reasons log4j was so painful to get past.


I’ve been out of the Java ecosystem for a while, so I wasn’t involved in patching anything for log4j, but I don’t see why it would be difficult for the majority of projects.

Should just be a version bump in one place.

In the general case, Java and Maven don't support multiple versions of the same library being loaded at once (not without tricks at least: custom class loaders or shaded deps), so it shouldn't matter what transitive dependencies depend on.


Right, that's the problem. Let's say I rely on 1.0.1. I want to upgrade to 1.0.2. Everything that also relies on 1.0.1 also needs to be upgraded.

It effectively means I can only have versions of dependencies that rely on the exact version that I'm updating to. Have a dependency still on 1.0.1 with no upgrade available? You're stuck.

Even worse, let's say you depend on A which depends on B, and B has an update to 1.0.2: if A doesn't support the new version of B, you're equally stuck.


Maven also has a terrible design decision where it will allow incompatible transitive dependencies to be used, one overwriting the other based on “nearest wins” rather than returning an error.


there are a small number of culprits, from logging libraries to guava and netty, that can cause these issues. For these you can use the Shade plugin: https://maven.apache.org/plugins/maven-shade-plugin/


If in some supply chain attack someone switches out a version's code under your seating apparatus, then good luck without lock files. I for one prefer being notified about checksums of things suddenly changing.
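
As a rough sketch of what that notification amounts to: a lock file pins a checksum, and the installer refuses anything that doesn't match. This is illustrative only (assuming the sha2 and hex crates; the function name is made up, not any particular package manager's API):

    use sha2::{Digest, Sha256};

    // Compare a downloaded artifact against the checksum pinned in the
    // lock file, and refuse to install on a mismatch.
    fn verify_artifact(bytes: &[u8], pinned_hex: &str) -> Result<(), String> {
        let actual_hex = hex::encode(Sha256::digest(bytes));
        if actual_hex == pinned_hex {
            Ok(())
        } else {
            Err(format!(
                "checksum mismatch: lock file pins {pinned_hex}, artifact hashes to {actual_hex}"
            ))
        }
    }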


Maven releases are immutable


Sounds like the Common Lisp approach, where there are editions (or whatever they call them) which are sets of dependencies at specific versions.

But the problem with that is when you need another version of a library that is not in that edition. For example when a backdoor or CVE gets discovered that you have to fix asap, you might not want to wait for the next Maven release. Furthermore, Maven is Java ecosystem stuff, where things tend to move quite slowly (enterprisey), and it comes with its own set of issues.


I was quite tickled to see this, I don’t remember why but I recently started rewatching the show. Perfect timing!


I tend to agree but there are a few scenarios where I really want it to work. Debuggers in particular seem hard to get right for the current agents. I’ve not been able to get the various MCP servers I’ve tried to work, and I’ve struck out using the debug adapter protocol from agent-authored Python. The best results I’ve gotten are from prompting it to run the debugger under screen, but it takes many tool calls to iterate IME. I’m curious to see how gemini cli works for that use case with this feature.


I would love to use gdb through an agent instead of directly. I spend so much time looking up commands and I sometimes skip things because I get impatient stepping over the next thing

