Hacker News | matthewbauer's comments

I don’t think you can do that. Or at least if you could, it would be an unintelligible version of English that would not seem much different from a programming language.

I agree with your conclusion but I don't think it'd necessarily be unintelligible. I think you can describe a program unambiguously using everyday natural language, it'd just be tediously inefficient to interpret.

To make it sensible you'd end up standardising the way you say things: words, order, etc and probably add punctuation and formatting conventions to make it easier to read.

By then you're basically just at a verbose programming language, and the last step to an actual programming language is just dropping a few filler words here and there to make it more concise while preserving the meaning.


I think so too.

However, I think there is a misunderstanding between being "deterministic" and being "unambiguous". Even C is an "ambiguous" programming language, but it is "deterministic" in that it behaves in the same ambiguous/undefined way under the same conditions.

The same can be achieved with LLMs too. They are "more" ambiguous, of course, and if someone doesn't want that, they have to resort to exactly what you just described. But that was not the point I was making.


I'm not sure there's any conflict with what you're saying, which I guess is that language can describe instructions which are deterministic while still being ambiguous in certain ways.

My point is just a narrower version of that: where language is completely unambiguous, it is also deterministic when interpreted in some deterministic way. In that sense plain, intelligible English can be a sort of (very verbose) programming language if you just ensure it is unambiguous, which is certainly possible.

It may be that this can still be the case if it's partly ambiguous but that doesn't conflict with the narrower case.

I think we're agreed on LLMs in that they introduce non-determinism in the interpretation of even completely unambiguous instructions. So it's all thrown out as the input is only relevant in some probabilistic sense.


I don't think it would be unintelligible.

It would be very verbose, yes, but not unintelligible.


Why not?

Here's a very simple algorithm: you tell the other person (in English) literally what key they have to press next. That way you can easily have them write all the Java code you want in a deterministic and reproducible way.
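To make the toy algorithm concrete, here's a small sketch (the key names and function are my own invention, just for illustration):

```python
def dictate(program: str) -> list[str]:
    """Turn a program into plain-English, key-by-key instructions."""
    names = {" ": "the space bar", "\n": "the enter key",
             "{": "the open-brace key", "}": "the close-brace key",
             ";": "the semicolon key"}
    return [f"Press {names.get(ch, 'the ' + repr(ch) + ' key')}"
            for ch in program]

# Each instruction is unambiguous English, so any listener who follows
# them reproduces the program exactly.
for step in dictate("int x;"):
    print(step)
```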

And yes, maybe that doesn't seem much different from a programming language which... is the point no? But it's still natural English.


There's also https://github.com/manzaltu/claude-code-ide.el if you're just using claude code.

I like that agent-shell just uses comint instead of a full vterm, but I find myself missing a deeper integration with claude that claude-code-ide has. Like with claude-code-ide you can define custom MCP tools that run Emacs commands.


> Like with claude-code-ide you can define custom MCP tools that run Emacs commands.

Should be possible in newer versions of agent-shell (see https://github.com/xenodium/agent-shell/pull/237)



I went by it a few weeks ago. There's a gate on the driveway, and I assume some kind of security presence. Probably no different than anyone under constant public scrutiny.


Hard to know what OP meant, but I took it as an oblique reference to China.


Well, is China a nation-state or a multi-national state, or essentially just a country(state)? English is my third language, so I just wonder do I miss some nuance here.


Not sure on the exact take of the OP, but:

Package maintainers often think in terms of constraints like 1.0.0 <= pkg1 < 2.0.0 and 2.5.0 <= pkg2 < 3.0.0. This tends to make total sense in the micro context of a single package, but IMO it always falls apart in the macro context. The problem is:

- constraints are not always right (say pkg1==1.9.0 actually breaks things)

- the combined constraints of all dependencies end up leaving very few degrees of freedom in constraint solving, so you can't in fact just take any pkg1 and use it

- even if you can use a given version, your package may have a hidden dependency on one of pkg1's dependencies that only becomes apparent once you start changing pkg1's version

Constraint solving is really difficult and while it's a cool idea, I think Nixpkgs takes the right approach in mostly avoiding it. If you want a given version of a package, you are forced to take the whole package set with you. So while you can't, say, take a version of pkg1 from 2015 and use it with a version of pkg2 from 2025, you can just take the whole 2015 Nixpkgs and get pkg1 & pkg2 from 2015.
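The "very few degrees of freedom" point can be seen with a toy solver over half-open version ranges (a simplified sketch; real solvers also handle transitive dependencies and backtracking):

```python
def intersect(ranges):
    """Intersect [lo, hi) version ranges; returns None if unsatisfiable."""
    lo = max(r[0] for r in ranges)
    hi = min(r[1] for r in ranges)
    return (lo, hi) if lo < hi else None

# Three packages each constrain pkg1 reasonably on their own...
constraints = [((1, 0), (2, 0)),   # app wants 1.x
               ((1, 4), (2, 0)),   # dep A needs >= 1.4
               ((1, 0), (1, 3))]   # dep B broke on 1.3+
print(intersect(constraints))      # None: no version satisfies all three
```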


There’s no clear definition of major/minor/patch versioning in most languages. Amazon did this reasonably well when I was there, though the patch version was implicitly assigned and the major and minor required humans to follow the convention:

You could not depend on a patch version directly in source. You could force a patch version other ways, but each package would depend on a specific major/minor and the patch version was decided at build time. It was expected that differences in the patch version were binary compatible.

Minor version changes were typically source compatible, but not necessarily binary compatible. You couldn’t just arbitrarily choose a new minor version for deployment (well, you could, but not while expecting it to go well).

Major versions were reserved for source or logic breaking changes. Together the major and minor versions were considered the interface version.

There was none of this pinning to arbitrary versions or hashes (though, you could absolutely lock that in at build time).

Any concept of package (version) set was managed by metadata at a higher level. For something like your last example, we would “import” pkg2 from 2025, bringing in its dependency graph. The 2025 graph is known to work, so only packages that declare dependencies on any of those versions would be rebuilt. At the end of the operation you’d have a hybrid graph of 2015, 2025, and whatever new unique versions were created during the merge, and no individual package dependencies were ever touched.
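The rule described above (source depends on major.minor, patch chosen at build time) can be sketched like this; the data and function names are hypothetical, not Amazon's actual tooling:

```python
def resolve_patch(available, major, minor):
    """Pick the newest patch for a fixed interface version (major.minor)."""
    candidates = [v for v in available if v[:2] == (major, minor)]
    return max(candidates) if candidates else None

available = [(1, 2, 0), (1, 2, 5), (1, 3, 1), (2, 0, 0)]
# Source declares a dependency on interface version 1.2;
# the build system fills in the patch component.
print(resolve_patch(available, 1, 2))  # (1, 2, 5)
```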

The rules were also clear. There were no arbitrary expressions describing version ranges.


For the record, Amazon's Builder Tools org (or ASBX or whatever) built a replacement system years ago, because this absolutely doesn't work for a lot of projects and is unsustainable. They have been struggling for years to figure out how to move people off it.

Speaking at an even higher level, their system has been a blocker to innovation, and it introduces unique challenges to solving software supply chain issues.

Not saying there aren't good things about the system (I like cascading builds, reproducibility, buffering from 3p volatility) but I wouldn't hype this up too much.


> Constraint solving is really difficult and while it’s a cool idea, I think Nixpkgs takes the right approach in mostly avoiding it. If you want a given version of a package, you are forced to take the whole package set with you.

Thank you, I was looking for an explanation of exactly why I hate Nix so much. It takes a complicated use case, and tries to "solve" it by making your use-case invalid.

It's like the Soylent of software. "It's hard to cook, and I don't want to take time to eat. I'll just slurp down a bland milkshake. Now I don't have to deal with the complexities of food. I've solved the problem!"


It’s not an invalid use case in nixpkgs. It’s kind of the point of package overlays.

It removes the “magic” constraint solving that seemingly never works and pushes it to the user to make it work


> I was looking for an explanation of exactly why I hate Nix so much

Note that the parent said "I think Nixpkgs takes the right approach in mostly avoiding it". As others have already said, Nix != Nixpkgs.

If you want to go down the "solving dependency version ranges" route, then Nix won't stop you. The usual approach is to use your normal language/ecosystem tooling (cabal, npm, cargo, maven, etc.) to create a "lock file"; then convert that into something Nix can import (if it's JSON that might just be a Nixlang function; if it's more complicated then there's probably a tool to convert it, like cabal2nix, npm2nix, cargo2nix, etc.). I personally prefer to run the latter within a Nix derivation, and use it via "import from derivation"; but others don't like importing from derivations, since it breaks the separation between evaluation and building. Either way, this is a very common way to use Nix.

(If you want to be even more hardcore, you could have Nix run the language tooling too; but that tends to require a bunch of workarounds, since language tooling tends to be wildly unreproducible! e.g. see http://www.chriswarbo.net/projects/nixos/nix_dependencies.ht... )
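The "convert that into something Nix can import" step can be quite small when the lock file is JSON; here's a hypothetical sketch (the lock-file shape and attribute names are assumptions, not any real tool's format):

```python
import json

def lock_to_nix(lock_json: str) -> str:
    """Render a JSON lock file as a Nix attribute set of pinned sources."""
    lock = json.loads(lock_json)
    entries = [f'  "{name}" = {{ url = "{e["url"]}"; sha256 = "{e["sha256"]}"; }};'
               for name, e in sorted(lock["packages"].items())]
    return "{\n" + "\n".join(entries) + "\n}"

lock = '{"packages": {"left-pad": {"url": "https://example.com/left-pad-1.0.tgz", "sha256": "abc123"}}}'
print(lock_to_nix(lock))
```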


I mean, you can do it in Nix using overlays and overrides. But it won’t be cached for you and there’s a lot of extra fiddling required. I think it’s pretty much the same as how Bazel and Buck work. This is the future, like it or not.


What? Isn't wasm just a bytecode? How could you write rules in wasm?


It used to be that you could buy fractions of a mutual fund, but not ETFs. Recently, though, brokerages have started allowing you to do fractional ETFs as well.


Very cool! It looks like it assumes everything is flat, but I bet you could pull in elevation data from OSM as well.


It does use elevation data, but does not exaggerate it, I guess. Back when I was working with 3D maps, we noticed that many people liked exaggerated terrain heights better, especially when the terrain is viewed from above and realistic heights looked “flat”. Near where I live it looks fairly close to what it does in real life: https://streets.gl/#48.50063,8.99766,7.25,312.50,135.56 (granted, having added building and roof colors for almost all buildings also helps).


The 4-story building I live in is rendered like a basement-only dwelling, which is actually growing on me the more I look at it...


You could add a building:levels value to the object in OpenStreetMap, to record the information of how many stories your building has.

https://wiki.openstreetmap.org/wiki/Key:building:levels


While a good idea in general (the StreetComplete app makes this very easy, by the way), this won't help for this app, as the data is from September 2023. Otherwise I'd love to use it more to validate how renderers handle different buildings. F4Map should show the change fairly quickly, though.


Had never heard of the "Mayaguez" until now. It actually happened under President Ford, not Carter.

