
For people who have used both vega and Apache echarts, how do they compare?


I'd be interested in that too. In my opinion, Apache ECharts is more powerful and easier to use.


Just a shout out to nix-darwin[1]. It is nix, so initial setup is a bit involved. But then it truly makes it easy to configure everything in one place: mac defaults, Homebrew apps, and Mac App Store (mas) apps, all declaratively.

There is a sample config in nix-darwin repo[2].

[1] https://github.com/LnL7/nix-darwin

[2] https://github.com/LnL7/nix-darwin/blob/master/modules/examp...
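As a rough taste of what that looks like, here is a minimal nix-darwin module sketch; the option names come from nix-darwin's module set as I remember them, so treat them as approximate and check the sample config above:

```nix
# Hedged sketch of a nix-darwin module: macOS defaults, Homebrew casks
# and Nix packages configured declaratively in one place.
# Option names are approximate; verify against the nix-darwin option docs.
{ pkgs, ... }:
{
  # A macOS "defaults write" setting, managed declaratively
  system.defaults.dock.autohide = true;

  # Homebrew casks, still installed via brew but declared here
  homebrew.enable = true;
  homebrew.casks = [ "firefox" ];

  # Ordinary Nix packages
  environment.systemPackages = [ pkgs.ripgrep ];
}
```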


This is also my favorite way of setting up MacBooks. I love being able to set up all my Linux, macOS and Windows[1] machines from a single flake.

[1]: https://lgug2z.com/articles/managing-dotfiles-on-windows-11-...


Sounds like too much hassle for something you only come back to every few years, by which point the nix commands and procedures have changed anyway.


I skip nix-darwin and use just home-manager. Does all I need. Don't really see the point of adding nix-darwin.


I have been exploring nix for the past few months, and my experience has been simultaneously exhilarating and frustrating. On one hand, I find it hard to imagine not using nix now; on the other, I hesitate to recommend it to colleagues due to its steep learning curve, UX issues and potential for footguns.

I sincerely hope that the nix community improves the UX to make it more accessible to new users. For those willing to invest time in learning it, though, nix is extremely useful and highly recommended.


There are two sides to this problem: the first is to improve the UX, and the second is to clearly describe a compelling reason for people to adopt. It is very tempting to blame only the first, but I think we also need to tell a better story and highlight the values more clearly. That would give people a reason to push past the UX issues in the hope of achieving those values.

For example, people seem to have accepted the difficulties of using terraform - learning its language, needing to hire specialists - because the mantra of "infrastructure as code" is enough to drive adoption. What is our mantra? We need to accept that "reproducibility" isn't quite working, and that we need either a clearer message or a better explanation of the current one.


> but the second is to clearly describe a compelling reason for people to adopt.

Build source straight from Git:

   nix run github:someuser/someproject
Need a different version?

   nix run github:someuser/someproject?ref=v1.0.0
Wanna replace some dependency?

   nix run \
      --override-input somelib github:someotheruser/somelib \
      github:someuser/someproject
Wanna fork?

   git clone https://github.com/someuser/someproject
   # do your changes
   nix run someproject/
And the best part is, it's conceptually very simple, it's mostly just a bunch of symlinks and environment variables behind the scenes. If you wanna inspect what's in a package, just `cd /nix/store/yourpackage-HASH` and look around.

NixOS just feels like a distribution built from the ground up for Free Software. "Reproducibility" in everyday use just means that stuff won't randomly break for no reason. And if you don't wanna go the full NixOS route, you can just install the Nix package manager itself on any other distribution.

That said, the part where Nix gets painful is when it has to interact with the rest of the software world. Things like software that wants to auto-update itself really does not fit into the Nix ecosystem at all and can be rather annoying to get to work.

There are of course numerous other pain points, missing features and all that. But being able to flip between versions, fork, compile and all that feels just so much better than anything else.


I disagree, I think the value proposition for reproducibility is clear, it's just that the learning curve "is too damn high!" I'm highly motivated to learn and use Nix (or Guix for that matter) but I've bounced off of it three or four times now, and I'm the kind of weirdo who learns new PLs for fun.

Someone once said that you don't learn Nix, you reverse engineer it.


I think the reason Nix is hard isn't the language (although a lazily evaluated functional language is definitely part of it) but the paradigm shift - and that shift is necessary for reproducibility.

Imagine a distro where you couldn't depend on existing state (so you couldn't just compile something into /usr/local), and where you had to create a package definition with all dependencies explicitly declared.

Kind of like using FreeBSD ports (or Gentoo) without precompiled packages, where you were forced to add every package manually before using it. You would complain that was hard too, even if all you had to write was a Makefile and a shell script.

I don't think this will get easier until Nix is embraced by other package managers, making it easy to specify those components as dependencies. Now that flakes are composable, this is possible.


I agree with you. I can see the value of reproducibility, declarative system setup, etc.

I would love that. But there's no way I'm fighting software with such a bad UX.


Two of the major differences between terraform and Nix that I see are 1. it’s possible to muddle through in terraform and 2. Hashicorp has put a non-trivial amount of effort into documentation for all levels of users. I’ve taken a stab at using Nix for Rust projects and could not even get to a point where I had something that functioned. I found plenty of material online but was it out of date, idiosyncratic, did it use flakes or not, etc etc? I suppose I could have contorted my existing project to meet the examples I found in various GitHub repos but my stuff is bog standard Rust so I don’t know that I’d be willing to. As for documentation, what should I, as a new and invested user, be looking for? Are flakes the future? Are they a distraction? Why are all the suggested docs I could find several year old guides on blogs? There’s 20 years of information floating around and the official project documentation, well, I don’t know who the audience is but it’s not learners.

It’s a shame. The promise of Nix/NixOS is really interesting — being able to deterministically create VMs with a custom user land is desirable to me — but in practice I can’t even get a simplistic project to compile, let alone something elaborate. Terraform is jank but it’s not a whole language that needs to be learned, seemingly, before the official docs start to become coherent in their underlying context.


I don't think it's very valid to compare the two. It is somewhat fair to compare the experience of using them, but they aren't meant to solve the same set of issues. In fact, they are better together, in my experience: I use nix to manage my terraform configurations with a lot of success. It reduces my boilerplate and helps me build abstractions on top of HCL.

If you ever decide to take a stab at nix again, consider looking at https://github.com/ipetkov/crane and using flakes. I've got it down to the point that I can get a new rust project set up with nix in about 30 seconds with linting, package building, and test running all in the checks


Crane is one of the libraries(?) I came across. Couldn’t get it to work on an existing multi-crate workspace project. The crane documentation as-is didn’t provide enough context to debug the errors I saw in the process of trying to muddle through. And then looking further afield ran into all the documentation, bootstrapping issues I alluded to above. Although I don’t doubt I could start a new project the point was to add new capability to existing work, for me.


> We need to accept that "reproducibility" isn't quite working...

I guess it doesn't sell Nix as strongly as it could.

But, it's hardly for a lack of enthusiasm on Nix users' part. -- Rather, I've seen a few "what's nix good for anyway" comments, and those result in many lengthy replies extolling nix.


Agreed, reproducibility is only one aspect of Nix and doesn't quite capture the whole picture. That's why so many newcomers see Nix as nothing more than a Docker replacement. There are also too many misconceptions about Nix the language that scare people off.

I'd like to see more being discussed about:

* Its unique ability to treat packages as programmable data (i.e., derivations)

* Its use case as a building block for deployment systems that know about and integrate with packages

* Its JSON-like simplicity

They're all central to the Nix experience, and yet they're often overlooked in Nix discussions.


> Its unique ability to treat packages as programmable data (i.e., derivations)

How is this useful in practice?


We find it immensely useful to program our packages (technically, "derivations"):

It's trivial to combine multiple packages into one, e.g. via the 'nixpkgs.buildEnv' function. We use this to define a package called 'tools', containing shells, linters, differs, VCS, converters, data processors, etc. Our devs only need to install one package; and if they find something useful enough to share with the team, they can add it to that 'tools' package.

This approach is also modular/compositional: we define a 'minimal-tools' package containing bash, coreutils, git, diffutils, sed, grep, archivers, compressors, etc. which is enough for most shell scripts. Our 'tools' package is defined using that 'minimal-tools' package, plus a bunch of more-interactive tools like process monitors, download managers, etc. The reason we made this modular is so each of our projects can include that 'minimal-tools' package in their development environments, alongside project-specific tooling like interpreters (NodeJS, Python, JVM, etc.), compilers, code formatters, etc. (depending on the whole 'tools' package felt like bloat, and was easy to avoid)

(Outside of work, my personal NixOS config takes this even further; defining different packages for e.g. media tools, development tools, document tools, audio tools, etc. and splitting those into separate packages for gui/cli tools. That's not particularly "useful in practice"; I just like to keep my system config organised!)
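A sketch of the 'tools'/'minimal-tools' layering described above might look like the following; the exact package lists are illustrative, not our actual setup:

```nix
# Hedged sketch: composing environments with pkgs.buildEnv.
{ pkgs ? import <nixpkgs> {} }:
rec {
  # Enough for most shell scripts
  minimal-tools = pkgs.buildEnv {
    name = "minimal-tools";
    paths = with pkgs; [ bashInteractive coreutils git diffutils gnused gnugrep gnutar gzip ];
  };

  # The full interactive toolset, built on top of minimal-tools
  tools = pkgs.buildEnv {
    name = "tools";
    paths = [ minimal-tools ] ++ (with pkgs; [ htop ripgrep jq ]);
  };
}
```

Devs then install just the `tools` attribute, while project dev environments pull in `minimal-tools` plus their own interpreters and compilers.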

Another very common way of programming with packages is to override their definitions. For example, we override the whole of Nixpkgs to use the 'jre11_headless' JVM. This is done by the following 'overlay' function (all of the dependency-propagation happens automatically, since Nixpkgs uses laziness):

  self: super: {
    jre = super.adoptopenjdk-jre-hotspot-bin-11;
    jre_headless = self.jdk;
    jdk = super.jdk11_headless;
  }
Overriding is also useful for individual packages, e.g. if we want to alter something deep down a dependency chain. It's also useful for applying patches or running a "fixup" script to the source, without having to e.g. fork a git repo.


It's super easy to extend packages. For example, I was playing with Kafka and wanted the standard package to coexist with a separate install of Kafka with some jar files I needed. It was super easy to create a new package of Kafka with extra install steps that downloaded and placed those jar files where I needed.
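In rough terms, that kind of extension can look like the sketch below; the attribute name `apacheKafka`, the jar URL and the hash are placeholders, not the actual package I built:

```nix
# Hedged sketch: extend an existing Kafka package with an extra jar.
# Nix fetches the file separately, and the extended install step
# copies it into the package's libs/ directory.
{ pkgs ? import <nixpkgs> {} }:
let
  extraJar = pkgs.fetchurl {
    url = "https://example.org/some-connector.jar"; # placeholder
    hash = "sha256-AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA="; # placeholder
  };
in
pkgs.apacheKafka.overrideAttrs (old: {
  postInstall = (old.postInstall or "") + ''
    cp ${extraJar} $out/libs/
  '';
})
```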


I'm surprised that nix never ended up using augeas for package configuration, because last I checked, every upstream build option has to be reproduced in the nix script by the packager.


It's easy enough for users to extend existing packages in Nix that it's never necessary for packagers to explicitly add support for every conceivable upstream build option out there. For example, the following three lines will create a new package based on an existing one with custom configure flags added.

    existingPackage.overrideAttrs (old: {
      configureFlags = old.configureFlags ++ [ "--custom-build-option" ];
    })
Package options that tweak the build flags are there for convenience and not a requirement. Many of them are there for internal use in the Nixpkgs repo to provide variants of the same package.


I wonder if the corporate backing behind Docker has anything to do with Nix being adopted less. There’s a lot of overlap between Nix and Docker, and Docker had major corporate guns behind it from early on. Reproducibility as you mentioned, is easily achieved with Docker, and there is no need to learn the Nix ecosystem.

Personally, I want to learn Nix, but I've never forced myself to do it because it’s so much easier to make a Docker image do what I want. Nix is the pinnacle of a reproducible environment that doesn’t randomly break, but Docker is 80% of the way there and much easier.

It’s like comparing trucks (Docker) with planes (Nix) for logistics. I can pay out the wazoo for a plane to get my package there over night, or I can pay a small fraction to wait a few days.


> Reproducibility as you mentioned, is easily achieved with Docker, and there is no need to learn the Nix ecosystem.

Docker isn't reproducible; I have no idea where this myth came from. The core feature of Docker is the ability to snapshot ("image") the result of a script ("Dockerfile"). That's no more "reproducible" than a statically-linked binary.

> Personally, I want to learn Nix, but I've never forced myself to do it because it’s so much easier to make a Docker image do what I want. Nix is the pinnacle of a reproducible environment that doesn’t randomly break, but Docker is 80% of the way there and much easier.

Personally, I find Docker incredibly difficult. I finally bit the bullet when I took over maintenance of some systems at work, which had been written with Docker. The documentation was awful, the management systems keep falling over, and we have no idea what's actually running (it's tagged ":latest", but so is everything).

I avoid using anything Docker-related when I'm building new systems. It's easier to build container images with jq, tar and sha256sum anyway.
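To make that last claim concrete, here is a hedged sketch of building a single-layer OCI image layout by hand with tar and sha256sum (printf stands in for jq when writing the JSON); the file contents are illustrative, and pushing to a registry isn't shown:

```shell
#!/bin/sh
# Hedged sketch: a hand-rolled single-layer OCI image layout.
set -eu
mkdir -p image/blobs/sha256 rootfs
echo hello > rootfs/greeting

# 1. Layer blob: a tar of the root filesystem, stored under its own digest
tar -C rootfs -cf layer.tar .
layer=$(sha256sum layer.tar | cut -d' ' -f1)
layer_size=$(wc -c < layer.tar | tr -d ' ')
mv layer.tar "image/blobs/sha256/$layer"

# 2. Config blob, referencing the layer's uncompressed digest
printf '{"architecture":"amd64","os":"linux","rootfs":{"type":"layers","diff_ids":["sha256:%s"]}}' \
  "$layer" > config.json
config=$(sha256sum config.json | cut -d' ' -f1)
config_size=$(wc -c < config.json | tr -d ' ')
mv config.json "image/blobs/sha256/$config"

# 3. Manifest tying config and layer together; its digest names the image
printf '{"schemaVersion":2,"config":{"mediaType":"application/vnd.oci.image.config.v1+json","digest":"sha256:%s","size":%s},"layers":[{"mediaType":"application/vnd.oci.image.layer.v1.tar","digest":"sha256:%s","size":%s}]}' \
  "$config" "$config_size" "$layer" "$layer_size" > image/manifest.json
echo "wrote image/ with $(ls image/blobs/sha256 | wc -l) blobs"
```

The manifest's sha256 is what a registry would know the image by, which is exactly the content-addressing that mutable tags like ":latest" paper over.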


Docker serves the basic functionality of a container as well as any other. The workflows that forced you to migrate away from it are more complicated, I’m sure, than a procedurally defined dev environment.

What you’re calling a snapshot is what I meant when I said reproducible. Images allow me to have reproducible starting points to add other parts to my environment. Over time they get out of date depending on exactly what happens, but it’s much more controllable than a basic Ubuntu installation.

Your comment further pushes me away from Nix, really, even without mentioning it. It makes it seem even more like a tool that’s not for me, but rather for much more complicated things.


> Docker serves the basic functionality of a container as well as any other

I disagree. We've had multiple production outages caused by the Docker daemon misbehaving (usually causing us to run out of disk space). It also makes operations far more difficult than necessary; e.g. want to copy a .tar.gz file to AWS? Sorry, Docker's gonna insert itself in the workflow, and over-complicate the authentication[1]. Instead, I have a script which runs [2] in a loop; much easier!

> The workflows that forced you to migrate away from it are more complicated, I’m sure, than a procedurally defined dev environment.

The container I mentioned literally just runs `java -jar` with a pre-built .jar file. Nothing fancy. Still, I have no idea what it's running, since Docker allows mutable tags, and there's no reference to a version number, let alone a git revision.

> Your comment further pushes me away from Nix, really, even without mentioning it. It makes it seem even more like a tool that’s not for me

I wasn't talking about Nix, I was talking about Docker. They're very different tools (Docker manages running processes/containers; Nix describes the contents of files). I just really hate Docker, and am baffled when people say it's "easy".

> but rather for much more complicated things.

You're giving me too much credit. It took me days to even get Docker installed. It turned out that despite a mountain of documentation telling me to install "Docker Machine", that's actually been discontinued for years and I should have been installing "Docker Desktop" instead.

[1] https://docs.aws.amazon.com/AmazonECR/latest/userguide/getti...

[2] https://docs.aws.amazon.com/cli/latest/reference/ecr/upload-...


On the other hand Docker was quite easy for me to install... because I used nix to do it.


What I see as a barrier to entry for Nix/NixOS is not the UX, but the available documentation, or the lack thereof. One may consider the docs part of the UX, though. I am in the process of writing a book about NixOS; you may track the progress here: https://drakerossman.com/blog/practical-nixos-the-book

The emphasis is on making it as practical as possible, plus covering topics which may apply to Linux in general, in great detail.


Your email form to follow the book is returning errors to me. And you have no other contact channels I can find. Might want to look into that.


Thank you for pointing that out! I have a twitter with the same handle, if you would like to subscribe to that. If not, would you mind a follow-up email when I fix the subscription form?


Excited to see this in progress. Have you checked out the docs effort spun up by the Nix Documentation team?


Yes, and was, to be quite honest, not quite impressed. They lay a foundation stone, but don't actually go that much further.


Hey! We've been working at Cachix to address this using https://devenv.sh/.

It's primarily targeted at improving DX and on-boarding users quickly.


I think this project is really promising.

My main concern is that it puts another layer of abstraction atop an already complex (and at times leaky) abstraction.

I’d love to see more clear docs about what devenv is actually doing under the covers, and how to escape-hatch into Nix land when I inevitably need to tweak something.

Also, similarly, how do I map Nix docs (often just a set of example expressions) into equivalent devenv incantations?

(It’s been a few months since I last looked so maybe things have come along since then.)


I think that's a valid concern. You read the nix pills and think you know what you're doing and then it turns out that the community has wrapped the things you've learned about in things you've never heard of, so you still can't learn from other people's repos.


This. I've been actively trying nix-based tooling on and off for my projects because it legitimately solves the problems around sandboxing, versioning, reproducibility, consistency, etc. Recently, I've been trying asdf (and its faster alternative, rtx), and I keep telling myself "huh, nix could solve this problem better". But, damn, it is infuriating to learn. Like another commenter said, it is the escape hatch that I'm missing so much. I really, really want to convince myself to learn nix The Right Way. But it feels like you face yet another learning curve when using a nix-wrapper tool.

I still have high hopes for the future of nix. And I believe nix will rise in popularity once the UX-related issues are sorted.


One thing that has helped somewhat is being more thorough in my exploration of repos that are doing things similar to what I'm trying to do.

I want to use nix with a nim project, so I wrote a script that walks through all of the repos in nimble (nim's package manager) and then filtered for ones that had a flake.nix in the repo root.

Going through them was a helpful dose of context. It's like you gotta approach it from theory to practice and from practice to theory and eventually your efforts will meet in the middle. I think. I have glimmers of meeting in the middle happening.


Thanks Domen,

devenv.sh is quite nice, and I have already started using it.


Do you still experience these UX issues and lack of documentation?


DevEnv looks neat, maybe I’ll try it.

I expect my main hurdle will be stuff that isn’t in Nixpkgs. Especially for dev stuff, I’m going to want to pull in low-profile GitHub stuff or Python packages. What’s the escape hatch to bring those into the dev environment?


> What’s the escape hatch to bring those into the dev environment?

The function `pkgs.runCommand` is useful if you just want to run some Bash commands https://nixos.org/manual/nixpkgs/stable/#trivial-builder-run...

The main difference compared to running commands in a normal terminal is that builds are sandboxed, with no network access by default:

- If your commands need to download some particular files, you can have Nix fetch them separately (e.g. using `fetchurl`, `fetchGit`, etc.) and provide them to your commands via env vars. See https://nixos.org/manual/nixpkgs/stable/#chap-pkgs-fetchers

- If you don't know what will be downloaded, or there's no way to run in an 'offline' mode, then you can specify a hash for the result (making it a "fixed output derivation"). That will give it network access, and Nix will check that the output matches the given hash for reproducibility (you can just make up a random hash to start with; Nix will reject the result, telling you its hash, which you can copy/paste into the definition :) )
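As a concrete sketch of the first option (the URL and hash below are placeholders), fetching a file explicitly and consuming it in a sandboxed `runCommand` build looks like:

```nix
# Hedged sketch: fetch separately, then build offline inside the sandbox.
{ pkgs ? import <nixpkgs> {} }:
let
  src = pkgs.fetchurl {
    url = "https://example.org/data.tar.gz"; # placeholder
    # Placeholder hash; on mismatch, Nix prints the real one to paste in.
    hash = "sha256-AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=";
  };
in
pkgs.runCommand "unpacked-data" { inherit src; } ''
  mkdir -p $out
  tar -xzf $src -C $out
''
```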

The reasons I like this approach include:

- Bash is familiar/traditional and mostly-compatible with e.g. official install instructions provided by many projects, Stack Overflow answers, blog posts and tutorials, etc.

- Powerful/unrestricted, in case we need to do some fiddling between some steps

- Nix often reveals problems with those familiar/traditional instructions; e.g. if some deeply-nested part of an installer happens to run Python, it will fail if Python wasn't explicitly listed in its dependencies (AKA `buildInputs`). Revealing and fixing such things up-front avoids the "works on my machine" problem.

- Bash commands are often tedious and inflexible; so after writing a few of these we may find ourselves wanting more structure, more reusable parts, etc. which is exactly what the helper functions in Nixpkgs provide (like `pkgs.stdenv.mkDerivation`, `pkgs.pythonPackages.buildPythonApplication`, etc.). In contrast, starting off with those helper functions can seem overwhelming, and the benefits may be hard to appreciate immediately.


> What’s the escape hatch to bring those into the dev environment?

One thing you can do is try the Nix package manager on your Linux or macOS system. -- If it's not in nix, you can just do it the way you did things before.

If you want to try NixOS:

1. Writing a package is only necessary for sharing code; you can still just have Python, set up a virtual env, and run the program as you would.

2. You could use a tool like Podman or Docker, to run stuff in containers. I've heard stuff like https://github.com/containers/toolbox helps this.

(Among other solutions).


For python, you could just install bare python with venv and/or poetry and manage your dev python dependencies outside nix. For example, if you were using devenv.sh, there is an example here: https://github.com/cachix/devenv/tree/main/examples/python-p...
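For what it's worth, a devenv.nix for that might be as small as the following sketch; the option names are from devenv's docs as I recall them, so double-check them there:

```nix
# Hedged sketch of a devenv.nix enabling Python with a managed venv
# and poetry; option names approximate.
{ pkgs, ... }:
{
  languages.python = {
    enable = true;
    venv.enable = true;    # let devenv manage a project-local virtualenv
    poetry.enable = true;  # dependencies still managed by poetry
  };
}
```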


This looks like a really nice UX improvement on top of typical Flake devshells.


Currently working on a graphical UX where you can create and share flakes without writing Nix code at https://mynixos.com

The site makes it easy to browse indexed flakes and configure flakes via options and packages. Hopefully the structure provided by the UI can make it easier to get started with Nix flakes :)


Very cool!


What are the footguns? I feel that footgun descriptions always have the most insightful bits about technology.


A few antipatterns/annoyances I've come across over the years:

Importing paths based on environment variables:

There is built-in support for this, e.g. setting the env var `NIX_PATH` to `a=/foo:b=/bar`, then the Nix expressions `<a>` and `<b>` will evaluate to the paths `/foo` and `/bar`, respectively. By default, the Nix installer sets `NIX_PATH` to contain a copy of the Nixpkgs repo, so expressions can do `import <nixpkgs>` to access definitions from Nixpkgs.

The reason this is bad is that env vars vary between machines, and over time, so we don't actually know what will be imported.

These days I completely avoid this by explicitly un-setting the `NIX_PATH` env var. I only reference relative paths within a project, or else reference other projects via explicit git revisions (e.g. I import Nixpkgs by pointing the `fetchTarball` function at a github archive URL)

Channels:

These always confused me. They're used to update the copy of Nixpkgs that the default `NIX_PATH` points to, and can also be used to manage other "updatable" things. It's all very imperative, so I don't bother (I just alter the specific git revision I'm fetching, e.g. https://hackage.haskell.org/package/update-nix-fetchgit helps to automate such updating).

Nixpkgs depends on $HOME:

The top-level API exposed by the Nixpkgs repository is a function, which can be called with various arguments to set/override things; e.g. when I'm on macOS, it will default to providing macOS packages; I can override that by calling it with `system = "x86_64-linux"`. All well and good.

The problem is that some of its default values will check for files like ~/.nixpkgs/config.nix, ~/.config/nixpkgs/overlays.nix, etc. This causes the same sort of "works on my machine" headaches that Nix was meant to solve. See https://github.com/NixOS/nixpkgs/blob/master/pkgs/top-level/...

I avoid this by importing Nixpkgs via a wrapper, which defaults to calling Nixpkgs with empty values to avoid its impure defaults; but still allows me to pass along my own explicit overrides if needed.
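Such a wrapper can be as small as the following sketch; the Nixpkgs revision and hash are placeholders:

```nix
# Hedged sketch: import a pinned Nixpkgs with impure defaults emptied
# out, while still allowing explicit overrides from the caller.
args:
import (fetchTarball {
  url = "https://github.com/NixOS/nixpkgs/archive/REV.tar.gz"; # placeholder rev
  sha256 = "0000000000000000000000000000000000000000000000000000"; # placeholder
}) ({ config = {}; overlays = []; } // args)
```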

The imperative nix-env command:

Nix provides a command called 'nix-env' which manages a symlink called ~/.nix-profile. We can run commands to "install packages", "update packages", "remove packages", etc. which work by building different "profiles" (Nix store paths containing symlinks to a bunch of other Nix store paths).

This is bad, since it's imperative and hard to reproduce (e.g. depending on what channels were pointing to when those commands were run, etc.). A much better approach is to write down such a "profile" explicitly, in a git-controlled text file, e.g. using the `pkgs.buildEnv` function; then use nix-env to just manage that single 'meta-package'.

Tools which treat Nix like Apt/Yum/etc.

This isn't something I've personally done, but I've seen it happen in a few tools that try to integrate with Nix, and it just cripples their usefulness.

Package managers like Apt have a global database, which maps manually-written "names" to a bunch of metadata (versions, installed or not, names of dependencies, names of conflicting packages, etc.). In that world names are unique and global: if two packages have the name "foo", they are the same package; clashes must be resolved by inventing new names. Such names are also fetchable/realisable: we just plug the name and "version number" (another manually-written name) into a certain pattern, and do a HTTP GET on one of our mirrors.

In Nix, all the above features apply to "store paths", which are not manually written: they contain hashes, like /nix/store/wbkgl57gvwm1qbfjx0ah6kgs4fzz571x-python3-3.9.6, which can be verified against their contents and/or build script (AKA 'derivation'). Store paths are not designed to be managed manually. Instead, the Nix language gives us a rich, composable way to describe the desired file/directory; and those descriptions are evaluated to find their associated store paths.

Nixpkgs provides an attribute set (AKA JSON object) containing tens of thousands of derivations; and often the thing we want can be described as 'the "foo" attribute of Nixpkgs', e.g. '(import <nixpkgs> {}).foo'

Some tooling that builds-on/interacts-with Nix has unfortunately limited itself to only such descriptions; e.g. accepting a list of strings, and looking each one up in the system's default Nixpkgs attribute set (this misunderstanding may come from using the 'nix-env' tool, like 'nix-env -iA firefox'; but nix-env also allows arbitrary Nix expressions too!). That's incredibly limiting, since (a) it doesn't let us dig into the structure inside those attributes (e.g. 'nixpkgs.python3Packages.pylint'); (b) it doesn't let us use the override functions that Nixpkgs provides (e.g. 'nixpkgs.maven.override { jre = nixpkgs.jdk11_headless; }'); (c) it doesn't let us specify anything outside of the 'import <nixpkgs> {}' set (e.g. in my case, I want to avoid NIX_PATH and <nixpkgs> altogether!)

Referencing non-store paths:

The Nix language treats paths and strings in different ways: strings are always passed around verbatim, but certain operations will replace paths by a 'snapshot' copied into the Nix store. For example, say we had this file saved to /home/chriswarbo/default.nix:

  # Define some constants
  with {
    # Import some particular revision of Nixpkgs
    nixpkgs = import (fetchTarball {...}) {};

    # A path value, pointing to /home/chriswarbo/defs.sh
    defs = ./defs.sh;

    # A path value, pointing to /home/chriswarbo/cmd.sh
    cmd = ./cmd.sh;
  };
  # Return a derivation which builds a text file
  nixpkgs.writeScript "my-super-duper-script" ''
    #!${nixpkgs.bash}/bin/bash
    source ${nixpkgs.lib.escapeShellArg defs}
    ${cmd} foo bar baz
  ''
Notice that the resulting script has three values spliced into it via ${...}:

- The script interpreter `nixpkgs.bash`. This is a Nix derivation, so its "output path" will be spliced into the script (e.g. /nix/store/gpbk3inlgs24a7hsgap395yvfb4l37wf-bash-5.1-p16 ). This is fine.

- The path `cmd`. Nix spots that we're splicing a path, so it copies that file into the Nix store, and that store path will be spliced into the script (e.g. /nix/store/2h3airm07gp55rn9qlax4ak35s94rpim-cmd.sh ). This is fine.

- The string `nixpkgs.lib.escapeShellArg defs`, which evaluates to the string `'/home/chriswarbo/defs.sh'`, and that will be spliced into the script. That's bad, since the result contains a reference to my home folder! The reason this happens is that paths can often be used as strings, getting implicitly converted. In this case, the function `nixpkgs.lib.escapeShellArg` transforms strings (see https://nixos.org/manual/nixpkgs/stable/#function-library-li... ), so:

- The path `./defs.sh` is implicitly converted to the string `/home/chriswarbo/defs.sh`, for input to `nixpkgs.lib.escapeShellArg` (NOTE: you can use the function `builtins.toString` to do the same thing explicitly)

- The function `nixpkgs.lib.escapeShellArg` returns the same string, but wrapped in apostrophes (it also adds escaping with backslashes, but our path doesn't need any)

- That return value is spliced as-is into the resulting script

To avoid this, we should instead splice the path into a string before escaping; giving us nested splices like this:

    source ${nixpkgs.lib.escapeShellArg "${defs}"}


The problem with Nix is that I still have to start with a Linux system--so I still need Docker, Terraform, something to give me a stable base for Nix to work against.

At that point--why should I add Nix to the mess since I still need those other things anyway?


For what reason do you "still need Docker?"

With Linux, the only stable base required for Nix to function is the kernel. Nix packages all the required dependencies right down to glibc. Since the Linux kernel famously "doesn't break userspace," any sufficiently new kernel would suffice. Until recently, I've been able to get the latest Nix packages working on an ancient Linux 2.6 kernel. And even the kernel can be managed with Nix if you use NixOS. But Docker can't, so it's no use here.

As for Terraform, I don't see how it's relevant to this discussion. Nix-based SSH deployment tools can replace some of its functionality, so perhaps that's what you're talking about?


This is not true. Nix is supported on several non-Linux platforms, including macOS and Windows WSL.


WSL2 is very much Linux. Microsoft got tired of trying to reimplement Linux kernel interfaces so they just ship the Linux kernel in a VM.


I find there is a perfect middle ground here (applies to multiple languages like this.)

Nix + LLM.

Especially if using GPT4 for quality. You get all of the usefulness of Nix, but created or edited using plain English.

And then of course at the end you output the nix as an artifact to live in version control.


> On one hand, I find it hard to imagine not using nix now

for what specifically?


Standard ones I would like to try are Solid/Svelte/Vue

- Lit if I wanted to share components with other teams

- Marko if the site had lots of static content

- Qwik if speed was paramount

- Fresh too looks interesting



Interesting to see that fake historical quotes espousing “we will defeat those highly admirable yet lesser races through subversion” isn’t limited to anti-semites


> tomberek

1. Two independent use cases.

a) First was a personal use to carry my dotfiles/home config everywhere (office mac, home mac, virtual machines). Ended up configuring a flakes based home-manager setup.

b) Second is trying to get good devshell experience for the whole team. Especially in a monorepo setting with multiple languages and tools. Right now there is tons of setup required including installing right versions of java, python, nodejs, lots of instructions to get right python version, poetry install, setup aliases etc.

2. No, it is not. It didn't take much research to conclude that flakes are the way to go. Honestly, I am confused by so many responses about the confusion between flakes and non-flakes, as I don't think it takes much effort to realize flakes are the way to go, even though they are experimental.

Though, to be fair, it does hurt that two of the recommended resources - nix.dev and nix pills - do not cover flakes.
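A flake devShell for the monorepo setup in (b) might be sketched like this; the channel and package attribute names are approximate:

```nix
# Hedged sketch: one devShell pinning the monorepo's toolchain versions,
# replacing per-developer install instructions.
{
  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-23.05";

  outputs = { self, nixpkgs }:
    let
      pkgs = nixpkgs.legacyPackages.x86_64-linux;
    in {
      devShells.x86_64-linux.default = pkgs.mkShell {
        packages = with pkgs; [ jdk17 python311 nodejs_18 poetry ];
      };
    };
}
```

`nix develop` then drops everyone into the same environment.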


1) What would you consider to be "wow" or magic, going beyond a+b? What would steps c+d be that would add incredible value?

2) We're working on stabilization... just asking for a bit of patience.


I haven't used it yet but https://github.com/nadirizr/dazel might help. It runs bazel inside a docker container via a seamless proxy.


I also think it is partly because the tooling is not there yet - especially in the typical case where a project depends on lots of external dependencies.

I am looking forward to the new sets of bazel rules being worked on, e.g. https://github.com/aspect-build/rules_js and https://github.com/jvolkman/rules_pycross, which will make it more idiomatic to work with existing language ecosystems.


It reminds me of inkjet printers and the issue with ink cartridges. A free market doesn't always result in an equilibrium that isn't consumer-hostile.


Hi Addy,

It is sad to see that, more often than not, the top comments on any article are overly negative or even vitriolic, while any interesting discussion languishes deeper in the thread.

I read your draft, and it is full of excellent advice. Some of it I also find very important, some gives me food for thought, and some I am happy to be reminded of again. I look forward to your finished post.

