
I totally dig the HCL request. To be honest, I'm still mad at GitHub for initially using HCL for GitHub Actions and then ditching it for YAML when they went stable.


I detest HCL; the module system is pathetic. It's not composable at all, and you keep doing gymnastics to make sure everything is known at plan time (like using lists where you should use dictionaries) and other anti-patterns.

I use Terranix to generate config.tf.json, which means I have the NixOS module system (composable enough to build a whole Linux distro) at my fingertips for composing a great Terraform "state"/project/whatever.

It's great to be able to run some Python to fetch some data, dump it in JSON, read it with Terranix, generate config.tf.json and then apply :)
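
To make that concrete, here's a minimal Terranix sketch of that workflow; the file names, the JSON shape, and the AMI are made-up placeholders:

    # config.nix: a Terranix module consuming externally fetched data.
    { lib, ... }:
    let
      # assume a script dumped: { "zones": ["eu-west-1a", "eu-west-1b"] }
      data = builtins.fromJSON (builtins.readFile ./data.json);
    in
    {
      resource.aws_instance = lib.listToAttrs (map
        (zone: lib.nameValuePair "web_${zone}" {
          ami               = "ami-12345678";   # placeholder
          instance_type     = "t3.micro";
          availability_zone = zone;
        })
        data.zones);
    }

Then something like "terranix config.nix > config.tf.json && terraform apply" ties it together.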


What’s the list vs. dictionary issue in Terraform? I use a lot of dictionaries (maps in tf speak); Terraform things like for_each expect a map and throw if handed a list.


Internally, a lot of modules cast dictionaries to lists of the same length because the keys of the dict might not be known at plan time or something. The Terraform AWS VPC module does this internally for many things.

I couldn't tell you exactly, but modules always end up either not exposing enough or exposing too much. If I write my modules with Terranix, I can easily replace any value in any resource from a module I'm importing using "resource.type.name.parameter = lib.mkForce "overriddenValue";", without having to expose that parameter in the module "API".
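
In Terranix that looks something like this (vpc-module.nix and the overridden parameter are hypothetical, just to show the shape):

    { lib, ... }:
    {
      imports = [ ./vpc-module.nix ];  # a Terranix module defining aws_vpc.main

      # Force one deeply nested value from outside the module, without
      # the module having to expose it as an option:
      resource.aws_vpc.main.enable_dns_hostnames = lib.mkForce false;
    }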

The nice thing is that it generates "Terraform" (config.tf.json), so the supremely awesome state engine and all the API domain knowledge bound up in providers work just the same, and I don't have to reach for something as involved as Pulumi.

You can even mix Terranix with normal HCL, since config.tf.json is valid in the same project as HCL. A great way to get started is to generate your provider config and the other things where you'd otherwise reach for Terragrunt and friends. Then you can start making options that create resources at your own pace.
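
A minimal sketch of that starting point (the region is a placeholder):

    # providers.nix: generate the boilerplate you'd otherwise template
    # with Terragrunt; the resulting config.tf.json sits next to your
    # existing *.tf files in the same project.
    { ... }:
    {
      terraform.required_providers.aws.source = "hashicorp/aws";
      provider.aws.region = "eu-west-1";   # placeholder
    }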

The Terraform LSP sadly doesn't read config.tf.json yet, so you'll get warnings about undeclared locals and such, but for me it's worth it. I generally write tf/tfnix with the provider docs open, and the languages (Nix and HCL) are easy enough to write without a full LSP.

https://terranix.org/ says it better than me, but by doing it with Nix you get programmatic access to the biggest package library in the world to use at your discretion (build scripts that fetch values from weird places, impure scripts run with null_resource or its replacements) and an expressive functional programming language where you can do recursion and such, and you can use derivations to run any language to transform strings with ANY tool.

It's like Terraform "unleashed" :) Forget "dynamic" blocks, bad module APIs and hacks (while still being able to use existing modules too if you feel the urge).


Internally... in what? Not HCL itself, I assume? Also I'm not seeing much that implies HCL has a "plan time"...

I'm not familiar with HCL so I'm struggling to find much here that would be conclusive, but a lot of this thread sounds like "HCL's features that YAML does not have are sub-par and not sufficient to let me only use HCL" and... yeah, you usually can't use YAML that way either, so I'm not sure why that's all that much of a downside?

I've been idly exploring config langs for a while now, and personally I tend to lean towards JSON5 because comments are absolutely required... but support isn't anywhere near as good or automatic as YAML's :/ HCL has been on my interest list for a while, but I haven't gone deep enough into it to form any real opinion.


I think Pulumi is in a similar spot: you get a real programming language (of your choice) and it gets to use the existing provider ecosystem. You can use the programming language's composition facilities to work around the plan system if necessary, although their plans allow more dynamic stuff than Terraform's.

The setup with Terranix sounds cool! I'm pretty interested in build-system-type things myself; I recently wrote a plan/apply system too, which I use to manage SQL migrations.

I want to learn Nix, but I think that, like Rust, it's just a bit too wide/deep for me to approach on my own time without a tutor/co-worker or a forcing function like a work project to push me through the initial barrier.


Yep, it's similar, but you bring all your dependencies with you through Nix rather than a language-specific package manager.

Try using something like devenv.sh initially, just to bring tools into $PATH in a distro-agnostic and mostly-ish macOS-compatible way (so you can guarantee everyone has the same versions of EVERYTHING you need to build your thing).
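
A minimal devenv.nix sketch (the tool picks are just examples):

    # devenv.nix: everyone entering the environment gets exactly these
    # versions in $PATH, on Linux and (mostly) macOS alike.
    { pkgs, ... }:
    {
      packages = [
        pkgs.git
        pkgs.jq
        pkgs.kubectl
      ];
    }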

Learn the language basics once it's already bringing you value, then learn about derivations, and then the module system, which is this crazy composable multilayer recursive magic merging type system implemented on top of Nix. Don't be afraid to clone nixpkgs and look inside.

Nix derivations are essentially Dockerfiles on steroids: the Nix language brings /nix/store paths into the build sandbox, sets environment variables for you and runs some scripts. All these inputs are hashed, so if any input changes it triggers automatic cascading rebuilds, but it also means you can use a binary cache as a kind of "memoization" caching thingy, which is nice.
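
A tiny sketch of that shape (jq stands in for any build tool):

    { pkgs ? import <nixpkgs> { } }:

    # Every input (src, jq, these scripts) is hashed; change any of them
    # and this derivation, plus everything depending on it, rebuilds.
    pkgs.stdenv.mkDerivation {
      name = "hello-tool";
      src = ./.;                        # copied into /nix/store and hashed
      nativeBuildInputs = [ pkgs.jq ];  # jq's store path lands in PATH
      buildPhase = ''
        jq -n '{greeting: "hello"}' > out.json
      '';
      installPhase = ''
        mkdir -p $out
        cp out.json $out/
      '';
    }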

It's a very useful tool, it's very non-invasive on your system (other than disk space if you're not managing garbage collection) and you can use it in combination with other tools.

Makes it very easy to guarantee your DevOps scripts run exactly your versions of all CLI tools, build systems and whatever, even if the final piece isn't built through Nix.

Look at "pgroll" for Postgres migrations :)


pgroll seems neat, but I ended up writing my own tools for this one because I need to do somewhat unique shenanigans, like testing different sharding and resource allocation schemes in Materialize.com (self-hosted). I have 480 source input schemas (the Postgres input schemas are described here if you're curious; the Materialize stuff is brand new: https://www.notion.com/blog/the-great-re-shard) and manage a bunch of different views & indexes built on top of those. I create a bunch of different copies of the views/indexes striped across compute nodes; right now I'm testing 20 schemas per whole-AWS-instance node versus 4 schemas per quarter-AWS-node, M/N*Y with different permutations of N and Y. With the plan/apply model I just need to change a few lines of TypeScript and get the minimal changes to all downstream dependencies needed to roll it out.


Sounds like the kustomize mental model: take code you potentially don't control, apply patches to it until it behaves like you wish, then apply.

If the documentation and IDE story for kustomize were better, I'd be its biggest champion.


You can run Kustomize in a Nix derivation with inputs from Nix and apply the output using Terranix and the kubectl provider. That gives you a very nice reproducible way to apply Kubernetes resources with the Terraform state engine. I like how Terraform manages the lifecycle of CRUD with cascading changes and replacements, which is often pretty optimal-ish, at least.
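
A sketch of that pipeline (kubectl_manifest is from the community gavinbunney/kubectl provider, and the overlay path is made up):

    { pkgs, ... }:
    let
      # Render manifests at build time; the overlay directory is hashed,
      # so any patch change re-renders and shows up in the next plan.
      rendered = pkgs.runCommand "manifests.yaml"
        { nativeBuildInputs = [ pkgs.kustomize ]; } ''
        kustomize build ${./overlays/prod} > $out
      '';
    in
    {
      # Note: kubectl_manifest wants one document per resource, so a
      # multi-doc kustomize output would need splitting first.
      resource.kubectl_manifest.app.yaml_body = builtins.readFile rendered;
    }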

And since it's Terraform, you can create resources using any provider in the registry based on your Kubernetes objects too. It can technically replace things like external-dns and similar controllers that create stuff in other clouds, but in a more "static configuration" way.

Edit: This works nicely with GitLab's Terraform state hosting thingy as well.


Well, kustomize is IMO where using YAML creates the biggest pain. The patches thing is basically unreadable after you add more than 2-3 of them. I understand that you can also delete nodes, which is pretty powerful, but I really long for the "deep merge" Puppet days.


HCL != Terraform

HCL, like YAML, doesn't even have a module system. It's a data serialization format.



