
> Right now i have to build and deploy using a cloud server or an older Intel MacBook

It is really weird to me how little-known the art of cross-compilation is. You can target x86 from an ARM build box, or the other way around.
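For toolchains with first-class cross-compilation support it really is a one-liner; a hedged sketch (output names and targets are illustrative, and each command assumes the relevant toolchain is installed):

```shell
# Go: cross-compilation is built in; just set the target OS/arch
GOOS=linux GOARCH=amd64 go build -o myapp-amd64 .

# Rust: add the target once, then build for it
rustup target add x86_64-unknown-linux-gnu
cargo build --release --target x86_64-unknown-linux-gnu

# C with clang: pass a target triple (needs a matching sysroot/linker)
clang --target=x86_64-linux-gnu -o myapp-amd64 main.c
```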



If the application and test suite are built in Node.js, I'm really not sure why there are architecture issues to begin with. Especially if the test suite ran without alteration on the ARM machines, I don't see why you'd get different results when running the tests on Intel vs ARM.
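As a sanity check: the only architecture-visible surface in a pure-JS app is what the runtime itself reports. A minimal sketch, assuming a local Node.js install:

```javascript
// Pure JavaScript is architecture-independent: the same source runs under
// both the x86-64 and ARM builds of Node. Only the runtime itself (and any
// native addons pulled in by npm packages) are platform-specific.
console.log(`node ${process.version} on ${process.platform}/${process.arch}`);

// A test suite that never touches native addons should behave identically
// whether process.arch reports "x64" or "arm64".
```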

> I installed node v16 on both my MacBook and the cloud instance, installed our app and then ran the full end to end test suite.

Given this statement, it doesn't seem like there should be any cross-platform issues here at all (even in included npm packages). It doesn't sound like any of the tests are arch-specific.

But I don't use Node.js very often, so maybe I am missing something?


The app itself is not ARM-specific, but the base Docker image and its dependencies (Debian + libraries + nodejs binaries...) are platform-specific, so they can't build that image on a platform other than their target.

Note that, for the same tag (version), there are images with different hashes (hence different content) for different architectures:

https://hub.docker.com/_/node?tab=tags
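You can see those per-architecture digests behind a single tag without pulling anything; a hedged sketch, assuming Docker is installed and the tag still exists:

```shell
# One tag, many images: the manifest list maps each supported platform
# (amd64, arm64, arm/v7, ...) to its own image digest
docker manifest inspect node:16 | grep '"architecture"'
```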


That makes no sense to me. It doesn’t matter what the base image is — the application and test suite are platform agnostic.

So, why not build two different Docker images that are the same except for arch? It seems that the problem isn’t the difference between dev and production environments, but the way Docker is being used. Sure, run an arch specific test before moving something to production, that makes sense. But for development, just use the Docker image that matches the dev arch. This seems like a much easier problem than the author is making it out to be.
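That "two images that are the same except for arch" approach is essentially what buildx automates; a hedged sketch (the image names are placeholders):

```shell
# Build the same Dockerfile for both architectures in one go; the resulting
# tag is a manifest list, and each machine pulls the variant matching its arch
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  -t registry.example.com/myapp:latest \
  --push .

# For day-to-day development, just build natively for the machine you're on
docker build -t myapp:dev .
```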

Before Docker, I don’t think this would have been an issue.


It makes sense if you take into account that you're packaging the whole stack in a container. It's the same as when you make 'golden images' for a VM with Packer or whatever tool is available: you end up with a platform-specific image.

But your idea isn't far-fetched at all, and it can be done using a Docker multi-stage build: you build your application on whatever platform you have (either locally or in CI), and then you import your application layer into an ARM64 image on one side, and into an AMD64 image on the other.
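A minimal sketch of that multi-stage idea, assuming a generic Node app (paths and filenames are placeholders). The build stage is pinned to the builder's native platform, while the final stage resolves to whatever `--platform` is requested at build time:

```dockerfile
# Build stage: always runs natively on the build host, so it's fast under buildx
FROM --platform=$BUILDPLATFORM node:16 AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .

# Final stage: the base image resolves to the target platform (amd64 or arm64).
# Note: copying node_modules across works only if it contains no native addons;
# otherwise run `npm ci` or `npm rebuild` in this stage instead.
FROM node:16-slim
WORKDIR /app
COPY --from=build /app /app
CMD ["node", "index.js"]
```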

We still aren't very used to multi-platform development environments, so the tooling isn't perfect yet (at all).


In this case, if there are binary dependencies besides node.js, they would be rebuilt on the box when 'npm install' is run. I've never seen a cross-compilation setup for this case, though I don't think you really need one.
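Native addons are compiled (or fetched as prebuilt binaries) for the machine running the install, so an arch mismatch in node_modules is usually fixed by recompiling in place rather than cross-compiling; a hedged sketch:

```shell
# npm install / npm ci builds native addons via node-gyp, or downloads
# prebuilt binaries, for the architecture they run on
npm ci

# If node_modules was populated on a different architecture, recompile in place
npm rebuild
```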


Also, you can run x86-64 and arm64 VMs side by side with UTM on M1 Macs. That's what I am doing, although I don't need the x86-64 one these days, as Debian/arm64 is fine for my needs.


And how exactly do you test the cross-compiled bits before pushing them to a production server?


You should already be using continuous integration, so it should already be moot.


Continuous integration also needs a server to run on. The point is, if you have a dev laptop running M1 ARM and a prod server running Intel x86, you need something in the middle to build/test on unless you want to cross-compile on your laptop and yeet it to production.


QEMU with TCG can do wonders. It's not perfect but it's surprisingly fast and emulates a good amount of instructions, can do SMP, etc.
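With binfmt handlers registered, foreign-arch containers run transparently under QEMU user-mode emulation; a hedged sketch (assumes Docker Desktop or a Linux setup with the qemu binfmt integration):

```shell
# Register qemu as the interpreter for foreign binaries (one-off, Linux hosts;
# Docker Desktop on macOS ships this out of the box)
docker run --privileged --rm tonistiigi/binfmt --install all

# Then an x86-64 image runs on an ARM host (slower, but fine for tests)
docker run --rm --platform linux/amd64 node:16 node -e 'console.log(process.arch)'
```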


Easy: push it to a preproduction server first.


Boot in qemu, or have a test instance.


Perhaps on a staging server?



