> Finally, clouds have painful APIs. This is where projects like K8S come in, papering over the pain so engineers suffer a bit less from using the cloud. But VMs are hard with Kubernetes because the cloud makes you do it all yourself with lumpy nested virtualization. Disk is hard because back when they were designing K8S Google didn’t really even do usable remote block devices, and even if you can find a common pattern among clouds today to paper over, it will be slow. Networking is hard because if it were easy you would private link in a few systems from a neighboring open DC and drop a zero from your cloud spend. It is tempting to dismiss Kubernetes as a scam, artificial make work designed to avoid doing real product work, but the truth is worse: it is a product attempting to solve an impossible problem: make clouds portable and usable. It cannot be done.
Please learn from Unix's mistakes. Learn from Nix. Support create-before-destroy patterns everywhere. Forego all global namespaces you can. Support rollbacks everywhere.
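For readers who haven't seen it, NixOS is a working example of all three asks: every rebuild creates a new system "generation" alongside the old one (create-before-destroy), and switching back is atomic. A rough sketch using the standard `nixos-rebuild` commands:

```shell
# Build and activate a new system generation. The old generation is not
# destroyed; it stays on disk and remains selectable from the boot menu.
sudo nixos-rebuild switch

# List generations; each is a complete, independently bootable system closure.
sudo nix-env --list-generations --profile /nix/var/nix/profiles/system

# Something broke? Atomically switch back to the previous generation.
sudo nixos-rebuild switch --rollback
```

These commands assume a NixOS host; the point is the shape of the workflow, not the exact invocations.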
If any cloud provider can do that, cloud IaC will finally stop feeling so fake/empty compared to a sane system like NixOS.
Labor-saving devices don't save labor at work. They increase productivity rather than reducing hours, and the extra value is captured by the employer.
That's the difference between your home dishwasher and the means of production.
It's also probably a big part of what worries Gen Z when it comes to AI. They're thinking about their own employment and employment prospects, and most people probably understand they have little to gain from it long-term.
You missed the part in TFA where Amodei talks about gazillions of jobs going away, or literally any AI CEO saying the same thing over and over again, or mass layoffs all blaming AI.
Maybe that's true. But I think part of the issue is that for a lot of things developers want to do with them now— certainly for most of the things I want to do with them— they're either barely good enough, or not consistently good enough. And the value difference across that quality threshold is immense, even if the quality difference itself isn't.
One of Anthropic's ostensible ethical goals is to produce AI that is "understandable" as well as exceptionally "well-aligned". It's striking that some of the same properties that make AI risky also just make it hard to consistently deliver a good product. It occurs to me that if Anthropic really makes some breakthroughs in those areas, everyone will feel it in terms of product quality, whether they're worried about grandiose/catastrophic predictions or not.
But right now it seems like, in the case of (3), these systems are really sensitive and unpredictable. I'd characterize that as an alignment problem, too.
The approach you outline is totally compatible with an additional one- or two-day time gate for the artifact mirrors that back prod builds. Deploy to locked-down non-prod environments with strong monitoring after the scans pass, wait a few days for prod, and publicly report whatever you find, and you're now "doing your part" in real time while still accounting for the fallibility of your automated tools.
There's risk there of a monoculture categorically missing some threats if everyone is using the same scanners. But I still think that approach is basically pro-social even if it involves a "cooldown".
Install tools using a package manager that performs builds under an unprivileged user account other than your own, sandboxes builds in a way that restricts network and filesystem access, and doesn't let packages run arbitrary pre/post-install hooks by default.
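As one concrete, hedged example: npm runs arbitrary lifecycle scripts on install unless you tell it not to. If npm is part of your toolchain, something like this turns that default off (at the cost of breaking packages that genuinely need install hooks):

```shell
# Refuse to run pre/post-install lifecycle scripts by default.
npm config set ignore-scripts true

# Or per invocation, e.g. in CI:
npm ci --ignore-scripts
```

Other ecosystems have analogous knobs; the general principle is that code execution at install time should be opt-in, not opt-out.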
Avoid software that tries to manage its own native (external, outside the language ecosystem) dependencies or otherwise needs pre/post-install hooks to build.
If you do packaging work, try to build packages from source code fetched directly from source control rather than relying on release tarballs or other published release artifacts. These attacks are often more effective at hiding in release tarballs, NPM releases, Docker images, etc., than they are at hiding in Git history.
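A hedged sketch of what "build from source control, not the tarball" can look like in practice (the repository URL and tag here are placeholders, and tag verification only helps if the project actually signs its tags):

```shell
# Fetch exactly the tagged commit from source control, rather than a
# maintainer-published tarball, so what you build matches public history.
git clone --branch v1.2.3 https://github.com/example/tool.git
cd tool

# If the project signs its tags, verify the signature before building.
git tag -v v1.2.3

# Build from the checked-out tree with your normal toolchain.
make
```

The xz-utils backdoor is the canonical example of why this matters: the malicious build script was present in the release tarball but not in the Git repository.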
Learn how your tools actually build. Build your own containers.
Learn how your tools actually run. Write your own CI templates.
My team at work doesn't have super extreme or perfect security practices, but we try to be reasonably responsible. Just doing the things I outlined above has spared me, in the past few weeks alone, from multiple supply chain attacks against tools that I use.
Platform, DevEx, and AppSec teams are all positioned well to help with stuff like this so that it doesn't all fall on individual developers. They can:
- write and distribute CI templates
- run caches, proxies, and artifact repositories, which create room to:
  - pull packages through on a delay
  - run automated scans on updates and flag risky packages
  - maybe block other package sources, to help prevent devs from shooting themselves in the foot with misconfiguration
- set up shared infrastructure for CI runners that:
  - use those caches/repos/proxies by default
  - sandbox the network for builds
- help replace, containerize, or sandbox builds that currently run only on some aging bare-metal Jenkins box
- provide docs:
  - on build sandboxing tools, standards, and guidelines
  - on build tools and their behaviours (e.g., npm ci vs. npm install, package version locking and pinning standards)
- promote packaging tools for development environments and artifact builds, e.g., deterministic tools like Nix
- run build servers that push to internal artifact caches to address trust assumptions in community software distributions
- figure out when/whether/how to delegate to vendors who do these things
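On the npm ci vs. npm install point in the docs bullet above: the distinction matters for supply-chain posture because `npm ci` installs exactly what the lockfile pins and fails loudly on drift, while `npm install` may resolve new versions and rewrite the lockfile. A minimal CI-style invocation:

```shell
# Installs exactly the versions pinned in package-lock.json; errors out
# (instead of silently updating) if package.json and the lockfile disagree.
npm ci

# By contrast, this may resolve newer versions and mutate the lockfile:
# npm install
```

Team-maintained CI templates are a natural place to bake in the stricter default so individual developers don't have to remember it.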
I think there's a lot to do here. The hardest parts are probably organizational and social: coordination is hard and network effects are strong. But I also think there are some basics that help a lot. And developers who serve other developers, whether or not they are formally security professionals, are generally well positioned to make it easier to do the right thing than the sloppy thing over time.
How is ortholinear supposed to be better? I've never used a full-size ortholinear keyboard but every time I've had one on a portable keyboard or mobile device it has driven me nuts (and staggered keyboards of the same sizes have not).
I agree that offering more user choice here could be a unique and standout feature, though. One of the small things I love about their keyboard offerings is that they offer all-blank keycap options.
> I never thought about battery life while the lid is closed until my Framework.
In the days of S3 sleep, I never thought about it either. Ironically, it's on my Mac that I have to remember to hibernate if I won't use my laptop for a week or so, because it'll die if I don't. It just happens that a Mac was where I first encountered "modern standby" features.
Anyway I feel you. This big battery life improvement is what has convinced me my next laptop can be a Framework.
Wow. For as long as I can remember it was usually the opposite for me. Even after configuring TLP and looking for things to tune manually with powertop, I usually didn't get battery usage quite as low as I wanted. I thought it was still typical for Windows to be better.