Recently, there were two similar supply chain attack attempts on the ClickHouse repository, but:
- they didn't do anything, because CI does not run without approval;
- the user's account magically disappeared from GitHub, along with all its pull requests, within a day.
Also, let me recommend our bug bounty program: https://github.com/ClickHouse/ClickHouse/issues/38986 It sounds easy - pick your favorite fuzzer, find a segfault (it should be easy because C++ isn't a memory-safe language), and get your paycheck.
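To give a flavor of what that can look like at the SQL level, here is a toy Python sketch, not a real coverage-guided fuzzer, that mutates seed queries and posts them to a local ClickHouse over its default HTTP interface (port 8123). Everything in it is illustrative:

    import random
    import string
    import urllib.error
    import urllib.request

    # Toy mutation loop against a local ClickHouse instance. Assumes the
    # default HTTP interface on localhost:8123; a real campaign would use
    # a proper coverage-guided fuzzer.
    SEED_QUERIES = [
        "SELECT 1",
        "SELECT number FROM system.numbers LIMIT 10",
    ]

    def mutate(query: str) -> str:
        # Randomly flip, insert, or delete one character.
        i = random.randrange(len(query))
        c = random.choice(string.printable)
        op = random.choice(("flip", "insert", "delete"))
        if op == "flip":
            return query[:i] + c + query[i + 1:]
        if op == "insert":
            return query[:i] + c + query[i:]
        return query[:i] + query[i + 1:]

    for _ in range(10_000):
        q = mutate(random.choice(SEED_QUERIES))
        try:
            urllib.request.urlopen(
                "http://localhost:8123/", data=q.encode(), timeout=5
            ).read()
        except urllib.error.HTTPError:
            pass  # parse and semantic errors are expected noise
        except OSError:
            # A reset or refused connection may mean the server died.
            print("possible crash on query:", repr(q))
            break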
- always pin versions of all packages;
- this includes OS package repositories, Docker repositories, as well as pip, npm, cargo, and others;
- never download anything from master/main or other branches; pin a specific commit SHA instead;
- ideally, copy all Docker images to your own private registry;
- ideally, calculate hashes after download and compare them with the previously recorded values (see the sketch after this list);
- frankly speaking, it would be much better if CI ran air-gapped...
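For the hash-checking step, a minimal Python sketch; the URL is a made-up example and the expected digest is a placeholder you would record the first time you vet a download:

    import hashlib
    import sys
    import urllib.request

    # Hypothetical artifact plus the SHA-256 recorded when it was first vetted.
    URL = "https://example.com/some-tool-1.2.3.tar.gz"
    EXPECTED_SHA256 = "<digest recorded at first vetted download>"

    def fetch_and_verify(url: str, expected: str) -> bytes:
        data = urllib.request.urlopen(url).read()
        actual = hashlib.sha256(data).hexdigest()
        if actual != expected:
            # Refuse artifacts whose content changed since they were vetted.
            sys.exit(f"hash mismatch for {url}: got {actual}, expected {expected}")
        return data

    if __name__ == "__main__":
        fetch_and_verify(URL, EXPECTED_SHA256)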
I don't understand this PR. How is it an "attack"? It seems to just be pinning a package version. Was the package compromised, or was this more of a "vulnerability"?
If they're pulling from master instead of from a pinned version, the upstream could be changed to something malicious, and the next time it's fetched, the malicious version would be used. It's a vulnerability.
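To make that concrete (the repo path and SHA below are hypothetical placeholders), the difference is whether the reference you fetch is mutable or content-addressed:

    # A branch name is a mutable pointer: anyone who can push to the branch
    # changes what this URL serves on the next fetch.
    mutable_ref = "https://raw.githubusercontent.com/example/repo/master/install.sh"

    # A commit SHA names one immutable snapshot, so re-fetching always
    # yields the same bytes (hypothetical repo, placeholder SHA).
    pinned_ref = "https://raw.githubusercontent.com/example/repo/<commit-sha>/install.sh"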
Oh, you'll like this one then. Until 3 months ago, GitHub's Runner images were pulling a package directly from Aliyun's CDN. It was executed during image testing (a version check). So anyone with the ability to modify Aliyun's CDN in China could have carried out a pretty nasty attack. https://github.com/actions/runner-images/commit/6a9890362738...
Now it's just anyone with write access to Aliyun's repository. :) (p.s. GitHub doesn't consider this a security issue).
Whoa, there's a lot of stuff in there [1] that gets installed straight from vendors, without pinning content checksums to values GitHub knows to be good.
I get it: they want to have the latest versions instead of depending on how long Ubuntu (or, worse, Debian) package maintainers take to package stuff into their mainline repositories... but creating this attack surface is nuts. Imagine being able to compromise just one of the various small tools they embed, and pivoting from there to all GitHub runners everywhere (e.g. by overwriting /bin/bash or any other popular entrypoint, or even libc itself, with a malware payload).
The balance there tilts overwhelmingly toward usability and having new tools (hence the one-week deployment cadence). Maybe they have some process to vet everything before it makes it into the production pool, but that's quite an undertaking to perform properly every week.
> on how long Ubuntu (or, worse, Debian) package maintainers take to package stuff into their mainline repositories...
Really a misinformed comment.
For starters, Ubuntu is for the most part a snapshot of Debian sid, so except in a few cases it will not have newer versions.
The Python packaging team is really hard-working… In most cases, when something doesn't get updated immediately, it's because it breaks something or depends on something new that isn't packaged yet.
Please stop demeaning the work of others when you seem not to know that it even happens at all.
Also worth reading a similar example: https://blog.cloudflare.com/cloudflares-handling-of-an-rce-v...