Enriching does a few things, but the main ones are adding CVSS information and CPE information.
CVSS (risk) is already well handled by other sources, but CPE (what software is affected) is kind of critical. I don't even know how they're going to focus enrichment on software the government uses without knowing what software the CVEs are in.
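CPE's role is easy to show concretely: a CPE 2.3 string encodes the vendor, product, and version a CVE applies to, and without it a scanner can't match the CVE to installed software. A minimal parsing sketch (the CPE value below is illustrative, not taken from a real NVD record, and a real parser must also handle `\:` escapes inside fields):

```python
# Minimal sketch: split a CPE 2.3 formatted string into named components.
# Note: this naive split does not handle escaped colons ("\:") that the
# spec allows inside field values.

def parse_cpe23(cpe: str) -> dict:
    """Map the 11 CPE 2.3 components to their values."""
    fields = ["part", "vendor", "product", "version", "update", "edition",
              "language", "sw_edition", "target_sw", "target_hw", "other"]
    parts = cpe.split(":")
    if parts[:2] != ["cpe", "2.3"]:
        raise ValueError("not a CPE 2.3 string")
    return dict(zip(fields, parts[2:]))

# Illustrative CPE string (not from a real CVE record):
info = parse_cpe23("cpe:2.3:a:openssl:openssl:3.0.7:*:*:*:*:*:*:*")
print(info["vendor"], info["product"], info["version"])  # → openssl openssl 3.0.7
```

This is the piece enrichment has to supply: the CVE text alone doesn't tell a scanner which `vendor:product:version` tuples to flag.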
We're going to be launching Chainguard Libraries for Rust in a few weeks, this article perfectly calls out the issues.
crates are somewhat better designed than npm/PyPI (the dist artifacts are source based), but still much worse than Go, where there's no author-uploaded packaging step disconnected from the source of truth: modules are fetched straight from the repository.
I work at Chainguard. We don't guarantee zero active exploits, but we do have a contractual SLA we offer around CVE scan results (those aren't quite the same thing unfortunately).
We do issue an advisory feed in a few formats that scanners integrate with. The traditional format we used (which is what most scanners supported at the time) didn't have a way to include pending information, so we couldn't include it there.
The basic flow was: the scanner finds a CVE and alerts, we issue a statement showing when and where we fixed it, and the scanner understands that and stops showing it in versions after that.
So there wasn't really a spot to put "this is present"; that was the scanner's job. Not all scanners work that way, though, and some just rely on our feed without doing their own homework, so it's hit or miss.
We do have another feed now that uses the newer OSV format; in that feed we include all the info around when we detect an issue, when we patch it, etc.
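For reference, the OSV format carries exactly that timeline via "introduced"/"fixed" events. A minimal sketch of what such a record looks like; every ID, name, and version here is invented for illustration (the real schema lives at ossf.github.io/osv-schema):

```python
# Sketch of an OSV-format advisory record. All values below are
# hypothetical; the structure follows the OSV schema's "affected" ranges.
import json

advisory = {
    "id": "EXAMPLE-2024-0001",       # hypothetical advisory ID
    "aliases": ["CVE-2024-00000"],   # hypothetical CVE alias
    "affected": [{
        "package": {"ecosystem": "SomeDistro", "name": "some-package"},
        "ranges": [{
            "type": "ECOSYSTEM",
            "events": [
                {"introduced": "0"},     # vulnerable from the first version
                {"fixed": "1.2.3-r1"},   # first patched version
            ],
        }],
    }],
}

# A scanner reads the range directly: versions before the "fixed" event
# are flagged, versions at or after it are considered clean.
events = advisory["affected"][0]["ranges"][0]["events"]
print(json.dumps(events, indent=2))
```

Unlike the older statement-only format, there's an explicit place to say "this is present starting here," which is the gap described above.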
Really cool to see all the hard work on Trusted Publishing and Sigstore pay off here. As a reminder, these tools were never meant to prevent attacks like this, only to make them easier to detect, harder to hide, and easier to recover from.
Just getting around to looking at this. There is a certificate in sigstore for the 8.3.41 release that claims the package is a build of cb260c243ffa3e0cc84820095cd88be2f5db86ca -- https://search.sigstore.dev/?logIndex=153415340. But it isn't: the package contents differ from the contents of that commit. This doesn't seem like something that's working all that well.
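The check being described can be sketched mechanically: hash every file in a checkout of the claimed commit and in the unpacked package, then diff the two trees. This is a hypothetical illustration (the directory names are invented, and a legitimate build may transform its sources, so a mismatch is a starting point for investigation rather than proof on its own):

```python
# Hedged sketch: compare a package's files against a checkout of the
# commit its certificate claims it was built from.
import hashlib
from pathlib import Path

def file_digests(root: Path) -> dict:
    """Map each file's path (relative to root) to its SHA-256 digest."""
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(root.rglob("*")) if p.is_file()
    }

def diff_trees(commit_dir: Path, package_dir: Path) -> list:
    """Return paths whose contents differ or that exist on only one side."""
    a, b = file_digests(commit_dir), file_digests(package_dir)
    return sorted(path for path in a.keys() | b.keys() if a.get(path) != b.get(path))

# Usage (hypothetical directories):
#   mismatches = diff_trees(Path("checkout-at-commit"), Path("unpacked-package"))
#   if mismatches:
#       print("package does not match claimed commit:", mismatches)
```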
There's no defeating of scanners or even static linking. It's all automation, dynamic linking and patching to make the scanners happy. We go to great lengths to make sure that the scanners actually find everything so the results are accurate.
The second is this blog post, which again highlights trust and Docker Hub features (e.g. rate limit removal, metrics, badging), and links back to [2] for more details.
The third link is [1].
So, I'm still left curious about the grandparent comment's questions. It seems these pages are sparse on what the verification process looks like and if it might just be a "pay to play" relationship.
I read the blog, then I clicked on the big "back" button at the top labeled "Unchained" and read that, then I went to your homepage and read that, then I clicked "get started" and read that page too.
I still have no idea what Chainguard is, or what those images do. All I know is that those images are "hardened". Is that the only thing they're for? Is that Chainguard's product?
In a nutshell, we produce minimal container images with a low CVE count. In many cases they should be drop-in replacements for the containers you are currently using.
This is particularly useful if your team uses a scanner like Trivy/Snyk/Grype/Docker Scout and spends time investigating CVEs. Fewer CVEs == less time investigating. It can also be critical in regulated environments.
I think this comes down to audience. To a lot of engineers it's just like ... "OK, that's nice. What else?"
But for security teams in large enterprises, Chainguard is like manna from heaven. They immediately understand what is really being sold: the elimination of enormous amounts of compulsory toil due to upgrading vulnerable software -- or having to nag other teams to do it.
It's a bit like visiting the site of a medical devices manufacturer. I probably don't know what the device does, but the target audience sure do.
I just heard of this today and I was like OMG, this will save me so. much. time. chasing engineers and teams and creating workarounds for dumb stuff that the base images refuse to fix because it's a "false positive". I unfortunately HAVE to fix all high CVEs, regardless of people's opinions.
> But for security teams in large enterprises, Chainguard is like manna from heaven. They immediately understand what is really being sold: the elimination of enormous amounts of compulsory toil due to upgrading vulnerable software -- or having to nag other teams to do it.
Explain to me how Chainguard helps with this. Everywhere I've worked, this process has very specific needs depending on the company's internal and regulatory requirements. Chainguard may help with proof of origin/base images, but it doesn't do much beyond what container registries and tools like Dependabot/Snyk/Dependency-Track already provide (not saying they're directly related), which doesn't really reduce that much toil.
The big ones that help are SBOMs, STIGs, FIPS, and CVE reduction. The images and the paperwork we provide make it so they can be dropped into even the most regulated environments without toil.
Most of our customers use them for FedRAMP or IL 5/6 stuff out of the box.
As someone who has been watching Chainguard since they were "spun out" of Google: they started out trying to be the de facto container supply chain security company, realized everyone else was already doing that and well ahead of them, and have done a few pivots trying to find PMF. I think they've found more success being consultants, which is probably not what they hoped for.
Many organizations pay people (or entire teams) to maintain a suite of hardened images, either for device/firmware applications, or because they use many languages in-house, etc. This is definitely one of those business models I thought "oh, of course" as soon as I saw it.
EDIT: upon using Docker Hub's organization page for a bit, and realizing there's no search on the organization page (I swear there was?), I now understand.
Why does the article present this bizarre set of instructions for grabbing the image instead of linking directly? You could just link to your organization, no?
> Getting started with Chainguard Developer Images in Docker Hub is easy. Follow these simple steps:
> Look up the Image you want.
> Select ‘Recently Updated’ from the dropdown menu on the right.
> Filter out the community images by selecting the filter ‘Verified Publisher.’
> Copy the pull command, paste it into your terminal, and you are all set.
If a primary goal of a consumer of the images is security, how can we trust the images not to have backdoors or virusesesses [extra s added for comedy]?
Great question! We take hardening of our build infrastructure very seriously, and helped build many of the OSS technologies in this space like the SLSA framework and the Sigstore project.
We produce SBOMs during the build process, and cryptographically sign SLSA-formatted provenance artifacts depicting the entire build process so you can trace a built container all the way back to the sources it was built from.
We also try to make as much of our build system reproducible as possible (but we're not all the way there yet), so you can audit or rebuild the process yourself.
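As an illustration of what "trace a built container back to the sources" looks like in practice, here is a minimal sketch of reading a SLSA v1 provenance predicate and pulling out its resolved source dependency. The repository URI and commit digest are invented placeholders, not real Chainguard values, and real verification would first check the signature (e.g. with cosign) before trusting the payload:

```python
# Sketch of extracting the source repo and commit from a SLSA v1
# provenance predicate. The structure follows the SLSA v1.0 spec;
# the values are hypothetical placeholders.
provenance = {
    "predicateType": "https://slsa.dev/provenance/v1",
    "predicate": {
        "buildDefinition": {
            "resolvedDependencies": [{
                "uri": "git+https://github.com/example/repo",              # hypothetical
                "digest": {"gitCommit": "0" * 40},                          # hypothetical
            }],
        },
    },
}

deps = provenance["predicate"]["buildDefinition"]["resolvedDependencies"]
for dep in deps:
    # Each resolved dependency names an input and pins it by digest,
    # which is what lets you walk from the image back to exact sources.
    print(dep["uri"], dep["digest"].get("gitCommit"))
```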