
Enriching does a few things, but the main ones are adding CVSS information and CPE information.

CVSS (risk) is already well handled by other sources, but CPE (what software is affected) is kind of critical. I don't even know how they're going to focus enrichment on software the government uses without knowing what software the CVEs are in.


CPE is a joke. The official spec doc asserts that correctness of names is not in scope for the spec. See Section 5, "Well-Formed CPE Name Data Model":

https://csrc.nist.gov/pubs/ir/7695/final
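For context, a CPE 2.3 formatted string is just 13 colon-separated fields naming the affected software. A minimal sketch of pulling out the interesting fields (note: real names can contain escaped colons, `\:`, which this naive split ignores):

```python
# Minimal sketch: extract vendor/product/version from a CPE 2.3
# formatted string. Naive split; escaped colons ("\:") in real-world
# names would break this.
CPE23_FIELDS = [
    "cpe", "cpe_version", "part", "vendor", "product", "version",
    "update", "edition", "language", "sw_edition", "target_sw",
    "target_hw", "other",
]

def parse_cpe23(name: str) -> dict:
    parts = name.split(":")
    if len(parts) != len(CPE23_FIELDS) or parts[0] != "cpe" or parts[1] != "2.3":
        raise ValueError(f"not a CPE 2.3 formatted string: {name!r}")
    return dict(zip(CPE23_FIELDS, parts))

cpe = parse_cpe23("cpe:2.3:a:apache:log4j:2.14.1:*:*:*:*:*:*:*")
print(cpe["vendor"], cpe["product"], cpe["version"])  # apache log4j 2.14.1
```

The spec pins down this syntax precisely; whether "apache:log4j" is actually the right name for the software in a given CVE is exactly the part it declares out of scope.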


We're going to be launching Chainguard Libraries for Rust in a few weeks, this article perfectly calls out the issues.

crates are somewhat better designed than NPM/PyPI (the dist artifacts are source based), but still much worse than Go where there's an intermediate packaging step disconnected from the source of truth.
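The value of that intermediate step is that the distributed artifact gets pinned by content, independent of the repo it claims to come from. A simplified sketch of the idea (this is not Go's exact `h1:` dirhash algorithm, just the shape of it):

```python
# Simplified sketch (NOT Go's exact "h1:" dirhash algorithm) of
# content-addressing a package: hash every file, then hash the sorted
# manifest, so any post-hoc change to the distributed artifact changes
# the digest a checksum database would record.
import hashlib

def module_digest(files: dict[str, bytes]) -> str:
    lines = sorted(
        f"{hashlib.sha256(data).hexdigest()}  {path}\n"
        for path, data in files.items()
    )
    return hashlib.sha256("".join(lines).encode()).hexdigest()

published = {"go.mod": b"module example.com/m\n", "m.go": b"package m\n"}
tampered = dict(published, **{"m.go": b"package m // backdoor\n"})
assert module_digest(published) != module_digest(tampered)
```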


It's both. They got compromised by another supply chain attack on Trivy initially.


Hey!

I work at Chainguard. We don't guarantee zero active exploits, but we do have a contractual SLA we offer around CVE scan results (those aren't quite the same thing unfortunately).

We do issue an advisory feed in a few versions that scanners integrate with. The traditional format we used (which is what most scanners supported at the time) didn't have a way to include pending information so we couldn't include it there.

The basic flow was: scanner finds CVE and alerts, we issue statement showing when and where we fixed it, the scanner understands that and doesn't show it in versions after that.

So there wasn't really a spot to put "this is present"; that was the scanner's job. Not all scanners work that way though, and some just rely on our feed without doing their own homework, so it's hit or miss.

We do have another feed now that uses the newer OSV format; in that feed we include all the info about when we detected an issue, when we patched it, etc.

All this info is publicly available and shown in our console; you can see many of the advisories here: https://github.com/wolfi-dev/advisories

You can take this example: https://github.com/wolfi-dev/advisories/blob/main/amass.advi... and see the timestamps for when we detected CVEs, in what version, and how long it took us to patch.
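For readers unfamiliar with OSV: the schema encodes "when introduced / when fixed" as explicit events per affected package. A hypothetical, abbreviated advisory in that shape (field names follow the OSV spec; the package name, versions, and ID are made up):

```python
# Hypothetical, abbreviated advisory in the OSV schema shape, showing
# where "introduced"/"fixed" events live. The ID, package, and
# versions are invented for illustration.
advisory = {
    "id": "CGA-XXXX-XXXX",
    "modified": "2024-01-02T00:00:00Z",
    "affected": [{
        "package": {"ecosystem": "Wolfi", "name": "example-pkg"},
        "ranges": [{
            "type": "ECOSYSTEM",
            "events": [{"introduced": "0"}, {"fixed": "1.2.3-r1"}],
        }],
    }],
}

def first_fixed_version(adv: dict):
    # Scan the affected ranges for the first "fixed" event.
    for affected in adv["affected"]:
        for rng in affected["ranges"]:
            for event in rng["events"]:
                if "fixed" in event:
                    return event["fixed"]
    return None

print(first_fixed_version(advisory))  # 1.2.3-r1
```

A scanner consuming this feed can suppress the finding for any version at or above the `fixed` event, which is the "don't show it in versions after that" behavior described above.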


Really cool to see all the hard work on Trusted Publishing and Sigstore pay off here. As a reminder, these tools were never meant to prevent attacks like this, only to make them easier to detect, harder to hide, and easier to recover from.


Just getting around to looking at this. There is a certificate in Sigstore for the 8.3.41 release that claims the package is a build of cb260c243ffa3e0cc84820095cd88be2f5db86ca -- https://search.sigstore.dev/?logIndex=153415340. But it isn't: the package contents differ from the contents of that commit. This doesn't seem like something that's working all that well.


This is awesome to see, and the result of many years of hard work from awesome people.


There's no defeating of scanners or even static linking. It's all automation, dynamic linking and patching to make the scanners happy. We go to great lengths to make sure that the scanners actually find everything so the results are accurate.



Unfortunately across the various links ([1] [2] [3] [4]) there aren't really many details that answer those questions (that I was also curious about).

[1] https://docs.docker.com/trusted-content/dvp-program/

Explains perks and Docker Hub analytics features, but doesn't explain requirements. It does link to ...

[2] https://www.docker.com/partners/programs/

a sign-up page that entices you to fill out a contact form if you want to learn more. It links to two new pages (and [1]) at the bottom:

[3] https://www.docker.com/press-release/docker-expands-trusted-...

the first of which is a press release explaining the value of the program.

[4] https://www.docker.com/blog/docker-verified-publisher-truste...

The second is this blog post which highlights trust and again Docker Hub features (e.g. rate limit removal, metrics, badging) and links back to [2] for more details.

The third link is [1].

So, I'm still left curious about the grandparent comment's questions. These pages are sparse on what the verification process looks like and on whether it might just be a "pay to play" relationship.


I work at Chainguard, happy to answer any questions!


I read the blog, then I clicked on the big "back" button at the top labeled "Unchained" and read that, then I went to your homepage and read that, then I clicked "get started" and read that page too.

I still have no idea what Chainguard is, or what those images do. All I know is those images are "hardened", is that the only thing they're for? Is that Chainguard's product?


I work at Chainguard.

In a nutshell, we produce minimal container images with a low CVE count. In many cases they should be drop-in replacements for the containers you are currently using.

This is particularly useful if your team uses a scanner like Trivy/Snyk/Grype/Docker Scout and spends time investigating CVEs. Fewer CVEs == less time investigating. It can also be critical in regulated environments.
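The toil being saved is the triage loop over scan results. A small sketch of what that looks like, with the report shape loosely modeled on Grype's JSON output (`matches[].vulnerability.severity`); treat the field names as an assumption and the data as fake:

```python
# Tally CVE severities from a scan report. The shape here loosely
# follows Grype's JSON output (matches[].vulnerability.severity);
# the field names are an assumption and the CVE data is fake.
from collections import Counter

report = {
    "matches": [
        {"vulnerability": {"id": "CVE-2024-0001", "severity": "Critical"}},
        {"vulnerability": {"id": "CVE-2024-0002", "severity": "High"}},
        {"vulnerability": {"id": "CVE-2024-0003", "severity": "High"}},
    ]
}

def severity_counts(report: dict) -> Counter:
    return Counter(m["vulnerability"]["severity"] for m in report["matches"])

print(severity_counts(report))  # Counter({'High': 2, 'Critical': 1})
```

Every entry in that tally is something a security team has to investigate, waive, or chase a fix for, which is why a base image that scans near-zero saves so much time.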


Why not put that information on your website?

For example, put that exact sentence in place of this useless tag line:

> Build it right. Build it safe. Build it fast.


Not at Chainguard but I've watched their growth.

I think this comes down to audience. To a lot of engineers it's just like ... "OK, that's nice. What else?"

But for security teams in large enterprises, Chainguard is like manna from heaven. They immediately understand what is really being sold: the elimination of enormous amounts of compulsory toil due to upgrading vulnerable software -- or having to nag other teams to do it.

It's a bit like visiting the site of a medical devices manufacturer. I probably don't know what the device does, but the target audience sure do.


I just heard of this today and I was like OMG this will save me so. much. time. chasing engineers and teams and creating workarounds for dumb stuff that the base images refuse to fix because it's a "false positive". I unfortunately HAVE to fix all high CVEs, regardless of people's opinions.


> But for security teams in large enterprises, Chainguard is like manna from heaven. They immediately understand what is really being sold: the elimination of enormous amounts of compulsory toil due to upgrading vulnerable software -- or having to nag other teams to do it.

Explain to me how Chainguard helps with this. Everywhere I've worked, this process has very specific needs depending on the company's internal and regulatory requirements. Chainguard may help with proof of origin/base imaging, but it doesn't do much beyond what container registries and tools like Dependabot/Snyk/Dependency-Track already provide (not saying they're directly related), which doesn't really reduce that much toil.


The big ones that help are SBOMs, STIGs, FIPS, and CVE reduction. The images and the paperwork we provide make it so they can be dropped into even the most regulated environments without toil.

Most of our customers use them for FedRAMP or IL 5/6 stuff out of the box.
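For anyone who hasn't dealt with the paperwork side: an SBOM is basically a machine-readable inventory of every component in an image. A minimal sketch using the CycloneDX top-level shape (`bomFormat`, `components`); the package name, version, and purl are made up for illustration:

```python
# Minimal sketch of what an SBOM records per component, using the
# CycloneDX top-level shape. The component name, version, and purl
# are invented for illustration.
import json

sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "components": [
        {
            "type": "library",
            "name": "openssl",
            "version": "3.3.0-r1",
            "purl": "pkg:apk/wolfi/openssl@3.3.0-r1",
        },
    ],
}

# A scanner or auditor matches CVEs against exactly these
# name/version pairs, instead of guessing what's in the image.
installed = {(c["name"], c["version"]) for c in sbom["components"]}
print(json.dumps(sorted(installed)))  # [["openssl", "3.3.0-r1"]]
```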


It doesn't eliminate all toil, but it eliminates a lot. At least their customers think so.


As someone who has been watching Chainguard since they were "spun out" of Google: they started out trying to be the de facto container supply chain security company, realized everyone else was already doing that and well ahead of them, and have done a few pivots trying to find PMF. I think they've found more success being consultants, which is probably not what they hoped for.


I can confirm our business is roughly 0 percent consulting and that it's 100% selling these hardened images.


I turn away immediately at religious terminology being reused verbatim for commercial environments.


I'm an atheist, for what it's worth.

If it helps, substitute "Darmok and Jalad at Tanagra" instead.


Many organizations pay people (or entire teams) to maintain a suite of hardened images, either for device/firmware applications, or because they use many languages in-house, etc. This is definitely one of those business models I thought "oh, of course" as soon as I saw it.


Yep, that's it - the product is hardened container images!


EDIT: upon using Docker Hub's organization page for a bit, and realizing there's no search on the organization page (I swear there was?), I now understand.

Why does the article present this bizarre set of instructions for grabbing the image instead of linking directly? You could just link to your organization, no?

> Getting started with Chainguard Developer Images in Docker Hub is easy. Follow these simple steps:

> Look up the Image you want.

> Select ‘Recently Updated’ from the dropdown menu on the right.

> Filter out the community images by selecting the filter ‘Verified Publisher.’

> Copy the pull command, paste it into your terminal, and you are all set.


Good callout. If you know how to use Docker and Docker Hub, then it's just as easy as `docker pull chainguard/node`.


If a primary goal of a consumer of the images is security, how can we trust the images not to have backdoors or virusesesses [extra s added for comedy]?


Great question! We take hardening of our build infrastructure very seriously, and helped build many of the OSS technologies in this space like the SLSA framework and the Sigstore project.

We produce SBOMs during the build process, and cryptographically sign SLSA-formatted provenance artifacts depicting the entire build process so you can trace a built container all the way back to the sources it was built from.

We also try to make as much of our build system reproducible as possible (but we're not all the way there yet), so you can audit or rebuild the process yourself.
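The reason reproducibility matters for the backdoor question: if a build is reproducible, an auditor rebuilding from the published sources should get a byte-identical artifact, and therefore the same digest the publisher signed. A sketch of that check (the artifact bytes here are stand-ins):

```python
# Sketch of the reproducibility check: two independent builds of the
# same sources should yield byte-identical artifacts, hence identical
# digests. The artifact bytes are stand-ins, not a real binary.
import hashlib

def digest(artifact: bytes) -> str:
    return hashlib.sha256(artifact).hexdigest()

build_by_publisher = b"\x7fELF...same inputs, same toolchain..."
build_by_auditor = b"\x7fELF...same inputs, same toolchain..."

# If these match, the auditor didn't have to trust the publisher's
# build machine -- only the published sources.
assert digest(build_by_publisher) == digest(build_by_auditor)
print(digest(build_by_publisher)[:16])
```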



Have you ever been on a boat? It's not safe to assume the existence of anything, including a toilet, on them.


Have you ever seen a picture of this mothership? I think it's pretty safe to assume it has a toilet...

https://oceanexplorer.noaa.gov/okeanos/collaboration-tools/E...

