
I guess people who are running their own registries like Nexus and build their own container images from a common base image are feeling at least a bit more secure in their choice right now.

Wonder how many builds or redeployments this will break. Personally, I have nothing against Docker or Docker Hub, of course; I find them useful.



It's actually an important practice to have a Docker image cache in the middle. You never know if an upstream image is purged from Docker Hub, your K8s node gets replaced, and now it can't pull the base image for your service.

Just engineering hygiene IMO.
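
For illustration, the reference registry image can run as a pull-through cache in one command (port and container name here are just placeholders):

    # run the reference registry as a pull-through cache for Docker Hub
    docker run -d -p 5000:5000 \
      -e REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io \
      --name hub-cache registry:2

Then point your nodes' dockerd at it via "registry-mirrors", and cached layers keep serving even if the upstream disappears.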


> You never know if an upstream image is purged from Docker Hub, your K8s node gets replaced, and now it can't pull the base image for your service.

That doesn’t make sense unless you have some oddball setup where k8s is building the images you’re running on the fly. There’s no such thing as a “base image” for tasks running in k8s. There is just the image itself and its layers, which may come from some other image.

But it’s not built by k8s. It’s built by whatever is building your images and storing them in your registries. That’s where you need your true base image caching.
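
To make the distinction concrete (registry names here are hypothetical): the base image is only resolved at build time, e.g.

    # Dockerfile - FROM is resolved where the build runs, not in k8s
    FROM registry.internal.example/mirror/python:3.12-slim
    COPY app /app
    CMD ["python", "/app/main.py"]

while the pod spec in k8s references only the finished image, never its base:

    # the node pulls this single flattened image and its layers
    image: registry.internal.example/myteam/myapp:1.4.2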


We are using base images, but unfortunately some GitHub Actions pull Docker images in their prepare phase - so while my application would build, I cannot deploy it because the CI/CD depends on Docker Hub and you cannot change where these images are pulled from (so they cannot go through a pull-through cache)…
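
For example (the image names are just typical ones), a job like this pulls straight from Docker Hub during the setup phase, before any of your own steps run:

    jobs:
      test:
        runs-on: ubuntu-latest
        container:
          image: node:20          # pulled in the prepare phase
        services:
          postgres:
            image: postgres:16    # same - no knob to point this at a mirror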


My advice: document the issue, and use it to help justify spending time on removing those vestigial dependencies on Docker Hub asap.

It's not just about reducing your exposure to third parties who you (presumably) don't have a contract with, it's also good mitigation against potential supply chain attacks - especially if you go as far as building the base images from scratch.
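
A sketch of what "from scratch" means in practice - the tarball name is hypothetical, the point is that nothing comes from a third-party registry:

    FROM scratch
    # a root filesystem you build and vet yourself (e.g. via debootstrap)
    ADD rootfs.tar.gz /
    CMD ["/bin/sh"]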


Yea, we have thought about that - I also want to remove most dependencies on externally imported actions in GitHub CI and probably just go back to simple bash scripts. Our actions are not that complicated, and there is little benefit in using some external action to run ESLint over just running the command inside the workflow directly. Saves time and reduces dependencies - just need to find the time to do that…
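
E.g. something like this, replacing a hypothetical marketplace action with the underlying command:

    # before: uses: some-org/eslint-action@v3
    - name: Lint
      run: npx eslint .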


Mirrors can be configured in dockerd or BuildKit. If you can update the config (might need a self-hosted runner?) it’s a quick fix - see https://cloud.google.com/artifact-registry/docs/pull-cached-... for an example. AWS and Azure are similar.
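
Roughly like this (the mirror shown is Google's public Docker Hub mirror; swap in your own cache):

    # /etc/docker/daemon.json
    {
      "registry-mirrors": ["https://mirror.gcr.io"]
    }

    # buildkitd.toml
    [registry."docker.io"]
      mirrors = ["mirror.gcr.io"]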


Hmm, yea, with a self-hosted runner this could work. I'd need to set the dockerd config on the VM before the runner starts, I assume - unfortunately GitHub itself does not allow changing anything in the prepare stage - and it's been a known issue for at least 2 years...

https://github.com/actions/runner-images/issues/1445#issueco... https://github.com/orgs/community/discussions/76636
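
Something like this on the VM, before the runner agent starts (the mirror URL is a placeholder):

    cat <<'EOF' | sudo tee /etc/docker/daemon.json
    {
      "registry-mirrors": ["https://mirror.internal.example"]
    }
    EOF
    sudo systemctl restart docker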


We run Harbor and mirror every base image using its Proxy Cache feature, it's quite nice. We've had this setup for years now and while it works fine, Harbor has some rough edges.
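
For anyone curious what that looks like in practice: you create a proxy-cache project pointing at Docker Hub and pull through it (hostname and project name here are placeholders):

    # official images need the library/ prefix through the proxy project
    docker pull harbor.example.com/dockerhub/library/nginx:1.27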


I came here to mention that any non-trivial company depending on Docker images should look into a local proxy cache. It’s too much infra for a solo developer / tiny organization, but it is a good hedge against Docker Hub, GitHub, etc. downtime and can run faster (less ingress transfer) if located in the same region as the rest of your infra.


Currently unable to do much of anything new in dev/prod environments without manual workarounds. I'd imagine the impact is pretty massive.

Aside: seems Signal is also having issues. Damn.


I’m not sure that the impact will be that big. Most organizations have their own mirrors for artifacts.


From what I've seen: I highly doubt it.

Edit to add: This might spur on a few more to start doing that, but people are quick to forget/prioritise other areas. If this keeps happening then it will change.


“Their own” can and often does mean something hosted on a major cloud provider (whether they manage it in-house or pay a vendor for their system)


Yeah, perhaps. I don't know how many folks host mirrors. Most places I've worked for didn't, though this is anecdotal.


I would say most people would call it best practice, while a minority actually does it.


Seems related to size and/or maturity if anything. I haven't seen any startups less than five years old doing anything like that, but I also haven't seen any huge enterprise not doing that, YMMV.


Yes I noticed Signal being down too


That is nothing compared to how good I feel about not using containers at all.


You don’t want a Rube Goldberg contraption doing everything?

So not agile!


Guess where we host Nexus...


Only if they get their base images from somewhere else...


Pull-through caches are still useful even when the upstream is down... assuming the image(s) were pulled recently. The HEAD request to upstream will obviously fail [when checking freshness], but the software is happy to serve what it has already pulled.

Depends on the implementation, of course: I'm speaking of 'distribution/distribution', the reference implementation. Harbor or whatever else may behave differently, I have no idea.
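
For reference, pull-through mode in 'distribution/distribution' is just this stanza in config.yml (credentials are optional):

    proxy:
      remoteurl: https://registry-1.docker.io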



