wwarek's comments | Hacker News

> How do you update the software in the containers when new versions come out or vulnerabilities are actively being exploited?

You build a new image with updated/patched versions of the packages, then replace the vulnerable container with a new one created from that image.
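As a minimal sketch of that workflow, with hypothetical image/container names (assumes a Docker daemon and a Dockerfile in the current directory):

```shell
# Rebuild the image; --pull and --no-cache force fresh base layers
# and fresh package downloads, so patched versions get picked up
docker build --pull --no-cache -t myapp:patched .

# Replace the running container with one created from the new image
docker stop myapp
docker rm myapp
docker run -d --name myapp myapp:patched
```

Orchestrators like Compose or Kubernetes automate the stop/remove/recreate part; the principle is the same either way: containers are replaced, not patched in place.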


Am I the only one surprised that this is a serious discussion in 2025?


Perhaps. There are many people, even in the IT industry, who don't deal with containers at all; think of Windows apps, games, embedded stuff, etc. Containers are a niche in the grand scheme of things, not the vast majority like some people assume.


Really? I'm a biologist who just does some self-hosting as a hobby and needs a lot of FOSS software for work. In my experience, containers are nothing if not pervasive. I guess my surprise stems from the fact that I, a non-CS person, even know about containers and see them as almost unavoidable. But what you say sounds logical.


I'm a career IT guy who supports businesses in my metro area. I've never used Docker, nor run into it with any of my customers' vendors. My current clients are Windows shops across med, pharma, web retail, and brick-and-mortar retail. Virtualization here is Hyper-V.

And this isn't a non-FOSS world. BSD powers firewalls and NAS boxes. About a third of the VMs under my care are *nix.

And as curious as some might be about the lack of dockerism in my world, I'm equally confounded by the lack of compartmentalization in their browsing: using just one browser, and that one w/o containers. Why on Earth do folks at this technical level let their internet instances constantly sniff at each other?

But we live where we live.


Self-hosting and bioinformatics are both great use cases for containers, because you want "just let me run this software somebody else wrote," without caring what language it's in, or looking for rpms, etc etc.

If you're e.g: a Java shop, your company already has a deployment strategy for everything you write, so there's not as much pressure to deploy arbitrary things into production.


The world is too complex, and life paths too varied, to reliably assume "everyone" in a community or group knows about some fact.

You're usually deep within a social bubble of some sort if you find yourself assuming otherwise.


I refer you to:

https://xkcd.com/1053/


About "new life", reminded me of phone booths in UK being reused as defibrillation stations:

https://nerdist.com/article/uk-red-phone-booths-defibrillati...


Communities can 'adopt' them for £1. Other uses include libraries or food bank donation points.

https://business.bt.com/public-sector/street-hubs/adopt-a-ki...


Beats letting them rot or turning them into novelty coffee stands. It's kinda cool how these relics are finding second lives in ways that actually help people.


And even API available (for some forms of public transport): https://www.wienerlinien.at/ogd_realtime/doku/ogd/wienerlini...


For reusing pieces of existing pipelines I think `include` would be appropriate, especially `remote` variant:

https://docs.gitlab.com/ci/yaml/#includeremote
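A hypothetical sketch of what that looks like; the URL and the `.build-template` hidden job are made-up names, assuming the remote file defines such a template:

```yaml
include:
  # Pulls a YAML file from any reachable URL and merges it
  # into this pipeline before execution
  - remote: 'https://example.com/ci-templates/build.yml'

build-job:
  extends: .build-template   # assumes build.yml defines this hidden job
```

One caveat with `include:remote` is that the URL is fetched fresh, so pinning a versioned URL (rather than a moving one) keeps pipelines reproducible.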


include:component is usually what you want now: you can version your components (SemVer), add a nice readme, and it is somewhat integrated into the GitLab UI. Not sure about the other include: variants, but you can also define inputs for a component and use them at arbitrary places, like template variables.

Since the integration is done statically, GitLab can show you a view of the pipeline script _after_ all components are included, but without actually running it.

We are using this and it is so nice to set up. I have a lot of gripes with other GitLab features (e.g. environments, especially protected ones, and the package registry), but this is one they nailed so far.
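A rough sketch of the inputs mechanism described above; the component path, version, and input names are hypothetical:

```yaml
# Consumer pipeline (.gitlab-ci.yml): include a versioned component
include:
  - component: $CI_SERVER_FQDN/my-group/ci-components/build@1.2.0
    inputs:
      stage: build
```

```yaml
# Component template (templates/build.yml): declare inputs, then
# interpolate them with $[[ ... ]] anywhere in the template
spec:
  inputs:
    stage:
      default: test
---
build-job:
  stage: $[[ inputs.stage ]]
  script:
    - echo "running in stage $[[ inputs.stage ]]"
```

Because the `$[[ inputs.* ]]` interpolation happens statically at include time, this is what lets GitLab render the fully merged pipeline without running it.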


Doesn't include:component still require all your shell script to be written inside YAML? Or is there a way to move the logic into, for instance, a .sh file and call it from the YAML?


I realize this may be splitting hairs, but pedantically there's nothing in GitLab CI's model that requires shell; as best I can tell, it is 100% Docker-image based. The most common setup is to use "script:" (or its "before_script:" and "after_script:" friends), but if you wanted to write your pipeline job in brainfuck, you could have your job be { image: example.com/brainfuckery:1, script: "" } and no shell is required[1]

1: although TIL that the "script:" field itself is actually required in GLCI https://docs.gitlab.com/ci/yaml/#script
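To answer the question about .sh files concretely: a common pattern is to commit the logic as a script in the repo and have the YAML merely invoke it. A sketch, with hypothetical paths and image:

```yaml
build:
  image: alpine:3.20
  script:
    # all real logic lives in a versioned, lintable shell script;
    # the YAML is just the entry point
    - ./ci/build.sh "$CI_COMMIT_SHA"
```

The script needs to be committed with the executable bit set (`git update-index --chmod=+x ci/build.sh`), and this works inside component templates too, since the component repo's files can ship alongside the template.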


Covered nicely by Tom Scott some time ago: https://youtu.be/i0RkEs3Xwf0


Weirdly enough that's about a completely different water treatment plant in a different part of the country.


Several Polish cities have systems like that. It may sound unusual, but it is a pragmatic system.

During the Cold War, Poland was supposed to be a battleground, and protecting the water supply is hard. Many traditional systems have some gaps.



It's more precise than landing on the ground, coordination with the tower is required, and it is way bigger than Falcon 9.

This video goes into details pretty well: https://m.youtube.com/watch?v=OYvWYp0--bQ&t=832s


fyi all links return 404


"Z" is used by Russia in the invasion of Ukraine:

https://en.m.wikipedia.org/wiki/Z_(military_symbol)



Warning: Hairy nutsack of truth

