Perhaps. There are many people, even in the IT industry, who don't deal with containers at all; think of Windows apps, games, embedded stuff, etc. Containers are a niche in the grand scheme of things, not the vast majority of computing like some people assume.
Really? I'm a biologist, just do some self-hosting as a hobby, and need a lot of FOSS software for work. In my experience containers are nothing short of pervasive. I guess my surprise stems from the fact that I, a non-CS person, even know about containers and see them as almost unavoidable. But what you say sounds logical.
I'm a career IT guy who supports businesses in my metro area. I've never used Docker nor run into it with any of my customers' vendors. My current clients are Windows shops across med, pharma, web retail and brick-and-mortar retail. Virtualization here is Hyper-V.
And this isn't a non-FOSS world. BSD powers the firewalls and NAS boxes. About a third of the VMs under my care are *nix.
And as curious as some might be about the lack of dockerism in my world, I'm equally confounded by the lack of compartmentalization in their browsing: using just one browser, and that one without containers. Why on Earth do folks at this technical level let their internet sessions constantly sniff at each other?
Self-hosting and bioinformatics are both great use cases for containers, because what you want is "just let me run this software somebody else wrote," without caring what language it's in, hunting for RPMs, etc.
If you're, e.g., a Java shop, your company already has a deployment strategy for everything you write, so there's less pressure to deploy arbitrary things into production.
Beats letting them rot or turning them into novelty coffee stands. It's kinda cool how these relics are finding second lives in ways that actually help people.
include:component is usually what you want now: you can version your components (SemVer), add a nice README, and it's somewhat integrated into the GitLab UI. Not sure about the other include: variants, but you can also define inputs for a component and use them at arbitrary places, like template variables.
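For example (the project path, component name and input names here are made up), the consumer side looks roughly like:

    include:
      - component: gitlab.example.com/my-group/ci-components/build@1.2.0
        inputs:
          stage: build
          node_version: "20"

and the component project declares those inputs in a spec: header at the top of templates/build.yml, then interpolates them wherever it likes:

    spec:
      inputs:
        stage:
          default: test
        node_version:
          default: "18"
    ---
    build-job:
      stage: $[[ inputs.stage ]]
      image: node:$[[ inputs.node_version ]]
      script:
        - npm ci && npm run build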
Since the integration is done statically, GitLab can show you a view of the pipeline script _after_ all the components have been included, but without actually running anything.
We are using this and it is so nice to set up. I have a lot of gripes with other GitLab features (e.g. environments, especially protected ones, and the package registry), but this is one they've nailed so far.
Doesn't include:component still require all your shell script to be written inside YAML? Or is there a way to move the logic to, for instance, a .sh file and call it from the YAML?
I realize this may be splitting hairs, but pedantically there's nothing in GitLab CI's model that requires shell; it is, as best I can tell, 100% Docker-image based. The most common setup is to use "script:" (or its "before_script:" and "after_script:" friends), but if you wanted to write your pipeline job in Brainfuck, you could have your job be { image: example.com/brainfuckery:1, script: "" } and no shell would be required.[1]
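To the question above, though: the YAML can stay thin. The runner checks out your repo before the job runs, so script: can just invoke a file that's committed (and executable) in it. A rough sketch, with a made-up path and image:

    build:
      image: alpine:3.19
      script:
        - ./ci/build.sh   # all the real logic lives in this committed shell file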
You build a new image with updated/patched versions of the packages and then replace the vulnerable container with a new one created from that image.
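Roughly, with made-up names, assuming a Dockerfile in the repo and a container called app running the old image:

    docker build --pull -t app:1.0.1 .   # --pull re-fetches the (patched) base image
    docker stop app && docker rm app
    docker run -d --name app app:1.0.1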