Woo. I've been using the RHEL 7 Amazon AMIs, and it's nice not to have to worry about shell scripts or custom supervisord setups for your web services anymore.
My node app is deployed with a single `myapp.service` file thanks to systemd:
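Something like this (a minimal sketch; the paths and port are hypothetical):

```
[Unit]
Description=myapp node service
After=network.target

[Service]
# hypothetical install path; the app listens on an unprivileged
# port (3000, not visible to the outside) so it can run as nobody
ExecStart=/usr/bin/node /srv/myapp/app.js
User=nobody
# restart on crashes; systemd rate-limits restart loops by default
Restart=on-failure
# journald tags this unit's messages with this identifier
SyslogIdentifier=myapp

[Install]
WantedBy=multi-user.target
```

This has all the stuff you expect: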
- Not restarting repeatedly if the app is bouncing up and down.
- I can see how it's doing with `journalctl`, which reads journald messages, and those messages come from a source called `myapp` rather than one of the ancient syslog facilities (local0, uucp, lpr) that everyone just ignored in favour of grepping.
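Day-to-day it's then just:

```
systemctl enable myapp.service    # start at boot
systemctl start myapp.service
journalctl -u myapp.service -f    # follow only this app's messages
```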
You could potentially register your application with firewalld to make the iptables part even more elegant - then the port would only be open when the service was running.
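A sketch of what that could look like, assuming a hypothetical firewalld service named `myapp` for the app's port:

```
# define the service once (the port is an assumption)
firewall-cmd --permanent --new-service=myapp
firewall-cmd --permanent --service=myapp --add-port=3000/tcp
firewall-cmd --reload

# then open/close it in step with the unit, e.g. in myapp.service:
#   ExecStartPost=/usr/bin/firewall-cmd --add-service=myapp
#   ExecStopPost=/usr/bin/firewall-cmd --remove-service=myapp
```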
That said, I'm somewhat on the fence about firewalld in a server context: the zones are really designed around mobile-computing use cases, and I'm not a fan of XML as a configuration language.
I gave CentOS 7 a spin last night (been waiting for the release!).
I installed it on VirtualBox (using a Windows host PC). I must say, my first impressions are that this is a very nice release.
They brought over the new installer from Fedora, which I really like. It's two clicks (language and keyboard) and then it installs in the background while you set up your passwords and accounts. This makes for a very fast install.
The system just seems to feel "faster". I know that's completely subjective, but mind you, I did not have the VirtualBox Guest Tools installed on the VM and it still felt "native". I'm not sure what specifically feels faster, just everything I guess... especially booting.
eth0 was set to "ONBOOT=no" by default, which tripped me up for half a second (since all previous releases came with all adapters up). A quick config change and a restart of the network service, and I was up and running. I was surprised there were already updated packages available, including a kernel update.
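For anyone else who trips over it, the whole fix amounts to:

```
sed -i 's/^ONBOOT=no/ONBOOT=yes/' /etc/sysconfig/network-scripts/ifcfg-eth0
systemctl restart network.service
```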
VirtualBox did not want to install its Guest Additions/Tools on the VM, due to some change with "numa"; from my understanding it's memory-allocation related (could be wrong). In any event, I got the "struct mm_struct has no member named numa_next_reset" error.
configtest and graceful are "commands" supported by some init scripts, such as the httpd (Apache) and nginx ones, which can send alternate signals to trigger other actions.
But even in your example, condrestart and try-restart aren't supported by pure systemd units; if you were using the init.d version, though, you could run the easy-to-remember/type `service mysqld condrestart`.
Fortunately, with 7.0 you can keep using the old init.d scripts and custom commands; hopefully that will never go away in any of the later 7.x versions (I don't see why it would).
I'm not a systemd expert, but for configtest you could add a 'Config' option and patch systemd to record the hash of each config file at start time. From then on, every .service that supplied its Config would be able to do a configtest and condrestart. DRY.
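A rough shell sketch of the idea (hypothetical; systemd has no such 'Config' option today):

```
#!/bin/sh
# Record the config's hash at start; condrestart only acts
# when the hash no longer matches.
CONF=/etc/myapp.conf
STATE=/run/myapp.conf.sha256

case "$1" in
  start)
    sha256sum "$CONF" > "$STATE"
    systemctl start myapp.service
    ;;
  condrestart)
    if ! sha256sum -c --quiet "$STATE"; then
      sha256sum "$CONF" > "$STATE"
      systemctl restart myapp.service
    fi
    ;;
esac
```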
That looks so much cleaner than init.d scripts. I have heard some people refer to systemd as not very Linux-esque - can someone comment on what that might mean?
Systemd isn't just replacing the init process and service manager; it encompasses many other boot- and session-related features. As an example, Gnome now uses parts of systemd for session management (replacing ConsoleKit, among others). While there are workarounds, this means no one can really use any other init system besides systemd, or things won't work correctly.
This is what many people view as un-Unix-like: a big dependency that can't be replaced in the chain. The systemd developers also develop udev (which actually builds out of the same source tree as systemd), and there have been a few arguments about whether certain responsibilities belong to the kernel or to udev/userspace: http://lwn.net/Articles/518942/, http://lwn.net/Articles/593676/. Systemd also touts its cgroup usage, though the developers hold that they should be the only user of cgroups, since they are in a spot to "manage things properly" (which is partially true). (Some services have also started relying on systemd's cgroups to clean up orphan processes, which is bad for other init systems that don't do this.)
So no one really hates how systemd works, just how it's managed and what it aims to do. If it were a simple init system and service manager, there would probably have been no arguments.
>though they view that they should be the only user of cgroups
And they're absolutely correct. There can only be one cgroups manager on the system. cgmanager isn't finished, either.
>As an example, Gnome now uses parts of systemd for session management (replacing ConsoleKit, among others). While there are workarounds, this means no one can really use any other init system besides systemd, or things won't work correctly.
Gnome requires logind, which is the successor to ConsoleKit. Gnome has repeatedly stated that they're willing to work with the BSDs et al. on workarounds. You can either use the old logind, which doesn't require systemd as PID 1 (due to cgroups-management changes), or you can develop your own replacement for logind, one that doesn't require systemd as PID 1.
People criticize these decisions, but they have no suggestions on how to fix it and, for the most part, they have no inkling of why there's a requirement.
There are valid technical reasons people don't like systemd, but usually the die-hard anti-systemd arguments boil down to hating Lennart Poettering and complaining about how the US Constitution must somewhere say they never have to use systemd.
There are some valid concerns (big scope, bloat, feature creep, etc.), and while they really haven't come to fruition yet, they remain legitimate.
At the end of the day, without much effort or trouble of switching to systemd from OpenRC, my system boots faster, which is great.
The very first thing I used to do on any new Linux install (until most distros stopped including it) was to uninstall Pulse and all of its dependencies.
I still uninstall it, even though I know it's improved a bit. I just can't get to the point of trusting it, or see the point of yet another layer of latency and bugs between programs generating audio and the sound card. They should have spent all that effort making a nice interface to dmix or something, rather than bashing ALSA for being "Linux only" and then turning around and creating systemd, which is Linux-only by design. I mean, we get it. You're the next Miguel de Icaza. The savior of Linux, coming to save us from the terror of text files, open standards, and clean dependencies. People must be forgetting the hell that was CORBA and Linux circa 2001 or so. You still can't install many apps without installing half of GNOME.
There is a lot of unfairness around PulseAudio. Sure, it had its share of bugs, BUT so did ALSA. PulseAudio was pushing the envelope, so crappy ALSA drivers showed their limits. Crappy sound chips too. But PA took most of the blame, although it was not always its fault. It could have been handled differently, maybe.
PulseAudio on various old Thinkpads (X200s, X60, X61s) seems fine (Debian Wheezy and CentOS 6/pre-release 7) with the built-in sound card and a cheapo USB microphone as input.
It depends what sound hardware and software you use, what the memory layout of the PulseAudio daemon happens to end up like on your system, your tolerance for audio glitches, and how long you go between reboots. The code quality's awful but if you're lucky you can miss out on the worst of the issues.
I can at least vouch for 1) drivers (arguably this is an alsa thing, but pulseaudio had a way of exposing problems), and 2) your tolerance not just for glitches, but also latency (in the form of buffers) and overall sound quality.
I suspect, based on how few people seem to be upset about PulseAudio (as opposed to what one might expect), that some subset of popular sound hardware worked rather well. It's just not a subset I ever owned.
Neither, funnily enough. I've never quite understood the hate, but to be fair, if the complaints I've seen were wide-spread problems, that would make sense.
Linux did kind of get the short end of the stick with System V initialization. BSD rc scripts are also written in shell, but much cleaner (particularly when you make use of rcorder(8) dependencies).
systemd's declarative unit-file syntax is easier to reason about, but it comes at the expense of having to memorize a ton of options and being fundamentally dependent on the toolbox systemd provides, since you can't code your way out of unconventional corner cases as easily.
The unit file syntax isn't the reason people complain, though.
Of course your systemd startup can launch a sh script with ease. I think systemd is going to make my life so much easier. I'm more on the dev side of devops, but I still need deployments to correctly restart apps that depend on complex systems working, e.g. NFS mounts being available for data.
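For the NFS case specifically, a unit can declare that dependency directly (path hypothetical):

```
[Unit]
Description=app that needs its NFS data in place
# order this unit after, and pull in, whatever mounts cover the path
RequiresMountsFor=/data/nfs

[Service]
ExecStart=/usr/local/bin/myapp
```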
Slackware has always used BSD init scripts and it's one of the oldest distros. You used to be able to choose any init system you wanted but it seems that systemd is becoming entwined with "Linux" in some nasty ways. I wonder how long Slackware (or any other distro) will be able to avoid using it since software like Gnome is starting to require it.
FWIW, systemd can also call scripts under /etc/init.d if that is really what floats your boat. It is fully backwards compatible; you just lose a lot of systemd's flexibility when doing that. The redis-server package in RHEL7 (from EPEL) installs an init script, and I know that systemctl starts it.
>Oh, an embedded HTTP server is loaded to read them. QR codes are served, as well.
And outright falsehoods. systemd-journal-gatewayd is entirely optional. People keep harping on a packaging mistake from when it was first pushed to Fedora for testing, but repeating it so many times doesn't make it true.
>In fact, udev merged with systemd a long time ago
systemd relies on udev and dbus, but udev doesn't pull in systemd as a hard dependency: another falsehood I've seen parroted by those in support of the eudev fork.
I'd be very interested to see a qualified crypto expert's take on the "sealing" that journald uses. This is one of two indicators of a troubling level of arrogance from the developers. The crypto method journald uses to verify that messages haven't been tampered with is called Forward Secure Sealing [0]. It's based on an invention by the brother of Lennart, the lead developer, and for a long time after the first release even the whitepaper describing it in detail was "coming soon". The code he finally produced is [1], but it's rather light on documentation.
I'm still unaware of any proper analysis of this, and using your brother's own crypto methods while ignoring all the questions this has raised does not come across well - it appears to seriously violate the "don't roll your own crypto" rule.
The second indicator is the attitude to bugs, of which [2] is a good example - several of the developers appear to be extremely defensive towards any suggestion of defects in their software, and simply close bugs blaming the users, other software, anything else.
I'd be hopeful that Red Hat manages to rein this behavior in, but that didn't seem to work for Ulrich Drepper when he was employed by Red Hat to work on glibc, and I'm not sure it's going to work here.
That said - I'm not in the "systemd is awful" camp - I do think there's a whole bunch of things it does really well, and that a lot of the hate is really quite reactionary - but the thing that frustrates me is that between the haters and the supporters, there are important questions that are getting lost in the noise.
I just can't agree with your [2] as a problem. The actual problem (an assertion failure in systemd) was fixed, several alternative workarounds are provided in case the user can't or doesn't want to upgrade systemd immediately, and functionality changes about when and where and how to log were going on on the mailing list, as they should be, and the reporters were directed there politely, even after violent vitriolic attacks. After discussion concluded on the mailing list, systemd was changed to direct debug logs away from kmsg as soon as journald is available.
I can't find anything to complain about from the systemd team on that bug report. I'd just dismiss it as varying personal standards of politeness, but the complaints on that bug report are themselves far far worse, with vitriolic abuse and death threats, so there's got to be something else going on here.
That discussion escalated quickly. What I got from the thread is that systemd will only work with glibc, intentionally? If that's the case then it's kind of sad: since systemd will become the standard, glibc will be the only option.
Relevant work in the area is Log Hash Chaining as described in RFC 5848, which at least has been through some peer review.
I don't know why they chose to ignore that, let alone what their design is really supposed to guard against. Their design allows an attacker a window of 15 minutes where they can rewrite the log at will.
So the short of it is: Keep using remote logging. Authenticate that. Don't rely on journald.
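For example, the classic rsyslog forwarding rule (collector hostname hypothetical) keeps a copy off-box where journald tampering can't reach it:

```
echo '*.* @@loghost.example.com:514' > /etc/rsyslog.d/remote.conf
systemctl restart rsyslog.service
```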
(I too have had Drepper vibes about the whole situation for quite some time. But a new init standard was long overdue and if distros can finally rally around systemd it might be worth it.)
From your [2]:
Like for the kernel, there are options to fine-grain control systemd's logging behaviour; just do not use the generic term "debug", which is a convenience shortcut for the kernel AND the Base OS.
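The scoped knobs being referred to are kernel command-line options along these lines:

```
# debug systemd alone instead of using the generic "debug" flag
systemd.log_level=debug systemd.log_target=kmsg
```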
Optional, but enabled by default. Most distros seem to ship with defaults and only customize flags related to library paths and FHS details (besides whatever patching they may apply). The fact that it's even there is distressing enough, really.
No one is saying udev pulls in systemd as a dependency. Where is that said?
Where is it enabled by default? In what distro does gatewayd get pulled in by default?
>No one is saying udev pulls in systemd as a dependency.
By the eudev hobbyists, whose justification for their fork was largely "we don't want to force people to use systemd" and "no, we didn't contact upstream with fixes before we forked" in their presentation.
Why do people use words like "violent" to describe the decision to change process launchers? Maybe if they'd toned down the rhetoric just a tiny bit that campaign might've worked better.
I'm always scared of CentOS/Fedora. Whenever I update something, the packages are very old, which leads to using third-party repos, and without any kind of documentation the next sysadmin will be in trouble. I once installed Gearman on CentOS 5.9 and it was a nightmare: the original third-party repo didn't have Gearman, and I had to use another repo, which complained about a php-common conflict. Remi and Webtatic did help in the end.
It's hard to keep CentOS up to date with the latest software, IMHO.
I'm not a sysadmin, but I think the idea behind APT and YUM is the same, so why is YUM so hard/troublesome to use?
Fedora is the RedHat "testbed" and has regular updates similar to the debian/ubuntu universes.
One of the core features of CentOS/RedHat/Scientific Linux/etc. is their long-term support and package freeze (excluding critical security updates). If you are reliant on a different version of software than the default repositories carry (either newer or older), you are encouraged to build and package it yourself. Additionally, in the RedHat-derivative universe there is the EPEL package repository, which contains a wider range of software, is kept more readily updated, and is maintained by the Fedora Project (essentially RedHat).
Besides using third party repos, you can also use Red Hat Software Collections for the official Red Hat offering of more recent packages (requires RHN, not available for CentOS).
Perhaps it's what you're familiar with and the intended use case. I personally can't stand the way APT won't let you override certain behavior. RHEL / CentOS have always favored stability over being bleeding-edge, and in my experience EPEL provides enough newer software to make the system usable for most cases, although if you're wanting the latest frameworks, etc., I can see that being incompatible with your needs. I have very rarely had any problems with broken dependencies in Red Hat repositories except for the occasional 15-minute hiccup. SUSE seems to have more problems, but still has never been enough of a problem to make me question using it in production.
I think the third-party frameworks issue can be solved with LXC / Docker. If app x requires library y and supporting utility z, this can all be put into a container without having to update the OS's versions of y and z.
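i.e., something like this (image name hypothetical):

```
# the image bundles its own versions of library y and utility z;
# the host's packages stay at the distro versions
docker run -d --name appx -p 8080:8080 myorg/appx:1.0
```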
Although personally I've had more instances of running into an incompatibility with the older CentOS 5 kernel than with any of the libraries in the distribution - which LXC wouldn't be able to help with.
I must say that compared with the CentOS 6 release, CentOS 7 is a breath of fresh air. I'll assume for the moment that this is because of Red Hat's involvement in the process: specifically, the availability of build scripts and the openness of communication throughout.
The delay in releasing CentOS 6 and the delays in updates had me worried about the future of CentOS.
The CentOS team completely re-did their build system at the start of CentOS 6, and that was the cause of the delay. This build system still has nothing in common with Red Hat's build system, and it worked pretty well for CentOS 7!
It looks like the listed minimum for RHEL 6 was 1 GB[1]. I don't think 128 MB was officially supported even then, so if you're happy with how it works now, you may still be happy with 7.
Probably. You'll have to hack the install procedure to enable swapping before it runs, though.
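Roughly (device name assumed; the installer gives you a shell on tty2):

```
# Ctrl+Alt+F2 for a shell once the installer has booted
mkswap /dev/sda2    # a swap partition carved out beforehand
swapon /dev/sda2
```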
If you're serious about trying this, the fastest path is to install CentOS in a VM on a suitable machine that isn't going to spend a day in the installer swapping like crazy, and copy the image to a physical drive which you eventually stuff into the ancient low memory box.
> I've been happily using CentOS 6 on my 128MB box
I used to do that too, but then I figured there's a constructive use I could give to my old desktops when any such system reaches the age of replacement - just max it out on RAM and declare it a "home server". Now I can run a Minecraft node for my kids on it - with 2GB of Java heap, no less.
I mean, my Raspberry Pi has more than 128 MB of RAM.
Oh, RAM sticks are cheap, esp. for old machines. This is true. But I use a VPS, and the 128MB instances are cheap enough to be worth taking ten minutes to shut off junk services (which improves security anyway).
Out of interest, what machine are you using CentOS 6 on? I find 128 MB kinda tight - although I've recently found a VPS provider who's managed to get Debian Jessie working in 96 MB!
Granted, it's an enterprise-level distro with iSCSI, storage drivers, XFS as the default filesystem, and other relatively heavy things included by default. Running on smaller machines was never a goal; if it ever worked, it was probably accidental.
I don't even know if they have an ARM port (maybe an ARM 64-bit one for those hip new low-wattage ARM microservers).
Agreed. Config files 100 times bigger and more complicated. Why? Go to its home page and find a link that says "differences between GRUB Legacy and GRUB", which simply goes to the GRUB 0.97 manual, which says absolutely nothing about the differences between the two versions. They really care about their users, that's for sure.
grub2 is pretty much the textbook definition of "overdesigned." All it has to do is transfer control to the OS... that's it. But somehow it grew code to parse filesystems, set up graphical user interfaces, load modules, and do half a dozen things that it really has no reason to do. Half the time this hairball won't even boot after you change something, because you forgot to run the correct script to refresh the other scripts, or you moved something on the disk.
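(On RHEL-style systems, the script people forget to run being:)

```
grub2-mkconfig -o /boot/grub2/grub.cfg
```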
Just install elilo and enjoy having a system that actually works.
Aye, and now grub2 requires a 1 MB partition to boot a GPT-labelled disk on a BIOS (non-EFI) system. Well, looks like we're out of primary partitions. Adios, swap.
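For reference, carving out that partition looks something like this (device hypothetical):

```
parted /dev/sda mklabel gpt
parted /dev/sda mkpart biosboot 1MiB 2MiB
parted /dev/sda set 1 bios_grub on
```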
RHEL 7 was released on June 10[1], but many people (myself included) don't have a RedHat subscription and run CentOS, the 'community' edition of RedHat.
Up until today, CentOS 6.5 was the latest release... so I can start using CentOS 7 now and enjoy the benefits that RedHat introduced in RHEL 7. You can see in this Wikipedia chart[2] that the delay between a RHEL release and the corresponding CentOS release has shrunk considerably in recent years.
Also, iirc, this is the first major CentOS release with the CentOS project more or less under RedHat's wing[3].
You can go to https://git.centos.org which is hosting all the sources used to build the system. Of course for RPM packages that is only metadata, not the actual upstream software source.
Just out of curiosity, since it sounds like you might know: on ubuntu I can do something like sudo apt-get source packagename and the system will just download the associated source code if a source repo is setup in the system's sources list. Does RHEL/CentOS have a similar capability?
The support goes both directions. It is a way to get the organization you work for to indirectly donate to a variety of free software projects, and your company also gets the contractual safety net that they want.
Yes. If you have RHEL installed on hundreds (or thousands) of HP/Dell servers at a Fortune 1000 company, you'll have someone to call if the kernel on some production machine keeps dumping.
I can see a bootstrapped company using CentOS, or a company running on angel/seed money. Once a company gets Series A funding though, you have to start wondering why they wouldn't upgrade to Red Hat. The message from the company basically is they'd prefer the sysadmin to spend their nights and weekends figuring out problems, instead of making a small payment for support service. This is the type of position you want to run, not walk from.
On a job interview, a good question for a sysadmin to ask an interviewer when they say "do you have any questions to ask me?", is, "Will I support any machines, operating systems or applications that are not under a vendor support contract?" Inevitably there will be one or two legacy machines or applications, but if you get a laundry list in response, run.
I've been on both sides of the fence with regards to vendor support. In an extreme case at one company I worked with, when I'd report a slight irregularity or outage to my boss, the first question he'd ask is what is the vendor support ticket number. After all, the company was paying for the support, so it was a very high failing if an employee spent any energy in trying to solve an issue.
The problem with this, of course, was that I don't want to open a vendor ticket until I have something to give them to work on. A random program crashed? First I'd blame the programmer or some other issue, not a kernel bug. But that particular manager just wanted to pin everything on the vendors no matter what.
As an aside, since I've been supporting Linux systems for the last 1.5 decades, I've only had to call in OS vendor support twice -- once was to verify what I already knew (clock skew due to a bad hardware timer chip), and another time to satisfy an app developer manager who wouldn't take responsibility for his team's code.
Actually, I should add something -- with Red Hat support, you also get access to a whole history of past case logs at access.redhat.com -- these are discoverable through a Google search, but to see the full solution you have to be logged in. And this has actually kept me from needing to call Red Hat on a number of occasions, so I guess this can count as using Red Hat's support indirectly. And their knowledge base is very well written, esp. when explaining the root cause of specific known past issues.
"The message from the company basically is they'd prefer the sysadmin to spend their nights and weekends figuring out problems, instead of making a small payment for support service."
So... if there's a friday night problem, you'd prefer staff to just be hanging, chillin' at home, and then when pressed, say "we created a ticket with redhat, nothing more to do!" ?
Critical breakages should keep people working until stuff is fixed. If you want to also pay a bit extra and involve external support people, that's fine, but it's not a magic bullet that just 'fixes' everything. You've now got to account for time to manage working with external support staff, making sure you can get them in to the affected systems, etc.
If a machine is kernel dumping, and Red Hat is trying to diagnose and fix the machines, nobody said your sysadmins will automatically just go home.
Instead, they can use their time more productively: building a workaround, or figuring out ways to re-architect your solution around the failure points.
They'll prioritize fixing your bugs with their product management team, but that is basically support. You pay for RHEL for support, that is all it ever was for.
Am I missing something? This was submitted two hours ago, yet the images (CentOS-6.5-x86_64-bin-DVD1.iso) are the same ones I downloaded on 6/20/2014. The release notes list the new image names, but they don't appear to have been pushed out to the mirrors.