Search - "why not systemd"
-
Linux sucks.
Now now, chill. I've been using it as my main OS for a few years now; I know what I'm talking about, and yes, this title is a bit click-baity, but this just has to go out there:
1. It's usable as a Windows replacement just fine - FALSE. XFCE4 is years old and buggy as hell, especially on a multi-monitor setup; Gnome3 gets stuck more often than my Windows 98 machine used to; KDE is like a rich kid on meth. Plug in Bluetooth headphones? Well no, sorry, you have to research that online, since you'll probably need to install some packages for it to work. Did I say "work"? Well no, because after more research you realize that Debian with Gnome3 on gdm3 launches pulseaudio on its own, so you get 2 instances of pulseaudio, and one of them sometimes steals your headphones, leaving you with either no sound or shitty sound. How do I know that, you ask? The same way I know everything else - every time you try to do something new on any Linux, it involves a ton of research. Exciting research, don't get me wrong, but at this point it looks more like a toy than a reliable desktop operating system. (The workaround I eventually found is sketched at the end of this list.)
2. And why am I using pulseaudio anyway? Why not alsa? Years ago people were declaring on forums that pulseaudio is old and dead, yet here we are with the new LTS release of Ubuntu still shipping PulseAudio front and center. How about several different service management systems being deprecated by new ones, each with different configurations and calling conventions? Apparently systemd is old and lame now. It's a mix of 10 year old software that works badly with a 5 year old replacement that works worse, somehow trying to live under the same roof. Does it work? Ask my headphones, which sound like a fucking dial-up modem.
3. Let's talk about displays, shall we? Xorg is old and deprecated, right? We've got Wayland, which is mostly stable. Don't know what that is? That's just basic knowledge for Linux. And when you try to install network-manager, it also tries to pull in the Mir toolkits. Because why the fuck not install 3 display servers when all you wanted was a network manager - one of which is old and dying, one is young and stupid, and another is an infant that died of cancer?
4. Want to integrate with Google Drive? Yeah, there's a tool that mounts the drive as a local directory. Yeah, only for Ubuntu. Want it on Debian? You need to compile it. Oh wait, it's written in OCaml, because fuck mainstream languages, we're hipsters. How do you compile an OCaml program? Well, you need to have OCaml on your system, dummy. How do you get that? Well, you compile OCaml. Ok, how do I do that? Well, git clone, download and install some dependencies, configure, make... oh sorry, you're using libssl1.0.2g when you need libssl1.0.1f, nope, sorry, won't work. Want to install libssl1.0.1f? Why? You already have the "g", stupid! Want to remove libssl1.0.2g? Bye-bye literally everything that you have on your PC. But at least you got the "f". Does it work now? Well no, because you need libssl1.0.2g for another dependency to work.
And all I ever wanted was to get a fucking document from google drive (not nudes, I promise). There turns out to be a saner route via opam, sketched below, after this list.
5. Want to watch a movie? Let me tear that screen in half and make the bottom half late by a couple of frames, because who needs vertical sync, right? Oh you do? Well install the native drivers maybe. Oh you have? Welcome to eternal Boot to Recovery mode, motherfucka!
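(As promised up in point 1, here's the dual-pulseaudio workaround I eventually landed on. A sketch for Debian - the /var/lib/gdm3 home and the Debian-gdm user are Debian's defaults, adjust if yours differ:)

ps -eo user,pid,cmd | grep [p]ulseaudio   # two lines here = two daemons
# tell gdm3's own instance to stay dead:
sudo mkdir -p /var/lib/gdm3/.config/pulse
sudo tee /var/lib/gdm3/.config/pulse/client.conf <<'EOF'
autospawn = no
daemon-binary = /bin/true
EOF
sudo chown -R Debian-gdm:Debian-gdm /var/lib/gdm3/.config/pulse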
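(And the escape hatch for point 4's chicken-and-egg dance: the tool in question was, if I recall the name right, google-drive-ocamlfuse, and opam - OCaml's package manager, which Debian does ship - pulls in a compiler for you. A sketch:)

sudo apt install opam
opam init                 # sets up ~/.opam with an OCaml compiler switch
eval $(opam env)          # opam 2.x; older releases used 'opam config env'
opam install google-drive-ocamlfuse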
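(As for point 5's tearing: the intel and amdgpu X drivers have a "TearFree" option, so a snippet like this may spare you the native-driver roulette. The file name is my own convention; swap the Driver line for your hardware:)

sudo mkdir -p /etc/X11/xorg.conf.d
sudo tee /etc/X11/xorg.conf.d/20-tearfree.conf <<'EOF'
Section "Device"
    Identifier "gpu0"
    Driver "amdgpu"
    Option "TearFree" "true"
EndSection
EOF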
---------------------------------
Yeah, most of the time things work just fine. But the reason I know what all those things are and how they work is not curiosity. The reason I know the inner workings of Linux much better than the inner workings of Windows is that, in those few years I've been using it full time, it has caused me 10 times more headaches than I have ever experienced with other systems. And it's not the usual annoyances like "OMG it rebooted when I didn't ask it to", but more the "Oh, it won't work and I need 2 days to find out why" kind of stuff, because even if you experience the same thing again, it's always caused by some new shit and the old solution won't work any more.
I still love it, and will continue to use it. I don't know why really. Maybe because I'm not afraid of fucking it up any more? Maybe because I can do what I want in it and recovering will be easier than on Windows?
It's a toy for me, after all these years. And I also use it for professional reasons.
But whenever someone presents it as a better alternative to Windows, I just want to puke.
-
*deploys new VPS*
Click clack tap.. alright, done.
*notices that I accidentally made an Ubuntu 14.04*
Well shit... Guess I'll have to update that immediately to 18.04 then.
*logs in, immediately disables SSH password auth*
# systemctl restart sshd
> systemctl: command not found.
What the fuck..?
What was the command for that old init again.. >_<
# /etc/init.d/ssh restart
WHY THE FUCK IS THIS UBUNTU STILL USING THAT OLD INIT?!! Goddammit, Canonical living up to the philosophy of its Debian counterpart indeed!
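(Postscript: 14.04 didn't even run that old init - it shipped Upstart, so the service wrapper would have been the way to go. And since do-release-upgrade only hops one LTS at a time, 18.04 means passing through 16.04 first. A sketch:)

sudo service ssh restart
sudo do-release-upgrade   # 14.04 -> 16.04, reboot, then run once more for 18.04
-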
This is a story of me trying out maintaining a game server and eventually making a mistake, although I do not regret experiencing it.
A month ago I set up a small modded minecraft server because I wanted to experience a fun modpack together with some people from reddit. Besides this, I also wanted to see if I was capable of setting up a server with systemd and screen running in the background. This went great and I learned a lot.
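(The unit itself was nothing fancy - roughly the sketch below, with the user, paths and memory flag made up for illustration. screen's -Dm starts the session without forking, so systemd can track the process while you can still attach to the console with screen -r:)

[Unit]
Description=Modded Minecraft server
After=network.target

[Service]
User=minecraft
WorkingDirectory=/opt/minecraft
ExecStart=/usr/bin/screen -DmS minecraft /usr/bin/java -Xmx4G -jar server.jar nogui
Restart=on-failure

[Install]
WantedBy=multi-user.target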
The very next day I was playing with $annoyingKid on the server and all was well. The second day, however, $annoyingKid started pushing the idea of starting up a normal minecraft server to build a playerbase.
I asked $annoyingKid 'What about financing, staff management and marketing?'
$annoyingKid: "I don't know much about that, but you can do that while I build a spawn!"
He also didn't want to reveal his age, which alerted me that he's young and inexperienced. He also considered Discord 'scary' because there were haxors and they would get his location and kidnap him, or something. So if he was supposed to become owner (which he desired), he had no way of communicating with a community outside of the game.
He also considered himself the owner, even though I was the one paying for the server. 'Owners should be the people who actually own the server' - no matter how many times I told him that, it didn't stick.
$annoyingKid also asked if he could install plugins on his own. I asked him if he knew anything about ssh, wget or bash, because I used ssh to set up the server (I know rcon exists, but I didn't want to deal with that at the time). He had no idea what any of those terms meant, and he couldn't give any proper argument as to why he should get console access.
In the end, he did jack shit; he had no chance of becoming co-owner or even head-admin because he had no sense of responsibility or hard work. I kept him around as an admin because he was the one who came up with the idea, but I banned him the day someone tipped me off that he was abusing his power. Even after me ordering him to ignore an annoying player, he kept going. Of course, I could have prevented all this by kicking him earlier, since all the red flags around him had already formed a beacon of light. He tried coming back, complaining that he should at least get his moderator rank back, but he never got in again.
A week later I got bored, I had had enough fun with ssh and the server processes to know that I didn't want to continue the small project, so I shut it down and went on to do stuff on GitHub.
Lesson learned: Don't let annoying kids with no sense of responsibility talk you into doing things you aren't sure you want to be doing. And only give people power after they've proved to you that they are capable of handling it.
-
More often than not, I hear that the mission-critical stuff in Linux is done by paid people, folks who work from 9 to 5 on a fixed time/resource schedule. Is all software in Linux like that? Take, for example, Linux (the kernel), systemd, Xorg, all the desktop environments, LibreOffice, Mozilla, Chromium and such.
The reason I'm asking is that I kind of feel like the premise behind Linux - "free, libre, *philanthropic*" and such - is kinda wrong. Especially the latter. Do the people working on the mission-critical stuff really care about its stability any more than commercial software devs do? Sure, the projects driven by personal needs that get published are philanthropic in nature - I have some of those too. But those are all non-critical and maintained as such. The stuff that's behind the steering wheel, however? I'm not sure...
In essence, is the mission-critical part of the Linux ecosystem - however open-source it is - any different, QA-wise, from commercial software products?
-
In today's episode of kidding on SystemD, we have a surprise guest star appearance - the Apache Foundation's HTTPD server, or as we in the Debian ecosystem call it, the Apache webserver!
So, imagine a situation like this - It's Friday afternoon, and you have just migrated a bunch of web domains to a new, up to date system. Everything works just fine, until... You try to generate SSL certificates from Let's Encrypt.
Such a mundane task, done more than a thousand times already... Yet... No matter what you do, nothing works. Apache just returns an HTTP status code 403 - Forbidden.
Of course, the first thing many folks would think of when it comes to a 403 error is - Ooooh, a permission issue somewhere in the directory structure!
So you check it... And re-check it to make sure... And even switch over to the user the webserver runs under, yet... You can access the challenge just fine, what the hell!
So you go deeper... And enable the most verbose level of logging Apache is capable of - Trace8. That tells you... Not a whole lot more... Apparently, the webserver was unable to find the file specified? But... It's right there, you can see it!
So you go another step deeper and start tracing the process' system calls to see exactly where it calls stat/lstat on the file, and you see that it... Calls lstat and... It... Returns -1? What the hell#2!
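(For reference, the tracing step was something along these lines - attach strace to a worker and filter for the call in question:)

strace -f -e trace=lstat -p $(pgrep -f apache2 | head -n1)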
So, you compile a custom binary that calls lstat on the first argument given and prints out everything it returns... And... It works fine!
Until now, I chose to omit one important detail that might have given away the issue to the more knowledgeable right away. Our webservers have the URL /.well-known/acme-challenge/, used for ACME challenges, aliased somewhere else on the filesystem - To /tmp/challenges.
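(Concretely, the relevant vhost bit looked roughly like this - simplified, with the directory options trimmed:)

Alias /.well-known/acme-challenge/ /tmp/challenges/
<Directory /tmp/challenges/>
    Require all granted
</Directory>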
See the issue already?
Some *bleep* over at the Debian package maintainers' group decided that Apache might save very sensitive data into /tmp, so it would be for the best to change something that had worked for decades and enable the SystemD service unit option "PrivateTmp" for the webserver, by default.
What that option does is give the process its own mount namespace, so anytime it writes to /tmp/*, the write actually lands in a private /tmp/something/tmp/ directory instead, where "something" appears as a mostly random name with "apache2.service" glued onto it.
That was also the only reason I managed to fix this issue - on the umpteenth time of checking the directory structure, I noticed a "systemd-private-foobarbas-apache2.service-cookie42" directory there... Which contained nothing but a "tmp" directory with 777 as its permissions, owned by the process' user and group.
Overriding that unit file option finally fixed the issue completely.
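(For anyone else bitten by this, the override is a two-minute drop-in - a sketch:)

sudo systemctl edit apache2.service
# in the editor that opens, add:
#   [Service]
#   PrivateTmp=false
sudo systemctl restart apache2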
I have just one question - Why? Why change something that worked for decades? I understand that, if you save something into /tmp, it may be read by 3rd parties or other programs, but I am of the opinion that, if you do that, it's your fault and yours only for writing sensitive data into the temporary directory.
And as far as I am aware, by default, Apache does not actually write anything even remotely sensitive into /tmp, so...
Why. WHY!
I wasted 4 hours of my life debugging this! Only to find out it's just another SystemD-enabled "feature" now!
And as much as I love kidding on SystemD, this time, I see it more as a fault of the package maintainers, because... I found no default apache2/httpd service file in the apache repo mirror... So...
-
In last episode of "How SystemD screwed me over", we talked about Systemd's PrivateTMP and how it stopped me from generating SSL certificates.
In today's episode - SystemD vs CGroups!
Mister Poettering and his team apparently felt that CGroups are underused (as they can be quite difficult to set up), and so decided to integrate them into SystemD by default, as well as to provide a friendlier interface to control their values.
One can read about these interactions in the manual page "systemd.resource-control"
All is cool so far. So what happened to me today?
Imagine you did a major system release upgrade of a production server, previously tested on a standalone server. This upgrade doesn't only upgrade the distribution, however; it also includes the switch from SysVInit to SystemD. Still, everything went smoothly before, so nothing to worry about now, right? Wrong.
The test server was never properly stress-tested. This would prove to be an issue.
When the upgrade finishes, it is 4 AM. I am happy to finally go to bed. At 6 AM, however, I am woken up again, as the server's web services are unavailable and the machine is under 100% CPU load. Weird. I check htop and see that Apache now eats up all 32 virtual cores. So I restart it, chalking it up to some weird bug or something, as the load returns to normal.
2 hours later, however, the same situation occurs. This time, I scour all the logs I can and find something odd - many mentions that Apache couldn't create a worker thread? That's weird.
Several hours of research and tinkering later, I found out the following:
1 - By default, all processes of a system running SystemD are part of several CGroups. One of these is the PID CGroup, meant to stop a runaway process from exhausting all the PIDs/TIDs of a system.
This limit is, by default, set to a certain amount of the total available PIDs. If a process exhausts this limit, it can no longer perform operations like fork().
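(Both the current task count and the ceiling are visible without digging through /sys, by the way - a sketch, with the cgroup-v1 pids path below being an assumption about your layout:)

systemctl show apache2.service -p TasksCurrent -p TasksMax
# or straight from the cgroup:
cat /sys/fs/cgroup/pids/system.slice/apache2.service/pids.max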
So now I know the how and why, but how should I solve this? The sanest option would be to get a rough estimate of just how many threads the Apache webserver might need. This option, though, is harder than it appears. I cannot just take the MaxRequestWorkers number... The instance already runs roughly double that amount of threads. The cause, as I found out, is the HTTP/2 module, which spawns additional threads that do not count towards this limit. So I have no idea what limit to set.
Or I could... Disable the limit for just the webserver via the TasksAccounting switch. I thought this would work. And it did seem to... Until I ran out of TIDs again - although systemctl status apache2.service no longer reported the number of tasks or a task limit for the process, the PID CGroup stayed set to the previous limit. Later I found out that I can only really disable task accounting for all the units of a given slice and its parents.
This, though, systemctl didn't exactly make apparent (and I skimmed the manual, so that part is on me).
So... The only remaining option I had was to... Just set the limit to infinite. And that worked, at last.
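(Which, in drop-in form, is just this - a sketch:)

sudo systemctl edit apache2.service
# in the editor, add:
#   [Service]
#   TasksMax=infinity
sudo systemctl restart apache2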
It took me several hours to debug this issue. And I once again feel like uninstalling systemd in favor of sysvinit.
What did I learn? RTFM, carefully, everything is important, it is not enough to read *half* the paragraph of a given configuration option...
Oh, and apache + http/2 = huge TID sink.
-
I fucking hate Windows... Yes I know it's beating a dead horse, but bear with me for a second here:
I really didn't mind it before, but I fucked up majorly on my dual-booted PC yesterday (don't fuck around with systemd if you don't know what you're doing) and needed to reinstall both Linux and Windows. Linux still has some hardware problems (the second screen isn't being detected), but everything else is just dandy.
But Windows... Holy motherfucking horse-cum drenched piece of goddamn trash! Shit won't listen to me!! I can click whatever I want in that retarded settings app and it still merrily keeps doing whatever the hell it wants 🙄🙄 Why exactly do I still put up with that?!
(It's gaming)
-
Manjaro has some quirks that annoy me (no MST timezone, spotty support for my WD NVMe), so I decided that, since I'm not interested in any pre-configured graphical desktop of any kind, I should just dive into Arch - it increasingly felt like that's what I was doing anyway, just with Manjaro to dull the blow. So I did, and I am over the moon for doing so. Lots of gnashed teeth, but DDG indexes an answer to every question I've had, and it always makes sense when I find it. I've also enjoyed having to dive into systemd in a much more low-level way than ever before - to actually LEARN what it's doing, how, and why.
One by one, I have been faced with issues I needed to resolve, and one by one, I've knocked them off. The result now is the best work and gaming desktop I have ever used.
Arch is not for geniuses or wizards. Just patient people who are willing to read. The payoff is staggering, and many times over worth the effort.
-
I have a small NUC-like machine in my home with an old external hdd connected to it. I use it to run my local gitlab, nextcloud and to test a few websites I build for the lolz.
If you too have a homelab, whether it's a single raspberry or an entire room full of racks, you know damn well that everything you run locally as a web service keeps going until it doesn't, for whatever fucking reason. This time, it was the turn of my nextcloud.
The machine runs arch linux; I chose it since I already use it on my coding laptop, and being a rolling release means I don't have to manually upgrade to a newer version, risking various fuck-ups and consequent screaming of profanity.
The downside is that arch is a bleeding-edge distro, so despite being pretty good where security is concerned, as updates are pushed out some packages may still require legacy software to work as intended, since obviously not all developers of all packages can release simultaneously.
The problem was that php had reached 8.2.x but nextcloud couldn't use anything beyond 8.1, so the highlighted solution was to install php-legacy, a package with a set of utilities the cloud could use instead of mainline php.
Pretty easy, right? Fuck my life, here we go.
I edited apache httpd's configuration to link the new libraries and updated every reference in every virtual host that could possibly screw up the web server.
Done.
Then I went on and disabled mainline php-fpm, creating a new systemd unit that would run the legacy executable instead, and afterwards I edited nextcloud's additional configs so they'd use that.
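(In hindsight the hookup can be fairly compact - on Arch the legacy build, if I'm not mistaken, even ships a ready-made fpm unit, and the socket path below is the package default as I remember it, so double-check yours:)

sudo systemctl enable --now php-fpm-legacy
# and in the vhost:
# <FilesMatch \.php$>
#     SetHandler "proxy:unix:/run/php-fpm-legacy/php-fpm.sock|fcgi://localhost/"
# </FilesMatch>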
Done, getting a bit dizzy, but I reboot everything and breathe.
At this point the migration should be complete, but wait - the server returns an error saying that the application is still trying to use php 8.2+... wait, what in the sysadmin Christ?
Back to nextcloud's config: everything is set, everything else in every other fucking php-legacy file and web server config is fine, the old fpm service is disabled, I am confused... and why in the FUCKING FUCK is the new php-fpm unit failing to start at boot with "error 78/config - directory not found"? Hello? Am I being trolled by a shitty dual-core amazon fake NUC?
Maybe yes, cause it turns out the unit was referencing a directory on the external hdd, which gets mounted at boot time after the unit itself starts. So nothing much - just a matter of tinkering with cron jobs and a reboot, and at least this one is off my balls.
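(Note to future me: systemd can express that ordering natively, no cron required - a drop-in sketch, with the mount point made up for the example:)

sudo systemctl edit php-fpm-legacy
# in the editor, add:
#   [Unit]
#   RequiresMountsFor=/mnt/external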
But why is the server still not responding correctly? Why? WHY?
After slamming my cock on the keyboard here and there while scrolling back through all the config files, I think to myself: hmmm, my gitlab is working flawlessly. Well yeah - there I didn't need to install the whole web stack, everything was nice and easy, wrapped in a docker container... So why am I even here? Why the fuck am I bothering with all this layered web-app bullshit? Why don't I just run the up-to-date docker image that someone else has already set up for me, back up all the data and reupload it to the application?
Oh joy, you can't imagine, after 3... almost 4 hours of pure computer-touching, the relief I had from seeing the blue web page with the "welcome to nextcloud" title.
Right now it's copying back all the files, and the external hdd is now linked to include the data folder.
Like really, everything was solved in two lines of bash.
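(Roughly these two, for anyone curious - the port and the volume name are just my choices:)

docker pull nextcloud
docker run -d --name nextcloud -p 8080:80 -v nextcloud_data:/var/www/html --restart unless-stopped nextcloud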
I am still fuming, but at least I learned a valuable lesson: if you want a service up for yourself, implement and deploy it in the easiest, most straight-forward way you can, giving MAXIMUM priority to already fully-working options that are out there just waiting to be downloaded and used. I swing my scrotal sack over web-app elegance as long as it's MY homelab in MY place.
Eat a fat dick php.
sudo pacman -Rns nextcloud
sudo systemctl disable --now php-fpm-legacy
sudo pacman -Rns php-legacy
sudo pacman -Rns $(sudo pacman -Qdtq)
-
It makes me want to cry in frustration that I... actually love SystemD, as an *init* system. But with all the crap it brings along with that core part, it just makes it so much harder for me to really enjoy! Why can't it be modular? Why can't it be broken down into independently-installable packages, with the init system at the core? Is there some sort of internal API issue? Or does mister Poettering just not want that to happen? The Linux world has always stood by the idea of "1 package = 1 task", and it has made system management so much easier!
But now... When I switch to SystemD from SysVInit, I get... What SysVInit did + so much more I didn't ask for... I just... Don't understand it.