Search - "need new sysadmins"
-
If all you have is a hammer, everything looks like a nail!
This was something my tech lead used to tell me when I was obsessed with NoSQL databases a few years back. I would go looking for problems to solve that had a use case for a NoSQL database, or even try to convince myself (I didn't realise it back then) that I needed a NoSQL db for whatever new idea I had, without really thinking deeply about whether the data in question was better represented with an SQL schema or not.
Now, leading a team of young developers, I come across similar suggestions from a few of my team members who have just discovered some new and shiny tech and want to use it in production projects.
While I am not against new and shiny, it's not good practice to jump right into it without exploring it deeply enough or considering all the shortcomings. The most important question to ask is whether the problems you are trying to solve can already be solved with the current stack.
Modifying your stack requires more than just a week's experience of playing around with the getting-started guide and Stack Overflow replies. It is something that needs to be carefully considered after taking input from the people who will be supporting it: operations, sysadmins, and the teams that are going to interface with your stack indirectly.
I am not talking about delaying adoption by waiting on a long list of approvals for something that would bring immediate value, but about having a carefully orchestrated plan for why and how to migrate to a new stack.
Just because one of the tech giants made a move to a new stack and wrote about it in their engineering blog doesn’t mean that you need to make a switch in the same direction. Take a moment to analyse the possible reasons that motivated them to do it, ask yourself if your organisation is struggling with the exact same problems, observe how others facing the same issue are addressing it, and then make an informed decision.
Collect enough data to support your proposal.
Ask yourself again if you are the one holding the hammer.
If the answer is no, forge ahead!
-
When your sysadmins can't script a file compare, so you write the code for them.
"Sorry but we can't run unknown code on the server"
Read the code then, you vile troglodytes!
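For the record, a file compare really is only a few lines. A minimal sketch in Python - the file names are hypothetical placeholders, not the actual script that was handed over:

```python
# Minimal sketch of a file compare -- the file names are hypothetical,
# not the script that was actually given to the sysadmins.
import difflib
import sys

OLD = "config.old"  # hypothetical input file
NEW = "config.new"  # hypothetical input file

with open(OLD) as a, open(NEW) as b:
    diff = list(difflib.unified_diff(a.readlines(), b.readlines(),
                                     fromfile=OLD, tofile=NEW))

if diff:
    sys.stdout.writelines(diff)  # show what changed
    sys.exit(1)                  # non-zero exit: files differ
print("Files are identical.")
```
-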
I previously worked as a Linux/Unix sysadmin. There was one app team owning like 4 servers accessible in a very specific way.
* logon to main jumpbox
* ssh to elevated-privileges jumpbox
* logon to regional jumpbox using custom-made ssh alternative [call it fkup]
* try to fkup to the app server, only to find that the fkup daemon is dead
* logon to the server's mgmt node [AIX frame]
* ssh to the server directly, only to confirm that sshd is dead too
* access server's console
* place a root pswd request in the passwords vault, chase 2 managers via phone for approvals [to log in to the vault, find my request and approve it]
* use root pw to login to server's console, bounce sshd and fkupd
* logout from the console
* fkup into the server to get shell.
That's not the worst part... AIX boxes are stable enough to run for years w/o needing any maintenance, so all this complexity could be bearable.
However, the app team used to log a change request every week asking us to copy a new pdf file onto that server, drop it into the app directory, and chown it to the app user. Why couldn't they do that themselves, you ask? Bcuz they 'only need this pdf to get there, that's all, and we're not wasting our time raising access requests and chasing approvals just for a pdf...'
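For perspective, the entire task behind that weekly change request boils down to something like this - a minimal sketch, with hypothetical paths and account names standing in for whatever was actually in the ticket:

```python
# A minimal sketch of the weekly "change request": copy a pdf into the app
# directory and chown it to the app user. Paths and account names are
# hypothetical placeholders.
import os
import pwd
import grp
import shutil

SRC = "/tmp/report.pdf"               # hypothetical: where the new pdf lands
DEST = "/opt/app/reports/report.pdf"  # hypothetical: the app directory
APP_USER = "appuser"                  # hypothetical app account
APP_GROUP = "appgroup"

shutil.copy2(SRC, DEST)               # copy the pdf, preserving metadata
uid = pwd.getpwnam(APP_USER).pw_uid   # look up numeric uid/gid
gid = grp.getgrnam(APP_GROUP).gr_gid
os.chown(DEST, uid, gid)              # hand the file over to the app user
```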
Oh, and all these steps must be repeated each time a sysadmin tries to implement the change request, as all the movements and decisions must be logged and justified.
Each server access takes roughly half an hour. 4 servers -> 2hrs.
So yeah.. Surely getting your accesses sorted out once is so much more time consuming and less efficient than logging a change request for sysadmins every week and wasting 2 frickin hours of my time to just copy a simple pdf for you.. Not to mention that there's only a small team of sysadmins maintaining tens of thousands of servers, and every minute we have we spend working. Lunch takes 10-15 minutes or so.. Almost no time for coffee or the restroom. And these guys are saying sparing a few hours to get their own accesses is 'a waste of their time'...
That was the time I discovered Skrillex.
-
Had to change the password on my computer for administrative reasons (sysadmins and infosec make us change our passwords every quarter). The changes didn't sync to everything, so now I can't even log into my computer.
Need to go to the office tomorrow so some guy can type in an admin password on my pc and do stuff to it. If that doesn't work I will just be given a new laptop.
Seriously fuck this week
-
I already wrote a rant about this yesterday, but since I'm a sysadmin trying to convert to dev.. I dunno, maybe it's not a bad idea to muddy the waters a bit and talk about why not to be a sysadmin.
Personally I think it's that the perceived barrier to entry is just too high, while in reality it isn't. You don't need a huge Ceph cluster and massive servers when you're just starting out. Why overbuild an appliance like that if it's gonna start out at maybe 5 requests a minute?
Let's take an example - DNS servers! There's been this guy on the bind-users mailing list asking how to set up a DNS server on 2 public servers, along with a website. Nothing special I guess - you can read the thread here: https://0x0.st/ZY-d. Aside from the question being quite confusing, there was advice to read RFCs, get a book, read the BIND ARM, etc etc. And the person to deny this? None other than Stephane Bortzmeyer, one of the people who work for nic.fr (so he helps maintain the .fr TLD) and who wrote some of those RFCs as part of the DNSOP working group in the IETF.
As for valid reasons to set up a DNS server? Could just be to learn how the DNS works, or hell, even for fun. As far as professional DNS servers go.. this (https://0x0.st/ZYo9) is the nugget that powers the K root server, one of the 13 root servers that serve the root zone of the internet, aka the zone apex. 2 RJ45 connections and a console connection. The reason this is enough is the massive recursor networks that ISPs, Google DNS, Cloudflare DNS, Quad9, etc etc provide. Point is, you don't need huge infrastructure to run a server!
Or maybe your business needs email. How many thousands of emails per second are you gonna need to build your mail server against? How many millions will you need to store? If your business has 10 employees and all of those manage about 10k emails total.. well that's easy, 100k emails total. Per second? Hundreds of emails per second per employee? Haha, of course not. Maybe you'll see an email a minute at most. That is not to say that all email services are like this - it is true that ISP's who offer email to their customers, and especially providers like Microsoft and Google do need massive mail servers that can handle thousands of emails per second. But you are not Microsoft or Google. So yeah, focus on the parts of email that are actually hard.. and there is plenty.
Among sysadmins you have this distinction between "professional" sysadmins and homelabbers. I don't mind the distinction itself, but I think the two augment each other. If you've started out by jumping into a heap of legacy at an established company, you'll have massive amounts of resources, immediately high complexity, and probably a clusterfuck right away. If you start out with a homelab, you'll have few resources, small workloads, and something completely new to build and learn with.
And when running a server like that, you'll probably find that the resources required to provide your new services are quite small. My DHCP servers take 12MB of memory each. My DNS servers hover around the 40MB mark. The mail server.. to be fair, that one consumes around 150. But if you'd listen to the people saying that you need huge servers.. omg, you need at least a TB of RAM on your server, 72 cores, massive disks and Ceph!1!
No you don't. All that does is scare people away and create a toxic environment for everyone. Stop it.
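If you're curious what your own services actually use, here's a quick sketch that reads resident memory (VmRSS) out of /proc on Linux - the daemon names are hypothetical examples, swap in your own:

```python
# Quick sketch: print resident memory (VmRSS) for a few service processes by
# reading /proc on Linux. The service names are hypothetical -- use your own.
import os

SERVICES = {"named", "dhcpd", "postfix"}  # hypothetical daemon names

def rss_kib(pid: str):
    """Return VmRSS in KiB for a pid, or None if it can't be read."""
    try:
        with open(f"/proc/{pid}/status") as f:
            for line in f:
                if line.startswith("VmRSS:"):
                    return int(line.split()[1])
    except OSError:
        pass
    return None

for pid in filter(str.isdigit, os.listdir("/proc")):
    try:
        with open(f"/proc/{pid}/comm") as f:
            name = f.read().strip()
    except OSError:
        continue  # process may have exited in the meantime
    if name in SERVICES:
        rss = rss_kib(pid)
        if rss is not None:
            print(f"{name} (pid {pid}): {rss / 1024:.1f} MiB resident")
```
-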
When I was younger I had a decision to go into hardware or software. I chose software and have loved it.
Recently I spent 5 hours trying to install a Linux distro on an old server. I made no progress.
I made the right decision. Hardware freaking sucks! You spend hours working on outdated pieces of crap and find that to fix your problem you need to sell your kidney to finance the project. Not to mention you have to wait for literally everything! It's like Gradle builds everywhere! Want to install a new distro from your USB? Bam, 5 min gone. Want to boot into the BIOS and change one setting? BAM! More time wasted...
A note to the sysadmins out there: thank you. I love you. I am so happy you do this kind of work so I don't have to.