Search - "root not required"
Terms that can only be used in programming:
Where friends have access to your private parts,
Where Parents may kill their child if required,
Where Bugs come in from open windows,
Where one image is worth 128K words,
Where 10 == 2,
Where Zombies are common and not dangerous *,
Where Daemons are always there somewhere
Where the slimmest of USB drives are considered FAT *,
Where comments are made and arguments are passed, **
Where forever alone nerds can also unzip, touch, mount and fsck ***,
Where root is top of the tree,
Where x = x + y is totally correct,
Where opening a jar requires Java,
Where Oct 31 = Dec 25,
Thanks to ASHISH KEDIA for writing these.
Source: Quora
I’m kind of pissy, so let’s get into this.
My apologies though: it’s kind of scattered.
Family support?
For @Root? Fucking never.
Maybe if I wanted to be a business major my mother might have cared. Maybe the other one (whom I call Dick because fuck him, and because it’s accurate) would have cared if I suddenly wanted to become a mechanic. But in both cases, I really doubt it. I’d probably just have been berated for not being perfect, or better at their respective fields than they were at 3x my age.
Anyway.
Support being a dev?
Not even a little.
I had hand-me-down computers that were outmoded when they originally bought them: cutting-edge discount resale tech like Win95, 33/66 MHz, 404 MB HD. It wouldn't even play an MP3 without stuttering.
(The only time I had a decent one is when I built one for myself while in high school. They couldn’t believe I spent so much money on what they saw as a silly toy.)
Using a computer for anything other than email or “real world” work was bad in their eyes. Whenever I was on the computer, they accused me of playing games, and constantly yelled at me for wasting my time, for rotting in my room, etc. We moved so often I never had any friends, and they were simply awful to be around, so what was my alternative? I also got into trouble for reading too much (seriously), and with computers I could at least make things.
If they got mad at me for any (real or imagined) reason (which happened almost every other day) they would steal my things, throw them out, or get mad and destroy them. Desk, books, decorations, posters, jewelry, perfume, containers, my chair, etc. Sometimes they would just steal my power cables or network cables. If they left the house, they would sometimes unplug the internet altogether, and claim they didn’t know why it was down. (Stealing/unplugging cables continued until I was 16.) If they found my game CDs, those would disappear, too. They would go through my room, my backpack and its notes/binders/folders/assignments, my closet, my drawers, my journals (of course my journals), and my computer, too. And if they found anything at all they didn’t like, they would confront me about it, and often would bring it up for months telling me how wrong/bad I was. Related: I got all A’s and a B one year in high school, and didn’t hear the end of it for the entire summer vacation.
It got to the point that I invented my own language with its own vocabulary, grammar, and alphabet just so I could have just a little bit of privacy. (I’m still fluent in it.) I would only store everything important from my computer on my only Zip disk so that I could take it to school with me every day and keep it out of their hands. I was terrified of losing all of my work, and carrying a Zip disk around in my backpack (with no backups) was safer than leaving it at home.
I continued to experiment and learn whatever I could about computers and programming, and also started taking CS classes when I reached high school. Amusingly, I didn’t even like computers despite all of this — they were simply an escape.
Around the same time (freshman in high school) I was a decent enough dev to actually write useful software, and made a little bit of money doing that. I also made some for my parents, both for personal use and for their businesses. They never trusted it, and continually trashtalked it. They would only begrudgingly use the business software because the alternatives were many thousands of dollars. And, despite never ever having a problem with any of it, they insisted I accompany them every time, and these were often at 3am. Instead of being thankful, they would be sarcastically amazed when nothing went wrong for the nth time. Two of the larger projects I made for them were: an inventory management system that interfaced with hand scanners (VB), and another inventory management system for government facility audits (Access). Several websites, too. I actually got paid for the Access application thanks to a contract!
To put this into perspective, I was selected to work on a government software project about a year later, while still in high school. That didn’t impress them, either.
They continued to see computers as a useless waste of time, and kept telling me that I would be unemployable, and end up alone.
When they learned I was dating someone long-distance, and that it was a she, they simply took my computer and didn’t let me use it again for six months. Really freaking hard to do senior projects without a computer. They begrudgingly allowed me to use theirs for schoolwork, but it had a fraction of the specs — and some projects required Flash, which the computer could barely run.
Between the constant insults, yelling, abuse (not mentioned here), total lack of privacy, and the theft, destruction, etc. I still managed to teach myself about computers and programming.
In short, I am a dev despite my parents' best efforts to the contrary.
I'm trying to sign up for insurance benefits at work.
Step 1: Trying to find the website link -- it's non-existent. I don't know where I found it, but I saved it in keepassxc so I wouldn't have to search again. Time wasted: 30 minutes.
Step 2: Trying to log in. Ostensibly, this uses my work account. It does not. Time wasted: 10 minutes.
Step 3: Creating an account. Username and Password requirements are stupid, and the page doesn't show all of them. The username must be /[A-Za-z0-9]{8,60}/. The maximum password length is VARCHAR(20), and must include upper/lower case, number, special symbol, etc. and cannot include "password", repeated characters, your username, etc. There is also a (required!) hint with /[A-Za-z0-9 ]{8,60}/ validation. Want to type a sentence? Better not use any punctuation!
I find it hilarious that both my username and password hint can be three times longer than my actual password -- and can contain the password. Such brilliant security.
My typical username is less than 8 characters. All of my typical password formats are >25 characters. Trying to figure out memorable credentials and figuring out the hidden complexity/validation requirements for all of these and the hint... Time wasted: 30 minutes.
Step 4: Post-login. The website, post-login, does not work in firefox. I assumed it was one of my many ad/tracker/header/etc. blockers, and systematically disabled every one of them. After enabling ad and tracker networks, more and more of the site loaded, but it always failed. After disabling bloody everything, the site still refused to work. Why? It was fetching deeply-nested markup, plus styling and javascript, encoded in xml, via api. And that xml wasn't valid xml (missing root element). The failure wasn't due to blocking a vitally-important ad or tracker (as apparently they're all vital and the site chain-loads them off one another before loading content), it's due to shoddy development and lack of testing. Matches the rest of the site perfectly. Anyway, I eventually managed to get the site to load in Safari, of all browsers, on a different computer. Time wasted: 40 minutes.
Step 5: Contact info. After getting the site to work, I clicked the [Enroll] button. "Please allow about 10 minutes to enroll," it says. I'm up to an hour and 50 minutes by now. The first thing it asks for is contact info, such as email, phone, address, etc. It gives me a warning next to phone, saying I'm not set up for notifications yet. I think that's great. I select "change" next to the email, and try to give it my work email. There are two "preferred" radio buttons, one next to "Work email," one next to "Personal email" -- but there is only one textbox. Fine, I select the "Work" preferred button, sign up for a faux-personal tutanota email for work, and type it in. The site complains that I selected "Work" but only entered a personal email. Seriously serious. Out of curiosity, I select the "change" next to the phone number, and see that it gives me four options (home, work, cell, personal?), but only one set of inputs -- next to personal. Yep. That's amazing. Time spent: 10 minutes.
Step 6: Ranting. I started going through the benefits, realized it would take an hour+ to add dependents, research the various options, pick which benefits I want, etc. I'm already up to two hours by now, so instead I decided to stop and rant about how ridiculous this entire thing is. While typing this up, the site (unsurprisingly) automatically logged me out. Fine, I'll just log in again... and get an error saying my credentials are invalid. Okay... I very carefully type them in again. error: invalid credentials. sajfkasdjf.
Step 7 is going to be: Try to figure out how to log in again. Ugh.
"Please allow about 10 minutes" it said. Where's that facepalm emoji?
But like, seriously. How does someone even build a website THIS bad?
Root encounters HR at her new job.
So, I left my job a few weeks ago. I was pretty sad about it, so I didn't want to write anything about it. It was a great place to work, with great managers, decent coworkers, and interesting work. I also had free rein over how I built things, what to improve, etc. Within about four months, I authored over half of the total commits on their backend repo, added a testing suite with 90% coverage, significantly improved the security (more accurately: added security), etc. But I got a job offer that allowed me to work remotely, and make well over six figures (USD). I couldn't turn it down, even though I wanted to. So, I left. I'm still genuinely sad about that. I had emotions and everything. 🙁 I stayed on long enough to finish the last of the features for their new product launch, and make sure everything was stable. I'm welcome back whenever, though they don't want to have remote employees, and I want to move, so. that's probably not going to happen. sigh.
Anyway, I started my new job this week. Rented an office (read: professional closet) and everything! It's been veritable mountains of HR paperwork so far. That's all I've done besides some accounts setup. I've seriously only worked on and completed one ticket so far in two and a half days, and I still have six documents/contracts to sign! (and benefits; that'll probably take my weekend.)
But getting an I9 thing notarized? Apparently I only have three days before I'm legally unemployable by them or something, idk. HR made it sound ridiculously dire and important, and reminded me like five or more times. I figured it was just some notary service; that takes like 10 minutes, right? So I put it off until my second day so I didn't have to disappear in the middle of my first day. Anyway, I called a bunch of notary services on day 2, and apparently only like 5% of them both do notary services this time of year and aren't booked full. And of those, probably another 5% will notarize I9 documents.. No idea why it's rare, but whatever, I'm not a notary.
The HR lady assured me that I didn't need any special documents; I should just go there, present my IDs, and the notary will provide or draft documents for everything else. Totally doesn't sound right, but fine; I'm not a notary nor will I ever work in HR, so I'm not very knowledgeable about this. So, against my better judgement I decided to just go anyway. I called around and finally found a place that wasn't closed, busy, or refusing, and drove over there. Waited. Waited. Waited. Notary lady was super slow in every single action. (I should mention that it's now 10am, and I have a meeting with the Senior VP of Engineering [a stern, stubborn old goat who enjoys making people feel inadequate] at 12:30pm.) The notary lady looks like she's an npc updating in slow motion (maybe at 0.25x speed?) and can't seem to understand what I need. Eventually, she tells me exactly what I had assumed: if there's no document, she can't notarize said document, and she doesn't have an I9 for the company I'm trying to work for. (like, duh.) So I thank her for proving the flow of time is variable, which she ignores in slow motion, and drive back home. It's now about 11.
I message the same HR lady, and the useless wench gawks in surprise and says she's never heard of that ridiculous request before. It took prodding to get her to respond every time, but after some (very slow) back and forth, she says she wants to call the notary personally and ask what they need. I waited around for another response that never came, and eventually just drove to the notary place again to have them notarize the required ID documents. That plus my chat history with HR should be enough to show that I bloody well tried, and HR just shit the bed instead. I finally got them notarized at like 12:10, and totally broke the speed limit the entire way to the office, found the last remaining parking spot, and made it to my office just in time for the meeting. seriously, less than two minutes to spare. Meeting was interesting (mostly about security), but totally made me facepalm, shout "Seriously!? What the hell are you thinking!?" and make slapping motions at some of the people talking. I will probably rant about that next.
But anyway, I'm willing to bet that the useless wench won't get back to me before the notary closes, if at all, and will somehow try to blame it completely on me if I bring it up again. Passive aggressive bitch. She's probably thinking: "If I don't help her with these mandatory legal processes, it'll be her fault she didn't get them done in time. I mean, they're so easy! She's just doing it wrong." I fucking hate HR.
It's enough. I have to quit my job.
December last year I started working for a company doing finance. Since it was a serious-sounding field, I thought I'd be better off than with my previous employer, which was kind of a family agency where you can do pretty much anything you want without any real consequences, nor structures. I liked it, but the professionalism was missing.
Turns out, they do operate more professionally, but the internal mood and commitment are awful. They all pretty much bash on each other. And the root cause of this, and why it will stay like this, is simply the Project Lead.
The plan was that I'd be positioned as glue between Design/UX and Backend to then make the best Frontend for the situation, since that is somewhat new and has the most potential to get better. Besides, this is what the customer sees every day.
After just two months, a retrospective and a hell of a lot of communication with co-workers, I've decided that there is no way other than to leave.
I had a weekly productivity of 60h+ (work and private, sometimes up to 80h). I had no problems with that, I was happy to work, but since working in this company, my weekly productivity dropped to 25~30h. Not only can I not work for a whole proper work-week, this time still includes private projects. So in hindsight, I efficiently work less than 20h for my actual job.
The Product Lead just wants feature on top of feature; our customers don't want to pay for concepts, but also won't give us exact specifications of what they want.
Refactoring is forbidden since we get too many issues/bugs on a daily basis, so we won't get the time.
A re-design is forbidden because that would mean all screens have to be re-designed.
The product should be responsive, but none of the components feel finished on Desktop - don't talk about mobile, it doesn't exist.
The designer next to me has to make 200+ screens for Desktop and Mobile JUST so we can change the primary colors for a potential new customer, nothing more. Remember that we don't have responsiveness? Guess what, that should be purposely included in the designs (and it looks awful).
I may hate PHP, but I can still work with it. But not here; this is worse than any ecommerce codebase. I have to fix legacy backend code that has no test coverage. But I haven't touched PHP for 4 years, let alone written SQL (I hate it). There should be no reason whatsoever to give me this kind of work, as FRONTEND ARCHITECT.
After a (short) analysis of the Frontend, I conclude that 90% of it needs to be rewritten. There have been no performance checks for the Client/UI, so not only do the components behave badly, the whole system is slow as FUCK! Back in my day I wrote jQuery, but even that shit was faster than the architecture of this React multi-instance app. Nothing is shared, and most of the AppState correlates to other instances.
The Backend. Oh boy. Not only do we use a shitty, outdated open-source project with tons of XSS possibilities as a base, no, we clone that shit and COPY OUR SOURCES ON TOP. And since these people also don't want to write SQL, they thought using Symfony as a base on top of the base would be a good idea.
Generally speaking (and done right), this is true, but not when there's no time and nothing gets properly checked. As I said, I'm working on legacy code. And the more I look into it, the more bugs I find. Nothing too bad, but it's still a bad sign, and it explains why the webservices are buggy in general. And therefore, the bugginess has to travel into the frontend.
And now the last goodies:
- Composer itself is committed to the repo (the fucking .phar!)
- Deployments never work and every release is done manually
- We commit an "_TRASH" folder
- There is a secret ongoing refactoring in the root of the project called "_REFACTORING" (right, no branches)
- I cannot test locally, nor have just the Frontend locally connected to the Staging webservices
- I am required to upload the sources I write to an in-house server that gets shared with the other coworkers
- This is the only Linux server here and all of the permissions are fucked up
- We don't have versions, nor builds; we use the current date as the build number, but nothing simple to read, nonono. It has to be a German date, with only numbers, and it always has to end with "00"
- They take security "super serious" but disable the ability to unlock your device with your fingerprint sensor ON PURPOSE
My brain hurts, maybe I'll post more on this shit fucking cuntfuck company. Sorry to be rude, but this triggers me sooo much!
I've got a confession to make.
A while ago I refurbished this old laptop for someone, and ended up installing Bodhi on it. While I was installing it however, I did have some wicked thoughts..
What if I could ensure that the system remains up-to-date by running an updater script in a daily cron job? That may cause the system to go unstable, but at least it'd be up-to-date. Windows Update for Linux.
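Something like this would have been the whole "Windows Update for Linux" (a minimal sketch, assuming Bodhi's apt base; the file path and flags are just what I'd reach for, and the saner route would really be the unattended-upgrades package):

```bash
#!/bin/sh
# /etc/cron.daily/auto-update -- cron runs everything in this directory once a day.
# Convenient, unattended, and every now and then it will happily break the system.
set -e
apt-get update -qq
DEBIAN_FRONTEND=noninteractive apt-get -y -qq dist-upgrade
apt-get -y -qq autoremove --purge
```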
What if I could ensure that the system remains protected from malware by periodically logging into it and checking up, and siphoning out potential malware code? The network proximity that's required for direct communication could be achieved by offering them free access to one of my VPN servers, in the name of security or something like that. Permanent remote access, in the name of security. I'm not sure if Windows has this.
What if I could ensure that the system remains in good integrity by disabling the user from accessing root privileges, and having them ask me when they want to install a piece of software? That'd make the system quite secure, with the only penetration surface now being kernel exploits. But it'd significantly limit what my target user could do with their own machine.
In the end I discarded all of these thoughts, because it'd be too much work to implement and maintain, and it'd be really unethical. I felt filthy for even thinking about these things. But the advantages of something like this - especially automated updates, which are a real issue on my servers where I tend to forget to apply them within a couple of weeks - can't just be disregarded. Perhaps Microsoft is on to something?
Working on a database previously designed and maintained by some private agency.
The fuck I'm dealing with!
Boolean values stored as 'TRUE'/'FALSE'. It's varchar, my dudes.
There are no FK relations. Just the values of IDs in a column.
There are no indexes beyond the PKs, nothing else. Nothing.
Null, what's that? I'm dealing with 'N/A', my dudes.
Unique key, what's that? The table which stores users has all the fields nullable. Email is not unique ( even though that's the required behaviour).
ALL the numeric values are stored as varchar. Varchar, my dudes. Varchar. '1', '1.1'
And finally, the good ole, 1 table to rule them all. Normalisation, fuck that.
And what's the root cause of all this? My PM used to hand them Excel sheets she maintains on her local system. FTW. I don't have enough explanations.
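For contrast, a rough sketch of what cleaning up just a couple of those sins could look like (MySQL flavour; the table and column names are made up, not theirs):

```bash
mysql app_db <<'SQL'
-- booleans stored as booleans, not the strings 'TRUE'/'FALSE'
UPDATE users SET is_active = CASE WHEN is_active = 'TRUE' THEN 1 ELSE 0 END;
ALTER TABLE users MODIFY is_active TINYINT(1) NOT NULL DEFAULT 0;

-- 'N/A' is not NULL, and email should actually be unique
UPDATE users SET email = NULL WHERE email = 'N/A';
ALTER TABLE users ADD UNIQUE KEY uq_users_email (email);

-- numbers as numbers, plus a real foreign key (InnoDB adds an index if none exists)
ALTER TABLE orders
  MODIFY total DECIMAL(12,2) NOT NULL,
  MODIFY user_id INT UNSIGNED NOT NULL,
  ADD CONSTRAINT fk_orders_user FOREIGN KEY (user_id) REFERENCES users (id);
SQL
```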
Recently had a meeting with the company that acquired my startup, where I was required to relinquish root/admin access across AWS, SSH, and database. It was decided that I held too much power, and will now only have read-only access to develop. I'm not entirely sure what I do for work now.
I was engaged as a contractor to help a major bank convert its servers from physical to virtual. It was 2010, when virtual was starting to eclipse physical. The consulting firm the bank hired to oversee the project had already decided that the conversions would be performed by a piece of software made by another company with whom the consulting firm was in bed.
I was brought in as a Linux expert, and told to "make it work." The selected software, I found out without a lot of effort or exposure, eats shit. With whipped cream. Part of the plan was to "right-size" filesystems down to new desired sizes, and we found out that was one of the many things it could not do. Also, it required root SSH access to the server being converted. Just garbage.
I was very frustrated by the imposition of this terrible software, and started to butt heads with the consulting firm's project manager assigned to our team. Finally, during project planning meetings, I put together a P2V solution made with a customized Linux Rescue CD, perl, rsync, and LVM.
The selected software took about 45 minutes to do an initial conversion to the VM, and about 25 minutes to do a subsequent sync, which was part of the plan, for the final sync before cutover.
The tool I built took about 5 minutes to do the initial conversion, and about 30-45 seconds to do the final sync, and was able to satisfy every business requirement the selected software was unable to meet, and about which the consultants just shrugged.
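Stripped of the LVM creation and bootloader fix-up, the sync step at the heart of it boiled down to roughly this (a sketch with placeholder hostnames and mount points, not the actual script):

```bash
#!/bin/bash
# run from the rescue environment booted on the target VM
set -euo pipefail
SRC=root@physical-server     # placeholder
TARGET=/mnt/newroot          # right-sized LVM volumes, freshly formatted and mounted here

# copy the live system, preserving ownership, hardlinks, ACLs and xattrs,
# while skipping the pseudo-filesystems
rsync -aHAX --numeric-ids --delete \
      --exclude='/proc/*' --exclude='/sys/*' --exclude='/dev/*' \
      --exclude='/run/*'  --exclude='/tmp/*' \
      "$SRC":/ "$TARGET"/

# the exact same command run again right before cutover is the "final sync":
# only the delta moves, which is why it takes seconds instead of 25 minutes
```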
The project manager got wind of this, and tried to get them to release my contract. He told management what I had built, against his instructions. They did not release my contract. They hired more people and assigned them to me to help build this tool.
They traveled to me and we refined it down to a simple portable ISO that remained in use as the default method for Linux for years after I left.
Fast forward to 2015. I'm interviewing for the position I have now, and one of the guys on the tech screen call says he worked for the same bank later and used that tool I wrote, and loved it. I think it was his endorsement that pushed me over and got me an offer for $15K more than I asked for.
Long rant ahead.. 5k characters pretty much completely used. So feel free to have another cup of coffee and have a seat 🙂
So.. a while back this flash drive was stolen from me, right. Well it turns out that other than me, the other guy in that incident also got to the police 😃
Now, let me explain the smiley face. At the time of the incident I was completely at fault. I had no real reason to throw a punch at this guy and my only "excuse" would be that I was drunk as fuck - I've never drank so much as I did that day. Needless to say, not a very good excuse and I don't treat it as such.
But that guy and whoever else it was that he was with, that was the guy (or at least part of the group that did) that stole that flash drive from me.
Context: https://devrant.com/rants/2049733 and https://devrant.com/rants/2088970
So that's great! I thought that I'd lost this flash drive and most importantly the data on it forever. But just this Friday evening as I was meeting with my friend to buy some illicit electronics (high voltage, low frequency arc generators if you catch my drift), a policeman came along and told me about that other guy filing a report as well, with apparently much of the blame now lying on his side due to him having punched me right into the hospital.
So I told the cop, well most of the blame is on me really, I shouldn't have started that fight to begin with, and for that matter not have drunk that much, yada yada yada.. anyway he walked away (good grief, as I was having that friend on visit to purchase those electronics at that exact time!) and he said that this case could just be classified then. Maybe just come along next week to the police office to file a proper explanation but maybe even that won't be needed.
So yeah, great. But for me there's more in it of course - that other guy knows more about that flash drive and the data on it that I care about. So I figured, let's go to the police office and arrange an appointment with this guy. And I got thinking about the technicalities for if I see that drive back and want to recover its data.
So I've got 2 phones, 1 rooted but reliant on the other one that's unrooted for a data connection to my home (because Android Q, and no bootable TWRP available for it yet). And theoretically a laptop that I can put Arch on, no problem, but its display backlight is cooked. So if I want to bring that one I'd have to rely on a display from them. Good luck getting that done. No option. And then there's a flash drive that I can bake up with a portable Arch install that I can sideload from one of their machines but on that.. even more so - good luck getting that done. So my phones are my only option.
Just to be clear, the technical challenge is to read that flash drive and get as much data off of it as possible. The drive is 32GB large and has about 16GB used. So I'll need at least that much on whatever I decide to store a copy on, assuming unchanged contents (unlikely). My Nexus 6P with a VPN profile to connect to my home network has 32GB of storage. So theoretically I could use dd and pipe it to gzip to compress the zeroes. That'd give me a resulting file that's close to the actual usage on the flash drive in size. But just in case.. my OnePlus 6T has 256GB of storage but it's got no root access.. so I don't have block access to an attached flash drive from it. Worst case I'd have to open a WiFi hotspot to it and get an sshd going for the Nexus to connect to.
And there we have it! A large storage device, no root access, that nonetheless can make use of something else that doesn't have the storage but satisfies the other requirements.
And then we have things like parted to read out the partition table (and if unchanged, cryptsetup to read out LUKS). Now, I don't know if Termux has these and frankly I don't care. What I need for that is a chroot. But I can't just install Arch x86_64 on a flash drive and plug it into my phone. Linux Deploy to the rescue! 😁
It can make chrooted installations of common distributions on arm64, and it comes extremely close to actual Linux. With some Linux magic I could make that able to read the block device from Android and do all the required sorcery with it. Just a USB-C to 3x USB-A hub required (which I have), with the target flash drive and one to store my chroot on, connected to my Nexus. And fixed!
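Condensed into commands, the imaging part of the plan looks something like this (block device path, port and addresses are guesses rather than anything verified):

```bash
# on the rooted Nexus, with the flash drive attached through the USB hub
dd if=/dev/block/sda bs=4M conv=noerror,sync | gzip -1 > /sdcard/usbdrive.img.gz

# worst case: hotspot on the OnePlus, an sshd listening there, and stream it across
dd if=/dev/block/sda bs=4M conv=noerror,sync | gzip -1 \
  | ssh -p 8022 user@192.168.43.1 'cat > /storage/emulated/0/usbdrive.img.gz'
```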
Let's see if I can get that flash drive back!
P.S.: if you're into electronics and worried about getting stuff like this stolen, customize it. I happen to know one particular property of that flash drive that I can use for verification, although it wasn't explicitly customized. But for instance in that flash drive there was a decorative LED. Those are current limited by a resistor. Factory default can be say 200 ohm - replace it with one with a higher value. That way you can without any doubt verify it to be yours. Along with other extra security additions, this is one of the things I'll be adding to my "keychain v2".
Time for a REAL fucking rant.
io_uring manpages say you can set the CAP_SYS_NICE capability to allow SQPOLL to work. You can't, you still get an operation not permitted errno result.
Why? I checked, it says 5.10 mainline is required. Pretty sure I just manually downloaded and installed the Deb's myself. uname reports that I am at 5.10. So what gives?
Maintainer submitted a patch because they fucked up and made the *actual* capability check look for what's basically root permissions (CAP_SYS_ADMIN... c'mon...) and is now trying to rectify a glaring security shortcoming.
Patch hasn't been accepted or even addressed yet but they already updated the manpages with the estimated mainline kernel release as if it had made it into the release candidate. Manpages have made it into latest debs but the actual change has not.
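For reference, this is all that the manpage-level claim amounts to (binary name is a placeholder) - and on the kernels in question it still nets you EPERM:

```bash
# what the docs suggest should be enough for IORING_SETUP_SQPOLL without root
sudo setcap cap_sys_nice+ep ./my_iouring_app
getcap ./my_iouring_app
# ...yet io_uring_setup() with SQPOLL keeps failing unless the process
# effectively has CAP_SYS_ADMIN, because that's what the kernel actually checks
```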
Where the fuck is the Linus Torvalds that would ream the fuck out of shitty developers doing shitty things? The political correctness climate has discouraged such criticism now and the result... this. This fucking mess, where people are allowed to cut corners and get away with it because it would hurt their feelings when faced with pressure.
I'm not just guessing either. The maintainer has already said some of the "tone" of criticisms hurt his feelings. Yes, sorry, but when you claim 90% speedup over a typical epoll application using your new magical set of syscalls, and nobody can even get 1-2% speedup on a similar machine, people are going to be fucking skeptical. Then when you lower it to 60% because you originally omitted a bunch of SECURITY RELATED AND CORRECTNESS CHECKING CODE, we're going to call you the fuck out for fudging numbers.
Trying to maintain the equivalent of academic integrity within the computer science field is an exercise of insanity. You'd be fired and shunned from publishing in journals if you pulled that shit in ANY OTHER FUCKING FIELD, but because the CS scene is all about jerking each other off at every corner because the mean people keep saying mean things on Twitter and it hurts your feelings therefore we're all allowed to contribute subpar work and be protected from criticisms when others realize it's subpar.
These aren't mistakes anymore, it's clear you're just trying to farm clout at Facebook - maybe even FOR Facebook.
Fuck you. Do it right, the first time. Sick of shitty code being OK all of a sudden.
I already wrote a rant about this yesterday, but since I'm a sysadmin trying to convert to dev.. I dunno, maybe it's not a bad idea to muddy the waters a bit and talk about why not to be a sysadmin.
Personally I think it's that the perceived barrier to entry is just too high, while it isn't. You don't need a huge Ceph cluster and massive servers when you're just starting out. Why overbuild an appliance like that if it's gonna start out at maybe 5 requests a minute?
Let's take an example - DNS servers! So there's been this guy on the bind-users mailing list asking how to set up a DNS server on 2 public servers, along with a website. Nothing special I guess - you can read the thread here: https://0x0.st/ZY-d. Aside from the question being quite confusing, there was advice to read RFC's, get a book, read the BIND ARM, etc etc. And the person to deny this? No one less than Stephane Bortzmeyer, one of the people who works for nic.fr (so he maintains the .fr TLD) and wrote some of those RFC's as part of the DNSOP working group in the IETF. As for valid reasons to set up a DNS server? Could just be to learn how the DNS works, or hell even for fun. As far as professional DNS servers go.. this (https://0x0.st/ZYo9) is the nugget that powers the K root server, one of the 13 root servers that power the root zone of the internet, aka the zone apex. 2 RJ45 connections, and a console connection. The reason why this is possible is the massive recursor networks that ISP's, Google DNS, Cloudflare DNS, Quad9, etc etc provide. Point is, you don't need huge infrastructure to run a server!
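And to underline how low the barrier really is: a complete toy authoritative zone on a stock Debian BIND9 install is about this much (names and addresses are straight from the documentation ranges, so treat it as a sketch, not someone's production config):

```bash
cat >> /etc/bind/named.conf.local <<'EOF'
zone "example.test" {
    type master;
    file "/etc/bind/db.example.test";
};
EOF

cat > /etc/bind/db.example.test <<'EOF'
$TTL 3600
@   IN SOA ns1.example.test. hostmaster.example.test. (
        2024010101 ; serial
        7200       ; refresh
        3600       ; retry
        1209600    ; expire
        3600 )     ; negative cache TTL
    IN NS ns1.example.test.
ns1 IN A  192.0.2.1
www IN A  192.0.2.2
EOF

named-checkconf && named-checkzone example.test /etc/bind/db.example.test
systemctl reload bind9   # or "named", depending on the release
```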
Or maybe your business needs email. How many thousands of emails per second are you gonna need to build your mail server against? How many millions will you need to store? If your business has 10 employees and all of those manage about 10k emails total.. well that's easy, 100k emails total. Per second? Hundreds of emails per second per employee? Haha, of course not. Maybe you'll see an email a minute at most. That is not to say that all email services are like this - it is true that ISP's who offer email to their customers, and especially providers like Microsoft and Google do need massive mail servers that can handle thousands of emails per second. But you are not Microsoft or Google. So yeah, focus on the parts of email that are actually hard.. and there is plenty.
Among sysadmins you have this distinction between "professional" sysadmins and homelabbers. I don't mind the distinction itself but I think both augment each other. If you've started out by jumping into a heap of legacy at an established company, you will have plenty of resources, immediately high complexity, and probably a clusterfuck right away. But you will have massive amounts of resources. If you start out with a homelab, you will have not many resources, small workloads, and something completely new for you to build and learn with. And when running a server like that, you'll probably find that the resources required are quite small, to provide you with your new services. My DHCP servers take 12MB memory each. My DNS servers hover around the 40MB mark. The mail server.. to be fair that one consumes around 150. But if you'd hear the people saying that you need huge servers.. omg you need at least a TB of RAM on your server and 72 cores, massive disks and Ceph!1!
No you don't. All that does is scare people away and create a toxic environment for everyone. Stop it.
So for those of you keeping track, I've become a bit of a data munger of late, something that is both interesting and somewhat frustrating.
I work with a variety of enterprise data sources. Those of you who have done enterprise work will know what I mean. Forget lovely Web APIs with proper authentication and JSON fed by well-known open source libraries. No, I've got the output from an AS/400 to deal with (for the youngsters amongst you, AS/400 is a 1980s IBM mainframe-ish operating system that originally ran on 48-bit computers). I've got EDIFACT to deal with (for the youngsters amongst you: EDIFACT is the 1980s precursor to XML. It's all cryptic codes, + delimited fields and ' delimited lines) and I've got legacy databases to massage into newer formats, all for what is laughably called my "data warehouse".
But of course, the one system that actually gives me serious problems is the most modern one. It's web-based, on internal servers. It's got all the late-noughties buzzwords in web development, such as AJAX and jQuery. And it now has a "Web Service" interface at the request of the bosses, that I have to use.
The programmers of this system have based it on that very well-known database: Intersystems Caché. This is an Object Database, and doesn't have an SQL driver by default, so I'm basically required to use this "Web Service".
Let's put aside the poor security. I basically pass a hard-coded human readable string as password in a password field in the GET parameters. This is a step up from no security, to be fair, though not much.
It's the fact that the thing lies. All the files it spits out start with that fateful string: '<?xml version="1.0" encoding="ISO-8859-1"?>' and it lies.
It's all UTF-8, which has made some of my parsers choke, when they're expecting latin-1.
But no, the real lie is the fact that IT IS NOT WELL-FORMED XML. Let alone Valid.
THERE IS NO ROOT ELEMENT!
So now, I have to waste my time writing a proxy for this "web service" that rewrites the XML encoding string on these files, and adds a root element, just so I can spit it at an XML parser. This means added infrastructure for my data munging, and more potential bugs introduced or points of failure.
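The core of that proxy is embarrassingly small; a sketch, assuming the XML declaration sits alone on the first line, with a made-up URL (the hard-coded plaintext password in the query string is, sadly, faithful to the real thing):

```bash
#!/bin/bash
set -euo pipefail
API_URL='https://internal-host/ws/export?user=me&password=plaintext'   # placeholder

curl -s "$API_URL" -o raw.xml

{
  head -n 1 raw.xml | sed 's/ISO-8859-1/UTF-8/'   # the declaration lies; the payload is UTF-8
  echo '<response>'                               # the root element it never had
  tail -n +2 raw.xml
  echo '</response>'
} > fixed.xml

xmllint --noout fixed.xml    # now it's at least well-formed
```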
Let's just say that the developers of this system don't really cope with people wanting to integrate with them. It's amazing that they manage to integrate with third parties at all...
I'm in a big fat fucking stinking rut, as in progress on this project has absolutely stagnanted.
Gonna rubber face your duck now **UNZIPS** excepts I don't have zippers, as joggers are the one true way; fake Adidas til I fucking drop.
Brain damage aside, I understand both how I've laid out the data and what I'm supposed to do with it. We have a virtual machine, an array of instructions and arguments for a given process within it, and we need to walk this array and map values to registers.
We also need to spill values inside registers to stack, IF they are required at a further point within that block. This also isn't terribly complex. We simply look forward in the array and see if the value is an argument to any instruction that *needs* this value to be loaded (ie, within a register).
So this implies multiple iterations; we need to better understand how one particular value is used throughout an F before we can make a final decision on how many registers and stack space are actually needed for the whole block.
Here's where it gets tricky. If there's a call, we need to be certain that the symbol being invoked has already been fully processed. Besides the obvious fact that recursion fucks me up, there's another matter: say a private method gets invoked by another private method. We can take advantage of this, by which I mean, sacrilege incoming so put on this toga.
Looking at the output for C compilers, it would seem this is not done in practice, I would assume because it's a pain in the ass. But when you have the guarantee that F will only be called internally, as that's what "private" means, there's two ways it can go:
0. It's well below the 13-20 cycle threshold, so you inline the fucker. No suprises there.
1. It's a more involved affaire, and invoked in more than one place, so you don't inline it. Codesize matters.
Recursion and [1] are the big deal things holding me back. Not because it's too hard, like I said this is kindergarten level abstraction. I'm just slow and fanatical, which is how I prefer to spell "constant obsessive paranoid delusions". I can see the potential optimization I can pull here, so I'm stuck trying to figure it out.
Idea would be, handling the register allocation and stack spill for an internal-internal (or deep internal; what we like to call a "guts" method) in synchronization with the *calling* processes. This is, fundamentally, violating all conventions -- but so under the hood no one will notice.
Let me give you an example. If we were to pass some value to a function, expecting to mutate it and get a different value back, in a lot of cases it'd be stupid to make an implicit copy by using two registers, one for input and another for the output. Dude, it's one cycle. Multiply it by a million, say sixty times per second, for every time you __needlessly__ make a copy of a value that we've already stated is mutable.
Clearly unacceptable. This is, in the strictest sense, everywhere in every single codebase. Premature micro optimization is the root of all goodness, God is great and praiseworthy. So how do we go about it?
Answer is I know and I don't know. By which I mean to say, this very thing I've done by hand. Assembly is fun. Now the issue is teaching a calculator how to do it. Not so fun.
There is a dependency chain between processes, as I believe I've kind of alluded to. I'm trying to make decisions on the side of the caller depending on the details of the callee, which is why recursion is rawdogging my soul. This is the same situation, it's inverting the direction of one or more links in the dependency chain, which makes no fucking sense.
And yet it does.
Brain, explain yourself.
How do *you* handle this without crashing?
Brain?
<<ME STEWPED; BEEP-BOOP>>
Alright then, that was a useless attempt at fuckery. Let's have a nap then, maybe it'll come to me in the morning. That's what I've been saying to myself for almost a month now.
Perhaps it is a hardcoded fuk.
Follow-up rant to my company. Today's day is fairly good, so let's talk about infra.
We're building upon an existing open-source project which is not intended to be extended (e.g. plugins).
Our backend team somehow hacked Symfony into the app, which made the actual work a little bit less annoying. But on the other side, there is absolutely no automation. Everything is set up by hand and I need to upload my sources to my dev server and watch exactly which files get overwritten. Because if not, I accidentally overwrite core sources which will break the whole app, no matter what. If I forget which file I wrongly overwrote, I have no choice but to set up the core from scratch and apply our sources on top, AGAIN.
The first time setup took me almost five days.
Oh yeah and the team shares one dev server, so whenever I feel like fucking with a mate, I can easily fuck up his system, since everyone has root-rights.
We're required to use Windows, but our dev environment is Linux and I am the only knowledgeable Linux guy. They need cheatsheets (to be fair, I need my PowerShell cheatsheet).
We market the same app with some additional functionality, but we also have clients which require their own stuff. This case has never been thought-out, since for these specific clients, we also modify some core-parts. Which makes it a real hassle to add a basic new feature to that special customer.
At least our frontend is somewhat decent. Simple and without critical thinking, but it works and is decently understandable. I'll rant about that for another day, it's still tedious.
I know I won't stay there for long since I'm starting my own stuff, but it's sad. Nothing is perfect and they _do_ want to make it better, but it's the usual "there is no time, client first" talk. On the other hand, they tell us that we should be more efficient, but there is no way to do that without looking back at the fundamental structure and at what takes us so long.
I don't think I am able to change anything here and as I heard from co-workers, they already look for something new.
cheers
Rubber ducking your ass in a way, I figure things out as I rant and have to explain my reasoning or lack thereof every other sentence.
So lettuce harvest some more: I did not finish the linker as I initially planned, because I found a dumber way to solve the problem. I'm storing programs as bytecode chunks broken up into segment trees, and this is how we get namespaces, as each segment and value is labeled -- you can very well think of it as a file structure.
Each file proper, that is, every path you pass to the compiler, has it's own segment tree that results from breaking down the code within. We call this a clan, because it's a family of data, structures and procedures. It's a bit stupid not to call it "class", but that would imply each file can have only one class, which is generally good style but still technically not the case, hence the deliberate use of another word.
Anyway, because every clan is already represented as a tree, we can easily have two or more coexist by just parenting them as-is to a common root, enabling the fetching of symbols from one clan to another. We then perform a cannonical walk of the unified tree, push instructions to an assembly queue, and flatten the segmented memory into a single pool onto which we write the assembler's output.
I didn't think this would work, but it does. So how?
The assembly queue uses a highly sophisticated crackhead abstraction of the CVYC clan, or said plainly, clairvoyant code of the "fucked if I thought this would be simple" family. Fundamentally, every element in the queue is -- recursively -- either a fixed value or a function pointer plus arguments. So every instruction takes the form (ins (arg[0],arg[N])) where the instruction and the arguments may themselves be either fixed or indirect fetches that must be solved but in the ~ F U T U R E ~
Thusly, the assembler must be made aware of the fact that it's wearing sunglasses indoors and high on cocaine, so that these pointers -- and the accompanying arguments -- can be solved. However, your hemorroids are great, and sitting may be painful for long, hard times to come, because to even try and do this kind of John Connor solving pinky promises that loop on themselves is slowly reducing my sanity.
But minor time travel paradoxes aside, this allows for all existing symbols to be fetched at the time of assembly no matter where exactly in memory they reside; even if the namespace is mutated, and so the symbol duplicated, we can still modify the original symbol at the time of duplication to re-route fetchers to it's new location. And so the madness begins.
Effectively, our code can see the future, and it is not pleased with your test results. But enough about you being a disappointment to an equally misconstructed institution -- we are vermin of science, now stand still while I smack you with this Bible.
But seriously now, what I'm trying to say is that linking is not required as a separate step as a result of all this unintelligible fuckery; all the information required to access a file is the segment tree itself, so linking is appending trees to a new root, and a tree written to disk is essentially a linkable object file.
Mission accomplished... ? Perhaps.
This very much closes the chapter on *virtual* programs, that is, anything running on the VM. We're still lacking translation to native code, and that's an entirely different topic. Luckily, the language is pretty fucking close to assembler, so the translation may actually not be all that complicated.
But that is a story for another day, kids.
And now, a word from our sponsor:
<ad> Whoa, hold on there, crystal ball. It's clear to any tzaddiq that only prophets can prophecise, but if you are but a lowly goblinoid emperor of rectal pleasure, the simple truths can become very hard to grasp. How can one manage non-intertwining affairs in their professional and private lives while ALSO compulsively juggling nuts?
Enter: Testament, the gapp that will take your gonad-swallowing virtue to the next level. Ever felt like sucking on a hairy ballsack during office hours? We got you covered. With our state of the art cognitive implants, tracking devices and macumbeiras, you will be able to RIP your way into ultimate scrotolingual pleasure in no time!
Utilizing a highly elaborated process that combines illegal substances with the most forbidden schools of blood magic, we are able to [EXTREMELY CENSORED HERETICAL CONTENT] inside of your MATER with pinpoint accuracy! You shall be reformed in a parallel plane of existence, void of all that was your very being, just to suck on nads!
Just insert the ritual blade into your own testicles and let the spectral dance begin. Try Testament TODAY and use my promo code FIRSTBORNSFIRSTNUT for 20% OFF in your purchase of eternal damnation. Big ups to Testament for sponsoring DEEZ rant.
!dev Nice surprise... Hopefully...
Been having a lot of teeth problems and need like 2 crowns and 1 filling now... Old fillings just suddenly fell out. My regular dentist plan is ok for cleaning but isn't so good for these expensive treatments. And it seems the dentists in network are sorta so-so... The original fillings were done by them like last year....
Well somehow it popped into my mind that with COVID.... Given it's a health crisis and the govt is bending over backwards to deal with it... it may also let me change insurance plans during the year.
Usually enrollment is once a year until you change jobs... But when I googled I saw that apparently some did allow it.... Though it's up to the employer and the insurance company. They have to negotiate and allow it. Not required to by law.
So anyway last week, I called up my HR asking if they allow it. The rep said they'd need to ask higher up and get back to me this Monday.
I never got a call though, but today I took off to deal with all the health stuff and just take a personal day. So I called my "current" dentist insurance to ask what I needed to do to see a specialist for the root canal crown, as my regular dentist can't do this one.
But they couldn't find my policy because it turned out it was cancelled last week. At this point I'm like OK WHO FUCKED UP... WHAT THE BLOODY FUCK... IM UNINSURED NOW?!!!
I login to the company benefits site to get their support #. But it also shows my current plans. Where it shows that it got switched.
I still had to call the new insurance to get my ID info...
But I'm like hm... This seems to have worked out well... Assuming everything goes as planned. Basically got 1/2 year on cheap normal coverage but now that I need it, got to switch to the more expensive coverage, which now comes out better: lower overall costs, and better drs...
Relatively often the OpenLDAP server (slapd) behaves a bit strange.
While it is a little bit slow, with a small number of users it's fine (I didn't do a benchmark, but Active Directory seemed to be a bit faster; then again it has other quirks and is Windows-only). slapd is the reference implementation of the LDAP protocol, so I didn't expect it to be much better.
Some years ago slapd migrated to a different configuration style - instead of a configuration file and a required restart after every change, it now uses an additional database for "live" configuration, which also allows deploying multiple servers with the same configuration (I guess this is nice for larger setups). Much of the documentation online does not reflect the new configuration, so using the new configuration style requires some knowledge of LDAP itself.
It is possible to revert to the old file based method but the possibility might be removed by any future version - and restarts may take a little bit longer. So I guess, don't do that?
To access the configuration over the network (only using the command line on the server to edit the configuration is sometimes a bit... annoying) an additional internal user has to be created in the configuration database (while working on the local machine as root you are authenticated over a unix domain socket). I mean, I had to create an administration user during the installation of the service, but apparently that's only for the main database...
The password in the configuration can be hashed as usual - but strangely it only accepts hashes of some passwords (a hashed version of "123456" is accepted, but not hashes of other passwords, I mean what the...?), so I have to use a single plaintext password... (secure password hashing works for normal user and normal admin accounts).
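For anyone stuck on the same thing: the "internal user" is really just the rootdn of the {0}config database getting a password, along the lines of this sketch (run locally as root over the unix socket; whether slapd then deigns to accept your particular hash is, as described above, another matter):

```bash
slappasswd -s 'SomeLongPassphrase'      # prints something like {SSHA}...

ldapmodify -Y EXTERNAL -H ldapi:/// <<'LDIF'
dn: olcDatabase={0}config,cn=config
changetype: modify
replace: olcRootPW
olcRootPW: {SSHA}paste-the-slappasswd-output-here
LDIF
```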
But even worse are the default logging options: by default (at least on Debian) the log level is set to DEBUG. Additionally, if slapd detects optimization opportunities it writes them to the logs - at least once per connection, if not per query. Together with an application that made a lot of connections and queries (this was not intended and got fixed later) THIS RESULTED IN 32 GB LOG FILES IN ≤ 24 HOURS! - enough to fill up the disk and crash other services (lessons learned: add more monitoring, monitoring, and monitoring, and /var/log should be an extra partition). I mean, logging optimization hints is certainly nice - it runs faster now (again, I did not do any benchmarks) - but the verbosity was way too high.
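Turning that firehose down is a one-attribute change; a sketch ("stats" logs connections/operations/results, "none" still keeps the critical messages):

```bash
ldapmodify -Y EXTERNAL -H ldapi:/// <<'LDIF'
dn: cn=config
changetype: modify
replace: olcLogLevel
olcLogLevel: stats
LDIF
```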
The worst parts are the error messages: when entering a query string with a syntax error, slapd returns error code 80 without any additional text - the documentation reveals the SO MUCH BETTER meaning: "other error". THIS IS SO HELPFUL... In the end I was able to find the reason why the input was rejected, but in my experience most error messages are a little bit more precise.