Was doing some work on a server today and removing loads of stuff.
rm -rf file1
Etc
Etc
Etc
Went into another directory with very important data. Wanted to do ls -la but my fingers went:
rm -rf ./
.
.
*1 millisecond later*
😶
FUCK FUCK FUCK FUCK FUCK FUCK FUCK FUCK FUCK FUCK FUCK FUCK
CTRL+C
CTRL+C
CTRL+C
CTRL+C
CTRL+C
CTRL+C
CTRL+C
CTRL+C
CTRL+C
CTRL+C
CTRL+C
CTRL+C
CTRL+C
CTRL+C
CTRL+C
*VIGOROUSLY CHECKS FILES*
Everything still there 😅
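A cheap tripwire against exactly this kind of fat-finger, assuming GNU rm (goes in ~/.bashrc):
alias rm='rm -I'   # prompts once before deleting more than 3 files or recursing
It doesn't save you from every mistake, but it buys the one second of "wait, what?" this story needed.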
!rant
This was over a year ago now, but my first PR at my current job was +6,249/-1,545,334 loc. Here is how that happened... When I joined the company and saw the code I was supposed to work on I kind of freaked out. The project was set up in the most ass-backward way with some sort of bootstrap boilerplate sample app thing with its own build process inside a subfolder of the main angular project. The angular app used all the CSS, fonts, icons, etc. from the boilerplate app and referenced the assets directly. If you needed to make changes to the CSS, fonts, icons, etc., you would need to cd into the boilerplate app directory, make the changes, run a Gulp build that compiled things there, then cd back to the main directory and run a Grunt build (that's right, both Grunt and Gulp) that then built the angular app and referenced the compiled assets inside the boilerplate directory. One simple CSS change would take 2 minutes to test at minimum.
I told them I needed at least a week to overhaul the app before I felt like I could do any real work. Here were the horrors I found along the way.
- All compiled (unminified) assets (both CSS and JS) were committed to git, including vendor code such as jQuery and Bootstrap.
- All bower components were committed to git (ALL their source code, documentation, etc, not just the one dist/minified JS file we referenced).
- The Grunt build was set up by someone who had no idea what they were doing. Every SINGLE file or dependency that needed to be copied to the build folder was listed one by one in a HUGE config.json file instead of using pattern matching like `assets/images/*`.
- All the example code from the boilerplate and multiple jQuery spaghetti sample apps from the boilerplate were committed to git, as well as ALL the documentation too. There was literally a `git clone` of the boilerplate repo inside a folder in the app.
- There were two separate copies of Bootstrap 3 being compiled from source. One inside the boilerplate folder and one at the angular app level. They were both included on the page, so literally every single CSS rule was overridden by the second copy of bootstrap. Oh, and because the bootstrap source was included, committed, and built from source, the actual bootstrap source files had been edited by developers to change styles (instead of overriding them), so there was no replacing it with an OOTB minified version.
- It is an angular app but there were multiple jQuery libraries included and relied upon for actual in-app functionality. And, beyond that, even though angular includes native ways to do XHR requests (using $resource or $http), there were numerous places in the app where raw `XMLHttpRequest`s were intermixed with angular code.
- There was no live reloading for local development, meaning if I wanted to make one CSS change I had to stop my server, run a build, start again (about 2 minutes total). They seemed to think this was fine.
- All this monstrosity was handled by a single massive Gruntfile that was over 2,000 LOC. When all my hacking and slashing was done, I reduced this to ~140 LOC.
- There were developers' (I use that term loosely) *PERSONAL AWS ACCESS KEYS* hardcoded into the source code (remember, this is a front-end app, so this was in every user's browser) in order to do file uploads. Of course when I checked in AWS, those keys had full admin access to absolutely everything in AWS.
- The entire unminified AWS JavaScript SDK was included on the page and not used or referenced (~1.5MB).
- There was no error handling or reporting. An API error would just result in nothing happening on the front end, so the user would usually just click and click again, re-triggering the same error. There was also no error reporting software installed (NewRelic, Rollbar, etc) so we had no idea when our users encountered errors on the front end. The previous developers would literally guide users who were experiencing issues through opening their console in dev tools and have them screenshot the error and send it to them.
- I could go on and on...
This is why you hire a real front-end engineer to build your web app instead of the cheapest contractors you can find from Ukraine.
So, I tried to demonstrate to my roommate how many people push their credentials to GitHub by searching for "password remove" commits.
I decided to show him one of the files and noticed something interesting: a public IP, and MySQL credentials.
I visit the IP and what do I see there: a directory listing with a Python script, which injects the database into a webpage (???), and a log of all HTTP requests. Lots of failed attacks aiming at the PHP CGI. Still wondering how they failed on a Python server 🤔🤔🤔
Edit phpMyAdmin to connect to the MySQL database. Success.
Inserted a row telling him that his password is on GitHub. Maybe I should also have told him how to actually remove it. 😅
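(For the record, a "password remove" commit removes nothing; the secret stays in history. Actually scrubbing it looks roughly like this with the BFG, repo URL and secret being placeholders:)
git clone --mirror git@github.com:someuser/repo.git
echo 'hunter2==>REMOVED' > replacements.txt
bfg --replace-text replacements.txt repo.git
cd repo.git && git reflog expire --expire=now --all && git gc --prune=now --aggressive
git push
# ...and rotate the password anyway, since it has already been public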
Yes, root can login from %
This is how far i can get with my current abilities.
------------------------------
Scary how insecure this world is.
Attended one of the best meetups ever. To give you an idea how awesome it was..
Speaker took the first ~20 minutes introducing himself.
His intro slide deck kept referring to him in the third person (he is the only employee in his consulting 'company'). Ex. "Mr. Smith began his humble career .."
The PowerPoint presentation began with him clicking through each page instead of starting the slideshow (e.g., pressing F5).
Finally someone asked "Can you make slide bigger?"
S:"You can't read that?..um..sure...I guess .."
Starts fumbling around the zoom ...
Dev: "No, can you start the slideshow?"
S: "I don't know what you mean...there...I zoomed it, is that better? Now I can't see my notes..just sec.."
<fumbles again with the zoom>
Dev: "No, not zoom, start the slide show, press F5"
S: "Oh...you want me to F5 it...OK..."
<he *clicks* the slide show button>
Finally getting into code, trying to get out of powerpoint ...
S: "How do I get out of this fullscreen?.."
Dev: "Hit escape"
S:"No..um.."
<keeps trying to click on 'something'>
S:"I see visual studio, but its not on the big screen... "
<keeps click on 'something', no one is sure whats going on>
Dev: "Hit Escape to stop the slideshow"
<finally hits escape, then able to put Visual Studio on the big screen>
S: "Ahh...there, I figured it out."
Speaker had no end of making wild/random statements like:
".Net Core is the future of Microsoft, if you're using .Net 4.5...forget it, its not even supported anymore."
"When I was at Microsoft Build, I asked them why not put all the required .Net assemblies in one directory. Looks like with .Net Core, they listened to me" (he was serious)
"I don't use SQL Server Mgmt Studio. Its free and it sucks. I use <insert a very expensive SSMS clone>, its great, you guys should check it out", then proceeds to struggle to open a query window to write some SQL.
"When you use .Net Core and EntityFramework, you have to write your own stored procedures. If a developer can't write stored procedures, he shouldn't be in this business."
I was on the edge of my seat, hungry for the next crazy bat-shit thing to come out of his mouth. He did not disappoint. BEST MEETUP EVER!
More sysadmin focused but y’all get this stuff and I need a rant.
TLDR: Got the wrong internship.
Start working as a sysadmin/dev intern/man-of-many-hats at a small finance company (I'm still in school). Day 1: "Oh, new IT guy? Just grab a PC from an empty cubicle and here's a flash drive with Fedora, go ahead and manually install your operating system. Oh shit, also your desktop has 2GB of RAM, a Core 2 Duo, and we scavenged your hard drive for another dev, so just go find one in the server room. And also your monitor is broken, so just take one from another cubicle."
Am shown our server room and see that someone is storing random personal shit in there (golf clubs propped against the server racks with heads mixed into the cabling, etc.). Ask why the golf clubs etc. are mixed in with the cabling and server racks and am given the silent treatment. Learn later that my boss is the owners son, and he is storing his personal stuff in our server room.
Do desktop support for end users. Another manager asks for her employees to receive copies of Office 2010 (they're running 2003 and 2007). Ask boss about licensing plans in place and upgrade schedules, he says he'll get back to me. I explain to the other manager we are working on a licensing scheme and I will keep her informed.
Next day the other manager tells me (*the intern*) that she spoke with a rich business friend whose company uses fake/cracked license keys and we should do the same to keep costs down. I nod and smile. IT manager tells me we have no upgrade schedule or licensing agreement. I suggest purchasing an Office 365 subscription. Boss says $150 a year per employee is too expensive (company pulls good money, has ~25 employees, owner is just cheap). I suggest freeware alternatives. Other manager refuses to use anything other than Office 2010 as that is what she is familiar with. Boss refuses to spend any money on license keys. Learn the other manager is the owner's wife and mother of my boss. Stalemate. No upgrades happen.
Company is running an Active Directory Windows Server 2003 instance that needs upgrading. I suggest 2012R2. Boss says "sure". I ask how he will purchase the license key and he tells me he won't.
I suggest running an Ubuntu server with LDAP functionality instead, with the understanding that this will add IT employee hours for maintenance. Boss's eyes glaze over at the mention of Linux. The upgrade is put off.
Start cleaning out server room of the personal junk, labeling server racks and cables, and creating a network map. Boss asks what I’m doing. I show him the organized side of the server room and he says “okay but don’t do any more”.
... *sigh* ...
Today my manager asked me about my research into using RabbitMQ as a backup in case Azure Service Bus ever goes down.
Me: "Good. The way we designed the framework, all we have to do is drop the DLLs into the directory, update the config, and the services will start using RabbitMQ."
Mgr: "Excellent. Probably should be looking into using RabbitMQ as a permanent replacement for Azure"
Me: "What? The whole reason we moved to Azure was to eliminate the problems with having an on prem service bus. Since we've switched, there has been zero downtime."
Mgr: "That's what VP-Joe is afraid of. If Azure ever goes down, he won't know how to explain Azure to the president as to why we're not taking orders or can't ship packages."
Me: "That makes no sense. What did VP-Joe tell the president when a database goes down or a server mis-configuration?"
Mgr: "President understands internal outages, its just the whole 'cloud' thing he doesn't understand."
Me: "Um..then VP-Joe needs to explain it to him?"
Mgr: "The decision has already been made. Are you on board? Lets look at this move as a cost savings."
Me: "You mean the $10 a month? How much hardware will we need to support RabbitMQ?"
Mgr: "Yea, nobody probably thought of that."
Me: "I'm on board with whatever decision, but I'd like a little more than VP-Joe being afraid of the president."
Mgr: "I'm sure its not being afraid."
Me: "..."
Mgr: "OK, lets wait and see if VP-Joe forgets about this and moves on to something new."4 -
I'm not angry, mostly sad.
At my workplace we don't use git.
There are constant shenanigans going on: overwriting each other's work, sending code via email or USB stick, and forgetting passwords to zip files.
I already use git for all my local projects (literally git init in the directory) but my coworker and I thought that it would be a great idea to have a local server with a Gitlab running on it.
So I started looking into running a self-hosted Gitlab (for about 15 minutes) and then our boss who was sitting right next to me almost shouted at us: "Such stuff should be coordinated with the boss! We don't just do something and burn my money because it's _cool_!"
No, git is not cool, it's necessary, for crying out loud! Gitlab is cool, but at the end of the day it's just another tool.
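Worst case there's a zero-cost middle ground that needs no approval and no GitLab: a bare repo on any box we can SSH into (host and path assumed):
ssh fileserver 'git init --bare /srv/git/project.git'
git remote add origin fileserver:/srv/git/project.git
git push -u origin master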
I guess I have some persuasion to do.
I don't know what version control has done to our boss that he has such a deep dislike for it.
*Downloading a linux iso (distrohopping YAY) because the download stopped last night*
*200KB/s instead of the 5MB/s from last night*
*sets up a subdomain for downloading iso's*
*enables SSL*
*downloads the iso to my server*
*copies the iso to the directory of the iso subdomain*
*starts downloading the iso from the server*
5MB/s YAY
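(The whole trick condensed, hostnames assumed:)
ssh server 'wget -c https://releases.example.org/distro.iso -P /var/www/iso'   # fast pipe
wget -c https://iso.mydomain.tld/distro.iso   # then pull it from my own box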
I am weird 😆
Worst legacy experience...
Called in by a client who had had a pen test on their website, which turned up many, many security holes. I was tasked with coming in and implementing the required fixes.
Site turned out to be Classic ASP built on an MS Access database. Due to the nature of the client, everything had to be done on their premises (kind of ironic but there you go). So I'm on-site trying to get access to code and server. My contact was *never* at her desk to approve anything. IT staff "worked" 11am to 3pm on a long day. The code itself was shite beyond belief.
The site was full of forms with no input validation, origin validation and no SQL injection checks. Sensitive data stored in plain text in cookies. Technical errors displayed on certain pages revealing site structure and even DB table names. Server configured to allow directory listing in file stores so that the public could see/access whatever they liked without any permission or authentication checks. I swear this was written by the child of some staff member. No company would have had the balls to charge for this.
Took me about 8 weeks to make and deploy the changes to the client's satisfaction. Could have done it in 2 with some support from the actual people I was supposed to be helping!! But it was their money (well, my money, as they were government funded!).
Worst thing you've seen another dev do? So many things. Here is one...
Lead web developer had in the root of their web application config.txt (ex. http://OurPublicSite/config.txt) that contained passwords, because they felt the web.config was not secure enough. Any/all applications off of the root could access the file to retrieve their credentials (SQL Server logins, network share passwords, etc.)
When I pointed out the security flaw, the developer accused me of 'hacking' the site.
I get called into the vice-president's office, where he was 'deeply concerned' about my ethical behavior and whether we needed to make any personnel adjustments (grown-up speak for "Do I need to fire you over this?")
Me:"I didn't hack anything. You can navigate directly to the text file using any browser."
Dev: "Directory browsing is denied on the root folder, so you hacked something to get there."
Me: "No, I knew the name of the file so I was able to access it just like any other file."
Dev: "That is only because you have admin permissions. Normal people wouldn't have access"
Me: "I could access it from my home computer"
Dev:"BECAUSE YOU HAVE ADMIN PERMISSIONS!"
Me: "On my personal laptop where I never had to login?"
VP: "What? You mean ...no....please tell me I heard that wrong."
Dev: "No..no...its secure....no one can access that file."
<click..click>
VP: "Hmmm...I can see the system administration password right here. This is unacceptable."
Dev: "Only because your an admin too."
VP: "I'll head home over lunch and try this out on my laptop...oh wait...I left it on...I can remote into it from here"
<click..click..click..click>
VP: "OMG...there it is. That account has access to everything."
<in an almost panic>
Dev: "Only because it's you...you are an admin...that's what I'm trying to say."
Me: "That is not how our public web site works."
VP: "Thank you, but Adam and I need to discuss the next course of action. You two may go."
<Adam is her boss>
Not even 5 minutes later a company wide email was sent from Adam..
"I would like to thank <Dev> for finding and fixing the security flaw that was exposed on our site. She did a great job in securing our customer data and a great asset to our team. If you see <Dev> in the hallway, be sure to give her a big thank you!"
The "fix"? She moved the text file from the root to the bin directory, where technically, the file was no longer publicly visible.
That 'pattern' was used heavily until she was promoted to upper management and the younger webdev bucks (and does) felt storing admin-level passwords was unethical and found more secure ways to authenticate.
TLDR; I just screwed a production server and rendered it useless!!!
Long story:
I went to install a product that we built at the customer's site, and was given a server running Linux to deploy our app on.
I work in Windows, and barely know the basic Linux commands.
So I look at the files in the home directory and see that there are a lot of files, so I ask the customer if it is OK to move all the files to a separate directory.
He agrees, and I, thinking that I am smart, proceed to enter the following commands in the terminal:
mkdir old
mv /* old
Of course I got an error that I don't have permission so my next command was:
sudo mv /* old
And that was the end of that computer.
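For anyone wondering what exactly happened there: the glob was anchored at the filesystem root, not the current directory.
mv ./* old    # what was intended: move the current directory's contents
sudo mv /* old    # what was typed: move /bin, /etc, /lib, ... into ./old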
The amazing part of the story is that as soon as it happened, I understood so much about Linux.
The file structure, sudo, the power of the terminal, aliases and so much more...
Now, instead of shouting, I can just type "fuck"
The Fuck is a magnificent app that corrects errors in previous console commands.
inspired by a @liamosaur tweet
https://twitter.com/liamosaur/...
Some gems:
➜ apt-get install vim
E: Could not open lock file /var/lib/dpkg/lock - open (13: Permission denied)
E: Unable to lock the administration directory (/var/lib/dpkg/), are you root?
➜ fuck
sudo apt-get install vim [enter/↑/↓/ctrl+c]
[sudo] password for nvbn:
Reading package lists... Done
...
➜ git push
fatal: The current branch master has no upstream branch.
To push the current branch and set the remote as upstream, use
git push --set-upstream origin master
➜ fuck
git push --set-upstream origin master [enter/↑/↓/ctrl+c]
Counting objects: 9, done.
...
➜ puthon
No command 'puthon' found, did you mean:
Command 'python' from package 'python-minimal' (main)
Command 'python' from package 'python3' (main)
zsh: command not found: puthon
➜ fuck
python [enter/↑/↓/ctrl+c]
Python 3.4.2 (default, Oct 8 2014, 13:08:17)
...
➜ git brnch
git: 'brnch' is not a git command. See 'git --help'.
Did you mean this?
branch
➜ fuck
git branch [enter/↑/↓/ctrl+c]
* master
➜ lein rpl
'rpl' is not a task. See 'lein help'.
Did you mean this?
repl
➜ fuck
lein repl [enter/↑/↓/ctrl+c]
nREPL server started on port 54848 on host 127.0.0.1 - nrepl://127.0.0.1:54848
REPL-y 0.3.1
...
Get fuckked at
https://github.com/nvbn/thefuck
I decided to setup a little server on my local network just to make use of a 2TB harddrive I use to store videos.
Told everyone in the house I planned to grow the library over time and that they could access it all in a browser using my system name. It's become quite a fun venture and my video library is shaping up nicely.
Using nginx on a Dell XPS 17 with Ubuntu 16.04 to host a server that just auto indexes a shared directory on my external 2TB harddrive. Kind of an embarrassing rig, but it's just a hobby activity and I do plan to upgrade shit later.
The real fun has been getting to understand a bit more about video files. They used to be magic to me, as complex as their file extension. Now I run a script on all of my torrents which checks the video and audio codecs, converting them if they aren't supported by Chrome's and Firefox's web players, and outputting mp4s using ffmpeg. I feel like I have this stuff down fairly well now. Becoming more and more automated.
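The core of that script is roughly the following; the directories are made up and h264/aac is my pick for a browser-safe target:
for f in ~/torrents/*; do
  v=$(ffprobe -v error -select_streams v:0 -show_entries stream=codec_name -of csv=p=0 "$f")
  a=$(ffprobe -v error -select_streams a:0 -show_entries stream=codec_name -of csv=p=0 "$f")
  out="/media/library/$(basename "${f%.*}").mp4"
  if [ "$v" = "h264" ] && [ "$a" = "aac" ]; then
    ffmpeg -i "$f" -c copy "$out"    # codecs already fine, just swap the container
  else
    ffmpeg -i "$f" -c:v libx264 -c:a aac "$out"    # transcode to browser-safe codecs
  fi
done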
Next step is to port forward so I can access it from anywhere, but we'll see about that later down the line.
TLDR: In defense of Powershell - the rant:
I don’t get the Powershell hate.
You don’t hate a screwdriver for not being able to turn a nut, you just *don’t use a screwdriver to turn a nut*
Once you recognize what the tool is good for and you don’t try to use it like Bash, it’s wildly powerful, and satisfying to use in a way Cmd.exe never was.
Cygwin or a Linux Subsystem can only go so far on a Windows computer. You’re dealing with two fundamentally different OS architectures. It makes sense you’d need different tools.
And like it or not, Microsoft owns the non-tech-user desktop, corners the non-tech server business market, and Active Directory is THE tool for managing Windows desktops on a large scale. So Wanblows is not going away anytime soon.
Automation without some weird ass sysVol batch login script is finally possible. Anyone who knows .Net classes can leverage their methods from directly within Powershell. Remote management of headless Windows servers is now a reality. If you have an Office 365 Exchange server you can literally Powershell remote to it for management, just like your favorite cloud hosted Linux distribution.
No one said Windows is a better OS, but an object based shell on an object based OS *makes sense*. It's useful for its environment. Let it be.
Ok, some @#$#!!@ at my company set the WordPress cache directory to "/" on a Linux server, some other plugin triggered the cache clear, and the Apache user was the owner of almost all the WordPress directories of several MU installations. This all happened on a Friday afternoon.
The website for our biggest client went down and the server went haywire. We don't provide any infrastructure for this client though, so we called their IT partner to start figuring this out.
They started blaming us, asking us if we had upgraded the website or changed any PHP settings, which all were a firm no from us. So they told us they had competent people working on the matter.
TL;DR their people aren't competent and I ended up fixing the issue.
Hours go by, nothing happens, the client calls us and we call the IT partner, nothing, they don't understand anything. Told us they can't find any logs etc.
So we setup a conference call with our CXO, me, another dev and a few people from the it partner.
At this point I’m just asking them if they’ve looked at this and this, no good answer, I fetch a long ethernet cable from my desk, pull it to the CXO’s office and hook up my laptop to start looking into things myself.
IT partner still can’t find anything wrong. I tail the httpd error log and see thousands upon thousands of warning messages about mysql being loaded twice, but that’s not the issue here.
Check top and see there are 257 instances of httpd (256 of them spawned by the parent httpd), mysql is using 600% CPU, and whenever I try to connect to mysql through the CLI it throws a too many connections error.
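The triage loop at this point, roughly (standard LAMP paths assumed):
tail -f /var/log/httpd/error_log    # thousands of 'mysql loaded twice' warnings
top                                 # 257 httpd processes, mysqld at 600% CPU
mysql -e 'SHOW PROCESSLIST'         # too many connections
df -h                               # the check that would have ended this early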
I heard the IT partner talking about a DDoS attack, so I asked them to pull it off the public network and only give us access through our VPN. They do that, reboot the server, same problems.
Finally we get the IT partner to roll back the VM to earlier last night. Everything works great; 30 min later, it crashes again. At this point I'm getting tired and frustrated, this isn't my job, I thought they had competent people working on this.
I noticed that the db had a few corrupted tables, and ask the it partner to get a dba to look at it. No prevail.
5 o'clock is here, we decide to give the VM rollback another try, but first we go home, get some dinner and resume at 6pm. I had told them I wanted to be in on this call, and said let me try this time.
They spend ages doing the rollback, and then for some reason they have to reconfigure the network and shit. Once it booted, I told their tech to stop mysqld and httpd immediately and prevent it from start at boot.
I can now look at the logs leading to this issue. I noticed our debug flag was on and had generated a 30GB log file. Tail it and see it's what I'd expect: warnings and more warnings. And all the other logs for mysql and apache are huge too, so the drive is full. Just gotta delete them.
I quietly start apache and mysql, see the website is working fine, shut it down and just take a copy of the /var/lib/mysql and /etc directories just to have backups.
Starting to connect a few dots, but I wasn't exactly sure if it was right. Had the full drive caused mysql to corrupt itself? Only one way to find out. Start apache and mysql back up, and just wait and see. Meanwhile I fixed mysql being loaded twice. Some genius had put load mysql.so at the top and bottom of php.ini.
While waiting on the server to crash again, I'm talking to the IT support guy, who told me they haven't updated anything on the server except security patches now and then, and they didn't have anyone familiar with this setup. No shit, it's running PHP 5.3 -.-
Website up and running 1.5 hours later, mission accomplished.
It was a normal school day. I was at the computer and I needed to print some stuff out. Now this computer is special, it's hooked up onto a different network for students that signed up to use them. How you get to use these computers is by signing up using their forms online.
Unfortunately for me, on that day I needed to print something out and the computer I was working on was not letting me sign in. I called IT real quick and they said I needed to renew my membership. They sent me the form, and I quickly filled it out. I hit the submit button and was greeted by a single-line error written in PHP.
Someone had forgotten to turn off the debug mode to the server.
Upon examination of the error message, it was a syntax error at line 29 in directory such and such. This directory, I thought to myself, I know where this is. I quickly started my FTP client and was able to find the actual file in the directory that the error mentioned. What I didn't know was that I'd find a mountain of passwords inside their PHP files, because they were automating all of the authentications.
Curious as I was, I followed the database link that was in the PHP file. Unfortunately, someone in IT hadn't thought far enough to make the actual link unseeable. I was greeted by the full database. There was nothing of real value from what I could see, mostly forms that had been filled out by students.
Not only this, but I was displeased with the bad passwords. These passwords were maybe 5 characters long: super simple words with a couple of numbers tacked onto the end.
That day, I sent in a ticket to IT and told them about the issue. They quickly remedied it by turning off debug mode on the servers. However, they never did shut down access to the database and the php files...
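For reference, the fix on the server side boils down to one pair of php.ini directives:
display_errors = Off   ; never dump errors to visitors
log_errors = On        ; keep them in the server log instead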
I've found and fixed any kind of "bad bug" I can think of over my career from allowing negative financial transfers to weird platform specific behaviour, here are a few of the more interesting ones that come to mind...
#1 - Most expensive lesson learned
Almost 10 years ago (while learning to code) I wrote a loyalty card system that ended up going national. Fast forward 2 years and by some miracle the system still worked and had services running on 500+ POS servers in large retail stores uploading thousands of transactions each second. Due to this increased traffic, and to stay ahead of any trouble, we decided to add a loadbalancer to our backend.
This was simply a matter of re-assigning the IP and would cause 10-15 minutes of downtime (for the first time ever), we made the switch and everything seemed perfect. Too perfect...
After 10 minutes every phone in the office started going berserk - calls were coming in about store servers irreparably crashing all over the country, taking all the tills offline and forcing stores to close doors midday. It was bad and we couldn't conceive how it could possibly be us or our software to blame.
Turns out we made the local service write any web service errors to a log file upon failure for debugging purposes before retrying - a perfectly sensible thing to do if I hadn't forgotten to check the size of or clear the log file. In about 15 minutes of downtime each store's error log proceeded to grow and consume every available byte of HD space before crashing Windows.
#2 - Hardest to find
This was a true "Nessie" bug. We had a single codebase powering a few hundred sites. Every now and then the web server would spontaneously die and vomit a bunch of SQL statements and sensitive data back to the user, causing huge concern, but I could never remotely replicate the behaviour - until 4 years later it happened to one of our support staff and I could pull out their network & session info.
Turns out years back when the server was first set up, each domain was added as an individual "Site" on IIS but shared the same root directory and hence the same session path. It would have remained unnoticed if we had not grown, but as our traffic increased, every so often 2 users of different sites would end up sharing a session id, causing the server to promptly implode on itself.
#3 - Most elegant fix
Same bastard IIS server as #2. The codebase was the most unsecure, unstable travesty I've ever worked with - SQL injection vulns in EVERY URL, SQL statements stored in COOKIES... this thing was irreparably fucked up but had to stay online until it could be replaced. Basically every other day it got hit by bots and ended up sending bluepill spam or mining shitcoin, and I would simply delete the instance and recreate it in a semi un-compromised state, which was an acceptable solution for the business for uptime... until we were DDoS'ed for 5 days straight.
My hands were tied and there was no way to mitigate it except for stopping individual sites as they came under attack and starting them after it subsided... (for some reason they seemed to be targeting by domain instead of IP). After 3 days of doing this manually I was given the go-ahead to use any resources necessary to make it stop, and especially since it was IIS6 I had no fucking clue where to start.
So I stuck to what I knew and deployed a $5 VM running an Nginx reverse proxy with heavy caching and rate limiting, linked to a custom fail2ban plugin, in front of the insecure server. The attacks died instantly, the server sped up 10x and was never compromised by bots again (presumably since they got back a linux user agent). To this day I marvel at this miracle $5 fix.
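For the curious, the shape of that fix in nginx terms (zone names, cache path, and backend are placeholders):
limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;
proxy_cache_path /var/cache/nginx keys_zone=shield:50m;
server {
    location / {
        limit_req zone=perip burst=20;    # rate limiting
        proxy_cache shield;               # heavy caching
        proxy_pass http://10.0.0.5:80;    # the cursed IIS box
    }
}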
So my previous alma mater's IT servers get hacked really easily. They run mostly on Microsoft Windows Server and Active Directory, and only the gateway runs on Linux. When I checked the stationed IT guy's computer, he was having problems which I think were another intrusion.
I asked the guy if I can get root access on the Gateway server. He was hesitant at first but I told him I worked with a local Linux server before. He jested, sent me to the server room with his supervision. He gave me the credentials and told me "10 minutes".
What I did?
I just installed fail2ban, iptables, and basically blocked those IP ranges used by the attacker. The attack quickly subsided.
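Roughly this, on a Debian-flavored box (the attacker's range here is a placeholder):
apt-get install fail2ban    # bans SSH brute-forcers out of the box
iptables -A INPUT -s 192.168.10.0/24 -j DROP    # drop the offending range outright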
Later we found out it was a local attack and the attacker was brute forcing the SSH port. We triaged it to one kid doing the brute forcing while connected to the lobby WiFi. Turns out he was a script kiddie with no idea I was tracking his attacks via the fail2ban logs.
Moral of the lesson: make sure your IT secures everything in place.
*wants to watch Re:Zero on Windows*
The files are on my file server, exposed to the Windows machine with Samba. But the Re:Zero directory isn't visible on Windows 🤔
$ mv "Re:Zero" ReZero
*Suddenly becomes visible on Windows*
What the fuck.. can't it do : characters? Something as basic as that? Microsoft, you.. you never heard of character escaping? I mean, Linux shells for example don't deal with certain characters very well either, so what do you do? Either "this", 'this', or this\ stuff, depending on some and the other things that I won't get into, but mostly it boils down to preference.
Meanwhile Windows: sorry man, can't do it >_< but I can fuck up your language, updates, privacy and files!!!
Fucking hell.. at this point I'm not even mad anymore. Just.. what the fuck Microsoft?
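Until Samba and Windows sort out their differences, a bulk rename keeps the share browsable (swap the dash for whatever character you prefer):
find . -depth -name '*:*' -execdir bash -c 'mv "$1" "${1//:/-}"' _ {} \;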
Whilst I was browsing the university website I came across a directory that allowed directory listings. Amongst all the .pl files was one named something.pl.old. Rather than interpreting the file, the web server returned the raw source, including domain credentials for one of the network admins.
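The standard guard against exactly this, assuming an Apache box, is to deny the usual backup suffixes outright:
<FilesMatch "\.(old|bak|orig|save)$">
    Require all denied
</FilesMatch>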
This one's for all the SysAdmins out there.
About 4 years ago I was asked to take over a dental office's systems administration (~20 machines) after their previous guy had allowed their server's RAID 1 to fail and hadn't done any updates or general maintenance. (Please take note this office is my parents' dental office.)
I have since been recovering from his poor configuration and setup by instating an Active Directory environment and installing up-to-date software, as well as updating machines on the domain to Windows 10, since Windows 7 is no longer supported. I have also been properly licensing everything.
My bosses (my parents) are annoyed with this because "it's more expensive" and "it's too complicated we don't know how to manage it" and I don't know how to explain to them that they aren't fucking systems admins. They asked why they could do it before and I tried to explain that now it's secure and things need to be rolled out on the network level. They had every user running full local admin on every workstation plus the server.
Some people don't fucking understand that just because it's simple doesn't make it a good fucking idea. And just because it's cheap now doesn't mean it always will be (just wait till Microsoft audits you).
Oh and they also don't understand fucking CAL licensing and refuse to pay for gsuite for all their staff who use it. Instead they just have two gsuite accounts and give everyone the fucking password.
I'm going to have an aneurysm
Taking IT classes in college. The school bought us all Lynda and Office 365 accounts but we can't use them because the classroom's network has been severed from the Active Directory server that holds our credentials. Because "hackers." (The non-IT classrooms don't have this problem, but they also don't need Lynda accounts. What gives?)
So, I got bored, and irritated, so I decided to see just how secure the classroom really was.
It wasn't.
So I created a text file with the following rant and put it on the desktop of the "locked" admin account. Cheers. :)
1. don't make a show of "beefing up security" because that only makes people curious.
I'm referring of course to isolating the network. This wouldn't be a problem except:
2. don't restrict the good guys. only the bad guys.
I can't access resources for THIS CLASS that I use in THIS CLASS. That's a hassle.
It also gives me legitimate motivation to try to break your security.
3. don't secure it if you don't care. that is ALSO a hassle.
I know you don't care because you left secure boot off, no BIOS password, and nothing
stopping someone from using a different OS with fewer restrictions, or USB tethering,
or some sort of malware, probably, in addition to security practices that are
wildly inconsistent, which leads me to the final and largest grievance:
4. don't give admin privileges to an account without a password.
seriously. why would you do this? I don't understand.
you at least bothered to secure the accounts that don't even matter,
albeit with weak and publicly known passwords (that are the same on all machines),
but then you went and left the LEAST secure account with the MOST privileges?
I could understand if it were just a single-user machine. Auto login as admin.
Lots of people do that and have a reason for it. But... no. I just... why?
anyway, don't worry, all I did was install python so I could play with scripting
during class. if that bothers you, trust me, you have much bigger problems.
I mean you no malice. just trying to help.
For real. Don't kick me out of school for being helpful. That would be unproductive.
Plus, maybe I'd be a good candidate for your cybersec track. haven't decided yet.
-- a guy who isn't very good at this and didn't have to be
have a nice day <3
oh, and I fixed the clock. you're welcome.
TLDR: There’s truth in the motto “fake it till you make it”
Once upon a time in January 2018 I began work as a part time sysadmin intern for a small financial firm in the rural US. This company is family owned, and the family doesn’t understand or invest in the technology their business is built on. I’m hired on because of my minor background in Cisco networking and Mac repair/administration.
I was the only staff member with vendor certifications and any background in networking / systems administration / computer hardware. There is an overtaxed web developer doing sysadmin/desktop support work and hating it.
I quickly take that part of his job and become the “if it has electricity it’s his job to fix it” guy. I troubleshoot Exchange server and Active Directory problems, configure cloudhosted web servers and DNS records, change lightbulbs and reboot printers in the office.
After realizing that I’m not an intern but actually just a cheap sysadmin I began looking for work that pays appropriately and is full time. I also change my email signature to say “Company Name: Network Administrator”
A few weeks later the “HR” department (we have 30 employees, it’s more like “The accountant who checks hiring paperwork”) sends out an email saying that certain ‘key’ departments have no coverage at inappropriate times. I don’t connect the dots.
Two days later I receive a testy email from one of the owners telling me that she is unhappy with my lack of time spent in the office. That as the Network Administrator I have responsibilities, and I need to be available for her and others 8-5 when problems need troubleshooting. Her son is my “boss” who is rarely in the office and has almost no technical acumen. He neglected to inform her that I’m a part time employee.
I arrange a meeting in which I propose that I be hired on full time as the Network Administrator to alleviate their problems. They agree but wildly underpay me. I continue searching for work but now my resume says Network Administrator.
Two weeks ago I accepted a job offer for double my current salary at a local software development firm as a junior automation engineer. They said they hired me on with so little experience specifically because of my networking background, which their ops dept is weak in. I highlighted my 6 months experience as Network Administrator during my interviews.
My takeaway: Perception matters more than reality. If you start acting like something, people will treat you like that.
Back when I migrated my file server to a VM, I used robocopy to copy everything over (and caused quite a few fuckups before it'd behave halfway decent.. or so I thought). Now I tried to add my desktop's SSH key to the authorized_keys on said server but it wouldn't accept my key. After a bit of digging I found that my entire home directory (where the file server hosts its mirror of my D: drive) had everything set to 777, and that's why the key isn't accepted. Great permission mode, isn't it? Much secure, very wow! Thank you so much robocopy!!!
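For anyone who lands here with the same symptom: sshd's StrictModes (on by default) silently rejects keys when the home directory or ~/.ssh is group- or world-writable. The fix:
chmod 755 ~    # anything not group/world-writable works
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys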
I hired a coder to write a WordPress plugin on my dev server. He no longer works for me and is unreachable. The plugin does most of what it needs to do. But when I dig into the code and the database to find what should be obvious bits of code that do obvious things? None of that code is found. Not even with a recursive directory keyword search for things that should be easy to find, like CSS class names and IDs. Even the data that comes from the database and that I see on the screen is not actually present in the database!!! Yet it all works. I'm pretty sure at this point the code and data reside in a parallel dimension only the coder can get to. How do I debug code that doesn't actually exist?!
Hey there!
So during my internship I learned a lot about Linux, Docker and servers, and I recently switched from shared hosting to my own VPS. On this VPS I currently have one nginx server running that serves a static ReactJS application. This is temporary; I SFTP-ed the build files to the server and added a config file for SSL, ciphers and dhparams. I plan to change it later to a Next.js application with a CI/CD pipeline etc. I also added a 'runuser' that owns the /srv/web directory in which the webserver files are located. SSH has passwords disabled and my private keys have passphrases.
Now that it's been running for a few days, I noticed a lot of requests from botnets that tried to access phpMyAdmin and admin panels on my server, which gave me quite a scare. Luckily my website does not have a backend and I would never expose phpMyAdmin like that if I did have it.
Now my question is:
Do you guys know any good articles or have tips and tricks for securing my server and future projects? Are there any good practices that I should absolutely read and follow? (Like not exposing server details etc., php version, rate limiting). I really want to move forward with my quest for knowledge and feel like I should have a good basis when it comes to managing a server, especially with the current privacy laws in place.
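For context, the baseline I've pieced together so far is just this (Debian/Ubuntu assumed), so feel free to tear it apart:
sudo apt install ufw fail2ban    # fail2ban bans the SSH brute-forcers by default
sudo ufw default deny incoming
sudo ufw allow 22,80,443/tcp
sudo ufw enable
# plus 'server_tokens off;' in nginx's http{} block to hide the version banner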
Thanks in advance for enduring my rant and infodump 😅
Holy FREAKING shit!! This was worst stupidest mistake I have ever made!
About 9 hours ago, I decided to implement brotli compression on my server.
It looked a bit challenging for me, because all the guides involved compiling and building nginx with the brotli module, and I was not that confident doing that on a live site.
By the end of the guide, the site was not reachable anymore. I panicked.
Even the error logs and access logs were not picking up anything.
About a dozen guides, a new server, and a few major undocumented errors later, it turns out the main nginx.conf file had a line that was looking for *.conf files in the sites-enabled directory.
But my conf file was named after the domain name and ended with .com, and hence was not picked up by the new nginx.conf.
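So the fix was a one-character class of problem (filenames here are placeholders):
# nginx.conf was now doing this:
#   include /etc/nginx/sites-enabled/*.conf;
# while the old config globbed everything:
#   include /etc/nginx/sites-enabled/*;
mv /etc/nginx/sites-enabled/example.com /etc/nginx/sites-enabled/example.com.conf
nginx -t && systemctl reload nginx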
I'm not sure if I wasted my 9 hours because of that single line or not. But man, this was a really rough day!
I just gave robocopy another try, in order to get my WanBLowS D: drive and my file server synchronized again, in preparation to move that file server VM to an LXC container instead.. bad choice. I should've used rsync in WSL.
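(Which, for the record, is a single line; paths assumed:)
rsync -avz --delete /mnt/d/ user@fileserver:/srv/share/    # --delete makes it a true mirror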
Hey you Not so Robust File Copier for WanBLowS, how many attempts of you fucking up my file server's dotfiles does it take before I configure you right with every fucking option you have specified? How about you actually behave somewhat decently like rsync where -avz works 99% of the time, in local, remote, any scenarios that you can think of that aren't super obscure?! HOW DIFFICULT CAN IT BE, REDMOND CERTIFIED ENGANEERS?!!
Drown in a pond of bleach, Microshit certified MOTHERFUCKERS!!!!
Well, at least this time it didn't fuck up my .ssh directory so I can still authenticate to the VM.. so I guess that at least that's a win. Even that you can't take for granted anymore with this piece of garbage!!!
I previously worked as a Linux/Unix sysadmin. There was one app team owning like 4 servers accessible in a very specific way.
* logon to main jumpbox
* ssh to elevated-privileges jumpbox
* logon to regional jumpbox using custom-made ssh alternative [call it fkup]
* try to fkup to the app server to confirm that fkup daemon is dead
* logon to server's mgmt node [aix frame]
* ssh to server directly to find confirm sshd is dead too
* access server's console
* place root pswd request in passwords vault, chase 2 managers via phone for approvals [to login to the vault, find my request and approve it]
* use root pw to login to server's console, bounce sshd and fkupd
* logout from the console
* fkup into the server to get shell.
That's not the worst part... AIXes are stable enough to run for years w/o needing any maintenance, so all this complexity could be bearable.
However, the app team used to log a change request asking to copy a new pdf file onto that server every week, drop it into the app directory, and chown it to the app user. Why can't they do that themselves, you ask? Bcuz they 'only need this pdf to get there, that's all, and we're not wasting our time to raise access requests and chase for approvals just for a pdf...'
oh, and all these steps must be repeated each time a sysadmin tries to implement the change request, as all the movements and decisions must be logged and justified.
Each server access takes roughly half an hour. 4 servers -> 2hrs.
So yeah.. Surely getting your accesses sorted out once is so much more time consuming and less efficient than logging a change request for sysadmins every week and wasting 2 frickin hours of my time to just copy a simple pdf for you.. Not to mention that there's only a small team of sysadmins maintaining tens of thousands of servers, and every minute we have we spend working. Lunch time takes 10-15 minutes or so.. Almost no time for coffee or restroom. And these guys are saying sparing a few hours to get their own accesses is 'a waste of their time'...
That was the time I discovered Skrillex.
I think I just hit my lowest point.
Spent ALL of last week trying to get my WAMP server to call a PHP script via AJAX and I kept getting 404s. Spent at least 10 hours on Stack Overflow trying to figure out why the server wasn't finding it, only to discover today that I was both looking in the wrong directory and also had the file name wrong.
I think I just need to walk away from programming for a while... 😧
Not a hack, but more of an orchestrated attack. It was high school and our computer labs ran Windows, all connected to a central server. Now, I had just learnt about the Windows API and how it can be used to check the space available on a disk. So I wrote a small script to write chunks of 5MB files in the directory where TURBO C++ was installed, and let it run till the system ran out of space.
Then, in the spirit of conspiracy, I added said script to the central node and asked everyone in the lab to copy it locally and execute it.
Then a few days later, the poor lab in-charge corners me and asks who added the ms91.dll file (do not remember the exact name 😐). I said that it is a standard Microsoft dll, and besides, how would I know? Then he goes on about how he had to reinstall Windows on all the computers. At first I felt sorry, but then the spirit of Satan rose in me; I denied any responsibility and returned to class, where each of my classmates had a good laugh about it. 😂😂
@JoshBent and @nikola1402 requested a tutorial for installing i3wm in the Windows Subsystem for Linux. Here it is. I have to say though, I'm no expert in Windows nor Linux, and all I'm going to put here is the result of DuckDuckGo searches, Reddit and documentation. As you will see, it isn't very difficult.
First things first: Install WSL. It's easy and there's a ton of good tutorials on this. I think I used this one: https://msdn.microsoft.com/en-us/...
Once you got it installed, I guess it would be better to run "sudo apt-get update" to make sure we don't encounter many problems.
Install a Windows X server: X is what handles the graphical interface in Linux, and it works with the client/server paradigm. So what we'll do is provide the Linux client we want to use (in this case i3wm) with an X server on Windows. I guess any X server will do the work, but I highly recommend vcXsrv. You can download it here:
https://sourceforge.net/projects/...
for i3 just "sudo apt-get install i3"
Configurations to make stuff work:
open your ~/.bashrc file ("nano ~/.bashrc"; vim is cool too). You'll have to add the following lines to the end of it:
"""
export DISPLAY=:0.0 #This display variable points to the windows X server for our linux clients to use it.
export XDG_RUNTIME_DIR=$HOME/xdg #This is a temporary directory X will use
export RUNLEVEL=3
sudo mkdir /var/run/dbus #part of the dbus fix
sudo dbus-daemon --config-file=/usr/share/dbus-1/system.conf #part of the dbus fix
"""
Ok so after this we'll have a functional X client/server configuration. You'll just have to install your desktop environment of choice. I only installed i3wm, but I've seen Unity and Xfce working on the WSL too. There are still some files that X will miss though.
*** Here we'll add some files X would miss:
With "nano ~/.xinitrc" edit the xinitrc to your liking. I only added this:
"""
#!/usr/bin/env bash
exec i3
"""
Then run "sudo chmod +x ~/.xinitrc" to make it an excecutable.
Then, to make a linking file named xsession, run:
"ln -s ~/.xinitrc ~/.xsession"
Now you'll be able to run whatever you put in ~/.xinitrc with:
"dbus-launch --exit-with-session ~/.xsession"
There's a ton of personalisation to be done, but that would be a whole new tutorial. I'll just share a github repo with my dotfiles so you can see them here:
https://github.com/DanielVZ96/...
SHIT I ALMOST FORGOT:
Every time you open any graphical interface you'll need to have the X server running. With vcXsrv, you can use XLaunch. Choose the options with no other programs running on the X server. I recommend using "one window without title bar".
When I was a newbie I was given the task of uploading a site.
I had done that many times before, so I thought it wouldn't be a big deal. I figured I'd finally give uploading through FTP a try, since I never had.
Okay, I began work on it. The server was GoDaddy's, and the credentials I got were for delegate access.
Right, I tried connecting through FTP but it wasn't working. I thought there was some problem with the user settings, so why shouldn't I create my own user and stay away from the mess?
Now I created my own user and could easily log in, but there were no files in it. I saw that by creating a user, my folder was different and I didn't have access to the server files; I wanted to take a backup before doing the upload.
Now I was thinking of giving my user access to all files, so I changed the access directory to "/" and checked FTP again. There were still no files.
I don't know what happened to me; I thought, ahh, it was a waste of time creating an FTP user, it does nothing, and I deleted my FTP account.
Now I went through the web browser to download the data, and the earth skidded beneath my feet. Holy fuck, I had lost all the data; everything was deleted with that account. It scared the shit out of me.
There were two sites running which were now gone.
Tried everything to bring them back but couldn't do so. I contacted GoDaddy support; they said since auto backup wasn't enabled, I couldn't have them back for free, however they could provide the service for $150, which is 15k in my country.
I decided to tell my boss about what happened, and he got us out of it :p Gladly, I wasn't fired.
I had spent the last year working on an online store powered by WooCommerce with over 100k products from various suppliers. This online store utilized a custom API that would take the various formats that suppliers offer their inventory in and make them consistent. Now everything was going swimmingly initially, but then I began adding more and more products using a plug-in called WP All Import. I reached around 100k products and the site would take up to an entire minute to load, sometimes timing out entirely. I got desperate, so I installed several caching plugins, but to no avail; this did not help me. The site was originally only supposed to take three to four months but ended up taking an entire year. Then, just yesterday, I found out what went wrong and why this WooCommerce website with all of these optimizations was still taking anywhere from 60 to 90 seconds to load, or just timing out entirely. I had initially thought that I needed a beefier server so I moved it to a high-CPU DigitalOcean VM. While this did help a little bit, the site was still very slow and now I had very high CPU usage, RAM usage and high disk IO. I was seriously stumped: the Apache process was using a high amount of CPU and IO along with MySQL as well. It wasn't until I started digging deeper into the database that I actually found out what the issue was. As I was loading the site I would run 'show processlist' in the SQL terminal, and I began to notice a very significant load time for one of the tables, so I went to go check it out. What I did was run a select-all query on that particular table just to see how full it was, and SQL returned an error saying that I had exceeded the maximum packet size. So I was like, okay, what the fuck...
So I exited my SQL and re-entered it, this time with a higher packet size. I ran a query that would count how many rows were in this particular table, and the number came out to be in the millions. I was surprised, and what's worse is that this table belonged to a plugin that I had attempted to use early in the development process to cache the site. The plugin was deactivated, but apparently it had left PHP files within the wp-content directory outside of the actual plugin directory, so it was still executing scripts even though the plugin itself was disabled. Basically every time I would change anything on the site, it would recache the whole thing, and it didn't delete any old records. So 100k+ products caching on saves with no garbage collection... You do the math, it's gonna be a heavy ass database. Not only that, but it was serialized data, so when it did pull this metric shit ton of spaghetti from the database, PHP then had to deserialize it. Hence the high ass CPU load. I had caching enabled on the MySQL end of things, so that ate the RAM. I was really desperate to get this thing running.
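The triage that finally outed it, roughly (schema and table names are placeholders):
mysql -e "SELECT table_name, ROUND((data_length+index_length)/1048576) AS mb FROM information_schema.tables WHERE table_schema='wordpress' ORDER BY mb DESC LIMIT 10"
mysql -e "SELECT COUNT(*) FROM wordpress.wp_supercacher_snapshots"    # millions of rows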
Honest to God, the main reason why this website took so long was because the load times made it miserable to work on. I just thought that the hardware I had the site on was inadequate. I had initially started the development on a small Linux VM which apparently wasn't enough, which is why I moved it to DigitalOcean, which also seemed to not be enough, so from there I moved to a dedicated server which still didn't seem to be enough. I was probably a few more 60-second wait times or timeouts from recommending a server cluster to my client, who I know would not be willing to purchase it. The client who I promised this site to have completed in 3 months and who has waited a year. Seriously, I would tell people the struggles that I would go through with this particular site and they would just tell me to drop it; just take the money, just take the loss. I refused to; this was really the only thing that was kicking my ass. I present myself as this high-and-mighty developer, like I'm just really good at what I do, but then I have this WordPress site that's just been beating the shit out of me for a year. It was a very big learning experience and it was also very humbling; it made me realize that I really don't know as much as I think I might. It was evidence that there is still so much more to learn out there. I did learn a lot from that experience, especially about optimizing websites and the different methods to do that, particularly on the server side, and I'll be able to utilize this knowledge in the future.
I guess the moral of the story is, never really give up. Ultimately things might get so bad that you're running on hopes and dreams. Those experiences are generally the most humbling. Now I can finally present the site that I am basically a year late on to the client who will be so happy that I did not give up on the project entirely. I'll have experienced this feeling of pure euphoria, and help the small business significantly grow their revenue. Helping others is very fulfilling for me, even at my own expense.
Anyways, gonna stop ranting. Running out of characters. If you're still here... Ty for reading :')
EoS1: This is the continuation of my previous rant, "The Ballad of The Six Witchers and The Undocumented Java Tool". Catch the first part here: https://devrant.com/rants/5009817/...
The Undocumented Java Tool, created by Those Who Came Before to fight the great battles of the past, is a swift beast. It reaches systems unknown and impacts many processes, unbeknownst even to said processes' masters. All from within it's lair, a foggy Windows Server swamp of moldy data streams and boggy flows.
One of The Six Witchers, the Wild One, scouted ahead to map the input and output data streams of the Unmapped Data Swamp. Accompanied only by his animal familiars, NetCat and WireShark.
Two others, bold and adventurous, raised their decompiling blades against the Undocumented Java Tool beast itself, to uncover its data processing secrets.
Another of the witchers, of dark complexion and smooth speak, followed the data upstream to find where the fuck the limited excel sheets that feed The Beast come from, since their handlers only know that "every other day a new one appears on this shared active directory location". Why do people so often have NPC-levels of unawareness about their own fucking jobs?!?!
The other witchers left to tend to the Burn-Rate Bonfire, for The Sprint is dark and full of terrors, and some bigwigs always manage to shoehorn their whims/unrelated stories into an otherwise lean sprint.
At the dawn of the new year, the witchers reconvened. "The Beast breathes a currency conversion API" - said The Wild One - "And its claws and fangs strike mostly at two independent JIRA clusters, sometimes upserting issues. It uses a company-deprecated API to send emails. We're in deep shit."
"I've found The Source of Fucking Excel Sheets" - said the smooth witcher - "It is The Temple of Cash-Flow, where the priests weave the Tapestry of Transactions. Our Fucking Excel Sheets are but a snapshot of the latest updates on the balance of some billing accounts. I spoke with one of the priestesses, and she told me that The Oracle (DB) would be able to provide us with The Data directly, if we were to learn the way of the ODBC and the Query"
"We stroke at the beast" - said the bold and adventurous witchers, now deserving of the bragging rights to be called The Butchers of Jarfile - "It is actually fewer than twenty classes and modules. Most are API-drivers. And less than 40% of the code is ever even fucking used! We found fucking JIRA API tokens and URIs hard-coded. And it is all synchronous and monolithic - no wonder it takes almost 20 hours to run a single fucking excel sheet".
Together, the witchers figured out that each new billing account was morphed by The Beast into a new JIRA issue, if none was open yet for it. Transactions were used to update the outstanding balance on the issues regarding the billing accounts. The currency conversion API was called far too often, and its purpose was only to give a rough estimate of the total balance in each JIRA issue in USD, since each issue could have transactions in several currencies. The Beast would consume the Excel sheet, do some cryptic transformations on it, and for each resulting line access the currency API and upsert a JIRA issue. The secrets of those transformations were still hidden from the witchers. When and why The Beast would send emails was still a mystery.
As the Witchers Council approached an end and all were armed with knowledge and information, they decided on the next steps.
The Wild Witcher, known in every tavern in the land and by the sea, would create a connector to The Red Port of Redis, where every currency conversion is already updated by other processes and can be quickly retrieved inside the VPC. The Greenhorn Witcher is to follow him and build an offline process to update balances in JIRA issues.
The Butchers of Jarfile were to build The Juggler, an automation that should be able to receive a parquet file with an insertion plan and asynchronously update the JIRA API with scores of concurrent requests.
The Smooth Witcher, proud of his new lead, was to build The Oracle Watch, an order that would guard the Oracle (DB) at the Temple of Cash-Flow and report every qualifying transaction to parquet files in AWS S3. The Data would then be pushed to cross The Event Bridge into The Cluster of Sparks and Storms.
This Witcher Who Writes is to ride the Elephant of Hadoop into The Cluster of Sparks and Storms, to weave the signs of Map and Reduce and with speed and precision transform The Data into The Insertion Plan.
However, how exactly The Data is to be transformed is not yet known.
Will the Witchers be able to build The Data's New Path? Will they figure out the mysterious transformation? Will they discover the Undocumented Java Tool's secrets on notifying customers and aggregating data?
This story is still afoot. Only the future will tell, and I will keep you posted.6 -
Running a fucking conda environment on Windows (an updated environment from the previous one that I normally use) gets to be a fucking pain in the fucking ass for no fucking reason.
First: Generate a new conda environment, for FUCKING SHITS AND GIGGLES, DO NOT SPECIFY THE PYTHON VERSION, just to see compatibility, this was an experiment, expected to fail.
Install tensorflow on said environment: it does not fucking work, not detecting cuda. The only requirement? To have the cuda dependencies installed, modified, and inside of the system path. Check, done - it works on 4 other fucking environments, so why not this one.
Still doesn't work, google around and found some thread on github (the errors) that has a way to fix it, do it that way, fucking magic, shit is fixed.
Very well, tensorflow is installed and detecting cuda, no biggie. HAD TO SWITCH TO PYTHON 3.8 BECAUSE 3.9 WAS GIVING ISSUES FOR SOME UNKNOWN FUCKING REASON
Ok no problem, done.
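For future reference, the combination that finally behaved is tiny - a sketch; python pinned to 3.8 as above, and which tensorflow build you want depends on your CUDA version:
conda create -n tf-gpu python=3.8 -y
conda activate tf-gpu
pip install tensorflow
# sanity check: does it actually see the GPU this time?
python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"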
Install jupyter lab, which worked on the first try in all other 4 environments. Guess what: a fuckload of errors upon importing tensorflow. They go on a loop that does not fucking end.
The error: imPoRT eRrOr thE Dll waS noT loAdeD
Ok, fucking which one? who fucking knows.
I FUCKING HATE that the main language for this fucking bullshit is python. I get the benefits of the repl, I do, but the python repl is fucking HORSESHIT compared to the ones you get in Lisp, Ruby and fucking even NODE, in which error messages are still more fucking intelligent than those of fucking bullshit ass Python.
Personally? I am betting on Julia devising a smarter environment, it is a better language already, on a second note: If you are worried about A.I taking your job, don't, it requires a team of fucktards working around common basic system administration tasks to get this bullshit running in the first place.
My dream? Julia or Scala (fuck you) as a primary language for machine learning and AI, in which entire environments, with aaaaaaaaaall of the required dlls and dependencies, can be downloaded, installed and just fucking run. A single directory structure in which shit just fucking works (reason why I like live environments like Smalltalk, but fuck you on that too) and from which you just run your projects, without setting a bunch of bullshit environment variables, cuda dll installation phases and what not. Something that JUST FUCKING WORKS.
I.....fucking.....HATE the level of system administration required to run fucking anything nowadays, the reason why we had to create shit like devops jobs, for the sad fuckers that have to figure out environment configurations on a box just to run software.
Fuck me man, development turned to shit, this is why go mod, node npm, php composer and strict folder structure pipelines were created. Bitch all you want about npm, but if I could create a node_modules-style setup with all of the required dlls to run a project, even if this bitch weighs 2.5GB for a project structure, you bet your fucking ass that I would.
"YOU JUST DON'T KNOW WHAT YOU ARE DOING" YES I FUCKING DO and I will get this bullshit fixed, I will get it running just like I did the other 4 environments that I fucking use, for different versions of cuda and python and the dependency circle jerk BULLSHIT that I have to manage. But this "follow the guide and it will work, except when it does not and you are looking into obscure github errors" bullshit just takes away from valuable project time when you have a small dedicated group of developers and no sys admin or devops mastermind to resort to.
I have successfully deployed:
Java
Golang
Clojure
Python
Node
PHP
VB/C# .NET
C++
Rails
Django
Projects, and every single fucking time (save for .net, that shit just fucking works on a dedicated windows IIS server) the shit will not work, for any one of n reasons. It fucking obliterates me how fucking annoying this bullshit is. And it is the reason why the ENTIRE FUCKING FIELD of computer science and software engineering is so fucking flawed.
But we can't all just run to simple windows bs in which we have documentation for everything. We have to spend countless hours on fucking Linux figuring shit out (fuck you also, I have been using Linux since I was 18, I am 30 now), where graphical drivers for machine learning, cuda and whatTheFuckNot require all sorts of sys admin gymnastics to be used.
Y'all fucked up a long time ago. Smalltalk provided an all-in-one interface for this fileFuckery bullshit, easily rolled back to previous images and easily administered, and even though the JVM and the .NET environments did their best to hold shit down, and even though we had npm packages pulling the universe inside, or gomod compiling shit into one place, NOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO we had to do whatever the fuck we wanted to feel l337.
Fuck all of you, fuck this field, fuck setting boxes for ML/AI and fuck every single OS in existence2 -
So, idiot me decided it would be a good idea to never get around to configuring my UPS to gracefully shut down my server after a powercut lasting more than x duration...
Long story short, we had a powercut that lasted 4 minutes or so longer than the battery in the UPS could keep the server up for...
UPS died, server went pew, and after rebooting itself once the power came back on, my raid array wouldn’t mount anymore...
After Googling around, it seemed like running e2fsck would solve the problem.
Didn’t seem to do the trick... and tired me at 3am decided it would be a good idea to poke around.
Pretty sure I ran a command wrong, or two, because now I can’t even mount the fricken array in read only, and fsck complains with a shit ton errors...
Been researching for hours, and no dice...
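In hindsight, the first move should have been to clone the thing read-only and poke at the copy, never the original - a sketch; device and destination paths are assumptions:
apt install gddrescue
ddrescue -d -r3 /dev/md0 /mnt/spare/array.img /mnt/spare/array.map
e2fsck -fn /mnt/spare/array.img # -n = look, don't touch
Lesson learned at 3am, as usual.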
TestDisk shows the ext4 partition, but fails to list any files...
I may have destroyed the tables or something... I’m a noob at this point.
I’m able to access files with the R-Studio recovery tool, however this doesn’t help with file names and directory structure 😭
Is it all over for my 5 years worth of photos and other bits and pieces that I don’t have any backups of ? 😂😭😭
If any of y’all are pros with data recovery and can help a fellow boi out, I’d be more than happy to pay for ya time !2 -
My developer friend and I worked with my ex-colleague on this fitness directory website because he promised to give us {{ thisAmount }} upon the {{ completionDate }}.
He was my friend and I trusted him.
It took me weeks of sleepless nights to build the project. I had a full-time job at the time, and I worked on the project during evenings. All went well, and as we reached the {{ completionDate }}, the demo site was already up and running.
A week before the {{ completionDate }}, he hired his new wife as the COO of the startup. It was cool at first; she kept noticing things on the site which shouldn't be there, and kept suggesting sections that had to be there. I was okay with it, until I realized that we were already a month past the deadline.
Every single hour, I got a message from them like "it's not working", "when can you finish this feature?", blah blah blah.. and so on.
I got frustrated.
"I want my fucking life back", I told them. No one cared about the {{ completionDate }}, the sleepless zombies they are working with and our payment. They keep on coming up with this "amazing" ass features, and now they are not paying because they said "it's not complete".
Idiot enough to trust a friend, I was unprotected; there was no legally binding document stating their obligation to pay.
My dev friend and I handed over the project to this web development company which they preferred, and kept a backdoor on the application.
I kind of moved on from the payment issue after a month. But without their knowledge, I kept an eye on the progress and made sure that I still had access to their server, DNS, etc..
BUT when they announced the official launch on social media, I realized that I was on the wrong train the whole time.
They switched to a different server.
They thanked all the people involved with the project via social media, EXCEPT me and my coding partner who originally built the site from the ground up. A little "thank you" note from them would have made us feel a little better. But that never happened.
I checked up on the site and it had been rewritten from the original Laravel 5 to CodeIgniter 1. That is like shifting from a luxury yacht where you can bang some hot chicks, to a row boat where your left hand is holding the paddle whilst your right hand is wanking yourself.
I almost ran out of bullets.
Luckily, CodeIgniter 1 was prone to SQLi by default.
I was able to get the administrator password in plain text and fucked with their data. But that didn't make me feel better, because other people's info was involved.
So, I looked for something else to screw with. What I found? A message with the credit card details.
Finally, a chance to do something good for humanity. I just donated a few thousand dollars to different charity websites.3 -
I work as a front end developer at a company. The site uses WordPress and I needed a paid plugin, but I wanted to test the full version first without paying, so I googled it, downloaded it and installed it right away.
NOTE: I was working on the test server, where all other projects are placed in a subdirectory of public_html (public_html/websites/<other websites>), but instead of placing the website folder where the others are, I placed it in the parent directory (public_html), where some other folders and files live. Everything went fine, but a few days later I wanted to modify something in the functions.php of that theme and noticed some strange code, base64 encoded, so I decoded it, and it turned out to be a backdoor that puts code in other files of the theme, so it can add an Admin to the DB anytime and remotely connect to the website. Because, as I said, the website was in the public_html directory, and the virus searches the other folders and files in the same directory and its children, it affected the rest of the websites (50+).
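Since then I scan everything third-party before activating it - a crude heuristic grep, expect false positives:
grep -rEn "eval\(|base64_decode|gzinflate|str_rot13|assert\(" wp-content/themes wp-content/plugins
echo 'SUSPICIOUS_BLOB_HERE' | base64 -d # decode a finding by hand to see what it actually does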
I reported that to my boss, but he said it's fine, to pay more attention next time, and to install the website in the same directory as the others. It couldn't be fixed automatically, so I had to manually remove, in every website, every file the virus created and every line it added.5 -
Gosh only Idiots out there...
Told my coworker to install the Tomcat manager on server 1. Easiest way for him: just copy it from server 2. He was already in the console of the first... then I see that he opened WinSCP, navigated via the GUI to the directory, misclicked a few times. Tried to drag and drop the folder to the desktop. Got notified that he didn't install the plugin. Dragged it to another folder on his PC in WinSCP. Started a new session of WinSCP for the other server. And so on. Right after he started the first WinSCP I said the command line would be 1000x faster.
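The whole job is one line - hostnames and paths here are made up, but you get the idea:
scp -r admin@server2:/opt/tomcat/webapps/manager /opt/tomcat/webapps/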
Meanwhile I wrote the command for this torture on a sticky note and left the room. That wastes too much time for 2 people. Good old days, when most people knew how to use a console.3 -
Last Monday I bought an iPhone as a little music player, and just to see how iOS works or doesn't work.. which arguments against Apple are valid, which aren't etc. And at a price point of €60 for a secondhand SE I figured, why not. And needless to say I've jailbroken it shortly after.
Initially setting up the iPhone when coming from fairly unrestricted Android ended up being quite a chore. I just wanted to use this thing as a music player, so how would you do it..?
Well you first have to set up the phone, iCloud account and whatnot, yada yada... Asks for an email address and flat out rejects your email address if it's got "apple" in it, catch-all email servers be damned I guess. So I chose ishit at my domain instead, much better. Address information for billing.. just bullshit that, give it some nulls. Phone number.. well I guess I could just give it a secondary SIM card's number.
So now the phone has been set up, more or less. To get music on it was quite a maze solving experience in its own right. There's some stuff about it on the Debian and Arch Wikis but it's fairly outdated. From the iPhone itself you can install VLC and use its app directory, which I'll get back to later. Then from e.g. Safari, download any music file.. which it downloads to iCloud.. Think Different I guess. Go to your iCloud and pull it into the iPhone for real this time. Now you can share the file to your VLC app, at which point it initializes a database for that particular app.
The databases / app storage can be considered equivalent to the /data directories for applications in Android, minus /sdcard. There is little to no shared storage between apps, most stuff works through sharing from one app to another.
Now you can connect the iPhone to your computer and see a mount point for your pictures, and one for your documents. In that documents mount point, there are directories for each app, which you can just drag files into. For some reason the AFC protocol just hangs up when you try to delete files from your computer however... Think Different?
Anyway, the music has been put on it. Such features, what a nugget! It's less bad than I thought, but still pretty fucked up.
At that point I was fairly dejected, and that didn't get better with an update from iOS 14.1 to iOS 14.3. Turns out that Apple in its nannying galore now turns down the volume to 50% every half an hour or so, "for hearing safety" and "EU regulations" that don't exist. Saying that I was fuming and wanting to smack this piece of shit into the wall would be an understatement. And even among the iSheep, I found very few people that thought this was fine. Though despite all that, there were still some. I have no idea what it would take to make those people finally reconsider.. maybe Tim Cook himself shoving an iPhone up their ass, or maybe they'd be honored that Tim Cook noticed them even then... But I digress.
And then, then it really started to take off because I finally ended up jailbreaking the thing. Many people think that it's only third-party apps, but that is far from true. It is equivalent to rooting, and you do get access to a Unix root account by doing it. The way you do it is usually a bootkit, which in a desktop's ring model would be a negative ring. The access level is extremely high.
So you can root it, great. What use is that in a locked down system where there's nothing available..? Aha, that's where the next thing comes in, 2 actually. Cydia has an OpenSSH server in it, and it just binds to port 22 and supports all of OpenSSH's known goodness. All of it, I'm using ed25519 keys and a CA to log into my phone! Fuck yea boi, what a nugget! This is better than Android even! And it doesn't end there.. there's a second thing it has up its sleeve. This thing has an apt package manager in it, which is easily equivalent to what Termux offers, at the system level! You can install not just common CLI applications, but even graphical apps from Cydia over the network!
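For the curious, the CA part is plain OpenSSH, nothing iOS-specific - a sketch; the file names and the principal are my own conventions:
ssh-keygen -s ~/.ssh/ca_key -I my-laptop -n mobile -V +52w ~/.ssh/id_ed25519.pub # sign your user key with the CA
# on the phone, one line in /etc/ssh/sshd_config:
# TrustedUserCAKeys /etc/ssh/ca_key.pub
ssh mobile@iphone.local # any key signed by the CA now gets in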
Without a jailbreak, I would say that iOS is pretty fucking terrible and if you care about modding, you shouldn't use it. But jailbroken, fufu.. this thing trades many blows with Android in the modding scene. I've said it before, but what a nugget!8 -
...5 minutes ago, via ssh on the production server...
"ok, let's delete this old test directory ..."
*types rm -r www*
....*thinking* ...*realising* ... "FUUUUUCK!!11"
*quickly types git clone gitadress"
*checks website* "phew!"1 -
Hey, looks like some employee of this hosting company failed to 750 his home directory and 640 the files...
I was SSHing around on our hosting account when I slipped into his home directory, where at least two(!) SSH public keys of his admin account for the server were readable!
Being an honest guy, I had to call them...
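The fix itself is three lines, presumably what they ran after hanging up (username made up):
chmod 750 /home/thatguy # home dir: no read for 'other'
chmod 700 /home/thatguy/.ssh
chmod 600 /home/thatguy/.ssh/* # keys and authorized_keys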
It's fixed now.2 -
So our main web server got ransomware'd.
By some miracle only a shared directory was compromised and not the whole server.
The server is on an end-of-life OS (Win Server 2008r2), no antivirus solution, no WAF, no log hardening or aggregation, so basically our Security MSP told us "lol good luck finding the attack origin, nuke it and rebuild it correctly this time"
Thing is, IT leadership is like "Eh, no harm done, everything is fine" and wants to sweep it under the rug and not report it to senior management.
How do I go about convincing them that this is actually important and that, for once in their life, they should give a fuck? (This web server is the main moneymaker; if it goes tits up, heads are gonna roll).9 -
You know what, let me jump in on the "I hate PHP" bandwagon.
A couple months ago I upgraded my mail servers unattended. Roundcube got fucked for a couple of months, and I figured.. fuck it, I can still use Dovecot for authenticating with desktop mail clients like K-9.
Recently I unfucked it; turns out it was an issue with the sock file in php-fpm. That's also when I noticed that PHP apparently hardcodes its current version into the bloody socket file name. Because why the fuck wouldn't you? It makes upgrades so much fucking easier!!! Said no fucking sysadmin ever!!!
And today I upgraded one of my mail servers to Ubuntu Server 18.04, finally, after a lot of hesitation. Bad decision, because now PHP got fucked YET AGAIN.
Again an issue with socket files? I have no fucking idea. systemctl shows no failed services (because you know PHP, why would you fail your service with an error message instead of throwing a meaningless 502 Bad Gateway, right?!!) and looking at the config files, well the socket file got its new php-fpm 7.2 file (still got the fucking version number hardcoded in) and thus I changed that socket file location in /etc/php/7.0...
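For the next poor soul googling this, the unfucking boiled down to pointing the proxy at the new socket - a sketch, assuming nginx in front of FPM and Debian-style paths:
grep -rn "php7.0-fpm.sock" /etc/nginx/
sed -i 's|php7.0-fpm.sock|php7.2-fpm.sock|g' /etc/nginx/sites-available/*
systemctl restart php7.2-fpm nginx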
devRant may just have been my rubber duck.
WHY THE FUCK DO YOU STINKING FUCKING PILE OF SHIT CALLED FUCKING PHP KEEP THE FUCKING 7.0 DIRECTORY OUT THERE WHEN YOU'VE UPGRADED, WITHOUT EVEN HAVING THE FUCKING BALLS TO RENAME THE MOTHERFUCKING DIRECTORY TO 7.2, IF YOU'RE GOING TO HARDCODE IN YOUR VERSION NUMBERS ANYWAY?!!!!!
Bloody fucking pile of fucking junk!!!!18 -
Sometimes I have to work with physical hardware. There are over 300 machines in our lab, split among two subnets. But for some reason, I can never access my machines by hostnames.
Every other week, there's an IP conflict on this network, requiring me to log into the Active Directory server and delete old DNS entries. This usually happens because someone decided to deploy 64 VMs on a huge server, all at once, without booting them with a delay, let alone with a warning to IT.
Then when my superior asks how my progress has been and I respond with "I can't even get the machines to ping each other by hostname, there's something wrong with the DNS", I get the following response: "HOW COME NOBODY ELSE IS HAVING PROBLEMS WITH THIS. YOU'RE FULL OF SHIT" - from someone who spends 90% of the year abroad, working remotely.5 -
Right, that's fucking it. Enough. I'm all for learning new technologies, frameworks, and development protocols, but my time on this earth is limited and at the end of the day if I'm having to spend DAYS AND FUCKING DAYS just scouring through obscure forum posts because the documentation is shit and just hitting ONE FUCKING PROBLEM AFTER ANOTHER then there comes a point at which the time investment simply isn't worth it. I HATE throwing in the towel because some FUCKING CUNT code problem has got the better of me, but fucking sense must prevail here.
Laravel fucking Mix. Do any of you use this shit on Windows? Because I take my fucking hat off to you. I'm done with it.
Oh, so your server uses 'public_html' instead of 'public' does it? Well, of course you can just set
mix.setPublicPath('public_html'); then can't you?
No, you can't. Why? Because fuck you, that's why. Not only do you have to hard-code your fucking public directory into each specified path, additionally you have to set
mix.setPublicPath('./');
Why? Because fuck you, that's why. It took me the best part of two days to discover that little nugget of information, buried at the bottom of some obscure corner of the internet in a random github issue thread. Fuck off.
Onto the next problem. Another 5 hours invested to extract some patchy solution that I'm not at all happy with.
Rinse, repeat.
Make it work with BrowserSync by wrapping your assets like so:
<link rel="stylesheet" href="{{ mix('/build/css/main.css') }}">
Oh oh oh but "The Mix manifest does not exist"... despite a fresh install of Laravel 5.6 and all relevant node modules installed... follow some other random Github thread with a back and forth of time-consuming suggestions for avenues of experimentation, with no clear solution.
Er no, fuck off. I'm going back to Grunt, and maybe I'll try Webpack/Mix in another year or two when there are actually some clear answers, but as it stands this is a wild goose chase into a fucking black hole and I've got better things to do with my precious time. Go die.5 -
Ah, my brain, MY FUCKING BRAIN!
Got some work from the previous company. Need to update some stuff on their website.
Fine, got the files from the server via sFTP.
Made the changes; before uploading the files, I wanted to create a fresh backup.
Downloaded the files again, just to realize that I had forgotten to cd into a different directory before re-downloading. All the changes were now overwritten.
Half an hour of work lost. DAMN IT!3 -
A (work-)project I spent a year on will finally be released soon. That's the perfect opportunity to vent all the rage I built up while dealing with what is the javascript version of a zodiac letter.
Everything went wrong from the beginning. 3 people were assigned to rewrite an old flash application: me, A and B. B suggested a javascript framework, even though A and I had never worked with more than jquery. In the end we chose react/redux with REST on the server, a classic.
After some time I got the hang of it; around that time B left and a new guy, C, was hired soon after. He didn't know about react/redux either. The perfect start to a burning pile of smelly code.
Today this burning pile turned into a wasteland of code quality, a house of cards with a storm approaching, a rocket with leaks ready to launch, you get the idea.
We got 2 dozen files with 200-500 loc each, all in the same directory and each with the same 2-word prefix, which makes finding the right one a nightmare on its own. We have an i18n-library used only for ~10 textfields, copy-pasted code you never know is used or not, fetch-calls with no error-handling, and many other code smells that turn this fire into a garbage fire. An eternal fire. 3 months ago I reduced the linter-warnings on this project to 1; now I can't keep count anymore.
We use the reactabular module, which gives us headaches because IT DOESN'T DO WHAT IT'S SUPPOSED TO DO AND WE CAN'T USE IT WELL EITHER. All because the client can't be bothered to have the table header scroll along with the body. We have methods which do two things because passing another callback somehow crashed in the browser. And the only good thing about our indentation is that it exists. Copy-pasting from websites and other files, plus indentation wars, gives the files a unique look that makes you wonder if one of the devs hides whitespace code in them.
All of this is the result of missing time, results over quality, and the worst approach of all, used by A: if A wants a UI component similar to an existing one, he copies the original and edits the copy until it does what he wants. A knows about classes, modules, components, etc. Still, he can't bring himself to spend his time creating superclasses... his approach gives results much faster.
Things got worse when A tried redux; luckily A prefers the component's local state. WHICH IS ANOTHER PROBLEM. He doesn't understand redux, loads all of the data directly from the server and puts it into the local state. The point of redux is that you don't have to do this. But there are only 1 or 2 examples of how this practice has hurt us yet, so I'm gonna have to let this slide. IF HE AT LEAST WOULD UPDATE THE DATA PROPERLY. Changes are just sent to the server and then all of the data is re-fetched. I programmed the rest-endpoints to return the updated objects for a very good reason. But no, fuck me.
I've heard A decided (A is the team lead) to use less redux on the next project and use dedicated rest-endpoints for every little computation you COULD DO WITH REDUX INSTEAD. My will is broken and I just don't want to work with this anymore.
There are still various subpages that can't survive an F5 because the components can't handle an empty redux state at the beginning, but to be honest I don't care anymore. Let's hope the client never finds out, along with the "on error nothing happens" bugs. The product should've been shipped last week, but thanks to mandatory bugfixes the release was postponed to next week. Then the next project starts...
Please give me some tips to keep up code quality over time, I can't take this once more.
I'm also aware that I could've done more: talking to A and C about code style, prettifying the code, etc. But I was busy putting out my own fires, and I couldn't kill many of the other fires, which in the end became a burning building (a perfect metaphor for this software)4
In today's episode of kidding on SystemD, we have a surprise guest star appearance - Apache Foundation HTTPD server, or as we in the Debian ecosystem call it, the Apache webserver!
So, imagine a situation like this - Its friday afternoon, you have just migrated a bunch of web domains under a new, up to date, system. Everything works just fine, until... You try to generate SSL certificates from Lets Encrypt.
Such a mundane task, done more than a thousand times already... Yet... No matter what you do, nothing works. Apache just returns a HTTP status code 403 - Forbidden.
Of course, what many folk would think of first when it came to a 403 error is - Ooooh, a permission issue somewhere in the directory structure!
So you check it... And re-check it to make sure... And even switch over to the user the webserver runs under, yet... You can access the challenge just fine, what the hell!
So you go deeper... And enable the most verbose level of logging apache is capable of - Trace8. That tells you... Not a whole lot more... Apparently, the webserver was unable to find the file specified? But... It's right there, you can see it!
So you go another step deeper and start tracing the process' system calls to see exactly where it calls stat/lstat on the file, and you see that it... Calls lstat and... It... Returns -1? What the hell#2!
So, you compile a custom binary that calls lstat on the first argument given and prints out everything it returns... And... It works fine!
Until now, I chose to omit one important detail that might have given away the issue to the more knowledgeable right away. Our webservers have the URL /.well-known/acme-challenge/, used for ACME challenges, aliased somewhere else on the filesystem - To /tmp/challenges.
See the issue already?
Some *bleep* over at the Debian Package Maintainer group decided that Apache could save very sensitive data into /tmp, so, it would be for the best if they changed something that worked for decades, and enabled a SystemD service unit option "PrivateTmp" for the webserver, by default.
What it does is that, anytime a process started with this option enabled writes to /tmp/*, the call gets hijacked or something, and actually makes the write to a private /tmp/something/tmp/ directory, where something... Appeared as a completely random name, with the "apache2.service" glued at the end.
That was also the only reason why I managed to fix this issue - On the umpteenth time of checking the directory structure, I noticed a "systemd-private-foobarbas-apache2.service-cookie42" directory there... That contained nothing but a "tmp" directory with 777 as its permission, owned by the process' user and group.
Overriding that unit file option finally fixed the issue completely.
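For anyone else bitten: the override is mercifully short. systemctl edit drops a snippet into /etc/systemd/system/apache2.service.d/ for you:
systemctl edit apache2
# in the editor:
# [Service]
# PrivateTmp=false
systemctl restart apache2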
I have just one question - Why? Why change something that worked for decades? I understand that, in case you save something into /tmp, it may be read by 3rd parties or programs, but I am of the opinion that if you did that, it's your fault and yours alone for writing sensitive data into the temporary directory.
And as far as I am aware, by default, Apache does not actually write anything even remotely sensitive into /tmp, so...
Why. WHY!
I wasted 4 hours of my life debugging this! Only to find out it's just another SystemD-enabled "feature" now!
And as much as I love kidding on SystemD, this time, I see it more as a fault of the package maintainers, because... I found no default apache2/httpd service file in the apache repo mirror... So...8 -
Our rookie sysadmin fracked up our web server today. He wanted to make a single directory and all its content accessible (which, for directories, means 755 - they need the execute bit to be traversable), but instead he used this command...
sudi chmod 644 /.6 -
So recently I installed Windows 7 on my thiccpad to get Hyperdimension Neptunia to run (yes 50GB wasted just to run a game)... And boy did I love the experience.
ThinkPads are business hardware, remember that. And it's been booting Debian rock solid since.. pretty much forever. There are no hardware issues here. Just saying.
With that out of the way I flashed Windows 7 Ultimate on a USB stick and attempted to boot it... Oh yay, first hurdle to overcome. It can't boot in UEFI mode. Move on Debian, you too shall boot in BIOS mode now! But okay, whatever right. So I set it to BIOS mode and shuffled Debian's partitions around a bit to be left with 3 partitions where Windows could stick in one more.
Installed, it asks for activation. Now my ThinkPad comes with a Windows 7 Pro license key, so fuck it let's just use that and Windows will be able to disable the features that are only available for Ultimate users, right? How convenient would that be, to have one ISO for all the half a dozen editions that each Windows release has? And have the system just disable (or since we're in the installer anyway, not install them in the first place) features depending on what key you used? Haha no, this is Microsoft! Developers developers developers DEVELOPERS!!! Oh and Zune, if anyone remembers that clusterfuck. Crackhead Microsoft.
But okay whatever, no activation then and I'll just fetch Windows Loader from my webserver afterwards to keygen my way through. Too bad you didn't accept that key Microsoft! Wouldn't that have been nice.
So finally booted into the installed system now, and behold finally we find something nice! Apparently Windows 7 Enterprise and Ultimate offer a native NFS driver. That's awesome! That way I don't have to adjust my file server at all. Just some fuckery with registry keys to get the UID and GID correct, but I'll forgive it for that. It's not exactly "native" to Windows after all. The fact that it even has a built-in driver for it is something I found pretty neat already.
Fast-forward a few hours and it's time to Re Boot.. drivers from Lenovo that required reboots and whatnot. Fire the system back up, and lo and behold, the network drive doesn't mount anymore. I've read that this is apparently due to Windows (not always but often) mounting the network drive before the network comes up. Absolutely brilliant! Move out shitstaind, have you seen this beauty of an init, Mr. Poet?
But fuck it we can mount that manually after every single boot.. you know, convenient like that. C O P E.
With it now manually mounted, let's watch a movie! I've recently seen Pyro's review on The Platform and I absolutely loved it. The movie itself is quite good too. Open the directory on my file server and.. oh. Windows.. you just put db.thumb on it and db.thumb:encryptable. I shit you not, with the colon and everything. I thought that file names couldn't contain colons Windows! I thought that was illegal in NTFS. Why you doing this in NFS mate? And "encryptable", am I already infected with ransomware??? If it wasn't for the fact that that could also be disabled with something as easy as a registry key, I would've thought I contracted ransomware!
Oh and sound to go with that video, let's pair up some Bluetooth headphones with that Bluetooth driver I installed earlier! Except.. haha nope. Apparently you don't get that either.
Right so let's just navigate the system in its Aero glory... Gonna need to flick the mouse for that. Except it's excruciatingly slow, even the fastest speed is slower than what I'm used to on Linux.. and it's jerky as hell (Linux doesn't have any of that at higher speed). But hey it can compensate for that! Except that slows down the mouse even more. And occasionally the mouse driver gets fucked up too. Wanna scroll on Telegram messages in a chat where you're admin? Well fuck you mate, let me select all these messages for you and auto scroll at supersonic speeds! And God forbid that you press delete with that admin access of yours. Oh maybe I'll do it for you, helpful OS I am!
And the most saddening part of it all? I'd argue that Windows 7 is the best operating system that Microsoft ever released. Yeah. That's the best they could come up with. But at least it plays le games!10 -
Just started using the Dropbox API. Want to do a simple directory listing of my files. Send an HTTP GET request to https://api.dropboxapi.com/2/files/....
"Error in call to API function "files/list_folder": Your request's HTTP request method is "GET". This function only accepts the HTTP request method "POST"."
What. The. Fuck. Dropbox.
HTTP POST is for creating a new instance of a resource. HTTP GET is for reading. GET guarantees server state is not changed while POST does not. I want to fucking list a directory, not put stuff in it.1 -
I need to set up a Windows Server with an AD (and therefore its own domain) that can be reached from a Linux host for a test environment... Holy crap, I totally forgot what a huge pain in the ass that crap is!
Pro Tip: If you're connected to a server via VPN and RDP and you create a domain and subsequently get logged out from the server, you're fucked.2 -
A bit more than a year ago I started volunteering at the local general students committee. They had desperately searched for someone to play the role of both political head of division and system administrator for around half a year before I took the job.
When I started, the data center was mostly abandoned, with most of the computational power and resources just lying around unused. They already ran some kvm-hosts with around 6 virtual machines, including a cloud service, internally used shared storage, a user directory and also 10 workstations and a WiFi network. Everything except one virtual machine ran on GNU/Linux systems and was built on open source technology. The administration was done through shared passwords, bash scripts and instructions in an extensive MediaWiki instance.
My introduction into this whole eco-system was basically this:
"Ever did something with linux before? Here you have the logins - have fun. Oh, and please don't break stuff. Thank you!"
Since I had only managed a small personal server before and had learned about networking, it-sec and administration only from university courses, I quickly shaped a small team eager to build great things, which would bring in the knowledge necessary to create something awesome. We had a lot of fun diving into modern technologies, discussing the future of this infrastructure, and simply trying things out and failing hard while implementing those ideas.
Today, a year and a half later, we look at around 40 virtual machines spiced with a lot of magic. We host several internal and external services like cloud, chat, ticket-system, websites, blog, notepad, DNS, DHCP, VPN, firewall, confluence, freifunk (free network mesh), ubuntu mirror etc. Everything is managed through a central puppet-configuration infrastructure. Changes in configuration are deployed in minutes across all servers. We utilize docker for application deployment and gitlab for code management. We provide incremental, distributed backups, a central database and a distributed network across the campus. We created a desktop workstation environment based on Ubuntu Server for deployment on bare-metal machines through the foreman project. Almost everything free and open source.
The whole system now is easily configurable, allows updating, maintenance and deployment of old and new services. We reached our main goal for this year which was the creation of a documented environment which is maintainable by one administrator.
Although we did this in our free-time without any payment it was a great year with a lot of experience which pays off now. -
techie 1 : hey, can you give me access to X?
techie 2 : the credentials should be in the password manager repository
t1 : oh, but I don't have access to the password manager
t2 : I see your key A1B2C3D4 listed in the recipients of the file
t1 : but I lost that key :(
t2 : okay, give me your new key then.
t1 : I have my personal key uploaded to my server
t1 : can you try fetching it?
t1 : it should work with web key directory ( WKD )
t2 : okay
t2 : no record according to https://keyserver.ubuntu.com
t1 : the keyserver is personal-domain.com
t1 : try this `gpg --no-default-keyring --keyring /tmp/gpg-$$ --auto-key-locate clear,wkd --locate-keys username@personal-domain.com`
t2 : that didn't work. apparently some problem with my dirmgr `Looking for drmgr ...` and it quit
t1 : do you have `dirmngr` installed?
t2 : I have it installed `dirmngr is already the newest version (2.2.27-2)`
t2 : `gpg: waiting for the dirmngr to come up ... (5)` . this is the problem. I guess
t1 : maybe your gpg agent is stuck between states.
t1 : I don't recall the command to restart the GPG agent, but restarting the agent should probably fix it.
t1 : `gpg-connect-agent reloadagent /bye`
source : https://superuser.com/a/1183544
t1 : *uploads ASCII-armored key file*
t1 : but please don't use this permanently; this is a temporary key
t2 : ok
t2 : *uploads signed password file*
t1 : thanks
t2 : cool
*5 minutes later*
t1 : hey, I have forgotten the password to the key I sent you :(
t2 : okay
...
t2 : fall back to SSH public key encryption?
t1 : is that even possible?
t2 : Stack Overflow says its possible
t1 : * does a web search too *
t1 : source?
t2 : https://superuser.com/questions/...
t2 : lets try it out
t1 : okay
t2 : is this your key? *sends link to gitlab.com/username.keys*
t1 : yes, please use the ED25519 key.
t1 : the second one is my old 4096-bit RSA key...
t1 : which I lost
...
t1 : wait, you can't use the ED25519 key
t2 : why not?
t1 : apparently, ED25519 key is not supported
t1 : I was trying out the steps from the answer and I hit this error :
`do_convert_to_pkcs8: unsupported key type ED25519`
t2 : :facepalm: now what
t1 : :shrug:
...
t1 : *uploads ASCII-armored key file*
t1 : I'm sure of the password for this key
t1 : I use it everyday
t2 : *uploads signed password file*
*1 minute later*
t1 : finally... I have decrypted the file and gotten the password.
t1 : now attempting to login
t1 : I'm in!
...
t2 : I think this should be in an XKCD joke
t2 : Two tech guys sharing password.
t1 : I know a better place for it - devRant.com
t1 : if you haven't been there before; don't go there now.
t1 : go on a Friday evening; by the time you get out of it, it'll be Monday.
t1 : and you'll thank me for a _weekend well spent_
t2 : hehe.. okay.8 -
Am I the only developer in existence who's ever dealt with Git on Windows? What a colossal train wreck.
1. Authentication. Since there is no ssh key/git url support on Windows, you have to retype your git credentials Every Stinking Time you push. I thought Git Credential Manager was supposed to save your credentials? And this was impossible over SSH (see below). The previous developer had used an http git URL with his username and password baked in for authentication. I thought that was a horrific idea so I eventually figured out how to use a Bitbucket App password.
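(And, for what it's worth, the thing that finally cut down the constant retyping - assuming Git Credential Manager is actually installed:
git config --global credential.helper manager
No promises; it's Git on Windows.)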
2. Permissions errors
In order to commit and push updates, I have to run Git for Windows as Administrator.
3. No SSH for easy git access
Here's where I confess that this is a Windows Server machine running as some form of production. Please don't slaughter me! I am not the server admin.
So, I convinced the server guy to find and install some sort of ssh service for Windows just for the off times we have to make a hot fix in production. (Don't ask, but more common than it should be.)
Sadly, this ssh access is totally useless as the git colors are all messed up, the line wrap length and window size are just weird (seems about 60 characters wide by 25 lines tall) and, worst of all, I can't commit/push in git via ssh because Permissions. Extremely aggravating.
4. Git on Windows hangs open and locks the index file
Finally, we manage to have Git for Windows hang quite frequently and lock the git index file, meaning that we can't do anything in git (commit, push, pull) without manually quitting these processes from task manager, then browsing to the directory and deleting the .git/index.lock file.
Putting this all together, here's the process for a pull on this production server:
Launch a VNC session to the server. Close multiple popups from different services. Ask Windows to please not "restart to install updates". Launch git for Windows. Run a git pull. If the commits to be pulled involve deleting files, the pull will fail with a permissions error. Realize you forgot to launch as Administrator. Depending on how many files were deleted in the last update, you may need to quit the application and force close the process rather than answer "n" for every "would you like to try again?" file. Relaunch Git as Administrator. Run Git pull. Finally everything works.
At this point, I'd be grateful for any tips, appreciate any sympathy, and understand any hatred. Windows Server is bad. Git on Windows is bad.10 -
I'm ashamed of it, but I want to share my tifu-story:
My colleague asked me if I could rename his Windows user name because he married and changed his last name. I changed it in the Active Directory, but he got some problems when he wanted to log on. On every startup his old name appeared. Simplest task. Let me google that.
Easy going, let me just change this registry entry. Reboot. Old behaviour. Okay, I changed some of the other entries. Reboot. Yeah, his new name appears. But wait a moment. Windows just nulled his entire user profile and deleted all the data. "oh, haha you have a backup, right?" - "no, I saved everything on the desktop, all my work is gone!"
But in the end, the boss was mad at HIM, because he didn't use the file server or any backup system.
i am not a smart man5 -
I was doing some maintenance on a production server for a game hosting company (Minecraft hosting, for those interested). A week before, I had created a backup of an account directory before trying to solve an issue, I now wanted to remove this directory.
Since I am way too confident in my ability to not mess up, I was logged in as root.
Instead of typing `rm -rf ../` (I know using -f is a bad idea), I typed `rm -rf /`.
The distro we were using did not have any protections built in.
The directory I wanted to remove was gone, but so was the rest of the server once I realized what I had done.3 -
We called a customer because a directory that was important for production was missing on their server.
Turned out that no directory was missing: they had been working in the development environment of the same customer, just at a different location. For the last 3 months.
Fuck XCode! -
Yesterday I had the stupid idea to rename an icon file. Checked that XCode was still building the application fine. Ran it over the build server: failed, complaining about the now-missing old icon file! Checked again and again, but there was no friggin' reference to the old file in the whole repo.
Log in to the machine, clear the build folder and try to build the component again. Bang, still the same error, and the references to no-longer-existing files reappear.
Turns out XCode was caching those references somewhere in the home directory as "DerivedData" and after deleting those, I could build again... but why on earth are you building a cache if you cannot properly invalidate it? Just to waste our time?
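The nuke-the-cache incantation, for future me - default location, adjust if you've moved it:
rm -rf ~/Library/Developer/Xcode/DerivedData/*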
(@xcodesucks)3 -
You win Linux. That was the last straw. I will never install ANY linux distribution ever again.
Set up a simple FTP server. What could possibly go wrong?
Ok now I want it to point to /media/ftp
Easy, right ?
Just add local_root=/media/ftp
Weeeeellll nop.
Not working. Completely ignoring this setting, all users log in to their home directory.
#chroot_local_user=YES
tried, no effect.
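For completeness, the combination that is supposed to make this work, as far as I can tell - no guarantees, it sure didn't keep its promise to me:
local_root=/media/ftp
chroot_local_user=YES
allow_writeable_chroot=YES
# then: systemctl restart vsftpd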
I'm wiping this server and installing windows server there.
Too bad, the process started very well: the machine is fully configured, ready to go, DNS working, everything working. Except this shitty FTP.
So FUCK YOU linux with config files, WELCOME windows with a nice GUI where I can just SELECT the default ftp folder29
Working at a local SEO sweat-shop as the "whatever the lead dev doesn't feel like doing" guy.
Inherit their linux "server".
- Over 500 security updates
- Everything in /var/www is chmod to 777
- Everything in /var/www is owned by a random user that isn't apache
- Every single database is owned by root sql user
- Password for sudo user and mysql root user same as wifi password given to everyone at company.
- Custom spaghetti code dashboard with over 400 files in one directory, db/ api logins spread throughout these files, passwords in plain text.
- Dashboard doesn't have passwords, just usernames to login
- Dashboard database has all customer information including credit card stored in plain text
- Company wifi is shared by other businesses in the area
I suggest that I should try to fix some of these things.
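Starting with the ten-minute wins - a sketch, assuming apache really runs as www-data on this box:
apt-get update && apt-get upgrade # the 500+ pending security updates
chown -R www-data:www-data /var/www
find /var/www -type d -exec chmod 755 {} +
find /var/www -type f -exec chmod 644 {} +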
Lead Developer / Tech Director : We're an SEO company, not a security company . . .7 -
I'm currently between jobs and have a few rants about my previous job (naturally). In retrospect, it's somewhat therapeutic to rant about the sheer brainfuckery that has taken place. Enjoy!
First, let me set the scene: legacy B2B web app made with LEMP stack and sencha ext.js 3 + 4 (don't ask) and a lot of madness. Let's call that app "Alpha".
Alpha is a self-made CMS built for typical ERP stuff. Yes, a self-made CMS: entities are containers, containers have types and fields and values. Like so many legacy PHP apps, it does not have a dedicated FE: the HTML is rendered on the server and then spewed out to the browser.
Easy right? Coding like it's 1999! But there was a twist: because everything is basically a container, the HTML templates are saved in the DB. Along with the necessary JS and the CSS. And the translation variables. Why? Because fuck you! That's why. Who needs a git history anyways.
For some reason, Alpha was kinda slow.
There was also an editor, that allowed you to modify templates (web, mail, pdf) on the fly in prod. Because templates contain repeating data (header/footer), one template could contain additional templates. Much confusion. You could change templates via migration (slow, boring) or just ctrl-c/ctrl-v that sucker (fast, much excitement).
Did I mention Alpha was slow?
On with the rant: e-mails! How do they work? No one knows. How to send mails asynchronously in PHP? Witchcraft is the only possible answer to that riddle. Here is your enterprise™ solution:
1. create mail
2. insert mail into DB
3. WAIT UP TO 59 SECONDS FOR A FUCKING CRON TO SEND MAIL
Why? "Because that way, we can resend mails in case the network is down :)"
Same procedure for the SOAP-API (db-queue + cron). You read that right: all requests to various other systems are processed once a minute.
Alpha slow.
Alpha was only one of several systems. Imagine a bunch of monolithic php apps, interconnected via SOAP, REST and GraphQL like a goddamn intergalactic orgy. Imagine having to debug that clusterfuck.
Let's say there is a bad request. These things happen. No biggie. Remember the db-queue? Let's try to send the bad request a second time! And a third time! Still no luck? How odd. Let's create a specific file in a specific directory: a LOCK-file. Now, "the db-queue is on hold and no request gets processed :)"
Golly gee thanks Alpha.
Anyhow, did you know that MySQL has a join limit of 61 tables?3 -
There are a few email addresses on my domain that I keep receiving spam on, because I shared them on forums or whatever and crawlers picked them up.
I run Postfix for a mail server in a catch-all configuration. For whatever reason in this setup blacklisting email addresses doesn't work, and given Postfix' complexity I gave up after a few days. Instead I wrote a little bash script called "unspam" to log into the mail server, grep all the emails in the mail directory for those particular email addresses, and move whatever comes up to the .Junk directory.
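The whole script is roughly this - a sketch; host, addresses and Maildir layout are specific to my setup:
ssh mail.example.com '
cd /var/vmail/example.com/catchall &&
grep -rlZE "^(To|Delivered-To):.*(old-forum|paste-site)@example\.com" cur/ |
xargs -0 -r mv -t .Junk/cur/
'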
On SSD it seems reasonably fast, and ZFS caching sure helps a lot too (although limited to 1GB memory max). It could've been a lot slower than it currently is. But I'm not exactly proud of myself for doing that. But hey it works!1 -
TLDR: I need advice on reasonable salary expectations for sysadmin work in the rural United States.
I need some community advice. I’m the sysadmin at a small (35 employee) credit card processing company. I began as an intern and have now become their full time sysadmin/networking specialist. Since I was hired in January I have:
-migrated their 2007 Exchange server to Office 365
-Upgraded their ailing Windows server 2003 based architecture to 2012R2
-Licensed their unlicensed VMware ESXi servers (which they had already paid for license keys for!!!) and then upgraded them to 6.5 while preventing downtime on hosted VMs using tricky transfers and deployments (without vMotion!)
-Deployed a vCenter server to manage said ESXi servers easier
-Fixed a three month gap in their backups by implementing Veeam, and verifying its functionality
-Migrated a ‘no downtime’ fileserver to a new hypervisor host, implemented a ‘hot standby’ server as a backup kept up to date by the minute with DFS replication.
-Replaced failing hard drives in a RAID array underlying their one ‘business critical’ fileserver, which had no backups for 3 months at that time
-Reorganized Active Directory and Group Policy deployment from a nightmare spiderweb of OUs and duplicate policies
-Documented the entire old network and now the new one as I’ve been upgrading this
-Audited the developers AWS instances and removed redundant machines, optimized load balancing on front end Nginx servers, joined developer run Fedora workstations to the AD domain and implemented centralized syslog monitoring on them.
-Performed network scans and rewrote firewall exceptions to tighten security
There’s more, but you get the idea. I’ve now been tasked with taking point on an upcoming PCI audit which will be my first.
I’m being paid $16/hr US, with marginal health benefits. This is roughly $32,000 a year, before taxes.
I have two years previous work experience managing a third party Apple repair facility (SimplyMac) and every Apple certification for warranty repair and software troubleshooting. I have a two year degree in general sciences, with about 4 years of college credit (Two years of a physics education and two years of computer science after I switched focus) I’m actively pursuing a CCNA and MCSA server 2016 with exams paid for and scheduled.
I’m going into a salary negotiation in two months. What is a reasonable salary to request, from your perspective, for someone in my position?
Thanks in advance!7 -
My company is getting a new website. This involves getting new hosting.
I made the old one, and it's all just static html. I'm not that attached to it but it's an important detail.
The bosses want the switch to the new site to happen instantly, but I pointed out that with DNS propagation times etc it can't really happen that way.
So I suggested the new web guys host our old site for a few days and we change the DNS now. Then when they want to launch we don't have to wait for the DNS and they can just swap it out.
This involves dropping 10MB of html files into the web directory on the new server.
For this service they are charging us for 2 hours of their time!
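(Two hours. For something whose hardest part is verifiable with two commands - placeholder domain:
dig +short www.example.com # local resolver
dig +short www.example.com @8.8.8.8 # public resolver, to watch propagation
)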
I guess I'm in the wrong business... -
Ridiculous when an FTP guide doesn't include anything about how to change the root directory.
"All these commands and Voila! Yiu have your vsftpd server running"
ok but what is the root directory tho?
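(For vsftpd specifically it's a couple of lines in /etc/vsftpd.conf — a sketch, with the path being an assumption:)

# /etc/vsftpd.conf - pin local users to a chosen root directory
local_enable=YES
chroot_local_user=YES          # jail each user into their root
local_root=/srv/ftp            # <- this is the "root directory"
allow_writeable_chroot=YES     # newer vsftpd refuses a writable root without this

# then: sudo systemctl restart vsftpd
-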
Okay so one of my friends got an offer for a more powerful server with 128GB RAM and an okay processor, because the current server's load is high. In the offer for the new one I saw, under the licenses section, Windows Server 2016. Which to me seems like the worst thing you can do when you're just running PHP and MySQL and nothing Active Directory or otherwise Windows-specific. Can some of you please write up, in short, why to use Linux for servers instead of shitdows? It would clearly cost much less too. Because I guess if others say it, they — the client — will agree...
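(If it helps the pitch: the entire stack they actually need is a couple of packages on an Ubuntu box, with no license line item at all — a sketch:)

# everything a PHP + MySQL app needs, license cost: zero
sudo apt update
sudo apt install -y apache2 php libapache2-mod-php php-mysql mysql-server
sudo systemctl enable --now apache2 mysql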
-
FML. Troubleshooting a bad mount. My server doesn't seem to know whether it wants "remote_images" to be a directory or a file lol.
admin@off001-truservcomm-jhb1-001:///var/...$ cd remote_images
admin@off001-truservcomm-jhb1-001:///var/...$ ls
ls: reading directory .: Not a directory
admin@off001-truservcomm-jhb1-001:///var/...$ sudo ls
ls: reading directory .: Not a directory
admin@off001-truservcomm-jhb1-001:///var/...$ cd ../
admin@off001-truservcomm-jhb1-001:///var/...$ mkdir remote_images
mkdir: cannot create directory ‘remote_images’: File exists
admin@off001-truservcomm-jhb1-001:///var/...$ rm remote_images
rm: cannot remove ‘remote_images’: No such file or directory
admin@off001-truservcomm-jhb1-001:///var/...$ sudo rm remote_images
rm: cannot remove ‘remote_images’: Is a directory
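(Classic symptom of a dead network mount still registered on that path — the usual escape hatch is a lazy unmount; path abbreviated like the prompt above:)

# detach the broken mount even though the remote end is gone
sudo umount -l /var/.../remote_images    # -l = lazy unmount
mount | grep remote_images               # confirm nothing is mounted there anymore
-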
Random guy messages me on WhatsApp that he needs help, that his friend told him I'm good at blah blah blah.........
the issue: he paid for some random php bitcoin thingy blah blah, sent me a link to the site, pretty straightforward instructions on how to use it. I explained everything to him and he says he wants to tweak the php script before he puts it out.
me: then do it
him: how do I start?
me(in my head): did you not think of this before paying for the script?!
also me: oh well, download xampp, good for beginners, easy to setup.
him: not working! please help me
I knew from the onset that he was a windows user.
he started by running it without admin privileges
I had no idea and kept solving problems that didn't exist until I asked him to snap a pic of the log. After explaining how to run a piece of software as administrator, we solved it:
port 80 was taken. Had to go through the process of changing the ports, and I had to validate every single change.
Then, the procedure of reinstalling, because he had installed to some crappy directory. After all the headaches, and then redoing all the processes stated above, it still doesn't work.
one final solution left and I am dropping him like a hot potato. I must have close to a hundred pictures of someone's screen on my phone.
Little question: when he types localhost into his browser, the Windows IIS page thingy pops up. I'm thinking of changing the server name to point at the new port — localhost:&lt;new port&gt; — instead of fighting IIS for port 80.
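(In XAMPP that's a two-line config change — a sketch, with 8080 as the assumed free port:)

# in C:\xampp\apache\conf\httpd.conf, change the two port directives:
#   Listen 8080
#   ServerName localhost:8080
# restart Apache from the XAMPP control panel, then browse to http://localhost:8080
# (sidesteps IIS squatting on port 80 entirely)
-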
After a couple of days reading about CI/CD I found it really complicated for my use case. A server for Jenkins, a server for Rancher, a server for everything, and a lot of configuration to make it all work. So I gave up and decided to build my own CI tool, which turned out to be easier than all of that. I wrote a simple CLI tool in Go which watches the master branch of every directory specified in a .yml file, and every time a new commit lands it pulls the repo and rebuilds the container. Pretty easy, without having to deal with all the bullshit.
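(The core loop is small enough to sketch in shell — this is my reading of the described tool, not its actual Go source, and the repo path and image name are assumptions:)

#!/usr/bin/env bash
# poor man's CI: poll master, rebuild the container on new commits
REPO=/srv/repos/myapp        # assumed checkout location
IMAGE=myapp                  # assumed image name
cd "$REPO"
while sleep 30; do
    git fetch origin master
    if [ "$(git rev-parse HEAD)" != "$(git rev-parse origin/master)" ]; then
        git pull --ff-only origin master
        docker build -t "$IMAGE" .
    fi
done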
-
I executed "chmod -R go-r .*" on my home directory at the university server.
Only to realize that in the shell, ".*" also matches "..", so I was chmodding other students' files until I hit Ctrl-C.
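(The safe spellings avoid the glob entirely — both of these cover dotfiles without ever touching "..":)

# recurse from the directory itself; dotfiles are included automatically
chmod -R go-r .
# or enumerate everything below, explicitly excluding . and ..
find . -mindepth 1 -exec chmod go-r {} +
-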
FUCK APPLICATION LEVEL FIREWALLS!
So I came online today and thought, alright, let's open the shitty Outlook webmail client. Holy crap... that's way too many mails. Many of them are missed Teams messages. So I open up Teams and holy crap — like every third dev in my company has sent me a message screaming "gitlab is not working!!!".
Yesterday I updated it, so I immediately go into panic mode — what shitty hack have I done?!
So yeah, gitlab seems to be working just fine, everything is speedy and responsive, so I call one of my fellow devs and ask him what's wrong. And he's like, oh yeah, there comes an LDAP error saying timeout or something.
I try to log in with Active Directory. Works like a charm. Try another account — same problem?!
Google the problem, search gitlab tickets. Nope, there's no open bug or anything like it.
So alright, let's call the network guy. "Yo, can you check if something LDAP-like is getting blocked on the way to the gitlab server?" — He's like, oh yeah damn, almost every damn request is getting blocked. Ah wait, there was a firewall update yesterday too. Yeah, LDAP is no longer LDAP. BLOCK THAT SHIT!
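(For future me, the quickest check from the gitlab host is a plain ldapsearch against the DC — host, bind DN and base here are assumptions:)

# does LDAP traffic actually get through?
ldapsearch -x -H ldap://dc1.corp.example.com -D 'binduser@corp.example.com' -W \
    -b 'dc=corp,dc=example,dc=com' '(sAMAccountName=someuser)'
# a hang or timeout here means something on the path is eating the packets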
After 10 minutes of figuring out what type the firewall thinks it's detecting, and what needs to be whitelisted to make it fucking work again, it seems to work.
But ha no, there is another update rolling on, so same shit like 15 minutes later.
Now it seems to work and I have to inform every damn fucking developer that it works again. And yeah, alright, you sent a mail, but fuck it, I will call you though! So yeah, just answering calls, mails and chat messages. Like why the fuck can't you read your mails like a damn normal person?!
-
My DEV Story
After reading it, do me a favor with a ++
I thought I'd be a software engineer in the future.
Learnt Python's basic modules, AI, and some ML.
After getting intermediate in Python, I started learning Java as my second language, but could not do it because of JDK 8. Now don't ask me why.
Then I just stepped into game development with Unity and C#, having basic knowledge of C# and no experience making a game myself. This is called being ignorant.
After getting no success, I started learning PHP and got the chance to make a website having no content ;)
But it cannot meet my requirements
Soon I got content that AdSense regards as no content, no problem
I started learning Flask, a Python framework for making web applications.
It took me 1 month to complete my website, which can convert file formats.
Then came the idea of deploying it to a server:
Signed up to DigitalOcean
Domain name from GoDaddy (I know Namecheap is better, but I got an offer from GoDaddy)
Made a VPS, for which I pay $5/month
Deployed my Flask app using a WSGI server
This is the worst dev experience
.
.
.
.
Why do all the tutorials only deploy a Flask app that displays Hello World, and nothing else?
The user the WSGI/uWSGI server runs as doesn't have permission to save any file or make any directory
Every time........ERROR
Totally Fucked Up
Finally, it works on localhost with port 80
I know this is not the professional way to host a website, but it was the only option left.
What can I do
Now I cannot issue a free SSL certificate through Let's Encrypt because of **Error 98: Address Already In Use**
The address was port 80 on which my Flask App was running
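(Looking back, the textbook fix is to bind the app to an internal port behind nginx and let certbot own port 80 — a sketch, assuming the Flask object is called app inside app.py:)

# serve the app on an internal port instead of 80
pip install gunicorn
gunicorn -w 4 -b 127.0.0.1:8000 app:app
# an nginx server block proxies the public side:
#   location / { proxy_pass http://127.0.0.1:8000; }
# and with port 80 free, Let's Encrypt can do its challenge:
sudo certbot --nginx -d fileconvertex.com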
Check it out now - www.fileconvertex.com
-
Dear Webmin,
how is it that you fail to update and fuck up every Apache config file existing on the server.
Why can't I just be a lazy dev tonight, instead of fixing your moronic actions upon those files, one by one.
Why is it that you frigging forget to close Directory tags properly.
Why is it that you show a Forbidden page when everything seems to be finally ok.
And why is it that I can not re-generate that shit with one button.
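(At least finding the damage can be scripted — a sketch for hunting down the unclosed Directory tags:)

# point at the first syntax error, with file and line number
apachectl configtest
# count opening vs closing Directory tags per vhost file (paths assume a Debian-style layout)
for f in /etc/apache2/sites-available/*.conf; do
    echo "$f: $(grep -c '<Directory' "$f") open, $(grep -c '</Directory>' "$f") close"
done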
Fuck this shit.
sudo rm -rf /
-
I have this little problem:
there is no constant electricity in the country where I live — in fact, for the past 4 days there hasn't been a single blink.
I enabled auto save on my VS Code to save me from tears.
Now, I have a file server with backup batteries, and since it's a laptop mobo that was converted to a server, hooking up the battery was a no-brainer.
I just saved copies of my files on it, and if I edited any of them I'd just overwrite the file on the server. This was only possible if I did it before the power went out — or else I was stuck again.
I decided to try vs code extensions that will save me from all that copy and paste work.
Tried ssh — unsupported architecture error. Didn't care, I just needed ftp or sftp.
I tried the simple ftp/sftp extension. Worked pretty well: it let me connect to the server and add the remote directory to my workspace, and with autosave the changes are uploaded immediately, which means once the power is out I can continue on my mobile phone (I have some Android text editors that support ftp).
Little problem: I discovered some things just don't work. Even if I opened the whole directory, the contents will not be loaded unless I open the files one by one — stylesheets, images and whatnot.
Imagine having to open every single damn file before it appears in the browser. Very annoying.
I need a solution; I have really tried.
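(One angle that might work, assuming the server at least runs plain sshd: mount the remote directory locally with sshfs, so every tool sees ordinary local files and nothing needs opening one by one — hypothetical paths:)

# mount the server's web root as a local folder
sudo apt install sshfs
mkdir -p ~/siteroot
sshfs user@fileserver.local:/var/www ~/siteroot
# work in ~/siteroot as usual; detach with:
fusermount -u ~/siteroot
-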
We noticed that in our landing directory we were receiving duplicate files.
I asked the source to investigate.
He told me that the issue was not at his end. He asked me to mark the issue has been resolved from his end. I refused.
We get on a call to debug the issue. After 30 minutes he's extremely frustrated. As he's sharing his screen, he runs `ls -ltr | uniq -cd` on his server — the one that sends the files — and then screams at me: "Where are the duplicates? Show me. Check the output. There are no duplicates.".
I first muted the call. Had a good laugh. Made him repeat it to show my team mates. They had a good laugh too.
I then asked him to call it a day. And once you cool down, think about what you just showed me.
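(For the record: uniq only collapses adjacent identical lines, and every line of ls -ltr differs in name and timestamp anyway — that pipeline can't detect duplicates of anything. Comparing content hashes can — a sketch:)

# files with identical content in the landing directory
md5sum * | sort | uniq -w32 -d    # -w32: compare only the 32-char hash prefix
-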
There were many issues that came about during my entire employment, but I woke up today to some honestly quite bizarre questions from my manager that made me open an account here. This is just the latest of many frustrations I have had.
For context, my manager is more of a "tech lead" who maintains a few projects, the number can probably be counted in one hand. So he does have the knowledge to make changes when needed.
A few weeks ago, I was asked to develop a utility tool to retrieve users from Active Directory and insert them into a MSSQL Database, pretty straight forward and there were no other requirements.
I developed it, tested it, pushed it to our repository, then deployed the latest build to the server that had Active Directory, told my manager that I had done so and left it at that.
A few weeks later,
Manager: "Can you update the tool to now support inserting to both MSSQL and MySQL?"
Me: "Sure." (Would've been nice to know that beforehand since I'm already working on something else but I understand that maybe it wasn't in the original scope)
I do that and redeploy it, even wrote documentation explaining what it did and how it worked. And as per his request, a technical documentation as well that explains more in depth how it works. The documents were uploaded as well.
A few days after I have done so,
Manager: "Can you send me the built program with the documentation directly?"
I said nothing and just did as he asked even though I know he could've just retrieved it himself considering I've uploaded and deployed them all.
This morning,
Manager: "When I click on this thing, I receive this error."
Me: "Where are you running the tool?"
Manager: "My own laptop."
Me: "Does your laptop have Active Directory?"
Manager: "Nope, but I am connected to the server with Active Directory."
Me: "Well the tool can only retrieve Active Directory information on a PC with it."
Manager: "Oh you mean it has to run on the PC with Active Directory?"
Me: "Yeah?"
Manager: "Alright. Also, what is the valid value for this configuration? You mentioned it is the Database connection string."
After that I just gave up and stopped responding. Not long after, he sent me a screenshot of the configuration file where he finally figured out what to put in.
A few minutes later,
Manager: "Got this error." And sends a screenshot that tells you what the error is.
Me: "The connection string you set is pointing to the wrong database schema."
Manager: "Oh whoops. Now it works. Anyway, what are these attribute values you retrieve from Active Directory? Also, what is the method you used to connect/query/retrieve the users? I need to document it down for the higher ups."
Me: "The values are the username, name and email? And as mentioned in the technical documentation, it's retrieving using this method."
The 2+ years I have been working with this company have been some of the most frustrating of my entire life. But thankfully, this is the final month I will be working with them.
-
My work product: Or why I learned to get twitchy around Java...
I maintain a Java-based test system that tests a raster image processor. The client is a Java Swing project containing CORBA bindings to the internal API of the raster image processor. It also has custom written UI elements and duplicated functionality that became available in later versions of Java; but because some of the third party tools we use don't work with later versions of Java for some reason, it's not possible to upgrade Java to gain things as simple as recursive directory deletion. Yes, the version of Java we have to use does not support something that simple, and custom code had to be written for it.
Because of the requirement to build the API bindings along with the client the whole application must be built with the raster image processor build chain, which is a heavily customised jam build system. So an ant task calls out to execute a jam task and jam does about 90% of the heavy lifting.
In addition to the Java code there's code for interpreting PostScript files, as these can be used to alter the behaviour of the raster image processor during testing.
As if that weren't enough, there's a beanshell interface to allow users to script the test system, but none of the users know Java well enough to feel confident writing interpreted Java scripts (and that's too close to JavaScript for my comfort). I once tried swapping this out for the Rhino JavaScript interpreter and got all the verbal support in the world but no developer time to design an API that'd work for all the departments.
The server isn't much better, though. It's a Tomcat-based application written by someone who had never built a Tomcat application before — or any web application, for that matter. It uses raw SQL strings instead of an ORM, it doesn't use MVC in any way, and an insane amount of functionality is dumped into the JSP files.
It too interacts with a raster image processor to create difference masks of the output, running PostScript as needed. It spawns off multiple threads and can spend days processing hundreds of gigabytes of image output (depending on the size of the tests).
We're stuck on Tomcat 7 because we can't upgrade beyond Java 6, which brings all manner of security issues, and that eager little Java updater will break the tool chain if it gets its way.
Between these two components we have the Java RMI server (sometimes) working to help generate image data on the client side, before all images are pulled across a UNC network path onto the server that processes test jobs (in PDF format) by reading the xref table of said PDF and finding the embedded image data (for our server, consumed test files are just Flate-encoded TIFF files wrapped in just enough PDF to make them valid), then uses a tool to create a difference mask of two images.
This tool is very error prone, it can't difference images of different sizes, colour spaces, orientations or pixel depths, but it's the best we have.
The tool is installed in both the client and server if the client can generate images it'll query from the server which ones it needs to and if it can't the server will use the tool itself.
Our shells have custom profiles for linking to a whole manner of third party tools and libraries, including a link to visual studio 2005 (more indirectly related build dependencies), the whole profile has to ensure that absolutely no operating system pollution gets into the shell, most of our apps are installed in our home directories and we have to ensure our paths are correct for every single application we add.
And... Fucking and!
Most of the tools are stored as source bundles in a version control system... not git or mercurial, not perforce or svn, not even CVS... They use a custom-built version control system built on top of RCS; it keeps a central database of locked files (using soft and hard locks along with write-protecting the files in the file system) to ensure users can't get merge conflicts — by preventing other users from writing to the files at all.
Branching is heavy weight and can take the best part of a day to create a new branch and populate the history.
Gathering the tools alone to build the Dev environment to build my project takes the best part of a week.
What should be a joy come hardware refresh year becomes a curse ("Well fuck, now I loose a week spending it setting up the Dev environment on ANOTHER machine").
Needless to say, I enjoy NOT working with Java. A lot of this isn't Java's fault, but there are a lot of things that Java (specifically the Java 6 version we're stuck on) does not make easy.
This is why I prefer to build my web apps in Python or Node — hell, I'd even take Lua... Just... compiling web pages into executable Java classes, why? I mean, I understand the implementation of how this happens, but why did my predecessor have to choose this? Why?
-
Fuck ssh. It does 4 things at once and I couldn't get it to do one. I have some Pi's and want a shared directory on each of them. On a server I created a user for that and mounted its home directory on a Pi — it worked. I did some lockdowns (no shell, only sftp allowed, login only via keyfile), but I was still able to mount it on boot.
Now I had to migrate this setup to another server. It took me a while copying all the configuration etc. All I got for it was an error message. I figured out the user's home directory had to be owned by root, fixed that, got another error. Somehow scp didn't use sftp but the login shell, which is /usr/sbin/nologin. That made scp (and sshfs) fail, even though it works perfectly with the other server.
I gave up and removed the whole setup. I'll find another distributed filesystem for it (but not Samba or NFS, those are way too complicated). These are the setbacks that depress me.
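(For what it's worth, both surprises are standard OpenSSH behavior: a ChrootDirectory must be root-owned and not writable by others, and classic scp runs commands through the login shell, which /usr/sbin/nologin refuses — sftp doesn't. The sshd_config block that usually makes this setup work, with user and path as assumptions:)

# /etc/ssh/sshd_config - sftp-only, chrooted share
Match User sharepi
    ChrootDirectory /srv/share     # owned by root, not group/world-writable
    ForceCommand internal-sftp
    AllowTcpForwarding no
    X11Forwarding no
# writable data lives in a subdirectory the user owns, e.g. /srv/share/data
-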
Composer.json: require sendgrid
Composer adds the wrong directories to the file. Fine, I'll hard code it.
Composer is deriving the file path.
Fine, I'll edit 4 files.
Composer is escaping the hard-coded path.
Change global variables.
Composer is still prepending its own directory to the hard-coded path.
Follow the Azure and SendGrid documentation to the letter — Composer puts wrong-way-round slashes in the file path.
Gives up on the 57th server 500 error.
Sometimes Azure gets me down with its implementation of things....
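(The way out of this fight, for future reference, is to never hard-code the vendor path at all — a sketch:)

# let Composer own the paths
composer require sendgrid/sendgrid
# the PHP entry point then needs exactly one require, relative to itself:
#   require __DIR__ . '/vendor/autoload.php';
-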
I've had my site up and working for a few months now (still need to finish building it properly, the template project is still half default lol), but because I set up the Nginx server on a DigitalOcean droplet myself, using both for the first time ever, I obviously made some mistakes. It was up and running, though, just always spouting 'nginx[1755018]: nginx: [warn] conflicting server name "jessiejfoley.dev" on 0.0.0.0:443, ignored' whenever I ran 'nginx -t', or 'java.security.cert.CertificateException' on this server monitor app I have on my phone.
But it was up and SSL seemed to be working, so I ignored it.
Today I learned about https://sslshopper.com/ssl-checker...., which told me my intermediate certificates were not functioning properly. I was bored today and didn't want to be too productive (else boss expects the progress I've made this week every week), so I decided to finally go through and fix everything properly, starting by reinstalling the certs and double-checking my commands.
2 hours later I still can't fix the cert errors, so I decide to focus on the conflicting name error. I go through the nginx directory cleaning out anything non-essential, including things I'd put there while trying to figure out how to get it up originally (learned as I went lol, bad practice I know, but it's just a practice site that'll eventually be a portfolio when I feel like making it properly and investing an adequate amount of time).
As soon as I get rid of jessiejfoley_dev.save.3 inside /etc/nginx/conf.d (my actual site is in sites-enabled), my server monitor app stops reporting the cert error, and when I check the SSL checker everything is working properly now.
So the easiest problem to fix was actually the cause of all my problems. I'm an idiot, and this shows I still have a LONG way to go before I actually know what I'm doing at all.
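(The takeaway in command form — nginx treats every file in conf.d and sites-enabled as live config, forgotten backups included. Duplicate server_name declarations are easy to hunt down:)

# every place a server_name is declared, including leftover 'backup' files
grep -Rn "server_name" /etc/nginx/conf.d /etc/nginx/sites-enabled
# then confirm the warning is gone
sudo nginx -t
-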
I have a small NUC-like machine in my home with an old external hdd connected to it. I use it to run my local gitlab, nextcloud and to test a few websites I build for the lolz.
If you too have a homelab — whether it's a single Raspberry or an entire room full of racks — you know damn well that everything you have running locally as a web service keeps going until it doesn't, for whatever fucking reason. This time, it was the turn of my nextcloud.
The machine has arch linux running, I chose it since I already use it on my coding laptop and being a rolling release means I don't have to manually upgrade to a newer version, risking various fuck-ups and consequent screaming of profanity.
The downside is that Arch is a bleeding-edge distro, so despite being pretty good where security is concerned, as updates get pushed out some packages may still require legacy software to work as intended, since obviously not all developers of all packages can release simultaneously.
The problem was that PHP reached 8.2.x but nextcloud couldn't use anything beyond 8.1, so the highlighted solution was to install php-legacy, a package with a set of utilities which the cloud could use instead of mainline PHP.
Pretty easy, right? fuck my life, here we go.
I edited apache-httpd's configurations to link the new libraries, updated every reference in every virtual host that could possibly screw up the web server.
Done.
Then I went on and disabled the php-fpm mainline, creating a new systemd unit that would instead run the legacy executable and afterwards I edited nextcloud's additional configs so they use that instead.
Done, getting a bit dizzy, but I reboot everything and breathe.
At this point the migration should be complete, but wait, the server returns an error saying that the application is still trying to use php 8.2+...wait, what in the sysadmin Christ?
Back to nextcloud config, everything is set, everything else in every other fucking php-legacy and web server is fine, the old fpm service is disabled, I am confused, and why in the FUCKING FUCK is the new php-fpm unit failing to start at boot with "error 78/config - directory not found"? Hello? Am I being trolled by a shitty dual-core amazon fake NUC?
Maybe yes, cause it turns out that the unit was referencing a directory in the external hdd, which gets mounted at boot time after the unit itself starts, so nothing much, just a matter of tinkering with cron jobs, a reboot and at least this one is off my balls.
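(For the record, a cleaner fix than cron for that ordering problem is to declare the dependency to systemd itself — a sketch, assuming the unit is php-fpm-legacy.service and the hdd mounts at /mnt/external:)

# /etc/systemd/system/php-fpm-legacy.service.d/mount.conf (drop-in override)
[Unit]
RequiresMountsFor=/mnt/external

# then reload:
#   sudo systemctl daemon-reload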
But why still isn't the server responding correctly? why? WHY?
After slamming my cock on the keyboard here and there scrolling back through all the config files I think to myself, hmmm, my gitlab is working flawlessly, well yeah, I didn't need to install the whole web stack, everything was nice and easy wrapped in a docker container...so why am I even here, why the fuck am I bothering with all this layered web-app bullshit, why don't I just run the up-to-date docker image that someone else has already set up for me, back up all the data and reupload them on the application?
Oh joy, you can't imagine, after 3...almost 4 hours of pure computer-touching the relief I had from seeing the blue web page with the "welcome to nextcloud" title.
Right now it's copying back all the files, and the external hdd is now linked to include the data folder.
Like really, everything was solved in two lines of bash.
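(I won't pretend to remember the exact two lines, but with the official image they'd look something like this — port and volume name are assumptions:)

docker pull nextcloud
docker run -d --name nextcloud -p 8080:80 -v nextcloud:/var/www/html nextcloud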
I am still fuming, but at least I learned a valuable lesson, if you want a service up for yourself, implement it and deploy it as fucking easy straight-forward as you can, giving MAXIMUM priority to already fully-working options that are out there just waiting to be downloaded and used. I swing my scrotal sack on web-apps elegance as long as it's MY homelab in MY place.
Eat a fat dick php.
sudo pacman -Rns nextcloud
sudo systemctl disable --now php-fpm-legacy
sudo pacman -Rns php-legacy
sudo pacman -Rns $(sudo pacman -Qdtq)
-
I'm currently trying to get an SFTP user for our school's webspace (preinstalled WordPress — don't hate it, it's "great" for non-IT people), and our network administrator says that he can't create one for me because I would have access to all files on the server.
WTF — you can create SFTP users on Linux, restrict their access, and even set a home directory.
Yeah, now we need to forget about themes and plugins in WordPress.
(He said that he also can't create an FTP user)
-
I remember the day I used Linux to enforce size quotas on user directories. Oh, what a day.
I was surrounded by horrible monsters wearing man-suits that made them appear superficially human... though the sounds screeching forth from their many-toothed mouths in whining, jarring tones would suggest otherwise. And I used this wonderful feature: you can call most of the filesystem commands on plain files!
I did a dd to create a disk image of exactly the right size, made a filesystem inside it, and mounted it straight into the home directory, adding entries to the fstab to auto-mount it and setting the I/O permissions.
It was so wonderful, and the little bastards refused to use the server!
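(The trick in full, as a sketch — size and names are assumptions:)

# a 1GB per-user 'quota', enforced by the filesystem itself
dd if=/dev/zero of=/quota/alice.img bs=1M count=1024
mkfs.ext4 -F /quota/alice.img          # -F: it's a file, not a block device
mount -o loop /quota/alice.img /home/alice
# persist across reboots
echo '/quota/alice.img /home/alice ext4 loop,defaults 0 2' >> /etc/fstab
-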
JSON host files for a whole server network — root server passwords included — sitting under the webserver's configs directory, open to the public.
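(The five-minute mitigation, sketched with hypothetical paths: get the secrets out of anything the webserver will ever serve.)

# move credentials out of the docroot and lock them down
mv /var/www/html/configs /etc/myapp/
chmod -R o-rwx /etc/myapp/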
-
This happened recently.
I installed Vesta on our organisation's lab server, and I was trying to add the user's key to gitlab.
But Vesta doesn't support ssh keys by default, so I thought I'd go the https way.
I don't remember my password, so instead of opening saved logins I was going to install gitlab on our organisation's sub domain.
Later I created custom keys inside the user's directory -_- -
Windows 10: I just want a flipping built-in command line executable to log off another (local) user. I'm not a server, I don't have Active Directory, I don't want to switch to logging in as that user first — I want to just kill their inactive local session, because Cisco's freaking VPN doesn't allow you to connect when another user is logged in. I can kill the session from an admin Task Manager; I just want it on the command line. If you're gonna let software check the number of logged-in users, let the freaking administrator modify the number of logged-in users with a CLI.
Idk if I could turn it off and on again. On a server I would just issue "query session" or "query user" followed by "logoff ##". Why not let me do the same damn thing on my home computer so I don't have to restart MY SESSION just to close MY WIFE'S session. You stupid fraking company that cannot provide consistent command line programs across various systems. SCREW YOU MICROSOFT AND YOUR UTTERLY ASININE DECISION MAKING REGARDING WHAT FEATURES TO INCLUDE IN WHAT BUILDS.
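(For what it's worth, the binaries behind those commands do ship with at least the Pro/Enterprise editions of Windows 10 — whether they're on a given Home install is the gamble. The standard sequence, as a sketch:)

REM list local sessions with their IDs (quser.exe / query.exe)
query session
REM log off the session with ID 2
logoff 2
-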
It's okay to slow down. Seriously, it's fine. You can type 203 wpm... as long as the command you enter includes all the necessary limits. Nothing is worse than having ansible rm -r the wrong directory — or the wrong server — because you missed a limit and ran the play against all.
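(The habit that saves you, as a sketch: pin the play to a host pattern and do a check-mode pass first:)

# dry-run against ONE group, never all
ansible-playbook cleanup.yml --limit webservers --check
# only then run it for real
ansible-playbook cleanup.yml --limit webservers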
-
This happened to me sometime back.
I want to try out a WordPress plugin in my local machine before installing on a production server. It is an Ubuntu machine. Downloaded and installed Xampp, then setup WordPress with MySQL. Now tried uploading the plugin zip file, it throws some permission error, asking to fix permissions or use FTP. I thought of just chmod 777 recursively for the WordPress directory to fix this easily.
Ran the command; it looked hung. Terminated it using Ctrl+C and ran the same command again. Again it was taking forever. It should not take that much time to recursively change the permissions of just a WordPress directory. I thought something was wrong — before I realized it, the damage was already done.
Looks like I ran the command
sudo chmod -R 777 /
instead of
sudo chmod -R 777 ./
Fuck, I missed a dot in the command and it was changing the permissions of everything on my machine. Saw the system monitor: CPU usage spiked to 100%. I couldn't close or open any program. Force shutdown via the power key. It didn't boot again. Recovery mode didn't help. Looks like there was no easy way to recover from this damage. Most of the files I need are backed up in the cloud, but I still needed a few more personal files before I could format and reinstall Ubuntu. Realised I have Windows dual-booted. Booted into Windows, used an ext4 reader to recover the files, then formatted and reinstalled the OS. Took a few hours to get back to my previous setup.
Lesson Learned: Don't use sudo unnecessarily.
Double check the command while executing.
Running a wrong command with root permission can fuckup your entire machine. -
Relatively often the OpenLDAP server (slapd) behaves a bit strange.
While it is a little bit slow (I didn't do a benchmark, but Active Directory seemed a bit faster — though it has other quirks and is Windows-only), with a small number of users it's fine. slapd is the reference implementation of the LDAP protocol, and I didn't expect it to be much better.
Some years ago slapd migrated to a different configuration style — instead of a configuration file and a required restart after every change, it now uses an additional database for "live" configuration, which also allows deploying multiple servers with the same configuration (I guess this is nice for larger setups). Much of the documentation online does not reflect the new configuration, so using the new style requires some knowledge of LDAP itself.
It is possible to revert to the old file-based method, but that possibility might be removed in any future version — and restarts may take a little bit longer. So I guess, don't do that?
To access the configuration over the network (using only the command line on the server to edit the configuration is sometimes a bit... annoying), an additional internal user has to be created in the configuration database (while working on the local machine as root, you are authenticated over a Unix domain socket). I mean, I had to create an administration user during the installation of the service, but apparently that one is only for the main database...
The password in the configuration can be hashed as usual — but strangely it only accepts hashes of some passwords (a hashed version of "123456" is accepted, but not hashes of different passwords — I mean, what the...?), so I have to use a single plaintext password... (secure password hashing works for normal user and admin accounts).
But even worse are the default logging options: by default (at least on Debian) the log level is set to DEBUG. Additionally, if slapd detects optimization opportunities it writes them to the logs — at least once per connection, if not per query. Together with an application that did a lot of connections and queries (this was not intended and got fixed later), THIS RESULTED IN 32 GB OF LOG FILES IN ≤ 24 HOURS! — enough to fill up the disk and crash other services (lessons learned: add more monitoring, monitoring, and monitoring, and /var/log should be an extra partition). I mean, logging optimization hints is certainly nice — it runs faster now (again, I did not do any benchmarks) — but the verbosity was way too high.
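(With the cn=config style, taming that log level is one ldapmodify away — a sketch using the root-over-socket authentication mentioned above:)

# set slapd's log level to connection/operation stats only
sudo ldapmodify -Y EXTERNAL -H ldapi:/// <<'EOF'
dn: cn=config
changetype: modify
replace: olcLogLevel
olcLogLevel: stats
EOF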
The worst parts are the error messages: when entering a query string with a syntax error, slapd returns error code 80 without any additional text — the documentation reveals the SO MUCH BETTER meaning: "other error". THIS IS SO HELPFUL... In the end I was able to find the reason the input was rejected, but in my experience most error messages are a little more precise.
-
Quick Plesk config question...
Been getting open_basedir() notices in the WordPress logs, and frankly it's flooding the log right now. Sample below:
[24-Feb-2019 07:05:19 UTC] PHP Warning: file_exists(): open_basedir restriction in effect. File(/var/www/vhosts/webspacedomain.com/SiteInstallDirectory/wp-content/db.php) is not within the allowed path(s): (/var/www/vhosts/webspacedomain.com/:/tmp/) in /var/www/vhosts/webspacedomain.com/SiteInstallDirectory/wp-includes/load.php on line 397
Checking the settings for open_basedir in the domain's PHP settings, it's currently set to the following default value:
{WEBSPACEROOT}{/}{:}{TMP}{/}
By my read, that **should** be granting permission to the directory. I just checked it against the setting on the dev server (which doesn't report this error), and it's configured in the same manner. Only difference between Dev environment and this one is that the one in Dev is in vhosts/webspacedomain.net/DEV instead of just vhosts/webspacedomain.net
Is there something I'm missing here?
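(One hedged sanity check: ask PHP itself what that vhost actually sees, since the CLI's php.ini won't match Plesk's per-domain settings — the throwaway file name is hypothetical:)

# drop this in the docroot, load it in a browser, then delete it
echo '<?php var_dump(ini_get("open_basedir"));' > /var/www/vhosts/webspacedomain.com/SiteInstallDirectory/basedir-check.php
-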
Would it be possible to use the (S)FTP protocol in conjunction with push technology rather than pull? Perhaps websockets, since both use TCP?
Say, something like an external server periodically sending my server files, and when a new file arrives I get a notification — instead of constantly polling my directory to check if there are files in it.
I think I could see this done with an Angular page that gives me a notification when a new file arrives on my FTP server.
I think it might turn into an interesting little hobby project...
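(Server-side, the kernel can already do the push half: inotify fires the moment an upload completes, no polling — a sketch watching an assumed landing directory:)

# react to finished uploads instead of polling (inotify-tools)
inotifywait -m -e close_write /srv/ftp/incoming |
while read -r dir event file; do
    echo "new file arrived: $file"    # notify a websocket, webhook, whatever
done
-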
# rails manage.py runserver
Usage: ...
# python run server
/use/local/opt/python/bin/python2.7: can't find '__main__' module in 'run'
# npm server
Usage: npm <command> ...
# webpack s[TAB]
...
# [TAB]
...
# ./just_fckn_run_pls.sh
zsh: no such file or directory: ./just_fckn_run_pls.sh
# DO IT
zsh: correct DO to do [nyae]? n
zsh: command not found: DO
# exit
Thank you. Come again!
-- Dr. API Nahasapeemapetilon
============= Broken Pipe =============
10 GOTO PUB -
Ok, so for the past whole day I've been trying to make a vhost work on my brand new laptop running Ubuntu 16.04 LTS... When I installed the OS, I set up hard disk encryption, and on top of it — user home folder encryption. Don't ask me why I did both.
Setting up vhost is simple and straight forward - I did it hundreds, maybe thousands of times, on various Linux distros, server and desktop releases alike.
And of course, as usually happens, opposed to all logic and reason — setting up a virtual host on this machine didn't work. No matter what I do — I get a 403 (access not allowed).
All is correctly set - directory params in apache config, vhost paths, directory params within vhost, all the usual stuff.
I thought I was going crazy. I went back to several live servers I maintain — exactly the same setup that doesn't work on my machine. Googled it, SO'd it — all I could see was exactly what I had been doing... I ended up checking, char by char, every single line, in disbelief that I could not find the problem.
And then — I finally figured it out, after losing one whole day of my life on it:
I was trying to setup vhost to point to a folder inside my user's home folder - which is set to be encrypted.
Aaaaaand of course - even with all right permissions - Apache cannot read anything from it.
As soon as I tried any other folder outside my home folder - it worked.
I cannot believe that nobody has encountered this issue before on Stack Overflow or wherever else.
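(The one-minute diagnostic I wish I'd run: walk the path as the Apache user — an eCryptfs home is simply unreadable to it. Paths are hypothetical:)

# show owner/permissions for every component of the path
namei -l /home/me/myproject/index.html
# or just try to read it exactly as Apache would
sudo -u www-data head -c1 /home/me/myproject/index.html
-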
I spent half a day trying to figure out why the app on the staging server does not write to the app log file while it does on the dev server.
The server log said the logging config file was found, but then it could not find the root logger.
The problem: the directory was readable by the app, but the logfile configuration file itself was not.
Dear devs, when a file is not readable, that might be some interesting information one could write into a log. AT LEAST MORE INTERESTING THAN "APPLICATION STARTING..." -
Anyone else affected by the UKFast outage yesterday? We've come in this morning to a failed drive (or so we think); our web server home directory is just gone. Wondering if anyone else has noticed any funnies on their hosting.
https://theregister.co.uk/2017/12/... -
Someone asked me to check on his WordPress site. Can't upload media: 'Failure to write to disk'. Updating fails too. Directory and file permissions are ok. Server has enough space. Cleared tmp. Any more ideas?
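(Two things I'd still check, sketched with assumed paths: WordPress stages uploads through PHP's temp dir first, and "permissions ok" can still mean "wrong owner":)

# where does PHP actually stage uploads? empty means the system default
php -i | grep -E 'upload_tmp_dir|sys_temp_dir'
# can the webserver user really write where WordPress needs to?
sudo -u www-data touch /var/www/html/wp-content/uploads/.writetest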
-
I need help structuring a new TypeScript project built on a MERN stack. I used CRA for the client, so I opted for separate tsconfig files — one for the client (auto-generated by CRA) and one for the server (extending the node12 tsconfig). However, I'm trying to set up eslint and prettier globally so that the lint/style rules are uniform across the codebase. CRA adds an eslint config that extends react-app, which is fine, but I'd like to still have my global rules. I have written my eslintrc.json file and am happy with it, so I placed it in the project root directory. I figured that when I run eslint globally, it would lint the server code with the global rules and the client code with the global rules plus the react-scripts rules.
However, react-scripts complains that I've installed a newer version of eslint in a parent directory. I can either ignore that rule or use the same version as react-scripts, but it seems like react-scripts is going to run eslint on its own when I run npm start regardless of whether I have a global config. What should I do? Is there a better way to structure the app?
-
I need to have a hosting company upgrade the Debian OS install on a VPS. But I also need to know things like what MySQL or Perl modules were added to the server by the admins before me, outside the /home directory. I don't have any documentation on it at all. If I don't preserve custom stuff like that, it could result in a dead website. Anyone got any tricks for figuring out what was added, and when?
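(A few starting points — all stock tooling, nothing here is specific to that box:)

# packages someone installed on purpose (vs pulled in as dependencies)
apt-mark showmanual > manual-packages.txt
# CPAN modules installed outside dpkg usually live under /usr/local
find /usr/local/lib -name '*.pm' -printf '%TY-%Tm-%Td %p\n' | sort
# MySQL side: plugins a plain schema dump won't advertise
mysql -e 'SHOW PLUGINS;'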
-
If I use a connector to pull files from an SFTP server, and when I configure it to pull all files from the root folder after it logs in it actually pulls from the machine's root directory, is that really an SFTP server or just a server? Is that even secure?
-
This afternoon we got an email from our pentester. He said that he found a security vulnerability in our project: a .git/ folder in the project directory on the production server. He considered it a security vulnerability because a user can see all the git branches of the repo. He recommended we remove that folder, but the problem is we're using CI/CD, so we need that .git/ folder. My question: is it bad practice to use git on a production server?
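(The usual compromise is to keep the folder for CI/CD but make the webserver refuse to serve it — a sketch for nginx; Apache has equivalents:)

# inside the server block that serves the app
location ~ /\.git {
    deny all;
    return 404;
}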
-
#Suphle Rant 6: Deptrac, phparkitect
This entry isn't necessarily a rant but a tale of victory. I'm no longer as sad as I used to be. I don't work as hard as I used to, so there are fewer challenges to frustrate my life. On top of that, I'm not bitter about the pace of progress. I'm in a state of contentment regarding Suphle's release
An opportunity to gain publicity presented itself last month when the CFP for a PHP event was announced. I submitted and reviewed a post introducing Suphle to the community. In the post, I assured readers that I won't be changing anything soon, i.e. the APIs are cast in stone. Then PHP 7.4 officially "went out of circulation". It hit me that even though the code supports PHP 8 on paper, it's kind of a red herring that decorators don't use PHP 8 attributes. So I doubled down, suspending documentation.
The container won't support union and intersection types cuz I dislike the ambiguity. Enums can't be hydrated. So I refactored the implementation and usages of decorators from interfaces to native attributes. I tried automating typing for all class properties, but Psalm uses docblocks instead of native typing. So I disabled it and am doing it by hand whenever something takes me to an unfixed class (difficulty: 1). But the good news is, we're as PHP 8 compliant as anybody can ask for!
I decided to ride that wave and implement other things that have been bothering me:
1) 2 commands for automating project setup for collaborators and user facing developers (CHECK)
2) transferring some operations from runtime to compile/build TIME (CHECK)
3) re-attempt implementing container scopes
I tried automating Deptrac usage, i.e. adding the newly created module to the list of regulated architectural layers, but its config is in YAML, so I moved to phparkitect, which uses PHP to set the rules. I still can't find a library for programmatically updating PHP files/classes, but this is more dynamic for me than YAML. I set out to implement their library; turns out the entire logic is dumped into the command class, so I can neither control it without the CLI nor automate tests against it. I take the command apart, connect it to Suphle and run. Guess what — it detects class parents as violations of the rule. Wtflyingfuck?!
As if that's not bad enough, roadrunner (that old biatch!) server setup doesn't fail when an initialization script fails. If the initialization script is moved into the application code itself, server setup crumbles and takes your initialization stuff down with it. I ping the maintainer, rustacian (god bless his soul), who informs me point blank that what I'm trying to do is not possible. Fuck it. I have to write a wrapper command for sequentially starting the server (or not starting, if the initialization operations don't all succeed).
Legitimate case to reinvent the wheel. I restored my deleted decorators that did dependency sanitation for me at runtime. The remaining piece of the puzzle was a recursive file iterator to feed the decorators. I checked my file system reader for clues on how to implement one and boom! The one I'd written for two other features was compatible. All I had to do was refactor the decorators into dependency rules and give them fancy interfaces for customising and filtering what classes each rule should actually evaluate. In a night's work (if you're discrediting how long writing the original sanitization decorators and directory iterator took), I coupled together the Deptrac/phparkitect library of my dreams. This is one of those few times I feel like a supreme deity
Hope I can eat better and get some sleep. This meme is me after getting bounced by those three library rejections