Search - "no deletion"
-
Fuxk yeah! My code works! It's 2AM, I'm happy and there's no one around, so I wrote a poem :-P
What was once impossible,
Is now close to completion,
Thanks to my debug statements,
Which now await their deletion.
-
I was asked to look into a site I haven't actively developed in about 3-4 years. It should be a simple side gig.
I was told this site has been actively developed by the person who came after me, and this person had a few other people help out as well.
The most daunting task in my head was to go through their changes and see why stuff is broken (I was told functionality had been removed, things were changed for the worse, etc etc).
I ssh into the machine and it works. For SOME reason I still have access, which is a good thing since there's literally nobody to ask for access at the moment.
I cd into the project, do a git remote get-url origin to see if they've changed the repo location. Doesn't work. There is no origin. It's "upstream" now. Ok, no biggie. git remote get-url upstream. Repo is still there. Good.
Just to check, see if there's anything untracked with git status. Nothing. Good.
What was the last thing that was worked on? git log --all --decorate --oneline --graph. Wait... Something about the commit message seems familiar. git log. .... This is *my* last commit message. The hell?
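For anyone following along at home, the whole local check boils down to a handful of commands (roughly; the path is a placeholder, the remote names are whatever happens to be on that box):

cd /var/www/project                          # placeholder path
git remote -v                                # "origin" is gone, it's "upstream" now
git remote get-url upstream                  # repo is still in the same place
git status                                   # nothing untracked, nothing modified
git log --all --decorate --oneline --graph   # ...and the last commit is mine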
I open the repo in the browser, login with some credentials my browser had saved (again, good because I have no clue about the password). Repo hasn't gotten a commit since mine. That can't be right.
Check branches. Oh....Like a dozen new branches. Lots of commits with text that is really not helpful at all. Looks like they were trying to set up a pipeline and testing it out over and over again.
A lot of other changes including the deletion of a database config and schema changes. 0 tests. Doesn't seem like these changes were ever in production.
...
At least I don't have to rack my head trying to understand someone else's code but.... I might just have to throw everything that was done into the garbage. I'm not gonna be the one to push all these changes I don't know about to prod and see what breaks and what doesn't break.
I feel bad for whoever worked on the codebase after me, because all their changes are now just a waste of time and space that will never be used.
-
I was checking out the wk139 rants & thinking to myself how does one even have a dev enemy.. o.O Well TIL that maaaaybe I have one too..
Not sure if my ex-coworker was a bit 'weird & unskillful' or wanted to intentionally harm us and, thank god, failed miserably..
I decided to finally clean up his workspace today: he had a bad habit of keeping almost all files in the solution checked out to himself, most of them containing no changes whatsoever... I reminded him on many occasions that this is bad practice & to only keep checked out the files he was currently working on. And to never check in files without changes.. Ofc he didn't listen.. managed to check in over 100 files at one time, most of which had no changes & some even had alerts for debugging in them.. which ofc made it to the client server.. :/
On one or two occasions I already logged in and wanted to check if the files had any real changes that I'd actually want to keep, but gave up after 40 or so files in a batch that were either the same or full of sh..
Anyhow, today I decided I will discard everything, as the codebase has changed a lot since he left and I know I already fixed a lot of his tasks.. I logged in, did the undo pending changes and then proceeded to open the source control explorer.
While I was cleaning up his workspace, I figured I could test what will happen if I request changeset xy and shelveset yy - will it be ok, or do I have to modify something else & merge code.. Figured using his workspace that was already set up for testing would be easier, faster & less 'stressful' than creating another one on my computer, changing IIS settings and all, just to test this merge..
Boy was I wrong.. upon opening source control explorer, I was greeted by a lot of little red Xes staring back at me... more than half the folders on TFS were marked for deletion.. o.O
Now I'm not sure if he wanted to fuck me up when he left or was just 'stupid' when it comes to TFS. O.O
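For the record, checking and nuking someone else's pending changes doesn't even need his machine; with the right TFS permissions it's roughly this from the tf.exe command line (a sketch, all workspace/owner/collection/project names below are made-up placeholders):

# list his workspaces
tf workspaces /owner:ExCoworker /collection:https://tfs.example.com/DefaultCollection
# see what he still has checked out
tf status /recursive /workspace:HisWorkspace;ExCoworker /collection:https://tfs.example.com/DefaultCollection
# throw away all his pending changes, pending deletes included
tf undo /recursive /workspace:HisWorkspace;ExCoworker "$/TheProject" /collection:https://tfs.example.com/DefaultCollection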
So...maybe I do have a dev enemy after all.. or I don't.. Can't decide.. all I know for sure is tomorrow I'm creating another workspace to test this and I'm not touching his computer ever again.. O.O
-
I opened an issue on a repo telling the owner that placing a "test passing" badge on the readme while having no tests other than an "ExampleTest", and none for the actual functionality, is bad practice, and asking what he thinks about updating the readme.
The result was a deletion (not close) of the issue and a ban from contributing (issues, PRs) on any of his projects.
And it was not some small "ten people use this" project but a large boilerplate project with 2.4k GitHub stars and over 800 forks. You would expect a little more professionalism from someone with that popularity.
-
The global joke of Information Security
So I broke my iPhone because the nuclear adhesive turned my display into a shopping bag.
This started the ride for my character arc in this boring dystopia novel:
Amazon is preventing me from accessing my account because they want my password, email AND mobile phone number in their TWO-STEP Verification.
Just because one too many scammers managed to woo one too many 90+ y/o's into bailing their long lost WW2 comrades out of a Nigerian jail with Amazon gift cards, and Amazon doesn't know what to do about it anymore,
DHL is keeping my new phone in a "highly secure" vault 200m away from my place, waiting for a letter to register some device with a camera because you need to verify your identity with an app,
all the while my former car insurance is making regress claims of about 7k€ against me for a minor car accident (no-one hurt fortunately, but was my fault).
Every rep from each of the above had the same stupid bitchass scapegoat to create high-tech supra chargers to the account deletion request:
- Amazon: We need to verify your password, whether the email was yours and whether the phone number is yours.
They call it 2-step-verification.
Guess what Amazon requests to verify you before contacting customer support, since you don't have access to your number? Your passwoooooord. While you're at it, click on that button we sent you, will ya? ...
I call this design pattern the "demented Tupi-Guarani"
- DHL: We need an ID to verify your identity for the request for changing the delivery address you just made. Oh you wanted to give us ANOTHER address than the one written on your ID? Too bad bro, we can't help, GDPR
- Car Insurance: We are making regress claims against you, which might throw you back into mom's basement. Oh, and also we compensated the injured party for something else - it doesn't matter what it is, but it's definitely something - so our claims against you just rose by 1.2k. Wait, you want proof we compensated the injured at all? Nah mate, we can't do that, GDPR. But trust me, those numbers are legit, my quant forecasted the cost of children's christmas wishes. You have 14 days or we'll see you in court haha
I am also their customer in a pension scheme. Something special to Germany, where you save some taxes but have to pay them back once you get the fund paid out. I have sent them a letter to terminate the contract.
Funniest thing is, the whole rant is my second take. Because when I hit the post button, devrant made me verify my e-mail. The text was gone afterwards. If someone from devRant reads this, you are free to quote this in the ticket description.
Fuck losing your virginity, or filing your first tax return, or by God getting your first car - living through this sad Truman dystopia without going batshit insane is what becoming a true adult is.
I am grateful for all this though:
Amazon's safety measures prevented me from spending the money I can use to conclude the insurance odyssey, and DHL's "giving a fuck about customers" prevention policies made me support local businesses. And having ranted all this here does feel healthy too. So there's that.
Oh, cherry on top. I can't check my balance, because I can only verify my login requests to my banking account wiiiiiiith...?
-
9000 internet cookie points to whoever figures out this shit:
I'm trying to import a secret gpg key into my keyring.
If I run "gpg2 --import secring.gpg" and manually type each possible password that I can think of, the import fails. So far, nothing unusual.
HOWEVER
If I type the same passwords into a file and run:
echo pwfile.txt | gpg2 --batch --import secring.gpg
IT ACTUALLY FUCKING WORKS
What the fuck??? How can it be that whenever I type the pw manually it fails, but when I import it from a file it works??
And no, it's not typos: I could type those passwords blindfolded from muscle memory alone, and still get them right 99% of the time. And I'm definitely not blindfolded right now.
BUT WAIT, THERE'S MORE!!
Suppose my pwfile.txt looks something like this:
password1
password2
password3
password4
password5
password6
Now, I'm trying to narrow it down and figure out which one is the right password, so I'm gonna split the file in two parts and see which one succeeds. Easy, right?
$ cat pw1.txt
password1
password2
password3
$ cat pw2.txt
password4
password5
password6
$ echo pw1.txt | gpg2 --batch --import secring.gpg
gpg: key 149C7ED3: secret key imported
$ gpg2 --delete-secret-key "149C7ED3"
[confirm deletion]
$ echo pw2.txt | gpg2 --batch --import secring.gpg
gpg: key 149C7ED3: secret key imported
In other words, both files successfully managed to import the secret key, but there are no passwords in common between the two!!
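(Side note on what that pipe actually feeds gpg2 - echo prints its argument, not the file's contents - and, for completeness, the option gpg2 itself has for passphrase files; newer GnuPG additionally wants the loopback pinentry flag. A sketch, not what I ran:)

$ echo pw1.txt
pw1.txt
$ cat pw1.txt | head -1
password1
$ gpg2 --batch --pinentry-mode loopback --passphrase-file pw1.txt --import secring.gpg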
Am I going retarded, or is there something really wrong here? WTF!
-
GitHub Packages Sucks. Like, it REALLY sucks.
It sounds like the best thing in the world - being able to host your project packages alongside your code! It has full support for Maven, Gradle, Ruby Gems, Node packages, Docker images and even dotnet CLI applications. It even lets you view statistics on how many developers have downloaded a given package! For public repositories, the packages are free to host as well!
So, I decide to use it for my Maven project since it's "so great". I've never used a public Maven repository before, so this was all very new to me. I follow the documentation - simply run "mvn deploy ...." and use a generated GitHub personal access token. No problems there. Deployment is a success and I feel a wave of happiness seeing my packages online. I follow through the various links and it even adds automatically generated usage information for other Maven users - fantastic!
That was, until I decide to try and download one of the files from this package repository. In order to download a file, you must have a GitHub access token. Okay, makes sense I guess? What if another developer wants to use my library? To do so, they have to generate their own GitHub access token, store it in their local ~/.m2/settings.xml file and only THEN can they use my library. So clearly, this is significantly inferior to other public Maven repositories where you don't have to get an access token to simply USE a library.
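To put that in perspective, this is roughly what every single consumer has to set up just to resolve one dependency (a sketch; OWNER/REPO, the username and the token are placeholders, and the heredoc would of course overwrite an existing settings.xml):

cat > ~/.m2/settings.xml <<'EOF'
<settings>
  <activeProfiles>
    <activeProfile>github</activeProfile>
  </activeProfiles>
  <profiles>
    <profile>
      <id>github</id>
      <repositories>
        <repository>
          <id>github</id>
          <url>https://maven.pkg.github.com/OWNER/REPO</url>
        </repository>
      </repositories>
    </profile>
  </profiles>
  <servers>
    <server>
      <id>github</id>
      <username>GITHUB_USERNAME</username>
      <password>PERSONAL_ACCESS_TOKEN</password>
    </server>
  </servers>
</settings>
EOF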
Upon discovering this, I decide to simply delete all of the packages and continue using whatever previous system I was using. Except of course, they forbid the deletion of public packages because "other projects could depend on it". The only way to delete public packages is to either:
[0] Make the repository private (losing all stargazers and watchers), delete the packages and then make the repository public again
[1] Contact support and ask them to delete the public packages. They say that they'll only do this for "special cases", such as legal issues or GDPR breaches.
I've sent a contact form and I'm currently hoping that they see things in my favor. I mean seriously - a public package repository where in order to use it you have to have a GitHub account and then generate an authentication token - it's absurd!
-
Moving files is emotionally easier than copying and deleting files, and moving eliminates the risk of selecting the wrong files at the deletion part.
I have read that it is safer to manually copy and then manually delete files rather than to move them, but copying and deleting has a hidden risk that was not mentioned: selecting the wrong files for deletion.
Moving files feels like moving an obstacle from one room to another. The deletion part of copying and deleting feels like destroying something, which is an added emotional barrier.
Technically, copying and deleting is safer, since there is no risk of source files being deleted without having been transferred, as a result of a device disconnecting or the buggy media transfer protocol (MTP) failing to load the entire file list. However, on mass storage devices this has pretty much never happened to me, and on MTP, data loss can be avoided by not moving folders but opening the source folders, selecting all files and moving those out. This prevents a parent folder with an incompletely loaded file listing from being deleted.
However, copying and deleting carries a risk that is rarely considered: selecting the wrong files in the deletion step. One might end up deleting files that were never copied.
Not only is moving straightforward and time-saving, but it has no emotional barrier, and the risk of selecting the wrong files to delete from the source is eliminated, since a proper file manager like Nemo or Windows Explorer (mass storage only, not MTP) only deletes a moved file from the source after it has been properly transferred. The user does not need to pay attention to selecting the correct files to delete, since the file manager already did it.
-
I hate the elasticsearch backup api.
From beginning to end it's a painful experience.
I try to explain it, but I don't think I will be able to cover it all.
The core concept is:
- repository (storage for snapshots)
- snapshots (actual backup)
The first design flaw is that every backup in a repository is incremental. ES creates an incremental filesystem tree.
Some reasons why this is a bad idea:
- deletion of (older) backups is slow, as newer backups need to be checked for integrity
- you simply have to trust ES that it does the right thing (given the bugs it has... It seems like a very bad idea TM)
- you have no possibility of verification of snapshots
Workaround... Create many repositories, as each new repository forces a full backup...
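Which in practice looks something like this for every "full" backup cycle (a sketch; host, repository name and path are placeholders):

# register a brand new fs repository; since it shares nothing with the older ones,
# the first snapshot into it is effectively a full backup
curl -X PUT "localhost:9200/_snapshot/backup_2020_07" -H 'Content-Type: application/json' -d '
{
  "type": "fs",
  "settings": { "location": "/mnt/nfs/es-backups/2020_07" }
}'

# then snapshot everything into it
curl -X PUT "localhost:9200/_snapshot/backup_2020_07/full_001?wait_for_completion=false"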
The second thing: ES scales. Many nodes / es instances form a cluster.
Usually backup APIs incorporate these in their design. ES does not.
If an index spans 12 nodes and you use network storage, yes: a maximum of 12 nodes will each open an e.g. NFS connection and start backing up.
It might not sound so bad with 12 nodes and one index...
But it gets pretty bad with hundreds of indices and several dozen nodes...
And there is no real limiting in ES. You can plug a few holes, but all in all, if you don't plan your backups carefully, you'll get some pretty f*cked up network congestion.
So traffic shaping must be manually added. Yay...
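About the only built-in knobs are the per-repository byte rate throttles, set when (re)registering the repository (a sketch, values are placeholders - and calling that traffic shaping is generous):

curl -X PUT "localhost:9200/_snapshot/backup_2020_07" -H 'Content-Type: application/json' -d '
{
  "type": "fs",
  "settings": {
    "location": "/mnt/nfs/es-backups/2020_07",
    "max_snapshot_bytes_per_sec": "40mb",
    "max_restore_bytes_per_sec": "40mb"
  }
}'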
The last thing is the API itself.
It's a... very fragile thing.
Especially in older ES releases, the documentation is like handing you an angle grinder instead of toilet paper for a wipe.
Documentation != API != Reality.
Especially the fault handling left me more than once speechless...
Eg:
/_snapshot/storage/backup
gives you a state PARTIAL
/_snapshot/storage/backup/_status
gives you a state SUCCESS
Why? The first one is blocking and refers to the backup status itself. The second one shouldn't be blocking and refers to the backup operation.
And yes. The backup operation state is SUCCESS, while the backup state might be PARTIAL (i.e. no full backup was made, there were errors).
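In curl terms, the two calls that disagree (host is a placeholder, states as reported above):

# snapshot info: refers to the backup itself - this is the one that came back PARTIAL
curl -s "localhost:9200/_snapshot/storage/backup?pretty"

# snapshot status: refers to the backup operation - this is the one that came back SUCCESS
curl -s "localhost:9200/_snapshot/storage/backup/_status?pretty"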
So we now have an additional API of our own that we query, which then wraps the Elasticsearch API. With all these shiny scary workarounds like polling, since some calls are blocking, which might lead to a gateway timeout...
Gateway timeout? Yes. Since some operations can run a LONG time (multiple hours) and you don't want a ton of open connections hogging resources... You let the loadbalancer kill them. Most operations simply keep running in ES in the background after the connection was killed.
So much joy and fun, isn't it?
Now add the latest SMR scandal and a few faulty (as in SMR instead of CMR) HDDs in a hundred-terabyte ZFS pool and you'll get my frustration level.
PS: The cluster has several dozen terabytes and a lot of nodes. If you have good advice, you're welcome - but please think carefully about this fact.
I might have accidentally vaporized people sending me links with solutions that don't work at large scale TM.
-
My work product: Or why I learned to get twitchy around Java...
I maintain a Java based test system that tests a raster image processor. The client is a Java Swing project that contains CORBA bindings to the internal API of the raster image processor. It also has custom written UI elements and duplicated functionality that became available in later versions of Java, but because some of the third party tools we use don't work with later versions of Java for some reason, it's not possible to upgrade Java to gain things as simple as recursive directory deletion. Yes, the version of Java we have to use does not support something as simple as that, and custom code had to be written for it.
Because of the requirement to build the API bindings along with the client the whole application must be built with the raster image processor build chain, which is a heavily customised jam build system. So an ant task calls out to execute a jam task and jam does about 90% of the heavy lifting.
In addition to the Java code there's code for interpreting PostScript files, as these can be used to alter the behaviour of the raster image processor during testing.
As if that weren't enough, there's a beanshell interface to allow users to script the test system, but none of the users know Java well enough to feel confident writing interpreted Java scripts (and that's too close to JavaScript for my comfort). I once tried swapping this out for the Rhino JavaScript interpreter and got all the verbal support in the world but no developer time to design an API that'd work for all the departments.
The server isn't much better though. It's a Tomcat based application that was written by someone who had never built a Tomcat application before, or any web application for that matter. It uses raw SQL strings instead of an ORM, it doesn't use MVC in any way, and an insane amount of functionality is dumped into the JSP files.
It too interacts with a raster image processor to create difference masks of the output, running PostScript as needed. It spawns off multiple threads and can spend days processing hundreds of gigabytes of image output (depending on the size of the tests).
We're stuck on Tomcat 7 because we can't upgrade beyond Java 6, which brings a whole manner of security issues, but that eager little Java updater will break the tool chain if it gets its way.
Between these two components we have the Java RMI server (sometimes) working to help generate image data on the client side, before all images are pulled across a UNC network path onto the server that processes test jobs (in PDF format) by reading into the xref table of said PDF, finding the embedded image data (for our server, consumed test files are just flate encoded TIFF files wrapped in just enough PDF to make them valid) and using a tool to create a difference mask of two images.
This tool is very error prone, it can't difference images of different sizes, colour spaces, orientations or pixel depths, but it's the best we have.
The tool is installed on both the client and the server; if the client can generate images, it'll query the server for which ones it needs to, and if it can't, the server will use the tool itself.
Our shells have custom profiles for linking to a whole manner of third party tools and libraries, including a link to Visual Studio 2005 (more indirectly related build dependencies). The whole profile has to ensure that absolutely no operating system pollution gets into the shell; most of our apps are installed in our home directories and we have to ensure our paths are correct for every single application we add.
And... Fucking and!
Most of the tools are stored as source bundles in a version control system... Not git or mercurial, not perforce or svn, not even CVS... They use a custom built version control system that is built on top of RCS. It keeps a central database of locked files (using soft and hard locks along with write protecting the files in the file system) to ensure users can't get merge conflicts, by preventing other users from writing to the files at all.
Branching is heavyweight and can take the best part of a day to create a new branch and populate the history.
Gathering the tools alone to set up the dev environment to build my project takes the best part of a week.
What should be a joy come hardware refresh year becomes a curse ("Well fuck, now I lose a week setting up the dev environment on ANOTHER machine").
Needless to say, I enjoy NOT working with Java. A lot of this isn't Javas fault, but there's a lot of things that Java (specifically the Java 6 version we're stuck on) does not make easy.
This is why I prefer to build my web apps in python or node, hell, I'd even take Lua... Just... Compiling web pages into executable Java classes, why? I mean I understand the implementation of how this happens, but why did my predecessor have to choose this? Why?
-
I am always afraid to press that delete button on my rant which currently has 165 ++; even though it might have a confirmation message, I do not trust that button.
-
The "recycle bin" feature of Samsung "My Files" is amazing for data loss prevention when moving files out of the smartphone.
There used to be two ways to move files out of the smartphone to make space free. One is direct moving, the other is copy-deletion. The first is self-explanatory, the second means first copying the files and then deleting them on the phone.
Thanks to the recycle bin, which keeps data for a month, files on the phone can be copied out and then put into the recycle bin instead of immediately deleted.
This means that if the copying was incomplete, there is a thirty-day grace period to get the files back from the phone.
The benefit of moving files instead of copy-deleting them is the lack of the deletion step. Moving files out directly does not have the emotional barrier of deleting the files from source like the deletion step of copy-deleting does.
Moving files feels like moving items to a new room, where as the deletion step after copying feels like destroying something.
So why not move files out? Because there is a risk of data loss if the device disconnects while files are moved to an USB OTG device. Due to write buffering, files that are moved out might be deleted on the phone shortly before they are completely written on the USB-OTG.
This is not an issue with MTP (Windows or Linux through USB cable) because the file systems are managed by the computer, so if the phone disconnects while files are moved out of the phone using MTP, the file system is kept intact by Windows or Linux.
Now, thanks to the recycle bin, there is no emotional barrier to deletion, because the files on the phone are automatically deleted after 30 days without any further action from the user. The user can press the "delete" button without worries, knowing "I can get it back until a month from now anyway".