Search - "deletion"
-
Fuxk yeah! My code works! It's 2AM, I'm happy and there's no one around, so I wrote a poem :-P
What was once impossible,
Is now close to completion,
Thanks to my debug statements,
Which now await their deletion.
-
We were all 16 once, right? When I was 16, my school had a network of Windows 2000 machines. Since I was learning Java at the time, I thought learning batch scripting would be fun.
One day I wrote a script that froze input from the mouse and displayed a pop up with a scary “Critical System Error: please correct before data deletion!!”. It also displayed a five minute countdown timer, after which the computer restarted.
I may or may not have replaced the Internet Explorer icon on the desktop with a link to my program across the entire student lab of computers. Chaos.
-
I misclicked an NSFW channel on Discord and got a dialog asking my age. I wasn't interested in loading the channel, and you cannot close this dialog - it even reappears if you restart the app, because the channel will still be selected.
I input 0 years just to cancel, which led to an instant account ban and an email about scheduled deletion. In order to retain my account I need to send in selfies of myself holding my ID.
That's... a surprising user flow from a misclick. May I suggest a little x in the corner, as we professionals call it.
-
The Captain Obvious commenter:
// account deletion
account.delete()
// check results
if results.any()
.... which eventually leads to confusing, unmaintained comments
-
Hide Easter Eggs in your code
In my first program we had a secure file deletion feature
I was tasked with the Mac OS version.
While the Windows version had a drag-and-drop icon with a document in a trash bin, in my version, when you selected different safety options, the icon changed:
Basic deletion had the bin
Intermediate deletion had a document grinder
Advanced deletion had a burning file icon
I was very proud of myself
-
git commit -m "Forgot a semicolon"
[master 92asd32] Forgot a semicolon
1 file changed, 1 insertion(+), 1 deletion(-)
-
I woke up early and thought I could finish some database work today.
Being tipsy, I deleted two production tables and shit my pants.
Luckily I've been doing backups since the last retarded db deletion.
-
I was asked to look into a site I haven't actively developed in about 3-4 years. It should be a simple side gig.
I was told this site has been actively developed by the person who came after me, and this person had a few other people help out as well.
The most daunting task in my head was to go through their changes and see why stuff is broken (I was told functionality had been removed, things were changed for the worse, etc etc).
I ssh into the machine and it works. For SOME reason I still have access, which is a good thing since there's literally nobody to ask for access at the moment.
I cd into the project, do a git remote get-url origin to see if they've changed the repo location. Doesn't work. There is no origin. It's "upstream" now. Ok, no biggie. git remote get-url upstream. Repo is still there. Good.
Just to check, see if there's anything untracked with git status. Nothing. Good.
What was the last thing that was worked on? git log --all --decorate --oneline --graph. Wait... Something about the commit message seems familiar. git log. .... This is *my* last commit message. The hell?
I open the repo in the browser, login with some credentials my browser had saved (again, good because I have no clue about the password). Repo hasn't gotten a commit since mine. That can't be right.
Check branches. Oh....Like a dozen new branches. Lots of commits with text that is really not helpful at all. Looks like they were trying to set up a pipeline and testing it out over and over again.
A lot of other changes including the deletion of a database config and schema changes. 0 tests. Doesn't seem like these changes were ever in production.
...
At least I don't have to rack my head trying to understand someone else's code but.... I might just have to throw everything that was done into the garbage. I'm not gonna be the one to push all these changes I don't know about to prod and see what breaks and what doesn't break.
I feel bad for whoever worked on the codebase after me, because all their changes are now just a waste of time and space that will never be used.
-
Jesus Fuck, is it so hard to slap a motherfucking 'Delete Account' button somewhere on that trash pile of 5000 different JavaScript frameworks and Bootstrap you call a website?!
No, I don't want to deactivate it, I want you to DELETE all the information you have on me, preferably without having to fucking beg some low-life support agent in India (no offense intended) via e-mail to do his goddamn duty...
-
!rant but story
https://devin.xyz (v.0.0.1)
My quick and semi-ugly solution to save amazing rants and comments forever, in a more organized way.
What it is and it will be:
- archive of rants and comments from devrant that I found very good
- the original ranters will be informed when their rants are archived
- the original ranters and/or the management team of devRant has the right to request the archive content's total deletion
- every single thing on there will be accessible by anyone anytime anywhere (as long as the server is healthy)
- open-source
What it may become:
- anyone can register and save their archive
- dev content archive from other sources
- dev articles blog
What it will never have/be:
- any form of payment
- ads
- tracking (I don't even wanna know how many users are viewing)
- non dev related content
- devRant
I'm willing to create user accounts for anyone interested in the very near future. So please buzz me here if you want one.
So far it's a website of Laravel + Voyager + Bulma with very minimal custom code (I had to write under 100 lines of code in total). It is on a Vultr server.
I'm gonna maintain and update it as much as I can in my spare time. Hence I don't consider this a collab. However, the code is in a private GitLab repo. I'll make the repo public soon as well. Any contribution is gladly welcome. 😄
-
So we found an interesting thing at work today...
Prod servers had 300GB+ in locked (deleted) files. Some containers had marked them for deletion, but we think the containers kept the deleted files open.
300 GB of ‘ghost’ space being used and `du` commands were not helping to find the issue.
This is probably a more common issue than I realize, as I’m on the newer side to Linux. But we got it figured out with:
`lsof / | grep deleted`
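For anyone who hits the same thing, a rough sketch of what that looks like (the PID, fd number and path below are invented for illustration):

lsof / | grep deleted
# java  4242  app  7w  REG  8,1  107374182400  /var/log/app/huge.log (deleted)

# optionally reclaim the space without a restart by truncating the still-open file through /proc:
: > /proc/4242/fd/7
-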
WINDOWS FUCKING DELETED EVERYTHING FROM MY HD...... I'm so freaking annoyed right now... Switching to Linux right away!
Windows.users--;
-
This is a short tale that can be summed up as "oh fuck meee".
After finishing an API the night before, I settled in for a day of bug fixes and tidy-ups. Until Slack went off.
The front-end dev was getting an error, a code-breaking error. After doing the standard process of request checking, I went "okay, must be me". I found the script that has the error and the line it was failing at.
Cue 2 hours of the full cycle of anger, sadness, pleading, and finally accepting that it had finally happened: I had gone insane. The code was correct per documented best practice, and it still had the same error.
I then checked the DB on a whim and found that my code was not wrong; it was doing exactly what I wanted. The data, however, had a single old record, and the schema had changed juuussstt enough to break everything at that record. One 3-second deletion later, the code ran perfectly.
-
Last time a client got hacked... we just could not get rid of the malware... it replicated itself shortly after deletion.
Ended up creating the same files with zero content and setting them read only.
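Roughly, for each infected file (the file name is an example; chattr is optional and needs root plus an ext filesystem):

: > infected.php         # recreate it with zero content
chmod 444 infected.php   # read-only, so the malware can't rewrite it
chattr +i infected.php   # belt and braces: make it immutable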
Not clean, but enough to sleep.
-
I was checking out these wk139 rants & thinking to myself, how does one have a dev enemy.. o.O Well, TIL that maaaaybe I have one too..
Not sure if my ex-coworker was a bit 'weird & unskilled' or wanted to intentionally harm us and, thank god, failed miserably..
I decided to finally clean up his workspace today: he had a bad habit of keeping almost all files in the solution checked out to himself, most of them containing no changes whatsoever... I reminded him on many occasions that this is bad practice & to only have checked out the files he was currently working on. And never check in files without changes.. Ofc he didn't listen.. managed to check in over 100 files one time, most of which had no changes & some even had alerts for debugging in them.. which ofc made it to the client server.. :/
On one or two occasions I already logged in and wanted to check whether the files had any real changes that I'd actually want to keep, but gave up after 40 or so files in a batch that were either the same or full of sh..
Anyhow, today I decided I would discard everything, as the codebase has changed a lot since he left & I know I already fixed a lot of his tasks.. I logged in, did the undo pending changes and then proceeded to open Source Control Explorer.
While I was cleaning up his workspace, I figured I could test what would happen if I requested changeset xy and shelveset yy: will it be ok, or do I have to modify something else & merge code.. Figured using his workspace that was already set up for testing would be easier, faster & less 'stressful' than creating another one on my computer, changing IIS settings and all, just to test this merge..
Boy was I wrong.. upon opening Source Control Explorer, I was greeted by a lot of little red Xes staring back at me... more than half the folders on TFS were marked for deletion.. o.O
Now I'm not sure if he wanted to fuck me up when he left or was just 'stupid' when it comes to TFS. O.O
So...maybe I do have a dev enemy after all.. or I don't.. Can't decide.. all I know for sure is tomorrow I'm creating another workspace to test this and I'm not touching his computer ever again.. O.O
-
I opened an issue on a repo telling the owner that placing a "tests passing" badge on the readme while having no tests other than an "ExampleTest", and no tests of the actual functionality, is bad practice, and asking what he thinks about updating the readme.
The result was a deletion (not a close) of the issue and a ban from contributing (issues, PRs) to any of his projects.
And it was not some small "ten people use this" project, but a large boilerplate project with 2.4k GitHub stars and over 800 forks. You would expect a little more professionalism from someone with that popularity.
-
I'm currently in the process of deleting WhatsApp and migrating to Signal.
The hardest part of it is dealing with friends and family. I informed them about the upcoming WhatsApp deletion tomorrow and the results were mixed. One friend told me she will not use Signal, but I haven't talked with her that much anyway so... My mother asked me "can you not do this, because I don't have space left on my phone?", my father told me "can you tell me about [...] before you go offline?", 2 people don't seem to care, and only my cousin has contacted me on Signal yet.
I have had Signal for 3 years now and have even invited people to it, but I got the expected response: "but all my friends are on WhatsApp". Until recently I was the one with the shitload of messengers on my phone, but some people can't be bothered to install a second one because I want to take one step (out of many to follow) to improve my privacy.
I'm really pissed by now and will declare any contact lost due to this as collateral damage.
-
At what point aren't you allowed to touch your own AI anymore?
When it's capable of speech?
When it shows emotions?
When it becomes self aware?
When does it stop being file deletion and start becoming murder?
-
If I had to audit my current code I'd definitely stick a cactus up my arse, shouting into the mirror:
ALL YOUR CODE IS GOOD FOR IS ULTIMATE DELETION. YOU FILTHY MAGGOT! LEARN TO CODE... *rage quit*
Really, coding shit in my spare time simply makes me rip my face off 💀
-
I accidentally deleted a folder containing contracts and files worth millions.
There's no backup. 😭😭😭. EaseUS didn't help with the entire recovery.
.
.
.
.
.
.
.
.
.
.
.
.
J K. I set up a recurring backup every week 😌. Hadn't made any changes in the past week. 😂.
-
Got a new job at a fairly large IT firm which deals with large scale business software for customers like the government's various agencies.
The very first job I'm assigned to: we have to strip down this software and make it more general, go ahead and delete everything related to <feature>.
I haven't had time to get to know the product and I've deleted hundreds of files and lines of code from related files...
I have a feeling this will bite me somehow
-
>TINFOIL GUYS!!!!!
Guys, don't just deactivate your FB account, request deletion. Because if you deactivate and then use any service, like a page login or a social service like Insta, and you use your FB credentials to log in, the account gets reactivated and your news feed even shows what you have missed in the meantime. So all of your data is still on their servers.
-
Stakeholder: Can you investigate the problem with this user profile? We made updates to system A, but the user is saying the website shows the wrong info.
Me: Looks fine to me. Looks like your updates just needed time to trickle down. Though, you will need to clean up this user’s data because it can cause X problems. There’s not much I can do since the site just displays info from system A.
SH: Can you delete the user’s website account and we can ask user to create a new one?
Me: …Ok, let’s try this again. It’s not necessary to delete the account and make the user create a new one. It’s not going to resolve the X problems that I mentioned. The website really needs clean data from system A.
-
Wanted to scrub my presence off of Facebook, but wanted to keep the account to stay in touch with friends.
That's why I built a small command-line tool using Node.js & Puppeteer to automate the deletion of my Facebook posts, so as not to resort to third-party apps that you hand over your credentials to. What do you guys think?
https://github.com/ar-maged/...
-
The global joke of Information Security
So I broke my iPhone because the nuclear adhesive turned my display into a shopping bag.
This started the ride for my character arc in this boring dystopia novel:
Amazon is preventing me from accessing my account because they want my password, email AND mobile phone number for their TWO-STEP verification.
Just because one too many scammers managed to woo one too many 90+ y/o's into bailing their long-lost WW2 comrades out of a Nigerian jail with Amazon gift cards, and Amazon doesn't know what to do about it anymore,
DHL is keeping my new phone in a "highly secure" vault 200m away from my place, waiting for a letter to register some device with a camera because you need to verify your identity with an app,
all the while my former car insurance is making regress claims of about 7k€ against me for a minor car accident (no-one hurt fortunately, but was my fault).
Every rep from each of the above had the same stupid bitchass scapegoat for bolting high-tech superchargers onto the account deletion request:
- Amazon: We need to verify your password, whether the email was yours and whether the phone number is yours.
They call it 2-step-verification.
Guess what Amazon requests to verify you before contacting customer support, since you don't have access to your number? Your passwoooooord. While you're at it, click on that button we sent you, will ya? ...
I call this design pattern the "demented Tupi-Guarani".
- DHL: We need an ID to verify your identity for the request for changing the delivery address you just made. Oh you wanted to give us ANOTHER address than the one written on your ID? Too bad bro, we can't help, GDPR
- Car Insurance: We are making regress claims against you, which might throw you back into mom's basement. Oh, and also, we compensated the injured party for something else; it doesn't matter what it is, but it's definitely something, so our claims against you just rose by 1.2k. Wait, you want proof that we compensated the injured at all? Nah mate, we can't do that, GDPR. But trust me, those numbers are legit; my quant forecasted the cost of children's Christmas wishes. You have 14 days or we'll see you in court haha
I am also their customer in a pension scheme. Something specific to Germany, where you save some taxes but have to pay them back once the fund is paid out. I have sent them a letter terminating the contract.
Funniest thing is, the whole rant is my second take. Because when I hit the post button, devrant made me verify my e-mail. The text was gone afterwards. If someone from devRant reads this, you are free to quote this in the ticket description.
Fuck losing your virginity, or filing your first tax return, or by God getting your first car: living through this sad Truman dystopia without going batshit insane is what becoming a true adult is.
I am grateful for all this though:
Amazon's safety measures prevented me from spending the money I can use to conclude the insurance odyssey, and DHL's "giving a fuck about customers" prevention policies made me support local businesses. And having ranted all this here does feel healthy too. So there's that.
Oh, cherry on top. I can't check my balance, because I can only verify login requests to my banking account wiiiiiiith...?
-
9000 internet cookie points to whoever figures out this shit:
I'm trying to import a secret gpg key into my keyring.
If I run "gpg2 --import secring.gpg" and manually type each possible password that I can think of, the import fails. So far, nothing unusual.
HOWEVER
If I type the same passwords into a file and run:
echo pwfile.txt | gpg2 --batch --import secring.gpg
IT ACTUALLY FUCKING WORKS
What the fuck??? How can it be that whenever I type the pw manually it fails, but when I import it from a file it works??
And no, it's not typos: I could type those passwords blindfolded from muscle memory alone, and still get them right 99% of the time. And I'm definitely not blindfolded right now.
BUT WAIT, THERE'S MORE!!
Suppose my pwfile.txt looks something like this:
password1
password2
password3
password4
password5
password6
Now, I'm trying to narrow it down and figure out which one is the right password, so I'm gonna split the file in two parts and see which one succeds. Easy, right?
$ cat pw1.txt
password1
password2
password3
$ cat pw2.txt
password4
password5
password6
$ echo pw1.txt | gpg2 --batch --import secring.gpg
gpg: key 149C7ED3: secret key imported
$ gpg2 --delete-secret-key "149C7ED3"
[confirm deletion]
$ echo pw2.txt | gpg2 --batch --import secring.gpg
gpg: key 149C7ED3: secret key imported
In other words, both files successfully managed to import the secret key, but there are no passwords in common between the two!!
Am I going retarded, or is there something really wrong here? WTF!
-
GitHub Packages Sucks. Like, it REALLY sucks.
It sounds like the best thing in the world - being able to host your project packages alongside your code! It has full support for Maven, Gradle, Ruby Gems, Node packages, Docker images and even dotnet CLI applications. It even lets you view statistics on how many developers have downloaded a given package! For public repositories, the packages are free to host as well!
So, I decide to use it for my Maven project since it's "so great". I've never used a public Maven repository before, so this was all very new to me. I follow the documentation - simply run "mvn deploy ...." and use a generated GitHub personal access token. No problems there. Deployment is a success and I feel a wave of happiness seeing my packages online. I follow through the various links and it even adds automatically generated usage information for other Maven users - fantastic!
That was, until I decide to try and download one of the files from this package repository. In order to download a file, you must have a GitHub access token. Okay, makes sense I guess? What if another developer wants to use my library? To do so, they have to generate their own GitHub access token, store it in their local ~/.m2/settings.xml file and only THEN can they use my library. So clearly, this is significantly inferior to other public Maven repositories where you don't have to get an access token to simply USE a library.
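For reference, the consumer-side setup is something like this in ~/.m2/settings.xml (the server id and placeholders are mine; the id just has to match the repository id in the consumer's pom.xml):

<settings>
  <servers>
    <server>
      <id>github</id>
      <username>YOUR_GITHUB_USERNAME</username>
      <password>YOUR_PERSONAL_ACCESS_TOKEN</password>
    </server>
  </servers>
</settings>

All of that just to consume a public package.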
Upon discovering this, I decide to simply delete all of the packages and continue using whatever previous system I was using. Except of course, they forbid the deletion of public packages because "other projects could depend on it". The only way to delete public packages is to either:
[0] Make the repository private (losing all stargazers and watchers), delete the packages and then make the repository public again
[1] Contact support and ask them to delete the public packages. They say that they'll only do this for "special cases", such as legal issues or GDPR breaches.
I've sent a contact form and I'm currently hoping that they see things in my favor. I mean seriously - a public package repository where in order to use it you have to have a GitHub account and then generate an authentication token - it's absurd!
-
Moving files is emotionally easier than copying and deleting files, and moving eliminates the risk of selecting the wrong files at the deletion part.
I have read that it is safer to manually copy and manually delete files rather than to move them, but copying and deleting has a hidden risk that was not mentioned: selecting the wrong files in the deletion step.
Moving files feels like moving an obstacle from one room to another. The deletion part of copying and deleting feels like destroying something, which is an added emotional barrier.
Technically, copying and deleting is safer, since there is no risk of source files being deleted without having been transferred as a result of a device disconnecting or the buggy media transfer protocol (MTP) failing to load the entire file list. However, on mass storage devices, this pretty much never happened to me, and on MTP, data loss can be avoided by not moving folders but opening the source folders and selecting all files and moving those out. This prevents a parent folder with incompletely loaded file listing from being deleted.
However, something that is not considered about copying and deleting is that the risk of selecting the wrong files in the deletion step exists. One might end up selecting files that were never copied.
Not only is moving straightforward and time-saving, but it has no emotional barrier, and the risk of selecting the wrong files to delete from the source is eliminated, since a proper file manager like Nemo or Windows Explorer (mass storage only, not MTP) only deletes a moved file from the source after it has been properly transferred. The user does not need to pay attention to selecting the correct files to delete, since the file manager already did it.
-
The company I worked for had to do deletion runs of customer data (files and database records) every year, mainly for legal reasons. Two months before the next run they found out that the next year would bring multiple times the amount of objects, because a decade ago they had introduced a new solution whose data would be eligible for deletion for the first time.
The existing process was not able to cope with those amounts of objects and froze to death gobbling up every bit of RAM on the testing system. So my task was to rewrite the existing code, optimize API calls, and somehow I ended up multithreading the whole process. It worked and is most probably still in production today. 💨
-
I hate the elasticsearch backup api.
From beginning to end it's a painful experience.
I try to explain it, but I don't think I will be able to cover it all.
The core concept is:
- repository (storage for snapshots)
- snapshots (actual backup)
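For orientation, the basic flow is roughly this (host, repository name and path are placeholders of mine):

# register a shared-filesystem snapshot repository
curl -X PUT "localhost:9200/_snapshot/storage" -H 'Content-Type: application/json' -d '{"type": "fs", "settings": {"location": "/mnt/es_backups"}}'

# take a snapshot into it, without blocking the request
curl -X PUT "localhost:9200/_snapshot/storage/backup?wait_for_completion=false"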
The first design flaw is that every backup in a repository is incremental. ES creates an incremental filesystem tree.
Some reasons why this is a bad idea:
- deletion of (older) backups is slow, as newer backups need to be checked for integrity
- you simply have to trust ES that it does the right thing (given the bugs it has... It seems like a very bad idea TM)
- you have no way of verifying snapshots
Workaround... create many repositories, as each new repository forces a full backup.........
The second thing: ES scales. Many nodes / es instances form a cluster.
Usually backup APIs incorporate these in their design. ES does not.
If an index spans 12 nodes and you use network storage, yes: up to 12 nodes will open an (e.g. NFS) connection and start backing up.
It might not sound so bad with 12 nodes and one index...
But it gets pretty bad with 100s of indexes and several dozen nodes...
And there is no real limiting in ES. You can plug a few holes, but all in all, when you don't plan your backups carefully, you'll get some pretty f*cked up network congestion.
So traffic shaping must be manually added. Yay...
The last thing is the API itself.
It's a... very fragile thing.
Especially in older ES releases, the documentation is like handing you a flex instead of toilet paper for a wipe.
Documentation != API != Reality.
The fault handling especially left me speechless more than once...
Eg:
/_snapshot/storage/backup
gives you a state PARTIAL
/_snapshot/storage/backup/_status
gives you a state SUCCESS
Why? The first one is blocking and refers to the backup status itself. The second one shouldn't be blocking and refers to the backup operation.
And yes. The backup operation state is SUCCESS, while the backup state might be PARTIAL (i.e. no complete backup was made; there were errors).
So we now have an additional API of our own that we query, which wraps the API of Elasticsearch. With all these shiny scary workarounds like polling, since some APIs are blocking, which might lead to a gateway timeout...
Gateway timeout? Yes. Since some operations can run a LONG time (multiple hours) and you don't want a ton of open connections hogging resources... you let the loadbalancer kill it. Most operations simply keep running in ES in the background after the connection was killed.
So much joy and fun, isn't it?
Now add the latest SMR scandal and a few faulty (as in SMR instead of CMR) HDDs in a hundred-terabyte ZFS pool and you'll get my frustration level.
PS: The cluster has several dozen terabytes and a lot of nodes. If you have good advice, you're welcome - but please think carefully about this fact.
I might have accidentally vaporized people sending me links with solutions that don't work at large scale TM.
-
My work product: Or why I learned to get twitchy around Java...
I maintain a Java-based test system that tests a raster image processor. The client is a Java Swing project that contains CORBA bindings to the internal API of the raster image processor. It also has custom-written UI elements and duplicates functionality that became available in later versions of Java; because some of the third-party tools we use don't work with later versions of Java for some reason, it's not possible to upgrade Java to gain things as simple as recursive directory deletion. Yes, the version of Java we have to use does not support something that simple, and custom code had to be written for it.
Because of the requirement to build the API bindings along with the client the whole application must be built with the raster image processor build chain, which is a heavily customised jam build system. So an ant task calls out to execute a jam task and jam does about 90% of the heavy lifting.
In addition to the Java code there's code for interpreting PostScript files, as these can be used to alter the behaviour of the raster image processor during testing.
As if that weren't enough, there's a beanshell interface to allow users to script the test system, but none of the users know Java well enough to feel confident writing interpreted Java scripts (and that's too close to JavaScript for my comfort). I once tried swapping this out for the Rhino JavaScript interpreter and got all the verbal support in the world but no developer time to design an API that'd work for all the departments.
The server isn't much better though. It's a Tomcat-based application that was written by someone who had never built a Tomcat application before, or any web application for that matter. It uses raw SQL strings instead of an ORM, doesn't use MVC in any way, and an insane amount of functionality is dumped into the JSP files.
It too interacts with a raster image processor to create difference masks of the output, running PostScript as needed. It spawns off multiple threads and can spend days processing hundreds of gigabytes of image output (depending on the size of the tests).
We're stuck on Tomcat 7 because we can't upgrade beyond Java 6, which brings a whole host of security issues, but that eager little Java updater will break the tool chain if it gets its way.
Between these two components we have the Java RMI server (sometimes) working to help generate image data on the client side before all images are pulled across a UNC network path onto the server that processes test jobs (in PDF format). It reads into the xref table of said PDF, finds the embedded image data (for our server, consumed test files are just flate-encoded TIFF files wrapped in just enough PDF to make them valid) and uses a tool to create a difference mask of two images.
This tool is very error prone, it can't difference images of different sizes, colour spaces, orientations or pixel depths, but it's the best we have.
The tool is installed on both the client and the server. If the client can generate images, it'll query the server for which ones it needs to, and if it can't, the server will use the tool itself.
Our shells have custom profiles for linking to a whole manner of third party tools and libraries, including a link to visual studio 2005 (more indirectly related build dependencies), the whole profile has to ensure that absolutely no operating system pollution gets into the shell, most of our apps are installed in our home directories and we have to ensure our paths are correct for every single application we add.
And... Fucking and!
Most of the tools are stored as source bundles in a version control system... Not git or mercurial, not perforce or svn, not even CVS... They use a custom built version control system that is built on top of RCS. It keeps a central database of locked files (using soft and hard locks along with write protecting the files in the file system) to ensure users can't get merge conflicts by preventing other users from writing to the files at all.
Branching is heavy weight and can take the best part of a day to create a new branch and populate the history.
Gathering the tools alone to build the Dev environment to build my project takes the best part of a week.
What should be a joy come hardware refresh year becomes a curse ("Well fuck, now I lose a week setting up the dev environment on ANOTHER machine").
Needless to say, I enjoy NOT working with Java. A lot of this isn't Javas fault, but there's a lot of things that Java (specifically the Java 6 version we're stuck on) does not make easy.
This is why I prefer to build my web apps in Python or Node, hell, I'd even take Lua... Just... Compiling web pages into executable Java classes, why? I mean I understand the implementation of how this happens, but why did my predecessor have to choose this? Why?
-
Finally decided to delete my Facebook account to avoid being distracted. Deletion is scheduled for May 19; after that there is going to be a lot less useless distraction.
-
When GitHub deletes your account because you've used "Malicious Code" in a private repo. (Chrome Password Reader).
-
I am always afraid to press that delete button on my rant which currently has 165 ++; even though it might have a confirmation message, I do not trust that button.
-
Make sure your software does not lose data when improperly quit, and does not allow deletion without a proper confirmation dialogue.
I have experienced pre-installed voice recorder applications that leave behind an unsalvageable corrupt file if the smartphone shuts down due to running out of battery charge, or powers off due to battery undervoltage (as a result of an aged battery).
As is often the case, third-party software beats pre-installed software: the voice recorder "ASR" by "NLL Apps" leaves behind a playable file when unexpectedly quit. That might be because it uses the Ogg Vorbis format rather than M4A or 3GP audio.
Also, the camera software of the Samsung Galaxy Pocket smartphone from 2012 (which was crap anyway) would discard a video file if the recording was quit through the "back" navigation key.
Perhaps this was done deliberately, but it is a terrible idea due to the possibility of accidents happening.
Some gallery software for Android lets the user delete photos and videos by swiping vertically. After this, a so-called "toast" notification appears with an undo button. If not responded to within seconds, or when tapping next to it due to stress, the photo or video is gone. This is, needless to say, terrible design.
-
Registry management tool that keeps track of entries created by software, allowing full deletion of registry entries.
-
I need some help, Salesforce guys.
I am trying to automate Salesforce sandbox creation, then copying the client secret and key from an app, and then using those credentials for some application.
Sandbox creation and deletion are done, but I can't figure out how to fetch the client credentials. I searched the internet, and I only found the GUI method: log in, select the app, select view, get the credentials.
In the end I wrote a shitty Selenium script, but I don't have faith in this approach.
If anybody can give me insight, it would be a great help.
-
Just learned that "Updation" is a thing. Seems to be a common word in India.
https://en.oxforddictionaries.com/d...
Seems logical, if you think about Deletion, Creation. But is there also a Readation?
-
The only time I didn't envy git is when most of the team had to refetch after our lead front-end developer deleted trunk, committed the deletion, and then a backender had to re-base it off his repo. Until then, I thought only my dog could fetch for days.
-
I'm going insane.
So let's say you have an object in the database, with 20-30 related objects (or lists of objects), all of which have a foreign key to the "main" object.
Now, as long as you just edit/create things, everything is fine.
But the deletion... Oh MY GOD
"Just put on delete cascade", right ? RIGHT ?
WRONG ! presence of some objects should block delete, while others can be deleted and some are "situational" depending on the first object state.
So delete operation with all the checks takes .... 1 - 2 seconds.
Seems fine ? WRONG ! When you have 40 or more objects to delete, even 1 second is too long.
Should I say "fuck it" and just write a stored proc which will crash if object cannot be deleted for any reason ? because with Entity Framework, I don't see how I can do it effitenly.
But I HATE stored proc, after couple of month/years noone remembers how they work...
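For the record, the proc I'm imagining is a sketch like this (T-SQL, with invented table names):

CREATE PROCEDURE DeleteMainObject @MainId INT
AS
BEGIN
    -- relations that must block the delete: refuse loudly
    IF EXISTS (SELECT 1 FROM BlockingChild WHERE MainObjectId = @MainId)
        THROW 50001, 'Cannot delete: blocking child objects exist', 1;
    -- relations that may go along with the parent
    DELETE FROM DeletableChild WHERE MainObjectId = @MainId;
    DELETE FROM MainObject WHERE Id = @MainId;
END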
RHAAAAA.
Ok I feel better.
-
The "recycle bin" feature of Samsung "My Files" is amazing for data loss prevention when moving files out of the smartphone.
There used to be two ways to move files out of the smartphone to make space free. One is direct moving, the other is copy-deletion. The first is self-explanatory, the second means first copying the files and then deleting them on the phone.
Thanks to the recycle bin, which keeps data for a month, files on the phone can be copied out and then put into the recycle bin instead of being immediately deleted.
This means that if the copying was incomplete, there is a thirty-day grace period to get the files back from the phone.
The benefit of moving files instead of copy-deleting them is the lack of a deletion step. Moving files out directly does not have the emotional barrier of deleting the files from the source like the deletion step of copy-deleting does.
Moving files feels like moving items to a new room, where as the deletion step after copying feels like destroying something.
So why not move files out? Because there is a risk of data loss if the device disconnects while files are being moved to a USB OTG device. Due to write buffering, files that are moved out might be deleted on the phone shortly before they are completely written to the USB OTG device.
This is not an issue with MTP (Windows or Linux through a USB cable), because the file systems are managed by the computer, so if the phone disconnects while files are moved out of the phone using MTP, the file system is kept intact by Windows or Linux.
Now, thanks to the recycle bin, there is no emotional barrier to deletion, because the files on the phone are automatically deleted after 30 days without further action from the user. The user can press the "delete" button without worry, knowing "I can get it back any time within a month anyway".
-
!Rant
New to the whole front end world so pardon me for such a question.
I have a huge set of data, about 5-10k records, flowing in, which needs to be displayed in tabular form with sorting, filtering, selection, addition and deletion.
Is it wise to sort and filter in the front end? Or make multiple calls to the backend and perform the operations there?
If in the front end, what is the best way to perform these operations?
The application is going to be loaded on a screen and left there for users to view. So even local storage could be an option.
Using Polymer for the frontend, so is there any special tool for this in Polymer?
-
I looked up a well-reputed NGO on Google. And then navigated to their Wikipedia page to learn more about them. And this is what I found—
“Sorry, this page was recently deleted (within the last 24 hours). The deletion, protection, and move log for the page are provided below for reference.”
Why was it deleted? Fraudulent claims? Plagiarism?
-
(tl;dr: are foreign keys good/bad? Can you give a simple example of a situation where foreign keys were the only and/or best solution?)
I have recently been trying to make some apps and their databases, so I decided to take a deeper look at SQL and its queries.
I am a little confused and wanted to know more about foreign keys, joins and this particular DB design technique I use.
Can anyone explain them to me in a simpler way?
First, I wanted to show you this not-so-unheard-of technique for making relations that I find very useful (I guess it's called the TOXI technique):
In this, we use an extra table for joining 2 tables. For example, if we have a table of questions and a table of tags, then we should also have a relation table which maps tags to questions through their primary IDs. This way we can search all questions by tag name, and we can also show multiple tags for a question, just like Stack Overflow does.
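A minimal sketch of that layout (table and column names are my own):

CREATE TABLE questions (id INTEGER PRIMARY KEY, title TEXT);
CREATE TABLE tags (id INTEGER PRIMARY KEY, name TEXT UNIQUE);
-- the extra mapping table: one row per (question, tag) pair
CREATE TABLE question_tags (
    question_id INTEGER NOT NULL,  -- "soft link" to questions.id
    tag_id INTEGER NOT NULL,       -- "soft link" to tags.id
    PRIMARY KEY (question_id, tag_id)
);
-- all questions carrying a given tag
SELECT q.* FROM questions q
JOIN question_tags qt ON qt.question_id = q.id
JOIN tags t ON t.id = qt.tag_id
WHERE t.name = 'sql';

(Declaring question_id and tag_id with REFERENCES would turn those soft links into foreign keys; the database would then refuse rows that point at IDs which don't exist.)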
Now I am not sure what a possible situation is where I'd need a foreign key. In this particular example, questions and tags are joined via what I call a "soft link", and this makes it very scalable and easy to add both questions and new tags.
From what I've learned about foreign keys, a foreign key marks a mandatory one-directional relation between 2 tables (or as I say, a "hard A to B" link).
First, I don't understand how I could use a foreign key to map multiple tags to a question. Does that mean it is always going to make a 1-to-1 relationship between 2 tables? (I have yet to understand what 1-to-1, 1-to-many or many-to-many relations are, not sure if my terminology is correct.)
Second, it poses great difficulty and differences in the logic for adding either a tag or a question, don't you think?
Like, if one table (say questions) has a foreign key to a tag's ID, then the questions table is completely independent of the tag entries.
Its insertion/update/deletion/creation of entries doesn't affect the tags table, but for the tags table we cannot modify a particular tag or delete a tag without causing harm to its associated question entries.
If we have to delete a particular tag, then we have to delete all of its associated questions along with it. This means this is rather a bad thing to use for making tables, isn't it?
I'm just so confused regarding foreign keys, joins and this TOXI approach. Maybe my example of Stack Overflow tags/questions is wrong with respect to foreign keys. But then I would like to know an example where they are useful.