Search - "folder path"
-
Oh, man, I just realized I haven't ranted about one of my best stories on here!
So, here goes!
A few years back the company I work for was contacted by an older client regarding a new project.
The guy was now pitching to build the website for the Parliament of another country (not gonna name it, NDAs and stuff), and was planning on outsourcing the development, as he had no team and was only aiming to take care of the client service/project management side of the project.
Out of principle (and also to preserve our mental integrity), we had purposely avoided working with government bodies of any kind, in any country, but he was a friend of our CEO and pleaded until we signed on board.
Now, the project itself was way bigger than we expected, as they wanted more of an internal CRM: a centralized document archive, event management, internal planning, a multi-interface, role-based-access-restricted monster of an administration interface, complete with a regular user-facing website, also packed with all kinds of features, dashboards and so on.
Long story short, a lot bigger than what we were expecting based on the initial brief.
The development period was hell. New features were coming in on a weekly basis. Already implemented functionality was constantly being changed or redefined. No requests we ever made about clarifications and/or materials or information were ever answered on time.
They also somehow bullied the guy who brought us the project into including the data migration from the old website into the new one we were building. We somehow ended up having to extract meaningful, formatted, sanitized content by parsing static HTML files and connecting it to downloadable files (almost every page on the old website had files available for download) that we also needed to include in a sane way.
Now, don't think the files were simple URL paths we could trace to a folder/file path, oh no!!! The links were some form of hash combination that had to be exploded and tested against some kind of database relationship tables that only had hashed indexes relating to other tables, which also only had hashed indexes relating to some other tables, which kept a record of the website pages' HTML file naming. So what we had to do was identify the files based on a combination of hashed indexes and re-hashed HTML file names that, in the end, would give us a filename for a real file, which we then had to search for inside a list of over 20 folders not related to one another.
So we did this. We created a script that processed the hell out of over 10,000 HTML files, database entries and files, and re-indexed and renamed all this shit into a meaningful database of sane data and well-organized files.
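For flavor, the core of it looked something like this (a from-memory sketch in Python rather than the original code; every table, column and file name here is invented):

import os
import sqlite3

db = sqlite3.connect("legacy_dump.db")  # dump of the old site's database

def resolve_file(page_hash, search_roots):
    # hop 1: hashed page index -> hashed content index
    (content_hash,) = db.execute(
        "SELECT content_hash FROM page_index WHERE page_hash = ?",
        (page_hash,)).fetchone()
    # hop 2: hashed content index -> the real file's name
    (file_name,) = db.execute(
        "SELECT file_name FROM file_index WHERE content_hash = ?",
        (content_hash,)).fetchone()
    # last step: hunt for that name across the ~20 unrelated folders
    for root in search_roots:
        candidate = os.path.join(root, file_name)
        if os.path.exists(candidate):
            return candidate
    return None  # flag for manual review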
So, with this we were nearing the finish line for the project, which by now had exceeded the estimated time by more than two times.
We test everything, retest it all again for good measure, pack everything up for deployment, simulate on a staging environment, give the final client access to the staging version, get them to accept that all requirements are met, finish writing the documentation for the codebase, write a detailed deployment procedure, include some automation and testing tools for good measure, recommend a production setup, hardware specs, software versions, server-side optimizations like caching, load balancing and everything else we could think would ever be useful, all with more documentation and instructions.
As the project was built on PHP/MySQL (as requested), we recommended a Linux environment for production. Oh, I forgot to tell you that over the development period they kept asking us to also include steps for Windows procedures along with our regular documentation. It was a bit strange, but we added it in there just so we could finish and close the damn project.
So, we send them all the above and go get drunk as fuck in celebration of getting rid of them once and for all...
Next day: hung over, I get to the office, open my laptop and see one new email. I only had the one new mail, so I open it to see what it's about.
Lo and behold! The fuckers over in the other country that called themselves "IT guys", and were the ones making all the changes and additions to our requirements, were not capable enough to follow step by step instructions in order to deploy the project on their servers!!!
[Continues in the comments]
-
this.title = "gg Microsoft"
this.metadata = {
rant: true,
long: true,
super_long: true,
has_summary: true
}
// Also:
let microsoft = "dead" // please?
tl;dr: Windows' MAX_PATH is the devil, and it basically does not allow you to copy files with paths that exceed this length. No matter what. Even with official fixes and workarounds.
Long story:
So, I haven't had actual gainful employ in quite a while. I've been earning just enough to get behind on bills and go without all but basic groceries. Because of this, our electronics have been ... in need of upgrading for quite a while. In particular, we've needed new drives. (We've been down a server for two years now because its drive died!)
Anyway, I originally bought my external drive just for backup, but due to the above, I eventually began using it for everyday things. including Steam. over USB. Terrible, right? So, I decided to mount it as an internal drive to lower the read/write times. Finding SATA cables was difficult, the motherboard's SATA plugs are in a terrible spot, and my tiny case (and 2yo) made everything soo much worse. It was a miserable experience, but I finally got it installed.
However! It turns out the Seagate external drives use some custom drive header, or custom driver to access the drive, so Windows couldn't read the bare drive. ffs. So, I took it out again (joy) and put it back in the enclosure, and began copying the files off.
The drive I'm copying it to is smaller, so I enabled compression to allow storing a bit more of the data, and excluded a couple of directories so I could copy those elsewhere. I (barely) managed to fit everything with some pretty tight shuffling.
but. that external drive is connected via USB, remember? and for some reason, even over USB3, I was only getting ~20 MB/s transfer rate, so the process took 20some hours! In the interim, I worked on some projects, watched netflix, etc., then locked my computer, and went to bed. (I also made sure to turn my monitors and keyboard light off so it wouldn't be enticing to my 2yo.) Cue dramatic music ~
Come morning, I go to check on the progress... and find that the computer is off! What the hell! I turn it on and check the logs... and found that it lost power around 9:16am. aslkjdfhaslkjashdasfjhasd. My 2yo had apparently been playing with the power strip and its enticing glowing red on/off switch. So. It didn't finish copying.
aslkjdfhaslkjashdasfjhasd x2
Anyway, finding the missing files was easy, but what about any that didn't finish? Filesizes don't match, so writing a script to check doesn't work. and using a visual utility like windirstat won't work either because of the excluded folders. Friggin' hell.
Also -- and rather the point of this rant:
It turns out that some of the files (70 in total, as I eventually found out) have paths exceeding Windows' MAX_PATH length (260 chars). So I couldn't copy those.
After some research, I learned that there's a Microsoft hotfix that patches this specific issue! for my specific version! woo! It's like. totally perfect. So, I installed that, restarted as per its wishes... tried again (via both drag and `copy`)... and Lo! It did not work.
After installing the hotfix. to fix this specific issue. on my specific os. the issue remained. gg Microsoft?
Further research.
I then learned (well, learned more about) the unicode path prefix `\\?\`, which bypasses the Windows kernel's path parsing and hands the path almost directly to the filesystem, thereby indirectly allowing ~32k-character path lengths. I tried this with the native `copy` command; no luck. I tried it with `robocopy` and cygwin's `cp`; they likewise failed. I tried it with cygwin's `rsync`, but it sees `\\?\` as denoting a remote path, and therefore fails.
However, `dir \\?\C:\` works just fine?
So, apparently, Microsoft's own workaround for long pathnames doesn't work with its own utilities. unless the paths are shorter than MAX_PATH? gg Microsoft.
At this point, I was sorely tempted to write my own copy utility that calls the internal Windows APIs that support unicode paths. but as I lack a C compiler, and haven't coded in C in like 15 years, I figured I'd try a few last desperate ideas first.
For the hell of it, I tried making an archive of the offending files with winRAR. Unsurprisingly, it failed to access the files.
... and for completeness's sake -- mostly to say I tried it -- I did the same with 7zip. I took one of the offending files and made a 7z archive of it in the destination folder -- and, much to my surprise, it worked perfectly! I could even extract the file! Hell, I could even work with paths >340 characters!
So... I'm going through all of the 70 missing files and copying them. with 7zip. because it's the only bloody thing that works. ffs
Third-party utilities work better than Microsoft's official fixes. gg.
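For anyone who lands here with the same problem: the copy utility I almost wrote boils down to very little in Python, since its file functions accept the \\?\ prefix directly (a sketch, paths invented; the target folder must already exist):

import os
import shutil

def copy_long_path(src, dst):
    # the \\?\ prefix skips Win32 path parsing and the 260-char MAX_PATH limit;
    # it requires absolute paths with backslashes
    src = "\\\\?\\" + os.path.abspath(src)
    dst = "\\\\?\\" + os.path.abspath(dst)
    shutil.copy2(src, dst)  # copies data + timestamps

copy_long_path(r"E:\some\absurdly\deep\nest\file.dat",
               r"D:\backup\some\absurdly\deep\nest\file.dat")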
...
On a related note, I totally feel like that person from http://xkcd.com/763 right now ;;
-
Me: Open terminal on current folder
Mac: No.
Me: Copy path of current folder
Mac: No.
Me: Delete this desktop icon (Delete key)
Mac: No.
Me: Let's move this icon to trash
Mac: Alrighty then, delete application.
Man, I will have a hard time getting used to this. Windows had its cons, but these small things made my life easier.
-
Let me just delete this symbolic link and leave it copying the folder to the ssd real quick while I go to lunch...
Lessons learned:
1 - don't put a fucking / at the end of `rm -rf /path/to/link`
2 - don't ignore the warning of it being a folder after trying to `rm /path/to/link`
3 - backup your fucking dev database too
4 - don't do stuff hungry
SHIT!! FUCK!!
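If it helps future me, here's the same lesson as a quick sketch in Python (path hypothetical): delete the link itself, never recurse through it.

from pathlib import Path

link = Path("/path/to/link")
if link.is_symlink():
    link.unlink()  # removes only the link; the target folder stays untouched
else:
    raise SystemExit("refusing: not a symlink, go check what you're deleting")
# note: shutil.rmtree() refuses to operate on a symlink for exactly this reason

-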
Hello devRant,
This is already from a few days ago but I had to process the whole thing myself first.
It was a normal day at work, nothing special. Customers came in, got their repaired PCs/laptops, and brought some new work in. So I went through some, and then I got to the case that is the most, well, unbelievable and shocking one I've had in the only 2 years of doing this. At first it was a normal HDD bad-sector thing, and I started copying the old HDD to a new one.
//NOTE: the program we use shows every file it's copying and the sectors it spans //
Suddenly I saw a weird thing happening where it started copying tons of files from a folder called "mature/kids" over to the new HDD.
I noted the path, and after it finished we returned the laptop to the customer, and he luckily left his old HDD with us. So my boss and I did some investigation and, well, turns out the dude had a whole library of child pornography.
tl;dr check what you copied and report such cases to the police.
Don't do such stupid shit and stay legal guys.
Wishing you all a great day/night/morning/evening/whatever
//EDIT: I ofc won't post pictures cause of obvious reasons
-
Always double-check your rm -rf command...
Long ago, young me was setting up a server. The last thing was to remove a temp folder I created. Instead of rm -rf /path/to/dir, I typed rm -rf / path/to/dir...
"This should not take so long... wait... shit"
-
OBS is advertised as the expert's screen recording and streaming tool, every list on the internet makes it out to be some incredibly difficult program not recommended for newbies.
It's also the only Linux screen recorder that works out of the box on PipeWire and records both microphone and system sounds, and all the configuration it took was to:
1. select recording as my main use case in the setup wizard which is a very verbose English popup, then accept all defaults
2. add a new source, following the instructions written in the box which are also the only instructions on screen after application launch
3. set the output directory (optional) by going to File > Settings > Output > Recording Path, all of which were the first items I guessed. If I had not done this, it would've written everything to my home folder which is a bit dumb but not confusing at all
4. click Start Recording
5. click Stop Recording when done
Some newbie-oriented screen recorders have a more complicated setup procedure than this super-advanced experts' tool that you shouldn't touch without safety gloves and a degree in video engineering.
-
Running a fucking conda environment on Windows (an updated environment from the previous one that I normally use) gets to be a fucking pain in the fucking ass for no fucking reason.
First: generate a new conda environment, and for FUCKING SHITS AND GIGGLES, DO NOT SPECIFY THE PYTHON VERSION, just to see compatibility. This was an experiment, expected to fail.
Install tensorflow in said environment: it does not fucking work, not detecting cuda. The only requirement? To have the cuda dependencies installed, modified, and inside the system path. Check, done; it works in 4 other fucking environments, so why not this one?
Still doesn't work. Googled around and found some thread on GitHub (about the errors) that has a way to fix it; did it that way; fucking magic, shit is fixed.
Very well, tensorflow is installed and detecting cuda, no biggie. HAD TO SWITCH TO PYTHON 3.8 BECAUSE 3.9 WAS GIVING ISSUES FOR SOME UNKNOWN FUCKING REASON.
Ok no problem, done.
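For anyone fighting the same fight, the quickest sanity check I know for the "is CUDA actually visible" question, assuming TF 2.x:

import tensorflow as tf

# an empty list here means TF can't see the GPU, i.e. the CUDA DLLs didn't load
print(tf.config.list_physical_devices("GPU"))

# which CUDA/cuDNN versions this wheel was built against
# (keys may be absent on CPU-only builds)
info = tf.sysconfig.get_build_info()
print(info.get("cuda_version"), info.get("cudnn_version"))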
Install JupyterLab, which worked first try in all 4 other environments. Guess what: a fuckload of errors upon importing tensorflow. They go on a loop that does not fucking end.
The error: imPoRT eRrOr thE Dll waS noT loAdeD
Ok, fucking which one? Who fucking knows.
I FUCKING HATE that the main language for this fucking bullshit is Python. I guess I see the benefits of the REPL, I do, but the Python REPL is fucking HORSESHIT compared to the ones you get in Lisp, Ruby and fucking even Node, in which error messages are still more fucking intelligent than those of fucking bullshit-ass Python.
Personally? I am betting on Julia delivering a smarter environment; it is a better language already. On a second note: if you are worried about A.I. taking your job, don't. It requires a team of fucktards working around common basic system administration tasks to get this bullshit running in the first place.
My dream? Julia or Scala (fuck you) as a primary language in machine learning and AI, in which entire environments, with aaaaaaaaaall of the required dlls and dependencies, can be downloaded, installed and just fucking run. A single directory structure in which shit just fucking works (reason why I like live environments like Smalltalk, but fuck you on that too), and you just run your projects from there, without setting a bunch of bullshit environment variables, cuda dll installation phases and whatnot. Something that JUST FUCKING WORKS.
I.....fucking.....HATE the level of system administration required to run fucking anything nowadays, the reason why we had to create shit like devops jobs, for the sad fuckers that have to figure out environment configurations on a box just to run software.
Fuck me, man, development turned to shit. This is why go mod, node npm, php composer strict folder structure pipelines were created. Bitch all you want about npm, but if I could create a node_modules setting with all of the required dlls to run a project, even if this bitch weighs 2.5GB for a project structure, you bet your fucking ass that I would.
"YOU JUST DON'T KNOW WHAT YOU ARE DOING" YES I FUCKING DO and I will get this bullshit fixed, I will get it running just like I did the other 4 environments that I fucking use, for different versions of cuda and python and the dependency circle jerk BULLSHIT that I have to manage. But this "follow the guide and it will work, except when it does not and you are looking into obscure github errors" bullshit just takes away from valuable project time when you have a small dedicated group of developers and no sys admin or devops mastermind to resort to.
I have successfully deployed:
Java
Golang
Clojure
Python
Node
PHP
VB/C# .NET
C++
Rails
Django
Projects, and every single fucking time (save for .NET, that shit just fucking works on a dedicated Windows IIS server) the shit will not work for x..n reasons. It fucking obliterates me how fucking annoying this bullshit is. And it's the reason why the ENTIRE FUCKING FIELD of computer science and software engineering is so fucking flawed.
But we can't all just run to simple Windows BS in which we have documentation for everything. We have to spend countless hours on fucking Linux figuring shit out (fuck you also, I have been using Linux since I was 18, I am 30 now), where graphical drivers for machine learning, cuda and whatTheFuckNot require all sorts of sysadmin gymnastics to be used.
Y'all fucked up a long time ago. Smalltalk provided an all-in-one, easily-rolled-back-to-previous-images, easily administered interface for this fileFuckery bullshit, and even though the JVM and the .NET environments did their best to hold shit down, and even though we had npm packages pulling the universe inside, or go mod compiling shit into one place, NOOOOOOOOOOOOOOOOOO, we had to do whatever the fuck we wanted to feel l337 and wanted.
Fuck all of you, fuck this field, fuck setting up boxes for ML/AI and fuck every single OS in existence.
-
MTP is utter garbage and belongs to the technological hall of shame.
MTP (Media Transfer Protocol, or, more accurately, MOST TERRIBLE PROTOCOL) sometimes spontaneously stops responding, causing Windows Explorer to show its green placebo progress bar inside the file path bar, which never reaches the end, and sometimes to whiningly show "(not responding)" with that white layer of mist fading in. It sometimes lists files' dates as 1970-01-01 (which is the Unix epoch), and sometimes shows former names of folders from before they were renamed, even after refreshing. I refer to them as "ghost folders". As is well known, large directories load extremely slowly in MTP. A directory listing with one thousand files could take well over a minute to load. On mass storage and FTP? Three seconds at most. Sometimes, new files are not even listed until rebooting the smartphone!
Arguably, MTP "has" no bugs. It IS a bug. There is so much more wrong with it that it does not even fit into one post. Therefore it has to be expanded into the comments.
When moving files within an MTP device, MTP does not directly move the selected files, but creates a copy and then deletes the source file, causing both needless wear on the mobile device's flash memory and the loss of the files' original date and time attributes. Sometimes, the simple act of renaming a file causes Windows Explorer to stop responding until the MTP device is unplugged. It once actually unfroze after more than half an hour, during which I did something else in the meantime, but come on, who likes to wait that long? Thankfully, this has not happened to me in Linux file managers such as Nemo yet.
When moving files out using MTP, Windows Explorer does not move and delete each selected file individually, but only deletes the whole selection after finishing the transfer. This means that if the process crashes, no space has been freed on the MTP device (usually a smartphone), and one will have to carefully sort out a mess of duplicates. Linux file managers thankfully delete the source files individually.
Also, for each file transferred from an MTP device onto a mass storage device, Windows has the strange behaviour of briefly creating a file on the target device with the size of the entire selection. It does not actually write that amount of data for each file, since it couldn't do so in this short time, but the current file is listed with that size in Windows Explorer. You can test this by refreshing the target directory shortly after starting a file transfer of multiple selected files originating from an MTP device. For example, when copying or moving out 01.MP4 to 10.MP4, while 01.MP4 is being written, it is listed with the file size of all 01.MP4 to 10.MP4 combined, on the target device, and the file actually exists with that size on the file system for a brief moment. The same happens with each file of the selection. This means that the target device needs almost twice the free space as the selection of files on the source MTP device to be able to accept the incoming files, since the last file, 10.MP4 in this example, temporarily has the total size of 01.MP4 to 10.MP4. This strange behaviour has been on Windows since at least Windows 7, presumably since Microsoft implemented MTP, and has still not been changed. Perhaps the goal is to reserve space on the target device? However, it reserves far too much space.
When transferring from MTP to a UDF file system, it sometimes fails to transfer ZIP files and only copies the first few bytes; 208 or 74 bytes in my testing.
When transferring several thousand files, Windows Explorer also sometimes decides to quit and restart in the midst of the transfer. Also, I sometimes move files out by loading a part of the directory listing in Windows Explorer and then hitting "Esc", because it would take too long to load the entire directory listing. It actually once assigned the wrong file names, which I noticed since file naming conflicts would occur where the source and target files with the same names had different sizes and time stamps. Both files were intact, but the target file had the name of a different file. You'd think they would figure something like this out after two decades, but no. On Linux, the MTP directory listing is only shown after it is loaded in its entirety. However, if the directory has too many files, it fails with a "libmtp: couldn't get object handles" error without listing anything.
Sometimes, a folder appears empty until refreshing one more time. Sometimes, copying a folder out causes a blank folder to be copied to the target. This is why on MTP, only a selection of files and never folders should be moved out, due to the risk of the folder being deleted without everything having been transferred completely.
(continued below)
-
1) Using ScreenRecord, record a video of his work folder (a fake one, obviously) being deleted.
2) Create a command-line VLC invocation that plays this video on startup, with the flags -f and --no-qt-fs-controller.
E.g. vlc -f --no-qt-fs-controller file://<file path>
Enjoy the show 👿
-
Unicode support pl0x.
So I had a Windows account with AzureAD, and my real name has "ő" and "ó" in it, and software that did not support Unicode started flipping the fuck out.
I was initially going with junctioning every bullshit corrupted user folder name that showed up in the ENOENTs to my real user folder, but that didn't solve it for a couple of programs.
I was trying to share my drives with Docker, but the same shit occurred. No error message, it just didn't work. I ended up creating a new user account for Docker to share the drive with.
I was trying to use the Travis CLI to set up releases, etc., but it replaced the "ő" with "?". Y U DO THAT?! Common knowledge is that "?" and other special characters cannot be in entity names. SO WHY DO YOU REPLACE THE UNKNOWN CHARACTER IN A PATH WITH THAT? And it wasn't a character not found character either! It was just a straight question mark.
I ended up creating a new user account because I couldn't change the name of the current one because fuck AzureAD, and Windows just decided to FUCKING TRASH MY ACCOUNT. I went over to the new one, copied over some files from the old one, tried to go back to the old one to copy env variables, but I noticed that the account has been purged from the registry... At least the files haven't been deleted.
I ended up reinstalling Windows.
After all my frustration, I recommend all companies with a CLI to visit the following website: http://uplz.skiilaa.me/
Thanks.
-
Beware: Here lies a cautionary tale about shared hosting, backups, and -goes without saying- WordPress.
1. Got a call from a client saying their site presented an issue with a third-party add-on. The vendor asked us to grant him access to our staging copy.
2. Their staging copy, apparently, never got duplicated correctly because, for security reasons, their in-house dev changed the name of the wp-content folder. That broke their staging algo. So no staging site.
3. In order to recreate the staging site, we had to reset everything back to WP defaults. Including, for some reason, absolute paths inside the database. A huge fucking database. Because WordPress.
4. Made the changes directly in a downloaded SQL file. Shared hosting, obviously, had an upload limit smaller than the actual database.
5. Spent half an hour trying to upload table by table to no avail.
6. In-house uploads a new, fixed database with the help of the shared hosting provider.
7. Database has the wrong path. Again.
8. In-house performs massive Find and Replace through phpMyAdmin on the production server.
9. Obviously, MySQL crashes instantly and the site gets blocked for over 3 hours for exceeding shared hosting limits.
10. Hosting provider refuses to accept this was caused by such a stupid act and says site needs to be checked because queries are too slow.
11. We are gouging our eyeballs as we see an in-house vs. hosting fight unfold. So we decide to watch a whole Netflix documentary in between.
12. Finally, the hosting folds and enables access to the site, which is obvi not working because, you know, wrong paths.
13. Documentary finishes. We log in again, click restore from backup. Go to bed. Client phones to bless us. Client's in-house dev probably looking for a cardboard box to pack his stuff first thing in the morning. \_(ツ)_/¯
-
Just ran rm -rf ~
Only good thing is I ran it without sudo.
So I was writing a script to hit an API multiple times and write the output to a file. Instead of providing the absolute path like /User/..., I gave the path as ~/..., so it created a folder literally named ~ inside the directory I was in. Then I wanted to delete it and the file inside it. And so I did.
I am an idiot
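For the record, the script-side fix (assuming the script was Python, which the rant doesn't say; the same idea applies in any language, and the file names below are invented) is to expand the tilde yourself, because only the shell does it for you:

import os

# "~" is only expanded by the shell; inside a program it's a literal folder name
out_path = os.path.expanduser("~/api-dump/output.json")
os.makedirs(os.path.dirname(out_path), exist_ok=True)

# and the safe way to remove the accidental literal "~" directory from a shell:
#   rm -rf ./~    (the ./ stops the shell from expanding it into $HOME)

-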
Today at work I accidentally redeployed our ELK instance without taking a backup of all of Kibana's saved objects...
I didn't realize Elastic stores all indices, including Kibana's, in its app folder by default.
Tomorrow will be fun.... I can't decide what to do first... Recreating all the charts and tables from memory... Or fixing the deployment script to change the data dir path...
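Lesson for next time, so future me finds it here: Kibana's saved objects can be exported with one API call before touching anything (a sketch assuming a 7.x-era Kibana on localhost; adjust host and types to taste):

import requests

# export dashboards/visualizations/index patterns as NDJSON before redeploying
resp = requests.post(
    "http://localhost:5601/api/saved_objects/_export",
    headers={"kbn-xsrf": "true"},  # header required by Kibana's API
    json={"type": ["dashboard", "visualization", "index-pattern", "search"]},
)
with open("kibana-objects.ndjson", "wb") as f:
    f.write(resp.content)

-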
Dev Diary Entry #56
Dear diary, the part of the website that allows users to post their own articles - based on a robust rights system - through a rich text editor, is done! It has a revision system and everything. Now to work on a secure way for them to upload images and use these in their articles, as I don't allow links to external images on the site.
Dev Diary Entry #57
Dear diary, today I finally finished the image uploading feature for my website, and I have secured it as well as I can.
First, I check filesize and filetype client-side (for user convenience), then I check the same things serverside, and only allow images in certain formats to be uploaded.
Next, I completely disregard the original filename (and extension) of the image and generate UUIDs for them instead, and use fileinfo/mimetype to determine extension. I then recreate the image serverside, either in original dimensions or downsized if too large, and store the new image (and its thumbnail) in a non-shared, private folder outside the webpage root, inaccessible to other users, and add an image entry in my database that contains the file path, user who uploaded it, all that jazz.
I then serve the image to the users through a server-side script instead of allowing them direct access to the image. Great success. What could possibly go horribly wrong?
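To make it concrete, the serving script boils down to something like this (a minimal Flask-flavored sketch, not my actual code; db_lookup_path is a made-up stand-in for the real database lookup):

import os
from flask import Flask, abort, send_file

app = Flask(__name__)
STORAGE = "/srv/uploads"  # private folder outside the web root

def db_lookup_path(image_id):
    # made-up stand-in: the real thing maps the UUID to the stored path via the DB
    return {"example-uuid": os.path.join(STORAGE, "ab", "example-uuid.jpg")}.get(image_id)

@app.route("/image/<image_id>")
def serve_image(image_id):
    path = db_lookup_path(image_id)
    # never trust the name: the resolved file must live inside STORAGE
    if path is None or not os.path.realpath(path).startswith(STORAGE + os.sep):
        abort(404)
    return send_file(path)  # mimetype inferred from the stored extension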
Dev Diary Entry #58
Dear diary, I am contemplating scrapping the idea of allowing users to upload images, text, comments or any other contents to the website, since I do not have the capacity to implement the copyright-filter that will probably soon become a requirement in the EU... :(
Wat to do, wat to do...
-
Boss: Let's hire a new person to help us recreate our website
Us: sounds good!
Boss doesn't hire anyone and starts the website on his own
Boss: I started the new website. It's on server X and the address is test.y.com. I also want all of us to work on it
Me thinking: great he just wants us to modify the hosted files like he does 🙄
Me: I'll move a copy to GitHub for version control.
Boss: Great!
Boss creates a backup folder on the same computer and folder path that the hosted files are on.
Someone please nuke that server so my boss learns version control like I've asked before. I think I'll opt not to work on a website where he and my other co-workers will just overwrite each other's changes because he doesn't want to learn to git 😑
-
Mobilis in mobili.
Yesterday, I was trying to figure out how to open a folder via the Linux terminal (like the `open path/to/folder` on macOS), and I discovered that it can be done via `nemo path/to/folder`. This rang a bell for me, because I know that the GNOME file manager was named Nautilus.
This got my interest because both names are in Jules Verne's "Twenty Thousand Leagues Under the Sea". Nautilus is the submarine commanded by the great Capt. Nemo, a brilliant individual who plans to explore the depths of the sea with Nautilus.
I learned that the developers of Linux Mint believed the GNOME file manager Nautilus (v3.6) was a catastrophe, and thus they forked the project, giving birth to the awesome Nemo. So instead of exploring the depths of the sea, I guess we could say Nemo is now exploring the depths of our filesystem, right?
-
So today it finally happened.
Npm modules broke my system and / or endangered the security of my system.
Installed a global cli utility
That utility depends on package A
That depends on package B
That fucking installs a bin called sudo.
Yeah.. you heard it right: a bin called sudo.
This bin goes in the global module folder that sits in your PATH variable.
Now every time you type sudo, you are running somebody else's code instead of your system utility.
I am shivering and at a loss for swear words.
Opened an issue on the cli that started this matryoshka game of horror.
Who the fuck thought that a bin called sudo would be a good fucking idea?
Oh, and yes, it's even a harmless package that tries to provide the sudo experience for Windows (I went in to check the code, of course..)
And I frigging need that cli for work
For now I aliased sudo in my bashrc; still, I feel vulnerable and naked now.
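If you want to check whether you're affected, a short sketch (Python, because it's what I had open) that shows which sudo would actually run, plus every sudo on your PATH:

import os, shutil

print(shutil.which("sudo"))  # the binary that would actually run

# list every 'sudo' on PATH; the first match wins
for d in os.environ["PATH"].split(os.pathsep):
    p = os.path.join(d, "sudo")
    if os.path.isfile(p) and os.access(p, os.X_OK):
        print(p)

-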
Explain to me why people love Apple so much.
What is a simple task in every other OS ever is a multi step dance on a Mac (or iphones too for that matter). It is a productivity nightmare that makes the whole system feel like it is only meant to be used to watch youtube.
The way the keyboard works feels like it was designed by aliens.
Browsing the system with Finder is an absolute pretzel nightmare. No moving files. Copy, paste, then delete is as good as you're going to get. No way to type the path to go straight to it. You will do things the slowest way possible and be happy while doing it.
Want to quickly create a blank file in the current folder? Oh what's that? You thought the right click menu was going to help you like every other OS? Apple laughs in your face for such arrogance.
-
Situation:
Php not loading oci8 connector for oracle database in windows server, got the all famous and feated oci_connect unknown function error.
Solution:
Check to make sure that the stupid dll is in the extensions folder ---> check
Check to make sure that the extension_dir path is done properly inside php.ini ---> check
Ensure that extension=php_oci8_11g.dll is inside php.ini ---> check
I have no fucking clue why this piece of shit would stop working all of a sudden and would not fucking work. But here I am, yet AGAIN, trying to fix something for the fucking web tech department because their fucking lead dev is out.
I
Fucking
HATE
Having to deal with php configurations. Such a fucking pain in the fucking ass man.
FUUUUCKING WOOOOOORK
-
I'm the company's git guy. Here's an actual, typical, interaction:
[Coworker] 12:08 PM:
Hi jallman112, good afternoon
[Coworker] 12:08 PM:
I get this error -
[Coworker] 12:08 PM:
git.exe clone --progress -v "[path to local SVN checkout]\repoName" "[path to local git folder]\repoName"
Cloning into '[path to local git folder]\repoName'...
fatal: '[path to local SVN checkout]\repoName' does not appear to be a git repository
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
git did not exit cleanly (exit code 128) (515 ms @ 3/14/2019 12:07:26 PM)
[Me] 1:38 PM:
Looks like [path to local SVN checkout]\repoName is not a git repo
-
This just happened. I have to tell someone how stupid I felt a few minutes ago.
I am working on a script that is supposed to rename some files in subfolders of the current folder.
I used ChatGPT to help me out with the last part, finding all the subfolders. Copied the code into VS Code. Changed the path from "/path/to/folder/" to just "/", ran the code. It took a bit too long to finish. Then VS Code asked for permission to access my reminders… "Why?" I asked, before I realized that I had confused "." with "/" for the current folder…
-
so I started a side project a while ago.
the only thing it could do was to create some files with desired names and extensions. so this was basically a pretty simple editor.
I left this project with no future plans for a month or so until I started working on it again this week. I added comments to the editor, a console user interface.
the ui isn't futuristic. the program runs in the console. it just lists all the files and folders in the location where the program currently sits. in the beginning it could take user input, and that input was the location where the files created in the editor would be saved. then I thought: it would be more interesting if I created a folder in which I saved the files from the editor. so I did this thing.
then I thought, again: hey, this console is pretty boring and stuff. why don't I add some special commands? and so I did.
now you can create an empty folder; before, creating a folder also saved the files created in the editor at the same time. now you can open another folder in which you can do the same stuff as before. you can get the current location of the folder you are currently in, so you don't get lost in your fancy computer. you can delete a folder completely, set color, reset color.
but one thing that I lost almost ONE FREAKING HOUR ON TO MAKE THE USER EXPERIENCE BETTER was the following: when creating a folder, either empty or with the files from the editor, the program automatically opens the folder, not in the console (hey, I didn't think of that) but in the os's file explorer. for now it only works on windows and windows explorer because I used system(const char*). I know it's not portable or efficient but I just wanted things to work, I will optimise it later.
the thing that made me lose that one hour debugging was figuring out how to open that folder.
ok, so I used windows api with GetCurrentDirectory, I knew how to use system, I knew how to form the path that would match up with the folder, I almost knew how to open the folder with system().
the problem was that I had the path complete, but if the folder had white spaces system() wouldn't recognise the freaking command!
so the string with the path would also contain the command used in system() and I would just .c_str() the string so it could work. as an example my wrong way to make the path was this:
"start C:\\path"
can you figure out what is the problem?
you don't?
it's just so trivial.
how can you not figure it out?
of course you NEED to put "explorer" between the start command and the actual path!
pffft, you idiot! so easy to figure it out.
so yeah, the right way to open a folder is like this:
"start explorer C:\\path to heLL!!"
p.s.: I still don't understand why putting explorer works and without it doesn't. without explorer it just says that the path (the first word before the white space) doesn't exist.
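EDIT: apparently the answer is twofold: an unquoted path gets split at the spaces (hence "the first word before the white space doesn't exist"), and if you quote it, cmd's start treats the first quoted argument as the new window's title, so the canonical form is an empty title first: start "" "C:\path with spaces". and since I'll probably port this someday, the same thing from Python is one call (windows-only, path hypothetical):

import os

# opens the folder in Explorer, spaces and all, no quoting games
os.startfile(r"C:\path to heLL!!")

-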
yes, i know this isn't omg! ubuntu
yes i know it's obviously not StOve (you guys are way too nice)
but im going to look like an idiot for asking this here. my process is a rollercoaster to read so bear with me
context:
· xubuntu 17.10
· zsh (not that it matters)
° i want to be able to use java -jar file.jar
i tried using that command
zsh: java: permission denied
then used sudo
sudo: java: command not found
i grabbed the generic x64 tar.gz and extracted to ~/.local/.app/java/
set the Java path to the openjdk folder inside, no good
i installed openjdk-8-jre
couldnt figure it out that way, so uninstalled
RE-GRABBED the x64 tarball
extracted into /usr/java
set the path to the folder containing java & javaws executables
still no luck
what is the problem here?
-
Was motivated to do a project with ReactNative for Android but already stuck.
I need to read a SQLite DB file from /data/data/some.other.app/database/DB.db
Yes I am rooted.
1. How do I request root from the app? (Android Pie)
2. What SQLite npm package can load from an absolute path? I found a few libs, but they don't seem to allow full access, just DBs in the app's own data folder.
-
!Dev
TL;DR: Debugging software I barely know was slow, it ended up breaking in the shop where it's used, and reverting the changes does not solve the problem.
I asked my father a few days ago why he was buying a dedicated server for his ERP software instead of using a client computer as his server, which is what he is doing in his shop currently. He said that it was slow on other computers on the LAN, which is wired. The solutions given by the company that made it did not work. Big bills, which took around 30 minutes to make, would also sometimes disappear. So when he brought the computer home during lockdown, I pulled up the debugging guide from the company, which summed up to: check latency, check RAM, and add these files to the exclusion list of your antivirus. Latency was kinda high at first when pinging another computer on the LAN, but I was testing over WiFi, so it could be pretty inaccurate. The computer met the RAM requirements, so that was not a problem. I checked the data path by opening the software and accidentally typed something, but I did not worry since the changes needed to be manually accepted. I added the files to the Windows Defender exclusion list and shut it down.
Next day: My father calls me up and says the software is working on the server but is broken on other computers. So I check if the changes were automatically accepted for some reason, and yes, that happened. So I pull up a guide to configure the software in multi-user mode, I replace the mistyped setting with the correct one, and it still does not work. My father asks me to undo everything using AnyDesk. I remove all the exclusions I added to Windows Defender and disable Windows Firewall. Still does not work. Restart the computer and software. Still does not work. Check permissions on the data folder. They are correct.
WTF, I reverted all the changes I made and the software still does not work on other computers.
-
I feel so dirty rn. Been struggling with webpack all day to just generate some pages based on the object a js script exports. The issue was that I needed a path to an image and the path webpack's require returned was a relative path (it started with a .), which wouldn't work because my generated page wasn't in the root folder. I tried making it work "the right way" all day, nothing worked. So in the end I just removed the . from the fucking path using substr and that's that.
Also, why the fuck does webpack's require return a fucking relative path anyway?
-
Curse Windows' nested folder path length and filename limit... you can create it no problem, but try to move or rename it and the path is too long...
ARRRGGG!
-
Please excuse the "photo of my monitor" picture, but it really was the easiest way to do this...
So, I'm finally getting around to that to-do list item of wrapping my head around Nrwl Nx workspaces, and I stumbled onto this little gem: https://itnext.io/easy-typescript-m...
It didn't take long for the "what the fuck" moments to start cropping up, and then I decided to check what comments might have been left on Daily.dev regarding this one (see attachment).
THAT little nugget there is what led me to the ultimate "what the actual fuck" moment, which is only truly appropriate for devRant..
Create an Nx workspace, only to initialise a project with `npm` directly, using a path under a new `libs` folder next to the `packages` folder, only to build the library and literally install it into the Nx workspace's `node_modules` folder, in order to import it into the app that exists in the same workspace.
So, seriously.. like.. WHAT THE ACTUAL FUCK? What is this guy smoking?? I need to know so I can stay the fuck away from it! Wow. My brain hurts now.
-
So... I had to create a VBA macro. Ok, it is very simple and it will be needed during some DOC file reviewing. Ok, not a problem.
I created the functions, added some quick-launch buttons and saved it as a .DOTM file. I even included an autoload form with an Install button, so the file copies itself to the Word Startup folder. Nice, everything working just fine.
But... there are two Mac users in the company. I do not have a Mac, but the first thing I thought was that I had hardcoded the "\" used to check if the file already exists and to copy the file. Using the system separator would do the trick. The macro would be copied and everything would be done. But...
1. The quick launch buttons do not appear on Mac;
2. The "Application.PathSeparator" returns an ":"
3. The "application.StartupPath" returns an invalid path (something like "Mac's Name:Application:etc")
4. The copy command is not working, the Dir command appears not to be identifying the path, etc.
5. I need to have it working by Monday morning.
-
For the love of god, developers/programmers, don't put version numbers in your software's file paths!
It's the worst when you have to configure permissions and rules, and then the folder path changes with every update!
-
I don't know why it happened. Windows updated, then Windows created TEMP user folders and pointed all the Documents/Downloads/etc locations (paths) into one of those temp user folders that was just created. Luckily my clients didn't lose their files, since the good users folder was still there.
Okay now Microsoft, listen, it's okay to update your OS. It certainly needs it. BUT HOW THE FUCK COULD YOU CREATE A NEW USER AND CHANGE THE PATH OF PERSONAL FILES! Thumbs up! At least those files were not erased...
-
I am thinking about how I can make data upload reliable. I am sure that I am making it more complex than it needs to be, and I need some pointers.
My goal is to have a pause/resume feature in file uploading.
Here is how it would work.
In order to start uploading, you give the server:
1) File Name
2) Folder path you want to upload it to
3) Checksum of the file
Here the server will check whether you can upload to that folder and whether the file has been previously uploaded (by file name and checksum).
If you can actually upload to the folder, the server will return a "unique file token", the "folder path" and a "unique byte token". Let's call this init_upload().
The client will then use the "unique file token" (to identify the file), the folder path (to know where to upload to), the "unique byte token" and a byte[] (the data to actually upload). Let's call this operation data_upload().
If the operation completes successfully, the server will return a new "unique byte token".
Internally it will actually work like this. Let's say we want to upload "file.mp3". When the client calls init_upload(), it will create
file.mp3 and unique_byte_token.file.mp3.
When the client uploads data the first time, the server will append the data to unique_byte_token.file.mp3.
When the client uploads data the second time, the server will check whether the "byte token" the client sent is the same as the previous "unique byte token". If it is the same:
1) we move the data from unique_byte_token.file.mp3 to file.mp3
2) Delete unique_byte_token.file.mp3
3) Create new unique_byte_token.file.mp3
4) Append data to unique_byte_token.file.mp3
The reason I am using a "byte token" is that I want to check whether the previous upload actually succeeded.
Let's say we need to call data_upload() 50 times: that will put 49 parts into file.mp3 and 1 part into byte_token.file.mp3.
Finally, the client needs to call data_upload_complete(), which will:
1) Put the remaining 1 part into file.mp3
2) Remove byte_token.file.mp3 as cleanup
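To make the append-then-promote idea concrete, here is a minimal server-side sketch (Python, all names mine, no locking/concurrency shown):

import os, secrets

def init_upload(folder, name):
    token = secrets.token_hex(16)
    open(os.path.join(folder, name), "wb").close()                # file.mp3
    open(os.path.join(folder, token + "." + name), "wb").close()  # token.file.mp3
    return token

def data_upload(folder, name, token, chunk):
    part = os.path.join(folder, token + "." + name)
    if not os.path.exists(part):  # stale/wrong token: the last part never landed
        raise ValueError("byte token mismatch")
    # promote the previous part into the real file, then start a fresh part
    with open(os.path.join(folder, name), "ab") as f, open(part, "rb") as p:
        f.write(p.read())
    os.remove(part)
    new_token = secrets.token_hex(16)
    with open(os.path.join(folder, new_token + "." + name), "wb") as p:
        p.write(chunk)
    return new_token

def data_upload_complete(folder, name, token):
    part = os.path.join(folder, token + "." + name)
    with open(os.path.join(folder, name), "ab") as f, open(part, "rb") as p:
        f.write(p.read())
    os.remove(part)  # cleanup, upload finished

-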
Does anybody know how to create a JSON list of all files in a folder with path+filename, md5 hash, file name and size? My client wants me to rework an open-source launcher which reads an HTML file in JSON format.
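Something along these lines should do it, assuming Python is acceptable (hashing in chunks so big files don't eat your RAM; the folder path is a placeholder):

import hashlib, json, os

def file_md5(path, block=1 << 20):
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(block), b""):
            h.update(chunk)
    return h.hexdigest()

def folder_manifest(root):
    entries = []
    for dirpath, _, names in os.walk(root):
        for name in names:
            full = os.path.join(dirpath, name)
            entries.append({
                "path": os.path.relpath(full, root).replace(os.sep, "/"),
                "name": name,
                "md5": file_md5(full),
                "size": os.path.getsize(full),
            })
    return entries

print(json.dumps(folder_manifest("some/folder"), indent=2))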
-
A useful alias for when you want to inspect what is taking up space on your disk - needs GNU du:
alias duh='function _blah(){ du -ahcd 1 --time $1 | sort -h -r; };_blah'
$ duh /path/to/folder | less
-
To hell with this auto path rewrite, VS Code: when I rename a file you find all the files and rewrite the imports, but this time you did it wrong,
and now I have a huge mess to pick up. I have no idea how you did this, but you wrote long paths which don't make sense.
Why did you put node_modules in front of all my imports when I moved a folder which has nothing to do with node modules?
-
Microsoft one drive
I know you wanna sync my folder
I know you wanna provide a one drive alias to every file
But who the fuck in their right fucking mind makes an alias with a space in the path
Oh you dumb piece of shit 🤬
-
When you test on the production server:
"Your system folder path does not appear to be set correctly. Please open the following file and correct this: index.php"
-
Mac day 2.0: in Ubuntu you can press Ctrl+L in a folder and the full file path will appear. How do I do that on a Mac? Basically, what's the easiest way to show the full path when I am in a folder?
-
Dumb question, but does anyone know how to make VS Code show more of the path than just the folder name in the side bar? I am working on making workspaces to avoid opening 6 file explorer windows, but a lot of the folders for my workflows have the same name at different locations on the network, and I can't change the folder names for automation purposes.
I know it shows the path if I hover over the name, but I'd like it to just show the path by default on the side panel.
Example image below (can't show real folders due to NDA).
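One workaround I know of, in case it helps: a multi-root .code-workspace file lets you give each folder an explicit display name, so identically named network folders stay distinguishable. A sketch that writes one (paths are placeholders):

import json

workspace = {
    "folders": [
        {"path": "//server-a/share/workflows", "name": "workflows (server-a)"},
        {"path": "//server-b/share/workflows", "name": "workflows (server-b)"},
    ]
}
with open("workflows.code-workspace", "w") as f:
    json.dump(workspace, f, indent=2)  # then File > Open Workspace from File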