One of our web developers reported a bug with my image api that shrinks large images down to thumbnail size. Basically it looked like this: img = ResizeImage(largeImage, 50); // shrink the image by 50%
The 'bug' was that he passed in the thumbnail image, requested a 300% increase, and the result was too pixelated.
I tried to explain that if you need the larger image, use the image from disk (since the images were already sized optimally for display) and the api was just for resizing downward.
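(For context, the contract I was defending boils down to something like this. Just an illustrative sketch in Python with Pillow, not the actual C# api, and the names and file are made up:)
from PIL import Image

def resize_image(img, percent):
    # this api only ever scales DOWN; upscaling a thumbnail just invents pixels
    if percent >= 100:
        raise ValueError("need a bigger image? load the original from disk")
    return img.resize((img.width * percent // 100, img.height * percent // 100), Image.LANCZOS)

thumbnail = resize_image(Image.open("boots_original.jpg"), 50)  # shrink by 50%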
Thinking I was done, the next day I was called into a large conference room with the company vice-president, two of the web-dev managers, and several of the web developers.
VP: "I received an alarming email saying you refused to fix that bug in your code. Is that correct?"
Me: "Bug? No, there is no bug. The image api is executing just as it is supposed to."
MGR1: "Uh...no it isn't. Images using *your* code is pixelated and unfit for our site and our customers."
MGR2: "Yes, I looked at your code and don't understand what the big deal is. Looks like a simple fix."
<web developers nodding their heads>
Me: "OK, I'll bite. What is the simple fix?"
<MGR2 looks over at one of the devs>
Dev1: "Well, for example, if we request an image resize of 300, and the image is only 50x50, only increase the size by 10. Maybe 15."
Me: "Wow..OK. So what if the image is, for example, 640x480?"
MGR1: "75. Maybe 80 if it's a picture of boots."
VP: "Oh yes, boots. We need good pictures of boots."
Me: "I'm not exactly sure how to break this to you, but my code doesn't do 'maybe'. I mean, you have the image from disk.
You obviously used the api to create the thumbnail, but are trying to use the thumbnail to go back to the regular size. Why not use the original image?"
<Web-Dev managers look awkwardly towards the web devs>
Dev3: "Yea, well uh...um...that would require us to create a variable or something to store the original image. The place in the code where we need the regular image, it's easier to call your method."
Me: "Um, not really. You still have to resolve the product name from the URL path. Deriving the original file name is what you are doing already. Just do the same thing in your part of the code."
Dev2: "But we'd have to change our code"
Mgr2: "I know..I know. How about if we, for example, send you 12345.jpg and request a resize greater than 100, you go to disk and look for that image?"
<VP, mgrs, and devs nod happily>
Me: "Um, no that won't work. All I see is the image stream. I have no idea what file is and the api shouldn't be guessing, going to disk or anything like that."
Dev1: "What if we pass you the file name?"
<VP, mgrs, and devs nod happily again>
Me: "No, that would break the API contract and ...uh..wait...I'm familiar with your code. How about I make the change? I'm pretty sure I'll only have to change one method"
VP: "What! No...it’s gotta be more than that. Our site is huge."
<Mgrs and devs grumble and shift around in their chairs>
Me: "I'm done talking about this. I can change your code for you or you can do it. There is no bug and I'm not changing the api because you can't use it correctly."
Later I discovered they stopped using the resize api and wrote dynamic html to 'resize' the images on the client (download the 5+ meg images, and use the length and width properties).
-
Oh, man, I just realized I haven't ranted one of my best stories on here!
So, here goes!
A few years back the company I work for was contacted by an older client regarding a new project.
The guy was now pitching to build the website for the Parliament of another country (not gonna name it, NDAs and stuff), and was planning on outsourcing the development, as he had no team and was only aiming to handle the client service/project management side of the project.
Out of principle (and also to preserve our mental integrity), we have purposely avoided working with government bodies of any kind, in any country, but he was a friend of our CEO and pleaded until we signed on board.
Now, the project itself was way bigger than we expected, as they wanted more of an internal CRM, centralized document archive, event management, internal planning, multi-interface, role-based-access-restricted monster of an administration interface, complete with a regular user-facing website, also packed with all kinds of features, dashboards and so on.
Long story short, a lot bigger than what we were expecting based on the initial brief.
The development period was hell. New features were coming in on a weekly basis. Already implemented functionality was constantly being changed or redefined. No requests we ever made about clarifications and/or materials or information were ever answered on time.
They also somehow bullied the guy that brought us the project into also including the data migration from the old website into the new one we were building, and we somehow ended up having to extract meaningful, formatted, sanitized content by parsing static HTML files and connecting them to downloadable files (almost every page in the old website had files available to download) that we needed to also include in a sane way.
Now, don't think the files were simple URL paths we could trace to a folder/file path, oh no!!! The links were some form of hash combination that had to be exploded and tested against some kind of database relationship tables that only had hashed indexes relating to other tables, that also only had hashed indexes relating to some other tables that kept a database of the website pages' HTML file naming. So what we had to do was identify the files based on a combination of hashed indexes and re-hashed HTML file names that in the end would give us a filename for a real file that we then had to search for inside a list of over 20 folders not related to one another.
So we did this. Created a script that processed the hell out of over 10000 HTML files, database entries and files and re-indexed and re-named all this shit into a meaningful database of sane data and well organized files.
So, with this we were nearing the finish line for the project, which by now had exceeded the estimated time by over two times.
We test everything, retest it all again for good measure, pack everything up for deployment, simulate on a staging environment, give the final client access to the staging version, get them to accept that all requirements are met, finish writing the documentation for the codebase, write detailed deployment procedure, include some automation and testing tools also for good measure, recommend production setup, hardware specs, software versions, server side optimization like caching, load balancing and all that we could think would ever be useful, all with more documentation and instructions.
As the project was built on PHP/MySQL (as requested), we recommended a Linux environment for production. Oh, I forgot to tell you that over the development period they kept asking us to also include steps for Windows procedures along with our regular documentation. Was a bit strange, but we added it in there just so we can finish and close the damn project.
So, we send them all the above and go get drunk as fuck in celebration of getting rid of them once and for all...
Next day: hung over, I get to the office, open my laptop and see one new email. I only had the one new mail, so I open it to see what it's about.
Lo and behold! The fuckers over in the other country that called themselves "IT guys", and were the ones making all the changes and additions to our requirements, were not capable enough to follow step by step instructions in order to deploy the project on their servers!!!
[Continues in the comments]
-
You know who sucks at developing APIs?
Facebook.
I mean, how do such highly paid guys with such great ideas manage to come up with apis THAT shitty?
Let's have a look. They took MVC and invented flux. It was so complicated that there were so many overhyped articles stating "Flux is just X", "Flux is just Y", and the moment Redux came to the stage, flux was forgotten. Nobody uses it anymore.
They took declarative cursors and created Relay, but again, Apollo GraphQL comes and relay just goes away. When i tried just to get started with relay, it seemed so complicated that i just closed the tab. I mean, i get the idea, it's simple yet brilliant, but the api...
Immutable.js. Shitload of fuck. Explain WHY i should mess with shit like getIn(path: Iterable<string | number>): any and class List<T> { push(value: T): this }? Clojurescript offers Om, the React wrapper that works about three times faster! How is it even possible? Clojure's immutable data structures! They're even opensourced as a standalone library, Mori js, and the api is great! Just use it! Why reinvent the wheel?
It seems like when i just need to develop a simple react app, i should configure webpack (huge fuckload of work by itself) to get hot reload, modern es and jsx to work, then add redux, redux-saga, redux-thunk, react-redux and immutable.js, and if i just want my simple component to communicate with state, i need to define a component, a container, fucking mapStateToProps and mapDispatchToProps, and that's all just for "hello world" to pop out. And make sure you didn't forget to type that this.handler = this.handler.bind(this) for every handler function. Or use ev closure fucked up hack that requires just a bit more webpack tweaks. We haven't even started to communicate to the server! Fuck!
I bet there is a savage ass overengineer sitting there at facebook, and he of course knows everything about how a good api should look, and he also has a huge ass ego and he's just allowed to ban everything that he doesn't like. And he just bans everything with a good simple api because it "isn't flexible enough".
"React is heavier than preact because we offer isomorphic multiple rendering targets", oh, how hard i want to slap your face, you fuckface. You know what i offered your mom and she agreed?
They even created create-react-app, but state management is still up to you. And react-boierplate is just too complicated.
When i need web app, i type "lein new re-frame", then "lein dev", and boom, live reload server started. No config. Every action is just (dispatch) away, works from any component. State subscription? (subscribe). Isolated side-effects? (reg-fx). Organize files as you want. File size? Around 30k, maybe 60 if you use some clojure libs.
If you don't care about massive market support, just use hyperapp. It's way simpler.
Dear developers, PLEASE, don't forget about api. Take it serious, it's very important. You may even design api first, and only then implement the actual logic. That's even better.
And facebook, sincerely,
Fuck you.
-
this.title = "gg Microsoft"
this.metadata = {
rant: true,
long: true,
super_long: true,
has_summary: true
}
// Also:
let microsoft = "dead" // please?
tl;dr: Windows' MAX_PATH is the devil, and it basically does not allow you to copy files with paths that exceed this length. No matter what. Even with official fixes and workarounds.
Long story:
So, I haven't had actual gainful employ in quite awhile. I've been earning just enough to get behind on bills and go without all but basic groceries. Because of this, our electronics have been ... in need of upgrading for quite awhile. In particular, we've needed new drives. (We've been down a server for two years now because its drive died!)
Anyway, I originally bought my external drive just for backup, but due to the above, I eventually began using it for everyday things. including Steam. over USB. Terrible, right? So, I decided to mount it as an internal drive to lower the read/write times. Finding SATA cables was difficult, the motherboard's SATA plugs are in a terrible spot, and my tiny case (and 2yo) made everything soo much worse. It was a miserable experience, but I finally got it installed.
However! It turns out the Seagate external drives use some custom drive header, or custom driver to access the drive, so Windows couldn't read the bare drive. ffs. So, I took it out again (joy) and put it back in the enclosure, and began copying the files off.
The drive I'm copying it to is smaller, so I enabled compression to allow storing a bit more of the data, and excluded a couple of directories so I could copy those elsewhere. I (barely) managed to fit everything with some pretty tight shuffling.
but. that external drive is connected via USB, remember? and for some reason, even over USB3, I was only getting ~20mb/s transfer rate, so the process took 20some hours! In the interim, I worked on some projects, watched netflix, etc., then locked my computer, and went to bed. (I also made sure to turn my monitors and keyboard light off so it wouldn't be enticing to my 2yo.) Cue dramatic music ~
Come morning, I go to check on the progress... and find that the computer is off! What the hell! I turn it on and check the logs... and found that it lost power around 9:16am. aslkjdfhaslkjashdasfjhasd. My 2yo had apparently been playing with the power strip and its enticing glowing red on/off switch. So. It didn't finish copying.
aslkjdfhaslkjashdasfjhasd x2
Anyway, finding the missing files was easy, but what about any that didn't finish? Filesizes don't match, so writing a script to check doesn't work. and using a visual utility like windirstat won't work either because of the excluded folders. Friggin' hell.
Also -- and rather the point of this rant:
It turns out that some of the files (70 in total, as I eventually found out) have paths exceeding Windows' MAX_PATH length (260 chars). So I couldn't copy those.
After some research, I learned that there's a Microsoft hotfix that patches this specific issue! for my specific version! woo! It's like. totally perfect. So, I installed that, restarted as per its wishes... tried again (via both drag and `copy`)... and Lo! It did not work.
After installing the hotfix. to fix this specific issue. on my specific os. the issue remained. gg Microsoft?
Further research.
I then learned (well, learned more about) the unicode path prefix `\\?\`, which bypasses Windows kernel's path parsing, and passes the path directly to ntfslib, thereby indirectly allowing ~32k path lengths. I tried this with the native `copy` command; no luck. I tried this with `robocopy` and cygwin's `cp`; they likewise failed. I tried it with cygwin's `rsync`, but it sees `\\?\` as denoting a remote path, and therefore fails.
However, `dir \\?\C:\` works just fine?
So, apparently, Microsoft's own workaround for long pathnames doesn't work with its own utilities. unless the paths are shorter than MAX_PATH? gg Microsoft.
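(For what it's worth, the way the prefix is supposed to be used looks roughly like this. A sketch in Python, which isn't one of the tools I actually tried; the paths are made up, and the prefix only works with absolute, backslash-separated paths:)
import shutil

# the \\?\ prefix skips the kernel's MAX_PATH parsing, so ~32k character paths are allowed
src = r"\\?\E:\backup\some\absurdly\deep\directory\tree\way\past\260\chars\save.dat"
dst = r"\\?\C:\restore\save.dat"
shutil.copyfile(src, dst)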
At this point, I was sorely tempted to write my own copy utility that calls the internal Windows APIs that support unicode paths. but as I lack a C compiler, and haven't coded in C in like 15 years, I figured I'd try a few last desperate ideas first.
For the hell of it, I tried making an archive of the offending files with winRAR. Unsurprisingly, it failed to access the files.
... and for completeness's sake -- mostly to say I tried it -- I did the same with 7zip. I took one of the offending files and made a 7z archive of it in the destination folder -- and, much to my surprise, it worked perfectly! I could even extract the file! Hell, I could even work with paths >340 characters!
So... I'm going through all of the 70 missing files and copying them. with 7zip. because it's the only bloody thing that works. ffs
Third-party utilities work better than Microsoft's official fixes. gg.
...
On a related note, I totally feel like that person from http://xkcd.com/763 right now ;;
-
It was around 1997~1998, I was in middle school. It was a technical course, so we had programming language classes, IT etc.
The IT guy of our computer lab had been replaced and the new one had completely blocked access on the computers. We had to do everything on floppy disks, because he didn't trust us to use the local hard disk. Our class asked him to remove some of the restrictions, but he just ignored us. Nobody liked that guy. Not us, not the teachers, not the trainees at the lab.
One day a friend and I arrived a little bit early at the school. We went to the lab and another friend who was a trainee at the lab (and who is registered here, on devRant) let us come inside. We had already memorized all the commands. We crawled in the dark lab to the server, put in an MS-DOS 5.3 boot disk with a program to open NTFS partitions and, without turning on the computer monitor, booted the server.
At that time, Windows stored all passwords in an encrypted file. We knew the exact path and copied the file into the floppy disk.
To avoid any problems with the floppy disk, we asked the director of the school to let us out just to get some homework we had theoretically forgotten at our friend's house, which was on the same block as the school. We were not lying at all. He really lived there and he had the best computer of us all.
The decrypt program kept running for one week until it found the password we wanted: the root.
We came back to the lab during class. Logged in with the root account. We just created another account with a generic name but the same privileges as root. First, we looked for any hidden backups on the network and deleted them. Second, we were lucky: all the computers of the school were on the same network. If you were the admin, you could connect anywhere. So we connected to a "finance" computer that really was the finances, and we could get lists of all the students with debts, who had any discounts, etc. We copied it in case we were discovered and had to use anything to bargain.
Now the fun part: we removed the privileges of all accounts that were higher than the trainee accounts. They had no access to hard disks anymore. They had just the students' privileges now.
After that, we changed the root password. Not even we knew it. And last, but not least, we changed the students' logins, giving them trainee privileges.
We just deleted our account with root powers, logged in as student and pretended everything was normal.
End of class, we went home. Next day, the lab was closed. The entire school (that was school, mid school and college at the same place) was frozen. Classes were normal, but nothing more worked. Library, finances, labs, nothing. They had no access anymore.
We celebrated it as if it were New Year's Eve. One of our teachers came to us saying congratulations, as he knew it had been us. We answered with "I don't know what you are talking about". He laughed and went to his class.
We really have fun remembering this "adventure". :)
PS: the admin formatted all the servers to fix the mess. They had plenty of servers.
-
When one of your staff members asks "what's a file path?", it's times like these that I am ever so grateful that @dfox hooked me up with a squishy ball.
-
Me doing monday morning support because none of our fucking support members were available.
Me: Can you navigate to the Installation path of our Software.
Customer: how?
Me: with the Windows File explorer
Customer: i dont have That
me: Explaining how to navigate to the install location (thinking: fuck my life)
-
Our web department was deploying a fairly large sales campaign (equivalent to a ‘Black Friday’ for us), and the day before, at 4:00PM, one of the devs emails us and asks “Hey, just a heads up, the main sales page takes almost 30 seconds to load. Any chance you could find out why? Thanks!”
We click the URL they sent, and sure enough, 30 seconds on the dot.
Our department manager almost fell out of his chair (a few ‘F’ bombs were thrown).
DBAs sit next door, so he shouts…
Mgr: ”Hey, did you know the new sales page is taking 30 seconds to open!?”
DBA: “Yea, but it’s not the database. Are you just now hearing about this? They have had performance problems for over a week now. Our traces show it’s something on their end.”
Mgr: “-bleep- no!”
Mgr tries to get a hold of anyone …no one is answering the phone..so he leaves to find someone…anyone with authority.
4:15 he comes back..
Mgr: “-beep- All the web managers were in a meeting. I had to interrupt and ask if they knew about the performance problem.”
Me: “Oh crap. I assume they didn’t know or they wouldn’t be in a meeting.”
Mgr: “-bleep- no! No one knew. Apparently the only ones who knew were the 3 developers and the DBA!”
Me: “Uh…what exactly do they want us to do?”
Mgr: “The –bleep- if I know!”
Me: “Are there any load tests we could use for the staging servers? Maybe it’s only the developer servers.”
DBA: “No, just those 3 developers testing. They could reproduce the slowness on staging, so no need for the load tests.”
Mgr: “Oh my –bleep-ing God!”
4:30 ..one of the vice presidents comes into our area…
VP: “So, do we know what the problem is? John tells me you guys are fixing the problem.”
Mgr: “No, we just heard about the problem half hour ago. DBAs said the database side is fine and the traces look like the bottleneck is on web side of things.”
VP: “Hmm, no, John said the problem is the caching. Aren’t you responsible for that?”
Mgr: “Uh…um…yea, but I don’t think anyone knows what the problem is yet.”
VP: “Well, get the caching problem fixed as soon as possible. Our sales numbers this year hinge on the deployment tomorrow.”
- VP leaves -
Me: “I looked at the cache, it’s fine. Their traffic is barely a blip. How much do you want to bet they have a bug or a mistyped url in their javascript? A consistent 30 second load time is suspiciously indicative of a timeout somewhere.”
Mgr: “I was thinking the same thing. I’ll have networking run a trace.”
4:45 Networking ran their trace, and sure enough, there was some relative path to 'something' pointing to a local resource that wasn't on development; it was waiting/timing out after 30 seconds. Fixed the path and the page loaded instantaneously. Network admin walks over..
NetworkAdmin: “We had no idea they were having problems. If they told us last week, we could have identified the issue. Did anyone else think 30 second load time was a bit suspicious?”
4:50 VP walks in (“John” is the web team manager)..
VP: “John said the caching issue is fixed. Great job everyone.”
Mgr: “It wasn’t the caching, it was a mistyped resource or something in a javascript file.”
VP: “But the caching is fixed? Right? John said it was caching. Anyway, great job everyone. We’re going to have a great day tomorrow!”
VP leaves
NetworkAdmin: “Ouch…you feel that?”
Me: “Feel what?”
NetworkAdmin: “That bus John just threw us under.”
Mgr: “Yea, but I think John just saved 3 jobs. Remember that.”
-
Give some personality to your CMD!
1. Make a file called i.cmd and add the following text to it:
@echo off
echo me too
2. Move the file to somewhere in the system path (e.g. C:\Windows)
3. Enjoy!
-
So, I'm using a new MacBook Air (running Sierra), and while I'm still getting used to it (especially the different Sublime hotkeys), overall it really is quite wonderful. I particularly love the magic touchpad and ease of scrolling/swiping between desktops.
However, I ran into an issue this morning that gave me pause: apparent file caching.
My webpack setup auto-compiles my project when files change, and I noticed something was causing errors -- not really surprising since I was in the middle of fixing the project last night. However, the error it displayed wasn't something I was expecting, and referenced a line I was positive I had removed several hours before calling it a night. Whatever, I was probably mistaken, so I went to remove it.
... It wasn't there.
I double checked that I was looking at the right file. Yep, src/styles/header.scss -- that's the correct file. Figuring webpack was acting up, I killed and restarted it.
Same error.
So whatever, maybe Sublime cached it. Rather unexpected, but possible, and I am on a mac now... so maybe. So, I closed the file and reopened it. The line wasn't there. I did this twice more. It STILL wasn't there. Maybe I'm going crazy...? I checked the file with cat. The line was there. I checked with vim. The line was still there.
OKAY. I've seen a lot of people with beef with Sublime, and I often defended it. but maybe they're actually right. maybe Sublime really isn't the way to go. :( So, I killed and reopened Sublime, and I checked the file again.
The line STILL ISN'T THERE.
Maybe I'm going crazy? I double, triple, quadruple checked the path. all correct.
Alright; let's try again and make sure I do it properly. I closed everything I had open in sublime (two projects), and quit. I reopened Sublime, navigated to the correct path, and reopened the file...
The offending line STILL wasn't there.
I'm angry at this point and just mash the keyboard. I save the resulting garbage, and cat the file again. No visible changes.
KAJSFLK STUPID PIECE OF <redacted>
okay, whatever. Reboots fix everything, right? So I reboot, and keep the option to re-open everything again ticked.
The terminal comes back up, along with half(?) my browsers, but Sublime doesn't. grrrrrrr.
so I cat the damn thing.
GUESS WHAT.
THE GARBAGE IS THERE.
Sublime was doing its job. BUT EVERYTHING ELSE FAILED.
(Oh Sublime, why did I ever question you? 💚)
... but seriously, what the fuck could have caused that? Was the OS caching the file for some programs, but not others? Now I'm questioning the macbook...
-
I've found and fixed any kind of "bad bug" I can think of over my career from allowing negative financial transfers to weird platform specific behaviour, here are a few of the more interesting ones that come to mind...
#1 - Most expensive lesson learned
Almost 10 years ago (while learning to code) I wrote a loyalty card system that ended up going national. Fast forward 2 years and by some miracle the system still worked and had services running on 500+ POS servers in large retail stores uploading thousands of transactions each second - due to this increased traffic to stay ahead of any trouble we decided to add a loadbalancer to our backend.
This was simply a matter of re-assigning the IP and would cause 10-15 minutes of downtime (for the first time ever), we made the switch and everything seemed perfect. Too perfect...
After 10 minutes every phone in the office started going beserk - calls where coming in about store servers irreparably crashing all over the country taking all the tills offline and forcing them to close doors midday. It was bad and we couldn't conceive how it could possibly be us or our software to blame.
Turns out we made the local service write any web service errors to a log file upon failure for debugging purposes before retrying - a perfectly sensible thing to do if I hadn't forgotten to check the size of or clear the log file. In about 15 minutes of downtime each store's error log proceeded to grow and consume every available byte of HD space before crashing windows.
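(These days I'd also cap the log itself, something roughly like this rotating-log sketch in Python rather than whatever the store service was actually written in; names and sizes are made up:)
import logging
from logging.handlers import RotatingFileHandler

log = logging.getLogger("pos-uploader")
# never keep more than a few MB of error logs, no matter how long the backend is down
log.addHandler(RotatingFileHandler("upload-errors.log", maxBytes=1_000_000, backupCount=5))
log.error("web service call failed, retrying")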
#2 - Hardest to find
This was a true "Nessie" bug.. We had a single codebase powering a few hundred sites. Every now and then at some point the web server would spontaneously die and vommit a bunch of sql statements and sensitive data back to the user causing huge concern but I could never remotely replicate the behaviour - until 4 years later it happened to one of our support staff and I could pull out their network & session info.
Turns out years back when the server was first set up each domain was added as an individual "Site" on IIS but shared the same root directory and hence the same session path. It would have remained unnoticed if we had not grown, but as our traffic increased, every so often 2 users of different sites would end up sharing a session id, causing the server to promptly implode on itself.
#3 - Most elegant fix
Same bastard IIS server as #2. Codebase was the most unsecure, unstable travesty I've ever worked with - sql injection vulns in EVERY URL, sql statements stored in COOKIES... this thing was irreparably fucked up but had to stay online until it could be replaced. Basically every other day it got hit by bots and ended up sending bluepill spam or mining shitcoin, and I would simply delete the instance and recreate it in a semi un-compromised state, which was an acceptable solution for the business for uptime... until we were DDoS'ed for 5 days straight.
My hands were tied and there was no way to mitigate it except for stopping individual sites as they came under attack and starting them after it subsided... (for some reason they seemed to be targeting by domain instead of ip). After 3 days of doing this manually I was given the go-ahead to use any resources necessary to make it stop, and especially since it was IIS6 I had no fucking clue where to start.
So I stuck to what I knew and deployed a $5 vm running an Nginx reverse proxy with heavy caching and rate limiting, linked to a custom fail2ban plugin, in front of the insecure server. The attacks died instantly, the server sped up 10x and was never compromised by bots again (presumably since they got back a linux user agent). To this day I marvel at this miracle $5 fix.
-
Dear external developer dumbass from hell.
We bought your company under the assumption you had a borderline functioning product and/or dev team. Ideally both.
For future reference, expect that "file path" arguments can contain backslashes and perhaps even the '.' character. It ain't that hard. Maybe try using the damn built-in path parsing capabilities every halfway decent programming environment has had since before you figured out how to smash your head against the keyboard hard enough that your shitty excuse of a compiler stops arguing and gives in.
I am fixing your shit by completely removing it with one line of code calling the framework and you better not reject this.
This is not a pull request, IT'S A GOD DAMN PULL COMMAND.
- Is what i would _like_ to say right now... you know, if i wouldn't be promptly fired for doing so :p
How's your guys' friday going?
-
Hello devRant,
This is already from a few days ago but I had to process the whole thing myself first.
It was a normal day at work, nothing special. Customers came in, got their repaired PCs/laptops and brought some new work in. So I went through some of it and then I got to the case that is the most, well, unbelievable and shocking one I've had in the only 2 years of doing this. At first it was a normal HDD bad sector thing and I started copying the old HDD to a new one.
//NOTE: the program we use shows every file it's copying and the sectors it spans //
Suddenly I saw a weird thing happening where it started copying tons of files from a folder called "mature/kids" over to the new HDD.
I noted the path and after it finished we returned the laptop to the customer; he luckily left his old HDD with us. So my boss and I did some investigation and, well, turns out the dude had a whole library of child pornography.
tl;dr check what you copied and report such cases to the police.
Don't do such stupid shit and stay legal guys.
Wish you all a great day/night/morning/evening/whatever
//EDIT: I ofc won't post pictures cause of obvious reasons
-
Share your VS Code installed extensions here.
Mine is: Alignment, Better Comments, change-case, Colonize, CSS Peek, DotENV, File Utils, GitLens (my favorite!), Gulp Snippets, JS-CSS-HTML Formatter, Laravel 5 Snippets, Laravel Blade Snippets, Material Icon Theme, npm Intellisense, Numbered Bookmarks, Path Intellisense, PHP Debug, PHP DocBlocker, PHP Intelephense, PHP IntelliSense, Prettify JSON, Quokka.js, snippet-creator, Vetur.
Feels like there are redundant extensions here that I need to uninstall.
Happy Friday and Cheers! Excited for Infinity War movie! 😎
-
So probably about a decade ago at this point I was working for free for a friend's start-up hosting company. He had rented a high-end server in some data center and sold off virtualized chunks to clients.
This is back when you had only a few options for running virtual servers, but the market was taking off like a bat out of hell. In our case, we used User-Mode Linux (UML).
UML is essentially a kernel hack that lets you run the kernel in user space. That alone helps keep things separate or jailed. I'm pretty sure some of you can shed more light on it, but that's as I understood it at the time and I wasn't too shabby at hacking the kernel when we'd have driver issues.
Anyway, one of the ways my friend would on-board someone was to generate a new disk image file, mount it, and then chroot to that mount path. He'd basically use a stock image to do this and then wipe it out before putting it live.
I'm not sure exactly what he was doing at the time, but I got a panicked message on New Year's Day saying that he had deleted everything. By everything, I mean he had done an rm -fr /home as root on what he had thought was the root of a drive image.
It wasn't an image. It was the host server.
In the stroke of a single command, all user data was lost. We were pretty much screwed, but I have a knack for not giving up - so I spent a ton of time investigating linux file recovery.
Fun fact about UML - since the kernel runs in user space as a regular ol' process, anything it opens is attached to that process. I had noticed that while the files were "gone", I could still see disk usage. I ended up finding the images attached to their file pointers associated with each running kernel - and thankfully all customers were running at the time.
The next part was crazy, and I still think it is crazy. I don't remember the command, but I had to essentially copy the image from the referenced path into a new image file, then shut down the kernel and power it back on from the new image. We had configs all set aside, so that was easy. When it finally worked I was floored.
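(For the curious, the general trick looks something like this; a rough Python sketch with made-up paths and PID, not whatever command I actually typed back then:)
import os, shutil

pid = 4242  # the running UML kernel's process id (made up)
fd_dir = f"/proc/{pid}/fd"
for fd in os.listdir(fd_dir):
    link = os.path.join(fd_dir, fd)
    # deleted-but-still-open files show up as "... (deleted)" link targets
    if os.readlink(link).endswith("(deleted)"):
        shutil.copyfile(link, f"/recovered/client-{fd}.img")  # the data is still reachable through the fd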
Rinse and repeat, I managed to drag every last missing bit out of /proc - with the only side effect being that all MySQL databases needed to be cleaned up.
-
What is it with this team and the developers it attracts. 2 devs joined and left, both had several years of experience, both couldn’t google an issue to save their lives and needed to be helped 24/7.
Now we are mentoring a PHD student for a piece of his project. Dude was left stumped by an error message that said “Can’t find file at path ...” because the path didn’t exist. He spent a few hours trying to fix it before asking for help.
How, HOW are people getting through college / university without being able to read or debug such a simple fucking error message?
-
Junior: I don't think the methodology you came up with is working.
Me: Why?
Junior: There's an exception when I ran it
Me: ...what exception
Junior: FileNotFoundError
Me: ......have you checked if the path to the file is correct?
Junior: No
(A few moments later)
Junior: Oh I forgot to decompress the zip. Nevermind.
-
Pro tip for job candidates:
If you push a code challenge to a live hosting service like github pages or S3, don’t give the reviewers a link to the repo!! Instead put the link into the home page and send the reviewer only a link to the live hosted page.
Why?
Because, if you host with github pages, you’re required to use the project path as the domain root. If the reviewer pulls your project and doesn’t bother to read your readme file with the link at the top, he’ll complain that he couldn’t figure out why your project isn’t hosted from the root domain, and he’ll pass on your application.
True story.
-
I have this project I've inherited, yea I seem to do that a lot, but this damn thing has to run on php5.4, has functions that are deprecated in php7 everywhere (and a lot of them), and there are no classes anywhere beyond some libraries.
Everything is procedural with random scripts being injected left right and center.
I kid you not,
$thisThing = true;
if ($x == $y)
    require "path/to/some/script.php";
else
    require "path/to/a/slightly/different/script.php";
if ($thisThing === false) {
    // well, it was modified in that small block about 10 different times
}
Basically this thing was bandaid after bandaid for feature requests with 0 refactoring.
Here I am trying to implement some basic functionality (should only take an hour or so + a bit of manual testing) but no, I'm literally at the point of hitting the delete button on the entire project and starting again.
-
Me: what happens if you type 'echo PATH'? Pipe it to a file and send it to me
Collaborator: *sends me a 17,238 character text file of their PATH contents*
Me: that's no PATH; that's a space station
-
I've never used Windows in my day-to-day life. No kidding.
When I got my father's first computer, I used an old distribution called BBC Linux. I didn't have any computer knowledge, it was my first contact with a computer, so I went to a friend's house and asked for a CD to install on my computer. I don't know if this friend ended up making a "gotcha" and thought I'd give up, but I just read the manuals and fell in love. That was year 2000.
Then I used Conectiva Linux, then I went to Red Hat 9, then Slackware, then in 2007 I started using Solaris. And I stayed on Solaris (Solaris 10, Solaris Nevada and OpenSolaris) until 2011.
In 2011 I bought a Mac. I stayed at Apple until 2020, when I couldn't stand Apple forcing me to buy new computers (I still don't understand how a 2011 iMac, i5 (4 Hyper Thread cores) with 16GB of RAM, 1TB SSD only runs up to High Sierra).
Then I bought a Dell. It came with Windows 10, the first thing I did was install WSL2. I could not stand it, the system is bad, sorry. I installed OpenSuse and have been using it for two years.
It's just that every day someone tells me "how can you use this"? "There is no alternative to Windows, do you want to be different?"
I know that my story was the reverse of the "mainstream", so I'm going to talk about my vision of Windows, that in my brain it is actually the "alternative".
- Having a file explorer without "tabs" in 2022 is unthinkable for me.
- I love terminal. And the Windows terminal is very limited. "ps ... | awk ... | xargs ..." is a must for me. "find ./ -name '...' -exec ..."... these things on Windows are totally "different" and have the "powershell way" while all other operating systems keep the same form. And cygwin is not an option. As Wine for serious work is also not.
- Dragging a file into the terminal, and having it write its path, is so natural, that when Windows didn't do it, I was dismayed.
- I've always used StarOffice, OpenOffice and now LibreOffice. All the people in my story received my documents and reports as a PDF and no one complained. Until a coworker saw me editing in LibreOffice and said "oh I want it in word format". As long as he didn't know, everything was fine, right?
- Windows is paid. And is there advertising? I don't understand. And I refuse. If you want to display advertising, then excuse me. I have no problem paying, I'm not an opensource shiite. It's just that paying and not working bothers me much more than an opensource that I can fix or expect a fix knowing the good will of the people involved.
- Hyper-V is a joke. QEMU/KVM is better, and Bhyve on FreeBSD which is a very young project, is already a million times better than Hyper-V.
- Developing in C/C++ for Windows is only possible in two ways: Either you've always lived in Windows and your brain is conditioned, or you compile with MSYS2 (CLang or GCC).
- There is no significant evolution of the windows desktop since 95.
- Multiple workspace support with multiple monitors, not ready. It's another joke.
- REGEDIT does not need any comment.
- The system loses performance over time. I still don't know how Windows achieves this.
- I've seen people complain about desktop fragmentation on Unix and Linux. Many DEs end up leaving applications with different themes (like running a Qt application in Gnome and GTK in KDE), but to be quite honest, the lack of a Windows standard bothered me much more. Even Microsoft's own software is completely different: Control Panel, Calculator, Paint and Office, To-Do, and Settings have horrible style differences and look-and-feel fragmentation.
- Dark mode has not been implemented. It's another joke. Many applications are white while everything else is dark. Sorry, even on Linux which is a mess, this has been resolved. And well resolved.
- NTFS? Serious?
- C:, D:.. It doesn't convince me since DOS.
- Bloatware.
- News "biased" in the search bar is a lack of respect for those who use the computer to work.
And that. For me, Windows is the alternative operating system. I can't take Windows seriously, for me it's an experimental one like Haiku or ReactOS. It's good to play.
About market share, it doesn't convince me to use it. But convinces me to sell. I've always developed applications to run on Windows. And when I need it, I turn on a VM to compile the project. But in everyday life? Impractical.15 -
So I'm writing some multithreaded shit in C that is supposed to work cross-platform. MinGW has POSIX threads for Windows, so that already saved half of the platform dependency. The other half was that these threads need to run external programs.
Well, there's system(), right? Uhm yes, but it sucks. It's incredibly slow on Windows, and it looks like you can have only one system() call ongoing at the same time. Which kinda defeats the multithreaded driver. Ok, but there's CreateProcessA(), and that doesn't suck.
Fine, now for Linux. The fork/exec hack is quite ugly, but it works and is even fast. Just never use fork() without immediate exec(). First try under Cygwin... crap I fork bombed my system! What is this shit? Ah I fucked up the path names so that the external executable couldn't be run.
Lesson learnt: put an exit() right after the exec() in the child process path. It should never be reached, but if it ever is, the exit() at least prevents a fork bomb.
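(The same guard, sketched in Python instead of the actual C, since os.fork/os.execvp map straight onto the POSIX calls; the command is a placeholder:)
import os

pid = os.fork()
if pid == 0:
    # child: exec immediately and do nothing else
    try:
        os.execvp("external_tool", ["external_tool", "input.dat"])  # placeholder command
    finally:
        os._exit(127)  # only reached if exec failed -- this is the fork-bomb guard
else:
    os.waitpid(pid, 0)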
Well yeah, sort of works under Cygwin, but only with up to 3 threads. Beyond that, it seems like fork() at some point gives two processes the same PID, and then shit hangs.
Even slapping a mutex around the fork and releasing it only in the parent process didn't help. Fork in Cygwin is like a fork in the ass. posix_spawn() should work better because it can be mapped more easily to the Windows model, but still no dice.
OK, testing under real Linux. Yeah, no issues with that one! But instead, I get some obscure "free(): invalid size" abort. What the fuck would that even mean?! Checking my free() calls: all fine.
Time to fire up GDB in the terminal! Put a catch on the abort signal, mh got just hex data. Shit I forgot to compile with -O0 and -g. Next try. Backtrace shows the full call trace, back to the originating line in my program - which is fclose() on a file.
Ahhh I remember! Under Linux, fclosing a file that is already closed makes the program crash. So probably I was closing it twice. Checking back.. yeah that's where it was.
Shit runs fast on several cores now!
-
EoS1: This is the continuation of my previous rant, "The Ballad of The Six Witchers and The Undocumented Java Tool". Catch the first part here: https://devrant.com/rants/5009817/...
The Undocumented Java Tool, created by Those Who Came Before to fight the great battles of the past, is a swift beast. It reaches systems unknown and impacts many processes, unbeknownst even to said processes' masters. All from within it's lair, a foggy Windows Server swamp of moldy data streams and boggy flows.
One of The Six Witchers, the Wild One, scouted ahead to map the input and output data streams of the Unmapped Data Swamp. Accompanied only by his animal familiars, NetCat and WireShark.
Two others, bold and adventurous, raised their decompiling blades against the Undocumented Java Tool beast itself, to uncover its data processing secrets.
Another of the witchers, of dark complexion and smooth speak, followed the data upstream to find where the fuck the limited excel sheets that feed The Beast come from, since its handlers only know that "every other day a new one appears on this shared active directory location". WTF do people often have NPC-levels of unawareness about their own fucking jobs?!?!
The other witchers left to tend to the Burn-Rate Bonfire, for The Sprint is dark and full of terrors, and some bigwigs always manage to shoehorn their whims/unrelated stories into an otherwise lean sprint.
At the dawn of the new year, the witchers reconvened. "The Beast breathes a currency conversion API" - said The Wild One - "And its claws and fangs strike mostly at two independent JIRA clusters, sometimes upserting issues. It uses a company-deprecated API to send emails. We're in deep shit."
"I've found The Source of Fucking Excel Sheets" - said the smooth witcher - "It is The Temple of Cash-Flow, where the priests weave the Tapestry of Transactions. Our Fucking Excel Sheets are but a snapshot of the latest updates on the balance of some billing accounts. I spoke with one of the priestesses, and she told me that The Oracle (DB) would be able to provide us with The Data directly, if we were to learn the way of the ODBC and the Query"
"We stroke at the beast" - said the bold and adventurous witchers, now deserving of the bragging rights to be called The Butchers of Jarfile - "It is actually fewer than twenty classes and modules. Most are API-drivers. And less than 40% of the code is ever even fucking used! We found fucking JIRA API tokens and URIs hard-coded. And it is all synchronous and monolithic - no wonder it takes almost 20 hours to run a single fucking excel sheet".
Together, the witchers figured out that each new billing account were morphed by The Beast into a new JIRA issue, if none was open yet for it. Transactions were used to update the outstanding balance on the issues regarding the billing accounts. The currency conversion API was used too often, and it's purpose was only to give a rough estimate of the total balance in each Jira issue in USD, since each issue could have transactions in several currencies. The Beast would consume the Excel sheet, do some cryptic transformations on it, and for each resulting line access the currency API and upsert a JIRA issue. The secrets of those transformations were still hidden from the witchers. When and why would The Beast send emails, was still a mistery.
As the Witchers Council approached an end and all were armed with knowledge and information, they decided on the next steps.
The Wild Witcher, known in every tavern in the land and by the sea, would create a connector to The Red Port of Redis, where every currency conversion is already updated by other processes and can be quickly retrieved inside the VPC. The Greenhorn Witcher is to follow him and build an offline process to update balances in JIRA issues.
The Butchers of Jarfile were to build The Juggler, an automation that should be able to receive a parquet file with an insertion plan and asynchronously update the JIRA API with scores of concurrent requests.
The Smooth Witcher, proud of his new lead, was to build The Oracle Watch, an order that would guard the Oracle (DB) at the Temple of Cash-Flow and report every qualifying transaction to parquet files in AWS S3. The Data would then be pushed to cross The Event Bridge into The Cluster of Sparks and Storms.
This Witcher Who Writes is to ride the Elephant of Hadoop into The Cluster of Sparks and Storms, to weave the signs of Map and Reduce and with speed and precision transform The Data into The Insertion Plan.
However, how exactly The Data is to be transformed is not yet known.
Will the Witchers be able to build The Data's New Path? Will they figure out the mysterious transformation? Will they discover the Undocumented Java Tool's secrets on notifying customers and aggregating data?
This story is still afoot. Only the future will tell, and I will keep you posted.
-
OBS is advertised as the expert's screen recording and streaming tool; every list on the internet makes it out to be some incredibly difficult program not recommended for newbies.
It's also the only linux screen recorder that works out of the box on Pipewire and records both microphone and system sounds, and all the configuration amounted to:
1. select recording as my main use case in the setup wizard which is a very verbose English popup, then accept all defaults
2. add a new source, following the instructions written in the box which are also the only instructions on screen after application launch
3. set the output directory (optional) by going to File > Settings > Output > Recording Path, all of which were the first items I guessed. If I had not done this, it would've written everything to my home folder which is a bit dumb but not confusing at all
4. click Start Recording
5. click Stop Recording when done
Some newbie-oriented screen recorders have a more complicated setup procedure than this super advanced, don't-touch-without-safety-gloves-and-a-degree-in-video-engineering experts' tool.
-
Instructions on how to become suicidal:
- Create an API controller for the /file/ path
- Add an empty endpoint for POST /file/upload (will write it later!)
- Forget about this endpoint at some point
- Later, create a page for /file/upload
- GET /file/upload returns page
- POST /file/upload returns empty 200
Pure psychological horror for like an hour, Googling why the fuck my razor page is returning empty responses and why my breakpoint on OnPost is not fucking hitting, even when I copy and paste example code from the ms website.
Oh yeah, that controller.
-
As devs, our keyboards are arguably the most used tools in the creative process of software development. Shortcuts are essential for (most of) us.
What's your most used keyboard shortcut in your most used IDE? Please explain what it does in which IDE.
Mine is Cmd+Alt+L in IntelliJ (reformat code, but only VCS changed or selected lines). I press it all the time, almost maniacally, after changing anything.
Close second place candidates: Shift+F6 (rename anything, e.g. file, class, function, variable), double Shift (search everywhere), Cmd+Alt+F (find in path, also in code), Cmd+B (go to declaration).
-
MTP is utter garbage and belongs to the technological hall of shame.
MTP (media transfer protocol, or, more accurately, MOST TERRIBLE PROTOCOL) sometimes spontaneously stops responding, causing Windows Explorer to show its green placebo progress bar inside the file path bar which never reaches the end, and sometimes to whiningly show "(not responding)" with that white layer of mist fading in. Sometimes lists files' dates as 1970-01-01 (which is the Unix epoch), sometimes shows former names of folders prior to being renamed, even after refreshing. I refer to them as "ghost folders". As well known, large directories load extremely slowly in MTP. A directory listing with one thousand files could take well over a minute to load. On mass storage and FTP? Three seconds at most. Sometimes, new files are not even listed until rebooting the smartphone!
Arguably, MTP "has" no bugs. It IS a bug. There is so much more wrong with it that it does not even fit into one post. Therefore it has to be expanded into the comments.
When moving files within an MTP device, MTP does not directly move the selected files, but creates a copy and then deletes the source file, causing both needless wear on the mobile device's flash memory and the loss of the files' original date and time attributes. Sometimes, the simple act of renaming a file causes Windows Explorer to stop responding until the MTP device is unplugged. It actually once unfroze after more than half an hour, during which I did something else in the meantime, but come on, who likes to wait that long? Thankfully, this has not happened to me on Linux file managers such as Nemo yet.
When moving files out using MTP, Windows Explorer does not move and delete each selected file individually, but only deletes the whole selection after finishing the transfer. This means that if the process crashes, no space has been freed on the MTP device (usually a smartphone), and one will have to carefully sort out a mess of duplicates. Linux file managers thankfully delete the source files individually.
Also, for each file transferred from an MTP device onto a mass storage device, Windows has the strange behaviour of briefly creating a file on the target device with the size of the entire selection. It does not actually write that amount of data for each file, since it couldn't do so in this short time, but the current file is listed with that size in Windows Explorer. You can test this by refreshing the target directory shortly after starting a file transfer of multiple selected files originating from an MTP device. For example, when copying or moving out 01.MP4 to 10.MP4, while 01.MP4 is being written, it is listed with the file size of all 01.MP4 to 10.MP4 combined, on the target device, and the file actually exists with that size on the file system for a brief moment. The same happens with each file of the selection. This means that the target device needs almost twice the free space as the selection of files on the source MTP device to be able to accept the incoming files, since the last file, 10.MP4 in this example, temporarily has the total size of 01.MP4 to 10.MP4. This strange behaviour has been on Windows since at least Windows 7, presumably since Microsoft implemented MTP, and has still not been changed. Perhaps the goal is to reserve space on the target device? However, it reserves far too much space.
When transfering from MTP to a UDF file system, sometimes it fails to transfer ZIP files, and only copies the first few bytes. 208 or 74 bytes in my testing.
When transfering several thousand files, Windows Explorer also sometimes decides to quit and restart in midst of the transfer. Also, I sometimes move files out by loading a part of the directory listing in Windows Explorer and then hitting "Esc" because it would take too long to load the entire directory listing. It actually once assigned the wrong file names, which I noticed since file naming conflicts would occur where the source and target files with the same names would have different sizes and time stamps. Both files were intact, but the target file had the name of a different file. You'd think they would figure something like this out after two decades, but no. On Linux, the MTP directory listing is only shown after it is loaded in entirety. However, if the directory has too many files, it fails with an "libmtp: couldn't get object handles" error without listing anything.
Sometimes, a folder appears empty until refreshing one more time. Sometimes, copying a folder out causes a blank folder to be copied to the target. This is why on MTP, only a selection of files and never folders should be moved out, due to the risk of the folder being deleted without everything having been transferred completely.
(continued below)
-
Does anyone else use:
cat /path/to/file | grep "blah"
Rather than:
grep "blah" /path/to/file
...when grepping? Or is it just me? Mainly asking because in my half asleep state I just wrote `tail -f | grep "blah"` by mistake and wondered why it was taking way too long to read the file...
-
>import ENi18n
>import ZHi18n
en = {...ENi18n, moreStuff}
zh = {...ZHi18n, moreStuffZH}
pt = {...ZHi18n, moreStuffPT}
“Hey man can you fix this? Seems like we are missing the Portuguese i18n. Check this file please (path)”
“I’m sorry I don’t understand, can you call me and explain?”
Why do I need to explain this? What is difficult to grasp here? How can it take more than 20 seconds to know what to do here? It’s not even a file I made, you made it and I just ran into it!
Fuck man, I’m going to blow my brains out.
-
1) Using ScreenRecord, record a video of deleting his work folder (a fake one, obviously).
2) Set up command-line vlc to play this video on startup with the flags -f and --no-qt-fs-controller
Eg. vlc -f --no-qt-fs-controller file://<file path>
Enjoy the show 👿
-
(Dev)Life in the past 12 hours
Oh boy have the last 12 hours been a roller coaster ride for me. Noob me decided to "compile" AoSP for my device to get a taste of how custom ROMs are built from source. Overall it was fun but the errors were a very good excercise for googling, SO. Couple stuff I learnt ( possibly useful for anyone who comes here )
* The shebang line ( #!/usr/bin/env python ) on my system resolved to a Python 3.7 environment instead of the expected Python 2.7. The best solution, I think, to avoid confusion is to create a Python 2.7 environment and source it (a minimal sketch follows after this list).
* Get your trees right. A jar file called WfdCommon.jar ( apparently known as wifi-display common ) was the cause of several hours of fault hunting. My vendor tree somehow didn't have this file, so dex2oat was borking out like mad. I'm still amazed how I figured this one out almost by myself. ( Basically I had to check every file included in the boot class path and find the odd one )
* I wasted a lot of time finding the right files to change version numbers and all. Maybe I didn't search XDA properly for a guide?
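A minimal sketch of that Python 2.7 workaround, assuming virtualenv is installed and the interpreter sits at the usual location (paths are just examples):
virtualenv -p /usr/bin/python2.7 ~/py27env   # create a dedicated Python 2.7 environment
source ~/py27env/bin/activate                # now `python` resolves to 2.7 in this shell
python --version                             # sanity check before kicking off the build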
Overall it was a fun experience. Also if anyone's experienced in this area could you share resources to learn more about custom ROM development? Specifically on the tweaking part where you mix features from different ROMs to make a great ROM ( like AoSP extended or Pixel Experience ). All I could find were on the zips and not on sources.7 -
who the fuck decided that macOS not showing the file path in Finder etc. should be the default
also not having a home directory available by default in the side bar
i fucking hate you so much9 -
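The usual counter-spells, for what they're worth (Finder preference keys quoted from memory, so double-check them):
defaults write com.apple.finder ShowPathbar -bool true     # show the path bar at the bottom of Finder windows
defaults write com.apple.finder ShowStatusBar -bool true   # and the status bar while we're at it
killall Finder                                             # restart Finder so the settings apply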
In a meeting yesterday we were working through our WebAPI coding standards, starting from File -> New project..etc..etc.. and ironing out some of the left-or-right decisions so we can have a consistent coding style, sitting in a meeting room with an overhead projector and passing the keyboard around.
Then we hit the routing 'rules' in the WebApiConfig, "api/{controller}/{id}"…
DevMgr: "Do we need the 'api' prefix? It seems redundant."
Ralph: "Yes it's needed. Prefixing the controllers with 'api' is industry best practice. Otherwise, how is anyone to know it's a web api"
Prancer: "Yea, it's part of the REST standard."
Me: "I don't think so. That is only part of the Asp.Net routing rule. We can put anything we want or take anything out."
DevMgr: "Yea, it looks silly. All the new services are going to be business process specific."
Ralph: "That's how everyone does it. It's kind of the point of why REST services are called WebApi"
Prancer: "What's the point of doing any of this work if we're not going to follow industry standards."
Me: "I understand if the service is part of larger web site, but we're developing standalone services. Prefixing routes with 'api' is redundant. I mean who are these 'everyone' you're talking about?"
<ralph rolls his eyes>
Ralph: "Lets see …uhhh… Netflix?. They're kinda a big deal."
Me: "Like I said, it's an integral part of their site and the services they provide. That's fine. I'm talking about the 12 other 3rd party services we integrate with. None of them have 'api' on any of their routes."
Prancer: "We're talking about serious web services."
Me: "Last time I checked, UPS is a big and serious service."
Ralph: "Their services are a fracking joke" – he didn't say fracking.
Me: "Our payroll system, our billing system, billion dollar companies, didn't have '/api' prefix anywhere. Heck, even that free faxing service we used for a while was a dead-simple routing path."
<I take the keyboard away from Ralph, remove the 'api' from the route.>
Me: "There. Done. Now, lets talk about error handling.."
For the rest of the meeting, Ralph and Prancer didn't say much of anything, arms crossed…I swear Ralph looked like he was going to cry.
This morning I catch my boss…
Me: "What did you think of the meeting? I thought Ralph was going to take a swing at me when I took the keyboard away from him."
DevMgr: "Oh yes…I almost laughed out loud….blows my freaking mind how worked up people get about crap that doesn't matter. Api..or not…who the frack cares. Just make it consistent"
Me: "Exactly…I didn't care either way, but I enjoyed calling out that nonsense."
DevMgr: "Yes..waaay too much."
If I hadn't called them on their BS and the 'standard' had been allowed to continue, I can bet my paycheck that when the subject comes up in a few months (another mgr asks 'isn't this api prefix redundant?') Ralph and Prancer will be the first to say "Yea, it's stupid. We fought really hard to remove it from the standard...it's not our fault...it's <insert scapegoat>'s fault." -
Beware: Here lies a cautionary tale about shared hosting, backups, and -goes without saying- WordPress.
1. Got a call from a client saying their site presented an issue with a third-party add-on. The vendor asked us to grant him access to our staging copy.
2. Their staging copy, apparently, never got duplicated correctly because, for security reasons, their in-house dev changed the name of the wp-content folder. That broke their staging algo. So no staging site.
3. In order to recreate the staging site, we had to reset everything back to WP defaults. Including, for some reason, absolute paths inside the database. A huge fucking database. Because WordPress.
4. Made the changes directly in a downloaded SQL file. Shared hosting, obviously, had an upload limit smaller than the actual database.
5. Spent half an hour trying to upload table by table to no avail.
6. In-house uploads a new, fixed database with the help of the shared hosting provider.
7. Database has the wrong path. Again.
8. In-house performs massive Find and Replace through phpMyAdmin on the production server.
9. Obviously, MySQL crashes instantly and the site gets blocked for over 3 hours for exceeding shared hosting limits.
10. Hosting provider refuses to accept this was caused by such a stupid act and says site needs to be checked because queries are too slow.
11. We are gouging our eyeballs as we see an in-house vs. hosting fight unfold. So we decide to watch a whole Netflix documentary in between.
12. Finally, the hosting folds and enables access to the site, which is obvi not working because, you know, wrong paths.
13. Documentary finishes. We log in again, click restore from backup. Go to bed. Client phones to bless us. Client’s in-house dev probably looking for a cardboard box to pack his stuff first thing in the morning. ¯\_(ツ)_/¯ -
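For anyone in the same spot: if the host has WP-CLI available, the path rewrite from step 8 can be done without phpMyAdmin and without melting MySQL, roughly like this (paths are placeholders):
wp search-replace '/old/absolute/path' '/new/absolute/path' --all-tables --dry-run   # preview what would change
wp search-replace '/old/absolute/path' '/new/absolute/path' --all-tables             # then run it for real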
Just ran rm -rf ~
Only good thing is I ran it without sudo.
So I was writing a script to hit an API multiple times and write the output to a file. Instead of providing the absolute path like /User/...., I gave the path as ~/..., so it created a folder literally named ~ inside the directory I was in. Then I wanted to delete it and the file inside it. And so I did.
I am an idiot5 -
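For the record, the literal ~ folder shows up whenever the tilde never gets expanded (quoted in a script, or written by the program itself rather than the shell), and it can be removed without touching $HOME by making the path unambiguous:
mkdir -p '~/output'   # oops: quoted tilde creates a directory literally named "~"
ls -d ./~             # confirm it's the literal one sitting in the current directory
rm -rf ./~            # the ./ prefix stops the shell from expanding ~ to your home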
Today in some onboarding meeting I was laughing my ass off.
We were setting up the development machines that we got from the client to work on via Citrix.
You guys probably know that when your npm projects are nested too deep in the filesystem, packages randomly start misbehaving because of too-long file names or path names and stuff like that. That seems to be a problem with every OS (to be fair I haven't actively looked for a solution, but it happened to me on Windis and Linux, so I'm just assuming here),
but even more so for some packages on Windis, when the project is not on the same fucking drive letter as the one your OS is running on. Like... wtf?
Had two UI5 projects pulled, both of them on D:. The first npm install went through flawlessly; the second one had a number of random errors, and the other dev and I didn't know what they were. So what I suggested was to move this project onto C: and try it again. Turns out that was exactly it. Et voilà, npm install ran through without any hiccups..6 -
Workarounds are great. I remember one time, I had a server that let anyone access any file as long as they knew the right path. I wanted to store data in a .txt (it wasn't secure passwords or anything, so calmyourtities), but then anyone would have had access to it. Now, this server wasn't running anything except PHP, so I created a database.php, and within it was just some PHP tags. I ended up modifying database.php from other PHP scripts, storing all the data as PHP comments and then parsing through them as I needed, so loading mydomain.biz/database.php wouldn't show the data. Example of my database.php (for all that might not understand, because I'm bad at explaining):
<?php
//USER1:DATA1
//USER2:DATA2
?>2 -
This is a story of how I did a hard thing in bash:
I need to extract all files with extension .nco from a disk. I don't want to use the GUI (which only works on windows). And I don't want to install any new programs. NCO files are basically like zip files.
Problem 1: The file headers (or something) are broken and 7zip (7z) can only extract an archive if it has a .zip extension
Problem 2: The find command gives me paths relative to the disk, starting with . (a dot)
Solution: Use sed to delete the dot. Use sed to convert to a full path. Save to a file. Load lines from the file and for each one, cp to ~/Desktop/file.zip && 7z e ~/Desktop/file.zip -oOutputDir (extract the file to OutputDir).
Problem 3: Most filenames contain whitespace. cp doesn't work when given the path wrapped in quotes.
Patch: Use bash parameter substitution to change whitespace to \whitespace.
(Note: I found it easier to apply sed one after another than to put it all in one command)
Why the fuck would anyone compress 345 images into their own archive used by an uncommon windows-only paid back-up tool?
Little me (12 years old) knowing nothing about compression or backup or common software decided to use the already installed shitty program.
This is a big deal for me because it's really the first time I string so many cool commands to achieve desired results in bash (been using Ubuntu for half a year now). Funny thing is the images uncompressed are 4.7GB and the raw files are about 1.4GB so I would have been better off not doing anything at all.
Full command:
find -type f -name "*.nco" |
sed 's/^\.\(.*\)/\1/' |
sed 's/.*/\/media\/mitiko\/2011-2014_1&/' > unescaped-paths.txt
cat unescaped-paths.txt | while read line; do echo "${line// /\\ }" >> escaped-paths.txt; done
rm unescaped-paths.txt
cat escaped-paths.txt | while read line; do (echo "$line" | grep -Eq .*[^db].nco) && echo "$line" >> paths.txt; done
rm escaped-paths.txt
cat paths.txt | while read line; do cp $line ~/Desktop/file.zip && 7z e ~/Desktop/file.zip -oImages >/dev/null; done3 -
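Looking back, a null-delimited pipeline would have skipped the whole escaping dance; a sketch with the same mount point (the filename filter is only an approximation of the original grep):
find /media/mitiko/2011-2014_1 -type f -name '*.nco' ! -name '*db.nco' -print0 |
while IFS= read -r -d '' f; do
cp "$f" ~/Desktop/file.zip && 7z e ~/Desktop/file.zip -oImages >/dev/null
done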
Okay ... Windows, I really tried to be nice, but that's it. You get off my ssd right now ...
Just got the error: "filename is too long - unable to delete file" (full name incl. path)
Seriously, WHY?
I mean, sure ... Long file paths/names are a thing and this is why there are limitations. I am totally fine with it!
But why the hell does Windows allow you to create those files if it is unable to delete them later ...
I don't get it. Maybe I just hit my head too hard as a child, so someone may enlighten me ...
PS: windows was running in a VM to give it a real serious try after years on Linux4 -
Mobilis in mobili.
Yesterday, I was trying to figure out how to open a folder via the Linux terminal (like `open path/to/folder` on macOS), and I discovered that it can be done via `nemo path/to/folder`. This rang a bell for me because I knew that the GNOME file manager was named Nautilus.
This got my interest because both names are in Jules Verne's "Twenty Thousand Leagues Under the Sea". Nautilus is the submarine commanded by the great Capt. Nemo, a brilliant individual who plans to explore the depths of the sea with Nautilus.
I learned that the developers of Linux Mint believed the GNOME file manager Nautilus (v3.6) was a catastrophe, and thus they forked the project, giving birth to the awesome Nemo. So instead of exploring the depths of the sea, I guess we could say Nemo is now exploring the depths of our filesystem, right? -
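Side note: `xdg-open` is the desktop-agnostic cousin of macOS `open`, so it launches whatever file manager the desktop is configured with, be it Nautilus, Nemo or something else:
xdg-open ~/Projects   # opens the folder in the default file manager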
So our junior dev constantly asks really obvious things. But this one question really takes the cake.
So we have a small program that opens a file browser and puts the selected file's path in a line-edit text box. So he comes over to me and says it's broken because he can't edit the path in the text box. Weird, this shouldn't happen at all. Turns out this more than braindead tortoise thought it was just a regular piece of uneditable text and didn't even try to edit it. It's a FUCKING OBVIOUS EDITABLE TEXTBOX!!!!!
I facepalmed so hard in that moment that you could hear the slap half a mile away!7 -
How to write a proper Hello World program in Java:
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.nio.charset.Charset;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Arrays;
import java.util.List;

public class ProperJavaProgram {
    public static void main(String[] args) {
        try {
            // Write the hello world file
            List<String> lines = Arrays.asList(
                "#include <stdio.h>",
                "int main() {",
                "printf(\"Hello World!\\n\");",
                "return 0;",
                "}"
            );
            Path file = Paths.get("awesome-c-program.c");
            Files.write(file, lines, Charset.forName("UTF-8"));
            // Compile and execute the file (file name now matches the one written above)
            executeCommand("cc awesome-c-program.c -o awesome-executable");
            executeCommand("./awesome-executable");
        } catch (IOException e) {
            System.out.println("You're screwed, just use Java and get over it. " + e);
        }
    }

    public static void executeCommand(String command) {
        try {
            Process p = Runtime.getRuntime().exec(command); // Run the process
            BufferedReader stdInput = new BufferedReader(new InputStreamReader(p.getInputStream())); // Get the output
            String s; // Print out the output
            while ((s = stdInput.readLine()) != null) {
                System.out.println(s);
            }
        } catch (IOException e) {
            System.out.println("You're screwed, just use Java and get over it. " + e); // UR SCREWED
        }
    }
}2 -
So today was interesting.
I had to extract the domain from an email address and compare the domain to a hard coded whitelist, nothing difficult, fuck takes 2 min really.
Except the project starts throwing 500 errors for no goddamn reason; like, seriously, I double-check the syntax, nope, looks fine, so I run PHP's syntax checker on the file
# php -l /path/to/file.php
Nope says it's all good.
Checks error log on server -> no log
OoooooooooKay then.
Comments out the few lines, saves, errors gone.
remove comments, error comes back.
Do this a few times, and magically the fucking thing stops throwing errors, now I haven't actually changed anything, and I know this project is so fragile I don't know how it stays running at times but fuck me this is a painful joke.6 -
Just got an email from my company that an HTTP server app I wrote years ago exposed the whole server it runs on because of a misconfigured parameter...
Can use it to read any file using server.com/path/to/file1 -
Dev Diary Entry #56
Dear diary, the part of the website that allows users to post their own articles - based on a robust rights system - through a rich text editor, is done! It has a revision system and everything. Now to work on a secure way for them to upload images and use these in their articles, as I don't allow links to external images on the site.
Dev Diary Entry #57
Dear diary, today I finally finished the image uploading feature for my website, and I have secured it as well as I can.
First, I check filesize and filetype client-side (for user convenience), then I check the same things serverside, and only allow images in certain formats to be uploaded.
Next, I completely disregard the original filename (and extension) of the image and generate UUIDs for them instead, and use fileinfo/mimetype to determine extension. I then recreate the image serverside, either in original dimensions or downsized if too large, and store the new image (and its thumbnail) in a non-shared, private folder outside the webpage root, inaccessible to other users, and add an image entry in my database that contains the file path, user who uploaded it, all that jazz.
I then serve the image to the users through a server-side script instead of allowing them direct access to the image. Great success. What could possibly go horribly wrong?
Dev Diary Entry #58
Dear diary, I am contemplating scrapping the idea of allowing users to upload images, text, comments or any other contents to the website, since I do not have the capacity to implement the copyright-filter that will probably soon become a requirement in the EU... :(
Wat to do, wat to do...1 -
Explain to me why people love Apple so much.
What is a simple task on every other OS ever is a multi-step dance on a Mac (or iPhones too, for that matter). It is a productivity nightmare that makes the whole system feel like it is only meant to be used to watch YouTube.
The way the keyboard works feels like it was designed by aliens.
Browsing the system with Finder is an absolute pretzel nightmare. No moving files. Copy, paste, then delete is as good as you're going to get. No way to type the path to go straight to it. You will do things the slowest way possible and be happy while doing it.
Want to quickly create a blank file in the current folder? Oh what's that? You thought the right click menu was going to help you like every other OS? Apple laughs in your face for such arrogance.25 -
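Not a defence of Finder, but the Terminal fallback that covers most of the above (folder and file names are made up):
cd ~/Documents/some-folder   # or drag the folder onto the Terminal window to paste its path
touch notes.txt              # the blank file the right-click menu refuses to offer
open .                       # open the current folder in Finder
open -R notes.txt            # reveal the new file in a Finder window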
At work one morning, I was asked in chat for a way to edit an xml file on a Mac. They couldn't open it due to permissions. I told them to open Terminal and run sudo vi /path/to/file.xml. Never got a message back about it, so I assumed everything was OK. Later that afternoon, I received another question: "I'm in, I've made the changes, now what? How do I get out?" It wasn't funny until I realized how many memes existed for this. I'd imagined they'd quickly opened and edited it and spent hours unable to exit it; though, realistically, it probably wasn't attempted until the afternoon. Truthfully, I was new to it, too, and have no idea why I suggested vi over something else.
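For the record, the way out they were hunting for all afternoon is two commands' worth of vi:
# press Esc first to leave insert mode, then type
:wq   # write the file and quit
:q!   # or quit and throw the changes away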
-
Fucking hate working with dotnet. Just spent half an hour fixing the most fucked up bug.
So I installed a nuget package on my computer, tested everything, it worked, and pushed. My classmates then pulled it to their pc, and holy hell broke loose.
Everything was red, it couldn't even import System! By a turn of luck, I looked in the .csproj file, and saw that it had made an absolute path to the nuget package on my computer. Well no fucking wonder it only worked on mine then!
And here's the weird thing: it only did it now, it hasn't done it with the other packages we've imported3 -
Affinity Designer export to SVG, normally an easy task for a vector programme.
Furthermore, it is only a little picture: a plain eight filled with a single colour.
Result: an SVG with an unbelievable file size of 98.7 kB. Holy shit!
Looked at it: the export had produced one huge, long, heavily scaled path... oh man...
A little hand-made recoding later, it's four circles. Done. New size: 0.8 kB.
That's better.4 -
Fuck Wordpress, Fuck Wordpress's PHP
I'm so fucking tired of everything in this godforsaken CMS. Import a JS File? Sure, just add a *completely obvious* line into a very specific PHP File, where you'll have to specify a lot of "useful" parameters. No, I somehow DON'T want to specify that I don't want jQuery in every import. And don't even get me started on Content Delivery. Embed CSS? Sure, just write the fucking whole path to the file, or use the broken get_stylesheet_uri() Function. Embed an Image? Sure, let me just go to the Backend and wait 6 Minutes for this bullshitty System to upload the image and then copy the hard-coded Link. Oh, you want to remove googleapi embeds? Sure, let me just fuck up your whole Website in return.
You want jQuery? Well instead of using the "$" Symbol, you have to use the jQuery() Function. Except when you don't have to, which is 100% random each time you reload the page. Oh, you actually did import a JS File? Sure, let me just not run it. Thank you, fucking piece of shit that's calling itself "WordPress", and fuck you and everyone who's actively encouraging its usage
- name: "Clean up {{ project_dir }}
file:
state: absent
path: "{[ project_dir }}"
Took me 2 hours to figure out why this wasn't deleting files: the `{[` that should have been `{{` is painfully easy to skim past.1 -
Why, in 2023, do we still have a path length limit in Windows?
I know finding a good solution is not that trivial, but if I end up with a .zip which I can't extract into that directory because a few files have names that are too long, at least let me know in advance (and not during the extraction), and maybe (amazing idea) tell me how many levels I have to go back up the directory tree to make it work…12 -
Wasted a day as Shitlock Holmes with the build chain.
It would not reproduce the firmware hexfile that had been checked in. Reverse engineering that along with the mapfile to find the cause, it turned out to be a const string guarded by an ifdef from another file that was auto-generated as a prebuild step via a script that fetched some version control info.
Or rather, it would have reproduced it if the installation instructions had been correct and someone had mentioned that no spaces are allowed in the absolute path of the project. As it was, that shit just failed silently.
I then had to reverse engineer the intended workflow from the commit history in the version control to figure out that the last dev obviously hadn't quite understood the project specific workflow and how the version control interacts with these build scripts.
At least, I finally did get a matching hexfile.1 -
Steps to make a portable version of Minecraft Java Edition (for Windows):
1. Get a flash drive, preferably of a decent size (>500MB). I used a 128GB flash USB 3.
2. Download Java JRE (version 8).
3. Download MultiMC
4. Install Java, put destination on flash drive. example: x:/mc/java
5. Eject flash drive.
6. Uninstall the JRE from the computer. This will remove the installation entry on the computer. Since the flash drive is ejected, the uninstaller cannot delete the files off the drive.
7. Install multimc. example: x:/mc/multimc
8. Point multimc to JRE location on flash drive.
9. Edit the path of the JRE to be something like this: ../java/etc... This keeps it from trying to use the drive letter and makes it use a relative path instead. I edited the MultiMC config file to do this. It can probably be done in the program too. If you modify the config file, you will have to quit MultiMC first.
10. Login with your minecraft account in multimc.
11. Download some version of minecraft or modpack.
12. Enjoy on any windows computer and take with you!8 -
Today I had to restart a VM that should contain only an Apache server; after a while, some clients complained that a piece of functionality in the system was not working. I checked the code and found that this functionality depends on a local server on a different port, so I went back to the VM log and found there was a Node server running on that port.
Everything was fine till now: I checked the folders, but there were no FUCKIN js files for a Node server.
After hours of searching I found 5 files in the public/assets/dist/js path, named after the same functionality with a number 1 to 4, and the 5th one was a TEST.js. After checking all the files I discovered that the server file is the TEST.js !!!!! seriously WTF ...2 -
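Next time, a faster way to answer "what is serving on this port, and from which file" (the port number is just an example):
sudo ss -tlnp | grep :3000    # PID and program name listening on the port
ps -fp <PID>                  # full command line, including the script it was started with
sudo ls -l /proc/<PID>/cwd    # the directory the process was started from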
I was just going over some projects I need to transfer to other team members and was reminded of all the utility apps I created. Particularly one that converts Windows paths to Linux ones....
Or basically path.Replace("\\","/") in a GUI.
I actually use it a lot whenever I hardcode a file path in Java for testing or make some partial path Linux compliant.
I think it saves me a lot of time, but I think I'm the only person that creates these apps... basically for anything I find myself repeating often... even these simple things.
Am I weird? Or just good at identifying things that can be outsourced? And outsourcing them?16 -
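The same conversion as a throwaway shell one-liner, for anyone who would rather skip the GUI:
printf '%s\n' 'C:\Users\me\project\file.txt' | tr '\\' '/'   # prints C:/Users/me/project/file.txt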
Well I just found a security issue with my company's website that's potentially been there for YEARS
You can just fucking bypass the login screen and access any file. You do have to know the filename and path from the site root, but I doubt that matters to anyone willing to try hard enough. I'm sure there are tools to find the paths
Especially since the file names are fucking predictable 🙄 😒5 -
so I started a side project a while ago.
the only thing it could do was to create some files with desired names and extensions. so this was basically a pretty simple editor.
I left this project with no future plans for a month or so until I started working on it again this week. I added comments to the editor, a console user interface.
the ui isn't futuristic. the program runs in the console. it just lists all the files and folders where the program is currently located in. in the beginning it could take user input and that input was the location where the files created in the editor would be saved. then I thought: it would be more interesting if I created a folder in which I saved the files from the editor. so I did this thing.
then I thought, again: hey, this console is pretty boring and stuff. why not add some special commands? and so I did.
now you can create an empty folder; before, you created a folder and saved the files made in the editor at the same time. now you can open another folder in which you can do the same stuff as before. you can get the current location of the folder you are currently in, so you don't get lost in your fancy computer. you can delete a folder completely, set color, reset color.
but one thing that cost me almost ONE FREAKING HOUR TO MAKE THE USER EXPERIENCE BETTER was the following: when creating a folder, either empty or with the files from the editor, the program automatically opens the folder, not in the console (hey, I didn't think of that) but in the file explorer of the os. right now it only works for windows and windows explorer because I used system(const char*). I know it's not portable or efficient but I just wanted things to work, I will optimise it later.
the thing that made me lose that one hour debugging was figuring out how to open that file.
ok, so I used windows api with GetCurrentDirectory, I knew how to use system, I knew how to form the path that would match up with the folder, I almost knew how to open the folder with system().
the problem was that I had the complete path, but if the folder had whitespace in it, system() wouldn't recognise the freaking command!
so the string with the path would also contain the command used in system() and I would just .c_str() the string so it could work. as an example my wrong way to make the path was this:
"start C:\\path"
can you figure out what is the problem?
you don't?
it's just so trivial.
how cannot you figure it out?
of course you NEED to put "explorer" between the start command and the actual path!
pffft, you idiot! so easy to figure it out.
so yeah, the right way to open a folder is like this:
"start explorer C:\\path to heLL!!"
p.s.: I still don't understand why putting explorer works and without it doesn't. without explorer it just just says that path with the first word before the white space doesn't exist. -
So today I was messing with a side project and for context it’s a networking program.
So I've designed the program's packets and what each does. The final step is just constructing them and sending them, but wait, some random error that I traced from the file path being wrong to the packet containing a file's name, but then I realized that the packet after the file name wasn't sending, and so I looked at the contents of the first packet and IT WAS SENDING BOTH CONTENTS IN ONE and I fucking can't tell you how hung up on this I got, because there was nothing wrong with any other packet in any way, and if I commented the file name packet out the next one worked and vice versa, and it was so fucking infuriating, and out of desperation I thought "what if I just gave it time between sending both" AND IT FUCKING WORKED. ONE LITTLE FUCKING sleep(.5) FIXES THE PROBLEM THAT PLAGUED ME QUITE LITERALLY ALL DAY I CANT. IM PRETTY SURE ITS STILL NOT A GOOD SOLUTION BUT IM ROLLING WITH IT!1 -
Fuck whoever put the `hosts` file in that path WHICH IS IMPOSSIBLE TO REMEMBER!
and then fuck whoever put the httpd-vhosts.conf in a totally different path that is impossible to remember!8 -
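A cheat sheet for the next time memory fails (the XAMPP location is the common default, so treat it as an assumption):
# Linux / macOS hosts file
/etc/hosts
# Windows hosts file
C:\Windows\System32\drivers\etc\hosts
# typical XAMPP vhosts config
C:\xampp\apache\conf\extra\httpd-vhosts.conf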
Group assignment: writing our own Java logger component in a group of four, using nothing other than Java SE libraries, Maven and Jenkins. The software must be able to substitute the logger component without recompilation, just by editing the config.xml (setting the jar file path and the fully qualified class name of the logger).
I asked around on Slack which group is ready for a component exchange, so that we could test the switch. I found another group and I started doing some testing.
Then I got a `java.lang.NoClassDefFoundError: org/apache/log4j/Logger`. I got in touch with my peer from the other group and asked him if they'd been using log4j. Apparently they had, so I told him that the assignment was to write a logger of one's own, not just to use log4j. Then he told me: "Uh, ok, I'm going to tell the guy responsible for the logger part about that..."
X-D -
Ugh... some people...
Just left the office early because of the toxic climate. That one infamous colleague is basically unable to communicate without being a narcissistic 5-year-old and was arguing about whether we should write a test (I was going to write the test) that would need a single additional branch in the build system.
(The test was for a parser and it should test whether it can handle absolute paths. A simple regression test with a file and an expected output. Because absolute paths are different for every platform and user, the files to be parsed would have to be generated with appropriate paths before the tests were run. Well that would require one single python script and a single line in the script that runs the script and DONE)
Well that guy was unable to focus on his own work and started an argument about whether that test was necessary.
Even though I still think it is necessary, it might have been a reasonable argument if he had acted more agreeably. But he was saying the feature was useless anyway, "everyone will use relative paths only anyways" and "because noone here cares a ratass about maintaining the tests it will all fall on me again" ..
Wtf was this guy's problem? I (CAPS) was going to write the stupid test, and since when do we not write tests in order to better maintain our product? I get that he worries that the test environment will get more messy, but that's better than having the product code go messy or non-functional!
WHY DO DEVELOPERS OF MAJOR PROJECTS UNDER LINUX USE INCONSISTENT CONFIGURE SCRIPT FLAGS !!!
SOME OF THE TIME YOU POINT A PATH WHERE A STANDARD SUBDIRECTORY IS
SOME OF THE TIME YOU POINT TO A PATH WHERE THE FILE CAN BE FOUND PRECISELY
SOME OF THE TIME THEY WANT THE FULL FILEPATH !
AND THE DOCS DONT READ THAT WAY !
'--with-curl=arg path to curl-config'
...
ok /SomePath/bin ? right ? NO
/SomePath/bin/curl-config !!!6 -
Can't git push
because of an "access denied" error message
because I didn't set up my key file properly (with right paths, right format and so on)
because I'm working from my home laptop device
because I'm in home office
because Corona
..but..
I can connect to my work computer where git is set up properly but also I
can't git push
because I can't "cd" into the project path
because the samba mount point is messed up
because I don't reboot my machine to fix it
because I can't enter my password
because it does have a full hard drive encryption and the password screen shows up before the network services are started.2 -
Other peoples' code... (in C++)
I am finding that what some people consider good code is not as described. I found a class that provides strings. Great, it gives me paths and stuff. I incorporated it into a new project.
segfaults
Hmmm, it must have an init function... It does, but not in the class. It has a friended init function:
friend init_function(). If this function is not created and called external to the class then the class will segfault...
okay...
I implement this. I use code from another project that implements this correctly. The friend class allows the private constructor to be called to create the main instance of the class. So it's a fucking cryptic-ass singleton. I look at this class. It uses a macro to decide what function to call in the class. The class already has function names for each call it needs to make. The class is literally a string lookup table. I vow to redo this shitty code, someday...
I start to wonder what other fragile code I will find. Not long after, I keep getting errors on malloc. Like, any malloc that is called results in a segfault. The malloc is not at fault though. I run valgrind and find a websocket library is returning an object of a different size than the header file describes.
WTF...
Somebody had left an old-ass, highly modified definition of the websocket header in a location that I include headers from (partly my fault). I eliminate that from my include path. All is well, everything behaves. I will be making sure this fucking header is not used and it is going to die. Wasted a bunch of time.
Lessons learned: some code is just fucked and don't leave old ass shit you tried laying around.5 -
When I first started down the path to becoming a developer, I was a "business analyst" where I managed our department's reports and ended up migrating all the reports from a daily query run in MS Access with Task Manager and emailed out to all the managers, including the VP of the entire business unit. I created
views in the database and sent out the same spreadsheet with the view in Excel daily, since management didn't want "change". Granted, this was at a large health care company in the US that didn't want to invest in a real dashboard for their reports. The only thing that changed in the email and file was the file name with the current date. I left the company a while ago and recently applied for a similar position for the shits and gigs. Interviewed with the IT manager, and they're still using the same Excel macro I wrote 3 years later.2 -
How to serve a static file when infra setup is something very hard to do?
<script src="/path/to/my/fav/lang.cxx?filename=mystaticfile.js" type="text/javascript" />
... Brilliant !5 -
One of my commits today:
“Corrected a file path for view as it’s causing issues within the live environment”
What I actually wanted to put:
“Changed letter to uppercase in file path for view because live is being a fucking bitch about it”
Why do my Dev and Live environments have to differ 😞, I kept my cool though, which is a first for me 🙂4 -
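The usual culprit (an assumption here, but a very common one): live runs on a case-sensitive filesystem while dev doesn't, so a wrongly-cased path in a view reference only blows up in production. If the casing is wrong in the repo itself, a rename through git fixes it even on a case-insensitive dev machine (the file name below is made up; some setups need a two-step rename via a temporary name):
git mv resources/views/Users.blade.php resources/views/users.blade.php
git commit -m "Fix casing in view file path"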
When someone specifies the file path of a file as below
File fileToRead=new File("C:\Users\You\Desktop\testfiles.Zip\test.txt");
Compiler be like1 -
Revision to the Peter/Dilbert Principle, the ProjektAquarius principle: a company will systematically shift the least competent employee on to the assignments the competent employees can't be bothered to do until they become an integral part of the team and drag you down with them. (E.g. eventually they completely fuck up your delivery process, although it's probably still cheaper and quicker than having them do anything else.)
ProjektAquarius principle: A Case Study
We have an engineer who is getting paid quite a bit more than me. Over time his responsibilities have gradually been reduced to documentation and running our almost entirely automated build. Well, today the build failed. He pulls me over to tell me, and says he's confused because there is a file in there he has never seen before and a file he has always seen that isn't there (basically a file got renamed; it was not non-obvious). Answer: change the file name.
Then he comes over and tells us that it's failing again because the script is not finding a file. So a coworker of mine and I go over. He explains the whole build process to us when we ask if there is any point in the script that would help us identify where the script is looking for the file and failing (there wasn't but that's besides the point).
Turns out, he had decided to put the assembly list in order. Normally no problem, but the list is in source-destination pairs. So the fucking file was being put in a different directory than the one the script was looking for it in, and failing. And that's the story of how my company just paid 3 engineers a quarter of a man-hour each for something that would have been resolved in 30 seconds via file search/copying and pasting a file path. Related note: our process for building an install is now about 4 hours long with no change in process besides the BCAK.
Mount an azure file share in an app service container? Sounds handy. Nice clicky-draggy wizard to set it up, pick your file share, type a path to mount it to, hit save.
And does it work?
Does it buggery.
And is there a helpful error message so you can see what you've done wrong?
In a pig's arse is there a helpful fucking error message.
"Application error", and a link to some "diagnostic resources" that displays the exact same error message, including the same link, so a link to itself, in an infinite recursive loop of rank, inhuman stupidity.
Let me see what's in the logs. Absolutely fuck all. No, wait! There's the html markup for the fucking useless error message I'm looking at in the browser. So the UI is telling me to fuck off, and the logs are recording that I have been told to fuck off.
But this is Azure. So there isn't just one place to look at the logs, there are many places to look at the logs. And they are all geologically slow and most of them don't work.
It's probably a firewall issue. I'll have a look later on if I can be arsed, but frankly I'd rather be performing cunnilingus on a lion.1 -
I'm trying to stand up a docker container to read a storage queue with dotnet and invoke ffmpeg to convert some videos. For a whole day I fought with this wrapper (FFMpegCore) which kept throwing file not found errors on the ffmpeg binary itself. Locally (windows) it worked fine.
I spent a ton of time trying to install the Debian package, trying to add it to the path manually, trying to just use the wrapper just to generate the arguments I wanted (I'm not an ffmpeg pro, so the fluent API the wrapper has is super useful) and running it manually, nothing worked. Finally, I realized it wasn't getting to the part where I ran it manually: just using the fluent API to get the arguments was invoking ffmpeg and throwing.
I took away the wrapper completely, start ffmpeg manually and it works...
Ay carumba. Things just can't be easy.2 -
Was motivated to do a project with ReactNative for Android but already stuck.
I need to read a SQLite DB file from /data/data/some.other.app/database/DB.db
Yes I am rooted.
1. How do I request root from the app (Android Pie)?
2. What SQLite npm package can load from an absolute path? I found a few libs but they don't seem to allow full access, just DBs in the app's own data folder.8 -
So I'm working on a bot. The response of the bot wasn't correct according to colleague A. So I fixed the response. Now I have a virtual directory and the response is wrong again. Meanwhile I fixed issue B, where the template path wasn't correct, so I fixed that as well. Then I got a merge conflict because I fixed 2 problems in one file in two different branches. So I fixed that. Then feedback came on issue B, so we fixed that again. Now my response is fucked completely because the agent isn't defined and I don't have a clue why. If I ever find out, the output probably still won't be correct either, and even if it were, I'd probably get feedback which fucks everything up again.
My god, I landed in development hell3 -
Colleague trying to create a Visual Studio project and getting the error message that the file path name is too long. (Raging noises of agony)
Me laughing inside because I was facing the same issue a few days ago.
Now I am using VS on mac. Still a pile of crap but at least no issues with file paths anymore 😔5 -
"hey, write us a simple interface for this shell script.."
script:
- input must be a file, does not accept loading through stdin/redirect
- accepts relative path input from one specific directory only
- fails if provided absolute path
- even though it fails, it still returns return code 0
and every time we've tried to open up a topic of programming practices we got slammed with "we're ops. you should be glad they're doing at least some scripting"5 -
My work product: Or why I learned to get twitchy around Java...
I maintain a Java-based test system that tests a raster image processor. The client is a Java Swing project that contains CORBA bindings to the internal API of the raster image processor. It also has custom-written UI elements and duplicates functionality that became available in later versions of Java; but because some of the third-party tools we use don't work with later versions of Java for some reason, it's not possible to upgrade Java to gain things as simple as recursive directory deletion. Yes, the version of Java we have to use does not support something as simple as that, and custom code had to be written to support it.
Because of the requirement to build the API bindings along with the client the whole application must be built with the raster image processor build chain, which is a heavily customised jam build system. So an ant task calls out to execute a jam task and jam does about 90% of the heavy lifting.
In addition to the Java code there's code for interpreting PostScript files, as these can be used to alter the behaviour of the raster image processor during testing.
As if that weren't enough, there's a beanshell interface to allow users to script the test system, but none of the users know Java well enough to feel confident writing interpreted Java scripts (and that's too close to JavaScript for my comfort). I once tried swapping this out for the Rhino JavaScript interpreter and got all the verbal support in the world but no developer time to design an API that'd work for all the departments.
The server isn't much better though. It's a Tomcat-based application that was written by someone who had never built a Tomcat application before, or any web application for that matter. It uses raw SQL strings instead of an ORM, it doesn't use MVC in any way, and an insane amount of functionality is dumped into the JSP files.
It too interacts with a raster image processor to create difference masks of the output, running PostScript as needed. It spawns off multiple threads and can spend days processing hundreds of gigabytes of image output (depending on the size of the tests).
We're stuck on Tomcat seven because we can't upgrade beyond Java 6, which brings a whole manner of security issues, but that eager little Java updater will break the tool chain if it gets its way.
Between these two components we have the Java RMI server (sometimes) working to help generate image data on the client side before all images are pulled across a UNC network path onto the server that processes test jobs (in PDF format), by reading into the xref table of said PDF, finding the embedded image data (for our server consumed test files are just flate encoded TIFF files wrapped around just enough PDF to make them valid) and uses a tool to create a difference mask of two images.
This tool is very error prone, it can't difference images of different sizes, colour spaces, orientations or pixel depths, but it's the best we have.
The tool is installed in both the client and the server; if the client can generate images, it'll query the server for which ones it needs to, and if it can't, the server will use the tool itself.
Our shells have custom profiles for linking to a whole manner of third party tools and libraries, including a link to visual studio 2005 (more indirectly related build dependencies), the whole profile has to ensure that absolutely no operating system pollution gets into the shell, most of our apps are installed in our home directories and we have to ensure our paths are correct for every single application we add.
And... Fucking and!
Most of the tools are stored as source bundles in a version control system... Not git or mercurial, not perforce or svn, not even CVS... They use a custom-built version control system that is built on top of RCS; it keeps a central database of locked files (using soft and hard locks along with write-protecting the files in the file system) to ensure users can't get merge conflicts, by preventing other users from writing to the files at all.
Branching is heavy weight and can take the best part of a day to create a new branch and populate the history.
Gathering the tools alone to build the Dev environment to build my project takes the best part of a week.
What should be a joy come hardware-refresh year becomes a curse ("Well fuck, now I lose a week setting up the dev environment on ANOTHER machine").
Needless to say, I enjoy NOT working with Java. A lot of this isn't Java's fault, but there are a lot of things that Java (specifically the Java 6 version we're stuck on) does not make easy.
This is why I prefer to build my web apps in python or node, hell, I'd even take Lua... Just... Compiling web pages into executable Java classes, why? I mean I understand the implementation of how this happens, but why did my predecessor have to choose this? Why?2 -
A command line tool built in Python that helps you analyse your git logs by exporting them into a csv/json file.
Can fetch the logs from a given file path or a git directory.
https://github.com/dev-prakhar/...3 -
(first day at Command line)
cp /path/file
yes :) no destination..because Windows user algorithm
first ctrl+c then ctrl+v
took 2 hrs to figure out what's wrong -
The age-old question between `DD/MM/YYYY` or `MM/DD/YYYY`.
After some shower thoughts, my new preference is `YYYY/MM/DD`: for Americans and Lithuanians (the only 2 I know) it just looks like the year has been placed first, and otherwise they read it as normal. To everyone else, the date is reversed and will therefore be read in reverse, leading to the same answer.
In addition, `YYYY/MM/DD/some-dated-file` as a file path works exceptionally well for storing files as it uses the least amount of repetition.12 -
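A small illustration of why year-first pays off: plain lexicographic sorting is already chronological, so dated paths need no special handling (the directory layout is hypothetical):
mkdir -p "logs/$(date +%Y/%m/%d)"   # e.g. logs/2025/03/09
find logs -type f | sort            # alphabetical order is also chronological order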
Newbie Linux User - Story about not working GUI
I have been a proud openSUSE user for about a year, still struggling with some basic stuff, the terminal, etc.
The story begins when a few days ago I try to login to the system. To my trusty Gnome. I get stuck on login loop;
successful login -> black screen for a second -> back to login screen.
Zero feedback, not a single error message
Stress level increases, taking into account that I am at a critical point at my university with tons of projects on my computer.
I assemble the Team A:
Me, Google, Stackoverflow, and for desperate times Russian Stackoverflow
Over 4 hours, I found out that only my user is affected by this, tried restoring the default Gnome configuration, and went through a bunch of logs only to find out that every user gets the same errors, yet still only mine isn't working. Even KDE refused to cooperate, with the same result.
So what went wrong you may be thinking.
One line in file replaced by miniconda, that changed the PATH.
Linux is the best detective game that I've ever played.
Is it something that I should get used to?2 -
Github Actions.
A nice feature that can drive you nuts.
"GITHUB_EVENT_PATH: The path of the file with the complete webhook event payload. For example, /github/workflow/event.json."
"github.event_path: The path to the full event webhook payload on the runner."
Well guess what? These fucking variables are completely useless since the path in them is non-existent.
Fortunately /github/workflow/event.json works...but for how long?
Also using header Accept: application/vnd.github.v3+json to download a zip file is masochism.4 -
How hard can it be to reference a file on a mounted windows network drive with UNC path to the java command? Seems like everyone uses the same env variable but always different syntax. Sometimes it's
-Dsmth=file://\\net.work.path\sub.xml, then it's -Dsmth=file:///\\net.work.path\\sub.xml, then
-Dsmth="file:///\\\\net...xml"
and none of them work?! 😤4 -
"Help, Xcode hangs when trying to open xib file"
"Have you tried re....considering your career path?" -
Software has no pre-built packages. Clones repo and tries to compile from source. Spends 1.5 hrs hunting for the libraries - no list published. Configure of course had trouble finding one I had installed; had to debug the configure file to see how it was search for it, turns out it was applying a subdirectory to whatever path I gave it. FINALLY configures and I run "make all". Everything compiles!!! Try to follow documtation to setup the software, 1st cli command -> Segmentation Fault with no logs....
-
Composer.json require sendgrid
Composer adds wrong directions to file, fine I'll hard code it.
Composer is deriving file path.
Fine I'll edit 4 files.
Composer is escaping hard path
Change global variables
Composer is still adding its own directory before hard path.
Follow azure and sendgrid documentation to the letter, composer puts wrong way round slashes in file path.
Gives up on 57th server 500 error
Sometimes azure gets me down in its implementation of things.... -
I had the funniest thing today... So our company has some servers off somewhere in a VPN, as well as one server in our own office.
So, for simplicity, S1 is my own laptop, S2 is our office server, S3 is one VPN server, and S4 another.
I want to get a file from S2 to S4. S1 can SSH into S2 and S3, S2 can't ssh into any server, S3 can ssh into S2 and S4, and S4 can't ssh into any server.
So to get a file from S2 to S4, I took the path
S1 pull from S2 -> S1 push to S3 -> S3 push to S4
Part of it was preexisting keys meaning it was easier to send S1 to S4 via S3 than get my pubkey from S1 onto S4, but also S2 not being on the VPN meant I couldn't go straight from S2 to S3 or S4, so I had to route through S1, which I could add to the VPN (I'd sshed into S2 from home and thus couldn't put it on the VPN not to mention permissions, whereas I could put S1 easily onto it)
Twas certainly a fun time :P
Plus, port forwarding from a Docker container on S2 to S2's port to S1's port via ssh was fun to get set up.
Time to document this process :)2 -
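A rough sketch of that relay with plain scp, using made-up hostnames and paths (and assuming the keys mentioned above are in place):
scp user@s2-office:/data/backup.tar.gz /tmp/                          # S1 pulls from S2
scp /tmp/backup.tar.gz user@s3-vpn:/tmp/                              # S1 pushes to S3
ssh user@s3-vpn "scp /tmp/backup.tar.gz user@s4-vpn:/srv/incoming/"   # S3 pushes to S4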
Quick Plesk config question...
Been getting open_basedir() notices in the WordPress logs, and frankly it's flooding the log right now. Sample below:
[24-Feb-2019 07:05:19 UTC] PHP Warning: file_exists(): open_basedir restriction in effect. File(/var/www/vhosts/webspacedomain.com/SiteInstallDirectory/wp-content/db.php) is not within the allowed path(s): (/var/www/vhosts/webspacedomain.com/:/tmp/) in /var/www/vhosts/webspacedomain.com/SiteInstallDirectory/wp-includes/load.php on line 397
Checking the settings for open_basedir in the domain's PHP settings, it's currently set to the following default value:
{WEBSPACEROOT}{/}{:}{TMP}{/}
By my read, that **should** be granting permission to the directory. I just checked it against the setting on the dev server (which doesn't report this error), and it's configured in the same manner. Only difference between Dev environment and this one is that the one in Dev is in vhosts/webspacedomain.net/DEV instead of just vhosts/webspacedomain.net
Is there something I'm missing here?4 -
One of the worst practices in programming is misusing exceptions to send messages.
This from the node manual for example:
> fsPromises.access(path[, mode])
> fsPromises.access('/etc/passwd', fs.constants.R_OK | fs.constants.W_OK)
> .then(() => console.log('can access'))
> .catch(() => console.error('cannot access'));
I keep seeing people doing this and it's exceptionally bad API design, excusing the pun.
This spec makes assumptions that not being able to access something is an error condition.
This is a mistaken assumption. It should return either true or false unless a genuine IO exception occurred.
It's using an exception to return a result. This is commonly seen with booleans and things that may or may not exist (using an exception instead of null or undefined).
If it returned a boolean then it would be up to me whether or not to throw an exception. They could also add a wrapper such as requireAccess for consistent error exceptions.
If I want to check that a file isn't accessible, for example for security, then I need to wrap what would be a simple if statement in try/catch all over the place. If I turn on my debugger and try to track any thrown exception then there are false positives everywhere.
If I want to check ten files and only fail if none of them are accessible then again this function isn't suited.
I see this everywhere although it coming from a major library is a bit sad.
This may be because the underlying libraries are C, which is a bit funky with error handling; there's at least a reason to sometimes squash errors and results together (i.e., optimisation). I suspect the exception is being used because under the hood error codes are also used, and throwing an exception is how the different codes get surfaced, but a file not existing or bad permissions might not be an error condition or one requiring an exception.
Yet this is still the bane of my existence. Bad error handling everywhere including the other way around (things that should always be errors being warnings), in legacy code it's horrendous.6 -
Rubber ducking your ass in a way, I figure things out as I rant and have to explain my reasoning or lack thereof every other sentence.
So lettuce harvest some more: I did not finish the linker as I initially planned, because I found a dumber way to solve the problem. I'm storing programs as bytecode chunks broken up into segment trees, and this is how we get namespaces, as each segment and value is labeled -- you can very well think of it as a file structure.
Each file proper, that is, every path you pass to the compiler, has its own segment tree that results from breaking down the code within. We call this a clan, because it's a family of data, structures and procedures. It's a bit stupid not to call it "class", but that would imply each file can have only one class, which is generally good style but still technically not the case, hence the deliberate use of another word.
Anyway, because every clan is already represented as a tree, we can easily have two or more coexist by just parenting them as-is to a common root, enabling the fetching of symbols from one clan to another. We then perform a canonical walk of the unified tree, push instructions to an assembly queue, and flatten the segmented memory into a single pool onto which we write the assembler's output.
I didn't think this would work, but it does. So how?
The assembly queue uses a highly sophisticated crackhead abstraction of the CVYC clan, or said plainly, clairvoyant code of the "fucked if I thought this would be simple" family. Fundamentally, every element in the queue is -- recursively -- either a fixed value or a function pointer plus arguments. So every instruction takes the form (ins (arg[0],arg[N])) where the instruction and the arguments may themselves be either fixed or indirect fetches that must be solved but in the ~ F U T U R E ~
Thusly, the assembler must be made aware of the fact that it's wearing sunglasses indoors and high on cocaine, so that these pointers -- and the accompanying arguments -- can be solved. However, your hemorroids are great, and sitting may be painful for long, hard times to come, because to even try and do this kind of John Connor solving pinky promises that loop on themselves is slowly reducing my sanity.
But minor time travel paradoxes aside, this allows for all existing symbols to be fetched at the time of assembly no matter where exactly in memory they reside; even if the namespace is mutated, and so the symbol duplicated, we can still modify the original symbol at the time of duplication to re-route fetchers to its new location. And so the madness begins.
Effectively, our code can see the future, and it is not pleased with your test results. But enough about you being a disappointment to an equally misconstructed institution -- we are vermin of science, now stand still while I smack you with this Bible.
But seriously now, what I'm trying to say is that linking is not required as a separate step as a result of all this unintelligible fuckery; all the information required to access a file is the segment tree itself, so linking is appending trees to a new root, and a tree written to disk is essentially a linkable object file.
Mission accomplished... ? Perhaps.
This very much closes the chapter on *virtual* programs, that is, anything running on the VM. We're still lacking translation to native code, and that's an entirely different topic. Luckily, the language is pretty fucking close to assembler, so the translation may actually not be all that complicated.
But that is a story for another day, kids.
And now, a word from our sponsor:
<ad> Whoa, hold on there, crystal ball. It's clear to any tzaddiq that only prophets can prophecise, but if you are but a lowly goblinoid emperor of rectal pleasure, the simple truths can become very hard to grasp. How can one manage non-intertwining affairs in their professional and private lives while ALSO compulsively juggling nuts?
Enter: Testament, the gapp that will take your gonad-swallowing virtue to the next level. Ever felt like sucking on a hairy ballsack during office hours? We got you covered. With our state of the art cognitive implants, tracking devices and macumbeiras, you will be able to RIP your way into ultimate scrotolingual pleasure in no time!
Utilizing a highly elaborate process that combines illegal substances with the most forbidden schools of blood magic, we are able to [EXTREMELY CENSORED HERETICAL CONTENT] inside of your MATER with pinpoint accuracy! You shall be reformed in a parallel plane of existence, void of all that was your very being, just to suck on nads!
Just insert the ritual blade into your own testicles and let the spectral dance begin. Try Testament TODAY and use my promo code FIRSTBORNSFIRSTNUT for 20% OFF in your purchase of eternal damnation. Big ups to Testament for sponsoring DEEZ rant. -
I swear I touched some weird and complex programming shit in over a decade of programming.
I interfaced myself through C# to C++ firmware, I wrote RFID antenna calibration and reading software with a crappy framework called OctaneSDK (seems easy until you have to know how radio signal math and its ins and outs work to configure antennas for good performance), I wrote full-blown, full-stack enterprise web portals and applications with the most weird-ass DBs, from the era of JDBC and ODBC up to managed data access and Entity Framework, cloud document databases and everything.
Please, please, please, PLEASE I BEG YOU, anyone, I don't even have enough life force left to pour into this, explain to me why the hell Jest is still a thing in JavaScript testing.
I read on the site:
"Jest is a delightful JavaScript Testing Framework with a focus on simplicity."
Using jest doesn't feel any delightful and I can't see any spark of focus and simplicity in it.
I tried to configure it in an Angular project and it's a clusterfuck of your worst nightmares put together.
The amount of errors, problems and configuration I had to put up with felt like setting up a clunky version of a Rube Goldberg machine.
I had to uninstall karma/jasmine, create config files floating around, configure project files and tell Jest through them that it has to do path transformations because it can't read its own test files by itself and can't even read file dependencies, and now it has a ton of errors importing dependencies.
Sure, it's focused on simplicity.
Moreover, the tests are utter trash.
Hey launch this method and verify it's been launched 1 time.
Hey check if the page title is "x"
God, I have hated JS with a passion for years, but with every piece of JS shit I put my hands on I always hope it will rehab its reputation with me; instead, every fucking time, it's worse than before. -
For the love of god, developers/programmers, don't put version numbers in your software's file paths!
It's the worst when you have to configure permissions and rules, and then the folder path changes with every update! -
Google is like the parent or teacher who is never happy with your work. I've never seen something so unattainable in a world where non-technical clients rely on CMSes, theme templating, server-side page rendering, and external scripting as Google's mobile PageSpeed recommendations. Especially under the Lighthouse audit in Chrome Inspector. Unless I go back to pre-2001 web development methods, and never have external scripting, and make every page have its own CSS file with only critical path CSS for each page, I will never get all the high scores I'm expected to have to rank well for mobile. When and how will Google get called out on this B.S.?9
-
I simply can't get shit to work. I just can't, I feel retarded and useless. I know I am a slow coder but this isn't even the problem now. I can't even setup my shit.
I couldn't get virtualenv to work, so I used the Python built-in venv, then I tried autoenv. Nada. It doesn't fucking work. When I try to source the activation script for my env...
"No such file or directory" my ass. I tried every possible path to that file, and it still doesn't work.
I ignored that and just continued, trying to setup heroku. It took me 2 fucking hours to get why git wasn't working.
Hopefully I will finish my project one day. I thought it would take me one week, tops. I was so wrong. The more I do, the more work I realise I have to do.
Hey ... Is it possible to figure out the client's local path (e.g. C:\Users\...) to a file they uploaded to a website, from the server side?
My boss thinks it can be done and wants me to program it. But I think we'd need a zero-day vulnerability in a specific (and probably very old) browser to do something like that... That would be a huge security issue...
Wouldn't it?
What do you think? -
So... I had to create a VBA macro. OK, it is very simple and it will be needed during some DOC file reviewing. OK, not a problem.
I created the functions, added some quick launch buttons and saved it as a .DOTM file. I even included an autoload form with an Install button, so the file copies itself to the Word Startup folder. Nice, everything working just fine.
But... there are two Mac users in the company. I do not have a Mac, but the first thing I thought was I hardcoded the "\" to check if the file already exists and to copy the file. Using the system separator would do the trick. The macro would be copied and everything is done. But...
1. The quick launch buttons do not appear on Mac;
2. The "Application.PathSeparator" returns an ":"
3. The "application.StartupPath" returns an invalid path (something like "Mac's Name:Application:etc")
4. The copy command is not working; the Dir command appears not to identify the path, etc.
5. I need to have it working by Monday morning. -
I don't know why it happened. Windows updated, then Windows created TEMP user folders and pointed all the Documents/Downloads/etc. locations (paths) at one of those temp user folders it had just created. Luckily my clients didn't lose their files, since the good user folder was still there.
Okay now Microsoft, listen, it's okay to update your OS. It certainly needs it. BUT HOW THE FUCK DO YOU CREATE A NEW USER AND CHANGE THE PATH OF PERSONAL FILES! Thumbs up! At least those files were not erased... -
More hours wasted on debugging, on what I hate most about programming: strings!
Don't get me started on C-strings, that abomination from hell. Inefficient, error-prone. Memory corruption through off-by-one errors, BSODs from out-of-bounds access, seen it all. No, it's strings in general. Just untyped junk of data in undocumented formats. Everything has to be parsed back and forth. And this is not limited to our stupid stupid code base, as I read about the security issues of using innerHTML or having to fight CMake again.
So back to the issue this rant is about. CMake, like other scripting languages such as bash, has its peculiarities when dealing with the enemy (i.e. strings), e.g. all the escaping. The thing I fought against was getting CMake's fixup_bundle to work on macOS. It was a bit pesky to debug. But in the end it turned out that my file path had one "//" instead of a "/", and the path comparison just did a string comparison without path normalization.
Stop giving us enough string to hang ourselves! -
Been struggling with compiling a PyQt program the whole weekend. It worked with PyInstaller on Friday, except that the .ui file was not bundled but referenced by its path on my computer. I have tried fbs instead, which caused this error that now also occurs when I try to start the program created with PyInstaller.
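For what it's worth, one common way around the missing .ui problem (assuming a file like mainwindow.ui gets bundled via PyInstaller's --add-data) is to resolve the .ui path at runtime instead of baking in an absolute path -- a sketch, not a guaranteed fix for this particular error:

import os
import sys

def ui_path(name="mainwindow.ui"):
    # When frozen by PyInstaller, bundled data files live under sys._MEIPASS;
    # during development, fall back to the directory containing this script.
    base = getattr(sys, "_MEIPASS", os.path.dirname(os.path.abspath(__file__)))
    return os.path.join(base, name)

print(ui_path())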
-
Question: I ran my code and found one of my images' rotation out of whack. I had to delete the image file, upload it again, and then reinsert the file path and run again. No issues. Any reason why that happened, especially since the file path was correct before?
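Just a guess, since the rant doesn't say which library is involved, but a common culprit for mystery rotation is an EXIF Orientation tag that one tool honours and another ignores. With Pillow you can normalize the pixels before using the file (the file name here is made up):

from PIL import Image, ImageOps

img = Image.open("photo.jpg")
img = ImageOps.exif_transpose(img)   # rotate the pixels to match the EXIF tag
img.save("photo_upright.jpg")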
-
Does anybody know how to create a JSON list of all files in a folder, with path+filename, MD5 hash, file name and size? My client wants me to rework an open source launcher which reads an HTML file in JSON format.
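A minimal sketch of one way to do it in Python (the folder name is hypothetical, and very large files would be better hashed in chunks):

import hashlib
import json
import os

def list_files(root):
    entries = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            full = os.path.join(dirpath, name)
            with open(full, "rb") as fh:
                digest = hashlib.md5(fh.read()).hexdigest()
            entries.append({
                "path": full,
                "name": name,
                "md5": digest,
                "size": os.path.getsize(full),
            })
    return entries

print(json.dumps(list_files("launcher_files"), indent=2))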
-
Another great website error code fail (dumped its full error output to the website):
Traceback (most recent call last):
File "/usr/lib/python2.4/site-packages/trac/web/api.py", line 436, in send_error
data, 'text/html')
File "/usr/lib/python2.4/site-packages/trac/web/chrome.py", line 808, in render_template
template = self.load_template(filename, method=method)
File "/usr/lib/python2.4/site-packages/trac/web/chrome.py", line 768, in load_template
self.templates = TemplateLoader(
File "/usr/lib/python2.4/site-packages/trac/web/chrome.py", line 481, in get_all_templates_dirs
for provider in self.template_providers:
File "/usr/lib/python2.4/site-packages/trac/core.py", line 78, in extensions
return filter(None, [component.compmgr[cls] for cls in extensions])
File "/usr/lib/python2.4/site-packages/trac/core.py", line 213, in __getitem__
component = cls(self)
File "/usr/lib/python2.4/site-packages/trac/core.py", line 119, in maybe_init
init(self)
File "/usr/lib/python2.4/site-packages/authopenid/authopenid.py", line 157, in __init__
db = self.env.get_db_cnx()
File "/usr/lib/python2.4/site-packages/trac/env.py", line 335, in get_db_cnx
return get_read_db(self)
File "/usr/lib/python2.4/site-packages/trac/db/api.py", line 90, in get_read_db
return _transaction_local.db or DatabaseManager(env).get_connection()
File "/usr/lib/python2.4/site-packages/trac/db/api.py", line 152, in get_connection
return self._cnx_pool.get_cnx(self.timeout or None)
File "/usr/lib/python2.4/site-packages/trac/db/pool.py", line 172, in get_cnx
return _backend.get_cnx(self._connector, self._kwargs, timeout)
File "/usr/lib/python2.4/site-packages/trac/db/pool.py", line 105, in get_cnx
cnx = connector.get_connection(**kwargs)
File "/usr/lib/python2.4/site-packages/trac/db/sqlite_backend.py", line 180, in get_connection
return SQLiteConnection(path, log, params)
File "/usr/lib/python2.4/site-packages/trac/db/sqlite_backend.py", line 255, in __init__
user=getuser(), path=path))
TracError: The user apache requires read _and_ write permissions to the database file /home/trac/morituri/db/trac.db and the directory it is located in. -
How do y'all approach media endpoints?
Especially publicly accessible user-uploaded media.
Right now I encrypt the path to the media file and expose it, then decrypt it on the server (getting back a relative file path), which fetches the file via File.Read and returns it as-is.
I put a cache header on it and it works fine.
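For illustration only -- the rant's stack clearly isn't Python -- the scheme described above roughly amounts to something like this (MEDIA_ROOT and the key handling are made up):

from cryptography.fernet import Fernet
from flask import Flask, send_from_directory

app = Flask(__name__)
MEDIA_ROOT = "/srv/media"                # hypothetical storage root
fernet = Fernet(Fernet.generate_key())   # in practice, a persistent secret key

@app.route("/media/<token>")
def media(token):
    # Decrypt the opaque token back into a relative path, then serve the file.
    rel_path = fernet.decrypt(token.encode()).decode()
    response = send_from_directory(MEDIA_ROOT, rel_path)
    response.headers["Cache-Control"] = "public, max-age=86400"
    return response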
But something in the back of my mind makes me feel it isn't right.
Like, normal endpoints and file-read endpoints shouldn't be in the same backend, potentially affecting each other.
But it's just a fun pet project, so I'm not paying for a 2nd bare-metal server as a CDN/media server -.-
Worst case scenario I use it as-is, but I would appreciate hearing other approaches. -
It would be really nice if bower packages had a consistent naming convention as far as getting to the relevant file path. I'm always surprised how whacky it is. bower_component/special_plugin/code/dist/SpecialPlugin/Script.js ... nonsense!
-
So when installing an RPM there is a file collision check. When you add a file to an RPM package with CMake / CPack, it unfortunately also adds the parent directories your file goes into, which will give you conflicts with other packages. But well, you have that beautiful feature to exclude directories from being added:
CPACK_RPM_EXCLUDE_FROM_AUTO_FILELIST_ADDITION
Now somehow it failed. Turned out it would not work if my path ended with a trailing slash. Brush my banana! Like "/etc/sysctl.d" is a different animal than "/etc/sysctl.d/". But at least that's nothing against the strangeness of the "mv" command in those respects. -
Dammit! CSS is such a huge pain in the ass. I just want to use a <style> tag inline with a class to control margin positioning of one friggin’ image. (Yes, I know it’s better in a CSS file but this is a temp fix that will be reverted soon.)
<style>
.30-day-seal {
margin-top: -27.5em;
margin-left: 39.5625em;
}
</style>
<img class=“30-day-seal” src=/path/to/img.png”/>
Nothing happens. Only if I use a style=“” attribute directly in the img tag.
I’ve even tried:
<style>
img.30-day-seal {
margin-top: -27.5em;
margin-left: 39.5625em;
}
</style>
And
<style>
.30-day-seal img {
margin-top: -27.5em;
margin-left: 39.5625em;
}
</style>
And even
<style>
img .30-day-seal {
margin-top: -27.5em;
margin-left: 39.5625em;
}
</style>
Why do I suck so bad at this?! Still!? -
How do I show a profile pic from an S3 bucket?
One way is to fetch it from the backend and send it to the frontend as a huge blob string. This is how I currently do it, and it works.
.... what if I want to fetch the profile image frequently? Am I supposed to send a separate API request to the backend every time? If I need to show the profile picture 100 times, does that mean I have to send 100 requests to the backend API?
...... or even worse, what if I need to fetch a list of images from the S3 bucket, for example a list of posts that contain images, or a card with the profile images of multiple users? If I need to display 100 posts, each post containing one image, that means I would have to make 100 separate API requests to fetch 100 images…
That is fucking absurd.
Of course I can make it so that it saves the URL to that image as a public setting, but the problem is the URL will be the exact URL to the S3 bucket, including the bucket name, the path and the file name, as well as user information such as the user ID. This feels like a huge security risk.
What the fuck am I supposed to do, and how am I supposed to properly handle displaying images which are supposed to be viewed publicly?
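Not something the rant mentions, but the usual answer to this exact problem is to have the backend hand out short-lived presigned URLs instead of proxying the bytes itself -- a sketch with made-up bucket and key names:

import boto3

s3 = boto3.client("s3")

def profile_image_url(user_id):
    # The client gets a temporary link straight to S3; no bucket credentials
    # are exposed, and the link expires on its own.
    return s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": "my-app-media", "Key": f"avatars/{user_id}.jpg"},
        ExpiresIn=3600,
    )

# A feed of 100 posts then costs 100 cheap string generations, not 100 proxied blobs.
urls = [profile_image_url(uid) for uid in range(100)]
-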
I am thinking about how I can make data upload reliable. I am sure that I am making it more complex than it needs to be, and I need some pointers.
My goal is to have a pause/resume feature in file uploading.
Here is how it would work.
In order to start uploading, you give the server
1) File Name
2) Folder path you want to upload it to
3) Checksum of the file
Here the server will check whether you can upload to that folder and whether the file has been previously uploaded (by file name and checksum).
If you can actually upload to the folder, the server will return a "unique file token", a "folder path" and a "unique byte token". Let's call it init_upload().
The client will use the "unique file token" (to identify the file), the folder path (to know where to upload it to), the "unique byte token" and a byte[] (the data to actually upload). Let's call this operation data_upload().
If the operation completes successfully, the server will return a new "unique byte token".
Internally it will actually work like this. Let's say we want to upload "file.mp3"; when the client calls init_upload(), it will create
file.mp3 and unique_byte_token.file.mp3.
When the client uploads data the first time, it will append the data to unique_byte_token.file.mp3.
When the client uploads data the second time, it will check whether the "byte token" the client sent is the same as the previous "unique_byte_token". If it is the same,
1) we move the data from unique_byte_token.file.mp3 to file.mp3
2) Delete unique_byte_token.file.mp3
3) Create new unique_byte_token.file.mp3
4) Append data to unique_byte_token.file.mp3
The reason I am using the "byte token" is that I want to check whether the previous upload actually succeeded.
Let's say we need to call data_upload() 50 times: 49 parts will have gone into file.mp3 and 1 part will be sitting in byte_token.file.mp3.
Finally the client needs to call data_upload_complete(), which will
1) Put the remaining part into file.mp3
2) Remove byte_token.file.mp3 as cleanup (a rough sketch of the whole flow follows).
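Purely as a sketch of the scheme described above (simplified: the token is a suffix here rather than a prefix, and there is no auth or checksum verification):

import os
import uuid

def init_upload(folder, file_name):
    file_path = os.path.join(folder, file_name)
    byte_token = uuid.uuid4().hex
    open(file_path, "ab").close()                     # file.mp3
    open(file_path + "." + byte_token, "wb").close()  # the per-upload part file
    return file_path, byte_token

def data_upload(file_path, byte_token, chunk):
    part = file_path + "." + byte_token
    if not os.path.exists(part):
        raise ValueError("stale byte token: the previous part was already committed")
    # Commit the previous part into the real file, then start a fresh part.
    with open(part, "rb") as src, open(file_path, "ab") as dst:
        dst.write(src.read())
    os.remove(part)
    new_token = uuid.uuid4().hex
    with open(file_path + "." + new_token, "wb") as fh:
        fh.write(chunk)
    return new_token

def data_upload_complete(file_path, byte_token):
    # Flush the last outstanding part into the real file and clean up.
    part = file_path + "." + byte_token
    with open(part, "rb") as src, open(file_path, "ab") as dst:
        dst.write(src.read())
    os.remove(part)
-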
Consider an API that uses the HTTP path to represent position in a tree that literally represents a file tree with minimal constraints, and GET/PUT/DELETE methods to read, write and destroy the nodes. How would you encode read/write operations to per-node metadata? The kinds of metadata are static and around 4, so inventing HTTP verbs for each of them is infeasible but filtering is not necessary.
Options considered so far:
- toplevel resources alongside a namespaced /data such as /acl, /lock
- magic keywords to the Range header (this is apparently compliant)
- mimetypes such as text/plain+acl
- SETPROP / PROP methods in the spirit of WebDAV
- headers (I worry this may become an immitigable bottleneck really fast)
I'm looking for any kind of suggestion or insight, not perfect answers.
I read the WebDAV specification and I won't even suggest that I'm trying to align with it; the only protocol I've seen in the past with comparable scope bloat is WebRTC. -
So apparently Jupyter / IPython adds the current workdir to the kernel library path, and it crashes if you happen to have a file named something like "tokenize.py" in your workdir, because it gets prioritised over the builtin module of the same name. What a great design for something which is specifically made to run isolated chunks of code, that it can't even properly isolate itself from the workdir.
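A quick way to check whether a local file is shadowing a builtin module (illustrative; run it from the directory containing your own tokenize.py):

import sys
print(sys.path[0])        # typically '' or the notebook's working directory

import tokenize
print(tokenize.__file__)  # if this points into your workdir, that's the shadow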
-
How to write programs on Android 10 that work with files/directories? Have used a number of JVM-based languages like Groovy, Clojure and Kotlin.
My last try was with Groovy. I ran it under Dcoder, which has to be cloud-based as it supports numerous languages. I gave it permission to access storage but got a file-not-found error from Java. I copied this excerpt for the file path.
import java.io.File

class Example {
    static void main(String[] args) {
        new File("/storage/emulated/0/read_file.grvy").eachLine { line ->
            println "line : $line"
        }
    }
}
Do I need root? Do I need to change file permissions using Termux? Why can't I find a way to write simple software on a Motorola Super, 3 GB RAM and 8 cores? I hate using a phone for a computer but a seizure has me in a nursing home with only one usable hand.
Any help is greatly appreciated. -
Gf asked me to help her with getting science articles. She had some page that her university suggested students use, but she had trouble downloading documents.
At first I was like "Hey, it says use IE, other browsers are not supported. That's bad but.. whatever". Then it popped up that she needs Java enabled - well, I guess we have to... I even updated it because it was needed.
Restarted IE, clicked download again and... Java security blocked the web app... Eh, I don't trust it, but whatever, let's just check what happens if I whitelist it.
Got some basic view, 1 dropdown list for "file name format" (like anybody cares), path selection where to save file, and some checkbox. Lame, but let's just leave it behind.
Downloaded it; it turned out to be an HTML file, not a PDF. Fishy that it was a single file, but I hoped for some text styled with CSS, so I opened it and got redirected to the page where I clicked download.
Checked the file's content - HTML with an empty body and a script tag containing JS that redirects on load.
Srsly? 😐 -
I started looking into building my Android app but wanted to see if I could get a refresher on a few things. The starter template for the Nav Layout isn't exactly functional.
So first question: does anyone know of any resources, like an actually functioning demo project?
Also, I need DB access, but I want to open any DB file given the *.db path, and the DAO should be persistent and shared across all fragments/activities. What would be the best design / way of doing that in Android, though?
I don't think you can pass the object between activities, but what about fragments? I'm thinking the main app opens the DB and then can pass a DAO interface to all the fragments to use? -
20 years in and I'm just now discovering Fish! Why the hell isn't it more popular? For real, it has more features OOB than bash or zsh, and the scripting is so much nicer. Oh, I need to add to my path? Just add onto a built-in variable from the CLI and you're good; no need for a script to append a line to some file loaded by zsh, or to open up the .zshrc and manually edit it. And how about that "funcsave" built-in, huh? Freaking awesome. More people need to be championing Fish, it's better than your terminal bro's zsh.
-
Oracle, while trying to install their database on Windows 10 (both, at least, only inside a VirtualBox): sorry, but the path contains a space character. Which means either your UI dialog failed to properly escape it, or your installer failed to properly handle a native file path, Oracle! Never mind totally ignoring the OS's UI style and cropping your own error message on the second line.
-
To hell with this auto path rewrite, VS Code: when I rename a file you find all the files and rewrite the imports, but this time you did it wrong,
and I have a huge mess to pick through. I have no idea how you did this, but you wrote long paths which don't make sense.
Why did you put node_modules in front of all my imports when I moved a folder which has nothing to do with node modules? -
My question is very easy, and it is possible that this is a stupid question, but I need your help. I have tried to develop a component with a function where I implemented a require() call. In this function, I included a state where a "select" option sends the value (the path of the CSS file) to update this state; the value depends on the user's choice. It partially works, because when I come back to an option that I picked a few seconds ago, the require function doesn't work anymore. I am not sure that was the best option. I am a beginner. Can someone explain to me the reason why "require()" stops working and what is the best way to resolve this functionality, please?
-
Microsoft one drive
I know you wanna sync my folder
I know you wanna provide a one drive alias to every file
But who the fuck in their right fucking mind makes an alias with a space in the path?
Oh you dumb piece of shit 🤬 -
file names
fully capitalized, spaces in between some, a mix of underscores and hyphens depending
but it works when I use them raw, as-is, supplied as a path to something
didn't even bother escaping anything
should be a pleasant surprise, but I was told (reasonably, considering the situation) to correct what I received -
How much of a security risk is it to serve static data from a JSON file on Flask? Values are posted from a mobile device to the server to groom the objects to return. My coworker is giving me a lot of shit for it, as the file is accessed through a relative path, but the file names are checked and sanitised. He says the objects should be in a database.
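For what it's worth, the kind of check being described might look something like this (a sketch with made-up names, not the actual code): resolve the requested name against a fixed data directory and refuse anything that escapes it.

import os

DATA_DIR = os.path.abspath("static_data")   # hypothetical data folder

def safe_data_path(name):
    candidate = os.path.abspath(os.path.join(DATA_DIR, name))
    # Reject anything that resolves outside DATA_DIR (e.g. "../app.py").
    if not candidate.startswith(DATA_DIR + os.sep):
        raise ValueError("path traversal attempt")
    return candidate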
-
Dumb question, but does anyone know how to make VS Code show more of the path than just the folder name in the sidebar? I am working on making workspaces to avoid opening 6 file explorer windows, but a lot of folders for my workflows have the same name in different locations on the network, and I can't change the folder names for automation purposes.
I know it shows the path if I hover over the name, but I'd like to just show the path by default on the side panel.
Example image below (can't show real folders due to NDA). -
Is there any way to detect the currently focused document in an IDE and get its file path??
I want to write a Python script (or another language if necessary) to check files for a commented-out phrase in the first line, regardless of whether I'm using Visual Studio, VS Code, or PyCharm.
Tried Google and a simple Stack Overflow search. I don't want to post a Stack Overflow question till my idea is more fleshed out.
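The "check the first line for a marker comment" half is simple enough to sketch (the marker string below is made up); detecting which document the IDE has in focus is the platform-specific part the question is really about:

def has_marker(path, marker="# REVIEW-ME"):
    # Read only the first line and look for the commented-out phrase.
    with open(path, "r", encoding="utf-8", errors="ignore") as fh:
        return fh.readline().strip().startswith(marker)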
Preemptive thanks for your time and assistance 🙃 -
I spent most of today debugging the server part of my service. The logo on the page didn't show on the local Windows Server.
My first thought was that the static files path was messed up (nginx with Windows paths can be confusing: is it D:/file, D:\file, or even D:\\file?), so I tried playing with it. But wait, the page works, so it must be something else, because the CSS and JS and even the fonts are loaded.
Could it be a cache issue? Are the images too big?
No, fuck you Microsoft, Internet Explorer doesn't show WebP images. FML -
So, I was googling for cross-platform JavaScript things.. in every answer, there's only Weex and NativeScript, but neither is ready for prod, so I tried Weex. It's alright, but the documentation is non-existent, the support is practically on dial-up, and hardly anyone has used it. And NativeScript isn't really an option because it's only for mobile.
So I chose weex, web + mobile, and I can easily port my already written vue project, sweet, so I get to porting, run into a few issues but it's pretty easy, need to play with some of the root file path definitions, no "./"'s just "@/" (if you use @ as your root symbol).
great. Pug works, sass... seems to work, then I run into a pretty big issue with sass compilation/loading, can't find an answer for an hour.
So I go out. Then come home, no answer on my SO question.
So I google "jsfiddle weex" to get a jsfiddle template for debugging weex/vue projects.
A few results down. I see this: https://reddit.com/r/javascript/...
well I've heard of framework7, but it would require me rewriting most of my element tags and components, but what's quasar?
I have a look, totally cross platform, desktop, web, mobile... wtf..
read the docs, "uses vue single file components"
..what, holy fuck, the documentation is beautiful, it uses vuex, fucking fuck.
I just found it 10 minutes ago....
wish me luck......... -
When you test on the production server:
"Your system folder path does not appear to be set correctly. Please open the following file and correct this: index.php"1 -
Mac day 2.0: in Ubuntu you can do Ctrl+L on a folder and the full file path will appear. How do I do that on a Mac? Basically, what's the easiest way to show the full path when I am in a folder?
-
I did some of the front-end and the whole backend, built and managed the SQL + Elasticsearch database. After all of this, only 17 lines of mother fu**er code ruined my life. The client is asking for the code. And.... And... I can't say any more.
input {
  file {
    path => "/home/rsa-key-20200528 /aslogger.log"
    type => "java"
    start_position => "beginning"
  }
}
output {
  stdout {
    codec => rubydebug
  }
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "aslogger"
  }
} -
So I found one of the most random bugs I've ever come across.
So we have this file management system as part of the website, showing breadcrumbs to the current directory, with 'home' as the root of the path. This path is passed to the back end whenever the user navigates to a new directory, etc. The back-end code then does a replace of 'home' with the actual directory path.
Ended up with a directory for a person called Homer. Guess what happened..
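The bug in miniature (illustrative Python, assuming the user's directory was lowercased -- the site's actual code was presumably something else): a blind substring replace of 'home' also matches the 'home' inside 'homer'.

breadcrumb = "home/users/homer/reports"
print(breadcrumb.replace("home", "/var/www/files"))
# -> "/var/www/files/users//var/www/filesr/reports"  -- oops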