Search - "gzip"
-
I absolutely HATE "web developers" who call you in to fix their FooBar'd mess, yet can't stop themselves from dictating what you should and shouldn't do, especially when they have no idea what they're doing.
So I get called in to a job improving the performance of a Magento site (and let's just say I have no love for Magento for a number of reasons) because this "developer" enabled Redis and expected everything to be lightning fast. Maybe he thought "Redis" was the name of a magical sorcerer living in the server. A master conjurer capable of weaving mystical time-altering spells to inexplicably improve the performance. Who knows?
This guy claims he spent "months" trying to figure out why the website couldn't load faster than 7 seconds at best, and his employer is demanding a resolution so he stops losing conversions. I usually try to avoid Magento because of all the headaches that come with it, but I figured "sure, why not?" I mean, he built the website less than a year ago, so how bad can it really be? Well...let's see how fast you all can facepalm:
1.) The website was built brand new on Magento 1.9.2.4...what? I mean, if this were built a few years back, that would be a different story, but building a fresh Magento website in 2017 in 1.x? I asked him why he did that...his answer absolutely floored me: "because PHP 5.5 was the best choice at the time for speed and performance..." What?!
2.) The ONLY optimization done on the website was Redis cache being enabled. No merged CSS/JS, no use of a CDN, no image optimization, no gzip, no expires rules. Just Redis...
3.) Now, to say the website was poorly coded would be an understatement. This wasn't the worst coding I've seen, but it was far from acceptable. There was no organization whatsoever: templates and skin assets were being called from 12 different locations on the server, which made tracking down a snippet to fix downright annoying.
But not only that, the home page itself ran 83 custom database queries to load the products on the page. He said this was so he could load products from several different categories and custom tables to show on the page. I asked him why he didn't just use a few JOIN queries, and he had no idea what I was talking about.
4.) Almost every image on the website was a .PNG file, 2000x2000 px and lossless. The home page alone was 22MB just from images.
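For scale: fixing that is a one-liner per image type. A sketch of the kind of bulk cleanup involved (ImageMagick/optipng; the catalog path is Magento's default, and you'd run this on a copy first):
mogrify -format jpg -quality 82 -resize '1200x1200>' media/catalog/product/*.png   # re-encode as lossy JPEG, shrink anything wider than 1200px
optipng -o5 media/catalog/product/*.png   # losslessly squeeze the PNGs that actually need transparency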
There were several other issues, but those 4 should be enough to paint a good picture. The client wanted this all done in a week for less than $500. We laughed, but we agreed on the price only because of a long relationship and because of some referrals they'd gotten us in the door with. We told them it would get done on our time, not theirs. So I copied the website to our server as a test bed and got to work.
After numerous hours of bug fixes, recoding queries, disabling Redis and opting for a higher InnoDB cache (more on that later), image optimization, JS/CSS/HTML combining, render-unblocking and minification, lazy-loading images, tweaking Magento to work with PHP 7, installing OPcache and setting up basic htaccess optimizations, we smashed the loading time down to 1.2 seconds total, and most of that time was for external JavaScript plugins deemed "necessary". Time to First Byte went from a staggering 2.2 seconds to about 45ms. Needless to say, we kicked its ass.
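For reference, those "basic htaccess optimizations" were nothing exotic. A minimal sketch of the kind of rules I mean, using stock Apache mod_deflate/mod_expires (trim the types to taste):
<IfModule mod_deflate.c>
    AddOutputFilterByType DEFLATE text/html text/css application/javascript
</IfModule>
<IfModule mod_expires.c>
    ExpiresActive On
    ExpiresByType image/jpeg "access plus 1 month"
    ExpiresByType text/css "access plus 1 week"
</IfModule>
And if you want to verify a TTFB number like that yourself, curl -o /dev/null -s -w '%{time_starttransfer}\n' https://shop.example/ does it (hypothetical URL, obviously).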
So I show their developer the changes and he's stunned. He says he'll have the hosting provider set up a new server to migrate the optimized site onto and cut over to, because taking the live website down for maintenance for even an hour or two in the middle of the night is "unacceptable".
So trying to be cool about it, I tell him I'd be happy to configure the server to the exact specifications needed. He says "we can't do that". I look at him, confused. "What do you mean we 'can't'?" He tells me that even though this is a dedicated server, the provider doesn't allow any access other than a jailed shell account and cPanel access. What?! This is a company averaging 3 million+ per year in revenue. Why don't they have an IT manager overseeing everything? Apparently they're too cheap for that, so they went with a "managed dedicated server", "managed" apparently meaning "you only get to use it like a shared host".
So after countless phone calls arguing with the hosting provider, they agree to make our changes. Then the client's developer starts getting nasty out of nowhere. He says my optimizations are not acceptable because I'm not using Redis cache, and now the client is threatening to walk away without paying us.
So I guess the overall message from this rant is not so much about the situation, but about this developer and the countless others like him who are clueless yet speak from a position of authority.
If we as developers stop turning everything into a measuring contest and learn to let go when we need help, we can get a lot more done and stop losing clients. </rant>
-
I showed a friend of mine a project I made in two days in Docker and Symfony php. It is a rather simple app, but it did involve my usual setup: Nginx with gzip/cache/security headers/ssl + redis caching db + php-fpm for symfony. I also used php7.4 for the lolz
He complained that he didn't like using Docker and would rather install dependencies with composer install and then run it with a Laravel command. He insisted that he wanted a non-docker installation manual.
I advised him to first install Nginx and generate some self-signed certificates, then copy all the config files and replace any environment-injected values (I use a self-made shell script for this) with the environment values in the docker-compose files.
Then I told him to download php-fpm with the PHP 7.4 alpha, install and configure all the extensions needed, download and set up a local Redis database, and finally re-implement a .env file, since I'd removed those in favor of container environment variables.
He sent an angry emoji back (in a funny way)
God bless containerized applications, so easy to spin up entire applications (either custom or vendor like redis/mysql) and throw them away after having played with them. No need to clutter up your own pc with runtime environments.
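For contrast, the entire docker-side "installation manual" is roughly this (service names depend on your docker-compose.yml, so treat these as placeholders):
docker-compose up -d          # build and start nginx, php-fpm and redis in one go
docker-compose logs -f php    # watch the app come up
docker-compose down -v        # throw the whole thing away afterwards, volumes included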
I wonder if he relents :p
-
My worst experience was at my job where they told me I have to move to a permanent position from 3 years of contracting without a specific offer.
Why is that bad? In my country it means an approximately 40% lower wage.
I came into the job with PHP knowledge when they were looking for Perl on a project one year behind schedule. I learned the language and finished a working demo in 6 weeks.
After that, every project that was ever assigned to me was done within 5-15% of the allocated time. I'm not kidding here. My manager loved me, because I was reliable, fast, and I even 'accidentally' solved other problems. For instance, I developed a simple syslog search tool and benchmarked zip algos for reading speed, and the fastest had 70% better compression than the algo used before (gzip to plzip on 1-2 GB files). That solved another problem: the syslog servers did not have enough disk space, and there was no money to upgrade them.
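The benchmark itself was nothing fancy; a sketch of the kind of comparison I mean (file name hypothetical, numbers depend on your logs):
gzip -9 -k syslog-big.log                        # the old baseline
plzip -9 -k syslog-big.log                       # parallel lzip
ls -l syslog-big.log.gz syslog-big.log.lz        # compare on-disk size
time gzip -dc syslog-big.log.gz > /dev/null      # and, since reading speed was the point, time the decompression
time plzip -dc syslog-big.log.lz > /dev/null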
The number of projects I touched or developed was over 20.
I also led and developed our team's most successful tool, one that every customer was throwing money at, while it cut down costs everywhere.
And after three years of that, my manager says there's no more money for contractors, and the only possibility is a permanent position. Without any specific offer! Just 'we can't do this anymore'.
Which I understand, that can happen in a corporation, but ffs, after all I've done, I expected a warmer attitude. Not 'you may have to leave, since we don't really care'.
I liked the people there, even though the corporate environment was lacking in many respects. I wanted to help our local branch with everything I could, and they gave up on me just like that.
So I started looking elsewhere and found a startup which offered 6 times the money I had in my previous job and promised to relocate me to the USA. Which is the best thing that happened to me that year, and the second best in my whole life!
-
Long rant ahead.. 5k characters pretty much completely used. So feel free to have another cup of coffee and have a seat 🙂
So.. a while back this flash drive was stolen from me, right. Well, it turns out that besides me, the other guy in that incident also went to the police 😃
Now, let me explain the smiley face. At the time of the incident I was completely at fault. I had no real reason to throw a punch at this guy, and my only "excuse" would be that I was drunk as fuck - I'd never drunk so much as I did that day. Needless to say, not a very good excuse, and I don't treat it as such.
But that guy, and whoever else he was with - that was the guy that stole that flash drive from me (or at least part of the group that did).
Context: https://devrant.com/rants/2049733 and https://devrant.com/rants/2088970
So that's great! I thought that I'd lost this flash drive and most importantly the data on it forever. But just this Friday evening as I was meeting with my friend to buy some illicit electronics (high voltage, low frequency arc generators if you catch my drift), a policeman came along and told me about that other guy filing a report as well, with apparently much of the blame now lying on his side due to him having punched me right into the hospital.
So I told the cop: well, most of the blame is on me really, I shouldn't have started that fight to begin with, and for that matter shouldn't have drunk that much, yada yada yada.. anyway, he walked away (good grief, as I had that friend visiting to purchase those electronics at that exact time!) and said that this case could then just be classified. Maybe I'll just come along next week to the police office to file a proper explanation, but maybe even that won't be needed.
So yeah, great. But for me there's more in it of course - that other guy knows more about that flash drive and the data on it that I care about. So I figured, let's go to the police office and arrange an appointment with this guy. And I got thinking about the technicalities for if I see that drive back and want to recover its data.
So I've got 2 phones: 1 rooted, but reliant on the other, unrooted one for a data connection to my home (because Android Q, and no bootable TWRP available for it yet). And theoretically a laptop that I can put Arch on no problem, but its display backlight is cooked, so if I want to bring that one I'd have to rely on a display from them. Good luck getting that done. No option. And then there's a flash drive that I could bake up with a portable Arch install and sideload from one of their machines, but on that.. even more so - good luck getting that done. So my phones are my only option.
Just to be clear, the technical challenge is to read that flash drive and get as much data off of it as possible. The drive is 32GB large and has about 16GB used. So I'll need at least that much on whatever I decide to store a copy on, assuming unchanged contents (unlikely). My Nexus 6P with a VPN profile to connect to my home network has 32GB of storage. So theoretically I could use dd and pipe it to gzip to compress the zeroes. That'd give me a resulting file that's close to the actual usage on the flash drive in size. But just in case.. my OnePlus 6T has 256GB of storage but it's got no root access.. so I don't have block access to an attached flash drive from it. Worst case I'd have to open a WiFi hotspot to it and get an sshd going for the Nexus to connect to.
And there we have it! A large storage device, no root access, that nonetheless can make use of something else that doesn't have the storage but satisfies the other requirements.
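In shell terms the whole plan boils down to one pipeline (device and host names are placeholders; the stick could surface under another block device):
# on the rooted Nexus, with the stick attached through the hub:
dd if=/dev/block/sda bs=4M | gzip -c | ssh user@oneplus 'cat > flashdrive.img.gz'
# zeroed free space compresses to almost nothing, so ~16GB of actual data should fit fine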
And then we have things like parted to read out the partition table (and if unchanged, cryptsetup to read out LUKS). Now, I don't know if Termux has these and frankly I don't care. What I need for that is a chroot. But I can't just install Arch x86_64 on a flash drive and plug it into my phone. Linux Deploy to the rescue! 😁
It can make chrooted installations of common distributions on arm64, and it comes extremely close to actual Linux. With some Linux magic I could make that able to read the block device from Android and do all the required sorcery with it. Just a USB-C to 3x USB-A hub required (which I have), with the target flash drive and one to store my chroot on, connected to my Nexus. And fixed!
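From inside that chroot the actual inspection is a handful of commands (again assuming the stick shows up as /dev/sda, which is purely hypothetical):
parted -s /dev/sda unit MiB print    # read out the partition table
cryptsetup luksDump /dev/sda1        # if the LUKS header is intact, this prints its metadata
cryptsetup open /dev/sda1 stick && mount /dev/mapper/stick /mnt   # then unlock and mount as usual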
Let's see if I can get that flash drive back!
P.S.: if you're into electronics and worried about getting stuff like this stolen, customize it. I happen to know one particular property of that flash drive that I can use for verification, although it wasn't explicitly customized. But for instance in that flash drive there was a decorative LED. Those are current limited by a resistor. Factory default can be say 200 ohm - replace it with one with a higher value. That way you can without any doubt verify it to be yours. Along with other extra security additions, this is one of the things I'll be adding to my "keychain v2".
-
TLDR; i wrote a recursive compression script with a random algorithm to fuck up some lazy-ass girl.
one day, an unknown classmate told me she had a family reunion and couldn't do her programming assignment, which would be collected the next morning, so she asked me to do it. i said i needed to put a price tag on it because i wanted to buy a new RasPi --i don't know her either, so i don't feel bad about it. i told her i needed $20 and after some bargaining it settled at $15. i worked on it for about 3 hours, told her it was finished and sent her a demo video as proof. she was happy with the result and would come to my house later that night to get the source code. at night, she came, and gave me only 8 bucks. of course i got mad, but with every argument she threw at me i refused to give her the source code. since i was too tired to get into a longer argument, i accepted the 8 bucks and went upstairs to get the source code. but instead of giving her the actual source code, i wrote a quick script to compress the source code folder 50 times recursively with a random compression algorithm--sometimes gzip, sometimes lzma--and gave her the final 50-times-compressed source code. EAT SHIT MOTHERFUCKER
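for the curious, a sketch of what that script did (paths made up, same idea):
#!/bin/bash
# wrap the source folder in 50 layers of randomly chosen compression
tar cf payload.tar src_folder/
for i in $(seq 1 50); do
  if (( RANDOM % 2 )); then
    gzip -9 payload.tar && mv payload.tar.gz payload.tar                 # sometimes gzip
  else
    xz -9 --format=lzma payload.tar && mv payload.tar.lzma payload.tar   # sometimes lzma
  fi
done
-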
Today I wanted to activate gzip compression on a customer's site before delivery.
Unfortunately the hosting service (the most popular in France) did not activate this module on its servers. Why, in 2016, is this module not enabled by default?!
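If you want to check what your own host does, one request is enough (swap in the real domain):
curl -s -o /dev/null -D - -H 'Accept-Encoding: gzip' https://www.example.com/ | grep -i content-encoding
# no "content-encoding: gzip" in the output means the module is off
-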
Why isn't gzip compression on by default on servers? I can't think of any case where this isn't what the user wants.
-
Me: the web app is downloading a lot of static content while loading the page, leading to the app being very slow in low-bandwidth locations. Can you ensure compression is enabled while serving static files?
UI Developer: sure, I'll look into that. Btw, I have a question reg that.
Me: yes, pls.
UI Developer: once the compressed static files are downloaded to the browser, should I write a separate module to uncompress them?
Me: :-(Strategic Facepalm)
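(For anyone who missed the joke: the browser inflates gzip responses transparently, no extra module required. You can watch the same transparent handling with curl; URL hypothetical:)
curl -s --compressed -D - -o /dev/null https://www.example.com/static/app.js | grep -i content-encoding
# the file travels as gzip; curl, like every browser, decompresses it for you
-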
Wait, why is nginx communicating from our cache servers to the app servers using HTTP/1.0? Added http_version 1.1 to a general config. Moments later our responses started returning 500 in production, because one of our modules doesn't handle gzip. If I ever had a heart attack...
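For context, both the fix and the footgun live in a couple of stock nginx directives. A minimal sketch, upstream name hypothetical:
location / {
    proxy_pass http://app_servers;
    proxy_http_version 1.1;               # nginx talks HTTP/1.0 to upstreams unless told otherwise
    proxy_set_header Accept-Encoding "";  # or stop advertising gzip to an upstream the module can't handle
}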
-
Hm... Apparently I've been doing TDD all along... it's just that I don't save the tests in a separate project.
I just keep editing Main() to test whatever I'm working on (each class).
Also the NJTransit site is sneaky as ****. It seems the devs know a bit about how to prevent site scraping by checking Headers and Client information...
Took all afternoon to get this test to pass....
it works in Chrome but not in my code... and even after I spoofed all the headers... including GZIP.... it wouldn't work for multiple requests...
I need to create a new WebClient for each request.... no idea how it knows the difference or why it cares... maybe it's a WebClient bug...
And this is only the test app. Originally it was supposed to be built in React Native, but that has its own problems...
Books are too old, the examples don't work with the latest...
But I guess this also has a upside... learn TDD and React rather than just React... hopefully can finish this week...
I'm actually on vacation... yea... I still code like a work day... 10AM - 8PM....
-
So for the past day I've been obsessed with adding compression to my build pipeline for web dev. I've implemented HTML minify, Babel JS minify and gzip, and made the server specify Content-Length to prevent chunking and shave off a few unused bytes. Is there anything else anyone can think of to get even more gains? So far my project went from 1.33 MB to 180 KB transferred. It's a huge win, just wondering if I can push it further somehow?
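One more possible win on top of all that: precompress the build output once at build time instead of gzipping on every request. A sketch (dist/ path assumed):
find dist -type f \( -name '*.js' -o -name '*.css' -o -name '*.html' \) -exec gzip -k9 {} \;
# serve the .gz sitting next to each original (e.g. nginx's gzip_static module) and the runtime CPU cost drops to zero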
-
Oh boyyy, I just had to work with Asterisk again. And holy shit it is still the clusterfuck it was many years ago.
We got:
- Inconsistent documentation that is mixed through all versions.
- The config sprinkled over what feels like 20 gazillion files.
- AEL being a half-assed attempt at a "pRoGRamMinG LanGuAgE".
- The fuck do you mean with extensions, endpoints and AORs?
- Inconsistent config parameter naming: some snake case, some camel case, some just everything smushed into a single word.
- queue_log determines whether to write a log to a file. queue_log_to_file says to do so independent of you having a realtime backend. Whatever the fuck that is.
- Log compression is done by executing a gzip command after a rotation??!!?!!
-
Spent the last 2 days trying to get an upstream data file loaded. I've now concluded it was corrupted during transfer, beyond repair... but I got to practice lots of Linux commands trying to figure out what the issue was and fix it (the XML parser was originally throwing an error about nulls).
vi, grep, head, tail, sed, tr, wc, nohup, gzip, gunzip, input/output redirection
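For a sense of what that looked like, the kind of checks involved (file names hypothetical):
gunzip -t upstream-feed.xml.gz                   # test archive integrity without extracting
gunzip -c upstream-feed.xml.gz | head -c 200     # peek at the first bytes
tr -d '\000' < upstream-feed.xml > cleaned.xml   # strip the NULs the XML parser choked on
wc -c upstream-feed.xml cleaned.xml              # compare sizes to see how much was junk
-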
When gulp build (Express.js + BrowserSync + Browserify + Uglify + gzip + gulp-zip + gulp-tar) perfectly compiles your code, minifies and concats all the JS, gzips it, minifies images, packages the files into a .zip and .tar... and refreshes your browser 😍
you begin to have faith that it's gonna be a nice day ahead
-
Ahoy der Ranters!
I'm looking for a log management service. My server application has a 90-day rolling policy (with gzip), but I would like to store logs somewhere else before they get deleted (after 90 days).
I've heard of CloudWatch, Papertrail and logz.io.
What would you recommend?
-
Just switched from gzip to brotli compression and I have to say I am impressed! Google may suck in some regards but brotli is awesome ☺️
-
Just wanted to free up some space and separate all of my projects.
First idea ... failed!
# pack the project into a compressed, read-only squashfs image
mksquashfs /home/tracktraps/Development/myproject1 ~/Squash/myproject1.sfs -info -progress -b 1048576 -comp xz -Xdict-size 100%
mkdir /mnt/myproject1
mount ~/Squash/myproject1.sfs /mnt/myproject1
# overlay a writable changes dir on top of the read-only mount
unionfs -o allow_other,nonempty ~/.unionfs/changes/myproject1=rw:/mnt/myproject1/=ro ~/Development/Project1
Too much CPU overhead, too many folders, files can't be deleted, everything gets mixed up ...
Second idea ... failed!
# create a fixed 10 GB image, format it as btrfs, mount with forced compression
dd if=/dev/zero of=~/Imgs/myproject1.btfs bs=1M count=10240
mkfs.btrfs ~/Imgs/myproject1.btfs
mount -o defaults,noatime,autodefrag,compress,compress-force,inode_cache ~/Imgs/myproject1.btfs ~/Development/Project1
Well ... little overhead, gzip compression, saved a lot of space, but a fixed image size.
Third idea ... yay!
# same as before, but the 200 GB image is a sparse file: it only occupies what it stores
truncate -s 200G ~/Imgs/myproject1.btfs
mkfs.btrfs ~/Imgs/myproject1.btfs
mount -o defaults,noatime,autodefrag,compress,compress-force,inode_cache ~/Imgs/myproject1.btfs ~/Development/Project1
Well ... little overhead, gzip compression, saved a lot of space ... but wait ... why do my .btfs files consume more and more space?
Hmm ... time for a little bash and my beloved systemd timers.
for f in $(find ~/Imgs -type f -name "*.btfs")
do
  # strip path and extension to get the project name
  project="$(basename "$f" .btfs)"
  # rewrite data and metadata chunks so btrfs actually frees the space
  btrfs balance start -v -dusage=100 ~/Development/"$project"
  btrfs balance start -v -musage=100 ~/Development/"$project"
  # discard unused blocks in the mounted fs, then punch holes in the backing file
  fstrim ~/Development/"$project"
  fallocate -d -v "$f"
done
-
Sooo. My team and I have a module we're supposed to be porting to async code, and aiohttp will not work. The server keeps rejecting the byte payload, but if we use synchronous code like the requests library, it works fine. The code is practically identical; the only difference is async. It's been really frustrating, because another drop-in async version of requests (httpx) works just fine! I don't want to use httpx, the rest of our codebase is already using aiohttp! We think the problem is gzip encoding being handled incorrectly by aiohttp. I've reported the issue.
-
Would you consider compression (gzipping static files) a preparation step of the deploy stage, or part of the build stage?
It's somewhat irrelevant, but it's bugging me.