Search - "compression"
-
Gavin: "Christina, so how bad is this? Be honest. Is this windows Vista bad? It is not iPhone 4 bad, is it? Fuck, don't tell me this is Zune bad."
Christina: "Sorry Gavin. It is Apple Maps bad."12 -
pm: our client wants a proprietary pdf compression app.
me: Okay gimme 3 days and some sample PDFs.
pm: they won't supply any sample PDFs because they contain confidential information.
me: okay fine, I'll download some from the interwebs.
** 3 days later **
me: here is the pdf compression app. all done and works with all of about 100 PDFs we tested with.
pm: okay great I'll have the client take a look.
** half an hour later **
pm: the client said that the compression app errors out.
me: okay I'll go look at the server logs to see what's up.
** 10 seconds later **
me: what the shit is a "foxit phantompdf" file.
pm: it's the proprietary pdf format that they are using.
me: oh joy. I'll go try to find some sample files and see if I can fix it.
** 1 hour later, no sample files found **
pm: got anything?
me: *sobs obnoxiously*
-
this.title = "gg Microsoft"
this.metadata = {
  rant: true,
  long: true,
  super_long: true,
  has_summary: true
}
// Also:
let microsoft = "dead" // please?
tl;dr: Windows' MAX_PATH is the devil, and it basically does not allow you to copy files with paths that exceed this length. No matter what. Even with official fixes and workarounds.
Long story:
So, I haven't had actual gainful employ in quite awhile. I've been earning just enough to get behind on bills and go without all but basic groceries. Because of this, our electronics have been ... in need of upgrading for quite awhile. In particular, we've needed new drives. (We've been down a server for two years now because its drive died!)
Anyway, I originally bought my external drive just for backup, but due to the above, I eventually began using it for everyday things. including Steam. over USB. Terrible, right? So, I decided to mount it as an internal drive to lower the read/write times. Finding SATA cables was difficult, the motherboard's SATA plugs are in a terrible spot, and my tiny case (and 2yo) made everything soo much worse. It was a miserable experience, but I finally got it installed.
However! It turns out the Seagate external drives use some custom drive header, or custom driver to access the drive, so Windows couldn't read the bare drive. ffs. So, I took it out again (joy) and put it back in the enclosure, and began copying the files off.
The drive I'm copying it to is smaller, so I enabled compression to allow storing a bit more of the data, and excluded a couple of directories so I could copy those elsewhere. I (barely) managed to fit everything with some pretty tight shuffling.
but. that external drive is connected via USB, remember? and for some reason, even over USB3, I was only getting ~20mb/s transfer rate, so the process took 20some hours! In the interim, I worked on some projects, watched netflix, etc., then locked my computer, and went to bed. (I also made sure to turn my monitors and keyboard light off so it wouldn't be enticing to my 2yo.) Cue dramatic music ~
Come morning, I go to check on the progress... and find that the computer is off! What the hell! I turn it on and check the logs... and found that it lost power around 9:16am. aslkjdfhaslkjashdasfjhasd. My 2yo had apparently been playing with the power strip and its enticing glowing red on/off switch. So. It didn't finish copying.
aslkjdfhaslkjashdasfjhasd x2
Anyway, finding the missing files was easy, but what about any that didn't finish? Filesizes don't match, so writing a script to check doesn't work. and using a visual utility like windirstat won't work either because of the excluded folders. Friggin' hell.
Also -- and rather the point of this rant:
It turns out that some of the files (70 in total, as I eventually found out) have paths exceeding Windows' MAX_PATH length (260 chars). So I couldn't copy those.
After some research, I learned that there's a Microsoft hotfix that patches this specific issue! for my specific version! woo! It's like. totally perfect. So, I installed that, restarted as per its wishes... tried again (via both drag and `copy`)... and Lo! It did not work.
After installing the hotfix. to fix this specific issue. on my specific os. the issue remained. gg Microsoft?
Further research.
I then learned (well, learned more about) the unicode path prefix `\\?\`, which bypasses Windows kernel's path parsing, and passes the path directly to ntfslib, thereby indirectly allowing ~32k path lengths. I tried this with the native `copy` command; no luck. I tried this with `robocopy` and cygwin's `cp`; they likewise failed. I tried it with cygwin's `rsync`, but it sees `\\?\` as denoting a remote path, and therefore fails.
However, `dir \\?\C:\` works just fine?
So, apparently, Microsoft's own workaround for long pathnames doesn't work with its own utilities. unless the paths are shorter than MAX_PATH? gg Microsoft.
At this point, I was sorely tempted to write my own copy utility that calls the internal Windows APIs that support unicode paths. but as I lack a C compiler, and haven't coded in C in like 15 years, I figured I'd try a few last desperate ideas first.
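(For what it's worth, Python plus ctypes can reach those same Unicode APIs without a C compiler. A rough sketch of what such a copy utility might have looked like - the paths here are invented for illustration:)
import ctypes
def long_copy(src, dst):
    # the \\?\ prefix tells the wide Win32 APIs to skip MAX_PATH parsing;
    # paths must be absolute and use backslashes only
    if not src.startswith("\\\\?\\"):
        src = "\\\\?\\" + src
    if not dst.startswith("\\\\?\\"):
        dst = "\\\\?\\" + dst
    if not ctypes.windll.kernel32.CopyFileW(src, dst, False):
        raise ctypes.WinError()
long_copy(r"E:\some\absurdly\deep\steam\path\file.bin", r"D:\rescue\file.bin")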
For the hell of it, I tried making an archive of the offending files with winRAR. Unsurprisingly, it failed to access the files.
... and for completeness's sake -- mostly to say I tried it -- I did the same with 7zip. I took one of the offending files and made a 7z archive of it in the destination folder -- and, much to my surprise, it worked perfectly! I could even extract the file! Hell, I could even work with paths >340 characters!
So... I'm going through all of the 70 missing files and copying them. with 7zip. because it's the only bloody thing that works. ffs
Third-party utilities work better than Microsoft's official fixes. gg.
...
On a related note, I totally feel like that person from http://xkcd.com/763 right now ;;
-
Story about an obscure bug: https://twitter.com/mmalex/status/...
"We had a ‘fun’ one on LittleBigPlanet 1: 2 weeks to gold, a Japanese QA tester started reliably crashing the game by leaving it on over night. We could not repro. Like you, days of confirmation of identical environment, os, hardware, etc; each attempt took over 24h, plus time differences, and still no repro.
"Eventually we realised they had an eye toy plugged in, and set to record audio (that took 2 days of iterating) still no joy.
"Finally we noticed the crash was always around 4am. Why? What happened only in Japan at 4am? We begged to find out.
"Eventually the answer came: cleaners arrived. They were more thorough than our cleaners! One hour of vacuuming near the eye toy- white noise- caused the in game chat audio compression to leak a few bytes of memory (only with white noise). Long enough? Crash.
"Our final repro: radios tuned to noise, turned up, and we could reliably crash the game. Fix took 5 minutes after that. Oh, gamedev...."5 -
Well apparently my compression algorithm actually made the file bigger. Back to the drawing board I guess?
-
"Tabs create smaller file sizes. I run a compression company, trust me, I've devoted my life to minimalizing file sizes."
- Richard Hendricks
-
!Story
The day I became the 400 pound Chinese hacker from 4chan.
I built this front-end solution for a client (but behind a back end login), and we get on the line with some fancy European team who will handle penetration testing for the client as we are nearing dev completion.
They seem... pretty confident in themselves, and pretty disrespectful of the LAMP environment, and they make the client worry that even though it's behind a login, the project is still vulnerable. No idea why the client hired an uppity .NET house to test a LAMP app. I don't even bother asking these questions anymore...
And worse, they insist we allow them to scrape for vulnerabilities BEHIND the server side login. As though a user was already compromised.
So, I know I want to fuck with them. and I sit around and smoke some weed and just let this issue marinate around in my crazy ass brain for a bit. Trying to think of a way I can obfuscate all this localStorage and what it's doing... And then, inspiration strikes.
I know this library for compressing JSON. I only use it when localStorage space gets tight, and this project was only storing a few k to localStorage... so compression was unnecessary, but what the hell. Problem: it would be obvious from exposed source that it was being called.
After a little more thought, I decide to override the addslashes and stripslashes functions and to do the compression/decompression from within those overrides.
I then minify the whole thing and stash it in the minified jquery file.
So, what LOOKS from exposed client side code to be a simple addslashes ends up compressing the JSON before putting it in localStorage. And what LOOKS like a stripslashes decompresses.
Now, the compression does some bit math that frankly is over my head, but the practical result is that if you output the data compressed, it looks like Mandarin and random characters. As a result, everything that can be seen in dev tools looks like the image.
So we GIVE the penetration team login credentials... they log in and start trying to crack it.
I sit and wait. Grinning as fuck.
Not even an hour goes by and they call an emergency meeting. I can barely contain laughter.
We get my PM and me and then several guys from their team on the line. They share screen and show the dev tools.
"We think you may have been compromised by a Chinese hacker!"
I mute and then die my ass off. Holy shit this is maybe the best thing I've ever done.
My PM, who has seen me use the JSON compression technique before and knows exactly whats up starts telling them about it so they don't freak out. And finally I unmute and manage a, "Guys... I'm standing right here." between gasped laughter.
If only it was more common to use video in these calls because I WISH I could have seen their faces.
Anyway, they calmed their attitude down, we told them how to decompress the localStorage, and then they still didn't find jack shit because i'm a fucking badass and even after we gave them keys to the login and gave them keys to my secret localStorage it only led to AWS Cognito protected async calls.
Anyway, that's the story of how I became a "Chinese hacker" and made a room full of penetration testers look like morons with a (reasonably) simple JS trick.
-
This is simply beautiful. Visualized sorting algorithms using colors.
Just discovered it on twitter (@galka_max)
If you have capped mobile data, search for WiFi first. It could be quite a lot of data...
Watch them here:
https://m.imgur.com/gallery/voutF
The videos and gifs pretty much defeat any compression technique.
Attached is their merge sort example heavily compressed from 16 to 5 megabytes to fit devRant's limits...
-
Making a meme to pass the time...
All this waiting, only because I wanted to try one small thing -.-
Oh boy, JPEG compression strikes again...
Wish I could upload pngs. Then I could use the alpha channel too.
-
Satoru Iwata.
You might remember him as the former president of Nintendo, but he was also a very impressive programmer. As he was president of HAL Laboratories, he helped with the development of Pokémon Stadium for the Nintendo 64 by porting the Pokémon Red/Blue battle system not by having any sort of documentation, but by reading the assembly source code.
He did so to allow Game Freak's developers (who were only a team of 4 at the time) to focus on their work on Pokémon Gold/Silver. But he did more: when they had to localize Red/Blue for America, they couldn't fit everything in a cartridge. They had the same problem while developing Gold/Silver, since cartridges had at most 8 Mb of storage capacity back then, and they had to fit not only the Johto region but the Kanto one as well! So Iwata stepped in, and created a graphics compression tool which managed to make everything fit in the cartridges.
He did this while not even being part of Nintendo, and the work was so impressive that the Pokémon devs thought it was "a waste to just have [him] as president!" (i.e. why not make use of such programming skills).
Truly someone I look up to.
-
Fucking retards. They make us submit 3 fully fledged fucking Android apps (with ALL the generated boilerplate crap), all zipped into one fucking folder which cannot exceed 10MB.
ARE YOU FUCKING KIDDING ME, YOU DUNG-EATING PREHISTORIC APE ?! ONE PROJECT ALONE IS 60 MB, HOW IN THE MOTHER-FLIPPING HELL DO YOU EXPECT ME TO FIT 3 OF THOSE INTO 10 MEASLY MEGABYTES?!
Ever heard of git you moth-eating-cactus-fucking pricks?! Time has come to learn it !!! Private repos are a thing, you cocksuckers.
May your bed be infested with bugs and your code riddled with Greek semi-colons. Fucking dimwits.
-
*Hopefully devRant's compression of images doesn't ruin it.*
The center of the trackpad simply feels smooth because of the oil from my finger.
The keys are oily as well. Specifically the space bar and backspace.
I guess it's time to find a way to clean it. I probably can't save the trackpad lest I damage it.
-
Apple, you drove of delusional suckers! When will your retarded fashion devices finally support WEBP?!
A gallery page with images, and thanks to WEBP, it's 408 kB. Because Google made WEBP and handed out a well documented CLI FOSS compression tool that even can convert the source PNGs to lossy WEBP with bloody transparency. Well done, Google!
Except that Apple's shitty management can't take it that Google actually made something nice, so no WEBP. Instead, JPEG-2000 that enjoys nearly no fucking tool support. The free tools that even can deal with that mostly don't support transparency, and the encoder sucks donkeys so that JPEG still fucks JPEG-2000 big time.
So it's JPEG with matching background for iOS. Fine, but since JPEG's blocky artifacts are much more visible, the compression can't be that high, and it's 769 kB. That's 88% more image data for Shittari than for non-retarded browsers and even Edge! EDGE!!
Oh and if the user changes light/dark system mode according to surrounding light conditions, guess what happens? Yep, since JPEG doesn't support transparency, now it's different JPEGs with dark background via the media query in the "picture" element, and it's another 754 kB download. Bloody 1523 kB instead of 408 kB, that's a factor of 3.7!
Fuck your ass Crapple, with an electric eel!
-
devRant on a HoloLens!
The HoloLens is really cool, I was allowed to use it after a short hackathon. I am still surprised, but it works great and the concept feels natural after a short moment - web browsing is not recommended as no website is optimized for mixed reality (yet?).
Sorry for the low quality photo (it is not the compression algorithm's fault this time).
-
The company I work for wants me to create an app to compress "foxit phantompdf" files, so that they are still readable after compression.
(referencing my last rant)
-
I apparently got home drunk last night and watched half an hour of a talk on optimizing compression for web applications.
Now I'm caressing a slight hangover with coffee and watching the rest of it.
-
Jesus Christ, how did that even get past QA. Non-resizable widget that nobody asked for where the text doesn't even fucking fit.. and that's not devRant compression... the text is fucking blurry for some reason????
-
I've optimised so many things in my time I can't remember most of them.
Most recently, something had to be the equivalent of `"literal" LIKE column` with a million rows to compare. It would take around a second on average per literal to look up, for a service that needs to be high load and low latency. This isn't an easy case to optimise; many people would consider it impossible.
It took me a couple of hours to reverse engineer the data and write a few-hundred-line implementation that would look it up in 1ms on average, with the worst possible case being very rare and not too distant from that.
In another case there was a lookup of arbitrary time spans that most people would not bother to cache because the input parameters are too short lived and variable to make a difference. I replaced the 50000+ line application acting as a middle man between the application and database with 500 lines of code that did the look up faster and was able to implement a reasonable caching strategy. This dropped resource consumption by a minimum of factor of ten at least. Misses were cheaper and it was able to cache most cases. It also involved modifying the client library in C to stop it unnecessarily wrapping primitives in objects to the high level language which was causing it to consume excessive amounts of memory when processing huge data streams.
Another system would download a huge data set for every point of sale constantly, then parse and apply it. It had to reflect changes quickly but would download the whole dataset each time, containing hundreds of thousands of rows. I whipped up a system so that a single server (barring redundancy) would download it in a loop, parse it using C, which was much faster than the traditional interpreted language, then use a custom data differential format, a TCP data streaming protocol, binary serialisation and LZMA compression to pipe it down to points of sale. This protocol also used versioning for catchup and differential combination for additional reduction in size. It went from being 30 seconds to a few minutes behind to being able to keep within a second of changes. It was also using so much bandwidth that it would reach the limit on ADSL connections and then get throttled. I looked at the traffic stats after and it dropped from dozens of terabytes a month to around a gigabyte or so a month for several hundred machines. From the drop in the graphs you'd think all the machines had been turned off, as that's what it looked like. It could now happily run over GPRS or 56K.
I was working on a project with a lot of data and noticed these huge tables and horrible queries. The tables were all the results of queries. Someone wrote terrible SQL, then to optimise it, ran it in the background with all possible variable values and stored the results of joins and aggregates into new tables. On top of those tables they wrote more SQL. I wrote some new queries and query generation that wiped out thousands of lines of code immediately and operated on the original tables, taking things down from 30GB and rapidly climbing to a couple GB.
Another time a piece of mathematics had to generate all possible permutations and the existing solution was factorial. I worked out how to optimise it to run in n*n, which believe it or not made a world of difference. It went from hardly handling anything to handling anything thrown at it. It was nice trying to get people to "freeze the system now".
I built my own frontend systems (admittedly rushed) that do what angular/react/vue aim for but with higher (maximum) performance, including an in-memory database to back the UI that had layered event-driven indexes and could handle referential integrity (an overlay on the database only revealing items with valid integrity) or reordering and repositioning events very rapidly using a custom AVL tree. You could layer indexes over it (data inheritance) that could be partial and dynamic.
So many times have I optimised things on automatic, just cleaning up code normally. Hundreds, thousands of optimisations. It's what makes my clock tick.
-
My worst experience was at my job where they told me I have to move to a permanent position from 3 years of contracting without a specific offer.
Why is that bad? In my country it means an approximately 40% lower wage.
I came into the job with PHP knowledge when they were looking for Perl on a project one year behind schedule. I learned the language and finished a working demo in 6 weeks.
After that, every project that was ever assigned to me was done within 5-15% of the allocated time. I'm not kidding here. My manager loved me, because I was reliable, fast and I even 'accidentally' solved other problems. For instance I developed a simple syslog search tool and benchmarked zip algos for reading speed, and the fastest had 70% better compression than the algo used before (gzip into plzip on 1-2gb files). That solved another problem - syslog servers did not have enough disk space and they didn't have money to upgrade the server.
The number of projects I touched or developed was over 20.
I also led and developed our team's most successful tool, that every customer was throwing money at, while it cut down costs everywhere.
And after three years of that, my manager says that there is no more money for contractors. And the only possibility is going for employment. Without any specific offer! Just 'we can't do this anymore'.
Which I understand, that can happen in a corporation, but ffs, after all I've done, I expected a warmer attitude. Not like 'you may have to leave, since we do not really care'.
I liked the people there, even though the corporate environment was lacking in many respects, but I wanted to help our local branch with everything I could and they gave up on me like that.
So I started looking elsewhere and I found a startup which offered 6 times the money of my previous job and promised to relocate me to the USA. Which is the best thing that happened to me that year and the second best in my whole life!
-
Seven years ago, a Russian artist started the "Putin Every Day" community. It started with one picture of Putin. Every day, he was downloading the previous day's pic and uploading it again.
Over the course of seven years, JPEG compression artefacts built up to what you can see here.
What an amazing metaphor.
https://vk.com/putineveryday
-
Working on a script that shows weather in a notification because why the hell not.
[edit] such compression much wow, thanks devrant
-
I seriously don't know why some people are still using Google Drive or DropBox. Pied Piper is much better
www.piedpiper.com
-
TLDR; i wrote recursive compression with random algorithm to fuck up some lazy ass girl.
one day, an unknown classmate told me she has a family reunion and cannot do her programming assignment which will be collected the next day in the morning, so she asked me to do it. i said i need to put a price tag on it because i want to buy a new RasPi --i don't know her either so i don't feel bad about it. i told her i need $20 and after some bargaining it settled at $15. i worked on it about 3 hours, told her it's finished and sent her a demo video as proof. she was happy with the result, and would come to my house later that night to get the source code. at night, she came, and gave me only 8 bucks. of course i got mad, and with every argument she threw at me i resisted giving her the source code. but since i was tired enough to get into a longer argument i accepted the 8 bucks and went upstairs to get the source code. but instead of giving her the actual source code; i wrote a quick script to compress the source code folder 50 times recursively with a random compression algorithm --sometimes gzip, sometimes lzma-- and gave her the final 50-times-compressed source code. EAT SHIT MOTHERFUCKER
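(the troll script itself is nothing fancy. a minimal sketch of the idea in Python - file names invented, any pair of compressors works:)
import gzip, lzma, random, shutil
src = "assignment_src.tar"  # hypothetical archive of the source folder
for i in range(50):
    opener = random.choice([gzip.open, lzma.open])  # sometimes gzip, sometimes lzma
    dst = "%s.%d" % (src, i)
    with open(src, "rb") as fin, opener(dst, "wb") as fout:
        shutil.copyfileobj(fin, fout)
    src = dst  # each round compresses the previous round's output
-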
Holy FREAKING shit!! This was the worst, stupidest mistake I have ever made!
About 9 hours ago, I decided to implement brotli compression on my server.
It looked a bit challenging to me, because all the guides involved compiling and building nginx with the brotli module, and I was not that confident doing that on a live site.
By the end of the guide, the site was not reachable anymore. I panicked.
Even the error logs and access logs were not picking up anything.
About a dozen guides, a new server, and a few major undocumented errors later, it turns out the main nginx.conf file had a line that was looking for *.conf files in the sites-enabled directory.
But my conf file was named after the domain name, ending with .com, and hence was not picked up by the new nginx.conf.
I'm not sure if I wasted my 9 hours because of that single line or not. But man, this was a really rough day!
-
Just to be clear:
I don't hate Zuckerberg,
I hate Facebook.
Always respect a guy who can put money together like it's a python list compression.
-
Algorithms real life implementation
On the way to your college canteen? -> A* search
Waiting in line in the canteen? -> Queue
Notice that girl standing in front? -> Linear search
Searching for her dad in the phone book? -> Binary search
Stupid! Google it! -> Trie
Search for her on Facebook! -> Depth-first search
Found her! Friend request? Accepted! Send a Hi! -> Graph
Writing her a secret love letter? -> Caesar cipher
Uploading your first date pic on fb? -> Image compression algorithms
Looking through her Whatsapp messages? -> KMP algorithm
She found out and you had your first fight? -> Start over with some gifts! Backtracking
Got her list of items to buy? -> Array
Too many items! Low on cash, maybe? -> Priority queue
Making her play treasure hunt for her gifts? -> Linked list
Wait! Go back! Is that a ring? -> Stack
Girl’s family not agreeing to your proposal? -> Divide and conquer
Got married? Congrats! Going for your honeymoon? -> Travelling salesman problem
Your mom packing luggage for you? -> 0/1 Knapsack problem
She packed your favorite pickles? -> Hash table
Driving to the airport? -> Breadth-first search
-
Yesterday I pulled 36 straight hours of coding compression algorithms. So help me god, I never want to look at another... Until I need to continue next week. ¯\_(ツ)_/¯
-
TL;DR: Printers suck. MS-Word sucks.
Yesterday I wanted to print a few participation certificates for my blender project students.
*Turns on printer, runs downstairs, gets paper, runs upstairs, puts paper in*
So I tried to print in word. Nothing happened. Printer was online. I checked queue: Nothing.
*a couple of tries later*
Okay, fuck it! I export it as a pdf and open it in edge (8 times. 8 documents. Edge is a neat pdf-viewer, fight me). I press print on one. It works. I print the others and check: They look shit. The images look like 25% resolution and 50% jpg compression. I check word.
It by default exports in low quality. Yea, thanks for asking me. I export pdfs again and check "high quality". Open them, print. Done.
These were like 30 wasted minutes and print color. And paper.
Btw they look fucking neat. I can't show them right now but gradient text headline, project name is a rendered and edited 3D object :D
-
!rant
TODAY WAS A SUCCESS!
-learned how to forward ports
-hosting a minecraft server
-made that stupid HP stream USEFUL
-i actually feel good about myself
note: modded server. You'll need Mantle (1.7.10), Tinker's Construct (1.7.10) and Ultra Block Compression (1.7.10).
pretty sure whitelist is disabled. the max is 50 players, not sure how good the connection will be. be nice to the ops, YoungWolves and Mehrsun
ip: 66.243.225.51
(default port)
again please be polite, the two OPs are not techy at all, but very nice gals
-
when a client sends an error message as a dual-screen screenshot and the actual message is unreadable because of JPEG compression
-
I am a Windows person. I always argue how great it is.
Well, not today.
I was today years old when I learned that you CANNOT uninstall a store app via the store ;p You need to go to settings / apps and functionality / your app / uninstall
The photo app (yes, the one bundled with win10) doesn't work if you use hard drive compression AND it is a symlink for OneDrive (so you don't need to keep all photos on the drive). Fucking Paint works without problems.
Email client: If you alt+tab too fast after hitting 'Send email', there is a 50% chance the email won't be sent. Basically you need to hit 'send' and wait until you see it in the 'sent' folder.
Well, as I'm ranting, here's one for Linux too:
I have a small ubuntu server VM, worked very well for the last 6 months. Now "System in read only mode". Fucking apt-get upgrade fucked with something. I don't want to look, so I'll just rebuild a fresh vm.
And macOS should take something too: Who the fuck decided "enter" is for editing the name of a file?! Really!
Well, ALL OSes are shit, all have downsides; I need my own OS. But I still want AA games... So Windows for me.
-
Today I wanted to activate the gzip compression on the site of a customer before delivery.
Unfortunately the hosting service (the most popular in France) did not activate this module on its servers. Why, in 2016, is this module not enabled by default?!
-
Guys!!!! (And Girls) There's a real life version of Pied Piper being developed by Dropbox engineers! www.openh264.org
-
I don't get the point of spamming a link...
Let's get on a long journey with Android Studio by my side (or not if I read these rants)...
Edit: Compression fucks it up... you can still see it's the same link being spammed.
-
Idea for a next gen image compression method:
* Put your image into a reverse AI image generator to get the prompt for that image.
* store the text string as the compressed image
* Put the text string into the AI image generator to get back your compressed image.
-
(long post is long)
This one is for the .net folks. After evaluating the technology top to bottom and even reimplementing several examples I commonly use for smoke testing new technology, I'm just going to call it:
Blazor is the next Silverlight.
It's just beyond the pale in terms of being architecturally flawed, and yet they're rushing it out as hard as possible to coincide with the .Net 5 rebranding silo extravaganza. We are officially entering round 3 of "sacrifice .Net on the altar of enterprise comfort." Get excited.
Since we've arrived here, I can only assume the Asp.net Ajax fiasco is far enough in the past that a new generation of devs doesn't recall its inherent catastrophic weaknesses. The architecture was this:
1. Create a component as a "WebUserControl"
2. Any time a bound DOM operation occurs from user interaction, send a payload back to the server
3. The server runs the code to process the event; it spits back more HTML
Some client-side js then dutifully updates the UI by unceremoniously stuffing the markup into an element's innerHTML property like so much sausage.
If you understand that, you've adequately understood how Blazor works. There's some optimization like signalR WebSockets for update streaming (the first and only time most blazor devs will ever use WebSockets, I even see developers claiming that they're "using SignalR, Idserver4, gRPC, etc." because the template seeds it for them. The hubris.), but that's the gist. The astute viewer will have noticed a few things here, including the disconnect between repaints, inability to blend update operations and transitions, and the potential for absolutely obliterative, connection-volatile, abusive transactional logic flying back and forth to the server. It's the bring out your dead approach to seeing how much of your IT budget is dedicated to paying for bandwidth and CPU time.
Blazor goes a step further in the server-side render scenario and sends every DOM event it binds to the server for processing. These include millisecond-scale events like scroll, which, at least according to GitHub issues, devs are quickly realizing requires debouncing, though they aren't quite sure how to accomplish that. Since this immediately becomes an issue with tickets saying things like, "scroll event crater server, Ugg need help! You said Blazorclub good. Ugg believe, Ugg wants reparations!" the team chooses a great answer to many problems for the wrong reasons:
gRPC
For those who aren't familiar, gRPC has a substantial amount of compression primarily courtesy of a rather excellent binary format developed by Google. Who needs the Quickie Mart, or indeed a sound markup delivery and view strategy when you can compress the shit out of the payload and ignore the problem. (Shhh, I hear you back there, no spoilers. What will happen when even that compression ceases to cut it, indeed). One might look at all this inductive-reasoning-as-development and ask themselves, "butwai?!" The reason is that the server-side story is just a way to buy time to flesh out the even more fundamentally broken browser-side story. To explain that, we need a little perspective.
The relationship between Microsoft and it's enterprise customers is your typical mutually abusive co-dependent relationship. Microsoft goes through phases of tacit disinterest, where it virtually ignores them. And rightly so, the enterprise customers tend to be weaksauce, mono-platform, mono-language types who come to work, collect a paycheck, and go home. They want to suckle on the teat of the vendor that enables them to get a plug and play experience for delivering their internal systems.
And that's fine. But it's also dull; it's the spouse that lets themselves go, it's the girlfriend in the distracted boyfriend meme. Those aren't the people who keep your platform relevant and competitive. For Microsoft, that crowd has always been the exploratory end of the developer community: alt.net, and more recently, the dotnet core community (StackOverflow 2020's most loved platform, for the haters). Alt.net seeded every competitive advantage the dotnet ecosystem has, and dotnet core capitalized on. Like DI? You're welcome. Are you enjoying MVC? Your gratitude is understood. Cool serializers, gRPC/protobuff, 1st class APIs, metadata-driven clients, code generation, micro ORMs, etc., etc., et al. Dear enterpriseur, you are fucking welcome.
Anyways, b2blazor. So, the front end (Blazor WebAssembly) story begins with the average enterprise FOMO. When enterprises get FOMO, they start to Karen/Kevin super hard, slinging around money, privilege, premiere support tickets, etc. until Microsoft, the distracted boyfriend, eventually turns back and says, "sorry babe, wut was that?" You know, shit like managers unironically looking at cloud reps and demanding to know if "you can handle our load!" Meanwhile, any actual engineer hides under the table facepalming and trying not to die from embarrassment.
-
Our team - if it ever existed - is falling apart. Pressure rising. Release deadline probably failing. No release ready for Big Sur.
Almost seemed we were getting somewhere: more focus on code quality, unit tests, proper design, smaller classes. But somehow we now ended up in "microservice" hell; a gazillion classes, mostly tested in isolation, but together they just fail to do their job. A cheap and dirty proof of concept from March is still more capable than this pile. I really start to doubt all that "Clean code", TDD, Agility rhetoric. What good is it, if nobody cares about the end result? For about a month now I've been trying to hammer home the message: we have to have testable artifacts, we have to ensure code signing works, our artifact has to be packaged and installable, we have to give QA something they can test - but time just passes and this piece of shit software still either gets killed or does nothing.
Now my knee is broken, I can do no sports, and I'm tied to my chair even more. To top it all, my coffee machine broke and my internet connection was abysmal this week. Not the usual small disconnects, after which it would recover, but more annoying and enduring: often being throttled to 1.7 MB/s (ranking my connection in the slowest 7% even in Germany). My RDP sessions had compression artifacts all over the screen and a mouse click would only take effect 5 sec later.
But my espresso machine was just repaired. Not all hope is lost.
-
What the literal fuck, Apple? YOU ADDED ANOTHER USELESS FORM OF DATA COMPRESSION??? NO ONE WILL EVER USE XIP!!
-
Since I love playing music on my piano, and because I love programming, I created this:
A program which uses YIN (some kind of FFT algorithm) to create this audio spectrum. It can read from any audio source, be it the microphone or the computer's audio output. There are 3 lines in the graph at the bottom:
Both magenta lines are movable and allow you to select just a part of the audio spectrum to show, which is very useful if you just want to get the chords out of a song, or just the notes.
The cyan line can also be moved, it tells the program the lower limit to calculate the actual notes/chords from the frequency bars (which are calculated by the FFT algorithm).
In the top right of the spectrum is, in magenta color, one single note. It shows the currently loudest frequency as a note.
In the top right corner is a simple image of a one-octave piano. Every note over the cyan line will be shown on this piano. This is very useful for chords.
Since compression will fuck this image up, here is a link: https://i.gyazo.com/fbbee76faecbac3...
What do you think about it? I fucking love it.
-
There should be an open source, Linux based Printer operating system. Like OpenWRT for routers, also works with almost every device in the wild.
This would be such a relief for everyone. Come on, most printer firmwares are crap.
Remember Scannergate? Yep, the one about professional Xerox scanners changing numbers in scanned documents. It went unnoticed for 8 years and affected almost every workcenter, even in the highest compression setting. Just because they wanted to save a few bytes by using pattern matching.
-
2 days ago I realized that I met Abraham Lempel (from the Ziv-Lempel/zip compression) in person, and didn't figure out it's the same Lempel
-
This is a story of how I did a hard thing in bash:
I need to extract all files with extension .nco from a disk. I don't want to use the GUI (which only works on windows). And I don't want to install any new programs. NCO files are basically like zip files.
Problem 1: The file headers (or something) is broken and 7zip (7z) can only extract it if has .zip extension
Problem 2: The find command gives me paths relative to the disk root, starting with . (a dot)
Solution: Use sed to delete dot. Use sed to convert to full path. Save to file. Load lines from file and for each one, cp to ~/Desktop/file.zip then && 7z e ~/Desktop/file.zip -oOutputDir (Extract file to OutputDir).
Problem 3: Most filenames contain a whitespace. cp doesn't work when given the path wrapped in quotes.
Patch: Use bash parameter substitution to change whitespace to \whitespace.
(Note: I found it easier to apply sed one after another than to put it all in one command)
Why the fuck would anyone compress 345 images into their own archive used by an uncommon windows-only paid back-up tool?
Little me (12 years old) knowing nothing about compression or backup or common software decided to use the already installed shitty program.
This is a big deal for me because it's really the first time I string so many cool commands to achieve desired results in bash (been using Ubuntu for half a year now). Funny thing is the images uncompressed are 4.7GB and the raw files are about 1.4GB so I would have been better off not doing anything at all.
Full command:
# collect the .nco paths and turn them into absolute paths
find -type f -name "*.nco" |
sed 's/^\.//' |
sed 's/.*/\/media\/mitiko\/2011-2014_1&/' > unescaped-paths.txt
# escape the whitespace in each path
cat unescaped-paths.txt | while read line; do echo "${line// /\\ }" >> escaped-paths.txt; done
rm unescaped-paths.txt
# keep only the interesting archives (drop the *db.nco ones)
cat escaped-paths.txt | while read line; do (echo "$line" | grep -Eq '.*[^db]\.nco') && echo "$line" >> paths.txt; done
rm escaped-paths.txt
# copy each file as file.zip so 7z will touch it, then extract into Images
cat paths.txt | while read line; do cp $line ~/Desktop/file.zip && 7z e ~/Desktop/file.zip -oImages >/dev/null; done
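(In hindsight, the same job is a few lines of Python, which sidesteps both the whitespace escaping and the .zip rename - assuming the .nco containers really are zip-compatible, which 7z's success suggests:)
import glob, zipfile
for path in glob.glob("/media/mitiko/2011-2014_1/**/*.nco", recursive=True):
    if path.endswith("db.nco"):
        continue  # roughly what the grep above filters out
    try:
        zipfile.ZipFile(path).extractall("Images")  # zipfile doesn't care about the extension
    except zipfile.BadZipFile:
        print("not actually a zip:", path)
-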
Why isn't gzip compression on by default on servers? I cannot think of any case where that isn't what the user would want.
-
Me: the web app is downloading a lot of static content while loading the page, leading to the app being very slow in low bandwidth locations. Can you ensure compression is enabled while serving static files?
UI Developer: sure, I'll look into that. Btw, I have a question reg that.
Me: yes, pls.
UI Developer: once the compressed static files are downloaded to the browser, should I write a separate module to uncompress them?
Me: :-(Strategic Facepalm)
-
Today started off great!
New 5TiB HDD... Check!
Formatted with zfs under LUKS, with a high level of compression and dedup... Check!
Copying over roughly 4TiB of data, about 2 of which was scattered in small files... Coworker unplugged it from AC thinking it was his (they are sort of similar), when the process was almost complete.
Goddamit. zpool scrub.... 6 hours left. It's 9 pm over here, and I'm not a fan of leaving my stuff at work. Goddammit.
...I guess tomorrow is another day.
-
In my last year of high school (for those familiar with the Indian education system, that's Class 12), for my final project I made an image compressor/decompressor in Turbo C++ that used a discrete Fourier transform (DCT, actually) to work on 8x8 px blocks of images. I based it off some stuff I found about how JPEG compression works. It worked even better than I'd hoped even though it was slow as molasses (I programmed a naive Fourier transform instead of using a FFT variant). I remember jumping with joy all around my room at 3 o'clock in the morning like an excited gas molecule in a box. Fun times.
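For reference, the transform at the heart of that kind of 8x8 block compression is the 2D DCT-II that JPEG uses (and which this project presumably approximated); evaluated naively, it is four nested loops per block, which is exactly why it ran slow as molasses:
$$X(u,v) = \frac{1}{4}\, C(u)\, C(v) \sum_{x=0}^{7} \sum_{y=0}^{7} f(x,y)\, \cos\left[\frac{(2x+1)u\pi}{16}\right] \cos\left[\frac{(2y+1)v\pi}{16}\right], \quad C(0)=\frac{1}{\sqrt{2}},\ C(k)=1 \text{ for } k>0$$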
-
I was copying data from a failing zfs drive with rsync and I noticed that it spent a long time on the file ~/.local/share/Baloo/index
du -h index showed a 500ish MB file which didn't seem large enough to take this long.
I recalled that du shows disk usage, not file size and since I was using zfs compression they could be quite different.
so I added -A for apparent size:
du -hA index and it comes back with 1.7E
The file was 1.7 exabytes...
-
Attached to this post should be a photograph which features one of my workspaces, in addition to some JPEG compression artefacts.
-
Had some fun with textgenrnn (Tensorflow text generating thingy on Github). So I created a tiny dataset with some example c# code and let it train for a while.
Sorry people, but I ruined our jobs. We don't need to write code anymore.
Update: image was unreadable due to compression. Let me find an alternative.
-
Why does the image compression on here suck so bad? It's especially bad if you try to post text. Just go and buy pied piper and your problems are solved.
-
Remember kids: the more space a compression algorithm saves, the slower it is to deflate/inflate!
(imho this is why xz isn't as mainstream as gzip)
-
Do you hate / like loss-of-quality compression?
Check on this music video (might be loud):
https://youtube.com/watch/...
-
I've decided to, as an educational exercise, implement DEFLATE compression / decompression and zip file format, and eventually tackling Excel format (which is just a .zip) so I can generate true excel spreadsheets (instead of .csv files) client-side using JavaScript.
Are there already libraries that do this? Yes, but then I don't get to try to implement these interesting algorithms. Is it currently 1 AM? Yes. Do I have work tomorrow? Also yes.
If I don't just fall flat on my face, I'll post updates!
-
Dear Python,
Seriously fuck your horrible compression APIs. What if I want to make a normal stream a tarball stream?
- Me
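(For context, the stdlib incantation being complained about goes something like this - it works, but you have to know about the pipe-mode strings and hand-build TarInfo objects; a minimal sketch:)
import io, tarfile
buf = io.BytesIO()  # stand-in for "a normal stream"
# mode "w|gz" streams a gzipped tar straight into the file object, no seeking
with tarfile.open(fileobj=buf, mode="w|gz") as tar:
    data = b"hello"
    info = tarfile.TarInfo(name="hello.txt")
    info.size = len(data)  # forget this and you get a corrupt archive
    tar.addfile(info, io.BytesIO(data))
print(len(buf.getvalue()), "bytes of tar.gz")
-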
Why do SAMBA network drives have to suck this much? Yeah I understand that compiling to a network drive is probably a bad idea just for performance reasons alone but can't you at least not fuck with my git repo?
$ git gc
Enumerating objects: 330, done.
Counting objects: 100% (330/330), done.
Delta compression using up to 24 threads
Compressing objects: 100% (165/165), done.
Writing objects: 100% (330/330), done.
Total 330 (delta 177), reused 281 (delta 151), pack-reused 0
error: unable to open .git/objects/7e: Not a directory
error: unable to open .git/objects/7e: Not a directory
fatal: unable to mark recent objects
fatal: failed to run prune
$ git gc
error: unable to open .git/objects/00: Not a directory
fatal: unable to add recent objects
fatal: failed to run repack
$ git gc
Enumerating objects: 330, done.
Counting objects: 100% (330/330), done.
Delta compression using up to 24 threads
Compressing objects: 100% (139/139), done.
Writing objects: 100% (330/330), done.
Total 330 (delta 177), reused 330 (delta 177), pack-reused 0
Removing duplicate objects: 100% (256/256), done.
error: unable to open .git/objects/05: Not a directory
error: unable to open .git/objects/05: Not a directory
-
Client complains constantly over image quality.
Then continues to upload diagrams as jpg not as png and then is bothered about compression artifacts...
It also doesn't help that he works on a retina screen and we had to migrate his tiny thumbnail images from his old website.
Maybe I should buy a microscope... Or maybe send him the imagemagick documentation and he can choose the parameters to his liking?
-
Okay so after a few days of thinking, I think I'm sure about what I'm about to write:
Best: Discovering how to use streams while making a service that should extract a tar.gz, extract the tar.gz archives within it, filter the extracted files and correct some of them, then compress each folder as tar.gz and compress all the archives as .zip. The amazing thing for me is that with streams I could do all the operations in just two passes, maybe one if I had more time, saving disk writing time.
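(The core trick - nested stream-mode readers, so nothing gets scanned twice - could look roughly like this in Python; file names are invented and the filter/correct step is omitted:)
import tarfile
with tarfile.open("delivery.tar.gz", "r|gz") as outer:  # "r|gz" = forward-only stream
    for member in outer:
        if member.isfile() and member.name.endswith(".tar.gz"):
            inner = tarfile.open(fileobj=outer.extractfile(member), mode="r|gz")
            inner.extractall("out/" + member.name[:-7])  # strip ".tar.gz"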
Worst: upgrading a bunch of legacy Access 97 apps and their VBA code to Access 2013
-
Why are GIFs still a thing?!? I mean don't get me wrong, I don't have anything against short, audioless videos but why GIF?
GIF only provides a lossless per-frame compression so the performance sucks really bad. For example a 1 MB GIF converted to MP4 is only 100 KB.
So that's why they take longer to load and have such bad quality.
-
I'm building an Android library that basically does file compression, image compression etc. It's just a wrapper library around Android's native C++ code.
I wonder how this is going to behave on different devices, as you all know when you do low level stuff, each manufacturer has his own way of doing things in their version of the Android OS.
Once I put this lib up on github, please use the lib on your devices and tell me if you get any issues :D
-
!rant
tl;dr: programmer's excuse vs civilian excuse funny moment in conversation w/ gf
pertinent info: gf has access to my calendar; I added my class schedule for the upcoming semester earlier today
gf: you're taking human psychology where as im taking human development lol
me: I'm taking human psychology?
wat
gf: <screenshot of my calendar entry (it's human physiology)>
see
me: OH
Physiology
!=psychology
psybnlogy
close enough
the human brain's word recognition relies on lossy compression
not my fault
.-.
gf: ohhhhhh
I don't have my glasses on and my computer is far so that's my excuse lol
me: LOL
I assumed I misread it
didn't even double check your spelling
-
Oh boyyy, I just had to work with Asterisk again. And holy shit it is still the clusterfuck it was many years ago.
We got:
- Inconsequent documentation that is mixed through all versions.
- The config sprinkled over what feels like 20 gazillion files.
- AEL being a half assed attempt at a "pRoGRamMinG LanGuAgE"
- The fuck you mean with extensions, endpoints and AOR's?
- Inconsistent config parameter naming. Some are snake case, some camel case, some are just everything smushed into a single word.
- queue_log determines whether to write a log to a file. queue_log_to_file says to do so independent of you having a realtime backend. Whatever the fuck that is.
- Log compression is done by executing a gzip command after a rotation??!!?!!
-
So for the past day I've been obsessed with adding compression into my build pipeline for web dev. I've implemented html minify, babel JS minify, gzip, and made the server specify content length to prevent chunking and shave off a few unused bytes. Is there anything else anyone can think of to get even more gains? So far my project went from 1.33mb to 180kb transferred. It's a huge win, just wondering if I can push it further somehow?
-
RIP my Sunday...
Assignment for uni:
Code a decompression routine in Cortex-M0 assembly for the compression function your teacher provided....
It can't get much worse than that!
-
Not being able to look at people’s faces in person.
My autistic empathic mind-reading hyperperception works best when it has a lot of data, e.g. when visual contact isn't obstructed by a video compression algorithm. Without that sense, my brain has to work extra hard to read minds. It becomes exhausting. When I don't have this power for some reason, I feel very anxious. In the absence of data, a naturally anxious and depressed brain assumes the worst.
-
IMAGE COMPRESSION QUESTION
let's say i upload a 100x100 photo from my android device. this image has a size of e.g. 2MB. not a lot. if i compress it then the size will be e.g. 300kB. cool. upload is thunderbolt for any internet speed.
let's consider this case: a random ass motherfucker decides it is cool to upload a 10000x10000 image that has a size of e.g. 300MB. compressing this would be e.g. 150MB which is still a lot as fuck for one pic.
here's my question: where should the compression be handled? at the backend (REST API server) or the client (android image compression library)?
because if i try to send a 150MB pic to the server and their internet sucks - but to be fucking honest even the best internet speed would take way too long to upload - is it better to do the compression on the backend or the client?
or should i do compression in android? if i should do compression on the client then should i:
1) do the compression on the main thread with a progress dialog to make them wait until the compression PLUS the fucking upload is done, or
2) do the compression + THE upload in a background thread, in which case it can be dangerous for a verbose amount of fuckups (internet dies, phone explodes, etc) and the app crashes
which (one) option of the 2 suboptions from the second parent option branch?
of course this is an extremely unrealistic case, it is possible but that's not my point: my point is WHERE SHOULD THE COMPRESSION (as some kind of universal standard) BE HANDLED AT?
-
Ideas I've had over the years that could pan out and be useful:
SMS-DB: Stands for SMS-Data Burst. Used to allow those with low cell signal or no data plan to transfer data between a phone and some client via the standard SMS text space. Would be slow, but would act kinda like dial-up over SMS (as mobile lines are compressed on all service levels, even LTE, so traditional dial-up wouldn't work!) I have a general idea on how packets would be laid out, but that's about it so far...
everything2PNG: Allows one to transpose any file's data into a PNG with 3 bytes per pixel (full color RGB), which allows for a "compression" of sorts (about 91-93% on preliminary tests) AND allows further, more efficient compression of the resulting file. (Plus... it's just kinda cool to see files transposed as PNGs.) I actually have a simple transposer to go to PNG, but can't yet go back. Large files (around 600MB) use upwards of 4GB with efficient paging and other optimizations via NumPy so far, so it's not *viable* yet, but it's coming along nicely. (A rough sketch of the idea follows after the last item below.)
RPi-GPIO Interconnection Bus: A master/slave or round robin method to allow for Raspberry Pis to communicate using GPIO, which can help free up network bandwidth in RPi cloud computing clusters. At most, this'd allow for 4 bits used for pushing to the GPIO "bus", and 4 bits used for pulling from the "bus". 8 pins total are usually unused minimum, so either 3 or 4 pins for upload, 3 or 4 for download, and potentially 1 or 2 for commands, general non-data communication, etc. I made a version of this concept using Round Robin for a client, but it was horribly slow. (I also don't have distribution rights for the code, so i'm working from scratch.) Definitely doable. -
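As promised above, a rough Python sketch of the everything2PNG transposer idea, using Pillow - purely illustrative, not the actual tool, and a real version would also have to record the original byte length somewhere to be able to go back:
from math import ceil
from PIL import Image
def file_to_png(src, dst):
    data = open(src, "rb").read()
    side = ceil(ceil(len(data) / 3) ** 0.5)  # 3 bytes per pixel, square-ish canvas
    data += b"\0" * (side * side * 3 - len(data))  # pad the tail
    Image.frombytes("RGB", (side, side), data).save(dst)  # PNG's DEFLATE does the rest
file_to_png("input.bin", "output.png")
-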
Blake 1803: 📜"To see a World in a Grain of Sand And a Heaven in a Wild Flower"
🌏🏖🌌🌻
Me 2017: 📱Impressive compression ratio, but what is the complexity of Sand to World decompression?
-
This is what happens when you don't use compression on messages in a real time backend (with RabbitMQ)
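(The fix is basically a one-liner on the publish side. Sketched here with the pika client - queue name and payload are invented, and any AMQP client has an equivalent:)
import zlib
import pika
conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
ch = conn.channel()
payload = b'{"event": "tick", "seq": 42}'  # hypothetical JSON blob
ch.basic_publish(
    exchange="",
    routing_key="events",
    body=zlib.compress(payload),  # deflate before it hits the wire
    properties=pika.BasicProperties(content_encoding="deflate"),
)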
-
The real web development is optimising the shitty front end code.
The task assigned to me is optimisation of the dashboard page of a website which was developed by freelancers (end of contract from their side).
The front end is a mess. Individual js files (bootstrap, popper, jQuery, jQuery ui, loader and main) loading in production inside the head tag of the html file.
No text compression.
Every template has a random number of its own js files in any block of the template. Nothing structured. There will be a fantastic waste of time figuring out file dependencies.
Same with css files. Some are scss, some plain css. No compression. No proper modules.
Basically, I have to go through 25-30 html files. Then understand which template is extending which one. Go through all js and css files in each html file and again understand the dependencies between them.
This is gonna be real fun.
-
Am I the only one that uses YouTube as a video editor and compression converter?
Is there a better way that doesn't require After Effects and therefore a desktop with huge processing power?
-
Never ever mount a btrfs sparse file (-o compress-force) to /var/lib/docker.
- 70% space usage 😀
+ 300% random system freeze 😰
-
Just switched from gzip to brotli compression and I have to say I am impressed! Google may suck in some regards but brotli is awesome ☺️
-
isRant <- false
I finally got a tiny white dot displayed, using modern OpenGL, in F#, through OpenTK. There would be a picture, but picture compression made the dot invisible.
-
So a couple of months ago I had some stability issues which seem to have caused Baloo to go crazy and create a 1.7 exabyte index file. It was apparently mainly empty, as zfs compressed it down to 535MB
Today I spent some time trying to reproduce the "issue" and turns out that wasn't that hard.
So this little program, running on FreeBSD with a compressed (lz4) zfs dataset, creates a 1.9 exabyte file, nicely compressed down to 45KB :)
#include <fcntl.h>
#include <unistd.h>
#include <sys/limits.h> /* INT_MAX on FreeBSD */
int main(void) {
    int fd = open("bigfile.lge", O_RDWR|O_CREAT, 0644);
    /* seek ~2 GiB forward a billion times: ~1.9 EB offset, no data written */
    for (int i = 0; i < 1000000000; i++) {
        lseek(fd, INT_MAX, SEEK_CUR);
    }
    /* a single real byte at the end makes the filesystem report the full size */
    write(fd, " ", 1);
    close(fd);
}
-
Ticket: implement compression algorithm to crypto object x
Details: object too big, we must devise a way to compress it. A deflate algorithm should be added here, yada yada yada we did not have the time, yada yada...
Go see crypto provider's documentation... It has compression options... -_-
You lazy fucking stack overflow copy question dimwits!!! Jesus fucking Christ! This reached production like this shit, I've got clients complaining of the size of the payload because you are a bunch of lazy fucks who can't even read simple documentation!!!
I want to kill someone for wasting my time and patience... Don't call me for this kind of crap... I have better things to do!
I mean, the time it took you to write the ticket should suffice...
-
I have always wanted to work on lossless compression technology. I have just never had the time to go after it, nor do I really know where to start lol
-
Just wanted to free up some space and separate all of my projects.
First idea ... failed!
mksquashfs /home/tracktraps/Development/myproject1 ~/Squash/myproject1.sfs -info -progress -b 1048576 -comp xz -Xdict-size 100%
mkdir /mnt/myproject1
mount ~/Squash/myproject1.sfs /mnt/myproject1
unionfs -o allow_other,nonempty ~/.unionfs/changes/myproject1=rw:/mnt/myproject1/=ro ~/Development/Project1
Too much cpu overhead, too many folders, can't delete files, all get mixed up ...
Second idea ... failed!
dd if=/dev/zero of=~/Imgs/myproject1.btfs bs=1M count=10240
mkfs.btrfs ~/Imgs/myproject1.btfs
mount -o defaults,noatime,autodefrag,compress,compress-force,inode_cache ~/Imgs/myproject1.btfs ~/Development/Project1
Well ... little overhead, gzip compression, saved a lot of space, but fixed img size.
Third idea ... yay!
truncate -s 200G ~/Imgs/myproject1.btfs
mkfs.btrfs ~/Imgs/myproject1.btfs
mount -o defaults,noatime,autodefrag,compress,compress-force,inode_cache ~/Imgs/myproject1.btfs ~/Development/Project1
Well ... little overhead, gzip compression, saved a lot of space ... but wait ... why do my btfs files consume more and more space?
Hmm ... time for a little bash and my beloved systemd timers.
for f in `find . -type f -name "*.btfs"`
do
# basename strips the path and the .btfs extension to get the project name
project=$(basename "$f" .btfs)
btrfs balance start -v -dusage=100 ~/Development/$project
btrfs balance start -v -musage=100 ~/Development/$project
fstrim ~/Development/$project
fallocate -d -v $f
done
-
MySQL 5.7.32 breaks innodb zlib compression in combination with xtrabackup.
Oracle has started a trend of breaking things within GA release cycles....
https://percona.com/blog/2020/...
Seems like the MySQL ecosystem is finally splitting into MariaDB (as of 10.5, the mysql* binaries are renamed to mariadb*) and MySQL.
I hope Oracle MySQL dies.
What Oracle does is beyond madness.
MariaDB 10.5 has its troubles too. But at least you can look up their sources, check their bugtracker, and don't get a surprise foot-fisting up your arse.8 -
>the human mind cannot process non-Euclidean laws of physics and space
Yes it can. Non-Euclidean terms don't faze me. I've played too many games like Antichamber; removing laws of physics/the universe doesn't faze me.
How do I do these fucking things???2 -
Storytime.
The Prometheus tales
Part IV - A new FUBAR.
A new and very fascinating problem emerged a few days ago, after feeding some node definitions to the new titan instance.
It's a storage fuck-up. A major one.
If I'm informed correctly, the latest Prometheus should have the same (or even better) compression algorithms for metrics as the old one - because these fuckers are so damn good at what they are doing: compressing some fucking logs.
The new instance is aggregating metrics as planned. Grafana works like a fucking charm.
Nevertheless, for very fascinating but unknown reasons, the new instance creates 50GB of metrics in under 4 fucking hours.
Am I missing something here? Some magic parameter that has to be passed to the titan to enable the hardcore compress-them-fuckers feature?
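For what it's worth, the knobs I plan to check first - a sketch for a recent Prometheus 2.x, where the paths and limits are assumptions, not our actual config:
# cap how long and how large the TSDB may grow
prometheus \
  --config.file=/etc/prometheus/prometheus.yml \
  --storage.tsdb.retention.time=15d \
  --storage.tsdb.retention.size=50GB
# the usual culprit is label churn; promtool can report high-cardinality offenders
promtool tsdb analyze /var/lib/prometheus/data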
Debugging session is tomorrow.
To be continued. -
Why the fuck is it impossible to get crisp font rendering in Chrome (Windows desktop)? Firefox looks sooooo much more crispy... Get your shit together, Google; also, while you're at it, catch up to Firefox on WebAssembly loading time.
And Firefox, it would be really nice if you could start supporting brotli compression... Just saying.2 -
My fingers hurt while typing. I have a habit of hitting the keys way too hard and it's too hard to get rid of.
Should I buy those compression glove thingies? Which brand? Has anyone used them? Do they look cool?4 -
Do we need compression at the API level? Say I have a REST API sending JSON data on requests. If compression is needed, should it happen on the server when returning the JSON response, or on the client side when receiving it? Which one is ideal?13
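For HTTP APIs the usual split is: the client advertises what it can decode (Accept-Encoding), the server compresses the response body and labels it (Content-Encoding), and the client transparently decompresses. A quick sketch against a placeholder URL:
# --compressed makes curl send Accept-Encoding and decode the reply itself
curl -sv --compressed https://api.example.com/items -o /dev/null 2>&1 \
  | grep -iE 'accept-encoding|content-encoding'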
-
Would you consider compression (gzipping static files) a preparation step of the deploy stage, or part of the build stage?
It's somewhat irrelevant, but it's bugging me8 -
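One common pattern that sidesteps the question: precompress at build time, so the deploy stage just ships the artifacts and the web server picks up the .gz files at runtime. A sketch, assuming a dist/ output directory:
# build step: write foo.js.gz etc. next to the originals
find dist -type f \( -name '*.js' -o -name '*.css' -o -name '*.html' -o -name '*.svg' \) \
  -exec gzip -k -9 {} \;
# nginx, for example, can then serve these directly with: gzip_static on;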
So when I started at my current company, I was the second developer in the company. My job is to handle the embedded development side of our product. The existing code base? Made in fucking China. All of the comments were in Chinese too. They had implemented Huffman compression incorrectly and AES CBC encryption incorrectly as well. It was seriously some of the worst code I've ever seen. I remember one gem I found:
int header = *(int *)"MGIK";
😫1 -
Any WordPress developers here? Which free plugin is best for image compression/optimization?
Need Quick Suggestions!!2 -
Oh yeah, I wanna rant... What is this awful image compression on devRant?! A lot of the time, images posted by ranters are illegible if they contain text. If we are lucky enough, the ranter will then post an Imgur link, else... It would be really great to get decent image quality here on devRant... Please!2
-
Alright, so I'm making an MS-DOS 6.22 box on a modern laptop: 80GB HDD (SATA 2), 2GB RAM (rip 9x, smallest stick I got), AMD x64 CPU (they stuck a single-core 1GHz CPU in a laptop meant for Win8.0; with age it's slowed to around 600MHz effective, so it's more suited to being an XP box, but I have no XP drivers...)
I'm not strapped for space at all (I could make four 2GB partitions with no issue), but should I use DRVSPACE? I've never used DOS-era disk compression (and won't fucking touch DoubleSpace, for those that remember that and its issues) and I want to fuck with it some.
EDIT: fuck CSM, gotta use GRUB to load DOS...2 -
9 Ways to Improve Your Website in 2020
Online customers are very picky these days. Plenty of quality sites and services tend to spoil them. Without leaving their homes, they can carefully probe your company and only then decide whether to deal with you or not. The first thing customers will look at is your website, so everything should be ideal there.
Not everyone succeeds in doing things perfectly well on the first try. This is particularly true for websites. Besides, it is never too late to improve something and make it even better.
In this article, you will find the best recommendations on how to get a great website and win the hearts of online visitors.
Take care of security
It is unacceptable if customers who are looking for information or a product on your site find themselves infected with malware. Take measures to protect your site and visitors from new viruses, data breaches, and spam.
Take care of your SSL certificate: it should be monitored and renewed before it expires; a quick check is sketched below.
Be sure to install all security updates for your CMS. A lot of sites get hacked through vulnerable plugins, so try to keep their number down and update them regularly too.
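For instance, a minimal expiry check with openssl (example.com is a placeholder for your own domain):
# print when the certificate served on port 443 expires
echo | openssl s_client -connect example.com:443 -servername example.com 2>/dev/null \
  | openssl x509 -noout -enddate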
Ride it quick
Webpage loading speed is the first thing a visitor will notice, and the war is fought over milliseconds. Speeding up a site is not so difficult. The first thing you can do is apply old, proven image compression. If that is not enough, work on caching, or minify your JavaScript and CSS. Using a CDN is another good option.
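Image compression itself is easy to script. A sketch using two common open-source tools, optipng and jpegoptim (both assumed to be installed; img/ is a placeholder directory):
# losslessly optimize PNGs and strip metadata from JPEGs, in place
find img -name '*.png' -exec optipng -o2 {} \;
find img -name '*.jpg' -exec jpegoptim --strip-all {} \;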
Choose a quality hosting provider
In many respects, both the security and the speed of your website depend on your hosting provider, so choose carefully. Other users share their experiences with different providers on numerous discussion boards.
Content is king
Content is everything for a site: it is the blood, heart, brain, and soul of the website, and it should be useful, interesting, and concise. Sales copy is fine, but do not chase clicks alone. An interesting article or a useful how-to will increase customer loyalty, even if it does not include a call to action.
Communication
Communication should not be one-way. Make a convenient feedback form where your visitors do not have to fill out a million fields before sending a message. Do not forget about the phone, and better yet, add online chat with a chatbot and/or live support reps.
Refrain from unpleasant surprises
Please mind that self-starting videos, especially with sound, may irritate a lot of visitors and increase the bounce rate. The same is true of popups and sliders.
Next, do not be afraid of white space. Site owners are often obsessed with filling every free pixel on the page with menus, banners, and other stuff. Experiments with colors and fonts are rarely justified. Successful designs are usually brilliantly simple: white background + black text.
Mobile first
With such a dynamic pace of life, it is important to keep up with trends, and the future belongs to mobile devices. We have already passed the line where mobile devices generate more traffic than desktop computers. This tendency will only increase, so adapt your layout and mind the mobile-first and progressive enhancement concepts.
Site navigation
Your visitors should be your priority. Build navigation from human-oriented terms and concepts instead of search-engine-oriented phrases.
Do not let your visitors get stuck on your site. Always provide access to other pages, and be sure to mention which page will open so that visitors understand exactly where they are going and why.
Technical audit
A site can be compared to a house: you always need to monitor all of its systems, and there is always something to fix or improve. Therefore, a technical audit of any project should be carried out regularly. It is always better if you are the first to notice a problem, not your visitors or the search engines.
An audit typically covers items such as:
● Checking robots.txt / sitemap.xml files
● Checking for duplicate and technical pages
● Checking the use of canonical URLs
● Monitoring the 404 error page and redirects
There are many tools that help you monitor website performance and run regular audits, and a few of the basics are easy to spot-check by hand, as sketched below.
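A hypothetical manual spot check with curl (replace example.com with your own domain):
# expect 200 for robots.txt and sitemap.xml, and a proper 404 for a missing page
curl -s -o /dev/null -w '%{http_code}\n' https://example.com/robots.txt
curl -s -o /dev/null -w '%{http_code}\n' https://example.com/sitemap.xml
curl -s -o /dev/null -w '%{http_code}\n' https://example.com/no-such-page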
Conclusion
I hope these tips will help your site become even better. If you have questions or want to share useful lifehacks, feel free to comment below.