Search - "optimizations"
-
I absolutely HATE "web developers" who call you in to fix their FooBar'd mess, yet can't stop themselves from dictating what you should and shouldn't do, especially when they have no idea what they're doing.
So I get called in to a job improving the performance of a Magento site (and let's just say I have no love for Magento for a number of reasons) because this "developer" enabled Redis and expected everything to be lightning fast. Maybe he thought "Redis" was the name of a magical sorcerer living in the server. A master conjurer capable of weaving mystical time-altering spells to inexplicably improve the performance. Who knows?
This guy claims he spent "months" trying to figure out why the website couldn't load faster than 7 seconds at best, and his employer is demanding a resolution so he stops losing conversions. I usually try to avoid Magento because of all the headaches that come with it, but I figured "sure, why not?" I mean, he built the website less than a year ago, so how bad can it really be? Well...let's see how fast you all can facepalm:
1.) The website was built brand new on Magento 1.9.2.4...what? I mean, if this were built a few years back, that would be a different story, but building a fresh Magento website in 2017 in 1.x? I asked him why he did that...his answer absolutely floored me: "because PHP 5.5 was the best choice at the time for speed and performance..." What?!
2.) The ONLY optimization done on the website was Redis cache being enabled. No merged CSS/JS, no use of a CDN, no image optimization, no gzip, no expires rules. Just Redis...
3.) Now, to say the website was poorly coded would be an understatement. This wasn't the worst coding I've seen, but it was far from acceptable. There was no organization whatsoever. Templates and skin assets were being called from across 12 different locations on the server, making tracking down a snippet to fix downright annoying.
But not only that, the home page itself had 83 custom database queries to load the products on the page. He said this was so he could load products from several different categories and custom tables to show on the page. I asked him why he didn't just use a few JOIN queries (see the sketch after this list), and he had no idea what I was talking about.
4.) Almost every image on the website was a .PNG file, 2000x2000 px and lossless. The home page alone was 22MB just from images.
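(For the curious, this is roughly what the JOIN suggestion in point 3 looks like. A minimal sketch in Python with SQLite, using made-up table names since the actual schema isn't shown; Magento would do the same thing in PHP against MySQL.)

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE categories (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE products (id INTEGER PRIMARY KEY, name TEXT,
                           category_id INTEGER REFERENCES categories(id));
    INSERT INTO categories VALUES (1, 'Featured'), (2, 'Clearance');
    INSERT INTO products VALUES (1, 'Widget', 1), (2, 'Gadget', 2);
""")

# The anti-pattern: one query per category. Scale this up to 83 and weep.
rows = []
for (cat_id,) in cur.execute("SELECT id FROM categories").fetchall():
    rows += cur.execute(
        "SELECT name FROM products WHERE category_id = ?", (cat_id,)
    ).fetchall()

# The fix: one round trip, letting the database do the work with a JOIN.
rows = cur.execute("""
    SELECT c.name, p.name
    FROM categories AS c
    JOIN products AS p ON p.category_id = c.id
""").fetchall()
print(rows)
```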
There were several other issues, but those 4 should be enough to paint a good picture. The client wanted this all done in a week for less than $500. We laughed. But we agreed on the price, only because of a long relationship and because of some referrals they'd gotten us in the door with. But we told them it would get done on our time, not theirs. So I copied the website to our server as a test bed and got to work.
After numerous hours of bug fixes, recoding queries, disabling Redis and opting for a higher InnoDB cache (more on that later), image optimization, JS/CSS/HTML combining, render-unblocking and minification, lazyloading images, tweaking Magento to work with PHP 7, installing OPcache and setting up basic htaccess optimizations, we smashed the loading time down to 1.2 seconds total, and most of that time was for external JavaScript plugins deemed "necessary". Time to First Byte went from a staggering 2.2 seconds to about 45ms. Needless to say, we kicked its ass.
So I show their developer the changes and he's stunned. He says he'll tell the hosting provider to set up a new server where the optimized site can be migrated and cut over to, because taking the live website down for maintenance for even an hour or two in the middle of the night is "unacceptable".
So trying to be cool about it, I tell him I'd be happy to configure the server to the exact specifications needed. He says "we can't do that". I look at him confused. "What do you mean we 'can't'?" He tells me that even though this is a dedicated server, the provider doesn't allow any access other than a jailed shell account and cPanel access. What?! This is a company averaging 3 million+ per year in revenue. Why don't they have an IT manager overseeing everything? Apparently for them, they're too cheap for that, so they went with a "managed dedicated server", "managed" apparently meaning "you only get to use it like a shared host".
So after countless phone calls arguing with the hosting provider, they agree to make our changes. Then the client's developer starts getting nasty out of nowhere. He says my optimizations are not acceptable because I'm not using Redis cache, and now the client is threatening to walk away without paying us.
So I guess the overall message from this rant is not so much about the situation, but about the developer and the countless others like him who are clueless, yet try to speak from a position of authority.
If we as developers stop turning everything into a measuring contest and learn to let go when we need help, we can get a lot more done and prevent losing clients. </rant>
-
Client: Can you provide some kind of guaranteed timeline that you're going to be able to move our website to our new servers with the optimizations implemented? I know you said it should take a week, but we have 3 weeks to get this moved over and we cannot afford to be double billed. I'm waiting to fire up the new server until you can confirm.
Me: As I said, it SHOULD take about a week, but that's factoring in ONLY the modifications being made for optimization and a QA call to review the website. This does not account for your hosting provider needing to spin up a new server.
We also never offered to move your website over to said new server. I sent detailed instructions for your provider to move a copy of the entire website over and have it configured and ready to point your domain over to, in order to save time and money since your provider won't give us the access necessary to perform a server-to-server transfer. If you are implying that I need to move the website over myself, you will be billed for that migration, however long it takes.
Client: So you're telling me that we paid $950 for 10 hours of work and that DOESN'T include making the changes live?
Me: Why would you think that the 10 hours we logged for the process of optimizing your website include additional time that has not been measured? When you build out a custom product for a customer, do you eat the shipping charges to deliver it? That is a rhetorical question of course, because I know you charge for shipping as well. My point is that we charge for delivery just as you do, because it requires our time and manpower.
All of this could have been avoided, but you are the one that enforced the strict requirement that we cannot take the website down for even 1 hour during off-peak times to incorporate the changes we made on our testbed, so we're having to go through this circus in order to deliver the work we performed.
I'm not going to give you a guarantee of any kind because there are too many factors that are not within our control, and we're not going to trap ourselves so you have a scapegoat to throw under the bus if your boss looks to you for accountability. I will reiterate that we estimate it would take about a week to implement, test and run through a full QA together, as we have other clients within our queue and our time must be appropriately blocked out each day. However, the longer you take to pull the trigger on this new server, the longer it will take on my end to get the work scheduled within the queue.
Client: If we get double billed, we're taking that out of what we have remaining to pay you.
Me: On the subject of paying us, you signed a contract acknowledging that you would pay us the remaining 50% after you approved the changes, which you did last week, in order for us to deliver the project. Thank you for the reminder that your remaining balance has not yet been paid. I'll have our CFO resend the invoice for you to remit payment before we proceed any further.
---
I love it when clients give me shit. I just give it right back.
-
Client: We're gonna be hosting our site on [Overly popular shit host] via a shared hosting account.
Me: Well the performance isn't going to be stellar with WordPress on there, but if that's what you want, sure. I'll enable all the cache rules possible and make sure PHP 7 is running it, but there won't be any further optimizations I can do to make it faster with such limited hosting access.
[Next day after launching the website...]
Client: The website is super slow. I thought you were going to optimize it?
Me: That is the loading time with the optimizations I said I can apply. That host isn't great for performance unfortunately.
Client: Well you're going to need to find us a reputable host as a replacement, set up the account and move the website there so we aren't waiting forever for a page to load.
Me (in a reply via email):
-
To become an engineer (CS/IT) in India, you have to study:
1. 3 papers in Physics (2 mechanics, 1 optics)
2. 1 paper in Chemistry
3. 2 papers in English (1 grammar, 1 professional communication). Sometimes there's a third paper.
4. 6 papers in Mathematics (sequences, series, linear algebra, complex numbers and related stuff, vectors and 3D geometry, differential calculus, integral calculus, maxima/minima, differential equations, discrete mathematics)
5. 1 paper in Economics
6. 1 paper in Business Management
7. 1 paper in Engineering Drawing (drawing random nuts and bolts, locus of a point, etc.)
8. 1 paper in Electronics
9. 1 paper in Mechanical Workshop (sheet metal, wooden work, moulding, metal casting, fitting, lathe machine, milling machine, various drills)
And when you jump into a real-life scenario, you encounter source/revision/version control, profilers, build servers, automated build toolchains, scripts, refactoring, debugging, optimizations etc. As a matter of fact, none of these are touched on in the course.
Sure, they teach you a large set of algorithms, but they don't tell you when to prefer insertion sort over quicksort, quicksort over merge sort etc. (see the sketch below). They teach you Las Vegas and Monte Carlo algorithms, but they don't tell you that the randomizer in question should pass the Diehard tests (and then you wonder why the algorithm is not working as expected). They teach compiler theory, but you cannot write a simple parser after passing the course. They teach multicore architecture and multicore programming, but you don't know how to detect and fix a race condition. You passed the entire engineering course with flying colors, and yet you don't know the ABC of debugging (I wish you encounter some notorious heisenbug really soon). They taught you 2-3 programming languages, and yet you cannot explain a simple variable declaration.
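(To make the sorting point concrete, here's a toy benchmark, a sketch rather than anything rigorous: on a dozen elements, insertion sort's tiny constant factor tends to beat quicksort's recursion and partitioning overhead, which is exactly why production sort routines switch to insertion sort for small subarrays.)

```python
import random
import timeit

def insertion_sort(a):
    # Cheap on tiny or nearly-sorted inputs: no recursion, no partitioning.
    for i in range(1, len(a)):
        key, j = a[i], i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a

def quicksort(a):
    # Fine asymptotically, but the overhead dwarfs the work when n is small.
    if len(a) <= 1:
        return a
    pivot = a[len(a) // 2]
    return (quicksort([x for x in a if x < pivot])
            + [x for x in a if x == pivot]
            + quicksort([x for x in a if x > pivot]))

small = [random.randint(0, 100) for _ in range(12)]
print("insertion:", timeit.timeit(lambda: insertion_sort(small[:]), number=10_000))
print("quicksort:", timeit.timeit(lambda: quicksort(small[:]), number=10_000))
```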
And then, they say that you should have knowledge of multiple fields. Oh well! You don't have any damn idea about your major, and now you are talking about knowledge in multiple fields?
What is the point of such education?
PS: I am tired of interviewing shitty candidates with flying colours in their marksheets. Go kids, learn some real stuff first, and then talk some random bullshit.
-
I just can't understand what would lead a so-called software company, one that provides for my local government by the way, to use a cloud server (AWS EC2 instance) like it were a bare-metal machine.
They have it working, non-stop, for over 4 years or so. Just one instance. Running MySQL, PostgreSQL, Apache, PHP and an f* Tomcat server with no less than 10 HUGE apps deployed. I just can't believe this instance is still up.
By the way, they don't do backups, most of the data is on the ephemeral storage, they use just one private key for every dev, no CI, no testing. Deployments are nightmares, using scp to upload the .war...
But still, they are running several apps for things like registering citizen complaints that come in via hotlines. The system is incredibly slow, as they use just Hibernate without query optimizations to look up and search things (N+1 query problems).
They didn't even bother to get a proper domain. They use an IP address and expose the Tomcat port directly. No reverse proxy here! (No SSL either.)
I've been out of this company for two years now (it was my first job as a developer), but they needed help with an app that I worked on during my time there. I was really surprised to see that everything was still the same. Even the old private key that they emailed me (?!?!?!?!) back then still worked. All the passwords were still the same too.
I have some good rants from the time I was there, and about the general level of the developers in my region. But I'll leave them for later!
Is it just me, or is this whole shit crazy af?
-
Today I feel like a monster.
Due to the optimizations/automations I have made to the processes in my job, some low-level employees are no longer required... Ever again.
Their last day of work was yesterday, but I'm still uneasy wondering if this is how life is now. If every job will eventually be taken over by machines.
-
I am sure this has happened to all of us to some extent, with some variations.
Colleague not writing comments on code.
Ask him something like "How am I supposed to understand that piece of garbage you have written when there are no comments or documentation?"
This kept happening for a long time. Some time after, I wrote a kernel module using idiomatic C and ASM blocks for optimizations (for some RTOS) and purposely wrote neither documentation nor comments.
When he asked for an explanation, I answered everything he questioned as generally as I could, for "that trivial piece of code".
After that he always documents his code!
Win! 🏆
-
1. Scripting out a team. I've built a collection of bash scripts to do what one of our teams does. Except the script does it in 30min and always does it well, where that team used to take 4 to 10 hours and almost always missed something along the way.
2. Automating 70-80% of our BAU tasks with a single >4k LOC bash script. Integrations with ServiceNow, lots of internal portals, predefined huge sets of commands to run on separate servers or lists of servers, all sorts of diagnostics, scheduling HW maintenance for DC folks, chasing for approvals, tracking CHNG/CTSK tickets in a graphical chart so we would not miss any of them, and lots, lots more.
Finally we were able to afford time to make some coffee/tea.
These are the bau optimizations I'm proud of the most. And they have made significant impact on how our teams operate.
Whoever recognizes both company values in the tags and knows which company that is: are they still using ´S´ in the unix team? :)
-
As an IT student learning only C# and Java, I was asked very specific questions on C++ about micro-optimizations and binary operations (why I hadn't learned that I still wonder; I had to teach it to myself).
Because I was not able to answer those, I was denied the internship. Because fuck you and your wanting to learn as a student.
I literally mastered all the questions asked the day after the interview, just out of spite. They were all concepts I easily understood, but they valued their paper-based interview more than actually giving me some code to work with.
-
I've been writing a complex mutation engine that dynamically modifies compiled C++ code. Now there's a lot of assembly involved, but I got it to work. I finished off writing the last unit test before it was time to port it all to Windows. I switched into a release build, ready to bask in the glory of it all. FUCKING GCC OPTIMIZATIONS BROKE EVERYTHING. I had been doing all my dev in debug mode, and now some obscure optimization GCC does in release mode is causing a segfault...somewhere. Just when I thought I was done 😅
-
just had a 3AM fucking brain blast
On the TI-8x line of calculators, you can do link port shit to essentially blit music to the barrel link port. However, the only library for this sounds like dick and is a superslow piece of shit.
I've just had a thought:
The library this dude wrote spaces the note out, waits for it to end and then resets the port.
If you're blitting music to the port rapid-fire, you don't need to reset it except either after the music or during rests. I can push a note with a waveform of 0hz to the link port to reset it, so...
The dude posted his source, so I could theoretically have it set the link port to whatever note and immediately return.
This would make the time between notes upwards of like 8x faster, bringing my sample rate from like 25 to 200 or so. That's almost fast enough for barely-recognizable voice synth! A little fiddling would be needed to get me the rest of the way there, but there are probably optimizations I can do to push the envelope some.
-
I finally fucking made it!
Or well, I had a thorough kick in my behind and things kinda fell into place in the end :-D
I dropped out of my non-tech education way too late and almost a decade ago. While I was busy nagging myself about shit, a friend of mine got me an interview for a tech support position and I nailed it, I've been messing with computers since '95 so it comes easy.
For a while I just went with it, started feeling better about myself, moved up from part time to semi to full time, started getting responsibilities. During my time I have had responsibility for every piece of hardware or software we had to deal with. I brushed up documentation, streamlined processes, handled big projects and then passed it on to 'juniors' - people pass through support departments fast I guess.
Anyway, I picked up REXX and PowerShell and brushed up on bash and Windows shell scripting, so when it felt like there wasn't much left to optimize that I could easily do with scripting, I asked my boss for a programming course and free hands to use it to optimize workflows.
So after talking to programmer friends and you guys, and doing some research, I settled on C# for its broad application spectrum and ease of entry.
Some years have passed since. A colleague and I built an application to act as a portal for optimizations, and went on to automate AD management, various SSH/FTP jobs and backend jobs with a high manual failure rate. Hell, towards the end I turned in a hobby project that paid for itself ten times over in saved hours across the organization. I felt pretty good about my skills and decided I'd start looking for something with some more challenge.
A year passed with not much action, in part because I got comfy and didn't send out many applications. Then budget cuts happened half a year ago and our branch's IT got cut bad - myself included.
I got an outplacement thing with some consultant firm as part of the goodbye package, and that was pure gold - I got control of my CV, hit LinkedIn and got absolutely swarmed by recruiters and companies looking for developers!
So here I am today, working on an AspX webapp with C# backend, living the hell of a codebase left behind by someone with no wish to document or follow any kind of coding standards and you know what? I absolutely fucking love it!
So if you're out there and in doubt, do some competence mapping, find a nice CV template, update your LinkedIn - lots of sources for that are available - and go search; the truth is out there!
-
Teaching my homeschooled son about prime numbers, which of course means we also need to teach prime number determination in Python (his coding language of choice), which leads to a discussion of processing power, and a newly rented cloud server over at DigitalOcean, and a search for prime number search optimizations, questioning if Python is the right language, more performance optimizations, crap, the metrics I added are slowing this down, so feature flags to toggle off the metrics, crap, I actually have a real job I need to get back to. Oooh, I'm up to prime numbers in the two millions, and, oh, I really should run that ssh session in screen so it keeps running if I close my laptop. I could make this a service and let it run in the background. I bet there's a library for this. He's only 9. We've already talked about encryption and the need to find large prime numbers.
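(For reference, the kind of optimization that search turns up. Trial division only needs to test candidates up to sqrt(n), and past 2 and 3 every prime has the form 6k ± 1; a sketch:)

```python
def is_prime(n: int) -> bool:
    """Trial division, testing only 6k +/- 1 candidates up to sqrt(n)."""
    if n < 2:
        return False
    if n < 4:
        return True  # 2 and 3
    if n % 2 == 0 or n % 3 == 0:
        return False
    i = 5
    while i * i <= n:
        if n % i == 0 or n % (i + 2) == 0:
            return False
        i += 6
    return True

# Pick up right around where the search left off.
print([n for n in range(2_000_000, 2_000_100) if is_prime(n)])
```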
-
I had spent the last year working on an online store powered by WooCommerce, with over 100k products from various suppliers. This online store utilized a custom API that would take the various formats suppliers offer their inventory in and make them consistent. Now, everything was going swimmingly initially, but then I began adding more and more products using a plugin called WP All Import. I reached around 100k products, and the site would take up to an entire minute to load, sometimes timing out. I got desperate, so I installed several caching plugins, but to no avail; this did not help me. The site was originally only supposed to take three to four months but ended up taking an entire year. Then, just yesterday, I found out what went wrong and why this WooCommerce website with all of these optimizations was still taking anywhere from 60 to 90 seconds to load, or just timing out entirely. I had initially thought that I needed a beefier server, so I moved it to a high-CPU DigitalOcean VM. While this did help a little bit, the site was still very slow, and now I had high CPU usage, high RAM usage and high disk IO. I was seriously stumped; the Apache process was using a high amount of CPU and IO, along with MySQL as well. It wasn't until I started digging deeper into the database that I actually found out what the issue was. As I was loading the site I would run 'SHOW PROCESSLIST' in the SQL terminal, and I began to notice a very significant load time for one of the tables, so I went to check it out. I ran a select-all query on that particular table just to see how full it was, and SQL returned an error saying that I had exceeded the maximum packet size. So I was like, okay, what the fuck...
So I exited my SQL session and re-entered it, this time with a higher packet size. I ran a query to count how many rows were in this particular table, and the number came out to be in the millions. I was surprised, and what's worse is that this table belonged to a plugin I had attempted to use early in the development process to cache the site. The plugin was deactivated, but apparently it had left PHP files within the wp-content directory, outside of the actual plugin directory, so it was still executing scripts even though the plugin itself was disabled. Basically, every time I would change anything on the site, it would recache the whole thing, and it didn't delete any old records. So 100k+ products caching on saves with no garbage collection... You do the math; it's gonna be a heavy-ass database. Not only that, but it was serialized data, so when it did pull this metric shit-ton of spaghetti from the database, PHP then had to deserialize it. Hence the high-ass CPU load. I had caching enabled on the MySQL end of things, so that ate the RAM. I was really desperate to get this thing running.
Honest to God, the main reason why this website took so long was because the load times made it miserable to work on. I just thought that the hardware I had the site on was inadequate. I had initially started development on a small Linux VM, which apparently wasn't enough, which is why I moved it to DigitalOcean, which also seemed to not be enough, so from there I moved to a dedicated server, which still didn't seem to be enough. I was probably a few more 60-second wait times or timeouts away from recommending a server cluster to my client, who I know would not be willing to purchase it. The client who I promised this site to in 3 months and who has waited a year. Seriously, I would tell people the struggles I was going through with this particular site, and they would just tell me to drop it; just take the money, just take the loss. I refused to. This was really the only thing that was kicking my ass. I present myself as this high-and-mighty developer, like I'm just really good at what I do, but then I have this WordPress site that's been beating the shit out of me for a year. It was a very big learning experience, and it was also very humbling; it made me realize that I really don't know as much as I think I might. It was evidence that there is still so much more to learn out there. I did learn a lot from that experience, especially about optimizing websites and the different methods to do that, particularly on the server side, and I'll be able to utilize this knowledge in the future.
I guess the moral of the story is: never really give up. Ultimately, things might get so bad that you're running on hopes and dreams. Those experiences are generally the most humbling. Now I can finally present the site that I am basically a year late on to the client, who will be so happy that I did not give up on the project entirely. I'll have experienced this feeling of pure euphoria and helped the small business significantly grow their revenue. Helping others is very fulfilling for me, even at my own expense.
Anyways, gonna stop ranting. Running out of characters. If you're still here... Ty for reading :')
-
Buckle up, it's a long one.
Let me tell you why "Tree Shaking" is stupidity incarnate and why Rich Harris needs to stop talking about things he doesn't understand.
For reference, this is a direct response to the 2015 article here: https://medium.com/@Rich_Harris/...
"Tree shaking", as Rich puts it, is NOT dead code removal apparently, but instead only picking the parts that are actually used.
However, Rich has never heard of a C compiler, apparently. In C (or any systems language with basic optimizations), public (visible) members exposed to library consumers must have that code available to them, obviously. However, all of the other cruft that you don't actually use is removed - hence, dead code removal.
How does the compiler do that? Well, it does what Rich calls "tree shaking" by evaluating all of the pieces of code that are used by any codepaths used by any of the exported symbols, not just the "main module" (which doesn't exist in systems libraries).
It's the SAME FUCKING THING, he just hasn't researched enough to fully fucking understand that. But sure, tell me how the JavaScript community apparently invented something ELSE that you REALLY just repackaged and made more bloated/downright wrong (React Hooks, webpack, WebAssembly, etc.).
Speaking of JavaScript, "tree shaking" is impossible to do with any degree of confidence, unlike in statically typed/well-defined languages. This is because you can create artificial references to values at runtime using string functions - which means almost anything can end up being run, depending on the input.
How do you figure out what can and can't be? You can't! Since there is a runtime-dependent codepath and decision tree, you run into the halting problem, which cannot be solved in general.
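(The hazard in miniature. Sketched in Python with getattr rather than JS, but the principle is identical: since the name arrives at runtime, no static analyzer can safely prove handle_refund is dead code.)

```python
import sys

class Handlers:
    @staticmethod
    def handle_order(payload):
        return f"order: {payload}"

    @staticmethod
    def handle_refund(payload):
        return f"refund: {payload}"

# The target name is built at runtime (user input, config, the network...).
# A "tree shaker" reading this file sees no literal call to handle_refund,
# yet dropping it would break the program for some inputs.
action = sys.argv[1] if len(sys.argv) > 1 else "order"
handler = getattr(Handlers, f"handle_{action}")
print(handler({"id": 42}))
```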
With stricter languages such as C (which is where "dead code removal" is used quite aggressively), you can make very strong assertions at compile time about the usage of code. This is simply how C is still thousands of times faster than Javascript.
So no, Rich Harris, dead code removal is not "silly". Your entire premise about "live code inclusion" is technical jargon and buzzwordy drivel. Empty words at best.
This sort of shit is annoying and only feeds into this cycle of the web community not being Special enough and having to reinvent every single fucking facet of operating systems in your shitty bloated spyware-like browser and brand it with flashy Matrix-esque imagery and prose.
Fuck all of it.
-
Sometimes I just don't know what to say anymore
I'm working on my engine and I really wanna push high triangle counts. I'm doing a pretty cool technique called visibility rendering and it's great because it kind of balances out some known causes of bad performance on GPUs (namely that pixels are always rasterized in quads, which is especially bad for small triangles)
So then I come across this post https://tellusim.com/compute-raster... which shows some fantastic results, and just for the fun of it I implement it. Like, not optimized or anything, just a quick and dirty toy demo to see what sort of performance I can get.
... I just don't know what to say. Using actual hardware-accelerated rasterization, which GPUs are literally designed to be good at, I render about 37 million triangles in 3.6 ms. Eh, fine but not great. Then I implement this guy's unoptimized(!) software rasterizer and I render the same scene in 0.5 ms?!
IT'S LITERALLY A COMPUTE SHADER. I rasterize the triangles manually IN SOFTWARE and write them out with 64-bit atomic image stores. HOW IS THIS FASTER THAN ACTUAL HARDWARE!???
AND BY LIKE A ORDER OF MAGNITUDE AT THAT???
Like, I even tried doing some optimizations, like backface cone culling on the meshlets, but doing that makes it slower. HOW. I'm rendering 37 million triangles without ANY fancy tricks. No hi-z depth culling, which a GPU would normally do. No backface culling, which a GPU would normally do. Not even damn clipping of triangles. I render ALL of them ALL the time. At 0.5 ms.
-
PR comment: You don't need this useCallback and this useMemo, it's premature optimization, this component will not have a lot of rerenders anyway.
*remove it and merge it*
A few weeks later...
Slack message: why the app is so slow???
*opens a PR putting back all "premature" optimizations*
PR comment: Hum... I don't think that's it, but let's try it.
*merge it*
*slowness is gone*
Slack message: wow! How did you figure it out so quickly???
Me: I was lucky, I guess ¯\_(ツ)_/¯
-
Client: We are completely unable to plan a construction project successfully. We want you to use AI to do all of our project planning for us. Our requirements are that instead of needing to spend any money or time planning, we just want to press a button and have a computer instantly put together all of our project plans for us. The program also needs to identify optimizations on its own and change all related plans enterprise-wide. All copies of the plans should be kept up to date at all times so we're never looking at an old plan again. We also want the ability to print.
Dev: …
-
This shit makes me fucking rage! Ok, so here it goes. I use Multiswipe (multiswipe.com) and I'm actually very happy with it. It works as advertised and doesn't crash. Then I bought a new laptop. I wiped the old one and installed multiswipe on the new one. After launching I'm greeted with the registration (I bought it) where it claims I have already registered the software on a different computer. So I reach out to the owner and ask him to help out and am told
"MultiSwipe can be installed and updated as many times as you want as long as it is in the same machine, but if you change laptops then you will need to purchase a new license, this is because MultiSwipe implements a serious of optimizations depending on your touchpad brand and model and as such it's physically linked to it."
This sounds like horseshit to me. If I download a fresh copy, wouldn't it optimize depending on the new system?
But ok. I'll purchase another one, only to be told that my e-mail address is already in use. After reaching out I am told that I HAVE TO CREATE A NEW EMAIL ADDRESS.
Jon, if you read this, I seriously love your software but what the fuck is this fuckery. Like, seriously, how god damn hard can it be to allow customers to purchase multiple licences?
I'm so angry that I'm considering pouring time into cracking it and sending him the new version with the text "nvm, got it to work."
I'm open to suggestions....
-
You know the configuration sucks when it's a one-file, 10K-line nginx reverse proxy configuration.
But what really really really sucks....
If the person who wrote it was a google craptastic copy pasta ninja.
For fuck's sake, if you don't know what you are doing, just stop.
I've had this in so many rants, it's terrifying how many devs seem to be completely unaware of what they're doing Oo
This time, fuckwad ignored the basic principle of NGINX configuration: set the HTTP version for the proxy.
It defaults to HTTP/1.0 - and HTTP/1.1 requires a Host header, _which you must set if not already present_.
The fuckwad had all kinds of scary optimizations enabled. Literally a bukkaka (not a typo) of <way too high value> and <too obscure configuration value that cannot apply here>.
But the most trivial thing, enabling HTTP 1.1 and keepalive. Nope.
Not in it.
It's funny how fast NGINX can be without the bukkaka of configuration values but HTTP keepalive enabled.
*me sits in the silent corner of the plushy pink room with soft walls*
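(The difference is easy to measure even from the client side. A rough sketch with Python's requests, URL being a placeholder: a Session reuses one TCP connection via keep-alive, while a fresh request per call pays connection setup every single time.)

```python
import time
import requests

URL = "http://example.com/"  # placeholder: point at the proxy in question

def timed(label, fn, n=20):
    start = time.perf_counter()
    for _ in range(n):
        fn()
    print(f"{label}: {time.perf_counter() - start:.2f}s for {n} requests")

# New connection every time: TCP (and TLS, if any) handshake per request.
timed("no keep-alive", lambda: requests.get(URL, headers={"Connection": "close"}))

# One pooled connection, reused across all requests.
session = requests.Session()
timed("keep-alive", lambda: session.get(URL))
```
-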
Trying to install python3.7 for one fucking dynamic library for the past 3 hours. Built from multiple 3.7.x sources, tried altinstall, enable-shared, install, with/without optimizations, tried virtualenv which is absolutely fucking irrelevant but I tried.
Now I'm just asking someone who has it to send me the worthless soup of voltage fluctuations
Oh, and fuck Python. Or people that use it irresponsibly. I'm not in the right mind to distinguish.
Our profs: you are CSE students, you must be familiar with Linux
Their Linux: Ubuntu 20.04 LTS
Their tools: working for them
-
I regret moving to backend. I loved the days when I used to write lines of code and refresh my browser for the changes to be displayed on the screen. I loved seeing the output of my code, the code flow, the light weight text editor, the visual satisfaction and the chrome debugger.
Now I am fucked up, working on creating microservices for a RESTful API. I am hating everything about it: the fact that I should compile the entire WAR, manually copy it to a webapp folder, restart my Tomcat and wait for 5 minutes just to see my code changes; and the text editors are just a pain in the ass. The debugger sucks too.
I was so looking forward to being a backend dev because I thought Java was cool, and I was also fed up with cross-browser optimizations on the front end. Now I would gladly write a streaming service for IE6. Spring has fucked me up so hard.
God save me from this mess.
-
Looked up at the clock... 2 AM... Thought about giving up and going to sleep, but something kept me there...
Rewrote my encoder and decoder for my steganography program, which are used to insert and retrieve data respectively from images. Compiled, ran, and output was as expected!
Tried to write actual data, instead of just headers, to the image, and it broke... Of course it wouldn't work first try, it's me writing the code after all.
But then, after debugging for a while and changing a couple lines, the encoder looked like it had done its work properly. Then I decoded it, and voila, data completely recovered! It almost felt too magical to be true, usually I have to modify a lot more to get it working.
So now I'm in bed, after literally decimating the memory usage of the program, amongst other optimizations, and I know that the code works perfectly 😎 Best part is I refactored each class down to 100 lines, so now it's clean and dense 😇
Just had to share, feeling so good right now 😄
-
If you're working on close to hardware things, make sure you run static analysis, and manually inspect the output of your compiler if you feel something's off - it may be doing something totally different from what you expect, because of optimization and what not. Also, optimizations don't always trigger as expected. Also, sometimes abstractions can cost a fair amount too (C++ std::string c/dtor, for example, dtors in general), more than you'd expect, and in those cases you might want to re-examine your need for them.
Having said all that, also know how to get the compiler to work for you; hand-optimization at the assembly level isn't usually ideal. I've often been surprised by just how well compilers figure out ways to speed up / compactify code, especially when given hints, and it's way better than having a blob of assembly that's totally unmaintainable.
Learnt this from programming MCUs and stuff for hobby/college team/venture, and from messing around with the Haskell compiler and LLVM optimization passes.
-
Surveying Web developers who have used a Framework (like Angular, React, Vue, etc.) for my Master Thesis
Hi all,
I am writing my Master Thesis on Code optimizations when using a Web Framework.
Basically, I want to do a statistical analysis to see if any Web Framework makes web developers optimize their code.
To do this, I have set up a survey of 19 questions that shouldn’t take longer than 10 minutes of your time.
With these results I hope to find out whether code using a specific Web Framework is more optimized than code using another.
https://forms.gle/2A1pZKgHSUs2eyV3A
I thank you for your time and effort!
Dunky
-
Imagine an intelligent platform where creators are rewarded for launching and completing open source projects, and groups of people team up to set up open source factories, implementing top-level optimizations. There will of course be judges who can earn a basic income from the platform as well. I don't want to go into too much detail.
The platform could offer products as well, with a modular backend. So for example this platform could have a mobile OS with a maps app on it. What the platform should do is adopt the best maps algorithm into the application at certain intervals.
Product stays constant, the open source power behind it changes, and that promotes competition.
Factories, products, space tech, open source labs which require a certain reputation to get in......
It's very ambitious, but sounds like the way the future should take off. Companies and politics would be out of the picture down the line, and maybe even terrorism will take a break. After all, it's everyone together.
Oh, and of course it would have a search engine for sure.
-
I have made a lot of small changes in my app, like minor bug fixes, animations, optimizations, better database management, new options, overall interface improvements, ... to give the application a better overall appearance. Then I decide to show it to my client.
"From what I can tell, you haven't done much since last time"1 -
Orchid runs!
It's very far from done, but now I'm motivated to get shit done! My optimizations can now have measurable impact! The hypothetical examples no longer have to be hypothetical!!!
-
I can't really understand all the hate for JavaScript. I learned C as my first language, and I have to admit that once you think of JS as a kind of C scripting language, it all makes so much more sense, especially when it comes to optimizations, which come with syntactic sugar like coffeejs and so on.
-
Literally begging my brain to relax so I can sleep.
I've gotten my drive and passion for coding back (working on a project that'll be released for beta soon), but unfortunately the sleepless nights came back with it.
It's currently 6am and I want to sleep, but as soon as I close my eyes, I see more features, optimizations and potential bug fixes. I have a slight migraine, which is quite annoying.
Wish I could turn my brain off on command. Please make it stop 😔
-
Ideas I've had over the years that could pan out and be useful:
SMS-DB: Stands for SMS-Data Burst. Used to allow those with low cell signal or no data plan to transfer data between a phone and some client via the standard SMS text space. Would be slow, but would act kinda like dial-up over SMS (as mobile lines are compressed on all service levels, even LTE, so traditional dial-up wouldn't work!) I have a general idea on how packets would be laid out, but that's about it so far...
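(No claim that this matches the layout the author has in mind; just a naive sketch of the framing problem: each 160-character SMS carries a small header with a stream id, sequence number and total count, plus a base64 payload, so the receiver can reorder and reassemble.)

```python
import base64

SMS_LEN = 160
HEADER = "{sid:02x}{seq:03x}{tot:03x}"  # 8 hex chars: stream id, seq, total

def to_sms_packets(data: bytes, stream_id: int) -> list:
    payload_len = SMS_LEN - 8
    b64 = base64.b64encode(data).decode("ascii")
    chunks = [b64[i:i + payload_len] for i in range(0, len(b64), payload_len)]
    return [HEADER.format(sid=stream_id, seq=i, tot=len(chunks)) + c
            for i, c in enumerate(chunks)]

def from_sms_packets(packets: list) -> bytes:
    ordered = sorted(packets, key=lambda p: int(p[2:5], 16))  # sort by seq
    return base64.b64decode("".join(p[8:] for p in ordered))

msgs = to_sms_packets(b"hello from the dial-up-over-SMS future", stream_id=7)
assert from_sms_packets(msgs) == b"hello from the dial-up-over-SMS future"
print(len(msgs), "packet(s)")
```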
everything2PNG: Allows one to transpose any file's data into a PNG with 3 bytes per pixel (full-color RGB), which allows for a "compression" of sorts (about 91-93% on preliminary tests) AND allows further, more efficient compression of the resulting file. (Plus... it's just kinda cool to see files transposed as PNGs.) I actually have a simple transposer to go to PNG, but can't yet go back. Large files (around 600MB) use upwards of 4GB with efficient paging and other optimizations via NumPy so far, so it's not *viable* yet, but it's coming along nicely.
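(The to-PNG direction is simple enough to sketch, and a 4-byte length prefix gives the missing way back: the decoder can tell where the real data ends after padding. Assumes Pillow; nothing here is from the author's actual implementation.)

```python
import math
import struct
from PIL import Image

def encode_to_png(data: bytes, out_path: str) -> None:
    blob = struct.pack(">I", len(data)) + data       # length prefix + payload
    side = math.ceil(math.sqrt(math.ceil(len(blob) / 3)))
    blob += b"\x00" * (side * side * 3 - len(blob))  # pad to fill the square
    Image.frombytes("RGB", (side, side), blob).save(out_path)

def decode_from_png(path: str) -> bytes:
    blob = Image.open(path).convert("RGB").tobytes()
    (length,) = struct.unpack(">I", blob[:4])
    return blob[4:4 + length]

encode_to_png(b"any file's bytes go here", "demo.png")
assert decode_from_png("demo.png") == b"any file's bytes go here"
```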
RPi-GPIO Interconnection Bus: A master/slave or round robin method to allow for Raspberry Pis to communicate using GPIO, which can help free up network bandwidth in RPi cloud computing clusters. At most, this'd allow for 4 bits used for pushing to the GPIO "bus", and 4 bits used for pulling from the "bus". 8 pins total are usually unused minimum, so either 3 or 4 pins for upload, 3 or 4 for download, and potentially 1 or 2 for commands, general non-data communication, etc. I made a version of this concept using Round Robin for a client, but it was horribly slow. (I also don't have distribution rights for the code, so i'm working from scratch.) Definitely doable. -
Got my QBASIC demopart ready for testing, completely optimized and ready to roll. Found a 486 owner (33MHz, 8MB of RAM) who tested it for me.
Takes 13 seconds for the scroller to draw each frame at 300x200.
No further optimizations are possible; everything's crushed as far as it'll go, meaning a completely different method of scrolling is needed.
(Still faster than the array method by 37 seconds per frame, but y'know...)
fuck me... -
More a positive rant...
Just casually looked into an invitation to a collab tool my workplace set up for discussing optimizations of workflows, internal collabs, communication, yada yada...
Only to find out that there's A LOT of room for improvement being discussed, plus new ideas related to our work. Which is fucking great! Like "Hey we could maybe introduce A/B testing for our software" or "We should change the way our CI/CD works".
One of the best things I've seen so far: "We should do smth about (react) component XY, as it currently holds many configurable parameters for look and feel with too many possibilities"... These components are each like one big file that covers EVERY possibility. I had a gut feeling that some things were built in a complicated way, though originally with a good idea/intention in mind. I thought that I just needed time to get used to new things. Now I know that I still need to learn, but also that things NEED improvement, and that others agree on that, too.
I think this is a good sign: a company trying to reflect on itself to become better.
-
It seems to me that browsers lagging behind is the reason for the JS framework boom, both in recent years and ongoing, evident in what frameworks regard as major updates. Most of the functionality I've implemented in my time working on the front end addresses high-level problems ubiquitous enough to have been solved at the browser level. The same goes for all the optimizations CSR frameworks are struggling to attain. Every CSR app genuinely feels like recreating a browser, both in UX and dev requirements. These problems exist because current browsers are analog software, still accustomed to loading all content at once: no in-app state, just scroll state.
The React-Vue-Angular wars of today are a direct hat-tip to the Netscape-Microsoft wars of the early years. If they could form a coalition that sets a standard for syntax, the best rendering engine, a natural way for user-facing devs to control app state, fetch data or connect the back end, and somehow render this on the server or find a workaround for SEO issues on CSRs, etc., given the shared agreement on expectations for modern web software, it'll be fascinating to see such a possibility.
-
Does any of you have the compulsion to micro-optimize every bit of code that you write? How do you deal with it?
I'm not just talking about algorithmic optimizations, but the real nitty gritty stuff. I'm talking about using bit fiddling to avoid if statements where speculative processors might make mispredictions. Anything that might make a program compile to fewer machine instructions or avoid extra stack frame overhead.
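(For anyone who hasn't caught the bug, this is the flavor of trick in question. Sketched in Python, where it buys you nothing; in C the same expression compiles to straight-line code with no branch to mispredict.)

```python
def branchless_min(a: int, b: int) -> int:
    # (a < b) is 0 or 1; negating gives an all-zeros or all-ones mask,
    # which selects either b unchanged or b ^ (a ^ b) == a. No if statement.
    return b ^ ((a ^ b) & -(a < b))

assert branchless_min(3, 7) == 3
assert branchless_min(7, 3) == 3
assert branchless_min(-5, 5) == -5
```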
This all started a year ago when I took a systems programming course at my university, and started learning C and C++. But I find myself doing this in the wrong places. Who cares if this trivial program that I wrote runs in 1.2 or 0.6 seconds? My future employers won't care if my code is 10% more efficient when it takes four times as long to write.
It's gotten to the point that I can't bring myself to use languages like Python because I don't know how it's implemented under the hood and can't predict how the different ways I could write a function will affect performance. How do I bring myself to trust that the compilers (or interpreters) and the programmers that wrote them will be sufficiently optimal, and just move on? 😩
-
What are my odds that V8 will recognize my efforts and convert my for loop to use SIMD? It literally just adds the elements of one Float32Array to another; it's the single best chance for compiler optimizations to shine.
-
Now... I'm just tired and bored of what I do. I had a very hectic year rewriting core functionality at my company; it was full of optimizations, logic improvements and learning new things.
I took 10 days off hoping I'd come back hating my job less. I learned Kotlin and worked on a personal server-side project with it during the vacation, and honestly I loved it. I missed learning new languages and concepts.
So I thought, well, if I enjoyed coding during the vacation then my burnout is cured, right? Well, once I went back to work today I felt like shit and couldn't do a thing. Disgusted by the idea of coding for my employer. Too tired to continue my personal project after 8 hours at my job.
I guess I'm back to square one.
-
So at work, I was trying to convince a senior programmer to adopt Golang for a new project that would need to handle a large amount of throughput, as opposed to Django with gunicorn and other optimizations (which is what we usually do).
The response was "we'd have a hard time maintaining it, as no one really knows Go, and it would be hard to hire people with Go knowledge".
So my question: in your opinion, is that a good reason? I have some Go experience from other jobs, and it seems like a superior solution for this problem.
-
After having to deal with a lot of weird "rewrites" and "refactorings" by co-workers, I started to add this comment at the head of my source files:
You may think you know what the following code does.
But you don't. Trust me.
Fiddle with it, and you'll spend many sleepless nights cursing the moment you thought you'd be clever enough to "optimize" the code below. Now close this file and go play with something else.
Found this somewhere on the interwebs, and since I started using it, the "refactorings" and "optimizations" of my code have stopped almost completely.
-
Chromium dev tools and Lighthouse audits sound like a Chrome features marketing campaign, once you proceed beyond basic optimizations and bug fixes, like
use our new image formats, stop shipping old JavaScript to new browsers, provide a source map, use web font preload but only if you use it exactly matching the best case scenario, rewrite your manifest file which used to work just fine etc.
They actively encourage people to exclude up to 5% of the global website audience?!
"This means that 95% of global web traffic comes from browsers that support the most widely used JavaScript language features from the past 10 years"
https://web.dev/publish-modern-java... -
In Postgres, parallel query works by having multiple workers....
As such, depending on worker availability, queries can theoretically run fast (workers available) or slow (workers not available) - right?
Question is: are there more optimizations like this... And is my assumption correct?
It seems like a major pain in the ass... scalability- and reliability-wise. (?!)
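(Partial answer, sketched with psycopg2; the DSN and table are placeholders. EXPLAIN ANALYZE shows both "Workers Planned", what the planner asked for, and "Workers Launched", what the shared worker pool actually had free, which is exactly the fast/slow split in question.)

```python
import psycopg2

conn = psycopg2.connect("dbname=mydb")  # placeholder DSN
cur = conn.cursor()

# Planner-side cap: how many workers a single Gather node may request.
cur.execute("SHOW max_parallel_workers_per_gather")
print("per-gather cap:", cur.fetchone()[0])

# Compare Planned vs Launched to see whether workers were available.
cur.execute("EXPLAIN (ANALYZE) SELECT count(*) FROM big_table")  # placeholder
for (line,) in cur.fetchall():
    if "Gather" in line or "Workers" in line:
        print(line.strip())
```
-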
I am working on a small project for school and want to share it with you. Can you give me some feedback on it? It is a payroll system in which you can register, view and delete employees, and also generate payslips. It's my first ever project and I am somewhat proud of it, but I want to see what optimizations I can make to it, if you are so kind as to spare a few minutes of your busy life. o7 commanders :P
Here is the link to the GitHub: https://github.com/iiulian8/...
-
System.OutOfMemoryException was thrown trying to execute a stored procedure in the database. I think it might be time for some optimizations again... but not before coffee.
-
I don't know what's happening. I'm passing a char pointer to a function and having issues, so I'm printf'ing the address pointed to by the pointer. Right after I assign it, it contains the right address, but the printf on the next line has it containing a different address. Another printf shows yet another value, but all the following printfs show that third value. They're all consecutive printfs with nothing happening in between in the program, and the char* starts its life as an array. All compiler optimizations are disabled. I don't know what's happening; it's just randomly changing. 😭
-
If you are falling back on stored procedures for every performance optimization,
it means you haven't started well...
-
In the dynamic realm of software development, where the user interface meets the complex machinery behind the scenes, Back-End Expertise (https://sombrainc.com/expertise/...) emerges as the unsung hero. As businesses increasingly rely on digital platforms to connect, engage, and transact with their audience, the prowess of back-end development becomes paramount.
At its core, Back-End Expertise refers to the specialized knowledge and skills required to architect, build, and maintain the server-side of applications. While the front end dazzles users with intuitive interfaces and captivating designs, the back end silently weaves the intricate tapestry that ensures seamless functionality, robust security, and optimal performance.
The Back-End Symphony: Orchestrating Digital Harmony
Imagine a symphony where each instrument plays its part to perfection, creating a harmonious melody. Similarly, in the world of software, the back end orchestrates a symphony of databases, servers, and frameworks, ensuring that data flows smoothly, operations execute seamlessly, and applications respond promptly to user commands.
Back-End Experts are the virtuosos who write the code that makes applications tick. They delve into the intricacies of databases, crafting queries that retrieve and store data efficiently. They architect server-side logic, meticulously designing algorithms that power functionalities ranging from user authentication to complex business processes.
Security as the Forte: Safeguarding the Digital Fortress
In an era where data breaches loom as potential threats, Back-End Expertise becomes a formidable fortress. These experts implement robust security measures, safeguarding sensitive information and ensuring the integrity of digital ecosystems. Encryption, authentication protocols, and secure API integrations are the tools of their trade as they create digital bastions against cyber threats.
Optimizing Performance: The Need for Speed
User experience hinges on speed, and Back-End Experts understand the importance of optimizing performance. Through efficient coding practices, load balancing, and server-side optimizations, they strive to minimize latency and ensure that applications respond swiftly, even under heavy user loads.
Future Trends: Back-End Evolution
As technology evolves, so does the landscape of back-end development. Cloud computing, serverless architectures, and microservices are shaping the future of back-end expertise. Back-End Experts must adapt to these trends, embracing new tools and methodologies to stay at the forefront of innovation.
In conclusion, Back-End Expertise is the backbone of digital experiences. While users interact with the front end, the magic unfolds behind the scenes, where Back-End Experts craft the architecture that defines the reliability, security, and performance of applications. Their alchemy transforms lines of code into seamless digital experiences, leaving an indelible mark on the ever-evolving landscape of software development.