Search - "memory usage"
-
Google: “Your websites must load the first byte in under 500ms and be fully loaded with no render blocking and local caching of all external site callouts to even begin to rank in Google searches.”
Me: “Ok, Google. Your wish is my command.”
*Looks at Chrome’s memory usage to load a blank page*
-
The time my Java EE technology stack disappointed me most was when I noticed some embarrassing OutOfMemoryError in the log of a server which was already in production. When I analyzed the garbage collector logs I got really scared seeing the heap usage was constantly increasing. After some days of debugging I discovered that the terrible memory leak was caused by a bug inside one of the Java EE core libraries (Jersey Client), while parsing a stupid XML response. The library was shipped with the application server, so it couldn't be replaced (unless installing a different server). I rewrote my code using the Restlet Client API and the memory leak disappeared. What a terrible week!
-
When you create a bunch of objects in Java and it crashes because you're used to the memory usage of C's structs.
-
So, some time ago, I was working for a complete puckered anus of a cosmetics company on their ecommerce product. Won't name names, but they're shitty and known for MLM. If you're clever, go you ;)
Anyways, over the course of years they brought in a competent firm to implement their service layer. I'd even worked with them in the past and it was designed to handle a frankly ridiculous-scale load. After they got the 1.0 released, the manager was replaced with some absolutely talentless, chauvinist cuntrag from a phone company that is well known for having 99% Indian devs and not being able to hear you now. He of course brought in his number two, worked on making life miserable and running everyone on the team off; inside of a year the entire team was ex-said-phone-company.
Watching the decay of this product was a sheer joy. They cratered the database numerous times during peak-load periods, caused $20M in redis-cluster cost overrun, ended up submitting hundreds of erroneous and duplicate orders, and mailed almost $40K worth of product to a random guy in Outer Mongolia who is, we can only hope, now enjoying his new life as an Instagram influencer. They even terminally broke the automatic metadata, and hired THIRTY PEOPLE to sit there and do nothing but edit Swagger. And it was still both wrong and unusable.
Over the course of two years, I ended up rewriting large portions of their infra surrounding the centralized service cancer to do things like, "implement security," as well as cut memory usage and runtimes down by quite literally 100x in the worst cases.
It was during this time I discovered a rather critical flaw. This is the story of what, how and how can you fucking even be that stupid. The issue relates to users and their reports and their ability to order.
I first found this issue looking at some erroneous data for a low value order and went, "There's no fucking way, they're fucking stupid, but this is borderline criminal." It was easy to miss, but someone in a top down reporting chain had submitted an order for someone else in a different org. Shouldn't be possible, but here was that order staring me in the face.
So I set to work seeing if we'd pwned ourselves as an org. I spend a few hours poring over logs from the log service and dynatrace trying to recreate what happened. I first tested to see if I could get a user, not something that was usually done because auth identity was pervasive. I discover the users are INCREMENTAL int values they used for ids in the database when requesting from the API, so naturally I have a full list of users and their title and relative position, as well as reports and descendants in about 10 minutes.
I try the happy path of setting values for random, known payment methods and org structures similar to the impossible order, and submitting as a normal user, no dice. Several more tries and I'm confident this isn't the vector.
Exhausting that option, I look at the protocol for a type of order in the system that allowed higher level people to impersonate people below them and use their own payment info for descendant report orders. I see that all of the data for this transaction is stored in a cookie. Few tests later, I discover the UI has no forgery checks, hashing, etc, and just fucking trusts whatever is present in that cookie.
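A signed cookie would have stopped exactly this. A minimal sketch in Node/TypeScript of what was missing; all names here are illustrative, the secret is assumed to live server-side, and a real system should keep impersonation state on the server entirely:

```ts
import { createHmac, timingSafeEqual } from "crypto";

// Assumed server-side secret, e.g. loaded from a vault or env var.
const SECRET = process.env.COOKIE_SECRET ?? "dev-only-secret";

// Sign the serialized impersonation payload before it goes into the cookie.
function sign(payload: string): string {
  const mac = createHmac("sha256", SECRET).update(payload).digest("hex");
  return `${payload}.${mac}`;
}

// Verify on every request; anything the server did not itself issue is rejected.
function verify(cookie: string): string | null {
  const dot = cookie.lastIndexOf(".");
  if (dot < 0) return null;
  const payload = cookie.slice(0, dot);
  const mac = Buffer.from(cookie.slice(dot + 1), "hex");
  const expected = createHmac("sha256", SECRET).update(payload).digest();
  if (mac.length !== expected.length || !timingSafeEqual(mac, expected)) {
    return null; // forged or tampered cookie
  }
  return payload;
}

const cookie = sign('{"actsAs":"director-42"}');
console.log(verify(cookie) !== null);           // true: untouched cookie
console.log(verify(cookie.replace("42", "7"))); // null: tampering detected
```

Any edit to the payload invalidates the MAC, so the impersonation data can only contain what the server itself issued.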
An hour of tweaking later, I'm impersonating a director as a bottom rung employee. Score. So I fill a cart with a bunch of test items and proceed to checkout. There, in all its glory are the director's payment options. I select one and am presented with:
"please reenter card number to validate."
Bupkiss. Dead end.
OR SO YOU WOULD THINK.
One unimportant detail I noticed during my log investigations, which the shit-slinging GUI monkeys who butchered the system didn't, was that on a failed attempt to submit payment to the DB, the logs were filled with messages like:
"Failed to submit order for [userid] with credit card id [id], number [FULL CREDIT CARD NUMBER]"
One submit click later and the user's credit card number drops into lnav like a gacha prize. I dutifully rerun the checkout and got an email send notification in the logs for successful transfer to fulfillment. Order placed. Some continued experimentation later and the truth is evident:
With an authenticated user of any privilege, you could place any order, as anyone, using anyone's payment methods, and have it sent anywhere.
So naturally, I pack the crucifixion-worthy body of evidence up and walk it into the IT director's office. I show him the defect, and he turns sheet fucking white. He knows there's no recovering from it, and there's no way his shitstick service team can handle fixing it. Somewhere in his tiny little grinchly manager's heart he knew they'd caused it, and he was to blame for being a shit captain to the SS Failboat. He replies quietly, "You will never speak of this to anyone, fix this discreetly." Straight up hitler's bunker meme rage.
-
Woohoo! 32k achieved!!! Finally I can post some new rant without risking some sudden overshoot 😁
So, putting celebrations aside for a minute: a while ago I noticed a tingle when I stroke my finger across metal areas of my tablet, or the sides of my phone (which probably has metal near them too) while it's charging. And it's been bugging me ever since.
Now, some things to note are that it only happens when my feet are touching the ground through slippers, and that the frequency is so low that I can actually feel the tingle when I slide my finger across the material. This to me at least seems like electricity flows through me into the ground, and touching the ground directly provides a path so easy for the electrons to run away that I don't feel it at all. But if I lift my feet off the ground entirely, I just get charged up and after that, nothing else happens.
So those are my ideas. The answers on the subject on the other hand.. absolute cancer. Unsurprisingly, most of them came from Apple users. Here's some of them.
https://discussions.apple.com/threa...
- I've not noticed it, but if you're concerned bring the phone to Apple for evaluation.
- Me too facing same problem.. did u visit apple care?
And one good answer at least...
- google emf sensitivity, its real. You are right, there is a small current flowing through your body, try to limit your usage. The problem with this issue is those who aren't affected (lucky ones for now) will tell you these products are 100% safe. To a degree they are, i used my ipod touch for about 2 years straight with virtually no symptoms. then the tingling started and it gets worse. You will get more sensitive to progressively less powerful things. I dont want to scare you but just limit your usage like i didnt do 🙂
Overall that discussion was pretty good actually, aside from "bring it to the Genius Bar, they'll know for sure and not just sell you another unit". But then there's Reddit.
https://reddit.com/r/iphone/...
- Ok, real reason is probably that the extension cord and/or outlet is probably not grounded correctly. Either that or you are using a cheap knockoff charger.
Either use a surge protector and/or use the authentic Apple Charger.
- It's not the volts that hurt you, it's the amps
- I think you are in deep love with your phone. That tingling sensation is usually referred to as "love" in human language.
- Do less acid, I would advise.
Okay, so that's the real cancer. The grounding issue sounds reasonable despite it being wrong. Grounding is actually not needed when your charging appliance doesn't have any exposed metal parts. And isolation from the high voltage to the low voltage side actually happens through things like routing holes into the PCB, creating spark gaps, and using galvanic isolation through things like optocouplers. As for a surge protector? I'm using them to protect my PC and my servers, but the only purpose they serve is to protect from.. you guessed it.. voltage surges, like lightning bolts hitting the grid. They don't do shit for grounding or reducing this tingle! What a fucking tool.
It's not the volts that kill, it's the amps.. yeah I'm sure that the debunking of that is easy to find. Not gonna explain that here. And the rest of it.. yeah it's just fucking cancer.
Now what's the real issue with this tingle? It's actually a Class-Y rated (i.e. kV rated) capacitor that's on the transformer of any switch-mode power supply, including phone chargers. If memory serves me right, it helps with decoupling the switching noise and so on. But as it's connected to the primary side of the transformer, if the cap is sufficiently large and you are sufficiently sensitive, it can actually cause that tingle by passing a fraction of the mains electricity into your body. It's totally safe though, as the power that these caps pass is very small. But to some, it's noticeable.
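To put a rough number on it, the leakage current through such a cap is I = 2πfCV. With assumed values of a 1 nF Y-cap on 230 V / 50 Hz mains:

```latex
I = 2\pi f C V = 2\pi \cdot 50\,\mathrm{Hz} \cdot 1\,\mathrm{nF} \cdot 230\,\mathrm{V} \approx 72\,\mu\mathrm{A}
```

That is orders of magnitude below dangerous levels, but on a light sliding contact it can be just enough to feel.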
Hope you found this interesting! And thanks a lot for bringing me to 2^15. I really appreciate it ♥️
-
The application has had a suspected memory leak for years. The tech team got the developers THE EXACT CODE that caused it. A few months of testing go by, with them telling us they're resolving their memory leak problem (finally).
Today: yeah, we still need restarts because we don't know if this new deployment will fix our memory leak, we don't know what the problem is.
WHAT THE FUCK WERE YOU DOING IN THE LOWER REGIONS FOR THREE FUCKING MONTHS?!?!?! HAVING A FUCKING ORGY???????????????
My friends took the time to find your damn problem for you AND YOU'RE GOING TO TELL ME YOU DON'T KNOW WHAT THE PROBLEM IS???
It was in lower regions for 3 MONTHS and you don't know how it's impacting memory usage?!?!?! DO YOU WANT TO STILL HAVE A JOB? BECAUSE IF NOT, I CAN TAKE CARE OF THAT FOR YOU. YOU DON'T DESERVE YOUR FUCKING JOB IF YOU CAN'T FUCKING FIX THIS.
Every time your app crashes, even though I don't need to get your highest-level boss on the line anymore for approval to restart your server, I'M GOING TO FUCKING CALL HIM AND MAKE HIM SEE THAT YOU'RE A FUCKING IDIOT. Eventually, he'll get so annoyed with me, your shit will be fixed. AND I WON'T HAVE TO DEAL WITH YOUR USELESS ASS ANYMORE.
(Rant directed at project manager more than dev. Don't know which is to blame, so blaming PM)
-
A Developer is desperate: his Java application servers are unresponsive, thousands of dead zombie threads are sucking all CPUs, memory is leaking everywhere, the garbage collector has gone crazy, the cluster sessions are fucked....
The Developer goes to the closest bridge, ties a stone to his neck and gets ready to jump.
Suddenly a bearded old man with a fiery look runs toward him, yelling:
- stop stop!!!! Your application is not scaling and misconfigured, your servers are melting, cpu usage is not sustainable anymore, but don't despair
The Developer, puzzled, looks at him:
-I've never seen you...how do you know...
- Hey, man, I'm the Devil. I know everything. All your problems are solved. I'll give you magic functions. They are called Lambda.
You'll never have to worry about your servers, scalability, security, configuration and shit.
The Developer seems astonished but relieved:
- Ok, sounds great! let's try it - suddenly suspicion creeps in - hmmmm but you are the Devil....so...you want something back, don't you?
(the Devil nods lightly with a diabolic smile)
- ...and...you want my soul, I guess...
- your soul??? come on!!! - the Devil burst in a laugh - we are in 2019. I don't care about your soul. I want your ass.
- What!???!!!?
- yes, I want to fuck your ass
The Developer, evaluates quickly the situation.
A few moments of pain or slight discomfort (?) in exchange for magic lambda. It could be worth it. He accepts.
After a while of rough anal fucking, the devil asks
- Hey, how old are you anyway?
- 45, why?
- Oh jeeez...45!!!??? and you still believe in the devil?
-
I like memory hungry desktop applications.
I do not like sluggish desktop applications.
Allow me to explain (although, this may already be obvious to quite a few of you)
Memory usage is stigmatized quite a lot today, and for good reason. Not only is it an indication of poor optimization, but not too many years ago, memory was a much more scarce resource.
And something that started as a joke in that era is true in this era: free memory is wasted memory. You may argue, correctly, that free memory is not wasted; it is reserved for future potential tasks. However, if you have 16GB of free memory and don't have any plans to begin rendering a 3D animation anytime soon, that memory is wasted.
Linux understands this. Linux actually has three states for memory to be in: used, free, and available. Used and free memory are the usual. However, Linux automatically caches files that you use and places them in RAM as "available" memory. Available memory can be used at any time by programs, simply dumping out whatever was previously occupying the memory.
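Those three states can be read straight from the kernel. A small sketch (Linux-only; TypeScript/Node assumed here purely as an example language):

```ts
import { readFileSync } from "fs";

// Parse the kernel's own accounting. MemAvailable is the "available" figure
// described above: free RAM plus reclaimable page cache.
function memInfo(): Record<string, number> {
  const out: Record<string, number> = {};
  for (const line of readFileSync("/proc/meminfo", "utf8").split("\n")) {
    const m = /^(\w+):\s+(\d+)\s+kB/.exec(line);
    if (m) out[m[1]] = Number(m[2]); // values are in kB
  }
  return out;
}

const { MemTotal, MemFree, MemAvailable } = memInfo();
console.log(`total ${MemTotal} kB, free ${MemFree} kB, available ${MemAvailable} kB`);
```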
And as you well know, RAM is much faster than even an SSD. Programs which are memory heavy COULD (< important) be holding things in memory rather than having them sit on the HDD, waiting to be slowly retrieved. I'd much rather a web browser take up 4 GB of RAM than sit around waiting for it to read the cached image off my hard drive.
Now, allow me to reiterate: unoptimized programs still piss me off. There's no need for that electron-based webcam image capture app to take three gigs of memory upon launch. But I love it when programs use the hardware I spent money on to run smoother.
Don't hate a program simply because it's at the top of task manager.
-
Not just another Windows rant:
*Disclaimer* : I'm a full time Linux user for dev work having switched from Windows a couple of years ago. Only open Windows for Photoshop (or games) or when I fuck up my Linux install (Arch user) because I get too adventurous (don't we all)
I have hated Windows 10 from day 1 for being a rebel. Automatic updates and generally so many bugs (specially the 100% disk usage on boot for idk how long) really sucked.
It's got ads now, and it's generally much slower than a Windows 8 install would probably be.
The pathetic memory management and the overall slower interface really tick me off. I'm trying to work and get access to web services, and all I get is hangups.
Chrome is my go-to browser for everything and the experience is subpar. We all know it gobbles up RAM, but even more so on Windows.
My Linux install on the same computer flies with a heavy project open in Android Studio, 25+ tabs in Chrome and a 1080p video playing in the background.
Up until the Creators Update, UI bugs were a common sight. Things would just stop working if you clicked them multiple times.
But you know what I'm tired of more?
The ignorant pricks who bash it for being Windows. This OS isn't bad. Sure it's not Linux or MacOS but it stands strong.
You are just bashing it because it's not developer friendly and it's not. It never advertises itself like that.
It's a full-fledged OS for everyone. It's not dev friendly out of the box, but you can make it as dev friendly as possible; you're just lazy.
People do use Windows to code. If you don't know that, you're ignorant. They also make a living by using Windows all day. How bout tha?
But it tries to make you feel comfortable with the recent bash integration and the plethora of tools that Microsoft builds.
IIS may not be Apache or Nginx but it gets the job done.
Azure uses Windows and it's one of best web services out there. It's freaking amazing with dead simple docs to get up and running with a web app in 10 minutes.
I saw many rants against VS but you know it's one of the best IDEs out there and it runs the best on Windows (for me, at least).
I'm pissed at you - you blind hater you.
Research and appreciate the good qualities in something instead of trying to be the cool but ignorant dev who codes with Linux/Mac but doesn't know shit about the advantages they offer.
-
My first actual rant on devRant:
Fuck corporate companies. Fuck agile development.
In the last 8 months I’ve been with this company, I’ve 1) made the app layout (which was super fucked) compatible with iPad, 2) reduced the app’s size by 1/3 of the original, 3) doubled memory efficiency and nearly eliminated all memory leaks, and 4) gotten employee of the quarter for some of the above.
After all of this I got a talking to from product manager that “he knows I am a good developer but needs more consistency” after I spent a sprint on one story trying to consolidate front end validation logic and make a “validatableTextField” actually do some validation. So much for the MVVM you promised me.
Also, I was promised I’d get some experience with Android, and with a team of 8 devs, 6 of which have droid backgrounds and the other two being juniors, guess who’s only even built the droid project once in 8 months? You guessed it. This company has drained me of all of my knowledge, gone against most of its promises to me, and values pushing features to the point of adding tech debt faster than I can solve it.
Unfortunately my personal life relies on this job or I’d quit right away. But you bet your ass I’m passively looking for something and I can’t wait till I get a job offer and quit on these ungrateful hypocrites.
-
That moment when you thought you'd fortified yourself with enough RAM for the future (32GB) and Blender fails to work with a large project because...it runs out of memory (just in the loading phase; building the intermediate data structures pushes it over the edge, I guess).
Fml.
It was kind of fascinating to watch the memory usage indicator creep up though. Morbid fascination.
-
Had nothing to do today, so I thought I´ll test the migration of SVN to Git in Gitlab.
Boss sent me a mail today saying that when I migrate we need to preserve the history, so I actually have to put some effort into it. *sigh*
Shout-out to the Gitlab documentation at this point.
That's probably the best doc I've ever read...
Well so I tried to use svn2git. And well...
Who the fuck thought that this piece of shit software is in any way usable?
Holy crap!
If it fails, it just does so without any info why. Even in verbose mode.
And the RAM usage? What the actual fuck?
This whole thing is a complete memory leak!
32 gigs of RAM full in minutes and the whole system starts to stall!
And then when I thought it finally runs through.
Bam another git checkout error...
Googling for that error, I found something: a version of svn2git made in .NET Core.
Didn't expect much, but I tried it anyway.
And would you look at that!
It ran so smoothly and needed so little RAM that I had some doubts it had worked correctly.
But it did!
I think I'm gonna buy a coffee or two for some guy over in China now!
-
After months of development, testing, testing and even more testing the app was ready for deployment to production. Happy days, the end was in sight!
I had a week's leave so I handed over the preparation for deployment to my Senior Developer and left it in his capable hands while I enjoyed the sun and many beers.
I came back on the day of deployment and proudly pressed the deploy button. Hurrah!
Not long after I got loads of phone calls from around the country as the app wasn't working. What madness is this?! We tested this for months!
Turns out my Senior didn't like the way I'd written the SQL queries, so he changed them. Which is obviously both annoying and unprofessional, but even worse, he got a join wrong, so the memory usage was a billion times higher and it drained the network bandwidth for the whole site when I tried to debug it.
I got all the grief for the app not working and for causing many other incidents by running queries that killed the network.
So...much... rage!!!
-
Today was a day at work that I felt like I made a significant contribution. It was not a lot of code. Actually it was a difference of 3 characters.
I am developing an industrial server so that my employer can provide access to their machines for enterprise industrial systems. You know, the big boys' toys. Probably in fucking java...
Anyway, I am putting this server on an embedded system. So naturally you want to see how much serving a server can serve. In this case the device is more processor starved than memory starved. So I bumped up the speed of the serving from 1000mS to 100mS per sample. This caused the processor usage to jump from 8% of one core (as read from top) to 70%. Okay, 10x the sampling, roughly 10x the CPU usage. That is good. I now know some basic metrics for a certain amount of data at a couple of different sampling rates.
Now, I realized this really was not that much activity for this processor. I mean, it didn't seem to me that it "took much" to see a large increase in processor usage. So I started wondering about another process on the system that was eating 60 to 70% all the time. I knew it updated a screen that showed some rarely needed data, in addition to controlling things. Most of the time it will sit in a cabinet hidden from the world. I started looking at this code and figured out where the display code was being called.
This is where it gets interesting. I didn't write this code. Another really good programmer I work with wrote it. It also seemed to be a pretty standard approach. It had a timer that fired an event every 50mS. That is 20 times per second, so 20 fps if you will. I thought: what would happen if I changed this to 250mS? So I did. It dropped the processor usage to 15%! WTF?! I showed another programmer: WTF?! I showed the guy who wrote it: WTF?! I asked: what does it do? He said all it does is update the display. He said: let's take it to 1000mS! I was hesitant, but okay. It dropped to 5%!
What is funny is several people all said: This is running kinda hot. It really shouldn't be this hot.
Don't assume; if you have a hunch, play with it if it's safe to do so. You might just shave off 55 to 60% CPU usage on your system.
So the code I ended up changing: "50" to "1000".
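In modern terms, the whole fix looks like this; a sketch, not the actual embedded code:

```ts
// Stand-in for the real draw call; this is the expensive part that re-renders
// the whole status screen.
function redrawDisplay(): void {
  /* ...render the status screen... */
}

// The entire fix was this one constant. 50 ms (20 fps) on a display that sits
// in a closed cabinet wasted ~60% of a core; 1000 ms dropped it to ~5%.
const REFRESH_MS = 1000; // was 50

setInterval(redrawDisplay, REFRESH_MS);
```
-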
Spent a month working on a website that relied on crawled data
Got the memory leaks and usage down from 700mb to ~150mb
CPU usage from ~100% to <5%
Shrink-wrapped the DB requirements based on data
Created self-supporting services and what not
When everything FINALLY worked well enough for me to look at it and go "damn, this actually worked",
the whole monitoring sys got dyed in red :v
A quick lookup, and my crawlers had exhausted my GoDaddy per-user DB limits.
Kill me.
Just fuckin kill me.
-
Fucking piece of shit windows 10!
Screw you to hell!
Disk usage : 100%
Memory usage: 60%
CPU usage: 2%
Sluggish as hell! And downloading updates without even asking! How fucking dare you!
Stuck with this shit just because of Photoshop and Illustrator.
-
I find it funny that on Windows, Android Studio reaches as much as 12GB of RAM usage, while on Linux two instances barely take up 3GB.
Either Java sucks on Windows or AS sucks at memory usage but happens to be saved by Linux.
-
Spent 6 hours implementing a feature because my senior didn't want to use a 3rd party plug-in.
After said 6 hours, went to look at the plugin's source code to get some inspiration with a problem I was having.
Guess fucking what? Plug-in was implemented exactly as I had done it to start with. Even better, actually, since it fixed some native bugs I couldn't find a way around.
Went back to my senior, showed him both sources and argued again in favour of the plug-in.
Senior: "Meh, I'm not sure. Don't really like to keep adding plugins"
Me: "Why? Do they cause performance hits? Increase memory usage?"
S: "No, not all. But I don't like plugins"
/flip
We ended up using the plug-in, but I "wasted" a whole day doing something we scrapped. Guess I learned some interesting things about encryption on Android, at least...
-
This is my first rant here, so I hope everyone has a good time reading it.
So, the company I am working for got me going on the task of rewriting a firmware that has been extended for about 20 years now. Which is fine, since all new machines will be on a new platform anyway. (The old firmware was initially written for an 8051. That thing has 256 bytes of RAM. Just imagine the usage of unions and bitfields...)
So, me and a few colleagues go ahead and start from scratch.
In the meantime however, the client has hired one single lonely developer. Keep in mind that nobody there understands code!
And oh boy did he go nuts on the old code, only for having it used on the very last machine of the old platform, ever! Everything after that one will have our firmware!
There are other machines in that series, using the original extended firmware. Nothing is compatible, bootloaders do not match, memory layouts do not match, the code is a horrible mess now, the client is writing the specification RIGHT NOW (mind, the machine is already sold to customers), there are no tests, and for the grand finale, the guy quit his job and went to a different company. Did I mention the bugs it has and the features it lacks?
Guess who's got to maintain that single abomination of a firmware now?
-
Tried to figure out why my computer was being slow and lagging earlier. Thought it may have been a bad update to the kernel I recently did, or an update to a package.
No, it was Chrome and its horrible memory usage.
-
I fucking hate Electron. Whatever happened to developing software natively? It's not like you have to stick to dot Net and C# or whatever; there's literally Lazarus or Delphi, and Lazarus at least is not only open source but also supports all major platforms.
Even Python has GTK, Qt and Pywin32 or whatever it's called. While not exactly cross platform, it's still not eating up 1GB of RAM when you launch it.
I don't care if Bob from across the street uses it because he's too lazy to learn anything new, but when huge companies like fucking Discord (valued at 10B dollars) use it, it's insane.
More than once, Discord has had a memory leak reaching upwards of 6.5GB of RAM usage.
Whats the most popular code editor? VSCode, Electron.
Chat client? Discord, Electron.
Wanna use something other than Discord? Maybe Matrix? Well guess what, while they do have multiple clients, the most developed and usable one is Element, yeah, Electron.
Slack? Electron
My crypto wallet? Exodus, Electron.
I genuinely don't think 16GB of RAM is enough nowadays. Thankfully I'm running a very minimal install of Arch Linux and do most of my work in a KVM, but it still hurts my brain.
By the way things are looking nowadays, we'll be using JavaScript for kernels soon.
Thanks for coming to my Ted Talk.
Also apparently the filter on this site sees ". net" as a URL.
-
Hello guys. Today I bring you my list of top 3 programs that use too much memory
🪟 Windows 10+
⚛️ Web browsers, Electron
🐋 Docker containers
Honorable mention: ☕Java
The developers of those programs should put more effort into optimizing memory usage.
-
My 27" 8-core imac, i7, 3.8ghz, AMD radeon pro 5500, 40 GB RAM 512 GB storage,
keeps screaming in agony.
But never stuttered.
Never lagged.
Never glitched
Never failed
Never ran out of memory
I could just hear how hard the ventilation was going. It was getting loud.
I touched its ass from behind. It was heated up and there was lots of dust from the holes
This had been going on for several days, but I ignored it, knowing what kind of a beast machine I have (big mistake).
IntelliJ popped up a notification to disable hints in order to improve CPU performance.
Immediately it struck me. Hold on, lemme check the Activity Monitor stats and find out why my iMac has been screaming for days.
Turns out IntelliJ is using over 1090% of my fucking CPU?????
THAT SHIT U SEE ON THE IMAGE WENT ABOVE 1100% OF CPU USAGE AND IT WAS ONLY 1 PROCESS CAUSING IT - INTELLIJ
WHAT???
-
Is your code green?
I've been thinking a lot about this for the past year. There was recently an article on this on slashdot.
I like optimising things to a reasonable degree and avoiding bloat. What are some signs of code that isn't green?
* Use of technology that says it's fast without real expert review and measurement. Lots of tech out there claims to be fast but actually isn't, or is fast only by saturating resources while being inefficient.
* It uses caching. Many might find that counterintuitive. In technology it is surprisingly common to see people scale or cache rather than directly fixing the thing that's watt-expensive, which is compounded when the cache has weak coverage.
* It uses scaling. Originally scaling was a last resort. The reason is simple: it introduces excessive complexity. Today it's common to see people scale things rather than make them efficient. You end up needing ten instances when a bit of skill could bring you down to one, which could scale as well but likely won't need to.
* It uses a non-trivial framework. Frameworks are rarely fast. Most will fall in the range of ten to a thousand times slower in terms of CPU usage. Memory bloat may also force the need for more instances. Frameworks written in already slow high-level languages may be especially bad.
* Lacks optimisations for obvious bottlenecks.
* It runs slowly.
* It lacks even basic resource usage measurement.
Unfortunately smells are not enough on their own, but they are a start. Real measurement and expert review is the only way to get an idea of whether your code is reasonably green.
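Basic measurement costs almost nothing. A minimal sketch using only Node/TypeScript built-ins (illustrative, not a proper benchmark harness; real profiling needs warmup and repetition):

```ts
// Wall time plus heap delta for a candidate hotspot.
function measure<T>(label: string, fn: () => T): T {
  const heapBefore = process.memoryUsage().heapUsed;
  const t0 = process.hrtime.bigint();
  const result = fn();
  const ms = Number(process.hrtime.bigint() - t0) / 1e6;
  const heapDelta = process.memoryUsage().heapUsed - heapBefore;
  console.log(`${label}: ${ms.toFixed(2)} ms, heap ${heapDelta} bytes`);
  return result;
}

measure("parse", () => JSON.parse('{"a":1}'));
```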
I find it not uncommon to see things requiring tens to hundreds to thousands of times the resources needed, if not more.
In terms of cycles, that can be the difference between needing a single core and a thousand cores.
This is common in the industry but it's not because people didn't write everything in assembly. It's usually leaning toward the extreme opposite.
Optimisations are often easy and don't require writing code in binary. In fact the resulting code is often simpler. Excess complexity and inefficient code tend to go hand in hand. Sometimes a code cleaning service is all you need to enhance your green.
I once rewrote a data parsing library that had to parse a hundred MB and was a performance hotspot, porting it from an interpreted language into C. I measured it and the results were good. It had been optimised as much as possible in the interpreted version, but the C version was still a minimum of 50 times faster.
I recently stumbled upon someone's attempt to do the same and I was able to optimise the interpreted version in five minutes to be twice as fast as the C++ version.
I see opportunities to optimise everywhere in software. A billion kg of CO2 could be saved easily if a few green code shops popped up. It's also often a net win: faster software, lower costs, lower management burden... I'm thinking of starting a consultancy.
The problem is, after witnessing the likes of Greta Thunberg: if that's what the next generation has in store, then as far as I'm concerned the world can fucking burn and her generation along with it.
-
Respect to the devs of this app; I noticed a changelog entry: "Improved memory usage".
More devs should take RAM consumption into account!
-
Nice to do some refactoring of the whole data access layer of our core logistics software. Let me tell a story.
The project is around 80k lines of code, with a lot of integrations with an ERP system and an SQL database.
The ERP system is old, with a shitty API to match; only static methods through a wrapper to a C++ library.
Imagine an order table.
To access an order, you would first need to open the database by calling Api.Open(...file paths) (yes, it's a fucking flat-file type database).
Now the database is open, now you would open the orders table with method Api.Table(int tableId) and in return you would get an integer value, the pointer.
Now for the actual order. First you need to search for it by setting the search parameter to the column ID of the order number, while checking all calls for some BS error code:
Api.SetInt(int pointer, int column, int queryValue)
Then call the find method.
Api.Find(int pointer)
And to top off this shitcake of an API: if it doesn't find your shit, it will use the "close enough" method of search.
And now to read a single string 😑
First you will look in the outdated and incorrect documentation given to you by the devil himself, looking for the column ID to find the length of the column.
Then you create a string variable with ALL FUCKING SPACES.
Now you call the Api.GetStr(int pointer, int column, ref string emptyString, int length)
Now you have passed your poor string to the api's demon orgy by reference.
Then some more BS error code checking.
Now you have read a string value 😀
Now keep in mind to repeat these steps for all 300+ columns in the order table.
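To make the pain concrete, here is that read path reconstructed as a sketch. The Api names come from the story above; the signatures and return conventions are my guesses, and the stub exists only so the snippet is self-contained:

```ts
// Stub of the wrapped C++ flat-file API (hypothetical signatures).
const Api = {
  Open: (..._paths: string[]): number => 0,
  Table: (_tableId: number): number => 42,                   // cursor "pointer"
  SetInt: (_p: number, _c: number, _v: number): number => 0, // 0 = OK (guess)
  Find: (_p: number): number => 0,                           // "close enough"!
  GetStr: (_p: number, _c: number, out: string[], len: number): number => {
    out[0] = "ORDER-123".padEnd(len); // pretend the flat file returned data
    return 0;
  },
};

// Reading ONE string column of ONE order, following the steps above.
// There are no by-ref strings here, so a one-element array plays that role.
Api.Open("/data/orders.dat", "/data/orders.idx");
const ptr = Api.Table(7); // hypothetical orders table id
if (Api.SetInt(ptr, 1, 123456) !== 0) throw new Error("SetInt failed");
if (Api.Find(ptr) !== 0) throw new Error("Find failed");
const out = [" ".repeat(64)]; // pre-sized, all-spaces "buffer"
if (Api.GetStr(ptr, 2, out, 64) !== 0) throw new Error("GetStr failed");
const orderNumber = out[0].trim();
// ...now repeat for the other 300+ columns.
```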
News from the creators: SQL Server? Yes, SQL is good, so everything will be better?
Now imagine the poor developers who got tasked with converting this shitcake to use an MS SQL server. And that they did.
Now I can honestly say that I found the best SQL server benchmark tool. This sucker creams out just above ~105K SQL statements per second at peak, and ~15K per second for the 1.5 seconds it takes to read an order. 1.5 seconds to read less than 4 fucking kilobytes!
Right at that moment I realized that our software would grind to a fucking halt before even thinking about starting. And that me, myself and I would be tasked with fixing it.
4 months later and two weeks until functional beta, here I am. We created our own API with the SQL server 😀
And the outcome of all this...
It fixes bugs older than a year, forces rewriting part of the code base, forces removal of dirty fixes, and allows proper unit and integration testing, even database testing with the snapshot feature.
The whole ERP system could be replaced with ~10 lines of code on the application side (provided the same relational structure) by adding it to our own API library.
Best part is probably the performance improvements 😀. Up to 4500 times faster and 60 times less memory usage, all with only managed memory.
-
Stopped a running Grunt (task runner for JavaScript) process that was watching files for changes.
Memory usage dropped by around 1.5-2 GB 😑
-
I'm trying to investigate why Chrome keeps crashing after I implemented WebSockets in a web app.
I used Windows perfmon to see the memory usage overnight.
The usage between 17:30 and 01:50 is expected behaviour as this part of the app is a live data graph of the last 48 hours.
Now I have to find out why the app doubles in memory twice in an hour.
-
Looked up at the clock... 2 AM... Thought about giving up and going to sleep, but something kept me there...
Rewrote my encoder and decoder for my steganography program, which are used to insert and retrieve data respectively from images. Compiled, ran, and output was as expected!
Tried to write actual data, instead of just headers, to the image, and it broke... Of course it wouldn't work first try, it's me writing the code after all.
But then, after debugging for a while and changing a couple lines, the encoder looked like it had done its work properly. Then I decoded it, and voila, data completely recovered! It almost felt too magical to be true, usually I have to modify a lot more to get it working.
So now I'm in bed, after literally decimating the memory usage of the program, amongst other optimizations, and I know that the code works perfectly 😎 Best part is I refactored each class down to 100 lines, so now it's clean and dense 😇
Just had to share, feeling so good right now 😄
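For anyone curious, the core LSB trick such an encoder/decoder builds on fits in a few lines. A generic sketch of the technique, not the actual program's code; it assumes the image is already decoded into a raw byte array such as an RGBA channel buffer:

```ts
// Hide each payload bit in the least significant bit of one image byte.
function encode(pixels: Uint8Array, payload: Uint8Array): void {
  if (payload.length * 8 > pixels.length) throw new Error("image too small");
  for (let i = 0; i < payload.length * 8; i++) {
    const bit = (payload[i >> 3] >> (7 - (i & 7))) & 1;
    pixels[i] = (pixels[i] & 0xfe) | bit; // overwrite only the LSB
  }
}

function decode(pixels: Uint8Array, byteLength: number): Uint8Array {
  const out = new Uint8Array(byteLength);
  for (let i = 0; i < byteLength * 8; i++) {
    out[i >> 3] |= (pixels[i] & 1) << (7 - (i & 7));
  }
  return out;
}

// Round trip:
const img = new Uint8Array(256).map(() => Math.floor(Math.random() * 256));
encode(img, new TextEncoder().encode("hi"));
console.log(new TextDecoder().decode(decode(img, 2))); // "hi"
```
-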
Every time I buy a new phone, I feel this sense of extreme regret :(
I bought a Moto G 5G phone last year in Feb, and it was so good. It didn't have any out-of-this-world cameras or funky stuff, but it gave decent performance and I couldn't want any other phone.
In October my mom's phone started giving issues, so I bought a Realme phone for her that was half my phone's price. I couldn't spend any more because otherwise she wouldn't take it. She accepted the cheaper phone and within 4 days she was cursing it. The phone had decent specs but would lag in certain apps like Zoom, and wouldn't run some call recorder apps. In the end I swapped my phone with mom's since I didn't care about Zoom or the recorder.
Now this shit Realme phone's memory is around 60% full of my stuff, and it's showing its limitations. It auto-relaunches Insta after a few minutes of usage, probably because its runtime memory runs short (a 4GB/128GB device gets memory shortages, nice). Its video quality is shit and the camera also rarely takes good pics.
The thing I hate most about smartphones today is how they over-optimise the UI. This Insta issue and auto call recorders not working are simply because of the Realme skin running over stock Android. I had similar issues with a Xiaomi device I bought for my dad some time ago. (Fortunately my dad is more medieval, so that crap has not come back to me :'/ )
So overall I am buying a 3rd phone in 17 months.
This time it's a Samsung F23 and I am worried that it's also going to suck. I was this 🤏 close to buying a Pixel 6 or even an iPhone coz I can afford them.
But the regret of buying such an expensive phone that will need replacement in 2 years made me rethink.
The only Android OS that has suited me best is stock, and as of now only 2 companies are making it: Google and Moto (it's 100% AOSP with 3 extra apps, but they can't say that, so they also state that they are not stock OS). OnePlus is also a brand that I have heard makes a good OS, but recently I also heard that they have completely scrapped their OS and are using Oppo's software. Plus, given the amount of tickets we get for notifications not working on OnePlus, I am sure their optimization is extremely aggressive.
So everything between a moderately priced phone (that will need a replacement in 2 years) and a flagship felt unnecessary to me, so I went ahead with a Samsung shit phone. The F23 has almost the same specs as the Moto, but it's again a heavily customised OS. I wanna waste my money trying a custom OS and declare it shitty.
Most of my friends that use Samsung are fans of it, but they are also not very techy, so I guess it suits them well. I am the guy who first installs Nova Launcher on his device. From a 3rd person p.o.v., I felt its screen and camera images to be nice whenever I used their phones, so let's see what this brings to the table :(
-
I always had this mentality that I shouldn't rely on a certain library or framework for my entire project, because what if one day they stop supporting it? (Yeah, I'm talking to u, Vuetify.) That's why I came up with this code structure where, for everything I wanna do, I have a 'driver' library, all coded by myself, that interacts with that third-party framework or library. So if they stop supporting it, I could just change a couple of lines of code in my driver file and my codebase would be working again. But I feel like this 'driver' approach is not the most efficient in terms of memory usage. Do you guys think I should keep it simple and directly use those libraries, or is this actually not a bad approach?
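For what it's worth, the 'driver' idea is just the adapter pattern, and its memory cost is one extra object and one extra function call per use, which is negligible next to the library itself. A sketch; vuetifyToast below is a stand-in, not a real Vuetify API:

```ts
// Stand-in for the third-party call (hypothetical).
function vuetifyToast(msg: string, ms: number): void {
  console.log(`[toast ${ms}ms] ${msg}`);
}

// Application code depends on this interface, never on the library directly.
interface ToastDriver {
  show(message: string, durationMs?: number): void;
}

// One thin adapter per backing library; swapping libraries later means
// rewriting only this object.
const toast: ToastDriver = {
  show(message, durationMs = 3000) {
    vuetifyToast(message, durationMs);
  },
};

toast.show("saved!");
```

So memory efficiency is not a reason to drop the approach; the real trade-off is the ongoing cost of keeping the adapters thin and up to date.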
-
I know streams are useful to enable faster per-chunk reading of large files (e.g. audio/video), and in Node they can be piped, which also balances memory usage (when done correctly). But suppose I have a large JSON file of 500MB (say, from a scraper) that I want to run some string content replacements on. Are streams fit for this kind of purpose? How do you go about altering the JSON file 'chunks' separately, when the Buffer.toString of a chunk would probably be invalid partial JSON? I guess I could rephrase as: what is the best way to read large, structured text files (JSON, HTML, etc.), manipulate their contents and write them back (without reading them into memory at once)?
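One possible answer, sketched with only Node built-ins: treat it as a text stream and carry a tail shorter than the search string across chunk boundaries, so memory stays proportional to the chunk size regardless of file size. This assumes plain literal replacement is enough; structure-aware JSON rewriting would need a streaming parser instead:

```ts
import { createReadStream, createWriteStream } from "fs";
import { Transform } from "stream";

function replacer(needle: string, repl: string): Transform {
  let carry = ""; // tail from the previous chunk, shorter than the needle
  return new Transform({
    transform(chunk, _enc, cb) {
      const text = carry + chunk.toString();
      let out = "";
      let last = 0;
      let i = text.indexOf(needle);
      while (i !== -1) {
        out += text.slice(last, i) + repl;
        last = i + needle.length;
        i = text.indexOf(needle, last);
      }
      const rest = text.slice(last);
      const keep = Math.min(rest.length, needle.length - 1);
      carry = rest.slice(rest.length - keep); // may start a match next chunk
      cb(null, out + rest.slice(0, rest.length - keep));
    },
    flush(cb) {
      cb(null, carry); // shorter than the needle, so no match is possible
    },
  });
}

// Setting the encoding keeps multi-byte characters intact across chunks.
createReadStream("big.json", { encoding: "utf8" })
  .pipe(replacer("http://", "https://"))
  .pipe(createWriteStream("big.out.json"));
```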
-
By always striving to do better each time. Making code less sloppy every time I write GL code. Better performance everytime I write an algorithm. Lower memory usage every time I write application state. Learning a new trick for an old problem, one at a time.
Learning best practice in one go is impossible, but taking it a bit at a time makes things more reasonable.
-
Did I get old or did I just finish plucking all the low-hanging fruit?
When I started on a programming journey about a decade ago, everything felt exciting and I learned a lot of things per day (variables, loops, methods, classes, etc.).
Now, a decade later, I am more concerned with the overall system design, algorithm usage (Big O notation), how reliable the system is, and how the configurations are set up and how easy it is to change them.
I now notice that I don't really learn anything new. Everything feels the same.
Want redundancy? Use more servers.
Want faster performance? Make a parallel system.
Want the program to run on a low-end device? Think about how memory and storage will be used in the system.
Is this a stage everyone goes through, like puberty? Or am I just having a midlife crisis?
PS: I haven't even reached 30 yet but I feel too old.
-
Need to find the Visual Studio process to kill it because the app is frozen. Open Task Manager > Processes > sort by memory usage. Right there at the top... always at the top... 2GB+ usage.
-
ATTENTION PLEASE! Important announcement following:
Please check your interface implementations for correct byte order according to the specification BEFORE YOU START COMPLAINING ABOUT DATA FAILURES ON EXCHANGING DATA.
Freakin' hell, if I got some money for every byte order mismatch when testing interfaces, I'd be a billionaire.
And why are all those high-level I-know-every-fucking-framework developers incapable of checking the real memory content of a datatype, and the real data content on the interface, even if you tell them that their byte order is obviously wrong?
No, your system is not the centre of the universe and I don't care how you get your less-than-32bit-datatypes-are-for-assembler-usage-frameworks to change byteorder. It's not rocket science, if there's no ready-to-use-function then write those 4 lines yourself.
Next time I get to specify an interface I'll go for mixed-endian, just to make sure everybody involved knows the concepts of endianness afterwards.
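And for the record, "those 4 lines" in a language with DataView look like this (a sketch):

```ts
// Read a 32-bit value with an explicit byte order instead of assuming the
// host's endianness.
const view = new DataView(new Uint8Array([0x12, 0x34, 0x56, 0x78]).buffer);
console.log(view.getUint32(0, false).toString(16)); // "12345678" (big-endian)
console.log(view.getUint32(0, true).toString(16));  // "78563412" (little-endian)

// Or swap manually:
function swap32(x: number): number {
  return (((x & 0xff) << 24) | ((x & 0xff00) << 8) |
          ((x >>> 8) & 0xff00) | (x >>> 24)) >>> 0;
}
console.log(swap32(0x12345678).toString(16)); // "78563412"
```
-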
How about incompetent management? The company absolutely murders any possible increase in productivity. Laptop provided? Slow as balls. Takes minutes to log in. I get a Mac for mobile development and that's OK; SSD and adequate memory, but I'm primarily a .NET dev. Can't get on the network with a virtual machine. They won't install even a managed image. So I can't use databases because they're all AD authenticated. Got a virtual desktop environment and that sucks worse in performance than the laptop. Add the assault on local administration rights and the monitoring software that constantly thrashes memory and hard drive usage, and I'm about to quit over all this... All this decided by a non-developer, without asking for our opinions. Yay, large enterprises.
-
I like rants that are thought-provoking and push a message forward regardless of whether they may sting a little, so for my first post on here I'd like to hit home with many of you.
Html5 "Native" Applications are not needed. Let's cover mobile first of all, the misconception that apps are written in either javascript or Native android/ Native ios environment. Or even some third party paid tools like xamarin is quite strange to me. OpenGL ES is on both IOS and Android there is no difference. It's quite easy to write once run everywhere but with native performance and not having to jump through js when it's not needed. Personally I never want to see html or css if I'm working on a mobile app or desktop. Which brings me to desktop, I can't begin to describe how unthought out an electron app is. Memory usage, storage space for embedding chromium, web views gained at the expense of literally everything else, cross platform desktop development has been around for decades, openGL is everywhere enough said. Finally what about targeting browser if your writing a native app for mobile and desktop let's say in c++ and it's not in javascript how can it turn back into javascript, well luckily c++ has emscripten which does that simply put, or you could be using a cross complier language like haxe which is what I use. It benefits with type safety, while exporting both c++ and javascript code. Conclusion in reality I see the appeal to the js ecosystem it's large filled with big companies trying to make js cross development stronger every day. However development in my mind should be a series of choices, choices that are invisible don't help anyone, regardless of the popularity of the choice, or the skill required.8 -
So I wasted the last 24 hours trying to satisfy my ego over a shitty interview and revisiting my old job's codebase, and realising that I still don't like that shit. I am just 25 and have no clue where I am heading. I am just restless; most of my decisions in 2023 have had very bad outcomes and I am just doing things to feel hopeful.
Context for the interview story -----
My previous job was at a B2B marketing company whose SDK was used by various startups to send notifications to their users, track analytics, etc. I understood most of it and don't find it to be any major engineering marvel, but that interviewer was very interested in asking me to design a system around it.
In my 1.2 years at the job, I found the codebase to be extremely and unnecessarily verbose (Java 7), with questionable fallbacks and resistance towards change from the managers. They were always like "we can't change it, otherwise a lot of our clients won't use our SDK". I still wrote a lot of test cases and tried to understand the working of major features.
BTW, before you guys go on and declare me an embarrassment of an engineer who doesn't know the product's codebase, let me tell you that we are talking SDKs (plural) and a service-based company here. There was just one SDK with interesting, heavy-lifting stuff and 9 more SDKs which were mostly wrappers and less advanced libraries. I got tasks in all of them, and 70% of my time went into maintaining those and debugging client-side bugs instead of exploring the "already-stable-don't-change" codebase.
So based on my vague understanding and my even vaguer memory from 1 year ago, I tried to explain an overall architecture to that interviewer guy. His face was screaming the word "pathetic" from his expressions, so I thought that today I would try to decode the codebase in 12-15 hours, publish a cool article, and be proud of how much I know about a so-called martech system design. Their codebase is open sourced, so it wasn't difficult to check it out once more.
But boy oh boy, I got so bored. Unnecessary classes, unnecessary callbacks, static calls, oof. I tried to refactor a few classes, but even after removing 70% of the codebase, I was still left with 100+ classes, most of them being 3000-4000 lines long. And this is your plain old Java library adding just 800KB to your project.
Boring, boring stuff. I would probably need 2-3 more days to get an understanding of the complete project, though by then I would again be questioning my life choices: was this a good use of my 36 hours?
What IS a correct usage of my time? I am currently super dissatisfied with my job, so I want to switch. I have been here for 6 months, so I probably wouldn't be going unless I get insane money or an irresistible company offer. For this I had devised a 2-part plan: either become good at the modern hot buzz stuff in my domain (the stuff currently popularized by dev influenzas) or become good at DSA/leetcode/CP. I suck badly at DS/algo stuff, nor am I much motivated, so I went with the hot buzz stuff.
But then this interview expected me to be a mature dev with system design knowledge... agh, fuck. The festive season is going on and I am unable to buy any cool shirts, since I am so limited with money from my mediocre salary and loans. And mom wants to buy a home too... yeah, kill me.
-
Just had a meeting about performance and monitoring. The main topic of the meeting was being aware of disk space usage. If there are issues with memory leaks or processor hogging, don't worry, those are fine; just give it more.
-
The past couple of weeks I've been struggling with my laptop. It regularly ran out of memory, and when that happens everything runs at a snail's pace. I always thought 8GB would be enough for developing software, but I was terribly wrong.
So I ordered another 8GB and installed it yesterday. Later at work I looked at the RAM usage and noticed that it was up to nearly 13GB!
I have no idea how I managed to get by with only 8 for so long. 🤔
FYI: I usually have 2 to 3 IDEs and a gazillion Chrome tabs open 😅
-
As a long time Ubuntu user, last month I upgraded from Xenial to Bionic to try the new Gnome based desktop.
At first I thought it was a good transition, everything was working fine, beautiful UI, nice animations, so I installed all my tools and started the real work... then the problems started. The memory usage was always very high and only getting higher, the animations were stuttering and laggy, and it was having an unrecoverable freeze at least twice a week. Searching the web I was seeing more and more people complaining about freezes, lags, bugs, memory leaks, password input field bugs... damn, how I missed Unity! That was it, Gnome Shell made me miss Unity more and more.
This week I installed Unity 7 and purged Gnome Shell from Bionic. Now I'm happy again!
It's so good to be free of the anxiety caused by the system's lack of stability, so good to know that the system will not break or freeze if I'm doing a resource-intensive task. Now the sh** is working fast and stable, and I'm here wondering why such a good DE could be dumped for something as buggy as Gnome.
-
Before I started working, I used to feel like I depended on documentation and the internet a little too much, owing to an ultra-crappy long-term memory. After spending some time at my internship going through code written by "professional developers" several years senior to me and trying to write unit tests for it (surprise: the code was in production without having undergone any sort of testing), I feel like the amount of time I spend online reading usage recommendations, alternatives for optimisation, best practices for writing clean and descriptive code and all that is a lot more rewarding. Some bad things help you feel good about yourself.
-
I bought a MacBook Pro Retina wanting to upgrade the memory, then realized the Retina models have it soldered onto the motherboard.
I need to run Visual Studio for ASP.NET development but can't fathom paying $80 for a Parallels license at the moment. I've tried VirtualBox, but the RAM usage is really high for the 4GB I'm limited to.
FML.
-
https://github.com/mozilla/pdf.js/...
Buffer *is* Uint8Array (there's literally "Buffer extends Uint8Array" in Node.js lib/internal/buffer.js), so why would there be a need to wrap it? But thanks to this bullshit error, I have to copy my buffer to a plain Uint8Array, quote, "which essentially means creating a copy of the data and thus increasing memory usage."
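A zero-copy workaround is to hand over a plain Uint8Array view of the same memory instead of an actual copy; whether it satisfies pdf.js's exact check is an assumption, but the view itself is standard:

```ts
import { readFileSync } from "fs";

// A Buffer IS a Uint8Array, so wrap the same bytes without copying them.
// Mind byteOffset: small Buffers live inside a shared pool, so the view
// must use the Buffer's own offset and length.
const buf = readFileSync("doc.pdf");
const view = new Uint8Array(buf.buffer, buf.byteOffset, buf.byteLength);

console.log(view.constructor === Uint8Array);    // true: no Buffer wrapper
console.log(view.byteLength === buf.byteLength); // true: zero bytes copied
```
-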
Linux.
Guys, I need some inspiration. How are you dealing with memory leaks, i.e. identifying which component of the system is leaking memory?
The regular method of dumping ps aux sorted by virtual memory usage is not working, as all the processes are using the same amount of memory all the time. This is a XEN dom0 memory leak, and I have no more ideas what to do.
Is it possible that guests could be eating the dom0 memory?
-
To anyone with good knowledge of RxJs:
Should I be careful how many subscriptions I have open at a time? I'm specifically thinking about memory usage.
While obviously it's more, does it make a huge impact on memory usage if I have 20 subscriptions active compared to 2?
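For what it's worth: a subscription is a small object plus whatever the handler closes over, so 20 versus 2 is negligible. What actually grows memory is subscriptions that never complete. A common sketch of tying them all to one teardown signal (RxJS 7 style imports assumed):

```ts
import { interval, Subject, takeUntil } from "rxjs";

// One signal that completes every piped stream on teardown.
const destroy$ = new Subject<void>();

interval(1000).pipe(takeUntil(destroy$)).subscribe((n) => console.log("a", n));
interval(2500).pipe(takeUntil(destroy$)).subscribe((n) => console.log("b", n));

// Later, e.g. in a component's teardown hook:
setTimeout(() => {
  destroy$.next();
  destroy$.complete(); // both subscriptions end; nothing left to leak
}, 6000);
```
-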
For web devs here: do we really still need to support the browsers of the evil one (yeah, I'm talking about MS browsers, Edge included)?
I mean, building a CSS UI library here in 2017 without the benefits of custom properties, grid and so many other cool things is so fucking frustrating.
A practical example: color theming with custom properties = Fuck Yeah / color theming without custom properties = so verbose and painful, it sucks.
The library is mostly for private usage at the moment, so... I'm about to drop IE and Edge into the deepest shithole of the darkest cavern of my memory, and move on to coding my lib with modern CSS, with almost no regret for the ghosts of the past who are still using this shitware today.
Should I? Or should I... maintain compatibility as we traditionally do?
What's your opinion about this, guys? Can we finally kick-ban these browsers from our lives?
-
I hate IntelliJ IDEA, but it's the best option available to develop in Scala. Improvements in VSCode/Metals are my last hope.
The (few) things I NEED from IntelliJ:
* Very good autocompletion
* Refactoring tools (renaming, auto imports)
* Search tools (find usages, sub/super-types)
The (many) things I hate about IntelliJ:
* Layout with panel sizes doesn't behave properly and it scales instead of remaining fixed.
* Tedious two-handed shortcuts make the right hand move away from the mouse a lot
* Delays and lag in the UI, freezes on garbage collection
* High memory consumption, high CPU usage and generally slow and cumbersome
* The delay in the UI between commands is such that it's possible to accidentally introduce typos
* Can't move tabs around and organize them as I like
* Ugly font rendering and missing typography settings
* Multi-caret implementation as a different editing mode is annoying because requires frequent switching
* Unnatural code folding regions; why are method arguments not folded with the method?
* Unhelpful support forum, sometimes dismissive answers
* Highlighting of current word under the caret doesn't work
* Very slow editor, can't keep spacebar pressed to move text or it hangs!
* Several settings reset at every update, like git auto-fetch
* New features are added and enabled by default which is very invasive
* Some of the features mentioned above are really annoying and it's not possible/not trivial to disable them
* It uses its own compiler, and several times it highlights false positives
-
I'm a full stack developer. I have been using Windows all my life, but I purchased a new laptop recently; it has only 4 gigs of RAM and I will upgrade it in the future, but that's gonna take a while, and meanwhile it's running Windows and it's a pain in the ass! Memory is almost always full, and disk (5400rpm HDD) usage is at 100% when I don't expect it to be. Chrome and VSCode hog my memory and the laptop lags like crazy; because of that, WebStorm and PyCharm are out of the question. I'd like to switch to a Linux distro, dual booting it since my Windows is a genuine copy. Which Linux distro would be the best for me?
-
WE: javaagent-based monitoring, as seen in this screenshot <attached>, is reporting a full old-gen, a full young-gen, one full survivor space, and sky-rocketing full GC activity right before the service outage.
WE: container monitoring in this screenshot <attached> shows that the application's memory very suddenly peaked at MAX values and plateaued there. Then container monitoring goes blank, suggesting a complete outage of a few minutes. After that, monitoring starts again with memory usage reported at low levels and immediately spiking back to MAX again, suggesting the container crashed and had been respawned by an orchestrator. This repeats a few times throughout the day.
they: I did not find any evidence of application running out of memory. Maybe our monitoring is not working correctly?
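we (a sketch of what we'd add to the JVM next so the evidence can't be waved away; file paths are placeholders):

```
# Unified GC logging (JDK 9+), rotated, so the full-GC death spiral is on record
-Xlog:gc*:file=/var/log/app/gc.log:time,uptime:filecount=5,filesize=20m

# Dump the heap the moment an OutOfMemoryError is thrown
-XX:+HeapDumpOnOutOfMemoryError
-XX:HeapDumpPath=/var/dumps

# Size the heap from the container's memory limit instead of the host's
-XX:MaxRAMPercentage=75.0
```

A heap dump taken at the moment of death tends to end the "maybe our monitoring is wrong" conversation rather quickly.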
we: *considering updating our resumes* -
Should I be bothered that I've added 200 lines of badly written code to my Android project (which had readable, organized code before), making it almost unreadable? BTW the app's memory usage is still as low as before.3
-
Bullshittery continues. This time around I'm absolutely innocent; clamav is the root cause. For once it's not an incompetent idiot but a piece of software. IDK if that makes me happy or upset.
So our email server, the one I configured and took care of, died. RIP. Damn, better put it back together ASAP. So I'm under pressure, still pissed at everything I ranted about before (actually my last 2 rants were throttled, and in total all of that happened in the past 60 minutes, but devRant rate limiting), and I start auditing logs. You can imagine we kinda need it NOW, and it's the second time this month clamav is pulling stunts, and the MTA (properly) refuses to work without the antivirus. So, pressurized, I look at the logs to see what the fuck went wrong.
clamav deamonize() failed - cannot allocate memory
Hmm. Interesting, but sounds like bullshit. I know the server is quite micro, because they wanted to save on costs as much as possible, but it has well over half a gig of free RAM just before the crash (like 800MB) with that message. Is it allocating almost a gig in one call or what? Looked carefully at trusty htop while clamav was starting, and indeed, it suddenly just dies with quite a bit of RAM free, almost as much as it already weighs. And I remember booting it up when I was configuring it, and it had a fair bit of headroom.
Google, help me, friend... Okay, great, so apparently at some point clamav loads the virus DB into RAM (dafuq?) and then forks, which causes a momentary spike of 2x the RAM usage, and then immediately frees it up.
Great, that sounds like a great design decision... At least I know the fix: I can just slap on a swap file, restart it and call it a day.
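For reference, the swap file part amounts to something like this (size and path are whatever fits your box):

```
fallocate -l 1G /swapfile
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
# add "/swapfile none swap sw 0 0" to /etc/fstab so it survives reboots
```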
It worked; the swap file is almost empty (15 megs used, 900 megs of free RAM, whatever).
That leaves me wondering: who figured loading the DB into RAM was a good idea? It pretty much means clamav will eat a little more RAM with each virus DB update, and that millisecond "double RAM" spike will confuse innocent people who just wanted to run clamav, and it worked for the last *long period of time* and now it crashes without warning, without any changes to the configuration.
Maybe there is a logical explanation; I want to know it.8 -
if anyone is familiar with immer js or immutable js:
if the producer copies the base state to nextState, and nextState is a const, doesn't that defeat the purpose?
I mean you're going for immutability, which is great for, say, an undo function, or for finding bugs, but what are you doing with all these immutable values now hanging around in memory?
I assume each new state returned is being pushed onto an array? (because you can't stuff it back into nextState, since nextState is immutable).
Won't this lead to memory usage increasing over a user session, growing the longer the session lasts?
I feel like I'm misunderstanding some core concept here.
edit: also what the hell is structural sharing?18
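Structural sharing is the answer to the memory worry; a minimal sketch, assuming immer's produce API (the object shape is made up). Each new state reuses every branch you didn't touch, so a snapshot only costs the changed path, and old states that nothing references anymore simply get garbage collected. const only stops you from rebinding the variable; it doesn't stop produce from handing you a brand-new object each time, which you bind to a new name.

```ts
import { produce } from "immer";

const base = {
  settings: { theme: "dark" },
  history: ["a", "b"], // untouched below
};

// produce() hands you a mutable draft and returns a new, frozen state.
const next = produce(base, (draft) => {
  draft.settings.theme = "light";
});

console.log(next === base);                   // false: new root object
console.log(next.settings === base.settings); // false: the changed branch is new
console.log(next.history === base.history);   // true: untouched branch is SHARED

// An undo stack of N snapshots therefore holds N changed paths,
// not N full copies of the whole state tree.
```
-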
TL;DR When talking about caching, is it even worth trying to be as memory-efficient as possible?
Context:
I recently chatted with a developer who wanted to improve a framework's memory usage. It's a framework for creating Discord bots, providing hooks for events such as message creation. He compared it to 2 other frameworks, where it ranked last with 240MB of memory usage for a bot with around 10.5k users iirc. The best framework memory-wise used around 120MB, all running with the same number of users.
So he set out to reduce the memory consumption of that framework. On his own he reduced the memory usage by quite a bit. Then he wanted to try out a TTL for the cache, or rather a cache with expiration times, adding no overhead besides checking at every interval whether there are records that should be deleted; see the sketch below. (Somebody in the chat called that sort of cache a meme. Would be happy if you could also explain why that is so😅).
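Something like this, I imagine; a minimal sketch, not the framework's actual code (names are made up):

```ts
// A cache where entries expire after a fixed TTL, swept on an interval.
class TtlCache<K, V> {
  private store = new Map<K, { value: V; expiresAt: number }>();
  private timer: ReturnType<typeof setInterval>;

  constructor(private ttlMs: number, sweepEveryMs = 60_000) {
    // the only standing overhead: one periodic pass over the entries
    this.timer = setInterval(() => this.sweep(), sweepEveryMs);
  }

  set(key: K, value: V): void {
    this.store.set(key, { value, expiresAt: Date.now() + this.ttlMs });
  }

  get(key: K): V | undefined {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (entry.expiresAt <= Date.now()) {
      this.store.delete(key); // also evict lazily on read
      return undefined;
    }
    return entry.value;
  }

  private sweep(): void {
    const now = Date.now();
    for (const [key, entry] of this.store) {
      if (entry.expiresAt <= now) this.store.delete(key);
    }
  }

  stop(): void {
    clearInterval(this.timer);
  }
}
```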
Afterwards, the memory usage dropped down to 100MB after around 3-5 minutes.
The maintainer of the package won't merge his changes, because some of them introduce stuff that might be troublesome later on, such as modifying the default arguments for processes, something along those lines. Haven't looked at those changes.
So I'm asking myself whether it's worth saving that much memory. Because at the end of the day, it's cache. Imo a cache can be as big as it wants to be, but it should stay within bounds and of course return memory if needed. Otherwise there should be no problem.
But maybe I just need other people's points of view to consider. The other dev's reasoning was simply "it shouldn't consume that much memory", which doesn't really help, so I'm seeking you guys out😁 -
I had been assigned a task to create a cross-platform desktop application that keeps track of the expiry of a certain product and notifies in real time.
So, my journey to create such an application starts today, and the list below describes the first few hours (a sketch of roughly where it's all heading follows the list).
1. Google/Date and time in javascript
2. Google/Javascript date object
3. W3school/Time in javascript
4. W3school/Javascript date getTime() method
5. Google/Are electron.js applications platform independent
6. Google/Dart for desktop applications
7. Google/Is dart cross-platform
8. Google/Best desktop application framework
9. Google/Python for desktop app development
10. Freecodecamp/How to build your first desktop application in python
11. Google/Pyqt
12. Google/Which is the best technology to build cross-platform desktop application
13. Google/Cross-platform desktop app development for windows mac and linux
14. Udemy / cross platform desktop app development for windows mac and linux
15. Youtube/ electron desktop app, demo
16. Youtube/ electron.js is obsolete
17. Youtube/Neutralinojs
18. Youtube/ neutralinojs tutorial
19. Google/Neutralinojs or electronjs
20. Google/Math.js
21. Google/Math.js/JS Bin
22. Google/Cannot find package “math.js”
23. StackOverFlow/How do I resolve “cannot find module” error using Node.js
24. Google/ is it better to install npm packages locally
25. Quora/ why should you stop installing NPM packages globally
26. Google/ what is nvm
27. Google/nvm version check
28. Stackoverflow/node version management on windows
29. Github/coreybutler/nvm-windows: a nvm for windows. Ironically written in Go
30. Google/how to uninstall a npm package
31. Npm docs/uninstalling packages and dependencies
32. Google/require in javascript
33. Youtube/how to install electronjs
34. Youtube/electronjs in 100s(fireship.io)
35. Roryok.com/electronjs memory usage compared to other cross-platform frameworks
36. Google/is electronjs memory hungry
37. Youtube/sql in one hour
38. Youtube/learn sql in 60 mins
39. Geeksforgeeks/connect mysql with node app
40. Stackoverflow/How to return to previous directory using cmd
41. Stackoverflow/how to require using const
42. Geeksforgeeks/difference between require and es6 import and export
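Where searches 1-4 and 33-36 seem to be heading, more or less: a hypothetical sketch (product data and names made up), assuming Electron's main-process Notification API:

```ts
import { app, Notification } from "electron";

interface Product {
  name: string;
  expiresAt: string; // ISO date string
}

const products: Product[] = [{ name: "Milk", expiresAt: "2022-01-15" }];

function checkExpiry(): void {
  const now = Date.now(); // same number Date.prototype.getTime() gives per date
  for (const p of products) {
    if (new Date(p.expiresAt).getTime() <= now) {
      new Notification({ title: "Expired", body: `${p.name} has expired` }).show();
    }
  }
}

app.whenReady().then(() => {
  checkExpiry();
  setInterval(checkExpiry, 60 * 60 * 1000); // re-check hourly; "real-time"-ish
});
```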
TO BE CONTINUED...1 -
Being too careful and always trying to reduce memory and processor usage might be a bad thing after all. Lengthening development time and inducing more stress on the developer just to reduce resource usage is not very sensible when dealing with small to medium-sized programs that don't deal with big data/file types.
What made me notice this habit in programmers was when I was smashing my head on the keyboard contemplating which method I should use to store the history of outputs for a fucking text-based program with minimal GUI elements...
Having OCD as a programmer is a nightmare. But thank god it's not as bad as it was a year ago. I couldn't even read something without repeating the same page over and over again, because my stupid brain decided that I was not reading it right. WHAT THE FUCK IS READING IT RIGHT? Thank god for my psychiatrist and pills. I can at least work on my projects without wanting to kill myself now! 😂1 -
Tips on how to retain something in memory for a long time? Especially if it's something difficult, unpleasant and rarely used, like differential calculus or DSA/leetcode questions?5
-
How someone can think that the best way to store a vector of physical values, knowing perfectly well which unit of measure the backend needs them in, is to couple it with a vector of unit strings, is beyond me.
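A sketch of the alternative that wouldn't be beyond me, using a branded type so the unit lives in the type once instead of in a parallel vector of strings (all names hypothetical):

```ts
// A number the type system knows is meters.
type Meters = number & { readonly __unit: "m" };
const meters = (n: number): Meters => n as Meters;

// The backend contract says meters, so the signature says meters.
function sendToBackend(values: Meters[]): void {
  // serialize knowing exactly what every number means
}

sendToBackend([meters(1.5), meters(3.2)]); // ok
// sendToBackend([1.5, 3.2]);              // compile error: plain numbers
```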
-
Recently joined a new Android app (product) based project & got the source code of the existing prod app version.
Product source code must be easy to understand so that it can be supported long-term. In contrast, the existing source structure is very difficult to understand.
The package structure is flat: only 3 packages, ui, service, utils. No module-based grouping of classes.
No memory release is done, so on each screen launch new memory leaks keep piling up.
Too much duplication of code. Some lazy developer in the past hadn't even made wrappers to avoid direct usage of core classes like SharedPreferences etc., so the same 4-5 lines were written in every place.
Too many if-else ladders (4-5 blocks) & unnecessary repetition of the outer if condition inside the inner if condition. It looks like the owner of this nested-if implementation has trust issues, like that person thought the computer 'forgets' about the outer if when inside the inner if.
Too much misuse of broadcast receivers to track activities' state, in the era of the activity and app lifecycle related Android libraries.
Sometimes I think: why do people waste soooo... much effort in the wrong direction, & why can't they just use a library?!!
These things were found without even deep diving into the code; I don't know how many horrific things may come out of the closet.
This same app is being used by many companies in many different fields like banking, finance, insurance, govt. agencies etc.
Sometimes I wonder how this source passed review & reached production. -
Portrait of Me, Writing Documentation -- a short french film:
The processes applied to any section of memory utilized for a given purpose should be strictly limited to those declared by the associated type that encapsulates the purpose in question until release or mutation.
That is to say, improperly encoding the intended usage of such a block by utilizing an identical type or alias thereof for a multitude of incompatible situations, thus giving rise to guesswork, constitutes the prostitution of an abstraction.
Such heinous acts of symbolical pimping have received strong condemnation from multiple digital rights organizations, as well as our own, prestigious office. Let it be made Crystal, Alizé and Hennessy clear, that we will not stand for this kind of degenerate practice, and that any heretical sects and cabals built around worship of the strange creatures that arise every eleventh night from the depths of the Black Mausoleum will be prosecuted with the full force of the law.
As a young, courageous man once said at the peak of his career: "it is only through the self-inflicted, hyperbolic discharge of smouldered, comminute perennial anadenanthera colubrina spermatic fluid that the canonical transfiguration of our collective rectosigmoid junction can be brought to fruition". He was immediately violated with might and ire far beyond our wildest, most profligately depraved fantasies, yet his message lives on.
I leave you now to be ritually and figuratively blown by a possessed mortician that is to become concubine to our dark master; the long journey to the old graveyard will be perilous, and my destination most assuredly fatal, as I depart to give my firstborn to our Lord Berzchjanzad -- a blood sacrifice meant to appease him from peeling off my skin and refashioning it into a bloodied scarf to be worn around his thumping, grandemonic cock.
And in this moment, as I stare blankly at this teleprompter, the president wishes to reassure you of his sacred vows of stalwart and promethean gayhood, and may __these__ nuts bounce on chins forevermore. Here's to *not* bleeding to death in retribution for this unending litany of sins...
Yet all predictions come to pass.
««««««««««« finẽ »»»»»»»»»»»