Search - "c is faster"
-
classmate: Hey, "friend" told me you do freelance website development. right? I need to create a new website and need your help.
Me: umm... OK... what's it about?
Classmate: It's for my dad's friend's business.
Me: OK. but I will charge the standard rate.
classmate: No... I will make it myself. I just want your help.
Me(Internally): ...not again...
Me: Do it yourself then.
Classmate: It will be quick. an hour or two max.
Me: *speechless*
Classmate: And one of my uncles who did IT told me that C++ is faster. Can we use that instead of HTML?
Me: huh...?
classmate: you don't know shit.
... classmate walks away...
This guy somehow manages to get As in exams (mostly by cheating; our papers are shitty theory papers you can mug up, so that helps) and in a year will have an IT degree.
-
So, you start with a PHP website.
Nah, no hating on PHP here, this is not about language design or performance or strict type systems...
This is about architecture.
No backend web framework, just "plain PHP".
Well, I can deal with that. As long as there is some consistency, I wouldn't even mind maintaining a PHP4 site with Y2K-era HTML4 and zero Javascript.
That sounds like fucking paradise to me right now. 😍
But no, of course it was updated to PHP7, using Laravel, and a main.js file was created. GREAT.... right? Yes. Sure. Totally cool. Gotta stay with the times. But there's still remnants of that ancient framework-less website underneath. So we enter an era of Laravel + Blade templates, with a little sprinkle of raw imported PHP files here and there.
Fine. Ancient PHP + Laravel + Blade + main.js + bootstrap.css. Whatever. I can still handle this. 🤨
But then the Frontend hipsters swoosh back their shawls, sip from their caramel lattes, and start whining: "We want React! We want SPA! No more BootstrapCSS, we're going to launch our own suite of SASS styles! IT'S BETTER".
OK, so we create REST endpoints, and the little monkeys who spend their time animating spinners to cover up all the XHR fuckups are satisfied. But they only care about the top most visited pages, so we ALSO need to keep our Blade templated HTML. We now have about 200 SPA/REST routes, and about 350 classic PHP/Blade pages.
So we enter the Era of Ancient PHP + Laravel + Blade + main.js + bootstrap.css + hipster.sass + REST + React + SPA 😑
Now the Backend grizzlies wake from their hibernation, growling: We have nearly 25 million lines of PHP! Monoliths are evil! Did you know Netflix uses microservices? If we break everything into tiny chunks of code, all our problems will be solved! Let's use DDD! Let's use messaging pipelines! Let's use caching! Let's use big data! Let's use search indexes!... Good right? Sure. Whatever.
OK, so we enter the Era of Ancient PHP + Laravel + Blade + main.js + bootstrap.css + hipster.sass + REST + React + SPA + Redis + RabbitMQ + Cassandra + Elastic 😫
Our monolith starts pooping out little microservices. Some polished pieces turn into pretty little gems... but the obese monolith keeps swelling as well, while simultaneously pooping out more and more little ugly turds at an ever faster rate.
Management rushes in: "Forget about frontend and microservices! We need a desktop app! We need mobile apps! I read in a magazine that the era of the web is over!"
OK, so we enter the Era of Ancient PHP + Laravel + Blade + main.js + bootstrap.css + hipster.sass + REST + GraphQL + React + SPA + Redis + RabbitMQ + Google pub/sub + Neo4J + Cassandra + Elastic + UWP + Android + iOS 😠
"Do you have a monolith or microservices" -- "Yes"
"Which database do you use" -- "Yes"
"Which API standard do you follow" -- "Yes"
"Do you use a CI/building service?" -- "Yes, 3"
"Which Laravel version do you use?" -- "Nine" -- "What, Laravel 9, that isn't even out yet?" -- "No, nine different versions, depends on the services"
"Besides PHP, do you use any Python, Ruby, NodeJS, C#, Golang, or Java?" -- "Not OR, AND. So that's a yes. And bash. Oh and Perl. Oh... and a bit of LUA I think?"
2% of pages are still served by raw, framework-less PHP.
-
Why did the chicken cross the road?
Assembler Chicken: First, it builds the road ......
C Chicken: It crosses the road without looking both ways.
C++ Chicken: The chicken wouldn't have to cross the road, you'd simply refer to him on the other side.
COBOL Chicken: 0001-CHICKEN-CROSSING.
IF NO-MORE-VEHICLES
THEN PERFORM 0010-CROSS-THE-ROAD
VARYING STEPS FROM 1 BY 1 UNTIL
ON-THE-OTHER-SIDE
ELSE
GO TO 0001-CHICKEN-CROSSING
Cray Chicken: Crosses faster than any other chicken, but if you don't dip it in liquid nitrogen first, it arrives on the other side frazzled.
Delphi Chicken: The chicken is dragged across the road and dropped on the other side.
Gopher Chicken: Tried to run but got beaten by the Web chicken.
Intel Pentium Chicken: The chicken crossed 4.9999978 times.
Iomega Chicken: The chicken should have 'backed up' before crossing.
Java Chicken: If your road needs to be crossed by a chicken, then the server will download one to the other side. (Of course, those are chicklets.) See also WMI Monitor.
Linux Chicken: Don't you *dare* try to cross the road the same way we do!
Mac Chicken: No reasonable chicken owner would want a chicken to cross the road, so there's no way to tell it how to cross the road.
Newton Chicken: Can't cluck, can't fly, and can't lay eggs, but you can carry it across the road in your pocket.
OOP Chicken: It doesn't need to cross the road, it just sends a message.
OS/2 Chicken: It crossed the road in style years ago, but it was so quiet that nobody noticed.
Microsoft's Chicken: It's already on both sides of the road. What's more, it's just bought the road.
Windows 95 Chicken: You see different coloured feathers while it crosses, but when you cook it, it still tastes like........ chicken.
Quantum Logic Chicken: The chicken is distributed probabilistically on all sides of the road until you observe it on the side of your choice.
VB Chicken: USHighways! <TheRoad.cross> (aChicken)
XP Chicken: Jumps out onto the road, turns right, and just keeps on running.
The Longhorn Chicken had an identity crisis and is now calling itself Vista.
The Vista Chicken dazzled itself with its own graphics.
-
Customer: C
Me: M
*Few weeks ago*
C: the server is slow, it sometimes takes 7 seconds before I see our data
(the project is 7+ years old and wasn't written by someone who is very good in SQL)
M: yeah I see that, our servers are busy with this one "process" (SQL query)
C: make it faster
M: well that's possible but it will take a few days (massive SQL spaghetti that I first have to untangle)
C: 😡 nvm then
*Yesterday*
C: server is down!
M: 🤔 *loads data from server and waits ~ 7 seconds*
M: Well what's the problem?
C: I need the data but it's so slow
WELL YOU MINDLESS IMBECILE... If something is slow it doesn't mean our god damn production server is down!
That just means that you have to give us a day or two so we can optimise the (ALSO BY YOUR REQUEST) rushed project... And save you YOUR money that YOU waste on the processing time on our server...
-
I know a guy who writes everything in Haskell.
He started learning it because his parents got him into a math school (and math schools in Russia use either Python or Haskell), he liked it, but later he dropped out. Today, apart from Haskell, he only really knows HTML and CSS, and maybe some JavaScript.
He writes backend AND frontend in Haskell and uses some kind of JRPC stuff to manage all that. He told me that his life is a pure heaven. He IS RELEVANT (!!!!!!), his apps always run without bugs (because in Haskell you can mathematically prove that there are no bugs), they are performant, faster than C (because you can't write a complex enough app in C that will be as efficient as compiled Haskell, because it's you vs compiler). He doesn't have any problems in life whatsoever. He never got burned out, he never got anxiety or depression. He doesn't act pretentiously and stuff, he's just a normal person who rarely even mentions that he can program.
Science says it can't be done! You can't only know Haskell and be a relevant software engineer! You know what, he didn't _know_ it was impossible. He's like that grandpa from a meme who got Alzheimer's, but because of it he forgot that he had Alzheimer's, and now remembers everything.
The fun thing is that he looks like a typical gopnik, with adidas suits and stuff.
What a gem of a person.
-
Worst fight I've had with a co-worker?
Had my share of 'disagreements', but one that seemed like it could have gone to blows was with a developer, 'T', who tried to mansplain to me how ADO.NET worked with SQL Server.
<T walks into our work area>
T: "Your solution is going to cause a lot of problems in SQLServer"
Me: "No, its not, your solution is worse. For performance, its better to use ADO.Net connection pooling."
T: "NO! Every single transaction is atomic! SQLServer will prioritize the operation thread, making the whole transaction faster than what you're trying to do."
<T goes on and on about threads, made up nonsense about priority queues, on and on>
Me: "No it won't, unless you change something in the connection string, ADO.Net will utilize connection pooling and use the same SPID, even if you explicitly call Close() on the connection. You are just wasting code thinking that works."
T walks over, stands over me (he's about 6'5", 300+ pounds), maybe 6 inches away
T: "I've been doing .net development for over 10 years. I know what I'm doing!"
I turn my chair to face him, look up, cross my arms.
Me: "I know I'm kinda new to this, but let me show you something ..."
<I threw together a C# console app, simple connect, get some data, close the connection>
Me: "I'll fire up SQLProfiler and we can see the actual connection SPID and when SQL Server closes the SPID... see... the connection to SQL Server still has an active SPID after I called Close. When I exit the application, SQL Server will drop the SPID... tada... see?"
T: "Wha...what is that...SQLProfiler? Is that some kind of hacking tool? DBAs should know about that!"
Me: "It's part of the SQLServer client tools, its on everyone's machine, including yours."
T: "Doesn't prove a damn thing! I'm going to do my own experiment and prove my solution works."
Me: "Look forward to seeing what you come up with ... and you haven't been doing .net for 10 years. I was part of the team that reviewed your resume when you were hired. You're going to have to try that on someone else."
About 10 seconds later I hear him from across the room slam his keyboard on his desk.
100% sure he would have kicked my ass, but that day I let him know his bully tactics worked on some, but wouldn't work on me.
-
A memorial for my favorite rant of all time "Why did the chicken cross the road?"
+++++++++++++++++++++++++++++++++++++
Why did the chicken cross the road?
Assembler Chicken: First, it builds the road ......
C Chicken: It crosses the road without looking both ways.
C++ Chicken: The chicken wouldn't have to cross the road, you'd simply refer to him on the other side.
COBOL Chicken: 0001-CHICKEN-CROSSING.
IF NO-MORE-VEHICLES
THEN PERFORM 0010-CROSS-THE-ROAD
VARYING STEPS FROM 1 BY 1 UNTIL
ON-THE-OTHER-SIDE
ELSE
GO TO 0001-CHICKEN-CROSSING
Cray Chicken: Crosses faster than any other chicken, but if you don't dip it in liquid nitrogen first, it arrives on the other side frazzled.
Delphi Chicken: The chicken is dragged across the road and dropped on the other side.
Gopher Chicken: Tried to run but got beaten by the Web chicken.
Intel Pentium Chicken: The chicken crossed 4.9999978 times.
Iomega Chicken: The chicken should have 'backed up' before crossing.
Java Chicken: If your road needs to be crossed by a chicken, then the server will download one to the other side. (Of course, those are chicklets.) See also WMI Monitor.
Linux Chicken: Don't you *dare* try to cross the road the same way we do!
Mac Chicken: No reasonable chicken owner would want a chicken to cross the road, so there's no way to tell it how to cross the road.
Newton Chicken: Can't cluck, can't fly, and can't lay eggs, but you can carry it across the road in your pocket.
OOP Chicken: It doesn't need to cross the road, it just sends a message.
OS/2 Chicken: It crossed the road in style years ago, but it was so quiet that nobody noticed.
Microsoft's Chicken: It's already on both sides of the road. What's more, it's just bought the road.
Windows 95 Chicken: You see different coloured feathers while it crosses, but when you cook it, it still tastes like........ chicken.
Quantum Logic Chicken: The chicken is distributed probabilistically on all sides of the road until you observe it on the side of your choice.
VB Chicken: USHighways! <TheRoad.cross> (aChicken)
XP Chicken: Jumps out onto the road, turns right, and just keeps on running.
The Longhorn Chicken had an identity crisis and is now calling itself Vista.
The Vista Chicken dazzled itself with its own graphics.
-
The education system is a fucking joke. How do you get through all the required courses, reach the capstone course where your one goal is to build a simple prototype of a project for a real-world client (like a simple website), and not know HTML or CSS, when you spent a whole fuckboy semester on a class dedicated to HTML, CSS and JavaScript, and the teacher gave you the PHP? Not only that, but you can't even figure out how to use a simple Google search to look up the documentation on any of these topics, or the easy-to-follow tutorials littering the internet on how to use Bootstrap, which is what we're fucking using to make it faster to develop the core logic of our app. All you fucking want to do is take shortcuts, create a PowerPoint presentation in Google Slides, and make an easy project look like shit, and make me and yourselves look like shit. But don't fucking worry, I'll code the whole thing in a fucking night, because you didn't do your part of taking care of just the front end and I planned for your incompetence and lack of questions or help. I know you're busy looking for a job for after you graduate, but you can't even answer a simple programming question. Let me give you the solution on how to reverse a string, cuz you don't remember C#, but it literally takes 30 seconds to google the solution that is everywhere. My project team is why no one takes a degree from this university seriously.
-
This rant is particularly directed at web designers, front-end developers. If you match that, please do take a few minutes to read it, and read it once again.
Web 2.0. It's something that I hate. Particularly because the directive amongst webdesigners seems to be "client has plenty of resources anyway, and if they don't, they'll buy more anyway". I'd like to debunk that with an analogy that I've been thinking about for a while.
I've got one server in my home, with 8GB of RAM, 4 cores and ~4TB of storage. On it I'm running Proxmox, which is currently using about 4GB of RAM for about a dozen VM's and LXC containers. The VM's take the most RAM by far, while the LXC's are just glorified chroots (which nonetheless I find very intriguing due to their ability to run unprivileged). Average LXC takes just 60MB RAM, the amount for an init, the shell and the service(s) running in this LXC. Just like a chroot, but better.
On that host I expect to be able to run about 20-30 guests at this rate, on 4 cores and 8GB RAM. More extensive migration to LXC will improve this number over time. However, I'd like to go further. I was once able to build a Linux that was just a kernel and BusyBox, backed by the musl C library. The thing consumed only 13MB of RAM as a VM, with that consumption dedicated almost entirely to the kernel. I could probably have optimized it further with modularization, but at the time I didn't, due to its experimental nature. In a chroot, the kernel of the host is used, meaning that the same setup in a chroot would be down to mere kilobytes of RAM. The BusyBox shell would be its most important RAM consumer, which is negligible.
I don't want to settle with 20-30 VM's. I want to settle with hundreds or even thousands of LXC's on 8GB of RAM, as I've seen first-hand with my own builds that it's possible. That's something that's very important in webdesign. Browsers aren't all that different. More often than not, your website will share its resources with about 50-100 other tabs, because users forget to close their old tabs, are power users, looking things up on Stack Overflow, or whatever. Therefore that 8GB of RAM now reduces itself to about 80MB only. And then you've got modern web browsers which allocate their own process for each tab (at a certain amount, it seems to be limited at about 20-30 processes, but still).. and all of its memory required to render yours is duplicated into your designated 80MB. Let's say that 10MB is available for the website at most. This is a very liberal amount for a webserver to deal with per request, so let's stick with that, although in reality it'd probably be less.
10MB, the available RAM for the website you're trying to show. Of course, the total RAM of the user is comparatively huge, but your own chunk is much smaller than that. Optimization is key. Does your website really need that amount? In third-world countries where the internet bandwidth is still in the order of kB/s, 10MB is *very* liberal. Back in 2014 when I got into technology and webdesign, there was this rule of thumb that 7 seconds is usually when visitors click away. That'd translate into.. let's say, 10kB/s for third-world countries? 7 seconds makes that 70kB of available network bandwidth.
Web 2.0, taking 30+ seconds to load a web page, even on a broadband connection? Totally ridiculous. Make your website as fast as it can be, after all you're playing along with 50-100 other tabs. The faster, the better. The more lightweight, the better. If at all possible, please pursue this goal and make the Web a better place. Efficiency matters.
-
Long rant...
*Designer posted an image of the newly designed layout for our app on Trello*
Dev 1 (me, being the junior, on iOS): so... What's the size for x, Y, z, a, B, C?
She: it's 9 for the small text, 10 for sub title, 12 for main title.
*shows her the design on app
Dev 1: seems too small
She: just make it to look not small.
Dafug?
*finishes the app layout for that screen.
*working on next screen
Dev 1: your new design is for the screen of 1920x1080. But our supported screen size starts from 320 width. So there'll be text overlapping each other and ui might screw up.
She: uh.. Just... Put those that will overlap to the next line.
*shrugs
Dev 1: ok
=======
2 days later
Dev 2 (senior, working on Android)
Dev 2: so... What's the colour for x, Y, z
*Dev 1 laughs on the inside because of the struggles we have with her.
Dev 1 to Dev 2: is it common for her not to follow the design guidelines?
Dev 2: yeah man.. We just have to adapt her design into our app guidelines.
*sigh
Dev 2: there's a new icon here on this screen, so you wanna change the icon? Can I have the icon file?
She: oh.. No.. Use back the old one, because I just copy and paste.
Dev 1: so... This progress bar of yours, doesn't show its background colour, because you filled it already. So what's the background colour if the bar isn't filled?
She : hmm.... Oh.. Well.. Maybe try x.. ? *doesn't look nice* how about Y? *doesn't look nice* how about...
Me : why not you try in your computer first instead of me changing it here by code, it's much faster this way.
*seriously, wth?
Dev 1 and 2: there's additional text in your new design, what is it for?
She : oh.. No no. I copied extra due to copy and paste. Just ignore it.
Dev 1 and 2: what's the spacing gap between x and Y? And how about the size of the box?
She : oh.. I just estimate it, and for the box, not sure either, you can follow old design, because I'm just putting a box there for illustration purpose.
Mother fickle, what fuck man.
Dev 1 and 2: *flips table.
*we didn't, but.. It's freaking annoying.
-
My school just tried to hinder my revision for finals. They've denied me access, as of today, to SSHing into my home computer. Vim & a filesystem are so much better than pen and paper.
So I went up to the sysadmin about this. His response: "We're not allowing it any more". That's it - no reason. Now let's just hope that the sysadmin was dumb enough to only block port 22, not my IP address, so I can just pick another port to expose at home. To be honest, I was surprised that he even knew what SSH was. I mean, sure, they're hired as sysadmins, so they should probably know that stuff, but the sysadmins in my school are fucking brain dead.
For one, they used to block Google, and every other HTTPS site on their WiFi network because of an invalid certificate. Now it's even more difficult to access google as you need to know the proxy settings.
They switched over to forcing me to remote desktop to access my files at home, instead of the old, faster, better shared web folder (Windows server 2012 please help).
But the worst of it includes apparently having no password on their SQL server, STORING FUCKING PASSWORDS IN PLAIN TEXT allowing someone to hijack my session, and just leaving a file unprotected with a shit load of people's names, parents, and home addresses. That's some super sketchy illegal shit.
So if you sysadmins happen to be reading this on devRant, INSTEAD OF WASTING YOUR FUCKING TIME BLOCKING MORE WEBSITES THAN THERE ARE LIVING HUMANS, HOW ABOUT UPPING YOUR SECURITY. PASSWORDS LIKE "", "", and "gryph0n" ARE SHIT - MAKE IT BETTER SO WE STUDENTS CAN ACTUALLY BROWSE MORE FREELY - I THINK I WANT TO PASS, NOT HAVE EVERY OTHER THING BLOCKED.
Thankfully I'm leaving this school in 3 weeks after my last exam. Sure, I could stay on with this "highly reputable" school, but I don't want to be fucking lied to about computer studies, I don't want to have to workaround your shitty methods of blocking. As far as I can tell, half of the reputation is from cheating. The students and sysadmins shouldn't have to have an arms race between circumventing restrictions and blocking those circumventions. Just make your shit work for once.
**On second thought, actually keep it like that. Most of the people I see in the school are c***s anyway - they deserve to have half of everything they try to do censored. I won't be around to care soon.**
-
Keybinds you need (Windows):
Copy: Ctrl + c
Cut: Ctrl + x
Paste: Ctrl + v
Jump from word to word: Ctrl + Left arrow or Right arrow
Mark text: Shift + Right arrow or Left arrow
Mark text (jump from word to word): Ctrl + Shift + Left arrow or right arrow
Quickly open task manager: Ctrl + Shift + Esc
Windows button alternative(e.g. for gaming sessions when you've disabled the windows button): Ctrl + Esc
*legend* Multitasking legend for switching quickly between programs (keep the Alt key pressed and select the program you want by pressing Tab): Alt + Tab
Multitasking legend with a nice animation (not there for quick workflow but to manage programs, files, multidesktop): Windows + Tab
For people who have multiple desktops - If you don't have, go add two more:
Switch to next desktop: Ctrl + Windows + Right arrow
Switch to previous desktop: Ctrl + Windows + Left arrow
Navigate in taskbar: Windows + t
Quickly lock computer: Windows + L
Quick Link (power user) menu (personal tip: navigate with arrow keys for faster workflow): Windows + X
Quickly toggle desktop: Windows + D
Screenshot of current program: Ctrl + Alt + Print
Screenshot of the whole screen and your external ones (will be saved in C:/Users/user/Pictures/Screenshots): Windows + Print
Open the Run dialog (can be used to launch .exe files, e.g. to execute cmd or regedit quickly): Windows + R
Close browser tab: Ctrl + w
Open browser tab: Ctrl + t
Search: Ctrl + f
// just single keys that are useful
Reload page: f5
Url bar: f6
reopen closed tabs (not sure about compatibility but is definitely working in chrome and firefox): Ctrl + Shift + t
Fullscreen mode (not a keybind too): F11
Alt + F4 to win the game
The boss of all key(bind)s (also not a keybind): Tab
If you got more tho, write them down in the comments section. I really tried my best :'D
-
As a consultant, you get tasked with a variety of stuff. The last few weeks I've been struggling to maintain an old C++ application that was written by a complete tool of an a$$hole with zero knowledge of how to write maintainable, production-quality code. It would hardly run without crashing. At first it was a challenge I had to accept, but as I stabilized the code and just fell over even more traps, I had to admit defeat and review my approach.
Rewrite is something I would choose last, but this one ticked all the marks worthy of a rewrite. So, the customer is a very friendly researcher and gladly spent 15 hours with me explaining all the math and concepts - just a delight for a programmer to have such a customer. Two days in, with a DDD approach - a functional, more precise, faster and stable application.
Sometimes there is no rant to share; it's rare to have that perfect communication with a customer who is so dedicated that he spends so much time teaching you his speciality and actually understands your approach. DDD was really a lifesaver here, with its key concepts and ubiquitous language. The program is essentially 8000 lines of math, but wrapping it up with value objects and strong domain models made me understand his domain and him mine. It also allowed me to parallelize the computations, giving me a huge performance boost. Textbook approach, there will not be many like this!
-
Biggest challenge I overcame as dev? One of many.
Avoiding a life sentence when the 'powers that be' targeted one of my libraries for the root cause of system performance issues and I didn't correct that accusation with a flame thrower.
What the accusation? What I named the library. Yep. The *name* was causing every single problem in the system.
Panorama (very, very expensive APM system at the time) identified my library in it's analysis, the calls to/from SQLServer was the bottleneck
We had one of Panorama's engineers on-site and he asked what (not the actual name) MyLibrary was and (I'll preface I did not know or involved in any of the so-called 'research') a crack team of developers+managers researched the system thoroughly and found MyLibrary was used in just about every project. I wrote the .Net 1.1 MyLibrary as a mini-ORM to simplify the execution of database code (stored procs, etc) and gracefully handle+log database exceptions (auto-logged details such as the target db, stored procedure name, parameter values, etc, everything you'd need to troubleshoot database errors). This was before Dapper and the other fancy tools used by kids these days.
By the time the news got to me, there was a team cobbled together who's only focus was to remove any/every trace of MyLibrary from the code base. Using Waterfall, they calculated it would take at least a year to remove+replace MyLibrary with the equivalent ADO.Net plumbing.
In a department wide meeting:
DeptMgr: "This day forward, no one is to use MyLibrary to access the database! It's slow, unprofessionally named, and the root cause of all the database issues."
Me: "What about MyLibrary is slow? It's excecuting standard the ADO.Net code. Only extra bit of code is the exception handling to capture the details when the exception is logged."
DeptMgr: "We've spent the last 6 weeks with the Panorama engineer and he's identified MyLibrary as the cause. Company has spent over $100,000 on this software and we have to make fact based decisions. Look at this slide ... "
<DeptMgr shows a histogram of the stacktrace, showing MyLibrary as the slowest>
Me: "You do realize that the execution time is the database call itself, not the code. In that example, the invoice call, it's the stored procedure that's taking 5 seconds, not MyLibrary."
<at this point, DeptMgr is getting red-face mad>
AreaMgr: "Yes...yes...but if we stopped using MyLibrary, removing the unnecessary layers, will make the code run faster."
<typical headknodd-ers knod their heads in agreement>
Dev01: "The loading of MyLibrary takes CPU cycles away from code that supports our customers. Every CPU cycle counts."
<headknod-ding continues>
Me: "I'm really confused. Maybe I'm looking at the data wrong. On the slide where you highlighted all the bottlenecks, the histogram shows the latency is the database, I mean...it's right there, in red. Am I looking at it wrong?"
<this was meeting with 20+ other devs, mgrs, a VP, the Panorama engineer>
DeptMgr: "Yes you are! I know MyLibrary is your baby. You need to check your ego at the door and face the facts. Your MyLibrary is a failed experiment and needs to be exterminated from this system!"
Fast forward 9 months, maybe 50% of the projects updated, and I come across the documentation left by the Panorama engineer. Even after the removal of MyLibrary, there was zero increase in performance. The engineer recommended the DBAs start optimizing their indexes and fixing the other N+1 problems discovered. I decided to ask the developer who led the re-write.
Me: "I see that removing MyLibrary did nothing to improve performance."
Dev: "Yes, DeptMgr was pissed. He was ready to throw the Panorama engineer out a window when he said the problems were in the database all along. Didn't you say that?"
Me: "Um, so is this re-write project dead?"
Dev: "No. Removing MyLibrary introduced all kinds of bugs. All the boilerplate ADO.Net code caused a lot of unhandled exceptions, then we had to go back and write exception handling code."
Me: "What a failure. What dipshit would think writing more code leads to less bugs?"
Dev: "I know, I know. We're so far behind schedule. We had to come up with something. I ended up writing a library to make replacing MyLibrary easier. I called it KnightRider. Like the TV show. Everyone is excited to speed up their code with KnightRider. Same method names, same exception handling. All we have to do is replace MyLibrary with KnightRider and we're done."
Me: "Won't the bottlenecks then point to KnightRider?"
Dev: "Meh, not my problem. Panorama meets primarily with the DBAs and the networking team now. I doubt we ever use Panorama to look at our C# code."
Needless to say, I was (still) pissed that they had used MyLibrary as a dirty word and a scapegoat for months when they *knew* where the problems were. Pissed enough for a flamethrower? Maybe.
-
Fuck you haters, I'm not dying of corona so PHP dies with it.
PHP is an amazing language. It has evolved nicely and has almost all the high-performance functionality you need built in. It has a good package manager ecosystem. It's insanely fast (since 7.0; older versions were only fast with opcache).
Most of the called-out inconsistencies exist because PHP consistently follows the C/POSIX equivalents, or come from people who don't understand dynamic typing (it doesn't mean any shit will stick).
https://awesomeopensource.com/proje...
Fuck off with your JS backend solution because it's faster...
This is a big thanks to all the amazing members of the PHP community that worked hard to make PHP the great language it is today!!!
-
I do not like the direction laptop vendors are taking.
New laptops tend to feature fewer ports, making the user more dependent on adapters. Similarly to smartphones, this is a detrimental trend initiated by Apple and replicated by the rest of the pack.
As of 2022, many mid-range laptops feature just one USB-A port and one USB-C port, resembling Apple's toxic minimalism. In 2010, mid-class laptops commonly had three or four USB ports. I have even seen an MSi gaming laptop with six USB ports. Now, much of the edges is wasted "clean" space.
Sure, there are USB hubs, but those only work well with low-power devices. When attaching two external hard drives to transfer data between them, they might not be able to spin up due to insufficient power from the USB port or undervoltage caused by the impedance (resistance) of the USB cable between the laptop's USB port and hub. There are USB hubs which can be externally powered, but that means yet another wall adapter one has to carry.
Non-replaceable [shortest-lived component] means difficult repairs and no more reserve batteries, as well as no extra-sized battery packs. When the battery expires, one might have to waste four hours at a repair shop for a replacement that would have taken a minute on a 2010 laptop.
The SD card slot is being replaced with inferior MicroSD or removed entirely. This is especially bad for photographers and videographers who would frequently plug memory cards into their laptop. SD cards are far more comfortable than MicroSD cards, and no, bulky external adapters that reserve the device's only USB port and protrude can not replace an integrated SD card slot.
Most mid-range laptops in the early 2010s also had a LAN port for immediate interference-free connection. That is now reserved for gaming-class / desknote laptops.
Obviously, components like RAM and storage are far more difficult to upgrade in more modern laptops, or not possible at all if soldered in.
Touch pads increasingly have the buttons underneath the touch surface rather than separate, meaning one has to be careful not to move the mouse while clicking. Otherwise, it could cause an unwanted drag-and-drop gesture. Some touch pads are smart enough to detect when a user intends to click, and lock the movement, but not all. A right-click drag-and-drop gesture might not be possible due to the finger on the button being registered as touch. Clicking with short tapping could be unreliable and sluggish. While one should have external peripherals anyway, one might not always have brought them with. The fallback input device is now even less comfortable.
Some laptop vendors include a sponge sheet that they want users to put between the keyboard and the screen before folding it, "to avoid damaging the screen", even though making it two millimetres thicker could do the same without relying on a sponge sheet. So they want me to carry that bulky thing everywhere around? How about no?
That's the irony. They wanted to make laptops lighter and slimmer, but that made them adapter- and sponge sheet-dependent, defeating the portability purpose.
Sure, the CPU performance has improved. Vendors proudly show off in their advertisements which generation of Intel Core they have this time. As if that is something users especially care about. Hoo-ray, generation 14 is now yet another 5% faster than the previous generation! But what is the benefit of that if I have to rely on annoying adapters to get the same work done that I could formerly do without those adapters?
Microsoft has also copied Apple in demanding an internet connection before Windows 11 will set up. The setup screen says "You will need an Internet connection…" - no, technically I would not. What technically stands in the way of Windows 11 setting up offline? After all, previous Windows versions like Windows 95 could do so 25 years earlier, as could far more recent versions. Thankfully, Linux distributions do not do that.
If "new" and "modern" mean more locked-in and less practical and difficult to repair, I would rather have "old" than "new".
-
Boss is also a programmer, which is nice. Boss is also incredibly impatient. So when he gives me a project to do and I don't have it done the day of, he goes and does it over the weekend. But he doesn't tell me until a few days later, when I finish the following Tuesday. He chucked my git branch and just pushed his stuff to master. Then he belittled me because there was a feature missing in his code that I hadn't done yet. I don't know how to deal with this. On the one hand, I could try to work faster. But on the other, I am trying to add features to software he wrote in C-style C++, didn't comment, and hasn't updated to modern standards since 1998. Even the copyright notices are 1997 to 2001. I'm just very discouraged, as it's my first job in the field. It wouldn't have been so frustrating if he had just told me he'd worked on it himself instead of letting me finish it and then throwing it in the trash.
end rant
-
Not about favorite language but about why PHP is not my favorite language.
I recently launched a web shop built on Prestashop. I found that some product pages are so god damn slow, taking like 50 fuckin' seconds to load. So I started investigating and analyzing the problem. It turns out that for some products we have so many different combinations that it results in a Cartesian product totalling about 75K unique combinations.
Prestashop did a really bad job coding the product controller, because for every combination it fetches additional data. That results in 75K queries being executed for just one product detail page. Crazy, even more so when you know that the query that loads all these combinations, before iterating through them, takes 7 fuckin' seconds to execute on my dev machine, which is a very, very fast high-end machine.
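Side note for anyone who hasn't met this pattern before: it's the classic N+1 problem. Here's a hedged, generic sketch of it in C using SQLite, purely to keep the example self-contained - the table and column names are invented and are not Prestashop's real schema, and the "better" version is simply one set-based query, not the exact three-query split described next.

/* Build with: gcc n_plus_one.c -lsqlite3
 * Assumes shop.db already contains the two made-up tables.
 * Error checks are omitted for brevity. */
#include <stdio.h>
#include <sqlite3.h>

/* Bad: one extra query per combination -> N+1 statement executions. */
static void fetch_n_plus_one(sqlite3 *db)
{
    sqlite3_stmt *combos, *attrs;
    sqlite3_prepare_v2(db, "SELECT id FROM combination;", -1, &combos, NULL);
    sqlite3_prepare_v2(db,
        "SELECT name FROM attribute WHERE combination_id = ?;", -1, &attrs, NULL);
    while (sqlite3_step(combos) == SQLITE_ROW) {
        sqlite3_bind_int(attrs, 1, sqlite3_column_int(combos, 0));
        while (sqlite3_step(attrs) == SQLITE_ROW)
            ;                        /* consume the rows for this combination */
        sqlite3_reset(attrs);
    }
    sqlite3_finalize(attrs);
    sqlite3_finalize(combos);
}

/* Better: one JOIN, one statement, let the database do the work. */
static void fetch_joined(sqlite3 *db)
{
    sqlite3_stmt *stmt;
    sqlite3_prepare_v2(db,
        "SELECT c.id, a.name FROM combination c "
        "JOIN attribute a ON a.combination_id = c.id;", -1, &stmt, NULL);
    while (sqlite3_step(stmt) == SQLITE_ROW)
        ;                            /* consume all rows at once */
    sqlite3_finalize(stmt);
}

int main(void)
{
    sqlite3 *db;
    if (sqlite3_open("shop.db", &db) != SQLITE_OK) return 1;
    fetch_n_plus_one(db);
    fetch_joined(db);
    sqlite3_close(db);
    return 0;
}

With 75K combinations, the first version executes 75,001 statements; the second executes one.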
That said I analyzed the query and now I broke the query down into 3 smaller queries that execute in a much faster 400 ms (in total!) fetching the exact same data.
So what does this have to do with PHP? As PHP is also OO why the fuck would you always put stuff in these god damn associative arrays, that in turn contain associative arrays that contain more arrays containing even more arrays of arrays.
Yes, I could do the same in C# and other languages as well, but I have never encountered that in other languages, yet always seem to find it in PHP. That's why I hate PHP. Not because of the language, but because of all those fucking retarded assholes putting everything in arrays. Nothing OO about that.
-
I once had a classmate who argued that coding in C not only produced faster code than .NET C#, but that he could actually produce applications faster in C than I could in C#.
I challenged him to make a web browser. While he was struggling to remember if it was #include <stdio.h> or #include <iostream>, I started typing WebBro... and let IntelliSense work its magic.
Needless to say I won.
Sadly, he wouldn't admit his defeat but went on about how much faster his browser would run in the end...
He has yet to release a Web browser written completely in C.
-
My grandpa is using his computer for video editing and creating photo books. His setup was:
- A 100GB SSD for C
- A 1TB HD for D
The problem:
He never had more than 6GB free on his C Drive because somehow Windows and his programs filled it all with some utter bullshit which couldn't be removed or whatever.
So I promised him to install Linux for his Emails and Surfing and create a Windows 10 VM for him to use his programs.
The Linux installation, from downloading an ISO to creating a bootable drive to actually installing it, was faster than finding the fucking Windows 10 ISO.
Which was about the same time as installing fucking Windows, because this bullshit prints out one fucking line at a time and then waits for you to read it for 15 motherfucking seconds before printing the next line.
And don't get me started on the fucking telemetry.
-
I loved what Flash used to be. Most people thought it was proprietary stuff. The program was. It's language was not. And damn, did we have fun together! We rendered vector graphics from code and pushed perlin noise into bitmaps while the HTML guys were still struggling with rounded corners. Oh, those bezier curves we dreamed up out of thin lines of code!
Other people just couldn't see how beautiful you were. They hated you because you were popular, and ads were beginning to dominate the landscape. And lots of dildos made ads by abusing your capabilities, straining you with their ugly code that didn't remove event listeners properly. I always did, because I loved you.
They made fun of you because you had to be compiled. Look what those cavemen are doing now, dear ActionScript 3.0. They are compiling Javascript and pushing it to production. They are all fools my dear, unworthy to read even a single line of your gracious typed syntax. We were faster than Java. More animated and fluid than CSS. We were even responsive if we needed to be.
But... I have to move on. I don't know if you're still watching over me but I can't deny I've been trying to find some happiness. I think you would have wanted me to. C# is a sweet girl and I'm thankful for her, but I won't ever forget those short few years we had together. They were the absolute best.
Rest well my dear princess.
-
I have that friend who keeps telling me that he doesn't like java just because it's slow! (I hate this excuse).
Friend: look what java did to Android, it's because of java that iOS is faster tham Android.
Me: whaaat!! Do you know that the Android OS has nothing to do with Java? It's C++ you...
Friend: No it's Java, we develop Android apps with java
Me: 🔫
-
Buckle up, it's a long one.
Let me tell you why "Tree Shaking" is stupidity incarnate and why Rich Harris needs to stop talking about things he doesn't understand.
For reference, this is a direct response to the 2015 article here: https://medium.com/@Rich_Harris/...
"Tree shaking", as Rich puts it, is NOT dead code removal apparently, but instead only picking the parts that are actually used.
However, Rich has never heard of a C compiler, apparently. In C (or any systems language with basic optimizations), public (visible) members exposed to library consumers must have that code available to them, obviously. However, all of the other cruft that you don't actually use is removed - hence, dead code removal.
How does the compiler do that? Well, it does what Rich calls "tree shaking" by evaluating all of the pieces of code that are used by any codepaths used by any of the exported symbols, not just the "main module" (which doesn't exist in systems libraries).
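A hedged sketch of what that looks like in practice, in C - the file, the symbol names and the build line below are made up for illustration, using the usual GCC/Clang section-GC flags:

/* dead_code_demo.c - a sketch, not a real library.
 * Build and inspect, e.g.:
 *   gcc -O2 -ffunction-sections -fdata-sections dead_code_demo.c -Wl,--gc-sections
 *   nm a.out | grep helper
 * unused_helper has external linkage, so it survives a plain build, but the
 * linker drops its section with --gc-sections because nothing reachable from
 * the entry point references it. The unused static function is typically
 * discarded by the compiler on its own (with a -Wunused-function warning). */
int used_helper(int x)   { return x * 2; }    /* reachable from main: kept          */
int unused_helper(int x) { return x * 3; }    /* unreachable: gc-sections drops it  */

static int private_cruft(void) { return 42; } /* unused static: compiler drops it   */

int main(void)
{
    return used_helper(21);
}

Same idea, different toolchain: the linker walks the reference graph from the entry points and throws away whatever it can prove is unreachable.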
It's the SAME FUCKING THING, he's just not researched enough to fully fucking understand that. But sure, tell me how the javascript community apparently invented something ELSE that you REALLY just repackaged and made more bloated/downright wrong (React Hooks, webpack, WebAssembly, etc.)
Speaking of Javascript, "tree shaking" is impossible to do with any degree of confidence, unlike statically typed/well defined languages. This is because you can create artificial references to values at runtime using string functions - which means, with the right input, almost anything can be run depending on the input.
How do you figure out what can and can't be? You can't! Since there is a runtime-based codepath and decision tree, you run into properties of Turing's halting problem, which cannot be solved completely.
With stricter languages such as C (which is where "dead code removal" is used quite aggressively), you can make very strong assertions at compile time about the usage of code. This is simply how C is still thousands of times faster than Javascript.
So no, Rich Harris, dead code removal is not "silly". Your entire premise about "live code inclusion" is technical jargon and buzzwordy drivel. Empty words at best.
This sort of shit is annoying and only feeds into this cycle of the web community not being Special enough and having to reinvent every single fucking facet of operating systems in your shitty bloated spyware-like browser and brand it with flashy Matrix-esque imagery and prose.
Fuck all of it.
-
Follow-up to my previous story: https://devrant.com/rants/1969484/...
If this seems too long to read, skip to the parts that interest you.
~ Background ~
Maybe you know TeamSpeak, it's basically a program to talk with other people on servers. In TeamSpeak you can generate identities, every identity has a security level. On your server you can set a minimum security level you need to connect. Upgrading the security level takes longer as the level goes up.
~ Technical background ~
The security level is computed by doing this:
SHA1(public_key + offset)
Where public_key is your public key in Base64 and offset is an 8 Byte unsigned long. Offset is incremented and the whole thing is hashed again. The security level comes from the amount of Zero-Bits at the beginning of the resulting hash.
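(For reference, a plain CPU-side sketch of that check in C, using OpenSSL's SHA1. I'm assuming the offset gets appended to the Base64 key as a decimal string, since the exact serialization isn't spelled out here - adjust if it's raw bytes - and the key below is just a placeholder, not a real identity. Build with something like: gcc level.c -lcrypto)

#include <stdio.h>
#include <openssl/sha.h>   /* SHA1() is deprecated in OpenSSL 3, fine for a sketch */

/* Count how many zero bits the digest starts with. */
static int leading_zero_bits(const unsigned char *digest, size_t len)
{
    int bits = 0;
    for (size_t i = 0; i < len; i++) {
        if (digest[i] == 0) { bits += 8; continue; }
        for (int b = 7; b >= 0 && !(digest[i] & (1u << b)); b--)
            bits++;
        break;
    }
    return bits;
}

static int security_level(const char *pubkey_b64, unsigned long long offset)
{
    char buf[1024];
    unsigned char digest[SHA_DIGEST_LENGTH];
    int n = snprintf(buf, sizeof buf, "%s%llu", pubkey_b64, offset);
    SHA1((const unsigned char *)buf, (size_t)n, digest);
    return leading_zero_bits(digest, sizeof digest);
}

int main(void)
{
    const char *pubkey = "MEwDAgcAAgEgAiB...";   /* placeholder, not a real key */
    for (unsigned long long offset = 0; offset < 100000000ULL; offset++) {
        int level = security_level(pubkey, offset);
        if (level > 30)   /* print the "good" offsets, 30+ zero bits */
            printf("level %d at offset %llu\n", level, offset);
    }
    return 0;
}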
My plan was to use my GPU to do this, because I heard GPUs are good at hashing. And now, I got it to work.
~ How I did it ~
I am using a start offset of 0, create 255 Threads on my GPU (apparently more are not possible) and let them compute those hashes. Then I increment the offset in every thread by 255. The GPU also does the job of counting the Zero-Bits, when there are more than 30 Zero-Bits I print the amount plus the offset to the console.
~ The speed ~
Well, speed was the reason I started this. It's faster than my CPU for sure. It takes about 2 minutes and 40 seconds to compute 2.55 Billion hashes which comes down to ~16 Million hashes per second.
Is this speed an expected result, is it slow or fast? I don't know, but for my needs, it is fucking fast!
~ What I learned from this ~
I come from a Java background and just recently started C/C++/C#. Which means this was a pretty hard challenge, since OpenCL uses C99 (I think?). CUDA sadly didn't work on my machine because I have an unsupported GPU (NVIDIA GeForce GTX 1050 Ti). I learned not to execute an endless loop on my GPU, and so much more about C in general. Though it was small, it was an amazing project.
-
Here is another rather big example of how C++ is WAY slower than assembler (picture)
Sure - std::copy is convenient
but asm is just way faster.
This code should be compatible with EVERY x86_64 CPU.
I even do Duff's device without having the loop:
the loop happens in the rep opcode, which allows for prefetching (meaning that it doesn't destroy the prefetch queue and can even allow for preprocessing).
BTW: for those who commented on my comment porn last time: I made sure to satisfy your cravings ;-)
To those who can't make sense of my command line:
C++ 1m24s
ASM 19s
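For anyone who wants to poke at the comparison themselves, here's a hedged, stripped-down C sketch - a plain byte loop vs an inline rep movsb (GCC/Clang on x86_64 assumed). The numbers depend heavily on optimization flags, buffer size, page warm-up and whether the CPU has ERMSB, so treat them as illustrative rather than a reproduction of the measurement above.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

static void copy_loop(unsigned char *dst, const unsigned char *src, size_t n)
{
    for (size_t i = 0; i < n; i++)
        dst[i] = src[i];
}

static void copy_rep_movsb(unsigned char *dst, const unsigned char *src, size_t n)
{
    /* rep movsb copies RCX bytes from [RSI] to [RDI]; the constraints hand
       the three values to those registers and mark memory as clobbered. */
    __asm__ volatile("rep movsb"
                     : "+D"(dst), "+S"(src), "+c"(n)
                     :
                     : "memory");
}

int main(void)
{
    size_t n = 256u * 1024u * 1024u;             /* 256 MiB */
    unsigned char *src = malloc(n), *dst = malloc(n);
    if (!src || !dst) return 1;
    memset(src, 1, n);                           /* touch the pages first */
    memset(dst, 0, n);

    clock_t t0 = clock();
    copy_loop(dst, src, n);
    clock_t t1 = clock();
    copy_rep_movsb(dst, src, n);
    clock_t t2 = clock();

    printf("loop: %.3fs  rep movsb: %.3fs\n",
           (double)(t1 - t0) / CLOCKS_PER_SEC,
           (double)(t2 - t1) / CLOCKS_PER_SEC);
    free(src); free(dst);
    return 0;
}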
To those who tell me to call clang with -O<something>:
1) clang removes the call to copy at -O3 or -O2
2) the result isn't better with -O1 (well... one second, but that might be due to so many other things, and even if... one second isn't that much)
-
Just remembered watching a video where a little shit wannabe programmer was interviewed by another shit wannabe IT professional "hacker". The first shit claimed he had designed a new language that is better, with a compiler 10 times faster and better than gcc. When he demonstrated his language, it was nothing but a header file with a couple of #define statements aliasing different C functions.
and this dude was in the news and was glorified by people and shit
#justturdworldstuff
I'm glad I left "my" country.
-
I’m a long time Objective-C and Swift developer...I’ve been asked to “research” a project using React Native.
It felt dirty to be writing an iOS app using JavaScript.
I’m still not sure why and how React Native is better/faster than native Swift.
Someone change my mind...
-
When I was in college OOP was emerging. A lot of the professors were against teaching it as the core. Some younger professors were adamant about it, and also Java fanatics. So after the bell rang, they'd sometimes teach people that wanted to learn it. I stayed after and the professor said that object oriented programming treated things like reality.
My first thought was: hold up, modeling reality is hard and complicated; why would you want to add that to your programming? That's utter madness.
Then he started with a ball example and how some balls in reality are blue, and they can have a bounce action we can express with a method.
My first thought was that this seems a very niche example. It has very little to do with any problems I have yet solved and I felt thinking about it this way would complicate my programs rather than make them simpler.
I looked around the at remnants of my classmates and saw several sitting forward, their eyes lit up and I felt like I was in a cult meeting where the head is trying to make everyone enamored of their personality. Except he wasn't selling himself, he was selling an idea.
I patiently waited it out, wanting there to be something of value in the after the bell lesson. Something I could use to better my own programming ability. It never came.
This same professor would tell us all to buy and read the Gang of Four book; it would change our lives. It was an expensive hardcover book with a ribbon attached for a bookmark. It was made to look important. I didn't have much money in college but I gave it a shot and bought the book. I remember wrinkling my nose often while reading it, feeling like I was still being sold something. But where was the proof? It was all an argument from authority, and I didn't think the argument was very good.
I left college thinking the whole thing was silly and would surely go away with time. And then it grew, and grew. It started to be impossible to avoid it. So I'd just use it when I had to and that became more and more often.
I began to doubt myself. Perhaps I was wrong, surely all these people using and loving this paradigm could not be wrong. I took on a 3 year project to dive deep into OOP later in my career. I was already intimately aware of OOP having to have done so much of it. But I caught up on all the latest ideas and practiced them for a the first year. I thought if OOP is so good I should be able to be more productive in years 2 and 3.
It was the most miserable I had ever been as a programmer. Everything took forever to do. There was boilerplate code everywhere. You didn't so much solve problems as stuff abstract ideas that had nothing to do with the problem everywhere, and THEN code the actual part that does the task. Even though I was working with an interpreted language, they had added a need to compile, for dependency injection. What's next, taking the benefit of dynamic typing and forcing typing into it? Oh I see, they managed to do that too. At this point, why not just use C or C++? It's going to do everything you wanted, if you add compiling and typing, and do it way faster at run time.
I talked to the client extensively about everything. We both agreed the project was untenable. We moved everything over another 3 years. His business is doing better than ever before now by several metrics. And I can be productive again. My self doubt was over. OOP is a complicated mess that drags down the software industry, little better than snake oil and full of empty promises. Unfortunately it is all some people know.
Now there is a functional movement, a data-oriented movement, and things are looking a little brighter. However, no one seems to care for procedural. Functional and procedural are not that different; functional just tries to put more constraints on the developer. Data-oriented is also a lot more sensible, and again pretty close to procedural a lot of the time. It's just odd to me, this need to separate from procedural at all. Procedural was very honest. If you're a bad programmer you make bad code. If you're a good programmer you make good code. It seems a lot of this was meant to force bad programmers to make good code. I'll tell you what I think though. I think that has never worked. It's just hidden away in some abstraction and made identifying it harder. Much like the code methodologies themselves do to the code.
Now I'm left with a choice: keep my own business going to work on what I love, shift gears and do what I hate for more money, or pivot careers entirely. I decided after all this to go into data science, because what you all are doing to the software industry sickens me. And that's my story. It's one that makes a lot of people defensive or even passive-aggressive; to those people I say, try more things. At least then you can be less defensive about your opinion.
-
Just spent an hour looking at the NYC Subway maps vs the direction Google wanted me to take.
Google found the most efficient way is to take E train then transfer to R which then goes back a bit like a U-turn to get to my stop.
Then looking at the subway map, I can just take the R train... Since none of these trains are express... How the fuck did Google think that A-B-C is faster than A-C....
-
Random talk with a colleague:
-How familiar are you with oop concepts?
- I don't need that, I will make my life easier instead. They say "the" Java is faster though.
-Faster from which lang?
- C.
Me: Aw shiet.
Can't believe who I share this precious air with.
-
A (work-)project i spent a year on will finally be released soon. That's the perfect opportunity to vent out all the rage i built up during dealing with what is the javascript version of a zodiac letter.
Everything went wrong from the beginning. Three people were assigned to rewrite an old Flash application: me, A and B. B suggested a JavaScript framework, even though A and I had never worked with more than jQuery. In the end we chose React/Redux with REST on the server, a classic.
After some time I got the hang of it; around that time B left and a new guy, C, was hired soon after. He didn't know React/Redux either. The perfect start to a burning pile of smelly code.
Today this burning pile turned into a wasteland of code quality, a house of cards with a storm approaching, a rocket with leaks ready to launch, you get the idea.
We got 2 dozen files with 200-500 LOC each, all in the same directory and all with the same 2-word prefix, which makes finding the right one a nightmare on its own. We have an i18n library used only for ~10 text fields, copy-pasted code you never know whether it's used or not, fetch calls with no error handling, and many other code smells that turn this fire into a garbage fire. An eternal fire. 3 months ago I reduced the linter warnings on this project to 1; now I can't keep count anymore.
We use the reactabular module, which gives us headaches because IT DOESN'T DO WHAT IT'S SUPPOSED TO DO AND WE CAN'T USE IT WELL EITHER. All because the client can't be bothered to have the table header scroll along with the body. We have methods which do two things because passing another callback somehow crashed in the browser. And the only thing about indentation is that it exists. Copy-pasting from websites and other files, plus indentation wars, gives the files the unique look that makes you wonder if one of the devs hides his whitespace code in them.
All of this is the result of missing time, results over quality, and the worst approach of all, used by A: if A wants a UI component similar to an existing one, he copies the original and edits the copy until it does what he wants. A knows about classes, modules, components, etc. Still, he can't bring himself to spend his time on creating superclasses... his approach gives results much faster.
Things got worse when A tried Redux; luckily A prefers the component's local state. WHICH IS ANOTHER PROBLEM. He doesn't understand Redux and loads all of the data directly from the server and puts it into the local state. The point of Redux is that you don't have to do this. But there are only 1 or 2 examples of how this practice has hurt us yet, so I'm gonna have to let this slide. IF HE AT LEAST WOULD UPDATE THE DATA PROPERLY. Changes are just sent to the server and then all of the data is re-fetched. I programmed the REST endpoints to return the updated objects for a reason. But no, fuck me.
I've heard A decided (A is the team leader) to use less Redux on the next project and use dedicated REST endpoints for every little computation you COULD DO WITH REDUX INSTEAD. My will is broken and I just don't want to work with this anymore.
There are still various subpages that can't survive an F5, because the components can't handle an empty Redux state at the beginning, but to be honest I don't care anymore. Let's hope the client never finds out, along with the "on error nothing happens" bugs. The product should've been shipped last week, but thanks to mandatory bugfixes the release was postponed to next week. Then the next project starts...
Please give me some tips to keep up code quality over time, I can't take this once more.
I'm also aware that I could've done more: talking to A and C about code style, prettifying the code, etc. But I was busy putting out my own fires; I couldn't kill much of the other fires, which in the end became a burning building (a perfect metaphor for this software).
-
Is your code green?
I've been thinking a lot about this for the past year. There was recently an article on this on slashdot.
I like optimising things to a reasonable degree and avoid bloat. What are some signs of code that isn't green?
* Use of technology that claims to be fast without real expert review and measurement. Lots of tech out there claims to be fast but actually isn't, or is only fast by saturating resources while being inefficient.
* It uses caching. Many might find that counterintuitive. In technology it is surprisingly common to see people scale or cache rather than directly fix the thing that's actually watt-expensive, which is compounded when the cache has weak coverage.
* It uses scaling. Originally scaling was a last resort. The reason is simple: it introduces excessive complexity. Today it's common to see people scale things rather than make them efficient. You end up needing ten instances when a bit of skill could bring you down to one, which could still scale but likely won't need to.
* It uses a non-trivial framework. Frameworks are rarely fast. Most will fall in the range of ten to a thousand times slower in terms of CPU usage. Memory bloat may also force the need for more instances. Frameworks written in already slow high-level languages may be especially bad.
* Lacks optimisations for obvious bottlenecks.
* It runs slowly.
* It lacks even basic resource usage measurement.
Unfortunately smells are not enough on their own, but they are a start. Real measurement and expert review are always the only way to get an idea of whether your code is reasonably green.
I find it not uncommon to see things require tens, hundreds or thousands of times more resources than needed, if not more.
In terms of cycles that can be the difference between needing a single core and a thousand cores.
This is common in the industry, but it's not because people didn't write everything in assembly. It usually leans toward the extreme opposite.
Optimisations are often easy and don't require writing code in binary. In fact the resulting code is often simpler. Excess complexity and inefficient code tend to go hand in hand. Sometimes a code cleaning service is all you need to enhance your green.
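To make that concrete, here is a hypothetical before/after of the kind of easy win meant here (the types and collections are made up, not taken from the article): replacing a repeated linear scan with a set lookup removes work and arguably reads better.
```csharp
// Hypothetical before/after, not from any real project: the kind of five-minute
// optimisation that is both faster and simpler. Order, orders and activeCustomerIds
// are invented names for the sake of the sketch.
using System;
using System.Collections.Generic;
using System.Linq;

record Order(int Id, int CustomerId);

class GreenDemo
{
    static void Main()
    {
        var orders = new List<Order> { new(1, 7), new(2, 9), new(3, 7) };
        var activeCustomerIds = new List<int> { 7 };

        // Before: O(n*m), List.Contains re-scans the whole list for every order.
        var slow = new List<Order>();
        foreach (var order in orders)
            if (activeCustomerIds.Contains(order.CustomerId))
                slow.Add(order);

        // After: build a HashSet once, lookups become O(1); less work, clearer intent.
        var activeIds = new HashSet<int>(activeCustomerIds);
        var fast = orders.Where(o => activeIds.Contains(o.CustomerId)).ToList();

        Console.WriteLine($"{slow.Count} == {fast.Count}");
    }
}
```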
I once rewrote a data parsing library that had to parse a hundred MB and was a performance hotspot, porting it from an interpreted language into C. I measured it and the results were good. It had been optimised as much as possible in the interpreted version, but the C version was still at least 50 times faster.
I recently stumbled upon someone's attempt to do the same and I was able to optimise the interpreted version in five minutes to be twice as fast as the C++ version.
I see opportunity to optimise everywhere in software. A billion kg of CO2 could easily be saved if a few green code shops popped up. It's also often a net win. Faster software, lower costs, lower management burden... I'm thinking of starting a consultancy.
The problem is after witnessing the likes of Greta Thunberg then if that's what the next generation has in store then as far as I'm concerned the world can fucking burn and her generation along with it.6 -
!rant
So coming from the interpreted-language world (mainly using Python), I'm always amazed at how compiled languages work. Especially C.
Every time I use C, it feels like everything is sooooo much faster (at runtime), and yes, I've read about why so many times. It's just that I can't explain this great feeling of actually seeing the results of using C.
Man, I think I just love C (even though I'm still confused about using pointers).4
I couldn't sleep so I made a CLI 3D to 2D cube displayer in C# in an hour. Controls are (WASD) or (arrow + DEL + END). If you press ALT, the cube will rotate faster. Simple af. This is perfect as my first public repo.
https://github.com/filthycoding/...4 -
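The repo itself isn't reproduced here, so purely as a hedged sketch (not the linked code), the core of such a console cube toy is usually just rotating the 8 cube vertices and doing a simple perspective divide, along these lines:
```csharp
// Hedged sketch of the core math a CLI 3D-to-2D cube toy needs: rotate the vertices,
// then project them with a simple perspective divide. Key handling and drawing omitted.
using System;

class CubeSketch
{
    static void Main()
    {
        var verts = new (double X, double Y, double Z)[8];
        int i = 0;
        for (int x = -1; x <= 1; x += 2)
            for (int y = -1; y <= 1; y += 2)
                for (int z = -1; z <= 1; z += 2)
                    verts[i++] = (x, y, z);

        double angle = Math.PI / 6;                  // would change on each key press
        foreach (var v in verts)
        {
            // Rotate around the Y axis.
            double rx = v.X * Math.Cos(angle) + v.Z * Math.Sin(angle);
            double rz = -v.X * Math.Sin(angle) + v.Z * Math.Cos(angle);

            // Perspective projection: scale by distance from the "camera".
            double dist = 4.0;
            double scale = dist / (dist + rz);
            double sx = rx * scale, sy = v.Y * scale;

            Console.WriteLine($"({v.X,2},{v.Y,2},{v.Z,2}) -> ({sx:F2}, {sy:F2})");
        }
    }
}
```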
Writing x86 assembly code in VS Code feels so weird. I mean, I'm using something that's built using crazily high level languages (JS, HTML, CSS), on top of a mammoth runtime environment (Node, V8), which is itself sitting on a modern and sophisticated operating system (Antergos), and I'm writing code that shifts bits and bytes around in memory in order to get one part of my C program to run just a little faster. Wow.1
-
This is a story about my disappointment in modern GUI editors for desktop applications.
Well, first of all, I grew up with Delphi 5. Delphi has an awesome form editor. It's intuitive and works without any problems. It always does what you want it to do. Prototyping is really a matter of seconds here, even for people who have never used it (I guess).
But the problem is that it is Delphi. It's so old and bloated, and most problems you'll ever have were solved (through a hack) 20 years ago in some weird forum.
So I looked around and tried many other drag'n'drop GUI editors.
The one for Java is the biggest pile of crap I've ever seen. It slows down Eclipse/IntelliJ and almost never does what I want. And it's not really intuitive either.
Right after that, the one for C# (this XML designer) is okay-ish, but it's also not really intuitive and doesn't always do what the user wants.
I also tried other ones. But I still miss an intuitive one that works without weird side effects.
I can now understand why the web dev stack is growing into the realm of desktop apps. I can prototype stuff even faster in Angular than in Delphi.
But shouldn't we improve the desktop stack instead of taking some bloated stack using a language that should have never existed?9 -
During one of our visits to Konza City, Machakos County in Kenya, my team and I encountered a big problem accessing viable water. Most times we asked for water, we were handed a bottle of purchased water. For a day or a few days that would be affordable for some, but over the lifetime of a middle-income person it is way too expensive. Of the ten people we encountered, 8 complained about the lack of a proper mechanism for accessing viable water. To us this was a very pressing problem that needed to be sorted out immediately. The majority of the people were also unable to conduct income-generating activities such as farming because of the nature of the available water and its scarcity.
Such a scenario demands an immediate solution. Various approaches have been put into practice to ensure sustainable water conservation and management, but most of them have been futile on the sustainability front. As part of our research we also looked at the formal mechanisms put in place to ensure proper access to water: one of them was tree planting, which was not sustainable at all, and some piped water was being transported over very long distances, which did not solve the immediate needs of the people. We found out that the area has a large body of salty water which was not viable for any constructive activity. This was hint enough to help us find a way to tackle this demanding challenge: the presence of salty water became the first step of our solution.
SOLUTION
We came up with an IoT-based system to help tackle this problem. Our system purifies the salty water through electrolysis. The device is placed where the body of water is located; it drills to a suitable depth and allows the salty water to flow into it. Sets of tanks and valves are situated next to it; these tanks temporarily hold the salty water. A high-power source is then connected to each tank, enabling the separation of chlorine ions from hydrogen ions through electrolysis. The salt is then drawn off from the lower chamber of the tanks, and the water flows on to the following tanks, which contain various chemicals to remove any remaining impurities. The entire process is managed by sensors. Water alkalinity, turbidity and pH are monitored and relayed to a mobile phone; a predictive analysis of the stored data history then decides whether to increase or decrease the flow of water through the valves. This being a hot, drought-prone area, we opted to maximise power harvesting through solar; its availability is almost perfect to provide us with a constant supply of at least 440V to facilitate faster electrolysis of the salty water.
Being a drought-prone area, it was key that the outlet water be cool and comfortable for consumers to use, so we also coupled our output chamber with cooling tanks. These tanks are managed via our mobile application; the temperature and humidity readings they produce are relayed to it. This information is key in helping us produce water at optimum conditions, enabling us to fully manage the supply and intake of water from the water bodies.
Using natural language processing, we are able to automatically control the flow and feeding of the valves by voice. One could say "The output water is too hot", and the system would respond by increasing the speed of the fans and making the tanks provide very cold water. In addition to this system, we have prepared short video tutorials and documents enlightening people on how to conserve water and maintain the optimum state of the green economy.
IBM/OPEN SOURCE TECHNOLOGIES
For a start, we have implemented our project using ESP8266 microcontrollers, sensors, transducers and low-payload containers to demonstrate it. Previously we used Google's Firebase cloud platform to ensure real-time data relay to and from the mobile app. This has proven workable in most cases, whether on a small or large scale; however, we hit challenges such as changes in the fingerprint keys that render our device unworkable. We intend to overcome this problem by moving to the IBM Bluemix platform.
We use the C++ programming language for our microcontroller and sensor communication; in some cases we use Python to run neural networks for our microcontrollers.
Any feedback concerning this project, please?8
Fellow Deviants, I need your help in understanding the importance of C++
Okay, I need to clarify a few things:
I am not a beginner or a newbie who has just entered this community...
I have been using C++ for some time and, in fact, it was the language which introduced me to the world of programming... Later, I switched to Java, since I found it much better for application development...
I already know the obvious arguments given in favour of C/C++, like how it is much faster and more memory-efficient than other languages...
But, at the same time, C/C++ exposes us and doesn't protect us from ourselves.. I hope that you understand what I mean to say..
And, I guess that it is a fair tradeoff for the kind of power and control that these languages (C/C++) provide us..
And, I also agree with the fact that it is an language that ideally suits our need, if we wish to deal with compilers, graphics, OS, etc, in the future...
But, what I really want to ask here is:
In this day and age, when hardware has advanced so much that, technically, memory efficiency or execution speed is no longer the topmost priority... These were the reasons for which C/C++ was initially created...
In today's time, developer time matters more, and hence syntactically less complicated languages like Java or Python are much more preferred, especially for domains like application development or data science...
So, is continuing with C++ an endeavour worth sticking with in the future, or is it not required...
I am talking about this issue since I am in a dilemma about the use of C++ in the future...
I would be grateful if we could talk about it keeping AI, Machine Learning or Algorithm Optimisation in mind... since these are the fields in which I am interested...
I know that my question could have been posted in a better way... but the chaos that is currently present in my mind regarding this question doesn't allow me to do so...
Any kind of suggestion or thoughts would be welcome and much appreciated...
P.S: I currently use C++ only for competitive programming or challenges...28 -
A post today made me think about this interview I had about a year ago... I shall tell you all about it.
Went to a hiring event for this company; everyone was cool, but this one guy (who would have been my boss) was a totally arrogant, douchey know-it-all.
So anyways, cue interviewing with him a week later. He says: if I give you tasks A, B and C and they need to be done by the end of the week, in which order do you do them? (Note: he literally said ABC, not actual tasks.)
Me: I don't really know how to answer that question without more information.
Him: Well I like to see how people answer it... So think about it for a second.
Me: Ok I guess if doing one task would help me do the next task faster I would do it in that order. Like I can use code or concepts from C to do B faster.
Him: What if A would take 2 days and B and C would take 1 day.
Me: I don't see how that would influence the order I do them in.
Him: Would you complete one task after another, switch between them, or do them simultaneously?
Me: It would depend on my previous answer and what would be the fastest way to get them done, but without more info I literally cannot give you an answer like "I would do B, C, A."
Him: Would you do the longest task first or the shorter tasks?
Me: I don't know, I guess it depends on whether it would help you or somebody else move forward with something if I got a particular task or the shorter tasks done first.
So cue more of this kind of back and forth about arbitrary details about undefined theoretical tasks for like 10 minutes.
Suffice it to say I didn't get the job, and I'm glad.7
Imagine you have a car and it runs faster than other cars and needs less gasoline, but due to historical reasons the steering wheel is made of barbed wire, there are 8 different accelerator pedals for different streets, pushing the wrong one may lead to a crash, and instead of a driver's seat there's a huge wooden dildo sticking out of the floor.
This is, in a nutshell, what using the C++ type system feels like.11 -
¡rant|rant
Nice to do some refactoring of the whole data access layer of our core logistics software. Let me tell a story.
The project is around 80k lines of code, with a lot of integrations with an ERP system and an SQL database.
The ERP system is old, and the API for it is shitty too: only static methods, exposed through a wrapper to a C++ library.
Imagine an order table.
To access an order, you would first need to open the database by calling Api.Open(...file paths) (yes, it's a fucking flat-file type database).
Now the database is open, and you would open the orders table with the method Api.Table(int tableId); in return you get an integer value, the pointer.
Now for the actual order. First you need to search for it by setting the search parameter to the column ID of the order number, while checking all calls for some BS error code:
Api.SetInt(int pointer, int column, int queryValue)
Then call the find method:
Api.Find(int pointer)
Then, to top off this shitcake of an API: if it doesn't find your shit, it will use the "close enough" method of searching.
And now to read a single string 😑
First you look in the outdated and incorrect documentation, given to you by the devil himself, for the column ID and the length of the column.
Then you create a string variable filled with ALL FUCKING SPACES.
Now you call Api.GetStr(int pointer, int column, ref string emptyString, int length)
Now you have passed your poor string by reference into the API's demon orgy.
Then some more BS error code checking.
Now you have read a string value 😀
Now keep in mind to repeat these steps for all 300+ columns in the order table.
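Condensed into one place, reading a single field through that wrapper looked roughly like this. The table/column IDs, the length and the "non-zero means error" convention are placeholders, and the stub Api class only exists so the sketch compiles:
```csharp
// Sketch of reading ONE field through the wrapper described above.
// The Api signatures mirror the rant; everything else is made up.
using System;

static class Api
{
    public static int Open(params string[] files) => 0;                              // stand-in
    public static int Table(int tableId) => 1;                                       // stand-in
    public static int SetInt(int pointer, int column, int queryValue) => 0;          // stand-in
    public static int Find(int pointer) => 0;                                        // stand-in
    public static int GetStr(int pointer, int column, ref string value, int length) => 0; // stand-in
}

class OrderReadSketch
{
    const int OrderTable = 42;        // made-up table id
    const int ColOrderNumber = 3;     // made-up column id
    const int ColCustomerName = 17;   // made-up column id, "documented" length 35

    static void Main()
    {
        Api.Open("orders.dat", "orders.idx");              // flat-file "database"
        int orders = Api.Table(OrderTable);                // integer "pointer" to the table

        if (Api.SetInt(orders, ColOrderNumber, 123456) != 0) throw new Exception("BS error code");
        if (Api.Find(orders) != 0) throw new Exception("BS error code"); // or a "close enough" hit

        string name = new string(' ', 35);                 // pre-filled with ALL THE SPACES
        if (Api.GetStr(orders, ColCustomerName, ref name, 35) != 0) throw new Exception("BS error code");

        Console.WriteLine(name.TrimEnd());                 // ...now repeat for 300+ columns
    }
}
```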
News from the creators: SQL Server? Yes, SQL is good, so everything will be better, right?
Now imagine the poor developers who got tasked with converting this shitcake to use an MS SQL server, which they did.
Now I can honestly say that I found the best SQL Server benchmark tool. This sucker creams out just above ~105K SQL statements per second at peak, and ~15K per second for 1.5 seconds just to read one order. 1.5 seconds to read less than 4 fucking kilobytes!
Right at that moment I realized that our software would grind to a fucking halt before we even thought about starting it. And that me, myself and I would be tasked with fixing it.
4 months later and two weeks until functional beta, here I am. We created our own API with the SQL server 😀
And the outcome of all this...
It fixes bugs older than a year, forces rewriting part of the code base, forces the removal of dirty fixes, and allows proper unit and integration testing, and even database testing with a snapshot feature.
The whole ERP system could be replaced with ~10 lines of code on the application side (provided the same relational structure) while adding it to our own API library.
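As a hedged sketch of what those ~10 lines could look like against a plain SQL table (made-up schema and connection string, using the Microsoft.Data.SqlClient package):
```csharp
// Not the real code, just an illustration of how small "read an order" becomes
// once the data lives in an ordinary SQL Server table. Table and column names are invented.
using System;
using Microsoft.Data.SqlClient;

class OrderReadSql
{
    static void Main()
    {
        var connectionString = "Server=.;Database=Erp;Integrated Security=true;TrustServerCertificate=true";
        using var conn = new SqlConnection(connectionString);
        conn.Open();

        using var cmd = new SqlCommand(
            "SELECT OrderNumber, CustomerName, Total FROM Orders WHERE OrderNumber = @no", conn);
        cmd.Parameters.AddWithValue("@no", 123456);

        using var reader = cmd.ExecuteReader();
        while (reader.Read())
            Console.WriteLine($"{reader.GetInt32(0)} {reader.GetString(1)} {reader.GetDecimal(2)}");
    }
}
```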
The best part is probably the performance improvements 😀. Up to 4500 times faster and 60 times less memory usage, and with only managed memory.3
I want to learn C++. I know C#, Java and Ruby. Should I dive in directly, or is there some sort of path?
How long does it take to master? I am a quick learner; I catch things a lot faster.
What impact will it have on my skills?
Is it really a nightmare to learn?8 -
I'm delirious so here's your daily dose of fuck:
```fasm
; --- * --- * ---
; 64-bit byte-by-byte mash
macro clamp_u8 {
mov cl,$08;
mov rdx,rax;
rept 8 \{
rol rdx,cl;
xor al,dl;
\};
};
; --- * --- * ---
; give 8-bit random seed
macro prng_u8 {
rdtsc;
shl rdx,32;
or rax,rdx;
clamp_u8;
};
; --- * --- * ---
; roll dice
d20: prng_u8;
; x%20, according to gcc ;>
mov edi,eax;
mov eax,-51;
mul dil;
shr ax,12;
lea eax,[rax+rax*4];
lea edx,[0+rax*4];
mov eax,edi;
sub eax,edx;
; discard high and give
and rax,$FF;
ret;
```
I guess `d20` could be inlined too but I thought it'd be too much.
Is it faster than straight C? Probably not. But it's way lighter, so it loads faster. Below five hundred bytes mother fucker.
Now if you'll excuse me, I'll go sit in the darkness repeatedly typing roll 1d20 into the terminal. For reasons.9
Here's some research into a new LLM architecture I recently built and have had actual success with.
The idea is simple: you do the standard thing of generating random vectors for your dictionary of tokens; we'll call these numbers your 'weights'. Then, for whatever sentence you want to use as input, you generate a context embedding by looking up those tokens and putting them into a list.
Next, you do the same for the output you want to map to; let's call it the decoder embedding.
You then loop and generate a 'noise embedding'; for each vector or individual token in the context embedding, you subtract that token's noise value from that token's embedding value or specific weight.
You find the weight index in the weight dictionary (one entry per word or token in your token dictionary) that's closest to this embedding. You use a version of cuckoo hashing where similar values are stored near each other, and the canonical weight values are actually the key of each key:value pair in your token dictionary. When doing this you align all the random-numbered keys in the dictionary (a uniform sample from 0 to 1) and look at the hamming distance between the context embedding plus noise embedding (called the encoder embedding) and the canonical keys, with each digit from left to right being penalized by some factor f (because digits further left are larger magnitudes), and then penalize or reward based on the numeric closeness of any given individual digit of the encoder embedding at the same index of any given weight i.
You then substitute the canonical weight in place of this encoder embedding, look up that weight's index (in my earliest version), and then use that index to look up the word|token in the token dictionary and compare it to the word at the current index of the training output to match against.
Of course by switching to the hash version the lookup is significantly faster, but I digress.
That introduces a problem.
If each input token matches one output token how do we get variable length outputs, how do we do n-to-m mappings of input and output?
One of the things I explored was using pseudo-markovian processes, where there's one node, A, with two links to itself, B and C.
B is a transition matrix, and A holds its own state. At any given timestep, A may use either the default transition matrix (training data encoder embeddings) with B, or it may generate new ones, using C and a context window of A's prior states.
C can be used to modify A, or it can be used to as a noise embedding to modify B.
A can take on the state of both A and C or A and B. In fact we do both, and measure which is closest to the correct output during training.
What this *doesn't* do is give us variable length encodings or decodings.
So I thought a while and said, if we're using noise embeddings, why can't we use multiple?
And if we're doing multiple, what if we used a middle layer, let's call it the 'key', took its mean over *many* training examples, and used it to map from the variance of an input (query) to the variance and mean of a training or inference output (value)?
But how does that tell us when to stop or continue generating tokens for the output?
Posted on pastebin if you want to read the whole thing (DR wouldn't post for some reason).
In any case I wasn't sure if I was dreaming or if I was off in left field, so I went and built the damn thing, the autoencoder part, wasn't even sure I could, but I did, and it just works. I'm still scratching my head.
https://pastebin.com/xAHRhmfH33 -
I started working at my new job as a programmer (C#, Java, etc.) in a very good programming company.
My first task was to optimise their DB. The DB has indexes and around 3 million rows. The DB is slowwww as fuck.
So I made a Windows service that reorganises the indexes in the DB once a week, on schedule, depending on empty pages and the fragmentation of each index.
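For context, a rough sketch of the check such a service typically runs against SQL Server's sys.dm_db_index_physical_stats DMV; the 5%/30% reorganize/rebuild thresholds are the usual rule of thumb, not something from this specific project, and the weekly scheduling part is left out:
```csharp
// Hedged sketch of an index maintenance pass. Assumes SQL Server and the
// Microsoft.Data.SqlClient package; connection string and scheduling are up to the caller.
using System.Collections.Generic;
using Microsoft.Data.SqlClient;

class IndexMaintenance
{
    public static void Run(string connectionString)
    {
        const string findFragmented = @"
            SELECT s.name, o.name, i.name, ips.avg_fragmentation_in_percent
            FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') ips
            JOIN sys.indexes i ON i.object_id = ips.object_id AND i.index_id = ips.index_id
            JOIN sys.objects o ON o.object_id = ips.object_id
            JOIN sys.schemas s ON s.schema_id = o.schema_id
            WHERE ips.avg_fragmentation_in_percent > 5 AND i.name IS NOT NULL";

        using var conn = new SqlConnection(connectionString);
        conn.Open();

        var actions = new List<string>();
        using (var cmd = new SqlCommand(findFragmented, conn))
        using (var reader = cmd.ExecuteReader())
        {
            while (reader.Read())
            {
                string schema = reader.GetString(0), table = reader.GetString(1), index = reader.GetString(2);
                double frag = reader.GetDouble(3);
                // Common rule of thumb: 5-30% fragmentation -> REORGANIZE, above 30% -> REBUILD.
                string op = frag > 30 ? "REBUILD" : "REORGANIZE";
                actions.Add($"ALTER INDEX [{index}] ON [{schema}].[{table}] {op};");
            }
        }

        foreach (var sql in actions)
        {
            using var cmd = new SqlCommand(sql, conn) { CommandTimeout = 0 };
            cmd.ExecuteNonQuery();
        }
    }
}
```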
But as soon as new rows start coming in, the fragmentation of the indexes just skyrockets.
I tried changing the indexes so that there are actually only the indexes we need.
Can anyone help me figure out how to fix the fragmentation problem so the SELECT queries will be much faster?
Sorry if I don't know the solution, I'm new at this task.
Thank you!7 -
I discovered a language I didn't know AND I like.
It's not under active development anymore, but I decided it has a nice syntax. It's made by the author of Crafting Interpreters. There are still people writing extensions for it.
I decided to implement socket support in it.
That went very well and the result is just BEAUTIFUL. But now I have a collection of socket functions that require a file descriptor (sock) for every call, like write, read and close. We're not living in the 90's. I want to do sock.send(), sock.write() and sock.close(). So: socket as an object.
I wrote a wrapper and it is freaking TWO times slower! How's that even possible?
I've made wrapping into an object optional now. A bit disappointing.
The language shows off with benchmarks on its page. Its fibers can even be faster than Elixir. Yeah, if you only use the fibers and nothing else from the language. I benchmarked string concat, for example, against Python: 1000 times slower or so.
The source code of wren is so freaking beautiful. Before this, Lua was my favorite language as far as source code goes. The extensibility is so great that I prefer to work on this one instead of my own language. They kinda made exactly what I wanted. I can't beat that.
In case you're interested: https://wren.io/
The slot way of communicating between the host language (C) and the child language (wren) seems odd at the beginning, but I became a fan of it.
Thanks for listening to my ted talk.
What's your opinion about wren (syntax)?25 -
Finished my regex validator. But then the edge cases kept coming. It seems that you can do a-d or 3-8. OK, makes sense (otherwise they would just be copies of \w and \d), but has anyone ever seen someone use that? I only knew a-z and 0-9.
Thing is, I've got the perfect design for the interpreter now. Adding features is easy now and not so exciting anymore.
Still, I have a big plan for it that makes it possible to validate nests like (()) or {{"}"}} or anything you define as a start/close tag, while keeping the regex generic. I'm not teaching it that signs between certain chars (" ') have special rules. That would be specialization.
Fun fact: my regex is six times slower than native C code (not C regex) validating the same thing. In half of the test cases it's faster than C regex. I consider it a success.
Thanks for listening8 -
This week was so not productive! Learning C# on the job is hard and I had hoped it would go faster 🙃1
-
Last year, I made an implementation of the A* maze-solving algorithm in class. I used a linked list and my friends used arrays. Their versions were way faster than mine (I remade it later :p).
OK, I understand that accessing memory by address is way faster than walking a list by iteration, but I also see that Python lists or C# lists are really fast. How is it possible to make a list this performance-proof? Does the Python interpreter do a realloc each time you append or pop a value?1
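For the C# half of that question: List&lt;T&gt; avoids a reallocation on every Add by growing its capacity geometrically, so appends are amortized O(1). CPython lists do the same thing with a smaller growth factor (roughly 1.125x plus a small constant), so they don't realloc on every append either. A quick way to see it in C#:
```csharp
// Prints only when Capacity changes: you'll see jumps like 4, 8, 16, 32, 64, 128,
// not a resize on every Add. The exact starting size is an implementation detail.
using System;
using System.Collections.Generic;

class ListGrowth
{
    static void Main()
    {
        var list = new List<int>();
        int lastCapacity = -1;
        for (int i = 0; i < 100; i++)
        {
            list.Add(i);
            if (list.Capacity != lastCapacity)
            {
                Console.WriteLine($"Count={list.Count,3}  Capacity={list.Capacity,3}");
                lastCapacity = list.Capacity;
            }
        }
    }
}
```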
Question to all JS Frontend Devs: What light and performant library can you recommend me today?
I'm a C++ dev who is a bit anxious about performance, load times, etc., and before I stick my nose into vanilla JS, maybe there is something better/faster. I'm a total frontend noob; e.g. I heard that HTML tables aren't a thing anymore and that I should use divs?
My task is quite simple: I want to give users the ability to press some buttons and drag and drop stuff around instead of modifying the URL by hand :/
So if someone could point me in the right direction, that would be awesome!16 -
!rant
Ever find something that's just faster than something else, but when you try to break it down and analyze it, you can't find out why?
PyPy.
I decided I'd test it with a typical Discord-bot-style workload (decoding JSON, theoretically from an API, checking if it contains stuff, formatting it and then returning it). It was... 1.73x the speed of Python.
(Though, granted, this code is more network dependent than anything else.)
Mean +- std dev: [kitsu-python] 62.4 us +- 2.7 us -> [kitsu-pypy] 36.1 us +- 9.2 us: 1.73x faster (-42%)
Me: Whoa, how?!
So, I proceeded to write microbenchmarks for every step. Except for the JSON decoding, each step of that 1.7x-faster whole was at least twice as slow (in one case, one hundred times slower) when tested individually.
The combination of them was faster. Huh.
By this point, I was all "sign me up!", but... asyncpg (the only sane PostgreSQL driver for Python IMO, using prepared statements by default and such) has some of its functionality written in C, for performance reasons. Not Cython, actual C that links to CPython. That means no PyPy support.
Okay then.1 -
So what do you think about the path Oracle is taking with Java now?
The good side is that new versions become usable much earlier: LTS versions, open variants and many JDKs customized for any need you might have.
But there is also the dark side, with their "hey, let's support Java 8 forever, as long as there is someone willing to give us enough money".
I mean, that could be a reason many old-school companies will pay instead of making their shit work with newer versions.
The gap between C# and Java is closing thanks to faster releases, modern features and much more, making it more attractive.
But Oracle is making the whole thing a bit shitty, in my opinion.21
I got a new computer recently. I got it with an Evo 970. I tried installing the Samsung controller software so that I could view the health of the drive.
No go. Why?
Looked around, and everywhere they say to turn off RAID. I checked in the BIOS. It says my drive is not in a RAID volume.
Okay, now what?
Looked at the laptop maker's manual. It says there is a mode that lets you use either VMD or RAID on the drive. Apparently I was in VMD mode. I had already backed up the computer at this point. Yes, I suspected this was coming. So I changed the mode.
No boot.
Okay, I have an Aomei backup and a Linux boot disk I made using Aomei. The Linux boot disk won't boot... Well, fuck.
Luckily I have my old computer and a Windows 10 install disk. I install Windows 10 again, install Aomei and proceed to try and restore.
4 hours later... I dunno how long. I went to bed.
Wake up and test.
No boot.
I try disk repair.
No go.
So I boot into Windows 10 install disk to look at partitions. 5 or 6 fucking partitions. It has installed 3 partitions into the space of one.
Delete all the fucking partitions. Cause fuck you!
Okay, let's try this again.
I make a Windows PE boot disk this time.
It boots.
I do restore while I am at work.
I get home.
No boot.
Check partitions and find only 2. Better than last time.
I try disk repair.
No go.
Search the net. Literally: "Aomei restore no boot"
Someone says: just assign a drive letter to drive C using diskpart.
Seriously?! Disk repair couldn't figure this shit out by context?
Seriously doubting this solution.
Solution works...
Now, I am an engineer/programmer/computer genius. I have been learning how to fix this shit for over 30 years.
How the fuck is Joe Bloe ever going to fix an issue like this? I feel sorry for the technically un-inclined. I honestly don't know why neither Aomei nor Microsoft can solve restoring disk images by setting a drive letter. How did this not get backed up by Aomei? How did this not get detected as one of the most common problems with a disk restore? Why has this been a problem with Aomei restore for over 3 years? I love Aomei. It works most of the time. But this is terrible. The tech world is definitely a shit show at this point in time.
I also read that VMD actually makes communication with the drive a bunch faster. Not sure if the Samsung drivers do the same. So there may be a tradeoff. Oh well. I can see the temperature of my drives now! Woot!2
The project I'm working on is required to be in React Native... I'm a Swift/Obj-C developer. I wrote two lines of React code and then spent the last 3 hours trying to get the dependencies for a single library installed just to run the project again...
Tell me how React Native is a better and faster way to develop apps than native code???
This seems like a waste.11 -
When you hear “Haskell performance”, what comes to your mind? I was never really interested in Haskell since I had Clojure, and I thought Haskell might be slow.
Haskell with GHC is actually as fast as C, or even faster. Haskell compiles down to run right on your hardware, no VM or interpreter.
When a program is small, the performance is comparable to C. Sometimes it's quicker, sometimes not. But when a program is large, the Haskell implementation would be faster, unless you're a robot that generates perfect C code.
It's both very high-level AND very fast. You don't need math to code in Haskell.
Too bad there are no kewl libraries.12 -
Let's play a little stupid game!
I had a dream last night and when I woke up I was wondering:
"Which one of this PadLeft algorithms is faster in your opinion and why?"
I've performed a (100% not scientific) test in C# and have some results that I will share later, I'm curious what do you think first.
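The image with the two algorithms isn't included here, so as stand-ins, two typical hand-rolled PadLeft implementations that people tend to benchmark against each other (and against the built-in string.PadLeft) might look like this:
```csharp
// Purely hypothetical candidates, not necessarily the ones from the original post's image.
using System;

class PadLeftGame
{
    // Candidate 1: prepend one char at a time; allocates a new string every iteration.
    static string PadLeftConcat(string s, int totalWidth, char pad)
    {
        while (s.Length < totalWidth)
            s = pad + s;
        return s;
    }

    // Candidate 2: compute the padding once and allocate a single extra string.
    static string PadLeftAlloc(string s, int totalWidth, char pad)
    {
        return s.Length >= totalWidth ? s : new string(pad, totalWidth - s.Length) + s;
    }

    static void Main()
    {
        Console.WriteLine(PadLeftConcat("42", 8, '0'));   // 00000042
        Console.WriteLine(PadLeftAlloc("42", 8, '0'));    // 00000042
        Console.WriteLine("42".PadLeft(8, '0'));          // what the BCL does for you anyway
    }
}
```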
Let's do this! 😁34 -
When they decided to deprecate the old app that went back to early DOS, they decided to use VB.NET because they'd used some VBA and were familiar with it. Except they had a vague idea that C# was faster and decided to write the OpenGL code in that. Also they had some C++ code and decided to write more of it, accessed by the main program via COM.
I come in and the decision is made to integrate some third-party libs via a C++/CLI layer. On one hand screw COM, but on the other we're now using two non-standard MS C++ extensions. Then we decide we need scripting, so throw in some IronPython.
I'm the build engineer for all this, by the way. No fancy package managers since almost all the third-party dependencies are C++; a few of them are open source with our own hacks layered on top of the regular code, a few are proprietary. When I first started here you couldn't build on a fresh SVN checkout (ugh) without repeatedly building the program, copying DLLs manually, building again, ad nauseum. I finally got sick of being called in to do this process and announced that I was fixing it, which took a solid week of staring at failed compiler output.
Every so often someone wants to update that damn COM library and has to sacrifice a goat to figure out how the hell you get it to accept a new method. Maybe one day I'll do a whole rant just based on COM. -
The Objective-C stdlib is pretty cool; it's what backs the Swift stdlib on Mac. The collection classes dynamically switch backends depending on size and expected performance characteristics. E.g. for a set of 3 items it's faster to linearly search a vector, so it'll switch that out.
https://objc.io/issues/...
I'm not a mac fan but that's some truly artful engineering.
(reposting comment as rant coz I think it's cool)3 -
My main project at work is making a program in C# (right now .NET Standard) that can read scans of invoices sent in by contractors. I've been working on it for almost two years now (with breaks, and only half-time because of university). Alone. And for the last two months I've been redesigning, refactoring and generally making the whole app "better", using the experience and knowledge gained over those two years.
Obviously my boss wasn't happy with that, but I got him to accept it by promising that it'll make the app faster, that extending it will be simpler, and that I'll make the core a separate library that can be used anywhere, not only in the JobRouter ecosystem.
And so I reworked most of the code, made it cleaner (I hope) and a tad quicker. And I was happy with it while testing on a package of invoices. Today I did the first integration with a customer's JobRouter.
The results aren't any better - in some cases they are much worse. Especially when searching for invoice entries, which can come in any shape or form and on any of the document's pages.
I guess, being a junior, I wasn't really up to the task. I'm sick of working on a "guessing" program that has to work with every invoice template users can imagine. I'm sick of not getting any recognition for what I did well. And I'm sick of constantly being pushed to make it work better when I just don't have any more ideas or my skills are simply lacking.
To be honest, I don't know what to do. I'll probably have to work on making it find the data better. But it's not trivial to just look at the code and see the errors. Iterating on the code while working with different invoices worked for a bit in older versions, but I've reached the point where changes that make one invoice read better make another one worse.
It's like those GIFs where you squish one bug and another two appear.
So yeah, I'm currently really doubting my career, skills and intelligence.8
[CONCEITED RANT]
I'm frustrated that I'm better than 99% of the programmers I've ever worked with.
Yes, that might sound conceited.
I work mainly with the C#/.NET ecosystem as a fullstack dev (so also SQL, backend, frontend, etc.), but I'm also forced to use that abhorrent horror that is JS and Angular.
I write readable code, I write simple code that works and rarely, RARELY, causes any problems. The only fancy stuff I do is using the new language features that come with new C# versions, which in the latest versions were mostly syntactic sugar to make code shorter/more readable/easier.
The people I have worked with (a lot of them) mostly overdo, overengineer and overcomplicate code, subdividing it into methods when not needed, fragmenting the code and piling up tons of variables.
People have only ever needed me to explain my code when the codebase was huge (200K+ lines, mostly written by me), so they didn't have to spend hours understanding what's going on, or when the customer requested a new technology, to explain that technology so they didn't have to study it (which is perfectly understandable). (For example, it happened that I was forced to use the DevExpress package because they wanted to port a huge application from .NET 4.5 to .NET 8 and rewriting the whole DevExpress logic had a HUGE impact on costs, so I explained it thoroughly and gave support during development because they didn't know DevExpress.)
I don't write genius code or clever tricks and patterns. My code works, doesn't create memory leaks or slowness, and mostly passes unit tests on the first run. Of course I also introduce bugs and everything, but that's part of the process.
The point is that other people write unreadable code, and when they pass code around you hear rising chaos, people cursing "WTF does this even mean, why did he put that here, what the heck is this even supposed to do", you get the drill. And this happens when I read everyone else's code too.
But the opposite doesn't happen. My code is often readable because I only do code triple-backflips on personal projects, where I don't have to explain anything to anyone and I can learn new things and new coding styles.
Instead, people want to impress at work, and this results in unintelligible, chaotic code, full of bugs, that people can't read. They want to mix in the coolest technologies because they feel their virtual penis growing as they show off that they are bleeding-edge technology experts and all.
They want to experiment on business code at the expense of all the other poor devils who will have to manage it.
Heck, I even worked with a few Microsoft MVPs.
Those are deadly. They're superfast code-throughput people who combine a lot of stuff.
Then they leave the problems to you once they're gone.
One MVP guy on a big project for digital acquisition of paperwork for a big company (a huge project I got called to work on, consisting of a backend and a frontend web portal) pushed at all costs to put another CDN web project and another Identity Server project in the middle, to do caching with the CDN "to make it faster" and SSO (single sign-on) with Identity Server.
We had to do gruesome work to deal with the browser's poor caching management, and when he left, the SSO server started looping after authentication at random intervals, and I had to fix the nasty stuff he put in with days of debugging.
People definitely can't code, except me.
They have this "first of the class" syndrome, which goes as far as their skill allows, and they try to do code backflips when they can't even do code pushups, to put it in physical-exercise terms.
And most people are like this. They will deny it and won't admit it; they believe they're good at it, but in reality they aren't.
There are some geniuses out there who write revolutionary code and maybe need to write horrible code to do amazing stuff, and that's OK. And there are also a few people like me, with whom you can work and produce great stuff.
I found one colleague like this and we had an $800,000 (yes, 800k) project in .NET, which consisted of the renewal of 56 web services, 3 web portals and 2 WinForms applications for our country's main railway transport system. The two of us worked on it, with a PM from the railway company.
It was estimated at 14 months of work, we took 11, and everything worked wonders. We had a ton of fun doing it, also because their PM was a cool guy, and we did an awesome project and the codebase was a jewel. The only difficult thing you couldn't grasp from reading the code is how railway systems work, and that's it.
Sigh, these people are making me sick of this job11
So which do you think would be faster for detecting related points in an image within a certain threshold?
A. Scan a line at a time and define a rectangle surrounding the shape?
B. Starting at a pixel, find values in each direction within a tolerance and recurse on each point found with the same function?
C. Do something similar to the above, but try to find the edges by finding the last point before blank space to get a shape?
D. Identify all line segments on the horizontals, verticals and diagonals and see which ones intersect? Omg, I asked this before. After discovering all the points that are within the threshold and iterating through those alone?
E. Is there another goddamn method ??? Lol6 -
It's always a matter of how much there is to do and in what language...
There is the IDE zone, which is dominated by IntelliJ (CLion be praised when you do Rust or C++) for large stuff and heavy refactorings.
Always disputed by VS Code with synced settings. It's nice and comfy and has every imaginable language supported well enough, especially when it's a smaller change in native code or web/scripting stuff.
Then there is the "small changes" space, where Vim and VS Code fight over who's faster or which way sticks better in my brain...
Might be that you SCP stuff down from a box and edit it to re-upload, or that you use the ever-present vi (no "m", unfortunately).
Sometimes things are easier with multi-caret editing (Ctrl-D or Alt-J), and sometimes you just want to ":%s/foo/bar/g" in vim.
I am sure that each of these things is perfectly possible in each of the editors, but there are just reflexes in my editor choices.
I try to stay flexible and discover the strengths of each of my weapons of choice, and I did change favorites over time. (Atom, Brackets, Eclipse, Netbeans, ...)
However, there are some things I tried often and they simply don't work for me...
They might for you. I don't care. And I'll just use some space to piss people off, because this is supposed to be a rant:
nano just feels wrong, emacs is a pestilence from Satan that was meant for tentacles instead of fingers, sublime costs money but shouldn't (which gives me a constant guilty feeling, and I don't like that), and all the editors from various desktop environments are wasted developer resources.
Hi devRant. Wanna rant some shit about my company. First, some good parts. I work at a company with 600+ employees. It's one of the best companies in my region. They provide you with all kinds of sweets (cookies, coffee, tea, etc.), any hardware you need for your work (an additional monitor, more RAM, SSDs, a better processor, graphics card, whatever), just about everything you need to make your work faster and more comfortable. Then, we have regular reviews (every 6 months), which raise your salary by $0.75 to $1.5 per hour. (I live in a poor country, where $15 per hour makes you more solvent than 70% of people, so having a 100-200 buck increase every half year is a pretty good raise.)
The resulting raise from a review depends on how satisfied the team leader and project manager are with my work. And here the interesting part starts (i.e. the shit comes in).
1) Seniority level in our company depends on the salary you have. That's right. It does not depend on your skill. Except when you're applying for a vacancy. So if you say you're a senior dev and prove it during the interview, you'll get a senior's salary. This is fine if you just want money. But not if you love programming (like me), because of the reasons below.
2) You don't need much programming experience to be a team leader. You can even be a junior team leader (but thank god, on research projects only). You start by leading research projects and then move to billable ones if the director of the research department is satisfied with your leading skills.
As a consequence, our seniors are dumb AF. This pisses me off the most. Not all of them. I would say half of them are real pros, but the rest suck at programming (for a senior). They are around junior/middle level.
I can understand if a guy has a $15 rate but still remains a junior dev. That's fine. But hell no, he is treated as a middle, because his rate is $10+ now! And his opinion has priority over middles and juniors. Not that juniors have a lot of good thoughts, but sometimes they do.
I'm lucky to still be working on a small project, so I'm the only dev and, so to speak, my own TL. But my colleague has this kind of senior team leader who is dumb AF. They work on an ASP.NET Core project, and the senior does not even know how to properly write generic constraints in C#. Seriously.
Just look at this shit. Instead of
class MyClass<T> where T : class { }
he does this:
abstract class EnsureClass { }
class MyClass<T> where T : EnsureClass { }
He writes an empty abstract class and forces other classes to inherit it (thus wasting their ability to inherit some useful class) just to ensure that the generic T is a class. What thA FUCK is wrong with you, dude?! You're a senior dev and you don't even know the language you're coding in.
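To spell out why this hurts: C# allows only a single base class, so forcing an artificial one burns that slot for nothing, while `where T : class` gives the same guarantee purely at compile time. A minimal comparison (all names made up):
```csharp
// The workaround: an empty marker base class...
abstract class EnsureClass { }

// ...forces every payload type to inherit it, wasting its one base-class slot.
class Invoice : EnsureClass { }                 // can no longer inherit e.g. DocumentBase
class Container<T> where T : EnsureClass { }

// The built-in way: same guarantee ("T must be a reference type"), no inheritance tax.
class BetterContainer<T> where T : class { }

class Program
{
    static void Main()
    {
        var a = new Container<Invoice>();        // only works for EnsureClass descendants
        var b = new BetterContainer<string>();   // works for any reference type
        System.Console.WriteLine($"{a}, {b}");
    }
}
```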
And this shit is all over the company. Every monkey who had enough skill just not to be fired and enough patience to work 4-5 years becomes a senior! No-fucking-body cares about or reviews your skill growth. The whole review is the department director asking the TL and PM questions like "how is this guy doing? Is he OK or should we fire him?" That's the whole review. If the TL does not like you, he can leave a bad review and the company will put you on trial. If you confront the TL during this period, pack your suitcase. I know two cases of such shit personally. A good, skilled guy simply could not find a common language with his TL and got fired. And the cherry on top is that they don't care about the fired dev's opinion. They will only listen to the reviewer. This is just absurd and it just boils me down.
That's all I wanted to say. Thanks for your attention.
I just read about the Thunderbolt port vulnerability, but it seems that port is interchangeable with USB-C, just faster? So basically any USB-C ports are vulnerable then? Or just the rounded ones like on phones?
Normal USB 3 rectangular ports on PCs are fine?5 -
!rant
It's more of a question. I am thinking of changing my Linux distro from Lubuntu to Arch Linux or Gentoo.
My main reason is that I want the customizability and freedom that Linux offers, and I also want to build my distro from the ground up.
The second reason is that I want to switch things up a bit. I have been using Lubuntu for 2-3 years and it has worked great for me, especially because I have an older laptop (Asus K53E) and Windows 7 ran really slowly on it. With this distro everything works much faster, and despite being minimalistic it has all the features and tools for programming that I need.
I have also used other distros before this one. Some that I can remember are Ubuntu, Xubuntu, Mint and Bodhi Linux.
I would say that I am quite familiar with the terminal, and I have also written some bash scripts of roughly this complexity: https://github.com/RokKos/..., https://github.com/RokKos/...
But my main concern is that I might fail to install either of these two distros, or that I would damage my computer beyond repair...
So my main questions are:
What is your experience with these two distros?
Did you have any trouble installing and setting up the distro?
What is the overall experience with these two distros?
Was it worth switching to either of these two?
You could also share which distro you are using and maybe some rants that occurred while using them.14
Hi everyone, how's it going today? I've been learning a lot lately. Question: when working with lib2cpp.so files, what's the best inspector for them? And what do these files contain? (example: gamelib.so)
I know a .so file is C++, so I think it has something to do with offsets and memory ranges, something like that.
But I'm trying to open one lol
We have moved to AndLua and I have learned the API fully
app: https://andnixsh.com/2020/05/...
The AndLua+ app is a lightweight scripting tool that lets you easily do script programming and testing on your Android phone. This is a very useful tool for those who need scripting (for Android development or modding). AndLua+ is based on the open-source Lua project. It uses the simple and beautiful Lua language, which simplifies cumbersome Java statements. At the same time, it supports the use of most Android APIs, free installation and debugging, and makes development on your mobile phone easier and faster. The permissions requested are only so that the programs you write can use them, so please rest assured.