Search - "event handling"
-
I've dreamed of learning business intelligence and handling big data.
So I went to a university info event today for "MAS Data Science".
Everything sounded great. Finding insights in complex datasets, check! Great prospects and salary, check!
Yay! 😀
Only after an hour did they explain that the main focus of this course is on running a library, museum or archive. 😟 Huh, why? WTF?
Turns out they've relabelled their librarian education course as "Data Science" to get people's attention.
Hey you cocksuckers! I want my 2 hours of wasted lifetime back!
Fuck this "English title whitewashing"
-
Okay, story time.
Back during 2016, I decided to do a little experiment to test the viability of multithreading in a JavaScript server stack, and I'm not talking about the Node.js way of queuing I/O on background threads, or about WebWorkers that box and convert your arguments to JSON and back during a simple call across two JS contexts.
I'm talking about JavaScript code running concurrently on all cores. I'm talking about replacing the god-awful single-threaded event loop of ECMAScript – the biggest bottleneck in software history – with an honest-to-god, lock-free thread-pool scheduler that executes JS code in parallel, on all cores.
I'm talking about concurrent access to shared mutable state – a big, rightfully-hated mess when done badly – in JavaScript.
This rant is about the many mistakes I made at the time, specifically the biggest – but not the first – of which: publishing some preliminary results very early on.
Every time I showed my work to a JavaScript developer, I'd get negative feedback. Like, unjustified hatred and immediate denial, or outright rejection of the entire concept. Some were even adamantly trying to discourage me from this project.
So I posted a sarcastic question to the Software Engineering Stack Exchange, which was originally worded differently to reflect my frustration, but was later edited by mods to be more serious.
You can see the responses for yourself here: https://goo.gl/poHKpK
Most of the serious answers were along the lines of "multithreading is hard". The top voted response started with this statement: "1) Multithreading is extremely hard, and unfortunately the way you've presented this idea so far implies you're severely underestimating how hard it is."
While I'll admit that my presentation was initially lacking, I later made an entire page to explain the synchronisation mechanism in place, and you can read more about it here, if you're interested:
http://nexusjs.com/architecture/
But what really shocked me was that I had never understood the mindset that all the naysayers adopted until I read that response.
Because the bottom-line of that entire response is an argument: an argument against change.
The average JavaScript developer doesn't want a multithreaded server platform for JavaScript because it means a change of the status quo.
And this is exactly why I started this project. I wanted a highly performant JavaScript platform for servers that's more suitable for real-time applications like transcoding, video streaming, and machine learning.
Nexus does not and will not hold your hand. It will not repeat Node's mistakes and give you nice ways to shoot yourself in the foot later, like `process.on('uncaughtException', ...)` for a catch-all global error handling solution.
No, an uncaught exception will be dealt with as in any other self-respecting language: by not ignoring the problem and pretending it doesn't exist. If you write bad code, your program will crash, and you can't fix a bug by ignoring its existence and duct-taping something together around it.
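For reference, this is the Node.js escape hatch in question: a minimal sketch of the catch-all pattern being criticised, using the real `process.on` API.

```javascript
// The catch-all handler the rant criticises: once it fires, the
// process may hold corrupt state, yet it keeps running as if
// nothing happened.
process.on('uncaughtException', (err) => {
  console.error('Something broke, carrying on anyway:', err.message);
  // No re-throw, no process.exit() -- the bug is duct-taped over.
});

// Without the handler above, this throw would crash the process
// (arguably the correct behaviour). With it, the error is swallowed.
setTimeout(() => { throw new Error('invariant violated'); }, 0);
```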
Back on the topic of multithreading, though. Multithreading is known to be hard, that's true. But how do you deal with a hard problem? You simplify it and break it down; you don't just disregard it completely, because multithreading has great advantages, too.
Like, how about we talk performance?
How about distributed algorithms that don't waste 40% of their computing power on agent communication and pointless overhead (like the serialisation/deserialisation of messages across the execution boundary for every single call)?
How about vertical scaling without forking the entire address space (and thus multiplying your application's memory consumption by the number of cores you wish to use)?
How about utilising logical CPUs to the fullest extent, and allowing them to execute JavaScript? Something that isn't even possible with the current model implemented by Node?
Some will say that the performance gains aren't worth the risk. That the possibility of race conditions and deadlocks aren't worth it.
That's the point of cooperative multithreading. It is a way to smartly work around these issues.
If you use promises, they will execute in parallel, to the best of the scheduler's abilities, and if you chain them then they will run consecutively as planned according to their dependency graph.
If your code doesn't access global variables or shared closure variables, or your promises only deal with their provided inputs without side-effects, then no contention will *ever* occur.
If you only read and never modify globals, no contention will ever occur.
Are you seeing the same trend I'm seeing?
Good JavaScript programming practices miraculously coincide with the best practices of thread-safety.
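A minimal sketch of that coincidence, as hypothetical code rather than anything from the Nexus API: promise chains that only touch their own inputs, which a parallel scheduler could safely run on any thread without locks.

```javascript
// Pure transformations: no globals, no shared closure state, no
// side effects. A parallel scheduler is free to run these on any core.
const parse  = (raw)    => Promise.resolve(JSON.parse(raw));
const enrich = (record) => Promise.resolve({ ...record, seen: true });

// Independent inputs: the three chains can execute concurrently.
// Within one chain, .then() steps still run consecutively, exactly
// as the dependency graph dictates.
const inputs = ['{"id":1}', '{"id":2}', '{"id":3}'];

Promise.all(inputs.map((raw) => parse(raw).then(enrich)))
  .then((records) => console.log(records));
```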
When someone says we shouldn't use multithreading because it's hard, do you know what I like to say to that?
"To multithread, you need a pair."18 -
I've optimised so many things in my time I can't remember most of them.
Most recently, something had to be the equivalent of `"literal" LIKE column` with a million rows to compare. Each lookup took around a second on average, for a service that needs to handle high load at low latency. This isn't an easy case to optimise; many people would consider it impossible.
It took me a couple of hours to reverse engineer the data and write a few-hundred-line implementation that does the lookup in 1ms on average, with the worst case being very rare and not too far from that.
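The rant doesn't say how the lookup worked, so here's one plausible reconstruction, assuming the column held prefix-style patterns such as `4917%`: index the patterns in a trie once, then walk it per literal instead of scanning a million rows.

```javascript
// Hypothetical sketch: the table's patterns (e.g. '4917%') are
// loaded into a trie once, so matching a literal costs O(literal
// length) instead of a million LIKE comparisons per lookup.
class PatternTrie {
  constructor() { this.root = {}; }

  add(pattern) {
    let node = this.root;
    for (const ch of pattern) {
      if (ch === '%') { node.wildcard = pattern; return; } // prefix pattern
      node = node[ch] ??= {};
    }
    node.exact = pattern; // no wildcard: must match exactly
  }

  // Returns the most specific stored pattern matching the literal.
  match(literal) {
    let node = this.root;
    let best = null;
    for (const ch of literal) {
      if (node.wildcard) best = node.wildcard; // longest prefix so far
      node = node[ch];
      if (!node) return best;
    }
    return node.exact ?? node.wildcard ?? best;
  }
}

const trie = new PatternTrie();
['4917%', '4930%', '1800FLOWERS'].forEach((p) => trie.add(p));
console.log(trie.match('49171234567')); // -> '4917%'
console.log(trie.match('1800FLOWERS')); // -> '1800FLOWERS'
console.log(trie.match('5551234'));     // -> null
```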
In another case there was a lookup over arbitrary time spans that most people would not bother to cache, because the input parameters are too short-lived and variable to make a difference. I replaced the 50,000+ line application acting as a middleman between the application and the database with 500 lines of code that did the lookup faster and could implement a reasonable caching strategy. This dropped resource consumption by a factor of ten at minimum: misses were cheaper, and most cases could be served from cache. It also involved modifying the client library in C to stop it needlessly wrapping primitives in objects for the high-level language, which was making it consume excessive amounts of memory when processing huge data streams.
Another system would constantly download a huge dataset for every point of sale, then parse and apply it. It had to reflect changes quickly, but it downloaded the whole dataset, hundreds of thousands of rows, every time. I whipped up a system where a single server (barring redundancy) downloads it in a loop, parses it in C (much faster than the traditional interpreted language), then uses a custom data-differential format, a TCP streaming protocol, binary serialisation and LZMA compression to pipe it down to the points of sale. The protocol also used versioning for catch-up and combined differentials for a further reduction in size. The system went from being 30 seconds to a few minutes behind to keeping within a second of changes. It had also been using so much bandwidth that it would hit the cap on ADSL connections and get throttled; looking at the traffic stats afterwards, the drop was so steep you'd think all the machines had been turned off. Usage went from dozens of terabytes a month to around a gigabyte a month for several hundred machines. It could now happily run over GPRS or 56K.
I was working on a project with a lot of data and noticed these huge tables and horrible queries. The tables were all the results of queries: someone had written terrible SQL, then "optimised" it by running it in the background with every possible variable value and storing the results of the joins and aggregates in new tables, with more SQL written on top of those. I wrote new queries and query generation that wiped out thousands of lines of code immediately and operated on the original tables, taking things down from 30GB (and rapidly climbing) to a couple of GB.
Another time, a piece of mathematics had to generate all possible permutations, and the existing solution ran in factorial time. I worked out how to optimise it to run in n*n time, which, believe it or not, made a world of difference: it went from hardly handling anything to handling anything thrown at it. The tricky part was getting people to "freeze the system now".
I built my own frontend system (admittedly rushed) that does what Angular/React/Vue aim for, but with higher (maximum) performance, including an in-memory database backing the UI with layered, event-driven indexes. It could handle referential integrity (an overlay on the database revealing only items with valid integrity) and process reordering and repositioning events very rapidly using a custom AVL tree. You could layer partial and dynamic indexes over it (data inheritance).
So many times have I optimised things on autopilot, just cleaning up code as I go. Hundreds, thousands of optimisations. It's what makes my clock tick.
-
Whelp. I started making a very simple website with a single-page design, which I intended to use for managing my own personal knowledge on a particular subject matter, with some basic categorization features and a simple rich text editor for entering data. Partly as an exercise in web development, and partly due to not being happy with existing options out there. All was going well...
...and then feature creep happened. Now I have implemented support for multiple users with different access levels; user profiles; an encrypted login system (and encrypted cookies that contain no sensitive data lol) with session handling according to (perceived) best practices; secure password recovery; a user-management interface for admins; public, private and group-based sections with multiple categories and posts in each category that can be sorted by sort-order value or drag and drop; custom user-created groups where users can give others access to their sections; notifications; context menus for everything; a post & user flagging system, moderation queue and support system; post revisions with comparison between different revisions; support for mobile devices and touch/swipe gestures to open/close menus or navigate between posts; easily extensible CSS themes with two different dark themes and one ugly-as-heck light theme; lazy loading of images in posts that won't load until you actually open them; auto-saving of posts in case of browser crash or accidental navigation away from the page; plus various other small stuff like syntax highlighting for code, internal post linking, favouriting of posts, a free-text filter, no-JavaScript mode, an invitation system, secure (yeah right) image uploading, post-locking...
On my TODO-list: Comment and/or upvote system, spoiler tag, GDPR compliance (if I ever launch it haha), data-limits, a simple user action log for admins/moderators, overall improved security measures, refactor various controllers, clean up the code...
It STILL uses a single-page design, and the amount of feature requests (and bugs) added to my Trello board increases exponentially with every passing week. No other living person has seen the website yet, and at the pace I'm going, humanity will have gone through at least one major extinction event before I consider it "done" enough to show anyone.
help
-
# Retrospective as a Backend Engineer
Once upon a time, I was rejected by a startup that tried to snag me from another company I was working with.
They were looking for a Senior / Supervisor level backend engineer, and my profile looked like a fit for them.
So they contacted me and arranged a technical test, a system design test, and an interview with their lead backend engineer, who also happens to be a co-founder of the startup.
## The Interview
As usual, they asked me about my contributions to previous workplaces.
I answered with the achievements I consider the best for each company I worked with, and how they were achieved technically.
One of them was designing and implementing a `CQRS+ES` system in the backend,
complete with what I bragged about as a `Time Machine`: the ability to reconstruct past state by replaying events.
## The Rejection
And of course I was rejected by the startup, maybe specifically by the co-founder; that's what an insider told me when I asked about the reason for the rejection.
They insisted I'm a guy who overengineers things nobody needs: in their view, `CQRS+ES` is only suitable for R&D, not for production.
Nobody needs that kind of `Time Machine`.
## Ironically
After switching jobs (to another company), becoming a fullstack developer, and learning React and Redux,
I can reflect back on this past experience and say this:
The same company that said `CQRS+ES` was over-engineering also uses `React+Redux`.
Never did they realize the concept behind `React+Redux` is very similar to `CQRS+ES`:
- Separation of concerns
  - CQRS: `Command` is separated from `Query`
  - Redux: side effects / `Action`s in `Thunk`s are separated from the presentation
- Managing application state
  - ES: through the sequence of `Event`s produced by `Command`s
  - Redux: through the action data produced / dispatched by `Action`s
- Replayability (see the sketch below)
  - ES: by replaying `Event`s into the `Applier`
  - Redux: by replaying `Action`s, which are dispatched to the `Reducer`
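To illustrate the parallel, here's my own sketch (not code from either ecosystem): one pure function serves as both a Redux-style reducer and an ES-style applier, and replaying a prefix of the history is exactly the `Time Machine` from the interview.

```javascript
// One pure function, two vocabularies: as a Redux reducer it folds
// actions into state; as an ES applier it folds events into an
// aggregate. Replay = reduce over the stored history.
const apply = (state, event) => {
  switch (event.type) {
    case 'DEPOSITED': return { ...state, balance: state.balance + event.amount };
    case 'WITHDRAWN': return { ...state, balance: state.balance - event.amount };
    default:          return state;
  }
};

const history = [
  { type: 'DEPOSITED', amount: 100 },
  { type: 'WITHDRAWN', amount: 30 },
  { type: 'DEPOSITED', amount: 5 },
];

// The "Time Machine": state at any point in time is just a replay
// of a prefix of the event log.
const stateAt = (n) => history.slice(0, n).reduce(apply, { balance: 0 });
console.log(stateAt(2)); // { balance: 70 }
console.log(stateAt(3)); // { balance: 75 }
```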
---
The same company that said `CQRS` was over-engineering also uses `ElasticSearch+MySQL`.
Never did they realize that separating the `WRITE` database (`MySQL`, their `Single Source Of Truth`) from the `READ` database (`ElasticSearch`) is also in line with the `CQRS` principle.
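A sketch of that very split, with hypothetical table and index names (using the `mysql2` and `@elastic/elasticsearch` client APIs): writes land in the source of truth, a projection step feeds the read side, and queries never touch MySQL.

```javascript
// Hypothetical sketch of the MySQL-write / Elasticsearch-read split.
const mysql = require('mysql2/promise');
const { Client } = require('@elastic/elasticsearch');

const es = new Client({ node: 'http://localhost:9200' });

async function createProduct(db, product) {
  // WRITE side: MySQL is the single source of truth.
  const [result] = await db.execute(
    'INSERT INTO products (name, price) VALUES (?, ?)',
    [product.name, product.price]
  );
  // Projection: denormalise into the READ side for search.
  await es.index({
    index: 'products',
    id: String(result.insertId),
    document: { name: product.name, price: product.price },
  });
  return result.insertId;
}

async function searchProducts(term) {
  // READ side: queries are served entirely by Elasticsearch.
  const res = await es.search({
    index: 'products',
    query: { match: { name: term } },
  });
  return res.hits.hits.map((h) => h._source);
}
```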
## Value as Backend Engineer
These are sad days for Backend Engineers, at least in the country I live in.
It seems being a backend engineer is often under-appreciated.
Companies (and people) seem to think a backend engineer is the guy who ONLY makes `CRUD` API endpoints over a database.
- I've heard a Fullstack engineer with a React background complain that Backend engineers have it easy, only doing CRUD without having to worry about the application.
- The same guy failed when given the Backend task of making a simple round-robin ticketing system.
- I've seen a company that only hires Fullstack engineers with strong Frontend experience lack a basic understanding of how SQL transactions and connection pools work.
- I've seen Fullstack engineers rely on an ORM for super complex queries instead of writing proper SQL, preferring to translate SQL into the ORM's query language.
- I've seen a Fullstack engineer with a strong React background brag about Uncle Bob's clean code yet fail at basic dependency injection.
- I've heard a company that builds webapps criticize my way of handling `session`s through secure HTTP cookies, saying it's bad practice and that local storage is better, despite my arguments about the `secure` flag on the cookie and the backend's ability to control cookies (a sketch of that approach below).
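For reference, a minimal sketch of the cookie-based session handling being defended, using Express's real `res.cookie` API (the route and names are illustrative):

```javascript
const express = require('express');
const crypto = require('crypto');
const app = express();

app.post('/login', (req, res) => {
  // ...authenticate the user first (omitted)...
  const sessionId = crypto.randomBytes(32).toString('hex');

  // httpOnly: scripts (and therefore XSS payloads) can't read it,
  // unlike localStorage. secure: only ever sent over HTTPS.
  // And the backend stays in control: it can expire, rotate or
  // revoke the session at will.
  res.cookie('sid', sessionId, {
    httpOnly: true,
    secure: true,
    sameSite: 'lax',
    maxAge: 30 * 60 * 1000, // 30 minutes
  });
  res.sendStatus(204);
});

app.listen(3000);
```
-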
Just came across a site with TERRIBLE CSS and some sort of event handling to block the user from accessing the dev tools.
So in my Chrome, no F12 and no Ctrl+Shift+I.
Well, guess what. The Tools menu in the title bar also contains a link to Developer Tools.
Me: 1 Them: 0
-
ChatGPT, Copilot, React, how to make a link in a frontend website?
To create a link in a frontend website, create a span, a div, or a paragraph that contains the link text. In your JavaScript web app, add an event listener to that element that opens the link on click. If you want to claim you're accessible, add an aria role to the clickable element. To make debugging harder and only possible for real arcane experts, let your framework generate generic ids and class-name hashes for styling and event handling, like "item_09fcfck" or "elementor_element_foo_bar". Avoid, at all costs, using an a href element!
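In case the sarcasm needs decoding, here's a sketch contrasting the mocked anti-pattern with the boring, correct element (identifiers are illustrative):

```javascript
// The satirised anti-pattern: a clickable div doing <a>'s job.
const fakeLink = document.createElement('div');
fakeLink.textContent = 'Read more';
fakeLink.id = 'item_09fcfck';              // generated, undebuggable id
fakeLink.setAttribute('role', 'link');     // bolted-on accessibility
fakeLink.tabIndex = 0;                     // bolted-on focusability
fakeLink.addEventListener('click', () => { location.href = '/article'; });
document.body.append(fakeLink);

// What the rant actually advocates: one native element that is
// crawlable, keyboard- and middle-click-friendly by default.
const realLink = document.createElement('a');
realLink.href = '/article';
realLink.textContent = 'Read more';
document.body.append(realLink);
```
-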
ASP.NET Web Forms?
Can't tell how many times I printed out the page lifecycle diagram for myself or a coworker. So many hours lost trying to figure out which lifecycle hook to use for a specific scenario, only to have it all break down because something new was added to the feature. Or figuring out when data can be bound, or hacking around things breaking when handling a POST event or some shit.
Overly abstract piece of technological excrement. Might as well express the thing in contemporary dance and check that into source control instead of that ungodly mess.
The switch to AJAX and API calls was such a huge relief it's almost hard to explain in words (I can do a dance tho). And then upgrading to AngularJS, man, worlds apart...
I don't care how much they pay me (okay, you got me...), I'm never touching Web Forms again.
-
In most businesses, self-proclaimed full-stack teams are usually more back-end leaning as historically the need to use JS more extensively has imposed itself on back-end-only teams (that used to handle some basic HTML/CSS/JS/bootstrap on the side). This is something I witnessed over the years in 4 projects.
Back-end developers looking for a good JS framework will inevitably land on the triad of Vue, React and Angular, elegant solutions for SPA's. These frameworks are way more permissive than traditional back-end MVC frameworks (Dotnet core, Symfony, Spring boot), meaning it is easy to get something that looks like it's working even when it is not "right" (=idiomatic, unit-testable, maintainable).
They then use components as if they were simple HTML elements, injecting the initial state via attributes (props), skipping event handling, and immediately adding state-store libraries (Vuex, Redux). They aren't aware that updating a single key of a 1000-key object passed as a prop is nefarious for rendering performance (see the sketch below). They also read something about SSR and immediately add Next.js or Nuxt.js, a custom Node express.js proxy, and npm install a ton of "ecosystem" modules like webpack loaders that will become abandonware in a year.
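A sketch of that prop pitfall (component names are illustrative): handing a memoized child the whole store object means any change produces a new reference and defeats the shallow compare, while passing only the fields it reads keeps re-renders contained.

```javascript
import React, { memo } from 'react';

// Anti-pattern: the memoized child receives the entire 1000-key
// store. Updating any single key creates a new object reference,
// so memo's shallow compare always fails and the child re-renders.
const HeavyProfile = memo(({ store }) => <div>{store.user.name}</div>);

// Better: pass only the primitives the child actually reads.
// Updates to unrelated keys leave these props identical, so the
// shallow compare succeeds and the re-render is skipped.
const LightProfile = memo(({ name }) => <div>{name}</div>);

function App({ store }) {
  return (
    <>
      <HeavyProfile store={store} />          {/* re-renders on every store change */}
      <LightProfile name={store.user.name} /> {/* re-renders only when the name changes */}
    </>
  );
}

export default App;
```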
After 6 months you get: 3 basic forms with a few fields, regressions, 2MB of JS, missing basic a11y, unmaintainable translation files & business logic scattered across components, an "outdated" stack that logs 20 deprecation notices on npm install, a component library that is hard to unit-test, validate and update, completely vendor-& version locked in and hundreds of thousands of wasted dollars.
I empathize with the back-end devs: JS frameworks should not brand themselves as "simple" or "one-size-fits-all" solutions. They should not treat their audience as if it were fully aware of and able to use concepts like composition, immutability, and custom "hooks" paired with the quirks of JS, and especially of WHEN they are a good fit.
-
I asked a team member about a simple update to a customer's country/state code, and he started saying we should build an event handler and publish an event, and I'm like "woah woah... this is before they're engaged."
Thankfully he agreed that, in that case, my original idea of just handling the update was correct. Phew... that could have been way more complicated than it needed to be.
-
The joys of being a multi-project, multi-language developer! You think you'll juggle a couple of balls, but suddenly you're in a full-blown circus act, with chainsaws, flaming torches, and a monkey on your back yelling "more features!"
In the morning, you're all TypeScript: "Yes, of course, types make everything more reliable!" By lunch, you're neck-deep in Python and realize types are a vague suggestion at best, leaving you guessing like some bug-squashing mystic. And then just when you’ve finally wrapped your head around that context switch, FastAPI starts demanding things that make you wonder, "Why can’t we all just get along and be JavaScript?"
Oh, and don’t even get me started on syntax. One minute it’s req.body this and express.json() that. The next, Python’s just there with a smug look, saying, "Indentation is my thing, deal with it!" And don’t look now, because meanwhile, Stripe’s trying to barge in with a million webhooks, payment statuses, and event types like “connect” and “payment,” each a subtle bomb to blow up your error logs.
Of course, every language has its "elegant" way of handling errors—which, translated, means fifty shades of “Why isn’t this working?” in different flavors! But hey, at least the machines can’t see us crying through the screen.
-
Guys, I know this is not Stack Overflow, but do you think it would be overkill to use RxJava for the event handling of a simple Uber-like app?