Search - "client-side processing"
-
So the server was always slow and maxed out at 100%, and wait times were hitting 5+ minutes while it tried to catch up. The boss complained at me and said if I could get the wait time down to 30 seconds or instant, he'd raise my pay to 90k a year, then walked away after I agreed. I was quite serious, but I don't think he thought I was. So I decided to look over the system. IDK who did it, but they had put all the calculations and processing server-side for the CAs on the floor, then sent the completed view down to each CA. I spent months recreating the entire system so that the server only pulled the data needed and the new client did all the processing on the CA's own computer, since those machines weren't doing anything anyway. I did a practice run today, as it's one of our peak days: wait times dropped to barely 5 seconds, or "instant" according to the CAs. After just two hours I walked into the office, slapped that hourly report down, and showed the massive increase in employee production.
That look on his face...
That look on my face...
That look on my next check...
Bliss
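The gist of the change, as a minimal sketch (the endpoint and field names here are invented for illustration; the rant doesn't show the real system): the server serves raw rows, and the idle client machines do their own aggregation.

```typescript
// Before: the server rendered the completed view for every CA.
// After: the server only returns raw data; the client computes the view.
interface SaleRow {
  agentId: string;
  amount: number;
}

async function buildHourlyReport(): Promise<Map<string, number>> {
  // Hypothetical endpoint standing in for "pull only the data needed".
  const res = await fetch("/api/sales?window=1h");
  const rows: SaleRow[] = await res.json();

  // All processing happens client-side, on CPU cycles that sat idle before.
  const totalsByAgent = new Map<string, number>();
  for (const row of rows) {
    totalsByAgent.set(row.agentId, (totalsByAgent.get(row.agentId) ?? 0) + row.amount);
  }
  return totalsByAgent;
}
```
-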
So you're telling me I can run tensorflow.js in the browser and use the user's GPU to power my models!
Oh Yeaaaaaaaaaaah!
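For the sceptical: yes, this is real. With the WebGL backend, tensorflow.js runs its tensor math on the visitor's GPU. A minimal sketch:

```typescript
import * as tf from "@tensorflow/tfjs";

async function main() {
  await tf.setBackend("webgl"); // use the user's GPU via WebGL shaders
  await tf.ready();
  console.log("backend:", tf.getBackend());

  // A 1000x1000 matrix multiply, executed on the GPU.
  const a = tf.randomNormal([1000, 1000]);
  const b = tf.randomNormal([1000, 1000]);
  const c = tf.matMul(a, b);

  console.log("first element:", (await c.data())[0]);
  tf.dispose([a, b, c]); // free GPU memory explicitly
}

main();
```
-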
Never heard of such a terribly designed online game.
For starters, the client-server model is: process everything on the client, then save it on the server. And due to the nature of the site design, simply changing a tag will give you more money.
The PayPal processing system doesn't read any headers or anything of that sort. So if you cancel your payment, this game thinks you've paid anyways.
Also, the trading system is based on what buttons you can see, so if you can see the cancel button it must be yours. So if you copy the cancel button onto someone else's trade offering (FYI, this is all done locally) and you click it, you've gotten said item(s).
It gets worse, but I don't remember much more than that. The one thing they actually do is make session IDs expire.
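The fix for the trade exploit is the oldest rule in the book: the server, not the visible UI, decides who owns what. A hedged sketch of the missing server-side check (all names invented; the game's real code obviously isn't public):

```typescript
interface Trade {
  id: string;
  ownerId: string;
  status: "open" | "cancelled";
}

// Server-side handler: ownership is checked against server state,
// never against "the client could see the cancel button".
function cancelTrade(trade: Trade, requestingUserId: string): Trade {
  if (trade.ownerId !== requestingUserId) {
    throw new Error("Forbidden: you do not own this trade");
  }
  return { ...trade, status: "cancelled" };
}
```
-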
Feel free to scroll by if you feel like it.
I am just very excited this evening, because with today's commit I have reached a very important milestone in my side-project development. As of today, all 12 components [so far] are working together and processing the main flow themselves.
No special functions, no test data in the code, nothing like that. A client is able to do its thing now as it should.
I know it doesn't sound like much, but as I've been working on this gigantic beast for 3 years now, this milestone is a hell of a reward for me!
Just wanted to share :)
edit: f* it! I'm getting a cake!
-
A server application pulled up some sort of listing as a table. Problem was, it crashed after one and a half hours with just a few thousand data files. I looked into it and couldn't stop WTFing.
A stupid server-side script fetched the data in XML (WTF!) and then inserted shit node-wise (WTF!!), which was O(n^2) - in PHP and on XML! Then it converted the whole shebang into HTML for browser display, although users would ultimately copy/paste the result into Excel anyway.
The original developer had even written a note on the application page that pulling the data "could take long". Yeah, because it's so fucking STUPID that Clippy is an Einstein in comparison, that's why!
So I pulled the raw data via batch file without XML wrapping and wrote a little C program for merging the dumped stuff client-side in O(n), spitting out a final CSV for Excel import.
Instead of fucking the server for 1.5 hours and then crashing, shit is done after 7 seconds, out of which the actual data processing takes 40 bloody milliseconds!
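The O(n) trick here is nothing exotic: one streaming pass over the already-sorted dumps instead of node-by-node XML insertion. The original was a little C program; here's the same idea as a TypeScript sketch (record layout invented for illustration):

```typescript
interface Rec {
  key: string;
  value: string;
}

// Merge two pre-sorted dumps into CSV in a single O(n) pass:
// every record is visited exactly once, no XML, no DOM.
function mergeToCsv(a: Rec[], b: Rec[]): string {
  const out: string[] = ["key,value"]; // header row for the Excel import
  let i = 0;
  let j = 0;
  while (i < a.length || j < b.length) {
    const takeA = j >= b.length || (i < a.length && a[i].key <= b[j].key);
    const r = takeA ? a[i++] : b[j++];
    out.push(`${r.key},${r.value}`);
  }
  return out.join("\n");
}
```
-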
(long post is long)
This one is for the .net folks. After evaluating the technology top to bottom and even reimplementing several examples I commonly use for smoke testing new technology, I'm just going to call it:
Blazor is the next Silverlight.
It's just beyond the pale in terms of being architecturally flawed, and yet they're rushing it out as hard as possible to coincide with the .Net 5 rebranding silo extravaganza. We are officially entering round 3 of "sacrifice .Net on the altar of enterprise comfort." Get excited.
Since we've arrived here, I can only assume the Asp.net Ajax fiasco is far enough in the past that a new generation of devs doesn't recall its inherent catastrophic weaknesses. The architecture was this:
1. Create a component as a "WebUserControl"
2. Any time a bound DOM operation occurs from user interaction, send a payload back to the server
3. The server runs the code to process the event; it spits back more HTML
4. Some client-side js then dutifully updates the UI by unceremoniously stuffing the markup into an element's innerHTML property like so much sausage.
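In browser terms, that whole loop is about this much code. A sketch of the pattern being described (the endpoint is invented; this is the general shape, not actual Asp.net Ajax internals):

```typescript
// Every bound UI event round-trips to the server, which replies with
// fresh HTML that gets stuffed into the DOM wholesale.
async function onUserInteraction(event: Event, target: HTMLElement) {
  const res = await fetch("/postback", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ control: target.id, type: event.type }),
  });

  // No diffing, no transitions: the subtree is replaced outright.
  target.innerHTML = await res.text();
}
```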
If you understand that, you've adequately understood how Blazor works. There's some optimization like SignalR WebSockets for update streaming (the first and only time most Blazor devs will ever use WebSockets; I even see developers claiming that they're "using SignalR, IdentityServer4, gRPC, etc." because the template seeds it for them. The hubris.), but that's the gist. The astute viewer will have noticed a few things here, including the disconnect between repaints, the inability to blend update operations and transitions, and the potential for absolutely obliterative, connection-volatile, abusive transactional logic flying back and forth to the server. It's the bring-out-your-dead approach to seeing how much of your IT budget is dedicated to paying for bandwidth and CPU time.
Blazor goes a step further in the server-side render scenario and sends every DOM event it binds to the server for processing. These include millisecond-scale events like scroll, which, at least according to GitHub issues, devs are quickly realizing require debouncing, though they aren't quite sure how to accomplish that (there's a sketch at the bottom of this post). Since this immediately becomes an issue with tickets saying things like, "scroll event crater server, Ugg need help! You said Blazorclub good. Ugg believe, Ugg wants reparations!", the team chooses a great answer to many problems for the wrong reasons:
gRPC
For those who aren't familiar, gRPC has a substantial amount of compression primarily courtesy of a rather excellent binary format developed by Google. Who needs the Quickie Mart, or indeed a sound markup delivery and view strategy when you can compress the shit out of the payload and ignore the problem. (Shhh, I hear you back there, no spoilers. What will happen when even that compression ceases to cut it, indeed). One might look at all this inductive-reasoning-as-development and ask themselves, "butwai?!" The reason is that the server-side story is just a way to buy time to flesh out the even more fundamentally broken browser-side story. To explain that, we need a little perspective.
The relationship between Microsoft and its enterprise customers is your typical mutually abusive co-dependent relationship. Microsoft goes through phases of tacit disinterest where it virtually ignores them. And rightly so; the enterprise customers tend to be weaksauce, mono-platform, mono-language types who come to work, collect a paycheck, and go home. They want to suckle on the teat of the vendor that enables them to get a plug-and-play experience for delivering their internal systems.
And that's fine. But it's also dull; it's the spouse that lets themselves go, the girlfriend in the distracted-boyfriend meme. Those aren't the people who keep your platform relevant and competitive. For Microsoft, that crowd has always been the exploratory end of the developer community: alt.net, and more recently, the dotnet core community (StackOverflow 2020's most loved platform, for the haters). Alt.net seeded every competitive advantage the dotnet ecosystem has, and dotnet core capitalized on them. Like DI? You're welcome. Are you enjoying MVC? Your gratitude is understood. Cool serializers, gRPC/protobuf, first-class APIs, metadata-driven clients, code generation, micro ORMs, etc., et al. Dear enterpriseur, you are fucking welcome.
Anyways, b2blazor. So, the front end (Blazor WebAssembly) story begins with the average enterprise FOMO. When enterprises get FOMO, they start to Karen/Kevin super hard, slinging around money, privilege, premier support tickets, etc. until Microsoft, the distracted boyfriend, eventually turns back and says, "sorry babe, wut was that?" You know, shit like managers unironically looking at cloud reps and demanding to know if "you can handle our load!" Meanwhile, any actual engineer hides under the table, facepalming and trying not to die from embarrassment.
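PS: since the GitHub issue crowd apparently isn't sure how to debounce those millisecond-scale events, it's a few lines on the client. A generic sketch, nothing Blazor-specific:

```typescript
// Collapse a burst of rapid-fire events into a single trailing call.
function debounce<T extends unknown[]>(
  fn: (...args: T) => void,
  waitMs: number
): (...args: T) => void {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: T) => {
    clearTimeout(timer);
    timer = setTimeout(() => fn(...args), waitMs);
  };
}

// At most one handler call per 100 ms pause in scrolling, instead of
// shipping every single scroll tick across the wire.
window.addEventListener("scroll", debounce(() => {
  console.log("scroll settled at", window.scrollY);
}, 100));
```
-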
I am trying to "invent" secure client-side authentication where all data are stored encrypted in the browser and only accessible with the correct password. My question is: what is your opinion of my idea? If you think it is not secure or there is a possible backdoor, let me know.
// INPUT:
- test string (hidden, random, random length)
- password
- password again
// THEN:
- hash test string with sha-512
- encrypt test string with password
- save hash of test string
// AUTH:
- decrypt test string
- hash decrypted string with sha-512
- compare hashes
- create password hash with sha-512 (and delete the password from memory, so you cannot get at it somehow - possible hole here, because the hash can still be brute-forced)
// DATA PROCESSING
- encrypt/decrypt with password hash as secret (AES-256)
Thanks!
EDIT: Maybe some salt for the test string would be nice.
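For what it's worth, here's roughly what that flow looks like with the browser's built-in WebCrypto API. One deliberate change from the scheme above, called out explicitly: the password goes through PBKDF2 (slow, salted key derivation) instead of a single SHA-512, precisely because a bare hash can be brute-forced. AES-GCM is assumed acceptable in place of plain AES-256, since its integrity tag makes the wrong-password check automatic:

```typescript
// Derive an AES-256-GCM key from the password (PBKDF2 replaces the plain
// SHA-512 password hash from the scheme above).
async function deriveKey(password: string, salt: Uint8Array): Promise<CryptoKey> {
  const material = await crypto.subtle.importKey(
    "raw", new TextEncoder().encode(password), "PBKDF2", false, ["deriveKey"]
  );
  return crypto.subtle.deriveKey(
    { name: "PBKDF2", salt, iterations: 600_000, hash: "SHA-512" },
    material,
    { name: "AES-GCM", length: 256 },
    false,
    ["encrypt", "decrypt"]
  );
}

// AUTH: try to decrypt the stored test string. GCM's authentication tag
// fails on a wrong key, which subsumes the manual hash comparison.
async function checkPassword(
  password: string, salt: Uint8Array, iv: Uint8Array, ciphertext: ArrayBuffer
): Promise<boolean> {
  const key = await deriveKey(password, salt);
  try {
    await crypto.subtle.decrypt({ name: "AES-GCM", iv }, key, ciphertext);
    return true;
  } catch {
    return false; // wrong password
  }
}
```
-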
So a while back I found a hole in the security of a website I had used pretty frequently. I was able to change my cookies and become any user I wanted. The only caveat was that I had to log in as a user to get things started, but once I was in I could be anyone I wanted just by changing a few numbers in the user ID of the cookie. They also did all of their user processing on the client side. Even password checks.
A couple weeks back I decided to go back in to see if anything had changed since then. It did! But not in the way I had thought.
So these guys decided that instead of fixing their security hole, they would have users just contact their people directly in order to get a new account.
Wow that's so much fucking overhead for basically being a lazy shit and not fixing the security holes. I mean how bad is your architecture if you can't go in and fix this?
Not only that, I found they had actually stripped all of the users of their original subscriptions. So now, if you want your subscription back, you'll have to fork over another $399. That means going to their shitty form, filling out your name, your number, and your email, and just hoping someone contacts you via phone call.
I'm glad I dropped this service. They clearly can't get their shit together.
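Incidentally, the textbook fix for guessable user-ID cookies is to sign them so they can't be tampered with client-side. A minimal sketch of the idea using Node's crypto module (the site's real stack is unknown, so this is purely illustrative):

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

const SECRET = process.env.COOKIE_SECRET ?? "dev-only-secret"; // never ships to the client

// Issue "<userId>.<hmac>"; editing the userId invalidates the signature.
function signUserId(userId: string): string {
  const mac = createHmac("sha256", SECRET).update(userId).digest("hex");
  return `${userId}.${mac}`;
}

// Returns the userId only if the signature checks out.
function verifyCookie(cookie: string): string | null {
  const [userId, mac] = cookie.split(".");
  if (!userId || !mac) return null;
  const expected = createHmac("sha256", SECRET).update(userId).digest("hex");
  const a = Buffer.from(mac, "hex");
  const b = Buffer.from(expected, "hex");
  return a.length === b.length && timingSafeEqual(a, b) ? userId : null;
}
```
-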
My work product: Or why I learned to get twitchy around Java...
I maintain a Java-based test system that tests a raster image processor. The client is a Java Swing project containing CORBA bindings to the internal API of the raster image processor. It also has custom-written UI elements and duplicates functionality that became available in later versions of Java, but because some of the third-party tools we use don't work with later versions of Java for some reason, it's not possible to upgrade Java to gain things as simple as recursive directory deletion. Yes, the version of Java we have to use does not support something as simple as that, and custom code had to be written for it.
Because of the requirement to build the API bindings along with the client, the whole application must be built with the raster image processor's build chain, which is a heavily customised jam build system. So an ant task calls out to execute a jam task, and jam does about 90% of the heavy lifting.
In addition to the Java code there's code for interpreting PostScript files, as these can be used to alter the behaviour of the raster image processor during testing.
As if that weren't enough, there's a beanshell interface to allow users to script the test system, but none of the users know Java well enough to feel confident writing interpreted Java scripts (and that's too close to JavaScript for my comfort). I once tried swapping this out for the Rhino JavaScript interpreter and got all the verbal support in the world but no developer time to design an API that'd work for all the departments.
The server isn't much better though. It's a Tomcat-based application written by someone who had never built a Tomcat application before, or any web application for that matter. It uses raw SQL strings instead of an ORM, it doesn't use MVC in any way, and an insane amount of functionality is dumped into the JSP files.
It too interacts with a raster image processor to create difference masks of the output, running PostScript as needed. It spawns off multiple threads and can spend days processing hundreds of gigabytes of image output (depending on the size of the tests).
We're stuck on Tomcat 7 because we can't upgrade beyond Java 6, which brings all manner of security issues, but that eager little Java updater will break the tool chain if it gets its way.
Between these two components we have the Java RMI server (sometimes) working to help generate image data on the client side, before all images are pulled across a UNC network path onto the server that processes test jobs (in PDF format). The server reads into the xref table of each PDF to find the embedded image data (server-consumed test files are just flate-encoded TIFF files wrapped in just enough PDF to make them valid) and uses a tool to create a difference mask of two images.
This tool is very error prone, it can't difference images of different sizes, colour spaces, orientations or pixel depths, but it's the best we have.
The tool is installed on both the client and the server; if the client can generate images, it'll query the server for which ones it needs to, and if it can't, the server will use the tool itself.
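Back up to the PDF step for a second, for anyone who hasn't poked around inside one: the xref lookup starts from the trailer at the very end of the file, which records the byte offset of the cross-reference table. A rough sketch of that first step (the internal tool isn't public; this is just the general technique):

```typescript
import { readFileSync } from "node:fs";

// A PDF trailer ends with "startxref\n<byte offset>\n%%EOF",
// so scan the tail of the file for it.
function findXrefOffset(path: string): number {
  const bytes = readFileSync(path);
  const tail = bytes.subarray(Math.max(0, bytes.length - 1024)).toString("latin1");
  const match = /startxref\s+(\d+)\s+%%EOF\s*$/.exec(tail);
  if (!match) throw new Error("startxref marker not found");
  return Number(match[1]); // where the xref table begins, from file start
}
```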
Our shells have custom profiles for linking to all manner of third-party tools and libraries, including a link to Visual Studio 2005 (more indirectly related build dependencies). The whole profile has to ensure that absolutely no operating-system pollution gets into the shell; most of our apps are installed in our home directories, and we have to ensure our paths are correct for every single application we add.
And... Fucking and!
Most of the tools are stored as source bundles in a version control system... Not git or Mercurial, not Perforce or SVN, not even CVS... They use a custom-built version control system built on top of RCS. It keeps a central database of locked files (using soft and hard locks along with write-protecting the files in the file system) to ensure users can't get merge conflicts, by preventing other users from writing to the files at all.
Branching is heavyweight, and creating a new branch and populating its history can take the best part of a day.
Gathering the tools alone to build the dev environment for my project takes the best part of a week.
What should be a joy come hardware-refresh year becomes a curse ("Well fuck, now I lose a week setting up the dev environment on ANOTHER machine").
Needless to say, I enjoy NOT working with Java. A lot of this isn't Java's fault, but there's a lot that Java (specifically the Java 6 version we're stuck on) does not make easy.
This is why I prefer to build my web apps in Python or Node; hell, I'd even take Lua... Just... compiling web pages into executable Java classes, why? I mean, I understand the implementation of how this happens, but why did my predecessor have to choose this? Why?