Search - "it fails but it is not my fault"
-
Been assigned to the team by management.
Management and I both know team members are junior/early-medium levels.
Management expects outcomes.
After a few weeks I clearly communicated that these engineers are unreliable. I can grow and coach them, but outcomes can't be guaranteed.
I was told it's my responsibility to deliver outcomes anyway.
I was told they know it's unfair - but if the whole team fails, it's on me to fix everything.
And most importantly: of course IT IS NOT MANAGEMENT'S FAULT FOR ASSEMBLING A TEAM OF WEAK ENGINEERS.
... placing me there should fix it "somehow". -
One thing I learned over the years is that even when you think you can't do something or don't have the strength to do it, you actually can.
People do nothing better than make excuses for themselves or blame others for the things they did, without even considering that they could have done something about it.
The brain is such a powerful processor that when you constantly think you're sick, your body will react accordingly.
Thing is, though: if you don't take the opportunities that present themselves, or don't look for them, you'll probably get nowhere, to the point where it could lead to depression.
Sure enough, failures and mistakes happen all the time; hardly anything will go right the first time, possibly leading to demotivation and sometimes even depression.
Why? Because you forgot to think, "What can I improve next time?"
A co-worker of mine keeps going back to the project he's working on because the boss has something in mind but somehow fails to translate it to him. He never stops to think about what the desired functionality is, or what it should do or look like (UI/UX). Eventually he snaps, blaming the boss because he had to change it a couple of times.
This has happened multiple times since I started my Internship to the point where it just starts to irritate me.
Of course it's not always your fault but there are plenty of cases where it is or where you could have prevented it.
Mistakes and failures make you stronger only if you want to learn from them.
Have a good day -
tl;dr:
The Debian 10 live disc and installer say: Heavens me, just look at the time! I’m late for my <segmentation fault
—————
tl:
The Debian 10 live CD and its new “calamares” installer are both complete crap. I’ve never had any issues with installing Debian prior to this, save for getting WiFi to work (as expected). But this version? Ugh. Here are the things I’ve run into:
Unknown root password; easy enough to get around as there is no user password; still annoying after the 10th time.
Also, the login screen doesn’t work off-disc because it won’t accept a blank password, so don’t idle or you’ll get locked out.
The lock screen is overzealous and hard-locks the computer after a while; not even the magic kernel keys work!
The live disc doesn’t have many standard utilities, or a graphical partition editor. Thankfully I’m comfortable with fdisk.
The graphical installer (calamares) randomly segfaults, even from innocuous things like clicking [change partition] when you don’t have a partition selected. Derp.
It also randomly segfaults while writing partitions to disk — usually on the second partition.
It strangely seems less likely to segfault if the partitions are already there, even if it needs to “reformat” (recreate) them.
It also defaults to using MBR instead of GPT for the partition table, despite the tooltip telling you that MBR is deprecated and limited, and that GPT is recommended for new systems. You cannot change this without doing the partitions manually (a rough fdisk sketch for doing so follows this list).
If you do the partitions manually and it can’t figure out where to install things, it just crashes. This is great because you can’t tell it where to install things, and specifying mount points like /boot, /, and /home doesn’t seem to be enough.
It also tries installing 32-bit grub instead of 64-bit, causing the grub installer to fail.
If you tell it to install grub on /boot, it complains when that partition isn’t encrypted — fair — but if you tell it to encrypt /boot like it wants you to, it then tries installing grub on the encrypted partition it just created, apparently without decrypting it, so that obviously fails — specific error: cannot read file system.
On the rare chance that everything else goes correctly, the install process can still segfault.
The log does include entries for errors, but doesn’t include an error message. Literally: “ERROR: Installation failed:” and the log ends. Helpful!
If the installer doesn’t segfault and the install process manages to complete, the resulting install might not even boot, even when installed without any drive encryption. Why? My guess is it never bothered to install Grub, or put it in the wrong place, or didn’t mark it as bootable, or who knows what.
Even when using the live disc that includes non-free firmware (including Ath9k) it still cannot detect my wlan card (that uses Ath9k).
I’ve attempted to install thirty plus times now, and only managed to get a working install once — where I neglected to include the Ath9k firmware.
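Since the only way around the installer's MBR default is manual partitioning, here is a rough sketch of preparing a GPT layout from the live session's terminal before launching calamares. The device name /dev/sda and the three-partition layout are assumptions; check yours with lsblk first.

```
# Assumed target disk: /dev/sda -- verify with `lsblk` before touching anything.
sudo fdisk /dev/sda
#   g    create a new, empty GPT partition table (instead of the MBR default)
#   n    add partitions, e.g. a ~512 MB EFI system partition, /, and /home
#   t    change a partition's type (in GPT mode, type 1 = "EFI System")
#   w    write the table to disk and quit
sudo mkfs.fat -F 32 /dev/sda1     # EFI system partition
sudo mkfs.ext4 /dev/sda2          # root
sudo mkfs.ext4 /dev/sda3          # home
```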
I’m now trying the cli-only installer option instead of the live session; it seems to behave at least. I’m just terrified that the resulting install will be just as unstable as the live session.
All of this to copy the contents of my encrypted disks over so I can use them on a different system. =/
I haven’t decided which I’m going with next, but likely Arch, Void, or Gentoo. I’d go with Qubes if I had more time to experiment.
But in all seriousness, the Debian devs need some serious help. I would be embarrassed if I released this quality of hot garbage.
(This same system ran both Debian 8 and 9 flawlessly for years) -
Static HTML pages are better than "web apps".
Static HTML pages are more lightweight and destroy "web apps" in performance, and also have superior compatibility. I see pretty much no benefit in a "web app" over a static HTML page. "Web apps" appear like an overhyped trend that is empty inside.
During my web browsing experience, static HTML pages have consistently loaded faster and more reliably, since the browser is immediately served with content useful for consumption, whereas on JavaScript-based web "apps", the useful content comes in **last**, after the browser has worked its way through a pile of script.
For example, an average-sized Wikipedia article (30 KB wikitext) appears on screen in roughly two seconds, since MediaWiki uses static HTML. Everipedia, in comparison, is a ReactJS app. Guess how long that one needs. Upwards of three times as long!
Making a page JavaScript-based also makes it fragile. If an exception occurs in the JavaScript, the user might end up with a blank page or an endless splash screen, whereas static HTML-based pages still show useful content.
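To make that fragility concrete, here is a hypothetical minimal comparison; the markup and the thrown errors are made up purely for illustration.

```html
<!-- Static-first page: the content is in the markup, the script only enhances it. -->
<article id="post">The article text is readable even if every script on the page fails.</article>
<script>
  throw new Error("enhancement broke"); // the reader still has the content above
</script>

<!-- Script-rendered page: an empty shell until the script runs to completion. -->
<div id="root"></div>
<script>
  throw new Error("render broke"); // execution stops here, so the line below never runs
  document.getElementById("root").innerHTML = "<article>The article text</article>";
</script>
```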
The legacy (2014-2020) HTML-based Twitter.com loaded a user profile in under four seconds. The new React-based web app not only takes twice as long, but sometimes fails to load at all, showing the error "Oops, something went wrong! But don't fret – it's not your fault." This could not happen on a static HTML page.
The new JavaScript-based "polymer" YouTube front end that has been the default since August 2017 also loads slower. By the time the earlier HTML-based one would already be playing the video, the new one has only just reached its oh-so-fancy skeleton screen.
It would once have been unthinkable to have a website that does not work at all without JavaScript, but now, pretty much all popular social media sites are JavaScript-dependent. The last time one could view Twitter without JavaScript and tweet from devices with non-sophisticated browsers like Nintendo 3DS was December 2020, when they got rid of the lightweight "M2" mobile website.
Sometimes, web developers break a site in older browser versions by using a JavaScript feature that they do not support, or using a dependency (like Plyr.js) that breaks the site. Static HTML is immune against this failure.
Static HTML pages also let users maximize speed and battery life by deactivating JavaScript. This obviously will disable more sophisticated site features, but the core part, the text, is ready for consumption.
Not to mention, single-page sites and fancy animations can be implemented with JavaScript on top of static HTML, as GitHub.com and the 2018 Reddit redesign do, and Twitter's 2014-2020 desktop front end did.
From the beginning, JavaScript was intended as a tool to complement, not to replace, HTML and CSS. It appears to me that the sole "benefit" of having a "web app" is that it appears slightly more "modern" and distinguished from classic web sites through the use of splash screens and the lack of the browser's loading animation when navigating, while having oh-so-fancy loading animations and skeleton screens inside the website. Sorry, I prefer seeing content quickly over the app-like appearance of fancy loading screens.
Arguably, another supposed benefit of "web apps" is that there is no blank page when navigating between pages, but in pretty much all major browsers of the last five years, the last page observably remains on screen until the next navigated page is rendered sufficiently for viewing. This is also known as "paint holding".
On any site, whenever I am greeted with content, I feel pleased. Whenever I am greeted with a loading animation, splash screen, or skeleton screen, be it ever so fancy (e.g. fading in and out, moving gradient waves), I think "do they really believe they make me like their site more due to their fancy loading screens?! I am not here for the loading screens!".
To make a page dependent on JavaScript and sacrifice lots of performance for a slight visual benefit does not seem worth it.
Quote:
> "Yeah, but I'm building a webapp, not a website" - I hear this a lot and it isn't an excuse. I challenge you to define the difference between a webapp and a website that isn't just a vague list of best practices that "apps" are for some reason allowed to disregard. Jeremy Keith makes this point brilliantly.
>
> For example, is Wikipedia an app? What about when I edit an article? What about when I search for an article?
>
> Whether you label your web page as a "site", "app", "microsite", whatever, it doesn't make it exempt from accessibility, performance, browser support and so on.
>
> If you need to excuse yourself from progressive enhancement, you need a better excuse.
– Jake Archibald, 2013 -
Sorry, need to vent.
In my current project I'm using two main libraries [slack client and k8s client], both official. And they both suck!
Okay, okay, their code doesn't really suck [apart from k8s severely violating Liskov's principle!]. The sucky part is not really their fault. It's the commonly used 3rd-party library that's fucked up.
Okhttp3
yeah yeah, here come all the booos. Let them all out.
1. In websockets it hard-caps the frame size at 16 MB w/o an ability to change it. So.. Forget about unchunked file transfers there... What's even worse - they close the websocket if the frame size exceeds that limit. Yep, instead of failing to send, it kills the conn. (A chunking sketch follows this list.)
2. In websockets they write data completely async. Without any control handles.. No clue when the write starts, completes or fails. No callbacks, no promises, no other feedback whatsoever.
3. In http requests they split my request into multiple buffers. This fucks up the slack client, as I cannot post messages over 4050 chars in size. Thanks to okhttp, these long texts get split into multiple messages. Which effectively fucks up formatting [bold, italic, codeblocks, links,...], as the formatted blocks get torn apart. [didn't investigate this deeper: it's friday evening and it's kotlin, not java, so I saved myself from the trouble of parsing yet unknown syntax]
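One workaround I'm considering for the frame-size problem, sketched under the assumption that the receiving side reassembles the chunks itself (okhttp gives you nothing for that), looks roughly like this:

```kotlin
import okhttp3.WebSocket
import okio.ByteString.Companion.toByteString

// Rough sketch: split a large payload into ~1 MiB messages so a single okhttp
// frame never gets anywhere near the hard 16 MiB limit. Reassembly on the
// other end is an application-level protocol you have to define yourself.
fun sendChunked(ws: WebSocket, payload: ByteArray, chunkSize: Int = 1 shl 20): Boolean {
    var offset = 0
    while (offset < payload.size) {
        val end = minOf(offset + chunkSize, payload.size)
        // send() only enqueues the message (point 2 above: fully async, no feedback);
        // it returns false if the socket is closed/failed or the outgoing queue is full.
        if (!ws.send(payload.copyOfRange(offset, end).toByteString())) return false
        offset = end
    }
    return true
}
```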
yes, okhttp is probably a good library for the most part. Yes, people like it, but hell, these corner cases and weird design decisions drive me mad!
And it's not like I could swap it with another lib.. I don't depend on it -- other libs I need do! -
That feeling when the Jenkins build fails and fixing it is both out of your scope and permission.
Dear devops, you should know when a certificate that we use to authenticate with external web services is about to expire.
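For the record, checking it by hand is one command; the certificate path and host below are made up for illustration:

```
# Expiry of a local certificate file (hypothetical path):
openssl x509 -in /etc/jenkins/certs/external-service-client.pem -noout -enddate
# Expiry of the certificate presented by a remote endpoint (hypothetical host):
echo | openssl s_client -connect api.example.com:443 2>/dev/null | openssl x509 -noout -enddate
```
-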
!rant(maybe)
So after taking a long weekend and applying to some different companies, doing some cultural fit and technical interviews, I thought to sit down and take a different look at my situation (with the help of my partner, of course, bless her patient soul).
* My work output isn't bad; all things considered, it's the people I work for who are doing a shitty job. If my project fails, I have to remind myself it's not my fault or my team's because we're doing all we can to the best of our abilities. I mean, it's not our fault we're being mismanaged.
* The best way I can effect change is if I am in a position to do so. Instead of looking outside, I should be challenging my way up - and if no opportunities are there, then I have to make them myself.
* This is still a year of uncertainty - starting fresh isn't going to be easy. In contrast, I've already built a rep in my current company - why throw it away because I work for sucky people?
Looking at my previous rants, they were definitely coming from a place of frustration; but as the saying goes, if I'm not part of the solution then I'm part of the problem. I'm gonna see how I can fix that without clamoring for an escape hatch.
Yes, it was a very insightful Valentine's dinner conversation. -
Make an ASP.NET application for a job interview take-home assignment.
Try to use Docker with it.
It runs fine through Visual Studio (not Code).
I declare it working, submit it to the organization, and tell them it can be run with docker-compose up.
I get a reply that even the basic command doesn't work.
Turns out Visual Studio does some magic mapping or caching under the hood that I couldn't find in any config in the project, and somehow gets it to work; but when running without Visual Studio it doesn't have that magic context shit, and thus running through the terminal fails.
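For reference, a rough sketch of the pieces the terminal-only path would need, with the project name MyApp and all paths made up; my working guess is that Visual Studio's "fast mode" container support only builds the base stage and mounts the compiled output from the host, which is why the same project can behave differently under a plain docker-compose up.

```dockerfile
# Dockerfile -- multi-stage, so the image builds the app itself instead of
# relying on anything the IDE mounts in at debug time.
FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build
WORKDIR /src
COPY . .
RUN dotnet publish "MyApp/MyApp.csproj" -c Release -o /app/publish

FROM mcr.microsoft.com/dotnet/aspnet:6.0
WORKDIR /app
COPY --from=build /app/publish .
ENTRYPOINT ["dotnet", "MyApp.dll"]
```

```yaml
# docker-compose.yml
services:
  web:
    build:
      context: .
      dockerfile: MyApp/Dockerfile
    ports:
      - "8080:80"
```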
Obviously a lot of this is my fault for assuming that what works through the IDE would run through the terminal without testing, but I will be angry with VS to make myself feel better >.>