Oh, man, I just realized I haven't ranted one of my best stories on here!
So, here goes!
A few years back the company I work for was contacted by an older client regarding a new project.
The guy was now pitching to build the website for the Parliament of another country (not gonna name it, NDAs and stuff), and was planning on outsourcing the development, as he had no team and was only aiming to take care of the client service/project management side of the project.
Out of principle (and also to preserve our mental integrity), we had purposely avoided working with government bodies of any kind, in any country, but he was a friend of our CEO and pleaded until we signed on board.
Now, the project itself was way bigger than we expected, as they wanted more of an internal CRM, centralized document archive, event management, internal planning, multiple-interface, role-based-access-restricted monster of an administration interface, complete with a regular user website, also packed with all kinds of features, dashboards and so on.
Long story short, a lot bigger than what we were expecting based on the initial brief.
The development period was hell. New features were coming in on a weekly basis. Already implemented functionality was constantly being changed or redefined. No request we ever made for clarifications, materials or information was ever answered on time.
They also somehow bullied the guy who brought us the project into including the data migration from the old website into the new one we were building, and we somehow ended up having to extract meaningful, formatted, sanitized content by parsing static HTML files and connecting it to downloadable files (almost every page on the old website had files available to download) that we also needed to include in a sane way.
Now, don't think the files were simple URL paths we could trace to a folder/file path, oh no!!! The links were some form of hash combination that had to be exploded and tested against some kind of database relationship tables that only had hashed indexes relating to other tables, which in turn only had hashed indexes relating to yet other tables that kept a record of the website pages' HTML file naming. So what we had to do was identify the files based on a combination of hashed indexes and re-hashed HTML file names that would in the end give us a filename for a real file, which we then had to search for inside a list of over 20 folders not related to one another.
So we did this. Created a script that processed the hell out of over 10,000 HTML files, database entries and files, and re-indexed and re-named all this shit into a meaningful database of sane data and well organized files.
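For the morbidly curious, here's a stripped-down PHP sketch of what that resolver boiled down to. Every table, column and folder name is invented for illustration (the real schema was far uglier), so treat it as pseudocode with working syntax:

```php
<?php
// Resolve one old-site download link back to a real file on disk.
function resolveDownload(PDO $db, string $linkHash, array $folders): ?string
{
    // Chase the hash through the chain of relationship tables until we
    // finally land on an actual filename.
    $stmt = $db->prepare(
        'SELECT files.real_name
           FROM links
           JOIN pages ON pages.page_ref = links.page_ref
           JOIN files ON files.file_ref = pages.file_ref
          WHERE links.link_hash = ?'
    );
    $stmt->execute([$linkHash]);
    $realName = $stmt->fetchColumn();
    if ($realName === false) {
        return null; // dead link in the old site
    }

    // No common root, so brute-force the ~20 unrelated folders.
    foreach ($folders as $dir) {
        $candidate = rtrim($dir, '/') . '/' . $realName;
        if (is_file($candidate)) {
            return $candidate;
        }
    }
    return null; // referenced in the DB but missing on disk
}
```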
So, with this we were nearing the finish line for the project, which by now had exceeded the estimated time by over two times.
We test everything, retest it all again for good measure, pack everything up for deployment, simulate on a staging environment, give the final client access to the staging version, get them to accept that all requirements are met, finish writing the documentation for the codebase, write detailed deployment procedure, include some automation and testing tools also for good measure, recommend production setup, hardware specs, software versions, server side optimization like caching, load balancing and all that we could think would ever be useful, all with more documentation and instructions.
As the project was built on PHP/MySQL (as requested), we recommended a Linux environment for production. Oh, I forgot to tell you that over the development period they kept asking us to also include steps for Windows procedures along with our regular documentation. It was a bit strange, but we added it in there just so we could finish and close the damn project.
So, we send them all the above and go get drunk as fuck in celebration of getting rid of them once and for all...
Next day: hung over, I get to the office, open my laptop and see one new email. I only had the one new mail, so I open it to see what it's about.
Lo and behold! The fuckers over in the other country that called themselves "IT guys", and were the ones making all the changes and additions to our requirements, were not capable enough to follow step by step instructions in order to deploy the project on their servers!!!
[Continues in the comments]
-
!rant
This was over a year ago now, but my first PR at my current job was +6,249/-1,545,334 loc. Here is how that happened... When I joined the company and saw the code I was supposed to work on, I kind of freaked out. The project was set up in the most ass-backward way, with some sort of bootstrap boilerplate sample app thing with its own build process inside a subfolder of the main angular project. The angular app used all the CSS, fonts, icons, etc. from the boilerplate app and referenced the assets directly. If you needed to make changes to the CSS, fonts, icons, etc. you would need to cd into the boilerplate app directory, make the changes, run a Gulp build that compiled things there, then cd back to the main directory and run a Grunt build (that's right, both Grunt and Gulp) that then built the angular app and referenced the compiled assets inside the boilerplate directory. One simple CSS change would take 2 minutes to test at minimum.
I told them I needed at least a week to overhaul the app before I felt like I could do any real work. Here were the horrors I found along the way.
- All compiled (unminified) assets (both CSS and JS) were committed to git, including vendor code such as jQuery and Bootstrap.
- All bower components were committed to git (ALL their source code, documentation, etc, not just the one dist/minified JS file we referenced).
- The Grunt build was set up by someone who had no idea what they were doing. Every SINGLE file or dependency that needed to be copied to the build folder was listed one by one in a HUGE config.json file instead of using pattern matching like `assets/images/*`.
- All the example code from the boilerplate and multiple jQuery spaghetti sample apps from the boilerplate were committed to git, as well as ALL the documentation too. There was literally a `git clone` of the boilerplate repo inside a folder in the app.
- There were two separate copies of Bootstrap 3 being compiled from source. One inside the boilerplate folder and one at the angular app level. They were both included on the page, so literally every single CSS rule was overridden by the second copy of bootstrap. Oh, and because the bootstrap source was included, committed, and built from source, the actual bootstrap source files had been edited by developers to change styles (instead of overriding them), so there was no replacing it with an OOTB minified version.
- It is an angular app but there were multiple jQuery libraries included and relied upon and used for actual in-app functionality behavior. And, beyond that, even though angular includes many native ways to do XHR requests (using $resource or $http), there were numerous places in the app where there were `XMLHttpRequest`s intermixed with angular code.
- There was no live reloading for local development, meaning if I wanted to make one CSS change I had to stop my server, run a build, start again (about 2 minutes total). They seemed to think this was fine.
- All this monstrosity was handled by a single massive Gruntfile that was over 2000loc. When all my hacking and slashing was done, I reduced this to ~140loc.
- There were developers' (I use that term loosely) *PERSONAL AWS ACCESS KEYS* hardcoded into the source code (remember, this is a front-end web app, so this was in every user's browser) in order to do file uploads. Of course when I checked in AWS, those keys had full admin access to absolutely everything in AWS.
- The entire unminified AWS JavaScript SDK was included on the page and not used or referenced (~1.5mb).
- There was no error handling or reporting. An API error would just result in nothing happening on the front end, so the user would usually just click and click again, re-triggering the same error. There was also no error reporting software installed (NewRelic, Rollbar, etc) so we had no idea when our users encountered errors on the front end. The previous developers would literally guide users who were experiencing issues through opening their console in dev tools and have them screenshot the error and send it to them.
- I could go on and on...
This is why you hire a real front-end engineer to build your web app instead of the cheapest contractors you can find from Ukraine.
-
I’m surrounded by idiots.
I’m continually reminded of that fact, but today I found something that really drives that point home.
Gather ‘round, everybody, it’s story time!
While working on a slow query ticket, I perused the code, finding several causes, and decided to run git blame on the files to see what dummy authored the mental diarrhea currently befouling my screen. As it turns out, the entire feature was written by mister legendary Apple golden boy “Finder’s Keeper” dev himself.
To give you the full scope of this mess, let me start at the frontend and work my way backward.
He wrote a javascript method that tracks whatever row was/is under the mouse in a table and dynamically removes/adds a “.row_selected” class on it. At least the js uses events (jQuery…) instead of a `setTimeout()` so it could be worse. But still, has he never heard of :hover? The function literally does nothing else, and the `selectedRow` var he stores the element reference in isn’t used elsewhere.
This function allows the user to better see the rows in the API Calls table, for which there is also a search feature — the very thing I'm tasked with fixing.
It’s worth noting that above the search feature are two inputs for a date range, with some helpful links like “last week” and “last month” … and “All”. It’s also worth noting that this table is for displaying search results of all the API requests and their responses for a given merchant… this table is enormous.
This search field for this table queries the backend on every character the user types. There’s no debouncing, no submit event, etc., so it triggers on every keystroke. The actual request runs through a layer of abstraction to parse out and log the user-entered date range, figure out where the request came from, and to map out some column names or add additional ones. It also does some hard to follow (and amazingly not injectable) orm condition building. It’s a mess of functional ugly.
The important columns in the table this query ultimately searches are not indexed, despite it only looking for “create_order” records — the largest of twenty-some types in the table. It also uses partial text matching (again: on. every. single. keystroke.) across two varchar(255)s that only ever hold <16 chars — and of which users only ever care about one at a time. After all of this, it filters the results based on some uncommented regexes, and worst of all: instead of fetching only one page’s worth of results like you’d expect, it fetches all of them at once and then discards what isn’t included by the paginator. So not only is this a guaranteed full table scan with partial text matching for every query (over millions to hundreds of millions of records), it’s that same full table scan for every single keystroke while the user types, and all but 25 records (user-selectable) get discarded — and then requeried when the user looks at the next page of results.
What the bloody fucking hell? I’d swear this idiot is an intern, but his code does (amazingly) actually work.
No wonder this search field nearly crashed one of the servers when someone actually tried using it.
Asdfajsdfk.
-
--- GitHub 24-hour outage post mortem ---
As many of you will remember, GitHub fell over earlier this month and cracked its head on the countertop on the way down. For more or less a full 24 hours the repo-wrangling behemoth had inconsistent data being presented to users, slow response times and failing requests during common user actions such as reporting issues and questioning your career choice in code reviews.
It's been revealed in a post-mortem of the incident (link at the end of the article) that DB replication was the root cause of the chaos, after a failing 100G network link was replaced during routine maintenance. I don't pretend to be a rockstar-ninja-wizard DBA, but after speaking with colleagues who went a shade whiter when the term "replication" was used - it's hard to predict where a design decision will bite back and leave you untangling the web of lies and misinformation reported by the databases for weeks if not months after everything's gone a tad sideways.
When the link was yanked out of the east coast DC undergoing maintenance - Github's "Orchestrator" software did exactly what it was meant to do; It hit the "ohshi" button and failed over to another DC that wasn't reporting any issues. The hitch in the master plan was that when connectivity came back up at the east coast DC, Orchestrator was unable to (un)fail-over back to the east coast DC due to each cluster containing data the other didn't have.
At this point it's reasonable to assume that pants were turning funny colours - monitoring systems across the board started squealing, firing off messages to engineers demanding they rouse from the land of nod and snap back to a reality that was a bit more "on-fire" than usual. A quick call to Orchestrator's API returned a result set that only contained database servers from the west coast - none of the east coast servers had responded.
Come 11pm UTC (about 10 minutes after the initial pant re-colouring) engineers realised they were well and truly backed into a corner; the site was flipped into "Yellow" status and internal mechanisms for deployments were locked out. 5 minutes later an Incident Co-ordinator was dragged from their lair by the status change and almost immediately flipped the site into "Red" status, a move I can only hope was accompanied by all the lights going red and klaxons sounding.
Even more engineers were roused from their slumber to help with the recovery effort. By this point hair was turning grey in real time - the fail-over DB cluster had been processing user data for nearly 40 minutes, and every second that passed made the inevitable untangling process exponentially more difficult. Not long after this GitHub made the call to pause webhooks and GitHub Pages builds in an attempt to prevent further data loss, causing disruption to those of us using GitHub as a way of kicking off our deployment processes (myself included, I had to SSH in and run a git pull myself like some kind of savage).
Glossing over several more "And then things were still broken" sections of the post mortem; clever engineers with their heads screwed on the right way successfully executed what I can only imagine was a large, complex and risky plan to untangle the mess and restore functionality. GitHub was picked up off the kitchen floor and promptly placed in a comfy chair with a sweet tea to recover. The enormous backlog of webhooks and Pages builds was caught up with and everything was more or less back to normal.
It goes to show that even the best laid plan rarely survives first contact with the enemy - in this case a failing 100G network link somewhere inside an east coast data center.
Link to the post mortem: https://blog.github.com/2018-10-30-...
-
>>> print(whoSaid("OlderFriend"))
About 20ish years ago I was working in IT, and it was right around the time when CD-ROMs were hitting the stores and becoming the newest craze. However, Microsoft did not write the drivers correctly for this new hardware.
In a nutshell, the driver would be installed and the user would lose the sound to their speaker.
How did this happen? By altering the way the interrupts worked on the computer. At the time there existed only a few unreserved IRQs, or Interrupt ReQuests. The installer package would redirect IRQ 5, which is "User Selectable (Sound Cards)", to work with the CD-ROM. This was fine and all, unless you wanted to listen to your speakers.
I had come up with a clever hack: rewriting a config file that ran during bootup, so that at boot time IRQ 5 would be dedicated to the sound card, and IRQ 7 (which was usually for the LPT1 printer) would be dedicated to the CD-ROM. This worked.
And because I was IT at the time, I would get a lot of calls for fixing this problem.
So, as you can imagine, I've gotten **really** good at doing this. I didn't even need to be at a computer to walk someone through the problem.
I receive a call one day; it was a problem with the CD-ROM and sound card. I walk him through the fix and he reboots his computer. I could hear him on the other side jumping with joy when he was able to put in his music CD and hear sound coming from the speakers.
He asks me, how in the hell did you figure this out!? You're a fucking Genius!
And I said, It's not rocket science, it's just a computer.
There was a long pause of silence.
Uhhh... Hello? Did I say something wrong?
Sir, I work at NASA. I deal with Rocket Science on a daily basis.
-
Stupid fucking project managers, just posting some slurry in Slack:
"User can't get into app!" _sends useless screenshot_
Yeah? And? I have no context on what time this was, what device, where, how, etc. etc. etc. etc. etc.
You want me to just telepathically jump to their location on earth, sniff the electromagnetic spectrum waves to sleuth out what exact requests they made and when to figure out what the problem is?
Just shut up. Shut up.
-
So our method of complying with user removal requests for GDPR is:
audit.record("user {user.name} removed their account", serialize(user));
user.delete();
🤦
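For anyone who doesn't see the problem: the audit log keeps a serialized copy of the user forever, so nothing was actually removed. A minimal sketch of a less self-defeating order of operations, mirroring the pseudocode above; the $audit/$user objects and the salt are assumptions:

```php
<?php
// Record a tombstone, not the person: an opaque reference and a timestamp
// are enough to prove the removal happened, without retaining any PII.
$audit->record('user account removed', [
    'user_ref'   => hash('sha256', $user->id . $appSalt), // pseudonymised id
    'removed_at' => date('c'),
]);

$user->delete(); // and make sure the PII ages out of backups too
```
-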
The nightmare continues.
Currently dealing with a code review from a “principal” dev (one step above senior), who is unironically called a “legendary dev” by some coworkers. It’s painfully obvious he didn’t read the code, and just started complaining and nitpicking.
It’s full of requests to do things that make absolutely no sense, and would make the code an unmaintainable mess.
• Ex: moving the logic and data collection from the module’s many callers into the module instead of just passing in the data.
• Ex: hiding api endpoint declarations by placing them in the module itself, and using magic instance variables to pass data to it. Basically: using global functions and variables instead of explicit declarations and calls.
• Ex: moving the logic to determine which api endpoint to use, for all callers, into the view.
More comments about methods being “too complex” (barely holds water) right next to comments saying “why are these separate? merge them together!”
Incredulously asking how many times I’m checking permissions and how ridiculous it all is. (The answer? Twice.)
Conflating my “permissions” param and method names with a supposedly forthcoming permissions system overhaul, and saying I shouldn’t use permissions because my code will all have to get rewritten. Even if that were true, and it’s likely not, the ticket still needs to use the current permissions. I can’t just ignore them because they might be rewritten someday.
Requests to revert some code cleanup because the reviewer thought the previous heavily-nested and uncommented versions (with code duplication) were easier to read. Unsurprisingly, he wrote them.
On the same ticket, my boss wants me to remove all styling and clientside validation, debouncing, and error messages from a form. Says "success" and "connection failed" messages are good enough. The form in question sends SMS and email using arbitrary user input for addresses. He also says it shouldn't be debounced on the server, and doesn't want me to bother checking permissions. Hello, spam!
Related: the legendary dev reviewer says he can’t think of a reason why we would want to disable the feature for consumers, so I should remove the consumer feature flag.
You can’t make this stuff up.7 -
I'm fixing a security exploit, and it's a goddamn mountain of fuckups.
First, some idiot (read: the legendary dev himself) decided to use a gem to do some basic fucking searching instead of writing a simple fucking query.
Second, security ... didn't just drop the ball, they shit on it and flushed it down the toilet. The gem in question allows users to search by FUCKING EVERYTHING on EVERY FUCKING TABLE IN THE DB using really nice tools, actually, that let you do fancy things like traverse all the internal associations to find the users table, then list all users whose password reset hashes begin with "a" then "ab" then "abc" ... Want to steal an account? Hell, want to automate stealing all accounts? Only takes a few hundred requests apiece! Oooh, there's CC data, too, and its encryption keys!
Third, the gem does actually allow whitelisting associations, methods, etc. but ... well, the documentation actually recommends against it for whatever fucking reason, and that whitelisting is about as fine-grained as a club. You wanna restrict it to accessing the "name" column, but it needs to access both the "site" and "user" tables? Cool, users can now access site.name AND user.name... which is PII and totally leads to hefty fines. Thanks!
Fourth. If the gem can't access something thanks to the whitelist, it doesn't catch the exception and give you a useful error message or anything, no way. It just throws NoMethodErrors because fuck you. Good luck figuring out what they mean, especially if you have no idea you're even using the fucking thing.
Fifth. Thanks to the follower mentality prevalent in this hellhole, this shit is now used in a lot of places (and all indirectly!) so there's no searching for uses. Once I banhammer everything... well, loads of shit is going to break, and I won't have a fucking clue where because very few of these brainless sheep write decent test coverage (or even fucking write view tests), so I'll be doing tons of manual fucking testing. Oh, and I only have a week to finish everything, because fucking of course.
So, in summary. The stupid and lazy (and legendary!) dev fucked up. The stupid gem's author fucked up, and kept fucking up. The stupid devs followed the first fuckup's lead and repeated his fuck up, and fucked up on their own some more. It's fuckups all the fucking way down.
-
The company I interned at last summer decided to adopt a JS framework a little over a year ago. The managers went with the old Angular 1.x because they didn't want a JS build process. Each page has ~100 script tags on it, and these are manually included in various files (no automated way to include dependencies). None of the CSS/JS files are minified, either.
They really should have chosen Angular 2+, or an entirely different framework (React, VueJS). They're also just now upgrading the codebase from PHP 5.6 to PHP 7.2 (5.6 support ended a long time ago, and security support ends this month).
I love the company itself but these practices are poor.
I may be working there full time eventually. I hope to eventually help with the inevitable transition to a newer framework once Angular 1.x is dead since I am an avid user of newer JS technologies. Any tips on convincing manager(s) towards newer technology? (Or at least convincing them to combine+minify these files in production to reduce # of requests and bandwidth.)
Also this company's product has millions of active users.
-
This really pisses me off. As a front-end developer (Ember.js, HTML and CSS), my colleagues, boss and PM are always making jokes about how I just need to change a button or a color, and whenever there is a bug in the UI there's always big fun and jokes around it. But when there's a bug in the API, they never joke around; it's just: oh yeah, we're getting the wrong data or an exception. But they always like to undervalue UI work, even when it involves complex layouts, multi-browser compatibility, responsive design, mobile browsers etc., while they just code their API to connect to a database and everything works; they don't really need to worry about what the user is using as a browser. They just get requests and send replies. I don't really think people value the work in front end as much as backend, and that pisses me off, as I believe there's a lot more going on in the front end. I know they mean well and they are all cool people, but sometimes it pisses me off as they don't value my work.
-
Fuck you mod_security 😠😩😰😱
We lost a week of user submissions because mod_security was silently dropping form POST requests.
-
I previously worked as a Linux/Unix sysadmin. There was one app team owning like 4 servers accessible in a very specific way.
* logon to main jumpbox
* ssh to elevated-privileges jumpbox
* logon to regional jumpbox using custom-made ssh alternative [call it fkup]
* try to fkup to the app server to confirm that fkup daemon is dead
* logon to server's mgmt node [aix frame]
* ssh to server directly to confirm sshd is dead too
* access server's console
* place root pswd request in passwords vault, chase 2 managers via phone for approvals [to login to the vault, find my request and approve it]
* use root pw to login to server's console, bounce sshd and fkupd
* logout from the console
* fkup into the server to get shell.
That's not the worst part... Aix'es are stable enough to run for years w/o needing any maintenance, so all this complexity could be bearable.
However, the app team used to log a change request asking to copy a new pdf file into that server every week and drop it to app directory, chown it to app user. Why can't they do that themselves you ask? Bcuz they 'only need this pdf to get there, that's all, and we're not wasting our time to raise access requests and chase for approvals just for a pdf...'
oh, and all these steps must be repeated each time a sysadmin tries to implement the change request, as all the movements and decisions must be logged and justified.
Each server access takes roughly half an hour. 4 servers -> 2hrs.
So yeah.. Surely getting your accesses sorted out once is so much more time consuming and less efficient than logging a change request for sysadmins every week and wasting 2 frickin hours of my time to just copy a simple pdf for you.. Not to mention that there's only a small team of sysadmins maintaining tens of thousands of servers, and every minute we have we spend working. Lunch time takes 10-15 minutes or so.. Almost no time for coffee or restroom. And these guys are saying sparing a few hours to get their own accesses is 'a waste of their time'...
That was the time I discovered Skrillex.
-
This morning I was looking in our database in order to solve a problem with a user registration and I accidentally noticed some users registered with unusual email addresses (temporary mail services, Russian providers and so on...).
I immediately thought about malicious users, so I dug into the logs, and I found that the registration requests started from an IP address belonging to our company (we have static IP addresses). My first reaction was: «OMG! Russian hackers have infiltrated our systems and started registering new users!»
So, I found the coworker owning the laptop from which the requests were sent and I went to him in order to warn him that someone violated his computer.
And he said: «Ah! Those 7 users? Yeah, I was doing some tests, I registered them. My email address was already registered so I created some new ones».
Really, man? Really? WTF
-
Today was a manic-depressive kind of day. Spent the morning helping some developers with getting their code to run a stored procedure to drop old partitions, but it wasn't working on their end. It was a fairly simple proc. But working with partitions is a little like working with an array. I figured out that they were passing the wrong timestamp, and needed to add +1 to delete the right partition. Got that sorted out, and things were good. Lunch time.
After lunch I did some busy work, and then the PO comes up at about 2PM and says he's assigned some requests to me. The first was just attaching some scripts. Easy. The second, the user wants a couple of schemas exported ... at 6PM. I've been in the office since 6:45AM.
While I'm setting up some commands to run for the data export, a BA walks up and asks if I'm filling in for another DBA who is out for a few weeks. Yep. There's a change request that hasn't been assigned, and he normally does the work. I ask when it's due. Well, the pre-implementation was supposed to be done in the morning, but it wasn't, and we're in the implementation window ... half way through. I bring up the change task and look at it: "Create new schema and users." That's all it says. The BA laughs. I tell him I need more to go on. 10 minutes later he sends an email with the information. There's only two hours left in the window, and I can only use half of it, because the production guys have to do their stuff, and we're in their window. Now I'm irritated, because I'm new to Oracle, and it's an unforgiving mistress. Fortunately, another DBA says he'll do it, so that we can get it done in time. But he can't work it either, because Dev DBAs don't have access to QA, and the process required access for this task. It gets shelved until the access issue is resolved. It's now after 4:15PM. I'm going to be in traffic with that 6PM deadline.
I manage to get home and to the computer by 5:45PM. Log in. Start VPN. Box pops up on screen. Java needs to update. I choose skip update. Box pops up again. It won't let me log in until Java is current. Passed.
I finally get logged in, and it's 6:10PM. I'm late getting the job started. I pull up Putty and log into the first box, paste my pre-prepared command in the command line and hit an error. Command not found. I'm tired, so it takes a moment to sink in. I don't have time for this.
I log into DBArtisan and pull up the first database, use the wizard to set the job, and off it goes. Yay. Bring up the second database, and have to enter the connect info. Host not found. Wut? Examine host name. Yep, it's correct. Try a different method. Host not found. Go back to Putty. Log in. Paste string. Launch. Command not found. Now my brain is quitting on me. Why now? Fiddle with some settings, reset $ORACLE_HOME. Try again. Yay. It works. I'm done. It's after 7PM.
There is nothing like technology to snatch the euphoria of a success away from you. It's a love-hate thing, but I wouldn't trade it for anything else. I'm done. Good night.
-
When I open sourced one of my projects on GitHub, I thought: "I've got a small but good user base (70k); surely there are some good developers ready to improve the project and send pull requests."
I only received translations. I'm (a bit) disappointed.
-
Whelp. I started making a very simple website with a single-page design, which I intended to use for managing my own personal knowledge on a particular subject matter, with some basic categorization features and a simple rich text editor for entering data. Partly as an exercise in web development, and partly due to not being happy with existing options out there. All was going well...
...and then feature creep happened. Now I have implemented support for multiple users with different access levels; user profiles; encrypted login system (and encrypted cookies that contain no sensitive data lol) and session handling according to (perceived) best practices; secure password recovery; user-management interface for admins; public, private and group-based sections with multiple categories and posts in each category that can be sorted by sort order value or drag and drop; custom user-created groups where they can give other users access to their sections; notifications; context menus for everything; post & user flagging system, moderation queue and support system; post revisions with comparison between different revisions; support for mobile devices and touch/swipe gestures to open/close menus or navigate between posts; easily extendible css themes with two different dark themes and one ugly as heck light theme; lazy loading of images in posts that won't load until you actually open them; auto-saving of posts in case of browser crash or accidental navigation away from page; plus various other small stuff like syntax highlighting for code, internal post linking, favouriting of posts, free-text filter, no-javascript mode, invitation system, secure (yeah right) image uploading, post-locking...
On my TODO-list: Comment and/or upvote system, spoiler tag, GDPR compliance (if I ever launch it haha), data-limits, a simple user action log for admins/moderators, overall improved security measures, refactor various controllers, clean up the code...
It STILL uses a single-page design, and the amount of feature requests (and bugs) added to my Trello board increases exponentially with every passing week. No other living person has seen the website yet, and at the pace I'm going, humanity will have gone through at least one major extinction event before I consider it "done" enough to show anyone.
help
-
You realize that the ERP software you use at your company is shit when:
- there is no server-side ERP backend handling requests
- the whole permission system is client-side (!)
- every client directly connects to the MSSQL database with a supervisor user (stored in plain text in a local config file)
- the MSSQL database contains tables with:
- typos
- names like "contract" but then also "contracts"
- mixed german and english words
- the multiple-business-unit implementation uses 4 columns named "Layer 1, Layer 2, Layer 3, Layer 4" in EACH table
- you find out that the ERP software is created with a fucking "software creation tool"
- there is no API, so you have to program one yourself to use for services
Yet, they charge us a shit ton of money for their broken-ass software.
-
For about 1.5 years on and off, we've been developing a system to rate tickets/requests sent to our team. We wrote it in Angular, and it turned into this feature-rich gorgeous application with custom-built graphical statistic tracking, in-app social networking capabilities, robust user profiles, etc.
Eventually, we no longer had time to work on it along with all the other applications we're developing. So we passed ownership of the app over to a couple of other developers on our team. You'd think that they'd just work off what we already built and keep the robust environment we created for them. But nope, instead of keeping everything we already built, they scrapped it all and started from scratch using React instead of Angular, and removed all of those robust features and turned the app into a shell of its former self. No more statistic tracking, no more social networking capabilities, no more fancy user profiles. Just a single page with a number representing how many "Good" tickets you've sent to us, and how many "Bad" tickets you've sent.
1.5 years and hundreds of hours worth of work, all gone and replaced with the most rudimentary basic React app ever.
-
Saw this sent into a Discord chat today:
"Warning, look out for a Discord user by the name of "shaian" with the tag #2974. He is going around sending friend requests to random Discord users, and those who accept his friend requests will have their accounts DDoSed and their groups exposed with the members inside it becoming a victim as well. Spread the word and send this to as many discord servers as you can. If you see this user, DO NOT accept his friend request and immediately block him. Discord is currently working on it. SEND THIS TO ALL THE SERVERS YOU ARE IN. This is IMPORTANT: Do not accept a friend request from shaian#2974. He is a hacker.
Tell everyone on your friends list because if somebody on your list adds one of them, they'll be on your list too. They will figure out your personal computer's IP and address, so copy & paste this message where ever you can. He is going around sending friend requests to random discord users, and those who accept his requests will have their accounts and their IP Addresses revealed to him. Spread the word and send this to as many discord servers as you can. If you see this user, DO NOT accept his friend request and immediately block him. Saw this somewhere"
I was so angry I typed up an entire feature-length rant about it (just wanted to share my anger):
"1. Unless they have access to Discord data centres or third-party data centres storing Discord user information I doubt they can obtain the IP just by sending friend requests.
2. Judging by the wording, for example, 'copy & paste this message where ever you can' and 'Spread the word and send this to as many discord servers as you can. If you see this user, DO NOT accept his friend request and immediately block him.' this is most likely BS, prob just someone pissed off at that user and is trying to ruin their reputation etc.. Sentences equivalent to 'spread the word' are literally everywhere in this wall of text.
3. So what if you block the user? You don't even have their user ID, they can change their username and discrim if they want. Also, are you assuming they won't create any alts?
4. Accounts DDoSed? Does the creator of this wall of text even understand what that means? Wouldn't it be more likely that 'shaian' will be DDoSing your computer rather than your Discord account? How would the account even be DDoSed? Does that mean DDoSing Discord's servers themselves?
5. If 'shaian' really had access to Discord's information, they wouldn't need to send friend requests in order to 'DDoS accounts'. Why would they need to friend you? It doesn't make sense. If they already had access to Discord user IP addresses, they won't even have to interact with the users themselves. Although you could argue that they are trolling and want to get to know the victim first or smth, that would just be inefficient and pointless. If they were DDoSing lots of users it would be a waste of time and resources.
6. The phrase 'Saw this somewhere' at the end just makes it worse. There is absolutely no proof/evidence of any kind provided, let alone witnesses.
How do you expect me to believe this copypasta BS scam? This is like that 'Discord will be shutting down' scam a while back.
Why do people even believe this? Do you just blindly follow what others are doing and, without thinking, copy and paste random walls of text?
Spreading this false information is pointless and harmful. It only provides benefits to whoever started this whole thing, trying to bring down whoever 'shaian' is.
I don't think people who copy & paste this sort of stuff are ready to use the internet yet.
Would you really believe everything people on the internet tell you?
You would probably say 'no'.
Then why copy & paste this? Do you have a reason?
Or is it 'just because of 'spread the word''?
I'm just sick of seeing people reposting this sort of stuff
People who send this are probably like the people who click 'Yes' to allow an app to make changes in the User Account Control window without reading the information about the publisher's certificate, or the people who click 'Agree' without actually reading the terms and conditions."
-
Holy shit firefox, 3 retarded problems in the last 24h and I haven't fixed any of them.
My project: an infinite scrolling website that loads data from an external API (CORS hehe). All Chromium browsers of course work perfectly fine. But firefox wants to be special...
(tested on 2 different devices)
(Terminology: CORS: a request to a resource that isn't on the current website's domain, like any external API)
1.
For the infinite scrolling to work, new html elements have to be silently appended to the end of the page and removed from the beginning. Which works great in all browsers. BUT IF YOU HAPPEN TO BE SCROLLING DURING THE APPENDING & REMOVING, FIREFOX TELEPORTS YOU RANDOMLY TO THE END OR START OF THE PAGE!
Guess I'll just debug it and see what's happening step by step. Oh how wrong I was. First, the problem can't be reproduced when debugging. FUCK! But I notice something else very disturbing...
2.
The Inspector view (hierarchical display of all html elements on the page) ISN'T SHOWING THE TRUE STATE OF THE DOM! ELEMENTS THAT HAVE JUST BEEN ADDED AREN'T SHOWING UP AND ELEMENTS THAT WERE JUST REMOVED ARE STILL VISIBLE! WTF????? You have to do some black magic fuckery just to get firefox to update the list of DOM elements. HOW AM I SUPPOSED TO DEBUG MY WEBSITE ON FIREFOX IF IT'S SHOWING ME PLAIN WRONG DATA???!!!!
3.
During all of this I just randomly decided to open my website in private (incognito) mode in firefox. Huh what's that? Why isn't anything loading and errors are thrown left and right? Let's just look at the console. AND IT'S A FUCKING CORS ERROR! FUCK ME! Also a small warning says some URLs have been "blocked because content blocking is enabled." Content Blocking? What is that? Well it appears to be a super special super privacy mode by firefox (turned on automatically in private mode), THAT BLOCKS ALL CORS REQUESTS THAT MAY OR MAY NOT DO SOME TRACKING. AN API THAT IS 100% CORS COMPLIANT CAN'T BE USED IN FIREFOX'S PRIVATE MODE! HOW IS THE END USER SUPPOSED TO KNOW THAT??? AND OF COURSE THE THROWN EXCEPTION JUST SAYS "NETWORK ERROR". HOW AM I SUPPOSED TO TELL THE USER THAT FIREFOX HAS A FEATURE THAT BREAKS THE VERY BASIS OF MY WEBSITE???
WHY CAN'T YOU JUST BE NORMAL FIREFOX??????????????????
I actually managed to come up with a fix for 1. that works like <50% of the time -_-
-
Ah, developers, the unsung heroes of caffeine-fueled coding marathons and keyboard clacking symphonies! These mystical beings have a way of turning coffee and pizza into lines of code that somehow make the world go 'round.
Have you ever seen a developer in their natural habitat? They huddle in dimly lit rooms, surrounded by monitors glowing like magic crystals. Their battle cries of "It works on my machine!" echo through the corridors, as they summon the mighty powers of Stack Overflow and Google to conquer bugs and errors.
And let's talk about the coffee addiction – it's like they believe caffeine is the elixir of code immortality. The way they guard their mugs, you'd think it's the Holy Grail. In fact, a developer without coffee is like a computer without RAM – it just doesn't function properly.
But don't let their nerdy exteriors fool you. Deep down, they're dreamers. They dream of a world where every line of code is bug-free and every user is happy. A world where the boss understands what "just one more line of code" really means.
Speaking of bosses, developers have a unique ability to turn simple requests into complex projects. "Can you make a small tweak?" the boss asks innocently. And the developer replies, "Sure, it's just a minor change," while mentally calculating the time it'll take and the potential for scope creep.
Let's not forget their passion for acronyms. TLA (Three-Letter Acronym) is their second language. API, CSS, HTML, PHP, SQL... it's like they're playing a never-ending game of Scrabble with abbreviations.
And documentation? Well, that's their arch-nemesis. It's as if writing clear instructions is harder than debugging quantum mechanics. "The code is self-explanatory," they claim, leaving everyone else scratching their heads.
In the end, developers are a quirky bunch, but we love them for it. Their quirks and peculiarities are what make them the creative, brilliant minds that power our digital world. So here's to developers, the masters of logic and the wizards of the virtual realm!
-
I DIDN'T SIGN UP FOR THIS !!!
After seeing a bunch of posts about Enki, decided to give it a try,
enters my info on the sign up page
*email address is already taken* : WHAT !!
changes email address
*your username is already taken* : WHAT !!
goes back and searches if there are any mails from Enki
*no results found* : Dafuq !!
Requests password reset
*Receives first mail from enki ever, with a reset link*
Did they change their name from something else to Enki, or do they have a bunch of emails in their database to show off their user base?
Can anyone shed some light on this, cause I'm 100% sure I didn't sign up for this before.
after resetting the password I'm able to login, but in the Notification section it says
*your email is not confirmed*
well I would confirm it, WHEN I GET IT !!
-
I am building a website inspired by devRant but have never built a server network before, and as I'm still a student I have no industry experience to base a design on, so I was hoping for any advice on what is important / what I have fucked up in my plan.
The attached image is my currently planned design. Blue is for the main site, and is a cluster of app servers to handle any incoming requests.
Green is a subdomain to handle images, as I figured it would help with performance to have image uploads/downloads separated from the main webpage content. It also means I can keep cache servers and app servers separated.
Pink is internal stuff for logging and backups and probably some monitoring stuff too.
Purple is databases. One is dedicated for images, that way I can easily back them up or load them to a cache server, and the other is for normal user data and posts etc.
The brown proxy in the middle is sorta an internal proxy which the servers need to authenticate with to connect to, that way I can just open the database to the internal proxy, and deny all other requests, and then I can have as many app servers as I want and as long as they authenticate with the proxy, they can access the database without me changing any firewall rules. The other 2 proxies just distribute requests between the available servers in the pool.
Any advice would be greatly appreciated! Thanks in advance :D
-
TLDR: There are some days when the Gods of IT are not with you. Just lost a whole day of work.
So this morning, we (me and my team) had big performance issues with our web app. Lots of requests time out, big latency, etc.
Try to ssh to VPS, latency of 10 seconds between user input and output.
Usual checks: RAM ok, Proc ok, hard drive ok, reboot server (20 minutes), update/upgrade
We decide to call OVH. After 15 minutes call, we try to reboot in rescue mode. Reboot fails at 60% + everything freezes.
After an hour, OVH opens an incident ticket on +200 vps instances (including mine) everything is down during +1h
Finally everything is okay ! Even had time to migrate my new database schema.
Still, quite heavy on the mind, but it feels good to go home with everything working correctly.
-
Hopefully, you already know that the company controlled by the alleged reptiloid subhuman and olympic testicle juggler formerly known as Mister Zuck My Tits is not to be trusted.
But as is always the case in this bitch, I've been forced into cowjizz flooded swamps' worth of stinking shit platforms for the sake of avoiding isolation.
And so, I've just found yet another way in which Facebook **THUNDERSTRIKE** ... the company, not the geriatric ward, is one of the CROWN ACHIEVEMENTS of human civilization.
Let me tell you something: some people are fucking broke. Hell, some people sleep on the streets, live on scraps, and willingly engage in acts of public defecation when provoked. But I'm not even talking about them no, just plain *broke*.
And so imagine being that guy who doesn't really use his phone much, except maybe for sharing cat pictures with mom because that's what being an absolute chad is all about. You don't get a new phone, because money is a __little__ bit tight. But THEN...
The dreaded CAPITAL strikes, and requests of you to bend and fall onto your knees so as to provide intense, intimate and manual -- as well as oral -- PLEASURE to the [NOT SO] METAPHORICAL PENIS of the """SYSTEM""".
Oh, what an abominable, drooooooling revenant that lies before you!
"Gimme your ass... " he says, menacingly, as you wail about in a futile attempt to guard and preserve the very last vestiges of your own anal virginity.
And so you fight, and kick him in the NADS with everything you have, down to the final shreds of vigor. Victory! Or so you thought...
"You must... " he mutters, mortally wounded "update WhatsApp... "
"Still you breathe?!" you exclaim, suddenly transformed into a heroic, sexy moustachoed arquebusier "After I'm done ~OILING~ my VICTORIOUS CHEST, I *shall* bestow DEATH uppon you!".
But as you rip open your shirt to apply sensual oiling to your marvellous frontal assets, your nemesis reveals its portentous Portugal: "this new version of Android... " he gasps as he perishes "is incompatible with your device... "
"Ughh! Sacrebleu!" you shriek out in pain, realizing that you are now unable to ACCESS THE FUCKING DATA THAT IS IN YOUR OWN FUCKING HARDWARE BECAUSE OF A STUPID FORCED BINARY INCOMPATIBILITY.
That's right. Now even if I *do* get a new phone, I can't do shit about losing all of the family memes. And contacts and all of that shit, but the stickers are more important. A minor inconvenience, yes, and it didn't need all of this preamble but I was doing the dramatic fight scene bit inside my head as I was writing and I got into it.
Because the only documented way to transfer all of that data is to OPEN THE APPLICATION and scan some code, but every time I go to do that, IT TELLS ME I NEED TO UPDATE. And every time I GO TO UPDATE, it says that MY PHONE is TOO FUCKING OLD!! AAAAAAAGHGHGHGHGHGHGHG!!!!
And you too, might be a dashing french man from centuries past, with both balls and tits down to your fucking knees, folding your arms in a position that exudes smugness in a disgustingly irreverent and self-aggrandizing way, looking at me as a mere plebeian who cannot wrap his head around the mystical art of interacting with Google's black deuce box.
And you would be somewhat right in your judgement! But just having to fiddle about with these fucking pocket Elmo screens is such a traumatic experience for me that I'd rather lose my stickers.
[ADBREAK] Are you a debonair victorian undercover butt pirate, taking unparalleled care of your Falstaffian, highfalutin poils pubiens? Need your "sword" sharpened, as you browse through the pages of this magnanimous lexicon? Would you rather allocate final death to your coworkers than learn one more synonym for sonorous, supercilious and pontifical?
We all know that ALL you need to help keep that honor intact is slaying your enemies in high-stakes combat. But how to satisfy less gallant needs, when male prostitution is outlawed in more than sixteen duchies?
Look no further than BloodCurse, the ancient hex that will haunt your family for countless generations! With BloodCurse, you may crawl the earth as a mindless, shameless, piece of shit cockswallowing JUGGERNAUT that craves nothing BUT the consumption of scabbed human ass!
BloodCurse is easily contracted through consumption of the GENITAL fluids of highly-lecherous succubi, conjured through [EXTREMELY CENSORED]! This forbidden arcana allows the user to devour HIS OWN testicles in no time!
Get your bottle of scents, sensual Portuguese chest oils, and fucking designer-drug bath salts for the low, low price of a passionate, unceasing self-blowjob! And use my code FRONTALASSETS for 60% OFF in your next soul-robbing foray into the felational dark arts!
Big ups to BloodCurse for sponsoring this RRRRRRRR~$RRR$$RR%5RRRRR…
-
I had a ticket to enhance the loading of a page.
So instead of doing 40K requests to a MySQL DB in order to generate a tree and display it to the user on each page visit, the initial query was optimized, and moreover, the results are saved in a MongoDB, which is then served to the user on each page visit.
Long story short, after a code review the code got shipped to production and there was a bug which got fixed in a Hotfix shortly afterwards.
I got all the blame for the bug.
I don't deny I have a responsibility for the bug.
Do you guys think the code reviewer also has a shared responsibility for the bug?
-
PM blindly puts user requests into JIRA as tasks to complete without thinking through their relevancy. Some of these are straight up not possible or don't make any damn sense. (╯°□°)╯︵ ┻━┻ just a constant reminder how great it will be when I leave here at the end of the month.
-
every day I see full stack here and there...
full stack is not only db and code, but also "every step the bit goes through" from the end user's screen/input to the server and back to him,
whether it's an app or a service; the end user is only an example.
it's about knowing how the language behaves, how the server interprets and replies to requests, protocols, even how to do every single configuration on the systems you are using, and from my point of view that includes hardware.
pretty much that...
I get sick when I see a resume here and there claiming "I'm a full stack dev" when there's nothing on it saying that the guy knows at least how to change a light bulb... lol
Even worse, when I see job offers asking for "Full stack Dev, with no experience" ...
that's not possible without experience! sorry
-
- Implemented oauth1 - no body hashing
- URL contains credentials in plain text
- Used the Azure API Management feature as a proxy for our API; however, the documentation was on our API, thus exposing the API URL with no management to developers.
- Easy resource DDoSing, because each trial user got a DB and the registration process did not have bot checks. You could literally freeze the DB instance by spamming registration requests.
-
The dangers of PHP eval()
Yup. "Scary, you better make use of include instead" — I read all the time everywhere. I want to hear good case scenarios and feel safe with it.
I use the eval() method as a good resource to build custom website modules written in PHP, which are stored in and retrieved back from a database. I ENSURED IT'S SAFE AND CAN ONLY BE ALTERED BY PRIVILEGED USERS. THERE. I SAID IT. You could as well develop a malicious module and share it to be used on the same application, but this application is just for my use at the moment, so I don't wanna worry more or I'll become bald.
I had to take my fear out and confront it in front of you guys. If I had to count every single time somebody mentions the dangers of using eval on Stack Overflow or in the comments of the PHP documentation, I'd quit already.
Tell me if I'm wrong: in a safe environment, with a trustworthy piece of code, is it OK to execute eval('?>'.$pieceOfCode); ... right?
The reason I store code on the database is because I create/edit modules on the web editor itself.
I use my own coded layers to authenticate a privileged user: a single way to grant access to admin functions through a unique authentication tunnel, granting the privileged user access to the editor and the ability to send API requests, custom htaccess rules to protect the whole filesystem behind the domain root path, a custom URI controller + SSL. All this should do the trick to safely use the damn eval(), is that right?!
Unless malicious code is found in the code stored prior to its evaluation.
But FFS, in such a scenario, why not fuck up the framework filesystem instead? It's one password closer than the database.
I will need therapy after this. I swear.
If 'eval is evil' (as it appears in the suggested tags for this post), how can we ensure that third-party code is ever trustworthy without even looking at it? This happens already with Chrome extensions, or even phone apps, a long time after reaching millions of devices.
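For what it's worth, a minimal sketch of the kind of gate I'd want around that eval() call. This is damage control, not an endorsement; currentUserIsAdmin(), the table layout and the HMAC key handling are all assumptions:

```php
<?php
// Sign module code when a privileged user saves it, and verify the signature
// before every execution, so a tampered row in the database (via SQL
// injection elsewhere, say) can't smuggle code into eval().

function saveModule(PDO $db, int $id, string $code, string $hmacKey): void
{
    if (!currentUserIsAdmin()) { // assumed auth helper
        throw new RuntimeException('Not allowed');
    }
    $sig  = hash_hmac('sha256', $code, $hmacKey);
    $stmt = $db->prepare('UPDATE modules SET code = ?, signature = ? WHERE id = ?');
    $stmt->execute([$code, $sig, $id]);
}

function runModule(PDO $db, int $id, string $hmacKey): void
{
    $stmt = $db->prepare('SELECT code, signature FROM modules WHERE id = ?');
    $stmt->execute([$id]);
    $module = $stmt->fetch(PDO::FETCH_ASSOC);
    if ($module === false) {
        throw new RuntimeException('No such module');
    }
    $expected = hash_hmac('sha256', $module['code'], $hmacKey);
    if (!hash_equals($expected, $module['signature'])) {
        throw new RuntimeException('Stored module failed integrity check');
    }
    eval('?>' . $module['code']); // still eval, but now gated
}
```
-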
Malware is a nasty kind of application that can spy on you, use your computer to attack others, or encrypt your files and hold them for ransom.
The reason that malware exists is because of how the file system works. On Windows, everything can access everything. Of course, there are security measures, like needing administrator permissions to edit/delete a file, but they are exploitable.
If the malware is not using an exploit, nothing is there to stop a user from unknowingly clicking the yes button, when an application requests admin rights.
If we want to stop viruses in the first place, we need to create a new file-sharing system.
Imagine, that every app has a partition, and only that app can access it.
Currently, when you download a Word document, you would go ahead, start up Word, go into the Downloads folder and open the file.
In the new file-sharing system, you would need to click "Send file to Word" in your browser, and the browser would create a copy of the file in a transfer-partition. Then, it would signal to Word, saying "Hey! Here's a file that I sent to you, copy it to your partition please!". After that, Word just copies the file to its own partition, signals "Ok! I'm done!", and then the browser deletes the file from the shared partition.
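Reduced to code, the handoff could look something like this (a purely hypothetical broker API, just to make the flow concrete):
// Each app sees only its own partition plus a shared transfer area.
interface Partition {
  copyIn(name: string, bytes: Uint8Array): void;
  read(name: string): Uint8Array;
  remove(name: string): void;
}

function sendFileTo(
  transfer: Partition, // the shared transfer partition
  target: Partition,   // the receiving app's private partition
  name: string,
  bytes: Uint8Array,
): void {
  transfer.copyIn(name, bytes);             // "Hey! Here's a file that I sent to you"
  target.copyIn(name, transfer.read(name)); // receiver copies it to its own partition
  transfer.remove(name);                    // sender deletes the shared copy
}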
A little change in the interface, but a huge change in security.
The permission system would be a better UAC. The best way I can describe it is installing an app on Android: it shows what permissions the app wants, and you can choose to install it or not.
Replace "install" with "grant" and that's what I imagined.
Of course, there would be blacklisted permissions, that only kernel-level processes have access to, like accessing all of the partitions, modifying applications, etc.
What do you think? -
My colleague had some problems today.
He had problems with both user input and predefined strings sent with AJAX to the backend: the "&" sign was splitting values in GET requests into unintended parameters. So... he was simply going to search and replace all of those signs with the word "and"...
encodeURIComponent()? And why are you sending our forms with GET requests?! 🙃
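For reference, the distinction matters here: encodeURI() leaves "&" untouched, so it would not even have fixed the bug:
const value = "fish & chips";

// encodeURI does NOT escape "&", so the parameter still splits:
console.log(encodeURI(`/search?q=${value}`));
// -> /search?q=fish%20&%20chips

// encodeURIComponent escapes it properly:
console.log(`/search?q=${encodeURIComponent(value)}`);
// -> /search?q=fish%20%26%20chips

// Or let URLSearchParams handle the whole query string:
const params = new URLSearchParams({ q: value });
console.log(`/search?${params}`);
// -> /search?q=fish+%26+chips
(And still: stop sending forms as GET.) -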
Do any of you fellow devs ever push to production during working hours?
I have the luxury to do so, and at first I was uncomfortable, as this of course takes the system offline for a few seconds, and the next web requests from users are painful due to the cold start of the web server (and we have 40-100 active users at any given time)...
...but you know what? They all complain SharePoint is slow (it is) anyway, so. I do it.
Sometimes it fucking fails, so I do keep all of the historic deployments handy, ready to revert. :) -
I once wrote an HTTP interceptor which was supposed to check the internal cache for user data and only do some work with it if the user was there (we manually controlled what and who was in the cache). There were two methods on the service, cGetUser and dGetUser. I of course called d, which, it turned out, loaded the user profile from the database. Which would be fine if it weren't done in an interceptor... on a web service... with a little over 25000 requests per minute... on each node...
Tldr. I accidentally wrote a database DDoS tool into our app...
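The safe version was the one-character-different method: cache lookups only, never I/O. Roughly (the names mirror the rant; the types are made up):
interface UserProfile { id: string; name: string; }

const userCache = new Map<string, UserProfile>();

// cGetUser: cache only. O(1), no I/O; safe to call on every request.
function cGetUser(id: string): UserProfile | undefined {
  return userCache.get(id);
}

// dGetUser: falls through to the database. Fine for a one-off lookup,
// lethal inside an interceptor at 25000 requests per minute per node.
async function dGetUser(
  id: string,
  db: { load(id: string): Promise<UserProfile> },
): Promise<UserProfile> {
  const cached = userCache.get(id);
  if (cached) return cached;
  const profile = await db.load(id);
  userCache.set(id, profile);
  return profile;
}
One letter of difference in the name, several orders of magnitude of difference in load. -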
When you start a new job and inherit a steaming pile of shit that NEEDS to integrate with a completely separate application, and after repeatedly telling your manager his requests aren’t possible, he denies it and says it is possible.
Some context. They have an old application written in MVC. They want a new application written in react. They want all the old functionality to integrate with the new functionality. I don’t just mean render different views based on the route, I mean they want both applications to integrate seamlessly to create a new application. Not to mention this new application is completely different to the old one and has requirements that aren’t even compatible with the old application.
Also. I got into trouble today for completing the sprint in 2 days and starting on user stories (that were in the sprint, not the backlog). Apparently we’re not allowed to showcase the product until the sprint ends and we go through our retrospective/demo. LMAOOOO -
I just started a new job last week. Old-school sysadmin role for a pretty old-school company, but the pay is nice and the kids've gotta eat.
They gave me a windows laptop. I haven't used windows for work or as a daily driver since 2016, and now, a week into trying to make this machine work for me, I have the following observations to report.
WSL is nice. It's nice to have it installed(though actually installing it was an adventure unto itself), and to set alacritty to open my default user prompt straight into that is very nice. As terminal emulators are by far my most used piece of software, that's nice to have.
Command-line software management through powershell, winget, and chocolatey are also very nice.
I like the accessibility offered by autohotkey, though there is something of a learning curve on it. Once I get better with it, I suspect that what follows will be largely mitigated.
The Bad:
In general, Windows is janky. It feels like it's all kinda taped together without any particular cohesion in mind. As a desktop, it feels decidedly amateur compared to the feature-mountain polish of MacOS, and especially compared to the flexibility and infinite possibilities of Linux.
Lots of screen real estate is wasted on window decorations, and fonts look terrible at smaller sizes, because the antialiasing of fonts is just terrible. Almost all the features I depend on in other desktops, like ad-hoc searches and launches (alfred, rofi), are, again, janky. They work, but they typically require more typing than alfred or rofi. I admit I haven't spent weeks on this problem yet, but I haven't found a workable solution yet with wox, hain, or keypirinha. Quick searches like what you get with alfred, alfred workflows, and the swiss army knife that is rofi just aren't possible or reliable with the tools I've used so far, and most require some kind of indexing agent to fully function.
It beggars imagination that a desktop which subjects users to "default apps", and which is purported to be acceptable for enterprise, professional use, does not have a default entry for a text editor. I installed nvim-qt, and I want to use it to edit anything and everything I ever edit as text, but all too often, apps have hard-coded instructions to open text files with notepad.
I want to open certain URLs with firefox, certain ones with firefox developer edition, and others with vivaldi, and yet I have not seen an app in my searches that allows me to set this kind of configuration. I found one that's supposed to, but it just ignores everything I put into its config and opens MS Edge for everything. Jank.
Simple things take too long. Like the delay between when I laboriously hit ctrl-alt-del to bring up the login and when the actual text field appears, and the delay between that and when I want to start using the computer.
Changing some settings requires a reboot. Updating some software requires a reboot. Updating permissions on something sometimes requires a reboot. And those are all on top of the frequent requests to reboot for updates.
I would have thought Windows would have overcome most of the issues that create these problems by now, but it's just, as I said, amateur. -
This Monday the website I created for my school will go online at 10:45, and about 50 students will start accessing it all at the same time, making 20-30 requests each in the timespan of a couple of minutes.
I am very much afraid that it'll go offline or break. And I put my name on it, so hopefully everything goes well and Heroku doesn't suddenly decide to restart the server because there's too much traffic.
TBH, I've been a bit stupid: I could easily reduce the number of requests per user. Might do it before Monday.
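The obvious fix is batching: one endpoint that takes a list of IDs instead of one request per item. A sketch (the endpoint paths are made up):
interface Item { id: string; title: string; }

// Before: one request per item, N round trips.
async function loadAllNaive(ids: string[]): Promise<Item[]> {
  return Promise.all(
    ids.map(id => fetch(`/api/items/${id}`).then(r => r.json())),
  );
}

// After: one batch endpoint, a single round trip.
async function loadAllBatched(ids: string[]): Promise<Item[]> {
  const query = ids.map(encodeURIComponent).join(",");
  const res = await fetch(`/api/items?ids=${query}`);
  return res.json();
}
Fifty students making one request each is a lot friendlier to a single Heroku dyno than fifty students making thirty. -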
TL;DR - Coding standards are a shit practice IMO.
What we don't talk about enough among software engineers is the artistic aspect of the craft of writing code.
For example, consider your client saying this to you.
"Build me a web app where a user will login. They will have a wallet to purchase subscriptions of 3 products of different prices."
Give that statement to, say, 10 devs and see how each of them will come up with their own vision of the problem and how they would implement it in their own way.
So now you are working in a big team with, say, 30 people, and you have a big project to work on. Different members of the team bring different styles of code for you to review and, if the Team Leader is as incompetent as mine is, they will find it troubling to understand the pull requests.
So what do you do in these scenarios? Implement coding standards!!! They take away the artistic vision of the devs and force them to follow rules like sheep.
Also the company doesn't give two shits about the coding standards cuz, as long as they have working code that makes them money, they won't care how the code is written.
Thoughts? -
I really do love programming, but I really do hate implementing features that will make the database and code way more messy and complex. Which would be fine if I weren't quite sure the feature is utter bullshit and the user just can't really frame what they need.
And yes, I've asked my boss if he's sure that's what they want, and whether the other feature I implemented would fit those needs too. Yes, he is sure that they're sure they need exactly the requested feature. -
I have a few side project ideas. I started one of them a few months ago (project setup, dependencies, git repo, index page, very basic API and client functionality). But I cannot get myself to work on it or even think about it (for months now). The reason? I do not want to work on the client/frontend! I do not want to deal with React or Vue or Svelte or fuckjs or even jquery. It's a fucking mess.
For the backend, the requests are stateless: you get a request, handle it, and respond back. Need to update state? Database. That's it!
For the frontend, there are just too many states I can't keep up with! When the user checks or unchecks a checkbox, I need to maintain the state of the checkbox and all the effects of changing it, while syncing with the backend and making sure the elements are still styled correctly with the applied effects. Multiply that by all the expected interactive elements on the page. It's exhausting!
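For what it's worth, the pattern that keeps this manageable is making the server the source of truth and treating local state as a disposable optimistic copy. A sketch of one synced checkbox in React (the endpoint and prop names are assumptions):
import { useState } from "react";

function SyncedCheckbox({ id }: { id: string }) {
  const [checked, setChecked] = useState(false);
  const [saving, setSaving] = useState(false);

  async function toggle(next: boolean) {
    setChecked(next); // optimistic update
    setSaving(true);
    try {
      await fetch(`/api/flags/${id}`, {
        method: "PUT",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ checked: next }),
      });
    } catch {
      setChecked(!next); // roll back if the backend sync fails
    } finally {
      setSaving(false);
    }
  }

  return (
    <input
      type="checkbox"
      checked={checked}
      disabled={saving}
      onChange={e => toggle(e.target.checked)}
    />
  );
}
One component, one piece of state, one sync effect; the trick is never letting that multiply into hand-written bookkeeping across the whole page. -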
Random recruiter from LinkedIn sends an "opportunity" at a well-established German company in Madrid ..
.. it has three entries in the requirements for jQuery, associated with, and I quote, "OOP, Object Programming, and other frameworks" ..
Goes on to require knowledge of “css, scss and saas”, along with “Don HTML” ..
And requests “experience with the principles of agile user interface methodologies” ..
And Angular 1 ..
How would you respond to this one!?
I actually did. I corrected the mistakes, pointed out what other mistakes there were and the differences between libraries and frameworks, .. and said that I don't like Angular and I'm not interested in learning the old one at all .. -
A very long rant.. but I'm looking to share some experiences, maybe a different perspective.. huge changes at the company.
So my company is starting our microservices journey (we have 359 retail websites at this moment).
First question was: What to build first?
The first thing we had to do was decide what we wanted to build as our first microservice. We went looking for a microservice that could be used read-only, that consumers could easily implement without overhauling production software, and that was isolated from other processes.
We’ve ended up with building a catalog service as our first microservice. That catalog service provides consumers of the microservice information of our catalog and its most essential information about items in the catalog.
By starting with building the catalog service the team could focus on building the microservice without any time pressure. The initial functionalities of the catalog service were being created to replace existing functionality which were working fine.
Because we choose such an isolated functionality we were able to introduce the new catalog service into production step by step. Instead of replacing the search functionality of the webshops using a big-bang approach, we choose A/B split testing to measure our changes and gradually increase the load of the microservice.
Next step: Choosing a datastore
The search engine that was in production when we started this project was making use of Solr. Due to the use of Lucene it was performing very well as a search engine, but from an engineering perspective it lacked some functionality. It came up short if you wanted to run it in a cluster environment, configuring it was hard and not user friendly and, last but not least, development of Solr seemed to have ground to a halt.
Elasticsearch started entering the scene as a competitor to Solr and brought interesting features. Still using Lucene, which we were happy with, it was built with clustering in mind, provided out of the box. Managing Elasticsearch is easy since there are REST APIs for configuration and, as a fallback, YAML configurations are available.
We decided to use Elasticsearch since it provides us the strengths and capabilities of Lucene with the added joy of easy configuration, clustering and a lively community driving the project.
Even bigger challenge? Which programming language will we use
The team responsible for developing this first microservice consists of a group of web developers. So when looking for a programming language for the microservice, we went searching for a language close to their hearts and expertise. At that time a typical web developer at least had knowledge of PHP and Javascript.
What we’ve noticed during researching various languages is that almost all actions done by the catalog service will boil down to the following paradigm:
- Execute an HTTP call to fetch some JSON
- Transform JSON to a desired output
- Respond with the transformed JSON
Actions that easily can be done in a parallel and asynchronous manner and mainly consists out of transforming JSON from the source to a desired output. The programming language used for the catalog service should hold strong qualifications for those kind of actions.
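In code, the whole paradigm fits in one small handler. A sketch assuming Node 18+ (global fetch) and Express; the source URL and field names are made up:
import express from "express";

const app = express();

app.get("/catalog/:id", async (req, res) => {
  try {
    // 1. Execute an HTTP call to fetch some JSON
    const upstream = await fetch(`https://source.example/items/${req.params.id}`);
    const item = await upstream.json();

    // 2. Transform the JSON to a desired output
    const dto = { id: item.id, title: item.name, price: item.price?.amount };

    // 3. Respond with the transformed JSON
    res.json(dto);
  } catch {
    res.status(502).json({ error: "upstream unavailable" });
  }
});

app.listen(8080);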
Another thing to note is that some functionality built using the catalog service will result in a high level of concurrent requests. For example, the type-ahead functionality will trigger several requests to the catalog service every time a user uses it.
To us, PHP and .NET at that time weren't sufficient for building the catalog service based on the requirements we had set. Eventually we decided to use Node.js, which is better suited to the things we are looking for, as described earlier. Node.js provides a non-blocking I/O model, and being event driven helps us develop a high performance microservice.
The leap to start programming in Node.js is relatively small since it basically is Javascript, a language that was familiar to the developers around that time. While Node.js introduces some new concepts, it is relatively easy for a developer to start using it.
The beauty of microservices, and the isolation they provide, is that you can choose the best tool for each particular microservice. Not all microservices will be developed using Node.js and Elasticsearch. All kinds of combinations might arise, and this is what makes the microservices architecture so flexible.
Even when Node.js or Elasticsearch turns out to be a bad choice for the catalog service, it is relatively easy to switch that choice for magic 'X' or component 'Z'. By focussing on creating a solid API, the components driving that API don't matter that much. Each should do what you ask of it, and when it is lacking you just replace it.
Many more headaches to come later this year ;) -
So GitHub's "Repository Refresh" layout is live now and they haven't fixed the damn .repohead (with the repo name and the links to Issues, Pull Requests, etc.) stretchting across the entire screen instead of being constrained to the middle column :/
If it annoys you just as much as it does me, just whack this CSS into a user-CSS extension and it's fixed:
.repohead .d-flex:first-of-type,
.repohead nav.js-repo-nav {
  /* Constrain the repo header and nav to the middle column */
  max-width: 1280px !important;
  margin-left: auto !important;
  margin-right: auto !important;
}
.repohead .UnderlineNav {
  /* Remove repohead bottom divider */
  box-shadow: none;
  background-color: transparent !important;
}
.repohead {
  /* Add repohead bottom divider to repohead so it stretches across the entire screen */
  box-shadow: inset 0 -1px 0 #e1e4e8;
}
-
eTime Xpress by Celayix Software
Quite possibly the worst time and attendance software on the market. The only reason the company is still using it is that the big cheese refuses to pay any per-user fees for any product whatsoever.
It requires an installation of Ericom because all supervisors must log in to schedule employees and record hours for payroll.
Printing is a nightmare to support because you're essentially printing through RDP and all print drivers for everyone's assortment of crappy printers must be installed on the server.
The software supports SOAP API calls, but it can't handle more than three concurrent requests without barfing, so you have to code your application around that...
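"Code your application around that" mostly means a concurrency gate: a tiny semaphore that never lets more than three calls be in flight at once (the soapClient below is a stand-in, not the real API):
// Tiny concurrency gate: never more than `limit` in-flight calls.
function createLimiter(limit: number) {
  let active = 0;
  const queue: Array<() => void> = [];

  return async function run<T>(task: () => Promise<T>): Promise<T> {
    if (active >= limit) {
      await new Promise<void>(resolve => queue.push(resolve));
    }
    active++;
    try {
      return await task();
    } finally {
      active--;
      queue.shift()?.(); // wake one queued caller, if any
    }
  };
}

const limit3 = createLimiter(3);
// Every SOAP call goes through the gate:
// const shifts = await limit3(() => soapClient.getShifts(args));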
I could go on... -
I challenge you to start a process from php.
The following criteria must be fulfilled:
- php-fpm
- the process is started on http request by user
- the response does not wait for the process to be finished
- the process must finish, possibly after the response reached the user
- the running process does not block a fpm thread/worker from handling further requests
Simple, right? -
*Frustrated user noises* Whyyyy, Grafana, why don't you implement any actual query forgery checks?!
So long as a user has access to the Grafana frontend, they can happily forge the requests going off to the backend and modify them to return *whatever* data they want from the datasource.
No matter that they're a read-only user: that only stops them from modifying the dashboard definitions on the frontend, but doesn't enforce any sort of immutability on the BE...
If anyone has any tips on how to further secure it, I'm curious...
!!rant
Just spent a week creating a distributed API architecture which, I found out, won't work due to a singular issue which can't be solved, not unless I hack stuff to a degree where I might as well write my own frameworks.
I've been aiming the user application's requests towards my WSGI server, which, based on a custom header, proxies them towards the correct API. Each customer base has their own API and dataset, but they all visit the same address.
I've handled CORS manually: just picking up when there's an OPTIONS request, asserting the origin, then returning the correct headers. Cool, everyone's happy. Turns out, socket.io includes the session ID and handshake info as part of its OPTIONS preflight, which I can't pair with my API header (or cookie, for that matter), which means my WSGI doesn't know where to send it. You get a 400! You get a 400! You get a 401! </oprah>
So my option is to either roll my own sockets engine, or just assign each API to a subdomain, or give it some URL prefix or something. Subdomains are probably pretty clean and tidy, but that doesn't change having to rewrite a bunch of stuff, or the hours I spent staring at empty headers in OPTIONS preflights.
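For the record, the subdomain route really does dissolve the problem: once the host itself identifies the tenant, the preflight no longer needs to carry any routing info. Client side it's nearly a one-liner (the domain scheme is made up):
import { io } from "socket.io-client";

// Per-tenant subdomain instead of a custom header.
function connectForTenant(tenant: string) {
  return io(`https://${tenant}.api.example.com`, { withCredentials: true });
}

const socket = connectForTenant("customer-a");
socket.on("connect", () => console.log("connected as", socket.id));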
At least this discussion saved me some time in trying to make it work. One of my bad habits is getting in those grooves of "but surely... what the hell, surely there's a way. There has to be"
https://github.com/socketio/... -
So, I browse to a video livestream and an annoying ad starts before the livestream is shown. Furthermore, the page jumps around because of a cookie notification that also blocks some UI elements at the top.
Note: this is the website of a public (government-funded) national news organisation with very high standards and a good reputation.
Action 1: refresh page; I hope the ad is skipped. Nope, annoying ad restarts. Page jumps around again because of the cookie notification.
Action 2: accept cookies to remove notification blocking the top UI (it's OK, I know it can't actually save any cookies on my machine). Instead of some nice JS doing it for me in the background, the page refreshes because you know, HTTP requests and whatnot.
Annoying ad restarts again... FML 🤬
Lessons to be learned from this for any web dev: these annoyances can and *will* get exponentially worse if used simultaneously against your users, instead of being used to help or inform them.
As a user of your website, I want to watch a livestream. I don't care what stupid legislation forced you to shove a fucking cookie notification in my face. Make sure it is not annoying me to the point that I close your website and take minutes to rant about it!
Also, give me the freedom of choice to watch an ad or not. You and I both know that some ads simply are not for me. Better save yourself and myself the bandwidth.
And go get good at web development. You're a news site. That's more than just text and images. If you want great apps, social media coverage, videos, live streams, blogs, etc., go get some better web devs. Your current web frontend devs only qualify to get fired. -
Can anyone suggest any websites or resources that break down how to handle feature requests or bug reports? Basically, I want some background on best practices for managing the process of a feature request or bug report going from a user, to the dev team, and on to production/user acceptance testing.
-
Am I in developer hell already? A shitty project is about to come to an end (hopefully), or should I rather say: it needs to come to an end. But I am still quite lost as to how to deal with it, hence procrastinating on it, making the deadline come closer and, with it, the realization that I'll probably have to rewrite almost everything. I'm not sure how, but I do know that the current code is a dumpster fire.
Basically what I need to do is deal with the APIs of different payment providers/gateways (like PayPal, AmazonPay). In most cases I'll get a payment ID from the shop and need to act on it later, e.g. capture the authorized money in the case of a credit card transaction, or do refunds (without user interaction, unless there is an error). Now, at first I put something together where I try to abstract the payment information into two tables:
orders{1}<->{0..n}payments
payments{1}<->{1..n}paymentDetails
Unfortunately, trying to abstract the different payment methods and squeeze them (and their different possible stati and functions) into these tables was not very successful: it's a total mess of magic numbers and half-broken behavior, without any consideration for partial payments/captures or unfinished requests (i.e. if there is an exception before the response is dealt with, there is no indication that anything has ever been sent). Also, the current amount is calculated from the history of the paymentDetails table, which basically works differently for each payment type.
How to fix this mess in a way that I'll still have a job by next week?
I'm trying to improve the db schema first, as I think my biggest problems are lying there. Through some research I've come across a recommendation for making payment-type-specific subtables (with a magic number/string in the main table to avoid having to look up all subtables). That way I can record what I send and receive without having to abstract it too much, so I'll have an acceptable transaction log. The paymentDetails table can be removed (necessary fields go to the payments table). The payments table gets multiple fields for the amount (differentiating between open, authorized, captured, processing and refunded values) and always reflects the current status.
Tables:
payments
paymentRequestsPaypal
paymentRequestsAmazonpay
paymentRequestsXyz
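Modeled concretely (sketched as TypeScript types; the field names are assumptions), that direction looks like this:
type PaymentType = "paypal" | "amazonpay" | "xyz";

interface Payment {
  id: number;
  orderId: number;
  type: PaymentType;          // tells you which request subtable to look in
  amountOpen: number;
  amountAuthorized: number;
  amountCaptured: number;
  amountProcessing: number;
  amountRefunded: number;     // current status is always readable right here
}

interface PaymentRequestPaypal {
  id: number;
  paymentId: number;          // FK to Payment
  rawRequest: string;         // exactly what was sent...
  rawResponse: string | null; // ...and what came back; null = never answered
  createdAt: Date;
}
The nullable rawResponse is what covers the unfinished-request case: an exception before the response is handled still leaves a record that something went out.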
I think I'm going in the right direction here. hm. Maybe there's some light at the end of this long, dark tunnel. Or a train. I'll have two days to find out. -
Finally started utilizing this quarantine time and started a new project.
Name - Hermes
Link - https://github.com/gauravat16/...
About - Send Cloud message notifications (like FCM) to your users.
Features I am planning -
1. Send notifications to users based on any specification you want. (eg - users on app version 1.2 and using OS version 9.0 or 8.0 in region India)
2. Search on previous requests and responses.
3. Draw trends on the responses and further actions by the user.
Current tech stack -
1. Spring-Boot
2. Java 1.8
3. Mysql
4. MongoDb
5. Elastic (Planned) -
>new feature in application uses external API
>external API has unreliable response times, requires polling to get results, no way to set up webhooks or whatever
>tech lead proposes asynchronous system which will queue up user requests for processing and use websockets to warn frontend clients of finished query results
>higher ups say it will take too much time, make tech lead cut back in scale and treat external API like a regular synchronous REST API
>team dutifully implements feature within the constraints of the new smaller scope
>higher ups try out the feature, find the usage experience extremely shitty, but don't back down; they only let the tech lead scale back towards the original scope in small increments that still allow new problems to show up
>feature takes the same time or longer, but with more damage to the mental health of developers
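For contrast, the tech lead's proposal is not even much code. A sketch of that kind of design (the ws server setup and the externalApi calls are stand-ins, not the team's actual code):
import { WebSocketServer, WebSocket } from "ws";

declare const externalApi: {
  submit(query: string): Promise<string>; // returns a job id immediately
  poll(id: string): Promise<{ done: boolean; data?: unknown }>;
};

const wss = new WebSocketServer({ port: 8081 });
const clients = new Map<string, WebSocket>();

wss.on("connection", (ws, req) => {
  const userId = new URL(req.url ?? "/", "http://localhost").searchParams.get("user");
  if (userId) clients.set(userId, ws);
});

// Queue the request, poll the slow external API in the background,
// push the result over the websocket when it finally arrives.
async function enqueueQuery(userId: string, query: string): Promise<void> {
  const jobId = await externalApi.submit(query);
  const timer = setInterval(async () => {
    const result = await externalApi.poll(jobId);
    if (result.done) {
      clearInterval(timer);
      clients.get(userId)?.send(JSON.stringify(result.data));
    }
  }, 2000);
}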
At least I'm not in that team -
#Suphle Rant 3: Road to PHP8, Flow travails
Some primer: Flows is a feature that causes the framework to bypass handling the request now and instead read it from cache. This cache entry is meant to be populated without warming, based on the preceding request. It's sort of like prefetching, but done on the back end.
While building Suphle, I made some notes on some chapters about caveats and gotchas I might forget while documenting. One such note was that when users make the Flow request, the framework will attempt to determine who the user is using the authentication mechanism defined on the first module (of the modular monolith).
Now, I got to this point during documentation and started wondering whether it's possible for the originating request to have used a different authentication mechanism, which would result in an empty entry for the returning user. I *think* it's possible cuz I've got something else called "route mirroring", where web-based routes can be converted to API routes. They'll then return JSON, get served under a defined API path, use JWT, all automatically. But I just couldn't connect the dots for the life of me regarding how any of this could impact authentication on the Flow request.
While trying to figure out how to write the test for this, or whether it was even necessary (since I had no use case), it struck me that since Flow requests are not triggered by an actual user, any code attempting to read the authenticated user will see nothing!
I HATE it when I realize there's ambiguity or an oversight after the amount of attention and suffering devoted. This, along with a chain of personal troubles, set off despondency for a couple of days. No appetite for food or talk. Grudgingly worked the refactor into this update over some days. Wrote some tests; not all passed. More pain. May have to convert them to unit tests
For clarity, my expectation is, I built this. Nothing should be impossible for me
Surprisingly, I caught a somewhat lucky break: an ex-colleague referred me to the first gig I'm getting in over a year. It's about writing a plugin for some obscure forum software. I'm not too excited cuz it's poorly documented and I'll have to do a lot of groping around; they use arrays instead of objects, etc. There's no guarantee I'll find how to implement all the client's requirements
While brooding last night, surfing the PHP subreddit, I stumbled on a post about using Rector to downgrade a codebase. I've always been interested in the reverse but didn't have any incentive to fret over it. I randomly googled and saw a post promising a codebase can be upgraded to PHP 8 with 3 commands in 5 minutes. It piqued my interest around 12-something AM. Stayed up all night upgrading it, replacing PHPStan with Psalm, initializing the guy's project, merging Flow auth with master, etc. I think it may have taken 5 minutes without the challenge of getting the local dev environment to PHP 8.
My mood is much lighter than it was, although the battle is not won yet: image tests are failing. For some weird reason, PHP 8 can't read the generated test images. Hope I can ride that newfound lease on life to study the forum software and get the features working.
I have some other rant but this is already a lot to digest in one sitting. See you in rant #4 -
That feeling when you realize that the REST API you were trying to consume apparently does not provide a query flag for a more detailed response, making you think you'll need to fetch one list of items and then fire almost 1,000 requests, really does not compare to that feeling when a colleague points out that the REST API in question does in fact support the flag, AFTER you implemented the roundabout way.
FUCKING HELL!
I just didn't realize that I could click on the GET and POST blocks in the metronome API documentation, opening up a frigging pop-up. (See screenshot.)
Why couldn't the information have been more upfront? Only a cursor change on hovering over the area even hinted that one could click there.
Oh how I blame their lack of a user interface for my blindness.
I thought that it was just a basic documentation that only tells you which endpoints exist and expects you to learn by trial of fire. So I searched the interwebs, and on their support forum I found an old issue that made me think my roundabout way was the way to go m(
Even worse, on the support forum I cannot even leave a comment warning the poor souls coming after me that they should not take the roundabout way, as that issue has long been closed.
If you want to see it yourself: https://dcos.github.io/metronome/... -
Architecture for a Java REST API we're going to build/port from an existing NodeJS one.
So Spring Boot + *
Lots of concurrent requests and large MongoDB calls. The current APIs use around 4GB of memory per instance because they don't stream/pipe the response; they hold all the data in memory and then return it to the user all at once.
And, well, we expect more load in the future, so we want to do this the right way.
So, my understanding since this morning is that there's the blocking MongoClient (find* returns a List) and now a Reactive MongoClient, which is very async, like JS promises, based on the Pub/Sub model.
But the downside of JS promises was callback hell.
So actually 2 questions.
1. For each request, is the db call done using the same MongoClient/db connection, such that if there are 2 requests one would block the other?
2. Reactive Mongo would be non-blocking by design, so would it be better for supporting streamed responses?
Not the 'most embarrassing' story, but not my proudest moment either.
My sir has recently put me alongside him as the teaching assistant in this summer's batch. Last week he had to go somewhere, so he asked me to take a GitHub session with the class (well, not exactly asked, I just voluntarily commented). Mind you, I am myself a novice, having never done anything beyond pushing commits and pull requests. (But sir was fine with it, saying he wants the students to at least be knowledgeable enough to submit their homework.)
Fast forward to the night before class, and I am trying to sleep but can't. I had all the ppts prepared, hell, I even prepared a transcript (I even uploaded it to pastebin, thinking I would look at it and read).
But the worst shit always has to happen when you do a presentation.
When the class started, the wifi was not working. Those guys had never done anything related to it, so the first thing we did was make sure every one of them got git installed (with lots of embarrassment and requesting everyone to share their hotspots. Not my fault, tbh).
Then again, I'm a Windows/Linux user with noobie Linux and zero Mac experience. So when this one girl with a Mac got problems installing, I was like, "please search on SO" 🐣 .
So after half an hour, almost everyone had their git/github accounts ready to work, so I started with explaining open source and how GitHub works. In the middle of the session, I wanted to show them the meaning of GitHub stars ("they show how appreciated a repo is"), and I had thought of showing them the React repo. And when I tried searching for it I couldn't find it (its name is just react, not reactjs), so again: 🐥🐥🐣
So somehow this session of 1-1.5 hours got completed in 4 hours, with me repeating myself many, many, many times.
And the most stupid thing: our institute has every session recorded, so my awkward presentation is definitely in their computers 🐣🐣🐥🐥 -
I was tasked with reviving this mobile app purchased off the shelf. Initially, I was impressed with what I was seeing while perusing the codebase. I'm used to editing Laravel projects written by handpicked amateurs, so this felt like a breath of fresh air, coupled with the fact that I'd recently enquired on this very platform whether anyone has chanced upon an impressive codebase. All was going well, until...
I started finding the multiple layers of abstraction and indirection cryptic and obfuscatory; and that is coming from an idealist like me who advocates for "clean" patterns such as event emission. I wonder whether it would have helped if the emitted events were typed for easy listener tracking, instead of a black hole like vm.notifyListeners() (IT DOESN'T EVEN HAVE AN EVENT NAME!)
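Typed events are cheap to build, too. A small sketch of what grep-able emission could look like (the event names here are invented):
type Events = {
  cartUpdated: { itemCount: number };
  loginFailed: { reason: string };
};

class TypedEmitter<E extends Record<string, unknown>> {
  private listeners: { [K in keyof E]?: Array<(payload: E[K]) => void> } = {};

  on<K extends keyof E>(event: K, fn: (payload: E[K]) => void): void {
    const list = this.listeners[event] ?? [];
    list.push(fn);
    this.listeners[event] = list;
  }

  emit<K extends keyof E>(event: K, payload: E[K]): void {
    this.listeners[event]?.forEach(fn => fn(payload));
  }
}

const vm = new TypedEmitter<Events>();
vm.on("cartUpdated", ({ itemCount }) => console.log("items:", itemCount));
vm.emit("cartUpdated", { itemCount: 3 }); // compiler checks the payload shape
Searching for "cartUpdated" then finds every listener; searching for vm.notifyListeners() finds you nothing but grief.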
With time, I became disgusted by the tons of custom elements with so many parents.
My take on production-level use of the view model pattern: amazing in theory.
One of the architectural decisions made on this project had me foaming at the mouth, pulling my hair and cursing out the author's generations, past, present and future: can you believe these guys are APPENDING IMAGE DOMAINS TO THE RESOURCE? I.e. the domain names are tightly coupled to the images and dictated by the API, instead of the client.
If this isn't bad enough, the field names of returned entities/models don't exist in the database, of course, because the stupid Laravel framework abets this sort of madness by combining eloquent "scopes, attributes, and appends". A trifecta of horrors.
I eventually scaled through the horrors, but not without losing my admiration for the team behind it. The app has returned to the shelves, because my company lost patience with my resuscitating it. They have the regular API authentication in place, but that's not good enough; they just had to integrate Firebase as well, just because. Meanwhile, this isn't documented anywhere. I stumbled into it during my scuffle with app setup, gradle-ish. Eventually I got banned by Firebase for "sending unusual requests". My company's last straw -
Question regarding android. I am having a problem with retrofit (I am using moshi converter factory) and hope that you can help.
Basically I have a screen with 3 checkboxes. The user is able to select any of these checkboxes, and is also allowed to select none of them.
When the user doesn't select any checkbox and clicks the complete button, I send a PATCH request to the backend with a model which contains 3 null values.
The problem here is that the PATCH request being sent doesn't include any properties which have been nulled.
I spent some time researching why retrofit/moshi doesn't serialize nulled properties and I found a fix.
So I have this line
.addConverterFactory(MoshiConverterFactory.create(moshi))
Which I replaced with
.addConverterFactory(MoshiConverterFactory.create(moshi).withNullSerialization())
Now nulls are serialized and I am able to send a PATCH request model with nulled values. However, now I'm facing another problem: across my app I'm using only one retrofit client, and I don't want to serialize nulls for all requests. Also, I don't want to create another retrofit client.
How can I fix this problem? As far as I've researched, it seems that I need to add an adapter with toJson() and fromJson() methods and then somehow enable null serialization only for that adapter. However, I don't completely understand that solution and I'm not even sure how to handle it.
!rant
Got a question since I've been working with ancient web technologies for the most part.
How should you handle web request authorization in a React app + Rest API?
Should you create a custom service that returns to the React app what the user (authenticated with a token) has access to, and build the GUI from that kind of single response before rendering other components?
Should you just create the react app with components handling the requests and render based on access granted/denied from specific requests?
Or something else altogether? The app will be huge, since it's a rewrite of an already existing service with 2500 entities and a lot of different access levels and object ownerships. Some pages could easily reach double-digit request counts if done with per-object authorization, so I'm not quite sure how to proceed. I would prefer not to fuck it up from the get-go, and everyone on the team has little to no experience with separated frontend/backend logic.
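To make the first option concrete, something like this is what I'm imagining (the endpoint and permission names are placeholders):
import { createContext, useContext, useEffect, useState, ReactNode } from "react";

const PermissionsContext = createContext<Set<string>>(new Set());

export function PermissionsProvider({ children }: { children: ReactNode }) {
  const [perms, setPerms] = useState<Set<string>>(new Set());
  useEffect(() => {
    fetch("/api/me/permissions") // one request at app start
      .then(r => r.json())
      .then((list: string[]) => setPerms(new Set(list)));
  }, []);
  return (
    <PermissionsContext.Provider value={perms}>
      {children}
    </PermissionsContext.Provider>
  );
}

export function Can({ perm, children }: { perm: string; children: ReactNode }) {
  const perms = useContext(PermissionsContext);
  return perms.has(perm) ? <>{children}</> : null;
}

// Usage: <Can perm="invoices.edit"><EditInvoiceButton /></Can>
Either way, the client-side gate is cosmetic: the API has to repeat the same checks on every request, or a read-only user is one curl away from writing. -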
How do I show a profile pic from an S3 bucket?
One way is to fetch it from the backend and send it to the frontend as a huge blob string. This is how I made it currently, and it works.
.... but what if I want to frequently get the profile image? Am I supposed to send a separate API request to the backend every time? If I need to show the profile picture 100 times, does that mean I have to send 100 requests to the backend API?
...... or even worse, what if I need to fetch a list of images from the S3 bucket, for example, a list of posts that contain images, or a card with the profile images of multiple users? If I need to display 100 posts, each containing one image, that means I would have to make 100 separate API requests to fetch 100 images...
That is fucking absurd.
Of course I can make the URL to the image public, but the problem is that the URL will be the exact URL to the S3 bucket, including the bucket name, the path and the file name, as well as user information such as the user ID. This feels like a huge security risk.
What the fuck am I supposed to do, and how am I supposed to properly handle displaying images which are supposed to be viewed publicly?
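The closest thing I've found so far is presigned URLs: the backend hands out short-lived links, the browser loads images straight from S3, and the bucket stays private. A sketch with the AWS SDK v3 (the bucket and key names are placeholders):
import { S3Client, GetObjectCommand } from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";

const s3 = new S3Client({ region: "eu-west-1" });

// One short-lived URL per image; the object itself stays private.
async function presignAvatar(userId: string): Promise<string> {
  const command = new GetObjectCommand({
    Bucket: "my-app-avatars",
    Key: `avatars/${userId}.jpg`,
  });
  return getSignedUrl(s3, command, { expiresIn: 900 }); // valid for 15 minutes
}

// For a feed of 100 posts: presign everything in ONE backend call and
// return the URLs alongside the post JSON, instead of 100 API round trips.
async function presignMany(userIds: string[]): Promise<string[]> {
  return Promise.all(userIds.map(presignAvatar));
}
For genuinely public images, a CDN distribution in front of the bucket hides the bucket name and paths while skipping the signing altogether. -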
(!rant && user_input)
I have been pondering for the last couple of days on how to validate whether a referring HTTP request is actually coming from the referrer it claims to come from. Any ideas on this?
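The furthest I've gotten: the Referer header is set by the client and trivially forged, so it can't be validated on its own. If both sites cooperate, a signed, expiring token could work instead; something like this (the parameter names and key handling are placeholders):
import { createHmac } from "crypto";

const KEY = process.env.REFERRER_KEY ?? "";

// The legitimate referrer generates this and appends it to the link:
//   https://you.example/land?o=them.example&e=<expiry-ms>&t=<token>
function makeToken(origin: string, expires: number): string {
  return createHmac("sha256", KEY).update(`${origin}:${expires}`).digest("hex");
}

// Your side verifies the token instead of trusting the Referer header.
function verifyReferrer(origin: string, expires: number, token: string): boolean {
  if (Date.now() > expires) return false;      // link has expired
  return makeToken(origin, expires) === token; // signature must match
}
Without cooperation from the referring site, the honest answer seems to be that the claim simply isn't verifiable. -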
Desktop PUSH notification requests are fucking stupid! I get that you're all edgy and shit and made your stupid site into a PWA, or are just trying to spam me with this amazing new access you've been granted over the last few years... But fucking stop it.
If you have a PWA and a user is viewing you on desktop, clearly they're not mobile and your request is pointless. Log the access as 1 of the 3 they need before being allowed to install it as an icon, and ONLY request push on mobile as part of the install. Maybe, just maybe, it's OK if they're mobile browsing...
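The polite version takes about ten lines: only ask after a deliberate click, and only in a context where it plausibly makes sense (the button id is made up):
const isMobile = /Android|iPhone|iPad/i.test(navigator.userAgent);
const isInstalled = window.matchMedia("(display-mode: standalone)").matches;

document.getElementById("enable-alerts")?.addEventListener("click", async () => {
  if (!isMobile || !isInstalled) return;             // don't even ask on desktop
  if (Notification.permission !== "default") return; // the user already decided
  const result = await Notification.requestPermission();
  console.log("user chose:", result);
});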
Use your fucking heads people. Just because you can use something doesn’t mean you should. -
Need some advice from all you clever devs out there.
When I finished uni I worked for a year at a good company but ultimately I was bored by the topic.
I got a new job at a place that was run by a Hitler wannabe that didn't want to do anything properly, including writing tests, and any time I improved an area or wrote a test he would take me aside to have a go, so I quit after 3 months.
Getting a new job was not that hard, but being at companies for short stints was a big issue.
My new job, I've been here 3 months again, but the code base is a shit hole: no standardisation; no one knows anything about industry standards; no tests, again; pull requests that are in name only, as clearly broken areas that you comment on get ignored, so you might as well not bother; fake agile, where all user stories are not user stories and we just lie every sprint about what we finished; no estimates; and so forth. And a code base that is such a piece of shit that to add a new feature you have to hack every time. The project only started a few months back.
For instance, we were implementing permissions and roles. My team lead did the table design. I spent 4 hours trying to convince him it was not fit for purpose, and now we have spent a month on this area and we can't even enforce the permissions on the backend, so basically they don't exist. This is the tip of the iceberg, as this shit happens constantly, and the worst thing is that even though I say there is a problem, we just ignore it, so the app will always be insecure.
None of the team knows Angular or wants to learn it, but all our apps use Angular.
These are just examples; there are a lot more problems, right from agile being run by people that don't understand agile, to sending database entities instead of view models to client apps (but not all of them, as some use view models, so we just duplicate all the API controllers).
Our angular apps are a huge mess now because I have to keep hacking them since the backend is wrong.
We have a huge architectural problem that will set us back 1 month, as we won't be able to actually access functionality, and we need to release in 3 months. Their solution, even after understanding my point fully, is to ignore it. Legit.
The worst thing is that although my team is not dumb, if you try to explain this stuff to them, they either just don't understand what you are saying or don't care.
With all that said, I don't think they are even aware of these issues somehow, so I don't think it's on purpose, and I do like the people and the company. But I have reached the point where I don't give a shit anymore if something is wrong, as it's just so much easier to stay silent, and it makes no difference anyway.
I get paid very well, it's close to home, and I actually learn a lot, since their skill level is so low I have to pick up the slack and do all kinds of things I've never done much of, like release management or database optimisation, and I like that.
Would you leave and get a new job? -
I have the following scenario with a proposed solution. Can anyone please confirm it is a secure choice?
- We have critical API keys that we do not want to ship with the app, because decompiling would give access to those keys, and the request is made before the user logs in; we are dealing with guests.
Solution:
- Add a Lambda function which accepts requests from the app and returns the API keys
- Lambda will accept the following:
1. Android app signing key sha1
2. iOS signing certificate sha1
- If Lambda was able to validate them, the API keys are sent back.
My concerns:
- Can an attacker read the request from the original (non-tampered) APK and see what the actual SHA-1 value is on his local network?
- If the answer to the question above is yes, what is the recommended way to validate that the request received is actually from the app that we shipped, and not from curl/postman/a script/a modified version of the app?
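To make the Lambda concrete, this is roughly what I mean (handler shape per the aws-lambda typings; the header name and values are placeholders). My worry in code form: anything the app can send, an attacker who captured one request can replay, so a static SHA-1 is an identifier, not a secret. Platform attestation (Play Integrity on Android, App Attest on iOS) seems to be the usual fix:
import type { APIGatewayProxyEvent, APIGatewayProxyResult } from "aws-lambda";

const KNOWN_SHA1S = new Set([
  "AA:BB:CC:DD:EE:FF:00:11:22:33:44:55:66:77:88:99:AA:BB:CC:DD", // Android (placeholder)
  "11:22:33:44:55:66:77:88:99:00:AA:BB:CC:DD:EE:FF:11:22:33:44", // iOS (placeholder)
]);

export async function handler(
  event: APIGatewayProxyEvent,
): Promise<APIGatewayProxyResult> {
  const sha1 = event.headers["x-app-signature-sha1"] ?? "";
  if (!KNOWN_SHA1S.has(sha1)) {
    return { statusCode: 403, body: "unrecognized client" };
  }
  // Better: return short-lived, narrowly scoped tokens, not the raw API keys.
  return {
    statusCode: 200,
    body: JSON.stringify({ apiKey: process.env.GUEST_API_KEY }),
  };
}
So: yes, an attacker proxying their own device's traffic sees the value in the clear, and replaying it is enough. -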
How would you create a mock for an aggregator microservice (stateless) which makes requests to other services for each incoming request, transforms the data, and then responds to the user?
I want to create a mock service where I don't have to run the other services, but it should produce kinda realistic responses.
Have you had to create something like this?
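What I have in mind is something like this: the mock keeps the aggregator's routes, derives canned-but-varied payloads deterministically from the input so tests stay repeatable, and never touches the downstream services (the routes and field names are placeholders):
import express from "express";

const app = express();
const names = ["alice", "bob", "carol"];

app.get("/aggregate/:userId", (req, res) => {
  const id = req.params.userId;
  res.json({
    // Deterministic "variety": same input, same output, repeatable tests.
    user: { id, name: names[id.length % names.length] },
    orders: Array.from({ length: (id.length % 3) + 1 }, (_, i) => ({
      orderId: `${id}-${i}`,
      total: 10 * (i + 1),
    })),
    fetchedAt: new Date().toISOString(),
  });
});

app.listen(4000, () => console.log("mock aggregator on :4000"));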
I'd use it for testing another microservice that uses the aggregator. -
I may need some ideas for a personal project in mind:
I plan to have a server that connects to a USB stick/device; the USB device is plugged into a TV. The USB device can create its own local wifi network, which provides CRUD on media files via REST. My own server should be accessible via the internet, but at the same time connect to the local USB wifi once that wifi is available, and then send requests to it. Kind of a user-friendly bridge.
There's a PC near the device, almost always turned on. It's used by family members as a regular office machine and could run a local server. What if it ran a remotely accessible server? Then what about DoS attacks? (Would that "kill" the PC?)
An alternative would be a separate server. A Raspberry Pi? A dedicated server?
EXPERT STOLEN CRYPTO RECOVERY SERVICES ONLINE: HIRE DIGITAL HACK RECOVERY COMPANY
If I could offer one crucial piece of advice, it would be this: if something seems too good to be true, it probably is. I learned this the hard way in early 2024 when I was drawn into a crypto arbitrage scheme while living in Australia. The platform claimed it could help me effortlessly exploit price differences in cryptocurrencies across various exchanges. They offered a user-friendly app and appeared to have guaranteed strategies, even boasting the use of AI for trading.
Initially, everything seemed promising. I started with $4,000, motivated by daily updates showing my balance steadily increasing. In just a few weeks, I watched my account soar to $12,000, which tempted me to invest even more. The consistent, modest gains felt reassuring, and I began to imagine the possibilities that came with this newfound wealth.
However, when I attempted to withdraw my funds, the problems began. The first hurdle was a withdrawal fee. At the time, I didn't think much of it. But then, I was hit with requests for additional fees: an exchange commission, followed by a supposed tax settlement fee. Each time I thought I was one step closer to accessing my money, another obstacle emerged. I ended up paying nearly $20,000 in various fees, all without ever seeing a cent of my profits.
Eventually, the company went silent, leaving me devastated. I felt ashamed for having fallen for such a sophisticated scam, but also furious at how cleverly it had been orchestrated. I spent months researching and searching for a solution to recover my lost funds.
That's when I discovered Digital Hack Recovery, a team that specializes in recovering lost digital assets. Their expertise turned out to be invaluable. They worked diligently to access my accounts and recover not only my initial investment but also the earnings I had accumulated. To my surprise, they managed to transfer everything back to my bank account seamlessly.
Digital Hack Recovery restored my faith in the possibility of reclaiming lost assets and gave me hope in an otherwise dire situation. If you find yourself in a similar predicament, I strongly encourage you to reach out to them. Their assistance can make a world of difference in navigating the aftermath of such scams.
WhatsApp +19152151930
Email: digital hack recovery @ techie . com
Website: https : // digital hack recovery . com