Search - "actions api"
-
--- GitHub 24-hour outage post mortem ---
As many of you will remember, GitHub fell over earlier this month and cracked its head on the countertop on the way down. For more or less a full 24 hours the repo-wrangling behemoth presented inconsistent data to users, with slow response times and failing requests during common user actions such as reporting issues and questioning your career choices in code reviews.
It's been revealed in a post-mortem of the incident (link at the end of the article) that DB replication was the root cause of the chaos, after a failing 100G network link was replaced during routine maintenance. I don't pretend to be a rockstar-ninja-wizard DBA, but after speaking with colleagues who went a shade whiter when the term "replication" was used - it's hard to predict where a design decision will bite back and leave you untangling the web of lies and misinformation reported by the databases for weeks if not months after everything's gone a tad sideways.
When the link was yanked out of the east coast DC undergoing maintenance, GitHub's "Orchestrator" software did exactly what it was meant to do: it hit the "ohshi" button and failed over to another DC that wasn't reporting any issues. The hitch in the master plan was that when connectivity came back up at the east coast DC, Orchestrator was unable to (un)fail-over back to it, due to each cluster containing data the other didn't have.
At this point it's reasonable to assume that pants were turning funny colours - monitoring systems across the board started squealing, firing off messages demanding engineers rouse from the land of nod and snap back to a reality that was a bit more "on fire" than usual. A quick call to Orchestrator's API returned a result set that only contained database servers from the west coast - none of the east coast servers had responded.
Come 11pm UTC (about 10 minutes after the initial pant re-colouring), engineers realised they were well and truly backed into a corner. The site was flipped into "Yellow" status and internal mechanisms for deployments were locked out. 5 minutes later an Incident Co-ordinator was dragged from their lair by the status change and almost immediately flipped the site into "Red" status, a move I can only hope was accompanied by all the lights going red and klaxons sounding.
Even more engineers were roused from their slumber to help with the recovery effort. By this point hair was turning grey in real time - the fail-over DB cluster had been processing user data for nearly 40 minutes, and every second that passed made the inevitable untangling process exponentially more difficult. Not long after this, GitHub made the call to pause webhooks and GitHub Pages builds in an attempt to prevent further data loss, causing disruption to those of us using GitHub as a way of kicking off our deployment processes (myself included; I had to SSH in and run a git pull myself like some kind of savage).
Glossing over several more "and then things were still broken" sections of the post mortem: clever engineers with their heads screwed on the right way successfully executed what I can only imagine was a large, complex and risky plan to untangle the mess and restore functionality. GitHub was picked up off the kitchen floor and promptly placed in a comfy chair with a sweet tea to recover. The enormous backlog of webhooks and Pages builds was caught up with, and everything was more or less back to normal.
It goes to show that even the best laid plan rarely survives first contact with the enemy - in this case, a failing 100G network link somewhere inside an east coast data center.
Link to the post mortem: https://blog.github.com/2018-10-30-...
-
Start a development job.
Boss: "let's start you off with something very easy. There's this third party we need data from. They have an api, just get the data and place it on our messaging bus."
Me: "sure, sounds easy enough"
The third party's API turns out to have the most retarded conversation protocol, with us needing to run a service to receive data on while also having a client to register with their service - plus a lot of timed actions like 'send this message every five minutes' and 'check whether our last message was sent more than 11 minutes ago'.
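The timed-action bookkeeping alone ends up looking something like this - a hedged TypeScript sketch, not the actual integration code; the function names and log messages are invented, and only the intervals come from their spec:

```typescript
// Hypothetical sketch of the protocol's timer requirements.
let lastMessageSentAt = Date.now();

function sendKeepAlive(): void {
  // Stand-in for the real (SOAP) call the protocol demands.
  console.log("keep-alive sent");
  lastMessageSentAt = Date.now();
}

// "send this message every five minutes"
setInterval(sendKeepAlive, 5 * 60 * 1000);

// "check whether our last message was sent more than 11 minutes ago"
setInterval(() => {
  if (Date.now() - lastMessageSentAt > 11 * 60 * 1000) {
    console.warn("stale session - re-registering with the service");
  }
}, 60 * 1000);
```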
Due to us needing a service, we also need special permissions through the company firewall. So I have to go around the company to get these permissions, FOR EVERY DATA STREAM WE NEED!
But the worst of it all is... This whole api is SOAP based!!
Also, Hey DevRant!
-
Background: I'm not drunk yet, BUT I'M WORKING ON IT.
okay.
I just finished a second sprint on my React app. The first was to build a merchant onboarding flow. The second was to do substantial cleanup as I learned more about React/Redux, and to create a "supply order" flow -- basically purchasing marketing materials and services. I finished that in a week, and I'm pretty proud. api-guy wanted it done in a day. I laughed. He probably could have, but it would have been a copy of the code in a new repo with some lines changed.
ANYWAY. It's all done and it's super pretty and works amazingly well. It has both the onboarding flow and the ordering flow, with a nice pop-out sidebar for navigation, namespaced actions, etc. Everything is pretty clean. I even added a cart to the ordering (despite everyone telling me not to) because wtf, what if someone wants to order TWO items? Dumbasses. So I made that. It's sexy.
Anyway, it's all done and shiny and fancy and wonderful and I'd *love* to share screenshots if only it didn't give away where I worked. :<
... but the point of the rant!
After the first sprint, I made a copy of the repo so I could rework it and add more functionality without touching the original. (Hey! That's what a branch is for, right? Why didn't I just branch?
Well, read on.)
I knew we were going to have multiple separate flows for this app: onboarding, ordering, merchant tools, admin tools, support, etc. So, I wrote its server portion (the webpack builder + HTTP server) so it would serve the same app at whatever URL the user hit, and set a cookie containing that host+url. This allows the app to serve different content (basically showing/hiding content) based on the URL and future login roles. If someone hits /order, it hides everything but the order flow. If they're a merchant, it shows all the merchant views plus ordering, etc.
tl;dr: this way I can use the same codebase for multiple sites, drastically simplifying development, branding, and what have you. This new app could obv also be a drop-in replacement for the original onboarding project because of the above.
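The serving trick, roughly (a hedged sketch, not the actual server code - the flow names, cookie name, and paths are all invented):

```typescript
// Hypothetical sketch of the "one bundle, many flows" server described above.
import express from "express";

const FLOWS = ["onboard", "order", "merchant", "admin", "support"];

const app = express();

app.get("*", (req, res) => {
  // Derive the flow from the first path segment, defaulting to onboarding.
  const segment = req.path.split("/")[1];
  const flow = FLOWS.includes(segment) ? segment : "onboard";

  // Persist host + flow so the SPA can decide which views to show or hide.
  res.cookie("appContext", JSON.stringify({ host: req.hostname, flow }));

  // Every flow gets the exact same bundle.
  res.sendFile("index.html", { root: "dist" });
});

app.listen(3000);
```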
HOWEVER. This apparently isn't good enough for api-guy. He's terrified that adding/updating future components will somehow affect all the existing content.
so.
now we have three repos for basically the same codebase. 1) onboard aka "surfboard", 2) ordering, 3) merchant tools, aka "ferrari" (the "future" app).
Except.
1) "surfboard" is a very old version of the code. 3) "ferrari" is also old, since 2) "ordering" has newer content in it now.
... and somehow this is better?
fuck if i can figure out how.
His reasoning is "well, you won't be touching surfboard or ordering for 6 months, so now you don't have to worry about it." Sure, except, you know, it'll be a pain in the ass in 6 months when I have a crapton of code and branding to redo. ffs.
Oh. We also have three Heroku pipelines for these three repos. for the same codebase.
and now you know why i'm drinking.
-
The bossman asked if our signup service sends an automated email after we successfully process someone's payment or when we promote them to full customer.
That sounds like a simple query, yeah?
Well.
Here's some background:
We have four applications; one in React, three in Rails. I'll replace their names to retain some anonymity.
1) "IceSkate" is the React app, and it's a glorified signup form. (I wrote this one.)
2) "Bogan" is the main application, and is API-only; its frontend has been long since deprecated by the following two:
3) "Bum" is a fork of "Bogan" that has long since diverged. It now contains admin-only tools.
4) "Kulkuri" is also a fork of "Bogan" that has long since diverged. It now contains tools specifically for customers, which they can access.
All but IceSkate (obv) share a database.
Here's how signups happen:
Signups come in from IceSkate, which hits a backend API on Bogan. Bogan writes the data to the database, charges the card immediately, and leaves the signup for moderation.
And here's how promotion from signup to customer happens:
Bum has a view allowing admins to validate, modify, and "promote" a signup to a full customer. Upon successful promotion, Bum calls "ServerWrap", a module which calls actions on the other applications; in this case: Bogan.
Bogan routes execution through three separate models before calling "ServerWrap" again, this time calling KulKuri.
Finally, KulKuri actually creates the customer!
After KulKuri finishes creating the customer, execution resumes on Bogan, which then returns, causing execution to resume on Bum. Bum then runs through several other models, references the newly-created customer object (as all three share a database), and ... updates the customer with its current data, and then updates the signup object. After all of this, it finally shows the admin the "new customer" view.
It took me 25 minutes to follow the chain of calls, and I still don't know quite what's going on. I have no idea if any of it sends an email or not -- I didn't see any signs of this, but I very easily could have overlooked something.
So, to answer bossman's question... I asked the accounting people if they send the email manually. If they don't, it's automatic, which means I missed something and get to burrow through that mess all over again!
I really hope I missed something; otherwise I need to figure out how and where (and when!) to send the email...
just...
errrrgghh
-
Currently working on the privacy site CMS REST API.
For the curious ones, building a custom thingy on top of the Slim framework.
As for the ones wondering about security, I'm thinking out a content filtering system (as in, security/database compatibility) right now.
Once data enters the API, it will first go through the filtering system, which will filter based on data type, string length and so on.
If that all checks out, it will be sent into the data handling library, which basically performs all database interactions.
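For flavour, the filtering idea in a hedged TypeScript sketch - the real thing is PHP on Slim, and the field names and rules here are invented:

```typescript
// Declarative per-field rules; anything failing them never reaches the DB layer.
type Rule = { type: "string" | "number"; maxLength?: number; required?: boolean };

const postRules: Record<string, Rule> = {
  title: { type: "string", maxLength: 120, required: true },
  body: { type: "string", maxLength: 10_000, required: true },
};

function filterInput(data: Record<string, unknown>, rules: Record<string, Rule>): string[] {
  const errors: string[] = [];
  for (const [field, rule] of Object.entries(rules)) {
    const value = data[field];
    if (value === undefined) {
      if (rule.required) errors.push(`${field} is required`);
      continue;
    }
    if (typeof value !== rule.type) {
      errors.push(`${field} must be a ${rule.type}`);
    } else if (rule.type === "string" && rule.maxLength && (value as string).length > rule.maxLength) {
      errors.push(`${field} exceeds ${rule.maxLength} characters`);
    }
  }
  return errors; // empty array = safe to hand off to the data handling library
}

// Usage: filterInput({ title: "hi" }, postRules) -> ["body is required"]
```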
If everything goes like I want it to go (very highly unlikely), I'll have some of the API actions done by tonight.
But I've got the whole weekend reserved for the privacy site!
-
I really hate fucking Wordpress!
I hate its stupid API, with its stupid hooks and actions and all those stupid functions and no fucking logic to any of it!
I hate its stupid plugin system, with all that fucking overhead that brings no real value and adds all that complexity for nothing!
I hate stupid fucking multiple calls for the same fucking assets, loading them over and over again because every stupid plugin calls them again and again!
I hate motherfucking SHORTTAGS, or whatever the fuck they are called!
I hate that every stupid fucking plugin and shortcode and fucking every little fucking piece of HTML comes from a different fucking place, with different fucking structure and different fucking classes and stupid fucking loading sequences that make no fucking sense!
And I hate fucking page builders !!!!!
Fuck!!!!
I should be fucking coding on this fucking peace of shit, but I just cannot fucking take it any more!!!
IT NEEDS TO FUCKING DIE!
It should be relegated to the darkest corners of the internet, and all the servers that have its fucking code anywhere on their systems should be disconnected and buried in the deepest pits of hell, just to be sure it never, EVER, surfaces again!!!
AAARRRRGGGHHHHHHH !!!!!!!!!
-
This feature I'm building requires crossing over to a second application for some actions (fair, this reduces repetition), but the method used for it is kind of ridiculous.
To keep with the existing patterns, I followed suit and added two PATCH routes and a DELETE route, plus wrappers and calls. (Typical CRUD + de/reactivate.)
But. This freaking halfassed HTTP model doesn't support anything but POST and PUT! wtf. (Also, the various IDs, naming schemes, and required JSON data/formats differ across view, controller, and endpoints. But whatever?)
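One way to patch around such a model, roughly (a hedged sketch - it assumes the receiving app honours the common X-HTTP-Method-Override convention, and the endpoint is invented):

```typescript
// Tunnel the verbs the HTTP model can't speak through a POST.
async function sendVia(
  methodOverride: "PATCH" | "DELETE",
  url: string,
  body?: unknown,
): Promise<Response> {
  return fetch(url, {
    method: "POST", // the only thing the halfassed model will emit
    headers: {
      "Content-Type": "application/json",
      "X-HTTP-Method-Override": methodOverride,
    },
    body: body === undefined ? undefined : JSON.stringify(body),
  });
}

// e.g. deactivate a record without a native DELETE:
// await sendVia("DELETE", "https://second-app.example/api/widgets/42");
```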
Two and a half hours later, and the feature is done and works wonderfully. Four times the functionality of the previous incarnation, and the code is only about 25% longer! haha.
Ahh, I'm complimenting myself again. (but somebody has to, right? 😅)
but really, when i want to get something done i'm actually surprised at how quickly it all comes together. Even when I need to patch API Guy's madness.
(and this time I actually found someone else's code in the mess! It was actually worse!)
I suppose taking a day off yesterday did me some good.
-
Got pretty peeved with EU and my own bank today.
My bank was loudly advertising how "progressive" they were by having an Open API!
Well, it just so happened I got an inkling to write me a small app that would compile statistics of the payments going in and out of my account, without relying on anything third-party. It should be possible, right? Right?
Wrong...
The bank's "Open API" can be used to fetch the locations of all the physical locations of the bank branches and ATMs, so, completely useless for me.
The API I was after was one apparently made obligatory (don't quote me on that) by the EU, called PSD2 - the Payment Services Directive 2.
It defines three independent APIs - AISP, CISP and PISP, each for a different set of actions one could perform.
I was only after AISP, the Account Information Service Provider. It provides all the account and transaction information.
There was only one issue. I needed a client SSL certificate signed by a specific local CA to prove my identity to the API.
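For context, the call I was locked out of would have looked roughly like this - a hedged Node sketch; the host, path and file names are all invented, since the real endpoints are bank-specific:

```typescript
// Hypothetical AISP request with a client certificate (mutual TLS).
import https from "node:https";
import { readFileSync } from "node:fs";

const options: https.RequestOptions = {
  host: "psd2.examplebank.eu",
  path: "/aisp/v1/accounts",
  method: "GET",
  // The bank authenticates the caller by this certificate, which must be
  // issued by the specific CA mentioned above.
  key: readFileSync("client.key"),
  cert: readFileSync("client.crt"),
};

https
  .request(options, (res) => {
    let body = "";
    res.on("data", (chunk) => (body += chunk));
    res.on("end", () => console.log(res.statusCode, body));
  })
  .end();
```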
Okay, I could get that, it would cost like.. $15 - $50, but whatever. Cheap.
First issue - These certificates for the PSD2 are only issued to legal entities.
That was my first source of hate for politicians.
Then... as a cherry on top, I found out I'd also need a certification from the local capital bank which, you guessed it, is also only given to legal entities, while also being incredibly hard to get in and of itself - so far, only one company in my country has gotten it.
So here I am, reading through the documentation of something that would completely satisfy all my needs, yet that is locked behind a stupid legal wall because politicians and laws gotta keep the technology back. And I can't help but seethe in anger towards both the EU that made this regulation and the fact that the bank even mentions this API anywhere.
Seriously, if 99.9% of programmers would never ever get access to that API, why bother mentioning it on your public main API page?!
It... It made me sad more than anything...
-
Our boss always did the same thing. When there was a BIG potential customer who indicated even a small interest in our software, he lied constantly about features. After the customer bought our software, we got a deadline and had to develop the missing features. I remember two of those features: the first was a quote tool for a car transport company - it should estimate a price for a transport from an email with no structure. The other was an API which should be able to write dynamically to MySQL, MariaDB, Postgres, MSSQL, DB2, Mongo - or, better said, any possible DBMS. The API should guess the structure of the DBs and offer CRUD actions. The funny thing is we had to write the API in Go. Yeah, dynamic and Go.
At some point, we told him we wouldn't do any overtime, and that if a deadline wasn't possible we'd tell the customers immediately, so that they'd call him. Thank god I don't work in this company anymore.
-
While reviewing a PR from one of our newer FE devs, I ended up spending more time than I would like mulling over its composition. The work was acceptable for the most part; the code worked. The part that got me was the heavy usage of options objects.
When encountering the options object pattern (or anti-pattern, at times) in complex scenarios, I have to resist the urge to stop whatever I'm doing and convert it to the builder pattern/smack them in the head with a software design manual. As much as I would like to, code janitor is one of the least valuable activities I engage in daily, and consistently telling someone to go back to the drawing board for work that is functional, but not excellent is a great way to kill morale. Usually, I'll add a note on the PR, approve it, add a brown bag or two on that sort of thing, and make attendance mandatory for repeat slackers. Skills building and catharsis all rolled up in a tiny ball of investing in your people.
Builders make things so much cleaner; they inform users what actions are available in a context; they tend to be immutable; and when done well, they provide an intuitive fluent interface for configuration that removes the guesswork. They're also naturally compositional, so you can pass one around to accumulate data and only execute the heavy-lifting bits when you need to. As a bonus, with TypeScript the boilerplate is generally reduced as well, even without any code generation. And they're not just a dumping ground for whatever shit someone was too lazy to figure out how to integrate into the API neatly.
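A minimal sketch of the style in TypeScript (an invented HttpRequestBuilder, not code from the PR under review):

```typescript
// Immutable fluent builder: every step returns a fresh builder.
class HttpRequestBuilder {
  private constructor(
    private readonly url: string,
    private readonly method: string = "GET",
    private readonly headers: Readonly<Record<string, string>> = {},
  ) {}

  static to(url: string): HttpRequestBuilder {
    return new HttpRequestBuilder(url);
  }

  withMethod(method: "GET" | "POST" | "PATCH"): HttpRequestBuilder {
    return new HttpRequestBuilder(this.url, method, this.headers);
  }

  withHeader(name: string, value: string): HttpRequestBuilder {
    return new HttpRequestBuilder(this.url, this.method, { ...this.headers, [name]: value });
  }

  // Heavy lifting deferred until the caller actually wants it.
  send(body?: unknown): Promise<Response> {
    return fetch(this.url, {
      method: this.method,
      headers: this.headers,
      body: body === undefined ? undefined : JSON.stringify(body),
    });
  }
}

// Usage: the chain documents the available configuration as you type.
// await HttpRequestBuilder.to("/api/widgets").withMethod("POST").withHeader("X-Team", "fe").send({ name: "sprocket" });
```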
They're more work in js-land, sure; you can't annotate @builder like with Lombok, but they're generally not all that much work and friendlier to use.
-
I've just seen the documentation of an API I have to communicate with, and facepalmed when I saw that some actions return 404 on success. And more bizarre things... They just wanted to make it worse for me, didn't they?
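My client code now has to encode that madness somewhere - a hedged sketch, with the vendor host invented and the "404 means success" rule standing in for whatever their docs actually say:

```typescript
// Defensive wrapper for an API that reports success as 404 on some actions.
async function callWeirdApi(path: string, payload: unknown): Promise<void> {
  const res = await fetch(`https://vendor.example${path}`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(payload),
  });
  // Per the vendor docs, this action signals success with a 404. Sure. Fine.
  if (res.ok || res.status === 404) return;
  throw new Error(`actual failure: ${res.status}`);
}
```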
Once at it. Why don't you glue spikes onto my keys?
Ffs
-
Our project at work goes live in 3 weeks.
The code base has no automated tests, breaks very often, and has never had any level of manual testing.
We will not be releasing with any form of enforced roles or permissions in our first release, due to no time to implement them; however, there is a whole admin API where you can literally change anything in our database, including roles.
We also have teams in various countries all working separately on the same solution, using microservices with shared NuGet packages, and they aren't using them properly.
Our pull requests in our FE app are so big - as many as 75 file changes - that I can't keep up, and I honestly have no idea if it even works or not due to no automated tests and no time to manually test.
We have no testing team, or qa team of any sort.
Every request into the system has to hit a minimum of 3 different databases via 3 different microservices, so 1 request = 4 requests' worth of load on the servers.
We don't use any file streams so everything is just shoved in the buffer on the server.
Most of the people working on the Angular apps cba to learn Angular, and no one across 2 teams cba to learn git. We use git, so they constantly face problems. The guy in charge has 0 experience in Angular but makes me do things how he wants architecturally, so half the patterns make no sense.
No one looks at the pull requests, they just click approve so they may as well push directly to master.
Unfinished work gets put in for pull request, so we don't know if the app is in a releasable state, since all teams are working independently but on the same code base.
I sat down and tested the app myself for an hour and found 25 FE-only issues and 5 breaking cross-browser issues.
Most of our databases are not normalised. Most of our databases make no sense. 99% of our tables have no indexing since there is no expertise with free time to do it.
No one there understands css properly. Or javascript.
Our .NET Core microservices all use EF directly in the controller actions, so there is no shared code there.
Our customer-facing FE app is not DRY; because there are no tests, it was decided it was better this way.
Management has no idea of the code's state; it seems the team lead is lying to them about things like having any level of tests.
Management hires devs that claim to be experts, but then it turns out they have basically no knowledge of what they were hired to do - they don't even know what JSON is, or the framework or language they were hired for - but we just leave them to get on with it and, again, make PRs too big to review.
Honestly I have no hope that this will go well now, but I am morbidly curious to watch. I've never seen anything like the train wreck we are about to experience.
-
Am I crazy ?
Right now we have an API which returns a full week's planning for 300 employees, with indicators (like "late", "may be postponed", etc.), in 4 seconds.
I have a pressure, people telling me it's not fast enough.
I honestly think it is fast.
In terms of data it's around 100 MB of JSON. AND you can do actions on the whole set if needed.
Long story short, I think 4 seconds to get all that data is pretty great. Customers think they should have it instantly.
(Never mind the whole filtering system at their disposal; they literally only load the full set and then MANUALLY scroll (yes, there is a quick search box).)
What more can I do????? Cache it? I can. But they also expect any changed value to be reflected.
And we fucking do it. While you are on the page there is a SignalR connection created, which is notified when any of the data changes and updates it on the front end. Takes around 500 ms.
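The wiring is roughly this (a hedged sketch using the real @microsoft/signalr client; the hub URL and event/payload names are invented stand-ins for ours):

```typescript
import * as signalR from "@microsoft/signalr";

function updateRowInPlace(entry: { id: string; status: string }): void {
  // Stand-in for the real UI update.
  console.log("patching row", entry.id, "->", entry.status);
}

const connection = new signalR.HubConnectionBuilder()
  .withUrl("/hubs/planning")
  .withAutomaticReconnect()
  .build();

// The server pushes only the rows that changed; we patch them in place
// instead of re-fetching the ~100 MB full planning.
connection.on("PlanningEntryChanged", (entry: { id: string; status: string }) => {
  updateRowInPlace(entry);
});

connection.start().catch(console.error);
```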
Apprently "too slow".
I honestly don't see what we can do more with our small 4 dev team.
Give me 56 developers and I can do something, but right now I'm proud of the result.
-
CI/CD pipelines in my company... CI/CD is supposed to help development... Now we have countless templates and tools (GitHub Actions, CircleCI, Argo, AWS Beanstalk, Azure Pipelines, Serverless, Terraform, CloudFormation, Helm charts, ECR, Vault... and a few more).
Total chaos; doing simple CI/CD for 1 API and 1 lambda has taken 3 days so far, and will take a bit more.
On top of that, no one has any idea which parts of the scripts do what exactly, as responsibilities are spread across different tools (each tool has its own config files).
Does deployment have to be so complex? Or is it that my company's DevOps team makes it so unnecessarily complicated?
-
Not really a dev rant, but I bought 2 Google Homes. Set them up all nice and dandy and then boom.
Me: Hey google, set a reminder to buy batteries.
Home: I am sorry, I can't do that yet.
WTF ok.
Me: hey google, set a calendar reminder to buy batteries tomorrow.
Home: I am sorry, I can't add calendar events yet.
And the list goes on. WTF Google. Why can my phone's Google Assistant do all of the above while a Home assistant cannot, even though they are the same thing...
Guess who is browsing the Actions API to implement missing functionality that should be in the freaking core...
Talking about buying a voice-controlled music box...
-
I did not think that making a serverless Discord bot would be such a learning experience. The code itself was easy. The hard part was the infrastructure, because I decided to automate it all with Terraform and deploy it on AWS.
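For a sense of scale, the "easy" code part of such a bot is roughly the following - a hedged sketch rather than my actual bot; the reply text is invented, though the signature check follows Discord's documented Ed25519 scheme:

```typescript
// Minimal Discord interactions handler on Lambda behind an API Gateway.
import nacl from "tweetnacl";
import type { APIGatewayProxyEvent, APIGatewayProxyResult } from "aws-lambda";

const PUBLIC_KEY = process.env.DISCORD_PUBLIC_KEY ?? "";

export const handler = async (event: APIGatewayProxyEvent): Promise<APIGatewayProxyResult> => {
  // Header casing can vary by gateway config; lowercase assumed here.
  const sig = event.headers["x-signature-ed25519"] ?? "";
  const ts = event.headers["x-signature-timestamp"] ?? "";
  const body = event.body ?? "";

  // Discord rejects endpoints that skip this verification step.
  const valid = nacl.sign.detached.verify(
    Buffer.from(ts + body),
    Buffer.from(sig, "hex"),
    Buffer.from(PUBLIC_KEY, "hex"),
  );
  if (!valid) return { statusCode: 401, body: "invalid request signature" };

  const interaction = JSON.parse(body);
  if (interaction.type === 1) {
    // PING -> PONG handshake.
    return { statusCode: 200, body: JSON.stringify({ type: 1 }) };
  }
  // Slash command -> immediate message response.
  return {
    statusCode: 200,
    body: JSON.stringify({ type: 4, data: { content: "hello from the lambda!" } }),
  };
};
```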
Before this project, I had no idea how API Gateways worked. Now I still have very little idea how they work but I managed to build one anyway. Eventually. And then I had to figure out how to automate the deployment of a lambda layer and function that would both still be managed in the Terraform state, with any code changes triggering a rebuild and update for the resource.
And then I had to untangle a dependency mess because API Gateways have some weird issues where two resources that have no explicit dependencies on each other will throw an error if they don't deploy in the right order.
And then I went the wrong way with Github actions trying to conditionally chain multiple workflows together before I realized I could just put multiple jobs with conditions in a single workflow.
And now, after all that work over the course of 2 days, I have a bot that does this: (screenshot not preserved)
-
I think the following is all in my head, or I am heading towards an office rivalry situation between my tech lead and me.
characters :
me: a no-nonsense Android guy who is sometimes very blunt when faced with unwarranted demands. I'm also realising that I have been a bit too arrogant, as I come up with a lot of counter-questions too fast (not related to the story, though).
tech lead: an Android guy who has been an Android dev for a total of 4 years (same as me), 3 of them in the current company, and somehow got promoted to TL.
story: I find this guy too political and a lazy delegator, and I kinda called him out in public - once during a discussion where other folks were also kinda calling him out, and another time in a small meeting of 3 people. He in turn has taken some actions (like giving me a lower KPI, not giving me the data I need for some work and then asking about it in public, casually ignoring my leave requests) which look like he is taking revenge.
The first time, I called him out in a discussion where everyone was getting on his case about his habit of giving buttery responses to his boss (who occasionally joins our standups). He says "we are on track" while we are already dependent on him to provide data/decisions.
He then tells us to do it faster, and when the work does not get completed (because how could it, without him doing his job?), he blames it on the devs.
I called him out on a similar but different topic: him making last-moment task additions when we are already at the brim with our planned tasks.
The second time, I called him out for not looking into the current task enough, as he was expecting me to take decisions on my own.
The decision was about how a screen's UI would be populated, and there was no API payload available that would match the UI. I created 2 mock API JSONs which would load that screen appropriately, but I wasn't sure whether the 2 APIs would be enough for the screen, and wondered where the missing data would come from.
This task is a long one, and I did take a decision, but he should have validated it to make sure we were on track. The issue came when I took some questions to him and, instead of answering them, he blamed me for not being mature enough to work without the data!
All things aside, I am at my wits' end with this guy. He is my boss and holds incredible power over me, but he is incredibly incompetent, and his habits of delay, delegation and blame are making my work life worse. I don't want to leave this job either, because as much as I hate it, it's currently one of the major names in the industry and gives a solid boost to my resume.
-
Fuckadoodling finally!
After 3 days of digging through the documentation of CraftCMS and the Yii framework, I got the hang of how these controllers, actions and other RESTful API pieces work in Craft 3.
As some of you may have noticed, I have been a big fan of CraftCMS (v2) since it was introduced to me. A few days ago we discussed a new project and the option to go for Craft 3, as it has been released for some time now.
The changes from v2 to v3 are huge... I didn't expect to come this close to my limit and almost give up on it!
But since the RESTful routes finally work, with proper data serializing and all, I will now go drink a Whiskey or ten and wish you all an awesome, client-disturbance-free, decadent, beerful weekend!
Cheers mateys!
🎉🎊🍭🥃🥃🥃🍻🍺🥂 -
A very long rant.. but I'm looking to share some experiences, maybe a different perspective.. huge changes at the company.
So my company is starting our microservices journey (we have 359 retail websites at this moment).
First question was: What to build first?
The first thing we had to do was decide what we wanted to build as our first microservice. We went looking for a microservice that could be used read-only, that consumers could easily implement without overhauling production software, and that was isolated from other processes.
We ended up with a catalog service as our first microservice. That catalog service provides consumers with information about our catalog and the most essential information about the items in it.
By starting with the catalog service, the team could focus on building the microservice without any time pressure. The initial functionalities of the catalog service were created to replace existing functionality that was working fine.
Because we chose such an isolated functionality, we were able to introduce the new catalog service into production step by step. Instead of replacing the search functionality of the webshops in a big-bang approach, we chose A/B split testing to measure our changes and gradually increase the load on the microservice.
Next step: Choosing a datastore
The search engine that was in production when we started this project was using Solr. Thanks to Lucene it was performing very well as a search engine, but from an engineering perspective it lacked some functionality. It came up short if you wanted to run it in a cluster environment, configuring it was hard and not user friendly and, last but not least, development of Solr seemed to have ground to a halt.
Elasticsearch started entering the scene as a competitor to Solr and brought interesting features. Still using Lucene, which we were happy with, it was built with clustering in mind and provides it out of the box. Managing Elasticsearch is easy since there are REST APIs for configuration and, as a fallback, YAML configurations are available.
We decided to use Elasticsearch since it provides us the strengths and capabilities of Lucene with the added joy of easy configuration, clustering and a lively community driving the project.
An even bigger challenge: which programming language will we use?
The team responsible for developing this first microservice consists of a group of web developers. So when looking for a programming language for the microservice, we went searching for a language close to their hearts and expertise. At that time a typical web developer at least had knowledge of PHP and Javascript.
What we noticed while researching various languages is that almost all actions performed by the catalog service boil down to the following paradigm:
- Execute a HTTP call to fetch some JSON
- Transform JSON to a desired output
- Respond with the transformed JSON
These actions can easily be done in a parallel and asynchronous manner and mainly consist of transforming JSON from the source into a desired output. The programming language used for the catalog service should hold strong qualifications for that kind of work.
Another thing to note is that some functionalities built using the catalog service will result in a high level of concurrent requests. For example, the type-ahead functionality triggers several requests to the catalog service per user interaction.
To us, PHP and .NET at that time weren't sufficient for building the catalog service based on the requirements we'd set. Eventually we decided to use Node.js, which is better suited for the things we were looking for, as described earlier. Node.js provides a non-blocking I/O model and, being event driven, helps us develop a high-performance microservice.
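That paradigm, as a minimal sketch (modern TypeScript on Node with built-in fetch and Express; the upstream URL and field names are invented for illustration):

```typescript
import express from "express";

const app = express();

app.get("/catalog/:id", async (req, res) => {
  // 1. Execute an HTTP call to fetch some JSON (Node 18+ global fetch).
  const upstream = await fetch(`https://source.example/items/${req.params.id}`);
  if (!upstream.ok) return res.status(502).json({ error: "upstream failed" });
  const item = await upstream.json();

  // 2. Transform the JSON into the desired output shape.
  const output = { id: item.id, name: item.title, price: item.pricing?.amount };

  // 3. Respond with the transformed JSON.
  res.json(output);
});

app.listen(8080);
```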
The leap to start programming in Node.js is relatively small since it basically is Javascript, a language that was familiar to the developers at the time. While Node.js introduces some new concepts, it is relatively easy for a developer to start using it.
The beauty of microservices and the isolation they provide is that you can choose the best tool for each particular microservice. Not all microservices will be developed using Node.js and Elasticsearch. All kinds of combinations might arise, and this is what makes the microservices architecture so flexible.
Even if Node.js or Elasticsearch turns out to be a bad choice for the catalog service, it is relatively easy to swap that choice for magic 'X' or component 'Z'. By focusing on creating a solid API, the components driving that API don't matter that much. It should do what you ask of it, and when it is lacking you just replace it.
Many more headaches to come later this year ;)
-
Fuck Redux/NgRx. I'm done; I can't get my head around this ugly shit. All I wanted was to load/save API data in a clean way and display a loading indicator now and then. But definitely not to multiply my entire code base by 10. Actions, reducers, effects. What is this?! Fuck that rocket science.
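For the uninitiated, the multiplication looks roughly like this - a hedged sketch using the real NgRx APIs, with a made-up ProductService, and this is just for one list with one spinner:

```typescript
import { Injectable, inject } from "@angular/core";
import { createAction, createReducer, on, props } from "@ngrx/store";
import { Actions, createEffect, ofType } from "@ngrx/effects";
import { of, map, mergeMap, catchError } from "rxjs";

// Actions - three of them, just to fetch one list.
export const loadProducts = createAction("[Products] Load");
export const loadProductsSuccess = createAction(
  "[Products] Load Success",
  props<{ products: unknown[] }>(),
);
export const loadProductsFailure = createAction("[Products] Load Failure");

// Reducer - the loading indicator lives here.
export interface ProductsState {
  products: unknown[];
  loading: boolean;
}
const initialState: ProductsState = { products: [], loading: false };

export const productsReducer = createReducer(
  initialState,
  on(loadProducts, (state) => ({ ...state, loading: true })),
  on(loadProductsSuccess, (state, { products }) => ({ ...state, products, loading: false })),
  on(loadProductsFailure, (state) => ({ ...state, loading: false })),
);

// Made-up API service, standing in for a real HttpClient call.
@Injectable({ providedIn: "root" })
export class ProductService {
  getProducts() {
    return of<unknown[]>([]);
  }
}

// Effect - where the actual HTTP call finally happens.
@Injectable()
export class ProductsEffects {
  private actions$ = inject(Actions);
  private api = inject(ProductService);

  load$ = createEffect(() =>
    this.actions$.pipe(
      ofType(loadProducts),
      mergeMap(() =>
        this.api.getProducts().pipe(
          map((products) => loadProductsSuccess({ products })),
          catchError(() => of(loadProductsFailure())),
        ),
      ),
    ),
  );
}
```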
-
Wasting 4 hours trying to send a POST request and fetch back the JSON reply, and having to fall back on fsockopen when cURL is not available, is no fun. The fuck is with C API code in what's supposed to be a web-oriented high-level language that has no fucking native interface for REST actions?
!rant -
Omg Next.js 14 is so good. I can't believe this. Server actions are so powerful. This shit makes you prototype and move RAPIDLY FAST. And the framework itself is fast as cum! Unbelievable. No wonder every website lately is built in Next.js. This framework is definitely the future of the web. It made working with databases blazingly simple. Prisma ORM is unbelievably flexible. The shit you can build with this framework has no fucking limits! It has an /api folder to just add RESTful APIs and reuse the same Prisma methods from shared lib functions, and boom, you can now scale the project to a mobile app!
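A hedged sketch of the pattern being praised (real Next.js/Prisma APIs, but the Task model, form field and path are invented):

```typescript
"use server";

import { PrismaClient } from "@prisma/client";
import { revalidatePath } from "next/cache";

const prisma = new PrismaClient();

export async function createTask(formData: FormData) {
  const title = String(formData.get("title") ?? "");
  if (!title) return;

  // Runs on the server only - no hand-rolled REST endpoint needed.
  await prisma.task.create({ data: { title } });

  // Re-render any page that lists tasks.
  revalidatePath("/tasks");
}

// In a component:
// <form action={createTask}><input name="title" /><button>Add</button></form>
```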
All of this bullshit took me YEARS to learn how to do properly in a regular frontend-backend separated project, while I learned this Next.js stuff blazing fast. Am I missing something, or is this framework too good to be true?
I'll bend over for nextjs
-
Thought that it might be a good idea to ask this question here.
I'm looking for a nice event-logging service for a side project that is B2B (so my clients have their own users). My targets are tracking user behavior/events/actions in the app while being able to separate out the data that belongs to each customer. A great benefit would be a solution that allows me to export part of the data (in an SQL-like way), so I could give users the option to download their users' data as well.
I was thinking about Mixpanel, but I don't think they have any option to export the data via API. Heap Analytics is also an interesting one, but their nice features are limited to corporates...
Any suggestions? Thanks!
-
Context: I run a chatbot company
Why the fuck does Actions on Google have to have such a shitty API? It's not even an API; it's a CLI that does some magic uploading to somewhere - no normal webhook integration, no message routes. Everything goes through a magic SDK that does who knows what.