
Our company has the opportunity to start moving toward a microservices architecture.

There is so much technical debt to pay down that this opportunity is a godsend!

Now, of course, the whole "programming language debate" comes into play at this point.

To provide some context: we've reached the point where we need to be able to scale, while speed and performance also matter. I would argue that scale is of more importance at this stage.

Our "dev manager" (who is really only in that position because he's the oldest, and likes scribbling on a notepad and the sound of his own voice) wants to use Rust, as it's a performant language. He wants to write the service once and forget about it. (Not sure that's how programming works, but anyhoo.) He's also inclined to prematurely optimize solutions before they're even in production.

I want to use TypeScript/NodeJS, as most of the team and I are familiar with it, to the point that we use it daily in production. I'm not oblivious to the fact that Rust is superior to TypeScript/NodeJS, but the latter does at least scale well. Also, our team is small - like 5 people small - so we're limited in that aspect as well.

I'm with Kent Beck on this one...
1. Make it work
2. Make it right
3. Make it fast

We're currently only at step 1, moving onto step 2 now!

Comments
  • 1
    Rust is not the most stable thing to work with.
    Stay with Node for the moment, and make sure you use stable contracts for your microservices.
    After it works, you can rewrite specific parts in Rust for performance.
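The "stable contracts" advice above could be as simple as a shared type plus a runtime guard, so both sides of a service boundary agree on the wire format. A minimal sketch in TypeScript; all names here are hypothetical:

```typescript
// Hypothetical shared contract for an "orders" microservice.
// Producer and consumer both import this module, so the wire
// format can only change in one reviewed place.

export interface OrderCreatedV1 {
  version: 1;
  orderId: string;
  totalCents: number;
}

// Runtime guard: validates untrusted JSON against the contract
// before the service acts on it.
export function isOrderCreatedV1(msg: unknown): msg is OrderCreatedV1 {
  if (typeof msg !== "object" || msg === null) return false;
  const m = msg as Record<string, unknown>;
  return (
    m.version === 1 &&
    typeof m.orderId === "string" &&
    typeof m.totalCents === "number"
  );
}
```

Versioning the message is what later lets you rewrite one service in Rust without breaking its consumers: old clients keep sending `version: 1` and still validate.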
  • 2
    I'd take it as an opportunity to escape node
  • 2
    Scaling is a word that needs a lot of explanation in your context.

    Node/JS is a single thread application.

    Scaling means cloning the entire app and running it multiple times - scaling in this context meaning horizontal scaling. Vertical scaling won't bring much, since a single-threaded application is limited in how many resources it can use.

    Node/JS scaling has a lot of downsides because of this - it's not easy, and the resource overhead of running many clones is real.

    Which is why NodeJS isn't a good choice if you need to scale massively - the overhead eats a lot of resources and depending on what you want to achieve it becomes very tricky and fickle to make it run smoothly.
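The "clone the app" scaling described above is exactly what Node's built-in `cluster` module does. A minimal sketch, assuming a plain HTTP service (the port and handler are placeholders):

```typescript
import cluster from "node:cluster";
import http from "node:http";
import os from "node:os";

// One clone per CPU core, since a single Node process can only
// use one core for JavaScript execution.
export const workerCount = os.cpus().length;

export function startCluster(port: number): void {
  if (cluster.isPrimary) {
    for (let i = 0; i < workerCount; i++) {
      cluster.fork(); // each fork duplicates the app's full memory footprint
    }
  } else {
    http
      .createServer((_req, res) => res.end(`handled by pid ${process.pid}`))
      .listen(port); // the OS distributes incoming connections across workers
  }
}
```

The per-fork memory duplication is the cloning overhead the comment is pointing at: eight workers on an eight-core box means eight copies of the heap, dependencies and all.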

    Rust on the other hand makes it "easier" to implement multi-threading.

    I say "easier" because it is still a tedious task and rusts core concepts like borrowing aren't easy - but they ease up multi-threading tremendously.

    But Rust is an entirely different ecosystem. If you're only familiar with NodeJS, it's nearly impossible to transfer knowledge.

    After all, JavaScript is arguably the language with the most broken ecosystem and language design ever.

    If you're only 5 people, it's even worse. No shared knowledge means you'll have to divert already scarce people and rotate maintenance to prevent knowledge silos (e.g. 2 doing maintenance and 3 learning Rust, rotating every week so all 5 people know what's happening and why - which is pretty much impossible).

    So I'd highly recommend taking the suboptimal route, but - as always - doing it right from the beginning.

    Minimal dependencies; every dependency must be acked by the whole team.

    Fixed set of features, fixed framework and language choice (e.g. TypeScript only, with no backwards-compatibility concessions to plain JS - types everywhere).

    Etc.

    The joy of microservices comes from being as rigid and strict as possible, so the project doesn't become a pus-infected abomination.
  • 0
    Wouldn't each vendor optimize the crap out of their JavaScript runtime for their serverless services, given that everyone is running JavaScript?
  • 1
    It reminds me of my previous CTO, who was not far from asking us to write everything in assembly.
    Old devs with dogmas are the worst thing ever.
  • 0
    @h3rp1d3v Not everyone is running JavaScript ... For good reasons.

    Plus the problem is normally not the vendor, but the code itself.

    I've seen things™.

    Especially JavaScript projects - due to the broken ecosystem called NPM / NodeJS - suffer from "everything except what's needed".

    As an example: 3-5 middlewares chained together, because "no one cares".

    Then people wonder why resource usage is extremely high... Guess what: piping every request through multiple middlewares is a dumb idea™.
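To make the complaint concrete, here's a toy sketch of the pattern being criticized (the middlewares are stubs I invented for illustration): every request, even a health check, traverses the whole chain whether or not any link has something to contribute.

```typescript
type Req = { url: string; user?: string; body?: unknown };
type Middleware = (req: Req, next: () => void) => void;

// Compose middlewares Express-style; returns how many of them
// actually ran for a given request.
function chain(...mws: Middleware[]): (req: Req) => number {
  return (req) => {
    let calls = 0;
    const run = (i: number): void => {
      if (i >= mws.length) return;
      calls++;
      mws[i](req, () => run(i + 1));
    };
    run(0);
    return calls;
  };
}

// Three stub middlewares that every request pays for, needed or not:
const handle = chain(
  (req, next) => { req.user = "anonymous"; next(); }, // auth stub
  (req, next) => { req.body = {}; next(); },          // body-parser stub
  (_req, next) => { next(); }                         // logging stub
);

handle({ url: "/health" }); // even a health check traverses all three
```

The per-request cost of each link is small, but it is paid on every request, which is why a long default chain shows up as baseline resource usage.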
  • 0
    @IntrusionCM I was talking about serverless functions. You can't use most npm packages because of the size limit. People have to structure their code carefully and check which packages the platform supports.

    You won't have memory-hog issues when the server resets after each serverless function call.

    I have seen batshit-crazy things in JS when people don't follow library documentation and invent their own OOP abstractions on top of well-thought-out declarative libraries. I guess that won't be the case when no one on the team comes from a Java background?

    I think it's fine to have multiple middlewares chained together.