11

probably gonna have to do with environmental software certification (optimizing for energy reduction, that sort of stuff)

Comments
  • 2
    Event-driven microservices, so we don't waste so many resources
  • 2
    That might even go hand in hand with subsidies (or discounts from hosting providers) to encourage companies to make their software more efficient
  • 2
    @robbietoppert that's probably gonna be a big part of it, anything to reduce wasted processing time
  • 3
    @robbietoppert no doubt Amazon will put a big ad on that lol. Get clout and reduce resource consumption? Yeah, they're all gonna jump on that
  • 2
    @darksideofyay they sure will! All whilst expanding their mark on our planet and making the solution our problem
  • 2
    I don't necessarily think that will happen in software engineering, but rather in network engineering, datacenters and stuff
  • 1
    @12bitfloat that'll have to happen in software engineering too, in the same vein as old school programming: optimize, do more with less. there is some shitty code out there, and when the certification comes some devs are gonna have to study hard
  • 0
    @darksideofyay I don't see how that could be done though. Software moves so quickly and is so complex; how would you certify software for low power usage? Number of requests per watt? Well, that really depends on what the app is designed to do...
  • 0
    @12bitfloat asymptotic analysis
  • 2
    @darksideofyay But how though? Give me an actual example. How can you decide which complexity is the threshold for what piece of software? What even are the metrics? Code that does simple HTTP responses has a WAY lower cyclomatic complexity than a VISA fraud detection system. How can you even compare the two?

    Just saying "asymptotic analysis" really doesn't mean anything. Should non-O(n log n) sorts be forbidden by law? You think that's realistic?

    That sounds completely infeasible tbh. The rational view (imo) on the matter is that you can't assess code in this way whatsoever. What you *can* assess is hardware, especially server and datacenter hardware but also consumer, where it's actually possible to come up with some sort of metric like PSU efficiency, cooling efficiency or cycles per watt (and even that is pretty vague; an ARM cycle isn't an x86 cycle)
  • 0
    @12bitfloat I'm just saying, there is bad code out there. sometimes you can eyeball a function and say it's shit. I'm talking about the O(n²) and O(2^n) of the world. I'm talking about reinventing the wheel when using a library would be much better. data redundancy is also a very stupid mistake. none of it is hard to fix, but some code is just that bad (there's a small sketch of this kind of fix at the end of the thread)

    none of this is new, the novelty will be the "seal of approval" of some entity. there's already discussion on this in academic spaces, it's not a big deal, they just have to settle on the rules
  • 0
    @darksideofyay But how should this entity decide whether my credit card fraud detection system using AI models is "efficient" or not? Against what are they comparing it? What's the threshold that says "yes, this is efficient" versus "this isn't"?

    > Sometimes you can just eyeball a function and say it's shit

    Well, sometimes we devs eyeball a function, think it's shit, spend way too much time trying to improve it, and then figure out that even though our new code is much cleverer and "better" it actually performs worse because of cache coherency, memory access patterns, stuff like that (there's a quick timing sketch at the end of the thread)

    Yeah, I agree some code sucks a lot, but how do you come up with an objective metric that tells us that?
  • 1
    @12bitfloat if they're able to prove mathematically that there's a better solution, i think that's a reason to deny a certificate. idk, I'm not actively involved in the development of those, don't really have the answers for you. it's just something to watch out for ✌️
  • 1
    @darksideofyay That just sounds like a software optimization agency with extra steps 🤔

    Whatever, it doesn't matter. I don't think that's ever gonna happen, but I also don't know a lot 🤷‍♂️
  • 0
    I think the direction is wrong, but the overall idea is sound.

    What you usually need in large data centers - and what is _very very very_ complicated stuff - is proper energy management everywhere, in real time.

    A lot of energy is wasted by improper or overzealous cooling...

    I know some of the cooling systems are already pretty smart regarding physics and dynamic energy management... But I'm sure there's still room for improvement.

    Would be interesting if a system could be achieved where the executing server is chosen based on the load in real time.

    Kinda the idea behind hybrid CPU designs... just for data centers (rough sketch at the end of the thread).
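
Below is a minimal Python sketch of the kind of "eyeball-able" fix @darksideofyay is pointing at above: a nested-loop duplicate check versus a set-based one. The function names and data are made up for illustration.

```python
def has_duplicates_quadratic(items):
    """O(n^2): compares every pair -- easy to spot, easy to flag."""
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False


def has_duplicates_linear(items):
    """O(n) expected: a set remembers what we've already seen."""
    seen = set()
    for item in items:
        if item in seen:
            return True
        seen.add(item)
    return False
```

Both return the same answers, but on a list of 100,000 items the quadratic version does up to roughly 5 billion comparisons while the linear one does 100,000 set lookups.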
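There's also @12bitfloat's point that "cleverer" code can lose to the boring version because of how the runtime actually executes it. A quick timing sketch (exact numbers vary by machine; the point is to measure rather than eyeball):

```python
import timeit

def clever_sum(values):
    # Hand-rolled loop that "avoids overhead"... supposedly.
    total = 0
    i = 0
    n = len(values)
    while i < n:
        total += values[i]
        i += 1
    return total

values = list(range(100_000))

# The builtin sum() runs its loop in C and typically wins by a wide margin.
print("clever:", timeit.timeit(lambda: clever_sum(values), number=100))
print("boring:", timeit.timeit(lambda: sum(values), number=100))
```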
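And a rough sketch of the "executing server chosen based on the load in real time" idea from the last comment, in the spirit of hybrid CPU scheduling: route each unit of work to whichever machine adds the fewest watts right now, so idle boxes can stay asleep. The server names and power numbers here are entirely hypothetical, and a real scheduler would also have to weigh latency, data locality and wake-up time.

```python
from dataclasses import dataclass

@dataclass
class Server:
    name: str
    idle_watts: float      # power draw just for being awake
    watts_per_unit: float  # marginal power per unit of load
    load: float = 0.0      # current load, arbitrary units

    def marginal_cost(self) -> float:
        # Waking a sleeping server costs its idle draw on top of the work itself.
        wake_cost = self.idle_watts if self.load == 0 else 0.0
        return wake_cost + self.watts_per_unit

def dispatch(servers, units=1.0):
    """Send work to whichever server adds the fewest watts right now."""
    target = min(servers, key=lambda s: s.marginal_cost())
    target.load += units
    return target

fleet = [
    Server("efficiency-node", idle_watts=40.0, watts_per_unit=2.0),
    Server("performance-node", idle_watts=150.0, watts_per_unit=1.2),
]

for _ in range(5):
    chosen = dispatch(fleet)
    print(chosen.name, "load =", chosen.load)
```

With these made-up numbers the work consolidates onto the efficiency node and the big box never wakes up, which is exactly the "do more with less" behavior the certification discussion is after.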