8
atheist
265d

Go, Rust, C#, Swift, Java, Ruby, Python, Delphi/Object Pascal, Ada.

Those are the "approved" memory-safe languages of the US gov. Seriously, go fuck yourself.

https://readwrite.com/the-nsa-list-...

Comments
  • 5
    I would say that, on average, developers do less memory voodoo in these than in C/C++,

    so I don't necessarily disagree.

    As I've mentioned time and time again, programs need to be written with intent. If you can't differentiate between an inherently unsafe operation and some regular thing you'd do anyway, you're pretty much fucked in that context (see the sketch at the end of this comment).

    Doing stuff "fast" is not really a strong argument, because given the right languages that's the job of your compiler anyway. Plus (as much as it hurts me to say it) hardware has gotten really advanced, which makes it unnecessary for you, the regular hillbilly dev, to think about micro-optimizing shit like in the 70s.

    I mean, sure, the proposed languages are not the solution, but on the other hand they at least have less attack surface, and that's what it's all about.
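
    To make that point concrete, a minimal C++ sketch (my own example, nothing from the article): the unchecked access looks exactly like a "regular thing you'd do anyway", while the checked variant at least announces the failure.

        #include <iostream>
        #include <stdexcept>
        #include <vector>

        int main() {
            std::vector<int> v = {1, 2, 3};

            // Looks like a perfectly regular access, but index 3 is out of
            // bounds: undefined behavior, no error, no warning.
            // int x = v[3];

            // The checked variant makes the unsafe case visible: at() does a
            // bounds check and throws instead of silently corrupting memory.
            try {
                std::cout << v.at(3) << '\n';
            } catch (const std::out_of_range& e) {
                std::cout << "caught: " << e.what() << '\n';
            }
        }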
  • 5
    They have Java in the list but not Kotlin, wtf?

    Besides that, why are you upset?
  • 2
    @thebiochemic Interesting points.

    I guess this pisses me off because I'm not a regular dev...

    On your point about writing with intent: I would say modern C++ lets you write code that visibly differentiates between sketchy and normal code. Smart pointers, ranges, etc. (sketch at the end of this comment).

    As for fast, I've also worked on a project where that was the whole thing. At the extreme, we had an embedded hardware project that had 32 milliseconds to finish processing some data and return to the host FPGA. I would *maybe* have tried that in Rust or Go, but C++ could definitely do it.
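
    Something like this, say (a sketch of the idea, compiled as C++20; the names are mine): ownership and iteration intent live in the types, so the unsafe style stands out by contrast.

        #include <algorithm>
        #include <memory>
        #include <vector>

        int main() {
            // Ownership is explicit in the type: the vector is freed
            // automatically, with no delete to forget or to run twice.
            auto nums = std::make_unique<std::vector<int>>(
                std::vector<int>{3, 1, 2});

            // C++20 ranges: no iterator-pair mismatch to get wrong.
            std::ranges::sort(*nums);

            // The "sketchy" style announces itself by contrast:
            //   int* raw = new int[3];  // who owns this? who frees it?
        }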
  • 4
    People have been taking this way too personally and emotionally, like holy shit, who cares?
  • 3
    @atheist Yes, there are ample tools in modern C++ for safer code, BUT you can just as easily use unsafe constructs if you don't know the difference.

    With C#, for example, you have to explicitly say you want to use unsafe code (see the sketch at the end of this comment).

    As for speed, 98% of all code doesn't need to focus that much on it, and in many cases where you do need it, you can still write those parts in whatever language you need and the rest in a safer one.
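
    To illustrate the contrast from the C++ side (my own sketch; in C# the equivalent pointer code is only accepted inside an unsafe { } block, with unsafe code enabled for the project):

        #include <cstdio>

        int main() {
            int values[4] = {1, 2, 3, 4};
            int* p = values;

            // Raw pointer arithmetic with no opt-in keyword at all: this
            // compiles silently, and nothing would stop you from writing
            // p + 5 instead of the one-past-the-end p + 4 below.
            for (int* it = p; it != p + 4; ++it) {
                std::printf("%d\n", *it);
            }
        }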
  • 2
    @Lensflare My guess is that Kotlin is not big enough to make the list.
  • 2
    The US gov doesn't have jurisdiction in my home. I will continue to use C and you can't stop me!
  • 3
    @Voxera and Ada is?

    Anyway, I have no clue what this list/initiative is actually for. Who needs to adhere to it? Surely not everyone. Perhaps it's for future government procurements?
    I think it's easier to have a ban list and, later, specific linter requirements.
    That's how I would do it: you don't need to restrict the tools, just ensure no unsafe usage is possible.
  • 1
    @hjk101 Ada is probably used within the US government and gets special treatment ;)
  • 1
    @Lensflare I would say that one in particular made it because of the pool of developers that use it. Even though I love Kotlin as a language, I can see why the main JVM language would be the choice.

    And just like you, I too want to know why the OP is so pressed about the list.
  • 3
    No JavaScript :)))))
  • 4
    @netikras omg I know right????

    *starts pinching own nipples furiously*
  • 1
    I am totally going to add some cyberpunk shit to a future game that talks about making C++ illegal.
  • 2
    So the only "certified" standard implementation of OPC UA that is open source is open62541. It is written in C. It is actively maintained and getting new features. The reason it is written in C is that a lot of hardware platforms only have C available; it gets lots of use in embedded systems. I think there is a C++ version written by the standards group, but I don't know if it meets certification. The other commercial option is written in C++. Sure, there are implementations written in C# and other languages, but I don't think they fully meet the standard, and they are not certified.

    It will be decades before stuff like this changes direction.

    edit: Honestly, it kind of brings me joy to think that writing new C/C++ code might piss off bureaucrats.
  • 1
    @thebiochemic Hardware, especially the CPU, will soon reach its limit, and then we'll have to think about how to get the most performance out of the available resources. 2 nm is the physical limit of silicon semiconductors.
  • 1
    @max19931 We haven't reached physical limitations by a long shot. Look at how they actually measure the "nm" in manufacturing nodes: the components can get a lot smaller and more precise before the physical limit of silicon is hit.
  • 2
    @max19931 0.2 nm is the physical limit (the size of a silicon atom). We're already under 4 nm, so there's not a huge amount left to go. Then it'll be either light-based CPUs or quantum. But for them to take over they'll have to be faster, and that's not an easy hurdle.
  • 0
    @max19931
    If you really want to take the discussion in that direction, sure...

    You can always go both ways, it's literally economy 101.

    When the theoretical limit is reached, you can still produce more chips to push the price down, and build more machines to produce even more chips to push the price down even further; rinse, repeat. You can also optimize chips for specific tasks.

    If you really think about it, this has already been happening in the home computing space for ages, or do you still believe that the CPU is the only compute unit in your PC?

    After that's cleared up, we can also push the cost of power generation down. There is still plenty of R&D and R&D potential here.

    Turns out, it's not a question of if something's doable, but of how much it will cost.

    At the end of the day it's just a regular market, like any other market.

    And if someone decides they want safety, they probably already know that and are willing to pay the price.
  • 1
    @max19931 As others have said, this is already going on: multiple cores, specialized cores, new algorithms using said cores. Going back to C++ for speed is not the only option; if it were, why C++ and not straight to assembly or machine code?

    The answer is development cost and complexity.

    More modern languages use clever language constructs to express the intent, and then allow the compiler to optimize and use specialized hardware or parallelism (see the sketch at the end of this comment).

    Yes, you can do that manually, but it adds complexity, which adds cost, and most software doesn't justify that added cost. Especially since few algorithms need faster single-thread performance; instead we do more different things, or process more data in ways that can be parallelized.

    So adding cores works.
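
    To keep it in one language: C++17's parallel algorithms already express that style, so here's a sketch of intent-driven parallelism (with GCC's libstdc++ this typically needs -ltbb to link):

        #include <execution>
        #include <numeric>
        #include <vector>

        int main() {
            std::vector<double> data(10'000'000, 1.5);

            // The call states the intent ("reduce this; parallelism and
            // vectorization allowed") and leaves the how (threads, SIMD,
            // chunking) to the implementation.
            double sum = std::reduce(std::execution::par_unseq,
                                     data.begin(), data.end(), 0.0);

            // The sequential version is the same call minus the policy:
            //   double sum2 = std::reduce(data.begin(), data.end(), 0.0);
            return sum > 0.0 ? 0 : 1;
        }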
  • 1
    @Voxera I'd actually disagree with your "why not assembly instead of C++?" argument. C, and by extension C++, is by design as fast as assembly. Java isn't.
  • 1
    @Voxera Ada was designed under contract to the DoD, so yes.
  • 3
    @atheist Well, it's only ever going to be as fast as the compiler makes it, just like C++ or Rust, and C is designed to be platform agnostic, meaning there have to be some tradeoffs for more generic constructs.

    The compiler will attempt to use the most optimized instructions but it can only do so much.

    The closer to assembly you are, the more you risk that your code limits what rewrites the compiler can make.

    That is one of the ideas behind Rust, after safety: using more high-level constructs to define intent gives the compiler more freedom to use faster constructs.

    And the bigger the code base, and the more developers involved, the more benefit you get from better compilation.

    Sure, the very best dev can beat the compiler, but realistically most companies will not have devs that outperform the compiler anyway.
  • 1
    @Voxera I get you. That's essentially why I've spent several hours staring at objdump output to make sure the compiler is vectorizing the way I want. That said, GCC will tell you why something is or isn't being vectorized, and it's very clever about it (sketch below).
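
    For anyone curious, a minimal way to get that report out of GCC (the function is just an illustration; the flags are GCC's -fopt-info family):

        // saxpy.cpp - compile with:
        //   g++ -O3 -march=native -fopt-info-vec-optimized -fopt-info-vec-missed -c saxpy.cpp
        // GCC then reports which loops it vectorized and why it skipped the
        // rest; objdump -d saxpy.o shows the SIMD instructions it emitted.
        void saxpy(float a, const float* __restrict x,
                   float* __restrict y, int n) {
            for (int i = 0; i < n; ++i) {
                y[i] = a * x[i] + y[i];  // simple, alias-free loop: vectorizes at -O3
            }
        }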