
I'm convinced this is going to be wildly unpopular, but hey...

Please stop writing stuff in C! Aside from a few niche areas (performance-critical, embedded, legacy workloads, etc.) there's really no reason to, other than some vague justification about "having full control over the hardware" or "not trusting these modern frameworks." I get it: it's what we all grew up with as the de-facto standard. But times have moved on, and the steady stream of massive memory leaks & security holes that keep coming to light in *popular*, well-tested software is a great reason not to assume you're smart enough to avoid all those issues by taking full control yourself.

Especially if, like most C developers I've come across, you also shun things like unit tests as "something the QA department should worry about" 😬
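
To make that concrete, here's a minimal sketch (hypothetical code, not taken from any real project) of the sort of off-by-one that sails straight through review and testing:

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical example of the kind of bug that keeps turning up even in
 * well-tested C code: the allocation forgets the NUL terminator, so
 * strcpy() writes one byte past the end of the buffer. It compiles
 * cleanly and usually "works" -- until it corrupts adjacent heap data. */
char *copy_name(const char *name)
{
    char *buf = malloc(strlen(name));   /* BUG: should be strlen(name) + 1 */
    if (buf == NULL)
        return NULL;
    strcpy(buf, name);                  /* writes strlen(name) + 1 bytes */
    return buf;
}
```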

Comments
  • 6
    But the hardware!
  • 6
    I can agree with the reasoning behind this, but I can't agree with it entirely. Not only does C have merit in a lot of applications, it may be the only language someone is familiar with, or one they're so familiar with that switching isn't worth it to them. 'Muh modern frameworks' is just as toxic a way to think as 'Muh low level access' is.
  • 10
    Agreed, use C++ or Rust or Go or whatever instead unless you're in the niche where C is absolutely a requirement.

    Aside from a few niche cases, modern compiler optimizers will almost certainly produce better low-level code than you can. The sheer effort you'd need to put into C to do the same or better is usually not worth it, and the result probably won't be secure or portable either (of course, if it's a core library or hardware driver or something, that's different).
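
    For instance (a rough sketch, assuming gcc or clang on a mainstream target), the naive loop below is typically unrolled and, at higher optimization levels, auto-vectorized, so hand-rolled pointer tricks or inline assembly rarely buy you anything:

    ```c
    #include <stddef.h>
    #include <stdint.h>

    /* Plain, "naive" C: sum an array of 32-bit integers.
     * With gcc or clang at -O2/-O3 this loop is typically unrolled and
     * auto-vectorized for the target CPU, so manual micro-optimisation
     * mostly costs readability and portability for little or no gain. */
    uint64_t sum_u32(const uint32_t *data, size_t n)
    {
        uint64_t total = 0;
        for (size_t i = 0; i < n; i++)
            total += data[i];
        return total;
    }
    ```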

    @craftxbox "it may be the only language one is familiar with" is just sad tbh.
  • 0
    I agree; like you said, it definitely has strengths in niche areas, but for modern software use something like Java, C++, C#, Go, etc.
  • 4
    @RememberMe C is optimized by the compiler as well
  • 1
    Writing random stuff with raw array access and malloc() makes me feel powerful.
  • 1
    I agree. I used to be afraid of frameworks (frankly, I still am!), but it's like being afraid of anything - it looks scary until you get closer, become more familiar with it, observe how it works, and then over time you realize it's not that bad after all.

    However, abusing frameworks/libraries is what I'm afraid of. If I can write a thing in 2-3 hours myself, I'd rather do that than import yet another library.

    @craftxbox If I only know ASM - should I be writing webapps with ASM too, because it's the only language I know?
  • 4
    Unfortunately C has a few features I do not see in other languages:
    - A calling standard (see the sketch at the end of this comment): I can reuse C libraries from virtually every language. Many languages can output C-compatible shared libraries, but this is usually not very practical and requires writing a lot of boilerplate code. Interpreted languages can't do it at all, compiled languages with a GC have a big overhead (in the worst case I would end up with multiple GCs in the same program), ... Even for C++ one must sometimes create very unidiomatic wrappers.
    - C is standardised and changes only very slowly. Even K&R code should mostly still compile. If I write code which has to be rewritten every couple of years, that's not great.

    Nonetheless, I agree one should use the right tool for the job, and that may not be C every time. But if, and only if, the above requirements (besides performance) are met, I don't see a problem using C for long-term projects. It's not impossible to write good code in it.
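
    A minimal sketch of what I mean by the calling standard (the header and names are made up for illustration): plain functions and data behind an extern "C" guard are consumable from C++ and from most languages' FFI layers (ctypes, JNA, P/Invoke, ...) without extra wrappers.

    ```c
    /* mathx.h -- hypothetical plain-C interface.
     * Symbol names and the calling convention follow the platform's C ABI,
     * so the library can be loaded from C, C++, and the FFI layers of most
     * other languages without wrapper code. */
    #ifndef MATHX_H
    #define MATHX_H

    #ifdef __cplusplus
    extern "C" {    /* suppress C++ name mangling for these declarations */
    #endif

    /* Plain data and free functions only: no templates, no exceptions,
     * no mangled names -- that's what keeps the interface stable. */
    double mathx_mean(const double *values, unsigned long count);

    #ifdef __cplusplus
    }
    #endif

    #endif /* MATHX_H */
    ```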
  • 1
    As for the first point -- it's not a problem if you are NOT writing in C (which is what this post is about in the first place).

    Both interpreted languages AND other compiled languages (maybe not all -- I don't know them all) can use C libraries in their runtime. Through some adapters maybe, but still -- the overhead is next to nothing. However, the other way around is more difficult (that's what you're saying - and that's correct). And I believe that was the point of this post -- leave C at the lower level, close to the hardware, where it belongs. Leave C as a way to write system-wide shareable libraries, not something to build your higher-level app(s) in. And the compatibility problem is no more.

    And a GC is a very nice way to prevent segfaults and memory leaks (well, most of them). Yes, it comes at the price of a bit more resource consumption (and if the app is poorly written, it can cost a lot of CPU/RAM). But it saves a whole lot of development and debugging time.
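
    A tiny sketch (hypothetical code) of the kind of leak a GC makes a non-issue:

    ```c
    #include <stdio.h>
    #include <stdlib.h>

    /* The early return on the error path forgets to free() the buffer, so
     * every short read leaks memory. In a garbage-collected language the
     * buffer would simply become unreachable and be reclaimed. */
    int read_record(FILE *fp, size_t size)
    {
        char *buf = malloc(size);
        if (buf == NULL)
            return -1;

        if (fread(buf, 1, size, fp) != size)
            return -1;              /* BUG: leaks buf */

        /* ... process buf ... */
        free(buf);
        return 0;
    }
    ```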

    At least that's how I see it.
  • 2
    @netikras

    To clarify, I don't see a problem with a GC as used in modern languages, but if those languages are then used to write shared libraries, it might be problematic.

    E.g. if I use a shared library written in Golang from C#, I now run two GCs in my process (one for Go and one for .NET). It does not hurt per se (I tried it and it works), but it consumes unnecessary resources.
  • 1
    I even write application level stuff in C, and it's a superb language. Neither the behemoth that is C++ nor the brittle, unstandardised puzzle language that is Rust can keep up with that. C will always be there.
  • 0
    You have control if you know the language. I have control over C++ because I know what the compiler does. I have control over Java because I know the perks, what bytecode is produced, and how the GC and VM work. I have control over JavaScript because I know how the event queue works and how it's interpreted and executed.

    If you feel like you've no control over a language, go fucking study it.