Programmers are usually notoriously bad at guessing which parts of the code are the primary consumers of the resources. It is all too common for a programmer to modify a piece of code expecting to see a huge time savings and then to find that it makes no difference at all because the code was rarely executed. - Jon Louis Bentley, Writing Efficient Programs

  • 6
    Who modifies code by guessing what is running the most? That is not hard to find out.
  • 1
    Put logs everywhere. That's the lightest-weight way to find out which parts of the code are being reached and which aren't.
  • 6
    @Sid2006 that's pretty much what one should *never* do.

    Use profiling tools, metrics systems, tracing.

    Logging should never be misused for performance problems.

    It could be used for debugging...

    But please, use metrics systems for storing and evaluating performance data, plus perf tools and tracing. They were made for the job.
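
    As a concrete illustration of "use profiling tools instead of print statements", here is a minimal sketch using Python's built-in cProfile and pstats modules (the `slow_sum` workload is invented purely for demonstration):

```python
import cProfile
import io
import pstats

def slow_sum(n):
    # Deliberately naive loop, standing in for a suspected hot path.
    total = 0
    for i in range(n):
        total += i * i
    return total

def main():
    for _ in range(50):
        slow_sum(10_000)

# Profile main() and print the top functions sorted by cumulative time.
profiler = cProfile.Profile()
profiler.enable()
main()
profiler.disable()

out = io.StringIO()
pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats(5)
print(out.getvalue())
```

    The report shows call counts and per-function time directly, which is exactly the information scattered print/log lines can only hint at.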
  • 0
    @IntrusionCM You're one of *those* kinda devs aren't you?

    Those who always say "There is a tool/app for that".

    Those who shoot down any junior's opinions because obviously simple solutions aren't what we need. We need highly over-engineered platforms that cost a ton and provide pretty graphs to look at.
  • 2
    @Sid2006 I have to use print to make sure my function is being called at times. I should probably get more familiar with the profiling tools though.

    There are a lot of "ways that I code" which could use some efficiency improvements.
  • 3
    @Sid2006 Prometheus / Grafana don't cost a thing. Or as an alternative the TICK stack (Telegraf, InfluxDb, Chronograf, Kapacitor).

    Libraries to expose Prometheus-scrapable data, or to send metrics directly to InfluxDb, are free, too.

    When you abuse logging for performance metrics, it's easy to make a bad judgment call.

    You just see what happens in one function at one point in time, probably without even the context of the function/service (input data, resource usage of the service, whether the service was under high or low traffic, et cetera).

    Because you can only infer a very limited amount of knowledge from a logging call, that bad judgment call then leads to the usual "refactoring".

    That refactoring often introduces regressions... and then the goose chase starts over.

    Or worse: someone writes a logging function that tries to output *everything*, breaking logging / production or creating a huge security leak. Because no, logging was not made for megabytes of text...

    Metrics and profiling *are* simple solutions; they just need to be applied more thoroughly, instead of reaching for a subpar substitute and making bad decisions.
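
    To make the contrast concrete, here is a toy, stdlib-only sketch (the `Timer` class and the simulated durations are invented for illustration, not any real metrics client's API): a metric aggregates many samples into a summary, where a log line only captures one call at one point in time:

```python
import random
import statistics

class Timer:
    """Toy latency metric: aggregates samples instead of logging each one."""
    def __init__(self):
        self.samples = []

    def observe(self, seconds):
        self.samples.append(seconds)

    def summary(self):
        # Count, mean, and 95th percentile over all recorded samples.
        s = sorted(self.samples)
        return {
            "count": len(s),
            "mean": statistics.mean(s),
            "p95": s[int(0.95 * (len(s) - 1))],
        }

random.seed(42)
latency = Timer()
for _ in range(1000):
    # Simulated per-request duration, standing in for timed real work.
    latency.observe(random.uniform(0.001, 0.010))

print(latency.summary())
```

    A real metrics library would additionally ship these aggregates with labels and timestamps to a backend, but even this sketch answers questions (typical latency, tail latency, call volume) that a single log line cannot.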
  • 3
    @Sid2006 Logs are unsuited for that job: if a piece of code is actually hot, and hence worth optimising, you will drown in log entries precisely because it is called so often. Several million log entries per second aren't really nice to evaluate.

    In the case of C/C++, GCC supports gprof (compile with `-pg`) exactly for that purpose, at function level. Juniors should not fudge around with nonsense if there are proper tools for the job - they should learn their tools. Like this simple GCC profiling tutorial: https://thegeekstuff.com/2012/08/...

    Log output can be used if you want to see whether a code part executes at all. A breakpoint in a debugger may be an alternative, but not always.
  • 0

    And let's not forget that in many stdlibs (not only C++'s), log functions don't provide any guarantees on completion, ordering, or atomicity.

    This can not only further tank performance, due to the need for a highly contended lock, but can also lead to severe misjudgments, oversights, or race conditions because of missing, out-of-order, or garbled logs.
  • 0

    Also, setting a breakpoint in a debugger costs nothing, and that's what 99% of juniors use console.log for.