2

What are your opinions on minimum % test code coverage? I'm unfamiliar with it.

Comments
  • 13
    This is the wrong approach to quality.
    Not everything should be tested.
    Concentrating on test coverage percentage will only lead to useless tests just for the sake of coverage.

    Make your code as testable as possible and test as much as it makes sense.

    This is not as easy as making a number grow but only this will improve your code.

    Test coverage will only produce a number on a monitor which makes POs happy.
  • 10
    The best thing about the metric is that it's measurable, and therefore a good way to make stupid stakeholders happy.
  • 2
    It depends. If you use it as a relative metric in MRs it's nice imo. If the percentage goes down, you should look at the untested code and ask yourself "should this be tested?". If you then test it, the drop is irrelevant.

    For me it's about seeing what's tested and if new untested code was brought in.
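The relative gate this comment describes can be sketched in Go. `coverageGate`, the percentages, and the tolerance are all illustrative, not a real CI tool:

```go
package main

import "fmt"

// coverageGate implements the relative check described above: instead of a
// fixed minimum, it flags an MR only when coverage drops against the base
// branch. baseline and current are percentages (0-100); tolerance allows
// small fluctuations so the gate doesn't fire on noise.
func coverageGate(baseline, current, tolerance float64) (ok bool, msg string) {
	if current+tolerance < baseline {
		return false, fmt.Sprintf("coverage dropped %.1f%% -> %.1f%%: review the new untested code", baseline, current)
	}
	return true, "coverage did not drop"
}

func main() {
	ok, msg := coverageGate(71.4, 69.0, 0.5)
	fmt.Println(ok, msg)
}
```

The point is that the gate compares against the MR's own baseline rather than a project-wide magic number.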
  • 1
    I don't think there should be a rule for what a minimum test coverage should be. It is a good indicator though. I usually end up north of 70%, but it's never a goal in itself. For me it's not the overall coverage but the per-package coverage that is more telling: if I see it drop, that means someone added stuff without tests, likely in an area where we agreed to test rigorously.

    As with all metrics, it's important to understand it. Coverage says nothing about the quality of the tests/test cases. Some separate integration tests already guard against regressions but might not be taken into account.

    Stuff that is auto-generated, or that simply doesn't compile unless it's correct, doesn't benefit from tests. If your project contains a lot of that, coverage might be as low as 20%. Forcing people to increase that is just stupid.
  • 0
    A good start is 60%, excluding functions that wrap other functions, do nothing, or only fulfill interfaces.
    If a function is only one line once you set the line length to infinite, it's a candidate for removal from test coverage.
    If a function/method has a for/while, or an if that does more than assigning/wrapping values, I usually write a test for it. But I count integration tests differently: they have to reach 100% with next to no coverage skips. Unit tests for classes have to run without the integration tests.
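The distinction drawn above can be illustrated with a sketch in Go (`Wrap` and `Clamp` are made-up examples, not from the thread):

```go
package main

import "fmt"

// Wrap is a one-line pass-through of the kind the comment suggests excluding
// from coverage: a test for it would only restate its body.
func Wrap(s string) string { return fmt.Sprint(s) }

// Clamp has branches (an if that does more than assign/wrap a value), so by
// the rule above it deserves a unit test.
func Clamp(v, lo, hi int) int {
	if v < lo {
		return lo
	}
	if v > hi {
		return hi
	}
	return v
}

func main() {
	fmt.Println(Clamp(5, 0, 3))
}
```

Counting only `Clamp`-like functions toward the target keeps the percentage tied to code that can actually be wrong.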
  • 0
    @gintko Yeah, that is sensible. Unfortunately that's not always easy to do. To my knowledge you can't do that in Go.

    It is, as I mentioned before, a good indicator. In my experience, projects with complex code or tricky business logic (basically anything surpassing the super trivial) that have around 40% coverage are horrible to maintain. It's the code where a "better not touch it" sentiment rules among the people who have experience with it.
    It's where the "code without tests = legacy code" definition rings true.
  • 0
    Fucking salesforce and their 75% coverage requirement.
  • 0
    It's nice if it's used as a guideline like stated here by others.

    Usually it's a PO/stakeholder metric which they can shout about without really knowing what's going on, shooting people down 'cause the number is too low or patting themselves on the back when it's in the green.

    (Incoherent, need more coffee.)
  • 2
    Code coverage only tells you what percentage of the code is executed during tests, not that the code is actually tested. It's useful information, but enforcing X% coverage tends to result in worthless tests being written just to increase coverage, which makes the metric itself worthless.
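The executed-but-not-tested gap can be shown in a few lines of Go (the bug and the names are contrived for illustration):

```go
package main

import "fmt"

// Add has an obvious bug: it subtracts instead of adding.
func Add(a, b int) int { return a - b } // bug: should be a + b

// coveringButUseless mimics a test that merely calls Add: every line of Add
// is now "covered", but no result is checked, so the bug is never caught.
func coveringButUseless() { Add(2, 3) }

func main() {
	coveringButUseless()
	fmt.Println("Add(2, 3) =", Add(2, 3)) // prints -1, not the expected 5
}
```

A coverage report would show `Add` at 100% even though nothing about its behavior was verified.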
  • 1
    @hjk101 wow, you are so right. Ours is at 31% and the code is exactly as horrible as you describe. It took me 5k lines of refactoring, followed by writing 11 files of unit tests, to get that measurement to 33%.
  • 0
    Your tests should cover the basic functionality of your features and the edge cases. If something is difficult to test, then it's probably badly written and needs refactoring. Depending on the codebase, good test coverage could vary, but it should never be 0.
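"Basic functionality plus edge cases" is what a table-driven Go test typically captures. `Truncate` is a hypothetical feature invented for the sketch:

```go
package main

import "fmt"

// Truncate shortens s to at most max runes, appending "..." when it cuts
// something off. A stand-in for a small feature under test.
func Truncate(s string, max int) string {
	r := []rune(s)
	if len(r) <= max {
		return s
	}
	return string(r[:max]) + "..."
}

func main() {
	// Table of cases: the happy path plus the edge cases (shorter than the
	// limit, empty input, exact boundary, zero limit) the comment recommends.
	cases := []struct {
		in   string
		max  int
		want string
	}{
		{"hello world", 5, "hello..."},
		{"hi", 5, "hi"},       // shorter than limit
		{"", 5, ""},           // empty input
		{"abcde", 5, "abcde"}, // exact boundary
		{"abc", 0, "..."},     // zero limit
	}
	for _, c := range cases {
		fmt.Println(Truncate(c.in, c.max) == c.want, c.in)
	}
}
```

In a real project the same table would live in a `_test.go` file and report failures through `t.Errorf` instead of printing.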