
Unit tests pass locally but fail in the pipeline. After the 3rd re-queue, the pipeline tests pass. I am so over this bloody week.

Comments
  • 0
    Similar things were happening to me as well... because I used GitLab's "cache". The problem went away when I replaced it with "artifacts".
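
    For reference, the swap looks roughly like this in .gitlab-ci.yml - a minimal sketch, job name and paths made up:

      # before: a cache survives across pipelines and can go stale
      build:
        script:
          - npm ci && npm run build
        cache:
          key: "$CI_COMMIT_REF_SLUG"
          paths:
            - node_modules/

      # after: artifacts are produced fresh by this job and passed to later stages
      build:
        script:
          - npm ci && npm run build
        artifacts:
          paths:
            - node_modules/
          expire_in: 1 week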
  • 1
    @netikras From a theoretical standpoint - you just managed to write a horror story in three sentences.

    Is GitLab really that poorly documented? Oo
  • 1
    It took three tries for the Volkswagen test-cheating plugin to operate correctly
  • 2
    Pipeline: "Thinking about it for the third time, I came to the conclusion that your unit tests were indeed correct."
  • 0
    Reproducible builds.

    If they weren't such a pain in the ass and didn't need such meticulous preparation, they would be heaven.
  • 0
    @IntrusionCM Why so?
  • 0
    @netikras "Why so" is a bit unspecific.

    Meticulous preparation, to ensure it's actually the same build in every detail possible.

    Heaven, because a lot of build systems - in the sense of the whole environment, from CI system to language to environment to build tool to .... - tend to have devious, obnoxious details that are easy to overlook.

    Reproducible builds make these devious, obnoxious details visible.

    The meticulous preparation helps you understand what can go wrong in the process.

    From my experience with build tools, it would be heaven not to rip your hair out for hours debugging shit, but instead have clear, visible details at hand.

    Sadly, standards are lacking, and for every build system, language, environment and continuous integration setup it's a different process.
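
    A sketch of what that preparation can look like in CI - assuming the toolchain honours SOURCE_DATE_EPOCH (the reproducible-builds.org convention); job name, build command and output file are made up:

      repro-check:
        script:
          # pin the timestamp to the commit so the two builds are comparable
          - export SOURCE_DATE_EPOCH=$(git log -1 --pretty=%ct)
          - make clean && make && sha256sum output.bin > first.sha
          - make clean && make && sha256sum output.bin > second.sha
          - diff first.sha second.sha   # any diff = the build is not reproducible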
  • 0
    @IntrusionCM there's no silver bullet out there. Every build system tries to solve the same problem its own way, introducing its own imperfections.

    I used to think GitLab nailed it, until I got elbow-deep in it. Before that I thought Jenkins was the Holy Grail. Before that - ... I don't even remember.

    Every build system (every solution with alternatives, for that matter) tries to do something better than its competitors, accidentally (?) overlooking something else / creating new problems that other competitors will step in to solve. And the cycle continues..

    If you look right, you don't see what's on your left.
  • 0
    @IntrusionCM
    "Why so" -- I was referring to your message where you've mentioned me.

    > From a theoretical standpoint - you just managed to write a horror story in three sentences.
  • 0
    @netikras Has a lot to do with what I wrote before.

    An artifact is a build result.

    A cache is a temporary result.

    As obvious as it sounds - a lot of documentation doesn't explain the non-obvious things:

    By storing the build result as an artifact, you can package up the whole build for examination. Even a lock file for dependencies might not be enough to spot differences in runtime files (like test data that got downloaded, or necessary additional files like a GeoIP database, an IP address list or stuff like that).

    Since a cache gets nuked or constantly changed, it is not something you can examine - it's in a constant state of flux.

    An artifact, however, is generated once per build, tied to a specific revision in the build system. You can eliminate a lot of variables just based on that fact - e.g. if the same commit revision suddenly passes in a new build with a new artifact, something isn't deterministic.

    With artifacts, you can start comparing things: see what was actually linked / downloaded, see where the difference lies.
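
    As a sketch (paths are made up): package everything the build touched, then download the artifact archives of two runs of the same commit and diff them:

      build:
        script:
          - npm ci && npm run build
        artifacts:
          paths:
            - dist/
            - package-lock.json
            - test-data/   # downloaded runtime files, GeoIP db, etc.

      # locally, after downloading both archives:
      #   diff -r run-1/ run-2/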

    That's one part.

    The other part is that caches are obnoxious - I mentioned runtime files earlier, but there are deeper, disturbing things lurking there, depending on language. E.g. NodeJS with C/C++ compilation - worst case, the system got updated and the cache reuses a broken linked library that was generated ages ago.

    These are really nightmares to debug.
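
    One hedge against that, as a sketch (NodeJS example, image tag made up): tie the cache key to the lockfile and the build image, so a system update invalidates the cache instead of reusing stale native builds:

      build:
        image: node:20   # bumping the image changes the cache key below too
        cache:
          key:
            files:
              - package-lock.json
            prefix: "$CI_JOB_IMAGE"
          paths:
            - node_modules/
        script:
          - npm ci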

    The last thing with caches is logging.

    Most build systems cannot show you how a cache has changed. There's no diff view or anything like that. At best you can grep downloaded files out of the build log, but most libraries don't even spit out that information for runtime stuff.
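
    The closest workaround I know, sketched below: checksum the cached paths at the start and end of the job yourself, so the log at least shows when the cache changed:

      build:
        cache:
          key: "$CI_COMMIT_REF_SLUG"
          paths:
            - node_modules/
        before_script:
          - find node_modules -type f 2>/dev/null | sort | xargs -r sha256sum | sha256sum   # cache state going in
        script:
          - npm ci
        after_script:
          - find node_modules -type f 2>/dev/null | sort | xargs -r sha256sum | sha256sum   # cache state going out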

    I try to separate caching from the CI as much as I can for that reason. Went down the rabbit hole too many times, debugging stuff and then realizing I needed to work around broken build systems, the CI making wrong assumptions (e.g. that no dependency in a cache changes) or generally just a clusterfuck somewhere in between.
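
    In GitLab that separation can be as blunt as opting single jobs out of a globally defined cache - a sketch, assuming a recent GitLab (older versions used cache: {} instead of cache: []):

      default:
        cache:
          key: "$CI_COMMIT_REF_SLUG"
          paths:
            - node_modules/

      test:
        cache: []   # this job ignores the inherited cache entirely
        script:
          - npm ci && npm test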
  • 0
    @IntrusionCM yeah, I figured. Tbh I see no practical use for GitLab's cache. I replace it with artifacts wherever I can, since it doesn't cost me an additional dime and I get a sure thing.

    Maybe I'm just not experienced enough to know good use-cases for a cache...