Search - "pipeline fail"
-
The reason why aliens are avoiding earth:
Me: Guys, the CI/CD pipeline is ready. ci.yaml is our config file, so don't remove it, as the deployments will fail.
**10 seconds later**
slack: BUILD FAILED
Me: *Looks at git commits* "Brian removed ci.yaml"
Wtf BRIAN! 🖕🖕🖕🖕
-
Just got a new job at an old-school hardware company. The codebase is giving me a heart attack. They don't care about dev experience or code navigation at all. Every attempt to modernize the codebase is half-assed. All the patches are so bloated that they make the codebase even worse.
The frontend was migrated from a prototype-OOP-jQuery cluster fuck to AngularJS, then finally Angular. Holy moly, all the business logic is baked into UI "classes" via the prototype chain. When they migrated to AngularJS, someone simply added a wrapper around that jQuery cluster fuck class and overwrote all the prototypes in a 10k+ line file. Since every method is hidden in either a prototype, a JS object, or a callback function, it's impossible to trace the data pipeline in the IDE: "go to definition" on an update() method gives you every update method/string in every object/class. And they don't care about immutability. References are taken out, renamed, and mutated everywhere. Finding the source of a bug is a fucking guessing game.
I don't know what trick they use that makes CLion's static analyzer fail.
And there are no unit tests or spec docs.
Fuck me dead
-
My current job at the release & deploy mgmt team:
Basically this is the "theoretically sound flow":
* devs shit code and build stuff => if all tests in the pipeline are green, it's eligible for promotion
* devs fill in the desired build version number in an Excel sheet; we take this version number and deploy said version into a higher environment
* we deploy all the thingies and we do just ONE spec run for the entire environment
* we validate, and then go home
In the real world however:
* devs build shit and the tests fail / are unstable ===> disable the tests in the pipeline
* devs write down a version number, but since they disabled the tests they realize it's not working because they forgot thing XYZ, and want us to deploy another version of said application after the code-freeze deadline
* deployments fail because said developers don't know jack shit about Flyway database migrations; they always fail, we have to point out where they went wrong, we even gave them the tooling to check such schemas (sketched right after this list), but they never use it
* a deploy fails, we send feedback, they request a NEW version, with the same bug still in it, because working with git is waaaaay too progressive
* we enable all the tests again (we basically regenerate all the pipeline jobs), and it turns out some devs have manually modified the pipelines, causing the build/deploy process to fail. We urged mgmt to seal off Jenkins from the devs since we're dealing with this fucking nonsense the whole time, but noooooo, devs are "smart persons that are supposed to have a sense of responsibility"... yeah, FUCK THAT
* even after new versions received after the deadline, the application still ain't green... What happens is basically doing it all over again the next day...
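(For reference, the kind of pre-flight check we handed them, sketched here with Flyway's CLI; the connection details are placeholders:)

```bash
# run BEFORE requesting a deploy -- both commands are read-only
flyway -url=jdbc:postgresql://db-host:5432/app \
       -user=app_user -password='***' \
       info        # lists applied vs. pending migrations and their order

flyway -url=jdbc:postgresql://db-host:5432/app \
       -user=app_user -password='***' \
       validate    # fails fast on checksum mismatches / broken ordering
```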
This is basically what happens when you:
* have no standards and rules in regards to conventions
* have very poorly solutioned workflow processes that have "grown organically"
* have management that is way too permissive in allowing breaking stuff and pleasing other "team leader" asscracks...
* have a very bad user/rights mgmt on the LDAP side (which unfortunately we cannot do anything about, because that is in the ownership of some dinosaur fossil that strangely enough is alive and walks around in here... If you ask/propose solutions, that person goes into sulking mode. He (correctly) fears his only reason for existence (LDAP) will be gone if someone dares to touch it...)
This is a government agency mind you!
More and more I'm thinking daily that I really don't want to go to the office just to make a ton of money.
So the only motivation right now is... the money, which I find abhorrent.
And also more stuff, but now that I am writing this down it makes me really, really sad. I don't want to feel sad, so I stop being sad and feel awesome instead.
-
I know you, you're out there somewhere, coding, feeling like shit, putting in your best, listening to Coldplay, in the server room, your basement... I know you very well
-
** me setting up GitLab CI **
- run pipeline
- FAIL
- env variable not passed to one of the shell scripts
- set -x, rerun
- FAIL
- same reason. env variable is OK in the `set -x` output
- comment out `set -x`, rerun
- still FAIL
- same reason
- find a `set +x` left in one of the scripts
- comment that out
- rerun
- PASS
- WTF?!?!?!?
- continue swearing about the better half of a day wasted debugging my scripts
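At least part of that mystery is a classic trap, shown here as a minimal runnable sketch: `source`d helpers run in the current shell, so a leftover `set +x` in any of them switches tracing off for everything that follows, no matter what the main script did.

```bash
#!/usr/bin/env bash
set -x              # tracing on: commands below are echoed to stderr
echo "traced"       # shows up in the CI log with a '+' prefix
set +x              # e.g. hiding at the bottom of a sourced helper script
echo "not traced"   # runs fine, but leaves no trace -> looks broken in CI logs
```
-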
Me: "Need help with build config problems, please help almighty documentation page!"
Docs Page: "Nah fam, I got 4 headers about problems with no text, a blank code example, and 2 error 404 pages."
And that's why I don't like build pipelines.
-
Stupid pipeline bullshit.
Yeah, I get it, it speeds up development/deployment time, but debugging this shit, with secret variables and generated config that are only viewable inside Kubernetes after everything has been entered into the Helm charts through Key Vaults in the pipeline, just to see the Docker image fail with "no such file found" or similar errors...
This means a new commit, a new commit message, waiting for the Docker build and push to finish, waiting for the release pipeline to trigger, a new Helm chart release, waiting for the Kubernetes deployment, and taking a look at the logs...
And another error which shouldn't happen.
Docker fixes "it runs on my machine"
Kubernetes fixes "it runs on my Docker image"
Helm fixes "it runs in my Kubernetes cluster"
Why is this stuff always so unnecessarily hard to debug?!
I sure hope the devs appreciate my struggle with this... well, guess what, they won't.
Anyways, the weekend is near and my last day at this company is only four months away.
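For anyone stuck in the same loop, a few read-only commands can shortcut part of it. A sketch, assuming cluster access; the release, chart path, namespace and pod names are placeholders:

```bash
# render the manifests locally instead of committing to find out
helm template my-release ./chart -f values.yaml

# once deployed, go straight to the failure instead of guessing
kubectl -n my-namespace get pods                 # find the crashing pod
kubectl -n my-namespace describe pod my-pod      # events: image pulls, mounts, probes
kubectl -n my-namespace logs my-pod --previous   # logs of the last crashed container
```
-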
I am lying down on the floor because I cannot figure out why these specs pass locally but repeatedly fail in the CI/CD pipeline. I've literally done everything; now I just want to lie here and sleep.
-
Don't you just love it when GitLab's CI pipelines crash for no apparent reason, causing tests which cannot fail to just magically break down, changing logging levels to just about anything, and basically PMSing for about 3 hours before deciding it needs to restart completely; and when you return, the same pipeline which you've been trying to fix for the better part of an entire evening, after regular work hours, it. Fucking. Works. With. No. Changes. To. The. Entire. FUCKING. System.
Waste of a day.
-
I called the hack "blow up bunny"; it was at my first company.
We had 4 industrial printers which usually got fed by PHP / IPP to generate invoices / picking lists / ...
The dilemma started with inventory - we didn't have time to prepare due to a severe influenza going round (my team of 5 was down to 2 persons, one of whom was stuck with trying to maintain order; overall I'd guess more than 40% of roughly 70 persons were ill...).
Inventory was the kind of ultimate death process, since the company sold mobile accessories and other - small - stuff.
Small is the important word here....
Over 10 000 items were usually in stock.
Everything needed to be counted if open or (if closed) at least registered.
The dev task was to generate PDFs with SKUs and prefilled information to prevent disaster.
The problem wasn't printing.
The problem was time and size.
Generating lists for > 10 000 articles, matching SKUs, segmented by the number of teams isn't fun.
Printing them even less. Especially since printers can and will fail - if you send nonstop, there is a high chance that the printer gets stuck since the printer's command buffer gets cranky and so on.
It was my longest working day: 18 hours.
In the end, "blow up bunny" did something incredibly stupid: it was a not-so-trivial bash pipeline which "blew up" the large PDF into chunks of max 5 pages and sent each chunk to one of the 4 printers in round-robin fashion.
After a max of 4 iterations, bunny was called.
"bunny" was the fun part.
Via IPP you can of course watch the printer queue.
So...
Check if a queue was empty, then start the next round on the printers whose queues were determined to be empty.
Not so easy already. And due to the amount of pages, this could fail too.
This was the moment where my brain suddenly got stuck, at 4 o'clock in the morning in a very dark and spooky empty company - what if the printer gets stuck? I could send a queue reset or stuff like that, but all in all - dead is dead. Paper jam is paper jam.
So... I just added all the CUPS servers to bunny's curl list.
Yes. I printed on all > 50 printers on 4 beefy CUPS servers in the whole company.
It worked.
People were pretty pissed since collecting them was a PITA... but it worked.
And in less than 2 hours, which I would have never believed (cannot remember the previous time or number of pages...)
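"Blow up bunny" reconstructed as a sketch; the printer names, file names and splitting tool are assumptions, not the original code (which talked IPP to four CUPS servers directly):

```bash
#!/usr/bin/env bash
PRINTERS=(invoice1 invoice2 invoice3 invoice4)   # CUPS queue names (made up)

# "blow up": split the giant PDF into chunks of at most 5 pages
pages=$(pdfinfo big.pdf | awk '/^Pages:/ {print $2}')
for ((p = 1; p <= pages; p += 5)); do
    pdftk big.pdf cat "$p-$((p + 4 < pages ? p + 4 : pages))" \
        output "chunk-$(printf '%04d' "$p").pdf"
done

# round robin: one chunk per printer, then the "bunny" queue check
i=0
for chunk in chunk-*.pdf; do
    lp -d "${PRINTERS[i % ${#PRINTERS[@]}]}" "$chunk"
    ((++i % ${#PRINTERS[@]})) || {
        # bunny: poll until the queues have drained before the next round
        while [ -n "$(lpstat -o)" ]; do sleep 5; done
    }
done
```
-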
Unit tests pass locally but fail on the pipeline. After the 3rd re-queue, the pipeline tests pass. I am so over this bloody week.
-
How do you do your CI/CD pipeline? Sorry if this is a dumb question, just wondering how the tests and deployment usually run. Is it on a per-team basis? Is it the whole release getting deployed to Test many times per day? What happens if too many automated tests fail or there is not enough coverage - does it abort the deployment? If so, how can every team get delayed by every issue - is that actually a good policy?
My pipeline is very slow and requires a team of 12 people working in shifts to complete it. I'm not an expert but I know it does a lot of steps and never completes without manual intervention. I would like to help but I'm not sure how bad it is.
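One common answer, sketched as a minimal GitLab-style config (stage and script names are illustrative, not from any specific setup): tests gate the deploy per pipeline run, so a red test stage never reaches deployment at all.

```yaml
stages: [test, deploy]

test:
  stage: test
  script: ./run_tests.sh            # any non-zero exit marks the stage red
  coverage: '/coverage: \d+\.\d+%/' # lets CI extract a coverage number from the log

deploy:
  stage: deploy
  script: ./deploy.sh
  when: on_success                  # runs only if every job in 'test' passed
```
-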
GitLab, you really should fix your CI.
I mean, I know .gitlab-ci.yml has to be written carefully, having in mind that the GL shell is a castrated Bourne shell, but come on... Failing a pipeline because I used a colon in an `echo` parameter string?
echo "items: 0" ## this will fail
echo "items 0" ## this will pass
This is a bit too much.
Removed the colon and the pipeline worked just fine.
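For what it's worth, the culprit here is the YAML parser rather than the shell: a plain scalar containing "colon + space" inside `script:` gets read as a mapping instead of a string. A minimal sketch of the workaround (the job name is made up):

```yaml
job:
  script:
    # - echo "items: 0"   # BAD: YAML parses this entry as a one-key mapping, not a string
    - 'echo "items: 0"'   # GOOD: single-quoting the whole entry keeps it a plain string
    - echo "items 0"      # also fine: no 'colon + space', so the plain scalar survives
```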