Search - "upstream"
-
Seven months ago:
===============
Project Manager: - "Guys, we need to make this brand new ProjectX, here are the specs. What do you think?"
Bored Old Lead: - "I was going to resign this week but you've convinced me, this is a challenge, I never worked with this stack, I'm staying! I'll gladly play with this framework I never used before, it seems to work with this libA I can use here and this libB that I can use here! Such fun!"
Project Manager: - "Awesome! I'm counting on you!"
Six months ago:
====================
Cprn: - "So this part you asked me to implement is tons of work due to the way you're using libA. I really don't think we need it here. We could use a more common approach."
Bored Old Lead: - "No, I already rewrote parts of libB to work with libA, we're keeping it. Just do what's needed."
Cprn: - "Really? Oh, I see. It solves this one issue I'm having at least. Did you push the changes upstream?"
Bored Old Lead: - "No, nobody uses it like that, people don't need it."
Cprn: - "Wait... What? Then why did you even *think* about using those two libs together? It makes no sense."
Bored Old Lead: - "Come on, it's a challenge! Read it! Understand it! It'll make you a better coder!"
Four months ago:
==============
Cprn: - "That version of the framework you used is loosing support next month. We really should update."
Bored Old Lead: - "Yeah, we can't. I changed some core framework mechanics and the patches won't work with the new version. I'd have to rewrite these."
Cprn: - "Please do?"
Bored Old Lead: - "Nah, it's a waste of time! We're not updating!"
Three months ago:
===============
Bored Old Lead: - "The code you committed doesn't pass the tests."
Cprn: - "I just run it on my working copy and everything passes."
Bored Old Lead: - "Doesn't work on mine."
Cprn: - "Let me take a look... Ah! Here you go! You've misused these two options in the framework config for your dev environment."
Bored Old Lead: - "No, I had to hack them like that to work with libB."
Cprn: - "But the new framework version already brings everything we need from libB. We could just update and drop it."
Bored Old Lead: - "No! Can't update, remember?"
Last Friday:
=========
Bored Old Lead: - "You need to rewrite these tests. They work really slow. Two hours to pass all."
Cprn: - "What..? How come? I just run them on revision from this morning and all passed in a minute."
Bored Old Lead: - "Pull the changes and try again. I changed few input dataset objects and then copied results from error messages to assertions to make the tests pass and now it takes two hours. I've narrowed it to those weird tests here."
Cprn: - "Yeah, all of those use ORM. Maybe it's something with the model?"
Bored Old Lead: - "No, all is fine with the model. I was just there rewriting the way framework maps data types to accommodate for my new type that's really just an enum but I made it into a special custom object that needs special custom handling in the ORM. I haven't noticed any issues."
Cprn: - "What!? This makes *zero* sense! You're rewriting vendor code and expect everything to just work!? You're using libs that aren't designed to work together in production code because you wanted a challenge!?? And when everything blows up you're blaming my test code that you're feeding with incorrect dataset!??? See you on Monday, I'm going home! *door slam*"
Today:
=====
Project Manager: - "Cprn, Bored Old Lead left on Friday. He said he can't work with you. You're responsible for Project X now."24 -
!rant
!!git
Who here uses `master` for development?
My boss (api guy) tried to convince me that was normal practice. I gently told him that it sounded crazy and very very bad.
Here's the dev path I'm enforcing on my repos:
(feature branches) -> dev -> qa* -> master -> production*
*: the build server auto-pulls from these branches, and pushes any passing builds to staging/production.
Everyone works on their own feature branches, and when they're happy with their work, they merge it into `dev`. `dev`, therefore, is for feature integration testing. After everything is working well on `dev`, it gets merged into `qa` for the testers to fawn over and beat with sticks. Anything that passes QA gets merged into `master`, where it sits until we're ready to release it. When that time comes (it's usually right away, but not always), `master` gets merged into `production`.
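In plain git commands, that promotion flow is roughly the following (a sketch using the branch names above, not our actual tooling):
git checkout -b feature/my-thing dev                     # everyone branches off dev
git checkout dev && git merge --no-ff feature/my-thing   # feature integration testing on dev
git checkout qa && git merge dev                         # testers fawn over it and beat it with sticks
git checkout master && git merge qa                      # stable + newest, safe to fork from
git checkout production && git merge master              # build server picks this up and deploys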
This way, `master` is always stable and contains the newest code, so it's perfect for forking/etc. Is this standard practice, or should I be doing something different?
Also, api guy encourages something he calls "running a racetrack" -- each dev has their own branch (their initials) and they push to that throughout the day. everyone else pulls from it regularly and pushes to their own branch. When anyone's happy with their code, they push from their (updated) branch to `qa` (I insisted on `dev` instead.)
Supposedly this drastically reduces the number of merge conflicts when pushing to an upstream branch due to having a more recent ancestor node?
I don't quite follow that, but it seems to me that merging/pushing throughout the day would just make them happen sooner? idk.
What are your thoughts?
-
I get a call: "Hey the site is down. Fix it!"
Worked on my workstation, not on my phone => DNS issue.
Local cache: "All OK"
ISP's DNS: "No record"
Google DNS: "Server error"
MXToolbox: "All OK"
CloudFlare DNS: "Domain? What domain?"
After a day of fucking around with configs and wanting to strangle the customer support guy, I just started pressing buttons, until suddenly, it worked. Turns out I'd accidentally enabled DNSSEC on a domain that wasn't configured for it.
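(For anyone else stuck on this: you can usually spot a broken DNSSEC chain yourself before blaming the resolver. A rough sketch, with example.com standing in for the affected domain and Cloudflare as the resolver:)
dig +dnssec example.com @1.1.1.1      # SERVFAIL with no answer is the usual symptom
dig +cd +dnssec example.com @1.1.1.1  # +cd disables validation; if this suddenly works, DNSSEC is the culprit
delv example.com @1.1.1.1             # delv (from the bind tools) actually explains why validation failed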
Lesson learned: There is no official DNS error code for "DNSSEC failed somewhere upstream". If you're lucky, you might get something useful out of the authoritative server, but apparently not on Mondays.
-
!!rant
!!ANGER
Micromanager: "Hey, Root!
Since you're back, and still not feeling well, we have an easy ticket for you: Rewrite the slack integration gem! Oh, you don't have to re-implement all of it, just make sure it all works the same way it does now. That bitch you worked with once over a year ago who kept throwing you under the bus to management and stealing credit for your work? Yeah, she wrote the original code like four years ago. It's perfect, so don't touch it. but she can fill you in on all the details you need and get you up to speed on how to test it.
But yep! It should be simple. and I just knew you would love this ticket, so I saved it just for you. Nice and quick, too, to get you an easy win.
You know, since you have to repair your reputation with product. and management. and the execs. and the rest of the team. and me. Yeah, product doesn't trust you so they don't want to give you any tickets. They just can't trust you to get them out and have them work. So you have a lot of hard work to do."
Spoiler: The bus-thrower wasn't much help. (Surprise.)
Spoiler: The ticket was already in my backlog -- one of a grand total of two tickets.
Spoiler: I don't find the ticket fun. Maybe if I was to write the entire implementation with a nice DSL? but no, "don't touch the perfect code." Fuck you.
Spoiler: It isn't going to be nice or quick. But, she (micromanager) is looking to lose me, so that really is an easy win. for her.
And. just. argh. fuck you. i've been exhausted and dying for well over a year, but you've kept ignoring that (and still are, despite me providing goddamn legal forms from fucking doctors stating it in plain fucking english, which you also fucking ignore), and you just keep piling on the work and demanding the ridiculous of me despite it. Yeah I can pull it off sometimes. No, I really shouldn't, and I'm surprised I can. (also, "Time off? What, and lower your productivity even more? ____ doesn't even take vacations. And how are you doing on that ticket?") And no, none of my tickets have ever had any fucking problems. Not even when there are upstream service outages. Not. a. single. fucking. one. Ever. And the only things I've ever missed were things that bloody product never put in the fucking ticket, so fuck you with your "repair your reputation" bullshit.
god, i fuckiNG HATE THESESTUPOID ANWETLJAF SAJEWTKW BITCHFACEDUCKFUCKERS
Why the FUCK am I still fucking working here?
Right, because I've been burned out and dying so much I can't pass a fucking interview so I can fucking leave.
jasdkl;fk
ugh. Anyway. If you ever find yourself starting work at a Cali fintech company whose internal mascot is a very fine duck? Just run. I absolutely guarantee you will be miserable.
-
Now, instead of shouting, I can just type "fuck"
The Fuck is a magnificent app that corrects errors in previous console commands.
inspired by a @liamosaur tweet
https://twitter.com/liamosaur/...
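(If you want to try it, setup is roughly two lines - check the repo's README for the current instructions, since this may have changed:)
pip install thefuck
eval "$(thefuck --alias)"   # put this in ~/.bashrc or ~/.zshrc so the `fuck` alias exists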
Some gems:
➜ apt-get install vim
E: Could not open lock file /var/lib/dpkg/lock - open (13: Permission denied)
E: Unable to lock the administration directory (/var/lib/dpkg/), are you root?
➜ fuck
sudo apt-get install vim [enter/↑/↓/ctrl+c]
[sudo] password for nvbn:
Reading package lists... Done
...
➜ git push
fatal: The current branch master has no upstream branch.
To push the current branch and set the remote as upstream, use
git push --set-upstream origin master
➜ fuck
git push --set-upstream origin master [enter/↑/↓/ctrl+c]
Counting objects: 9, done.
...
➜ puthon
No command 'puthon' found, did you mean:
Command 'python' from package 'python-minimal' (main)
Command 'python' from package 'python3' (main)
zsh: command not found: puthon
➜ fuck
python [enter/↑/↓/ctrl+c]
Python 3.4.2 (default, Oct 8 2014, 13:08:17)
...
➜ git brnch
git: 'brnch' is not a git command. See 'git --help'.
Did you mean this?
branch
➜ fuck
git branch [enter/↑/↓/ctrl+c]
* master
➜ lein rpl
'rpl' is not a task. See 'lein help'.
Did you mean this?
repl
➜ fuck
lein repl [enter/↑/↓/ctrl+c]
nREPL server started on port 54848 on host 127.0.0.1 - nrepl://127.0.0.1:54848
REPL-y 0.3.1
...
Get fuckked at
https://github.com/nvbn/thefuck
-
My friend and I were watching a documentary about Salmon. He suddenly starts to giggle and I ask him whats up.
Friend: Why do salmon suck at version control?
Me: why?
F: because they always git push force upstream 😂😂😂 -
Moving a datacenter, it went bad.
The upstream ISP fucked us.
45 hours.
I drove home, it was the most dangerous thing I have ever done
-
First I wanna say how grateful I am that devRant exists, because my friends either don’t understand this vocab or don’t care lol.
Last week I worked on a pretty large ticket, opened a PR with 54 file changes. Just to follow standards I set the PR milestone to a future release version, but the truth is I didn’t care which version this work ended up in— I just needed it to go into the develop branch asap.
Since it was a large PR there was some expected discussion that prolonged its merging, but in the meantime I started a second branch that depended on some of the work from this branch. I set the new branch’s upstream to develop, fully expecting my PR to merge into develop, since that’s what I set the PR base to.
I completed all the work I could in the new branch, and got two colleagues to approve the initial PR so it would be merged into develop, I could add the finishing touch and get this work done seamlessly before the week was over. They approved, it got merged, I pulled develop, and… my work wasn’t there. I went to look at my PR and someone had changed the base branch to a release branch. It was my boss, who thought he was helping. (Our bosses don’t actually work on the same team as us, so he didn’t know. it’s weird. We have leads that keep track of our work instead.)
I messaged him and told him I really needed this in develop, knowing our release branch won’t be in develop for probably another week. I was very annoyed but didn’t wanna make him feel too bad so I said I’d just merge the release branch into my new branch. So many conflicts I couldn’t see straight. His response was “yeah and you’ll probably have a bunch of package manager conflicts too because that’s in that release.” He was right— I have so many package manager conflicts that I can’t even see how many compiler conflicts there are. I considered cherry picking my changes, but the whole reason I set develop as my upstream was to avoid having any conflicts since I’m working in the same functions, and this would create more.
So I could spend the next (?) days making educated guesses on possibly a thousand conflict resolutions, or I can revert my release branch merge and quietly step back and wait for the release branch to be merged into develop.
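(For the record, the two ways out look roughly like this - the hashes and branch name are placeholders:)
git revert -m 1 <merge-commit-sha>                 # undo the release-branch merge and wait it out
git checkout -b feature/take-two origin/develop    # or: start clean from develop...
git cherry-pick <oldest-of-my-commits>^..<newest-of-my-commits>   # ...and take only my commits over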
I’m sure cherry picking is the best option here but I’m genuinely too annoyed lol, and fortunately my team does not care to notice if I step back and work on something else to kill time until it’s fixed automatically. But I’m still in dire need of a rant because my entire plan was ruined by a well-meaning person who messed with my PR without asking, so here is that rant and I thank you for your time.
-
WordPress related, get ready for some disgust.
So today early in the morning my boss forwarded me an email from a client, it was about a bug, and asked me if I can have a look at it and fix it.
"Yaay, WordPress!" I thought and opened the page containing the mentioned bug. She wrote that in the italian version of the page, users can select dates in the calendar, which should be disabled, like in the german version.
So yeah, I opened the code. Everything in the function looked perfect. Really. And the Data was also correctly set in the backend of WP.
The function was only 3 lines of code:
- Get the german post ID of the current post (german or italian) by its ID (using a Polylang function)
- Get an Advanced Custom Fields field by name and from a post with the ID from before
- json_encode its content and echo it to a JS var for initialization and later use in some AngularJS.
No fucking missing semicolon, it was fucking perfect like a sunset with your soulmate.
So I tried to find the bug with my personal way of debugging:
"Shitstream Debugging"
When a creek suddenly is full of water mixed with shit, walk upstream through the turds until you reach clear water. This is where the bug is.
=> So I first looked at the HTML source: Turds.
=> Then the ACF field content: Still turds.
=> Then the ID of the german post: Shit stain and turds (var_dump: null)
=> Please god at least $post->ID? Nope, fart smell and turds.
=> Nothing more to check: Clear fucking water and the flowery smell of 99 devVirgins
So I replaced $post->ID with get_the_ID() and it worked like a charm.
Afterwards I feel stupid, but $post->ID worked all the times before...
Conclusion:
FUCK YOU WORDPRESS YOU UGLY PIECE OF HUMAN-CENTIPEDE-PROCESSED-DOGFART.
Thanks for your patience.
Only one beer was sucked dry during the writing of this fucking rant.
-
Developer confession.
I always git push a new branch even though I know it will error as there's no upstream, just to copy the full git push with set upstream arguments from the error message.
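(For what it's worth, newer git can skip that round trip - assuming git 2.37+ for the config option:)
git push -u origin my-branch                     # -u is the short form of --set-upstream
git config --global push.autoSetupRemote true    # after this, a plain `git push` sets the upstream by itself
-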
Holy shit my server survived a DNS amplification attack!
I thought my iptables rules were not very effective, since I kept seeing 1-2 ANY requests getting through my pihole (only to be ignored by the upstream cloudflare server).
Turns out, they never actually *kicked in*, until now.
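(For context: a per-source rate limit on UDP 53 is the usual shape for this kind of rule - a sketch, not necessarily the exact rules on that box, and the threshold is made up:)
iptables -A INPUT -p udp --dport 53 -m hashlimit \
  --hashlimit-name dns-flood --hashlimit-mode srcip \
  --hashlimit-above 10/sec -j DROP   # drop sources asking more than ~10 queries per second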
The craziest part is that one ip belongs to the Ministry of a country!! :O
Eat that, motherfuckers! God I love it when this shit actually works!
-
I wish my clients would stop reporting "bugs" in my app that are legitimate results calculated from crap feed to it by their upstream systems.
Just because my app is easy to use it's the first place that they find out that their ecosystem is fundamentally broken.
GIGO, people.
-
Fucking piece of shit German internet man. Some of you might know that Germany probably has the shittiest internet in the EU. And by shitty, I don't mean the downstream speeds you can get (which is how most ISPs justify their crappy network), but the GODDAMN UPSTREAM SPEEDS.
See, I'm just a student, right? I don't run a fucking company or something like that. I don't need / can't afford a symmetrical gigabit connection. But I do a lot of stuff that requires a decent upstream connection.
Fucking Unitymedia (my ISP), if I already decide to buy the goddamn "business plan" (IPv6 & static addresses), at least supply me with some decent upstream speeds. PLEASE!
My current plan costs ~45€ a month for internet and TV (I don't watch, but my two other flat-mates do).
Internet speeds are 150 Mbit/s down and FUCKING 10 Mbit/s up! What??! What the hell am I supposed to do with only 10 Mbit/s?? I'm already completely exhausting the bandwidth and I'm not even done setting everything up! Fucking hell...
I was planning on getting their "upload package" to get at least 20 Mbit/s up – but they removed that option! IT'S GONE, PEOPLE! They said in an interview last year that "customers are not interested in higher upload speeds" and consequently removed that option. WHAT???
"You wanna have state-of-the-art downstream speeds of 400 Mbit/s? Here you go. Oh, our maximum limit of 10 Mbit/s upstream is not enough for you? TOO FUCKING BAD, NOTHING THAT WE CAN OFFER YOU!"
(Seriously though, the best customer internet plan is 400D & 10U)
Goddamn... in this day and age of things like cloud storage etc. even "normal" people definitely need higher upload speeds.
Man, this rant got so long, but I really wanted to get this out. This wasn't even everything though, maybe I'll make a separate rant to elaborate on other issues.
If you are interested, you might want to read up on the following report:
https://speedtest.net/reports/...
-
No one fucking knows how to handle/raise errors.
I feel like this is the least talked-about topic in the whole fucking programming industry. This shit needs to be taught even more than the fucking SOLID, DRY, KISS, YAGNI and other kinds of buzzwords that fancy devs love tossing left and right.
Basically everyone just does "whatever you dumb error just dont bother me". They will just log/return null/ignore the errors and be in their oblivion with bugs propagating upstream the call stack.
"Throwing errors you say? Ew, why do you want to produce more errors?". Yeah, right, just stick another log/return null/or ignore the fact that the monke calling your function with bullshit arguments.
"But bro it's so difficult and time consuming and it would never happen!" Yes, you fucker! Yes! Programming IS fucking difficult if you want reliable systems! Did you not know that!? Well now you do! Go and fucking learn it!
FUCK!11!1!!
-
!rant
I'm freaking tired of telling colleagues at work not to create feature branches in upstream and to use their fork instead.
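The whole dance is four commands (repo URLs below are placeholders):
git clone git@github.com:me/project.git                   # your fork becomes `origin`
git remote add upstream git@github.com:org/project.git    # the shared repo is `upstream`, read-only for you
git checkout -b feature/my-change                          # branch locally
git push -u origin feature/my-change                       # push to YOUR fork and open the PR from there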
Turns out idiots can't recognize the difference between a forked repo and the upstream.
-
I was asked to look into a site I haven't actively developed since about 3-4 years. It should be a simple side-gig.
I was told this site has been actively developed by the person who came after me, and this person had a few other people help out as well.
The most daunting task in my head was to go through their changes and see why stuff is broken (I was told functionality had been removed, things were changed for the worse, etc etc).
I ssh into the machine and it works. For SOME reason I still have access, which is a good thing since there's literally nobody to ask for access at the moment.
I cd into the project, do a git remote get-url origin to see if they've changed the repo location. Doesn't work. There is no origin. It's "upstream" now. Ok, no biggie. git remote get-url upstream. Repo is still there. Good.
Just to check, see if there's anything untracked with git status. Nothing. Good.
What was the last thing that was worked on? git log --all --decorate --oneline --graph. Wait... Something about the commit message seems familiar. git log. .... This is *my* last commit message. The hell?
I open the repo in the browser, login with some credentials my browser had saved (again, good because I have no clue about the password). Repo hasn't gotten a commit since mine. That can't be right.
Check branches. Oh....Like a dozen new branches. Lots of commits with text that is really not helpful at all. Looks like they were trying to set up a pipeline and testing it out over and over again.
A lot of other changes including the deletion of a database config and schema changes. 0 tests. Doesn't seem like these changes were ever in production.
...
At least I don't have to rack my head trying to understand someone else's code but.... I might just have to throw everything that was done into the garbage. I'm not gonna be the one to push all these changes I don't know about to prod and see what breaks and what doesn't break.
I feel bad for whoever worked on the codebase after me, because all their changes are now just a waste of time and space that will never be used.
-
How the hell does a PR containing production secrets and private keys get 3 approvals and get merged upstream? 😬 🥴
-
Enough is enough! I can't do it anymore!
...
alias pm='python manage.py'
alias ga='git add .'
alias gc='git commit -a'
alias gi='git init && touch .gitignore && printf ".idea \n venv \n node_modules \n out \n *.iml \n *.log \n build \n target" > .gitignore'
alias gp='git push'
alias gps='git push --set-upstream origin master'
alias gr='git remote add origin'
...
Much better :D
-
Working on a custom Chromium OS board at the moment
So boards in Chromium OS are specialized versions of Chromium OS built for a specific hardware while maintaining upstream compatibility.
I built a board specifically to get as near to CloudReady's compatibility table as possible, so this is what I have at the moment:
* Most hardware works (Libinput)
* Still working on supplying Nvidia drivers using nouveau (Google insists using OSS drivers, we can't use NVIDIA drivers)
* Still working on making Crostini GPU-enabled by default so I don't always terminate it via vmc
* ARC works as per the open sourced Android Runtime but I need help making the Play Store work
Overall its a bit stable but if anyone's down I'll replicate it on a GitHub repository and I'll let everyone contribute their changes. The aim of the custom board is to:
* Make it work on most hardware possible
* Add android support with APK installation (FydeOS has this but I can't replicate it in CrOS).
* Produce a build close to Chrome's release channel.
Here's a screenshot of me using it, it works but I'll need to start over from scratch to make it more contributable
-
EoS1: This is the continuation of my previous rant, "The Ballad of The Six Witchers and The Undocumented Java Tool". Catch the first part here: https://devrant.com/rants/5009817/...
The Undocumented Java Tool, created by Those Who Came Before to fight the great battles of the past, is a swift beast. It reaches systems unknown and impacts many processes, unbeknownst even to said processes' masters. All from within it's lair, a foggy Windows Server swamp of moldy data streams and boggy flows.
One of The Six Witchers, the Wild One, scouted ahead to map the input and output data streams of the Unmapped Data Swamp. Accompanied only by his animal familiars, NetCat and WireShark.
Two others, bold and adventurous, raised their decompiling blades against the Undocumented Java Tool beast itself, to uncover it's data processing secrets.
Another of the witchers, of dark complexion and smooth speak, followed the data upstream to find where the fuck the limited excel sheets that feeds The Beast comes from, since it's handlers only know that "every other day a new one appears on this shared active directory location". WTF do people often have NPC-levels of unawareness about their own fucking jobs?!?!
The other witchers left to tend to the Burn-Rate Bonfire, for The Sprint is dark and full of terrors, and some bigwigs always manage to shoehorn their whims/unrelated stories into a otherwise lean sprint.
At the dawn of the new year, the witchers reconvened. "The Beast breathes a currency conversion API" - said The Wild One - "And it's claws and fangs strike mostly at two independent JIRA clusters, sometimes upserting issues. It uses a company-deprecated API to send emails. We're in deep shit."
"I've found The Source of Fucking Excel Sheets" - said the smooth witcher - "It is The Temple of Cash-Flow, where the priests weave the Tapestry of Transactions. Our Fucking Excel Sheets are but a snapshot of the latest updates on the balance of some billing accounts. I spoke with one of the priestesses, and she told me that The Oracle (DB) would be able to provide us with The Data directly, if we were to learn the way of the ODBC and the Query"
"We stroke at the beast" - said the bold and adventurous witchers, now deserving of the bragging rights to be called The Butchers of Jarfile - "It is actually fewer than twenty classes and modules. Most are API-drivers. And less than 40% of the code is ever even fucking used! We found fucking JIRA API tokens and URIs hard-coded. And it is all synchronous and monolithic - no wonder it takes almost 20 hours to run a single fucking excel sheet".
Together, the witchers figured out that each new billing account were morphed by The Beast into a new JIRA issue, if none was open yet for it. Transactions were used to update the outstanding balance on the issues regarding the billing accounts. The currency conversion API was used too often, and it's purpose was only to give a rough estimate of the total balance in each Jira issue in USD, since each issue could have transactions in several currencies. The Beast would consume the Excel sheet, do some cryptic transformations on it, and for each resulting line access the currency API and upsert a JIRA issue. The secrets of those transformations were still hidden from the witchers. When and why would The Beast send emails, was still a mistery.
As the Witchers Council approached an end and all were armed with knowledge and information, they decided on the next steps.
The Wild Witcher, known in every tavern in the land and by the sea, would create a connector to The Red Port of Redis, where every currency conversion is already updated by other processes and can be quickly retrieved inside the VPC. The Greenhorn Witcher is to follow him and build an offline process to update balances in JIRA issues.
The Butchers of Jarfile were to build The Juggler, an automation that should be able to receive a parquet file with an insertion plan and asynchronously update the JIRA API with scores of concurrent requests.
The Smooth Witcher, proud of his new lead, was to build The Oracle Watch, an order that would guard the Oracle (DB) at the Temple of Cash-Flow and report every qualifying transaction to parquet files in AWS S3. The Data would then be pushed to cross The Event Bridge into The Cluster of Sparks and Storms.
This Witcher Who Writes is to ride the Elephant of Hadoop into The Cluster of Sparks an Storms, to weave the signs of Map and Reduce and with speed and precision transform The Data into The Insertion Plan.
However, how exactly is The Data to be transformed is not yet known.
Will the Witchers be able to build The Data's New Path? Will they figure out the mysterious transformation? Will they discover the Undocumented Java Tool's secrets on notifying customers and aggregating data?
This story is still afoot. Only the future will tell, and I will keep you posted.
-
Submitting a pull request seems a good way to end the week. That's what I love about (some) projects on GitHub; instead of just ranting about bugs or out-of-sync documentation, you can fix the problem and send a pull request upstream. :-)
-
I get an email about an hour before I get into work: Our website is 502'ing and our company email addresses are all spammed! I login to the server, test if static files (served separately from site) work (they do). This means that my upstream proxy'd PHP-FPM process was fucked. I killed the daemon, checked the web root for sanity, and ran it again. Then, I set up rate limiting. Who knew such a site would get hit?
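(The rate limiting itself is only a couple of nginx directives - a sketch with made-up numbers, dropped in as an extra include:)
cat > /etc/nginx/conf.d/ratelimit.conf <<'EOF'
limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;
EOF
# then add `limit_req zone=perip burst=20 nodelay;` inside the location that proxies to PHP-FPM
nginx -t && nginx -s reload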
Some fucking script kiddie set up a proxy, ran Scrapy behind it, and crawled our site for DDoS-able URLs - even out of forms. I say script kiddie because no real hacker would hit this site (it's minor tourism in New Jersey), and the crawler was too advanced for joe shmoe to write. You're no match for well-tuned rate-limiting, asshole!
-
So, I work in a game development studio, right?
We're trying to launch the title on as many platforms as reasonable, because as a social VR app we're kinda rowing upstream.
So far, Steam and Oculus have been fairly reasonable, if oddly broken and inconsistent.
Enter store 3.
Basically no in-game transaction support (our asking prompted them to *start* developing it. No, it's not very complete). No patch-update system (You want an update? Gotta download the whole fsckin' thing!). No beta-testing functionality for most of their stuff ("Just write the code like the example, it will work, trust us!"). No tools besides the buggy SDK (Wanna upload that new build? Say hello to this page in your web browser!).
So, in other words: Fun.
We've been trying to get actively launched for two months now. Keep in mind that the build has been up on Steam and Oculus for over a year and half a year (respectively), so the actual binary functionality is, presumably fine.
The best feedback we get back tends to be "Well, when we click the Launch button it crashes, so fail."
Meanwhile we're going back and forth, dealing with other-side-of-the-world timezone lag, trying to figure out what is so different from their machines as ours. Eventually we get them to start sending logs (and no, Windows Event logs are not sufficient for GAMES, where did you even get that idea????) except the logs indicate that the program is getting killed so terribly that the engine's built-in crash handler can't even kick in to generate memory dumps or even know it died.
All this boils down to today, where I get a screenshot of their latest attempt.
I just can't even right now.
-
so... not really a rant because i'm happy to be in the long-term zenlike state where i don't really give a fuck about anything anymore, but...
so today's my birthday (thanks in advance for all the semi-mandatory "cheers" reactions and such)
the agency i do temp jobs through sends money weekly (for the one week back) (which is the main and only reason i use them). they arrive at friday 12:25, so that's when i know to go "check" by withdrawing it, and it's also awesome because it's the best time to provide funds to reward myself (by booze/weed) at the end of the week.
last week, nothing came in. i called them and learned it was due to the contact person in the company i did job in being too late on sending the agency list of people who showed up at the work, i was told it's gonna arrive one week later together with the proper payment for the week-1,so effectively i was one week without any money (literally), but on the next week double was going to arrive, which is nice.
that next week of double was now. i found out that no double arrived, only single-value payment. i called them to ask why.
i was told that what arrived was the late payment, and the dude in company was again late with sending the presence list, so the other payment, for the proper week's work, will be a week late again.
so... that kinda ruined my financial planning for the week that's going to happen.
i guess my point (if i have any) is... funny how when someone fucks up, there's nobody for me to be angry at and hold responsible in any way, but when i have delays in my work due to delays upstream, nobody gives a shit about my excuses and it's my fault and i should have compensated, it was my responsibility and duty, and me not doing it (to my own detriment, for someone else) is me failing.
funny how the subjective dynamics of the world always somehow works out in a way where everyone else fucks up and i either have to suck it up and be okay with it otherwise i'm a selfish unreliable entitled asshole, or suck it up and extinguish their fire for them, otherwise i'm a selfish unreliable entitled asshole XD
anyone else noticed this in their life?
how does it work? what is the factor that decides whether you're in the "suck it up" class or the "fuck it, someone else will suck it up" class?
doesn't seem to (just) be the money(flow), i've seen this thing happen even in situations where the money/client dynamics were flowing the opposite way to what would be natural for the shit fall direction.
-
Spent the whole day debugging two bugs in libraries only to notice that they were already fixed upstream FUCKING MONTHS AGO WITHOUT A NEW VERSION PUBLISHED GIVE ME BACK MY TIME FFS
-
Elasticsearch, from the bottom of my heart...
How can one ecosystem be so batshit crazy inconsistent?
Seemingly every agent does the same (e.g. filebeat vs journalbeat vs packetbeat)… yet there are subtle changes in configuration everywhere.
Plus YML. The most shitty markup language one can use and the cockslubbing durps used it fucking everywhere.
Such fun having complex stuff that requires a Python Jinja-to-JSON-to-YML converter just to be able to write it without getting the fucking migraine of counting whitespace with both hands like a stupid 4 year old...
To make it even more absurd: the ingest pipelines which contain a lot of regular expressions / grok and are thus very prone to quoting issues... Yes. Let's do this in YML too.
If you need to add a fucking manual section on how to debug YML errors, you should have realized what a fucking stupid idea it was, morons.
Now I have the joy of having a python script regex quoting the shit for a Jinja template which then generates JSON which then generates YML.
Why the JSON part?
Yeah... Because ECS and changes in the upstream YML files / GitHub.
To be able to run diffs in a sane way, because in YML distinguishing things is pretty much impossible, so JSON as an intermediary format solely for the purpose of converting upstream YML to JSON to diff it against modified JSON ingest pipelines downstream.
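(At least the YML-to-JSON leg is a one-liner if PyYAML is lying around - file names here are made up:)
python3 -c 'import sys, json, yaml; print(json.dumps(yaml.safe_load(open(sys.argv[1])), indent=2, sort_keys=True))' upstream-pipeline.yml > upstream.json
python3 -c 'import sys, json, yaml; print(json.dumps(yaml.safe_load(open(sys.argv[1])), indent=2, sort_keys=True))' our-pipeline.yml > ours.json
diff -u upstream.json ours.json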
I fucking hate elasticsearch
-
Ok, first rant, about my struggles getting reliable internet over the past 6 years. It's not too interesting of a topic, but here we go:
I'm living in a more rural part of Germany and internet here is shit. I pay more than 50 bucks a month for 700kb/s downstream (let's just not talk about upstream...), which is meh by itself but it gets worse. Before this I had roughly 230kb/s downstream using DSL. My provider came out with a new oh-so-fucking-fancy solution for giving people faster internet without upgrading their lame ass fucking backbone and POS infrastructure from 70 years ago: they sell you hybrid internet which combines your shit DSL and an LTE connection using TCP Multicast. Not only do I get only 6 of my promised (and payed for) 50 Mbit, no, It's also a fucking piece of nonworking shit!!!
Let me illustrate:
You constantly have problems with web content (or any remote content) not loading because the host server does not support TCP Multicast. It either refuses connection altogether or it takes about 30-50 seconds to establish a connection. Think about your life when it takes two or three fucking minutes to load 5 YouTube thumbnails or load new tweets at the bottom of the Twitter page! Also, you never know if you a) have an error in your implementation of a new API or if b) the remote host doesn't support TCPMC (there's never an error for that! Fuck you!), your SSH sessions ALWAYS drop in the most inopportune fucking moments because the LTE thing lost connection, you always have to turn on a VPN if you want to visit specific websites (for example your school's website) and so on....
Oh and also, my provider started throttling specific services again these days with Netflix and YouTube struggling to display 240p, fucking 240p video without buffering when I get 600kbit down on steam (ofc the steam download is paused when watching videos). When using a VPN, YouTube 720p and Netflix HD work like a charm again. Fucking Telekom bastards
Then there is the problem with VPNs. The good thing about them is that they solve all the TCP Multicast problems. Yay. Now for the bad things:
First of all, as soon as I use a VPN, access times to remote go up by like fucking 500%. A fucking DNS lookup takes 8-15 seconds!!! The bandwidth is there but it takes forever.. because reasons I guess. Then the speed drops to DSL speeds after a while because the router turns off my LTE connection when it is unused and it does not detect VPN traffic as traffic (again because... Reasons?) And also, the VPN just dies after an hour and you have to manually reconnect (with every VPN provider so far)
And as if that wasn't enough, now the lan is dying on me, too, with the router (the fucking expensive hybrid piece of shit, 230 bucks..) not providing DHCP service anymore or completely refusing all wifi connections or randomly dropping 5Ghz devices, or.....
You get the point.
The worst thing is, they recently layed down 400mbit fiber in my neighborhood. Guess where the FUCKING PIECE OF SHIT CABLE ENDS??? YEAH, RIGHT IN FRONT OF MY NEIGHBORS HOUSE. STREET NUMBER 19 IS SERVED WITH 400MBIT AND MY HOME, THE 20, IS NOT IN THEIR FUCKING SERVICE REGION. Even though there is a fucking cable with the cable companies name on it on my property, even leading up to my house! They still refuse to acknowledge it! FUCK YOU!!!!
Well anyways thanks for reading. Any of you got the same problems? :/
-
I used to think that I had matured. That I should stop letting my emotions get the better of me. Turns out there's only so much one can bottle up before it snaps.
Allow me to introduce you folks to this wonderful piece of software: PaddleOCR (https://github.com/PaddlePaddle/...). At this time I'll gladly take any free OCR library that isn't Tesseract. I saw the thing, thought: "Heh. 3 lines quick start. Cool.", and the accuracy is decent. I thought it was a treasure trove that I could shill to other people. That was before I found out how shit of a package it is.
First test, I found out that logging is enabled by default. Sure, logging is good. But I was already rocking my own logger, and I wanted it to shut the fuck up about its log because it was noise to the stuff I actually wanted to log. Could not intercept its logging events, and somehow just importing it set the global logging level from INFO to DEBUG. Maybe it's Python's quirk, who knows. Check the source code, ah, the constructor gives a `show_log` arg to control logging. The fuck? Why? Why not let the user opt into your logs? Why is the logging on by default?
But sure, it's just logging. Surely, no big deal. SURELY, it's got decent documentation that is easily searchable. Oh, oh sweet summer child, there ain't. Docs are just some loosely bundled together Markdowns chucked into /doc. Hey, docs at least. Surely, surely there's something somewhere about all the args to the OCRer constructor somewhere. NOPE! Turns out, all the args, you gotta reference its `--help` switch on the command line. And like all "good" software from academia, unless you're part of academia, it's obtuse as fuck. Fine, fuck it, back to /doc, and it took me 10 minutes of rummaging to find the correct Markdown file that describes the params. And good-fucking-luck to you trying to translate all them command line args into Python constructor params.
"But PTH, you're overreacting!". No, fuck you, I'm not. Guess whose code broke today because of a 4th number version bump. Yes, you are reading correctly: My code broke, because of a 4th number version bump, from 2.6.0.1, to 2.6.0.2, introducing a breaking change. Why? Because apparently, upstream decided to nest the OCR result in another layer. Fuck knows why. They did change the doc. Guess what they didn't do. PROVIDING, A DAMN, RELEASE NOTE. Checked their repo, checked their tags, nothing marking any releases from the 3rd number. All releases goes straight to PyPI, quietly, silently, like a moron. And bless you if you tell me "Well you should have reviewed the docs". If you do that for your project, for all of your dependencies, my condolences.
Could I just fix it? Yes. Without ranting? Yes. But for fuck sake if you're writing software for a wide audience you're kinda expected to be even more sane in your software's structure and release conventions. Not this. And note: The people writing this, aren't random people without coding expertise. But man they feel like they are.
-
security fiasco due to a malicious npm package:
Because of a bitcoin miner present in event-stream npm module (https://bleepingcomputer.com/news/...), my entire team and I had to scan all our nodejs apps, repos and the most excruciating one, all node_modules folders across all our dev machines and servers, to see if event-stream and flatmap-stream is present, then not just delete it but update a bu**load of upstream dependencies which internally used event-stream. All due to one malicious package which was hidden several layers beneath.
And, this happened almost 8 months after the aforesaid vulnerability was first found.
-
What a satisfying and yet freaking scary thing when you run
# git checkout -b feature/lostHoursWorkingOnThis
# git reset --hard <commit from 3 months ago>
# git push upstream master --force
Just when you're at the final sign off of a major change and the business goes "nope, we want to make even more changes before we sign off on this"
🤷♂️out of scope gone wrong!!
Well fuck you, I got shit to deploy, all this work can get out of my way! -
Got a PROD issue caused by missing data from upstream. Told them about it last week. 3 days of no replies.
Finally got a reply that they will look into it. It's been almost a week since I first told them but no responses and no solution yet.
I couldn't fall asleep because I guess I drank too much coffee but also cuz the thought of this pissed me off.... So got up and sent a work email basically threatening to escalate this all the way up, put their asses in hot water.
Though not sure if I can but.... Maybe I can take a disability leave for needing to mentally recover from going mad at other people that can't do their jobs but aren't fired...
-
I begin with the optimism and the joy that I am creating something new that will improve people's lives.
I listen to the user and analyze the current process in depth.
I try to suggest additional value to the system for the users consideration. Sometimes they do not realize we can improve 10x rather than 2x.
I learn what the users goals are and what they want out of the system. We think about reports and downstream value. Sort of working from the end to the beginning (data ingests and upstream processes that will feed the system).
After the user signs off on the requirements and deliverables and I have a realistic project plan I begin to code.
It works and has worked for me every time for a long long time. -
Arch has a great default package manager, and it's the basis for why I love Arch as much as I do.
A completed install is pretty minimal, and as a user who knows what apps I want, that's perfect for me. When I've used any other major distro of late, my post-install activity mostly consisted of removing software, changing defaults, and otherwise swimming upstream against the intent of the distro's maintainers.
With Arch, I start with a more or less blank slate, and then add the components I want to it. It's so intensely satisfying to have a system that is composed almost entirely of software I explicitly wanted to have.
The result is a system that behaves pretty much exactly the way I want.
Any other Arch users want to weigh in on what they like about it?
-
TEAM MATE: let me sync my branch with upstream master
*starts typing*
git pull upstream master
ME: Nooooooo!!!!
-
Every day is a new day to doom us all.
React Lua
A comprehensive, but not exhaustive, translation of upstream ReactJS 17.x into Lua.
https://github.com/jsdotlua/...
-
Udemy got called out for stealing YouTubers' content again. Last time it was Tory, the guy in charge of Microsoft security in Australia. (Guess they picked the wrong target)
This time it's Chris, who makes some awesome free YouTube Python tutorials.
https://dailydot.com/upstream/...
-
In production, whenever nginx can't find an upstream it will display a static 'maintenance' html file that tells the user to come back later
In development, it shows clippy :D -
Every layout goal must take hours of frustrating intuition-destroying trial and error, followed by documentation cross-examination, MRE building, upstream bug-filing, and workaround pursuits.
https://jsfiddle.net/uz5dr8h4/21/
But no, CSS doesn't suck, you're just bad at it.
-
Textexpander. Ggpu = git push upstream, gg. = git add ., and ggc = git commit -m "" ... I love that I don't have to type out my whole damn name, username, email and work email all the time. Just expanding my email address is enough of a win for me with that tool. Also Alfred + utf symbol workflow. And newest addition - vimium to easily pin tabs.
-
Today, I found this in our upstream repo. How the fuck does it even get merged?
No use of tempfile, instead of idiomatic it's idiotic. I have changed that system, now I am hoping I don't have to refactor this shit.
-
Are sql joins a bad practice? :o
I recently did some work on a page for a site I've never worked on cause my boss told me to. So they recently added product detail video urls to a table that has a relationship to the products table. The existing code was querying for the products on that said page and then, during the loop that was outputting the products, there was another query for getting the url for the current iteration/product. Told my coworker that this imo was a pretty inefficient way to do it and switched it to a join and did 1 query then output that, but his words were "The way it is now maybe inefficient in your opinion but it works. Also combining inner joins with left or right is not a good practice. If the data is changed upstream the entire query would need to be redone to accommodate the change". Mind you that they query views a lot which are all made from queries that use joins and I'm also pretty sure these views were written by someone who used to be here because these guys are not good at sql or at least that's what their queries show. I'm at the point now where I'm realizing that my boss and this other guy don't give a fuck about efficiency or doing things the right way, they just want it "to work". So this coworker changed my query back to the way it was because he said it broke the shopping cart even though that was already broken when I started... What is life? Maybe I'm the stupid one?
-
I had the idea that part of the problem of NN and ML research is we all use the same standard loss and nonlinear functions. In theory most NN architectures are universal approximators. But there's a big gap between symbolic and numeric computation.
But some of our bigger leaps in improvement weren't just from new architectures, but entire new approaches to how data is transformed, and how we calculate loss, for example KL divergence.
And it occurred to me all we really need is training/test/validation data and with the right approach we can let the system discover the architecture (been done before), but also the nonlinear and loss functions themselves, and see what pops out the other side as a result.
If a network can instrument its own code as it were, maybe it'd find new and useful nonlinear functions and losses. Networks wouldn't just specify a conv layer here, or a maxpool there, but derive implementations of these all on their own.
More importantly with a little pruning, we could even use successful examples for bootstrapping smaller more efficient algorithms, all within the graph itself, and use genetic algorithms to mix and match nodes at training time to discover what works or doesn't, or do training, testing, and validation in batches, to anneal a network in the correct direction.
By generating variations of successful nodes and graphs, and using substitution, we can use comparison to minimize error (for some measure of error over accuracy and precision), and select the best graph variations, without strictly having to do much point mutation within any given node, minimizing deleterious effects, sort of like how gene expression leads to unexpected but fitness-improving results for an entire organism, while point-mutations typically cause disease.
It might seem like this wouldn't work out the gate, just on the basis of intuition, but I think the benefit of working through node substitutions or entire subgraph substitution, is that we can check test/validation loss before training is even complete.
If we train a network to specify a known loss, we can even have that evaluate the networks themselves, and run variations on our network loss node to find better losses during training time, and at some point let nodes refer to these same loss calculation graphs, within themselves, switching between them dynamically..via variation and substitution.
I could even envision probabilistic lists of jump addresses, or mappings of value ranges to jump addresses, or having await() style opcodes on some nodes that, upon being encountered, queue up ticks from upstream nodes whose calculations the await()ed node relies on, to do things like emergent convolution.
I've written all the classes and started on the interpreter itself, just a few things that need fleshed out now.
Heres my shitty little partial sketch of the opcodes and ideas.
https://pastebin.com/5yDTaApS
I think I'll teach it to do convolution, color recognition, maybe try mnist, or teach it step by step how to do sequence masking and prediction, dunno yet.
-
Spent last 2 days trying to get an upstream data file loaded. I've now concluded it's just corrupted during transfer beyond repair... But I got to practice lots of Linux commands trying to figure out what the issue was and fix it (xml parser was throwing some error about nulls originally)
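(In case anyone has to do the same triage, the checks boil down to something like this - file name is a placeholder:)
gunzip -t dump.xml.gz                       # if it arrived gzipped, test the archive integrity first
tr -cd '\000' < dump.xml | wc -c            # count the NUL bytes the XML parser was choking on
tr -d '\000' < dump.xml > dump.clean.xml    # strip them out
xmllint --noout dump.clean.xml              # xmllint (libxml2) then says whether it's even well-formed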
vi, grep, head, tail, sed, tr, wc, nohup, gzip, gunzip, input output redirection -
Business managers not taking ownership for quality of data. If systems are not designed with proper data validation controls at each upstream data entry point, then downstream processes and revenue will suffer. You will have a continuous data cleanup black hole.
-
The bipolar nature of leading a startup tech team:
It's either no one needs you, or the problem is big enough to have you push 5 fundamental fixes from abyss to upstream.
-
Hehehehe, you would entertain yourself to some Debian downstream packager drama, wouldn't you? 🕺💃🔥🔥🔥
https://fosstodon.org/@keepassxc/...
-
What's the best-supported Linux distro to install for AMD Threadripper?
I know that upstream Kernel 4.15 has support for it, so that narrows it down a bit to the more bleeding-edge options or rolling distributions like Arch. I wonder if others have experience with that.
-
I really hate all kinds of tattle that sweeps the hallways of corporations, the gossip behind one's back, BUT this colleague of mine is starting to piss me off. He recently joined our team, where he's supposed to support us in getting the Agile thing going. And he can go on for hours about how it should go and how flawlessly it worked in his previous company - all that needless meta talk - so much that a team member jokingly even said: yeah, shut up asshole. But he is all talk. When the name of a library was dropped his experience in using it went to upstream patches. His Linux experience leaves us speechless. He is so convincing, I'm even doubting my accusations. Yet his only contribution in code wouldn't show, and another team member wasted hours upon hours recompiling plugins to show that shit. Man, just leave us alone watching your youtube live-streams so we can get the shit done.
-
Current directory:
upstream
potatoecode
find ./upstream -mindepth 1 -maxdepth 1 -type d -printf '%f\n' >> potatoecode/.gitignore   # dump upstream's top-level dir names into the other repo's .gitignore
pushd potatoecode
git add .gitignore
git commit -m 'Updated gitignore' .gitignore
git rm -r --cached .
git add .
git commit -am "Purgatory"
popd
*watching with a big smile the burning CI*
--
Story of how I made some devs today very sad. They now have the joyful task to think of a better way to code than to create a nightmare blob of modified source code from upstream - where upstream has ...
- a rest API
- an extension / plugin system
- a system to even modify the db schema via an API.
But nooooo.... That would be too good.
Instead one just creates a potatohead of upstream source code with modifications without any version tracking or stuff like that.
Sometimes I really wonder if the devs at our company are masochists and want to be punished....
-
-Me late at night facing a deadline-
Git: Hey there's no upstream bran---
Me: WUT? I don't even know what ... sure whatever just do your thing git!!!!
Git: YOLO!
-
I have called for a meeting with my manager's manager, expressing concerns and asking for a role change inside the company.
How should I approach this?
My current project is some IoT stuff being built on the cloud.
The role that I was recruited for and the one I am currently doing are very different, thanks to the TPTB who suddenly decided some other team in a different country (let's call them B) should take on that role.
I see a lot of trash work assigned to my team that is a consequence of a lack of understanding of the cloud stuff by people upstream and of not automating steps in the engineering process like build, test, deploy (which was part of my initial role description), and I'm not liking my current role. But my manager doesn't give a damn.
He is just happy to be involved in the project.
I feel like I am having leftovers from a fancy restaurant in spite of having enough money to dine well in the same hotel.
When I bring up concerns like the lack of automation, cost savings in the cloud, or improved security configurations to my manager, he doesn't seem to care and isn't voicing them upstream. If I bring up these topics in any discussion where people outside my team are also there, then I am quickly sidelined.
The rest of my team also don't seem to care. They just don't want to stand up and take responsibility.
-
When you suggest a developer meeting on design patterns and your technical director says (seriously) "I used to teach people everything you need to know about design patterns in 20 minutes - there's only one question - 'should we do it?' and the answer is always 'yes'"
-
Note to self: *ALWAYS* pull updates from upstream before starting to add or fix stuff in a GitHub project. Or any other collaborative project for that matter.
Now, it's time to slap myself for forgetting... -
So I'm assigned once again to fix a new bug someone else created, and that seems to be the case whenever there's an issue...
Boss just assigns it to whoever is most likely to be able to investigate it... which is basically me. Other than the little time I can use to develop stuff, I'm usually cleaning up other people's messes.
And these other people are too busy working on new crap to properly explain how their existing code/processes/changes work.
And of course, when anything breaks in production (that's not due to upstream one-off issues), whoever wrote it doesn't think he needs to take responsibility for it.
So everyone else and especially me has to spend time understanding the shit they wrote and fixing it for them.
How do I tell my boss this nicely that we need clearly defined ownership, and that whenever a component blows up in prod, the guy that wrote the code fixes it no matter what? Thereby incentivizing him to not write shit code in the first place and be more proactive in making sure it doesn't blow up in the first place, since he knows otherwise he's doing overtime to fix it?
Is it just me or is there really no such thing as a dev job where something doesn't blow up due to poorly tested and designed code every other day?
-
I botched up my forked repository during the merge phase. So I had to clean up my Fork and restart from Upstream 😤😤😤
-
So… C++ seems kinda ok, as long as you don’t use like 80% of the language :-)
This is me making concessions when a library I really want to try (Dear ImGui) is written in C++…
(yes, I know about cimgui, but for some reason I wanna learn upstream instead of generated bindings…)
-
For me, developing phone networks, the Client only interacts with me as the front end. If there is a problem with an upstream provider/carrier, the client doesn't know that and we take the brunt of the complaints.
So, what really sucks is when there is a lack of control. -
https://i.imgflip.com/2i02zy.jpg
git branch -r
origin/204/match-dsteem-on-sign-transaction
origin/305-support-hive-legacy-api
origin/307-call-async
origin/72-http-socket-support
origin/HEAD -> origin/dev
origin/appbase-http
origin/chore/fix-ws
origin/default-server
but
git push --follow-tags https://github.com/lopudesigns/... --set-upstream origin dev
fatal: refs/remotes/origin/HEAD cannot be resolved to branch.
wut
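(Best guess at what was intended: `git push` takes a single repository argument, so passing the URL *and* `origin dev` makes git treat `origin` and `dev` as refspecs, hence the weird HEAD error. Probably this was meant:)
git push --follow-tags --set-upstream origin dev
-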
For a little background on the sort of stuff I'm dealing with, check out my last rant.
Anyways, I'm testing this pipeline at work and was just reminded of the fucktarded way a "software engineer", who had a bachelor's biology degree, decided to handle a json file.
The script in question is loading a json file containing an array of objects. The script is written in perl. There's a JSON module. Use that? Fuck no! Let's rather perform an in-place sed command on the file substituting the commas separating objects in the array with newlines, then proceed to read the file line-by-line and parse out the tokens manually. Mind you, in the process of adding the newlines he didn't keep the commas, so now all of these json files his bullshit handled are invalid json that cannot be parsed.
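(For contrast, slurping and decoding the whole file with the core JSON::PP module is a one-liner - the file name here is made up:)
perl -MJSON::PP -0777 -nE 'my $records = decode_json($_); say scalar(@$records), " objects parsed";' data.json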
The dumb ass was lucky the data in the file is always output upstream as a single line and the tokens for each object are always in the same order, so that never led to problems. But now, months later after I fixed his stupidity I am being reminded of it again as I'm testing and debugging some old projects as part of regression testing new changes I'm making.
TL;DR Fuck dumbwit motherfuckers who can't even google search "parsing a json file" and do literally anything that is less fucktarded than manually parsing a json file.
-
Apparently I just realized we've been migrating our system in the wrong order....
The cross systems dependencies are like spaghetti code....
Data Flow: Old -> Upstream (that I need) -> Old -> New
So in order to migrate a feature to the new system... I still need our old system... indirectly...
WTF?!?!?!
I thought Topological Sort was a topic taught in CS... and everyone but me were CS graduates....
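(It's even a coreutils one-liner to check: feed the dependency edges to tsort and it either prints a valid order or complains about the loop - which is exactly what this graph does:)
printf '%s\n' "Old Upstream" "Upstream Old" "Old New" | tsort   # warns that the input contains a loop (Old <-> Upstream) instead of giving a clean order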
How the fuck did they screw this up?!?!?! -
I'm amazed how much git f***ed up my project since I opened it the last time.
It just overwrote my local project with the Upstream, but also deleted my local classes completely. 2 days' worth of coding almost gone.
-
f***ed with Nginx + php7.0 FPM
connect() to unix:/run/php/php7.0-fpm.sock failed (11: Resource temporarily unavailable) while connecting to upstream
#socket_vs_tcpip -
When we take user stories into sprints but dependencies are not resolved, I have to wait till day n-1 for the upstream team to go to UAT and then push half my tasks to the next sprint for testing \ prod. Fuck the planners - _-|-_-¦
-
So I was building opencv some time back.
Nice enough package, like most Python-linked packages I'm finding, though I know you can use it via C and it's meant to be, but why would you want to? It contains a whole bunch of half-finished crap that is actually useful in part, including the capacity to tear apart video files and manipulate frames one at a time and then rewrite them back to a file. About the only lib that's easy to use that I saw that does that. Hell, I can even compose my own video frames. Also the only other lib I saw that does that thus far.
so...
I post a bug, because of FUCKING CMAKE NOT WORKING. Not conforming with the well thought out build environment that most GNU style c packages use.
you know like when you need an upstream source package to build the code, or a downgraded package to build the code and don't want to fuck up your host environment so you have to specify a bunch of lib paths and the like so that ld and gcc work correctly etc etc etc from your custom build location and so you can later use these same values to find the compiled lib and build software against it.
fucker closes my ticket saying I hijacked the C environment................
no.
it's because cmake sucks.
they're using, and I don't know why, a module specifically written to find libtiff.
specifically written but doesn't find the only source on my system that provides tiff which my env variables point directly to !!!!
lazy fucking cocksuckers !
I want to code a solution to this issue.
something that translates ac files and am files and cmakelists into something intelligent and easy to follow that doesn't sacrifice the flexibility of make and gnu shit and unfucks cmake based projects !
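(Until that tool exists, the usual workaround for a custom libtiff is to spoon-feed CMake the cache variables its stock FindTIFF module looks for - the paths below are placeholders:)
cmake -D CMAKE_PREFIX_PATH=/opt/mydeps \
      -D TIFF_INCLUDE_DIR=/opt/mydeps/include \
      -D TIFF_LIBRARY=/opt/mydeps/lib/libtiff.so \
      ../opencv
-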
Send me your best git-fuckit scripts! I’m compiling the best ones. The winner needs to be versatile enough to handle both simple and upstream/forked repos.