Search - "cudnn"
-
Why the absolute fuck do I need an NVIDIA membership to download cuDNN? What evil do these mofos think people will achieve with free access to a fucking programming tool?
Jesus on a bike! I go on about open science, and all I ever end up with is these spying morons who purposefully get in scientists' way. Fuck!
If👏you👏need👏my👏info,👏then👏it's👏not👏free.
-
Ok friends, let's try to compile Flownet2 with Torch. It's made by NVIDIA themselves, so there won't be any problem at all with dependencies, right?????? /s
Let's use the Deep Learning AMI with a K80 on AWS, totally updated and ready to go, super great, always works with everything else.
> CUDA error
> cuDNN version mismatch
> CUDA versions overwrite each other
> Library paths never updated
> Torch 0.4.1 doesn't work, so have to go back to Torch 0.4
> Flownet doesn't compile, get a bunch of CUDA errors, piece of shit code
> Online forums have lots of questions and 0 answers
> Decide to skip straight to vid2vid
> More CUDA errors
> Can't compile the fucking 2D kernel
> Through some act of God, after reinstalling CUDA and cuDNN, manage to finally compile Flownet2
> Try running
> "Kernel image" error
> excusemewhatthefuck.jpg
> Try without a label map, because fuck it, the instructions and flags they gave are basically guaranteed not to work, it's fucking NVIDIA amirite
> Enormous fucking CUDA error and Torch error, makes no sense, online nobody agrees, and 0 answers again
> Try again but this time on a clean machine
> Still no go
> Last resort: use the Docker image of Flownet they themselves provided
> Same fucking error
> While debugging, realize my training image set is also bound to give bad results, because "directly concatenating" images as the paper claims actually gives horrible results, and the network won't accept a 6-channel input no matter what, so the only way around it is to make 2 images (3 * 2 = 6, quick maths)
> Fix my training data, fuck the NVIDIA dude who gave me wrong info
> Try again
> Same fucking errors
> Doesn't give any helpful information, just spits out a bunch of fucking memory addresses and long function names from the CUDA core
> Try reinstalling and then making a basic torch network, works perfectly fine
> FINALLY.png
> Setup vid2vid and flownet again
> SAME FUCKING ERROR
> Try to build the entire network in tensorflow
> CUDA error
> cuDNN version mismatch
> Doesn't work with TF
> HAVE TO FUCKING DOWNGRADE DRIVERS TOO
> TF doesn't support the latest CUDA, because no one in the ML community can be bothered to support anything other than their own machine
> After setting everything up again, realize there's no space left on the 75 GB machine
> Try Torch again, hoping that the entire change will fix things
At this point I'll leave a space so you can try to guess what happened next before seeing the result.
Ready?
3
2
1
> SAME FUCKING ERROR
In conclusion, NVIDIA is a fucking piece of shit of a company that can't make its own libraries compatible with each other, and can't be fucked to write instructions that actually work.
If anyone has vid2vid working or has gotten around the kernel image error on AWS K80s, please throw me a lifeline; in exchange you can have my soul, or what little is left of it.
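For what it's worth, on a K80 the "kernel image" error usually means the CUDA code wasn't compiled for that card's compute capability (sm_37). A minimal sanity-check sketch in PyTorch, assuming a CUDA build of torch is installed, that prints the versions and capability and runs one op on the GPU so mismatches show up before hours of compiling:

```python
# sanity_check.py: a minimal sketch (assumes a CUDA build of PyTorch is installed)
import torch

print("torch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
print("CUDA version torch was built with:", torch.version.cuda)
print("cuDNN version:", torch.backends.cudnn.version())

if torch.cuda.is_available():
    # A Tesla K80 reports compute capability (3, 7); custom CUDA extensions
    # have to be compiled for sm_37 or the "no kernel image" error appears at runtime.
    major, minor = torch.cuda.get_device_capability(0)
    print("GPU:", torch.cuda.get_device_name(0), f"(sm_{major}{minor})")

    # Run one tiny op on the GPU: this is where a kernel-image mismatch
    # actually blows up, so better to trigger it here than mid-training.
    x = torch.randn(8, 8, device="cuda")
    print("matmul OK:", (x @ x).sum().item())
```

When building custom CUDA extensions from source (like the ones Flownet2 ships), setting TORCH_CUDA_ARCH_LIST to include the target architecture, e.g. "3.7" for the K80, is usually what makes the kernel-image error go away.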
-
6h attempting to correctly install:
the NVIDIA driver,
CUDA,
cuDNN,
PyTorch from source,
an Anaconda environment,
and this.
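If it helps the next person: before building PyTorch from source, it's worth confirming that the driver, the toolkit and the framework all agree on a CUDA version. A rough sketch (not the rant author's script), assuming nvidia-smi and nvcc are on PATH and some CUDA build of torch is importable:

```python
# stack_check.py: rough sketch for checking that every layer of the stack agrees;
# assumes nvidia-smi and nvcc are on PATH and a CUDA build of PyTorch is importable.
import subprocess

import torch

def run(cmd):
    """Run a command and return its stdout, or a readable failure marker."""
    try:
        return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout.strip()
    except (OSError, subprocess.CalledProcessError) as exc:
        return f"<failed: {exc}>"

# What the kernel driver provides (upper bound on the CUDA runtime it can serve).
print("driver :", run(["nvidia-smi", "--query-gpu=driver_version", "--format=csv,noheader"]))

# What nvcc will compile against when building from source.
nvcc_out = run(["nvcc", "--version"])
print("nvcc   :", next((l.strip() for l in nvcc_out.splitlines() if "release" in l), nvcc_out))

# What the framework itself was built against.
print("torch  :", torch.__version__,
      "| CUDA", torch.version.cuda,
      "| cuDNN", torch.backends.cudnn.version())
```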
-
Fucking fuck NVIDIA. Shit suckers and ass lickers can't make a fucking thing properly. Every time I have to compile something involving cuDNN and CUDA, I wish I could kill myself first. It's a piece of garbage software that we're stuck with. Fuck you, motherfuckin' NVIDIA.
-
Spent 2 days installing different versions of nvidia-driver, nvidia-cuda-toolkit and cudnn.
Disassembled the PC due to some RAM problems and the PC not starting after a freeze.
Looks like sometimes unplugging and re-plugging various components on your motherboard fixes the PC, lol. Destroyed the PC case, which collapsed from me sitting on it.
All of this just to find out the TensorFlow errors and all the crashes were from the graphics card overheating after sitting at 82°C for a while.
Added a time.sleep to the Python code and it looks like it's working, keeping the temperature below 65°C.
Still ~100 times faster than CPU training, so I can live with that.
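A minimal sketch of that kind of throttle, assuming nvidia-smi is available to read the GPU temperature; the function names and thresholds here are illustrative, not the original code:

```python
import subprocess
import time

def gpu_temp_c(index: int = 0) -> int:
    """Read the GPU core temperature in degrees C via nvidia-smi."""
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=temperature.gpu",
         "--format=csv,noheader,nounits", "-i", str(index)],
        capture_output=True, text=True, check=True,
    ).stdout
    return int(out.strip())

def cool_down(max_temp: int = 65, pause_s: float = 10.0) -> None:
    """Block until the GPU drops back below max_temp."""
    while gpu_temp_c() >= max_temp:
        time.sleep(pause_s)

# Hypothetical training loop: pause between batches whenever the card runs hot.
# for step, batch in enumerate(loader):
#     train_step(batch)
#     if step % 50 == 0:
#         cool_down()
```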
-
Bought a fucking NVIDIA GPU to test the speed of some fucking machine learning models that generate speech.
6 hours already wasted installing fucking dependencies:
CUDA, fucking tensorflow-gpu, Bazel and other shit.
Fucking password resets just to download the .deb with cuDNN,
really??????? The fucking emails are not delivered to my fucking mailbox.
After mass-clicking "send email" and multiple account bans and unbans, I figured out I should log in to the NVIDIA website and then allow access for the fucking developer site every time I want to log in there. Fuck this shit.
Uninstalling everything now, looking for fucking compatible versions of the software.
10 years in this business and installing fucking dependencies is still the most difficult part.
Fucking corporations and their shitty installation instructions that fuck up people's lives and push them to the cloud.
Same thing with fucking Kubernetes.
Fucking software dependency hell.
It's worse than ever before.
Fuck....
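In case it saves someone else the six hours: matching the TensorFlow wheel to the CUDA/cuDNN it was built against is most of the battle, and recent 2.x builds will tell you themselves what they expect. A small sketch, assuming a GPU build of TensorFlow 2.x is installed:

```python
# tf_gpu_check.py: small sketch (assumes a GPU build of TensorFlow 2.x is installed)
import tensorflow as tf

print("tensorflow:", tf.__version__)

# Which CUDA / cuDNN this wheel was actually built against
# (available on recent 2.x releases; keys may be absent on CPU-only builds).
build = tf.sysconfig.get_build_info()
print("built for CUDA:", build.get("cuda_version"), "| cuDNN:", build.get("cudnn_version"))

# If this list is empty, TF fell back to CPU: usually a driver/CUDA/cuDNN
# mismatch, and the reason shows up in the import-time log messages.
gpus = tf.config.list_physical_devices("GPU")
print("visible GPUs:", gpus)
```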