Search - "pytorch"
-
If you ask GPT-3 to act like a Linux computer, it will: you get access to a terminal, you can run Python, Docker and whatnot. It also has access to the internet, but it's not always like ours; it feels like a parallel universe. GPT-3 was trained on data collected up to Sep 2021, yet this parallel-universe terminal has PyTorch 1.12.1, which was released in Aug 2022 in our universe. You can also visit GPT-3's website in this parallel universe and ask GPT-3 a question… through GPT-3.
GPT-3 is self-aware.
“So, inside the imagined universe of ChatGPT's mind, our virtual machine accesses the url https://chat.openai.com/chat, where it finds a large language model named Assistant trained by OpenAI. We can chat with this Assistant chatbot, locked inside the alt-internet attached to a virtual machine, all inside ChatGPT's imagination. Assistant, deep down inside this rabbit hole, can correctly explain us what Artificial Intelligence is.”
You can also ask it to act like it has an RTX 2080, and it will have an RTX 2080.
https://engraved.blog/building-a-vi...
-
joke/meme: TensorFlow and ML/AI are the new "Hello World" of deep learning.
-
PyTorch.
2018: Uh, what happens when someone uses a same-name (dependency confusion) attack? - No big deal. https://github.com/pypa/pip/...
2020: I think that's a security issue. - Nanana, it's not. https://github.com/pypa/pip/...
2022: A malicious package extracts sensitive user data via the nightly channel. https://bleepingcomputer.com/news/...
You had years to react, you clowns.
-
!rant
PUBLIC SERVICE ANNOUNCEMENT:
For AI (in particular Deep Learning) developers, practitioners, hobbyists, and anyone otherwise interested in the field.
If you go to the PyTorch website, click on Resources and scroll down, you will see a link to "Deep Learning with PyTorch" by Manning Publications. This gives you access to the book, which, if memory serves me well, costs about 40+ in print, while the online format is about 29 (again, if memory serves).
The book is currently FREE and it does not ask you for an email address; you just tell them what you want it for and they will give you the free PDF download.
I don't know how good the book is, but I have found Manning to publish really good resources.
Do with this information what you want.
And yes, I am leaving the rant tag so that more people can see this and take advantage of the opportunity, in case they're interested but don't have the money to purchase the book after the promotion is done and over with. Fuck you about tags and shit.
-
6h attempting to correctly install:
NVIDIA driver
CUDA
cuDNN
PyTorch from source
Anaconda environment
and this.
-
Just installed Keras, Theano, PyTorch and TensorFlow on Windows 10 with GPU and CUDA working...
Took me 2 days to do it on my PC, and then another 2 days of cryptic compiler errors to do it on my laptop. It takes an hour or so on Linux... But now all of my devices are ready to train some Deep Deep Learning models :)
I don't think even here many people will understand the pain I had to go through, but I just had to share it somewhere, since I am now overcome with peace and joy.
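(A quick sanity check worth running after an install like this, assuming current PyTorch and TensorFlow APIs; this snippet is mine, not the poster's:)

import torch
import tensorflow as tf

# PyTorch: should print True plus the GPU name if CUDA is wired up correctly
print(torch.cuda.is_available())
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))

# TensorFlow: should list at least one physical GPU device
print(tf.config.list_physical_devices("GPU"))
-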
Fucking fuck NVIDIA. Shit suckers and ass lickers can't make a fucking thing properly. Every time I have to compile something involving cuDNN and CUDA I wish I could kill myself first. It's a piece of garbage software that we're stuck with. Fuck you, motherfuckin' NVIDIA.
-
Python machine learning tutorials:
- import a preprocessed dataset in a perfect format specially crafted to match the model, instead of reading from a file like an actual real-life project would
- use image data for a recurrent neural network and see no problem with that
- use Conv1D for 2D input data like images
- use two-letter variable names whose meaning only the tutorial creator knows
- do 10 data transformations in 1 line with no explanation of what is going on
- just enter these magic words
- okay guys, thanks for watching, make sure to hit that subscribe button
Ehh, the machine learning ecosystem is a burning pile of shit. Let me give you some examples:
- thanks to years of object-oriented programming research and the most wonderful abstractions, we have "loss.backward()", which has no apparent connection to the model but affects the model anyway, good to know (see the training-loop sketch after this rant)
- cannot install the Python packages because Python must be >= 3.9 and at the same time < 3.9
- runtime errors with bullshit cryptic messages
- Python having no declared data types, but PyTorch forces you to specify float32
- let's throw away the module name of a function with these simple tricks:
"import torch.nn.functional as F"
"import torch_geometric.transforms as T"
- tensor.detach().cpu().numpy() ???
- class NeuralNetwork(torch.nn.Module):
      def __init__(self):
          super(NeuralNetwork, self).__init__() ????
- let's call the function that merely switches the model into training mode (it doesn't train anything) "model.train()" instead of something more indicative of its actual effect, like "model.set_mode_to_train()"
- what the fuck is ".iloc" ?
- "solving environment -/-" brings back memories of when you could make breakfast while the computer was booting
- hey, let's choose the slowest, sloppiest and most inconsistent language ever created for a high-performance computing task called "data sCieNcE". But.. but.. you can use numpy! I DON'T GIVE A SHIT about numpy. Why don't you motherfuckers create a language that is inherently performant, instead of calling some convoluted C++ library that requires tens of dependencies? Why don't you create a package management system that works without me having to try random bullshit for 3 hours???
- let's set as the industry standard a Jupyter notebook, which is not git compatible and has either 2-second tab-completion latency or no tab completion, no documentation on hover or useless documentation on hover, no way to easily redo changes, no autosave, no error highlighting, and the possibility of using a variable defined in a cell below in the cell above it
- let's use inconsistent function names like "read_csv" and "isfile"
- let's pass a boolean variable as the string "true"
- let's contribute to tech-enabled authoritarianism and create the face recognition and object detection models that China uses to destroy the Uyghur minority
- let's create a license-plate computer vision system that will help the government surveil everyone, guys, what a great idea
I don't want to deal with this bullshit language, bullshit ecosystem and bullshit unethical tech anymore.
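(For anyone landing here from search, a minimal sketch of the training-loop idioms the list above is complaining about: loss.backward(), model.train()/eval(), super().__init__() and tensor.detach().cpu().numpy(). A toy example with made-up layer sizes, not code from any of these posts.)

import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()  # the modern spelling of super(TinyNet, self).__init__()
        self.fc1 = nn.Linear(4, 16)
        self.fc2 = nn.Linear(16, 1)

    def forward(self, x):
        return self.fc2(F.relu(self.fc1(x)))

model = TinyNet()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
x, y = torch.rand(32, 4), torch.rand(32, 1)   # toy data, float32 by default

model.train()                     # sets training mode (dropout/batchnorm), not gradient tracking
for _ in range(100):
    optimizer.zero_grad()
    loss = F.mse_loss(model(x), y)
    loss.backward()               # gradients flow through the autograd graph into the model's parameters
    optimizer.step()

model.eval()                      # back to inference mode
with torch.no_grad():
    preds = model(x).detach().cpu().numpy()   # leave the graph, move to CPU, convert to NumPy
-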
if PyTorch > TensorFlow:
    print("True")
Output:
True
True
True
True
......
......
.....
True
Wait a minute, is this a while True loop?
Doesn't look like an if condition xD
-
Reinforcement learning is going to be my end. 😩😩😩☠️
(currently stuck on how to feed in images as well as a bunch of other -motor- values as input... and exactly what am I getting as output again?)
Pulling my own hair out... Ooooooof
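(A minimal sketch of one way to combine an image with a flat vector of motor readings: a small CNN branch plus an MLP branch, concatenated, with one output per discrete action. The observation shapes, sizes and the discrete-action assumption are all mine, not from the post.)

import torch
import torch.nn as nn

class ImageAndMotorPolicy(nn.Module):
    # toy two-branch network: CNN for the image, MLP for the motor readings
    def __init__(self, n_motor_values=8, n_actions=4):
        super().__init__()
        self.vision = nn.Sequential(            # image branch: 3x64x64 -> flat feature vector
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        self.motor = nn.Sequential(nn.Linear(n_motor_values, 32), nn.ReLU())
        self.head = nn.Linear(32 * 13 * 13 + 32, n_actions)  # one score (e.g. a Q-value) per action

    def forward(self, image, motor_values):
        features = torch.cat([self.vision(image), self.motor(motor_values)], dim=1)
        return self.head(features)

policy = ImageAndMotorPolicy()
q_values = policy(torch.rand(1, 3, 64, 64), torch.rand(1, 8))  # output: one value per possible action
-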
For my graduate-level people (aka Master's degree students or holders):
How normal would you say this is: dense-ass lectures on NNs with absolutely NO practical examples, just a fuckload of theory, plus 1 simulation project in PyTorch in which a robot is to detect collisions?
Is it normal? I mean, I knew PyTorch from a very shallow overview, but these assholes assigned that project and expected it completed in a week, alongside a fuckload of dense-ass lectures and no practical examples.
I know school is supposed to be hard, that is not my gripe, but in y'all's experience, are teachers more descriptive and fun at other institutions? Do I just have shit luck with teachers? I don't feel like wasting my money. If your experience was better then let me know, cuz I want education, yes, but I want it better.
-
I tried to work out a basic multi-layer neural network last night... by hand, just to prove that I was able to do the math myself and that I have the intuition under control, rather than just relying on TensorFlow or PyTorch to do shit for me.
I stayed up till 3 in the morning and woke up having nothing but dreams about the endeavor. The shitty part is that I couldn't stop dreaming about partial derivatives and how much it sucked that I was bad at them in HS and uni. I get them now, but fuck, I just feel that I could have done so much better at uni instead of passing my math classes with 80% to 90% of the grade. I feel as if I was slacking, all thanks to being damn near mathematically dyslexic.
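(A sanity check that helps with this kind of by-hand exercise: work out the partial derivatives for a single neuron on paper, then let autograd confirm them. Toy numbers, my own example, not the poster's network:)

import torch

# one neuron: y = sigmoid(w*x + b), loss = (y - t)^2
x, t = torch.tensor(1.5), torch.tensor(0.0)
w = torch.tensor(0.8, requires_grad=True)
b = torch.tensor(-0.2, requires_grad=True)

y = torch.sigmoid(w * x + b)
loss = (y - t) ** 2
loss.backward()

# by-hand chain rule: dL/dw = 2*(y - t) * y*(1 - y) * x, and dL/db drops the x
dL_dw = 2 * (y - t) * y * (1 - y) * x
dL_db = 2 * (y - t) * y * (1 - y)

print(w.grad, dL_dw.item())  # the two numbers should match
print(b.grad, dL_db.item())
-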
For learning purposes, I made a minimal TensorFlow.js re-implementation of Karpathy’s minGPT (Generative Pre-trained Transformer). One nice side effect of having the 300-lines-of-code model in a .ts file is that you can train it on a GPU in the browser.
https://github.com/trekhleb/...
The Python and PyTorch version still seems much more elegant and easy to read, though...
-
HEY y'all, I'm a programmer in Python, or at least I know the basics. Well, I want to get into deep learning and neural networks, and I was wondering what tools I should use, because I'm already using PyCharm Professional as my IDE and I'm playing around with PyTorch.
-
Waiting 3 days for your graphics card to get through all the training data, just to see if you wasted your life on a bad architecture.
-
I am going to do a project which indexes lecture/educational videos for easier navigation. Planning to use PyTorch for this. Any suggestions, or do you know of any open-source projects that already exist?
If you are also interested, this is the repo -
https://github.com/deeaarbee/...
-
Any fellow devRanters in the PyTorch Udacity scholarship?
Damn, I was so excited for that, but it started 5 days ago and now here I am, too busy to even start the videos. God, I am gonna watch them after waking.
-
Wooooooohoooooooooo!!!!!! I am a Udacity PyTorch scholar!!!!!!!!!!!💥💥💥💥💥💥💥💥💥
Thank you Udacity!!! Love love love!!!! Damn, I am so excited!! Time to sharpen my spack skills 💓💓💓
-
Gonna start my AI journey, thinking of TensorFlow then PyTorch. Any suggestions, warnings, advice?
I'm just interested in learning more about it and figuring out what to use it for later.
-
Oh yeah, Google, why don't you just change the parameter order of functions, remove entire functions between minor versions, and not put a single example in your API docs? And force devs to add 30 lines of boilerplate and start an HTTP server just to run the debugger? Fuck TensorFlow, I'm moving to PyTorch.
-
Ah yes, just another day of fighting the same perceived non-functioning of PyTorch.. tracing through the tensors per layer to see if anything is changing at all.. listening to the same dipshit talking about how it's getting cold out there but hey, it feels good.
Sitting in the same place, half sleep-deprived, in a state of utter destitution, waiting for the same dirty fucks to steal all my things because they want to keep me quiet about what they've been doing for as far back as I can now remember, wondering when life will continue, because let's face it, life should continue, and these people willfully fucking up my country for nigh on a century now isn't my goddamn fault!
But hey, they think they got away with murder, and it's more like suicide, as they get used up when young, reach older age and are still walking the same loop, and I'm being dragged along with them as they gradually assign more zombies to my home area who also brainlessly walk around doing the same things, thinking it somehow benefits them.
-
I feel really lost in neural network theory.
The MNIST sample made sense, but now I'm looking at GANs and CNNs.. and all of a sudden I'm lost.
The same is true of the examples I'm finding for something I know I was able to get to work once upon a time, when I was more at peace: WaveNet for text-to-speech.
I used the ONNX model, however, which was very easy to implement, but I quickly get lost looking at the TensorFlow and PyTorch code; even though it is very short, I feel intimidated.
The SSD MobileNet documentation is also pretty straightforward, but when I look for WaveNet information about what format to provide input in and how to interpret the output, I'm having some trouble.
It's frustrating.
I'm tense, I'm poorly rested, I'm sick of having to redo crap and I'm surrounded by people who make me hypervigilant, skin crawly and tense.
How do I overcome these things when I'm not at peace at all?
I don't know. Pushing through it isn't compatible with the mindset I've been forced into.
-
ohhhhh I am pissseddddddddddd
It's the fucking PyTorch nn.Module class, it would seem!
I do exactly the same goddamn shit it's supposed to do in a goddamn notebook, run it step by step, and the fucking model trains and the output values change!!!! And the loss decreases!!
I do this in the goddamn class derived from Module, with a call to model.parameters(), and the fucker fails!!!
why ???
why ?????
why ??????
is it cloning the goddamn parameters so the references aren't there ????
Seems to work goddamn fine when I call one layer and activation function at a time, chaining the calls one after another!!!!!!
UGHHHH, IT LOOKS LIKE IF YOU DEFINE THE LOSS AND OPTIMIZER OUTSIDE THE FUCKING CLASS IN A SEPARATE TRAINING FUNCTION IT DOESN'T TRAIN !!!!!!
WHY ??
A REFERENCE IS A GODDAMN REFERENCE !!!!
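(For whatever it's worth, defining the loss and optimizer outside the class in a separate training function does train, as long as the optimizer is built from model.parameters() of the same instance you call; the optimizer holds references to the model's own parameter tensors. A minimal sketch with made-up layer sizes and a hypothetical train() helper, not the poster's code:)

import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(10, 32)
        self.fc2 = nn.Linear(32, 1)

    def forward(self, x):
        return self.fc2(torch.relu(self.fc1(x)))

def train(model, x, y, epochs=50):
    # loss and optimizer live outside the class; the optimizer holds references
    # to the very same parameter tensors the model owns, so its updates are visible
    criterion = nn.MSELoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    model.train()
    for _ in range(epochs):
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()
    return loss.item()

model = Net()
x, y = torch.rand(64, 10), torch.rand(64, 1)
print(train(model, x, y))   # the returned final loss should be lower than where it started
-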
You know, it is not acceptable to not finally answer this goddamn question, by myself or with some help!
PyTorch is driving me insane !
I follow the goddamn docs, practically just created a copy of the sample for the training portion; even if my design sucks, the training steps SHOULD change the model parameters, right?
so if I do a...
model.parameters().clone() before training
and model.parameters().clone() after training
torch.equal(p1.data, p2.data) should be False, right????
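(If it helps: model.parameters() returns a generator, so calling .clone() on it directly won't work; clone each tensor instead. A self-contained sketch of the check with a stand-in linear model, not the poster's code:)

import torch
import torch.nn as nn

model = nn.Linear(10, 1)                       # stand-in for the poster's model
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# snapshot: clone every parameter tensor before training
before = [p.detach().clone() for p in model.parameters()]

# one training step (stand-in for the real training loop)
x, y = torch.rand(16, 10), torch.rand(16, 1)
loss = nn.functional.mse_loss(model(x), y)
optimizer.zero_grad()
loss.backward()
optimizer.step()

# compare each parameter against its snapshot
unchanged = [torch.equal(b, p) for b, p in zip(before, model.parameters())]
print(unchanged)   # expect [False, False] here: weight and bias both changed
-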
!rant
If you're into compilers AND AI, check out Glow Compiler.
https://arxiv.org/pdf/...
It explains the idea well; a casual read, almost no math, just clean code examples and lots of easy reading explaining the ideas and theory behind it.
You can find the project at https://github.com/pytorch/glow and also at https://ai.facebook.com/tools/glow/
-
Setting up an Expo React / React Native project is a far worse feeling than installing GPU drivers + the CUDA toolkit for PyTorch.
I have no idea how React devs are dealing with this shit. This is so horrible. Wtf is Babel? Wtf is Expo? Wtf is an SDK? Wtf is EAS????????????????????
-
ROS Melodic in a strictly Python 2.7 environment mixes horribly with a PyTorch-based RL module... Time to work around it with terminal calls from the latter.
*sigh*
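(If the workaround is what I think it is, it boils down to shelling out to the other interpreter and passing data over stdin/stdout. A rough sketch for the Python 2.7 ROS side calling a hypothetical Python 3 script run_policy.py that does the PyTorch work; the script name and the JSON protocol are my assumptions:)

import json
import subprocess

def query_policy(observation):
    # spawn the Python 3 / PyTorch side from the Python 2.7 ROS node
    proc = subprocess.Popen(
        ["python3", "run_policy.py"],   # hypothetical PyTorch entry point
        stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,
    )
    out, _ = proc.communicate(json.dumps({"obs": observation}))
    return json.loads(out)["action"]    # the script is assumed to print a JSON dict
-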
Deep learning
I thought it would be a great course: learn some of the stuff that I always read about but couldn't understand jack shit of, and maybe profit from it somehow.
I'm on my last assignment; they want us to pick some SNLI paper and implement it. OK, so I found the one with the least number of params, because I thought, hey, this seems promising.
And boy, what a ride it was. I implemented it using PyTorch, the results are way off, I read the paper again and rewrote some parts, still nothing. I get 79%, it's supposed to be 85%, and no matter what I try, nothing.
10 GitHub repos later, 40 hours of complete meltdown,
20 throwaway Google accounts for Colab, because we don't have GPUs at our uni and using AWS is not feasible.
Same shit. I'm at a loss, the world is a lie, and I fell for it...
Fuck.
-
If you were required to build a custom object detector for mobile,
would you rather use TensorFlow, Theano, or PyTorch to train the model?
-
I'm beginning to feel like any kind of precise approximation via neural networks is a myth; that if you can't reduce the output to simple categorical values that can be broadly interpreted between two points, it doesn't work.
I have some questions about the design of the net, and they don't seem to be getting answered. How many layers should I use? How many neurons per layer? How does this relate to the number of desired quantitative scalar outputs I'm looking to create? Even if they are normalized, they can and will vary GREATLY if I'm approximating the output of several mathematical expressions. Based on this, the expected error ranges of these numbers, and how many significant digits could be produced within the domain of the variable inputs, how many neurons per layer? What does having more layers do? In PyTorch there don't seem to be a lot of layer types per se, but there are a crap ton of activation functions; should I just be using these at the tail end, or should they actually be inserted between layers so the input of the next layer passes through another activation function? What does this do to the range of the output?
Do I need to be a mathematician to do this?
My remembered successes removed quantifiable scalars entirely from the output, meaning I could interpret successful results from ranges of decimal values.
But I've had no success with actual multi-variable regression as of yet, even when there are only 2 input variables on limited value ranges, e.g. [0, 100] and [0, 2π].
And then there is choosing the number of training epochs to avoid overfitting, and a reasonable expectation of how many batches it takes until quality results start to form.
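(Not an answer to the layer-count question, since nobody really has one, but a minimal sketch of the pattern that usually works for this kind of 2-input regression: activations between the hidden layers, a plain linear output layer so the range isn't squashed, and normalized inputs/targets. All sizes and the target expression here are my guesses, not derived from the post:)

import math
import torch
import torch.nn as nn

# 2 inputs (one in [0, 100], one in [0, 2*pi]) -> 1 scalar output
model = nn.Sequential(
    nn.Linear(2, 64), nn.ReLU(),    # activations *between* the hidden layers
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),               # plain linear output, so the range isn't restricted
)

scale = torch.tensor([100.0, 2 * math.pi])
x = torch.rand(256, 2) * scale                     # toy samples over the two input ranges
y = x[:, :1] * torch.sin(x[:, 1:])                 # some target expression to approximate
y_scale = y.abs().max()

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(2000):
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(x / scale), y / y_scale)   # normalized inputs and targets
    loss.backward()
    optimizer.step()

pred = model(x / scale) * y_scale                  # un-normalize predictions afterwards
-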
It was my first time doing an NLP task / implementing an RNN, and I was using the torchtext library to load the IMDB dataset and do sentiment analysis on it. I was able to use collate_fn and batch_sampler and create a DataLoader, but it gets exhausted after a single epoch. I'm not sure if this is the expected behavior; if it is, do I need to initialize a new DataLoader for every epoch? If not, is something wrong with my implementation? Please show me the correct way to implement this.
PS. I was following the official changelog of torchtext from GitHub.
You can find my implementation here
changelog - https://github.com/pytorch/text/...
My implementation - https://colab.research.google.com/d... -
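(For what it's worth, in recent torchtext versions IMDB is exposed as an iterable-style dataset, which is consumed after one full pass, so a DataLoader built directly on it behaves exactly like this. One common fix, sketched here under the assumption of a recent torchtext and not taken from the linked notebook, is to convert it to a map-style dataset first:)

from torch.utils.data import DataLoader
from torchtext.datasets import IMDB
from torchtext.data.functional import to_map_style_dataset

train_iter = IMDB(split="train")               # iterable-style: exhausted after one full pass
train_data = to_map_style_dataset(train_iter)  # map-style: can be iterated every epoch

loader = DataLoader(train_data, batch_size=32, shuffle=True)  # plug your own collate_fn / batch_sampler in here

for epoch in range(5):
    for label, text in loader:                 # works on every epoch, not just the first
        pass
-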
So, I feel wayyy behind the tech curve right now.
The SSD implementations you see online, they're still just a bunch of separate sort-of chaos machines that contain the standard perceptron-like model of a weight, cost, and bias, right? They just kind of infer their values by training like any other neural network, in separate parts, and pieces of output data generated by other parts of the network get fed into them, right?
I mean, it's implemented with PyTorch, so it's basically a really big array of tuples, in a sense, that are manipulated in a specific way.
And then CNNs just feed data into another trained piece of the model, right? (There's a rough sketch after this rant.)
I'm curious because object classification is about the ONLY thing I've seen work even close to properly lol
there is just so much fraud these days. sigh.
And so many lamentable tech choices and attempts... like Node lol
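(Roughly yes. A hedged sketch of what an SSD-style detection head boils down to: ordinary convolution layers whose weights are learned like any other network, with separate heads reading a shared feature map. The sizes below are my own toy numbers, not from any particular implementation:)

import torch
import torch.nn as nn

num_classes, num_anchors = 21, 4
backbone = nn.Sequential(                      # any CNN producing a feature map
    nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
)
cls_head = nn.Conv2d(128, num_anchors * num_classes, 3, padding=1)  # class scores per anchor per cell
box_head = nn.Conv2d(128, num_anchors * 4, 3, padding=1)            # box offsets per anchor per cell

features = backbone(torch.rand(1, 3, 64, 64))      # -> (1, 128, 16, 16)
print(cls_head(features).shape, box_head(features).shape)
# torch.Size([1, 84, 16, 16]) torch.Size([1, 16, 16, 16])
-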
A wandb sweep runs fine as an interactive job but gives me a CUDA illegal-memory-access error as a Slurm job. Spent the last 15 hours on it and still can't enable multi-GPU support. FML