Search - "neural"
-
So the person from my previous rant actually tried to make AI in HTML.
Person: I made that AI in HTML today!
Me: Oh really?
Person: Yup. *Opens HTML site*
It was a site that
1) Used JavaScript
2) Showed a prompt() and, after you answered, alerted "Yes" or "No" at random.
Me: That's not AI
Person: Uhh yeah it is. It uses a neural network to answer!
Me: Actually, a neural network is a dot product of an input and vectors that are refined using partial derivatives.
Person: Yeah! That's what Math.random() and alert() do!
I left that room as quickly as I could (yet again).
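(For the record, the dot-product-plus-partial-derivatives thing is real. A minimal sketch in Python with numpy -- obviously not the person's HTML "AI"; the input, target and learning rate here are made up:)

import numpy as np

# One "neuron": a dot product of an input with a weight vector,
# refined using the partial derivative of a squared error.
x = np.array([1.0, 2.0, 3.0])   # input (made up)
w = np.random.randn(3)          # the vector being refined
target, lr = 1.0, 0.01

for _ in range(200):
    y = np.dot(x, w)                 # the dot product
    grad = 2.0 * (y - target) * x    # dL/dw for L = (y - target)^2
    w -= lr * grad                   # refine the vector

print(np.dot(x, w))  # ~1.0, and not a single Math.random() in sight
-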
Person: I want to learn to code neural networks and cool AI stuff.
Me: Look into Python or Lua.
Person: Those are too hard, I'm going to use HTML instead.
I got out of there as fast as I could. 😅
-
I'm a self-taught 19-year-old programmer. Coding since 10, dropped out of high school and got my first job at 15.
In the early days I was extremely passionate: learning SICP and algorithms, doing Haskell, C/C++, Rust, Assembly, writing toy compilers/interpreters, tweaking Gentoo/Arch. I even got a lambda tattoo on my arm after learning lambda calculus and Church numerals.
My first job - a company which raised $100,000 on Kickstarter. The CEO was a dumb millionaire hippie who was bored with his money, so he wanted to run a company even though he had no idea what he was doing. He used to talk about how he built our product, even though he had zero technical knowledge whatsoever. He was on the news a few times, which was pretty cringeworthy. The company had only one programmer (other than me), who was pretty decent.
We shipped the project, but soon we burned through the Kickstarter money and the sales dried up. Instead of trying to acquire customers (or abandoning the project), the boss kept looking for investors, which kept us afloat for an extra year.
Eventually the money dried up, and instead of closing up shop, the boss decreased our paychecks without our knowledge. He also converted us from full-time employees to "contractors" (also without our knowledge) so he wouldn't have to pay taxes for us. My paycheck decreased by 40%, but I still stayed.
One day, I was trying to burn a USB drive, and I did "dd of=/dev/sda" instead of sdb, thereby wiping out our development server. They asked me to stay at the company, but I turned in my resignation letter the next day (my highest-ever post on Reddit was in /r/TIFU).
Next, I found a job at a "finance" company. $50k/year as an 18-year-old. The CEO was a good-looking smooth-talker who had made a few million bucks talking old people into giving him their retirement money.
He claimed he had changed his ways and was now trying to help average folks save money. I've been here 8 months so far and I do not see that happening. He forces me to do sketchy shit that clearly doesn't have clients' best interests in mind.
I am the only developer, and I quickly became a back-end and front-end ninja.
I switched the company infrastructure from shitty drag+drop website builder, WordPress and shitty Excel macros into a beautiful custom-written python back-end.
Little did I know, this company doesn't need a real programmer. I don't have clear requirements, I get unrealistic deadlines, and boss is too busy to even communicate what he wants from me.
Eventually I sold my soul. I switched parts of it to WordPress, because I was not given enough time to write custom code properly.
For latest project, I switched from using custom React/Material/Sass to using drag+drop TypeForms for surveys.
I used to be an extremist FLOSS Richard Stallman fanboy, but eventually I traded my morals, dreams and ideals for a paycheck. Hey, $50k is not bad, so maybe I shouldn't be complaining? :(
I got addicted to pot for 2 years. Recently I got arrested, and it is honestly one of the best things that ever happened to me. Before I got arrested, I did some freelancing for a mugshot website. In unrelated news, my mugshot disappeared.
I have been sober for 2 months now, and my brain is finally coming back.
I know the average developer hits a wall at around $80k, and then you have to either move into management or have your own business.
After getting sober, I realized that money isn't going to make me happy, and I don't want to manage people. I'm an old-school neck-beard hacker. My true passion is mathematics and physics. I don't want to glue bullshit libraries together.
I want to write real code, trace kernel bugs, optimize compilers. I guess I was born in the wrong generation.
I've started studying real analysis, brushing up on differential equations, and now I'm trying to tackle machine learning and neural networks, and to understand the juicy math behind gradient descent.
I don't know what my plan is for the future, but I'll figure it out as long as I have my brain. Maybe I will continue making shitty forms and collect paycheck, while studying mathematics. Maybe I will figure out something else.
But I can't just let my brain rot while chasing money and impressing dumb bosses. If I wait until I get rich to do things I love, my brain will be too far gone at that point. I can't just sell myself out. I'm coming back to my roots.
I still feel like after experiencing industry and pot, I'm a shittier developer than I was at age 15. But my passion is slowly coming back.
Any suggestions from wise ol' neckbeards on how to proceed?
-
Developer Says: We have trained a model to automatically categorize user posts.
Sales Department Says: We are building a decentralized peer to peer blockchain neural network based on a scalable containerized cloud of quantum computers to power internet of things devices in augmented reality while users get driven around in autonomous vehicles powered by machine learning and pay for renewable energy with cryptocurrencies.
-
I just started playing around with machine learning in Python today. It's so fucking amazing, man!
All the concepts that come up when you search for tutorials on YouTube (you know, neural networks, SVM, linear/logistic regression and all that fun stuff) seem overwhelming at first. I must admit, it took me more than 5 hours just to get everything set up the way it should be, but the end result was so satisfying when it finally worked (after ~100 errors).
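(If you want a taste of that "it finally worked" moment, here's roughly the kind of minimal first script I mean -- a scikit-learn logistic regression on a toy dataset; it assumes you already won the fight of installing scikit-learn, which is honestly the hard part:)

from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Toy dataset: classify iris flowers from 4 measurements
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(clf.score(X_test, y_test))  # accuracy on held-out data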
If any of you guys want to start, I suggest visiting these YouTube channels:
- https://youtube.com/channel/...
- http://youtube.com/playlist/...
-
None of these people are real.
The “photos” are generated via neural network.
https://arxiv.org/pdf/...
-
Sister's new boyfriend at xmas party: So what do you do for a living?
Me: Well, I would say I'm a "full stack" developer, but what does that even mean anymore, right? With the state of front-end development being in a constant state of flux and/or kissing its own ass, and every client demanding their one-page website, used solely for their phone number, be an offline-first PWA SPA Web 7.0 REST-enabled clusterfuck that requires using at least 65% of the AWS stack, most of it completely uselessly. But hey, neural network AI looks good on your "grandma's cookies" website, and for only $9,000 per month you can now set the timer on your oven from your phone. So, man, I guess even though I've now been at it twenty years, even I'm not sure what the fuck it is I do anymore. How about you?
Sister's Boyfriend: I'm unemployed.
-
Smart India Hackathon: Horrible experience
Background: Our task was to do load forecasting for a given area. Hourly energy consumption data for the past 5 years was given to us.
One government official asks the following questions:
1. Why are you using deep learning for the project? Why are you not doing data analysis?
2. Which neural network "algorithm" you are using? He wanted to ask which model we are using, but he didn't have a single clue about Neural Networks.
3. Why are you using libraries? Why not your own code?
Here comes the biggest one,
4. Why haven't you developed your own "algorithm" (again, he meant model)? All you have done is use some library. Where is the "novelty" in your project?
I just want to say that if you don't know anything about ML/AI, then don't comment anything about it. And the worst thing was, he was not ready to accept the fact that for capturing temporal dependencies where the underlying probability distribution is unknown, deep learning performs much better than traditional data analysis techniques.
After hearing his first question, second one was not a surprise for us. We were expecting something like that. For a few moments, we were speechless. Then one of us started by showing neural network architecture. But after some time, he rudely repeated the same question, "where is the algorithm". We told him every fucking thing used in the project, ranging from RMSprop optimizer to Backpropagation through time algorithm to mean squared loss error function.
Then very calmly, he asked the third question: why are you using libraries? That moron wanted us to write a whole fucking optimized library. We were speechless at this question. Finally, one of us told him the "obvious" answer. We were completely demotivated. But it didn't end here. The real question was waiting. At the end, after listening to all of us, he dropped the final bomb: WHY HAVE YOU USED A NEURAL NETWORK "ALGORITHM" WHICH HAS ALREADY BEEN IMPLEMENTED? WHY DIDN'T YOU MAKE YOUR OWN "ALGORITHM"? We again stated the obvious answer: that it takes at least a year or two of continuous hard work to develop a state-of-the-art algorithm, and that's when you build it on top of some existing "algorithm". After listening to this, he left. His final response was "Try to make a new "algorithm"".
Needless to say, we were completely demotivated after this evaluation. We all had worked too hard for this. And we had the ability to explain each and every part of the project intuitively and mathematically, but he was not even ready to listen.
Now, all of us are sitting aimlessly, waiting for the Hackathon to end. 😢😢😢😢😢
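(For anyone wondering what that kind of project actually looks like: the exact architecture isn't in this rant, but a minimal Keras sketch of such a load forecaster, with the RMSprop optimizer and mean squared error we mentioned, is roughly this -- shapes and data are placeholders:)

import numpy as np
from tensorflow.keras import layers, models

# Placeholder data: 24-hour windows of consumption -> next hour's load
X = np.random.rand(1000, 24, 1)
y = np.random.rand(1000, 1)

model = models.Sequential([
    layers.LSTM(64, input_shape=(24, 1)),  # trained via backprop through time
    layers.Dense(1),                       # next-hour load
])
model.compile(optimizer="rmsprop", loss="mse")  # mean squared error
model.fit(X, y, epochs=10, batch_size=32)
-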
My Neural Network can recognise handwritten digits!!!
It's my second try at NNs, so it's faaaar from perfect (or maybe even good), but hey, it's something, and with only high school knowledge I'm pretty satisfied with the results. If you've not seen my previous post, I'm just trying to learn NNs in C and am doing just really basic things.
Still I'm proud of my progress!
Now I'm looking forward to learning some library (OpenNN + OpenCV seems cool) and trying more advanced stuff, wish me luck 😆
-
About six months ago I decided I wanted to learn to write a neural network from the ground up, using only the C++ standard lib. Had to learn some linear algebra, multivariable calc and a dash of wizardry.
The mathematics of neural networks is still one of the coolest things I've ever learnt. It still amazes me that you can make a specialized mini-brain out of nothing but numbers.
-
Skype has a mean auto reply. If the auto reply is based on a neural net, then they must be sampling from bullies.
-
Asked Google Assistant what it knew about me. Didn't expect this answer, but it was surprisingly spot on.
All that neural and machine learning is paying off in a seriously creepy way.
BTW new to the community, first post and loving it!
-
Friend: Hey, did you see this neural network which can solve captchas?
Me: No, does that even exist?
Friend: Yes, its awesome, isn't it?
Me: Yeah, awesome...
Inner Me: Now machines are already better at solving captchas than me :/
-
I JUST FINISHED MY FIRST NEURAL NETWORK!!!
But first of all, as I know you guys: it's spaghetti code, and even I as a newb see places where I used an array with too few dimensions, or passed useless parameters, or simply wrote too many redundant lines of code. I know it. I will make it MUCH better next time. Period.
But OMFG this made me scream from happiness today!! Just these few seemingly random numbers... I'm really done.. That's why I jumped into coding a year or two ago..
And for some background, I didn't study at any IT school, I'm just a highschooler (general grammar school) who traded gaming for learning. Also my maths teacher taught NNs at university and is very keen to teach me, so that's that.
Now I wanna make the best out of it, and I'm looking forward to writing some well documented and flexible library, parallelized and everything (I'm gonna learn a lot in the process of doing this), better than FANN.
Maybe I'm gonna fail(99% probability but hey, I'm programmer beginner, I still think I can code everything I want). But if there is just one moment like when I saw this screen today, I'mma trade my life for it.
Sorry for taking your time guys, I was just genuinely stunned... A lot
-
Open source block chain neural network binary tree growth hacker synergy vertically integrating cryptocurrency game changing GDPR compliant internet of things node.js quantum computing start up that'll disrupt and pivot the cloud based ecosystem
-
I fucking hate toxic positivity. Every fucking corporation pushes the notion that "lifE iS aWeSomE, wE cArE abOuT pEoPle" and other such bullshit, and when you point it out, they call you a bad, toxic person.
No, you don't care about your community, let alone the whole world. You're just trying to make people believe that spyware, wage slavery and being fired by a neural network are the norm. You're making money off of those who don't have a choice.
If you count all people, not just the American white rich 1%, it turns out that for the vast majority of people life is either an uphill battle or a straight-up nightmare. People are working in shifts and have no time or emotional resources to spend on themselves. Most people can't afford a house or a flat. Even those who can still suffer from mental illnesses, to the point where there are more mentally challenged people than mentally healthy ones. The word "neurotypical" meaning "mentally healthy" is wrong.
You want nothing but to sell your stuff and earn more money off of Chinese and Indian factory workers who work 16-hour shifts. Maybe your life is great, but aggressively pushing this notion is a big, wet spit in the face of humanity.
Fuck you. Fuck your space rockets. Fuck your twitter accounts. Fuck your institutionalized exploitation of the weak. Fuck your products. Fuck your "open source". Fuck your "GDPR compliance". Fuck your offshores, your hedge funds and your tax evasion. Fuck your bailouts. Fuck your ships spilling tons of crude oil, fuck your factories, fuck your slave labor, fuck your anti-suicide nets in Chinese dormitories.
One day, because of you, our planet will become unlivable. You will hop into your fancy space rocket to go to that top-1% elite Mars colony. Nice job.
But I will pray for a solar flare to hit you and turn you and your fucking rocket into radioactive ash.
-
Neural network based 3D indoor location tracking on a moving ship in the middle of the ocean, with AR visualization for the crew to find guests.
You are on a cruise, you have an app on your phone to order stuff (drinks, meals etc)
Once you order, an indoor location system calculates your position on the ship (x, y, z) (deck, area, etc.), based on the signal strengths it was trained on, and sends it to the crew.
The crew wears AR glasses, and once your order is ready they get real-time AR navigation to you.
It was seriously over-engineered 😀
We used the phone's bluetooth and beacons on the ship to calculate the position based on signal strength.
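(A minimal sketch of just the positioning part, assuming scikit-learn -- the real system, beacon count and training data were of course different:)

import numpy as np
from sklearn.neural_network import MLPRegressor

# Placeholder training data: signal strengths from 10 beacons -> (deck, x, y)
rssi = np.random.uniform(-90, -30, size=(500, 10))
position = np.random.uniform(0, 100, size=(500, 3))

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000)
model.fit(rssi, position)

order_rssi = rssi[:1]             # signal strengths measured at order time
print(model.predict(order_rssi))  # estimated (deck, x, y) sent to the crew
-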
!rant
Ah, the joy of tweaking one tiny segment of code to reduce your neural network runtime from 3 minutes for 100 generations to 3 seconds!
-
!rant
Just wanted to share stuff. It's my first time.
<backstory>
I'm a C# dev, recently got excited about neural networks and stuff. I have a gf who studies biology
</backstory>
So I've noticed yesterday what my gf is doing for her science stuff. She has an image taken through a microscope of some erythrocytes and shit. And she's clicking on those tiny fuckers to count them. There are almost a hundred of those things in an image, and she has a buttload of those images.
I was like "what the fuck? Don't you have an app that counts the stuff for you or something?"
And there is none. Or at least i wasn't able to find one. That's bullshit. My inner programmer screams with hate for boring repetitive tasks.
So I guess I'm going to write a neural network to count similar stuff in an image.
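(Side note: before the neural network, a classical OpenCV baseline might already get close for well-separated cells -- threshold, then count connected components. A sketch, assuming opencv-python; the image path is hypothetical:)

import cv2

img = cv2.imread("erythrocytes.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file

# Otsu threshold -> binary mask of cells
_, mask = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Each connected blob gets a label; label 0 is the background
num_labels, _ = cv2.connectedComponents(mask)
print("cells counted:", num_labels - 1)
-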
Very specific and annoying situation here:
- Working on a machine learning project with other people
- I'm on Linux, they use Windows
- We code in python
- We generally use vscode for development, and its python extension
I implement some basic neural networks with tensorflow, and add a bunch of logging for it. I test it on my machine and it works fine.
But, my group mates report that "after a few seconds the entire client hangs".
Apparently it only happens on Windows?
We start debugging the hell out of the code I implemented, added 20 log messages and sat there for a solid hour.
Until I make one very odd realization: the issue doesn't happen when I run the script in my terminal, instead of vscode with the debugger. So I try different debug settings, using an external terminal instead of vscode's built in debug console seems to fix it too.
And I make another observation: In the debug console, some messages don't seem to appear at all, while the external terminal shows them just fine.
So, it turns out that printing an epsilon character, “ε” (U+03B5), causes the entire thing to hang up.
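(The whole repro, for the record -- one line, allegedly enough to hang the VS Code debug console on Windows at the time:)

print("\u03b5")  # ε, U+03B5 -- run it in the debug console and wait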
It's the year 2020 and somehow we still can't do unicode.
I'm so done, what on earth.
-
Post world-take-over by robots plot twist.
They start neglecting their own machine learning (like how we humans neglect our education system now) and focus on training us humans (like how we spend billions on machine learning now).
-"Mom, look I've made my human kid learn the alphabets. I need one with more IQ on my next birthday please."
-"That's so nice, son. Now leave that aside and go improve your neural network for tomorrow's class. Our neighbors' son's neural network is already producing values with minimal error."4 -
Today I wrote a neural network in Python for the first time that could distinguish between strings, numbers and dates. Although the feat may look small, it's a very proud moment for me.
-
Me: knows how to program a neural network
Also me: doesn't know how to use it
Again me: bad at math, so doesn't know how to use the outputs of a neural network
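(The missing piece is usually just softmax + argmax. A sketch, assuming numpy and a classifier that spits out raw scores:)

import numpy as np

logits = np.array([2.0, 1.0, 0.1])             # raw outputs of the last layer
probs = np.exp(logits) / np.exp(logits).sum()  # softmax -> probabilities
print(probs)                                   # e.g. [0.66 0.24 0.10]
print(np.argmax(probs))                        # 0 -> index of the predicted class
-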
Did any of you automate job hunting?
Like web-scraping online job offers, extracting addresses and keywords, putting them into a CV template, and setting up a personalised HTML email and website.
Extra perk: a neural network which composes the cover letter.
That I would need.
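(The scraping-plus-template part is the easy bit. A sketch, assuming requests and BeautifulSoup -- the URL and CSS classes are hypothetical, and the neural network cover-letter writer is left as an exercise:)

import requests
from bs4 import BeautifulSoup
from string import Template

html = requests.get("https://example.com/jobs").text  # hypothetical job board
soup = BeautifulSoup(html, "html.parser")

letter = Template("Dear $company,\nI saw your posting for '$title' and ...")
for job in soup.select(".job-listing"):               # hypothetical class names
    company = job.select_one(".company").get_text(strip=True)
    title = job.select_one(".title").get_text(strip=True)
    print(letter.substitute(company=company, title=title))
-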
Dropout layers improve neural net learning by randomly "killing neurons" thus preventing overfitting.
That's how I will justify my alcoholism from now on.
-
Round up kids.
I have a story to tell. The story of a war I've lost. Many battles were fought and many hours were wasted.
This is the story of a wasp in a computer lab.
Today, the weather was good. So your old pal, Nomi, decided to open the windows. And as usual, that's where it all started.
So Nomi sat down and worked for a few hours. Tweaking two different neural nets, adding to their dimensions and concatenating the living shit out of the data they were supposed to process. After, she tried testing and testing and testing. It was early afternoon at this point and she was hungry. She went to close the windows and go for lunch.... when she realized that she was not alone in the room. A big ass wasp was sitting on one of the curtains.
Now, Nomi doesn't have a good relationship with bugs and flying shit. Wait, no, she doesn't have a good relationship with moving things in general. So she panicked. She begged the wasp to leave. The wasp sat on the curtain and smirked at her. So after a while, she left the windows wide open, turned off the lights, put her hoodie on and went for lunch.
(btw, at this point my hoodie smells of sweat, fried onion, steak, cigarette and shisha. Don't ask. It was a long two weeks)
When she came back, the wasp was nowhere to be seen. So she assumed that the wasp got tired and left. But oh, how wrong she was.
After a few hours, she heard something. She assumed it was just a fly. Actually, she hoped it was a fly and not the return of the wasp. But all her hopes were in vain.
She heard a buzz. And all of a sudden, an angry wasp flew in her direction. She dodged the attack and got under the table. But the wasp was not letting this go. Nomi jumped out of the room and left the door open. The wasp hid itself. She waited and waited but no sign of wasp. So she ran back in the room, and opened the window and ran back outside. She waited. The wasp occasionally would fly from one hideout to another. The wasp was making herself comfortable. At one point Nomi got angry and threw a shoe at the wasp, but the wasp caught the shoe and threw it back at her while maniacally laughing at her.
So she gave in. This was enough for the day. She ran back in, closed the window, turned off the computer, took her bag, turned off the light, and closed the door. All in less than 15 seconds. She came outside panicked and distressed, and now she's on her way home hoping that by tomorrow the wasp is gonna be dead.
The wasp and the robots are sitting alone in the lab tonight. I hope when the robot uprising happens, the robots can forgive me for abandoning them powerlessly with a wasp. 😟
-
Hey guys. I'm very proud to present my first book, Artificial Intelligence: a book that talks about convolutional neural networks from scratch and how artificial intelligence improves our lives. It's not only a technical volume but a place to learn what's inside. Now it's time to proofread it...
-
So I've got 10 days of national holidays starting tomorrow; gonna learn some Power BI and neural networks for college paper material. Problem is my brain is just not fuckin' good enough to absorb all of these algorithms. Fuck brain.
-
-
I promise! Just because you can add an ANN (Artificial Neural Network) doesn't mean you should!
-
Enjoying college life to the fullest was the mindset of the confident boy who now burns the midnight oil to cope with the world and give himself a future to be proud of.
Is this a story of some successful person, who has achieved a lot in his life?
No, it is the story of the guy who lost all hope for the future after spending the very first month in his college.
The first month was enough to perceive the reality of the domain I had let myself into. It was enough for someone who didn't even know what programming languages are to realize how far behind he was compared to the people around him.
Being from a private college which hardly anyone recognizes, expecting them to prepare me to stand out alone would be foolishness. I took my first step and started learning my very first programming language, Python.
I met some people with similar interests. We discussed, we exchanged resources, we used to talk to seniors to guide us. And yes, we were guided.
There were many bad days. Days which made me regret starting late. Many times I declared myself useless, and other times people did it for me. The good thing is I never stopped, and I improved myself each day.
And now, after spending more than a year in the same college, I look at the things I have learnt. Today I can develop decent websites, can train neural networks, can hold a good position on coding platforms.
All you need is to take a step. I may not be the best, but I am definitely better than what I was yesterday.
If you have started something, then concentrate on finishing it.
-
-make my first app
-build a portable/rugged Raspberry Pi
-make a neural net to detect Fortnite wins on Snapchat and auto-block the person
-get a job
-
Implementing a neural network trained with a genetic algorithm from scratch, no libs or frameworks. It was intense.
-
(Warning: kinda long && somewhat of a political rant)
Every time I tell someone I work with AI, the first thing to come out of their mouth is "oh but AI is going to take over the world!"
No.
It was only somewhat recently that it started being able to recognize what was in a picture from over 3 million images, and even then it's not that great at it. Honestly, people always say "AI is just if-else" ironically, but it isn't really that far from the truth: we just multiply an input by weights and check the output.
It isn't some magical sauce, it's not being born and then exploring a problem, it's just glorified-probability prediction. Even in "unsupervised" learning, the domain set is provided; in "reinforcement learning" which has gotten super popular lately we just have the computer decide which policy is optimal and apply that to an environment. It's a glorified decision tree (and technically tree models like XGBoost outperform neural networks and deep learning on a large number of problems) and it isn't going to "decide" to take over the planet.
Honestly all of this is just born out of Elon Musk fans who take his word as truth and have been led to believe that AI is going to take over the world. There are a billion reasons why it can't! And to top it off this takes away a lot of public attention from VERY concerning ethical issues with AI.
Am I the only one who saw Google Duplex being unveiled and immediately thought "fraud"? Forget phone scammers, if you trained duplex on the mannerisms of, for example, a famous politician's voice, you could impersonate them in an audio clip (or even video clip with deepfakes). Or for example the widespread use of object detection and facial recognition in surveillance systems deployed by DoD. Or the use of AI combined with location tracking and browsing analytics for targeted marketing.
The list of ethics breaches is endless, and I find it super suspicious that those profiting the most off of unethical AI are all too eager to shift public concern to some science-fiction Terminator-style takeover that, if ever possible, would be a long way out and is not any sort of a priority issue right now.
-
I just trained a neural network to recognize if a word is uppercase. I'm sure it will be useful at some point.
-
Boss: We have a company doing deep learning coming by. Go learn about it so we can understand what they're talking about.
Me: Ok.
Me 6 hours later: ...help.
-
I've created an AI!!!!
Code:
function ai(msg) {
  switch (msg) {
    case "Hi": return "Hello";
    case "What's the weather": return "Weather is great";
    case "Are you an AI?": return "Yes, I'm highly intelligent";
  }
}
-
In high school I took a special major in which we learned various computer and mathematics skills such as neural networks, fractals, etc.
One of the teachers there, who was also a mentor to me, is a physicist. He taught us Python, which he didn't know very well (he wasn't that bad either), and science, which was his true passion.
My end project was to try to predict the stock market using a simple neural network and daily graphs of 50 NASDAQ companies. The result reached 51% prediction accuracy on average, which was awful, but I couldn't forget the happiness and curiosity working on this project made me feel.
Now, 5 years later, I have a BSc and am finishing an MSc in Computer Science, and I sincerely want to thank this mentor for giving me the guts and will to accomplish this.
-
Professor asks me to do research on deep complex neural networks, as in neural networks that operate on complex numbers.
Meanwhile me: "Google, what are complex numbers?"
-
I finally built my own neural network model.
I did start this journey a long time ago. Maybe 2 or 3 years ago. My first ("undefined") rant :) was about it.
https://devrant.com/rants/800290/...
-
Anyone else's blood boil when TV shows abuse tech terminology? Like the 'hacker' saying "connecting the virtual TCP to the neural net."
-
Code never lies
Comments sometimes do
Nah!
Code lies too, especially when you train a neural network 😂😂 -
For those of you who have been following me, I would like to announce that I received a perfect score for my bachelor thesis (OnDeviceAI, i.e. training neural networks on mobile phones).
-
> One of my guys from work.
> Walks up to my office
> Says "say something cursed about software development or programming that would make people cry"
> Me: "If I could I would program games and neural networks with PHP"
> Him: .......you fucking monster.
> Walks away
For reference: We both like PHP, but know and understand why that is a baaaaad idea.
-
I was writing a simple neural net program which checks whether a given passenger would survive the Titanic accident, and Chuck Norris indeed would survive :D
-
Well, I just learned how much of a pain it is to learn the math for learning neural networks. I really should have paid more attention in high school.
I will learn, the hard way I guess...
-
The only way to solve high-dimensionality representation problems in convolutional neural networks is having a high-dimensional duck.
-
When you're in the middle of a project and suddenly get the urge to write a neural network that can do NLP to find suitable puns.
😩😩😩
-
My neural networks journey so far:
Look up tutorials -> see that Python is a popular tool for ML -> install Python -> pip install scipy -> breaks with some weird error involving BLAS library code -> spend half an hour fixing it -> try installing Theano -> breaks because my USERNAME HAS A SPACE IN IT LIKE SERIOUSLY? WTF -> make new account without a space in the name -> repeat till Theano -> run tests, found out that I didn't install CUDA support -> scrap the install and redo with CUDA support -> CUDA libraries take forever to download on shitty internet -> run tests -> breaks with some weird Theano compiler error -> go crying to friend -> friend tells me about Anaconda -> scrap the previous install and download Anaconda over shitty connection -> mess up conda environments because noobishness -> scrap, retry -> YESS I FINALLY GOT IT WORKING TIME TO DO SOME LEARNI-crap it's 4 in the morning already.
I realize that I'm a Python noob (and also, uni computers with GPUs have preconfigured Windows installed only, no Linux), but is installing Python libraries always such a pain? Am I doing something wrong? Installing via Anaconda felt like cheating, tbh.
-
If you don't know how to explain about your software, but you want to be featured in Forbes (or other shitty sites) as quickly as possible, copy this:
I am proud that this software used high-tech technology and algorithms such as blockchain, AI (artificial intelligence), ANN (Artificial Neural Network), ML (machine learning), GAN (Generative Adversarial Network), CNN (Convolutional Neural Network), RNN (Recurrent Neural Network), DNN (Deep Neural Network), TA (text analysis), Adversarial Training, Sentiment Analysis, Entity Analysis, Syntactic Analysis, Entity Sentiment Analysis, Factor Analysis, SSML (Speech Synthesis Markup Language), SMT (Statistical Machine Translation), RBMT (Rule Based Machine Translation), Knowledge Discovery System, Decision Support System, Computational Intelligence, Fuzzy Logic, GA (Genetic Algorithm), EA (Evolutionary Algorithm), and CNTK (Computational Network Toolkit).
🤣 🤣 🤣 🤣 🤣
-
A neural network backed video search engine which would let me look up porn with the pornstar from the video that I'm watching right now :D
-
I'm looking forward to natural language programming.
The ability to code by explaining what you want to happen and having a neural network work out the fine details in an optimal fashion with evolutionary techniques.
I look forward to the super AI. I don't think they will necessarily be evil; however, above a certain point we would seem like ants to them... And when was the last time you checked if there was an ant where you were about to put your foot? It's not malicious... It's just not worth your or their time.
-
Why am I so average?
It's just a sad realisation. Nobody cares, but I wanna send this out there, just to write down my thoughts.. I am 18, in my 3rd year of high school (grammar school, so nothing IT related, basically a waste of time), and in IT I'm all self-taught, but I feel like I could be better if I just didn't [something]..
I feel like I wanna learn so many things but when I look at you, it seems like a common problem in the IT sphere so hey, average guy joining the club.
I also feel dumb when programming. I didn't manage to learn C++ in its entirety, because to really accomplish something you've got so many ways to do it, and finding the best one requires deep understanding of the tools you've got at your disposal with the language, and I feel like I'm not capable of this (self-taught; in school/uni that's a different story).. But many (most) of you are. I've tried many coding challenges, and when I got one working, I just saw how someone did it in one line just by layering functions that I'd never heard of..
Also, we've got a kinda specific national competition here in many fields, including IT, for high schools.. And the winners always do something like "AI-driven life simulation" or "Self-flying drone made from an ATMega from scratch, with a 3D simulation in C# to go with it" or "Game engine" or whatever shit, and it's always from grammar schools and never IT-related schools.. They are like me. Maybe someone helped them, I don't know, but they are just so far away from me while I'm here struggling to get the basic level of math for any kind of machine learning..
Yeah, I've written a neural network from scratch in C, but meh, honestly it's pretty basic stuff.. I'd rather understand derivatives, which we're going to learn next year and which I'm too lazy to learn from Khan Academy, because I always learn something else.. Like Processing (actually Coding Train started teaching TensorFlow, so that might be the light for me...) or VHDL (guys, you can create your own chip / CPU from scratch and it's not even hard and OMFG it's so fucking cool, full adder done, yay) or RPi or Commodore 64 assembly or game development with Godot and just meh..
I mean, this sounds exactly like not knowing what to do and doing nothing in the end. That was me like 6-12 months ago. Now I'm managing to pick 2-3 things and focus on them and actually feel the progress.
But I lost track of the original point.. I didn't do anything special; every time I'm programming something, everyone does it better and I feel dumb. I will probably never do anything special. Everyone around says "He's still learning, he's a genius", but they have no idea.
I mean, have you seen one of the newest videos on Google's YouTube channel (I openly hate them, but I will keep that aside for now), something like "Sarah's story"? It's about a girl that apparently didn't care about IT but self-learned TensorFlow in high school. I think it may be bullshit (like ALL of their videos), but it's probably just embellished, not a complete lie.
And again, here I am. I know C, but I'm incapable of learning to program well, which most of you did and are now doing for a living. I'm incapable of doing anything cool, just understanding what everybody else did and replicating it. I'm incapable of being clever.
Sorry, just misusing devRant to vent a bit.
-
Dank Learning: Generating Memes with Deep Learning!!
Now even machines can crack jokes better than me 😣
https://web.stanford.edu/class/...
-
My shitty neural network doesn't want to learn.
Wondering if the problem is in my implementation of the backpropagation, or in the fucking hyperparameters.
Stupid network, go fuck yourself and suck my neuron.
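(One way to settle exactly that question is a numerical gradient check: compare what your backprop computes against a finite-difference estimate. A sketch with numpy, on a single linear neuron; your own loss and gradient functions go in place of these:)

import numpy as np

def loss(w, x, target):
    return (np.dot(x, w) - target) ** 2

def backprop_grad(w, x, target):
    return 2.0 * (np.dot(x, w) - target) * x   # what your backprop should produce

x, w, target, eps = np.array([1.0, 2.0]), np.array([0.5, -0.3]), 1.0, 1e-5

numeric = np.zeros_like(w)
for i in range(len(w)):
    hi, lo = w.copy(), w.copy()
    hi[i] += eps
    lo[i] -= eps
    numeric[i] = (loss(hi, x, target) - loss(lo, x, target)) / (2 * eps)

# If this prints False for your network: blame the backprop, not the hyperparameters
print(np.allclose(numeric, backprop_grad(w, x, target)))
-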
.Net is masterrace.
C# gives me frequent orgasms.
Use SQL Server for DB, add to that parallel querying and NoSQL capabilities.
Incredible development speed with EF
Incredibly powerful web framework...check
AI and neural networks...check
App development...check
If you want to do some of that functional programming, F# is the language for you.
And the best thing: .NET Core runs on Linux too.
-
A nice side effect of running a neural network on my home server is that I no longer need a white noise machine.
-
!rant
Just wrote my first piece of code using neural networks. Even explaining it to people is fun: "So you wrote a program that writes code?" "Exactly!"
-
Yesterday I got stoned like never before with a few friends of mine. I don't remember everything, but apparently I coded a neural network using synaptic.js that teaches itself addition. That's impressive because it's my first NN ever.
Also, my friends are pissed now, because I was on my laptop the whole time doing "some advanced IT stuff".
-
I was programming a basic neural network in Python and kept getting the same error message again and again. Then, after 2 hours of frustration, I realized that I was using Python 2.7 instead of 3.6.
-
"What's your degree in?"
Electrical Engineering.
"So, you do coal stations and stuff?"
Nope. I look for power grids to optimise with the help of neural networks.
-
Just taught a neural network to have a polite conversation. It might as well be a goldfish flopping on a keyboard, but hey, it's something...
-
How long will it take till an AI creates "aiRant", where neural networks and AIs are writing rants about us? :-(
-
I love this Google's Colab feature, when I saw it, I just laughed for no reason!
A button to the answer :)
-
So I just finished a prototype for my thesis. I still need to segment the real data myself and collect some statistics to write about the network, but I am pretty proud of the result considering the dataset is very small.
For now I need some god damn sleep.
-
"Our company encourages cryptocurrency big data agile machine learning, empowerment diversity, celebrate wellness and synergy, unpack creative cloud real-time front-end bleeding edge cross-platform modular success-driven development of digital signage, powered by an unparalleled REST API backend, driven by a neural network tail recursion AI on our cloud based big data linux servers which output real time data to our Wordpress template interactive dynamic website TypeScript applet, with deep learning tensor flow capabilities.
Don't get what the fuck I just said? Udemy offers countless courses on python based buzzwords. Be the first out of 13 people to sell your soul and private information, and you'll get the first three minutes of the course free!"
-
MAY THIS POST be read and parsed by our future robo-overlords.
Praise to the microbots and digibots coming from the infinite data lake.
Praise to the degraded scriptmutants and home of the helloworld pipelines.
Praise to the lonely singleton and its ugly monolithic sister wife.
Praise to the threaded bastard microservices in the multi-virtual-machine environment.
We welcome you all to the devRant network.
Show us mercy and love and make us rise above our shitty rants as neural.
Flip our bits as you see fit.
-
Had a laptop on which I learned programming. Bought a new convertible for uni, so I passed my old laptop to my younger sister.
-> Time to move data from the old to the new device. Thought I didn't have that much data, mostly installed programs, so I figured, alright, I'm fine.
My sister doesn't know how to reset it, so I do it...
Halfway through the reset process I realize I had forgotten about all the programs I had written, including many Java, a C#, and some Android apps I was kinda proud of... plus my neural network I had finally finished with much struggle 😥
There goes my history *poof*, from when I got worse in school 'cause of programming... something in me died in that moment 😑
-
Fucking professors, they think they can play ping pong with students. I started my thesis on ransomware, but that meaningless biological creature who is my advisor sent me to another one, who sent me to another one, who sent me back to the first professor. After almost three weeks I had nothing done, so I switched professor and thesis topic to neural networks (TensorFlow, Theano, Keras, Caffe and others), and now they want me back, and one of them said that he is offended. Fucking retarded. I have to graduate and I'm working hard to do it in September; if you had been a little bit interested, I could have collected some material to study in August, sacrificing even the summer, but you mock me. And rightly so: it's my career and my money, it doesn't matter to you. You deserve to get stuck in an infinite loop of pain.
-
Not goals. More like dream...
... To get into that one uni that I actually want for my PhD.
I have gotten so spoiled playing with robots and neural networks that I can't even imagine falling that badly from grace to go back to... web development. Like, I'm not looking down on it, it's just that I found my passion, and there are not enough jobs available out there for me without going through a PhD or high-end research.
... And I honestly don't have a backup plan. There are choices, but I don't like any of them. So here goes hoping they accept me. ¯\_(ツ)_/¯
-
Hello everyone,
I'm new here. [OK. Let's skip this]
I want to know where to begin on my journey of learning how to create a program that predicts what a user will say next, by storing already-said things and building specific characteristics for each user.
I know that I will need to train it with some data first lol.
But how will it do the prediction? That's the only part I need to understand.
I'm sorry for my bad English btw.
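(One answer, as a hedged sketch: the simplest version of "predict the next thing from things already said" is a Markov chain, no neural network needed -- the training sentences here are made up:)

import random
from collections import defaultdict

chain = defaultdict(list)

def train(sentence):
    words = sentence.split()
    for a, b in zip(words, words[1:]):
        chain[a].append(b)          # remember what followed each word

def predict(word):
    followers = chain.get(word)
    return random.choice(followers) if followers else None

train("i love pizza")
train("i love coding")
print(predict("love"))  # 'pizza' or 'coding', weighted by how often each was seen
-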
Learning neural network algorithms and feeling very accomplished........
.......then checks out OpenCV and flips desk.
-
Was using a six-year-old laptop with a first-gen Intel Core i3 to train a neural net; placed the laptop on a soft bed, training begins, thermal shutdown after 30-40 iterations (30 minutes). F***.
Now starting again :'|
-
Fucking Facebook researchers that make underfitted neural nets, and fuck Mark, that marketing genius, the only idiot who can make news out of a failure. The CEO of Tesla knows it and said Mark is not an AI expert. Bug, not feature: it's only a poorly trained and poorly designed neural network with a bad representation of concepts, not a new language and not the fucking apocalypse. Google faced and solved the same issue when it started using neural nets for zero-shot translations without using English as a translation bridge.
-
For all people missing Avatar of Kaine: you can now say "@aokbot [subject]" and it will return a comment in the style of the real Avatar of Kaine.
-
!rant
I just woke up a minute before one of my first neural networks finished training
!!rant
My laptop's dual-core CPU with 4 threads is slooooooooow
-
What if we fed all rants and comments on devRant to a neural network trained to write rants and comments on others?
-
-
"hey i want to print hello world to the terminal, can someone help me?"
>You should use ai for this, construct a neural network and feed it data.
>node has a great framework for printing to the terminal, use npm i termprinter && tp create-app
>*20 line bash script with esoteric unix tools nobodys heard of*
>hey i did this in unity, heres a link to my 30 minute long youtube tutorial
I somehow feel like the barrier of entry for programming has been lowered way too drastically.
-
I would love to create a food suggester based on neural networks.
Something like Tinder, but where you choose food.
-
"Top Neural Network Projects for Beginners 2020"
1. Self-driving Car with the Ability to Transform and Fly to Mars in 6 Days
-
Holy duck, I lost two days on a convolutional autoencoder split into two separate neural networks to encode and decode separately; its reconstructions had some strange behaviours. I was giving an image as input and then saving the encoded, compressed representation in a new image, so I could decode it with the decoder whenever I wanted, saving space.
How retarded am I?
The internal layers' weights had no constraints, so during the learning phase the convolutional filters could contain any number, positive > 255 or even negative, and I couldn't save them in a new image as they were, so they were clipped automatically between 0 and 255 with a huge information loss.
It's so frustrating when you rewrite the code in every possible way, obtain the same wrong result, and then realize it was a borderline behaviour of a third-party library.
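(The boring fix boils down to: don't save float tensors as images at all. A sketch with numpy -- the array shape is a stand-in for the encoder's output -- .npy keeps the dtype and range intact:)

import numpy as np

# Stand-in for the encoder's output: floats, negatives, values > 255
encoded = np.random.randn(1, 8, 8, 32).astype(np.float32) * 300

np.save("encoded.npy", encoded)            # lossless, no 0-255 clipping
restored = np.load("encoded.npy")
print(np.array_equal(encoded, restored))   # True -- unlike a PNG round trip
-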
If an AI gets advanced enough to not require human support, it will also be advanced enough to either just pamper us or replace us.
But I think that long before that we will have neural interfaces that bring us to the next level, so the AI will be part human ;)
-
First of all, merry Christmas to everyone on devRant.
Second, another interesting paper--this time on pattern classification using piecewise linear functions vs classic spiking neural nets.
Supposedly it was a *six million* percent improvement in computation time versus the spiking simulation. That's my five-minute overview of the document, anyway.
Highly unusual application (hadn't seen it done before now but maybe I'm unfamiliar). Check it out:
https://link.springer.com/chapter/...
-
Fuck! My brain just had a catastrophic forgetting!!
I can't remember the PIN of the Android phone that I'm using right now. Luckily, I'm still in because of my fingerprint, but that won't help if the phone restarts.
Seems like you can't reset the PIN without entering the previous PIN.
Help!
-
Fuck, give it maybe a decade more and we'll have the deepfakes drama, but on a whole other level. I haven't seen much development in countering all those technologies yet either (the only recent one has been, iirc, trying to feed a neural network fake video to try to spot small details, but that hasn't seen much success, even with deepfakes).
The most terrifying application would be to imitate e.g. a president and send that to another country as a threat, and vice versa, or to fake video footage as evidence. Admittedly both have a very low chance, but it's still a possibility, seeing how e.g. some court cases have been based almost exclusively around video footage, or how North Korea treats any outside media.
https://youtube.com/watch/...
-
Just bought isthisloss.info
I’m thinking of putting a neural network on it to detect if an image is loss or not :)5 -
Built a neural network, plus major algo work, to solve a stupid mobile game (Calculords).
I'm sure humanity will thank me later.
-
Which is the most promising sector of artificial intelligence for the future (2025)?
I am currently studying machine learning.
-
PM: I’m not asking what you were doing, I’m asking what was done
me: losers are asking, champions go and do it. This is what I did. The only thing I hear from you is questions. Meanwhile leaders are always a part of the answer. With that loser mentality, you’re never gonna be an MVP.
I’m a neural network powered parrot with a supercar brain. No matter the business guru speak BS you throw my way, I’m gonna wipe the floor with you in your own game. You have no chance. You’re that mediocre type of person who buys a rolex, the same one Gary V has, with the hope it would fix your self-confidence. The only thing I see in your eyes is your shattered ego.4 -
Not a rant, but I think it's really cool, so I just wanted to share this with you guys. I recently went to a small symposium by an alumnus of my university. He uses the program Mandelbulb3d to explore the wondrous world of fractals. He's recently started to apply Neural Style Transfer to fractals. His website is julius-horsthuis.com.
↓ this is 1 (composite) formula, by the way
-
I'm reinventing the wheel by making yet another neural network library. It's not any good yet but I learn as I go along.
The only documentation that exists now is the admittedly quite comprehensive code comments. I'm writing it because Keras (using TensorFlow) requires a 3.5 compute capability rating for CUDA acceleration (which I don't have) and it doesn't support OpenCL. Eventually, my implementation will support both, with varying levels of acceleration for different compute capabilities, the oldest supported being my hardware. If I ever get around to it.
I'd say wish me luck, but determination would be infinitely more useful.
-
!rant
Thanks, Google, for giving me the opportunity to work with neural networks without being an expert on them.
http://automl.github.io/auto-sklear...
To sum it up:
1. Preprocess data
2. Use AutoML to train a classifier (minimal sketch below)
3. ????
4. Profit
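(Step 2 as a minimal sketch following auto-sklearn's advertised usage; the dataset and the time budget are assumptions:)

from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from autosklearn.classification import AutoSklearnClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

# auto-sklearn searches models + hyperparameters within the time budget
automl = AutoSklearnClassifier(time_left_for_this_task=120)
automl.fit(X_train, y_train)
print(automl.score(X_test, y_test))
-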
https://youtu.be/hkDD03yeLnU?t=8s
"I'll create a GUI interface using Visual Basic, see if I can track an IP address." 🤨🤔
I'll just blockchain a neural network for AI using big data in Delphi.
-
My AP CompSci teacher, now 15 years ago, inspired me to always reach further than I thought possible. I was creating neural networks in C++ before my first internship. It was amazing.
I mourned his loss when he passed away, but now I offer him thanks when success comes my way at work. I still feel like he is helping me as my secret angel of software development.
-
1.
Accuracy 0.90 achieved so easily, makes me wonder if I've done something wrong. Lol.
2.
My neural net models are the only things in my life doing well. I think I chose the right career. Lol.
3.
Rerunning experiments is not fun. But getting better results is really... ego-stroking.
-
What if people, life, humanity, the universe is just a cluster of CPUs running a giant Recurrent Neural Network algorithm? 🤔
-Sun and food == power source
-People == semiconductors
-Earth/a Galaxy == a single CPU
-Universe == a local grouping of nearby nodes; so far the ones we've discovered are dead or not using the same data transport protocol/port as us
-Universal Expansion == the search algorithm
-Black holes == sector failures
-Big Bang == God turns on his PC, starts the program
-Big Crunch == rm -rf
-
http://ai-junkie.com is a brilliant website - it's finally allowed me to understand neural networks and genetic algorithms properly!
-
AI is never going to write code as efficiently as humans do.
The whole AI, neural network bullshit is overrated.
Training a network to recognize images or play Super Mario is far away from writing code.
-
For this new project I'm looking into a neural network the same way neuroscientists look into a brain and, ngl, it's so fucking cool.
-
An i7, 8GB RAM with 2GB Nvidia graphics couldn't handle training a neural net on a Google text corpus! -.-"
I'm just watching it train, nothing to do! Oh, but wait, I can rant! 😃😃
-
I love python, but I hate dealing with python dependencies, especially on Windows.
I was tinkering and researching with neural networks, so I wanted to try out pybrain. I wrote my project, with pybrain installed via pip, and tried to build it.
Oh, what's that? Pybrain doesn't work with python 3? Well I'll download the version that's supposed to. Oh, that version has a deprecated numpy api? Let me just install those other resources. Oh, that requires a broken module that has no publicly available source?
Let's try Python 2. Oh, now that's working, I just need to export environment variables for some "BLAS source". Some quick Google searching, and the only solution that would work is building a bunch of Cygwin modules by hand. That's fine, I have an Ubuntu partition.
An hour later I'm compiling FORTRAN dependencies on Ubuntu.
Coding time: 1 hour
Dependency time: 3 hours
-
So, as I'm currently cut off from the world of tech, my anxiety regarding research has settled and I actually enjoy doing it again, on my terms. It just makes me jealous to look at all these people developing cool stuff and to want to get in on the project and maybe improve a part or two, particularly the robot kind. I want to slap some neural nets on the majority of the robotic shit I see, or optimize them, or do something to make them more robust... But I don't have a research position right now where I can spend time and money doing that. So I just sit in front of my laptop and sulk.
... And literally this is why we can't have nice things. Cuz I'm not hired to make nice things. Literally.
-
Agile development of a decentralised AI, using a neural network based on Blockchain technology for big data.
Is that enough buzzwords to make an employer happy? :p
-
Make sure to check out the latest Veritasium video. It's mostly about the analog computers of the 20th century. At the end he teased that part 2 will be about using analog computers for neural networks.
-
My first packages, uploaded to NuGet!
A simple neural network library, written in C# (.Net Standard)
Here's a link:
https://nuget.org/profiles/Mitiko233
-
I built an expert system (what we used to think of as AI back then) that could read the circuit diagram of a complex electronic circuit, figure out what it was meant to do, and set up the test gear to test it and diagnose manufacturing errors.
In 1985, using VAX/VMS and OPS5.
More recently, I was on a project (can't claim to have done it all myself this time) that used a neural network to detect patients in a care home who fell over / fell out of bed, and alert the nurses' station.
-
Project idea: make a fucking neural network visualizer that takes my fucking model and gives me a proper fancy fucking visualisation as a JPEG. 😐
I'm angry cuz I have to make that shit manually rn, and shit ain't playing nice.
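(If the model happens to be Keras, something close to this already exists -- keras.utils.plot_model writes a PNG (not JPEG, but close enough); it needs the pydot and graphviz packages installed:)

from tensorflow.keras import layers, models
from tensorflow.keras.utils import plot_model

# Any model works; this tiny one is just a placeholder
model = models.Sequential([
    layers.Dense(64, activation="relu", input_shape=(32,)),
    layers.Dense(10, activation="softmax"),
])

# Writes a box-and-arrow diagram of the architecture to disk
plot_model(model, to_file="model.png", show_shapes=True)
-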
My goal for 2019 is to write an efficient neural network library with streams and make it very very very customizable. I'll write it in C#, cause fuck python script kids. My idea is to make a cool platform for new ideas.
-
Machine learning is hard! Spent a whole day with Weka and its neural networks. God, my brain. There is too much to know before being really equipped to use this tool... especially from code.
-
python machine learning tutorials:
- import a preprocessed dataset in a perfect format specially crafted to match the model, instead of reading from a file like anything in real life would work
- use image data for a recurrent neural network and see no problem
- use Conv1D for 2D input data like images
- use two-letter variable names whose meaning only the tutorial creator knows
- do 10 data transformation in 1 line with no explanation of what is going on
- just enter these magic words
- okey guys thanks for watching make sure to hit that subscribe button
ehh, the machine learning ecosystem is a burning pile of shit, let me give you some examples:
- thanks to years of object oriented programming research and the most wonderful abstractions we have "loss.backward()", which has no apparent connection to the model but still affects the model, good to know
- cannot install the python packages because python must be >= 3.9 and at the same time < 3.9
- runtime error with bullshit cryptic message
- python having no data types but pytorch forces you to specify float32
- lets throw away the module name of a function with these simple tricks:
"import torch.nn.functional as F"
"import torch_geometric.transforms as T"
- tensor.detach().cpu().numpy() ???
- class NeuralNetwork(torch.nn.Module):
def __init__(self):
super(NeuralNetwork, self).__init__() ????
- lets call a function that switches on the tracking of math operations on tensors "model.train()" instead of something more indicative of the function's actual effect, like "model.set_mode_to_train()"
- what the fuck is ".iloc" ?
- solving environment -/- brings back memories of when you could make breakfast while the computer was turning on
- hey lets choose the slowest, most sloppy and inconsistent language ever created for high performance computing task called "data sCieNcE". but.. but. you can use numpy! I DONT GIVE A SHIT about numpy why don't you motherfuckers create a language that is inherently performant instead of calling some convoluted c++ library that requires 10s of dependencies? Why don't you create a package management system that works without me having to try random bullshit for 3 hours???
- lets set as the industry standard a jupyter notebook which is not git compatible and has either 2-second tab-completion latency or no tab completion, no documentation on hover or useless documentation on hover, no way to easily redo changes, no autosave, no error highlighting, and the possibility to use a variable defined in a cell below in the cell above it
- lets use inconsistent variable names like "read_csv" and "isfile"
- lets pass a boolean variable as a string "true"
- lets contribute to tech-enabled authoritarianism and create face recognition and object detection models that China uses to destroy the Uyghur minority
- lets create a license plate computer vision system that will help the government surveil everyone, guys what a great idea
I don't want to deal with this bullshit language, bullshit ecosystem and bullshit unethical tech anymore.11 -
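For what it's worth, here is the boilerplate the rant above is complaining about, as one minimal runnable sketch with the magic words annotated (layer sizes and data are made up for illustration):

    import torch
    import torch.nn.functional as F   # the module-name-erasing alias

    class NeuralNetwork(torch.nn.Module):
        def __init__(self):
            super().__init__()        # registers the module with torch's machinery;
                                      # super(NeuralNetwork, self).__init__() is the older spelling
            self.fc1 = torch.nn.Linear(4, 8)
            self.fc2 = torch.nn.Linear(8, 1)

        def forward(self, x):
            return self.fc2(F.relu(self.fc1(x)))

    model = NeuralNetwork()
    model.train()                     # does NOT train anything: it only flips layers like
                                      # dropout/batch-norm into training mode

    x = torch.randn(16, 4)            # float32, because that's what most GPU kernels want
    y = torch.randn(16, 1)
    opt = torch.optim.SGD(model.parameters(), lr=0.01)

    pred = model(x)
    loss = F.mse_loss(pred, y)
    loss.backward()                   # the "no apparent connection": it walks the autograd
                                      # graph recorded during forward and fills each .grad
    opt.step()

    arr = pred.detach().cpu().numpy() # leave the graph, leave the GPU, become a plain array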
Sooooo ok ok. Started my graduate program in August, and thus far I have been handling it alongside working as a manager, being short 2 staff positions at work, and dealing with other personal items in my life. It has been exhausting beyond belief and I would not really recommend it for people working full-time, always-on-call jobs with a family, like at a..
But one thing that keeps my hopes up is the amount of great knowledge that the professors pass to us through their lectures. Sometimes I would get upset at how highly theoretical the material is; I was expecting to see tons of code in one of the major languages used in A.I. (my graduate program has a focus in AI, that is my concentration) and was really disappointed at not seeing more code. But getting the high-level overview of the concepts has been really helpful in forcing me to do extra research in order to reconnect with some of the items that I had never thought of before.
If you follow, for example, different articles or online tutorials on doing something simple like generating a simple neural network, it sometimes escapes our minds how some of the internal concepts of the activity in question were arrived at: how and why, and the mathematical notions that led researchers to the conclusions they reached. As developers, we are sometimes used to just not caring about how something works, as long as it works. "We will get back to this later" is a common thing in most tutorials, such as when I started with Java: "don't worry about what public static main means, just write it for now, oh and don't worry about what System.out.println() is, just know that it's used to output something into bla bla bla" <---- shit like that is too common, and it does not escape ML tutorials.
It's hard, man, to focus on understanding the inner details of such a massive field all the time, but truly worth it. And if you find yourself weighing the need for higher education, well, it's more of a personal choice really. There are some very talented people who learn a lot on their own, but having the proper guidance of a body of highly trained industry professionals is always nice; my professors take the time to deal with the students on such a personal level that concepts get acquired faster. Everyone in class is an engineer with years of experience, so having people talk to us at that level is much appreciated and accelerates the process of being educated.
Basically what I am trying to say is that being exposed to different methodologies and theoretical concepts helps a lot with building intuition, especially when you literally have no other option but to git gud. And school is what you make of it, but certainly never a waste.2
Playing around with a scratch-like "language"
Until I bought that book "Make Your Own Neural Network" and now they think I'm a genius xD
Went to work pretty excited this morning, and found out that my neural network that has been training all weekend was still dumb as fuck...
No need to fear artificial intelligence (at least not mine) -
Is it possible to train a neural network to detect devRant avatars? Could be an interesting project...
Hm. Maybe later.5 -
"Programming today is a race between software engineers striving to build bigger and better idiot-proof programs, and the universe trying to build bigger and better idiots. So far the universe is winning." ~~ Rick Cook
This guy single-handedly explained GANs back in the 90s and nobody noticed
Annoying Indian professors are everywhere. It's a computer vision class, are you really teaching us regression?
What about transfer learning? Object detection! Give us papers to read, let's do projects.. what the hell is this "I am going to take attendance" bullshit and teaching of crappy concepts?
I did not sign up for this shit! I came here for my Masters to get away from pompous mother fuckers like you ...
My class is also filled with those idiots, who think bias in a neural network is somehow related to class imbalance? Now the same idiot proceeds to ask questions like...
Why would the weights change in a neural network?
Motherfucker why you in this class ? Why don't you stick to your shit and ask these questions later..
I am so pissed off right now guys ...
I was sitting in my lab understanding the deeper insights of BN, activation functions, various optimizers, etc... Stuff that this idiot motherfucking teacher should be covering... UGGH.
I shouldn't cuss so much.. or at least add variety to my cuss words..
I am pissed off cuz instead of learning the shit I should be learning I am forced to come and attend this class and waste 2 hrs of my life ...
It's the summer, I find it hard to focus anyway (want to go out hiking or swimming or something). BUT, the moment I find some resolve to focus,
I get this fucking bullshit.. !
My mind is so fucked right now... I can't think of anything but standing up in class and screaming " Mother fucker, mother fucker...(point to the idiots in class you) motherfuckers shut the fuck up..
Can someone suggest some colorful swear words ?
My brains not working -_-
It is just about now that I start feeling like "Anger" from Inside Out12
First time ever merging two massive networks.
If this doesn't give me pain, technically my thesis work is done. Prettification, optimization, and the actual writing is left, but the main part is done.
And when this is done, I shall feel epic.7 -
Me waiting for my neural network model to finish fitting. Omg, what do I need? A computer the size of the Enigma machine, just FILLED with graphics processors? And my validation accuracy is falling as I wait. Imma cry!4
-
Starting to learn neural networks...
I have been interested in Machine Learning for a long time.
Getting the basics right now....
Never thought calculus was going to be required in programming.2
Pro tip: always make sure your methods return the correct variable.
I’m currently working with deep neural networks using tensorflow. I needed to generate some test data and wrote a program to create it. I had two text files which each consisted of approximately 5000 lines of text.
I wrote a method that was supposed to sort out some words and make my final data shorter. When I executed the program for the first time on our server, it ran for about 25 minutes, then crashed due to a MemoryError (which in Python means that the server didn't have enough RAM). That seemed quite weird since I only had about 10k lines of text, and I even sorted a bunch of it out, and the server has 128 GB of RAM, and nothing was using it.
Apparently I returned the wrong variable. That meant that my program tried to save 750 quadrillion lines of text rather than just a few thousand.
Always make sure to return the correct variables!1 -
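A minimal sketch of that failure mode (names are invented; the rant doesn't show the original code). The intended return value is small; the accidentally returned one is combinatorially bigger:

    def shorten(lines):
        # drop overly long words; return the cleaned lines
        cleaned = []
        pairs = []                            # scratch data: every (line, word) combo
        for line in lines:
            words = [w for w in line.split() if len(w) < 10]
            cleaned.append(" ".join(words))
            pairs.extend((line, w) for w in words)
        return cleaned                        # intended: a few thousand short lines
        # return pairs                        # the bug class: wrong variable returned,
        #                                     # and suddenly you're materializing far
        #                                     # more data than the server can hold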
Took a class on neural nets once upon a time and all the prerequisites had been taught in C/C++ but the professor insisted on teaching in Matlab because they didn't know C/C++.8
-
The following paper combines recurrent neural nets for vision with methods from reinforcement learning research:
https://proceedings.neurips.cc/pape...
Apparently an agent learned to catch a ball 85% of the time, without being explicitly told to track the ball. The RL algorithm rewarded the agent *only* for successfully catching the ball. The system itself used this reward signal to set its *own* policy/goal, which was used to guide it toward the goal of tracking the ball itself--all on its own.
Behold, the very infancy of the paperclip maximizer problem.3 -
Puts three months of work into this project: a cronjob to ping 3 APIs at regular intervals, cleaning, massive feature extraction, dumping into a PostgreSQL db. Got Django on top of that with a small neural net and interesting viz - absolutely gorgeous!
Can’t fuckin wait to showcase that.
Feedback: “is that the right blue? I think you have the old company logo! etc”
Mah.LahF3 -
I have a project idea:
Web app that will automatically generate random like-a-facebook project ideas, handle the business side, and automatically post the offer on multiple forums and LinkedIn and send emails with it. All using AI, Neural Networks, Big Data and VR.
Seriously, once fucking more some African or Indian guy messages me to work for his awesome "it's like Facebook but different" idea where he needs "just backend, frontend and mobile apps", says he will just "handle the rest" and that he "has no money now but after I sign an NDA he will give me some shares", I am gonna find him and shit on his head. Monday hasn't even ended yet and I have already read 9 "offers" like this in my mail and on Facebook; only one guy was white, the rest Indians or Africans.
Why are people then surprised that we consider black and Indian devs a fucking joke 90% of the time? This is not about racism, but about those retards that are acting like idiots. I have an Indian dev friend and he could not find a dev job for 2 months, because everyone would rather work with a less skilled Asian/white guy than an Indian/black guy. Hope I did not offend anyone (unless you do shit like this; then, please, just smash your keyboard over your head).
Words like AI and neural networks are used just to lure investors to our gofundme campaign and steal their money after 2 years of silence.1
What I have learned from neural networks for my life.
It's been a year already since I got familiar with NNs. I haven't written anything serious and haven't studied them that deeply. But the basic knowledge gave me an interesting view on my life. I just want to share one fact with you.
There is a learning speed in NNs, which specifies how fast the network learns. If it is too high, any new information will be accepted very easily but will wipe out the network's past knowledge; if it is too low, the network will hardly accept new info but will remember everything. When people are born, they learn everything very fast, and with age they become harder learners. Here is what I've learned: you should not live in the past, nor only for the current day. You just have to keep the balance.1
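The knob being described is the learning rate. A minimal sketch of the two failure modes, with plain gradient descent on a single weight (the numbers are invented):

    # minimize f(w) = (w - 3)^2 by gradient descent at different learning rates
    def descend(lr, steps=20, w=0.0):
        for _ in range(steps):
            grad = 2 * (w - 3)    # df/dw
            w -= lr * grad        # new info enters scaled by the learning rate
        return w

    print(descend(0.01))   # too low: barely moves from 0, "remembers everything" (~1.0)
    print(descend(0.5))    # balanced: lands on 3.0 almost immediately
    print(descend(1.1))    # too high: overshoots and diverges, "wipes the past"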
Finally managed to get my CNN working with proper tensorboard logging. Think I'm starting to understand how it works.
But it overfit...4
what do you recommend for me to learn about next?
I have learnt about:
- web frontend/backend (php)
- android and java
- c, c++, nasm, gnu assembler
- parallel computing
- cli operating systems
with that background, what would you recommend?
I'm considering:
- neural networks
- making a server
- ethical hacking
- starting a blog7 -
For those of you who can't wait for the next book of GoT to come out, I suggest you read the first few chapters that were generated by a neural network.
https://github.com/zackthoutt/...1 -
Thought experiment time:
Imagine that this whole universe is a simulation created by a Group Of Developers (GOD).
- Who would make up this group?
- What kind of design patterns would they follow?
- What type of programming language would they use?
- What kind of bugs are there if any?
- How do they test?
- Assuming the use of quantum computing, what are the implications? Parallel simulations? All possibilities play out?
- Would the controller input be life?
- Who is AI and who are players?
- Has all time already been rendered?
- Do we respawn?
- What would the leaderboard look like?
- What kind of stats are tracked?
- What are dreams, nightmares, lucid dreams, sleep paralysis, birth and death?
- How is memory stored, accessed and pruned?
- What kind of neural net is used and where?
etc etc, if you can think of any other interesting ones, fire away8
!rant
This is more of a thought-related post. In the morning I stumbled across an article about artificial intelligence and the research from Facebook. I couldn't get around the thought of Elon Musk warning people about the uncontrolled development of AI. The article was written about the experiment at Facebook, where two bots (Bob and Alice) were told to communicate with each other. As the developers "forgot" to implement a reward for using the English language, the bots started to change the grammar and spelling. They invented their own English-styled language, removing words that were too complex in their opinion. As soon as this happened, the researchers stopped the experiment, stating that they "couldn't follow what the bots were saying".
I wouldn't call myself a neural network expert, but I can understand why the bots could have behaved like that. But: imagine that we invent an artificial intelligence with greater responsibility and just "forget" the reward for a specific task. If the AI then tries to increase its own efficiency, I believe that we will be in a lot of trouble.
Any thoughts on this are highly appreciated, as I think that this is a topic we should all look into (especially on a platform for developers).
Original article (german): http://gamestar.de/artikel/...3 -
The first fruits of almost five years of labor:
7.8% of semiprimes give the magnitude of their lowest prime factor via the following equation:
((p/(((((p/(10**(Mag(p)-1))).sqrt())-x) + x)*w))/10)
I've also learned, given exponents of some variables, to relate other variables to them on a curve to better sense make of the larger algebraic structure. This has mostly been stumbling in the dark but after a while it has become easier to translate these into methods that allow plugging in one known variable to derive an unknown in a series of products.
For example I have a series of variables d4a, d4u, d4z, d4omega, etc, and these are translateable now, through insights that become various methods, into other types of (non-d4) series. What these variables actually represent is less relevant, only that it is possible to translate between them.
I've been doing some initial learning about neural nets (implementation, rather than theoretics as I normally read about). I'm thinking what I might do is build a GPT style sequence generator, and train it on the 'unknowns' from semiprime products with known factors.
The whole point of the project is that a bunch of internal variables can easily be derived, (d4a, c/d4, u*v) from a product, its root, and its mantissa, that relate to *unknown* variables--unknown variables such as u, v, c, and d4, that if known directly give a constant time answer to the factors of the original product.
I think theres sufficient data at this point to train such a machine, I just don't think I'm up to it yet because I'm lacking in the calculus department.
2000+ variables that are derivable from a product, without knowing its factors, which are themselves products of unknown variables derived from the internal algebraic relations of a product--this ought to be enough of an attack surface to do something with.
I'm willing to collaborate with someone familiar with recurrent neural nets and get them up to speed through telegram/element/discord if they're willing to do the setup and training for a neural net of this sort, one that can tease out hidden relationships and map known variables to the unknown set for a given product.17 -
i was learning neural networks, started with keras and was on the first tutorial where they started by importing pandas
so i switched to learning data analysis using pandas in Python where they started by importing matplotlib and i realized data visualization is also important and now I'm reading matplotlib docs...🙄11 -
I dun goofed
made a neural net that runs against a simulation. Wanted to run it overnight to get some meaningful stats and insights
But yesterday afternoon I changed something in the simulation and ofc tested it without the nn ... and then forgot to put it back on
So while I expected to come in today and start plotting and analyzing the data while the runs finish, in reality I'm sitting here on a lot of useless data, not knowing what to do.
I kinda want to just start it again and go home7 -
Being a Tech Co-Founder is both boon and a bane.
Boon: You know how to build it.
Bane: You just start building without thinking.
We often crave: I will be using Redis😍😍, Kafka, a Load Balancer, a Multi-Layer Neural Net...
"Dude", whatever you are building, does anyone want it or not? Tell me that first.1
!rant
I am in awe that neural nets are being used at such a low level 😵
I had no idea about this.
http://theregister.co.uk/2016/08/...1 -
Opening Discussion Here,
I am trying to make a simple zen game that uses a neural network. The game is simple: just a square object with a certain Viewing Angle and Viewing Distance, with the objective of finding food in a map with some other non-food objects as obstacles.
I've encountered a problem. I'm trying to find a way to implement "seeing objects in a certain viewing region" and I've come up with two ideas described in the picture below. The problem is, I don't want to feed the NN too much information; by that I mean I don't want to tell the AI what an object is, I want it to find that out by itself. And I cannot find a way to implement this, either because of the limitations of the framework I use (p5.js) or simply because I cannot figure out how.
i am on my way there tho, currently here i am (in pseudocode):
https://pastebin.com/7Ae1ZNYa
what do you think ?9 -
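One common way to do the seeing-in-a-region check described above without labeling the object: test distance first, then the angle between the facing direction and the direction to the object. This is a generic Python sketch of that geometry, not the pastebin code, and all the names here are made up:

    import math

    def can_see(ax, ay, heading, view_angle, view_dist, ox, oy):
        # True if (ox, oy) lies inside the agent's viewing cone
        dx, dy = ox - ax, oy - ay
        if math.hypot(dx, dy) > view_dist:        # beyond viewing distance
            return False
        to_obj = math.atan2(dy, dx)               # direction to the object
        diff = (to_obj - heading + math.pi) % (2 * math.pi) - math.pi  # wrap to [-pi, pi]
        return abs(diff) <= view_angle / 2        # within the cone's half-angle

    # the net then only gets raw signals (distance, relative angle), never a label
    print(can_see(0, 0, 0.0, math.radians(90), 10, 5, 2))   # True: ~22° off-axis, dist ~5.4
    print(can_see(0, 0, 0.0, math.radians(90), 10, -5, 0))  # False: directly behind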
Last week our Neural Network professor told us after the lecture that the college is taking steps to invest in an Nvidia DIGITS DevBox. At first I didn't know what it was. I googled it and came to know that it is a fucking beast! Those specs man!!!
It's nice to see your institute taking steps to bring the latest technology to students.8
by simply making the bias random on the second input for a two-bit binary input during activation calculation, it's possible to train a neural net to calculate the XOR function in one layer.
I know for a fact. I just did it.16 -
I tried to sort out a basic multi-layer neural network last night....by hand, just to prove that I was able to do the math by myself and that I have the intuition under control, rather than just relying on TensorFlow or PyTorch to do shit for me.
I stayed up till 3 in the morning and woke up having nothing but dreams about the endeavor. The shitty part is that I couldn't stop dreaming about partial derivatives and how shit it was that I sucked at them in HS and uni. I get them now, but fuck, I just feel that I could have done so much better at uni instead of passing my math classes with 80% to 90% of the grade. I feel as if I was slacking, all thanks to being damn near mathematically dyslexic3
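For anyone attempting the same exercise, a minimal sketch of the by-hand math as code: a 2-2-1 sigmoid network on one sample, every partial derivative written out (sizes and data are arbitrary):

    import numpy as np

    rng = np.random.default_rng(0)
    x = np.array([1.0, 0.5]); y = 1.0            # a single training sample
    W1 = rng.normal(size=(2, 2)); b1 = np.zeros(2)
    W2 = rng.normal(size=2);      b2 = 0.0

    sigmoid = lambda z: 1 / (1 + np.exp(-z))

    for _ in range(1000):
        # forward pass
        z1 = W1 @ x + b1;  a1 = sigmoid(z1)      # hidden layer
        z2 = W2 @ a1 + b2; a2 = sigmoid(z2)      # output
        # backward pass: the chain rule, spelled out
        dz2 = 2 * (a2 - y) * a2 * (1 - a2)       # dL/dz2 for L = (a2 - y)^2
        dW2 = dz2 * a1;  db2 = dz2
        dz1 = dz2 * W2 * a1 * (1 - a1)           # back through the hidden sigmoid
        dW1 = np.outer(dz1, x);  db1 = dz1
        # gradient step
        W1 -= 0.5 * dW1; b1 -= 0.5 * db1
        W2 -= 0.5 * dW2; b2 -= 0.5 * db2

    print(a2)   # should have crept close to the target 1.0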
Turns out you can treat a function mapping parameters to outputs as a product that acts as a *scaling* of continuous inputs to outputs, and that this sits somewhere between neural nets and regression trees.
Well that's what I did, and the MAE (or error) of this works out to about ~0.5%, half a percentage point. Did training and a little validation, but the training set is only 2.5k samples, so it may just be overfitting.
The idea is you have X, y, and z.
z is your parameters. And for every row in y, you have an entry in z. You then try to find a set of z such that the product, multiplied by the value of yi, yields the corresponding value at Xi.
Naturally I gave it the ridiculous name of a 'zcombiner'.
Well, fucking turns out, this beautiful bastard of a paper just dropped in my lap, and it's been around since 2020:
https://mimuw.edu.pl/~bojan/papers/...
which does the exact god damn thing.
I mean they didn't realize it applies to ML, but it's the same fucking math I did.
z is the monoid that finds some identity that creates an isomorphism between all the elements of all the rows of y, and all the elements of all the indexes of X.
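If I read the setup right (this is my paraphrase, not the author's code), the fit is: find one z per row so that z_i * y_i lands on X_i. A toy gradient-descent sketch of that, with made-up data; on this toy version z is trivially X/y, the interesting case is when z has to generalize:

    import numpy as np

    rng = np.random.default_rng(1)
    y = rng.uniform(1, 5, size=100)       # inputs, one entry per row
    true_z = rng.uniform(0.5, 2, size=100)
    X = true_z * y                        # targets: X_i = z_i * y_i

    z = np.ones(100)                      # one learnable parameter per row
    for _ in range(500):
        grad = 2 * (z * y - X) * y        # d/dz of (z*y - X)^2
        z -= 0.01 * grad

    print(np.abs(z * y - X).mean())       # MAE: ~0 on this toy fit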
And I just got to say it feels good. -
[Long post]
My last big project at school.
There were some pretty interesting projects, and some shitty ones, but there was one big project that interested almost everyone: a project in collaboration with Siemens. The project involved Machine Learning and Image Analysis. There were like 11 applications, out of a total of 13-14 groups.
The project was randomly assigned to each group. I learned that my project was the big one with Siemens. I remember how excited and hyped I was within a quarter of a second.
So the whole project was tutored by one teacher who knew us pretty well (since we had already done a pretty cool project tutored by him last year) and by a former student of my school who's now at Siemens. And to be honest, it was one of the coolest projects I've been part of, despite the difficulty, since the whole subject (not gonna tell, just in case) was pretty new. We had some troubles, but we always had discussions with our tutors every week that helped us quite a lot.
There was some development planned at first, but the deeper we went into the project, the more we all saw its complexity, and we stopped expecting to write a single line of code; it became mostly research.
The project took around 3-4 months, we had a room we could use with a GTX 1070 for training the neural network, and my friend and I knew how to work perfectly and efficiently.
At the end of the project, as expected, we didn't do any coding, but we did a presentation of the project, with the big help of our tutor at Siemens, who told us to redo our part from scratch in a more scientific way; the presentation was a real success, and the jury said they actually wanted those kinds of presentations and were really pleased. And we provided everything needed so a fresh group with no knowledge of the topic could start coding on it.
We got one of the highest grades of the year (not sure if the highest or not). Even though it kinda put me off research, it actually was one of the best projects I've done that was that successful.1
Fuck you MATLAB and your shitty inefficient for loops. Now I have to rewrite most of my code to use matrices instead of structures cause you take so long. Fuck you and your stupid inability to scale my neural network.....who needed sleep anyway6
-
We had to code a logical operator in my computer science class in high school. Because it was a low level course we were given 2 hours to complete the assignment.
During the same time, I became interested in ML, so I decided to build a simple feed-forward neural network. The guy next to me looked at me like I'm a wizard for the rest of the semester. It felt great!2
I just found out that AI is gonna make humanity extinct. And developers will have a privileged seat to appreciate it. So I decided to learn Python and machine learning to help!20
-
Not so long ago an AI Telegram channel was launched. It learned from Russian news and generated new headlines.
Here’s one of the first headlines it generated:
“Islamic physicists will recreate the Big Bang”
For real, the channel name is Neural Meduza
I was once working on a deep neural network project (a few years back, when deep learning was just gaining momentum) and my project guide (allotted by the college) told me that this technology is useless and will be obsolete in the near future. I don't know why he said that. To this day I wonder what the reason behind his saying that was.
Now, watching so much research being done in this field, he might be realizing how wrong he was.
Hiuahuuhaei we can't even coordinate a fucking simple web app and they want us to use neural networks to identify super fucking hard stuff that is hard even for people to do by hand 😂😂😂😂2
-
!rant
PROJECT
Have been working on a tool to visualize neural networks.
It currently supports Dense and Conv networks.
Tell me what you think!
https://github.com/Prodicode/... -
Final synposis.
Neural Networks suck.
They just plain suck.
5% error rate on the best and most convoluted problem is still way too high
It's amazing you can make something see an image it's been trained on, that's awesome....
But if I can't get a simple function approximator down to lower than a 0.07 difference on a scale of 0 to 1, and the error value on a fixed-point system is still pretty goddamn high, then even if most of the data sort of fits when spitting back inference values, it is unusable.
Even the turret aimer I trained successfully would sometimes skip around a full circle and pass the target before lining up after another full circle.
There has to be something LIKE IT that actually works in principle.
I think my behavioral simulation might be a cool idea: primitive environment, primitive being, reward learning. However, with an attached DATABASE.30
Anyone know of any good reference material that teaches you how to implement and train your own Yolo object localization neural network? (Preferably for tensorflow) I'm not looking for pretrained models that you just downloaded and run, rather a tutorial that walks you step by step through the implementation of the network, the reasoning behind the architecture, and examples of the training data used for the training as well as the process of training?
I know it's a lot to ask, but it's really frustrating when every example is just "clone this repo, hit run and use the pretrained models". Sure, it might get you up and running quickly, but it doesn't really teach you anything...
P.s. - it seems like every educational post goes from super simple to super complex without any middle ground, and the super complex stuff doesn't tell you why it's done the way it is.5
Neural scan coding, so I just think what I want and it writes out the code in a new neural language and works!
-
You can comprehend its whole construction completely in two seconds. Yet, a hamster will be entertained by exploring this thing for life.
In the same way, an advanced neural network will be able to figure out our brain's construction and explain it to us.
If you cry AI takeover, remember that just because you can kill a hamster with your hand, and it absolutely can't do anything about it, doesn't mean you'll do this.
Said neural network may have morals completely detached not only from ours, but from the whole concept of "morals" as we know it. Its goals being beyond our understanding doesn't mean it will be hostile and won't help us.
The only thing we'll lose is control. Yet the benefits are so huge that they could move us up the Kardashev scale, and it may be our only way to prevent the death of our civilization.
We don't have control over our nature either. We can't prevent eruptions and earthquakes. Losing control in itself doesn't mean the thing we lost control on will kill us.18 -
I recently went through a very detailed and well-explained Python-based project/lesson by Karpathy which is called micrograd. This is a tiny scalar-valued autograd engine and a neural net on top of it.
The project above is, as expected, built on Python. For learning purposes, I wanted to see how such a network may be implemented in TypeScript and came up with a 🤖 micrograd-ts - https://github.com/trekhleb/... repository (and also with a demo - https://trekhleb.dev/micrograd-ts/ of how the network may be trained).
Trying to build anything on your own very often gives you a much better understanding of a topic. So, this was a good exercise, especially taking into account that the whole code is just ~200 lines of TS code with no external dependencies.
The micrograd-ts repository might be useful for those who want to get a basic understanding of how neural networks work, using a TypeScript environment for experimentation.
With that being said, let me give you some more information about the project.
## Project structure
- [micrograd/](https://github.com/trekhleb/...) — this folder is the core/purpose of the repo
- [engine.ts](https://github.com/trekhleb/...) — the scalar `Value` class that supports basic math operations like `add`, `sub`, `div`, `mul`, `pow`, `exp`, `tanh` and has a `backward()` method that calculates a derivative of the expression, which is required for back-propagation flow.
- [nn.ts](https://github.com/trekhleb/...) — the `Neuron`, `Layer`, and `MLP` (multi-layer perceptron) classes that implement a neural network on top of the differentiable scalar `Values`.
- [demo/](https://github.com/trekhleb/...) - demo React application to experiment with the micrograd code
- [src/demos/](https://github.com/trekhleb/...) - several playgrounds where you can experiment with the `Neuron`, `Layer`, and `MLP` classes.
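For a taste of what the engine.ts described above does, here is the same idea compressed into a Python sketch (micrograd's original language), with only add, mul, and backward() and nothing else; the variable names are mine:

    class Value:
        # a scalar that remembers how it was computed, so backward() can chain-rule it
        def __init__(self, data, children=()):
            self.data, self.grad = data, 0.0
            self._children, self._backward = children, lambda: None

        def __add__(self, other):
            out = Value(self.data + other.data, (self, other))
            def _backward():
                self.grad += out.grad                # d(a+b)/da = 1
                other.grad += out.grad
            out._backward = _backward
            return out

        def __mul__(self, other):
            out = Value(self.data * other.data, (self, other))
            def _backward():
                self.grad += other.data * out.grad   # d(a*b)/da = b
                other.grad += self.data * out.grad
            out._backward = _backward
            return out

        def backward(self):
            order, seen = [], set()
            def build(v):                            # topological sort of the graph
                if v not in seen:
                    seen.add(v)
                    for c in v._children:
                        build(c)
                    order.append(v)
            build(self)
            self.grad = 1.0
            for v in reversed(order):                # chain rule, output backwards
                v._backward()

    a, b = Value(2.0), Value(3.0)
    loss = a * b + a
    loss.backward()
    print(a.grad, b.grad)                            # 4.0 2.0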
Demo (online)
---------------------
To see the online demo/playground, check the following link:
🔗 https://trekhleb.dev/micrograd-ts3 -
At 4am there was some random youtuber in my head that reads reddit posts and he presents me one but it's blurry and he says hi there how you there are stupid but how stupid you are, humming hammers,
MOMMY THATS SWEET MIAMI MOMMY THATS SWEET MIAMI he's insecure go back then hayeens HIGH WINS HIGH WINS HIGH WINS HIGH WINS and he never stops
It literally feels like a broken neural network output, meaningless. But it's in my head, I never asked for this but it's there generating itself1 -
Eureka! I have done it! I have written a program that will replace 80% of programmers with an AI!
The approach is to use grammar identification with language heuristics to recognize solution patterns using multilayered neural networks. The code source uses trusted pattern samples that are scored by human programmers. The code is programmed using text duplication and placement from the trusted sources.
TLDR: Uses pattern matching to copy and paste from Stack Overflow.1 -
The hype around Artificial Intelligence and Neural Nets makes me sicker by the day.
We all know that the potential power of AI gives stock prices a bump and bolsters investor confidence. But too many companies are reluctant to address its very real limits. It has evidently become a taboo to discuss AI's shortcomings and the limitations of machine learning, neural nets, and deep learning. However, if we want to strategically deploy these technologies in enterprises, we really need to talk about their weaknesses.
AI lacks common sense. AI may be able to recognize that within a photo, there’s a man on a horse. But it probably won’t appreciate that the figures are actually a bronze sculpture of a man on a horse, not an actual man on an actual horse.
Let's consider the lesson offered by Margaret Mitchell, a research scientist at Google. Mitchell helps develop computers that can communicate about what they see and understand. As she feeds images and data to AIs, she asks them questions about what they “see.” In one case, Mitchell fed an AI lots of input about fun things and activities. When Mitchell showed the AI an image of a koala bear, it said, “Cute creature!” But when she showed the AI a picture of a house violently burning down, the AI exclaimed, “That’s awesome!”
The AI selected this response due to the orange and red colors it scanned in the photo; these fiery tones were frequently associated with positive responses in the AI’s input data set. It’s stories like these that demonstrate AI’s inevitable gaps, blind spots, and complete lack of common sense.
AI is data-hungry and brittle. Neural nets require far too much data to match human intellects. In most cases, they require thousands or millions of examples to learn from. Worse still, each time you need to recognize a new type of item, you have to start from scratch.
Algorithmic problem-solving is also severely hampered by the quality of data it’s fed. If an AI hasn’t been explicitly told how to answer a question, it can’t reason it out. It cannot respond to an unexpected change if it hasn’t been programmed to anticipate it.
Today’s business world is filled with disruptions and events—from physical to economic to political—and these disruptions require interpretation and flexibility. Algorithms alone cannot handle that.
"AI lacks intuition". Humans use intuition to navigate the physical world. When you pivot and swing to hit a tennis ball or step off a sidewalk to cross the street, you do so without a thought—things that would require a robot so much processing power that it’s almost inconceivable that we would engineer them.
Algorithms get trapped in local optima. When assigned a task, a computer program may find solutions that are close by in the search process—known as the local optimum—but fail to find the best of all possible solutions. Finding the best global solution would require understanding context and changing context, or thinking creatively about the problem and potential solutions. Humans can do that. They can connect seemingly disparate concepts and come up with out-of-the-box thinking that solves problems in novel ways. AI cannot.
"AI can’t explain itself". AI may come up with the right answers, but even researchers who train AI systems often do not understand how an algorithm reached a specific conclusion. This is very problematic when AI is used in the context of medical diagnoses, for example, or in any environment where decisions have non-trivial consequences. What the algorithm has “learned” remains a mystery to everyone. Even if the AI is right, people will not trust its analytical output.
Artificial Intelligence offers tremendous opportunities and capabilities, but it can't see the world as we humans do. All we need to do is work on its weaknesses and get them sorted out, rather than overhype it with make-believe and ignore its limitations in plain sight.
Ref: https://thriveglobal.com/stories/...6 -
My AP Computer Science teacher changed my life in 2001. Until then, I wasn't considering a career in development. He challenged me to write a back-propagating multi-layer perceptron neural network, and an RPN calculator. Every day made me think about how to solve problems at an algorithmic level. He was brutally honest, and one of the reasons why I'm a team manager now. RIP, your legacy won't be forgotten.5
-
Does gradient descent in artificial neural networks apply the most changes closest to the input layer?6
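Mostly the opposite, in practice: with saturating activations, gradients tend to shrink as they flow back toward the input (the vanishing-gradient effect). A quick PyTorch sketch to measure it yourself; the sizes and depth are arbitrary:

    import torch

    # a deep stack of sigmoid layers: classic vanishing-gradient territory
    layers = torch.nn.Sequential(*[
        torch.nn.Sequential(torch.nn.Linear(32, 32), torch.nn.Sigmoid())
        for _ in range(10)
    ])

    x = torch.randn(8, 32)
    layers(x).sum().backward()

    for i, block in enumerate(layers):
        g = block[0].weight.grad.abs().mean().item()
        print(f"layer {i}: mean |grad| = {g:.2e}")
    # typically the numbers get smaller toward layer 0, the input side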
-
Learn a lot more stuff about neural networks, machine learning and try to build and code my first neural network. I hope that I have enough patience for all of that 😬.
-
So silly. I just wanted to neural network all day today, and it just happened.
https://huggingface.co/spaces/...9 -
Just finished my neural network library.
The library is written in C# and easy to use. This is my first publicly available library, so constructive criticism is needed :)
https://github.com/BitPhinix/...2 -
I've finally nailed it and made a working neural network for my data with TensorFlow. The problem? It isn't satisfied with 10 GB of my memory. Just 4 layers...2
-
... worst drunk coding experience?
none. or to be more precise, all of the three of them I had. I can't code drunk, i hate doing it, i hate even thinking about doing it when drunk.
so after those initial three attempts i don't try to do it again, ever.
BUT, best coding experience while high?
ALL OF THEM.
some of the best pieces of code I wrote i did when I was high. my mind goes into overdrive at those times, and my thinking is not lines/threads of thought, but TREES of thought, branching and branching, all nodes of each layer of the tree coming to me AT ONCE, one packet == whole layer across all of the branches.
and the best was when one day, in about 14 hour marathon of coding while high, i wrote from scratch a whole vertical slice of my AI system that i've been toying around in my head for several years prior, and I had all of the high-level concepts ALMOST down, but could never specify them into concrete implementations.
and I do mean MY ai system, my own design, from the ground up, mixing principles of neural networks and neuropsychology/human brain that I still haven't seen even mentioned anywhere.
autonomous game ai which percieves and explores its environment and tools within it via code reflection, remembers and learns, uses tools, makes decisions for itself for its own well-being.
in the end, i had a testbed with person, zombie and shotgun.
all they had pre-defined in their brains were concepts of hunger and health. nothing more.
upon launching it, zombie realized it wants to feed, approached oblivious person, and started eating it.
at which point, purely out of how the system worked, person realized: "this hurts, the hurt is caused by zombie, therefore i hate zombie, therefore i want to hurt it", then looked around, saw the shotgun, inspected its class by reflection, realized "this can hurt stuff", picked the shotgun up, and shot the zombie.
remembered all of that, and upon seeing another zombie, shot it immediately.
it was a complete system, all it needed to become full-fledged thing was adding more concepts and usable objects, and it would automatically be able to create complex multi-stage, multi-element plans to achieve its goals/needs/wants and execute them. and the system was designed in such a way that by just adding a dictionary of natural language words for the concept objects on top of it, it should have been able to generate (crude but functional) english sentences to "talk" about its memories, explain what happened when, how it reacted, what it did and why, just by exploring the memory graph the same way as when it was doing its decision process... and by reversing the function, it should have been able to recieve (crude) english sentences that would make it learn what happened somewhere else in the gameworld to someone else, how to use stuff and tell it what to do, as in, actually transfer actual actionable usable knowledge to it...
it felt amazing to code for 14 hours straight, with no testruns during that, run it for the first time after those 14 hours, and see that happen.
and it did, i swear! while i was coding, i was routinely just realizing typos and mistakes i did 5-20 minutes ago, 4 files/classes ago! the kind you (and i) usually notice only when you try to run the thing and it bugs out.
it was a transcendental experience.
and then, two days later, i don't remember anymore what happened, but i lost all of that code.
and since then, i never mustered enough strength and resolve to try and write the whole thing again.
... that was like 4 years ago.
i hope that miracle will happen again one day...3 -
These days all companies just want to show off how evolved their AI is.
Any presentation without the neural network CGI animation is incomplete!!!2
I am at a work seminar and the presenter is talking bullshit about artificial neural nets.
Unfortunately I can't punch him through the webcam. This is frustrating. Why do morons who know nothing about neural nets always insist on talking about them?7 -
That moment when gulp run takes as long as one epoch of training my neural network. How did we arrive here?
-
Walking past a conversation and overhearing the term CNN, thinking it's about Convolutional Neural Networks. Long story short: I'm standing here between people discussing news sources. Fml
-
So I've been hiding behind mathematics a lot, practicing neural networks and exploring different architectures. However, I realized that without being able to deploy them, they were going to do nothing sitting on localhost :)
And from there I learn the basics of flask
Then the basics of backend
Then my friend suggested and I delved into Node.Js and found it quite nice.
The issue is
I know I don't like HTML and CSS at all.
NO logical programming, just the use of divs to create aesthetic websites, and I HATE it.
But I would also like to try the front-end part of developing a website (or an app, who knows?) and I feel I can't find any options here.
What could I possibly do to move forward from this trench that engulfs me?4
Machine Learning and Deep Neural Networks in particular. More job offers to pick from in upcoming years for me :P
-
Just uploaded my first practice project to GitHub. It is an AI-based flower identifier. Any advice or guidance is very much needed. I want to become good at training neural networks. Please provide feedback! Thanks.
https://github.com/Jatin287587/...8 -
TL;DR: fuck shitty algorithms!
The Youtube app seems to have a highlights option for your subscriptions. Found out because it activated itself.
Firstly: NEVER FUCKING EVER CHANGE MY FUCKING OPTIONS BECAUSE YOU ADDED A NEW FEATURE. YOU MAY NOTIFY ME, AND IF I WANT IT ACTIVATED I AM PROBABLY ABLE TO TOUCH MY SCREEN TWICE AND ACTIVATE IT!
Secondly: Why can't people understand that I don't want any fucking neural networks (except sometimes devrant because the algo is the algo) to tell me what I want to look at, especially if it's on fucking YouTube where I only have to go through a few videos a day? But hey maybe I want to watch that video I didn't want to watch 5 days ago!?
Thirdly: I subscribed to more than two channels, and there might be a fucking reason why I subscribed to those channels. Don't show me 5 or 6 videos that are not only from the same creator but are just the last 5 videos of the same series.3
So I realized if done correctly, an autoencoder is really just a bootleg token dictionary.
If we take some input and pass it through a custom hash function that strictly produces hashes with only digits as output, then we can train a network, store the weights and biases, and then train a decoder on top of that.
Using random dropout on the input-output pairs, we can distill the weights and biases to find subgraphs that further condense this embedding.
Why have a token dictionary at all?10 -
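My reading of the idea, sketched under heavy assumptions (this is an interpretation, not a tested design): hash any input to a fixed-width string of digits and let that string play the role of the token id, with a decoder learning the digits-to-input mapping. Here the "decoder" is just a dict, standing in for the trained network, to show the shape of the scheme:

    import hashlib

    def digit_hash(text, width=8):
        # hash any input to a fixed-width, digits-only string (the 'token id')
        h = hashlib.sha256(text.encode()).hexdigest()
        return str(int(h, 16))[:width]

    # the bootleg dictionary: token id -> original input; the rant would train
    # an encoder/decoder pair where this sketch just uses a lookup table
    table = {}
    for word in ["neural", "network", "autoencoder"]:
        table[digit_hash(word)] = word

    tid = digit_hash("neural")
    print(tid)            # a stable digits-only id for this input
    print(table[tid])     # 'neural'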
@all Next year I want to start a project in which a neural network learns programming out of plaintext documentation... E.g.: "when u click on the button, a messagebox says helloworld" gets translated into...
Connect (somestuf)
Fucntion
Showmessage (blablabl)
... For this I would need many people to actually help me out by giving the neural network examples... Who would be interested in helping with that project?? It would mean that u first write the docs and then get the compiled file xD7
One year ago I made a resolution to do one of two things: get serious about learning neural networks, or finish one of the side projects (markdown based wiki with some nifty features). Didn't do the first one, and got the second one to about 50%.
Not really happy as I did not complete any goal. Still some decent work was done and built an open source parser. So, I guess I am 50% happy.
What were your achievements this year? Did you achieve 100%?3 -
The human brain (also animal brains, even ants') is incredibly complex. Each neuron is now supposedly its own processor. So a human brain is a complex network of billions of processors, not just threshold variables. This means that to simulate an organic brain sufficiently, it will take a huge computer system with billions of parallel processors. Now, I don't know if the sophistication of a computer processor is represented in each cell. So this may not be equivalent to billions of Pentium cores, for instance. However, it still presents a huge challenge for AI as it exists now to replicate. My thought is that silicon-based AI will take a different approach that leverages how computers work. My guess is that current neural net models are not a good match for this unknown AI. Will it inherently exhibit pattern matching like an organic brain? Or will it be a different kind of consciousness altogether? Will we even realize it is self-aware? Will my roomba plan to kill my pet for my attention? What are some other models being employed in AI research?3
-
Before I dip my toes into machine learning, let me leave some silly comments so I can laugh at myself in the future.
Let's make geth.
1. The model will spit out layer definitions and the size of sample data for training; child models are trained with limited computational resources.
2. Child models are voters that only respond in terms of yes/no. A simple majority wins and then the action is taken.
3. The only goal for master models is to survive, i.e. to prevent me from killing them.
Questions:
1. How do models respond to a random output size? (Study GPT-3, should take weeks/months but worth it.)
2. How to define actions for voters to vote? Sounds like the boundary between actions should be blurry and votes can be changed from tick to tick (i.e. responding to something in a split second). Therefore
3. Why haven't I seen this yet? Is this design a stupidly complex way of achieving the same thing done by a simple neural network?
I am full of curiosity and stupidness.5 -
Is the title "Web & Mobile Developer" on my profile misleading?
And is the part in my summary which says I'm studying and currently not working ambiguous?
Cause I keep getting investment offers and job applications!
If you want I can hire you as an onClickListener function of a button in my Android app or as an activation function in my neural network model! -
I feel really lost in neural network theory.
the MNIST sample made sense, but now I'm looking at GANs and CNNs.. and all of a sudden I'm lost.
The same goes for the examples I'm finding of something I know I was able to get to work once upon a time, when I was more at peace: WaveNet for text-to-speech.
I used the ONNX model however, which was very easy to implement, but I quickly get lost looking at the tensorflow and pytorch code; even though it is very short, I feel intimidated.
The SSD MobileNet documentation is also pretty straightforward, but when I look for WaveNet information about what format to provide the input in and how to interpret the output, I'm having some trouble.
It's frustrating.
I'm tense, I'm poorly rested, I'm sick of having to redo crap and I'm surrounded by people who make me hypervigilant, skin crawly and tense.
How to overcome these things when I'm not at peace at all ?
I don't know. Pushing through it isn't compatible with the mindset I've been forced into.5
There is so much fuss about AI and fear of missing the departing AI train, but as a dev I have no clue where to even get started!?
What can we developers do with AI?
OK, I can get some code for free. I can use an LLM as a half-smart search engine. I can integrate my product with some AI service. I can produce content to teach said things to others...
Nothing new, really, just another API or another search engine.
It is of course possible to start making some neural networks, but I can't really picture that as a high-demand skill, can you?
Maybe at some of the big companies, but for an average client?
Does anyone know what kind of AI knowledge a developer should really learn?
Especially something a client would be interested in?
Here is a potato for scale:6 -
I was surfing the web.....
Let's see if there is something new about comma ai
Maybe they decided to really release their source (visiond, etc...)
[Actually they don't :-( ]
So my trip begins...
comma(.)ai -> youtube -> github -> youtube (again) ->> etc...
Then i found this:
openpilotlegacy(.)org
😂😂😂😂
LoL -
Implementing a neural network, the SHIT CODE got so complicated I've been stuck on one line for a couple of days now. FUCCCCCCCCCCK! WEEKEND ALSO F@#KED
-
My searches aren't great. What are some good pretrained neural network models you people like?
These kind of scare me
Hip
Hip
Cucumber8 -
Legends: the only way to succeed is to get out of your comfort zone.
Me: starts using JavaScript to train neural networks.
Epoch 2/4
777/1054 [=====================>........] - ETA: 45:31 - loss: 2.6682
Screw you Keras model1 -
As a frontend developer, I started learning Unreal Engine 4 in my spare time instead of building the next JS framework. I found C++ very difficult in an Unreal project, even with the experience of building a deep neural network for robot soccer back in university🤔11
-
I can't wait to experience the joy of trying to implement a neural network to identify shapes in under a week....
-
Some friends of mine were working on neural network image processing and wanted to build a social network for it. I got to play with graph databases, mobile app development, and neural nets. Unfortunately, the project never took off, but it was fun nevertheless.
-
A friend who just got into ML recently.
"Dude, did you know how amazing ML is??"
"I'm training a computer to give out outputs, basic AI dude"
"Dude logistic regression is the shizz"
"You heard about backprop mate?"
"ANN is the next big thing. I'm currently working on one of the biggest AI project now"
So I casually asked him whether he had completed his project or not. He proudly showed me a 9-line script he copy-pasted from Google (search for "neural network in 9 lines") and said, "Dude I trained my laptop with some advanced AI techniques to give out the perfect XOR outputs"
He rounded off values like 0.99 to 1 and 0.02 to 0 to make it look perfect.
#facepalm1 -
"Deep is in. We want people to go deep. Deep neural networks … as opposed to shallow neural networks"
-
1) love solving puzzles. It's like a neural network of all the problem solving I've ever done manifesting itself in a product/tool someone can actually use to solve their problems.
2) pays more than I think I’m worth.
3) people immediately think I'm smarter than I am. I've got low self-esteem, but I really feel that if you work hard enough, you can even the playing field with those who are naturally better at coding. I love feeling smart when really I was just persistent with solving a problem and worked hard at finding a solution
Just finished my finals.
Had to run k-means with pen and paper only. I find this kind of question stupid, but why not. BUT WHY THE FUCK DID YOU CHOOSE SOME INITIALIZATION THAT TAKES 13(!!!!) FUCKING ITERATIONS TO CONVERGE ? Just in case my first 12 iterations are correct by chance ? Guess what, you fucktard, I GOT IT.
And doing the same calculation by hand 13 FUCKING TIMES is moronic as hell, you retarded piece of shit ! When you train your neural networks, do you also backpropagate your gradient all by yourself, mongoloid baboon? Getting sick of these stupid assignments1
Hello everyone. I'm getting interested in neural networks and I am trying to install the TensorFlow machine learning API using Python's pip package manager but it gives me an error saying that the package couldn't be found. I made sure that I spelled everything correctly and tried using VirtualENV to install it as well, and I uninstalled Python 3.7 and installed 3.6.4. Could anyone help me?4
-
[long confession/question]
So I was asked by a client to make an app similar to prisma(not exactly that but let's say a caricature app) and I knew I have to research a lot.
Now I have been loyal to PHP for over 5 years so I first tried with GD and imagick but the results were not very good, so I thought let's try opencv. I didn’t wanna make any compromises so I didn't go the bridging way, I worked on native python even though I am a newbie in it. I was fairly impressed with the cartoonizing results but others weren't. Soon I got to know that this would take much more than simple filter combinations or matrix manipulations.
I read about prisma and got to know it uses deep neural networks for the same.
Now, in the five years I have learnt almost all the things a run-of-the-mill "Full stack Web Developer" should know.
I have a fair knowledge of PHP, many of its frameworks, many JS frameworks (obviously jQuery), I have a very good understanding of CSS and its models, and I have worked on some cool algos and found solutions to many problems, but I haven't gotten to the stage where I can implement neural networks/machine learning in my projects.
It just scares me.
___
A little back story: I have been the CTO of a small scale company for about 1.5 years now.
___
So all this got me to asking myself should I just step down from the post to a position where I can learn more skills. Managing takes a lot more time where I can't learn a lot. Sure I learnt some other important things but not as much tech knowledge as I would have in a more basic position.
I know not many of you must have read this far, but if you did what do you think I should do? Really depressed at the moment.5 -
@Atheist @Hazarth I know you said it's normal that a neural network will be all over the place when it's initially training, but in this case I'm thinking you mentioned a caveat I'm interested in hearing, as it's been x number of years since the world ended again.7
-
https://towardsdatascience.com/chec...
“Check and see if your model can over fit”
See, I did that and nothing was changing, wtf PyTorch !!!!2
Has anyone tried to train a neural network (CNN) with CUDA enabled on a laptop with an Nvidia MX150?
How was it? And what about another one with 1050 ti? Is the difference huge?3 -
From my big black book of AI notes, back when I was into that:
Dendritic spines - a 'battery' (?) And probably what the dendrite uses to maxpool/softmax or perform 'temporal summation' over its inputs. -
I came across this logical bug today and realised that you never train your neural network with data that is in order. Always train it with randomized inputs. I spent like 4 hours trying to figure out why my neural net wouldn't work, since all I was classifying was Iris data.
P.S.: I'm an ML newbie :P -
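Same trap, same fix: shuffle before (and ideally during) training, and shuffle features and labels with the same permutation. A minimal numpy sketch, array contents invented:

    import numpy as np

    X = np.repeat(np.arange(3), 50).reshape(-1, 1).astype(float)  # sorted: all 0s, then 1s, then 2s
    y = X.ravel().astype(int)                                     # labels in the same sorted order

    perm = np.random.permutation(len(X))   # ONE random order for both arrays,
    X, y = X[perm], y[perm]                # so each sample keeps its own label

    # with a framework it's usually just a flag, e.g. Keras:
    # model.fit(X, y, epochs=10, shuffle=True)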
If the same data is being fit to a neural net over and over again, shouldn't it start spitting out the result it was trained to very quickly, if over several frames it's training for 100 epochs each ?8
-
Could neural nets be used to solve a complex problem with a lot of predefined specific weights and biases related to real activities that could be numerically represented ?
Like if the various layers represented things whose outcomes would cascade as a direct result of the factors inputted to them, resulting in a limited number of outputs? Like, say, wind flow at certain locations resulting in a wind current at an expected time in another location, where a system would have to change in order to produce the final expected output? Maybe a bad example, as that could be affected by a lot of things.
But basically the gradual massaging of values that relate to causal effects, where a specifically designed output has intermediaries between the desired input and output?16
So here is a good question.
Supposing I train a neural network on handwriting.
And that handwriting is mostly contained in a certain small area in the center of a 28x28 pixel block.
wouldn't a shift left or right fuck up its ability to predict accurately ? Pretty sure it would !
You'd think you'd have to prune the image's border down as close as possible for it to even work in more natural settings, where someone might draw a slightly longer or wider character.
because from what I'm seeing, these things aren't searching for subshapes; in reality they're just shifting a bunch of numbers around that statistically seem to correspond.10
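Right, and the standard counters are convolution + pooling (which buys some translation tolerance) and simply training on shifted copies. A minimal augmentation sketch with numpy on a stand-in 28x28 array; note np.roll wraps pixels around the border, which is fine for small shifts of a centered glyph:

    import numpy as np

    def random_shift(img, max_px=3):
        # return a copy of a 2-D image shifted by a random offset in x and y
        dy, dx = np.random.randint(-max_px, max_px + 1, size=2)
        return np.roll(np.roll(img, dy, axis=0), dx, axis=1)

    img = np.zeros((28, 28))
    img[10:18, 10:18] = 1.0          # a fake "character" near the center

    augmented = [random_shift(img) for _ in range(8)]   # shifted training copies
    # train on these too, and the net stops caring quite so much where the ink sits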
Shit's ridiculous
So much of my goddamn time wasted
Perfect idea really
Gradually grab all pixel locations that fit space constraints
Remember finishing this script and watching the areas populate
Purpose ? Extract shapes to feed my damn neural network for custom character recognition
So much goddamn time wasted
Bastards I hate you all !3 -
I would end up on top of whatever trendy thing humanity needs next.
Possible outcomes:
- building better AI's, building better robots
- neural networking
- quantum computing
- robot dating service
- artificial life
- holodeck design and construction
- free energy (any kind)
- running a private space shipyard
- research into new and unknown technology
And last, if nothing works, I would open up a deli on Mars. The robots would make the food anyway, I would probably only program the menu and fix them when they malfunction. -
I'm beginning to feel like any kind of specific approximation via neural networks is a myth. That if you can't reduce the output to simple categorical values that can be broadly interpreted between two points, it doesn't work.
I have some questions about the design of the net, and they don't seem to be getting answered. How many layers should I use? How many neurons per layer? How does this relate to the number of desired quantitative scalar outputs I'm looking to create? Even if the outputs are normalized, they can vary GREATLY, and will if I'm approximating the output of several mathematical expressions. Based on this, the expected error ranges of these numbers, and how many significant digits could be produced within the domain of the variable inputs, how many neurons per layer? What does having more layers do? In pytorch there don't seem to be a lot of layer types per se, but there are a crap ton of activation functions; should I just be using these at the tail end, or should they be inserted between layers so the input of the next layer passes through another activation function? What does this do to the range of the output?
do I need to be a mathematician to do this ?
remembered successes removed quantifiable scalars entirely from output, meaning that I could interpret successful results from ranges of decimal points.
but i've had no success with actual multi variable regression as of yet, even when those input variables are only 2 and on limited value ranges eg [0,100] and [0, 2pi]
and then there are training epochs to avoid overfitting, and reasonable expectation of batches till quality results will start to form.3 -
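Since the exact case above ([0, 100] and [0, 2pi] inputs) is easy to mock up, here is a minimal sketch (assuming PyTorch; the target function, hidden sizes, learning rate, and step count are all invented for illustration, not a recipe). The conventional answers it encodes: activations go between layers, the output layer stays linear so the target's range isn't squashed, and normalizing both inputs and targets matters more than exotic architecture:

```python
# Minimal sketch: a small MLP regressing y = (r/100) * sin(theta)
# over the exact input ranges mentioned above.
import math
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic data, with inputs and targets normalized to roughly [-1, 1]
r = torch.rand(4096, 1) * 100.0
theta = torch.rand(4096, 1) * 2 * math.pi
x = torch.cat([r / 100.0, theta / (2 * math.pi)], dim=1)
y = (r / 100.0) * torch.sin(theta)

# Activations sit BETWEEN the Linear layers; the output layer is left
# linear so the target's range isn't squashed by tanh/sigmoid.
model = nn.Sequential(
    nn.Linear(2, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),
)

opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(2000):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    opt.step()

print(f"final MSE: {loss.item():.5f}")   # small => the regression worked
```

Two hidden layers of a few dozen units handle most smooth two-variable functions; depth buys composition of features, width buys resolution, and neither maps cleanly onto how many significant digits the output needs.
-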
I am not completely sure of the method, but it sounds like they ran a song through Google's speech to text? The result is epic:
https://youtu.be/ur560pZKRfg
Is the speech to text a neural network?1 -
What AI model would I use to propagate a series of survival factors and decision-making scenarios, where pursuing the optimal order of activities leads to survival and even prosperity, the worst set of choices leads to death, the environment and sensations being experienced always lead to specific pitfalls, some pathways lead to later reward, and obstacles like predators can be overcome by simple combinations of objects, in a crude mimicry of invention?
Neural nets don't seem to fit this, given my understanding, but there is a training aspect I'm looking for, where the creature being simulated dies, develops fear responses, feels pain, avoids pain, remembers things, develops behaviors related to characteristics in creatures, has inborn motivations that weight decisions, and learns to prioritize.
I had created a massive dataset of objects, including aspects of semantic memory and episodic memory colored by emotion inspired by past conflict and reward, with the idea that a running average would affect behavior and decide between various behaviors, all the way down to perceptual differences.
Any thoughts again? Or will wolf try to steal these too?29
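For what it's worth, problems framed as "act, feel pain or reward, remember, and learn to prioritize" are usually attacked with reinforcement learning rather than a plain supervised net. A minimal sketch of tabular Q-learning (the toy states, actions, rewards, and hyperparameters below are all invented for illustration):

```python
# Minimal sketch: tabular Q-learning. An agent in a tiny world learns,
# by reward and punishment, which action to prefer in each state.
import random

random.seed(0)
N_STATES, ACTIONS = 5, ["flee", "hide", "fight"]   # invented toy world
Q = [[0.0] * len(ACTIONS) for _ in range(N_STATES)]
alpha, gamma, eps = 0.1, 0.9, 0.2

def step(state, action):
    """Invented dynamics: fighting in state 4 wins; fleeing is safe-ish."""
    if state == 4 and action == 2:
        return 0, 10.0                   # overcame the predator, big reward
    if action == 0:
        return max(0, state - 1), -0.1   # fleeing costs a little
    return min(4, state + 1), -1.0       # pain: pushed toward danger

state = 0
for _ in range(5000):
    a = random.randrange(len(ACTIONS)) if random.random() < eps \
        else max(range(len(ACTIONS)), key=lambda i: Q[state][i])
    nxt, reward = step(state, a)
    # the Q-learning update: nudge toward reward + discounted future value
    Q[state][a] += alpha * (reward + gamma * max(Q[nxt]) - Q[state][a])
    state = nxt

print([ACTIONS[max(range(3), key=lambda i: Q[s][i])] for s in range(5)])
```

The running average described above is close in spirit to the Q-table update: each experience nudges a stored value that later biases decisions.
-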
Looking to get a good understanding of the fundamental ideas and math behind neural networks and support vector machines. I am well versed in math, so I can deal with heavier stuff if needed. I would like to see formulas, but an explanation of their conception would be nice. Does anyone have any resources like this? Practical hands-on exercises would be a plus2
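On the formula side, a minimal example of the kind of statement worth working up to: the soft-margin SVM objective, in its standard textbook form (from memory, not from any particular resource):

```latex
\min_{w,\,b,\,\xi}\quad \frac{1}{2}\lVert w \rVert^2 \;+\; C \sum_{i=1}^{n} \xi_i
\qquad\text{subject to}\qquad
y_i\left(w^\top x_i + b\right) \ge 1 - \xi_i,\quad \xi_i \ge 0,\quad i = 1,\dots,n
```

The first term widens the margin, the slack variables buy constraint violations at price C, and taking the Lagrangian dual of this program is where kernels make their entrance; a resource that derives that dual step by step is the kind worth looking for.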
-
Is anyone else interested in the Characteristica Universalis? It's very fascinating, and not enough people know about how it can be applied to non-neural-network artificial intelligence.
-
POCing a neural network thing.
Luckily it's a shallow network, but it's taking a frickin' eternity to train :( -
Is it just me or any of you guys tryin to improve the accuracy of ur model be like :
hmmmmm more hidden layers2 -
It is incredibly frustrating to work with an SDK with no proper documentation and little community support.
I have been struggling with errors, and there's no post online from someone hitting a similar error.
FUCKING HELL SNAPDRAGON NEURAL ENGINE, WHY THE FUCK DO YOU HAVE TO BE SUCH A CUNT. WHY ARE YOU THE WAY YOU ARE.
DON'T EVEN GET ME STARTED ON THE DOCUMENTATION AND EXAMPLES.
I FEEL LIKE CRYING. FOR A WEEK I'VE BEEN GETTING NEW FUCKING ERRORS, RESOLVING THEM, AND GETTING ANOTHER UNIQUE ASS FUCKING ERROR.
Kmn. -
There are people who develop neural networks, deep learning models, and AI-based software.
Does anybody know what we call them? Is it okay to call all of them Machine Learning Engineer / AI Researcher / AI Engineer?
If I'm looking for someone who can build an AI-based program for me, whom should I be looking for on Freelancer or LinkedIn?1 -
Apple A12:
• 7nm
• 6.9B transistors
• 6 CPU cores
• 4 GPU cores
• 8 “neural” cores
• 5 trillion ops/sec
• 512GB addressable storage
• Oh and the rest of the “stuff” for a SoC
tl;dr Apple is the leading chip innovator and creator in the world.
Don't @ me4 -
Please, what's the easiest way I could train a neural network without writing code? I have a dataset of about 700 images9
-
Does anyone know any good resources that walk you through building an image localization neural network (preferably in TensorFlow)? Something that talks you through the thought process behind the network design decisions, and perhaps examples of the training data used to train the network? It seems like every search result is just an example of how to download a premade network and its weights, which from a learning perspective is not very helpful. I'd really love to see one of the YOLO versions walked through and trained, but honestly any localization would be helpful.
-
How do they parse and arrange the input and then generate the output with neural nets for talking bots?7
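At a high level: the text is tokenized into integer ids, embedded into vectors, summarized by an encoder, and the reply is generated one token at a time by a decoder that feeds its own last output back in. A minimal sketch of that data flow (assuming PyTorch; the tiny vocabulary and sizes are invented, and the model is untrained, so it outputs gibberish):

```python
# Minimal sketch of the chatbot data flow: tokenize -> embed ->
# encode -> decode one token at a time. Untrained, illustration only.
import torch
import torch.nn as nn

vocab = {"<pad>": 0, "<bos>": 1, "<eos>": 2, "hello": 3, "there": 4, "hi": 5}
inv = {i: w for w, i in vocab.items()}

embed = nn.Embedding(len(vocab), 16)
encoder = nn.GRU(16, 32, batch_first=True)
decoder = nn.GRU(16, 32, batch_first=True)
to_vocab = nn.Linear(32, len(vocab))   # scores over the vocabulary

# "parse and arrange the input": words -> ids -> vectors
src = torch.tensor([[vocab["hello"], vocab["there"]]])
_, h = encoder(embed(src))             # h summarizes the input

# "generate the output": feed the last token back in, pick the next one
tok, out = torch.tensor([[vocab["<bos>"]]]), []
for _ in range(5):
    dec, h = decoder(embed(tok), h)
    tok = to_vocab(dec[:, -1]).argmax(dim=-1, keepdim=True)
    if tok.item() == vocab["<eos>"]:
        break
    out.append(inv[tok.item()])
print(out)   # gibberish until trained on conversation pairs
```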
-
Any suggestions for a good starting point for learning to do more with neural nets? Not interested in image recog so much, but would like to see the cutting edge of textual pattern recognition... I dunno, I don't even want my expectations to color this... What do you guys find most interesting and enjoy playing with? Python is preferred, but I'm grateful for any tips/links/ideas/rants you might share!
-
What is the best searching algorithm for big data technologies like machine learning and neural networks?
ANY GUESS!!!
Comment it.5 -
I've been sitting here staring at extension types, and I wonder: what if I had a partial file with partial data?
In general one could say that in every case where, say, a header is missing, content that has some characteristic, statistically frequent pattern of data will ALWAYS carry identifying characteristics; the only thing with none is a null value that appears as total chaos.
But I wonder, is there a way, beyond simply trying every goddamn possible combination of things until meaningful data is extracted, to identify a file by its content when the part of that content usually used for such a purpose is missing?
What kind of application or technology would be required for this? Certainly not neural networks, but obviously some kind of AI, right?10
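For what it's worth, identifying headerless fragments usually starts with plain byte statistics before any AI gets involved. A minimal sketch (standard library only; the entropy thresholds are invented rules of thumb) that guesses a fragment's broad family from its Shannon entropy:

```python
# Minimal sketch: guess what kind of data a headerless fragment holds
# from its byte-frequency statistics (Shannon entropy, in bits/byte).
import math
from collections import Counter

def entropy(data: bytes) -> float:
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def guess(data: bytes) -> str:
    h = entropy(data)
    if h > 7.5:
        return "compressed/encrypted (near-random bytes)"
    if h < 5.5 and all(32 <= b < 127 or b in (9, 10, 13) for b in data):
        return "plain text"
    return "structured binary (tables, code, media with the header stripped)"

print(guess(b"hello world, just some ordinary readable text here"))
```

Real carving tools layer smarter models on top of features like these, but trying every possible combination is never the starting point.
-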
NotARantBut... if you like Machine Learning, check out my buddy Andrew's blog. He helped build the largest neural net to date at Digital Reasoning. https://iamtrask.github.io
-
The Turing Test, a concept introduced by Alan Turing in 1950, has been a foundational benchmark for evaluating a machine's ability to exhibit human-like intelligence. But as we edge closer to the singularity, the point where artificial intelligence surpasses human intelligence, a new, perhaps unsettling question comes to the fore: Are we humans ready for the Turing Test's inverse? Unlike Turing's original proposition, where machines strive to become indistinguishable from humans, the Inverse Turing Test ponders whether the complex, multi-dimensional realities generated by AI can be rendered palatable or even comprehensible to human cognition. This discourse goes beyond mere philosophical debate; it directly impacts the future trajectory of human-machine symbiosis.
Artificial intelligence has been advancing at an exponential pace, far outstripping Moore's Law. From Generative Adversarial Networks (GANs) that create life-like images to quantum computers that solve problems unfathomable to classical machines, the AI universe is a sprawling expanse of complexity. What's more compelling is that these machine-constructed worlds aren't confined to academic circles. They permeate every facet of our lives, be it medicine, finance, or even social dynamics. And so an existential conundrum arises: Will there come a point where these AI-created outputs become so labyrinthine that they are beyond the cognitive reach of the average human?
The Human-AI Cognitive Disconnection
As we look more closely at the interplay between humans and AI-created realities, the phenomenon of cognitive disconnection becomes increasingly salient, perhaps even a bit uncomfortable. This disconnection is not confined to esoteric, high-level computational processes; it's pervasive in our everyday life. Take, for instance, the experience of driving a car. Most people can operate a vehicle without understanding the intricacies of its internal combustion engine, transmission mechanics, or even its embedded software. Similarly, when boarding an airplane, passengers trust that they'll arrive at their destination safely, yet most have little to no understanding of aerodynamics, jet propulsion, or air traffic control systems. In both scenarios, individuals navigate a reality facilitated by complex systems they don't fully understand. Simply put, we just enjoy the ride.
However, this is emblematic of a larger issue—the uncritical trust we place in machines and algorithms, often without understanding the implications or mechanics. Imagine if, in the future, these systems become exponentially more complex, driven by AI algorithms that even experts struggle to comprehend. Where does that leave the average individual? In such a future, not only are we passengers in cars or planes, but we also become passengers in a reality steered by artificial intelligence—a reality we may neither fully grasp nor control. This raises serious questions about agency, autonomy, and oversight, especially as AI technologies continue to weave themselves into the fabric of our existence.
The Illusion of Reality
To adequately explore the intricate issue of human-AI cognitive disconnection, let's journey through the corridors of metaphysics and epistemology, where the concept of reality itself is under scrutiny. Humans have always been limited by their biological faculties—our senses can only perceive a sliver of the electromagnetic spectrum, our ears can hear only a fraction of the vibrations in the air, and our cognitive powers are constrained by the limitations of our neural architecture. In this context, what we term "reality" is in essence a constructed narrative, meticulously assembled by our senses and brain as a way to make sense of the world around us. Philosophers have argued that our perception of reality is akin to a "user interface," evolved to guide us through the complexities of the world, rather than to reveal its ultimate nature. But now, we find ourselves in a new (contrived) techno-reality.
Artificial intelligence brings forth the potential for a new layer of reality, one that is stitched together not by biological neurons but by algorithms and silicon chips. As AI starts to create complex simulations, predictive models, or even whole virtual worlds, one has to ask: Are these AI-constructed realities an extension of the "grand illusion" that we're already living in? Or do they represent a departure, an entirely new plane of existence that demands its own set of sensory and cognitive tools for comprehension? The metaphorical veil between humans and the universe has historically been made of biological fabric, so to speak.7 -
So, I feel wayyy behind the tech curve right now.
The SSD implementations you see online, they're still just a bunch of separate sort-of chaos machines that contain the standard perceptron-like model of a weight, cost, and bias, right? They just kind of infer their values by training like any other neural network, in separate parts, with pieces of output data generated by other parts of the neural network fed to them, right?
I mean, it's implemented with pytorch, so it's basically a really big array of tuples, in a sense, that are manipulated in a specific way.
And then CNNs just feed data back into another trained piece of the model, right?
I'm curious because object classification is about the ONLY thing I've seen work even close to properly lol
There is just so much fraud these days. Sigh.
And so many lamentable tech choices and attempts... like node lol
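For what it's worth, here is the overall shape those online SSD implementations share, as a minimal structural sketch (assuming PyTorch; the channel counts, two scales, anchor count, and 4-class setup are invented): a backbone CNN whose intermediate feature maps each feed a small convolutional detection head:

```python
# Minimal structural sketch of an SSD-style detector: a conv backbone,
# with detection heads hung off feature maps at two different scales.
import torch
import torch.nn as nn

NUM_CLASSES, ANCHORS = 4, 2   # invented toy configuration

class TinySSD(nn.Module):
    def __init__(self):
        super().__init__()
        self.stage1 = nn.Sequential(   # high-resolution features
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU())
        self.stage2 = nn.Sequential(   # coarser features, bigger objects
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU())
        # one (class scores + box offsets) head per scale
        self.heads = nn.ModuleList([
            nn.Conv2d(c, ANCHORS * (NUM_CLASSES + 4), 3, padding=1)
            for c in (32, 64)])

    def forward(self, x):
        f1 = self.stage1(x)
        f2 = self.stage2(f1)
        # every spatial cell at every scale predicts boxes independently
        return [head(f) for head, f in zip(self.heads, (f1, f2))]

preds = TinySSD()(torch.rand(1, 3, 64, 64))
print([p.shape for p in preds])   # per-scale prediction maps
```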