Search - "neural networks"
-
Person: I want to learn to code neural networks and cool AI stuff.
Me: Look into Python or Lua.
Person: Those are too hard, I'm going to use HTML instead.
I got out of there as fast as I could. 😅
-
I'm a self-taught 19-year-old programmer. Coding since 10, dropped out of high school, and got my first job at 15.
In the early days I was extremely passionate: learning SICP and algorithms, doing Haskell, C/C++, Rust, Assembly, writing toy compilers/interpreters, tweaking Gentoo/Arch. I even got a lambda tattoo on my arm after learning lambda calculus and Church numerals.
My first job was at a company which raised $100,000 on Kickstarter. The CEO was a dumb millionaire hippie who was bored with his money, so he wanted to run a company even though he had no idea what he was doing. He used to talk about how he built our product, even though he had zero technical knowledge whatsoever. He was on the news a few times, which was pretty cringeworthy. The company had only one programmer other than me, who was pretty decent.
We shipped the project, but soon we burned through the Kickstarter money and the sales dried up. Instead of trying to acquire customers (or abandoning the project), the boss kept looking for investors, which kept us afloat for an extra year.
Eventually the money dried up, and instead of closing up shop, the boss decreased our paychecks without our knowledge. He also converted us from full-time employees to "contractors" (also without our knowledge) so he wouldn't have to pay taxes for us. My paycheck decreased by 40%, but I still stayed.
One day, I was trying to write an image to a USB drive, and I did "dd of=/dev/sda" instead of sdb, thereby wiping out our development server. They asked me to stay at the company, but I turned in my resignation letter the next day (my highest-ever post on Reddit was in /r/TIFU).
Next, I found a job at a "finance" company. $50k/year as an 18-year-old. The CEO was a good-looking smooth-talker who had made a few million bucks talking old people into giving him their retirement money.
He claimed he had changed his ways and was now trying to help average folks save money. So far I've been here 8 months, and I do not see that happening. He forces me to do sketchy shit that clearly doesn't have clients' best interests in mind.
I am the only developer, and I quickly became a back-end and front-end ninja.
I switched the company infrastructure from a shitty drag-and-drop website builder, WordPress, and shitty Excel macros to a beautiful custom-written Python back-end.
Little did I know, this company doesn't need a real programmer. I don't have clear requirements, I get unrealistic deadlines, and the boss is too busy to even communicate what he wants from me.
Eventually I sold my soul. I switched parts of it back to WordPress, because I was not given enough time to write custom code properly.
For the latest project, I switched from custom React/Material/Sass to drag-and-drop Typeforms for surveys.
I used to be an extremist FLOSS Richard Stallman fanboy, but eventually I traded my morals, dreams and ideals for a paycheck. Hey, $50k is not bad, so maybe I shouldn't be complaining? :(
I was addicted to pot for 2 years. Recently I got arrested, and it is honestly one of the best things that ever happened to me. Before I got arrested, I did some freelancing for a mugshot website. In unrelated news, my mugshot disappeared.
I have been sober for 2 months now, and my brain is finally coming back.
I know average developer hits a wall at around $80k, and then you have to either move into management or have your own business.
After getting sober, I realized that money isn't going to make me happy, and I don't want to manage people. I'm an old-school neck-beard hacker. My true passion is mathematics and physics. I don't want to glue bullshit libraries together.
I want to write real code, trace kernel bugs, optimize compilers. Alas, I was born in the wrong generation.
I've started studying real analysis, brushing up on differential equations, and am now trying to tackle machine learning and neural networks, and to understand the juicy math behind gradient descent.
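For anyone else chasing that same math, here is a minimal sketch of gradient descent on a toy one-weight regression; the data and learning rate are made up for illustration:

```python
import numpy as np

# Toy data: y = 3x plus noise; we recover the 3 by gradient descent.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 100)
y = 3.0 * x + rng.normal(0, 0.1, 100)

w, lr = 0.0, 0.1  # single weight, fixed learning rate

for _ in range(200):
    grad = np.mean(2 * (w * x - y) * x)  # d/dw of mean((wx - y)^2)
    w -= lr * grad                       # step against the gradient

print(round(w, 2))  # ~3.0
```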
I don't know what my plan is for the future, but I'll figure it out as long as I have my brain. Maybe I will continue making shitty forms and collect paycheck, while studying mathematics. Maybe I will figure out something else.
But I can't just let my brain rot while chasing money and impressing dumb bosses. If I wait until I get rich to do things I love, my brain will be too far gone at that point. I can't just sell myself out. I'm coming back to my roots.
I still feel like after experiencing industry and pot, I'm a shittier developer than I was at age 15. But my passion is slowly coming back.
Any suggestions from wise ol' neckbeards on how to proceed?
-
I just started playing around with machine learning in Python today. It's so fucking amazing, man!
All the concepts that come up when you search for tutorials on YouTube (you know, neural networks, SVM, linear/logistic regression and all that fun stuff) seem overwhelming at first. I must admit, it took me more than 5 hours just to get everything set up the way it should be, but the end result was so satisfying when it finally worked (after ~100 errors). (A minimal first-experiment sketch follows after the links below.)
If any of you guys want to start, I suggest visiting these YouTube channels:
- https://youtube.com/channel/...
- http://youtube.com/playlist/...
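A minimal first-experiment sketch in scikit-learn, in the spirit of the post above; the dataset choice is mine, not the poster's:

```python
# A first scikit-learn experiment: logistic regression on the iris dataset.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(clf.score(X_test, y_test))  # typically ~0.95+
```
-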
Smart India Hackathon: Horrible experience
Background: our task was to do load forecasting for a given area. Hourly energy consumption data for the past 5 years was given to us.
One government official asks the following questions:
1. Why are you using deep learning for the project? Why are you not doing data analysis?
2. Which neural network "algorithm" are you using? He wanted to ask which model we were using, but he didn't have a single clue about neural networks.
3. Why are you using libraries? Why not your own code?
Here comes the biggest one,
4. Why haven't you developed your own "algorithm" (again, he meant model)? All you have done is use some library. Where is the "novelty" in your project?
I just want to say that if you don't know anything about ML/AI, then don't comment on it. And the worst thing was, he was not ready to accept the fact that for capturing temporal dependencies where the underlying probability distribution is unknown, deep learning performs much better than traditional data analysis techniques.
After hearing his first question, the second one was not a surprise for us. We were expecting something like that. For a few moments, we were speechless. Then one of us started by showing the neural network architecture. But after some time, he rudely repeated the same question, "where is the algorithm". We told him every fucking thing used in the project, ranging from the RMSprop optimizer to the backpropagation-through-time algorithm to the mean-squared-error loss function.
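For context, a minimal sketch of the kind of model we were describing to him; the shapes and the random stand-in data are illustrative, not our actual pipeline:

```python
import numpy as np
import tensorflow as tf

# Hypothetical framing: 24 hourly readings in, next hour's load out.
X = np.random.rand(1000, 24, 1)   # stand-in for real consumption windows
y = np.random.rand(1000, 1)

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(64, input_shape=(24, 1)),  # trained via BPTT
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="rmsprop", loss="mse")  # the pieces named above
model.fit(X, y, epochs=5, batch_size=32)
```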
Then very calmly, he asked the third question: why are you using libraries? That moron wanted us to write a whole fucking optimized library. We were speechless at this question. Finally, one of us told him the "obvious" answer. We were completely demotivated. But it didn't end here. The real question was waiting. At the end, after listening to all of us, he dropped the final bomb: WHY HAVE YOU USED A NEURAL NETWORK "ALGORITHM" WHICH HAS ALREADY BEEN IMPLEMENTED? WHY DIDN'T YOU MAKE YOUR OWN "ALGORITHM"? We again stated the obvious answer: that it takes at least a year or two of continuous hard work to develop a state-of-the-art algorithm, and that is when you build it on top of some existing "algorithm". After listening to this, he left. His final response was "Try to make a new "algorithm"".
Needless to say, we were completely demotivated after this evaluation. We all had worked too hard for this. And we had the ability to explain each and every part of the project intuitively and mathematically, but he was not even ready to listen.
Now, all of us are sitting aimlessly, waiting for the Hackathon to end. 😢😢😢😢😢
-
About six months ago I decided I wanted to learn to write a neural network from the ground up, using only the C++ standard lib. Had to learn some linear algebra, multivariable calc and a dash of wizardry.
The mathematics of neural networks is still one of the coolest things I've ever learnt. It still amazes me that you can make a specialized mini-brain out of nothing but numbers.
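A tiny illustration of that point, in numpy rather than C++: the untrained "mini-brain" below really is nothing but numbers and a squashing function:

```python
import numpy as np

# A two-layer "mini-brain": just matrices and a nonlinearity.
rng = np.random.default_rng(42)
W1, b1 = rng.normal(size=(3, 4)), np.zeros(4)   # input -> hidden
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden -> output

def forward(x):
    h = np.tanh(x @ W1 + b1)   # hidden activations
    return np.tanh(h @ W2 + b2)

print(forward(np.array([0.5, -1.0, 2.0])))
```
-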
!rant
Ah the joy of tweaking one tiny segment of code to reduce your neural network runtime from 3 minutes for 100 generations to 3 seconds!
-
!rant
Just wanted to share stuff. It's my first time.
<backstory>
I'm a C# dev, recently got excited about neural networks and stuff. I have a gf who studies biology
</backstory>
So I noticed yesterday what my gf does for her science stuff. She has an image taken through a microscope of some erythrocytes and shit. And she's clicking on those tiny fuckers to count them. There are almost a hundred of those things in an image and she has a buttload of those images.
I was like "what the fuck? Don't you have an app that counts the stuff for you or something?"
And there is none. Or at least i wasn't able to find one. That's bullshit. My inner programmer screams with hate for boring repetitive tasks.
So I guess I'm going to write a neural network to count similar stuff in an image.
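A classical OpenCV pass might be worth trying before (or alongside) the neural network; a rough sketch, assuming reasonably separated, well-stained cells and a hypothetical file name:

```python
import cv2

img = cv2.imread("cells.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input
blur = cv2.GaussianBlur(img, (5, 5), 0)

# Otsu thresholding separates stained cells from the background.
_, mask = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Each connected blob is (roughly) one cell; label 0 is the background.
count, _ = cv2.connectedComponents(mask)
print(count - 1, "cells")
```

Touching cells would need watershed splitting, which is probably where a learned model starts to earn its keep.
-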
Very specific and annoying situation here:
- Working on a machine learning project with other people
- I'm on Linux, they use Windows
- We code in python
- We generally use vscode for development, and its python extension
I implement some basic neural networks with TensorFlow and add a bunch of logging. I test it on my machine and it works fine.
But, my group mates report that "after a few seconds the entire client hangs".
Apparently it only happens on Windows?
We start debugging the hell out of the code I implemented, added 20 log messages and sat there for a solid hour.
Until I make one very odd realization: the issue doesn't happen when I run the script in my terminal, instead of vscode with the debugger. So I try different debug settings, using an external terminal instead of vscode's built in debug console seems to fix it too.
And I make another observation: In the debug console, some messages don't seem to appear at all, while the external terminal shows them just fine.
So, it turns out that printing an epsilon character, “ε” (U+03B5), causes the entire thing to hang.
It's the year 2020 and somehow we still can't do unicode.
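For anyone hitting the same thing, a plausible repro/mitigation sketch; in our case the culprit looked like a legacy Windows code page that simply cannot encode "ε":

```python
import sys

# On Windows the stream may default to a legacy code page (e.g. cp1252)
# that has no "ε"; printing it can then fail or misbehave.
print(sys.stdout.encoding)

# One workaround (Python 3.7+): force UTF-8 on the stream.
sys.stdout.reconfigure(encoding="utf-8")
print("\u03b5")  # ε
```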
I'm so done, what on earth.
-
Enjoying college life to the fullest was the mindset of the confident boy who now burns the midnight oil to cope with the world and give himself a proud future.
Is this a story of some successful person, who has achieved a lot in his life?
No, it is the story of the guy who lost all his hopes for the future after spending the very first month in his college.
The first month was enough to perceive the reality of the domain I had let myself into. It was enough for someone who didn't even know what programming languages were to realize how far behind the people around him he was.
Being from a private college which hardly anyone recognizes, expecting them to prepare me to stand out on my own would be foolishness. I took my first step and started learning my very first programming language, Python.
I met some people with similar interests. We discussed, we exchanged resources, we talked to seniors to guide us. And yes, we were guided.
There were many bad days. Days which made me regret starting late. Many a time I declared myself useless, and some other times people did. The good thing is I never stopped, and I improved myself with each day.
And now, after spending more than a year in the same college, I look at the things I have learnt. Today I can develop decent websites, can train neural networks, and can hold a good position on coding platforms.
All you need is to take a step. I may not be the best, but I am definitely better than what I was yesterday.
If you have started something, then concentrate on finishing it.
-
(Warning: kinda long && somewhat of a political rant)
Every time I tell someone I work with AI, the first thing to come out of their mouth is "oh but AI is going to take over the world!"
No.
It was only somewhat recently that it started being able to recognize what was in a picture from over 3 million images, and even that it's not that great at. Honestly, people always say "AI is just if-else" ironically, but it isn't really that far from the truth: we just multiply an input by weights and check the output.
It isn't some magical sauce; it's not being born and then exploring a problem. It's just glorified probability prediction. Even in "unsupervised" learning, the domain set is provided; in "reinforcement learning", which has gotten super popular lately, we just have the computer decide which policy is optimal and apply that to an environment. It's a glorified decision tree (and technically tree models like XGBoost outperform neural networks and deep learning on a large number of problems) and it isn't going to "decide" to take over the planet.
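To make the "glorified probability prediction" point concrete, a toy sketch; the weights are arbitrary, and real networks obviously stack many of these:

```python
import numpy as np

# Multiply inputs by weights, squash, threshold: not far from an if-else.
def predict(x, w, b):
    p = 1 / (1 + np.exp(-(x @ w + b)))  # sigmoid of a weighted sum
    return "cat" if p > 0.5 else "not cat"

print(predict(np.array([0.2, 0.9]), np.array([1.5, -0.7]), 0.1))
```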
Honestly all of this is just born out of Elon Musk fans who take his word as truth and have been led to believe that AI is going to take over the world. There are a billion reasons why it can't! And to top it off this takes away a lot of public attention from VERY concerning ethical issues with AI.
Am I the only one who saw Google Duplex being unveiled and immediately thought "fraud"? Forget phone scammers, if you trained duplex on the mannerisms of, for example, a famous politician's voice, you could impersonate them in an audio clip (or even video clip with deepfakes). Or for example the widespread use of object detection and facial recognition in surveillance systems deployed by DoD. Or the use of AI combined with location tracking and browsing analytics for targeted marketing.
The list of ethics breaches is endless, and I find it super suspicious that those profiting the most off of unethical AI are all too eager to shift public concern to some science-fiction Terminator-style takeover that, if ever possible, would be a long way out and is not any sort of a priority issue right now.
-
In high school I took a special major in which we learned various computer and mathematics skills such as neural networks, fractals, etc.
One of the teachers there, who was also a mentor to me, is a physicist. He taught us Python, which he didn't know very well (he wasn't that bad either), and science, which was his true passion.
My end project was to try to predict the stock market using a simple neural network and daily graphs of 50 NASDAQ companies. The result reached 51% prediction accuracy on average, which was awful, but I couldn't forget the happiness and curiosity that working on this project made me feel.
Now, 5 years later, I have a BSc and am finishing an MSc in Computer Science, and I sincerely want to thank this mentor for giving me the guts and will to accomplish this.
-
Professor asks me to do research on deep complex neural networks, as in neural networks that operate on complex numbers.
Meanwhile me: "Google, what are complex numbers?"
-
> One of my guys from work.
> Walks up to my office
> Says "say something cursed about software development or programming that would make people cry"
> Me: "If I could I would program games and neural networks with PHP"
> Him: .......you fucking monster.
> Walks away
For reference: we both like PHP, but know and understand why that is a baaaaad idea.
-
For those of you who have been following me, I would like to announce that I have received a perfect score for my bachelor thesis (OnDeviceAI, i.e. training neural networks on mobile phones).
-
Well, I just learned how much of a pain it is to learn the math behind neural networks. I really should have paid more attention in high school.
I will learn, the hard way I guess...
-
My neural networks journey so far:
Look up tutorials -> see that Python is a popular tool for ML -> install Python -> pip install scipy -> breaks with some weird error involving BLAS library code -> spend half an hour fixing it -> try installing Theano -> breaks because my USERNAME HAS A SPACE IN IT LIKE SERIOUSLY? WTF -> make new account without a space in the name -> repeat till Theano -> run tests, found out that I didn't install CUDA support -> scrap the install and redo with CUDA support -> CUDA libraries take forever to download on shitty internet -> run tests -> breaks with some weird Theano compiler error -> go crying to friend -> friend tells me about Anaconda -> scrap the previous install and download Anaconda over shitty connection -> mess up conda environments because noobishness -> scrap, retry -> YESS I FINALLY GOT IT WORKING TIME TO DO SOME LEARNI-crap it's 4 in the morning already.
I realize that I'm a Python noob (and also, uni computers with GPUs have preconfigured Windows installed only, no Linux), but is installing Python libraries always such a pain? Am I doing something wrong? Installing via Anaconda felt like cheating, tbh.
-
The only way to solve high-dimensional representation problems in convolutional neural networks is having a high-dimensional duck
-
Dank Learning: Generating Memes with Deep Learning!!
Now even machines can crack jokes better than me 😣
https://web.stanford.edu/class/...
-
.Net is masterrace.
C# gives me frequent orgasms.
Use SQL Server for DB, add to that parallel querying and NoSQL capabilities.
Incredible development speed with EF
Incredibly powerful web framework...check
AI and neural networks...check
App development...check
If you want to do some of that functional programming, F# is the language for you.
And the best thing: .NET Core runs on Linux too
-
!rant
Just wrote my first piece of code using neural networks. Even explaining it to people is fun: "So you wrote a program that writes code?" "Exactly!"
-
"What's your degree in?"
Electrical Engineering.
"So, you do coal stations and stuff?"
Nope. I look for power grids to optimise with the help of neural networks.
-
How long will it take till an AI creates "aiRant" where neural networks and AIs are writing rants about us? :-(
-
Not goals. More like a dream...
... To get into that one uni that I actually want for a PhD.
I have gotten so spoiled playing with robots and neural networks that I can't even imagine falling that badly from grace to go back to... web development. Like, I'm not looking down on it; it's just that I found my passion, and there are not enough jobs out there for me without going through a PhD or high-end research.
... And I honestly don't have a backup plan. There are choices, but I don't like any of them. So here's hoping they accept me. ¯\_(ツ)_/¯
-
Fucking professors, they think they can play ping pong with students. I started my thesis on ransomware, but that meaningless biological creature, my advisor, sent me to another one, who sent me to another one, who sent me back to the first professor. After almost three weeks I had nothing done, so I switched professor and thesis topic to neural networks (TensorFlow, Theano, Keras, Caffe and others), and now they want me back, and one of them said that he is offended. Fucking retards. I have to graduate, and I'm working hard to do it in September. If you were a little bit interested, I could have collected some material to study in August, sacrificing even the summer, but you mocked me; then again, it's my career and my money, so it doesn't matter to you. You deserve to get stuck in an infinite loop of pain.
-
!rant
I just woke up a minute before one of my first neural networks finished training
!!rant
My laptop's dual-core CPU with 4 threads is slooooooooow
-
I would love to create a food suggester based on neural networks.
Something like Tinder, but you choose food.
-
"Top Neural Network Projects for Beginners 2020"
1. Self-driving Car with Ability to Transform and Fly to Mars in 6 Days
-
Holy duck, I lost two days on a convolutional autoencoder split into two separate neural networks, to encode and decode separately, and its reconstruction had some strange behaviours. I was feeding in an image and saving the encoded, compressed representation as a new image; that way I could decode it with the decoder whenever I wanted, saving space.
How retarded am I?
The internal layers' weights had no constraints, so during the learning phase the convolutional filters could contain any number, positive > 255 or even negative, and I could not save them in a new image as they were, so they were clipped automatically to between 0 and 255, with a huge information loss.
It's so frustrating when you rewrite the code in every possible way, obtain the same wrong result, and then realize it was a borderline behaviour of a third-party library.
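A minimal numpy sketch of that failure mode, and the fix it calls for: save the raw array instead of an image (values here are random stand-ins for the encoder output):

```python
import numpy as np

code = np.random.randn(8, 8).astype(np.float32) * 100  # stand-in encoder output

# What writing to an 8-bit image effectively does: clip, then truncate.
as_image = np.clip(code, 0, 255).astype(np.uint8)
print(np.allclose(code, as_image))  # False: negatives and values > 255 are gone

# Saving the raw array keeps every float intact.
np.save("code.npy", code)
print(np.allclose(code, np.load("code.npy")))  # True
```
-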
!rant
Thanks, Google, for giving me the opportunity to work with neural networks without being an expert on them.
http://automl.github.io/auto-sklear...
To sum it up:
1. Preprocess data
2. Use Automl to train classifier
3. ????
4. Profit
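A minimal sketch of steps 1 and 2, assuming auto-sklearn is installed and substituting a toy dataset for real preprocessing:

```python
import autosklearn.classification
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split

# 1. "Preprocess" (here: just a toy dataset and a split)
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

# 2. Let auto-sklearn search models and hyperparameters on its own
automl = autosklearn.classification.AutoSklearnClassifier(
    time_left_for_this_task=120,  # total search budget, in seconds
)
automl.fit(X_train, y_train)
print(automl.score(X_test, y_test))  # 3. ???? / 4. Profit
```
-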
I love python, but I hate dealing with python dependencies, especially on Windows.
I was tinkering and researching with neural networks, so I wanted to try out pybrain. I wrote my project, with pybrain installed via pip, and tried to build it.
Oh, what's that? Pybrain doesn't work with python 3? Well I'll download the version that's supposed to. Oh, that version has a deprecated numpy api? Let me just install those other resources. Oh, that requires a broken module that has no publicly available source?
Let's try Python 2. Oh, now that's working, I just need to export environment variables for some "BLAS source". Some quick Google searching and the only solution that would work is building a bunch of Cygwin modules by hand. That's fine, I have an Ubuntu partition.
An hour later I'm compiling FORTRAN dependencies on Ubuntu.
Coding time: 1 hour
Dependency time: 3 hours
-
http://ai-junkie.com is a brilliant website - it's finally allowed me to understand neural networks and genetic algorithms properly!
-
My AP CompSci teacher, now 15 years ago, inspired me to always reach further than I thought possible. I was creating neural networks in C++ before my first internship. It was amazing.
I mourned his loss when he passed away, but now I offer him thanks when success comes my way at work. I still feel like he is helping me as my secret angel of software development. -
Make sure to check out the latest Veritasium video. It's mostly about the analog computers of the 20th century. At the end he teased that part 2 will be about using analog computers for neural networks.
-
Machine learning is hard! Spent a whole day with Weka and its neural networks. God, my brain. There is too much to know before being really equipped to use this tool... especially from code.
-
Pro tip: always make sure your methods return the correct variable.
I'm currently working with deep neural networks using TensorFlow. I needed to generate some test data and wrote a program to create it. I had two text files which each consisted of approximately 5000 lines of text.
I wrote a method that should sort out some words and make my final data shorter. When I executed the program the first time on our server, it ran for about 25 minutes, then crashed due to a MemoryError (which in Python means that the server didn't have enough RAM). That seemed quite weird, since I only had about 10k lines of text and I even sorted out a bunch of it, and the server has 128GB of RAM with nothing else using it.
Apparently I returned the wrong variable. That meant that my program tried to save 750 quadrillion lines of text rather than just a few thousand.
Always make sure to return the correct variables!
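A hypothetical reconstruction of the failure mode; the names and logic are invented for illustration, not taken from the actual program:

```python
def expand(lines):
    # Build every pairwise combination, then keep only the short ones...
    combos = [a + " " + b for a in lines for b in lines]
    kept = [c for c in combos if len(c) < 40]
    return combos  # bug: meant to return `kept`, the much smaller list
```
-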
First time ever merging two massive networks.
If this doesn't give me pain, technically my thesis work is done. Prettification, optimization, and the actual writing are left, but the main part is done.
And when this is done, I shall feel epic.
-
"Programming today is a race between software engineers striving to build bigger and better idiot-proof programs, and the universe trying to build bigger and better idiots. So far the universe is winning." ~~ Rick Cook
This guy single-handedly explained GANs back in the 90s and nobody noticed.
-
what do you recommend for me to learn about next?
I have learnt about:
- web frontend/backend (php)
- android and java
- c, c++, nasm, gnu assembler
- parallel computing
- cli operating systems
with that background, what would you recommend?
I'm considering:
- neural networks
- making a server
- ethical hacking
- starting a blog
-
What I have learned from neural networks for my life.
It's already been a year since I got familiar with NNs. I did not write anything serious and did not learn them that deeply. But the basic knowledge gave me an interesting view of my life. I just want to share one fact with you.
There is a learning rate in NNs, which specifies how fast the network learns. If it is too high, any new information is accepted very easily but wipes out the network's past knowledge; if it is too low, the network hardly accepts new info but remembers everything. When people are born, they learn everything very fast, and with age they become harder learners. Here is what I've learned: you should not live in the past, nor only for the current day. You just have to keep the balance.
-
I have a project idea:
Web app that will automatically generate random like-a-Facebook project ideas, handle the business side, and automatically post the offer on multiple forums and LinkedIn and send it out by email. All using AI, Neural Networks, Big Data and VR.
Seriously, once fucking more some African or Indian guy messages me to work for his awesome "it's like Facebook but different" idea where he needs "just backend, frontend and mobile apps", says he will just "handle the rest", and that he "has no money now but after I sign an NDA he will give me some shares", I am gonna find him and shit on his head. Monday hasn't even ended yet and I have already read 9 "offers" like this in my mail and on Facebook; only one guy was white, the rest Indians or Africans.
Why are people then surprised that we consider black and Indian devs a fucking joke 90% of the time? I have an Indian dev friend and he could not find a dev job for 2 months, because everyone would rather work with a less skilled Asian/white guy than an Indian/black guy. This is not about racism, but about those retards that are acting like idiots. Hope I did not offend anyone (unless you do shit like this; then please just smash your keyboard over your head).
Words like AI and neural networks are used just to lure investors to our GoFundMe campaign and steal their money after 2 years of silence.
-
i was learning neural networks, started with keras and was on the first tutorial where they started by importing pandas
so i switched to learning data analysis using pandas in Python where they started by importing matplotlib and i realized data visualization is also important and now I'm reading matplotlib docs...🙄
-
Opening Discussion Here,
I am trying to make a simple zen game that uses a neural network. The game is simple: just a square object with a certain viewing angle and viewing distance, and the objective is finding food on a map with some other non-food objects as obstacles.
I've encountered a problem. I am trying to find a way to implement "seeing objects in a certain viewing region" and have come up with two ideas, described in the picture below. The problem is, I don't want to feed the NN too much information; by that I mean I don't want to tell the AI what an object is, I want it to find that out by itself. And I cannot find a way to do this, either because of limitations of the framework I use (p5.js) or simply because I cannot figure out how.
I am on my way there though; here is where I am currently (in pseudocode):
https://pastebin.com/7Ae1ZNYa
what do you think?
-
Hiuahuuhaei we can't even coordinate a fucking simple web app and they want us to use neural networks to identify super fucking hard stuff that is hard even for people to do by hand 😂😂😂😂
-
I just found out that AI is gonna drive humanity extinct. And developers will have a privileged seat to appreciate that. So I decided to learn Python and machine learning to help!
-
Does gradient descent in artificial neural networks apply the most changes closest to the input layer?
-
... worst drunk coding experience?
none. or to be more precise, all three of them that I had. I can't code drunk, i hate doing it, i hate even thinking about doing it when drunk.
so after those initial three attempts i don't try to do it again, ever.
BUT, best coding experience while high?
ALL OF THEM.
some of the best pieces of code I wrote i did when I was high. my mind goes into overdrive at those times, and my thinking is not lines/threads of thought, but TREES of thought, branching and branching, all nodes of each layer of the tree coming to me AT ONCE, one packet == whole layer across all of the branches.
and the best was when one day, in about 14 hour marathon of coding while high, i wrote from scratch a whole vertical slice of my AI system that i've been toying around in my head for several years prior, and I had all of the high-level concepts ALMOST down, but could never specify them into concrete implementations.
and I do mean MY ai system, my own design, from the ground up, mixing principles of neural networks and neuropsychology/human brain that I still haven't seen even mentioned anywhere.
autonomous game ai which perceives and explores its environment and tools within it via code reflection, remembers and learns, uses tools, makes decisions for itself for its own well-being.
in the end, i had a testbed with person, zombie and shotgun.
all they had pre-defined in their brains were concepts of hunger and health. nothing more.
upon launching it, zombie realized it wants to feed, approached oblivious person, and started eating it.
at which point, purely out of how the system worked, person realized: "this hurts, the hurt is caused by zombie, therefore i hate zombie, therefore i want to hurt it", then looked around, saw the shotgun, inspected its class by reflection, realized "this can hurt stuff", picked the shotgun up, and shot the zombie.
remembered all of that, and upon seeing another zombie, shot it immediately.
it was a complete system; all it needed to become a full-fledged thing was adding more concepts and usable objects, and it would automatically be able to create complex multi-stage, multi-element plans to achieve its goals/needs/wants and execute them. and the system was designed in such a way that by just adding a dictionary of natural language words for the concept objects on top of it, it should have been able to generate (crude but functional) english sentences to "talk" about its memories, explain what happened when, how it reacted, what it did and why, just by exploring the memory graph the same way as when it was doing its decision process... and by reversing the function, it should have been able to receive (crude) english sentences that would make it learn what happened somewhere else in the gameworld to someone else, how to use stuff, and to tell it what to do, as in, actually transfer actual actionable usable knowledge to it...
it felt amazing to code for 14 hours straight, with no testruns during that, run it for the first time after those 14 hours, and see that happen.
and it did, i swear! while i was coding, i was routinely just realizing typos and mistakes i did 5-20 minutes ago, 4 files/classes ago! the kind you (and i) usually notice only when you try to run the thing and it bugs out.
it was a transcendental experience.
and then, two days later, i don't remember anymore what happened, but i lost all of that code.
and since then, i never mustered enough strength and resolve to try and write the whole thing again.
... that was like 4 years ago.
i hope that miracle will happen again one day...
-
I recently went through a very detailed and well-explained Python-based project/lesson by Karpathy which is called micrograd. This is a tiny scalar-valued autograd engine and a neural net on top of it.
The project above is, as expected, built on Python. For learning purposes, I wanted to see how such a network may be implemented in TypeScript and came up with a 🤖 micrograd-ts - https://github.com/trekhleb/... repository (and also with a demo - https://trekhleb.dev/micrograd-ts/ of how the network may be trained).
Trying to build anything on your own very often gives you a much better understanding of a topic. So, this was a good exercise, especially taking into account that the whole code is just ~200 lines of TS code with no external dependencies.
The micrograd-ts repository might be useful for those who want to get a basic understanding of how neural networks work, using a TypeScript environment for experimentation.
With that being said, let me give you some more information about the project.
## Project structure
- [micrograd/](https://github.com/trekhleb/...) — this folder is the core/purpose of the repo
- [engine.ts](https://github.com/trekhleb/...) — the scalar `Value` class that supports basic math operations like `add`, `sub`, `div`, `mul`, `pow`, `exp`, `tanh` and has a `backward()` method that calculates a derivative of the expression, which is required for back-propagation flow.
- [nn.ts](https://github.com/trekhleb/...) — the `Neuron`, `Layer`, and `MLP` (multi-layer perceptron) classes that implement a neural network on top of the differentiable scalar `Values`.
- [demo/](https://github.com/trekhleb/...) - demo React application to experiment with the micrograd code
- [src/demos/](https://github.com/trekhleb/...) - several playgrounds where you can experiment with the `Neuron`, `Layer`, and `MLP` classes.
Demo (online)
---------------------
To see the online demo/playground, check the following link:
🔗 https://trekhleb.dev/micrograd-ts/
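For reference, the original Python micrograd that the TypeScript port mirrors can be exercised like this (assuming `pip install micrograd`); the TS `Value` behaves analogously:

```python
from micrograd.engine import Value

a = Value(2.0)
b = Value(-3.0)
c = a * b + a          # builds a tiny computation graph of scalars
c.backward()           # back-propagates derivatives through the graph
print(c.data, a.grad)  # -4.0, and dc/da = b + 1 = -2.0
```
-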
Final synopsis.
Neural Networks suck.
They just plain suck.
5% error rate on the best and most convoluted problem is still way too high
It's amazing you can make something see an image it's been trained on; that's awesome...
But if I can't get a simple function approximator below a 0.07 difference on a scale of 0 to 1, and the error value on a fixed-point system is still pretty goddamn high, then even if most of the data sort of fits when spitting back inference values, it is unusable.
Even the turret aimer I trained successfully would sometimes skip around a full circle past the target before lining up after another full circle.
There has to be something LIKE IT that actually works in principle.
I think my behavioral simulation might be a cool idea: primitive environment, primitive being, reward learning, but with an attached DATABASE.
-
Eureka! I have done it! I have written a program that will replace 80% of programmers with an AI!
The approach is to use grammar identification with language heuristics to recognize solution patterns using multilayered neural networks. The source code uses trusted pattern samples that are scored by human programmers. The code is written using text duplication and placement from the trusted sources.
TLDR: Uses pattern matching to copy and paste from Stack Overflow.
-
Learn a lot more stuff about neural networks, machine learning and try to build and code my first neural network. I hope that I have enough patience for all of that 😬.
-
These days all companies just want to show off how evolved their AI is.
Any presentation without the neural networks CGI animation is incomplete!!!
-
!rant
PROJECT
Have been working on a tool to visualize neural networks.
It currently supports Dense and Conv networks.
Tell me what you think!
https://github.com/Prodicode/... -
Before I dip my toes into machine learning, let me leave some silly comments so I can laugh at myself in the future.
Let's make geth.
1. The model will spit out layer definitions and the size of sample data for training; child models are trained with limited computational resources.
2. Child models are voters that only respond in terms of yes/no. A simple majority wins and then the action is taken.
3. The only goal for master models is to survive, i.e. to prevent me from killing them.
Questions:
1. How do models respond to a random output size? (Study GPT-3, should take weeks/months but worth it.)
2. How do we define the actions the voters vote on? It sounds like the boundary between actions should be blurry, and votes can change from tick to tick (i.e. responding to something in a split second). Therefore:
3. Why haven't I seen this yet? Is this design a stupidly complex way of achieving the same thing done by a simple neural network?
I am full of curiosity and stupidity.
-
One year ago I made a resolution to do one of two things: get serious about learning neural networks, or finish one of the side projects (a markdown-based wiki with some nifty features). Didn't do the first one, and got the second one to about 50%.
Not really happy, as I did not complete either goal. Still, some decent work was done and I built an open-source parser. So, I guess I am 50% happy.
What were your achievements this year? Did you achieve 100%?
-
Just uploaded my first practice project to GitHub. It is an AI-based flower identifier. Any advice or guidance is sorely needed. I want to become good at training neural networks. Please provide feedback! Thanks.
https://github.com/Jatin287587/...
-
TL;DR: fuck shitty algorithms!
The YouTube app seems to have a highlights option for your subscriptions. Found out because it activated itself.
Firstly: NEVER FUCKING EVER CHANGE MY FUCKING OPTIONS BECAUSE YOU ADDED A NEW FEATURE. YOU MAY NOTIFY ME, AND IF I WANT IT ACTIVATED I AM PROBABLY ABLE TO TOUCH MY SCREEN TWICE AND ACTIVATE IT!
Secondly: Why can't people understand that I don't want any fucking neural networks (except sometimes devrant because the algo is the algo) to tell me what I want to look at, especially if it's on fucking YouTube where I only have to go through a few videos a day? But hey maybe I want to watch that video I didn't want to watch 5 days ago!?
Thirdly: I subscribed to more than two channels, and there might be a fucking reason why I subscribed to those channels. Don't show me 5/6 videos not only from the same creator, but which are just the last 5 videos from the same series.
-
So I realized if done correctly, an autoencoder is really just a bootleg token dictionary.
If we take some input and pass it through a custom hash function that strictly produces hashes with only digits as output, then we can train a network, store the weights and biases, and then train a decoder on top of that.
Using random dropout on the input-output pairs, we can do distillation of the weights and biases to find subgraphs that further condense this embedding.
Why have a token dictionary at all?
-
So I've been buried in mathematics a lot, practicing neural networks and exploring different architectures. However, I realized that without being able to deploy them, they would do nothing sitting on localhost :)
And from there I learned the basics of Flask
Then the basics of backend
Then my friend suggested Node.js; I delved into it and found it quite nice.
The issue is
I know I don't like HTML and CSS at all.
NO logical programming, just the use of divs to create aesthetic websites, and I HATE it.
But I would also like to try the front end part of developing a website (or an app, who knows?) and I feel I can't find any options here.
What could I possibly do to move forward from this trench that engulfs me?
-
Machine Learning and Deep Neural Networks in particular. More job offers to pick from in upcoming years for me :P
-
Legends: the only way to succeed is to get out of comfort zone.
Me: starts using Javascript to train neural networks. -
I can't wait to experience the joy of trying to implement a neural network to identify shapes in under a week....
-
There is so much buzz about AI and fear of missing out on the departing AI train, but as a dev I have no clue where to even get started!?
What can we developers do with AI?
OK, I can get some code for free. I can use an LLM as a half-smart search engine. I can integrate my product with some AI service. I can produce content to teach said things to others...
Nothing new, really, just another API or another search engine.
It is of course possible to start making some neural networks, but I can't really picture that as a high-demand skill, can you?
Maybe at some of the big companies, but for an average client?
Does anyone know what kind of AI knowledge a developer should really learn?
Especially something a client would be interested in?
Here is a potato for scale:
-
Epoch 2/4
777/1054 [=====================>........] - ETA: 45:31 - loss: 2.6682
Screw you, Keras model
-
Hello everyone. I'm getting interested in neural networks and I am trying to install the TensorFlow machine learning API using Python's pip package manager, but it gives me an error saying that the package couldn't be found. I made sure that I spelled everything correctly and tried using virtualenv to install it as well, and I uninstalled Python 3.7 and installed 3.6.4. Could anyone help me?
-
[long confession/question]
So I was asked by a client to make an app similar to Prisma (not exactly that, but let's say a caricature app) and I knew I had to research a lot.
Now, I have been loyal to PHP for over 5 years, so I first tried GD and imagick, but the results were not very good, so I thought, let's try OpenCV. I didn't wanna make any compromises, so I didn't go the bridging route; I worked in native Python even though I am a newbie in it. I was fairly impressed with the cartoonizing results, but others weren't. Soon I learned that this would take much more than simple filter combinations or matrix manipulations.
I read about Prisma and learned it uses deep neural networks for the same.
Now, in the five years I have learnt almost all the things a run-of-the-mill "Full stack Web Developer" should know.
I have a fair knowledge of PHP, many of its frameworks, and many JS frameworks (obviously jQuery); I have a very good understanding of CSS and its models; I have worked on some cool algos and found solutions to many problems; but I haven't gotten to the stage where I can implement neural networks/machine learning in my projects.
It just scares me.
___
A little backstory: I have been the CTO of a small-scale company for about 1.5 years now.
___
So all this got me asking myself whether I should just step down from the post to a position where I can learn more skills. Managing takes a lot more time, in which I can't learn a lot. Sure, I learnt some other important things, but not as much tech knowledge as I would have in a more basic position.
I know not many of you must have read this far, but if you did, what do you think I should do? Really depressed at the moment.
-
"Deep is in. We want people to go deep. Deep neural networks … as opposed to shallow neural networks"
-
Just finished my finals.
Had to run k-means with pen and paper only. I find this kind of question stupid, but why not. BUT WHY THE FUCK DID YOU CHOOSE SOME INITIALIZATION THAT TAKES 13 (!!!!) FUCKING ITERATIONS TO CONVERGE? Just in case my first 12 iterations are correct by chance? Guess what, you fucktard, I GOT IT.
And doing the same computation by hand 13 FUCKING TIMES is moronic as hell, you retarded piece of shit! When you train your neural networks, do you also backpropagate your gradient all by yourself, mongoloid baboon? Getting sick of these stupid assignments
-
https://towardsdatascience.com/chec...
“Check and see if your model can overfit”
See, I did that and nothing was changing, wtf PyTorch!!!!
-
From my big black book of AI notes, back when I was into that:
Dendritic spines - a 'battery' (?) And probably what the dendrite uses to maxpool/softmax or perform 'temporal summation' over its inputs. -
Is it just me or any of you guys tryin to improve the accuracy of ur model be like :
hmmmmm more hidden layers
-
I'm beginning to feel like any kind of specific approximation via neural networks is a myth: that if you can't reduce the output to simple categorical values that can be broadly interpreted between two points, it doesn't work.
I have some questions about the design of the net, and they don't seem to be getting answered. How many layers should I use? How many neurons per layer? How does this relate to the number of desired quantitative scalar outputs I'm looking to create? Even if they are normalized, they can vary GREATLY, and will if I'm approximating the output of several mathematical expressions. Based on this, the expected error ranges of these numbers, and how many significant digits could be produced within the domain of the variable inputs, how many neurons per layer? What does having more layers do? In PyTorch there don't seem to be a lot of layer types per se, but there are a crap ton of activation functions; should I just be using these at the tail end, or should they actually be inserted between layers so the input of the next layer passes through another series of activation functions? What does this do to the range of the output?
Do I need to be a mathematician to do this?
My remembered successes removed quantifiable scalars entirely from the output, meaning that I could interpret successful results from ranges of decimal points.
But I've had no success with actual multivariable regression as of yet, even when there are only 2 input variables on limited value ranges, e.g. [0, 100] and [0, 2π].
And then there are training epochs to avoid overfitting, and a reasonable expectation of how many batches it takes before quality results start to form.
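Not an answer to the theory questions, but a hedged baseline sketch in PyTorch for exactly that two-input case; the layer count, widths, and activations between layers are rules of thumb rather than anything derived, and normalizing the targets tends to matter as much as normalizing the inputs:

```python
import torch
import torch.nn as nn

# Two inputs on very different scales ([0,100] and [0,2*pi]), one output.
net = nn.Sequential(
    nn.Linear(2, 64), nn.Tanh(),   # activations go *between* layers
    nn.Linear(64, 64), nn.Tanh(),
    nn.Linear(64, 1),              # linear head: unbounded output range
)

x = torch.rand(256, 2) * torch.tensor([100.0, 6.2832])
x_norm = (x - x.mean(0)) / x.std(0)          # normalize inputs first
y = x[:, :1] * torch.sin(x[:, 1:])           # toy target expression

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(2000):
    loss = nn.functional.mse_loss(net(x_norm), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
print(loss.item())
```
-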
Looking to get a good understanding of the fundamental ideas and math behind neural networks and support vector machines. I am well versed in math, so I can deal with heavier stuff if needed. I would like to see formulas, but an explanation of their conception would be nice. Does anyone have any resources like this? Practical hands-on exercises would be a plus
-
There are people who develop neural networks/deep learning models/AI-based software.
Does anybody know what we call them? Is it okay to call all of them Machine Learning Engineers/AI Researchers/AI Engineers?
If I'm looking for someone who can make an AI-based program for me, whom should I be looking for on Freelancer or LinkedIn?
-
I've been sitting here staring at extension types and I wonder: what if I had a partial file with partial data?
In general, one could say that in every case where, say, a header is missing (a header that is ALWAYS going to have some identifying characteristics, given a characteristic, statistically frequent pattern of data), there is always a null case where what remains appears as total chaos.
But I wonder: is there a way, beyond simply trying every goddamn possible combination of things until meaningful data is extracted, to identify a file by its content when the part of that content usually used for such a purpose is missing?
What kind of application or technology would be required for this? Certainly not neural networks, but obviously some kind of AI, right?
-
What is the best search algorithm for big data technologies like machine learning and neural networks?
ANY GUESS!!!
Comment it.
-
The Turing Test, a concept introduced by Alan Turing in 1950, has been a foundational concept for evaluating a machine's ability to exhibit human-like intelligence. But as we edge closer to the singularity—the point where artificial intelligence surpasses human intelligence—a new, perhaps unsettling question comes to the fore: Are we humans ready for the Turing Test's inverse? Unlike Turing's original proposition, where machines strive to become indistinguishable from humans, the Inverse Turing Test ponders whether the complex, multi-dimensional realities generated by AI can be rendered palatable or even comprehensible to human cognition. This discourse goes beyond mere philosophical debate; it directly impacts the future trajectory of human-machine symbiosis.
Artificial intelligence has been advancing at an exponential pace, far outstripping Moore's Law. From Generative Adversarial Networks (GANs) that create life-like images to quantum computing that solve problems unfathomable to classical computers, the AI universe is a sprawling expanse of complexity. What's more compelling is that these machine-constructed worlds aren't confined to academic circles. They permeate every facet of our lives—be it medicine, finance, or even social dynamics. And so, an existential conundrum arises: Will there come a point where these AI-created outputs become so labyrinthine that they are beyond the cognitive reach of the average human?
The Human-AI Cognitive Disconnection
As we look closer into the interplay between humans and AI-created realities, the phenomenon of cognitive disconnection becomes increasingly salient, perhaps even a bit uncomfortable. This disconnection is not confined to esoteric, high-level computational processes; it's pervasive in our everyday life. Take, for instance, the experience of driving a car. Most people can operate a vehicle without understanding the intricacies of its internal combustion engine, transmission mechanics, or even its embedded software. Similarly, when boarding an airplane, passengers trust that they'll arrive at their destination safely, yet most have little to no understanding of aerodynamics, jet propulsion, or air traffic control systems. In both scenarios, individuals navigate a reality facilitated by complex systems they don't fully understand. Simply put, we just enjoy the ride.
However, this is emblematic of a larger issue—the uncritical trust we place in machines and algorithms, often without understanding the implications or mechanics. Imagine if, in the future, these systems become exponentially more complex, driven by AI algorithms that even experts struggle to comprehend. Where does that leave the average individual? In such a future, not only are we passengers in cars or planes, but we also become passengers in a reality steered by artificial intelligence—a reality we may neither fully grasp nor control. This raises serious questions about agency, autonomy, and oversight, especially as AI technologies continue to weave themselves into the fabric of our existence.
The Illusion of Reality
To adequately explore the intricate issue of human-AI cognitive disconnection, let's journey through the corridors of metaphysics and epistemology, where the concept of reality itself is under scrutiny. Humans have always been limited by their biological faculties—our senses can only perceive a sliver of the electromagnetic spectrum, our ears can hear only a fraction of the vibrations in the air, and our cognitive powers are constrained by the limitations of our neural architecture. In this context, what we term "reality" is in essence a constructed narrative, meticulously assembled by our senses and brain as a way to make sense of the world around us. Philosophers have argued that our perception of reality is akin to a "user interface," evolved to guide us through the complexities of the world, rather than to reveal its ultimate nature. But now, we find ourselves in a new (contrived) techno-reality.
Artificial intelligence brings forth the potential for a new layer of reality, one that is stitched together not by biological neurons but by algorithms and silicon chips. As AI starts to create complex simulations, predictive models, or even whole virtual worlds, one has to ask: Are these AI-constructed realities an extension of the "grand illusion" that we're already living in? Or do they represent a departure, an entirely new plane of existence that demands its own set of sensory and cognitive tools for comprehension? The metaphorical veil between humans and the universe has historically been made of biological fabric, so to speak.