Search - "neural-net"
-
Skype has a mean auto reply. If the auto reply is based on a neural net, then they must be sampling from bullies.
-
Dropout layers improve neural net learning by randomly "killing" neurons, thus preventing overfitting.
That's how I will justify my alcoholism from now on.
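A minimal sketch of what that looks like in PyTorch (layer sizes here are arbitrary, just for illustration):

```python
import torch.nn as nn

# Dropout zeroes a random 50% of activations on each training pass,
# so the net can't lean on any single neuron. It's a no-op in eval mode.
model = nn.Sequential(
    nn.Linear(128, 64),
    nn.ReLU(),
    nn.Dropout(p=0.5),  # the "randomly killing neurons" part
    nn.Linear(64, 10),
)
```
-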
- make my first app
- build a portable/rugged Raspberry Pi
- make a neural net to find Fortnite wins on Snapchat and auto-block the person
- get a job
-
Anyone else's blood boil when TV shows abuse tech terminology? Like the 'hacker' saying "connecting the virtual TCP to the neural net."
-
I was writing a simple neural net program which checks whether a given passenger would survive the Titanic disaster, and Chuck Norris indeed would survive :D
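For the curious, a rough sketch of how such a model might look with scikit-learn (assuming the classic Kaggle-style CSV with these column names; the file name is hypothetical):

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Load the passenger list and encode a few numeric features.
df = pd.read_csv("titanic.csv")
df["Sex"] = (df["Sex"] == "male").astype(int)
X = df[["Pclass", "Sex", "Age", "Fare"]].fillna(0)
y = df["Survived"]

# Train a small neural net and check held-out accuracy.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
clf.fit(X_train, y_train)
print("accuracy:", clf.score(X_test, y_test))
```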
-
.NET is master race.
C# gives me frequent orgasms.
Use SQL Server for the DB; add to that parallel querying and NoSQL capabilities.
Incredible development speed with EF.
Incredibly powerful web framework... check.
AI and neural networks... check.
App development... check.
If you want to do some of that functional programming, F# is the language for you.
And the best thing: .NET Core runs on Linux too
-
Was using a six-year-old laptop with a first-gen Intel Core i3 to train a neural net. Placed the laptop on a soft bed, training begins, thermal shutdown after 30-40 iterations (30 minutes). F***.
Now starting again :'|
-
For all the people missing Avatar of Kaine: you can now say "@aokbot [subject]" and it will return a comment in the style of the real Avatar of Kaine.
-
An i7, 8GB RAM, and a 2GB Nvidia GPU couldn't handle training a neural net on a Google text corpus! -.-"
I'm just watching it train, nothing to do! Oh but wait, I can rant! 😃😃
-
1.
Accuracy of 0.90 achieved so easily it makes me wonder if I've done something wrong. Lol.
2.
My neural net models are the only things in my life doing well. I think I chose the right career. Lol.
3.
Rerunning experiments is not fun. But getting better results is really... ego-stroking.
-
My first packages, uploaded to NuGet!
A simple neural network library, written in C# (.NET Standard).
Here's a link:
https://nuget.org/profiles/Mitiko233
-
The first fruits of almost five years of labor:
7.8% of semiprimes give the magnitude of their lowest prime factor via the following equation:
((p/(((((p/(10**(Mag(p)-1))).sqrt())-x) + x)*w))/10)
I've also learned, given exponents of some variables, to relate other variables to them on a curve to make better sense of the larger algebraic structure. This has mostly been stumbling in the dark, but after a while it has become easier to translate these into methods that allow plugging in one known variable to derive an unknown in a series of products.
For example I have a series of variables d4a, d4u, d4z, d4omega, etc., and these are now translatable, through insights that become various methods, into other types of (non-d4) series. What these variables actually represent is less relevant, only that it is possible to translate between them.
I've been doing some initial learning about neural nets (implementation, rather than the theory I normally read about). I'm thinking what I might do is build a GPT-style sequence generator and train it on the 'unknowns' from semiprime products with known factors.
The whole point of the project is that a bunch of internal variables can easily be derived (d4a, c/d4, u*v) from a product, its root, and its mantissa, that relate to *unknown* variables - unknown variables such as u, v, c, and d4 that, if known, directly give a constant-time answer to the factors of the original product.
I think there's sufficient data at this point to train such a machine, I just don't think I'm up to it yet because I'm lacking in the calculus department.
2000+ variables that are derivable from a product, without knowing its factors, which are themselves products of unknown variables derived from the internal algebraic relations of a product--this ought to be enough of an attack surface to do something with.
I'm willing to collaborate with someone familiar with recurrent neural nets and get them up to speed through Telegram/Element/Discord if they're willing to do the setup and training for a neural net of this sort, one that can tease out hidden relationships and map known variables to the unknown set for a given product.
-
Puts three months of work into this project; cronjob to ping 3 APIs at regular intervals, cleans, massive feature extraction, dumps into a PostgreSQL db. Got Django on top of that with a small neural net and interesting viz - absolutely gorgeous!
Can't fuckin wait to showcase that.
Feedback: "is that the right blue? I think you have the old company logo! etc"
Mah.LahF
-
Thought experiment time:
Imagine that this whole universe is a simulation created by a Group Of Developers (GOD).
- Who would make up this group?
- What kind of design patterns would they follow?
- What type of programming language would they use?
- What kind of bugs are there if any?
- How do they test?
- Assuming the use of quantum computing, what are the implications? Parallel simulations? All possibilities play out?
- Would the controller input be life?
- Who is AI and who are players?
- Has all time already been rendered?
- Do we respawn?
- What would the leaderboard look like?
- What kind of stats are tracked?
- What are dreams, nightmares, lucid dreams, sleep paralysis, birth and death?
- How is memory stored, accessed and pruned?
- What kind of neural net is used and where?
etc etc - if you can think of any other interesting ones, fire away!
-
I dun goofed.
Made a neural net that runs against a simulation. Wanted to run it overnight to get some meaningful stats and insights.
But yesterday afternoon I changed something in the simulation and ofc tested it without the NN... and then forgot to turn it back on.
So while I expected to come in today and start plotting and analyzing the data while the runs finish, in reality I'm sitting here with a lot of useless data, not knowing what to do.
I kinda want to just start it again and go home.
-
By simply making the bias random on the second input for a two-bit binary input during activation calculation, it's possible to train a neural net to calculate the XOR function in one layer.
I know for a fact. I just did it.
-
Being a Tech Co-Founder is both a boon and a bane.
Boon: You know how to build it.
Bane: You just start building without thinking.
We often crave "I'll be using Redis 😍😍, Kafka, a load balancer, a multi-layer neural net..."
"Dude, does anyone even want whatever you're building? Tell me that first."
-
I recently went through a very detailed and well-explained Python-based project/lesson by Karpathy which is called micrograd. This is a tiny scalar-valued autograd engine and a neural net on top of it.
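To give a feel for what "scalar-valued autograd" means, here is a stripped-down sketch of the core idea (my own simplification for illustration - not code from either repo): each value remembers its inputs and the local derivatives, so backward() can push gradients through the whole expression.

```python
# A toy scalar autograd value, in the spirit of micrograd.
class Value:
    def __init__(self, data, children=(), local_grads=()):
        self.data = data
        self.grad = 0.0
        self._children = children        # Values this one was built from
        self._local_grads = local_grads  # d(self)/d(child) for each child

    def __add__(self, other):
        return Value(self.data + other.data, (self, other), (1.0, 1.0))

    def __mul__(self, other):
        return Value(self.data * other.data, (self, other),
                     (other.data, self.data))

    def backward(self, grad=1.0):
        # Chain rule: accumulate my gradient, then pass it on to my inputs.
        self.grad += grad
        for child, local in zip(self._children, self._local_grads):
            child.backward(grad * local)

a, b = Value(2.0), Value(-3.0)
loss = a * b + a
loss.backward()
print(a.grad, b.grad)  # -2.0 (= b + 1) and 2.0 (= a)
```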
Karpathy's micrograd is, as expected, built in Python. For learning purposes, I wanted to see how such a network may be implemented in TypeScript and came up with a 🤖 micrograd-ts - https://github.com/trekhleb/... repository (and also with a demo - https://trekhleb.dev/micrograd-ts/ - of how the network may be trained).
Trying to build anything on your own very often gives you a much better understanding of a topic. So, this was a good exercise, especially taking into account that the whole code is just ~200 lines of TS code with no external dependencies.
The micrograd-ts repository might be useful for those who want to get a basic understanding of how neural networks work, using a TypeScript environment for experimentation.
With that being said, let me give you some more information about the project.
## Project structure
- [micrograd/](https://github.com/trekhleb/...) — this folder is the core/purpose of the repo
- [engine.ts](https://github.com/trekhleb/...) — the scalar `Value` class that supports basic math operations like `add`, `sub`, `div`, `mul`, `pow`, `exp`, `tanh` and has a `backward()` method that calculates a derivative of the expression, which is required for back-propagation flow.
- [nn.ts](https://github.com/trekhleb/...) — the `Neuron`, `Layer`, and `MLP` (multi-layer perceptron) classes that implement a neural network on top of the differentiable scalar `Values`.
- [demo/](https://github.com/trekhleb/...) - demo React application to experiment with the micrograd code
- [src/demos/](https://github.com/trekhleb/...) - several playgrounds where you can experiment with the `Neuron`, `Layer`, and `MLP` classes.
Demo (online)
---------------------
To see the online demo/playground, check the following link:
🔗 https://trekhleb.dev/micrograd-ts
-
The hype around Artificial Intelligence and neural nets makes me sicker by the day.
We all know that the potential power of AI gives stock prices a bump and bolsters investor confidence. But too many companies are reluctant to address its very real limits. It has evidently become taboo to discuss AI's shortcomings and the limitations of machine learning, neural nets, and deep learning. However, if we want to strategically deploy these technologies in enterprises, we really need to talk about their weaknesses.
AI lacks common sense. AI may be able to recognize that within a photo, there’s a man on a horse. But it probably won’t appreciate that the figures are actually a bronze sculpture of a man on a horse, not an actual man on an actual horse.
Let's consider the lesson offered by Margaret Mitchell, a research scientist at Google. Mitchell helps develop computers that can communicate about what they see and understand. As she feeds images and data to AIs, she asks them questions about what they “see.” In one case, Mitchell fed an AI lots of input about fun things and activities. When Mitchell showed the AI an image of a koala bear, it said, “Cute creature!” But when she showed the AI a picture of a house violently burning down, the AI exclaimed, “That’s awesome!”
The AI selected this response due to the orange and red colors it scanned in the photo; these fiery tones were frequently associated with positive responses in the AI’s input data set. It’s stories like these that demonstrate AI’s inevitable gaps, blind spots, and complete lack of common sense.
AI is data-hungry and brittle. Neural nets require far too much data to match human intellects. In most cases, they require thousands or millions of examples to learn from. Worse still, each time you need to recognize a new type of item, you have to start from scratch.
Algorithmic problem-solving is also severely hampered by the quality of data it’s fed. If an AI hasn’t been explicitly told how to answer a question, it can’t reason it out. It cannot respond to an unexpected change if it hasn’t been programmed to anticipate it.
Today’s business world is filled with disruptions and events—from physical to economic to political—and these disruptions require interpretation and flexibility. Algorithms alone cannot handle that.
"AI lacks intuition". Humans use intuition to navigate the physical world. When you pivot and swing to hit a tennis ball or step off a sidewalk to cross the street, you do so without a thought—things that would require a robot so much processing power that it’s almost inconceivable that we would engineer them.
Algorithms get trapped in local optima. When assigned a task, a computer program may find solutions that are close by in the search process—known as the local optimum—but fail to find the best of all possible solutions. Finding the best global solution would require understanding context and changing context, or thinking creatively about the problem and potential solutions. Humans can do that. They can connect seemingly disparate concepts and come up with out-of-the-box thinking that solves problems in novel ways. AI cannot.
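A toy illustration of that trap (a made-up one-dimensional function, purely to show the mechanics): plain gradient descent started near the shallow dip settles there and never finds the deeper minimum.

```python
# f(x) = x^4 - 3x^2 + x has a shallow local minimum near x ≈ 1.13
# and a deeper global minimum near x ≈ -1.30.
def grad(x):
    return 4 * x**3 - 6 * x + 1   # derivative of f

x = 1.5                  # start on the "wrong" side of the hill
for _ in range(1000):
    x -= 0.01 * grad(x)  # plain gradient descent

print(x)  # ≈ 1.13 - stuck in the local optimum; the global one is never found
```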
"AI can’t explain itself". AI may come up with the right answers, but even researchers who train AI systems often do not understand how an algorithm reached a specific conclusion. This is very problematic when AI is used in the context of medical diagnoses, for example, or in any environment where decisions have non-trivial consequences. What the algorithm has “learned” remains a mystery to everyone. Even if the AI is right, people will not trust its analytical output.
Artificial Intelligence offers tremendous opportunities and capabilities, but it can't see the world as we humans do. What we need to do is work on its weaknesses and get them sorted out, rather than hype it with make-believe while ignoring its limitations in plain sight.
Ref: https://thriveglobal.com/stories/...
-
The human brain (also animal brains, even ants') is incredibly complex. Each neuron is now supposedly its own processor. So a human brain is a complex network of billions of processors, not just threshold variables. This means that to simulate an organic brain sufficiently, it will take a huge computer system with billions of parallel processors. Now, I don't know if the sophistication of a computer processor is represented in each cell, so this may not be equivalent to billions of Pentium cores, for instance. However, it still presents a huge challenge for AI, as it exists now, to replicate. My thought is that silicon-based AI will take a different approach that leverages how computers work. My guess is that current neural net models are not a good match for this unknown AI. Will it inherently exhibit pattern matching like an organic brain? Or will it be a different kind of consciousness altogether? Will we even realize it is self-aware? Will my Roomba plan to kill my pet for my attention? What are some other models being employed in AI research?
-
If the same data is being fit over and over again to a neural net, shouldn't it start spitting out the result it was trained on very quickly, if over several frames it's training for 100 epochs each?
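For what it's worth, a quick way to sanity-check that intuition (a toy PyTorch sketch): fit one fixed batch repeatedly and watch the loss collapse. If it doesn't, something else is wrong - learning rate, the data pipeline, or the model never actually updating.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(32, 4)   # one fixed batch, reused every step
y = torch.randn(32, 1)

model = nn.Sequential(nn.Linear(4, 16), nn.Tanh(), nn.Linear(16, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

for step in range(1000):
    loss = nn.functional.mse_loss(model(X), y)
    opt.zero_grad()
    loss.backward()
    opt.step()

print(loss.item())  # should collapse toward zero: the net memorizes the batch
```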
-
I came across this logical bug today and realised that you should never train your neural network with data that is in order. Always train it with randomized inputs. I spent like 4 hours trying to figure out why my neural net wouldn't work, since all I was classifying was Iris data.
P.S.: I'm an ML newbie :P
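A minimal sketch of the fix (assuming scikit-learn's Iris loader and NumPy):

```python
import numpy as np
from sklearn.datasets import load_iris

X, y = load_iris(return_X_y=True)  # rows come ordered by class!
perm = np.random.permutation(len(X))
X, y = X[perm], y[perm]            # shuffle features and labels together
```

Without the shuffle, the net sees all 50 samples of one class in a row, and each stretch of training drags the weights toward whichever class it is currently drowning in.
-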
I'm beginning to feel like any kind of specific approximation via neural networks is a myth. That if you can't reduce output to simple categorical values that can be broadly interpreted between two points, it doesn't work.
I have some questions and they don't seem to be getting answered about the design of the net. How many layers should I use? How many neurons per layer? How does this relate to the number of desired quantitative scalar outputs I'm looking to create? Even if they are normalized, they can vary GREATLY, and will if I'm approximating the output of several mathematical expressions. Based on this, the expected error ranges of these numbers, and how many significant digits could be produced within the domain of the variable inputs being introduced - how many neurons per layer? What does having more layers do? In PyTorch there don't seem to be a lot of layer types per se, but there are a crap ton of activation functions; should I just be using these at the tail end, or should they actually be inserted between layers so the input of the next layer passes through another series of activation functions? What does this do to the range of output?
Do I need to be a mathematician to do this?
Remembered successes removed quantifiable scalars entirely from output, meaning that I could interpret successful results from ranges of decimal points.
But I've had no success with actual multi-variable regression as of yet, even when those input variables are only 2 and on limited value ranges, e.g. [0, 100] and [0, 2pi].
And then there are training epochs to avoid overfitting, and a reasonable expectation of batches until quality results will start to form.
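On the activations-between-layers question specifically, a common pattern (a sketch, not a prescription - sizes are arbitrary) is to interleave the nonlinearities and leave the final layer bare, so a regression output stays unbounded:

```python
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(2, 64),   # e.g. two inputs on [0, 100] and [0, 2pi]
    nn.ReLU(),          # nonlinearity between layers, not just at the tail
    nn.Linear(64, 64),
    nn.ReLU(),
    nn.Linear(64, 1),   # no activation here: the output range stays unrestricted
)
```
-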
NotARantBut... if you like Machine Learning, check out my buddy Andrew's blog. He helped build the largest neural net to date at Digital Reasoning. https://iamtrask.github.io