Search - "classification"
-
After months of tedious research, I finally feel like I understand machine learning.
All of my programmer buddies are envious, but I keep trying to explain that what I finally get is that it's not as hard as it's presented to be.
I feel like a lot of the terminology in machine learning is really pretentious and unnecessary, and just keeps new people out of the field.
For example: I could say: "Yeah, I'm training a classification model with two input neurons, a hidden activation layer, and an output neuron", and you might think I was hot shit. But that just gets translated into "I'm putting in two inputs, sorting them, and outputting one thing".
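To make the point, that whole sentence is a handful of lines. A minimal sketch, assuming scikit-learn (the XOR-style toy data is just a stand-in):

# Two inputs, one hidden layer, one output: the thing the fancy sentence describes.
from sklearn.neural_network import MLPClassifier

X = [[0, 0], [0, 1], [1, 0], [1, 1]]  # two inputs per sample
y = [0, 1, 1, 0]                      # one output per sample

model = MLPClassifier(hidden_layer_sizes=(8,), solver="lbfgs", max_iter=1000, random_state=0)
model.fit(X, y)                       # "training the model"
print(model.predict([[1, 0]]))        # two things in, one thing out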
I feel like if there were a plain-language guide to machine learning, the field would be a lot more attractive to a lot more people. I know that's why it was hard for me to get in. Maybe I'll write one.

-
Sooooo me and the lead dev got placed in the wrong job classification at work.
Without sounding too mean: we are placed under the same descriptor and pay scale reserved for the secretaries, janitors and maintenance people (we work for a college as developers), whilst our coworker who manages the CMS got the correct classification.
The manager went apeshit because the guidelines state that:
Making software products
Administration of dbs
Server maintenance and troubleshooting
Security (network)
And a lot of other shit is covered on the exemption list, and it is all stuff that we do by a wide fucking margin. The classification would technically prohibit us from developing software, and the whole IT department went apeshit over it, since he (the lead developer) refuses (rightfully so) to touch anything and basically does nothing other than generate reports.
It's a fun situation. While we both got a substantial raise in salary (go figure), we also got demoted at the same time.
There is a department in IT which deals with the databases for other major applications; their title is "programmers", yet for some reason me and the lead end up writing all the SQL code they ever need. They make waaaaay more money than me and the lead do, even in the correct classification.
Resolution: the manager is working with the head of the department to correct this blasphemy WHILE asking for higher pay than even the "programmers".
I love this woman. She has balls, man. When the president of the school paraded around the office asking for an update on a high-priority app, she said that I am being gracious enough to work on it even though I am not supposed to. The fucking prick asked if I could speed it up, to which she said that most of my work I do on my off time, which by law is now something that I cannot do for the school, and that she does not expect any of her devs to do jack shit unless shit gets fixed quick. With the correct pay.
Naturally, the president did not like such a predicament and thus urged the HR department (which is globally hated now, since they fucked up everyone's classification) to fix it.
Dunno if I will get above the pay that she requested. But seeing that royal amount of LADY BALLS really means something to me. Which is why I would not trade that woman for a job at any of my dream workplaces.
Meanwhile, the level of stress put my diabetic lead dev of 12 years of service in the hospital. Fuck the HR department for real, fuck the VPs of the school that fucked this up royally, and fuck people in this city in general. I really care for my team, and the lead dev is one of my best friends and a good developer. This shit will not fucking go unnoticed, and the HR department is now at low priority for the software that we build for them.
Still, I am amazed to have a manager that actually looks out for us instead of putting on a nice face for the pricks that screwed us over.
I have been working since I was 16, went through the Army, am 27 now, and it is the first time that I have seen such a manager.
She can't read this, but she knows how much I appreciate her.

-
//
// devRant unofficial UWP update (v1.5.6)
//
Hi!
A new update for Windows 10 users is in certification right now (it will be available in a few hours/days), with the new feature you could see a few days ago on the official client.
But the main reason for this post is that I've seen the success of the official Issue Tracker created in August by @dfox and @trogus on GitHub.
For this reason I decided to do the same for the unofficial client.
Feel free to post bug reports (I prefer this method over emails) and requests for features you would like to see (even ones not available in the official client).
https://github.com/JakubSteplowski/...
Complete changelog of v1.5.6:
- Added new 'post type' selector for easier classification of rant types in the future;
- Added Official devRant unofficial Issue Tracker on GitHub;
- Added Feedback page with link to GitHub Issue Tracker repo;
- Added black theme... no, wait... already there.
- Minor UI improvements;
- Minor back-end improvements;
I hope to see a lot of interesting feature requests that I will enjoy implementing, to keep the UWP client always the best for you, Windows community. 😁

-
My department is legit getting a fuckload of heat over some missing reports that were not generated by the lead dev.
Shit falls on me since he ain't here.
Look b. I am gon give it to ya straight: I don't give a fuck, your shit is secondary, unimportant, bottom of the list...call the vp if you want, he gon get a fuckload of indifference as well ....
know why?
Cuz yall motherfuckers want shit done quick af but don't say shit till the same day. Fuck, shit don't work that way...pendejo.
Best thing? I ain't even supposed to be doing this shit at all because of y'all bitches not placing me in the correct classification...

-
Machine learning is overhyped. A fellow halfwitted well-wisher wants my colleague to use supervised classification for a freaking search-and-replace problem.
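For the record, the sane fix is one line of stdlib. A sketch (the pattern and text are made-up placeholders, not the actual problem):

# Search and replace. No model, no training set, no GPU.
import re

text = "price: 100 USD, cost: 250 USD"
print(re.sub(r"(\d+) USD", r"$\1", text))  # -> price: $100, cost: $250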
-
Tired of hearing "our ML model has 51% accuracy! That's a big win!"
No, asshole, what you just built is a fucking random number generator, and a crappy one at that.
You cannot meaningfully do worse than 50% on a balanced binary problem. If you had a binary classification model that was 10% accurate, that would be a win: you would just need to invert the output of the model, and you'd instantly get 90% accuracy.
50% accuracy is what you get by flipping coins. And you can achieve that with one line of code.
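To make the inversion trick concrete, a quick sketch (the labels here are random placeholders):

# A coin flip scores ~50%; a 10%-accurate model, inverted, scores ~90%.
import random

labels = [random.randint(0, 1) for _ in range(10_000)]
coin = [random.randint(0, 1) for _ in labels]                  # ~0.50
bad = [1 - y if random.random() < 0.9 else y for y in labels]  # ~0.10
inverted = [1 - p for p in bad]                                # ~0.90

acc = lambda preds: sum(a == b for a, b in zip(preds, labels)) / len(labels)
print(acc(coin), acc(bad), acc(inverted))

-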
Zoom was dead before it even took hold.
Fml. Use Jitsi or some other real stuff.
<deity>, I don't care, choose Skype: there we at least know that security is well established and that its monitoring workers are properly paid (US court case about proper worker classification).

-
First of all, merry Christmas to everyone on devRant.
Second, another interesting paper, this time on pattern classification using piecewise linear functions vs classic spiking neural nets.
Supposedly it was a *six million* percent improvement in computation time versus the spiking simulation. That's my five-minute overview of the document, anyway.
Highly unusual application (I hadn't seen it done before now, but maybe I'm unfamiliar). Check it out:
https://link.springer.com/chapter/...

-
I got 0.99 accuracy and a 0.98 F1 score on a text classification task, only to realize that I'd created the TF-IDF vectorizers using the entire corpus (train + test).
Now my professor is furious -_-
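For anyone who wants to avoid the same mistake, the difference is one line. A sketch with scikit-learn (the toy corpus is a placeholder):

# Vocabulary and IDF weights must come from the training split only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split

corpus = ["buy cheap meds now", "see you at lunch", "win a free prize", "meeting moved to 3pm"]
labels = [1, 0, 1, 0]
X_train, X_test, y_train, y_test = train_test_split(corpus, labels, test_size=0.5, random_state=0)

leaky = TfidfVectorizer().fit(corpus)   # WRONG: test-set statistics leak in
vec = TfidfVectorizer().fit(X_train)    # RIGHT: the test set stays unseen
Xtr, Xte = vec.transform(X_train), vec.transform(X_test)

-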
Other people in 2019:
Realtime image classification!
Me in 2019:
22075 ms to find all occurrences of a string on the screen

-
Stuck dealing with a huge amount of images again. 🤦
No idea how quickly I can get this object classification NN up and running, as it seems I have forgotten how to do shit. 🙄😒

-
Got my final computer science BSc classification today; can't believe that I got a first. Now to get a job, I guess.
-
Does the ease of “hacking”/breaking AI scare anyone else?
I remember a slide from a security presentation I saw once. It had three sections: the first was an AI classification of some animal at about 60% confidence, the second was a small grey static image (think old-TV-static type thing) with a label next to it saying 10%, and the third was an AI classification of the first picture overlaid with 10% of that noise, and it had 95% confidence that the animal was COMPLETELY DIFFERENT.
Adding just 10% noise and the AI goes batshit crazy. (No, it was not a bat, afaik.)
THINK ABOUT THIS IN TERMS OF STOP SIGNS. WELP.
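What that slide showed is a textbook adversarial example. A minimal FGSM-style sketch, assuming PyTorch/torchvision (the random image and the 0.1 epsilon are placeholders):

# Nudge each pixel in the direction that increases the loss; the result
# looks unchanged to humans but can flip the model's prediction.
import torch
import torchvision.models as models

model = models.resnet18(weights="IMAGENET1K_V1").eval()
image = torch.rand(1, 3, 224, 224, requires_grad=True)  # stand-in for a real photo
out = model(image)
label = out.argmax(dim=1)                               # whatever the model believes

loss = torch.nn.functional.cross_entropy(out, label)
loss.backward()
adversarial = image + 0.1 * image.grad.sign()           # the "10% static"

print(model(image).argmax(dim=1).item(), model(adversarial).argmax(dim=1).item())

-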
Black box. It does seem to put messages with a URL in a certain category, though, but even that's not always correct. It's trained on 3000 normal dR messages and 3000 spam dR messages: 6000 dR messages in total. Many epochs, but not good for use yet. The idea that the system could classify without discriminating against new users is off the table; that discrimination is needed as a safe margin. The original spam system is a bit simple, but it doesn't produce false positives and works great. Still, I want to make something advanced out of it for the sake of education. Tomorrow I'll have my neural networks book. Probably in about two weeks I'll have some good insights into how to improve all this. New hobby :)
(Pretrained 3B models are fine for recognizing spam, btw. But they cost resources: 8 CPUs at 100%. A model trained purely on spam doesn't, and it's fast. With a pretrained model you can't do mass classification.)
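For context, the self-trained setup is roughly this shape. A sketch assuming scikit-learn (the four messages stand in for the 6000 dR ones):

# A small bag-of-words classifier: cheap on CPU, fast enough for mass classification.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = ["check out http://spam.example", "nice rant!", "buy followers now", "same bug here"]
is_spam = [1, 0, 1, 0]

clf = make_pipeline(CountVectorizer(), MultinomialNB())
clf.fit(messages, is_spam)
print(clf.predict(["free followers at http://x.example"]))  # -> [1], hopefully

-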
The next step for improving large language models (if not diffusion) is hot-encoding.
The idea is pretty straightforward:
Generate many prompts, or take many prompts as a training and validation set. Do partial inference, and find the intersection of best overall performance with least computation.
Then save the state of the network during partial inference, and use that for all subsequent inferences. Sort of like LoRA, but for inference instead of fine-tuning.
Inference, after all, is what matters. And there has to be some subset of prompt-based initializations of a network that perform, regardless of the prompt, (generally) as well as a full inference step.
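A rough sketch of what I mean, assuming Hugging Face transformers (gpt2 and the prompts are placeholders; the snapshot is the saved partial-inference state):

# Run a shared prefix once, save the attention cache, and reuse it for
# every later prompt instead of re-running the prefix.
import copy
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

prefix = tok("You are a terse assistant.", return_tensors="pt")
with torch.no_grad():
    snapshot = model(**prefix, use_cache=True).past_key_values

def next_token(prompt):
    ids = tok(prompt, return_tensors="pt").input_ids
    with torch.no_grad():
        # deep-copy so the saved state is never mutated between calls
        out = model(ids, past_key_values=copy.deepcopy(snapshot), use_cache=True)
    return tok.decode([out.logits[0, -1].argmax().item()])

print(next_token(" The capital of France is"))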
Likewise with diffusion, there likely exist some priors (based on the training data) that speed up reconstruction or lower the network loss, allowing us to substitute a 'snapshot' that has the correct distribution without necessarily performing a full generation.
Another idea I had was 'semantic centering' instead of regional image labelling. The idea is to find some patch of an object within an image, and ask, for all such patches that belong to an object, what best describes the object? if it were a dog, what patch of the image is "most dog-like" etc. I could see it as being much closer to how the human brain quickly identifies objects by short-cuts. The size of such patches could be adjusted to minimize the cross-entropy of classification relative to the tested size of each patch (pixel-sized patches for example might lead to too high a training loss). Of course it might allow us to do a scattershot 'at a glance' type lookup of potential image contents, even if you get multiple categories for a single pixel, it greatly narrows the total span of categories you need to do subsequent searches for.
In other news, I'm starting a new ML blackbook for various ideas. The old one is mostly outdated now, and I think I scanned it (and have since buried it somewhere amongst my ten thousand other files like a digital hoarder) and lost it.
I have some other 'low-hanging fruit' type ideas for improving existing and emerging models, but I'll save those for another time.

-
Story of a data scientist 😞
Spends 80% of the time trying to identify features, and the remaining 20% worrying about identifying the features 😭

-
Finally, after days of research and failures, I managed to understand and tweak TensorFlow's image classification example.
The feeling of power, arcane knowledge and fascination is just incredible.
It might not seem like much these days, and nobody was interested in it. But I, deep inside, knew: I was proud of myself.
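For the curious, the equivalent today is a handful of lines. A sketch assuming tf.keras ("cat.jpg" is a placeholder path):

# Classify one image with a pretrained ImageNet network.
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications.mobilenet_v2 import (
    MobileNetV2, preprocess_input, decode_predictions)

model = MobileNetV2(weights="imagenet")
img = tf.keras.utils.load_img("cat.jpg", target_size=(224, 224))
x = preprocess_input(np.expand_dims(tf.keras.utils.img_to_array(img), axis=0))
print(decode_predictions(model.predict(x), top=3)[0])  # (id, name, score) triples

-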
FUCK. YOU. AYLIEN.
- For your shitty hashtag generator, that generated #FCBarcelona for a game review
- For your shitty classification endpoint
- For your shitty sentiment analysis that only works in the demo
- For your shitty image tagging that clarifai is way better at
- And for your "semantic labeling" that doesn't work
FUCK YOUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUU

-
Used pip install to set up TensorFlow with Python 3.5 on a Windows 10 machine, but there were no models available in it. Had to download those separately and add them to TensorFlow. Then tried using both Inception and classify_image.py, but it gives a NameError (cannot find core). Yet when tested in Python IDLE there were no errors. I don't want to create a custom classifier from scratch, just retrain the model. Any solutions, people?
-
Deep learning. Working on an image classification problem for a big company. The "boss" asks me to teach an AI to classify images into a few classes.
"Mmm, ok...I just need to create the dataset and then build the AI...so.."
Where is the problem??
The problem is that the classes are so perfectly similar that no one knows how to help me create the dataset, and I have to do it alone.
That's how you spend your weeks in a loop where you look at thousands of images over and over, just to have something decent to start your work with.
After that I felt like...
"I'm the hero they deserves, but not the one they need right now" - Cit2 -
Whenever I start talking about machine learning the first question I get from the pretentious ones is, "Is it supervised or unsupervised classification?"
-
After rejoining, this place really does seem a bit deserted. So I'll try to bring some controversy to it.
AI, a hype? Machine learning wearing a mask? Pattern recognition on steroids?
What do y'all think? In my opinion it's an awesome technology with many practical applications, but it is far from what they try to tell us it is. It's awesome, yes. But under the hood it's still mostly pattern recognition, classification, etc. LLMs seem a bit more complex, but they're still the same thing.
Sure, it's easy to write a program that does a given task a lot better than a human; however, it's limited to doing exactly that. So is a calculator.
What I think of when hearing "AI" is what is now known as general intelligence. But it's just a question of time until they come up with something that can do more than today's AI and call *that* general intelligence, and actual general intelligence will be called something else.. You get me?

-
Make your code available for your team members, please.
So we're working on this robotics project using ROS, a framework that lets multiple nodes in a network exchange their functionality with each other through TCP connections. Each node can be implemented and executed on its own machine and tested with dummy inputs, but in collaboration they make a robot do fancy stuff.
The knowledgebase needs data from the image processing unit, provides this data with semantic context to high-level planning, which uses this semantic data for decision making and calls the robot manipulation node with meaningful input to navigate the robot's components in the environment. We use a dedicated machine, which pulls the corresponding repositories and is always kept configured correctly, to run each node, so that everybody has access to each other's work when needed.
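For the unfamiliar, a node boundary in this setup looks roughly like the sketch below (rospy; the topic names are made up):

# One node subscribes to semantic data and publishes manipulation goals.
# ROS does not care whose machine a node runs on, as long as it is reachable.
import rospy
from std_msgs.msg import String

def on_detection(msg):
    rospy.loginfo("planning received: %s", msg.data)
    pub.publish(String(data="grasp target at " + msg.data))

rospy.init_node("high_level_planning")
pub = rospy.Publisher("/manipulation_goal", String, queue_size=10)
rospy.Subscriber("/semantic_objects", String, on_detection)
rospy.spin()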
So far so good. We have tried to convince the manipulation guy (let's call him John) to run his code on our central machine, not for a week, but since the first day, five months ago. Our cluster classification has been unavailable for two months, but my colleague fixed that. We still can't run the whole project without John's computer. If his machine blows up, we're fucked.
Each milestone feels like a big-bang test, with interface issues fixed last-minute. We see the whole demo just moments before our supervisors arrive at the door.
I just hope he doesn't get hit by a truck.

-
More from my big black book of AI and neuroscience:
I think if trace theory is true to any degree it would go some distance in explaining phenomenal consciousness, assuming I haven't misunderstood anything.
In fuzzy trace theory (FTT) it is posited that people form two types of mental representations about a past event:
*verbatim traces: detailed representations of a past event.
*gist traces: fuzzy representations of a past event.
People can reason with verbatim *and* gist traces but prefer gists.
*Vision was suggested to work similarly in 1999. With human vision, two processes could be used: one that aggregates local receptive fields and one that parses the local receptive spatial field. It was suggested that people use prior experience, gists, to decide which dominates a perceptual decision.
Gist processes form representations of an event's semantic details, whereas verbatim reinstates the context found in the surface details of an event.
__notes__
Parallel storage: asserts encoding/storage of verbatim/gist traces operate in *parallel*, not in serial.
I like to think of verbatim traces as databases, and gists as queries constructed by recognition.
Several studies have found that the meaning (gist) of an item is encoded even *before* the surface details (verbatim).
This might be important as a survival mechanism, but it should not be taken to mean strictly that gists are formed wholly *without* details or important and recognizable features of the item in question. It may well be that, for high-level processing and classification efficiency, this is an important reprocessing step, in the same way that many functions of the brain are duplicated throughout.

-
in JavaScript I would just call something what it is and then keep changing the data type as I get more data to add to it, because you can
in rust, because types are static rather than dynamic and everything is a static struct, I need to find like 9 different names for all the different intermediary data types, and holy shit I don't understand what to name everything and this is annoying me
I never understood why people complained about naming problems. I found it fun. now I hate it.
stats object. cool. well it converts an address to stats. an address has swaps. each swap was done on a mint. so I guess I make a MintStats object? wrong. because that's confusing.
swaps -> swaps divided by the mint they belong in -> stats for each mint swap set -> then you can add all the mint swap set stats to the address stats object
now what the fuck do you call all these
there's also something I called a MintAttitude and it's an enum. these types just keep growing out of trees. fuk. I don't like long names either. I should probably just call it Attitude but call it via mint::Attitude and get the same clarity result with far less redundancy (which I hate, another annoying thing)
swaps -> ??? mint history? -> MintStats -> then I have a "MintData" that has the history and stats wrapped in it -> MintsData that has many mints and their MintData -> then I can convert MintsData into AddressStats, but ugh. I hate this. also I have a Mint object that does something totally different elsewhere. I hate this. "data" isn't even descriptive, but to call something "history" when it also has stats seems imprecise.
brain spaghetti. the classification part of my brain is shit. no historical training / experience either. I just see everything like vague blobs. bah. naming requires clear delineations, which is hard enough on its own to get used to.

-
I finished my graduation project
We developed an app for skin disease classification; we used Flutter for the app and Python for training the model on a dataset called SD-198.
We tried to use Transfer Learning to hit the highest accuracy but actually IT DIDN'T WORK SURPRISINGLY!!
After that we tried building our own CNN model with a few layers; we scored 24%.
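For reference, the standard transfer-learning recipe we were going for looks roughly like this (a sketch assuming tf.keras; "sd198/" is a placeholder path, 198 classes per SD-198):

# Freeze a pretrained backbone, train a new head, then unfreeze the top of
# the backbone and fine-tune at a lower learning rate.
import tensorflow as tf

train = tf.keras.utils.image_dataset_from_directory(
    "sd198/", image_size=(224, 224), batch_size=32)

base = tf.keras.applications.EfficientNetB0(
    include_top=False, weights="imagenet", pooling="avg")
base.trainable = False

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(198, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(train, epochs=10)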
We couldn't improve it more. We are proud of ourselves, but we want to improve it moreee.
Any suggestions?
Thank you for reading.

-
I'm looking for a web-based image segmentation and classification tool to create ground truth for my dataset in my next deep learning project. What tool do you use?
-
Has anyone ever tried using user-to-user collaborative filtering to classify the MNIST digits dataset?
This is about as far as I got:
https://hastebin.com/obinoyutuw.py
It's literal copy-and-paste frankencode, because this is only the second time I've ever done something like this, so pardon the hatchet job.
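The idea boils down to something like the following sketch (scikit-learn; load_digits stands in for MNIST to keep it fast):

# Treat each image as a "user" and each pixel as an "item"; classify by a
# similarity-weighted vote over the most similar training images.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.metrics.pairwise import cosine_similarity

X, y = load_digits(return_X_y=True)
X_train, y_train, X_test, y_test = X[:1500], y[:1500], X[1500:], y[1500:]

sim = cosine_similarity(X_test, X_train)   # "user-to-user" similarity
top = np.argsort(sim, axis=1)[:, -15:]     # 15 most similar training images

preds = np.array([
    np.bincount(y_train[n], weights=sim[i, n], minlength=10).argmax()
    for i, n in enumerate(top)
])
print("accuracy:", (preds == y_test).mean())

-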
Okay, so, I have a functional Snort agent instance, and it's spewing out alerts in its "brilliant" unified2 log format.
I'm able to dump the log contents using the "u2spewfoo" utility (wtf even is that name... unified2-something-foo), but... it gives me... data. With no actual hint as to *what* rule made it log this. What is it that it found?
All I see are IDs and numbers and timings and stuff... How do I get this
(Event)
sensor id: 0 event id: 5540 event second: 1621329398 event microsecond: 388969
sig id: 366 gen id: 1 revision: 7 classification: 29
priority: 3 ip source: *src-ip* ip destination: *my-ip*
src port: 8 dest port: 0 protocol: 1 impact_flag: 0 blocked: 0
mpls label: 0 vland id: 0 policy id: 0
into information like "SYN flood from src-ip to destination-ip"?

-
Working for a week on an LSTM-based text classifier, getting 89% accuracy, only to then get a better result with logistic regression, which was supposed to serve as the baseline, lol. Background: 180+ classes from Google's product categorization taxonomy, 20 million rows of data items (short texts). Had a similar experience once on sentiment classification, where SVMlight outperformed NN models.
-
So, I feel wayyy behind the tech curve right now.
The SSD implementations you see online, they're still just a bunch of separate sort-of chaos machines that contain the standard perceptron-like model of a weight, cost, and bias, right? They just kind of infer their values by training like any other neural network, in its separate parts, and feed pieces of output data generated by other parts of the neural network to it, right?
I mean, it's implemented with PyTorch, so it's basically a really big array of tuples, in a sense, that are manipulated in a specific way.
And then CNNs just feed data back into another trained piece of the model, right?
I'm curious because object classification is about the ONLY thing I've seen work even close to properly lol
there is just so much fraud these days. sigh.
and so many lamentable tech choices and attempts... like node lol