21
NoMad
3y

Is this learning job CPU intensive or memory intensive?

I don't know and I don't give a flying fuck, because it's 6:20pm and I have not found any of my favorite servers free to rerun this shit the whole fucking week, so this server (which I have actually killed before, btw) can suck a dick and do its fucking job.

🎤🖐️

Comments
  • 5
    I swear to god if this job fails again, I'm gonna write the angriest email... Welp, fuck covid too.
  • 7
    Understandable. I just had a job fail after two days at 93% with an out of memory error on a 512GB RAM server. I almost broke my new monitor.
  • 3
    @RememberMe lol. I even manage my mem allocation carefully and use functions wherever I can to reduce the chance of that (rough sketch of what I mean at the bottom of the thread). But maaaaan, that's a piss off. My condolences.
  • 9
    Lol. Another guy was trying to run his shit on this server. Seems I have successfully kicked him out by not leaving him any resources. Lololololololololol.
  • 8
    Dude: What is that buzzsaw sound coming from that server rack?

    Other Dude: NoMad running another job...
  • 1
    @RememberMe nice, that's intense.. (☉｡☉)
  • 1
    OT: Is manually splitting big ANN models into smaller specialized ones and using their output as input for other ANNs still a thing or is everyone just throwing more and more hardware at it nowadays?
  • 1
    @Oktokolo can't do that with infused or recurrent ones. 😛 But yes, we still do that wherever we can because resources don't grow on trees 😜
  • 1
    @NoMad
    Never heard of infused ANNs.
    But you certainly can have recurrent ANNs which have their output routed through other ANNs before it gets fed back to them.
  • 2
    @Oktokolo the problem is performance. You certainly can do it, but you get massive communication delays per run of inference/backprop that really slow the whole thing down (see the chaining sketch at the bottom of the thread).
  • 0
    Y'all heard of checkpoints?
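
Sticking with that last comment: a minimal checkpointing sketch, assuming a PyTorch-style training loop (the model, optimizer, loader, and checkpoint path are placeholder names, not anything from the thread). The point is that a crash or OOM two days in only costs the work since the last save, not the whole run.

    import os
    import torch

    CKPT = "checkpoint.pt"  # hypothetical path

    def train(model, optimizer, loader, epochs):
        start_epoch = 0
        # Resume from the last checkpoint if one exists.
        if os.path.exists(CKPT):
            state = torch.load(CKPT, map_location="cpu")
            model.load_state_dict(state["model"])
            optimizer.load_state_dict(state["optimizer"])
            start_epoch = state["epoch"] + 1

        for epoch in range(start_epoch, epochs):
            for batch, target in loader:
                optimizer.zero_grad()
                loss = torch.nn.functional.mse_loss(model(batch), target)
                loss.backward()
                optimizer.step()

            # Save after every epoch so a failure loses at most one epoch.
            torch.save(
                {"model": model.state_dict(),
                 "optimizer": optimizer.state_dict(),
                 "epoch": epoch},
                CKPT,
            )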
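
On the "use functions to cut down on out-of-memory errors" point above: the idea, as I read it, is that a big intermediate allocated inside a function becomes unreachable as soon as the function returns, instead of sitting in the top-level scope for the rest of the run. A small NumPy sketch with made-up file and function names:

    import gc
    import numpy as np

    def load_and_reduce(path):
        # The big array only lives inside this function's scope.
        raw = np.load(path)
        return raw.mean(axis=0)   # only the small reduction escapes

    def main():
        summary = load_and_reduce("features.npy")  # hypothetical file
        gc.collect()  # 'raw' is already unreachable; this just nudges the collector
        print(summary.shape)

    if __name__ == "__main__":
        main()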
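
And for the model-splitting question: a rough PyTorch sketch of chaining two small specialized nets, with invented layer sizes. The performance complaint in the thread is about exactly this hand-off: the first stage's output has to be materialized and passed along before the second stage can run its part of inference or backprop.

    import torch
    import torch.nn as nn

    # Two small specialized nets instead of one big one (sizes are arbitrary).
    feature_net = nn.Sequential(nn.Linear(256, 64), nn.ReLU())
    decision_net = nn.Sequential(nn.Linear(64, 8), nn.ReLU(), nn.Linear(8, 1))

    x = torch.randn(32, 256)  # dummy batch

    # Stage 1 output becomes stage 2 input; this hand-off is where the extra
    # per-pass latency comes from when the stages live in different processes
    # or on different machines.
    features = feature_net(x)
    prediction = decision_net(features)

    # Gradients still flow through both stages when chained in one process.
    prediction.sum().backward()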