From my work as an IT consultant in one of the Big 4, I can now show you my masterpiece.
INSIGHTS FROM THE DAILY LIFE OF A FUNCTIONAL ANALYST IN A BIG 4 (I'M NOT A FUNCTIONAL ANALYST, BUT THAT'S WHAT THEY DO)
- 10:30, enter the office. By contract you should be there at 9:00 but nobody gives a shit
- First task of the day: prepare the PowerPoint for the client. DURATION: 15 minutes to actually make the slides, 45 minutes to search for every possible synonym of RESILIENCE BIG DATA AGILE INTELLIGENT AUTOMATION MACHINE LEARNING SHIT PISS CUM, 1 hour to actually present the document.
- 12:30, sniff the powder left by the chalk on the blackboards. Duration: 30 minutes; that's a lot of chalk you need to snort.
- 13:00, LUNCH TIME. You don't get back to work one minute sooner than 15:00.
- 15:00, conference with HR. You need to carefully analyze the quantity and quality of the farts emitted in the office, for at least 2 hours.
- 17:00, conference call: a project you were assigned to half a day ago has a server down.
The client sent two managers, three senior Java developers, the CEO, 5 employees (they know the logs and mails from the last 5 months line by line), 4 lawyers and a beheading teacher from ISIS.
On your side there are 3 external Ukrainians doing the maintenance, successors of the 3 (already dead) developers who put the process in place 4 years ago according to God knows which specifications. They don't understand a word of what is being said.
Then there's the assistant of the assistant of a manager from another project that has nothing to do with this one, a feces officer, a sysadmin who is going to watch porn for the whole conference call and won't listen to a word, and two interns to make up the numbers and look like you're prepared. Current objective: survive. Duration: 2 and a half hours.
- 19:30, snort some more chalk for half an hour while preparing the mail in which you explain to the associate partner how, because of the aforementioned conference call, you're going to lose a maintenance contract worth 20 grand per month (plus a lawsuit worth a number of dollars you can't even read), and how you have no idea how this could have happened.
- 20:00, timesheet! Fill in the weekly report: write what you did and how long each task took. You're allowed to log 8 hours per day; you worked at least 11, but nobody gives a shit. Duration: 30 minutes.
- 20:30, consultant training update! Course: "tasting cum and presenting its organoleptic properties to a client". Bearing on your job: none at all. Duration: 90 minutes, plus half an hour of an evaluation test where you copy the answers from a sheet given to you by a colleague who left 6 months ago.
- 22:30, CHANCE CARD! You have a new mail from HR: you asked to be reimbursed for a $3 sandwich, but the receipt is missing and they noticed it with a 9-month delay. You need to find that wicked piece of paper. DURATION: 30 minutes. The receipt most likely doesn't exist anymore and the money will be taken directly out of your next salary.
- 23:00, you receive a message on Teams. It's the intern. It's very late, but you're online and have to answer. There's an exception in a process that has been running for 6 years with no problems and that nobody ever touches. The intern doesn't know what to do, but you wrote the specifications for the thing 6 years ago, and everything MUST run tonight. You are not a technician and have no fucking clue about anything at all. 30 minutes to make sure it's something on your side and not the client's, and through all of it the intern is as useful as a sugared almond for wiping your ass. Once you're sure it's something on your side, you need to track down the senior dev who inherited the maintenance of the project, call him and solve the problem.
It turns out a file in a shared folder nobody ever touches was unreachable because one of your libraries left it open during the last run and Excel showed a warning modal while opening it; your process didn't like that one bit. It takes 90 minutes to find the root of the problem; you solve it by rebooting one of your machines. It's 01:00.
You shower, look at yourself in the mirror and search for the line where your forehead ends and your hair starts. It has receded a little since yesterday; the change can't be seen with the naked eye, but you know it's there.
You cry yourself to sleep. Tomorrow is another day, but it's going to be exactly like today.
Ok friends let's try to compile Flownet2 with Torch. It's made by NVIDIA themselves so there won't be any problem at all with dependencies right?????? /s
Let's use the Deep Learning AMI with a K80 on AWS, totally updated and ready to go, super great, always works with everything else.
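For anyone retracing these steps: it helps to pin down what the box actually has before fighting the frameworks. Here's a minimal sanity-check sketch (my addition, not part of the original setup; it only uses standard PyTorch calls, needs torch installed, and assumes device 0):

# Minimal environment sanity check (a sketch; standard PyTorch calls only).
import torch

print("torch version :", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
print("CUDA (torch)  :", torch.version.cuda)              # CUDA version torch was built against
print("cuDNN         :", torch.backends.cudnn.version())  # cuDNN version torch actually loads

if torch.cuda.is_available():
    major, minor = torch.cuda.get_device_capability(0)    # device 0 assumed
    print("GPU           :", torch.cuda.get_device_name(0))
    # A K80 reports (3, 7). A "no kernel image is available" error usually means
    # some binary (a custom CUDA extension, or torch itself) wasn't built for
    # this compute capability.
    print("compute cap.  : sm_%d%d" % (major, minor))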
> CUDA error
> CuDNN version mismatch
> CUDA versions overwrite
> Library paths not updated ever
> Torch 0.4.1 doesn't work so have to go back to Torch 0.4
> Flownet doesn't compile, get a bunch of CUDA errors, piece of shit code
> online forums have lots of questions and 0 answers
> Decide to skip straight to vid2vid
> More cuda errors
> Can't compile the fucking 2d kernel
> Through some act of God reinstalling cuda and CuDNN, manage to finally compile Flownet2
> Try running
> "Kernel image" error
> excusemewhatthefuck.jpg
> Try without a label map because fuck it the instructions and flags they gave are basically guaranteed not to work, it's fucking Nvidia amirite
> Enormous fucking CUDA error and Torch error, makes no sense, online no one agrees and 0 answers again
> Try again but this time on a clean machine
> Still no go
> Last resort, use the docker image they themselves provided of flownet
> Same fucking error
> While in the process of debugging, realize my training image set is also bound to give bad results, because "directly concatenating" images together as they claim in the paper actually has horrible results, and the network doesn't accept 6-channel input no matter what, so the only way around it is to make 2 images (3 * 2 = 6, quick maths; see the sketch at the end of this rant)
> Fix my training data, fuck the Nvidia dude who gave me wrong info
> Try again
> Same fucking errors
> Doesn't give any helpful information, just spits out a bunch of fucking memory addresses and long function names from the CUDA core
> Try reinstalling and then making a basic torch network, works perfectly fine
> FINALLY.png
> Setup vid2vid and flownet again
> SAME FUCKING ERROR
> Try to build the entire network in tensorflow
> CUDA error
> CuDNN version mismatch
> Doesn't work with TF
> HAVE TO FUCKING DOWNGRADE DRIVERS TOO
> TF doesn't support latest cuda because no one in the ML community can be bothered to support anything other than their own machine
> After setting up everything again, realize there's no space left on the 75 GB machine
> Try torch again, hoping that the entire change will fix things
At this point I'll leave a space so you can try to guess what happened next before seeing the result.
Ready?
3
2
1
> SAME FUCKING ERROR
In conclusion, NVIDIA is a fucking piece of shit that can't make their own libraries compatible with themselves, and can't be fucked to write instructions that actually work.
If anyone has vid2vid working or has gotten around the kernel image error for AWS K80s please throw me a lifeline, in exchange you can have my soul or what little is left of it
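For what it's worth, the 6-channel workaround mentioned above boils down to something like the following. This is a rough sketch with made-up shapes and file names (it needs numpy and Pillow), not the actual vid2vid data pipeline:

# Rough sketch of the workaround; shapes and paths are made up.
# A network with a 3-channel input stem rejects "directly concatenated" frame
# pairs, so store each pair as two ordinary RGB images instead of one 6-channel blob.
import numpy as np
from PIL import Image

frame_a = np.zeros((256, 256, 3), dtype=np.uint8)  # stand-in for frame t
frame_b = np.zeros((256, 256, 3), dtype=np.uint8)  # stand-in for frame t+1

# What the paper's wording suggests: one (256, 256, 6) array, which a
# 3-channel conv stem simply won't take as input.
six_channel = np.concatenate([frame_a, frame_b], axis=2)
print(six_channel.shape)  # (256, 256, 6)

# The workaround: keep the pair as two 3-channel images (3 * 2 = 6, quick maths).
Image.fromarray(frame_a).save("pair_000000_a.png")
Image.fromarray(frame_b).save("pair_000000_b.png")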
So recently I did a lot of research into the internals of computers and CPUs.
And I'd like to share one of my results.
First of all, take some time to look at the code below. You'll see two pieces of assembler code and two command lines.
The assembler code is designed to test how the instructions "enter" and "leave" compare to manually doing what they are shorthand for.
Enter and leave create a new stack frame: that is, a new temporary region of the stack. The stack is where the compiler places local variables. On the right side, you can see how I create my own stack frame by using
push rbp        ; save the caller's base pointer
mov rbp, rsp    ; start the new frame at the current stack top
sub rsp, 0      ; reserve 0 bytes for locals (an empty frame)
(I won't go into the details of why that works.)
Okay. Why is this even relevant?
Well: there is a common assumption that enter and leave are very slow. This comes from the raw numbers:
In some paper I saw (I couldn't find the link, I'm sorry), enter was said to take 12 CPU cycles, while the manual frame setup would take 3 (push + mov + sub => 1 + 1 + 1).
When I compile an empty function, I get pretty much what you'd expect just from the raw numbers of CPU cycles.
HOWEVER, then I add the dummy code in the middle:
mov eax, 123        ; some dummy arithmetic for the CPU to chew on
add eax, 123543
mov ebx, 234
xor edx, edx        ; clear edx: div divides edx:eax by ebx, and garbage in edx can fault
div ebx
and magically - both sides have the same result.
Why????
For one thing, there is CPU prefetching: the CPU loads upcoming instructions from RAM before it's done executing the current one (this is how some anti-debugger code works, btw. Might make another rant on that). Then there is the fact that the CPU usually starts work on the next instruction while the current instruction is still processing, IF the register involved in the current instruction isn't involved in the next one (otherwise that would cause a lot of synchronisation problems). Now notice that the CPU can't do much of that with the manual frame setup: it can only start on the mov eax, 123 while performing the sub rsp, 0.
----------------
NOW: notice that the code on the right didn't take any precautions like making sure the stack is big enough. If you sub too much stack at once, the stack will be exhausted; that's what we call a stack overflow. enter implements checks for that and emits an interrupt if there is a SO (take this with a grain of salt, I couldn't find a resource backing it up). There is another type of check I don't fully get (stack level checks), so I'd rather not make a fool of myself by writing about it.
For all of those reasons, I think compilers should start using enter and leave again.
========
This post showed very well that bare numbers can often mislead.
Is it sane or practical to write C programs on PAPER dealing with processes and child processes that handle files and other processes and sh*t?
I don't know 😐
#Suphle Rant 6: Deptrac, phparkitect
This entry isn't necessarily a rant but a tale of victory. I'm no longer as sad as I used to be. I don't work as hard as I used to, so there are fewer challenges to frustrate my life. On top of that, I'm not bitter about the pace of progress. I'm in a state of contentment regarding Suphle's release.
An opportunity to gain publicity presented itself last month when the CFP for a PHP event was announced. I submitted and reviewed a post introducing Suphle to the community. In the post, I assured readers that I won't be changing anything soon, i.e. the APIs are cast in stone. Then PHP 7.4 officially "went out of circulation". It hit me that even though the code supports PHP 8 on paper, it's kind of a red herring that decorators don't use PHP 8 attributes. So I doubled down, suspending documentation.
The container won't support union and intersection types because I dislike the ambiguity. Enums can't be hydrated. So I refactored the implementation and usages of decorators from interfaces to native attributes. I tried automating typing for all class properties, but Psalm uses docblocks instead of native typing. So I disabled it and am doing it by hand whenever something takes me to an unfixed class (difficulty: 1). But the good news is, we are as PHP 8 compliant as anybody could ask for!
I decided to ride that wave and implement other things that have been bothering me:
1) 2 commands for automating project setup for collaborators and user facing developers (CHECK)
2) transferring some operations from runtime to compile/build TIME (CHECK)
3) re-attempt implementing container scopes
I tried automating Deptrac usage, i.e. adding the newly created module to the list of regulated architectural layers, but their config is in YAML, so I moved to phparkitect, which uses PHP to set the rules. I still can't find a library for programmatically updating PHP files/classes, but this is more dynamic for me than YAML. I set out to integrate their library; turns out the entire logic is dumped into the command class, so I can neither control it without the CLI nor automate tests against it. I take the command apart, connect it to Suphle and run. Guess what: it detects class parents as violations of the rule. Wtflyingfuck?!
As if that's not bad enough, roadrunner (that old biatch!) server setup doesn't fail if an initialization script fails. If the initialization script is moved into the application code itself, server setup crumbles and takes your initialization stuff down with it. I ping the maintainer, rustacian (God bless his soul), who informs me point blank that what I'm trying to do is not possible. Fuck it. I have to write a wrapper command that starts the server sequentially (or doesn't start it at all if the initialization operations don't all succeed).
Legitimate case to reinvent the wheel. I restored my deleted decorators that did dependency sanitation for me at runtime. The remaining piece of the puzzle was a recursive file iterator to feed the decorators. I checked my file system reader for clues on how to implement one and boom! The one I'd written for two other features was compatible. All I had to do was refactor the decorators into dependency rules and give them fancy interfaces for customising and filtering which classes each rule should actually evaluate. In a night's work (if you discount how long writing the original sanitization decorators and directory iterator took), I had put together the Deptrac/phparkitect library of my dreams. This is one of those few times I feel like a supreme deity.
Hope I can eat better and get some sleep. This meme is me after getting bounced by those three library rejections