LONG RANT AHEAD!
In my workplace (a dev company) I am the only dev using Linux on my workstation. I joined project XX and a senior dev onboarded me. Downloaded the code, built the source, launched the app... BAM: an exception in catalina.out. The ORM framework had failed to map something.
mvn clean && mvn install
The same thing happens again. I raise the incident with the senior dev and the response is "well... it works on my machine and has worked for all the other devs. It must be an issue with your environment. Prolly Linux is to blame?" So I spend another hour digging into the bug. Narrowed it down to a single data model whose ORM mapping annotation looked somewhat off. Fixed it.
mvn clean && mvn install
the app now works perfectly. Apparently this bug had been in the codebase for years and Windows used to mask it somehow without throwing an exception. God knows what undefined behaviour was happening in the background...
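To give a flavour of what "somewhat off" can look like: one classic way an ORM mapping works on Windows and dies on Linux is a table-name case mismatch, since e.g. MySQL resolves table names case-insensitively on Windows but case-sensitively on Linux. A made-up sketch, not the actual project code:

import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Table;

// Hypothetical entity. Say the actual table is named "user_accounts": on Windows,
// MySQL forgives the wrong-cased name below (lower_case_table_names=1 by default),
// while on Linux the ORM fails to map the entity at startup.
@Entity
@Table(name = "User_Accounts") // "somewhat off": should be "user_accounts"
public class UserAccount {

    @Id
    private Long id;

    public Long getId() { return id; }
    public void setId(Long id) { this.id = id; }
}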
Months fly by and I'm invited to join another project. Sounds really cool! I get access, check out the code, build it (after crossing the hell of VPNs on Linux). Run component 1/4 -- all good. Run components 2 and 3/4 -- looks perfect. Run component 4/4 -- BAM: LinkageError. Turns out there is something wrong with the OSGi dependencies, as the ClassLoader attempts to load the same class twice, from 2 different sources. Coworkers with Windows and Macs have never seen this kind of exception, and the lead dev replies with "I think you should use a normal environment for work rather than playing with your Linux". Wtf... It's Java. Every env is a "normal env" for the JVM!
I do some digging. One day passes by... a second one... a third... the weekend... The next Friday comes and I still haven't managed to launch component #4. Eventually I give up (since I cannot charge a client for a week I spent trying to set up my env) and walk away from that project. Ever since, this LinkageError stayed on my mind; for some reason I could not let it go. It was driving me CRAZY!
So half a year passes by and one of the project devs gets a new MacBook Pro. 2 days later I get a PM: "umm... were you the one who used to get a LinkageError while starting component #4 up?". You guys have NO IDEA how happy his message made me. I mean... I was frickin HIGH: all smiling, singing, even dancing behind my desk!! Apparently the guy had the same problem I did, except he knew the project quite well. It took 3 more days for him to figure out what was wrong and fix it. And it indeed was an error in the project -- not my "abnormal Linux env"! And again, for some hell-knows-what reason, Windows was masking a mistake in the codebase and not popping an error where it should have. Linux, on the other hand, found the error and crashed the app immediately, so the product would not be shipped with God knows what bugs...
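If you have never met a LinkageError: the JVM lets each ClassLoader define a given class name exactly once, and mis-wired OSGi dependencies can feed the same class to one loader from two sources. A toy demo of the failure mode (nothing to do with the actual project code):

import java.io.InputStream;

// A classloader may define a class name only once; the second defineClass
// below throws java.lang.LinkageError: attempted duplicate class definition.
public class LinkageDemo extends ClassLoader {

    public static class Foo {} // the innocent victim

    public static void main(String[] args) throws Exception {
        byte[] bytes;
        try (InputStream in = LinkageDemo.class.getResourceAsStream("LinkageDemo$Foo.class")) {
            bytes = in.readAllBytes();
        }
        LinkageDemo loader = new LinkageDemo();
        loader.defineClass("LinkageDemo$Foo", bytes, 0, bytes.length); // fine
        loader.defineClass("LinkageDemo$Foo", bytes, 0, bytes.length); // BAM: LinkageError
    }
}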
I do not mean to start a flame war or smth, but it's obvious I've kind of saved 2 projects from "undefined magical behaviour" just by using Linux. I guess what I really wanted to say is that no matter how good a dev you are, whether you are a senior, lead or chief dev, if your coworker (be it another senior or a junior dev) says he gets an error and YOU cannot figure out what the heck is wrong, you should not blame the dev or the environment without knowing it for a fact. If something is not working, figure out the WHATs and WHYs first. Analyze, compare data across envs... Not only will you help the new guy join your team, but you'll also learn something new. And in some cases something crucial, e.g. a serious mess-up in the codebase.
#include <string>
using std::string;

string excuses[] = {
"it's not a bug it's a feature",
"it worked on my machine",
"i tested it and it worked",
"its production ready",
"your browser must be caching the old content",
"that error means it was successful",
"the client fucked it up",
"the systems crashed and the code got lost" ,
"this code wont go into the final version",
"It's a compiler issue",
"it's only a minor issue",
"this will take two weeks max",
"my code is flawless must be someone else's mistake",
"it worked a minute ago",
"that was not in the original specification",
"i will fix this",
"I was told to stop working on that when something important came up",
"You must have the wrong version",
"that's way beyond my pay grade",
"that's just an unlucky coincidence",
"i saw the new guy screw around with the systems",
"our servers must've been hacked",
"i wasn't given enough time",
"its the designers fault",
"it probably won't happen again",
"your expectations were unrealistic",
"everything's great on my end",
"that's not my code",
"it's a hardware problem",
"it's a firewall issue",
"it's a character encoding issue",
"a third party API isn't responding",
"that was only supposed to be a placeholder",
"The third party documentation is wrong",
"that was just a temporary fix.",
"We outsourced that months ago.","
"that value is only wrong half of the time.",
"the person responsible for that does not work here anymore",
"That was literally a one in a million error",
"our servers couldn't handle the traffic the app was receiving",
"your machines processors must be too slow",
"your pc is too outdated",
"that is a known issue with the programming language",
"it would take too much time and resources to rebuild from scratch",
"this is historically grown",
"users will hardly notice that",
"i will fix it" };11 -
My biggest dev blunder. I haven't told a single soul about this, until now.
👻👻👻👻👻👻
So, I was working as a full stack dev at a small consulting company. By this time I had about 3 years of experience and started to get pretty comfortable with my tools and the systems I worked with.
I was the person in charge of a system dealing with interactions between people in different roles. Some of this data could be sensitive in nature and users had a legal right to have data permanently removed from our system. In this case it meant remoting into the production database server and manually issuing DELETE statements against the db. Ugh.
As soon as my brain finishes processing the request to venture into that binary minefield and perform rocket surgery on that cursed database, my sympathetic nervous system goes into high alert, palms sweaty. Mom's spaghetti.
Alright. Let's do this the safe way. I write the statements needed and do a test run on my machine. Works like a charm 😎
Time to get this over with. I remote into the server. I paste the code into Microsoft SQL Server Management Studio. I read through the code again and again and again. It's solid. I hit run.
....
Wait. I ran it?
....
With the IDs from my local run?
...
I stare at the confirmation message: "Nice job dude, you just deleted some stuff. Cool. See ya. - Your old pal SQL Server".
What did I just delete? What ramifications will this have? Am I sweating? My life is over. Fuck! Think, think, think.
You're a professional. Handle it like one, goddammit.
I think about doing a rollback but the server dudes are even more incompetent than me and we'd lose all the transactions that occurred after my little slip. No, that won't fly.
I do the only sensible thing: I run the statements again with the correct IDs, disconnect my remote session, and BOTTLE THAT SHIT UP FOREVER.
I tell no one. The next few days I await some kind of bug report or maybe a SWAT team. Days pass. Nothing. My anxiety slowly dissipates. That fateful day fades into oblivion and I feel confident my secret will die with me. Cool ¯\_(ツ)_/¯
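For what it's worth, the guard rail this story is begging for: run the DELETE in an explicit transaction, sanity-check the affected row count, and only commit if it matches. A sketch with made-up connection details, table name and IDs (the real schema was never shown):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class GuardedDelete {
    public static void main(String[] args) throws Exception {
        long[] productionIds = {4711L, 4712L}; // NOT the IDs from your local test run...
        try (Connection con = DriverManager.getConnection(
                "jdbc:sqlserver://prod-db;databaseName=interactions")) { // credentials omitted
            con.setAutoCommit(false); // nothing is permanent until commit
            int deleted = 0;
            try (PreparedStatement ps = con.prepareStatement(
                    "DELETE FROM interactions WHERE id = ?")) {
                for (long id : productionIds) {
                    ps.setLong(1, id);
                    deleted += ps.executeUpdate();
                }
            }
            if (deleted == productionIds.length) {
                con.commit();   // exactly what we expected: keep it
            } else {
                con.rollback(); // anything surprising: undo and investigate
                System.err.println("Expected " + productionIds.length + " rows, got " + deleted);
            }
        }
    }
}

An explicit transaction would also have made the "run it twice" save unnecessary: rolling back undoes only your own uncommitted work, no matter what else is flowing through the server.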
So rewind back about 24 years. I was a little kid who thought computers were the coolest thing evar, and our family had just gotten our first machine (a monstrous tower from a company named CyberMax, running Win 3.11 on DOS 6, 33MHz and a 250MB hard drive).
My aunt (big into coding at the time) came by with a box full of disks and loaded the machine up with all kinds of games and fun stuff. One of the things she installed was Hoyle Classic Card Games (https://playclassic.games/games/...)
My parents fell in love with this and played it for hours. The problem was, the process to get it started, while not complicated, was still a pain in the ass. You had to either hammer F6 to get the startup menu and type a bunch of commands to switch to the directory and start the game, or let it boot into Windows, then leave Windows for DOS and do the same thing.
On a lark, when we had gotten the machine, mom had also bought this little DOS programming handbook. I can't find it nowadays, but it went into very exhaustive detail on the cool things you could do with batch files. I was a voracious reader, especially on anything to do with computers, and one of the things the book covered was how to write startup menus using the CHOICE command! Little me figured out that you could write this into the AUTOEXEC.BAT, and have a menu come up on every start!
It took me a couple days of piddling around (again, I was like 6 or 7, and this was the first "program" I'd ever written), but I eventually got it to the point where you'd turn the computer on, and the first thing it would do is ask if you wanted to go into Windows, or if you wanted to play cards. I was proud as hell when this was set up and working!
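For anyone who never had the pleasure, such a menu at the end of AUTOEXEC.BAT would look roughly like this (the paths and the game's executable name are guesses; CHOICE shipped with DOS 6 and sets ERRORLEVEL according to the key pressed, so you test the highest value first):

@ECHO OFF
REM Startup menu at the end of AUTOEXEC.BAT
CHOICE /C:WC Start (W)indows or play (C)ards
IF ERRORLEVEL 2 GOTO CARDS

CD \WINDOWS
WIN
GOTO DONE

:CARDS
CD \SIERRA\HOYLE
HOYLE
GOTO DONE

:DONE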
I didn't do much writing of programs for a long time after that (I was more interested in games at the time), but yeaaaarrrs later I encountered Why's Poignant Guide to Ruby, fell in love, and I've been hacking code ever since.
Testing hell.
I'm working on a ticket that touches a lot of areas of the codebase, and impacts everything that creates a ... really common kind of object.
This means changes throughout the codebase and lots of failing specs. Ofc sometimes the code needs changing, and sometimes the specs do. It's tedious.
What makes this incredibly challenging is that different specs fail depending on how I run them. If I use Jenkins, I'm currently at 160 failing tests. If I run the same specs from the terminal, I get 132. If I run them from RubyMine... well, I can't run them all at once because RubyMine sucks, but I'm guessing it's around 90 failures based on spot-checking some of the files.
But seriously, how can I determine what "fixed" even means if the specs arbitrarily pass or fail in different environments? I don't even know how the CLI and RubyMine *can* differ, if I'm being honest.
I asked my boss about this and he said he's never seen the issue in the ten years he's worked there. So now I'm doubly confused.
Update: I used a copy of his db (the same one Jenkins is using), and now rspec reports 137 failures from the terminal, and a similar ~90 (again, a guess) from RubyMine based on more spot-checking. I am so confused. The db dump has the same structure, and rspec clears the actual data between tests, so wtf is even going on? Maybe the encoding differs? But the failing specs are mostly testing logic?
none of this makes any sense.
i'm so confused.
It feels like I'm being asked to build a machine where the laws of physics change with locality. I can make it work here just fine, but it misbehaves a little at my neighbor's house, and outright explodes at the testing ground.
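Postscript for anyone who lands here with the same symptoms: one cause worth ruling out is order-dependent specs, i.e. state leaking between examples so results depend on the order things run in. That's only a guess, but RSpec ships tooling for exactly that (real flags; the seed value is arbitrary):

# pin one example ordering so every environment runs the specs identically
bundle exec rspec --seed 1234

# then let RSpec shrink an order-dependent failure to a minimal reproduction
bundle exec rspec --seed 1234 --bisect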
Ok friends let's try to compile Flownet2 with Torch. It's made by NVIDIA themselves so there won't be any problem at all with dependencies right?????? /s
Let's use Deep Learning AMI with a K80 on AWS, totally updated and ready to go super great always works with everything else.
> CUDA error
> CuDNN version mismatch
> CUDA versions overwrite
> Library paths not updated ever
> Torch 0.4.1 doesn't work so have to go back to Torch 0.4
> Flownet doesn't compile, get bunch of CUDA errors piece of shit code
> online forums have lots of questions and 0 answers
> Decide to skip straight to vid2vid
> More cuda errors
> Can't compile the fucking 2d kernel
> Through some act of God reinstalling cuda and CuDNN, manage to finally compile Flownet2
> Try running
> "Kernel image" error
> excusemewhatthefuck.jpg
> Try without a label map because fuck it the instructions and flags they gave are basically guaranteed not to work, it's fucking Nvidia amirite
> Enormous fucking CUDA error and Torch error, makes no sense, online no one agrees and 0 answers again
> Try again but this time on a clean machine
> Still no go
> Last resort, use the docker image they themselves provided of flownet
> Same fucking error
> While in the process of debugging, realize my training image set is also bound to have bad results because "directly concatenating" images together as they claim in the paper actually has horrible results, and the network doesn't accept 6 channel input no matter what, so the only way to get around this is to make 2 images (3 * 2 = 6 quick maths)
> Fix my training data, fuck Nvidia dude who gave me wrong info
> Try again
> Same fucking errors
> Doesn't give any helpful information, just spits out a bunch of fucking memory addresses and long function names from the CUDA core
> Try reinstalling and then making a basic torch network, works perfectly fine
> FINALLY.png
> Setup vid2vid and flownet again
> SAME FUCKING ERROR
> Try to build the entire network in tensorflow
> CUDA error
> CuDNN version mismatch
> Doesn't work with TF
> HAVE TO FUCKING DOWNGRADE DRIVERS TOO
> TF doesn't support latest cuda because no one in the ML community can be bothered to support anything other than their own machine
> After setting up everything again, realize have no space left on 75gb machine
> Try torch again, hoping that the entire change will fix things
At this point I'll leave a space so you can try to guess what happened next before seeing the result.
Ready?
3
2
1
> SAME FUCKING ERROR
In conclusion, NVIDIA is a fucking piece of shit that can't make their own libraries compatible with themselves, and can't be fucked to write instructions that actually work.
If anyone has vid2vid working or has gotten around the kernel image error for AWS K80s, please throw me a lifeline; in exchange you can have my soul, or what little is left of it
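PSA for anyone else trapped in the same version matrix: the first sanity check is comparing what PyTorch was built against with what the box actually runs. These are all real PyTorch calls; the output is whatever your machine has:

import torch

print("torch:", torch.__version__)
print("CUDA toolkit PyTorch was built with:", torch.version.cuda)
print("cuDNN:", torch.backends.cudnn.version())
print("CUDA available:", torch.cuda.is_available())

if torch.cuda.is_available():
    # "kernel image" errors usually mean the binary wasn't compiled for this
    # GPU's compute capability (the K80 is sm_37, i.e. capability (3, 7))
    print("GPU:", torch.cuda.get_device_name(0))
    print("compute capability:", torch.cuda.get_device_capability(0))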
So this one made me create an account on here...
At work, there's a feature of our application that allows the user to design something (keeping it vague on purpose) and to request a 3D render of their creation.
Working with dynamically positioned objects, textures and such, errors are bound to happen. That's why we implemented a bug report feature.
We have a small team tasked with monitoring the bug reports and taking action upon it, either by fixing a 3D scene, or raising the issue to the dev team.
The other day, a member of that team told me (since I'm part of the dev team) that he had received a complaint that the image a user received was empty. Strange, we hadn't updated the code in a while.
So I check the server, all the docker containers are running fine, the code is fine, no errors anywhere.
Then, as I'm scratching my head, that guy comes back to me and says "I don't know if it can help you, but it's been doing it for a week and a half now".
"And we're only hearing about it now?!", I replied.
"Well, I have bug reports going back to the 15th, but we haven't been checking the reports for a while now since everything was fine", he says as if it was actually a normal thing to say.
"How can you know everything is fine if you're not looking at the thing that says if there's an issue?!", I replied with a face filled with despair.
"Well we didn't receive any new reports in a while, so we just stopped looking. And now the report tool window is actually closed on my machine", he says with a smile and a little laugh in his tone.
In the end, I got to fix the server issue quite easily. But still, the feature wasn't working for 1.5 weeks and more than 330 images weren't sent properly...
So yeah, Doctor, the patient's heart is beating again! Let's unplug the monitor, it should be fine.
Welcome to my little piece of hell :)
tldr; Windows security sucks. You as an org admin can't do anything about it. Encrypt your device. Disable USB live boot in the BIOS and protect it with a STRONG password.
First off, I just want to say that I DO NOT want to start the good ol' Linux VS Windows debate. I'm just ranting about Windows security here...
Second, here's why I did all of this. I did it mainly because I wanted to install some programs on my laptop, but also to prove that you can't lock down a Windows PC. I don't recommend doing this, since it's against the contract I signed.
So when I got my laptop from my school I wanted to install some programs on it, such as VS Code and Spotify. They were not available in the 'Software Center', so I had to find another way. Since this was when we still used Windows 7, it was quite easy to turn Sticky Keys into a command prompt. I did it this way (https://github.com/olback/...) and decided to write a tutorial while I was at it, because I didn't find any online using this exact method. I couldn't boot from a USB because that's disabled in the BIOS, which is protected by a password. Okay, Sticky Keys are now CMD. So let's spam SHIFT 5 times before I log in? Yeah, thanks for the command prompt. Running 'whoami' returned 'NT SYSTEM'. Apparently NT SYSTEM has domain administrator rights, which allowed me to make myself an Administrator on the machine. So I installed everything I wanted, and everything was fine until it was time to migrate to a new domain. It failed, of course. So I handed my laptop to the IT retards (no offense to people working in IT and managing orgs) and got it back the day after, with Windows 10.
Windows 10 is not really a problem, I don't mind it. The thing is, I can't use any of the usual Sticky-Keys-to-CMD methods since they're all fixed in W10. So what did I do? Moved the laptop's disk to my main PC and copied cmd.exe over sethc.exe. And there we go again: CMD running as NT SYSTEM on Windows 10. Made myself admin again, installed everything I needed. Then I wanted to change my wallpaper and lock screen, and had to turn to PowerShell for this since ALL settings are managed by my school. After some messing around, everything is as I want it now.
'Oh this isn't a problem bla bla bla'. Yes, this is a problem. If someone gets physical access to your PC/laptop they can gain access to everything on it. They can change your password on it, since the command prompt is running as NT SYSTEM. So please, protect your data and whatever other private information you have on your PC. Encrypt your machine and disable USB live boot.
Have a good weekend!
*With exceptions for spelling errors and horrible grammar.
Here's a book most of you have either read in a newer edition or in some variant based on it: as computer science students we had to take an intro to logic course... prior to digital logic... or at least that's how it went for me and many others I know.
Regardless of how much the universities screwed up teaching comp sci and programming, this is one aspect I think they nailed: requiring a philosophical logic course for comp sci.
Again, this isn't a digital logic book. It's just philosophical logic. The first edition of this book came out in 1953... and I think they're on edition 14 or 15 now... for a book to have this many editions and last this long through time, it's a good book.
It's a book that should be a must-read for anyone venturing into AI and working on human-machine thought processing.
It's a great book to have around as a reference, considering philosophical logic is not a walk in the park, at least not in the beginning, because it requires you to change the way you view things... more specifically, it requires you to think objectively and make decisions objectively, rather than through subjective emotional reasoning.
Programmers need to think objectively in everything they do. The moment you begin thinking subjectively (i.e. personal style, wishes and wants, or personal reasons) and put that into code for a codebase with a team, you just put the team at risk.
Does this book teach objective thought? No... indirectly, yes, because it teaches the objective rules of logic... you don't get to have an emotional opinion on whether you agree or disagree or whatnot; logic is logic, even philosophical logic. Many people failed the logic course I was in at university... in fact the bell curve was C-/D... many people had to take the course more than once. They even had to change the way the grading was done just to get more people to pass...
But here's the thing: it's not that it was taught wrong. People just couldn't adapt to thinking objectively, by rules, the way philosophical logic demands. Granted, the symbols take time getting used to, but that literally wasn't the reason people failed... it was their subjective opinions and thought processes interfering with the objectiveness of the course exams and homework.
Just my luck that I get the best wk76 story ever on wk77. Either way:
So some of you may know that the current project I am on has some shared code components with one of the other projects in the product line. And we have some differences in our processes. This leads to a lot of fun.
So, I was working on converting one of our shared components into a more modern language. It would save us time, money, and sanity by allowing us to more easily maintain our product. Sounds like a win-win right? That's what I thought. Until I had a meeting with the other team. THEN THE QUESTIONS ROLLED IN. Well who is going to integrate our product with yours? (You?) Are you changing the interface? (Not really.) Are you going to generate a design document? (Absolutely not especially since the interface isn't changing for the most part.) Well you are changing the type of one parameter in one method from an undocumented unmanaged type to a well documented managed type that we control. Shouldn't you generate a document to document that change? (Again absolutely not.)
So first they basically browbeat my lead into putting me in charge of their integration effort. It's fine though, as they gave me an account to charge. However, when I was finally able to get a machine with their build environment on it (at least two months later), they then told me that that account was closing and I had to wait until next quarter. So fuck me, right? And because of their process, I would break them if I were to check my changes in.
So fast forward to today. They are translating some shared components for the same reason that we are. However, they are changing code that, while shared, is technically "ours" and that will DEFINITELY break us if they do this work, since this is the code that controls our algorithms. And while we have a fault-tolerant process, or at least more fault-tolerant than the other group's, we are currently doing a huge amount of development in the part they want to change. And when we ask them "who is going to do the work to integrate our product with your changes?" they stare at us slack-jawed. Like, "um, you, right? It doesn't affect us." Like MOTHERFUCKERS!!! YOU LITERALLY JUST FOISTED ALL THIS WORK ON US TO INTEGRATE WITH YOU BECAUSE YOU DIDN'T HAVE THE PEOPLE TO SUPPORT IT!!! BUT YOU CAN PAY THIS GUY FOR SIX MONTHS TO DO ALL THIS WORK THAT WILL BREAK US, BUT CAN'T SPARE HIM TO INTEGRATE WITH US!?!?!? EVEN IF WE'RE PAYING HIM AND NOT YOU!?!?!
I will let you know how this goes when we have the discussion. I am drinking right now because it is easier and better for my emotional and physical health than bum fights.
So, someone from the support department asked me to check the app, as it was displaying blank information on a page.
I started debugging the API, which I found was causing the problem. I checked and it was working a moment before, but not now. I switched into debug mode to see what went wrong and it worked again. This happened multiple times.
Before doing anything else, I asked our API developer to check if the API was working. He said it should work now and that the problem had been fixed.
Later I found out that he had been debugging and changing code on the production server instead of on his local machine or a test server.
Aside from simple programs I wrote by hand-transcribing code from the "Basic Training" section of 3-2-1 Contact magazine when I was a kid in the '80s, I would say the first project I ever undertook on my own that had a meaningful impact on others was when I joined a code migration team when I was 25. It was 2003.
We had a simple migration log that we would need to fill out when we performed any work. It was a spreadsheet, and because Excel is a festering chunk of infected cat shit, the network-shared file would more often than not be locked by the last person to have the file open. One night after getting prompted to open the document read-only again, I decided I'd had it.
I went to a used computer store and paid $75 out of pocket for an old beater, brought it back to the office, hooked it to the network, installed Lunar Linux on it, and built a simple web-based logging application that used a bash-generated flat file backend. Two days later, I had it working well enough to show it to the team, and they unanimously agreed to switch to it, rather than continue to shove Excel's jagged metal dick up our asses.
My boss asked me where I was hosting it, as such an application in company space would have certainly required his approval to procure. I showed him the completely unauthorized Linux machine(remember, this was 2003, when fortune 500 corporations, such as my employer, believed Ballmer's FUD-spew about Linux being a "virus" was real and not nonsense at all), and he didn't even hesitate to back me up and promise to tell the network security gestapo to fuck off if they ever came knocking. They never did.
I was later informed that the team continued to use the application for about five years after I left.
Just joined a new company and can only describe the merge process as madness... is it, or am I the one that is mad?!
They have the following branches:
UAT#_Development branch
UAT#_Branch (this kicks off a build to a machine named UAT#)
Each developer has a branch, with the # being a number 1 to 6, except 5, which has been reserved for the UAT_Testing branch.
They are working on a massive monolith (73 projects). It has direct references to projects, with no NuGet packages. Building the solution requires building other solutions in a particular order; in short, a total fucking mess.
Developer workflow:
Branch from master with a feature or hotfix branch
Make commits to said branch and test manually as there are no automated tests
Push the commits to their UAT#_Development branch; this branch isn't recreated each time and may differ from all the other UAT#_Development branches.
Once happy, create a pull request to merge from UAT#_Development to UAT#_Branch. You can approve your own pull request. This kicks off a build and pushes it to a server named UAT#.
Developer reviews changes on the UAT# server.
QA team create a UAT/year/month/day branch, then tell developers to merge their UAT#_Branch branches into the previously created branch. This has to be done in order, and that is coordinated through a flurry of emails.
Once all the merges are in, everything gets pushed to a UAT_Testing branch, which kicks off a build (again, not a single automated test) and is manually tested by the QA team. If happy, they create a release branch named Release/year/month/day and push the changes into it.
A pull request from the release branch is then made to the pre-live environment, where upon merge a build is kicked off. If that passes testing, then a pull request to live is created and the code goes out into production.
Ahhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhh it's a total mess. I knew when I took on this job it would be a challenge, but nothing has prepped me for the scale of it!! At my last place it was trunk-based development: commit straight to master, a build kicks off with automated testing, and that just gets pushed through each of the environments. So easy, so simple!
They tell me this all came about because they previously used Entity Framework EDMX models for the database and it caused merge hell.
OK. We've got this tiny little pet project of mine (work related)…
I rescued it from the git archive; simply put, someone had hot-glued an Elasticsearch scroll reader and a document processor together.
After a lot of refactoring, I had a simple, much improved (non-parallel) Akka worker system without an Akka topology / hierarchy.
I left out the hierarchy at first, because I didn't know Akka at all.
I've worked with a lot of process workflows, and some systems that come very close to IPC, so I wasn't completely in the dark.
Topology requires knowledge / creation of a state machine / process workflow. And at that point in time I just had... garbage. Partially working garbage.
Yesterday I finished the rewrite into several actors... Compared to before, there are 8 actors vs 2... and roughly 20 more classes. Mostly because I rewrote the Akka receive methods around command DTOs... and a lot of functions needed to be separated into layers (which were non-existent before).
That felt more natural than the previous chaos of passing strings or other primitive types around, or in the worst case just Object...
(Yes: previously an actor was essentially a class with one or more "doEverything" functions, plus maybe a few additional ones, which did everything from REST client to processing.)
Then I drew the actual state machine, based on everything I'd written in the last weeks, and thought about how to create the actual topology and where / how parallelizing might make sense.
Innocent me stumbled onto Akka Typed in the Akka docs... (I didn't know it existed, since I'm very new to Java and Akka.)
Hm, that sounds a lot like what I did. In a different way, yes. But not so different that it would be VERY hard to port to... and I'd need to change a few classes anyway (for the implementation of the hierarchy)...
[I should have known at this stage that my curiosity would get the best of me, but yeah. Curiosity killed the cat.]
Actually the documentation is not bad. It's just that upon reading the first more complex examples, my brain decided to go into panic state.
They've essentially combined all classes into one class in all the source code examples [which makes more sense later], where it is fscking hard for a chaotic brain like mine to extract information...
https://doc.akka.io/docs/akka/...
The thing is: It's not hard to understand… actually very simple.
It was just my brain throwing a fuck-you tantrum.
So I've opened more examples in other tabs and cross referenced what happened there and why...
A few frustrated hours later I got that part... and the part about why it's called Akka Typed. It was pretty simple...
Open the gates of hell, bloody satan that was too easy for fucks sake.
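For anyone whose brain throws the same tantrum, the shape that made it click boils down to this: one protocol interface, command DTOs, and a typed receive builder instead of a "doEverything" method. A minimal sketch with hypothetical names (the real project scrolls Elasticsearch):

import akka.actor.typed.ActorSystem;
import akka.actor.typed.Behavior;
import akka.actor.typed.javadsl.AbstractBehavior;
import akka.actor.typed.javadsl.ActorContext;
import akka.actor.typed.javadsl.Behaviors;
import akka.actor.typed.javadsl.Receive;

public class ScrollWorker extends AbstractBehavior<ScrollWorker.Command> {

    // the actor's protocol: every message is a Command DTO
    public interface Command {}

    public static final class FetchPage implements Command {
        public final String scrollId;
        public FetchPage(String scrollId) { this.scrollId = scrollId; }
    }

    public static final class Shutdown implements Command {}

    public static Behavior<Command> create() {
        return Behaviors.setup(ScrollWorker::new);
    }

    private ScrollWorker(ActorContext<Command> context) { super(context); }

    @Override
    public Receive<Command> createReceive() {
        return newReceiveBuilder()
                .onMessage(FetchPage.class, this::onFetchPage)
                .onMessage(Shutdown.class, msg -> Behaviors.stopped())
                .build();
    }

    private Behavior<Command> onFetchPage(FetchPage msg) {
        getContext().getLog().info("fetching scroll page {}", msg.scrollId);
        return this; // keep the same behavior for the next message
    }

    public static void main(String[] args) {
        ActorSystem<Command> system = ActorSystem.create(create(), "pipeline");
        system.tell(new FetchPage("scroll-0"));
        system.tell(new Shutdown());
    }
}

The compiler now refuses anything that isn't a Command, which is exactly the "typed" part that was staring me in the face the whole time.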
Nooooow.... I just need to port my stuff to Akka Typed.
Cause. Challenge accepted, bitch - eh brain. You throw tantrum, you work overtime. -.-
I just cannot decide whether to go FP or OOP.
Now... I'm curious whether FP is that hard... I hadn't dealt with it at large before.
Can someone please stop me... I'm far too curious again. -.- *cries*
Relatively often the OpenLDAP server (slapd) behaves a bit strangely.
While it is a little bit slow (I didn't do a benchmark, but Active Directory seemed a bit faster, though it has other quirks and is Windows-only), with a small number of users it's fine. slapd is the reference implementation of the LDAP protocol, and I didn't expect it to be much better.
Some years ago slapd migrated to a different configuration style: instead of a configuration file and a required restart after every change, it now uses an additional database for "live" configuration, which also allows deploying multiple servers with the same configuration (I guess this is nice for larger setups). Much of the documentation online does not reflect the new configuration style, so using it requires some knowledge of LDAP itself.
It is possible to revert to the old file-based method, but the possibility might be removed in a future version, and restarts may take a little longer. So I guess, don't do that?
To access the configuration over the network (only using the command line on the server to edit the configuration is sometimes a bit... annoying), an additional internal user has to be created in the configuration database (while working on the local machine as root you are authenticated over a Unix domain socket). I mean, I had to create an administration user during the installation of the service, but apparently that was only for the main database...
The password in the configuration can be hashed as usual, but strangely it only accepts hashes of some passwords (a hashed version of "123456" is accepted, but not hashes of different passwords... I mean, what the...?), so I have to use a single plaintext password (secure password hashing works fine for normal user and admin accounts).
But even worse are the default logging options: by default (at least on Debian) the log level is set to DEBUG. Additionally, if slapd detects optimization opportunities it writes them to the logs, at least once per connection, if not per query. Together with an application that made a lot of connections and queries (this was not intended and got fixed later) THIS RESULTED IN 32 GB OF LOG FILES IN ≤ 24 HOURS! Enough to fill up the disk and crash other services (lessons learned: add more monitoring, monitoring, and monitoring, and /var/log should be a separate partition). I mean, logging optimization hints is certainly nice, and it runs faster now (again, I did not do any benchmarks), but the verbosity was way too high.
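For the record, taming that default is a single change against the live cn=config (assuming the usual root access over the ldapi:/// socket; "stats" logs connections/operations/results and is the commonly recommended level):

# loglevel.ldif -- replace slapd's chatty default log level
dn: cn=config
changetype: modify
replace: olcLogLevel
olcLogLevel: stats

Applied with: ldapmodify -Y EXTERNAL -H ldapi:/// -f loglevel.ldif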
The worst part is the error messages: when entering a query string with a syntax error, slapd returns error code 80 without any additional text. The documentation reveals the SO MUCH BETTER meaning: "other error". THIS IS SO HELPFUL... In the end I was able to find out why the input was rejected, but in my experience most error messages are a little bit more precise.