Search - "unix socket"
-
PSA: If you do reverse proxying stuff, prefer unix domain sockets over localhost TCP sockets when everything is on the same machine (and when it's forwarded over ssh, too). You can even serve HTTP over a unix socket.
Unix domain sockets skip the IP stack entirely, so generally speaking, data flows to your other process with much less overhead.
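To illustrate (a minimal Go sketch; the socket path and handler are made up for the example): any HTTP server that can take a net.Listener will happily sit behind a unix socket instead of a TCP port.

```go
// Minimal sketch: plain net/http, but listening on a unix domain socket
// instead of 127.0.0.1:port. The socket path is an arbitrary example.
package main

import (
	"fmt"
	"log"
	"net"
	"net/http"
	"os"
)

func main() {
	const sock = "/run/myapp/http.sock" // example path, pick your own

	// Remove a stale socket file left over from a previous run.
	_ = os.Remove(sock)

	ln, err := net.Listen("unix", sock)
	if err != nil {
		log.Fatal(err)
	}
	defer ln.Close()

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "hello over a unix domain socket")
	})

	// http.Serve takes any net.Listener, so unix vs TCP makes no difference here.
	log.Fatal(http.Serve(ln, nil))
}
```

You can poke it with curl --unix-socket /run/myapp/http.sock http://localhost/, and reverse proxies like nginx or HAProxy can be pointed at the socket path instead of an IP:port upstream.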
I've recently stopped being lazy about this, and it's worth it.
-
Dear Lord, please stop people from enforcing standards and bypassing them themselves.
Take kubernetes for example. Since v1.24, CRI has been declared the standard, and kubernetes is shifting to live by it.
But it's not.
Yes, it's got the CRI spec defined and the unix://cri.sock used for that standardised communication. What nobody tells you is that the socket MUST belong to a runtime on the same machine as the kube components. I.e. you can't simply spin up a dockerd/containerd/cri-o server somewhere else and share its CRI socket via CIFS/NFS/etc., because kube-cp will assume the container runtime is running on the same host as the cp and will try to access its services via localhost.
So effectively you feed the container, over that socket, to another machine; the runtime there spins up the container, and that container then tries to
- bind to your local machine's IP (not that of the machine it is actually running on)
- access its dependencies via localhost:port, while those dependencies are actually running on your local machine (not on the CRI host)
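For context, the CRI endpoint is nothing more exotic than a gRPC service behind a unix socket path on the local filesystem - which is also why "sharing" the socket file to another box can't give you a remote runtime. A rough sketch of talking to it (assuming google.golang.org/grpc and the k8s.io/cri-api v1 package are available; the containerd path below is just the usual default):

```go
// Rough sketch: query the CRI runtime's version over its unix socket.
// The endpoint path is the common containerd default; cri-o and others
// use different paths.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	const endpoint = "unix:///run/containerd/containerd.sock"

	// gRPC understands the unix:// target scheme; no TCP involved at all.
	conn, err := grpc.Dial(endpoint,
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := client.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(resp.RuntimeName, resp.RuntimeVersion)
}
```

Note that the path has to exist on the same kernel the kube components run on - there's no host:port anywhere in the contract, and that's exactly where the implicit "same machine" assumption comes from.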
I HOPE this will change some day, and we'll have a clean cut between dependencies and dependents, separated by a single communication channel - a single unix socket. That'd be a solution I'd really enjoy working with. NOT the ip-port-connect-bind spaghetti we have now.
-
Relatively often, the OpenLDAP server (slapd) behaves a bit strangely.
While it is a little bit slow (I didn't do a benchmark, but Active Directory seemed to be a bit faster - though it has other quirks and is Windows-only), with a small number of users it's fine. slapd is the reference implementation of the LDAP protocol, and I didn't expect it to be much better.
Some years ago slapd migrated to a different configuration style - instead of a configuration file and a required restart after every change, it now uses an additional database for "live" configuration, which also allows deploying multiple servers with the same configuration (I guess this is nice for larger setups). Much of the documentation online does not reflect the new style, so using it requires some knowledge of LDAP itself.
It is possible to revert to the old file-based method, but that possibility might be removed in any future version - and restarts may take a little bit longer. So I guess, don't do that?
To access the configuration over the network (only using the command line on the server to edit the configuration is sometimes a bit... annoying), an additional internal user has to be created in the configuration database (while working on the local machine as root, you are authenticated over a unix domain socket). I mean, I had to create an administration user during the installation of the service, but apparently that one is only for the main database...
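For anyone wondering what that "authenticated over a unix domain socket" part looks like: slapd listens on an ldapi:// socket and maps the connecting process's uid/gid via SASL EXTERNAL, which is what lets local root read cn=config without any password. A rough sketch with go-ldap (assumptions: the library's DialURL accepts ldapi URLs with a percent-encoded path, and the socket sits at Debian's usual /run/slapd/ldapi - adjust for your setup):

```go
// Rough sketch: connect to slapd over its local unix socket (ldapi) and
// read cn=config using SASL EXTERNAL, i.e. the peer's uid/gid as identity.
package main

import (
	"fmt"
	"log"

	"github.com/go-ldap/ldap/v3"
)

func main() {
	// Debian's default ldapi socket path, percent-encoded for the URL.
	conn, err := ldap.DialURL("ldapi://%2Frun%2Fslapd%2Fldapi")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// SASL EXTERNAL: slapd maps root's peer credentials to something like
	// gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth.
	if err := conn.ExternalBind(); err != nil {
		log.Fatal(err)
	}

	// Roughly what "ldapsearch -Y EXTERNAL -H ldapi:/// -b cn=config" shows.
	req := ldap.NewSearchRequest(
		"cn=config", ldap.ScopeWholeSubtree, ldap.NeverDerefAliases,
		0, 0, false, "(objectClass=*)", []string{"olcLogLevel"}, nil,
	)
	res, err := conn.Search(req)
	if err != nil {
		log.Fatal(err)
	}
	for _, entry := range res.Entries {
		fmt.Println(entry.DN, entry.GetAttributeValue("olcLogLevel"))
	}
}
```

The command-line equivalent is simply ldapsearch -Y EXTERNAL -H ldapi:/// -b cn=config, run as root on the server itself.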
The password in the configuration can be hashed as usual - but strangely it only accepts hashes of some passwords (a hashed version of "123456" is accepted, but not hashes of other passwords - I mean, what the...?), so I have to use a single plaintext password... (secure password hashing works for normal user and normal admin accounts).
But even worse are the default logging options: By default (at least on Debian) the log level is set to DEBUG. Additionally, if slapd detects optimization opportunities, it writes them to the logs - at least once per connection, if not per query. Together with an application that did a lot of connections and queries (this was not intended and got fixed later) THIS RESULTED IN 32 GB OF LOG FILES IN ≤ 24 HOURS! - enough to fill up the disk and crash other services (lessons learned: add monitoring, more monitoring, even more monitoring, and /var/log should be a separate partition). I mean, logging optimization hints is certainly nice - it runs faster now (again, I did not do any benchmarks) - but the verbosity was way too high.
The worst parts are the error messages: When entering a query string with a syntax error, slapd returns error code 80 without any additional text - the documentation reveals the SO MUCH BETTER meaning: "other error". THIS IS SO HELPFUL... In the end I was able to find the reason why the input was rejected, but in my experience most error messages are a little bit more precise.
-
THIS is powering the internet:
"[...] was a protocol number, similar to the third argument to socket today. Specifying this structure was the only way to specify the protocol family. Therefore, in this early system the PF_ values were used as structure tags to specify the protocol family in the sockproto structure, and the AF values were used as structure tags to specify the address family in the socket address structures. The sockproto structure is still in 44BSD (pp. 626-627 of TCPv2) but is only used internally by the kernel. The original definition had the comment "protocol family" for the sp_family member, but this has been changed to "address family" in the 4.4BSD source code. To confuse this difference between the AF_ and PF_ constants even more, the Berkeley kernel data structure that contains the value that is compared to the first argument to socket (the dom family member of the domain structure, p. 187 of TCPv2) has the comment that it contains an AF_ value. But some of the domain structures within the kernel are initialized to the corresponding AF value (p. 192 of TCPv2) while others are initialized to the PF value." Richard Stevens 'Unix network programming'