Comments
It can be Proxmox's fault. It can be another project's fault.
And I mean -any- other project: ZFS / kernel / Perl / bash / Python / ....
Most of the time when a filesystem decides to do the death split, it's hard - sometimes impossible - to figure out why.
I've stumbled across a few issues with Proxmox myself, but overall I see it as a blessing compared to Xen.
ZFS, however, is something I dislike - not hate, but dislike. Mostly because it's not a filesystem per se.
It's rather a very complicated storage database system. CoW isn't what I like either, especially since ZFS was designed for extremely large configurations you'd hardly find below a mid-level data center - it even has 128-bit support.
In most cases, ZFS isn't a good solution, especially when all you wanted were snapshots. Same for btrfs.
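To make the "all you wanted were snapshots" point concrete, here's a rough sketch of both routes - the pool, volume group, and dataset names are made up, and this glosses over sizing and mount details:

```shell
# ZFS: snapshots are a core CoW feature, one command per dataset.
# (Hypothetical pool/dataset "tank/data".)
zfs snapshot tank/data@before-upgrade
zfs rollback tank/data@before-upgrade   # instant revert to the snapshot
zfs destroy  tank/data@before-upgrade   # discard it when done

# Plain LVM under ext4/XFS gives you the same "snapshot before I break it"
# workflow without adopting a whole new storage stack.
# (Hypothetical volume group "vg0" with logical volume "data".)
lvcreate --size 5G --snapshot --name data-snap /dev/vg0/data
lvconvert --merge /dev/vg0/data-snap    # revert: merge the snapshot back
```

The LVM route isn't as elegant (you reserve snapshot space up front), but it stays within a boring, well-understood stack.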
For me that would be Proxmox. I know, people like it - but for no apparent reason it decided to nuke half my ZFS datasets in a pool, with no logic behind it whatsoever. All disks were tested, all came out good. Within the same pool there were datasets that were lost and some that remained.
I really don't get it. Looking at Proxmox's source code, it's more or less the command-line tools plus the web interface (e.g. https://github.com/proxmox/...). Oh, and they have the audacity to use their own file extension. Why not, I guess?
Anyway, half my data was gone. I couldn't tell how or why or what the fuck even happened there. But Proxmox runs Debian underneath, and I'd been rather pissed about Proxmox's idea of "don't touch the host system aaa" for a while at that point. So I figured: fuck it, I'll just take pure Debian and write my own slightly better garbage on top of that. And so the distribution project was born. I've been working on it for a little over a year now, and I've never had such issues again.
I somewhat get the idea of "don't touch the host" now, but still not quite. Yes, the more you do in the containers, the better. And the less you reconfigure the host, the longer it will stay alive. That goes for any system - more reconfiguration usually means less stability and a harder time replacing it. But sometimes you just have to work from the host. Like, say, migrating a container between hosts, which my code can do. You can't do that from inside a container, at all. There are good reasons to work with the host. Proxmox doesn't tell you that. Do they expect their users to be idiots? Only enterprise sysadmins, amirite?
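A host-side container migration over ZFS boils down to roughly this - the container name, dataset, paths, and target host are all hypothetical, and a real tool would also handle incremental sends and error cases:

```shell
# Hypothetical setup: container "ct101" rooted on dataset tank/ct/101,
# migrating to a second host called "node2".
lxc-stop -n ct101                        # quiesce the container first

# Snapshot the dataset and stream it to the target pool over SSH.
zfs snapshot tank/ct/101@migrate
zfs send tank/ct/101@migrate | ssh node2 zfs recv tank/ct/101

# Copy the container's config over, then start it on the other side.
scp /var/lib/lxc/ct101/config node2:/var/lib/lxc/ct101/config
ssh node2 lxc-start -n ct101
```

Every step here needs root on the host: the dataset, the LXC config, and the SSH trust between nodes all live outside the container, which is exactly why this can't be done from inside one.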
So yeah, that project - while I do take inspiration from it in mine - I don't like it. It's enterprise: it has the ZFS and the Ceph and the LXC and the VMs - woohoo! Not like anyone could implement that on a base Debian system. But they have the configuration database (pmxcfs), a distributed configuration database a couple MB large and capped there, woah!
Ok, sure, it isn't Microsoft or IBM or Oracle or whatever, and those are definitely worse. But those are usually vendor lock-ins.. I avoid those on that premise alone :)
rant
wk236