Search - "#zfs"
-
My Sunday morning until afternoon. FML. So I had been experiencing nightly reboots of my home server for three days. Always at 3:12 am, strange thing. Sunday morning (ca. 10 am) I thought I'd investigate, because the reboots affected my backups as well. All the logs and the security mails said was that some processes received signal 11. Strange. Checked the periodic tasks and executed every task manually. Nothing special. Strange. Checked SMART status for all disks. Two disks were having CRC errors. Not many, but a couple. Oh well. Changing SATA cables again 🙄. But those CRC errors cannot be the reason for the reboots at precisely the same time each night.
I noticed that all my zpools got scrubbed except my root pool, which hadn't been scrubbed since the error first occurred. Well, let's do it by hand: zpool scrub zroot... Freeze. dafuq. Walked over to the server and reset it. Waited 10 minutes. System not up yet. Fuuu... that was when I first guessed that Sunday won't be that sunny after all. Connected a monitor. Reset. Black screen?!?! Disconnected all disks and so on. Reset. Black screen. Oh c'moooon! CMOS reset. Black screen. Sigh. CMOS reset with a 5-minute battery removal. And a new SATA cable, just in case. Yes, boots again. Mood lightened...
Now the system segfaults when importing zroot. God damnit. Pulled out the FreeBSD boot stick. zpool import -R /tmp zroot... segfault. Reboot. Read-only zroot import. Manually triggering a checksum test with the zdb command. "Invalid blkptr type". Deep breath now. Destroyed the pool, recreated it. zfs send/recv from backup. Some more config. Reboot. Boots, yeah... Doesn't find files??? Reboot. Other error? Undefined symbols???? Now I need another coffee. Maybe I did something wrong during recovery? Not very likely, but let's do it again... recover-recover. Different but same horrible errors. What in the name...?
Pulled out a really old disk. Put it in, boots fine. So it must be the disks. Walked around the house and searched for some new disks for a new 2-disk ZFS root mirror to replace the obviously broken ones. Even found some new ones. Recovery boot, minimal FreeBSD install for the bootloader and so on. Deleted and recreated zroot, zfs send/recv from backup. Set the bootfs attribute, reboot........
It works again. Fuckit, now it is 6 pm and I still haven't showered. Put both disks through extensive tests and checked every single block. These disks aren't faulty. But for some reason they froze my system in a way that made me reset my BIOS, and they had really low-level data errors...? I wonder if those disks have a firmware problem? So that was most of my Sunday. Nice, isn't it? But hey: a calm sea won't make a good sailor, right?
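The restore itself boils down to only a few commands. Roughly what I ran from the boot stick (a sketch from memory -- partition names, the backup pool, the snapshot and the boot environment dataset are placeholders, adjust to your own layout):
# recreate the root pool, pull everything back from the backup pool, make it bootable again
zpool create -f -o altroot=/tmp -O mountpoint=none zroot mirror ada0p3 ada1p3
zfs send -R backup/zroot@latest | zfs receive -F zroot
zpool set bootfs=zroot/ROOT/default zroot
reboot
-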
Today started off great!
New 5TiB HDD... Check!
Formatted with ZFS under LUKS, with a high level of compression and dedup... Check!
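Something along these lines, for the record (a sketch -- device and pool names are placeholders, and I'm assuming gzip-9 as the "high" compression level):
# open the disk through LUKS, then build the pool on top of the mapped device
cryptsetup luksFormat /dev/sdX
cryptsetup open /dev/sdX backup_crypt
zpool create -O compression=gzip-9 -O dedup=on backup /dev/mapper/backup_crypt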
Copying over roughly 4TiB of data, about 2 of which was scattered in small files... Coworker unplugged it from AC thinking it was his (they are sort of similar), when the process was almost complete.
Goddammit. zpool scrub.... 6 hours left. It's 9 pm over here, and I'm not a fan of leaving my stuff at work. Goddammit.
...I guess tomorrow is another day. -
Just mirrored sudo to my own Gitea instance yesterday (https://git.ghnou.su/mir/sudo). Turns out that this chonkster is 200MB compressed (LZ4 on ZFS). I am baffled by it... All it needs to do is read a configuration file describing which users can be elevated, to which user, and which commands they can run. Perhaps doas wasn't a bad idea after all?
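For comparison, a usable doas config can literally be one line (a sketch, assuming your admins sit in the wheel group):
# /etc/doas.conf -- let wheel members run commands as root, remembering the password for a while
permit persist :wheel as root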
Oh and it got a privilege escalation vulnerability just yesterday (https://security-tracker.debian.org/...), which is why I got interested in it. Update your sudo packages if you haven't already. -
A few days ago I decided to install Windows 7 on a VM (bad idea as it turned out). All fine and dandy and I ran Windows Update a few times to get it at least as up-to-date as it'll get.
I noticed that out of the 4GB RAM I had allocated, an svchost process responsible for the updates was gobbling up all the available memory, just leaving 82MB for everything else. The process itself was as you might imagine consuming over 3GB RAM just for itself. That's how an OS should work right after installation, I'm sure you'll agree.
So I complained about it. Haven't used Windows anywhere for a while so I wasn't used anymore to this level of efficiency. Disk activity went through the roof, though to be fair the underlying disk wasn't an SSD (qcow2 on ZFS on a spinning drive). RAM consumption is something I already covered. CPU temperature shot up to 95C.
So as any idiot would do, I disabled the service related to that process (the svchost process for wuauserv) and the problem went away. But I complained of course, saying that such amazing system utilization metrics weren't something I expected. I mean, for 4GB allocated, having as much as 82MB usable to get stuff done with! 95C on the CPU, on a lot of chips that's the junction temperature! Absolutely beautiful.
When I complained I heard that I had to replace the thermal grease. I do that twice a year. I wrote a custom fan driver for my system that works absolutely great. It was obviously shit. I must be a horrible sysadmin for solving a problem by eliminating the cause, and companies hiring me must be ashamed of themselves. My hardware must be shit (that's a common one with Windows users) despite being a business laptop and the guest system being a VM. Oh and I'm an idiot of course for complaining about such amazing system metrics in Windows.
I love Windows and its community... -
I was copying data from a failing zfs drive with rsync and I noticed that it spent a long time on the file ~/.local/share/Baloo/index
du -h index showed a 500ish MB file which didn't seem large enough to take this long.
I recalled that du shows disk usage, not file size and since I was using zfs compression they could be quite different.
So I added -A for apparent size:
du -hA index and it comes back with 1.7E
The file was 1.7 exabytes...
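The gap is easy to reproduce with any sparse file (a quick sketch; FreeBSD du and truncate assumed, file name made up):
truncate -s 1T sparse.dat   # allocates no blocks, just sets the length
du -h sparse.dat            # disk usage: next to nothing
du -hA sparse.dat           # apparent size: 1.0T
-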
There are a few email addresses on my domain that I keep on receiving spam on, because I shared them on forums or whatever and crawlers picked them up.
I run Postfix for a mail server in a catch-all configuration. For whatever reason in this setup blacklisting email addresses doesn't work, and given Postfix' complexity I gave up after a few days. Instead I wrote a little bash script called "unspam" to log into the mail server, grep all the emails in the mail directory for those particular email addresses, and move whatever comes up to the .Junk directory.
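The whole "unspam" thing boils down to something like this (a rough sketch -- the addresses and the Maildir path here are made up):
#!/bin/sh
# grep the catch-all Maildir for the spammed addresses and shove every hit into .Junk
MAILDIR=/home/mail/Maildir
for addr in forum-signup@example.com old-crawled@example.com; do
    grep -lr "Delivered-To: $addr" "$MAILDIR/new" "$MAILDIR/cur" 2>/dev/null |
        while read -r msg; do
            mv "$msg" "$MAILDIR/.Junk/cur/"
        done
done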
On SSD it seems reasonably fast, and ZFS caching sure helps a lot too (although limited to 1GB memory max). It could've been a lot slower than it currently is. But I'm not exactly proud of myself for doing that. But hey it works! -
Spent 2 hours migrating my old NAS's Ubuntu ZFS pool to the new FreeBSD NAS, which has new fancy stuff like a crossflashed RAID card, a new hyper-efficient PSU and so on. Sadly, the pool just won't import, many drives are missing. I debug. For hours. Trying to test cables. Interesting. No matter which SATA cables I switch, this one drive always starts... Hm... Must be the controller then. Maybe the controller doesn't spin up the other disks because I removed the boot ROM! That must be it! Wait... Why is this cable lying in here... Wait, this is the power cable attached to all missing driv ARE YOU FUCKING KIDDING ME?! I WASTED SO MUCH FUCKING TIME ON THIS SHIT HOW COULD YOU DO THIS TO ME!
Unfortunately, one power cable had become loose (I don't know how, these cables have plastic thingies to prevent this...), but it works now. And it's better than before. -
Ffs, I just spent the whole weekend setting up our new storage server. Moved it into the rack. Entered the UEFI to enable iDRAC. And BAM! The UEFI decided to load its own RAID config over the RAID controller.
The RAID controller BIOS doesn't let me load its own config after that. So I have to reset the controller and set up RAID, the OS and the whole shot again.
To make it even better, Debian doesn't load the firmware for the Broadcom chip, since it's non-free. Making me have to do lots of manual config after the install just to get it on the internet.
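The manual part being roughly this (a sketch -- I'm assuming the NIC wants the firmware-bnx2x package; dmesg tells you the exact file it's missing):
# enable non-free in the apt sources, then pull the Broadcom firmware
sed -i 's/main$/main contrib non-free/' /etc/apt/sources.list
apt update; apt install firmware-bnx2x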
I wish I could’ve just bought a new server instead of working with this shit.
I would’ve used FreeBSD with ZFS, but our server only has 8GB ram, and I need about 120GB extra to work smoothly with all the storage.
It’s just a pita working with this. One step forward, ten steps back. -
LXC, no doubt.
I mean to be fair, LXC is an amazing container runtime once you manage to set it up. But setting it up is the hard bit. Starting off with LXC 2.x, it was a nightmare to find out how to get things like the storage backends working. But with ZFS it ended up being alright. Find some arcane values to stick in the /etc/lxc/default.conf to use ZFS as the backend and then the default storage location on those ZFS pools (I'll get back to that later), and it worked alright. Again, once it works it's great, but setting it up and finding the right configuration keys is absolute hell.
So, LXC 2.x for a while and a few months ago I finally ended up upgrading to 3.x. Every single configuration key changed. Every single one of them, and that's why I had to 1) learn LXC all over again, and 2) redeploy each and every one of my containers. That process is still not entirely completed. ZFS backend was once again a dive into arcane configuration keys found on forums and whatnot. Yeah.. official documentation has none of it. Oh and in 3.x you now also have to dodge the torrent of "just use LXD m8" messages. Yeah, very helpful when LXD is also the ONLY way to reasonably configure it. Absolutely beautiful. Oh and as far as the ZFS default storage location goes (such as ssd/lxc/ct)? Yeah forget about it. There's no configuration option for it anymore, and the default is "lxc". In ZFS lingo that means that LXC has the audacity to demand a whole pool for itself. No. No you don't deserve a whole pool for yourself. But hey at least you can define the storage location to use in the lxc-create command! Every single time you have to define it in lxc-create. I abstracted it away into my own LXC interface, so no big deal really. But yeah... That could absolutely be better. And in 2.x it was actually better.
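For reference, the per-container workaround looks something like this (a sketch -- container name, dataset and template options are placeholders):
# point the ZFS backing store at a dataset instead of letting LXC claim a whole pool
lxc-create -n mycontainer -t download -B zfs --zfsroot=ssd/lxc -- -d debian -r buster -a amd64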
Oh and btrfs, the filesystem I'd like to use on low memory systems because ZFS' ARC is too much on such systems? Yeah forget about it. I still have no idea how to do it. Thank you LXC and its amazing documentation!
And if you want the icing on the cake for LXC's terrible documentation, see their repo's index page at https://github.com/lxc/lxc/.... Yeah, it's totally still at 2.x... That's how well they maintain that. Even Debian has 3.x now. And if you look at the branches, you'll find that even 4.x is already available and considered stable. -
Learning about Sun Solaris, DTrace, ZFS and SMF after calling the old Sun boxes my colleagues set up old garbage.
These guys were ahead of us by ages.
If you ever wondered where all your RAM goes when your application starts, or why it crashes without further notice, try DTrace.
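Even a classic one-liner tells you a lot -- this one just counts the memory-grabbing syscalls per process (a sketch, nothing fancy):
dtrace -n 'syscall::brk:entry,syscall::mmap:entry { @[execname] = count(); }'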
If you ever wondered what a sophisticated, reliable init system would look like, look at the SMF init system.
If Oracle would finally open source all the old Sun stuff and other companies would start using the illumos distros, the world would be a better place.
That's where the Sun people went after Oracle bought Sun and started pissing off its devs. -
I hate the Elasticsearch backup API.
From beginning to end it's a painful experience.
I try to explain it, but I don't think I will be able to cover it all.
The core concept is:
- repository (storage for snapshots)
- snapshots (actual backup)
The first design flaw is that every backup in a repository is incremental. ES creates an incremental filesystem tree.
Some reasons why this is a bad idea:
- deletion of (older) backups is slow, as newer backups need to be checked for integrity
- you simply have to trust ES that it does the right thing (given the bugs it has... It seems like a very bad idea TM)
- you have no possibility of verification of snapshots
Workaround... Create many repositories, as each new repository forces a full backup.........
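In practice that means something like one repository per month, so each month starts with a full snapshot again (a sketch -- names and paths are made up, and the location has to be whitelisted via path.repo in elasticsearch.yml):
curl -XPUT 'http://localhost:9200/_snapshot/backup_2020_06' \
  -H 'Content-Type: application/json' \
  -d '{ "type": "fs", "settings": { "location": "/mnt/backups/backup_2020_06" } }'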
The second thing: ES scales. Many nodes / ES instances form a cluster.
Usually backup APIs incorporate these in their design. ES does not.
If an index spans 12 nodes and you use network storage, yes: a maximum of 12 nodes will open e.g. an NFS connection and start backing up.
It might sound not so bad with 12 nodes and one index...
But it gets pretty bad with hundreds of indices and several dozen nodes...
And there is no real limiting in ES. You can plug a few holes, but all in all, when you don't plan your backups carefully, you'll get some pretty f*cked up network congestion.
So traffic shaping must be manually added. Yay...
The last thing is the API itself.
It's a... very fragile thing.
Especially in older ES releases, the documentation is like handing you a flex instead of toilet paper for a wipe.
Documentation != API != Reality.
Especially the fault handling left me more than once speechless...
Eg:
/_snapshot/storage/backup
gives you a state PARTIAL
/_snapshot/storage/backup/_status
gives you a state SUCCESS
Why? The first one is blocking and refers to the backup status itself. The second one shouldn't be blocking and refers to the backup operation.
And yes. The backup operation state is SUCCESS, while the backup state might be PARTIAL (i.e. no full backup was made, there were errors).
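Side by side it looks roughly like this (a sketch -- repository and snapshot names from above, output trimmed with jq, exact JSON shape varies by ES version):
# snapshot info: the snapshot itself is only PARTIAL
curl -s 'http://localhost:9200/_snapshot/storage/backup' | jq '.snapshots[0].state'
# "PARTIAL"
# snapshot status: the operation reports SUCCESS
curl -s 'http://localhost:9200/_snapshot/storage/backup/_status' | jq '.snapshots[0].state'
# "SUCCESS"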
So we now have an additional API that we query, which then wraps the Elasticsearch API. With all these shiny scary workarounds like polling, since some calls are blocking, which might lead to a gateway timeout...
Gateway timeout? Yes. Since some operations can run a LONG time (multiple hours) and you don't want to have a ton of open connections hogging resources... You let the load balancer kill them. Most operations simply keep running in the background in ES while the connection gets killed.
So much joy and fun, isn't it?
Now add the latest SMR scandal and a few faulty (as in SMR instead of CMR) HDDs in a hundred-terabyte ZFS pool and you'll get my frustration level.
PS: The cluster has several dozen terabytes and a lot of nodes. If you have good advice, you're welcome - but please think carefully about this fact.
I might have accidentally vaporized people sending me links with solutions that don't work on large scale TM. -
Recovering a RAID 1 EXT4 disk to a ZFS disk. Thank god for Adobe programs and their multi-million-file caches.
recovery_time.add(4.hours) -
Restarted my ZFS NAS today, which last time corrupted one drive because of this changing /dev/sd? naming. Now I've replaced the device paths with IDs and rebooted. Enough adrenaline for today.
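For anyone else bitten by that: the switch is just an export and a re-import (a sketch; the pool name is a placeholder):
zpool export tank
zpool import -d /dev/disk/by-id tank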
-
So a couple of months ago I had some stability issues which seem to have caused Baloo to go crazy and create a 1.7 exabyte index file. It was apparently mainly empty, as ZFS compressed it down to 535MB.
Today I spent some time trying to reproduce the "issue" and turns out that wasn't that hard.
So this little program running on FreeBSD with a compressed (lz4) ZFS dataset creates a 1.9 exabyte file, nicely compressed down to 45KB :)
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/limits.h>

int main(int argc, char** argv) {
    int fd = open("bigfile.lge", O_RDWR|O_CREAT, 0644);

    /* seek forward INT_MAX bytes a billion times: ~2^31 * 10^9 ~= 1.9 EiB,
     * without ever writing anything at those offsets */
    for (int i = 0; i < 1000000000; i++) {
        lseek(fd, INT_MAX, SEEK_CUR);
    }

    /* one real byte at the very end turns it into a gigantic sparse file */
    write(fd, " ", 1);
    close(fd);
}
-
I spent the Easter weekend migrating a bare-metal Windows installation to a Proxmox VE server with a Windows VM (and set up an RDP client).
-
I'm easily a DigitalOcean fan, though I have heard horror stories, so I might set up a system to do regular backups.
I'm considering migrating my current server to something FreeBSD-based, so I can easily do ZFS snapshots, and even code on my machine at home and just send the jail as a snapshot. Like Docker, but different.
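The workflow I have in mind is basically this (a sketch -- dataset, snapshot and host names are made up):
# snapshot the jail's dataset at home and ship it to the remote box
zfs snapshot zroot/jails/webapp@2019-06-01
zfs send zroot/jails/webapp@2019-06-01 | ssh myserver zfs receive -F tank/jails/webapp
-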
Unraid, you piece of lovely SHIT...
I love that it has this really easy expandable storage pool, and the ease of installing plugins...
Plex runs perfect on it... so does sonarr (mostly)...
but why the loving FUCK did it have to crash every. 4. fucking. days.
oh... wait... I'm fucking retarded...
the USB stick I use isn't 32GB... it's 64...
fuck...
FUCK THIS!
IM FUCKING OUT OF HERE!
Oh, and don't get me started on ZFS...
Please, use RAID instead of ZFS if you have a NAS... don't use ZFS... it wasn't made for this... it was made to run in enterprise environments... hell, even THE Enterprise... -
For me that would be Proxmox. I know, people like it - but for no apparent reason it decided to nuke half my ZFS datasets in a pool, with no logic behind it whatsoever. All disks were tested, all came out good. Within the same pool there were datasets that were lost and some that remained.
I really don't get it. Looking at Proxmox' source code, it's more or less the command line tools and then there's the web interface (e.g. https://github.com/proxmox/...). Oh and they have the audacity to use their own file extension. Why not I guess?
Anyway, half my data was gone. I couldn't tell how or why or what the fuck even happened there. But Proxmox runs Debian underneath and I've been rather pissed about Proxmox' idea of "don't touch the host system aaa" for a while at that point. So I figured, fuck it I'll just take pure Debian then and write my own slightly better garbage on top of that. And as such the distribution project was born. I've been working on it for a little over a year now. And I've never had such issues again.
I somewhat get the idea of "don't touch the host" now, but still not quite. Yes, the more you do in the containers, the better. And the less you do on the host in terms of reconfiguration, the longer it will stay alive for. That goes for any system - more reconfiguration usually means less stability and makes it harder to replace. But sometimes you just have to work from the host. Like say migrating a container between hosts, which my code can do. You can't do that from a container, at all. There are good reasons to work with the host. Proxmox isn't telling that. Do they expect their users to be idiots? Only enterprise sysadmins amirite?
So yeah, that project - while I do take inspiration from it in mine - I don't like it. It's enterprise, it has the ZFS and the Ceph and the LXC and the VMs - woohoo! Not like anyone could implement that on a base Debian system. But they have the configuration database (pmxcfs), a distributed configuration database a couple of MB large and capped there, woah!
Ok sure it isn't Microsoft or IBM or Oracle or whatever, and those are definitely worse. But those are usually vendor lock-ins.. I avoid those on that premise alone :) -
Had a NAS with a single 3TB Seagate HDD in it.
It ran well for half a year and it was my main backup and a time machine for my dad.
The time came that my budget allowed a second drive for redundancy, so I powered it off, added the second drive and powered it back on.
😐😓😧😭
The drive did indeed die and yes, it was one of those drives with an extremely high failure rate.
My dad was pretty mad that his backups were gone even though he didn't need them.
So my biggest lesson from this was to always encrypt such drives, because dad's backup wasn't encrypted and my files and such weren't either, so someone could restore our whole lives from the drive.
So I can't RMA that fucker.
ZFS at-rest encryption FTW!
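(On current OpenZFS that's just a couple of dataset properties -- a sketch, pool and dataset names made up:)
zfs create -o encryption=aes-256-gcm -o keyformat=passphrase -o keylocation=prompt tank/backups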
By the way, writing this I noticed that I didn't need to power the NAS down to add the second drive....
Ffffffffuuuuuuuuucccckkkkkk.
Another more recent thing was a refurb 4TB WD Red that I bought used for a bargain.
It reported 2 unwritable sectors, but for that money I didn't care.
After about a month, it died.
The interesting part is how it died.
It spins up, gets detected, you can access the data.
You can copy the data.
But after a few moments of continuous load, all operations start timing out and the drive either disconnects completely or the zpool degrades and shuts down.
In the first case, replugging brings the drive back until it does it again.
On zpool degradation only a reboot brings it back.
Put a fan on it in case it was overheating but that didn't fix it. -
I just went through a super long debugging process trying to figure out what was going on with my ZFS volumes. It turned out I had bad memory:
https://battlepenguin.com/tech/... -
Did you ever use ZFS?
I want to build a Linux setup with some cool new stuff I haven't used so far, and ZFS is among them.