
So we found an interesting thing at work today...

Prod servers had 300 GB+ tied up in deleted-but-still-open files. Some containers had deleted the files, but we think the processes inside kept open handles to them, so the space was never released.

300 GB of ‘ghost’ space was in use, and `du` wasn't helping us find where it had gone.

This is probably a more common issue than I realize, as I'm still fairly new to Linux. But we got it figured out with:

`lsof / | grep deleted`
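
For anyone curious, a rough sketch of how you can tally the output (column positions can vary between lsof versions, so treat the awk sum as an approximation; `+L1` lists open files whose on-disk link count has dropped below 1, i.e. unlinked files):

```sh
# Deleted-but-still-open files on the root filesystem; NAME ends in "(deleted)"
lsof / | grep '(deleted)'

# Rough tally of how much space is stuck (SIZE/OFF is usually field 7)
lsof / | grep '(deleted)' | awk '{sum += $7} END {printf "%.1f GiB\n", sum / 1024^3}'

# Alternative that skips the grep: only open files with link count < 1
lsof +L1
```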

Comments
  • 1
    Learned something new today
  • 2
    Yeah, open file handles and such keep the space in those files from being freed...
  • 5
    Docker does love leaving its clutter everywhere.
  • 4
    Do not delete files that are open in Linux or Unix.

    Windows locks such files and forbids deletion. That's annoying. *nix allows you to delete open files... in a way...

    In ext* filesystems, an inode and its data blocks aren't freed as long as at least one reference to the file remains. Normally that's a single hard link [the usual case], say ~/myFile. When a process opens the file for writing, the kernel takes another reference to the inode: the open file descriptor held by that process. As long as the process is running, or at least keeps the file open, that reference keeps the inode and its blocks alive. So the ~/myFile link may be gone, but the space on your disk will not be freed up.
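
    You can watch this happen with a throwaway file (paths are just an example; needs a bash shell and about 1 GiB of free space):

    ```sh
    dd if=/dev/zero of=/tmp/bigfile bs=1M count=1024   # make a 1 GiB file
    exec 3< /tmp/bigfile                               # keep a descriptor open in this shell
    rm /tmp/bigfile                                    # unlink the only path to it
    df -h /tmp                                         # the 1 GiB is still allocated
    lsof +L1 | grep bigfile                            # the shell shows up holding it, "(deleted)"
    exec 3<&-                                          # close the descriptor...
    df -h /tmp                                         # ...and the space comes back
    ```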

    There are two ways out. Okay, 3.
    - stop the process
    - truncate the removed file through the process's handle (see the sketch further down)
    - gdb into the process and manually close that fd

    In general... don't remove open files. Better to truncate them.
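
    A rough sketch of the last two options (PID 1234 and fd 3 are placeholders; read the real ones from lsof's PID and FD columns, where e.g. "3w" means fd 3):

    ```sh
    # Find who's holding the deleted file: note the PID and FD columns
    lsof +L1

    # Truncate the deleted file through the process's own handle:
    # the blocks are freed, but the process sees its file shrink to 0 bytes
    truncate -s 0 /proc/1234/fd/3

    # Or, if gdb can attach, close the descriptor inside the running process
    gdb -p 1234 -batch -ex 'call (int)close(3)'
    ```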