Press "Enter" to skip to content

After Deleting Files, the Available Capacity of an XFS Partition Does Not Increase?

Today a colleague reported that after deleting some large files from a partition, the Avail figure shown by df -hT for that partition did not increase. My first reaction was that XFS must be to blame, and after some Googling it turned out that this is indeed an XFS filesystem behaviour.
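The symptom can be reproduced with nothing more than df; the /data mount point and the file name below are made up for illustration:

    df -hT /data                  # note the xfs type and the Avail column
    rm /data/big-dump.tar.gz      # delete a large file
    sync                          # flush dirty data, just to rule that out
    df -hT /data                  # Avail may still look unchanged, as described above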

XFS is designed to support highly parallel operation and massive dynamically expanding file system sizes. Both are supported by dynamic allocation of inodes.

XFS dynamically allocates space for inodes as they are created; this is different from many other filesystems, where inode space is statically allocated at mkfs time. While inode space is dynamically allocated, it is never freed – up until now, that is.

Most filesystem types allocate all of their inode space statically at mkfs time. XFS instead allocates inode space dynamically, and when a file is deleted the now-empty inode space is not released.
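This can be seen on disk by inspecting the inode counters. A rough sketch, assuming the filesystem sits on the hypothetical device /dev/sdb1 and is mounted at /data; xfs_db is opened read-only and is best run against an unmounted or quiesced filesystem:

    df -i /data                   # IUsed drops after the deletion, IFree grows
    xfs_db -r -c "sb 0" -c "print icount" -c "print ifree" /dev/sdb1
                                  # icount: inodes allocated on disk; ifree: free inode slots
                                  # with ikeep, icount stays high even after files are deleted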

When ikeep is specified, XFS does not delete empty inode clusters and keeps them around on disk. ikeep is the traditional XFS behaviour. When noikeep is specified, empty inode clusters are returned to the free space pool. The default is noikeep for non-DMAPI mounts, while ikeep is the default when DMAPI is in use.
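Whether a mounted partition is currently using ikeep or noikeep can be checked in /proc/mounts (an explicitly set ikeep shows up in the option list), and switching requires a remount. The device and mount point below are hypothetical:

    grep ' /data ' /proc/mounts           # look for ikeep among the mount options
    umount /data                          # switching the option requires a remount
    mount -o noikeep /dev/sdb1 /data      # or -o ikeep, depending on the disk type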

Fragmentation and things being “scattered around the disk more” cause significant performance problems for traditional spinning-platter disks. However, on an SSD, those things are no problem at all. Conversely, deleting disk blocks is a significant performance problem on SSDs while being no problem for spinning disks. So while the patch making it the default to delete empty inodes was a performance improvement for spinning disks, it actually made performance worse on SSDs. Hence the recommendation to use ikeep on SSDs.

On a traditional spinning disk, fragmentation causes noticeable performance problems, but it hardly affects an SSD. Conversely, deleting disk blocks hurts performance on an SSD while costing a spinning disk nothing. In other words, freeing empty inodes improves performance on spinning disks but degrades it on SSDs, hence the recommendation to use the ikeep option on SSDs and noikeep on spinning disks.
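To make that choice persistent, the option can go into /etc/fstab. A minimal sketch with made-up device names and mount points, assuming a kernel that still accepts these options:

    # /etc/fstab
    /dev/sdb1   /data-ssd   xfs   defaults,ikeep     0 0
    /dev/sdc1   /data-hdd   xfs   defaults,noikeep   0 0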

Because of this XFS behaviour, other odd symptoms can show up as well. Just yesterday, for example, I ran into a case where two files had identical md5 and sha1 checksums yet different sizes; presumably that was also caused by empty inodes not being freed.
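Such a case can be narrowed down with standard tools (the file names here are made up). Since matching md5 and sha1 checksums imply identical byte contents, it is worth comparing the apparent size with the allocated block count, which can legitimately differ:

    md5sum  file_a file_b
    sha1sum file_a file_b
    stat -c '%n: %s bytes, %b blocks' file_a file_b   # apparent size vs. allocated blocks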
