Disk at 100% but du Shows Little: Deleted Files That Don't Free Space

Author: PQ · March 24, 2026 · 4 min read

df -h screams the disk is full. du -sh /* adds up to maybe half that. The space is taken — but by what exactly is unclear. This is the classic deleted-file situation: a running process holds a file descriptor open, the file has no name in the filesystem anymore, but the blocks on disk remain reserved until the last descriptor closes.

Why df and du Disagree

du counts space used by files the filesystem can see — files that have a name in a directory. df counts space used on the block device — including blocks reserved for files with no name.

When a file is deleted with rm, its directory entry is removed, but the disk blocks are not freed until every process holding an open file descriptor to it closes that descriptor. Nginx writes to a log — rm /var/log/nginx/access.log runs — but Nginx is still writing to the same file through its open descriptor. The blocks are occupied, du cannot see the file, df sees the used space.
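The effect is easy to reproduce in a throwaway directory (a minimal sketch; the file name and size are arbitrary):

```shell
# Create a disposable playground.
cd "$(mktemp -d)"

# Write a 50 MB file and keep it open on descriptor 3.
dd if=/dev/zero of=ghost.log bs=1M count=50 status=none
exec 3<> ghost.log

# Remove its name: du can no longer see it...
rm ghost.log
du -sh .              # ~0, nothing named remains

# ...but the kernel still shows the open, deleted file:
ls -l /proc/$$/fd/3   # -> ghost.log (deleted)

# Only closing the last descriptor returns the blocks to the filesystem.
exec 3>&-
```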

Confirm This Is the Problem

Compare what df reports with what du counts:

df -h /
du -sh /* 2>/dev/null | sort -rh | head -20

If df shows 95%+ while the du total is significantly lower, deleted-but-open files are the likely cause.

Quick check with lsof:

lsof | grep deleted | head -20

If the list is not empty, those are the culprits. The SIZE/OFF column (the seventh) shows how many bytes each one is holding.
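An alternative that avoids the grep: lsof can select open files by link count, and a deleted-but-open file has zero links:

```shell
# +L1 lists open files with fewer than one link,
# i.e. files that have been unlinked but are still held open.
lsof +L1

# Limit the scan to one filesystem (the full one) to cut noise:
lsof +L1 /
```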

Find All Deleted Files Occupying Space

Full list sorted by size:

lsof | grep deleted | awk '{print $7, $1, $2, $9}' | sort -rn | head -20

Output: size in bytes, process name, PID, file path.

Find only large ones — over 100 MB:

lsof | grep deleted | awk '$7 > 104857600 {print $7/1024/1024 " MB\t" $1 "\tPID:" $2 "\t" $9}'
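A per-process summary often tells you faster whom to restart. A sketch, aggregating field 7 (lsof's SIZE/OFF column, in bytes) by process name:

```shell
# Sum deleted-but-open space per process name.
# The numeric guard skips header lines and offset-style SIZE/OFF values.
lsof 2>/dev/null | grep deleted \
  | awk '$7 ~ /^[0-9]+$/ {sum[$1] += $7}
         END {for (p in sum) printf "%.1f MB\t%s\n", sum[p]/1048576, p}' \
  | sort -rn
```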

Free the Space by Restarting the Process

The fastest option is to restart the process holding the file: the old descriptor closes, the blocks are freed, and the process opens a fresh file.

For Nginx (reopens its log files without a full restart):

sudo nginx -s reopen

For rsyslog (holds system logs):

sudo systemctl restart rsyslog

For any systemd service:

sudo systemctl restart service_name

Free the Space Without Restarting: via /proc

If the process is critical and cannot be restarted — truncate the file through its descriptor in /proc. This zeroes the content without closing the descriptor.

Find the PID and descriptor number:

lsof | grep deleted | grep nginx
nginx  1234  www-data  5w  REG  ...  2147483648  /var/log/nginx/access.log (deleted)

Here: PID = 1234, descriptor = 5.

Truncate the file through /proc:

> /proc/1234/fd/5

Or with truncate (handy under sudo, since plain shell redirection does not run with sudo's privileges):

truncate -s 0 /proc/1234/fd/5

Space is freed immediately. The process continues writing through the same descriptor — into the now-empty file.
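The same idea can be applied to every deleted file a single process holds open. An illustrative sketch only: PID=1234 is a placeholder for the real PID from lsof, other PIDs require root, and the truncated content is gone for good, so check the list first.

```shell
# Truncate every deleted-but-open file belonging to one process.
# PID is a placeholder; substitute the value reported by lsof.
PID=1234
for fd in /proc/"$PID"/fd/*; do
  target=$(readlink "$fd")
  case "$target" in
    *" (deleted)")
      # Opening the /proc symlink with O_TRUNC empties the real file.
      : > "$fd"
      echo "truncated $fd ($target)"
      ;;
  esac
done
```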

Other Reasons df and du Disagree

Space reserved for root. ext4 reserves 5% of the blocks for the superuser by default. On a 100 GB disk that is 5 GB that df excludes from the available space but du will never account for.

Check the reservation:

tune2fs -l /dev/sda1 | grep "Reserved block"

Reduce the reserve to 1% on non-system disks:

sudo tune2fs -m 1 /dev/sda1
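To see what the reservation actually costs, the two relevant tune2fs fields can be combined (the device path is the article's example; substitute your own):

```shell
# Multiply the reserved block count by the block size to get bytes.
tune2fs -l /dev/sda1 \
  | awk -F: '/^Reserved block count/ {r=$2} /^Block size/ {b=$2}
             END {printf "%.2f GB reserved for root\n", r*b/1024^3}'
```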

Sparse files. Files with holes — VM disk images, databases. du counts real blocks, ls -l shows the logical size. The difference can be enormous.

ls -lsh /var/lib/libvirt/images/

The first column is the real size on disk, the sixth is the logical size.
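truncate itself makes the gap easy to demonstrate (a disposable demo; the file name is arbitrary):

```shell
cd "$(mktemp -d)"

# A file with a 1 GB logical size and (almost) no allocated blocks.
truncate -s 1G sparse.img

ls -lh sparse.img   # logical size: 1.0G
du -h sparse.img    # actual blocks on disk: ~0
```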

A tmpfs is full, not the root filesystem.

df -h | grep tmpfs

/dev/shm, /run, /tmp mount separately. If /run is packed with sockets or /tmp holds large files — they will not appear in du /.

Prevention: logrotate and Monitoring

Most deleted-open-file situations come from logs. logrotate rotates files by default but does not signal the service to reopen them unless postrotate is configured.

Check the logrotate config for Nginx:

cat /etc/logrotate.d/nginx

It should contain:

postrotate
    /usr/sbin/nginx -s reopen
endscript

Or sharedscripts + killall -USR1 nginx. Without this Nginx keeps writing to the rotated (and potentially deleted) file.
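Put together, a working config might look like this (paths, rotation count, and schedule are examples, not Nginx packaging defaults):

```
/var/log/nginx/*.log {
    daily
    rotate 14
    compress
    delaycompress
    missingok
    notifempty
    sharedscripts
    postrotate
        /usr/sbin/nginx -s reopen
    endscript
}
```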

Monitor deleted file accumulation with watch:

watch -n 30 'lsof | grep deleted | awk "{sum+=\$7} END {print sum/1024/1024 \" MB held by deleted files\"}"'

Quick Reference

Task                            Command
Compare df vs du                df -h / && du -sh /* 2>/dev/null | sort -rh | head -10
Find deleted open files         lsof | grep deleted
Top by size                     lsof | grep deleted | awk '{print $7, $1, $2}' | sort -rn | head -10
Truncate file without restart   truncate -s 0 /proc/PID/fd/N
Reopen Nginx logs               sudo nginx -s reopen
Check ext4 reserved blocks      tune2fs -l /dev/sda1 | grep Reserved
Reduce reserve to 1%            sudo tune2fs -m 1 /dev/sda1
Check tmpfs usage               df -h | grep tmpfs
