Root Filesystem Full on our Backup Server

This morning I arrived to find our backup server's root file system full. This was very strange, as the backups all go onto a second btrfs disk which has plenty of space left.

The Problem

The root file system was using the full ~40 GB of disk, even though the backup process is a single Python script using rsync and btrfs snapshots, and those backups go onto a second disk, not the root filesystem.
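For context, the nightly job boils down to something like this (a rough sketch, not the actual script, which is Python; the source path and subvolume layout here are my assumptions):

# sync the live data into a writable btrfs subvolume (paths are hypothetical)
rsync -a --delete /srv/data/ /mnt/btrfs/current/
# freeze tonight's state as a read-only snapshot
btrfs subvolume snapshot -r /mnt/btrfs/current /mnt/btrfs/snapshots/$(date +%F)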

root@backup:/# df
Filesystem                   1K-blocks       Used Available Use% Mounted on
udev                           4068220          0   4068220   0% /dev
tmpfs                           817520      82784    734736  11% /run
/dev/mapper/ubuntu--vg-root   39875172   39452220         0 100% /
tmpfs                          4087588          0   4087588   0% /dev/shm
tmpfs                             5120          0      5120   0% /run/lock
tmpfs                          4087588          0   4087588   0% /sys/fs/cgroup
/dev/vda1                       240972      57414    171117  26% /boot
/dev/sda                    1572864000 1217936940 352567252  78% /mnt/btrfs
cgmfs                              100          0       100   0% /run/cgmanager/fs
tmpfs                           817520          0    817520   0% /run/user/1000

I ran apt-get autoremove --purge to see if there was anything left behind, but it removed nothing. /tmp and /home were fine too.
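The checks were nothing fancier than this (the exact du invocation is mine, from memory):

apt-get autoremove --purge   # removed nothing
du -sh /tmp /home            # both tiny, nothing to reclaim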

I then started walking the filesystem with du -sk to find the largest directories.

$ cd /
$ du -sk *
...
0           sys
68          tmp
887304      usr
16751520    var
0           vmlinuz
$ cd /var/lib
$ du -sk *
...
4           man-db
4           misc
15809208    mlocate
48          nssdb
8           ntp

This showed that mlocate's database was eating the disk: over 15 GB in /var/lib/mlocate.
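On Ubuntu the database is a single file, /var/lib/mlocate/mlocate.db, so you can confirm it directly:

ls -lh /var/lib/mlocate/mlocate.db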

Our backup system creates a btrfs snapshot every night, so from mlocate's point of view each snapshot is a fresh copy of the entire backup tree, and updatedb was adding our whole backup's worth of file paths to the database every night.
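The arithmetic is unforgiving: each read-only snapshot re-exposes every backed-up path under a new directory, so the database gains a full copy of the file list per snapshot. Two quick ways to see the scale (assuming the snapshots are subvolumes under /mnt/btrfs):

# count the snapshots updatedb has to walk
btrfs subvolume list /mnt/btrfs | wc -l
# ask mlocate how many files its database currently holds
locate -S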

The Solution

I simply had to add our /mnt/btrfs directory to PRUNEPATHS in /etc/updatedb.conf so that updatedb skips it entirely. Below is PRUNEPATHS before and after my change.

PRUNEPATHS="/tmp /var/spool /media /home/.ecryptfs /var/lib/schroot"
PRUNEPATHS="/mnt/btrfs /tmp /var/spool /media /home/.ecryptfs /var/lib/schroot"

After running updatedb again, disk usage dropped to this:

root@backup:~# df
Filesystem                   1K-blocks       Used Available Use% Mounted on
udev                           4068220          0   4068220   0% /dev
tmpfs                           817520      82784    734736  11% /run
/dev/mapper/ubuntu--vg-root   39875172    7654340  30509764  21% /
tmpfs                          4087588          0   4087588   0% /dev/shm
tmpfs                             5120          0      5120   0% /run/lock
tmpfs                          4087588          0   4087588   0% /sys/fs/cgroup
/dev/vda1                       240972      57414    171117  26% /boot
/dev/sda                    1572864000 1217936940 352567252  78% /mnt/btrfs
cgmfs                              100          0       100   0% /run/cgmanager/fs
tmpfs                           817520          0    817520   0% /run/user/1000

TL;DR: /var/lib/mlocate/mlocate.db was massive because updatedb was indexing every nightly btrfs snapshot; adding the backup directory /mnt/btrfs to the PRUNEPATHS variable in /etc/updatedb.conf and re-running updatedb fixed it.
