A full disk is one of the most common issues on Linux servers, and here I am discussing some tips for resolving it. When the disk is full, users and applications can no longer write files, which results in service outages. Services such as MySQL and Apache write data to disk constantly, so a full disk affects them immediately. Static sites will keep working, since they write nothing, but even those services will fail if we perform a restart, since the restart cannot write the service's state back to the file system.
Fixing the problem when the disk is full
- Compress uncompressed logs and other files with gzip, bzip2, or tar
- Delete unwanted files
- Truncate large log files
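Truncating in place (rather than deleting) keeps the file handle valid for any process still writing to the log, so the service does not need a restart. A minimal sketch, using a hypothetical /tmp/example.log stand-in:

```shell
# Create a sample "log" to demonstrate (hypothetical path, for illustration only)
printf 'old log data\n' > /tmp/example.log

# Truncate in place: the file stays open for any writing process,
# so services keep logging without a restart
: > /tmp/example.log

# Verify the file is now empty
wc -c < /tmp/example.log
```

On a real server you would run `: > /path/to/big.log` against the actual log; `truncate -s 0 file` does the same job.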
How to find the big files that consume the most disk space
The usual way is to use the du command starting from the root (/). First run “du -sch /*”, then run the same command inside whichever directory consumes the most space, and keep drilling down. But this takes time on bigger disks (2 TB or 3 TB), so we can also use some other methods to find the usage.
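The drill-down pattern above can be sketched on a scratch tree; the /tmp/duscan paths below are hypothetical, created only to demonstrate how the largest directory floats to the top:

```shell
# Build a small demo tree (hypothetical paths, for illustration only)
mkdir -p /tmp/duscan/big /tmp/duscan/small
head -c 1048576 /dev/zero > /tmp/duscan/big/file    # ~1 MiB
head -c 1024    /dev/zero > /tmp/duscan/small/file  # ~1 KiB

# Same pattern as on a real server: summarize each entry,
# sort human-readable sizes largest first, inspect the top hits
du -sh /tmp/duscan/* | sort -rh | head
```

On a real server the starting point would be `du -sh /*`, then the same pipeline inside the biggest directory; `sort -rh` understands the K/M/G suffixes that `du -h` emits.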
Check the size of the log directory and adjust the logrotate configuration if too many archives are being kept on the server.
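Reducing the number of archives kept is usually a one-line change in the logrotate configuration. A sketch of such a file; the path, log pattern, and values below are hypothetical:

```
# /etc/logrotate.d/myapp — hypothetical example
/var/log/myapp/*.log {
    weekly
    rotate 4          # keep only 4 archives instead of dozens
    compress          # gzip old archives to save space
    missingok
    notifempty
}
```

Lowering the `rotate` count and enabling `compress` are the two settings that most directly reduce how much space old logs occupy.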
Packages downloaded for installation are normally kept under “/usr/local/src”, and this location can consume a good amount of space.
Check the backup usage and adjust the backup configuration based on the available disk space.
Tar files on the server
Searching the whole server for “tar.gz” or “tar.bz” files gives a list of archives. After updating the locate database, the following commands list those files along with their sizes.
locate "tar.gz" | grep -v ' ' | xargs du -sch | sort -h | grep 'M\|G'
locate "tar.bz" | grep -v ' ' | xargs du -sch | sort -h | grep 'M\|G'
In the commands above, we skip file names containing spaces; alternatively, you can collect the list of files with spaces in their names and check them manually.
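If locate is unavailable or its database is stale, find can apply the same size filter directly. A sketch on a hypothetical /tmp/tarscan tree, built only to demonstrate the filter:

```shell
# Demo tree with one large and one tiny archive (hypothetical paths)
mkdir -p /tmp/tarscan
head -c 2097152 /dev/zero > /tmp/tarscan/big.tar.gz    # ~2 MiB
head -c 512     /dev/zero > /tmp/tarscan/small.tar.gz  # tiny

# Only archives larger than 1 MiB survive the -size filter
find /tmp/tarscan -name '*.tar.gz' -size +1M -exec du -sh {} +
```

On a real server the search root would be `/` (with `-xdev` to stay on one filesystem), and unlike locate, find handles file names with spaces without any extra filtering.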
Sometimes an error_log can grow huge and fill the disk. You can use the following command to list such files with their sizes; I have seen error_log files of 200 GB on some servers. The solution is to resolve the cause of the errors and then delete the file. Deleting the file without fixing the error is of no use, since it will simply grow back.
locate "error_log" | grep -v ' ' | xargs du -sch | sort -h | grep 'M\|G'
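Before clearing a large error_log, it helps to see which message is flooding it, since that points to the error that must be fixed first. A sketch with a hypothetical /tmp/error_log and made-up messages:

```shell
# Hypothetical demo log, for illustration only
printf 'PHP Warning: foo\nPHP Warning: foo\nPHP Fatal error: bar\n' > /tmp/error_log

# Count duplicate lines and show the most frequent one —
# usually the message that is actually filling the disk
sort /tmp/error_log | uniq -c | sort -rn | head -1
```

On a real server you would run the same `sort | uniq -c | sort -rn` pipeline (or just `tail` it) against the actual error_log path before emptying it.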
On servers with cPanel/WHM, the users with high disk usage can be detected easily by listing the accounts under “List Accounts”. Running disk usage commands across the whole server takes time, so follow these steps to quickly identify the users consuming the most disk space and resolve the issue.
- Log in to WHM
- Go to “List Accounts”
- On the right side, there is a column labelled “Disk Usage”
- Click “Disk Usage” to sort by usage (click again to reverse the order)
- Collect the usernames from the list
- Go to the command line and find where the space is being consumed, or open cPanel for the respective user and check its disk usage analysis
- Check the .trash directory in particular
- Check mail: the default mail account may accumulate lots of default alerts and messages that the user never reads
- Check for any account backups
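From the command line, the check in the steps above boils down to one du pipeline over the user's home directory, including dotfiles so .trash is not missed. A sketch on a hypothetical /tmp/home/alice layout, built only to demonstrate:

```shell
# Demo "home directory" (hypothetical user layout, for illustration only)
mkdir -p /tmp/home/alice/mail /tmp/home/alice/.trash
head -c 3145728 /dev/zero > /tmp/home/alice/mail/inbox      # ~3 MiB of mail
head -c 1048576 /dev/zero > /tmp/home/alice/.trash/old.zip  # ~1 MiB in trash

# Summarize every entry, dotfiles included, largest first;
# on a real server replace /tmp/home/alice with /home/<username>
du -sh /tmp/home/alice/* /tmp/home/alice/.[!.]* 2>/dev/null | sort -rh
```

The `.[!.]*` glob picks up hidden entries such as .trash, which plain `*` would skip.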
I hope this helps system administrators resolve the issue without spending too much time. If you notice any errors or mistakes, please get in touch with me so I can correct them. You can also comment with useful commands that are not mentioned here.