linux - Filling up the hard drive until certain amount of space is left?
2014-04
I'm doing some testing of a product on Apple and Android devices. I'd like a script that will fill up a device's hard drive until only 100KB of space is left on the drive it runs against. On Linux systems I use dd if=/dev/zero of=zeros bs=1M to fill up the drive entirely, but how do I make it fill the drive until only a certain amount of space is left? I'd like this both for the Linux command line and as a batch script for Windows. Examples of a shell script and a Windows batch script would be best! Thanks!
Do you need to write a real file, or are dummy files sufficient? The dd method you're using is going to be pretty slow for large drives.
In Windows, you can use fsutil file createnew <filename> <length_in_bytes>, which will create a filler file. Here's a TechNet page with some more details.
In Linux, fallocate works similarly, e.g. fallocate -l 10G 10gig_filler. Here's a previous SU question on creating filler files, as well as a slightly more technical version on Stack Overflow.
In Linux you could do something like this to leave only 1M*:
left=1; avail=$(df --output=source,avail -BM | grep sda6 |
awk '{print $NF}' | sed 's/M//i'); size=$((avail - left));
fallocate -l "${size}M" filler_file
The trick is parsing df to get the space available and then using fallocate to create a file of the necessary size. However, as @jjlin pointed out, the fallocate call will not work on all filesystems.
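For example, here's a minimal sketch of a fallback for such filesystems (GNU coreutils assumed): try fallocate first, and if the filesystem rejects it, write the bytes with head instead; either path produces a file of the requested size.

```shell
# Create a 1M filler file: fallocate where supported, head otherwise.
# (1M is an example size; FAT and some network filesystems are typical
# cases where fallocate fails and the slower head path is taken.)
outfile=$(mktemp)
fallocate -l 1M "$outfile" 2>/dev/null || head -c 1M /dev/zero > "$outfile"
stat -c %s "$outfile"    # prints 1048576 either way
rm -f "$outfile"
```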
You can turn the little script above into a function to make it easier to use, and also have it fall back to an alternative method of creating the file on filesystems that do not support fallocate (though fallocate is much faster and should be preferred where possible). Just add these lines to your ~/.bashrc (or the equivalent for other shells):
fill_disk(){
## The space you want left
left=$1
## The unit you are using: K, M, G, T, P, E, Z, Y (powers of 1024) or
## KB, MB, ... (powers of 1000).
unit=$2
## The target drive
disk=$3
## The file name to create, make sure it is on the right drive.
outfile=$4
## The space currently available on the target drive
avail=$(df --output=source,avail -B"$unit" | grep "$disk" | awk '{print $NF}' | sed "s/$unit//i")
## The size of the file to be created
size=$((avail-left))
## Skip if the available free space is already less than requested
if [ "$size" -gt 0 ]; then
## Use fallocate if possible, fall back to head otherwise
fallocate -l "$size$unit" "$outfile" 2>/dev/null || head -c "$size$unit" /dev/zero > "$outfile"
else
echo "There is already less than $left$unit available on $disk"
fi
}
You can then launch it like this:
fill_disk desired_free_space unit target_disk out_file
For example, to create a file called /foo.txt that will leave only 100M free on / (sda1), run
fill_disk 100 M sda1 /foo.txt
Just make sure the target file is on the drive you want to fill, the function does not check for that.
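A more robust variant of the parsing can be sketched as a dry run (paths and sizes here are examples): instead of grepping df for a device name, ask df about the directory that will hold the output file, and work in 1K blocks, which avoids the coarse rounding of -BM.

```shell
# Dry-run sketch: derive the filler size from the filesystem that holds
# the output file, using GNU df's --output option. Nothing is written;
# the fallocate command is only printed.
outfile=/tmp/filler_file            # example target file
left_kb=$((100 * 1024))             # leave 100M free, expressed in KiB
avail_kb=$(df -k --output=avail "$(dirname "$outfile")" | awk 'NR==2 {print $1}')
size_kb=$((avail_kb - left_kb))
if [ "$size_kb" -gt 0 ]; then
    echo "would run: fallocate -l ${size_kb}K $outfile"
else
    echo "already less than ${left_kb}K available"
fi
```

Querying the path also guarantees you measure the same filesystem the file lands on, which the grep-by-device version does not.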
* I couldn't get this to work reliably for small sizes: either it would run out of space, or it would leave slightly different amounts free than I requested.
As requested, here's the same thing as a script:
#!/usr/bin/env bash
left=$1
unit=$2
disk=$3
outfile=$4
avail=$(df --output=source,avail -B"$unit" | grep "$disk" | awk '{print $NF}' | sed "s/$unit//i")
size=$((avail-left))
if [ "$size" -gt 0 ]; then
fallocate -l "$size$unit" "$outfile" 2>/dev/null || head -c "$size$unit" /dev/zero > "$outfile"
else
echo "There is already less than $left$unit available on $disk"
fi
Save it as fill_disk.sh
and run like this:
bash fill_disk.sh 100 M sda1 /foo.txt
I decided to resize the partition on my storage drive (with GParted). All seemed to go well, but now when I try to create directories or copy files to the drive I get a "No space left on device" error.
Also, even if I delete some files, it does not let me replace them.
All of the files on the drive seem to be readable just fine and I can move the existing files into other directories with no problems.
There is space on the drive. Checking the size of all the files reports: 175,840 items, totalling 839.8 GB
It is an ext3 partition.
One weird thing is that Ubuntu (64-bit Karmic) still shows the drive as "957GB Filesystem" in the Places menu.
Note that the affected drive is not my main boot drive but simply a storage drive that I mount from the Places menu when needed.
Output of "df -h":
Filesystem Size Used Avail Use% Mounted on
/dev/sdb1 885G 842G 0 100% /media/acd61702-ff34-460f-8539-ac762d1dc466
Output of "df -i":
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/sdb1 58433536 175818 58257718 1% /media/acd61702-ff34-460f-8539-ac762d1dc466
I have run "fsck -f -v /dev/sdb1":
fsck from util-linux-ng 2.16
e2fsck 1.41.9 (22-Aug-2009)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
175818 inodes used (0.30%)
8348 non-contiguous files (4.7%)
142 non-contiguous directories (0.1%)
# of inodes with ind/dind/tind blocks: 76655/8046/56
222404925 blocks used (95.17%)
0 bad blocks
79 large files
161176 regular files
12907 directories
0 character device files
0 block device files
0 fifos
38 links
1726 symbolic links (1512 fast symbolic links)
0 sockets
--------
175847 files
Any help would be appreciated.
Thanks, e.
Edit: As requested "tune2fs -l /dev/sdb1":
tune2fs 1.41.9 (22-Aug-2009)
Filesystem volume name: <none>
Last mounted on: <not available>
Filesystem UUID: acd61702-ff34-460f-8539-ac762d1dc466
Filesystem magic number: 0xEF53
Filesystem revision #: 1 (dynamic)
Filesystem features: has_journal ext_attr resize_inode dir_index filetype sparse_super large_file
Filesystem flags: signed_directory_hash
Default mount options: (none)
Filesystem state: clean
Errors behavior: Continue
Filesystem OS type: Linux
Inode count: 58433536
Block count: 233703571
Reserved block count: 11685177
Free blocks: 11298646
Free inodes: 58257718
First block: 0
Block size: 4096
Fragment size: 4096
Reserved GDT blocks: 968
Blocks per group: 32768
Fragments per group: 32768
Inodes per group: 8192
Inode blocks per group: 256
Filesystem created: Thu May 15 14:59:19 2008
Last mount time: Fri Nov 13 13:47:23 2009
Last write time: Fri Nov 13 14:40:32 2009
Mount count: 2
Maximum mount count: 35
Last checked: Thu Nov 12 15:14:03 2009
Check interval: 15552000 (6 months)
Next check after: Tue May 11 16:14:03 2010
Reserved blocks uid: 0 (user root)
Reserved blocks gid: 0 (group root)
First inode: 11
Inode size: 128
Journal inode: 8
Default directory hash: tea
Directory Hash Seed: 793715af-66d6-46da-82aa-97ab4549b0ad
Journal backup: inode blocks
Firstly, you can't simply add up the sizes of all the files on the disk and expect that to be the total amount used. Every time you store a file, some space is wasted. It's like putting books on a shelf: if the books vary in size, there will be a gap between the top of each book and the bottom of the next shelf.
Secondly, if you have any files which are open but deleted, the space they occupy will still be used until the program holding them open either closes them or exits. This trick is often used for temporary files: the program doesn't have to worry about cleaning them up; all it needs to do is open a file, then delete it, before working with it. The space used by these files will show up in df, but you won't find a filename that corresponds to it. If you want to find them, you'll have to look in /proc/*/fd
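On Linux, that /proc scan can be sketched like this (run as root to see other users' processes; if lsof is installed, lsof | grep deleted reports much the same thing):

```shell
# List file descriptors whose target has been deleted: the kernel appends
# " (deleted)" to the symlink target under /proc/<pid>/fd/.
for fd in /proc/[0-9]*/fd/*; do
    target=$(readlink "$fd" 2>/dev/null)
    case "$target" in
        *' (deleted)') echo "$fd -> $target" ;;
    esac
done
```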
Thirdly, and this is your issue here, ext3 filesystems reserve a percentage of space which can only be written to by root. There are two reasons for this. First, many filesystems become inefficient as the disk approaches full: the system has to spend more and more time fitting files into the spaces that are left, and reading and writing those files is slow, as they end up badly fragmented. Second, reserving space for root allows root to compress files and hopefully recover some space for the users; if the disk were totally full, that wouldn't be possible.
Therefore, there is nothing wrong: what you are seeing is normal behaviour for a full disk.
It says that the filesystem is 100% used and has 0 available space. The filesystem is full. For various reasons, Avail + Used != Size.
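Plugging in the figures from the tune2fs -l output above makes this concrete: the filesystem still has free blocks, but fewer than the reserved count, so df shows 0 available to non-root users. A quick sanity check in shell arithmetic:

```shell
# Figures taken from the tune2fs -l output in the question:
reserved_blocks=11685177
free_blocks=11298646
block_size=4096
# Space reserved for root, in GiB (integer arithmetic):
echo $(( reserved_blocks * block_size / 1024 / 1024 / 1024 ))   # 44
# Blocks available to non-root users: negative, so df reports 0 Avail:
echo $(( free_blocks - reserved_blocks ))                       # -386531
```

If you need some of that space back, the reservation percentage can be lowered with tune2fs -m (e.g. tune2fs -m 1 /dev/sdb1), at the cost of the benefits described in the previous answer.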
Ok, you were all right, it was full :D
Thanks to quack for the tune2fs comment I see where I went wrong:
Reserved block count: 11685177
Free blocks: 11298646
I have just moved about 60GB from the drive and it is now working as it should :)
Thank you all for your help.