linux - Can I expand the size of a file based disk image?

2014-04-05
  • kerrreem

    I created an empty disk image using dd, then used mkfs to make it a real filesystem image. I am mounting and using it fine. What I need is to be able to expand or shrink this file-based disk image when needed. Is it possible to increase the size of a disk image that way? Is there a way to make this file-based disk image resize dynamically, like the virtual drives found in virtual machines?

    Thanks.

  • Answers
  • Mikhail Morfikov

    First, you have to create an image file:

    # dd if=/dev/zero of=./binary.img bs=1M count=1000
    1000+0 records in
    1000+0 records out
    1048576000 bytes (1.0 GB) copied, 10.3739 s, 101 MB/s
    
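    As an aside, if you'd rather not write a gigabyte of zeros up front, the same dd seek trick shown further down this page can create the image sparsely -- a sketch, not part of the original steps:

    # dd if=/dev/zero of=./binary.img bs=1M count=0 seek=1000
    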

    Now you have to create a partition on it -- you can use whatever tool you want (fdisk, parted, gparted). I prefer parted, so:

    # parted binary.img
    

    You have to create a partition table first and then one big partition:

    (parted) mktable                                                          
    New disk label type? msdos      
    
    (parted) mkpartfs
    WARNING: you are attempting to use parted to operate on (mkpartfs) a file system.
    parted's file system manipulation code is not as robust as what you'll find in
    dedicated, file-system-specific packages like e2fsprogs.  We recommend
    you use parted only to manipulate partition tables, whenever possible.
    Support for performing most operations on most types of file systems
    will be removed in an upcoming release.
    Partition type?  primary/extended? primary
    File system type?  [ext2]? fat32
    Start? 1
    End? 1049M
    

    Now let's see:

    (parted) print
    Model:  (file)
    Disk /media/binary.img: 1049MB
    Sector size (logical/physical): 512B/512B
    Partition Table: msdos
    
    Number  Start   End     Size    Type     File system  Flags
     1      1049kB  1049MB  1048MB  primary  fat32        lba
    

    It looks good.

    You want to enlarge it, so first append some zeros to the end of the image using dd:

    # dd if=/dev/zero bs=1M count=400 >> ./binary.img
    400+0 records in
    400+0 records out
    419430400 bytes (419 MB) copied, 2.54333 s, 165 MB/s
    root:/media# ls -al binary.img 
    -rw-r--r-- 1 root root 1.4G Dec 26 06:47 binary.img
    
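    As an aside, truncate from coreutils can grow the file in place without writing any zeros (the new region stays sparse); a hedged equivalent of the dd append above:

    # truncate -s +400M binary.img
    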

    Either way, the image gained 400M:

    # parted binary.img 
    GNU Parted 2.3
    Using /media/binary.img
    Welcome to GNU Parted! Type 'help' to view a list of commands.
    (parted) print                                                            
    Model:  (file)
    Disk /media/binary.img: 1468MB
    Sector size (logical/physical): 512B/512B
    Partition Table: msdos
    
    Number  Start   End     Size    Type     File system  Flags
     1      1049kB  1049MB  1048MB  primary  fat32        lba
    

    As you can see, the size of the image is now different (1468MB). Parted can also show you the free space in the image; just type print free instead of print. Now you have to add the extra space to the partition and its filesystem:

    (parted) resize 1
    WARNING: you are attempting to use parted to operate on (resize) a file system.
    parted's file system manipulation code is not as robust as what you'll find in
    dedicated, file-system-specific packages like e2fsprogs.  We recommend
    you use parted only to manipulate partition tables, whenever possible.
    Support for performing most operations on most types of file systems
    will be removed in an upcoming release.
    Start?  [1049kB]?
    End?  [1049MB]? 1468M
    

    and check it:

    (parted) print
    Model:  (file)
    Disk /media/binary.img: 1468MB
    Sector size (logical/physical): 512B/512B
    Partition Table: msdos
    
    Number  Start   End     Size    Type     File system  Flags
     1      1049kB  1468MB  1467MB  primary  fat32        lba
    

    Pretty nice. If you want to shrink it, just do a similar thing:

    (parted) resize 1
    WARNING: you are attempting to use parted to operate on (resize) a file system.
    parted's file system manipulation code is not as robust as what you'll find in
    dedicated, file-system-specific packages like e2fsprogs.  We recommend
    you use parted only to manipulate partition tables, whenever possible.
    Support for performing most operations on most types of file systems
    will be removed in an upcoming release.
    Start?  [1049kB]?
    End?  [1468MB]? 500M
    

    Now you can check if the partition is smaller:

    (parted) print
    Model:  (file)
    Disk /media/binary.img: 1468MB
    Sector size (logical/physical): 512B/512B
    Partition Table: msdos
    
    Number  Start   End    Size   Type     File system  Flags
     1      1049kB  500MB  499MB  primary  fat32        lba
    

    Yes it is.

    If you resize a partition with data on it, you have to pay attention to how much data there is, because if you shrink the partition too much, you will get an error:

    Error: Unable to satisfy all constraints on the partition
    

    After shrinking the file system, you also have to cut the excess off the image file. But this is tricky. You could take the End value reported by parted, 500M:

    # dd if=./binary.img of=./binary.img.new bs=1M count=500
    

    But this leaves some space at the end of the file. I'm not sure why, but the image works.
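
    If you want to avoid the copy (and the leftover space at the end), truncate can also cut the existing image down in place; a hedged aside, not from the original steps (truncate's M suffix means MiB, the same unit as the dd count above):

    # truncate -s 500M binary.img
    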

    And there's one thing about mounting such an image -- you have to know the offset to pass to the mount command. You can get the offset from, for instance, fdisk:

    # fdisk -l binary.img
    
    Disk binary.img: 1468 MB, 1468006400 bytes
    4 heads, 32 sectors/track, 22400 cylinders, total 2867200 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x000f0321
    
         Device Boot      Start         End      Blocks   Id  System
    binary.img1            2048     2867198     1432575+   c  W95 FAT32 (LBA)
    

    2048 (start) x 512 (sector size) = 1048576, so you have to use the following command to mount the image:

    # mount -o loop,offset=1048576 binary.img /mnt
    
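    As an aside, a reasonably recent losetup (util-linux) can spare you the arithmetic: with partition scanning enabled it exposes each partition as its own device (the loop device name below is an assumption):

    # losetup -Pf --show binary.img
    /dev/loop0
    # mount /dev/loop0p1 /mnt
    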
  • davidgo

    Yes, this is possible - it works just like a partition. I tried the following, which worked:

    Make the original file, mount it, check, unmount it

    dd if=/dev/zero of=test.file count=102400   # 102400 x 512 B = 50 MiB
    mkfs.ext3 test.file
    mount test.file /m4 -o loop
    df
    umount /m4
    

    Grow it

    dd if=/dev/zero count=102400 >> test.file   # append another 50 MiB of zeros
    mount test.file /m4 -o loop
    df                                          # unchanged until the fs is resized
    resize2fs /dev/loop0                        # grow ext3 to fill the device (online resize)
    df
    

    There is no reason why shrinking would not work similarly, but shrinking is always more difficult than growing (and, of course, needs to be done while the block device is not mounted, etc.)
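
    For the shrink direction, here is a hedged sketch, under the assumption that the image holds a bare ext3 filesystem and is unmounted (device names will vary):

    losetup -f --show test.file    # attach; prints e.g. /dev/loop0
    e2fsck -f /dev/loop0           # resize2fs insists on a clean fsck first
    resize2fs /dev/loop0 40M       # shrink the filesystem itself
    losetup -d /dev/loop0          # detach
    truncate -s 40M test.file      # then cut the file down to match
    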

    Have a look at this link, which talks about using qemu-nbd to mount qcow2 images.


  • Related Question

    filesystems - How do I convert a Linux disk image into a sparse file?
  • endolith

    I have a bunch of disk images, made with ddrescue, on an EXT partition, and I want to reduce their size without losing data, while still being mountable.

    How can I fill the empty space in the image's filesystem with zeros, and then convert the file into a sparse file so this empty space is not actually stored on disk?

    For example:

    > du -s --si --apparent-size Jimage.image 
    120G Jimage.image
    > du -s --si Jimage.image 
    121G Jimage.image
    

    This actually only has 50G of real data on it, though, so the second measurement should be much smaller.

    This supposedly will fill empty space with zeros:

    cat /dev/zero > zero.file
    rm zero.file
    

    But if sparse files are handled transparently, it might actually create a sparse file without writing anything to the virtual disk, ironically preventing me from turning the virtual disk image into a sparse file itself. :) Does it?

    Note: For some reason, sudo dd if=/dev/zero of=./zero.file works when cat does not on a mounted disk image.
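
    For reference, the usual form of that zero-fill, as a sketch (/mnt stands for wherever the image is mounted):

    cd /mnt                             # the mounted image
    dd if=/dev/zero of=zero.file bs=1M  # stops with "No space left on device"
    sync                                # flush the zeros into the image file
    rm zero.file
    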


  • Related Answers
  • mihi

    First of all, sparse files are only handled transparently if you seek, not if you write zeroes.

    To make this clearer, here is the example from Wikipedia:

    dd if=/dev/zero of=sparse-file bs=1k count=0 seek=5120
    

    does not write any zeroes; it opens the output file, seeks (jumps over) 5 MB, and then writes zero zeroes (i.e. nothing at all). This command (not from Wikipedia)

    dd if=/dev/zero of=sparse-file bs=1k count=5120
    

    will write 5MB of zeroes and will not create a sparse file!
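
    You can confirm the difference by comparing allocated size with apparent size (the same du flags as in the question):

    du -k sparse-file                    # allocated size: near 0 for the seek version
    du -k --apparent-size sparse-file    # logical size: 5120 in both cases
    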

    As a consequence, a file that is already non-sparse will not magically become sparse later.

    Second, to make a file with lots of zeroes sparse, you have to cp it:

    cp --sparse=always original sparsefile
    

    or you can use tar's or rsync's --sparse option as well.
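
    As an aside, util-linux's fallocate can also punch the holes in place, with no copy at all (assuming your kernel and filesystem support hole punching):

    fallocate --dig-holes original
    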

  • Janne Pikkarainen

    Do you mean that your ddrescue-created image is, say, 50 GB, while in reality something much smaller would suffice?

    If that's the case, couldn't you just first create a new image with dd:

    dd if=/dev/zero of=some_image.img bs=1M count=20000
    

    and then create a filesystem in it:

    mkfsofyourchoice some_image.img
    

    then just mount the image and copy everything from the old image to the new one? Would that work for you?
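
    A minimal sketch of that copy step, assuming both images contain a bare filesystem (the mount points are hypothetical):

    mkdir -p /mnt/old /mnt/new
    mount -o loop,ro old_image.img /mnt/old
    mount -o loop some_image.img /mnt/new
    cp -a /mnt/old/. /mnt/new/    # -a preserves ownership, modes and timestamps
    umount /mnt/old /mnt/new
    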

  • Grumbel

    PartImage can create disk images that only store the used blocks of a filesystem, thus drastically reducing the required space by ignoring unused blocks. I don't think you can directly mount the resulting images, but going:

    image -> partimage -> image -> cp --sparse=always
    

    That should produce what you want (it might even be possible to skip the last step; I haven't tried).

  • endolith

    There's now a tool called virt-sparsify which will do this. It fills up the empty space with zeros and then copies the image to a sparse file. It requires installing a lot of dependencies, though.
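
    Basic usage looks like this (the file names are placeholders):

    virt-sparsify Jimage.image Jimage-sparse.image
    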

  • hotei

    I suspect you'll require a custom program written to that spec if that's REALLY what you want to do. But is it...?

    If you've actually got lots of all-zero areas, then any good compression tool will shrink it significantly. And trying to write sparse files won't work in all cases: if I recall correctly, a sparse file still takes up a minimum of 1 block of output storage for every input block that contains ANY non-zero bits. For instance, if you had a file averaging even 1 non-zero bit per 512-byte block, it couldn't be written "sparsely".

    By the way, you're not going to lose data if you compress the file with zip, bzip, bzip2 or p7zip. They aren't like mpeg or jpeg compression, which is lossy.
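
    For illustration, one hedged example of the lossless route (gzip's -k keeps the original file):

    gzip -k Jimage.image    # writes Jimage.image.gz; the original is untouched
    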

    On the other hand, if you need to do random seek reads into the file then compression might be more trouble than it's worth and you're back to the sparse write. A competent C or C++ programmer should be able to write something like that in an hour or less.