linux - Why is GNU shred faster than dd when filling a drive with random data?

2014-04-06
  • bytesum

    While securely erasing a hard drive before decommissioning, I noticed that dd if=/dev/urandom of=/dev/sda takes nearly a whole day, whereas shred -vf -n 1 /dev/sda only takes a couple of hours with the same computer and the same drive.

    How is this possible? I guess that the bottleneck is the limited output of /dev/urandom. Does shred use a pseudorandom generator that is less random but sufficient for its single purpose (i.e. more efficient) than urandom?
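One quick way to test that guess (a sketch, not from the thread): measure how fast /dev/urandom can produce data on its own, with the disk taken out of the picture entirely.

```shell
# Measure raw /dev/urandom throughput, independent of any disk.
# Safe to run anywhere: it reads 64 MiB and discards it.
dd if=/dev/urandom of=/dev/null bs=1M count=64
```

If the rate dd reports is far below the drive's sequential write speed, /dev/urandom is indeed the bottleneck.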

  • Answers
  • RedGrittyBrick

    Shred uses an internal pseudorandom generator

    By default these commands use an internal pseudorandom generator initialized by a small amount of entropy, but can be directed to use an external source with the --random-source=file option. An error is reported if file does not contain enough bytes.

    For example, the device file /dev/urandom could be used as the source of random data. Typically, this device gathers environmental noise from device drivers and other sources into an entropy pool, and uses the pool to generate random bits. If the pool is short of data, the device reuses the internal pool to produce more bits, using a cryptographically secure pseudorandom number generator. But be aware that this device is not designed for bulk random data generation and is relatively slow.
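The two behaviors can be combined: the --random-source option described above forces shred to draw its data from /dev/urandom instead of its internal generator. A small sketch on a scratch file (the filename is made up for the demo; a real wipe would target the device, e.g. /dev/sdX):

```shell
# Create a small scratch file to shred (demo only; a real wipe would
# target the drive itself).
dd if=/dev/zero of=demo.img bs=1M count=4 2>/dev/null

# One overwrite pass, sourcing data from /dev/urandom rather than
# shred's internal PRNG -- expect this to be about as slow as plain dd.
shred -v -n 1 --random-source=/dev/urandom demo.img

rm -f demo.img
```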

    I'm not persuaded that random data is any more effective than a single pass of zeroes (or any other byte value) at obscuring prior contents.

    To securely decommission a drive, I use a big magnet and a large hammer.

  • jpalecek

    I guess it would rather be caused by dd using smaller chunks to write the data. Try dd if=... of=... bs=1M (or bs=$((1<<20))) to see if it performs better.
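The block-size effect is easy to demonstrate without touching a real disk. Both commands below push the same 100 MiB through /dev/null, but the first issues about 2000 times as many write() calls:

```shell
# Same 100 MiB of data, very different syscall counts.
dd if=/dev/zero of=/dev/null bs=512 count=204800   # 204800 tiny writes
dd if=/dev/zero of=/dev/null bs=1M  count=100      # 100 large writes
```

Comparing the rates dd prints for each run shows how much of the time goes to per-call overhead rather than moving bytes.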


  • Related Question

    ntfs - dd clone hard drive: Input/Output Error though "chkdsk" says OK
  • user31575

    I've cloned hard drives before using dd and a live CD, but have run into a problem.

    The issue:

    dd fails with an "Input/Output Error" on /dev/sda3 , even though windows "check disk" (chkdsk) says it's ok.

    Context:

    • Trying to replace my laptop hard drive w/ a faster one of the same size
    • Laptop has NTFS on a 320 GB hard drive
    • Booting into Knoppix
    • Knoppix recognizes the 'original' drive (/dev/sda)
    • I am using a USB connection for the 'new' drive (irrelevant, but just an FYI)
    • Knoppix recognizes the USB drive as /dev/sdb
    • Using dd, as follows:

      dd if=/dev/sda of=/dev/sdb

    • dd gives the I/O error above at 82 GB (out of 320 GB)

    • I then tried checking each partition as follows and found it failed on /dev/sda3:

      dd  if=/dev/sda1 of=/dev/null
      dd  if=/dev/sda2 of=/dev/null
      dd  if=/dev/sda3 of=/dev/null 
      
    • I have run Windows XP chkdsk on the offending drive in both "find only" and "find and fix" modes, and it reports no errors

    Question

    How can I find and fix the error on my original hard drive partition (i.e. /dev/sda3) so that dd reads it successfully?


  • Related Answers
  • Michał Górny

    Use ddrescue for that; it's able to read damaged disks.
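A minimal invocation (the device names follow the question; the map-file name is my own):

```shell
# Clone the failing disk. ddrescue skips unreadable areas on the first
# pass and records them in rescue.map, so later runs can retry just the
# bad spots. -f is required when the output is a block device.
ddrescue -f /dev/sda /dev/sdb rescue.map
```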

    And chkdsk probably won't find the issue because it only does basic checks of filesystem integrity; by default, it won't check all the partition space for read errors caused by damage.
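A surface scan that does check every sector for read errors (a standard tool, though not one mentioned in the thread) is a read-only badblocks pass over the failing partition:

```shell
# Read-only scan of the partition for unreadable sectors.
# -s shows progress, -v reports each bad block found.
# /dev/sda3 is the failing partition from the question.
badblocks -sv /dev/sda3
```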

  • Robert Martin

    I ran into the same problem and my openSUSE live CD didn't include ddrescue or Clonezilla. However, when I checked the dd manual, I discovered an option, conv=noerror, that allows dd to continue past the I/O error.

    # adding sync pads the skipped blocks with zeroes so the copy
    # stays aligned with the source
    dd conv=noerror,sync if=/dev/sda of=/dev/sdc
    
  • mmv-ru

    To copy data to a different HDD, try using specialized tools: Norton Ghost (commercial) or Clonezilla (open source), http://clonezilla.org/