My ZFS dataset is gone after system upgrade/reboot (Ubuntu)

2014-07
  • maxigs

I recently upgraded my little NAS to Ubuntu 14.04 and initially everything worked fine. I'm used to re-importing my ZFS pools after restarts, so it was no surprise when the mount point was gone after a restart two days ago. But this time I was not able to re-import it, and when I looked closer, the whole ZFS dataset had disappeared too. I'm running out of ideas about what to try (as are the Google results I find).

    $ zpool list
    no pools available

    $ zpool status
    no pools available

    $ zfs list
    no datasets available

    $ ls /dev/disk/by-id/
    ... disks as I have used them in my config ... none missing

    Everything was working fine before the reboot two days ago, and before that reboot I ran some upgrades, including the kernel (standard upgrade via apt). Re-installing, further upgrading, and booting back into previous kernel versions (selected in GRUB) did not help.

    It would be really cool to get it back running, especially since I switched to ZFS for its promised reliability. The nice data redundancy doesn't help me if the whole storage disappears out of nowhere...

    Any ideas?

  • Answers
  • user337417

    I had the same problem. Running

    apt-get install ubuntu-zfs zfsutils
    

    and rebooting solved it for me. For some reason, Ubuntu did not update all ZFS packages. You can see whether this is the case at the end of the apt-get upgrade output (look for packages that have been "kept back").
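A sketch of the full check-and-recover sequence this answer implies (package names are from the answer's Ubuntu 14.04 era PPA setup; the pool name `tank` is a placeholder for your own pool):

```shell
# Check for ZFS packages that apt has held back; they appear at the
# end of the 'apt-get upgrade' output as "kept back"
sudo apt-get update
sudo apt-get upgrade

# Reinstall/upgrade the ZFS packages explicitly, then reboot
sudo apt-get install ubuntu-zfs zfsutils

# After rebooting, scan for importable pools...
sudo zpool import

# ...scan by stable device IDs if the default scan finds nothing...
sudo zpool import -d /dev/disk/by-id

# ...and import the pool by name once the scan lists it
sudo zpool import tank   # 'tank' is a placeholder
```

The key point is that `zpool import` with no arguments only scans; it lists pools that can be imported without actually importing them, so it is a safe first check.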


  • Related Question

    opensolaris - ZFS RAIDZ Parity
  • Questioner

    I am building an OpenSolaris box to try out ZFS and RAIDZ. I am starting simple, with the OS on one drive and all my data stored on the RAIDZ volume. I am a bit confused about how RAIDZ handles parity. I created a pool using the command "zpool create pool_1 raidz drive1 drive2 drive3", and when I ran "zpool list" it showed the available size of all three drives together. I figured that if parity were calculated automatically, it should be short the size of one drive. So I deleted that pool and created one using "zpool create pool_1 raidz drive1 drive2 spare drive3", and the available size was what I expected to see. To me, though, a hot spare isn't parity; it is a disk that is ready to fill in in the event that your RAID loses a drive. I don't want to get down the road, have one of those two drives fail, and find out there is no parity.

    Any explanation on this would be much appreciated.


  • Related Answers
  • Majenko

    The "pool" size of a RAID-Z is the entire physical space available to the pool, not the space available to a filesystem.

    For that, you should examine the output of the "zfs list" command.

    # zpool list
    NAME      SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
    storage  5.32T  2.23T  3.09T    41%  1.00x  ONLINE  -
    
    # zfs list
    NAME                       USED  AVAIL  REFER  MOUNTPOINT
    storage                   1.49T  2.00T  38.6K  /storage
    

    RAID-Z (or RAID-Z1) is, as you rightly surmise, (N-1)*size space, and RAID-Z2 is (N-2)*size space, where N is the number of drives and size is the capacity of one drive.
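As a quick sanity check, the usable space for the questioner's three-drive layouts can be computed from those formulas directly (a drive size of 2 TB is an assumed example value, not from the question):

```shell
N=3        # number of drives in the raidz vdev
SIZE=2     # assumed capacity per drive, in TB

# RAID-Z1 spends one drive's worth of space on parity: usable = (N-1) * size
echo "raidz1 usable: $(( (N - 1) * SIZE )) TB"

# RAID-Z2 spends two drives' worth: usable = (N-2) * size
echo "raidz2 usable: $(( (N - 2) * SIZE )) TB"

# The questioner's 2-drive raidz plus a hot spare: usable = (2-1) * size,
# which is why it *looked* like parity was being deducted
echo "raidz1 + spare usable: $(( (2 - 1) * SIZE )) TB"
```

This also explains the questioner's observation: moving one drive to `spare` shrank the reported size not because parity appeared, but because the spare's capacity left the pool entirely.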

    Unlike a dedicated-parity arrangement such as RAID 4 (a scheme often loosely called "RAID 5", though true RAID 5 also stripes its parity), RAID-Z doesn't use one specific drive for the parity; it rotates the parity around the different disks. This spreads writes more evenly and prevents a single parity disk from wearing out faster than the rest:

    In this example, dw is a data write operation and pw is a parity write operation. D3 is the parity disk in the dedicated-parity arrangement.

    Dedicated parity (RAID 4):

    D1   D2   D3
    dw        pw
         dw   pw
         dw   pw
    dw        pw 
    dw        pw
    dw        pw
         dw   pw
    

    You can see how every write operation to either disk 1 or disk 2 results in a write to disk 3.

    RAID-Z:

    D1   D2   D3
    dw   pw
    dw        pw
    pw   dw
         pw   dw
    dw        pw
    pw   dw
         pw   dw  
    

    You can see a much more even disk write pattern.

    The zpool's spare vdev is a reserved disk which can be swapped in for any of the other disks on the fly - a hot spare. This disk is not used until you tell ZFS to use it.
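To make the spare-versus-parity distinction concrete, here is a sketch of how a spare is attached and later activated. The pool and device names are taken from the question; the failure of `drive2` is a hypothetical scenario:

```shell
# The question's second layout: a 2-disk raidz vdev plus a hot spare.
# drive3 contributes NO parity and NO capacity while it sits idle.
zpool create pool_1 raidz drive1 drive2 spare drive3

# If drive2 later fails, swap the spare in manually...
zpool replace pool_1 drive2 drive3

# ...or let ZFS do it automatically on failure via the pool property
zpool set autoreplace=on pool_1
```

Until one of those replace paths runs, the spare does nothing, which is exactly why it cannot substitute for parity: parity protects data at write time, while a spare only shortens the window before redundancy is restored after a failure.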

  • Andrew

    RAID 4 uses a dedicated parity drive. RAID 5 stripes the parity.