osx - zpool: pool I/O is currently suspended
2014-07
I'm using ZFS on OS X and I have a zpool which is active and online:
NAME    SIZE  ALLOC  FREE  CAP  DEDUP  HEALTH  ALTROOT
WD_1TB  931G  280G   651G  30%  1.00x  ONLINE  -
but I can't actually mount it.
$ sudo zfs mount WD_1TB
cannot open 'WD_1TB': pool I/O is currently suspended
cannot open 'WD_1TB': pool I/O is currently suspended
or unmount it:
$ sudo zfs unmount WD_1TB
cannot open 'WD_1TB': pool I/O is currently suspended
cannot open 'WD_1TB': pool I/O is currently suspended
or even destroy it:
$ sudo zpool destroy -f WD_1TB
cannot open 'WD_1TB': pool I/O is currently suspended
Running zpool export WD_1TB just freezes.
Clearing device errors in the pool fails as well:
$ sudo zpool clear WD_1TB
cannot clear errors for WD_1TB: I/O error
The above happens whether or not the disk is connected via USB.
What's interesting is that zpool status points the pool at /dev/disk1, while diskutil list shows it at /dev/disk3.
I've enabled debug messages via sysctl -w zfs.vnops_osx_debug=1 and ran sudo dmesg | tail, which shows something like:
0 [Level 3] [Facility com.apple.system.fs] [ErrType IO] [ErrNo 6] [IOType Read] [PBlkNum 0] [LBlkNum 0]
0 [Level 3] [Facility com.apple.system.fs] [DevNode devfs] [MountPt /dev] [Path /dev/disk1s2]
disk1s2: media is not present.
0 [Level 3] [Facility com.apple.system.fs] [ErrType IO] [ErrNo 6] [IOType Read] [PBlkNum 512] [LBlkNum 512]
0 [Level 3] [Facility com.apple.system.fs] [DevNode devfs] [MountPt /dev] [Path /dev/disk1s2]
zfs_vnop_write(vp 0xffffff804f6303c0, offset 0x12b00000 size 0x10000
zfs_vnop_write(vp 0xffffff804f6303c0, offset 0x12b10000 size 0x10000
zfs_vnop_write(vp 0xffffff804f6303c0, offset 0x12b20000 size 0x10000
zfs_vnop_write(vp 0xffffff804f6303c0, offset 0x12b30000 size 0x10000
zfs_vnop_write(vp 0xffffff8051b031e0, offset 0x1f0000 size 0x10000
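The read errors against disk1s2 ("media is not present") are what suspend the pool; the zfs_vnop_write lines themselves just show a sequential write stream, since each offset advances by exactly the I/O size. A quick sanity check with shell arithmetic, using the offsets from the log above:

```shell
# Offsets taken from two consecutive zfs_vnop_write lines in the log;
# the difference equals the I/O size, i.e. one sequential 64 KiB stream.
step=$(( 0x12b10000 - 0x12b00000 ))
echo "step: $step bytes ($(( step / 1024 )) KiB)"   # → step: 65536 bytes (64 KiB)
```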
Connecting or disconnecting the HDD doesn't help.
Is there any way to simply mount the HDD on OS X under these circumstances?
If sudo zpool clear WD_1TB doesn't work, try:
$ sudo zpool clear -nFX WD_1TB
and then re-import the pool:
$ zpool import WD_1TB
If that doesn't help either, try the following commands to remove the invalid zpool:
$ zpool list -v
$ sudo zfs unmount WD_1TB
$ sudo zpool destroy -f WD_1TB
$ zpool detach WD_1TB disk1s2
$ zpool remove WD_1TB disk1s2
$ zpool remove WD_1TB /dev/disk1s2
$ zpool set cachefile=/etc/zfs/zpool.cache WD_1TB
Finally, if nothing else helps, optionally remove the file /etc/zfs/zpool.cache and restart your computer.
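Since zpool status and diskutil list disagree about the device node (disk1 vs disk3), another hedged avenue is a forced export followed by an import that rescans /dev, so ZFS rebinds the pool to the current node. The sketch below is a dry run: it only prints the commands (the -d /dev rescan is an assumption about this setup; drop the echo to actually run them):

```shell
# Dry run: print the recovery sequence instead of executing it.
# "zpool import -d DIR" tells ZFS which directory to scan for devices,
# so a pool stuck on a stale node (disk1) can rebind to the real one (disk3).
for cmd in \
  "zpool export -f WD_1TB" \
  "zpool import -d /dev WD_1TB" \
  "zpool status WD_1TB"
do
  echo "sudo $cmd"
done
```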
Related:
I have a machine with two hard drives. I have installed OpenSolaris on one of them and now I want to add the other as a mirror drive in my zpool rpool. I guess I have to format the second disk first and then add it to the pool. How can I do this?
I have tried to follow OpenSolaris ZFS rpool mirror, but when I get to prtvtoc /dev/rdsk/c7t0d0s0 | fmthard -s - /dev/rdsk/c7t1d0s0
I get these messages: fmthard: Cannot stat device /dev/rdsk/c7t1d0s0
and prtvtoc: /dev/rdsk/c7t0d0s0: No such file or directory
Here are some commands and my output (I have removed parts of the output that I don't think are needed):
pfexec format
AVAILABLE DISK SELECTIONS:
0. c7d0
1. c7d1
and
zpool status
pool: rpool
state: ONLINE
scrub: none requested
config:
NAME STATE READ WRITE CKSUM
rpool ONLINE 0 0 0
c7d0s0 ONLINE 0 0 0
EDIT: After running devfsadm -v
the following commands work fine:
pfexec fdisk /dev/rdsk/c7d1s2
prtvtoc /dev/rdsk/c7d0s2 | fmthard -s - /dev/rdsk/c7d1s2
zpool attach -f rpool c7d0s0 c7d1s0
and
zpool status
pool: rpool
state: ONLINE
status: One or more devices is currently being resilvered. The pool will
continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
scrub: resilver completed after 0h10m with 0 errors
config:
NAME STATE READ WRITE CKSUM
rpool ONLINE 0 0 0
c7d0s0 ONLINE 0 0 0
c7d1s0 ONLINE 0 0 0 3,77G resilvered
errors: No known data errors
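For scale, the resilver figures in the status output (reading "3,77G" as 3.77 GiB, completed in roughly 10 minutes) work out to about 6 MiB/s; shell integer arithmetic is enough for the estimate:

```shell
# 3.77 GiB resilvered in 0h10m, per the zpool status output above.
mib=$(( 377 * 1024 / 100 ))   # 3.77 GiB expressed in MiB (~3860)
secs=$(( 10 * 60 ))
echo "~$(( mib / secs )) MiB/s"   # → ~6 MiB/s
```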
However, installgrub fails:
pfexec installgrub /boot/grub/stage1 /boot/grub/stage2 c7d1s0
cannot open/stat device c7d1s0
- Use format to get a list of the available hard disks.
- rpools are special: their disks must not have an EFI label. You can delete the EFI label with format/fdisk.
- You don't have to format the drive before adding it to a zpool, but in the case of an rpool you need to copy the partition layout from the first disk to the second. The commands you've mentioned are correct, but you need to call them with s2 (the entire disk) and not s0.
- Use zpool attach to add a new mirror device for the existing device.
- Verify the new mirror with zpool status rpool.
- It's recommended to add entire disks to data zpools (and not only a single slice/partition).
- Don't forget to install grub on the 2nd disk, too, to make it bootable. (Enable it as a boot drive in the BIOS, too, and test it!)
So finally here's the command sequence:
fdisk /dev/rdsk/c7d1s2 (confirm that you want a 100% Solaris partition)
prtvtoc /dev/rdsk/c7d0s2 | fmthard -s - /dev/rdsk/c7d1s2
zpool attach [-f] rpool c7d0s0 c7d1s0 (maybe use "-f" flag)
zpool status
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c7d1s0
If you still can't get it to work, please show us the output of zpool status and the drive list from format.