This obviously requires that the array does not depend on the disk being replaced, so the array must have redundancy; if it does depend on the drive in question, this approach presents the same problem as above. When a device is taken offline, it is not detached from the storage pool. After a detach, a lot of data may still be on the drive, but it will be practically impossible to remount the drive and view that data as a usable filesystem. If you attempt to bring a faulted device online, a warning message is displayed.

To temporarily take a device offline, use the zpool offline -t option. For example:

zpool offline -t tank c1t0d0
bringing device 'c1t0d0' offline

When the system is rebooted, the device is automatically returned to the ONLINE state.

The difference between resilvering and scrubbing is that resilvering only examines data that ZFS knows to be out of date (for example, when attaching a new device to a mirror or replacing an existing device), whereas scrubbing examines all data to discover silent errors due to hardware faults or disk failure.

When turning the drive notification light off on the chassis, be sure to use the same slot and chassis IDs as you did when enabling it.
You can take a device offline by using the zpool offline command followed by the pool name and the device name.
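As a minimal sketch, assuming the pool tank and the device c1t0d0 from the example above, the basic offline, online, and scrub invocations would look something like this:

# Take the device offline until it is explicitly brought back online
zpool offline tank c1t0d0
# Take it offline only temporarily; it returns to ONLINE after a reboot
zpool offline -t tank c1t0d0
# Bring the device back online; ZFS resilvers whatever changed while it was out
zpool online tank c1t0d0
# Scrub the entire pool to look for silent errors
zpool scrub tank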
Solaris ZFS: How to Offline / Online / Detach / Replace a Device in a Storage Pool
In the code example above, c1t0d0 is the device being taken offline.

Attaching and Detaching Devices in a Storage Pool

In addition, if you are replacing a disk in a ZFS root pool, see How to Replace a Disk in the ZFS Root Pool.
ZFS allows individual devices to be taken offline or brought online.
zpool detach / zpool replace
Can you operate a second server onsite, or even a second ZFS array in the same server?

Answer updated. I've been doing a bit of reading, and it looks like ZFS doesn't like disks being removed from non-redundant arrays.
On the same machine, have you tried creating a new pool with the two drives in a mirror? This scenario is possible assuming that the systems in question can detect the storage after it is attached to the new switches, possibly through different controllers than before, and your pools are set up as RAID-Z or mirrored configurations. By default, pool size is not expanded to its full size unless the autoexpand pool property is enabled. One thing to note is that occasional checksum errors on individual drives are normal and expected behavior, if not optimal.
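As a hedged illustration, assuming a pool named tank and the device c1t0d0, enabling automatic expansion or growing a single already-replaced device might look like:

# Allow the pool to grow automatically when larger devices are attached
zpool set autoexpand=on tank
# Or expand a specific device that has already been replaced with a larger one
zpool online -e tank c1t0d0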
Identify the FAULTED or UNAVAIL drive, then zpool replace it in the pool. Because each mirror member holds a complete copy of the data, mirror members can be 'detached' rather than replaced.
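A rough sketch of that workflow, assuming the pool is named tank, the failed device is c1t0d0, and the replacement disk c2t0d0 sits in a different slot (the device names are placeholders, not from the original text):

# Show only pools with errors and identify the FAULTED/UNAVAIL device
zpool status -x
# Replace the failed device with the new one; ZFS resilvers onto it
zpool replace tank c1t0d0 c2t0d0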
"zpool detach" and "zpool replace" are two very different, and totally unconnectedthings. "zpool detach" is used to remove drives from mirror.
In order to replace a drive in the same slot as the faulted drive, it must be removed from the pool and unconfigured from the OS before a new disk can be inserted.
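On Solaris, a same-slot replacement is typically done with cfgadm; the sketch below assumes the pool tank, the device c1t0d0, and the attachment point c1::dsk/c1t0d0, all of which are system-specific and shown here only as placeholders:

# Take the failed device offline
zpool offline tank c1t0d0
# Unconfigure the disk from the OS (attachment point ID varies per system)
cfgadm -c unconfigure c1::dsk/c1t0d0
# ... physically swap the disk in the same slot ...
cfgadm -c configure c1::dsk/c1t0d0
# Tell ZFS to replace the device with the new disk in the same location
zpool replace tank c1t0d0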
ChrisS: instead of a -1, how about writing an answer with some citations?
Detach device from zpool
What I am suggesting is that you format the two backup drives with whatever file system you want and use the zfs send command to take full or incremental backup streams saved to the backup disks, or use zfs recv to make a duplicate disk. With ZFS, such errors are not of much concern, but some degree of preventative maintenance is necessary to keep failures from accumulating. For added protection, you can even physically unplug the device after the offline command, yet prior to issuing the detach command.
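A minimal sketch of the send/receive approach, assuming a dataset tank/data and a pool named backup on one of the backup disks (both names are hypothetical):

# Take a snapshot and save a full stream to a file on the backup disk
zfs snapshot tank/data@full-1
zfs send tank/data@full-1 > /backup/tank-data-full-1.zfs
# Or receive directly into the backup pool to keep a browsable duplicate
zfs send tank/data@full-1 | zfs receive backup/data
# Later, send only the changes since the previous snapshot (incremental)
zfs snapshot tank/data@incr-2
zfs send -i tank/data@full-1 tank/data@incr-2 | zfs receive -F backup/data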
An in-progress spare replacement can be cancelled by detaching the hot spare.
You can verify that the device has been detached by running the zpool status command again.
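For instance, assuming the pool tank and a hot spare c2t3d0 (a placeholder name), cancelling the spare replacement and checking the result would look like:

# Cancel the in-progress spare replacement by detaching the hot spare
zpool detach tank c2t3d0
# Confirm that the spare is no longer active in the pool configuration
zpool status tank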