Setting up ZFS RAIDZ(N) with missing drives

I’m planning to migrate from an old RAIDZ1 pool of three drives to a new RAIDZ2 pool with five. I was thinking that I’d start by setting up four drives in a degraded configuration and add the fifth when I get around to buying it. I figured I could create a loopback drive (or the FreeBSD equivalent), set up the pool, and remove that drive before adding any actual data.

The problem is that I’m using TrueNAS and it, probably wisely, doesn’t let me select that loopback drive when it’s time to set up the pool. I did quite a bit of searching and everyone says that there’s definitely no way to do it via the UI.

Since I’m not afraid of the command line, I tried to find out exactly what parameters TrueNAS uses when it creates a pool. Even though I was cutting corners, I still wanted to stay as close as possible to a standard configuration. I found these old instructions, but I wasn’t convinced their zpool create command would give me exactly the same options TrueNAS would use, especially since I wanted things like the compression and encryption my old pool had.
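One way to see exactly what TrueNAS did, rather than guessing from old instructions, is to ask an existing pool directly. A sketch (the pool name oldtank is a placeholder; this needs root on the TrueNAS shell):

```shell
# Replay every command ever run against the old pool; the first entry is the
# original 'zpool create' with the exact options TrueNAS chose.
zpool history oldtank
# Show only the properties that were explicitly set (compression,
# encryption-related settings, etc.) rather than inherited defaults.
zfs get -s local all oldtank
```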

So I decided to take another approach.

I knew that it’s possible to expand a RAIDZ array by replacing its drives with larger ones. The available space stays unchanged as each drive is upgraded, but once the last drive is replaced the array takes on the size of the new drives.
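As a sketch (pool and device names here are placeholders, not my actual ones), the swap-to-grow cycle looks something like this, one drive at a time:

```shell
# Let the pool grow on its own once every member has been upgraded.
zpool set autoexpand=on tank
# Swap one small drive for a larger one, then wait for the resilver to
# finish before touching the next drive.
zpool replace tank da0 da5
zpool status tank   # repeat the replace for each remaining small drive
zpool list tank     # capacity only jumps after the last replacement
```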

I happened to have a really old drive lying around unused. Really old. 640GB! But it didn’t much matter; I wasn’t going to trust it with any data. I installed that drive, went in to create my RAIDZ2 pool, and selected it along with my 4 “real” drives. All good. I now had a RAIDZ2 pool that was ready to hold about 1.9TB of data (3x640GB, since every member is treated as no larger than the smallest drive).

Next step was to replace that drive with a large loopback drive:

# Create a sparse file to back the loopback device
truncate -s <disk size, e.g. 8T> disk.img
# Attach it as a memory disk (shorthand for: mdconfig -a -t vnode -f disk.img)
mdconfig disk.img
# mdconfig prints the new device name, e.g. md0
zpool replace <pool name> <little drive> <loopback drive, e.g. md0>

The pool now showed the full capacity expected of the larger drives. Then I removed the loopback drive from the pool, destroyed the loopback device, and deleted its backing file:

zpool offline <pool name> <loopback drive, e.g. md0>
mdconfig -du <loopback drive, e.g. md0>
rm disk.img

The capacity was unchanged (still the right size) but the pool state showed DEGRADED, as expected.

When the new drive arrives, I’ll add it, probably from the TrueNAS GUI: from “Dashboard”, scroll down to the pool, select the pool status gear icon, click on the offline drive, and choose Replace.

2 Replies to “Setting up ZFS RAIDZ(N) with missing drives”

  1. 1. You definitely need to use the CLI and can’t create the pool using the TrueNAS UI.

    2. You don’t need a loopback device. ZFS will use block devices (disks or partitions) or files, so you only need to create a sparse file of the correct size.

    3. Don’t forget that you can now expand a RAIDZ pool and rewrite existing blocks with native ZFS, so you can stay with a 4-wide RAIDZ2, expand it to a 5-wide RAIDZ2 entirely from the UI, and run a simple zfs rewrite from the CLI to make existing data use the new parity ratio.

    1. Hi,

      Thanks for your comment.

      To your first point, I was definitely able to create the pool from the UI as I described. The benefit of doing it that way was that the create command exactly matched whatever TrueNAS normally uses. I concede that the rest was done from the CLI, but that didn’t affect the structure of the pool.

      I wasn’t aware that I could have just used a sparse file. I’ll try it next time.
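      For future reference, I think the sparse-file version would look something like this (pool and device names are placeholders, and I haven’t actually run it):

```shell
# Make an 8TB sparse file: the logical size is 8TB, but almost no real
# disk space is allocated until something writes to it.
truncate -s 8T fake.img
ls -lhs fake.img   # the first column shows the actual (tiny) allocation
# Then, as root, use the file directly as the stand-in vdev
# (ZFS wants an absolute path to file vdevs):
#   zpool create tank raidz2 da0 da1 da2 da3 "$(pwd)/fake.img"
#   zpool offline tank "$(pwd)/fake.img"
#   rm fake.img
```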

      Zpool expansion was a long time coming. I don’t believe it was GA yet when I started this project, but I’m looking forward to trying it when I need to expand. After all, I still have those 2 spare drives now.
