> Posted for my friend Marko:
>
> I've been reading up on ZFS with the idea of building a home NAS.
>
> My ideal home NAS would have:
>
> - high performance via striping
> - fault tolerance through selective use of the multiple-copies attribute
> - low cost, by getting the most efficient space utilization possible (not raidz, not mirroring)
> - scalability
>
> I was hoping to start with four 1TB disks in a single striped pool, with only some filesystems set to copies=2.
>
> I would be able to survive a single disk failure for the data that was on the copies=2 filesystems (trusting that I had enough free space across multiple disks that copies=2 writes were not placed on the same physical disk).
>
> I could grow this filesystem just by adding single disks.
>
> Theoretically, at some point I would switch to copies=3 to increase my chances of surviving two disk failures. The block checksums would be useful for early detection of failing disks.
>
> The major snag I discovered is that if a striped pool loses a disk, I can still read and write the remaining data, but I cannot reboot and remount the surviving part of the stripe, even with -f.
>
> For example, if I lost some of my "single copies" data, I'd like to still access the good data, pop in a new (potentially larger) disk, re-"cp" the important data to have the multiple copies rebuilt, and not have to rebuild the entire pool structure.
>
> So the feature request would be for ZFS to allow selective disk removal from striped pools, with the resultant data loss, but with any data that survived, either by chance (living on the remaining disks) or by policy (multiple copies), still accessible.
>
> Is there some underlying reason in ZFS that precludes this functionality? If the filesystem partially survives when a striped-pool member disk fails while the box is still up, why not after a reboot?
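(For reference, the setup described above would look something like this. This is only a sketch: the pool name "tank", the dataset names, and the c1t*d0 device names are assumptions, not from the original post.)

    # Non-redundant striped pool across four 1TB disks
    zpool create tank c1t0d0 c1t1d0 c1t2d0 c1t3d0

    # Dataset for important data: ZFS keeps two copies of every block,
    # placed on different disks when free space allows
    zfs create tank/important
    zfs set copies=2 tank/important

    # Everything else stays at the default copies=1
    zfs create tank/scratch

    # Growing the pool later is one command per added disk
    zpool add tank c1t4d0

Note that the copies property only applies to blocks written after it is set; existing data does not gain the extra copies until it is rewritten, which is why the re-"cp" step above matters.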
You may never get a good answer to this, so I'll give it to you straight: ZFS doesn't do this because no business using Sun products wants to do this, so nobody at Sun ever made ZFS do it. Maybe you can convince someone at Sun to care about the feature, but I doubt it; it's a pretty fringe use case.

In the end you can probably work around the problem, though. Striping doesn't improve performance that much, and it doesn't buy you that much more space. Next year we'll be using 2TB hard drives, and when you can build a 6TB RAIDZ array from four drives one year and a 7.5TB one the year after, then put them both in the same pool so it looks like 13.5TB from eight drives that can tolerate one disk failure in each vdev, that isn't too shabby.
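(A sketch of that layout, with assumed device names; the second vdev assumes 2.5TB drives, which is what a 7.5TB four-disk RAIDZ implies.)

    # Year one: four 2TB drives in a raidz vdev, ~6TB usable,
    # survives one disk failure
    zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0

    # Year two: add a second raidz vdev of four 2.5TB drives, ~7.5TB more,
    # giving one ~13.5TB pool that survives one failure per vdev
    zpool add tank raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0

Since ZFS stripes writes across the two vdevs, you get most of the striping performance back without giving up redundancy.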