A few ZFS questions:
1. ZFS dynamic striping will automatically use newly added devices when
there are write requests. A customer has a *mostly read-only* application
with an I/O bottleneck, and they wonder whether there is a ZFS command or
mechanism to manually rebalance existing data when adding new drives to
an existing pool. (A sketch of the workaround we've considered follows
the questions below.)
2. Will ZFS automatically/proactively seek out bad blocks (self-healing)
when there are idle CPU cycles? I don't think so, but I'd like to get
confirmation. We are aware of 'zpool scrub', a manual way to verify
checksums and correct bad blocks, and we also know that a bad block will
be self-healed when there is an access request to it. (A scheduling
sketch follows below.)
3. Can zpool detect and alert if server2 is attempting to import a ZFS
pool that is currently imported by server1? And can server2 force an
import if server1 crashes (a manual failover scenario)? (The sequence we
have in mind is sketched below.)
4. When S10 ZFS boot is available, will Sun offer a migration strategy
(commands, processes, etc.) to convert/migrate root devices from SVM/VxVM
to a ZFS root file system? (A purely hypothetical sketch of the workflow
we'd hope for follows below.)
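
Regarding question 1: assuming no built-in rebalance command exists, the
workaround we've considered is to grow the pool and then rewrite the data
so it stripes across all devices. A rough sketch; the pool, dataset, and
device names (tank, tank/data, c3t0d0) are purely illustrative:

  # Add a new device to the pool; only new writes will stripe onto it.
  zpool add tank c3t0d0

  # Rewrite the data to spread it across all devices, e.g. with a local
  # send/receive into a fresh dataset, then swap the names.
  zfs snapshot tank/data@rebalance
  zfs send tank/data@rebalance | zfs receive tank/data-new
  zfs rename tank/data tank/data-old
  zfs rename tank/data-new tank/data

This temporarily needs space for two copies of the data, so we'd much
prefer an in-place mechanism if one is available.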
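
On question 2: lacking a background checker, we would approximate one by
scheduling scrubs during off-peak hours. A minimal sketch, with the pool
name tank assumed:

  # root crontab entry: start a scrub every Sunday at 02:00
  0 2 * * 0 /usr/sbin/zpool scrub tank

  # afterwards, check scrub progress and any repaired errors
  zpool status -v tank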
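
For question 3: the failover sequence we have in mind is roughly the
following (pool name tank assumed); we'd like confirmation that the plain
import on server2 is refused while server1 still holds the pool:

  # on server2 while server1 is alive: should warn that the pool may be
  # in use by another system
  zpool import tank

  # after server1 has crashed: force the import despite the stale
  # ownership recorded in the pool labels
  zpool import -f tank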
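
And for question 4, a purely hypothetical sketch of the kind of one-step
workflow we'd hope for; since ZFS boot isn't released yet, the Live
Upgrade invocations and names here are guesses, not documented syntax:

  # create a root pool on a spare slice (hypothetical)
  zpool create rpool c1t1d0s0

  # clone the current SVM/VxVM root into a new boot environment on the
  # pool, activate it, and reboot (hypothetical lucreate usage)
  lucreate -n zfsBE -p rpool
  luactivate zfsBE
  init 6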
Best regards,
Kimberly