Hi All,
I recently posted that I had successfully deployed ZFS on a SAN.
Well, I just had a disk fail on the second day of production, and I am
currently in downtime waiting for a replacement disk from Sun. I have a
Sun SE 3511 array with 5 x 500 GB SATA-I disks in a RAID 5 set. This
2 TB logical drive is partitioned into 10 x 200 GB slices. I gave 4 of
these slices to a Solaris 10 U2 machine and added each of them to a
concatenated (non-redundant) zpool, as listed below:
[EMAIL PROTECTED] ~]# zpool status zp1
  pool: zp1
 state: ONLINE
 scrub: none requested
config:

        NAME                     STATE     READ WRITE CKSUM
        zp1                      ONLINE       0     0     0
          c2t216000C0FF892AE3d0  ONLINE       0     0     0
          c2t216000C0FF892AE3d2  ONLINE       0     0     0
          c2t216000C0FF892AE3d3  ONLINE       0     0     0
          c2t216000C0FF892AE3d4  ONLINE       0     0     0

errors: No known data errors
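For context, a pool like the one above would have been created with a
command along these lines (a sketch reconstructed from the status output
above, not my exact command history):

```shell
# Create a pool from four SAN-presented slices with no mirror or raidz
# keyword. ZFS treats these as a dynamic stripe: with no redundancy at
# the pool level, ZFS can detect data corruption via checksums but
# cannot repair it, and losing any one device faults the whole pool.
zpool create zp1 \
    c2t216000C0FF892AE3d0 \
    c2t216000C0FF892AE3d2 \
    c2t216000C0FF892AE3d3 \
    c2t216000C0FF892AE3d4
```

In this layout the only redundancy is the hardware RAID 5 inside the
3511 itself, which is presumably why the pool still shows ONLINE after
the physical disk failure.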
Basically, is this a supported ZFS configuration? And you are going to
laugh, but do you think my ZFS configuration caused the drive failure?
Cheers,
Mike
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss