On 04/26/2011 01:25 AM, Nikola M. wrote:
On 04/26/11 01:56 AM, Lamp Zy wrote:
Hi,
One of my drives failed in Raidz2 with two hot spares:
What are your zpool/zfs versions? (Run "zpool upgrade", then Ctrl+C; "zfs upgrade", then Ctrl+C.)
The latest zpool/zfs versions available, by numerical designation, across all
OpenSolaris-based distributions are zpool version 28 and zfs version 5. (That
is why one should not upgrade to the Solaris 11 Express zpool/zfs versions if
you want to install, or keep using in multiple ZFS boot environments, other
OpenSolaris-based distributions.)
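With no arguments both commands only report the versions and do not upgrade
anything; on a system already at the latest versions the output looks roughly
like this (illustrative):

# zpool upgrade
This system is currently running ZFS pool version 28.
All pools are formatted using this version.

# zfs upgrade
This system is currently running ZFS filesystem version 5.
All filesystems are formatted with the current version.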
What OS are you using with ZFS?
Do you use Solaris 10 (which update release?), Solaris 11 Express, OpenIndiana
oi_148 dev / 148b with illumos, OpenSolaris 2009.06/snv_134b, Nexenta, Nexenta
Community, SchilliX, FreeBSD, Linux zfs-fuse...? (I guess you are not yet using
Linux with the ZFS kernel module, but just to mention that it is available too,
as is ZFS on OS X.)
Thank you for all the replies.
Here is what we are using.
- Hardware:
Server: Sun Fire X4240
DAS storage: Sun Storage J4400 with 24 x 1 TB SATA drives. These are the
original drives; I assume they are identical.
- Software:
OS: Solaris 10 5/09 (s10x_u7wos_08) x86; stock install, no upgrades, no
patches.
ZFS pool version 10
ZFS filesystem version 3
Another confusing thing is that I wasn't able to take the failed drive
offline because there were "not enough replicas" (?). First, the drive had
already failed, and second, it's raidz2, which is the equivalent of RAID6 and
should be able to handle two failed drives. I skipped that step, but wanted
to mention it here.
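As best I recall, the attempt looked something like this (the device name is a
placeholder, since that disk no longer shows up, and the exact error wording
may have differed):

# zpool offline fwgpool0 <failed-disk>
cannot offline <failed-disk>: no valid replicas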
I used "zpool replace" and the resilver finished successfully, and then
"zpool detach" removed the failed drive.
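For reference, the sequence was roughly this (<failed-disk> and <new-disk> are
placeholders; I no longer have the exact device name of the dead drive since
it disappeared):

# zpool replace fwgpool0 <failed-disk> <new-disk>
(wait for the resilver to complete)
# zpool detach fwgpool0 <failed-disk>

Now I have this: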
# zpool status fwgpool0
  pool: fwgpool0
 state: ONLINE
 scrub: resilver completed after 12h59m with 0 errors on Wed Apr 27 05:15:17 2011
config:

        NAME                       STATE     READ WRITE CKSUM
        fwgpool0                   ONLINE       0     0     0
          raidz2                   ONLINE       0     0     0
            c4t5000C500108B406Ad0  ONLINE       0     0     0
            c4t5000C50010F436E2d0  ONLINE       0     0     0
            c4t5000C50011215B6Ed0  ONLINE       0     0     0
            c4t5000C50011234715d0  ONLINE       0     0     0
            c4t5000C50011252B4Ad0  ONLINE       0     0     0
            c4t5000C500112749EDd0  ONLINE       0     0     0
            c4t5000C50014D70072d0  ONLINE       0     0     0
            c4t5000C500112C4959d0  ONLINE       0     0     0
            c4t5000C50011318199d0  ONLINE       0     0     0
            c4t5000C500113C0E9Dd0  ONLINE       0     0     0
            c4t5000C500113D0229d0  ONLINE       0     0     0
            c4t5000C500113E97B8d0  ONLINE       0     0     0
            c4t5000C50014D065A9d0  ONLINE       0     0     0
            c4t5000C50014D0B3B9d0  ONLINE       0     0     0
            c4t5000C50014D55DEFd0  ONLINE       0     0     0
            c4t5000C50014D642B7d0  ONLINE       0     0     0
            c4t5000C50014D64521d0  ONLINE       0     0     0
            c4t5000C50014D69C14d0  ONLINE       0     0     0
            c4t5000C50014D6B2CFd0  ONLINE       0     0     0
            c4t5000C50014D6C6D7d0  ONLINE       0     0     0
            c4t5000C50014D6D486d0  ONLINE       0     0     0
            c4t5000C50014D6D77Fd0  ONLINE       0     0     0
        spares
          c4t5000C50014D7058Dd0    AVAIL

errors: No known data errors
#
Great. So, now how do I identify which drive out of the 24 in the
storage unit is the one that failed?
I looked on the Internet for help but the problem is that this drive
completely disappeared. Even "format" and "iostat -En" show only 23
drives when there are physically 24.
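Every device name that does show up seems to be accounted for in the pool (or
as the remaining spare), so by elimination the invisible 24th disk should be
the failed one. The rough cross-check I had in mind is something like this
(the awk field positions are guesses from typical "format" output, so treat it
as a sketch):

# zpool status fwgpool0 | awk '/c4t/ {print $1}' | sort > /tmp/pool-disks
# format < /dev/null | awk '/c4t/ {print $2}' | sort > /tmp/os-disks
# diff /tmp/pool-disks /tmp/os-disks

But even if that confirms every visible disk is a pool member, it still does
not tell me which physical slot in the J4400 holds the dead drive.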
Any ideas how to identify which drive is the one that failed so I can
replace it?
Thanks
Peter