Hello all
I upgraded OpenSolaris from snv_39 to snv_40 and now everything is OK. I
can use smcwebserver to manage my ZFS pool. ;-)
Thanks all.
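For anyone else who wants the web console after upgrading, the usual way to bring it up is roughly the following (the subcommand names are from memory, so double-check them on your build):
# smcwebserver enable    (have the console start at boot)
# smcwebserver start     (start it right away)
# smcwebserver status    (confirm it is running)
The console should then be reachable on the usual Java Web Console port (6789, if I remember correctly).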
___
Woo, this is the first piece of good news since I woke up. ;-)
I will try to upgrade from snv_39 to snv_40 and then post my results here.
Thanks, Halstead. ;-)
___
Hello Eric:
Thanks for your reply. ;-)
As a ZFS team member you really work hard on ZFS. Thank you very much for
giving us such a wonderful storage solution.
OK. As you said: Does 'zpool replace' work? In particular:
# zpool replace 3449041879167855716 c6t1d2
I tried that command (zpool
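(For reference, the general form of that command is "zpool replace [-f] <pool> <old-device> [<new-device>]". The long number above is the GUID that ZFS prints for a disk it can no longer find, and 'tank' below is just a placeholder for the actual pool name, which is not shown in the quote:)
# zpool replace tank 3449041879167855716 c6t1d2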
Hello All:
I have a disk array with 16 SATA disks in a JBOD configuration, attached to an LSI
FC HBA card. I use 2 raidz groups, each combining 8 disks. The zpool status
result is as follows:
=== zpool status ===
NAME STATE READ WRITE CKSUM
pool
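(For readers following along: a layout like that, two raidz groups of 8 disks in one pool, would typically be created with something along these lines; 'pool' and the c#t#d# device names here are placeholders, not the poster's actual names:)
# zpool create pool \
      raidz c6t0d0 c6t1d0 c6t2d0 c6t3d0 c6t4d0 c6t5d0 c6t6d0 c6t7d0 \
      raidz c6t8d0 c6t9d0 c6t10d0 c6t11d0 c6t12d0 c6t13d0 c6t14d0 c6t15d0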
Hello Halstead:
I got the same error... However, I tried to debug that JSP and it did not
work for me. @_@
Does anyone know how to solve this problem?
___
> raidz is like RAID 5, so you can survive the death of one disk, not 2.
> I would recommend you configure the 12 disks into 2 raidz groups;
> then you can survive the death of one drive from each group. This is
> what I did on my system.
Hi James, thank you very much. ;-)
I'll configure 2 raidz grou
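(A quick way to convince yourself of that property once the two groups exist: take one disk from each group offline and the pool should stay available, just DEGRADED. The pool and device names below are placeholders:)
# zpool offline mypool c3t0d0     (one disk from the first raidz group)
# zpool offline mypool c3t6d0     (one disk from the second raidz group)
# zpool status mypool             (pool reports DEGRADED but keeps serving data)
# zpool online mypool c3t0d0 c3t6d0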