Re: [zfs-discuss] Convert Zpool RAID Types

2006-11-30 Thread Jason J. W. Williams
Hi Richard, Thank you for taking so much time on this! The array is a StorageTek FLX210, so it is a bit underpowered...best we could afford at the time. In terms of the load on it, we have two servers running Solaris 10. Each physical server then has two containers, each running a MySQL instance

[zfs-discuss] Re: What happens when adding a mirror, or put a mirror offline/online

2006-11-30 Thread Pierre Chatelier
I finally found the answer myself. By re-reading the doc, I re-discovered the term "resilvering", which I did not understand properly the first time.

Re: [zfs-discuss] raidz DEGRADED state

2006-11-30 Thread Thomas Garner
In the same vein... I currently have a 400GB disk that is full of data on a Linux system. If I buy 2 more disks and put them into a raid-z'ed zfs under Solaris, is there a generally accepted way to build a degraded array with the 2 disks, copy the data to the new filesystem, and then move the or
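One commonly suggested (if risky) approach is to stand a sparse file in for the missing disk, offline it immediately, copy the data across, then resilver onto the freed-up disk. A rough sketch, with placeholder device names and an assumed ~400GB disk size:

# mkfile -n 400g /var/tmp/fakedisk                 (sparse file standing in for the third disk)
# zpool create tank raidz c1t1d0 c1t2d0 /var/tmp/fakedisk
# zpool offline tank /var/tmp/fakedisk             (pool drops to DEGRADED but stays usable)
... copy the data over, then free up the original 400GB disk ...
# zpool replace tank /var/tmp/fakedisk c1t3d0      (resilver onto the real third disk)

Note the pool has no redundancy at all until that last resilver finishes, and any blocks written before the offline really do land in the sparse file, so offline it before copying anything.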

Re: [zfs-discuss] How do I obtain zfs with spare implementation?

2006-11-30 Thread Dick Davies
On 30/11/06, Michael Barto <[EMAIL PROTECTED]> wrote: I would like to update some of our Solaris 10 OS systems to the new zfs file system that supports spares. The Solaris 6/06 version does have zfs but does not have this feature. What is the best way to upgrade to this functionality? Hot

[zfs-discuss] How do I obtain zfs with spare implementation?

2006-11-30 Thread Michael Barto
I would like to update some of our Solaris 10 OS systems to the new zfs file system that supports spares. The Solaris 6/06 version does have zfs but does not have this feature. What is the best way to upgrade to this functionality? Also we have a 3/05 version of Solaris and the Sun Express nv_
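If memory serves, hot spares arrived with ZFS pool version 3, in the Solaris 10 11/06 (U3) timeframe. Once on a build that supports them, the steps would look roughly like this — pool and device names are placeholders:

# zpool upgrade -v            (lists the pool versions this build supports; spares need v3)
# zpool upgrade mypool        (bump the pool's on-disk version after the OS upgrade)
# zpool add mypool spare c3t7d0
# zpool status mypool         (the spare now shows up under its own 'spares' section)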

Re: [zfs-discuss] raidz DEGRADED state

2006-11-30 Thread Krzys
Ah, did not see your follow up. Thanks. Chris On Thu, 30 Nov 2006, Cindy Swearingen wrote: Sorry, Bart is correct: If new_device is not specified, it defaults to old_device. This form of replacement is useful after an existing disk has failed and

Re: [zfs-discuss] raidz DEGRADED state

2006-11-30 Thread Krzys
Hold on, so I need to add another drive to the system for the replacement? I do not have any more slots in my system to add another disk to it. :( Chris On Thu, 30 Nov 2006, Cindy Swearingen wrote: One minor comment is to identify the replacement drive, like this: # zpool replace mypool2 c3t

Re: [zfs-discuss] raidz DEGRADED state

2006-11-30 Thread Krzys
Great, thank you, it certainly helped. I did not want to lose data on that disk, so I wanted to be safe rather than sorry. Thanks for the help. Chris On Thu, 30 Nov 2006, Bart Smaalders wrote: Krzys wrote: my drive did go bad on me, how do I replace it? I am running Solaris 10 U2 (by the wa

Re: [zfs-discuss] Re: system wont boot after zfs

2006-11-30 Thread Casper . Dik
>All with the same problem. I disabled the onboard nvidia nforce 410/430
>raid bios in the bios in all cases. Now whether it actually does not look
>for a signature, I do not know. I'm attempting to make this box into an
>iSCSI target for my ESX environments. I can put W3K and SanMelody on ther

Re: [zfs-discuss] Convert Zpool RAID Types

2006-11-30 Thread Richard Elling
Hi Jason, It seems to me that a full analysis would need some more detailed information, so to keep the ball rolling I'll respond generally. Jason J. W. Williams wrote: Hi Richard, Been watching the stats on the array and the cache hits are < 3% on these volumes. We're ver

Re: [zfs-discuss] Re: system wont boot after zfs

2006-11-30 Thread Jonathan Edwards
Dave, which BIOS manufacturers and revisions? That seems to be more of the problem, as choices are typically limited across vendors .. and I take it you're running 6/06 u2 Jonathan On Nov 30, 2006, at 12:46, David Elefante wrote: Just as background: I attempted this process on the follo

Re: [zfs-discuss] raidz DEGRADED state

2006-11-30 Thread Cindy Swearingen
Sorry, Bart is correct: If new_device is not specified, it defaults to old_device. This form of replacement is useful after an existing disk has failed and has been physically replaced. In this case, the new disk may have the same
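Concretely, the two forms look like this — pool and device names taken from the thread:

# zpool replace mypool2 c3t6d0              (new disk physically swapped into the old disk's slot)
# zpool replace mypool2 c3t6d0 c3t7d0       (new disk attached at a different target)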

Re: [zfs-discuss] raidz DEGRADED state

2006-11-30 Thread Cindy Swearingen
One minor comment is to identify the replacement drive, like this: # zpool replace mypool2 c3t6d0 c3t7d0 Otherwise, zpool will error... cs Bart Smaalders wrote: Krzys wrote: my drive did go bad on me, how do I replace it? I am running Solaris 10 U2 (by the way, I thought U3 would be out i

Re: [zfs-discuss] raidz DEGRADED state

2006-11-30 Thread Bart Smaalders
Krzys wrote: my drive did go bad on me, how do I replace it? I am running Solaris 10 U2 (by the way, I thought U3 would be out in November, will it be out soon? does anyone know?)

[11:35:14] server11: /export/home/me > zpool status -x
  pool: mypool2
 state: DEGRADED
status: One or more devi

Re: [zfs-discuss] ZFS and EFI labels

2006-11-30 Thread Torrey McMahon
Douglas Denny wrote: In reading the list archives, am I right to conclude that disks larger than 1 TB need to support EFI? In one of my projects the SAN does not support EFI labels under Solaris. Does this mean I would have to create a pool with disks < 1 TB? Out of curiosity ... what array is

Re: [zfs-discuss] raidz DEGRADED state

2006-11-30 Thread Tim Foster
Hi Krzys, On Thu, 2006-11-30 at 12:09 -0500, Krzys wrote: > my drive did go bad on me, how do I replace it? You should be able to do this using zpool replace. There's output below from me simulating your situation with file-based pools. This is documented in Chapters 7 and 10 of the ZFS admin g
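The simulation output is truncated in the digest, but a file-backed dry run of the same replace goes roughly like this:

# mkfile 100m /var/tmp/d1 /var/tmp/d2 /var/tmp/d3 /var/tmp/d4
# zpool create testpool raidz /var/tmp/d1 /var/tmp/d2 /var/tmp/d3
# zpool offline testpool /var/tmp/d2        (stand in for the failed drive)
# zpool replace testpool /var/tmp/d2 /var/tmp/d4
# zpool status -x testpool                  (DEGRADED while resilvering, healthy when done)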

[zfs-discuss] raidz DEGRADED state

2006-11-30 Thread Krzys
my drive did go bad on me, how do I replace it? I am running Solaris 10 U2 (by the way, I thought U3 would be out in November, will it be out soon? does anyone know?)

[11:35:14] server11: /export/home/me > zpool status -x
  pool: mypool2
 state: DEGRADED
status: One or more devices could not

[zfs-discuss] Managed to corrupt my pool

2006-11-30 Thread Jim Hranicky
Platform:
- old dell workstation with an Andataco gigaraid enclosure plugged into an Adaptec 39160
- Nevada b51

Current zpool config:
- one two-disk mirror with two hot spares

In my ferocious pounding of ZFS I've managed to corrupt my data pool. This is what I've been doing to test
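For anyone reproducing this sort of abuse testing, the usual detect-and-assess loop on a redundant pool is something like the following — pool name is a placeholder:

# zpool scrub mypool          (walk every block and verify its checksum against the redundancy)
# zpool status -v mypool      (per-device error counters, plus any unrecoverable files)
# zpool clear mypool          (reset the counters once the underlying cause is fixed)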

Re: [zfs-discuss] ZFS and EFI labels

2006-11-30 Thread Darren Dunham
> In reading the list archives, am I right to conclude that disks larger than
> 1 TB need to support EFI? In one of my projects the SAN does not support EFI
> labels under Solaris. Does this mean I would have to create a pool with
> disks < 1 TB?

I would assume so. The Solaris VTOC label (likely y
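One workaround, assuming the array can present LUNs under 1 TB: hand ZFS slices instead of whole disks, since ZFS only writes an EFI label when it is given the whole disk. A sketch with placeholder device names:

# format c2t1d0                           (interactively put an SMI/VTOC label on the LUN, carve out s0)
# zpool create tank c2t1d0s0 c2t2d0s0     (slice vdevs keep the existing SMI label)

The tradeoff is that ZFS won't manage the disk's write cache when it is only given a slice.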

[zfs-discuss] ZFS and EFI labels

2006-11-30 Thread Douglas Denny
In reading the list archives, am I right to conclude that disks larger than 1 TB need to support EFI? In one of my projects the SAN does not support EFI labels under Solaris. Does this mean I would have to create a pool with disks < 1 TB? TIA. -Doug

[zfs-discuss] What happens when adding a mirror, or put a mirror offline/online

2006-11-30 Thread Pierre Chatelier
Hello, Sorry if this is a newbie question, but I am not an administrator familiar with storage, I am just curious about ZFS. I tried to look for an answer in the ZFS documentation and the mailing list, but I found nothing obvious. Can you tell me what happens when an additional device is
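As the self-follow-up above notes, the short answer is resilvering. The operations in question look like this — pool and device names are made up:

# zpool attach mypool c1t0d0 c1t1d0     (single disk becomes a two-way mirror; the new side resilvers)
# zpool offline mypool c1t1d0           (take one side out; I/O continues on the other)
# zpool online mypool c1t1d0            (bring it back; only changes made while offline are resilvered)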

Re: [zfs-discuss] Re: system wont boot after zfs

2006-11-30 Thread Jonathan Edwards
On Nov 29, 2006, at 13:24, [EMAIL PROTECTED] wrote: I suspect a lack of an MBR could cause some BIOS implementations to barf .. Why? Zeroed disks don't have that issue either. You're right - I was thinking that a lack of an MBR with a GPT could be causing problems, but actually it loo
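To check whether a disk actually carries an MBR boot signature, dump sector 0 of the whole-disk device (p0 on Solaris x86) and look for 55 aa in the last two bytes — device name is a placeholder:

# dd if=/dev/rdsk/c1t0d0p0 bs=512 count=1 2>/dev/null | od -A d -t x1 | tail -3

A GPT-labeled disk should normally carry a protective MBR there; a zeroed disk has neither, which is why a BIOS that barfs on the GPT but not on all-zeroes would be odd.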

[zfs-discuss] ZFS caught resilvering when only one side of mirror present

2006-11-30 Thread Darren J Moffat
When I booted my laptop up this morning it took much longer than normal and there was a lot of disk activity even after I logged in. A quick use of dtrace and iostat revealed that all the writes were to the zpool. I ran zpool status and found that the pool was resilvering. Strange thing is t
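To see where a resilver like this stands, and to verify the data once it completes — pool name is a placeholder:

# zpool status -v mypool      (shows resilver progress and which device is being rebuilt)
# zpool scrub mypool          (after it finishes, re-verify every block's checksum)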