I'll take this as a yes.
Been googling around on this to no avail...

We're hoping to soon put into production an x4500 with a big ZFS pool,
replacing a (piece of junk) NAS head which replaced our old trusty
NetApp.

We configured each of those older boxes to send out an email when there
was a component failure.
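Not from the thread itself, just a rough sketch of one way to get the same
email-on-failure behaviour out of the x4500, assuming the goal is a mail
whenever ZFS reports a pool problem: a small cron job that polls
"zpool status -x". The script path, recipient address and schedule below
are made-up examples.

  #!/bin/sh
  # hypothetical /usr/local/bin/zfs_health_mail.sh
  # "zpool status -x" prints "all pools are healthy" when nothing is
  # wrong; anything else gets mailed out.
  STATUS=`zpool status -x`
  if [ "$STATUS" != "all pools are healthy" ]; then
      echo "$STATUS" | mailx -s "ZFS fault on `hostname`" root@example.com
  fi

  # example root crontab entry, every ten minutes:
  # 0,10,20,30,40,50 * * * * /usr/local/bin/zfs_health_mail.sh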
> We've got a couple of X4500's. I am able to get into ZFS
> Administration on one of them, but on the newer one (latest
> Solaris 10 8/07 with patches), which also has a larger number
> of ZFS filesystems, whenever I go to the Java Web Console and
> click on ZFS Administration, after a couple o
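The cause isn't obvious from the above; purely as an assumption, a first
diagnostic step would be to look at the webconsole SMF service itself and
its debug log rather than at ZFS:

  svcs -xv svc:/system/webconsole:console
  svcadm restart svc:/system/webconsole:console
  # the console debug log lives under /var/log/webconsole/
  # (exact file name differs between console versions)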
On Sat, Mar 15, 2008 at 2:06 PM, Marc Bevand <[EMAIL PROTECTED]> wrote:
> I think I'll go back to the 128-byte setting. I wouldn't want to
> see errors happening under heavy usage even though my stress
> tests were all successful (aggregate data rate of 610 MB/s
> generated by reading the disks
Oh, it should say retryable and normal write errors - I have permanent
errors too
/Martin
On 2 apr 2008, at 00:55, Richard Elling wrote:
> Martin Englund wrote:
>> I've got a newly created zpool where I know (from the previous UFS)
>> that one of the disks has retryable write errors.
>>
>> Wh
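As a rough aside (nothing here is specific to Martin's pool, whose layout
isn't shown): the driver-level error counters and the per-vdev ZFS
counters can be compared side by side with

  iostat -En        # soft/hard/transport error counts per disk, from the driver
  zpool status -v   # per-vdev READ/WRITE/CKSUM counters plus any permanent errors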
Hi experts,

zpool import shows a weird config for an old zpool:

bash-3.00# zpool import
  pool: data1
    id: 7539031628606861598
 state: FAULTED
status: One or more devices are missing from the system.
action: The pool cannot be imported. Attach the missing
        devices and try again.
   see: htt
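A couple of generic first steps that might narrow this down (the device
name below is only an example, not one of the actual pool devices):

  zpool import -d /dev/dsk     # rescan the default device directory explicitly
  format                       # confirm the OS sees all of the pool's disks
  zdb -l /dev/dsk/c1t2d0s0     # dump the ZFS labels on a suspect slice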
I seem to have made some progress. For some reason when I ran prtvtoc there
was no slice 0. I added it such that it would occupy the entire disk, and
now when I run an import it looks like this:
zpool import
  pool: store
    id: 7369085894363868358
 state: UNAVAIL
status: The pool was last acce
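One thing worth checking before going any further (the device name is an
example): whether the hand-built slice 0 actually lines up with where the
ZFS labels were written. If the pool was originally created on the whole
disk, ZFS put an EFI label on it, and a new SMI slice 0 may start at a
different offset, in which case the import code won't find valid labels
there:

  zdb -l /dev/dsk/c2t0d0s0     # should print four labels if the slice is right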