>
> As Edna and Robert mentioned, zpool attach will add the mirror.
> But note that the X4500 has only two possible boot devices:
> c5t0d0 and c5t4d0. This is a BIOS limitation. So you will want
> to mirror with c5t4d0 and configure the disks for boot. See the
> docs on ZFS boot for details on h
we made a mistake :(
tom
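(Roughly, the sequence Richard describes above would look something like the
following. This is only a sketch: the slice names and the explicit installgrub
step are assumptions, so check the ZFS boot docs before running anything.)

  # attach the second X4500 boot disk so rpool becomes a two-way mirror
  zpool attach rpool c5t0d0s0 c5t4d0s0

  # if the attach does not lay down the GRUB bootblocks itself, install
  # them on the new half of the mirror by hand
  installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c5t4d0s0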
On Wed, Jul 2, 2008 at 5:58 PM, Richard Elling <[EMAIL PROTECTED]> wrote:
> Tommaso Boccali wrote:
>>
>> Ciao, the root filesystem of my thumper is a ZFS with a single disk:
>>
>> bash-3.2# zpool status rpool
>> pool: rpool
>> state: ONLINE
>> scrub: none requested
>> co
Is 'zpool attach' enough for a root pool?
I mean, does it install GRUB bootblocks on the disk?
On Wed, Jul 2, 2008 at 1:10 PM, Robert Milkowski <[EMAIL PROTECTED]> wrote:
> Hello Tommaso,
>
> Wednesday, July 2, 2008, 1:04:06 PM, you wrote:
> the root filesystem of my thumper is a ZFS with a si
On Sat, Jul 5, 2008 at 9:48 PM, Brian Hechinger <[EMAIL PROTECTED]> wrote:
> On Sat, Jul 05, 2008 at 03:03:34PM -0500, Mike Gerdts wrote:
>> $ kstat -p ::vopstats_zfs:{nread,read_bytes,nwrite,write_bytes}
>> unix:0:vopstats_zfs:nread 418787
>> unix:0:vopstats_zfs:read_bytes 612076305
>>
On Sat, Jul 05, 2008 at 03:03:34PM -0500, Mike Gerdts wrote:
>
> You can access the kstats directly to get the counter values.
First off, let me say that: kstat++
That's too cool.
> $ kstat -p ::vopstats_zfs:{nread,read_bytes,nwrite,write_bytes}
> unix:0:vopstats_zfs:nread 418787
> unix:0:
FYI, we are literally just days from having this fixed.
Matt: after putback you really should blog about this one --
both to let people know that this long-standing bug has been
fixed, and to describe your approach to it.
It's a surprisingly tricky and interesting problem.
Jeff
On Sat, Jul 05,
Booted from 2008.05
and the error was the same as before: corrupted data for the last two disks.
zdb -l was the same as before: read label from disk 1 but not from disks 2 & 3.
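(For anyone following along, the label check referred to here is just zdb -l
run against each device in turn, something like the example below; the device
name is only a placeholder.)

  # dump the ZFS labels (up to four per disk) from one of the suspect disks
  zdb -l /dev/dsk/c0t2d0s0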
Mike Gerdts wrote:
> $ kstat -p ::vopstats_zfs:{nread,read_bytes,nwrite,write_bytes}
> unix:0:vopstats_zfs:nread 418787
> unix:0:vopstats_zfs:read_bytes 612076305
> unix:0:vopstats_zfs:nwrite 163544
> unix:0:vopstats_zfs:write_bytes 255725992
Thanks Mike, that's exactly what I w
If it ever does get released I'd love to hear about it. That bug, and the fact
it appears to have been outstanding for three years, was one of the major
reasons behind us not purchasing a bunch of x4500's.
On Sat, Jul 5, 2008 at 2:33 PM, Matt Harrison
<[EMAIL PROTECTED]> wrote:
> Alternatively is there a better way to get read/write ops etc from my
> pool for monitoring applications?
>
> I would really love if monitoring zfs pools from snmp was better all
> round, but I'm not going to reel off my wis
On Sat, Jul 5, 2008 at 9:34 PM, Robert Lawhead <
[EMAIL PROTECTED]> wrote:
> About a month ago (Jun 2008), I received information indicating that a
> putback fixing this problem was in the works and might appear as soon as
> b92. Apparently this estimate was overly optimistic; Does anyone know
>
About a month ago (Jun 2008), I received information indicating that a putback
fixing this problem was in the works and might appear as soon as b92.
Apparently this estimate was overly optimistic. Does anyone know anything about
progress on this issue, or have a revised estimate for the putback?
Hi gurus,
I like zpool iostat and I like system monitoring, so I set up a script
within sma to let me get the zpool iostat figures through snmp.
The problem is that as zpool iostat is only run once for each snmp
query, it always reports a static set of figures, like so:
[EMAIL PROTECTED]:snmp #
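(One way around the static figures, building on the kstat suggestion elsewhere
in this thread, is to sample the cumulative vopstats_zfs counters twice and
report the difference. A rough ksh sketch follows; the 5-second interval and
the plain-text output format are just assumptions, not part of anyone's actual
sma setup.)

  #!/bin/ksh
  # sample the cumulative ZFS vopstats counters twice, a few seconds apart,
  # and print approximate per-second read/write byte rates
  interval=5
  rb1=$(kstat -p ::vopstats_zfs:read_bytes | awk '{print $2}')
  wb1=$(kstat -p ::vopstats_zfs:write_bytes | awk '{print $2}')
  sleep $interval
  rb2=$(kstat -p ::vopstats_zfs:read_bytes | awk '{print $2}')
  wb2=$(kstat -p ::vopstats_zfs:write_bytes | awk '{print $2}')
  print "zfs read bytes/s:  $(( (rb2 - rb1) / interval ))"
  print "zfs write bytes/s: $(( (wb2 - wb1) / interval ))"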
Hi--
Here's the scoop, in probably too much detail:
I'm a sucker for new filesystems and new tech in general. For you old-
time Mac people, I installed Sequoia when it was first seeded, and had
to reformat my drive several times as it grew to the final release. I
flipped the "journaled" fla
Ross wrote:
> Just re-read that and it's badly phrased. What I meant to say is that a
> raid-z / raid-5 array based on 500GB drives seems to have around a 1 in 10
> chance of losing some data during a full rebuild.
>
>
>
Actually, I think it's been explained already why this is actually
Just re-read that and it's badly phrased. What I meant to say is that a raid-z
/ raid-5 array based on 500GB drives seems to have around a 1 in 10 chance of
losing some data during a full rebuild.
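(For what it's worth, a rough back-of-envelope for where a figure like that
comes from: consumer drives are typically specced at about one unrecoverable
read error per 10^14 bits. Rebuilding a failed 500GB drive in, say, a 5+1
array means reading roughly 5 x 500GB, about 2 x 10^13 bits, from the
surviving disks, so the expected number of unrecoverable errors during the
rebuild is around 0.2, i.e. on the order of a 1-in-5 to 1-in-10 chance of
hitting at least one, depending on the drive count and the actual error rate.)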
I've read various articles along those lines. My understanding is that a raid-z
/ raid-5 array built from 500GB-odd drives has around a 1 in 10 chance of losing
at least some data during a rebuild.
I've had raid-5 arrays fail at least 4 times, twice during a rebuild. In most
cases I've been able to recover th