On Sat, Jan 7, 2012 at 7:37 PM, Richard Elling wrote:
> Hi Jim,
>
> On Jan 6, 2012, at 3:33 PM, Jim Klimov wrote:
>
> > Hello all,
> >
> > I have a new idea up for discussion.
> >
> > Several RAID systems have implemented "spread" spare drives
> > in the sense that there is not an idling disk waiting to
2012-01-08 5:37, Richard Elling wrote:
The big question is whether they are worth the effort. Spares solve a
serviceability problem and only impact availability in an indirect manner.
For single-parity
solutions, spares can make a big difference in MTTDL, but have almost no impact
on MTTDL for double-parity
On Jan 7, 2012, at 7:12 AM, Edward Ned Harvey wrote:
>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>> boun...@opensolaris.org] On Behalf Of Jim Klimov
>>
>> For smaller systems such as laptops or low-end servers,
>> which can house 1-2 disks, would it make sense to dedicate
Hi Jim,
On Jan 6, 2012, at 3:33 PM, Jim Klimov wrote:
> Hello all,
>
> I have a new idea up for discussion.
>
> Several RAID systems have implemented "spread" spare drives
> in the sense that there is not an idling disk waiting to
> receive a burst of resilver data filling it up, but the
> > capacity of the spare disk is spread among all drives in
> > the array.
On Sat, 7 Jan 2012, Jim Klimov wrote:
Several RAID systems have implemented "spread" spare drives
in the sense that there is not an idling disk waiting to
receive a burst of resilver data filling it up, but the
capacity of the spare disk is spread among all drives in
the array. As a result, the
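(For what it's worth, later OpenZFS releases, 2.1 and newer, implement exactly this idea as dRAID, where the spare's capacity is distributed across all children of the vdev. A sketch, with illustrative disk names and a hypothetical pool name:)

```shell
# dRAID vdev spec: draid<parity>:<data>d:<children>c:<spares>s
# Here: single parity, 4 data disks per redundancy group, 6 children
# total, 1 distributed spare. (6 - 1) children must divide evenly into
# (4 data + 1 parity) groups, which holds here.
zpool create tank draid1:4d:6c:1s \
    c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0
```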
On Sat, 7 Jan 2012, Jim Klimov wrote:
I believe in this case it might make sense to boot the
target system from this BootCD and use "zpool upgrade"
from this OS image. This way you can be more sure that
your recovery software (Solaris BootCD) would be helpful :)
Also keep in mind that it would
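(For reference, the commands involved when booted from the BootCD environment; the pool name below is a placeholder:)

```shell
# List imported pools whose on-disk version is older than this OS supports
zpool upgrade
# Show what each on-disk version adds
zpool upgrade -v
# Import the pool from the BootCD environment, then upgrade it
zpool import tank
zpool upgrade tank
```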
I wonder if it is possible (currently or in the future as an RFE)
to tell ZFS to automatically read-ahead some files and cache them
in RAM and/or L2ARC?
One use-case would be for Home-NAS setups where multimedia (video
files or catalogs of images/music) are viewed from a ZFS box. For
example, if
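(Absent such an RFE, one workaround is to warm the ARC yourself, e.g. from cron before typical viewing hours, by sequentially reading the files; evicted blocks then become candidates for L2ARC. A minimal sketch; the media path in the usage comment is a placeholder:)

```shell
# warm_cache DIR: sequentially read every regular file under DIR so
# its blocks land in the ZFS ARC (and may later spill to L2ARC).
warm_cache() {
    find "$1" -type f | while read -r f; do
        # Reading to /dev/null pulls the blocks through the ARC
        cat "$f" > /dev/null
    done
}

# Example with a placeholder path:
# warm_cache /tank/media
```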
It seems that S11 shadow migration can help :-)
On 1/7/2012 9:50 AM, Jim Klimov wrote:
Hello all,
I understand that relatively high fragmentation is inherent
to ZFS due to its COW and possible intermixing of metadata
and data blocks (of which metadata path blocks are likely
to expire and get freed relatively quickly).
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Jim Klimov
>
>I understand that relatively high fragmentation is inherent
> to ZFS due to its COW and possible intermixing of metadata
> and data blocks (of which metadata path blocks are likely
> to expire and get freed relatively quickly).
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Jim Klimov
>
>For smaller systems such as laptops or low-end servers,
> which can house 1-2 disks, would it make sense to dedicate
> a 2-4Gb slice to the ZIL for the data pool, separate from
> rpool?
Hello all,
I understand that relatively high fragmentation is inherent
to ZFS due to its COW and possible intermixing of metadata
and data blocks (of which metadata path blocks are likely
to expire and get freed relatively quickly).
I believe it was sometimes implied on this list that such
fragmentation
Hi Grant
On 01/06/2012 04:50 PM, Richard Elling wrote:
> Hi Grant,
>
> On Jan 4, 2012, at 2:59 PM, grant lowe wrote:
>
>> Hi all,
>>
>> I've got a solaris 10 running 9/10 on a T3. It's an oracle box with 128GB
>> memory. Right now I've been trying to load test the box with bonnie++.
2012-01-06 17:49, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Ivan Rodriguez
Dear list,
I'm about to upgrade a zpool from version 10 to version 29. I suppose that
this upgrade will improve several performance issues that
Hello all,
For smaller systems such as laptops or low-end servers,
which can house 1-2 disks, would it make sense to dedicate
a 2-4Gb slice to the ZIL for the data pool, separate from
rpool? Example layout (single-disk or mirrored):
s0 - 16Gb - rpool
s1 - 4Gb - data-zil
s3 - *Gb - data pool
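(A sketch of how the data-pool side of such a layout could be assembled; device names are hypothetical, and rpool itself would normally be created by the installer:)

```shell
# Single-disk variant: s3 holds the data pool, s1 its separate log
zpool create data c0t0d0s3 log c0t0d0s1

# Mirrored variant across two disks laid out identically
zpool create data mirror c0t0d0s3 c0t1d0s3 \
    log mirror c0t0d0s1 c0t1d0s1
```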
Hello, Jesus,
I have transitioned a number of systems roughly by the
same procedure as you've outlined. Sadly, my notes are
not in English so they wouldn't be of much help directly;
but I can report that I had success with similar "in-place"
manual transitions from mirrored SVM (pre-solaris 10u
15 matches