On Tue, Nov 27, 2012 at 08:52:06AM +0100, Grégory Giannoni wrote:
>
> The LSI 9240-4I was not able to connect to the 25-drive bay; I did not test
> the LSI 9260-16I or the LSI 9280-24i.
>
What was the problem connecting the LSI 9240-4i to the 25-drive bay?
-- Pasi
On 29 Nov 2012, at 09:27, Pasi Kärkkäinen wrote:
>> The LSI 9240-4I was not able to connect to the 25-drive bay; I did not test
>> the LSI 9260-16I or the LSI 9280-24i.
>>
>
> What was the problem connecting the LSI 9240-4i to the 25-drive bay?
>
The 25-drive backplane needs two SFF-8087 (multilane) cables.
I've heard a claim that ZFS relies too much on RAM caching, but
implements no sort of prioritization (indeed, I've seen no knobs to
tune this) - so that if the storage box receives many different
types of I/O requests with different "administrative weights" in
the view of admins, it cannot really throttle them accordingly.
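For context: there is no per-workload cache-priority knob that I know of, but
the overall ARC footprint can at least be observed and bounded. A minimal
Solaris/illumos sketch (the 4 GiB cap is just an example value):

    # Current ARC size and target size, in bytes:
    kstat -p zfs:0:arcstats:size
    kstat -p zfs:0:arcstats:c

    # Bound the ARC at 4 GiB via /etc/system (takes effect at next boot):
    echo 'set zfs:zfs_arc_max=4294967296' >> /etc/system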
Hi,
Say I have an LDoms guest that is using a ZFS root pool that is mirrored,
and the two sides of the mirror are coming from two separate vds
servers, that is:
  mirror-0
    c3d0s0
    c4d0s0
where c3d0s0 is served by one vds server and c4d0s0 is served by
another vds server.
Now if for some reason one of the vds servers becomes unavailable, what
happens to the guest's pool?
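For reference, a mirror like the one above is assembled and checked with
standard zpool commands; a sketch using the device names from the post
("rpool" is an assumed pool name):

    # Attach the disk served by the second vds server to the existing
    # root-pool disk, turning it into a two-way mirror:
    zpool attach rpool c3d0s0 c4d0s0

    # Watch the resilver and confirm both sides show ONLINE:
    zpool status rpool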
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Jim Klimov
>
> this is
> the part I am not certain about - it is roughly as cheap to READ the
> gzip-9 datasets as it is to read lzjb (in terms of CPU decompression).
Nope. I know LZJB is not comparable here: decompressing gzip-9 data costs
noticeably more CPU than decompressing LZJB.
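Worth noting that compression is a per-dataset property, so the trade-off can
be made per workload; a short sketch (dataset names are placeholders):

    # gzip-9 squeezes harder but burns more CPU than lzjb:
    zfs set compression=gzip-9 tank/archive
    zfs set compression=lzjb tank/active

    # Compare the achieved ratios afterwards:
    zfs get compressratio tank/archive tank/active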
I tried the zpool replace on the failed drive. It returned an I/O error, so I am
assuming that is confirmation that the drive is indeed dead. I'll visit the
data center tonight and swap it out. Thanks for everybody's help!
- Original Message -
From: "Edward Ned Harvey (opensolarisisdeadl