Sorry, I was skipping bits to get to the main point. I did use replace (as
previously instructed on the list). I think that worked because my spare had
taken over for the failed drive. That's the same situation now - spare in
service for the failed drive.
Sent from my iPhone
On Nov 27, 2012,
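A sketch of the spare-in-service replace being described; the pool and
device names below are made up:

    # the hot spare c0t5d0 is INUSE covering the faulted c0t2d0
    zpool status tank

    # resilver a fresh disk into the failed slot; when it completes,
    # the hot spare is released back to AVAIL automatically
    zpool replace tank c0t2d0 c0t6d0

    # alternatively, promote the in-service spare to a permanent member
    zpool detach tank c0t2d0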
Hi Chris,
On Tue, Nov 27, 2012 at 6:56 PM, Chris Dunbar - Earthside, LLC <
cdun...@earthside.net> wrote:
> Hello,
>
> I have a degraded mirror set and this has happened a few times (not
> always the same drive) over the last two years. In the past I replaced the
> drive and r
And you can try 'zpool online' on the failed drive to see if it comes back
online.
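For example, assuming the pool is 'tank' and the faulted disk is c0t2d0:

    zpool online tank c0t2d0
    zpool status tank    # see whether it resilvers or faults again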
On Nov 27, 2012 6:08 PM, "Freddie Cash" wrote:
> You don't use replace on mirror vdevs.
>
> 'zpool detach' the failed drive. Then 'zpool attach' the new drive.
> On Nov 27, 2012 6:00 PM, "Chris Dunbar - Earthside,
You don't use replace on mirror vdevs.
'zpool detach' the failed drive. Then 'zpool attach' the new drive.
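A sketch of that sequence, assuming pool 'tank', surviving mirror half
c0t1d0, failed disk c0t2d0, and new disk c0t3d0:

    zpool detach tank c0t2d0          # drop the failed half of the mirror
    zpool attach tank c0t1d0 c0t3d0   # mirror the new disk onto the survivor
    zpool status tank                 # watch the resilver

Note that attach needs an existing device to mirror against, which is
why the surviving disk is named explicitly.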
On Nov 27, 2012 6:00 PM, "Chris Dunbar - Earthside, LLC" <
cdun...@earthside.net> wrote:
> Hello,
>
> I have a degraded mirror set and this has happened a few times (not
> a
Hello,
I have a degraded mirror set and this has happened a few times (not
always the same drive) over the last two years. In the past I replaced the
drive and ran zpool replace and all was well. I am wondering, however,
if it is safe to run zpool replace without replacing the drive to s
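For reference, zpool replace has a one-argument form that rebuilds the
same slot; the man page intends it for a disk physically swapped into
the same location, so on a merely faulted disk it amounts to forcing a
rebuild of the same device (pool and device names assumed):

    # with no new_device argument, c0t2d0 is resilvered in place
    zpool replace tank c0t2d0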
Going a bit on a tangent, does anyone know if those drives are
available for sale anywhere?
On Tue, Nov 27, 2012 at 5:13 AM, Eugen Leitl wrote:
> Now there are multiple configurations for this.
> Some using Linux (root fs on a RAID10, /home on
> RAID 1) or zfs. Now zfs on Linux probably wouldn't
> do hybrid zfs pools (would it?)
Sure it does. You can even use the whole disk as zfs, with
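A hybrid pool under ZFS on Linux is just the usual log and cache vdevs;
a sketch with assumed device names:

    # HDD mirror for data, SSD partitions for ZIL (log) and L2ARC (cache)
    zpool create tank mirror /dev/sda /dev/sdb \
        log mirror /dev/sdc1 /dev/sdd1 \
        cache /dev/sdc2 /dev/sdd2

Log vdevs can be mirrored; cache vdevs cannot, which is fine since
losing an L2ARC device only loses cached copies.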
Now that I've thought about it some more, a follow-up is due on my advice:
1) While the best practices do (or did) dictate setting up zone roots in
rpool, this is certainly not required - and I maintain lots of
systems which store zones in separate data pools. This minimizes
the write impact on rpools
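A sketch of a zone root living in a data pool rather than rpool (the
pool and zone names are made up):

    zfs create -o mountpoint=/zones datapool/zones
    zonecfg -z web1 "create; set zonepath=/zones/web1"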
Performance-wise, I think you should go for mirrors/raid10, and
separate the pools (i.e. rpool mirror on SSD and data mirror on
HDDs). If you have 4 SSDs, you might mirror the other pair for
zone roots or some databases in datasets delegated into zones,
for example (see the sketch below). Don't use dedup. Carve out som
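Delegating a dataset into a zone for the database case might look like
this (names assumed):

    zfs create data/db
    zonecfg -z web1 "add dataset; set name=data/db; end"
    # inside the zone, data/db appears as a dataset the zone can
    # snapshot and manage itself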
On Tue, Nov 27, 2012 at 12:12:43PM +, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris) wrote:
> > From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> > boun...@opensolaris.org] On Behalf Of Eugen Leitl
> >
> > can I make e.g. LSI SAS3442E
> > directly do SSD caching (it s
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Eugen Leitl
>
> can I make e.g. LSI SAS3442E
> directly do SSD caching (it says something about CacheCade,
> but I'm not sure it's an OS-side driver thing), as it
> is supposed to boost IOPS? U
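Whatever the controller offers, the ZFS-side way to get SSD caching is
to give the pool the SSDs directly rather than hide them behind
CacheCade (device names assumed):

    zpool add tank cache c2t0d0   # L2ARC: SSD read cache
    zpool add tank log c2t1d0     # SLOG: soaks up synchronous writes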
>> [...]
>> The results were the same with 10 or 25 drives, so I suspected either the
>> PCI bus or the expander in the 25-drive bay (HP 530946-001).
>> Plugging the disks directly into the LSI card gained a few MB/s:
>> the expander was limiting things a bit, but moreover, it disallowed to u
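One way to isolate the expander from the disks is raw sequential reads
straight off the device nodes, first direct-attached and then behind
the expander (Solaris-style device paths assumed):

    # per-disk raw read, bypassing ZFS entirely
    dd if=/dev/rdsk/c3t0d0s0 of=/dev/null bs=1024k count=4096
    # watch aggregate bandwidth while several of these run in parallel
    iostat -xn 5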
On 11/27/12 1:52 AM, "Grégory Giannoni" wrote:
>
>On 27 Nov 2012 at 01:17, Erik Trimble wrote:
>
>> On 11/26/2012 12:54 PM, Grégory Giannoni wrote:
>>> [snip]
>>> I switched a few months ago from Sun X45x0 to HP gear: my fast NAS boxes are
>>> now DL180 G6. I got better performance using the LSI 9240-8i rath