Sent from my iPhone
> On Sep 16, 2015, at 11:49 AM, Watson, Dan wrote:
>
> I've noticed that drives with a labeled WWN tend to be less error prone, and
> only when a driv
This is a distinction that only exists in your head. :)
I have only 768 HGST 600GB SAS drives spinning in production at
On 09/16/15 10:42 PM, Andrew Gabriel wrote:
Also, which OI/illumos version is that? I remember reading a while back
that some mpt_sas bugs were fixed in illumos.
Somewhere around 18 months ago IIRC, Nexenta pushed a load of fixes
for this into their git repo. I don't think I've seen these picked up
On 16/09/2015 19:24, Nikola M wrote:
On 09/11/15 08:57 PM, Watson, Dan wrote:
I'm using mpt_sas with SATA drives, and I _DO_ have error counters
climbing for some of those drives. Is that likely the cause?
Any other ideas?
It is generally strongly advised to use SATA disks on SATA controllers
and SAS disks on SAS controllers.
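To check those counters on illumos, the usual commands are roughly these
('tank' stands in for the real pool name):

  # Per-device soft/hard/transport error counters, as the kernel sees them:
  iostat -En
  # Per-vdev read/write/checksum error counters, as ZFS sees them:
  zpool status -v tank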
> readonly import option
> mentioned previously and it seems to be working fine. I'm betting I just
> did not have enough RAM available to do dedupe.
>
> Thanks!
> Dan
>
> -----Original Message-----
> From: Nikola M [mailto:minik...@gmail.com]
> Sent: September 16, 2015 11:25 AM
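A read-only import of this sort would look roughly like the following
('tank' and the altroot path are placeholders):

  # Import without allowing any writes; with readonly=on, ZFS never has to
  # update the dedup table, so scarce RAM hurts much less:
  zpool import -o readonly=on -R /mnt/recovery tank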
I'm betting I just did not have enough RAM available to do dedupe.
Thanks!
Dan
-----Original Message-----
From: Nikola M [mailto:minik...@gmail.com]
Sent: September 16, 2015 11:25 AM
To: Discussion list for OpenIndiana
Subject: Re: [OpenIndiana-discuss] Kernel panic on hung zpool accessed via lofi
On 09/11/15 08:57 PM, Watson, Dan wrote:
I'm using mpt_sas with SATA drives, and I _DO_ have error counters climbing for
some of those drives. Is that likely the cause?
Any other ideas?
It is generally strongly advised to use SATA disks on SATA controllers
and SAS disks on SAS controllers. And to
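To gauge whether a pool's dedup table actually fits in RAM, something along
these lines gives a rough idea ('tank' is again a placeholder):

  # DDT histogram and entry counts for a pool already using dedup:
  zpool status -D tank
  # Simulate dedup on an existing pool and estimate the DDT it would need
  # (read-only analysis; dedup is not actually enabled):
  zdb -S tank

Each unique block costs a few hundred bytes of DDT in core, so the table for
a multi-terabyte pool can easily outgrow main memory.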
>>Subject: Re: [OpenIndiana-discuss] Kernel panic on hung zpool accessed
>>via lofi
>>
>>On September 11, 2015, at 20:57:46 CEST, "Watson, Dan"
>>wrote:
>>>Hi all,
>>>
>>>I've been enjoying OI for quite a while, but I'm running into a problem
>-----Original Message-----
>From: Jim Klimov [mailto:jimkli...@cos.ru]
>Sent: September 12, 2015 10:31 AM
>To: Discussion list for OpenIndiana; Watson, Dan;
>openindiana-discuss@openindiana.org
>Subject: Re: [OpenIndiana-discuss] Kernel panic on hung zpool accessed via lofi
On September 11, 2015, at 20:57:46 CEST, "Watson, Dan"
wrote:
>Hi all,
>
>I've been enjoying OI for quite a while, but I'm running into a problem
>accessing a zpool on disk image files sitting on ZFS, accessed via lofi,
>that I hope someone can give me a hint on.
>
>To recover data from a zpool I've copied slice 0 off of all the disks
Hi all,
I've been enjoying OI for quite a while, but I'm running into a problem
accessing a zpool on disk image files sitting on ZFS, accessed via lofi,
that I hope someone can give me a hint on.
To recover data from a zpool I've copied slice 0 off of all the disks to a
different host under /al
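For that lofi route, the usual illumos recipe is roughly the following
(image paths and pool name are placeholders):

  # Attach each copied slice image to a lofi block device:
  lofiadm -a /recovery/disk0_s0.img    # prints e.g. /dev/lofi/1
  lofiadm -a /recovery/disk1_s0.img    # prints e.g. /dev/lofi/2
  # Point the import at the lofi devices, read-only so nothing is ever
  # written back onto the image files:
  zpool import -d /dev/lofi -o readonly=on tank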