Hi Richard,
Thanks for your reply.
>> I am struggling with a storage pool on a server, where I would like to
>> offline a device for replacement.
>> The pool consists of two-disk stripes set up in mirrors (yep, stupid, but we
>> were running out of VDs on the controller at the time, and that
Dear List,
I am struggling with a storage pool on a server, where I would like to offline
a device for replacement. The pool consists of two-disk stripes set up in
mirrors (yep, stupid, but we were running out of VDs on the controller at the
time, and that's where we are now...).
Here's the po
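For reference, the usual offline-and-replace sequence on a ZFS pool looks
roughly like the following; the pool and device names are placeholders, since
the message does not name the affected device:

  # identify the suspect device
  zpool status -v pool3
  # take it offline before pulling the disk
  zpool offline pool3 c3t2d0
  # after fitting the replacement in the same slot
  zpool replace pool3 c3t2d0
  # watch the resilver progress
  zpool status pool3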
Hi,
2011/12/19 Hung-Sheng Tsao (laoTsao):
> what is the ram size?
32 GB
> are there many snap? create then delete?
Currently, there are 36 snapshots on the pool - it is part of a fairly
normal backup regime of snapshots every 5 min, hour, day, week and
month.
> did you run a scrub?
Yes, as p
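As a rough illustration (the pool name pool3 is assumed here), the snapshot
count and the result of the last scrub can be checked with something like:

  # count snapshots on the pool (-H suppresses the header line)
  zfs list -H -t snapshot -r pool3 | wc -l
  # the scrub/scan line of the status output shows when the last scrub
  # finished and whether it repaired anything
  zpool status pool3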
On Sun, Dec 18, 2011 at 22:14, Nathan Kroenert wrote:
> Do you realise that losing a single disk in that pool could pretty much
> render the whole thing busted?
Ah - didn't pick up on that one until someone here pointed it out -
all my disks are mirrored; however, some of them are mirrored on the
Hi,
On Sun, Dec 18, 2011 at 22:38, Matt Breitbach wrote:
> I'd look at iostat -En. It will give you a good breakdown of disks that
> have seen errors. I've also spotted failing disks just by watching
> iostat -nxz and looking for the one showing a higher %busy than the rest
> of them, or
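A sketch of the two invocations being suggested here; no particular device
names are assumed:

  # cumulative soft/hard/transport error counters per device
  iostat -En
  # per-device latency and utilisation every 5 seconds, hiding idle devices;
  # a failing disk tends to sit at a much higher %b than its peers
  iostat -nxz 5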
Hi Craig,
On Sun, Dec 18, 2011 at 22:33, Craig Morgan wrote:
> Try fmdump -e and then fmdump -eV; it could be a pathological disk just this
> side of failure doing heavy retries that is dragging the pool down.
Thanks for the hint - didn't know about fmdump. Nothing in the log
since 13 Dec, thou
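For anyone following along, the commands being suggested are roughly:

  # one-line summary of each event in the FMA error log
  fmdump -e
  # full detail per event, including the device path involved
  fmdump -eV
  # diagnosed faults, as opposed to raw error telemetry
  fmdump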
Hi,
On Sun, Dec 18, 2011 at 22:14, Nathan Kroenert wrote:
> I know some others may already have pointed this out - but I can't see it
> and not say something...
>
> Do you realise that losing a single disk in that pool could pretty much
> render the whole thing busted?
>
> At least for me - the
Hi,
On Sun, Dec 18, 2011 at 22:00, Fajar A. Nugraha wrote:
> From http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
> (or at least Google's cache of it, since it seems to be inaccessible
> now):
>
> "
> Keep pool space under 80% utilization to maintain pool performance.
> Cur
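A quick way to see how full each pool is (pool names as used elsewhere in the
thread):

  # the CAP column shows the percentage of pool space in use
  zpool list pool1 pool2 pool3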
Hi,
On Sun, Dec 18, 2011 at 16:41, Fajar A. Nugraha wrote:
> Is the pool over 80% full? Do you have dedup enabled (even if it was
> turned off later, see "zpool history")?
The pool stands at 86%, but that has not changed in any way that
corresponds chronologically with the sudden drop in perform
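To check whether dedup was ever enabled, something along these lines should
work (the pool name pool3 is an assumption):

  # look for any dedup property changes in the pool's history
  zpool history pool3 | grep -i dedup
  # current dedup ratio reported by the pool
  zpool get dedupratio pool3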
Hi,
On Sun, Dec 18, 2011 at 15:13, "Hung-Sheng Tsao (Lao Tsao 老曹) Ph.D."
wrote:
> what is the output of zpool status pool1 and pool2?
> it seems that you have a mixed configuration of pool3 with disk and mirror
The other two pools show very similar outputs:
root@stor:~# zpool status pool1
pool: pool1
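For context, a pool with a mixed layout (a bare disk vdev sitting alongside a
mirror) would show up in zpool status along these lines; the device names here
are invented:

    pool: pool3
   state: ONLINE
  config:
          NAME        STATE     READ WRITE CKSUM
          pool3       ONLINE       0     0     0
            c3t4d0    ONLINE       0     0     0
            mirror-0  ONLINE       0     0     0
              c3t5d0  ONLINE       0     0     0
              c3t6d0  ONLINE       0     0     0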
Dear List,
I have a storage server running OpenIndiana with a number of storage
pools on it. All the pools' disks come off the same controller, and
all pools are backed by SSD-based L2ARC and ZIL. Performance is
excellent on all pools but one, and I am struggling greatly to figure
out what is wron
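One way to compare the pools side by side and narrow down where the slowness
comes from is to watch them under load; only standard options are used here:

  # per-pool and per-vdev IOPS and bandwidth, sampled every 5 seconds
  zpool iostat -v 5
  # per-device service times; a single struggling disk will stand out
  # in the asvc_t and %b columns
  iostat -nxz 5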