Steven Hartland wrote
> On 05/11/2014 06:15, Marcus Reid wrote:
>> On Tue, Nov 04, 2014 at 06:13:44PM +, Steven Hartland wrote:
>>> On 04/11/2014 17:22, Allan Jude wrote:
snip...
Justin Gibbs and I were helping George from Voxer look at the same issue
they are having. They h
Steven Hartland wrote
> This is likely spikes in uma zones used by ARC.
>
> The VM doesn't ever clean uma zones unless it hits a low memory
> condition, which explains why your little script helps.
>
> Check the output of vmstat -z to confirm.
>
> On 04/11/2014 1
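To confirm along those lines, the ZFS-related UMA zones and the memory parked in their free items can be eyeballed with something like the following (a sketch: the zone-name patterns and the 10 MB cut-off are arbitrary guesses, and the column layout assumed is the one vmstat -z prints here):

# show the header, then the ZFS/ARC related zones (name patterns are guesses)
vmstat -z | head -1
vmstat -z | grep -E 'arc|zio|dnode|dmu|zfs'

# rough estimate of memory sitting in each zone's free items (SIZE * FREE), big ones only
vmstat -z | awk -F '[:,]' '$2*$5 > 10485760 { printf "%-28s %8.1f MB in free items\n", $1, $2*$5/1048576 }'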
Hi Current,
It seems like there is a constant flow (leak) of memory from ARC to Inact in
FreeBSD 11.0-CURRENT #0 r273165.
Normally, our system (FreeBSD 11.0-CURRENT #5 r260625) keeps ARC size very
close to vfs.zfs.arc_max:
Mem: 16G Active, 324M Inact, 105G Wired, 1612M Cache, 3308M Buf, 1094M Free
vfs.zfs.trim.txg_delay 32
vfs.zfs.trim.timeout 30
vfs.zfs.trim.max_interval 1
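In case it helps to reproduce, this is roughly how the drift can be watched over time (a minimal sketch: the sysctl names are the stock FreeBSD ones, the interval is arbitrary):

#!/bin/sh
# log ARC size vs. arc_max and the Inactive page count once a minute
while :; do
    date
    sysctl kstat.zfs.misc.arcstats.size vfs.zfs.arc_max
    sysctl vm.stats.vm.v_inactive_count   # pages; multiply by hw.pagesize for bytes
    sleep 60
done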
> On 2013-10-15 07:53, Dmitriy Makarov wrote:
> > Please, any idea
Please, any idea, thought, help!
Maybe you can suggest what information would be useful for digging - anything...
The system I'm talking about has a huge problem: performance degradation in a
short time period (a day or two). We don't know whether these vmstat failures can
somehow be related to the degradation.
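One way to check whether the vmstat failures line up with the degradation (just a sketch; the file names and grep patterns are arbitrary): take one vmstat -z snapshot after boot and another once the box has slowed down, then diff the counters:

# right after boot
vmstat -z > /tmp/vmstat-z.boot
# once the performance has degraded (a day or two later)
vmstat -z > /tmp/vmstat-z.slow
# see which zones grew their REQ/FAIL counters in the meantime
diff /tmp/vmstat-z.boot /tmp/vmstat-z.slow | grep -E 'arc|zio|space_seg'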
Hi all
On CURRENT r255173 we have some interesting values from vmstat -z: zones where REQ = FAIL, i.e. every request failed:
[server]# vmstat -z
ITEM SIZE LIMIT USED FREE REQ FAIL SLEEP
... skipped
NCLNODE: 528, 0, 0, 0, 0, 0, 0
space_seg_cache:
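To pull out only the failing zones from that output, something like this works (a sketch; it assumes the comma-separated layout shown above, with REQ and FAIL as the 6th and 7th values after the zone name):

# list every UMA zone with a nonzero FAIL count
vmstat -z | awk -F '[:,]' '$7+0 > 0 { printf "%-28s REQ=%d FAIL=%d\n", $1, $6, $7 }'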
Hi all,
On our production system on r255173 we have a problem with abnormally high system
load, caused (we are not sure) by the L2ARC placed on a few SSDs, 490 GB total size.
After a fresh boot everything seems to be fine, Load Average less than 5.00.
But after some time (nearly a day or two) Load Average jump to 1
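If the L2ARC feed path is the culprit, its counters and kernel thread should show it (a sketch; kstat.zfs.misc.arcstats is the standard arcstats sysctl tree, and l2arc_feed_thread is the thread that does the feeding):

# L2ARC-related counters (sizes, write volume, checksum/IO errors)
sysctl kstat.zfs.misc.arcstats | grep 'l2_'
# look for l2arc_feed_thread or zio threads eating CPU
top -SH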
The attached patch by Steven Hartland fixes the issue for me too. Thank you!
--- Original message ---
From: "Steven Hartland" < kill...@multiplay.co.uk >
Date: 18 September 2013, 01:53:10
- Original Message -
From: "Justin T. Gibbs" <
---
Dmitriy Makarov
And I have to say that the ashift of the main pool doesn't matter.
I've tried to create the pool with ashift 9 (the default value) and with ashift 12,
creating gnops over the gpart devices, exporting the pool, destroying the gnops, and
importing the pool. There is the same problem with the cache device.
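Spelled out, that gnop procedure for the ashift=12 case looks roughly like this (a sketch; /dev/gpt/disk0 and the pool name tank are placeholders, and the zdb check at the end is just one way to read back the ashift):

# force 4K sectors via a transient gnop provider, then build the pool on it
gnop create -S 4096 /dev/gpt/disk0          # /dev/gpt/disk0 is a placeholder
zpool create tank /dev/gpt/disk0.nop        # pool name "tank" is a placeholder
# drop the gnop layer; the pool keeps the ashift it was created with
zpool export tank
gnop destroy /dev/gpt/disk0.nop
zpool import tank
# verify the ashift recorded in the pool config
zdb -C tank | grep ashift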
There is no problem with the ZIL devices; they report ashift: 12:
children[1]:
    type: 'disk'
    id: 1
    guid: 6986664094649753344
    path: '/dev/gpt/zil1'
    phys_path: '/dev/gpt/zil1'
    whole_disk: 1
    metaslab_array:
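(For completeness, output like the above, including the per-vdev ashift, can be dumped with something along these lines; the pool name is a placeholder and the exact zdb flags may differ between versions:)

# dump the cached pool config, which lists ashift per vdev ("tank" is a placeholder)
zdb -C tank
# or read the on-disk label of a single device
zdb -l /dev/gpt/zil1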