> From: Jim Klimov [mailto:jimkli...@cos.ru]
> Sent: Monday, October 22, 2012 7:26 AM
>
> Are you sure that the system with failed mounts came up NOT in a
> read-only root moment, and that your removal of /etc/zfs/zpool.cache
> did in fact happen (and that you did not then boot into an earlier
> BE with the file still in it)?
On Oct 19, 2012, at 4:59 PM, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris)
wrote:
>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>> boun...@opensolaris.org] On Behalf Of Richard Elling
>>
>>> At some point, people will bitterly regret some "zpool upgrade" with no
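For reference, the current pool and dataset versions, and what an upgrade
would add, can be reviewed without committing to anything. A minimal sketch,
assuming an illumos/Solaris host; "tank" is a placeholder pool name:

    # Show the versions currently in use ("tank" is a placeholder)
    zpool get version tank
    zfs get version tank

    # List the versions this software supports, without changing anything
    zpool upgrade -v
    zfs upgrade -v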
On Oct 22, 2012, at 6:52 AM, Chris Nagele wrote:
>> If after it decreases in size it stays there it might be similar to:
>>
>> 7111576 arc shrinks in the absence of memory pressure
>
> After it dropped, it did build back up. Today is the first day that
> these servers are working under real production load and it is looking
> much better.
2012-10-22 20:58, Brian wrote:
> hi jim,
> writes are sequential and to a ring buffer. reads of course would not
> be sequential, and would be intermixed with writes.
Thanks... Do I understand correctly that if a block from the L2ARC is
requested by a reader, then it is fetched from the SSD and
becomes
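One way to watch that read/write mix in practice is the l2_* counters in the
arcstats kstat. A minimal sketch, assuming an illumos/Solaris system:

    # L2ARC read hits/misses and bytes moved in each direction
    kstat -p zfs:0:arcstats:l2_hits zfs:0:arcstats:l2_misses
    kstat -p zfs:0:arcstats:l2_read_bytes zfs:0:arcstats:l2_write_bytes

    # Amount of data currently held on the cache device
    kstat -p zfs:0:arcstats:l2_size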
Hello all,
A few months ago I saw a statement that L2ARC writes are simplistic
in nature, and I got the (mis?)understanding that some sort of ring
buffer may be in use, like for the ZIL. Is this true, and is the only
write-performance metric that matters for an L2ARC SSD device its
sequential write bandwidth
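The feed rate is also capped by kernel tunables rather than by the device
alone, so the cap itself is worth checking. A minimal sketch, assuming an
illumos/Solaris kernel that exposes these symbols:

    # Per-interval L2ARC write caps, in bytes (both default to 8 MB)
    echo "l2arc_write_max/E" | mdb -k
    echo "l2arc_write_boost/E" | mdb -k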
> If after it decreases in size it stays there it might be similar to:
>
> 7111576 arc shrinks in the absence of memory pressure
After it dropped, it did build back up. Today is the first day that
these servers are working under real production load and it is looking
much better. arcstat i
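arcstat is handy for watching whether the cache holds steady under that load.
A minimal sketch, assuming the arcstat.pl script is installed:

    # Sample ARC size, target size and hit rate every 5 seconds
    arcstat.pl -f time,read,hits,miss,hit%,arcsz,c 5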
Alexander Block wrote:
> tar/pax was the initial format that was chosen for btrfs send/receive
> as it looked like the best and most compatible way. In the middle of
> development, however, I realized that we need more than storing whole
> and incremental files/dirs in the format. We needed to stor
Are you sure that the system with failed mounts came up NOT in a
read-only root moment, and that your removal of /etc/zfs/zpool.cache
did in fact happen (and that you did not then boot into an earlier
BE with the file still in it)?
On a side note, repairs of ZFS mount order are best done with a
s
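Both points can be verified directly from the running system. A minimal
sketch, assuming beadm is available:

    # Which boot environment is active now ("N") and after reboot ("R")
    beadm list

    # Is the cache file really gone in this BE?
    ls -l /etc/zfs/zpool.cache

    # Is the root filesystem currently mounted read-only?
    mount | grep '^/ '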
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
>
> If you rm /etc/zfs/zpool.cache and reboot... The system is smart enough (at
> least in my case) to re-import rpool, and another pool, but it didn't
> figure out to re-
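A pool that was left behind can be re-imported by hand and recorded in the
cache file again. A minimal sketch; "tank" is a placeholder pool name:

    # Import the pool that was not picked up automatically
    zpool import tank

    # Record it in the cache file so it comes back at the next boot
    zpool set cachefile=/etc/zfs/zpool.cache tank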
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Gary Mills
>
> On Sun, Oct 21, 2012 at 11:40:31AM +0200, Bogdan Ćulibrk wrote:
> >Follow up question regarding this: is there any way to disable
> >automatic import of any non-rpool on
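One way to do that is the cachefile pool property: a pool that is not listed
in /etc/zfs/zpool.cache is not imported at boot. A minimal sketch; "tank" is
a placeholder pool name:

    # Keep the pool out of the cache file so boot-time import skips it
    zpool set cachefile=none tank

    # It can still be imported manually whenever it is wanted
    zpool import tank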
On 22 October, 2012 - Robert Milkowski sent me these 3,6K bytes:
> Hi,
>
> If after it decreases in size it stays there it might be similar to:
>
> 7111576 arc shrinks in the absence of memory pressure
>
> Also, see document:
>
> ZFS ARC can shrink down without memory pressure result in slow
> performance [ID 1404581.1]
Hi,
If after it decreases in size it stays there it might be similar to:
7111576 arc shrinks in the absence of memory pressure
Also, see document:
ZFS ARC can shrink down without memory pressure result in slow
performance [ID 1404581.1]
Specifically, check if arc_no_grow is set
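On illumos/Solaris that check, plus a comparison of the current and maximum
ARC size, can be done as root. A minimal sketch:

    # Non-zero means the ARC has been told not to grow
    echo "arc_no_grow/D" | mdb -k

    # Current ARC size, target size and configured maximum, in bytes
    kstat -p zfs:0:arcstats:size zfs:0:arcstats:c zfs:0:arcstats:c_max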