Scott Bennett wrote:
> thousand blocks allocated. Directories don't shrink. Directory entries do
> not get moved around within directories when files are added or deleted.
> Directories can remain the same length or they can grow in length. If a
> directory once had many tens of thousands of f
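For what it's worth, a directory's high-water mark is easy to observe by stat'ing the directory itself (the path below is only an example):

# ls -ld /var/spool/exim/input
# stat -f "%N: %z" /var/spool/exim/input

On UFS the reported size is the byte length of the directory file and stays at its largest-ever value; ZFS reports a directory's size differently (roughly tracking the number of entries), so compare before/after on a test directory rather than trusting absolute numbers.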
On Fri, Oct 21, 2016 at 10:04 AM, Pete French
wrote:
> Not forgotten, just under the impression that ZFS shrinks directories
> unlike good old UFS. Apparently not,
>
Someone offhandedly mentioned this earlier (it's apparently intended for
the future sometime). I at least hope they do something s
On 2016/10/21 13:47, Pete French wrote:
>> In bad case metadata of every file will be placed in random place of disk.
>> ls need access to metadata of every file before start of output listing.
>
> Umm, are we not talking about an issue where the directory no longer contains
> any files. It used to
> Oh, my goodness, how far afield nonsense has gotten! Have all the
> good folks posting in this thread forgotten how directory blocks are
> allocated in UNIX?
Not forgotten, just under the impression that ZFS shrinks directories
unlike good old UFS. Apparently not, and yes, if that's true th
On Fri, 21 Oct 2016 16:51:36 +0500 "Eugene M. Zheganin"
wrote:
>On 21.10.2016 15:20, Slawa Olhovchenkov wrote:
>>
>> ZFS prefetch affects performance depending on the workload (independent of RAM
>> size): some workloads win, some lose (for my
>> workload prefetch is a loss and manu
On Fri, Oct 21, 2016 at 01:47:08PM +0100, Pete French wrote:
> > In bad case metadata of every file will be placed in random place of disk.
> > ls need access to metadata of every file before start of output listing.
>
> Umm, are we not talking about an issue where the directory no longer contains
> In bad case metadata of every file will be placed in random place of disk.
> ls need access to metadata of every file before start of output listing.
Umm, are we not talking about an issue where the directory no longer contains
any files. It used to have lots, now it has none.
> I.e. in bad case
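One way to separate the cost of reading the directory itself from the cost of stat'ing its (former) contents, as a rough sketch (the directory path is hypothetical):

# /usr/bin/time ls -f /path/to/dir > /dev/null
# /usr/bin/time ls -l /path/to/dir > /dev/null

ls -f only reads the directory, without sorting or stat'ing entries, so if the plain listing is already slow the delay is in the directory object itself rather than in scattered per-file metadata.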
On Fri, Oct 21, 2016 at 04:51:36PM +0500, Eugene M. Zheganin wrote:
> Hi.
>
> On 21.10.2016 15:20, Slawa Olhovchenkov wrote:
> >
> > ZFS prefetch affects performance depending on the workload (independent of RAM
> > size): some workloads win, some lose (for my
> > workload prefetch is
Hi.
On 21.10.2016 15:20, Slawa Olhovchenkov wrote:
ZFS prefetch affects performance depending on the workload (independent of RAM
size): some workloads win, some lose (for my
workload prefetch is a loss and was manually disabled even with 128GB RAM).
Anyway, this system has only 24MB in ARC by
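For anyone wanting to try the same experiment, prefetch is controlled by a loader tunable (a minimal sketch; whether it helps is workload-dependent, as noted above):

# sysctl vfs.zfs.prefetch_disable        (check the current value)
(to change it, put vfs.zfs.prefetch_disable="1" in /boot/loader.conf and reboot)

On newer releases the sysctl may also be writable at runtime, but verify that on your system first.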
Instead of the guesswork and black magic, you could try to use tools to analyze
the problem. E.g., determine if the delay is because a CPU does a lot of work
or it is because of waiting. Find the bottleneck, etc.
pmcstat, dtrace are your friends :-)
--
Andriy Gapon
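A couple of generic starting points along those lines (nothing here is specific to this problem, just standard tooling, and hwpmc support depends on the hardware):

# kldload hwpmc
# pmcstat -TS instructions -w 1          (top-like view of where CPU time goes)
# dtrace -n 'syscall:::entry /execname == "ls"/ { @[probefunc] = count(); }'

The dtrace one-liner simply counts which syscalls ls issues; if getdirentries dominates and wall-clock time far exceeds CPU time, the wait is in the I/O path rather than in the CPU.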
On Fri, Oct 21, 2016 at 11:02:57AM +0100, Steven Hartland wrote:
> > Mem: 21M Active, 646M Inact, 931M Wired, 2311M Free
> > ARC: 73M Total, 3396K MFU, 21M MRU, 545K Anon, 1292K Header, 47M Other
> > Swap: 4096M Total, 4096M Free
> >
> > PID USERNAME   PRI NICE   SIZE    RES STATE   C   TIME
On 21/10/2016 10:04, Eugene M. Zheganin wrote:
Hi.
On 21.10.2016 9:22, Steven Hartland wrote:
On 21/10/2016 04:52, Eugene M. Zheganin wrote:
Hi.
On 20.10.2016 21:17, Steven Hartland wrote:
Do you have atime enabled for the relevant volume?
I do.
If so disable it and see if that helps:
zfs
Hi.
On 21.10.2016 9:22, Steven Hartland wrote:
On 21/10/2016 04:52, Eugene M. Zheganin wrote:
Hi.
On 20.10.2016 21:17, Steven Hartland wrote:
Do you have atime enabled for the relevant volume?
I do.
If so disable it and see if that helps:
zfs set atime=off
Nah, it doesn't help at all.
A
Have you done any ZFS tuning?
Could you try installing ports/sysutils/zfs-stats and posting the output
from "zfs-stats -a". That might point to a bottleneck or poor cache
tuning.
--
Peter Jeremy
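In case it saves someone a lookup, the package works as well as the port (a minimal sketch):

# pkg install zfs-stats
# zfs-stats -a

The interesting parts of the output are the ARC size and hit-ratio summary and the prefetch (zfetch) statistics, which should show whether metadata is being evicted or simply never cached.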
On 21/10/2016 04:52, Eugene M. Zheganin wrote:
Hi.
On 20.10.2016 21:17, Steven Hartland wrote:
Do you have atime enabled for the relevant volume?
I do.
If so disable it and see if that helps:
zfs set atime=off
Nah, it doesn't help at all.
As with Jonathan, what does gstat -pd and top
In your case your vdev (ada0) is saturated with writes from postgres.
You should consider more / faster disks.
You might also want to consider enabling lz4 compression on the PG
volume, as it works well in I/O-bound situations.
On 21/10/2016 01:54, Jonathan Chen wrote:
On 21 October 20
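If anyone wants to try the compression suggestion above, it is a one-liner (the dataset name below is hypothetical, and it only affects newly written blocks):

# zfs set compression=lz4 tank/pgdata
# zfs get compression,compressratio tank/pgdata

Existing data stays uncompressed until it is rewritten, so the benefit shows up gradually.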
Hi.
On 20.10.2016 21:17, Steven Hartland wrote:
Do you have atime enabled for the relevant volume?
I do.
If so disable it and see if that helps:
zfs set atime=off
Nah, it doesn't help at all.
Thanks.
Eugene.
On 21 October 2016 at 12:56, Steven Hartland wrote:
[...]
> When you see the stalling, what does gstat -pd and top -SHz show?
On my dev box:
1:38pm# uname -a
FreeBSD irontree 10.3-STABLE FreeBSD 10.3-STABLE #0 r307401: Mon Oct
17 10:17:22 NZDT 2016 root@irontree:/usr/obj/usr/src/sys/GENERIC
a
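For others reproducing this, the idea is to watch both views while the slow ls is running (the flags are standard; the interval is just an example):

# gstat -pd -I 1s      (physical devices only, include BIO_DELETE, 1 second interval)
# top -SHz             (system processes, per-thread, hide the idle process)

If a disk sits near 100% busy with small reads while ls waits on disk, it is an I/O-bound metadata walk; if everything is idle, the problem is elsewhere.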
On 20/10/2016 23:48, Jonathan Chen wrote:
On 21 October 2016 at 11:27, Steven Hartland wrote:
On 20/10/2016 22:18, Jonathan Chen wrote:
On 21 October 2016 at 09:09, Peter wrote:
[...]
I see this on my pgsql_tmp dirs (where Postgres stores intermediate
query data that gets too big for mem -
On 21 October 2016 at 11:27, Steven Hartland wrote:
> On 20/10/2016 22:18, Jonathan Chen wrote:
>>
>> On 21 October 2016 at 09:09, Peter wrote:
>> [...]
>>>
>>> I see this on my pgsql_tmp dirs (where Postgres stores intermediate
>>> query data that gets too big for mem - usually lots of files) -
On 20/10/2016 22:18, Jonathan Chen wrote:
On 21 October 2016 at 09:09, Peter wrote:
[...]
I see this on my pgsql_tmp dirs (where Postgres stores intermediate
query data that gets too big for mem - usually lots of files) - in
normal operation these dirs are completely empty, but make heavy disk
While I have yet to encounter this with PG on ZFS, knock on wood, this
obviously is not an isolated issue, and those experiencing it should
do as much investigation as possible and open a PR. This seems like something
I'm going to read about FreeBSD and PG/ZFS over at Hacker News from
On 21 October 2016 at 09:09, Peter wrote:
[...]
>
> I see this on my pgsql_tmp dirs (where Postgres stores intermediate
> query data that gets too big for mem - usually lots of files) - in
> normal operation these dirs are completely empty, but make heavy disk
> activity (even writing!) when doing
Eugene M. Zheganin wrote:
Hi.
I have FreeBSD 10.2-STABLE r289293 (but I have observed this situation
on different releases) and a zfs. I also have one directory that used to
have a lot of (tens of thousands of) files. It surely takes a lot of time to
get a listing of it. But now I have 2 files and a
Do you have atime enabled for the relevant volume?
If so disable it and see if that helps:
zfs set atime=off
Regards
Steve
On 20/10/2016 14:47, Eugene M. Zheganin wrote:
Hi.
I have FreeBSD 10.2-STABLE r289293 (but I have observed this situation
on different releases) and a zfs. I al
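For reference, checking and flipping atime is straightforward (the dataset name below is only an example; adjust it to wherever the problem directory lives):

# zfs get atime zroot/var/spool
# zfs set atime=off zroot/var/spool

The change takes effect immediately for subsequent accesses, so it is an easy thing to rule in or out.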
> > I've the same issue, but only if the ZFS resides on an LSI MegaRaid and one
> > RAID0 for each disk.
> >
> Not in my case, both pool disks are attached to the Intel ICH7 SATA300
> controller.
Nor in my case - my discs are on this:
ahci0:
Hi.
On 20.10.2016 19:18, Dr. Nikolaus Klepp wrote:
I've the same issue, but only if the ZFS resides on an LSI MegaRaid and one
RAID0 for each disk.
Not in my case, both pool disks are attached to the Intel ICH7 SATA300
controller.
Thanks.
Eugene.
Hi,
On 20.10.2016 19:12, Pete French wrote:
Have ignored this thread until now, but I observed the same behaviour
on my systems over the last week or so. In my case it's an exim spool
directory, which was hugely full at some point (thousands of
files) and now takes an awfully long time to open an
Hi.
On 20.10.2016 19:03, Miroslav Lachman wrote:
What about snapshots? Are there any snapshots on this filesystem?
Nope.
# zfs list -t all
NAME        USED  AVAIL  REFER  MOUNTPOINT
zroot       245G   201G  1.17G  legacy
zroot/tmp  10.1M   201G
Hi.
On 20.10.2016 18:54, Nicolas Gilles wrote:
Looks like it's not taking up any processing time, so my guess is
the lag probably comes from stalled I/O ... bad disk?
Well, I cannot rule this out completely, but the first time I saw this
lag on this particular server was about two months ago, and I
On Thursday, 20 October 2016, Eugene M. Zheganin wrote:
> Hi.
>
> I have FreeBSD 10.2-STABLE r289293 (but I have observed this situation
> on different releases) and a zfs. I also have one directory that used to
> have a lot of (tens of thousands of) files. It surely takes a lot of time to
> ge
Have ignored this thread until now, but I observed the same behaviour
on my systems over the last week or so. In my case it's an exim spool
directory, which was hugely full at some point (thousands of
files) and now takes an awfully long time to open and list. I delete
and remake them and the problem
Eugene M. Zheganin wrote on 2016/10/20 15:47:
Hi.
I have FreeBSD 10.2-STABLE r289293 (but I have observed this situation
on different releases) and a zfs. I also have one directory that used to
have a lot of (tens of thousands of) files. It surely takes a lot of time to
get a listing of it. But now
On Thu, Oct 20, 2016 at 3:47 PM, Eugene M. Zheganin wrote:
> Hi.
>
> I have FreeBSD 10.2-STABLE r289293 (but I have observed this situation on
> different releases) and a zfs. I also have one directory that used to have a
> lot of (tens of thousands of) files. It surely takes a lot of time to get a
>
Hi.
I have FreeBSD 10.2-STABLE r289293 (but I have observed this situation
on different releases) and a zfs. I also have one directory that used to
have a lot of (tens of thousands of) files. It surely takes a lot of time to
get a listing of it. But now I have 2 files and a couple of dozen
directories
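A cheap way to quantify the original complaint, as a rough sketch (the path is hypothetical):

# /usr/bin/time -l ls -la /path/to/slow/dir > /dev/null
# ktrace -t c ls -la /path/to/slow/dir > /dev/null && kdump -R | tail

time -l shows real vs. CPU time plus block I/O counts, and the ktrace/kdump pair shows relative timestamps per syscall; both are useful numbers to attach to a PR.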