MacBook Pro SPI-attached keyboard/touchpad

2018-09-07 Thread Yuri Pankov

Hi,

I'm trying to run -current on a MacBook Pro 2017, and with the recent fixes
to EFIRT there's hope. The problem is that nothing essential works, and
while I can live with a USB wifi adapter, getting the internal
keyboard/touchpad to work would be really nice.


This has been discussed extensively on the Linux lists, which suggest that
these devices are connected via SPI in these models:


https://bugzilla.kernel.org/show_bug.cgi?id=99891
https://bugzilla.kernel.org/show_bug.cgi?id=108331

There's also a work-in-progress Linux driver for these devices and the touchbar:

https://github.com/roadrunner2/macbook12-spi-driver

Just wondering if anyone is already working on it and/or can provide some
initial guidance before I start my clumsy attempts at porting it.
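To make the question concrete: I imagine the port starting from a newbus
skeleton shaped roughly like this. This is an untested sketch; it assumes
the Intel LPSS SPI controller is already exposed as a spibus(4) parent, and
the applespi names, the APP000D remark, and the raw full-duplex transfer
are placeholders for the real protocol that the Linux driver implements.

#include <sys/param.h>
#include <sys/bus.h>
#include <sys/kernel.h>
#include <sys/module.h>

#include <dev/spibus/spi.h>
#include "spibus_if.h"

static int
applespi_probe(device_t dev)
{
	/* Real matching would check the ACPI HID the Linux driver uses
	 * (APP000D on these models) rather than claiming any child. */
	device_set_desc(dev, "Apple SPI keyboard/touchpad (sketch)");
	return (BUS_PROBE_DEFAULT);
}

static int
applespi_attach(device_t dev)
{
	struct spi_command cmd = SPI_COMMAND_INITIALIZER;
	static uint8_t txbuf[256], rxbuf[256];

	/* spibus transfers are full duplex: clock out zeros while reading
	 * one raw packet, ignoring the protocol framing for now. */
	cmd.tx_data = txbuf;
	cmd.tx_data_sz = sizeof(txbuf);
	cmd.rx_data = rxbuf;
	cmd.rx_data_sz = sizeof(rxbuf);
	return (SPIBUS_TRANSFER(device_get_parent(dev), dev, &cmd));
}

static device_method_t applespi_methods[] = {
	DEVMETHOD(device_probe,		applespi_probe),
	DEVMETHOD(device_attach,	applespi_attach),
	DEVMETHOD_END
};

static driver_t applespi_driver = {
	"applespi",
	applespi_methods,
	0	/* no softc in this sketch */
};

static devclass_t applespi_devclass;
DRIVER_MODULE(applespi, spibus, applespi_driver, applespi_devclass, NULL, NULL);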



Re: ZFS performance regression in FreeBSD 12 ALPHA3->ALPHA4

2018-09-07 Thread Subbsd
On Thu, Sep 6, 2018 at 3:28 AM Mark Johnston  wrote:
>
> On Wed, Sep 05, 2018 at 11:15:03PM +0300, Subbsd wrote:
> > On Wed, Sep 5, 2018 at 5:58 PM Allan Jude  wrote:
> > >
> > > On 2018-09-05 10:04, Subbsd wrote:
> > > > Hi,
> > > >
> > > > I'm seeing a huge loss in ZFS performance after upgrading FreeBSD 12
> > > > to the latest revision (r338466 at the moment); it seems related to the ARC.
> > > >
> > > > I cannot say which revision I was on before, except that newvers.sh
> > > > pointed to ALPHA3.
> > > >
> > > > The problem appears when you try to limit the ARC. In my case:
> > > >
> > > > vfs.zfs.arc_max="128M"
> > > >
> > > > I know that this is very small. However, it worked without problems
> > > > for two years.
> > > >
> > > > When I send SIGINFO to a process that is currently working with ZFS, I
> > > > see "arc_reclaim_waiters_cv":
> > > >
> > > > e.g. when I type:
> > > >
> > > > /bin/csh
> > > >
> > > > I have time (~5 seconds) to press 'ctrl+t' several times before csh
> > > > executes:
> > > >
> > > > load: 0.70  cmd: csh 5935 [arc_reclaim_waiters_cv] 1.41r 0.00u 0.00s 0% 3512k
> > > > load: 0.70  cmd: csh 5935 [zio->io_cv] 1.69r 0.00u 0.00s 0% 3512k
> > > > load: 0.70  cmd: csh 5935 [arc_reclaim_waiters_cv] 1.98r 0.00u 0.01s 0% 3512k
> > > > load: 0.73  cmd: csh 5935 [arc_reclaim_waiters_cv] 2.19r 0.00u 0.01s 0% 4156k
> > > >
> > > > same story with find or any other command:
> > > >
> > > > load: 0.34  cmd: find 5993 [zio->io_cv] 0.99r 0.00u 0.00s 0% 2676k
> > > > load: 0.34  cmd: find 5993 [arc_reclaim_waiters_cv] 1.13r 0.00u 0.00s 0% 2676k
> > > > load: 0.34  cmd: find 5993 [arc_reclaim_waiters_cv] 1.25r 0.00u 0.00s 0% 2680k
> > > > load: 0.34  cmd: find 5993 [arc_reclaim_waiters_cv] 1.38r 0.00u 0.00s 0% 2684k
> > > > load: 0.34  cmd: find 5993 [arc_reclaim_waiters_cv] 1.51r 0.00u 0.00s 0% 2704k
> > > > load: 0.34  cmd: find 5993 [arc_reclaim_waiters_cv] 1.64r 0.00u 0.00s 0% 2716k
> > > > load: 0.34  cmd: find 5993 [arc_reclaim_waiters_cv] 1.78r 0.00u 0.00s 0% 2760k
> > > >
> > > > This problem goes away after increasing vfs.zfs.arc_max.
> > >
> > > Previously, ZFS was not actually able to evict enough dnodes to keep
> > > your arc_max under 128MB; it would have been much higher based on the
> > > number of open files you had. A recent improvement from upstream ZFS
> > > (r337653 and r337660) was pulled in that fixed this, so setting an
> > > arc_max of 128MB is much more effective now. That is causing the side
> > > effect of "actually doing what you asked it to do", and in this case
> > > what you are asking is a bit silly: if you have a working set greater
> > > than 128MB and you ask ZFS to use less than that, it will have to
> > > constantly try to reclaim memory to stay under that very low bar.
> > >
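(A quick way to see whether the ARC really is pinned at the cap while
reproducing this; the sysctl names below assume stock FreeBSD 12:)

# Configured cap vs. what the ARC actually holds (values in bytes):
sysctl vfs.zfs.arc_max kstat.zfs.misc.arcstats.size
# Metadata share, which is what the dnode-eviction fix above affects:
sysctl kstat.zfs.misc.arcstats.arc_meta_used kstat.zfs.misc.arcstats.arc_meta_limit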
> >
> > Thanks for the comments. Mark was right when he pointed to r338416 (
> > https://svnweb.freebsd.org/base/head/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c?r1=338416&r2=338415&pathrev=338416
> > ). Commenting out the aggsum_value calls restores normal speed
> > regardless of the rest of the new code from upstream.
> > I would like to repeat that the speed with these two lines is not just
> > slow, but _INCREDIBLY_ slow! This should probably be noted in the
> > relevant documentation for FreeBSD 12+.
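(Side note on why those lines can be so expensive. The aggsum counters
involved make updates cheap but exact reads costly: an update touches only
one bucket, while a read must lock and fold every bucket. This is a
simplified illustration of that read side, not the actual OpenZFS code:)

#include <sys/param.h>
#include <sys/lock.h>
#include <sys/mutex.h>

struct aggsum_bucket {
	struct mtx	ab_lock;
	int64_t		ab_delta;	/* updates land here, mostly uncontended */
};

struct aggsum_sketch {
	struct mtx	as_lock;
	int64_t		as_total;	/* already-flushed portion */
	int		as_numbuckets;
	struct aggsum_bucket *as_buckets;
};

/* An exact read must lock and fold every bucket, serializing hot paths. */
static int64_t
aggsum_value_sketch(struct aggsum_sketch *as)
{
	int64_t v;
	int i;

	mtx_lock(&as->as_lock);
	v = as->as_total;
	for (i = 0; i < as->as_numbuckets; i++) {
		mtx_lock(&as->as_buckets[i].ab_lock);
		v += as->as_buckets[i].ab_delta;	/* fold the bucket in */
		as->as_buckets[i].ab_delta = 0;		/* ...and flush it */
		mtx_unlock(&as->as_buckets[i].ab_lock);
	}
	as->as_total = v;
	mtx_unlock(&as->as_lock);
	return (v);
}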
>
> Could you please retest with the patch below applied, instead of
> reverting r338416?
>
> diff --git a/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c b/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c
> index 2bc065e12509..9b039b7d4a96 100644
> --- a/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c
> +++ b/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c
> @@ -538,7 +538,7 @@ typedef struct arc_state {
>   */
>  int zfs_arc_meta_prune = 1;
>  unsigned long zfs_arc_dnode_limit_percent = 10;
> -int zfs_arc_meta_strategy = ARC_STRATEGY_META_BALANCED;
> +int zfs_arc_meta_strategy = ARC_STRATEGY_META_ONLY;
>  int zfs_arc_meta_adjust_restarts = 4096;
>
>  /* The 6 states: */
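(One way to apply and test the patch; a sketch assuming a stock /usr/src
checkout and the default kernel config, with an illustrative file name:)

# Save the diff quoted above as, e.g., ~/arc-meta.diff, then:
cd /usr/src
patch -p1 < ~/arc-meta.diff
make -j"$(sysctl -n hw.ncpu)" buildkernel installkernel
shutdown -r now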

Unfortunately, I cannot conduct sufficiently rigorous regression tests
right now (e.g. with benchmark tools, or with statistics via pmc(3),
etc.). However, I can do a very superficial comparison, for example with
find.
tested revision: 338520
loader.conf settings: vfs.zfs.arc_max="128M"

meta strategy ARC_STRATEGY_META_BALANCED result:

% time find /usr/ports -type f > /dev/null
0.495u 6.912s 7:20.88 1.6%  34+172k 289594+0io 0pf+0w

meta strategy ARC_STRATEGY_META_ONLY result:
% time find /usr/ports -type f > /dev/null
0.721u 7.184s 4:57.18 2.6%  34+171k 300910+0io 0pf+0w

So the difference is ~144 seconds (7:20.88 = 440.9 s vs. 4:57.18 = 297.2 s).
Both tests were started after a full OS boot cycle.


Re: ZFS performance regression in FreeBSD 12 ALPHA3->ALPHA4

2018-09-07 Thread Jakob Alvermark

On 9/6/18 2:28 AM, Mark Johnston wrote:

[earlier quoted text trimmed; identical to the messages above]


Hi,

I am experiencing the same slowness when there is a bit of load on the
system (buildworld, for example), which I haven't seen before.


I have vfs.zfs.arc_max=2G.

Top is reporting

ARC: 607M Total, 140M MFU, 245M MRU, 1060K Anon, 4592K Header, 217M Other
 105M Compressed, 281M Uncompressed, 2.67:1 Ratio

Should I test the patch?


Jakob




Re: ifnet use after free

2018-09-07 Thread Shawn Webb
On Fri, Aug 24, 2018 at 06:19:55PM -0400, Shawn Webb wrote:
> Hey All,
> 
> Somewhere in the last month or so, a use after free was introduced. I
> don't have the time right now to bisect the commits and figure out
> which commit introduced the breakage. Attached is the core.txt (which
> seems nonsensical because the dump is reporting on a different
> thread). If the core.txt gets scrubbed, I've posted it here:
> https://gist.github.com/796ea88cec19a1fd2a85f4913482286a
> 
> I'm running HardenedBSD 12-CURRENT/amd64, commit 6091fec317a.
> 
> FreeBSD hbsd-dev-laptop 12.0-ALPHA2 FreeBSD 12.0-ALPHA2 #4 6091fec317a(hardened/current/master)-dirty: Thu Aug 23 18:37:45 EDT 2018 shawn@hbsd-dev-laptop:/usr/obj/usr/src/amd64.amd64/sys/LATT-SEC  amd64

New core.txt: https://gist.github.com/d1ee63e578c09f35d40c977093b402d6

I'm not sure if it's the same issue, but at least I'm getting a proper
backtrace. I wonder if ifp or ifp->if_xname has already been freed by the
time ifunit_ref is called.
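For reference, the lookup has roughly this shape (a simplified sketch based
on sys/net/if.c of that era, not the exact revision in question). If the
ifnet is freed while the walker still holds its pointer, the name compare
touches freed memory, which would match the faulting strncmp() in the trace
below:

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/socket.h>
#include <net/if.h>
#include <net/if_var.h>

/* Simplified sketch of the interface-name lookup. */
static struct ifnet *
ifunit_ref_sketch(const char *name)
{
	struct ifnet *ifp;

	IFNET_RLOCK_NOSLEEP();
	CK_STAILQ_FOREACH(ifp, &V_ifnet, if_link) {
		/* If ifp was freed out from under us, this strncmp() faults. */
		if (strncmp(name, ifp->if_xname, IFNAMSIZ) == 0 &&
		    !(ifp->if_flags & IFF_DYING)) {
			if_ref(ifp);	/* take a reference before unlocking */
			break;
		}
	}
	IFNET_RUNLOCK_NOSLEEP();
	return (ifp);
}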

FreeBSD hbsd-dev-laptop 12.0-ALPHA4 FreeBSD 12.0-ALPHA4 #6 a581146ba17(hardened/current/master)-dirty: Mon Sep  3 12:51:49 EDT 2018 shawn@hbsd-dev-laptop:/usr/obj/usr/src/amd64.amd64/sys/LATT-SEC  amd64

panic: vm_fault_hold: fault on nofault entry, addr: 0xfe685000

GNU gdb (GDB) 8.1.1 [GDB v8.1.1 for FreeBSD]
Copyright (C) 2018 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-portbld-freebsd12.0".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>.
Find the GDB manual and other documentation resources online at:
<http://www.gnu.org/software/gdb/documentation/>.
For help, type "help".
Type "apropos word" to search for commands related to "word"...
Reading symbols from /boot/kernel/kernel...Reading symbols from /usr/lib/debug//boot/kernel/kernel.debug...done.
done.

Unread portion of the kernel message buffer:
[12101] panic: vm_fault_hold: fault on nofault entry, addr: 0xfe685000
[12101] cpuid = 3
[12101] time = 1536281241
[12101] __HardenedBSD_version = 1200058 __FreeBSD_version = 1200083
[12101] version = FreeBSD 12.0-ALPHA4 #6 a581146ba17(hardened/current/master)-dirty: Mon Sep  3 12:51:49 EDT 2018
[12101] shawn@hbsd-dev-laptop:/usr/obj/usr/src/amd64.amd64/sys/LATT-SEC
[12101] KDB: stack backtrace:
[12101] db_trace_self_wrapper() at db_trace_self_wrapper+0x2b/frame 0xfe1fef53d1c0
[12101] vpanic() at vpanic+0x1a8/frame 0xfe1fef53d220
[12101] panic() at panic+0x43/frame 0xfe1fef53d280
[12101] vm_fault_hold() at vm_fault_hold+0x1faf/frame 0xfe1fef53d3d0
[12101] vm_fault() at vm_fault+0x60/frame 0xfe1fef53d410
[12101] trap_pfault() at trap_pfault+0x188/frame 0xfe1fef53d460
[12101] trap() at trap+0x560/frame 0xfe1fef53d570
[12101] calltrap() at calltrap+0x8/frame 0xfe1fef53d570
[12101] --- trap 0xc, rip = 0x80bd5455, rsp = 0xfe1fef53d640, rbp = 0xfe1fef53d640 ---
[12101] strncmp() at strncmp+0x15/frame 0xfe1fef53d640
[12101] ifunit_ref() at ifunit_ref+0x51/frame 0xfe1fef53d680
[12101] ifioctl() at ifioctl+0x7bd/frame 0xfe1fef53d750
[12101] kern_ioctl() at kern_ioctl+0x2c0/frame 0xfe1fef53d7b0
[12101] sys_ioctl() at sys_ioctl+0x16e/frame 0xfe1fef53d880
[12101] amd64_syscall() at amd64_syscall+0x29e/frame 0xfe1fef53d9b0
[12101] fast_syscall_common() at fast_syscall_common+0x101/frame 0xfe1fef53d9b0
[12101] --- syscall (54, FreeBSD ELF64, sys_ioctl), rip = 0x3c2595b7f8a, rsp = 0x7461b1772838, rbp = 0x7461b17728a0 ---
[12101] Uptime: 3h21m41s
[12101] Dumping 8310 out of 65330 MB:..1%..11%..21%..31%..41%..51%..61%..71%..81%..91%

__curthread () at ./machine/pcpu.h:230
230 __asm("movq %%gs:%1,%0" : "=r" (td)
(kgdb) #0  __curthread () at ./machine/pcpu.h:230
#1  doadump (textdump=1) at /usr/src/sys/kern/kern_shutdown.c:368
#2  0x80aec5b6 in kern_reboot (howto=260)
at /usr/src/sys/kern/kern_shutdown.c:448
#3  0x80aeca08 in vpanic (fmt=, ap=0xfe1fef53d260)
at /usr/src/sys/kern/kern_shutdown.c:877
#4  0x80aec763 in panic (fmt=)
at /usr/src/sys/kern/kern_shutdown.c:801
#5  0x80e285cf in vm_fault_hold (map=0xf80005001000, 
vaddr=, fault_type=1 '\001', fault_flags=, 
m_hold=0x0) at /usr/src/sys/vm/vm_fault.c:585
#6  0x80e265d0 in vm_fault (map=0xf80005001000, 
vaddr=, fault_type=1 '\001', fault_flags=0)
at /usr/src/sys/vm/vm_fault.c:536
#7  0x80fc0648 in trap_pfault (frame=0xfe1fef53d580, 
usermode=) at /usr/src/sys/amd64/amd64/trap.c:829
#8  0x80fbfcd0 in trap (frame=0xfe1fef53d580)
at /usr/src/sys/amd64/amd64/trap.c:441
#9  
#10 0x80bd5455 in strncmp (s1=0

Re: SD card reader only works after a suspend/resume

2018-09-07 Thread Jakob Alvermark

On 9/7/18 12:41 AM, Marius Strobl wrote:

On Thu, Sep 06, 2018 at 12:33:39PM +0200, Jakob Alvermark wrote:

Hi,


I discovered this by chance.

The SD card reader in my laptop has never worked, but now I noticed it
does after suspending and resuming.

The controller is probed and attached on boot:

sdhci_acpi1:  iomem 0x90a0-0x90a00fff irq 47 on acpi0

But nothing happens if I put a card in. Unless I suspend and resume:

mmc1:  on sdhci_acpi1
mmcsd0: 32GB  at mmc1 50.0MHz/4bit/65535-block

Then I can remove and replug cards and it seems to work just fine.

I believe that making SD card insertion/removal work out-of-the-box with
the integrated SDHCI controllers of newer Intel SoCs requires support for
ACPI GPE interrupts and ACPI GPIO events, respectively, to be added to
FreeBSD. Otherwise insertion/removal interrupts/events aren't reported,
and polling the card-present state doesn't generally work as a workaround
with these controllers either, unfortunately.
I'm not aware of anyone working on the former, though.

Polling the card-present state happens to work once after SDHCI
initialization with these controllers, which is why a card is attached
when inserted as part of a suspend/resume cycle (resume of mmc(4) had
some bugs until a few months ago, which probably explains why that
procedure hasn't worked as a workaround for you in the past).
Inserting the card before boot, unloading/loading sdhci_acpi.ko, or
triggering detach/attach of sdhci_acpi(4) via devctl(8) should also
allow a card to attach.
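(Concretely, that devctl(8) workaround would look something like the
following; the device name comes from the probe line earlier in this
thread and may differ on other systems:)

# Force a detach/reattach so the controller re-polls card presence:
devctl detach sdhci_acpi1
devctl attach sdhci_acpi1
# Or, when the driver is loaded as a module:
kldunload sdhci_acpi
kldload sdhci_acpi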



If a card is inserted before booting, it is not detected.

Removing and inserting a card after boot is not detected either, unless I
suspend and resume.


After I have suspended and resumed once, cards are detected; removals and
insertions are detected as they happen.



Jakob



Re: ZFS performance regression in FreeBSD 12 ALPHA3->ALPHA4

2018-09-07 Thread Mark Johnston
On Fri, Sep 07, 2018 at 04:34:23PM +0300, Subbsd wrote:
> > > > On 2018-09-05 10:04, Subbsd wrote:
> > > > > Hi,
> > > > >
> > > > > I'm seeing a huge loss in ZFS performance after upgrading FreeBSD 12
> > > > > to the latest revision (r338466 at the moment); it seems related to the ARC.
> > > > >
> Unfortunately, I cannot conduct sufficiently rigorous regression tests
> right now (e.g. with benchmark tools, or with statistics via pmc(3),
> etc.). However, I can do a very superficial comparison, for example with
> find.
> tested revision: 338520
> loader.conf settings: vfs.zfs.arc_max="128M"
> 
> meta strategy ARC_STRATEGY_META_BALANCED result:
> 
> % time find /usr/ports -type f > /dev/null
> 0.495u 6.912s 7:20.88 1.6%  34+172k 289594+0io 0pf+0w
> 
> meta strategy ARC_STRATEGY_META_ONLY result:
> % time find /usr/ports -type f > /dev/null
> 0.721u 7.184s 4:57.18 2.6%  34+171k 300910+0io 0pf+0w
> 
> So the difference is ~144 seconds (7:20.88 = 440.9 s vs. 4:57.18 = 297.2 s).
> Both tests were started after a full OS boot cycle.

Seems like a pretty substantial difference.  Is this roughly the same
performance that you had before the upgrade to -ALPHA4?


Re: ZFS performance regression in FreeBSD 12 ALPHA3->ALPHA4

2018-09-07 Thread Mark Johnston
On Fri, Sep 07, 2018 at 03:40:52PM +0200, Jakob Alvermark wrote:
> On 9/6/18 2:28 AM, Mark Johnston wrote:
> > [earlier quoted text trimmed; identical to the messages above]
> 
> Hi,
> 
> I am experiencing the same slowness when there is a bit of load on the
> system (buildworld, for example), which I haven't seen before.

Is it a regression following a recent kernel update?

> I have vfs.zfs.arc_max=2G.
> 
> Top is reporting
> 
> ARC: 607M Total, 140M MFU, 245M MRU, 1060K Anon, 4592K Header, 217M Other
>   105M Compressed, 281M Uncompressed, 2.67:1 Ratio
> 
> Should I test the patch?

I would be interested in the results, assuming it is indeed a
regression.


RE: ZFS performance regression in FreeBSD 12 ALPHA3->ALPHA4

2018-09-07 Thread Cy Schubert
I'd be interested in seeing systat -z output.
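(Assuming -z here abbreviates systat's zarc display, which is the only one
starting with "z", that would be, with a one-second refresh:)

# ZFS ARC statistics display, refreshed every second:
systat -zarc 1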

---
Sent using a tiny phone keyboard.
Apologies for any typos and autocorrect.
Also, this old phone only supports top post. Apologies.

Cy Schubert
 or 
The need of the many outweighs the greed of the few.
---

-Original Message-
From: Mark Johnston
Sent: 07/09/2018 09:09
To: Jakob Alvermark
Cc: Subbsd; allanj...@freebsd.org; freebsd-current Current
Subject: Re: ZFS performance regression in FreeBSD 12 ALPHA3->ALPHA4

[quoted message trimmed; it duplicates Mark's reply above]
