Re: kernel: mps0: Out of chain frames, consider increasing hw.mps.max_chains.
On Sun, Mar 06, 2016 at 11:40:55PM -0800, Scott Long wrote:
> 
> On Mar 6, 2016, at 10:04 PM, Slawa Olhovchenkov wrote:
> > 
> > On Sun, Mar 06, 2016 at 06:20:06PM -0800, Scott Long wrote:
> > 
> >> 
> >>> On Mar 6, 2016, at 1:27 PM, Slawa Olhovchenkov wrote:
> >>> 
> >>> On Sun, Mar 06, 2016 at 01:10:42PM -0800, Scott Long wrote:
> >>> 
> Hi,
> 
> The message is harmless, it's a reminder that you should tune the kernel
> for your workload.  When the message is triggered, it means that a
> potential command was deferred, likely for only a few microseconds, and
> then everything moved on as normal.
> 
> A command uses anywhere from 0 to a few dozen chain frames per I/O,
> depending on the size of the I/O.  The chain frame memory is allocated at
> boot so that it's always available, not allocated on the fly.  When I
> wrote this driver, I felt that it would be wasteful to reserve memory
> for a worst case scenario of all large I/Os by default, so I put in this
> deferral system with a console reminder for tuning.
> 
> Yes, you actually do have 900 I/Os outstanding.  The controller buffers
> the I/O requests and allows the system to queue up much more than what
> SATA disks might allow on their own.  It's debatable if this is good or
> bad, but it's tunable as well.
> 
> Anyways, the messages should not cause alarm.  Either tune up the chain
> frame count, or tune down the max I/O count.
> 
> >>> I don't know if it depends on this or not, but I see a dramatic
> >>> performance drop at the time of these messages.
> >>> 
> >> Good to know.  Part of the performance drop might be because of the
> >> slowness of printing to the console.
> > 
> > no, on console print may be one per minute
> 
> The one-per-minute prints are by design.  I should probably make it print
> once and then increment a sysctl counter.

I.e., this can't be the cause of slowness from printing to the console?

> >>> How can I calculate the buffer numbers?
> >> 
> >> If your system is new enough to have mpsutil, please run it: ‘mpsutil
> >> show iocfacts’.
> > 
> > As I see, mpsutil is present only on -HEAD.
> > Can I compile it on 10-STABLE?
> 
> Yes, I believe it should compile on 10, but I have not tried it recently.

# mpsutil show iocfacts
    MaxChainDepth: 128
    WhoInit: 0x4
    NumberOfPorts: 1
    MaxMSIxVectors: 0
    RequestCredit: 1720
    ProductID: 0x2713
    IOCCapabilities: 0x185c
    FWVersion: 0x0f00
    IOCRequestFrameSize: 32
    MaxInitiators: 1
    MaxTargets: 256
    MaxSasExpanders: 11
    MaxEnclosures: 12
    ProtocolFlags: 0x2
    HighPriorityCredit: 116
    MaxRepDescPostQDepth: 65504
    ReplyFrameSize: 32
    MaxVolumes: 2
    MaxDevHandle: 286
    MaxPersistentEntries: 128
    MinDevHandle: 9

> >> If not, then boot your system with bootverbose and send me the output.
> > 
> > I did this a day ago.
> 
> >>> I have very heavy I/O.
> >> 
> >> Out of curiosity, do you redefine MAXPHYS/DFLTPHYS in your kernel config?
> > 
> > no
> 
> >>> Is this allocated once for all controllers, or allocated for every controller?
> >> 
> >> It’s per-controller.
> >> 
> >> I’ve thought about making the tuning be dynamic at runtime.  I
> >> implemented similar dynamic tuning for other drivers, but it seemed
> >> overly complex for low benefit.  Implementing it for this driver
> >> would be possible but require some significant code changes.
> > 
> > What is the cause of chain_free+io_cmds_active << max_chains?
> > One cmd can use many chains?
> 
> Yes.  A request uses an active command, and depending on the size of the I/O,
> it might use several chain frames.
> 
> Scott
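For readers hitting the same message, the tuning Scott describes is normally done with a loader tunable plus the driver's per-device sysctls. A minimal sketch follows; the unit (mps0) and the value 4096 are only examples, so verify the exact names on your system with "sysctl -a | grep mps":

    # /boot/loader.conf: reserve more chain frames at boot (example value)
    hw.mps.max_chains="4096"

    # Watch chain usage and deferrals at runtime (mps0 assumed):
    sysctl dev.mps.0.max_chains dev.mps.0.chain_free dev.mps.0.chain_free_lowwater
    sysctl dev.mps.0.io_cmds_active dev.mps.0.io_cmds_highwater dev.mps.0.chain_alloc_fail

If chain_free_lowwater keeps touching zero or chain_alloc_fail keeps climbing, raising hw.mps.max_chains (at the cost of a little more wired memory) is the usual response; lowering the outstanding I/O count is the other lever Scott mentions.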
Re: FreeBSD 10.3 - nvme regression
On Mon, Mar 7, 2016 at 5:33 AM, Borja Marcos wrote:
> 
> Hello,
> 
> I am trying a SuperMicro server with NVME disks. The system boots FreeBSD
> 10.2, panics when booting FreeBSD 10.3.
> 
> It was compiled on March 7th and Revision 296191 is included.
> 
> On 10.3 it’s crashing right after this line:
> 
> nvme9: mem 0xfba1-0xfba13fff irq 59 at device 0.0 on pci134
> 
> with a panic.
> 
> panic: couldn’t find an APIC vector for IRQ 59.
> 
> cpuid = 0
> 
> The backtrace is (sorry, copying from a screen video)
> 
> #0  kdb_backtrace=0x60
> #1  vpanic+0x126
> #2  panic+0x43
> #3  ioapic_disable_intr+0
> #4  intr_add_handler+0xfb
> #5  nexus_setup_inter+0x8a
> #6  pci_setup_intr+0x33
> #7  pci_setup_intr+0x33
> #8  bus_setup_intr+0xac
> #9  nvme_ctrlr_configure_intx+0x88
> #10 nvme_ctrlr_construct+0x407
> #11 nvme_attach+0x20
> #12 device_attach+0x43d
> #13 bus_generic_attach+0x2d
> #14 acpi_pci_attach+0x15c
> #15 device_attach+0x43d
> #16 bus_generic_attach+0x2d
> #17 acpi_pcib_attach+0x22c
> 
> It said “Uptime 1s” and did a cold reboot.

Hi,

(Moving to freebsd-stable. NVMe is not associated with the SCSI stack at all.)

Can you please file a bug report on this?

Also, can you try setting the following loader variable before install?

hw.nvme.min_cpus_per_ioq=4

I am fairly certain you are hitting bug 199321: since you have so many
devices in your system (NVMe + NICs) allocating per-CPU MSI-X vectors,
this last NVMe device cannot even allocate one APIC vector entry for an
INTx interrupt.

https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=199321

-Jim

> 
> dmesg.boot from 10.2 (the system is installed on a memory stick).
> 
> root@ssd9:/usr/src/sys/dev/nvme # cat /var/run/dmesg.boot
> Copyright (c) 1992-2015 The FreeBSD Project.
> Copyright (c) 1979, 1980, 1983, 1986, 1988, 1989, 1991, 1992, 1993, 1994
> The Regents of the University of California. All rights reserved.
> FreeBSD is a registered trademark of The FreeBSD Foundation.
> FreeBSD 10.2-RELEASE #0 r28: Wed Aug 12 15:26:37 UTC 2015
> r...@releng1.nyi.freebsd.org:/usr/obj/usr/src/sys/GENERIC amd64
> FreeBSD clang version 3.4.1 (tags/RELEASE_34/dot1-final 208032) 20140512
> CPU: Intel(R) Xeon(R) CPU E5-2630 v3 @ 2.40GHz (2400.04-MHz K8-class CPU)
> Origin="GenuineIntel" Id=0x306f2 Family=0x6 Model=0x3f Stepping=2
> Features=0xbfebfbff
> Features2=0x7ffefbff,FMA,CX16,xTPR,PDCM,PCID,DCA,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,OSXSAVE,AVX,F16C,RDRAND>
> AMD Features=0x2c100800
> AMD Features2=0x21
> Structured Extended Features=0x37ab
> XSAVE Features=0x1
> VT-x: PAT,HLT,MTF,PAUSE,EPT,UG,VPID,VID,PostIntr
> TSC: P-state invariant, performance statistics
> real memory = 137438953472 (131072 MB)
> avail memory = 133409718272 (127229 MB)
> Event timer "LAPIC" quality 600
> ACPI APIC Table:
> FreeBSD/SMP: Multiprocessor System Detected: 32 CPUs
> FreeBSD/SMP: 2 package(s) x 8 core(s) x 2 SMT threads
> cpu0 (BSP): APIC ID: 0
> cpu1 (AP): APIC ID: 1
> cpu2 (AP): APIC ID: 2
> cpu3 (AP): APIC ID: 3
> cpu4 (AP): APIC ID: 4
> cpu5 (AP): APIC ID: 5
> cpu6 (AP): APIC ID: 6
> cpu7 (AP): APIC ID: 7
> cpu8 (AP): APIC ID: 8
> cpu9 (AP): APIC ID: 9
> cpu10 (AP): APIC ID: 10
> cpu11 (AP): APIC ID: 11
> cpu12 (AP): APIC ID: 12
> cpu13 (AP): APIC ID: 13
> cpu14 (AP): APIC ID: 14
> cpu15 (AP): APIC ID: 15
> cpu16 (AP): APIC ID: 16
> cpu17 (AP): APIC ID: 17
> cpu18 (AP): APIC ID: 18
> cpu19 (AP): APIC ID: 19
> cpu20 (AP): APIC ID: 20
> cpu21 (AP): APIC ID: 21
> cpu22 (AP): APIC ID: 22
> cpu23 (AP): APIC ID: 23
> cpu24 (AP): APIC ID: 24
> cpu25 (AP): APIC ID: 25
> cpu26 (AP): APIC ID: 26
> cpu27 (AP): APIC ID: 27
> cpu28 (AP): APIC ID: 28
> cpu29 (AP): APIC ID: 29
> cpu30 (AP): APIC ID: 30
> cpu31 (AP): APIC ID: 31
> ioapic0 irqs 0-23 on motherboard
> ioapic1 irqs 24-47 on motherboard
> ioapic2 irqs 48-71 on motherboard
> random: initialized
> module_register_init: MOD_LOAD (vesa, 0x80db8eb0, 0) error 19
> kbd1 at kbdmux0
> acpi0: on motherboard
> acpi0: Power Button (fixed)
> cpu0: on acpi0
> cpu1: on acpi0
> cpu2: on acpi0
> cpu3: on acpi0
> cpu4: on acpi0
> cpu5: on acpi0
> cpu6: on acpi0
> cpu7: on acpi0
> cpu8: on acpi0
> cpu9: on acpi0
> cpu10: on acpi0
> cpu11: on acpi0
> cpu12: on acpi0
> cpu13: on acpi0
> cpu14: on acpi0
> cpu15: on acpi0
> cpu16: on acpi0
> cpu17: on acpi0
> cpu18: on acpi0
> cpu19: on acpi0
> cpu20: on acpi0
> cpu21: on acpi0
> cpu22: on acpi0
> cpu23: on acpi0
> cpu24: on acpi0
> cpu25: on acpi0
> cpu26: on acpi0
> cpu27: on acpi0
> cpu28: on acpi0
> cpu29: on acpi0
> cpu30: on acpi0
> cpu31: on acpi0
> atrtc0: port 0x70-0x71,0x74-0x77 irq 8 on acpi0
> Event timer "RTC" frequency 32768 Hz quality 0
> attimer0: port 0x40-0x43,0x50-0x53 irq 0 on acpi0
> Timecounter "i8254" frequency 1193182 Hz quality 0
> Event timer "i8254" frequency 1193182 Hz quality 100
> hpet0: iomem 0xfed0-0xfed003ff on acpi0
> Tim
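For anyone else who hits this panic before bug 199321 is resolved, the workaround Jim suggests can be applied either at the loader prompt (escape to the loader prompt from the installer's boot menu) or persistently once the system is installed. The value 4 is just the one suggested in this thread:

    OK set hw.nvme.min_cpus_per_ioq=4
    OK boot

    # or, after installation, in /boot/loader.conf:
    hw.nvme.min_cpus_per_ioq="4"

Raising this value makes each NVMe controller create fewer I/O queue pairs, and therefore request fewer MSI-X vectors, which is what leaves room for the last device in a box this dense.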
Re: Newer clang than comes with install?
On Fri, Mar 04, 2016 at 09:53:08AM -0500, Kevin P. Neal wrote:
> On Fri, Mar 04, 2016 at 02:22:26PM +, Brooks Davis wrote:
> > On Thu, Mar 03, 2016 at 08:45:05AM -0500, kpn...@pobox.com wrote:
> > > I notice on 10.2 we're using "FreeBSD clang version 3.4.1". But there are
> > > bugs in this version of clang that I'm having trouble with.
> > > 
> > > Is compiling a newer (say, 3.7.1) version of clang to target FreeBSD
> > > supported? I have no desire to replace any of the libraries, just the
> > > compiler itself. Is that supposed to work _without_ going through the
> > > ports/pkgs system?
> > > 
> > > IOW, can I just download from llvm.org the clang+llvm source, compile
> > > it on FreeBSD, and then use it safely?
> > 
> > It should.  The ports don't include many patches.
> 
> Yeah, I was just looking at the patches we do include. One of them it looks
> like causes some of the llvm.org-provided includes to not be installed.
> 
> I'm not sure I can, well, not install them, because I also need to use the
> same install to do cross compiles. A quick check shows that those includes
> are used when targeting cross and native.
> 
> Am I correct about the include files? And, if so, are there plans to
> upstream patches so the llvm.org includes will work out of the box for
> FreeBSD-hosted-and-targeted compiles?

The std*.h include files included with clang are broken on FreeBSD.  No one
has stepped forward to fix them, so you will have to choose between
installing them and being able to build FreeBSD.

In practice, you should still be able to cross compile if you have a
working sysroot for your target.

-- 
Brooks
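For reference, building a stock llvm.org clang on FreeBSD along the lines discussed above might look roughly like the following. This is only a sketch, not a supported recipe: the install prefix, the -j value, and the sysroot path are placeholders you would pick yourself.

    fetch http://llvm.org/releases/3.7.1/llvm-3.7.1.src.tar.xz
    fetch http://llvm.org/releases/3.7.1/cfe-3.7.1.src.tar.xz
    tar xf llvm-3.7.1.src.tar.xz
    tar xf cfe-3.7.1.src.tar.xz
    mv cfe-3.7.1.src llvm-3.7.1.src/tools/clang
    mkdir build && cd build
    cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=/opt/clang371 ../llvm-3.7.1.src
    make -j4 && make install

    # Native use, leaving the base compiler and base libraries untouched:
    /opt/clang371/bin/clang -O2 -o hello hello.c

    # Cross compiling against a separately built sysroot, as Brooks notes:
    /opt/clang371/bin/clang -target x86_64-unknown-freebsd10.2 \
        --sysroot=/path/to/freebsd-sysroot -o hello hello.c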
Re: FreeBSD 10.3 - nvme regression
> On 07 Mar 2016, at 15:28, Jim Harris wrote:
> 
> (Moving to freebsd-stable. NVMe is not associated with the SCSI stack at
> all.)

Oops, my apologies. I was assuming that, being storage stuff, -scsi was a
good list.

> Can you please file a bug report on this?

Sure, doing some simple tests right now and I’ll file it.

> Also, can you try setting the following loader variable before install?
> 
> hw.nvme.min_cpus_per_ioq=4

It now boots, thanks :) Note that it’s the first time I use NVMe drives, so
bear with me in case I do anything stupid ;)

I have noticed some odd performance problems. I have created a “raidz2” ZFS
pool with the 10 drives. Doing some silly tests with several “Bonnie++”
instances, I have noticed that delete commands seem to be very slow. After
running several bonnie++ instances in parallel, when deleting the files, the
drives are almost stuck for a fairly long time, showing 100% bandwidth usage
on “gstat” and indeed being painfully slow.

Disabling the usage of BIO_DELETE for ZFS (sysctl
vfs.zfs.vdev.bio_delete_disable=1) solves this problem, although, of course,
BIO_DELETE is desirable as far as I know. I observed the same behavior on
10.2.

This is not a proper report, I know; I will follow up tomorrow.

Thanks!

Borja.
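A few knobs that may help while poking at this kind of TRIM/BIO_DELETE behaviour on 10.x. The sysctl names below are from memory, so verify them with "sysctl -a" on your own system before relying on them:

    # gstat's -d flag adds a column for delete (BIO_DELETE) operations:
    gstat -d

    # ZFS TRIM statistics (bytes, successes, failures, unsupported):
    sysctl kstat.zfs.misc.zio_trim

    # The workaround mentioned above: stop ZFS from issuing BIO_DELETE at all:
    sysctl vfs.zfs.vdev.bio_delete_disable=1

Disabling BIO_DELETE trades away the long-term benefits of TRIM for predictable latency, so it is more useful as a diagnostic than as a permanent setting.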
Re: Skylake Loader Performance 10.3-BETA3
> On 4 Mar 2016, at 18:49, Mark Dixon wrote:
> 
> Will Green sundivenetworks.com> writes:
> 
>> I am happy to test patches and/or current on this server if that helps. If
>> you want more details on the motherboard/system I have started a post on it
>> at http://buildwithbsd.org/hw/skylake_xeon_server.html
> 
> I've made the UEFI switch which worked fine, but I'm also happy to help out
> with testing if anyone looks at this.

Are you booting from ZFS? Unless I’ve missed something this isn’t yet
supported by the installer, but it is possible to get working manually.

Will
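In case it helps anyone searching the archives, the "manual" UEFI route on a 10.x system usually amounts to putting a small efi partition carrying the stock boot1 image ahead of the usual freebsd-zfs partition. A sketch, where ada0, the partition index, and the label are placeholders for your own layout:

    gpart create -s gpt ada0
    gpart add -t efi -s 800k ada0
    gpart bootcode -p /boot/boot1.efifat -i 1 ada0
    gpart add -t freebsd-zfs -l zfs0 ada0

Whether the UEFI boot1 can then locate a ZFS root depends on how recent the source tree is; as the follow-ups in this thread note, that support was merged for 10.3.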
Re: Skylake Loader Performance 10.3-BETA3
On 07/03/2016 16:43, Will Green wrote:

  On 4 Mar 2016, at 18:49, Mark Dixon wrote:

    Will Green sundivenetworks.com> writes:

      I am happy to test patches and/or current on this server if that helps.
      If you want more details on the motherboard/system I have started a post
      on it at http://buildwithbsd.org/hw/skylake_xeon_server.html

    I've made the UEFI switch which worked fine, but I'm also happy to help
    out with testing if anyone looks at this.

  Are you booting from ZFS? Unless I’ve missed something this isn’t yet
  supported by the installer, but it is possible to get working manually.

Pretty sure you missed something and those changes were merged; imp should
be able to confirm.

Regards
Steve
Hangs with mrsas?
I have a new Dell server with a typical Dell hardware RAID.  pciconf
identifies it as "MegaRAID SAS-3 3008 [Fury]"; mfiutil reports:

mfi0 Adapter:
    Product Name: PERC H330 Adapter
   Serial Number: 5AT00PI
        Firmware: 25.3.0.0016
     RAID Levels:
  Battery Backup: not present
           NVRAM: 32K
  Onboard Memory: 0M
  Minimum Stripe: 64K
  Maximum Stripe: 64K

Since I'm running ZFS I have the RAID functions disabled and the drives
are presented as "system physical drives" ("mfisyspd[0-3]" when using
mfi(4)).  I wanted to use mrsas(4) instead, so that I could have direct
access to the drives' SMART functions, and this seemed to work after I
set the hw.mfi.mrsas_enable tunable, with one major exception: all drive
access would hang after about 12 hours and the machine would require a
hard reset to come back up.

Has anyone seen this before?  The driver in head doesn't appear to be
any newer.

-GAWollman
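For anyone wanting to reproduce this setup, the switch-over is just the loader tunable Garrett mentions; once mrsas(4) claims the card, the disks show up through CAM as da(4) devices rather than as mfisyspd devices. A minimal sketch:

    # /boot/loader.conf: let mrsas(4) claim MegaRAID Fury/Invader-class cards
    hw.mfi.mrsas_enable=1

    # After a reboot, confirm which driver attached and how the disks appear:
    dmesg | egrep 'mfi|mrsas'
    camcontrol devlist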
unbound-control-setup missing from 10.3-RC1 media
/usr/sbin/unbound-control-setup is missing from the 10.3-RC1 release media
(at least disc1.iso and memstick.img).

I have filed PR 207748 reporting this.
Re: unbound-control-setup missing from 10.3-RC1 media
On Mon, Mar 07, 2016 at 08:33:17PM +, David Boyd wrote:
> /usr/sbin/unbound-control-setup is missing from the 10.3-RC1 release media
> (at least disc1.iso and memstick.img).
> 
> I have filed PR 207748 reporting this.

It was removed in r295690.

Glen
Re: Hangs with mrsas?
On 03/07/2016 14:09, Garrett Wollman wrote:
> I have a new Dell server with a typical Dell hardware RAID.  pciconf
> identifies it as "MegaRAID SAS-3 3008 [Fury]"; mfiutil reports:
> 
> mfi0 Adapter:
>     Product Name: PERC H330 Adapter
>    Serial Number: 5AT00PI
>         Firmware: 25.3.0.0016
>      RAID Levels:
>   Battery Backup: not present
>            NVRAM: 32K
>   Onboard Memory: 0M
>   Minimum Stripe: 64K
>   Maximum Stripe: 64K
> 
> Since I'm running ZFS I have the RAID functions disabled and the drives
> are presented as "system physical drives" ("mfisyspd[0-3]" when using
> mfi(4)).  I wanted to use mrsas(4) instead, so that I could have direct
> access to the drives' SMART functions, and this seemed to work after I
> set the hw.mfi.mrsas_enable tunable, with one major exception: all drive
> access would hang after about 12 hours and the machine would require a
> hard reset to come back up.
> 
> Has anyone seen this before?  The driver in head doesn't appear to be
> any newer.
> 
> -GAWollman

I did some similar testing in late Jan but perhaps not long enough to
notice your symptoms.  I'm pretty certain I used mrsas_enable since that
is what I would plan to use in production.  I had an H330-mini with the
same firmware rev in an R430.  I was testing with some 2.5" Seagate
ST9600205SS 600GB disks from another system.

What kind of disks were you using and in what kind of configuration?
Does a simpler config stay up?  If you are using SSDs, I wonder if the
disks themselves would survive? An SSD firmware issue?  Was it hard hung
at the console too?  Can you enter DDB?  If you don't mind, which Dell
model is this?

Sorry I don't have any directly helpful suggestions, but you have good
timing because this could very well influence hardware choices.

Thanks.
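If it does wedge again, having a crash dump or a way into the debugger makes the follow-up much easier. A rough sketch; note that actually breaking into DDB needs a kernel built with "options DDB", which a stock release GENERIC may not include, so check the last sysctl below first:

    # /etc/rc.conf: let savecore(8) recover a dump from the swap device
    dumpdev="AUTO"

    # Allow a console BREAK / debug key sequence to drop into the debugger:
    sysctl debug.kdb.break_to_debugger=1

    # See which debugger backends the running kernel actually has:
    sysctl debug.kdb.available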
Re: Skylake Loader Performance 10.3-BETA3
On 2016-03-07 17:24, Steven Hartland wrote:

  On 07/03/2016 16:43, Will Green wrote:

    On 4 Mar 2016, at 18:49, Mark Dixon wrote:

      Will Green sundivenetworks.com> writes:

        I am happy to test patches and/or current on this server if that
        helps. If you want more details on the motherboard/system I have
        started a post on it at http://buildwithbsd.org/hw/skylake_xeon_server.html

      I've made the UEFI switch which worked fine, but I'm also happy to
      help out with testing if anyone looks at this.

    Are you booting from ZFS? Unless I’ve missed something this isn’t yet
    supported by the installer, but it is possible to get working manually.

  Pretty sure you missed something and those changes were merged; imp
  should be able to confirm.

You're right: that was an error on my part. I've now got ZFS boot working
with UEFI. :)

I booted the Skylake motherboard with FreeBSD-10.3-RC1-amd64-uefi-memstick.img
and it successfully installed to ZFS *and* loaded at normal speed. Looks like
UEFI is the way to go on Skylake systems. All tests have gone well so far.

Thanks
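It is easy to double-check which firmware path such a system actually came up on. On recent amd64 kernels there is a sysctl for it (name from memory, so verify it exists on your build), and the partition table shows the efi partition the UEFI installer created:

    sysctl machdep.bootmethod    # should report UEFI rather than BIOS
    gpart show                   # a UEFI install puts an 'efi' partition first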
Re: Skylake Loader Performance 10.3-BETA3
On 2016-03-07 16:21, Will Green wrote:

  On 2016-03-07 17:24, Steven Hartland wrote:

    On 07/03/2016 16:43, Will Green wrote:

      On 4 Mar 2016, at 18:49, Mark Dixon wrote:

        Will Green sundivenetworks.com> writes:

          I am happy to test patches and/or current on this server if that
          helps. If you want more details on the motherboard/system I have
          started a post on it at http://buildwithbsd.org/hw/skylake_xeon_server.html

        I've made the UEFI switch which worked fine, but I'm also happy to
        help out with testing if anyone looks at this.

      Are you booting from ZFS? Unless I’ve missed something this isn’t yet
      supported by the installer, but it is possible to get working manually.

    Pretty sure you missed something and those changes were merged; imp
    should be able to confirm.

  You're right: that was an error on my part. I've now got ZFS boot working
  with UEFI. :)

  I booted the Skylake motherboard with FreeBSD-10.3-RC1-amd64-uefi-memstick.img
  and it successfully installed to ZFS *and* loaded at normal speed. Looks like
  UEFI is the way to go on Skylake systems. All tests have gone well so far.

  Thanks

I noted the same.  Legacy on my Skylake laptop was slow as molasses.
UEFI Rocks.

-- 
Larry Rosenman                     http://www.lerctr.org/~ler
Phone: +1 214-642-9640                 E-Mail: l...@lerctr.org
US Mail: 7011 W Parmer Ln, Apt 1115, Austin, TX 78729-6961