Re: Call for testing: VM bugs in 10.3

2016-08-17 Thread Andrea Venturoli

On 08/02/16 21:25, Konstantin Belousov wrote:

Below is a merge of some high-profile virtual memory subsystem bug
fixes from stable/10 to 10.3. I merged fixes for bugs reported by
users; issues which are even theoretically unlikely to occur in
real-world loads are not included in the patch set. The latter are
mostly corrections for the handling of radix insertion failures. The
included fixes address random SIGSEGV delivery to processes, hangs in
the "vodead" state during filesystem operations, and several others.

List of the merged revisions:
r301184 prevent parallel object collapses, fixes object lifecycle
r301436 do not leak the vm object lock, fixes overcommit disable
r302243 avoid the active object marking for vm.vmtotal sysctl, fixes
"vodead" hangs
r302513 vm_fault() race with the vm_object_collapse(), fixes spurious SIGSEGV
r303291 postpone BO_DEAD, fixes panic on fast vnode reclaim

I am asking for some testing; it is not necessary for your system to
exhibit the problematic behaviour for your testing to be useful. I am
mostly looking for a smoke-testing kind of confirmation that the patch
is fine, since neither I nor the people who usually help me with testing
run 10.3 systems.

If everything appears to be fine, my intent is to ask re/so to issue an
Errata Notice with these changes in about a week from now.


I upgraded a 10.3/amd64 system which was in fact showing some possibly
related trouble.


So far so good: I haven't had any problems. Although it's close to
impossible to deterministically reproduce the lockups I've seen, I have
noticed no regression so far.


I plan to upgrade other boxes in a few weeks.

 bye & Thanks
av.


I/O is very slow for FreeBSD 10.3 amd64 guest running on Citrix XenServer 6.5

2016-08-17 Thread Rainer Duffner
Hi,

I realized this week that my VMs on Citrix XenServer are very slow
compared to Linux.

I'm getting maybe 8 or 10 MB/s, whereas an Ubuntu 14 guest gets 110+ MB/s
(megabytes). This is independent of the filesystem; I was just wiping the
disks with dc3dd.
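
For reference, here is a rough sketch of an equivalent timed sequential
write in C, in case anyone wants to reproduce a comparable number without
dc3dd. It is only a sketch: the 1 GiB size is arbitrary, and the target is
an ordinary file on the filesystem under test rather than the raw disk, so
nothing gets wiped.

/*
 * seqwrite.c -- crude sequential-write throughput check (a sketch only,
 * not what dc3dd does).
 *
 *   cc -o seqwrite seqwrite.c && ./seqwrite /some/filesystem/testfile
 */
#include <sys/time.h>

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define	BLKSZ	(1024 * 1024)	/* 1 MiB per write() */
#define	NBLKS	1024		/* 1 GiB total */

int
main(int argc, char **argv)
{
	struct timeval t0, t1;
	double secs;
	char *buf;
	int fd, i;

	if (argc != 2) {
		fprintf(stderr, "usage: %s <target file>\n", argv[0]);
		return (1);
	}
	if ((fd = open(argv[1], O_WRONLY | O_CREAT | O_TRUNC, 0600)) < 0) {
		perror(argv[1]);
		return (1);
	}
	if ((buf = calloc(1, BLKSZ)) == NULL) {
		perror("calloc");
		return (1);
	}
	gettimeofday(&t0, NULL);
	for (i = 0; i < NBLKS; i++)
		if (write(fd, buf, BLKSZ) != BLKSZ) {
			perror("write");
			return (1);
		}
	fsync(fd);			/* make sure the data reached the disk */
	gettimeofday(&t1, NULL);
	close(fd);

	secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
	printf("%d MiB in %.1f s = %.1f MiB/s\n", NBLKS, secs, NBLKS / secs);
	return (0);
}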

This went unnoticed, because not much I/O is done in these VMs. But recently, a 
customer complained and I had to look into it.

The VM is running stock 10.3-RELEASE-p6.

The OS type is set to FreeBSD 10 64 bit.
The same thing happens with FreeBSD 11-RC1.

This is what I get in 11-RC1 from sysctl:

(freebsd11 ) 0 # sysctl -a |grep xen
kern.vm_guest: xen
device  xenpci
vfs.pfs.vncache.maxentries: 0
dev.xenbusb_back.0.%parent: xenstore0
dev.xenbusb_back.0.%pnpinfo:
dev.xenbusb_back.0.%location:
dev.xenbusb_back.0.%driver: xenbusb_back
dev.xenbusb_back.0.%desc: Xen Backend Devices
dev.xenbusb_back.%parent:
dev.xn.0.xenstore_peer_path: /local/domain/0/backend/vif/245/0
dev.xn.0.xenbus_peer_domid: 0
dev.xn.0.xenbus_connection_state: Connected
dev.xn.0.xenbus_dev_type: vif
dev.xn.0.xenstore_path: device/vif/0
dev.xn.0.%parent: xenbusb_front0
dev.xbd.1.xenstore_peer_path: /local/domain/0/backend/vbd3/245/768
dev.xbd.1.xenbus_peer_domid: 0
dev.xbd.1.xenbus_connection_state: Connected
dev.xbd.1.xenbus_dev_type: vbd
dev.xbd.1.xenstore_path: device/vbd/768
dev.xbd.1.%parent: xenbusb_front0
dev.xbd.0.xenstore_peer_path: /local/domain/0/backend/vbd3/245/832
dev.xbd.0.xenbus_peer_domid: 0
dev.xbd.0.xenbus_connection_state: Connected
dev.xbd.0.xenbus_dev_type: vbd
dev.xbd.0.xenstore_path: device/vbd/832
dev.xbd.0.%parent: xenbusb_front0
dev.xenbusb_front.0.%parent: xenstore0
dev.xenbusb_front.0.%pnpinfo:
dev.xenbusb_front.0.%location:
dev.xenbusb_front.0.%driver: xenbusb_front
dev.xenbusb_front.0.%desc: Xen Frontend Devices
dev.xenbusb_front.%parent:
dev.xs_dev.0.%parent: xenstore0
dev.xctrl.0.%parent: xenstore0
dev.xenballoon.0.%parent: xenstore0
dev.xenballoon.0.%pnpinfo:
dev.xenballoon.0.%location:
dev.xenballoon.0.%driver: xenballoon
dev.xenballoon.0.%desc: Xen Balloon Device
dev.xenballoon.%parent:
dev.debug.0.%parent: xenpv0
dev.privcmd.0.%parent: xenpv0
dev.evtchn.0.%parent: xenpv0
dev.xenstore.0.%parent: xenpv0
dev.xenstore.0.%pnpinfo:
dev.xenstore.0.%location:
dev.xenstore.0.%driver: xenstore
dev.xenstore.0.%desc: XenStore
dev.xenstore.%parent:
dev.xen_et.0.%parent: xenpv0
dev.xen_et.0.%pnpinfo:
dev.xen_et.0.%location:
dev.xen_et.0.%driver: xen_et
dev.xen_et.0.%desc: Xen PV Clock
dev.xen_et.%parent:
dev.granttable.0.%parent: xenpv0
dev.xenpv.0.%parent: nexus0
dev.xenpv.0.%pnpinfo:
dev.xenpv.0.%location:
dev.xenpv.0.%driver: xenpv
dev.xenpv.0.%desc: Xen PV bus
dev.xenpv.%parent:
dev.xenpci.0.%parent: pci0
dev.xenpci.0.%pnpinfo: vendor=0x5853 device=0x0001 subvendor=0x5853 subdevice=0x0001 class=0x01
dev.xenpci.0.%location: slot=3 function=0 dbsf=pci0:0:3:0 handle=\_SB_.PCI0.S18_
dev.xenpci.0.%driver: xenpci
dev.xenpci.0.%desc: Xen Platform Device
dev.xenpci.%parent:
dev.xen.xsd_kva: 18446735281894703104
dev.xen.xsd_port: 3
dev.xen.balloon.high_mem: 0
dev.xen.balloon.low_mem: 0
dev.xen.balloon.hard_limit: 18446744073709551615
dev.xen.balloon.driver_pages: 0
dev.xen.balloon.target: 2097152
dev.xen.balloon.current: 2096128

I've tried switching the "OS Type" to something like "other PV" and got a
bit more throughput, but nowhere near enough to make this useful.

Over at freebsd-xen, Roger thinks it looks right from the FreeBSD side.

I'm not the administrator of the XenServer itself (it's part of an Apache
CloudStack private-cloud cluster), but I can have pretty much any setting
I need checked or tried.


Re: unionfs bugs, a partial patch and some comments [Was: Re: 1-BETA3 Panic: __lockmgr_args: downgrade a recursed lockmgr nfs @ /usr/local/share/deploy-tools/RELENG_11/src/sys/fs/unionfs/union_vnops.c]

2016-08-17 Thread Rick Macklem
 Kostik wrote:
[stuff snipped]
>insmntque() performs the cleanup on its own, and that default cleanup is
>not suitable for the situation.  I think that insmntque1() would better
>fit your requirements, you need to move the common code into a helper.
>It seems that unionfs_ins_cached_vnode() cleanup could reuse it.

I've attached an updated patch (untested, like the last one). This one
creates a custom version of insmntque_stddtr() that first calls
unionfs_noderem() and then does the same things insmntque_stddtr() does.
That looks like it performs the required cleanup (unionfs_noderem() is what
the unionfs VOP_RECLAIM() does): among other things, it switches the node
back to using its own v_vnlock, exclusively locked.
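
To visualize the shape of that, here is a rough sketch based only on the
description above; it is not the attached patch, the function name is made
up, and the body simply mirrors what insmntque_stddtr() does in vfs_subr.c
after the unionfs-specific teardown (assuming the usual unionfs headers are
in scope, as they are in union_vnops.c):

/*
 * Hypothetical insmntque1() destructor for unionfs: first undo the
 * unionfs node setup (the same work the unionfs VOP_RECLAIM() does,
 * which also moves the vnode back onto its own, exclusively held,
 * v_vnlock), then do the same steps as insmntque_stddtr().
 */
static void
unionfs_insmntque_dtr(struct vnode *vp, void *arg __unused)
{

	unionfs_noderem(vp, curthread);
	vp->v_data = NULL;
	vp->v_op = &dead_vnodeops;
	vgone(vp);
	vput(vp);
}

It would then be passed to insmntque1() instead of calling plain
insmntque(), something like:

	error = insmntque1(vp, mp, unionfs_insmntque_dtr, NULL);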

rick



unionfs-newvnode.patch
Description: unionfs-newvnode.patch