svn commit: r355209 - head/sys/powerpc/pseries
Author: luporl
Date: Fri Nov 29 11:34:11 2019
New Revision: 355209
URL: https://svnweb.freebsd.org/changeset/base/355209

Log:
  [PPC] Remove extra \0 char inserted on vty by QEMU

  Since version 2.11.0, QEMU became bug-compatible with PowerVM's vty
  implementation, by inserting a \0 after every \r going to the guest.
  Guests are expected to workaround this issue by removing every \0
  immediately following a \r.

  Reviewed by:	jhibbits
  Differential Revision:	https://reviews.freebsd.org/D22171

Modified:
  head/sys/powerpc/pseries/phyp_console.c

Modified: head/sys/powerpc/pseries/phyp_console.c
==============================================================================
--- head/sys/powerpc/pseries/phyp_console.c	Fri Nov 29 06:25:07 2019	(r355208)
+++ head/sys/powerpc/pseries/phyp_console.c	Fri Nov 29 11:34:11 2019	(r355209)
@@ -287,6 +287,7 @@ uart_phyp_get(struct uart_phyp_softc *sc, void *buffer
 {
 	int err;
 	int hdr = 0;
+	uint64_t i, j;
 
 	uart_lock(&sc->sc_mtx);
 	if (sc->inbuflen == 0) {
@@ -297,7 +298,7 @@ uart_phyp_get(struct uart_phyp_softc *sc, void *buffer
 			uart_unlock(&sc->sc_mtx);
 			return (-1);
 		}
-		hdr = 1;
+		hdr = 1;
 	}
 
 	if (sc->inbuflen == 0) {
@@ -305,15 +306,35 @@ uart_phyp_get(struct uart_phyp_softc *sc, void *buffer
 		uart_unlock(&sc->sc_mtx);
 		return (0);
 	}
 
-	if (bufsize > sc->inbuflen)
-		bufsize = sc->inbuflen;
-
 	if ((sc->protocol == HVTERMPROT) && (hdr == 1)) {
 		sc->inbuflen = sc->inbuflen - 4;
 		/* The VTERM protocol has a 4 byte header, skip it here. */
 		memmove(&sc->phyp_inbuf.str[0], &sc->phyp_inbuf.str[4],
 		    sc->inbuflen);
 	}
+
+	/*
+	 * Since version 2.11.0, QEMU became bug-compatible with
+	 * PowerVM's vty implementation, by inserting a \0 after
+	 * every \r going to the guest. Guests are expected to
+	 * workaround this issue by removing every \0 immediately
+	 * following a \r.
+	 */
+	if (hdr == 1) {
+		for (i = 0, j = 0; i < sc->inbuflen; i++, j++) {
+			if (i > j)
+				sc->phyp_inbuf.str[j] = sc->phyp_inbuf.str[i];
+
+			if (sc->phyp_inbuf.str[i] == '\r' &&
+			    i < sc->inbuflen - 1 &&
+			    sc->phyp_inbuf.str[i + 1] == '\0')
+				i++;
+		}
+		sc->inbuflen -= i - j;
+	}
+
+	if (bufsize > sc->inbuflen)
+		bufsize = sc->inbuflen;
 	memcpy(buffer, sc->phyp_inbuf.str, bufsize);
 	sc->inbuflen -= bufsize;
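For readers following the workaround itself, here is the same compaction loop pulled out as a small, self-contained sketch. The standalone function strip_cr_nul() is illustrative only; the commit does this inline on sc->phyp_inbuf.str.

#include <stddef.h>

/*
 * Drop every '\0' that immediately follows a '\r', compacting the buffer
 * in place, and return the new length.  Mirrors the loop added above.
 */
static size_t
strip_cr_nul(char *buf, size_t len)
{
	size_t i, j;

	for (i = 0, j = 0; i < len; i++, j++) {
		if (i > j)
			buf[j] = buf[i];
		if (buf[i] == '\r' && i < len - 1 && buf[i + 1] == '\0')
			i++;	/* skip the '\0' QEMU appended after '\r' */
	}
	return (len - (i - j));
}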
svn commit: r355210 - head/sys/fs/nfsclient
Author: kib
Date: Fri Nov 29 13:55:56 2019
New Revision: 355210
URL: https://svnweb.freebsd.org/changeset/base/355210

Log:
  In nfs_lock(), recheck vp->v_data after lock before accessing it.

  We might race with reclaim, and then this is no longer a nfs vnode, in
  which case we do not need to handle deferred vnode_pager_setsize()
  either.

  Reported by:	r...@ronald.org
  PR:		242184
  Sponsored by:	The FreeBSD Foundation
  MFC after:	3 days

Modified:
  head/sys/fs/nfsclient/nfs_clvnops.c

Modified: head/sys/fs/nfsclient/nfs_clvnops.c
==============================================================================
--- head/sys/fs/nfsclient/nfs_clvnops.c	Fri Nov 29 11:34:11 2019	(r355209)
+++ head/sys/fs/nfsclient/nfs_clvnops.c	Fri Nov 29 13:55:56 2019	(r355210)
@@ -312,6 +312,8 @@ nfs_lock(struct vop_lock1_args *ap)
 	if (error != 0 || vp->v_op != &newnfs_vnodeops)
 		return (error);
 	np = VTONFS(vp);
+	if (np == NULL)
+		return (0);
 	NFSLOCKNODE(np);
 	if ((np->n_flag & NVNSETSZSKIP) == 0 || (lktype != LK_SHARED &&
 	    lktype != LK_EXCLUSIVE && lktype != LK_UPGRADE &&
@@ -345,6 +347,9 @@ nfs_lock(struct vop_lock1_args *ap)
 	error = VOP_LOCK1_APV(&default_vnodeops, ap);
 	if (error != 0 || vp->v_op != &newnfs_vnodeops)
 		return (error);
+	if (vp->v_data == NULL)
+		goto downgrade;
+	MPASS(vp->v_data == np);
 	NFSLOCKNODE(np);
 	if ((np->n_flag & NVNSETSZSKIP) == 0) {
 		NFSUNLOCKNODE(np);
svn commit: r355211 - in head/sys: cddl/contrib/opensolaris/uts/common/fs/zfs kern sys
Author: kib
Date: Fri Nov 29 14:02:32 2019
New Revision: 355211
URL: https://svnweb.freebsd.org/changeset/base/355211

Log:
  Add a VN_OPEN_INVFS flag.

  vn_open_cred() assumes that it is called from the top-level of a VFS
  syscall.  Writers must call bwillwrite() before locking any VFS
  resource to wait for cleanup of dirty buffers.

  ZFS getextattr() and setextattr() VOPs do call vn_open_cred(), which
  results in wait for unrelated buffers while owning ZFS vnode lock (and
  ZFS does not use buffer cache).  VN_OPEN_INVFS allows caller to skip
  bwillwrite.

  Note that ZFS is still incorrect there, because it starts write on an
  mp and locks a vnode while holding another vnode lock.

  Reported by:	Willem Jan Withagen
  Sponsored by:	The FreeBSD Foundation
  MFC after:	1 week

Modified:
  head/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vnops.c
  head/sys/kern/vfs_vnops.c
  head/sys/sys/vnode.h

Modified: head/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vnops.c
==============================================================================
--- head/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vnops.c	Fri Nov 29 13:55:56 2019	(r355210)
+++ head/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vnops.c	Fri Nov 29 14:02:32 2019	(r355211)
@@ -5490,7 +5490,7 @@ vop_getextattr {
 	flags = FREAD;
 	NDINIT_ATVP(&nd, LOOKUP, NOFOLLOW, UIO_SYSSPACE, attrname, xvp, td);
 
-	error = vn_open_cred(&nd, &flags, 0, 0, ap->a_cred, NULL);
+	error = vn_open_cred(&nd, &flags, VN_OPEN_INVFS, 0, ap->a_cred, NULL);
 	vp = nd.ni_vp;
 	NDFREE(&nd, NDF_ONLY_PNBUF);
 	if (error != 0) {
@@ -5627,7 +5627,8 @@ vop_setextattr {
 	flags = FFLAGS(O_WRONLY | O_CREAT);
 	NDINIT_ATVP(&nd, LOOKUP, NOFOLLOW, UIO_SYSSPACE, attrname, xvp, td);
 
-	error = vn_open_cred(&nd, &flags, 0600, 0, ap->a_cred, NULL);
+	error = vn_open_cred(&nd, &flags, 0600, VN_OPEN_INVFS, ap->a_cred,
+	    NULL);
 	vp = nd.ni_vp;
 	NDFREE(&nd, NDF_ONLY_PNBUF);
 	if (error != 0) {

Modified: head/sys/kern/vfs_vnops.c
==============================================================================
--- head/sys/kern/vfs_vnops.c	Fri Nov 29 13:55:56 2019	(r355210)
+++ head/sys/kern/vfs_vnops.c	Fri Nov 29 14:02:32 2019	(r355211)
@@ -219,7 +219,8 @@ restart:
 			ndp->ni_cnd.cn_flags |= AUDITVNODE1;
 		if (vn_open_flags & VN_OPEN_NOCAPCHECK)
 			ndp->ni_cnd.cn_flags |= NOCAPCHECK;
-		bwillwrite();
+		if ((vn_open_flags & VN_OPEN_INVFS) == 0)
+			bwillwrite();
 		if ((error = namei(ndp)) != 0)
 			return (error);
 		if (ndp->ni_vp == NULL) {

Modified: head/sys/sys/vnode.h
==============================================================================
--- head/sys/sys/vnode.h	Fri Nov 29 13:55:56 2019	(r355210)
+++ head/sys/sys/vnode.h	Fri Nov 29 14:02:32 2019	(r355211)
@@ -579,6 +579,7 @@ typedef void vop_getpages_iodone_t(void *, vm_page_t *
 #define	VN_OPEN_NOAUDIT		0x0001
 #define	VN_OPEN_NOCAPCHECK	0x0002
 #define	VN_OPEN_NAMECACHE	0x0004
+#define	VN_OPEN_INVFS		0x0008
 
 /*
  * Public vnode manipulation functions.
svn commit: r355212 - head/sys/kern
Author: kevans
Date: Fri Nov 29 14:46:13 2019
New Revision: 355212
URL: https://svnweb.freebsd.org/changeset/base/355212

Log:
  tty_rel_gone: add locking assertion

  We already assert the lock is held later during tty_rel_free(), but it
  is arguably good form to clarify locking expectations here as well at
  the top-level that other drivers use.

Modified:
  head/sys/kern/tty.c

Modified: head/sys/kern/tty.c
==============================================================================
--- head/sys/kern/tty.c	Fri Nov 29 14:02:32 2019	(r355211)
+++ head/sys/kern/tty.c	Fri Nov 29 14:46:13 2019	(r355212)
@@ -1180,6 +1180,7 @@ void
 tty_rel_gone(struct tty *tp)
 {
 
+	tty_lock_assert(tp, MA_OWNED);
 	MPASS(!tty_gone(tp));
 
 	/* Simulate carrier removal. */
svn commit: r355213 - head/sys/arm64/arm64
Author: andrew
Date: Fri Nov 29 16:14:32 2019
New Revision: 355213
URL: https://svnweb.freebsd.org/changeset/base/355213

Log:
  Use the VM_MEMATTR macros to describe the MAIR offsets.

  Remove the duplicate macros that defined a subset of the VM_MEMATTR
  values.  While here use VM_MEMATTR macros when filling in the MAIR
  register.

  Reviewed by:	alc, markj
  Sponsored by:	DARPA, AFRL
  Differential Revision:	https://reviews.freebsd.org/D22241

Modified:
  head/sys/arm64/arm64/locore.S
  head/sys/arm64/arm64/pmap.c

Modified: head/sys/arm64/arm64/locore.S
==============================================================================
--- head/sys/arm64/arm64/locore.S	Fri Nov 29 14:46:13 2019	(r355212)
+++ head/sys/arm64/arm64/locore.S	Fri Nov 29 16:14:32 2019	(r355213)
@@ -34,6 +34,7 @@
 #include
 #include
 #include
+#include
 #include
 
 #define	VIRT_BITS	48
@@ -42,10 +43,6 @@
 	.globl	kernbase
 	.set	kernbase, KERNBASE
 
-#define	DEVICE_MEM	0
-#define	NORMAL_UNCACHED	1
-#define	NORMAL_MEM	2
-
 /*
  * We assume:
  *  MMU      on with an identity map, or off
@@ -396,7 +393,7 @@ create_pagetables:
 
 	/* Create the kernel space L2 table */
 	mov	x6, x26
-	mov	x7, #NORMAL_MEM
+	mov	x7, #VM_MEMATTR_WRITE_BACK
 	mov	x8, #(KERNBASE & L2_BLOCK_MASK)
 	mov	x9, x28
 	bl	build_l2_block_pagetable
@@ -433,15 +430,17 @@ create_pagetables:
 	mov	x6, x27		/* The initial page table */
 #if defined(SOCDEV_PA) && defined(SOCDEV_VA)
 	/* Create a table for the UART */
-	mov	x7, #(ATTR_nG | ATTR_IDX(DEVICE_MEM))
+	mov	x7, #(ATTR_nG | ATTR_IDX(VM_MEMATTR_DEVICE))
 	mov	x8, #(SOCDEV_VA)	/* VA start */
 	mov	x9, #(SOCDEV_PA)	/* PA start */
 	mov	x10, #1
 	bl	build_l1_block_pagetable
#endif
 
-	/* Create the VA = PA map */
-	mov	x7, #(ATTR_nG | ATTR_IDX(NORMAL_UNCACHED))
+	/*
+	 * Create the VA = PA map
+	 */
+	mov	x7, #(ATTR_nG | ATTR_IDX(VM_MEMATTR_UNCACHEABLE))
 	mov	x9, x27
 	mov	x8, x9		/* VA start (== PA start) */
 	mov	x10, #1
@@ -658,10 +657,10 @@ start_mmu:
 
 	.align 3
 mair:
-	.quad	MAIR_ATTR(MAIR_DEVICE_nGnRnE, 0) |	\
-		MAIR_ATTR(MAIR_NORMAL_NC, 1) |		\
-		MAIR_ATTR(MAIR_NORMAL_WB, 2) |		\
-		MAIR_ATTR(MAIR_NORMAL_WT, 3)
+	.quad	MAIR_ATTR(MAIR_DEVICE_nGnRnE, VM_MEMATTR_DEVICE)    |	\
+		MAIR_ATTR(MAIR_NORMAL_NC, VM_MEMATTR_UNCACHEABLE)   |	\
+		MAIR_ATTR(MAIR_NORMAL_WB, VM_MEMATTR_WRITE_BACK)    |	\
+		MAIR_ATTR(MAIR_NORMAL_WT, VM_MEMATTR_WRITE_THROUGH)
 tcr:
 	.quad (TCR_TxSZ(64 - VIRT_BITS) | TCR_TG1_4K | \
 	    TCR_CACHE_ATTRS | TCR_SMP_ATTRS)

Modified: head/sys/arm64/arm64/pmap.c
==============================================================================
--- head/sys/arm64/arm64/pmap.c	Fri Nov 29 14:46:13 2019	(r355212)
+++ head/sys/arm64/arm64/pmap.c	Fri Nov 29 16:14:32 2019	(r355213)
@@ -169,14 +169,6 @@ __FBSDID("$FreeBSD$");
 #define PMAP_INLINE
 #endif
 
-/*
- * These are configured by the mair_el1 register. This is set up in locore.S
- */
-#define	DEVICE_MEMORY	0
-#define	UNCACHED_MEMORY	1
-#define	CACHED_MEMORY	2
-
-
 #ifdef PV_STATS
 #define PV_STAT(x)	do { x ; } while (0)
 #else
@@ -707,7 +699,7 @@ pmap_bootstrap_dmap(vm_offset_t kern_l1, vm_paddr_t mi
 		KASSERT(l2_slot != 0, ("..."));
 		pmap_store(&l2[l2_slot],
 		    (pa & ~L2_OFFSET) | ATTR_DEFAULT | ATTR_XN |
-		    ATTR_IDX(CACHED_MEMORY) | L2_BLOCK);
+		    ATTR_IDX(VM_MEMATTR_WRITE_BACK) | L2_BLOCK);
 	}
 	KASSERT(va == (pa - dmap_phys_base + DMAP_MIN_ADDRESS), ("..."));
@@ -719,7 +711,7 @@ pmap_bootstrap_dmap(vm_offset_t kern_l1, vm_paddr_t mi
 		l1_slot = ((va - DMAP_MIN_ADDRESS) >> L1_SHIFT);
 		pmap_store(&pagetable_dmap[l1_slot],
 		    (pa & ~L1_OFFSET) | ATTR_DEFAULT | ATTR_XN |
-		    ATTR_IDX(CACHED_MEMORY) | L1_BLOCK);
+		    ATTR_IDX(VM_MEMATTR_WRITE_BACK) | L1_BLOCK);
 	}
 
 	/* Create L2 mappings at the end of the region */
@@ -744,7 +736,7 @@ pmap_bootstrap_dmap(vm_offset_t kern_l1, vm_paddr_t mi
 		l2_slot = pmap_l2_index(va);
 		pmap_store(&l2[l2_slot],
 		    (pa & ~L2_OFFSET) | ATTR_DEFAULT | ATTR_XN |
-		    ATTR_IDX(CACHED_MEMORY) | L2_BLOCK);
+		    ATTR_IDX(VM_MEMATTR_WRITE_BACK) | L2_BLOCK);
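As background for this change (an illustrative aside, not part of the commit): MAIR_EL1 holds eight 8-bit attribute fields, and a PTE's AttrIndx field selects one of them, so the index used when locore.S programs the register must match the index pmap.c encodes into page-table entries. The EX_-prefixed macros below are a hypothetical restatement of that encoding, assuming the usual ARMv8-A stage 1 descriptor layout:

#include <stdint.h>

/* Hypothetical restatement of the encoding (EX_ prefix marks the assumption). */
#define	EX_MAIR_ATTR(attr, idx)	((uint64_t)(attr) << ((idx) * 8))	/* MAIR_EL1 field 'idx' */
#define	EX_ATTR_IDX(idx)	((uint64_t)(idx) << 2)			/* PTE AttrIndx, bits [4:2] */

/*
 * Feeding the same VM_MEMATTR_* constant to both macros is what keeps the
 * MAIR programming and the PTE attribute indices in sync, which is the
 * point of the commit above.
 */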
svn commit: r355214 - head/sys/dev/gpio
Author: ian
Date: Fri Nov 29 18:05:54 2019
New Revision: 355214
URL: https://svnweb.freebsd.org/changeset/base/355214

Log:
  Ignore "gpio-hog" nodes when instantiating ofw_gpiobus children.

  Also, in ofw_gpiobus_probe() return BUS_PROBE_DEFAULT rather than 0; we
  are not the only possible driver to handle this device, we're just
  slightly better than the base gpiobus (which probes at
  BUS_PROBE_GENERIC).

  In the time since this code was first written, the gpio controller
  bindings acquired the concept of a "hog" node which could be used to
  preset one or more gpio pins as input or output at a specified level.
  This change doesn't fully implement the hogging concept, it just
  filters out hog nodes when instantiating child devices by scanning for
  child nodes in the fdt data.

  The whole concept of having child nodes under the controller node is
  not supported by the standard bindings, and appears to be a freebsd
  extension, probably left over from the days when we had no support for
  cross-tree phandle references in the fdt data.

Modified:
  head/sys/dev/gpio/ofw_gpiobus.c

Modified: head/sys/dev/gpio/ofw_gpiobus.c
==============================================================================
--- head/sys/dev/gpio/ofw_gpiobus.c	Fri Nov 29 16:14:32 2019	(r355213)
+++ head/sys/dev/gpio/ofw_gpiobus.c	Fri Nov 29 18:05:54 2019	(r355214)
@@ -492,7 +492,7 @@ ofw_gpiobus_probe(device_t dev)
 		return (ENXIO);
 	device_set_desc(dev, "OFW GPIO bus");
 
-	return (0);
+	return (BUS_PROBE_DEFAULT);
 }
 
 static int
@@ -511,6 +511,8 @@ ofw_gpiobus_attach(device_t dev)
 	 */
 	for (child = OF_child(ofw_bus_get_node(dev)); child != 0;
 	    child = OF_peer(child)) {
+		if (OF_hasprop(child, "gpio-hog"))
+			continue;
 		if (!OF_hasprop(child, "gpios"))
 			continue;
 		if (ofw_gpiobus_add_fdt_child(dev, NULL, child) == NULL)
Re: svn commit: r355145 - head/sys/arm64/arm64
On 11/28/19 7:52 AM, Konstantin Belousov wrote:
> On Thu, Nov 28, 2019 at 09:17:15AM +0000, Andrew Turner wrote:
>> On 28 Nov 2019, at 08:48, Michal Meloun wrote:
>>> On 27.11.2019 21:33, Alan Cox wrote:
>>>> Author: alc
>>>> Date: Wed Nov 27 20:33:49 2019
>>>> New Revision: 355145
>>>> URL: https://svnweb.freebsd.org/changeset/base/355145
>>>>
>>>> Log:
>>>>   There is no reason why we need to pin the underlying thread to its
>>>>   current processor in pmap_invalidate_{all,page,range}().  These
>>>>   functions are using an instruction that broadcasts the TLB
>>>>   invalidation to every processor, so even if a thread migrates in
>>>>   the middle of one of these functions every processor will still
>>>>   perform the required TLB invalidations.
>>>
>>> I think this is not the right assumption.  The problem is not in the
>>> TLB operations themselves, but in the following 'dsb' and / or 'isb'.
>>> 'dsb' ensures that all TLB operations transmitted by the local CPU are
>>> performed and visible to other observers.  But it does nothing with
>>> TLBs emitted by other CPUs.
>>> For example, if a given thread is rescheduled after all TLB operations
>>> but before 'dsb' or 'isb' is performed, then the requested
>>> synchronization does not occur at all.
>>
>> The tlbi instructions need a context synchronisation point.  One option
>> is the dsb & isb instructions, another is an exception entry.
>>
>> For a thread to be rescheduled it requires the timer interrupt to fire.
>> As an exception entry is a context synchronisation point and an
>> interrupt will cause an exception entry, there will be such a point
>> after the tlbi instruction.
>
> D5.10.2 "TLB maintenance instructions", 'Ordering and completion of TLB
> maintenance instructions', states that a DSB on the PE that issued the
> TLBI is required.  It does not state that an arbitrary event causing
> SynchronizeContext() is enough.
>
> Also I was not able to find any explanation of SynchronizeContext().

The entry for "Context Synchronization Events" in the Glossary on page
7460 provides the best explanation that I've found.  The first three
listed events are an isb instruction, taking an exception, and returning
from an exception.  However, there is nothing here to suggest that taking
or returning from an exception can take the place of a dsb instruction.

(On a related note, I'll observe that Linux does not perform an isb
instruction during TLB invalidation unless it is changing a leaf in the
kernel page table.  For changes to user-space page tables, they appear to
be assuming that the return to user-space will suffice to resync the
user-space instruction stream.)

> Curiously, on IA32 exceptions are not specified to issue a serialization
> point, although rumors say that on all produced microarchitectures they
> are.

This issue has similarities to the one that we discussed in
https://reviews.freebsd.org/D22007.  For example, on a context switch we
will perform a dsb instruction in pmap_activate_int() unless we are
switching to a thread within the same address space.  Moreover, that dsb
instruction will be performed before the lock on the old thread is
released.  So, in this case, we are guaranteed that any previously issued
tlbi instructions are completed before the thread can be restarted on
another processor.  However, if we are simply switching between threads
within the same address space, we are not performing a dsb instruction.
And, my error was believing that the memory barriers inherent to the
locking operations on the thread during context switches would be
sufficient to ensure that all of the tlbi instructions on the initial
processor would have completed before the dsb instruction that finishes
the pmap_invalidate_*() on the eventual processor completed.

After further reading, I'm afraid that we have a similar issue with cache
management functions, like cpu_icache_sync_range().  Quoting K11.5.2,
"The ARMv8 architecture requires a PE that executes an instruction cache
maintenance instruction to execute a DSB instruction to ensure completion
of the maintenance operation."  So, we have a similar problem if we are
preempted during the cpu_icache_sync_range() calls from pmap_enter(),
pmap_enter_quick(), and pmap_sync_icache().  Unless we are switching to a
different address space, we are not guaranteed to perform a dsb
instruction on the initial processor.  Moreover, we are configuring the
processor in locore.S so that user-space can directly perform "ic ivau"
instructions.  (See SCTLR_UCI in sctlr_set.)

So, I'm inclined to say that we should handle this issue analogously to
r354630 on amd64:

Index: arm64/arm64/pmap.c
===================================================================
--- arm64/arm64/pmap.c	(revision 355145)
+++ arm64/arm64/pmap.c	(working copy)
@@ -5853,8 +5853,11 @@ pmap_activate_int(pmap_t pmap)
 
 	KASSERT(PCPU_GET(curpmap) != NULL, ("no active pmap"));
 	KASSERT(pmap != kernel_pmap, ("kernel pmap activation"));
-	if (pmap == PCPU_GET(curpmap))
+	if (pmap == PCPU_GET(curpmap)) {
+		/* XXXExplain wh
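For reference, the ordering requirement being debated above can be sketched roughly as follows. This is an illustrative, simplified sequence, not the tree's actual pmap_invalidate_page(); the function name and the choice of a VA-based, all-ASID broadcast invalidate are assumptions made for the example.

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/sched.h>

/*
 * Illustrative only: invalidate one page's final-level TLB entries on all
 * CPUs.  The TLBI broadcasts, but the DSB that waits for its completion
 * only covers TLBIs issued by this PE, which is why the thread stays
 * pinned for the whole sequence.
 */
static void
example_invalidate_page(vm_offset_t va)
{
	uint64_t r;

	r = atop(va);			/* VA[55:12] operand encoding */
	sched_pin();			/* stay on the PE that issues the TLBI */
	__asm __volatile(
	    "dsb  ishst\n\t"		/* order prior PTE stores */
	    "tlbi vaae1is, %0\n\t"	/* broadcast invalidate by VA, all ASIDs */
	    "dsb  ish\n\t"		/* wait for completion on the issuing PE */
	    "isb"			/* local context synchronization */
	    : : "r" (r) : "memory");
	sched_unpin();
}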
svn commit: r355215 - head/sys/vm
Author: jeff
Date: Fri Nov 29 19:47:40 2019
New Revision: 355215
URL: https://svnweb.freebsd.org/changeset/base/355215

Log:
  Fix a perf regression from r355122.  We can use a shared lock to drop
  the last ref on vnodes.

  Reviewed by:	kib, markj
  Differential Revision:	https://reviews.freebsd.org/D22565

Modified:
  head/sys/vm/vm_object.c

Modified: head/sys/vm/vm_object.c
==============================================================================
--- head/sys/vm/vm_object.c	Fri Nov 29 18:05:54 2019	(r355214)
+++ head/sys/vm/vm_object.c	Fri Nov 29 19:47:40 2019	(r355215)
@@ -520,15 +520,22 @@ static void
 vm_object_vndeallocate(vm_object_t object)
 {
 	struct vnode *vp = (struct vnode *) object->handle;
+	bool last;
 
 	KASSERT(object->type == OBJT_VNODE,
 	    ("vm_object_vndeallocate: not a vnode object"));
 	KASSERT(vp != NULL, ("vm_object_vndeallocate: missing vp"));
 
+	/* Object lock to protect handle lookup. */
+	last = refcount_release(&object->ref_count);
+	VM_OBJECT_RUNLOCK(object);
+
+	if (!last)
+		return;
+
 	if (!umtx_shm_vnobj_persistent)
 		umtx_shm_object_terminated(object);
-	VM_OBJECT_WUNLOCK(object);
 
 	/* vrele may need the vnode lock. */
 	vrele(vp);
 }
@@ -548,7 +555,7 @@ void
 vm_object_deallocate(vm_object_t object)
 {
 	vm_object_t robject, temp;
-	bool last, released;
+	bool released;
 
 	while (object != NULL) {
 		/*
@@ -565,18 +572,22 @@ vm_object_deallocate(vm_object_t object)
 		if (released)
 			return;
 
-		VM_OBJECT_WLOCK(object);
-		KASSERT(object->ref_count != 0,
-		    ("vm_object_deallocate: object deallocated too many times: %d", object->type));
-
-		last = refcount_release(&object->ref_count);
 		if (object->type == OBJT_VNODE) {
-			if (last)
+			VM_OBJECT_RLOCK(object);
+			if (object->type == OBJT_VNODE) {
 				vm_object_vndeallocate(object);
-			else
-				VM_OBJECT_WUNLOCK(object);
-			return;
+				return;
+			}
+			VM_OBJECT_RUNLOCK(object);
 		}
+
+		VM_OBJECT_WLOCK(object);
+		KASSERT(object->ref_count > 0,
+		    ("vm_object_deallocate: object deallocated too many times: %d",
+		    object->type));
+
+		if (refcount_release(&object->ref_count))
+			goto doterm;
 		if (object->ref_count > 1) {
 			VM_OBJECT_WUNLOCK(object);
 			return;
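The idea behind the change, sketched in isolation (illustrative only; struct example_obj, example_teardown() and the rwlock are stand-ins, not the vm_object code): refcount(9) manipulates the count atomically, so dropping a reference only needs a lock strong enough to keep the handle stable, and only the final release has to pay for the expensive teardown path.

#include <sys/param.h>
#include <sys/lock.h>
#include <sys/rwlock.h>
#include <sys/refcount.h>

struct example_obj {
	struct rwlock	 lock;
	u_int		 refs;
	void		*handle;
};

static void	example_teardown(void *handle);		/* hypothetical */

static void
example_drop_ref(struct example_obj *o)
{
	bool last;

	/* Shared lock: only needed to keep o->handle stable for the lookup. */
	rw_rlock(&o->lock);
	last = refcount_release(&o->refs);
	rw_runlock(&o->lock);
	if (!last)
		return;
	/* Only the final reference pays for the expensive teardown. */
	example_teardown(o->handle);
}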
svn commit: r355216 - head/sys/vm
Author: jeff
Date: Fri Nov 29 19:49:20 2019
New Revision: 355216
URL: https://svnweb.freebsd.org/changeset/base/355216

Log:
  Avoid acquiring the object lock if color is already set.  It can not be
  unset until the object is recycled so this check is stable.  Now that
  we can acquire the ref without a lock it is not necessary to group
  these operations and we can avoid it entirely in many cases.

  Reviewed by:	kib, markj
  Differential Revision:	https://reviews.freebsd.org/D22565

Modified:
  head/sys/vm/vm_mmap.c

Modified: head/sys/vm/vm_mmap.c
==============================================================================
--- head/sys/vm/vm_mmap.c	Fri Nov 29 19:47:40 2019	(r355215)
+++ head/sys/vm/vm_mmap.c	Fri Nov 29 19:49:20 2019	(r355216)
@@ -1325,12 +1325,14 @@ vm_mmap_vnode(struct thread *td, vm_size_t objsize,
 	} else {
 		KASSERT(obj->type == OBJT_DEFAULT || obj->type == OBJT_SWAP,
 		    ("wrong object type"));
-		VM_OBJECT_WLOCK(obj);
-		vm_object_reference_locked(obj);
+		vm_object_reference(obj);
 #if VM_NRESERVLEVEL > 0
-		vm_object_color(obj, 0);
+		if ((obj->flags & OBJ_COLORED) == 0) {
+			VM_OBJECT_WLOCK(obj);
+			vm_object_color(obj, 0);
+			VM_OBJECT_WUNLOCK(obj);
+		}
 #endif
-		VM_OBJECT_WUNLOCK(obj);
 	}
 	*objp = obj;
 	*flagsp = flags;
svn commit: r355217 - head/sys/vm
Author: jeff
Date: Fri Nov 29 19:57:49 2019
New Revision: 355217
URL: https://svnweb.freebsd.org/changeset/base/355217

Log:
  Restore swap space accounting for non-anonymous swap objects.  This was
  broken in r355082.  Reduce some locking in nearby related object type
  checks.

  Reviewed by:	kib, markj
  Differential Revision:	https://reviews.freebsd.org/D22565

Modified:
  head/sys/vm/vm_map.c

Modified: head/sys/vm/vm_map.c
==============================================================================
--- head/sys/vm/vm_map.c	Fri Nov 29 19:49:20 2019	(r355216)
+++ head/sys/vm/vm_map.c	Fri Nov 29 19:57:49 2019	(r355217)
@@ -2445,9 +2445,7 @@ vm_map_pmap_enter(vm_map_t map, vm_offset_t addr, vm_p
 	if ((prot & (VM_PROT_READ | VM_PROT_EXECUTE)) == 0 ||
 	    object == NULL)
 		return;
-	VM_OBJECT_RLOCK(object);
 	if (object->type == OBJT_DEVICE || object->type == OBJT_SG) {
-		VM_OBJECT_RUNLOCK(object);
 		VM_OBJECT_WLOCK(object);
 		if (object->type == OBJT_DEVICE || object->type == OBJT_SG) {
 			pmap_object_init_pt(map->pmap, addr, object, pindex,
@@ -2456,7 +2454,8 @@ vm_map_pmap_enter(vm_map_t map, vm_offset_t addr, vm_p
 			return;
 		}
 		VM_OBJECT_LOCK_DOWNGRADE(object);
-	}
+	} else
+		VM_OBJECT_RLOCK(object);
 
 	psize = atop(size);
 	if (psize + pindex > object->size) {
@@ -2623,6 +2622,8 @@ again:
 			continue;
 		}
 
+		if (obj->type != OBJT_DEFAULT && obj->type != OBJT_SWAP)
+			continue;
 		VM_OBJECT_WLOCK(obj);
 		if (obj->type != OBJT_DEFAULT && obj->type != OBJT_SWAP) {
 			VM_OBJECT_WUNLOCK(obj);
@@ -3809,14 +3810,14 @@ vm_map_check_protection(vm_map_t map, vm_offset_t star
 
 /*
  *
- *	vm_map_copy_anon_object:
+ *	vm_map_copy_swap_object:
  *
- *	Copies an anonymous object from an existing map entry to a
+ *	Copies a swap-backed object from an existing map entry to a
  *	new one.  Carries forward the swap charge.  May change the
  *	src object on return.
 */
 static void
-vm_map_copy_anon_object(vm_map_entry_t src_entry, vm_map_entry_t dst_entry,
+vm_map_copy_swap_object(vm_map_entry_t src_entry, vm_map_entry_t dst_entry,
     vm_offset_t size, vm_ooffset_t *fork_charge)
 {
 	vm_object_t src_object;
@@ -3898,8 +3899,9 @@ vm_map_copy_entry(
 	 */
 	size = src_entry->end - src_entry->start;
 	if ((src_object = src_entry->object.vm_object) != NULL) {
-		if ((src_object->flags & OBJ_ANON) != 0) {
-			vm_map_copy_anon_object(src_entry, dst_entry,
+		if (src_object->type == OBJT_DEFAULT ||
+		    src_object->type == OBJT_SWAP) {
+			vm_map_copy_swap_object(src_entry, dst_entry,
 			    size, fork_charge);
 			/* May have split/collapsed, reload obj. */
 			src_object = src_entry->object.vm_object;
svn commit: r355218 - head/usr.bin/calendar/calendars
Author: grog
Date: Fri Nov 29 23:04:45 2019
New Revision: 355218
URL: https://svnweb.freebsd.org/changeset/base/355218

Log:
  Correct date and time of George Harrison's death.

  Source: https://www.beatlesbible.com/2001/11/29/george-harrison-dies/

Modified:
  head/usr.bin/calendar/calendars/calendar.music

Modified: head/usr.bin/calendar/calendars/calendar.music
==============================================================================
--- head/usr.bin/calendar/calendars/calendar.music	Fri Nov 29 19:57:49 2019	(r355217)
+++ head/usr.bin/calendar/calendars/calendar.music	Fri Nov 29 23:04:45 2019	(r355218)
@@ -215,7 +215,7 @@
 11/26	Cream performs their farewell concert at Royal Albert Hall, 1968
 11/26	Paul Hindemith is born in Hanau, Germany, 1895
 11/27	Jimi Hendrix (Johnny Allen Hendrix) is born in Seattle, 1942
-11/30	George Harrison dies at 13:30 in L.A., 2001
+11/29	George Harrison dies at 13:20 in Los Angeles, California, 2001
 12/04	Frank Zappa dies in his Laurel Canyon home shortly before 18:00, 1993
 12/05	Wolfgang Amadeus Mozart dies in Vienna, Austria, 1791
 12/06	First sound recording made by Thomas Edison, 1877
svn commit: r355222 - head/sbin/ipfw
Author: delphij
Date: Sat Nov 30 05:57:54 2019
New Revision: 355222
URL: https://svnweb.freebsd.org/changeset/base/355222

Log:
  Use strlcat().

  MFC after:	2 weeks

Modified:
  head/sbin/ipfw/dummynet.c

Modified: head/sbin/ipfw/dummynet.c
==============================================================================
--- head/sbin/ipfw/dummynet.c	Sat Nov 30 05:43:24 2019	(r355221)
+++ head/sbin/ipfw/dummynet.c	Sat Nov 30 05:57:54 2019	(r355222)
@@ -497,7 +497,7 @@ print_flowset_parms(struct dn_fs *fs, char *prefix)
 			fs->max_th,
 			1.0 * fs->max_p / (double)(1 << SCALE_RED));
 		if (fs->flags & DN_IS_ECN)
-			strncat(red, " (ecn)", 6);
+			strlcat(red, " (ecn)", sizeof(red));
 #ifdef NEW_AQM
 	/* get AQM parameters */
 	} else if (fs->flags & DN_IS_AQM) {
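The difference the one-liner buys, shown as a standalone illustration (not taken from ipfw; the 16-byte buffer and its contents are made up for the example): strncat()'s third argument bounds how many bytes are appended, not how large the destination is, so it does not by itself guard against overflowing red, while strlcat() is bounded by the destination size and always NUL-terminates.

#include <stdio.h>
#include <string.h>

int
main(void)
{
	char red[16] = "RED 1/2/3";	/* hypothetical contents */

	/* Appends at most sizeof(red) - strlen(red) - 1 bytes, then a NUL. */
	strlcat(red, " (ecn)", sizeof(red));
	printf("%s\n", red);		/* prints "RED 1/2/3 (ecn)" */
	return (0);
}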