On Wed, 10 Jun 2020 18:56:57 -0700 Mark Millard <mark...@yahoo.com> wrote:
> On 2020-May-13, at 08:56, Justin Hibbits <chmeeed...@gmail.com> wrote:
> 
> > Hi Mark,
> 
> Hello Justin.

Hi Mark,

> 
> > On Wed, 13 May 2020 01:43:23 -0700
> > Mark Millard <mark...@yahoo.com> wrote:
> > 
> >> [I'm adding a reference to an old arm64/aarch64 bug that had
> >> pages turning to zero, in case this 32-bit powerpc issue is
> >> somewhat analogous.]
> >> 
> >>> . . .
> > ...
> >> . . .
> >> 
> >> (Note: dsl-only.net closed down, so the E-mail
> >> address reference is no longer valid.)
> >> 
> >> Author: kib
> >> Date: Mon Apr 10 15:32:26 2017
> >> New Revision: 316679
> >> URL: https://svnweb.freebsd.org/changeset/base/316679
> >> 
> >> Log:
> >>   Do not lose dirty bits for removing PROT_WRITE on arm64.
> >> 
> >>   Arm64 pmap interprets accessed writable PTEs as modified, since
> >>   ARMv8.0 does not track the Dirty Bit Modifier in hardware. If the
> >>   writable bit is removed, the page must be marked as dirty for the
> >>   MI VM.
> >> 
> >>   This change is most important for COW, where fork caused the loss
> >>   of the contents of dirty pages which had not yet been scanned by
> >>   the pagedaemon.
> >> 
> >>   Reviewed by:	alc, andrew
> >>   Reported and tested by:	Mark Millard <markmi at dsl-only.net>
> >>   PR:		217138, 217239
> >>   Sponsored by:	The FreeBSD Foundation
> >>   MFC after:	2 weeks
> >> 
> >> Modified:
> >>   head/sys/arm64/arm64/pmap.c
> >> 
> >> Modified: head/sys/arm64/arm64/pmap.c
> >> ==============================================================================
> >> --- head/sys/arm64/arm64/pmap.c	Mon Apr 10 12:35:58 2017	(r316678)
> >> +++ head/sys/arm64/arm64/pmap.c	Mon Apr 10 15:32:26 2017	(r316679)
> >> @@ -2481,6 +2481,11 @@ pmap_protect(pmap_t pmap, vm_offset_t sv
> >>  	    sva += L3_SIZE) {
> >>  		l3 = pmap_load(l3p);
> >>  		if (pmap_l3_valid(l3)) {
> >> +			if ((l3 & ATTR_SW_MANAGED) &&
> >> +			    pmap_page_dirty(l3)) {
> >> +				vm_page_dirty(PHYS_TO_VM_PAGE(l3 &
> >> +				    ~ATTR_MASK));
> >> +			}
> >>  			pmap_set(l3p, ATTR_AP(ATTR_AP_RO));
> >>  			PTE_SYNC(l3p);
> >>  			/* XXX: Use pmap_invalidate_range */
> >> 
> >> . . .
> >> 
> > 
> > Thanks for this reference. I took a quick look at the 3 pmap
> > implementations we have (I haven't checked the new radix pmap yet),
> > and it looks like only mmu_oea.c (the 32-bit AIM pmap, for G3 and
> > G4) is missing the vm_page_dirty() calls in its pmap_protect()
> > implementation, analogous to the change you posted right above.
> > Given this, I think it's safe to say that this missing piece is
> > necessary. We'll work on a fix for this; looking at
> > moea64_protect(), there may be additional work needed to support
> > this as well, so it may take a few days.
> 
> Ping? Any clue when the above might happen?
> 
> I've been avoiding the old PowerMacs and leaving
> them at head -r360311, pending an update that
> would avoid the kernel zeroing pages that it
> should not zero. But I've seen that you were busy
> with more modern contexts over the last month or so.
> 
> And, clearly, my own context has left pending
> (for much longer) other more involved activities
> (compared to just periodically updating to
> more recent FreeBSD vintages).
> 
> ===
> Mark Millard
> marklmi at yahoo.com
> ( dsl-only.net went
> away in early 2018-Mar)

Sorry for the delay, I got sidetracked with a bunch of other
development. I did install a newer FreeBSD on my dual G4 and couldn't
see the problem. That said, the attached patch effectively copies
what's done in OEA64 into the OEA pmap. Can you test it?

- Justin
diff --git a/sys/powerpc/aim/mmu_oea.c b/sys/powerpc/aim/mmu_oea.c
index c5b0b048a41..2f1422b36c4 100644
--- a/sys/powerpc/aim/mmu_oea.c
+++ b/sys/powerpc/aim/mmu_oea.c
@@ -1776,6 +1776,9 @@ moea_protect(pmap_t pm, vm_offset_t sva, vm_offset_t eva,
 {
 	struct	pvo_entry *pvo, *tpvo, key;
 	struct	pte *pt;
+	struct	pte old_pte;
+	vm_page_t m;
+	int32_t refchg;
 
 	KASSERT(pm == &curproc->p_vmspace->vm_pmap || pm == kernel_pmap,
 	    ("moea_protect: non current pmap"));
@@ -1803,12 +1806,31 @@ moea_protect(pmap_t pm, vm_offset_t sva, vm_offset_t eva,
 		pvo->pvo_pte.pte.pte_lo &= ~PTE_PP;
 		pvo->pvo_pte.pte.pte_lo |= PTE_BR;
 
+		old_pte = *pt;
+
 		/*
 		 * If the PVO is in the page table, update that pte as well.
 		 */
 		if (pt != NULL) {
 			moea_pte_change(pt, &pvo->pvo_pte.pte, pvo->pvo_vaddr);
+			if (pm != kernel_pmap && m != NULL &&
+			    (m->a.flags & PGA_EXECUTABLE) == 0 &&
+			    (pvo->pvo_pte.pa & (PTE_I | PTE_G)) == 0) {
+				if ((m->oflags & VPO_UNMANAGED) == 0)
+					vm_page_aflag_set(m, PGA_EXECUTABLE);
+				moea_syncicache(pvo->pvo_pte.pa & PTE_RPGN,
+				    PAGE_SIZE);
+			}
 			mtx_unlock(&moea_table_mutex);
+			if ((pvo->pvo_vaddr & PVO_MANAGED) &&
+			    (pvo->pvo_pte.prot & VM_PROT_WRITE)) {
+				m = PHYS_TO_VM_PAGE(old_pte.pte_lo & PTE_RPGN);
+				refchg = atomic_readandclear_32(&m->md.mdpg_attrs);
+				if (refchg & PTE_CHG)
+					vm_page_dirty(m);
+				if (refchg & PTE_REF)
+					vm_page_aflag_set(m, PGA_REFERENCED);
+			}
 		}
 	}
 	rw_wunlock(&pvh_global_lock);
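As a sanity check while testing, something like the following minimal
userland program exercises the fork/COW path that the arm64 commit
above describes: it dirties an anonymous page, forks so the VM
write-protects the mapping for copy-on-write, and then verifies the
contents in both processes. This sketch is illustrative only and is
not part of the thread's patch; the 0xa5 pattern and all names are
arbitrary. Note that the original corruption required the pagedaemon
to reclaim a page whose dirty bit had been lost, so a clean pass
without memory pressure is not conclusive.

/*
 * Illustrative COW dirty-page check (hypothetical test, not from the
 * thread).  Dirty an anonymous page, fork() so the VM write-protects
 * the mapping for copy-on-write, then verify the contents in both
 * processes.  A pmap that loses the dirty bit in pmap_protect() can
 * have the page reclaimed as "clean" and later read back as zeroes,
 * but only under memory pressure, so a pass here is not conclusive.
 */
#include <sys/mman.h>
#include <sys/wait.h>

#include <err.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int
main(void)
{
	size_t len;
	char *p;
	pid_t pid;
	int status;

	len = (size_t)getpagesize();
	p = mmap(NULL, len, PROT_READ | PROT_WRITE,
	    MAP_ANON | MAP_PRIVATE, -1, 0);
	if (p == MAP_FAILED)
		err(1, "mmap");

	memset(p, 0xa5, len);		/* Dirty the page. */

	pid = fork();			/* COW write-protects the page. */
	if (pid == -1)
		err(1, "fork");
	if (pid == 0)
		_exit(p[0] == (char)0xa5 ? 0 : 1);

	if (waitpid(pid, &status, 0) == -1)
		err(1, "waitpid");
	if (!WIFEXITED(status) || WEXITSTATUS(status) != 0)
		errx(1, "child saw a corrupted page");
	if (p[0] != (char)0xa5)
		errx(1, "parent saw a corrupted page");
	printf("pattern intact in parent and child\n");
	return (0);
}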