https://gcc.gnu.org/bugzilla/show_bug.cgi?id=114563

--- Comment #12 from Richard Biener <rguenth at gcc dot gnu.org> ---
(In reply to andi from comment #8)
> > > Needs a workload where it matters
> > 
> > PR119387 had
> > 
> >   85.81%       1500713  cc1plus  cc1plus               [.] ggc_internal_alloc(un
> > 
> > for me.  Can we keep an index into the freelist by allocation order to
> > avoid the linear search?
> 
> Yes for the alloc
> 
> > Other than that, the patch looks simpler than I thought, and it definitely
> > resolves an algorithmic complexity issue, so even without a clear workload
> > where it matters it should be OK (during stage1, that is).
> 
> The main drawback is that the madvise patterns to the OS are worse,
> because the advice is then issued in smaller chunks. That was the
> reason I had second thoughts later.

Btw, for this (something I also wondered about before), we'd likely want
to change alloc_page where it does

#ifdef USING_MMAP
  else if (entry_size == G.pagesize)
    {
      /* We want just one page.  Allocate a bunch of them and put the
         extras on the freelist.  (Can only do this optimization with
         mmap for backing store.)  */
      struct page_entry *e, *f = free_list->free_pages;
      int i, entries = GGC_QUIRE_SIZE;


to do this for entry_size < G.pagesize * GGC_QUIRE_SIZE; that should
avoid fragmenting the virtual address space.  Possibly do this only
for USING_MADVISE, not sure.
