On Mon, Jul 9, 2018 at 12:35 PM Eric S. Raymond <e...@thyrsus.com> wrote:
>
> David Edelsohn <dje....@gmail.com>:
> > > The truth is we're near the bleeding edge of what conventional tools
> > > and hardware can handle gracefully.  Most jobs with working sets as
> > > big as this one's do only comparatively dumb operations that can be
> > > parallelized and thrown on a GPU or supercomputer.  Most jobs with
> > > the algorithmic complexity of repository surgery have *much* smaller
> > > working sets.  The combination of both extrema is hard.
> >
> > If you come to the conclusion that the GCC Community could help with
> > resources, such as the GNU Compile Farm or paying for more RAM, let us
> > know.
>
> 128GB of DDR4 registered RAM would allow me to run conversions with my
> browser up, but be eye-wateringly expensive.  Thanks, but I'm not
> going to yell for that help unless the working set gets so large that
> it blows out 64GB even with nothing but i4 and some xterms running.

Funds from the FSF GNU Toolchain Fund can probably be allocated to
purchase additional RAM, if that proves necessary.

Also, IBM Power Systems have excellent memory subsystems.  The Power
Systems machines in the GNU Compile Farm have more than 128GB of memory
available.

Thanks, David
