That should be fairly easy to optimize, I guess? Maybe even using read-only
shared memory to share the parsed database in native binary format?

On Fri, Apr 10, 2009 at 1:08 AM, Andrea Vezzosi <[email protected]> wrote:

> The main bottleneck right now is that each ghc process has to read the
> package.conf, which, as far as I understand, is done with Read and is
> awfully slow, especially if you have many packages installed.
> I've started seeing total-time improvements when approaching ~300% CPU
> usage, with only the extralibs installed.
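
A toy illustration of that cost (`Pkg` here is a hypothetical stand-in, not
GHC's actual package record; the real package.conf is the Show output of a
much larger structure):

```haskell
-- Sketch: parsing Show output back with Read, as ghc does for package.conf.
-- Pkg is a made-up stand-in for GHC's real installed-package records.
import System.CPUTime (getCPUTime)

data Pkg = Pkg { pkgName :: String, pkgDeps :: [String] }
  deriving (Show, Read, Eq)

main :: IO ()
main = do
  let db  = [ Pkg ("pkg-" ++ show i) ["base", "containers"]
            | i <- [1 .. 2000 :: Int] ]
      txt = show db                 -- the on-disk format: plain Show output
  t0 <- getCPUTime
  let db' = read txt :: [Pkg]       -- what every ghc invocation pays for
  putStrLn (if db' == db then "roundtrip ok" else "mismatch")
  t1 <- getCPUTime
  putStrLn ("Read took " ++ show ((t1 - t0) `div` (10 ^ (6 :: Int)))
            ++ " microseconds")
```

Caching the parsed result in a binary format, or sharing it read-only as
suggested at the top of the thread, would amortize this across processes.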
>
> On Thu, Apr 9, 2009 at 5:51 PM, Neil Mitchell <[email protected]>
> wrote:
> >> Not with cabal; with GHC, yes, assuming you have enough modules. Use ghc
> >> -M to dump a makefile, and then make -j20 (or however many cores you have).
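
The ghc -M workflow described above, sketched out (GNU make assumed; the file
and module names are placeholders for your own project):

```shell
# A generic one-module-at-a-time rule (the tab before the recipe is
# written via printf, since make requires a literal tab):
printf '%%.o %%.hi : %%.hs\n\tghc -c $<\n' > Makefile

# ghc -M appends the inter-module dependency graph to ./Makefile
# (between "# DO NOT DELETE" marker comments).
ghc -M Main.hs

# Build the object files in parallel; pick -j to match your core count.
make -j4 Main.o

# Linking is a cheap sequential step; --make reuses the fresh objects.
ghc --make Main.hs -o main
```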
> >
> > There is a performance penalty to running ghc on separate files vs
> > --make. If your number of cores is limited, --make may be better. I'd
> > love someone to figure out where the crossover point is :-)
> >
> > As a related question, how does GHC implement -j3? For my programs, if
> > I want to run in parallel, I have to type +RTS -N3. Can I use the same
> > trick as GHC?
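
On the +RTS -N question: later GHCs (base >= 4.5) let a program set its own
parallelism level at runtime via setNumCapabilities, so users don't have to
pass RTS flags by hand. A minimal sketch, assuming the program is compiled
with -threaded:

```haskell
-- Sketch: picking the parallelism level at runtime instead of via +RTS -N.
-- Requires -threaded; setNumCapabilities is in base >= 4.5 (GHC 7.4+).
import Control.Concurrent (getNumCapabilities, setNumCapabilities)
import GHC.Conc (getNumProcessors)

main :: IO ()
main = do
  n <- getNumProcessors    -- how many cores the machine reports
  setNumCapabilities n     -- same effect as starting with +RTS -N<n>
  caps <- getNumCapabilities
  putStrLn ("capabilities: " ++ show caps)
```

Alternatively, linking with -with-rtsopts=-N bakes the RTS flag in at compile
time, which avoids any runtime call at the cost of losing the user's choice.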
> >
> > Thanks
> >
> > Neil
> > _______________________________________________
> > Haskell-Cafe mailing list
> > [email protected]
> > http://www.haskell.org/mailman/listinfo/haskell-cafe
> >
