> On Sep 12, 2017, at 7:57 AM, Matthias Felleisen <matth...@ccs.neu.edu>
> wrote:
> 
> A word of caution on parallelism in general. Not too long ago, someone
> in CS at an Ivy League school studied the use of parallelism across
> different uses and applications. The goal was to find out how much the
> ‘hype’ about parallelism affected people. My recollection is that they
> found close to 20 projects where professors (in CE, EE, CS, Bio, Econ,
> etc) told their grad students/PhDs to use parallelism to run programs
> faster. They checked all of them and for N - 1 or 2, the projects ran
> faster once the parallelism was removed. Significantly faster. 

Hah! I’ve heard similar anecdotes before, but this is an especially
amusing one. I’d love to have a citation for something like this if one
exists.

> A word on futures. As James said, they work as advertised but if you
> do read the fine print, you need to understand that in Racket, too
> many operations block futures.
> 
> This obstacle will require a decent amount of labor on the run-time
> system.

Yes, that describes my experience pretty well. I’ve done two things
since my experiment: I read the paper on the future visualizer, and I
reimplemented the same experiment in Haskell. The former was helpful,
since it gave me a bit more perspective on the way futures are intended
to be used, and the latter mostly provided a baseline for the kind of
speedup I could feasibly hope for.

I rewrote the Racket program in Haskell, trying to do as direct a
translation as possible. Here’s the program, adjusted very slightly to
make it easy to add parallelism:

  import Data.List (permutations)
  import Data.Maybe (catMaybes)

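  -- True when some pair of queens shares a diagonal; the board is read
  -- as one queen per row, with bd !! r giving the column of the queen
  -- in row r.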
  checkDiagonals :: [Int] -> Bool
  checkDiagonals bd =
    or $ flip map [0 .. length bd - 1] $ \r1 ->
      or $ flip map [r1 + 1 .. length bd - 1] $ \r2 ->
        abs (r1 - r2) == abs ((bd !! r1) - (bd !! r2))

  n :: Int
  n = 11

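  -- Print every permutation of [0 .. n-1] with no diagonal conflicts,
  -- i.e. every solution to the n-queens puzzle.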
  main :: IO ()
  main =
    let results = flip map (permutations [0 .. n-1]) $ \brd ->
          if checkDiagonals brd then Nothing else Just brd
    in mapM_ print (catMaybes results)

I was able to easily add some parallelism using the
Control.Parallel.Strategies library. With a couple of new imports and a
small adjustment to the main function, I introduced some parallel
evaluation:

  import Control.Parallel.Strategies
  import Data.List.Split (chunksOf)

  main :: IO ()
  main =
    let results =
          concat . withStrategy (parBuffer 10 rdeepseq) . chunksOf 100 $
          flip map (permutations [0 .. n-1]) $ \brd ->
            if checkDiagonals brd then Nothing else Just brd
    in mapM_ print (catMaybes results)

It took some time to figure out the right chunk and rolling buffer
sizes, but these values gave me a consistent 30-40% speedup over the
original, sequential program.
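
In case anyone wants to reproduce this: the parallel version needs the
threaded runtime (compile with something like ghc -O2 -threaded
-rtsopts and run with +RTS -N), plus the parallel and split packages
for Control.Parallel.Strategies and Data.List.Split. The only knobs I
ended up turning were the chunk size given to chunksOf and the buffer
size given to parBuffer; here is a sketch, not part of the program
above, that just pulls those two numbers out as parameters (it assumes
the imports and definitions from the two programs above):

  -- Sketch: chunk the lazily generated results, spark chunks for
  -- parallel evaluation with a rolling buffer of bufferSize chunks,
  -- then flatten and keep the solutions.
  solutions :: Int -> Int -> [[Int]]
  solutions chunkSize bufferSize =
    catMaybes . concat
      . withStrategy (parBuffer bufferSize rdeepseq)
      . chunksOf chunkSize
      $ flip map (permutations [0 .. n-1]) $ \brd ->
          if checkDiagonals brd then Nothing else Just brd

The rolling buffer is what makes parBuffer a better fit here than
something like parList: the permutation list for n = 11 has roughly 40
million elements, so only a window of sparked chunks is kept ahead of
the consumer rather than sparking the whole list at once.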

Now, obviously, Haskell’s runtime is considerably more suited for
parallel programming than Racket’s, so this comparison is hardly fair.
But it helped me to see for myself that, despite having 4 cores (8 with
hyperthreading) at my disposal, I wasn’t able to get even a 50% speedup,
which is in line with your comments about the overhead of parallel
programming.

Still, I wonder how out of reach this sort of performance boost is for
Racket. As Philip said, I’m especially curious to learn more details
about how the Chez rewrite will impact Racket’s support for parallelism.
Might we get access to a lightweight thread primitive that provides
parallelism in addition to concurrency, a la GHC’s threads?
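
For concreteness, what I have in mind is something along the lines of
GHC’s forkIO: lightweight threads that the runtime multiplexes over OS
threads, so they are parallel as well as concurrent. A minimal sketch
(the computations are just placeholders to keep two cores busy):

  import Control.Concurrent (forkIO)
  import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)

  -- With the threaded runtime (-threaded, +RTS -N), the two forkIO
  -- threads below can run on separate cores at the same time.
  main :: IO ()
  main = do
    done1 <- newEmptyMVar
    done2 <- newEmptyMVar
    _ <- forkIO $ print (sum [1 .. 100000000 :: Int]) >> putMVar done1 ()
    _ <- forkIO $ print (sum [1 .. 200000000 :: Int]) >> putMVar done2 ()
    takeMVar done1
    takeMVar done2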

Either way, thank you, James and Matthias, for your frank feedback.
