On 10/9/2019 2:34 AM, Zelphir Kaltstahl wrote:
> I don't think places are a good example of good support for parallelism.
Hoare's "Communicating Sequential Processes" is a seminal work in
Computer Science. We can argue about whether places are - or not - a
good implementation of CSP, but you can't very well knock the concept.
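For readers who haven't seen the CSP flavor of Racket's primitives, here is a minimal sketch using ordinary (green) thread channels - synchronous rendezvous points in Hoare's sense. This is concurrency within one core, not parallelism:

```racket
#lang racket
;; CSP in miniature: a channel is a synchronous rendezvous point.
;; channel-put blocks until a receiver arrives, and vice versa.
(define ch (make-channel))

(define producer
  (thread (lambda ()
            (channel-put ch (* 6 7)))))  ; blocks until the get below

(displayln (channel-get ch))  ; prints 42
(thread-wait producer)
```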
> It is difficult to get a flexible multiprocessing implementation done
> without hard-coding the lambdas that run in each place, because we
> cannot send serializable lambdas (which also are not core, but only
> exist in the web server package) over channels. That means that one
> needs to define one's lambdas ahead of time, before even starting a
> place and sending it the data to process. That means that we cannot
> have something like a process pool.
1) Serializable lambdas and closures are in web-server-lib, not the
framework. What is the problem with including a library?
2) /serial-lambda/, /define-closure/, etc. only create functions that
CAN BE serialized. You still have to (de)serialize them yourself.
3) You can send serialized code over place channels - it is just a string.
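To make points 2) and 3) concrete, here is a sketch (assuming web-server-lib is installed) of turning a /serial-lambda/ into a string and back - exactly the kind of payload a place channel can carry. The usual caveat applies: the deserializing side must be able to load the module that defined the closure.

```racket
#lang racket
(require racket/serialize
         web-server/lang/serial-lambda)

;; A closure that CAN BE serialized:
(define f (serial-lambda (x) (* x x)))

;; Serialize it and write it out as a string:
(define code-string
  (with-output-to-string (lambda () (write (serialize f)))))

;; "On the other side": read the string back and deserialize.
(define g (deserialize (with-input-from-string code-string read)))
(displayln (g 7))  ; prints 49
```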
The difficulty in sending code is making sure all the support context is
available. To be sure, this can be a pain. But consider that for a
local place you can send the name of a code file to dynamic-require ...
and for a remote place you can send the file itself.
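A sketch of the local-place case: a worker module (hypothetically called worker.rkt here) whose loop is fixed ahead of time, but which dynamic-requires whatever function the controller names - so the work itself is not hard-coded:

```racket
#lang racket
;; worker.rkt (hypothetical): started as a local place.
(require racket/place)
(provide work)

(define (work ch)
  (let loop ()
    (match (place-channel-get ch)
      [(list mod fun arg)
       ;; Load the named function at run time and apply it:
       (place-channel-put ch ((dynamic-require mod fun) arg))
       (loop)]
      ['stop (void)])))
```

The controller side would then look something like this, where jobs.rkt and frobnicate are placeholders for whatever module and function you want run:

```racket
(define p (dynamic-place "worker.rkt" 'work))
(place-channel-put p (list "jobs.rkt" 'frobnicate 42))
(place-channel-get p)
```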
And if channels present a problem, take a look at Paulo Matos's Loci
package which implements (local, same machine) distributed places over
sockets. https://pkgs.racket-lang.org/package/loci
> The other problem is that using places means running multiple Racket
> VMs, if I remember correctly, which uses some RAM. It is not like
> places are super lightweight, at least.
This is a more substantial argument. Each place is a separate copy of
the VM. I've wished publicly for more control over the resources
used by Racket - hard limits on heap and so forth, like with the server
JVM. It's just a small problem on Unix/Linux, where you can ulimit ...
but it's a major PITA on Windows, where you can't.
But, you can choose whether to use *dynamic* places which are OS threads
in the same process, or *distributed* places which are separate
processes, or a mixture of the two.
> Racket threads run on a single core, I think.
Actually, the whole VM is limited to a single core, but multiple
(green) threads can execute within it. That's why places and futures
are important - they provide the multi-core support.
The reason for the thread limitation is GC. The mutator has to stop -
at least momentarily - when the collection begins and again (possibly
multiple times) to check that the collection is complete. User-space
threads can be halted easily ... OS threads are much more difficult to
deal with - there generally is little control over their scheduling and
little or no visibility into when it would be safe to stop them.
Obviously GC can be done with OS threads sharing memory - but it is an
order of magnitude harder than with user-space threads. Maybe with the
switch to Chez, it is time to revisit this.
> I know there is a tutorial about using futures somewhere, where it
> depends on the number type one uses whether the code can be
> automatically run in parallel or not. So there is also some issue
> there; at least it did not look to me like one could use futures
> everywhere and have neat parallelism.
/would-be-future/ returns what is essentially testing code: when
executed, it logs any future-unsafe operations. You can use it to
determine whether the code actually would run in parallel as a future.
Generally, you can access vectors/arrays and do math. Almost anything
else is problematic.
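A sketch of both sides of that line, assuming nothing beyond racket/future. To actually see the log messages from /would-be-future/, run with PLTSTDERR="debug@future":

```racket
#lang racket
(require racket/future)

;; Flonum arithmetic: the kind of work that does run in parallel.
(define (sum-to n)
  (for/fold ([s 0.0]) ([i (in-range n)])
    (+ s 1.0)))

(define f (future (lambda () (sum-to 1000000))))
;; ... do other work on the main thread here ...
(touch f)

;; would-be-future runs sequentially, but logs each operation that
;; would have blocked a real future (printf is one such operation):
(define wf (would-be-future (lambda () (printf "not future-safe\n") 42)))
(touch wf)  ; => 42, with a message on the 'future logger
```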
> Correct any of the things I wrote above if they are not true, but I
> think Racket definitely needs a better multiprocessing story.
You certainly can argue that sending code for execution on a remote
place is not as easy as it could be. And unloading code is not as easy
as it should be. Killing / spawning a new place works, but it is a
heavy hammer that works against having a compute pool.
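For what it's worth, a fixed pool is still possible if you accept the define-ahead-of-time limitation: reuse the places instead of killing and respawning them. A minimal sketch, with squaring numbers as the stand-in workload:

```racket
#lang racket
(require racket/place)

;; The worker loop is fixed at compile time, but each place is reused
;; for many tasks instead of being killed after one.
(define (make-worker)
  (place ch
    (let loop ()
      (define n (place-channel-get ch))
      (place-channel-put ch (* n n))
      (loop))))

(define pool (for/list ([_ (in-range 4)]) (make-worker)))

;; Round-robin: enqueue every input first, then collect the replies.
;; Each place answers its own queue in order, so results line up.
(define (run-all inputs)
  (define assigned
    (for/list ([n inputs] [p (in-cycle pool)])
      (place-channel-put p n)
      p))
  (for/list ([p assigned])
    (place-channel-get p)))

(run-all '(1 2 3 4 5 6))  ; => '(1 4 9 16 25 36)
```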
> I would love to see something like Guile Fibers. Andy Wingo even
> mentioned in his video that some of the Racket greats advised him to
> look at Concurrent ML, and that that is where he got some ideas from
> when implementing Guile Fibers as a library. Shouldn't Racket then be
> able to have a similar library? I don't understand how Fibers really
> works, but that is a thought I have had many times since I heard
> about the Fibers library.
A fiber is just a coroutine ... implemented with continuations and a
nicer API.
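In that spirit, here is a minimal generator-style coroutine built directly on call/cc - no scheduler, no nicer API, just the core mechanism a fiber library wraps:

```racket
#lang racket
;; The producer and its caller take turns, each captured as a
;; continuation at the moment it hands control to the other.
(define (make-generator producer)
  (define return-k #f)  ; where to jump to deliver a value
  (define resume-k      ; where to jump to resume the producer
    (lambda (_)
      (producer
       (lambda (v)                      ; this is `yield'
         (call/cc (lambda (k)
                    (set! resume-k k)   ; remember where to resume
                    (return-k v)))))    ; deliver v to the caller
      (return-k 'done)))
  (lambda ()
    (call/cc (lambda (k)
               (set! return-k k)
               (resume-k #f)))))

(define g (make-generator
           (lambda (yield) (yield 1) (yield 2) (yield 3))))
(g)  ; => 1
(g)  ; => 2
(g)  ; => 3
```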
> Regards,
> Zelphir
George
To view this discussion on the web visit
https://groups.google.com/d/msgid/racket-users/17156165-7e76-8cd0-f2de-36d4ee6e38ad%40comcast.net.