I am aware that each place has its own state, even for things defined
at the top level of a module, since each place is its own Racket
instance. I think sending things as immutable data is what I want to
do. The idea is: "Just send everything as immutable data to that
place, so that it has everything it needs to complete some
computation" (including procedure definitions).
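
To make that concrete for myself, here is a minimal sketch of what I
mean, following the '([module name] . args) representation from your
reply (my own code, not an existing library; the name `start-worker`
is made up for illustration). The worker place receives immutable
messages and resolves the procedure itself with `dynamic-require`:

#lang racket/base
(require racket/place)

;; Start a worker place that loops: read one immutable message,
;; resolve the named procedure via dynamic-require, apply it to the
;; arguments, and send the result back.
(define (start-worker)
  (place ch
    (let loop ()
      (define msg (place-channel-get ch))   ; e.g. '([racket/base +] . [1 2])
      (define proc (apply dynamic-require (car msg)))
      (place-channel-put ch (apply proc (cdr msg)))
      (loop))))

(module+ main
  (define w (start-worker))
  (place-channel-put w '([racket/base +] . [1 2]))
  (displayln (place-channel-get w)))        ; prints 3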

Having to write everything in terms of where things come from, as in:

> '([racket/base +] . [1 2])

is not ergonomic at all. I don't think an unsuspecting user of a
library would think to do that. If sending things as immutable data
really requires it, then users would expect the translation of a
procedure into "where it comes from + name" to happen automatically.
Maybe a macro could figure these things out for the user (a rough
sketch of that idea is below). However, the binding might be
ambiguous, so the macro would have to make educated guesses, or have
some policy about how it guesses, and things could go wrong, making
some cases impossible.
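
For what it's worth, here is a rough sketch of the kind of macro I
imagine (my own guess, nothing that exists; `where-from` is a made-up
name, and it only handles imported module-level bindings). It asks
`identifier-binding` at expansion time where an identifier was
required from and expands to a quoted (module name) list that a
receiving place could hand to `dynamic-require`:

#lang racket/base
(require (for-syntax racket/base))

;; (where-from id) expands to (list 'module-path 'name) based on the
;; "nominal" import information from identifier-binding. This is also
;; exactly where the ambiguity shows up: the same binding can be
;; reachable through several module paths, so the macro has to pick one.
(define-syntax (where-from stx)
  (syntax-case stx ()
    [(_ id)
     (identifier? #'id)
     (let ([b (identifier-binding #'id)])
       (unless (list? b)
         (raise-syntax-error 'where-from
                             "not a module-level binding" #'id))
       (with-syntax ([mod (resolved-module-path-name
                           (module-path-index-resolve (list-ref b 2)))]
                     [name (list-ref b 3)])
         #'(list 'mod 'name)))]))

;; Example: (where-from +) should give something usable as the first
;; part of a '([module name] . args) message.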

(I am not experienced at all at writing macros, so maybe this is
nonsense. Every time I go through "Fear of Macros" I sort of get part
of it, until I don't get the rest; then I don't use macros (because I
rarely have the realization "Oh, here I should use a macro!") and
forget most of it again.)

I just feel that things like '([racket/base +] . [1 2]) are not good
enough to build a general work-distributing library on. It feels like
people would have to consider too much whenever they do anything with
it.

> Finally, on a broader point, I don't think you can avoid having to
> think about the fact that your code is going to run in parallel: for
> example, you will always have to make sure that you don't depend on
> shared state. With the implementation of places in particular, you
> also need to consider communication overhead and startup time. If I
> were designing a library for parallelism, I would start with some
> specific use-case in mind and focus on coming up with reusable
> solutions for specific sub-parts of the problem.

I get the point, I mean "Why do it if there is no use case?". But I
sort of want to do it "for the future", "in case someone needs easy
parallelism in Racket", as a "drop-in library that solves your
multiprocessing problem". I simply don't have a personal use case for
it, except for its own sake and for minor cases where it would only be
a little better to have things be a bit faster, but it does not really
matter, because the stuff is fast enough. I think it would be great,
though, if people using Racket could say:

"It takes too much time? Never mind, I'll simply finally use my many cores!"

I am sorry, I cannot provide a use case for this right now, as I am
not working on a high-performance project in Racket. Racket has so
many great features; I would love to see such a thing added. That's
why I am coming at the problem from such a general perspective. I
don't personally need this tomorrow or next month. Of course one can
always bake one's own solution specific to one use case, but then one
has to "reinvent the wheel" every single time, writing out how one
wants to use places again for every project, when all one wants is to
spread the work across multiple cores to make it faster.

It might take someone way more experienced in Racket than me to get it
done, but I thought that with a lot of asking and trying, maybe I
could get something going that at least mostly works for arbitrary use
cases ^^' Right now I have no idea how to get closer to solving this
problem, and I feel like: "It's not possible in Racket."

Thank you again for your response, though; like before in the other
topics, it has been helpful.


On 15.04.2018 19:17, Philip McGrath wrote:
> I think it would help to take a step back and think about what you're
> doing when you communicate with a place. As you know, places are
> effectively separate instances of the Racket VM: other than the
> explicit, low-level mechanisms like `make-shared-bytes`, they share no
> state at all, not even the same module instances. Let's imagine we
> have this module and have required it in two different places:
> (module example racket
>   (provide remember!
>            recall)
>   (define store
>     (make-hash))
>   (define (remember! k v)
>     (hash-set! store k v))
>   (define (recall k)
>     (hash-ref store k)))
> Each place would have its own distinct instance of that module, each
> with its own hash table: mutating the hash table in one place would
> not change the other place's hash table.
>
> This is the reason why procedures can't be sent across places. Place
> A's version of `remember!` closes over Place A's version of `store`.
> It doesn't know anything about Place B's version of `store`, so it
> certainly can't mutate that. Allowing Place A's `remember!` to be
> called from Place B and mutate Place A's version of `store` would
> violate the safety guarantees that places provide by requiring
> explicit message-passing rather than shared state.
>
> That means, if you want one place to tell another to call a function,
> you need to send it some kind of immutable message telling it what to
> do. It's rather like calling an API over the network. Let's say you
> want to tell some place to execute the following thunk:
> (λ ()
>   (+ 1 2))
>
> How might you represent that function as data?
>
> Well, if a function is available as a module-level export, you can
> access it with `dynamic-require`, so a natural way to represent "call
> this function with these arguments" would be with a list of arguments
> for `dynamic-require` to get the function you have in mind, plus a
> list of the arguments to give to the function. The example above might
> be represented like this:
> '([racket/base +] . [1 2])
>
> The receiver place could then interpret such a message like this:
> (λ (message)
>   (apply (apply dynamic-require (car message))
>          (cdr message)))
>
> That's essentially how `serial-lambda` works under the hood. Each
> syntactic use of the `serial-lambda` macro is turned into a
> module-level structure type definition that implements
> `prop:procedure`. When a use is evaluated and a closure is allocated,
> it creates an instance of that structure type, packaging up its free
> lexical variables into the structure's fields (that's the hard part).
> The `prop:serializable` protocol for `racket/serialize` essentially
> records the same information you would need to use `dynamic-require`.
>
> So, to answer some of your specific questions:
>
> On Sun, Apr 15, 2018 at 10:51 AM, Zelphir Kaltstahl
> <zelphirkaltst...@gmail.com> wrote:
>
>     - What if in that serial-lambda the user needs to use some custom
>     procedure? Does that suddenly also have to be serializable? What about
>     its "dependencies"? --> everything in the user program ends up being a
>     serial-lambda. That would be really bad.
>
>
> For the procedure value to be serializable, all of the values it
> lexically closes over have to be serializable. If you remember that
> those values have to be packaged up into fields of a struct, this
> makes sense: a list is also only serializable if its contents are
> serializable.
>
> It takes some experience to readily recognize just what it is that an
> anonymous function will close over. One helpful rule is that
> module-level variables are never part of the closure, so they aren't
> required to be serializable: thus, it's ok that things like + aren't
> serializable. On the other hand, in this example:
> (define (make-thunk x)
>   (serial-lambda ()
>     (println x)))
> the function returned by make-thunk will only be serializable when `x`
> is serializable.
>
> You can find some more background about this in the #lang web-server
> documentation. I also wrote some notes on serialization pitfalls for
> the `web-server/formlets` library, which (now) uses serializable
> procedures internally:
> http://docs.racket-lang.org/web-server/formlets.html#%28part._.Formlets_and_.Stateless_.Servlets%29
>  
>
>     - Is there a better way than requiring everything to be serial-lambda?
>
>
> With the caveat that, as I said, not "everything" has to use
> serial-lambda, I don't think there is a better way. Any other solution
> for serializing an arbitrary function would just end up
> re-implementing what `web-server/lang/serial-lambda` does (and has
> been tested and used in production doing). I can think of things that
> `serial-lambda` doesn't do—for example, I've experimented with trying
> to find a mechanism for serialized procedures to take their contracts
> with them—but I would want to use `serial-lambda` to implement such
> additional features, not replace `serial-lambda`. It does its job very
> well.
>  
>
>     - Is the idea to have lambdas be serializable by default language wide
>     insane? It would be great to be able to simply start a new place and
>     give it some arbitrary lambda to execute.
>
>
> #lang web-server/base is just like #lang racket/base, except newly
> created functions are serializable by default (as are continuations).
> However, there is overhead in making a function serializable, and it
> probably wouldn't be a good default for the overwhelming majority of
> functions that nobody ever wants to serialize.
>
> Finally, on a broader point, I don't think you can avoid having to
> think about the fact that your code is going to run in parallel: for
> example, you will always have to make sure that you don't depend on
> shared state. With the implementation of places in particular, you
> also need to consider communication overhead and startup time. If I
> were designing a library for parallelism, I would start with some
> specific use-case in mind and focus on coming up with reusable solutions
> for specific sub-parts of the problem.
>
> Hope some of this helps.
>
> -Philip
>  
