In article <[EMAIL PROTECTED]>, "James Aguilar" <[EMAIL PROTECTED]> wrote:
...
> So, I have a couple of questions:
>
> * Is there any way to have Python objects (such as a light or a color)
> put themselves into a byte array and then pull themselves out of the
> same array without any extra work? If each of the children had to load
> all of the values from the array, we would probably lose much of the
> benefit of doing things this way. What I mean to say is, can I say to
> Python, "Interpret this range of bytes as a Light object, interpret
> this range of bytes as a Matrix," etc.? This is roughly equivalent to
> simply static_casting a void * to an object type in C++.
Not exactly. One basic issue is that a significant amount of the storage
associated with a light or a color is going to be "overhead" specific to
the interpreter process image, and not shareable. A Python process would
not be able to simply acquire a lot of objects by mapping a memory region.

However, if you're ready to go to the trouble of implementing your data
types in C, then you can do the (void *) thing with their data, and these
objects would automatically have the current value of the data at that
address. I'm not saying this is a really good idea, but offhand it seems
technically possible. The simplest thing might be to copy the array
module and make a new type that works just like it but borrows its
storage instead of allocating it. That would be expedient, though maybe
not as fast, because each access to the data comes at the expense of
creating an object.

> * Are memory mapped files fast enough to do something like this?

Shared memory is pretty fast.

> * Are pipes a better idea? If so, how do I avoid the problem of
> wasting extra memory by having all of the children processes hold all
> of the data in memory as well?

Pipes may well be a better idea, but a lot depends on the design.

   Donn Cave, [EMAIL PROTECTED]
--
http://mail.python.org/mailman/listinfo/python-list