I've been mulling over this idea for an alternative graphics system to our 
current server/client models. Instead of applications connecting to a central 
display server, what if all graphics and input device routing were handled 
through the parent/child process hierarchy? For instance, your desktop 
environment could be towards the top of the hierarchy, and it would be 
responsible for launching applications as child processes. The child would then 
ask its parent for a graphics buffer using one of the many parent/child IPC 
mechanisms UNIX has to offer, and the parent would create one and hand it back. 
The child could then draw whatever it wants on its buffer, and the parent could 
composite it alongside other applications however it sees fit. Parents would 
also be responsible for routing and transforming user input.
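
To make that request/response idea a bit more concrete, here is a rough sketch
of what the child's side could look like, assuming the parent hands down one
end of a socketpair() as fd 3. The fd number, message layout, and names are
all invented for illustration; nothing like this exists today:

/* Hypothetical wire format for the parent/child buffer protocol.
 * The fd number and structs are made up purely to illustrate the idea.
 * The child inherits one end of a socketpair() from its parent, much
 * like it inherits stdin/stdout. */

#include <stdint.h>
#include <unistd.h>

#define GFX_SOCKET_FD 3        /* assumed: parent passes the socket as fd 3 */

enum gfx_msg_type {
    GFX_REQ_BUFFER  = 1,   /* child -> parent: "please give me a buffer" */
    GFX_RESP_BUFFER = 2,   /* parent -> child: reply, buffer fd attached */
};

struct gfx_req_buffer {
    uint32_t type;             /* GFX_REQ_BUFFER */
    uint32_t width, height;    /* requested size in pixels */
    uint32_t format;           /* e.g. a DRM fourcc code */
};

/* Child side: ask the parent for a buffer. The parent's reply would carry
 * the buffer as a file descriptor (see the SCM_RIGHTS sketch further down). */
static int request_buffer(uint32_t w, uint32_t h, uint32_t fourcc)
{
    struct gfx_req_buffer req = {
        .type   = GFX_REQ_BUFFER,
        .width  = w,
        .height = h,
        .format = fourcc,
    };
    return write(GFX_SOCKET_FD, &req, sizeof req) == sizeof req ? 0 : -1;
}

A parent acting only as a middleman could forward the same message up the
chain unchanged, which is what would make delegation cheap.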
One benefit is that it would simplify desktop environment development: one
desktop could easily be nested inside another's window.
Where I think this becomes interesting, though, is if you think of an 
application like Steam. While Steam is currently a desktop application, its 
"Big Picture Mode" could conceivably replace a desktop environment entirely. 
This parent/child model would allow Steam to determine at runtime whether it
needs to act as an application or as a desktop replacement. In application
mode, it could simply pass buffer requests along to its parent. But if it were 
in desktop replacement mode, it could grant those buffer requests itself 
(perhaps providing fullscreen windows only, though). In either case, Steam 
would have access to its child processes' graphics buffers and could render the 
Steam Overlay over them.
Perhaps there could even be a parent above the desktop environment process 
which owns all graphics and input devices. It could communicate the requests to 
the kernel so desktop environments just need to know the IPC protocol. Perhaps 
the top-level parent could also be configured to handle multiple desktop 
environments in parallel with different devices, simplifying multi-seat support.
All in all, this approach seems more UNIXy to me. It's inspired by how 
command-line shells and applications handle stdio.
The reason I'm bringing it up on mesa-dev is because I'm wondering if it's even 
possible or reasonable with the current Linux graphics infrastructure. As I'm 
sure some of you are aware, the relevant documentation is scattered, and I'm 
not always sure which parts are up to date and which aren't. I've been able to 
infer a little bit from the Wayland/Weston source code, but I wanted your input 
before I continue my investigation.
The main thing this would require is the ability to securely pass graphics 
buffers between processes. Is this what the DRM PRIME API is for? If buffers 
can be encapsulated in file descriptors, then they can be passed around using 
UNIX domain sockets.
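
For what it's worth, my current (possibly wrong) understanding is that the
parent could export a GEM handle as a dma-buf fd with drmPrimeHandleToFD()
and then hand that fd to the child with SCM_RIGHTS, roughly like this. The
socket setup and the GEM handle are assumed to already exist; only the libdrm
call is real API:

/* Sketch: parent exports a buffer as a PRIME fd and passes it to the
 * child over an already-connected UNIX domain socket. */

#include <stdint.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>
#include <xf86drm.h>

static int send_buffer_fd(int sock, int drm_fd, uint32_t gem_handle)
{
    int prime_fd;

    /* Turn the driver-local GEM handle into a dma-buf file descriptor. */
    if (drmPrimeHandleToFD(drm_fd, gem_handle, DRM_CLOEXEC, &prime_fd) < 0)
        return -1;

    /* Attach the fd to a one-byte message as SCM_RIGHTS ancillary data. */
    char byte = 0;
    struct iovec iov = { .iov_base = &byte, .iov_len = 1 };
    char cmsg_buf[CMSG_SPACE(sizeof prime_fd)];
    memset(cmsg_buf, 0, sizeof cmsg_buf);

    struct msghdr msg = {
        .msg_iov        = &iov,
        .msg_iovlen     = 1,
        .msg_control    = cmsg_buf,
        .msg_controllen = sizeof cmsg_buf,
    };
    struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
    cmsg->cmsg_level = SOL_SOCKET;
    cmsg->cmsg_type  = SCM_RIGHTS;
    cmsg->cmsg_len   = CMSG_LEN(sizeof prime_fd);
    memcpy(CMSG_DATA(cmsg), &prime_fd, sizeof prime_fd);

    return sendmsg(sock, &msg, 0) < 0 ? -1 : 0;
}

The child would then recvmsg() the fd on the other end and import it into its
own DRM/EGL context. If that mental model is off, that's exactly the kind of
correction I'm hoping for.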
Any feedback would be appreciated. Thanks!                                      
  