> I'm sorry if this is the wrong list, but the other GGI lists seem to be
> largely inactive.

It's o.k. - it's a general architecture question, so the list at least isn't
the wrong place :-).

> I am interested in security and microkernels, so GGI appeals to me because
> I like the idea of small kernel modules that securely multiplex graphics.

Ah - great. Any interest in porting the system to some microkernel ?
If so, I offer you my help. I already ported the kgicon tree to a
microkernelish OS that was fairly non-Unixish, so that should go smoothly.

> Firstly, more architectural diagrams please ...

Uh - yes. That's really a weak point ...

Actually it is a bit difficult in many cases, as with LibGGI one can often
do "recursive" stuff like running an SVGAlib application via the wrapper on
LibGGI running on XGGI running on kgicon, or similar ...

> I think it might help other interested people if architectural diagrams
> occurred up front in an introduction
> document, then readers know which bits they want to read about.

I'll try to make up a few.

> In particular the FAQ compares GGI to X, to SVGAlib, window managers etc.
> Several of these are obviously bogus comparisons (as the FAQ points out)
> but I think a couple of diagrams would clear this up much better.
> [ 4 in particular: GGI, X, SVGAlib, and GGI on X on GGI ]

Yeah - I see.

> The obvious missing comparison was Plan 9's 8.5 which securely virtualises
> devices.

Do you have experience with that ? Or anyone else here ? I don't have much
information about Plan 9, other than that it just must be great, as everyone
who ever talks about it says so :-).

> And a couple of security questions.

Yeah - that's another weak spot, as the current tree makes a big compromise
by using the fbcon layer. The "real" KGI that Steffen is developing, and the
"scrdrv" versions, were much more secure (you could kill -9 a graphics app
or switch consoles without application help), but these are currently not
very widespread, as they require(d) heavy kernel patching.

> 1) Can two GGI programs securely share a display ?

No. Except by using some kind of display server. The "remote" target might 
be easily enhanced to handle that case. Hmm - Cube3d can do it ... but well,
I don't think that's what you had in mind :-).

It is not possible by default, as most PC hardware does not accommodate that
situation. Almost all graphics cards expect a mmap()able framebuffer for
simple operations like setting individual pixels.

However, this implies that you can either read the whole framebuffer, or
only get access control at PAGE_SIZE granularity via the CPU's MMU, which is
obviously useless for windowing (clipping, z-buffers and the like usually do
not kick in when using a mmap()ed framebuffer, as they operate on the
coordinates used by the accelerator, not on bus I/O). It could possibly be
done for horizontal split screens, or on cards that have hardware overlays
or where you can cheaply route all commands through the accelerator, but
not generically.

> Presumably each would have a device whose virtual display was a portion of
> the physical display.

As explained above, it isn't possible to do that _securely_ with a direct
mapping of screen regions to the display. It is simple with a trusted server
that handles the multiplexing and security screening.
It is pretty trivial if you have no strong security needs.
The "sub" target works o.k. for that.

> 1b) If two programs securely share a display, do they do so by giving up
> hardware acceleration ?

On the contrary. Acceleration-only cards would allow doing that securely.
SGI hardware, for example, does not expose any buffers of the graphics HW,
but routes all accesses through the accelerator, which can enforce clipping
with the desired granularity (window borders, possibly even with overlapping
windows using WindowIDs).

A possible solution for secure sharing would be the method cube3d uses for
just a "cool effect": placing 6 arbitrary LibGGI programs (say an XGGI
server, a nixterm terminal, some demos, some games) on the 6 sides of a cube
that works as a "multiplexer". You can turn the cube to your liking, select
the application that gets events, and so on.

Cube3d communicates using shared memory, which of course disables
acceleration for the client applications, as they then draw to system
memory.

Another possible path is making another server for the "remote" target that
can handle multiple connections. That would be accelerated, as LibGGI
command streams are passed through to the remote end.

> 2) Can I implement a feature like the WinNT logon dialog that comes up in
> response to the three fingered salute ?

This is in principle possible. As with KGI/KGIcon there is a kernel driver
for the graphics subsystem, all relevant subsystems (keyboard and output)
are under kernel control and thus safe for such an action.

It might result in corrupting the display of applications that are currently
accessing the screen, but that can be avoided by saving it away of course.

The path to go would be:
- SAK comes in
- signals all processes that are accessing the output subsystem that they
  will lose their display. This allows them to clean up their state and
  flush "half-done" accelerator queues.
- When all applications have acked the switch, or a timeout or "repeated
  SAK" occurs, applications that are still accessing the display device will
  be halted. In that case, display corruption can occur on resume, as many
  graphics cards cannot save their complete state.
- System gives control to the SAK dialog. I'd suggest simply spawning a
  process for that, e.g. by signaling init.
  That process should run on its own virtual console, reserved for that
  purpose, and process termination will result in a switch back.

This should be relatively easy to implement, as it is a trivial extension 
to virtual console switching.

CU, ANdy

-- 
= Andreas Beck                    |  Email :  <[EMAIL PROTECTED]> =
