On Tue, 7 Mar 2000, Andreas Beck wrote:
> > Currently the "triangle drawing" extension library which most people use is
> > of course GGIMesa, but LibGGI2D has triangle drawing functions as well,
> > and of course there is always LibGGI3D.
>
> I'd say either rewrite LibGGI2D and add it (rewrite, to resolve the
> licensing issue) or add it somewhere else like LibGGI3D.
	Of course I favor the LibGGI3D approach, but using LibGGI3D will
not remove the need for a LibGGI extension which has drawing functions
corresponding to the more advanced 2D acceleration features (i.e. those
not handled in LibGGI) found on some types of video hardware. We still
need procedural APIs for that stuff, which can then be bound into GGI3D
modules however desired.
The nice thing about using LibGGI3D is that it removes the need
for the userspace drawing function APIs (usually but not always LibGGI
extension libraries) to be generic. They can all be as hardware-specific
as Glide/Texus if necessary, since they'll be driven by extension-specific
target code of the individual GGI3D modules anyway. Same principle as
with the KGI drivers that export a hardware-specific 'metalanguage' to
userspace and are driven by driver-specific LibGGI target code.
> I already _do_ have an accelerated implementation for plain triangles for my
> Permedia 2 chip. However it's pretty much dormant right now, as you'd have
> to use the raw command interface to make use of it now.
You know Steffen and I are working on that PM2 driver for the new
KGI, right? Can we use that code?
> > > I'm interested in also setting up Z
> > > and alpha buffers. How would I go about doing that ?
>
> Basically same thing ... add a function to allocate them to the kernel
> driver and them mmap.
Yes. The /proc/gfx[N] code in the current CVS (disabled now)
allows you to do this for KGIcon drivers. For each head number N, a
/proc/gfx[N] directory is created. Inside each directory are files, some
of which you can open() and then mmap() to expose a hardware buffer. I
exposed every buffer the Savage4 could export (front, back, z, stencil,
texture0, texture1, DMA command FIFO, MMIO command FIFO, you name it) in
this fashion. These days we would probably ditch /proc in favor of devfs,
but the principle would be mostly the same.
One thing that sucked was that all the mappings were static. I
chopped the 32MB video memory into a good common layout, generated a
static list of kgi_mmio_regions to match, and got it to work in GGIMesa
for textures. Mesa's device driver interface didn't support physical z or
stencil buffers at the time either, so I had to give up on those
altogether, but otherwise it worked perfectly and I was happy. I don't
think I ever tried having multiple processes open() and mmap() the exposed
buffers, and I never figured out a good way to virtualize the exposed
hardware resource mappings.
The Emu10k1 driver I work with at Creative uses an interesting
method of dynamically allocating and freeing hardware resources based on
multiple open()s of the /dev/dsp file. Each open() tries to allocate the
basic unit of hardware resource which corresponds to the file being opened
- a stereo PCM output channel, in the case of the Emu10k1 driver. Once
the initial open() is successful, you can use ioctl()s on the file to
request more resources, if such a thing makes sense for the given case.
For example, if you wanted to load a DSP patch which was only used within
your local mixer context, this is how it could be done (well we don't
support DSP patches yet, but...).
	This could provide a very nice and simple way to manage the
hardware buffer layouts of a video card. Just create the mapper files for
all possible types of exportable buffers, and then dynamically allocate
them with open(). This does require the driver to have the intelligence
to manage a dynamic resource map, and possibly to deal with contiguous
resource fragmentation issues as well. But it does
provide userspace with a clean, simple, logical and powerful method of
requesting resources from a device driver. Definitely a technique we
should keep in mind for KGI+devfs.
> We should talk about a generic API for requesting
> such extra features, though.
I think the best approach would be to extend LibGGI's concept of
resources, and the ggiResourceXYZ() functions, in whole or in part, to
understand the concept of abstract resource types. The ggiResourceXYZ()
functions could be passed a third parameter which would probably be
similar to a suggest-string, and would define a path from the root node of
the namespace to the desired resource identifier node. LibGGI would define
its own simple resource namespace, and then each extension, when attached
to a visual, would extend the resource namespace tree with its own
resource string identifiers.
Examples:
/* Hierarchical resource namespace strings */
resource // Generic untyped resource, like we have now
/* Lock/mutex/semaphore type resources */
resource-lock // Generic lock/mutex
resource-lock-glide-lfb // grLfbLock(), target-specific
resource-lock-sysvipc-semaphore // semctl(), environment-specific
/* DirectBuffer resource types */
resource-db-plb-clut-8bpp // 8bpp palettized pixel linear buffer
resource-db-plb-true-rgb565 // 16bpp truecolor PLB
resource-db-indirect-yuv555 // YUV buffer
resource-db-indirect-amiga-ham6 // Target-specific
resource-db-rgba8888-tiled-s3-savage4 // Target-specific
resource-db-z // Generic z buffer, not that useful by itself
resource-db-z-16bit // More useful for e.g. GGIMesa
resource-db-z-16bit-autoclear-s3-savage4 // Target-specific
resource-db-stencil // Generic stencil buffer
resource-db-stencil-8bit // More useful
resource-db-stencil-1bit-d3dalphatest // D3D alpha test used as stencil
resource-accelqueue // Serialized primitive pipe
resource-accelqueue-explicitflush // ggiFlush(), etc
resource-accelqueue-implicitflush // Ping pong buffers, etc
resource-accelqueue-mux // Like giiJoinInputs()
/* Resources for querying target info */
resource-target // Not too useful by itself
resource-target-glide // Can we use the Glide target?
resource-target-fbdev // Can we use the fbdev target?
resource-target-fbdev-mga2642 // Do we have this exact driver?
resource-target-kgi // Can we use the KGI target?
resource-target-kgi-2daccels-copybox // Does our KGI driver accel this?
/* Resources for querying the extension environment */
resource-extension // Are _any_ extensions loaded?
resource-extension-ggimesa // Is GGIMesa attached to the current visual?
resource-extension-ggimesa-version // How old is it?
resource-extension-ggimesa-target-glide // Using GGIMesa's glide target?
/* ... Ten billion other possible resource strings ... */
You (hopefully) get the idea. A system like the one described
above would have many advantages. In particular, it would require only
one minimal change to the base LibGGI API. All we need to do is extend the
ggiResourceXYZ functions to take one more parameter, which is a namespace
string. The rest of the details of the namespace(s), how to parse the
strings and how to interpret the substrings do not matter at the API
level.
This scheme would distribute the necessary intelligence to parse
and understand the resource type strings only to the code which needs to
understand that part of the namespace. The simple top-level resource
strings (resource-lock, resource-db, resource-target, etc) can be easily
handled by LibGGI. When it becomes necessary to extend those strings to
be able to handle target-specific or extension-specific resources or
resource categories (e.g. resource-db-z-16bit), only the target or
extension (or extension target) code which needs to understand what that
last '-z-16bit' really means will have to deal with it. GGIMesa would
need to know what a z-buffer is, and how a 16-bit z-buffer differs from
other types of z-buffers, but LibGGI itself would not
have to know anything beyond "there is a type of resource called a
DirectBuffer".
Comments, anyone?
Jon
---
'Cloning and the reprogramming of DNA is the first serious step in
becoming one with God.'
- Scientist G. Richard Seed