On 04/16/2015 06:56 PM, Nigel Tao wrote:
On Fri, Apr 17, 2015 at 5:53 AM, Aaron Plattner <aplatt...@nvidia.com> wrote:
SHM pixmaps are only allowed if the driver enables them.  It's the
application's job to check before trying to create one.  In NVIDIA's case,
we disabled them because they can't be accelerated by the GPU and are
generally terrible for performance.

You can query it with "xdpyinfo -ext MIT-SHM".

Ah, SHM QueryVersion will do this programmatically. Thanks.
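
A minimal sketch of that check in Go, assuming the BurntSushi/xgb
bindings (other XCB-style Go bindings look very similar):

package main

import (
        "fmt"
        "log"

        "github.com/BurntSushi/xgb"
        "github.com/BurntSushi/xgb/shm"
)

func main() {
        conn, err := xgb.NewConn()
        if err != nil {
                log.Fatal(err)
        }
        defer conn.Close()

        // Initialize the MIT-SHM extension on this connection.
        if err := shm.Init(conn); err != nil {
                log.Fatalf("MIT-SHM not available: %v", err)
        }

        // QueryVersion reports, among other things, whether SHM pixmaps
        // are allowed (NVIDIA disables them, as noted above).
        reply, err := shm.QueryVersion(conn).Reply()
        if err != nil {
                log.Fatal(err)
        }
        fmt.Printf("MIT-SHM %d.%d, shared pixmaps allowed: %v\n",
                reply.MajorVersion, reply.MinorVersion, reply.SharedPixmaps)
}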


I'm not sure why you're using shared memory to begin with.  Especially if
you're just doing alpha blending, you're almost certainly much better off
using OpenGL or the X11 RENDER extension to let the GPU do the graphics
work.

Yes, I want to use Render. I also want to avoid copying millions of
pixels between X client and X server processes via the kernel, so I
want to use SHM too.


At least for NVIDIA, you're going to need to copy the pixels into video RAM
at some point anyway.  If you can upload the pixels to the GPU once and then
leave them there, that's your best bet.

Ah, so what ended up working for me is to create a new (regular,
non-SHM) Pixmap, call SHM PutImage to copy the pixels to the Pixmap,
then use Render with that Pixmap as source.

Yes, that sounds like the right approach to me.
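
A rough sketch of that pixmap-then-composite flow in Go, again assuming
the BurntSushi/xgb bindings (the function name and parameters are
illustrative). It assumes shm.Init and render.Init have already been
called on the connection, and that a SysV shared memory segment holding
the pixels has been created with shmget/shmat and announced to the
server with shm.Attach:

// Additional imports: "github.com/BurntSushi/xgb/render",
// "github.com/BurntSushi/xgb/shm", "github.com/BurntSushi/xgb/xproto".
func uploadAndComposite(
        conn *xgb.Conn,
        root xproto.Window,       // any drawable on the target screen
        shmseg shm.Seg,           // segment already attached via shm.Attach
        argb32 render.Pictformat, // depth-32 format with alpha (see below)
        dstPic render.Picture,    // Picture wrapping the destination window
        w, h uint16,
) error {
        // Ordinary (non-SHM) pixmap: the driver is free to keep it in VRAM.
        pix, err := xproto.NewPixmapId(conn)
        if err != nil {
                return err
        }
        xproto.CreatePixmap(conn, 32, pix, xproto.Drawable(root), w, h)

        // PutImage needs a GC; it can be created on the pixmap itself,
        // no dummy window required.
        gc, err := xproto.NewGcontextId(conn)
        if err != nil {
                return err
        }
        xproto.CreateGC(conn, gc, xproto.Drawable(pix), 0, nil)

        // One copy from the shared segment into the pixmap; the pixel
        // data never travels over the X socket.
        shm.PutImage(conn, xproto.Drawable(pix), gc,
                w, h,       // total width/height of the image in the segment
                0, 0, w, h, // source rectangle
                0, 0,       // destination offset
                32, xproto.ImageFormatZPixmap, false, shmseg, 0)

        // Wrap the pixmap in a Picture and let Render do the blending.
        srcPic, err := render.NewPictureId(conn)
        if err != nil {
                return err
        }
        render.CreatePicture(conn, srcPic, xproto.Drawable(pix), argb32, 0, nil)
        render.Composite(conn, render.PictOpOver, srcPic, render.Picture(0),
                dstPic, 0, 0, 0, 0, 0, 0, w, h)
        return nil
}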

For NVIDIA, does a (server-side) Pixmap always mean video RAM and not
general purpose RAM? Either way, it works for me, but it'd help my
mental model of what actually happens on the other side of the X
connection.

It's not always the case, but that's a good mental model. The driver will kick pixmaps out of video RAM and into system RAM for a variety of reasons, but it'll generally move them back to video RAM when you try to use them.

Generally, you only want to use the 32-bit visual if you expect the alpha
channel of your window to be used by a composite manager to blend your
window with whatever's below it.  If you're just doing alpha blending
yourself in order to produce opaque pixels to present in a window, you
should use a 24-bit visual and do your rendering using OpenGL or an
offscreen 32-bit pixmap.
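
One way to dig the two Pictformats involved out of the server, sketched
with the same xgb render bindings (findFormats is just an illustrative
helper; render.Init must have been called first):

// findFormats returns the Pictformat for the root (depth-24) visual,
// used for the window's Picture, and a depth-32 direct format with an
// alpha channel ("ARGB32"), used for the offscreen source pixmap.
func findFormats(conn *xgb.Conn, rootVisual xproto.Visualid) (win, argb32 render.Pictformat, err error) {
        reply, err := render.QueryPictFormats(conn).Reply()
        if err != nil {
                return 0, 0, err
        }
        for _, f := range reply.Formats {
                if f.Type == render.PictTypeDirect && f.Depth == 32 && f.Direct.AlphaMask != 0 {
                        argb32 = f.Id
                        break
                }
        }
        for _, s := range reply.Screens {
                for _, d := range s.Depths {
                        for _, v := range d.Visuals {
                                if v.Visual == rootVisual {
                                        win = v.Format
                                }
                        }
                }
        }
        return win, argb32, nil
}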

Yeah, I eventually got it working without any Bad Match errors. My
window contents needed their own (depth-24) Visual (i.e. the Screen's
RootVisual), GContext and Pictformat, and my source pixels (an
offscreen pixmap that may or may not be a SHM pixmap) separately
needed their own (depth-32) Visual, GContext, Pictformat and Colormap.
I'm not sure if there's a better method, but I also made an unmapped
1x1 depth-32 Window just to get that GContext. It all makes sense, in
hindsight. It just wasn't obvious to me in foresight.
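
For completeness, the depth-32 colormap/window part of that setup looks
roughly like this (same xgb names; conn and root as before, and visual32
is assumed to be a 32-bit TrueColor Visualid found by walking the
Screen's Depths). The explicit BackPixel, BorderPixel and Colormap
values are what keep CreateWindow from failing with Bad Match when the
new window's depth differs from its parent's:

// A colormap matching the 32-bit visual, allocated but never filled in.
cmap, _ := xproto.NewColormapId(conn)
xproto.CreateColormap(conn, xproto.ColormapAllocNone, cmap, root, visual32)

// Unmapped 1x1 depth-32 window, used only as a depth-32 drawable.
dummy, _ := xproto.NewWindowId(conn)
xproto.CreateWindow(conn, 32, dummy, root, 0, 0, 1, 1, 0,
        xproto.WindowClassInputOutput, visual32,
        xproto.CwBackPixel|xproto.CwBorderPixel|xproto.CwColormap,
        []uint32{0, 0, uint32(cmap)})
// (Though, as the reply below points out, the GContext can be created
// on the depth-32 pixmap directly, which makes this window unnecessary.)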

You should be able to create a GC for a Pixmap directly, rather than using a dummy window. Or am I misunderstanding what your dummy window is for?

It might make sense to do your rendering using a library like Cairo that can take care of the behind-the-scenes X11 work for you.

I'd like to avoid using OpenGL if possible. I'm using the Go
programming language, and can write a pure Go (no C) program that uses
the equivalent of XCB. IIUC, using OpenGL requires using Xlib instead
of XCB (or an equivalent of XCB), and I'd prefer to avoid depending on
Xlib and any potential issues with mixing goroutines (Go green
threads, roughly speaking) and OpenGL and Xlib's threading model.
