On 11/05/2013 09:54 PM, Alexandros Frantzis wrote:
> On Tue, Nov 05, 2013 at 06:47:55PM +0100, Thomas Hellstrom wrote:
> [snip]
>>> When creating a dumb GBM buffer we create a DRIimage associated
>>> with the dumb DRM buffer using DRI's createImageFromName(). After
>>> that we treat the GBM buffer as a normal non-dumb buffer as far as
>>> rendering is concerned: we create an EGLImage using the GBM buffer
>>> object as the native pixmap type, and "upload" the data with
>>> glEGLImageTargetTexture2DOES().
>> Alexandros, unfortunately casting a dumb buffer to an accelerated
>> GBM buffer is as much of an API violation as the other way around,
>> and works on some drivers for the same reason, but may stop working
>> at any point in time. I don't think any patches utilizing this
>> approach will ever make it upstream.
> [snip]
> I think that being able to use accelerated buffers with linearly
> accessible pixels (in the DRI case: dumb drm buffers with associated
> DRIimages) should at least be an option, even if there is no
> guarantee of general support. All major implementations support it
> relatively efficiently at the moment, and although it's clear that
> using buffers in such a way may come with a performance penalty, or
> not be possible at all in some cases, I don't think we should
> preclude it from the API completely.
I understand your point, but the APIs look the way they do for a
reason: instead of relying on mmapping a huge buffer, like old X did,
a modern implementation should be smarter and use the preferred API
method to transfer pixel data to the accelerator. This is not just
something that happened to end up this way; it was discussed and
agreed upon by the various driver developers, and it wouldn't
surprise me if the major drivers blocked this access path once their
developers find out about it.
The GBM WRITE buffers are intended for cursors, and as such it's
legal for a driver to block read access. Even where a driver doesn't,
it may well place these buffers in write-combined or uncacheable
memory, which makes things like software compositing painfully slow.
I took a brief look at the Wayland code, and it uses shmem + damage
tracking to handle this, which IMO is a good choice. Another reason
is that shmem can handle processor cache coherency problems
efficiently, whereas dumb buffers generally can't; they'd probably
need to rely on coherent (uncached) memory for this.
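To illustrate the pattern (a hedged sketch of the shmem +
damage-tracking approach in Wayland terms, not Wayland's or Mir's
actual code; the shm name, dimensions and error handling are
assumptions):

#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <wayland-client.h>

/* A CPU-side buffer backed by shared memory that the compositor can
 * map directly -- no dumb DRM buffer involved. */
static struct wl_buffer *
create_shm_buffer(struct wl_shm *shm, int width, int height,
                  void **pixels)
{
    int stride = width * 4;                       /* ARGB8888 */
    int size = stride * height;
    int fd = shm_open("/sketch-shm", O_RDWR | O_CREAT | O_EXCL, 0600);
    shm_unlink("/sketch-shm");                    /* keep fd, drop name */
    ftruncate(fd, size);
    *pixels = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED,
                   fd, 0);

    struct wl_shm_pool *pool = wl_shm_create_pool(shm, fd, size);
    struct wl_buffer *buffer = wl_shm_pool_create_buffer(
        pool, 0, width, height, stride, WL_SHM_FORMAT_ARGB8888);
    wl_shm_pool_destroy(pool);
    close(fd);
    return buffer;
}

/* After drawing into *pixels, hand the compositor the buffer plus the
 * dirty rectangle, so only the damaged region needs transferring. */
static void
present(struct wl_surface *surface, struct wl_buffer *buffer,
        int x, int y, int w, int h)
{
    wl_surface_attach(surface, buffer, 0, 0);
    wl_surface_damage(surface, x, y, w, h);
    wl_surface_commit(surface);
}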
Also, I think it's important to deal with this now: once software Mir
clients that have no idea about damage tracking start to emerge, and
this usage path then gets blocked, the issue will immediately be
raised from a vmware problem to a serious Mir performance problem.
I'm trying to tell you this as politely as I can, but I urge you to
rework this path.
Unfortunately I don't think this is an option for us. Even if we
ignored the API violation, without some form of synchronization
information telling us transfer directions and dirty regions it would
be too inefficient.
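To make that concrete, here is a purely hypothetical sketch of the
kind of per-access information a driver would want -- this is not an
existing Mir, GBM or DRI interface:

/* Which way the pixels need to move for this access. */
enum transfer_direction {
    TRANSFER_TO_GPU,   /* CPU wrote pixels; an upload is needed    */
    TRANSFER_FROM_GPU  /* CPU wants to read; a readback is needed  */
};

struct dirty_rect {
    int x, y, width, height;
};

/* What the driver needs to schedule an efficient transfer: the
 * direction, plus only the regions that actually changed. */
struct buffer_access_info {
    enum transfer_direction direction;
    const struct dirty_rect *rects;
    int num_rects;
};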
> Regardless of the GBM API concerns, I imagine that if a DRI
> implementation doesn't support the aforementioned operation it
> should report a failure, e.g., when trying to create the DRIimage
> for a dumb DRM buffer. If the VMware driver works this way then it
> would be easy for Mir to handle the error and fall back to some
> other supported method, e.g., a plain dumb buffer plus
> glTex{Sub}Image2D(). Is this behavior feasible/reasonable from the
> VMware driver's perspective?
I think vmware will already error out here (if the error is
propagated up the DRI stack, that is), and we could probably work out
something more efficient, but damage tracking is needed.
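The fallback Alexandros suggests could look roughly like this (a
sketch only: try_wrap_bo() is a hypothetical stand-in for the
DRIimage/EGLImage wrapping step, assumed to return EGL_NO_IMAGE_KHR
when the driver refuses dumb buffers, and the format choices are
assumptions):

#include <gbm.h>
#include <EGL/egl.h>
#include <EGL/eglext.h>
#include <GLES2/gl2.h>
#include <GLES2/gl2ext.h>

EGLImageKHR try_wrap_bo(EGLDisplay dpy, struct gbm_bo *bo);

static GLuint
upload_with_fallback(EGLDisplay dpy, struct gbm_bo *bo,
                     const void *pixels, int width, int height)
{
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);

    EGLImageKHR image = try_wrap_bo(dpy, bo);
    if (image != EGL_NO_IMAGE_KHR) {
        /* Fast path: bind the dumb buffer directly, no copy. */
        PFNGLEGLIMAGETARGETTEXTURE2DOESPROC target_texture =
            (PFNGLEGLIMAGETARGETTEXTURE2DOESPROC)
            eglGetProcAddress("glEGLImageTargetTexture2DOES");
        target_texture(GL_TEXTURE_2D, (GLeglImageOES)image);
    } else {
        /* Fallback: explicit upload through GL, which every driver
         * supports; later frames could use glTexSubImage2D() with
         * just the damaged region. */
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    }
    return tex;
}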
> Thanks,
> Alexandros
Thanks,
Thomas
--
Mir-devel mailing list
Mir-devel@lists.ubuntu.com
Modify settings or unsubscribe at:
https://lists.ubuntu.com/mailman/listinfo/mir-devel