Hi!

I'm new to this list and I'm trying to get Mir running in a VMware virtual machine on top of the vmwgfx driver stack. The idea is to first get "mir_demo_server_basic" running with demo clients and then move on to Xmir, patching up our drivers as necessary.

So far, I've encountered a few issues that might need attention from Mir developers:

1) The function mggh::DRMHelper::is_appropriate_device() in gbm_display_helpers.c checks whether a drm device has any children other than itself. This doesn't hold for vmwgfx, so the server fails to start, deciding that our drm device is not appropriate. Why the child requirement?
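For reference, the check (as I understand it) amounts to something like the following libudev sketch; the helper name and structure here are mine, not Mir's actual code:

#include <libudev.h>
#include <stdbool.h>
#include <string.h>

/* Returns true if 'parent' has at least one child device in sysfs.
 * vmwgfx's drm device has no children, so a check like this
 * rejects it. */
static bool has_child_devices(struct udev *udev, struct udev_device *parent)
{
    bool found = false;
    struct udev_list_entry *entry;
    struct udev_enumerate *e = udev_enumerate_new(udev);

    udev_enumerate_add_match_parent(e, parent);
    udev_enumerate_scan_devices(e);

    udev_list_entry_foreach(entry, udev_enumerate_get_list_entry(e))
    {
        /* The enumeration includes the parent itself; skip it. */
        if (strcmp(udev_list_entry_get_name(entry),
                   udev_device_get_syspath(parent)) != 0)
        {
            found = true;
            break;
        }
    }
    udev_enumerate_unref(e);
    return found;
}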

2) Once I get the basic server to start, the cursor disappears as soon as I move the mouse. This boils down to Mir thinking the cursor is outside the current mode's bounding box. At Mir server startup there is no KMS setup configured, so DisplayConfigurationOutput::current_mode_index is set to max (or -1) in mgg::RealKMSDisplayConfiguration::add_or_update_output(). The value of DisplayConfigurationOutput::current_mode_index then never seems to change, even after Mir sets a display configuration, so when the mode bounding box is calculated, an out-of-bounds array access is performed.
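To illustrate with made-up types (not Mir's code): with the index left at an unsigned -1, anything that indexes the mode array with it walks off the end, so a validity check is needed first:

#include <stddef.h>
#include <stdbool.h>

struct output
{
    size_t num_modes;
    size_t current_mode_index; /* (size_t)-1 while no KMS mode is set */
};

/* Only index modes[current_mode_index] when this holds; an unsigned
 * "-1" compares larger than any real mode count, so this also
 * catches the unset case. */
static bool current_mode_valid(const struct output *o)
{
    return o->current_mode_index < o->num_modes;
}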

3) Minor thing: the "Virtual" connector type is not recognized by Mir. (It's actually not in xf86drmMode.h either; I'll see if I can fix that up.) It is, however, in the kernel user-space API file "drm_mode.h", right after the "eDP" connector type. It should be added to connector_type_name() in real_kms_output.cpp.
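In case it helps: in the kernel's drm_mode.h, DRM_MODE_CONNECTOR_VIRTUAL is 15, directly after DRM_MODE_CONNECTOR_eDP (14). A name table of the kind connector_type_name() presumably uses would then gain one entry; the table below is illustrative, not Mir's actual one:

#include <stdint.h>

static const char *connector_type_name(uint32_t type)
{
    static const char *const names[] = {
        "unknown", "VGA", "DVI-I", "DVI-D", "DVI-A",
        "composite", "S-video", "LVDS", "component", "9-pin DIN",
        "DisplayPort", "HDMI-A", "HDMI-B", "TV", "eDP",
        "Virtual" /* DRM_MODE_CONNECTOR_VIRTUAL == 15 */
    };

    if (type < sizeof(names) / sizeof(names[0]))
        return names[type];
    return "unknown";
}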

4) vmwgfx does not yet implement the drm "Prime" mechanism for sharing dma buffers, which Mir relies on. I'm about to implement that.
However, it seems like Mir is using dma buffers in an illegal way:
1) Mir creates a GBM buffer.
2) Mir uses Prime to export a dma_buf handle, which it shares with its clients.
3) The client imports the dma_buf handle and uses drm to turn it into a drm buffer handle.
4) The buffer handle is typecast to a "dumb" buffer handle and then mmap'ed, in struct GBMMemoryRegion : mcl::MemoryRegion.
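In libdrm terms the sequence looks roughly like this (a sketch with error handling omitted; the function and variable names are mine). The problem is step 4: the GEM handle coming back from drmPrimeFDToHandle() refers to whatever object the exporting driver created, and DRM_IOCTL_MODE_MAP_DUMB simply assumes it is a dumb buffer:

#include <gbm.h>
#include <xf86drm.h>
#include <stdint.h>
#include <sys/mman.h>

static void *map_shared_buffer(struct gbm_device *gbm,
                               int server_fd, int client_fd,
                               uint32_t w, uint32_t h)
{
    /* 1) Server creates a GBM buffer. */
    struct gbm_bo *bo = gbm_bo_create(gbm, w, h, GBM_FORMAT_XRGB8888,
                                      GBM_BO_USE_SCANOUT |
                                      GBM_BO_USE_RENDERING);

    /* 2) Server exports a dma_buf fd via Prime. */
    int prime_fd;
    drmPrimeHandleToFD(server_fd, gbm_bo_get_handle(bo).u32,
                       DRM_CLOEXEC, &prime_fd);

    /* 3) Client imports the fd, getting back a GEM handle. */
    uint32_t handle;
    drmPrimeFDToHandle(client_fd, prime_fd, &handle);

    /* 4) Client treats that handle as a dumb buffer and maps it.
     * Nothing guarantees the imported object supports this ioctl. */
    struct drm_mode_map_dumb map = { .handle = handle };
    drmIoctl(client_fd, DRM_IOCTL_MODE_MAP_DUMB, &map);
    return mmap(NULL, (size_t)w * h * 4, PROT_READ | PROT_WRITE,
                MAP_SHARED, client_fd, map.offset);
}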

It's illegal to typecast a GBM buffer to a dumb buffer in this way. It accidentally happens to work with the major drivers because, deep inside, both a GBM buffer and a dumb buffer are represented by a GEM buffer object. With vmwgfx that's not the case for either a GBM buffer or a dumb buffer; they are different object types.

In fact, currently the only way to mmap() a GBM buffer (unless it's a cursor buffer) is to export a dma_buf and use its mmap() operation. But that is not implemented by any of the major drivers, since it's not really desirable: a GBM buffer is completely opaque and may not even reside in mappable memory, so any attempt to map it may result in coherence issues and, in some cases, extremely costly operations. This leads to awkward driver code that tries to guess the usage patterns of applications mixing mmap'ed CPU access and accelerated access to the same object.

The correct way to do this is to have the client import the buffer into the appropriate API and use TexSubImage-like operations (or perhaps ReadPixels / WritePixels) to access the buffer from the CPU.
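As a sketch of that path (the extensions EGL_EXT_image_dma_buf_import and GL_OES_EGL_image are real; the surrounding setup is illustrative and error handling is omitted):

#include <EGL/egl.h>
#include <EGL/eglext.h>
#include <GLES2/gl2.h>
#include <GLES2/gl2ext.h>

/* Import a dma_buf as an EGLImage-backed texture, then let the
 * driver handle CPU uploads via glTexSubImage2D. */
static GLuint import_dmabuf_texture(EGLDisplay dpy, int dmabuf_fd,
                                    int width, int height, int stride,
                                    const void *pixels)
{
    const EGLint attribs[] = {
        EGL_WIDTH, width,
        EGL_HEIGHT, height,
        EGL_LINUX_DRM_FOURCC_EXT, 0x34325258, /* DRM_FORMAT_XRGB8888 */
        EGL_DMA_BUF_PLANE0_FD_EXT, dmabuf_fd,
        EGL_DMA_BUF_PLANE0_OFFSET_EXT, 0,
        EGL_DMA_BUF_PLANE0_PITCH_EXT, stride,
        EGL_NONE
    };
    PFNEGLCREATEIMAGEKHRPROC create_image =
        (PFNEGLCREATEIMAGEKHRPROC)
            eglGetProcAddress("eglCreateImageKHR");
    PFNGLEGLIMAGETARGETTEXTURE2DOESPROC bind_image =
        (PFNGLEGLIMAGETARGETTEXTURE2DOESPROC)
            eglGetProcAddress("glEGLImageTargetTexture2DOES");

    EGLImageKHR image = create_image(dpy, EGL_NO_CONTEXT,
                                     EGL_LINUX_DMA_BUF_EXT, NULL,
                                     attribs);
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    bind_image(GL_TEXTURE_2D, (GLeglImageOES)image);

    /* CPU-side writes go through the driver, which can pick the
     * cheapest coherent path instead of a blind mmap(). */
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                    GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    return tex;
}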

IMHO, this is something that needs to be addressed as soon as possible.

Thanks,
Thomas
