eglCreateImageKHR for nested Mir

2013-09-06 Thread Alexandros Frantzis
Hi all,

I have been working on eglCreateImageKHR for the Mir EGL platform in
Mesa, using gbm_bo as the "native pixmap" type (EGL_NATIVE_PIXMAP_KHR).
This is needed by nested Mir, but could also be useful in general.

The current state of the (still very hacky) WIP can be found at:

lp:~afrantzis/mir/spike-nested-gbm-egl-image-hack/

and

http://paste.ubuntu.com/6071345/

(it is the diff against https://github.com/RAOF/mesa/ branch
egl-platform-mir)

I have run into the following problem: when a client connects to the
nested mir server, it fails almost immediately because of what seems to
be a problem referencing a buffer that was created earlier. In
particular, we create gbm_bos (using GEM objects internally) for the
buffers backing the client's surface, but sometimes, when we later try
to reference these GEM objects, e.g. to create an EGLImage for them, we
can't find them anymore!

An interesting observation is that if the client submits the first 3
buffers slowly (which trigger the gbm_bo creation and EGLImage
initialization and rendering on the server), the problem doesn't occur,
which indicates that there is some kind of synchronization problem.

For anyone interested, things to try with the code above:

!!! Make sure you run against the modified Mesa
(e.g. set your LD_LIBRARY_PATH)

First run the host server:
$ sudo LD_LIBRARY_PATH=... bin/mir_demo_server_basic -f /tmp/mir_host

Run the nested server:
$ sudo LD_LIBRARY_PATH=... bin/mir_demo_server_basic --nested-mode /tmp/mir_host --enable-input off

Run a normal client and watch the nested server crash:
$ sudo LD_LIBRARY_PATH=...  bin/mir_demo_client_egltriangle

Run an (initially) slow client and watch it work (I have modified this
example to start slow and speed up):
$ sudo LD_LIBRARY_PATH=...  bin/mir_demo_client_scroll

To aid the investigation, I created a client that stresses multi-threaded
gbm_bo and EGLImage creation and rendering, trying to approximate nested
Mir behavior, but I can't get it to fail:
$ sudo LD_LIBRARY_PATH=...  bin/mir_demo_client_egl_image -m /tmp/mir_host

Thoughts welcome :)

Note that I have only tried this on an Intel GPU. It would be interesting
to see if the behavior is different with Radeon or Nouveau.

Thanks,
Alexandros



Re: eglCreateImageKHR for nested Mir

2013-09-09 Thread Alexandros Frantzis
On Fri, Sep 06, 2013 at 09:24:16PM +0300, Alexandros Frantzis wrote:
[snip]

Further investigation uncovered that this happens only when using PRIME
fds in Mesa. Using GEM names works correctly.

I have updated the branch lp:~afrantzis/mir/spike-nested-gbm-egl-image-hack/
to send both a PRIME fd and a GEM name for each buffer for testing
purposes.

The diff (against https://github.com/RAOF/mesa/) at:

http://paste.ubuntu.com/6083139/

allows selecting which handle to use (set USE_PRIME in platform_mir.c to
0 to use GEM names).
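
For reference, the two export paths look roughly like this (a minimal
sketch using standard libdrm/gbm calls; error handling omitted, and
drm_fd is assumed to be the open DRM device):

#include <stdint.h>
#include <gbm.h>
#include <xf86drm.h>

/* Export both handle types for a buffer, as the test branch does. */
void export_buffer(int drm_fd, struct gbm_bo* bo,
                   int* prime_fd, uint32_t* gem_name)
{
    uint32_t const gem_handle = gbm_bo_get_handle(bo).u32;

    /* PRIME path (USE_PRIME=1): a dma-buf fd, passed to the client
       over the connection socket */
    drmPrimeHandleToFD(drm_fd, gem_handle, DRM_CLOEXEC, prime_fd);

    /* GEM name path (USE_PRIME=0): a global "flink" name */
    struct drm_gem_flink flink = {};
    flink.handle = gem_handle;
    drmIoctl(drm_fd, DRM_IOCTL_GEM_FLINK, &flink);
    *gem_name = flink.name;
}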

Thanks,
Alexandros



Re: Mir on vmwgfx

2013-11-05 Thread Alexandros Frantzis
On Tue, Nov 05, 2013 at 08:22:28AM +0100, Thomas Hellstrom wrote:
> Hi!
>
> I'm new to this list and I'm trying to get Mir running in a VMware
> virtual machine on top of the vmwgfx driver stack.
> The idea is to first get "mir_demo_server_basic" running with demo
> clients and then move on to Xmir, patching up our drivers as
> necessary.

Hi Thomas, thanks for looking into this. Feel free to also join
#ubuntu-mir on freenode if you need more direct information.

> So far, I've encountered a couple of issues that might need
> attention from MIR developers:
> 
> 1) function mggh::DRMHelper::is_appropriate_device() in
> gbm_display_helpers.c checks whether a drm device has any children
> except itself. This is not true for vmwgfx, and the server will fail
> to start thinking that our drm device is not appropriate. Why the
> child requirement?

I'll take a deeper look; it's probably an arbitrary requirement based
on what the major hardware drivers expose.

> 2) Once I get the basic server to start, the cursor disappears as
> soon as I move the mouse. This boils down to Mir thinking that the
> cursor is outside of the current mode bounding box. At Mir server
> startup, there is no KMS setup configured, hence
> DisplayConfigurationOutput::current_mode_index will be set to max
> (or -1) in mgg::RealKMSDisplayConfiguration::add_or_update_output().
> The value of DisplayConfigurationOutput::current_mode_index then
> doesn't seem to change even when Mir sets a display configuration,
> and when the mode bounding box is calculated, an out of bounds array
> access is performed.

I'll take a deeper look.

> 3) Minor thing: The "Virtual" connector type is not recognized by
> Mir. (actually it's not in xf86drmMode.h either, I'll see if I can
> fix that up), but it's in the kernel user-space api file
> "drm_mode.h" and is right after the "eDP" connector type. Should be
> added in connector_type_name() in real_kms_output.cpp

No problem.

> 4) vmwgfx does not yet implement the drm "Prime" mechanism for
> sharing of dma buffers, which Mir relies on. I'm about to implement
> that.
> However, it seems like Mir is using dma buffers in an illegal way:
> 1) Mir creates a GBM bufffer.
> 2) Mir uses Prime to export a dma_buf handle which it shares with
> its clients.
> 3) The client imports the dma_buf handle and uses drm to turn it
> into a drm buffer handle.
> 4) The buffer handle is typecast to a "dumb" buffer handle, and then
> mmap'ed, in struct GBMMemoryRegion : mcl::MemoryRegion.
> 
> It's illegal to typecast a GBM buffer to a dumb buffer in this way.
> It accidentally happens to work on the major drivers because, deep
> inside, both a GBM buffer and a dumb buffer are represented by a GEM
> buffer object. With vmwgfx that's not the case for either a GBM
> buffer or a dumb buffer, and they are different objects.

This code path (i.e. mmap-ing the buffer on the client side) is only
valid when the client has requested a "software" buffer, which on the
server side leads to the creation of a "dumb" DRM buffer (i.e., with
DRM_IOCTL_MODE_CREATE_DUMB). Although mmap-ing non-dumb buffers doesn't
fail per se with the major hardware drivers, the returned pixel data
usually has a non-linear layout (e.g. some sort of tiling), so it's not
really usable for our purpose.
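
For illustration, the server-side "software" path described above
boils down to the following (a minimal sketch using the standard DRM
dumb-buffer ioctls; error handling omitted):

#include <stdint.h>
#include <sys/mman.h>
#include <xf86drm.h>

void* map_dumb_buffer(int drm_fd, uint32_t width, uint32_t height)
{
    /* Create a buffer with a linear layout that the CPU can write */
    struct drm_mode_create_dumb create = {};
    create.width = width;
    create.height = height;
    create.bpp = 32;
    drmIoctl(drm_fd, DRM_IOCTL_MODE_CREATE_DUMB, &create);

    /* Dumb buffers are mmap-ed through a driver-provided fake offset */
    struct drm_mode_map_dumb map = {};
    map.handle = create.handle;
    drmIoctl(drm_fd, DRM_IOCTL_MODE_MAP_DUMB, &map);

    return mmap(nullptr, create.size, PROT_READ | PROT_WRITE,
                MAP_SHARED, drm_fd, map.offset);
}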

Note that the Mesa codebase we are using has some changes in the GBM code
(experimental, not upstream yet). Notably:
  * we allow creation of "dumb" drm buffers of arbitrary size (not just 64x64)
when using GBM_BO_USE_WRITE
  * gbm buffers backed by a "dumb" DRM buffer also get a DRIimage

We have found that the major hardware drivers support these changes.
Do they pose a problem for the VMware driver?

See http://github.com/RAOF/mesa/ for the changed code base. It's still
based on the Mesa version shipped in Ubuntu 13.10, but we plan to rebase
on a more recent version.

Thanks,
Alexandros



Re: Component clarification

2013-11-05 Thread Alexandros Frantzis
On Mon, Oct 28, 2013 at 10:41:29AM +0800, Daniel van Vugt wrote:
> Yeah, very good point about "gbm". That confused me when I joined the
> project too. It should be called "dri", I think.

What about just "mesa"? I think "mesa" is more recognizable, and
adequately descriptive of the backend's target driver model and APIs.
I don't think Mesa has or will have significant competing non-dri
backends. Having said that, I am fine with either "dri" or "mesa".

Whatever the final choice, I think this is something we are better off
doing early in the cycle (i.e. soon), since it's when we have a window
for non-feature oriented work.

Thanks,
Alexandros



Re: Component clarification

2013-11-05 Thread Alexandros Frantzis
On Tue, Nov 05, 2013 at 11:36:34AM +0100, Thomas Voß wrote:
> On Tue, Nov 5, 2013 at 11:34 AM, Alexandros Frantzis wrote:
> > On Mon, Oct 28, 2013 at 10:41:29AM +0800, Daniel van Vugt wrote:
> >> Yeah, very good point about "gbm". That confused me when I joined the
> >> project too. It should be called "dri", I think.
> >
> > What about just "mesa"? I think "mesa" is more recognizable, and
> > adequately descriptive of the backend's target driver model and APIs.
> > I don't think Mesa has or will have significant competing non-dri
> > backends. Having said that, I am fine with either "dri" or "mesa".
> >
> 
> I would prefer dri over mesa. Although your argument is technically
> correct, there is a difference between the interface name (dri) and
> the implementation (mesa), and I tend to favor the interface-name
> perspective.

We are heavily using some Mesa components/interfaces, like GBM, that are
not part of DRI, though.

Thanks,
Alexandros



Re: Mir on vmwgfx

2013-11-05 Thread Alexandros Frantzis
On Tue, Nov 05, 2013 at 03:36:52AM -0800, Jakob Bornecrantz wrote:

[SNIP]

Hi Jakob!

> > 
> > Note that the Mesa codebase we are using has some changes in the GBM code
> > (experimental, not upstream yet). Notably:
> >   * we allow creation of "dumb" drm buffers of arbitrary size (not just
> > 64x64) when using GBM_BO_USE_WRITE
> 
> There is no technical limit on this.

Good to hear.

> 
> >   * gbm buffers backed by a "dumb" DRM buffer also get a DRIimage
> 
> This will be a problem; at least to my knowledge DRIimages are backed
> by a gallium resource/texture, which in SVGA is backed by a surface,
> while dumb drm buffers would be backed by a dma-buffer (I think as
> of right now vmwgfx does not support the dumb interface).
> 
> Taking a step back here, SVGA has two types of resources: a surface,
> which is in simplified terms an opaque handle to a GL texture on the
> host side and cannot be mapped; and dma-buffers, which are regions of
> memory visible to both the guest and the host, used for transferring
> data to and from surfaces.
> 
> > 
> > We have found that the major hardware drivers support these changes.
> > Do they pose a problem for the VMware driver?
> 
> See above; and adding to that, if you are doing a software approach we
> would like the storage to be a dma-buffer and not backed by a surface
> (to avoid unnecessary data transfers and resource overhead).

The main use case for this is to allow direct surface pixel access for
clients that either can't, or prefer not to use GL to render, while
still keeping compositing performant.

In an early Mir server implementation, when the server had to deal with
"dumb" buffers it just mmap-ed the pixel data and uploaded them to a
texture with glTexImage2D(), so that the compositor could use them.
However, it turned out that this was very slow. To speed up things, we
added a backing DRIimage to the "dumb" gbm buffer, so that we could use
glEGLImageTargetTexture2DOES() with it to populate the texture (like we
do for non-dumb buffers), which is significantly faster.

If this is just a matter of reduced performance in the VMware driver for
this use case, then perhaps we should wait to see if it's actually a
problem before adding a special case for it in Mir. On the other hand,
if it is a matter of complicating the VMware driver excessively, we can
try to find a way to accommodate this elegantly in Mir. Would the
first approach (mmap-ing the dumb buffer and using glTexImage2D()) be
a better match for the VMware drivers?

Thanks,
Alexandros



Re: Mir on vmwgfx

2013-11-05 Thread Alexandros Frantzis
On Tue, Nov 05, 2013 at 12:13:42PM +0100, Thomas Hellstrom wrote:
[snip]
> Again that depends on the use-case. Is there any coherence assumed
> or data-transfer
> going on between the dumb drm buffer and the DRIimage? Or are you
> using it to be
> able to render to cursors?

The main use case for this is to allow direct surface pixel access for
clients that either can't, or prefer not to use GL to render, while
still keeping compositing performant.

Please see my reply to Jakob's email, for more information about this.

Thanks,
Alexandros



Re: Mir on vmwgfx

2013-11-05 Thread Alexandros Frantzis
On Tue, Nov 05, 2013 at 03:17:49PM +0100, Thomas Hellstrom wrote:
> On 11/05/2013 02:55 PM, Alexandros Frantzis wrote:
[snip]
> >The main use case for this is to allow direct surface pixel access for
> >clients that either can't, or prefer not to use GL to render, while
> >still keeping compositing performant.
> >
> >In an early Mir server implementation, when the server had to deal with
> >"dumb" buffers it just mmap-ed the pixel data and uploaded them to a
> >texture with glTexImage2D(), so that the compositor could use them.
> >However, it turned out that this was very slow. To speed up things, we
> >added a backing DRIimage to the "dumb" gbm buffer, so that we could use
> >glEGLImageTargetTexture2DOES() with it to populate the texture (like we
> >do for non-dumb buffers), which is significantly faster.
> 
> I still don't quite understand how you get the pixel data in the
> dumb buffer to the DRIimage?

When creating a dumb GBM buffer we create a DRIimage associated with the
dumb DRM buffer using DRI's createImageFromName(). After that we treat the
GBM buffer as a normal non-dumb buffer as far as rendering is concerned:
we create an EGLImage using the GBM buffer object as the native pixmap
type, and "upload" the data with glEGLImageTargetTexture2DOES().
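
In code, that path looks roughly like this (a sketch using the standard
EGL/GLES extension entry points; EGL display/context setup and error
handling omitted):

#include <EGL/egl.h>
#include <EGL/eglext.h>
#include <GLES2/gl2.h>
#include <GLES2/gl2ext.h>
#include <gbm.h>

GLuint texture_from_bo(EGLDisplay dpy, struct gbm_bo* bo)
{
    auto create_image = (PFNEGLCREATEIMAGEKHRPROC)
        eglGetProcAddress("eglCreateImageKHR");
    auto image_target_texture = (PFNGLEGLIMAGETARGETTEXTURE2DOESPROC)
        eglGetProcAddress("glEGLImageTargetTexture2DOES");

    /* The gbm_bo acts as the EGL_NATIVE_PIXMAP_KHR "native pixmap" */
    EGLImageKHR image = create_image(dpy, EGL_NO_CONTEXT,
                                     EGL_NATIVE_PIXMAP_KHR,
                                     (EGLClientBuffer) bo, nullptr);

    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    image_target_texture(GL_TEXTURE_2D, (GLeglImageOES) image);
    return tex;
}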

> On the glTexImage2D() approach, did you try ordinary shmem sharing
> rather than a dumb GBM buffer?
> It might have been that the GBM buffer was residing in uncached
> memory, which makes reading painfully slow.

This was a long time ago, and I don't remember if I tried using shmem
to share the data. I will keep that in mind, though, if we need to
implement a glTexImage2D() fallback.

> >If this is just a matter of reduced performance in the VMware driver for
> >this use case, then perhaps we should wait to see if it's actually a
> >problem before adding a special case for it in Mir. On the other hand,
> >if it is a matter of complicating the VMware driver excessively, we can
> >try to find a way to accommodate this elegantly in Mir. Would the
> >first approach (mmap-ing the dumb buffer and using glTexImage2D()) be
> >a better match for the VMware drivers?
> 
> In this case glTexImage2D() might be better since GBM dumb buffers
> reside in cached memory. However,
> it would be desirable only to copy the data that has been dirtied by
> the client. Our X server driver uses this approach.

As a first step, can we ignore this until it proves to be a performance
problem, or does even a simple implementation of supporting DRIimage for
dumb buffers fall into "complicating the VMware driver excessively"?

Thanks,
Alexandros



Re: Mir on vmwgfx

2013-11-05 Thread Alexandros Frantzis
On Tue, Nov 05, 2013 at 06:47:55PM +0100, Thomas Hellstrom wrote:

[snip]
> >When creating a dumb GBM buffer we create a DRIimage associated with the
> >dumb DRM buffer using DRI's createImageFromName(). After that we treat the
> >GBM buffer as a normal non-dumb buffer as far as rendering is concerned:
> >we create an EGLImage using the GBM buffer object as the native pixmap
> >type, and "upload" the data with glEGLImageTargetTexture2DOES().
> 
> Alexandros, unfortunately casting a dumb buffer to an accelerated
> GBM buffer is as much of an API violation as the other way around,
> and works on some drivers for the same reason, but may stop working
> at any point in time. I don't think any patches utilizing this
> approach will ever make it upstream.
[snip]

I think that being able to use accelerated buffers with linearly
accessible pixels (in the DRI case: dumb drm buffers with associated
DRIimages) should at least be an option, even if there is no guarantee
of general support. All major implementations support it relatively
efficiently at the moment, and although it's clear that using buffers in
such a way may come with a performance penalty, or not be possible at
all in some cases, I don't think we should preclude it from the API
completely.

> Unfortunately I don't think this is an option for us. Even if we'd
> ignore the API violation, without some form of synchronization
> information telling us transfer directions and dirty regions it
> would be too inefficient.

Regardless of the GBM API concerns, I imagine that if a DRI
implementation doesn't support the aforementioned operation it should
report a failure, e.g., when trying to create the DRIimage for a dumb
DRM buffer. If the VMware driver works this way then it would be easy
for Mir to handle the error and fall back to some other supported
method, e.g., a plain dumb buffer plus glTex{Sub}Image2D().
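
In Mir terms the fallback could then be driven by the allocation
result, along these lines (a sketch; the gbm calls are standard API,
while the DRIimage-backed behavior of GBM_BO_USE_WRITE is the
modified-Mesa semantics described earlier in the thread):

#include <stdint.h>
#include <gbm.h>

struct gbm_bo* create_software_bo(struct gbm_device* dev,
                                  uint32_t width, uint32_t height)
{
    /* Preferred: a CPU-writable bo that the driver can also wrap in a
       DRIimage for fast compositing */
    if (struct gbm_bo* bo = gbm_bo_create(dev, width, height,
                                          GBM_FORMAT_ARGB8888,
                                          GBM_BO_USE_WRITE))
        return bo;

    /* Driver reported failure: fall back to another supported method,
       e.g. a plain dumb buffer plus glTex{Sub}Image2D (not shown) */
    return nullptr;
}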

Is this behavior feasible/reasonable from the VMware driver's
perspective?

Thanks,
Alexandros



Re: Mir on vmwgfx

2013-11-06 Thread Alexandros Frantzis
On Wed, Nov 06, 2013 at 09:08:11AM +0100, Thomas Hellstrom wrote:
> I'm trying to tell you this as politely as I can, but I urge you to
> rework this path.

OK, hint taken. I will investigate what's involved in using shmem
instead of dumb buffers in Mir, and also what the performance
implications are.

In any case, feel free to ignore the "software" buffer case, since one
way or the other it shouldn't affect the driver (e.g. we may temporarily
add the dumb buffer fallback, until shmem is supported). Plus, you won't
trigger this path anyway unless you run a client that asks for
"software" buffers.

Thanks,
Alexandros



Re: Mir on vmwgfx

2013-11-06 Thread Alexandros Frantzis
On Wed, Nov 06, 2013 at 06:07:30AM -0800, Jakob Bornecrantz wrote:
> - Original Message -
> > On Wed, Nov 06, 2013 at 09:08:11AM +0100, Thomas Hellstrom wrote:
> > > I'm trying to tell you this as politely as I can, but I urge you to
> > > rework this path.
> > 
> > OK, hint taken. I will investigate what's involved in using shmem
> > instead of dumb buffers in Mir, and also what the performance
> > implications are.
> 
> When using software, is the application still expected to have opened
> an EGL display with the mir display/connection?

No. Just a normal connection to the mir server is sufficient.
See examples/flicker.c for a client using software buffers.

Thanks,
Alexandros



Re: Component clarification

2013-11-18 Thread Alexandros Frantzis
On Mon, Nov 18, 2013 at 10:27:31AM +, Alan Griffiths wrote:
> This came up again with my recent proposal to move Session related state
> to the "surfaces" component.
> 
> On 25/10/13 15:22, Kevin Gunn wrote:
> > I'm ok with "state & implementation code" changing from "surface" to
> > "core". I'm sure others will weigh in.
> 
> I'm not convinced that it says "semantic data model" but neither does
> "surfaces". But what do folks think about "core"?
> 
> Strongly For/Weakly For/Weakly Against/Strongly Against?

I think the term "core" is at the same time too vague and too strong.
It's too vague because it doesn't describe what the "core" component of
mir contains. It's too strong because "core" forces us to think in terms
of a special core component and other non-core components, which I don't
think is appropriate for our architecture.

My vote is at the stronger end of "Weakly Against"; I am sure we could
get used to it, but I think we can do better. Some alternatives
mentioned on IRC:

mir::scene
mir::model
mir::model_controller

Thanks,
Alexandros



Re: screenshotting, screencasting & unity8+mir

2013-11-26 Thread Alexandros Frantzis
On Tue, Nov 26, 2013 at 01:13:28PM +, Gerry Boland wrote:
> Hey,
> The system compositor will probably not be using the Qt scenegraph, but
> instead Mir's own compositor. I don't know if using
> QQuickWindow::grabWindow() will work in that case (though if it just
> calls glReadPixels, it probably will).
> 
> Also if hardware (non-GL) compositing is being used, reading back pixels
> via glReadPixels won't be enough as not everything on screen will be
> drawn by GL.
>
> As a result, I'm not certain we can rely on QQuickWindow::grabWindow() /
> glReadPixels, but would need something internal to Mir.
>

In general, graphics cards and drivers don't offer access to the final
output buffer (after HW compositing), at least not through a standard
API (I don't know if Android drivers offer such functionality). Even if
we move the screenshotting functionality into Mir, it's probable that the
best we will be able to do is to recomposite everything using OpenGL and
read back the pixels.

Perhaps what could happen is that when Unity8 wants to take a screenshot
it can tell Mir to composite everything with OpenGL for the current
frame.

The original discussion was about autopilot being able to take
screenshots/casts of unity8 for testing and validation purposes, and we
came up with a simple solution for this use case.

However, it seems that people have more use cases for screen capturing
that require additional complexity. I think we need to discuss a bit
more about what we really need and when, check what is feasible in our
time frames and prioritize work. The upcoming sprint seems ideal for
this.

Thanks,
Alexandros



Re: Mir in a virtual machine

2014-01-26 Thread Alexandros Frantzis
On Sat, Jan 25, 2014 at 02:04:29PM -0800, Rian Quinn wrote:
> At the moment I am trying to better understand how buffers are used
> and shared in Mir. I have spent the past couple of days doing nothing
> but reading the Mir source code, the DRM source code, and portions of
> the Mesa source code. Here is what I think I have learned so far,
> which will lead me to a question:
> 
>
> - The server creates the buffers. It can create either a hardware
> buffer, which is a gbm allocated “dumb” buffer, or a software buffer,
> which is nothing more than shared memory (in RAM). 

Using the mir_buffer_usage_hardware flag leads to the creation of a
normal gbm buffer, i.e., one suited for direct usage by the GPU. This is
*not* a gbm "dumb" buffer.

You are correct that using mir_buffer_usage_software creates a shared
memory (RAM) buffer.

> - When a hardware buffer is created, it uses DRM prime
> (drmPrimeHandleToFD) to create an FD for the "dumb" buffer.

Correct, but as noted above, it's not a gbm "dumb" buffer, but a normal
gbm buffer.

> - The server then provides the client with a “ClientBuffer” which is
> an abstraction of the hardware / software buffer containing the
> information about the buffer (like format, stride, etc…) and its FD.

> - To draw into this buffer, you need to
> call mir_surface_get_graphics_region which through some indirection,
> mmaps the buffer to a vaddr that can be written into. 

Not correct for hardware buffers; see below for more. This only works
for software buffers, in which case the fd passed to the client is a
shared memory fd which can be mapped. In the case of hardware buffers
it's a "prime" fd which in general doesn't support sensible mapping of
that kind.

> If you look at the basic.c demo, it creates a hardware buffer, and if
> you look at the fingerpaint.c demo, it creates a software buffer. If I
> modify the basic.c demo to call mir_surface_get_graphics_region, it
> fails on VMWare, saying that it could not mmap the buffer. It works
> fine if I change the basic.c to use a software buffer. 
>
> Is this an issue with VMWare? Or am I fundamentally not understanding
> something about how hardware buffers are used? If I am… why would a
> client use hardware buffers if it cannot map the buffer to use it?

In general, buffers created for direct GPU usage cannot be reasonably
mmap-ed and their pixels accessed directly. Even if the mapping
operation itself is supported by the driver, the contents of the mapped
area are usually not laid out in memory in a linear fashion (i.e. they
have some sort of tiling), and therefore not useful for pixel
manipulation.

The only way to draw into a hardware buffer in Mir is to go through one
of the "accelerated" APIs supported by Mesa EGL (e.g. OpenGL). The
example at examples/scroll.c shows how to do that.

Bottom line: the failure to mmap is expected for hardware buffers, it's
not a VMWare issue; the mir_surface_get_graphics_region call is only
meaningful for Mir software buffers.
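
For completeness, the software-buffer path on the client side looks
roughly like this (a sketch along the lines of examples/fingerpaint.c;
connection and surface setup are elided, and the surface is assumed to
have been created with mir_buffer_usage_software):

#include <mir_toolkit/mir_client_library.h>
#include <string.h>

void paint_gray(MirSurface* surface)
{
    MirGraphicsRegion region;

    /* Only meaningful for software buffers: the fd behind a hardware
       buffer is a prime fd and cannot be sensibly mapped */
    mir_surface_get_graphics_region(surface, &region);

    for (int y = 0; y < region.height; ++y)
        memset(region.vaddr + y * region.stride, 0x80,
               region.width * 4 /* assuming a 32-bit pixel format */);

    mir_surface_swap_buffers_sync(surface);
}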

Thanks,
Alexandros



Re: mir's internal renderer + other shells

2014-04-01 Thread Alexandros Frantzis
On Mon, Mar 31, 2014 at 03:04:29PM -0700, Kevin DuBois wrote:

> [1] Last I've heard about QtSG, DefaultDisplayBufferCompositor is the place
> that needs replacement. Correct me if things have changed :)

QtSG provides its own rendering threads, which are difficult to manage
externally and to disentangle from the actual rendering operation. So,
as a first step at least, the plan is to override the mc::Compositor
completely.

We will evaluate how well this works, and whether it's worth
trying to make QtSG cooperate with the existing MultithreadCompositor
and DisplayBufferCompositor infrastructure (which I would prefer).

Thanks,
Alexandros



Thoughts on the limits of public headers

2014-09-29 Thread Alexandros Frantzis
Hi all,

we recently hid a lot of our public headers, to avoid frequent ABI
breaks.  As we start to gradually re-expose useful headers I think it
will be good to have broad guidelines about the limits of what we
expose.

One point I feel needs further discussion is the level (if any) at which
we stop re-exposing abstractions that are needed only by the default
implementations of higher abstractions.

For example, we have the mc::Compositor class, the default
implementation of which uses a mc::DisplayBufferCompositor(Factory), the
default implementation of which uses a mc::Renderer(Factory). Each level
we expose adds flexibility to the framework, but also increases our
public API/ABI. Furthermore, the further down we move in the abstraction
layers, the more potential for instability the API/ABI has, since it's
affected more by surrounding implementation decisions (which tend to be
more unstable) than by higher-level architectural decisions (which tend
to be more stable).

The level of exposure should be evaluated on a case by case basis, but
here are some aspects that I think the answer depends on:

1. Whether the exposed abstractions represent core concepts

2. How much we want to encourage reuse of a particular default
   implementation, by exposing the interfaces of its re-implementable
   dependencies

3. Related to (2): How much easier this makes the lives of developers
   trying to use the Mir framework

Thoughts?

Thanks,
Alexandros



Re: private stuff "temporary" exposed

2014-10-03 Thread Alexandros Frantzis
On Fri, Oct 03, 2014 at 12:23:40PM +0100, Alan Griffiths wrote:
> Some time ago we introduced a "temporary" facility to hook into the
> protobuf messaging protocol. This is exemplified by the
> DemoPrivateProtobuf acceptance tests. This support was intended to allow
> faster prototyping of some features that would eventually migrate into Mir.
> 
> Once these features were stable and integrated into Mir we intended to
> remove this support (and these files have always carried a warning not
> to use these APIs "for real").
> 
> As part of our stabilization of API and ABI efforts we should be trying
> to remove this support from the API. In particular, Mir cannot ensure
> compatibility across different Mir servers for any toolkits that depend
> on such features.
> 
> However, things have not worked out as intended. These APIs are in use
> for an (apparently stable) unity.protobuf.UnityService service copied
> between platform-api, unity-mir and qtmir.

My understanding is that Unity is now using DBus for clipboard
support. QtMir doesn't use protobuf any more, and the unity-mir project
is obsolete. There are still references to unity.protobuf.UnityService
in platform-api, but I don't think the code path is actually useful.

If the above is correct, then no one is using our temporary protobuf
facility any more. Unless we think that it may still be useful as a
prototyping facility, we can remove it.

> It would clearly be better to have a single definition of this service
> in Mir than requiring all downstream projects to replicate it for
> compatibility.

As discussed above, this may be a non-issue, but, in case we do need to
keep the service, I don't think we should involve Mir. Currently, the
UnityService API is specific to the Unity shell and Mir shouldn't have
any knowledge of shell specific constructs. If we think the
functionality is useful in general we should promote it to a proper Mir
API.

Thanks,
Alexandros



Reworking our support for platform specific functions

2014-10-13 Thread Alexandros Frantzis
Hi all,

the need has arisen to add another platform-specific function, in order
to resolve bug LP: #1379266. This addition triggered some concerns about
the way we implement such functions.

The problem
-----------

In our current implementation, adding a platform-specific function
causes platform-specific details to leak into many layers from client to
server. For example, for the existing drm_auth_magic() call we have:

Client API: mir_connection_drm_auth_magic(int magic, ...)
Client: MirConnection::drm_auth_magic(...)
Protobuf  : rpc drm_auth_magic(DRMMagic) returns (DRMAuthMagicStatus)
Server: SessionMediator::drm_auth_magic(), mg::DRMAuthenticator

This approach has a few drawbacks:

1. Platform-specific details leak into code that should be platform
   agnostic.
2. It can't support platform-specific functions for 3rd party platforms.
3. It's burdensome to add new platform-specific functions (I guess
   this is a problem for our API functions in general).

Proposed solution
-----------------

To solve the issues mentioned above we need to provide a way to make an
opaque (as far as the rest of Mir is concerned) request to the platform:

Client API: mir_connection_platform_operation(type, DataAndFds request, ...);
Client: MirConnection::platform_operation(type, ...)
Protobuf  : rpc platform_operation(type, DataAndFds) returns (DataAndFds)
Server: SessionMediator::platform_operation()
Platform  : DataAndFds PlatformIpcOperations::platform_operation(type, DataAndFds)

Each platform should expose a header with its operation types and
expected parameters types. For mesa it could be something like:

enum MesaPlatformOperationType { drm_auth_magic, drm_get_auth_fd };
struct DRMAuthMagicRequest { int magic; };
struct DRMAuthMagicReply { int status; };
...

And a request would be:

DRMAuthMagicRequest request = { magic };

void platform_operation_callback(
MirConnection* connection, DataAndFds const* reply, void* context)
{
struct DRMAuthMagicReply const* magic_reply = reply->data;
...
}

mir_connection_platform_operation(
drm_auth_magic, request, platform_operation_callback);
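
On the server side, the platform would then be the only component that
interprets the payload. A possible dispatch sketch (the class name and
the authenticate_magic()/pack_reply() helpers are hypothetical; only
the overall shape follows the proposal):

#include <stdexcept>

DataAndFds PlatformIpcOperations::platform_operation(
    unsigned int type, DataAndFds const& request)
{
    switch (type)
    {
    case drm_auth_magic:
    {
        /* Reinterpret the opaque payload using the platform's header */
        auto const* req =
            reinterpret_cast<DRMAuthMagicRequest const*>(request.data);
        /* authenticate_magic()/pack_reply() are hypothetical helpers */
        DRMAuthMagicReply reply{authenticate_magic(req->magic)};
        return pack_reply(reply);
    }
    default:
        throw std::runtime_error("unsupported platform operation");
    }
}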

With this approach:

1. No platform-specific details leak into unrelated layers. Only the
   platform knows about these details and exposes them to other interested
   parties.

2+3. It's trivial for 3rd party platforms to expose platform-specific operations.

This proposal involves changes and ABI bumps to a few layers, so I would
appreciate some early feedback and discussion.

Thoughts?

Thanks,
Alexandros



Re: Reworking our support for platform specific functions

2014-10-14 Thread Alexandros Frantzis
On Tue, Oct 14, 2014 at 09:14:46AM +0100, Alan Griffiths wrote:
> On 14/10/14 08:14, Christopher James Halse Rogers wrote:
> >
> > We're talking about extensions here, but again we're talking about
> > throwing void* around. It's entirely possible to add a mechanism for
> > the platform (or other code) to register extra protobuf handlers.
>
> Not only is it possible, it has been done. We're currently talking of
> withdrawing that feature:
> 
>
> https://code.launchpad.net/~alan-griffiths/mir/privatize-PrivateProtobuf/+merge/237436

I'd rather we didn't go down the direct protobuf path. Protobuf is an
implementation detail which, IMO, would be a mistake to officially
expose outside our RPC layers. We could potentially add an abstraction
around the protobuf functionality we want to expose, but it remains to
be seen if such an effort is worth it.

Thanks,
Alexandros



Re: Reworking our support for platform specific functions

2014-10-14 Thread Alexandros Frantzis
On Tue, Oct 14, 2014 at 12:00:59PM +0100, Alan Griffiths wrote:
> On 13/10/14 12:37, Alexandros Frantzis wrote:

> >
> > Client API: mir_connection_platform_operation(type, DataAndFds request, ...);
> > Client: MirConnection::platform_operation(type, ...)
> > Protobuf  : rpc platform_operation(type, DataAndFds) returns (DataAndFds)
> > Server: SessionMediator::platform_operation()
> > Platform  : DataAndFds PlatformIpcOperations::platform_operation(type, DataAndFds)
> >
> > Each platform should expose a header with its operation types and
> > expected parameters types. For mesa it could be something like:
> >
> > enum MesaPlatformOperationType { drm_auth_magic, drm_get_auth_fd };
> > struct DRMAuthMagicRequest { int magic; };
> > struct DRMAuthMagicReply { int status; };
> > ...
> >
> > And a request would be:
> >
> > DRMAuthMagicRequest request = { magic };
> >
> > void platform_operation_callback(
> > MirConnection* connection, DataAndFds const* reply, void* context)
> > {
> > struct DRMAuthMagicReply const* magic_reply = reply->data;
> > ...
> > }
> >
> > mir_connection_platform_operation(
> > drm_auth_magic, request, platform_operation_callback);

> > Thoughts?
> 
> I like the general approach, but have concerns about a bit of the detail.

A caveat: the original email was meant mainly to illustrate the overall
approach, I didn't focus on the details much (e.g. names and types are
all tentative).

> 1. Is there any point in separating "type" from "data" and "fds"?
> Or should the various data structures combine all three?

We could avoid exposing any structure, apart from data vs fds (see
below).

> 2. In the client API would be good to use an opaque "PlatformMessage"
> type, rather than a DataAndFds struct.

We need to be able to separate the fds from the rest of the data, so
that our RPC layer can send the fds properly. That's what the
(tentative) name was trying to convey.

> E.g.
> 
> DRMAuthMagicReply const* magic_reply =
>     (DRMAuthMagicReply const*) mir_platform_get_data(reply);
> 
> 3. With a C API we have to be explicit about ownership. E.g. does the
> client have to release magic_reply or does it become invalid after the
> call?

We could go either way, weighing the cost of copying data (and dup()-ing
fds) vs the potential for memory/fd leaks. Let's be consistent with how
we handle other similar cases in the client API (do we have other such
cases?).

> 4. Do we need to address byte encoding, sizes and alignment of platform data?

I would say not in Mir. If a message has special needs, the sender
should take care to serialize/deserialize it properly.

Thanks,
Alexandros



Re: Talk more

2014-11-18 Thread Alexandros Frantzis
On Tue, Nov 18, 2014 at 02:29:52PM +, Alan Griffiths wrote:
> On 18/11/14 10:04, Daniel van Vugt wrote:
> > I'd be happy with weekly (which I do already with the USA), but
> > spanning all timezones with one meeting isn't feasible...
> 
> We have a daily "standup" with the USA, but whatever works for you.
> 
> I'm just coming online at 9UTC - so allowing for delays, glitches and
> urgent emails I'd suggest 9:15 or 9:30. The day doesn't matter to me.

How about having it at 09:30 UTC on Tuesdays, which is the same day as
the US-Australia meeting?

Thanks,
Alexandros



How to deal with client API precondition failures

2014-11-26 Thread Alexandros Frantzis
Hi all,

in a recent review the subject of how to deal with precondition failures
in the client API came up again. In discussions we had yesterday the
consensus was that we should abort the process. This has the benefit of
catching the error as early as possible, making debugging much easier.

The drawback versus a more forgiving approach is that some programs may
not be able to deal with such abrupt failures well. However, programs
that absolutely need to do some critical cleanup should handle such
failures anyway, regardless of whether they come from the Mir client
library or not.

This is not a new discussion, but we hadn't explicitly stated any
guidelines before, so we had not been actively encouraging such handling
of precondition failures during reviews.

So, the proposed guideline is: abort on precondition failures in client
API functions. This is best achieved with a mir_assert() or similar
function/macro that is always on, regardless of the build type.
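
A minimal sketch of what such an always-on check could look like (the
name mir_assert comes from the proposal above; the implementation
details are illustrative):

#include <cstdio>
#include <cstdlib>

/* Active in all build types, unlike assert(), which NDEBUG disables */
#define mir_assert(expr) \
    ((expr) ? static_cast<void>(0) \
            : (std::fprintf(stderr, "Mir precondition failed: %s (%s:%d)\n", \
                            #expr, __FILE__, __LINE__), \
               std::abort()))

/* Example use in a client API function: */
/*   mir_assert(connection != nullptr); */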

Thoughts?

Thanks,
Alexandros



Re: saving off display config

2015-01-20 Thread Alexandros Frantzis
On Tue, Jan 20, 2015 at 08:19:32AM -0600, Kevin Gunn wrote:
> hi all -
> Just catching up on some bug house cleaning & found an open question on
> this bug
> 
> https://bugs.launchpad.net/mir/+bug/1401916
> 
> Is there an architectural preference for where to save off display configs
> ? and reasons for the preference ?
> 
> br,kg

Hi,

I don't think Mir itself should be the one responsible for
saving/restoring display configurations. Mir should (and indeed does)
provide a mechanism for shells to react to hardware changes and set the
configuration according to their needs.

Each shell will want to make different decisions about configurations
and also save/restore any information in a shell specific manner. I
don't see a benefit in bringing Mir into the mix.

Thanks,
Alexandros



RFC: Move to C++14

2015-02-17 Thread Alexandros Frantzis
Hi all,

I think it's time to consider moving from C++11 to C++14 (at least the
C++14 features our compilers support). C++14 offers some useful
improvements over C++11, like:

* std::make_unique()
* standard literal suffixes for std::chrono (e.g. auto delay = 10ms;)
* std::shared_timed_mutex (aka RW mutex)
* generic lambdas
* lambda capture initializers (which, importantly, allow moving values into
  lambdas)
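
For a quick taste, here are the listed features in a few lines
(illustrative only, not Mir code):

#include <chrono>
#include <memory>
#include <shared_mutex>

int main()
{
    using namespace std::chrono_literals;

    auto delay = 10ms;                        // chrono literal suffixes
    auto value = std::make_unique<int>(42);   // std::make_unique

    std::shared_timed_mutex rw_mutex;         // RW mutex
    std::shared_lock<std::shared_timed_mutex> reader{rw_mutex};

    // Generic lambda with a capture initializer that moves the
    // unique_ptr into the lambda
    auto consume = [v = std::move(value)](auto const& d)
    {
        return *v + static_cast<int>(d.count());
    };

    return consume(delay);
}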

Moving to C++14 means that we will not be able to build Mir on a
pristine Ubuntu LTS 14.04 system. Note that, even now, Mir cannot be
fully built on the LTS because of changes in the Android interfaces. As
things currently stand, I believe that dropping support for 14.04 builds
in order to allow C++14 is a worthy trade-off.

In addition, not supporting 14.04 builds means that we will be able to
use all the new features of other libraries we depend on, e.g.
gmock/gtest.

It's also worth noting that our packages depend on g++-4.9 (which
supports the C++14 features we want).

I have published an MP introducing C++14 into our codebase:

https://code.launchpad.net/~afrantzis/mir/introduce-c++14/+merge/249988

Let me know what you think.

Thanks,
Alexandros



[RFC] Performance test framework

2015-06-19 Thread Alexandros Frantzis
Hello all,

there have recently been a few occasions where we wanted to experiment with
performance improvements, but lacked a way to easily measure their effect.
There have also been a few occasions where different implementations of the
(supposedly) same performance test came up with different results. In light of
these issues, it would be helpful to have a common performance test framework,
to make it easy to create and share performance test scenarios.

As our codebase is C++ it would be natural to provide such a framework
in C++, providing the maximum flexibility and customizability.

On the other hand, sharing C++ code is not convenient, especially if we
want more casual users to be able to try out the scenarios. As an
alternative, we could
provide this test framework as a python library, possibly backed by a glue C++
library as needed (note that for some scenarios a glue layer is not needed,
since we could use our demo servers and clients, or write new ones).

Here are some pros and cons of the solutions (pros of one are cons of the other
in this case):

C++:
 + Easy integration with code base
 + Customizability of server/client behavior

python (possibly backed by C++ glue layer):
 + Easy to write
 + Convenience in sharing test scenarios
 + Plethora of existing libraries we need (e.g. python libs for babeltrace,
   uinput, statistics)

This is how I imagine a performance test script could look in Python (the C++
version wouldn't be very different in terms of abstractions):

import time
from mir import PerformanceTest, Server, Client, Input

host = Server(reports=["input","compositor"])
nested = Server(host=host)
client = Client(server=nested)

test = PerformanceTest([host,nested,client])
test.start()

# Send an input event every 50ms
input = Input()
for i in range(100):
input.inject(...)
time.sleep(0.050)

test.stop()

trace = test.babeltrace()
... process trace and print results ...

[Note: This example is for illustration only. The specifics of the
performance framework API are not the main point of this post. We will have a
chance to discuss them further in the future, after the high-level decisions
have been made.]

The main drawback with the script based approach (vs a pure C++ approach) is
that it's not clear how to provide custom behavior for servers and, more
importantly, clients.

If we find that our performance tests need only a small collection of behaviors
we could make them available as configuration options:

Client(behavior="swap_on_input")

Alternatively, we could provide more fine grained customization points that run
python code (which will of course increase the complexity of the glue layer):

def on_input_received(...): do stuff
Client(on_input_received=on_input_received)

So, what I would like to hear from you is:

1. Your general preference for python vs C++ for the test scripts
2. Any particular performance tests that you would like to see implemented,
   so we can get a first idea of what kinds of custom behaviors we may need
3. Any other comments, of course!

Thank you,
Alexandros



Re: [RFC] Performance test framework

2015-06-19 Thread Alexandros Frantzis
On Fri, Jun 19, 2015 at 11:55:54AM +0100, Alan Griffiths wrote:
> On 19/06/15 11:52, Alan Griffiths wrote:
> > On 19/06/15 11:22, Alexandros Frantzis wrote:
> >> Hello all,
> >>
> >> there have recently been a few occasions where we wanted to experiment with
> >> performance improvements, but lacked a way to easily measure their effect.
> >> There have also been a few occasions where different implementations of the
> >> (supposedly) same performance test came up with different results. In light
> >> of these issues, it would be helpful to have a common performance test
> >> framework, to make it easy to create and share performance test scenarios.
> >
> > +1
> ...
> > I'd say try the python approach and see if we hit limits.
> 
> Is there any need to capture requirements? Or do we have a few "current"
> test scenarios to try automating?

I have a basic input-to-frame latency test in mind, and I would like to
hear about other scenarios that people had to actually measure in the
past, or think they will need to measure in the near future.



Re: The future of MirInputEvent

2015-06-26 Thread Alexandros Frantzis
On Thu, Jun 25, 2015 at 10:45:10AM -0700, Robert Carr wrote:
> Hi!

> I think the value of MirInputEvent is pretty limited.

Having a base MirInputEvent and deriving specialized input event types
seems like a natural model for the domain. However, it's problematic:
the interface shared by the types is small, so the benefits of
inheritance are mostly lost, and most code needs to know the exact
type it's working with.

That being said, input events are usually treated specially as a
group at the higher levels, so it's useful to have a way to check if an
event is an input event. This can be achieved without a base
MirInputEvent type by checking the type manually:

  if (type == mir_event_keyboard || type == mir_event_motion || ...)

However, I think it's worth having a convenience function:

  if (mir_event_is_input_event(event))

> I think it's possible to defend as correct, for example something may
> wish to access the time of events without knowing the event
> type...even then though this seems pretty marginal.

Writing a wrapper to provide this functionality is trivial, although not
elegant from a user's point of view, since users need to explicitly care
about the types.

> On the other hand there is no way to implement the conversion we
> want without casting in C, so I think we'd probably be better off
> removing it in the end...this would also enable us to keep MirEvent
> as a discriminated union...whereas as it stands that's become awkward
> with MirInputEvent becoming this like...fake type.

Perhaps we could reach a middle ground, trading compile-time type
safety for convenience, by introducing functions for the common
properties of input events without having an explicit MirInputEvent
type. The accessor functions would abort if passed a non-input event.

bool mir_event_is_input_event(MirEvent const*)
int64_t mir_input_event_timestamp(MirEvent const*)
MirInputDeviceId mir_input_event_device_id(MirEvent const*)
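
Usage would then look like this (a sketch using only the functions
proposed above, with MirEvent coming from the usual mir_toolkit
headers):

#include <stdint.h>

void handle_event(MirEvent const* event)
{
    if (!mir_event_is_input_event(event))
        return;

    /* Common input properties, without knowing the exact event type */
    int64_t const timestamp = mir_input_event_timestamp(event);
    MirInputDeviceId const device = mir_input_event_device_id(event);

    /* ... dispatch on the exact event type for anything further ... */
}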

Thanks,
Alexandros



[RFC] Release branches for unity-system-compositor

2015-07-21 Thread Alexandros Frantzis
Hi all,

in the Mir project we are following the standard practice of forking off
a new branch from trunk for each (minor) release. The benefits of this
approach are well known, but to summarize, the main points are:

  * Decouples trunk from the release process, i.e., we don't need
to temporarily freeze trunk to release from it.
  * Allows us to handle multiple versions more cleanly and effectively
(e.g., fix a bug in a particular released version).

For the reasons mentioned above, I propose that we move to the same
scheme for unity-system-compositor too. At the moment,
unity-system-compositor does not follow this model, instead having a
trunk and a single packaging/release branch tracking the latest released
version.

We can select any version scheme that suits us, but one recommendation
is that we bump the main version number (whichever we decide that to
be), and therefore create a new version branch, at least every time USC
is changed to use a newer server or client API/ABI.

In other words, if we use the minor version as the main version, we want
to guarantee that all releases 0.x.y will continue to work with the same
mir library ABIs that 0.x.0 was originally released against. This scheme
makes it straightforward to manage updates/fixes for released versions
as point releases.

Thanks,
Alexandros



Re: Mir's building. And Mir's not building.

2015-08-19 Thread Alexandros Frantzis
On Tue, Aug 18, 2015 at 05:47:19PM +0100, Alan Griffiths wrote:
> We're planning to try switching the clang builds to run on Vivid.

If we manage to fix the phone CI problem, which will then leave the
clang problem as the only CI blocker, I suggest that we disable clang
builds until the issue is resolved, so we can resume our normal CI
workflow as soon as possible. It's good to ensure that we have clang
support, but it's not essential and not worth blocking on.

Thanks,
Alexandros



Re: symbols.map stanza names

2015-08-28 Thread Alexandros Frantzis
On Fri, Aug 28, 2015 at 04:46:24PM +0100, Alan Griffiths wrote:
> The current approach to naming stanzas in the symbol maps leads to a
> potential for mistakes. For example, src/platform/symbols.map has the
> following stanzas:
> 
> MIRPLATFORM_9 {
> ...
> }
> 
> MIRPLATFORM_9.1 {
> ...
> } MIRPLATFORM_9;
> 
> It is far from obvious when adding a symbol whether it should be added
> to MIRPLATFORM_9.1 or to a new MIRPLATFORM_9.2. As it happens
> MIRPLATFORM_9.1 was created after 0.15 was branched so that is the
> "right one". But it isn't obvious: If MIRPLATFORM_9.1 had shipped in
> 0.15 then MIRPLATFORM_9.2 would be right.
> 
> I don't know of any reason why we name stanzas this way except "tradition".
> 
> What does the team think of using this instead?
> 
> MIRPLATFORM_9_new_symbols_from_0.16 {
> ...
> } MIRPLATFORM_9;
> 
> And after we branch release 0.16 it is clearer we should add:
> 
> MIRPLATFORM_9_new_symbols_from_0.17 {
> ...
> } MIRPLATFORM_9_new_symbols_from_0.16;
> 
> When the ABI breaks we consolidate as before.

+1 to including the release version in the stanza name.

As for the naming scheme I would propose the following variation:

MIRPLATFORM_9_symbols_from_0.15
MIRPLATFORM_9_symbols_from_0.16
...

and when we bump ABI and consolidate, let's say in 0.17:

MIRPLATFORM_10_symbols_from_0.17
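
For concreteness, after branching 0.16 the chain would then look like
this (version-script sketch, symbol lists elided):

MIRPLATFORM_9_symbols_from_0.15 {
...
}

MIRPLATFORM_9_symbols_from_0.16 {
...
} MIRPLATFORM_9_symbols_from_0.15;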

Thanks,
Alexandros



Re: Client display configuration notification

2015-09-30 Thread Alexandros Frantzis
On Wed, Sep 30, 2015 at 02:04:00PM +1000, Christopher James Halse Rogers wrote:
> Forked from review¹.
> 
> I think we're currently handling display configuration events incorrectly,
> or at least in a way that will be surprising to clients.

For a bit of background, the current scheme is:

1. User changes the hardware config (e.g. adds/removes a monitor)
2. Update base config
3.1 If base config was applied before then apply the new base config
3.2 If per-session config was applied, do nothing, expect the client
to respond to the configuration_changed event by setting a new session
config based on the updated base config.

The reason for "do nothing" in 3.2 is to avoid applying the base
configuration before the client has applied its own new client config,
in order to avoid extra flickering.

> When the hardware configuration changes we send the new configuration to all
> clients. However, if a client has specified a different display
> configuration then we *don't* change it. The upshot is that
> mir_connection_create_display_configuration() will *not* return the
> currently active display configuration.
>
> Furthermore, we clear the session's configuration cache, so if a session
> with custom configuration loses focus and then gains focus again it will
> *not* have its previous configuration applied.
>
> This seems like confusing behaviour, which becomes even more confusing when
> the shell gets to change the default display configuration.
> 
> I'm not entirely sure what our behaviour *should* be, though I think that we
> should at least:
> 
> a) When hardware configuration changes we should verify that any session-set
> config remains valid (for example: was the only display that session had
> enabled removed?), update the session-set config, and submit that to the
> client so that create_display_configuration() will report the *actual*
> display configuration.

Sounds reasonable, and probably easy to implement (a simple policy: all
added and removed outputs are marked as disabled). We just need to
ensure we don't introduce any unnecessary configuration changes, so
that we don't get extra flickering.

> b) Have a way for a client to revert to default display configuration.

Also reasonable, given that with (a) the client will no longer get the
base config.

Thanks,
Alexandros

-- 
Mir-devel mailing list
Mir-devel@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/mir-devel


Re: Should this be "core"?

2015-10-05 Thread Alexandros Frantzis
On Mon, Oct 05, 2015 at 11:33:44AM +0100, Alan Griffiths wrote:
> In debugging our cursor code and getting it under tests I found
> something surprising (to me). Vis:
> 
> std::shared_ptr<mi::CursorImages>
> mir::DefaultServerConfiguration::the_cursor_images()
> {
>     return cursor_images([this]() -> std::shared_ptr<mi::CursorImages>
>     {
>         auto xcursor_loader = std::make_shared<mi::XCursorLoader>();
>         if (has_default_cursor(*xcursor_loader))
>             return xcursor_loader;
>         else
>             return std::make_shared<mi::BuiltinCursorImages>();
>     });
> }
> 
> Which translates to "see if X cursors images are available, if so use
> them, otherwise use a builtin cursor". I don't think this is correct
> behaviour for *core* libmirserver - I think that should always and
> predictably use the builtin image. Something like the above makes
> perfect sense in example code, but I don't think we should hard wire
> this as the default.
> 
> Does anyone have an objection to changing the above to...
> 
> std::shared_ptr<mi::CursorImages>
> mir::DefaultServerConfiguration::the_cursor_images()
> {
>     return cursor_images([this]() -> std::shared_ptr<mi::CursorImages>
>     {
>         return std::make_shared<mi::BuiltinCursorImages>();
>     });
> }
> 
> ...and moving the X cursor support to libexampleserverconfig.a?

We are using the X cursor support as a convenient way to theme the
cursors. If we consider cursor theming to be core functionality on the
server side, then leveraging existing infrastructure is a reasonable
way to achieve it, and the XCursor code should remain as the default
implementation.

Otherwise, we could remove XCursor-based theming, but we should ensure
our built-in cursor images support all the predefined cursor types. In
our client API we are exposing a set of predefined cursor names, which
the BuiltinCursorImages implementation doesn't currently support.
Although we don't explicitly promise that these are always supported, it
would be surprising if our default implementation didn't provide them.
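
For reference, this is roughly how a client requests one of the
predefined cursors today (a sketch, assuming the current mir_toolkit
cursor API; exact header and function names may differ):

#include <mir_toolkit/mir_client_library.h>
#include <mir_toolkit/cursors.h>

/* Request a predefined cursor by name for a surface; 'surface' is
   assumed to be a valid MirSurface, error handling is omitted. */
void use_busy_cursor(MirSurface* surface)
{
    MirCursorConfiguration* conf =
        mir_cursor_configuration_from_name(mir_busy_cursor_name);
    mir_wait_for(mir_surface_configure_cursor(surface, conf));
    mir_cursor_configuration_destroy(conf);
}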

I think that the cursor buffer stream mechanism is enough to cover the
needs of clients by moving cursor theme support to the client/toolkit
side, but I don't feel I have enough information on the matter yet to
make a final decision.

-- 
Alexandros

-- 
Mir-devel mailing list
Mir-devel@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/mir-devel


Re: Client display configuration notification

2015-10-12 Thread Alexandros Frantzis
On Wed, Sep 30, 2015 at 12:50:14PM +0300, Alexandros Frantzis wrote:
> On Wed, Sep 30, 2015 at 02:04:00PM +1000, Christopher James Halse Rogers 
> wrote:
> > Forked from review¹.
> > 
> > I think we're currently handling display configuration events incorrectly,
> > or at least in a way that will be surprising to clients.
> 
> For a bit of background, the current scheme is:
> 
> 1. User changes the hardware config (e.g. adds/removes a monitor)
> 2. Update base config
> 3.1 If the base config was applied before, apply the new base config
> 3.2 If a per-session config was applied, do nothing; instead, expect
> the client to respond to the configuration_changed event by setting a
> new session config based on the updated base config.
> 
> The reason for "do nothing" in 3.2 is to avoid applying the base
> configuration before the client has applied its own new client config,
> in order to avoid extra flickering.
> 
> > When the hardware configuration changes we send the new configuration to all
> > clients. However, if a client has specified a different display
> > configuration then we *don't* change it. The upshot is that
> > mir_connection_create_display_configuration() will *not* return the
> > currently active display configuration.

A few additional thoughts and realizations after investigating LP
#1498021 ([1]) and pondering the current code some more.

mir_connection_create_display_configuration() was originally created
with a single purpose in mind: to allow clients to get a valid
configuration on which to base their own custom configurations. It was
not intended to give back either the current session config or the
active config. The only guarantee was that it accurately represented
the hardware state (connected monitors and available modes). The base
config was used for this purpose, although we could have used the more
basic configuration returned by the mg::Display class.

Until recently, MediatingDisplayChanger honored this purpose by sending
new configurations to clients only as a response to hardware changes.
This was enough (sans bugs) for the use cases we had until then, which
involved clients responding to configuration changes from the server.

However, since then we have made changes that try to turn the value
returned by mir_connection_create_display_configuration() into something
it wasn't designed to be, i.e., an accurate representation of the
current state. These changes, as Alan mentioned in a previous email,
have conflated the various types of display configurations we have
in Mir (base, active, session).

A few questions and my take on them:

1. Which configuration types and changes to them does a normal
   (non-modesetting) client need to know about?

None of the types are particularly useful.

Perhaps it would be useful for the client to know about the active
config (e.g. when another session with a custom config has the focus),
but I can't think of any interesting use cases for this. I am all ears,
though.

2. Which configuration types and changes to them does a
   modesetting client need to know about?

In order for a client to set a custom config it needs at least a config
that accurately represents the hardware state. The base config is useful
if we want to allow clients to take it into account when setting a
custom config, e.g., don't turn a screen on if it's off in the base
config.

Again, I don't see an interesting use case for following the active config.

3. Which configuration types and changes to them does the display
   settings app need to know about?

It needs to be able to get and set the current base configuration.


So, my take is that the only config that clients need to be notified
about is the base one. Clients also need to be able to set the session
and base configs (subject to permission restrictions):

// The same as the current mir_connection_create_display_configuration
// with a more accurate name and guaranteed to always return the base config,
// not just a valid hardware config
mir_connection_create_base_display_config() 

// Set base configuration, used only by the display settings app
mir_connection_apply_base_display_config()

// Set session configuration
mir_connection_apply_display_config()

The display config change notification callback will notify only about
base config changes (like it did before the recent changes).
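
As a hypothetical sketch, a modesetting client following base config
changes would then look roughly like this (the proposed function
doesn't exist yet, so its declaration is assumed here; the apply and
destroy calls keep their current shape):

#include <mir_toolkit/mir_client_library.h>

/* Proposed above, does not exist yet: */
MirDisplayConfiguration*
mir_connection_create_base_display_config(MirConnection* connection);

void derive_session_config(MirDisplayConfiguration* config); /* app-specific */

/* Registered with mir_connection_set_display_config_change_callback() */
void on_base_config_change(MirConnection* connection, void* context)
{
    (void)context; /* unused in this sketch */

    MirDisplayConfiguration* base =
        mir_connection_create_base_display_config(connection);

    derive_session_config(base); /* e.g., keep outputs that are off in
                                    the base config off */

    mir_connection_apply_display_config(connection, base);
    mir_display_config_destroy(base);
}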

If we decide we want active configs too, we need to differentiate
between the two kinds in notifications and add
mir_connection_create_active_display_config().

Thanks,
Alexandros

-- 
Mir-devel mailing list
Mir-devel@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/mir-devel


Re: Client display configuration notification

2015-10-12 Thread Alexandros Frantzis
On Mon, Oct 12, 2015 at 02:14:17PM +0100, Alan Griffiths wrote:
> On 12/10/15 12:53, Alexandros Frantzis wrote:
> > So, my take is that the only config that clients need to be notified
> > about is the base one. Clients also need to be able to set the session
> > and base configs (subject to permission restrictions):
> 
> I know the current scheme also has issues, but how would a settings
> application work with this approach (of not being able to read the
> current settings)?

The settings app only cares about the current base configuration, which
it will be able to get with mir_connection_create_base_display_config()
and set with mir_connection_apply_base_display_config().

A separate active config concept is not really interesting for a GUI
setting app because when the settings app has the focus (i.e., when the
user interacts with it) the active config will be the base config (since
the settings app won't set a session config). So, what is shown by the
settings app will be in sync with reality when it matters.

If we want the settings app to always be in sync with the active
config, even when another app with a session config has the focus, then
we indeed need a separate active-config change notification and a way
to get it. However, as soon as the user tries to interact with the
settings app, the config displayed by the app will change back to the
base config. This could be a bit confusing for users, but in any case
it could be added later.

For a command-line application that gets/sets the base config, being
able to report the active config is more reasonable, since no focus is
involved. However, I fear that it may also be misleading: users might
expect to be able to force a particular change to the active config,
but this is not possible when a session config is active (they can only
change the base config, and the focused session is then free to react
to that change however it wants, possibly ignoring it).

Having session configs (instead of a single global config) constitutes a
slight paradigm shift in the world of display configuration in Linux,
and it will take some time for both users and us to get used to it and
find a proper way to communicate the semantics. Having proper names for
functions and tools would be a good start (e.g.
mir_set_base_display_config for a command line tool instead of
mir_set_display_config).

Thanks,
Alexandros

-- 
Mir-devel mailing list
Mir-devel@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/mir-devel


Re: PSA: CI compile flags edition

2016-03-02 Thread Alexandros Frantzis
On Tue, Mar 01, 2016 at 05:08:47PM +1100, r...@ubuntu.com wrote:
> Hey all,
> 
> Pursuant to Alan's comment¹ and subsequent follow-ons, in the interests of
> rapid CI turnaround Jenkins now does regular CI runs with
> DEB_BUILD_OPTIONS=noopt. This currently means it does an -O0 build, and will
> disable LTO when that lands.
> 
> These changes don't apply to the pre-autolanding builds; they will continue
> to be run with the standard Debian optimisations (-O2) and with LTO (again,
> once that lands).
> 
> For those playing at home, the relevant parameter is $run_landing_checks on
> build-2-binpkg-mir if you've got any other checks that should be enabled
> prior to landing but not every CI run.
> 
> ¹: 
> https://code.launchpad.net/~raof/mir/fix-and-enable-lto/+merge/282133/comments/733096

Thanks Chris!

I noticed that after the LTO branch landed, the mir-ci jobs started
taking longer, which is not consistent with disabling optimizations and
LTO.

It turned out there was a misconfiguration in the build-mir job, causing
$run_landing_checks to always be true in all invocations of
build-2-binpkg-mir. I corrected the problem and it had the effect we
want on non-cross builds (e.g., see [1] vs [2]).

Unfortunately, the problem persists on cross builds (e.g., see [3]), due
to an sbuild peculiarity/bug: when cross building, sbuild overwrites
DEB_BUILD_OPTIONS (setting it to "nocheck"). See the Debian bug
report [4] for more info.

I will investigate a few potential workarounds tomorrow (unless someone
else gets to it first).

[1] 
https://mir-jenkins.ubuntu.com/job/build-2-binpkg-mir/306/arch=amd64,compiler=gcc,platform=mesa,release=xenial/consoleFull
[2] 
https://mir-jenkins.ubuntu.com/job/build-2-binpkg-mir/307/arch=amd64,compiler=gcc,platform=mesa,release=xenial/consoleFull
[3] 
https://mir-jenkins.ubuntu.com/job/build-2-binpkg-mir/307/arch=cross-armhf,compiler=gcc,platform=android,release=vivid+overlay/consoleFull
[4] https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=775539

-- 
Mir-devel mailing list
Mir-devel@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/mir-devel


Re: PSA: CI compile flags edition

2016-03-03 Thread Alexandros Frantzis
On Wed, Mar 02, 2016 at 07:58:44PM +0200, Alexandros Frantzis wrote:
> On Tue, Mar 01, 2016 at 05:08:47PM +1100, r...@ubuntu.com wrote:
> > Hey all,
> > 
> > Pursuant to Alan's comment¹ and subsequent follow-ons, in the interests of
> > rapid CI turnaround Jenkins now does regular CI runs with
> > DEB_BUILD_OPTIONS=noopt. This currently means it does an -O0 build, and will
> > disable LTO when that lands.
> > 
> > These changes don't apply to the pre-autolanding builds; they will continue
> > to be run with the standard Debian optimisations (-O2) and with LTO (again,
> > once that lands).
> > 
> > For those playing at home, the relevant parameter is $run_landing_checks on
> > build-2-binpkg-mir if you've got any other checks that should be enabled
> > prior to landing but not every CI run.
> > 
> > ¹: 
> > https://code.launchpad.net/~raof/mir/fix-and-enable-lto/+merge/282133/comments/733096
> 
> Thanks Chris!
> 
> I noticed that after the LTO branch landed, the mir-ci jobs started
> taking longer, which is not consistent with disabling optimizations and
> LTO.
> 
> It turned out there was a misconfiguration in the build-mir job, causing
> $run_landing_checks to always be true in all invocations of
> build-2-binpkg-mir. I corrected the problem and it had the effect we
> want on non-cross builds (e.g., see [1] vs [2]).
> 
> Unfortunately, the problem persists on cross builds (e.g., see [3]), due
> to an sbuild peculiarity/bug: when cross building, sbuild overwrites
> DEB_BUILD_OPTIONS (setting it to "nocheck"). See the Debian bug
> report [4] for more info.
> 
> I will investigate a few potential workarounds tomorrow (unless someone
> else gets to it first).
> 

Cross building is now fixed (see [1]) using a workaround in ~/.sbuildrc.
I have also added the fix to the preparation jobs, so it will be applied
automatically if we need to redeploy.

[1] 
https://mir-jenkins.ubuntu.com/job/build-2-binpkg-mir/313/arch=cross-armhf,compiler=gcc,platform=android,release=vivid+overlay/consoleFull

-- 
Mir-devel mailing list
Mir-devel@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/mir-devel


Removal of offscreen display support (--offscreen)

2016-07-06 Thread Alexandros Frantzis
Hi all,

for a few years now, Mir has provided the --offscreen option to render
to offscreen buffers instead of the real outputs (note that it still
needs proper GPU/GLESv2 support, and it doesn't work in all
environments). This option was implemented experimentally in order to
help with automated QA scenarios, but it was never adopted, and we
didn't get any feedback or improvement requests.
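
For reference, the option is passed like any other server option, e.g.
with one of the demo servers (the exact binary name may vary):

$ bin/mir_demo_server --offscreen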

Unless someone is currently using --offscreen, or has a valid use case
for it, I would like to remove it from our code base.

Thanks,
Alexandros

-- 
Mir-devel mailing list
Mir-devel@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/mir-devel