Re: [Qemu-devel] [PATCH 1/8] virtio-gpu/2d: add hardware spec include file

2014-09-29 Thread Dave Airlie
> Triggered by the framebuffer endian issues we have with stdvga I've
> started to check where we stand with virtio-gpu and whether we have to
> fix something in the virtio protocol before setting it in stone with the
> upstream merge.

Let me start by saying it's not that I don't care about endianness, but
it's a mess I had hoped to avoid until someone else cared more about it.

We haven't even managed to get endianness fixed for real 3D GPU hardware yet.

We sort of have decent endianness support for the llvmpipe sw driver now, after
I spent a week trawling patches and fixing up the results.

So before we try fixing things, we probably need to design something
that defines where all the swapping happens, and what formats the
virgl "hw" device supports. The main problem is getting an efficient
solution that avoids double swapping of the major items like texture
and vertex data in as many scenarios as we can.

Currently the virgl "hw" supports little endian defined formats, as
per the gallium interface, i.e. B8G8R8A8 means blue/green/red/alpha;

http://gallium.readthedocs.org/en/latest/format.html
is the documentation.
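
(As an illustration of that convention, not code from any of these patches:
the channel names run from least to most significant byte of the 32-bit
word, so on a little-endian host B8G8R8A8 puts blue in byte 0.)

#include <stdint.h>
#include <stdio.h>

/* Pack one B8G8R8A8_UNORM pixel: blue in the least significant byte
 * of the 32-bit word, alpha in the most significant. */
static uint32_t pack_b8g8r8a8(uint8_t b, uint8_t g, uint8_t r, uint8_t a)
{
    return (uint32_t)b | (uint32_t)g << 8 | (uint32_t)r << 16 | (uint32_t)a << 24;
}

int main(void)
{
    uint32_t px = pack_b8g8r8a8(0x11, 0x22, 0x33, 0x44);
    const uint8_t *bytes = (const uint8_t *)&px;

    /* Little-endian host: prints "11 22 33 44", i.e. b g r a in memory.
     * Big-endian host: the same word lands as "44 33 22 11", which is
     * exactly where the swapping questions above come from. */
    printf("%02x %02x %02x %02x\n", bytes[0], bytes[1], bytes[2], bytes[3]);
    return 0;
}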

>
>> +VIRTIO_GPU_FORMAT_B5G5R5A1_UNORM = 5,
>> +VIRTIO_GPU_FORMAT_B5G6R5_UNORM   = 7,
>
> Do we really wanna go down this route?  I'm inclined to just zap 16bpp
> support.

We have to support 16bpp as an OpenGL format no matter what, and this
is why endianness sucks: we have lots of strange-ass formats we need
to send over the wire that have no nicely defined endianness, like Z24S8.

>
>> +VIRTIO_GPU_FORMAT_R8_UNORM       = 64,
>
> What is this btw?  Looks like gray or alpha, but why "R8" not "G8" or
> "A8" then?  Why the gap in the enumeration?

Red 8. The enumeration is taken from the gallium formats list, rather than
reinventing our own. Most modern GPU hardware doesn't really have
A8 texture support, as it was deprecated in newer OpenGL.
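
(For illustration only, not from the thread: an A8-style texture can be
built on top of R8 with the standard GL texture swizzle, assuming GL 3.3
/ ARB_texture_swizzle headers are available.)

#include <GL/gl.h>
#include <GL/glext.h>

/* Make an R8 texture behave like legacy A8: RGB reads as zero and
 * alpha is sourced from the red channel. */
static void emulate_a8_with_r8(GLuint tex)
{
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_R, GL_ZERO);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_G, GL_ZERO);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_B, GL_ZERO);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_A, GL_RED);
}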

>
> What about 3d mode?  We are passing blobs between virglrenderer and
> guest driver:  capabilities and gallium 3d command stream.  What happens
> to them in case host and guest endianness don't match?  I think at least
> the capabilities have 32bit values which need to be swapped.  Dunno
> about the gallium commands ...

For 3D we probably need to define the gallium command streams to be
little endian; the big problem, however, is the data that is stored
inside objects: texture and vertex data, constants, indices etc. How
do we decide to swap these? When do we swap things: on the DMA
transfers to the host, or do we just swap the formats on the host
side, etc.?

I probably need to spend some time working this out with BenH, but
I'm not really sure how we can avoid backing ourselves into a large
inefficient hole at some point.

Dave.



Re: [Qemu-devel] [PATCH 1/8] virtio-gpu/2d: add hardware spec include file

2014-10-02 Thread Dave Airlie
On 30 September 2014 17:55, Gerd Hoffmann  wrote:
>
> On Di, 2014-09-30 at 10:27 +1000, Dave Airlie wrote:
>> > Triggered by the framebuffer endian issues we have with stdvga I've
>> > started to check where we stand with virtio-gpu and whether we have to
>> > fix something in the virtio protocol before setting it in stone with the
>> > upstream merge.
>>
>> Let me start by saying it's not that I don't care about endianness, but
>> it's a mess I had hoped to avoid until someone else cared more about it.
>
> Understood.  It's a big mess indeed.
>
>> So before we try fixing things, we probably need to design something
>> that defines where all the swapping happens, and what formats the
>> virgl "hw" device supports.
>
> 2D case is easy.  Everything is little endian.  kernel driver / qemu are
> doing the byteswaps for the structs sent over the control pipe.
>
> Problem with 3D is that both qemu and kernel driver are passing through
> data where they don't even know what is inside, so they can't do the
> byteswapping.
>
>> The main problem is getting an efficient solution that avoids double
>> swapping of the major items like texture and vertex data in as many
>> scenarios as we can.
>
> Yes.
>
>>
>> Currently the virgl "hw" supports little endian defined formats, as
>> per the gallium interface, i.e. B8G8R8A8 means blue/green/red/alpha;
>>
>> http://gallium.readthedocs.org/en/latest/format.html
>> is the documentation.
>
> Thanks.
>
>> >> +VIRTIO_GPU_FORMAT_B5G5R5A1_UNORM = 5,
>> >> +VIRTIO_GPU_FORMAT_B5G6R5_UNORM   = 7,
>> >
>> > Do we really wanna go down this route?  I'm inclined to just zap 16bpp
>> > support.
>>
>> We have to support 16bpp as an OpenGL format no matter what, and this
>> is why endianness sucks: we have lots of strange-ass formats we need
>> to send over the wire that have no nicely defined endianness, like Z24S8.
>
> Ok.  But for 2D we can just not support it, right?

We can. I expect some pushback at some point; people still want to test
with 16bpp for other areas, and it would be nice to know they can. But I
don't really care about it personally. I just thought we should provide
at least a basic number of working bpps (8, 16, 32).


>> > What about 3d mode?  We are passing blobs between virglrenderer and
>> > guest driver:  capabilities and gallium 3d command stream.  What happens
>> > to them in case host and guest endianness don't match?  I think at least
>> > the capabilities have 32bit values which need to be swapped.  Dunno
>> > about the gallium commands ...
>>
>> For 3D we probably need to define the gallium command streams to be
>> little endian; the big problem, however, is the data that is stored
>> inside objects: texture and vertex data, constants, indices etc. How
>> do we decide to swap these? When do we swap things: on the DMA
>> transfers to the host, or do we just swap the formats on the host
>> side, etc.?
>>
>> I probably need to spend some time working this out with BenH, but
>> I'm not really sure how we can avoid backing ourselves into a large
>> inefficient hole at some point.
>
> I surely don't wanna go down that route, and I think it is reasonable to
> just not support 3D/virgl mode if we would have to swap data.

I think initially not supporting 3D mode is the way to go until we can work it out.

>
>
> So, we could define *two* virgl feature bits.  One for little endian,
> one for big endian.  The endianness applies to the gallium command stream
> and to gallium formats using integers in host endianness.
>
> On the host side we'll go set the feature bit matching host endianness.
> qemu handles the virtqueue command struct swapping, and virglrenderer
> should only see native endian.
>
> On the guest side we'll look for the feature bit matching guest
> endianness, and if it isn't set due to guest + host having different
> byte order you'll get 2D support only.
>
> The endianness negotiation is a bit iffy, but that way it is possible to
> have virgl on bigendian without swapping everything twice.
>
>
> The other reasonable way would be to simply define virgl being little
> endian.  Bigendian guests / hosts would either simply not support 3D
> then, or implement swapping.  But in the bigendian-guest on
> bigendian-host case we'll swap everything twice then ...
>

I think we should probably move a few more formats from the 3D side
into the 2D side, so the guests can just pick the LE format they
require.

http://cgit.freedesktop.org/mesa/mesa/tree/src/gallium/include/pipe/p_format.h#n354

is what gallium currently does, and we could just provide XRGB and XBGR
formats in both byte orders and have the guest pick the one it wants to use.

The 2D pixman code would need updating to provide 2D support for these
formats as well.

I suspect I could add an endian cap for the 3D bits that I could pass
through from guest to host.
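
(A minimal sketch of how the two-feature-bit negotiation described above
could look; the VIRTIO_GPU_F_VIRGL_LE/_BE names are hypothetical, not
from the posted header.)

#include <stdbool.h>
#include <stdint.h>

#define VIRTIO_GPU_F_VIRGL_LE  (1u << 0)   /* hypothetical */
#define VIRTIO_GPU_F_VIRGL_BE  (1u << 1)   /* hypothetical */

/* Host side: offer only the bit matching the host byte order, so
 * virglrenderer only ever sees native-endian data. */
static uint32_t host_virgl_features(void)
{
#if defined(__BYTE_ORDER__) && __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
    return VIRTIO_GPU_F_VIRGL_BE;
#else
    return VIRTIO_GPU_F_VIRGL_LE;
#endif
}

/* Guest side: negotiate 3D only when the host's byte order matches;
 * otherwise fall back to 2D-only and avoid any double swapping. */
static bool guest_accepts_3d(uint32_t offered, bool guest_is_be)
{
    return offered & (guest_is_be ? VIRTIO_GPU_F_VIRGL_BE
                                  : VIRTIO_GPU_F_VIRGL_LE);
}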

How do you test guests with big endian? Isn't it really slow?

Dave.



Re: [Qemu-devel] [PATCH 1/8] virtio-gpu/2d: add hardware spec include file

2014-10-15 Thread Dave Airlie
On 15 October 2014 20:05, Gerd Hoffmann  wrote:
>   Hi,
>
>> +/* VIRTIO_GPU_RESP_OK_DISPLAY_INFO */
>> +#define VIRTIO_GPU_MAX_SCANOUTS 16
>> +struct virtio_gpu_resp_display_info {
>> +    struct virtio_gpu_ctrl_hdr hdr;
>> +    struct virtio_gpu_display_one {
>> +        uint32_t enabled;
>> +        uint32_t width;
>> +        uint32_t height;
>> +        uint32_t x;
>> +        uint32_t y;
>> +        uint32_t flags;
>> +    } pmodes[VIRTIO_GPU_MAX_SCANOUTS];
>
> One more thing: I think it would be a good idea to add the display
> resolution here.  We're starting to see highres displays on desktops,
> and the guest should know whether the host display runs at 100 or 300 dpi ...
>
> What do you think?

Passing the host display size into the guest isn't going to help unless
you are running in full screen.

I suppose the guest could adjust the numbers, but what happens with
viewers if I connect on a hidpi screen and then move to a lodpi one?

I think this would require a lot more thought about how it would work in
some use-cases.

That said, reserving two 32-bit fields for possible screen measurements
might future-proof things a little, though most OSes use EDID to detect
this sort of thing, so we might not find a great use for it later.
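
(For the sake of argument, reserving those two fields could look like
this; the *_mm names are hypothetical, not part of the posted header.)

#include <stdint.h>

struct virtio_gpu_display_one {
    uint32_t enabled;
    uint32_t width;
    uint32_t height;
    uint32_t x;
    uint32_t y;
    uint32_t flags;
    uint32_t width_mm;   /* hypothetical: physical width, 0 = unknown */
    uint32_t height_mm;  /* hypothetical: physical height, 0 = unknown */
};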

Dave.



Re: [Qemu-devel] [PATCH 1/8] virtio-gpu/2d: add hardware spec include file

2014-10-15 Thread Dave Airlie
>
> Lets try to get away with 32bpp only in 2d mode then.
>
> bochsdrm likewise supports 32bpp only and I have yet to see a request
> for 16bpp or even 8bpp support.
>
>> I think we should probably move a few more formats from the 3D side
>> into the 2D side, so the guests can just pick the LE format they
>> require.
>>
>> http://cgit.freedesktop.org/mesa/mesa/tree/src/gallium/include/pipe/p_format.h#n354
>>
>> is what gallium currently does, and we could just provide XRGB and XBGR
>> formats in both byte orders and have the guest pick the one it wants
>> to use.
>
>    PIPE_FORMAT_R8G8B8A8_UNORM  = 67,
>    PIPE_FORMAT_X8B8G8R8_UNORM  = 68,
>
>    PIPE_FORMAT_A8B8G8R8_UNORM  = 121,
>    PIPE_FORMAT_R8G8B8X8_UNORM  = 134,
>
> With the last two being in a /* TODO: re-order these */ block.
> How stable are these numbers?

In theory the mesa/gallium numbers aren't stable, though I've never
seen them change yet.

If they diverge in the future I'll just provide a remapping table
inside the guest driver.

So it should be fine to expose these formats for 2D use.
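
(Such a table would be trivial; a sketch, with a made-up helper and the
current p_format.h values quoted above. The point is the indirection:
the virtio side stays frozen even if mesa renumbers.)

#include <stddef.h>
#include <stdint.h>

struct format_remap {
    uint32_t virtio_format;   /* frozen by the virtio-gpu spec */
    uint32_t gallium_format;  /* whatever mesa currently uses */
};

/* Identity today; only the virtio column is guaranteed stable. */
static const struct format_remap remap_table[] = {
    { 67,  67  },   /* R8G8B8A8_UNORM */
    { 68,  68  },   /* X8B8G8R8_UNORM */
    { 121, 121 },   /* A8B8G8R8_UNORM */
    { 134, 134 },   /* R8G8B8X8_UNORM */
};

static uint32_t virtio_to_gallium(uint32_t virtio_format)
{
    for (size_t i = 0; i < sizeof(remap_table) / sizeof(remap_table[0]); i++) {
        if (remap_table[i].virtio_format == virtio_format) {
            return remap_table[i].gallium_format;
        }
    }
    return 0; /* PIPE_FORMAT_NONE */
}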

> Initially this doesn't matter much as the host will support only one
> endianness anyway.
>
> But in case we get the byteswapping working reasonably well some day and
> the host supports both be and le virgl, we'll know that way which
> endianness the guest is using.
>
>> How do you test guests with big endian? Isn't it really slow?
>
> emulated pseries machine with fedora ppc64.  Yes, it is slow.  Building
> a kernel with virtio-gpu driver takes a day or so.

I spent a little while trying to get a ppc64 F20 install to complete,
just using the F20 qemu ppc64 system package, but I hit a bug I think
is related to missing SIMD instructions, so I'm not sure how best to
move forward with getting a test platform here.

Dave.



Re: [Qemu-devel] [virtio-dev] [PATCH 2/2] virtio-gpu/2d: add docs/specs/virtio-gpu.txt

2014-09-12 Thread Dave Airlie
>> Can the host refuse due to lack of resources?
>
> Yes.  virtgpu_ctrl_hdr.type in the response will be set to
> VIRTGPU_RESP_ERR_* then.  Current implementation does that only on
> malloc() failure, there is no accounting (yet) to limit the amount of
> memory the guest is allowed to allocate.

We probably do need to work out some sort of accounting system; it can
probably reliably only be a total value of resources, since we've no
idea whether the host driver will store them in VRAM or main memory.
Quite how to fail gracefully is a question; we'd probably need to
report to the guest which context did the allocation and see if we can
destroy it.
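
(A minimal sketch of total-value accounting of that sort; the limit and
the hook point are invented, and a real version would also need the
per-context reporting mentioned above.)

#include <stdbool.h>
#include <stdint.h>

#define HOSTMEM_LIMIT (256ull * 1024 * 1024)   /* hypothetical cap */

static uint64_t hostmem_used;

/* Called before honouring a resource-create command; on failure the
 * response type would be one of the VIRTGPU_RESP_ERR_* values. */
static bool resource_account(uint32_t width, uint32_t height, uint32_t bpp)
{
    uint64_t size = (uint64_t)width * height * (bpp / 8);

    /* Only a running total is possible here: whether the host driver
     * places the resource in VRAM or main memory is invisible to us. */
    if (hostmem_used + size > HOSTMEM_LIMIT) {
        return false;
    }
    hostmem_used += size;
    return true;
}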

>> > +
>> > +VIRTGPU_CMD_RESOURCE_INVAL_BACKING:
>> > +  Command: struct virtgpu_resource_inval_backing
>>
>> Why is it called INVAL_BACKING instead of DETACH_BACKING?  "Detach" is
>> logical since there is also an "attach" command.
>
> No particular reason I think.  Dave?
>

No reason I can remember. I think I was thinking of having separate
inval and detach at one point, but it didn't really make any sense, so
renaming to detach is fine with me.

Dave.



Re: [Qemu-devel] [virtio-dev] [PATCH 0/2] virtio-gpu: hardware specification

2014-09-12 Thread Dave Airlie
On 12 September 2014 01:09, Gerd Hoffmann  wrote:
>   Hi folks,
>
> Let's kick off the virtio-gpu review process, starting with the virtio
> protocol.
>
> This is a tiny patch series for qemu.  Patch #1 carries the header file
> describing the virtual hardware: config space, command structs being
> sent over the rings, defines etc.  Patch #2 adds a text file describing
> virtio-gpu to docs/specs/.  It covers 2D support only for now.
>
> For anybody who wants to dig a bit deeper:  Here are the branches for
> qemu and the linux kernel:
>
>   https://www.kraxel.org/cgit/qemu/log/?h=rebase/vga-wip
>   https://www.kraxel.org/cgit/linux/log/?h=virtio-gpu-rebase
>
> The qemu patches are in a reasonable state.  The linux kernel patches
> are not cleaned up yet, you probably want to look at the final tree
> only.
>
> Work has been done by Dave Airlie  and me.  The
> authorship info in git got lost in the qemu patch cleanup process
> though.  Suggestions how to handle that best?  Simply add both mine
> and Dave's signed-off-by to all virtio-gpu qemu patches?  Is that fine
> with you Dave?  Anyone has better ideas?

I don't mind just adding a line in the commit msgs with Authors in it,
along with both signoffs.

Dave.



Re: [Qemu-devel] [virtio-dev] Re: [PATCH 2/2] virtio-gpu/2d: add docs/specs/virtio-gpu.txt

2014-09-14 Thread Dave Airlie
On 14 September 2014 19:16, Michael S. Tsirkin  wrote:
> On Thu, Sep 11, 2014 at 05:09:33PM +0200, Gerd Hoffmann wrote:
>> Signed-off-by: Gerd Hoffmann 
>> ---
>>  docs/specs/virtio-gpu.txt | 165 ++
>>  1 file changed, 165 insertions(+)
>>  create mode 100644 docs/specs/virtio-gpu.txt
>
> Please don't put this hardware spec in QEMU.
> Contribute it to OASIS Virtio TC instead.
> If you need help with that, please let me know.

I think we'd be premature in putting this anywhere that isn't
alongside the code until we've addressed at least any resource
allocation or possible security concerns.

Dave.



Re: [Qemu-devel] multihead & multiseat in qemu

2014-03-21 Thread Dave Airlie
On Fri, Mar 21, 2014 at 11:27 PM, Gerd Hoffmann  wrote:
>   Hi,
>
> I'm thinking about how to handle multihead and multiseat in qemu best.
>
> On multihead:  Mouse in virtual machines works best with absolute
> coordinates, and the way this is done today is to assign a (virtual) usb
> tablet to the guest.  With multihead this becomes a bit difficult.
> Today we try to calculate the coordinates for the tablet so that they cover
> all displays, which has a number of drawbacks.  I think it would be
> better to operate more like a touchscreen, i.e. have one touch input
> device for each monitor.  For that to work the guest must know which
> touch device belongs to which monitor.

Yeah I think I'm in agreement with this, one touch device per monitor,

> On multiseat:  Very similar problem here (that's why both issues are in
> one mail):  The guest needs to know which devices belong to which seat.
>
> Qemu needs the grouping/assignment information too, to get the input
> routing right (i.e. input from this window needs to go to that virtual
> input device etc).  Doing the configuration twice (once for qemu, once
> for the guest) and making sure they actually match would be annoying
> though.  So I think we should configure qemu only, then pass that
> information to the guest somehow.
>
> Question is how to do that best?

The GNOME guys have been working on auto config of touch devices;
I need to find who is doing that work and what they are triggering on.
Since X just provides the devices currently, I think we might need some
agent in the guest for this.

>
> I'd like to have some way to assign tags such as seat=foo or head=1 to
> devices.  Preferably in some way which can easily picked up with udev
> rules, so it is easily usable by system-logind and Xorg server.

The only current example of seat autoconfiguration is for the usb hubs
from displaylink, I think, which have video/audio/ethernet/usb behind a
hub that udev detects as being part of a seat. I'd need to look up the
specifics.

> We have virtio devices (virtio-gpu for example).  For these it is easy,
> we can put something into the virtio protocol, and the guest driver can
> create a sysfs file where udev/systemd can pick it up.
>
> We have pci devices (cirrus for example).  One idea for them would be to
> create a vendor-specific pci capability for tags.  Probably needs some
> small utility to read them, or kernel driver support to export them via
> sysfs.
>
> We have usb devices (kbd/mouse/tablet).  We could put something into the
> string table, or have some vendor-specific descriptor.  Same problem
> here, not easy accessible, needs a tool or kernel support.
>
> Comments?  Better ideas?  Other suggestions?

It does seem like tagging the devices somehow would be better than
providing a seat device. We could in theory have a pci and usb
controller per seat and have devices move around between them, which
would be like what we do for the real hw; however, per-device tags do
look like they might be nicer in the long run.
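
(For the virtio case, the guest-driver half of such a tag could be as
small as a sysfs attribute for udev to match on. A kernel-style sketch
with invented names; a real driver would read the string from the
device's virtio config space rather than hardcoding it.)

#include <linux/device.h>

static ssize_t seat_tag_show(struct device *dev,
                             struct device_attribute *attr, char *buf)
{
    /* Placeholder value; would come from the virtio config space. */
    return sprintf(buf, "%s\n", "seat-foo");
}
static DEVICE_ATTR_RO(seat_tag);

A udev rule could then key on ATTR{seat_tag} to assign the device to the
right seat.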

Dave.



[Qemu-devel] updated virtio-gpu code

2014-03-24 Thread Dave Airlie
Hey,

I've pushed a new version of the unaccelerated virtio-gpu code to my repo

git://git.freedesktop.org/~airlied/qemu virtio-gpu

this is Gerd's vga-wip branch, with the virtgpu_hw file moved, the event
queue removed, and a config space added with events_read and
events_clear u32 fields.

I've also pushed the changes to the kms driver to use this,
http://cgit.freedesktop.org/~airlied/linux/log/?h=virtio-vga-3d

Gerd, I've also dropped my experimental config space hacks and pushed
the two pci/vga fixes into that branch as well.

Just out of interest, with sdl and remote-viewer I seem to get 2
displays, one for the VGA time and a separate one once the driver has
loaded; any ideas why?

Dave.



Re: [Qemu-devel] updated virtio-gpu code

2014-03-25 Thread Dave Airlie
>
> Just out of interest, with sdl and remote-viewer I seem to get 2
> displays, one for the VGA time and a separate one once the driver has
> loaded; any ideas why?

Ah this seems to be an artefact of my libvirt xml which demands I add
a non-virtio vga device, thanks strict parser!

Dave.



Re: [Qemu-devel] [PATCH 7/8] virtio-vga: v1

2014-01-07 Thread Dave Airlie
On Fri, Dec 6, 2013 at 6:58 PM, Dave Airlie  wrote:
> On Fri, Dec 6, 2013 at 6:24 PM, Gerd Hoffmann  wrote:
>>   Hi,
>>
>>> Now the advice given was to have virtio-vga wrap virtio-gpu-base but
>>> from what I can see it really can't. Since it needs to act and look
>>> like a PCI device
>>
>> Oops, yes.  VirtIOPCIProxy wasn't on my radar.  That makes things a bit
>> more messy.  Can you just push what you have now somewhere?  I'll take a
>> closer look next week.
>
> http://cgit.freedesktop.org/~airlied/qemu/log/?h=virtio-gpu-inherit
>
> Well I didn't really get anything working, but the top commit in that
> branch was where I was on my last random fail.
>
> I think another object is probably required, or making the base one
> not abstract.
>
Hi Gerd,

just repinging on this, not sure if you can see a solution that works
with VirtIOPCIProxy that avoids wrapping.

Dave.



Re: [Qemu-devel] input, virtio-vga & multihead -- current state

2014-01-29 Thread Dave Airlie
On Thu, Jan 30, 2014 at 2:28 AM, Gerd Hoffmann  wrote:
>>  * Didn't come very far in testing.  Guest kernel just says:
>>[   50.712897] [drm] got -2147483647 outputs
>>I've used
>>http://cgit.freedesktop.org/~airlied/linux/log/?h=virtio-vga
>
> Switched to virtio-vga-3d branch -- that is working better.  Handover
> from vesafb to virtio-vga drm driver doesn't work though ...
>

Excellent. I'm a bit snowed under in non-virgl work at the moment,
hoping to pull myself back into the virt world next week.

This looks good to me, and yes, virtio-vga-3d is the branch; the other
one I probably need to kill with fire. I'll apply your patch to it to
fix the handover.

I'll try and get this stuff running here to play around with.

Dave.



[Qemu-devel] gpu and console chicken and egg

2013-12-03 Thread Dave Airlie
So I've hit a bit of an init ordering issue that I'm not sure how best to solve.

Just some background:
In order for the virt GPU and the UI layer (SDL or GTK etc) to
interact properly over OpenGL use, I have created an OpenGL provider
in the console, and the UI layer can register callbacks for a single
GL provider (only one makes sense really) when it starts up. This is
mainly to be used for context management and swap buffers management.

Now in the virtio GPU I was going to use a virtio feature to say
whether the qemu hw can support the 3D renderer, dependent on whether
it was linked with the virgl renderer and whether the current UI was
GL capable.

I also have the virtio gpu code checking in its update_display
callback whether the first console has acquired a GL backend.

Now the problem:
The virtio hw is initialised before the console layers. So the feature
bits are all set at that point, before the UI ever registers a GL
interface layer. So is there a method to modify the advertised feature
bits later in the setup sequence before the guest is started? Can I
call something from the update display callback?

Otherwise I was thinking I would need something in my config space on
top of feature bits to say the hw is actually 3d capable.
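
(A sketch of that config-space escape hatch; the flags field and bit are
invented for illustration.)

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical config-space layout: a flags word the guest reads at
 * driver probe time, after the UI has (or hasn't) registered a GL
 * provider. */
struct virtio_gpu_config {
    uint32_t flags;
};

#define VIRTIO_GPU_CFG_3D_CAPABLE (1u << 0)   /* invented bit */

/* Host side: computed when the config is read, so unlike feature bits
 * it can reflect UI state that wasn't known at virtio device init. */
static uint32_t gpu_config_flags(bool linked_with_virgl, bool ui_has_gl)
{
    return (linked_with_virgl && ui_has_gl) ? VIRTIO_GPU_CFG_3D_CAPABLE : 0;
}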

Dave.



Re: [Qemu-devel] gpu and console chicken and egg

2013-12-04 Thread Dave Airlie
On Wed, Dec 4, 2013 at 6:23 PM, Gerd Hoffmann  wrote:
> On Mi, 2013-12-04 at 17:02 +1000, Dave Airlie wrote:
>> So I've hit a bit of an init ordering issue that I'm not sure how best
>> to solve.
>>
>> Just some background:
>> In order for the virt GPU and the UI layer (SDL or GTK etc) to
>> interact properly over OpenGL use, I have created an OpenGL provider
>> in the console, and the UI layer can register callbacks for a single
>> GL provider (only one makes sense really) when it starts up. This is
>> mainly to be used for context management and swap buffers management.
>>
>> Now in the virtio GPU I was going to use a virtio feature to say
>> whether the qemu hw can support the 3D renderer, dependent on whether
>> it was linked with the virgl renderer and whether the current UI was
>> GL capable.
>
> Hmm, why does it depend on the UI?  Wasn't the plan to render into a
> dma-buf no matter what?  Then either read the rendered result from the
> dmabuf (non-gl UI like vnc) or let the (gl-capable) UI pass the dma-buf
> to the compositor?

That would be the hopeful plan; however, so far my brief investigation
says I'm possibly being a bit naive about what EGL can do. I'm still
talking to the EGL and wayland people about how best to model this, but
either way this won't work with nvidia drivers, which is a case we need
to handle, so we need to interact between the UI GL usage and the
renderer. Also non-Linux platforms would want this in some way I'd
assume, at least so virtio-gpu is usable with qemu on them.

I've started looking at how to integrate GL with the gtk frontend as well.

> Also note that the virtio-gpu gl-capability needs to be configurable for
> live migration reasons, so you can migrate between hosts with different
> 3d capabilities.  Something like -device
> virtio-gpu,gl={none,gl2,gl3,host} where "none" turns it off, "gl
> $version" specifies the gl support level and "host" makes it depend on
> the host capabilities (simliar to -cpu host).  For starters you can
> leave out "host" and depend on the user set this.

GL isn't that simple, and I'm not sure I can make it that simple
unfortunately; the renderer requires certain extensions on top of the
base GL 2.1 and GL 3.0. Live migration with "none" might be the first
answer, and then we'd have to expend serious effort on making live
migration work for any sort of different GL drivers. Reading everything
back while rendering continues could be a lot of fun (or pain).

>> So is there a method to modify the advertised feature
>> bits later in the setup sequence before the guest is started?
>
> You can register a callback to be notified when the guest is
> started/stopped (qemu_add_vm_change_state_handler).  That could be used
> although it is a bit hackish.

I don't think this will let me change the feature bits though, since the
virtio PCI layer has already picked them up. I just wondered if we have
any examples of changing features later.

>
> PS: Now that 1.7 is out of the door and 2.0 tree is open for development
> we should start getting the bits which are ready merged to make your
> patch backlog smaller.  SDL2 would be a good start I think.
>

I should probably resubmit the multi-head changes and SDL2 changes and
we should look at merging them first. The input thing is kind of up in
the air; we could probably just default to using hints from the video
setup, and move towards having an agent do it properly in the guest.

I've just spent a week reintegrating virtio-gpu with the 3D renderer so
I can make sure I haven't backed myself into a corner. It kinda leaves
3 major things outstanding:

a) dma-buf/EGL, EGLimage vs EGLstream: nothing exists upstream, so the
timeframe is unknown. I don't think we should block merging on this;
also dma-buf doesn't exist on Windows/MacOSX, so qemu there should
still get virtio-gpu available.

b) dataplane - I'd really like this; the renderer is a lot slower when
it's not in a thread, and it looks bad on benchmarks. I expect other
stuff needs to happen before this.

c) GTK multi-head + GL support - I'd like to have the GTK UI be capable
of multi-head as well; my first attempt moved a lot of code around, and
I'm not really sure what the secondary head windows should contain vs
the primary head.

Dave.



Re: [Qemu-devel] gpu and console chicken and egg

2013-12-05 Thread Dave Airlie
On Thu, Dec 5, 2013 at 6:52 PM, Gerd Hoffmann  wrote:
>   Hi,
>
>> > Hmm, why does it depend on the UI?  Wasn't the plan to render into a
>> > dma-buf no matter what?  Then either read the rendered result from the
>> > dmabuf (non-gl UI like vnc) or let the (gl-capable) UI pass the dma-buf
>> > to the compositor?
>>
>> That would be the hopeful plan; however, so far my brief investigation
>> says I'm possibly being a bit naive about what EGL can do. I'm still
>> talking to the EGL and wayland people about how best to model this, but
>> either way this won't work with nvidia drivers, which is a case we need
>> to handle, so we need to interact between the UI GL usage and the
>> renderer.
>
> Hmm.  That implies we simply can't combine hardware-accelerated 3d
> rendering with vnc, correct?

For SDL + spice/vnc I've added a readback capability to the renderer,
and hooked things up so that if there is > 1 DisplayChangeListener it
will do readbacks and keep the surface updated. This slows things down,
but it does work.
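
(Roughly, the decision looks like this; a sketch against the console
layer's listener list, not the actual patch.)

#include <stdbool.h>

/* A single attached listener can be the GL-capable UI presenting
 * directly; any additional listener (vnc, spice, ...) forces a readback
 * of the rendered result into a surface they can display. */
static bool need_readback(DisplayState *s)
{
    DisplayChangeListener *dcl;
    int listeners = 0;

    QLIST_FOREACH(dcl, &s->listeners, next) {
        listeners++;
    }
    return listeners > 1;
}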

but yes, it means we can't just run the qemu process in its sandbox
without a connection to the X server for it to do GL rendering, or
without using SDL.

I don't think we should block merging the initial code on this, it was
always a big problem on its own that needed solving.

>> GL isn't that simple, and I'm not sure I can make it that simple
>> unfortunately; the renderer requires certain extensions on top of the
>> base GL 2.1 and GL 3.0. Live migration with "none" might be the first
>> answer, and then we'd have to expend serious effort on making live
>> migration work for any sort of different GL drivers. Reading everything
>> back while rendering continues could be a lot of fun (or pain).
>
> We probably want to start with gl={none,host} then.  Live migration only
> supported with "none".
>
> If we can't combine remote displays with 3d rendering (nvidia issue
> above) live migration with 3d makes little sense anyway.

Well we can, we just can't do it without also having a local display
connection, but yes it does limit the migration capabilities quite a
lot!

>> I don't think this will let me change the feature bits though, since
>> the virtio PCI layer has already picked them up. I just wondered if we
>> have any examples of changing features later.
>
> I think you can.  There are no helper functions for it though, you
> probably have to walk the data structures and fiddle with the bits
> directly.
>
> Maybe it is easier to just have a command line option to enable/disable
> 3d globally, and a global variable with the 3d status.  Being able to
> turn off all 3d is probably useful anyway.  Either as standalone option
> or as display option (i.e. -display sdl,3d={on,off,auto}).  Then do a
> simple check for 3d availability when *parsing* the options.  That'll
> also remove the need for the 3d option for virtio-gpu, it can just check
> the global flag instead.

Ah yes that might work, and just fail if we request 3D but can't fulfil it.
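
(Something along these lines, presumably; a sketch only, with made-up
option parsing and helper names.)

#include <stdbool.h>
#include <string.h>

static bool display_3d_enabled;

/* Parse a -display ...,3d={on,off,auto} style option once, at option
 * parsing time, so virtio-gpu can simply consult the global later. */
static int parse_display_3d(const char *value, bool host_has_gl)
{
    if (!strcmp(value, "on")) {
        if (!host_has_gl) {
            return -1;   /* 3D requested but the host can't fulfil it */
        }
        display_3d_enabled = true;
    } else if (!strcmp(value, "auto")) {
        display_3d_enabled = host_has_gl;
    } else {
        display_3d_enabled = false;
    }
    return 0;
}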
>
>> I should probably resubmit the multi-head changes and SDL2 changes and
>> we should look at merging them first.
>

I've got some outstanding things to redo on the virtio-gpu/vga bits, then I'll
resubmit the sdl2 and unaccelerated virtio-gpu bits.

Dave.



Re: [Qemu-devel] [PATCH 7/8] virtio-vga: v1

2013-12-05 Thread Dave Airlie
On Thu, Nov 21, 2013 at 9:06 PM, Gerd Hoffmann  wrote:
> On Do, 2013-11-21 at 13:12 +1000, Dave Airlie wrote:
>> On Wed, Nov 20, 2013 at 10:02 PM, Gerd Hoffmann  wrote:
>> > On Mi, 2013-11-20 at 15:52 +1000, Dave Airlie wrote:
>> >> From: Dave Airlie 
>> >>
>> >> This is a virtio-vga device built on top of the virtio-gpu device.
>> >
>> > Ah, I see what you use the wrapping for.  Hmm.  I think you should use a
>> > common base class instead, i.e. something like virtio-gpu-base which
>> > holds all the common stuff.  Both virtio-gpu and virtio-vga can use that
>> > as TypeInfo->parent then.  This way virtio-vga doesn't have to muck with
>> > virtio-gpu internals.  virtio-gpu-base can be tagged as abstract class
>> > (using .abstract = true) so it will not be instantiated directly.
>> >
>>
>> I'm not sure what that buys me here, I need the virtio-vga to attach
>> the vga ops the first console that the virtio-gpu registers, it can't
>> be a separate console, and since virtio-gpu initialises before
>> virtio-vga I can't tell it to not register the console.
>
> virtio-gpu-core registers no consoles.  It just exports the hw_ops
> functions.  virtio-gpu-core only initializes the stuff which is
> identical for both virtio-gpu and virtio-vga; everything else is left to
> the init functions of the subclasses.
>
> virtio-gpu uses virtio-gpu-core as parent.  Registers the consoles,
> using the hw_ops functions exported by virtio-gpu-core.  Also sets the
> pci class to DISPLAY_OTHER.
>
> virtio-vga uses virtio-gpu-core as parent too.  Registers the consoles,
> using functions basically doing "if vgamode then call vga hw_ops else
> call virtio-gpu-core hw_ops".  Similar to what you have today but
> without the funky wrapping.  Sets pci class to DISPLAY_VGA and
> initializes vga stuff.
>
> cheers,
>   Gerd

Okay I'm really missing something here and I think I've confused
myself completely.

My plan was

virtio-gpu-base - VirtIOGPUBase object - VirtIODevice parent_obj -
abstract class - contains vqs + exposes ops

virtio-gpu - virtio-gpu-base wrapper with init sequence for non-VGA
virtio gpus (mmio + pci)
virtio-gpu-pci - VirtIOPCIProxy parent_obj - contains a VirtIOGPU vdev
that it instantiates in its instance init like all the PCI wrappers

Now the advice given was to have virtio-vga wrap virtio-gpu-base but
from what I can see it really can't. Since it needs to act and look
like a PCI device

virtio-vga: Also has a VirtIOPCIProxy parent_obj; however, as
virtio-gpu-base is abstract I can't directly instantiate it, and I
can't instantiate virtio-gpu as it's the wrong thing,

so do I really need to add another class? rename virtio-vga to
virtio-pci-vga and add a new virtio-vga that just wraps
virtio-gpu-base?

This is getting a lot messier than the code I had, and the benefits
are escaping me, which must mean I'm misinterpreting the instructions
given.

This then leads to another question: how do I call the virtio-gpu-base
init functions? Directly? I can't use qdev, as they are abstract
from what I can see.

Dave.



Re: [Qemu-devel] [PATCH 7/8] virtio-vga: v1

2013-12-06 Thread Dave Airlie
On Fri, Dec 6, 2013 at 6:24 PM, Gerd Hoffmann  wrote:
>   Hi,
>
>> Now the advice given was to have virtio-vga wrap virtio-gpu-base but
>> from what I can see it really can't. Since it needs to act and look
>> like a PCI device
>
> Oops, yes.  VirtIOPCIProxy wasn't on my radar.  That makes things a bit
> more messy.  Can you just push what you have now somewhere?  I'll take a
> closer look next week.

http://cgit.freedesktop.org/~airlied/qemu/log/?h=virtio-gpu-inherit

Well I didn't really get anything working, but the top commit in that
branch was where I was on my last random fail.

I think another object is probably required, or making the base one
not abstract.

Dave.



[Qemu-devel] gtk3 mouse warping

2013-12-08 Thread Dave Airlie
does the gtk3 mouse warping work for anyone else?

I've just been testing with gtk3 + virtio-vga and the mouse seems to
bounce around a lot off the edges of the window, but never make it
into it.

Dave.



Re: [Qemu-devel] gtk3 mouse warping

2013-12-08 Thread Dave Airlie
On Mon, Dec 9, 2013 at 10:49 AM, Dave Airlie  wrote:
> does the gtk3 mouse warping work for anyone else?
>
> I've just been testing with gtk3 + virtio-vga and the mouse seems to
> bounce around a lot off the edges of the window, but never make it
> into it.

It appears to be an issue with cursor hotspot offsets only, which might
explain why nobody else has seen it yet.

Dave.



Re: [Qemu-devel] gtk3 mouse warping

2013-12-08 Thread Dave Airlie
On Mon, Dec 9, 2013 at 1:54 PM, Dave Airlie  wrote:
> On Mon, Dec 9, 2013 at 10:49 AM, Dave Airlie  wrote:
>> does the gtk3 mouse warping work for anyone else?
>>
>> I've just been testing with gtk3 + virtio-vga and the mouse seems to
>> bounce around a lot off the edges of the window, but never make it
>> into it.
>
> It appears to be an issue with cursor hotspot offsets only, which might
> explain why nobody else has seen it yet.

Oops ignore me, looks like a virtio-gpu bug, I just didn't notice
before with SDL.

Dave.



[Qemu-devel] [PATCH 1/2] gtk: don't warp pointer in absolute mode

2013-12-09 Thread Dave Airlie
From: Dave Airlie 

This makes gtk act the same way as the current sdl backend, which doesn't
do the warp in this case.

If your guest GPU has a hw pointer, this otherwise gets you endless loops:
the warp causes motion, the motion causes input events, those cause the
guest to move the cursor, and that causes another warp.

Signed-off-by: Dave Airlie 
---
 ui/gtk.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/ui/gtk.c b/ui/gtk.c
index 6316f5b..2abf289 100644
--- a/ui/gtk.c
+++ b/ui/gtk.c
@@ -338,6 +338,9 @@ static void gd_mouse_set(DisplayChangeListener *dcl,
 GdkDeviceManager *mgr;
 gint x_root, y_root;
 
+    if (kbd_mouse_is_absolute())
+        return;
+
 dpy = gtk_widget_get_display(s->drawing_area);
 mgr = gdk_display_get_device_manager(dpy);
 gdk_window_get_root_coords(gtk_widget_get_window(s->drawing_area),
-- 
1.8.3.1




[Qemu-devel] [PATCH 2/2] gtk: respect cursor visibility

2013-12-09 Thread Dave Airlie
From: Dave Airlie 

If the guest asks for no cursor, switch gtk to using the null cursor.

Signed-off-by: Dave Airlie 
---
 ui/gtk.c | 25 +
 1 file changed, 17 insertions(+), 8 deletions(-)

diff --git a/ui/gtk.c b/ui/gtk.c
index 2abf289..f60e66f 100644
--- a/ui/gtk.c
+++ b/ui/gtk.c
@@ -158,6 +158,8 @@ typedef struct GtkDisplayState
 bool external_pause_update;
 
 bool modifier_pressed[ARRAY_SIZE(modifier_keycode)];
+
+GdkCursor *current_cursor;
 } GtkDisplayState;
 
 static GtkDisplayState *global_state;
@@ -338,6 +340,11 @@ static void gd_mouse_set(DisplayChangeListener *dcl,
 GdkDeviceManager *mgr;
 gint x_root, y_root;
 
+    if (!visible)
+        gdk_window_set_cursor(gtk_widget_get_window(s->drawing_area), s->null_cursor);
+    else
+        gdk_window_set_cursor(gtk_widget_get_window(s->drawing_area), s->current_cursor);
+
     if (kbd_mouse_is_absolute())
         return;
 
@@ -369,21 +376,23 @@ static void gd_cursor_define(DisplayChangeListener *dcl,
 {
 GtkDisplayState *s = container_of(dcl, GtkDisplayState, dcl);
 GdkPixbuf *pixbuf;
-GdkCursor *cursor;
+
+if (s->current_cursor) {
+#if !GTK_CHECK_VERSION(3, 0, 0)
+gdk_cursor_unref(s->current_cursor);
+#else
+g_object_unref(s->current_cursor);
+#endif
+s->current_cursor = NULL;
+}
 
 pixbuf = gdk_pixbuf_new_from_data((guchar *)(c->data),
   GDK_COLORSPACE_RGB, true, 8,
   c->width, c->height, c->width * 4,
   NULL, NULL);
-    cursor = gdk_cursor_new_from_pixbuf(gtk_widget_get_display(s->drawing_area),
+    s->current_cursor = gdk_cursor_new_from_pixbuf(gtk_widget_get_display(s->drawing_area),
                                         pixbuf, c->hot_x, c->hot_y);
-gdk_window_set_cursor(gtk_widget_get_window(s->drawing_area), cursor);
 g_object_unref(pixbuf);
-#if !GTK_CHECK_VERSION(3, 0, 0)
-gdk_cursor_unref(cursor);
-#else
-g_object_unref(cursor);
-#endif
 }
 
 static void gd_switch(DisplayChangeListener *dcl,
-- 
1.8.3.1




[Qemu-devel] gtk cursor patches

2013-12-09 Thread Dave Airlie
These were just fallout fixes from exploring gtk multi-head with virtio-gpu;
hopefully they are useful to people before then. I'm not sure whether the
warping/absolute interaction is defined or not.
Dave.




[Qemu-devel] [PATCH 5/8] console: add ability to wrap a console.

2013-12-09 Thread Dave Airlie
From: Dave Airlie 

In order to implement virtio-vga on top of virtio-gpu we need
to be able to wrap the first console virtio-gpu registers from
inside virtio-vga which initialises after virtio-gpu. With this
interface virtio-vga can store the virtio-gpu interfaces, and
call them from its own ones.

Signed-off-by: Dave Airlie 
---
 include/ui/console.h |  7 +++
 ui/console.c | 13 +
 2 files changed, 20 insertions(+)

diff --git a/include/ui/console.h b/include/ui/console.h
index f6e8957..2ad9238 100644
--- a/include/ui/console.h
+++ b/include/ui/console.h
@@ -285,6 +285,13 @@ QemuConsole *graphic_console_init(DeviceState *dev,
   const GraphicHwOps *ops,
   void *opaque);
 
+void graphic_console_wrap(QemuConsole *con,
+                          DeviceState *dev,
+                          const GraphicHwOps *ops,
+                          void *opaque,
+                          const GraphicHwOps **orig_ops,
+                          void **orig_opaque);
+
 void graphic_hw_update(QemuConsole *con);
 void graphic_hw_invalidate(QemuConsole *con);
 void graphic_hw_text_update(QemuConsole *con, console_ch_t *chardata);
diff --git a/ui/console.c b/ui/console.c
index def11ea..f2d6721 100644
--- a/ui/console.c
+++ b/ui/console.c
@@ -1619,6 +1619,19 @@ QemuConsole *graphic_console_init(DeviceState *dev,
 return s;
 }
 
+void graphic_console_wrap(QemuConsole *con,
+                          DeviceState *dev,
+                          const GraphicHwOps *hw_ops,
+                          void *opaque,
+                          const GraphicHwOps **orig_ops,
+                          void **orig_opaque)
+{
+    *orig_opaque = con->hw;
+    *orig_ops = con->hw_ops;
+    con->hw_ops = hw_ops;
+    con->hw = opaque;
+}
+
 QemuConsole *qemu_console_lookup_by_index(unsigned int index)
 {
 if (index >= MAX_CONSOLES) {
-- 
1.8.3.1




[Qemu-devel] [RFC] sdl2 + virtio-gpu repost

2013-12-09 Thread Dave Airlie
Hi,

This is a repost of the latest SDL2 UI layer along with virtio-gpu,

I've merged the SDL2 base and multi-head patches into one, and I've split
out the display notifiers stuff and the sdl2 demo hack.

The virtio-gpu/vga wrapping is still unresolved; I've failed to find a cleaner
way yet, hopefully Gerd will advise further, but having 4 virtio-gpu-* objects
is getting ugly fast. I've also included a doc on the basic queues used in the
virtio-gpu and what commands are sent on them.

I'd really like to get the sdl2 base patch merged at least.

Dave.




[Qemu-devel] [PATCH 1/8] ui/sdl2 : initial port to SDL 2.0 (v2.0)

2013-12-09 Thread Dave Airlie
From: Dave Airlie 

I've ported the SDL 1.2 code over and rewritten it to use the SDL2 interface.

The biggest changes were in the input handling, where SDL2 has done a major
overhaul, and I've had to include a generated translation file to get from
SDL2 codes back to qemu compatible ones. I'm still not sure how the keyboard
layout code works in qemu, so there may be further work if someone can point
me at a test case that works with SDL 1.2 and doesn't with SDL2.

Some SDL env vars we used to set are no longer used by SDL2.
Windows and OSX support is untested.

I don't think we can link to SDL 1.2 and SDL2 at the same time, so I felt
using --with-sdlabi=2.0 to select the new code should be fine, like how
gtk does it.

v1.1: fix keys in text console
v1.2: fix shutdown, cleanups a bit of code, support ARGB cursor

v2.0: merge the SDL multihead patch into this, g_new the number of consoles
needed, wrap DCL inside per-console structure.

Signed-off-by: Dave Airlie 
---
 configure|  23 +-
 ui/Makefile.objs |   4 +-
 ui/sdl.c |   3 +
 ui/sdl2.c| 981 +++
 ui/sdl2_scancode_translate.h | 260 
 ui/sdl_keysym.h  |   3 +-
 6 files changed, 1267 insertions(+), 7 deletions(-)
 create mode 100644 ui/sdl2.c
 create mode 100644 ui/sdl2_scancode_translate.h

diff --git a/configure b/configure
index 0666228..3b7490d 100755
--- a/configure
+++ b/configure
@@ -171,6 +171,7 @@ docs=""
 fdt=""
 pixman=""
 sdl=""
+sdlabi="1.2"
 virtfs=""
 vnc="yes"
 sparse="no"
@@ -322,6 +323,7 @@ query_pkg_config() {
 }
 pkg_config=query_pkg_config
 sdl_config="${SDL_CONFIG-${cross_prefix}sdl-config}"
+sdl2_config="${SDL2_CONFIG-${cross_prefix}sdl2-config}"
 
 # If the user hasn't specified ARFLAGS, default to 'rv', just as make does.
 ARFLAGS="${ARFLAGS-rv}"
@@ -727,6 +729,8 @@ for opt do
   ;;
   --enable-sdl) sdl="yes"
   ;;
+  --with-sdlabi=*) sdlabi="$optarg"
+  ;;
   --disable-qom-cast-debug) qom_cast_debug="no"
   ;;
   --enable-qom-cast-debug) qom_cast_debug="yes"
@@ -1113,6 +1117,7 @@ echo "  --disable-strip          disable stripping binaries"
 echo "  --disable-werror         disable compilation abort on warning"
 echo "  --disable-sdl            disable SDL"
 echo "  --enable-sdl             enable SDL"
+echo "  --with-sdlabi            select preferred SDL ABI 1.2 or 2.0"
 echo "  --disable-gtk            disable gtk UI"
 echo "  --enable-gtk             enable gtk UI"
 echo "  --disable-virtfs         disable VirtFS"
@@ -1781,12 +1786,22 @@ fi
 
 # Look for sdl configuration program (pkg-config or sdl-config).  Try
 # sdl-config even without cross prefix, and favour pkg-config over sdl-config.
-if test "`basename $sdl_config`" != sdl-config && ! has ${sdl_config}; then
-  sdl_config=sdl-config
+
+if test $sdlabi == "2.0"; then
+sdl_config=$sdl2_config
+sdlname=sdl2
+sdlconfigname=sdl2_config
+else
+sdlname=sdl
+sdlconfigname=sdl_config
+fi
+
+if test "`basename $sdl_config`" != $sdlconfigname && ! has ${sdl_config}; then
+  sdl_config=$sdlconfigname
 fi
 
-if $pkg_config sdl --exists; then
-  sdlconfig="$pkg_config sdl"
+if $pkg_config $sdlname --exists; then
+  sdlconfig="$pkg_config $sdlname"
   _sdlversion=`$sdlconfig --modversion 2>/dev/null | sed 's/[^0-9]//g'`
 elif has ${sdl_config}; then
   sdlconfig="$sdl_config"
diff --git a/ui/Makefile.objs b/ui/Makefile.objs
index f33be47..721ad37 100644
--- a/ui/Makefile.objs
+++ b/ui/Makefile.objs
@@ -9,12 +9,12 @@ vnc-obj-y += vnc-jobs.o
 
 common-obj-y += keymaps.o console.o cursor.o input.o qemu-pixman.o
 common-obj-$(CONFIG_SPICE) += spice-core.o spice-input.o spice-display.o
-common-obj-$(CONFIG_SDL) += sdl.o sdl_zoom.o x_keymap.o
+common-obj-$(CONFIG_SDL) += sdl.o sdl_zoom.o x_keymap.o sdl2.o
 common-obj-$(CONFIG_COCOA) += cocoa.o
 common-obj-$(CONFIG_CURSES) += curses.o
 common-obj-$(CONFIG_VNC) += $(vnc-obj-y)
 common-obj-$(CONFIG_GTK) += gtk.o x_keymap.o
 
-$(obj)/sdl.o $(obj)/sdl_zoom.o: QEMU_CFLAGS += $(SDL_CFLAGS) 
+$(obj)/sdl.o $(obj)/sdl_zoom.o $(obj)/sdl2.o: QEMU_CFLAGS += $(SDL_CFLAGS) 
 
 $(obj)/gtk.o: QEMU_CFLAGS += $(GTK_CFLAGS) $(VTE_CFLAGS)
diff --git a/ui/sdl.c b/ui/sdl.c
index 9d8583c..736bb95 100644
--- a/ui/sdl.c
+++ b/ui/sdl.c
@@ -26,6 +26,8 @@
 #undef WIN32_LEAN_AND_MEAN
 
 #include 
+
+#if SDL_MAJOR_VERSION == 1
 #include 
 
 #include "qemu-common.h"
@@ -966,3 +968,4 @@ void sdl_display_init(DisplayState *ds, int full_screen, int no_frame)
 
 atexit(sdl_cleanup);
 }
+#endif
diff --git a/ui/sdl2.c b/ui/sdl2.c
new file mode 100644
ind

[Qemu-devel] [PATCH 2/8] console: add state notifiers for ui<->display

2013-12-09 Thread Dave Airlie
From: Dave Airlie 

These are to be used for the UI to signal the video display,
and vice-versa about changes in the state of a console, like
size and offsets in relation to other consoles for input handling.

Signed-off-by: Dave Airlie 
---
 include/ui/console.h |  8 +++-
 ui/console.c | 26 ++
 2 files changed, 33 insertions(+), 1 deletion(-)

diff --git a/include/ui/console.h b/include/ui/console.h
index 4156a87..f6e8957 100644
--- a/include/ui/console.h
+++ b/include/ui/console.h
@@ -173,6 +173,9 @@ typedef struct DisplayChangeListenerOps {
   int x, int y, int on);
 void (*dpy_cursor_define)(DisplayChangeListener *dcl,
   QEMUCursor *cursor);
+
+void (*dpy_notify_state)(DisplayChangeListener *dcl,
+int x, int y, uint32_t width, uint32_t height);
 } DisplayChangeListenerOps;
 
 struct DisplayChangeListener {
@@ -223,7 +226,8 @@ void dpy_text_resize(QemuConsole *con, int w, int h);
 void dpy_mouse_set(QemuConsole *con, int x, int y, int on);
 void dpy_cursor_define(QemuConsole *con, QEMUCursor *cursor);
 bool dpy_cursor_define_supported(QemuConsole *con);
-
+void dpy_notify_state(QemuConsole *con, int x, int y,
+  uint32_t width, uint32_t height);
 static inline int surface_stride(DisplaySurface *s)
 {
 return pixman_image_get_stride(s->image);
@@ -274,6 +278,7 @@ typedef struct GraphicHwOps {
 void (*gfx_update)(void *opaque);
 void (*text_update)(void *opaque, console_ch_t *text);
 void (*update_interval)(void *opaque, uint64_t interval);
+    void (*notify_state)(void *opaque, int idx, int x, int y, uint32_t width, uint32_t height);
 } GraphicHwOps;
 
 QemuConsole *graphic_console_init(DeviceState *dev,
@@ -283,6 +288,7 @@ QemuConsole *graphic_console_init(DeviceState *dev,
 void graphic_hw_update(QemuConsole *con);
 void graphic_hw_invalidate(QemuConsole *con);
 void graphic_hw_text_update(QemuConsole *con, console_ch_t *chardata);
+void graphic_hw_notify_state(QemuConsole *con, int x, int y, uint32_t width, uint32_t height);
 
 QemuConsole *qemu_console_lookup_by_index(unsigned int index);
 QemuConsole *qemu_console_lookup_by_device(DeviceState *dev);
diff --git a/ui/console.c b/ui/console.c
index 502e160..def11ea 100644
--- a/ui/console.c
+++ b/ui/console.c
@@ -265,6 +265,16 @@ void graphic_hw_invalidate(QemuConsole *con)
 }
 }
 
+void graphic_hw_notify_state(QemuConsole *con, int x, int y, uint32_t width, uint32_t height)
+{
+    if (!con) {
+        con = active_console;
+    }
+    if (con && con->hw_ops->notify_state) {
+        con->hw_ops->notify_state(con->hw, con->index, x, y, width, height);
+    }
+}
+
 static void ppm_save(const char *filename, struct DisplaySurface *ds,
  Error **errp)
 {
@@ -1525,6 +1535,22 @@ bool dpy_cursor_define_supported(QemuConsole *con)
 return false;
 }
 
+void dpy_notify_state(QemuConsole *con, int x, int y,
+  uint32_t width, uint32_t height)
+{
+DisplayState *s = con->ds;
+DisplayChangeListener *dcl;
+
+QLIST_FOREACH(dcl, &s->listeners, next) {
+if (con != (dcl->con ? dcl->con : active_console)) {
+continue;
+}
+if (dcl->ops->dpy_notify_state) {
+dcl->ops->dpy_notify_state(dcl, x, y, width, height);
+}
+}
+}
+
 /***/
 /* register display */
 
-- 
1.8.3.1




[Qemu-devel] [PATCH 6/8] virtio-gpu: v0.2 of the virtio based GPU code.

2013-12-09 Thread Dave Airlie
From: Dave Airlie 

This is the basic virtio-gpu which is

multi-head capable,
ARGB cursor support,
unaccelerated.

Some more info is in docs/specs/virtio-gpu.txt.

changes since v0.1:
add reset handling
fix display info response
fix cursor generation issues
drop 3d stuff that snuck in

Signed-off-by: Dave Airlie 
---
 default-configs/x86_64-softmmu.mak |   1 +
 docs/specs/virtio-gpu.txt  |  89 +
 hw/display/Makefile.objs   |   2 +
 hw/display/virtgpu_hw.h| 142 
 hw/display/virtio-gpu.c| 649 +
 hw/virtio/virtio-pci.c |  49 +++
 hw/virtio/virtio-pci.h |  15 +
 include/hw/pci/pci.h   |   1 +
 include/hw/virtio/virtio-gpu.h |  89 +
 9 files changed, 1037 insertions(+)
 create mode 100644 docs/specs/virtio-gpu.txt
 create mode 100644 hw/display/virtgpu_hw.h
 create mode 100644 hw/display/virtio-gpu.c
 create mode 100644 include/hw/virtio/virtio-gpu.h

diff --git a/default-configs/x86_64-softmmu.mak b/default-configs/x86_64-softmmu.mak
index 31bddce..1a00b78 100644
--- a/default-configs/x86_64-softmmu.mak
+++ b/default-configs/x86_64-softmmu.mak
@@ -9,6 +9,7 @@ CONFIG_VGA_PCI=y
 CONFIG_VGA_ISA=y
 CONFIG_VGA_CIRRUS=y
 CONFIG_VMWARE_VGA=y
+CONFIG_VIRTIO_GPU=y
 CONFIG_VMMOUSE=y
 CONFIG_SERIAL=y
 CONFIG_PARALLEL=y
diff --git a/docs/specs/virtio-gpu.txt b/docs/specs/virtio-gpu.txt
new file mode 100644
index 000..987d807
--- /dev/null
+++ b/docs/specs/virtio-gpu.txt
@@ -0,0 +1,89 @@
+Initial info on virtio-gpu:
+
+virtio-gpu is a virtio based graphics adapter. The initial version
+provides support for:
+
+- ARGB Hardware cursors
+- multiple outputs
+
+There are no acceleration features available in the first release of
+this device; 3D acceleration features will be provided later via an
+interface to an external renderer.
+
+The virtio-gpu exposes 3 vqs to the guest:
+
+1) control vq - guest->host queue for sending commands and getting
+   responses when required.
+2) cursor vq - guest->host queue for sending cursor position and
+   resource updates.
+3) event vq - host->guest queue for sending async events like display
+   topology updates and errors (todo).
+
+How to use the virtio-gpu:
+
+The virtio-gpu is based around the concept of resources private to the
+host; the guest must DMA transfer into these resources. This is a
+design requirement in order to interface with future 3D rendering. In
+the unaccelerated case there is no support for DMA transfers from
+resources, just to them.
+
+Resources are initially simple 2D resources, consisting of a width,
+height and format along with an identifier. The guest must then attach
+backing store to the resources in order for DMA transfers to work.
+This is like a GART in a real GPU.
+
+A typical guest user would create a 2D resource using
+VIRTGPU_CMD_RESOURCE_CREATE_2D, attach backing store using
+VIRTGPU_CMD_RESOURCE_ATTACH_BACKING, then attach the resource to a
+scanout using VIRTGPU_CMD_SET_SCANOUT, then use
+VIRTGPU_CMD_TRANSFER_TO_HOST_2D to send updates to the resource, and
+finally VIRTGPU_CMD_RESOURCE_FLUSH to flush the scanout buffers to
+screen.
+
+Command queue:
+
+VIRTGPU_CMD_GET_DISPLAY_INFO:
+Retrieve the current output configuration.
+
+This sends a response in the same queue slot. The response contains
+the max number of scanouts the host can support, along with a list of
+per-scanout information. The info contains whether the scanout is
+enabled, what its preferred x, y, width and height are, and some
+future flags.
+
+VIRTGPU_CMD_RESOURCE_CREATE_2D:
+Create a 2D resource on the host.
+
+This creates a 2D resource on the host with the specified width,
+height and format. Only a small subset of formats are supported. The
+resource ids are generated by the guest.
+
+VIRTGPU_CMD_RESOURCE_UNREF:
+Destroy a resource.
+
+This informs the host that a resource is no longer required by the
+guest.
+
+VIRTGPU_CMD_SET_SCANOUT:
+Set the scanout parameters for a single output.
+
+This sets the scanout parameters for a single scanout. The resource_id
+is the resource to be scanned out from, along with a rectangle
+specified by x, y, width and height.
+
+VIRTGPU_CMD_RESOURCE_FLUSH:
+Flush a scanout resource.
+
+This flushes a resource to screen; it takes a rectangle and a resource
+id, and flushes any scanouts the resource is being used on.
+
+VIRTGPU_CMD_TRANSFER_TO_HOST_2D:
+Transfer from guest memory to host resource.
+
+This takes a resource id along with a destination offset into the
+resource, and a box to transfer from the host backing for the
+resource.
+
+VIRTGPU_CMD_RESOURCE_ATTACH_BACKING:
+Assign backing pages to a resource.
+
+This assigns an array of guest pages as the backing store for a
+resource. These pages are then used for the transfer operations for
+that resource from that point on.
+
+VIRTGPU_CMD_RESOURCE_DETACH_BACKING:
+Detach backing pages from a resource.
+
+This detaches any backing pages from a resource, to be used i

[Qemu-devel] [PATCH 3/8] sdl2: add display notify change support

2013-12-09 Thread Dave Airlie
From: Dave Airlie 

this adds support for the callbacks from the console layer, when the gpu
changes the layouts.

Signed-off-by: Dave Airlie 
---
 ui/sdl2.c | 10 ++
 1 file changed, 10 insertions(+)

diff --git a/ui/sdl2.c b/ui/sdl2.c
index 2eb3e9c..dd8cd2b 100644
--- a/ui/sdl2.c
+++ b/ui/sdl2.c
@@ -880,6 +880,15 @@ static void sdl_mouse_define(DisplayChangeListener *dcl,
 SDL_SetCursor(guest_sprite);
 }
 
+static void sdl_notify_state(DisplayChangeListener *dcl,
+                             int x, int y, uint32_t width, uint32_t height)
+{
+    struct sdl2_console_state *scon = container_of(dcl, struct sdl2_console_state, dcl);
+
+    scon->x = x;
+    scon->y = y;
+}
+
 static void sdl_cleanup(void)
 {
 if (guest_sprite)
@@ -894,6 +903,7 @@ static const DisplayChangeListenerOps dcl_ops = {
 .dpy_refresh   = sdl_refresh,
 .dpy_mouse_set = sdl_mouse_warp,
 .dpy_cursor_define = sdl_mouse_define,
+.dpy_notify_state  = sdl_notify_state,
 };
 
 void sdl_display_init(DisplayState *ds, int full_screen, int no_frame)
-- 
1.8.3.1




[Qemu-devel] [PATCH 4/8] sdl2: add UI to toggle head 1 on/off

2013-12-09 Thread Dave Airlie
From: Dave Airlie 

This just adds ctrl-alt-n to toggle head 1 on/off for testing and demo purposes.

Signed-off-by: Dave Airlie 
---
 ui/sdl2.c | 7 +++
 1 file changed, 7 insertions(+)

diff --git a/ui/sdl2.c b/ui/sdl2.c
index dd8cd2b..c52dcd9 100644
--- a/ui/sdl2.c
+++ b/ui/sdl2.c
@@ -522,6 +522,13 @@ static void handle_keydown(SDL_Event *ev)
 if (gui_key_modifier_pressed) {
 keycode = sdl_keyevent_to_keycode(&ev->key);
 switch (keycode) {
+case 0x31:
+    /* spawn a new connected monitor if we have one */
+    if (sdl2_console[1].surface)
+        graphic_hw_notify_state(sdl2_console[1].dcl.con, 0, 0, 0, 0);
+    else
+        graphic_hw_notify_state(sdl2_console[1].dcl.con, 0, 0, 1024, 768);
+    break;
 case 0x21: /* 'f' key on US keyboard */
 toggle_full_screen(scon);
 gui_keysym = 1;
-- 
1.8.3.1




[Qemu-devel] [PATCH 7/8] virtio-vga: v1

2013-12-09 Thread Dave Airlie
From: Dave Airlie 

This is a virtio-vga device built on top of the virtio-gpu device.

Signed-off-by: Dave Airlie 
---
 Makefile   |   2 +-
 default-configs/x86_64-softmmu.mak |   1 +
 hw/display/Makefile.objs   |   1 +
 hw/display/virtio-vga.c| 156 +
 hw/pci/pci.c   |   2 +
 include/sysemu/sysemu.h|   2 +-
 pc-bios/vgabios-virtio.bin | Bin 0 -> 40448 bytes
 roms/Makefile  |   2 +-
 roms/config.vga.virtio |   6 ++
 vl.c   |  13 
 10 files changed, 182 insertions(+), 3 deletions(-)
 create mode 100644 hw/display/virtio-vga.c
 create mode 100644 pc-bios/vgabios-virtio.bin
 create mode 100644 roms/config.vga.virtio

diff --git a/Makefile b/Makefile
index bdff4e4..a7dabe4 100644
--- a/Makefile
+++ b/Makefile
@@ -291,7 +291,7 @@ bepocz
 
 ifdef INSTALL_BLOBS
 BLOBS=bios.bin sgabios.bin vgabios.bin vgabios-cirrus.bin \
-vgabios-stdvga.bin vgabios-vmware.bin vgabios-qxl.bin \
+vgabios-stdvga.bin vgabios-vmware.bin vgabios-qxl.bin vgabios-virtio.bin \
 acpi-dsdt.aml q35-acpi-dsdt.aml \
 ppc_rom.bin openbios-sparc32 openbios-sparc64 openbios-ppc QEMU,tcx.bin \
 pxe-e1000.rom pxe-eepro100.rom pxe-ne2k_pci.rom \
diff --git a/default-configs/x86_64-softmmu.mak b/default-configs/x86_64-softmmu.mak
index 1a00b78..22d8587 100644
--- a/default-configs/x86_64-softmmu.mak
+++ b/default-configs/x86_64-softmmu.mak
@@ -10,6 +10,7 @@ CONFIG_VGA_ISA=y
 CONFIG_VGA_CIRRUS=y
 CONFIG_VMWARE_VGA=y
 CONFIG_VIRTIO_GPU=y
+CONFIG_VIRTIO_VGA=y
 CONFIG_VMMOUSE=y
 CONFIG_SERIAL=y
 CONFIG_PARALLEL=y
diff --git a/hw/display/Makefile.objs b/hw/display/Makefile.objs
index 10e4066..63427e9 100644
--- a/hw/display/Makefile.objs
+++ b/hw/display/Makefile.objs
@@ -34,3 +34,4 @@ obj-$(CONFIG_VGA) += vga.o
 common-obj-$(CONFIG_QXL) += qxl.o qxl-logger.o qxl-render.o
 
 obj-$(CONFIG_VIRTIO_GPU) += virtio-gpu.o
+obj-$(CONFIG_VIRTIO_VGA) += virtio-vga.o
diff --git a/hw/display/virtio-vga.c b/hw/display/virtio-vga.c
new file mode 100644
index 000..9e8eff9
--- /dev/null
+++ b/hw/display/virtio-vga.c
@@ -0,0 +1,156 @@
+#include "hw/hw.h"
+#include "hw/pci/pci.h"
+#include "ui/console.h"
+#include "vga_int.h"
+#include "hw/virtio/virtio-pci.h"
+
+/*
+ * virtio-vga: This extends VirtIOGPUPCI
+ */
+#define TYPE_VIRTIO_VGA "virtio-vga"
+#define VIRTIO_VGA(obj) \
+OBJECT_CHECK(VirtIOVGA, (obj), TYPE_VIRTIO_VGA)
+
+typedef struct VirtIOVGA VirtIOVGA;
+struct VirtIOVGA {
+struct VirtIOGPUPCI gpu;
+VGACommonState vga;
+const struct GraphicHwOps *wrapped_ops;
+void *wrapped_opaque;
+};
+
+static void virtio_vga_invalidate_display(void *opaque)
+{
+VirtIOVGA *vvga = opaque;
+
+if (!vvga->gpu.vdev.enable) {
+   vvga->vga.hw_ops->invalidate(&vvga->vga);
+   return;
+}
+
+vvga->wrapped_ops->invalidate(vvga->wrapped_opaque);
+}
+
+static void virtio_vga_update_display(void *opaque)
+{
+VirtIOVGA *vvga = opaque;
+
+if (!vvga->gpu.vdev.enable) {
+   vvga->vga.hw_ops->gfx_update(&vvga->vga);
+}
+vvga->wrapped_ops->gfx_update(vvga->wrapped_opaque);
+}
+
+static void virtio_vga_text_update(void *opaque, console_ch_t *chardata)
+{
+VirtIOVGA *vvga = opaque;
+
+if (!vvga->gpu.vdev.enable) {
+   if (vvga->vga.hw_ops->text_update)
+   vvga->vga.hw_ops->text_update(&vvga->vga, chardata);
+}
+vvga->wrapped_ops->text_update(vvga->wrapped_opaque, chardata);
+}
+
+static void virtio_vga_notify_state(void *opaque, int idx, int x, int y, uint32_t width, uint32_t height)
+{
+VirtIOVGA *vvga = opaque;
+
+if (!vvga->gpu.vdev.enable) {
+if (vvga->vga.hw_ops->notify_state)
+   vvga->vga.hw_ops->notify_state(&vvga->vga, idx, x, y, width, height);
+}
+vvga->wrapped_ops->notify_state(vvga->wrapped_opaque, idx, x, y, width, height);
+}
+
+static const GraphicHwOps virtio_vga_ops = {
+.invalidate = virtio_vga_invalidate_display,
+.gfx_update = virtio_vga_update_display,
+.text_update = virtio_vga_text_update,
+.notify_state = virtio_vga_notify_state,
+};
+
+/* VGA device wrapper around PCI device around virtio GPU */
+static int virtio_vga_init(VirtIOPCIProxy *vpci_dev)
+{
+VirtIOVGA *vvga = VIRTIO_VGA(vpci_dev);
+DeviceState *vdev = DEVICE(&vvga->gpu.vdev);
+VGACommonState *vga = &vvga->vga;
+
+qdev_set_parent_bus(vdev, BUS(&vpci_dev->bus));
+if (qdev_init(vdev) < 0) {
+   return -1;
+}
+
+graphic_console_wrap(vvga->gpu.vdev.con[0], DEVICE(vpci_dev), &virtio_vga_ops, vvga, &vvga->wrapped_ops, &vvga->wrapped_opaque);
+vga->con = vvga->gpu.vdev.con[0];
+
+vga_common_init(vga, OBJECT(v

[Qemu-devel] [PATCH 8/8] HACK: just to make things start easier with libvirt

2013-12-09 Thread Dave Airlie
From: Dave Airlie 

---
 hw/display/vga-pci.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/hw/display/vga-pci.c b/hw/display/vga-pci.c
index b3a45c8..e4bea17 100644
--- a/hw/display/vga-pci.c
+++ b/hw/display/vga-pci.c
@@ -146,6 +146,7 @@ static int pci_std_vga_initfn(PCIDevice *dev)
 PCIVGAState *d = DO_UPCAST(PCIVGAState, dev, dev);
 VGACommonState *s = &d->vga;
 
+   return 0;
 /* vga + console init */
 vga_common_init(s, OBJECT(dev));
 vga_init(s, OBJECT(dev), pci_address_space(dev), pci_address_space_io(dev),
@@ -195,7 +196,7 @@ static void vga_class_init(ObjectClass *klass, void *data)
 k->romfile = "vgabios-stdvga.bin";
 k->vendor_id = PCI_VENDOR_ID_QEMU;
 k->device_id = PCI_DEVICE_ID_QEMU_VGA;
-k->class_id = PCI_CLASS_DISPLAY_VGA;
+k->class_id = 0;// PCI_CLASS_DISPLAY_VGA;
 dc->vmsd = &vmstate_vga_pci;
 dc->props = vga_pci_properties;
 set_bit(DEVICE_CATEGORY_DISPLAY, dc->categories);
-- 
1.8.3.1




[Qemu-devel] [PATCH] virtio-gpu: use glib alloc/free routines

2013-12-09 Thread Dave Airlie
From: Dave Airlie 

Oh I forgot to fix these up previously.
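
(Worth noting about the conversion: the glib allocators abort on
failure rather than returning NULL, so NULL checks left behind after
g_new0()/g_malloc() -- like the one in virtgpu_resource_create_2d()
below -- are now dead code.)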

Signed-off-by: Dave Airlie 
---
 hw/display/virtio-gpu.c | 14 ++
 1 file changed, 6 insertions(+), 8 deletions(-)

diff --git a/hw/display/virtio-gpu.c b/hw/display/virtio-gpu.c
index 7bf2fbb..28dcd1e 100644
--- a/hw/display/virtio-gpu.c
+++ b/hw/display/virtio-gpu.c
@@ -137,7 +137,7 @@ static void virtgpu_resource_create_2d(VirtIOGPU *g,
 pixman_format_code_t pformat;
 struct virtgpu_simple_resource *res;
 
-res = calloc(1, sizeof(struct virtgpu_simple_resource));
+res = g_new0(struct virtgpu_simple_resource, 1);
 if (!res)
return;
 
@@ -159,7 +159,7 @@ static void virtgpu_resource_destroy(struct virtgpu_simple_resource *res)
 {
 pixman_image_unref(res->image);
 QLIST_REMOVE(res, next);
-free(res);
+g_free(res);
 }
 
 static void virtgpu_resource_unref(VirtIOGPU *g,
@@ -310,12 +310,10 @@ static void virtgpu_resource_attach_backing(VirtIOGPU *g,
 if (!res)
return;
 
-res_iovs = malloc(att_rb->nr_entries * sizeof(struct iovec));
-if (!res_iovs)
-   return;
+res_iovs = g_new0(struct iovec, att_rb->nr_entries);
 
 if (iov_cnt > 1) {
-   data = malloc(gsize);
+   data = g_malloc(gsize);
iov_to_buf(iov, iov_cnt, 0, data, gsize);
 } else
data = iov[0].iov_base;
@@ -337,7 +335,7 @@ static void virtgpu_resource_attach_backing(VirtIOGPU *g,
 res->iov_cnt = att_rb->nr_entries;
 
 if (iov_cnt > 1)
-   free(data);
+g_free(data);
 }
 
 static void virtgpu_resource_inval_backing(VirtIOGPU *g,
@@ -354,7 +352,7 @@ static void virtgpu_resource_inval_backing(VirtIOGPU *g,
 for (i = 0; i < res->iov_cnt; i++) {
 cpu_physical_memory_unmap(res->iov[i].iov_base, res->iov[i].iov_len, 1, res->iov[i].iov_len);
 }
-free(res->iov);
+g_free(res->iov);
 res->iov_cnt = 0;
 res->iov = NULL;
 }
-- 
1.8.3.1




Re: [Qemu-devel] [RFC] sdl2 + virtio-gpu repost

2013-12-09 Thread Dave Airlie
On Tue, Dec 10, 2013 at 2:05 PM, Dave Airlie  wrote:
> Hi,
>
> This is a repost of the latest SDL2 UI layer along with virtio-gpu,
>
> I've merged the SDL2 base and multi-head patches into one, and I've split
> out the display notifiers stuff and the sdl2 demo hack.
>
> The virtio-gpu/vga wrapping is still unresolved, I've failed to find a cleaner
> way yet, hopefully Gerd will advise further, but having 4 virtio-gpu- objects
> is getting ugly fast. I've also included a doc on the basic queues used in the
> virtio-gpu and what commands are sent on them.

Oh I added one more patch to use glib allocation routines in
virtio-gpu, I forgot that was on my list!

Dave.



Re: [Qemu-devel] [PATCH 1/8] ui/sdl2 : initial port to SDL 2.0 (v2.0)

2013-12-10 Thread Dave Airlie
On Wed, Dec 11, 2013 at 12:35 AM, Gerd Hoffmann  wrote:
>   Hi,
>
>> The biggest changes were in the input handling, where SDL2 has done a major
>> overhaul, and I've had to include a generated translation file to get from
>> SDL2 codes back to qemu compatible ones. I'm still not sure how the keyboard
>> layout code works in qemu, so there may be further work if someone can point
>> me a test case that works with SDL1.2 and doesn't with SDL2.
>
> I've picked it up, cleaned up a bit (more to be done), switched over to
> new input core.  Simplified the keyboard mess, the new input support in
> SDL2 allows us to reduce it to a single+simple lookup table.  Pushed
> here:

Oh thanks for that, it looks a lot cleaner, especially the input
handling; let me know if there is anything you'd like me to look at.

I wasn't sure if we wanted to keep the old text console switching
stuff on console 0 or not; I see you've ripped it out, and I can't say
I ever used it anyway.

I've also started reworking gtk for multi-head and have the basics
working there as well.

Dave.



[Qemu-devel] issue with vgabios lfb and virtio vga

2013-12-11 Thread Dave Airlie
Hi Gerd,

so I have a bit of a conflict I'm not sure how best to handle between
how vgabios.c locates its LFB and how virtio requires the BAR get laid
out.

So virtio-pci requires the BARs are laid out for MSI support with the
config i/o ports at BAR 0, and MSI BAR at BAR 1.

Now vgabios.c does a check of BAR 0 and BAR 1 to see if they are
0xfff1 masked; this protects against the i/o BAR, but fails to
protect against the LFB one, as PCI BARs don't encode the size, just
the base address, and a 4k BAR can be aligned to a larger size.

So even if I expand the current vgabios.c code to check for a
potential BAR 2, it'll fail if BAR 1 gets aligned to something larger
than the 4k it requires.

The correct way to size BARs is to store the old value, write
0xffffffff and see how many bits stick, then write back the old value.
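
For reference, the sequence looks roughly like this (a self-contained
sketch; pci_cfg_read32()/pci_cfg_write32() are hypothetical stand-ins
for whatever config accessors the vgabios has):

    #include <stdint.h>

    /* Hypothetical config-space accessors. */
    uint32_t pci_cfg_read32(int bdf, int reg);
    void pci_cfg_write32(int bdf, int reg, uint32_t val);

    /* Size a 32-bit memory BAR; returns 0 if the BAR is unimplemented. */
    static uint32_t pci_bar_size(int bdf, int bar_reg)
    {
        uint32_t old = pci_cfg_read32(bdf, bar_reg);
        uint32_t mask;

        pci_cfg_write32(bdf, bar_reg, 0xffffffff);
        mask = pci_cfg_read32(bdf, bar_reg);
        pci_cfg_write32(bdf, bar_reg, old);   /* restore the base */

        if (mask == 0) {
            return 0;                         /* BAR not implemented */
        }
        mask &= ~0xfU;                        /* drop the memory type bits */
        return ~mask + 1;                     /* lowest writable bit == size */
    }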

Any ideas? I seem to remember vgabios.c had a hack in the past for
vmware, but I'm not sure.

Dave.



Re: [Qemu-devel] issue with vgabios lfb and virtio vga

2013-12-12 Thread Dave Airlie
On Thu, Dec 12, 2013 at 6:17 PM, Gerd Hoffmann  wrote:
> On Do, 2013-12-12 at 09:51 +1000, Dave Airlie wrote:
>> Now the vgabios.c does a check of bar 0 and bar 1 to see if they are
>> 0xfff1 masked, this protects against the the i/o bar but fails to
>> protect against the LFB one as PCI BARs don't encode the size just the
>> base address, and a 4k BAR can be aligned to a larger size.
>
>> Any ideas? I seem to remember vgabios.c had a hack in the past for
>> vmware, but I'm not sure.
>
> The fallback to bar #1 *is* the vmware hack ;)
>
> Something like the attached patch should do the trick.
>
Oh do we generate the VGABIOS from seabios now or are we going to?

I've been using the vgabios URL from the pc-bios/README file.

Dave.



Re: [Qemu-devel] [PATCH 01/42] ui/sdl2 : initial port to SDL 2.0 (v2.0)

2013-12-16 Thread Dave Airlie
On Tue, Dec 17, 2013 at 8:12 AM, Stefan Weil  wrote:
> Am 16.12.2013 11:48, schrieb Gerd Hoffmann:
>> From: Dave Airlie 
>>
>> I've ported the SDL1.2 code over, and rewritten it to use the SDL2 interface.
>>
>> The biggest changes were in the input handling, where SDL2 has done a major
>> overhaul, and I've had to include a generated translation file to get from
>> SDL2 codes back to qemu compatible ones. I'm still not sure how the keyboard
>> layout code works in qemu, so there may be further work if someone can point
>> me a test case that works with SDL1.2 and doesn't with SDL2.
>>
>> Some SDL env vars we used to set are no longer used by SDL2,
>> Windows, OSX support is untested,
>>
>> I don't think we can link to SDL1.2 and SDL2 at the same time, so I felt
>> using --with-sdlabi=2.0 to select the new code should be fine, like how
>> gtk does it.
>>
>> v1.1: fix keys in text console
>> v1.2: fix shutdown, cleanups a bit of code, support ARGB cursor
>>
>> v2.0: merge the SDL multihead patch into this, g_new the number of consoles
>> needed, wrap DCL inside per-console structure.
>>
>> Signed-off-by: Dave Airlie 
>> Signed-off-by: Gerd Hoffmann 
>> ---
>>  configure|  23 +-
>>  ui/Makefile.objs |   4 +-
>>  ui/sdl.c |   3 +
>>  ui/sdl2.c| 981 
>> +++
>>  ui/sdl2_scancode_translate.h | 260 
>>  ui/sdl_keysym.h  |   3 +-
>>  6 files changed, 1267 insertions(+), 7 deletions(-)
>>  create mode 100644 ui/sdl2.c
>>  create mode 100644 ui/sdl2_scancode_translate.h
>>
>> diff --git a/configure b/configure
>> index edfea95..88080cc 100755
>> --- a/configure
>> +++ b/configure
> [...]
>> @@ -1789,12 +1794,22 @@ fi
>>
>>  # Look for sdl configuration program (pkg-config or sdl-config).  Try
>>  # sdl-config even without cross prefix, and favour pkg-config over 
>> sdl-config.
>> -if test "`basename $sdl_config`" != sdl-config && ! has ${sdl_config}; then
>> -  sdl_config=sdl-config
>> +
>> +if test $sdlabi == "2.0"; then
>
> Please replace '==' by a single '=' here. dash (and maybe other less
> sophisticated shells) don't like '=='.
>
>> +sdl_config=$sdl2_config
>> +sdlname=sdl2
>> +sdlconfigname=sdl2_config
>> +else
>> +sdlname=sdl
>> +sdlconfigname=sdl_config
>> +fi
>> +
>> +if test "`basename $sdl_config`" != $sdlconfigname && ! has ${sdl_config}; 
>> then
>> +  sdl_config=$sdlconfigname
>>  fi
>>
>> -if $pkg_config sdl --exists; then
>> -  sdlconfig="$pkg_config sdl"
>> +if $pkg_config $sdlname --exists; then
>> +  sdlconfig="$pkg_config $sdlname"
>>_sdlversion=`$sdlconfig --modversion 2>/dev/null | sed 's/[^0-9]//g'`
>>  elif has ${sdl_config}; then
>>sdlconfig="$sdl_config"
> [...]
>
>
> I know that sdl2.c is based on sdl.c which was coded before the
> introduction of the current coding rules, but would you mind if I send a
> follow-up patch which fixes the warnings from checkpatch.pl for sdl2.c?
> Tools like astyle can do this automatically, and the result is pretty good.
>
> Some of the other patches don't include a Signed-off-by, so those
> patches are only ready for local testing.
>
> I was very curious to get test results here in my environment because of
> a strange effect which I had noticed recently: booting a Linux system
> (Tiny Core Linux) with SDL 1.2 (1:06) takes twice as long as with GTK
> (0:33) or curses (0:29). The test was run on a remote Linux x86_64
> server (so X display output is slower than normal). The huge difference
> is not caused by more activity but simply by delays where the QEMU
> process is waiting.
>
> I hoped that SDL 2.0 might do a better job but was disappointed: I had
> to abort the test after 10 minutes, the Linux boot made progress, but
> very slowly and with lots of timeout errors.

Remote X isn't a scenario SDL is meant for at all; SDL is meant for
fast local screen access.

SDL2 generally uses OpenGL underneath, as that is most likely the
fastest option in most scenarios. There may be some tuning you can use
to work around this.

I think Gerd has some follow-up patches to clean sdl2.c up a bit as well.

Dave.



Re: [Qemu-devel] Multi-head support RFC

2013-11-05 Thread Dave Airlie
On Wed, Nov 6, 2013 at 6:42 AM, John Baboval  wrote:
> Hello,
>
> I am currently the device model maintainer for XenClient Enterprise. As you
> may or may not know, we maintain a patch queue on top of QEMU (currently
> 1.3) that adds functionality needed to support XCE features.
>
> One of the major things we add is robust multi-head support. This includes
> DDC emulation for EDID data, variable VRAM size, monitor hot-plug support,
> simulated VSYNC, and guest controlled display orientation. This includes
> both the necessary interfaces between the hw and ui, and a new emulated
> adapter (with drivers) that exercises the interfaces.

I don't think we'd want to lump all these things together, though.

I've started looking at doing multi-head support for a new virtio-gpu,
and I've gotten basic multi-head working with SDL2.0 with cursor
support.

It currently just adds multiple DisplaySurfaces to the QemuConsole.
Gerd said he thought I should be using multiple QemuConsoles, but I
really didn't think this was a good idea, and I still think multiple
surfaces make more sense with regard to how best to interact with this.

Why do you need to emulate DDC btw? Is this just to fool the guest
VESA code etc.?

Dave.



Re: [Qemu-devel] Multi-head support RFC

2013-11-06 Thread Dave Airlie
On Wed, Nov 6, 2013 at 8:57 PM, Gerd Hoffmann  wrote:
>   Hi,
>
>> It currently just adds multiple DisplaySurfaces to the QemuConsole,
>> now Gerd said he thought I should be using multiple QemuConsoles but I
>> really didn't think this was a good idea,
>
> Why?

It's a fair question; I haven't tried the other way yet, and I fully
intend to do further investigation.

I think my main reason was that current consoles, at least when
interacting with the gtk frontend, are done as one console per tab.
Now if we have multiple outputs I would want them to be visible at the
same time; I understand we could fix this, but we'd need some sort of
console grouping to say this group of consoles represents this set of
outputs.

Further to that, I'm not sure how we'd want to represent multiple
graphics cards. I assume we'd want to be able to see them all at once,
but still have some sort of binding between separate outputs on one
card.

Dave.



[Qemu-devel] [PATCH] ui/sdl2 : initial port to SDL 2.0

2013-11-10 Thread Dave Airlie
From: Dave Airlie 

I've ported the SDL1.2 code over, and rewritten it to use the SDL2 interface.

The biggest changes were in the input handling, where SDL2 has done a major
overhaul, and I've had to include a generated translation file to get from
SDL2 codes back to qemu compatible ones. I'm still not sure how the keyboard
layout code works in qemu, so there may be further work if someone can point
me a test case that works with SDL1.2 and doesn't with SDL2.

Some SDL env vars we used to set are no longer used by SDL2.
Windows and OSX support is untested.

I don't think we can link to SDL1.2 and SDL2 at the same time, so I felt
using --with-sdlabi=2.0 to select the new code should be fine, like how
gtk does it.

Signed-off-by: Dave Airlie 
---
 configure|  23 +-
 ui/Makefile.objs |   4 +-
 ui/sdl.c |   3 +
 ui/sdl2.c| 889 +++
 ui/sdl2_scancode_translate.h | 260 +
 ui/sdl_keysym.h  |   3 +-
 6 files changed, 1175 insertions(+), 7 deletions(-)
 create mode 100644 ui/sdl2.c
 create mode 100644 ui/sdl2_scancode_translate.h

diff --git a/configure b/configure
index 9addff1..bf3be37 100755
--- a/configure
+++ b/configure
@@ -158,6 +158,7 @@ docs=""
 fdt=""
 pixman=""
 sdl=""
+sdlabi="1.2"
 virtfs=""
 vnc="yes"
 sparse="no"
@@ -310,6 +311,7 @@ query_pkg_config() {
 }
 pkg_config=query_pkg_config
 sdl_config="${SDL_CONFIG-${cross_prefix}sdl-config}"
+sdl2_config="${SDL2_CONFIG-${cross_prefix}sdl2-config}"
 
 # default flags for all hosts
 QEMU_CFLAGS="-fno-strict-aliasing $QEMU_CFLAGS"
@@ -710,6 +712,8 @@ for opt do
   ;;
   --enable-sdl) sdl="yes"
   ;;
+  --with-sdlabi=*) sdlabi="$optarg"
+  ;;
   --disable-qom-cast-debug) qom_cast_debug="no"
   ;;
   --enable-qom-cast-debug) qom_cast_debug="yes"
@@ -1092,6 +1096,7 @@ echo "  --disable-strip  disable stripping 
binaries"
 echo "  --disable-werror disable compilation abort on warning"
 echo "  --disable-sdldisable SDL"
 echo "  --enable-sdl enable SDL"
+echo "  --with-sdlabiselect preferred SDL ABI 1.2 or 2.0"
 echo "  --disable-gtkdisable gtk UI"
 echo "  --enable-gtk enable gtk UI"
 echo "  --disable-virtfs disable VirtFS"
@@ -1751,12 +1756,22 @@ fi
 
 # Look for sdl configuration program (pkg-config or sdl-config).  Try
 # sdl-config even without cross prefix, and favour pkg-config over sdl-config.
-if test "`basename $sdl_config`" != sdl-config && ! has ${sdl_config}; then
-  sdl_config=sdl-config
+
+if test $sdlabi == "2.0"; then
+sdl_config=$sdl2_config
+sdlname=sdl2
+sdlconfigname=sdl2_config
+else
+sdlname=sdl
+sdlconfigname=sdl_config
+fi
+
+if test "`basename $sdl_config`" != $sdlconfigname && ! has ${sdl_config}; then
+  sdl_config=$sdlconfigname
 fi
 
-if $pkg_config sdl --exists; then
-  sdlconfig="$pkg_config sdl"
+if $pkg_config $sdlname --exists; then
+  sdlconfig="$pkg_config $sdlname"
   _sdlversion=`$sdlconfig --modversion 2>/dev/null | sed 's/[^0-9]//g'`
 elif has ${sdl_config}; then
   sdlconfig="$sdl_config"
diff --git a/ui/Makefile.objs b/ui/Makefile.objs
index f33be47..721ad37 100644
--- a/ui/Makefile.objs
+++ b/ui/Makefile.objs
@@ -9,12 +9,12 @@ vnc-obj-y += vnc-jobs.o
 
 common-obj-y += keymaps.o console.o cursor.o input.o qemu-pixman.o
 common-obj-$(CONFIG_SPICE) += spice-core.o spice-input.o spice-display.o
-common-obj-$(CONFIG_SDL) += sdl.o sdl_zoom.o x_keymap.o
+common-obj-$(CONFIG_SDL) += sdl.o sdl_zoom.o x_keymap.o sdl2.o
 common-obj-$(CONFIG_COCOA) += cocoa.o
 common-obj-$(CONFIG_CURSES) += curses.o
 common-obj-$(CONFIG_VNC) += $(vnc-obj-y)
 common-obj-$(CONFIG_GTK) += gtk.o x_keymap.o
 
-$(obj)/sdl.o $(obj)/sdl_zoom.o: QEMU_CFLAGS += $(SDL_CFLAGS) 
+$(obj)/sdl.o $(obj)/sdl_zoom.o $(obj)/sdl2.o: QEMU_CFLAGS += $(SDL_CFLAGS) 
 
 $(obj)/gtk.o: QEMU_CFLAGS += $(GTK_CFLAGS) $(VTE_CFLAGS)
diff --git a/ui/sdl.c b/ui/sdl.c
index 9d8583c..736bb95 100644
--- a/ui/sdl.c
+++ b/ui/sdl.c
@@ -26,6 +26,8 @@
 #undef WIN32_LEAN_AND_MEAN
 
 #include <SDL.h>
+
+#if SDL_MAJOR_VERSION == 1
 #include <SDL_syswm.h>
 
 #include "qemu-common.h"
@@ -966,3 +968,4 @@ void sdl_display_init(DisplayState *ds, int full_screen, int no_frame)
 
 atexit(sdl_cleanup);
 }
+#endif
diff --git a/ui/sdl2.c b/ui/sdl2.c
new file mode 100644
index 000..1a71f59
--- /dev/null
+++ b/ui/sdl2.c
@@ -0,0 +1,889 @@
+/*
+ * QEMU SDL display driver
+ *
+ * Copyright (c) 2003 Fabrice Bellard
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this software and associate

Re: [Qemu-devel] [PATCH] ui/sdl2 : initial port to SDL 2.0

2013-11-11 Thread Dave Airlie
On Mon, Nov 11, 2013 at 2:02 PM, Anthony Liguori  wrote:
> On Sun, Nov 10, 2013 at 3:15 PM, Dave Airlie  wrote:
>> From: Dave Airlie 
>>
>> I've ported the SDL1.2 code over, and rewritten it to use the SDL2 interface.
>>
>> The biggest changes were in the input handling, where SDL2 has done a major
>> overhaul, and I've had to include a generated translation file to get from
>> SDL2 codes back to qemu compatible ones. I'm still not sure how the keyboard
>> layout code works in qemu, so there may be further work if someone can point
>> me a test case that works with SDL1.2 and doesn't with SDL2.
>>
>> Some SDL env vars we used to set are no longer used by SDL2,
>> Windows, OSX support is untested,
>>
>> I don't think we can link to SDL1.2 and SDL2 at the same time, so I felt
>> using --with-sdlabi=2.0 to select the new code should be fine, like how
>> gtk does it.
>>
>> Signed-off-by: Dave Airlie 
>> ---
>>  configure|  23 +-
>>  ui/Makefile.objs |   4 +-
>>  ui/sdl.c |   3 +
>>  ui/sdl2.c| 889 
>> +++
>
> Can we refactor this to not duplicate everything and instead have
> function hooks or even #ifdefs for the things that are different?  We
> try to guess the right SDL to use in configure.  See how we handle
> GTK2 vs. GTK3.
>
> It's very hard to review ATM due to the split.

No, I talked to enough people at KVM Forum and everyone said I should
split this into a separate file; please don't make me undo that now, I
originally did it with ifdefs and just spent a few days redoing it the
other way!

>
> Regarding the keycodes, danpb has a great write up on his blog:
>
> https://www.berrange.com/posts/2010/07/04/a-summary-of-scan-code-key-codes-sets-used-in-the-pc-virtualization-stack/

Okay, I'll read that later.

>
> Internally, we use a variant of raw XT scancodes.  We have a
> keymapping routine that translates from a symbolic key (i.e. "capital
> A") to the appropriate XT scancode.
>
> SDL 1.x at least lets you get at raw X11 scancodes which are either
> evdev or PS/2 codes depending on the version of X11.  So for SDL 1.x
> we have two translations mechanisms 1) X11 scancodes to XT scancodes
> and 2) SDL keysyms to internal QEMU keysyms.
>
> From what I can tell, SDL2 has moved from just passing through raw X11
> scancodes (which like I said, can be different depending on your X
> configuration) to having an intermediate translation layer.  See
> comments inline:
>
>>  ui/sdl2_scancode_translate.h | 260 +
>>  ui/sdl_keysym.h  |   3 +-
>>  6 files changed, 1175 insertions(+), 7 deletions(-)
>>  create mode 100644 ui/sdl2.c
>>  create mode 100644 ui/sdl2_scancode_translate.h
>>
>> diff --git a/configure b/configure
>> index 9addff1..bf3be37 100755
>> --- a/configure
>> +++ b/configure
>> @@ -158,6 +158,7 @@ docs=""
>>  fdt=""
>>  pixman=""
>>  sdl=""
>> +sdlabi="1.2"
>>  virtfs=""
>>  vnc="yes"
>>  sparse="no"
>> @@ -310,6 +311,7 @@ query_pkg_config() {
>>  }
>>  pkg_config=query_pkg_config
>>  sdl_config="${SDL_CONFIG-${cross_prefix}sdl-config}"
>> +sdl2_config="${SDL2_CONFIG-${cross_prefix}sdl2-config}"
>>
>>  # default flags for all hosts
>>  QEMU_CFLAGS="-fno-strict-aliasing $QEMU_CFLAGS"
>> @@ -710,6 +712,8 @@ for opt do
>>;;
>>--enable-sdl) sdl="yes"
>>;;
>> +  --with-sdlabi=*) sdlabi="$optarg"
>> +  ;;
>>--disable-qom-cast-debug) qom_cast_debug="no"
>>;;
>>--enable-qom-cast-debug) qom_cast_debug="yes"
>> @@ -1092,6 +1096,7 @@ echo "  --disable-strip  disable stripping 
>> binaries"
>>  echo "  --disable-werror disable compilation abort on warning"
>>  echo "  --disable-sdldisable SDL"
>>  echo "  --enable-sdl enable SDL"
>> +echo "  --with-sdlabiselect preferred SDL ABI 1.2 or 2.0"
>>  echo "  --disable-gtkdisable gtk UI"
>>  echo "  --enable-gtk enable gtk UI"
>>  echo "  --disable-virtfs disable VirtFS"
>> @@ -1751,12 +1756,22 @@ fi
>>
>>  # Look for sdl configuration program (pkg-config or sdl-config).  Try
>>  # sdl-config even without cross prefix, and favour pkg-config over 
>&

Re: [Qemu-devel] [PATCH] ui/sdl2 : initial port to SDL 2.0

2013-11-11 Thread Dave Airlie
On Tue, Nov 12, 2013 at 12:07 AM, Anthony Liguori  wrote:
>
> On Nov 11, 2013 1:10 AM, "Dave Airlie"  wrote:
>>
>> On Mon, Nov 11, 2013 at 2:02 PM, Anthony Liguori 
>> wrote:
>> > On Sun, Nov 10, 2013 at 3:15 PM, Dave Airlie  wrote:
>> >> From: Dave Airlie 
>> >>
>> >> I've ported the SDL1.2 code over, and rewritten it to use the SDL2
>> >> interface.
>> >>
>> >> The biggest changes were in the input handling, where SDL2 has done a
>> >> major
>> >> overhaul, and I've had to include a generated translation file to get
>> >> from
>> >> SDL2 codes back to qemu compatible ones. I'm still not sure how the
>> >> keyboard
>> >> layout code works in qemu, so there may be further work if someone can
>> >> point
>> >> me a test case that works with SDL1.2 and doesn't with SDL2.
>> >>
>> >> Some SDL env vars we used to set are no longer used by SDL2,
>> >> Windows, OSX support is untested,
>> >>
>> >> I don't think we can link to SDL1.2 and SDL2 at the same time, so I
>> >> felt
>> >> using --with-sdlabi=2.0 to select the new code should be fine, like how
>> >> gtk does it.
>> >>
>> >> Signed-off-by: Dave Airlie 
>> >> ---
>> >>  configure|  23 +-
>> >>  ui/Makefile.objs |   4 +-
>> >>  ui/sdl.c |   3 +
>> >>  ui/sdl2.c| 889
>> >> +++
>> >
>> > Can we refactor this to not duplicate everything and instead have
>> > function hooks or even #ifdefs for the things that are different?  We
>> > try to guess the right SDL to use in configure.  See how we handle
>> > GTK2 vs. GTK3.
>> >
>> > It's very hard to review ATM due to the split.
>>
>> No I talked to enough people at kvmforum and everyone said I should
>> split this into a separate file, please don't make me undo that now, I
>> originally did it with ifdefs and just spent a few days redoing it the
>> other way!
>
> Perhaps whoever you spoke with should speak up then.  Forking sdl.c seems
> like a pretty bad idea to me.

It does right now, but since I can't add multi-head support to SDL1.2
it'll rapidly start diverging in order to add features that SDL1.2
can't support.

BTW here is my old sdl.c implementation; it's pretty ugly, and later
commits in that branch have the input bits.

http://cgit.freedesktop.org/~airlied/qemu/commit/?h=virtio-gpu&id=ee44399a3dbce8da810329230f0a439a3b88cd67

As I said, I suspect this will just get uglier as time goes on, as we
add SDL2-only features.
>
>>
>> >
>> > Regarding the keycodes, danpb has a great write up on his blog:
>> >
>> >
>> > https://www.berrange.com/posts/2010/07/04/a-summary-of-scan-code-key-codes-sets-used-in-the-pc-virtualization-stack/
>>
>> Okay I'll read that later,

>> >> +static uint8_t sdl_keyevent_to_keycode(const SDL_KeyboardEvent *ev)
>> >> +{
>> >> +int keycode;
>> >> +
>> >> +keycode = ev->keysym.scancode;
>> >> +
>> >> +keycode = sdl2_scancode_to_keycode[keycode];
>> >> +if (keycode >= 89 && keycode < 150) {
>> >> +keycode = translate_evdev_keycode(keycode - 89);
>> >> +}
>> >
>> > This doesn't seem right to me.  It just so happened that the low value
>> > scan codes are the same for both X11, evdev, and SDL 1.x.  That's why
>> > we only used the evdev/x11 translation tables for higher codes.  It
>> > was just a table size optimization.
>> >
>> > Presumably, sdl2 doesn't do this though, or does it?  If it does,
>> > don't we need to probe for evdev instead of assuming it's there?
>>
The table I wrote translates to evdev codes, then we use the evdev
translator to translate to qemu ones, so it's perfectly fine as is;
SDL2 hides everything in its own table.
>
> Why not translate to XT directly?  The intermediate evdev step seems
> unnecessary.
>

Correctness and future maintainability: there exists a table in the
SDL2 source code going from evdev->SDL2, and I wrote trivial code to
reverse the contents of that table, so I know it's correct; if it
changes in future I can just do the same simple reversal and import
it. There is probably nothing stopping me adding the extra step at
that stage, but I wasn't sure it was a good idea going forward.
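
Concretely, the reversal is just the following (a sketch with assumed
table sizes; evdev_to_sdl2[] stands in for the real table in the SDL2
tree):

    #include <stdio.h>

    #define NUM_EVDEV     256
    #define NUM_SDL_SCANS 512

    /* Stand-in for SDL2's evdev -> SDL scancode table. */
    static const unsigned short evdev_to_sdl2[NUM_EVDEV];

    int main(void)
    {
        static unsigned short sdl2_to_evdev[NUM_SDL_SCANS];
        int i;

        /* invert the mapping */
        for (i = 0; i < NUM_EVDEV; i++) {
            unsigned short s = evdev_to_sdl2[i];
            if (s && s < NUM_SDL_SCANS) {
                sdl2_to_evdev[s] = i;
            }
        }
        /* emit a C initialiser, eight entries per line */
        for (i = 0; i < NUM_SDL_SCANS; i++) {
            printf("%3d,%c", sdl2_to_evdev[i], (i % 8 == 7) ? '\n' : ' ');
        }
        return 0;
    }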

Dave.



Re: [Qemu-devel] [PATCH] vfio-pci: Fix Nvidia MSI ACK through 0x88000 quirk

2013-11-11 Thread Dave Airlie
On Tue, Nov 12, 2013 at 7:43 AM, Alex Williamson
 wrote:
> When MSI is enabled on Nvidia GeForce cards the driver seems to
> acknowledge the interrupt by writing a 0xff byte to the MSI capability
> ID register using the PCI config space mirror at offset 0x88000 from
> BAR0.  Without this, the device will only fire a single interrupt.
> VFIO handles the PCI capability ID/next registers as virtual w/o write
> support, so any write through config space is currently dropped.  Add
> a check for this and allow the write through the BAR window.  The
> registers are read-only anyway.

This is only half the truth; I'm afraid if I'm right it's much worse than that.

At least on some GPUs the MSI ack is done via PCI config space itself,
and on some it's done via the mirror, and yes, it matters on some
cards which way it works.
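
To make the quirk concrete, the detection amounts to something like
this (a standalone sketch, not the actual vfio code; msi_cap would be
found by walking the capability list at setup time):

    #include <stdbool.h>
    #include <stdint.h>

    #define NV_CFG_MIRROR 0x88000   /* PCI config mirror inside BAR0 */

    /* True if a BAR0 write is the driver's MSI ack: a one-byte 0xff
     * written to the mirrored MSI capability ID register. */
    static bool nvidia_msi_ack_write(uint64_t bar0_off, unsigned int msi_cap,
                                     unsigned int size, uint8_t data)
    {
        return bar0_off == NV_CFG_MIRROR + msi_cap &&
               size == 1 && data == 0xff;
    }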

Dave.



Re: [Qemu-devel] [PATCH] vfio-pci: Fix Nvidia MSI ACK through 0x88000 quirk

2013-11-11 Thread Dave Airlie
On Tue, Nov 12, 2013 at 8:32 AM, Alex Williamson
 wrote:
> On Tue, 2013-11-12 at 07:55 +1000, Dave Airlie wrote:
>> On Tue, Nov 12, 2013 at 7:43 AM, Alex Williamson
>>  wrote:
>> > When MSI is enabled on Nvidia GeForce cards the driver seems to
>> > acknowledge the interrupt by writing a 0xff byte to the MSI capability
>> > ID register using the PCI config space mirror at offset 0x88000 from
>> > BAR0.  Without this, the device will only fire a single interrupt.
>> > VFIO handles the PCI capability ID/next registers as virtual w/o write
>> > support, so any write through config space is currently dropped.  Add
>> > a check for this and allow the write through the BAR window.  The
>> > registers are read-only anyway.
>>
>> This is only half the truth, I'm afraid if I'm right its much worse than 
>> that.
>>
>> At least on some GPUs the MSI ack is done via PCI config space itself,
>> and on some its done via the mirror, and yes it matters on some cards
>> which way it works.
>
> I was hoping that wouldn't be the case since it seems fairly universal
> that PCI config space access should be considered slow and avoided for
> things like this.  But, I suppose with MMConfig it's no worse than
> device MMIO space.

For reference, dig around in here:

http://www.mail-archive.com/nouveau@lists.freedesktop.org/msg14437.html

NVIDIA commented on it as well; it may be that they require the
slowness, or it's just some other coherency issue.

Dave.



[Qemu-devel] dataplane, thread and gpu stuff

2013-11-17 Thread Dave Airlie
Hi,

So after talking to a few people at KVM Forum, I think the GPU code
should probably use the dataplane stuff from the outset.

The main advantage I think this gives me is being able to dequeue
objects from the vq in a thread and send irq vectors from there as
well.

Though since it appears the dataplane stuff is kvm-specific (at least
the irq handling), I was wondering how I should deal with fallbacks
for non-kvm operation, and quite how much falling back I need to do.

Can I still use the dataplane/vring code from the normal bottom half
handlers, or do I have to write separate code for both situations?

Dave.



[Qemu-devel] console muti-head some more design input

2013-11-18 Thread Dave Airlie
So I've started banging my head against using QemuConsole as the
container for a single output, and have been left with the usual 10
ways to design things; but since I don't want to spend ages
implementing one way just to be told it's unacceptable, it would be
good to get some more up-front design input.

Current code is in
http://cgit.freedesktop.org/~airlied/qemu/log/?h=virtio-gpu-multiconsole

So I felt I had a choice here for sharing a single output surface
amongst outputs:

a) have multiple QemuConsoles reference multiple DisplaySurfaces which
reference a single pixman image,
b) have multiple QemuConsoles reference a single DisplaySurface which
references a single pixman image.

In either case we need to store the width/height of the console and
the x/y offset into the output surface somewhere, as the output
dimensions will not correspond to the surface dimensions, or the
surface dimensions won't correspond to the pixman image dimensions.
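
Concretely, the minimum per-console state is something like this (a
sketch; the field names are illustrative, not what's in my tree):

    #include <stdint.h>

    /* Placement of one console (head) within the shared output image. */
    typedef struct ConsoleGeometry {
        int x, y;                /* offset into the pixman image */
        uint32_t width, height;  /* size of this head */
    } ConsoleGeometry;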

So I picked (b) in my current codebase. Once I untangled a few
lifetime issues (replace_surface frees the DisplaySurface == bad; this
is bad in general), I've stored the x/y/w/h in the QemuConsole (reused
the text console values for now).

Another issue I had is that I feel the console layer could do with
some sort of subclassing of objects, or the ability to store UI-layer
info in the console objects; e.g. I've added a ui_priv to the
DisplaySurface instead of having sdl2.c end up with an SDL_Texture
array and having to dig around to find it.

At the moment this is rendering a two-head console for me, with
cursors, with a virtio-vga kernel driver and the Xorg modesetting
driver persuaded to work, but I'd really like more feedback on the
direction this is going, as I get the feeling, Gerd, that you have
some specific ideas on how this should all work.

Dave.



Re: [Qemu-devel] console muti-head some more design input

2013-11-19 Thread Dave Airlie
On Tue, Nov 19, 2013 at 6:11 PM, Gerd Hoffmann  wrote:
>   Hi,
>
>> So I felt I had a choice here for sharing a single output surface
>> amongst outputs:
>>
>> a) have multiple QemuConsole reference multiple DisplaySurface wihch
>> reference a single pixman image,
>
> This one.
>
>> In either case we need to store, width/height of the console and x/y
>> offset into the output surface somewhere, as the output dimensions
>> will not correspond to surface dimensions or the surface dimensions
>> won't correspond to the pixman image dimensions
>
> Not needed (well, internal to virtio-gpu probably).

I think you are only considering output here; for input we definitely
need some idea of screen layout, and this needs to be stored
somewhere.

e.g. if SDL2 gets an input event in the right-hand window, it needs to
translate that into an input event on the whole output surface.

Have a look at the virtio-gpu branch in my repo (don't look at the
history, it's ugly, just the final state); you'll see code in sdl2.c
to do input translation from window coordinates to the overall screen
space. So we need at least the x,y offset in the UI code, and I think
we need to communicate that via the console.
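
The translation itself is then trivial (a sketch; it assumes the
per-window state carries the head's x/y offset, as the
sdl2_console_state struct in my tree does):

    /* Map a window-local mouse position into the global surface space. */
    static void window_to_global(const struct sdl2_console_state *scon,
                                 int wx, int wy, int *gx, int *gy)
    {
        *gx = scon->x + wx;
        *gy = scon->y + wy;
    }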

Otherwise I think I've done things the way you've said and it seems to
be working for me on a dual-head setup.

(Oh, and yes, this is all sw rendering only; to do 3D rendering we
need to put in a thread to do the GL stuff, but it interacts with the
console layer quite a bit, since SDL and the virtio-gpu need to be in
the same thread so things like resize can work.)

Dave.



Re: [Qemu-devel] console muti-head some more design input

2013-11-19 Thread Dave Airlie
>> Have a look the virtio-gpu branch in my repo (don't look at the
>> history, its ugly, just the final state), you'll see code in sdl2.c to
>> do input translation from window coordinates to the overall screen
>> space. So we need at least the x,y offset in the ui code, and I think
>> we need to communicate that via the console.
>>
>
> One of the patches I will be submitting as part of this includes 
> bi-directional calls to set the orientation. A HwOp, and a 
> DisplayChangeListenerOp. This allows you to move the display orientation 
> around in the guest (if your driver and backend support it), or to move the 
> orientation around by dragging windows... Either way you have the data you 
> need to get absolute coordinates right, even if you are scaling the guest 
> display in your windows. Whether the orientation offsets end up stored in the 
> QemuConsole or not becomes an implementation detail if you get notifications.

Okay, I just hacked up something similar with the bidirectional ops,
and ran into the fact that DisplayChangeListeners are stored per
DisplayState, so when my GPU driver tries to call back for console
number 1, the dcls for both consoles get called; this doesn't seem
optimal.

Dave.



Re: [Qemu-devel] console muti-head some more design input

2013-11-19 Thread Dave Airlie
On Wed, Nov 20, 2013 at 3:17 PM, Dave Airlie  wrote:
>>> Have a look the virtio-gpu branch in my repo (don't look at the
>>> history, its ugly, just the final state), you'll see code in sdl2.c to
>>> do input translation from window coordinates to the overall screen
>>> space. So we need at least the x,y offset in the ui code, and I think
>>> we need to communicate that via the console.
>>>
>>
>> One of the patches I will be submitting as part of this includes 
>> bi-directional calls to set the orientation. A HwOp, and a 
>> DisplayChangeListenerOp. This allows you to move the display orientation 
>> around in the guest (if your driver and backend support it), or to move the 
>> orientation around by dragging windows... Either way you have the data you 
>> need to get absolute coordinates right, even if you are scaling the guest 
>> display in your windows. Whether the orientation offsets end up stored in 
>> the QemuConsole or not becomes an implementation detail if you get 
>> notifications.
>
> Okay I just hacked up something similar with the bidirectional ops,
> and ran into the fact that DisplayChangeListeners are stored per
> DisplayState, so when my GPU drivers tries to callback for console
> number 1, the dcls for both consoles gets called, this doesn't seem so
> optimal.

Actually ignore that, I didn't cut-n-paste properly :-)

Dave.



[Qemu-devel] [PATCH 2/8] console: add state notifiers for ui<->display

2013-11-19 Thread Dave Airlie
From: Dave Airlie 

These are to be used for the UI to signal the video display,
and vice versa, about changes in the state of a console, like
size and offsets in relation to other consoles for input handling.
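
For illustration, the two directions end up being called roughly like
this (a sketch; con is whichever QemuConsole is affected, and a
width/height of 0 is used to turn a head off, as the later sdl2 hotkey
patch does):

    static void example_notify(QemuConsole *con)
    {
        /* UI -> device: request a head at (0,0) sized 1024x768 */
        graphic_hw_notify_state(con, 0, 0, 1024, 768);

        /* device -> UI: report the console's new placement and size */
        dpy_notify_state(con, 0, 0, 1024, 768);
    }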

Signed-off-by: Dave Airlie 
---
 include/ui/console.h |  8 +++-
 ui/console.c | 26 ++
 2 files changed, 33 insertions(+), 1 deletion(-)

diff --git a/include/ui/console.h b/include/ui/console.h
index 98edf41..5731081 100644
--- a/include/ui/console.h
+++ b/include/ui/console.h
@@ -174,6 +174,9 @@ typedef struct DisplayChangeListenerOps {
   int x, int y, int on);
 void (*dpy_cursor_define)(DisplayChangeListener *dcl,
   QEMUCursor *cursor);
+
+void (*dpy_notify_state)(DisplayChangeListener *dcl,
+int x, int y, uint32_t width, uint32_t height);
 } DisplayChangeListenerOps;
 
 struct DisplayChangeListener {
@@ -224,7 +227,8 @@ void dpy_text_resize(QemuConsole *con, int w, int h);
 void dpy_mouse_set(QemuConsole *con, int x, int y, int on);
 void dpy_cursor_define(QemuConsole *con, QEMUCursor *cursor);
 bool dpy_cursor_define_supported(QemuConsole *con);
-
+void dpy_notify_state(QemuConsole *con, int x, int y,
+  uint32_t width, uint32_t height);
 static inline int surface_stride(DisplaySurface *s)
 {
 return pixman_image_get_stride(s->image);
@@ -275,6 +279,7 @@ typedef struct GraphicHwOps {
 void (*gfx_update)(void *opaque);
 void (*text_update)(void *opaque, console_ch_t *text);
 void (*update_interval)(void *opaque, uint64_t interval);
+void (*notify_state)(void *opaque, int idx, int x, int y, uint32_t width, uint32_t height);
 } GraphicHwOps;
 
 QemuConsole *graphic_console_init(DeviceState *dev,
@@ -284,6 +289,7 @@ QemuConsole *graphic_console_init(DeviceState *dev,
 void graphic_hw_update(QemuConsole *con);
 void graphic_hw_invalidate(QemuConsole *con);
 void graphic_hw_text_update(QemuConsole *con, console_ch_t *chardata);
+void graphic_hw_notify_state(QemuConsole *con, int x, int y, uint32_t width, uint32_t height);
 
 QemuConsole *qemu_console_lookup_by_index(unsigned int index);
 QemuConsole *qemu_console_lookup_by_device(DeviceState *dev);
diff --git a/ui/console.c b/ui/console.c
index aad4fc9..c20e336 100644
--- a/ui/console.c
+++ b/ui/console.c
@@ -265,6 +265,16 @@ void graphic_hw_invalidate(QemuConsole *con)
 }
 }
 
+void graphic_hw_notify_state(QemuConsole *con, int x, int y, uint32_t width, uint32_t height)
+{
+if (!con) {
+con = active_console;
+}
+if (con && con->hw_ops->notify_state) {
+con->hw_ops->notify_state(con->hw, con->index, x, y, width, height);
+}
+}
+
 static void ppm_save(const char *filename, struct DisplaySurface *ds,
  Error **errp)
 {
@@ -1562,6 +1572,22 @@ bool dpy_cursor_define_supported(QemuConsole *con)
 return false;
 }
 
+void dpy_notify_state(QemuConsole *con, int x, int y,
+  uint32_t width, uint32_t height)
+{
+DisplayState *s = con->ds;
+DisplayChangeListener *dcl;
+
+QLIST_FOREACH(dcl, &s->listeners, next) {
+if (con != (dcl->con ? dcl->con : active_console)) {
+continue;
+}
+if (dcl->ops->dpy_notify_state) {
+dcl->ops->dpy_notify_state(dcl, x, y, width, height);
+}
+}
+}
+
 /***/
 /* register display */
 
-- 
1.8.3.1




[Qemu-devel] [RFC] virtio-gpu and sdl2 so far

2013-11-19 Thread Dave Airlie
Hey,

I thought I should post this for a bit more feedback and show where
I've gone so far.

It's all available in git:
http://cgit.freedesktop.org/~airlied/qemu/log/?h=virtio-gpu

The first patch is the sdl2 port with some minor changes I posted
before, then there are a bunch of console changes that I think
will make life easier for multi-head.

The sdl2 multi-head patch is why I'd like to keep SDL1.2 separate
from SDL2; there are a lot of changes in this, and it would be
messy to update SDL1 code at the same time and not cause breakages.

Then there are the initial virtio-gpu and virtio-vga patches; there
is still some future-proofing work to be done, and I'd like to
try and sort out the 3D integration and threading before doing
much more with them.

The last patch you can ignore; I just included it because it was
in my tree, and it's just a hack to keep libvirt happy on my system.

Dave.




[Qemu-devel] [PATCH 4/8] console: add ability to wrap a console.

2013-11-19 Thread Dave Airlie
From: Dave Airlie 

In order to implement virtio-vga on top of virtio-gpu, we need to be
able to wrap, from inside virtio-vga (which initialises after
virtio-gpu), the first console that virtio-gpu registers. With this
interface virtio-vga can store the virtio-gpu interfaces and call them
from its own ones.
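
For illustration, the intended calling pattern is roughly this (a
sketch of the wrapper side; virtio-vga's actual hook-up is in a later
patch):

    /* Keep the wrapped device's ops so the wrapper can forward to them. */
    static const GraphicHwOps *orig_ops;
    static void *orig_opaque;

    static void wrapper_gfx_update(void *opaque)
    {
        /* ... the wrapper's own work, then forward to the wrapped device: */
        orig_ops->gfx_update(orig_opaque);
    }

    static void wrapper_attach(QemuConsole *con, DeviceState *dev,
                               const GraphicHwOps *my_ops, void *my_opaque)
    {
        graphic_console_wrap(con, dev, my_ops, my_opaque,
                             &orig_ops, &orig_opaque);
    }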

Signed-off-by: Dave Airlie 
---
 include/ui/console.h |  7 +++
 ui/console.c | 13 +
 2 files changed, 20 insertions(+)

diff --git a/include/ui/console.h b/include/ui/console.h
index be304fe..a143a0d 100644
--- a/include/ui/console.h
+++ b/include/ui/console.h
@@ -286,6 +286,13 @@ QemuConsole *graphic_console_init(DeviceState *dev,
   const GraphicHwOps *ops,
   void *opaque);
 
+void graphic_console_wrap(QemuConsole *con,
+ DeviceState *dev,
+ const GraphicHwOps *ops,
+ void *opaque,
+  const GraphicHwOps **orig_ops,
+  void **orig_opaque);
+
 void graphic_hw_update(QemuConsole *con);
 void graphic_hw_invalidate(QemuConsole *con);
 void graphic_hw_text_update(QemuConsole *con, console_ch_t *chardata);
diff --git a/ui/console.c b/ui/console.c
index 4248a6f..80e17e5 100644
--- a/ui/console.c
+++ b/ui/console.c
@@ -1658,6 +1658,19 @@ QemuConsole *graphic_console_init(DeviceState *dev,
 return s;
 }
 
+void graphic_console_wrap(QemuConsole *con,
+ DeviceState *dev,
+ const GraphicHwOps *hw_ops,
+ void *opaque,
+  const GraphicHwOps **orig_ops,
+  void **orig_opaque)
+{
+*orig_opaque = con->hw;
+*orig_ops = con->hw_ops;
+con->hw_ops = hw_ops;
+con->hw = opaque;
+}
+
 QemuConsole *qemu_console_lookup_by_index(unsigned int index)
 {
 if (index >= MAX_CONSOLES) {
-- 
1.8.3.1




[Qemu-devel] [PATCH 1/8] ui/sdl2 : initial port to SDL 2.0 (v1.2)

2013-11-19 Thread Dave Airlie
From: Dave Airlie 

I've ported the SDL1.2 code over, and rewritten it to use the SDL2 interface.

The biggest changes were in the input handling, where SDL2 has done a major
overhaul, and I've had to include a generated translation file to get from
SDL2 codes back to qemu compatible ones. I'm still not sure how the keyboard
layout code works in qemu, so there may be further work if someone can point
me a test case that works with SDL1.2 and doesn't with SDL2.

Some SDL env vars we used to set are no longer used by SDL2.
Windows and OSX support is untested.

I don't think we can link to SDL1.2 and SDL2 at the same time, so I felt
using --with-sdlabi=2.0 to select the new code should be fine, like how
gtk does it.

v1.1: fix keys in text console
v1.2: fix shutdown, clean up a bit of code, support ARGB cursor
Signed-off-by: Dave Airlie 
---
 configure|  23 +-
 ui/Makefile.objs |   4 +-
 ui/sdl.c |   3 +
 ui/sdl2.c| 894 +++
 ui/sdl2_scancode_translate.h | 260 +
 ui/sdl_keysym.h  |   3 +-
 6 files changed, 1180 insertions(+), 7 deletions(-)
 create mode 100644 ui/sdl2.c
 create mode 100644 ui/sdl2_scancode_translate.h

diff --git a/configure b/configure
index 9addff1..bf3be37 100755
--- a/configure
+++ b/configure
@@ -158,6 +158,7 @@ docs=""
 fdt=""
 pixman=""
 sdl=""
+sdlabi="1.2"
 virtfs=""
 vnc="yes"
 sparse="no"
@@ -310,6 +311,7 @@ query_pkg_config() {
 }
 pkg_config=query_pkg_config
 sdl_config="${SDL_CONFIG-${cross_prefix}sdl-config}"
+sdl2_config="${SDL2_CONFIG-${cross_prefix}sdl2-config}"
 
 # default flags for all hosts
 QEMU_CFLAGS="-fno-strict-aliasing $QEMU_CFLAGS"
@@ -710,6 +712,8 @@ for opt do
   ;;
   --enable-sdl) sdl="yes"
   ;;
+  --with-sdlabi=*) sdlabi="$optarg"
+  ;;
   --disable-qom-cast-debug) qom_cast_debug="no"
   ;;
   --enable-qom-cast-debug) qom_cast_debug="yes"
@@ -1092,6 +1096,7 @@ echo "  --disable-strip  disable stripping 
binaries"
 echo "  --disable-werror disable compilation abort on warning"
 echo "  --disable-sdldisable SDL"
 echo "  --enable-sdl enable SDL"
+echo "  --with-sdlabiselect preferred SDL ABI 1.2 or 2.0"
 echo "  --disable-gtkdisable gtk UI"
 echo "  --enable-gtk enable gtk UI"
 echo "  --disable-virtfs disable VirtFS"
@@ -1751,12 +1756,22 @@ fi
 
 # Look for sdl configuration program (pkg-config or sdl-config).  Try
 # sdl-config even without cross prefix, and favour pkg-config over sdl-config.
-if test "`basename $sdl_config`" != sdl-config && ! has ${sdl_config}; then
-  sdl_config=sdl-config
+
+if test $sdlabi == "2.0"; then
+sdl_config=$sdl2_config
+sdlname=sdl2
+sdlconfigname=sdl2_config
+else
+sdlname=sdl
+sdlconfigname=sdl_config
+fi
+
+if test "`basename $sdl_config`" != $sdlconfigname && ! has ${sdl_config}; then
+  sdl_config=$sdlconfigname
 fi
 
-if $pkg_config sdl --exists; then
-  sdlconfig="$pkg_config sdl"
+if $pkg_config $sdlname --exists; then
+  sdlconfig="$pkg_config $sdlname"
   _sdlversion=`$sdlconfig --modversion 2>/dev/null | sed 's/[^0-9]//g'`
 elif has ${sdl_config}; then
   sdlconfig="$sdl_config"
diff --git a/ui/Makefile.objs b/ui/Makefile.objs
index f33be47..721ad37 100644
--- a/ui/Makefile.objs
+++ b/ui/Makefile.objs
@@ -9,12 +9,12 @@ vnc-obj-y += vnc-jobs.o
 
 common-obj-y += keymaps.o console.o cursor.o input.o qemu-pixman.o
 common-obj-$(CONFIG_SPICE) += spice-core.o spice-input.o spice-display.o
-common-obj-$(CONFIG_SDL) += sdl.o sdl_zoom.o x_keymap.o
+common-obj-$(CONFIG_SDL) += sdl.o sdl_zoom.o x_keymap.o sdl2.o
 common-obj-$(CONFIG_COCOA) += cocoa.o
 common-obj-$(CONFIG_CURSES) += curses.o
 common-obj-$(CONFIG_VNC) += $(vnc-obj-y)
 common-obj-$(CONFIG_GTK) += gtk.o x_keymap.o
 
-$(obj)/sdl.o $(obj)/sdl_zoom.o: QEMU_CFLAGS += $(SDL_CFLAGS) 
+$(obj)/sdl.o $(obj)/sdl_zoom.o $(obj)/sdl2.o: QEMU_CFLAGS += $(SDL_CFLAGS) 
 
 $(obj)/gtk.o: QEMU_CFLAGS += $(GTK_CFLAGS) $(VTE_CFLAGS)
diff --git a/ui/sdl.c b/ui/sdl.c
index 9d8583c..736bb95 100644
--- a/ui/sdl.c
+++ b/ui/sdl.c
@@ -26,6 +26,8 @@
 #undef WIN32_LEAN_AND_MEAN
 
 #include <SDL.h>
+
+#if SDL_MAJOR_VERSION == 1
 #include <SDL_syswm.h>
 
 #include "qemu-common.h"
@@ -966,3 +968,4 @@ void sdl_display_init(DisplayState *ds, int full_screen, int no_frame)
 
 atexit(sdl_cleanup);
 }
+#endif
diff --git a/ui/sdl2.c b/ui/sdl2.c
new file mode 100644
index 000..4ad7ce3
--- /dev/null
+++ b/ui/sdl2.c
@@ -0,0 +1,894 @@
+/*
+ * QEMU SDL display driver
+ *
+ * Copyright (c) 2003 Fabrice Bellard
+ *
+ * Permission is her

[Qemu-devel] [PATCH 3/8] console: add information retrival wrappers

2013-11-19 Thread Dave Airlie
From: Dave Airlie 

We need to know in the UI code how many graphics consoles are
registered, so it knows how many windows it should prepare for etc.,
and also so that it could potentially warn about cases it can't handle.

We also need to know the console index so we can add it to the list.
(maybe we don't).
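
For illustration, the UI-side consumer looks roughly like this (a
sketch; the cap and the message are illustrative):

    #include <stdio.h>

    void ui_check_heads(void)
    {
        int n = qemu_get_number_graphical_consoles();

        if (n > SDL2_MAX_OUTPUT) {
            fprintf(stderr, "sdl2: only %d heads supported, have %d\n",
                    SDL2_MAX_OUTPUT, n);
        }
    }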

Signed-off-by: Dave Airlie 
---
 include/ui/console.h |  3 +++
 ui/console.c | 12 
 2 files changed, 15 insertions(+)

diff --git a/include/ui/console.h b/include/ui/console.h
index 5731081..be304fe 100644
--- a/include/ui/console.h
+++ b/include/ui/console.h
@@ -306,6 +306,9 @@ void qemu_console_copy(QemuConsole *con, int src_x, int src_y,
 DisplaySurface *qemu_console_surface(QemuConsole *con);
 DisplayState *qemu_console_displaystate(QemuConsole *console);
 
+int qemu_get_console_index(QemuConsole *con);
+int qemu_get_number_graphical_consoles(void);
+
 typedef CharDriverState *(VcHandler)(ChardevVC *vc);
 
 CharDriverState *vc_init(ChardevVC *vc);
diff --git a/ui/console.c b/ui/console.c
index c20e336..4248a6f 100644
--- a/ui/console.c
+++ b/ui/console.c
@@ -175,6 +175,7 @@ static DisplayState *display_state;
 static QemuConsole *active_console;
 static QemuConsole *consoles[MAX_CONSOLES];
 static int nb_consoles = 0;
+static int nb_graphics_consoles = 0;
 
 static void text_console_do_init(CharDriverState *chr, DisplayState *ds);
 static void dpy_refresh(DisplayState *s);
@@ -1247,6 +1248,7 @@ static QemuConsole *new_console(DisplayState *ds, console_type_t console_type)
 s->index = i;
 consoles[i] = s;
 nb_consoles++;
+nb_graphics_consoles++;
 }
 return s;
 }
@@ -1873,6 +1875,16 @@ DisplayState *qemu_console_displaystate(QemuConsole *console)
 return console->ds;
 }
 
+int qemu_get_console_index(QemuConsole *console)
+{
+return console->index;
+}
+
+int qemu_get_number_graphical_consoles(void)
+{
+return nb_graphics_consoles;
+}
+
 PixelFormat qemu_different_endianness_pixelformat(int bpp)
 {
 PixelFormat pf;
-- 
1.8.3.1




[Qemu-devel] [PATCH 8/8] HACK: just to make things start easier with libvirt

2013-11-19 Thread Dave Airlie
From: Dave Airlie 

---
 hw/display/vga-pci.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/hw/display/vga-pci.c b/hw/display/vga-pci.c
index b3a45c8..e4bea17 100644
--- a/hw/display/vga-pci.c
+++ b/hw/display/vga-pci.c
@@ -146,6 +146,7 @@ static int pci_std_vga_initfn(PCIDevice *dev)
 PCIVGAState *d = DO_UPCAST(PCIVGAState, dev, dev);
 VGACommonState *s = &d->vga;
 
+   return 0;
 /* vga + console init */
 vga_common_init(s, OBJECT(dev));
 vga_init(s, OBJECT(dev), pci_address_space(dev), pci_address_space_io(dev),
@@ -195,7 +196,7 @@ static void vga_class_init(ObjectClass *klass, void *data)
 k->romfile = "vgabios-stdvga.bin";
 k->vendor_id = PCI_VENDOR_ID_QEMU;
 k->device_id = PCI_DEVICE_ID_QEMU_VGA;
-k->class_id = PCI_CLASS_DISPLAY_VGA;
+k->class_id = 0;// PCI_CLASS_DISPLAY_VGA;
 dc->vmsd = &vmstate_vga_pci;
 dc->props = vga_pci_properties;
 set_bit(DEVICE_CATEGORY_DISPLAY, dc->categories);
-- 
1.8.3.1




[Qemu-devel] [PATCH 5/8] sdl2: update for multihead support.

2013-11-19 Thread Dave Airlie
From: Dave Airlie 

This reworks the complete SDL2 code to support multi-head, by using
DisplayChangeListeners wrapped inside a per-head structure
containing the SDL2 information along with the console info.

This also adds a hack to allow Ctrl-Alt-n to toggle the first
console on/off.

Signed-off-by: Dave Airlie 
---
 ui/sdl2.c | 322 --
 1 file changed, 211 insertions(+), 111 deletions(-)

diff --git a/ui/sdl2.c b/ui/sdl2.c
index 4ad7ce3..6f3a919 100644
--- a/ui/sdl2.c
+++ b/ui/sdl2.c
@@ -39,11 +39,18 @@
 
 #include "sdl2_scancode_translate.h"
 
-static DisplayChangeListener *dcl;
-static DisplaySurface *surface;
-static SDL_Window *real_window;
-static SDL_Renderer *real_renderer;
-static SDL_Texture *guest_texture;
+#define SDL2_MAX_OUTPUT 4
+
+static struct sdl2_console_state {
+DisplayChangeListener dcl;
+DisplaySurface *surface;
+SDL_Texture *texture;
+SDL_Window *real_window;
+SDL_Renderer *real_renderer;
+int idx;
+int last_vm_running; /* per console for caption reasons */
+int x, y;
+} sdl2_console[SDL2_MAX_OUTPUT];
 
 static SDL_Surface *guest_sprite_surface;
 static int gui_grab; /* if true, all keyboard/mouse events are grabbed */
@@ -67,70 +74,112 @@ static SDL_Cursor *guest_sprite = NULL;
 static int scaling_active = 0;
 static Notifier mouse_mode_notifier;
 
-static void sdl_update_caption(void);
+static void sdl_update_caption(struct sdl2_console_state *scon);
+
+static struct sdl2_console_state *get_scon_from_window(uint32_t window_id)
+{
+int i;
+for (i = 0; i < SDL2_MAX_OUTPUT; i++) {
+if (sdl2_console[i].real_window == SDL_GetWindowFromID(window_id))
+return &sdl2_console[i];
+}
+return NULL;
+}
 
 static void sdl_update(DisplayChangeListener *dcl,
int x, int y, int w, int h)
 {
-if (!surface)
+struct sdl2_console_state *scon = container_of(dcl, struct sdl2_console_state, dcl);
+SDL_Rect rect;
+DisplaySurface *surf = qemu_console_surface(dcl->con);
+
+if (!surf)
+return;
+if (!scon->texture)
 return;
 
-SDL_UpdateTexture(guest_texture, NULL, surface_data(surface),
-  surface_stride(surface));
-SDL_RenderCopy(real_renderer, guest_texture, NULL, NULL);
-SDL_RenderPresent(real_renderer);
+rect.x = x;
+rect.y = y;
+rect.w = w;
+rect.h = h;
+
+SDL_UpdateTexture(scon->texture, NULL, surface_data(surf),
+  surface_stride(surf));
+SDL_RenderCopy(scon->real_renderer, scon->texture, &rect, &rect);
+SDL_RenderPresent(scon->real_renderer);
 }
 
-static void do_sdl_resize(int width, int height, int bpp)
+static void do_sdl_resize(struct sdl2_console_state *scon, int width, int height, int bpp)
 {
 int flags;
 
-if (real_window) {
-SDL_SetWindowSize(real_window, width, height);
-SDL_RenderSetLogicalSize(real_renderer, width, height);
+if (scon->real_window && scon->real_renderer) {
+if (width && height) {
+SDL_RenderSetLogicalSize(scon->real_renderer, width, height);
+   
+SDL_SetWindowSize(scon->real_window, width, height);
+} else {
+SDL_DestroyRenderer(scon->real_renderer);
+SDL_DestroyWindow(scon->real_window);
+scon->real_renderer = NULL;
+scon->real_window = NULL;
+}
 } else {
+if (!width || !height) {
+return;
+}
 flags = 0;
 if (gui_fullscreen)
 flags |= SDL_WINDOW_FULLSCREEN;
 else
 flags |= SDL_WINDOW_RESIZABLE;
 
-real_window = SDL_CreateWindow("", SDL_WINDOWPOS_UNDEFINED,
-   SDL_WINDOWPOS_UNDEFINED,
-   width, height, flags);
-real_renderer = SDL_CreateRenderer(real_window, -1, 0);
-sdl_update_caption();
+scon->real_window = SDL_CreateWindow("", SDL_WINDOWPOS_UNDEFINED,
+ SDL_WINDOWPOS_UNDEFINED,
+ width, height, flags);
+scon->real_renderer = SDL_CreateRenderer(scon->real_window, -1, 0);
+sdl_update_caption(scon);
 }
 }
 
 static void sdl_switch(DisplayChangeListener *dcl,
DisplaySurface *new_surface)
 {
+struct sdl2_console_state *scon = container_of(dcl, struct sdl2_console_state, dcl);
 int format = 0;
+int idx = qemu_get_console_index(dcl->con);
+DisplaySurface *old_surface = scon->surface;
+
 /* temporary hack: allows to call sdl_switch to handle scaling changes */
 if (new_surface) {
-surface = new_surface;
+scon->surface = new_surface;
 }
 
-if (!scaling_active) {
-do_sdl_resize(surface_width(surf

[Qemu-devel] [PATCH 6/8] virtio-gpu: v0.1 of the virtio based GPU code.

2013-11-19 Thread Dave Airlie
From: Dave Airlie 

This is the basic virtio-gpu which is

multi-head capable,
ARGB cursor support,
unaccelerated.
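
(For orientation, the command set below implies a minimal guest
bring-up sequence of roughly: RESOURCE_CREATE_2D to create a host-side
image, RESOURCE_ATTACH_BACKING to point it at guest pages,
TRANSFER_SEND_2D to copy the pixels across, SET_SCANOUT to bind the
resource to a head, and RESOURCE_FLUSH to make the update visible.)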

Signed-off-by: Dave Airlie 
---
 default-configs/x86_64-softmmu.mak |   1 +
 hw/display/Makefile.objs   |   2 +
 hw/display/virtgpu_hw.h| 225 ++
 hw/display/virtio-gpu.c| 606 +
 hw/virtio/virtio-pci.c |  49 +++
 hw/virtio/virtio-pci.h |  15 +
 include/hw/pci/pci.h   |   1 +
 include/hw/virtio/virtio-gpu.h |  90 ++
 8 files changed, 989 insertions(+)
 create mode 100644 hw/display/virtgpu_hw.h
 create mode 100644 hw/display/virtio-gpu.c
 create mode 100644 include/hw/virtio/virtio-gpu.h

diff --git a/default-configs/x86_64-softmmu.mak b/default-configs/x86_64-softmmu.mak
index 31bddce..1a00b78 100644
--- a/default-configs/x86_64-softmmu.mak
+++ b/default-configs/x86_64-softmmu.mak
@@ -9,6 +9,7 @@ CONFIG_VGA_PCI=y
 CONFIG_VGA_ISA=y
 CONFIG_VGA_CIRRUS=y
 CONFIG_VMWARE_VGA=y
+CONFIG_VIRTIO_GPU=y
 CONFIG_VMMOUSE=y
 CONFIG_SERIAL=y
 CONFIG_PARALLEL=y
diff --git a/hw/display/Makefile.objs b/hw/display/Makefile.objs
index 540df82..10e4066 100644
--- a/hw/display/Makefile.objs
+++ b/hw/display/Makefile.objs
@@ -32,3 +32,5 @@ obj-$(CONFIG_TCX) += tcx.o
 obj-$(CONFIG_VGA) += vga.o
 
 common-obj-$(CONFIG_QXL) += qxl.o qxl-logger.o qxl-render.o
+
+obj-$(CONFIG_VIRTIO_GPU) += virtio-gpu.o
diff --git a/hw/display/virtgpu_hw.h b/hw/display/virtgpu_hw.h
new file mode 100644
index 000..81223de
--- /dev/null
+++ b/hw/display/virtgpu_hw.h
@@ -0,0 +1,225 @@
+#ifndef VIRTGPU_HW_H
+#define VIRTGPU_HW_H
+
+#define VIRTGPU_CMD_HAS_RESP (1 << 31)
+#define VIRTGPU_CMD_3D_ONLY  (1 << 30)
+enum virtgpu_ctrl_cmd {
+   VIRTGPU_CMD_NOP,
+   VIRTGPU_CMD_GET_DISPLAY_INFO = (1 | VIRTGPU_CMD_HAS_RESP),
+   VIRTGPU_CMD_GET_CAPS = (2 | VIRTGPU_CMD_HAS_RESP),
+   VIRTGPU_CMD_RESOURCE_CREATE_2D = 3,
+   VIRTGPU_CMD_RESOURCE_UNREF = 4,
+   VIRTGPU_CMD_SET_SCANOUT = 5,
+   VIRTGPU_CMD_RESOURCE_FLUSH = 6,
+   VIRTGPU_CMD_TRANSFER_SEND_2D = 7,
+   VIRTGPU_CMD_RESOURCE_ATTACH_BACKING = 8,
+   VIRTGPU_CMD_RESOURCE_INVAL_BACKING = 9,
+   
+   VIRTGPU_CMD_CTX_CREATE = (10 | VIRTGPU_CMD_3D_ONLY),
+   VIRTGPU_CMD_CTX_DESTROY = (11 | VIRTGPU_CMD_3D_ONLY),
+   VIRTGPU_CMD_CTX_ATTACH_RESOURCE = (12 | VIRTGPU_CMD_3D_ONLY),
+   VIRTGPU_CMD_CTX_DETACH_RESOURCE = (13 | VIRTGPU_CMD_3D_ONLY),
+
+   VIRTGPU_CMD_RESOURCE_CREATE_3D = (14 | VIRTGPU_CMD_3D_ONLY),
+
+   VIRTGPU_CMD_TRANSFER_SEND_3D = (15 | VIRTGPU_CMD_3D_ONLY),
+   VIRTGPU_CMD_TRANSFER_RECV_3D = (16 | VIRTGPU_CMD_3D_ONLY),
+
+   VIRTGPU_CMD_SUBMIT_3D = (17 | VIRTGPU_CMD_3D_ONLY),
+};
+
+enum virtgpu_ctrl_event {
+   VIRTGPU_EVENT_NOP,
+   VIRTGPU_EVENT_ERROR,
+   VIRTGPU_EVENT_DISPLAY_CHANGE,
+};
+
+/* data passed in the cursor vq */
+struct virtgpu_hw_cursor_page {
+   uint32_t cursor_x, cursor_y;
+   uint32_t cursor_hot_x, cursor_hot_y;
+   uint32_t cursor_id;
+};
+
+struct virtgpu_resource_unref {
+   uint32_t resource_id;
+};
+
+/* create a simple 2d resource with a format */
+struct virtgpu_resource_create_2d {
+   uint32_t resource_id;
+   uint32_t format;
+   uint32_t width;
+   uint32_t height;
+};
+
+struct virtgpu_set_scanout {
+   uint32_t scanout_id;
+   uint32_t resource_id;
+   uint32_t width;
+   uint32_t height;
+   uint32_t x;
+   uint32_t y;
+};
+
+struct virtgpu_resource_flush {
+   uint32_t resource_id;
+   uint32_t width;
+   uint32_t height;
+   uint32_t x;
+   uint32_t y;
+};
+
+/* simple transfer send */
+struct virtgpu_transfer_send_2d {
+   uint32_t resource_id;
+   uint32_t offset;
+   uint32_t width;
+   uint32_t height;
+   uint32_t x;
+   uint32_t y;
+};
+
+struct virtgpu_mem_entry {
+   uint64_t addr;
+   uint32_t length;
+   uint32_t pad;
+};
+
+struct virtgpu_resource_attach_backing {
+   uint32_t resource_id;
+   uint32_t nr_entries;
+};
+
+struct virtgpu_resource_inval_backing {
+   uint32_t resource_id;
+};
+
+#define VIRTGPU_MAX_SCANOUTS 16
+struct virtgpu_display_info {
+   uint32_t num_scanouts;
+   struct {
+   uint32_t enabled;
+   uint32_t width;
+   uint32_t height;
+   uint32_t x;
+   uint32_t y;
+   uint32_t flags;
+   } pmodes[VIRTGPU_MAX_SCANOUTS];
+};
+
+
+/* 3d related */
+struct virtgpu_box {
+   uint32_t x, y, z;
+   uint32_t w, h, d;
+};
+
+struct virtgpu_transfer_send_3d {
+   uint64_t data;
+   uint32_t resource_id;
+   uint32_t level;
+   struct virtgpu_box box;
+   uint32_t stride;
+   uint32_t layer_stride;
+   uint32_t ctx_id;
+};
+
+struct virtgpu_transfer_recv_3d {
+   uint64_t data;
+   uint32_t resource_id;
+   uint32_t level;
+   struct vir

[Qemu-devel] [PATCH 7/8] virtio-vga: v1

2013-11-19 Thread Dave Airlie
From: Dave Airlie 

This is a virtio-vga device built on top of the virtio-gpu device.

Signed-off-by: Dave Airlie 
---
 Makefile   |   2 +-
 default-configs/x86_64-softmmu.mak |   1 +
 hw/display/Makefile.objs   |   1 +
 hw/display/virtio-vga.c| 156 +
 hw/pci/pci.c   |   2 +
 include/sysemu/sysemu.h|   2 +-
 pc-bios/vgabios-virtio.bin | Bin 0 -> 40448 bytes
 roms/Makefile  |   2 +-
 roms/config.vga.virtio |   6 ++
 vl.c   |  13 
 10 files changed, 182 insertions(+), 3 deletions(-)
 create mode 100644 hw/display/virtio-vga.c
 create mode 100644 pc-bios/vgabios-virtio.bin
 create mode 100644 roms/config.vga.virtio

diff --git a/Makefile b/Makefile
index b15003f..093ff9a 100644
--- a/Makefile
+++ b/Makefile
@@ -284,7 +284,7 @@ bepo
 
 ifdef INSTALL_BLOBS
 BLOBS=bios.bin sgabios.bin vgabios.bin vgabios-cirrus.bin \
-vgabios-stdvga.bin vgabios-vmware.bin vgabios-qxl.bin \
+vgabios-stdvga.bin vgabios-vmware.bin vgabios-qxl.bin vgabios-virtio.bin \
 acpi-dsdt.aml q35-acpi-dsdt.aml \
 ppc_rom.bin openbios-sparc32 openbios-sparc64 openbios-ppc \
 pxe-e1000.rom pxe-eepro100.rom pxe-ne2k_pci.rom \
diff --git a/default-configs/x86_64-softmmu.mak b/default-configs/x86_64-softmmu.mak
index 1a00b78..22d8587 100644
--- a/default-configs/x86_64-softmmu.mak
+++ b/default-configs/x86_64-softmmu.mak
@@ -10,6 +10,7 @@ CONFIG_VGA_ISA=y
 CONFIG_VGA_CIRRUS=y
 CONFIG_VMWARE_VGA=y
 CONFIG_VIRTIO_GPU=y
+CONFIG_VIRTIO_VGA=y
 CONFIG_VMMOUSE=y
 CONFIG_SERIAL=y
 CONFIG_PARALLEL=y
diff --git a/hw/display/Makefile.objs b/hw/display/Makefile.objs
index 10e4066..63427e9 100644
--- a/hw/display/Makefile.objs
+++ b/hw/display/Makefile.objs
@@ -34,3 +34,4 @@ obj-$(CONFIG_VGA) += vga.o
 common-obj-$(CONFIG_QXL) += qxl.o qxl-logger.o qxl-render.o
 
 obj-$(CONFIG_VIRTIO_GPU) += virtio-gpu.o
+obj-$(CONFIG_VIRTIO_VGA) += virtio-vga.o
diff --git a/hw/display/virtio-vga.c b/hw/display/virtio-vga.c
new file mode 100644
index 0000000..a6c9438
--- /dev/null
+++ b/hw/display/virtio-vga.c
@@ -0,0 +1,156 @@
+#include "hw/hw.h"
+#include "hw/pci/pci.h"
+#include "ui/console.h"
+#include "vga_int.h"
+#include "hw/virtio/virtio-pci.h"
+
+/*
+ * virtio-vga: This extends VirtIOGPUPCI
+ */
+#define TYPE_VIRTIO_VGA "virtio-vga"
+#define VIRTIO_VGA(obj) \
+OBJECT_CHECK(VirtIOVGA, (obj), TYPE_VIRTIO_VGA)
+
+typedef struct VirtIOVGA VirtIOVGA;
+struct VirtIOVGA {
+struct VirtIOGPUPCI gpu;
+VGACommonState vga;
+const struct GraphicHwOps *wrapped_ops;
+void *wrapped_opaque;
+};
+
+static void virtio_vga_invalidate_display(void *opaque)
+{
+VirtIOVGA *vvga = opaque;
+
+if (!vvga->gpu.vdev.enable) {
+   vvga->vga.hw_ops->invalidate(&vvga->vga);
+   return;
+}
+
+vvga->wrapped_ops->invalidate(vvga->wrapped_opaque);
+}
+
+static void virtio_vga_update_display(void *opaque)
+{
+VirtIOVGA *vvga = opaque;
+
+if (!vvga->gpu.vdev.enable) {
+   vvga->vga.hw_ops->gfx_update(&vvga->vga);
+}
+vvga->wrapped_ops->gfx_update(vvga->wrapped_opaque);
+}
+
+static void virtio_vga_text_update(void *opaque, console_ch_t *chardata)
+{
+VirtIOVGA *vvga = opaque;
+
+if (!vvga->gpu.vdev.enable) {
+   if (vvga->vga.hw_ops->text_update)
+   vvga->vga.hw_ops->text_update(&vvga->vga, chardata);
+}
+vvga->wrapped_ops->text_update(vvga->wrapped_opaque, chardata);
+}
+
+static void virtio_vga_notify_state(void *opaque, int idx, int x, int y, uint32_t width, uint32_t height)
+{
+VirtIOVGA *vvga = opaque;
+
+if (!vvga->gpu.vdev.enable) {
+if (vvga->vga.hw_ops->notify_state)
+   vvga->vga.hw_ops->notify_state(&vvga->vga, idx, x, y, width, height);
+}
+vvga->wrapped_ops->notify_state(vvga->wrapped_opaque, idx, x, y, width, height);
+}
+
+static const GraphicHwOps virtio_vga_ops = {
+.invalidate = virtio_vga_invalidate_display,
+.gfx_update = virtio_vga_update_display,
+.text_update = virtio_vga_text_update,
+.notify_state = virtio_vga_notify_state,
+};
+
+/* VGA device wrapper around PCI device around virtio GPU */
+static int virtio_vga_init(VirtIOPCIProxy *vpci_dev)
+{
+VirtIOVGA *vvga = VIRTIO_VGA(vpci_dev);
+DeviceState *vdev = DEVICE(&vvga->gpu.vdev);
+VGACommonState *vga = &vvga->vga;
+
+qdev_set_parent_bus(vdev, BUS(&vpci_dev->bus));
+if (qdev_init(vdev) < 0) {
+   return -1;
+}
+
+graphic_console_wrap(vvga->gpu.vdev.con[0], DEVICE(vpci_dev), &virtio_vga_ops, vvga, &vvga->wrapped_ops, &vvga->wrapped_opaque);
+vga->con = vvga->gpu.vdev.con[0];
+
+vga_common_init(vga, OBJECT(vpci_dev));
+vga_ini

Re: [Qemu-devel] console muti-head some more design input

2013-11-20 Thread Dave Airlie
On Thu, Nov 21, 2013 at 1:14 AM, Gerd Hoffmann  wrote:
> On Mi, 2013-11-20 at 09:32 -0500, John Baboval wrote:
>> On 11/20/2013 03:12 AM, Gerd Hoffmann wrote:
>> >Hi,
>> >
>> >> I think you are only considering output here, for input we definitely
>> >> need some idea of screen layout, and this needs to be stored
>> >> somewhere.
>> > Oh yea, input.  That needs quite some work for multihead / multiseat.
>> >
>> > I think we should *not* try to hack that into the ui.  We should extend
>> > the input layer instead.
>>
>> This would be a contrast to how a real system works.
>
> No.  We have to solve problem here which doesn't exist on real hardware
> in the first place.
>
>> IMO, the UI is the
>> appropriate place for this sort of thing. A basic UI is going to be
>> sending relative events anyway.
>>
>> I think a "seat" should be a UI construct as well.
>
> A seat on real hardware is a group of input (kbd, mouse, tablet, ...)
> and output (display, speakers, ) devices.
>
> In qemu the displays are represented by QemuConsoles.  So to model real
> hardware we should put the QemuConsoles and input devices for a seat
> into a group.
>
> The ui displays some QemuConsole.  If we tag input events with the
> QemuConsole the input layer can figure the correct input device which
> should receive the event according to the seat grouping.
>
> With absolute pointer events the whole thing becomes a bit more tricky
> as we have to map input from multiple displays (QemuConsoles) to a
> single absolute pointing device (usb tablet).  This is what Dave wants
> the screen layout for.  I still think the input layer is the place to do
> this transformation.
>
>
> While thinking about this:  A completely different approach to tackle
> this would be to implement touchscreen emulation.  So we don't have a
> single usb-tablet, but multiple (one per display) touch input devices.
> Then we can simply route absolute input events from this display as-is
> to that touch device and be done with it.  No need to deal with
> coordinate transformations in qemu, the guest will deal with it.

This is a nice dream, except you'll find the guest won't deal with it
very well, and you'll have all kinds of guest scenarios to link up so
that touchscreen A talks to monitor A, etc.

Dave.



Re: [Qemu-devel] [PATCH 7/8] virtio-vga: v1

2013-11-20 Thread Dave Airlie
On Wed, Nov 20, 2013 at 10:02 PM, Gerd Hoffmann  wrote:
> On Mi, 2013-11-20 at 15:52 +1000, Dave Airlie wrote:
>> From: Dave Airlie 
>>
>> This is a virtio-vga device built on top of the virtio-gpu device.
>
> Ah, I see what you use the wrapping for.  Hmm.  I think you should use a
> common base class instead, i.e. something like virtio-gpu-base which
> holds all the common stuff.  Both virtio-gpu and virtio-vga can use that
> as TypeInfo->parent then.  This way virtio-vga doesn't have to muck with
> virtio-gpu internals.  virtio-gpu-base can be tagged as abstract class
> (using .abstract = true) so it will not be instantiated directly.
>

I'm not sure what that buys me here. I need virtio-vga to attach the
vga ops to the first console that the virtio-gpu registers; it can't
be a separate console, and since virtio-gpu initialises before
virtio-vga I can't tell it not to register the console.

It's no use attaching just the vga or just the gpu ops to the console;
I need a wrapper, and I can't see how having a common base class would
help. I already have a base class that pci subclasses, and then vga
subclasses that.

Dave.



Re: [Qemu-devel] console muti-head some more design input

2013-11-26 Thread Dave Airlie
On Fri, Nov 22, 2013 at 6:41 PM, Gerd Hoffmann  wrote:
>   Hi,
>
>> > While thinking about this:  A completely different approach to tackle
>> > this would be to implement touchscreen emulation.  So we don't have a
>> > single usb-tablet, but multiple (one per display) touch input devices.
>> > Then we can simply route absolute input events from this display as-is
>> > to that touch device and be done with it.  No need to deal with
>> > coordinate transformations in qemu, the guest will deal with it.
>>
>> This is a nice dream, except you'll find the guest won't deal with it
>> very well, and you'll have all kinds of guest scenarios to link up so
>> that touchscreen A talks to monitor A, etc.
>
> Ok, scratch the idea then.
>
> I don't have personal experience with this,
> no touch capable displays here.

Hmm, I think we get to unscratch this idea.

After looking into this a bit more I think we probably do need
something outside the gpu to handle this.

The problem is that there are two scenarios for GPU multi-head:

a) one resource - two outputs, where the second output has an offset
into the resource to scan out from.
b) two resources - two outputs, where both outputs scan out from 0,0
in their respective resources.
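
To make the two cases concrete, here is roughly what a guest would
send using the virtgpu_set_scanout command from the earlier spec
draft (a sketch only; the resource ids and mode sizes are made up):

/* (a) one 3840x1080 resource, two outputs, the second offset into it */
struct virtgpu_set_scanout left = {
    .scanout_id = 0, .resource_id = 1,
    .width = 1920, .height = 1080, .x = 0, .y = 0,
};
struct virtgpu_set_scanout right = {
    .scanout_id = 1, .resource_id = 1,
    .width = 1920, .height = 1080, .x = 1920, .y = 0,
};

/* (b) two resources, two outputs, both scanning out from 0,0 */
struct virtgpu_set_scanout out0 = {
    .scanout_id = 0, .resource_id = 1,
    .width = 1920, .height = 1080, .x = 0, .y = 0,
};
struct virtgpu_set_scanout out1 = {
    .scanout_id = 1, .resource_id = 2,
    .width = 1920, .height = 1080, .x = 0, .y = 0,
};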

So the GPU doesn't always have the information on what the input
device configuration should be, nor do we have any way in the guests
to specify this relationship at the driver level.

So I think we probably do need to treat multi-head windows as separate
input devices, and/or have an agent in the guest to do the right thing
by configuring multiple input devices to map to multiple outputs.

I suppose spice must do something like this already, maybe they can
tell me more.

Dave.



[Qemu-devel] virtio-blk wierd memcpy

2013-10-17 Thread Dave Airlie
In my failing attempts to valgrind qemu (how do people live like
this?), I spotted this in the logs

==19521== Source and destination overlap in memcpy(0x31d38938, 0x31d38938, 64)
==19521==at 0x4A0A343: memcpy@@GLIBC_2.14 (in
/usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
==19521==by 0x42774E: virtio_blk_device_init (virtio-blk.c:686)
==19521==by 0x46EE9E: virtio_device_init (virtio.c:1158)
==19521==by 0x25405E: device_realize (qdev.c:178)
==19521==by 0x2559B5: device_set_realized (qdev.c:699)
==19521==by 0x3A819B: property_set_bool (object.c:1315)
==19521==by 0x3A6CE0: object_property_set (object.c:803)

The memcpy looks completely superfluous.

Dave.



Re: [Qemu-devel] [Bug 788734] Re: vmwgfx does not work in Linux guest in kvm

2014-03-16 Thread Dave Airlie
On Sun, Mar 16, 2014 at 2:09 PM, Andrew Engelbrecht
<788...@bugs.launchpad.net> wrote:
> *** This bug is a duplicate of bug 1037606 ***
> https://bugs.launchpad.net/bugs/1037606
>
> i am also having this problem on amd64 debian sid. using qemu version
> 1.7.0+dfsg-5.
>
> on the guest system, i'm also running sid. i have version 1:13.0.1-3+b1
> of xserver-xorg-video-vmware, and kernel version 3.13+56 of linux-image-
> amd64.
>
> is vmwgfx support supposed to be targeted specifically to vmware
> containers, or can it generally be used with qemu?

vmwgfx is targeted at vmware only; qemu would need to support more
parts of the virt gpu to use it.

Dave.



Re: [Qemu-devel] [PULL 7/7] gtk: Don't warp absolute pointer

2014-03-16 Thread Dave Airlie
On Thu, Mar 13, 2014 at 8:46 PM, Gerd Hoffmann  wrote:
> From: Cole Robinson 

Weird, I wrote and submitted these months ago, and they were promptly
ignored.

Glad you had to find the bugs all over again and reinvent the wheel,

Dave.

>
> This matches the behavior of SDL, and makes the mouse usable when
> using -display gtk -vga qxl
>
> https://bugzilla.redhat.com/show_bug.cgi?id=1051724
> Signed-off-by: Cole Robinson 
> Signed-off-by: Gerd Hoffmann 
> ---
>  ui/gtk.c | 8 
>  1 file changed, 8 insertions(+)
>
> diff --git a/ui/gtk.c b/ui/gtk.c
> index 330dc13..ef7cb47 100644
> --- a/ui/gtk.c
> +++ b/ui/gtk.c
> @@ -340,6 +340,10 @@ static void gd_mouse_set(DisplayChangeListener *dcl,
>  GdkDeviceManager *mgr;
>  gint x_root, y_root;
>
> +if (qemu_input_is_absolute()) {
> +return;
> +}
> +
>  dpy = gtk_widget_get_display(s->drawing_area);
>  mgr = gdk_display_get_device_manager(dpy);
>  gdk_window_get_root_coords(gtk_widget_get_window(s->drawing_area),
> @@ -355,6 +359,10 @@ static void gd_mouse_set(DisplayChangeListener *dcl,
>  GtkDisplayState *s = container_of(dcl, GtkDisplayState, dcl);
>  gint x_root, y_root;
>
> +if (qemu_input_is_absolute()) {
> +return;
> +}
> +
>  gdk_window_get_root_coords(gtk_widget_get_window(s->drawing_area),
> x, y, &x_root, &y_root);
>  gdk_display_warp_pointer(gtk_widget_get_display(s->drawing_area),
> --
> 1.8.3.1
>
>



Re: [Qemu-devel] [PATCH 3/4] virtio-gpu: v0.3 of the virtio based GPU code.

2014-03-16 Thread Dave Airlie
On Thu, Mar 13, 2014 at 8:40 PM, Paolo Bonzini  wrote:
> On 12/03/2014 21:26, Michael S. Tsirkin wrote:
>>>
>>> +Event queue:
>>> +The only current event passed is a message to denote the host
>>> +wants to update the layout of the screens. It contains the same
>>> +info as the response to VIRTGPU_CMD_GET_DISPLAY_INFO.
>
>
> I wonder if an event queue is the best mechanism if you can get the same
> info from a command anyway.  For virtio-scsi I used a queue because I needed
> to specify which target or LUN the event applied to, but here you do not
> need it and a queue is susceptible to dropped events.
>
> Perhaps a configuration field is better, like this:
>
> u32 events_read;
> u32 events_clear;
>
> A new event sets a bit in events_read and generates a configuration change
> interrupt.  The guest should never write to events_read.
>
> Writing to events_clear has the side effect of the device doing "events_read
> &= ~events_clear".  We cannot have R/W1C fields in virtio, but this
> approximation is good enough.
>
> When the guest receives a configuration change interrupt, it reads
> event_read.  If it is nonzero, it writes the same value it read to
> events_clear, and sends the necessary commands to the card in order to
> retrieve the event data.  It can then read again event_read, and loop if it
> is again nonzero.
>

I steered away from using config space for anything for the normal
operation of the GPU after looking at overheads and hearing from S390
people that config space has some special properties on their hw,

The number of these events should be small in a running system, and
I'm not sure how we'd lose one.
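
For reference, the guest side of that scheme would be roughly the
loop below (a sketch; read_config32/write_config32 and the CFG_/EVENT_
names stand in for whatever the config accessors and fields would end
up being called):

static void on_config_changed_irq(void)
{
    uint32_t events = read_config32(CFG_EVENTS_READ);

    while (events) {
        /* ack exactly what we saw; device does events_read &= ~value */
        write_config32(CFG_EVENTS_CLEAR, events);

        if (events & EVENT_DISPLAY_CHANGE)
            refetch_display_info(); /* i.e. VIRTGPU_CMD_GET_DISPLAY_INFO */

        /* re-read in case a new event arrived meanwhile */
        events = read_config32(CFG_EVENTS_READ);
    }
}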

Dave.



Re: [Qemu-devel] [PATCH 3/4] virtio-gpu: v0.3 of the virtio based GPU code.

2014-03-16 Thread Dave Airlie
On Mon, Mar 17, 2014 at 2:36 PM, Dave Airlie  wrote:
> On Thu, Mar 13, 2014 at 8:40 PM, Paolo Bonzini  wrote:
>> On 12/03/2014 21:26, Michael S. Tsirkin wrote:
>>>>
>>>> +Event queue:
>>>> +The only current event passed is a message to denote the host
>>>> +wants to update the layout of the screens. It contains the same
>>>> +info as the response to VIRTGPU_CMD_GET_DISPLAY_INFO.
>>
>>
>> I wonder if an event queue is the best mechanism if you can get the same
>> info from a command anyway.  For virtio-scsi I used a queue because I needed
>> to specify which target or LUN the event applied to, but here you do not
>> need it and a queue is susceptible to dropped events.
>>
>> Perhaps a configuration field is better, like this:
>>
>> u32 events_read;
>> u32 events_clear;
>>
>> A new event sets a bit in events_read and generates a configuration change
>> interrupt.  The guest should never write to events_read.
>>
>> Writing to events_clear has the side effect of the device doing "events_read
>> &= ~events_clear".  We cannot have R/W1C fields in virtio, but this
>> approximation is good enough.
>>
>> When the guest receives a configuration change interrupt, it reads
>> event_read.  If it is nonzero, it writes the same value it read to
>> events_clear, and sends the necessary commands to the card in order to
>> retrieve the event data.  It can then read again event_read, and loop if it
>> is again nonzero.
>>
>
> I steered away from using config space for anything for the normal
> operation of the GPU after looking at overheads and hearing from S390
> people that config space has some special properties on their hw,
>
> The number of these events should be small in a running system, and
> I'm not sure how we'd lose one.

Oh I was also going to use this queue to report HW error events from
the host to the guest,

like if the guest tries an illegal operation,

Dave.



[Qemu-devel] virtio device error reporting best practice?

2014-03-16 Thread Dave Airlie
So I'm looking at how best to do virtio gpu device error reporting,
and how to deal with illegal stuff,

I've two levels of errors I want to support,

a) unrecoverable or bad guest kernel programming errors,

b) per 3D context errors from the renderer backend,

(b) I can easily report in an event queue and the guest kernel can in
theory blow away the offenders, this is how GL works with some
extensions,

For (a) I can expect a response from every command I put into the main
GPU control queue. The response should always be no error, but in some
cases it will carry an error because the guest hit some host resource
error or asked for something insane (guest kernel drivers would be
broken in most of these cases).
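
Roughly what I have in mind for (a), as a sketch (none of these names
are in any spec draft yet):

struct virtgpu_ctrl_response {
    uint32_t type;   /* NO_ERROR for the normal case */
    uint32_t error;  /* e.g. ERR_OUT_OF_MEMORY for host resource
                        exhaustion, ERR_INVALID_PARAMETER for an
                        insane request from a broken guest driver */
};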

Alternately I can use the separate event queue to send async errors
when the guest does something bad,

I'm also considering adding some sort of flag in config space saying
the device needs a reset before it will continue doing anything,

The main reason I'm considering this stuff is for security reasons if
the guest asks for something really illegal or crazy what should the
expected behaviour of the host be? (at least secure I know that).

Dave.



Re: [Qemu-devel] [RfC PATCH 06/15] virtio-gpu/2d: add virtio gpu core code

2015-03-01 Thread Dave Airlie
>
> But one non-formal question: As far as I understood virtio-gpu's mode of
> operation from this patch, it looks like there is one resource per scanout,
> and that resource is basically the whole screen (which can be updated
> partially).
>
> If that is the case, what do we gain from being able to display a resource
> on multiple scanouts? If we don't associate a scanout to a resource with
> set_scanout, the resource won't ever be displayed on that scanout; and if we
> associate it, the scanout's position and dimension will be exactly the same
> as the resource's, so associating a resource with multiple scanouts means
> that all those scanouts will be duplicates of each other, which in turn
> means we can duplicate heads. But that seems more like a frontend feature to
> me...

That's just how real hw works, and what sw expects: we have resources,
and multiple scanouts can access one resource, or each scanout can
have its own private resource.
> too).
>
>> +cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID;
>> +return;
>> +}
>> +
>> +res = g_new0(struct virtio_gpu_simple_resource, 1);
>> +
>> +res->width = c2d.width;
>> +res->height = c2d.height;
>> +res->format = c2d.format;
>> +res->resource_id = c2d.resource_id;
>
>
> Considering resource_id == 0 is sometimes (apparently) used as "no resource"
> (e.g. in transfer_to_host_2d), would it make sense to test against that
> here?

Gerd, btw I think we might want to support a set scanout with no
resource, as it might make Linux atomic modesetting easier in the
future; we'd just scan out black.

>> +  struct iovec **iov)
>> +{
>> +struct virtio_gpu_mem_entry *ents;
>> +size_t esize, s;
>> +int i;
>> +
>> +esize = sizeof(*ents) * ab->nr_entries;
>
>
> Can overflow (on 32 bit hosts).
>
> (And this might be bad, because this function works just fine, and then
> either the for loop does something, or nr_entries is negative when casted to
> signed int, and then the for loop won't do anything; but res->iov_cnt
> (unsigned int) will be set to ab->nr_entries, so this may or may not be
> exploitable, but can probably lead to a crash)
>
>> +ents = g_malloc(esize);
>
>
> And this may need to be g_try_malloc().
>
> However, I think it's best to simply limit ab->nr_entries to some sane value
> (I guess it shouldn't be even in the double digits for a single resource, so
> restricting it to 256 or something seems sane to me...) and return an error
> to the guest if that limit is exceeded.

Please be careful about limiting this; limits should be pessimistic,
as the GPU may allocate one page per iov, so a 1280x1024x32bpp
resource will have 1280 iovs.
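
Something along these lines would be fine by me (a sketch only; the
cap and the error code name are made up):

#define VIRTGPU_MAX_BACKING_ENTRIES 16384 /* arbitrary but pessimistic */

    if (ab->nr_entries > VIRTGPU_MAX_BACKING_ENTRIES) {
        cmd->error = VIRTIO_GPU_RESP_ERR_UNSPEC; /* hypothetical code */
        return;
    }
    esize = sizeof(*ents) * ab->nr_entries; /* can no longer overflow */
    ents = g_try_malloc(esize);
    if (!ents) {
        cmd->error = VIRTIO_GPU_RESP_ERR_UNSPEC; /* hypothetical code */
        return;
    }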

Dave.



Re: [Qemu-devel] RFC: running the user interface in a thread ...

2016-01-21 Thread Dave Airlie
On 21 January 2016 at 19:05, Paolo Bonzini  wrote:
>
>
> On 21/01/2016 09:44, Dave Airlie wrote:
>> I've hacked on this before, but only with SDL and it was pretty dirty,
>> and it gave a fairly decent
>> speed up.
>>
>> My thoughts are to use dataplane like design to process the queue in a
>> separate thread,
>> and then have some sort of channel between the UI and virtio-gpu
>> thread to handle things like
>> screen resize etc.
>>
>> http://cgit.freedesktop.org/~airlied/qemu/commit/?h=dave3d&id=fe22a0955255afef12b947c4a91efa57e7a7c429
>> is my previous horror patch.
>
> Instead of having a full-blown thread, are there things (such as the
> TGSI->GLSL conversion) that could be simply offloaded to a userspace
> thread pool, either in QEMU or in virglrenderer?

Not really, the TGSI->GLSL overhead is quite minor, the problem is the
produced GLSL then gets passed to
OpenGL to compile, and depending on the quality of the original
program and my conversion code, the
GLSL compiler can get quite upset.

In theory Mesa could help here, but GL isn't thread friendly at all,
so it probably won't help in the virgl
case even if it did. Since most GL apps compile a shader and block on
using it straight away doing it
in a thread won't help unblock things.

So I think it would be best to have all the virgl vq processing happen
in it's own thread with some API
to the UI to do UI resizes (the most difficult) and dirty regions etc.

Dave.



Re: [Qemu-devel] RFC: running the user interface in a thread ...

2016-01-21 Thread Dave Airlie
On 22 January 2016 at 16:59, Gerd Hoffmann  wrote:
>   Hi,
>
>> In theory Mesa could help here, but GL isn't thread friendly at all,
>> so it probably won't help in the virgl
>> case even if it did. Since most GL apps compile a shader and block on
>> using it straight away doing it
>> in a thread won't help unblock things.
>>
>> So I think it would be best to have all the virgl vq processing happen
>> in it's own thread with some API
>> to the UI to do UI resizes (the most difficult) and dirty regions etc.
>
> We can move only virgl into its own thread, but then we'll have two
> threads (virgl and main which runs ui) which use opengl.  So I was
> thinking maybe it is better to have a single thread which runs both
> virgl and ui (and that's why I've started this thread ...).

In theory as long as we have separate contexts bound in each thread it
should be fine.

OpenGL is okay as long as one context is bound in one thread at a time.

Since the only sharing we have between contexts is textures, those
should be okay between shared contexts.
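
In code terms it would look something like this (an EGL sketch; error
handling and surface setup elided):

/* contexts created once; textures shared via the share_context arg */
EGLContext ui_ctx    = eglCreateContext(dpy, cfg, EGL_NO_CONTEXT, attribs);
EGLContext virgl_ctx = eglCreateContext(dpy, cfg, ui_ctx, attribs);

/* UI thread */
eglMakeCurrent(dpy, ui_surface, ui_surface, ui_ctx);

/* virgl thread: binds only its own context, never the UI one */
eglMakeCurrent(dpy, virgl_surface, virgl_surface, virgl_ctx);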

Dave.



Re: [Qemu-devel] RFC: running the user interface in a thread ...

2016-01-21 Thread Dave Airlie
On 18 January 2016 at 19:54, Gerd Hoffmann  wrote:
>   Hi folks,
>
> I'm starting to investigate if and how we can move the user interface
> code into its own thread instead of running it in the iothread and
> therefore avoid blocking the guest in case some UI actions take a little
> longer.
>
> opengl and toolkits tend to be bad at multithreading.  So my idea is to
> have a single thread dedicated to all the UI + rendering stuff, possibly
> let even the virglrenderer run in ui thread context.

I'd still like virgl to run in its own thread at some point, like
dataplane but for virgl I suppose.

We see cases where backend GLSL compiles end up taking quite a while,
and I'd prefer not to block the UI or qemu on those.

I've hacked on this before, but only with SDL and it was pretty dirty,
and it gave a fairly decent
speed up.

My thoughts are to use dataplane like design to process the queue in a
separate thread,
and then have some sort of channel between the UI and virtio-gpu
thread to handle things like
screen resize etc.

http://cgit.freedesktop.org/~airlied/qemu/commit/?h=dave3d&id=fe22a0955255afef12b947c4a91efa57e7a7c429
is my previous horror patch.

Dave.



Re: [Qemu-devel] sending SEGV to qemu crashes host kernel in Fedora 19

2013-07-08 Thread Dave Airlie
On Tue, Jul 9, 2013 at 10:35 AM, Dave Airlie  wrote:
> Hi,
>
> F19
> kernel-3.9.8-300.fc19.x86_64
> qemu-kvm-1.4.2-4.fc19.x86_64
>
> If I start a complete F19 install in the guest and send the qemu
> process a SEGV signal, the host kernel starts giving me random kmalloc
> errors soon after, if I send a normal kill signal things seem fine.
>
> CPU is Intel(R) Core(TM) i7-3770 CPU @ 3.40GHz, on a HP 220z workstation.
>
> I initially blamed bad RAM but this reproduces every time, and I
> swapped DIMMs around.
>
> I haven't tested with upstream kernel/qemu yet, but I wondered if
> anyone else has seen this.
>
> I noticed this because some work I was doing was segfaulting my qemu
> and then my machine would die a few mins later.

Of course now I read my fedora kernel emails and notice vhost_net does
bad things,

disabling vhost_net seems to make it work fine, hopefully the next
Fedora kernel will bring the magic fixes.

Dave.



[Qemu-devel] sending SEGV to qemu crashes host kernel in Fedora 19

2013-07-08 Thread Dave Airlie
Hi,

F19
kernel-3.9.8-300.fc19.x86_64
qemu-kvm-1.4.2-4.fc19.x86_64

If I start a complete F19 install in the guest and send the qemu
process a SEGV signal, the host kernel starts giving me random kmalloc
errors soon after, if I send a normal kill signal things seem fine.

CPU is Intel(R) Core(TM) i7-3770 CPU @ 3.40GHz, on a HP 220z workstation.

I initially blamed bad RAM but this reproduces every time, and I
swapped DIMMs around.

I haven't tested with upstream kernel/qemu yet, but I wondered if
anyone else has seen this.

I noticed this because some work I was doing was segfaulting my qemu
and then my machine would die a few mins later.

Dave.



[Qemu-devel] virtio indirect with lots of descriptors

2013-07-09 Thread Dave Airlie
Hi Rusty,

playing with my virtio gpu, I started hitting the qemu
error_report("Too many read descriptors in indirect table");

Now I'm not sure, but from what I can see this doesn't seem to be a
virtio limit that the guest catches, since my host dies quite quickly
when I'm doing transfers in/out of a 5MB object with an sg entry per
page (around 1280 descriptors).

Just wondering if you can confirm if this is only a qemu limitation or
if I should just work around it at a bit of a higher level in my
driver/device?
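
If I do work around it in the driver, the idea would just be to split
large transfers so no single indirect table gets too long; roughly
this (a sketch, helper names made up):

#define MAX_ENTRIES_PER_CMD 64  /* keep well under the table limit */

for (first = 0; first < nr_pages; first += MAX_ENTRIES_PER_CMD) {
    count = min(nr_pages - first, MAX_ENTRIES_PER_CMD);
    /* queue one transfer command covering pages [first, first + count) */
    queue_transfer_chunk(obj, first, count);
}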

Dave.



[Qemu-devel] Introducing Virgil - 3D virtual GPU for qemu

2013-07-17 Thread Dave Airlie
Hey Mesa + qemu lists,

since I suppose these communities would be most interested in this and
might not all read my blog or G+ stream,

"Virgil is a research project I've been working on at Red Hat for a
few months now and I think is ready for at least announcing upstream
and seeing if there is any developer interest in the community in
trying to help out.

The project is to create a 3D capable virtual GPU for qemu that can be
used by Linux and eventually Windows guests to provide OpenGL/Direct3D
support inside the guest. It uses an interface based on Gallium/TGSI
along with virtio to communicate between guest and host, and its goal
is to provide an OpenGL renderer along with a complete Linux driver
stack for the guest.

The website is here with links to some videos:
http://virgil3d.github.io/

some badly formatted Questions/Answers (I fail at github):
http://virgil3d.github.io/questions.html

Just a note and I can't stress this strongly enough, this isn't end
user ready, not even close, it isn't even bleeding edge user ready, or
advanced tester usage ready, its not ready for distro packaging, there
is no roadmap or commitment to finishing it. I don't need you to
install it and run it on your machine and report bugs.

I'm announcing it because there may be other developers or companies
interested and I'd like to allow them to get on board at the
design/investigation stage, before I have to solidify the APIs etc.
I also don't like single-company projects and if announcing early can
help avoid that then so be it!

If you are a developer interested in working on an open source virtual
3D GPU, or you work for a company who is interested in developing
something in this area, then get in touch with me, but if you just
want to kick the tyres, I don't have time for this yet."

Dave.



Re: [Qemu-devel] virtio-gpu: cursor update not in sync with resource update

2015-09-14 Thread Dave Airlie
On 15 September 2015 at 01:17, Marc-André Lureau
 wrote:
> Hi
>
> On Mon, Sep 14, 2015 at 4:06 PM, Gerd Hoffmann  wrote:
>> The guest can upload different cursors and then switch between them
>> without re-uploading and therefore without ctrl queue updates.  Thats
>> why they have an id in the first place ...
>
> Ok, but the cursor id is reused most of the time. So there are at
> least two things to fix: use different cursor ids for different cursor
> data (with some cache management probably), fix the kernel to only upload
> a new resource id and wait for the data to be transferred. Is that correct?

Currently the kernel driver does this:

ret = virtio_gpu_cmd_transfer_to_host_2d(vgdev, qobj->hw_res_handle, 0,
 cpu_to_le32(64),
 cpu_to_le32(64),
 0, 0, &fence);
if (!ret) {
reservation_object_add_excl_fence(qobj->tbo.resv,
  &fence->f);
virtio_gpu_object_wait(qobj, false);
}

before moving the cursor, shouldn't that suffice?

Do we fail the transfer sometimes?

Dave.



Re: [Qemu-devel] virtio-gpu: cursor update not in sync with resource update

2015-09-14 Thread Dave Airlie
On 15 September 2015 at 07:14, Marc-André Lureau
 wrote:
> Hi
>
> On Mon, Sep 14, 2015 at 11:08 PM, Dave Airlie  wrote:
>> Currently the kernel driver does this:
>>
>> ret = virtio_gpu_cmd_transfer_to_host_2d(vgdev, qobj->hw_res_handle, 0,
>>  cpu_to_le32(64),
>>  cpu_to_le32(64),
>>  0, 0, &fence);
>> if (!ret) {
>> reservation_object_add_excl_fence(qobj->tbo.resv,
>>   &fence->f);
>> virtio_gpu_object_wait(qobj, false);
>> }
>>
>> before moving the cursor, shouldn't that suffice?
>>
>> Do we fail the transfer sometimes?
>
>
> That's apparently not in Gerd tree.

Interesting, I'm not sure where this got lost,

I had it in my clone of Gerd's tree from a while back.

Dave.



Re: [Qemu-devel] [RFC PATCH] libcacard: move it to a standalone project

2015-09-15 Thread Dave Airlie
On 15 September 2015 at 23:37, Paolo Bonzini  wrote:
> On 15/09/2015 15:28, Daniel P. Berrange wrote:
>> I have looked through the new libcacard git repository you created
>> from QEMU history, and reviewed the extra patches you added on top
>> for the build system and it all looks sane to me. So this this a
>>
>> Reviewed-by: Daniel P. Berrange 
>>
>> for both the new GIT repo and also your proposed patch to switch
>> QEMU to use the new lib.
>>
>> I agree that it could make sense to host libcacard.git on git.qemu.org
>> if we're going to continue to use qemu-devel and a QEMU-like workflow,
>> but equally don't see any problem with it being a totally standalone
>> project with its own infra & practices.
>
> Where are we going to host virglrenderer?  Whatever we do for libcacard,
> we should probably also do for virglrenderer.

I hadn't considered where to host virglrenderer yet. I was initially
considering shipping it with mesa but gave up that idea, so I was
contemplating just getting freedesktop to host it since I've already
got accounts etc. there, but I suppose I could get set up on qemu
infrastructure.

I'm not even sure whether it should have its own mailing list; I'm hoping it
will be useful to non-qemu projects, but that may be some time down the road.

Dave.



[Qemu-devel] vmware video broken in git

2009-12-10 Thread Dave Airlie
Hi guys,

Building 0.11.0, qemu -vga vmware -sdl works fine.
With git I just get a blank screen.

I'm trying to bisect but I keep running into a big chunk of NIC changes
where none of the drivers compile, so I thought maybe someone could
make an educated guess.

Dave.




[Qemu-devel] [PATCH] vmware_vga: add rom file so that it boots.

2009-12-10 Thread Dave Airlie
From: Dave Airlie 

This just adds the rom file to the vmware SVGA chipset so it boots.

Signed-off-by: Dave Airlie 
---
 hw/vmware_vga.c |2 ++
 1 files changed, 2 insertions(+), 0 deletions(-)

diff --git a/hw/vmware_vga.c b/hw/vmware_vga.c
index 240731a..a7e42c6 100644
--- a/hw/vmware_vga.c
+++ b/hw/vmware_vga.c
@@ -22,6 +22,7 @@
  * THE SOFTWARE.
  */
 #include "hw.h"
+#include "loader.h"
 #include "console.h"
 #include "pci.h"
 #include "vmware_vga.h"
@@ -1124,6 +1125,7 @@ static void vmsvga_init(struct vmsvga_state_s *s, int vga_ram_size)
 cpu_register_physical_memory(VBE_DISPI_LFB_PHYSICAL_ADDRESS,
  vga_ram_size, s->vga.vram_offset);
 #endif
+ rom_add_vga(VGABIOS_FILENAME);
 }
 
 static void pci_vmsvga_map_ioport(PCIDevice *pci_dev, int region_num,
-- 
1.6.5.2





Re: [Qemu-devel] vmware video broken in git

2009-12-11 Thread Dave Airlie
On Fri, Dec 11, 2009 at 5:11 PM, Mark McLoughlin  wrote:
> Hi Dave,
>
> On Fri, 2009-12-11 at 14:59 +1000, Dave Airlie wrote:
>> Hi guys,
>>
>> Building 0.11.0, qemu -vga vmware -sdl works fine.
>> with git I just get a blank screen.
>>
>> I'm trying to bisect but I keep running into a big chunk of NIC changes
>> where none of the drivers compile, so I thought maybe someone could
>> make an educated guess.
>
> Can't help with the vmware problem, but could you name one of those
> broken commits out of curiosity? Pretty sure the most recent set of NIC
> changes are bisectable at least.

It was ne2000.c failing to build with some undefined NET_ define.

I ended up debugging it the old fashioned way and just randomly adding lines
from other places in the tree until it worked, patch sent earlier.

Dave.


> Thanks,
> Mark.
>
>




[Qemu-devel] approaches to 3D virtualisation

2009-12-11 Thread Dave Airlie
So I've been musing on the addition of some sort of 3D passthrough for
qemu (as I'm sure lots of ppl have).

But I think the goals of such an addition need to be discussed prior
to anyone writing a line of code.

Current existing solutions in the area:
a) VMware virtual graphics adapter - based on DX9, has an open
KMS/Gallium3D driver stack recently released by vmware, has certified
Windows drivers and has a documented vGPU interface (it could be
documented a lot better)

b) VirtualBox - seems to be GL based passthrough based on a Chromium
backend. DX9 support looks to be done via a Wine DX->GL converter built
into a Windows driver (not confirmed, the code base is crazy). I'm
assuming chromium is being used to stream GL through the passthru but
this definitely requires more investigation.

Now both of these seem to be local only rendering solutions, though I
might be underselling the vbox/chromium fun.

Now to add a remoting 3D rendering protocol is where things get ugly
fast, and very different from current 2D rendering with offscreen
pixmaps. The major difference is most server farms will not contain
any 3D hardware and will not want to contain any 3D hw due to power
considerations. Now if you have a remote protocol, and the client
disconnects, you have to keep some sort of transaction log, and either
replay the transactions when a client reconnects, or have some sort of
sw rendering kick in on the server as a fallback. Now 3D sw rendering
solutions are insanely slow and quite CPU intensive. VMware are
working on an llvm based 3D renderer for something like this situation
but I'm not convinced of how usable it will be.

Also with remoting you need to come up with a minimum acceptable level
of 3D you want to expose to guest OSes depending on the capabilities
of the client-side hw, or refuse connections unless the client can run
all the features that you've exposed inside the guest.
This ranges from DX7->DX11, and GL1.5 to GL3.0 situations.

I'm not sure how ppl who work with VNC see VNC fitting in with this
sort of protocol, I expect SPICE is better placed with proper design
to address this. Though it would firstly be nice to design a vGPU
interface for the guest OS to work against. I lean towards the VMware
one because of the availability of guest drivers; however, the Virtualbox
one is probably also acceptable if anyone can find where it is
documented.

This is just a sort of a brain dump from me, but I'd like to get ppl
talking about this, since 3D is not to be considered some simple
extension of 2D functionality; it's a whole different world. Modern
GPUs are 98% silicon dedicated to 3D processing, most of the current
2D-type accel such as the QXL interface provides is done using 3D
engines on most GPUs, and there is no way to render 3D on the server
side of a link at any useful speed on current server hw.

So I suppose the main questions to answer up front are:
1) Rendering done on same hw as VM is running?
just use VNC to dump the final answer over the network.
2) Rendering done on the client viewing end of the link where the viewer is,




[Qemu-devel] Re: approaches to 3D virtualisation

2009-12-11 Thread Dave Airlie
Oops gmail send this, silly laptop has a mind of its own sometimes.

On Sat, Dec 12, 2009 at 11:58 AM, Dave Airlie  wrote:
> So I've been musing on the addition of some sort of 3D passthrough for
> qemu (as I'm sure lots of ppl have).
>
> But I think the goals of such an addition need to be discussed prior
> to anyone writing a line of code.
>
> Current existing solutions in the area:
> a) VMware virtual graphics adapter - based on DX9, has an open
> KMS/Gallium3D driver stack recently released by vmware, has certified
> Windows drivers and has a documented vGPU interface (it could be
> documented a lot better)
>
> b) VirtualBox - seems to be GL based passthrough based on a Chromium
> backend. DX9 support looks to be done via a Wine DX->GL converter built
> into a Windows driver (not confirmed, the code base is crazy). I'm
> assuming chromium is being used to stream GL through the passthru but
> this definitely requires more investigation.
>
> Now both of these seem to be local only rendering solutions, though I
> might be underselling the vbox/chromium fun.
>
> Now to add a remoting 3D rendering protocol is where things get ugly
> fast, and very different from current 2D rendering with offscreen
> pixmaps. The major difference is most server farms will not contain
> any 3D hardware and will not want to contain any 3D hw due to power
> considerations. Now if you have a remote protocol, and the client
> disconnects, you have to keep some sort of transaction log, and either
> replay the transactions when a client reconnects, or have some sort of
> sw rendering kick in on the server as a fallback. Now 3D sw rendering
> solutions are insanely slow and quite CPU intensive. VMware are
> working on an llvm based 3D renderer for something like this situation
> but I'm not convinced of how usable it will be.
>
> Also with remoting you need to come up with a minimum acceptable level
> of 3D you want to expose to guest OSes depending on the capabilities
> of the client-side hw, or refuse connections unless the client can run
> all the features that you've exposed inside the guest.
> This ranges from DX7->DX11, and GL1.5 to GL3.0 situations.
>
> I'm not sure how ppl who work with VNC see VNC fitting in with this
> sort of protocol, I expect SPICE is better placed with proper design
> to address this. Though it would firstly be nice to design a vGPU
> interface for the guest OS to work against. I lean towards the VMware
> one because of the availability of guest drivers; however, the Virtualbox
> one is probably also acceptable if anyone can find where it is
> documented.
>
> This is just a sort of a brain dump from me, but I'd like to get ppl
> talking about this, since 3D is not to be considered some simple
> extension of 2D functionality; it's a whole different world. Modern
> GPUs are 98% silicon dedicated to 3D processing, most of the current
> 2D-type accel such as the QXL interface provides is done using 3D
> engines on most GPUs, and there is no way to render 3D on the server
> side of a link at any useful speed on current server hw.
>
> So I suppose the main questions to answer up front are:
> 1) Rendering done on same hw as VM is running?
> just use VNC to dump the final answer over the network.

This requires 3D rendering hw on the client; this could be your laptop
or desktop in the office, in which case it's probably fine, but not so
good if it's server hw in a rack.

> 2) Rendering done on the client viewing end of the link where the viewer is,

Have to transport 3D to the client, but more likely the client end
will have 3D hardware to actually render stuff at a reasonable speed.

So I suppose let's discuss ;-)

Dave.
(btw I work for Red Hat but not in this capacity or near virt teams,
this is purely a personal itch).




Re: [Qemu-devel] X support for QXL and SPICE

2009-12-11 Thread Dave Airlie
On Sat, Dec 12, 2009 at 1:31 PM, Anthony Liguori  wrote:
> Soeren Sandmann wrote:
>>
>> Hi,
>>
>> Here is an overview of what the current QXL driver does and does not
>> do.  The parts of X rendering that are currently being used by cairo
>> and Qt are:
>>
>> - Most of XRender        - Image compositing
>>        - Glyphs
>>
>
> Does anything use Xrender for drawing glyphs these days?

Yes, all cairo apps and gtk; pretty much anything doing anti-aliased
fonts on a modern Linux desktop uses X render to draw glyphs. I think
QT can use Xrender also but I'm not 100% sure what the default is.
>
> Certainly, with a compositing window manager, nothing is getting rendered by
> X...

Yes there is: with a GL compositing window manager nothing is rendered
by X, but plenty is with an Xrender compositing window manager
(metacity or kwin xrender mode). Also all anti-aliased font operations
and a number of cairo operations are done using xrender, which means
most of the modern desktop. I actually broke sw fallbacks to the
frontbuffer in my driver last week and it took me nearly half a day to
realise it.

X rendering in modern apps is like Soeren describes, render to
offscreen pixmap, copy to onscreen.

> Scrolling is accelerated in VNC.  In the Cirrus adapter, both X and Windows
> use a video-to-video bitblt, we use this to create a VNC CopyRect which
> makes scrolling and Window movement smooth.

Firefox scrolling? (though I'm not sure anyone deals with it sanely).

>
>> However, as things stand right now, there is not much point in adding
>> this support, because X applications essentially always work like
>> this:
>>
>>        - render to offscreen pixmap
>>        - copy pixmap to screen
>>
>> There is not yet support for offscreen pixmaps in SPICE, so at the
>> moment, solid fill and CopyArea are the two main things that actually
>> make a difference.
>>
>
> Okay, that's in line with what my expectations were.  So what's the future
> of Spice for X?  Anything clever or is Windows the only target right now?
>

If spice grows offscreen surfaces along with a few more rendering
operations I expect we can make it go fairly fast for most X apps. Since
at RH we have both spice and X teams I think we can probably produce
some sort of collaborative plan in the short term on how to go forward.

Dave.




[Qemu-devel] Re: approaches to 3D virtualisation

2009-12-12 Thread Dave Airlie
>>
>> Current existing solutions in the area:
>> a) VMware virtual graphics adapter - based on DX9, has an open
>> KMS/Gallium3D driver stack recently released by vmware, has certified
>> Windows drivers and has a documented vGPU interface (it could be
>> documented a lot better)

http://vmware-svga.svn.sourceforge.net/viewvc/vmware-svga/trunk/doc/gpu-wiov.pdf?revision=1

is a good whitepaper on the different 3D virtualisation approaches and why
vmware picked what they did also.

Dave.




Re: [Qemu-devel] Re: Spice and legacy VGA drivers

2009-12-12 Thread Dave Airlie
On Sun, Dec 13, 2009 at 5:28 AM, Anthony Liguori  wrote:
> Izik Eidus wrote:
>>
>> That specific area in spice will be changed very soon due to new
>> requiments that the offscreens will add.
>> Windows direct draw allow modifying offscreen (or even primary)
>> surfaces using a pointer giving to the user, this mean we can`t know
>> what parts of the surface was changed... (In some modes the primary
>> screen can be changed without we know about this)
>>
>> We already thought about few algorithems we might want to add to spice
>> to better address this "changed without notifications surfaces", But it
>> is still not in a state I can confirm in what direction we will go in
>> the end (We still need to test most of the cases to know what fit us
>> best)
>>
>
> Okay, I'm interested in hearing more about this as it develops.  I think
> good support for legacy modes is an important requirement.
>
> For instance, I very often interact with VMs in text console mode.  In cloud
> deployments, it's pretty common to have minimal appliances that don't have a
> full X session.

We should develop a KMS stack for QXL like VMware have done for SVGA.

This would mean getting an fb console in the guests, and you could use
fb's dirty support. Getting out of VGA ASAP is a kernel graphics driver
goal going forward.

Also re: cairo, nearly every app on my desktop uses it via gtk.
Now how many proprietary apps exist that don't is an open
question, but anyone using QT or GTK will be using X render for lots of
drawing right now.

I'm not really sure how VNC works at the non-screen-scraping level, so
I suppose I should investigate. I've never seen VNC as a useful fit for
doing 3D rendering, but I suppose it could be extended.

Dave.




Re: [Qemu-devel] Re: approaches to 3D virtualisation

2009-12-12 Thread Dave Airlie
On Sun, Dec 13, 2009 at 1:23 AM, Anthony Liguori  wrote:
> Juan Quintela wrote:
>>
>> Dave Airlie  wrote:
>>
>>>>>
>>>>> Current existing solutions in the area:
>>>>> a) VMware virtual graphics adapter - based on DX9, has an open
>>>>> KMS/Gallium3D driver stack recently released by vmware, has certified
>>>>> Windows drivers and has a documented vGPU interface (it could be
>>>>> documented a lot better)
>>>>>
>>>
>>>
>>> http://vmware-svga.svn.sourceforge.net/viewvc/vmware-svga/trunk/doc/gpu-wiov.pdf?revision=1
>>>
>>> is a good whitepaper on the different 3D virtualisation approaches and
>>> why
>>> vmware picked what they did also.
>>>
>>> Dave.
>>>
>>
>> I have zero clue of 3D, but for the qemu part, vmware_vga is the "nicer"
>> driver.
>>
>
> I like the design of the vmware_vga driver but it has one critical flaw.
>  The Windows drivers has a EULA that prohibits their use outside of VMware.

Good point, I hadn't read the vmware EULA, this does put a spanner in the works,
unless someone is willing to write alternate drivers.

My reason for liking vmware is that it's not a redesign-it-all-at-once
solution: you can bring up the emulated adapter using known working
drivers (the Linux ones have no EULA) and confirm it works. If you then
want to write Windows drivers outside the EULA, at least you have two
platforms to validate them on. Someone could in theory write Windows
drivers under VMware itself and we could work in parallel.

>
> Without reasonably licensed Windows drivers, I don't think it's viable.
>
> I'm hoping QXL can fill this niche.

It would be nice; it's just that the design-everything-at-once approach
never sits well with me. Having to do the host side interface, guest
vGPU and guest drivers all at once requires a lot of blame hunting,
i.e. where the correct fixes go etc.

But yes, ideally an open QXL that can challenge VMware would be coolest.

Maybe the QXL interface can at least leverage some of the VMware design
rather than reinventing the wheel.

Dave.




[Qemu-devel] vmware vga + kvm interaction

2009-12-13 Thread Dave Airlie
If I boot an F12 LiveCD with vmware VGA without KVM enabled, I get the
syslinux boot screen and can pick
options, the same qemu run with -enable-kvm, I just get a blank screen.

Anyone have any clues on why this might be?

all with latest git tree.
Dave.




Re: [Qemu-devel] vmware vga + kvm interaction

2009-12-13 Thread Dave Airlie
On Mon, Dec 14, 2009 at 3:59 AM, Anthony Liguori  wrote:
> Avi Kivity wrote:
>>
>> On 12/13/2009 10:55 AM, Dave Airlie wrote:
>>>
>>> If I boot an F12 LiveCD with vmware VGA without KVM enabled, I get the
>>> syslinux boot screen and can pick
>>> options, the same qemu run with -enable-kvm, I just get a blank screen.
>>>
>>> Anyone have any clues on why this might be?
>>>
>>>
>>
>> One of the niceties of vmvga is that it accesses cpu registers in response
>> to an I/O instruction.  Maybe this bit is broken.  Does your hw/vmport.c
>> have cpu_synchronize_state() in vmport_ioport_read()?
>>
>> Hmm, upstream doesn't, so no surprise it is broken.
>
> vmware-vga does not use vmport.
>
> The issue is related to dirty tracking and how the vbe bios memory
> gets mapped.  I've posted patches on the list before.
>

I actually reinvented at least one of the patches locally and it
didn't seem to help,
but I'll try and take a closer look today,

Dave.




Re: [Qemu-devel] vmware vga + kvm interaction

2009-12-14 Thread Dave Airlie
On Tue, Dec 15, 2009 at 12:28 AM, Anthony Liguori  wrote:
> Dave Airlie wrote:
>>
>> I actually reinvented at least one of the patches locally and it
>> didn't seem to help,
>> but I'll try and take a closer look today,
>>
>
> http://repo.or.cz/w/qemu/aliguori-queue.git vmware-vga-for-dave
>
> Is the local branch I have for vmware-vga work.  I'm not sure why I never
> pushed those patches, but I suspect it's because I was still tracking down a
> bug.  IIRC, this branch fixes things with -enable-kvm, but I'd usually see a
> SEGV in qemu about 1-2 minutes after getting into X.  I don't think that has
> anything to do with kvm though.
>

Just to say me too, that's happening here alright, I just haven't had spare time
to debug it yet.

Dave.




[Qemu-devel] vmware fixes in a git tree

2009-12-17 Thread Dave Airlie
Hi guys,

I've gotten vmware going again on master using some patches from
Anthony (rebased onto master)
and a fix for the cursor pixmap which was causing a segfault.

The patches are in
git://people.freedesktop.org/~airlied/qemu.git vmware-queue
http://cgit.freedesktop.org/~airlied/qemu/log/?h=vmware-queue

I can send the queue to the list if required, including Anthony's patches.

It also contains a fix to expose the fifo via a second PCI BAR which
is what the vmware
svga spec states.

I've booted an F12 LiveCD with this.

Dave.




[Qemu-devel] qemu fixup patch sequence.

2009-12-17 Thread Dave Airlie
First patch adds a second PCI BAR as per the VMware SVGA specification for
the fifo instead of using the top of the VRAM BAR.

Patches 2-5 are from Anthony and I just rebased them on top of master.

Patch 6 fixes a crasher in vmware when X starts inside the guest.

I've booted these with an F12 LiveCD and all seems to work.

Dave.





[Qemu-devel] [PATCH 2/6] Make sure to enable dirty tracking of VBE vram mapping

2009-12-17 Thread Dave Airlie
From: Anthony Liguori 

Apparently, VBE maps the VGA vram to a fixed physical location.  KVM requires
that all mappings of the VGA vram have dirty tracking enabled on them.  Any
access to the VGA vram through the VBE mapping currently fails to result in
dirty page tracking updates causing a black screen.

This is the true root cause of VMware VGA not working correctly under KVM and
likely also an issue with some of the std-vga black screen issues too.

Cirrus does not enable VBE so it would not be a problem when using Cirrus.

Signed-off-by: Anthony Liguori 
Rebased-by: Dave Airlie 
---
 hw/vga-isa.c|6 +-
 hw/vga-pci.c|7 +--
 hw/vga.c|   24 
 hw/vga_int.h|5 +++--
 hw/vmware_vga.c |8 ++--
 5 files changed, 31 insertions(+), 19 deletions(-)

diff --git a/hw/vga-isa.c b/hw/vga-isa.c
index 5f29904..7937144 100644
--- a/hw/vga-isa.c
+++ b/hw/vga-isa.c
@@ -42,11 +42,7 @@ int isa_vga_init(void)
 s->ds = graphic_console_init(s->update, s->invalidate,
  s->screen_dump, s->text_update, s);
 
-#ifdef CONFIG_BOCHS_VBE
-/* XXX: use optimized standard vga accesses */
-cpu_register_physical_memory(VBE_DISPI_LFB_PHYSICAL_ADDRESS,
- VGA_RAM_SIZE, s->vram_offset);
-#endif
+vga_init_vbe(s);
 /* ROM BIOS */
 rom_add_vga(VGABIOS_FILENAME);
 return 0;
diff --git a/hw/vga-pci.c b/hw/vga-pci.c
index e8cc024..eef78ed 100644
--- a/hw/vga-pci.c
+++ b/hw/vga-pci.c
@@ -106,12 +106,7 @@ static int pci_vga_initfn(PCIDevice *dev)
  PCI_BASE_ADDRESS_MEM_PREFETCH, vga_map);
  }
 
-#ifdef CONFIG_BOCHS_VBE
-/* XXX: use optimized standard vga accesses */
-cpu_register_physical_memory(VBE_DISPI_LFB_PHYSICAL_ADDRESS,
- VGA_RAM_SIZE, s->vram_offset);
-#endif
-
+vga_init_vbe(s);
  /* ROM BIOS */
  rom_add_vga(VGABIOS_FILENAME);
  return 0;
diff --git a/hw/vga.c b/hw/vga.c
index 740fe28..5b0c55e 100644
--- a/hw/vga.c
+++ b/hw/vga.c
@@ -1581,6 +1581,14 @@ static void vga_sync_dirty_bitmap(VGACommonState *s)
cpu_physical_sync_dirty_bitmap(isa_mem_base + 0xa0000, 0xa8000);
cpu_physical_sync_dirty_bitmap(isa_mem_base + 0xa8000, 0xb0000);
 }
+
+#ifdef CONFIG_BOCHS_VBE
+if (s->vbe_mapped) {
+cpu_physical_sync_dirty_bitmap(VBE_DISPI_LFB_PHYSICAL_ADDRESS,
+   VBE_DISPI_LFB_PHYSICAL_ADDRESS + s->vram_size);
+}
+#endif
+
 }
 
 void vga_dirty_log_start(VGACommonState *s)
@@ -1592,6 +1600,13 @@ void vga_dirty_log_start(VGACommonState *s)
kvm_log_start(isa_mem_base + 0xa0000, 0x8000);
 kvm_log_start(isa_mem_base + 0xa8000, 0x8000);
 }
+
+#ifdef CONFIG_BOCHS_VBE
+if (kvm_enabled() && s->vbe_mapped) {
+kvm_log_start(VBE_DISPI_LFB_PHYSICAL_ADDRESS, s->vram_size);
+}
+#endif
+
 }
 
 /*
@@ -2294,6 +2309,15 @@ void vga_init(VGACommonState *s)
qemu_register_coalesced_mmio(isa_mem_base + 0x000a0000, 0x20000);
 }
 
+void vga_init_vbe(VGACommonState *s)
+{
+#ifdef CONFIG_BOCHS_VBE
+/* XXX: use optimized standard vga accesses */
+cpu_register_physical_memory(VBE_DISPI_LFB_PHYSICAL_ADDRESS,
+ VGA_RAM_SIZE, s->vram_offset);
+s->vbe_mapped = 1;
+#endif 
+}
 //
 /* vga screen dump */
 
diff --git a/hw/vga_int.h b/hw/vga_int.h
index c03c220..b5302c1 100644
--- a/hw/vga_int.h
+++ b/hw/vga_int.h
@@ -71,8 +71,8 @@
 uint16_t vbe_regs[VBE_DISPI_INDEX_NB];  \
 uint32_t vbe_start_addr;\
 uint32_t vbe_line_offset;   \
-uint32_t vbe_bank_mask;
-
+uint32_t vbe_bank_mask;\
+int vbe_mapped;
 #else
 
 #define VGA_STATE_COMMON_BOCHS_VBE
@@ -217,6 +217,7 @@ void vga_draw_cursor_line_32(uint8_t *d1, const uint8_t *src1,
  unsigned int color_xor);
 
 int vga_ioport_invalid(VGACommonState *s, uint32_t addr);
+void vga_init_vbe(VGACommonState *s);
 
 extern const uint8_t sr_mask[8];
 extern const uint8_t gr_mask[16];
diff --git a/hw/vmware_vga.c b/hw/vmware_vga.c
index 28bbc3f..07befc8 100644
--- a/hw/vmware_vga.c
+++ b/hw/vmware_vga.c
@@ -1129,12 +1129,8 @@ static void vmsvga_init(struct vmsvga_state_s *s, int vga_ram_size)
  vmsvga_screen_dump,
  vmsvga_text_update, s);
 
-#ifdef CONFIG_BOCHS_VBE
-/* XXX: use optimized standard vga accesses */
-cpu_register_physical_memory(VBE_DISPI_LFB_PHYSICAL_ADDRESS,
- vga_ram_size, s->vga.vram_offset);
-#endif
- rom_add_vga(VGABIOS_FILENAME);
+vga_init_vbe(&s->vga);
+rom_add_vga(VGABIOS_FILENAME);
 }
 
 static void pci_vmsvga_map_ioport(PCIDevice *pci_dev, int region_num,
-- 
1.6.5.2





[Qemu-devel] [PATCH 1/6] vmware: setup PCI BAR 2 for FIFO as per vmware spec

2009-12-17 Thread Dave Airlie
From: Dave Airlie 

---
 hw/vmware_vga.c |   35 ++-
 1 files changed, 30 insertions(+), 5 deletions(-)

diff --git a/hw/vmware_vga.c b/hw/vmware_vga.c
index f3e3749..28bbc3f 100644
--- a/hw/vmware_vga.c
+++ b/hw/vmware_vga.c
@@ -67,6 +67,11 @@ struct vmsvga_state_s {
 int syncing;
 int fb_size;
 
+ram_addr_t fifo_offset;
+uint8_t *fifo_ptr;
+unsigned int fifo_size;
+target_phys_addr_t fifo_base;
+
 union {
 uint32_t *fifo;
 struct __attribute__((__packed__)) {
@@ -680,7 +685,7 @@ static uint32_t vmsvga_value_read(void *opaque, uint32_t address)
 return 0x0;
 
 case SVGA_REG_VRAM_SIZE:
-return s->vga.vram_size - SVGA_FIFO_SIZE;
+return s->vga.vram_size;
 
 case SVGA_REG_FB_SIZE:
 return s->fb_size;
@@ -701,10 +706,10 @@ static uint32_t vmsvga_value_read(void *opaque, uint32_t address)
 return caps;
 
 case SVGA_REG_MEM_START:
-return s->vram_base + s->vga.vram_size - SVGA_FIFO_SIZE;
+return s->fifo_base;
 
 case SVGA_REG_MEM_SIZE:
-return SVGA_FIFO_SIZE;
+return s->fifo_size;
 
 case SVGA_REG_CONFIG_DONE:
 return s->config;
@@ -790,7 +795,7 @@ static void vmsvga_value_write(void *opaque, uint32_t address, uint32_t value)
 
 case SVGA_REG_CONFIG_DONE:
 if (value) {
-s->fifo = (uint32_t *) &s->vga.vram_ptr[s->vga.vram_size - SVGA_FIFO_SIZE];
+s->fifo = (uint32_t *) s->fifo_ptr;
 /* Check range and alignment.  */
 if ((CMD(min) | CMD(max) |
 CMD(next_cmd) | CMD(stop)) & 3)
@@ -1059,7 +1064,7 @@ static int vmsvga_post_load(void *opaque, int version_id)
 
 s->invalidated = 1;
 if (s->config)
-s->fifo = (uint32_t *) &s->vga.vram_ptr[s->vga.vram_size - SVGA_FIFO_SIZE];
+s->fifo = (uint32_t *) s->fifo_ptr;
 
 return 0;
 }
@@ -1111,6 +1116,10 @@ static void vmsvga_init(struct vmsvga_state_s *s, int vga_ram_size)
 
 vmsvga_reset(s);
 
+s->fifo_size = SVGA_FIFO_SIZE;
+s->fifo_offset = qemu_ram_alloc(s->fifo_size);
+s->fifo_ptr = qemu_get_ram_ptr(s->fifo_offset);
+
 vga_common_init(&s->vga, vga_ram_size);
 vga_init(&s->vga);
 vmstate_register(0, &vmstate_vga_common, &s->vga);
@@ -1166,6 +1175,19 @@ static void pci_vmsvga_map_mem(PCIDevice *pci_dev, int region_num,
 iomemtype);
 }
 
+static void pci_vmsvga_map_fifo(PCIDevice *pci_dev, int region_num,
+pcibus_t addr, pcibus_t size, int type)
+{
+struct pci_vmsvga_state_s *d = (struct pci_vmsvga_state_s *) pci_dev;
+struct vmsvga_state_s *s = &d->chip;
+ram_addr_t iomemtype;
+
+s->fifo_base = addr;
+iomemtype = s->fifo_offset | IO_MEM_RAM;
+cpu_register_physical_memory(s->fifo_base, s->fifo_size,
+iomemtype);
+}
+
 static int pci_vmsvga_initfn(PCIDevice *dev)
 {
 struct pci_vmsvga_state_s *s =
@@ -1189,6 +1211,9 @@ static int pci_vmsvga_initfn(PCIDevice *dev)
 pci_register_bar(&s->card, 1, VGA_RAM_SIZE,
 PCI_BASE_ADDRESS_MEM_PREFETCH, pci_vmsvga_map_mem);
 
+pci_register_bar(&s->card, 2, SVGA_FIFO_SIZE,
+PCI_BASE_ADDRESS_MEM_PREFETCH, pci_vmsvga_map_fifo);
+
 vmsvga_init(&s->chip, VGA_RAM_SIZE);
 
 return 0;
-- 
1.6.5.2





[Qemu-devel] [PATCH 3/6] Make sure to enable dirty log tracking for VMware VGA

2009-12-17 Thread Dave Airlie
From: Anthony Liguori 

This is needed for VMware VGA to work properly under KVM.

Signed-off-by: Anthony Liguori 
---
 hw/vmware_vga.c |    4 ++++
 1 files changed, 4 insertions(+), 0 deletions(-)

diff --git a/hw/vmware_vga.c b/hw/vmware_vga.c
index 07befc8..fcb6808 100644
--- a/hw/vmware_vga.c
+++ b/hw/vmware_vga.c
@@ -1169,6 +1169,10 @@ static void pci_vmsvga_map_mem(PCIDevice *pci_dev, int region_num,
 #endif
 cpu_register_physical_memory(s->vram_base, s->vga.vram_size,
 iomemtype);
+
+s->vga.map_addr = addr;
+s->vga.map_end = addr + s->vga.vram_size;
+vga_dirty_log_start(&s->vga);
 }
 
 static void pci_vmsvga_map_fifo(PCIDevice *pci_dev, int region_num,
-- 
1.6.5.2





[Qemu-devel] [PATCH 4/6] Fix VMware VGA depth computation

2009-12-17 Thread Dave Airlie
From: Anthony Liguori 

VMware VGA requires that the depth presented to the guest matches the depth
of the DisplaySurface that it renders to, because it performs a very simple
memcpy() to blit from one surface to another.

We currently hardcode a 24-bit depth.  The surface allocator for SDL may, and
usually will, allocate a surface with a different depth, causing screen
corruption.

This changes the code to allocate the DisplaySurface before initializing the
device, which allows the depth of the DisplaySurface to be used instead of a
hardcoded value.
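
A quick self-contained illustration of the mismatch (the width and depths
here are example values, not anything read back from SDL): with the depth
hardcoded to 24 the guest produces 3-byte pixels, while a 32 bpp
DisplaySurface expects 4-byte pixels, so a straight memcpy() of a scanline
shears the image:

#include <stdio.h>

int main(void)
{
    int width = 640;

    int guest_depth   = 24;    /* hardcoded before this patch          */
    int surface_depth = 32;    /* what the SDL allocator usually picks */

    /* same bytes-per-pixel computation as vmsvga_reset() */
    int guest_pitch   = ((guest_depth + 7) >> 3) * width;    /* 1920 */
    int surface_pitch = ((surface_depth + 7) >> 3) * width;  /* 2560 */

    printf("guest scanline:   %d bytes\n", guest_pitch);
    printf("surface scanline: %d bytes\n", surface_pitch);
    printf("a memcpy() blit %s\n",
           guest_pitch == surface_pitch ? "lines up"
                                        : "corrupts the display");
    return 0;
}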

Signed-off-by: Anthony Liguori 
---
 hw/vmware_vga.c |   14 +++---
 1 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/hw/vmware_vga.c b/hw/vmware_vga.c
index fcb6808..ae91327 100644
--- a/hw/vmware_vga.c
+++ b/hw/vmware_vga.c
@@ -915,8 +915,8 @@ static void vmsvga_reset(struct vmsvga_state_s *s)
 s->width = -1;
 s->height = -1;
 s->svgaid = SVGA_ID;
-s->depth = 24;
-s->bypp = (s->depth + 7) >> 3;
+s->depth = ds_get_bits_per_pixel(s->vga.ds);
+s->bypp = ds_get_bytes_per_pixel(s->vga.ds);
 s->cursor.on = 0;
 s->redraw_fifo_first = 0;
 s->redraw_fifo_last = 0;
@@ -1114,6 +1114,11 @@ static void vmsvga_init(struct vmsvga_state_s *s, int vga_ram_size)
 s->scratch_size = SVGA_SCRATCH_SIZE;
 s->scratch = qemu_malloc(s->scratch_size * 4);
 
+s->vga.ds = graphic_console_init(vmsvga_update_display,
+ vmsvga_invalidate_display,
+ vmsvga_screen_dump,
+ vmsvga_text_update, s);
+
 vmsvga_reset(s);
 
 s->fifo_size = SVGA_FIFO_SIZE;
@@ -1124,11 +1129,6 @@ static void vmsvga_init(struct vmsvga_state_s *s, int vga_ram_size)
 vga_init(&s->vga);
 vmstate_register(0, &vmstate_vga_common, &s->vga);
 
-s->vga.ds = graphic_console_init(vmsvga_update_display,
- vmsvga_invalidate_display,
- vmsvga_screen_dump,
- vmsvga_text_update, s);
-
 vga_init_vbe(&s->vga);
 rom_add_vga(VGABIOS_FILENAME);
 }
-- 
1.6.5.2





[Qemu-devel] [PATCH 6/6] vmware: increase cursor buffer size.

2009-12-17 Thread Dave Airlie
From: Dave Airlie 

The cursor pixmap size we calculate later ends up being 4096 dwords,
four times the old 1024-dword buffer. This boots an F12 LiveCD now.
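
The arithmetic behind the 4096 figure: SVGA_BITMAP_SIZE appears in the hunk
below, and SVGA_PIXMAP_SIZE is assumed to match its per-bpp counterpart in
hw/vmware_vga.c (dword-rounded bits per scanline times height). A 64x64
cursor image at 32 bpp needs 4096 dwords, while the 1 bpp mask still fits
in 128:

#include <stdio.h>

#define SVGA_BITMAP_SIZE(w, h)      ((((w) + 31) >> 5) * (h))
#define SVGA_PIXMAP_SIZE(w, h, bpp) (((((w) * (bpp)) + 31) >> 5) * (h))

int main(void)
{
    /* sizes are in 32-bit dwords, as in struct vmsvga_cursor_definition_s */
    printf("64x64 mask  (1 bpp):  %d dwords\n", SVGA_BITMAP_SIZE(64, 64));
    printf("64x64 image (32 bpp): %d dwords\n", SVGA_PIXMAP_SIZE(64, 64, 32));
    return 0;
}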

Signed-off-by: Dave Airlie 
---
 hw/vmware_vga.c |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/hw/vmware_vga.c b/hw/vmware_vga.c
index e3d5706..7ab1c79 100644
--- a/hw/vmware_vga.c
+++ b/hw/vmware_vga.c
@@ -467,7 +467,7 @@ struct vmsvga_cursor_definition_s {
 int hot_x;
 int hot_y;
 uint32_t mask[1024];
-uint32_t image[1024];
+uint32_t image[4096];
 };
 
 #define SVGA_BITMAP_SIZE(w, h) ((((w) + 31) >> 5) * (h))
-- 
1.6.5.2





[Qemu-devel] [PATCH 5/6] VMware VGA: Only enable dirty log tracking when fifo is disabled

2009-12-17 Thread Dave Airlie
From: Anthony Liguori 

This patch enables dirty log tracking whenever it's needed and disables it
when it is not.

We unconditionally enable dirty log tracking on reset, restart dirty log
tracking when PCI IO regions are remapped, and disable/enable it based on
commands from the guest.
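
A compact sketch of the resulting policy (toy code, not the QEMU API; the
real transitions are in the hunks below). Tracking is on after reset,
re-registered when a PCI region moves, and switched off while the guest has
the SVGA fifo enabled, since the driver then reports damage through the
fifo instead:

#include <stdbool.h>
#include <stdio.h>

static bool tracking;

static void log_start(void)   { tracking = true;  }
static void log_stop(void)    { tracking = false; }
static void log_restart(void) { log_stop(); log_start(); }

/* guest-visible events and how they drive the dirty log */
static void on_reset(void)        { log_start(); }   /* unconditional      */
static void on_region_remap(void) { log_restart(); } /* re-register, as the
                                                        patch does          */
static void on_svga_enable(bool on)
{
    if (on)
        log_stop();     /* guest draws via the fifo */
    else
        log_start();    /* back to plain VGA, track writes again */
}

int main(void)
{
    on_reset();            printf("after reset:   %d\n", tracking);
    on_svga_enable(true);  printf("fifo enabled:  %d\n", tracking);
    on_region_remap();     printf("region moved:  %d\n", tracking);
    on_svga_enable(false); printf("fifo disabled: %d\n", tracking);
    return 0;
}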

Rebased-by: Dave Airlie 
Signed-off-by: Anthony Liguori 
---
 hw/vga.c        |   22 ++
 hw/vga_int.h    |    2 ++
 hw/vmware_vga.c |   16 
 3 files changed, 36 insertions(+), 4 deletions(-)

diff --git a/hw/vga.c b/hw/vga.c
index 5b0c55e..d05f1f9 100644
--- a/hw/vga.c
+++ b/hw/vga.c
@@ -1606,7 +1606,29 @@ void vga_dirty_log_start(VGACommonState *s)
 kvm_log_start(VBE_DISPI_LFB_PHYSICAL_ADDRESS, s->vram_size);
 }
 #endif
+}
+
+void vga_dirty_log_stop(VGACommonState *s)
+{
+if (kvm_enabled() && s->map_addr)
+   kvm_log_stop(s->map_addr, s->map_end - s->map_addr);
+
+if (kvm_enabled() && s->lfb_vram_mapped) {
+   kvm_log_stop(isa_mem_base + 0xa0000, 0x8000);
+   kvm_log_stop(isa_mem_base + 0xa8000, 0x8000);
+}
 
+#ifdef CONFIG_BOCHS_VBE
+if (kvm_enabled() && s->vbe_mapped) {
+   kvm_log_stop(VBE_DISPI_LFB_PHYSICAL_ADDRESS, s->vram_size);
+}
+#endif
+}
+
+void vga_dirty_log_restart(VGACommonState *s)
+{
+vga_dirty_log_stop(s);
+vga_dirty_log_start(s);
 }
 
 /*
diff --git a/hw/vga_int.h b/hw/vga_int.h
index b5302c1..23a42ef 100644
--- a/hw/vga_int.h
+++ b/hw/vga_int.h
@@ -194,6 +194,8 @@ void vga_init(VGACommonState *s);
 void vga_common_reset(VGACommonState *s);
 
 void vga_dirty_log_start(VGACommonState *s);
+void vga_dirty_log_stop(VGACommonState *s);
+void vga_dirty_log_restart(VGACommonState *s);
 
 extern const VMStateDescription vmstate_vga_common;
 uint32_t vga_ioport_read(void *opaque, uint32_t addr);
diff --git a/hw/vmware_vga.c b/hw/vmware_vga.c
index ae91327..e3d5706 100644
--- a/hw/vmware_vga.c
+++ b/hw/vmware_vga.c
@@ -771,8 +771,12 @@ static void vmsvga_value_write(void *opaque, uint32_t address, uint32_t value)
 s->height = -1;
 s->invalidated = 1;
 s->vga.invalidate(&s->vga);
-if (s->enable)
-    s->fb_size = ((s->depth + 7) >> 3) * s->new_width * s->new_height;
+if (s->enable) {
+    s->fb_size = ((s->depth + 7) >> 3) * s->new_width * s->new_height;
+    vga_dirty_log_stop(&s->vga);
+} else {
+    vga_dirty_log_start(&s->vga);
+}
 break;
 
 case SVGA_REG_WIDTH:
@@ -948,6 +952,8 @@ static void vmsvga_reset(struct vmsvga_state_s *s)
 break;
 }
 s->syncing = 0;
+
+vga_dirty_log_start(&s->vga);
 }
 
 static void vmsvga_invalidate_display(void *opaque)
@@ -1119,7 +1125,6 @@ static void vmsvga_init(struct vmsvga_state_s *s, int vga_ram_size)
  vmsvga_screen_dump,
  vmsvga_text_update, s);
 
-vmsvga_reset(s);
 
 s->fifo_size = SVGA_FIFO_SIZE;
 s->fifo_offset = qemu_ram_alloc(s->fifo_size);
@@ -1130,7 +1135,10 @@ static void vmsvga_init(struct vmsvga_state_s *s, int vga_ram_size)
 vmstate_register(0, &vmstate_vga_common, &s->vga);
 
 vga_init_vbe(&s->vga);
+
 rom_add_vga(VGABIOS_FILENAME);
+
+vmsvga_reset(s);
 }
 
 static void pci_vmsvga_map_ioport(PCIDevice *pci_dev, int region_num,
@@ -1172,7 +1180,7 @@ static void pci_vmsvga_map_mem(PCIDevice *pci_dev, int region_num,
 
 s->vga.map_addr = addr;
 s->vga.map_end = addr + s->vga.vram_size;
-vga_dirty_log_start(&s->vga);
+vga_dirty_log_restart(&s->vga);
 }
 
 static void pci_vmsvga_map_fifo(PCIDevice *pci_dev, int region_num,
-- 
1.6.5.2





[Qemu-devel] [PATCH] virtio-gpu-3d: add support for second capability set (v2)

2018-02-20 Thread Dave Airlie
From: Dave Airlie 

Due to a kernel bug we can never increase the size of capability
set 1, so introduce a new capability set in parallel; old userspace
will continue to use the old set, and new userspace will start using
the new one once it detects a fixed kernel.

v2: don't use a define from virglrenderer, just probe it.
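
For context, a sketch of how a guest can walk the capset table once
num_capsets may be 2. The constants match the header change below; the
device query is stubbed out here and purely illustrative of the probing
order, not of any real driver:

#include <stdint.h>
#include <stdio.h>

#define VIRTIO_GPU_CAPSET_VIRGL  1
#define VIRTIO_GPU_CAPSET_VIRGL2 2

/* Stub for VIRTIO_GPU_CMD_GET_CAPSET_INFO: a real guest sends the command
 * with a capset_index and reads capset_id/max_version from the response. */
static void get_capset_info(uint32_t index, uint32_t *id, uint32_t *max_ver)
{
    if (index == 0) {
        *id = VIRTIO_GPU_CAPSET_VIRGL;
        *max_ver = 1;
    } else {
        *id = VIRTIO_GPU_CAPSET_VIRGL2;
        *max_ver = 2;
    }
}

int main(void)
{
    uint32_t num_capsets = 2;  /* from virtio config space after this patch */
    uint32_t best = 0;

    for (uint32_t i = 0; i < num_capsets; i++) {
        uint32_t id, ver;
        get_capset_info(i, &id, &ver);
        printf("capset_index %u -> capset_id %u (max version %u)\n",
               i, id, ver);
        if (id > best)
            best = id;         /* new userspace prefers VIRGL2 */
    }
    printf("using capset %u\n", best);
    return 0;
}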

Signed-off-by: Dave Airlie 
---
 hw/display/virtio-gpu-3d.c                  | 5 +
 hw/display/virtio-gpu.c                     | 7 ++-
 include/standard-headers/linux/virtio_gpu.h | 1 +
 3 files changed, 12 insertions(+), 1 deletion(-)

diff --git a/hw/display/virtio-gpu-3d.c b/hw/display/virtio-gpu-3d.c
index 7db84efe89..c601b43810 100644
--- a/hw/display/virtio-gpu-3d.c
+++ b/hw/display/virtio-gpu-3d.c
@@ -362,6 +362,11 @@ static void virgl_cmd_get_capset_info(VirtIOGPU *g,
 virgl_renderer_get_cap_set(resp.capset_id,
&resp.capset_max_version,
&resp.capset_max_size);
+} else if (info.capset_index == 1) {
+resp.capset_id = VIRTIO_GPU_CAPSET_VIRGL2;
+virgl_renderer_get_cap_set(resp.capset_id,
+   &resp.capset_max_version,
+   &resp.capset_max_size);
 } else {
 resp.capset_max_version = 0;
 resp.capset_max_size = 0;
diff --git a/hw/display/virtio-gpu.c b/hw/display/virtio-gpu.c
index 6658f6c6a6..1418db1b88 100644
--- a/hw/display/virtio-gpu.c
+++ b/hw/display/virtio-gpu.c
@@ -1212,10 +1212,15 @@ static void virtio_gpu_device_realize(DeviceState *qdev, Error **errp)
 g->req_state[0].height = g->conf.yres;
 
 if (virtio_gpu_virgl_enabled(g->conf)) {
+uint32_t capset2_max_ver, capset2_max_size;
 /* use larger control queue in 3d mode */
 g->ctrl_vq   = virtio_add_queue(vdev, 256, virtio_gpu_handle_ctrl_cb);
 g->cursor_vq = virtio_add_queue(vdev, 16, virtio_gpu_handle_cursor_cb);
-g->virtio_config.num_capsets = 1;
+
+virgl_renderer_get_cap_set(VIRTIO_GPU_CAPSET_VIRGL2,
+   &capset2_max_ver,
+   &capset2_max_size);
+g->virtio_config.num_capsets = capset2_max_ver > 0 ? 2 : 1;
 } else {
 g->ctrl_vq   = virtio_add_queue(vdev, 64, virtio_gpu_handle_ctrl_cb);
 g->cursor_vq = virtio_add_queue(vdev, 16, virtio_gpu_handle_cursor_cb);
diff --git a/include/standard-headers/linux/virtio_gpu.h b/include/standard-headers/linux/virtio_gpu.h
index c1c8f0751d..52a830dcf8 100644
--- a/include/standard-headers/linux/virtio_gpu.h
+++ b/include/standard-headers/linux/virtio_gpu.h
@@ -260,6 +260,7 @@ struct virtio_gpu_cmd_submit {
 };
 
 #define VIRTIO_GPU_CAPSET_VIRGL 1
+#define VIRTIO_GPU_CAPSET_VIRGL2 2
 
 /* VIRTIO_GPU_CMD_GET_CAPSET_INFO */
 struct virtio_gpu_get_capset_info {
-- 
2.14.3



