Hi,
On 08/01/2011 03:22 PM, Alon Levy wrote:
On Mon, Aug 01, 2011 at 03:14:12PM +0200, Hans de Goede wrote:
Hi,
On 08/01/2011 10:51 AM, Christophe Fergeau wrote:
On Sat, Jul 30, 2011 at 08:26:04AM +0200, Hans de Goede wrote:
<snip snip>
The second is to define a new jpeg-rgb format where we use jpeg
compression straight on rgb, rather than first converting it
to the yuv colorspace (this happens inside libjpeg, but can be
overridden). This will lead to a slight increase in bandwidth
usage, but also a great drop in cpu usage.
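For reference, a rough sketch of what that override could look like with
libjpeg (the function name, buffer layout and quality value are just
placeholders of mine, and jpeg_mem_dest() needs libjpeg 8 or libjpeg-turbo):

#include <stdio.h>
#include <jpeglib.h>

/* Sketch only: compress a 24-bit RGB buffer, but keep the components in the
   RGB colorspace instead of letting libjpeg convert them to YCbCr first. */
static void compress_rgb_no_yuv(const unsigned char *rgb, int width, int height,
                                unsigned char **out, unsigned long *out_size)
{
    struct jpeg_compress_struct cinfo;
    struct jpeg_error_mgr jerr;
    JSAMPROW row;
    int y;

    cinfo.err = jpeg_std_error(&jerr);
    jpeg_create_compress(&cinfo);
    jpeg_mem_dest(&cinfo, out, out_size);

    cinfo.image_width = width;
    cinfo.image_height = height;
    cinfo.input_components = 3;
    cinfo.in_color_space = JCS_RGB;
    jpeg_set_defaults(&cinfo);

    /* the interesting call: store the components as R, G, B rather than
       Y, Cb, Cr, so no colorspace conversion or chroma subsampling is done */
    jpeg_set_colorspace(&cinfo, JCS_RGB);
    jpeg_set_quality(&cinfo, 85, TRUE);

    jpeg_start_compress(&cinfo, TRUE);
    for (y = 0; y < height; y++) {
        row = (JSAMPROW)(rgb + y * width * 3);
        jpeg_write_scanlines(&cinfo, &row, 1);
    }
    jpeg_finish_compress(&cinfo);
    jpeg_destroy_compress(&cinfo);
}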
Would this still be standard jpeg, or would this mean we invented our own
spice-jpeg? In other words, if we change the server to do this, will the
client be able to handle this using standard libjpeg calls, or will it need
to be modified to be able to handle this jpeg-rgb format?
The client will need changes too. The client might also need changes for
the abbreviated image stuff, though that may work out of the box. I'm not
sure if currently one libjpeg decompression object gets re-used or a new
one gets created for each image. If a new one gets created each time, then
the client will need changes for the abbreviated stuff as well.
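Something along these lines is what I mean by re-using the decompression
object (a sketch only, the function names are made up; the point is that the
jpeg_decompress_struct, and with it the tables a tables-only abbreviated
datastream installs, survives from one image to the next; jpeg_mem_src()
again needs libjpeg 8 or libjpeg-turbo):

#include <stdio.h>
#include <jpeglib.h>

static struct jpeg_decompress_struct dinfo;
static struct jpeg_error_mgr derr;

static void decoder_init(void)
{
    dinfo.err = jpeg_std_error(&derr);
    jpeg_create_decompress(&dinfo);        /* created once, not per image */
}

static void decoder_handle_datastream(unsigned char *data, unsigned long size,
                                      JSAMPARRAY rows)
{
    jpeg_mem_src(&dinfo, data, size);
    /* require_image = FALSE lets a tables-only datastream through */
    if (jpeg_read_header(&dinfo, FALSE) == JPEG_HEADER_TABLES_ONLY)
        return;                            /* tables remembered for later images */
    jpeg_start_decompress(&dinfo);
    while (dinfo.output_scanline < dinfo.output_height)
        jpeg_read_scanlines(&dinfo, rows + dinfo.output_scanline, 1);
    jpeg_finish_decompress(&dinfo);        /* object stays alive for the next image */
}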
Moreover, are you
sure it will only result in a slight increase in bandwidth use at comparable
quality?
A good question, thinking more about this, I'm likely wrong wrt the bandwidth
usage. The uncompressed data is 6 bytes / 4 pixels in YUV420 mode (YUV color
space with U and V sampled once per 2x2 pixel block, so 4 Y + 1 U + 1 V
samples) versus 12 bytes / 4 pixels in RGB24 mode, so feeding our RGB24 data
directly to libjpeg rather than first converting it to YUV420 will likely
lead to approx double the bandwidth usage :(
Isn't the image sent compressed? i.e. size(jpeg_compress(as_rgb(image))) /
size(jpeg_compress(as_yuv(image))) != (12/4) / (6/4)
Right, but since the raw components fed to the jpeg compression (which always
happens per component) are twice as big, chances are the output will also
be approx twice as big.
I'm not really familiar with jpeg compression, I know it's something with a
truncated series of cosine transform coefficients, so the point is that the
type of the saved data is neither YUV420 nor RGB24.
Right, but the compression happens per component, you can even add an alpha
channel if you want, or compress 6 components or whatever ...
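For example (just a sketch, the 4th component standing in for alpha is an
assumption of mine, libjpeg itself attaches no meaning to the components):

#include <stdio.h>
#include <jpeglib.h>

/* Sketch: with JCS_UNKNOWN libjpeg compresses whatever components it is
   handed, without any colorspace conversion or subsampling. */
static void setup_rgba_input(struct jpeg_compress_struct *cinfo,
                             int width, int height)
{
    cinfo->image_width = width;
    cinfo->image_height = height;
    cinfo->input_components = 4;          /* e.g. R, G, B, A */
    cinfo->in_color_space = JCS_UNKNOWN;  /* compress components as-is */
    jpeg_set_defaults(cinfo);
    /* rows passed to jpeg_write_scanlines() must now be 4 bytes per pixel */
}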
Basically what happens more or less currently is:
-The image gets divided into 16x16 macro blocks
-The last row / column of macro blocks gets padded out to 16x16
 by duplicating the last row / column of the block.
-16x16 R + G + B data -> 16x16 Y data + 8x8 U + V data
-per component the pixels:
 -get transformed into the frequency domain
 -get quantized (non-significant bits and most of the high
  frequency detail get thrown away)
 -the left over frequency info gets huffman compressed
-this results in a bitstream, note the next macro block
 component will start with the next bit, not necessarily
 on a byte boundary.
According to the JPEG standard one is free to use different subsampling
factors for components, number of components, etc.
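E.g. the 4:2:0 layout from the list above corresponds to these per-component
sampling factors in libjpeg (sketch only, to be called after
jpeg_set_defaults() / jpeg_set_colorspace(); 1x1 everywhere would keep full
chroma resolution at the cost of bandwidth):

#include <stdio.h>
#include <jpeglib.h>

/* Sketch: explicit 4:2:0 sampling factors for a YCbCr compress object. */
static void set_420_sampling(struct jpeg_compress_struct *cinfo)
{
    cinfo->comp_info[0].h_samp_factor = 2;  /* Y : one sample per pixel */
    cinfo->comp_info[0].v_samp_factor = 2;
    cinfo->comp_info[1].h_samp_factor = 1;  /* Cb: one sample per 2x2 block */
    cinfo->comp_info[1].v_samp_factor = 1;
    cinfo->comp_info[2].h_samp_factor = 1;  /* Cr: one sample per 2x2 block */
    cinfo->comp_info[2].v_samp_factor = 1;
}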
Not even sure if the compression will be better or worse - I imagine an all red
image will be much better compressed starting from RGB24.
No, in both RGB and YUV colorspaces an all red image will have constant
component values from one pixel to the next, so both will compress very well.
Regards,
Hans