Re: [Qemu-devel] [PATCH] lance: unbreak after memory API conversion

2011-08-09 Thread Edgar E. Iglesias
On Tue, Aug 09, 2011 at 09:54:22AM +0300, Avi Kivity wrote:
> The conversion passed the wrong opaque pointer, causing a crash on first use.
> Pass the correct opaque.

I've applied this, thanks.


> 
> Signed-off-by: Avi Kivity 
> ---
>  hw/lance.c |2 +-
>  1 files changed, 1 insertions(+), 1 deletions(-)
> 
> diff --git a/hw/lance.c b/hw/lance.c
> index 8e20360..d83e7f5 100644
> --- a/hw/lance.c
> +++ b/hw/lance.c
> @@ -116,7 +116,7 @@ static int lance_init(SysBusDevice *dev)
>  SysBusPCNetState *d = FROM_SYSBUS(SysBusPCNetState, dev);
>  PCNetState *s = &d->state;
>  
> -memory_region_init_io(&s->mmio, &lance_mem_ops, s, "lance-mmio", 4);
> +memory_region_init_io(&s->mmio, &lance_mem_ops, d, "lance-mmio", 4);
>  
>  qdev_init_gpio_in(&dev->qdev, parent_lance_reset, 1);
>  
> -- 
> 1.7.5.3
> 
> 



Re: [Qemu-devel] [PATCH 44/55] spitz tosa: Simplify "drive is suitable for microdrive" test

2011-08-09 Thread Kevin Wolf
Am 09.08.2011 06:32, schrieb andrzej zaborowski:
> On 4 August 2011 10:02, Kevin Wolf  wrote:
>> Am 03.08.2011 22:20, schrieb andrzej zaborowski:
>>> On 3 August 2011 20:24, Markus Armbruster  wrote:
 andrzej zaborowski  writes:
> On 3 August 2011 18:38, Markus Armbruster  wrote:
>> andrzej zaborowski  writes:
>>>  2. if the
>>> underlying storage can disappear for any other reason, if that's
>>> possible to check.
>>
>> bdrv_is_removable() *isn't* such a check.
>
> Obviously I wasn't claiming it is, just that it might be useful, but
> not necessarily possible.  After all, pretty much any storage can be
> "ejected" with enough force, depending on how far you want to go.
>
 What's wrong with that again?  All sounds sensible to me.
>>>
>>> I'm not claiming otherwise, just double-checking this is what you want.
>
> So first you said you had a problem with _is_removable, and then you
> said nothing was wrong with the implementation you outlined, please
> make up your mind.

 I don't appreciate you quoting me out of context like that.
>>>
>>> I got quite annoyed when you started putting words in my mouth by
>>> saying I said anything about CD-ROMs. The code in spitz/tosa is not
>>> concerned with CD-ROMs even if downstream it boils down to that; it is
>>> concerned with whether the device is removable or not, and that's what
>>> the check does.  It doesn't help readability or anything by inlining
>>> that check.  If you're trying to check for A then don't spell it out
>>> as B, be explicit.  It's not a big deal but I just don't see the
>>> point, sorry.
>>>

 The sentence you quoted was in the middle of my attempt to get you to
 explain what you're trying to accomplish there.
>>>
>>> I already said about 3 times what it's trying to accomplish.  You also
>>> have used the word "removable" so I'm sure you know what it means and
>>> don't need further explanation.  But let's define it this way: if a
>>> GUI is going to display an "eject" button next to a drive in the qemu
>>> device tree, that's a removable device.  CD-ROM is an example of that.
>>>  An IDE HDD is an example of something that's not going to have that
>>> button (I assume).
>>
>> But this is a property of the device, not of the backend. This means
>> that it belongs in the device emulation and not in block.c.
> 
> By device do you mean hw/ide/microdrive.c?

I mean just anything in hw/. I'm not familiar with how these devices
work internally, so if hw/ide/microdrive.c is the right place, fine.

> I'm not saying it belongs
> in block.c, but logically it belongs in the same place as
> bdrv_is_inserted, bdrv_is_locked, bdrv_eject etc. no?  So it is a
> property of whatever "media" is property of.

Depends. As long as you're talking about purely virtual device state, I
would say yes, they belong there, too. However, we have things like
CD-ROM passthrough where bdrv_is_inserted etc. are supposed to return
the status of the physical host device. These are host state and need to
be in the block layer.

Kevin



Re: [Qemu-devel] [PATCH 09/24] omap_gpmc/nseries/tusb6010: convert to memory API

2011-08-09 Thread Peter Maydell
On 9 August 2011 07:34, Avi Kivity  wrote:
> Also, my patchset focuses on mechanical transformations.  It is already
> risky enough in terms of regressions, I'm not going to rewrite/improve all
> of qemu; if you want those callbacks removed, you will have to remove them
> yourself.

Sure. Can I ask you to drop this one and the onenand patch, and I'll
have a go over the next week or three at incorporating a conversion into
the patchset I have for those?

Thanks
-- PMM



Re: [Qemu-devel] [PATCH 09/24] omap_gpmc/nseries/tusb6010: convert to memory API

2011-08-09 Thread Avi Kivity

On 08/09/2011 10:37 AM, Peter Maydell wrote:

On 9 August 2011 07:34, Avi Kivity  wrote:
>  Also, my patchset focuses on mechanical transformations.  It is already
>  risky enough in terms of regressions, I'm not going to rewrite/improve all
>  of qemu; if you want those callbacks removed, you will have to remove them
>  yourself.

> Sure. Can I ask you to drop this one and the onenand patch, and I'll
> have a go over the next week or three at incorporating a conversion into
> the patchset I have for those?



Umm... next week or three?  I'd like to move fast on this.

Can't you base your patches on this instead?  After all, this is just a 
mechanical transformation, there should be no functional change.


--
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.




[Qemu-devel] [PATCH v2 00/15] qcow/qcow2 cleanups

2011-08-09 Thread Frediano Ziglio
These patches mostly clean up some AIO code using coroutines.
They mostly use the stack instead of an allocated AIO structure.
Feel free to collapse patches that are too short.

Frediano Ziglio (15):
  qcow: allocate QCowAIOCB structure using stack
  qcow: QCowAIOCB field cleanup
  qcow: move some blocks of code to avoid useless variable
initialization
  qcow: embed qcow_aio_read_cb into qcow_co_readv and qcow_aio_write_cb
into qcow_co_writev
  qcow: remove old #undefined code
  qcow2: removed unused fields
  qcow2: removed cur_nr_sectors field in QCowAIOCB
  qcow2: remove l2meta from QCowAIOCB
  qcow2: remove cluster_offset from QCowAIOCB
  qcow2: remove common from QCowAIOCB
  qcow2: reindent and use while before the big jump
  qcow2: removed QCowAIOCB entirely
  qcow2: remove memory leak
  qcow2: small math optimization
  qcow2: small optimization

 block/qcow.c   |  378 ++--
 block/qcow2-refcount.c |   16 +--
 block/qcow2.c  |  412 +++
 3 files changed, 294 insertions(+), 512 deletions(-)




[Qemu-devel] [PATCH v2 02/15] qcow: QCowAIOCB field cleanup

2011-08-09 Thread Frediano Ziglio
remove unused fields from this structure and move some of them into
qcow_aio_read_cb and qcow_aio_write_cb

Signed-off-by: Frediano Ziglio 
---
 block/qcow.c |  137 +++--
 1 files changed, 65 insertions(+), 72 deletions(-)

diff --git a/block/qcow.c b/block/qcow.c
index d19ef04..f0b1599 100644
--- a/block/qcow.c
+++ b/block/qcow.c
@@ -487,72 +487,61 @@ static int qcow_read(BlockDriverState *bs, int64_t sector_num,
 #endif
 
 typedef struct QCowAIOCB {
-BlockDriverAIOCB common;
+BlockDriverState *bs;
 int64_t sector_num;
 QEMUIOVector *qiov;
 uint8_t *buf;
 void *orig_buf;
 int nb_sectors;
-int n;
-uint64_t cluster_offset;
-uint8_t *cluster_data;
-struct iovec hd_iov;
-bool is_write;
-QEMUBH *bh;
-QEMUIOVector hd_qiov;
-BlockDriverAIOCB *hd_aiocb;
 } QCowAIOCB;
 
 static QCowAIOCB *qcow_aio_setup(BlockDriverState *bs,
 int64_t sector_num, QEMUIOVector *qiov, int nb_sectors,
 int is_write, QCowAIOCB *acb)
 {
-memset(acb, 0, sizeof(*acb));
-acb->common.bs = bs;
-acb->hd_aiocb = NULL;
+acb->bs = bs;
 acb->sector_num = sector_num;
 acb->qiov = qiov;
-acb->is_write = is_write;
 
 if (qiov->niov > 1) {
 acb->buf = acb->orig_buf = qemu_blockalign(bs, qiov->size);
 if (is_write)
 qemu_iovec_to_buffer(qiov, acb->buf);
 } else {
+acb->orig_buf = NULL;
 acb->buf = (uint8_t *)qiov->iov->iov_base;
 }
 acb->nb_sectors = nb_sectors;
-acb->n = 0;
-acb->cluster_offset = 0;
 return acb;
 }
 
 static int qcow_aio_read_cb(QCowAIOCB *acb)
 {
-BlockDriverState *bs = acb->common.bs;
+BlockDriverState *bs = acb->bs;
 BDRVQcowState *s = bs->opaque;
 int index_in_cluster;
-int ret;
-
-acb->hd_aiocb = NULL;
+int ret, n = 0;
+uint64_t cluster_offset = 0;
+struct iovec hd_iov;
+QEMUIOVector hd_qiov;
 
  redo:
 /* post process the read buffer */
-if (!acb->cluster_offset) {
+if (!cluster_offset) {
 /* nothing to do */
-} else if (acb->cluster_offset & QCOW_OFLAG_COMPRESSED) {
+} else if (cluster_offset & QCOW_OFLAG_COMPRESSED) {
 /* nothing to do */
 } else {
 if (s->crypt_method) {
 encrypt_sectors(s, acb->sector_num, acb->buf, acb->buf,
-acb->n, 0,
+n, 0,
 &s->aes_decrypt_key);
 }
 }
 
-acb->nb_sectors -= acb->n;
-acb->sector_num += acb->n;
-acb->buf += acb->n * 512;
+acb->nb_sectors -= n;
+acb->sector_num += n;
+acb->buf += n * 512;
 
 if (acb->nb_sectors == 0) {
 /* request completed */
@@ -560,57 +549,58 @@ static int qcow_aio_read_cb(QCowAIOCB *acb)
 }
 
 /* prepare next AIO request */
-acb->cluster_offset = get_cluster_offset(bs, acb->sector_num << 9,
+cluster_offset = get_cluster_offset(bs, acb->sector_num << 9,
  0, 0, 0, 0);
 index_in_cluster = acb->sector_num & (s->cluster_sectors - 1);
-acb->n = s->cluster_sectors - index_in_cluster;
-if (acb->n > acb->nb_sectors)
-acb->n = acb->nb_sectors;
+n = s->cluster_sectors - index_in_cluster;
+if (n > acb->nb_sectors) {
+n = acb->nb_sectors;
+}
 
-if (!acb->cluster_offset) {
+if (!cluster_offset) {
 if (bs->backing_hd) {
 /* read from the base image */
-acb->hd_iov.iov_base = (void *)acb->buf;
-acb->hd_iov.iov_len = acb->n * 512;
-qemu_iovec_init_external(&acb->hd_qiov, &acb->hd_iov, 1);
+hd_iov.iov_base = (void *)acb->buf;
+hd_iov.iov_len = n * 512;
+qemu_iovec_init_external(&hd_qiov, &hd_iov, 1);
 qemu_co_mutex_unlock(&s->lock);
 ret = bdrv_co_readv(bs->backing_hd, acb->sector_num,
-acb->n, &acb->hd_qiov);
+n, &hd_qiov);
 qemu_co_mutex_lock(&s->lock);
 if (ret < 0) {
 return -EIO;
 }
 } else {
 /* Note: in this case, no need to wait */
-memset(acb->buf, 0, 512 * acb->n);
+memset(acb->buf, 0, 512 * n);
 goto redo;
 }
-} else if (acb->cluster_offset & QCOW_OFLAG_COMPRESSED) {
+} else if (cluster_offset & QCOW_OFLAG_COMPRESSED) {
 /* add AIO support for compressed blocks ? */
-if (decompress_cluster(bs, acb->cluster_offset) < 0) {
+if (decompress_cluster(bs, cluster_offset) < 0) {
 return -EIO;
 }
 memcpy(acb->buf,
-   s->cluster_cache + index_in_cluster * 512, 512 * acb->n);
+   s->cluster_cache + index_in_cluster * 512, 512 * n);
 goto redo;
 } else {
-if ((acb->cluster_offset & 511) != 0) {
+if ((cluster_offset & 511) != 0) {

[Qemu-devel] [PATCH v2 03/15] qcow: move some blocks of code to avoid useless variable initialization

2011-08-09 Thread Frediano Ziglio

Signed-off-by: Frediano Ziglio 
---
 block/qcow.c |   53 ++---
 1 files changed, 26 insertions(+), 27 deletions(-)

diff --git a/block/qcow.c b/block/qcow.c
index f0b1599..e17f9c5 100644
--- a/block/qcow.c
+++ b/block/qcow.c
@@ -520,35 +520,18 @@ static int qcow_aio_read_cb(QCowAIOCB *acb)
 BlockDriverState *bs = acb->bs;
 BDRVQcowState *s = bs->opaque;
 int index_in_cluster;
-int ret, n = 0;
-uint64_t cluster_offset = 0;
+int ret, n;
+uint64_t cluster_offset;
 struct iovec hd_iov;
 QEMUIOVector hd_qiov;
 
  redo:
-/* post process the read buffer */
-if (!cluster_offset) {
-/* nothing to do */
-} else if (cluster_offset & QCOW_OFLAG_COMPRESSED) {
-/* nothing to do */
-} else {
-if (s->crypt_method) {
-encrypt_sectors(s, acb->sector_num, acb->buf, acb->buf,
-n, 0,
-&s->aes_decrypt_key);
-}
-}
-
-acb->nb_sectors -= n;
-acb->sector_num += n;
-acb->buf += n * 512;
-
 if (acb->nb_sectors == 0) {
 /* request completed */
 return 0;
 }
 
-/* prepare next AIO request */
+/* prepare next request */
 cluster_offset = get_cluster_offset(bs, acb->sector_num << 9,
  0, 0, 0, 0);
 index_in_cluster = acb->sector_num & (s->cluster_sectors - 1);
@@ -573,7 +556,6 @@ static int qcow_aio_read_cb(QCowAIOCB *acb)
 } else {
 /* Note: in this case, no need to wait */
 memset(acb->buf, 0, 512 * n);
-goto redo;
 }
 } else if (cluster_offset & QCOW_OFLAG_COMPRESSED) {
 /* add AIO support for compressed blocks ? */
@@ -582,7 +564,6 @@ static int qcow_aio_read_cb(QCowAIOCB *acb)
 }
 memcpy(acb->buf,
s->cluster_cache + index_in_cluster * 512, 512 * n);
-goto redo;
 } else {
 if ((cluster_offset & 511) != 0) {
 return -EIO;
@@ -600,6 +581,23 @@ static int qcow_aio_read_cb(QCowAIOCB *acb)
 }
 }
 
+/* post process the read buffer */
+if (!cluster_offset) {
+/* nothing to do */
+} else if (cluster_offset & QCOW_OFLAG_COMPRESSED) {
+/* nothing to do */
+} else {
+if (s->crypt_method) {
+encrypt_sectors(s, acb->sector_num, acb->buf, acb->buf,
+n, 0,
+&s->aes_decrypt_key);
+}
+}
+
+acb->nb_sectors -= n;
+acb->sector_num += n;
+acb->buf += n * 512;
+
 goto redo;
 }
 
@@ -631,16 +629,12 @@ static int qcow_aio_write_cb(QCowAIOCB *acb)
 int index_in_cluster;
 uint64_t cluster_offset;
 const uint8_t *src_buf;
-int ret, n = 0;
+int ret, n;
 uint8_t *cluster_data = NULL;
 struct iovec hd_iov;
 QEMUIOVector hd_qiov;
 
 redo:
-acb->nb_sectors -= n;
-acb->sector_num += n;
-acb->buf += n * 512;
-
 if (acb->nb_sectors == 0) {
 /* request completed */
 return 0;
@@ -683,6 +677,11 @@ redo:
 if (ret < 0) {
 return ret;
 }
+
+acb->nb_sectors -= n;
+acb->sector_num += n;
+acb->buf += n * 512;
+
 goto redo;
 }
 
-- 
1.7.1




[Qemu-devel] [PATCH v2 01/15] qcow: allocate QCowAIOCB structure using stack

2011-08-09 Thread Frediano Ziglio
instead of calling qemu_aio_get, allocate on the stack

Signed-off-by: Frediano Ziglio 
---
 block/qcow.c  |   52 
 block/qcow2.c |   38 +++---
 2 files changed, 27 insertions(+), 63 deletions(-)

diff --git a/block/qcow.c b/block/qcow.c
index 6447c2a..d19ef04 100644
--- a/block/qcow.c
+++ b/block/qcow.c
@@ -503,28 +503,12 @@ typedef struct QCowAIOCB {
 BlockDriverAIOCB *hd_aiocb;
 } QCowAIOCB;
 
-static void qcow_aio_cancel(BlockDriverAIOCB *blockacb)
-{
-QCowAIOCB *acb = container_of(blockacb, QCowAIOCB, common);
-if (acb->hd_aiocb)
-bdrv_aio_cancel(acb->hd_aiocb);
-qemu_aio_release(acb);
-}
-
-static AIOPool qcow_aio_pool = {
-.aiocb_size = sizeof(QCowAIOCB),
-.cancel = qcow_aio_cancel,
-};
-
 static QCowAIOCB *qcow_aio_setup(BlockDriverState *bs,
 int64_t sector_num, QEMUIOVector *qiov, int nb_sectors,
-int is_write)
+int is_write, QCowAIOCB *acb)
 {
-QCowAIOCB *acb;
-
-acb = qemu_aio_get(&qcow_aio_pool, bs, NULL, NULL);
-if (!acb)
-return NULL;
+memset(acb, 0, sizeof(*acb));
+acb->common.bs = bs;
 acb->hd_aiocb = NULL;
 acb->sector_num = sector_num;
 acb->qiov = qiov;
@@ -543,9 +527,8 @@ static QCowAIOCB *qcow_aio_setup(BlockDriverState *bs,
 return acb;
 }
 
-static int qcow_aio_read_cb(void *opaque)
+static int qcow_aio_read_cb(QCowAIOCB *acb)
 {
-QCowAIOCB *acb = opaque;
 BlockDriverState *bs = acb->common.bs;
 BDRVQcowState *s = bs->opaque;
 int index_in_cluster;
@@ -634,29 +617,27 @@ static int qcow_co_readv(BlockDriverState *bs, int64_t sector_num,
  int nb_sectors, QEMUIOVector *qiov)
 {
 BDRVQcowState *s = bs->opaque;
-QCowAIOCB *acb;
+QCowAIOCB acb;
 int ret;
 
-acb = qcow_aio_setup(bs, sector_num, qiov, nb_sectors, 0);
+qcow_aio_setup(bs, sector_num, qiov, nb_sectors, 0, &acb);
 
 qemu_co_mutex_lock(&s->lock);
 do {
-ret = qcow_aio_read_cb(acb);
+ret = qcow_aio_read_cb(&acb);
 } while (ret > 0);
 qemu_co_mutex_unlock(&s->lock);
 
-if (acb->qiov->niov > 1) {
-qemu_iovec_from_buffer(acb->qiov, acb->orig_buf, acb->qiov->size);
-qemu_vfree(acb->orig_buf);
+if (acb.qiov->niov > 1) {
+qemu_iovec_from_buffer(acb.qiov, acb.orig_buf, acb.qiov->size);
+qemu_vfree(acb.orig_buf);
 }
-qemu_aio_release(acb);
 
 return ret;
 }
 
-static int qcow_aio_write_cb(void *opaque)
+static int qcow_aio_write_cb(QCowAIOCB *acb)
 {
-QCowAIOCB *acb = opaque;
 BlockDriverState *bs = acb->common.bs;
 BDRVQcowState *s = bs->opaque;
 int index_in_cluster;
@@ -714,23 +695,22 @@ static int qcow_co_writev(BlockDriverState *bs, int64_t sector_num,
   int nb_sectors, QEMUIOVector *qiov)
 {
 BDRVQcowState *s = bs->opaque;
-QCowAIOCB *acb;
+QCowAIOCB acb;
 int ret;
 
 s->cluster_cache_offset = -1; /* disable compressed cache */
 
-acb = qcow_aio_setup(bs, sector_num, qiov, nb_sectors, 1);
+qcow_aio_setup(bs, sector_num, qiov, nb_sectors, 1, &acb);
 
 qemu_co_mutex_lock(&s->lock);
 do {
-ret = qcow_aio_write_cb(acb);
+ret = qcow_aio_write_cb(&acb);
 } while (ret > 0);
 qemu_co_mutex_unlock(&s->lock);
 
-if (acb->qiov->niov > 1) {
-qemu_vfree(acb->orig_buf);
+if (acb.qiov->niov > 1) {
+qemu_vfree(acb.orig_buf);
 }
-qemu_aio_release(acb);
 
 return ret;
 }
diff --git a/block/qcow2.c b/block/qcow2.c
index f07d550..edc068e 100644
--- a/block/qcow2.c
+++ b/block/qcow2.c
@@ -388,17 +388,6 @@ typedef struct QCowAIOCB {
 QLIST_ENTRY(QCowAIOCB) next_depend;
 } QCowAIOCB;
 
-static void qcow2_aio_cancel(BlockDriverAIOCB *blockacb)
-{
-QCowAIOCB *acb = container_of(blockacb, QCowAIOCB, common);
-qemu_aio_release(acb);
-}
-
-static AIOPool qcow2_aio_pool = {
-.aiocb_size = sizeof(QCowAIOCB),
-.cancel = qcow2_aio_cancel,
-};
-
 /*
  * Returns 0 when the request is completed successfully, 1 when there is still
  * a part left to do and -errno in error cases.
@@ -528,13 +517,10 @@ static int qcow2_aio_read_cb(QCowAIOCB *acb)
 static QCowAIOCB *qcow2_aio_setup(BlockDriverState *bs, int64_t sector_num,
   QEMUIOVector *qiov, int nb_sectors,
   BlockDriverCompletionFunc *cb,
-  void *opaque, int is_write)
+  void *opaque, int is_write, QCowAIOCB *acb)
 {
-QCowAIOCB *acb;
-
-acb = qemu_aio_get(&qcow2_aio_pool, bs, cb, opaque);
-if (!acb)
-return NULL;
+memset(acb, 0, sizeof(*acb));
+acb->common.bs = bs;
 acb->sector_num = sector_num;
 acb->qiov = qiov;
 acb->is_write = is_write;
@@ -554,19 +540,18 @@ static int qcow2_co_readv(BlockDriverState *bs, int64_t sector_num,
 

[Qemu-devel] [PATCH v2 15/15] qcow2: small optimization

2011-08-09 Thread Frediano Ziglio

Signed-off-by: Frediano Ziglio 
---
 block/qcow2-refcount.c |   11 +--
 1 files changed, 5 insertions(+), 6 deletions(-)

diff --git a/block/qcow2-refcount.c b/block/qcow2-refcount.c
index 0e31868..318d66d 100644
--- a/block/qcow2-refcount.c
+++ b/block/qcow2-refcount.c
@@ -681,14 +681,13 @@ void qcow2_create_refcount_update(QCowCreateState *s, int64_t offset,
 int64_t size)
 {
 int refcount;
-int64_t start, last, cluster_offset;
+int64_t start, last, cluster;
 uint16_t *p;
 
-start = offset & ~(s->cluster_size - 1);
-last = (offset + size - 1)  & ~(s->cluster_size - 1);
-for(cluster_offset = start; cluster_offset <= last;
-cluster_offset += s->cluster_size) {
-p = &s->refcount_block[cluster_offset >> s->cluster_bits];
+start = offset >> s->cluster_bits;
+last = (offset + size - 1)  >> s->cluster_bits;
+for (cluster = start; cluster <= last; ++cluster) {
+p = &s->refcount_block[cluster];
 refcount = be16_to_cpu(*p);
 refcount++;
 *p = cpu_to_be16(refcount);
-- 
1.7.1




[Qemu-devel] [PATCH v2 13/15] qcow2: remove memory leak

2011-08-09 Thread Frediano Ziglio

Signed-off-by: Frediano Ziglio 
---
 block/qcow2.c |2 ++
 1 files changed, 2 insertions(+), 0 deletions(-)

diff --git a/block/qcow2.c b/block/qcow2.c
index 6073568..aed8da7 100644
--- a/block/qcow2.c
+++ b/block/qcow2.c
@@ -492,6 +492,7 @@ fail:
 qemu_co_mutex_unlock(&s->lock);
 
 qemu_iovec_destroy(&hd_qiov);
+qemu_free(cluster_data);
 
 return ret;
 }
@@ -604,6 +605,7 @@ fail:
 qemu_co_mutex_unlock(&s->lock);
 
 qemu_iovec_destroy(&hd_qiov);
+qemu_free(cluster_data);
 
 return ret;
 }
-- 
1.7.1




[Qemu-devel] [PATCH v2 14/15] qcow2: small math optimization

2011-08-09 Thread Frediano Ziglio

Signed-off-by: Frediano Ziglio 
---
 block/qcow2-refcount.c |5 +
 1 files changed, 1 insertions(+), 4 deletions(-)

diff --git a/block/qcow2-refcount.c b/block/qcow2-refcount.c
index 14b2f67..0e31868 100644
--- a/block/qcow2-refcount.c
+++ b/block/qcow2-refcount.c
@@ -140,10 +140,7 @@ static unsigned int next_refcount_table_size(BDRVQcowState *s,
 static int in_same_refcount_block(BDRVQcowState *s, uint64_t offset_a,
 uint64_t offset_b)
 {
-uint64_t block_a = offset_a >> (2 * s->cluster_bits - REFCOUNT_SHIFT);
-uint64_t block_b = offset_b >> (2 * s->cluster_bits - REFCOUNT_SHIFT);
-
-return (block_a == block_b);
+return !((offset_a ^ offset_b) >> (2 * s->cluster_bits - REFCOUNT_SHIFT));
 }
 
 /*
-- 
1.7.1




Re: [Qemu-devel] [PATCH 08/24] tusb6010: move declarations to new file tusb6010.h

2011-08-09 Thread Peter Maydell
On 8 August 2011 18:06, Avi Kivity  wrote:
> diff --git a/hw/tusb6010.h b/hw/tusb6010.h
> new file mode 100644
> index 000..6faa94d
> --- /dev/null
> +++ b/hw/tusb6010.h
> @@ -0,0 +1,10 @@
> +#ifndef TUSB6010_H
> +#define TUSB6010_H
> +
> +typedef struct TUSBState TUSBState;
> +TUSBState *tusb6010_init(qemu_irq intr);
> +int tusb6010_sync_io(TUSBState *s);
> +int tusb6010_async_io(TUSBState *s);
> +void tusb6010_power(TUSBState *s, int on);
> +
> +#endif

New files should have a copyright/license boilerplate comment.

-- PMM



[Qemu-devel] [PATCH v2 11/15] qcow2: reindent and use while before the big jump

2011-08-09 Thread Frediano Ziglio
prepare to remove read/write callbacks

Signed-off-by: Frediano Ziglio 
---
 block/qcow2.c |  272 -
 1 files changed, 135 insertions(+), 137 deletions(-)

diff --git a/block/qcow2.c b/block/qcow2.c
index c7445cc..dfb969e 100644
--- a/block/qcow2.c
+++ b/block/qcow2.c
@@ -395,107 +395,105 @@ static int qcow2_aio_read_cb(QCowAIOCB *acb)
 int cur_nr_sectors; /* number of sectors in current iteration */
 uint64_t cluster_offset = 0;
 
-if (acb->remaining_sectors == 0) {
-/* request completed */
-return 0;
-}
-
-/* prepare next request */
-cur_nr_sectors = acb->remaining_sectors;
-if (s->crypt_method) {
-cur_nr_sectors = MIN(cur_nr_sectors,
-QCOW_MAX_CRYPT_CLUSTERS * s->cluster_sectors);
-}
+while (acb->remaining_sectors != 0) {
 
-ret = qcow2_get_cluster_offset(bs, acb->sector_num << 9,
-&cur_nr_sectors, &cluster_offset);
-if (ret < 0) {
-return ret;
-}
-
-index_in_cluster = acb->sector_num & (s->cluster_sectors - 1);
-
-qemu_iovec_reset(&acb->hd_qiov);
-qemu_iovec_copy(&acb->hd_qiov, acb->qiov, acb->bytes_done,
-cur_nr_sectors * 512);
-
-if (!cluster_offset) {
-
-if (bs->backing_hd) {
-/* read from the base image */
-n1 = qcow2_backing_read1(bs->backing_hd, &acb->hd_qiov,
-acb->sector_num, cur_nr_sectors);
-if (n1 > 0) {
-BLKDBG_EVENT(bs->file, BLKDBG_READ_BACKING_AIO);
-qemu_co_mutex_unlock(&s->lock);
-ret = bdrv_co_readv(bs->backing_hd, acb->sector_num,
-n1, &acb->hd_qiov);
-qemu_co_mutex_lock(&s->lock);
-if (ret < 0) {
-return ret;
-}
-}
-} else {
-/* Note: in this case, no need to wait */
-qemu_iovec_memset(&acb->hd_qiov, 0, 512 * cur_nr_sectors);
+/* prepare next request */
+cur_nr_sectors = acb->remaining_sectors;
+if (s->crypt_method) {
+cur_nr_sectors = MIN(cur_nr_sectors,
+QCOW_MAX_CRYPT_CLUSTERS * s->cluster_sectors);
 }
-} else if (cluster_offset & QCOW_OFLAG_COMPRESSED) {
-/* add AIO support for compressed blocks ? */
-ret = qcow2_decompress_cluster(bs, cluster_offset);
+
+ret = qcow2_get_cluster_offset(bs, acb->sector_num << 9,
+&cur_nr_sectors, &cluster_offset);
 if (ret < 0) {
 return ret;
 }
 
-qemu_iovec_from_buffer(&acb->hd_qiov,
-s->cluster_cache + index_in_cluster * 512,
-512 * cur_nr_sectors);
-} else {
-if ((cluster_offset & 511) != 0) {
-return -EIO;
-}
+index_in_cluster = acb->sector_num & (s->cluster_sectors - 1);
 
-if (s->crypt_method) {
-/*
- * For encrypted images, read everything into a temporary
- * contiguous buffer on which the AES functions can work.
- */
-if (!acb->cluster_data) {
-acb->cluster_data =
-qemu_mallocz(QCOW_MAX_CRYPT_CLUSTERS * s->cluster_size);
+qemu_iovec_reset(&acb->hd_qiov);
+qemu_iovec_copy(&acb->hd_qiov, acb->qiov, acb->bytes_done,
+cur_nr_sectors * 512);
+
+if (!cluster_offset) {
+
+if (bs->backing_hd) {
+/* read from the base image */
+n1 = qcow2_backing_read1(bs->backing_hd, &acb->hd_qiov,
+acb->sector_num, cur_nr_sectors);
+if (n1 > 0) {
+BLKDBG_EVENT(bs->file, BLKDBG_READ_BACKING_AIO);
+qemu_co_mutex_unlock(&s->lock);
+ret = bdrv_co_readv(bs->backing_hd, acb->sector_num,
+n1, &acb->hd_qiov);
+qemu_co_mutex_lock(&s->lock);
+if (ret < 0) {
+return ret;
+}
+}
+} else {
+/* Note: in this case, no need to wait */
+qemu_iovec_memset(&acb->hd_qiov, 0, 512 * cur_nr_sectors);
+}
+} else if (cluster_offset & QCOW_OFLAG_COMPRESSED) {
+/* add AIO support for compressed blocks ? */
+ret = qcow2_decompress_cluster(bs, cluster_offset);
+if (ret < 0) {
+return ret;
 }
 
-assert(cur_nr_sectors <=
-QCOW_MAX_CRYPT_CLUSTERS * s->cluster_sectors);
-qemu_iovec_reset(&acb->hd_qiov);
-qemu_iovec_add(&acb->hd_qiov, acb->cluster_data,
+qemu_iovec_from_buffer(&acb->hd_qiov,
+s->cluster_cache + index_in_cluster * 512,
 512 * cur_nr_sectors);
-}
+} else {
+if ((cluster_offset & 511) != 0) {
+ 

Re: [Qemu-devel] [PATCH 09/24] omap_gpmc/nseries/tusb6010: convert to memory API

2011-08-09 Thread Peter Maydell
On 9 August 2011 08:41, Avi Kivity  wrote:
> On 08/09/2011 10:37 AM, Peter Maydell wrote:
>>
>> On 9 August 2011 07:34, Avi Kivity  wrote:
>> >  Also, my patchset focuses on mechanical transformations.  It is already
>> >  risky enough in terms of regressions, I'm not going to rewrite/improve
>> > all
>> >  of qemu; if you want those callbacks removed, you will have to remove
>> > them
>> >  yourself.
>>
>> Sure. Can I ask you to drop this one and the onenand patch, and I'll
>> have a go over the next week or three at incorporating a conversion into
>> the patchset I have for those?

> Umm... next week or three?  I'd like to move fast on this.

I'm busy at least half of this week, and next week is the KVM
Forum, so I didn't want to promise anything faster.

> Can't you base your patches on this instead?  After all, this is just a
> mechanical transformation, there should be no functional change.

Well, maybe, but it seems a bit silly to commit something which is
going to be half-reverted and rewritten.

Also I just noticed this change:

-static CPUReadMemoryFunc * const omap_gpmc_readfn[] = {
-omap_badwidth_read32,  /* TODO */
-omap_badwidth_read32,  /* TODO */
-omap_gpmc_read,
-};
-
-static CPUWriteMemoryFunc * const omap_gpmc_writefn[] = {
-omap_badwidth_write32, /* TODO */
-omap_badwidth_write32, /* TODO */
-omap_gpmc_write,
+static const MemoryRegionOps omap_gpmc_ops = {
+/* TODO: specialize <4 byte writes? */
+.read = omap_gpmc_read,
+.write = omap_gpmc_write,
+.endianness = DEVICE_NATIVE_ENDIAN,
 };

...isn't this just throwing away the warnings on bad-width accesses?
(it's not clear to me from the docs what the behaviour of the new
API on bad-width accesses is going to be.)

-- PMM



[Qemu-devel] [PATCH v2 10/15] qcow2: remove common from QCowAIOCB

2011-08-09 Thread Frediano Ziglio

Signed-off-by: Frediano Ziglio 
---
 block/qcow2.c |8 
 1 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/block/qcow2.c b/block/qcow2.c
index 446946e..c7445cc 100644
--- a/block/qcow2.c
+++ b/block/qcow2.c
@@ -373,7 +373,7 @@ int qcow2_backing_read1(BlockDriverState *bs, QEMUIOVector 
*qiov,
 }
 
 typedef struct QCowAIOCB {
-BlockDriverAIOCB common;
+BlockDriverState *bs;
 int64_t sector_num;
 QEMUIOVector *qiov;
 int remaining_sectors;
@@ -388,7 +388,7 @@ typedef struct QCowAIOCB {
  */
 static int qcow2_aio_read_cb(QCowAIOCB *acb)
 {
-BlockDriverState *bs = acb->common.bs;
+BlockDriverState *bs = acb->bs;
 BDRVQcowState *s = bs->opaque;
 int index_in_cluster, n1;
 int ret;
@@ -504,7 +504,7 @@ static QCowAIOCB *qcow2_aio_setup(BlockDriverState *bs, int64_t sector_num,
   void *opaque, QCowAIOCB *acb)
 {
 memset(acb, 0, sizeof(*acb));
-acb->common.bs = bs;
+acb->bs = bs;
 acb->sector_num = sector_num;
 acb->qiov = qiov;
 
@@ -556,7 +556,7 @@ static void run_dependent_requests(BDRVQcowState *s, QCowL2Meta *m)
  */
 static int qcow2_aio_write_cb(QCowAIOCB *acb)
 {
-BlockDriverState *bs = acb->common.bs;
+BlockDriverState *bs = acb->bs;
 BDRVQcowState *s = bs->opaque;
 int index_in_cluster;
 int n_end;
-- 
1.7.1




[Qemu-devel] [PATCH v2 06/15] qcow2: removed unused fields

2011-08-09 Thread Frediano Ziglio

Signed-off-by: Frediano Ziglio 
---
 block/qcow2.c |   10 +++---
 1 files changed, 3 insertions(+), 7 deletions(-)

diff --git a/block/qcow2.c b/block/qcow2.c
index edc068e..5c454bb 100644
--- a/block/qcow2.c
+++ b/block/qcow2.c
@@ -381,11 +381,8 @@ typedef struct QCowAIOCB {
 uint64_t bytes_done;
 uint64_t cluster_offset;
 uint8_t *cluster_data;
-bool is_write;
 QEMUIOVector hd_qiov;
-QEMUBH *bh;
 QCowL2Meta l2meta;
-QLIST_ENTRY(QCowAIOCB) next_depend;
 } QCowAIOCB;
 
 /*
@@ -517,13 +514,12 @@ static int qcow2_aio_read_cb(QCowAIOCB *acb)
 static QCowAIOCB *qcow2_aio_setup(BlockDriverState *bs, int64_t sector_num,
   QEMUIOVector *qiov, int nb_sectors,
   BlockDriverCompletionFunc *cb,
-  void *opaque, int is_write, QCowAIOCB *acb)
+  void *opaque, QCowAIOCB *acb)
 {
 memset(acb, 0, sizeof(*acb));
 acb->common.bs = bs;
 acb->sector_num = sector_num;
 acb->qiov = qiov;
-acb->is_write = is_write;
 
 qemu_iovec_init(&acb->hd_qiov, qiov->niov);
 
@@ -543,7 +539,7 @@ static int qcow2_co_readv(BlockDriverState *bs, int64_t sector_num,
 QCowAIOCB acb;
 int ret;
 
-qcow2_aio_setup(bs, sector_num, qiov, nb_sectors, NULL, NULL, 0, &acb);
+qcow2_aio_setup(bs, sector_num, qiov, nb_sectors, NULL, NULL, &acb);
 
 qemu_co_mutex_lock(&s->lock);
 do {
@@ -658,7 +654,7 @@ static int qcow2_co_writev(BlockDriverState *bs,
 QCowAIOCB acb;
 int ret;
 
-qcow2_aio_setup(bs, sector_num, qiov, nb_sectors, NULL, NULL, 1, &acb);
+qcow2_aio_setup(bs, sector_num, qiov, nb_sectors, NULL, NULL, &acb);
 s->cluster_cache_offset = -1; /* disable compressed cache */
 
 qemu_co_mutex_lock(&s->lock);
-- 
1.7.1




[Qemu-devel] [PATCH v2 07/15] qcow2: removed cur_nr_sectors field in QCowAIOCB

2011-08-09 Thread Frediano Ziglio

Signed-off-by: Frediano Ziglio 
---
 block/qcow2.c |   98 +
 1 files changed, 43 insertions(+), 55 deletions(-)

diff --git a/block/qcow2.c b/block/qcow2.c
index 5c454bb..0cf4465 100644
--- a/block/qcow2.c
+++ b/block/qcow2.c
@@ -377,7 +377,6 @@ typedef struct QCowAIOCB {
 int64_t sector_num;
 QEMUIOVector *qiov;
 int remaining_sectors;
-int cur_nr_sectors;/* number of sectors in current iteration */
 uint64_t bytes_done;
 uint64_t cluster_offset;
 uint8_t *cluster_data;
@@ -395,42 +394,22 @@ static int qcow2_aio_read_cb(QCowAIOCB *acb)
 BDRVQcowState *s = bs->opaque;
 int index_in_cluster, n1;
 int ret;
-
-/* post process the read buffer */
-if (!acb->cluster_offset) {
-/* nothing to do */
-} else if (acb->cluster_offset & QCOW_OFLAG_COMPRESSED) {
-/* nothing to do */
-} else {
-if (s->crypt_method) {
-qcow2_encrypt_sectors(s, acb->sector_num,  acb->cluster_data,
-acb->cluster_data, acb->cur_nr_sectors, 0, &s->aes_decrypt_key);
-qemu_iovec_reset(&acb->hd_qiov);
-qemu_iovec_copy(&acb->hd_qiov, acb->qiov, acb->bytes_done,
-acb->cur_nr_sectors * 512);
-qemu_iovec_from_buffer(&acb->hd_qiov, acb->cluster_data,
-512 * acb->cur_nr_sectors);
-}
-}
-
-acb->remaining_sectors -= acb->cur_nr_sectors;
-acb->sector_num += acb->cur_nr_sectors;
-acb->bytes_done += acb->cur_nr_sectors * 512;
+int cur_nr_sectors; /* number of sectors in current iteration */
 
 if (acb->remaining_sectors == 0) {
 /* request completed */
 return 0;
 }
 
-/* prepare next AIO request */
-acb->cur_nr_sectors = acb->remaining_sectors;
+/* prepare next request */
+cur_nr_sectors = acb->remaining_sectors;
 if (s->crypt_method) {
-acb->cur_nr_sectors = MIN(acb->cur_nr_sectors,
+cur_nr_sectors = MIN(cur_nr_sectors,
 QCOW_MAX_CRYPT_CLUSTERS * s->cluster_sectors);
 }
 
 ret = qcow2_get_cluster_offset(bs, acb->sector_num << 9,
-&acb->cur_nr_sectors, &acb->cluster_offset);
+&cur_nr_sectors, &acb->cluster_offset);
 if (ret < 0) {
 return ret;
 }
@@ -439,14 +418,14 @@ static int qcow2_aio_read_cb(QCowAIOCB *acb)
 
 qemu_iovec_reset(&acb->hd_qiov);
 qemu_iovec_copy(&acb->hd_qiov, acb->qiov, acb->bytes_done,
-acb->cur_nr_sectors * 512);
+cur_nr_sectors * 512);
 
 if (!acb->cluster_offset) {
 
 if (bs->backing_hd) {
 /* read from the base image */
 n1 = qcow2_backing_read1(bs->backing_hd, &acb->hd_qiov,
-acb->sector_num, acb->cur_nr_sectors);
+acb->sector_num, cur_nr_sectors);
 if (n1 > 0) {
 BLKDBG_EVENT(bs->file, BLKDBG_READ_BACKING_AIO);
 qemu_co_mutex_unlock(&s->lock);
@@ -457,11 +436,9 @@ static int qcow2_aio_read_cb(QCowAIOCB *acb)
 return ret;
 }
 }
-return 1;
 } else {
 /* Note: in this case, no need to wait */
-qemu_iovec_memset(&acb->hd_qiov, 0, 512 * acb->cur_nr_sectors);
-return 1;
+qemu_iovec_memset(&acb->hd_qiov, 0, 512 * cur_nr_sectors);
 }
 } else if (acb->cluster_offset & QCOW_OFLAG_COMPRESSED) {
 /* add AIO support for compressed blocks ? */
@@ -472,9 +449,7 @@ static int qcow2_aio_read_cb(QCowAIOCB *acb)
 
 qemu_iovec_from_buffer(&acb->hd_qiov,
 s->cluster_cache + index_in_cluster * 512,
-512 * acb->cur_nr_sectors);
-
-return 1;
+512 * cur_nr_sectors);
 } else {
 if ((acb->cluster_offset & 511) != 0) {
 return -EIO;
@@ -490,24 +465,37 @@ static int qcow2_aio_read_cb(QCowAIOCB *acb)
 qemu_mallocz(QCOW_MAX_CRYPT_CLUSTERS * s->cluster_size);
 }
 
-assert(acb->cur_nr_sectors <=
+assert(cur_nr_sectors <=
 QCOW_MAX_CRYPT_CLUSTERS * s->cluster_sectors);
 qemu_iovec_reset(&acb->hd_qiov);
 qemu_iovec_add(&acb->hd_qiov, acb->cluster_data,
-512 * acb->cur_nr_sectors);
+512 * cur_nr_sectors);
 }
 
 BLKDBG_EVENT(bs->file, BLKDBG_READ_AIO);
 qemu_co_mutex_unlock(&s->lock);
 ret = bdrv_co_readv(bs->file,
 (acb->cluster_offset >> 9) + index_in_cluster,
-acb->cur_nr_sectors, &acb->hd_qiov);
+cur_nr_sectors, &acb->hd_qiov);
 qemu_co_mutex_lock(&s->lock);
 if (ret < 0) {
 return ret;
 }
+if (s->crypt_method) {
+qcow2_encrypt_sectors(s, acb->sector_num,  acb->cluster_data,
+acb->cluster_data, cur_nr_sectors, 0, &s->aes_decrypt_key);

[Qemu-devel] [PATCH v2 08/15] qcow2: remove l2meta from QCowAIOCB

2011-08-09 Thread Frediano Ziglio

Signed-off-by: Frediano Ziglio 
---
 block/qcow2.c |   15 ---
 1 files changed, 8 insertions(+), 7 deletions(-)

diff --git a/block/qcow2.c b/block/qcow2.c
index 0cf4465..adf31ce 100644
--- a/block/qcow2.c
+++ b/block/qcow2.c
@@ -381,7 +381,6 @@ typedef struct QCowAIOCB {
 uint64_t cluster_offset;
 uint8_t *cluster_data;
 QEMUIOVector hd_qiov;
-QCowL2Meta l2meta;
 } QCowAIOCB;
 
 /*
@@ -514,8 +513,6 @@ static QCowAIOCB *qcow2_aio_setup(BlockDriverState *bs, 
int64_t sector_num,
 acb->bytes_done = 0;
 acb->remaining_sectors = nb_sectors;
 acb->cluster_offset = 0;
-acb->l2meta.nb_clusters = 0;
-qemu_co_queue_init(&acb->l2meta.dependent_requests);
 return acb;
 }
 
@@ -566,6 +563,10 @@ static int qcow2_aio_write_cb(QCowAIOCB *acb)
 int n_end;
 int ret;
 int cur_nr_sectors; /* number of sectors in current iteration */
+QCowL2Meta l2meta;
+
+l2meta.nb_clusters = 0;
+qemu_co_queue_init(&l2meta.dependent_requests);
 
 if (acb->remaining_sectors == 0) {
 /* request completed */
@@ -579,12 +580,12 @@ static int qcow2_aio_write_cb(QCowAIOCB *acb)
 n_end = QCOW_MAX_CRYPT_CLUSTERS * s->cluster_sectors;
 
 ret = qcow2_alloc_cluster_offset(bs, acb->sector_num << 9,
-index_in_cluster, n_end, &cur_nr_sectors, &acb->l2meta);
+index_in_cluster, n_end, &cur_nr_sectors, &l2meta);
 if (ret < 0) {
 return ret;
 }
 
-acb->cluster_offset = acb->l2meta.cluster_offset;
+acb->cluster_offset = l2meta.cluster_offset;
 assert((acb->cluster_offset & 511) == 0);
 
 qemu_iovec_reset(&acb->hd_qiov);
@@ -618,9 +619,9 @@ static int qcow2_aio_write_cb(QCowAIOCB *acb)
 return ret;
 }
 
-ret = qcow2_alloc_cluster_link_l2(bs, &acb->l2meta);
+ret = qcow2_alloc_cluster_link_l2(bs, &l2meta);
 
-run_dependent_requests(s, &acb->l2meta);
+run_dependent_requests(s, &l2meta);
 
 if (ret < 0) {
 return ret;
-- 
1.7.1




[Qemu-devel] [PATCH v2 09/15] qcow2: remove cluster_offset from QCowAIOCB

2011-08-09 Thread Frediano Ziglio

Signed-off-by: Frediano Ziglio 
---
 block/qcow2.c |   22 +++---
 1 files changed, 11 insertions(+), 11 deletions(-)

diff --git a/block/qcow2.c b/block/qcow2.c
index adf31ce..446946e 100644
--- a/block/qcow2.c
+++ b/block/qcow2.c
@@ -378,7 +378,6 @@ typedef struct QCowAIOCB {
 QEMUIOVector *qiov;
 int remaining_sectors;
 uint64_t bytes_done;
-uint64_t cluster_offset;
 uint8_t *cluster_data;
 QEMUIOVector hd_qiov;
 } QCowAIOCB;
@@ -394,6 +393,7 @@ static int qcow2_aio_read_cb(QCowAIOCB *acb)
 int index_in_cluster, n1;
 int ret;
 int cur_nr_sectors; /* number of sectors in current iteration */
+uint64_t cluster_offset = 0;
 
 if (acb->remaining_sectors == 0) {
 /* request completed */
@@ -408,7 +408,7 @@ static int qcow2_aio_read_cb(QCowAIOCB *acb)
 }
 
 ret = qcow2_get_cluster_offset(bs, acb->sector_num << 9,
-&cur_nr_sectors, &acb->cluster_offset);
+&cur_nr_sectors, &cluster_offset);
 if (ret < 0) {
 return ret;
 }
@@ -419,7 +419,7 @@ static int qcow2_aio_read_cb(QCowAIOCB *acb)
 qemu_iovec_copy(&acb->hd_qiov, acb->qiov, acb->bytes_done,
 cur_nr_sectors * 512);
 
-if (!acb->cluster_offset) {
+if (!cluster_offset) {
 
 if (bs->backing_hd) {
 /* read from the base image */
@@ -439,9 +439,9 @@ static int qcow2_aio_read_cb(QCowAIOCB *acb)
 /* Note: in this case, no need to wait */
 qemu_iovec_memset(&acb->hd_qiov, 0, 512 * cur_nr_sectors);
 }
-} else if (acb->cluster_offset & QCOW_OFLAG_COMPRESSED) {
+} else if (cluster_offset & QCOW_OFLAG_COMPRESSED) {
 /* add AIO support for compressed blocks ? */
-ret = qcow2_decompress_cluster(bs, acb->cluster_offset);
+ret = qcow2_decompress_cluster(bs, cluster_offset);
 if (ret < 0) {
 return ret;
 }
@@ -450,7 +450,7 @@ static int qcow2_aio_read_cb(QCowAIOCB *acb)
 s->cluster_cache + index_in_cluster * 512,
 512 * cur_nr_sectors);
 } else {
-if ((acb->cluster_offset & 511) != 0) {
+if ((cluster_offset & 511) != 0) {
 return -EIO;
 }
 
@@ -474,7 +474,7 @@ static int qcow2_aio_read_cb(QCowAIOCB *acb)
 BLKDBG_EVENT(bs->file, BLKDBG_READ_AIO);
 qemu_co_mutex_unlock(&s->lock);
 ret = bdrv_co_readv(bs->file,
-(acb->cluster_offset >> 9) + index_in_cluster,
+(cluster_offset >> 9) + index_in_cluster,
 cur_nr_sectors, &acb->hd_qiov);
 qemu_co_mutex_lock(&s->lock);
 if (ret < 0) {
@@ -512,7 +512,6 @@ static QCowAIOCB *qcow2_aio_setup(BlockDriverState *bs, 
int64_t sector_num,
 
 acb->bytes_done = 0;
 acb->remaining_sectors = nb_sectors;
-acb->cluster_offset = 0;
 return acb;
 }
 
@@ -564,6 +563,7 @@ static int qcow2_aio_write_cb(QCowAIOCB *acb)
 int ret;
 int cur_nr_sectors; /* number of sectors in current iteration */
 QCowL2Meta l2meta;
+uint64_t cluster_offset;
 
 l2meta.nb_clusters = 0;
 qemu_co_queue_init(&l2meta.dependent_requests);
@@ -585,8 +585,8 @@ static int qcow2_aio_write_cb(QCowAIOCB *acb)
 return ret;
 }
 
-acb->cluster_offset = l2meta.cluster_offset;
-assert((acb->cluster_offset & 511) == 0);
+cluster_offset = l2meta.cluster_offset;
+assert((cluster_offset & 511) == 0);
 
 qemu_iovec_reset(&acb->hd_qiov);
 qemu_iovec_copy(&acb->hd_qiov, acb->qiov, acb->bytes_done,
@@ -612,7 +612,7 @@ static int qcow2_aio_write_cb(QCowAIOCB *acb)
 BLKDBG_EVENT(bs->file, BLKDBG_WRITE_AIO);
 qemu_co_mutex_unlock(&s->lock);
 ret = bdrv_co_writev(bs->file,
- (acb->cluster_offset >> 9) + index_in_cluster,
+ (cluster_offset >> 9) + index_in_cluster,
  cur_nr_sectors, &acb->hd_qiov);
 qemu_co_mutex_lock(&s->lock);
 if (ret < 0) {
-- 
1.7.1




[Qemu-devel] [PATCH v2 05/15] qcow: remove old #undefined code

2011-08-09 Thread Frediano Ziglio

Signed-off-by: Frediano Ziglio 
---
 block/qcow.c |   63 --
 1 files changed, 0 insertions(+), 63 deletions(-)

diff --git a/block/qcow.c b/block/qcow.c
index 00f339f..0989799 100644
--- a/block/qcow.c
+++ b/block/qcow.c
@@ -190,24 +190,6 @@ static int qcow_set_key(BlockDriverState *bs, const char 
*key)
 return -1;
 if (AES_set_decrypt_key(keybuf, 128, &s->aes_decrypt_key) != 0)
 return -1;
-#if 0
-/* test */
-{
-uint8_t in[16];
-uint8_t out[16];
-uint8_t tmp[16];
-for(i=0;i<16;i++)
-in[i] = i;
-AES_encrypt(in, tmp, &s->aes_encrypt_key);
-AES_decrypt(tmp, out, &s->aes_decrypt_key);
-for(i = 0; i < 16; i++)
-printf(" %02x", tmp[i]);
-printf("\n");
-for(i = 0; i < 16; i++)
-printf(" %02x", out[i]);
-printf("\n");
-}
-#endif
 return 0;
 }
 
@@ -441,51 +423,6 @@ static int decompress_cluster(BlockDriverState *bs, 
uint64_t cluster_offset)
 return 0;
 }
 
-#if 0
-
-static int qcow_read(BlockDriverState *bs, int64_t sector_num,
- uint8_t *buf, int nb_sectors)
-{
-BDRVQcowState *s = bs->opaque;
-int ret, index_in_cluster, n;
-uint64_t cluster_offset;
-
-while (nb_sectors > 0) {
-cluster_offset = get_cluster_offset(bs, sector_num << 9, 0, 0, 0, 0);
-index_in_cluster = sector_num & (s->cluster_sectors - 1);
-n = s->cluster_sectors - index_in_cluster;
-if (n > nb_sectors)
-n = nb_sectors;
-if (!cluster_offset) {
-if (bs->backing_hd) {
-/* read from the base image */
-ret = bdrv_read(bs->backing_hd, sector_num, buf, n);
-if (ret < 0)
-return -1;
-} else {
-memset(buf, 0, 512 * n);
-}
-} else if (cluster_offset & QCOW_OFLAG_COMPRESSED) {
-if (decompress_cluster(bs, cluster_offset) < 0)
-return -1;
-memcpy(buf, s->cluster_cache + index_in_cluster * 512, 512 * n);
-} else {
-ret = bdrv_pread(bs->file, cluster_offset + index_in_cluster * 
512, buf, n * 512);
-if (ret != n * 512)
-return -1;
-if (s->crypt_method) {
-encrypt_sectors(s, sector_num, buf, buf, n, 0,
-&s->aes_decrypt_key);
-}
-}
-nb_sectors -= n;
-sector_num += n;
-buf += n * 512;
-}
-return 0;
-}
-#endif
-
 static int qcow_co_readv(BlockDriverState *bs, int64_t sector_num,
  int nb_sectors, QEMUIOVector *qiov)
 {
-- 
1.7.1




[Qemu-devel] [PATCH v2 12/15] qcow2: removed QCowAIOCB entirely

2011-08-09 Thread Frediano Ziglio

Signed-off-by: Frediano Ziglio 
---
 block/qcow2.c |  207 ++---
 1 files changed, 80 insertions(+), 127 deletions(-)

diff --git a/block/qcow2.c b/block/qcow2.c
index dfb969e..6073568 100644
--- a/block/qcow2.c
+++ b/block/qcow2.c
@@ -372,83 +372,77 @@ int qcow2_backing_read1(BlockDriverState *bs, 
QEMUIOVector *qiov,
 return n1;
 }
 
-typedef struct QCowAIOCB {
-BlockDriverState *bs;
-int64_t sector_num;
-QEMUIOVector *qiov;
-int remaining_sectors;
-uint64_t bytes_done;
-uint8_t *cluster_data;
-QEMUIOVector hd_qiov;
-} QCowAIOCB;
-
-/*
- * Returns 0 when the request is completed successfully, 1 when there is still
- * a part left to do and -errno in error cases.
- */
-static int qcow2_aio_read_cb(QCowAIOCB *acb)
+static int qcow2_co_readv(BlockDriverState *bs, int64_t sector_num,
+  int remaining_sectors, QEMUIOVector *qiov)
 {
-BlockDriverState *bs = acb->bs;
 BDRVQcowState *s = bs->opaque;
 int index_in_cluster, n1;
 int ret;
 int cur_nr_sectors; /* number of sectors in current iteration */
 uint64_t cluster_offset = 0;
+uint64_t bytes_done = 0;
+QEMUIOVector hd_qiov;
+uint8_t *cluster_data = NULL;
 
-while (acb->remaining_sectors != 0) {
+qemu_iovec_init(&hd_qiov, qiov->niov);
+
+qemu_co_mutex_lock(&s->lock);
+
+while (remaining_sectors != 0) {
 
 /* prepare next request */
-cur_nr_sectors = acb->remaining_sectors;
+cur_nr_sectors = remaining_sectors;
 if (s->crypt_method) {
 cur_nr_sectors = MIN(cur_nr_sectors,
 QCOW_MAX_CRYPT_CLUSTERS * s->cluster_sectors);
 }
 
-ret = qcow2_get_cluster_offset(bs, acb->sector_num << 9,
+ret = qcow2_get_cluster_offset(bs, sector_num << 9,
 &cur_nr_sectors, &cluster_offset);
 if (ret < 0) {
-return ret;
+goto fail;
 }
 
-index_in_cluster = acb->sector_num & (s->cluster_sectors - 1);
+index_in_cluster = sector_num & (s->cluster_sectors - 1);
 
-qemu_iovec_reset(&acb->hd_qiov);
-qemu_iovec_copy(&acb->hd_qiov, acb->qiov, acb->bytes_done,
+qemu_iovec_reset(&hd_qiov);
+qemu_iovec_copy(&hd_qiov, qiov, bytes_done,
 cur_nr_sectors * 512);
 
 if (!cluster_offset) {
 
 if (bs->backing_hd) {
 /* read from the base image */
-n1 = qcow2_backing_read1(bs->backing_hd, &acb->hd_qiov,
-acb->sector_num, cur_nr_sectors);
+n1 = qcow2_backing_read1(bs->backing_hd, &hd_qiov,
+sector_num, cur_nr_sectors);
 if (n1 > 0) {
 BLKDBG_EVENT(bs->file, BLKDBG_READ_BACKING_AIO);
 qemu_co_mutex_unlock(&s->lock);
-ret = bdrv_co_readv(bs->backing_hd, acb->sector_num,
-n1, &acb->hd_qiov);
+ret = bdrv_co_readv(bs->backing_hd, sector_num,
+n1, &hd_qiov);
 qemu_co_mutex_lock(&s->lock);
 if (ret < 0) {
-return ret;
+goto fail;
 }
 }
 } else {
 /* Note: in this case, no need to wait */
-qemu_iovec_memset(&acb->hd_qiov, 0, 512 * cur_nr_sectors);
+qemu_iovec_memset(&hd_qiov, 0, 512 * cur_nr_sectors);
 }
 } else if (cluster_offset & QCOW_OFLAG_COMPRESSED) {
 /* add AIO support for compressed blocks ? */
 ret = qcow2_decompress_cluster(bs, cluster_offset);
 if (ret < 0) {
-return ret;
+goto fail;
 }
 
-qemu_iovec_from_buffer(&acb->hd_qiov,
+qemu_iovec_from_buffer(&hd_qiov,
 s->cluster_cache + index_in_cluster * 512,
 512 * cur_nr_sectors);
 } else {
 if ((cluster_offset & 511) != 0) {
-return -EIO;
+ret = -EIO;
+goto fail;
 }
 
 if (s->crypt_method) {
@@ -456,15 +450,15 @@ static int qcow2_aio_read_cb(QCowAIOCB *acb)
  * For encrypted images, read everything into a temporary
  * contiguous buffer on which the AES functions can work.
  */
-if (!acb->cluster_data) {
-acb->cluster_data =
+if (!cluster_data) {
+cluster_data =
 qemu_mallocz(QCOW_MAX_CRYPT_CLUSTERS * 
s->cluster_size);
 }
 
 assert(cur_nr_sectors <=
 QCOW_MAX_CRYPT_CLUSTERS * s->cluster_sectors);
-qemu_iovec_reset(&acb->hd_qiov);
-qemu_iovec_add(&acb->hd_qiov, acb->cluster_data

[Qemu-devel] [PATCH v2 04/15] qcow: embed qcow_aio_read_cb into qcow_co_readv and qcow_aio_write_cb into qcow_co_writev

2011-08-09 Thread Frediano Ziglio

Signed-off-by: Frediano Ziglio 
---
 block/qcow.c |  291 -
 1 files changed, 123 insertions(+), 168 deletions(-)

diff --git a/block/qcow.c b/block/qcow.c
index e17f9c5..00f339f 100644
--- a/block/qcow.c
+++ b/block/qcow.c
@@ -486,223 +486,178 @@ static int qcow_read(BlockDriverState *bs, int64_t 
sector_num,
 }
 #endif
 
-typedef struct QCowAIOCB {
-BlockDriverState *bs;
-int64_t sector_num;
-QEMUIOVector *qiov;
-uint8_t *buf;
-void *orig_buf;
-int nb_sectors;
-} QCowAIOCB;
-
-static QCowAIOCB *qcow_aio_setup(BlockDriverState *bs,
-int64_t sector_num, QEMUIOVector *qiov, int nb_sectors,
-int is_write, QCowAIOCB *acb)
-{
-acb->bs = bs;
-acb->sector_num = sector_num;
-acb->qiov = qiov;
-
-if (qiov->niov > 1) {
-acb->buf = acb->orig_buf = qemu_blockalign(bs, qiov->size);
-if (is_write)
-qemu_iovec_to_buffer(qiov, acb->buf);
-} else {
-acb->orig_buf = NULL;
-acb->buf = (uint8_t *)qiov->iov->iov_base;
-}
-acb->nb_sectors = nb_sectors;
-return acb;
-}
-
-static int qcow_aio_read_cb(QCowAIOCB *acb)
+static int qcow_co_readv(BlockDriverState *bs, int64_t sector_num,
+ int nb_sectors, QEMUIOVector *qiov)
 {
-BlockDriverState *bs = acb->bs;
 BDRVQcowState *s = bs->opaque;
 int index_in_cluster;
-int ret, n;
+int ret = 0, n;
 uint64_t cluster_offset;
 struct iovec hd_iov;
 QEMUIOVector hd_qiov;
+uint8_t *buf;
+void *orig_buf;
 
- redo:
-if (acb->nb_sectors == 0) {
-/* request completed */
-return 0;
+if (qiov->niov > 1) {
+buf = orig_buf = qemu_blockalign(bs, qiov->size);
+} else {
+orig_buf = NULL;
+buf = (uint8_t *)qiov->iov->iov_base;
 }
 
-/* prepare next request */
-cluster_offset = get_cluster_offset(bs, acb->sector_num << 9,
- 0, 0, 0, 0);
-index_in_cluster = acb->sector_num & (s->cluster_sectors - 1);
-n = s->cluster_sectors - index_in_cluster;
-if (n > acb->nb_sectors) {
-n = acb->nb_sectors;
-}
+qemu_co_mutex_lock(&s->lock);
+
+while (nb_sectors != 0) {
+/* prepare next request */
+cluster_offset = get_cluster_offset(bs, sector_num << 9,
+ 0, 0, 0, 0);
+index_in_cluster = sector_num & (s->cluster_sectors - 1);
+n = s->cluster_sectors - index_in_cluster;
+if (n > nb_sectors) {
+n = nb_sectors;
+}
 
-if (!cluster_offset) {
-if (bs->backing_hd) {
-/* read from the base image */
-hd_iov.iov_base = (void *)acb->buf;
+if (!cluster_offset) {
+if (bs->backing_hd) {
+/* read from the base image */
+hd_iov.iov_base = (void *)buf;
+hd_iov.iov_len = n * 512;
+qemu_iovec_init_external(&hd_qiov, &hd_iov, 1);
+qemu_co_mutex_unlock(&s->lock);
+ret = bdrv_co_readv(bs->backing_hd, sector_num,
+n, &hd_qiov);
+qemu_co_mutex_lock(&s->lock);
+if (ret < 0) {
+goto fail;
+}
+} else {
+/* Note: in this case, no need to wait */
+memset(buf, 0, 512 * n);
+}
+} else if (cluster_offset & QCOW_OFLAG_COMPRESSED) {
+/* add AIO support for compressed blocks ? */
+if (decompress_cluster(bs, cluster_offset) < 0) {
+goto fail;
+}
+memcpy(buf,
+   s->cluster_cache + index_in_cluster * 512, 512 * n);
+} else {
+if ((cluster_offset & 511) != 0) {
+goto fail;
+}
+hd_iov.iov_base = (void *)buf;
 hd_iov.iov_len = n * 512;
 qemu_iovec_init_external(&hd_qiov, &hd_iov, 1);
 qemu_co_mutex_unlock(&s->lock);
-ret = bdrv_co_readv(bs->backing_hd, acb->sector_num,
+ret = bdrv_co_readv(bs->file,
+(cluster_offset >> 9) + index_in_cluster,
 n, &hd_qiov);
 qemu_co_mutex_lock(&s->lock);
 if (ret < 0) {
-return -EIO;
+break;
+}
+if (s->crypt_method) {
+encrypt_sectors(s, sector_num, buf, buf,
+n, 0,
+&s->aes_decrypt_key);
 }
-} else {
-/* Note: in this case, no need to wait */
-memset(acb->buf, 0, 512 * n);
-}
-} else if (cluster_offset & QCOW_OFLAG_COMPRESSED) {
-/* add AIO support for compressed blocks ? */
-if (decompress_cluster(bs, cluster_offset) < 0) {
-return -EIO;
-

Re: [Qemu-devel] [PATCH 09/24] omap_gpmc/nseries/tusb6010: convert to memory API

2011-08-09 Thread Avi Kivity

On 08/09/2011 11:07 AM, Peter Maydell wrote:

On 9 August 2011 08:41, Avi Kivity  wrote:
>  On 08/09/2011 10:37 AM, Peter Maydell wrote:
>>
>>  On 9 August 2011 07:34, Avi Kivity wrote:
>>  >Also, my patchset focuses on mechanical transformations.  It is already
>>  >risky enough in terms of regressions, I'm not going to rewrite/improve
>>  >  all
>>  >of qemu; if you want those callbacks removed, you will have to remove
>>  >  them
>>  >yourself.
>>
>>  Sure. Can I ask you to drop this one and the onenand patch, and I'll
>>  have a go over the next week or three at incorporating a conversion into
>>  the patchset I have for those?

>  Umm... next week or three?  I'd like to move fast on this.

I'm busy at least half of this week, and next week is the KVM
Forum, so I didn't want to promise anything faster.

>  Can't you base your patches on this instead?  After all, this is just a
>  mechanical transformation, there should be no functional change.

Well, maybe, but it seems a bit silly to commit something which is
going to be half-reverted and rewritten.


It certainly won't be reverted, since the ram_addr_t based API will be 
removed.  So it will just be modified to suit however you want it to 
be.  If you want it otherwise, post a patch, either on top of this or 
instead of it, and I will incorporate it into this patchset.


Again, this is not about improving individual devices (that is left for 
device maintainers); it's just about converting to the new API with 
minimal changes.



Also I just noticed this change:

-static CPUReadMemoryFunc * const omap_gpmc_readfn[] = {
-omap_badwidth_read32,  /* TODO */
-omap_badwidth_read32,  /* TODO */
-omap_gpmc_read,
-};
-
-static CPUWriteMemoryFunc * const omap_gpmc_writefn[] = {
-omap_badwidth_write32, /* TODO */
-omap_badwidth_write32, /* TODO */
-omap_gpmc_write,
+static const MemoryRegionOps omap_gpmc_ops = {
+/* TODO: specialize < 4 byte writes? */
+.read = omap_gpmc_read,
+.write = omap_gpmc_write,
+.endianness = DEVICE_NATIVE_ENDIAN,
  };

...isn't this just throwing away the warnings on bad-width accesses?


It is; will fix.


(it's not clear to me from the docs what the behaviour of the new
API on bad-width accesses is going to be.)


It's one of the things I'm collecting requirements on.  So far I think 
we'll have an inheritable, configurable policy that allows you to ignore 
writes and read all-ones, raise a machine check exception, or warn (in 
developer mode only).


--
error compiling committee.c: too many arguments to function




Re: [Qemu-devel] [PATCH v5 2/4] block: add the block queue support

2011-08-09 Thread Ram Pai
On Tue, Aug 09, 2011 at 12:17:50PM +0800, Zhi Yong Wu wrote:
> The patch introduces a block queue for the QEMU block layer.
> 
> Signed-off-by: Zhi Yong Wu 
> ---
>  block/blk-queue.c |  141 
> +
>  block/blk-queue.h |   73 +++
>  2 files changed, 214 insertions(+), 0 deletions(-)
>  create mode 100644 block/blk-queue.c
>  create mode 100644 block/blk-queue.h
> 
> diff --git a/block/blk-queue.c b/block/blk-queue.c
> new file mode 100644
> index 000..f36f3e2
> --- /dev/null
> +++ b/block/blk-queue.c
> @@ -0,0 +1,141 @@
> +/*
> + * QEMU System Emulator queue definition for block layer
> + *
> + * Copyright (c) 2011 Zhi Yong Wu  
> + *
> + * Permission is hereby granted, free of charge, to any person obtaining a 
> copy
> + * of this software and associated documentation files (the "Software"), to 
> deal
> + * in the Software without restriction, including without limitation the 
> rights
> + * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
> + * copies of the Software, and to permit persons to whom the Software is
> + * furnished to do so, subject to the following conditions:
> + *
> + * The above copyright notice and this permission notice shall be included in
> + * all copies or substantial portions of the Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
> + * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
> + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING 
> FROM,
> + * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
> + * THE SOFTWARE.
> + */
> +
> +#include "block_int.h"
> +#include "qemu-queue.h"
> +#include "block/blk-queue.h"
> +
> +/* The APIs for block request queue on qemu block layer.
> + */
> +
> +static void qemu_block_queue_cancel(BlockDriverAIOCB *acb)
> +{
> +qemu_aio_release(acb);
> +}
> +
> +static AIOPool block_queue_pool = {
> +.aiocb_size = sizeof(struct BlockDriverAIOCB),
> +.cancel = qemu_block_queue_cancel,
> +};
> +
> +static void qemu_block_queue_callback(void *opaque, int ret)
> +{
> +BlockDriverAIOCB *acb = opaque;
> +
> +qemu_aio_release(acb);
> +}
> +
> +BlockQueue *qemu_new_block_queue(void)
> +{
> +BlockQueue *queue;
> +
> +queue = qemu_mallocz(sizeof(BlockQueue));
> +
> +QTAILQ_INIT(&queue->requests);
> +
> +return queue;
> +}
> +
> +void qemu_del_block_queue(BlockQueue *queue)
> +{
> +BlockIORequest *request, *next;
> +
> +QTAILQ_FOREACH_SAFE(request, &queue->requests, entry, next) {
> +QTAILQ_REMOVE(&queue->requests, request, entry);
> +qemu_free(request);
> +}
> +
> +qemu_free(queue);
> +}
> +
> +BlockDriverAIOCB *qemu_block_queue_enqueue(BlockQueue *queue,
> +BlockDriverState *bs,
> +BlockRequestHandler *handler,
> +int64_t sector_num,
> +QEMUIOVector *qiov,
> +int nb_sectors,
> +BlockDriverCompletionFunc *cb,
> +void *opaque)
> +{
> +BlockIORequest *request;
> +BlockDriverAIOCB *acb;
> +
> +request = qemu_malloc(sizeof(BlockIORequest));
> +request->bs = bs;
> +request->handler = handler;
> +request->sector_num = sector_num;
> +request->qiov = qiov;
> +request->nb_sectors = nb_sectors;
> +request->cb = cb;
> +request->opaque = opaque;
> +
> +QTAILQ_INSERT_TAIL(&queue->requests, request, entry);
> +
> +acb = qemu_aio_get(&block_queue_pool, bs,
> +   qemu_block_queue_callback, opaque);
> +
> +request->acb = acb;
> +
> +return acb;
> +}
> +
> +int qemu_block_queue_handler(BlockIORequest *request)
> +{
> +int ret;
> +BlockDriverAIOCB *res;
> +
> +/* indicate this req is from block queue */
> +request->bs->req_from_queue = true;
> +
> +res = request->handler(request->bs, request->sector_num,
> +   request->qiov, request->nb_sectors,
> +   request->cb, request->opaque);


Shouldn't you return a failure here if res == NULL?
Otherwise qemu_block_queue_callback() gets called, which
will release the acb.


> +
> +if (request->acb) {
> +qemu_block_queue_callback(request->acb, 0);
> +}
> +
> +ret = (res == NULL) ? 0 : 1;
> +
> +return ret;
> +}


RP



Re: [Qemu-devel] [PATCH v5 2/4] block: add the block queue support

2011-08-09 Thread Zhi Yong Wu
On Tue, Aug 9, 2011 at 4:46 PM, Ram Pai  wrote:
> On Tue, Aug 09, 2011 at 12:17:50PM +0800, Zhi Yong Wu wrote:
>> The patch introduces a block queue for the QEMU block layer.
>>
>> Signed-off-by: Zhi Yong Wu 
>> ---
>>  block/blk-queue.c |  141 
>> +
>>  block/blk-queue.h |   73 +++
>>  2 files changed, 214 insertions(+), 0 deletions(-)
>>  create mode 100644 block/blk-queue.c
>>  create mode 100644 block/blk-queue.h
>>
>> diff --git a/block/blk-queue.c b/block/blk-queue.c
>> new file mode 100644
>> index 000..f36f3e2
>> --- /dev/null
>> +++ b/block/blk-queue.c
>> @@ -0,0 +1,141 @@
>> +/*
>> + * QEMU System Emulator queue definition for block layer
>> + *
>> + * Copyright (c) 2011 Zhi Yong Wu  
>> + *
>> + * Permission is hereby granted, free of charge, to any person obtaining a 
>> copy
>> + * of this software and associated documentation files (the "Software"), to 
>> deal
>> + * in the Software without restriction, including without limitation the 
>> rights
>> + * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
>> + * copies of the Software, and to permit persons to whom the Software is
>> + * furnished to do so, subject to the following conditions:
>> + *
>> + * The above copyright notice and this permission notice shall be included 
>> in
>> + * all copies or substantial portions of the Software.
>> + *
>> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS 
>> OR
>> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
>> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
>> + * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR 
>> OTHER
>> + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING 
>> FROM,
>> + * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
>> + * THE SOFTWARE.
>> + */
>> +
>> +#include "block_int.h"
>> +#include "qemu-queue.h"
>> +#include "block/blk-queue.h"
>> +
>> +/* The APIs for block request queue on qemu block layer.
>> + */
>> +
>> +static void qemu_block_queue_cancel(BlockDriverAIOCB *acb)
>> +{
>> +    qemu_aio_release(acb);
>> +}
>> +
>> +static AIOPool block_queue_pool = {
>> +    .aiocb_size         = sizeof(struct BlockDriverAIOCB),
>> +    .cancel             = qemu_block_queue_cancel,
>> +};
>> +
>> +static void qemu_block_queue_callback(void *opaque, int ret)
>> +{
>> +    BlockDriverAIOCB *acb = opaque;
>> +
>> +    qemu_aio_release(acb);
>> +}
>> +
>> +BlockQueue *qemu_new_block_queue(void)
>> +{
>> +    BlockQueue *queue;
>> +
>> +    queue = qemu_mallocz(sizeof(BlockQueue));
>> +
>> +    QTAILQ_INIT(&queue->requests);
>> +
>> +    return queue;
>> +}
>> +
>> +void qemu_del_block_queue(BlockQueue *queue)
>> +{
>> +    BlockIORequest *request, *next;
>> +
>> +    QTAILQ_FOREACH_SAFE(request, &queue->requests, entry, next) {
>> +        QTAILQ_REMOVE(&queue->requests, request, entry);
>> +        qemu_free(request);
>> +    }
>> +
>> +    qemu_free(queue);
>> +}
>> +
>> +BlockDriverAIOCB *qemu_block_queue_enqueue(BlockQueue *queue,
>> +                        BlockDriverState *bs,
>> +                        BlockRequestHandler *handler,
>> +                        int64_t sector_num,
>> +                        QEMUIOVector *qiov,
>> +                        int nb_sectors,
>> +                        BlockDriverCompletionFunc *cb,
>> +                        void *opaque)
>> +{
>> +    BlockIORequest *request;
>> +    BlockDriverAIOCB *acb;
>> +
>> +    request = qemu_malloc(sizeof(BlockIORequest));
>> +    request->bs = bs;
>> +    request->handler = handler;
>> +    request->sector_num = sector_num;
>> +    request->qiov = qiov;
>> +    request->nb_sectors = nb_sectors;
>> +    request->cb = cb;
>> +    request->opaque = opaque;
>> +
>> +    QTAILQ_INSERT_TAIL(&queue->requests, request, entry);
>> +
>> +    acb = qemu_aio_get(&block_queue_pool, bs,
>> +                       qemu_block_queue_callback, opaque);
>> +
>> +    request->acb = acb;
>> +
>> +    return acb;
>> +}
>> +
>> +int qemu_block_queue_handler(BlockIORequest *request)
>> +{
>> +    int ret;
>> +    BlockDriverAIOCB *res;
>> +
>> +    /* indicate this req is from block queue */
>> +    request->bs->req_from_queue = true;
>> +
>> +    res = request->handler(request->bs, request->sector_num,
>> +                           request->qiov, request->nb_sectors,
>> +                           request->cb, request->opaque);
>
>
> Shouldn't you return a failure here if res == NULL?
> Otherwise qemu_block_queue_callback() gets called, which
> will release the acb.
Thanks, Pai. When the request handler fails, the acb should not be
released. A check should be done before the callback function is
invoked. It should look like this:
res = request->handler(request->bs, request->sector_num,
   request->qiov, request->nb_sectors,

Re: [Qemu-devel] [PATCH 09/24] omap_gpmc/nseries/tusb6010: convert to memory API

2011-08-09 Thread Avi Kivity

On 08/09/2011 11:44 AM, Avi Kivity wrote:

...isn't this just throwing away the warnings on bad-width accesses?



It is; will fix.


Reading the original code, it seems broken:

uint32_t omap_badwidth_read32(void *opaque, target_phys_addr_t addr)
{
uint32_t ret;

OMAP_32B_REG(addr);
cpu_physical_memory_read(addr, (void *) &ret, 4);
return ret;
}

The code issues a read from addr, but that is not the original address 
used by the guest, since addresses are relative to the start of the 
region (in both the old and new APIs), not to physical address start.


So I'll just set access size validity in the new code.

--
error compiling committee.c: too many arguments to function




Re: [Qemu-devel] [PATCH v5 3/4] block: add block timer and block throttling algorithm

2011-08-09 Thread Ram Pai
On Tue, Aug 09, 2011 at 12:17:51PM +0800, Zhi Yong Wu wrote:
> Note:
>   1.) When bps/iops limits are specified to a small value such as 511 
> bytes/s, this VM will hang up. We are considering how to handle this scenario.
>   2.) When the "dd" command is issued in the guest, if its option bs is set to a 
> large value such as "bs=1024K", the resulting speed will be slightly bigger than 
> the limits.
> 
> For these problems, if you have any thoughts, please let us know. :)
> 
> Signed-off-by: Zhi Yong Wu 
> ---
>  block.c |  347 
> +--
>  block.h |6 +-
>  block_int.h |   30 +
>  3 files changed, 372 insertions(+), 11 deletions(-)
> 
> diff --git a/block.c b/block.c
> index 24a25d5..8fd6643 100644
> --- a/block.c
> +++ b/block.c
> @@ -29,6 +29,9 @@
>  #include "module.h"
>  #include "qemu-objects.h"
> 
> +#include "qemu-timer.h"
> +#include "block/blk-queue.h"
> +
>  #ifdef CONFIG_BSD
>  #include 
>  #include 
> @@ -58,6 +61,13 @@ static int bdrv_read_em(BlockDriverState *bs, int64_t 
> sector_num,
>  static int bdrv_write_em(BlockDriverState *bs, int64_t sector_num,
>   const uint8_t *buf, int nb_sectors);
> 
> +static bool bdrv_exceed_bps_limits(BlockDriverState *bs, int nb_sectors,
> +bool is_write, double elapsed_time, uint64_t *wait);
> +static bool bdrv_exceed_iops_limits(BlockDriverState *bs, bool is_write,
> +double elapsed_time, uint64_t *wait);
> +static bool bdrv_exceed_io_limits(BlockDriverState *bs, int nb_sectors,
> +bool is_write, uint64_t *wait);
> +
>  static QTAILQ_HEAD(, BlockDriverState) bdrv_states =
>  QTAILQ_HEAD_INITIALIZER(bdrv_states);
> 
> @@ -90,6 +100,68 @@ int is_windows_drive(const char *filename)
>  }
>  #endif
> 
> +/* throttling disk I/O limits */
> +void bdrv_io_limits_disable(BlockDriverState *bs)
> +{
> +bs->io_limits_enabled = false;
> +bs->req_from_queue= false;
> +
> +if (bs->block_queue) {
> +qemu_block_queue_flush(bs->block_queue);
> +qemu_del_block_queue(bs->block_queue);
> +}
> +
> +if (bs->block_timer) {
> +qemu_del_timer(bs->block_timer);
> +qemu_free_timer(bs->block_timer);
> +}
> +
> +bs->slice_start[0]   = 0;
> +bs->slice_start[1]   = 0;
> +
> +bs->slice_end[0] = 0;
> +bs->slice_end[1] = 0;
> +}
> +
> +static void bdrv_block_timer(void *opaque)
> +{
> +BlockDriverState *bs = opaque;
> +BlockQueue *queue = bs->block_queue;
> +
> +qemu_block_queue_flush(queue);
> +}
> +
> +void bdrv_io_limits_enable(BlockDriverState *bs)
> +{
> +bs->req_from_queue = false;
> +
> +bs->block_queue= qemu_new_block_queue();
> +bs->block_timer= qemu_new_timer_ns(vm_clock, bdrv_block_timer, bs);
> +
> +bs->slice_start[BLOCK_IO_LIMIT_READ]  = qemu_get_clock_ns(vm_clock);
> +bs->slice_start[BLOCK_IO_LIMIT_WRITE] = qemu_get_clock_ns(vm_clock);
 
A minor comment: better to keep the slice_start of both the READ and WRITE
sides the same.

bs->slice_start[BLOCK_IO_LIMIT_WRITE] =
    bs->slice_start[BLOCK_IO_LIMIT_READ];

saves a call to qemu_get_clock_ns().

> +
> +bs->slice_end[BLOCK_IO_LIMIT_READ]=
> +  qemu_get_clock_ns(vm_clock) + BLOCK_IO_SLICE_TIME;

bs->slice_end[BLOCK_IO_LIMIT_READ] = bs->slice_start[BLOCK_IO_LIMIT_READ] +
BLOCK_IO_SLICE_TIME;

saves one more call to qemu_get_clock_ns()

> +bs->slice_end[BLOCK_IO_LIMIT_WRITE]   =
> +  qemu_get_clock_ns(vm_clock) + BLOCK_IO_SLICE_TIME;


bs->slice_end[BLOCK_IO_LIMIT_WRITE] = bs->slice_start[BLOCK_IO_LIMIT_WRITE] 
+
BLOCK_IO_SLICE_TIME;

yet another call saving.


> +}
> +
> +bool bdrv_io_limits_enabled(BlockDriverState *bs)
> +{
> +BlockIOLimit *io_limits = &bs->io_limits;
> +if ((io_limits->bps[BLOCK_IO_LIMIT_READ] == 0)
> + && (io_limits->bps[BLOCK_IO_LIMIT_WRITE] == 0)
> + && (io_limits->bps[BLOCK_IO_LIMIT_TOTAL] == 0)
> + && (io_limits->iops[BLOCK_IO_LIMIT_READ] == 0)
> + && (io_limits->iops[BLOCK_IO_LIMIT_WRITE] == 0)
> + && (io_limits->iops[BLOCK_IO_LIMIT_TOTAL] == 0)) {
> +return false;
> +}
> +
> +return true;
> +}

can be optimized to:

return (io_limits->bps[BLOCK_IO_LIMIT_READ]
|| io_limits->bps[BLOCK_IO_LIMIT_WRITE]
|| io_limits->bps[BLOCK_IO_LIMIT_TOTAL]
|| io_limits->iops[BLOCK_IO_LIMIT_READ]
|| io_limits->iops[BLOCK_IO_LIMIT_WRITE]
|| io_limits->iops[BLOCK_IO_LIMIT_TOTAL]);

> +
>  /* check if the path starts with ":" */
>  static int path_has_protocol(const char *path)
>  {
> @@ -642,6 +714,11 @@ int bdrv_open(BlockDriverState *bs, const char 
> *filename, int flags,
>  bs->change_cb(bs->change_opaque, CHANGE_MEDIA);
>  }
> 
> +/* throttling disk I/O limits */
> +if (bs->io_limits_enabled) {
> +

[Qemu-devel] [PATCH v1.1 08/24] tusb6010: move declarations to new file tusb6010.h

2011-08-09 Thread Avi Kivity
Avoid #include hell.

Signed-off-by: Avi Kivity 
---

v1.1: add copyright/license blurb

 hw/devices.h  |7 ---
 hw/nseries.c  |1 +
 hw/tusb6010.c |2 +-
 hw/tusb6010.h |   25 +
 4 files changed, 27 insertions(+), 8 deletions(-)
 create mode 100644 hw/tusb6010.h

diff --git a/hw/devices.h b/hw/devices.h
index c788373..07fda83 100644
--- a/hw/devices.h
+++ b/hw/devices.h
@@ -47,13 +47,6 @@ void *tahvo_init(qemu_irq irq, int betty);
 
 void retu_key_event(void *retu, int state);
 
-/* tusb6010.c */
-typedef struct TUSBState TUSBState;
-TUSBState *tusb6010_init(qemu_irq intr);
-int tusb6010_sync_io(TUSBState *s);
-int tusb6010_async_io(TUSBState *s);
-void tusb6010_power(TUSBState *s, int on);
-
 /* tc6393xb.c */
 typedef struct TC6393xbState TC6393xbState;
 #define TC6393XB_RAM   0x11 /* amount of ram for Video and USB */
diff --git a/hw/nseries.c b/hw/nseries.c
index 6a5575e..5521f28 100644
--- a/hw/nseries.c
+++ b/hw/nseries.c
@@ -32,6 +32,7 @@
 #include "bt.h"
 #include "loader.h"
 #include "blockdev.h"
+#include "tusb6010.h"
 
 /* Nokia N8x0 support */
 struct n800_s {
diff --git a/hw/tusb6010.c b/hw/tusb6010.c
index ccd01ad..add748c 100644
--- a/hw/tusb6010.c
+++ b/hw/tusb6010.c
@@ -23,7 +23,7 @@
 #include "usb.h"
 #include "omap.h"
 #include "irq.h"
-#include "devices.h"
+#include "tusb6010.h"
 
 struct TUSBState {
 int iomemtype[2];
diff --git a/hw/tusb6010.h b/hw/tusb6010.h
new file mode 100644
index 000..ebb3584
--- /dev/null
+++ b/hw/tusb6010.h
@@ -0,0 +1,25 @@
+/*
+ * tusb6010 interfaces
+ *
+ * Copyright 2011 Red Hat, Inc. and/or its affiliates
+ *
+ * Authors:
+ *  Avi Kivity 
+ *
+ * Derived from hw/devices.h.
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2.  See
+ * the COPYING file in the top-level directory.
+ *
+ */
+
+#ifndef TUSB6010_H
+#define TUSB6010_H
+
+typedef struct TUSBState TUSBState;
+TUSBState *tusb6010_init(qemu_irq intr);
+int tusb6010_sync_io(TUSBState *s);
+int tusb6010_async_io(TUSBState *s);
+void tusb6010_power(TUSBState *s, int on);
+
+#endif
-- 
1.7.5.3




[Qemu-devel] [PATCH v1.1 09/24] omap_gpmc/nseries/tusb6010: convert to memory API

2011-08-09 Thread Avi Kivity
Somewhat clumsy since it needs a variable sized region.

Signed-off-by: Avi Kivity 
---

v1.1: set access size validity for omap gpmc region to allow only 32-bit
  accesses

 hw/omap.h  |3 ++-
 hw/omap_gpmc.c |   56 
 hw/tusb6010.c  |   30 +-
 hw/tusb6010.h  |7 +--
 4 files changed, 52 insertions(+), 44 deletions(-)

diff --git a/hw/omap.h b/hw/omap.h
index a064353..c2fe54c 100644
--- a/hw/omap.h
+++ b/hw/omap.h
@@ -17,6 +17,7 @@
  * with this program; if not, see .
  */
 #ifndef hw_omap_h
+#include "memory.h"
 # define hw_omap_h "omap.h"
 
 # define OMAP_EMIFS_BASE   0x
@@ -119,7 +120,7 @@ void omap_sdrc_reset(struct omap_sdrc_s *s);
 struct omap_gpmc_s;
 struct omap_gpmc_s *omap_gpmc_init(target_phys_addr_t base, qemu_irq irq);
 void omap_gpmc_reset(struct omap_gpmc_s *s);
-void omap_gpmc_attach(struct omap_gpmc_s *s, int cs, int iomemtype,
+void omap_gpmc_attach(struct omap_gpmc_s *s, int cs, MemoryRegion *iomem,
 void (*base_upd)(void *opaque, target_phys_addr_t new),
 void (*unmap)(void *opaque), void *opaque);
 
diff --git a/hw/omap_gpmc.c b/hw/omap_gpmc.c
index 8bf3343..19e0865 100644
--- a/hw/omap_gpmc.c
+++ b/hw/omap_gpmc.c
@@ -21,10 +21,13 @@
 #include "hw.h"
 #include "flash.h"
 #include "omap.h"
+#include "memory.h"
+#include "exec-memory.h"
 
 /* General-Purpose Memory Controller */
 struct omap_gpmc_s {
 qemu_irq irq;
+MemoryRegion iomem;
 
 uint8_t sysconfig;
 uint16_t irqst;
@@ -39,7 +42,8 @@ struct omap_gpmc_s {
 uint32_t config[7];
 target_phys_addr_t base;
 size_t size;
-int iomemtype;
+MemoryRegion *iomem;
+MemoryRegion container;
 void (*base_update)(void *opaque, target_phys_addr_t new);
 void (*unmap)(void *opaque);
 void *opaque;
@@ -75,8 +79,12 @@ static void omap_gpmc_cs_map(struct omap_gpmc_cs_file_s *f, 
int base, int mask)
  * constant), the mask should cause wrapping of the address space, so
  * that the same memory becomes accessible at every size bytes
  * starting from base.  */
-if (f->iomemtype)
-cpu_register_physical_memory(f->base, f->size, f->iomemtype);
+if (f->iomem) {
+memory_region_init(&f->container, "omap-gpmc-file", f->size);
+memory_region_add_subregion(&f->container, 0, f->iomem);
+memory_region_add_subregion(get_system_memory(), f->base,
+&f->container);
+}
 
 if (f->base_update)
 f->base_update(f->opaque, f->base);
@@ -87,8 +95,11 @@ static void omap_gpmc_cs_unmap(struct omap_gpmc_cs_file_s *f)
 if (f->size) {
 if (f->unmap)
 f->unmap(f->opaque);
-if (f->iomemtype)
-cpu_register_physical_memory(f->base, f->size, IO_MEM_UNASSIGNED);
+if (f->iomem) {
+memory_region_del_subregion(get_system_memory(), &f->container);
+memory_region_del_subregion(&f->container, f->iomem);
+memory_region_destroy(&f->container);
+}
 f->base = 0;
 f->size = 0;
 }
@@ -132,7 +143,8 @@ void omap_gpmc_reset(struct omap_gpmc_s *s)
 ecc_reset(&s->ecc[i]);
 }
 
-static uint32_t omap_gpmc_read(void *opaque, target_phys_addr_t addr)
+static uint64_t omap_gpmc_read(void *opaque, target_phys_addr_t addr,
+   unsigned size)
 {
 struct omap_gpmc_s *s = (struct omap_gpmc_s *) opaque;
 int cs;
@@ -230,7 +242,7 @@ static uint32_t omap_gpmc_read(void *opaque, 
target_phys_addr_t addr)
 }
 
 static void omap_gpmc_write(void *opaque, target_phys_addr_t addr,
-uint32_t value)
+uint64_t value, unsigned size)
 {
 struct omap_gpmc_s *s = (struct omap_gpmc_s *) opaque;
 int cs;
@@ -249,7 +261,7 @@ static void omap_gpmc_write(void *opaque, 
target_phys_addr_t addr,
 
 case 0x010:/* GPMC_SYSCONFIG */
 if ((value >> 3) == 0x3)
-fprintf(stderr, "%s: bad SDRAM idle mode %i\n",
+fprintf(stderr, "%s: bad SDRAM idle mode %"PRIi64"\n",
 __FUNCTION__, value >> 3);
 if (value & 2)
 omap_gpmc_reset(s);
@@ -369,34 +381,30 @@ static void omap_gpmc_write(void *opaque, 
target_phys_addr_t addr,
 }
 }
 
-static CPUReadMemoryFunc * const omap_gpmc_readfn[] = {
-omap_badwidth_read32,  /* TODO */
-omap_badwidth_read32,  /* TODO */
-omap_gpmc_read,
-};
-
-static CPUWriteMemoryFunc * const omap_gpmc_writefn[] = {
-omap_badwidth_write32, /* TODO */
-omap_badwidth_write32, /* TODO */
-omap_gpmc_write,
+static const MemoryRegionOps omap_gpmc_ops = {
+.read = omap_gpmc_read,
+.write = omap_gpmc_write,
+.endianness = DEVICE_NATIVE_ENDIAN,
+.valid = {
+.min_access_size = 4,
+.max_access

Re: [Qemu-devel] [PATCH v5 3/4] block: add block timer and block throttling algorithm

2011-08-09 Thread Zhi Yong Wu
On Tue, Aug 9, 2011 at 4:57 PM, Ram Pai  wrote:
> On Tue, Aug 09, 2011 at 12:17:51PM +0800, Zhi Yong Wu wrote:
>> Note:
>>       1.) When bps/iops limits are specified to a small value such as 511 
>> bytes/s, this VM will hang up. We are considering how to handle this scenario.
>>       2.) When the "dd" command is issued in the guest, if its option bs is set to a 
>> large value such as "bs=1024K", the resulting speed will be slightly bigger than 
>> the limits.
>>
>> For these problems, if you have nice thought, pls let us know.:)
>>
>> Signed-off-by: Zhi Yong Wu 
>> ---
>>  block.c     |  347 
>> +--
>>  block.h     |    6 +-
>>  block_int.h |   30 +
>>  3 files changed, 372 insertions(+), 11 deletions(-)
>>
>> diff --git a/block.c b/block.c
>> index 24a25d5..8fd6643 100644
>> --- a/block.c
>> +++ b/block.c
>> @@ -29,6 +29,9 @@
>>  #include "module.h"
>>  #include "qemu-objects.h"
>>
>> +#include "qemu-timer.h"
>> +#include "block/blk-queue.h"
>> +
>>  #ifdef CONFIG_BSD
>>  #include 
>>  #include 
>> @@ -58,6 +61,13 @@ static int bdrv_read_em(BlockDriverState *bs, int64_t 
>> sector_num,
>>  static int bdrv_write_em(BlockDriverState *bs, int64_t sector_num,
>>                           const uint8_t *buf, int nb_sectors);
>>
>> +static bool bdrv_exceed_bps_limits(BlockDriverState *bs, int nb_sectors,
>> +        bool is_write, double elapsed_time, uint64_t *wait);
>> +static bool bdrv_exceed_iops_limits(BlockDriverState *bs, bool is_write,
>> +        double elapsed_time, uint64_t *wait);
>> +static bool bdrv_exceed_io_limits(BlockDriverState *bs, int nb_sectors,
>> +        bool is_write, uint64_t *wait);
>> +
>>  static QTAILQ_HEAD(, BlockDriverState) bdrv_states =
>>      QTAILQ_HEAD_INITIALIZER(bdrv_states);
>>
>> @@ -90,6 +100,68 @@ int is_windows_drive(const char *filename)
>>  }
>>  #endif
>>
>> +/* throttling disk I/O limits */
>> +void bdrv_io_limits_disable(BlockDriverState *bs)
>> +{
>> +    bs->io_limits_enabled = false;
>> +    bs->req_from_queue    = false;
>> +
>> +    if (bs->block_queue) {
>> +        qemu_block_queue_flush(bs->block_queue);
>> +        qemu_del_block_queue(bs->block_queue);
>> +    }
>> +
>> +    if (bs->block_timer) {
>> +        qemu_del_timer(bs->block_timer);
>> +        qemu_free_timer(bs->block_timer);
>> +    }
>> +
>> +    bs->slice_start[0]   = 0;
>> +    bs->slice_start[1]   = 0;
>> +
>> +    bs->slice_end[0]     = 0;
>> +    bs->slice_end[1]     = 0;
>> +}
>> +
>> +static void bdrv_block_timer(void *opaque)
>> +{
>> +    BlockDriverState *bs = opaque;
>> +    BlockQueue *queue = bs->block_queue;
>> +
>> +    qemu_block_queue_flush(queue);
>> +}
>> +
>> +void bdrv_io_limits_enable(BlockDriverState *bs)
>> +{
>> +    bs->req_from_queue = false;
>> +
>> +    bs->block_queue    = qemu_new_block_queue();
>> +    bs->block_timer    = qemu_new_timer_ns(vm_clock, bdrv_block_timer, bs);
>> +
>> +    bs->slice_start[BLOCK_IO_LIMIT_READ]  = qemu_get_clock_ns(vm_clock);
>> +    bs->slice_start[BLOCK_IO_LIMIT_WRITE] = qemu_get_clock_ns(vm_clock);
>
> a minor comment: better to keep the slice_start of both the READ and WRITE
> sides the same.
>
>    bs->slice_start[BLOCK_IO_LIMIT_WRITE] = 
> bs->slice_start[BLOCK_IO_LIMIT_READ];
>
> saves  a call to qemu_get_clock_ns().
>
>> +
>> +    bs->slice_end[BLOCK_IO_LIMIT_READ]    =
>> +                      qemu_get_clock_ns(vm_clock) + BLOCK_IO_SLICE_TIME;
>
>    bs->slice_end[BLOCK_IO_LIMIT_READ] = bs->slice_start[BLOCK_IO_LIMIT_READ] +
>                        BLOCK_IO_SLICE_TIME;
>
> saves one more call to qemu_get_clock_ns()
>
>> +    bs->slice_end[BLOCK_IO_LIMIT_WRITE]   =
>> +                      qemu_get_clock_ns(vm_clock) + BLOCK_IO_SLICE_TIME;
>
>
>    bs->slice_end[BLOCK_IO_LIMIT_WRITE] = 
> bs->slice_start[BLOCK_IO_LIMIT_WRITE] +
>                        BLOCK_IO_SLICE_TIME;
>
> yet another call saving.
OK. thanks.
>
>
>> +}
>> +
>> +bool bdrv_io_limits_enabled(BlockDriverState *bs)
>> +{
>> +    BlockIOLimit *io_limits = &bs->io_limits;
>> +    if ((io_limits->bps[BLOCK_IO_LIMIT_READ] == 0)
>> +         && (io_limits->bps[BLOCK_IO_LIMIT_WRITE] == 0)
>> +         && (io_limits->bps[BLOCK_IO_LIMIT_TOTAL] == 0)
>> +         && (io_limits->iops[BLOCK_IO_LIMIT_READ] == 0)
>> +         && (io_limits->iops[BLOCK_IO_LIMIT_WRITE] == 0)
>> +         && (io_limits->iops[BLOCK_IO_LIMIT_TOTAL] == 0)) {
>> +        return false;
>> +    }
>> +
>> +    return true;
>> +}
>
> can be optimized to:
>
>        return (io_limits->bps[BLOCK_IO_LIMIT_READ]
>                || io_limits->bps[BLOCK_IO_LIMIT_WRITE]
>                || io_limits->bps[BLOCK_IO_LIMIT_TOTAL]
>                || io_limits->iops[BLOCK_IO_LIMIT_READ]
>                || io_limits->iops[BLOCK_IO_LIMIT_WRITE]
>                || io_limits->iops[BLOCK_IO_LIMIT_TOTAL]);
OK. thanks.
>
>> +
>>  /* check if the path starts with ":" */
>>  static int path_has_protocol(const char *path)
>>  {
>> @@ -642,6 +714,11 @@ int bdrv

Re: [Qemu-devel] [ANNOUNCE] QEMU 0.15.0 Release

2011-08-09 Thread Avi Kivity

On 08/08/2011 10:16 PM, Anthony Liguori wrote:
> Hi,
>
> On behalf of the entire QEMU team, I'm pleased to announce the release
> of QEMU 0.15.0.  This is the first release of the 0.15 branch and is
> intended for production usage.
>
> The 0.15.0 release is the product of almost 150 contributors with
> 1,800 changes, with almost 100k lines of code added.

Tag and push, please.

--
error compiling committee.c: too many arguments to function




[Qemu-devel] KVM call agenda for August 8

2011-08-09 Thread Juan Quintela

Hi

Please send in any agenda items you are interested in covering.

Later, Juan.

PD.  I was on vacation, and I forget to send the call for agenda
before.  Sorry.



Re: [Qemu-devel] [PATCH v1.1 09/24] omap_gpmc/nseries/tusb6010: convert to memory API

2011-08-09 Thread Peter Maydell
On 9 August 2011 10:02, Avi Kivity  wrote:
> +static const MemoryRegionOps omap_gpmc_ops = {
> +    .read = omap_gpmc_read,
> +    .write = omap_gpmc_write,
> +    .endianness = DEVICE_NATIVE_ENDIAN,
> +    .valid = {
> +        .min_access_size = 4,
> +        .max_access_size = 4,
> +    },
>  };

Does this give debug trace output for missized accesses?
That's the main point of the badread/badwrite functions...

-- PMM



Re: [Qemu-devel] [PATCH v1.1 09/24] omap_gpmc/nseries/tusb6010: convert to memory API

2011-08-09 Thread Avi Kivity

On 08/09/2011 12:23 PM, Peter Maydell wrote:
> On 9 August 2011 10:02, Avi Kivity wrote:
>> +static const MemoryRegionOps omap_gpmc_ops = {
>> +    .read = omap_gpmc_read,
>> +    .write = omap_gpmc_write,
>> +    .endianness = DEVICE_NATIVE_ENDIAN,
>> +    .valid = {
>> +        .min_access_size = 4,
>> +        .max_access_size = 4,
>> +    },
>> };
>
> Does this give debug trace output for missized accesses?

Only after we implement and configure a debug policy in memory.c.

> That's the main point of the badread/badwrite functions...

These are broken now, aren't they?

--
error compiling committee.c: too many arguments to function




Re: [Qemu-devel] [PATCH v1.1 09/24] omap_gpmc/nseries/tusb6010: convert to memory API

2011-08-09 Thread Peter Maydell
On 9 August 2011 10:26, Avi Kivity  wrote:
> On 08/09/2011 12:23 PM, Peter Maydell wrote:
>>
>> On 9 August 2011 10:02, Avi Kivity  wrote:
>> >  +static const MemoryRegionOps omap_gpmc_ops = {
>> >  +    .read = omap_gpmc_read,
>> >  +    .write = omap_gpmc_write,
>> >  +    .endianness = DEVICE_NATIVE_ENDIAN,
>> >  +    .valid = {
>> >  +        .min_access_size = 4,
>> >  +        .max_access_size = 4,
>> >  +    },
>> >    };
>>
>> Does this give debug trace output for missized accesses?
>
> Only after we implement and configure a debug policy in memory.c
>
>> That's the main point of the badread/badwrite functions...
>
> These are broken now, aren't they?

They successfully print a message, which as I say is the
main point. The OMAP TRM typically says the behaviour isn't
defined for wrong-size accesses, so guest code doing it is
broken anyhow and so the error in the current implementation
is IMHO not so serious.

I think if (as here) it turns out that we need more functionality
in memory.c then we should be implementing it, not silently
dropping things in the conversion process.

-- PMM



Re: [Qemu-devel] Safely reopening image files by stashing fds

2011-08-09 Thread supriya kannery

Kevin Wolf wrote:
> Am 08.08.2011 09:02, schrieb Supriya Kannery:
>> On 08/05/2011 09:19 PM, Anthony Liguori wrote:
>>> On 08/05/2011 10:43 AM, Kevin Wolf wrote:
>>>> Am 05.08.2011 17:24, schrieb Stefan Hajnoczi:
>>>>> On Fri, Aug 5, 2011 at 3:28 PM, Christoph Hellwig wrote:
>>>>>> On Fri, Aug 05, 2011 at 02:12:48PM +0100, Daniel P. Berrange wrote:
>>>>>>>> Because you cannot change O_DIRECT on an open fd :(. This is why
>>>>>>>> we're going through this pain.
>>>>>>> Hmm, I remember hearing that before, but looking at the current
>>>>>>> fcntl() manpage, it claims you *can* change O_DIRECT using SET_FL.
>>>>>>> Perhaps this is a newish feature, but it'd be nicer to use it if
>>>>>>> possible?
>>>>>> It's been there since day 1 of O_DIRECT support.
>>>>> Sorry, my bad. So for Linux we could just use fcntl for
>>>>> block_set_hostcache and not bother with reopening. However, we will
>>>>> need to reopen should we wish to support changing O_DSYNC.
>>>> We do wish to support that.
>>>>
>>>> Anthony thinks that allowing the guest to toggle WCE is a prerequisite
>>>> for making cache=writeback the default. And this is something that I
>>>> definitely want to do for 1.0.
>>> Indeed.
>>
>> We discussed the following so far...
>> 1. How to safely reopen image files
>> 2. Dynamic hostcache change
>> 3. Support for dynamic change of O_DSYNC
>>
>> Since 2 is independent of 1, shall I go ahead implementing
>> hostcache change using fcntl?
>>
>> Implementation for safely reopening image files using "BDRVReopenState"
>> can be done separately as a pre-requisite before implementing 3.
>
> Doing it separately means that we would introduce yet another callback
> that is used just to change O_DIRECT. In the end we want it to use
> bdrv_reopen(), too, so I'm not sure if there is a need for a temporary
> solution.

Could you please explain "In the end we want it to use bdrv_reopen" a
bit more. When fcntl() can change O_DIRECT on an open fd, is there a
need to "re-open" the image file?

Considering the current way of having separate high-level commands for
changing block parameters (block_set_hostcache, and maybe block_set_flush
in future), these dynamic requests will be sequential. So wouldn't it be
better to avoid re-opening the image, if possible, for each individual
flag change request that comes in?

> Actually, once we know what we really want (I haven't seen many comments
> on the BDRVReopenState suggestion yet), it should be pretty easy to
> implement.
>
> Kevin

Will work on an RFC patch with this proposed BDRVReopenState to get more
inputs.



Re: [Qemu-devel] Safely reopening image files by stashing fds

2011-08-09 Thread Kevin Wolf
Am 09.08.2011 11:22, schrieb supriya kannery:
> Kevin Wolf wrote:
>> Am 08.08.2011 09:02, schrieb Supriya Kannery:
>>   
>>> On 08/05/2011 09:19 PM, Anthony Liguori wrote:
>>> 
 On 08/05/2011 10:43 AM, Kevin Wolf wrote:
   
> Am 05.08.2011 17:24, schrieb Stefan Hajnoczi:
> 
>> On Fri, Aug 5, 2011 at 3:28 PM, Christoph Hellwig wrote:
>>   
>>> On Fri, Aug 05, 2011 at 02:12:48PM +0100, Daniel P. Berrange wrote:
>>> 
> Because you cannot change O_DIRECT on an open fd :(. This is why
> we're going through this pain.
> 
 Hmm, I remember hearing that before, but looking at the current
 fcntl()
 manpage, it claims you *can* change O_DIRECT using SET_FL. Perhaps
 this
 is a newish feature, but it'd be nicer to use it if possible ?
   
>>> It's been there since day 1 of O_DIRECT support.
>>> 
>> Sorry, my bad. So for Linux we could just use fcntl for
>> block_set_hostcache and not bother with reopening. However, we will
>> need to reopen should we wish to support changing O_DSYNC.
>>   
> We do wish to support that.
>
> Anthony thinks that allowing the guest to toggle WCE is a prerequisite
> for making cache=writeback the default. And this is something that I
> definitely want to do for 1.0.
> 
 Indeed.

   
>>> We discussed the following so far...
>>> 1. How to safely reopen image files
>>> 2. Dynamic hostcache change
>>> 3. Support for dynamic change of O_DSYNC
>>>
>>> Since 2 is independent of 1, shall I go ahead implementing
>>> hostcache change using fcntl.
>>>
>>> Implementation for safely reopening image files using "BDRVReopenState"
>>> can be done separately as a pre-requisite before implementing 3
>>> 
>>
>> Doing it separately means that we would introduce yet another callback
>> that is used just to change O_DIRECT. In the end we want it to use
>> bdrv_reopen(), too, so I'm not sure if there is a need for a temporary
>> solution.
>>
>>   
> Could you please explain "In the end we want it to use bdrv_reopen" at 
> bit more.
> When fcntl() can change O_DIRECT on  open fd , is there a  need to 
> "re-open"
> the image file?

What I meant is that in the end, with a generic bdrv_reopen(), we can
have raw-posix only call dup() and fcntl() instead of doing a
close()/open() sequence if it can satisfy the new flags this way. But
this would be an implementation detail and not be visible in the interface.

Kevin



Re: [Qemu-devel] Safely reopening image files by stashing fds

2011-08-09 Thread supriya kannery

Kevin Wolf wrote:
> Am 09.08.2011 11:22, schrieb supriya kannery:
>> Could you please explain "In the end we want it to use bdrv_reopen" a
>> bit more. When fcntl() can change O_DIRECT on an open fd, is there a
>> need to "re-open" the image file?
>
> What I meant is that in the end, with a generic bdrv_reopen(), we can
> have raw-posix only call dup() and fcntl() instead of doing a
> close()/open() sequence if it can satisfy the new flags this way. But
> this would be an implementation detail and not be visible in the interface.
>
> Kevin

ok
- thanks, Supriya



Re: [Qemu-devel] [PATCH v4 14/39] ac97: convert to memory API

2011-08-09 Thread malc
On Mon, 8 Aug 2011, Avi Kivity wrote:

> fixes BAR sizing as well.

I'm fine with this version, thanks.

[..snip..]

-- 
mailto:av1...@comtv.ru



Re: [Qemu-devel] [PATCH] Introduce short names for fixed width integer types

2011-08-09 Thread malc
On Mon, 8 Aug 2011, Peter Maydell wrote:

> On 8 August 2011 13:56, Avi Kivity  wrote:
> > QEMU deals with a lot of fixed width integer types; their names
> > (uint64_t etc) are clumsy to use and take up a lot of space.
> >
> > Following Linux, introduce shorter names, for example U64 for
> > uint64_t.
> 
> Strongly disagree. uint64_t &c are standard types and it's
> immediately clear to a competent C programmer what they are.
> Random qemu-specific funny named types just introduces an
> unnecessary level of indirection.
> 
> We only just recently managed to get rid of the nonstandard
> typenames for these from fpu/...
> 

Seconded.

-- 
mailto:av1...@comtv.ru



Re: [Qemu-devel] [PATCH v4 17/39] es1370: convert to memory API

2011-08-09 Thread malc
On Mon, 8 Aug 2011, Avi Kivity wrote:

Fine with this too, thanks.

[..snip..]

-- 
mailto:av1...@comtv.ru



[Qemu-devel] [PATCH] qxl: allowing the command rings to be not empty when spice worker is stopped RHBZ #728984

2011-08-09 Thread Yonit Halperin
Same as 8927cfbba232e28304734f7afd463c1b84134031, but for qxl_check_state, which 
was triggered by qxl_pre_load (which calls qxl_hard_reset, which calls 
qxl_soft_reset), and caused the migration target to crash.
---
 hw/qxl.c |8 +++-
 1 files changed, 3 insertions(+), 5 deletions(-)

diff --git a/hw/qxl.c b/hw/qxl.c
index db7ae7a..7991e70 100644
--- a/hw/qxl.c
+++ b/hw/qxl.c
@@ -821,17 +821,15 @@ static void qxl_check_state(PCIQXLDevice *d)
 {
 QXLRam *ram = d->ram;
 
-assert(SPICE_RING_IS_EMPTY(&ram->cmd_ring));
-assert(SPICE_RING_IS_EMPTY(&ram->cursor_ring));
+assert(!d->ssd.running || SPICE_RING_IS_EMPTY(&ram->cmd_ring));
+assert(!d->ssd.running || SPICE_RING_IS_EMPTY(&ram->cursor_ring));
 }
 
 static void qxl_reset_state(PCIQXLDevice *d)
 {
-QXLRam *ram = d->ram;
 QXLRom *rom = d->rom;
 
-assert(!d->ssd.running || SPICE_RING_IS_EMPTY(&ram->cmd_ring));
-assert(!d->ssd.running || SPICE_RING_IS_EMPTY(&ram->cursor_ring));
+qxl_check_state(d);
 d->shadow_rom.update_id = cpu_to_le32(0);
 *rom = d->shadow_rom;
 qxl_rom_set_dirty(d);
-- 
1.7.4.4




Re: [Qemu-devel] [PATCH v5 1/4] block: add the command line support

2011-08-09 Thread Stefan Hajnoczi
On Tue, Aug 9, 2011 at 5:17 AM, Zhi Yong Wu  wrote:
> Signed-off-by: Zhi Yong Wu 
> ---
>  Makefile.objs   |    2 +-
>  blockdev.c      |   39 +++
>  qemu-config.c   |   24 
>  qemu-option.c   |   17 +
>  qemu-option.h   |    1 +
>  qemu-options.hx |    1 +
>  6 files changed, 83 insertions(+), 1 deletions(-)
>
> diff --git a/Makefile.objs b/Makefile.objs
> index 9f99ed4..06f2033 100644
> --- a/Makefile.objs
> +++ b/Makefile.objs
> @@ -23,7 +23,7 @@ block-nested-y += raw.o cow.o qcow.o vdi.o vmdk.o cloop.o 
> dmg.o bochs.o vpc.o vv
>  block-nested-y += qcow2.o qcow2-refcount.o qcow2-cluster.o qcow2-snapshot.o 
> qcow2-cache.o
>  block-nested-y += qed.o qed-gencb.o qed-l2-cache.o qed-table.o qed-cluster.o
>  block-nested-y += qed-check.o
> -block-nested-y += parallels.o nbd.o blkdebug.o sheepdog.o blkverify.o
> +block-nested-y += parallels.o nbd.o blkdebug.o sheepdog.o blkverify.o 
> blk-queue.o

This does not build:
  LINK  qemu-ga
gcc: error: block/blk-queue.o: No such file or directory

This Makefile.objs change should be in the commit that adds blk-queue.c.

Each patch in a series should compile cleanly and can only depend on
previous patches.  This is important so that git-bisect(1) can be
used, it only works if every commit builds a working program.  It also
makes patch review easier when the patch series builds up logically.

>  block-nested-$(CONFIG_WIN32) += raw-win32.o
>  block-nested-$(CONFIG_POSIX) += raw-posix.o
>  block-nested-$(CONFIG_CURL) += curl.o
> diff --git a/blockdev.c b/blockdev.c
> index c263663..9c78548 100644
> --- a/blockdev.c
> +++ b/blockdev.c
> @@ -238,6 +238,10 @@ DriveInfo *drive_init(QemuOpts *opts, int 
> default_to_scsi)
>     int on_read_error, on_write_error;
>     const char *devaddr;
>     DriveInfo *dinfo;
> +    BlockIOLimit io_limits;

This structure is not defined at this point in the patch series.

> +    bool iol_flag = false;
> +    const char *iol_opts[7] = {"bps", "bps_rd", "bps_wr",
> +                                "iops", "iops_rd", "iops_wr"};
>     int is_extboot = 0;
>     int snapshot = 0;
>     int ret;
> @@ -372,6 +376,36 @@ DriveInfo *drive_init(QemuOpts *opts, int 
> default_to_scsi)
>         return NULL;
>     }
>
> +    /* disk io throttling */
> +    iol_flag = qemu_opt_io_limits_enable_flag(opts, iol_opts);
> +    if (iol_flag) {
> +        memset(&io_limits, 0, sizeof(BlockIOLimit));
> +
> +        io_limits.bps[BLOCK_IO_LIMIT_TOTAL]  =
> +                           qemu_opt_get_number(opts, "bps", 0);
> +        io_limits.bps[BLOCK_IO_LIMIT_READ]   =
> +                           qemu_opt_get_number(opts, "bps_rd", 0);
> +        io_limits.bps[BLOCK_IO_LIMIT_WRITE]  =
> +                           qemu_opt_get_number(opts, "bps_wr", 0);
> +        io_limits.iops[BLOCK_IO_LIMIT_TOTAL] =
> +                           qemu_opt_get_number(opts, "iops", 0);
> +        io_limits.iops[BLOCK_IO_LIMIT_READ]  =
> +                           qemu_opt_get_number(opts, "iops_rd", 0);
> +        io_limits.iops[BLOCK_IO_LIMIT_WRITE] =
> +                           qemu_opt_get_number(opts, "iops_wr", 0);
> +
> +        if (((io_limits.bps[BLOCK_IO_LIMIT_TOTAL] != 0)
> +            && ((io_limits.bps[BLOCK_IO_LIMIT_READ] != 0)
> +            || (io_limits.bps[BLOCK_IO_LIMIT_WRITE] != 0)))
> +            || ((io_limits.iops[BLOCK_IO_LIMIT_TOTAL] != 0)
> +            && ((io_limits.iops[BLOCK_IO_LIMIT_READ] != 0)
> +            || (io_limits.iops[BLOCK_IO_LIMIT_WRITE] != 0)))) {
> +            error_report("bps(iops) and bps_rd/bps_wr(iops_rd/iops_wr) \
> +                                        cannot be used at the same time");
> +            return NULL;
> +        }
> +    }
> +
>     on_write_error = BLOCK_ERR_STOP_ENOSPC;
>     if ((buf = qemu_opt_get(opts, "werror")) != NULL) {
>         if (type != IF_IDE && type != IF_SCSI && type != IF_VIRTIO && type != 
> IF_NONE) {
> @@ -483,6 +517,11 @@ DriveInfo *drive_init(QemuOpts *opts, int 
> default_to_scsi)
>
>     bdrv_set_on_error(dinfo->bdrv, on_read_error, on_write_error);
>
> +    /* disk I/O throttling */
> +    if (iol_flag) {
> +        bdrv_set_io_limits(dinfo->bdrv, &io_limits);
> +    }

iol_flag and qemu_opt_io_limits_enable_flag() are not necessary.  If
no limits were set then all fields will be 0 (unlimited).

Stefan



Re: [Qemu-devel] [ANNOUNCE] QEMU 0.15.0 Release

2011-08-09 Thread Anthony Liguori

On 08/09/2011 04:11 AM, Avi Kivity wrote:
> On 08/08/2011 10:16 PM, Anthony Liguori wrote:
>> Hi,
>>
>> On behalf of the entire QEMU team, I'm pleased to announce the release
>> of QEMU 0.15.0. This is the first release of the 0.15 branch and is
>> intended for production usage.
>>
>> The 0.15.0 release is the product of almost 150 contributors with
>> 1,800 changes, with almost 100k lines of code added.
>
> Tag and push, please.

D'oh!  Pushed now.

Regards,

Anthony Liguori








Re: [Qemu-devel] Safely reopening image files by stashing fds

2011-08-09 Thread Stefan Hajnoczi
On Tue, Aug 9, 2011 at 11:35 AM, Kevin Wolf  wrote:
> Am 09.08.2011 12:25, schrieb Stefan Hajnoczi:
>> On Mon, Aug 8, 2011 at 4:16 PM, Kevin Wolf  wrote:
>>> Am 08.08.2011 16:49, schrieb Stefan Hajnoczi:
 On Fri, Aug 5, 2011 at 10:48 AM, Kevin Wolf  wrote:
> Am 05.08.2011 11:29, schrieb Stefan Hajnoczi:
>> On Fri, Aug 5, 2011 at 10:07 AM, Kevin Wolf  wrote:
>>> Am 05.08.2011 10:40, schrieb Stefan Hajnoczi:
 We've discussed safe methods for reopening image files (e.g. useful for
 changing the hostcache parameter).  The problem is that closing the 
 file first
 and then opening it again exposes us to the error case where the open 
 fails.
 At that point we cannot get to the file anymore and our options are to
 terminate QEMU, pause the VM, or offline the block device.

 This window of vulnerability can be eliminated by keeping the file 
 descriptor
 around and falling back to it should the open fail.

 The challenge for the file descriptor approach is that image formats, 
 like
 VMDK, can span multiple files.  Therefore the solution is not as 
 simple as
 stashing a single file descriptor and reopening from it.
>>>
>>> So far I agree. The rest I believe is wrong because you can't assume
>>> that every backend uses file descriptors. The qemu block layer is based
>>> on BlockDriverStates, not fds. They are a concept that should be hidden
>>> in raw-posix.
>>>
>>> I think something like this could do:
>>>
>>> struct BDRVReopenState {
>>>    BlockDriverState *bs;
>>>    /* can be extended by block drivers */
>>> };
>>>
>>> .bdrv_reopen(BlockDriverState *bs, BDRVReopenState **reopen_state, int
>>> flags);
>>> .bdrv_reopen_commit(BDRVReopenState *reopen_state);
>>> .bdrv_reopen_abort(BDRVReopenState *reopen_state);
>>>
>>> raw-posix would store the old file descriptor in its reopen_state. On
>>> commit, it closes the old descriptors, on abort it reverts to the old
>>> one and closes the newly opened one.
>>>
>>> Makes things a bit more complicated than the simple bdrv_reopen I had in
>>> mind before, but it allows VMDK to get an all-or-nothing semantics.
>>
>> Can you show how bdrv_reopen() would use these new interfaces?  I'm
>> not 100% clear on the idea.
>
> Well, you wouldn't only call bdrv_reopen, but also either
> bdrv_reopen_commit/abort (for the top-level caller we can have a wrapper
> function that does both, but that's syntactic sugar).
>
> For example we would have:
>
> int vmdk_reopen()

 .bdrv_reopen() is a confusing name for this operation because it does
 not reopen anything.  bdrv_prepare_reopen() might be clearer.
>>>
>>> Makes sense.
>>>

> {
>    *((VMDKReopenState**) rs) = malloc();
>
>    foreach (extent in s->extents) {
>        ret = bdrv_reopen(extent->file, &extent->reopen_state)
>        if (ret < 0)
>            goto fail;
>    }
>    return 0;
>
> fail:
>    foreach (extent in rs->already_reopened) {
>        bdrv_reopen_abort(extent->reopen_state);
>    }
>    return ret;
> }

> void vmdk_reopen_commit()
> {
>    foreach (extent in s->extents) {
>        bdrv_reopen_commit(extent->reopen_state);
>    }
>    free(rs);
> }
>
> void vmdk_reopen_abort()
> {
>    foreach (extent in s->extents) {
>        bdrv_reopen_abort(extent->reopen_state);
>    }
>    free(rs);
> }

 Does the caller invoke bdrv_close(bs) after bdrv_prepare_reopen(bs,
 &rs)?
>>>
>>> No. Closing the old backend would be part of bdrv_reopen_commit.
>>>
>>> Do you have a use case where it would be helpful if the caller invoked
>>> bdrv_close?
>>
>> When the caller does bdrv_close() two BlockDriverStates are never open
>> for the same image file.  I thought this was a property we wanted.
>>
>> Also, in the block_set_hostcache case we need to reopen without
>> switching to a new BlockDriverState instance.  That means the reopen
>> needs to be in-place with respect to the BlockDriverState *bs pointer.
>>  We cannot create a new instance.
>
> Yes, but where do you even get the second BlockDriverState from?
>
> My prototype only returns an int, not a new BlockDriverState. Until
> bdrv_reopen_commit() it would refer to the old file descriptors etc. and
> after bdrv_reopen_commit() the very same BlockDriverState would refer to
> the new ones.

It seems I don't understand the API.  I thought it was:

do_block_set_hostcache()
{
bdrv_prepare_reopen(bs, &rs);
...open new file and check everything is okay...
if (ret == 0) {
bdrv_reopen_commit(rs);
} else {
bdrv_reopen_abort(rs);
}
return ret;
}

If the caller isn't opening the new file then what's the point?

[Qemu-devel] Regarding guest physical addresses for multithreaded program

2011-08-09 Thread Amit Roy
Hi,

I have recently started using Qemu.


I have a very specific requirement which has been discussed previously at 
length. I want to capture the guest physical address trace of multithreaded 
programs (x86_64 arch). After going through the archives, I have been able to 
record the guest virtual memory trace (by inserting hooks in i386/translate.c 
). However, I was wondering if there is an easy way to convert these guest 
virtual addresses to the guest physical address.

I understand that I have to modify the code somewhere between the translations 
guest_virtual->guest_physical->host_virtual. However, as far as I could understand, 
the guest_virtual->guest_physical step is often bypassed if the base address is already 
present.

Can you please suggest ways to retrieve the guest physical addresses of all the 
memory accesses? For example, force  Qemu to do the above translation all the 
time?

On a similar vein, I was wondering what is the use of cpu_get_phys_page_debug() 
function in  cpu-all.h ? Can this be used for my purpose?

Any pointer would be greatly appreciated.

 

Thanks & Regards  
Amit 


Re: [Qemu-devel] KVM call agenda for August 8

2011-08-09 Thread Anthony Liguori

On 08/09/2011 04:21 AM, Juan Quintela wrote:


Hi

Please send in any agenda items you are interested in covering.


I think we can skip since many people will be in Vancouver next week and 
we'll have plenty to talk about then.


Just a couple reminders, for KVM Forum speakers, please post your slides 
to http://www.linux-kvm.org/page/KVM_Forum_2011


Also, we'll be hosting a KVM Hackathon on Thursday from 9am-2pm.  We'll 
announce the room information at KVM Forum.


For everyone travelling, have a safe flight!

Regards,

Anthony Liguori



Later, Juan.

PD.  I was on vacation, and I forgot to send the call for agenda
before.  Sorry.






Re: [Qemu-devel] [Spice-devel] [PATCH] qxl: allowing the command rings to be not empty when spice worker is stopped RHBZ #728984

2011-08-09 Thread Arnon Gilboa

Acked-by: Arnon Gilboa 

Yonit Halperin wrote:

same as 8927cfbba232e28304734f7afd463c1b84134031, but for qxl_check_state, that 
was
triggered by qxl_pre_load (which calls qxl_hard_reset, which calls 
qxl_soft_reset),
and caused the migration target to crash.
---
 hw/qxl.c |8 +++-
 1 files changed, 3 insertions(+), 5 deletions(-)

diff --git a/hw/qxl.c b/hw/qxl.c
index db7ae7a..7991e70 100644
--- a/hw/qxl.c
+++ b/hw/qxl.c
@@ -821,17 +821,15 @@ static void qxl_check_state(PCIQXLDevice *d)
 {
 QXLRam *ram = d->ram;
 
-assert(SPICE_RING_IS_EMPTY(&ram->cmd_ring));

-assert(SPICE_RING_IS_EMPTY(&ram->cursor_ring));
+assert(!d->ssd.running || SPICE_RING_IS_EMPTY(&ram->cmd_ring));
+assert(!d->ssd.running || SPICE_RING_IS_EMPTY(&ram->cursor_ring));
 }
 
 static void qxl_reset_state(PCIQXLDevice *d)

 {
-QXLRam *ram = d->ram;
 QXLRom *rom = d->rom;
 
-assert(!d->ssd.running || SPICE_RING_IS_EMPTY(&ram->cmd_ring));

-assert(!d->ssd.running || SPICE_RING_IS_EMPTY(&ram->cursor_ring));
+qxl_check_state(d);
 d->shadow_rom.update_id = cpu_to_le32(0);
 *rom = d->shadow_rom;
 qxl_rom_set_dirty(d);
  





[Qemu-devel] [PATCH v1.2 09/24] omap_gpmc/nseries/tusb6010: convert to memory API

2011-08-09 Thread Avi Kivity
Somewhat clumsy since it needs a variable sized region.

Signed-off-by: Avi Kivity 
---
v1.2: drop v1.1 changes, instead forward invalid size accesses to original
  omap_badwidth_*() functions

v1.1: set access size validity for omap gpmc region to allow only 32-bit
  accesses

 hw/omap.h  |3 +-
 hw/omap_gpmc.c |   60 +--
 hw/tusb6010.c  |   30 ---
 hw/tusb6010.h  |7 -
 4 files changed, 56 insertions(+), 44 deletions(-)

diff --git a/hw/omap.h b/hw/omap.h
index a064353..c2fe54c 100644
--- a/hw/omap.h
+++ b/hw/omap.h
@@ -17,6 +17,7 @@
  * with this program; if not, see .
  */
 #ifndef hw_omap_h
+#include "memory.h"
 # define hw_omap_h "omap.h"
 
 # define OMAP_EMIFS_BASE   0x
@@ -119,7 +120,7 @@ void omap_sdrc_reset(struct omap_sdrc_s *s);
 struct omap_gpmc_s;
 struct omap_gpmc_s *omap_gpmc_init(target_phys_addr_t base, qemu_irq irq);
 void omap_gpmc_reset(struct omap_gpmc_s *s);
-void omap_gpmc_attach(struct omap_gpmc_s *s, int cs, int iomemtype,
+void omap_gpmc_attach(struct omap_gpmc_s *s, int cs, MemoryRegion *iomem,
 void (*base_upd)(void *opaque, target_phys_addr_t new),
 void (*unmap)(void *opaque), void *opaque);
 
diff --git a/hw/omap_gpmc.c b/hw/omap_gpmc.c
index 8bf3343..e137e4a 100644
--- a/hw/omap_gpmc.c
+++ b/hw/omap_gpmc.c
@@ -21,10 +21,13 @@
 #include "hw.h"
 #include "flash.h"
 #include "omap.h"
+#include "memory.h"
+#include "exec-memory.h"
 
 /* General-Purpose Memory Controller */
 struct omap_gpmc_s {
 qemu_irq irq;
+MemoryRegion iomem;
 
 uint8_t sysconfig;
 uint16_t irqst;
@@ -39,7 +42,8 @@ struct omap_gpmc_s {
 uint32_t config[7];
 target_phys_addr_t base;
 size_t size;
-int iomemtype;
+MemoryRegion *iomem;
+MemoryRegion container;
 void (*base_update)(void *opaque, target_phys_addr_t new);
 void (*unmap)(void *opaque);
 void *opaque;
@@ -75,8 +79,12 @@ static void omap_gpmc_cs_map(struct omap_gpmc_cs_file_s *f, 
int base, int mask)
  * constant), the mask should cause wrapping of the address space, so
  * that the same memory becomes accessible at every size bytes
  * starting from base.  */
-if (f->iomemtype)
-cpu_register_physical_memory(f->base, f->size, f->iomemtype);
+if (f->iomem) {
+memory_region_init(&f->container, "omap-gpmc-file", f->size);
+memory_region_add_subregion(&f->container, 0, f->iomem);
+memory_region_add_subregion(get_system_memory(), f->base,
+&f->container);
+}
 
 if (f->base_update)
 f->base_update(f->opaque, f->base);
@@ -87,8 +95,11 @@ static void omap_gpmc_cs_unmap(struct omap_gpmc_cs_file_s *f)
 if (f->size) {
 if (f->unmap)
 f->unmap(f->opaque);
-if (f->iomemtype)
-cpu_register_physical_memory(f->base, f->size, IO_MEM_UNASSIGNED);
+if (f->iomem) {
+memory_region_del_subregion(get_system_memory(), &f->container);
+memory_region_del_subregion(&f->container, f->iomem);
+memory_region_destroy(&f->container);
+}
 f->base = 0;
 f->size = 0;
 }
@@ -132,12 +143,17 @@ void omap_gpmc_reset(struct omap_gpmc_s *s)
 ecc_reset(&s->ecc[i]);
 }
 
-static uint32_t omap_gpmc_read(void *opaque, target_phys_addr_t addr)
+static uint64_t omap_gpmc_read(void *opaque, target_phys_addr_t addr,
+   unsigned size)
 {
 struct omap_gpmc_s *s = (struct omap_gpmc_s *) opaque;
 int cs;
 struct omap_gpmc_cs_file_s *f;
 
+if (size != 4) {
+return omap_badwidth_read32(opaque, addr);
+}
+
 switch (addr) {
 case 0x000:/* GPMC_REVISION */
 return 0x20;
@@ -230,12 +246,16 @@ static uint32_t omap_gpmc_read(void *opaque, 
target_phys_addr_t addr)
 }
 
 static void omap_gpmc_write(void *opaque, target_phys_addr_t addr,
-uint32_t value)
+uint64_t value, unsigned size)
 {
 struct omap_gpmc_s *s = (struct omap_gpmc_s *) opaque;
 int cs;
 struct omap_gpmc_cs_file_s *f;
 
+if (size != 4) {
+return omap_badwidth_write32(opaque, addr, value);
+}
+
 switch (addr) {
 case 0x000:/* GPMC_REVISION */
 case 0x014:/* GPMC_SYSSTATUS */
@@ -249,7 +269,7 @@ static void omap_gpmc_write(void *opaque, 
target_phys_addr_t addr,
 
 case 0x010:/* GPMC_SYSCONFIG */
 if ((value >> 3) == 0x3)
-fprintf(stderr, "%s: bad SDRAM idle mode %i\n",
+fprintf(stderr, "%s: bad SDRAM idle mode %"PRIi64"\n",
 __FUNCTION__, value >> 3);
 if (value & 2)
 omap_gpmc_reset(s);
@@ -369,34 +389,26 @@ static void omap_gpmc_write(void *opaque, 
target_phys_addr_t addr,
 }
 }
 

Re: [Qemu-devel] [PATCH 44/55] spitz tosa: Simplify "drive is suitable for microdrive" test

2011-08-09 Thread Markus Armbruster
Kevin Wolf  writes:

> Am 09.08.2011 13:56, schrieb Markus Armbruster:
>> bdrv_is_locked() is about the frontend's media lock.  To make this more
>> obvious, my PATCH 29/55 replaces it by bdrv_dev_is_medium_locked().  It
>> does *not* query the backend's lock (which may not even exist!) set by
>> bdrv_set_locked().
>
> This sounds wrong (the behaviour, not your analysis). Do you plan to
> remove bdrv_dev_is_medium_locked() as well? It is used by IDE and
> scsi-disk (easy to replace) and in eject_device() in blockdev.c. Maybe
> the 'eject' monitor command should be handled by another callback into
> the device.

Just two users remain after my series:

* bdrv_info()

  It wants to show the frontend's lock state, and uses
  bdrv_dev_is_medium_locked() to get it from the frontend.

* eject_device()

  It needs to fail if the frontend has its medium locked.  It uses
  bdrv_dev_is_medium_locked() to get the lock state from the frontend.
  Pseudo code (-f glossed over for simplicity):

  unless frontend has removable media
  fail
  if frontend medium is not already open
 and frontend medium is locked
  fail
  drop the block driver

  I considered replacing this with "ask the frontend to eject", and then
  let the frontend decide whether to order the backend to drop the block
  driver.  I decided against it, since I need
  bdrv_dev_is_medium_locked() anyway for bdrv_info().  Besides, the
  series is long enough already.



Re: [Qemu-devel] [PATCH v1.1 09/24] omap_gpmc/nseries/tusb6010: convert to memory API

2011-08-09 Thread Avi Kivity

On 08/09/2011 12:41 PM, Peter Maydell wrote:

On 9 August 2011 10:26, Avi Kivity  wrote:
>  On 08/09/2011 12:23 PM, Peter Maydell wrote:
>>
>>  On 9 August 2011 10:02, Avi Kivitywrote:
>>  >+static const MemoryRegionOps omap_gpmc_ops = {
>>  >+.read = omap_gpmc_read,
>>  >+.write = omap_gpmc_write,
>>  >+.endianness = DEVICE_NATIVE_ENDIAN,
>>  >+.valid = {
>>  >+.min_access_size = 4,
>>  >+.max_access_size = 4,
>>  >+},
>>  >  };
>>
>>  Does this give debug trace output for missized accesses?
>
>  Only after we implement and configure a debug policy in memory.c
>
>>  That's the main point of the badread/badwrite functions...
>
>  These are broken now, aren't they?

They successfully print a message, which as I say is the
main point. The OMAP TRM typically says the behaviour isn't
defined for wrong-size accesses, so guest code doing it is
broken anyhow and so the error in the current implementation
is IMHO not so serious.

I think if (as here) it turns out that we need more functionality
in memory.c then we should be implementing it, not silently
dropping things in the conversion process.



Okay.  I'll update the patch to forward to the badread/badwrite 
functions on bad size.


--
error compiling committee.c: too many arguments to function




Re: [Qemu-devel] Safely reopening image files by stashing fds

2011-08-09 Thread Stefan Hajnoczi
On Tue, Aug 9, 2011 at 11:50 AM, Stefan Hajnoczi  wrote:
> On Tue, Aug 9, 2011 at 11:35 AM, Kevin Wolf  wrote:
>> Am 09.08.2011 12:25, schrieb Stefan Hajnoczi:
>>> On Mon, Aug 8, 2011 at 4:16 PM, Kevin Wolf  wrote:
 Am 08.08.2011 16:49, schrieb Stefan Hajnoczi:
> On Fri, Aug 5, 2011 at 10:48 AM, Kevin Wolf  wrote:
>> Am 05.08.2011 11:29, schrieb Stefan Hajnoczi:
>>> On Fri, Aug 5, 2011 at 10:07 AM, Kevin Wolf  wrote:
 Am 05.08.2011 10:40, schrieb Stefan Hajnoczi:
> We've discussed safe methods for reopening image files (e.g. useful 
> for
> changing the hostcache parameter).  The problem is that closing the 
> file first
> and then opening it again exposes us to the error case where the open 
> fails.
> At that point we cannot get to the file anymore and our options are to
> terminate QEMU, pause the VM, or offline the block device.
>
> This window of vulnerability can be eliminated by keeping the file 
> descriptor
> around and falling back to it should the open fail.
>
> The challenge for the file descriptor approach is that image formats, 
> like
> VMDK, can span multiple files.  Therefore the solution is not as 
> simple as
> stashing a single file descriptor and reopening from it.

 So far I agree. The rest I believe is wrong because you can't assume
 that every backend uses file descriptors. The qemu block layer is based
 on BlockDriverStates, not fds. They are a concept that should be hidden
 in raw-posix.

 I think something like this could do:

 struct BDRVReopenState {
    BlockDriverState *bs;
    /* can be extended by block drivers */
 };

 .bdrv_reopen(BlockDriverState *bs, BDRVReopenState **reopen_state, int
 flags);
 .bdrv_reopen_commit(BDRVReopenState *reopen_state);
 .bdrv_reopen_abort(BDRVReopenState *reopen_state);

 raw-posix would store the old file descriptor in its reopen_state. On
 commit, it closes the old descriptors, on abort it reverts to the old
 one and closes the newly opened one.

 Makes things a bit more complicated than the simple bdrv_reopen I had 
 in
 mind before, but it allows VMDK to get an all-or-nothing semantics.
>>>
>>> Can you show how bdrv_reopen() would use these new interfaces?  I'm
>>> not 100% clear on the idea.
>>
>> Well, you wouldn't only call bdrv_reopen, but also either
>> bdrv_reopen_commit/abort (for the top-level caller we can have a wrapper
>> function that does both, but that's syntactic sugar).
>>
>> For example we would have:
>>
>> int vmdk_reopen()
>
> .bdrv_reopen() is a confusing name for this operation because it does
> not reopen anything.  bdrv_prepare_reopen() might be clearer.

 Makes sense.

>
>> {
>>    *((VMDKReopenState**) rs) = malloc();
>>
>>    foreach (extent in s->extents) {
>>        ret = bdrv_reopen(extent->file, &extent->reopen_state)
>>        if (ret < 0)
>>            goto fail;
>>    }
>>    return 0;
>>
>> fail:
>>    foreach (extent in rs->already_reopened) {
>>        bdrv_reopen_abort(extent->reopen_state);
>>    }
>>    return ret;
>> }
>
>> void vmdk_reopen_commit()
>> {
>>    foreach (extent in s->extents) {
>>        bdrv_reopen_commit(extent->reopen_state);
>>    }
>>    free(rs);
>> }
>>
>> void vmdk_reopen_abort()
>> {
>>    foreach (extent in s->extents) {
>>        bdrv_reopen_abort(extent->reopen_state);
>>    }
>>    free(rs);
>> }
>
> Does the caller invoke bdrv_close(bs) after bdrv_prepare_reopen(bs,
> &rs)?

 No. Closing the old backend would be part of bdrv_reopen_commit.

 Do you have a use case where it would be helpful if the caller invoked
 bdrv_close?
>>>
>>> When the caller does bdrv_close() two BlockDriverStates are never open
>>> for the same image file.  I thought this was a property we wanted.
>>>
>>> Also, in the block_set_hostcache case we need to reopen without
>>> switching to a new BlockDriverState instance.  That means the reopen
>>> needs to be in-place with respect to the BlockDriverState *bs pointer.
>>>  We cannot create a new instance.
>>
>> Yes, but where do you even get the second BlockDriverState from?
>>
>> My prototype only returns an int, not a new BlockDriverState. Until
>> bdrv_reopen_commit() it would refer to the old file descriptors etc. and
>> after bdrv_reopen_commit() the very same BlockDriverState would refer to
>> the new ones.
>
> It seems I don't understand the API.  I thought it was:
>
> do_block_set_hostcache()
> {
>    bdrv_prepare_reopen(bs, &rs);
>    ...open new file and

Re: [Qemu-devel] [Bug 816370] Re: compile error in QEMU 0.15.0-rc0 and 0.15.0-rc1

2011-08-09 Thread Stefan Hajnoczi
On Thu, Aug 4, 2011 at 4:52 PM, rowa <816...@bugs.launchpad.net> wrote:
> Now it is possible to build qemu 0.15.0-rc1 with --disable-spice option
> on Ubuntu 10.10 „Maverick Meerkat“ .
>
> --
> You received this bug notification because you are a member of qemu-
> devel-ml, which is subscribed to QEMU.
> https://bugs.launchpad.net/bugs/816370
>
> Title:
>  compile error in QEMU 0.15.0-rc0 and  0.15.0-rc1
>
> Status in QEMU:
>  New
>
> Bug description:
>  I've tryed to compile QEMU 0.15.0-rc0 on Ubuntu 10.10 „Maverick
>  Meerkat“ but I get an error (For further details please see http
>  ://qemu-buch.de/d/Installation#Quellen_kompilieren ).
>
>  ./configure --prefix=/usr --enable-spice  
> --audio-card-list=ac97,es1370,sb16,adlib,gus,cs4231a
>  make
>
>    GEN   config-host.h
>    GEN   trace.h
>    GEN   qemu-options.def
>    GEN   qapi-generated/qga-qapi-types.h
>    GEN   qapi-generated/qga-qapi-visit.h
>    GEN   qapi-generated/qga-qmp-marshal.c
>    CC    qapi/qapi-visit-core.o
>  In file included from qapi/qapi-visit-core.c:14:
>  ./qapi/qapi-visit-core.h:31: error: expected declaration specifiers or ‘...’ 
> before ‘Error’

This is an odd error.  I saw the other related mailing list thread.
To debug it:

1. Run the build and let it fail.
2. Find out the command-line to build qapi/qapi-visit-core.o: make V=1
3. Change the gcc command-line that was printed out in step #2:
gcc ... -o qapi/qapi-visit-core.o qapi/qapi-visit-core.c
To this:
gcc ... -o qapi-visit-core.txt -E qapi/qapi-visit-core.c

The -E option outputs the pre-processed source.  It will be written to
qapi-visit-core.txt.  Please post the entire pre-processed source (try
http://pastebin.com/ to avoid sending a huge email).

Stefan



Re: [Qemu-devel] [PATCH v4 26/39] pcnet: convert to memory API

2011-08-09 Thread Avi Kivity

On 08/09/2011 03:48 PM, Michael S. Tsirkin wrote:

>
>  But in some cases, we can't, and the it's a pain having to wrap
>  MemoryRegion in another structure containing an opaque.

I guess, even though that wrapping structure would
use a proper type, not an opaque.


Yes, of course - that's what the first version did.


>  Maybe a good compromise is to:
>
>- keep MemoryRegion::opaque
>- pass a MemoryRegion *mr to callbacks instead of opaque
>- use container_of() when possible
>- use mr->opaque otherwise

Right. This even saves a memory dereference when opaque is
unused.



I'll put this on the TODO (as well as writing the TODO).

--
error compiling committee.c: too many arguments to function




Re: [Qemu-devel] [PATCH 44/55] spitz tosa: Simplify "drive is suitable for microdrive" test

2011-08-09 Thread Kevin Wolf
Am 09.08.2011 14:36, schrieb Markus Armbruster:
> Kevin Wolf  writes:
> 
>> Am 09.08.2011 13:56, schrieb Markus Armbruster:
>>> bdrv_is_locked() is about the frontend's media lock.  To make this more
>>> obvious, my PATCH 29/55 replaces it by bdrv_dev_is_medium_locked().  It
>>> does *not* query the backend's lock (which may not even exist!) set by
>>> bdrv_set_locked().
>>
>> This sounds wrong (the behaviour, not your analysis). Do you plan to
>> remove bdrv_dev_is_medium_locked() as well? It is used by IDE and
>> scsi-disk (easy to replace) and in eject_device() in blockdev.c. Maybe
>> the 'eject' monitor command should be handled by another callback into
>> the device.
> 
> Just two users remain after my series:
> 
> * bdrv_info()
> 
>   It wants to show the frontend's lock state, and uses
>   bdrv_dev_is_medium_locked() to get it from the frontend.

bdrv_info() is a nasty reason for keeping this in the block layer.

> * eject_device()
> 
>   It needs to fail if the frontend has its medium locked.  It uses
>   bdrv_dev_is_medium_locked() to get the lock state from the frontend.
>   Pseudo code (-f glossed over for simplicity):
> 
>   unless frontend has removable media
>   fail
>   if frontend medium is not already open
>  and frontend medium is locked
>   fail
>   drop the block driver
> 
>   I considered replacing this with "ask the frontend to eject", and then
>   let the frontend decide whether to order the backend to drop the block
>   driver.  I decided against it, since I need
>   bdrv_dev_is_medium_locked() anyway for bdrv_info().  Besides, the
>   series is long enough already.

I'm not asking to change it in this series, but in general I think it
would be a good change to make.

Kevin



Re: [Qemu-devel] [PATCH] Introduce short names for fixed width integer types

2011-08-09 Thread Juan Quintela
Peter Maydell  wrote:
> On 8 August 2011 13:56, Avi Kivity  wrote:
>> QEMU deals with a lot of fixed width integer types; their names
>> (uint64_t etc) are clumsy to use and take up a lot of space.
>>
>> Following Linux, introduce shorter names, for example U64 for
>> uint64_t.
>
> Strongly disagree. uint64_t &c are standard types and it's
> immediately clear to a competent C programmer what they are.
> Random qemu-specific funny named types just introduces an
> unnecessary level of indirection.
>
> We only just recently managed to get rid of the nonstandard
> typenames for these from fpu/...

Agreed.  And if we want to change them to be "linux-like", can we just
use the linux names?  Using a name with uppercase just looks doubly
wrong on my eyes.

Later, Juan.



Re: [Qemu-devel] [PATCH v4 26/39] pcnet: convert to memory API

2011-08-09 Thread Michael S. Tsirkin
On Tue, Aug 09, 2011 at 09:52:17AM +0300, Avi Kivity wrote:
> On 08/09/2011 09:55 AM, Bob Breuer wrote:
> >>   static void lance_cleanup(VLANClientState *nc)
> >>  @@ -117,13 +116,11 @@ static int lance_init(SysBusDevice *dev)
> >>   SysBusPCNetState *d = FROM_SYSBUS(SysBusPCNetState, dev);
> >>   PCNetState *s =&d->state;
> >>
> >>  -s->mmio_index =
> >>  -cpu_register_io_memory(lance_mem_read, lance_mem_write, d,
> >>  -   DEVICE_NATIVE_ENDIAN);
> >>  +memory_region_init_io(&s->mmio,&lance_mem_ops, s, "lance-mmio", 4);
> >
> >You've switched up d and s here, so anything that tries to talk to the
> >ethernet, such as a sparc32 guest, will now cause Qemu to segfault.
> >
> >
> 
> Good catch; will post a fix.
> 
> Maybe keeping the opaque wasn't such a good idea.

Yes, we typically can get from the mmio to the device state
using container_of.

> -- 
> I have a truly marvellous patch that fixes the bug which this
> signature is too narrow to contain.



Re: [Qemu-devel] [PATCH 0/4] usb/hid: bugfixes, more on usb and hid split

2011-08-09 Thread Gerd Hoffmann

On 08/07/11 19:29, Michael Walle wrote:

This USB patchset moves the VM state stuff from usb-hid.c to hid.c, so it
can be reused by other devices.


Nice.


There is one major drawback: i need to increase the vmstate version_id of
the usb-hid device. I don't know if you agree with this change.
Alternatively, we could add a load_old function which just skips old
versions.


You don't have to change the version.  The binary representation in the 
savevm format doesn't change if you move the fields from vmstate_usb_ptr 
to vmstate_hid_ptr_device.


For anything referenced via VMS_STRUCT the vmstate->name is just there 
to beautify the source code, it isn't used at all when reading/writing 
the state.  No new section, no nothing.  The savevm code only looks at 
->fields and writes them out (and executes the hooks).


Reordering the fields (like you did with the keyboard) isn't going to 
work though.


cheers,
  Gerd



Re: [Qemu-devel] Safely reopening image files by stashing fds

2011-08-09 Thread Stefan Hajnoczi
On Tue, Aug 09, 2011 at 01:39:13PM +0200, Kevin Wolf wrote:
> Am 09.08.2011 12:56, schrieb Stefan Hajnoczi:
> > On Tue, Aug 9, 2011 at 11:50 AM, Stefan Hajnoczi  wrote:
> >> On Tue, Aug 9, 2011 at 11:35 AM, Kevin Wolf  wrote:
> >>> Am 09.08.2011 12:25, schrieb Stefan Hajnoczi:
>  On Mon, Aug 8, 2011 at 4:16 PM, Kevin Wolf  wrote:
> > Am 08.08.2011 16:49, schrieb Stefan Hajnoczi:
> >> On Fri, Aug 5, 2011 at 10:48 AM, Kevin Wolf  wrote:
> >>> Am 05.08.2011 11:29, schrieb Stefan Hajnoczi:
>  On Fri, Aug 5, 2011 at 10:07 AM, Kevin Wolf  wrote:
> > Am 05.08.2011 10:40, schrieb Stefan Hajnoczi:
> >> We've discussed safe methods for reopening image files (e.g. 
> >> useful for
> >> changing the hostcache parameter).  The problem is that closing 
> >> the file first
> >> and then opening it again exposes us to the error case where the 
> >> open fails.
> >> At that point we cannot get to the file anymore and our options 
> >> are to
> >> terminate QEMU, pause the VM, or offline the block device.
> >>
> >> This window of vulnerability can be eliminated by keeping the file 
> >> descriptor
> >> around and falling back to it should the open fail.
> >>
> >> The challenge for the file descriptor approach is that image 
> >> formats, like
> >> VMDK, can span multiple files.  Therefore the solution is not as 
> >> simple as
> >> stashing a single file descriptor and reopening from it.
> >
> > So far I agree. The rest I believe is wrong because you can't assume
> > that every backend uses file descriptors. The qemu block layer is 
> > based
> > on BlockDriverStates, not fds. They are a concept that should be 
> > hidden
> > in raw-posix.
> >
> > I think something like this could do:
> >
> > struct BDRVReopenState {
> >BlockDriverState *bs;
> >/* can be extended by block drivers */
> > };
> >
> > .bdrv_reopen(BlockDriverState *bs, BDRVReopenState **reopen_state, 
> > int
> > flags);
> > .bdrv_reopen_commit(BDRVReopenState *reopen_state);
> > .bdrv_reopen_abort(BDRVReopenState *reopen_state);
> >
> > raw-posix would store the old file descriptor in its reopen_state. 
> > On
> > commit, it closes the old descriptors, on abort it reverts to the 
> > old
> > one and closes the newly opened one.
> >
> > Makes things a bit more complicated than the simple bdrv_reopen I 
> > had in
> > mind before, but it allows VMDK to get an all-or-nothing semantics.
> 
>  Can you show how bdrv_reopen() would use these new interfaces?  I'm
>  not 100% clear on the idea.
> >>>
> >>> Well, you wouldn't only call bdrv_reopen, but also either
> >>> bdrv_reopen_commit/abort (for the top-level caller we can have a 
> >>> wrapper
> >>> function that does both, but that's syntactic sugar).
> >>>
> >>> For example we would have:
> >>>
> >>> int vmdk_reopen()
> >>
> >> .bdrv_reopen() is a confusing name for this operation because it does
> >> not reopen anything.  bdrv_prepare_reopen() might be clearer.
> >
> > Makes sense.
> >
> >>
> >>> {
> >>>*((VMDKReopenState**) rs) = malloc();
> >>>
> >>>foreach (extent in s->extents) {
> >>>ret = bdrv_reopen(extent->file, &extent->reopen_state)
> >>>if (ret < 0)
> >>>goto fail;
> >>>}
> >>>return 0;
> >>>
> >>> fail:
> >>>foreach (extent in rs->already_reopened) {
> >>>bdrv_reopen_abort(extent->reopen_state);
> >>>}
> >>>return ret;
> >>> }
> >>
> >>> void vmdk_reopen_commit()
> >>> {
> >>>foreach (extent in s->extents) {
> >>>bdrv_reopen_commit(extent->reopen_state);
> >>>}
> >>>free(rs);
> >>> }
> >>>
> >>> void vmdk_reopen_abort()
> >>> {
> >>>foreach (extent in s->extents) {
> >>>bdrv_reopen_abort(extent->reopen_state);
> >>>}
> >>>free(rs);
> >>> }
> >>
> >> Does the caller invoke bdrv_close(bs) after bdrv_prepare_reopen(bs,
> >> &rs)?
> >
> > No. Closing the old backend would be part of bdrv_reopen_commit.
> >
> > Do you have a use case where it would be helpful if the caller invoked
> > bdrv_close?
> 
>  When the caller does bdrv_close() two BlockDriverStates are never open
>  for the same image file.  I thought this was a property we wanted.
> 
>  Also, in the block_set_hostcache case we need to reopen without
>  switching to a new BlockDriverState instance.  That means the reopen
>  needs to be in-place with 

Re: [Qemu-devel] [PATCH v5 2/4] block: add the block queue support

2011-08-09 Thread Stefan Hajnoczi
On Tue, Aug 9, 2011 at 5:17 AM, Zhi Yong Wu  wrote:
> +/* The APIs for block request queue on qemu block layer.
> + */
> +
> +static void qemu_block_queue_cancel(BlockDriverAIOCB *acb)
> +{
> +    qemu_aio_release(acb);
> +}
> +
> +static AIOPool block_queue_pool = {
> +    .aiocb_size         = sizeof(struct BlockDriverAIOCB),
> +    .cancel             = qemu_block_queue_cancel,
> +};

The lifecycle of block_queue_pool acbs should be like this:

When a request is queued we need a BlockQueue acb because we have no
real acb yet.  So we get an acb from block_queue_pool.

If the acb is cancelled before being dispatched we need to release the
acb and remove the request from the queue.  (This patch does not
remove the request from the queue on cancel.)

When the acb is dispatched we need to keep track of the real acb (e.g.
from qcow2).  The caller will only ever see our acb because there is
no way to give them the pointer to the new acb after dispatch.  That
means we need to keep our acb alive for the entire lifetime of the
request.  (This patch currently releases our acb when the request is
dispatched.)

If the acb is cancelled after being dispatched we need to first cancel
the real acb and then release our acb.

When the acb is dispatched we need to pass qemu_block_queue_callback
as the cb function to handler.  Inside qemu_block_queue_callback we
invoke the request cb and then release our acb.  This is how our acb
should be freed.

> +BlockDriverAIOCB *qemu_block_queue_enqueue(BlockQueue *queue,
> +                        BlockDriverState *bs,
> +                        BlockRequestHandler *handler,
> +                        int64_t sector_num,
> +                        QEMUIOVector *qiov,
> +                        int nb_sectors,
> +                        BlockDriverCompletionFunc *cb,
> +                        void *opaque)
> +{
> +    BlockIORequest *request;
> +    BlockDriverAIOCB *acb;
> +
> +    request = qemu_malloc(sizeof(BlockIORequest));
> +    request->bs = bs;
> +    request->handler = handler;
> +    request->sector_num = sector_num;
> +    request->qiov = qiov;
> +    request->nb_sectors = nb_sectors;
> +    request->cb = cb;
> +    request->opaque = opaque;
> +
> +    QTAILQ_INSERT_TAIL(&queue->requests, request, entry);

It would be simpler to define BlockQueueAIOCB and using it as our acb
instead of managing an extra BlockIORequest structure.  That way you
don't need to worry about extra mallocs and frees.

> +
> +    acb = qemu_aio_get(&block_queue_pool, bs,
> +                       qemu_block_queue_callback, opaque);
> +
> +    request->acb = acb;
> +
> +    return acb;
> +}
> +
> +int qemu_block_queue_handler(BlockIORequest *request)

This function can be static.

Stefan



[Qemu-devel] [PATCH] etrax: QDevify the Ethernet MAC.

2011-08-09 Thread Edgar E. Iglesias
Hi,

This QDevifies the ETRAX Ethernet MAC. DMA connections remain
ad hoc.

Cheers

commit ad1a1afaa185de0ed668f9bc3fb4f3cb076b32a0
Author: Edgar E. Iglesias 
Date:   Tue Aug 9 13:24:04 2011 +0200

etrax: QDevify the Ethernet MAC.

Signed-off-by: Edgar E. Iglesias 

diff --git a/hw/cris_pic_cpu.c b/hw/cris_pic_cpu.c
index a92d445..7f1e4ab 100644
--- a/hw/cris_pic_cpu.c
+++ b/hw/cris_pic_cpu.c
@@ -22,6 +22,7 @@
  * THE SOFTWARE.
  */
 
+#include "sysbus.h"
 #include "hw.h"
 #include "pc.h"
 #include "etraxfs.h"
diff --git a/hw/etraxfs.h b/hw/etraxfs.h
index 5c61f1b..1554b0b 100644
--- a/hw/etraxfs.h
+++ b/hw/etraxfs.h
@@ -25,6 +25,21 @@
 #include "etraxfs_dma.h"
 
 qemu_irq *cris_pic_init_cpu(CPUState *env);
-void etraxfs_eth_init(NICInfo *nd, target_phys_addr_t base, int phyaddr,
-  struct etraxfs_dma_client *dma_out,
-  struct etraxfs_dma_client *dma_in);
+
+/* Instantiate an ETRAXFS Ethernet MAC.  */
+static inline DeviceState *
+etraxfs_eth_init(NICInfo *nd, target_phys_addr_t base, int phyaddr,
+ void *dma_out, void *dma_in)
+{
+DeviceState *dev;
+qemu_check_nic_model(nd, "fseth");
+
+dev = qdev_create(NULL, "etraxfs-eth");
+qdev_set_nic_properties(dev, nd);
+qdev_prop_set_uint32(dev, "phyaddr", phyaddr);
+qdev_prop_set_ptr(dev, "dma_out", dma_out);
+qdev_prop_set_ptr(dev, "dma_in", dma_in);
+qdev_init_nofail(dev);
+sysbus_mmio_map(sysbus_from_qdev(dev), 0, base);
+return dev;
+}
diff --git a/hw/etraxfs_eth.c b/hw/etraxfs_eth.c
index 6453077..92d4eca 100644
--- a/hw/etraxfs_eth.c
+++ b/hw/etraxfs_eth.c
@@ -23,7 +23,7 @@
  */
 
 #include 
-#include "hw.h"
+#include "sysbus.h"
 #include "net.h"
 #include "etraxfs.h"
 
@@ -319,6 +319,7 @@ static void mdio_cycle(struct qemu_mdio *bus)
 
 struct fs_eth
 {
+   SysBusDevice busdev;
NICState *nic;
NICConf conf;
int ethregs;
@@ -327,8 +328,14 @@ struct fs_eth
uint8_t macaddr[2][6];
uint32_t regs[FS_ETH_MAX_REGS];
 
-   struct etraxfs_dma_client *dma_out;
-   struct etraxfs_dma_client *dma_in;
+   union {
+   void *vdma_out;
+   struct etraxfs_dma_client *dma_out;
+   };
+   union {
+   void *vdma_in;
+   struct etraxfs_dma_client *dma_in;
+   };
 
/* MDIO bus.  */
struct qemu_mdio mdio_bus;
@@ -579,37 +586,50 @@ static NetClientInfo net_etraxfs_info = {
.link_status_changed = eth_set_link,
 };
 
-void etraxfs_eth_init(NICInfo *nd, target_phys_addr_t base, int phyaddr,
-   struct etraxfs_dma_client *dma_out,
-   struct etraxfs_dma_client *dma_in)
+static int fs_eth_init(SysBusDevice *dev)
 {
-   struct fs_eth *eth = NULL;
+   struct fs_eth *s = FROM_SYSBUS(typeof(*s), dev);
+   int eth_regs;
 
-   qemu_check_nic_model(nd, "fseth");
-
-   eth = qemu_mallocz(sizeof *eth);
+   if (!s->dma_out || !s->dma_in) {
+   hw_error("Unconnected ETRAX-FS Ethernet MAC.\n");
+   }
 
-   dma_out->client.push = eth_tx_push;
-   dma_out->client.opaque = eth;
-   dma_in->client.opaque = eth;
-   dma_in->client.pull = NULL;
+   s->dma_out->client.push = eth_tx_push;
+   s->dma_out->client.opaque = s;
+   s->dma_in->client.opaque = s;
+   s->dma_in->client.pull = NULL;
 
-   eth->dma_out = dma_out;
-   eth->dma_in = dma_in;
+   eth_regs = cpu_register_io_memory(eth_read, eth_write, s,
+ DEVICE_LITTLE_ENDIAN);
+   sysbus_init_mmio(dev, 0x5c, eth_regs);
 
-   /* Connect the phy.  */
-   eth->phyaddr = phyaddr & 0x1f;
-   tdk_init(ð->phy);
-   mdio_attach(ð->mdio_bus, ð->phy, eth->phyaddr);
+   qemu_macaddr_default_if_unset(&s->conf.macaddr);
+   s->nic = qemu_new_nic(&net_etraxfs_info, &s->conf,
+ dev->qdev.info->name, dev->qdev.id, s);
+   qemu_format_nic_info_str(&s->nic->nc, s->conf.macaddr.a);
 
-   eth->ethregs = cpu_register_io_memory(eth_read, eth_write, eth,
-  DEVICE_NATIVE_ENDIAN);
-   cpu_register_physical_memory (base, 0x5c, eth->ethregs);
+   tdk_init(&s->phy);
+   mdio_attach(&s->mdio_bus, &s->phy, s->phyaddr);
+   return 0;
+}
 
-   eth->conf.macaddr = nd->macaddr;
-   eth->conf.vlan = nd->vlan;
-   eth->conf.peer = nd->netdev;
+static SysBusDeviceInfo etraxfs_eth_info = {
+   .init = fs_eth_init,
+   .qdev.name  = "etraxfs-eth",
+   .qdev.size  = sizeof(struct fs_eth),
+   .qdev.props = (Property[]) {
+   DEFINE_PROP_UINT32("phyaddr", struct fs_eth, phyaddr, 1),
+   DEFINE_PROP_PTR("dma_out", struct fs_eth, vdma_out),
+   DEFINE_PROP_PTR("dma_in", struct fs_eth, vdma_in),
+   DEFINE_NIC_PROPERTIES(struct fs_eth, conf),
+   DEFINE_PROP_END_OF_LIST(),
+   

Re: [Qemu-devel] [PATCH] PPC: Fix sync instructions problem in SMP

2011-08-09 Thread Alexander Graf

On 08/04/2011 12:34 PM, Elie Richa wrote:

bump.

Anybody have opinions on this?
I can provide a more thorough explanation of the problem if needed. I 
wrote the test case in Ada for an MPC8641D processor (2 e600 cores) on 
a board that is not yet submitted to the list, therefore I cannot 
provide a ready to run binary. I can submit my board if needed.


Even though my patch is rather harsh, it has the advantage of solving
the problem for any number of CPUs without touching the translation
code. And it does not affect normal execution as I explained in the
previous mail.


No, the patch looks reasonable. I'll try to apply it ASAP.


Alex




[Qemu-devel] [PATCH] hw/qxl: fix build break with newer spice

2011-08-09 Thread Alon Levy
ioport_write's val was changed from uint32_t to uint64_t, which
broke two printfs. Use PRId64 instead of %d.

Signed-off-by: Alon Levy 
---
 hw/qxl.c |5 +++--
 1 files changed, 3 insertions(+), 2 deletions(-)

diff --git a/hw/qxl.c b/hw/qxl.c
index db7ae7a..2335e4b 100644
--- a/hw/qxl.c
+++ b/hw/qxl.c
@@ -1189,7 +1189,7 @@ async_common:
 }
 d->current_async = orig_io_port;
 qemu_mutex_unlock(&d->async_lock);
-dprint(d, 2, "start async %d (%d)\n", io_port, val);
+dprint(d, 2, "start async %d (%"PRId64")\n", io_port, val);
 break;
 default:
 break;
@@ -1305,7 +1305,8 @@ async_common:
 break;
 }
 case QXL_IO_FLUSH_SURFACES_ASYNC:
-dprint(d, 1, "QXL_IO_FLUSH_SURFACES_ASYNC (%d) (%s, s#=%d, res#=%d)\n",
+dprint(d, 1, "QXL_IO_FLUSH_SURFACES_ASYNC"
+ " (%"PRId64"d) (%s, s#=%d, res#=%d)\n",
val, qxl_mode_to_string(d->mode), d->guest_surfaces.count,
d->num_free_res);
 qxl_spice_flush_surfaces_async(d);
-- 
1.7.6




Re: [Qemu-devel] [PATCH v4 26/39] pcnet: convert to memory API

2011-08-09 Thread Avi Kivity

On 08/09/2011 03:42 PM, Michael S. Tsirkin wrote:

On Tue, Aug 09, 2011 at 09:52:17AM +0300, Avi Kivity wrote:
>  On 08/09/2011 09:55 AM, Bob Breuer wrote:
>  >>static void lance_cleanup(VLANClientState *nc)
>  >>   @@ -117,13 +116,11 @@ static int lance_init(SysBusDevice *dev)
>  >>SysBusPCNetState *d = FROM_SYSBUS(SysBusPCNetState, dev);
>  >>PCNetState *s =&d->state;
>  >>
>  >>   -s->mmio_index =
>  >>   -cpu_register_io_memory(lance_mem_read, lance_mem_write, d,
>  >>   -   DEVICE_NATIVE_ENDIAN);
>  >>   +memory_region_init_io(&s->mmio,&lance_mem_ops, s, "lance-mmio", 4);
>  >
>  >You've switched up d and s here, so anything that tries to talk to the
>  >ethernet, such as a sparc32 guest, will now cause Qemu to segfault.
>  >
>  >
>
>  Good catch; will post a fix.
>
>  Maybe keeping the opaque wasn't such a good idea.

Yes, we typically can get from the mmio to the device state
using container_of.




But in some cases, we can't, and then it's a pain having to wrap 
MemoryRegion in another structure containing an opaque.


Maybe a good compromise is to:

  - keep MemoryRegion::opaque
  - pass a MemoryRegion *mr to callbacks instead of opaque
  - use container_of() when possible
  - use mr->opaque otherwise

--
error compiling committee.c: too many arguments to function




Re: [Qemu-devel] [PATCH 44/55] spitz tosa: Simplify "drive is suitable for microdrive" test

2011-08-09 Thread Markus Armbruster
Kevin Wolf  writes:

> Am 09.08.2011 06:32, schrieb andrzej zaborowski:
>> On 4 August 2011 10:02, Kevin Wolf  wrote:
>>> Am 03.08.2011 22:20, schrieb andrzej zaborowski:
 On 3 August 2011 20:24, Markus Armbruster  wrote:
> andrzej zaborowski  writes:
>> On 3 August 2011 18:38, Markus Armbruster  wrote:
>>> andrzej zaborowski  writes:
  2. if the
 underlaying storage can disappear for any other reason if that's
 possible to check.
>>>
>>> bdrv_is_removable() *isn't* such a check.
>>
>> Obviously I wasn't claiming it is, just that it might be useful, but
>> not necessarily possible.  After all pretty much any storage can be
>> "ejected" with enough force, depending on how far you want to go.
>>
> What's wrong with that again?  All sounds sensible to me.

 I'm not claiming otherwise, just double-checking this is what you want.
>>
>> So first you said you had a problem with _is_removable, and then you
>> said nothing was wrong with the implementation you outlined, please
>> make up your mind.
>
> I don't appreciate you quoting me out of context like that.

 I got quite annoyed when you started putting words in my mouth by
 saying I said anything about CD-ROM. The code in spitz/tosa is not
 concerned with CD-ROMs even if downstream it boils down to that, it is
 concerned with whether the device is removable or not, and that's what
 the check does.  It doesn't help readability or anything by inlining
 that check.  If you're trying to check for A then don't spell it out
 as B, be explicit.  It's not a big deal but I just don't see the
 point, sorry.

>
> The sentence you quoted was in the middle of my attempt to get you to
> explain what you're trying to accomplish there.

 I already said about 3 times what it's trying to accomplish.  You also
 have used the word "removable" so I'm sure you know what it means and
 don't need further explanation.  But let's define it this way: if a
 GUI is going to display an "eject" button next to a drive in the qemu
 device tree, that's a removable device.  CD-ROM is an example of that.
  An IDE HDD is an example of something that's not going to have that
 button (I assume).
>>>
>>> But this is a property of the device, not of the backend. This means
>>> that it belongs in the device emulation and not in block.c.
>> 
>> By device do you mean hw/ide/microdrive.c?
>
> I mean just anything in hw/. I'm not familiar with how these devices
> work internally, so if hw/ide/microdrive.c is the right place, fine.

We have way too many things we call "device", or "driver"...  Let me try
to dispell the confusion.

Machines spitz, borzoi, terrier (hw/spitz.c) have a spitz microdrive
device.  Machine tosa (hw/tosa.c) has a tosa microdrive device.

Like all block devices, they consist of a frontend and a backend.

Our block device backend abstraction is BlockDriverState.  Internally, a
block device backend is the block layer sitting on top of an optional
BlockDriver.  No block driver is interpreted as no medium.

Monitor command "eject" drops the block driver.

Monitor command "change" first drops the block driver, then attaches a
new one.

A BlockDriver may support removable media.  Host device block drivers
do, the others don't.  Doesn't affect monitor commands eject and change
at all, because these *drop* the block driver.

A block device frontend is attached to a backend on initialization.  In
other words, a backend may exist alone, but a frontend is always
attached to a backend.

A block device frontend may support removable media.  If it does,
monitor commands eject and change work while the frontend is attached.
They always work while no frontend is attached.

Note: we got *two* different "removables" here, frontend and backend.
It is possible to back a removable frontend, say ide-cd, with a
non-removable backend, say file.  The opposite is also possible (whether
it's wise is a different question).

The frontend code is in hw/ide/microdrive.c.  It does *not* support
removable media.  Monitor commands eject and change refuse to touch this
device.

The property "monitor command eject is applicable to the device" is a
property of the frontend, because the frontend and only the frontend
decide whether eject is okay.

The property 'GUI shall display an "eject" button next to a drive in the
qemu device tree' is just the same.

Kevin wrote "this is a property of the device, not of the backend.  This
means that it belongs in the device emulation and not in block.c."
Kevin is correct.  The code is also correct: microdrive.c defines a
non-removable device (by not implementing callback change_media_cb()).
Therefore, monitor commands eject and change will not touch microdrives.

The checks in spitz_microdrive_attach() and tosa_microdrive_attach() do
*n

Re: [Qemu-devel] Safely reopening image files by stashing fds

2011-08-09 Thread Kevin Wolf
Am 09.08.2011 12:56, schrieb Stefan Hajnoczi:
> On Tue, Aug 9, 2011 at 11:50 AM, Stefan Hajnoczi  wrote:
>> On Tue, Aug 9, 2011 at 11:35 AM, Kevin Wolf  wrote:
>>> Am 09.08.2011 12:25, schrieb Stefan Hajnoczi:
 On Mon, Aug 8, 2011 at 4:16 PM, Kevin Wolf  wrote:
> Am 08.08.2011 16:49, schrieb Stefan Hajnoczi:
>> On Fri, Aug 5, 2011 at 10:48 AM, Kevin Wolf  wrote:
>>> Am 05.08.2011 11:29, schrieb Stefan Hajnoczi:
 On Fri, Aug 5, 2011 at 10:07 AM, Kevin Wolf  wrote:
> Am 05.08.2011 10:40, schrieb Stefan Hajnoczi:
>> We've discussed safe methods for reopening image files (e.g. useful 
>> for
>> changing the hostcache parameter).  The problem is that closing the 
>> file first
>> and then opening it again exposes us to the error case where the 
>> open fails.
>> At that point we cannot get to the file anymore and our options are 
>> to
>> terminate QEMU, pause the VM, or offline the block device.
>>
>> This window of vulnerability can be eliminated by keeping the file 
>> descriptor
>> around and falling back to it should the open fail.
>>
>> The challenge for the file descriptor approach is that image 
>> formats, like
>> VMDK, can span multiple files.  Therefore the solution is not as 
>> simple as
>> stashing a single file descriptor and reopening from it.
>
> So far I agree. The rest I believe is wrong because you can't assume
> that every backend uses file descriptors. The qemu block layer is 
> based
> on BlockDriverStates, not fds. They are a concept that should be 
> hidden
> in raw-posix.
>
> I think something like this could do:
>
> struct BDRVReopenState {
>BlockDriverState *bs;
>/* can be extended by block drivers */
> };
>
> .bdrv_reopen(BlockDriverState *bs, BDRVReopenState **reopen_state, int
> flags);
> .bdrv_reopen_commit(BDRVReopenState *reopen_state);
> .bdrv_reopen_abort(BDRVReopenState *reopen_state);
>
> raw-posix would store the old file descriptor in its reopen_state. On
> commit, it closes the old descriptors, on abort it reverts to the old
> one and closes the newly opened one.
>
> Makes things a bit more complicated than the simple bdrv_reopen I had 
> in
> mind before, but it allows VMDK to get an all-or-nothing semantics.

 Can you show how bdrv_reopen() would use these new interfaces?  I'm
 not 100% clear on the idea.
>>>
>>> Well, you wouldn't only call bdrv_reopen, but also either
>>> bdrv_reopen_commit/abort (for the top-level caller we can have a wrapper
>>> function that does both, but that's syntactic sugar).
>>>
>>> For example we would have:
>>>
>>> int vmdk_reopen()
>>
>> .bdrv_reopen() is a confusing name for this operation because it does
>> not reopen anything.  bdrv_prepare_reopen() might be clearer.
>
> Makes sense.
>
>>
>>> {
>>>*((VMDKReopenState**) rs) = malloc();
>>>
>>>foreach (extent in s->extents) {
>>>ret = bdrv_reopen(extent->file, &extent->reopen_state)
>>>if (ret < 0)
>>>goto fail;
>>>}
>>>return 0;
>>>
>>> fail:
>>>foreach (extent in rs->already_reopened) {
>>>bdrv_reopen_abort(extent->reopen_state);
>>>}
>>>return ret;
>>> }
>>
>>> void vmdk_reopen_commit()
>>> {
>>>foreach (extent in s->extents) {
>>>bdrv_reopen_commit(extent->reopen_state);
>>>}
>>>free(rs);
>>> }
>>>
>>> void vmdk_reopen_abort()
>>> {
>>>foreach (extent in s->extents) {
>>>bdrv_reopen_abort(extent->reopen_state);
>>>}
>>>free(rs);
>>> }
>>
>> Does the caller invoke bdrv_close(bs) after bdrv_prepare_reopen(bs,
>> &rs)?
>
> No. Closing the old backend would be part of bdrv_reopen_commit.
>
> Do you have a use case where it would be helpful if the caller invoked
> bdrv_close?

 When the caller does bdrv_close() two BlockDriverStates are never open
 for the same image file.  I thought this was a property we wanted.

 Also, in the block_set_hostcache case we need to reopen without
 switching to a new BlockDriverState instance.  That means the reopen
 needs to be in-place with respect to the BlockDriverState *bs pointer.
  We cannot create a new instance.
>>>
>>> Yes, but where do you even get the second BlockDriverState from?
>>>
>>> My prototype only returns an int, not a new BlockDriverState. Until
>>> bdrv_reopen_commit() it would refer to the old file descriptors etc. and
>>> after bdrv_reopen_commit() the very same BlockDriverState would refer to
>>> the new ones.

Re: [Qemu-devel] [PATCH v5 0/4] The intro of QEMU block I/O throttling

2011-08-09 Thread Stefan Hajnoczi
On Tue, Aug 9, 2011 at 5:17 AM, Zhi Yong Wu  wrote:
>  Makefile.objs     |    2 +-
>  block.c           |  347 
> +++--
>  block.h           |    6 +-
>  block/blk-queue.c |  141 ++
>  block/blk-queue.h |   73 +++
>  block_int.h       |   30 +
>  blockdev.c        |  108 +
>  blockdev.h        |    2 +
>  hmp-commands.hx   |   15 +++
>  qemu-config.c     |   24 
>  qemu-option.c     |   17 +++
>  qemu-option.h     |    1 +
>  qemu-options.hx   |    1 +
>  qerror.c          |    4 +
>  qerror.h          |    3 +
>  qmp-commands.hx   |   52 -
>  16 files changed, 813 insertions(+), 13 deletions(-)
>  create mode 100644 block/blk-queue.c
>  create mode 100644 block/blk-queue.h

This series is against qemu-kvm.git, please send block patches against
qemu.git/master in the future.  There is no qemu-kvm.git-specific code
here so just working against qemu.git and emailing qemu-devel is best.

Stefan



Re: [Qemu-devel] [PATCH] e1000: Do reset when E1000_CTRL_RST bit is set.

2011-08-09 Thread Anthony PERARD
On Fri, Aug 5, 2011 at 17:53, Anthony Liguori  wrote:
>
> You'll break some GCCs with -Wall -Werror with this.  Please do:
>
> if ((val & E1000_CTRL_RST)) {

:(, I never heard of this. But OK, I will do that.

-- 
Anthony PERARD



Re: [Qemu-devel] [PATCH v4 26/39] pcnet: convert to memory API

2011-08-09 Thread Michael S. Tsirkin
On Tue, Aug 09, 2011 at 03:44:35PM +0300, Avi Kivity wrote:
> On 08/09/2011 03:42 PM, Michael S. Tsirkin wrote:
> >On Tue, Aug 09, 2011 at 09:52:17AM +0300, Avi Kivity wrote:
> >>  On 08/09/2011 09:55 AM, Bob Breuer wrote:
> >>  >>static void lance_cleanup(VLANClientState *nc)
> >>  >>   @@ -117,13 +116,11 @@ static int lance_init(SysBusDevice *dev)
> >>  >>SysBusPCNetState *d = FROM_SYSBUS(SysBusPCNetState, dev);
> >>  >>PCNetState *s =&d->state;
> >>  >>
> >>  >>   -s->mmio_index =
> >>  >>   -cpu_register_io_memory(lance_mem_read, lance_mem_write, d,
> >>  >>   -   DEVICE_NATIVE_ENDIAN);
> >>  >>   +memory_region_init_io(&s->mmio,&lance_mem_ops, s, "lance-mmio", 
> >> 4);
> >>  >
> >>  >You've switched up d and s here, so anything that tries to talk to the
> >>  >ethernet, such as a sparc32 guest, will now cause Qemu to segfault.
> >>  >
> >>  >
> >>
> >>  Good catch; will post a fix.
> >>
> >>  Maybe keeping the opaque wasn't such a good idea.
> >
> >Yes, we typically can get from the mmio to the device state
> >using container_of.
> >
> >
> 
> But in some cases, we can't, and then it's a pain having to wrap
> MemoryRegion in another structure containing an opaque.

I guess, even though that wrapping structure would
use a proper type, not an opaque.

> Maybe a good compromise is to:
> 
>   - keep MemoryRegion::opaque
>   - pass a MemoryRegion *mr to callbacks instead of opaque
>   - use container_of() when possible
>   - use mr->opaque otherwise

Right. This even saves a memory dereference when opaque is
unused.

> -- 
> error compiling committee.c: too many arguments to function



Re: [Qemu-devel] Safely reopening image files by stashing fds

2011-08-09 Thread Kevin Wolf
Am 09.08.2011 12:25, schrieb Stefan Hajnoczi:
> On Mon, Aug 8, 2011 at 4:16 PM, Kevin Wolf  wrote:
>> Am 08.08.2011 16:49, schrieb Stefan Hajnoczi:
>>> On Fri, Aug 5, 2011 at 10:48 AM, Kevin Wolf  wrote:
 Am 05.08.2011 11:29, schrieb Stefan Hajnoczi:
> On Fri, Aug 5, 2011 at 10:07 AM, Kevin Wolf  wrote:
>> Am 05.08.2011 10:40, schrieb Stefan Hajnoczi:
>>> We've discussed safe methods for reopening image files (e.g. useful for
>>> changing the hostcache parameter).  The problem is that closing the 
>>> file first
>>> and then opening it again exposes us to the error case where the open 
>>> fails.
>>> At that point we cannot get to the file anymore and our options are to
>>> terminate QEMU, pause the VM, or offline the block device.
>>>
>>> This window of vulnerability can be eliminated by keeping the file 
>>> descriptor
>>> around and falling back to it should the open fail.
>>>
>>> The challenge for the file descriptor approach is that image formats, 
>>> like
>>> VMDK, can span multiple files.  Therefore the solution is not as simple 
>>> as
>>> stashing a single file descriptor and reopening from it.
>>
>> So far I agree. The rest I believe is wrong because you can't assume
>> that every backend uses file descriptors. The qemu block layer is based
>> on BlockDriverStates, not fds. They are a concept that should be hidden
>> in raw-posix.
>>
>> I think something like this could do:
>>
>> struct BDRVReopenState {
>>BlockDriverState *bs;
>>/* can be extended by block drivers */
>> };
>>
>> .bdrv_reopen(BlockDriverState *bs, BDRVReopenState **reopen_state, int
>> flags);
>> .bdrv_reopen_commit(BDRVReopenState *reopen_state);
>> .bdrv_reopen_abort(BDRVReopenState *reopen_state);
>>
>> raw-posix would store the old file descriptor in its reopen_state. On
>> commit, it closes the old descriptors, on abort it reverts to the old
>> one and closes the newly opened one.
>>
>> Makes things a bit more complicated than the simple bdrv_reopen I had in
>> mind before, but it allows VMDK to get an all-or-nothing semantics.
>
> Can you show how bdrv_reopen() would use these new interfaces?  I'm
> not 100% clear on the idea.

 Well, you wouldn't only call bdrv_reopen, but also either
 bdrv_reopen_commit/abort (for the top-level caller we can have a wrapper
 function that does both, but that's syntactic sugar).

 For example we would have:

 int vmdk_reopen()
>>>
>>> .bdrv_reopen() is a confusing name for this operation because it does
>>> not reopen anything.  bdrv_prepare_reopen() might be clearer.
>>
>> Makes sense.
>>
>>>
 {
*((VMDKReopenState**) rs) = malloc();

foreach (extent in s->extents) {
ret = bdrv_reopen(extent->file, &extent->reopen_state)
if (ret < 0)
goto fail;
}
return 0;

 fail:
foreach (extent in rs->already_reopened) {
bdrv_reopen_abort(extent->reopen_state);
}
return ret;
 }
>>>
 void vmdk_reopen_commit()
 {
foreach (extent in s->extents) {
bdrv_reopen_commit(extent->reopen_state);
}
free(rs);
 }

 void vmdk_reopen_abort()
 {
foreach (extent in s->extents) {
bdrv_reopen_abort(extent->reopen_state);
}
free(rs);
 }
>>>
>>> Does the caller invoke bdrv_close(bs) after bdrv_prepare_reopen(bs,
>>> &rs)?
>>
>> No. Closing the old backend would be part of bdrv_reopen_commit.
>>
>> Do you have a use case where it would be helpful if the caller invoked
>> bdrv_close?
> 
> When the caller does bdrv_close() two BlockDriverStates are never open
> for the same image file.  I thought this was a property we wanted.
> 
> Also, in the block_set_hostcache case we need to reopen without
> switching to a new BlockDriverState instance.  That means the reopen
> needs to be in-place with respect to the BlockDriverState *bs pointer.
>  We cannot create a new instance.

Yes, but where do you even get the second BlockDriverState from?

My prototype only returns an int, not a new BlockDriverState. Until
bdrv_reopen_commit() it would refer to the old file descriptors etc. and
after bdrv_reopen_commit() the very same BlockDriverState would refer to
the new ones.

Kevin



Re: [Qemu-devel] [PATCH 44/55] spitz tosa: Simplify "drive is suitable for microdrive" test

2011-08-09 Thread Kevin Wolf
Am 09.08.2011 13:56, schrieb Markus Armbruster:
> bdrv_is_locked() is about the frontend's media lock.  To make this more
> obvious, my PATCH 29/55 replaces it by bdrv_dev_is_medium_locked().  It
> does *not* query the backend's lock (which may not even exist!) set by
> bdrv_set_locked().

This sounds wrong (the behaviour, not your analysis). Do you plan to
remove bdrv_dev_is_medium_locked() as well? It is used by IDE and
scsi-disk (easy to replace) and in eject_device() in blockdev.c. Maybe
the 'eject' monitor command should be handled by another callback into
the device.

Kevin



Re: [Qemu-devel] KVM call agenda for August 8

2011-08-09 Thread Juan Quintela
Anthony Liguori  wrote:
> On 08/09/2011 04:21 AM, Juan Quintela wrote:
>>
>> Hi
>>
>> Please send in any agenda items you are interested in covering.
>
> I think we can skip since many people will be in Vancouver next week
> and we'll have plenty to talk about then.
>
> Just a couple reminders, for KVM Forum speakers, please post your
> slides to http://www.linux-kvm.org/page/KVM_Forum_2011
>
> Also, we'll be hosting a KVM Hackathon on Thursday from 9am-2pm.
> We'll announce the room information at KVM Forum.
>
> For everyone travelling, have a safe flight!
>
> Regards,
>
> Anthony Liguori

Agreed about both things.

Have a nice day, Juan.



[Qemu-devel] Important information

2011-08-09 Thread Frank Mill
 - This mail is in HTML. Some elements may be omitted in plain text. -

Read the attached file for more details about the funds of your deceased
family member.
Regards,
I am Mr. Frank Miller, the credit manager of the National Health Service
Trust in Holland. There is an account belonging to a late family member,
Albert, which I want to discuss with you and which has some form of
relationship with you, going by the similarity in the name. I am in the
United Kingdom at the moment undergoing a financial engineering course;
please email me regarding this. You can give me your personal telephone
number so that I can give you a call; this is urgent.
Regards,
Frank Miller.

www.sns.nl
United Kingdom code - 203 318 3822


Re: [Qemu-devel] Safely reopening image files by stashing fds

2011-08-09 Thread Stefan Hajnoczi
On Mon, Aug 8, 2011 at 4:16 PM, Kevin Wolf  wrote:
> Am 08.08.2011 16:49, schrieb Stefan Hajnoczi:
>> On Fri, Aug 5, 2011 at 10:48 AM, Kevin Wolf  wrote:
>>> Am 05.08.2011 11:29, schrieb Stefan Hajnoczi:
 On Fri, Aug 5, 2011 at 10:07 AM, Kevin Wolf  wrote:
> Am 05.08.2011 10:40, schrieb Stefan Hajnoczi:
>> We've discussed safe methods for reopening image files (e.g. useful for
>> changing the hostcache parameter).  The problem is that closing the file 
>> first
>> and then opening it again exposes us to the error case where the open 
>> fails.
>> At that point we cannot get to the file anymore and our options are to
>> terminate QEMU, pause the VM, or offline the block device.
>>
>> This window of vulnerability can be eliminated by keeping the file 
>> descriptor
>> around and falling back to it should the open fail.
>>
>> The challenge for the file descriptor approach is that image formats, 
>> like
>> VMDK, can span multiple files.  Therefore the solution is not as simple 
>> as
>> stashing a single file descriptor and reopening from it.
>
> So far I agree. The rest I believe is wrong because you can't assume
> that every backend uses file descriptors. The qemu block layer is based
> on BlockDriverStates, not fds. They are a concept that should be hidden
> in raw-posix.
>
> I think something like this could do:
>
> struct BDRVReopenState {
>    BlockDriverState *bs;
>    /* can be extended by block drivers */
> };
>
> .bdrv_reopen(BlockDriverState *bs, BDRVReopenState **reopen_state, int
> flags);
> .bdrv_reopen_commit(BDRVReopenState *reopen_state);
> .bdrv_reopen_abort(BDRVReopenState *reopen_state);
>
> raw-posix would store the old file descriptor in its reopen_state. On
> commit, it closes the old descriptors, on abort it reverts to the old
> one and closes the newly opened one.
>
> Makes things a bit more complicated than the simple bdrv_reopen I had in
> mind before, but it allows VMDK to get an all-or-nothing semantics.

 Can you show how bdrv_reopen() would use these new interfaces?  I'm
 not 100% clear on the idea.
>>>
>>> Well, you wouldn't only call bdrv_reopen, but also either
>>> bdrv_reopen_commit/abort (for the top-level caller we can have a wrapper
>>> function that does both, but that's syntactic sugar).
>>>
>>> For example we would have:
>>>
>>> int vmdk_reopen()
>>
>> .bdrv_reopen() is a confusing name for this operation because it does
>> not reopen anything.  bdrv_prepare_reopen() might be clearer.
>
> Makes sense.
>
>>
>>> {
>>>    *((VMDKReopenState**) rs) = malloc();
>>>
>>>    foreach (extent in s->extents) {
>>>        ret = bdrv_reopen(extent->file, &extent->reopen_state)
>>>        if (ret < 0)
>>>            goto fail;
>>>    }
>>>    return 0;
>>>
>>> fail:
>>>    foreach (extent in rs->already_reopened) {
>>>        bdrv_reopen_abort(extent->reopen_state);
>>>    }
>>>    return ret;
>>> }
>>
>>> void vmdk_reopen_commit()
>>> {
>>>    foreach (extent in s->extents) {
>>>        bdrv_reopen_commit(extent->reopen_state);
>>>    }
>>>    free(rs);
>>> }
>>>
>>> void vmdk_reopen_abort()
>>> {
>>>    foreach (extent in s->extents) {
>>>        bdrv_reopen_abort(extent->reopen_state);
>>>    }
>>>    free(rs);
>>> }
>>
>> Does the caller invoke bdrv_close(bs) after bdrv_prepare_reopen(bs,
>> &rs)?
>
> No. Closing the old backend would be part of bdrv_reopen_commit.
>
> Do you have a use case where it would be helpful if the caller invoked
> bdrv_close?

When the caller does bdrv_close() two BlockDriverStates are never open
for the same image file.  I thought this was a property we wanted.

Also, in the block_set_hostcache case we need to reopen without
switching to a new BlockDriverState instance.  That means the reopen
needs to be in-place with respect to the BlockDriverState *bs pointer.
 We cannot create a new instance.

Stefan



Re: [Qemu-devel] Safely reopening image files by stashing fds

2011-08-09 Thread Kevin Wolf
Am 09.08.2011 14:00, schrieb Stefan Hajnoczi:
> On Tue, Aug 09, 2011 at 01:39:13PM +0200, Kevin Wolf wrote:
>> Am 09.08.2011 12:56, schrieb Stefan Hajnoczi:
>>> On Tue, Aug 9, 2011 at 11:50 AM, Stefan Hajnoczi  wrote:
 On Tue, Aug 9, 2011 at 11:35 AM, Kevin Wolf  wrote:
> Am 09.08.2011 12:25, schrieb Stefan Hajnoczi:
>> On Mon, Aug 8, 2011 at 4:16 PM, Kevin Wolf  wrote:
>>> Am 08.08.2011 16:49, schrieb Stefan Hajnoczi:
 On Fri, Aug 5, 2011 at 10:48 AM, Kevin Wolf  wrote:
> Am 05.08.2011 11:29, schrieb Stefan Hajnoczi:
>> On Fri, Aug 5, 2011 at 10:07 AM, Kevin Wolf  wrote:
>>> Am 05.08.2011 10:40, schrieb Stefan Hajnoczi:
 We've discussed safe methods for reopening image files (e.g. 
 useful for
 changing the hostcache parameter).  The problem is that closing 
 the file first
 and then opening it again exposes us to the error case where the 
 open fails.
 At that point we cannot get to the file anymore and our options 
 are to
 terminate QEMU, pause the VM, or offline the block device.

 This window of vulnerability can be eliminated by keeping the file 
 descriptor
 around and falling back to it should the open fail.

 The challenge for the file descriptor approach is that image 
 formats, like
 VMDK, can span multiple files.  Therefore the solution is not as 
 simple as
 stashing a single file descriptor and reopening from it.
>>>
>>> So far I agree. The rest I believe is wrong because you can't assume
>>> that every backend uses file descriptors. The qemu block layer is 
>>> based
>>> on BlockDriverStates, not fds. They are a concept that should be 
>>> hidden
>>> in raw-posix.
>>>
>>> I think something like this could do:
>>>
>>> struct BDRVReopenState {
>>>BlockDriverState *bs;
>>>/* can be extended by block drivers */
>>> };
>>>
>>> .bdrv_reopen(BlockDriverState *bs, BDRVReopenState **reopen_state, 
>>> int
>>> flags);
>>> .bdrv_reopen_commit(BDRVReopenState *reopen_state);
>>> .bdrv_reopen_abort(BDRVReopenState *reopen_state);
>>>
>>> raw-posix would store the old file descriptor in its reopen_state. 
>>> On
>>> commit, it closes the old descriptors, on abort it reverts to the 
>>> old
>>> one and closes the newly opened one.
>>>
>>> Makes things a bit more complicated than the simple bdrv_reopen I 
>>> had in
>>> mind before, but it allows VMDK to get an all-or-nothing semantics.
>>
>> Can you show how bdrv_reopen() would use these new interfaces?  I'm
>> not 100% clear on the idea.
>
> Well, you wouldn't only call bdrv_reopen, but also either
> bdrv_reopen_commit/abort (for the top-level caller we can have a 
> wrapper
> function that does both, but that's syntactic sugar).
>
> For example we would have:
>
> int vmdk_reopen()

 .bdrv_reopen() is a confusing name for this operation because it does
 not reopen anything.  bdrv_prepare_reopen() might be clearer.
>>>
>>> Makes sense.
>>>

> {
>*((VMDKReopenState**) rs) = malloc();
>
>foreach (extent in s->extents) {
>ret = bdrv_reopen(extent->file, &extent->reopen_state)
>if (ret < 0)
>goto fail;
>}
>return 0;
>
> fail:
>foreach (extent in rs->already_reopened) {
>bdrv_reopen_abort(extent->reopen_state);
>}
>return ret;
> }

> void vmdk_reopen_commit()
> {
>foreach (extent in s->extents) {
>bdrv_reopen_commit(extent->reopen_state);
>}
>free(rs);
> }
>
> void vmdk_reopen_abort()
> {
>foreach (extent in s->extents) {
>bdrv_reopen_abort(extent->reopen_state);
>}
>free(rs);
> }

 Does the caller invoke bdrv_close(bs) after bdrv_prepare_reopen(bs,
 &rs)?
>>>
>>> No. Closing the old backend would be part of bdrv_reopen_commit.
>>>
>>> Do you have a use case where it would be helpful if the caller invoked
>>> bdrv_close?
>>
>> When the caller does bdrv_close() two BlockDriverStates are never open
>> for the same image file.  I thought this was a property we wanted.
>>
>> Also, in the block_set_hostcache case we need to reopen without
>> switching to a new BlockDriverState instance.  That means the reopen
>> needs to be in-place with respect to the BlockDriverState *bs pointer.
>> We cannot create a new instance.

[Qemu-devel] Ahmed

2011-08-09 Thread Ahmed

I am the son of the former minister of Guinea (Mariame Sy Diallo), but I
currently live in England. I found your address at the chamber of
commerce here in London, and I need your help to invest in Morocco, Algeria, or Tunisia.
If you are interested in my request, please contact me at my e-mail address
(ahmeddiall...@gmail.com) or on my number, (+447031869448). Thank you for your kind understanding. Ahmed.
For more details, I would like to know more about you:
 Your name ... Your current city ... Your profession
... ... ... ... .. Your phone number ... ... ... ...  Your age ...
.



[Qemu-devel] [Bug 816370] Re: compile error in QEMU 0.15.0-rc0 and 0.15.0-rc1

2011-08-09 Thread rowa
Hi Stefan,

here is the file qapi-visit-core.txt:

http://pastebin.com/6sG5PdXQ

Robert

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/816370

Title:
  compile error in QEMU 0.15.0-rc0 and  0.15.0-rc1

Status in QEMU:
  New

Bug description:
  I've tried to compile QEMU 0.15.0-rc0 on Ubuntu 10.10 „Maverick
  Meerkat“ but I get an error (for further details please see
  http://qemu-buch.de/d/Installation#Quellen_kompilieren ).

  ./configure --prefix=/usr --enable-spice  
--audio-card-list=ac97,es1370,sb16,adlib,gus,cs4231a
  make

GEN   config-host.h
GEN   trace.h
GEN   qemu-options.def
GEN   qapi-generated/qga-qapi-types.h
GEN   qapi-generated/qga-qapi-visit.h
GEN   qapi-generated/qga-qmp-marshal.c
  CC    qapi/qapi-visit-core.o
  In file included from qapi/qapi-visit-core.c:14:
  ./qapi/qapi-visit-core.h:31: error: expected declaration specifiers or ‘...’ 
before ‘Error’
  ./qapi/qapi-visit-core.h:32: error: expected declaration specifiers or ‘...’ 
before ‘Error’
  ./qapi/qapi-visit-core.h:34: error: expected declaration specifiers or ‘...’ 
before ‘Error’
  ./qapi/qapi-visit-core.h:35: error: expected declaration specifiers or ‘...’ 
before ‘Error’
  ./qapi/qapi-visit-core.h:36: error: expected declaration specifiers or ‘...’ 
before ‘Error’
  ./qapi/qapi-visit-core.h:39: error: expected declaration specifiers or ‘...’ 
before ‘Error’
  ./qapi/qapi-visit-core.h:41: error: expected declaration specifiers or ‘...’ 
before ‘Error’
  ./qapi/qapi-visit-core.h:42: error: expected declaration specifiers or ‘...’ 
before ‘Error’
  ./qapi/qapi-visit-core.h:43: error: expected declaration specifiers or ‘...’ 
before ‘Error’
  ./qapi/qapi-visit-core.h:45: error: expected declaration specifiers or ‘...’ 
before ‘Error’
  ./qapi/qapi-visit-core.h:49: error: expected declaration specifiers or ‘...’ 
before ‘Error’
  ./qapi/qapi-visit-core.h:50: error: expected declaration specifiers or ‘...’ 
before ‘Error’
  ./qapi/qapi-visit-core.h:53: error: expected declaration specifiers or ‘...’ 
before ‘Error’
  ./qapi/qapi-visit-core.h:54: error: expected declaration specifiers or ‘...’ 
before ‘Error’
  ./qapi/qapi-visit-core.h:58: error: expected declaration specifiers or ‘...’ 
before ‘Error’
  ./qapi/qapi-visit-core.h:59: error: expected declaration specifiers or ‘...’ 
before ‘Error’
  ./qapi/qapi-visit-core.h:61: error: expected declaration specifiers or ‘...’ 
before ‘Error’
  ./qapi/qapi-visit-core.h:62: error: expected declaration specifiers or ‘...’ 
before ‘Error’
  ./qapi/qapi-visit-core.h:63: error: expected declaration specifiers or ‘...’ 
before ‘Error’
  ./qapi/qapi-visit-core.h:64: error: expected declaration specifiers or ‘...’ 
before ‘Error’
  ./qapi/qapi-visit-core.h:65: error: expected declaration specifiers or ‘...’ 
before ‘Error’
  ./qapi/qapi-visit-core.h:67: error: expected declaration specifiers or ‘...’ 
before ‘Error’
  ./qapi/qapi-visit-core.h:68: error: expected declaration specifiers or ‘...’ 
before ‘Error’
  ./qapi/qapi-visit-core.h:70: error: expected declaration specifiers or ‘...’ 
before ‘Error’
  ./qapi/qapi-visit-core.h:71: error: expected declaration specifiers or ‘...’ 
before ‘Error’
  ./qapi/qapi-visit-core.h:72: error: expected declaration specifiers or ‘...’ 
before ‘Error’
  ./qapi/qapi-visit-core.h:73: error: expected declaration specifiers or ‘...’ 
before ‘Error’
  ./qapi/qapi-visit-core.h:74: error: expected declaration specifiers or ‘...’ 
before ‘Error’
  qapi/qapi-visit-core.c:17: error: expected declaration specifiers or ‘...’ 
before ‘Error’
  qapi/qapi-visit-core.c: In function ‘visit_start_handle’:
  qapi/qapi-visit-core.c:19: warning: implicit declaration of function 
‘error_is_set’
  qapi/qapi-visit-core.c:19: warning: nested extern declaration of 
‘error_is_set’
  qapi/qapi-visit-core.c:19: error: ‘errp’ undeclared (first use in this 
function)
  qapi/qapi-visit-core.c:19: error: (Each undeclared identifier is reported 
only once
  qapi/qapi-visit-core.c:19: error: for each function it appears in.)
  qapi/qapi-visit-core.c:20: error: too many arguments to function 
‘v->start_handle’
  qapi/qapi-visit-core.c: At top level:
  qapi/qapi-visit-core.c:24: error: expected declaration specifiers or ‘...’ 
before ‘Error’
  qapi/qapi-visit-core.c: In function ‘visit_end_handle’:
  qapi/qapi-visit-core.c:26: error: ‘errp’ undeclared (first use in this 
function)
  qapi/qapi-visit-core.c:27: error: too many arguments to function 
‘v->end_handle’
  qapi/qapi-visit-core.c: At top level:
  qapi/qapi-visit-core.c:32: error: expected declaration specifiers or ‘...’ 
before ‘Error’
  qapi/qapi-visit-core.c: In function ‘visit_start_struct’:
  qapi/qapi-visit-core.c:34: error: ‘errp’ undeclared (first use in this 
function)
  qapi/qapi-visit-core.c:35: error: too many arguments to function 
‘v->start_struct’
  qapi/qapi-visit-core.c: At top level:
  qapi/qapi

Re: [Qemu-devel] [Bug 816370] Re: compile error in QEMU 0.15.0-rc0 and 0.15.0-rc1

2011-08-09 Thread Michael Roth

On 08/09/2011 07:02 AM, rowa wrote:

Hi Stefan,

here is the file qapi-visit-core.txt:

http://pastebin.com/6sG5PdXQ

Robert



This seems to confirm ALSA's error.h is the culprit:

#
# 375 "./qemu-common.h" 2
#
# 18 "./qapi/qapi-types-core.h" 2
#
# 1 "/usr/include/alsa/error.h" 1
#
# 45 "/usr/include/alsa/error.h"
#
const char *snd_strerror(int errnum);
#
# 59 "/usr/include/alsa/error.h"
#
typedef void (*snd_lib_error_handler_t)(const char *file, int line, 
const char *function, int err, const char *fmt, ...) ;

#
extern snd_lib_error_handler_t snd_lib_error;
#
extern int snd_lib_error_set_handler(snd_lib_error_handler_t handler);
#
# 19 "./qapi/qapi-types-core.h" 2



Re: [Qemu-devel] [Bug 816370] Re: compile error in QEMU 0.15.0-rc0 and 0.15.0-rc1

2011-08-09 Thread Michael Roth
On 08/08/2011 11:46 PM, rowa wrote:
> This Bug is not fixed in QEMU 0.15.0
> :-(
>

Hi Rowa,

Take a look at Gerd Hoffmann's comment: it looks like Spice server 0.8.2 
pulls in some ALSA include directories that contain an error.h, which ends up 
clobbering the error.h that that particular C file is trying to pull in.

If you're compiling Spice server 0.8.2 from scratch, you can try this
patch:

http://cgit.freedesktop.org/spice/spice/commit/?h=0.8&id=54c660470a5aea19f799c5574cc0d4a707696712

Otherwise you can try Stefan's suggestion to confirm you see an ALSA 
directory containing an error.h file as a -I option in make's gcc 
invocation.

If either is the case, it looks like the "fix" is upstream in Spice and 
I'm guessing it'll be included in the next release. You might need to 
make do with a patched Spice in the meantime, unfortunately.


Re: [Qemu-devel] [PATCH v5 3/4] block: add block timer and block throttling algorithm

2011-08-09 Thread Stefan Hajnoczi
On Tue, Aug 9, 2011 at 5:17 AM, Zhi Yong Wu  wrote:
> Note:
>      1.) When bps/iops limits are specified to a small value such as 511 
> bytes/s, this VM will hang up. We are considering how to handle this scenario.

If an I/O request is larger than the limit itself then I think we
should let it through with a warning that the limit is too low.  This
reflects the fact that we don't split I/O requests into smaller
requests and the guest may give us 128 KB (or larger?) requests.  In
practice the lowest feasible limit is probably 256 KB or 512 KB.

> diff --git a/block.c b/block.c
> index 24a25d5..8fd6643 100644
> --- a/block.c
> +++ b/block.c
> @@ -29,6 +29,9 @@
>  #include "module.h"
>  #include "qemu-objects.h"
>
> +#include "qemu-timer.h"
> +#include "block/blk-queue.h"
> +
>  #ifdef CONFIG_BSD
>  #include 
>  #include 
> @@ -58,6 +61,13 @@ static int bdrv_read_em(BlockDriverState *bs, int64_t 
> sector_num,
>  static int bdrv_write_em(BlockDriverState *bs, int64_t sector_num,
>                          const uint8_t *buf, int nb_sectors);
>
> +static bool bdrv_exceed_bps_limits(BlockDriverState *bs, int nb_sectors,
> +        bool is_write, double elapsed_time, uint64_t *wait);
> +static bool bdrv_exceed_iops_limits(BlockDriverState *bs, bool is_write,
> +        double elapsed_time, uint64_t *wait);
> +static bool bdrv_exceed_io_limits(BlockDriverState *bs, int nb_sectors,
> +        bool is_write, uint64_t *wait);
> +
>  static QTAILQ_HEAD(, BlockDriverState) bdrv_states =
>     QTAILQ_HEAD_INITIALIZER(bdrv_states);
>
> @@ -90,6 +100,68 @@ int is_windows_drive(const char *filename)
>  }
>  #endif
>
> +/* throttling disk I/O limits */
> +void bdrv_io_limits_disable(BlockDriverState *bs)
> +{
> +    bs->io_limits_enabled = false;
> +    bs->req_from_queue    = false;
> +
> +    if (bs->block_queue) {
> +        qemu_block_queue_flush(bs->block_queue);
> +        qemu_del_block_queue(bs->block_queue);

When you fix the acb lifecycle in block-queue.c this will no longer be
safe.  Requests that are dispatched still have acbs that belong to
this queue.  It is simplest to keep the queue for the lifetime of the
BlockDriverState - it's just a list so it doesn't take up much memory.

> +    }
> +
> +    if (bs->block_timer) {
> +        qemu_del_timer(bs->block_timer);
> +        qemu_free_timer(bs->block_timer);

To prevent double frees:

bs->block_timer = NULL;

> +    }
> +
> +    bs->slice_start[0]   = 0;
> +    bs->slice_start[1]   = 0;
> +
> +    bs->slice_end[0]     = 0;
> +    bs->slice_end[1]     = 0;
> +}
> +
> +static void bdrv_block_timer(void *opaque)
> +{
> +    BlockDriverState *bs = opaque;
> +    BlockQueue *queue = bs->block_queue;
> +
> +    qemu_block_queue_flush(queue);
> +}
> +
> +void bdrv_io_limits_enable(BlockDriverState *bs)
> +{
> +    bs->req_from_queue = false;
> +
> +    bs->block_queue    = qemu_new_block_queue();
> +    bs->block_timer    = qemu_new_timer_ns(vm_clock, bdrv_block_timer, bs);
> +
> +    bs->slice_start[BLOCK_IO_LIMIT_READ]  = qemu_get_clock_ns(vm_clock);
> +    bs->slice_start[BLOCK_IO_LIMIT_WRITE] = qemu_get_clock_ns(vm_clock);
> +
> +    bs->slice_end[BLOCK_IO_LIMIT_READ]    =
> +                      qemu_get_clock_ns(vm_clock) + BLOCK_IO_SLICE_TIME;
> +    bs->slice_end[BLOCK_IO_LIMIT_WRITE]   =
> +                      qemu_get_clock_ns(vm_clock) + BLOCK_IO_SLICE_TIME;

The slice times could just be initialized to 0 here.  When
bdrv_exceed_io_limits() is called it will start a new slice.

> +}
> +
> +bool bdrv_io_limits_enabled(BlockDriverState *bs)
> +{
> +    BlockIOLimit *io_limits = &bs->io_limits;
> +    if ((io_limits->bps[BLOCK_IO_LIMIT_READ] == 0)
> +         && (io_limits->bps[BLOCK_IO_LIMIT_WRITE] == 0)
> +         && (io_limits->bps[BLOCK_IO_LIMIT_TOTAL] == 0)
> +         && (io_limits->iops[BLOCK_IO_LIMIT_READ] == 0)
> +         && (io_limits->iops[BLOCK_IO_LIMIT_WRITE] == 0)
> +         && (io_limits->iops[BLOCK_IO_LIMIT_TOTAL] == 0)) {
> +        return false;
> +    }
> +
> +    return true;
> +}
> +
>  /* check if the path starts with ":" */
>  static int path_has_protocol(const char *path)
>  {
> @@ -642,6 +714,11 @@ int bdrv_open(BlockDriverState *bs, const char 
> *filename, int flags,
>             bs->change_cb(bs->change_opaque, CHANGE_MEDIA);
>     }
>
> +    /* throttling disk I/O limits */
> +    if (bs->io_limits_enabled) {
> +        bdrv_io_limits_enable(bs);
> +    }
> +
>     return 0;
>
>  unlink_and_fail:
> @@ -680,6 +757,16 @@ void bdrv_close(BlockDriverState *bs)
>         if (bs->change_cb)
>             bs->change_cb(bs->change_opaque, CHANGE_MEDIA);
>     }
> +
> +    /* throttling disk I/O limits */
> +    if (bs->block_queue) {
> +        qemu_del_block_queue(bs->block_queue);
> +    }
> +
> +    if (bs->block_timer) {
> +        qemu_del_timer(bs->block_timer);
> +        qemu_free_timer(bs->block_timer);
> +    }
>  }
>
>  void bdrv_close_all(void)
> @@ -1312,6 +1399,15 @@ void bdrv_get_geometry_hint(BlockDriverState *bs

Re: [Qemu-devel] [Bug 816370] Re: compile error in QEMU 0.15.0-rc0 and 0.15.0-rc1

2011-08-09 Thread Stefan Hajnoczi
On Tue, Aug 9, 2011 at 4:01 PM, Michael Roth <816...@bugs.launchpad.net> wrote:
> Take a look at Gerd Hoffmann's comment: it looks like Spice server 0.8.2
> pulls in some ALSA includes that contain a error.h which ends up
> clobbering the error.h that that particular C file is trying to pull in.

Wow, I didn't realize that libraries place themselves in -I.  I don't
understand why.

Apparently pkg-config --cflags gthread-2.0 does this for glib too.

Stefan



[Qemu-devel] coroutine-win32.c:36: error: thread-local storage not supported for this target

2011-08-09 Thread Sebastian Herbszt

  CC    coroutine-win32.o
coroutine-win32.c:36: error: thread-local storage not supported for this target
coroutine-win32.c:37: error: thread-local storage not supported for this target
make: *** [coroutine-win32.o] Error 1

Sebastian




Re: [Qemu-devel] coroutine-win32.c:36: error: thread-local storage not supported for this target

2011-08-09 Thread Paolo Bonzini

On 08/09/2011 05:53 PM, Sebastian Herbszt wrote:

CC coroutine-win32.o
coroutine-win32.c:36: error: thread-local storage not supported for this
target
coroutine-win32.c:37: error: thread-local storage not supported for this
target
make: *** [coroutine-win32.o] Error 1


You need a newer GCC.  Alternatively we can change it to use 
TlsAlloc/TlsFree, but I would like to support __thread-based TLS always 
(for example for liburcu---userspace RCU...).


Paolo



Re: [Qemu-devel] [PATCH] hw/qxl: fir build break with newer spice

2011-08-09 Thread Alon Levy
On Tue, Aug 09, 2011 at 03:21:15PM +0300, Alon Levy wrote:
> ioport_write's val was changed from uint32_t to uint64_t, this
> broke two printfs. Use PRId64 instead of %d.

Nack, missed a 'd' on the second fix. And the subject has a typo.

> 
> Signed-off-by: Alon Levy 
> ---
>  hw/qxl.c |5 +++--
>  1 files changed, 3 insertions(+), 2 deletions(-)
> 
> diff --git a/hw/qxl.c b/hw/qxl.c
> index db7ae7a..2335e4b 100644
> --- a/hw/qxl.c
> +++ b/hw/qxl.c
> @@ -1189,7 +1189,7 @@ async_common:
>  }
>  d->current_async = orig_io_port;
>  qemu_mutex_unlock(&d->async_lock);
> -dprint(d, 2, "start async %d (%d)\n", io_port, val);
> +dprint(d, 2, "start async %d (%"PRId64")\n", io_port, val);
>  break;
>  default:
>  break;
> @@ -1305,7 +1305,8 @@ async_common:
>  break;
>  }
>  case QXL_IO_FLUSH_SURFACES_ASYNC:
> -dprint(d, 1, "QXL_IO_FLUSH_SURFACES_ASYNC (%d) (%s, s#=%d, 
> res#=%d)\n",
> +dprint(d, 1, "QXL_IO_FLUSH_SURFACES_ASYNC"
> + " (%"PRId64"d) (%s, s#=%d, res#=%d)\n",
> val, qxl_mode_to_string(d->mode), d->guest_surfaces.count,
> d->num_free_res);
>  qxl_spice_flush_surfaces_async(d);
> -- 
> 1.7.6
> 
> 



[Qemu-devel] [PATCH 0/3] PPC: Support -M pseries for PR KVM

2011-08-09 Thread Alexander Graf
We have had support for -M pseries with TCG for a while now, but are missing
integration with KVM. While the KVM HV target, which is predestined for -M
pseries, is slightly trickier to integrate and still needs to evolve a bit,
we can easily support it with PR KVM.

So this patch set adds support for running -M pseries on KVM, provided the
kernel has the patch set applied that I just submitted to the kvm list.


Alex

Alexander Graf (3):
  KVM: update kernel headers
  PPC: Enable to use PAPR with PR style KVM
  PPC: SPAPR: Use KVM function for time info

 hw/spapr.c   |   22 ---
 linux-headers/asm-powerpc/kvm.h  |   23 
 linux-headers/asm-x86/kvm_para.h |   14 ++
 linux-headers/linux/kvm.h|   25 -
 linux-headers/linux/kvm_para.h   |1 +
 target-ppc/kvm.c |   53 ++
 target-ppc/kvm_ppc.h |5 +++
 7 files changed, 130 insertions(+), 13 deletions(-)




[Qemu-devel] [PATCH 2/3] PPC: Enable to use PAPR with PR style KVM

2011-08-09 Thread Alexander Graf
When running PR style KVM, we need to tell the kernel that we want
to run in PAPR mode now. This means that we need to pass some more
register information down and enable PAPR mode. We also need to align
the HTAB to an htab_size boundary.

Using this patch, -M pseries works with KVM even on non-HV KVM
implementations, as long as the preceding kernel patches are in.

Signed-off-by: Alexander Graf 
---
 hw/spapr.c   |   14 -
 target-ppc/kvm.c |   53 ++
 target-ppc/kvm_ppc.h |5 
 3 files changed, 71 insertions(+), 1 deletions(-)

diff --git a/hw/spapr.c b/hw/spapr.c
index 07b2165..d56697a 100644
--- a/hw/spapr.c
+++ b/hw/spapr.c
@@ -38,6 +38,9 @@
 #include "hw/spapr_vio.h"
 #include "hw/xics.h"
 
+#include "kvm.h"
+#include "kvm_ppc.h"
+
 #include 
 
 #define KERNEL_LOAD_ADDR0x
@@ -336,12 +339,21 @@ static void ppc_spapr_init(ram_addr_t ram_size,
  * later we should probably make it scale to the size of guest
  * RAM */
 spapr->htab_size = 1ULL << (pteg_shift + 7);
-spapr->htab = qemu_malloc(spapr->htab_size);
+spapr->htab = qemu_memalign(spapr->htab_size, spapr->htab_size);
 
 for (env = first_cpu; env != NULL; env = env->next_cpu) {
 env->external_htab = spapr->htab;
 env->htab_base = -1;
 env->htab_mask = spapr->htab_size - 1;
+
+/* Tell KVM that we're in PAPR mode */
+env->spr[SPR_SDR1] = (unsigned long)spapr->htab |
+ ((pteg_shift + 7) - 18);
+env->spr[SPR_HIOR] = 0;
+
+if (kvm_enabled()) {
+kvmppc_set_papr(env);
+}
 }
 
 filename = qemu_find_file(QEMU_FILE_TYPE_BIOS, "spapr-rtas.bin");
diff --git a/target-ppc/kvm.c b/target-ppc/kvm.c
index 219e7a7..02f958f 100644
--- a/target-ppc/kvm.c
+++ b/target-ppc/kvm.c
@@ -29,6 +29,10 @@
 #include "cpu.h"
 #include "device_tree.h"
 
+#include "hw/sysbus.h"
+#include "hw/spapr.h"
+#include "hw/spapr_vio.h"
+
 //#define DEBUG_KVM
 
 #ifdef DEBUG_KVM
@@ -455,6 +459,14 @@ int kvm_arch_handle_exit(CPUState *env, struct kvm_run 
*run)
 dprintf("handle halt\n");
 ret = kvmppc_handle_halt(env);
 break;
+#if defined(CONFIG_FDT) && defined(TARGET_PPC64)
+case KVM_EXIT_PAPR_HCALL:
+dprintf("handle PAPR hypercall\n");
+run->papr_hcall.ret = spapr_hypercall(env, run->papr_hcall.nr,
+  run->papr_hcall.args);
+ret = 1;
+break;
+#endif
 default:
 fprintf(stderr, "KVM: unknown exit reason %d\n", run->exit_reason);
 ret = -1;
@@ -606,6 +618,47 @@ int kvmppc_get_hypercall(CPUState *env, uint8_t *buf, int 
buf_len)
 return 0;
 }
 
+void kvmppc_set_papr(CPUState *env)
+{
+struct kvm_enable_cap cap;
+struct kvm_sregs sregs;
+int ret;
+
+memset(&cap, 0, sizeof(cap));
+cap.cap = KVM_CAP_PPC_PAPR;
+ret = kvm_vcpu_ioctl(env, KVM_ENABLE_CAP, &cap);
+
+if (ret) {
+goto fail;
+}
+
+/*
+ * XXX We set HIOR here. It really should be a qdev property of
+ * the CPU node, but we don't have CPUs converted to qdev yet.
+ *
+ * Once we have qdev CPUs, move HIOR to a qdev property and
+ * remove this chunk.
+ */
+memset(&sregs, 0, sizeof(sregs));
+ret = kvm_vcpu_ioctl(env, KVM_GET_SREGS, &sregs);
+if (ret) {
+goto fail;
+}
+
+sregs.u.s.flags |= KVM_SREGS_S_HIOR;
+sregs.u.s.hior = env->spr[SPR_HIOR];
+sregs.u.s.sdr1 = env->spr[SPR_SDR1];
+ret = kvm_vcpu_ioctl(env, KVM_SET_SREGS, &sregs);
+if (ret) {
+goto fail;
+}
+
+return;
+
+fail:
+cpu_abort(env, "This KVM version does not support PAPR\n");
+}
+
 bool kvm_arch_stop_on_emulation_error(CPUState *env)
 {
 return true;
diff --git a/target-ppc/kvm_ppc.h b/target-ppc/kvm_ppc.h
index 76f98d9..809b8b4 100644
--- a/target-ppc/kvm_ppc.h
+++ b/target-ppc/kvm_ppc.h
@@ -17,6 +17,7 @@ uint32_t kvmppc_get_tbfreq(void);
 uint64_t kvmppc_get_clockfreq(void);
 int kvmppc_get_hypercall(CPUState *env, uint8_t *buf, int buf_len);
 int kvmppc_set_interrupt(CPUState *env, int irq, int level);
+void kvmppc_set_papr(CPUState *env);
 
 #else
 
@@ -40,6 +41,10 @@ static inline int kvmppc_set_interrupt(CPUState *env, int 
irq, int level)
 return -1;
 }
 
+static void kvmppc_set_papr(CPUState *env)
+{
+}
+
 #endif
 
 #ifndef CONFIG_KVM
-- 
1.6.0.2




[Qemu-devel] [PATCH 1/3] KVM: update kernel headers

2011-08-09 Thread Alexander Graf
This patch updates the kvm kernel headers to the latest version.

Signed-off-by: Alexander Graf 
---
 linux-headers/asm-powerpc/kvm.h  |   23 +++
 linux-headers/asm-x86/kvm_para.h |   14 ++
 linux-headers/linux/kvm.h|   25 +
 linux-headers/linux/kvm_para.h   |1 +
 4 files changed, 55 insertions(+), 8 deletions(-)

diff --git a/linux-headers/asm-powerpc/kvm.h b/linux-headers/asm-powerpc/kvm.h
index 777d307..579e219 100644
--- a/linux-headers/asm-powerpc/kvm.h
+++ b/linux-headers/asm-powerpc/kvm.h
@@ -22,6 +22,10 @@
 
 #include 
 
+/* Select powerpc specific features in  */
+#define __KVM_HAVE_SPAPR_TCE
+#define __KVM_HAVE_PPC_SMT
+
 struct kvm_regs {
__u64 pc;
__u64 cr;
@@ -145,6 +149,12 @@ struct kvm_regs {
 #define KVM_SREGS_E_UPDATE_DBSR (1 << 3)
 
 /*
+ * Book3S special bits to indicate contents in the struct by maintaining
+ * backwards compatibility with older structs. If adding a new field,
+ * please make sure to add a flag for that new field */
+#define KVM_SREGS_S_HIOR   (1 << 0)
+
+/*
  * In KVM_SET_SREGS, reserved/pad fields must be left untouched from a
  * previous KVM_GET_REGS.
  *
@@ -169,6 +179,8 @@ struct kvm_sregs {
__u64 ibat[8];
__u64 dbat[8];
} ppc32;
+   __u64 flags; /* KVM_SREGS_S_ */
+   __u64 hior;
} s;
struct {
union {
@@ -272,4 +284,15 @@ struct kvm_guest_debug_arch {
 #define KVM_INTERRUPT_UNSET -2U
 #define KVM_INTERRUPT_SET_LEVEL -3U
 
+/* for KVM_CAP_SPAPR_TCE */
+struct kvm_create_spapr_tce {
+   __u64 liobn;
+   __u32 window_size;
+};
+
+/* for KVM_ALLOCATE_RMA */
+struct kvm_allocate_rma {
+   __u64 rma_size;
+};
+
 #endif /* __LINUX_KVM_POWERPC_H */
diff --git a/linux-headers/asm-x86/kvm_para.h b/linux-headers/asm-x86/kvm_para.h
index 834d71e..f2ac46a 100644
--- a/linux-headers/asm-x86/kvm_para.h
+++ b/linux-headers/asm-x86/kvm_para.h
@@ -21,6 +21,7 @@
  */
 #define KVM_FEATURE_CLOCKSOURCE2 3
 #define KVM_FEATURE_ASYNC_PF   4
+#define KVM_FEATURE_STEAL_TIME 5
 
 /* The last 8 bits are used to indicate how to interpret the flags field
  * in pvclock structure. If no bits are set, all flags are ignored.
@@ -30,10 +31,23 @@
 #define MSR_KVM_WALL_CLOCK  0x11
 #define MSR_KVM_SYSTEM_TIME 0x12
 
+#define KVM_MSR_ENABLED 1
 /* Custom MSRs falls in the range 0x4b564d00-0x4b564dff */
 #define MSR_KVM_WALL_CLOCK_NEW  0x4b564d00
 #define MSR_KVM_SYSTEM_TIME_NEW 0x4b564d01
 #define MSR_KVM_ASYNC_PF_EN 0x4b564d02
+#define MSR_KVM_STEAL_TIME  0x4b564d03
+
+struct kvm_steal_time {
+   __u64 steal;
+   __u32 version;
+   __u32 flags;
+   __u32 pad[12];
+};
+
+#define KVM_STEAL_ALIGNMENT_BITS 5
+#define KVM_STEAL_VALID_BITS ((-1ULL << (KVM_STEAL_ALIGNMENT_BITS + 1)))
+#define KVM_STEAL_RESERVED_MASK (((1 << KVM_STEAL_ALIGNMENT_BITS) - 1 ) << 1)
 
 #define KVM_MAX_MMU_OP_BATCH   32
 
diff --git a/linux-headers/linux/kvm.h b/linux-headers/linux/kvm.h
index fc63b73..2062375 100644
--- a/linux-headers/linux/kvm.h
+++ b/linux-headers/linux/kvm.h
@@ -161,6 +161,7 @@ struct kvm_pit_config {
 #define KVM_EXIT_NMI  16
 #define KVM_EXIT_INTERNAL_ERROR   17
 #define KVM_EXIT_OSI  18
+#define KVM_EXIT_PAPR_HCALL  19
 
 /* For KVM_EXIT_INTERNAL_ERROR */
 #define KVM_INTERNAL_ERROR_EMULATION 1
@@ -264,6 +265,11 @@ struct kvm_run {
struct {
__u64 gprs[32];
} osi;
+   struct {
+   __u64 nr;
+   __u64 ret;
+   __u64 args[9];
+   } papr_hcall;
/* Fix the size of the union. */
char padding[256];
};
@@ -457,7 +463,7 @@ struct kvm_ppc_pvinfo {
 #define KVM_CAP_VAPIC 6
 #define KVM_CAP_EXT_CPUID 7
 #define KVM_CAP_CLOCKSOURCE 8
-#define KVM_CAP_NR_VCPUS 9   /* returns max vcpus per vm */
+#define KVM_CAP_NR_VCPUS 9   /* returns recommended max vcpus per vm */
 #define KVM_CAP_NR_MEMSLOTS 10   /* returns max memory slots per vm */
 #define KVM_CAP_PIT 11
 #define KVM_CAP_NOP_IO_DELAY 12
@@ -544,6 +550,12 @@ struct kvm_ppc_pvinfo {
 #define KVM_CAP_TSC_CONTROL 60
 #define KVM_CAP_GET_TSC_KHZ 61
 #define KVM_CAP_PPC_BOOKE_SREGS 62
+#define KVM_CAP_SPAPR_TCE 63
+#define KVM_CAP_PPC_SMT 64
+#define KVM_CAP_PPC_RMA 65
+#define KVM_CAP_MAX_VCPUS 66   /* returns max vcpus per vm */
+#define KVM_CAP_PPC_HIOR 67
+#define KVM_CAP_PPC_PAPR 68
 
 #ifdef KVM_CAP_IRQ_ROUTING
 
@@ -746,6 +758,9 @@ struct kvm_clock_data {
 /* Available with KVM_CAP_XCRS */
 #define KVM_GET_XCRS _IOR(KVMIO,  0xa6, struct kvm_xcrs)
 #define KVM_SET_XCRS _IOW(KVMIO,  0xa7, struct kvm_xcrs)
+#define KVM_CREATE_SPAPR_TCE _IOW(KVMIO,  0xa8, struct kvm_create_spapr_tce)

[Qemu-devel] [PATCH 3/3] PPC: SPAPR: Use KVM function for time info

2011-08-09 Thread Alexander Graf
One of the things we can't fake on PPC is the timer speed. So
we need to extract the frequency information from the host and
put it back into the guest device tree.

Luckily, we already have functions for that from the non-pseries
targets, so all we need to do is to connect the dots and the guest
suddenly gets to know its real timer speeds.

Signed-off-by: Alexander Graf 
---
 hw/spapr.c |8 
 1 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/hw/spapr.c b/hw/spapr.c
index d56697a..a73f38a 100644
--- a/hw/spapr.c
+++ b/hw/spapr.c
@@ -140,6 +140,8 @@ static void *spapr_create_fdt_skel(const char *cpu_model,
 char *nodename;
 uint32_t segs[] = {cpu_to_be32(28), cpu_to_be32(40),
   0xffffffff, 0xffffffff};
+uint32_t tbfreq = kvm_enabled() ? kvmppc_get_tbfreq() : TIMEBASE_FREQ;
+uint32_t cpufreq = kvm_enabled() ? kvmppc_get_clockfreq() : 10;
 
 if (asprintf(&nodename, "%s@%x", modelname, index) < 0) {
 fprintf(stderr, "Allocation failure\n");
@@ -158,10 +160,8 @@ static void *spapr_create_fdt_skel(const char *cpu_model,
 env->dcache_line_size)));
 _FDT((fdt_property_cell(fdt, "icache-block-size",
 env->icache_line_size)));
-_FDT((fdt_property_cell(fdt, "timebase-frequency", TIMEBASE_FREQ)));
-/* Hardcode CPU frequency for now.  It's kind of arbitrary on
- * full emu, for kvm we should copy it from the host */
-_FDT((fdt_property_cell(fdt, "clock-frequency", 10)));
+_FDT((fdt_property_cell(fdt, "timebase-frequency", tbfreq)));
+_FDT((fdt_property_cell(fdt, "clock-frequency", cpufreq)));
 _FDT((fdt_property_cell(fdt, "ibm,slb-size", env->slb_nr)));
 _FDT((fdt_property(fdt, "ibm,pft-size",
   pft_size_prop, sizeof(pft_size_prop))));
-- 
1.6.0.2




Re: [Qemu-devel] coroutine-win32.c:36: error: thread-local storage not supported for this target

2011-08-09 Thread Sebastian Herbszt

Paolo Bonzini wrote:

On 08/09/2011 05:53 PM, Sebastian Herbszt wrote:

CC coroutine-win32.o
coroutine-win32.c:36: error: thread-local storage not supported for this
target
coroutine-win32.c:37: error: thread-local storage not supported for this
target
make: *** [coroutine-win32.o] Error 1


You need a newer GCC.  Alternatively we can change it to use 
TlsAlloc/TlsFree, but I would like to support __thread-based TLS always 
(for example for liburcu---userspace RCU...).


Paolo


I guess "gcc version 3.4.5 (mingw32 special)" is no longer supported then?
What's the minimum gcc version needed now?

Sebastian




[Qemu-devel] VNC server running on `127.0.0.1:5900'

2011-08-09 Thread Nithish R
Hi,

I installed the latest version of Qemu 0.15.0 today. Until now I was using
0.14.1. Now after installing this new version,
I am not able to boot into any guest OS that I had installed before. Also
while trying to load any ISO file or img file, I get
an output as:
:~/Qemu$ qemu -m 512 -hda fedora.img -cdrom fedora.iso -boot d
VNC server running on `127.0.0.1:5900'

How do I enable my proper booting of guest OS?

Nithish R


Re: [Qemu-devel] [PATCH] e1000: Do reset when E1000_CTRL_RST bit is set.

2011-08-09 Thread Peter Maydell
On 9 August 2011 15:10, Anthony PERARD  wrote:
> On Fri, Aug 5, 2011 at 17:53, Anthony Liguori  wrote:
>>
>> You'll break some GCCs with -Wall -Werror with this.  Please do:
>>
>> if ((val & E1000_CTRL_RST)) {
>
> :(, I never heard of this. But OK, I will do that.

Please don't unless Anthony L can actually demonstrate a compiler
which has this property and doesn't already complain about all
the existing equivalent code in qemu...

-- PMM



Re: [Qemu-devel] build failure with coroutine-gthread

2011-08-09 Thread Blue Swirl
On Mon, Aug 8, 2011 at 9:29 AM, Stefan Hajnoczi  wrote:
> On Mon, Aug 8, 2011 at 10:01 AM, Aneesh Kumar K.V
>  wrote:
>>
>>
>>  LINK  qemu-ga
>> coroutine-gthread.o: In function `coroutine_init':
>> /home/opensource/sources/qemu/qemu-upstream/coroutine-gthread.c:39: 
>> undefined reference to `g_thread_init'
>> collect2: ld returned 1 exit status
>>
>> The below patch fixes the failure.  I also added the patch in the VirtFS
>> pull request.
>
> It appears coroutine-core v7 was merged instead of v8.  The
> differences are minor:
>  * Bisectability: introduce gthread implementation before ucontext/fibers
>  * Depend on gthread, not just glib, for multithreaded programming
>
> Here is the patchwork link to the gthread dependency patch in case a
> committer wants to merge it immediately:
> http://patchwork.ozlabs.org/patch/106810/

Coroutines only need gthreads when both ucontext and win32 versions
are unavailable. This is handled better by my patch:
http://lists.nongnu.org/archive/html/qemu-devel/2011-08/msg00998.html



Re: [Qemu-devel] [PULL 0/7] ARM patch queue (for master)

2011-08-09 Thread Edgar E. Iglesias
On Mon, Aug 08, 2011 at 10:54:47AM +0100, Peter Maydell wrote:
> Ping?

I've applied this, thanks.

Cheers



Re: [Qemu-devel] build failure with coroutine-gthread

2011-08-09 Thread Aneesh Kumar K.V
On Tue, 9 Aug 2011 17:28:17 +, Blue Swirl  wrote:
> On Mon, Aug 8, 2011 at 9:29 AM, Stefan Hajnoczi  wrote:
> > On Mon, Aug 8, 2011 at 10:01 AM, Aneesh Kumar K.V
> >  wrote:
> >>
> >>
> >>  LINK  qemu-ga
> >> coroutine-gthread.o: In function `coroutine_init':
> >> /home/opensource/sources/qemu/qemu-upstream/coroutine-gthread.c:39: 
> >> undefined reference to `g_thread_init'
> >> collect2: ld returned 1 exit status
> >>
> >> The below patch fixes the failure.  I also added the patch in the VirtFS
> >> pull request.
> >
> > It appears coroutine-core v7 was merged instead of v8.  The
> > differences are minor:
> >  * Bisectability: introduce gthread implementation before ucontext/fibers
> >  * Depend on gthread, not just glib, for multithreaded programming
> >
> > Here is the patchwork link to the gthread dependency patch in case a
> > committer wants to merge it immediately:
> > http://patchwork.ozlabs.org/patch/106810/
> 
> Coroutines only need gthreads when both ucontext and win32 versions
> are unavailable. This is handled better by my patch:
> http://lists.nongnu.org/archive/html/qemu-devel/2011-08/msg00998.html

We would soon need that for VirtFS. So that will be another check of 
if (linux && attr)

-aneesh



[Qemu-devel] [PATCH 0/3] linux-user: Implement setxattr/getxattr/removexattr syscalls

2011-08-09 Thread An-Cheng Huang
These patches implement the setxattr, getxattr, and removexattr syscalls.
Since libattr uses indirect syscalls for these, the fix for the indirect
syscall handling on MIPS is needed for these to work.

An-Cheng Huang (3):
  linux-user: Fix MIPS indirect syscall handling
  linux-user: Verify MIPS syscall arguments
  linux-user: Implement setxattr/getxattr/removexattr syscalls

 linux-user/main.c|   24 -
 linux-user/syscall.c |   54 +++--
 2 files changed, 69 insertions(+), 9 deletions(-)

-- 
1.7.2.5



[Qemu-devel] [PATCH 1/3] linux-user: Fix MIPS indirect syscall handling

2011-08-09 Thread An-Cheng Huang
Change the number of arguments for MIPS sys_syscall from 0 to 8. This
allows arguments for indirect syscalls to be processed correctly.

Signed-off-by: An-Cheng Huang 
---
 linux-user/main.c |2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/linux-user/main.c b/linux-user/main.c
index 8e15474..cd6750b 100644
--- a/linux-user/main.c
+++ b/linux-user/main.c
@@ -1669,7 +1669,7 @@ void cpu_loop(CPUPPCState *env)
 #define MIPS_SYS(name, args) args,
 
 static const uint8_t mips_syscall_args[] = {
-   MIPS_SYS(sys_syscall, 0)    /* 4000 */
+   MIPS_SYS(sys_syscall, 8)    /* 4000 */
MIPS_SYS(sys_exit   , 1)
MIPS_SYS(sys_fork   , 0)
MIPS_SYS(sys_read   , 3)
-- 
1.7.2.5



[Qemu-devel] [PATCH 2/3] linux-user: Verify MIPS syscall arguments

2011-08-09 Thread An-Cheng Huang
On MIPS, some syscall arguments are taken from the stack. This patch adds
verification such that do_syscall() is only invoked if all arguments
have been successfully taken from the stack.

Signed-off-by: An-Cheng Huang 
---
 linux-user/main.c |   22 +-
 1 files changed, 17 insertions(+), 5 deletions(-)

diff --git a/linux-user/main.c b/linux-user/main.c
index cd6750b..77e8f87 100644
--- a/linux-user/main.c
+++ b/linux-user/main.c
@@ -2090,11 +2090,22 @@ void cpu_loop(CPUMIPSState *env)
 sp_reg = env->active_tc.gpr[29];
 switch (nb_args) {
 /* these arguments are taken from the stack */
-/* FIXME - what to do if get_user() fails? */
-case 8: get_user_ual(arg8, sp_reg + 28);
-case 7: get_user_ual(arg7, sp_reg + 24);
-case 6: get_user_ual(arg6, sp_reg + 20);
-case 5: get_user_ual(arg5, sp_reg + 16);
+case 8:
+if ((ret = get_user_ual(arg8, sp_reg + 28)) != 0) {
+goto done_syscall;
+}
+case 7:
+if ((ret = get_user_ual(arg7, sp_reg + 24)) != 0) {
+goto done_syscall;
+}
+case 6:
+if ((ret = get_user_ual(arg6, sp_reg + 20)) != 0) {
+goto done_syscall;
+}
+case 5:
+if ((ret = get_user_ual(arg5, sp_reg + 16)) != 0) {
+goto done_syscall;
+}
 default:
 break;
 }
@@ -2105,6 +2116,7 @@ void cpu_loop(CPUMIPSState *env)
  env->active_tc.gpr[7],
  arg5, arg6, arg7, arg8);
 }
+done_syscall:
 if (ret == -TARGET_QEMU_ESIGRETURN) {
 /* Returning from a successful sigreturn syscall.
Avoid clobbering register state.  */
-- 
1.7.2.5



[Qemu-devel] [PATCH 3/3] linux-user: Implement setxattr/getxattr/removexattr syscalls

2011-08-09 Thread An-Cheng Huang
This patch implements the setxattr, getxattr, and removexattr syscalls
if CONFIG_ATTR is enabled.

Note that since libattr uses indirect syscalls for these, this change
depends on the fix for indirect syscall handling on MIPS.

Signed-off-by: An-Cheng Huang 
---
 linux-user/syscall.c |   54 +++--
 1 files changed, 51 insertions(+), 3 deletions(-)

diff --git a/linux-user/syscall.c b/linux-user/syscall.c
index 73f9baa..afc6b5d 100644
--- a/linux-user/syscall.c
+++ b/linux-user/syscall.c
@@ -70,6 +70,9 @@ int __clone2(int (*fn)(void *), void *child_stack_base,
 #ifdef CONFIG_EPOLL
 #include 
 #endif
+#ifdef CONFIG_ATTR
+#include 
+#endif
 
 #define termios host_termios
 #define winsize host_winsize
@@ -7633,22 +7636,67 @@ abi_long do_syscall(void *cpu_env, int num, abi_long 
arg1,
 #endif
 break;
 #endif
+#ifdef CONFIG_ATTR
 #ifdef TARGET_NR_setxattr
-case TARGET_NR_setxattr:
 case TARGET_NR_lsetxattr:
 case TARGET_NR_fsetxattr:
-case TARGET_NR_getxattr:
 case TARGET_NR_lgetxattr:
 case TARGET_NR_fgetxattr:
 case TARGET_NR_listxattr:
 case TARGET_NR_llistxattr:
 case TARGET_NR_flistxattr:
-case TARGET_NR_removexattr:
 case TARGET_NR_lremovexattr:
 case TARGET_NR_fremovexattr:
 ret = -TARGET_EOPNOTSUPP;
 break;
+case TARGET_NR_setxattr:
+{
+void *p, *n, *v;
+p = lock_user_string(arg1);
+n = lock_user_string(arg2);
+v = lock_user(VERIFY_READ, arg3, arg4, 1);
+if (p && n && v) {
+ret = get_errno(setxattr(p, n, v, arg4, arg5));
+} else {
+ret = -TARGET_EFAULT;
+}
+unlock_user(p, arg1, 0);
+unlock_user(n, arg2, 0);
+unlock_user(v, arg3, 0);
+}
+break;
+case TARGET_NR_getxattr:
+{
+void *p, *n, *v;
+p = lock_user_string(arg1);
+n = lock_user_string(arg2);
+v = lock_user(VERIFY_WRITE, arg3, arg4, 0);
+if (p && n && v) {
+ret = get_errno(getxattr(p, n, v, arg4));
+} else {
+ret = -TARGET_EFAULT;
+}
+unlock_user(p, arg1, 0);
+unlock_user(n, arg2, 0);
+unlock_user(v, arg3, arg4);
+}
+break;
+case TARGET_NR_removexattr:
+{
+void *p, *n;
+p = lock_user_string(arg1);
+n = lock_user_string(arg2);
+if (p && n) {
+ret = get_errno(removexattr(p, n));
+} else {
+ret = -TARGET_EFAULT;
+}
+unlock_user(p, arg1, 0);
+unlock_user(n, arg2, 0);
+}
+break;
 #endif
+#endif /* CONFIG_ATTR */
 #ifdef TARGET_NR_set_thread_area
 case TARGET_NR_set_thread_area:
 #if defined(TARGET_MIPS)
-- 
1.7.2.5



Re: [Qemu-devel] Safely reopening image files by stashing fds

2011-08-09 Thread Blue Swirl
On Tue, Aug 9, 2011 at 12:24 PM, Kevin Wolf  wrote:
> Am 09.08.2011 14:00, schrieb Stefan Hajnoczi:
>> On Tue, Aug 09, 2011 at 01:39:13PM +0200, Kevin Wolf wrote:
>>> Am 09.08.2011 12:56, schrieb Stefan Hajnoczi:
 On Tue, Aug 9, 2011 at 11:50 AM, Stefan Hajnoczi  
 wrote:
> On Tue, Aug 9, 2011 at 11:35 AM, Kevin Wolf  wrote:
>> Am 09.08.2011 12:25, schrieb Stefan Hajnoczi:
>>> On Mon, Aug 8, 2011 at 4:16 PM, Kevin Wolf  wrote:
 Am 08.08.2011 16:49, schrieb Stefan Hajnoczi:
> On Fri, Aug 5, 2011 at 10:48 AM, Kevin Wolf  wrote:
>> Am 05.08.2011 11:29, schrieb Stefan Hajnoczi:
>>> On Fri, Aug 5, 2011 at 10:07 AM, Kevin Wolf  
>>> wrote:
 Am 05.08.2011 10:40, schrieb Stefan Hajnoczi:
> We've discussed safe methods for reopening image files (e.g. 
> useful for
> changing the hostcache parameter).  The problem is that closing 
> the file first
> and then opening it again exposes us to the error case where the 
> open fails.
> At that point we cannot get to the file anymore and our options 
> are to
> terminate QEMU, pause the VM, or offline the block device.
>
> This window of vulnerability can be eliminated by keeping the 
> file descriptor
> around and falling back to it should the open fail.
>
> The challenge for the file descriptor approach is that image 
> formats, like
> VMDK, can span multiple files.  Therefore the solution is not as 
> simple as
> stashing a single file descriptor and reopening from it.

 So far I agree. The rest I believe is wrong because you can't 
 assume
 that every backend uses file descriptors. The qemu block layer is 
 based
 on BlockDriverStates, not fds. They are a concept that should be 
 hidden
 in raw-posix.

 I think something like this could do:

 struct BDRVReopenState {
    BlockDriverState *bs;
    /* can be extended by block drivers */
 };

 .bdrv_reopen(BlockDriverState *bs, BDRVReopenState **reopen_state, 
 int
 flags);
 .bdrv_reopen_commit(BDRVReopenState *reopen_state);
 .bdrv_reopen_abort(BDRVReopenState *reopen_state);

 raw-posix would store the old file descriptor in its reopen_state. 
 On
 commit, it closes the old descriptors, on abort it reverts to the 
 old
 one and closes the newly opened one.

 Makes things a bit more complicated than the simple bdrv_reopen I 
 had in
 mind before, but it allows VMDK to get an all-or-nothing semantics.
>>>
>>> Can you show how bdrv_reopen() would use these new interfaces?  I'm
>>> not 100% clear on the idea.
>>
>> Well, you wouldn't only call bdrv_reopen, but also either
>> bdrv_reopen_commit/abort (for the top-level caller we can have a 
>> wrapper
>> function that does both, but that's syntactic sugar).
>>
>> For example we would have:
>>
>> int vmdk_reopen()
>
> .bdrv_reopen() is a confusing name for this operation because it does
> not reopen anything.  bdrv_prepare_reopen() might be clearer.

 Makes sense.

>
>> {
>>    *((VMDKReopenState**) rs) = malloc();
>>
>>    foreach (extent in s->extents) {
>>        ret = bdrv_reopen(extent->file, &extent->reopen_state)
>>        if (ret < 0)
>>            goto fail;
>>    }
>>    return 0;
>>
>> fail:
>>    foreach (extent in rs->already_reopened) {
>>        bdrv_reopen_abort(extent->reopen_state);
>>    }
>>    return ret;
>> }
>
>> void vmdk_reopen_commit()
>> {
>>    foreach (extent in s->extents) {
>>        bdrv_reopen_commit(extent->reopen_state);
>>    }
>>    free(rs);
>> }
>>
>> void vmdk_reopen_abort()
>> {
>>    foreach (extent in s->extents) {
>>        bdrv_reopen_abort(extent->reopen_state);
>>    }
>>    free(rs);
>> }
>
> Does the caller invoke bdrv_close(bs) after bdrv_prepare_reopen(bs,
> &rs)?

 No. Closing the old backend would be part of bdrv_reopen_commit.

 Do you have a use case where it would be helpful if the caller invoked
 bdrv_close?
>>>
>>> When the caller does bdrv_close() two BlockDriverStates are never open

[Qemu-devel] [RFC] PowerPC Mac OS on Qemu

2011-08-09 Thread William Hahne
Hello,

I am a Google Summer of Code student working on getting PowerPC Mac OS
running on Qemu. While I am not finished I need to upstream what I currently
have before the end of Summer of Code. My patch is for OpenBIOS but I am
cross-posting to both Qemu and OpenBIOS mailing lists since it is useful to
get feedback from both.

One part of the patch I am particularly concerned about is the patch to
arch/ppc/qemu/init.c to provide a CPU and Timebase frequency. Qemu doesn't
report a CPU frequency and the reported Timebase frequency is too high and
causes the Mac OS scheduler to break. I hard-coded values that work but this
seems like an unacceptable long-term solution to me. A simple test for CPU
frequency might be the best solution here but seems a little redundant as
Mac OS later runs its own test, seemingly only relying on these values
during initialization (I am not 100% sure of this since it crashes before
reaching that point.) Feedback here would be especially appreciated.

Another thing to note is in drivers/adb_kbd.c. "get-key-map" which returns a
map of the current pressed keys on the keyboard just returns a dummy value.
I felt it was a waste of time making a full implementation when all it
really gets you is the ability to use keyboard shortcuts for verbose or
single user mode.

Other than those specific concerns I welcome general feedback on the patch,
since as I said I am hoping to get it in before the end of Summer of Code.

Thanks,
William Hahne


PowerPC_Mac_OS.patch
Description: Binary data

