On Tue, 2013-05-21 at 13:55 +0200, Andreas Färber wrote:
> Hi,
>
> On 21.05.2013 13:33, Nicholas Thomas wrote:
> > Migrating from:
> >
> > /opt/qemu-1.4.1/bin/qemu-system-x86_64 -M pc -watchdog i6300esb
> > -watchdog-action reset [...]
> >
> >
Hi all,
Migrating from:
/opt/qemu-1.4.1/bin/qemu-system-x86_64 -M pc -watchdog i6300esb
-watchdog-action reset [...]
to:
/opt/qemu-1.5.0/bin/qemu-system-x86_64 -M pc-i440fx-1.4 -watchdog
i6300esb -watchdog-action reset [...]
I get:
qemu: warning: error while loading state for instance 0x0
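For reference, that warning is what the destination prints when it cannot load an incoming device section, typically because the section's version_id or field layout does not match the destination's VMStateDescription for that device. A generic, illustrative sketch of such a description (QEMU-tree code; the device and fields here are hypothetical, not the real i6300esb state):

/* Illustrative only: how a device declares its migration format inside the
 * QEMU tree.  "error while loading state for instance 0x0" is reported when
 * the incoming section's version_id is newer than .version_id below, or when
 * the transmitted fields do not match this list. */
#include "hw/hw.h"              /* VMStateDescription, VMSTATE_* macros */

typedef struct ExampleWatchdogState {
    uint32_t reload;            /* hypothetical fields */
    uint32_t stage;
} ExampleWatchdogState;

static const VMStateDescription vmstate_example_wdt = {
    .name = "example_wdt",
    .version_id = 1,            /* bumped when the layout changes...   */
    .minimum_version_id = 1,    /* ...kept low to accept older streams */
    .fields = (VMStateField[]) {
        VMSTATE_UINT32(reload, ExampleWatchdogState),
        VMSTATE_UINT32(stage, ExampleWatchdogState),
        VMSTATE_END_OF_LIST()
    }
};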
On Thu, 2013-05-16 at 11:40 +0300, Michael S. Tsirkin wrote:
> On Thu, May 16, 2013 at 09:20:55AM +0100, Nicholas Thomas wrote:
> > Hi,
> >
> > On Thu, 2013-05-16 at 09:27 +0300, Michael S. Tsirkin wrote:
> > > On Thu, May 16, 2013 at 09:24:05AM +0300, Michael S.
Hi,
On Thu, 2013-05-16 at 09:27 +0300, Michael S. Tsirkin wrote:
> On Thu, May 16, 2013 at 09:24:05AM +0300, Michael S. Tsirkin wrote:
> > Is this with or without vhost-net in host?
>
> never mind, I see it's without.
> Try to enable vhost-net (you'll have to switch to -netdev syntax
> for that t
Hi again,
On Tue, 2013-05-14 at 15:49 +0100, Nicholas Thomas wrote:
> /sys/devices/virtual/net/t100/tun_flags is 0x5002 - so it looks like
> IFF_ONE_QUEUE was indeed unset by qemu (which is lacking the patch). It
> surprises me, but that's probably my fault, rather than qemu
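Decoding that 0x5002 against the linux/if_tun.h flag values gives IFF_TAP (0x0002) | IFF_NO_PI (0x1000) | IFF_VNET_HDR (0x4000), so IFF_ONE_QUEUE (0x2000) is indeed absent. A minimal sketch of opening a tap fd with the flag requested; the helper name and the "t100" device are illustrative, and whether the flag sticks also depends on the kernel and on how qemu reconfigures the fd it is handed:

/* Sketch: open a tap device and request IFF_ONE_QUEUE alongside the flags
 * corresponding to the 0x5002 seen in tun_flags
 * (IFF_TAP 0x0002 | IFF_NO_PI 0x1000 | IFF_VNET_HDR 0x4000). */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/if.h>
#include <linux/if_tun.h>

int open_tap_one_queue(const char *name)   /* e.g. "t100" */
{
    struct ifreq ifr;
    int fd = open("/dev/net/tun", O_RDWR);
    if (fd < 0) {
        return -1;
    }
    memset(&ifr, 0, sizeof(ifr));
    snprintf(ifr.ifr_name, IFNAMSIZ, "%s", name);
    ifr.ifr_flags = IFF_TAP | IFF_NO_PI | IFF_VNET_HDR | IFF_ONE_QUEUE;
    if (ioctl(fd, TUNSETIFF, &ifr) < 0) {  /* attach to (or create) the if */
        close(fd);
        return -1;
    }
    return fd;   /* /sys/devices/virtual/net/<name>/tun_flags should now
                    include 0x2000 if the kernel honours IFF_ONE_QUEUE */
}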
Hi,
On Tue, 2013-05-14 at 16:28 +0200, Peter Lieven wrote:
> Please check the tunnel mode in sysfs after your VM is started. It is likely
> that qemu overwrites the settings you made in the ruby script.
>
> Please check if the patch
>
> tap: set IFF_ONE_QUEUE per default
>
> is in your qemu 1.
Hi all,
On Tue, 2013-02-12 at 08:06 +0100, Peter Lieven wrote:
> On 23.01.2013 11:03, Michael S. Tsirkin wrote:
> > For future, we can try to set TUN_ONE_QUEUE flag on the interface,
> > or try applying this patch
> > 5d097109257c03a71845729f8db6b5770c4bbedc
> > in kernel see if this helps.
> >
>
On Mon, 2013-04-15 at 16:14 +0200, Stefan Hajnoczi wrote:
> The nbd block driver should use TCP_NODELAY. Nick Thomas measured a
> 40 millisecond latency added by the Nagle algorithm.
>
> This series turns on TCP_NODELAY. This requires that we use TCP_CORK to
> efficiently send NBD requests
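A minimal sketch of the socket-option pattern the series describes (Linux-specific, and not the actual block/nbd.c code): TCP_NODELAY is set once to disable Nagle, and TCP_CORK brackets a header-plus-payload send so it still leaves as a single segment:

/* Sketch of the TCP_NODELAY + TCP_CORK pattern described above. */
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>
#include <unistd.h>

static int set_tcp_opt(int fd, int opt, int val)
{
    return setsockopt(fd, IPPROTO_TCP, opt, &val, sizeof(val));
}

/* Call once after connect(): no more ~40 ms Nagle/delayed-ACK stalls. */
int nbd_socket_init(int fd)
{
    return set_tcp_opt(fd, TCP_NODELAY, 1);
}

/* Send a request header followed by its payload as one wire segment. */
int send_request(int fd, const void *hdr, size_t hlen,
                 const void *payload, size_t plen)
{
    set_tcp_opt(fd, TCP_CORK, 1);           /* hold small writes back */
    if (write(fd, hdr, hlen) != (ssize_t)hlen) {
        return -1;
    }
    if (plen && write(fd, payload, plen) != (ssize_t)plen) {
        return -1;
    }
    return set_tcp_opt(fd, TCP_CORK, 0);    /* uncork: flush immediately */
}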
On Fri, 2012-11-02 at 13:41 +0100, Paolo Bonzini wrote:
> On 02/11/2012 11:28, n...@bytemark.co.uk wrote:
> > @@ -197,12 +198,14 @@ testlist options
> > IMGPROTO=rbd
> > xpand=false
> > ;;
> > -
> > -sheepdog)
> > IMGPROTO=sheepdog
> > xpand=false
>
From: n...@bytemark.co.uk
To: qemu-devel@nongnu.org
Cc: pbonz...@redhat.com, kw...@redhat.com, Nick Thomas
Subject: [PATCH v3] tests: allow qemu-iotests to be run against nbd
backend
Date: Fri, 2 Nov 2012 10:28:06 +
From: Nick Thomas
To do this, we start a qemu-nbd process at _make_test_im
On Wed, 2012-10-31 at 17:44 +0100, Kevin Wolf wrote:
> On 31.10.2012 15:33, Paolo Bonzini wrote:
> > On 31/10/2012 15:01, n...@bytemark.co.uk wrote:
> >> From: Nick Thomas
> >>
> >> To do this, we start a qemu-nbd process at _make_test_img and kill
> >> it in _cleanup_test_img. $TEST_IMG is
On Wed, 2012-10-24 at 16:10 +0200, Paolo Bonzini wrote:
> On 24/10/2012 16:03, Paolo Bonzini wrote:
> > On 24/10/2012 14:16, Nicholas Thomas wrote:
> >>
> >> I've also just noticed that flush & discard don't take the send_mutex
> >> befor
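The concern here is that every submission path, including flush and discard, must hold the same send_mutex while its header goes out, or concurrent coroutines can interleave bytes on the socket. A rough sketch of that shape, loosely following the names in block/nbd.c of this era (not the actual patch; the function name is made up):

/* Sketch of the invariant under discussion: flush and discard must take the
 * same send_mutex that the read/write path takes, so request headers from
 * different coroutines never interleave on the NBD socket. */
static int nbd_co_flush_sketch(BDRVNBDState *s)
{
    struct nbd_request request = { .type = NBD_CMD_FLUSH };
    int rc;

    qemu_co_mutex_lock(&s->send_mutex);       /* same lock as reads/writes */
    rc = nbd_send_request(s->sock, &request); /* header leaves atomically  */
    qemu_co_mutex_unlock(&s->send_mutex);

    /* The reply is then awaited outside the lock, as for reads and writes. */
    return rc;
}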
On Tue, 2012-10-23 at 16:02 +0100, Jamie Lokier wrote:
> Nicholas Thomas wrote:
> > On Tue, 2012-10-23 at 12:33 +0200, Kevin Wolf wrote:
> > > On 22.10.2012 13:09, n...@bytemark.co.uk wrote:
> > > >
> > > > This is unlikely to come u
On Tue, 2012-10-23 at 12:33 +0200, Kevin Wolf wrote:
> On 22.10.2012 13:09, n...@bytemark.co.uk wrote:
> >
> > This is unlikely to come up now, but is a necessary prerequisite for
> > reconnection behaviour.
> >
> > Signed-off-by: Nick Thomas
> > ---
> > block/nbd.c | 13 +++--
On 08/09/11 16:25, Paolo Bonzini wrote:
> qemu-nbd has a limit of slightly less than 1M per request. Work
> around this in the nbd block driver.
>
> Signed-off-by: Paolo Bonzini
> ---
> block/nbd.c | 52 ++--
> 1 files changed, 46 insertions(+),
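A self-contained sketch of the splitting idea described in that commit message (not Paolo's actual patch): cap each NBD request at a bit under 1 MiB and loop until the buffer is transferred. MAX_NBD_REQUEST and the do_request() callback are hypothetical names:

/* Sketch of the workaround: issue several bounded requests per I/O. */
#include <stddef.h>
#include <stdint.h>

#define MAX_NBD_REQUEST   (1024 * 1024 - 512)   /* "slightly less than 1M" */

typedef int (*nbd_io_fn)(uint64_t offset, void *buf, size_t len);

int nbd_rw_split(nbd_io_fn do_request, uint64_t offset, void *buf, size_t len)
{
    uint8_t *p = buf;

    while (len > 0) {
        size_t chunk = len > MAX_NBD_REQUEST ? MAX_NBD_REQUEST : len;
        int ret = do_request(offset, p, chunk);  /* one on-the-wire request */
        if (ret < 0) {
            return ret;                          /* propagate first error */
        }
        offset += chunk;
        p += chunk;
        len -= chunk;
    }
    return 0;
}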
On 08/09/11 16:25, Paolo Bonzini wrote:
> Signed-off-by: Paolo Bonzini
> ---
> block/nbd.c | 167 ++
> nbd.c       |    8 +++
> 2 files changed, 117 insertions(+), 58 deletions(-)
>
> diff --git a/block/nbd.c b/block/nbd.c
> index 964caa8
On 09/09/11 12:04, Kevin Wolf wrote:
> Good to see agreement here. Do you think that Paolo's patches need to be
> changed or can we do everything else on top?
A few things have come up on a third read, actually. I'll respond in due
course to the appropriate patch.
> We do have some timer stubs i
On Fri, 2011-09-09 at 12:29 +0200, Paolo Bonzini wrote:
> On 09/09/2011 11:00 AM, Kevin Wolf wrote:
> > There is another patch enabling AIO for NBD on the list [1], by
> > Nicholas Thomas (CCed), that lacked review so far. Can you guys please
> > review each other's approach a
diff --git a/block/nbd.c b/block/nbd.c
index c8dc763..7ec57d9 100644
--- a/block/nbd.c
+++ b/block/nbd.c
@@ -6,6 +6,7 @@
 *
 * Some parts:
 *    Copyright (C) 2007 Anthony Liguori
+ *    Copyright (C) 2011 Nicholas Thomas
 *
 * Permission is hereby granted, free of charge, to any person obtaining a copy
Current behaviour if a read fails is that the acb never gets finished.
This causes an infinite loop in bdrv_read_em (block.c): the read failure
never gets reported to the guest, and even if the error condition clears,
the process never recovers.
With this patch, when curl reports a failure we finish the
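The general shape of that fix, with a hypothetical helper name (not the actual block/curl.c patch): complete the acb with -EIO when curl reports a failure, so bdrv_read_em() sees the request finish instead of polling forever:

/* Sketch: whatever the curl transfer outcome, finish the acb so the block
 * layer's emulation loop can make progress. */
static void curl_fail_acb(CURLAIOCB *acb, int curl_result)
{
    int ret = (curl_result == CURLE_OK) ? 0 : -EIO;

    acb->common.cb(acb->common.opaque, ret);  /* report success or -EIO */
    qemu_aio_release(acb);                    /* the acb is finished either way */
}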
On Wed, 2011-05-04 at 11:13 +0100, Daniel P. Berrange wrote:
> On Wed, May 04, 2011 at 09:39:02AM +0100, n...@bytemark.co.uk wrote:
> > Hi,
> >
> > Currently migration-tcp.c uses the IPv4-only socket functions, making
> > migrations over IPv6 impossible. Following is a tentative patch that
> > sw
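A minimal sketch of the protocol-agnostic connect path such a change is after: getaddrinfo() with AF_UNSPEC handles IPv4 and IPv6 through one code path (illustrative; not the actual migration-tcp.c patch):

/* Sketch of a protocol-agnostic connect, replacing IPv4-only socket setup. */
#include <netdb.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int migrate_connect(const char *host, const char *port)
{
    struct addrinfo hints, *res, *ai;
    int fd = -1;

    memset(&hints, 0, sizeof(hints));
    hints.ai_family = AF_UNSPEC;        /* v4 or v6, whatever resolves */
    hints.ai_socktype = SOCK_STREAM;

    if (getaddrinfo(host, port, &hints, &res) != 0) {
        return -1;
    }
    for (ai = res; ai; ai = ai->ai_next) {
        fd = socket(ai->ai_family, ai->ai_socktype, ai->ai_protocol);
        if (fd < 0) {
            continue;
        }
        if (connect(fd, ai->ai_addr, ai->ai_addrlen) == 0) {
            break;                      /* connected */
        }
        close(fd);
        fd = -1;
    }
    freeaddrinfo(res);
    return fd;
}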
On Thu, 2011-04-28 at 16:20 +0100, n...@bytemark.co.uk wrote:
[...]
> +static void nbd_unregister_write_request_handler(BDRVNBDState *s)
> +{
> +    int sock = s->sock;
> +    if (s->sock == -1) {
> +        logout("Unregister write request handler tried when socket closed\n");
> +        return
Hi again Kevin, all,
Thanks for applying the first four patches, and apologies for taking so
long to get back to you. I've found the time to take your comments
on board and re-do the last patch, plus the string-leak patch; I'll send
them on shortly. I just wanted to make a few notes on yours first.
On Mon, 2011-02-21 at 20:10 +, Stefan Hajnoczi wrote:
> On Mon, Feb 21, 2011 at 12:37 PM, Kevin Wolf wrote:
> > On 18.02.2011 13:55, Nick Thomas wrote:
> >> +retry:
> >> +    if (do_read) {
> >> +        ret = recvmsg(sockfd, &msg, 0);
> >> +    } else {
> >> +        ret = sendmsg(sockfd, &
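For context, the pattern in that hunk written out as a self-contained sketch (simplified; not the quoted patch): one code path for both directions, retrying on EINTR:

/* Sketch of the retry pattern: a single path over recvmsg()/sendmsg(). */
#include <errno.h>
#include <sys/types.h>
#include <sys/socket.h>

ssize_t wr_sync(int sockfd, struct msghdr *msg, int do_read)
{
    ssize_t ret;

retry:
    if (do_read) {
        ret = recvmsg(sockfd, msg, 0);
    } else {
        ret = sendmsg(sockfd, msg, 0);
    }
    if (ret < 0 && errno == EINTR) {
        goto retry;                 /* interrupted by a signal: try again */
    }
    return ret;                     /* 0 = EOF on read, <0 = real error */
}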
> > + * Send I/O requests to the server.
> > + *
> > + * This function sends requests to the server, links the requests to
> > + * the outstanding_list in BDRVNBDState, and exits without waiting for
> > + * the response. The responses are received in the `aio_read_response'
> > + * function which
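A self-contained sketch of the bookkeeping that comment describes, with hypothetical types and names: each sent request is linked into an outstanding list keyed by its handle, and the reply reader detaches and completes the matching entry:

/* Sketch: requests are queued when sent; replies look them up by handle. */
#include <stdint.h>

struct pending_req {
    uint64_t handle;              /* echoed back by the server in the reply */
    void (*complete)(struct pending_req *req, int ret);
    struct pending_req *next;
};

static struct pending_req *outstanding;   /* head of the outstanding list */

void track_request(struct pending_req *req)
{
    req->next = outstanding;              /* link, then return immediately; */
    outstanding = req;                    /* the reply arrives asynchronously */
}

void handle_reply(uint64_t handle, int ret)
{
    struct pending_req **p = &outstanding;

    while (*p && (*p)->handle != handle) {
        p = &(*p)->next;
    }
    if (*p) {
        struct pending_req *req = *p;
        *p = req->next;                   /* unlink ... */
        req->complete(req, ret);          /* ... and complete the waiter */
    }
}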
Hi again,
Thanks for looking through the patches. I'm just going through and
making the suggested changes now. I've also got qemu-nbd and block/nbd.c
working over IPv6 :) - hopefully I'll be able to provide patches in a
couple of days. Just a few questions about some of the changes...
Canceled re
On Fri, 2011-02-18 at 13:23 +0100, Kevin Wolf wrote:
> I haven't had a close look at your patches yet, but one thing that I
> noticed is that your patches are corrupted by line wraps. Please
> consider using git-send-email to avoid this kind of trouble or configure
> your mailer so that it stops do
On Thu, 2011-02-17 at 19:28 +, Nicholas Thomas wrote:
> Additional testing has revealed that this code breaks the stock
> nbd-server (the one on sourceforge) when large (well, 1.3MiB) write
> requests are sent to it.
NBD has a limit of 1MB on the size of write requests.
NBD_BU
Ho hum.
On Thu, 2011-02-17 at 16:34 +, Nicholas Thomas wrote:
> Signed-off-by: Nick Thomas
> ---
> block/nbd.c | 549 ++-
> 1 files changed, 464 insertions(+), 85 deletions(-)
Additional testing has revealed that t
Replace an entertaining mixture of tabs and spaces with four-space
indents.
Signed-off-by: Nick Thomas
---
nbd.c | 835 +
1 files changed, 418 insertions(+), 417 deletions(-)
diff --git a/nbd.c b/nbd.c
index d8ebc42..abe0ecb 10064
Signed-off-by: Nick Thomas
---
block/nbd.c | 549 ++-
1 files changed, 464 insertions(+), 85 deletions(-)
diff --git a/block/nbd.c b/block/nbd.c
index c8dc763..1387227 100644
--- a/block/nbd.c
+++ b/block/nbd.c
@@ -1,11 +1,12 @@
/*
- * QE
Signed-off-by: Nick Thomas
---
nbd.c | 51 +++
nbd.h |2 ++
2 files changed, 53 insertions(+), 0 deletions(-)
diff --git a/nbd.c b/nbd.c
index abe0ecb..83d3342 100644
--- a/nbd.c
+++ b/nbd.c
@@ -107,6 +107,57 @@ size_t nbd_wr_sync(int fd, voi
Hi again,
On Wed, 2011-02-16 at 13:00 +0100, Kevin Wolf wrote:
> On 15.02.2011 22:26, Nicholas Thomas wrote:
> > On Tue, 2011-02-15 at 12:09 +0100, Kevin Wolf wrote:
> >> On 14.02.2011 21:32, Stefan Hajnoczi wrote:
> I'm not sure about how much duplication there act
Hi Kevin, Stefan.
On Tue, 2011-02-15 at 12:09 +0100, Kevin Wolf wrote:
> On 14.02.2011 21:32, Stefan Hajnoczi wrote:
[...]
> > block/nbd.c needs to be made asynchronous in order for this change to
> > work.
>
> And even then it's not free of problem: For example qemu_aio_flush()
> will hang.
[Apologies for the cross-post - I originally sent this to the KVM ML -
obviously, it's far more appropriate here]
Hi,
I've been doing some work with /block/nbd.c with the aim of improving
its behaviour when the NBD server is inaccessible or goes away.
Current behaviour is to exit on startup if c
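A purely illustrative sketch of the startup behaviour being aimed for, i.e. retrying the initial connection with a backoff instead of exiting; connect_once() stands in for whatever actually dials the server (not the block/nbd.c code under discussion):

/* Sketch: retry the initial connection with a capped exponential backoff. */
#include <unistd.h>

int connect_with_retry(int (*connect_once)(void), int max_attempts)
{
    int delay = 1;

    for (int i = 0; i < max_attempts; i++) {
        int fd = connect_once();
        if (fd >= 0) {
            return fd;                /* connected */
        }
        sleep(delay);                 /* back off before the next attempt */
        if (delay < 30) {
            delay *= 2;
        }
    }
    return -1;                        /* caller decides whether to fail hard */
}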