> The behavior you both are seeing is fixed by making flush requests
> asynchronous in the qemu driver. This was fixed upstream in qemu 1.4.2
> and 1.5.0. If you've installed from ceph-extras, make sure you're using
> the .async rpms [1] (we should probably remove the non-async ones at
> this point).
Thanks guys,
Useful info - we'll see how we go. I expect the main issue blocking a
cloud-wide upgrade will be forwards live-migration of existing instances.
Cheers, ~B
On 3 October 2013 13:04, Michael Lowe wrote:
FWIW: I use a qemu 1.4.2 that I built with a debian package upgrade script and
the stock libvirt from raring.
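[Editor's sketch: a minimal way to compare a locally built qemu, like Michael's 1.4.2, against the first releases carrying the async-flush fix (1.4.2 / 1.5.0). The version string is hard-coded for illustration; in practice it would come from something like `qemu-img --version` or `rpm -q qemu-kvm`.]

```shell
# Compare an installed qemu version against the first release with the
# async flush fix. "1.4.2" below is a stand-in for the detected version.
installed="1.4.2"
fixed="1.4.2"
if [ "$(printf '%s\n%s\n' "$fixed" "$installed" | sort -V | head -n1)" = "$fixed" ]; then
    echo "async flush fix present"
else
    echo "upgrade needed"
fi
```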
> On Oct 2, 2013, at 10:59 PM, Josh Durgin wrote:
On 10/02/2013 06:26 PM, Blair Bethwaite wrote:
Josh,
On 3 October 2013 10:36, Josh Durgin wrote:
> The version base of qemu in precise has the same problem. It only
> affects writeback caching.
>
> You can get qemu 1.5 (which fixes the issue) for precise from ubuntu's
> cloud archive.
Thanks for the pointer! I had not realised there were new
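[Editor's sketch of enabling the Ubuntu Cloud Archive pocket Josh points to. The pocket name is an assumption: Havana was the current OpenStack release at the time, and its precise pocket carried the newer qemu. The entry is written to a local file here; it would normally go in /etc/apt/sources.list.d/.]

```shell
# Enable an Ubuntu Cloud Archive pocket on precise (12.04). The pocket
# name "precise-updates/havana" is an assumption, not from this thread.
pocket="precise-updates/havana"
echo "deb http://ubuntu-cloud.archive.canonical.com/ubuntu ${pocket} main" \
    > cloud-archive.list
cat cloud-archive.list
# followed by: apt-get update && apt-get install qemu-kvm
```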
On 10/02/2013 03:16 PM, Blair Bethwaite wrote:
Hi Josh,
> Message: 3
> Date: Wed, 02 Oct 2013 10:55:04 -0700
> From: Josh Durgin
> To: Oliver Daudey , ceph-users@lists.ceph.com,
> robert.vanleeu...@spilgames.com
> Subject: Re: [ceph-users] Loss of connectivity when using client
> caching
On 10/02/2013 10:45 AM, Oliver Daudey wrote:
Hey Robert,
On 02-10-13 14:44, Robert van Leeuwen wrote:
Hi,
I'm running a test setup with Ceph (dumpling) and Openstack (Grizzly) using
libvirt to "patch" the ceph disk directly to the qemu instance.
I'm using SL6 with the patched qemu packages from the Ceph site (whose
latest version is still cuttlefish):
http://www.ceph.com/packages/ceph-extras
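[Editor's sketch: the libvirt wiring Robert describes is typically a network-type disk element pointing at an rbd image. Pool, image name, and monitor address below are made-up placeholders; `cache='writeback'` is the mode the flush issue in this thread concerns.]

```shell
# Write a sample libvirt disk definition for an rbd-backed volume.
# All names and addresses are illustrative, not values from this thread.
cat > rbd-disk.xml <<'EOF'
<disk type='network' device='disk'>
  <driver name='qemu' type='raw' cache='writeback'/>
  <source protocol='rbd' name='rbd/vm-disk-1'>
    <host name='192.0.2.10' port='6789'/>
  </source>
  <target dev='vdb' bus='virtio'/>
</disk>
EOF
# It would be attached to a running guest with:
#   virsh attach-device <domain> rbd-disk.xml
```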