Hello Robert,
On Tue, 15 Mar 2016 10:54:20 -0600 Robert LeBlanc wrote:
>
> There are no monitors on the new node.
>
So one less possible source of confusion.
> It doesn't look like there has been any new corruption since we
> stopped changin
Hello,
On Tue, 15 Mar 2016 12:00:24 +0200 Yair Magnezi wrote:
> Thanks Christian .
>
> Still
>
> "So yes, your numbers are normal for single client, low depth reads, as
> many threads in this ML confirm."
>
> we're facing very high latency (I expect much less latency from an SSD
> cluster):
>Indeed, well understood.
>
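To put a rough number on what a single client at queue depth 1 actually sees, a small librados loop is usually enough for a sanity check. The sketch below uses the python-rados bindings; the conffile path, pool name, object name and 4 KB size are only examples, so adjust them to your setup, and run it from the same host where the latency was observed so the network path matches.

#!/usr/bin/env python3
"""Rough single-client, queue-depth-1 read latency check via python-rados.
Conffile, pool name, object name and sizes are examples only."""
import time
import rados

OBJ = "latency-test-obj"

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
ioctx = cluster.open_ioctx("rbd")          # any pool you can write to
try:
    ioctx.write_full(OBJ, b"\0" * 4096)    # one 4 KB object to read back
    samples = []
    for _ in range(1000):
        t0 = time.perf_counter()
        ioctx.read(OBJ, 4096, 0)           # synchronous read, depth 1
        samples.append((time.perf_counter() - t0) * 1000.0)
    samples.sort()
    print("avg %.2f ms  median %.2f ms  99%% %.2f ms" % (
        sum(samples) / len(samples),
        samples[len(samples) // 2],
        samples[int(len(samples) * 0.99)]))
finally:
    ioctx.remove_object(OBJ)
    ioctx.close()
    cluster.shutdown()

Note this measures raw librados object reads, not RBD through a guest's block layer, so treat it as a lower bound for what a VM will see.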
>As a shorter term workaround, if you have control over the VMs, you could
>always just slice out an LVM volume from local SSD/NVMe and pass it through to
>the guest. Within the guest, use dm-cache (or similar) to add a cache
>front-end to your RBD volume.
If you
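For anyone wanting to try that workaround, here is a rough sketch of the guest-side setup, wrapped in a small Python script only for readability. It assumes the RBD volume shows up in the guest as /dev/vdb and the passed-through SSD slice as /dev/vdc; device names, VG/LV names and sizes are examples, and writethrough mode is used to sidestep the consistency concerns raised elsewhere in this thread.

#!/usr/bin/env python3
"""Illustrative guest-side LVM/dm-cache setup; device names, VG/LV names
and sizes are examples only."""
import subprocess

def run(*cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# One volume group spanning the RBD-backed disk and the local SSD slice.
run("pvcreate", "/dev/vdb", "/dev/vdc")
run("vgcreate", "vg0", "/dev/vdb", "/dev/vdc")

# Origin LV on the RBD disk, cache data + metadata LVs on the SSD slice.
run("lvcreate", "-n", "data", "-l", "100%PVS", "vg0", "/dev/vdb")
run("lvcreate", "-n", "cache", "-L", "18G", "vg0", "/dev/vdc")
run("lvcreate", "-n", "cache_meta", "-L", "1G", "vg0", "/dev/vdc")

# Tie them together as a dm-cache front-end; writethrough avoids leaving
# dirty data stranded on the hypervisor-local SSD.
run("lvconvert", "-y", "--type", "cache-pool",
    "--poolmetadata", "vg0/cache_meta", "vg0/cache")
run("lvconvert", "-y", "--type", "cache",
    "--cachepool", "vg0/cache", "--cachemode", "writethrough", "vg0/data")

The live-migration and hypervisor-failure caveats discussed later in the thread still apply if you switch the cache mode to writeback.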
Hi Robert,
>Caching writes would be bad because a hypervisor failure would result in loss
>of the cache which pretty much guarantees inconsistent data on the ceph volume.
>Also live-migration will become problematic compared to running everything
>from ceph since you will also need to migrate th
> -----Original Message-----
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Daniel Niasoff
> Sent: 16 March 2016 08:26
> To: Van Leeuwen, Robert ; Jason Dillaman
>
> Cc: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] Local SSD cache for ceph on each compute n
> -----Original Message-----
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Christian Balzer
> Sent: 16 March 2016 07:08
> To: Robert LeBlanc
> Cc: Robert LeBlanc ; ceph-users <us...@lists.ceph.com>; William Perkins
> Subject: Re: [ceph-users] data corruption with
>
>My understanding of how a writeback cache should work is that it should only
>take a few seconds for writes to be streamed onto the network; it is focused
>on resolving the speed issue of small sync writes, which would be bundled
>into larger writes that are not time sensitive.
>
>So th
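To make that intent concrete, here is a toy sketch (plain Python, nothing Ceph- or RBD-specific) of what such a writeback layer is expected to do: acknowledge small writes immediately, then coalesce them and push them to the backend within a bounded delay. The class name, thresholds and backend callback are made up for illustration, and overlapping writes are ignored.

import time

class ToyWritebackCache:
    """Toy illustration only: absorb small sync writes and hand them to the
    backend as fewer, larger writes within a bounded delay."""

    def __init__(self, backend_write, max_delay=2.0, max_bytes=4 * 1024 * 1024):
        self.backend_write = backend_write   # callable(offset, data) doing the slow network write
        self.max_delay = max_delay           # seconds a dirty byte may sit in the cache
        self.max_bytes = max_bytes           # flush once this much is buffered
        self.pending = []                    # [(offset, bytes), ...]
        self.dirty_since = None

    def write(self, offset, data):
        # The caller's "sync" write returns as soon as the data is buffered.
        self.pending.append((offset, data))
        self.dirty_since = self.dirty_since or time.monotonic()
        if sum(len(d) for _, d in self.pending) >= self.max_bytes:
            self.flush()

    def tick(self):
        # Call periodically; enforces the "only a few seconds" bound.
        if self.dirty_since and time.monotonic() - self.dirty_since >= self.max_delay:
            self.flush()

    def flush(self):
        # Coalesce contiguous extents so the backend sees larger writes
        # that are no longer latency sensitive.
        merged = []
        for offset, data in sorted(self.pending):
            if merged and offset == merged[-1][0] + len(merged[-1][1]):
                merged[-1] = (merged[-1][0], merged[-1][1] + data)
            else:
                merged.append((offset, data))
        for offset, data in merged:
            self.backend_write(offset, data)
        self.pending.clear()
        self.dirty_since = None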
Hi,
One of our customer's VMs is running against our Ceph cluster. We dumped the
OSD perf counters and found that the I/O generated by this VM consists almost
entirely of op_rw operations.
Can anybody explain what the I/O pattern of op_rw is and how to
simulate/generate it?
zhongyan
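As far as I understand it, op_rw counts client operations that both read and write the object in a single request (read-modify-write), as opposed to the plain op_r and op_w counters. A quick way to see how a workload splits across those counters is to diff the OSD's perf dump before and after running it; the snippet below assumes it runs on the OSD host with access to the admin socket, and osd.0 is only an example id.

#!/usr/bin/env python3
"""Diff an OSD's op_r / op_w / op_rw counters around a workload."""
import json
import subprocess

def op_counters(osd_id):
    out = subprocess.check_output(
        ["ceph", "daemon", "osd.{}".format(osd_id), "perf", "dump"])
    osd = json.loads(out.decode())["osd"]
    # op = all client ops, op_r = reads, op_w = writes,
    # op_rw = ops that both read and write (read-modify-write)
    return {key: osd[key] for key in ("op", "op_r", "op_w", "op_rw")}

if __name__ == "__main__":
    before = op_counters(0)
    input("run the VM workload, press Enter when done... ")
    after = op_counters(0)
    for key, start in before.items():
        print("{:>6}: {}".format(key, after[key] - start))

If you need to generate such ops synthetically, anything that bundles a read and a write into one librados operation (a compare-and-write style request, for example) should land in op_rw rather than op_r/op_w, but I have not verified that against every client path.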
Hi Robert,
It seems I have to give up on this goal for now, but I wanted to be sure I
wasn't missing something obvious.
>If you can survive missing that data you are probably better of running fully
>from ephemeral storage in the first place.
What, and lose the entire ephemeral disk since the VM w