Hello,
On Sun, 27 Mar 2016 13:41:57 +0800 lin zhou wrote:
> Hi, guys.
> Some days ago one OSD started showing a large latency in ceph osd perf, and
> this device caused high iowait on this node.
The thing to do at that point would have been to look at things with atop or
iostat to verify that it was the
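A minimal sketch of that kind of check, in case it helps -- it shells out to
ceph osd perf and flags slow OSDs; the JSON field names and the 50 ms cutoff
are assumptions on my part, not anything from the original thread:

#!/usr/bin/env python
# Hypothetical helper: flag OSDs whose commit/apply latency looks high.
# Assumes "ceph osd perf -f json" reports osd_perf_infos/perf_stats as in
# recent releases, and that 50 ms is a sane cutoff for this hardware.
import json
import subprocess

THRESHOLD_MS = 50  # assumed cutoff, tune for your disks

out = subprocess.check_output(["ceph", "osd", "perf", "-f", "json"])
for osd in json.loads(out.decode("utf-8")).get("osd_perf_infos", []):
    stats = osd["perf_stats"]
    if stats["commit_latency_ms"] > THRESHOLD_MS or \
       stats["apply_latency_ms"] > THRESHOLD_MS:
        print("osd.%d commit=%sms apply=%sms" % (
            osd["id"], stats["commit_latency_ms"], stats["apply_latency_ms"]))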
Hello,
On Sun, 27 Mar 2016 02:59:30 +0300 Dzianis Kahanovich wrote:
> New problem (unsure, but probably not observed in Hammer, definitely seen in
> Infernalis): copying large (tens of GB) files into kernel cephfs (from
> outside the cluster, bare metal - non-VM, preempt kernel) causes slow requests
> on some of
Hello,
On Sun, 27 Mar 2016 18:44:41 +0200 (CEST) Daniel Delin wrote:
> Hi,
>
> I have ordered three 240GB Samsung SM863 SSDs for my 3 OSD hosts, each
> with 4 OSDs, to improve write performance.
Did you test these SSDs in advance?
While I'm pretty sure they are suitable for Ceph journals, I have
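For what it's worth, the usual advice before trusting an SSD with journals is
to measure its sync 4K write rate first (fio or dd with O_DSYNC work fine).
Below is a rough stand-alone sketch of that test; the file path and sample
count are placeholders, and it writes to a scratch file, not a raw device:

#!/usr/bin/env python
# Toy sync-write test: 4 KiB writes with an fsync after each one, roughly
# the access pattern a filestore journal generates. Path/count are made up.
import os
import sys
import time

PATH = sys.argv[1] if len(sys.argv) > 1 else "/tmp/journal-test.bin"
BLOCK = b"\0" * 4096
COUNT = 10000  # assumed sample size

fd = os.open(PATH, os.O_WRONLY | os.O_CREAT, 0o600)
start = time.time()
for _ in range(COUNT):
    os.write(fd, BLOCK)
    os.fsync(fd)
elapsed = time.time() - start
os.close(fd)
print("%.0f sync 4k writes/s (%.2f MB/s)" % (COUNT / elapsed,
                                              COUNT * 4096 / elapsed / 1e6))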
Hi,
I have ordered three 240GB Samsung SM863 SSDs for my 3 OSD hosts, each with 4
OSDs, to improve write performance.
Looking at the docs, I see there is a formula for journal size (osd journal size
= {2 * (expected throughput * filestore max sync interval)})
that I intend to use. If I understand t
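Plugging illustrative numbers into that formula (the 100 MB/s expected
throughput and the default 5 s filestore max sync interval are assumptions,
not values from this thread) gives roughly:

#!/usr/bin/env python
# Worked example of the docs formula:
#   osd journal size = 2 * (expected throughput * filestore max sync interval)
# Inputs are assumed for illustration only.
expected_throughput_mb_s = 100      # min(disk, network) throughput, assumed
filestore_max_sync_interval_s = 5   # Ceph default

journal_size_mb = 2 * expected_throughput_mb_s * filestore_max_sync_interval_s
print("journal size per OSD: %d MB" % journal_size_mb)        # 1000 MB
print("for 4 OSDs per host:  %d MB" % (4 * journal_size_mb))  # 4000 MB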
Hi all,
On 16/03/16 18:11, Van Leeuwen, Robert wrote:
>> Indeed, well understood.
>>
>> As a shorter term workaround, if you have control over the VMs, you could
>> always just slice out an LVM volume from local SSD/NVMe and pass it through
>> to the guest. Within the guest, use dm-cache (or sim
Thanks
Wouldn't it be amazing to put a 2TB NVMe card in each compute node, make one
config change and presto! Users see a 10-fold increase in performance :) with
95% of reads going to cache and all writes being acknowledged after being written
to cache. For writes you might want dual NVMe in a RAID 1
On 03/27/2016 11:13 AM, Daniel Niasoff wrote:
Hi Ric,
But you would still have to set up a dm-cache per rbd volume, which makes it
difficult to manage.
There needs to be a global setting, either within kvm or ceph, that caches
reads/writes before they hit the rbd device.
Thanks
Daniel
Corre
On 03/25/2016 02:00 PM, Jan Schermer wrote:
V5 is supposedly stable, but that only means it will be just as bad as any
other XFS.
I recommend avoiding XFS whenever possible. Ext4 works perfectly and I never
lost any data with it, even when it got corrupted, while XFS still likes to eat
the da
Hi Ric,
But you would still have to set up a dm-cache per rbd volume, which makes it
difficult to manage.
There needs to be a global setting, either within kvm or ceph, that caches
reads/writes before they hit the rbd device.
Thanks
Daniel
-Original Message-
From: Ric Wheeler [mailto:r
On 03/16/2016 12:15 PM, Van Leeuwen, Robert wrote:
My understanding of how a writeback cache should work is that it should only
take a few seconds for writes to be streamed onto the network, and that it is
focused on resolving the speed issue of small sync writes. The writes would be
bundled into larg
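To make that behaviour concrete, here is a toy sketch of a writeback buffer
that acknowledges small writes immediately and bundles them into larger
flushes; the 4 MB batch size and 2 s age limit are invented for the example
and are not real dm-cache or librbd settings:

#!/usr/bin/env python
# Toy writeback buffer: small writes are "acknowledged" as soon as they are
# buffered, then pushed to the slow backend in larger batches, either when
# enough data has accumulated or when the oldest data gets too old (checked
# on the next write). Thresholds are made up for illustration.
import time

class WritebackBuffer(object):
    def __init__(self, backend, max_bytes=4 * 1024 * 1024, max_age=2.0):
        self.backend = backend      # callable that receives one bundled blob
        self.max_bytes = max_bytes  # assumed 4 MB batch size
        self.max_age = max_age      # assumed 2 s limit on dirty data age
        self.chunks, self.size, self.oldest = [], 0, None

    def write(self, data):
        self.chunks.append(data)    # caller gets its "ack" right here
        self.size += len(data)
        if self.oldest is None:
            self.oldest = time.time()
        if self.size >= self.max_bytes or time.time() - self.oldest >= self.max_age:
            self.flush()

    def flush(self):
        if self.chunks:
            self.backend(b"".join(self.chunks))  # one large write downstream
            self.chunks, self.size, self.oldest = [], 0, None

# usage sketch:
#   wb = WritebackBuffer(lambda blob: open("/tmp/slow.bin", "ab").write(blob))
#   wb.write(b"x" * 4096); wb.flush()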