Hi Mark,
Do you have rbd cache enabled? I tested on my SSD cluster (only one SSD), and it seemed OK.
> dd if=/dev/zero of=test bs=16k count=65536 oflag=direct
82.3MB/s
On Sun, Jun 22, 2014 at 11:50 AM, Mark Kirkwood wrote:
> On 22/06/14 14:09, Mark Kirkwood wrote:
>
> Upgrading the VM to 14.04 and retesting
On 06/22/2014 02:02 AM, Haomai Wang wrote:
Hi Mark,
Do you have rbd cache enabled? I tested on my SSD cluster (only one SSD), and it seemed OK.
dd if=/dev/zero of=test bs=16k count=65536 oflag=direct
82.3MB/s
RBD Cache is definitely going to help in this use case. This test is
basically just sequential 16k direct writes.
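As a rough illustration of why (the file name and sizes below are just examples), rerunning the same amount of direct I/O at a larger request size shows how much of the cost is per-request latency, which is exactly what a writeback cache hides by coalescing small writes before they reach the OSDs:
  # 1 GiB of sequential direct writes, small vs. large request size
  dd if=/dev/zero of=test bs=16k count=65536 oflag=direct
  dd if=/dev/zero of=test bs=4M count=256 oflag=direct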
I'm using Crucial M500s.
On Sat, Jun 21, 2014 at 7:09 PM, Mark Kirkwood <mark.kirkw...@catalyst.net.nz> wrote:
> I can reproduce this in:
>
> ceph version 0.81-423-g1fb4574
>
> on Ubuntu 14.04. I have a two-OSD cluster with data on two SATA spinners
> (WD Blacks) and journals on two SSDs (Crucial
We actually do have a use pattern of large batch sequential writes, and
this dd is pretty similar to that use case.
A round-trip write with replication takes approximately 10-15ms to
complete. I've been looking at dump_historic_ops on a number of OSDs and
getting mean, min, and max for sub_op and
On Sun, 22 Jun 2014 12:14:38 -0700 Greg Poirier wrote:
> We actually do have a use pattern of large batch sequential writes, and
> this dd is pretty similar to that use case.
>
> A round-trip write with replication takes approximately 10-15ms to
> complete. I've been looking at dump_historic_ops
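For anyone wanting to repeat that measurement, the per-op timings come from the OSD admin socket; the socket path below assumes the default location and uses osd.0 purely as an example:
  # JSON dump of the slowest recent ops, each with a duration and per-event timeline
  ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok dump_historic_ops
  # equivalent shorthand on recent releases
  ceph daemon osd.0 dump_historic_ops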
Hey cephers,
I know I promised you a schedule by Friday but we had to take a bit of extra
time to squeeze everything in with the whole Inktank crew traveling. The good
news is that it’s up now!
https://wiki.ceph.com/Planning/CDS/CDS_Giant_and_Hammer_(Jun_2014)
Hopefully I covered off on enough
Hello,
This weekend I noticed that the deep scrubbing took a lot longer than
usual (long periods without a scrub running/finishing), even though the
cluster wasn't all that busy.
It was, however, busier than in the past, and the load average was frequently above 0.5.
frequently.
Now according to the document
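Presumably the settings in question are the scrub load/interval thresholds; a quick sketch of how to check and adjust them (default socket path and osd.0 assumed, the 2.0 value is just an example):
  ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config show | grep scrub

  # in ceph.conf, to allow scrubbing under a higher load than the 0.5 default:
  [osd]
  osd scrub load threshold = 2.0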
Good point, I had neglected to do that.
So, amending my ceph.conf [1]:
[client]
rbd cache = true
rbd cache size = 2147483648
rbd cache max dirty = 1073741824
rbd cache max dirty age = 100
and also amending the VM's XML definition to set its disk cache to writeback:
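Something along these lines, as a minimal sketch; the pool/image name, monitor host and target device are placeholders:
  <disk type='network' device='disk'>
    <driver name='qemu' type='raw' cache='writeback'/>
    <source protocol='rbd' name='rbd/vm-disk'>
      <host name='ceph-mon1' port='6789'/>
    </source>
    <target dev='vda' bus='virtio'/>
  </disk>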
How does RBD cache work? I wasn't able to find an adequate explanation in
the docs.
On Sunday, June 22, 2014, Mark Kirkwood wrote:
> Good point, I had neglected to do that.
>
> So, amending my ceph.conf [1]:
>
> [client]
> rbd cache = true
> rbd cache size = 2147483648
> rbd cache max dirty = 1073741824
Hello,
On Sun, 22 Jun 2014 23:27:01 -0700 Greg Poirier wrote:
> How does RBD cache work? I wasn't able to find an adequate explanation in
> the docs.
>
The mailing list (archive) is your friend, I asked pretty much the same
question in January.
In short, it mimics the cache on a typical hard disk.
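To make that analogy concrete, a client section that keeps the cache in writethrough mode until the guest issues its first flush (so old guests without barrier support are not put at risk) might look like this; the sizes are only illustrative:
  [client]
  rbd cache = true
  rbd cache writethrough until flush = true
  # 256 MB cache, at most 128 MB dirty, dirty data written back after ~5 s
  rbd cache size = 268435456
  rbd cache max dirty = 134217728
  rbd cache max dirty age = 5
Writes are then acknowledged from RAM, like a drive's volatile cache, and are only guaranteed to be on the OSDs once the guest issues a flush.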