> -----Original Message-----
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Gregory Farnum
> Sent: 16 March 2015 17:33
> To: Nick Fisk
> Cc: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] Cache Tier Flush = immediate base tier journal
> sync?
>
> On Wed, Mar 11, 2015 at 2:25 PM, Nick Fisk <n...@fisk.me.uk> wrote:
> >
> > I'm not sure if it's something I'm doing wrong or just an oddity I'm
> > experiencing, but when my cache tier flushes dirty blocks out to the base
> > tier, the writes seem to hit the OSDs straight away instead of coalescing
> > in the journals. Is this correct?
> >
> > For example, if I create an RBD on a standard 3-way replica pool and run
> > fio via librbd with 128k writes, I see the journals take all the IOs until
> > I hit my filestore_min_sync_interval, and then I see it start writing to
> > the underlying disks.
> >
> > Doing the same on a full cache tier (to force flushing), I immediately see
> > the base disks at very high utilisation. The journals also have some write
> > IO at the same time. The only other odd thing I can see via iostat is that
> > for most of the time whilst I'm running fio, the underlying disks are doing
> > very small write IOs of around 16kb, with an occasional big burst of
> > activity.
> >
> > I know erasure coding + cache tier is slower than just plain replicated
> > pools, but even with various high queue depths I'm struggling to get much
> > above 100-150 IOPS, compared to a 3-way replica pool which can easily
> > achieve 1000-1500. The base tier consists of 40 disks. It seems quite a
> > marked difference and I'm wondering if this strange journal behaviour is
> > the cause.
> >
> > Does anyone have any ideas?
>
> If you're running a full cache pool, then on every operation touching an
> object which isn't in the cache pool, it will try to evict an object. That's
> probably what you're seeing.
>
> Cache pools in general are only a wise idea if you have a very skewed
> distribution of data "hotness" and the entire hot zone can fit in cache at
> once.
> -Greg
Hi Greg,
It's not the caching behaviour that I'm confused about; it's the journal
behaviour on the base disks during flushing. I've been doing some more tests
and have found something reproducible which seems strange to me.
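For reference, the original fio test mentioned above was run through fio's rbd
ioengine with a job roughly along these lines (just a sketch: the pool, image
and client names are placeholders for my setup, and the queue depth was varied
between runs):

[global]
ioengine=rbd
clientname=admin
pool=cache-pool
rbdname=fio-test
invalidate=0

[128k-writes]
rw=write
bs=128k
iodepth=64
runtime=60
time_based=1

The reproducible case below doesn't need fio at all though, it just uses the
OSD bench command directly.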
First off, 10MB of 4kb writes:
time ceph tell osd.1 bench 10000000 4096
{ "bytes_written": 10000000,
"blocksize": 4096,
"bytes_per_sec": "16009426.000000"}
real 0m0.760s
user 0m0.063s
sys 0m0.022s
Now split this into 2 x 5MB writes:
time ceph tell osd.1 bench 5000000 4096 && time ceph tell osd.1 bench 5000000 4096
{ "bytes_written": 5000000,
"blocksize": 4096,
"bytes_per_sec": "10580846.000000"}
real 0m0.595s
user 0m0.065s
sys 0m0.018s
{ "bytes_written": 5000000,
"blocksize": 4096,
"bytes_per_sec": "9944252.000000"}
real 0m4.412s
user 0m0.053s
sys 0m0.071s
The 2nd bench takes a lot longer, even though both runs should easily fit in
the 5GB journal. Looking at iostat, I think I can see that no writes happen to
the journal whilst the writes from the 1st bench are being flushed. Is this the
expected behaviour? I would have thought that as long as there is space
available in the journal, it shouldn't block new writes. I also see in iostat
that writes to the underlying disk happen at a queue depth of 1 in 16kb IOs for
a number of seconds, with a large blip of activity just before the flush
finishes. Is this the correct behaviour? I would have thought that if this
"tell osd bench" is doing sequential IO, the journal should be able to flush
5-10MB of data in a fraction of a second.
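For completeness, the iostat snapshots below were captured with something like
the following (sda is the journal SSD and sdd one of the data disks on this
host, so the device list is specific to my setup), and I was also keeping an
eye on the journal perf counters over the admin socket (default socket path
assumed):

iostat -x sda sdb sdc sdd 1
ceph --admin-daemon /var/run/ceph/ceph-osd.1.asok perf dump | python -m json.tool | grep -i journal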
ceph.conf:
[osd]
filestore max sync interval = 30
filestore min sync interval = 20
filestore flusher = false
osd_journal_size = 5120
osd_crush_location_hook = /usr/local/bin/crush-location
osd_op_threads = 5
filestore_op_threads = 4
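(In case anyone wants to check, the values the OSD is actually running with can
be confirmed over the admin socket, with something like the below, assuming the
default socket path:)

ceph --admin-daemon /var/run/ceph/ceph-osd.1.asok config get filestore_min_sync_interval
ceph --admin-daemon /var/run/ceph/ceph-osd.1.asok config get filestore_max_sync_interval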
iostat during the period where writes seem to be blocked (journal = sda, data disk = sdd):
Device:  rrqm/s  wrqm/s    r/s     w/s   rkB/s   wkB/s avgrq-sz avgqu-sz  await r_await w_await  svctm  %util
sda        0.00    0.00   0.00    0.00    0.00    0.00     0.00     0.00   0.00    0.00    0.00   0.00   0.00
sdb        0.00    0.00   0.00    2.00    0.00    4.00     4.00     0.00   0.00    0.00    0.00   0.00   0.00
sdc        0.00    0.00   0.00    0.00    0.00    0.00     0.00     0.00   0.00    0.00    0.00   0.00   0.00
sdd        0.00    0.00   0.00   76.00    0.00  760.00    20.00     0.99  13.11    0.00   13.11  13.05  99.20
iostat during what I believe to be the actual flush
Device:  rrqm/s  wrqm/s    r/s     w/s   rkB/s   wkB/s avgrq-sz avgqu-sz  await r_await w_await  svctm  %util
sda        0.00    0.00   0.00    0.00    0.00    0.00     0.00     0.00   0.00    0.00    0.00   0.00   0.00
sdb        0.00    0.00   0.00    2.00    0.00    4.00     4.00     0.00   0.00    0.00    0.00   0.00   0.00
sdc        0.00    0.00   0.00    0.00    0.00    0.00     0.00     0.00   0.00    0.00    0.00   0.00   0.00
sdd        0.00 1411.00   0.00  206.00    0.00 6560.00    63.69    70.14 324.14    0.00  324.14   4.85 100.00
Nick
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com