hello
I mean a filesystem mounted on top of a mapped rbd. Here is the setup:
rbd create --size=10G kube/bench
rbd feature disable kube/bench object-map fast-diff deep-flatten
rbd map bench --pool kube --name client.admin
/sbin/mkfs.ext4 /dev/rbd/kube/bench
mount /dev/rbd/kube/bench /mnt/
cd /mnt/
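(For reference, one can sanity-check the mapping before benchmarking; rbd showmapped and lsblk are standard tools, and the device path is the one from the commands above:)

rbd showmapped               # shows which /dev/rbdX the image is mapped to
lsblk /dev/rbd/kube/bench    # confirms the device size and the /mnt mount point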
About the bench I did: I tried to compare apples to apples on both sides (I hope):

block size: 4k
threads: 1
size of data: 1G
Writes are great.
rbd -p kube bench kube/bench --io-type write --io-threads 1 --io-total 1G --io-pattern seq

elapsed:    12  ops:   262144  ops/sec: 20758.70  bytes/sec: 85027625.70

rbd -p kube bench kube/bench --io-type write --io-threads 1 --io-total 10G --io-pattern rand
elapsed:    14  ops:   262144  ops/sec: 17818.16  bytes/sec: 72983201.32
Reads are very, very slow:

rbd -p kube bench kube/bench --io-type read --io-threads 1 --io-total 1G --io-pattern rand
elapsed:   445  ops:    81216  ops/sec:   182.37  bytes/sec: 747006.15

rbd -p kube bench kube/bench --io-type read --io-threads 1 --io-total 1G --io-pattern seq
elapsed:    14  ops:    14153  ops/sec:   957.57  bytes/sec: 3922192.15
Perhaps I'm hitting this 'issue':
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2018-August/028878.html
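If that thread is the one about krbd read-ahead, the knob to look at would be the block-layer read-ahead on the mapped device (my guess; rbd0 stands for whatever device the image actually mapped to):

cat /sys/block/rbd0/queue/read_ahead_kb           # often defaults to 128
echo 4096 > /sys/block/rbd0/queue/read_ahead_kb   # larger read-ahead may help sequential reads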
For the record:

I have an old cluster with VMs on ceph 10.2.11. With a dd bench I reach the cluster's limit.

dd if=/dev/zero of=test bs=4M count=250 oflag=direct
1048576000 bytes (1.0 GB) copied, 11.5469 s, 90.8 MB/s

and pgbench gives me 200 transactions per second.
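I won't claim these were the exact flags, but a typical single-client pgbench run looks like this ("bench" being a placeholder database name):

pgbench -i bench            # initialize the test tables
pgbench -c 1 -T 60 bench    # 1 client, 60 seconds; reports transactions per second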
On the new cluster, with containers running on a fs on top of a mapped rbd and ceph nautilus, I get:

dd if=/dev/zero of=test bs=4M count=250 oflag=direct
1048576000 bytes (1.0 GB, 1000 MiB) copied, 27.0351 s, 38.8 MB/s
and pgbench gives me 10 transactions per second.
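Since reads are the slow side, a direct read test with dd on the same file might isolate the problem (a sketch mirroring the write test above):

dd if=test of=/dev/null bs=4M count=250 iflag=direct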
Something is not OK somewhere :)
oau
On Wednesday, 14 August 2019 at 15:56 +0200, Ilya Dryomov wrote:
> On Wed, Aug 14, 2019 at 2:49 PM Paul Emmerich <paul.emmer...@croit.io> wrote:
> > On Wed, Aug 14, 2019 at 2:38 PM Olivier AUDRY <oliv...@nmlq.fr> wrote:
> > > let's test random write
> > > rbd -p kube bench kube/bench --io-type write --io-size 8192 --io-threads 256 --io-total 10G --io-pattern rand
> > > elapsed:   125  ops:  1310720  ops/sec: 10416.31  bytes/sec: 85330446.58
> > > 
> > > dd if=/dev/zero of=test bs=8192k count=100 oflag=direct
> > > 838860800 bytes (839 MB, 800 MiB) copied, 24.6185 s, 34.1 MB/s
> > > 
> > > 34.1MB/s vs 85MB/s ....
> > 
> > 34 apples vs. 85 oranges
> > 
> > You are comparing 256 threads with a huge queue depth vs a single
> > thread with a normal queue depth.
> > Use fio on the mounted rbd to get better control over what it's
> > doing
> 
> When you said mounted, did you mean mapped or "a filesystem mounted
> on top of a mapped rbd"?
> 
> There is no filesystem in "rbd bench" tests, so fio should be used on
> a raw block device.  It still won't be completely apples to apples
> because in "rbd bench" or fio's rbd engine (--ioengine=rbd) case
> there
> is no block layer either, but it is closer...
> 
> Thanks,
> 
>                 Ilya