> ...
> > > > I get about 2000 IOPs with this test:
> > > >
> > > > # rados bench -p volumes 10 write -t 8 -b 16K
> > > > hints = 1
> > > > Maintaining 8 concurrent writes of 16384 bytes to objects of size
> > > > 16384 for up to 10 seconds or 0 objects
> > > > Object prefix: benchmark_data_fstosin
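For context, the ~2000 write IOPS quoted above at a 16 KiB block size works out to only about 31 MiB/s of bandwidth. A quick sanity check of that arithmetic (numbers taken from the quoted run):

```python
# Convert the quoted rados bench result into bandwidth.
iops = 2000              # ~2000 write IOPS reported
block_size = 16 * 1024   # -b 16K -> 16384-byte objects
bandwidth_mib_s = iops * block_size / (1024 ** 2)
print(f"{bandwidth_mib_s:.2f} MiB/s")  # -> 31.25 MiB/s
```

Small-block benchmarks like this are latency-bound, so the MB/s figure rados bench prints will look low even on healthy flash.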
With good hardware and correct configuration, an all-flash cluster
should give:
approx. 1-2K write IOPS per thread (0.5-1 ms latency)
approx. 2-5K read IOPS per thread (0.2-0.5 ms latency)
This depends on the quality of the drives and the CPU/frequency, but is
independent of the number of drives or cores.
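These per-thread figures follow from Little's law for a closed-loop benchmark: total IOPS = threads / per-op latency. Applying it to the quoted run (8 threads, ~2000 IOPS) shows why the result is low:

```python
# Little's law for a closed-loop benchmark: IOPS = threads / latency.
threads = 8
measured_iops = 2000

avg_latency_s = threads / measured_iops    # average time per write
per_thread_iops = measured_iops / threads  # throughput of each thread

print(f"avg latency: {avg_latency_s * 1000:.1f} ms")   # -> 4.0 ms
print(f"per thread:  {per_thread_iops:.0f} IOPS")      # -> 250 IOPS
```

250 IOPS per thread at 4 ms per write is well outside the 1-2K-per-thread (0.5-1 ms) range above, which points at per-op latency (CPU, network, drive sync behavior) rather than the number of OSDs.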
>
> 6) There is advanced tuning like NUMA pinning, but you should get decent
> speeds without doing fancy stuff.
This is why I’d asked the OP for the CPU in use. Mark and Dan’s recent and
superlative presentation about 1TB/s with Ceph underscored how tunings can make
a very real difference.