Hello,
I created an rbd image and wrote some data to it. After that, I cloned a
new image from the previous one.
Then I compared the two images byte by byte, but they are not completely equal.
The data at positions where I never wrote to the first image is not equal.
Is this a normal case?
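A minimal sketch of the steps (pool and image names are placeholders, and the
write step is just one way to do it):

  rbd create rbd/img1 --size 1024
  # ... write some data to img1 (e.g. via a mapped device or librbd) ...
  rbd snap create rbd/img1@snap1
  rbd snap protect rbd/img1@snap1
  rbd clone rbd/img1@snap1 rbd/img2
  # export both images and compare them byte by byte
  rbd export rbd/img1 img1.raw
  rbd export rbd/img2 img2.raw
  cmp -l img1.raw img2.raw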
Why do you think that is slow? That's 4.5k write IOPS and 13.5k read IOPS
at the same time; that's amazing for a total of 30 HDDs.
It's actually way faster than you'd expect for 30 HDDs, so these DB devices
are really helping there :)
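Rough math (assuming ~150 random IOPS per HDD and 3x replication, which are
typical figures rather than anything stated in this thread):

  30 HDDs x ~150 IOPS      ~= 4,500 raw random IOPS in total
  4.5k client writes x 3    = 13.5k backend write ops
  plus 13.5k reads on top  => far more than bare spinners can deliver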
Paul
--
Paul Emmerich
Looking for help with your Ceph cluster?
I am working on upgrading my current Ethernet-only Ceph cluster to a
combined Ethernet frontend and InfiniBand backend. From my research I
understand that I should set:
ms_cluster_type = async+rdma
ms_async_rdma_device_name = mlx4_0
What I don't understand is how Ceph knows how to reach each OSD over the
InfiniBand network.
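For context, a sketch of the ceph.conf pieces involved (the subnets and the
public-side messenger setting are my own assumptions, not something I found
stated anywhere):

  [global]
  # frontend/public traffic stays on plain TCP
  ms_public_type = async+posix
  # backend/cluster traffic (replication, recovery) over RDMA
  ms_cluster_type = async+rdma
  ms_async_rdma_device_name = mlx4_0
  # OSDs publish a cluster address from this subnet in the OSDMap,
  # which is how their peers know where to reach them
  cluster_network = 192.168.100.0/24
  public_network = 10.0.0.0/24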
Hi,
Today, checking our monitor logs, we see that a RocksDB compaction triggers
every minute.
Is that normal?
2020-01-02 14:08:33.091 7f2b8acbe700 4 rocksdb: [db/db_impl_compaction_flush.cc:1403] [default] Manual compaction starting
2020-01-02 14:08:33.091 7f2b8acbe700 4 rocksdb: [db/db_impl_compa
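In case it helps to compare, the mon store can be checked and compacted by
hand (replace <id> with your mon id; the store path shown is the default and
may differ on your install):

  # size of the monitor's RocksDB store
  du -sh /var/lib/ceph/mon/ceph-<id>/store.db
  # one-off compaction of a single monitor's store
  ceph tell mon.<id> compact
  # or, in ceph.conf, compact the store on every mon start
  [mon]
  mon_compact_on_start = true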
Hi Stefan, using fio with bs=64k I got very good performance.
I am not skilled on storage, but the Linux file system block size is 4k.
So, how can I modify the Ceph configuration to obtain the best performance
with bs=4k?
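For reference, this is the kind of 4k test I mean (a sketch; the file name is
the same placeholder as before, and numjobs is only there to add parallel
workers):

  fio --name=randwrite4k --ioengine=libaio --direct=1 --gtod_reduce=1 \
      --filename=random_read_write.fio --bs=4k --iodepth=64 --size=4G \
      --numjobs=4 --group_reporting --readwrite=randwrite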
Regards
Ignazio
On Thu 2 Jan 2020 at 10:59, Stefan Kooman wrote:
Hi,
Your performance is not that bad, is it? What performance do you expect?
I just ran the same test.
12 Node, SATA SSD Only:
READ: bw=63.8MiB/s (66.9MB/s), 63.8MiB/s-63.8MiB/s (66.9MB/s-66.9MB/s), io=3070MiB (3219MB), run=48097-48097msec
WRITE: bw=21.3MiB/s (22.4MB/s), 21.3MiB/s-21.3MiB/s
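(A rough conversion, assuming the same 4k block size as your test: 63.8 MiB/s
/ 4 KiB is roughly 16.3k read IOPS, and 21.3 MiB/s / 4 KiB is roughly 5.5k
write IOPS.)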
Hi Stefan,
I did not understand your question, but it's my fault.
I am using virtio-scsi on my virtual machine.
The virtual machine has two cores.
Or do you mean cores on the OSD servers?
Regards
Ignazio
On Thu 2 Jan 2020 at 10:59, Stefan Kooman wrote:
> Quoting Ignazio Cassano
Quoting Ignazio Cassano (ignaziocass...@gmail.com):
> Hello All,
> I installed ceph luminous with openstack, and using fio in a virtual machine
> I got slow random writes:
>
> fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test
> --filename=random_read_write.fio --bs=4k --io
Hello All,
I installed ceph luminous with openstack, and using fio in a virtual machine
I got slow random writes:
fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test
--filename=random_read_write.fio --bs=4k --iodepth=64 --size=4G
--readwrite=randrw --rwmixread=75
Run status group 0 (all jobs):