I would try to scale horizontally with smaller Ceph nodes. That way you can
choose an EC profile that does not require too much overhead, and you can
use host as the failure domain.
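For reference, a minimal sketch of such a setup (the profile name "ec42" and
k=4/m=2 are just placeholders; pick values that match your node count and
overhead target, and adjust the pg counts):

ceph osd erasure-code-profile set ec42 k=4 m=2 crush-failure-domain=host
ceph osd pool create ecpool 64 64 erasure ec42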
Joachim
On 09.01.2020 at 15:31, Wido den Hollander wrote:
On 1/9/20 2:27 PM, Stefan Priebe wrote:
Maybe this will help you:
https://docs.ceph.com/docs/master/radosgw/multisite/#migrating-a-single-site-system-to-multi-site
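In short, the first steps from that doc look roughly like this (realm,
zonegroup and zone names are placeholders; the endpoints, system user and
the rgw_zone setting in ceph.conf still have to be configured as described
there):

radosgw-admin realm create --rgw-realm=gold --default
radosgw-admin zonegroup rename --rgw-zonegroup default --zonegroup-new-name=us
radosgw-admin zone rename --rgw-zone default --zone-new-name=us-east-1 --rgw-zonegroup=us
radosgw-admin zonegroup modify --rgw-realm=gold --rgw-zonegroup=us --master --default
radosgw-admin zone modify --rgw-realm=gold --rgw-zonegroup=us --rgw-zone=us-east-1 --master --default
radosgw-admin period update --commit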
Clyso GmbH
On 03.10.2019 at 13:32, M Ranga Swami Reddy wrote:
Thank you. Do we have a quick document to do this migration?
Thanks
Hi Uwe,
I can only recommend enterprise SSDs. We have tested many consumer SSDs in
the past, including your SSDs. Many of them are not suitable for long-term
use, and some wore out within six months.
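If it helps, a quick way to keep an eye on wear (assuming smartmontools is
installed; the attribute names differ per vendor):

smartctl -A /dev/sda       # SATA: look for Media_Wearout_Indicator / Wear_Leveling_Count
smartctl -a /dev/nvme0     # NVMe: "Percentage Used" in the SMART/health log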
Cheers, Joachim
Homepage: https://www.clyso.com
On 27.02.2019 at 10:24, Enek wrote:
Hi Ketil,
We also offer independent Ceph consulting and have been operating production
clusters for more than 4 years, with up to 2500 OSDs.
You can meet many of us in person at the next Cephalocon in Barcelona.
(https://ceph.com/cephalocon/barcelona-2019/)
Regards, Joachim
Clyso GmbH
Homepage: https://www.clyso.com
Hello Andreas,
we had the following experience in recent years:
A year ago we also completely shut down a 2500+ OSD Ceph cluster and had no
problems starting it again (5 mon nodes, each with 4 x 25 Gbit/s).
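For what it's worth, a sketch of the flags we set before such a full shutdown
(the exact procedure depends on the release, and the flags have to be cleared
again after startup):

ceph osd set noout
ceph osd set norebalance
ceph osd set norecover
ceph osd set nobackfill
ceph osd set pause
# stop clients, then OSDs, then mons; on startup bring up mons and OSDs
# again and clear the flags with "ceph osd unset <flag>"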
A few years ago, we increased the number of OSDs to more than 600 in an
existing cluster. In that situation we noticed a performance drop (caused by
the filesystem) and soon had no free inodes left.
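For reference, the quick check we used for that (FileStore-era OSDs on a
local filesystem):

df -i /var/lib/ceph/osd/ceph-*    # IUse% near 100 means no free inodes left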
Clyso GmbH
On 12.12.2018 at 09:24, Klimenko, Roman wrote:
Ok, I'll try these params. thx!