Sent: 17 January 2020 06:55:25
To: Dave Hall
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Beginner questions
There is no difference in allocation between replication and EC. If the failure
domain is host, one OSD per host is used for a PG. So if you use a 2+1 EC
profile with a host failure domain, each PG spans three separate hosts.
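The placement arithmetic above is simple enough to sketch. This is not Ceph code, just a hedged illustration of the point: with failure domain set to host, a k+m erasure-code profile needs k+m hosts per PG (exactly as size=n replication needs n hosts), and tolerates m host failures. The helper names here are made up for the example.

```python
# Sketch (not Ceph code): shard placement counts for a pool when the CRUSH
# failure domain is "host". Ceph places one shard per failure domain, so a
# k+m EC profile consumes k+m hosts per PG, same as replication size=k+m.

def hosts_needed(k: int, m: int = 0) -> int:
    """Hosts required per PG: k data shards plus m coding shards.

    m=0 models a replicated pool of size k (k full copies)."""
    return k + m

def host_failures_tolerated(m: int) -> int:
    """An EC pool stays readable as long as any k shards survive,
    so it tolerates the loss of up to m failure domains."""
    return m

# The 2+1 profile from the thread: 3 hosts per PG, survives 1 host failure.
print(hosts_needed(2, 1))          # 3
print(host_failures_tolerated(1))  # 1
```

This is also why a 2+1 profile on a 3-host cluster leaves no spare host to recover onto after a failure — something to weigh before choosing it.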
www.PerformAir.com
-----Original Message-----
From: Dave Hall [mailto:kdh...@binghamton.edu]
Sent: Thursday, January 16, 2020 1:04 PM
To: Dominic Hilsbos; ceph-users@lists.ceph.com
ime, and
>> benchmark different configurations?
>>
>> Thank you,
>>
>> Dominic L. Hilsbos, MBA
>> Director – Information Technology
>> Perform Air International Inc.
>> dhils...@performair.com
>> www.PerformAir.com
>>
>>
>> -----Original Message-----
From: Dave Hall [mailto:kdh...@binghamton.edu]
Sent: Thursday, January 16, 2020 1:04 PM
To: Dominic Hilsbos; ceph-users@lists.ceph.com
Subject: Re: [External Email] RE: [ceph-users] Beginner questions
Dominic,
We ended up with a 1.6TB PCIe NVMe in each node. For 8 drives this
worked out to a DB size of
Visser
Sent: Thursday, January 16, 2020 10:55 AM
To: Dave Hall
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Beginner questions
I would definitely go for Nautilus; there are quite some optimizations that
went in after Mimic.
Bluestore DB size usually ends up at either 30 or 60 GB. 30 GB is one of the
sweet spots during normal operation. But during compaction, Ceph writes the
new data before removing the old, hence the 60 GB.

Don't use Mimic; support for it is far worse than for Nautilus or Luminous. I
think we were the only company who built a product around Mimic; both
Red Hat and SUSE enterprise storage were Luminous and then Nautilus, skipping
Mimic entirely. We only offered Mimic as a default for a limited time and immed
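The 30/60 GB reasoning above is just arithmetic, so a rough back-of-the-envelope sketch may help when partitioning a shared NVMe for DB devices. This is not official Ceph sizing guidance, only an illustration of the rule of thumb in the thread; the constants and helper names are assumptions for the example.

```python
# Back-of-the-envelope sketch of the DB sizing rule of thumb above:
# ~30 GB is a RocksDB sweet spot in steady state, but compaction writes
# the new SSTs before deleting the old ones, so budget roughly double.

STEADY_STATE_DB_GB = 30   # sweet spot during normal operation
COMPACTION_FACTOR = 2     # old and new data coexist during compaction

def db_partition_gb(steady_gb: int = STEADY_STATE_DB_GB) -> int:
    """Suggested DB partition size, leaving headroom for compaction."""
    return steady_gb * COMPACTION_FACTOR

def osds_per_nvme(nvme_gb: int, per_osd_gb: int) -> int:
    """How many OSD DB partitions of per_osd_gb fit on one shared device."""
    return nvme_gb // per_osd_gb

print(db_partition_gb())        # 60
print(osds_per_nvme(1600, 60))  # 26
```

By this estimate a 1.6 TB NVMe comfortably holds 60 GB DB partitions for the 8 OSDs per node mentioned earlier in the thread, with room to spare.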