Hi Ilya,
Thank you very much for your help. You were right; we got this fixed by correcting the caps.
Thank you
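For anyone hitting the same issue, the client caps can be inspected and adjusted with the standard auth commands. A rough sketch only; client.myrbduser and mypool are placeholders, not the actual names from this thread:
$ ceph auth get client.myrbduser
$ ceph auth caps client.myrbduser mon 'profile rbd' osd 'profile rbd pool=mypool'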
From: Ilya Dryomov
Sent: Wednesday, May 25, 2022 12:39:08 PM
To: Sopena Ballesteros Manuel
Cc: ceph-users@ceph.io
Subject: Re: [ceph-users] rbd command
On 26.05.22 at 20:21, Sarunas Burdulis wrote:
size 2 min_size 1
With such a setting you are guaranteed to lose data.
Regards
--
Robert Sander
Heinlein Consulting GmbH
Schwedter Str. 8/9b, 10119 Berlin
http://www.heinlein-support.de
Tel: 030 / 405051-43
Fax: 030 / 405051-19
On 5/26/22 14:38, Wesley Dillingham wrote:
pool 13 'mathfs_metadata' replicated size 2 min_size 2 crush_rule 0
object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change
The problem is you have size=2 and min_size=2 on this pool. I would
increase the size of this pool to 3 (but I w
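In concrete terms, the change suggested above would look roughly like this (a sketch using the pool name from the dump above; make sure the cluster has enough OSDs and failure domains for three replicas first):
$ ceph osd pool set mathfs_metadata size 3
$ ceph osd pool set mathfs_metadata min_size 2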
On 5/27/22 04:54, Robert Sander wrote:
On 26.05.22 at 20:21, Sarunas Burdulis wrote:
size 2 min_size 1
With such a setting you are guaranteed to lose data.
What would you suggest?
--
Sarunas Burdulis
Dartmouth Mathematics
math.dartmouth.edu/~sarunas
· https://useplaintext.email ·
Hi,
Can you please tell us the size of your Ceph cluster? How many OSDs do you
have?
The default recommendation is a min_size of 2 and a size (replica count) of 3 per
replicated pool.
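The current values can be checked per pool with, for example:
$ ceph osd pool ls detail
$ ceph osd pool get mathfs_metadata size
$ ceph osd pool get mathfs_metadata min_size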
Thank you,
Bogdan Velica
croit.io
On Fri, May 27, 2022 at 6:33 PM Sarunas Burdulis wrote:
On 5/27/22 11:41, Bogdan Adrian Velica wrote:
Hi,
Can you please tell us the size of your Ceph cluster? How many OSDs do
you have?
16 OSDs.
$ ceph df
--- RAW STORAGE ---
CLASS SIZE AVAIL USED RAW USED %RAW USED
hdd 8.9 TiB 8.3 TiB 595 GiB 595 GiB 6.55
ssd 7.6 TiB