Hello,
I need advice on creating an EC profile and the associated CRUSH rule,
for a cluster of 15 nodes, each with 12 x 18 TB disks, with the objective of
being able to lose 2 hosts or 4 disks.
I would like to have as much usable space as possible; a 75% ratio would be ideal.
If you can give me some advice...
It is cold storage used only with CephFS, where we store only big files.
We cannot have a fully replicated cluster, and we need maximum uptime...
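For reference, a minimal sketch of a profile and pool matching these constraints, assuming HDD OSDs, a host failure domain and a CephFS data pool named cephfs_data_ec (the names are illustrative): k=6/m=2 gives the 75% usable ratio and tolerates the loss of any 2 hosts; tolerating 4 simultaneous disk failures spread over 4 hosts would instead need m=4 (e.g. k=12/m=4, also 75%), which does not fit a host-level failure domain on only 15 nodes.

# 6 data + 2 coding chunks, placed one per host on HDD OSDs
ceph osd erasure-code-profile set ec62 \
    k=6 m=2 \
    crush-failure-domain=host \
    crush-device-class=hdd
# creating an erasure pool from the profile also creates a matching CRUSH rule
ceph osd pool create cephfs_data_ec erasure ec62
# required to use the pool as a CephFS data pool
ceph osd pool set cephfs_data_ec allow_ec_overwrites true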
Regards
----- Original Message -----
> From: "Anthony D'Atri"
> To: "Bailey Allison"
> Cc: "Danny Webb" , "Ch
nd 30GB of extra room for compaction?
>>
>> I don't use cephadm, but maybe it's related to this regression:
>> https://tracker.ceph.com/issues/56031. At least the symptoms look very
>> similar...
>>
>> Cheers,
>>
--
Christophe BAILLON
Mobile :: +336 16 400 522
Work :: https://eyona.com
Twitter :: https://twitter.com/ctof
that, can you help me to find
the best way to deploy Samba on top?
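For reference, a minimal sketch of one common approach (not necessarily the recommended one): kernel-mount the filesystem on a gateway host and export the mount with a plain Samba share. The mount point, the cephx user "samba" and the share name are assumptions, not taken from this thread.

# mount CephFS on the gateway host (monitors are read from /etc/ceph/ceph.conf)
mkdir -p /mnt/cephfs
mount -t ceph :/ /mnt/cephfs -o name=samba,secretfile=/etc/ceph/samba.secret
# append a share definition pointing at the mount and reload Samba
cat >> /etc/samba/smb.conf <<'EOF'
[cephfs]
    path = /mnt/cephfs
    read only = no
EOF
systemctl restart smbd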
Regards
> there aren't any guides yet on docs.ceph.com.
>
> Regards,
> Eugen
>
> [1] https://documentation.suse.com/ses/7.1/single-html/ses-admin/#cha-ses-cifs
>
> Quoting Christophe BAILLON:
>
>> Hello,
>>
>> For a side project, we need to exp
Thanks, it's fine
> De: "Wyll Ingersoll"
> À: "Christophe BAILLON"
> Cc: "Eugen Block" , "ceph-users"
> Envoyé: Jeudi 27 Octobre 2022 22:49:18
> Objet: Re: [ceph-users] Re: SMB and ceph question
> No - the recommendation is ju
Hello,
How can we simply monitor the growth of the db/wal partitions?
We have 2 NVMe drives shared by the 12 OSDs on each host (1 NVMe for 6 OSDs),
and we want to monitor their growth.
We use cephadm to manage our clusters.
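For reference, a minimal sketch of where this usage is visible, assuming the bluefs perf counters; osd.0 is just an example id.

# per-OSD metadata (db) usage shows up in the META column
ceph osd df tree
# detailed bluefs usage for one OSD, via its admin socket on the OSD's host
cephadm shell --name osd.0 -- ceph daemon osd.0 perf dump bluefs
# the same counters (db_used_bytes, db_total_bytes, slow_used_bytes) are also
# exported via the mgr prometheus module, so they can be graphed and alerted on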
Thanks in advance
Hello,
We have a cluster with 26 nodes, and 15 of them have a bad batch of 2 NVMe drives,
each carrying 6 LVs for db/wal. We have to replace them, because they are failing
one by one...
The defective NVMe drives are Samsung enterprise M.2 models.
When they fail, we get sense errors and the NVMe disappears; if we power o
Hello,
we have a Ceph 17.2.5 cluster with a total of 26 nodes, of which 15 nodes
have faulty NVMe drives
where the db/wal resides (one NVMe for the first 6 OSDs and another for the
remaining 6).
We replaced them with new drives and pvmoved the data to avoid losing the OSDs.
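For reference, a sketch of the LVM-level migration meant by "pvmoved", assuming the db/wal LVs live in a VG named ceph-db-0, the failing NVMe is /dev/nvme0n1 and the replacement is /dev/nvme2n1 (all names are illustrative):

pvcreate /dev/nvme2n1              # initialise the replacement NVMe
vgextend ceph-db-0 /dev/nvme2n1    # add it to the db/wal volume group
pvmove /dev/nvme0n1 /dev/nvme2n1   # move all extents off the failing PV
vgreduce ceph-db-0 /dev/nvme0n1    # drop the old PV from the VG
pvremove /dev/nvme0n1              # wipe the LVM label from the old NVMe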
So far, there are n
},
{
"device": "/dev/sdc",
"device_id": "SEAGATE_ST18000NM004J_ZR52TT83C148JFSJ"
}
]
----- Original Message -----
> From: "Christophe BAILLON"
> To: "ceph-users"
> Sent: Friday, June 30, 2023 15:33:41
> Subject
Hello,
We have a cluster with 21 nodes, each having 12 x 18 TB disks and 2 NVMe drives for db/wal.
We need to add more nodes.
The last time we did this, the PGs remained at 1024, so the number of PGs per
OSD decreased.
Currently, we are at 43 PGs per OSD.
Does auto-scaling work correctly in Ceph versio
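For reference, a minimal sketch of the knobs involved, assuming the main data pool is called cephfs_data; the pool name, ratio and pg_num values are illustrative:

# what the autoscaler currently computes for each pool
ceph osd pool autoscale-status
# hint the expected final share of the cluster so PGs are scaled up front
ceph osd pool set cephfs_data target_size_ratio 0.8
# or put the autoscaler in warn mode and raise pg_num manually
ceph osd pool set cephfs_data pg_autoscale_mode warn
ceph osd pool set cephfs_data pg_num 2048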
Thanks for your reply
----- Original Message -----
> From: "Kai Stian Olstad"
> To: "Christophe BAILLON"
> Cc: "ceph-users"
> Sent: Thursday, September 14, 2023 21:44:57
> Subject: Re: [ceph-users] Questions about PG auto-scaling and node addition
> On W
Hello
On a new cluster, installed with cephadm, I have prepared new OSDs with separate
wal and db.
To do it I followed this doc:
https://docs.ceph.com/en/quincy/rados/configuration/bluestore-config-ref/
I run ceph version 17.2.0
When I run the ceph-volume creation I get this error:
root@store-par
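For context, a sketch of the kind of ceph-volume call involved, assuming one HDD as data device and pre-created LVs for db and wal on the NVMe; the device path and VG/LV names are illustrative, not the exact command from that host:

ceph-volume lvm create \
    --data /dev/sdc \
    --block.db ceph-db-0/db-osd-12 \
    --block.wal ceph-wal-0/wal-osd-12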
----- Original Message -----
> From: "Christophe BAILLON"
> To: "ceph-users"
> Sent: Tuesday, May 31, 2022 18:15:15
> Subject: [ceph-users] Problem with ceph-volume
> Hello
>
> On a new cluster, installed with cephadm, I have prepared new OSDs with
> separate
> wal and db
Hi all
I get many errors about PG deviation of more than 30% on a newly installed cluster.
This cluster is managed by cephadm.
All 15 boxes have:
12 x 18 TB
2 x NVMe
2 x SSD for boot
Our main pool is EC 6+2, for exclusive use with CephFS,
created with this method:
ceph orch apply -i osd_spec.yaml
wi
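For reference, a minimal sketch of an OSD spec of this shape, assuming rotational HDDs as data devices and the NVMes as db devices; the service id and host pattern are illustrative, not the exact spec used here:

cat > osd_spec.yaml <<'EOF'
service_type: osd
service_id: hdd_with_nvme_db
placement:
  host_pattern: 'store-*'
spec:
  data_devices:
    rotational: 1
  db_devices:
    rotational: 0
EOF
ceph orch apply -i osd_spec.yaml --dry-run   # preview before applying for real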
> very low (too low) PG numbers per OSD (between 0 and 6), did you stop
> the autoscaler at an early stage? If you don't want to use the
> autoscaler you should increase the pg_num, but you could set
> autoscaler to warn mode and see what it suggests.
>
>
> Quoting Chris
> email. The autoscaler will increase pg_num as soon as you push data
> into it, no need to tear the cluster down.
>
> Quoting Christophe BAILLON:
>
>> Hello,
>>
>> thanks for your reply
>>
>> No did not stop autoscaler
>>
>> root@stor
Quincy.
>
> Gr. Stefan
>
> [1]:
> https://docs.ceph.com/en/quincy/rados/configuration/bluestore-config-ref/#sizing