Hello everyone,
We need to add 180 20TB OSDs to our Ceph cluster, which currently
consists of 540 OSDs of identical size (replicated size 3).
I'm not sure, though: is it a good idea to add all the OSDs at once? Or
is it better to add them gradually?
The idea is to minimize the impact of rebalancing.
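As a back-of-envelope check (my own estimate, not from the thread): with uniform CRUSH weights, roughly new/(old+new) of all data migrates onto the new disks, whether you add them at once or gradually — gradual addition only spreads the same total movement over time.

```python
# Rough sketch: approximate fraction of cluster data that moves to newly
# added OSDs, assuming uniform CRUSH weights across all OSDs.
def rebalance_fraction(existing_osds: int, new_osds: int) -> float:
    """Fraction of data expected to migrate onto the new OSDs."""
    return new_osds / (existing_osds + new_osds)

print(rebalance_fraction(540, 180))  # 0.25
```

So about a quarter of the data will move either way; what changes with gradual addition is how long the cluster spends backfilling.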
my work address: sirj...@ill.fr
Cheers,
--
Fabien Sirjean
Head of IT Infrastructures (DPT/SI/INFRA)
Institut Laue Langevin (ILL)
+33 (0)4 76 20 76 46 / +33 (0)6 62 47 52 80
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email
Hi,
Yes, Cisco supports VPC (virtual port-channel) for LACP over multiple
switches.
We're using 2x10G VPC on our Ceph and Proxmox hosts with 2 Cisco Nexus
3064PQ (48 x 10G SFP+ & 4 x 40G).
These can be found refurbished for ~1000€ with a lifetime warranty.
Happy users here :-)
Cheers,
F
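For completeness, the host side of such a setup is a standard LACP (802.3ad) bond; a hedged sketch with iproute2 (interface names are examples, not from the original mail):

```shell
# Create an LACP bond matching a 2x10G vPC uplink (interface names are examples)
ip link add bond0 type bond mode 802.3ad lacp_rate fast xmit_hash_policy layer3+4
ip link set eth0 down && ip link set eth0 master bond0
ip link set eth1 down && ip link set eth1 master bond0
ip link set bond0 up
```

Note that layer3+4 hashing helps spread multiple Ceph TCP connections across both links; a single connection still tops out at one link's speed.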
Dear all,
Could you please share recent experiences with CephFS access
from macOS clients?
I couldn't find any threads on this matter on the list since 2021
(Daniel Persson), where he stated that:
Mac Mini Intel Catalina - Connected and working fine.
Mac Mini M1 BigSur - Can'
Hi,
We have the same issue. It seems to come from this bug:
https://access.redhat.com/solutions/6982902
We had to disable root_squash, which of course is a huge issue...
Cheers,
Fabien
On 5/15/24 12:54, Nicola Mori wrote:
Dear Ceph users,
I'm trying to export a CephFS with the root_squash
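For context, the root_squash flag in question is the one attached to a client's CephFS caps; a hedged sketch (filesystem and client names are examples):

```shell
# Grant a client rw access with root_squash set on its MDS caps
ceph fs authorize cephfs client.backup / rw root_squash
# The workaround mentioned above amounts to re-issuing the caps without the flag
ceph fs authorize cephfs client.backup / rw
```

Dropping the flag restores root's ability to write, at the cost of the protection root_squash was meant to provide.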
Hi,
On 5/17/24 08:51, Kotresh Hiremath Ravishankar wrote:
Yes, it's already merged to the reef branch, and should be available in the
next reef release.
Please look at https://tracker.ceph.com/issues/62952
This is great news! Many thanks to all involved.
F.
Hi Ceph users!
An interesting EC setup has been proposed to me, one I hadn't thought about before.
The scenario is: we have two server rooms and want to store ~4PiB with the
ability to lose one server room without loss of data or RW availability.
For context, performance is not needed (cold storage mostly).
one DC and everything was still working fine. Although there's also
the stretch mode (which I haven't tested properly yet) I can encourage
you to use such a profile. Just be advised to properly test your crush
rule. ;-)
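A hedged sketch of what such a profile and rule could look like (k/m values and names are my examples, not Eugen's actual setup):

```shell
# Example EC profile: k=4, m=4, with 4 shards placed in each room.
# Losing one room leaves exactly k=4 shards, so data stays readable
# (RW availability additionally requires min_size lowered to k).
ceph osd erasure-code-profile set ec-two-rooms k=4 m=4 crush-failure-domain=host
# A matching CRUSH rule would pick 2 rooms, then 4 hosts in each, roughly:
#   step take default
#   step choose indep 2 type room
#   step chooseleaf indep 4 type host
#   step emit
```

As Eugen says, test the rule (e.g. with crushtool) before trusting it with data.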
Regards,
Eugen
Quoting Fabien Sirjean:
Hi Ceph users!
Hi Ceph users !
After years of using Ceph, we plan to soon build a new cluster, bigger than what
we've done in the past. As the project is still in the planning stage, I'd like to
have your thoughts on our intended design: any feedback is welcome :)
## Requirements
* ~1 PB usable space for file storage
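As a rough illustration of how a usable-space requirement translates to raw capacity (the overheads and fill ceiling below are my assumptions, not from the post):

```python
# Raw capacity needed for a usable target, given the replication/EC overhead
# factor and a fill ceiling (staying below ~80% full is common practice).
def raw_capacity_pb(usable_pb: float, overhead: float, max_fill: float = 0.8) -> float:
    return usable_pb * overhead / max_fill

print(raw_capacity_pb(1.0, 3.0))    # replica 3 -> 3.75
print(raw_capacity_pb(1.0, 6 / 4))  # EC 4+2    -> 1.875
```

The spread between replication and EC overhead is usually the first big design decision for a file-storage cluster of this size.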
Dear all,
If I am to believe this issue [1], it seems that it is still not
possible to make files immutable (chattr +i) in cephfs.
Do you have any update on this matter?
Thanks a lot for all the good work!
Cheers,
Fabien
[1] : https://tracker.ceph.com/issues/10679