[ceph-users] Re: octopus v15.2.17 QE Validation status

2022-07-28 Thread Sven Kieske
On Thu, 2022-07-28 at 07:50 +1000, Brad Hubbard wrote: > The primary cause of the issues with ceph-ansible is that octopus was pinned to > the stable_6.0 branch; for octopus it should be using stable_5.0 > according to https://docs.ceph.com/projects/ceph-ansible/en/latest/#releases > > I don't believe this

[ceph-users] Re: PG does not become active

2022-07-28 Thread Jesper Lykkegaard Karlsen
Hi Frank, I think you need at least 6 OSD hosts to make EC 4+2 with failure domain host. I do not know how it was possible for you to create that configuration in the first place? Could it be that you have multiple names for the OSD hosts? That would at least explain the one OSD down being shown as tw
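
For reference, a minimal sketch of creating an EC 4+2 pool with host as the failure domain (the profile name, pool name and PG count are illustrative, not from the thread):

    ceph osd erasure-code-profile set ec42 k=4 m=2 crush-failure-domain=host
    ceph osd pool create ecpool 64 erasure ec42

With such a profile, CRUSH needs k+m = 6 distinct hosts to place every PG, which is why fewer than 6 OSD hosts leaves PGs incomplete.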

[ceph-users] Re: PG does not become active

2022-07-28 Thread Jesper Lykkegaard Karlsen
Ah I see, I should have looked at the “raw” data instead ;-) Then I agree, this is very weird. Best, Jesper -- Jesper Lykkegaard Karlsen Scientific Computing Centre for Structural Biology Department of Molecular Biology and Genetics Aarhus University Universitetsbyen 81 8000 Aar

[ceph-users] Cluster running without monitors

2022-07-28 Thread Johannes Liebl
Hi Ceph Users, I am currently evaluating different cluster layouts and as a test I stopped two of my three monitors while client traffic was running on the nodes. Only when I restarted an OSD did all PGs which were related to that OSD go down, but the rest were still active and serving request

[ceph-users] Ceph pool size and OSD data distribution

2022-07-28 Thread Roland Giesler
I have a 7 node cluster which is complaining that:

root@s1:~# ceph -s
  cluster:
    id:     a6092407-216f-41ff-bccb-9bed78587ac3
    health: HEALTH_WARN
            1 nearfull osd(s)
            4 pool(s) nearfull
  services:
    mon: 3 daemons, quorum sm1,2,s5
    mgr: s1(active), standbys: s5,
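
A couple of read-only commands that are usually the first step when chasing nearfull warnings and uneven data distribution:

    ceph osd df tree     # per-OSD utilization, variance and PG counts
    ceph df              # per-pool usage and MAX AVAIL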

[ceph-users] cephadm automatic sizing of WAL/DB on SSD

2022-07-28 Thread Calhoun, Patrick
Hi, I'd like to understand if the following behaviour is a bug. I'm running ceph 16.2.9. In a new OSD node with 24 hdd (16 TB each) and 2 ssd (1.44 TB each), I'd like to have "ceph orch" allocate WAL and DB on the ssd devices. I use the following service spec: spec: data_devices: rotation
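
A sketch of an OSD service spec along those lines, assuming the intent is HDDs for data and SSDs for DB/WAL (the service id, host pattern and size are examples, not taken from the thread):

    service_type: osd
    service_id: hdd_with_db_on_ssd
    placement:
      host_pattern: '*'
    spec:
      data_devices:
        rotational: 1
      db_devices:
        rotational: 0
      # optional: pin the per-OSD DB size instead of letting ceph-volume
      # split each SSD evenly across the OSDs it serves
      block_db_size: 60G

When no separate WAL device is given, the WAL is colocated with the DB, so a wal_devices section is normally not needed in this layout.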

[ceph-users] Re: Cluster running without monitors

2022-07-28 Thread Gregory Farnum
On Thu, Jul 28, 2022 at 5:32 AM Johannes Liebl wrote: > > Hi Ceph Users, > > > I am currently evaluating different cluster layouts and as a test I stopped > two of my three monitors while client traffic was running on the nodes. > > > Only when I restarted an OSD all PGs which were related to th

[ceph-users] Re: Ceph Stretch Cluster - df pool size (Max Avail)

2022-07-28 Thread Nicolas FONTAINE
Hello, We have exactly the same problem. Did you find an answer or should we open a bug report? Sincerely, Nicolas. On 23/06/2022 at 11:42, Kilian Ries wrote: Hi Joachim, yes i assigned the stretch rule to the pool (4x replica / 2x min). The rule says that two replicas should be in eve

[ceph-users] Re: cannot set quota on ceph fs root

2022-07-28 Thread Gregory Farnum
On Thu, Jul 28, 2022 at 1:01 AM Frank Schilder wrote: > > Hi all, > > I'm trying to set a quota on the ceph fs file system root, but it fails with > "setfattr: /mnt/adm/cephfs: Invalid argument". I can set quotas on any > sub-directory. Is this intentional? The documentation > (https://docs.cep
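
For anyone landing here, a minimal sketch of how the directory-level quota attributes are normally set on a sub-directory (paths and values are examples):

    setfattr -n ceph.quota.max_bytes -v 1000000000000 /mnt/cephfs/somedir   # ~1 TB
    setfattr -n ceph.quota.max_files -v 100000 /mnt/cephfs/somedir
    getfattr -n ceph.quota.max_bytes /mnt/cephfs/somedir                    # verify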

[ceph-users] Re: Ceph Stretch Cluster - df pool size (Max Avail)

2022-07-28 Thread Gregory Farnum
https://tracker.ceph.com/issues/56650 There's a PR in progress to resolve this issue now. (Thanks, Prashant!) -Greg On Thu, Jul 28, 2022 at 7:52 AM Nicolas FONTAINE wrote: > > Hello, > > We have exactly the same problem. Did you find an answer or should we > open a bug report? > > Sincerely, > >

[ceph-users] Re: Upgrade from Octopus to Pacific cannot get monitor to join

2022-07-28 Thread Gregory Farnum
On Wed, Jul 27, 2022 at 4:54 PM wrote: > > Currently, all of the nodes are running in docker. The only way to upgrade is > to redeploy with docker (ceph orch daemon redeploy), which is essentially > making a new monitor. Am I missing something? Apparently. I don't have any experience with Docke

[ceph-users] Re: 17.2.2: all MGRs crashing in fresh cephadm install

2022-07-28 Thread Adam King
I've just taken another look at the orch ps output you posted and noticed that the REFRESHED column is reporting "62m ago". That makes it seem like the issue is that cephadm isn't actually running its normal operations (it should refresh daemons every 10 minutes by default). I guess maybe we should
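
Two commands commonly used to nudge a cephadm/mgr that has stopped refreshing (the second fails over to a standby mgr, so use it deliberately):

    ceph orch ps --refresh   # ask cephadm to refresh its daemon inventory
    ceph mgr fail            # fail over to a standby mgr if the module seems stuck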

[ceph-users] Cache configuration for each storage class

2022-07-28 Thread Alejandro T:
Hello, I have an octopus cluster with 3 OSD hosts. Each of them has 13 daemons belonging to different storage classes. I'd like to have multiple osd_memory_target settings for each class. In the documentation there's some mention of setting different bluestore cache sizes for HDDs and SSDs, bu
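
If the classes in question are CRUSH device classes, one way to do this is with centralized config masks per class; a sketch, with values as examples only:

    ceph config set osd/class:hdd osd_memory_target 4294967296   # 4 GiB
    ceph config set osd/class:ssd osd_memory_target 8589934592   # 8 GiB
    ceph config get osd.0 osd_memory_target                      # check what one OSD resolves to

This assumes the OSDs carry the intended device class (ceph osd crush class ls and ceph osd tree will show it).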

[ceph-users] Re: cannot set quota on ceph fs root

2022-07-28 Thread Jesper Lykkegaard Karlsen
Hi Frank, I guess there is always the possibility to set a quota at pool level with "target_max_objects" and "target_max_bytes". The cephfs quotas through attributes are only for sub-directories as far as I recall. Best, Jesper -- Jesper Lykkegaard Karlsen Scientific Com
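
As a side note: target_max_objects and target_max_bytes are cache-tiering thresholds; generic pool-level limits are usually set with the pool quota command. A sketch, with the pool name and values as placeholders:

    ceph osd pool set-quota cephfs_data max_bytes 1099511627776   # 1 TiB
    ceph osd pool set-quota cephfs_data max_objects 10000000
    ceph osd pool get-quota cephfs_data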

[ceph-users] Re: LibCephFS Python Mount Failure

2022-07-28 Thread 胡 玮文
Hi Adam, Have you tried ‘cephfs.LibCephFS(auth_id="monitoring")’? Weiwen Hu > On 2022-07-27, at 20:41, Adam Carrgilson (NBI) wrote: > > I’m still persevering with this, if anyone can assist, I would truly > appreciate it. > > As I said previously, I’ve been able to identify that the error is > spec
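
A minimal sketch of that suggestion in Python, assuming a CephX user client.monitoring whose keyring is referenced from the local ceph.conf (the auth id comes from the thread; paths and the rest are illustrative):

    import cephfs

    fs = cephfs.LibCephFS(conffile='/etc/ceph/ceph.conf', auth_id='monitoring')
    fs.mount()                 # mount the default filesystem at its root
    print(fs.statfs(b'/'))     # quick sanity check that the mount works
    fs.unmount()
    fs.shutdown()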

[ceph-users] RGW Multisite Sync Policy - Bucket Specific - Core Dump

2022-07-28 Thread Mark Selby
We use Ceph RBD/FS extensively and are starting down our RGW journey. We have 3 sites and want to replicate buckets from a single “primary” to multiple “backup” sites. Each site has a Ceph cluster and they are all configured as part of a Multisite setup. I am using the instructions at https://d

[ceph-users] RGW Multisite Sync Policy - Flow and Pipe Linkage

2022-07-28 Thread Mark Selby
We use Ceph RBD/FS extensively and are starting down our RGW journey. We have 3 sites and want to replicate buckets from a single "primary" to multiple "backup" sites. Each site has a Ceph cluster and they are all configured as part of a Multisite setup. I am using the examples at https://d
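
For context, the sync-policy objects referenced in these two threads are created roughly like this (a sketch following the multisite sync policy documentation; group, flow, pipe and zone names are placeholders):

    radosgw-admin sync group create --group-id=group1 --status=allowed
    radosgw-admin sync group flow create --group-id=group1 --flow-id=primary-to-backup \
        --flow-type=directional --source-zone=primary --dest-zone=backup1
    radosgw-admin sync group pipe create --group-id=group1 --pipe-id=pipe1 \
        --source-zones=primary --source-bucket='*' --dest-zones=backup1 --dest-bucket='*'
    radosgw-admin period update --commit

Bucket-specific policy uses the same commands with an extra --bucket=<name> argument and, per the documentation, does not require a period commit.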

[ceph-users] Re: RGW Multisite Sync Policy - Bucket Specific - Core Dump

2022-07-28 Thread Soumya Koduri
On 7/28/22 22:19, Mark Selby wrote: /usr/include/c++/8/optional:714: constexpr _Tp& std::_Optional_base<_Tp, <anonymous>, <anonymous> >::_M_get() [with _Tp = rgw_bucket; bool <anonymous> = false; bool <anonymous> = false]: Assertion 'this->_M_is_engaged()' failed. *** Caught signal (Aborted) ** in thread 7f0092e41380 thread_name:radosg

[ceph-users] Re: RGW Multisite Sync Policy - Flow and Pipe Linkage

2022-07-28 Thread Soumya Koduri
On 7/28/22 22:41, Mark Selby wrote: We use Ceph RBD/FS extensively and are starting down our RGW journey. We have 3 sites and want to replicate buckets from a single "primary" to multiple "backup" sites. Each site has a Ceph cluster and they are all configured as part of a Multisite setup.

[ceph-users] Re: colocation of MDS (count-per-host) not working in Quincy?

2022-07-28 Thread John Mulligan
On Thursday, July 28, 2022 3:45:30 PM EDT Vladimir Brik wrote: > count_per_host worked! > > I created a ticket. Good to hear. Thanks! > > > Regardless of whether this is a good idea or not the option is a > > generic one and should be handled gracefully. :-) > > Do you mean running mu
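
For reference, a sketch of a cephadm service spec that places more than one MDS daemon per host via count_per_host (the service id, hosts and count are placeholders), applied with ceph orch apply -i mds.yaml:

    service_type: mds
    service_id: cephfs
    placement:
      hosts:
        - host1
        - host2
      count_per_host: 2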

[ceph-users] Re: replacing OSD nodes

2022-07-28 Thread Jesper Lykkegaard Karlsen
Thank you for your suggestions, Josh, it is really appreciated. Pgremapper looks interesting and definitely something I will look into. I know the balancer will reach a well balanced PG landscape eventually, but I am not sure that it will prioritise backfill after “most available location” fi
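
For completeness, the balancer mentioned above is commonly run in upmap mode (an assumption here, not stated in the thread); a sketch:

    ceph balancer mode upmap
    ceph balancer on
    ceph balancer status   # check the plan and progress before trusting it to converge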

[ceph-users] Re: replacing OSD nodes

2022-07-28 Thread Jesper Lykkegaard Karlsen
Cool thanks a lot! I will definitely put it in my toolbox. Best, Jesper -- Jesper Lykkegaard Karlsen Scientific Computing Centre for Structural Biology Department of Molecular Biology and Genetics Aarhus University Universitetsbyen 81 8000 Aarhus C E-mail: je...@mbg.au

[ceph-users] mds optimization

2022-07-28 Thread David Yang
Dear all, I have a CephFS filesystem storage cluster, version Pacific, mounted on a Linux server using the kernel client. The mounted directory is then shared to Windows clients via the Samba service. Sometimes it is found that some workloads from Windows will have a lot of metadata
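
Two knobs that usually come up first when MDS metadata load is the concern; the values below are illustrative, not recommendations from the thread:

    ceph config set mds mds_cache_memory_limit 8589934592   # larger MDS cache (8 GiB)
    ceph fs set <fsname> max_mds 2                           # add a second active MDS

Adding active MDS daemons only helps when the directory tree can actually be partitioned between them, so it is worth checking ceph fs status and the MDS perf counters first.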