[ceph-users] Re: rbd deep copy in Luminous

2022-06-08 Thread Eugen Block
Hi, the deep copy feature was introduced in Mimic [1] and I doubt there will be backports, since Luminous has been EOL for quite some time now (as are Mimic and Nautilus, by the way). Eugen [1] https://ceph.io/en/news/blog/2018/v13-2-0-mimic-released/ Quoting Pardhiv Karri: Hi, We are current
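For readers on Mimic or newer, the deep copy is a single rbd subcommand; a minimal sketch, with pool and image names as placeholders:

  # copy an image, including its snapshots, to a new destination image
  rbd deep cp source-pool/source-image target-pool/target-image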

[ceph-users] Re: Many errors about PG deviate more than 30% on a new cluster deployed by cephadm

2022-06-08 Thread Eugen Block
It looks like your cluster doesn't have any "real" data in it yet, only metadata. I see something similar in an Octopus cluster where one device class is not used yet. I misread your output from the first email. The autoscaler will increase pg_num as soon as you push data into it, no need t
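A quick way to see what the autoscaler currently intends per pool (a sketch, assuming the pg_autoscaler module is enabled):

  # shows current pg_num and the autoscaler's target per pool
  ceph osd pool autoscale-status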

[ceph-users] Re: OSDs getting OOM-killed right after startup

2022-06-08 Thread Eugen Block
Hi, is there any reason you use custom configs? Most of the defaults work well. But you only give your OSDs 1 GB of memory, which is way too low except for an idle cluster without much data. I recommend removing the line osd_memory_target = 1048576 and letting ceph handle it. I didn't in
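As an illustration, assuming the value was set via the centralized config (rather than only in ceph.conf), clearing the override and checking the effective value could look like this:

  # remove the custom override so OSDs fall back to the default (4 GiB)
  ceph config rm osd osd_memory_target
  # show the value OSDs will now pick up from the config database
  ceph config get osd osd_memory_target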

[ceph-users] Re: OSDs getting OOM-killed right after startup

2022-06-08 Thread Eugen Block
It's even worse: you only give them 1 MB, not 1 GB. Quoting Eugen Block: Hi, is there any reason you use custom configs? Most of the defaults work well. But you only give your OSDs 1 GB of memory, which is way too low except for an idle cluster without much data. I recommend removing the

[ceph-users] Re: Many errors about PG deviate more than 30% on a new cluster deployed by cephadm

2022-06-08 Thread Christophe BAILLON
Thanks, I will start testing and adding some data this afternoon to check whether I get the same errors. Regards - Original Message - > From: "Eugen Block" > To: "ceph-users" > Sent: Wednesday, June 8, 2022 10:49:44 > Subject: [ceph-users] Re: Many errors about PG deviate more than 30% on a new >

[ceph-users] Re: Ceph Repo Branch Rename - May 24

2022-06-08 Thread David Galloway
On 6/1/22 14:55, Rishabh Dave wrote: On Wed, 1 Jun 2022 at 23:52, David Galloway wrote: The master branch has been deleted from all recently active repos except ceph.git. I'm slowly retargeting existing PRs from master to main. The tool I used to rename the branches didn't take care of th

[ceph-users] Crashing MDS

2022-06-08 Thread Dave Schulz
Hi Everyone, I have an MDS server that's crashing moments after it starts. The filesystem is set to max_mds=5 and mds.[1-4] are all up and active, but mds.0 keeps crashing. All I can see is the following in the /var/log/ceph/ceph-mds. logfile. Any thoughts? -2> 2022-06-08 10:02:59.
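For anyone debugging something similar, two generic ways to get more context around such a crash (a sketch, assuming a release that ships the crash module and centralized config; the crash ID is a placeholder):

  # list and inspect crashes collected by the crash module
  ceph crash ls
  ceph crash info <crash-id>
  # temporarily raise MDS debug logging before restarting the daemon
  ceph config set mds debug_mds 20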

[ceph-users] Re: Crashing MDS

2022-06-08 Thread Can Özyurt
Hi Dave, Just to make sure, have you checked if the host has free inodes available? On Wed, 8 Jun 2022 at 19:22, Dave Schulz wrote: > Hi Everyone, > > I have an MDS server that's crashing moments after it starts. The > filesystem is set to max_mds=5 and mds.[1-4] are all up and active but > mds.
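Checking that is a one-liner; a sketch for some typical mounts (the paths are examples):

  # the IFree/IUse% columns show whether a filesystem has run out of inodes
  df -i / /var /var/lib/ceph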

[ceph-users] Re: Crashing MDS

2022-06-08 Thread Dave Schulz
I don't see any filesystems with exhausted inodes on the boot disks, if that's what you mean. We're using BlueStore, so I don't think this applies to the OSDs. Thanks for the suggestion. -Dave On 2022-06-08 10:50 a.m., Can Özyurt wrote: Hi Dave, Just to make sure, have you checked if the host has

[ceph-users] Luminous to Pacific Upgrade with Filestore OSDs

2022-06-08 Thread Pardhiv Karri
Hi, We are planning to upgrade our current Ceph from Luminous (12.2.11) to Nautilus and then to Pacific. We are using Filestore for OSDs now. Is it okay to upgrade with Filestore OSDs? We plan to migrate from Filestore to BlueStore at a later date, as the clusters are pretty large (PBs in size) and u
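Independent of the upgrade question, it is easy to confirm up front how many OSDs still run Filestore (a sketch; OSD id 0 is just an example):

  # summarize the object store backend across all OSDs
  ceph osd count-metadata osd_objectstore
  # or check a single OSD
  ceph osd metadata 0 | grep osd_objectstore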

[ceph-users] Troubleshooting cephadm - not deploying any daemons

2022-06-08 Thread Zach Heise (SSCC)
Our 16.2.7 cluster was deployed using cephadm from the start, but now it seems like deploying daemons with it is broken. Running 'ceph orch apply mgr --placement=2' causes '6/8/22 2:34:18 PM[INF]Saving service mgr spec with placement count:2' to appear in the logs, but a 2nd mgr does not get cr
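When a spec is saved but nothing gets deployed, a few orchestrator views usually narrow it down (a generic sketch, not specific to this cluster):

  # what the orchestrator thinks it should run vs. what is actually running
  ceph orch ls
  ceph orch ps --daemon-type mgr
  # recent cephadm module activity and any health warnings
  ceph log last cephadm
  ceph health detail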

[ceph-users] Re: Troubleshooting cephadm - not deploying any daemons

2022-06-08 Thread Dhairya Parmar
Hi Zach, Try running `ceph orch apply mgr 2` or `ceph orch apply mgr --placement=" "`. Refer to this doc for more information; hope it helps. Regards, Dhairya On Thu, Jun 9, 2022 at 1:59 AM Zach Heise (SSCC) wrote:
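For completeness, the two placement forms being suggested look roughly like this (the hostnames are hypothetical examples, not taken from the thread):

  # by count: let the orchestrator pick the hosts
  ceph orch apply mgr 2
  # by explicit host list
  ceph orch apply mgr --placement="host1 host2"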

[ceph-users] Re: Troubleshooting cephadm - not deploying any daemons

2022-06-08 Thread Zach Heise (SSCC)
Yes, sorry - I tried both 'ceph orch apply mgr "ceph01,ceph03"' and 'ceph orch apply mds "ceph04,ceph05"' before writing this initial email - once again, the same logged message: "6/8/22 2:25:12 PM[INF]Saving service mgr spec with placement ceph03;ceph01", but there are no messages logged about a

[ceph-users] Re: Troubleshooting cephadm - not deploying any daemons

2022-06-08 Thread Eugen Block
Have you checked /var/log/ceph/cephadm.log on the target nodes? Quoting "Zach Heise (SSCC)": Yes, sorry - I tried both 'ceph orch apply mgr "ceph01,ceph03"' and 'ceph orch apply mds "ceph04,ceph05"' before writing this initial email - once again, the same logged message: "6/8/22 2:25:12

[ceph-users] radosgw multisite sync - how to fix data behind shards?

2022-06-08 Thread Wyll Ingersoll
Seeking help from a radosgw expert... I have a 3-zone multisite configuration (all running Pacific 16.2.9) with 1 bucket per zone and a couple of small objects in each bucket for testing purposes. One of the secondary zones cannot seem to get into sync with the master; sync status reports
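Not an answer, but the usual starting points when shards stay behind (a sketch; the source zone name and shard ID are placeholders):

  # overall metadata/data sync state as seen from the affected zone
  radosgw-admin sync status
  # per-shard detail for data sync from a given source zone
  radosgw-admin data sync status --source-zone=<master-zone> --shard-id=0
  # any recorded sync errors
  radosgw-admin sync error list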

[ceph-users] Re: Troubleshooting cephadm - not deploying any daemons

2022-06-08 Thread Zach Heise (SSCC)
Yes - running tail on /var/log/ceph/cephadm.log on ceph01 and then running 'ceph orch apply mgr "ceph01,ceph03"' (my active manager is on ceph03 and I don't want to clobber it while troubleshooting), the log output in ceph01's cephadm.log is merely the following lines, over and over again, 6 times
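One way to see what the cephadm mgr module itself is doing, rather than just the per-host cephadm.log (a sketch based on the cephadm troubleshooting docs), is to raise its cluster-log level and watch it live:

  # log cephadm module activity to the cluster log at debug level
  ceph config set mgr mgr/cephadm/log_to_cluster_level debug
  # follow it while re-applying the spec
  ceph -W cephadm --watch-debug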