[ceph-users] Re: Radosgw multisite replication issues

2023-04-20 Thread Eugen Block
Hi, which Ceph version is this? Have you also verified that any config and keyring files in /etc/ceph (in case your cluster is not cephadm-managed yet) are in the desired state? The permission denied error suggests that one site is not allowed to sync. Quote from "Tarrago, Eli (RIS-BCT)": Good

[ceph-users] Re: deploying Ceph using FQDN for MON / MDS Services

2023-04-20 Thread Konstantin Shalygin
Hi, just add a POSIX domain, fstype ceph. This is the equivalent of mount -t ceph on the oVirt side. k Sent from my iPhone > On 21 Apr 2023, at 05:24, Lokendra Rathour wrote: > > Hi Robert / Team, > Further we are now trying to integrate the ceph as storage domain in OVirt > 4.4 > > > We want to creat

[ceph-users] How to replace an HDD in a OSD with shared SSD for DB/WAL

2023-04-20 Thread Tao LIU
Hi, I built a Ceph cluster with cephadm. Every Ceph node has 4 OSDs. These 4 OSDs were built with 4 HDDs (block) and 1 SSD (DB). At present, one HDD is broken, and I am trying to replace the HDD and rebuild the OSD with the new HDD and the free space on the SSD. I did the following: #ceph osd stop osd
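A common sequence for this kind of replacement is sketched below. This is an assumption-laden outline, not the poster's exact procedure: the OSD id (12), device path (/dev/sdd), and the VG/LV names for the shared DB SSD are placeholders, and the exact steps depend on how cephadm manages the drive group.

```shell
# Remove the failed OSD but keep its ID reserved so the new disk can reuse it
# (placeholder OSD id 12)
ceph orch osd rm 12 --replace --zap

# After physically swapping the HDD, confirm cephadm sees the new device
ceph orch device ls

# If no service spec recreates the OSD automatically, create it manually,
# reusing the free LV on the shared DB SSD (VG/LV names are examples)
ceph-volume lvm create --bluestore --data /dev/sdd \
    --block.db ceph-db-vg/db-lv-12 --osd-id 12
```

The `--replace` flag marks the OSD "destroyed" rather than purging it, which is what lets the replacement keep the same OSD id and DB partition layout.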

[ceph-users] Re: deploying Ceph using FQDN for MON / MDS Services

2023-04-20 Thread Lokendra Rathour
Hi Robert / Team, Further, we are now trying to integrate Ceph as a storage domain in oVirt 4.4. We want to create a storage domain of POSIX-compliant type for mounting a ceph-based infrastructure in oVirt. As stated, we are able to manually mount the ceph-mon nodes using the following command on
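The manual mount the thread refers to can be sketched as follows; the monitor FQDNs, filesystem name, and secretfile path are examples, not values from the original message. oVirt's POSIX-compliant storage domain issues an equivalent mount -t ceph under the hood.

```shell
# Mount CephFS using monitor FQDNs (hostnames and paths are placeholders)
mount -t ceph ceph-mon1.example.com:6789,ceph-mon2.example.com:6789:/ \
    /mnt/cephfs \
    -o name=admin,secretfile=/etc/ceph/admin.secret,fs=cephfs
```

On recent kernels the filesystem can be selected with `fs=`; older kernels use the `mds_namespace=` option instead.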

[ceph-users] Re: Can I delete rgw log entries?

2023-04-20 Thread Richard Bade
Ok, cool. Thanks for clarifying that Daniel and Casey. I'll clean up my sync logs now but leave the rest alone. Rich On Fri, 21 Apr 2023, 05:46 Daniel Gryniewicz, wrote: > On 4/20/23 10:38, Casey Bodley wrote: > > On Sun, Apr 16, 2023 at 11:47 PM Richard Bade wrote: > >> > >> Hi Everyone, > >>

[ceph-users] Radosgw multisite replication issues

2023-04-20 Thread Tarrago, Eli (RIS-BCT)
Good Afternoon, I am experiencing an issue where east-1 is no longer able to replicate from west-1, however, after a realm pull, west-1 is now able to replicate from east-1. In other words: West <- Can Replicate <- East West -> Cannot Replicate -> East After confirming the access and secret ke
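The realm pull mentioned above is typically done as sketched here; the realm name, endpoint URL, and system-user keys are placeholders, not the poster's actual values.

```shell
# On the zone that fell out of sync, re-pull the realm and current period
# from the working endpoint (all values are examples)
radosgw-admin realm pull --rgw-realm=myrealm \
    --url=http://east-1.example.com:8080 \
    --access-key=SYSTEM_ACCESS_KEY --secret=SYSTEM_SECRET_KEY
radosgw-admin period pull \
    --url=http://east-1.example.com:8080 \
    --access-key=SYSTEM_ACCESS_KEY --secret=SYSTEM_SECRET_KEY

# Then check replication state from each side
radosgw-admin sync status
```

A "permission denied" during sync usually points at the system user's access/secret keys not matching between zones, which is worth re-checking after the pull.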

[ceph-users] Re: Be careful with primary-temp to balance primaries ...

2023-04-20 Thread Laura Flores
There was a lot of interest expressed at Cephalocon in bringing the read balancer code and new commands to Quincy and Pacific. Until I evaluate the possibility of backporting the feature, I would recommend using the read balancer on Reef only, as this is where the feature has been tested. The main

[ceph-users] Re: Can I delete rgw log entries?

2023-04-20 Thread Daniel Gryniewicz
On 4/20/23 10:38, Casey Bodley wrote: On Sun, Apr 16, 2023 at 11:47 PM Richard Bade wrote: Hi Everyone, I've been having trouble finding an answer to this question. Basically I'm wanting to know if stuff in the .log pool is actively used for anything or if it's just logs that can be deleted. I

[ceph-users] Re: Can I delete rgw log entries?

2023-04-20 Thread Casey Bodley
On Sun, Apr 16, 2023 at 11:47 PM Richard Bade wrote: > > Hi Everyone, > I've been having trouble finding an answer to this question. Basically > I'm wanting to know if stuff in the .log pool is actively used for > anything or if it's just logs that can be deleted. > In particular I was wondering a
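Trimming the multisite sync logs discussed in this thread can be sketched as below. This is a hedged outline: the period id, shard id, and markers are placeholders, and you should only trim entries that sync no longer needs.

```shell
# Trim the metadata log for a retired period up to a marker
# (<period-id>, shard id, and <marker> are placeholders)
radosgw-admin mdlog trim --period=<period-id> --shard-id=0 --end-marker=<marker>

# Trim the data log up to a marker, and clear resolved sync errors
radosgw-admin datalog trim --end-marker=<marker>
radosgw-admin sync error trim
```

Objects in the .log pool that belong to active sync state should be left alone; trimming is for entries all peer zones have already consumed.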

[ceph-users] Be careful with primary-temp to balance primaries ...

2023-04-20 Thread Stefan Kooman
Hi, A word of caution for Ceph operators out there. Be careful with "ceph osd primary-temp" command. TL;DR: with primary_temp active, a CRUSH change might CRASH your OSDs ... and they won't come back online after a restart (in almost all cases). The bug is described in this tracker [1], and

[ceph-users] Re: Some hint for a DELL PowerEdge T440/PERC H750 Controller...

2023-04-20 Thread Marco Gaiarin
Mandi! Anthony D'Atri wrote: > Actually there was a firmware bug around that a while back. The HBA and > storcli claimed to not touch drive cache, but actually were enabling it and > lying. Some pointer to the issue? I doubt it hit me but... Thanks. -- ma l'impresa e

[ceph-users] User + Dev Monthly Meetup cancelled

2023-04-20 Thread Laura Flores
Hi users, The User + Dev monthly meetup is cancelled today since some folks are still at or returning from Cephalocon. See you next month! - Laura Flores -- Laura Flores She/Her/Hers Software Engineer, Ceph Storage Chicago, IL lflo...@ibm.com | lflo...@redhat.com M: +170

[ceph-users] Re: quincy user metadata constantly changing versions on multisite slave with radosgw roles

2023-04-20 Thread Casey Bodley
On Wed, Apr 19, 2023 at 7:55 PM Christopher Durham wrote: > > Hi, > > I am using 17.2.6 on rocky linux for both the master and the slave site > I noticed that: > radosgw-admin sync status > often shows that the metadata sync is behind a minute or two on the slave. > This didn't make sense, as the

[ceph-users] 17.2.6 dashboard: unable to get RGW dashboard working

2023-04-20 Thread Michel Jouvin
Hi, I just upgraded to 17.2.6, but in fact I had the same problem in 16.2.10. I'm trying to configure the Ceph dashboard to monitor the RGWs (object gateways used as S3 gateways). Our cluster has 2 RGW realms (eros, fink) with 1 zonegroup per realm (p2io-eros and p2io-fink respectively) and 1 zone p

[ceph-users] Re: cephadm grafana per host certificate

2023-04-20 Thread Eugen Block
Hi, thanks for the suggestion, I'm aware of the wildcard certificate option (which brings its own issues for other services). But since the ceph config seems to support per-host certificates, I would like to get this running. Thanks, Eugen Quote from Reto Gysi: Hi Eugen, I've

[ceph-users] Re: cephadm grafana per host certificate

2023-04-20 Thread Reto Gysi
Hi Eugen, I've created a certificate with subject alternative names, so the certificate is valid on each node of the cluster. Cheers Reto On Thu, 20 Apr 2023 at 11:42, Eugen Block wrote: > Hi *, > > I've set up grafana, prometheus and node-exporter on an adopted > cl

[ceph-users] cephadm grafana per host certificate

2023-04-20 Thread Eugen Block
Hi *, I've set up grafana, prometheus and node-exporter on an adopted cluster (currently running 16.2.10) and was trying to enable ssl for grafana. As stated in the docs [1] there's a way to configure individual certs and keys per host: ceph config-key set mgr/cephadm/{hostname}/grafana_k
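The per-host config keys from the docs can be set as sketched here; the hostname (host1) and the certificate/key file paths are examples.

```shell
# Store the per-host Grafana TLS key and certificate in the config-key store
# (host1 and file paths are placeholders)
ceph config-key set mgr/cephadm/host1/grafana_key -i /etc/ssl/private/host1.key
ceph config-key set mgr/cephadm/host1/grafana_crt -i /etc/ssl/certs/host1.crt

# Redeploy/reconfigure grafana so cephadm picks up the new certificate
ceph orch reconfig grafana
```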

[ceph-users] Re: [ceph 17.2.6] unable to create rbd snapshots for images with erasure code data-pool

2023-04-20 Thread Reto Gysi
Ok, thanks Venky! On Thu, 20 Apr 2023 at 06:12, Venky Shankar < vshan...@redhat.com> wrote: > Hi Reto, > > On Wed, Apr 19, 2023 at 9:34 PM Ilya Dryomov wrote: > > > > On Wed, Apr 19, 2023 at 5:57 PM Reto Gysi wrote: > > > > > > > > > Hi, > > > > > > On Wed, 19 Apr 2023 at 11:02