On 26/06/2024 08:48, Torkil Svensgaard wrote:
Hi
We have a bunch of HDD OSD hosts with DB/WAL on PCIe NVMe, either 2 x
3.2TB or 1 x 6.4TB. We used to have 4 SSDs per node for journals before
BlueStore, and those have been repurposed for an SSD pool (wear level is
fine).
We've been using the
Hi,
we've just updated our test cluster via
ceph orch upgrade start --image quay.io/ceph/ceph:v18.2.0
During the update of the RGW service, all of the daemons went down at the
same time. If I did that on our production system, it would cause a
small but noticeable outage.
Is there a way to
Hi,
On 6/26/24 11:49, Boris wrote:
Is there a way to only update 1 daemon at a time?
You can use the feature "staggered upgrade":
https://docs.ceph.com/en/reef/cephadm/upgrade/#staggered-upgrade
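For example, something along these lines should upgrade only the RGW
daemons, one at a time (a sketch; keep your own image tag, and note that
cephadm wants the mgr daemons on the new version before other daemon
types):

ceph orch upgrade start --image quay.io/ceph/ceph:v18.2.0 \
    --daemon-types rgw --limit 1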
Regards
--
Robert Sander
Heinlein Consulting GmbH
Schwedter Str. 8/9b, 10119 Berlin
https://www
Ah nice.
Thanks a lot :)
On Wed, 26 June 2024 at 11:56, Robert Sander <
r.san...@heinlein-support.de> wrote:
> Hi,
>
> On 6/26/24 11:49, Boris wrote:
>
> > Is there a way to only update 1 daemon at a time?
>
> You can use the feature "staggered upgrade":
>
> https://docs.ceph.com/en/reef/
On 6/25/24 3:21 PM, Matthew Vernon wrote:
On 24/06/2024 21:18, Matthew Vernon wrote:
2024-06-24T17:33:26.880065+00:00 moss-be2001 ceph-mgr[129346]: [rgw
ERROR root] Non-zero return from ['radosgw-admin', '-k',
'/var/lib/ceph/mgr/ceph-moss-be2001.qvwcaq/keyring', '-n',
'mgr.moss-be2001.qvwcaq'
Interesting. Given this is coming from a radosgw-admin call made from
within the rgw mgr module, I wonder if a radosgw-admin log file is ending
up in the active mgr container when this happens.
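A quick way to check would be something like this (a sketch, assuming the
active mgr is mgr.moss-be2001.qvwcaq as in the log above; the paths are
just the usual container locations):

cephadm enter --name mgr.moss-be2001.qvwcaq
ls -l /var/log/ceph/ /var/lib/ceph/mgr/ceph-moss-be2001.qvwcaq/

and then look for stray radosgw-admin log files left behind by those calls.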
On Wed, Jun 26, 2024 at 9:04 AM Daniel Gryniewicz wrote:
> On 6/25/24 3:21 PM, Matthew Vernon w
Hi folks,
We have a number of Ceph clusters organized into one realm and three
zonegroups. All the clusters are running Ceph 17 and were deployed with cephadm.
I am trying to move the metadata master from one zonegroup (us-east-1) to
zone mn1 in zonegroup us-central-1, by following the steps in the
document
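For reference, the promotion itself usually boils down to something like
the following, run against the zone that should become the new metadata
master (a sketch based on the multisite documentation; zone/zonegroup
names taken from your description, so check the exact flags against the
docs for your release):

radosgw-admin zonegroup modify --rgw-zonegroup=us-central-1 --master
radosgw-admin zone modify --rgw-zone=mn1 --master
radosgw-admin period update --commit

followed by restarting the RGW daemons so they pick up the new period.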
Hi Everyone,
I had an issue last night when I was bringing online some OSDs that I
was rebuilding. When the OSDs were created and came online, 15 PGs got stuck
in activating. The first OSD (osd.112) seemed to come online OK, but
the second one (osd.113) triggered the issue. All the PGs in
activating inclu
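(For anyone hitting something similar, the usual first diagnostic steps
are roughly the following; a generic sketch, not specific to the truncated
report above:

ceph health detail
ceph pg dump_stuck inactive
ceph pg <pgid> query

The pg query output shows which OSDs the activating PGs are blocked on.)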
Hello everyone.
I have a cluster with 8321 PGs and recently I started to get "pgs not
deep-scrubbed in time" warnings.
The reason is that I reduced the max scrub setting to lessen the impact of scrubbing on IO.
Here is my current scrub configuration:
~]# ceph tell osd.1 config show|grep scrub
"mds_max_scrub_ops_in_progress":
Can anybody comment on my questions below? Thanks so much in advance.
On 26 June 2024 08:08:39 CEST, Dietmar Rieder wrote:
>...sending this also to the list and to Xiubo (they were accidentally
>removed from the recipients)...
>
>On 6/25/24 21:28, Dietmar Rieder wrote:
>> Hi Patrick, Xiubo and List,
>>