Good morning and happy holidays everyone!
Guys, what would be the best strategy to increase the number of PGs in a
POOL that is already in production?
___
Can anyone help me on this? It can't be that hard to do.
-- Michael
-Original Message-
From: Michael Worsham
Sent: Thursday, February 8, 2024 3:03 PM
To: ceph-users@ceph.io
Subject: [ceph-users] What is the proper way to setup Rados Gateway (RGW) under
Ceph?
I have setup a 'reef' Ceph
On Mon, 12 Feb 2024 at 14:12, Murilo Morais wrote:
>
> Good morning and happy holidays everyone!
>
> Guys, what would be the best strategy to increase the number of PGs in a
> POOL that is already in production?
"ceph osd pool set pg_num " and let the pool get pgp_nums increased slowly by
itself
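For example, a minimal sketch; the pool name "mypool" and the target of 256
PGs below are only placeholders:

  ceph osd pool get mypool pg_num        # current pg_num
  ceph osd pool set mypool pg_num 256    # raise it to the new target
  ceph osd pool get mypool pgp_num       # watch pgp_num catch up on its own
  ceph -s                                # progress of the resulting backfill

Recent releases raise pgp_num in steps so that only a limited fraction of
objects is misplaced at any one time.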
Hi everyone.
I couldn't find documentation about how to install an S3/Swift API (as I
understand it, that's RadosGW) on quincy.
I can find some documentation on octopus
(https://docs.ceph.com/en/octopus/install/ceph-deploy/install-ceph-gateway/)
Very strangely, when I go to
https://docs.ceph.com/en
Hi,
The recommended methods of deploying rgw are IMHO overly complicated. You can
also get the service up manually with something simple like:
[root@mon1 bin]# cat /etc/ceph/ceph.conf
[global]
fsid = 12345678-XXXx ...
mon initial members = mon1,mon3
mon host = ip-mon1,ip-mon2
auth cluster required = none
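For context, a manual setup along these lines typically also needs an rgw
client section and the daemon started by hand, roughly like this; the section
name, port and keyring path are only illustrative, not taken from the mail:

  [client.rgw.mon1]
  host = mon1
  keyring = /etc/ceph/ceph.client.rgw.mon1.keyring
  rgw frontends = beast port=7480
  log file = /var/log/ceph/client.rgw.mon1.log

  # create the key and start radosgw in the foreground
  ceph auth get-or-create client.rgw.mon1 mon 'allow rw' osd 'allow rwx' \
      -o /etc/ceph/ceph.client.rgw.mon1.keyring
  radosgw -f --cluster ceph --name client.rgw.mon1 --setuser ceph --setgroup ceph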
On 12.02.2024 18:15, Albert Shih wrote:
I couldn't find documentation about how to install an S3/Swift API (as I
understand it, that's RadosGW) on quincy.
It depends on how you have installed Ceph.
If you are using Cephadm the docs are here
https://docs.ceph.com/en/reef/cephadm/services/rgw/
I c
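For the cephadm case, the linked docs boil down to something like the
following; the service id "myrgw" and the placement hosts are only examples:

  ceph orch apply rgw myrgw --placement="2 host1 host2"
  ceph orch ls rgw    # confirm the service spec was created
  ceph orch ps        # confirm the rgw daemons are running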
On 12/02/2024 at 18:38:08+0100, Kai Stian Olstad wrote:
> On 12.02.2024 18:15, Albert Shih wrote:
> > I couldn't find documentation about how to install an S3/Swift API (as I
> > understand it, that's RadosGW) on quincy.
>
> It depends on how you have installed Ceph.
> If you are using Cephadm the d
So, just so I am clear – in addition to the steps below, will I also need to
install NGINX or HAProxy on the server to act as the front end?
-- M
From: Rok Jaklič
Sent: Monday, February 12, 2024 12:30 PM
To: Michael Worsham
Cc: ceph-users@ceph.io
Subject: Re: [ceph-users] Re: What is the
You don't have to. You can serve rgw on the front end directly.
You:
1. set the certificate with something like: rgw_frontends = " ...
ssl_certificate=/etc/pki/ceph/cert.pem" (a fuller example follows below). We
are using nginx on the front end to act as a proxy and for some other stuff.
2. delete the line with rgw_crypt_require_ssl
... you shou
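As a concrete illustration of point 1, the full frontends line can look
roughly like this; the section name, port and certificate path are only
placeholders, not taken from the thread:

  [client.rgw.mon1]
  rgw_frontends = "beast ssl_port=443 ssl_certificate=/etc/pki/ceph/cert.pem"

If rgw_crypt_require_ssl was set through the config database rather than
ceph.conf, it can be dropped with "ceph config rm client.rgw.mon1
rgw_crypt_require_ssl".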
Janne, thanks for the tip.
Does the "target_max_misplaced_ratio" parameter influence the process? I
would like to make the increase with as little overhead as possible.
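For instance, would lowering it along these lines reduce the impact? (0.01 is
just an example value; as far as I understand, the default is 0.05.)

  ceph config get mgr target_max_misplaced_ratio
  ceph config set mgr target_max_misplaced_ratio 0.01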
On Mon, Feb 12, 2024 at 11:39, Janne Johansson wrote:
> On Mon, 12 Feb 2024 at 14:12, Murilo Morais wrote:
> >
> > G
Thank you for your idea.
I realize that the number of SSDs is important, as well as the capacity of
the SSDs used for block.wal.
> Naturally the best solution is to not use HDDs at all ;)
You are right! :)
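For reference, putting block.wal (and block.db) on flash when building an OSD
by hand looks roughly like this; the device paths are only placeholders:

  ceph-volume lvm create --data /dev/sdb \
      --block.db /dev/nvme0n1p1 --block.wal /dev/nvme0n1p2

With cephadm the same idea is expressed with db_devices/wal_devices in an OSD
service spec.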
___
Hello,
I have a Ceph cluster created by the Cephadm orchestrator. It consists of 3
Dell PowerEdge R730XD servers. The hard drives used as OSDs in this cluster
were configured as RAID 0. The configuration summary is as follows:
ceph-node1 (mgr, mon)
Public network: 172.16.7.11/22
Cluster network:
Thanks a lot! Yes it turns out to be the same issue that you pointed to.
Switching to wpq solved the issue. We are running 18.2.0.
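For the archives, the switch amounts to something like the following; the
OSDs need a restart afterwards for it to take effect:

  ceph config set osd osd_op_queue wpq
  # then restart each OSD daemon, e.g.
  ceph orch daemon restart osd.<id>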
Leon
On Wed, Feb 7, 2024 at 12:48 PM Kai Stian Olstad
wrote:
> You don't say anything about the Ceph version you are running.
> I had a similar issue with 17.2.7,
Thanks, that takes a lot of the stress off.
0. I rebooted 30 OSDs on one machine and the queue was not reduced, but a
large amount of storage space was released.
1. Why did rebooting the OSDs release so much space?
Here are the Ceph details:
ceph version 16.2.7 (dd0603118f56ab514f133c8d2e3adfc9839425
You probably have the H330 HBA, rebadged LSI. You can set the “mode” or
“personality” using storcli / perccli. You might need to remove the VDs from
them too.
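Roughly, with perccli (or storcli) that is something like the following;
controller and VD numbers are placeholders, the exact options depend on the
controller and firmware, and deleting VDs destroys the data on them:

  perccli /c0 show           # identify controller, virtual drives and disks
  perccli /c0/vall del       # remove the RAID 0 virtual drives
  perccli /c0 set jbod=on    # expose the disks directly, non-RAID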
> On Feb 12, 2024, at 7:53 PM, sa...@dcl-online.com wrote:
>
> Hello,
>
> I have a Ceph cluster created by orchestrator Cephadm. I