Hi,
We're actually on a very similar setup to yours with 18.04 and Nautilus
and thinking about the 20.04 upgrade process.
As for your RGW, I would not consider the downgrade. I believe
the upgrade order is about avoiding issues with a newer RGW connecting to
older mons and OSDs. Since you're already in
Hi,
I'm on Ceph 16.2.10, and I'm trying to rotate the ceph lockbox keyring. I
used ceph-authtool to create a new keyring, and used `ceph auth import -i
` to update the lockbox keyring. I also updated the keyring
file, which is /var/lib/ceph/osd/ceph-/lockbox.keyring. I tried
`systemctl restart cep
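For context, the rough flow I followed; the key name, paths and OSD id are
placeholders, so treat this as a sketch rather than the exact commands:
ceph-authtool --create-keyring /tmp/lockbox.keyring --gen-key \
    -n client.osd-lockbox.<osd-uuid>          # placeholder key name
ceph auth import -i /tmp/lockbox.keyring      # update the cluster-side key
# then copy the new key into /var/lib/ceph/osd/ceph-<id>/lockbox.keyring
systemctl restart ceph-osd@<id>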
Hi,
I’m seeing the same thing …
With debug logging enabled I see this:
2023-02-07T00:35:51.853+0100 7fdab9930e00 10 snap_mapper.convert_legacy
converted 1410 keys
2023-02-07T00:35:51.853+0100 7fdab9930e00 10 snap_mapper.convert_legacy
converted 1440 keys
2023-02-07T00:35:51.853+0100 7fdab9930e
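In case anyone wants to reproduce: I got those lines by raising the OSD
debug level before restarting (assuming systemd-managed OSDs, id is a placeholder):
ceph config set osd debug_osd 10    # the convert_legacy messages appear at level 10
systemctl restart ceph-osd@<id>
journalctl -u ceph-osd@<id> | grep snap_mapper.convert_legacy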
Hi Matthias,
I've done a bit of testing on my Nautilus (14.2.22) test cluster and I
can confirm that what you're seeing, with 'rotational' going back to '0'
in the osd metadata, also happens for me on Nautilus after rebooting the host.
> Alternatively, can I manually set and persist the relevant bluestore
>
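For reference, this is how I checked it on my test cluster (OSD id is a
placeholder); note that the CRUSH device class, which is what placement
actually uses, can be pinned independently of the metadata flag:
ceph osd metadata <id> | grep -i rotational    # shows e.g. bluestore_bdev_rotational
ceph osd crush rm-device-class osd.<id>        # needed before changing an existing class
ceph osd crush set-device-class hdd osd.<id>   # pins the class regardless of the flag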
On 2023-02-06 14:11, Eugen Block wrote:
What does the active mgr log when you try to access the dashboard?
Please paste your rgw config settings as well.
Ah, sorry to hijack, but I also can't access the Object Storage menus in the
Dashboard since upgrading from 16.2.10 to 16.2.11.
Here are th
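In case it helps with the debugging, this is roughly what I ran to capture
the errors (the mgr host name is a placeholder):
ceph mgr stat                  # find the active mgr
ceph dashboard debug enable    # more verbose dashboard errors in the browser
tail -f /var/log/ceph/ceph-mgr.<active-mgr>.log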
Hello!
It seems that ceph-volume from Ceph Pacific 16.2.11 has a problem with
identical LV names in different VGs.
I use ceph-ansible (stable-6), with a pre-existing LVM configuration.
Here's the error:
TASK [ceph-osd : include_tasks scenarios/lvm.yml]
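For anyone hitting the same thing, listing LVs with their VGs makes the
clash visible (stock LVM tooling, nothing ceph-specific):
lvs -o lv_name,vg_name,lv_path
# identical lv_name values in different VGs are what seem to confuse ceph-volume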
Hi everyone,
Network issues are resolved.
The telemetry endpoints are available again.
Thanks for your patience and contribution!
Yaarit
On Thu, Feb 2, 2023 at 6:11 PM Yaarit Hatuka wrote:
> Hi everyone,
>
> Our telemetry endpoints are temporarily unavailable due to network issues.
> We apologize
Thank you for your email and for providing the solution to check for shadow
and multipart objects in Ceph. I have checked the objects in my Ceph
cluster and found the following results:
The command rados -p ls | grep --text -vE "shadow|multipart" | wc -l
returns about 80 million objects.
The command
Hello Eugen,
The output shows that all daemons are configured. I would also like to know
whether it is possible to remove those RGWs and redeploy them, to see if
anything changes.
root@ceph-mon1:~# ceph service dump
{
"epoch": 1740,
"modified": "2023-02-06T15:21:42.235595+0200",
Hello Eugen
Below are the rgw configs and logs captured while accessing the dashboard:
root@ceph-mon1:/var/log/ceph# tail -f /var/log/ceph/ceph-mgr.ceph-mon1.log
2023-02-06T15:25:30.037+0200 7f68b15cd700 0 [prometheus INFO
cherrypy.access.140087714875184] :::10.10.110.134 - -
[06/Feb/2023:15:25:30]
Just a quick edit: what does the active mgr log when you try to access
the rgw page in the dashboard?
With 'ceph service dump' you can see the rgw daemons that are
registered to the mgr. If the daemons are not shown in the dashboard
you'll have to check the active mgr logs for errors or hints
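One more thing worth trying if the daemons are registered but the page
still fails: re-setting the dashboard's RGW credentials. In Pacific that
should still be possible like this (the key files are placeholders):
ceph dashboard set-rgw-api-access-key -i /tmp/access_key
ceph dashboard set-rgw-api-secret-key -i /tmp/secret_key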
What does the active mgr log when you try to access the dashboard?
Please paste your rgw config settings as well.
Quoting Michel Niyoyita:
Hello Robert
below is the output of ceph versions command
root@ceph-mon1:~# ceph versions
{
"mon": {
"ceph version 16.2.11 (3cf40e2dca667
Hello Robert
below is the output of ceph versions command
root@ceph-mon1:~# ceph versions
{
"mon": {
"ceph version 16.2.11 (3cf40e2dca667f68c6ce3ff5cd94f01e711af894)
pacific (stable)": 3
},
"mgr": {
"ceph version 16.2.11 (3cf40e2dca667f68c6ce3ff5cd94f01e711af894)
pacif
On 06.02.23 13:48, Michel Niyoyita wrote:
root@ceph-mon1:~# ceph -v
ceph version 16.2.11 (3cf40e2dca667f68c6ce3ff5cd94f01e711af894) pacific
(stable)
This is the version of the command line tool "ceph".
Please run "ceph versions" to show the version of the running Ceph daemons.
Regards
--
Rob
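To spell out the difference with commands:
ceph -v                  # version of the local CLI binary only
ceph versions            # versions of all running daemons, grouped by type
ceph tell osd.* version  # per-daemon, if one stands out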
On 04.02.23 20:54, Ramin Najjarbashi wrote:
ceph df | grep mypoo
--- POOLS ---
POOL OBJECTS
mypool 1.11G
---
and from this, I got 8.8M objects:
for item in `radosgw-admin user list | jq -r ".[]" | head`; do
B_OBJ=$(radosgw-admin user stats --uid $item 2>/dev/null |
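The quoted loop is cut off; a completed version of what I ran looks roughly
like this (the .stats.num_objects field name is taken from my radosgw-admin
JSON output, and note the `head` above only samples the first ten users):
total=0
for item in $(radosgw-admin user list | jq -r ".[]"); do
  n=$(radosgw-admin user stats --uid "$item" 2>/dev/null | jq -r '.stats.num_objects // 0')
  total=$((total + n))
done
echo "total objects across users: $total"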
Hello Eugen,
below is the Version of Ceph I am running
root@ceph-mon1:~# ceph -v
ceph version 16.2.11 (3cf40e2dca667f68c6ce3ff5cd94f01e711af894) pacific
(stable)
root@ceph-mon1:~# ceph orch ls rgw --export --format yaml
Error ENOENT: No orchestrator configured (try `ceph orch set backend`)
root@c
On 04.02.23 00:02, Thomas Cannon wrote:
Boreal-01 - the host - 17.2.5:
Boreal-02 - 15.2.6:
Boreal-03 - 15.2.8:
And the host I added - Boreal-04 - 17.2.5:
This is a wild mix of versions. Such a situation may exist during an
upgrade but not when operating normally or extending the cluster.
Please send responses to the mailing-list.
If the orchestrator is available, please also share this output (mask
sensitive data):
ceph orch ls rgw --export --format yaml
Which ceph version is this? The command 'ceph dashboard
get-rgw-api-host' was removed between Octopus and Pacific, that'
Hi,
can you paste the output of:
ceph config dump | grep mgr/dashboard/RGW_API_HOST
Does it match your desired setup? Depending on the ceph version (and
how ceph-ansible deploys the services) you could also check:
ceph dashboard get-rgw-api-host
I'm not familiar with ceph-ansible, but if y
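Depending on how old the deployment is, the setting may live in either
store, so it may be worth checking both (I'm not sure which one
ceph-ansible used here):
ceph config dump | grep -i rgw_api        # mon config database
ceph config-key dump | grep -i rgw_api    # older config-key store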
Hi,
did you check if your cluster has many "shadow" or "multipart" objects
in the pool? Those are taken into account when calculating the total
number of objects in a pool but are not reflected in the radosgw user stats.
Here's an example of a small rgw setup:
rados -p ls | grep -vE "shadow|multipart" | wc -l
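To break the total down by category, something like this should work
(the pool name is a placeholder):
rados -p <data-pool> ls | grep --text -cE "shadow"              # tail/shadow objects
rados -p <data-pool> ls | grep --text -cE "multipart"           # multipart parts
rados -p <data-pool> ls | grep --text -cvE "shadow|multipart"   # roughly the head objects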
Hello team,
I have a ceph cluster deployed using ceph-ansible, running on Ubuntu 20.04,
which has 6 hosts: 3 hosts for OSDs and 3 hosts used as monitors and
managers. I have deployed RGW on all those hosts and an RGW load balancer on
top of them. For testing purposes, I have switched off one OS
Hi,
I've increased the placement group count in my Octopus cluster, first in the index
pool, and it caused almost 2.5 hours of bad performance for the users. I'm planning
to increase the data pool as well, but first I'd like to know whether there is any
way to make it smoother.
At the moment I have these values
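For what it's worth, since Nautilus the pgp_num change is applied gradually,
and you can throttle how much misplaced data is allowed at a time, e.g.
(pool name and target are placeholders):
ceph config set mgr target_max_misplaced_ratio 0.01   # default 0.05, lower = slower/smoother
ceph config set osd osd_max_backfills 1               # keep backfill pressure low
ceph osd pool set <pool> pg_num 256                   # pgp_num then follows gradually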