[ceph-users] Issue with Recovery Throughput Not Visible in Ceph Dashboard After Upgrade to 19.2.0 (Squid)

2024-10-19 Thread Sanjay Mohan
Dear Ceph Users,
I hope this message finds you well.
I recently performed an upgrade of my Ceph cluster, moving through the 
following versions:

  *   17.2.7 >> 18.2.0 >> 18.2.2 >> 18.2.4 >> 19.2.0 (Squid)

After successfully upgrading to Ceph 19.2.0, I noticed an issue where the 
recovery throughput is no longer visible in the new Ceph dashboard. However,
the old dashboard metrics and features seem to be working as expected. It is
important to note that the recovery throughput was displayed properly in the 
previous version of the Ceph dashboard. I am using Cephadm for the 
installation, not Rook.
Current behavior:

  *   Recovery throughput metrics are not displayed in the new dashboard after 
upgrading to 19.2.0.

Expected behavior:

  *   The recovery throughput should be visible, as it was in previous versions 
of the Ceph dashboard.

I am reaching out to inquire if there are any known issues, workarounds, or 
upcoming fixes for this. Your assistance in this matter would be greatly 
appreciated.
Thank you for your time and support. I look forward to hearing from you soon.
Best regards,
Sanjay Mohan
Software Defined Storage Engineer
sanjaymo...@am.amrita.edu



[ceph-users] Re: Influencing the osd.id when creating or replacing an osd

2024-10-19 Thread Anthony D'Atri


> On Oct 19, 2024, at 2:47 PM, Shain Miley  wrote:
> 
> We are running octopus but will be upgrading to reef or squid in the next few 
> weeks.  As part of that upgrade I am planning on switching over to using 
> cephadm as well.
> 
> Part of what I am doing right now is going through and replacing old drives 
> and removing some of our oldest nodes and replacing them with new ones…then I 
> will convert the rest of the filestore osd over to bluestore so that I can 
> upgrade.
>  
> One other question based on your suggestion below…my typical process of 
> removing or replacing an osd involves the following:
> 
> ceph osd crush reweight osd.id 0.0
> ceph osd out osd.id
> service ceph stop osd.id
> ceph osd crush remove osd.id
> ceph auth del osd.id
> ceph osd rm id
>  
> Does `ceph osd destroy` do something other than the last 3 commands above or 
> am I just doing the same thing using multiple commands?  If I need to start 
> issuing the destroy command as well I can.
> 

I don’t recall if it will stop the service if running, but it does leave the 
OSD in the CRUSH map marked as ‘destroyed’.  I *think* it leaves the auth but 
I’m not sure.
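
A quick way to check on a test OSD (the ID here is just an example) is to
destroy one and then look at the tree and auth list afterwards:

ceph osd destroy 12 --yes-i-really-mean-it
ceph osd tree | grep 'osd.12'        # should show the entry with a 'destroyed' status
ceph auth ls | grep 'osd.12'         # shows whether the cephx key survived

Per the release-note text quoted further down the thread, destroy is supposed
to clear the cephx and lockbox keys while keeping the ID and CRUSH entry
reserved for a replacement.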


[ceph-users] Re: Influencing the osd.id when creating or replacing an osd

2024-10-19 Thread Shain Miley
We are running Octopus but will be upgrading to Reef or Squid in the next few
weeks.  As part of that upgrade I am planning on switching over to using
cephadm as well.

Part of what I am doing right now is going through and replacing old drives and
removing some of our oldest nodes, replacing them with new ones… then I will
convert the rest of the Filestore OSDs over to BlueStore so that I can upgrade.


One other question based on your suggestion below… my typical process of
removing or replacing an OSD involves the following:

ceph osd crush reweight osd.id 0.0
ceph osd out osd.id
service ceph stop osd.id
ceph osd crush remove osd.id
ceph auth del osd.id
ceph osd rm id

Does `ceph osd destroy` do something other than the last 3 commands above or am 
I just doing the same thing using multiple commands?  If I need to start 
issuing the destroy command as well I can.



Thank you.

Shain



From: Anthony D'Atri 
Date: Friday, October 18, 2024 at 9:01 AM
To: Shain Miley 
Cc: ceph-users@ceph.io 
Subject: Re: [ceph-users] Influencing the osd.id when creating or replacing an 
osd

What release are you running where ceph-deploy still works?

I get what you're saying, but really you should get used to OSD IDs being 
arbitrary.

- ``ceph osd ls-tree <name>`` will output a list of OSD ids under
  the given CRUSH name (like a host or rack name).  This is useful
  for applying changes to entire subtrees.  For example, ``ceph
  osd down `ceph osd ls-tree rack1```.

This is useful for one-off scripts, where you can e.g. use it to get a list of 
OSDs on a given host.
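
For example (ceph-node01 is just a placeholder host name), something like this
marks every OSD on one host down in one shot:

for id in $(ceph osd ls-tree ceph-node01); do
    ceph osd down "$id"
done

or, since `ceph osd down` accepts multiple IDs, simply
`ceph osd down $(ceph osd ls-tree ceph-node01)`.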

Normally the OSD ID selected is the lowest-numbered unused one, which can
either be an ID that has never been used before or one that has been deleted.
So if you delete an OSD entirely and redeploy, you may or may not get the same
ID, depending on the cluster’s history.
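
As a made-up illustration: if a cluster has osd.0 through osd.9 and osd.3 has
been fully removed (down, crush remove, auth del, `ceph osd rm 3`), then a
subsequent `ceph-volume lvm create --data /dev/sdX` will normally come up as
osd.3, since 3 is now the lowest free ID; on a cluster where no ID has ever
been freed it would get osd.10 instead.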

- ``ceph osd destroy`` will mark an OSD destroyed and remove its
  cephx and lockbox keys.  However, the OSD id and CRUSH map entry
  will remain in place, allowing the id to be reused by a
  replacement device with minimal data rebalancing.

Destroying OSDs and redeploying them can help with what you’re after.
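
As a rough sketch of that replace-in-place flow on a ceph-volume based setup
(the ID and device path are illustrative, adjust for your environment):

ceph osd destroy 12 --yes-i-really-mean-it           # keep the ID and CRUSH entry
ceph-volume lvm zap /dev/sdX --destroy               # wipe the replacement device
ceph-volume lvm create --osd-id 12 --data /dev/sdX   # redeploy, reusing ID 12

The new OSD then backfills into the same CRUSH position, so data movement is
limited to refilling that one device.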

> On Oct 17, 2024, at 9:14 PM, Shain Miley  wrote:
>
> Hello,
> I am still using ceph-deploy to add OSDs to my cluster.  From what I have
> read, ceph-deploy does not allow you to specify the osd.id when creating new
> OSDs; however, I am wondering if there is a way to influence the number that
> Ceph will assign to the next OSD that is created.
>
> I know that it really shouldn’t matter what OSD number gets assigned to the
> disk, but as the number of OSDs increases it is much easier to keep track of
> where things are if you can control the ID when replacing failed disks or
> adding new nodes.
>
> Thank you,
> Shain
>
>