[ceph-users] Re: Broke libvirt on compute node due to Ceph Luminous to Nautilus Upgrade

2025-02-18 Thread Pardhiv Karri
works with Nautilus and above. --Pardhiv On Tue, Feb 18, 2025 at 11:01 AM Pardhiv Karri wrote: > Hi Anthony, > > Thank you for the reply. Here is the output from the monitor node. The > monitor (includes manager) and OSD nodes have been rebooted sequentially > after the upgrade to
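A quick way to confirm the upgrade state being discussed here is `ceph versions`, which groups daemons by the release they are running, plus the current client floor. A minimal sketch, assuming a Nautilus-era CLI:

    # Show which release each daemon type reports after the upgrade
    ceph versions
    # Show the minimum client release the cluster currently requires;
    # raising it can lock out older librbd/libvirt clients
    ceph osd get-require-min-compat-client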

[ceph-users] Re: Broke libvirt on compute node due to Ceph Luminous to Nautilus Upgrade

2025-02-18 Thread Pardhiv Karri
tracker.ceph.com/issues/13301 > > Run `ceph features` which should give you client info. An unfortunate > wrinkle is that in the case of pg-upmap, some clients may report “jewel” > but their feature bitmaps actually indicate compatibility with pg-upmap. > If you see clients that ar
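The quoted advice boils down to one read-only command; `ceph features` lists connected daemons and clients grouped by release name and feature bitmap, so "jewel" entries can still be checked for pg-upmap capability via their bitmap. A minimal sketch:

    # Group mon/osd/client connections by release and feature bitmap
    ceph features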

[ceph-users] Broke libvirt on compute node due to Ceph Luminous to Nautilus Upgrade

2025-02-18 Thread Pardhiv Karri
Hi, We recently upgraded our Ceph from Luminous to Nautilus and upgraded the Ceph clients on OpenStack (using rbd). All went well, but after a few days we randomly saw instances getting stuck on libvirt_qemu_exporter, which hangs libvirt on the OpenStack compute nodes. We had to kill

[ceph-users] Ceph Nautilus packages for ubuntu 20.04

2024-11-27 Thread Pardhiv Karri
Hi, I am in a tricky situation. Our current OSD nodes (Luminous version) are on the latest Dell servers, which only support Ubuntu 20.04. The Luminous packages were installed on 16.04, so the packages are still Xenial; I later upgraded the OS to 20.04 and added OSDs to the cluster. Now, I am tryin
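To see which distro a given Ceph package was actually built for (the Xenial-vs-Focal question above), the package version string usually embeds the codename. A minimal sketch, assuming the stock package names:

    # Show installed and candidate versions plus the repos they come from
    apt-cache policy ceph-osd
    # Version suffixes such as "...xenial" vs "...focal" reveal the build target
    dpkg -l | grep '^ii  ceph'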

[ceph-users] Re: Ceph image delete error - NetHandler create_socket couldnt create socket

2024-04-19 Thread Pardhiv Karri
to set command `ulimit -n 10240` before rbd rm task > > > k > Sent from my iPhone > > > On 18 Apr 2024, at 23:50, Pardhiv Karri wrote: > > > > Hi, > > > > Trying to delete images in a Ceph pool is causing errors in one of > > the clusters. I reb
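The workaround quoted above raises the shell's open-file limit before running the delete, since the socket errors point at file-descriptor exhaustion in the client. A minimal sketch (pool and image names are placeholders):

    # Raise the per-process open-file limit for this shell only
    ulimit -n 10240
    # Then retry the delete in the same shell
    rbd rm mypool/myimage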

[ceph-users] Ceph image delete error - NetHandler create_socket couldnt create socket

2024-04-18 Thread Pardhiv Karri
Hi, Trying to delete images in a Ceph pool is causing errors in one of the clusters. I rebooted all the monitor nodes sequentially to see if the error went away, but it still persists. What is the best way to fix this? The Ceph cluster is in an OK state, with no rebalancing or scrubbing happening

[ceph-users] Ceph - Error ERANGE: (34) Numerical result out of range

2023-10-26 Thread Pardhiv Karri
Hi, Trying to move a node/host under a new SSD root and getting the error below. Has anyone seen it and found the fix? The pg_num and pgp_num are the same for all pools, so that is not the issue. [root@hbmon1 ~]# ceph osd crush move hbssdhost1 root=ssd Error ERANGE: (34) Numerical result out of range [roo
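Before retrying the move, it can help to inspect the CRUSH hierarchy and confirm the new root and its buckets look sane; whether that explains the ERANGE here is not confirmed. A diagnostic sketch using the names from the post:

    # Show the hierarchy with weights and device classes
    ceph osd crush tree
    # Dump buckets and rules in full for closer inspection
    ceph osd crush dump | less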

[ceph-users] init unable to update_crush_location: (34) Numerical result out of range

2023-10-25 Thread Pardhiv Karri
Hi, Getting an error while adding a new node/OSD with Bluestore OSDs to the cluster. The OSD is added without any host and is down; trying to bring it up didn't work. The same method of adding OSDs works in other clusters without issue. Any idea what the problem is? Ceph Version: ceph version 12.2.11 (
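One knob that is often involved when an OSD cannot set its own CRUSH location at startup is osd crush update on start; disabling it and placing the OSD manually separates the two failure modes. A hedged sketch, not confirmed as the fix for this ERANGE (id, weight, and names are placeholders):

    # In ceph.conf on the new node, stop the OSD from auto-placing itself
    [osd]
    osd crush update on start = false

    # Then place it by hand
    ceph osd crush add osd.42 1.0 host=newhost root=default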

[ceph-users] Copying and renaming pools

2022-06-13 Thread Pardhiv Karri
Hi, Our Ceph is used as backend storage for OpenStack. We use the "images" pool for Glance and the "compute" pool for instances. We need to migrate our images pool, which is on HDD drives, to SSD drives. I copied all the data from the "images" pool that is on HDD disks to an "ssdimages" pool that i
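Once the copy is complete, the usual way to keep Glance pointed at the same pool name is to swap names with `ceph osd pool rename` (clients should be stopped first so no writes land mid-swap). A minimal sketch with the pool names from the post:

    # Move the HDD-backed pool aside, then give the SSD copy its name
    ceph osd pool rename images images_old
    ceph osd pool rename ssdimages images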

[ceph-users] Re: Luminous to Pacific Upgrade with Filestore OSDs

2022-06-10 Thread Pardhiv Karri
Ok, thanks! --Pardhiv On Fri, Jun 10, 2022 at 2:46 AM Eneko Lacunza wrote: > Hi Pardhiv, > > I don't recall anything unusual, just follow upgrade procedures outlined > in each release. > > Cheers > > El 9/6/22 a las 20:08, Pardhiv Karri escribió: > > Aweso

[ceph-users] Ceph pool set min_write_recency_for_promote not working

2022-06-09 Thread Pardhiv Karri
Hi, I created a new pool called "ssdimages," which is similar to another pool called "images" (a very old one). But when I try to set min_write_recency_for_promote to 1, it fails with permission denied. Do you know how I can fix it? ceph-lab # ceph osd dump | grep -E 'images|ssdimages' pool 3 'im
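One possible cause, not confirmed for this case, is that the recency settings only make sense with HitSets configured, and the new pool may not match the old pool there. A sketch for comparing the two pools before retrying:

    # Compare HitSet configuration between the old and new pools
    ceph osd pool get images hit_set_count
    ceph osd pool get ssdimages hit_set_count
    # Retry once the new pool's settings match
    ceph osd pool set ssdimages min_write_recency_for_promote 1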

[ceph-users] Re: Luminous to Pacific Upgrade with Filestore OSDs

2022-06-09 Thread Pardhiv Karri
Eneko Lacunza wrote: > Hi Pardhiv, > > We have a running production Pacific cluster with some filestore OSDs (and > other Bluestore OSDs too). This cluster was installed "some" years ago with > Firefly... :) > > No issues related to filestore so far. > > Cheers >

[ceph-users] Luminous to Pacific Upgrade with Filestore OSDs

2022-06-08 Thread Pardhiv Karri
Hi, We are planning to upgrade our current Ceph from Luminous (12.2.11) to Nautilus and then to Pacific. We are using Filestore for OSDs now. Is it okay to upgrade with Filestore OSDs? We plan to migrate from Filestore to Bluestore at a later date, as the clusters are pretty large (PBs in size) and u
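For auditing which backend each OSD runs before an upgrade like this, every OSD reports its objectstore in its metadata. A minimal sketch:

    # Count OSDs by backend (filestore vs bluestore) across the cluster
    ceph osd count-metadata osd_objectstore
    # Or check a single OSD (id is a placeholder)
    ceph osd metadata 0 | grep osd_objectstore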

[ceph-users] rbd deep copy in Luminous

2022-06-07 Thread Pardhiv Karri
Hi, We are currently on Ceph Luminous (12.2.11). I don't see the "rbd deep cp" command in this version. Is it in a different version or release? If so, which one? If in another release, Mimic or later, is there a way to get it in Luminous? Thanks, Pardhiv
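For context: rbd deep-copy is not in Luminous 12.2.x and, as far as I can tell, first shipped with Mimic. Since it is implemented in the rbd client, a commonly suggested route is running a newer client against the older cluster (subject to compatibility testing). A sketch of the command form where it exists, with hypothetical names:

    # Copy an image including its snapshots to another pool
    rbd deep cp images/vm-disk-1 ssdimages/vm-disk-1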

[ceph-users] Ceph RBD pool copy?

2022-05-19 Thread Pardhiv Karri
Hi, We have a Ceph cluster integrated with OpenStack. We are thinking about migrating the Glance (images) pool to a new pool with better SSD disks. I see there is a "rados cppool" command. Will that work with snapshots in this rbd pool? -- *Pardhiv*
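A caveat worth noting: rados cppool copies objects at the RADOS level and is generally discouraged for RBD pools; in particular it is not expected to preserve RBD snapshots. A per-image alternative, sketched with hypothetical names (snapshots would still need separate handling, e.g. export-diff):

    # Copy one image between pools through a pipe (flattened, no snapshots)
    rbd export images/vm-disk-1 - | rbd import - ssdimages/vm-disk-1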

[ceph-users] Unable to login to Ceph Pacific Dashboard

2022-01-19 Thread Pardhiv Karri
Hi, I installed Ceph Pacific on one monitor node using the cephadm tool. The installation output gave me the credentials. When I go to a browser (on a different machine from the Ceph server) I see the login screen, but when I enter the credentials the browser reloads the same page; for a fraction of a second I see i
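If the loop turns out to be a credentials or stale-session issue, Pacific's dashboard module can reset a user's password from the CLI; a minimal sketch, assuming the default admin user:

    # Write the new password to a file, then apply it to the dashboard user
    echo -n 'NewSecret123' > /tmp/pw.txt
    ceph dashboard ac-user-set-password admin -i /tmp/pw.txt
    rm /tmp/pw.txt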

[ceph-users] Re: Unable to track different ceph client version connections

2020-01-24 Thread Pardhiv Karri
nodes? > > Look at your clients in the mon sessions: > > `ceph daemon /var/run/ceph/ceph-mon.ceph-mon0.asok sessions | grep hammer | awk '{print $2}'` > > > > k > -- *Pardhiv Karri* "Rise and Rise again until LAMBS become LIONS"
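To widen that check beyond one monitor, the same admin-socket query can be run on each mon host in turn; a sketch with hypothetical mon host names:

    # Run the session dump on every mon and flag pre-Jewel (hammer) clients
    for mon in ceph-mon0 ceph-mon1 ceph-mon2; do
      ssh "$mon" "ceph daemon /var/run/ceph/ceph-mon.$mon.asok sessions" | grep hammer
    done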