works with
Nautilus and above.
--Pardhiv
On Tue, Feb 18, 2025 at 11:01 AM Pardhiv Karri
wrote:
> Hi Anthony,
>
> Thank you for the reply. Here is the output from the monitor node. The
> monitor (which also runs the manager) and OSD nodes were rebooted sequentially
> after the upgrade to
ker.ceph.com/issues/13301
>
> Run `ceph features`, which should give you client info. An unfortunate
> wrinkle is that in the case of pg-upmap, some clients may report “jewel”
> but their feature bitmaps actually indicate compatibility with pg-upmap.
> If you see clients that ar
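For reference, a minimal sketch of the commands being discussed above, assuming the end goal is enabling pg-upmap (these are standard ceph CLI calls; the override flag is only appropriate once you have confirmed the "jewel" clients really do support pg-upmap):

ceph features -f json-pretty
# per-daemon and per-client feature bitmaps, release names, and counts

ceph osd set-require-min-compat-client luminous
# add --yes-i-really-mean-it only after checking the connected clients' feature bitmaps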
Hi,
We recently upgraded our Ceph from Luminous to Nautilus and upgraded the
Ceph clients on OpenStack (using rbd). All went well, but after a few days
we randomly saw instances getting stuck along with libvirt_qemu_exporter,
which was causing libvirt to hang on the OpenStack compute nodes. We had to kill
Hi,
I am in a tricky situation. Our current OSD nodes (running Luminous) are on
the latest Dell servers, which only support Ubuntu 20.04. The Luminous
packages were installed on 16.04, so the packages are still Xenial builds; I
later upgraded the OS to 20.04 and added the OSDs to the cluster. Now, I am tryin
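Not a fix, just the checks I would start with to see which release/series the installed packages actually come from (package names below are the usual Ubuntu ones):

dpkg -l | grep -i ceph        # installed ceph packages and their versions
apt-cache policy ceph-osd     # which repository/series the candidate package comes from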
to set `ulimit -n 10240` before the rbd rm task
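A sketch of that workaround as I applied it (pool/image names are placeholders):

ulimit -n 10240        # raise the open-files limit for this shell only
rbd rm <pool>/<image>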
>
>
> k
> Sent from my iPhone
>
> > On 18 Apr 2024, at 23:50, Pardhiv Karri wrote:
> >
> > Hi,
> >
> > Trying to delete images in a Ceph pool is causing errors in one of
> > the clusters. I reb
Hi,
Trying to delete images in a Ceph pool is causing errors in one of
the clusters. I rebooted all the monitor nodes sequentially to see if the
error would go away, but it still persists. What is the best way to fix this?
The Ceph cluster is in an OK state, with no rebalancing or scrubbing
happening
Hi,
Trying to move a node/host under a new SSD root, and I am getting the error
below. Has anyone seen it, and do you know the fix? The pg_num and pgp_num are
the same for all pools, so that is not the issue.
[root@hbmon1 ~]# ceph osd crush move hbssdhost1 root=ssd
Error ERANGE: (34) Numerical result out of range
[roo
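For what it's worth, these are the commands I'd use to inspect the CRUSH map before retrying the move (purely diagnostic, not a confirmed fix for the ERANGE error; file names are placeholders):

ceph osd crush tree          # current bucket hierarchy, including the new ssd root
ceph osd crush rule dump     # rules and which roots they reference
ceph osd getcrushmap -o crush.bin && crushtool -d crush.bin -o crush.txt   # full decompiled map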
Hi,
Getting an error while adding a new node/OSD with Bluestore OSDs to the
cluster. The OSD is added without any host and stays down; trying to bring it
up didn't work. The same method works in other clusters without any
issue. Any idea what the problem is?
Ceph Version: ceph version 12.2.11
(
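In case it helps, a hedged sketch of how one could place the stray OSD under its host bucket by hand (the id, weight, and hostname are placeholders, not values from this cluster):

ceph osd tree                                   # confirm where osd.<id> currently sits
ceph osd crush add-bucket <hostname> host       # only if the host bucket doesn't exist yet
ceph osd crush move <hostname> root=default
ceph osd crush create-or-move osd.<id> <weight> host=<hostname>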
Hi,
Our Ceph is used as backend storage for OpenStack. We use the "images" pool
for Glance and the "compute" pool for instances. We need to migrate our
images pool, which is on HDD drives, to SSD drives.
I copied all the data from the "images" pool that is on HDD disks to an
"ssdimages" pool that i
Ok, thanks!
--Pardhiv
On Fri, Jun 10, 2022 at 2:46 AM Eneko Lacunza wrote:
> Hi Pardhiv,
>
> I don't recall anything unusual, just follow upgrade procedures outlined
> in each release.
>
> Cheers
>
> On 9/6/22 at 20:08, Pardhiv Karri wrote:
>
> Aweso
Hi,
I created a new pool called "ssdimages," which is similar to another pool
called "images" (a very old one). But when I try to
set min_write_recency_for_promote to 1, it fails with permission denied. Do
you know how I can fix it?
ceph-lab # ceph osd dump | grep -E 'images|ssdimages'
pool 3 'im
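The setting is normally changed with the standard `ceph osd pool set` syntax, so the failing call is something like this (pool name from the dump above):

ceph osd pool set ssdimages min_write_recency_for_promote 1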
cunza wrote:
> Hi Pardhiv,
>
> We have a running production Pacific cluster with some filestore OSDs (and
> other Bluestore OSD too). This cluster was installed "some" years ago with
> Firefly... :)
>
> No issues related to filestore so far.
>
> Cheers
>
>
Hi,
We are planning to upgrade our current Ceph from Luminous (12.2.11) to
Nautilus and then to Pacific. We are using Filestore for the OSDs now. Is it
okay to upgrade with Filestore OSDs? We plan to migrate from Filestore to
Bluestore at a later date, as the clusters are pretty large (PBs in size) and
u
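For reference, a quick way to confirm which object store each OSD is using (the osd id 0 is just an example):

ceph osd metadata 0 | grep osd_objectstore   # prints "filestore" or "bluestore"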
Hi,
We are currently on the Ceph Luminous release (12.2.11). I don't see the "rbd
deep cp" command in this version. Is it in a different version or release?
If so, which one? And if it is only in a later release (Mimic or newer), is
there a way to get it in Luminous?
Thanks,
Pardhiv
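In case it's useful, a rough substitute I've seen used where `rbd deep cp` isn't available (pool/image names are placeholders, and note this copies only the image head, not its snapshots):

rbd export images/myimage - | rbd import - ssdimages/myimage
# snapshot history can be carried over incrementally with rbd export-diff / rbd import-diff,
# one pass per snapshot; worth testing on a scratch image first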
Hi,
We have a Ceph cluster integrated with OpenStack. We are thinking about
migrating the Glance (images) pool to a new pool with better SSD disks. I
see there is a "rados cppool" command. Will that work with the snapshots in
this rbd pool?
--
*Pardhiv*
Hi,
I installed a single Ceph Pacific monitor node using the cephadm tool. The
installation output gave me the dashboard credentials. When I go to a browser
(on a different machine from the Ceph server), I see the login screen, but when
I enter the credentials the browser reloads the same page; in that fraction of a second I see i
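Not a confirmed fix, but the two things I'd check first from the Ceph side (the "admin" user is whatever cephadm printed at bootstrap, and the password file is a placeholder):

ceph mgr services                                               # URL/port the dashboard is actually serving on
ceph dashboard ac-user-set-password admin -i <password-file>    # reset the password from a file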
nodes?
>
> Look at your clients in the mon sessions:
>
> `ceph daemon /var/run/ceph/ceph-mon.ceph-mon0.asok sessions | grep hammer
> | awk '{print $2}'`
>
>
>
> k
>
--
*Pardhiv Karri*
"Rise and Rise again until LAMBS become LIONS"