Hi Darren & Anthony,
>>How many PGs have you got configured for the Ceph pool that you are testing
>>against?
I have created the CloudStack pools with a pg_num of 64.
ceph osd pool get cloudstack-BRK pg_autoscale_mode
pg_autoscale_mode: on
>>Have you tried the same benchmark without the replicat
ceph osd pool ls
.mgr
cloudstack-GUL
cloudstack-BRK
.nfs
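In case it helps the benchmark discussion, a couple of read-only checks (pool name taken from the output above; adjust as needed) will show what pg_num the autoscaler has actually settled on:

  # effective pg_num for the pool under test
  ceph osd pool get cloudstack-BRK pg_num
  # autoscaler view for all pools (current vs. target PG counts)
  ceph osd pool autoscale-status

With pg_autoscale_mode on, the value you created the pool with may have been adjusted since, so it is worth confirming before comparing benchmark runs.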
Hi,
Could you please change this doc
https://docs.ceph.com/en/quincy/mgr/ceph_api/#post--api-cluster-user-export
as it doesn't accept a simple string, but a JSON array.
i.e. I think this:
> {
>   "entities": "string"
> }
should be replaced by something like:
> {
>   "entities": ["string"]
> }
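For what it's worth, an illustrative call with the array form (the host, port, API version header and the example entity are my assumptions, not from the doc):

  curl -k -X POST "https://<mgr-host>:8443/api/cluster/user/export" \
       -H "Accept: application/vnd.ceph.api.v1.0+json" \
       -H "Content-Type: application/json" \
       -H "Authorization: Bearer <token>" \
       -d '{"entities": ["client.admin"]}'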
Hi Istvan,
the issue you're referring to has nothing to do with real high-latency ops.
It's just a cosmetic issue caused by improper ordering of the events,
which resulted in a negative delta between them and hence showed that
delta as a huge positive value.
But this impacts operations dump only. F
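If you want to look at the dumps in question, these are the per-OSD admin socket dumps (osd.0 below is just a placeholder):

  # currently in-flight ops, with per-event timestamps
  ceph daemon osd.0 dump_ops_in_flight
  # recently completed / slow ops
  ceph daemon osd.0 dump_historic_ops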
Gotcha. I’ve been through Grizzly > Havana > Icehouse migrations in the past,
Ceph from Dumpling > Hammer in conjunction, so Queens is like way futuristic to
me ;)
I suspect that you will — however painfully — want to do the grand
migrate-or-reboot shuffle in advance of the OpenStack migration.
The “release” shown here isn’t what one might quite reasonably think it is. In
this context think of it as a “minimum feature set”.
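A rough illustration of the distinction: `ceph versions` reports the release each daemon is actually running, while `ceph features` (quoted below) reports the feature level of connected daemons and clients, which is why an upgraded cluster can still show "luminous" there:

  # actual running release per daemon type
  ceph versions
  # feature level ("minimum feature set") per connected entity
  ceph features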
> On Feb 18, 2025, at 2:01 PM, Pardhiv Karri wrote:
>
> Hi Anthony,
>
> Thank you for the reply. Here is the output from the monitor node. The
> monitor (incl
Hi Devender,
Does this help? https://docs.clyso.com/tools/erasure-coding-calculator
Cheers, dan
On Thu, Feb 13, 2025 at 7:43 AM Devender Singh wrote:
>
> Hello all
>
>
> Do we have a good cluster design calculator which can suggest failure
> domain and pool size and min size according to the numbe
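Not a substitute for the calculator, but for reference the values it suggests end up in an erasure-code profile; a minimal sketch with example numbers (k=4, m=2, host failure domain, names are placeholders):

  ceph osd erasure-code-profile set example-ec k=4 m=2 crush-failure-domain=host
  ceph osd pool create example-ec-pool 64 64 erasure example-ec

With k=4 and m=2 the pool ends up with size 6, and the usual recommendation is min_size = k+1 = 5, so data stays writable with one chunk missing.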
Hi Anthony,
Regarding the need to upgrade Ceph, we are upgrading our current OpenStack
from Queens (yeah, very old) to Antelope, and the OpenStack vendor required
us to upgrade Ceph from Luminous to Nautilus for their migration code to
work, as the framework they are using to migrate/upgrade only wo
Hi,
We recently upgraded our Ceph from Luminous to Nautilus and upgraded the
Ceph clients on OpenStack (using RBD). All went well, and after a few days
we randomly saw instances getting stuck with libvirt_qemu_exporter, which
was causing libvirt to hang on the OpenStack compute nodes. We had to kill
This is one of the pitfalls of package-based installs. This dynamic with Nova
and other virtualization systems has been well-known for at least a dozen years.
I would not expect a Luminous client (i.e. librbd / librados) to have an issue,
though — it should be able to handle pg-upmap. If you h
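One way to see that dynamic in action (a sketch, assuming RPM-based compute nodes and guests started before the client upgrade): compare the installed client library with what long-running QEMU processes still have mapped, since they keep the old librbd loaded until they are restarted or live-migrated.

  # client library version installed on the compute node
  rpm -q librbd1          # or: dpkg -l librbd1
  # what the running qemu processes actually have mapped
  for pid in $(pgrep -f qemu); do
      grep -m1 -H librbd /proc/$pid/maps
  done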
Hi Anthony,
Thank you for the reply. Here is the output from the monitor node. The
monitor (which includes the manager) and OSD nodes have been rebooted
sequentially after the upgrade to Nautilus, so I wonder why they are still
showing luminous now. Any way I can fix this?
or1sz2 [root@mon1 ~]# ceph features
{
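Following on from the pg-upmap point above, once every entry in that features output reports luminous or later, the cluster floor can be checked and, if desired, raised. Just a sketch; verify client support first:

  # what the cluster currently requires of clients
  ceph osd get-require-min-compat-client
  # raise the floor (needed e.g. for pg-upmap) only once all clients report luminous+
  ceph osd set-require-min-compat-client luminous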
Hi,
If the strays are increasing, it usually means there are references
lingering around. You can try to evaluate the strays in the ~mdsdir [0]. If
strays keep increasing at a staggering rate, then check whether the deleted
files/dirs are referenced anywhere (like in snapshots) and, as Eugen mentioned,
note th
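For watching the counters themselves, a sketch (run on the host of the active MDS; the daemon name is a placeholder):

  # stray-related counters from the MDS cache section
  ceph daemon mds.<name> perf dump mds_cache | grep -i strays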
Hi,
On 18.02.25 01:00, Jinfeng Biao wrote:
Hello Eugen and all,
Thanks for the reply. We've checked the SuSE doc before raising it twice,
from 100k to 125k and then to 150k.
We are a bit worried about the continuous growth of strays at 50k a day and
would like to find an effective way to reduce th
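Assuming the value being raised is mds_bal_fragment_size_max (my guess from the 100k default; please correct me if it is something else), the current and new settings would be handled with:

  # current value
  ceph config get mds mds_bal_fragment_size_max
  # raise to the 150k mentioned above
  ceph config set mds mds_bal_fragment_size_max 150000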
See https://tracker.ceph.com/issues/69867
On 18/02/2025 08:29, Nicola Mori wrote:
Dear Ceph users,
after the upgrade to 19.2.1 I experience a soft freeze of the web UI
(old version, activated with `ceph dashboard feature disable
dashboard`). When I click on the "Cluster status" widget I get
Dear Ceph users,
after the upgrade to 19.2.1 I experience a soft freeze of the web UI
(old version, activated with `ceph dashboard feature disable
dashboard`). When I click on the "Cluster status" widget I get only a
blank small tooltip window, instead of e.g. the list of current
warnings. Af