Thanks, Yuri!
Quoting Yuri Weinstein:
We merged two PRs and hope that the issues were addressed.
We are resuming testing and will send the QE status email as soon as
the results are ready for review.
On Thu, Dec 12, 2024 at 12:10 AM Eugen Block wrote:
Hi,
am I assuming correctly that
Hello,
I'm not an OpenStack user myself, but I did notice while testing with
Proxmox that setting AIO to native instead of io_uring performs faster in
Windows. We don't normally use Windows, so this was just a test
environment, and I don't know if OpenStack has something similar. Just my
2 cents for what
l
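For anyone who wants to try the same comparison on Proxmox, the AIO mode is a per-disk option. This is an untested sketch; the VM ID (100), the disk slot (scsi0), and the storage/volume name are placeholders, so check your actual disk line with `qm config <vmid>` first:

```shell
# Show the VM's current disk line (controller, volume, options)
qm config 100

# Switch the disk from io_uring (the default) to native AIO.
# Reuse your existing volume spec and just change/add aio=native.
qm set 100 --scsi0 rbd-pool:vm-100-disk-0,aio=native
```

The change takes effect after the VM is fully stopped and started again, not on a live reboot from inside the guest.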
FYI you can also set the balancer mode to crush-compat; this way, even if
the balancer is re-enabled for any reason, error messages will not occur.
https://docs.ceph.com/en/pacific/rados/operations/balancer/
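For reference, the mode switch described above is just:

```shell
# Check what the balancer is currently doing
ceph balancer status

# Switch to crush-compat mode (works with pre-luminous clients too)
ceph balancer mode crush-compat

# With this mode set, turning the balancer back on no longer
# produces the upmap-related errors
ceph balancer on
```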
On Thu, Dec 12, 2024, 15:28 Janne Johansson wrote:
> I have clusters that have been upgr
Hi Ilya,
some of our Proxmox VE users also report they need to enable rxbounce to
avoid their Windows VMs triggering these errors, see e.g. [1]. With
rxbounce, everything seems to work smoothly, so thanks for adding this
option. :)
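For other readers hitting the same Windows VM errors, enabling rxbounce looks roughly like this (a sketch; the pool and image names are placeholders):

```shell
# Map a single image with the rxbounce krbd option
rbd device map -o rxbounce mypool/myimage

# Or make rxbounce the default map option for clients,
# so every subsequent krbd map picks it up
ceph config set client rbd_default_map_options rxbounce
```

Already-mapped images need to be unmapped and remapped for the option to apply.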
We're currently checking how our stack could handle this more
gra
Hi,
I managed to install our 15th server with a fully qualified hostname. I
rectified this after adding the disks to the cluster (17.2.6) by changing
the hostname.
1) ceph orch host ls - returns the correct (short) hostnames
2) ceph orch ps - returns the correct (short) hostname for all daemons
3
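The checks in that list can be run as follows ("server15" is a placeholder for the renamed host's short name):

```shell
# 1) List hosts as cephadm sees them, with their registered names
ceph orch host ls

# 2) List daemons, filtered to the renamed host
ceph orch ps --hostname server15
```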
Hello Nizamudeen,
yes, of course. I did all this. Thanks.
As a matter of fact today I stopped every single RGW via systemd.
Then I restarted one of the RGWs and suddenly it became visible in the
dashboard.
I then started the first of the multisite RGWs and so on.
Now everything is fine.
Be
On Thursday, December 12, 2024 9:02:25 AM EST Rok Jaklič wrote:
> Hi,
>
> I am trying to create nfs cluster with following command:
> ceph nfs cluster create cephnfs
>
> But I get an error like:
> Error EPERM: osd pool create failed: 'pgp_num' must be greater than 0 and
> lower or equal than 'pg_
Hi,
I am trying to create an NFS cluster with the following command:
ceph nfs cluster create cephnfs
But I get an error like:
Error EPERM: osd pool create failed: 'pgp_num' must be greater than 0 and
lower or equal than 'pg_num', which in this case is 1 retval: -34
Any ideas why?
I also tried adding p
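One possible cause of that EPERM (retval -34 is ERANGE) is a pool-creation default where pgp_num ends up larger than pg_num. A sketch of how to inspect and align the defaults before retrying; the value 32 is only an example:

```shell
# Inspect the pool-creation defaults the mons will apply
ceph config get mon osd_pool_default_pg_num
ceph config get mon osd_pool_default_pgp_num

# If pgp_num exceeds pg_num, align them, then retry
ceph config set global osd_pool_default_pg_num 32
ceph config set global osd_pool_default_pgp_num 32
ceph nfs cluster create cephnfs
```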
I have clusters that have been upgraded into "upmap"-capable releases,
but in those cases, it was never in upmap mode, since these clusters
would also have jewel clients as the lowest possible, so if you tried to
enable the balancer in upmap mode it would tell you to first bump clients
to luminous at least,
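The client-version check and the bump described above look like this:

```shell
# See which feature releases the currently connected clients report
ceph features

# Once no pre-luminous clients remain, raise the floor so upmap works
ceph osd set-require-min-compat-client luminous

# Now the balancer accepts upmap mode
ceph balancer mode upmap
```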
Hi,
first thing coming to mind is to set hw_scsi_model=virtio-scsi and
hw_disk_bus=scsi [0]. Or did you already do that?
[0] https://docs.ceph.com/en/reef/rbd/rbd-openstack/#image-properties
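Setting those two properties on a Glance image is a one-liner ("myimage" is a placeholder for the image name or ID):

```shell
# Expose the image's disks through a virtio-scsi controller
openstack image set \
  --property hw_scsi_model=virtio-scsi \
  --property hw_disk_bus=scsi \
  myimage
```

The properties only affect instances launched from the image after the change, not already-running VMs.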
Quoting Michel Niyoyita:
Hello team,
I have configured a ceph cluster running on ubuntu 20.04
Hello team,
I have configured a Ceph cluster running on Ubuntu 20.04, with 16 HDDs for
data and 4 SSDs for journaling on each host. The total cluster size is 150
TB usable with a replica of 3. The cluster is in production, integrated
with OpenStack with different pools: Volumes, vms, cinder and
As you discovered, it looks like there are no upmap items in your
cluster right now. The `ceph osd dump` command will list them, in JSON
as you show, or you can `grep ^pg_upmap` without JSON as well (same
output, different format).
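Concretely, both forms of the listing are ("jq" assumed to be installed for the JSON variant):

```shell
# JSON view of upmap entries (an empty list means none exist)
ceph osd dump --format json-pretty | jq '.pg_upmap_items'

# Same information as plain text, one line per upmap entry
ceph osd dump | grep ^pg_upmap
```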
I think the balancer would have been enabled by default in Nau
Dear all,
during our upgrade from octopus to pacific the MGR suddenly started logging
messages like this one to audit.log:
2024-12-10T10:30:01.105524+0100 mon.ceph-03 (mon.2) 3004 : audit [INF]
from='mgr.424622547 192.168.32.67:0/63' entity='mgr.ceph-03' cmd=[{"prefix":
"osd pg-upmap-items", "
Hi Malte,
a couple of things for you to check.
1. Do you have an RGW user called 'dashboard' configured in that cluster?
2. Do you have the RGW_API_ACCESS_KEY and RGW_API_SECRET_KEY configured in
the dashboard configuration? If not, you can do it with `ceph dashboard
set-rgw-credentials`, which will po
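The two checks above boil down to:

```shell
# 1. Check whether the dashboard RGW user exists
radosgw-admin user info --uid=dashboard

# 2. (Re)generate the dashboard's RGW credentials and store
#    them in the dashboard configuration
ceph dashboard set-rgw-credentials
```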
Hello,
this issue hit lots of people in the past.
Now, I am on 18.2.2 and after configuring multisite between two clusters
the RGWs/Object Gateways are not visible in the dashboard anymore.
There are two realms, two zonegroups now and for each realm three RGWs
running on one cluster.
One o
Hi all,
two days ago we upgraded our cluster from octopus to pacific. Everything went
well and we see lots of improvements. Thanks for releasing the last stable
version with all its fixes. I do have some questions though and this hiccup is
one for starters:
After the upgrade to pacific we star
Hi,
am I assuming correctly that this PR won't land in Squid 19.2.1 since
it hasn't been merged yet?
https://github.com/ceph/ceph/pull/60881
Thanks!
Eugen
Quoting Laura Flores:
The fix for https://tracker.ceph.com/issues/69067 is getting
backported/tested.
On Tue, Dec 3, 2024 at 4:04