On Mon, Feb 05, 2024 at 07:51:34PM +0100, Ondřej Kukla wrote:
> Hello,
>
> For some time now I’m struggling with the time it takes to
> CompleteMultipartUpload on one of my rgw clusters.
>
I have a customer with ~8M objects in one bucket uploading quite large files,
from 100GB to like 800GB.
Hello Anthony,
The replicated index pool has about 20TiB of free space and we are using Intel
P5510 NVMe Enterprise SSDs, so I guess the hardware shouldn’t be the issue.
Yes, I’m able to change the timeout on our LB, but I’m not sure I want to
set it to 40 minutes+…
Ondrej
> On 5. 2. 2024, at 20:0
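For illustration only, and assuming the LB is HAProxy (the thread doesn't say which one is in use), the knob in question would be the server-side timeout, roughly like this sketch; the backend name, server address and the 45m value are placeholders:

    # /etc/haproxy/haproxy.cfg (sketch, not a tested config)
    backend rgw
        # let a slow CompleteMultipartUpload finish before the LB gives up
        timeout server 45m
        server rgw1 10.0.0.11:8080 check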
Hello,
For some time now I’ve been struggling with the time it takes to
CompleteMultipartUpload on one of my rgw clusters.
I have a customer with ~8M objects in one bucket uploading quite large files,
from 100GB to like 800GB.
I’ve noticed when they are uploading ~200GB files that the requests st
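As a first check (a sketch, assuming the standard radosgw-admin tooling and a placeholder bucket name), the index sharding and object counts often hint at whether the bucket index is the bottleneck:

    # object count and shard layout of the bucket
    radosgw-admin bucket stats --bucket=<bucket-name>
    # flag buckets that exceed the recommended objects-per-shard ratio
    radosgw-admin bucket limit check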
Don't try to mount the LVM2 PV as a filesystem. You need to look at syslog
to determine why you are unable to scan it in. When you have your PVs
mapped, they should show up in lsblk and pvs.
Once you determine why they are not showing (maybe there is something else
mapping type 1024, so remove i
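A minimal sketch of those checks, assuming a systemd-based host with the stock LVM2 and util-linux tools:

    # are the block devices and PVs visible at all?
    lsblk
    pvs
    # rescan for PVs and try to activate any VGs found
    pvscan
    vgchange -ay
    # kernel-side LVM/device-mapper errors around the time of the scan
    journalctl -k | grep -i -e lvm -e device-mapper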
Thanks. I think the only issue with doing snapshots via CloudStack is
potentially having to pause an instance for an extended period of time. I
haven’t tested this yet, but based on the docs, I think KVM has to be paused
regardless.
What about added volumes? Does an instance have to pause if you
Can you share your script? Thanks!
> On Saturday, Feb 03, 2024 at 10:35 AM, Marc <m...@f1-outsourcing.eu> wrote:
> I have a script that checks on each node which VMs are active, and then
> the script makes a snapshot of their RBDs. It first issues some command to
> the VM to freeze
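Not Marc's actual script, but a minimal sketch of that flow, assuming libvirt with qemu-guest-agent inside the guests and RBD images named after the domains:

    #!/bin/bash
    # freeze each running VM's filesystems, snapshot its RBD image, then thaw
    POOL=rbd   # assumed pool name; image name assumed to match the VM name
    for vm in $(virsh list --name); do
        virsh domfsfreeze "$vm"        # needs qemu-guest-agent in the VM
        rbd snap create "${POOL}/${vm}@$(date +%Y%m%d-%H%M%S)"
        virsh domfsthaw "$vm"
    done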
On 01.02.24 10:10, Christian Rohmann wrote:
[...]
I am wondering if ceph-exporter [2] is also built and packaged via
the Ceph packages [3] for installations that use them?
[2] https://github.com/ceph/ceph/tree/main/src/exporter
[3] https://docs.ceph.com/en/latest/install/get-packages/
I c
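One way to check whether a given repository ships it (a sketch; which command applies depends on the distro):

    # RPM-based distros
    dnf provides '*/ceph-exporter'
    # Debian/Ubuntu (needs the apt-file package)
    apt-file search ceph-exporter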
Hi,
Just looking back through PyO3 issues, it would appear this functionality was
never supported:
https://github.com/PyO3/pyo3/issues/3451
https://github.com/PyO3/pyo3/issues/576
It just appears that attempting to use this functionality (which does not
work/exist) wasn't successfully prevented pre
I don't use Rocky, so this is a stab in the dark and probably not the issue, but
could SELinux be blocking the process? Another long shot: is python3 in
the standard location? That is, if you run python3 --version as your ceph user,
what does it return?
Probably not much help, but figured I'd throw it out there.
On Mo
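A quick sketch of those two checks, assuming auditd and the usual SELinux tooling on Rocky:

    # is SELinux enforcing, and has it denied anything recently?
    getenforce
    ausearch -m avc -ts recent
    # does the ceph user see the expected python3?
    sudo -u ceph python3 --version
    sudo -u ceph which python3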
I have verified the server's expected hostname (with `hostname`) matches
the hostname I am trying to use.
Just to be sure, I also ran:
cephadm check-host --expect-hostname
and it returns:
Hostname "" matches what is expected.
On the current admin server where I am trying to add the host
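For comparison, the usual sequence when adding a host (a sketch; <hostname> and <ip> are placeholders):

    # on the new host: does cephadm agree with the configured hostname?
    cephadm check-host --expect-hostname <hostname>
    # on the admin node: distribute the cluster's SSH key and add the host
    ssh-copy-id -f -i /etc/ceph/ceph.pub root@<ip>
    ceph orch host add <hostname> <ip>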
Hi,
I think that the approach of exporting and importing PGs would be
a priori more successful than the one based on pvmove or ddrescue. The
reason is that you don't need to export/import all data that the
failed disk holds, but only the PGs that Ceph cannot recover
otherwise. The logic here i
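A sketch of that export/import step with ceph-objectstore-tool; the OSDs involved must be stopped, and the paths and PG id are placeholders:

    # on the node with the failing disk, with that OSD stopped
    ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-<id> \
        --pgid <pgid> --op export --file /mnt/backup/<pgid>.export
    # on a healthy OSD (also stopped), import the PG, then start the OSD again
    ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-<other-id> \
        --op import --file /mnt/backup/<pgid>.export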
On the other hand, the OpenStack docs [3] report this:
The mirroring of RBD images stored in Erasure Coded pools is not
currently supported by the ceph-rbd-mirror charm due to limitations
in the functionality of the Ceph rbd-mirror application.
But I can't tell if it's a limitation within t
Hi,
I think you still need a replicated pool for the rbd metadata; check
out this thread [1]. Although I don't know if a mixed setup will work.
IIUC, in the referenced thread the pools are set up identically on both
clusters, so I'm not sure if it will work if you only have one replicated
pool in s
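For what it's worth, the usual split (a sketch; the pool names are made up and assumed to already exist) keeps the image metadata in a replicated pool and only the data objects in the EC pool:

    # EC pools need overwrites enabled before RBD can use them as data pools
    ceph osd pool set ec_data allow_ec_overwrites true
    rbd pool init rbd_meta
    # image lives in the replicated pool, its data objects go to the EC pool
    rbd create rbd_meta/image1 --size 100G --data-pool ec_data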
Hello,
we are configuring a new Ceph cluster with Mellanox 2x100Gbps cards.
We bonded these two ports into an MLAG bond0 interface.
In async+posix mode everything is OK and the cluster is in the
HEALTH_OK state.
The Ceph version is 18.2.1.
Then we tried to configure RoCE for the cluster part of the network, but
without s
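In case it helps, the RDMA messenger for the cluster network is typically switched on with something like the following; this is only a sketch, and the device name is an assumption that has to match your bond (check ibdev2netdev or rdma link):

    # ceph.conf sketch, untested values
    [global]
    ms_cluster_type = async+rdma
    # RDMA device backing bond0, e.g. mlx5_bond_0 on Mellanox MLAG setups
    ms_async_rdma_device_name = mlx5_bond_0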