On Tue, Jan 24, 2023 at 1:34 AM wrote:
>
> Hello Thomas,
>
> I have the same issue with the MDS that you describe, and the Ceph version is
> the same. Did the up:replay state ever finish in your case?
There is probably a lot going on in Thomas's cluster that is blocking the MDS
from making progress. Could you uploa
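For reference, a rough way to check whether replay is actually progressing
(a sketch only; the daemon name is a placeholder and counter names can differ
between releases):

  # Which rank is stuck in up:replay, and overall cluster health
  ceph fs status
  ceph health detail

  # Perf counters of the replaying MDS; if the journal read position in the
  # mds_log section keeps increasing between runs, replay is still moving.
  # (On the MDS host, "ceph daemon mds.<name> perf dump" works as well.)
  ceph tell mds.<name> perf dump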
Hi Peter,
I'm not quite sure whether your cluster is fully backed by NVMe drives
based on your description, but you might be interested in the CPU
scaling article we posted last fall. It's available here:
https://ceph.io/en/news/blog/2022/ceph-osd-cpu-scaling/
That gives a good overview of wh
We use the rbd-mirror as a way to migrate volumes between clusters.
The process is: enable mirroring on the image to migrate, demote it on the
primary cluster, promote it on the secondary cluster, and then disable
mirroring on the image.
When we started using `rbd_mirroring_delete_delay` so we could retai
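For anyone looking for the concrete commands, a minimal sketch of that
sequence (pool and image names are made up here, and snapshot-based mirroring
is assumed):

  # On the primary (source) cluster
  rbd mirror image enable volumes/volume-1234 snapshot
  rbd mirror image demote volumes/volume-1234

  # On the secondary (destination) cluster, once the image has fully synced
  rbd mirror image promote volumes/volume-1234
  rbd mirror image disable volumes/volume-1234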
My Ceph IOPS are very low with over 48 SSD-backed OSDs (NVMe for DB/WAL) on
four physical servers. The whole cluster does only about 20K IOPS in total, so
the I/O seems to be held back by a bottleneck somewhere. dstat shows a lot of
context switches and over 150K interrupts while I run an fio 4K, 128-queue-depth
benchmark.
I c
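For context, a benchmark like the one described would typically look roughly
like this (device path, runtime and job count are placeholders, not the
poster's actual command):

  fio --name=ceph-4k-qd128 \
      --ioengine=libaio --direct=1 \
      --rw=randwrite --bs=4k --iodepth=128 \
      --numjobs=1 --runtime=60 --time_based \
      --filename=/dev/rbd0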
Hello,
I tried all of the options and it's not working; my replication network
speed is still the same. Can you suggest any other way to increase the
performance?
Hello Thomas,
I have the same issue with the MDS that you describe, and the Ceph version is
the same. Did the up:replay state ever finish in your case?
Thx
Aleksandar
Hey, did you ever find a resolution for this?
Hi,
I am trying to configure RDMA in a cluster of 6 nodes (3 MONs, 3 OSD nodes,
and 10 OSDs on each OSD node).
OS: CentOS Stream release 8.
I've followed the steps below, but I got an error.
[root@mon1 ~]# cephadm shell
Inferring fsid 9414e1bc-9061-11ed-90fc-00163e4f92ad
Using recent ceph image
qu
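For reference, the RDMA messenger is normally selected through the ms_*
options; a rough sketch (the device name is a placeholder, RDMA support
depends on how the Ceph packages were built, and daemons must be restarted to
pick up the change):

  ceph config set global ms_cluster_type async+rdma
  ceph config set global ms_async_rdma_device_name mlx5_0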
Ilya, Venky,
The rbd, krbd and fs reruns are almost ready, please review/approve.
On Mon, Jan 23, 2023 at 2:30 AM Ilya Dryomov wrote:
>
> On Fri, Jan 20, 2023 at 5:38 PM Yuri Weinstein wrote:
> >
> > The overall progress on this release is looking much better and if we
> > can approve it we can plan to pub
On Fri, Jan 20, 2023 at 5:38 PM Yuri Weinstein wrote:
>
> The overall progress on this release is looking much better and if we
> can approve it we can plan to publish it early next week.
>
> Still seeking approvals
>
> rados - Neha, Laura
> rook - Sébastien Han
> cephadm - Adam
> dashboard - Erne
Hey Yuri,
On Fri, Jan 20, 2023 at 10:08 PM Yuri Weinstein wrote:
>
> The overall progress on this release is looking much better and if we
> can approve it we can plan to publish it early next week.
>
> Still seeking approvals
>
> rados - Neha, Laura
> rook - Sébastien Han
> cephadm - Adam
> dash
Thanks a lot
Cheers, Massimo
On Mon, Jan 23, 2023 at 9:55 AM Robert Sander wrote:
> On 23.01.23 at 09:44, Massimo Sgaravatto wrote:
>
> >> This triggered the remapping of some PGs and therefore some data movement.
> >> Is this normal/expected, since for the time being I have only hdd OSDs?
>
On Fri, 20 Jan 2023 at 13:12, seccentral wrote:
> Hello,
> Thank you for the valuable info, and especially for the slack link (it's
> not listed on the community page)
> The ceph-volume command was issued in the following manner:
> log in to my 1st VPS, from which I performed the bootstrap with cep
Any feedback? I would just like to be sure that I am using the right
procedure ...
Thanks, Massimo
On Fri, Jan 20, 2023 at 11:28 AM Massimo Sgaravatto
<massimo.sgarava...@gmail.com> wrote:
> Dear all
>
> I have a ceph cluster where so far all OSDs have been rotational hdd disks
> (actually the
On 23.01.23 at 09:44, Massimo Sgaravatto wrote:
This triggered the remapping of some PGs and therefore some data movement.
Is this normal/expected, since for the time being I have only hdd OSDs?
This is expected behaviour, as the cluster map has changed. Internally
the device classes are rep
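As a side note, the usual way to keep a pool's data on a single device class
is a class-specific CRUSH rule; a hedged example (rule and pool names are made
up, and switching an existing pool to such a rule triggers its own round of
data movement, because the per-class shadow tree has different weights/IDs):

  # Create a replicated rule restricted to the hdd device class
  ceph osd crush rule create-replicated replicated_hdd default host hdd

  # Point an existing pool at that rule
  ceph osd pool set mypool crush_rule replicated_hdd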