[ceph-users] cephfs mount with kernel driver

2023-06-19 Thread farhad kh
I noticed that in my scenario, when I mount CephFS via the kernel module, data is written directly to only one or three of the OSDs, and the client's write speed is higher than the speed of replication and autoscaling. This causes the write operation to stop as soon as those OSDs are full, and the
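For reference, a minimal sketch of a kernel-driver CephFS mount; the monitor addresses, client name and secret file below are placeholders, not values from the original message:

  # classic kernel-client mount syntax; replace monitors, client name and keyring path
  mount -t ceph 192.168.1.11:6789,192.168.1.12:6789:/ /mnt/cephfs \
      -o name=client1,secretfile=/etc/ceph/client1.secret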

[ceph-users] autoscaling not working and active+remapped+backfilling

2023-06-19 Thread farhad kh
Hi, I have a problem with Ceph 17.2.6 (CephFS with MDS daemons) and I'm seeing unusual behavior. I created a data pool with the default CRUSH rule, but data is stored on only 3 specific OSDs while the other OSDs stay clean. PG autoscaling is also active, but the PG count does not change as the pool grows bigger. I did this manually
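A few commands that can help verify the autoscaler and the actual PG placement described above (the pool name is a placeholder):

  ceph osd pool autoscale-status                  # current and target pg_num per pool
  ceph osd pool get cephfs_data pg_autoscale_mode
  ceph osd pool get cephfs_data crush_rule
  ceph pg ls-by-pool cephfs_data                  # shows which OSDs actually hold the PGs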

[ceph-users] Re: same OSD in multiple CRUSH hierarchies

2023-06-19 Thread Eugen Block
Hi, I don't think this is going to work. Each OSD belongs to a specific host and you can't have multiple buckets (e.g. bucket type "host") with the same name in the crush tree. But if I understand your requirement correctly, there should be no need to do it this way. If you structure your
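A hedged sketch of the alternative Eugen hints at: inspect the existing tree and scope a rule by device class instead of duplicating host buckets (rule and class names are placeholders):

  ceph osd crush tree                                               # one host bucket per physical host
  ceph osd crush rule create-replicated ssd-rule default host ssd   # restrict placement via device class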

[ceph-users] Re: OpenStack (cinder) volumes retyping on Ceph back-end

2023-06-19 Thread Eugen Block
Hi, I don't quite understand the issue yet; maybe you can clarify. If I perform a "change volume type" from OpenStack on volumes attached to the VMs, the system successfully migrates the volume from the source pool to the destination pool, and at the end of the process the volume is visible
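For context, the retype described here is typically triggered from the OpenStack CLI roughly like this (volume type name and volume ID are placeholders):

  openstack volume set --type fast-pool --retype-policy on-demand <volume-id>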

[ceph-users] Critical Information: DELL/Toshiba SSDs dying after 70,000 hours of operation

2023-06-19 Thread Frédéric Nass
Hello, this message does not concern Ceph itself but a hardware vulnerability that can lead to permanent loss of data on a Ceph cluster equipped with the same hardware in separate fault domains. The DELL/Toshiba PX02SMF020, PX02SMF040, PX02SMF080 and PX02SMB160 SSD drives of the 13G generation
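To check how close a drive is to the reported 70,000-hour threshold, something like the following smartmontools call can be used; the device path is a placeholder and the exact attribute name differs between SATA and SAS drives:

  smartctl -a /dev/sdX | grep -i hours   # look for the power-on hours counter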

[ceph-users] Re: EC 8+3 Pool PGs stuck in remapped+incomplete

2023-06-19 Thread Jayanth Reddy
Hello Weiwen, Thank you for the response. I've attached the output for all PGs in state incomplete and remapped+incomplete. Thank you! Thanks, Jayanth Reddy On Sun, Jun 18, 2023 at 4:09 PM Jayanth Reddy wrote: > Hello Weiwen, > > Thank you for the response. I've attached the output for all PGs

[ceph-users] Re: EC 8+3 Pool PGs stuck in remapped+incomplete

2023-06-19 Thread Jayanth Reddy
Hello Weiwen, Thank you for the response. I've attached the output for all PGs in state incomplete and remapped+incomplete. Thank you! Thanks, Jayanth Reddy On Sat, Jun 17, 2023 at 11:00 PM 胡 玮文 wrote: > Hi Jayanth, > > Can you post the complete output of “ceph pg query”? So that we can > und
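The requested output can be collected per PG along these lines (the PG ID below is a placeholder):

  ceph pg ls incomplete                # list PGs currently in the incomplete state
  ceph pg 2.1f query > pg_2.1f.json    # full query output for a single PG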

[ceph-users] Ceph iSCSI GW not working with VMware VMFS and Windows Clustered Storage Volumes (CSV)

2023-06-19 Thread Work Ceph
Hello guys, we have a Ceph cluster that runs just fine with Ceph Octopus; we use RBD for some workloads, RadosGW (via S3) for others, and iSCSI for some Windows clients. Recently, we needed to add some VMware clusters as clients for the iSCSI GW, as well as Windows systems using Clustered

[ceph-users] Re: Help needed to configure erasure coding LRC plugin

2023-06-19 Thread Eugen Block
Hi, I have a real hardware cluster available for testing now. I'm not sure whether I'm completely misunderstanding how it's supposed to work or if it's a bug in the LRC plugin. This cluster has 18 HDD nodes available across 3 rooms (or DCs); I intend to use 15 nodes to be able to recover if one
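For context, an LRC profile is created roughly as follows; the k/m/l values and the locality bucket type are illustrative, not the ones discussed in the thread, and assume the CRUSH map actually contains that bucket type:

  ceph osd erasure-code-profile set lrc_test \
      plugin=lrc k=4 m=2 l=3 \
      crush-failure-domain=host crush-locality=datacenter
  ceph osd pool create lrc_pool erasure lrc_test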

[ceph-users] Re: Grafana service fails to start due to bad directory name after Quincy upgrade

2023-06-19 Thread Eugen Block
Hi, so Grafana is starting successfully now? What did you change? Regarding the container images: yes, there are defaults in cephadm which can be overridden with ceph config. Can you share the output of "ceph config dump | grep container_image"? I tend to always use a specific image as described
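The override Eugen refers to would look roughly like this; the image tag below is only an example, not a recommendation from the thread:

  ceph config dump | grep container_image
  ceph config set mgr mgr/cephadm/container_image_grafana quay.io/ceph/ceph-grafana:8.3.5
  ceph orch redeploy grafana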

[ceph-users] Re: Ceph iSCSI GW not working with VMware VMFS and Windows Clustered Storage Volumes (CSV)

2023-06-19 Thread Robert Sander
On 19.06.23 13:47, Work Ceph wrote: Recently, we needed to add some VMware clusters as clients for the iSCSI GW, as well as Windows systems using Clustered Storage Volumes (CSV), and we are facing a weird situation. In Windows, for instance, the iSCSI block can be mounted, formatted

[ceph-users] Re: Ceph iSCSI GW not working with VMware VMFS and Windows Clustered Storage Volumes (CSV)

2023-06-19 Thread Maged Mokhtar
Windows Clustered Shared Volumes and Failover Clustering require support for clustered persistent reservations by the block device to coordinate access by multiple hosts. The default iSCSI implementation in Ceph does not support this; you can use the iSCSI implementation in the PetaSAN project:

[ceph-users] How does a "ceph orch restart SERVICE" affect availability?

2023-06-19 Thread Mikael Öhman
The documentation very briefly explains a few core commands for restarting things (https://docs.ceph.com/en/quincy/cephadm/operations/#starting-and-stopping-daemons), but I feel I'm missing quite a few details about what is safe to do. I have a system in production, clusters connected via CephFS and some
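The commands in question, with placeholder service and daemon names; "restart" acts on every daemon of a service, while "daemon restart" targets a single daemon:

  ceph orch ls                                        # list services and their daemon counts
  ceph orch restart mds.cephfs                        # restart all daemons of the service
  ceph orch daemon restart mds.cephfs.host1.abcdef    # restart one specific daemon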

[ceph-users] Re: same OSD in multiple CRUSH hierarchies

2023-06-19 Thread Budai Laszlo
Hi, actually I've learned that a rule doesn't need to start with a root bucket, so I can have rules that only consider a subtree of my total resources and achieve what I was trying to do with the different disjunct hierarchies. BTW: it is possible to have different trees with different
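A sketch of such a subtree-scoped rule: the "root" argument of create-replicated can point at any existing bucket, not just the tree root (bucket and rule names are placeholders):

  ceph osd crush rule create-replicated rack1-only rack1 host   # place replicas only under bucket "rack1"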

[ceph-users] Re: Starting v17.2.5 RGW SSE with default key (likely others) no longer works

2023-06-19 Thread Casey Bodley
On Sat, Jun 17, 2023 at 1:11 PM Jayanth Reddy wrote: > > Hello Folks, > > I've been experimenting with RGW encryption and found this out. > Focusing on Quincy and Reef dev: for SSE (any method) to work, transit > has to be encrypted end to end; however, if there is a proxy, then [1] can > be m
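The RGW options usually involved in this scenario, shown only as a hedged example; the key is a dummy placeholder (it must be a base64-encoded 256-bit value) and rgw_crypt_default_encryption_key is intended for testing only:

  ceph config set global rgw_crypt_require_ssl false                       # e.g. when TLS terminates at a proxy
  ceph config set global rgw_crypt_default_encryption_key <base64-256-bit-key>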

[ceph-users] Re: header_limit in AsioFrontend class

2023-06-19 Thread Casey Bodley
On Sat, Jun 17, 2023 at 8:37 AM Vahideh Alinouri wrote: > > Dear Ceph Users, > > I am writing to request the backporting of changes related to the > AsioFrontend class, specifically regarding the header_limit value. > > In the Pacific release of Ceph, the header_limit value in the > AsioFrontend c

[ceph-users] Re: Help needed to configure erasure coding LRC plugin

2023-06-19 Thread Michel Jouvin
Hi Eugen, Thank you very much for these detailed tests that match what I observed and reported earlier. I'm happy to see that we have the same understanding of how it should work (based on the documentation). Is there any way other than this list to get in contact with the plugin developers

[ceph-users] Re: CephFS metadata pool grows by two orders of magnitude while trimming (?) snapshots

2023-06-19 Thread Janek Bevendorff
Hi Patrick, The event log size of 3 of the 5 MDSs is still very high. mds.1, mds.3, and mds.4 report between 4 and 5 million events, mds.0 around 1.4 million, and mds.2 between 0 and 200,000. The numbers have been constant since my last MDS restart four days ago. I ran your ceph-gather.sh script a
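The per-MDS event counts mentioned above can be read from the MDS admin socket on the host running that daemon; the daemon name below is a placeholder:

  ceph daemon mds.cephfs.host1.abcdef perf dump mds_log   # 'ev' = journal events, 'seg' = segments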

[ceph-users] Re: Help needed to configure erasure coding LRC plugin

2023-06-19 Thread Eugen Block
Hi, adding the dev mailing list, hopefully someone there can chime in. But apparently the LRC code hasn't been maintained for a few years (https://github.com/ceph/ceph/tree/main/src/erasure-code/lrc). Let's see... Quoting Michel Jouvin: Hi Eugen, Thank you very much for these detailed

[ceph-users] Re: Ceph iSCSI GW not working with VMware VMFS and Windows Clustered Storage Volumes (CSV)

2023-06-19 Thread Work Ceph
I see, thanks for the feedback, guys! It is interesting that the Ceph Manager does not allow us to export iSCSI blocks without selecting 2 or more iSCSI portals. Therefore, we will always use at least two, and as a consequence that feature is not going to be supported. Can I export an RBD image via iSCSI

[ceph-users] Re: Ceph iSCSI GW not working with VMware VMFS and Windows Clustered Storage Volumes (CSV)

2023-06-19 Thread Angelo Hongens
As a side note: there's the Windows RBD driver, which will get you way more performance. It's labeled beta, but it seems to work fine for a lot of people. If you have a test lab you could try that. Angelo. On 19/06/2023 18:16, Work Ceph wrote: I see, thanks for the feedback guys! It is inter
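With the Ceph for Windows client and WNBD driver installed, mapping an image looks roughly like this (pool and image names are placeholders):

  rbd device map rbd/disk01   # expose the RBD image as a local block device on Windows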

[ceph-users] Transmit rate metric based per bucket

2023-06-19 Thread Szabo, Istvan (Agoda)
Hello, I'd like to know whether there is a way to query metrics/logs in Octopus (or in a newer version; I'm interested for the future too) about the bandwidth used per bucket for PUT/GET operations. Thank you
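One option that exists in Octopus is the RGW usage log, which records bytes sent/received per user and bucket once enabled; a hedged sketch, with a placeholder user ID and date range:

  ceph config set global rgw_enable_usage_log true
  radosgw-admin usage show --uid=someuser --start-date=2023-06-01 --end-date=2023-06-19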