[ceph-users] Re: 16.2.11 pacific QE validation status

2023-01-20 Thread Nizamudeen A
Dashboard lgtm! Regards, Nizam On Fri, Jan 20, 2023, 22:09 Yuri Weinstein wrote: > The overall progress on this release is looking much better and if we > can approve it we can plan to publish it early next week. > > Still seeking approvals > > rados - Neha, Laura > rook - Sébastien Han > cepha

[ceph-users] Re: Retrieve number of read/write operations for a particular file in Cephfs

2023-01-20 Thread Patrick Donnelly
Hello, On Mon, Jan 16, 2023 at 11:04 AM thanh son le wrote: > > Hi, > > I have been studying the Ceph and RADOS documentation but I could not find > any metrics to measure the number of read/write operations for each file. I > understand that CephFS is the front-end, the file is going to be store
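The counters that are readily available are per-client and per-daemon aggregates rather than per-file; a hedged sketch of where to look, with an illustrative admin-socket path:

    # enable the mgr stats module and view per-client I/O metrics
    ceph mgr module enable stats
    ceph fs perf stats

    # or dump a ceph-fuse client's aggregate performance counters
    ceph daemon /var/run/ceph/ceph-client.admin.asok perf dump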

[ceph-users] Re: 16.2.11 pacific QE validation status

2023-01-20 Thread Neha Ojha
On Fri, Jan 20, 2023 at 12:36 PM Laura Flores wrote: > From my end, rados looks good. All failures are known. Leaving final > approval to Neha. > > On Fri, Jan 20, 2023 at 12:03 PM Ernesto Puerta > wrote: > >> CCing Nizam as Dashboard lead for review & approval. >> >> Kind Regards, >> Ernesto >>

[ceph-users] Re: 16.2.11 pacific QE validation status

2023-01-20 Thread Laura Flores
From my end, rados looks good. All failures are known. Leaving final approval to Neha. On Fri, Jan 20, 2023 at 12:03 PM Ernesto Puerta wrote: > CCing Nizam as Dashboard lead for review & approval. > > Kind Regards, > Ernesto > > > On Fri, Jan 20, 2023 at 6:42 PM Adam King wrote: > >> cephadm ap

[ceph-users] Re: trouble deploying custom config OSDs

2023-01-20 Thread seccentral
Hello, Thank you for the valuable info, and especially for the Slack link (it's not listed on the community page). The ceph-volume command was issued in the following manner: log in to my 1st VPS, from which I performed the bootstrap with cephadm, exec > sudo cephadm shell, which gets me a root shell i
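A minimal sketch of the sequence described above, assuming a cephadm-bootstrapped host and a hypothetical data device /dev/sdb (adjust paths to your layout):

    # open a root shell inside the cephadm container on the bootstrap host
    sudo cephadm shell

    # inside that shell, create an OSD on a specific device with ceph-volume
    ceph-volume lvm create --data /dev/sdb

With cephadm the usual path is "ceph orch daemon add osd <host>:<device>"; an OSD created by running ceph-volume by hand inside the shell may not be picked up by the orchestrator automatically.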

[ceph-users] Re: 16.2.11 pacific QE validation status

2023-01-20 Thread Ernesto Puerta
CCing Nizam as Dashboard lead for review & approval. Kind Regards, Ernesto On Fri, Jan 20, 2023 at 6:42 PM Adam King wrote: > cephadm approved. Known failures. > > On Fri, Jan 20, 2023 at 11:39 AM Yuri Weinstein > wrote: > >> The overall progress on this release is looking much better and if

[ceph-users] Re: 16.2.11 pacific QE validation status

2023-01-20 Thread Adam King
cephadm approved. Known failures. On Fri, Jan 20, 2023 at 11:39 AM Yuri Weinstein wrote: > The overall progress on this release is looking much better and if we > can approve it we can plan to publish it early next week. > > Still seeking approvals > > rados - Neha, Laura > rook - Sébastien Han

[ceph-users] Re: 16.2.11 pacific QE validation status

2023-01-20 Thread Casey Bodley
On Fri, Jan 20, 2023 at 11:39 AM Yuri Weinstein wrote: > > The overall progress on this release is looking much better and if we > can approve it we can plan to publish it early next week. > > Still seeking approvals > > rados - Neha, Laura > rook - Sébastien Han > cephadm - Adam > dashboard - Ern

[ceph-users] Re: 16.2.11 pacific QE validation status

2023-01-20 Thread Yuri Weinstein
The overall progress on this release is looking much better and if we can approve it we can plan to publish it early next week. Still seeking approvals rados - Neha, Laura rook - Sébastien Han cephadm - Adam dashboard - Ernesto rgw - Casey rbd - Ilya (full rbd run in progress now) krbd - Ilya fs

[ceph-users] RBD to fail fast/auto unmap in case of timeout

2023-01-20 Thread Mathias Chapelain
Hello, We would like to run a RAID1 between a local storage device and an RBD device. This would allow us to sustain network failures or Ceph failures, and also give better read performance, as we would set it up with write-mostly on RBD in mdadm. Basically we would like to implement https://discord.com
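A minimal sketch of the mdadm layout described, with hypothetical pool, image, and device names:

    # map the RBD image; the kernel client typically exposes it as /dev/rbd0
    rbd map mypool/myimage

    # RAID1 with the RBD member flagged write-mostly, so reads
    # are served from the local device whenever possible
    mdadm --create /dev/md0 --level=1 --raid-devices=2 \
          /dev/sdb1 --write-mostly /dev/rbd0

How quickly the array degrades instead of hanging still depends on how long the krbd layer blocks when the cluster becomes unreachable, which is exactly the timeout behaviour being asked about here.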

[ceph-users] Pools and classes

2023-01-20 Thread Massimo Sgaravatto
Dear all, I have a Ceph cluster where so far all OSDs have been rotational HDD disks (actually there are some SSDs, used only for block.db and wal.db). I now want to add some SSD disks to be used as OSDs. My use case is: 1) for the existing pools, keep using only HDD disks; 2) create some new pools u
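A minimal sketch of the device-class approach this use case typically maps to, with illustrative rule and pool names:

    # replicated CRUSH rules restricted to one device class each
    ceph osd crush rule create-replicated replicated_hdd default host hdd
    ceph osd crush rule create-replicated replicated_ssd default host ssd

    # keep an existing pool on HDD OSDs only
    ceph osd pool set mypool crush_rule replicated_hdd

    # place a new pool on SSD OSDs only
    ceph osd pool create fastpool 64 64 replicated replicated_ssd

Even though the existing OSDs already carry the hdd class, switching a pool from the default rule to a class-restricted rule can still trigger some rebalancing, so it is worth trying on a test pool first.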