[ceph-users] Re: squid 19.2.2 deployed with cephadmin - no grafana data on some dashboards ( RGW, MDS)

2025-07-23 Thread Ryan Sleeth

[ceph-users] Compression confusion

2025-07-11 Thread Ryan Sleeth
bluestore_compression_required_ratio 0.975 -- Ryan Sleeth
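
For anyone finding this thread later, a minimal sketch of where that knob sits, assuming a hypothetical pool name "mypool" and lz4/aggressive settings that are not from the original post:

    # Per-pool compression settings (pool name is a placeholder)
    ceph osd pool set mypool compression_algorithm lz4
    ceph osd pool set mypool compression_mode aggressive

    # The BlueStore option quoted above: a chunk is only stored compressed if
    # compressed_size / original_size is at or below this ratio
    ceph config set osd bluestore_compression_required_ratio 0.975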

[ceph-users] Re: Heads up: bad drive for Ceph Western Digital Ultrastar DC HC560

2025-07-07 Thread Ryan Sleeth
t registered any such issues > k -- Ryan Sleeth

[ceph-users] Re: CephFS with Ldap

2025-06-30 Thread Ryan Sleeth
t; > > > Thanks, > > Gagan > > ___ > > ceph-users mailing list -- ceph-users@ceph.io > > To unsubscribe send an email to ceph-users-le...@ceph.io > ___ > ceph-users mailing lis

[ceph-users] First time configuration advice

2025-06-23 Thread Ryan Sleeth
I am setting up my first cluster of 9-nodes each with 8x 20T HDDs and 2x 2T NVMes. I plan to partition the NVMes into 5x 300G so that one partition can be used by cephfs_metadata (SSD only), while the other 4x partitions will be paired as db devices for 4x of the HDDs. The cluster will only be used
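
A sketch of how that HDD-plus-NVMe-db layout is often expressed as a cephadm OSD service spec; the service name and host pattern below are assumptions, and cephadm generally prefers whole devices or LVM logical volumes over hand-made partitions for db_devices:

    cat <<'EOF' > osd-spec.yaml
    service_type: osd
    service_id: hdd-with-nvme-db
    placement:
      host_pattern: '*'
    spec:
      data_devices:
        rotational: 1     # the 20T HDDs
      db_devices:
        rotational: 0     # NVMe space used for block.db
    EOF
    ceph orch apply -i osd-spec.yaml --dry-run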

[ceph-users] Re: Adding OSD with separate DB via "ceph orch daemon add osd"

2025-05-27 Thread Ryan Rempel
>> So, my first question is whether it's possible to specify a separate DB via >> "ceph orch daemon add osd"? > I believe it is, don’t have the syntax to hand. Thanks for the response, and the service spec examples — that gave me some courage to try a few things. What I settled on for my case
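
For reference, the advanced form of that command in recent cephadm releases looks roughly like the following; the hostname and device paths are placeholders, so check the output of "ceph orch daemon add osd -h" on your own version before running it:

    ceph orch daemon add osd ceph-node1:data_devices=/dev/sdb,db_devices=/dev/nvme0n1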

[ceph-users] Adding OSD with separate DB via "ceph orch daemon add osd"

2025-05-27 Thread Ryan Rempel
ize — that you wouldn't specify both, for instance. I'm also reading the ceph-volume docs for "prepare". I suppose if I find that more suitable, it might be possible to "prepare" an OSD with ceph-volume and then "adopt" it with cephadm? Well, just wri
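
The ceph-volume side of that idea might look like the sketch below (device paths and hostname are hypothetical); whether the cephadm activation step is the right follow-up for a prepared OSD is exactly the kind of thing worth trying on a scratch host first:

    # On the target host: prepare an OSD with a separate block.db
    ceph-volume lvm prepare --data /dev/sdb --block.db /dev/nvme0n1p1

    # Then ask cephadm to deploy daemons for any existing/prepared OSDs on that host
    ceph cephadm osd activate ceph-node1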

[ceph-users] Re: AssumeRoleWithWebIdentity in RGW with Azure AD

2024-07-11 Thread Ryan Rempel
(I believe it doesn't break them, but haven't tested). -- Ryan Rempel From: Pritha Srivastava Sent: Monday, July 8, 2024 10:38 PM Hi Ryan, This appears to be a known issue and is tracked here: https://tracker.ceph.com/issues/54562. There is a wo

[ceph-users] AssumeRoleWithWebIdentity in RGW with Azure AD

2024-07-08 Thread Ryan Rempel
I'm curious whether anyone else has been trying to get this to work with Azure AD, and whether they have run into similar problems. And, of course, whether I appear to be misunderstanding anything about how this is supposed to work. Ryan Rempel Director of Information Technology Canadian Mennoni
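
For context, the general shape of an STS web-identity flow against RGW, written with the AWS CLI; every value below (endpoint, tenant, client ID, thumbprint, role ARN, token) is a placeholder rather than anything from this thread:

    # Register the Azure AD issuer as an OIDC provider via RGW's IAM API
    aws --endpoint-url https://rgw.example.com iam create-open-id-connect-provider \
        --url https://login.microsoftonline.com/TENANT_ID/v2.0 \
        --client-id-list APP_CLIENT_ID \
        --thumbprint-list CA_THUMBPRINT

    # Exchange an Azure-issued JWT for temporary RGW credentials
    aws --endpoint-url https://rgw.example.com sts assume-role-with-web-identity \
        --role-arn arn:aws:iam:::role/AzureUsers \
        --role-session-name test-session \
        --web-identity-token "$JWT"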

[ceph-users] splitting client and replication traffic

2023-05-06 Thread Justin Ryan
serves clients over.. Is this a common configuration, and/or can anyone provide me some guidance?! Thanks in advance! Best! J -- Justin Alan Ryan
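
For anyone searching for this later: splitting the two traffic types is normally just the public_network / cluster_network pair, set in ceph.conf or via the config store; the subnets here are placeholders:

    # Front-side (client, MON, MDS, RGW) traffic
    ceph config set global public_network 192.168.10.0/24
    # Back-side OSD replication, recovery and backfill traffic
    ceph config set global cluster_network 192.168.20.0/24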

[ceph-users] set-rgw-api-host removed from pacific

2022-12-03 Thread Ryan
ternally facing gateways. How do I control which rados gateways the dashboard will connect to? Thanks, Ryan

[ceph-users] Re: df shows wrong size of cephfs share when a subdirectory is mounted

2022-04-21 Thread Ryan Taylor
ino 0x1001b45c2fa cap 0cde56f9 issued pAsLsXsFs (mask AsXsFs) [94831.006576] ceph: __touch_cap 3bb3ccb2 cap 0cde56f9 mds0 [94831.006581] ceph: statfs Thanks, -rt Ryan Taylor Research Computing Specialist Research Computing Services, University Systems University of Victoria

[ceph-users] Re: df shows wrong size of cephfs share when a subdirectory is mounted

2022-04-21 Thread Ryan Taylor
ceph version (ours is v14.2.22), or could it depend on something Manila is doing? Is there any other useful information I could collect? Thanks, -rt Ryan Taylor Research Computing Specialist Research Computing Services, University Systems University of Victoria

[ceph-users] Re: df shows wrong size of cephfs share when a subdirectory is mounted

2022-04-20 Thread Ryan Taylor
ceph.quota.max_bytes="121212" [fedora@cephtest ~]$ getfattr -n ceph.quota.max_bytes /mnt/ceph2 getfattr: Removing leading '/' from absolute path names # file: mnt/ceph2 ceph.quota.max_bytes="121212" Thanks, -rt From: Luís Henriques Sen
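
For completeness, the quota shown above is set the same way it is read, via the virtual xattr; the value simply mirrors the one in the thread:

    # Set a byte quota on the shared directory (value copied from the example above)
    setfattr -n ceph.quota.max_bytes -v 121212 /mnt/ceph2

    # With a quota-aware client, df on the mount should then report the quota as the size
    df -h /mnt/ceph2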

[ceph-users] Re: df shows wrong size of cephfs share when a subdirectory is mounted

2022-04-19 Thread Ryan Taylor
"Merged into 5.2-rc1." So it seems https://tracker.ceph.com/issues/55090 is either a new issue or a regression of the previous issue. Thanks, -rt Ryan Taylor Research Computing Specialist Research Computing Services, University Systems University of Victoria

[ceph-users] df shows wrong size of cephfs share when a subdirectory is mounted

2022-04-14 Thread Ryan Taylor
issue is in cephfs or Manila, but what would be required to get the right size and usage stats to be reported by df when a subpath of a share is mounted? Thanks! -rt Ryan Taylor Research Computing Specialist Research Computing Services, University Systems University of Vic
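
For anyone reproducing this, the scenario is a kernel-client mount of a share subpath, roughly as below; the monitor address, cephx user and Manila-style path are placeholders:

    mount -t ceph 192.0.2.10:6789:/volumes/_nogroup/SHARE_ID/SUBDIR /mnt/ceph2 \
        -o name=manila-user,secretfile=/etc/ceph/manila-user.secret
    df -h /mnt/ceph2   # the size reported here is what the thread is about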

[ceph-users] Re: Static website hosting with RGW

2019-10-29 Thread Ryan
i gateways are in xml still. Ryan On Mon, Oct 28, 2019 at 10:49 AM Casey Bodley wrote: > > On 10/24/19 8:38 PM, Oliver Freyermuth wrote: > > Dear Cephers, > > > > I have a question concerning static websites with RGW. > > To my understanding, it is best to run
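
For readers landing here, the options usually involved in a dedicated s3website RGW instance look roughly like this; the daemon section name and hostname are placeholders, and the exact option set varies by release:

    ceph config set client.rgw.website rgw_enable_static_website true
    ceph config set client.rgw.website rgw_enable_apis "s3, s3website"
    ceph config set client.rgw.website rgw_dns_s3website_name objects-website.example.com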

[ceph-users] Re: iSCSI write performance

2019-10-25 Thread Ryan
20153264 7796.56 510955233.33 21160832 7814.44 512126854.90 elapsed:21 ops: 163840 ops/sec: 7659.97 bytes/sec: 502004079.43 On Fri, Oct 25, 2019 at 11:54 AM Mike Christie wrote: > On 10/24/2019 11:47 PM, Ryan wrote: > > I'm using CentOS 7.7.1908 with kernel

[ceph-users] Re: iSCSI write performance

2019-10-25 Thread Ryan
Can you point me to the directions for the kernel mode iscsi backend? I was following these directions https://docs.ceph.com/docs/master/rbd/iscsi-target-cli/ Thanks, Ryan On Fri, Oct 25, 2019 at 11:29 AM Mike Christie wrote: > On 10/25/2019 09:31 AM, Ryan wrote: > > I'm

[ceph-users] Re: iSCSI write performance

2019-10-25 Thread Ryan
will trigger VMWare to use vaai extended copy, which > activates LIO's xcopy functionality which uses 512KB block sizes by > default. We also bumped the xcopy block size to 4M (rbd object size) which > gives around 400 MB/s vmotion speed, the same speed can also be achieved > via Veeam

[ceph-users] Re: iSCSI write performance

2019-10-25 Thread Ryan
Oct 2019 at 20:16, Mike Christie > wrote: > >> On 10/24/2019 12:22 PM, Ryan wrote: >> > I'm in the process of testing the iscsi target feature of ceph. The >> > cluster is running ceph 14.2.4 and ceph-iscsi 3.3. It consists of 5 >> >> What

[ceph-users] Re: iSCSI write performance

2019-10-24 Thread Ryan
client: 344 MiB/s rd, 625 KiB/s wr, 5.54k op/s rd, 62 op/s wr I'm going to test bonnie++ with an rbd volume mounted directly on the iscsi gateway. Also will test bonnie++ inside a VM on a ceph backed datastore. On Thu, Oct 24, 2019 at 7:15 PM Mike Christie wrote: > On 10/24/2019 12:22 P
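
Alongside bonnie++, a quick way to separate RBD-native write throughput from the iSCSI/LIO layer is rbd bench run on the gateway itself; the pool and image names are placeholders:

    rbd create iscsi-pool/benchtest --size 10G
    rbd bench --io-type write --io-size 4M --io-threads 16 --io-total 10G iscsi-pool/benchtest
    rbd rm iscsi-pool/benchtest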

[ceph-users] Re: iSCSI write performance

2019-10-24 Thread Ryan
Drew Weaver wrote: > I was told by someone at Red Hat that ISCSI performance is still several > magnitudes behind using the client / driver. > > Thanks, > -Drew > > > -Original Message- > From: Nathan Fish > Sent: Thursday, October 24, 2019 1:27 PM > To:

[ceph-users] iSCSI write performance

2019-10-24 Thread Ryan
f the datastore is fast at 200-300MB/s. What should I be looking at to track down the write performance issue? In comparison with the Nimble Storage arrays I can see 200-300MB/s in both directions. Thanks, Ryan