bluestore_compression_required_ratio 0.975
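For context: this option is the worst compressed/original size ratio BlueStore
will still accept before it stores a chunk uncompressed; the default is 0.875
if I remember correctly, so 0.975 keeps compressed data even when it only
shrinks by a few percent. A minimal sketch of applying it cluster-wide through
the config database, assuming it should cover all OSDs:

  ceph config set osd bluestore_compression_required_ratio 0.975
  ceph config get osd bluestore_compression_required_ratio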
--
Ryan Sleeth
I am setting up my first cluster of 9 nodes, each with 8x 20T HDDs and 2x 2T
NVMes. I plan to partition each NVMe into 5x 300G so that one partition can
be used for cephfs_metadata (SSD only), while the other 4 partitions will
be paired as DB devices for 4 of the HDDs. The cluster will only be used
>> So, my first question is whether it's possible to specify a separate DB via
>> "ceph orch daemon add osd"?
> I believe it is, don’t have the syntax to hand.
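For what it's worth, recent cephadm docs show a key=value form of that command
that can name the DB device directly; a rough sketch for one HDD plus one NVMe
partition (host and device names are just placeholders, and older releases may
only accept the plain host:device form):

  ceph orch daemon add osd node01:data_devices=/dev/sdb,db_devices=/dev/nvme0n1p2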
Thanks for the response, and the service spec examples — that gave me some
courage to try a few things.
What I settled on for my case
ize — that you
wouldn't specify both, for instance.
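Purely as an illustration of the spec route mentioned above, a minimal OSD
service spec for a layout like the one described earlier (HDDs as data devices,
NVMe as DB devices); the service_id, host pattern and rotational filters are
assumptions, and whether cephadm will accept pre-made NVMe partitions as
db_devices may depend on the release:

  service_type: osd
  service_id: hdd-with-nvme-db
  placement:
    host_pattern: '*'
  spec:
    data_devices:
      rotational: 1
    db_devices:
      rotational: 0

applied with "ceph orch apply -i osd-spec.yaml".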
I'm also reading the ceph-volume docs for "prepare". I suppose if I find that
more suitable, it might be possible to "prepare" an OSD with ceph-volume and
then "adopt" it with cephadm?
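In case it helps anyone searching later, the ceph-volume half of that idea
would look roughly like this (device paths and hostname are placeholders, and
I have not verified the pickup step myself):

  # on the OSD host: data on the HDD, block.db on an NVMe partition
  ceph-volume lvm prepare --bluestore --data /dev/sdb --block.db /dev/nvme0n1p2

  # cephadm documents "ceph cephadm osd activate <host>" for picking up OSDs
  # that already exist on a host; whether it covers this exact flow is worth
  # testing on a scratch node first
  ceph cephadm osd activate node01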
Well, just wri
(I believe it doesn't break them, but haven't tested).
--
Ryan Rempel
From: Pritha Srivastava
Sent: Monday, July 8, 2024 10:38 PM
Hi Ryan,
This appears to be a known issue and is tracked here:
https://tracker.ceph.com/issues/54562. There is a wo
I'm curious whether anyone else has been trying to get this to work
with Azure AD, and whether they have run into similar problems. And, of course,
whether I appear to be misunderstanding anything about how this is supposed to
work.
Ryan Rempel
Director of Information Technology
Canadian Mennoni
serves clients over..
Is this a common configuration, and/or can anyone provide me some guidance?!
Thanks in advance!
Best!
J
--
Justin Alan Ryan
ternally facing gateways. How do
I control which rados gateways the dashboard will connect to?
Thanks,
Ryan
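Not sure whether this still applies to newer releases, but in the Nautilus-era
dashboard the RGW endpoint is an explicit dashboard setting, so something along
these lines should pin it to the internal gateways (host/port are placeholders):

  ceph dashboard set-rgw-api-host rgw-internal.example.com
  ceph dashboard set-rgw-api-port 8080
  ceph dashboard set-rgw-api-scheme http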
ino 0x1001b45c2fa cap 0cde56f9 issued pAsLsXsFs (mask AsXsFs)
[94831.006576] ceph: __touch_cap 3bb3ccb2 cap 0cde56f9 mds0
[94831.006581] ceph: statfs
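For anyone wanting to reproduce that kind of trace: output like the above
normally comes from the kernel client's dynamic debug, which (assuming debugfs
is mounted and the kernel was built with dynamic debug) can be toggled with
something like:

  echo 'module ceph +p' > /sys/kernel/debug/dynamic_debug/control   # enable
  echo 'module ceph -p' > /sys/kernel/debug/dynamic_debug/control   # disable

and then read back from dmesg.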
Thanks,
-rt
Ryan Taylor
Research Computing Specialist
Research Computing Services, University Systems
University of Victoria
ceph version (ours is v14.2.22), or could it depend on
something Manila is doing?
Is there any other useful information I could collect?
Thanks,
-rt
Ryan Taylor
Research Computing Specialist
Research Computing Services, University Systems
University of Victoria
ta.max_bytes="121212"
[fedora@cephtest ~]$ getfattr -n ceph.quota.max_bytes /mnt/ceph2
getfattr: Removing leading '/' from absolute path names
# file: mnt/ceph2
ceph.quota.max_bytes="121212"
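For reference, the quota shown above is set the same way it is read, through
the xattr interface; a minimal sketch using the path and value from this test:

  setfattr -n ceph.quota.max_bytes -v 121212 /mnt/ceph2
  getfattr -n ceph.quota.max_bytes /mnt/ceph2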
Thanks,
-rt
From: Luís Henriques
Sen
"Merged into 5.2-rc1."
So it seems https://tracker.ceph.com/issues/55090 is either a new issue or a
regression of the previous issue.
Thanks,
-rt
Ryan Taylor
Research Computing Specialist
Research Computing Services, University Systems
University of Victoria
issue is in cephfs or Manila, but what would be required to
get the right size and usage stats to be reported by df when a subpath of a
share is mounted?
Thanks!
-rt
Ryan Taylor
Research Computing Specialist
Research Computing Services, University Systems
University of Victoria
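If I understand the CephFS quota/statfs behaviour correctly, the client reports
the nearest ancestor quota as the filesystem size in df, so one way to get
sensible numbers for a subpath mount is to put a quota on that subdirectory
itself; a hedged sketch with a made-up path and size:

  setfattr -n ceph.quota.max_bytes -v $((500*1024*1024*1024)) /mnt/share/subdir

after which a client mounting that subdir should see the 500G total (and the
usage within it) from df. Whether Manila can be told to do this per share is a
separate question.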
i gateways
are in xml still.
Ryan
On Mon, Oct 28, 2019 at 10:49 AM Casey Bodley wrote:
>
> On 10/24/19 8:38 PM, Oliver Freyermuth wrote:
> > Dear Cephers,
> >
> > I have a question concerning static websites with RGW.
> > To my understanding, it is best to run
   20    153264   7796.56   510955233.33
   21    160832   7814.44   512126854.90
elapsed: 21  ops: 163840  ops/sec: 7659.97  bytes/sec: 502004079.43
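For context, that output has the shape of an "rbd bench" run, and the
ops/sec-to-bytes/sec ratio works out to 64K per op, so an invocation along
these lines (pool/image names are placeholders) should produce comparable
numbers:

  rbd bench --io-type write --io-size 64K --io-threads 16 --io-total 10G rbd/testimg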
On Fri, Oct 25, 2019 at 11:54 AM Mike Christie wrote:
> On 10/24/2019 11:47 PM, Ryan wrote:
> > I'm using CentOS 7.7.1908 with kernel
Can you point me to the directions for the kernel mode iscsi backend? I was
following these directions:
https://docs.ceph.com/docs/master/rbd/iscsi-target-cli/
Thanks,
Ryan
On Fri, Oct 25, 2019 at 11:29 AM Mike Christie wrote:
> On 10/25/2019 09:31 AM, Ryan wrote:
> > I'm
will trigger VMWare to use vaai extended copy, which
> activates LIO's xcopy functionality which uses 512KB block sizes by
> default. We also bumped the xcopy block size to 4M (rbd object size) which
> gives around 400 MB/s vmotion speed, the same speed can also be achieved
> via Veeam
Oct 2019 at 20:16, Mike Christie
> wrote:
>
>> On 10/24/2019 12:22 PM, Ryan wrote:
>> > I'm in the process of testing the iscsi target feature of ceph. The
>> > cluster is running ceph 14.2.4 and ceph-iscsi 3.3. It consists of 5
>>
>> What
client: 344 MiB/s rd, 625 KiB/s wr, 5.54k op/s rd, 62 op/s wr
I'm going to test bonnie++ with an rbd volume mounted directly on the iscsi
gateway. Also will test bonnie++ inside a VM on a ceph backed datastore.
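In case it is useful to anyone else, the gateway-local test can be done over
the plain krbd path; a rough sketch with pool/image/mountpoint names as
placeholders:

  rbd create rbd/benchtest --size 100G
  rbd map rbd/benchtest            # returns e.g. /dev/rbd0
  mkfs.xfs /dev/rbd0
  mount /dev/rbd0 /mnt/benchtest
  bonnie++ -d /mnt/benchtest -u root

Comparing that against the same bonnie++ run inside a VM on the iSCSI-backed
datastore should show roughly what the LIO/tcmu layer itself is costing.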
On Thu, Oct 24, 2019 at 7:15 PM Mike Christie wrote:
> On 10/24/2019 12:22 P
Drew Weaver wrote:
> I was told by someone at Red Hat that iSCSI performance is still several
> orders of magnitude behind using the client / driver.
>
> Thanks,
> -Drew
>
>
> -Original Message-
> From: Nathan Fish
> Sent: Thursday, October 24, 2019 1:27 PM
> To:
f the
datastore is fast at 200-300MB/s.
What should I be looking at to track down the write performance issue? By
comparison, with the Nimble Storage arrays I see 200-300 MB/s in both
directions.
Thanks,
Ryan
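One way to split the problem is to take VMware and LIO out of the picture and
measure the cluster's own write path first (pool name is a placeholder):

  rados bench -p rbd 30 write --no-cleanup
  rados bench -p rbd 30 seq
  rados -p rbd cleanup

If the rados bench writes are also far below the read numbers, the bottleneck
is cluster-side (DB/WAL devices, network); if they look fine, that points back
at the iSCSI/VAAI layer.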