Alliances and SUSE Embedded
db...@suse.com
918.528.4422
From: ceph-users on behalf of Joe Comeau
Date: Friday, August 31, 2018 at 1:07 PM
To: "ceph-users@lists.ceph.com" , Peter Eisch
Subject: Re: [ceph-users] cephfs speed
Are you using BlueStore OSDs?
If so, my thought process on this is that what we're having an issue with is
caching and BlueStore.
See the thread on BlueStore caching.
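For context on the caching suggestion above: BlueStore's per-OSD cache size is controlled by a few config options. The fragment below is only an illustrative sketch showing the relevant knobs with their rough era-appropriate defaults, not a recommendation for this cluster:

```ini
# ceph.conf fragment -- illustrative values only; tune for your hardware
[osd]
; per-OSD BlueStore cache when the data device is an SSD (~3 GiB default)
bluestore_cache_size_ssd = 3221225472
; per-OSD BlueStore cache when the data device is an HDD (~1 GiB default)
bluestore_cache_size_hdd = 1073741824
```

Raising these trades OSD-host RAM for a larger onode/data cache, which is one of the levers discussed in that BlueStore caching thread.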
"
...@suse.com<mailto:db...@suse.com>
918.528.4422
From: ceph-users on behalf of Joe Comeau
Date: Friday, August 31, 2018 at 1:07 PM
To: "ceph-users@lists.ceph.com" , Peter Eisch
Subject: Re: [ceph-users] cephfs speed
Are you using bluestore OSDs ?
if so my thought process on thi
Pretty plain, but I'm open to tweaking!
peter
From: Gregory Farnum
Date: Thursday, August 30, 2018 at 11:47 AM
To: Peter Eisch
Cc: "ceph-users@lists.ceph.com"
Subject: Re: [ceph-users] cephfs speed
How are you mounting CephFS? It may be that the cache settings are just set
very badly for a 10G pipe. Plus rados bench is a very parallel large-IO
benchmark and many benchmarks you might dump into a filesystem are
definitely not.
-Greg
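As one concrete illustration of the mount-settings question: the CephFS kernel client accepts rsize/wsize mount options that cap the per-request read/write size. The monitor address, mount point, and sizes below are hypothetical, and the command is printed rather than executed since it needs root and a live cluster:

```shell
# Hypothetical monitor and mount point; rsize/wsize are kernel-client
# mount options giving the maximum bytes per read/write request.
# Larger values generally favor streaming I/O over a fat (10G) pipe.
MON="mon1.example.com:6789"
OPTS="name=admin,rsize=16777216,wsize=16777216"
# Print the command instead of running it (requires a live cluster):
echo "mount -t ceph ${MON}:/ /mnt/cephfs -o ${OPTS}"
```

Comparing throughput with and without such options is a cheap first experiment before touching cluster-side settings.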
On Thu, Aug 30, 2018 at 7:54 AM Peter Eisch
wrote:
Hi,
I have a cluster serving cephfs and it works. It’s just slow. Client is using
the kernel driver. I can ‘rados bench’ writes to the cephfs_data pool at wire
speeds (9580Mb/s on a 10G link) but when I copy data into cephfs it is rare to
get above 100Mb/s. Large file writes may start fast
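To make the benchmark mismatch concrete: rados bench defaults to 16 concurrent 4 MiB writes (up to 64 MiB in flight), while an ordinary file copy into CephFS is a single sequential stream. The commands below sketch that comparison; the target path is hypothetical, and both are printed rather than run since they need a live cluster:

```shell
# rados bench defaults: -t 16 concurrent ops, -b 4194304 (4 MiB) objects,
# shown explicitly here -- a highly parallel large-IO workload.
BENCH="rados bench -p cephfs_data 30 write -t 16 -b 4194304"
# A single-stream write through the filesystem, roughly what cp does:
COPY="dd if=/dev/zero of=/mnt/cephfs/bigfile bs=4M count=2500 conv=fsync"
echo "$BENCH"
echo "$COPY"
```

A single stream pays full round-trip latency per in-flight request, so it can sit far below wire speed even when the parallel benchmark saturates the link.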