Hello,
On 18/12/2015 23:26, Don Waterloo wrote:
> rbd -p mypool create speed-test-image --size 1000
> rbd -p mypool bench-write speed-test-image
>
> I get
>
> bench-write io_size 4096 io_threads 16 bytes 1073741824 pattern seq
> SEC OPS OPS/SEC BYTES/SEC
> 1 79053 79070.82
Sorry, one last comment on issue #1 (slow with SCST iSCSI but fast qla2xxx
FC with Ceph RBD):
>>> [...] work fine in combination with SCST, so I'd recommend continuing to
>>> test with a recent kernel. I've been running kernel 4.3.0 myself for some
>>> time on my laptop and development workstation.
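Since the advice above hinges on the kernel in use, a quick sanity check before re-running the SCST tests could look like this (the pool and image names from the earlier post are reused purely as an example):

  uname -r                               # confirm which kernel the target machine actually runs
  sudo rbd map mypool/speed-test-image   # krbd mapping; the resulting /dev/rbdX is what SCST would export
  rbd showmapped                         # list mapped images and their block devices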
Hi,
On 20/12/2015 19:47, Don Waterloo wrote:
> I did a bit more work on this.
>
> On cephfs-fuse, I get ~700 iops.
> On cephfs kernel, I get ~120 iops.
> These were both on 4.3 kernel
>
> So I went back to the 3.16 kernel on the client and observed the same results.
>
> So ~20K iops w/ rbd, ~120 iops [...]
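To make the fuse-vs-kernel comparison reproducible, both mounts can be driven with the same small-block fio job. A sketch, with the monitor address, secret file and mount points as placeholders:

  # kernel client
  sudo mount -t ceph mon1:6789:/ /mnt/cephfs-krn -o name=admin,secretfile=/etc/ceph/admin.secret
  # FUSE client
  sudo ceph-fuse -m mon1:6789 /mnt/cephfs-fuse

  # identical 4K write job on each mount, fsync after every write
  for d in /mnt/cephfs-krn /mnt/cephfs-fuse; do
      fio --name=cephfs-4k --directory="$d" --rw=randwrite --bs=4k \
          --size=256m --iodepth=16 --ioengine=libaio --fsync=1
  done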
On 20/12/2015 21:06, Francois Lafont wrote:
> Ok. Please, can you give us your configuration?
> How many nodes, osds, ceph version, disks (SSD or not, HBA/controller), RAM,
> CPU, network (1Gb/10Gb) etc.?
And one more question: with cephfs-fuse, did you have any specific configuration on
the client side?
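For reference, the kind of information asked for here can be collected with a handful of standard commands on the cluster nodes (the interface name is a placeholder):

  ceph --version                   # ceph release
  ceph -s                          # health, mon/osd counts
  ceph osd tree                    # osd layout per host
  ceph osd df                      # per-osd utilisation
  lsblk -o NAME,ROTA,SIZE,MODEL    # ROTA=0 usually means SSD
  lscpu | head                     # CPU model and core count
  free -h                          # RAM
  ethtool eth0 | grep -i speed     # link speed (1Gb/10Gb); substitute the real interface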
On 20/12/2015 22:51, Don Waterloo wrote:
> All nodes have 10Gbps to each other
Even the link client node <---> cluster nodes?
> OSD:
> $ ceph osd tree
> ID WEIGHT  TYPE NAME    UP/DOWN REWEIGHT PRIMARY-AFFINITY
> -1 5.48996 root default
> -2 0.8     host nubo-1
>  0 0.8         osd.0 [...]
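One way to answer the question about the client link is to measure the path directly, e.g. with iperf3 (the host name is taken from the osd tree above, the rest are placeholders):

  # on one of the cluster nodes, e.g. nubo-1
  iperf3 -s

  # on the client
  iperf3 -c nubo-1 -P 4 -t 30      # 4 parallel streams for 30 s; a clean 10Gb path should show ~9.4 Gbit/s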
On Fri, Dec 18, 2015 at 11:16 AM, Christian Balzer wrote:
>
> Hello,
>
> On Fri, 18 Dec 2015 03:36:12 +0100 Francois Lafont wrote:
>
>> Hi,
>>
>> I have a ceph cluster that is currently unused and I'm seeing (to my mind) very
>> low performance. I'm not an expert at benchmarks; here is an example of a quick
>> bench:
>
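Since the quoted "quick bench" is cut off here, the usual raw-RADOS baseline for this kind of question looks like the following (the pool name from earlier in the thread is reused as a placeholder):

  rados bench -p mypool 30 write -t 16 --no-cleanup   # 30 s of 4 MB writes, 16 in flight, keep the objects
  rados bench -p mypool 30 seq -t 16                  # sequential reads of those objects
  rados bench -p mypool 30 rand -t 16                 # random reads
  rados -p mypool cleanup                             # remove the benchmark objects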
Hi, I have a 2-port 10Gb NIC installed in the Ceph client, but I just want to use
one NIC port for Ceph I/O. How can I achieve this?
Thanks.
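As far as I know there is no client-side Ceph option that picks a NIC port; normally the kernel routing table decides which interface the traffic to the MONs/OSDs leaves on. A minimal sketch, assuming the Ceph public network is 10.0.10.0/24 and the two ports are enp3s0f0/enp3s0f1 (all names and addresses are hypothetical):

  sudo ip addr add 10.0.10.21/24 dev enp3s0f0   # address only the port you want Ceph to use
  sudo ip link set enp3s0f1 down                # leave the second port down so it is never chosen
  ip route get 10.0.10.1                        # confirm which interface traffic to a MON would take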
On Fri, Dec 18, 2015 at 3:43 AM, Bryan Wright wrote:
> Hi folks,
>
> This is driving me crazy. I have a ceph filesystem that behaves normally
> when I "ls" files, and behaves normally when I copy smallish files on or off
> the filesystem, but large files (~ GB size) hang after copying a few [...]
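When large copies stall like this with the kernel client, a few standard places to look (assuming a kernel cephfs mount and debugfs mounted on the client):

  ceph health detail                        # blocked requests, full OSDs, MDS warnings
  dmesg | grep -iE 'ceph|libceph|hung'      # client-side errors or hung-task messages
  sudo cat /sys/kernel/debug/ceph/*/osdc    # in-flight/stuck OSD requests of the kernel client
  sudo cat /sys/kernel/debug/ceph/*/mdsc    # in-flight/stuck MDS requests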