[ceph-users] tons of "failed lossy con, dropping message" => root cause for bad performance ?

2013-09-02 Thread Matthieu Patou
Hello All, I have a simple test setup with 2 osd servers, each with 3 NICs (1Gb each): * One for management (ssh and such) * One for the public network (connected to ceph clients) * One for the cluster (osd inter-connection) I keep seeing these messages: Aug 26 18:43:31 ceph01 ceph-osd: 2013-08-26 1
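A public/cluster network split like the one described is normally declared in ceph.conf; a minimal sketch, with subnets that are assumptions rather than values from the thread:

```ini
[global]
# Assumed subnets -- substitute the actual ranges of the two data NICs
public network  = 192.168.1.0/24   ; NICs facing ceph clients
cluster network = 192.168.2.0/24   ; NICs carrying OSD replication traffic
```

The "failed lossy con, dropping message" lines come from the OSD messenger dropping a message on a lossy (client-facing) connection that has gone away; frequent occurrences are worth correlating with NIC errors or saturation on the interfaces configured above.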

[ceph-users] using ssds with ceph

2013-03-17 Thread Matthieu Patou
Hello all, Our dev environments are quite I/O intensive but don't require much space (~20G per dev environment); at the moment our dev machines are served by VMware and the storage is done on NFS appliances with SAS or SATA drives. After some testing with consumer-grade SSDs we discovered that

Re: [ceph-users] using ssds with ceph

2013-03-17 Thread Matthieu Patou
On 03/17/2013 04:03 PM, Mark Nelson wrote: On 03/17/2013 05:40 PM, Matthieu Patou wrote: Hello all, Our dev environments are quite I/O intensive but don't require much space (~20G per dev environment), for the moment our dev machines are served by VMware and the storage is done i

[ceph-users] ceph + openstack: running osd and nova compute on the same node

2013-03-17 Thread Matthieu Patou
Hello, Is it a good idea to run the osd and nova compute on the same node or not, and why? Matthieu.

Re: [ceph-users] using ssds with ceph

2013-03-18 Thread Matthieu Patou
On 03/17/2013 06:14 PM, Gregory Farnum wrote: On Sunday, March 17, 2013 at 4:03 PM, Mark Nelson wrote: On 03/17/2013 05:40 PM, Matthieu Patou wrote: Hello all, Our dev environments are quite I/O intensive but don't require much space (~20G per dev environment), for the moment ou

Re: [ceph-users] SSD Capacity and Partitions for OSD Journals

2013-03-25 Thread Matthieu Patou
On 03/25/2013 04:07 PM, peter_j...@dell.com wrote: Hi, I have a couple of HW provisioning questions with regard to SSDs for OSD journals. I’d like to provision 12 OSDs per node, and there are enough CPU clocks and memory. Each OSD is allocated one 3TB HDD for OSD data – these 12 * 3TB HDDs
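For sizing the SSD partitions, a common rule of thumb puts each journal at twice the expected OSD throughput times the filestore sync interval; the numbers below are illustrative assumptions, not figures from the thread:

```shell
# Rule of thumb: journal size = 2 * expected throughput * filestore max sync interval
THROUGHPUT_MB=100    # assumed MB/s a single OSD can sustain
SYNC_INTERVAL_S=5    # assumed filestore max sync interval in seconds
JOURNAL_MB=$((2 * THROUGHPUT_MB * SYNC_INTERVAL_S))
echo "${JOURNAL_MB} MB per journal partition"
echo "$((12 * JOURNAL_MB)) MB of SSD needed for 12 OSDs"
```

With these assumed numbers that is about 1 GB per journal, so journal capacity is rarely the constraint; the aggregate write bandwidth of the SSDs shared by 12 OSDs usually is.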

Re: [ceph-users] CTDB Cluster Samba on Cephfs

2013-03-27 Thread Matthieu Patou
On 03/27/2013 10:41 AM, Marco Aroldi wrote: Hi list, I'm trying to create an active/active Samba cluster on top of CephFS. I would ask if Ceph fully supports CTDB at this time. If I'm not wrong, Ceph (even CephFS) does not support exporting a block device or mounting the same FS more than once, wherea

Re: [ceph-users] CTDB Cluster Samba on Cephfs

2013-03-28 Thread Matthieu Patou
On 03/28/2013 07:41 AM, Sage Weil wrote: On Wed, 27 Mar 2013, Matthieu Patou wrote: On 03/27/2013 10:41 AM, Marco Aroldi wrote: Hi list, I'm trying to create an active/active Samba cluster on top of CephFS. I would ask if Ceph fully supports CTDB at this time. If I'm not wrong, Ceph (e
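For context, what CTDB actually requires is a filesystem every node can mount read-write at the same time (for its recovery-lock file), which is exactly the capability being questioned for CephFS. The Samba side is a one-line config; the lock path below is an assumption:

```ini
; smb.conf on each cluster node -- minimal sketch
[global]
    clustering = yes
; CTDB's recovery lock (file name and location vary by CTDB version,
; e.g. CTDB_RECOVERY_LOCK in /etc/sysconfig/ctdb on 2013-era releases)
; must point at the shared mount, e.g. /mnt/cephfs/.ctdb/reclock
```

If the shared filesystem cannot be mounted concurrently on all nodes, CTDB cannot arbitrate recovery and the active/active setup fails.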

[ceph-users] under performing osd, where to look ?

2013-03-31 Thread Matthieu Patou
Hi, I was doing some testing with iozone and found that the performance of an exported rbd volume was 1/3 of the performance of the hard drives. I was expecting a performance penalty, but not such a large one. I suspect something is not correct in the configuration but I can't tell what exactly.
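For rough context on how much penalty to expect, back-of-the-envelope write amplification with replication and a co-located filestore journal looks like this (all figures below are assumptions, not numbers from the thread):

```shell
# Assumed figures: raw disk ~120 MB/s streaming writes,
# replication size 2, filestore journal on the same spindle as the data.
RAW_MBS=120
REPLICAS=2
JOURNAL_WRITES=2   # each replica write hits its disk twice: journal + data

# Effective write throughput a single disk can offer to clients:
echo "$((RAW_MBS / JOURNAL_WRITES)) MB/s per disk after journaling"
# Client-visible throughput per disk once replication is paid for:
echo "$((RAW_MBS / (REPLICAS * JOURNAL_WRITES))) MB/s, i.e. 1/4 of raw"
```

So a naive estimate already lands around 1/4 of raw disk speed, which makes an observed 1/3 less alarming than it first appears; moving journals to separate devices removes the journal factor from the calculation.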

Re: [ceph-users] under performing osd, where to look ?

2013-04-01 Thread Matthieu Patou
Anyone on this one? On 03/31/2013 04:37 PM, Matthieu Patou wrote: Hi, I was doing some testing with iozone and found that the performance of an exported rbd volume was 1/3 of the performance of the hard drives. I was expecting to have a performance penalty but not so important. I suspect

Re: [ceph-users] under performing osd, where to look ?

2013-04-01 Thread Matthieu Patou
On 04/01/2013 05:35 PM, Mark Nelson wrote: On 03/31/2013 06:37 PM, Matthieu Patou wrote: Hi, I was doing some testing with iozone and found that the performance of an exported rbd volume was 1/3 of the performance of the hard drives. I was expecting to have a performance penalty but not so

Re: [ceph-users] under performing osd, where to look ?

2013-04-07 Thread Matthieu Patou
On 04/01/2013 11:26 PM, Matthieu Patou wrote: On 04/01/2013 05:35 PM, Mark Nelson wrote: On 03/31/2013 06:37 PM, Matthieu Patou wrote: Hi, I was doing some testing with iozone and found that the performance of an exported rbd volume was 1/3 of the performance of the hard drives. I was expecting

Re: [ceph-users] under performing osd, where to look ?

2013-04-08 Thread Matthieu Patou
On 04/08/2013 05:55 AM, Mark Nelson wrote: On 04/08/2013 01:09 AM, Matthieu Patou wrote: On 04/01/2013 11:26 PM, Matthieu Patou wrote: On 04/01/2013 05:35 PM, Mark Nelson wrote: On 03/31/2013 06:37 PM, Matthieu Patou wrote: Hi, I was doing some testing with iozone and found that performance

Re: [ceph-users] under performing osd, where to look ?

2013-04-09 Thread Matthieu Patou
On 04/09/2013 04:53 AM, Mark Nelson wrote: On 04/09/2013 01:48 AM, Matthieu Patou wrote: On 04/08/2013 05:55 AM, Mark Nelson wrote: On 04/08/2013 01:09 AM, Matthieu Patou wrote: On 04/01/2013 11:26 PM, Matthieu Patou wrote: On 04/01/2013 05:35 PM, Mark Nelson wrote: On 03/31/2013 06:37 PM

Re: [ceph-users] journal on ramdisk for testing

2013-04-27 Thread Matthieu Patou
On 04/25/2013 12:39 AM, James Harper wrote: I'm doing some testing and wanted to see the effect of increasing journal speed, and the fastest way to do this seemed to be to put it on a ramdisk where latency should drop to near zero and I can see what other inefficiencies exist. I created a tmpf
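A minimal sketch of the RAM-backed journal experiment described above; the directory, journal size, and OSD id are assumptions, and a journal in RAM is unsafe anywhere except a throwaway test cluster, since its contents vanish on reboot:

```shell
# /dev/shm is tmpfs on most Linux systems; use it as a throwaway journal home.
JOURNAL_DIR=/dev/shm/ceph-journal-test
mkdir -p "$JOURNAL_DIR"

# Preallocate a 64 MB journal file in RAM (testing only -- lost on reboot):
dd if=/dev/zero of="$JOURNAL_DIR/journal" bs=1M count=64 2>/dev/null
ls -lh "$JOURNAL_DIR/journal"

# ceph.conf fragment to point an OSD at it (assumed osd id 0):
#   [osd.0]
#   osd journal = /dev/shm/ceph-journal-test/journal
#   osd journal size = 64
# then recreate the journal and restart the daemon:
#   ceph-osd -i 0 --mkjournal
```

With the journal at near-zero latency, any remaining write slowness points at the data-disk filestore path rather than journal commits.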