Hello All,
I have a simple test setup with two OSD servers, each with three NICs (1Gb each):
* One for management (ssh and such)
* One for the public network (connected to ceph clients)
* One for the cluster (osd inter-connection)
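For reference, the relevant part of my ceph.conf looks roughly like this (the subnets below are placeholders, not my actual addressing):

    [global]
        # network the ceph clients talk to (second NIC)
        public network = 192.168.10.0/24
        # network the OSDs use among themselves (third NIC)
        cluster network = 192.168.20.0/24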
I keep seeing these messages:
Aug 26 18:43:31 ceph01 ceph-osd: 2013-08-26 1
Hello all,
Our dev environments are quite I/O intensive but don't require much
space (~20G per dev environment); for the moment our dev machines are
served by VMware and the storage is done on NFS appliances with SAS or
SATA drives.
After some testing with consumer-grade SSDs we discovered that
On 03/17/2013 04:03 PM, Mark Nelson wrote:
On 03/17/2013 05:40 PM, Matthieu Patou wrote:
[...]
Hello,
Is it a good idea to run the OSD and Nova compute services on the same
node or not, and why?
Matthieu.
On 03/17/2013 06:14 PM, Gregory Farnum wrote:
On Sunday, March 17, 2013 at 4:03 PM, Mark Nelson wrote:
On 03/17/2013 05:40 PM, Matthieu Patou wrote:
[...]
On 03/25/2013 04:07 PM, peter_j...@dell.com wrote:
Hi,
I have a couple of HW provisioning questions in regard to SSDs for OSD
journals.
I'd like to provision 12 OSDs per node, and there are enough CPU
cycles and memory.
Each OSD is allocated one 3TB HDD for OSD data – these 12 * 3TB HDDs
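To make the layout concrete, here is roughly what I have in mind for one OSD, with data on its 3TB HDD and the journal on an SSD partition (device paths and the journal size below are placeholders, not a recommendation):

    [osd]
        osd journal size = 10240                 # MB, example value
    [osd.0]
        osd data = /var/lib/ceph/osd/ceph-0      # on one of the 3TB HDDs
        osd journal = /dev/sdm1                  # hypothetical partition on the shared SSD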
On 03/27/2013 10:41 AM, Marco Aroldi wrote:
Hi list,
I'm trying to create an active/active Samba cluster on top of CephFS.
I'd like to ask whether Ceph fully supports CTDB at this time.
If I'm not wrong, Ceph (even CephFS) does not support exporting a block
device or mounting the same FS more than once, whereas
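To make the question concrete, this is roughly the setup I have in mind, assuming CephFS is mounted at the same path on every node (paths below are placeholders, and I don't know yet whether CTDB's recovery lock behaves correctly on CephFS):

    # smb.conf on each node
    [global]
        clustering = yes

    # CTDB settings (e.g. /etc/sysconfig/ctdb) on each node
    CTDB_RECOVERY_LOCK=/mnt/cephfs/ctdb/.recovery_lock
    CTDB_NODES=/etc/ctdb/nodes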
On 03/28/2013 07:41 AM, Sage Weil wrote:
On Wed, 27 Mar 2013, Matthieu Patou wrote:
On 03/27/2013 10:41 AM, Marco Aroldi wrote:
[...]
Hi,
I was doing some testing with iozone and found that the performance of an
exported rbd volume was 1/3 of the performance of the underlying hard drives.
I was expecting a performance penalty, but not such a large one.
I suspect something is not correct in the configuration but I can't tell what
exactly.
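In case it helps to reproduce, the runs were along these lines (file and record sizes here are representative, not the exact values I used):

    # iozone against a file on the mounted rbd volume
    # -I = O_DIRECT to bypass the page cache, -i 0/-i 1 = write and read tests
    iozone -I -s 4g -r 4k -i 0 -i 1 -f /mnt/rbd/iozone.tmp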
Anyone on this one?
On 03/31/2013 04:37 PM, Matthieu Patou wrote:
[...]
On 04/01/2013 05:35 PM, Mark Nelson wrote:
On 03/31/2013 06:37 PM, Matthieu Patou wrote:
[...]
On 04/01/2013 11:26 PM, Matthieu Patou wrote:
On 04/01/2013 05:35 PM, Mark Nelson wrote:
On 03/31/2013 06:37 PM, Matthieu Patou wrote:
[...]
On 04/08/2013 05:55 AM, Mark Nelson wrote:
On 04/08/2013 01:09 AM, Matthieu Patou wrote:
On 04/01/2013 11:26 PM, Matthieu Patou wrote:
On 04/01/2013 05:35 PM, Mark Nelson wrote:
On 03/31/2013 06:37 PM, Matthieu Patou wrote:
[...]
On 04/09/2013 04:53 AM, Mark Nelson wrote:
On 04/09/2013 01:48 AM, Matthieu Patou wrote:
On 04/08/2013 05:55 AM, Mark Nelson wrote:
On 04/08/2013 01:09 AM, Matthieu Patou wrote:
On 04/01/2013 11:26 PM, Matthieu Patou wrote:
On 04/01/2013 05:35 PM, Mark Nelson wrote:
On 03/31/2013 06:37 PM, Matthieu Patou wrote:
[...]
On 04/25/2013 12:39 AM, James Harper wrote:
I'm doing some testing and wanted to see the effect of increasing journal
speed, and the fastest way to do this seemed to be to put it on a ramdisk where
latency should drop to near zero and I can see what other inefficiencies exist.
I created a tmpfs
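The steps were roughly as follows (mount point and size are illustrative; a journal on tmpfs is for testing only, since losing it can lose data):

    service ceph stop osd.0                  # stop the OSD first
    ceph-osd -i 0 --flush-journal            # flush the old journal to the store
    mkdir -p /mnt/ramjournal
    mount -t tmpfs -o size=2g tmpfs /mnt/ramjournal
    # point "osd journal" in ceph.conf at /mnt/ramjournal/journal, then:
    ceph-osd -i 0 --mkjournal                # create the new journal on tmpfs
    service ceph start osd.0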