Re: [ceph-users] High disk utilisation

2015-12-09 Thread MATHIAS, Bryn (Bryn)
To update this: the error looks like it comes from updatedb scanning the Ceph disks. When we stop it from doing so, by putting the Ceph mount points in the exclusion file, the problem goes away. Thanks for the help and time. On 30 Nov 2015, at 09:53, MATHIAS, Bryn (Bryn) mailto:bryn.math
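The fix described above amounts to a one-line change in updatedb's configuration. A minimal sketch follows; the OSD mount point shown (/var/lib/ceph) is an assumption, since the thread does not list the actual mount points:

```
# /etc/updatedb.conf (RHEL 7) -- hypothetical Ceph mount point shown.
# Adding the OSD data directories to PRUNEPATHS stops updatedb/mlocate
# from walking every object file on the Ceph disks during its daily scan.
PRUNEPATHS = "/tmp /var/spool /media /var/lib/ceph"
```

Any path under which OSD data directories are mounted would need to appear here; updatedb skips the listed paths and everything beneath them.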

Re: [ceph-users] High disk utilisation

2015-11-30 Thread MATHIAS, Bryn (Bryn)
Hi, > On 30 Nov 2015, at 13:44, Christian Balzer wrote: > > > Hello, > > On Mon, 30 Nov 2015 07:55:24 +0000 MATHIAS, Bryn (Bryn) wrote: > >> Hi Christian, >> >> I’ll give you a much better dump of detail :) >> >> Running RHEL 7.1, >>

Re: [ceph-users] High disk utilisation

2015-11-29 Thread MATHIAS, Bryn (Bryn)
On 30 Nov 2015, at 12:57, Christian Balzer mailto:ch...@gol.com>> wrote: Hello, On Mon, 30 Nov 2015 07:15:35 + MATHIAS, Bryn (Bryn) wrote: Hi All, I am seeing an issue with ceph performance. Starting from an empty cluster of 5 nodes, ~600Tb of storage. It would be helpful to hav

[ceph-users] High disk utilisation

2015-11-29 Thread MATHIAS, Bryn (Bryn)
Hi All, I am seeing an issue with Ceph performance, starting from an empty cluster of 5 nodes, ~600 TB of storage. Monitoring disk usage in nmon, I see rolling 100% usage of a disk. ceph -w doesn’t report any spikes in throughput and the application putting data is not spiking in the load generate

Re: [ceph-users] ceph-disk prepare with systemd and infernalis

2015-10-31 Thread MATHIAS, Bryn (Bryn)
Hi Loic > On 30 Oct 2015, at 19:33, Loic Dachary wrote: > > Hi Mathias, > >> On 31/10/2015 02:05, MATHIAS, Bryn (Bryn) wrote: >> Hi All, >> >> I have been rolling out an infernalis cluster, however I get stuck on the >> ceph-disk prepare stage. &g

[ceph-users] ceph-disk prepare with systemd and infernalis

2015-10-30 Thread MATHIAS, Bryn (Bryn)
Hi All, I have been rolling out an infernalis cluster, however I get stuck on the ceph-disk prepare stage. I am deploying ceph via ansible along with a whole load of other software. Log output at the end of the message but the solution is to copy the “/lib/systemd/system/ceph-osd@.service” fi
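The workaround mentioned (copying the ceph-osd@.service unit file) might look like the sketch below. The destination directory is an assumption based on standard systemd practice; the snippet is truncated in the archive, so the exact step the poster used may differ:

```
# Hypothetical sketch: make the packaged unit template visible to systemd,
# then reload unit files so ceph-disk prepare/activate can start ceph-osd@<id>.
cp /lib/systemd/system/ceph-osd@.service /etc/systemd/system/
systemctl daemon-reload
```

Units in /etc/systemd/system take precedence over the packaged copies in /lib/systemd/system, which is why copying the template there can unblock activation when the packaged unit is not being picked up.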

[ceph-users] poorly distributed osd load between machines

2015-10-28 Thread MATHIAS, Bryn (Bryn)
Hi All, I am testing a 5 node, 4+1 EC cluster using some simple python code https://gist.github.com/brynmathias/03c60569499dbf3f6be4 when I run this from an external machine one of my 5 nodes experiences very high cpu usage (300-400%) per osd and the others show very low usage. see here: http://i

Re: [ceph-users] Ceph performance, empty vs part full

2015-07-08 Thread MATHIAS, Bryn (Bryn)
arching for those terms and seeing what your OSD > folder structures look like. You could test by creating a new pool and > seeing if it's faster or slower than the one you've already filled up. > -Greg > > On Wed, Jul 8, 2015 at 1:25 PM, MATHIAS, Bryn (Bryn) > wrote: >

[ceph-users] Ceph performance, empty vs part full

2015-07-08 Thread MATHIAS, Bryn (Bryn)
Hi All, I’m perf testing a cluster again. This time I have re-built the cluster and am filling it for testing. On a 10 min run I get the following results from 5 load generators, each writing through 7 iocontexts, with a queue depth of 50 async writes. Gen1 Percentile 100 = 0.729775905609 Max
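The per-generator figures above ("Percentile 100 = ...") can be produced with a small nearest-rank percentile helper over recorded write latencies. The sample values below are made up; only the method is shown:

```python
def percentile(latencies, p):
    """Nearest-rank percentile of a list of latencies; p=100 returns the max."""
    ordered = sorted(latencies)
    # nearest-rank index: ceil(p/100 * n) - 1, clamped to a valid index
    idx = max(0, min(len(ordered) - 1, -(-p * len(ordered) // 100) - 1))
    return ordered[idx]

samples = [0.12, 0.30, 0.05, 0.73, 0.22]  # hypothetical write latencies (s)
print(percentile(samples, 100))  # → 0.73 (the max of the sample)
print(percentile(samples, 50))   # → 0.22 (the median by nearest rank)
```

Percentile 100 is simply the slowest write in the run, which is why it doubles as the "Max" column in the report.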

Re: [ceph-users] [COMMERCIAL] Ceph EC pool performance benchmarking, high latencies.

2015-06-21 Thread MATHIAS, Bryn (Bryn)
>> > > Just out of interest do any of your journals or disks look like they are > getting maxed out? > > Your latency breakdown seems to indicate that the bulk of requests are being > serviced in reasonable time, but around 5% (or less) are taking excessively > long for some reason. > > I'm

[ceph-users] Ceph EC pool performance benchmarking, high latencies.

2015-06-19 Thread MATHIAS, Bryn (Bryn)
Hi All, I am currently benchmarking Ceph to work out the correct read / write model, to get the optimal cluster throughput and latency. For the moment I am writing 4 MB files to an EC 4+1 pool with a randomised name using the rados Python interface. Load generation is happening on external mach
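The write pattern described (4 MB objects with randomised names via the rados Python bindings) might look like the sketch below. The pool name and conffile path are assumptions, and the cluster calls are commented out because they need python-rados and a live cluster:

```python
import uuid

def random_object_name(prefix="bench"):
    # Randomised names spread objects across placement groups,
    # avoiding hot-spotting a single PG during the benchmark.
    return "%s-%s" % (prefix, uuid.uuid4().hex)

payload = b"\x00" * (4 * 1024 * 1024)  # 4 MB object, as in the test

# Hypothetical cluster interaction (requires python-rados and a running cluster):
# import rados
# cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")   # assumed conf path
# cluster.connect()
# ioctx = cluster.open_ioctx("ec4plus1")                  # assumed EC pool name
# ioctx.write_full(random_object_name(), payload)
# ioctx.close()
# cluster.shutdown()
```

For async load at a given queue depth, the blocking write_full would be replaced with aio_write_full plus completion callbacks, throttled so no more than the target number of writes are in flight per iocontext.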