To update this: the error looks like it comes from updatedb scanning the Ceph
disks.
When we make sure it doesn’t, by adding the Ceph mount points to the exclusion
file, the problem goes away.
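For anyone hitting the same thing, the exclusion is one line in mlocate's
config. A minimal sketch, assuming /etc/updatedb.conf and OSDs mounted under
/var/lib/ceph (the mount prefix is an assumption; use your actual OSD mount
points and keep your distribution's existing entries):

    # /etc/updatedb.conf (excerpt)
    # Append the Ceph mount points so updatedb never walks the OSD disks.
    # /var/lib/ceph is an assumed mount prefix; substitute your own.
    PRUNEPATHS = "/tmp /var/spool /media /var/lib/ceph"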
Thanks for the help and time.
On 30 Nov 2015, at 09:53, MATHIAS, Bryn (Bryn) wrote:
Hi,
> On 30 Nov 2015, at 13:44, Christian Balzer wrote:
>
>
> Hello,
>
> On Mon, 30 Nov 2015 07:55:24 +0000 MATHIAS, Bryn (Bryn) wrote:
>
>> Hi Christian,
>>
>> I’ll give you a much better dump of detail :)
>>
>> Running RHEL 7.1,
>>
On 30 Nov 2015, at 12:57, Christian Balzer <ch...@gol.com> wrote:
Hello,
On Mon, 30 Nov 2015 07:15:35 +0000 MATHIAS, Bryn (Bryn) wrote:
Hi All,
I am seeing an issue with ceph performance.
Starting from an empty cluster of 5 nodes, ~600TB of storage.
It would be helpful to hav
Hi All,
I am seeing an issue with ceph performance.
Starting from an empty cluster of 5 nodes, ~600TB of storage.
Monitoring disk usage in nmon, I see rolling 100% usage of a disk.
ceph -w doesn’t report any spikes in throughput, and the application putting
data is not spiking in the load generate
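To pin down which disk saturates and when, a small poller over /proc/diskstats
gives the same utilisation figure nmon shows; a hypothetical diagnostic sketch,
not code from the thread:

    #!/usr/bin/env python
    # Print any disk that was >90% busy over the last second.
    # Field 13 of each /proc/diskstats line is the cumulative number of
    # milliseconds the device spent doing I/O.
    import time

    def io_ms():
        busy = {}
        with open('/proc/diskstats') as f:
            for line in f:
                parts = line.split()
                busy[parts[2]] = int(parts[12])
        return busy

    prev = io_ms()
    while True:
        time.sleep(1)
        cur = io_ms()
        for dev in sorted(cur):
            util = (cur[dev] - prev.get(dev, 0)) / 10.0  # ms/s -> percent
            if util > 90:
                print('%s %s %.0f%% busy' % (time.strftime('%H:%M:%S'), dev, util))
        prev = cur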
Hi Loic
> On 30 Oct 2015, at 19:33, Loic Dachary wrote:
>
> Hi Mathias,
>
>> On 31/10/2015 02:05, MATHIAS, Bryn (Bryn) wrote:
>> Hi All,
>>
>> I have been rolling out an Infernalis cluster; however, I get stuck at the
>> ceph-disk prepare stage.
Hi All,
I have been rolling out an Infernalis cluster; however, I get stuck at the
ceph-disk prepare stage.
I am deploying ceph via ansible along with a whole load of other software.
Log output is at the end of the message, but the solution is to copy the
"/lib/systemd/system/ceph-osd@.service" fi
Hi All,
I am testing a 5 node, 4+1 EC cluster using some simple python code
https://gist.github.com/brynmathias/03c60569499dbf3f6be4
When I run this from an external machine, one of my 5 nodes experiences very
high CPU usage (300-400%) per OSD,
and the others show very low usage.
see here: http://i
> …searching for those terms and seeing what your OSD
> folder structures look like. You could test by creating a new pool and
> seeing if it's faster or slower than the one you've already filled up.
> -Greg
>
> On Wed, Jul 8, 2015 at 1:25 PM, MATHIAS, Bryn (Bryn)
> wrote:
>
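Following Greg's suggestion above, the fresh-pool comparison can be driven from
the same python rados bindings the load generator already uses. A sketch; the
pool names are placeholders:

    import time
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    cluster.create_pool('perf-test-fresh')  # hypothetical scratch pool

    def time_writes(pool, count=100, size=4 * 1024 * 1024):
        # Write `count` objects of `size` bytes, return elapsed seconds.
        ioctx = cluster.open_ioctx(pool)
        payload = b'\0' * size
        start = time.time()
        for i in range(count):
            ioctx.write_full('bench_obj_%d' % i, payload)
        ioctx.close()
        return time.time() - start

    # 'filled-pool' stands in for the pool already filled by the test.
    print('fresh:  %.1fs' % time_writes('perf-test-fresh'))
    print('filled: %.1fs' % time_writes('filled-pool'))

If the fresh pool comes out markedly faster, that would fit Greg's point about
the OSD folder structures on the filled pool.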
Hi All,
I’m perf testing a cluster again.
This time I have re-built the cluster and am filling it for testing.
On a 10-minute run I get the following results from 5 load generators, each
writing through 7 iocontexts, with a queue depth of 50 async writes.
Gen1
Percentile 100 = 0.729775905609
Max
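For context, each iocontext runs a write loop roughly like the following; a
sketch of the pattern, not the actual test code (pool and object names are
made up):

    import threading
    import rados

    QUEUE_DEPTH = 50
    PAYLOAD = b'\0' * (4 * 1024 * 1024)

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('testpool')  # placeholder pool name

    inflight = threading.Semaphore(QUEUE_DEPTH)  # caps async ops in flight

    def on_complete(completion):
        inflight.release()  # called from a rados callback thread

    for i in range(1000):
        inflight.acquire()  # block while 50 writes are already queued
        ioctx.aio_write_full('obj_%06d' % i, PAYLOAD, oncomplete=on_complete)

    for _ in range(QUEUE_DEPTH):  # drain: wait for all writes to finish
        inflight.acquire()
    ioctx.close()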
>>
>
> Just out of interest, do any of your journals or disks look like they are
> getting maxed out?
>
> Your latency breakdown seems to indicate that the bulk of requests are being
> serviced in reasonable time, but around 5% (or less) are taking excessively
> long for some reason.
>
> I'm
Hi All,
I am currently benchmarking Ceph to work out the correct read/write model, to
get the optimal cluster throughput and latency.
For the moment I am writing 4MB objects to an EC 4+1 pool with randomised
names, using the rados python interface.
Load generation is happening on external mach
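The write path is essentially the following sketch (pool name is a
placeholder; this illustrates the pattern described above, not the actual
harness):

    import os
    import uuid
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('ec4p1pool')  # placeholder EC 4+1 pool

    payload = os.urandom(4 * 1024 * 1024)        # one 4MB object
    ioctx.write_full(uuid.uuid4().hex, payload)  # randomised object name
    ioctx.close()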