Hello Ceph-users,
Florian has been helping with some issues we've been experiencing on our
proof-of-concept cluster. Thanks for the replies so far. I wanted to jump
in with some extra details.
All of our testing has been with scrubbing turned off, to remove that as a
factor.
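(For anyone trying to reproduce the setup: scrubbing can be disabled
cluster-wide by setting the standard OSD flags, for example

  ceph osd set noscrub
  ceph osd set nodeep-scrub

and re-enabled afterwards with the matching "ceph osd unset" commands. This
is meant to illustrate the setting, not necessarily the exact commands used
here.)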
One additional detail: we also did filestore testing using Jewel and saw
substantially similar results to those on Kraken.
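(Side note: if it's useful to confirm which backend a given OSD is running,
the objectstore type is visible in the OSD metadata, e.g.

  ceph osd metadata 0 | grep osd_objectstore

where the OSD id 0 is just an example; it should report filestore or
bluestore accordingly.)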
On Mon, May 1, 2017 at 2:07 PM, Patrick Dinnen wrote:
> Hello Ceph-users,
>
> Florian has been helping with some issues on our proof-of-concept cluster,
> where we've been experiencing these issues.
> rease the cost too much, probably.
>
> You could change swappiness value or use something like
> https://hoytech.com/vmtouch/ to pre-cache inodes entries.
>
> You could tarball the smaller files before loading them into Ceph maybe.
>
> How are the ten clients accessing Ceph by the way?
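(To make that suggestion concrete: swappiness is a sysctl, and vmtouch can
walk a directory tree and pull it into the page cache. A minimal sketch,
assuming a filestore OSD data directory (the path and value below are
placeholders, not what was actually used):

  sysctl -w vm.swappiness=10
  vmtouch -t /var/lib/ceph/osd/ceph-0/current

Appropriate values depend on available RAM and the real OSD layout.)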
>
> On May 1, 2017, at 14:23, Patrick Dinnen wrote:
>
> One
isks. The activity on the OSDs and SSDs
seems anti-correlated. SSDs peak in activity as OSDs reach the bottom
of the trough. Then the reverse. Repeat.
Does anyone have any suggestions as to what could possibly be causing
a regular pattern like this at such a low frequency?
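(One way to capture that sawtooth, if anyone wants to compare notes: sample
per-device utilisation on each node with something like

  iostat -xd 5

and watch the %util column for the spinning disks versus the SSDs over a few
cycles. Any per-device I/O monitor would do; iostat is just an example.)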
> nce behaviors?
> -Greg
>
> On Thu, May 11, 2017 at 12:47 PM, Patrick Dinnen wrote:
>> Seeing some odd behaviour while testing with rados bench. This is on
>> a pre-split pool, two-node cluster with 12 OSDs total.
>>
>> ceph osd pool create newerpoolofhopes 2048 2048
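(The rados bench invocation itself isn't quoted above; a small-object write
test against that pool would look something like

  rados bench -p newerpoolofhopes 300 write -b 4096 -t 16 --no-cleanup

where the duration, object size, and concurrency are guesses rather than the
parameters actually used.)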