Hi Nick,
On 02/10/16 22:56, Nick Fisk wrote:
[...]
osd_agent_max_high_ops
osd_agent_max_ops
They control how many concurrent flushes happen at the high/low thresholds, i.e.
you can set the low one to 1 to minimise the impact on client IO.
Also, target_max_bytes is calculated on a per-PG basis.
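(For reference, a minimal sketch of how these knobs are usually applied; "hot-pool"
and the values shown are placeholders, so check the defaults for your release:)
# Lower the low-threshold flush concurrency at runtime on every OSD:
ceph tell osd.* injectargs '--osd_agent_max_ops 1 --osd_agent_max_high_ops 4'
# ...or persist the same settings in ceph.conf under [osd]:
#   osd agent max ops = 1
#   osd agent max high ops = 4
# Since the agent works per PG, the effective budget is roughly
# target_max_bytes / pg_num, e.g. 1 TiB over 1024 PGs is about 1 GiB per PG:
ceph osd pool set hot-pool target_max_bytes 1099511627776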
Hello,
I've been investigating the following crash with cephfs:
[8734559.785146] general protection fault: [#1] SMP
[8734559.791921] ioatdma shpchp ipmi_devintf ipmi_si ipmi_msghandler
tcp_scalable ib_qib dca ib_mad ib_core ib_addr ipv6 [last unloaded:
stat_faker_4410clouder4]
[8734559
On Mon, Oct 3, 2016 at 1:19 PM, Nikolay Borisov wrote:
> Hello,
>
> I've been investigating the following crash with cephfs:
>
> [8734559.785146] general protection fault: [#1] SMP
> [8734559.791921] ioatdma shpchp ipmi_devintf ipmi_si ipmi_msghandler
> tcp_scalable ib_qib dca ib_mad ib_cor
On 10/03/2016 03:27 PM, Ilya Dryomov wrote:
> On Mon, Oct 3, 2016 at 1:19 PM, Nikolay Borisov wrote:
>> Hello,
>>
>> I've been investigating the following crash with cephfs:
>>
>> [8734559.785146] general protection fault: [#1] SMP
>> [8734559.791921] ioatdma shpchp ipmi_devintf ipmi_si ip
On Mon, Oct 3, 2016 at 2:37 PM, Nikolay Borisov wrote:
>
>
> On 10/03/2016 03:27 PM, Ilya Dryomov wrote:
>> On Mon, Oct 3, 2016 at 1:19 PM, Nikolay Borisov wrote:
>>> Hello,
>>>
>>> I've been investigating the following crash with cephfs:
>>>
>>> [8734559.785146] general protection fault: [#
On 3/10/2016 5:59 AM, Sascha Vogt wrote:
Any feedback, especially corrections, is highly welcome!
http://maybebuggy.de/post/ceph-cache-tier/
Thanks, that clarified things a lot - much easier to follow than the
official docs :)
Do cache tiers help with writes as well?
--
Lindsay Mathieson
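(For reference, writes are absorbed by a cache tier when it runs in writeback
mode; a minimal sketch of attaching one, with "cold-pool"/"hot-pool" as
placeholder names:)
# Attach hot-pool as a writeback cache in front of cold-pool:
ceph osd tier add cold-pool hot-pool
ceph osd tier cache-mode hot-pool writeback
ceph osd tier set-overlay cold-pool hot-pool
# In writeback mode client writes land in hot-pool first; the tiering agent
# later flushes the dirty objects down to cold-pool.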
Hello all,
Over the past few weeks I've been trying to go through the Quick Ceph Deploy
tutorial at:
http://docs.ceph.com/docs/jewel/start/quick-ceph-deploy/
just trying to get a basic 2-OSD Ceph cluster up and running. Everything seems
to go well until I get to the:
ceph-deploy osd activate c
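(For reference, the prepare/activate pair from that quick-start looks roughly as
below; the host names and /var/local/osd* directories are placeholders following
the tutorial's pattern:)
# Run from the admin node, one directory (or disk) per OSD:
ceph-deploy osd prepare ceph01:/var/local/osd0 ceph02:/var/local/osd1
ceph-deploy osd activate ceph01:/var/local/osd0 ceph02:/var/local/osd1
# Under jewel the OSD daemon runs as the "ceph" user, so a common stumbling
# block at the activate step is ownership of those directories
# (chown ceph:ceph /var/local/osd0 on each host).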
Oops, I said CentOS 5 (old habit, ran it for years!). I meant CentOS 7. And I'm
running the following Ceph package versions from the ceph repo:
[root@ceph02 ~]# rpm -qa | grep -i ceph
libcephfs1-10.2.3-0.el7.x86_64
ceph-common-10.2.3-0.el7.x86_64
ceph-mon-10.2.3-0.el7.x86_64
ceph-release-1-1.el7.noarch
Hi Greg...
Just checking this in my case on 10.2.2.
I have mds cache size = 200
Current used RAM in the mds is about 9GB
9 GB / 200 is much closer to 4k than 2k for the combined size of the
CInode, CDir and CDentry structures.
Maybe the numbers in
http://docs.ceph.com/docs/master/dev/mds_internals/data-s
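(A rough way to measure the per-entry cost on a live MDS; "mds.ceph02" is a
placeholder for the local daemon name and the "mds"/"inodes" counter path is
assumed from the jewel perf schema:)
# Assumes a single ceph-mds process on this host.
RSS_KB=$(ps -C ceph-mds -o rss=)
INODES=$(ceph daemon mds.ceph02 perf dump | python -c \
    'import json,sys; print(json.load(sys.stdin)["mds"]["inodes"])')
echo "$(( RSS_KB * 1024 / INODES )) bytes per cached inode"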