On 13 Oct. 2016, at 20:21, Praveen Kumar G T (Cloud Platform)
wrote:
>
>
> Hi David,
>
> I am Praveen. We also had a similar problem with hammer 0.94.2. We hit the
> problem when we created a new cluster with an erasure-coded pool (10+5 config).
>
> Root cause:
>
> The high memory usage in o[...]
> [...]sure there are no memory leaks in ceph hammer 0.94.2 version.
>
> Regards,
> Praveen

On 7 Oct. 2016, at 22:53, Haomai Wang wrote:
>
> do you try to restart osd to see the memory usage?

Restarting OSDs does not change the memory usage.
(Apologies for the delay in replying - I was offline due to illness.)

Regards
David
--
FetchTV Pty Ltd, Level 5, 61 Lavender Street, Milsons Point
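For anyone reproducing these observations: a common way to inspect a running OSD's memory is its resident set size plus tcmalloc's heap statistics via the `ceph tell ... heap` commands. The sketch below is illustrative only — `osd.0` is a placeholder id, and the restart line assumes a systemd host (a Hammer-era sysvinit host would use `sudo /etc/init.d/ceph restart osd.0` instead):

```shell
# Resident memory (RSS, in KB) of every ceph-osd process on this host
ps -C ceph-osd -o pid=,rss=,cmd=

# tcmalloc heap statistics for one OSD (placeholder id osd.0)
ceph tell osd.0 heap stats

# Ask tcmalloc to hand freed pages back to the OS, then re-check RSS;
# if RSS drops sharply, the "usage" was mostly unreturned free heap
ceph tell osd.0 heap release

# Restart a single OSD to see whether memory climbs back (systemd hosts)
sudo systemctl restart ceph-osd@0
```

Comparing RSS before and after `heap release` helps distinguish a genuine leak from tcmalloc simply holding on to freed pages.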
Date: Fri, 7 Oct 2016 16:04:03 +1100
From: David Burns
To: ceph-us...@ceph.com
Subject: [ceph-users] Hammer OSD memory usage very high
Message-ID:
Content-Type: text/plain; charset=UTF-8

Hello all,

We have a small 160TB Ceph cluster used only as a test s3 storage repository
for media content.

Problem
Since upgrading from Firefly to Hammer we are experiencing very high OSD memory
use of 2-3 GB per TB of OSD storage - typical OSD memory 6-10GB.
We have had to increase swap space

do you try to restart osd to see the memory usage?

On Fri, Oct 7, 2016 at 1:04 PM, David Burns wrote:
> Hello all,
>
> We have a small 160TB Ceph cluster used only as a test s3 storage repository
> for media content.
>
> Problem
> Since upgrading from Firefly to Hammer we are experiencing very high OSD memory
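To put the reported figures in perspective, here is a back-of-the-envelope check using only numbers from the thread (160TB cluster, worst-case 3GB of OSD memory per TB, and Praveen's 10+5 erasure-code profile); it is plain shell arithmetic, not a measurement:

```shell
#!/bin/sh
# Reported: roughly 2-3 GB of OSD RSS per TB of OSD storage.
# Worst-case cluster-wide OSD memory for the 160 TB cluster:
cluster_tb=160
gb_per_tb=3
echo "worst-case cluster-wide OSD memory: $(( cluster_tb * gb_per_tb )) GB"

# A 10+5 erasure-coded pool stores (k+m)/k of the logical data on disk:
k=10
m=5
echo "raw storage per unit of logical data: $(( (k + m) * 100 / k ))%"
```

That is, at the reported rate the cluster as a whole could need on the order of 480GB of RAM for OSDs alone, which is consistent with the need to add swap.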