There are probably multiple reasons. However, I just wanted to chime in that I set 
my cache size to 1G and consistently see OSD memory converge to ~2.5GB. 

In [1] you can see the difference between a node with 4 OSDs on v12.2.2 (left) 
and a node with 4 OSDs on v12.2.1 (right). I had really hoped that v12.2.2 
would bring memory usage closer to the cache parameter. Almost 2.5x, 
in contrast to 3x on 12.2.1, is still quite far off IMO.

Practically, I think it’s not quite possible to run 2 OSDs on your 2GB server, 
let alone leave any memory headroom.
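The 1GB-of-RAM-per-TB rule of thumb quoted later in this thread makes the point concrete. A quick back-of-the-envelope sketch (the 8 TB drive size is an illustrative assumption, borrowed from Peter's message below):

```python
def recommended_ram_gb(tb_per_osd: float, num_osds: int) -> float:
    """Rule of thumb from the thread: ~1 GB of RAM per TB of OSD capacity."""
    return tb_per_osd * num_osds

# Two OSDs on 8 TB drives would suggest ~16 GB of RAM,
# which is why 2 OSDs on a 2 GB node leaves no headroom.
print(recommended_ram_gb(8, 2))  # 16
```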


[1] https://pasteboard.co/GXHO5eF.png 
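On the kernel-side tuning David Turner suggests below, the usual knob is vm.vfs_cache_pressure. A hedged sketch (the path and the value 200 are illustrative assumptions, not a tested tuning):

```
# /etc/sysctl.d/90-ceph-mem.conf  (illustrative path)
# Default is 100; higher values make the kernel reclaim
# dentry/inode caches more aggressively.
vm.vfs_cache_pressure = 200
```

Apply with `sysctl --system` and watch whether OSD RSS actually improves before keeping it.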

> On Dec 11, 2017, at 3:44 AM, shadow_lin <shadow_...@163.com> wrote:
> 
> My workload is mainly sequential write (for surveillance usage). I am not sure 
> how the cache would affect write performance, or why memory usage keeps 
> increasing as more data is written into Ceph storage.
>  
> 2017-12-11 
> lin.yunfan
> From: Peter Woodman <pe...@shortbus.org>
> Sent: 2017-12-11 05:04
> Subject: Re: [ceph-users] The way to minimize osd memory usage?
> To: "David Turner"<drakonst...@gmail.com>
> Cc: "shadow_lin"<shadow_...@163.com>, "ceph-users"<ceph-users@lists.ceph.com>, "Konstantin Shalygin"<k0...@k0ste.ru>
>  
> I've had some success in this configuration by cutting the bluestore 
> cache size down to 512MB and running only one OSD on an 8TB drive. Still get 
> occasional OOMs, but not terrible. Don't expect wonderful performance, 
> though. 
>  
> Two OSDs would really be pushing it. 
>  
> On Sun, Dec 10, 2017 at 10:05 AM, David Turner <drakonst...@gmail.com> wrote: 
> > The docs recommend 1GB of RAM per TB of OSD capacity. I saw people asking 
> > whether this was still accurate for bluestore, and the answer was that it 
> > is more true for bluestore than filestore. There might be a way to get this 
> > working at the cost of performance. I would look at Linux kernel memory 
> > settings as much as Ceph and bluestore settings. Cache pressure is one that 
> > comes to mind where an aggressive setting might help. 
> > 
> > 
> > On Sat, Dec 9, 2017, 11:33 PM shadow_lin <shadow_...@163.com> wrote: 
> >> 
> >> The 12.2.1 build we are running (12.2.1-249-g42172a4 
> >> (42172a443183ffe6b36e85770e53fe678db293bf)) already includes the memory 
> >> issue fix. We are working on upgrading to the 12.2.2 release to see if 
> >> there is any further improvement. 
> >> 
> >> 2017-12-10 
> >> lin.yunfan 
> >> 
> >> From: Konstantin Shalygin <k0...@k0ste.ru> 
> >> Sent: 2017-12-10 12:29 
> >> Subject: Re: [ceph-users] The way to minimize osd memory usage? 
> >> To: "ceph-users"<ceph-users@lists.ceph.com> 
> >> Cc: "shadow_lin"<shadow_...@163.com> 
> >> 
> >> 
> >> > I am testing running ceph luminous(12.2.1-249-g42172a4 
> >> > (42172a443183ffe6b36e85770e53fe678db293bf) on ARM server. 
> >> Try the new 12.2.2 - this release should fix the memory issues with Bluestore. 
> >> 
> >> _______________________________________________ 
> >> ceph-users mailing list 
> >> ceph-users@lists.ceph.com 
> >> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com 
> > 
> > 

