Re: [ceph-users] Unexplainable high memory usage OSD with BlueStore

2019-05-06 Thread Igor Fedotov
Hi Kenneth, Mimic 13.2.5 ships the previous version of the bitmap allocator, which isn't recommended for use. Please revert. The new bitmap allocator will be available starting with 13.2.6. Thanks, Igor
On 5/6/2019 4:19 PM, Kenneth Waegeman wrote:
Hi all, I am also switching OSDs to the new bitmap allocator
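For anyone on 13.2.5 who needs to revert, a minimal ceph.conf sketch (assuming the Mimic-era default allocator, "stupid"; the OSDs need a restart afterwards):

[osd]
# fall back to the pre-bitmap default until 13.2.6
bluestore_allocator = stupid
bluefs_allocator = stupid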

Re: [ceph-users] Unexplainable high memory usage OSD with BlueStore

2019-05-06 Thread Kenneth Waegeman
Hi all, I am also switching OSDs to the new bitmap allocator on 13.2.5. That went quite smoothly so far, except for one OSD that keeps segfaulting when I enable the bitmap allocator. Each time I disable the bitmap allocator on it again, the OSD is fine again. Segfault error of the OSD: --- begin
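If only a single OSD misbehaves, the allocator can be flipped back for just that daemon via a per-daemon section; a sketch (osd.12 is a hypothetical id):

[osd.12]
bluestore_allocator = stupid
bluefs_allocator = stupid

Then restart just that daemon: systemctl restart ceph-osd@12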

Re: [ceph-users] Unexplainable high memory usage OSD with BlueStore

2019-05-03 Thread Igor Podlesny
On Fri, 3 May 2019 at 21:39, Mark Nelson wrote:
[...]
> > [osd]
> > ...
> > bluestore_allocator = bitmap
> > bluefs_allocator = bitmap
> >
> > I would restart the nodes one by one and see what happens.
>
> If you are using 12.2.11 you likely still have the old bitmap allocator
Would those config
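To confirm which allocator a running OSD actually picked up, its admin socket can be queried (osd.0 is a hypothetical id):

ceph daemon osd.0 config get bluestore_allocator
ceph daemon osd.0 config get bluefs_allocator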

Re: [ceph-users] Unexplainable high memory usage OSD with BlueStore

2019-05-03 Thread Mark Nelson
On 5/3/19 1:38 AM, Denny Fuchs wrote:
hi, I never noticed the Debian /etc/default/ceph :-)
=
# Increase tcmalloc cache size
TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728
that is what is active now.
Yep, if you profile the OSD under a small write workload you can see
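For reference, a minimal sketch of that setting with the arithmetic spelled out; the daemons read it only at start, so a restart is needed:

# /etc/default/ceph
# 134217728 bytes = 128 * 1024 * 1024 = 128 MiB tcmalloc thread cache
TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728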

Re: [ceph-users] Unexplainable high memory usage OSD with BlueStore

2019-05-03 Thread Igor Podlesny
On Fri, 3 May 2019 at 13:38, Denny Fuchs wrote:
[...]
> If I understand correctly: I should try to set the bitmap allocator
That's one of the options I mentioned. Another one was to try using jemalloc (re-read my emails).
> [osd]
> ...
> bluestore_allocator = bitmap
> bluefs_allocator = bitmap

Re: [ceph-users] Unexplainable high memory usage OSD with BlueStore

2019-05-02 Thread Denny Fuchs
hi, I never noticed the Debian /etc/default/ceph :-)
=
# Increase tcmalloc cache size
TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728
that is what is active now.
Huge pages:
# cat /sys/kernel/mm/transparent_hugepage/enabled
always [madvise] never
# dpkg -S /usr/lib/x8
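The THP mode shown above can be inspected and changed at runtime (the bracketed entry is the active one); a sketch:

# show current mode
cat /sys/kernel/mm/transparent_hugepage/enabled
# switch until next boot; pick one of: always, madvise, never
echo madvise > /sys/kernel/mm/transparent_hugepage/enabled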

Re: [ceph-users] Unexplainable high memory usage OSD with BlueStore

2019-05-02 Thread Igor Podlesny
On Fri, 3 May 2019 at 05:12, Mark Nelson wrote:
[...]
> > -- https://www.kernel.org/doc/Documentation/vm/transhuge.txt
>
> Why are you quoting the description for the madvise setting when that's
> clearly not what was set in the case I just showed you?
Similarly why(?) are you telling us it must

Re: [ceph-users] Unexplainable high memory usage OSD with BlueStore

2019-05-02 Thread Mark Nelson
On 5/2/19 1:51 PM, Igor Podlesny wrote:
On Fri, 3 May 2019 at 01:29, Mark Nelson wrote:
On 5/2/19 11:46 AM, Igor Podlesny wrote:
On Thu, 2 May 2019 at 05:02, Mark Nelson wrote:
[...]
FWIW, if you still have an OSD up with tcmalloc, it's probably worth looking at the heap stats to see how mu

Re: [ceph-users] Unexplainable high memory usage OSD with BlueStore

2019-05-02 Thread Igor Podlesny
On Fri, 3 May 2019 at 01:29, Mark Nelson wrote:
> On 5/2/19 11:46 AM, Igor Podlesny wrote:
> > On Thu, 2 May 2019 at 05:02, Mark Nelson wrote:
> > [...]
> >> FWIW, if you still have an OSD up with tcmalloc, it's probably worth
> >> looking at the heap stats to see how much memory tcmalloc thinks

Re: [ceph-users] Unexplainable high memory usage OSD with BlueStore

2019-05-02 Thread Mark Nelson
On 5/2/19 11:46 AM, Igor Podlesny wrote:
On Thu, 2 May 2019 at 05:02, Mark Nelson wrote:
[...]
FWIW, if you still have an OSD up with tcmalloc, it's probably worth looking at the heap stats to see how much memory tcmalloc thinks it's allocated vs how much RSS memory is being used by the proce

Re: [ceph-users] Unexplainable high memory usage OSD with BlueStore

2019-05-02 Thread Igor Podlesny
On Thu, 2 May 2019 at 05:02, Mark Nelson wrote:
[...]
> FWIW, if you still have an OSD up with tcmalloc, it's probably worth
> looking at the heap stats to see how much memory tcmalloc thinks it's
> allocated vs how much RSS memory is being used by the process. It's
> quite possible that there is
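The heap stats in question can be pulled from a live tcmalloc-backed OSD (osd.0 is a hypothetical id):

# tcmalloc's view of allocated vs cached memory
ceph tell osd.0 heap stats
# optionally hand freed-but-cached pages back to the OS
ceph tell osd.0 heap release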

Re: [ceph-users] Unexplainable high memory usage OSD with BlueStore

2019-05-01 Thread Mark Nelson
On 5/1/19 12:59 AM, Igor Podlesny wrote:
On Tue, 30 Apr 2019 at 20:56, Igor Podlesny wrote:
On Tue, 30 Apr 2019 at 19:10, Denny Fuchs wrote:
[..]
Any suggestions?
-- Try a different allocator.
Ah, BTW, besides the memory allocator there's another option: the recently backported bitmap allocator. Igor

Re: [ceph-users] Unexplainable high memory usage OSD with BlueStore

2019-05-01 Thread Igor Fedotov
Hi Igor, yeah, BlueStore allocators are absolutely interchangeable. You can switch between them for free. Thanks, Igor
On 5/1/2019 8:59 AM, Igor Podlesny wrote:
On Tue, 30 Apr 2019 at 20:56, Igor Podlesny wrote:
On Tue, 30 Apr 2019 at 19:10, Denny Fuchs wrote:
[..]
Any suggestions?
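Since switching is just a config edit plus restart, a hedged rolling-restart sketch for one node (OSD ids are hypothetical; it waits for HEALTH_OK between restarts):

for id in 0 1 2 3; do
  systemctl restart ceph-osd@"$id"
  # let the cluster settle before touching the next OSD
  until ceph health | grep -q HEALTH_OK; do sleep 10; done
done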

Re: [ceph-users] Unexplainable high memory usage OSD with BlueStore

2019-04-30 Thread Igor Podlesny
On Tue, 30 Apr 2019 at 20:56, Igor Podlesny wrote:
> On Tue, 30 Apr 2019 at 19:10, Denny Fuchs wrote:
> [..]
> > Any suggestions?
>
> -- Try a different allocator.
Ah, BTW, besides the memory allocator there's another option: the recently backported bitmap allocator. Igor Fedotov wrote about its expected

Re: [ceph-users] Unexplainable high memory usage OSD with BlueStore

2019-04-30 Thread Igor Podlesny
On Tue, 30 Apr 2019 at 19:10, Denny Fuchs wrote:
[..]
> Any suggestions?
-- Try a different allocator. In Proxmox 4 they had this by default in /etc/default/ceph:
{{
## use jemalloc instead of tcmalloc
#
# jemalloc is generally faster for small IO workloads and when
# ceph-osd is backed by SSDs.
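The quoted block continues with an LD_PRELOAD line; a sketch of enabling it (the exact library path varies by distro, so the one below is an assumption for Debian amd64):

# /etc/default/ceph -- uncomment to preload jemalloc into the daemons
LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so.1

After restarting an OSD, grep jemalloc /proc/<osd pid>/maps shows whether the preload took effect.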

Re: [ceph-users] Unexplainable high memory usage OSD with BlueStore

2019-04-30 Thread Denny Fuchs
hi, I also want to add a memory problem. What we have:
* Ceph version 12.2.11
* 5 x 512GB Samsung 850 Evo
* 5 x 1TB WD Red (5.4k)
* OS Debian Stretch (Proxmox VE 5.x)
* 2 x CPU E5-2620 v4
* Memory 64GB DDR4
I've added to ceph.conf:
...
[osd]
osd memory target = 3221225472
...
Which i
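For the record, that target is exactly 3 GiB per OSD daemon:

[osd]
# 3221225472 bytes = 3 * 1024^3 = 3 GiB
osd memory target = 3221225472

With the ten OSDs listed above, that budgets roughly 30 GiB of the 64 GB RAM before any allocator overhead.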

Re: [ceph-users] Unexplainable high memory usage OSD with BlueStore

2018-11-08 Thread Wido den Hollander
On 11/8/18 12:28 PM, Hector Martin wrote:
> On 11/8/18 5:52 PM, Wido den Hollander wrote:
>> [osd]
>> bluestore_cache_size_ssd = 1G
>>
>> The BlueStore cache size for SSD has been set to 1GB, so the OSDs
>> shouldn't use more than that.
>>
>> When dumping the mem pools each OSD claims to be usin

Re: [ceph-users] Unexplainable high memory usage OSD with BlueStore

2018-11-08 Thread Hector Martin
On 11/8/18 5:52 PM, Wido den Hollander wrote:
> [osd]
> bluestore_cache_size_ssd = 1G
>
> The BlueStore cache size for SSD has been set to 1GB, so the OSDs
> shouldn't use more than that.
>
> When dumping the mem pools each OSD claims to be using between 1.8GB and
> 2.2GB of memory.
>
> $ ceph d
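A sketch of comparing the mempool accounting against actual resident memory (osd.0 is a hypothetical id):

# per-pool accounting as the OSD itself sees it
ceph daemon osd.0 dump_mempools
# resident set size (KiB) of every ceph-osd process
ps -eo pid,rss,comm | grep ceph-osd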

Re: [ceph-users] Unexplainable high memory usage OSD with BlueStore

2018-11-08 Thread Wido den Hollander
On 11/8/18 11:34 AM, Stefan Kooman wrote:
> Quoting Wido den Hollander (w...@42on.com):
>> Hi,
>>
>> Recently I've seen a Ceph cluster experience a few outages due to memory
>> issues.
>>
>> The machines:
>>
>> - Intel Xeon E3 CPU
>> - 32GB Memory
>> - 8x 1.92TB SSD
>> - Ubuntu 16.04
>> - Ceph 1

Re: [ceph-users] Unexplainable high memory usage OSD with BlueStore

2018-11-08 Thread Stefan Kooman
Quoting Wido den Hollander (w...@42on.com):
> Hi,
>
> Recently I've seen a Ceph cluster experience a few outages due to memory
> issues.
>
> The machines:
>
> - Intel Xeon E3 CPU
> - 32GB Memory
> - 8x 1.92TB SSD
> - Ubuntu 16.04
> - Ceph 12.2.8
What kernel version is running? What network card
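The basics being asked for can be collected in one go; a sketch:

uname -r                  # running kernel version
lspci | grep -i ethernet  # network card model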

[ceph-users] Unexplainable high memory usage OSD with BlueStore

2018-11-08 Thread Wido den Hollander
Hi,
Recently I've seen a Ceph cluster experience a few outages due to memory issues.
The machines:
- Intel Xeon E3 CPU
- 32GB Memory
- 8x 1.92TB SSD
- Ubuntu 16.04
- Ceph 12.2.8
Looking at one of the machines:
root@ceph22:~# free -h
      total   used   free   shared   buff
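To attribute that machine-level usage to the individual daemons, a sketch of the usual follow-up:

# resident set size (KiB) per ceph-osd process, largest first
ps -eo pid,rss,comm --sort=-rss | grep ceph-osd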