[ceph-users] Re: Nautilus: BlueFS spillover

2019-09-27 Thread Eugen Block
Hi, generally expanding existing DB devices isn't enough to immediately eliminate the spillover alert, since the spilled-over data is already there and isn't moved back by the expansion itself. Theoretically the alert will eventually disappear after RocksDB completely rewrites all the data at
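Until that rewrite happens on its own, a manual compaction is the usual way to get RocksDB to rewrite its SST files sooner. Below is a minimal sketch, assuming 'ceph tell osd.<id> compact' is available on your build (otherwise run 'ceph daemon osd.<id> compact' on the OSD host) and that osd.0 and osd.19 are the OSDs still reporting spillover; treat it as an illustration, not a vetted tool.

#!/usr/bin/env python3
# Sketch: ask the listed OSDs to compact their RocksDB so that data which
# spilled onto the slow device can move back to the expanded DB device.
# Assumes 'ceph tell osd.<id> compact' exists on this release; fall back to
# 'ceph daemon osd.<id> compact' on the OSD host if it does not.
import subprocess

OSD_IDS = [0, 19]  # OSDs that still show BLUEFS_SPILLOVER (example values)

for osd_id in OSD_IDS:
    subprocess.run(["ceph", "tell", f"osd.{osd_id}", "compact"], check=True)
    print(f"requested compaction on osd.{osd_id}")

Compaction is I/O-heavy, so doing one OSD at a time during a quiet period is the safer approach.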

[ceph-users] Re: Raw use 10 times higher than data use

2019-09-27 Thread Andrei Mikhailovsky
Hi Mark, thanks for coming back regarding the small objects which are under the min_alloc size. I am sure there are plenty of such objects, as the rgw holds backups of Windows PCs/servers which are not compressed. Could you please confirm something for me: when I do the "radosgw-admin bucket stats
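For context on why objects below min_alloc_size inflate raw usage so badly, here is a back-of-the-envelope sketch. The 64 KiB bluestore_min_alloc_size_hdd and the 3x replication are assumptions for illustration, not values read from this cluster:

# Rough space-amplification estimate for small RGW objects.
# MIN_ALLOC and REPLICAS are assumed values; plug in your own
# bluestore_min_alloc_size and pool size (or EC overhead).
MIN_ALLOC = 64 * 1024   # bluestore_min_alloc_size_hdd, assumed default
REPLICAS = 3            # replicated pool with size=3, assumed

def raw_bytes(logical_size: int) -> int:
    """Raw space for one object: round up to the allocation unit, then replicate."""
    allocated = -(-logical_size // MIN_ALLOC) * MIN_ALLOC
    return allocated * REPLICAS

for size in (1 * 1024, 4 * 1024, 16 * 1024, 64 * 1024):
    print(f"{size // 1024:>3} KiB object -> {raw_bytes(size) // 1024:>4} KiB raw "
          f"({raw_bytes(size) / size:.0f}x)")

With those assumptions a 1 KiB object consumes 192 KiB of raw space, which is the kind of multiplier that pushes raw use an order of magnitude above data use.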

[ceph-users] Re: Nfs-ganesha 2.6 upgrade to 2.7

2019-09-27 Thread Daniel Gryniewicz
Sounds like someone turned on MSPAC support, which is off by default. It should probably be left off. Daniel On 9/26/19 1:19 PM, Marc Roos wrote: Yes, I think it's this one, libntirpc. In 2.6 this samba dependency was not there. -Original Message- From: Daniel Gryniewicz [mailto:d...@redhat

[ceph-users] Re: Nautilus: BlueFS spillover

2019-09-27 Thread Igor Fedotov
Hi Eugen, generally expanding existing DB devices isn't enough to immediately eliminate the spillover alert, since the spilled-over data is already there and isn't moved back by the expansion itself. Theoretically the alert will eventually disappear after RocksDB completely rewrites all the data a

[ceph-users] Re: Nautilus: BlueFS spillover

2019-09-27 Thread Burkhard Linke
Hi, On 9/27/19 10:54 AM, Eugen Block wrote: Update: I expanded all rocksDB devices, but the warnings still appear: BLUEFS_SPILLOVER BlueFS spillover detected on 10 OSD(s) osd.0 spilled over 2.5 GiB metadata from 'db' device (2.4 GiB used of 30 GiB) to slow device osd.19 spilled over

[ceph-users] Re: Nautilus: BlueFS spillover

2019-09-27 Thread Eugen Block
Update: I expanded all rocksDB devices, but the warnings still appear: BLUEFS_SPILLOVER BlueFS spillover detected on 10 OSD(s) osd.0 spilled over 2.5 GiB metadata from 'db' device (2.4 GiB used of 30 GiB) to slow device osd.19 spilled over 66 MiB metadata from 'db' device (818 MiB u
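To check per OSD whether the spillover actually shrinks after expansion and compaction, the BlueFS perf counters expose the same numbers the health warning is built from. A small sketch, run on the OSD's host, assuming the 'bluefs' section of 'ceph daemon osd.<id> perf dump' provides db_used_bytes, db_total_bytes and slow_used_bytes as it does on Nautilus:

#!/usr/bin/env python3
# Sketch: report BlueFS DB usage and spillover for one OSD via its admin socket.
import json
import subprocess
import sys

osd_id = sys.argv[1] if len(sys.argv) > 1 else "0"
out = subprocess.check_output(["ceph", "daemon", f"osd.{osd_id}", "perf", "dump"])
bluefs = json.loads(out)["bluefs"]

db_used = bluefs["db_used_bytes"]
db_total = bluefs["db_total_bytes"]
slow_used = bluefs.get("slow_used_bytes", 0)

print(f"osd.{osd_id}: db {db_used / 2**30:.1f} GiB of {db_total / 2**30:.1f} GiB used, "
      f"spilled to slow device: {slow_used / 2**30:.2f} GiB")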

[ceph-users] HELP! Way too much space consumption with ceph-fuse using erasure code data pool under highly concurrent writing operations

2019-09-27 Thread daihongbo
Hi, it's observed that up to 10 times the expected space is consumed during a concurrent 200-file iozone write test against an erasure-coded (k=8, m=4) data pool mounted with ceph-fuse, but disk usage is normal if there is only one writing task. Furthermore, everything is normal when using a replicated data pool, no
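One plausible contributor to this kind of blow-up (not a confirmed diagnosis of the iozone case) is allocation rounding on the erasure-coded chunks: each object is split into k data chunks plus m parity chunks, and every chunk is rounded up to bluestore_min_alloc_size. A rough sketch, assuming the 64 KiB HDD default for min_alloc_size:

# Rough estimate of raw space for small objects on a k=8, m=4 EC pool.
# MIN_ALLOC is an assumed default; this illustrates one possible mechanism,
# not an exact model of BlueStore's allocator.
K, M = 8, 4
MIN_ALLOC = 64 * 1024

def ec_raw_bytes(object_size: int) -> int:
    """Raw space for one object: k data chunks plus m parity chunks, each rounded up."""
    chunk = -(-object_size // K)                       # bytes per data chunk
    allocated_chunk = max(-(-chunk // MIN_ALLOC), 1) * MIN_ALLOC
    return allocated_chunk * (K + M)

for size_kib in (4, 64, 512, 4096):
    size = size_kib * 1024
    print(f"{size_kib:>4} KiB object -> {ec_raw_bytes(size) // 1024:>5} KiB raw "
          f"({ec_raw_bytes(size) / size:.1f}x)")

Many small or partially filled objects are exactly the regime where the (k+m) rounding overhead dominates, whereas a replicated pool only multiplies by its size and therefore looks comparatively normal.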

[ceph-users] Re: Nautilus: BlueFS spillover

2019-09-27 Thread Eugen Block
Thank you, Konstantin. I'll resize the rocksDB devices then and see what happens if we turn the warnings back on. Thanks! Eugen Quoting Konstantin Shalygin: On 9/26/19 9:45 PM, Eugen Block wrote: I'm following the discussion for a tracker issue [1] about spillover warnings that affect
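Related to picking the new DB size: with the RocksDB defaults shipped at the time (max_bytes_for_level_base = 256 MB, multiplier = 10), BlueFS keeps a whole RocksDB level on the fast device only if that level fits, which is why sizes around roughly 3, 30 or 300 GB get fully used while anything in between still spills. A rough sketch of that arithmetic, assuming those defaults and ignoring the WAL and per-level slack:

# Approximate how much of a given DB device RocksDB can actually use before
# the next level no longer fits and spills to the slow device.
# BASE and MULT are assumed Nautilus-era defaults; this is not exact BlueFS accounting.
BASE = 256 * 2**20   # max_bytes_for_level_base
MULT = 10            # max_bytes_for_level_multiplier

def usable_bytes(db_bytes: int) -> int:
    """Sum the RocksDB level sizes that fit entirely on the DB device."""
    used, level_size = 0, BASE
    while used + level_size <= db_bytes:
        used += level_size
        level_size *= MULT
    return used

for gib in (10, 30, 64, 300):
    used = usable_bytes(gib * 2**30)
    print(f"{gib:>3} GiB DB device -> roughly {used / 2**30:.1f} GiB usable before spillover")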