Hello folks,
We've been having issues with slow requests cropping up on practically
idle Ceph clusters. From what I can tell, the requests are hanging
waiting for subops, and the OSD on the other end receives the requests
minutes later! Below, it started waiting for subops at 12:09:51, and the
subop was
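One way to see where a blocked request is sitting (a rough sketch; osd.12 is
just a placeholder for whichever OSD reports the blocked op, run on the host
that carries it):

    # ops currently in flight on that OSD, with the events each has hit so far
    ceph daemon osd.12 dump_ops_in_flight
    # recently completed slow ops, with per-event timestamps
    ceph daemon osd.12 dump_historic_ops

The per-event timestamps should show whether the time is being lost waiting
for the subop reply or before the op even reaches the peer OSD.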
Could you be running into block size (minimum allocation unit)
overhead? The default bluestore min_alloc_size is 64k for HDDs and 4k
for SSDs. This is exacerbated if you have tons of small files. I tend
to see this when the sum of raw used across pools in "ceph df detail"
is less than the global raw bytes used.
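A quick way to check this on a live cluster (a sketch; osd.0 stands in for
any one of your OSDs, and the bluestore perf counter names are the ones I
believe current builds expose):

    # per-pool raw usage vs the global figure
    ceph df detail
    # the configured allocation units (bluestore bakes these in at OSD creation)
    ceph daemon osd.0 config get bluestore_min_alloc_size_hdd
    ceph daemon osd.0 config get bluestore_min_alloc_size_ssd
    # bytes of user data vs bytes actually allocated on this OSD
    ceph daemon osd.0 perf dump | grep -E 'bluestore_(stored|allocated)'

A big gap between bluestore_stored and bluestore_allocated is the
small-object rounding showing up directly.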
Marcus,
You may want to look at the bluestore_min_alloc_size setting, as well
as the device-specific bluestore_min_alloc_size_ssd and
bluestore_min_alloc_size_hdd. By default bluestore uses a 64k allocation
unit for HDDs. I'm also using Ceph for small objects, and I've seen my
OSD usage go down from 80% to 20%.
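For reference, this is roughly what such an override looks like in ceph.conf
(the values are only an example; bluestore reads min_alloc_size when an OSD
is created, so it only takes effect for OSDs provisioned after the change):

    [osd]
    # only applies to OSDs created after this is set; existing OSDs keep
    # the allocation unit they were formatted with
    bluestore_min_alloc_size_hdd = 4096
    bluestore_min_alloc_size_ssd = 4096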
On Wed, Apr 19, 2017 at 4:33 PM, Gregory Farnum wrote:
> On Wed, Apr 19, 2017 at 1:26 PM, Pavel Shub wrote:
>> Hey All,
>>
>> I'm running a test of bluestore in a small VM and seeing 2x overhead
>> for each object in cephfs. Here's the output of df detail
Hey All,
I'm running a test of bluestore in a small VM and seeing 2x overhead
for each object in cephfs. Here's the output of df detail
https://gist.github.com/pavel-citymaps/868a7c4b1c43cea9ab86cdf2e79198ee
This is on a VM running all the daemons on a single 20GB disk; all pools are of size 1.
Is this the expected behavior?
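One way to measure the per-object overhead directly and rule out an
accounting quirk (a throwaway test; the pool name, pg count and object size
are arbitrary):

    ceph osd pool create alloctest 32 32
    ceph df            # note GLOBAL RAW USED before the writes
    dd if=/dev/urandom of=/tmp/obj4k bs=4096 count=1
    for i in $(seq 1 1000); do rados -p alloctest put obj$i /tmp/obj4k; done
    ceph df            # ~4MB of logical data written; see how much RAW USED grew
    ceph osd pool delete alloctest alloctest --yes-i-really-really-mean-it

If raw used grows by something much closer to 64MB than to 4MB, you're
seeing min_alloc_size rounding rather than a df bug.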
Hey All,
I'm running a test of bluestore in a small VM and seeing 2x overhead
for each object. Here's the output of df detail:
GLOBAL:
    SIZE        AVAIL       RAW USED    %RAW USED    OBJECTS
    20378M      14469M      5909M       29.00        772k
POOLS:
    NAME        ID    CA
Hi all,
I'm running a 6-node, 24-OSD cluster on Jewel 10.2.5 with kernel 4.8.
I put about 1TB of data in the cluster, with all pools having size 3. Yet
about 5TB of raw disk is used, as opposed to the expected 3TB.
Result of ceph -s:
pgmap v1057361: 2400 pgs, 3 pools, 984 GB data, 125 Mobject
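Back of the envelope, assuming the pgmap numbers above are representative:

    984 GB data x 3 replicas               ~= 2.9 TB expected raw
    5 TB used - 2.9 TB expected            ~= 2.1 TB unexplained
    2.1 TB / (125M objects x 3 copies)     ~= 5-6 KB extra per object copy

A few KB of extra allocation per object copy is about what you'd expect if
many small objects are being rounded up to a larger minimum allocation unit,
so if these OSDs are bluestore it's worth checking the
bluestore_min_alloc_size values discussed earlier in the thread (filestore
has its own per-file overhead for small objects).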