Samuel
huxia...@horebdata.cn
From: Maged Mokhtar
Date: 2020-09-18 18:20
To: vitalif; huxiaoyu; ceph-users
Subject: Re: [ceph-users] Re: Benchmark WAL/DB on SSD and HDD for RGW RBD CephFS
dm-writecache works using high and low watermarks, set at 50% and 45%.
All writes land in the cache; once the cache fills to the high watermark,
backfilling to the slow device starts, and it stops when the low watermark
is reached. Backfilling uses a b-tree with LRU blocks and tries to merge
blocks to reduce HDD seeks.
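To make that concrete, here is a minimal Python sketch of the watermark/writeback logic as described above; it is a toy model, not the dm-writecache kernel code, and the class and callback names are made up:

class WritecacheModel:
    def __init__(self, cache_blocks, high_watermark=50, low_watermark=45):
        # 50/45 mirror dm-writecache's documented default watermarks (percent).
        self.high = cache_blocks * high_watermark // 100
        self.low = cache_blocks * low_watermark // 100
        self.dirty = set()            # dirty cache blocks, keyed by block number
        self.writeback_active = False

    def write(self, block):
        self.dirty.add(block)         # every write is absorbed by the fast device
        if len(self.dirty) >= self.high:
            self.writeback_active = True   # cache hit the high watermark

    def writeback_step(self, flush_to_hdd):
        """Flush one run of adjacent dirty blocks while writeback is active."""
        if not self.writeback_active or not self.dirty:
            return
        block = min(self.dirty)       # take the lowest dirty block...
        run = []
        while block in self.dirty:    # ...and merge contiguous neighbours
            self.dirty.remove(block)
            run.append(block)
            block += 1
        flush_to_hdd(run)             # one mostly-sequential write instead of many seeks
        if len(self.dirty) <= self.low:
            self.writeback_active = False  # drained down to the low watermark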
On 2020-09-17 19:21, vita...@yourcmc.ru wrote:
> It does, RGW really needs SSDs for bucket indexes. CephFS also needs SSDs for
> metadata in any setup that's used by more than 1 user :).
Nah. I crashed my first CephFS with my music library, a 2 TB git-annex
repo, just me alone (slow ops on the MDS).
On 17/09/2020 17:37, Mark Nelson wrote:
Does fio handle S3 objects spread across many buckets well? I think
bucket listing performance was maybe missing too, but it's been a
while since I looked at fio's S3 support. Maybe they have those use
cases covered now. I wrote a Go-based benchmark called hsbench based on the
wasabi-tech s3-benchmark …
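For what it's worth, the many-buckets case can be sketched with plain boto3 against an RGW endpoint; this is only an illustration of the workload (endpoint, credentials and sizes are placeholders), not hsbench or fio:

# Illustrative sketch of the workload discussed above: spread PUTs across many
# buckets, then time bucket listings (a bucket-index heavy operation on RGW).
import time
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://rgw.example.com:8000",   # hypothetical RGW endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

NUM_BUCKETS = 50
OBJECTS_PER_BUCKET = 1000
payload = b"x" * 4096

for b in range(NUM_BUCKETS):
    bucket = f"bench-bucket-{b:04d}"
    s3.create_bucket(Bucket=bucket)
    for o in range(OBJECTS_PER_BUCKET):
        s3.put_object(Bucket=bucket, Key=f"obj-{o:06d}", Body=payload)

start = time.time()
listed = 0
for b in range(NUM_BUCKETS):
    pages = s3.get_paginator("list_objects_v2").paginate(Bucket=f"bench-bucket-{b:04d}")
    for page in pages:
        listed += len(page.get("Contents", []))
print(f"listed {listed} objects across {NUM_BUCKETS} buckets in {time.time() - start:.1f}s")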
> we did test dm-cache, bcache and dm-writecache, we found the latter to be
> much better.
Did you set the bcache block size to 4096 during your tests? Without this setting
it's slow because 99.9% of SSDs don't handle 512-byte overwrites well. Otherwise I
don't think bcache should be worse than dm-writecache.
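The block size is a format-time parameter, so here is a hedged sketch of applying it when creating the bcache pair (device paths are placeholders, this wipes them, and it assumes bcache-tools is installed and run as root):

# Sketch only: format a bcache backing/cache pair with a 4 KiB block size, as
# suggested above. The paths below are hypothetical, not from the thread.
import subprocess

BACKING_DEV = "/dev/sdX"        # hypothetical HDD used as the OSD data device
CACHE_DEV = "/dev/nvme0n1p1"    # hypothetical SSD/NVMe partition for the cache

subprocess.run(
    ["make-bcache",
     "--block", "4k",           # 4096-byte blocks: avoids 512-byte overwrites on the SSD
     "--writeback",             # cache writes, not just reads
     "-B", BACKING_DEV,
     "-C", CACHE_DEV],
    check=True,
)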
On 17/09/2020 19:21, vita...@yourcmc.ru wrote:
> RBD in fact doesn't benefit much from the WAL/DB partition alone because
> Bluestore never does more writes per second than the HDD can do on average (it
> flushes every 32 writes to the HDD). For RBD, the best thing is bcache.
rbd will benefit: for …
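As a back-of-envelope illustration of the quoted claim, with assumed numbers rather than measurements:

# Assumed figures only, to illustrate the argument above; not benchmarks.
hdd_random_write_iops = 200      # assumed figure for a 7.2k RPM drive
ssd_wal_burst_iops = 5000        # assumed rate the SSD WAL can absorb briefly

sustained_iops = hdd_random_write_iops   # the HDD must eventually apply every write
backlog_growth = ssd_wal_burst_iops - sustained_iops

print(f"burst: ~{ssd_wal_burst_iops} IOPS while the WAL has room")
print(f"sustained: ~{sustained_iops} IOPS per HDD OSD once deferred writes must be applied")
print(f"during a burst the deferred-write backlog grows by ~{backlog_growth} writes/s")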
On 16/09/2020 07:26, Danni Setiawan wrote:
Hi all,
I'm trying to find the performance penalty with HDD OSDs when using WAL/DB
on a faster device (SSD/NVMe) vs WAL/DB on the same device (HDD) for
different workloads (RBD, RGW with the index bucket in an SSD pool, and CephFS
with metadata in an SSD pool). I want to …
Yes, I agree that there are many knobs for fine-tuning Ceph performance.
The problem is we don't have data on which workload benefits most from
WAL/DB on SSD vs on the same spinning drive, and by how much. Does it really
help in a cluster that is mostly for object storage/RGW? Or maybe just
block storage …