Murilo,
The latency of an HDD is about 10 ms or more, and the Ceph IO stack may add
another ~3 ms, so the test result is still in doubt. I suspect the rbd test
hit the RAM cache.
Could you paste more of the fio output here?
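For what it's worth, a minimal fio sketch that bypasses the page cache and measures per-IO latency against a mapped rbd image; the device path, runtime and block size here are assumptions, not Murilo's actual setup:

  fio --name=rbd-randwrite --filename=/dev/rbd0 --ioengine=libaio \
      --direct=1 --rw=randwrite --bs=4k --iodepth=1 --numjobs=1 \
      --runtime=60 --time_based --group_reporting

With direct=1 and iodepth=1 on an HDD-backed 3-replica pool, the reported completion latency should sit near the drive's seek time plus the Ceph stack overhead; sub-millisecond numbers would point at a cache somewhere.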
On 2023/3/17 07:16, Murilo Morais wrote:
Good evening everyone!
Guys, what to expect
Janne,
Thanks for your advice. I'll have a try. :)
On 2023/3/16 15:00, Janne Johansson wrote:
On Thu, 16 Mar 2023 at 06:42, Norman wrote:
Janne,
Thanks for your reply. To reduce the cost of recovering OSDs when a
WAL/DB device goes down, maybe I have no choice but to add more WAL/DB
devices.
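For reference, a hedged sketch of moving an OSD's BlueFS DB/WAL onto a replacement device with ceph-bluestore-tool, assuming the old DB device is still readable (if it has truly died, its data is gone and the usual answer is to redeploy the OSD); the OSD id and device paths are hypothetical:

  # stop the OSD before touching its BlueFS devices
  systemctl stop ceph-osd@3
  # migrate the DB/WAL contents from the old device to the new one
  ceph-bluestore-tool bluefs-bdev-migrate \
      --path /var/lib/ceph/osd/ceph-3 \
      --devs-source /var/lib/ceph/osd/ceph-3/block.db \
      --dev-target /dev/nvme1n1p1
  systemctl start ceph-osd@3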
On 2023/3/15 15:04, Janne Johansson wrote:
Hi everyone,
I have a question about repairing a broken WAL/DB device.
I have a cluster with 8 OSDs and 4 WAL/DB devices (1 OSD per WAL/DB
device). How can I repair the OSDs quickly if one WAL/DB device breaks
down, without rebuilding them? Thanks.
steady-load
settings.
Best regards,
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
From: norman
Sent: 20 November 2020 13:40:18
To: ceph-users
Subject: [ceph-users] The serious side-effect of rbd cache setting
Hi All,
We're testing the rbd cache settings for OpenStack (Ceph 14.2.5,
BlueStore, 3-replica), and found an odd problem:
1. Setting librbd cache
[client]
rbd cache = true
rbd cache size = 16777216
rbd cache max dirty = 12582912
rbd cache target dirty = 8388608
rbd cache max dirty age = 1
r
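A side note: one way to confirm which cache settings the librbd client actually picked up is the client admin socket, assuming "admin socket" is configured in the [client] section; the asok path below is hypothetical:

  ceph --admin-daemon /var/run/ceph/ceph-client.cinder.12345.asok \
      config show | grep rbd_cache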
Hoping this helps.
Yours, Norman
On 18/11/2020 6:49 AM, Thomas Hukkelberg wrote:
Hi all!
Hopefully some of you can shed some light on this. We have big problems
with Samba crashing when macOS SMB clients access certain/random
folders/files over vfs_ceph.
When browsing the cephfs folder in question d
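For context, a hedged sketch of an smb.conf share exporting CephFS through vfs_ceph, with the fruit/streams_xattr modules commonly suggested for macOS clients; the share name, cephx user and paths are assumptions, not Thomas's actual configuration:

  [cephfs]
      path = /
      vfs objects = fruit streams_xattr ceph
      ceph:config_file = /etc/ceph/ceph.conf
      ceph:user_id = samba
      kernel share modes = no
      read only = no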
ceph df may not reflect the new files we can store.
I'm reading the Ceph code; I will reply to the thread when I have worked
out the correct meaning.
Thanks,
Norman
On 11/9/2020 7:40 AM, Igor Fedotov wrote:
Norman,
> default-fs-data0    9    374 TiB    1.48G    939 TiB    74.71
Has anyone else met the same problem? Using EC instead of replica is
supposed to save space, but now it's worse than replica...
On 9/9/2020 7:30 AM, norman kern wrote:
Hi,
I have changed most of the pools from 3-replica to EC 4+2 in my cluster.
When I use the ceph df command to show
the used capacity o
Norman
On 9/9/2020 7:17 PM, Igor Fedotov wrote:
Hi Norman,
not pretending to know the exact root cause, but IMO one of the working
hypotheses might be as follows:
Presuming spinners as backing devices for your OSDs, and hence a 64K
allocation unit (the bluestore min_alloc_size_hdd param).
1) 1.48GB
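To illustrate that hypothesis with the ceph df figures quoted in this thread (my own back-of-the-envelope numbers, assuming roughly uniform object sizes, not Igor's exact math): 374 TiB across ~1.48G objects is roughly 270 KiB per object, and with EC 4+2 each object becomes 4 data chunks plus 2 coding chunks, each rounded up to the 64 KiB allocation unit:

  average object      ~270 KiB
  per data chunk      ~270 KiB / 4 ≈ 68 KiB  ->  rounds up to 128 KiB on disk
  stored per object   6 chunks x 128 KiB = 768 KiB
  amplification       768 KiB / 270 KiB ≈ 2.8x  (vs the nominal 1.5x for EC 4+2)

That is in the same ballpark as the observed 939 TiB used for 374 TiB stored (≈2.5x), well above the nominal 1.5x.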
Hi,
I have changed most of the pools from 3-replica to EC 4+2 in my cluster.
When I use the ceph df command to show the used capacity of the cluster:
RAW STORAGE:
CLASS    SIZE       AVAIL      USED       RAW USED    %RAW USED
hdd      1.8 PiB    788 TiB    1.0 PiB    1.0 PiB
What are the current OSD sizes and pg_num? Are you using different-sized OSDs?
On 6/9/2020 1:34 AM, huxia...@horebdata.cn wrote:
Dear Ceph folks,
As the capacity of a single HDD (OSD) grows bigger and bigger, e.g. from
6TB up to 18TB or even more, should the number of PGs per OSD increase as
well, e.
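For reference, a hedged sketch of the usual rule of thumb (aim for on the order of 100 PG replicas per OSD across all pools) and of handing the problem to the autoscaler; the pool name below is hypothetical:

  # rough target: pg_num ≈ (num_OSDs x 100) / replica_size, rounded to a power of two
  #   e.g. 100 OSDs, 3 replicas: 100 x 100 / 3 ≈ 3333  ->  4096
  ceph osd pool set mypool pg_num 4096
  # or let the pg_autoscaler manage it (Nautilus and later)
  ceph osd pool set mypool pg_autoscale_mode on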
Hi guys,
When I updated the pg_num of a pool, I found it did not work (no
rebalancing). Does anyone know the reason? The pool's info:
pool 21 'openstack-volumes-rs' replicated size 3 min_size 2 crush_rule
21 object_hash rjenkins pg_num 1024 pgp_num 512 pgp_num_target 1024
autoscale_mode warn last_change 85103
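A hedged reading of that dump: pg_num is already 1024 but pgp_num is still 512 with pgp_num_target 1024, so the placement change simply has not finished; since Nautilus the mgr raises pgp_num in small steps, throttled by target_max_misplaced_ratio. A quick way to watch it (option shown at its default):

  ceph osd pool get openstack-volumes-rs pgp_num
  ceph config get mgr target_max_misplaced_ratio   # default 0.05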
Stefan,
I agree with you about the crush rule, but I truly hit this problem on
the cluster.
I set the values high for a quick recovery:
osd_recovery_max_active 16
osd_max_backfills 32
Is that a very bad setting?
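For comparison, a hedged sketch of putting those options back to the stock defaults at runtime (the defaults of that era, not a tuning recommendation for this particular cluster):

  ceph config set osd osd_max_backfills 1
  ceph config set osd osd_recovery_max_active 3
  ceph config get osd osd_max_backfills   # verify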
Kern
On 18/8/2020 5:27 PM, Stefan Kooman wrote:
On 2020-08-18 11:13, Hans van
there is a cap on how many
recovery/backfill requests there can be per OSD at any given time.
I am not sure though, but I am happy to be proved wrong by the senior
members in this list :)
Hans
On 8/18/20 10:23 AM, norman wrote:
Hi guys,
I have an rbd pool with pg_num 2048 and I want to change it to 4096. How
can I do this?
If I change it directly to 4096, it may cause slow client requests. What
would a better step size be?
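A minimal sketch of one stepped approach, assuming the pool is literally named 'rbd' and that you let recovery settle between increments (the step size is a judgment call, not an official number):

  # raise pg_num/pgp_num by a modest increment, e.g. 256 at a time
  ceph osd pool set rbd pg_num 2304
  ceph osd pool set rbd pgp_num 2304
  # watch misplaced/degraded objects drain before the next step
  ceph -s

On Nautilus and later the mgr throttles the split itself, so setting 4096 directly is also an option; on older releases the manual steps above are the safer route.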
Thanks,
Kern
Aaron,
That's the same consideration I had: if I mix them, I worry about
performance jitter.
Best regards,
Norman
On 8/7/2020 1:33 PM, Aaron Joue wrote:
Hi Norman
There is no fixed percentage for that. If you mix slow and fast OSDs in a
PG, the overall performance of the pool wi
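For reference, a hedged sketch of the usual way to keep slow and fast OSDs out of the same pool, using CRUSH device classes; the OSD id, rule name and pool name are hypothetical:

  # tag the fast OSDs (ceph usually auto-detects hdd/ssd; override if needed)
  ceph osd crush rm-device-class osd.10
  ceph osd crush set-device-class ssd osd.10
  # a replicated rule restricted to ssd-class OSDs
  ceph osd crush rule create-replicated fast-only default host ssd
  ceph osd pool set fast-pool crush_rule fast-only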
Anthony,
I just use normal HDDs. I intend to test the same HDDs on an x86 cluster
and an ARM cluster to compare CephFS performance.
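A minimal sketch of running the same load against both clusters from identical client hardware; the pool name and CephFS mount point are hypothetical:

  # raw RADOS throughput
  rados bench -p testpool 60 write --no-cleanup
  rados bench -p testpool 60 rand
  rados -p testpool cleanup
  # through a CephFS mount
  fio --name=cephfs-seq --directory=/mnt/cephfs/bench --rw=write \
      --bs=4M --size=4G --numjobs=4 --direct=1 --ioengine=libaio --group_reporting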
Best regards,
Norman
On 8/7/2020 11:51 AM, Anthony D'Atri wrote:
Bear in mind that ARM and x86 are architectures, not CPU models. Both are
available in a vast va
Aaron,
Is there a significant performance difference for OSDs? Can you tell me
the percentage of the performance drop?
Best regards,
Norman
On 8/7/2020 9:56 AM, Aaron Joue wrote:
Hi Norman,
It works well. We mix Arm and x86. For example, OSDs and MONs are on Arm,
RGWs are on x86. We can also put x86 OSDs in the same
Ubuntu Server and Huawei TaiShan.
Best regards,
kern
On 7/7/2020 2:25 PM, Siegfried Höllrigl wrote:
Which hardware and operating system are you using to run arm64?
Sean,
Thanks for your reply. I will test the perf on ARM.
On 7/7/2020 7:41 AM, Sean Johnson wrote:
It should be fine. I use arm64 systems as clients, and would expect them to be
fine for servers. The biggest problem would be performance.
~ Sean
On Jul 6, 2020, 5:04 AM -0500, norman , wrote
Hi all,
I'm using Ceph on x86 and ARM. Is it safe to mix x86 and arm64 in the
same cluster?
Hi guys,
I'm using CephFS and I hit a problem with a large object in my fs
metadata pool. Can anyone tell me how to avoid this problem? Thanks.
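If this is the LARGE_OMAP_OBJECTS health warning that deep scrub commonly raises on CephFS metadata pools (an assumption on my part; the excerpt does not say), a hedged starting point is to find out which object tripped it and what the current thresholds are:

  ceph health detail
  # the cluster log names the offending PG/object when deep scrub finds it
  ceph config get osd osd_deep_scrub_large_omap_object_key_threshold
  ceph config get osd osd_deep_scrub_large_omap_object_value_sum_threshold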
best regards,
kern