On 10/09/2020 22:35, vita...@yourcmc.ru wrote:
Hi George
Author of Ceph_performance here! :)
I suspect you're running tests with 1 PG. Every PG's requests are always
serialized, that's why OSD doesn't utilize all threads with 1 PG. You need
something like 8 PGs per OSD. More than 8 usually do
Norman,
>default-fs-data0    9    374 TiB    1.48G    939 TiB    74.71    212 TiB
given the above numbers, the 'default-fs-data0' pool has an average object
size of around 256K (374 TiB / 1.48G objects). Are you sure that the absolute
majority of your objects in this pool are 4M?
Wond
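A quick sanity check of the average-object-size estimate above, using only the two numbers quoted from the ceph df output (integer shell arithmetic):

```shell
# 374 TiB stored across ~1.48 billion objects (numbers from the ceph df
# output above). 374 TiB = 374 * 2^30 KiB; integer division is close enough.
avg_kib=$(( 374 * 1024 * 1024 * 1024 / 1480000000 ))
echo "average object size: ~${avg_kib} KiB"   # ~271 KiB, i.e. roughly 256K
```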
A naive ceph user asks:
I have a 3-node cluster configured with 72 bluestore OSDs, running on Ubuntu
20.04, Ceph Octopus 15.2.4.
The cluster is configured via ceph-ansible stable-5.0.
No configuration changes have been made outside of what is generated by
ceph-ansible.
I expected "ceph config dump"
Latency on the client side is not an issue. It just combines with the other
latencies in the stack. The more the client lags, the easier it is for the
cluster.
What I'm talking about here is slightly different. When you want to establish
baseline performance for the osd daemon (disregarding block device and networ
Yeah, of course... but RBD is primarily used for KVM VMs, so the results from a
VM are the thing that real clients see. So they do mean something... :)
I know. I tested fio before testing ceph
with fio. On the null ioengine fio can handle up to 14M IOPS (on my dusty
lab's R220). On blk_null it gets down to 2.4-2.8M IOPS.
On brd it drops to a sad 700k IOPS.
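A sketch of an fio job file for that self-test (all values here are assumptions, not taken from the thread): the null ioengine discards every request, so the result is fio's own maximum request rate, with no storage involved.

```ini
# Hypothetical fio self-test job; ioengine=null measures fio's own overhead.
[selftest]
ioengine=null
size=16G
bs=4k
rw=randread
iodepth=128
time_based=1
runtime=10
```

Swapping ioengine/filename to point at blk_null or a brd device then shows how much each layer costs relative to this ceiling.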
BTW, never run synthetic high-performance benchmarks on KVM. My old server
with 'makelinuxfasta
By the way, DON'T USE rados bench. It's an incorrect benchmark. ONLY use fio
On September 10, 2020, 22:35:53 GMT+03:00, vita...@yourcmc.ru wrote:
>Hi George
>
>Author of Ceph_performance here! :)
>
>I suspect you're running tests with 1 PG. Every PG's requests are
>always serialized, that's why OSD d
Hi George
Author of Ceph_performance here! :)
I suspect you're running tests with 1 PG. Every PG's requests are always
serialized, that's why OSD doesn't utilize all threads with 1 PG. You need
something like 8 PGs per OSD. More than 8 usually doesn't improve results.
Also note that read tests
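A rough sketch of what "8 PGs per OSD" implies for pg_num (the OSD count and replica size here are assumptions for illustration): each PG is placed on `size` OSDs, so the per-OSD PG count is pg_num * size / osds.

```shell
# Hypothetical 72-OSD cluster, replicated size 3, targeting ~8 PGs per OSD.
# pg_num ≈ osds * pgs_per_osd / size
osds=72; size=3; per_osd=8
pg_num=$(( osds * per_osd / size ))
echo "target pg_num: ${pg_num} (round up to a power of two, e.g. 256)"
```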
On 2020-09-01 10:51, Marcel Kuiper wrote:
> As a matter of fact we did. We doubled the storage nodes from 25 to 50.
> Total osds now 460.
>
> You want to share your thoughts on that?
OK, I'm really curious if you observed the following behaviour:
During, or shortly after the rebalance, did you s
Thank you!
I know that article, but they promise 6-core use per OSD, and I got barely
over three, and all this in a totally synthetic environment with no SSD to
blame (brd is more than fast and has very consistent latency under any
kind of load).
On Thu, Sep 10, 2020, 19:39 Marc Roos wrote:
>
Hi George,
Very interesting and also a somewhat expected result. Some messages posted
here already indicate that getting expensive top-of-the-line
hardware does not really result in any performance increase above some
level. Vitaliy has documented something similar [1]
[1]
https://yourcm
On 9/10/20 11:03 AM, George Shuklin wrote:
I'm creating a benchmark suite for Ceph.
While benchmarking the benchmark itself, I checked how fast ceph-osd
works. I decided to skip all the 'SSD mess' and use brd (block RAM disk,
modprobe brd) as the underlying storage. Brd itself can yield up to
2.7Mpps in
On 2020-09-08 19:30, norman kern wrote:
> Hi,
>
> I have changed most of the pools from 3-replica to EC 4+2 in my cluster,
> and when I use the ceph df command to show
>
> the used capacity of the cluster:
>
[...]
>
> The USED = 3 * STORED in 3-replica mode is completely right, but for the
> EC 4+2 pool (for
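For an EC k=4, m=2 pool the expected relation is USED = STORED * (k+m)/k = 1.5 * STORED, since every 4 data chunks carry 2 parity chunks. A quick sketch with a made-up STORED value:

```shell
# EC 4+2: raw usage = stored * (k + m) / k, i.e. 1.5x (vs. 3x for 3-replica).
k=4; m=2
stored_tib=100                            # hypothetical STORED value
used_tib=$(( stored_tib * (k + m) / k ))
echo "expected USED for ${stored_tib} TiB stored: ${used_tib} TiB"
```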
I'm creating a benchmark suite for Ceph.
While benchmarking the benchmark itself, I checked how fast ceph-osd works.
I decided to skip all the 'SSD mess' and use brd (block RAM disk, modprobe
brd) as the underlying storage. Brd itself can yield up to 2.7Mpps in fio.
In single-thread mode (iodepth=1) it ca
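For reference, a sketch of the brd setup described above (the device size is an assumption; the printed command must be run as root on the test host):

```shell
# brd's rd_size module parameter is in KiB; this sizes one 4 GiB RAM disk.
rd_size_kib=$(( 4 * 1024 * 1024 ))
echo "modprobe brd rd_nr=1 rd_size=${rd_size_kib}"   # prints the command to run
```

The resulting /dev/ram0 can then be handed to ceph-osd as its data device, taking real SSD latency out of the picture.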
On Thu, Sep 10, 2020 at 10:19 AM shubjero wrote:
>
> Hi Casey,
>
> I was never setting rgw_max_chunk_size in my ceph.conf so it must have
> been the default? Funny enough, I don't even see this configuration
> parameter in the documentation
> https://docs.ceph.com/docs/nautilus/radosgw/config-ref/ .
>
>
On Thu, Sep 10, 2020 at 10:05 AM Eugen Block wrote:
>
> Thank you, Jason.
> The report can be found at https://tracker.ceph.com/issues/47390
>
> By the way, I think your link to the rbd issues should be the other
> way around, http://tracker.ceph.com/issues/rbd gives me a 404. ;-)
> This is better
Hi,
I stumbled across an issue where an OSD that gets redeployed has a CRUSH weight
of 0 after cephadm finishes.
I have created a service definition for the orchestrator to automatically
deploy OSDs on SSDs:
service_type: osd
service_id: SSD_OSDs
placement:
  label: 'osd'
data_devices:
  rotati
Multiple people have posted to this mailing list with this exact problem,
and presumably others have it as well, but the developers don't believe it is
worthy of even a warning in the documentation. For all the good that
Ceph does, this issue is oddly treated with little urgency. Basically, Ceph
does
Hi Casey,
I was never setting rgw_max_chunk_size in my ceph.conf so it must have
been the default? Funny enough, I don't even see this configuration
parameter in the documentation
https://docs.ceph.com/docs/nautilus/radosgw/config-ref/ .
Armed with your information I tried setting the following in my c
Thank you, Jason.
The report can be found at https://tracker.ceph.com/issues/47390
By the way, I think your link to the rbd issues should be the other
way around, http://tracker.ceph.com/issues/rbd gives me a 404. ;-)
This is better: https://tracker.ceph.com/projects/rbd/issues
Regards,
Eugen
On Thu, Sep 10, 2020 at 7:36 AM Eugen Block wrote:
>
> Hi *,
>
> I was just testing rbd-mirror on ceph version 15.2.4-864-g0f510cb110
> (0f510cb1101879a5941dfa1fa824bf97db6c3d08) octopus (stable) and
> noticed mgr errors on the primary site (also in version 15.2.2):
>
> ---snip---
> 2020-09-10T11:
Hi Jason,
Sure, it's probably worth creating a new tracker ticket at [1]. Is
your system configured to enable journaling by default on all new
images?
yes, I have it in the ceph.conf
rbd default features = 125
and the features are enabled:
ceph1:~ # rbd info rbd-pool1/cloud7 | grep features
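The value 125 is a bitmask of RBD feature bits; a sketch decoding it (feature bit values as documented for rbd: layering=1, exclusive-lock=4, object-map=8, fast-diff=16, deep-flatten=32, journaling=64):

```shell
# 1 + 4 + 8 + 16 + 32 + 64 = 125: every default feature except striping (2),
# and crucially including journaling (64), which rbd-mirror needs.
sum=$(( 1 + 4 + 8 + 16 + 32 + 64 ))
echo "rbd default features = ${sum}"
```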
On Thu, Sep 10, 2020 at 7:44 AM Eugen Block wrote:
>
> Hi *,
>
> I'm currently testing rbd-mirror on ceph version
> 15.2.4-864-g0f510cb110 (0f510cb1101879a5941dfa1fa824bf97db6c3d08)
> octopus (stable) and saw this during an rbd import of a fresh image on
> the primary site:
>
> ---snip---
> ceph1:
We might have the same problem. EC 6+2 on a pool for RBD images on spindles.
Please see the earlier thread "mimic: much more raw used than reported". In our
case, this seems to be a problem exclusively for RBD workloads and here, in
particular, Windows VMs. I see no amplification at all on our c
Hi *,
I'm currently testing rbd-mirror on ceph version
15.2.4-864-g0f510cb110 (0f510cb1101879a5941dfa1fa824bf97db6c3d08)
octopus (stable) and saw this during an rbd import of a fresh image on
the primary site:
---snip---
ceph1:~ # rbd import /mnt/SUSE-OPENSTACK-CLOUD-7-x86_64-GM-DVD1.iso
Hi *,
I was just testing rbd-mirror on ceph version 15.2.4-864-g0f510cb110
(0f510cb1101879a5941dfa1fa824bf97db6c3d08) octopus (stable) and
noticed mgr errors on the primary site (also in version 15.2.2):
---snip---
2020-09-10T11:20:01.724+0200 7f1c1b46a700 0 [dashboard ERROR
controllers.
thanks a lot for the information.
samuel
huxia...@horebdata.cn
From: Eugen Block
Date: 2020-09-10 08:50
To: ceph-users
Subject: [ceph-users] Re: Moving OSD from one node to another
Hi,
I haven't done this myself yet but you should be able to simply move
the (virtual) disk to the new host