Hey all,
I hope this is a simple question, but I haven't been able to figure it out.
On one of our clusters there seems to be a disparity between the global
available space and the space available to pools.
$ ceph df
GLOBAL:
SIZE AVAIL RAW USED %RAW USED
1528T 505T
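(For readers hitting the same question: per-pool MAX AVAIL is derived from the
fullest OSD reachable by the pool's CRUSH rule, divided by the replica count, so
a single imbalanced OSD can make it much smaller than the global AVAIL. A quick
way to check, using standard commands rather than anything specific to this
cluster:
`ceph df detail`
`ceph osd df tree`)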
That creates IO with a queue depth of 1, so you are effectively
measuring latency and not bandwidth.
30 MB/s would be ~33 ms of latency on average (a little less than that,
because it still needs to do the actual IO).
Assuming you distribute to all 3 servers: each IO will have to wait
for one of your "lar
`dd if=/dev/zero of=/mnt/test/writetest bs=1M count=1000 oflag=dsync`
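Spelling out the arithmetic behind that (a rough estimate that ignores the time
of the IO itself): with bs=1M and oflag=dsync, dd issues one synchronous 1 MB
write at a time, so
  throughput ≈ block size / per-write latency
  1 MB / (30 MB/s) ≈ 0.033 s ≈ 33 ms per write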
-Original Message-
From: Paul Emmerich [mailto:paul.emmer...@croit.io]
Sent: Friday, December 07, 2018 12:31 PM
To: Scharfenberg, Buddy
Cc: Ceph Users
Subject: Re: [ceph-users] Performance Problems
What are the exact parameters you are using? I often see people using
dd in a way that effectively just measures write latency instead of
throughput.
Check out fio as a better/more realistic benchmarking tool.
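A minimal sketch of what that could look like for sequential write bandwidth
(the file path and size are placeholders, not from the original thread):
`fio --name=seqwrite --filename=/mnt/test/fiotest --rw=write --bs=1M --size=4G --ioengine=libaio --iodepth=16 --direct=1 --group_reporting`
The deeper queue (iodepth=16) and direct IO are what make this measure
bandwidth rather than per-IO latency.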
Paul
--
Paul Emmerich
Looking for help with your Ceph cluster? Contact us at https://croit.io
I'm measuring with dd, writing from /dev/zero with a block size of 1 MB, 1000
times, to get client write speeds.
-Original Message-
From: Paul Emmerich [mailto:paul.emmer...@croit.io]
Sent: Friday, December 07, 2018 11:52 AM
To: Scharfenberg, Buddy
Cc: Ceph Users
Subject: Re: [ceph-users] Performance Problems
How are you measuring the performance when using CephFS?
Paul
--
Paul Emmerich
Looking for help with your Ceph cluster? Contact us at https://croit.io
croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90
On Fri, Dec 7, 2018 at 18:34, Scharfenberg, Buddy wrote:
Hello all,
I'm new to Ceph management, and we're having some performance issues with a
basic cluster we've set up.
We have 3 nodes set up: 1 with several large drives, 1 with a handful of small
SSDs, and 1 with several NVMe drives. We have 46 OSDs in total, a healthy FS
being served out, and 1
Thanks Greg,
Yes, I'm using CephFS and RGW (mainly CephFS)
The files are still accessible and users don't report any problems.
Here is the output of ceph -s
ceph -s
cluster:
id:
health: HEALTH_OK
services:
mon: 5 daemons, quorum
ceph-mon01,ceph-mon02,ceph-mon03,ceph-mon04,ce
Hi all,
I tried the mds_cache_memory_limit setting and it didn't work.
But unexpectedly, we found that dropping the cache does work.
We added an "echo 3 > /proc/sys/vm/drop_caches" cron job.
It drops the cache every 10 minutes, and the problem didn't occur for a whole
night. : )
It seems to work but we can’t unders
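For reference, the cron job described above could look something like this in
/etc/crontab (the 10-minute schedule matches the message; everything else is
just a sketch):
`*/10 * * * * root echo 3 > /proc/sys/vm/drop_caches`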
Hello,
I'm using Ceph 12.2.10 on Debian Stretch.
I have two clusters in two different datacenters, interconnected by a link with
~7 ms of latency.
I set up S3 replication between those DCs and it works fine, except when I
enable SSL.
My setup is the following:
- 2 radosgw on each site
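For context, enabling SSL on the radosgw frontend in Luminous (12.2.x,
civetweb) typically looks something like the snippet below; the section name
and certificate path are placeholders, and the zone/zonegroup endpoints also
have to be registered as https:// URLs for replication to go over TLS:
[client.rgw.site1-gw1]
rgw frontends = civetweb port=443s ssl_certificate=/etc/ceph/rgw-combined.pem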