Yes, those latencies are in seconds. sum/avgcount gives the average since the
daemon was (re)started.
If you're interested, I've co-authored a collectd plugin that captures data
from Ceph daemons; built into the plugin is the option to calculate either
the long-run avg (sum/avgcount) or
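The sentence is cut off, but given the "either", the alternative is presumably
a per-interval average over each polling window. A minimal sketch of both
calculations, using the stock "ceph daemon ... perf dump" admin-socket command
rather than the plugin's actual code (osd.0 and the sleep between polls are
placeholders):

    import json
    import subprocess

    def perf_dump(daemon):
        # Query a daemon's perf counters over its admin socket.
        out = subprocess.check_output(["ceph", "daemon", daemon, "perf", "dump"])
        return json.loads(out)

    def long_run_avg(counter):
        # Average in seconds since the daemon (re)started.
        return counter["sum"] / counter["avgcount"] if counter["avgcount"] else 0.0

    def interval_avg(prev, curr):
        # Average in seconds over the last polling interval only.
        dcount = curr["avgcount"] - prev["avgcount"]
        return (curr["sum"] - prev["sum"]) / dcount if dcount else 0.0

    prev = perf_dump("osd.0")["osd"]["op_r_latency"]
    # ... sleep one polling interval here ...
    curr = perf_dump("osd.0")["osd"]["op_r_latency"]
    print("long-run: %.6f s  interval: %.6f s"
          % (long_run_avg(curr), interval_avg(prev, curr)))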
For OSDs, that is correct. FYI - perf counters are also available for all Ceph
daemon types (mon, mds, rgw).
Dan Ryder
From: 10 minus [mailto:t10te...@gmail.com]
Sent: Monday, November 17, 2014 7:25 AM
To: Dan Ryder (daryder)
Cc: ceph-users
Subject: Re: [ceph-users] Performance data collection
Hi,
Take a look at the built-in perf counters -
http://ceph.com/docs/master/dev/perf_counters/. Through these you can get
individual daemon performance as well as some cluster-level statistics.
Other (cluster-level) disk space utilization and pool utilization/performance
data are available through “c
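That sentence is truncated, but the cluster-level numbers are presumably the
ones exposed by "ceph df" and "ceph osd pool stats". A minimal sketch of
pulling them as JSON (exact key names vary between Ceph releases):

    import json
    import subprocess

    def ceph_json(*args):
        # Run a ceph CLI command and parse its JSON output.
        out = subprocess.check_output(["ceph"] + list(args) + ["--format", "json"])
        return json.loads(out)

    df = ceph_json("df")                        # global and per-pool space usage
    pool_io = ceph_json("osd", "pool", "stats") # per-pool client I/O rates
    for pool in df["pools"]:                    # field names vary by release
        print(pool["name"], pool["stats"])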
Hi cephers,
I'm designing a new "production-like" Ceph cluster, but I've run into an issue.
I have 4 nodes with 1 disk for OS, 3 disks for OSDs on each node. However, I
only have 2 extra disks for use of OSD journals.
My first question is if it is possible to use a remote disk partition
(curre
Hi Dan,
Maybe I misunderstand what you are trying to do, but I think you are trying to
add your Ceph RBD pool to libvirt as a storage pool?
If so, it's relatively straightforward - here's an example from my setup:
Related libvirt
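The example itself is cut off above; for reference, a minimal sketch of the
same idea via the libvirt Python bindings. The pool name, monitor host, and
secret UUID below are placeholders, not values from the poster's setup:

    import libvirt

    POOL_XML = """
    <pool type='rbd'>
      <name>ceph-rbd</name>
      <source>
        <name>rbd</name>                        <!-- Ceph pool name -->
        <host name='mon1.example.com' port='6789'/>
        <auth type='ceph' username='libvirt'>
          <secret uuid='00000000-0000-0000-0000-000000000000'/>
        </auth>
      </source>
    </pool>
    """

    conn = libvirt.open("qemu:///system")
    pool = conn.storagePoolDefineXML(POOL_XML, 0)  # persistent definition
    pool.create()                                  # start the pool now
    pool.setAutostart(1)                           # and on host boot

This is the scripted equivalent of virsh pool-define / pool-start /
pool-autostart.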
I had similar issues; I tried many different ways to use Vagrant but couldn’t
build packages successfully. I’m not sure how reliable this is, but if you are
looking to get Calamari packages quickly, you can skip the Vagrant install
steps and just use the Makefile.
I used “make dpkg” to build th
Sent: Friday, May 09, 2014 1:42 PM
To: Dan Ryder (daryder)
Cc: Haomai Wang; ceph-us...@ceph.com
Subject: Re: [ceph-users] Low latency values
The recovery_state "latencies" are all about how long your PGs are in various
states of recovery; they're not per-operation latencies. 3 days still seem
-----Original Message-----
From: Haomai Wang [mailto:haomaiw...@gmail.com]
Sent: Friday, May 09, 2014 12:29 PM
To: Dan Ryder (daryder)
Cc: ceph-us...@ceph.com
Subject: Re: [ceph-users] Low latency values
yes
On Sat, May 10, 2014 at 12:19 AM, Dan Ryder (daryder) wrote:
> Thanks Haomai,
>
>
Thanks Haomai,
So are all latency values calculated in seconds?
Dan
-----Original Message-----
From: Haomai Wang [mailto:haomaiw...@gmail.com]
Sent: Friday, May 09, 2014 11:20 AM
To: Dan Ryder (daryder)
Cc: ceph-us...@ceph.com
Subject: Re: [ceph-users] Low latency values
178.07771/184229 = 0.00097 s (about 0.97 ms)
Hi,
I'm seeing really low latency values, to the extent that they don't seem
realistic.
Snippet from the latest perf dump for this OSD:
"op_r_latency": { "avgcount": 184229,
"sum": 178.07771},
Long run avg = 178.07771/184229 = 0.00097 ms? Is it correct that latency values
have m
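For reference, the arithmetic on that dump; as the reply earlier on this page
confirms, the quotient is in seconds, not milliseconds:

    # sum / avgcount from the dump above, converted for readability
    sum_s, avgcount = 178.07771, 184229
    avg = sum_s / avgcount
    print("%.6f s = %.3f ms" % (avg, avg * 1000))  # 0.000967 s = 0.967 ms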
Hello,
I've been working on Ceph / OpenStack integration and I have a couple of
questions.
1. If I boot an instance from a volume, I can't see the storage of that
volume:
[Screen capture]
The volume I booted from is located at /dev/vda. I'm not too familiar with
the Linux filesystem, but fro
Hello,
My team is working on Ceph and OpenStack integration, trying to get volume
usage statistics as well as I/O and latency for volumes.
I've found that through the "virsh" command we should be able to get these stats.
However, with the "virsh domblkinfo" command, we are running into a problem - "Bad file
Hello,
I'm working with two different Ceph clusters, and in both clusters, I'm seeing
very high latency values.
Here's part of a sample perf dump:
"recoverystate_perf": { "initial_latency": { "avgcount": 338,
"sum": 0.069851000},
"started_latency": { "avgcount": 1647,
Hello,
I am wondering if there is any detailed documentation for obtaining I/O
statistics for a Ceph cluster.
The important metrics I'm looking for are: the number of operations, size of
operations, and latency of operations - by operations I mean reads and
writes.
I've seen what look like
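All three of those metric families are covered by the OSD perf counters
discussed elsewhere on this page. A minimal sketch pulling the read/write
counters (names as of the Firefly-era perf counter docs; they can differ
between releases):

    import json
    import subprocess

    osd = json.loads(subprocess.check_output(
        ["ceph", "daemon", "osd.0", "perf", "dump"]))["osd"]

    for name in ("op_r", "op_w",                     # number of operations
                 "op_r_out_bytes", "op_w_in_bytes",  # size of operations
                 "op_r_latency", "op_w_latency"):    # latency (sum/avgcount, in s)
        print(name, osd.get(name))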
Hello,
On two separate occasions I have lost power to my Ceph cluster. Both times, I
had trouble bringing the cluster back to good health. I am wondering if I need
to configure something that would solve this problem?
After powering back up the cluster, "ceph health" revealed stale PGs, mds
clus