That's better :D
Thanks a lot, now I will be able to troubleshoot my problem :)
Thanks Dan,
Andrija
On 11 August 2014 13:21, Dan Van Der Ster wrote:
> Hi,
> I changed the script to be a bit more flexible with the osd path. Give
> this a try again:
> https://github.com/cernceph/ceph-scripts/blob/master/tools/rbd-io-stats.pl
Hi,
I changed the script to be a bit more flexible with the osd path. Give this a
try again:
https://github.com/cernceph/ceph-scripts/blob/master/tools/rbd-io-stats.pl
Cheers, Dan
-- Dan van der Ster || Data & Storage Services || CERN IT Department --
On 11 Aug 2014, at 12:48, Andrija Panic wrote:
I apologize, I clicked the Send button too fast...
Anyway, I can see there are lines like this in the log file:
2014-08-11 12:43:25.477693 7f022d257700 10 filestore(/var/lib/ceph/osd/ceph-0) write 3.48_head/14b1ca48/rbd_data.41e16619f5eb6.1bd1/head//3 3641344~4608 = 4608
Not sure if I can do anything...
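For reference, a line like the one above can be decoded field by field with a few lines of Python; this is only a sketch based on that single sample line (the field positions are assumed from it, not taken from the Ceph source or from rbd-io-stats.pl):

# Decode one debug_filestore=10 "write" line into its fields. Field
# positions are assumed from the single sample line above.
line = ('2014-08-11 12:43:25.477693 7f022d257700 10 '
        'filestore(/var/lib/ceph/osd/ceph-0) write '
        '3.48_head/14b1ca48/rbd_data.41e16619f5eb6.1bd1/head//3 '
        '3641344~4608 = 4608')

fields = line.split()
osd_path = fields[4][len('filestore('):-1]    # /var/lib/ceph/osd/ceph-0
op = fields[5]                                # write
obj = fields[6].split('/')                    # looks like PG / hash / object / snap / '' / pool id
pg = obj[0].replace('_head', '')              # 3.48
image_prefix = obj[2].rsplit('.', 1)[0]       # rbd_data.<image id>
offset, length = (int(x) for x in fields[7].split('~'))

print(pg, image_prefix, offset, length)       # 3.48 rbd_data.41e16619f5eb6 3641344 4608

The rbd_data.<id> prefix is what ties a write back to a particular RBD image: it matches the block_name_prefix shown by rbd info for format 2 images.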
Hi Dan,
the script provided seems not to work on my Ceph cluster :(
This is Ceph version 0.80.3.
I get empty results, at both debug level 10 and the maximum level of 20...
[root@cs1 ~]# ./rbd-io-stats.pl /var/log/ceph/ceph-osd.0.log-20140811.gz
Writes per OSD:
Writes per pool:
Writes per PG:
Writes per ...
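One thing worth checking before blaming the script is whether that rotated log contains any filestore write lines at all, i.e. whether debug_filestore was really at 10 while the log was being written. A quick check (Python 3 sketch, matching the pattern from the sample line quoted earlier in the thread):

import gzip
import sys

# Count filestore "write" lines in a (possibly gzipped) OSD log to verify
# that debug_filestore was actually at 10 when the log was written.
path = sys.argv[1] if len(sys.argv) > 1 else '/var/log/ceph/ceph-osd.0.log-20140811.gz'
opener = gzip.open if path.endswith('.gz') else open
count = 0
with opener(path, 'rt') as f:
    for line in f:
        if 'filestore(' in line and ') write' in line:
            count += 1
print('%d filestore write lines in %s' % (count, path))

If that prints 0, the debug level was not in effect for that log (or was raised only after it rotated), and any parser will come back empty.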
Will definitely do so, thanks Wido and Dan...
Cheers guys
On 8 August 2014 16:13, Wido den Hollander wrote:
> On 08/08/2014 03:44 PM, Dan Van Der Ster wrote:
>
>> Hi,
>> Here’s what we do to identify our top RBD users.
>>
>> First, enable log level 10 for the filestore so you can see all the
On 08/08/2014 03:44 PM, Dan Van Der Ster wrote:
Hi,
Here’s what we do to identify our top RBD users.
First, enable log level 10 for the filestore so you can see all the IOs
coming from the VMs. Then use a script like this (used on a dumpling
cluster):
https://github.com/cernceph/ceph-scripts/blob/master/tools/rbd-io-stats.pl
Thanks again, and btw, besides it being Friday I'm also on vacation - so double
the joy of troubleshooting performance problems :)))
Thx :)
On 8 August 2014 16:01, Dan Van Der Ster wrote:
> Hi,
>
> On 08 Aug 2014, at 15:55, Andrija Panic wrote:
>
> Hi Dan,
>
> thank you very much for the script...
Hi,
On 08 Aug 2014, at 15:55, Andrija Panic wrote:
Hi Dan,
thank you very much for the script, will check it out... no throttling so far,
but I guess it will have to be done...
This seems to read only gzipped logs?
Well it's pretty simple, and it zcat's each input...
Hi Dan,
thank you very much for the script, will check it out... no throttling so
far, but I guess it will have to be done...
This seems to read only gzipped logs? And since it is read-only, I guess it is
safe to run it on the production cluster now...?
The script will also check for multiple OSDs as far as I...
Hi,
Here’s what we do to identify our top RBD users.
First, enable log level 10 for the filestore so you can see all the IOs coming
from the VMs. Then use a script like this (used on a dumpling cluster):
https://github.com/cernceph/ceph-scripts/blob/master/tools/rbd-io-stats.pl
to summarize the writes.
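To make the idea concrete, here is a rough Python re-imagining of that kind of summary; the real script is the Perl one linked above, and the regex below is based only on the sample debug_filestore=10 write line quoted elsewhere in this thread. Raising the log level on running OSDs is typically done with something like: ceph tell osd.* injectargs '--debug_filestore 10'

import gzip
import re
import sys
from collections import Counter

# Rough sketch of a per-pool / per-PG / per-image write summary; it only
# matches rbd_data objects, so non-RBD traffic is ignored.
WRITE_RE = re.compile(
    r'filestore\(\S+\) write\s+'
    r'(?P<pg>\S+?)_head/\S+/(?P<obj>rbd_data\.[0-9a-f]+)\.\S+\s+'
    r'\d+~(?P<length>\d+)')

per_pool, per_pg, per_image = Counter(), Counter(), Counter()

for path in sys.argv[1:]:
    opener = gzip.open if path.endswith('.gz') else open
    with opener(path, 'rt') as f:
        for line in f:
            m = WRITE_RE.search(line)
            if not m:
                continue
            pg = m.group('pg')                # e.g. 3.48
            per_pg[pg] += 1
            per_pool[pg.split('.')[0]] += 1   # pool id is the PG prefix
            per_image[m.group('obj')] += 1    # rbd_data.<image id>

for title, counter in (('Writes per pool', per_pool),
                       ('Writes per PG', per_pg),
                       ('Writes per RBD image prefix', per_image)):
    print(title + ':')
    for key, n in counter.most_common(10):
        print('  %-40s %d' % (key, n))

Mapping an rbd_data.<id> prefix back to an image name then means finding the image whose block_name_prefix (shown by rbd info) matches.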
Hm, true...
One final question, I might be a noob...
13923 B/s rd, 4744 kB/s wr, 1172 op/s
what does this op/s represent - is it classic IOPS (4k reads/writes) or
something else? How much is too much :) - I'm familiar with SATA/SSD IOPS
specs/tests, etc., but not sure what Ceph means by op/s - cou...
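For a rough sense of scale from the status line quoted above (keeping in mind that the op/s figure counts client operations of whatever size, reads and writes together, not fixed 4k block IOs):

# Back-of-the-envelope from the quoted line: 4744 kB/s of writes across
# 1172 op/s gives the average size per op, assuming most of those ops are
# the writes themselves.
wr_kb_per_s = 4744.0
ops_per_s = 1172
print('~%.1f kB per op' % (wr_kb_per_s / ops_per_s))   # ~4.0 kB per op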
On 08/08/2014 02:02 PM, Andrija Panic wrote:
Thanks Wido, yes I'm aware of CloudStack in that sense, but would prefer
some precise OP/s per ceph Image at least...
Will check CloudStack then...
Ceph doesn't really know that since RBD is just a layer on top of RADOS.
In the end the CloudStack h...
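Since the per-image numbers ultimately live on the client side, one option, if the CloudStack hosts run KVM/libvirt, is to read the per-disk counters straight from libvirt on each hypervisor. A minimal sketch with the libvirt Python bindings; the domain name 'i-2-34-VM' and target device 'vda' are placeholders only:

import libvirt

# Read per-disk IO counters for one VM from the local libvirt daemon.
# Sample these twice and divide by the interval to get per-disk op/s.
conn = libvirt.openReadOnly('qemu:///system')
dom = conn.lookupByName('i-2-34-VM')         # placeholder domain name
rd_req, rd_bytes, wr_req, wr_bytes, errs = dom.blockStats('vda')  # placeholder device
print('reads=%d writes=%d wr_bytes=%d' % (rd_req, wr_req, wr_bytes))
conn.close()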
Thanks Wido, yes I'm aware of CloudStack in that sense, but would prefer
some precise OP/s per ceph Image at least...
Will check CloudStack then...
Thx
On 8 August 2014 13:53, Wido den Hollander wrote:
> On 08/08/2014 01:51 PM, Andrija Panic wrote:
>
>> Hi,
>>
>> we just had some new clients,
On 08/08/2014 01:51 PM, Andrija Panic wrote:
Hi,
we just had some new clients, and have suffered a very big degradation in
Ceph performance for some reason (we are using CloudStack).
I'm wondering if there is a way to monitor OP/s or similar usage per connected
client, so we can isolate the heavy client?
Hi,
we just had some new clients, and have suffered a very big degradation in
Ceph performance for some reason (we are using CloudStack).
I'm wondering if there is a way to monitor OP/s or similar usage per connected
client, so we can isolate the heavy client?
Also, what is the general best practice...