On Mon, Mar 14, 2016 at 3:48 PM, Christian Balzer wrote:
>
> Hello,
>
> On Mon, 14 Mar 2016 09:16:13 -0700 Blade Doyle wrote:
>
> > Hi Ceph Community,
> >
> > I am trying to use "ceph -w" output to monitor my ceph cluster. The
> > basic setup is:
> >
> > A python script runs ceph -w and processes each line of output. ...
Hello,
On Mon, 14 Mar 2016 20:51:04 -0600 Mike Lovell wrote:
> something weird happened on one of the ceph clusters that i administer
> tonight which resulted in virtual machines using rbd volumes seeing
> corruption in multiple forms.
>
> when everything was fine earlier in the day, the cluster was ...
something weird happened on one of the ceph clusters that i administer
tonight which resulted in virtual machines using rbd volumes seeing
corruption in multiple forms.
when everything was fine earlier in the day, the cluster was a number of
storage nodes spread across 3 different roots in the crush map ...
Hello,
On Mon, 14 Mar 2016 15:51:11 +0200 Yair Magnezi wrote:
> On Fri, Mar 11, 2016 at 2:01 AM, Christian Balzer wrote:
>
> >
> > Hello,
> >
> > As always, there are many similar threads in here; googling and reading
> > up on stuff is good for you.
> >
> > On Thu, 10 Mar 2016 16:55:03 +0200 Yair Magnezi wrote: ...
Hello,
On Mon, 14 Mar 2016 09:16:13 -0700 Blade Doyle wrote:
> Hi Ceph Community,
>
> I am trying to use "ceph -w" output to monitor my ceph cluster. The
> basic setup is:
>
> A python script runs ceph -w and processes each line of output. It finds
> the data it wants and reports it to InfluxDB. ...
On Mon, Mar 14, 2016 at 4:16 PM, Blade Doyle wrote:
> Hi Ceph Community,
>
> I am trying to use "ceph -w" output to monitor my ceph cluster. The basic
> setup is:
>
> A python script runs ceph -w and processes each line of output. It finds
> the data it wants and reports it to InfluxDB. I view the data using
> Grafana, and Ceph Dashboard. ...
Hi David,
On 14/03/2016 18:33, David Casier wrote:
> "usermod -aG ceph snmp" is better ;)
After thinking about it, the solution of adding "snmp" to the "ceph" group seems
better to me too... _if_ the "ceph" group never has the "w" (write) permission on
/var/lib/ceph/ (which seems to be the case). So thanks for reassuring me ...
Hi François,
"usermod -aG ceph snmp" is better ;)
2016-03-11 3:37 GMT+01:00 Francois Lafont :
> Hi,
>
> I have a ceph cluster on Infernalis and I'm using an snmp agent to retrieve
> data and generate generic graphs for each cluster node. Currently, I
> can see this in the syslog of each node: ...
Earlier on during newstore/bluestore development we tested with the
rocksdb instance (and just the rocksdb WAL) on SSDs. At the time it did
help, but bluestore performance has improved dramatically since then so
we'll need to retest. SSDs shouldn't really help with large writes
anymore (bluestore ...
Mark,
Since most of us already have existing clusters that use SSDs for
journals, has there been any testing of converting that hardware over to
using BlueStore and re-purposing the SSDs as a block cache (like using
bcache)?
To me this seems like it would be a good combination for a typical RBD
cluster ...
Hi Folks,
We are actually in the middle of doing some bluestore testing/tuning for
the upstream jewel release as we speak. :) These are (so far) pure HDD
tests using 4 nodes with 4 spinning disks and no SSDs.
Basically on the write side it's looking fantastic, and that's an area we
really wanted to ...
Hi Ceph Community,
I am trying to use "ceph -w" output to monitor my ceph cluster. The basic
setup is:
A python script runs ceph -w and processes each line of output. It finds
the data it wants and reports it to InfluxDB. I view the data using
Grafana, and Ceph Dashboard.
For the most part it ...
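A minimal sketch of this kind of collector, assuming the periodic pgmap status lines and an InfluxDB 0.9+/1.x HTTP write endpoint (the regex, measurement and field names, and the InfluxDB URL below are illustrative assumptions, not the actual script; "ceph -w" output differs between releases, so adjust it to what your cluster actually prints):

#!/usr/bin/env python
# Sketch: follow "ceph -w", pull a few numbers out of the periodic pgmap
# status lines, and push them to InfluxDB over its HTTP line-protocol
# endpoint. Regex and field names are assumptions; adjust for your release.
import re
import subprocess

import requests  # pip install requests

INFLUX_URL = "http://localhost:8086/write"  # assumed InfluxDB write endpoint
INFLUX_DB = "ceph"                          # hypothetical database name

# e.g. "... pgmap v12345: ...; 200 GB used, 800 GB / 1000 GB avail; ...; 78 op/s"
PGMAP_RE = re.compile(r"(\d+) GB used, (\d+) GB / (\d+) GB avail.*?(\d+) op/s")

def main():
    # Run "ceph -w" as a long-lived child process and read it line by line.
    proc = subprocess.Popen(["ceph", "-w"], stdout=subprocess.PIPE,
                            universal_newlines=True)
    for line in proc.stdout:
        m = PGMAP_RE.search(line)
        if not m:
            continue
        used, avail, total, ops = (int(x) for x in m.groups())
        # InfluxDB line protocol: <measurement> <field1>=<v1>,<field2>=<v2>
        body = "ceph_status used_gb=%d,avail_gb=%d,total_gb=%d,ops=%d" % (
            used, avail, total, ops)
        try:
            requests.post(INFLUX_URL, params={"db": INFLUX_DB}, data=body,
                          timeout=5)
        except requests.RequestException as exc:
            print("write to InfluxDB failed: %s" % exc)

if __name__ == "__main__":
    main()

One design note: because the script hangs off a single long-lived ceph -w child process, it needs to be restarted (or the Popen re-created) if the monitor connection drops.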
Hi Stefan,
We are also interested in bluestore, but have not looked into it yet.
We tried keyvaluestore before, and that could be enabled by setting the
osd objectstore value.
And in this ticket http://tracker.ceph.com/issues/13942 I see:
[global]
enable experimental unrecoverable data corrupting features = ...
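Putting those two pieces together, the ceph.conf stanza would presumably look something like the sketch below. This is an assumption based on the ticket rather than a tested recipe; the value list for the experimental flag and any extra bluestore options vary between releases, so check the tracker issue and release notes first:

[global]
# assumed value list; '*' also works and enables every experimental feature
enable experimental unrecoverable data corrupting features = bluestore rocksdb

[osd]
osd objectstore = bluestore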
Hello everyone!
I think the new bluestore sounds great and I would like to try it out in my
test environment, but I didn't find anything on how to use it. I finally managed
to test it anyway, and it really looks promising performance-wise.
If anyone has more information or guides for bluestore, please ...
Hey cephers,
Just a reminder that there are still a couple of slots left if you
would like to present something at either Ceph Day Portland (Hosted by
Intel) on 25 May, or Ceph Day Switzerland (Hosted by CERN) on 14 June.
If you are interested in presenting something and haven’t already
submitted ...
On Fri, Mar 11, 2016 at 2:01 AM, Christian Balzer wrote:
>
> Hello,
>
> As always, there are many similar threads in here; googling and reading up
> on stuff is good for you.
>
> On Thu, 10 Mar 2016 16:55:03 +0200 Yair Magnezi wrote:
>
> > Hello Cephers.
> >
> > I wonder if anyone has some experience ...
Replying to myself!
In fact I had an overlay on top of my ssd pool & hdd pool.
The guy who installed this ceph cluster created some objects in the hdd pool
but didn't remove them before applying the overlay,
so the objects were not listed in rbd ls, but when I removed the overlay, a
rbd -p r ...
Oh, sorry, I missed $name, let me try it first.
-
wukongming ID: 12019
Tel:0571-86760239
Dept:2014 UIS2 ONEStor
-----Original Message-----
From: Tianshan Qu [mailto:qutians...@gmail.com]
Sent: 14 March 2016 15:27
To: wukongming 12019 (RD)
Cc: ceph-de...@vger.kernel.org; c ...
I still cannot separate the rbd log and the mds log.
-
wukongming ID: 12019
Tel:0571-86760239
Dept:2014 UIS2 ONEStor
-----Original Message-----
From: Tianshan Qu [mailto:qutians...@gmail.com]
Sent: 14 March 2016 15:27
To: wukongming 12019 (RD)
Cc: ceph-de...@vger.kernel.org; ...
Try adding log_file = /var/log/ceph/$name.log in the [client] section.
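For example, something along these lines in ceph.conf (a sketch: $name expands to the daemon's type.id, and the radosgw instance name below is hypothetical):

[client]
# $name expands to the daemon's type.id, e.g. client.admin
log file = /var/log/ceph/$name.log

[client.radosgw.gateway]
# hypothetical rgw instance name; its log goes to its own file
log file = /var/log/ceph/radosgw.log

[mds]
# mds daemons, e.g. mds.a
log file = /var/log/ceph/$name.log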
> On 14 March 2016, at 11:45, Wukongming wrote:
>
> Hi All
>
> Here I want ceph to separate the rbd, rgw, and mds logs into individual files. I know
> rgw's log can be configured by adding a [client.radosgw.***] section, but that is
> useless for the other 2 modules.
> I ...
Based on Yao Ning's PR, I have proposed a new PR for this:
https://github.com/ceph/ceph/pull/8083
In this PR, I also solved an upgrade-scenario problem:
consider an upgrade situation in which we need to upgrade to this
can_recover_partial version:
e.g. a pg 3.67 [0, 1, 2]
1) firstly, we update osd.0( ...