English version:
Hello,
I found a strange behavior in Ceph. This behavior is visible on buckets
(RGW) and pools (RBD).
pools:
root@:~# qemu-img info rbd:pool/kibana2
image: rbd:pool/kibana2
file format: raw
virtual size: 30G (32212254720 bytes)
disk size: unavailable
Snapshot list:
ID
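For what it's worth, "disk size: unavailable" appears to be what qemu-img reports over the rbd driver; if you want the space the image actually consumes, one common approach is to sum the allocated extents reported by rbd diff. A minimal sketch, assuming the same pool/kibana2 image as above:

# sum the allocated extents to estimate the real space used by the image
rbd diff pool/kibana2 | awk '{sum += $2} END {printf "%.1f MB used\n", sum/1024/1024}'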
Hi,
For some unknown reason, periodically, the master is kicked out and
another one becomes leader. And then, a couple of seconds later, the
original master calls for a re-election and becomes leader again.
This also seems to cause some load even after the original master is
back. Here's a couple of gr
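When chasing this kind of flapping leadership it can help to capture the monitors' own view while it happens. A rough sketch of the usual checks; the mon id "a" and the debug levels are only examples, not something taken from this thread:

# current quorum membership and election epoch
ceph quorum_status --format json-pretty

# ask one monitor directly for its state (run on that monitor's host, replace mon.a)
ceph daemon mon.a mon_status

# temporarily raise monitor logging to see why the lease/election is being lost
ceph tell mon.\* injectargs '--debug-mon 10 --debug-ms 1'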
Patrick,
I'm not sure where you are at with forming this. I would love to be considered.
Previously I've contributed commits (mostly CephFS kernel side) and
I'd love to contribute more there. Usually the hardest part about
these kinds of things is finding the time to participate. Since Ceph
is a
Hi all,
we are considering building all SSD OSD servers for RBD pool.
Couple of questions:
Does Ceph have any recommendation for the number of cores/memory/GHz per
SSD drive, similar to what is usually followed for hard drives (1
core / 1 GB RAM / 1 GHz)?
thanks,
Sreenath
__
Right now we're just scraping the output of ifconfig:
ifconfig p2p1 | grep -e 'RX\|TX' | grep packets | awk '{print $3}'
It's clunky, but it works. I'm sure there's a cleaner way, but this was
expedient.
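If it helps, one cleaner-looking option (just a sketch, assuming a Linux host and the same p2p1 interface) is to read the kernel counters directly instead of parsing ifconfig:

# per-interface packet counters straight from sysfs
cat /sys/class/net/p2p1/statistics/rx_packets
cat /sys/class/net/p2p1/statistics/tx_packets

# or NIC-level counters including drops/errors (counter names vary by driver)
ethtool -S p2p1 | grep -E 'rx_packets|tx_packets'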
QH
On Tue, Mar 31, 2015 at 5:05 PM, Francois Lafont wrote:
> Hi,
>
> Quentin Hartman wrote
On Wed, Apr 1, 2015 at 5:03 AM, Sylvain Munaut
wrote:
> Hi,
>
>
> For some unknown reason, periodically, the master is kicked out and
> another one becomes leader. And then, a couple of seconds later, the
> original master calls for a re-election and becomes leader again.
>
> This also seems to cause som
I've built the Calamari client, server, and diamond packages from source for
trusty and centos and installed them on the trusty master. I installed the
diamond and salt packages on the storage nodes. I can connect to the calamari
master and accept salt keys from the ceph nodes, but then Calamari reports "3
> On 31 Mar 2015, at 11:38, Neville wrote:
>
>
>
> > Date: Mon, 30 Mar 2015 12:17:48 -0400
> > From: yeh...@redhat.com
> > To: neville.tay...@hotmail.co.uk
> > CC: ceph-users@lists.ceph.com
> > Subject: Re: [ceph-users] Radosgw authorization failed
> >
> >
> >
> > - Original Message -
Any pointers on fixing incomplete PGs would be appreciated.
I tried the following with no success:
pg scrub
pg deep scrub
pg repair
osd out , down , rm , in
osd lost
# ceph -s
cluster 2bd3283d-67ef-4316-8b7e-d8f4747eae33
health HEALTH_WARN 7 pgs down; 20 pgs incomplete; 1 pgs recovering; 2
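Before repairing, it usually pays to see exactly which OSDs each incomplete PG is waiting for. A sketch of the usual inspection steps; the PG id 1.23 is only a placeholder:

# list the stuck/incomplete PGs and the OSDs they mention
ceph health detail
ceph pg dump_stuck inactive

# inspect one PG; look at recovery_state and past_intervals for the OSDs it needs
ceph pg 1.23 query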
- Original Message -
> From: "Neville"
> To: "Yehuda Sadeh-Weinraub"
> Cc: ceph-users@lists.ceph.com
> Sent: Wednesday, April 1, 2015 11:45:09 AM
> Subject: Re: [ceph-users] Radosgw authorization failed
>
>
>
> > On 31 Mar 2015, at 11:38, Neville wrote:
> >
> >
> >
> > > Date: M
You should have a config page in the Calamari UI where you can accept OSD nodes
"into the cluster" as Calamari sees it. If you skipped the little
first-setup window like I did, it's kind of a pain to find.
QH
On Wed, Apr 1, 2015 at 12:34 PM, Bruce McFarland <
bruce.mcfarl...@taec.toshiba.com> wrote:
All,
Apologies for my ignorance, but I don't seem to be able to search an
archive.
I've spent a lot of time trying, but I'm having difficulty integrating
Ceph (Giant) into OpenStack (Juno). I don't appear to be recording any
errors anywhere, but I simply don't seem to be writing to the cluster if I
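One thing that can fail silently in this setup is the client keyring/caps on the OpenStack nodes. A quick sanity check, purely as a sketch; the client names (glance, cinder) and pool names (images, volumes) follow the upstream rbd-openstack guide and may differ in your deployment:

# confirm the OpenStack clients can actually reach their pools
rbd ls images --id glance --keyring /etc/ceph/ceph.client.glance.keyring
rbd ls volumes --id cinder --keyring /etc/ceph/ceph.client.cinder.keyring

# and that their caps allow access to those pools
ceph auth get client.glance
ceph auth get client.cinder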
Hey Milosz,
I have to ask, did you mean to say that or was it a spell check fail? I
almost choked on my coffee :-)
O
On 2 April 2015 at 00:05, Milosz Tanski wrote:
> Patrick,
>
> I'm not sure where you are at with forming this. I would love to be
> considered.
>
> Previously I've contributed c
I am coincidentally going through the same process right now. The best
reference I've found is this: http://ceph.com/docs/master/rbd/rbd-openstack/
When I did Firefly / Icehouse, this (seemingly) same guide Just Worked(tm),
but now with Giant / Juno I'm running into similar trouble to that which
Quentin,
I got the config page to come up by exiting Calamari, deleting the salt keys on
the calamari master ('salt-key -D'), then restarting Calamari on the master and
accepting the salt keys on the master ('salt-key -A') after restarting the
salt-minion and diamond services on the ceph nodes. Once t
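For anyone following along, the sequence above corresponds roughly to something like this; only a sketch, and note that salt-key -D / -A delete and accept all keys, so use them with care:

# on the ceph nodes: restart the minion and diamond
service salt-minion restart
service diamond restart

# on the calamari master: wipe the old keys, re-accept, and verify
salt-key -D
salt-key -L
salt-key -A
salt '*' test.ping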
Both of those say they want to talk to osd.115.
I see from the recovery_state, past_intervals that you have flapping OSDs.
osd.140 will drop out, then come back. osd.115 will drop out, then come
back. osd.80 will drop out, then come back.
So really, you need to solve the OSD flapping. That wi
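While tracking down the cause, it can help to stop the flapping from churning the cluster. A sketch, assuming you are comfortable temporarily preventing the monitors from marking those OSDs down or out:

# temporarily stop flapping OSDs from being marked down/out
ceph osd set nodown
ceph osd set noout

# ...investigate the network/heartbeats, then clear the flags
ceph osd unset nodown
ceph osd unset noout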
Can you both set Cinder and/or Glance logging to debug and provide some
logs? There was an issue with the first Juno release of Glance in some
vendor packages, so make sure you're fully updated to 2014.2.2
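In case it's useful, debug logging is just a flag in each service's config. A sketch using crudini (assumes crudini is installed and the stock config paths; editing the files by hand works too, and the exact service names depend on your distro/packages):

# enable debug logging for glance and cinder, then restart the services
crudini --set /etc/glance/glance-api.conf DEFAULT debug True
crudini --set /etc/cinder/cinder.conf DEFAULT debug True
service glance-api restart
service cinder-volume restart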
On Apr 1, 2015 7:12 PM, "Quentin Hartman"
wrote:
> I am coincidentally going through the
Not sure whether it is relevant to your setup or not, but we saw OSDs
flapping while rebalancing was going on with ~150 TB of data within a 6-node
cluster.
While root-causing this we saw continuous packet drops in dmesg, and possibly
because of that the OSD heartbeat responses were lost. As a r
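If you suspect the same thing, here are a couple of quick checks for the drops, plus the heartbeat grace knob that is sometimes raised while the network issue is investigated. Only a sketch; the interface name and the value 30 are examples, not taken from this message:

# look for drops at the kernel / NIC level
dmesg | grep -i -E 'drop|link'
cat /sys/class/net/p2p1/statistics/rx_dropped

# give OSD heartbeats more slack temporarily (default grace is 20 seconds)
ceph tell osd.\* injectargs '--osd_heartbeat_grace 30'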
Hello,
On Wed, 1 Apr 2015 18:40:10 +0530 Sreenath BH wrote:
> Hi all,
>
> we are considering building all SSD OSD servers for RBD pool.
>
I'd advise you to spend significant time reading the various threads in
this ML about SSD-based pools, both about the current shortcomings and
limitations o
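One concrete check that comes up in those threads is whether the SSD sustains small synchronous writes, which is what the OSD journal does. A rough sketch with fio; it writes to a scratch file rather than a raw device, and the numbers are only an example:

# sustained 4k synchronous writes, roughly what journal traffic looks like
fio --name=journal-test --filename=/tmp/fio.test --size=1G \
    --direct=1 --sync=1 --rw=write --bs=4k --numjobs=1 --iodepth=1 --runtime=60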
I checked the cluster state; it has recovered to HEALTH_OK. I don't know why.
Yesterday at 09:02 I started mon.computer06, but it could not be started; the
log is in attachment 0902.
At 16:38 I started mon.computer06 again, and it also got stuck with these
processes:
/usr/bin/ceph-mon -i comput
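When a monitor hangs like that, running it in the foreground with higher debug output usually shows where it is stuck. A sketch, using the mon id from the message above and example debug levels:

# stop any half-started instance first, then run the mon in the foreground
/usr/bin/ceph-mon -i computer06 -d --debug-mon 10 --debug-ms 1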