On 2013-07-24 07:19, Sage Weil wrote:
On Wed, 24 Jul 2013, Sébastien RICCIO wrote:
Hi! While trying to install ceph using ceph-deploy, the monitor nodes
are stuck waiting on this process:
/usr/bin/python /usr/sbin/ceph-create-keys -i a (or b or c)
I tried to run the command manually and it
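As far as I understand, ceph-create-keys just loops until the monitor it points
at has formed a quorum, so asking the monitor over its admin socket should show
why it is stuck. A rough sketch of the check, assuming the default admin socket
path and a monitor id of "a":

# look at "state" and "quorum" in the output
ceph --admin-daemon /var/run/ceph/ceph-mon.a.asok mon_status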
> -----Original Message-----
> From: Studziński Krzysztof
> Sent: Wednesday, July 24, 2013 1:18 AM
> To: 'Gregory Farnum'; Yehuda Sadeh
> Cc: ceph-de...@vger.kernel.org; ceph-users@lists.ceph.com; Mostowiec
> Dominik
> Subject: RE: [ceph-users] Flapping osd / continuously reported as failed
>
> >
Hi Sage,
I just had a 0.61.6 monitor crash, and one OSD. The mon and all OSDs
restarted just fine after the update, but it decided to crash after 15
minutes or so. See a snippet of the logfile below. I have sent you a link
to the logfiles and monitor store. It seems the bug hasn't been fully
fixed.
On 24 Jul 2013, at 05:47, Sage Weil wrote:
> There was a problem with the monitor daemons in v0.61.5 that would prevent
> them from restarting after some period of time. This release fixes the
> bug and works around the issue to allow affected monitors to restart.
> All v0.61.5 users are str
Hi to all,
I would like to achieve a fault-tolerant cluster with an InfiniBand network.
Currently, one rsocket is bound to a single IB port. In the case of a
dual-port HBA, I have to use multiple rsockets to use both ports.
Is it possible to configure Ceph with multiple cluster addresses for each OSD?
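For reference, what I have found so far is the public/cluster network split in
ceph.conf; as far as I can tell each OSD only takes a single cluster address, so
using both IB ports would have to happen below Ceph (bonding/IPoIB or rsockets
itself). A sketch of the options I mean (addresses are placeholders):

[global]
    public network  = 10.0.1.0/24    # client-facing network
    cluster network = 10.0.2.0/24    # replication / backfill network

[osd.0]
    public addr  = 10.0.1.11
    cluster addr = 10.0.2.11         # only one cluster addr per OSD, it seems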
On 07/24/2013 03:24 PM, Gandalf Corvotempesta wrote:
Hi to all,
I would like to achieve a fault-tolerant cluster with an InfiniBand network.
Currently, one rsocket is bound to a single IB port. In the case of a
dual-port HBA, I have to use multiple rsockets to use both ports.
Is it possible to configure Ceph with multiple cluster addresses for each OSD?
On Wed, 24 Jul 2013, Eric Eastman wrote:
> I still have much to learn about how ceph is built.
>
> The ceph-deploy list command is now working with my system using a cciss boot
> disk.
Excellent. Thanks for testing!
> Tomorrow I will bring up a HP system that has multiple cciss disks installed
On Wed, 24 Jul 2013, Dan van der Ster wrote:
> On Wednesday, July 24, 2013 at 7:19 AM, Sage Weil wrote:
> On Wed, 24 Jul 2013, Sébastien RICCIO wrote:
>
> Hi! While trying to install ceph using ceph-deploy, the monitor
> nodes are
> stuck waiting on this process:
> /usr/bin/python /usr/sbin/
Later today I will try both the HP testing using multiple cciss devices for
my OSDs, and separately test manually specifying the dm devices on my
external FC and iSCSI storage, and will let you know how both tests turn out.
Thanks again,
Eric
Tomorrow I will bring up a HP system that has mul
On Tuesday, 23 July 2013, 09:01:39, Sage Weil wrote:
> On Tue, 23 Jul 2013, Gregory Farnum wrote:
> > On Tue, Jul 23, 2013 at 8:50 AM, Guido Winkelmann
> >
> > wrote:
> > > Hi,
> > >
> > > How can I get a list of all defined monitors in a ceph cluster from a
> > > client when using the C API?
I was trying OpenStack on Ceph. I could create volumes, but I am not able to
attach the volume to any running instance. If I attach a volume to an
instance and reboot it, it goes into an error state.
Compute error logs are given below.
15:32.666 ERROR nova.compute.manager [req-464776fd-283
There's your problem:
error rbd username 'volumes' specified but secret not found
You need to follow the steps in the doc for creating the secret using virsh.
http://ceph.com/docs/next/rbd/rbd-openstack/
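The steps there boil down to roughly this (a sketch; client.volumes is the usual
example user from that doc, adjust names to your setup). First put this in a
file called secret.xml:

<secret ephemeral='no' private='no'>
  <usage type='ceph'>
    <name>client.volumes secret</name>
  </usage>
</secret>

then define it and attach the cephx key to it:

virsh secret-define --file secret.xml
# note the UUID that secret-define prints, then:
virsh secret-set-value --secret <the printed uuid> --base64 $(ceph auth get-key client.volumes)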
On Jul 24, 2013, at 11:20 AM, johnu wrote:
>
> I was trying openstack on ceph. I
I followed the same steps earlier. How can I verify it?
On Wed, Jul 24, 2013 at 11:26 AM, Abel Lopez wrote:
> There's your problem:
> error rbd username 'volumes' specified but secret not found
>
> You need to follow the steps in the doc for creating the secret using
> virsh.
> http://
You need to do this on each compute node, and you can verify with
virsh secret-list
On Jul 24, 2013, at 11:20 AM, johnu wrote:
>
> I was trying openstack on ceph. I could create volumes but I am not able to
> attach the volume to any running instance. If I attach a volume to an
> instance
sudo virsh secret-list
UUID                                  Usage
-----------------------------------------------------------
bdf77f5d-bf0b-1053-5f56-cd76b32520dc  Unused
All nodes have the secret set.
On Wed, Jul 24, 2013 at 11:30 AM, Abel Lopez wrote:
> You need to do this on each compute node, a
One thing I had to do, and it's not really in the documentation:
I created the secret once on one compute node, then reused the UUID when
creating it on the rest of the compute nodes.
I then was able to use this value in cinder.conf AND nova.conf.
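In case it helps: as far as I know, the way to reuse the UUID with virsh is to
pin it in the secret XML itself, so defining the same file on each compute node
produces an identical secret (sketch, using the UUID from the secret-list output
above):

<secret ephemeral='no' private='no'>
  <uuid>bdf77f5d-bf0b-1053-5f56-cd76b32520dc</uuid>
  <usage type='ceph'>
    <name>client.volumes secret</name>
  </usage>
</secret>

then run virsh secret-define --file secret.xml and virsh secret-set-value on
every node as usual.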
On Jul 24, 2013, at 11:39 AM, johnu wrote:
> s
Abel,
What did you change in nova.conf? I have added rbd_username and
rbd_secret_uuid in cinder.conf. I verified that rbd_secret_uuid is the same
as in virsh secret-list.
On Wed, Jul 24, 2013 at 11:49 AM, Abel Lopez wrote:
> One thing I had to do, and it's not really in the documentation,
>
You are correct, I didn't add that to nova.conf, only cinder.conf.
if you do
virsh secret-get-value bdf77f5d-bf0b-1053-5f56-cd76b32520dc
do you see the key that you have for your client.volumes?
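i.e. something like this on a compute node, where the two outputs should be the
same base64 key:

virsh secret-get-value bdf77f5d-bf0b-1053-5f56-cd76b32520dc
ceph auth get-key client.volumes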
On Jul 24, 2013, at 12:11 PM, johnu wrote:
> Abel,
>What did you change in nova.conf? . I h
Yes. It matches for all nodes in the cluster
On Wed, Jul 24, 2013 at 1:12 PM, Abel Lopez wrote:
> You are correct, I didn't add that to nova.conf, only cinder.conf.
> if you do
> virsh secret-get-value bdf77f5d-bf0b-1053-5f56-cd76b32520dc
> do you see the key that you have for your client.volum
So, in cinder.conf, you have rbd_user=volumes and
rbd_secret_uuid=bdf77f5d-bf0b-1053-5f56-cd76b32520dc
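Just to be explicit about what I mean, roughly these lines in the [DEFAULT]
section of cinder.conf on the cinder-volume node (pool name of "volumes" is an
assumption on my part):

rbd_pool=volumes
rbd_user=volumes
rbd_secret_uuid=bdf77f5d-bf0b-1053-5f56-cd76b32520dc
# plus the rbd volume_driver line appropriate for your OpenStack release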
If so, I'm stumped.
On Jul 24, 2013, at 1:12 PM, Abel Lopez wrote:
> You are correct, I didn't add that to nova.conf, only cinder.conf.
> if you do
> virsh secret-get-value bdf77f5d-bf0b-105
Hi folks,
Some very basic questions.
(a) Can I run more than one Ceph cluster on the same node (assume that
I have no more than one monitor per node, but storage is contributed by one node
into more than one cluster)? See the sketch below.
(b) Are there any issues with running Ceph clients on the same node as the
other Ce
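For (a), the sketch I have in mind is simply giving each cluster its own name,
so each set of daemons reads its own conf file and keyrings, e.g. (the "backup"
cluster name is just an example):

# /etc/ceph/ceph.conf for the first cluster, /etc/ceph/backup.conf for the second
ceph -s                   # talks to the default "ceph" cluster
ceph --cluster backup -s  # talks to the second cluster via backup.conf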
I finally opted to deploy Ceph on Ubuntu Server 12.04 instead (at the moment,
support/dev is way better). While configuring the Ubuntu machines, I realized
that I had to add the HTTP/HTTPS proxy that I have in my network not only in
the profile.d folder, but also in the /etc/apt/apt.conf.d folder.
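For anyone else behind a proxy, the apt part is just a small file in that
folder, something like this (the proxy host/port and file name are placeholders
for my local setup):

# /etc/apt/apt.conf.d/95proxy
Acquire::http::Proxy "http://proxy.example.com:3128/";
Acquire::https::Proxy "http://proxy.example.com:3128/";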
Hi Sage,
I tested the HP cciss devices as OSD disks on the
--dev=wip-cuttlefish-ceph-disk
build tonight and it worked, but not exactly as expected. I first
tried:
# ceph-deploy -v osd create testsrv16:c0d1
which failed with:
ceph-disk: Error: data path does not exist: /dev/c0d1
so I went t
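(For context: cciss controllers expose their disks under /dev/cciss/ rather
than directly under /dev/, so my untested guess at the spelling ceph-deploy
wants would be something along these lines:

ls /dev/cciss/
ceph-deploy -v osd create testsrv16:cciss/c0d1

i.e. enough of the path for it to resolve to /dev/cciss/c0d1.)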
Hi,
I have hit a bug in the 3.10 kernel under Debian, be it a self-compiled
linux-stable from git (built with make-kpkg) or sid's package.
I'm using format-2 images (ceph version 0.61.6
(59ddece17e36fef69ecf40e239aeffad33c9db35)) to make snapshots and clones
of a database for development
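For reference, the sequence I am doing is roughly the standard snapshot/clone
workflow (a sketch; image and pool names are made up):

rbd create --format 2 --size 10240 rbd/dbmaster
rbd snap create rbd/dbmaster@base
rbd snap protect rbd/dbmaster@base
rbd clone rbd/dbmaster@base rbd/devcopy
# mapping the clone is where the 3.10 kernel client comes in
rbd map rbd/devcopy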
Hi everyone,
We have a release candidate for v0.67 dumpling! There are a handful of
remaining known issues (which I suppose means it is technically *not* an
actual candidate for the final release), but for the most part we are
happy with the stability so far, and encourage anyone with test clu
You can configure the mon servers and crushmap as shown in this
beautiful example:
http://www.sebastien-han.fr/blog/2013/01/28/ceph-geo-replication-sort-of/
On Thursday, 27 June 2013, the user wrote:
> Hi,
>
> Yes, exactly. Synchronous replication is OK. The distance between the
> datacenters is