Hi Michael,
I'm trying to reproduce the problem from sources (today's instead of
yesterday's but there is no difference that could explain the behaviour you
have):
cd src
rm -fr /tmp/dev /tmp/out ; mkdir -p /tmp/dev
CEPH_DIR=/tmp LC_ALL=C MON=1 OSD=6 bash -x ./vstart.sh -d -n -X -l mon osd
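Once vstart reports the daemons are up I sanity check the cluster before going
further, along these lines (assuming vstart wrote its ceph.conf under CEPH_DIR,
i.e. /tmp here; adjust the path if yours ends up elsewhere):

./ceph -c /tmp/ceph.conf -s
./ceph -c /tmp/ceph.conf osd tree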
Hello,
On Sat, 29 Mar 2014 11:05:53 +0100 Andreas Rammhold wrote:
> Hi Jon,
> It might be that your qemu-kvm version/build doesn't support
> lib{rados,rbd}. Which Ganeti version are you using?
>
That's exactly what I'm suspecting.
He's using 2.10; the hotdisk is a dead giveaway.
And he's trying
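A quick way to confirm that is to check whether the qemu build on that host was
compiled with rbd support at all (the binary name below is the usual x86_64 one,
adjust for the distro packaging):

ldd $(which qemu-system-x86_64) | grep -E 'librbd|librados'
qemu-img --help | grep rbd

If neither of those shows rbd, the package was built without librbd and a
rebuilt or alternative qemu package is needed.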
I was trying to play around with RDO and CEPH at
http://openstack.redhat.com/Using_Ceph_for_Cinder_with_RDO_Havana but after I
installed QEMU for CEPH, when I run

qemu-img create -f raw rbd:data/foo 1G

the command just hangs without creating any raw device. Am I missing something
here?
Thanks,
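One way to narrow this down is to take qemu out of the picture and talk to the
cluster with the command line tools directly (this assumes the 'data' pool from
the command above already exists and the admin keyring is in its default
location; 'footest' is just an arbitrary test image name):

ceph -s
rbd ls data
rbd create data/footest --size 1024

If these hang as well, the problem is between the client and the cluster
(network or authentication) rather than in qemu-img itself.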
On 30/03/2014 14:23, Vilobh Meshram wrote:
> I was trying to play around with RDO and CEPH at
> http://openstack.redhat.com/Using_Ceph_for_Cinder_with_RDO_Havana but after I
> installed QEMU for CEPH, when I run
>
> qemu-img create -f raw rbd:data/foo 1G
>
> the command just hangs without creating any raw device. Am I missing
> something here?
Hi guys,
I upgraded a working cluster from Dumpling to Emperor, which went OK. All
mons, osds and mds are running 0.72.2 on Fedora 18 now.
I then installed the ceph-extras repo and let it update curl, libcurl
and leveldb.
Next I tried restarting a mon - but it won't start up again. It just hangs:
Hi again,
Next I tried restarting a mon - but it won't start up again. It just hangs:
I have captured a backtrace of the hang using gdb:
(gdb) bt
#0 0x773e6e4d in __lll_lock_wait () from /lib64/libpthread.so.0
#1 0x773e2cc1 in _L_lock_885 () from /lib64/libpthread.so.0
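To get more out of the hang than those two frames it may help to grab all
threads and to re-run the mon in the foreground with verbose logging, roughly
like this (assuming gdb can attach to the running ceph-mon, debuginfo is
installed, and the mon id is 'a'; adjust to match your setup):

gdb -p $(pidof ceph-mon) -batch -ex 'thread apply all bt' > mon-bt.txt
ceph-mon -i a -d --debug-mon 20 --debug-ms 1

The second command logs to the terminal, which should show how far the mon gets
before it stops.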
Hi Loic,
Thanks for your reply.
Not really, I have set up 3 nodes as storage nodes; "ceph osd tree" output
also confirms that.
I was more concerned about the authentication aspect, i.e. the request not
being able to reach the MON node and hence not getting forwarded to the
storage nodes?
Thanks,
Vilobh
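If the concern is whether the request ever reaches a MON and authenticates,
turning up client side debugging should make that visible (assuming the admin
keyring is in its default location; 'data' is just an example pool name):

ceph auth list
rbd ls data --debug-ms 1 --debug-monc 20

The monc/ms output shows whether the client connects to a monitor and gets an
authenticated session, or gives up before that.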
Hi Loic,
On Sun, 30 Mar 2014, Loic Dachary wrote:
> Hi Michael,
> I'm trying to reproduce the problem from sources (today's instead of
> yesterday's but there is no difference that could explain the behaviour you
> have):
> cd src
> rm -fr /tmp/dev /tmp/out ; mkdir -p /tmp/dev
> CEPH_DIR=/tmp LC_ALL=C MON=1 OSD=6 bash -x ./vstart.sh -d -n -X -l mon osd
Hi all,
I am trying to get Ceph working with OpenStack Havana. I am following the
instructions here:
https://ceph.com/docs/master/rbd/rbd-openstack/
However, I probably need more details on the OpenStack side. The instructions
mention cinder-volume nodes. How about cinder-api and cinder-scheduler? Do
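For what it's worth, as far as I understand that guide the rbd specific
settings only need to live on the nodes running cinder-volume; cinder-api and
cinder-scheduler are configured as usual and do not talk to the Ceph cluster
themselves. The cinder.conf section from the guide looks roughly like this
(pool and user names are the guide's examples, the secret uuid is the one you
define for libvirt):

volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
glance_api_version = 2
rbd_user = cinder
rbd_secret_uuid = <uuid of your libvirt secret>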
Thanks for the prompt response and for the good advice.
Since this is a test Ceph cluster with no data of any value,
I have already rebuilt it from scratch so I can resume testing.
Will keep the group posted if the issue resurfaces, or if I learn
anything new that seems worth sharing.
Thanks again.