This might be slightly off topic, though many Ceph users might have run into
similar issues.
For one of our Grizzly OpenStack environments, we are using Ceph/RBD as the
exclusive image and volume storage for VMs, which boot from RBD-backed
Cinder volumes. As a result, the nova image cache i
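For context, this is roughly how the volume side is wired up in our setup (a
minimal sketch of the Grizzly-era cinder.conf RBD backend; the user and secret
values below are placeholders rather than our actual ones):
volume_driver=cinder.volume.drivers.rbd.RBDDriver
rbd_pool=ceph-openstack-volumes
rbd_user=<cephx client for the volumes pool>
rbd_secret_uuid=<libvirt secret uuid for that client>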
ank.com
> To: ws...@hotmail.com
> CC: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] adding, deleting or changing privilege for existing
> cephx users?
>
> On Mon, Jul 22, 2013 at 5:42 AM, w sun wrote:
> > Does anyone know how to do this or if this is not possible? We try
Does anyone know how to do this, or if this is not possible? We are trying to
modify the security scope for an existing cephx user but could not figure out
how to add access to a new pool without recreating the user, e.g.,
ceph auth get-or-create client.svl-ceph-openstack-images mon 'allow r' osd
'allow
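(One approach that looks like it should do this, if I am reading the docs
right, is "ceph auth caps", which resets the capability list for an existing
user in place; a rough sketch, with the pool names below only placeholders:
ceph auth caps client.svl-ceph-openstack-images mon 'allow r' \
  osd 'allow class-read object_prefix rbd_children, allow rwx pool=<existing-pool>, allow rwx pool=<new-pool>'
ceph auth get client.svl-ceph-openstack-images
Note that "ceph auth caps" overwrites the existing caps rather than appending,
so the full list has to be restated.)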
> From: josh.dur...@inktank.com
> To: ws...@hotmail.com
> CC: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] Openstack Multi-rbd storage backend
>
> On 06/27/2013 05:54 PM, w sun wrote:
> > Thanks Josh. That explains. So I guess right now with Grizzly, you can
> Date: Fri, 28 Jun 2013 14:10:12 -0700
> From: josh.dur...@inktank.com
> To: ws...@hotmail.com
> CC: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] Openstack Multi-rbd storage backend
>
> On 06/27/2013 05:54 PM, w sun wrote:
> > Thanks Josh. That explains
> Date: Wed, 26 Jun 2013 15:08:56 -0700
> From: josh.dur...@inktank.com
> To: ws...@hotmail.com
> CC: ceph-users@lists.ceph.com; sebastien@enovance.com
> Subject: Re: [ceph-users] Openstack Multi-rbd storage backend
>
> On 06/21/2013 09:48 AM, w sun wrote:
> > Josh & Sebast
Josh & Sebastien,
Does either of you have any comments on this cephx issue with multi-rbd backend
pools?
Thx. --weiguo
From: ws...@hotmail.com
To: ceph-users@lists.ceph.com
Date: Thu, 20 Jun 2013 17:58:34 +
Subject: [ceph-users] Openstack Multi-rbd storage backend
Has anyone seen the same issue as below?
We are trying to test the multi-backend feature with two RBD pools on the
Grizzly release. At this point, it seems that rbd.py does not take separate
cephx users for the two RBD pools for authentication, as it defaults to the
single ID defined in /etc/init/cind
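For reference, this is roughly the layout we are trying to get working (a
sketch of the Grizzly cinder.conf multi-backend configuration; whether rbd_user
is actually honored per backend section is exactly the open question here, and
the names below are placeholders):
enabled_backends=rbd-images,rbd-volumes
[rbd-images]
volume_driver=cinder.volume.drivers.rbd.RBDDriver
volume_backend_name=rbd-images
rbd_pool=<images pool>
rbd_user=<cephx client for the images pool>
rbd_secret_uuid=<libvirt secret uuid for that client>
[rbd-volumes]
volume_driver=cinder.volume.drivers.rbd.RBDDriver
volume_backend_name=rbd-volumes
rbd_pool=ceph-openstack-volumes
rbd_user=<cephx client for the volumes pool>
rbd_secret_uuid=<libvirt secret uuid for that client>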
Cloud Engineer
"Always give 100%. Unless you're giving blood."
Phone : +33 (0)1 49 70 99 72 – Mobile : +33 (0)6 52 84 44 70
Email : sebastien@enovance.com – Skype : han.sbastien
Address : 10, rue de la Victoire – 75009 Paris
Web : www.enovance.com – Twitter : @enovance
On Jun 11, 2013
Hi,
We are currently testing performance with RBD caching enabled in write-back
mode on our OpenStack (Grizzly) nova nodes. By default, nova fires up the RBD
volumes with "if=none" mode, as evidenced by the following command line from
"ps | grep".
-drive
file=rbd:ceph-openstack-volumes/volume-949
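The client side of what we are testing is just this in /etc/ceph/ceph.conf on
the nova node (a minimal sketch; the option name assumes a reasonably recent
librbd):
[client]
rbd cache = true
When the guest disk is actually configured for write-back, the qemu -drive line
should carry cache=writeback rather than cache=none.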
-x86_64 instead" >&2
> exec qemu-system-x86_64 -machine accel=kvm:tcg "$@"
>
>
> On Jun 3, 2013, at 2:10 AM, Wolfgang Hennerbichler
> wrote:
>
> > On Wed, May 29, 2013 at 04:16:14PM +0200, w sun wrote:
> >> Hi Wolfgang,
> >>
On Wed, May 29, 2013 at 04:16:14PM +0200, w sun wrote:
> > Hi Wolfgang,
> >
> > Can you elaborate the issue for 1.5 with libvirt? Wonder if that will
> > impact the usage with Grizzly. Did a quick compile for 1.5 with RBD support
> > enabled, so far it seems to be ok fo
Hi Martin,
I notice you have got everything working. I just want to point out that we use
the following in our nova.conf, and it has been working without issue.
cinder_catalog_info=volume:cinder:internalURL
--weiguo
> Date: Thu, 30 May 2013 22:50:12 +0200
> From: mar...@tuxadero.com
> To: josh
On the nova compute host (particularly if you have separate compute nodes), I
would suggest the following checks (rough command forms are sketched just
below):
(1) make sure "rbd ls -l -p " works and /etc/ceph/ceph.conf is readable by user
nova;
(2) make sure you can start up a regular ephemeral instance on the same nova
node (i.e., nova-compute is working correctly)(
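Roughly, and with the pool, user and image names below only placeholders since
the real ones are not in this snippet:
sudo -u nova rbd ls -l -p <volumes-pool> --id <cephx-user>
sudo -u nova cat /etc/ceph/ceph.conf > /dev/null && echo "ceph.conf readable by nova"
nova boot --flavor m1.tiny --image <any-glance-image> test-ephemeral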
I believe that the async_flush fix got in after the 1.4.1 release. Unless
someone has backported the patch to 1.4.0, it is unlikely that the 1.4.0
package would contain the fix.
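(If it helps, one quick way to check whether a given distro package carries a
backport is to search the package changelog, e.g. with a command like the one
below; the package name is only a guess, since on some distros the relevant
code ships in a qemu-img or qemu-system-x86_64 package instead.)
rpm -q --changelog qemu-kvm | grep -i flush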
--weiguo
> From: a...@alex.org.uk
> Date: Wed, 29 May 2013 08:59:14 +0100
> To: wolfgang.hennerbich...@risc-software.at
> CC:
Hi Wolfgang,
Can you elaborate on the issue with 1.5 and libvirt? I wonder if that will
impact usage with Grizzly. I did a quick compile of 1.5 with RBD support
enabled, and so far it seems to be OK for OpenStack with a few simple tests.
But I definitely want to be cautious if there is a known integration
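(For anyone wanting to sanity-check such a build, two quick and generic ways to
confirm the rbd driver actually got compiled in; nothing here is specific to
1.5.)
qemu-img --help | grep -i rbd        # rbd should show up in the "Supported formats" line
ldd $(which qemu-system-x86_64) | grep librbd    # shows librbd linkage when built with RBD support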
I have been reading the architecture section of the Ceph documentation. One
thing that has not been clear to me is how data HA works when we encounter an
OSD or server failure. Does the CRUSH algorithm recalculate based on the new
cluster map and point the data to the 2nd or 3rd replica for existing data blo
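(A handy way to poke at this, by the way, is "ceph osd map", which prints the
placement group and the OSD set that CRUSH currently computes for a given
object; the pool and object names below are just examples:
ceph osd map rbd some-object
The "up" and "acting" sets in its output are where the replicas live under the
current cluster map.)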
While planning the usage of fast clone from the OpenStack glance image store to
cinder volumes, I am a little concerned about the possible IO performance
impact on the cinder volume service node if I have to perform flattening of
multiple images down the road.
Am I right to assume the copying of the
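(For anyone following along, the operations I am talking about are the standard
RBD layering ones; a minimal sketch follows, with placeholder pool and image
names.)
rbd snap create ceph-openstack-images/myimage@snap
rbd snap protect ceph-openstack-images/myimage@snap
rbd clone ceph-openstack-images/myimage@snap ceph-openstack-volumes/volume-x
rbd flatten ceph-openstack-volumes/volume-x    # copies all parent data into the clone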
commit from Josh Durgin about whitelisting rbd migration.
On May 11, 2013, at 10:53 AM, w sun wrote:
The reference Mike provided is not valid for me. Does anyone else have the same
problem? --weiguo
From: j.michael.l...@gmail.com
Date: Sat, 11 May 2013 08:45:41 -0400
To: pi...@pioto.org
CC: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] RBD vs RADOS benchmark performance
I believe that this is f
RBD is not supported by VMware/vSphere. You will need to build an NFS/iSCSI/FC
gateway to support VMware. Here is a post from someone who has been trying
this; you may have to contact them directly for status:
http://ceph.com/community/ceph-over-fibre-for-vmware/
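(For what it's worth, the simplest gateway shape I can think of is mapping an
RBD image with the kernel client on a Linux box and re-exporting it; the sketch
below is only an illustration with made-up pool, image and mount names, not
something we have run in production.)
rbd create vmware-pool/esx-datastore --size 1048576
rbd map vmware-pool/esx-datastore               # appears as /dev/rbd0
mkfs.xfs /dev/rbd0 && mount /dev/rbd0 /export/esx-datastore
# then export /export/esx-datastore over NFS (or expose /dev/rbd0 via an iSCSI
# target) to the ESXi hosts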
--weiguo
To: ceph-users@lists.ceph.com
From: jare
-default cluster name?
On Thu, 9 May 2013, w sun wrote:
> I think I ran into a bug with ceph-deploy on cuttlefish? Has anyone else
> seen this?
>
> When creating new monitor, on the server node 1, found the directory
> prepended with default cluster name "ceph" ( was cr
I think I ran into a bug with ceph-deploy on cuttlefish? Has anyone else seen
this?
When creating a new monitor on server node 1, I found the directory prepended
with the default cluster name "ceph" ( was created,
root@svl-ceph-01:/var/lib/ceph# ll /var/lib/ceph/mon/
total 12
drwxr-xr-x 3 root root 4
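(For reference, the mon data directory is named after the cluster and the mon
id, i.e. /var/lib/ceph/mon/$cluster-$id, so with a non-default cluster name I
would have expected something along the lines of the illustrative listing below
rather than a "ceph-" prefixed directory.)
root@svl-ceph-01:/var/lib/ceph# ls /var/lib/ceph/mon/
<cluster-name>-svl-ceph-01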
Hi Josh,
I assume by "put up-to-date rbd on top of the RHEL package", you mean that the
latest "asynchronous flush" fix (QEMU portion) can be back-ported and included
in the RPMs? Or not?
Thx. --weiguo
> Date: Mon, 6 May 2013 12:54:57 -0700
> From: josh.dur...@inktank.com
> To: ceph-users@lists.
Does anyone know if there are RPM packages for EPEL 6-8? I have heard they have
been built but could not find them in the latest 6-8 repo.
Thanks. --weiguo