[ceph-users] ceph/rbd & nova image cache

2013-08-20 Thread w sun
This might be slightly off topic, though many Ceph users may have run into similar issues. For one of our Grizzly OpenStack environments, we are using Ceph/RBD as the exclusive image and volume storage for VMs, which boot from RBD-backed Cinder volumes. As a result, nova image cache ...
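For reference, the nova.conf options that control the image cache cleaner in that era look roughly like the sketch below; the option names are my assumption of the Grizzly-era knobs and the values are only illustrative, not a recommendation for this setup:

  [DEFAULT]
  # periodic task that scans and prunes the _base image cache on each compute node
  image_cache_manager_interval=2400
  # remove cached base images that no instance on the node references any more
  remove_unused_base_images=true
  remove_unused_original_minimum_age_seconds=86400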

Re: [ceph-users] adding, deleting or changing privilege for existing cephx users?

2013-07-22 Thread w sun
> On Mon, Jul 22, 2013 at 5:42 AM, w sun wrote: > > Does anyone know how to do this or if this is not possible? We try ...

[ceph-users] adding, deleting or changing privilege for existing cephx users?

2013-07-22 Thread w sun
Does anyone know how to do this, or if it is not possible? We tried to modify the security scope for an existing cephx user but could not figure out how to add access to a new pool without recreating the user, e.g., ceph auth get-or-create client.svl-ceph-openstack-images mon 'allow r' osd 'allow ...
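For anyone hitting the same problem: the caps of an existing cephx user can be rewritten in place with "ceph auth caps" instead of recreating the user. Note that the new caps replace the old ones, so every pool that should stay accessible has to be listed again. A minimal sketch, with both pool names as placeholders:

  # show the current key and caps
  ceph auth get client.svl-ceph-openstack-images
  # rewrite the caps, listing the existing pool plus the new one
  ceph auth caps client.svl-ceph-openstack-images mon 'allow r' \
      osd 'allow rwx pool=images, allow rwx pool=new-pool'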

Re: [ceph-users] Openstack Multi-rbd storage backend

2013-07-01 Thread w sun
> On 06/27/2013 05:54 PM, w sun wrote: > > Thanks Josh. That explains. So I guess right now with Grizzly, you can ...

Re: [ceph-users] Openstack Multi-rbd storage backend

2013-06-28 Thread w sun
> On 06/27/2013 05:54 PM, w sun wrote: > > Thanks Josh. That explains ...

Re: [ceph-users] Openstack Multi-rbd storage backend

2013-06-27 Thread w sun
> On 06/21/2013 09:48 AM, w sun wrote: > > Josh & Sebastien ...

Re: [ceph-users] Openstack Multi-rbd storage backend

2013-06-21 Thread w sun
Josh & Sebastien, Does either of you have any comments on this cephx issue with multi-rbd backend pools? Thx. --weiguo > Has anyone seen the same ...

[ceph-users] Openstack Multi-rbd storage backend

2013-06-20 Thread w sun
Has anyone seen the same issue as below? We are trying to test the multi-backend feature with two RBD pools on the Grizzly release. At this point, it seems that rbd.py does not take separate cephx users for the two RBD pools for authentication, as it defaults to the single ID defined in /etc/init/cind ...
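For context, a Grizzly-style cinder.conf multi-backend layout with two RBD pools and two cephx users would look roughly like the sketch below (section names, pool/user names and secret UUIDs are placeholders); whether rbd.py actually honours the per-backend rbd_user instead of the single ID from the init script is exactly the question raised here:

  [DEFAULT]
  enabled_backends=rbd-volumes,rbd-images

  [rbd-volumes]
  volume_driver=cinder.volume.drivers.rbd.RBDDriver
  rbd_pool=ceph-openstack-volumes
  rbd_user=ceph-openstack-volumes
  rbd_secret_uuid=00000000-0000-0000-0000-000000000000
  volume_backend_name=rbd-volumes

  [rbd-images]
  volume_driver=cinder.volume.drivers.rbd.RBDDriver
  rbd_pool=ceph-openstack-images
  rbd_user=svl-ceph-openstack-images
  rbd_secret_uuid=11111111-1111-1111-1111-111111111111
  volume_backend_name=rbd-images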

Re: [ceph-users] QEMU -drive setting (if=none) for rbd

2013-06-14 Thread w sun

[ceph-users] QEMU -drive setting (if=none) for rbd

2013-06-11 Thread w sun
Hi, We are currently testing performance with rbd caching enabled in write-back mode on our OpenStack (Grizzly) nova nodes. By default, nova fires up the rbd volumes in "if=none" mode, as evidenced by the following command line from "ps | grep": -drive file=rbd:ceph-openstack-volumes/volume-949 ...
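For comparison, a -drive line that actually enables write-back caching on an rbd volume looks roughly like the following (the volume name is truncated above and shown here as a placeholder); as I understand it, with QEMU >= 1.2 the cache= setting on the drive is what controls the librbd cache for that device:

  -drive file=rbd:ceph-openstack-volumes/volume-<uuid>:id=<cephx-user>:conf=/etc/ceph/ceph.conf,if=none,id=drive-virtio-disk0,format=raw,cache=writeback

and, on the compute node, the client-side cache can be switched on in ceph.conf:

  [client]
  rbd cache = true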

Re: [ceph-users] qemu-1.4.2 rbd-fixed ubuntu packages

2013-06-05 Thread w sun
-x86_64 instead" >&2 > exec qemu-system-x86_64 -machine accel=kvm:tcg "$@" > On Jun 3, 2013, at 2:10 AM, Wolfgang Hennerbichler wrote: > On Wed, May 29, 2013 at 04:16:14PM +0200, w sun wrote: >> Hi Wolfgang, ...
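The script being quoted above appears to be the stock Ubuntu /usr/bin/kvm wrapper; reconstructed from the fragment, it is roughly:

  #!/bin/sh
  echo "W: kvm binary is deprecated, please use qemu-system-x86_64 instead" >&2
  exec qemu-system-x86_64 -machine accel=kvm:tcg "$@"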

Re: [ceph-users] qemu-1.4.2 rbd-fixed ubuntu packages

2013-06-03 Thread w sun
> On Wed, May 29, 2013 at 04:16:14PM +0200, w sun wrote: > > Hi Wolfgang, can you elaborate on the issue with 1.5 and libvirt? I wonder whether that will impact usage with Grizzly. I did a quick compile of 1.5 with RBD support enabled; so far it seems to be OK ...

Re: [ceph-users] Openstack with Ceph, boot from volume

2013-05-31 Thread w sun
Hi Martin, I notice you have got everything working. I just want to point out that we use the following in our nova.conf and it has been working without issue: cinder_catalog_info=volume:cinder:internalURL --weiguo
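In other words, the relevant nova.conf line (Grizzly) is just:

  [DEFAULT]
  cinder_catalog_info=volume:cinder:internalURL

The format is <service_type>:<service_name>:<endpoint_type>, so nova-compute looks up Cinder's internal endpoint from the Keystone catalog instead of the public one.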

Re: [ceph-users] Openstack with Ceph, boot from volume

2013-05-30 Thread w sun
I would suggest, on the nova compute host (particularly if you have separate compute nodes): (1) make sure "rbd ls -l -p " works and /etc/ceph/ceph.conf is readable by the nova user; (2) make sure you can start up a regular ephemeral instance on the same nova node (i.e., nova-compute is working correctly); ( ...
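A quick way to run checks (1) and (2) as the nova user, with the pool, cephx ID and image names as placeholders:

  # (1) cephx auth works and /etc/ceph/ceph.conf is readable by nova
  sudo -u nova rbd ls -l -p ceph-openstack-volumes --id <cephx-user>
  ls -l /etc/ceph/ceph.conf /etc/ceph/*.keyring
  # (2) nova-compute itself is healthy on this node
  nova boot --flavor m1.tiny --image <ephemeral-image> test-ephemeral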

Re: [ceph-users] qemu-1.4.2 rbd-fixed ubuntu packages

2013-05-29 Thread w sun
I believe that the async_flush fix went in after the 1.4.1 release. Unless someone has backported the patch to 1.4.0, it is unlikely that a 1.4.0 package would contain the fix. --weiguo

Re: [ceph-users] qemu-1.4.2 rbd-fixed ubuntu packages

2013-05-29 Thread w sun
Hi Wolfgang, Can you elaborate on the issue with 1.5 and libvirt? I wonder whether that will impact usage with Grizzly. I did a quick compile of 1.5 with RBD support enabled; so far it seems to be OK for OpenStack with a few simple tests. But I definitely want to be cautious if there is a known integration ...
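For reference, the quick 1.5 build mentioned here amounts to something like the following (prefix and target list are illustrative); --enable-rbd needs the librbd/librados development packages installed:

  ./configure --target-list=x86_64-softmmu --enable-rbd --prefix=/opt/qemu-1.5
  make -j4 && sudo make install
  /opt/qemu-1.5/bin/qemu-system-x86_64 -version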

[ceph-users] How do rados get to data block if primary OSD is out?

2013-05-22 Thread w sun
I have been reading the architecture section of the Ceph documentation. One thing that has not been clear to me is how data HA works when we encounter an OSD or server failure. Does the CRUSH algorithm recalculate based on the new cluster map and point the data to the 2nd or 3rd replica for existing data blocks ...
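One way to watch this in practice is to ask the cluster which placement group and which OSDs (the up/acting set) currently serve a given object, then repeat the query after an OSD is marked out; pool and object names below are placeholders:

  # which PG and which OSDs serve this object right now
  ceph osd map rbd some-object
  # watch PGs peer, remap and backfill after a failure
  ceph -w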

[ceph-users] rbd image clone flattening @ client or cluster level?

2013-05-13 Thread w sun
While planning the use of fast clones from the OpenStack Glance image store to Cinder volumes, I am a little concerned about the possible IO performance impact on the cinder volume service node if I have to flatten multiple images down the road. Am I right to assume the copying of the ...
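For concreteness, the clone-then-flatten sequence in question looks like this (image names are placeholders); where the copy-up work actually lands, on the client running the command or spread across the OSDs, is exactly what is being asked:

  rbd snap create ceph-openstack-images/base-image@snap
  rbd snap protect ceph-openstack-images/base-image@snap
  rbd clone ceph-openstack-images/base-image@snap ceph-openstack-volumes/volume-new
  # later, detach the clone from its parent by copying all data up into it
  rbd flatten ceph-openstack-volumes/volume-new
  rbd children ceph-openstack-images/base-image@snap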

Re: [ceph-users] RBD vs RADOS benchmark performance

2013-05-12 Thread w sun
... commit from Josh Durgin about whitelisting rbd migration. > On May 11, 2013, at 10:53 AM, w sun wrote: > The reference Mike provided is not valid for me. Does anyone else have the same problem? --weiguo

Re: [ceph-users] RBD vs RADOS benchmark performance

2013-05-11 Thread w sun
The reference Mike provided is not valid for me. Does anyone else have the same problem? --weiguo > I believe that this is f ...

Re: [ceph-users] Using Ceph as Storage for VMware

2013-05-09 Thread w sun
RBD is not supported by VMware/vSphere. You will need to build an NFS/iSCSI/FC gateway to support VMware. Here is a post from someone who has been trying this; you may have to contact them directly for status: http://ceph.com/community/ceph-over-fibre-for-vmware/ --weiguo
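A rough sketch of the gateway idea, assuming the kernel rbd client plus tgt as the iSCSI target (names and IQN are placeholders; this is untested, not a recipe):

  # map the image on the gateway host; it appears as /dev/rbd0
  rbd map ceph-openstack-volumes/vmware-lun0
  # export the mapped device over iSCSI with tgt
  tgtadm --lld iscsi --op new --mode target --tid 1 -T iqn.2013-05.com.example:rbd-lun0
  tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 -b /dev/rbd0
  tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL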

Re: [ceph-users] ceph-deploy issue with non-default cluster name?

2013-05-09 Thread w sun
> On Thu, 9 May 2013, w sun wrote: > > I think I ran into a bug with ceph-deploy on cuttlefish? Has anyone else seen this? > > When creating a new monitor on server node 1, I found the directory was created prepended with the default cluster name "ceph" ...

[ceph-users] ceph-deploy issue with non-default cluster name?

2013-05-09 Thread w sun
I think I ran into a bug with ceph-deploy on cuttlefish? Has anyone else seen this? When creating a new monitor on server node 1, I found the directory was created prepended with the default cluster name "ceph":
root@svl-ceph-01:/var/lib/ceph# ll /var/lib/ceph/mon/
total 12
drwxr-xr-x 3 root root 4 ...
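For reference, the intent was roughly the following, assuming ceph-deploy's --cluster flag (<cluster> being the non-default name); the monitor directory would then be expected under /var/lib/ceph/mon/<cluster>-<hostname> rather than ceph-<hostname>:

  ceph-deploy --cluster <cluster> new svl-ceph-01
  ceph-deploy --cluster <cluster> mon create svl-ceph-01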

Re: [ceph-users] EPEL packages for QEMU-KVM with rbd support?

2013-05-06 Thread w sun
Hi Josh, I assume that by "put up-to-date rbd on top of the RHEL package" you mean that the latest "asynchronous flush" fix (the QEMU portion) can be back-ported and included in the RPMs? Or not? Thx. --weiguo
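Regardless of which build ends up in EPEL, a quick way to check whether the installed qemu actually has rbd support compiled in:

  rpm -q qemu-kvm qemu-img
  qemu-img --help | grep rbd    # rbd should appear in the supported formats list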

[ceph-users] EPEL packages for QEMU-KVM with rbd support?

2013-05-06 Thread w sun
Does anyone know if there are RPM packages for EPEL 6-8? I have heard they have been built but could not find them in the latest 6-8 repo. Thanks. --weiguo