Thanks.
-chen

-----Original Message-----
From: yehud...@gmail.com [mailto:yehud...@gmail.com] On Behalf Of Yehuda Sadeh
Sent: Thursday, March 21, 2013 9:52 PM
To: Sebastien Han
Cc: Li, Chen; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] using Ceph FS as OpenStack Glance's backend

On Thu, Mar 21, 2013 at 2:12 AM, Sebastien Han wrote:
>
> Hi,
>
> Storing the image as an object with RADOS or RGW will result in a single
> big object stored somewhere in Ceph. However, with RBD the image is spread
> across thousands of objects across the entire cluster. In the end, you get
> way more performance by using RBD [...]
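
For anyone who wants to see that striping directly, a quick check on a test
cluster (assuming an RBD image already exists in an "images" pool; the pool and
image names here are only examples):

  # show the image size, object size (order) and block_name_prefix
  rbd -p images info myimage
  # count the RADOS objects the image data is striped across
  rados -p images ls | grep <block_name_prefix from the output above> | wc -l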

Why "either the Ceph object gateway or Ceph RBD makes more sense than CephFS
currently"? Because CephFS is not production ready?

Thanks.
-chen

-----Original Message-----
From: Neil Levine [mailto:neil.levine@inktank.com]
Sent: Wednesday, March 20, 2013 4:05 AM
To: Patrick McGarry
Cc: Li, Chen; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] using Ceph FS as OpenStack Glance's backend

...to be more precise, I should have said: object storage has been the
preferred mechanism of late in OpenStack [...]

I just want to see if Ceph FS works.
Thanks.
-chen
-----Original Message-----
From: pmcga...@gmail.com [mailto:pmcga...@gmail.com] On Behalf Of Patrick McGarry
Sent: Tuesday, March 19, 2013 9:50 PM
To: Li, Chen
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] using Ceph FS as OpenStack Glance's backend

...to be more precise, I should have said: object storage has been the
preferred mechanism of late in OpenStack, but RBD makes more sense due
to the copy-on-write facility. Either way, either the Ceph object
gateway or Ceph RBD makes more sense than CephFS currently.

Neil
I think object storage (using the Swift-compatible Ceph Object
Gateway) is the preferred mechanism for a Glance backend.
Neil
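
A rough sketch of what that looks like on the Glance side, assuming the RGW
user has a Swift subuser and key set up; the endpoint, user, key and container
names below are placeholders, and the option names are the swift-store settings
from glance-api.conf of this era:

  # /etc/glance/glance-api.conf -- illustrative values only
  default_store = swift
  swift_store_auth_version = 1
  swift_store_auth_address = http://radosgw.example.com/auth/v1.0/
  swift_store_user = glance:swift
  swift_store_key = <swift secret key of the RGW user>
  swift_store_container = glance
  swift_store_create_container_on_put = True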
On Tue, Mar 19, 2013 at 6:49 AM, Patrick McGarry wrote:
> Any reason you have chosen to use CephFS here instead of RBD for
> direct integration with Glance?
>
> http://ceph.com/docs/master/rbd/rbd-openstack/
Any reason you have chosen to use CephFS here instead of RBD for
direct integration with Glance?
http://ceph.com/docs/master/rbd/rbd-openstack/
Best Regards,
Patrick McGarry
Director, Community || Inktank
http://ceph.com || http://inktank.com
@scuttlemonkey || @ceph || @inktank
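
The rbd-openstack doc linked above walks through the full setup; on the Glance
side it comes down to a few glance-api.conf lines roughly like these (the pool
and user names follow the examples in that doc and are not requirements):

  # /etc/glance/glance-api.conf -- illustrative values only
  default_store = rbd
  # ceph config and cephx user Glance should connect with
  rbd_store_ceph_conf = /etc/ceph/ceph.conf
  rbd_store_user = glance
  # RADOS pool that will hold the image data, chunked into 8 MB objects
  rbd_store_pool = images
  rbd_store_chunk_size = 8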

I'm trying to use Ceph FS as Glance's backend.
I have mounted Ceph FS on the Glance machine and edited /etc/glance/glance-api.conf
to use the mounted directory.
But when I upload an image the way I usually do, I get this error:

Request returned failure status.
None
HTTPServiceUnavailable (HTTP 503)

If I change [...]
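
For reference, pointing Glance at a CephFS mount only needs the stock
filesystem store, so the relevant glance-api.conf lines would look something
like this (the mount point is only an example, and the directory must exist and
be writable by the glance user):

  # /etc/glance/glance-api.conf -- illustrative values only
  default_store = file
  # directory on the CephFS mount where Glance writes image files
  filesystem_store_datadir = /mnt/cephfs/glance/images/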