[Openstack] [OpenStack][Ceph][Manila]

2017-12-22 Thread Amit Kumar
Hi All, I have installed Ocata and configured a Ceph storage cluster for the shared file system. I am able to mount a share created by the Manila service on a VM as specified in the following link: https://docs.openstack.org/manila/ocata/devref/cephfs_native_driver.html The problem is that I am only able to moun
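For reference, the cephfs_native driver docs linked above mount the share directly with ceph-fuse on the guest; a minimal sketch, assuming a share user named alice and a placeholder share path (both illustrative, not taken from this thread):

    sudo ceph-fuse /mnt/share \
        --id=alice \
        --conf=./client.conf \
        --keyring=./alice.keyring \
        --client-mountpoint=/volumes/_nogroup/<share-id>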

Re: [Openstack] Ceph RadosGW and Object Storage meters: storage.objects.incoming|outgoing.bytes

2016-06-23 Thread Eugen Block
I'm trying to accomplish the same. I use Ceph as the storage backend and get errors in ceilometer-polling.log like: Cannot inspect data of MemoryUsagePollster for 7307de53-52a4-4900-9c04-d5fb6c787159, non-fatal reason: Failed to inspect memory usage of instance id=7307de53-52a4-4900-9c04-d5fb6c7

[Openstack] Ceph RadosGW and Object Storage meters: storage.objects.incoming|outgoing.bytes

2016-06-22 Thread magicb...@hotmail.com
Hi, is there a way to get meters like storage.objects.(incoming|outgoing).bytes when using the ceph radosgw service instead of swift on openstack? Thanks in advance. J.
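Those meters rely on the RADOS Gateway usage log being enabled so that Ceilometer can poll the RGW admin API; a hedged sketch of the Ceph-side piece, assuming the gateway instance is named client.radosgw.gateway (an illustrative name):

    # ceph.conf on the radosgw host
    [client.radosgw.gateway]
    rgw enable usage log = true

    # confirm usage records are being written
    radosgw-admin usage show --show-log-entries=true

The Ceilometer side additionally needs RGW admin credentials configured for polling; the exact option names vary by release.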

Re: [Openstack] ceph and openstack

2016-03-08 Thread Erdősi Péter
On 2016-03-08 at 6:04, Martin Wilderoth wrote: Where should I run cinder-volume when I use ceph? On the controller? On the ceph mon or mds? Or? If you use (and I think you will) cinder-conversion, it may be good to dedicate resources to it. (That role converts qcow to raw, which causes a lot of I

Re: [Openstack] ceph and openstack

2016-03-08 Thread Martin Wilderoth
After loading the RBD driver. Maybe my setup is incorrect. Thanks. Setup and error: rados pools: data, metadata, rbd, images, volumes, backups, vms. cinder-volume log: 2016-03-08 07:39:38.713 7856 INFO cinder.service [-] Starting cinder-volume node (version 7.0.1) 2016-03-08 07:39:38.715 7856 INFO cinder.vo

Re: [Openstack] ceph and openstack

2016-03-08 Thread Geo Varghese
Which error is it showing? On Tue, Mar 8, 2016 at 11:37 AM, Martin Wilderoth <martin.wilder...@linserv.se> wrote: > Thanks both, > > I will run it on the controller node. > > My cinder-volume crashed. > Any dependencies, or is my ceph cluster too old? > (I'm running dumpling) I will investigate > > Thank

Re: [Openstack] ceph and openstack

2016-03-07 Thread Martin Wilderoth
Thanks both, I will run it on the controller node. My cinder-volume crashed. Any dependencies, or is my ceph cluster too old? (I'm running dumpling.) I will investigate. Thanks. On 8 March 2016 at 06:32, Mike Smith wrote: > If you are using Ceph as a Cinder backend, you would likely want to run > cinder

Re: [Openstack] ceph and openstack

2016-03-07 Thread Joshua Harlow
On 03/07/2016 09:32 PM, Mike Smith wrote: You can also run Ceph for nova ephemeral disks without Cinder at all. You’d do that in nova.conf. We use both at Overstock. Ceph for nova ephemeral for general use, and also Ceph as one option in a multi-backend Cinder configuration. We also use it f
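The nova.conf settings Mike alludes to for RBD-backed ephemeral disks look roughly like the sketch below; the pool name, user, and secret UUID are placeholders:

    # nova.conf on each compute node (illustrative values)
    [libvirt]
    images_type = rbd
    images_rbd_pool = vms
    images_rbd_ceph_conf = /etc/ceph/ceph.conf
    rbd_user = cinder
    rbd_secret_uuid = <libvirt-secret-uuid>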

Re: [Openstack] ceph and openstack

2016-03-07 Thread Mike Smith
If you are using Ceph as a Cinder backend, you would likely want to run cinder-volume on your controller node(s). You could run it anywhere I suppose, including on the Ceph nodes themselves, but I’d recommend having it on the controllers. Wherever you run it, you’d need a properly configured
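The "properly configured" piece is essentially the Ceph client keyring plus a cinder.conf RBD backend stanza on whichever node runs cinder-volume; a minimal sketch with illustrative names (the secret UUID must match the libvirt secret on the compute nodes):

    # cinder.conf (illustrative)
    [DEFAULT]
    enabled_backends = ceph

    [ceph]
    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    rbd_pool = volumes
    rbd_ceph_conf = /etc/ceph/ceph.conf
    rbd_user = cinder
    rbd_secret_uuid = <libvirt-secret-uuid>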

Re: [Openstack] ceph and openstack

2016-03-07 Thread Erik McCormick
I run it on control nodes generally. You can also dedicate boxes to it if you expect it to get extremely busy. I like to leave my ceph boxen to do ceph only. -Erik On Mar 8, 2016 12:07 AM, "Martin Wilderoth" wrote: > > Hello > > Where should I run cinder-volume when I use ceph > on the controll

[Openstack] ceph and openstack

2016-03-07 Thread Martin Wilderoth
Hello. Where should I run cinder-volume when I use ceph? On the controller? On the ceph mon or mds? Or? Maybe it doesn't matter? Thanks in advance. Regards, Martin

Re: [Openstack] CEPH Speed Limit

2016-01-20 Thread Caitlin Bestler
On 1/19/16 5:28 PM, John van Ommen wrote: I have a client who isn't happy with the performance of their storage. The client is currently running a mix of SAS HDDs and SATA SSDs. They wanted to remove the SAS HDDs and replace them with SSDs, so the entire array would be SSDs. I was running b

Re: [Openstack] CEPH Speed Limit

2016-01-19 Thread Van Leeuwen, Robert
Hi, I think this question would be better suited to the ceph mailing list, but I will have a go at it. > I have a client who isn't happy with the performance of their storage. > The client is currently running a mix of SAS HDDs and SATA SSDs. What part are they not happy about? Throughput or IOP

Re: [Openstack] CEPH Speed Limit

2016-01-19 Thread Azher Mughal
Hi John, On 1/19/2016 5:28 PM, John van Ommen wrote: > I have a client who isn't happy with the performance of their storage. > The client is currently running a mix of SAS HDDs and SATA SSDs. > > They wanted to remove the SAS HDDs and replace them with SSDs, so the > entire array would be SSDs. >

[Openstack] CEPH Speed Limit

2016-01-19 Thread John van Ommen
I have a client who isn't happy with the performance of their storage. The client is currently running a mix of SAS HDDs and SATA SSDs. They wanted to remove the SAS HDDs and replace them with SSDs, so the entire array would be SSDs. I was running benchmarks on the current hardware and I found th
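A quick way to separate raw cluster capability from client-side limits is rados bench; a sketch, assuming a throwaway pool named testpool:

    rados bench -p testpool 60 write --no-cleanup   # 60-second write test
    rados bench -p testpool 60 seq                  # sequential read of the same objects
    rados -p testpool cleanup                       # remove the benchmark objects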

Re: [Openstack] Ceph backed Nova live-migration

2015-01-15 Thread Pádraig Brady
On 14/01/15 23:35, Yagmur Akbulut wrote: > Thanks for the reply. We are running Icehouse. > Is there a backport for this? There is in RHOSP5 and maybe other productized versions. I don't know of a publicly available backport. Pádraig.

Re: [Openstack] Ceph backed Nova live-migration

2015-01-14 Thread Yagmur Akbulut
Thanks for the reply. We are running Icehouse. Is there a back port for this? On Wed, Jan 14, 2015 at 6:11 AM, Pádraig Brady wrote: > On 12/01/15 19:42, Yagmur Akbulut wrote: > > Hi all, > > > > We are working on nova live-migration using Ceph. Before live-migration, > Nova does a check to see i

Re: [Openstack] Ceph backed Nova live-migration

2015-01-14 Thread Pádraig Brady
On 12/01/15 19:42, Yagmur Akbulut wrote: > Hi all, > > We are working on nova live-migration using Ceph. Before live-migration, Nova > does a check to see if the remote is on shared storage. > > In order to make this test pass, we have patched Nova to always return True > in _check_shared_stor

Re: [Openstack] Ceph backed Nova live-migration

2015-01-12 Thread Matt Riedemann
On 1/12/2015 1:42 PM, Yagmur Akbulut wrote: Hi all, We are working on nova live-migration using Ceph. Before live-migration, Nova does a check to see if the remote is on shared storage. In order to make this test pass, we have patched Nova to always return True in _check_shared_storage_test_f

[Openstack] Ceph backed Nova live-migration

2015-01-12 Thread Yagmur Akbulut
Hi all, We are working on nova live-migration using Ceph. Before live-migration, Nova does a check to see if the remote is on shared storage. In order to make this test pass, we have patched Nova to always return True in _check_shared_storage_test_file located in nova/virt/libvirt/driver.py The
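The workaround described here amounts to short-circuiting the shared-storage check in the libvirt driver; a sketch of that patch as the thread describes it (not a recommended long-term fix):

    # nova/virt/libvirt/driver.py -- sketch of the workaround described above
    def _check_shared_storage_test_file(self, filename):
        # RBD-backed instances keep their disks in Ceph rather than in an
        # instance directory on shared storage, so the temp-file check
        # fails; forcing True lets live-migration proceed.
        return True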

Re: [Openstack] [ceph-users] Unable to download files from ceph radosgw node using openstack juno swift client.

2014-12-16 Thread Vivek Varghese Cherian
Hi, root@ppm-c240-ceph3:/var/run/ceph# ceph --admin-daemon /var/run/ceph/ceph-osd.11.asok config show | less | grep rgw_max_chunk_size "rgw_max_chunk_size": "524288", root@ppm-c240-ceph3:/var/run/ceph# And the value is above 4 MB. Regards, -- Vivek Varghese Cherian

Re: [Openstack] [ceph-users] Unable to download files from ceph radosgw node using openstack juno swift client.

2014-12-16 Thread Vivek Varghese Cherian
Hi, On Tue, Dec 16, 2014 at 12:54 PM, pushpesh sharma wrote: > > Vivek, > > The problem is swift client is only downloading a chunk of object not > the whole object so the etag mismatch. Could you paste the value of > 'rgw_max_chunk_size'. Please be sure you set this to a sane > value(<4MB, atlea

Re: [Openstack] [ceph-users] Unable to download files from ceph radosgw node using openstack juno swift client.

2014-12-15 Thread pushpesh sharma
Vivek, the problem is that the swift client is only downloading a chunk of the object, not the whole object, hence the etag mismatch. Could you paste the value of 'rgw_max_chunk_size'? Please be sure you set this to a sane value (<4 MB; at least for the Giant release this works below this value). On Tue, Dec 16, 2014 a
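For reference, the chunk size is set in ceph.conf on the gateway host; a hedged sketch, where the section name depends on how the gateway instance is named:

    # ceph.conf (value in bytes; 524288 = 512 KB)
    [client.radosgw.gateway]
    rgw max chunk size = 524288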

Re: [Openstack] [ceph-users] Unable to start radosgw

2014-12-15 Thread Mark Kirkwood
On 15/12/14 20:54, Vivek Varghese Cherian wrote: Hi, Do I need to overwrite the existing .db files and .txt file in /var/lib/nssdb on the radosgw host with the ones copied from /var/ceph/nss on the Juno node ? Yeah - worth a try (we want to rule out any certifica

Re: [Openstack] [ceph-users] Unable to start radosgw

2014-12-14 Thread Vivek Varghese Cherian
Hi, >> Do I need to overwrite the existing .db files and .txt file in >> /var/lib/nssdb on the radosgw host with the ones copied from >> /var/ceph/nss on the Juno node ? >> >> > Yeah - worth a try (we want to rule out any certificate mis-match errors). > > Cheers > > Mark > > I have manually c

Re: [Openstack] [ceph-users] Unable to start radosgw

2014-12-10 Thread Mark Kirkwood
On 11/12/14 02:33, Vivek Varghese Cherian wrote: Hi, root@ppm-c240-ceph3:~# /usr/bin/radosgw -n client.radosgw.gateway -d log-to-stderr 2014-12-09 12:51:31.410944 7f073f6457c0 0 ceph version 0.80.7 (6c0127fcb58008793d3c8b62d925bc91963672a3), process radosgw,

Re: [Openstack] [ceph-users] Unable to start radosgw

2014-12-10 Thread Vivek Varghese Cherian
Hi, root@ppm-c240-ceph3:~# /usr/bin/radosgw -n client.radosgw.gateway -d >> log-to-stderr >> 2014-12-09 12:51:31.410944 7f073f6457c0 0 ceph version 0.80.7 >> (6c0127fcb58008793d3c8b62d925bc91963672a3), process radosgw, pid 5958 >> common/ceph_crypto.cc: In function 'void >> ceph::crypto::init(Ce

Re: [Openstack] [ceph-users] Unable to start radosgw

2014-12-09 Thread Mark Kirkwood
On 10/12/14 07:36, Vivek Varghese Cherian wrote: Hi, I am trying to integrate OpenStack Juno Keystone with the Ceph Object Gateway (radosgw). I want to use keystone as the user authority. A user that keystone authorizes to access the gateway will also be created on the radosgw. Tokens that keyst
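The Keystone integration being described is driven by a handful of rgw options in ceph.conf on the gateway host; a sketch with illustrative values (host name, token, and NSS path are placeholders):

    [client.radosgw.gateway]
    rgw keystone url = http://juno-keystone-host:35357
    rgw keystone admin token = <keystone-admin-token>
    rgw keystone accepted roles = Member, admin
    rgw s3 auth use keystone = true
    nss db path = /var/ceph/nss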

Re: [Openstack] Ceph.

2014-10-04 Thread Bruno L
Hi Marcus, RADOS, Ceph's object store foundation, is strongly consistent (CP). The RADOS Gateway, the system that exposes RADOS via the Swift and S3 APIs, can be configured for async replication of objects across sites (so you are effectively overlaying an AP system on top of a CP system). Add C

Re: [Openstack] Ceph.

2014-10-04 Thread Jonathan Proulx
Hi Marcus, The silence you hear is because Ceph isn't an OpenStack project, though a lot of us (myself included) do use it heavily with OpenStack. Your questions are very ceph specific rather than about how to get it to work with OpenStack and would be better answered on the ceph-users mailing l

Re: [Openstack] Ceph.

2014-10-04 Thread Marcus White
Hello, A bump! MW On Tue, Sep 30, 2014 at 2:17 PM, Marcus White wrote: > Hello, > Had some basic Ceph questions. > > a. Ceph is strongly consistent and different from usual object, does > that mean all metadata also, container and account etc is all > consistent and everything is updated in the

[Openstack] Ceph.

2014-09-30 Thread Marcus White
Hello, I had some basic Ceph questions. a. Ceph is strongly consistent and different from the usual object stores; does that mean all metadata (container, account, etc.) is also consistent and everything is updated in the path of the client operation itself, even for a single site? Is it the same for block

Re: [Openstack] Ceph vs Swift + Cinder

2014-07-14 Thread Frans Thamura
Thanks all, I must study all the features deeply, because I am just starting out in this field. One of my friends/companies is using ceph with Parallels VPS and provides IaaS services with help from Inktank. I still want to know how it works, and the explanations make me want to search for more case studies of both. F -- Frans Thamura

Re: [Openstack] Ceph vs Swift + Cinder

2014-07-14 Thread Robert van Leeuwen
> just discussion regarding ceph as platform for swift and cinder, and > also a replacement. > > because we can run swift and cinder also without ceph > and why should ceph as both platform, As in the true linux spirit you have options. Make sure you know what your requirements are and see what fi

Re: [Openstack] Ceph vs Swift + Cinder

2014-07-14 Thread Michael Gale
Hey, I agree with Hugo, it really depends on your use case / requirements. Why are you considering object storage, and what functionality and features do you need? What is the expectation around block storage? Once you know those, it will help you decide which technology is best for you. Michael

Re: [Openstack] Ceph vs Swift + Cinder

2014-07-14 Thread Kuo Hugo
Hi Frans, here's my perspective. There are some tradeoffs between the different choices. First of all, you should know your use case clearly before making a decision. There are several combos: - OpenStack Swift as the object storage core + Cinder as the block storage controller with various backends

[Openstack] Ceph vs Swift + Cinder

2014-07-14 Thread Frans Thamura
Hi all, just a discussion regarding ceph as a platform for swift and cinder, and also as a replacement. I still want to know how ceph stands, because we can also run swift and cinder without ceph, so why should ceph be the platform for both? F

[Openstack] Ceph and ephemeral disks

2014-07-11 Thread Sergey Motovilovets
Hi, I have a problem with Ceph as a backend for ephemeral volumes. Creation of the instance completes as expected, and the instance is functional, but when I try to delete it, the instance gets stuck in the "Deleting" state forever. Part of nova.conf on the compute node: ... [libvirt] inject_password=false inject_key=f
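Independent of the root cause, a common first step for an instance wedged in "deleting" is to reset its state and retry the delete; a hedged sketch using the nova client:

    nova reset-state --active <instance-uuid>
    nova delete <instance-uuid>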

Re: [Openstack] Ceph vs swift

2014-06-16 Thread Chuck Thier
Hi Vincenzo, Thanks for taking the time to review my suggestions. I'm a bit concerned though as you failed to address one of the biggest issues. Your test results still indicate that the swift cluster wasn't properly configured and you optimize the ceph cluster with an extra disk for journals, a

Re: [Openstack] Ceph vs swift

2014-06-16 Thread Vincenzo Pii
Hi Chuck, Many thanks for your comments! I have replied on the blog. Best regards, Vincenzo. 2014-06-12 21:10 GMT+02:00 Chuck Thier : > Hi Vincenzo, > > First thank you for this work. It is always interesting to see > different data points from different use cases. > > I noticed a couple o

Re: [Openstack] Ceph vs swift

2014-06-12 Thread Chuck Thier
Hi Vincenzo, First thank you for this work. It is always interesting to see different data points from different use cases. I noticed a couple of things and would like to ask a couple of questions and make some observations. Comparing a high level HTTP/REST based api (swift) to a low-level C ba

Re: [Openstack] Ceph vs swift

2014-06-12 Thread Diego Parrilla Santamaría
Great stuff. Congrats! -- Diego Parrilla, CEO, www.stackops.com | diego.parri...@stackops.com | +34 91 005-2164 | skype:diegoparrilla On Thu, Jun 12, 2014 at 10:46 AM, Vincenzo Pii wrote: > As promised, the results for our study on Cep

Re: [Openstack] Ceph vs swift

2014-06-12 Thread hua peng
As promised, the results for our study on Ceph vs Swift for object storage: http://blog.zhaw.ch/icclab/evaluating-the-performance-of-ceph-and-swift-for-object-storage-on-small-clusters/ A really nice posting. Thanks.

Re: [Openstack] Ceph vs swift

2014-06-12 Thread Vincenzo Pii
As promised, the results for our study on Ceph vs Swift for object storage: http://blog.zhaw.ch/icclab/evaluating-the-performance-of-ceph-and-swift-for-object-storage-on-small-clusters/ 2014-06-06 20:19 GMT+02:00 Matthew Farrellee : > On 06/02/2014 02:52 PM, Chuck Thier wrote: > > I have heard

Re: [Openstack] Ceph vs swift

2014-06-06 Thread Matthew Farrellee
On 06/02/2014 02:52 PM, Chuck Thier wrote: I have heard that there has been some work to integrate Hadoop with Swift, but know very little about it. Integration with MS exchange, but could be an interesting use case. nutshell: the hadoop ecosystem tends to integrate with mapreduce or hdfs. h

Re: [Openstack] Ceph vs swift

2014-06-03 Thread Remo
Thanks Vincenzo, I am very familiar with those docs / links below. Cheers. On 2014-06-03 06:32, Vincenzo Pii wrote: > Hi Remo, > > We are doing a performance evaluation study on Ceph vs Swift for small storage clusters. > The results should be published soon, so if the use case is of inter

Re: [Openstack] Ceph vs swift

2014-06-03 Thread Vincenzo Pii
Hi Remo, We are doing a performance evaluation study on Ceph vs Swift for small storage clusters. The results should be published soon, so if the use case is of interest to you, you will have some material to analyze :). Concerning the partition power, I think this article [1] (which is a bit old

Re: [Openstack] Ceph vs swift

2014-06-02 Thread Chuck Thier
Hi Remo, I have heard that there has been some work to integrate Hadoop with Swift, but know very little about it. Integration with MS exchange, but could be an interesting use case. Partitions can be thought of as virtual buckets that objects are assigned to. They are an abstract concept and d
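Concretely, the partition count is 2^part_power and is fixed when the ring is created; a sketch with illustrative values:

    # 2^14 = 16384 partitions, 3 replicas, 1-hour minimum between partition moves
    swift-ring-builder object.builder create 14 3 1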

Re: [Openstack] Ceph vs swift

2014-05-29 Thread Remo Mattei
Thanks Chuck, this is great. I want to use an object store which allows me to work well with Hadoop and, if possible, with MS Exchange, IIS, etc. If you do have tips on this that will be great. I would also love your take on partition power and how you describe it. I have my own ideas on it so I'm cu

Re: [Openstack] Ceph vs swift

2014-05-29 Thread Chuck Thier
Hello Remo, That is quite an open ended question :) If you could share a bit more about your use case, then it would be easier to provide more detailed information, but I'll try to cover some of the basics. First, a disclaimer. I am one of the original Openstack Swift developers, so I *may* be

[Openstack] Ceph vs swift

2014-05-29 Thread Remo Mattei
Hi all, has anyone done any testing or comparison between swift and ceph? I want to get some other people's perspectives. Thanks. Sent from iPhone

Re: [Openstack] Ceph as unified storage solution

2014-04-04 Thread Ian Marshall
Hi 'All', thanks for your responses. If going with a Ceph storage solution, the plan would be to use two R720xds per site, each having 128 GB RAM, 10 GbE network connections, and 24 x 600 GB 10k SAS drives per storage server, with each disk being a single OSD set up as RAID0. Regards, Ian
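For rough sizing, that hardware works out as follows (assuming the two R720xds at a site replicate to each other, i.e. 2x replication, an assumption not stated in the thread):

    24 drives x 600 GB x 2 servers = 28.8 TB raw per site
    28.8 TB / 2 (replication)      = roughly 14.4 TB usable, before the
                                     free-space headroom Ceph normally needs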

Re: [Openstack] Ceph as unified storage solution

2014-04-04 Thread Alvin Starr
100 servers has you running 400 GB of RAM and 2 TB of storage per server, or 4 TB of storage overall. That would actually be within the range of 2 systems using DRBD and SSDs, and you would get extremely fast performance. I would argue that CEPH works best for large data sets and where there ar

Re: [Openstack] Ceph as unified storage solution

2014-04-04 Thread Drew Weaver
with SSD. Thanks, -Drew From: Ian Marshall [mailto:ian.marsh...@freedom-finance.com] Sent: Friday, April 04, 2014 2:17 AM To: openstack@lists.openstack.org Subject: [Openstack] Ceph as unified storage solution Hi I am implementing a small Openstack production system across two sites. This will

[Openstack] Ceph as unified storage solution

2014-04-03 Thread Ian Marshall
Hi, I am implementing a small OpenStack production system across two sites. This will initially have 2 controller nodes (also acting as network nodes) and 2 compute nodes at each site. The network up to the hardware load balancers will be 10 GbE. The expectation is we will be running about 80-100 VMs at each

Re: [Openstack] Ceph bootable cinder volume not working in Havana

2013-12-17 Thread Edward Lezondra
Subject: Re: [Openstack] Ceph bootable cinder volume not working in Havana I have noticed a similar problem. I have 2 different pools: images and volumes and both use the same ceph.user keyring. I can create a volume from an image but if I try to cause the image to copy as part of an instance start

Re: [Openstack] Ceph bootable cinder volume not working in Havana

2013-12-12 Thread Alvin Starr
10:03 PM To: Edward Lezondra Cc: Openstack@lists.openstack.org Subject: Re: [Openstack] Ceph bootable cinder volume not working in Havana Could you give out your cinder.conf? On Tue, Dec 10, 2013 at 4:10 AM, Edward Lezondra wrote: Hello, I’m having trouble getting a bootable cinder volume to wo

Re: [Openstack] Ceph bootable cinder volume not working in Havana

2013-12-11 Thread Haomai Wang
33 South Wacker Drive # 4300 | Chicago, IL 60606 >> | http://www.imc-chicago.com >> Phone: +13122755425 | E-Mail: edward.lezon...@imc-chicago.com >> >> -Original Message- >> From: Haomai Wang [mailto:haomaiw...@gmail.com] >> Sent: Monday, December 09, 2013 10:

Re: [Openstack] Ceph bootable cinder volume not working in Havana

2013-12-11 Thread Alvin Starr
: Edward Lezondra Cc: Openstack@lists.openstack.org Subject: Re: [Openstack] Ceph bootable cinder volume not working in Havana Could you give out your cinder.conf? On Tue, Dec 10, 2013 at 4:10 AM, Edward Lezondra wrote: Hello, I’m having trouble getting a bootable cinder volume to work in

Re: [Openstack] Ceph bootable cinder volume not working in Havana

2013-12-11 Thread Haomai Wang
On Tue, Dec 10, 2013 at 1:03 PM, Thomas Goirand wrote: > On Tue Dec 10 2013 12:03:16 PM HKT, Haomai Wang wrote: > >> Could you give out your cinder.conf? >> >> On Tue, Dec 10, 2013 at 4:10 AM, Edward Lezondra >> wrote: >> > Hello, >> > >> > I’m having trouble getting a bootable cinder volume to

Re: [Openstack] Ceph bootable cinder volume not working in Havana

2013-12-11 Thread Edward Lezondra
@lists.openstack.org Subject: Re: [Openstack] Ceph bootable cinder volume not working in Havana Could you give out your cinder.conf? On Tue, Dec 10, 2013 at 4:10 AM, Edward Lezondra wrote: > Hello, > > > > I’m having trouble getting a bootable cinder volume to work in Havana > using Ceph. I man

Re: [Openstack] Ceph bootable cinder volume not working in Havana

2013-12-09 Thread Thomas Goirand
On Tue Dec 10 2013 12:03:16 PM HKT, Haomai Wang wrote: > Could you give out your cinder.conf? > > On Tue, Dec 10, 2013 at 4:10 AM, Edward Lezondra > wrote: > > Hello, > > > > I’m having trouble getting a bootable cinder volume to work in Havana > > using Ceph. I managed to get glance working c

Re: [Openstack] Ceph bootable cinder volume not working in Havana

2013-12-09 Thread Haomai Wang
Could you give out your cinder.conf? On Tue, Dec 10, 2013 at 4:10 AM, Edward Lezondra wrote: > Hello, > > > > I’m having trouble getting a bootable cinder volume to work in Havana using > Ceph. I managed to get glance working correctly. I got cinder volume to > work, but once I try to create boot

[Openstack] Ceph bootable cinder volume not working in Havana

2013-12-09 Thread Edward Lezondra
Hello, I'm having trouble getting a bootable cinder volume to work in Havana using Ceph. I managed to get glance working correctly. I got cinder volume to work, but once I try to create a bootable cinder volume from an image it errors out. My cinder.conf reflects exactly what the ceph docs tell one t
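Two settings commonly missed when creating bootable volumes from Glance images on Ceph are the ones that let Cinder clone the RBD image directly; a hedged sketch, per the Ceph/OpenStack integration docs of that era:

    # glance-api.conf
    show_image_direct_url = True

    # cinder.conf
    glance_api_version = 2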

Re: [Openstack] Ceph integration with Grizzly install Guide

2013-10-27 Thread raghavendra.lad
[mailto:bere...@b1-systems.de] Sent: Monday, October 28, 2013 11:25 AM To: openstack@lists.openstack.org Subject: Re: [Openstack] Ceph integration with Grizzly install Guide On 10/28/2013 06:48 AM, raghavendra@accenture.com wrote: > Please help with Install Guide for Ceph or tutorial, links

Re: [Openstack] Ceph integration with Grizzly install Guide

2013-10-27 Thread Christian Berendt
On 10/28/2013 07:24 AM, raghavendra@accenture.com wrote: 1. Do we need Openstack Storage(Grizzly) to be made Client for Ceph install? If you want to use the Rados Block Devices (RBD) as block storage for your instances on Nova you have to use Cinder, the OpenStack Block Storage. 2. Shou

Re: [Openstack] Ceph integration with Grizzly install Guide

2013-10-27 Thread Christian Berendt
On 10/28/2013 06:48 AM, raghavendra@accenture.com wrote: Please help with Install Guide for Ceph or tutorial, links to install with existing Grizzly Openstack. You should have a look at http://ceph.com/docs/next/rbd/rbd-openstack/. Christian. -- Christian Berendt Cloud Computing Solution
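The linked guide essentially boils down to creating the pools and CephX clients the OpenStack services will use; a sketch with illustrative pool names and PG counts:

    ceph osd pool create volumes 128
    ceph osd pool create images 128
    ceph auth get-or-create client.glance mon 'allow r' \
        osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'
    ceph auth get-or-create client.cinder mon 'allow r' \
        osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rx pool=images'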

[Openstack] Ceph integration with Grizzly install Guide

2013-10-27 Thread raghavendra.lad
Hi Team, I am running Grizzly and plan to integrate it with Ceph. Let me know if you have integrated Ceph with OpenStack. Please help with an install guide for Ceph, or a tutorial / links to install it alongside an existing Grizzly OpenStack. Your help would be appreciated. Regards, Raghavendra Lad

[Openstack] [Ceph] volume attach failed

2013-08-05 Thread ibissga .
Hi! I am trying to use openstack with ceph storage, and have trouble with it: when I try to attach a volume to an instance, I get this error in nova-compute.log: NovaException: No Volume Connector found. (Full log lives here: http://paste.openstack.org/show/43169/) My cinder.conf: [DEFAULT] debug=True ver
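One item worth checking whenever RBD volume attach fails on a compute node is the libvirt secret holding the Ceph key; a hedged sketch (secret.xml and the UUID are placeholders, and the UUID must match rbd_secret_uuid in the OpenStack configs):

    virsh secret-define --file secret.xml
    virsh secret-set-value --secret <uuid> \
        --base64 $(ceph auth get-key client.cinder)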