Thank you for the information :-)

I have never used Ceph RBD before, but we will have a look into it to see if 
it is an option for us.

Regards,
Michael

-----Original Message-----
From: Jonathan D. Proulx [mailto:j...@csail.mit.edu] 
Sent: Tuesday, 21 June 2016 16:51
To: Michael Stang
Cc: Matt Jarvis; OpenStack Operators
Subject: Re: [Openstack-operators] Shared Storage for compute nodes

On Tue, Jun 21, 2016 at 11:42:45AM +0200, Michael Stang wrote:
:I think I have asked my question not correctly, it is not for the cinder 
:backend, I meant the shared storage for the instances which is shared by the 
:compute nodes. Or can cinder also be used for this? Sorry if I ask stupid 
:questions, OpenStack is still new for me ;-)


We use Ceph RBD for:

Nova ephemeral storage
Cinder Volume storage
Glance Image storage

(and ceph for object storage too)
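In case it helps, RBD-backed ephemeral storage is enabled in nova.conf on each
compute node; a minimal sketch, where the pool name, rbd user, and secret UUID
are assumptions you'd adjust for your own cluster:

```ini
# /etc/nova/nova.conf on the compute node -- pool/user names are hypothetical
[libvirt]
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = <libvirt-secret-uuid>
```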

/var/lib/nova, which holds the libvirt XML files that actually define instances, 
lives on local node storage.

This is sufficient for us to do live migration.  However, as of Kilo at least, 
'evacuating' a failed node doesn't work: it assumes /var/lib/nova is on shared 
storage whenever the ephemeral storage is shared, even though the XML could be 
recreated from the database.  I don't know whether Liberty or Mitaka still has 
this issue or not.

If I were trying to solve that I'd probably go with NFS for /var/lib/nova: it's 
easy, the storage footprint is small (just text files), and the load is light.  
But we've been very happy with Ceph RBD for ephemeral storage.
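If you went that route, the mount itself is trivial; a sketch of the fstab
entry, where the server name and export path are hypothetical:

```ini
# /etc/fstab on each compute node -- server and export path are assumptions
nfs-server:/export/nova  /var/lib/nova  nfs  rw,hard  0  0
```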

Our use case is a private cloud with 80 hypervisors and about 1k running VMs, 
supported by a team of two (each of whom has other responsibilities as well).  
Ceph is 3 monitors and 9 storage nodes with 370T raw storage (with triple 
replication, net storage is 1/3 of that).
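For anyone doing similar capacity planning, usable space under replication is
just raw capacity divided by the replica count; a quick sketch using the
figures above:

```python
# Usable capacity under Ceph replication: raw size divided by replica count.
raw_tb = 370      # raw storage across the 9 storage nodes
replicas = 3      # triple replication, as described above
usable_tb = raw_tb / replicas
print(round(usable_tb, 1))  # ~123.3 TB net
```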

-Jon

: 
:Regards,
:Michael
: 
:
:> Matt Jarvis <matt.jar...@datacentred.co.uk> wrote on 21 June 2016 at 10:21:
:>
:>  If you look at the user survey (
:> https://www.openstack.org/user-survey/survey-2016-q1/landing ) you can see
:> what the current landscape looks like in terms of deployments. Ceph is by
:> far the most commonly used storage backend for Cinder.
:>
:>  On 21 June 2016 at 08:27, Michael Stang <michael.st...@dhbw-mannheim.de> 
wrote:
:> >    Hi,
:> >     
:> >    I wonder what is the recommendation for a shared storage for the compute
:> > nodes? At the moment we are using an iSCSI device which is served to all
:> > compute nodes with multipath, the filesystem is OCFS2. But this makes it a
:> > little inflexible in my opinion, because you have to decide how many
:> > compute nodes you will have in the future.
:> >     
:> >    So is there any suggestion which kind of shared storage to use for the
:> > compute nodes and what filesystem?
:> >     
:> >    Thanks,
:> >    Michael
:> >     
:> > 
:> >    _______________________________________________
:> >    OpenStack-operators mailing list
:> >    OpenStack-operators@lists.openstack.org
:> >    http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
:>  DataCentred Limited registered in England and Wales no. 05611763
:
: 
:Best regards,
:
:Michael Stang
:Laboringenieur, Dipl. Inf. (FH)
:
:Duale Hochschule Baden-Württemberg Mannheim
:Baden-Wuerttemberg Cooperative State University Mannheim
:ZeMath Zentrum für mathematisch-naturwissenschaftliches Basiswissen
:Fachbereich Informatik, Fakultät Technik
:Coblitzallee 1-9
:68163 Mannheim
:
:Tel.: +49 (0)621 4105 - 1367
:michael.st...@dhbw-mannheim.de
:http://www.dhbw-mannheim.de



