Thanks for the explanations! 
Really helpful.

My questions are added inline.

Thanks.
-chen

-----Original Message-----
From: Ben Swartzlander [mailto:b...@swartzlander.org] 
Sent: Friday, January 09, 2015 6:02 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Manila] Driver modes, share-servers, and clustered 
backends

There has been some confusion on the topic of driver modes and share-servers, 
especially as they relate to storage controllers with multiple physical nodes, 
so I will try to clear up the confusion as much as I can.

Manila has had the concept of "share-servers" since late Icehouse. This feature 
was added to solve 3 problems:
1) Multiple drivers were creating storage VMs / service VMs as a side effect of 
share creation and Manila didn't offer any way to manage or even know about 
these VMs that were created.
2) Drivers needed a way to keep track of (persist) what VMs they had created.

==> So a corresponding relationship does exist between share servers and 
virtual machines.

3) We wanted to standardize across drivers what these VMs looked like to Manila 
so that the scheduler and share-manager could know about them.

==> Q: why do the scheduler and share-manager need to know about them?

It's important to recognize that from Manila's perspective, all a share-server 
is is a container for shares that's tied to a share network and has some 
network allocations. It's also important to know that each share-server can 
have zero, one, or multiple IP addresses and can exist on an arbitrarily large 
number of physical nodes, and the actual form that a share-server takes is 
completely undefined.
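
==> To check my understanding, a share-server record might look roughly like 
the sketch below. The field names here are my assumptions, not Manila's actual 
schema:

from dataclasses import dataclass, field
from typing import List

@dataclass
class NetworkAllocation:
    ip_address: str  # a share-server may have zero, one, or many of these

@dataclass
class ShareServer:
    id: str
    share_network_id: str  # the share network it is tied to
    host: str              # the backend that owns it
    allocations: List[NetworkAllocation] = field(default_factory=list)
    # backend_details is opaque to Manila: it could describe one VM,
    # a whole cluster of controllers, or nothing physical at all
    backend_details: dict = field(default_factory=dict)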

During Juno, drivers that didn't explicitly support the concept of share-servers 
basically got a dummy share server created, which acted as a giant container for 
all the shares that backend created. This worked okay, but it was informal and 
undocumented, and it made some of the things we want to do in Kilo impossible.

==> Q: what things are impossible? The dummy share server solution makes sense 
to me.
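
==> My mental model of that Juno fallback, as a rough sketch (the helper names 
are made up, not the real share-manager code):

# Rough sketch of the Juno-era fallback; helper names are invented.
_dummy_servers = {}  # backend host -> implicit catch-all share server

def ensure_share_server(driver, share_network):
    if driver.handles_share_servers:
        # The driver manages real share servers itself.
        return driver.setup_server(share_network)
    # Otherwise one implicit "dummy" server per backend acts as a
    # giant container for every share that backend creates.
    return _dummy_servers.setdefault(
        driver.host, {'id': 'dummy-%s' % driver.host,
                      'share_network_id': None})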

To solve the above problem I proposed driver modes. Initially I proposed
3 modes:
1) single_svm
2) flat_multi_svm
3) managed_multi_svm

Mode (1) was supposed to correspond to drivers that didn't deal with share 
servers, and modes (2) and (3) were for drivers that did deal with share 
servers, where the difference between those 2 modes came down to networking 
details. We realized that (2) can be implemented as a special case of (3), so 
we collapsed the modes down to 2, and that's what's merged upstream now.

==> "driver that didn't deal with share servers "  
  =>  
https://blueprints.launchpad.net/manila/+spec/single-svm-mode-for-generic-driver
  => This is where I get totally lost.
  => Because for generic driver, it is "not create and delete share servers and 
its related network, but would still use a "share server"(the service VM) ".
  => The share (the cinder volume) need to attach to an instance no matter what 
the driver mode is.
  => I think "use" is some kind of "deal" too.
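
==> For reference, my (possibly wrong) reading of the two merged modes in code; 
the constant and attribute names are assumptions, not the merged code:

SINGLE_SVM = 'single_svm'  # driver does NOT create/destroy share servers;
                           # it uses something provisioned externally
MULTI_SVM = 'multi_svm'    # driver creates/destroys share servers itself

class ShareDriver(object):
    mode = None  # each driver declares exactly one of the two

class GenericShareDriver(ShareDriver):
    mode = MULTI_SVM   # spins up a service VM per share network

class ExternallyProvisionedDriver(ShareDriver):
    mode = SINGLE_SVM  # shares land in storage set up ahead of time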

The specific names we settled on (single_svm and multi_svm) were perhaps poorly 
chosen, because "svm" is not a term we've used officially (unofficially we do 
talk about storage VMs and service VMs, and the svm term captured both concepts 
nicely), and as some have pointed out, even multi and single aren't completely 
accurate terms, because what we mean when we say single_svm is that the driver 
doesn't create/destroy share servers; it uses something created externally.

==> If we use "svm" instead of "share server" in the code, I'm OK with svm. I'd 
like the mode names and the code implementation to be consistent.

So one thing I want everyone to understand is that you can have a "single_svm" 
driver which is implemented by a large cluster of storage controllers, and you 
can have a "multi_svm" driver which is implemented by a single box with some 
form of network and service virtualization. The two concepts are orthogonal.
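
==> A contrived sketch of that orthogonality as I understand it (both drivers 
below are made up):

SINGLE_SVM, MULTI_SVM = 'single_svm', 'multi_svm'

class BigClusterDriver:
    """Fronts a 24-node scale-out NAS cluster provisioned ahead of time.
    It never creates or destroys share servers, so it is single_svm even
    though many physical nodes sit behind it."""
    mode = SINGLE_SVM

class OneBoxVirtualizedDriver:
    """Runs against a single appliance that can carve out virtual storage
    servers (own IPs, own namespaces) on demand, so it is multi_svm
    despite being one box."""
    mode = MULTI_SVM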

The other thing we need to decide (hopefully at our upcoming Jan 15
meeting) is whether to change the mode names and, if so, what to change them to. 
I've created the following etherpad with all of the suggestions I've heard so 
far and my feedback on each:
https://etherpad.openstack.org/p/manila-driver-modes-discussion

-Ben Swartzlander


_______________________________________________
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
