Re: [Openstack] Bringing High Availability to OpenStack Keystone and Glance

2012-04-04 Thread Alex Glikson
We have a prototype of similar capabilities targeting Nova services, using 
ZooKeeper (heartbeat/membership, leader election, etc -- will share with 
the community soon). Would be interesting to compare the two approaches.
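For context, a minimal sketch of what heartbeat/membership and leader 
election look like with ZooKeeper from Python using the kazoo library -- an 
illustration only, since the prototype has not been shared yet; the paths 
and identifiers below are made up:

    # Illustration only: paths, identifiers and the choice of kazoo are
    # assumptions, not the prototype's actual code.
    from kazoo.client import KazooClient

    zk = KazooClient(hosts='127.0.0.1:2181')
    zk.start()

    # Membership/heartbeat: an ephemeral znode disappears automatically
    # once the session of a failed service expires.
    zk.ensure_path('/nova/services')
    zk.create('/nova/services/compute-host1', b'alive', ephemeral=True)

    # Leader election: only one contender at a time runs the elected role.
    def run_as_leader():
        print('this service instance is now the leader')

    election = zk.Election('/nova/election/scheduler',
                           identifier='compute-host1')
    election.run(run_as_leader)  # blocks while leadership is held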

Regards,
Alex




From:   Martin Gerhard Loschwitz 
To: openstack@lists.launchpad.net, 
Date:   03/04/2012 06:23 PM
Subject:[Openstack] Bringing High Availability to OpenStack 
Keystone and Glance
Sent by:openstack-bounces+glikson=il.ibm@lists.launchpad.net



Hi folks,

there have been some discussions lately on the topic of OpenStack and High
Availability. To take this discussion one step further, I started to work
on OCF resource agents for Keystone and Glance. They allow administrators
to integrate keystone, glance-registry and glance-api into the Pacemaker
cluster manager. I tested them a lot and they worked just fine in my setup,
so I thought I'd put them up here for discussion.

I wrote a blog post about this which includes the links to the actual 
files. 
You can find it here:

http://www.hastexo.com/blogs/martin/2012/04/02/bringing-high-availability-openstack-keystone-and-glance


I am keen on hearing your opinion and your feedback on this, so please don't
hesitate to either reply to this email or leave a comment on my blog.

Thanks a lot!

Best regards
Martin

-- 
Martin Gerhard Loschwitz
Principal Consultant
hastexo Professional Services

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp



Re: [Openstack] VM High Availability and Floating IP

2012-07-24 Thread Alex Glikson
We are working on a patch that would enable such capability within 
OpenStack Nova. 
It is a bit more complex than what you suggested. In a nutshell, one would 
need to do the following:
- re-create the VM on the destination host
- re-attach volumes/IPs
- update DB
- make sure that if/when the original host recovers, it does not cause 
corruption of the instance file on shared storage (e.g., by removing the 
corresponding VM from the original host on startup)
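To make the sequence concrete, here is a rough sketch of such a recovery 
flow driven from python-novaclient; recreate_instance_on_host() is a 
hypothetical placeholder for the logic the patch adds, while the volume and 
floating-IP calls are the regular client operations:

    # Rough illustration of the steps above; not the patch itself.
    from novaclient.v1_1 import client

    nova = client.Client('admin', 'secret', 'admin',
                         'http://keystone:5000/v2.0')

    def recreate_instance_on_host(server_id, target_host):
        # hypothetical placeholder for the re-creation logic added by the patch
        raise NotImplementedError

    def recover(server_id, target_host, volumes, floating_ips):
        # 1. re-create the VM on the destination host
        recreate_instance_on_host(server_id, target_host)
        # 2. re-attach volumes and floating IPs
        for volume_id, device in volumes:
            nova.volumes.create_server_volume(server_id, volume_id, device)
        server = nova.servers.get(server_id)
        for ip in floating_ips:
            server.add_floating_ip(ip)
        # 3. the DB update and the fencing of the original host (last step
        #    above) happen inside Nova, not through the public API.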

Regards,
Alex



Alex Glikson
Manager, Cloud Operating System Technologies, IBM Haifa Research Lab
http://w3.haifa.ibm.com/dept/stt/cloud_sys.html | 
https://www.research.ibm.com/haifa/dept/stt/cloud_sys.shtml 
Email: glik...@il.ibm.com | Phone: +972-4-8281085 | Mobile: 
+972-54-647 | Fax: +972-4-8296112




From:   Alessandro Tagliapietra 
To: Jay Pipes , 
Cc: openstack@lists.launchpad.net
Date:   24/07/2012 08:52 PM
Subject:Re: [Openstack] VM High Availability and Floating IP
Sent by:openstack-bounces+glikson=il.ibm@lists.launchpad.net



But I don't see any part (except the future plans) talking about HA at the 
instance level; that seems to be more at the application level

On 24 Jul 2012, at 18:56, Jay Pipes wrote:

> On 07/24/2012 12:52 PM, Alessandro Tagliapietra wrote:
>> Thank you Jay, never read about that.
>> Seems something like scalr/chef? Which handles applications and keeps a 
>> minimum number of VMs running?
> 
> Yeah, kinda.. just one more way of doing things... :)
> -jay


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp



Re: [Openstack] Default reply to behavior for mailing list

2012-07-31 Thread Alex Glikson
> From: "Daniel P. Berrange" 
> On Tue, Jul 31, 2012 at 10:50:02AM -0700, Bhuvaneswaran A wrote:
> > Stefano,
> > 
> > If a subscriber replies to a mailing list message, it's sent to the
> > author only. Each subscriber should use "Reply to All" every time to
> > post a reply to the mailing list.
> > 
> > Can you please configure the mailing list and set the reply-to header to
> > the mailing list address, openstack@lists.launchpad.net? With this setup,
> > if the user clicks "reply" in his email client, the message is sent to
> > the mailing list instead of the author.
> 
> This discussion invariably turns up on most open source mailing lists
> from time to time. People never agree on the "best" setting. Asking
for this reply-to setting to be changed is merely shifting the pain
away from one set of users (which includes you) onto another set of
users (which includes me). There's no net gain here. Just shifting
> the pain.  As such IMHO we should leave it as it is.

Is it possible to configure the behavior per user? I have found several 
threads raising the need for such a feature, but don't know if anyone 
actually bothered to implement it (seems to be rather straightforward).

Regards,
Alex


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] instance evacuation from a failed node (rebuild for HA)

2012-08-10 Thread Alex Glikson
Dear all,

We have submitted a patch https://review.openstack.org/#/c/11086/ to 
address https://blueprints.launchpad.net/nova/+spec/rebuild-for-ha that 
simplifies recovery from a node failure by introducing an API that 
recreates an instance on *another* host (similar to the existing instance 
'rebuild' operation). The exact semantics of this operation varies 
depending on the configuration of the instance and the underlying storage 
topology. For example, if it is a regular 'ephemeral' instance, invoking it 
will respawn the instance from the same image on another node while retaining the same 
identity and configuration (e.g. same ID, flavor, IP, attached volumes, 
etc). For instances running off shared storage (i.e. same instance file 
accessible on the target host), the VM will be re-created and point to the 
same instance file while retaining the identity and configuration. More 
details are available at http://wiki.openstack.org/Evacuate. 

Note that the API must be manually invoked today. 
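For reference, once merged the API can be driven from python-novaclient 
roughly as follows (a sketch -- the exact argument names should be checked 
against the final patch):

    # Sketch only: assumes the 'evacuate' call from the patch above is
    # available in python-novaclient; verify the merged signature before use.
    from novaclient.v1_1 import client

    nova = client.Client('admin', 'secret', 'admin',
                         'http://keystone:5000/v2.0')

    # Re-create the instance on another host. With shared storage the existing
    # instance file is reused; otherwise the instance is respawned from its
    # image, retaining its ID, flavor, IP and attached volumes.
    nova.servers.evacuate('INSTANCE_UUID',
                          host='target-compute-node',
                          on_shared_storage=True)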

In addition, this patch modifies nova-compute such that on startup (e.g., 
after it failed and recovered) it verifies with the DB that it is still 
the owner of an instance before starting the VM.

Would be great to hear whether people think that such a capability is 
important to push into Folsom, despite the short runway till F3. Any other 
thoughts/recommendations regarding such a capability would also be highly 
appreciated.

Thanks,
Alex

========
Alex Glikson
Manager, Cloud Operating System Technologies, IBM Haifa Research Lab
http://w3.haifa.ibm.com/dept/stt/cloud_sys.html | 
https://www.research.ibm.com/haifa/dept/stt/cloud_sys.shtml 
Email: glik...@il.ibm.com | Phone: +972-4-8281085 | Mobile: 
+972-54-647 | Fax: +972-4-8296112
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] instance evacuation from a failed node (rebuild for HA)

2012-08-11 Thread Alex Glikson
> From: Ryan Lane 
> > We have submitted a patch https://review.openstack.org/#/c/11086/ to 
address
> > https://blueprints.launchpad.net/nova/+spec/rebuild-for-ha that 
simplifies
> > recovery from a node failure by introducing an API that recreates an
> > instance on *another* host (similar to the existing instance 'rebuild'
> > operation). 
[...]
> If shared storage is available, the only thing that likely needs to 
> happen is for the instance's host to be updated in the database, and
> a reboot issued for the instance. That would keep everything identical,
> and would likely be much faster.

That's pretty much what we do in 'manager' -- but what needs to happen in 
'driver' is to (re)create the domain in libvirt on the destination host, 
re-attach volumes, floating IPs, etc. Essentially, everything 'spawn' is 
doing today, just without creating the new instance file. Of course, we 
don't re-provision the instance from image in this case.
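In libvirt terms, the 'driver' part essentially re-defines and starts the 
domain from its existing XML on the destination host; a simplified sketch 
with libvirt-python (the path is illustrative, and the real code also 
re-attaches volumes and floating IPs first):

    # Simplified sketch: re-create the domain on the destination host from the
    # existing definition, without touching the instance file on shared storage.
    import libvirt

    with open('/var/lib/nova/instances/instance-00000001/libvirt.xml') as f:
        domain_xml = f.read()

    conn = libvirt.open('qemu:///system')
    dom = conn.defineXML(domain_xml)  # register the domain definition
    dom.create()                      # boot it, reusing the existing disk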

> - Ryan

Regards,
Alex
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Discussion about where to put database for bare-metal provisioning (review 10726)

2012-08-15 Thread Alex Glikson
Can you elaborate on the purpose of this database?
If we compare it to KVM support, the 'primary' location of the VMs' metadata 
is the libvirt internal store (outside of Nova), and it is then cached in the 
Nova DB, for Nova's purposes.
A similar approach might make sense for bare-metal machines too -- keep the 
'primary' metadata store outside of Nova, and a cache in the Nova DB.

Regards,
Alex




From:   David Kang 
To: OpenStack Development Mailing List 
, "openstack@lists.launchpad.net 
(openstack@lists.launchpad.net)" , 
Date:   15/08/2012 06:32 PM
Subject:[Openstack] Discussion about where to put database for 
bare-metal provisioning (review 10726)
Sent by:openstack-bounces+glikson=il.ibm@lists.launchpad.net





 Hi,

 This is a call for discussion about code review 10726.
https://review.openstack.org/#/c/10726/
Mark asked why we implemented a separate database for bare-metal 
provisioning.
Here we describe our thoughts. 
We are open to discussion and to the changes that the community 
recommends.
Please give us your thoughts.

 NTT Docomo and USC/ISI have developed bare-metal provisioning.
We created a separate database to describe bare-metal nodes, which currently 
consists of 5 tables.
Our initial implementation assumes the database is not part of the nova 
database.
In addition to the reasons described in the comments of the code review,
here is another reason we decided on a separate database for bare-metal 
provisioning.

The bare-metal database is mainly used by the bare-metal nova-compute.
Since the bare-metal nova-compute manages multiple bare-metal machines, 
it needs to keep and update the information about those machines.
If the bare-metal database is in the main nova DB, remote access to the 
nova DB by the bare-metal nova-compute is inevitable.
Vish once told us that shared DB access from nova-compute is not 
desirable.

It is possible to make the scheduler do the job of the bare-metal 
nova-compute.
However, it would require big changes in how the scheduler and a 
nova-compute
communicate. For example, currently the scheduler casts an instance to a
nova-compute. But for a bare-metal node, the scheduler would have to cast an 
instance 
to a bare-metal machine through the bare-metal nova-compute.
The bare-metal nova-compute has to boot the machine, transfer the kernel, 
filesystem, etc.
So, the bare-metal nova-compute needs to know the ID of the bare-metal node 
and other information 
for booting (PXE IP address, ...) and more.
That information would have to be sent to the bare-metal nova-compute by the 
scheduler.

If frequent access to the bare-metal tables in the nova DB from the 
bare-metal nova-compute
is OK, we are OK with putting the bare-metal tables into the nova DB.

 Please let us know your opinions.

 Thanks,
 David, Mikyung @ USC/ISI

--
Dr. Dong-In "David" Kang
Computer Scientist
USC/ISI


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp



[Openstack] reservation of CPU cores?

2012-09-03 Thread Alex Glikson
Dear all,

I wonder whether there is an easy way to configure how many cores 
nova-compute is supposed to use (or alternatively refrain from using) for 
provisioning of instances on a certain host, out of the total number of 
cores on that host? E.g., if I know that there are other things running on 
the host, and I want to dedicate some of the CPU capacity to them (such as 
OpenStack services themselves). I am thinking of KVM right now, but the 
general approach is probably platform-independent. Would appreciate your 
insights.

Thanks,
Alex

P.S. A trivial solution is to dedicate entire host(s) to those other 
"things", and assume that nova-compute is running on hosts used *only* to 
run VMs (and that nova-compute itself does not need many CPU cycles), but 
I am looking for a more fine-grained approach.
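For illustration, one fine-grained direction could be a custom scheduler 
filter that keeps a configurable number of cores off-limits; a rough sketch 
(the filter and its reserved-core accounting are hypothetical, not existing 
Nova behaviour):

    # Hypothetical filter: refuse hosts where placing the instance would eat
    # into a reserved set of cores. The constant and accounting are assumptions.
    from nova.scheduler import filters

    RESERVED_CORES_PER_HOST = 2  # e.g. keep 2 cores for OpenStack services

    class ReservedCoresFilter(filters.BaseHostFilter):
        def host_passes(self, host_state, filter_properties):
            instance_type = filter_properties.get('instance_type', {})
            requested = instance_type.get('vcpus', 1)
            usable = host_state.vcpus_total - RESERVED_CORES_PER_HOST
            return host_state.vcpus_used + requested <= usable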

====
Alex Glikson
Manager, Cloud Operating System Technologies, IBM Haifa Research Lab
http://w3.haifa.ibm.com/dept/stt/cloud_sys.html | 
https://www.research.ibm.com/haifa/dept/stt/cloud_sys.shtml 
Email: glik...@il.ibm.com | Phone: +972-4-8281085 | Mobile: 
+972-54-647 | Fax: +972-4-8296112
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Compute Node Down!

2012-09-23 Thread Alex Glikson
As stated below, the following patch addresses VM recovery in this and a 
few other scenarios: https://review.openstack.org/#/c/11086/
Also, there is another patch which can be used to simplify failure 
detection: https://review.openstack.org/#/c/10903/
Unfortunately, neither of the two made it into Folsom; they are planned to 
be merged once a stream targeting Grizzly is available.
Glad to hear that there is a relatively simple workaround meanwhile. 
Would be good to have a broader discussion on this at the summit.

Regards,
Alex






From:   Wolfgang Hennerbichler 
To: Tom Fifield , 
Cc: "openstack@lists.launchpad.net" 
Date:   20/09/2012 09:19 AM
Subject:Re: [Openstack] Compute Node Down!
Sent by:openstack-bounces+glikson=il.ibm@lists.launchpad.net



Thanks, that's what I ended up doing (by intuition rather than knowledge) 
yesterday. I didn't know about nova rescue either. 
I think there is big, big room for improvement here. In the best case this 
should be discovered automatically and the switchover should be done 
without human intervention. 

Wolfgang 

-- 
Sent from my mobile device

On 20.09.2012, at 07:26, "Tom Fifield"  wrote:

> On 20/09/12 13:50, Vishvananda Ishaya wrote:
>>
>> On Wed, Sep 19, 2012 at 4:03 AM, Wolfgang Hennerbichler wrote:
>> 
>>Hello Folks,
>> 
>>Although it seems a pretty straightforward scenario I have a hard
>>time finding documentation on this.
>>One of my compute nodes broke down. All the instances are on shared
>>storage, so no troubles here, but I don't know how to tell openstack
>>that the VM should be deployed on another compute node. I tried
>>fiddling around in the mysql-db with no success.
>>Any help is really appreciated.
>> 
>>Wolfgang
> 
> 
> 
> 
> == Dead compute host ==
> Working with the host information
> 
> i-15b9 at3-ui02 running nectarkey (376, np-rcc54) 0 m1.xxlarge 
2012-06-19T00:48:11.000Z 115.146.93.60
> 
> 
> # review the status of the host using the nova database, some of the 
important information is highlighted below.
> 
> SELECT * FROM instances WHERE id = CONV('15b9', 16, 10) \G;
> *** 1. row ***
>  created_at: 2012-06-19 00:48:11
>  updated_at: 2012-07-03 00:35:11
>  deleted_at: NULL
> ...
>  id: 5561
> ...
> power_state: 5
>vm_state: shutoff
> ...
>hostname: at3-ui02
>host: np-rcc54
> ...
>uuid: 3f57699a-e773-4650-a443-b4b37eed5a06
> ...
>  task_state: NULL
> ...
> 
> 
> Update the vm's compute host.
> 
> UPDATE instances SET host = 'np-rcc46' WHERE uuid = 
'3f57699a-e773-4650-a443-b4b37eed5a06';
> 
> 
> Update the libvirt xml
> 
> * change the DHCPSERVER value to the host ip address.
> * possibly the VNC IP if it isn't already 0.0.0.0
> 
> Dump a copy of a nwfilter to use as a template for creating the missing 
nwfilter.
> 
> virsh nwfilter-list
> virsh nwfilter-dumpxml nova-instance-instance-.
> 
> 
> Example of the template file
> 
> 
>  d5f6f610-d0b8-4407-ae00-5dabef80677a
>  
> 
> 
> 
> The filter name value is available from the instances.xml file 
().
> *Note the filter name must be exact!
> Generate a new uuid and replace it at the uuid value.
> 
> Update filter to match id from instance xml
> 
> virsh nwfilter-define /tmp/filter.xml
> virsh define libvirt.xml
> virsh list --all
> 
> 
> Kill all dnsmasq and restart nova services.
> 
> killall dnsmasq; service nova-network restart; service nova-compute 
restart
> 
> 
> Start the vm
> 
> virsh start instance-0
> 
> 
> On the nova DB
> 
> UPDATE instances SET vm_state = 'active', power_state = 1 WHERE uuid = 
'3f57699a-e773-4650-a443-b4b37eed5a06';
> 
> 
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp



[Openstack] workflows in OpenStack?

2013-01-25 Thread Alex Glikson
Are there any additional details regarding the usage of workflows in 
Horizon? http://docs.openstack.org/developer/horizon/ref/workflows.html
Is this something that can also be reused outside of Horizon, in the 
broader context of workflows for OpenStack?
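For reference, the documented structure boils down to an Action/Step/Workflow 
triple along these lines -- a minimal sketch with made-up names; since it 
builds on Horizon's Django machinery, it is not trivially reusable outside 
Horizon as-is:

    # Minimal illustrative Horizon workflow; all names are made up.
    from django import forms
    from horizon import workflows

    class SetNameAction(workflows.Action):
        name = forms.CharField(label="Name", max_length=255)

        class Meta:
            name = "Details"

    class SetNameStep(workflows.Step):
        action_class = SetNameAction
        contributes = ("name",)

    class CreateThing(workflows.Workflow):
        slug = "create_thing"
        name = "Create Thing"
        success_url = "horizon:project:instances:index"
        default_steps = (SetNameStep,)

        def handle(self, request, context):
            # call the relevant API with context['name'] here
            return True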

Thanks,
Alex
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] usability of putting host in maintenance mode in Nova

2013-04-20 Thread Alex Glikson

Was great to see many stackers at the summit!

Following-up on one of the design discussions, the consensus seemed to be
that the operation which puts a host in maintenance mode should probably
only disable further provisioning requests to that host, while the
orchestration of migrating the individual instances from that host should
be done separately, potentially even outside of Nova. One possible approach
could be to use the admin API to list all the instances on a particular
host, and then to migrate each of them (potentially using the option not to
specify the target host, in which case the new placement will be selected
by the scheduler). A similar approach may apply to recovery of instances from
a failed host, using the 'evacuate' API.
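As a strawman for the 'script' option mentioned below, the flow could look 
roughly like this with python-novaclient (a sketch; the migration flags to 
use depend on the storage setup):

    # Rough sketch of draining a host: disable provisioning, then live-migrate
    # each instance, letting the scheduler pick the target host.
    from novaclient.v1_1 import client

    nova = client.Client('admin', 'secret', 'admin',
                         'http://keystone:5000/v2.0')
    host = 'compute-node-to-drain'

    # 1. stop further provisioning requests to the host
    nova.services.disable(host, 'nova-compute')

    # 2. migrate every instance currently on the host
    for server in nova.servers.list(search_opts={'host': host,
                                                 'all_tenants': 1}):
        server.live_migrate(host=None, block_migration=False,
                            disk_over_commit=False)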

It would be great to understand how people might want to have this
capability surfaced and consumed by the administrator. A script (somehow
shared with the community)? A REST API (provided by which component)? Part
of nova-manage? Something else?

Would appreciate your feedback (especially if you manage OpenStack clouds
and are interested in this capability).

Thanks,
Alex

P.S. One additional item on our to-do list is to enable the use of scheduler
hints when applying the individual migration operations.
P.P.S. Similar scenarios might also be implemented for Cinder storage
nodes, once the new volume migration feature gets implemented.


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] VM disk affinity during live migration

2013-05-31 Thread Alex Glikson
There is ongoing work to refactor the live migration code, including use of 
the scheduler to find/validate placement. At the moment the admin needs 
to make sure he/she is doing the right thing.

Regards,
Alex



Alex Glikson
Manager, Cloud Operating System Technologies, IBM Haifa Research Lab
http://w3.haifa.ibm.com/dept/stt/cloud_sys.html | 
http://www.research.ibm.com/haifa/dept/stt/cloud_sys.shtml 
Email: glik...@il.ibm.com | Phone: +972-4-8281085 | Mobile: 
+972-54-647 | Fax: +972-4-8296112




From:   "Chris Bartels" 
To: , 
Date:   31/05/2013 02:12 PM
Subject:[Openstack] VM disk affinity during live migration
Sent by:"Openstack" 




Hi,
 
Please forgive me if I've asked already here on the list - I didn't get a 
reply & I really need an answer, so I'm asking again in simpler terms this 
time.
 
If I have a cluster of servers, each with spindle drives & SSDs, how can I 
be sure VM disks which reside on spindle drives migrate to spindle drives 
& those which reside on SSDs stay on SSDs as they migrate between servers?
 
Thanks,
Chris
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp



Re: [Openstack] VM disk affinity during live migration

2013-05-31 Thread Alex Glikson
Right. A slightly different approach (requiring admin effort) would be to 
define two host aggregates -- one reporting SSD as one of the capabilities 
of its hosts, and another one reporting SAS. Then the admin can attach the 
corresponding capability as an extra spec of an instance flavor, and use the 
Filter Scheduler with AggregateInstanceExtraSpecsFilter to make sure 
instances would not be placed on hosts which belong to the wrong 
aggregate. All this can be done already (see 
http://docs.openstack.org/trunk/openstack-compute/admin/content/host-aggregates.html
). The missing piece (which is, I believe, going to be resolved in Havana) 
would be to prevent the admin from live-migrating an instance to a wrong 
location manually (but this wouldn't be an issue if the admin 
live-migrates without explicitly specifying a destination, as Jay pointed 
out).
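To make it concrete, the aggregate/flavor wiring can be set up along these 
lines with python-novaclient (a sketch; the names and the 'ssd' key are 
illustrative, and the scheduler still has to be configured with 
AggregateInstanceExtraSpecsFilter as described in the link above):

    # Illustrative setup of an SSD aggregate and a matching flavor extra spec;
    # aggregate, host and flavor names plus the 'ssd' key are made up.
    from novaclient.v1_1 import client

    nova = client.Client('admin', 'secret', 'admin',
                         'http://keystone:5000/v2.0')

    # aggregate whose hosts expose the 'ssd' capability
    agg = nova.aggregates.create('ssd-hosts', 'nova')
    nova.aggregates.set_metadata(agg.id, {'ssd': 'true'})
    nova.aggregates.add_host(agg.id, 'compute-ssd-01')

    # flavor whose extra spec must match the aggregate metadata
    flavor = nova.flavors.create('m1.ssd', ram=2048, vcpus=1, disk=20,
                                 flavorid='auto')
    flavor.set_keys({'ssd': 'true'})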

Regards,
Alex




From:   Lau Jay 
To: ch...@christopherbartels.com, 
Cc: Alex Glikson/Haifa/IBM@IBMIL, openstack@lists.launchpad.net
Date:   01/06/2013 07:39 AM
Subject:Re: [Openstack] VM disk affinity during live migration



Hi Chris,

I think that you are using live migration without specifying a target host, 
right? OpenStack cannot handle your case for now, but it has a very flexible 
framework that lets you DIY your migration logic.

1) Make sure SSD or SAS can be reported by nova-compute; you might need to 
update the nova-compute driver to report those metrics.
2) Add a new scheduler filter implementing your logic, checking for SSD and 
SAS.

Thanks,

Jay



2013/6/1 Chris Bartels 
Thanks for your reply.
 
Your reply implies that it's possible to ensure that the disks stay on the 
right target manually. What would you have to do to make sure this 
happened?
 
The SAS space is 228GB & the SSD space is only 64GB.
 
So the SAS disk image wouldn't fit on the SSD, but the SSD image would fit 
on the SAS, so the migration system I imagine wouldn't be able to screw it 
up since it would have to keep the large SAS image on the SAS target, and 
would then only be able to place the smaller SSD image on the SSD.
 
But you say it's a work in progress so that could mean anything could 
happen.
 
What does the actual process look like when I would migrate a VM from one 
server to another? What exactly would I have to do to make sure it went 
right?
 
Thanks.
 
From: Alex Glikson [mailto:glik...@il.ibm.com] 
Sent: Friday, May 31, 2013 7:34 AM
To: ch...@christopherbartels.com
Cc: openstack@lists.launchpad.net
Subject: Re: [Openstack] VM disk affinity during live migration
 
There is ongoing work to refactor the live migration code, including use of 
the scheduler to find/validate placement. At the moment the admin needs 
to make sure he/she is doing the right thing. 

Regards, 
Alex 



From:"Chris Bartels"  
To:, 
Date:31/05/2013 02:12 PM 
Subject:[Openstack] VM disk affinity during live migration 
Sent by:"Openstack" <
openstack-bounces+glikson=il.ibm@lists.launchpad.net> 




Hi, 
  
Please forgive me if I've asked already here on the list - I didn't get a 
reply & I really need an answer, so I'm asking again in simpler terms this 
time. 
  
If I have a cluster of servers, each with spindle drives & SSDs, how can I 
be sure VM disks which reside on spindle drives migrate to spindle drives 
& those which reside on SSDs stay on SSDs as they migrate between servers? 

  
Thanks, 
Chris
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp



Re: [Openstack] Benefits for moving live migration/resize/cold migration/provision to conductor

2013-06-01 Thread Alex Glikson
One of the goals was to separate the instance placement calculation 
logic from the orchestration logic, having each in a separate runtime (see 
https://blueprints.launchpad.net/nova/+spec/query-scheduler). The scheduler 
and the conductor (respectively) seemed like a reasonable choice.

Regards,
Alex




From:   Lau Jay 
To: Michael Still , 
Cc: OpenStack general mailing list 
Date:   01/06/2013 06:19 PM
Subject:Re: [Openstack] Benefits for moving live 
migration/resize/cold migration/provision to conductor
Sent by:"Openstack" 




Hi Michael and other Stackers,

Sorry, one more question: for provisioning a VM instance, there is no 
interaction between compute nodes, so why also move the provisioning logic 
to the conductor?

Thanks,
Jay


2013/6/1 Lau Jay 
Thanks Michael for the answer, just want to dig more.

From your answer, it seems that we do not want libvirt on one node to open 
up a connection to the other, but from the Gerrit code diff, I did not 
notice any change in nova-compute -- only the move of the live 
migration/resize/cold migration logic from the scheduler to the conductor. 
The conductor still calls nova-compute directly, and once the request is 
cast to nova-compute, libvirt on one node still opens up a connection to 
the other, so what is the difference?

Thanks,
Jay



2013/6/1 Michael Still 
IIRC the discussion from the summit, there was concern about compute
nodes talking directly to each other. The way live migration works in
libvirt is that the libvirt on one node opens up a connection to the
other and then streams the instance across. If this is bounced off a
conductor, then it makes firewall rules much easier to construct.

Cheers,
Michael

On Sat, Jun 1, 2013 at 2:53 PM, Lau Jay  wrote:
> Hi Stackers,
>
> I noticed that there are some blueprints trying to move the logic of live
> migration/resize/cold migration/provision from nova scheduler to nova
> conductor, but the blueprint did not describe clearly the benefits of doing
> so, can some experts give some explanation on this?
>
> I know the original design for nova conductor is for a non-db nova compute,
> but what's the reason for moving scheduling logic to nova conductor?
>
> Thanks,
>
> Jay
>
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
>

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp



Re: [Openstack] How can I recover my VM on a broken hardware?

2013-06-23 Thread Alex Glikson
Try "nova evacuate VM1 compute-node2 --on-shared-storage". 

Regards,
Alex




From:   Ray Sun 
To: OpenStack Maillist , 
Date:   24/06/2013 07:04 AM
Subject:[Openstack] How can I recover my VM on a broken hardware?
Sent by:"Openstack" 




I have three nodes; two of them are compute nodes, compute-node1 and 
compute-node2, and the other one is an NFS server. compute-node1 and 
compute-node2 can access the NFS server directory, and the VMs running on 
compute-node1 can live-migrate to compute-node2.

My question is: I have a VM1 running on compute-node1, and 
compute-node1 is down due to a power problem. How can I recover VM1 on 
compute-node2 so that it can still be managed by OpenStack?

My network is Quantum.

Thanks.

Best Regards
-- Ray
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp
