Besides eliminating race conditions, we use host_subnet_size in special
cases where we have hardware of different capacity in a deployment.
Imagine a simple case: two compute hosts (48G vs 16G free RAM), with only
the RAM weigher enabled for nova-scheduler. If we launch
10 instances (RAM 1G flavor) one by one, a
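To make the scenario concrete, here is a toy simulation (plain Python, not nova code) of a RAM-weigher-only scheduler, under the simplifying assumption that the host with the most free RAM always wins: all 10 one-by-one launches land on the 48G host.

```python
# Toy simulation (not nova code) of a scheduler with only the RAM weigher:
# each 1G instance is placed on whichever host has the most free RAM.
def schedule(hosts, num_instances, ram_per_instance=1):
    placements = []
    for _ in range(num_instances):
        # RAM weigher: the host with the largest free RAM wins.
        winner = max(hosts, key=lambda h: hosts[h])
        hosts[winner] -= ram_per_instance
        placements.append(winner)
    return placements

hosts = {"host-48g": 48, "host-16g": 16}
print(schedule(hosts, 10))  # all 10 land on host-48g
```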
It's nice for us to understand how the API works and when we should use it;
we can update the api-ref and add the exact usage to
avoid users' confusion about it. Feel free to reply, thank you.
2017-03-27 23:36 GMT+08:00 Matt Riedemann :
> On 3/27/2017 7:23 AM, Rui Chen wrote:
>
Hi:
A question about the nova AddFixedIp API: the nova api-ref [1] describes the API
as "Adds a fixed IP address to a server instance, which associates that
address with the server." The argument of the API is a network id, so if there
are two or more subnets in a network, which one is lucky to associate ip
+1
Liusheng is a responsible reviewer and keeps good reviewing quality in Mogan.
Thank you for working hard on Mogan, Liusheng.
2017-03-20 16:19 GMT+08:00 Zhenguo Niu :
> Hi team,
>
> I would like to nominate liusheng to Mogan core. Liusheng has been a
> significant code contributor since the proje
I have one question for users:
- Have you needed to add customized features on top of upstream Nova code to
meet your special needs, or is Nova an out-of-the-box project?
Thanks.
2016-12-27 7:18 GMT+08:00 Jay Pipes :
> On 12/26/2016 06:08 PM, Matt Riedemann wrote:
>
>> We have the opportunity to again [1] ask a
+1
Congratulations!
2016-02-18 2:14 GMT+08:00 Masahito MUROI :
> Thank you folks. I'm glad to be a part of this team and community, and
> appreciate all supports from you.
>
> On 2016/02/17 12:10, Anusha Ramineni wrote:
>
>> +1
>>
>> Best Regards,
>> Anusha
>>
>> On 17 February 2016 at 00:59, Pe
Looks like we can use user_data and cloud-init to do this.
Add the following content to user_data.txt and launch the instance like
this: nova boot --user-data user_data.txt ...;
the instance will shut down after boot is finished.
power_state:
mode: poweroff
message: Bye Bye
You can find
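For reference, a minimal sketch that writes the cloud-config above to user_data.txt (the `#cloud-config` header line is my addition; cloud-init expects it to recognize the cloud-config format):

```python
# Write the power_state cloud-config shown above to user_data.txt;
# cloud-init's power_state module then powers the instance off after boot.
USER_DATA = """\
#cloud-config
power_state:
  mode: poweroff
  message: Bye Bye
"""

with open("user_data.txt", "w") as f:
    f.write(USER_DATA)

# Then boot with it, e.g.:
#   nova boot --user-data user_data.txt --image <image> --flavor <flavor> <name>
```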
It's a very good example of how to draft a customized cloud policy in an
OpenStack deployment, thank you Su :-)
I have added some comments to the Google doc.
2015-10-09 4:23 GMT+08:00 Su Zhang :
> Hello,
>
> I've implemented a set of security group management policies and already
> pu
As I remember, there were 4 topics: OPNFV, Congress gating, distributed
arch, and Monasca.
Some details are in the IRC meeting log:
http://eavesdrop.openstack.org/meetings/congressteammeeting/2015/congressteammeeting.2015-10-01-00.01.log.html
2015-10-08 9:48 GMT+08:00 zhangyali (D) :
> Hi Tim,
>
>
>
> Tha
+1
Tim is an excellent and passionate leader. Go ahead, Congress :-)
2015-09-17 4:09 GMT+08:00 :
> +1 and looking forward to see you in Tokyo.
>
>
>
> Thanks,
>
> Ramki
>
>
>
> *From:* Tim Hinrichs [mailto:t...@styra.com]
> *Sent:* Tuesday, September 15, 2015 1:23 PM
> *To:* OpenStack Developme
I have started to fix https://bugs.launchpad.net/congress/+bug/1492329
If I have enough time, I can take another one or two bugs.
2015-09-06 8:13 GMT+08:00 Zhou, Zhenzan :
> I have taken two, thanks.
>
> https://bugs.launchpad.net/congress/+bug/1492308
>
> https://bugs.launchpad.net/congress/+bug/1492
Hi folks:
When we use paginated queries to retrieve instances, we can't get the
total count of instances from the current list-servers API.
The total count of the query result is important for operators. Think about a
case: the operators want to know how many 'error' instances
are in the current deployment in
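A sketch of what a count-aware listing could look like (a hypothetical response shape, not the actual nova API): the response carries a total_count alongside the page, so operators can see how many 'error' instances exist without fetching every page.

```python
# Hypothetical sketch: a paginated list-servers call whose response also
# carries the total count of matching instances, not just the current page.
def list_servers(servers, status=None, limit=5, marker=0):
    matched = [s for s in servers if status is None or s["status"] == status]
    page = matched[marker:marker + limit]
    return {"servers": page, "total_count": len(matched)}

servers = [{"id": i, "status": "error" if i % 3 == 0 else "active"}
           for i in range(10)]
resp = list_servers(servers, status="error", limit=2)
print(resp["total_count"], len(resp["servers"]))  # total 4 errors, page of 2
```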
We have reviewed each other's patches and fixed some minor issues by
following the suggestions.
Please feel free to add your comments on these patches, welcome~~
Best Regards.
2015-08-21 18:34 GMT+08:00 Qiao, Liyong :
> Hi folks
>
>
>
> We just finished 2nd prc hackathon this Friday.
>
>
cinder:volumes(id, _x_1_1, _x_1_2, "available", _x_1_4, _x_1_5, _x_1_6,
> _x_1_7, _x_1_8)
>
> Probably the solution you want is to write 2 rules:
>
> error(id) :- cinder:volumes(id=id), not avail_cinder_vol(id)
> avail_cinder_vol(id) :- cinder:volumes(id=id, status
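A small Python simulation (not Congress itself) of why the two-rule form quoted above works: the helper table avail_cinder_vol is fully derived first, and the negation is then applied against it rather than against a partially bound literal.

```python
# Simulate the two Datalog rules over a tiny cinder:volumes table:
#   avail_cinder_vol(id) :- cinder:volumes(id=id, status="available")
#   error(id)            :- cinder:volumes(id=id), not avail_cinder_vol(id)
volumes = [("v1", "available"), ("v2", "error"), ("v3", "in-use")]

# First derive the helper table of available volumes...
avail_cinder_vol = {vid for vid, status in volumes if status == "available"}
# ...then negate against it to find volumes not in 'available' status.
error = {vid for vid, _ in volumes if vid not in avail_cinder_vol}
print(sorted(error))  # → ['v2', 'v3']
```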
I use *screen* in devstack; Ctrl+C kills a service, then I restart it in the
console.
Please try the following command in your devstack environment, and read some
docs:
*screen -r stack*
http://www.ibm.com/developerworks/cn/linux/l-cn-screen/
2015-08-14 11:20 GMT+08:00 Guo, Ruijing :
> It is very useful
Sorry for sending the same mail again; please comment here, the other mail
lacks a title.
2015-08-14 11:03 GMT+08:00 Rui Chen :
> Hi folks:
>
> I face a problem when I insert a rule into Congress. I want to find
> out all of the volumes that are not in available status, so I drafted a rule
Hi folks:
I face a problem when I insert a rule into Congress. I want to find out
all of the volumes that are not in available status, so I drafted a rule like
this:
error(id) :- cinder:volumes(id=id), not cinder:volumes(id=id,
status="available")
But when I create the rule, an error is ra
Converted to Asian timezones, the new time is easy for us to remember :)
For CST (UTC+8:00):
Thursday 08:00 AM
For JST (UTC+9:00):
Thursday 09:00 AM
http://www.timeanddate.com/worldclock/fixedtime.html?iso=20150806T00&p1=1440
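The conversion above can be double-checked with a few lines of Python (2015-08-06 is the Thursday from the timeanddate link):

```python
from datetime import datetime, timedelta, timezone

# Thursday 00:00 UTC corresponds to 08:00 CST (UTC+8) and 09:00 JST (UTC+9).
utc_meeting = datetime(2015, 8, 6, 0, 0, tzinfo=timezone.utc)
cst = utc_meeting.astimezone(timezone(timedelta(hours=8)))  # China Standard Time
jst = utc_meeting.astimezone(timezone(timedelta(hours=9)))  # Japan Standard Time
print(cst.strftime("%A %H:%M"), "/", jst.strftime("%A %H:%M"))
# → Thursday 08:00 / Thursday 09:00
```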
2015-08-01 1:20 GMT+08:00 Tim Hinrichs :
> Peter pointed out that no on
According to the error message, it looks like there are not enough mysql db
connections for the HA Congress server to launch.
Can you double-check your mysql '*max_connections*' option in my.cnf and
show the active connections in the mysql console like this:
*mysql> show full processlist;*
more details:
https://
Wonderful! I don't need to stay up late.
Thanks, everybody.
2015-07-15 10:28 GMT+08:00 Masahito MUROI :
> I'm happy to see that.
>
> btw, is the day on Tuesday?
>
> best regard,
> masa
>
> On 2015/07/15 9:52, Zhou, Zhenzan wrote:
>
>> Glad to see this change.
>> Thanks for the supporting for devel
AFAIK nova and cinder support --all-tenants when we list servers and
volumes; it's an admin-only operation, as Kirill pointed out in the above
comments.
On the other side, I think we should be careful using this option,
because huge results are pulled all at one time when we want to get the
cross t
Thank you @Alex, it's helpful for me :)
2015-06-27 13:51 GMT+08:00 Alex Xu :
> Hi, Rui, Abhishek,
>
> There is an email that can answer your question:
> http://lists.openstack.org/pipermail/openstack-dev/2015-June/068079.html
>
> Thanks
> Alex
>
> 2015-06-27 11:33 GMT+0
I have the same question about this.
My spec and blueprint:
https://review.openstack.org/#/c/169638/
https://blueprints.launchpad.net/nova/+spec/selecting-subnet-when-creating-vm
2015-06-26 17:56 GMT+08:00 Kekane, Abhishek :
> Hi Nova Devs,
>
>
>
> I have submitted a nova spec [1] for improving
Hi all:
We have the instance action and action event for most of the instance
operations,
excluding live-migration. In the current master code, when we do
live-migration, the
instance action is recorded, but the action event for live-migration is
lost. I'm not sure whether
it's a bug or design
Hi all:
I find the bug [1] "block/live migration doesn't work with LVM as
libvirt storage" is marked as 'Fix released', but I don't think this issue
is really solved. I checked the live-migration code and didn't find any logic
for handling LVM disks. Please correct me if I'm wrong.
In the bug [1]
Assuming my understanding is correct, 2 things make you sad in the upgrade
process:
1. You must reconfigure 'upgrade_levels' in the config file during
post-upgrade.
2. You must restart the service in order to make the 'upgrade_level' option
take effect.
I think the configuration management tools (e.g. chef, pu
Maybe we can add a python3 jenkins job (non-voting) to help us find
some potential issues.
2015-04-24 16:34 GMT+08:00 Victor Stinner :
> Hi,
>
> Porting OpenStack applications during the Liberty Cycle was discussed last
> days in the thread "[oslo] eventlet 0.17.3 is now fully Python 3
> co
Hi all:
I'm working on the patch https://review.openstack.org/#/c/147048/ for
bug/1408859
Description of bug:
When the nova-scheduler can't select enough hosts for creating multiple
instances, a NoValidHost exception is raised, but part of the hosts'
resources had already been consumed by the instances in
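A sketch of the bug under assumed, much-simplified semantics (not the actual patch): resources are consumed host-by-host while scheduling, so a late NoValidHost must undo the consumption already made for earlier instances.

```python
class NoValidHost(Exception):
    pass

def select_hosts(free_ram, requests):
    """Consume RAM host-by-host; roll back everything if any request fails."""
    consumed = []  # (host, amount) records so a failure can be undone
    try:
        for ram in requests:
            host = max(free_ram, key=lambda h: free_ram[h])
            if free_ram[host] < ram:
                raise NoValidHost()
            free_ram[host] -= ram
            consumed.append((host, ram))
    except NoValidHost:
        for host, ram in consumed:  # undo the partial consumption
            free_ram[host] += ram
        raise
    return [h for h, _ in consumed]

free = {"host1": 2, "host2": 1}
try:
    select_hosts(free, [2, 2])   # the second request cannot be satisfied
except NoValidHost:
    pass
print(free)  # free RAM restored: {'host1': 2, 'host2': 1}
```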
Thank you for reply, Chris.
2015-03-27 23:15 GMT+08:00 Chris Friesen :
> On 03/26/2015 07:44 PM, Rui Chen wrote:
>
>> Yes, you are right, but we found our instance hang at first
>> dom.shutdown() call,
>> if the dom.shutdown() don't return, there is no cha
Yes, you are right, but we found our instance hangs at the first dom.shutdown()
call; if dom.shutdown() doesn't return, there is no chance to execute
dom.destroy(), right?
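One way around the concern above, sketched with a fake domain object (not libvirt): run dom.shutdown() in a worker thread with a timeout, and fall back to dom.destroy() if shutdown never returns.

```python
import threading

def shutdown_with_fallback(dom, timeout=5.0):
    """Try a graceful shutdown; hard power-off if it hangs past timeout."""
    t = threading.Thread(target=dom.shutdown, daemon=True)
    t.start()
    t.join(timeout)
    if t.is_alive():          # shutdown() hung past the timeout
        dom.destroy()         # hard power-off as a fallback
        return "destroyed"
    return "shutdown"

class HangingDomain:          # simulates a guest that ignores ACPI shutdown
    def shutdown(self):
        threading.Event().wait()  # never returns
    def destroy(self):
        pass

print(shutdown_with_fallback(HangingDomain(), timeout=0.1))  # → destroyed
```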
2015-03-26 23:20 GMT+08:00 Chris Friesen :
> On 03/25/2015 10:15 PM, Rui Chen wrote:
>
>> Hi all:
>>
&g
Hi all:
I found a discussion in the libvirt community about the libvirt shutdown API
possibly hanging when shutting down an instance:
https://www.redhat.com/archives/libvir-list/2015-March/msg01121.html
I'm not sure whether there are some risks when we shut down
an instance in nova.
Three questions:
Hi all:
I deploy my OpenStack with the VMware driver; one nova-compute connects to the
VMware deployment, and
there are about 3000 VMs in the VMware deployment. I use mysql. The
InstanceList.get_by_host method
raises an rpc timeout error when ComputeManager.init_host() and the
_sync_power_states periodic ta
> On 03/04/2015 09:23 AM, Sylvain Bauza wrote:
>>
>>> On 04/03/2015 04:51, Rui Chen wrote:
>>>
>>>> Hi all,
>>>>
>>>> I want to make it easy to launch a bunch of scheduler processes on a
>>>> host, multiple scheduler w
HostState object in self memory,
the only difference from HA is just launching all the scheduler processes
on one host.
I'm sorry to waste some time; I just want to clarify it.
2015-03-05 17:12 GMT+08:00 Sylvain Bauza :
>
> On 05/03/2015 08:54, Rui Chen wrote:
>
> We will face
>
> 2015-03-05 10:55 GMT+08:00 Rui Chen :
>
>> Looks like it's a complicated problem, and nova-scheduler can't scale-out
>> horizontally in active/active mode.
>>
>> Maybe we should illustrate the problem in the HA docs.
>>
>> http://docs.o
ere are use cases when the scheduler would need to know even more data,
> > Is there a plan for keeping `everything` in all schedulers process
> memory up-to-date ?
> > (Maybe zookeeper)
> >
> > The opposite way would be to move most operation into the DB side,
>
Hi all,
I want to make it easy to launch a bunch of scheduler processes on a host;
multiple scheduler workers will make use of the host's multiple processors and
enhance the performance of nova-scheduler.
I have registered a blueprint and committed a patch to implement it.
https://blueprints.launchpad.n
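The idea can be illustrated with a few lines of stdlib multiprocessing (a sketch, not the actual patch): fork several worker processes on one host so the scheduler can use multiple processors.

```python
import multiprocessing

def scheduler_worker(worker_id, results):
    results.put(worker_id)  # stand-in for the real scheduling loop

def run_workers(n=4):
    """Fork n scheduler workers on this host and collect their ids."""
    results = multiprocessing.Queue()
    procs = [multiprocessing.Process(target=scheduler_worker, args=(i, results))
             for i in range(n)]
    for p in procs:
        p.start()
    # drain the queue before joining so the workers can flush and exit
    ids = sorted(results.get() for _ in range(n))
    for p in procs:
        p.join()
    return ids

if __name__ == "__main__":
    print(run_workers(4))  # one result per worker process
```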
es :
> On 03/03/2015 01:10 AM, Rui Chen wrote:
>
>> Hi all,
>>
>> When we boot instance from volume, we find some ambiguous description
>> about flavor root_gb in operations guide,
>> http://docs.openstack.org/openstack-ops/content/flavors.html
>>
>>
Hi all,
When we boot an instance from a volume, we find an ambiguous description of
the flavor root_gb in the operations guide,
http://docs.openstack.org/openstack-ops/content/flavors.html
*Virtual root disk size in gigabytes. This is an ephemeral disk the base
image is copied into. You don't use it when
Append blueprint link:
https://blueprints.launchpad.net/nova/+spec/verifiable-force-hosts
2015-02-13 10:48 GMT+08:00 Rui Chen :
> I agree with you @Chris
> '--force' flag is a good idea, it keep backward compatibility and
> flexibility.
> We can select whether the f
I agree with you @Chris.
The '--force' flag is a good idea; it keeps backward compatibility and
flexibility.
We can select whether the filters are applied for force_hosts.
I will register a blueprint to track the feature.
The 'force_hosts' feature is so age-old that I don't know how many users
have used it
orce_hosts' is operator action, the default value is
'is_admin:True' in policy.json, but in some case the value may be changed
so that the regular user can boot instance on specified host.
2015-02-12 17:44 GMT+08:00 Sylvain Bauza :
>
> On 12/02/2015 10:05, Rui Chen wrote:
>
Hi:
If we boot an instance with 'force_hosts', the forced host will skip all
filters; it looks like intentional logic, but I don't know the
reason.
I'm not sure that the skipping logic is appropriate. I think we should
remove the skipping logic, and 'force_hosts' should work with the
sc
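A simplified sketch (not nova code) of the proposal: instead of bypassing all filters for force_hosts, restrict the candidate list to the forced host and still run the filters against it, so an unsuitable forced host is rejected rather than silently accepted.

```python
# Simplified scheduler sketch: force_host narrows the candidates but the
# filters still run, so an unsuitable forced host yields no valid host.
def select_host(hosts, filters, force_host=None):
    candidates = [h for h in hosts if force_host is None or h["name"] == force_host]
    for f in filters:
        candidates = [h for h in candidates if f(h)]
    return candidates[0]["name"] if candidates else None

hosts = [{"name": "node1", "free_ram": 512},
         {"name": "node2", "free_ram": 8192}]
enough_ram = lambda h: h["free_ram"] >= 1024  # a toy RAM filter

print(select_host(hosts, [enough_ram]))                      # → node2
print(select_host(hosts, [enough_ram], force_host="node1"))  # → None
```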
ters makes sense to me.
>
> 2015-02-12 15:01 GMT+08:00 Rui Chen :
> > Hi:
> >
> > Currently, resizing instance cause migrating from the host that the
> > instance run on to other host, but maybe the current host is suitable for
> > new flavor. Migrating will lea
anagaraj M
>
>
>
> *From:* Jesse Pretorius [mailto:jesse.pretor...@gmail.com]
> *Sent:* Thursday, February 12, 2015 1:25 PM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [nova] Priority resizing instance on same
> host
>
>
>
>
chosen host even if the disk size remains the same.
2015-02-12 15:55 GMT+08:00 Jesse Pretorius :
> On Thursday, February 12, 2015, Rui Chen wrote:
>>
>> Currently, resizing instance cause migrating from the host that the
>> instance run on to other host, but maybe the current
Hi:
Currently, resizing an instance causes a migration from the host that the
instance runs on to another host, but maybe the current host is suitable for
the new flavor. Migrating will lead to copying the image between hosts if
there is no shared storage, which wastes time.
I think that prioritizing resizing the instance on the
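One way to express "prefer the current host on resize", sketched in plain Python (simplified, not nova code): give the host the instance already runs on a weight bonus so it wins whenever it can fit the new flavor.

```python
# Simplified resize placement: the current host gets a weight bonus so it
# is preferred whenever it has room for the new flavor, avoiding an
# image copy between hosts.
def pick_resize_host(hosts, current_host, new_flavor_ram, bonus=100):
    def weight(h):
        w = h["free_ram"]
        if h["name"] == current_host:
            w += bonus  # prefer staying put
        return w
    fitting = [h for h in hosts if h["free_ram"] >= new_flavor_ram]
    return max(fitting, key=weight)["name"] if fitting else None

hosts = [{"name": "hostA", "free_ram": 4096},
         {"name": "hostB", "free_ram": 4160}]
# hostB has slightly more free RAM, but the bonus keeps the resize on hostA.
print(pick_resize_host(hosts, "hostA", new_flavor_ram=2048))  # → hostA
```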
Thanks @Sahid, I will help review this patch :)
2014-12-19 16:01 GMT+08:00 Sahid Orentino Ferdjaoui <
sahid.ferdja...@redhat.com>:
>
> On Fri, Dec 19, 2014 at 11:36:03AM +0800, Rui Chen wrote:
> > Hi,
> >
> > Is Anybody still working on this nova BP 'Impr
Hi,
Is anybody still working on this nova BP 'Improve Nova KVM IO support'?
https://blueprints.launchpad.net/nova/+spec/improve-nova-kvm-io-support
I am willing to complete the nova-spec and implement this feature in Kilo or
subsequent versions.
Feel free to assign this BP to me, thanks :)
Best Regar
Thanks for your fantastic leadership!!
2014-09-23 10:54 GMT+08:00 Adam Young :
> On 09/22/2014 10:47 AM, Dolph Mathews wrote:
>
> Dearest stackers and [key]stoners,
>
> With the PTL candidacies officially open for Kilo, I'm going to take the
> opportunity to announce that I won't be running ag
*I think a domain attribute is more appropriate than a nova.conf node config;
we need to consider cross-host tasks like **migrate and live-migrate :)*
2014-02-24 10:45 GMT+08:00 zhangyu (AI) :
> Sure, hard-coding seems weird…
>
>
>
> However, a global configuration here dominates all domains. It might
eac-99ed-be587EXAMPLE
>> vol-1a2b3c4d
>> i-1a2b3c4d
>> /dev/sdh
>> attaching
>> YYYY-MM-DDTHH:MM:SS.000Z
>>
>>
>>
>> So I think it's a bug IMO.Thanks~
>>
>>
>> wingwj
>>
>>
>> On Sat, Feb 15,
Hi Stackers:
I use the Nova EC2 interface to attach a volume; the attach succeeds, but the
volume status is 'detached' in the response message.
# euca-attach-volume -i i-000d -d /dev/vdb vol-0001
ATTACHMENT vol-0001i-000d detached
This makes me confused; I think the status sho
Hi Stackers:
Some instance operations and flavors are closely connected, for example
resize.
If I delete the flavor while resizing an instance, the instance will go to
error. Like this:
1. run instance with flavor A
2. resize instance from flavor A to flavor B
3. delete flavor A
4. resize-revert instance
5. i