You should check your syslog for AppArmor denied messages. It is possible
AppArmor is getting in the way here.
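A quick way to scan for such denials (the sample line below is illustrative; on a real box you would grep /var/log/syslog or dmesg):

```shell
# A syslog AppArmor denial looks roughly like this sample line.
# In practice: grep -i 'apparmor="DENIED"' /var/log/syslog
sample='kernel: audit: type=1400 apparmor="DENIED" operation="open" profile="libvirt-qemu" name="/var/lib/nova/instances/instance-00000001/disk"'
matches=$(printf '%s\n' "$sample" | grep -c 'apparmor="DENIED"')
echo "$matches"   # prints 1
```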
Vish
On Apr 11, 2013, at 8:35 AM, John Paul Walters wrote:
> Hi Sylvain,
>
> I agree, though I've confirmed that the UID and GID are consistent across
> both the compute nodes and
I wasn't aware that force_hosts actually works. Someone should probably verify.
The availability zone method still works in grizzly.
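The zone:host trick can be sketched like this (the hostname and image id are placeholders; this form typically requires admin credentials):

```shell
# Pin the instance to a specific compute host via the availability zone hint.
nova boot --image <image-id> --flavor m1.small \
    --availability-zone nova:compute1 pinned-instance
```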
Vish
On Mar 30, 2013, at 6:42 PM, Lorin Hochstein wrote:
> I see that in grizzly an admin can use a scheduler hint to force a VM to
> launch on a particular hos
I just looked at the code and it appears this is not possible through the
os_networks extension. This appears to be an oversight. It should probably
allow a project to be passed in.
Bug report here: https://bugs.launchpad.net/nova/+bug/1161441
That said, the first time a user boots an instance,
onRefused: '[Errno 111] Connection refused'
>
> I know it's a generic error, but do you have any clue?
>
> Thanks again,
> Gabriel.
>
>
> -Original Message-
> From: Vishvananda Ishaya [mailto:vishvana...@gmail.com]
> Sent: quinta-feira, 21 de mar
Well phooey:
if network_ref['multi_host']:
    _add_dhcp_mangle_rule(dev)
The mangle rule is only added by nova-network in multi-host mode.
Can you verify whether or not adding the rule on the compute or network node
fixes it?
That way we can either remove the check on multi_h
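For reference, the rule that _add_dhcp_mangle_rule installs is, to the best of my recollection, a checksum-fill on outbound DHCP responses; adding it by hand would look like:

```shell
# Fill in UDP checksums on DHCP replies (needed for virtio guests whose
# dhclient drops packets with unset checksums). Requires root.
iptables -t mangle -A POSTROUTING -p udp --dport 68 \
    -j CHECKSUM --checksum-fill
```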
On Mar 20, 2013, at 11:20 AM, Brano Zarnovican wrote:
> On Wed, Mar 20, 2013 at 5:06 PM, Vishvananda Ishaya
> wrote:
>>> 2) Wipeout connection_info after disconnect. At least for Netapp
>>> provider it makes no sense to retain the info which is no longer valid
>&
On Mar 20, 2013, at 3:39 AM, Brano Zarnovican wrote:
> Hi devs,
>
> we are using backend iSCSI provider (Netapp) which is mapping
> Openstack volumes to iSCSI LUNs. This mapping is not static and
> changes over time. For example when the volume is detached then his
> LUN id becomes unused. Afte
Hello all,
I would like to run for a seat on the Technical Committee. I have been working
on Nova since it was a project at NASA and I have been heavily involved in
openstack since it was founded. I was elected to the precursor to the TC (the
Project Oversight Committee, later named the Project Poli
Not exactly, although you can do something similar in both folsom and grizzly.
If you have a volume snapshot you can pass it in the block_device_mapping when
you boot an instance and nova will automatically create a volume from the
snapshot and boot from it. If you also set delete_on_terminate t
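The boot-from-snapshot call can be sketched like this (legacy folsom/grizzly flag syntax; the snapshot id is a placeholder):

```shell
# Create a volume from the snapshot at boot time and attach it as vda.
# Field format: <dev>=<id>:<type>:<size(GB)>:<delete_on_terminate>
nova boot --flavor m1.small \
    --block_device_mapping vda=<snapshot-id>:snap::1 \
    booted-from-snapshot
```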
On Mar 1, 2013, at 8:05 AM, Paras pradhan wrote:
> Can somebody check if my endpoints are correct. 192.168.122.25 is my
> proxy node in which port is on with self signed certs.
>
> --
> public: http://192.168.122.25:/v1/AUTH_%(tenant_id)s
> internal: http://192.168.122.25:/v1/AUTH_
The default build of kvm-qemu does not have spice support on ubuntu-precise. If
you are running on ubuntu you might have to do:
sudo apt-get install qemu-kvm-spice
Devstack should probably be modified to install that package if n-spice is
enabled.
Vish
On Feb 28, 2013, at 10:33 AM, Shake Chen
qlen 500
> link/ether fe:16:3e:5f:b2:0a brd ff:ff:ff:ff:ff:ff
> inet6 fe80::fc16:3eff:fe5f:b20a/64 scope link
>valid_lft forever preferred_lft forever
>
> I don't understand that why br100 is displaying unknown state.
>
> Thanks
> Kashif
>
>
>
>
&
This topic might be better posted on openstack-dev
Vish
On Feb 26, 2013, at 11:24 AM, Kun Huang wrote:
> Hi swift developer,
>
> I'm confused about implementation of ring structure.
>
> in the RingBuilder, line 671 ~ 681
>
>
> for part, replace_replicas in reassign_parts:
>
>
>
On Feb 26, 2013, at 10:11 AM, mohammad kashif wrote:
> Hi
> I am installing openstack folsom on rhel6.4 with multi_host nova network. I
> have a working setup with ubuntu 12.04 and Essex and I am using almost same
> network setup with rhel with folsom. I don't understand that what is going
If you set:
enable_new_services=False
in your nova.conf, all new services will be "disabled" by default and the
scheduler won't start scheduling instances until you explicitly enable them.
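To bring a new host into rotation afterwards, something along these lines (the nova-manage subcommand form is from memory for this era and may differ):

```shell
# Explicitly enable the service so the scheduler starts using the host.
nova-manage service enable --host=compute1 --service=nova-compute
```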
Vish
On Feb 25, 2013, at 2:46 PM, Shawn Starr wrote:
> On Monday, February 25, 2013 10:34:11 PM Jeremy S
Looks like those docs are pretty outdated. I have a github repository where I
have been putting together some examples of doing common commands with
a) cli
b) python-*client
c) curl
It is incomplete but this should help get you started:
# helper method to create the client
https://github.com/vis
>
> Sincerely,
> Hsiao
>
> On Fri, Feb 22, 2013 at 12:57 AM, Vishvananda Ishaya
> wrote:
>> I'm pretty sure a whole disk image will fail with lxc. You need just the
>> root filesystem.
>>
>> You might have more luck with the unpacked version of:
I'm pretty sure a whole disk image will fail with lxc. You need just the root
filesystem.
You might have more luck with the unpacked version of:
http://uec-images.ubuntu.com/releases/precise/release/ubuntu-12.04-server-cloudimg-amd64.tar.gz
Vish
On Feb 21, 2013, at 8:34 AM, Chuan-Heng Hsiao w
ven moinmoin project running to provide DNSaas in openstack that supports
> bind 9 at present would there be any future change in nova dhcp and dns
> architecture to the currently it has.
>
> On Wed, Feb 20, 2013 at 1:30 AM, Vishvananda Ishaya
> wrote:
> No particular reason e
nks vish,
>
> Can you tell me the location of the external host file we provide to
> dnsmasq , so that i can try putting the directive there.
>
> On Wed, Feb 20, 2013 at 1:07 AM, Vishvananda Ishaya
> wrote:
> You cannot have an external dhcp server with openstack. Openstac
sq instead of ISC-DHCP managed with OMAPI, for example.
>
> Cheers
> Diego
>
> --
> Diego Parrilla
> CEO
> www.stackops.com | diego.parri...@stackops.com | +34 649 94 43 29 |
> skype:diegoparrilla
>
>
>
>
>
> On Tue, Feb 19, 2013 at 8:37 PM, Vi
You cannot have an external dhcp server with openstack. Openstack needs a way
to know the ip address assigned to a vm to do its listing properly. If you
don't care about the api returning valid ips there is a possibility of using
FlatNetworking (not FlatDHCP) to make nova stick the network into
You definitely need the libvirt modules. Nova has no way to detect whether the
modules are installed so it will try to attach via virtio.
Note that with grizzly you can use custom glance properties to override the
default vif type and disk bus. See https://review.openstack.org/#/c/21527/ and
ht
Hi Everyone,
I pushed another version of python novaclient (2.11.1) to pypi[1]. There was a
bug[2] with using the gnome keyring that was affecting some users, so the only
change from 2.11.0 is the inclusion of a fix for the bug.
[1] http://pypi.python.org/pypi/python-novaclient/
[2] https://bug
I seem to recall something similar happening when I built from source. One
option is to update your /etc/nova/nova.conf to:
libvirt_cpu_mode=host-passthrough
Vish
On Feb 15, 2013, at 9:07 AM, Sylvain Bauza wrote:
> Hi,
>
> Nova is generating libvirt.xml for each instance withmode="host-m
Hello Everyone,
I just pushed version 2.11.0 of python-novaclient to Pypi. There are a lot of
fixes and features in this release. Here is a brief overview:
Bug Fixes
-
private key files now created with 400 permissions
nova quota-show now uses current tenant by default
nova live-migrati
You are likely doing the association too early. You should wait until the
vm is showing a fixed ip address before associating a floating ip.
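The wait can be scripted with a simple poll; a minimal sketch, where get_fixed_ip is a hypothetical callable wrapping `nova show` or a python-novaclient lookup:

```python
import time

def wait_for_fixed_ip(get_fixed_ip, timeout=60, interval=2):
    """Poll until the server reports a fixed IP, then return it."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        ip = get_fixed_ip()  # e.g. parse `nova show` output
        if ip:
            return ip
        time.sleep(interval)
    raise TimeoutError("server never received a fixed IP")

# Once this returns, it is safe to associate the floating IP.
```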
Vish
On Feb 6, 2013, at 7:50 AM, Nivrutti Kale wrote:
> Hi All,
>
> I am trying to associate IP to the instance. I am getting following error.
>
> Error:
On Feb 4, 2013, at 2:29 AM, Joe Warren-Meeks wrote:
> Hi guys,
>
> I need to have arp and mac spoofing work on my virts so that HA works as I
> need it.
>
> I've used virsh nwfilter-edit nova-base to edit and remove the bits I need,
> however it looks like that change was undone when the serve
rom dashboard/horizon?
>
> Br.
>
> Umar
>
> On Sat, Feb 2, 2013 at 9:33 PM, Vishvananda Ishaya
> wrote:
>
> On Feb 1, 2013, at 8:24 PM, Umar Draz wrote:
>
> > So this is not possible that create a dedicated floating ip pools that
> > share all te
On Feb 1, 2013, at 8:24 PM, Umar Draz wrote:
> So this is not possible that create a dedicated floating ip pools that share
> all tenant.
>
> I have 128 ip pools and different tenant, I don't want a tenant hold the ip
> even if its not needed. I want a central pool every tenant should acquir
; export SERVICE_TENANT_NAME="service"
> According to above bashrc everything I do will be run as admin user.
>
> 1) Then how I can run nova commands for other users?
> 2) I don't want to run this floating-ip-create 50 times for my 50 tenant
> 3) Is there possible I jus
I suspect you are suffering from this recently fixed bug:
https://bugs.launchpad.net/nova/+bug/1103436
If you update your nova code and run everything you should be ok.
Vish
On Feb 1, 2013, at 10:20 AM, Wojciech Dec wrote:
> Hi All,
>
> while testing the latest code under devstackon a mu
What do you mean it isn't visible?
You should be able to do:
nova floating-ip-create mypool
as any user.
Vish
On Feb 1, 2013, at 10:29 AM, Umar Draz wrote:
> Hi All,
>
> I have 3 Tenant (admin, rebel, penguin). Also have 3 different users for
> these Tenants
>
> I have /25 network pool
On Jan 31, 2013, at 6:37 PM, "Ali, Haneef" wrote:
> Isn’t signed token an optional feature? If so validateToken is going to be
> a high frequency call. Also “Service Catalog” is a constant, the services
> can cache it. It doesn’t need to be part of validateToken.
Service catalog is not a
On Jan 30, 2013, at 11:35 AM, Umar Draz wrote:
> Hi Caitlin,
>
> I need multiple ip address for my Haproxy server.
>
> Here is my senario
>
> I have already running Haproxy Server virtual machine for web load balancing
> on vSphare with 45 public ip address. We are running 45 diffrent web
at I was trying to find out was if that additional action was available
> from the nova client. E.g is there a “nova restore ” command ?
> Looking through the client code I can’t see one, but thought I might be
> missing something.
>
> Thanks
> Phil
>
> From: Vis
On Jan 29, 2013, at 8:55 AM, "Day, Phil" wrote:
> Hi Folks,
>
> Does the nova client provide support to restore a soft deleted instance (and
> if not, what is the process for pulling an instance back from the brink) ?
If you have reclaim_instance_interval set then you can restore instances
In the future these should probably be done on the dev list. But for now I'm
adding him back.
Congrats trey.
Vish
On Jan 24, 2013, at 5:23 AM, Gary Kotton wrote:
> +1
>
> On 01/23/2013 05:51 PM, Joe Gordon wrote:
>>
>> +1
>>
>> On Wed, Jan 23, 2013 at 7:58 AM, Chris Behrens wrote:
>> +1
>
There is nothing wrong with your setup. L3 routing is done by the network node.
L3 is already blocked by security groups. The vlans provide L2 isolation.
Essentially we handle this with convention, as in tell your tenants not to open
up their firewalls if they don't want to be accessed by other
+1
We mentioned previously that we would fast-track former core members back in.
I guess we can wait a couple of days to see if anyone objects and then add him
back.
Vish
On Jan 22, 2013, at 3:38 PM, Matt Dietz wrote:
> All,
>
> I think Trey Morris has been doing really well on reviews a
On Jan 22, 2013, at 12:32 PM, Blair Zajac wrote:
> /usr/bin/nova-volume
The wrong bin is running. You should be running /usr/bin/cinder-volume if you
are using cinder.
It doesn't look like you have configured cinder properly.
___
Mailing list: http
In folsom, cinder didn't automatically convert images to raw when creating a
volume. Conversion is necessary because a qcow2 image written directly to a
volume will not boot properly. This means you need to create a volume that is
the size of the virtual disk.
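One workaround is to convert the image to raw yourself before creating the volume, e.g.:

```shell
# Convert qcow2 to raw; the raw file expands to the full virtual size,
# so the target volume must be at least that large.
qemu-img convert -O raw disk.qcow2 disk.raw
qemu-img info disk.raw   # shows "virtual size" to size the volume against
```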
Vish
On Jan 16, 2013, at 8:39 PM, "Bontala, Vi
On Jan 15, 2013, at 8:43 AM, Joe Warren-Meeks
wrote:
> So, now you should be done. However, Openstack will try to add in a SNAT rule
> to SNAT some outbound traffic. Vish suggested leaving --routing_source_ip= in
> nova.conf set to nothing, but that doesn't work, it throws an error when
> se
On Jan 14, 2013, at 10:15 AM, Antonio Messina
wrote:
>
> On Mon, Jan 14, 2013 at 7:07 PM, Vishvananda Ishaya
> wrote:
>
> On Jan 14, 2013, at 9:28 AM, Antonio Messina
> wrote:
>
>> On Mon, Jan 14, 2013 at 6:18 PM, Vishvananda Ishaya
>> wrote:
>&g
On Jan 14, 2013, at 9:28 AM, Antonio Messina
wrote:
> On Mon, Jan 14, 2013 at 6:18 PM, Vishvananda Ishaya
> wrote:
>
> On Jan 14, 2013, at 7:49 AM, Jay Pipes wrote:
>
> >
> > There is an integer key in the s3_images table that stores the map
> > betw
This doesn't exist yet, but I thought at one point it was being worked on.
Hot-adding nics would be a great feature for the quantum integration especially.
Blueprint here:
https://blueprints.launchpad.net/nova/+spec/network-adapter-hotplug
There was work done here:
https://review.openstack.org
On Jan 14, 2013, at 7:49 AM, Jay Pipes wrote:
>
> There is an integer key in the s3_images table that stores the map
> between the UUID and the AMI image id:
>
> https://github.com/openstack/nova/blob/master/nova/db/sqlalchemy/models.py#L964
>
> Not sure this is available via Horizon... sorry
I can't find it. Do you have any more advice?
>
> -David
>
> On 1/11/2013 1:32 PM, Vishvananda Ishaya wrote:
>> Key name is the recommended method, but injecting it into the guest is not.
>> The key should be downloaded from the metadata server using a guest process
&
Key name is the recommended method, but injecting it into the guest is not. The
key should be downloaded from the metadata server using a guest process like
cloud-init.
Vish
On Jan 11, 2013, at 10:20 AM, David Kranz wrote:
> Sometimes when I boot a bunch of vms seconds apart, using the key_na
Hi Markus,
It kind of depends on exactly how you are routing on the gateway host, but it
might be libvirt-enabled ebtables filtering that is causing your problem here.
By default we block traffic from a machine that is not coming from the same
source ip and mac that is assigned to the instance.
On Jan 10, 2013, at 5:50 AM, Alex Vitola wrote:
> I'm creating the Dashboard / Horizon.
>
> Using m1.small flavor.
>
> Strange that using the same process works with Ubuntu.
>
> Using command line, same problem
>
> ~# nova boot --flavor=6 --image=e4fc62b7-5e1b-457b-a578-26939b547ed0
> CentOS
If you are attempting to stop nova-network from snatting for instances you can
very easily do it with conf:
routing_source_ip=
(set routing_source_ip to none)
This will stop the snat for instances. Please note that you will need to
provide a gateway through dnsmasq for your instances to reach
I believe that this bug only happens if:
a) you have your floating ips on a different interface from your flat ips
b) you are using an external gateway for the fixed ips (a custom dnsmasq config
file)
I've noticed that self-ping also breaks if you have dmz_cidr set to your
fixed_range (this isn
> VMs if that helps.
>
> Bruno
>
> Enviado do meu iPad
>
> No dia 03/01/2013, às 21:18, Vishvananda Ishaya
> escreveu:
>
>> This will be extremely difficult. I wouldn't recommend it. It would
>> probably be easier to make a manual cloudpipe instance
This will be extremely difficult. I wouldn't recommend it. It would probably
be easier to make a manual cloudpipe instance instead of having nova manage it.
You will just have to do some tweaking of the nwfilter rules of the vm. An even
easier solution would be to just make a bastion vm that th
ute2
>
> I can not ping 10.10.10.4 from compute1 node, and same I can not ping
> 10.10.10.2 from compute 2 node.
>
> But I can ping 10.10.10.3 and 10.10.10.5 from each compute nodes. Above is
> the output of ifconfig of both nodes.
>
> Best Regards,
>
> Umar
&g
> BROADCAST MULTICAST MTU:1500 Metric:1
> RX packets:0 errors:0 dropped:0 overruns:0 frame:0
> TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
> collisions:0 txqueuelen:1000
> RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
>
>
nd of network
> nova-manage network create --label=myNetwork --fixed_range_v4=10.10.10.0/24
> --bridge=br100 --num_networks=1 --multi_host=T
>
> Best Regards,
>
> Umar
>
> On Thu, Jan 3, 2013 at 10:13 PM, Vishvananda Ishaya
> wrote:
> Need a little more info:
>
I think this seems reasonable, although FYI, openstack-dev seems like a better
place for emails like this.
Vish
On Jan 3, 2013, at 6:40 AM, "Day, Phil" wrote:
> Hi Folks, and Happy New Year.
>
> In working with the Filter Scheduler I’m considering an enhancement to make
> the final host sel
Need a little more info:
a) what does your nova.config look like? Specifically what is the setting for
flat_interface?
b) what command did you use to create your network?
c) what is the output of brctl show?
d) what is the output of ip addr show?
Vish
On Jan 2, 2013, at 11:11 PM, Umar Draz
Lorin, that one might have been missed.
Vish
On Dec 31, 2012, at 1:52 PM, Lorin Hochstein wrote:
> Vish:
>
> On Thu, Nov 29, 2012 at 2:47 PM, Vishvananda Ishaya
> wrote:
> Hello Everyone,
>
> I just pushed out a new version of python-novaclient[1]. Mostly cleanups
This is generally due to timeouts talking to the network server. It will be
much better with multi_host networking. Also you could avoid some issues by
upping the rpc call timeout:
rpc_response_timeout=180
(defaults to 60 seconds)
Vish
On Dec 28, 2012, at 2:53 AM, Andrew Holway wrote:
> Hel
On Dec 27, 2012, at 9:09 AM, heut2008 wrote:
> note that the flag --start_guests_on_host_boot=true has been removed in the
> latest trunk code.so instances which are running willn't be restarted even
> the nova-compute is restarted .
Correct. The proper way to get instances to come back is
We didn't implement list as the operation is very expensive. You can get the
cidr for a network using nova network-list and check each one clientside via
nova fixed-ip-get
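The client-side check is just "which network's CIDR contains this address"; a minimal sketch using Python's stdlib (the function name is mine, not part of novaclient):

```python
import ipaddress

def owning_network(ip, cidrs):
    """Return the first CIDR (e.g. from `nova network-list`) containing ip,
    or None if no network matches."""
    addr = ipaddress.ip_address(ip)
    for cidr in cidrs:
        if addr in ipaddress.ip_network(cidr):
            return cidr
    return None
```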
Vish
On Dec 26, 2012, at 4:47 PM, zhoudshu zhoudshu wrote:
>I can't find the same one in nova api or novaclient as th
On Dec 22, 2012, at 7:50 AM, yuezhouli wrote:
> On December 22, 2012 at 09:59, 이창만 wrote:
>> Hello.
>>
>> Could anyone tell me how to create vm instance to specific compute node?
>>
>> I've tried blow command, but I couldn't place vm instance to wanted compute
>> node.
>>
>>
>> $ nova boot --image p
On Dec 20, 2012, at 9:18 PM, Jian Hua Geng wrote:
> According to the comments in https://review.openstack.org/#/c/18469/, I
> summary the following work items need to be done, pls give me your suggestion:
>
> 1. I prefer to provide a new attribute when run new instance, for example:
> --cdrom
On Dec 20, 2012, at 2:24 PM, Andrew Holway wrote:
> Hi Vish,
>
> Manually creating vlans would be quite tiresome if you are using a vlan per
> project and I'm not sure flatdhcp is good for serious use in multi tenanted
> production environments. (thoughts?)
Personally I think vlan isolation
There is no need for nova to create the vlans, you could use flatdhcp and
manually create the vlans and specify the vlans when you create your networks:
nova-manage network-create --bridge br0101 --bridge_interface eth0.101
nova-manage network-create --bridge br1101 --bridge_interface eth1.101
N
19, 2012 at 1:13 PM, Vishvananda Ishaya
> wrote:
> There should be a redirect in iptables from 169.254.169.254:80 to $my_ip:8775
> (where nova-api-metadata is running)
>
> So:
>
> a) can you
>
> curl $my_ip:8775 (should 404)
> CloudController and Nodes awnser i
There should be a redirect in iptables from 169.254.169.254:80 to $my_ip:8775
(where nova-api-metadata is running)
So:
a) can you
curl $my_ip:8775 (should 404)
b) if you do
sudo iptables -t nat -L -n -v
do you see the forward rule? Is it getting hit properly?
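For comparison, the redirect nova-network installs looks roughly like this (chain name and exact match options may vary by release):

```shell
# DNAT metadata traffic to the nova-api-metadata service.
# $my_ip is the address the metadata API listens on.
iptables -t nat -A nova-network-PREROUTING \
    -d 169.254.169.254/32 -p tcp --dport 80 \
    -j DNAT --to-destination $my_ip:8775
```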
Vish
On Dec 19, 2012, at 6
A number of things could be going wrong, but I did notice this bug recently:
https://bugs.launchpad.net/nova/+bug/1086352
I think this only affects installs on old versions of xp. Perhaps there is some
incompatibility between the virtio drivers and windows server 2012?
Vish
On Dec 18, 2012, at
Was there more to the error? The underlying exception isn't listed.
Vish
On Dec 14, 2012, at 7:24 AM, Sébastien Han wrote:
> Hi Stackers,
>
> I run Folsom on Ubuntu 12.04.
>
> Every time I run a new instance I get this ERROR in the compute logs.
>
> Dec 12 23:46:29 c2-compute-02 2012-12-12
I don't see why you would need to put in the --network_host flag, especially
since you seem to be running two nova-networks. It appears that nova-compute is
not checking in to the database which means it isn't running or it is hung
somehow. Check nova-compute.log on host1.
Vish
On Dec 13, 2012
elif vm_state == vm_states.ACTIVE:
    # The only rational power state should be RUNNING
    if vm_power_state in (power_state.NOSTATE,
                          power_state.SHUTDOWN,
                          po
cisco.com/en/US/docs/switches/datacenter/nexus5000/sw/configuration_limits/limits_513/nexus_5000_config_limits_513.html#wp344401
>> [2]
>> http://jpmcauley.com/2011/06/23/vlan-port-instance-limitation-on-cisco-ucs/
>>
>>
>>
>> On Mon, Dec 3, 2012 at 11:50 PM, Vishvan
It failed all nodes:
> Previously tried hosts: [u'node1', u'node2'].
The ComputeFilter checks whether the host is up. The RamFilter just ran
first and failed it. Your instance is going to Error because node1 and node2
are failing.
Vish
On Dec 11, 2012, at 1:34 AM, Liu Wenmao wrote:
> hi
bbits
running with nodes connected to different ones?
Vish
On Dec 10, 2012, at 2:00 PM, Afef MDHAFFAR wrote:
>
>
> 2012/12/10 Vishvananda Ishaya
> Check rabbitmqctl list_queues and see if there are queues that have nonzero
> entries. That means messages are being sent but not pi
The recommended way is to run cinder. The config that you showed before was not
running osapi_volume as one of your enabled apis.
Prior to folsom the way was to enable osapi_volume or run nova-api-volume. The
worker that processes commands is called nova-volume (similar to nova-compute
on the c
. You could try
deleting them and restarting nova-network.
Vish
On Dec 10, 2012, at 1:32 PM, Afef Mdhaffar
wrote:
>
>
> 2012/12/10 Vishvananda Ishaya
> I don't see any errors in your network log. are nova-network and nova-compute
> running on the same host with the same c
I don't see any errors in your network log. Are nova-network and nova-compute
running on the same host with the same config file? It looks like it isn't
receiving a message. Are you running another nova-network that is picking up
the message on another host?
Vish
On Dec 10, 2012, at 12:18 PM,
t; Do I need to specify this flag allow_resize_to_same_host=true in nova.conf of
> the compute node?
>
> Regards,
> Krishnaprasad
> From: Vishvananda Ishaya [mailto:vishvana...@gmail.com]
> Sent: Montag, 10. Dezember 2012 20:03
> To: Narayanan, Krishnaprasad
> Cc: ope
Two requirements:
1) hostnames for compute hosts resolve properly
2) passwordless ssh works between compute hosts.
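The ssh requirement is usually satisfied by generating a key for the user the compute service runs as and copying it to each peer (user and host names below are placeholders):

```shell
# On each compute host, as the user nova-compute runs as (often "nova"):
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
ssh-copy-id nova@compute2   # repeat for every other compute host
```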
Vish
On Dec 10, 2012, at 10:37 AM, "Narayanan, Krishnaprasad"
wrote:
> Hallo All,
>
> I am trying to use the Nova API (POST call) for changing the flavor
> information (to resize
x27;,
> default=8775,
> help='port for metadata api to listen'),
>cfg.IntOpt('metadata_workers',
> default=None,
> help='Number of workers for metadata service'),
>cfg.StrOpt('osapi_volume_listen
gt; [app:oscomputeversionapp]
> paste.app_factory = nova.api.openstack.compute.versions:Versions.factory
>
> [pipeline:osvolumeversions]
> pipeline = faultwrap osvolumeversionapp
>
> [app:osvolumeversionapp]
> paste.app_factory = nova.api.openstack.volume.versions:Versions.factory
>
> ###
It gets the endpoint configuration from keystone. Everything has to know where
the keystone server is and it can use the service catalog to talk to the other
services.
Vish
On Dec 6, 2012, at 3:20 AM, Trinath Somanchi wrote:
> Hi Stackers-
>
> I have got a doubt about a new kind of setup wit
ple). Nova moves the eth1 ip automatically when it creates the bridge
if eth1 has an ip.
Vish
> But if the 192 address doesn't exist, how the compute-note communicate with
> each other? Through the eth0? I have no idea.
>
>
> On Thu, Dec 6, 2012 at 3:12 AM, Vishvananda Ishaya
n this.
--
Best regard,
David Geng
Check whether you have any space left on the filesystem holding the instances
dir. I've seen this happen when the drive gets full and libvirt gets an I/O
error trying to write to disk, so it shuts off the VMs.
Vish
On Dec 5, 2012, at 6:59 PM, pyw wrote:
> My virtual machine created, often in the absence of in
Odd. This looks remarkably like it is trying to start osapi_volume even though
you don't have it specified in enabled apis. Your enabled_apis setting looks
correct to me.
Vish
On Dec 10, 2012, at 9:24 AM, Andrew Holway wrote:
> Hi,
>
> I cannot start the nova-api service.
>
> [root@blade02
On Dec 5, 2012, at 2:27 PM, Clint Walsh wrote:
> Vish,
>
> thanks for the clarification re hostnames.
>
> NeCTAR uses shared storage across compute nodes for VM images storage and our
> compute nodes hostnames resolve
>
> Is there a way around passwordless access between compute nodes for t
ed to be solved. There is a bp about this:
https://blueprints.launchpad.net/nova/+spec/resize-no-raw
>
> Resize would be very useful for our tenants.
>
> ---
> Clint Walsh
> NeCTAR Research Cloud Support
>
>
>
> On 6 December 2012 05:39, Vishvananda Ishaya wrote
On Dec 5, 2012, at 11:33 AM, Alberto Molina Coballes
wrote:
> 2012/12/5 Vishvananda Ishaya :
>> Probably wheezy puts iscsiadm somewhere that rootwrap can't find it.
>>
>> iscsiadm: CommandFilter, /sbin/iscsiadm, root
>> iscsiadm_usr: CommandFilter, /usr/bin/isc
Probably wheezy puts iscsiadm somewhere that rootwrap can't find it.
iscsiadm: CommandFilter, /sbin/iscsiadm, root
iscsiadm_usr: CommandFilter
This is a known issue in folsom and stable/folsom. You should turn off the
image cache if you are using shared storage.
https://bugs.launchpad.net/nova/+bug/1078594
See the upgrade notes here to see how to disable the imagecache run:
http://wiki.openstack.org/ReleaseNotes/Folsom#OpenStack_Compu
On Dec 5, 2012, at 1:53 AM, Lei Zhang wrote:
> Hi all,
>
> I am reading the
> http://docs.openstack.org/trunk/openstack-compute/admin/content/libvirt-flat-dhcp-networking.html,
> I got the following deploy architecture. But there are several that I am
> confused.
>
> How and why 192.168.0.0
On Dec 4, 2012, at 9:35 AM, Ahmed Al-Mehdi wrote:
> Hi Marco,
>
> This is really good stuff, thank you very much for helping out. I am
> creating some instances to test out how/where the different storage related
> elements are created.
>
> I created two VM instance:
>
> Instance 1 : 20GB
On Dec 4, 2012, at 1:15 AM, Marco CONSONNI wrote:
> Not sure, but it seems like this feature is available for XenServer, only
> http://osdir.com/ml/openstack-cloud-computing/2011-10/msg00473.html
>
> Does anybody know more?
Resize should work for kvm as well, but you will need hostnames to re
On Dec 4, 2012, at 3:48 AM, Jian Hua Geng wrote:
> Vish,
>
> Many thanks for u comments, but as you know to support windows sysprep image,
> we need save the unattend.xml in the CDROM or C:\ device. So, we want to
> extend the config drive to attach a CDROM device when launch VM.
>
> Anyway,
ps, the
> method 'db.network_get_all_by_host' use in 'init-host' must return the
> network in this case ?
>
> I only implement this for the multi hosted networks with the VLAN manger. I
> think isn't useful to add this on the multi hosted network with the Flat
On Dec 2, 2012, at 6:15 PM, Jian Hua Geng wrote:
> I saw the comments in the https://bugs.launchpad.net/nova/+bug/1029647 , can
> anyone give me more detail introduction of this decision about why the
> functionality for using an image id for config drive was removed?
>
> Just for example fo