Hello,
I was trying to migrate an instance from the compute node compute2 to another
node compute1, but it failed; nova-compute.log said compute2 failed to connect
to port 16509 on compute1. I noticed that port 16509 is not open on
compute1. Is there any additional configuration for Li
"$ ssh -i key.pem fedora@"
Are you sure the username is fedora and not root?
You could also add a user of your own in the image, so you can ssh in with a
password and check the file .ssh/authorized_keys to make sure the
public key is there.
-- Original Message --
From: "Chenrui (A)"
Hi Stackers,
I found there is a document to describe how to setup live migration using
NFS by Mirantis.
http://www.mirantis.com/blog/tutorial-openstack-live-migration-with-kvm-hypervisor-and-nfs-shared-storage
In this article, there is a paragraph:
*In a typical Openstack deployment, every co
I think inject_key_into_fs(key, fs) only affects the root user
-- Original Message --
From: "menghuizhanguo";;
Sent: Monday, Oct 28, 2013 4:07 PM
To: "Chenrui (A)"; "Bill Owen";
Cc: "Openstack Milis";
Subject: Re: [Openstack] Re: Key Injection not working after upgrading from Griz
Hi Thiago,
You don't say what version of OVS. It must be >= 1.10.0 because you are using
VXLAN. From that version, the way MTU and tunnels are handled has changed.
See "- Tunneling: " in http://openvswitch.org/releases/NEWS-1.10.0
Here is a tcpdump with OVS 1.10.2 where all interfaces have an
After making the following changes, the libvirtd daemon failed to start... but
if I remove "-f /etc/libvirt/libvirtd.conf" from /etc/default/libvirt-bin, the
libvirtd daemon can start after running 'service libvirt-bin restart'.
cat /etc/default/libvirt-bin
# Defaults for libvirt-bin initscript
Run the libvirtd daemon manually to see what happened.
-- Original Message --
From: "董建华";;
Sent: Monday, Oct 28, 2013 4:40 PM
To: "止语";
"openstack";
Subject: Re: Re: [Openstack] Nova live-migration failed (libvirt Connection refused)
After making the following changes, the libv
Also, I am not sure whether the other nodes can access
listen_addr = "10.10.10.182". Should it be "0.0.0.0"?
-- Original Message --
From: "menghuizhanguo";;
Sent: Monday, Oct 28, 2013 4:51 PM
To: "董建华"; "Openstack
Milis";
Subject: Re: Re: [Openstack] Nova live-migration failed (libvirt Connection
check the port in /etc/libvirt/libvirtd.conf :)
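As a point of reference, here is a sketch of the libvirtd.conf settings that qemu+tcp live migration typically needs. The values are illustrative, and auth_tcp = "none" disables authentication entirely, which is only reasonable on an isolated management network. The sketch is written to /tmp purely so it can be checked:

```shell
# Sketch of /etc/libvirt/libvirtd.conf settings for qemu+tcp live migration.
# Written to /tmp here only for illustration; on a real node you would edit
# /etc/libvirt/libvirtd.conf itself and restart libvirt-bin.
cat > /tmp/libvirtd.conf.sketch <<'EOF'
listen_tls = 0
listen_tcp = 1
tcp_port = "16509"
listen_addr = "0.0.0.0"
auth_tcp = "none"
EOF
# Confirm the TCP port setting is present
grep 'tcp_port' /tmp/libvirtd.conf.sketch
```

With settings like these in place, port 16509 should show up in `netstat -lntp` once libvirtd is actually started with its listen flag.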
-- Original Message --
From: "董建华";;
Sent: Monday, Oct 28, 2013 3:12 PM
To: "openstack";
Subject: [Openstack] Nova live-migration failed (libvirt Connection refused)
Hello,
I was trying to migrate one instance from the compute n
When doing live migration, the error message said it failed to connect to
'compute1:16509'; compute1 resolves to 10.10.10.182 on every node.
From: 止语
Sent: 2013-10-28 16:55
To: 董建华; Openstack Milis
Subject: Re: Re: [Openstack] Nova live-migration failed (libvirt Connection refused)
and I am not sure lis
Thanks. As you suggested, I uncommented the following line, and now the daemon
can start. Going to test the live migration.
#listen_tls = 0
From: menghuizhanguo
Sent: 2013-10-28 17:10
To: 董建华; openstack
Subject: Re: Re: [Openstack] Nova live-migration failed (libvirt Connection refused)
disable the tls l
root@compute1:/etc/default# cat libvirt-bin|grep libvirtd_opts
libvirtd_opts="-d"
root@compute1:/etc/default# service libvirt-bin restart
libvirt-bin stop/waiting
libvirt-bin start/running, process 3146
root@compute1:/etc/default# ps -ef|grep libvirtd
root 3146 1 29 16:51 ?00:00:
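For what it's worth, on Ubuntu the defaults file has to pass the listen flag explicitly, or libvirtd ignores the listen_* settings in libvirtd.conf. A sketch (exact flags may vary by release):

```shell
# /etc/default/libvirt-bin (sketch): -d daemonizes, -l makes libvirtd
# listen on TCP as configured in /etc/libvirt/libvirtd.conf
libvirtd_opts="-d -l"
```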
Hi Razique,
They are located on different compute nodes and allow incoming connections. The
VMs that I referred have a fixed IP address. I could see the eth0 interface and
it has the fixed IP address.
Thanks,
Krishnaprasad
From: Razique Mahroua [mailto:razique.mahr...@gmail.com]
Sent: Sunday,
disable the tls listen
-- Original --
Sender: "董建华";
Send time: Monday, Oct 28, 2013 5:05 PM
To: "止语"; "openstack";
Subject: Re: Re:[Openstack] Nova live-migration failed(libvirt Connection
refused)
root@compute1:/etc/default# libvirtd -l -f /etc/libvirt
The problem is exactly that the port is not listening; there isn't even a libvirtd process.
From: 董建华
Sent: 2013-10-28 17:05
To: 止语; openstack
Subject: Re: Re: [Openstack] Nova live-migration failed (libvirt Connection refused)
root@compute1:/etc/default# libvirtd -l -f /etc/libvirt/libvirtd.conf
2013-10-28 09:03:50.514+0000: 3777: info : libvirt version: 1.1.1
2013-10-2
root@compute1:/etc/default# libvirtd -l -f /etc/libvirt/libvirtd.conf
2013-10-28 09:03:50.514+0000: 3777: info : libvirt version: 1.1.1
2013-10-28 09:03:50.514+0000: 3777: error : virNetTLSContextCheckCertFile:117 :
Cannot read CA certificate '/etc/pki/CA/cacert.pem': No such file or directory
Thiago,
some more answers below.
Btw: I saw the problem with a "qemu-nbd -c" process using all the cpu on the
compute. It happened just once - must be a bug in it. You can disable libvirt
injection if you don't want it by setting "libvirt_inject_partition = -2" in
nova.conf.
On Saturday, 26
Now i got a new error message when doing the migration.
2013-10-28 17:31:46.030 2997 ERROR nova.virt.libvirt.driver [-] [instance:
d44c935f-d60a-4ebd-bea9-a9168ba50bb4] Live Migration failure: operation failed:
Failed to connect to remote libvirt URI qemu+tcp://compute2/system:
authentication
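An "authentication failed" on a qemu+tcp URI usually means libvirt's TCP socket is still expecting SASL. A sketch of the libvirtd.conf change that typically clears it (this disables authentication entirely, so it is only sane on a trusted, isolated management network); restart libvirt-bin on both compute nodes afterwards:

```ini
# /etc/libvirt/libvirtd.conf (sketch) -- no auth on the TCP socket
auth_tcp = "none"
```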
Dear All,
I am new to the OpenStack community and want to try some experiments with
it, like creating a storage service in the cloud.
I have a very basic question:
Is it possible to successfully install an OpenStack component (especially
Swift) using VirtualBox on a machine with 4 GB of RAM? Also, if anybody
tri
Hi Folks,
I'm trying to understand the quantum security model. I have the OVS plugin
configured with VLAN isolation.
I have a tenant project (alt_demo):
*(admin) > keystone tenant-list*
+--+--+-+
|id| name | enabled |
+--
Hi Friends,
I have deployed the OpenStack Havana release on CentOS 6.4 64-bit via the
Red Hat RDO tool. There are two network ranges I have configured in OpenStack:
1) for floating IPs (172.31.15.0/22), and 2) for internal IPs (10.0.0.0/24). I
have also shared these two network ranges so that other projects can
Ray,
It seems that OpenStack purposefully omits the method of provisioning shared
storage.
It's totally up to you; all you need to do is mount
/var/lib/nova/instances/ on each compute node at the system level.
Make sure the user IDs inside /var/lib/nova/instances/ are correct: nova,
libvir
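A minimal sketch of what that mount could look like, assuming a hypothetical NFS server named nfs-server exporting /srv/nova-instances; an fstab entry on each compute node:

```
# /etc/fstab (sketch) -- server name and export path are placeholders
nfs-server:/srv/nova-instances  /var/lib/nova/instances  nfs  defaults  0  0
```

Afterwards, compare `id nova` on every node; the nova (and libvirt-qemu) uid/gid must match across nodes, or migrated instances will hit permission errors.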
Hi,
I am getting error in server.log on NETWORK node :
2013-10-28 18:09:20.255 1099 WARNING neutron.api.extensions [-] Extension
lbaas not supported by any of loaded plugins
2013-10-28 18:09:20.263 1099 WARNING neutron.api.extensions [-] Extension
routed-service-insertion not supported by any of
Hello everyone,
A new problem with creating snapshots appeared after I upgraded to Havana:
2013-10-25 14:49:32.329 22980 TRACE nova.openstack.common.rpc.amqp File
"/usr/lib/python2.7/dist-packages/libvirt.py", line 646, in blockRebase
2013-10-25 14:49:32.329 22980 TRACE nova.openstack.common.rpc.a
I wanted to launch an instance that is set with 16 GB in an OpenStack Folsom
env., and when I looked at the logs, I saw that it failed to launch on a host
for not having enough memory:
host '': free_ram_mb:20873 free_disk_mb:684032 does
> not have 16384 MB usable ram, it only has 14433.7 MB usable ram.
>
On 10/27/13 4:39 PM, Michael Still wrote:
These sound like bugs worth filing...
I opened https://bugs.launchpad.net/nova/+bug/1245502
Blair
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists
On 10/28/2013 08:49 AM, Jitendra Kumar Bhaskar wrote:
Hi,
I am getting error in server.log on NETWORK node :
2013-10-28 18:09:20.255 1099 WARNING neutron.api.extensions [-]
Extension lbaas not supported by any of loaded plugins
2013-10-28 18:09:20.263 1099 WARNING neutron.api.extensions [-]
Ext
Thanks for the response. While this link does contain correct information for
switching to an ldap backed identity store, I haven't seen anything in it that
shows how to do ldap-based authentication while maintaining everything else in
the SQL-based store. That's what was quoted as being possi
On 10/27/2013 11:59 AM, Martinx - ジェームズ wrote:
Guys,
I am not detecting any problems related to "MTU = 1500" when using VXLAN!
It is easy to reproduce the "GRE MTU problem" when using GRE tunnels
with MTU = 1500: from an instance, it is impossible to use RubyGems
(Ubuntu 12.04 instance), for ex
Daniel,
Thanks for your response. But I am still confused about using an iSCSI device:
if I want to use it as shared storage, first I need to attach it to a node as
block storage, then mount it on the other compute nodes using NFS. The
problem is that this will cause a big loss of performance. Currently n
On 10/28/2013 09:35 AM, Ray Sun wrote:
Daniel,
Thanks for your response. But I am still confused about using an iSCSI
device: if I want to use it as shared storage, first I need to attach it to
a node as block storage, then mount it on the other compute nodes using
NFS. The problem is that this will ma
Hello, David
Thanks for your quick response. Changing the default behaviour of Horizon
is fine :), yet it is still unclear to me how I force Horizon to call my
custom function provided via the 'user_home' config parameter. It isn't called
right now (the 'get_user_home' function from
/usr/share/openstac
Libvirt does support iSCSI LUNs as backends, meaning you can mount the iSCSI
block on every compute node and you should be fine.
- Razique
On Oct 28, 2013, at 8:54, Chris Friesen wrote:
> On 10/28/2013 09:35 AM, Ray Sun wrote:
>> Daniel,
>> Thanks for your response. But I am still confused about
OK
Can you show us:
• The fixed IP you are trying to reach. From the compute node it is running on,
are you able to ping it?
• $ iptables -L -nv -t nat
• What is the floating IP; does it appear in the output of that iptables command?
• If you run $ ip addr sh | grep $IP, do you see the floating IP? (assumi
That works for one client. How do you synchronize access between
multiple clients?
Also, for iSCSI it looks like libvirt can't create/delete volumes, that
needs to be done on the server.
Chris
On 10/28/2013 10:49 AM, Razique Mahroua wrote:
Libvirt does support iSCSI LUNs as backends, meanin
The only thing is that you need to create the volumes on your iSCSI backend
first, so libvirt can use them; otherwise, as shared storage, it works fine.
iSCSI always gave me nice speeds compared to NFS.
On Oct 28, 2013, at 10:04, Chris Friesen wrote:
> That works for one client. How do you synch
Are there any ways I can try debugging/troubleshooting this?
-Shri
On Fri, Oct 25, 2013 at 10:19 PM, Shrinand Javadekar <
shrin...@maginatics.com> wrote:
> Hi,
>
> My attempt to upgrade my 3 node swift installation from v1.9.1 to v1.10.0
> fails without any errors :(. I downloaded the tar ball
I'm struggling to get security groups working with Docker and Neutron.
1) Should the secgroups be inside the namespace of the container,
2) or outside on the compute node, like with KVM?
If the 2nd, it seems I can't find the right way to get the rules applied
on the host, no matter what conf options I try.
Hello,
I've installed OpenStack on a Ubuntu 12.04 VM according to the install guide.
It worked fine until I created the compute node on a second VM and tried to set
up nova networking:
http://docs.openstack.org/havana/install-guide/install/apt/content/nova-network.html
# nova network-create v
Hey,
is «controller» a hostname all your servers can resolve?
- Razique
On Oct 28, 2013, at 11:40, Florian Lindner wrote:
> Hello,
>
> I've installed OpenStack on a Ubuntu 12.04 VM according to the install guide.
> It worked fine until I created the compute node on a second VM and tried to
> set
Am Montag, 28. Oktober 2013, 12:04:02 schrieb Razique Mahroua:
> Hey,
> is «controller» a hostname all your servers can resolve?
Yes, set in /etc/hosts.
Regards,
Florian
>
> - Razique
>
> On Oct 28, 2013, at 11:40, Florian Lindner wrote:
> > Hello,
> >
> > I've installed OpenStack on a Ubuntu 12
On the RabbitMQ server, are vhost and user id setup correctly?
% rabbitmqctl list_users
% rabbitmqctl list_vhosts
The user and vhost should match what is in the OpenStack configurations.
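For comparison, a sketch of the nova.conf settings that have to line up with the rabbitmqctl output (the values shown are common defaults, not necessarily yours):

```ini
# nova.conf (sketch) -- must match rabbitmqctl list_users / list_vhosts
rabbit_host = controller
rabbit_userid = guest
rabbit_password = <your password>
rabbit_virtual_host = /
```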
On 10/28/13 1:14 PM, Florian Lindner wrote:
Am Montag, 28. Oktober 2013, 12:04:02 schrieb Razique Mahrou
Guys,
I'm trying to figure out the main differences between FWaaS and "Security
Groups".
* Do they complement each other? Or is FWaaS a "Security Groups"
replacement...?
* Can FWaaS manage the "Tenant Namespace Router NAT Table"?
* Does FWaaS manage the same iptables/ip6tables tables at L3 Nam
Cool! Thanks!!
On 28 October 2013 19:16, Aaron Rosen wrote:
> Hi Thiago,
>
> Currently, FWaaS only manages what's allowed in and out on router ports.
> Security profiles are applied to instances ports directly.
>
> FYI: The current FWaaS API is somewhat experimental and policy applies
> globally
Hi Thiago,
Currently, FWaaS only manages what's allowed in and out on router ports.
Security profiles are applied to instances ports directly.
FYI: The current FWaaS API is somewhat experimental and policy applies
globally to all the routers a tenant owns (i.e: no zone concept yet).
Aaron
On Mon
Stackers!
I'm trying to configure my Security Groups, and I'm seeing that the rules
are being applied at the Compute Node OVS ports (iptables / ip6tables) BUT
they have no effect (or are just being ignored?).
I'm using Ubuntu 12.04.3 + Havana from Cloud Archive.
For example:
I have 1 Instance
Guys,
I'm back to using "libvirt_vif_driver =
nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver" (nova-compute.conf) but
the problem persists for "tenant1".
My nova.conf contains:
---
# Network settings
network_api_class = nova.network.neutronv2.api.API
neutron_url = http://contrller-1.mydomain.com
Well,
Now I'm using "firewall_driver = nova.virt.firewall.NoopFirewallDriver" for
both Nova and Neutron (Open vSwitch agent), but the Security Groups rules are
applied yet ignored.
Tips?!
Thanks!
Thiago
On 28 October 2013 21:13, Martinx - ジェームズ wrote:
> Guys,
>
> I'm back using "libvirt_vif_driv
Okay, I think I got it...
Nova should proxy 'Security Groups' calls to Neutron (and not do it by
itself), so it must have:
--- nova.conf ---
firewall_driver = nova.virt.firewall.NoopFirewallDriver
security_group_api = neutron
---
At Neutron OVS Agent (ovs_neutron_plugin.ini), you must set:
---
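On Havana, the OVS agent setting that usually goes with this is the hybrid iptables firewall driver; a sketch (verify against your release's docs):

```ini
# ovs_neutron_plugin.ini (sketch)
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
```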
Guys,
A new test showing that the packets currently do not match any iptables
rules at the compute node, completely bypassing "Security Groups". Look:
* Instance with ONLY port 80 TCP open:
---
root@hypervisor-1:~# *iptables -L neutron-openvswi-i2fa3cfab-a -nv*
Chain neutron-openvswi-i2fa3cfab-
Tune the flavors?
RamFilter?
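To expand on that: the RamFilter rejects hosts without enough free RAM, and overcommit is governed by ram_allocation_ratio. A nova.conf sketch for the scheduler node (the filter list is illustrative; 1.5 is the usual default ratio, and raising it lets a host accept larger flavors at the risk of memory pressure):

```ini
# nova.conf (sketch) -- scheduler filters and RAM overcommit
scheduler_default_filters = RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter
ram_allocation_ratio = 1.5
```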
-- Original --
From: "Shai Ben Naphtali";
Date: Oct 28, 2013
To: "openstack";
Subject: [Openstack] A question about Filter Scheduler in Folsom
I wanted to launch an instance that is set with 16 GB in an OpenStack Folsom env.
a
Of course, but you could use KVM (QEMU) on top of VMware, or LXC on any of
them.
I have not tested it much.
2013/10/28 Pravar Jawalekar
> Dear All,
>
> I am new to open-stack community and want to try some experiments with
> it like creating a storage service on cloud.
>
> I have very basic questio
Hi guys,
I have a patch in the works against this that will hopefully fix your
problems:
https://review.openstack.org/#/c/54212/
One of the gotchas, though, is that if you have already run migration 185
you can't run it again (even if it failed, because it'll try to do
operations that it got p
Padraig, Robert, Chenrui,
It seems to be a problem with nova metadata service.
> Are you using libguestfs to do the injection?
Yes - I wasn't originally, but have installed this on my controller and
compute nodes.
> What's the value of the following in nova.conf?
> libvirt_inject_key
> libvirt_i
Have you migrated to Neutron at the same time as upgrading?
There are installer docs for metadata with Neutron; the troubleshooting
process will depend on the production config you chose - e.g. namespaced
network, or not etc.
Cheers,
Rob
On 29 October 2013 14:14, Bill Owen wrote:
> Padraig, R
Hi,
After VM live migration, I failed to get the console log for the VM. Is that normal?
nova show 737c9b60-a701-4691-aaac-66faf6c2a637|grep hypervisor
| OS-EXT-SRV-ATTR:hypervisor_hostname | compute1
|
root@controller:~# nova console-log 737c9b60-a701
The only way I'm seeing to protect your Havana cloud right now (topology:
per-tenant routers with private networks) is by enabling FWaaS...
That's it! FWaaS installed, tenant network protected.
I think there is a bug with Security Groups in Havana / Neutron...
Comments?!
Regards,
Thiago
On 10/28/13 6:17 PM, Joshua Hesketh wrote:
Hi guys,
I have a patch in the works against this that will hopefully fix your problems:
https://review.openstack.org/#/c/54212/
One of the gotchas, though, is that if you have already run migration 185 you
can't run it again (even if it failed because i
Cool! LBaaS installed! Need a stress test now... =)
My Havana neutron.conf now has:
---
service_plugins = neutron.services.firewall.fwaas_plugin.FirewallPlugin,
neutron.services.loadbalancer.plugin.LoadBalancerPlugin
---
Seems to be running fine... I'll test LBaaS more this week.
Cheers!
T
No - looking at that now.
Thanks,
Bill Owen
From: Robert Collins
To: Bill Owen/Tucson/IBM@IBMUS
Cc: Pádraig Brady , "Chenrui (A)"
, "openstack@lists.openstack.org"
Date: 10/28/2013 06:55 PM
Subject:Re: [Openstack] Key Injection not working after up
Thanks for all your responses.
So libvirt can support iSCSI LUNs, but OpenStack doesn't provide such an option
yet?
Best Regards
-- Ray
On Tue, Oct 29, 2013 at 1:11 AM, Razique Mahroua
wrote:
> The only thing is that you need to create the volumes on your ISCSI
> backend first, so libvirt can use t
That's right: OpenStack is not yet aware of the backend that serves the
/var/lib/nova/instances directory. Not that I'm aware of, anyway :)
- Razique
On Oct 28, 2013, at 22:35, Ray Sun wrote:
> Thanks for all your response.
>
> So libvirt can support ISCSI LUN, but OpenStack didn't provide such options
OK,
on the compute node, can you see the file being populated? (console.log within
the instance's directory)
On Oct 28, 2013, at 22:53, 董建华 wrote:
> root@compute1:/etc/nova# cat nova.conf|grep vnc
>
> vncserver_listen=0.0.0.0
> vncserver_proxyclient_address=10.10.10.182
> novncproxy_base_url=ht
Yes, the file is there. But after migration, this file was zeroed.
root@compute1:/var/lib/nova/instances# nova list
+--+-+++-+--+
| ID | Name| Status | Task State |
root@compute1:/etc/nova# cat nova.conf|grep vnc
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=10.10.10.182
novncproxy_base_url=http://192.168.11.180:6080/vnc_auto.html
10.10.10.182 is the internal IP of the compute node. 192.168.11.180 is the
public IP of the controller node.
From: Ra
Hi,
isn't it related to the option "vncserver_proxyclient_address" in your
nova.conf?
Not sure about that one...
On Oct 28, 2013, at 19:53, ?? wrote:
> Hi,
>
> After VM Live-Migration, i failed to get console-log for the VM, is it normal
> ?
>
> nova show 737c9b60-a701-4691-aaac-
On 29 October 2013 18:31, Bill Owen wrote:
>
> No - looking at that now.
> Thanks,
> Bill Owen
Ok, so you're still on nova-network.
That means you don't have namespaced networks and don't need a
namespace-escaping metadata agent.
So what you should have is regular 'ip route' inspectable routing
I just noticed that the file ownership was changed from libvirt-qemu to root.
What caused this?
Dong Jianhua, System Services Dept.
Hangzhou New Century Information Technology Co., Ltd.
Address: No. 3766 Nanhuan Road, Binjiang District, Hangzhou
Mobile: 13857132818
TEL: 0571-28996
Interesting,
never knew about that.
Thanks!
On Oct 28, 2013, at 13:44, Craig E. Ward wrote:
> On the RabbitMQ server, are vhost and user id setup correctly?
>
> % rabbitmqctl list_users
>
> % rabbitmqctl list_vhosts
>
> The user and vhost should match what is in the OpenStack configurations.
So is there any blueprint for this feature?
Best Regards
-- Ray
On Tue, Oct 29, 2013 at 1:48 PM, Razique Mahroua
wrote:
> That's right: OpenStack is not yet aware of the backend that serves the
> /var/lib/nova/instances directory. Not that I'm aware of, anyway :)
> - Razique
>
> On Oct 28, 2013, at 22:3