Re: [Openstack] standalone mysql

2014-02-14 Thread Xin Zhao
;: {"tenantName": "bnlcloud", "passwordCredentials": {"username": "xzhao", "password": "passwd"}}} Authorization Failed: HTTPConnectionPool(host='10.255.2.134', port=35357): Request timed out. (timeout=600.0) Thanks, Xin Re

[Openstack] standalone mysql

2014-02-14 Thread Xin Zhao
Hello, I would like to use a mysql DB, from its own host, and have all openstack daemons talk to it. So I set up one mysql DB, dumped and reloaded the currently running DB into it, changed the sql "connection" setting in the keystone config file to point to the new IP, and restarted the keystone service, but
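For a Grizzly-era keystone.conf, the change described above would look roughly like this (a sketch; the host, user, password, and database name are placeholders, not taken from the thread):

```ini
# /etc/keystone/keystone.conf -- point keystone at the standalone DB host
[sql]
# Format: mysql://USER:PASSWORD@DB_HOST/DATABASE (all values are placeholders)
connection = mysql://keystone:KEYSTONE_DBPASS@10.0.0.10/keystone
```

The same "connection" line exists in each service's config (nova.conf, glance-registry.conf, etc.), so every daemon that talks to the DB needs the same edit and a restart.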

[Openstack] test

2014-02-08 Thread Xin Zhao
Testing, please ignore ... Xin ___ Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack Post to : openstack@lists.openstack.org Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

Re: [Openstack] metadata service not working for VMs

2014-02-07 Thread Xin Zhao
/python2.6/site-packages/quantumclient/client.py", line 193, in authenticate raise exceptions.Unauthorized(message=body) Unauthorized: [Errno 111] ECONNREFUSED Thanks, XIn On 2/7/2014 11:57 AM, Xin Zhao wrote: Hello, I have an issue with accessing metadata from instances. I am running a grizzly

[Openstack] metadata service not working for VMs

2014-02-07 Thread Xin Zhao
which is used as the keystone endpoint for nova, so I integrated the start and stop of the nova metadata service into the scripts it calls on a state change, with further assistance from an external check script, executed by Nagios, which attempts an auto-recovery on failure. -- Sent from my HP Pr

[Openstack] Can I move keystone-signing-XXX files out of /tmp ?

2013-12-24 Thread Xin Zhao
Hello, I am running a Grizzly multi-host test cluster on RHEL6. On the controller node, there are several keystone-signing- files automatically created by the daemons. But if some disk cleanup scripts kick in and remove them, that will cause problems for the services. So I wonder if I can move t
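One way to move the signing files out of /tmp is the signing_dir option of the keystone auth_token middleware, set in each service's paste config; a sketch for nova (the path is a placeholder, and the directory must pre-exist):

```ini
# /etc/nova/api-paste.ini (Grizzly) -- relocate keystone signing files
[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
# Directory must exist and be writable by the service user (placeholder path)
signing_dir = /var/cache/nova/keystone-signing
```

A cleanup-exempt path like /var/cache also avoids the tmpwatch-style removal the poster describes.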

Re: [Openstack] [rhos-list] dns problem for instances

2013-11-27 Thread Xin Zhao
namespace, that's probably why it wasn't working before. Thanks, Xin On 11/15/2013 5:16 PM, Xin Zhao wrote: Thanks for all the reply, as Paul said, the dnsmasq version doesn't appear to be the issue here. I also tested dns between 2 different VM subnets, instances can ping ea

Re: [Openstack] [neutron] provider router with private networks, can not ping private IP and floating IP [RESOLVED]

2013-11-18 Thread Xin Zhao
On 11/18/2013 4:15 AM, sylecn wrote: Thanks for all the hints. Finally I have a working network. The biggest problem was that neutron.agent.metadata.agent had bad auth params set in the metadata_agent.ini config file. The metadata agent log file did not catch my attention earlier. Another problem is net
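The auth params the poster mentions live in metadata_agent.ini; a sketch of a working Grizzly-era config (all IPs, tenant/user names, and secrets are placeholders):

```ini
# /etc/quantum/metadata_agent.ini -- auth settings the metadata agent uses
[DEFAULT]
auth_url = http://CONTROLLER_IP:35357/v2.0
auth_region = RegionOne
admin_tenant_name = services
admin_user = quantum
admin_password = QUANTUM_PASS
nova_metadata_ip = CONTROLLER_IP
nova_metadata_port = 8775
# Must match the shared secret configured on the nova side
metadata_proxy_shared_secret = SHARED_SECRET
```

A wrong admin_password or auth_url here typically surfaces as the Unauthorized/ECONNREFUSED errors seen earlier in this thread.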

Re: [Openstack] [rhos-list] dns problem for instances

2013-11-15 Thread Xin Zhao
x86_64.rpm Regards, --- JuanFra 2013/11/14 Xin Zhao <xz...@bnl.gov> Hi Terry, I upgraded to version 2013.1.4 on all 3 hosts (controller/network/compute). Unfortunately that doesn't solve the DNS issue for instances. In the dhcp-agent.log, there is a mes

Re: [Openstack] [rhos-list] dns problem for instances

2013-11-14 Thread Xin Zhao
Hi Terry, I upgraded to version 2013.1.4 on all 3 hosts (controller/network/compute). Unfortunately that doesn't solve the DNS issue for instances. In the dhcp-agent.log, there is a message of: WARNING [quantum.agent.linux.dhcp] FAILED VERSION REQUIREMENT FOR DNSMASQ. DHCP AGENT MAY NOT RUN

Re: [Openstack] dns problem for instances

2013-11-14 Thread Xin Zhao
. Thanks, Xin On 11/12/2013 6:55 PM, Remo Mattei wrote: RH does have firewall rules you may want to see if DNS is going out. I know you said that it goes outside but you can also check the order if in nsswitch.conf etc.. Have a good day, ciao -- Remo Mattei November 12, 2013 at 14:32

[Openstack] dns problem for instances

2013-11-12 Thread Xin Zhao
Hello, I have a multi-host grizzly RHEL6 install, using OVS. From the instance, I can ping external IPs, but DNS resolution doesn't work; it only works for other instances on the VM network. If I do subnet-update to add public DNS server IPs to the VM network, DNS resolution works for external hosts,
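The subnet-update step the poster describes would look roughly like this with the Grizzly quantum client (the subnet ID is a placeholder; the resolver IPs are examples, not from the thread):

```shell
# Add public resolvers to the VM subnet so dnsmasq hands them out via DHCP.
# SUBNET_ID is a placeholder; list=true lets multiple nameservers be passed.
quantum subnet-update SUBNET_ID --dns_nameservers list=true 8.8.8.8 8.8.4.4
```

Instances need a DHCP lease renewal (or reboot) before /etc/resolv.conf picks up the new servers.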

Re: [Openstack] [rhos-list] system panic after starting OVS agent

2013-11-08 Thread Xin Zhao
version that comes with RHOS or RDO that might explain it. -- Sent from my HP Pre3 -------- On Nov 5, 2013 17:33, Xin Zhao wrote: Hi Thomas, Thanks for the reply, here is the info of the drivers: 1) NIC for the VM network connection:

Re: [Openstack] [rhos-list] system panic after starting OVS agent

2013-11-06 Thread Xin Zhao
running the stock RedHat kernel not the patched version that comes with RHOS or RDO that might explain it. -- Sent from my HP Pre3 On Nov 5, 2013 17:33, Xin Zhao wrote: Hi Thomas, Thanks for the reply, here is the info

Re: [Openstack] [rhos-list] system panic after starting OVS agent

2013-11-05 Thread Xin Zhao
gister-dump: no supports-priv-flags: no Xin On 11/5/2013 4:55 PM, Thomas Graf wrote: On 11/05/2013 10:43 PM, Xin Zhao wrote: Add the general Openstack list, sorry for folks who are on both lists... On 11/5/2013 2:42 PM, Xin Zhao wrote: Hello, On my grizzly quantum/OVS network node, after I sta

Re: [Openstack] [rhos-list] system panic after starting OVS agent

2013-11-05 Thread Xin Zhao
Add the general Openstack list, sorry for folks who are on both lists... On 11/5/2013 2:42 PM, Xin Zhao wrote: Hello, On my grizzly quantum/OVS network node, after I start the quantum-openvswitch-agent, the system log shows errors as below, and it repeats every second since then... and the

[Openstack] question on physical name for provider network

2013-10-16 Thread Xin Zhao
Hello, I've been puzzled by the "physical_name" for the physical networks. Our grizzly cluster follows the typical 3 nodes setup that is described in many documents, with one physical network associated with the internal data network, and another physical network associated with the public net
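In the Grizzly OVS plugin, the "physical network" name is just an arbitrary label that ties the per-node plugin config together; a sketch of the typical two-network setup the poster describes (labels and bridge names are the commonly documented ones, not taken from this thread):

```ini
# /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini
[OVS]
tenant_network_type = vlan
# "physdata" and "physext" are arbitrary labels; each must match across
# network_vlan_ranges, bridge_mappings, and provider:physical_network.
network_vlan_ranges = physdata:1000:2999,physext
bridge_mappings = physdata:br-eth1,physext:br-ex
```

The label never appears on the wire; only the bridge it maps to on each host matters.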

[Openstack] quantum net-create throws 403 forbidden

2013-10-10 Thread Xin Zhao
Hello, When I try to create a network, using the admin user's credential, it throws "403 forbidden" error: [root@nethost ~(keystone_admin)]# quantum net-create public01 --router:external True --provider:network_type flat --provider:physical_network physnet1 (403, 'Forbidden') [root@nethost

[Openstack] define IP_RANGE for L3 agent

2013-10-10 Thread Xin Zhao
Hello, In the following command to configure a provider network for grizzly L3 agent: (as documented at https://access.redhat.com/site/documentation/en-US/Red_Hat_OpenStack/3/html/Installation_and_Configuration_Guide/Configuring_a_Provider_Network1.html) quantum subnet-create --gateway GA
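The command in question, reconstructed with placeholder values (network name, CIDR, and IPs are illustrative only); the allocation-pool start/end bound the range floating IPs are drawn from:

```shell
# External/provider subnet for the Grizzly L3 agent (placeholder values).
# start/end define the pool the agent may allocate floating IPs from;
# DHCP is disabled because this subnet is not served to instances directly.
quantum subnet-create --gateway 192.168.1.1 \
    --allocation-pool start=192.168.1.100,end=192.168.1.200 \
    public01 192.168.1.0/24 --enable_dhcp=False
```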

Re: [Openstack] publicurl definition in keystone

2013-10-10 Thread Xin Zhao
IPv6, this is much easier to achieve, since there is no need to deal with creepy NAT rules. Which means that your endpoints will always have a public IP address (if you have IPv6). Keep it in mind! Cheers! Thiago On 9 October 2013 12:28, Xin Zhao <xz...@bnl.gov> wro

Re: [Openstack] publicurl definition in keystone

2013-10-09 Thread Xin Zhao
e network, and "public" the "out-facing" IP Razique On Oct 7, 2013, at 17:30, Xin Zhao <xz...@bnl.gov> wrote: > Hello, > > Our openstack controller has two IPs, one out-facing, the other is internal only (on the managemen

[Openstack] publicurl definition in keystone

2013-10-07 Thread Xin Zhao
Hello, Our openstack controller has two IPs, one out-facing, the other is internal only (on the management network). When it comes to define service endpoints in keystone, the publicurl entry should be the out-facing IP, and the internalurl and adminurl should be the internal IP, right? Thank
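That is the usual split; a sketch of an endpoint registration with the Grizzly keystone client (the UUID and IPs are placeholders, and the URLs shown are nova-style):

```shell
# Dual-homed controller: publicurl on the out-facing IP,
# internalurl/adminurl on the management-network IP (all placeholders).
keystone endpoint-create --region RegionOne \
    --service-id SERVICE_UUID \
    --publicurl   'http://PUBLIC_IP:8774/v2/%(tenant_id)s' \
    --internalurl 'http://MGMT_IP:8774/v2/%(tenant_id)s' \
    --adminurl    'http://MGMT_IP:8774/v2/%(tenant_id)s'
```

Clients then pick the URL matching the interface they are told to use (public vs. internal vs. admin).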

Re: [Openstack] Changing quantum/neutron OVS Plugin

2013-09-13 Thread Xin Zhao
Hello, I have a similar question. We are considering upgrading to grizzly using the OVS/vlan model. If I understand the doc correctly, all the external traffic and internal intra-virtual-network traffic go through the network host, which makes the network host a single point of failure, and high l