{"tenantName": "bnlcloud", "passwordCredentials":
{"username": "xzhao", "password": "passwd"}}}
Authorization Failed: HTTPConnectionPool(host='10.255.2.134',
port=35357): Request timed out. (timeout=600.0)
Thanks,
Xin
Hello,
I would like to use a MySQL DB on its own host and have all
OpenStack daemons talk to it. So I set up one MySQL DB, dumped and reloaded
the currently running DB into it, changed the sql "connection" setting in
the keystone config file to point to the new IP, and restarted the keystone
service, but
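For reference, a sketch of that change on a Grizzly-era keystone (the host IP, DB user, password, and database name below are placeholders, not the poster's actual values):

```ini
# /etc/keystone/keystone.conf -- point keystone at the new MySQL host
[sql]
connection = mysql://keystone:KEYSTONE_DBPASS@10.0.0.5/keystone
```

After editing, restart the service (e.g. `service openstack-keystone restart` on RHEL6) so the new connection string takes effect.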
Testing, please ignore ...
Xin
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
/python2.6/site-packages/quantumclient/client.py", line
193, in authenticate
raise exceptions.Unauthorized(message=body)
Unauthorized: [Errno 111] ECONNREFUSED
Thanks,
Xin
On 2/7/2014 11:57 AM, Xin Zhao wrote:
Hello,
I have an issue with accessing metadata from instances.
I am running a grizzly
which is used as the
keystone endpoint for nova, so I integrated the start and stop of the
nova metadata service into the scripts it calls on a state change,
with further assistance from an external check script, executed by
Nagios, that attempts an auto-recovery on failure.
-- Sent from my HP Pre3
Hello,
I am running a Grizzly multi-host test cluster on RHEL6. On the controller
node, there are several keystone-signing- files automatically created
by the daemons. But if some disk cleanup scripts kick in and remove them,
that will cause problems for the services. So I wonder if I can move t
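Assuming the files in question are the auth_token middleware's `/tmp/keystone-signing-*` directories, they can be relocated by setting `signing_dir` for each consuming service; a sketch (the path and config file below are examples, not a confirmed fix for this poster's setup):

```ini
# e.g. /etc/nova/nova.conf -- move the signing material out of /tmp so
# tmp-cleanup jobs cannot delete it; the service recreates the contents
# on restart if they are missing
[keystone_authtoken]
signing_dir = /var/cache/nova/keystone-signing
```

The directory must be writable by the service user, and the service needs a restart to pick up the new location.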
namespace, that's probably why it wasn't working before.
Thanks,
Xin
On 11/15/2013 5:16 PM, Xin Zhao wrote:
Thanks for all the replies; as Paul said, the dnsmasq version doesn't
appear to be the issue here.
I also tested dns between 2 different VM subnets, instances can ping
ea
On 11/18/2013 4:15 AM, sylecn wrote:
Thanks for all the hints. Finally I have a working network.
The biggest problem was that neutron.agent.metadata.agent had bad auth
params set in the metadata_agent.ini config file.
The metadata agent log file did not catch my attention earlier.
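For anyone hitting the same thing, these are the auth-related settings the Grizzly metadata agent reads; a sketch with placeholder values (IPs, password, and shared secret below are examples, not the poster's config):

```ini
# /etc/quantum/metadata_agent.ini -- the agent authenticates to keystone
# with these credentials and proxies requests to the nova metadata service
[DEFAULT]
auth_url = http://10.0.0.10:35357/v2.0
auth_region = RegionOne
admin_tenant_name = services
admin_user = quantum
admin_password = QUANTUM_PASS
nova_metadata_ip = 10.0.0.10
nova_metadata_port = 8775
metadata_proxy_shared_secret = METADATA_SECRET
```

The `metadata_proxy_shared_secret` must match the value in nova.conf, or requests are rejected even when the keystone credentials are correct.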
Another problem is net
x86_64.rpm
Regards,
---
JuanFra
2013/11/14 Xin Zhao <xz...@bnl.gov>
Hi Terry,
I upgraded to version 2013.1.4 on all 3 hosts
(controller/network/compute). Unfortunately that doesn't solve the
DNS issue for instances.
In the dhcp-agent.log, there is a mes
Hi Terry,
I upgraded to version 2013.1.4 on all 3 hosts
(controller/network/compute). Unfortunately that doesn't solve the DNS
issue for instances.
In the dhcp-agent.log, there is a message of:
WARNING [quantum.agent.linux.dhcp] FAILED VERSION REQUIREMENT FOR
DNSMASQ. DHCP AGENT MAY NOT RUN
.
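The version check behind that warning can be reproduced with a version-aware comparison; a minimal sketch, assuming the Grizzly DHCP agent's minimum is dnsmasq 2.59 and using 2.48 (the stock RHEL6 build) as an example installed version:

```shell
# Sketch: compare an installed dnsmasq version against the assumed 2.59
# minimum; sort -V orders version strings numerically, so the oldest of
# the two tells us which side of the requirement we are on.
required="2.59"
installed="2.48"   # on a real node: dnsmasq --version | head -1 | awk '{print $3}'
oldest=$(printf '%s\n' "$required" "$installed" | sort -V | head -1)
if [ "$oldest" = "$required" ]; then
  verdict="ok"
else
  verdict="too old"
fi
echo "dnsmasq $installed vs required $required: $verdict"
```

With the example values this reports the installed version as too old, which matches the "MAY NOT RUN" warning above.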
Thanks,
Xin
On 11/12/2013 6:55 PM, Remo Mattei wrote:
RH does have firewall rules you may want to see if DNS is going out. I
know you said that it goes outside but you can also check the order if
in nsswitch.conf etc..
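The nsswitch.conf check Remo suggests boils down to inspecting the `hosts:` line; a small self-contained sketch (it writes a sample file rather than touching the real `/etc/nsswitch.conf`):

```shell
# Sketch: name resolution only consults DNS if "dns" appears on the
# hosts: line; a sample config is used here for illustration.
conf=$(mktemp)
printf 'hosts: files dns\n' > "$conf"
hosts_line=$(grep '^hosts:' "$conf")
case "$hosts_line" in
  *dns*) echo "dns is in the lookup order" ;;
  *)     echo "dns missing from hosts: line" ;;
esac
rm -f "$conf"
```

On a real node, run the `grep` against `/etc/nsswitch.conf` itself; if `dns` is absent or ordered after a failing source, resolution can break even when raw DNS queries to the server succeed.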
Have a good day,
ciao
--
Remo Mattei
November 12, 2013 at 14:32
Hello,
I have a multi-host grizzly RHEL6 install, using OVS. From the instance,
I can ping external ips, but DNS resolv doesn't work, it only works for
other instances on the VM network.
If I do subnet-update to add public DNS server ips to the vm network,
DNS resolv works for external hosts,
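The subnet-update step described above looks roughly like this in the Grizzly quantum CLI (the subnet ID and resolver IPs are placeholders; 8.8.8.8/8.8.4.4 are just example public resolvers):

```shell
# Sketch: attach nameservers to the tenant subnet so dnsmasq hands them
# out to instances via DHCP (takes effect on lease renewal/reboot)
quantum subnet-update <SUBNET_ID> \
  --dns_nameservers list=true 8.8.8.8 8.8.4.4
```

Instances pick up the change when their DHCP lease is renewed, so existing VMs may need `dhclient` rerun or a reboot.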
version that
comes with RHOS or RDO that might explain it.
-- Sent from my HP Pre3
--------
On Nov 5, 2013 17:33, Xin Zhao wrote:
Hi Thomas,
Thanks for the reply, here is the info of the drivers:
1) NIC for the VM network connection:
running the stock RedHat kernel not the patched version
that comes with RHOS or RDO that might explain it.
-- Sent from my HP Pre3
On Nov 5, 2013 17:33, Xin Zhao wrote:
Hi Thomas,
Thanks for the reply, here is the info
supports-register-dump: no
supports-priv-flags: no
Xin
On 11/5/2013 4:55 PM, Thomas Graf wrote:
On 11/05/2013 10:43 PM, Xin Zhao wrote:
Add the general Openstack list, sorry for folks who are on both lists...
On 11/5/2013 2:42 PM, Xin Zhao wrote:
Hello,
On my grizzly quantum/OVS network node, after I sta
Add the general Openstack list, sorry for folks who are on both lists...
On 11/5/2013 2:42 PM, Xin Zhao wrote:
Hello,
On my grizzly quantum/OVS network node, after I start the
quantum-openvswitch-agent, the system log shows errors as below,
and it repeats every second since then... and the
Hello,
I've been puzzled by the "physical_network" name for the physical
networks. Our grizzly cluster follows the typical
3 nodes setup that is described in many documents, with one physical
network associated with the internal data network,
and another physical network associated with the public net
Hello,
When I try to create a network, using the admin user's credential, it
throws "403 forbidden" error:
[root@nethost ~(keystone_admin)]# quantum net-create public01
--router:external True --provider:network_type flat
--provider:physical_network physnet1
(403, 'Forbidden')
[root@nethost
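One common cause of a 403 here is that quantum's policy file restricts the `provider:*` and `router:external` attributes to the admin role; a sketch of how to check (the file path is the usual RHEL6/grizzly location, and the user/tenant names in the role listing are examples):

```shell
# Sketch: see what role quantum's policy requires for provider networks,
# then confirm the credentials in use actually carry that role
grep -E 'provider|external' /etc/quantum/policy.json
keystone user-role-list --user admin --tenant admin
```

If the role is present and the 403 persists, the mismatch is often that quantum-server validates the token against a different keystone endpoint than the CLI used.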
Hello,
In the following command to configure a provider network for grizzly L3
agent:
(as documented at
https://access.redhat.com/site/documentation/en-US/Red_Hat_OpenStack/3/html/Installation_and_Configuration_Guide/Configuring_a_Provider_Network1.html)
quantum subnet-create --gateway GA
IPv6, this is much easier to achieve, since there
is no need to deal with creepy NAT rules. This means that your
endpoints will always have a public IP address (if you have IPv6).
Keep it in mind!
Cheers!
Thiago
On 9 October 2013 12:28, Xin Zhao <xz...@bnl.gov> wro
e network, and "public"
the "out-facing" IP
Razique
On 7 Oct 2013, at 17:30, Xin Zhao <xz...@bnl.gov> wrote:
> Hello,
>
> Our openstack controller has two IPs, one out-facing, the other
is internal only (on the managemen
Hello,
Our openstack controller has two IPs, one out-facing, the other is
internal only (on the management network).
When it comes to define service endpoints in keystone, the publicurl
entry should be the out-facing IP, and the
internalurl and adminurl should be the internal IP, right?
Thank
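Yes, that split is the usual pattern; a sketch of the corresponding `keystone endpoint-create` call (the IPs, region, and service ID below are placeholders, with nova's compute API used as the example service):

```shell
# Sketch: publicurl uses the out-facing IP, internalurl/adminurl use the
# management-network IP, so tenant traffic and admin/service traffic stay
# on their respective networks
keystone endpoint-create \
  --region RegionOne \
  --service-id <NOVA_SERVICE_ID> \
  --publicurl   'http://203.0.113.10:8774/v2/%(tenant_id)s' \
  --internalurl 'http://10.0.0.10:8774/v2/%(tenant_id)s' \
  --adminurl    'http://10.0.0.10:8774/v2/%(tenant_id)s'
```

The services themselves then pick which URL to use via their endpoint-type setting (most default to internalurl or publicurl depending on the client).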
Hello,
I have a similar question. We are considering upgrading to grizzly
using the OVS/vlan model. If I understand the doc correctly,
all the external traffic and internal intra-virtual network traffic go
through the network host, which makes the network host a
single point of failure, and high l