On 2014-4-10 21:21, gustavo panizzo wrote:
Hello
I have a use case where I have to use two provider networks over the
same physical NIC.
My provider gives me two network ranges (each with its own netmask and
gateway) on the same NIC, without VLAN or tunneling.
I need to expose both network
Ha, right you are, my man. I had a typo in my internal URL endpoint for
cinder. After fixing that, and also specifying:
"cinder_catalog_info=volume:cinder:internalURL"
in my nova.conf on my compute nodes, it started working. Thanks a bunch!
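For anyone following along, this is roughly how the stanza ends up in nova.conf (a minimal sketch, assuming the option lives in [DEFAULT] and the volume service is registered as "cinder" in the Keystone catalog):
[DEFAULT]
# <service_type>:<service_name>:<endpoint_type> - which Cinder endpoint
# nova-compute should look up in the service catalog
cinder_catalog_info = volume:cinder:internalURL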
-erich
On 04/09/14 23:22, John Griffith wrote:
OpenStack Security Advisory: 2014-013
CVE: CVE-2014-2828
Date: April 10, 2014
Title: Keystone DoS through V3 API authentication chaining
Reporter: Abu Shohel Ahmed (Ericsson)
Products: Keystone
Versions: from 2013.1 to 2013.2.3
Description:
Abu Shohel Ahmed from Ericsson reported a vulnerability i
Well, once you remove the old host from the "memcache_servers" list in the
[filter:cache] section of every proxy-server.conf and restart
those services, they won't make any more connections to the unreferenced
memcached server. Internal proxies might... you should double-check your
/etc/swift
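For reference, the stanza in question looks roughly like this (the host names below are placeholders, not taken from your deployment):
[filter:cache]
use = egg:swift#memcache
# only the hosts listed here will be contacted after a restart
memcache_servers = proxy01:11211,proxy02:11211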
OpenStack Security Advisory: 2014-012
CVE: CVE-2014-0162
Date: April 10, 2014
Title: Remote code execution in Glance Sheepdog backend
Reporter: Paul McMillan (Nebula)
Products: Glance
Versions: from 2013.2 to 2013.2.3
Description:
Paul McMillan from Nebula reported a vulnerability in Glance Sheepd
I'm wondering why the swift proxy hosts keep memcached connections
open to other hosts even after I remove the memcached settings from
proxy-server.conf and restart the service.
I'm seeing what are effectively ghost connections in the logs. I have
a proxy that I've removed from servi
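In case it's useful, I've been checking which memcached endpoints the proxy still has open with something like this (assuming memcached is on its default port 11211):
# list established connections to memcached and the owning process
ss -tnp | grep ':11211'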
Hi All,
Running 2013.2.2 on Ubuntu 12.04. I've recently noticed some (10%
ish) of my compute nodes exhibiting unusually high "system" CPU
utilization over the past few days.
Looking a little more closely, I see that there are a number of
'qemu-nbd -c' processes trying to connect deleted instance dr
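For anyone seeing the same thing, this is roughly how I've been finding and clearing the stragglers (the /dev/nbd0 device is only an example; check which devices are actually mapped before disconnecting anything):
# list lingering qemu-nbd attach processes and the images they reference
ps -ef | grep '[q]emu-nbd -c'
# detach a stale nbd device once you are sure nothing is using it
qemu-nbd -d /dev/nbd0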
Hi Chathura,
Thanks for your doc submission and question, but I've had others look at it
and they agree this is specialized and belongs in your blog post.
Do keep thinking of ways you can use OpenStack and share with the community
though!
Thanks,
Anne
On Sun, Apr 6, 2014 at 6:49 PM, Chathura Sar
On 04/10/2014 10:57 AM, Ageeleshwar Kandavelu wrote:
>
> How about this:
> in plugin.ini, set
> bridge_mappings = Physnet1:br-ex1,Physnet2:br-ex2
>
> then go on and create a proxy bridge to emulate two networks on the same nic
> ovs-vsctl add-br br-proxy
> ovs-vsctl add-port br-proxy ethx
How about this:
in plugin.ini, set
bridge_mappings = Physnet1:br-ex1,Physnet2:br-ex2
then go on and create a proxy bridge to emulate two networks on the same nic
ovs-vsctl add-br br-proxy
ovs-vsctl add-port br-proxy ethx
ovs-vsctl add-br br-ex1
ip link add name ex1-br-proxy type veth peer
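A rough sketch of how that typically continues (assuming the veth peer ends up named proxy-br-ex1; these port additions and the br-ex2 repetition are assumptions, not exact commands):
ovs-vsctl add-port br-ex1 ex1-br-proxy     # one end of the veth pair into br-ex1
ovs-vsctl add-port br-proxy proxy-br-ex1   # the other end into the shared br-proxy
ip link set ex1-br-proxy up
ip link set proxy-br-ex1 up
# repeat with a second veth pair for br-ex2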
Hello
I have a use case where I have to use two provider networks over the
same physical NIC.
My provider gives me two network ranges (each with its own netmask and
gateway) on the same NIC, without VLAN or tunneling.
I need to expose both network ranges to the VMs.
My initial thought was to
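If it helps, a sketch of what exposing them as two flat provider networks might look like once each physical network is mapped to its own bridge (network names, subnet ranges and the Physnet1/Physnet2 labels here are illustrative placeholders):
# one flat provider network per physical network mapping
neutron net-create ext-net1 --shared --provider:network_type flat --provider:physical_network Physnet1
neutron subnet-create ext-net1 198.51.100.0/24 --name ext-subnet1 --gateway 198.51.100.1
neutron net-create ext-net2 --shared --provider:network_type flat --provider:physical_network Physnet2
neutron subnet-create ext-net2 203.0.113.0/24 --name ext-subnet2 --gateway 203.0.113.1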
Steps to debug (a few example commands follow below):
1. Understand where exactly the problem lies.
* Are you unable to reach the floating IPs of instances?
* First start a continuous ping from a machine outside OpenStack to
the floating IP.
* Go to the network node. Find the interface of the router that att
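A few example commands for those first checks (the qrouter UUID below is a placeholder; substitute the router ID shown by ip netns):
# from a machine outside openstack, keep a ping running against the floating ip
ping <floating-ip>
# on the network node, list the router namespaces and inspect the right one
ip netns list
ip netns exec qrouter-<router-uuid> ip addr
ip netns exec qrouter-<router-uuid> ping <floating-ip>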
Thanks Robert,
Yes, the other components still work; openvswitch works fine, as no flows are
dropped.
I don't even see any errors in the logs, but it still stops working.
Also, after the restart it starts working fine, so I don't think space
in the RabbitMQ message queue is the problem.
Regards
Akshat
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1
OpenSSL Heartbleed vulnerability can lead to OpenStack compromise
- ---
### Summary ###
A vulnerability in OpenSSL can lead to leaking of confidential data
protected by SSL/TLS in an OpenStack deployment.
### Affected Services / Software ###
Grizzly,
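A quick way to check whether a node carries an affected OpenSSL build (releases 1.0.1 through 1.0.1f are vulnerable; the dpkg line assumes a Debian/Ubuntu host):
openssl version
# on Debian/Ubuntu hosts, also check the installed library package
dpkg -l libssl1.0.0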
Hello everyone,
Due to various release-critical issues detected in Heat icehouse RC1, a
new release candidate was just generated. You can find a list of the 18
bugs fixed and a link to the RC2 source tarball at:
https://launchpad.net/heat/icehouse/icehouse-rc2
Unless new release-critical issues