On Tue, Jun 4, 2013 at 2:41 PM, Andrii Loshkovskyi
wrote:
> Thank you for the answer.
>
> Chmouel, do you mean the auth_token on Keystone or swift proxy server?
>
> from /etc/keystone/keystone.conf
>
> [filter:token_auth]
> paste.filter_factory = keystone.middleware:TokenAuthMiddleware.factory
>
> fro
On Tue, Jun 4, 2013 at 5:02 PM, Christopher Armstrong <
chris.armstr...@rackspace.com> wrote:
> On Tue, Jun 4, 2013 at 2:44 PM, Christopher Armstrong <
> chris.armstr...@rackspace.com> wrote:
>
>> On Tue, Jun 4, 2013 at 2:28 PM, Sean Dague wrote:
>>
>>> If you are running on recent Ubuntu it appe
Hi,
I installed a Grizzly Quantum server node and a Quantum network node based
on Ubuntu 12.04, and it works.
When I ran the CLI command quantum agent-list, I got the strange result below:
root@nnode01:~# quantum agent-list
+--++
On Tue, Jun 4, 2013 at 2:44 PM, Christopher Armstrong <
chris.armstr...@rackspace.com> wrote:
> On Tue, Jun 4, 2013 at 2:28 PM, Sean Dague wrote:
>
>> If you are running on recent Ubuntu it appears that you basically need
>> MULTI_HOST=True set for guest routing to work. Otherwise vhost_net corru
Hello,
I am trying to deploy an OpenStack Grizzly platform from Red Hat RDO in a
multi-node setup.
Recently, support for network namespaces was added to RHEL 6.4 through the
RDO repo.
(One quick note: be careful with the vswitch from RDO, it doesn't support GRE
tunnels.)
So can somebody explain to me how
On Tue, Jun 4, 2013 at 2:28 PM, Sean Dague wrote:
> If you are running on recent Ubuntu it appears that you basically need
> MULTI_HOST=True set for guest routing to work. Otherwise vhost_net corrupts
> the checksums of the dhcp packets, and you're kind of done.
>
> Been trying to narrow this dow
If you are running on recent Ubuntu it appears that you basically need
MULTI_HOST=True set for guest routing to work. Otherwise vhost_net
corrupts the checksums of the dhcp packets, and you're kind of done.
Been trying to narrow this down as far as possible the last couple of
days (as I have s
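For anyone hitting this on DevStack, a minimal sketch of the localrc fragment follows; MULTI_HOST is the option named above, and the comment explains the workaround, but verify the behaviour against your own setup:

```
# localrc (DevStack)
# Run nova-network (and its dnsmasq) on every compute node, so DHCP
# traffic stays local and never crosses the vhost_net path that
# corrupts the packet checksums described above.
MULTI_HOST=True
```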
Hi Farhan,
it might be an option to push out the lower MTU size using DHCP (option 26)
http://tools.ietf.org/html/rfc2132#section-5
I was able to get dnsmasq to do that without changing any code.
You may wish to try the following in your test environment to see
if your instances request and us
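As a sketch of the dnsmasq side, option 26 is the interface MTU; the value 1454 is a placeholder (pick one matching your tunnel overhead), and the file path is hypothetical:

```
# /etc/quantum/dnsmasq-quantum.conf (hypothetical path)
# DHCP option 26 = interface MTU (RFC 2132, section 5).
# 1454 is a placeholder; use a value that fits your encapsulation overhead.
dhcp-option=26,1454
```

The Quantum DHCP agent can be pointed at a custom dnsmasq config via the dnsmasq_config_file option in dhcp_agent.ini, which avoids changing any code.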
Hi all!
I've been struggling the past couple of days on getting a working DevStack
so I can start contributing to the Heat project. The problems I've been
running into are largely networking related. I've tried a number of
combinations, like using latest git checkouts vs stable/grizzly for the
var
Hi guys, I'm currently setting up a site-to-site VPN for a customer. I've been able to set up the network (I'm running in VLAN mode)... but I'd now like my instances to use the router (a physical device) as a gateway instead of the dnsmasq/bridge IP for that network. I found that thre
Hi all,
I'm new to OpenStack, and I read the manual to study its scheduling.
I see that there are other schedulers (the chance, multi, and simple
schedulers). How can I configure nova to change the scheduler?
By default there is "compute_scheduler_driver =
nova.scheduler.filter_scheduler.FilterSch
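As a sketch, switching schedulers in Grizzly-era nova is a single nova.conf flag; the class paths below follow the naming visible in the snippet above, but verify them against your release:

```
# /etc/nova/nova.conf
[DEFAULT]
# Default (filter/weigh hosts):
# compute_scheduler_driver = nova.scheduler.filter_scheduler.FilterScheduler
# Random host selection instead:
compute_scheduler_driver = nova.scheduler.chance.ChanceScheduler
```

Restart the nova-scheduler service after changing the option.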
Hi Anne,
Thanks for the quick reply.
On 04/06/2013 16:00, Anne Gentle wrote:
If that's all a bit too much, we can work on it from your blog entry,
I've created this doc bug to track the work:
https://bugs.launchpad.net/openstack-manuals/+bug/1187400. Feel free
to assign yourself.
Thanks
On 06/04/2013 02:45 AM, Klaus Schürmann wrote:
Hi Rick,
I found the problem. I placed a hardware balancer in front of the proxy server.
The balancer lost some packets because of a faulty network interface.
Your tip was excellent.
I'm glad it helped. Looking back I see that my math on the cumu
On Mon, Jun 3, 2013 at 2:47 AM, Sylvain Bauza wrote:
> Thanks Jerome for the clarification.
> I just posted a blog post about adding a second volume to Cinder in
> Folsom [3]. Maybe it could be merged with the official Folsom Ubuntu Cinder
> documentation? There are only H/A aspects that are men
On Tue, Jun 04 2013, Swann Croiset wrote:
> the exchange used to send notifications needs to match between the Cinder and
> Ceilometer configurations,
> since Ceilometer has 'cinder' as the default and Cinder has 'openstack' as
> the default.
>
> Try setting in cinder.conf:
> control_exchange=cinder
This is bug
Hi,
The default dnsmasq lease time of 2 minutes floods my /var/log/messages file
with DHCP requests and ACKs. Do you guys think there is a good
reason for the default lease time to be so low? Do you think we should
change the default to a higher value?
--
Thanks,
Rami Vaknin, QE @ Red
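If the default does get raised locally, in Grizzly-era Quantum the knob is dhcp_lease_duration (in seconds); a sketch, assuming the stock config path:

```
# /etc/quantum/quantum.conf
[DEFAULT]
# Default is 120 seconds; one day quiets the DHCP renew chatter
# in /var/log/messages considerably.
dhcp_lease_duration = 86400
```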
The exchange used to send notifications needs to match between the Cinder and
Ceilometer configurations,
since Ceilometer has 'cinder' as the default and Cinder has 'openstack' as
the default.
Try setting in cinder.conf:
control_exchange=cinder
And I don't know what usage audit means here ...
On 04/06/2013
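Putting the two settings from this thread together, the cinder.conf fragment would look roughly like this (section placement is an assumption; check your release's defaults):

```
# /etc/cinder/cinder.conf
[DEFAULT]
# Publish notifications on the exchange Ceilometer listens to by default
control_exchange = cinder
notification_driver = cinder.openstack.common.notifier.rpc_notifier
```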
Thank you for the answer.
Chmouel, do you mean the auth_token on Keystone or swift proxy server?
from /etc/keystone/keystone.conf
[filter:token_auth]
paste.filter_factory = keystone.middleware:TokenAuthMiddleware.factory
from /etc/swift/proxy-server.conf
[filter:authtoken]
paste.filter_factory = k
Heyho guys :)
I have a little problem with policy settings in Keystone. I've created a
new rule in my policy file and restarted Keystone, but I still don't
have privileges.
Example:
keystone user-create --name kadmin --pw lala
keystone user-role-add --
keystone role-list --user kadmin --role Key
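For reference, a minimal sketch of what such a rule can look like in Keystone's policy.json; the rule and role names (kadmin_rule, KeystoneAdmin) are hypothetical, the target names follow the v3 identity API style, and the user still needs the role actually assigned in the relevant tenant:

```json
{
    "kadmin_rule": "role:KeystoneAdmin",
    "identity:create_user": "rule:kadmin_rule",
    "identity:list_users": "rule:kadmin_rule"
}
```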
I have seen this when Keystone is too busy validating tokens.
Getting Keystone behind Apache or scaling up Keystone makes things
better (and make sure you are using the Swift memcache connection in
auth_token).
Chmouel.
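A sketch of the auth_token wiring being suggested, assuming the Grizzly-era keystoneclient middleware; the 'cache = swift.cache' line is what makes auth_token reuse the proxy's existing memcache pool:

```
# /etc/swift/proxy-server.conf
[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
# Cache validated tokens in the proxy's memcache instead of
# round-tripping to Keystone on every request.
cache = swift.cache
```

This assumes the cache (memcache) middleware appears before authtoken in the proxy pipeline.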
On Mon, Jun 3, 2013 at 8:15 PM, Andrii Loshkovskyi
wrote:
> Hello,
>
> I w
Hi Rick,
I found the problem. I placed a hardware balancer in front of the proxy server.
The balancer lost some packets because of a faulty network interface.
Your tip was excellent.
Thanks
Klaus
-----Original Message-----
From: Rick Jones [mailto:rick.jon...@hp.com]
Sent: Friday, 3
Robert Collins wrote:
> What if we were to always do a release after a security advisory?
We don't do a server "stable release" after each security advisory, as it
doesn't significantly help spread the fix, but I agree that for
client libraries (where the PyPI releases are the main form of
downs
I have already added
notification_driver=cinder.openstack.common.notifier.rpc_notifier and restarted
the cinder-volume service.
How can I enable usage audit?
I have already tried things which I have described in my previous query
https://lists.launchpad.net/openstack/msg24112.html
On Tue, Jun 04 2013, Anshul Gangwar wrote:
> I want to receive cinder volume meters from ceilometer. What changes shall I
> make in the localrc file of devstack to achieve this?
You need:
notification_driver=cinder.openstack.common.notifier.rpc_notifier
and usage audit enabled and running.
--
J
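Concretely, "usage audit enabled and running" means periodically invoking the audit script after turning on the notifier; a sketch, assuming the Grizzly binary name cinder-volume-usage-audit and your local install path:

```
# /etc/cinder/cinder.conf
[DEFAULT]
notification_driver = cinder.openstack.common.notifier.rpc_notifier

# cron entry (e.g. hourly) to emit the exists/usage notifications
# that Ceilometer's collector turns into volume meters:
# 0 * * * * /usr/local/bin/cinder-volume-usage-audit
```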
The poll closed a few minutes ago, the results are in:
1. Icehouse: 102
2. Ichang: 73
3. Inverness: 58
The *next* release cycle for OpenStack, starting in November 2013 after
we conclude the current release cycle ("Havana"), will therefore be
called "Icehouse"!
https://launchpad.net/~openstac
Good job guys.
I reckon we might make users' lives easier if we changed the naming strategy for
default security groups to 'default-$tenant_id'.
On the other hand, this is not a priority, since as an admin user I guess you
can already get that information by properly choosing the fields to display.
Salvatore
Aaron,
It really works after I added the ICMP rule for my second tenant. Thanks for
your help!
Leon
From: Aaron Rosen [mailto:aro...@nicira.com]
Sent: June 4, 2013 10:37
To: Li, Leon
Cc: openstack-operat...@lists.openstack.org; openstack@lists.launchpad.net
(openstack@lists.launchpad.net)
Subject: