Hi,
Which network mode do you use?
I think I get the same problem in VLAN network mode. Can you check your
'nova-api' logs?
The DNAT rule in the nat PREROUTING chain changes the destination of metadata
packets, but the SNAT rule in the POSTROUTING chain changes the source IP of
all outgoing traffic.
So, the me
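In concrete terms, the two rules being described look roughly like this (a sketch, not the exact rules from this deployment; the nova-api address 192.168.124.47:8775 and the fixed range 10.0.0.0/8 are placeholders):

```shell
# DNAT in the nat table's PREROUTING chain: rewrite the destination of
# metadata requests (169.254.169.254:80) to the nova-api host.
iptables -t nat -A PREROUTING -d 169.254.169.254/32 -p tcp --dport 80 \
  -j DNAT --to-destination 192.168.124.47:8775

# SNAT in the nat table's POSTROUTING chain: rewrite the source IP of
# all outgoing instance traffic.
iptables -t nat -A POSTROUTING -s 10.0.0.0/8 \
  -j SNAT --to-source 192.168.124.47
```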
Hi all,
I installed OpenStack Swift on 4 nodes and it works correctly.
I can access and store files when I'm connected on the proxy node,
but I can't store files or do anything when I'm not logged in on the proxy node.
Can anyone please help me?
Thanks in advance,
Best regards,
Khaled
Hi all,
I did some further testing and found something that works for me as a
workaround.
Since I was getting an error like this on XE6:
[20111003T08:57:36.775Z|debug|xe1|398170 inet-RPC|VDI.resize_online
R:3fa63f98fab0|audit] VDI.resize_online: VDI =
'2e79ace3-5cb3-418c-9aad-58faf218c09e'; size =
On Mon, Oct 3, 2011 at 8:01 AM, Devin Carlen wrote:
> Hello all,
> There is now an official Diablo branch on GitHub:
> https://github.com/4p/openstack-dashboard/tree/diablo
> I'll be sending out a more formal note about this later.
Are you going to mark stable code with a tag? It is very useful for
> On Mon, Oct 3, 2011 at 8:01 AM, Devin Carlen wrote:
>> Hello all,
>> There is now an official Diablo branch on GitHub:
>> https://github.com/4p/openstack-dashboard/tree/diablo
>> I'll be sending out a more formal note about this later.
And where can we find the diablo-compatible version for
Hi all,
I'm about to test the scheduling-across-zones functionality in diablo,
but the run-instance command is not propagated correctly across the
child zones.
My environment:
3 VM's with diablo installed.
PARENT ZONE: Europe1 [192.168.124.47]
|
Hi Khaled
What version of swift are you using, and what authentication system?
If you are using swauth (the preferred one), you need to make sure you have the
"default_swift_cluster" properly defined within the "[filter:swauth]" section
of the proxy config.
Ex:
default_swift_cluster = local
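The value above is cut off; for illustration, swauth expects the cluster name and the public storage URL joined with '#' (the host below is a placeholder, not a value from this thread):

```
[filter:swauth]
use = egg:swift#swauth
# <cluster-name>#<public-storage-url>
default_swift_cluster = local#https://PROXY_PUBLIC_IP:8080/v1
```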
On Monday, 3 October 2011, Fabrice Bacchella wrote:
> > On Mon, Oct 3, 2011 at 8:01 AM, Devin Carlen wrote:
> >> Hello all,
> >> There is now an official Diablo branch on GitHub:
> >> https://github.com/4p/openstack-dashboard/tree/diablo
> >> I'll be sending out a more formal note about
Hi,
Here are the network configuration parameters in my nova.conf
--network_manager=nova.network.manager.FlatManager
--flat_network_bridge=br100
--flat_injected=true
--flat_interface=eth0
--public_interface=eth0
Hence, I ran tcpdump with the following three filters on the interfaces - eth0,
lo & b
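For reference, filters along these lines will show whether the metadata request ever leaves the guest and where it gets rewritten (a sketch; 169.254.169.254 is the standard metadata address, the rest is my assumption):

```shell
# Run as root; one capture per interface the packet might cross.
tcpdump -ni eth0  'host 169.254.169.254 or tcp port 8775'
tcpdump -ni lo    'host 169.254.169.254 or tcp port 8775'
tcpdump -ni br100 'host 169.254.169.254 or tcp port 8775'
```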
> Is http://yum.griddynamics.net/yum/diablo/ the repository for the final
> version?
Yep, it's Diablo release + our patches.
Dmitry Maslennikov should come up with an announce soon.
> The home page doesn't say a word about diablo:
> http://yum.griddynamics.net/
We'll fix that once our Diablo p
Hello Everyone,
I was wondering if any effort has been made toward supporting InfiniBand
devices for I/O virtualization, RDMA, etc.? If so, can you please direct
me to the latest documentation.
Thanks in Advance,
Nick.
___
Mailing list: http
What is the current state of the yum repository at griddynamics?
http://yum.griddynamics.net/yum/
There is a diablo-4/openstack/, which uses master/deps/, and diablo/, which is
standalone. Is http://yum.griddynamics.net/yum/diablo/ the repository for the
final version?
The home page doesn't say a
On 3 Oct 2011 at 14:06, Carlo Impagliazzo wrote:
> Same for me ( I'm using scientific linux and xen ), I've resolved with last
> revisions suggested ( in the footer ).
> last diablo revisions:
>
> # compute service
> NOVA_REPO=https://github.com/openstack/nova.git
> NOVA_BRANCH=2011.3
$ git
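The command at the end is cut off; a reconstruction of what checking out that pinned revision usually looks like, using the variables quoted above (my sketch, not the original command):

```shell
NOVA_REPO=https://github.com/openstack/nova.git
NOVA_BRANCH=2011.3
git clone "$NOVA_REPO" nova
cd nova && git checkout "$NOVA_BRANCH"
```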
Hey guys, I'm still trying to get this working, but I still don't understand
what's happening.
In the ttylinux busybox I run fdisk -l and it says the disk is only 18 MB
and doesn't have a valid partition table:
/ # fdisk -l
Disk /dev/sda: 18 MB, 18874368 bytes
255 heads, 63 s
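As a quick sanity check, the byte count fdisk reports is exactly 18 MiB, so the guest really is seeing an 18 MB disk rather than misreporting the size:

```shell
# 18874368 / 1024 / 1024 = 18, i.e. exactly 18 MiB
echo $((18874368 / 1024 / 1024))   # prints 18
```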
On 3 Oct 2011 at 14:47, Andrey Brindeyev wrote:
>> Is http://yum.griddynamics.net/yum/diablo/ the repository for the final
>> version?
>
> Yep, it's Diablo release + our patches.
>
> Dmitry Maslennikov should come up with an announce soon.
>
>> The home page doesn't say a word about diablo
Hi Marcelo,
I followed the instructions described in
http://swift.openstack.org/howto_installmultinode.html to install swift.
I think the authentication system used is tempauth.
best regards
Khaled
Subject: Re: [Openstack] access to openstack swift cluster
From: btorch...@zeroaccess.org
Date:
On 3 Oct 2011, at 15:52, Khaled Ben Bahri wrote:
> Hi Marcelo,
>
> I followed the instructions described in
> http://swift.openstack.org/howto_installmultinode.html to install swift.
> I think the authentication system used is tempauth.
>
Please paste your proxy-server.conf and the exact command
Hi
This is the proxy-server.conf
[DEFAULT]
cert_file = /etc/swift/cert.crt
key_file = /etc/swift/cert.key
bind_port = 8080
workers = 8
user = swift
[pipeline:main]
pipeline = healthcheck cache tempauth proxy-server
[app:proxy-server]
use = egg:swift#proxy
allow_account_management = true
[filter
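The paste is cut off at the filter sections. For comparison, a minimal tempauth completion looks like the sketch below, written to a scratch file so the section layout can be checked; the account, user, and key are made-up examples, not Khaled's values:

```shell
cat > /tmp/proxy-filters.conf <<'EOF'
[filter:healthcheck]
use = egg:swift#healthcheck

[filter:cache]
use = egg:swift#memcache
memcache_servers = 127.0.0.1:11211

[filter:tempauth]
use = egg:swift#tempauth
# user_<account>_<user> = <key> [.admin]
user_system_root = testpass .admin
EOF
grep -c '^\[filter:' /tmp/proxy-filters.conf   # prints 3
```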
On Sun, 2011-10-02 at 14:44 -0400, Monty Taylor wrote:
> [...] (and I believe
> I heard python-novaclient is being sucked in to nova, so the precedent
> seems to be that we don't care for client lib projects having different
> lifecycles)
It's worth pointing out that novaclient is a special case;
On 3 Oct 2011, at 16:56, Khaled Ben Bahri wrote:
> Hi
>
> This is the proxy-server.conf
>
> [DEFAULT]
> cert_file = /etc/swift/cert.crt
> key_file = /etc/swift/cert.key
> bind_port = 8080
> workers = 8
> user = swift
> [pipeline:main]
> pipeline = healthcheck cache tempauth proxy-server
> [app
Hi Paul,
I did follow the wiki to configure the server (it was basically the same
procedure used to install the 5.6 servers).
While installing the server I just followed the wizard choosing to
enable thin provisioning on local storage.
I have no remote filesystems at the moment.
My local storage params a
Hi,
Thanks for the advice.
This is not a problem; I installed swift on virtual machines and I will delete
them :)
These commands are executed when I'm logged in on the proxy server, but I want
to manage files from outside the proxy,
and these commands don't work from another computer.
I don't know if
Hi all,
Could someone tell me how to handle the translation files properly when
submitting a commit for review?
Should I just ignore all the local modification for po/nova.pot? Or should I
regenerate the .pot with setup.py build_i18n and submit the new version?
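If the answer turns out to be "ignore the churn", one way to regenerate locally and still keep po/nova.pot out of the commit is (a sketch, assuming a nova checkout where setup.py provides build_i18n, as mentioned above):

```shell
python setup.py build_i18n      # regenerate the template locally
git checkout -- po/nova.pot     # drop the regenerated churn before committing
```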
Regards,
Stanisław Pitucha
Cloud
On Mon, Oct 3, 2011 at 5:46 PM, Fabrice Bacchella wrote:
> I hope it's not too late, but a lot of configuration files are not tagged as
> such in the spec files.
>
> So if I try a yum erase, they just vanish. Or they can be overridden by
> a yum update. That's a big problem for production ser
Nick Khamis asked:
> I was wondering if any effort has been made toward supporting InfiniBand
> devices for I/O virtualization, RDMA, etc.? If so, can you please direct
> me to the latest documentation.
Adding RDMA support to OpenStack will be a challenge with the current
software archit
On 3 Oct 2011, at 17:49, Khaled Ben Bahri wrote:
> Hi,
>
> These commands are executed when I'm logged in on the proxy server, but I want
> to manage files from outside the proxy,
> and these commands don't work from another computer.
>
> I don't know if there are any commands or configuration to m
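For what it's worth, the usual tempauth round trip from a remote machine looks like this; the IP, account, user, and key are placeholders, and the key point is that both the proxy's bind address and the storage URL returned by auth must be reachable from outside:

```shell
# 1) Authenticate against tempauth on the proxy:
curl -k -v -H 'X-Auth-User: system:root' -H 'X-Auth-Key: testpass' \
    https://PROXY_PUBLIC_IP:8080/auth/v1.0
# 2) Reuse the X-Storage-Url and X-Auth-Token from the response:
curl -k -H 'X-Auth-Token: <token>' https://PROXY_PUBLIC_IP:8080/v1/AUTH_system
```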
I suspect that the original poster was looking for instance access
(mediated in some way) to IB gear. When we were trying to figure out
how to best use our IB gear inside of openstack, we decided that it
was too risky to try exposing IB at the verbs layer to instances
directly, since the security m
You seem to be doing things correctly.
Can you paste the output from 'nova zone-list' in the parent zone please?
-Sandy
From: openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net
[openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net] o
Quoting Daniel P. Berrange (berra...@redhat.com):
> The LXC controller 'main' method received the handshake FD and invokes
> lxcControllerRun(). This method does various setup tasks, in particular
> the following:
>
> if (lxcSetContainerResources(def) < 0)
> goto cleanup;
Narayan Desai wrote:
> I suspect that the original poster was looking for instance access
> (mediated in some way) to IB gear.
> When we were trying to figure out how to best use our IB gear inside
> of openstack, we decided that
> it was too risky to try exposing IB at the verbs layer to instances
Nati Ueno wrote:
> Thank you for your great recommendations!
Double-check all sessions, there were some changes since Jay looked at
the session times.
--
Thierry Carrez (ttx)
Release Manager, OpenStack
Thanks to all who attended our chat about Repose today. Just wanted to send a
quick message to let you know that the code is available today on GitHub!
https://github.com/rackspace/repose
-jOrGe W.
This email may include confidential information. If you received it in error,
please delete it.
Quoting Serge E. Hallyn (serge.hal...@canonical.com):
> I'm sure the patch should be tweaked (helpers moved elsewhere, whatever)
> but running euca-terminate-instances twice on an lxc container can cause
> oopses on the host without this.
Sorry, I guess when I re-typed it I erred on caps - s/true/