Hi,
I want to make Swift use Keystone for auth.
I configured things as described in the "Swift
Integration - Quick Start" section at https://github.com/openstack/keystone,
but when I run "swift-init main start" I get:
object-server already started...
Traceback (most recent call last):
File "/usr/bin/swift-proxy-server", line 22
Hi!
On 02/14/2012 07:29 PM, Joe Gordon wrote:
> Hi Developers,
>
> I have been looking at https://bugs.launchpad.net/nova/+bug/931608,
> "run_tests.sh (-x | --stop) is broken." A fix was committed but it only
> stopped "./run_tests.sh -x" from failing, and not restoring the
> "./run_tests.sh -x"
Hi Developers,
I have been looking at https://bugs.launchpad.net/nova/+bug/931608,
"run_tests.sh (-x | --stop) is broken." A fix was committed but it only
stopped "./run_tests.sh -x" from failing, and not restoring the
"./run_tests.sh -x" functionality.
"run_tests.sh (-x | --stop)" is a nosetest
I agree fully with Jesse. I think given the timelines the first cut of
Keystone was great. Moving forward we'll also have more folks who are
burdened (honored?) with operating it in production environments, which means
that more rubber-meets-the-road kinds of issues will be identified and deal
hi, all
First of all, I am not proficient in filesystems and storage, so what
I say may not be correct; forgive me.
To realize EBS like Amazon's, OpenStack first uses LVM; through the iSCSI
protocol, we can use remote
storage as block devices. Another problem is snapshots; the ideal situation
is that we can
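An untested sketch of the two steps above: carve a block device out of LVM,
then export it over iSCSI. Volume/VG names and the tgtadm invocation are
assumptions, not nova-volume's actual code:

    import subprocess

    def run(cmd):
        # Print each step, then execute it; raises on failure.
        print(' '.join(cmd))
        subprocess.check_call(cmd)

    # Carve a logical volume out of a "nova-volumes" volume group.
    run(['lvcreate', '-L', '1G', '-n', 'volume-00000001', 'nova-volumes'])

    # Export it over iSCSI so a remote host can attach it as a block device.
    iqn = 'iqn.2010-10.org.openstack:volume-00000001'
    run(['tgtadm', '--lld', 'iscsi', '--op', 'new', '--mode', 'target',
         '--tid', '1', '--targetname', iqn])
    run(['tgtadm', '--lld', 'iscsi', '--op', 'new', '--mode', 'logicalunit',
         '--tid', '1', '--lun', '1', '-b',
         '/dev/nova-volumes/volume-00000001'])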
Yes
Light was the codename when it was an internal tool.
The first version was a couple hundred lines and supported all core APIs.
After it was decided it would be more effective to flesh out light than
continue to tweak the existing code base, it became the redux branch of the
official keystone
Are "keystone light" and "keystone redux" the same thing? Or is one just a
light beer?
The major lessons of keystone:
While keystone served as an effective proof of concept for unified
authentication (before keystone each component had its own
users/passwords), it didn't get enough attention from other developers or
enough integration with the other core projects.
The pain caused by not havi
There's probably several ways to answer this, but I'd say that the original
development of keystone was not sufficiently focused on its integration
with other projects (and the focus on testing in general came late), while
redux was quite literally born from integration testing.
- Dolph
On Tue,
Hey there Joshua,
Good question! `redux` started due to a variety of frustrations with the
previous design that arose from decisions made early in the original
development and were deemed infeasible to resolve in an evolutionary way.
My team and the teams we work with closely felt we were in a go
On Feb 14, 2012, at 6:28 PM, Kevin L. Mitchell wrote:
> On Wed, 2012-02-15 at 00:00 +0000, Monsyne Dragon wrote:
>>> Other possibilities:
>>>
>>> * Container (not recommended, as it is overloaded with Solaris or Linux
>>> container virtualization)
>>> * ServerGroup
>>> * HostGroup
>>> * Group
>>
Great!
A question I never understood, why was a redux needed?
Isn't keystone "pretty" new anyway? Maybe I missed that message/memo.
Was there some kind of "learnings/oops moment" that happened that we can all
benefit from (and not repeat?).
Sorry if this is a repeat...
On 2/14/12 4:38 PM, "Andy
On Wed, 2012-02-15 at 00:00 +0000, Monsyne Dragon wrote:
> On Feb 14, 2012, at 1:25 PM, Jay Pipes wrote:
>
> > -1 on shard b/c of database terminology. -1 on cluster because of HPC and
> > database terminology.
> >
> > Zone was originally used because it is general -- referring to merely a
> >
tl;dr proposal to merge keystone redux: same API, same client, new
service. Please review and ask questions!
FRIENDS, ROMANS
We are gathered here today to celebrate the commencement of Keystone
(redux) to fill the role of Keystone (henceforth known as legacy). It is
with great pride that we prop
On Wed, 2012-02-15 at 00:00 +0000, Monsyne Dragon wrote:
> > Other possibilities:
> >
> > * Container (not recommended, as it is overloaded with Solaris or Linux
> > container virtualization)
> > * ServerGroup
> > * HostGroup
> > * Group
> > * Collection
>
> - Set
> - Cell
> - Huddle
> - Constel
Ya, it seems like guestfs and netcf are being worked on by RH (at least in some
part).
Maybe someone from there can chime in.
It would be awesome to just use guestfs and something like "guestnetwork"
(using netcf?) for network config or just have it be a part of guestfs.
Josh
On 2/14/12 2:56 PM,
On Feb 14, 2012, at 1:25 PM, Jay Pipes wrote:
> -1 on shard b/c of database terminology. -1 on cluster because of HPC and
> database terminology.
>
> Zone was originally used because it is general -- referring to merely a
> collection of hosts or other zones and not having a geographic connota
Hello Everyone,
We had excellent discussion about the outstanding Feature Freeze Exceptions.
Overall e-4 is going very well. We have had tons of fixes and updates and
almost all of the FFes are in. Here is an update about all of the outstanding
items. All of these patches need to be merged by
ServerGroup gets my vote at the moment: it's not a term that has an
overloaded meaning (as far as I know)
Martin
On 15 February 2012 06:25, Jay Pipes wrote:
> -1 on shard b/c of database terminology. -1 on cluster because of HPC and
> database terminology.
>
> Zone was originally used because it is
It sounds like we need to update the cli to reflect this
disk-configuration-parity change. It is also likely that some nova-api
work will need to be done to get this humming.
https://bugs.launchpad.net/nova/+bug/932423
On Tue, Feb 14, 2012 at 1:57 PM, Jesse Andrews wrote:
> Deliberate change.
>
On 02/14/2012 06:48 PM, Scott Moser wrote:
> On Tue, 14 Feb 2012, Leandro Reox wrote:
>
>> Hi guys,
>>
>> Has anyone already implemented network injection for RHEL systems acting as
>> guests? If not, are there any plans to make it into Essex final?
>
>
> Before we go down the road of trying to write system
As per today's meeting (which I think Vish will send a separate update about),
we've decided to remove the current zones implementation for the Essex release.
I'll be maintaining a branch that has a new implementation until F opens up.
With that, we have 2 branches up for review to remove the
Deliberate change.
It used to be that KVM and XS did different things as far as disk partitioning.
https://blueprints.launchpad.net/nova/+spec/disk-configuration-parity
Now a flavor can specify the root vs ephemeral partition size
independently instead of being decided by choice of hypervisor.
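A quick way to eyeball the per-flavor root vs ephemeral sizes is
python-novaclient; hedged sketch, credentials/endpoint are placeholders, and
the ephemeral figure depends on the OS-FLV-EXT-DATA extension being enabled:

    from novaclient.v1_1 import client

    # Placeholders: username, password, tenant, auth URL.
    nt = client.Client('admin', 'secret', 'demo',
                       'http://127.0.0.1:5000/v2.0/')
    for f in nt.flavors.list():
        # 'ephemeral' is exposed via OS-FLV-EXT-DATA, so guard for it.
        print('%s ram=%s root=%s ephemeral=%s'
              % (f.name, f.ram, f.disk, getattr(f, 'ephemeral', 'n/a')))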
In tracking down a problem with the tempest flavors test I noticed that
'nova flavor-list' returns this in Essex:
+----+------+-----------+------+----------+-------+-------------+
| ID | Name | Memory_MB | Swap | Local_GB | VCPUs | RXTX_Factor |
+----+------+-----------+------+----------+-------+-------------+
Folks,
I'd like to request an Essex feature freeze exception for this blueprint:
https://blueprints.launchpad.net/glance/+spec/retrieve-image-from
as implemented by the following patch:
https://review.openstack.org/#change,4096
The blueprint was raised in response to a late-breaking feature
On Tue, 2012-02-14 at 20:43 +, Kevin Jackson wrote:
> Dear cloud folk,
> I raised https://bugs.launchpad.net/nova/+bug/928819 last week; it's
> not getting any love, so I was wondering if it is user error rather than
> a bug (as it's a showstopper for my setup that I previously didn't
> have).
Partitions maybe?
On 2/14/12 4:01 PM, "Gabe Westmaas" wrote:
>This really is more about sharding than grouping, though. The specific
>goal of this implementation is to shard your nova database (on a capacity
>basis, not on a special key) and allow you to split (or shard :)
>connections to you
This really is more about sharding than grouping, though. The specific
goal of this implementation is to shard your nova database (on a capacity
basis, not on a special key) and allow you to split (or shard :)
connections to your rabbit server. This implementation should be used for
performance
Dear cloud folk,
I raised https://bugs.launchpad.net/nova/+bug/928819 last week; it's
not getting any love, so I was wondering if it is user error rather than
a bug (as it's a showstopper for my setup that I previously didn't
have).
My setup is simple -
Fresh install of Precise A2
Installed OpenSta
Even better, guess I didn't see that :-P
On 2/14/12 12:25 PM, "Leandro Reox" wrote:
What about http://git.fedorahosted.org/git/?p=python-netcf.git
On Tue, Feb 14, 2012 at 5:20 PM, Joshua Harlow wrote:
Does anyone have any experience with https://fedorahosted.org/netcf/ (RH??)
Just from a litt
What about http://git.fedorahosted.org/git/?p=python-netcf.git
On Tue, Feb 14, 2012 at 5:20 PM, Joshua Harlow wrote:
> Does anyone have any experience with https://fedorahosted.org/netcf/ (RH??)
>
> Just from a little search that project seems to be oriented to do this (os
> agnostic net cfg)
>
Does anyone have any experience with https://fedorahosted.org/netcf/ (RH??)
Just from a little search that project seems to be oriented to do this (os
agnostic net cfg)
It just seems to be missing a python api (at the moment).
On 2/14/12 10:48 AM, "Scott Moser" wrote:
On Tue, 14 Feb 2012, Le
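On the missing Python API: a thin ctypes shim over libnetcf would be one way
to start. Untested sketch; the entry points come from netcf's public header
(ncf_init, ncf_num_of_interfaces, ncf_close), but the soname and the flag
values are assumptions:

    import ctypes

    lib = ctypes.CDLL('libnetcf.so.1')  # soname is an assumption

    # int ncf_init(struct netcf **ncf, const char *root); 0 on success.
    ncf = ctypes.c_void_p()
    if lib.ncf_init(ctypes.byref(ncf), None) != 0:
        raise RuntimeError('ncf_init failed')

    # Flag values assumed from netcf.h (active | inactive interfaces).
    NETCF_IFACE_ACTIVE = 1
    NETCF_IFACE_INACTIVE = 2
    n = lib.ncf_num_of_interfaces(ncf,
                                  NETCF_IFACE_ACTIVE | NETCF_IFACE_INACTIVE)
    print('interfaces known to netcf: %d' % n)

    lib.ncf_close(ncf)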
Great. Thx!
On 2/14/12 11:00 AM, "Russell Bryant" wrote:
On 02/14/2012 01:47 PM, Joshua Harlow wrote:
> So is that in the openstack "mainline" or is that an add-on that
> fedora/RH/EPEL has done?
>
> Something in the "mainline" would be great (then there would be one
> solution and not X+1 solut
-1 on shard b/c of database terminology. -1 on cluster because of HPC
and database terminology.
Zone was originally used because it is general -- referring to merely a
collection of hosts or other zones and not having a geographic
connotation like Region does.
Other possibilities:
* Contain
Ok, so that sounds nice and would be ideal.
But what is realistic?
Is there some kind of OS-agnostic interface format that exists?
If there is, that's great, let's use it! If there isn't, what is plan B? Do we
make one? Something like guestfs for networking would seem pretty nice :-P
On 2/14/12 1
On 02/14/2012 01:47 PM, Joshua Harlow wrote:
> So is that in the openstack “mainline” or is that an add-on that
> fedora/RH/EPEL has done?
>
> Something in the “mainline” would be great (then there would be one
> solution and not X+1 solutions to this).
>
> I guess this goes beyond the networking
Nathan, I forgot to mention that we're actually running Ubuntu as the host, so
we were thinking about a way to inject IPs into Red Hat guests, but without
forcing any interfaces.template, so Ubuntu guests can reside on the same
hosts too. Maybe we can merge the additions into the mainline package.
That's
On Tue, 14 Feb 2012, Leandro Reox wrote:
> Hi guys,
>
> Has anyone already implemented network injection for RHEL systems acting as
> guests? If not, are there any plans to make it into Essex final?
Before we go down the road of trying to write system network configuration
scripts for each potential guest OS
So is that in the openstack "mainline" or is that an add-on that fedora/RH/EPEL
has done?
Something in the "mainline" would be great (then there would be one solution
and not X+1 solutions to this).
I guess this goes beyond the networking injection and also applies to any other
"adjustments" t
I know various companies/groups have hacked it in themselves.
Guestfs just showed up in essex. But the networking part would seem equally
important.
Possibly something like
https://github.com/griddynamics/nova/blob/master/nova/virt/netcfg.py could get
pulled in??
My only concern though is that
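For illustration only (not the griddynamics code): the RHEL side of injection
boils down to rendering ifcfg files the way the distro itself writes them.
A minimal sketch:

    def render_ifcfg(device, address, netmask, gateway):
        # RHEL-style static network config, one KEY=value per line;
        # this is what would land in the guest image at
        # /etc/sysconfig/network-scripts/ifcfg-eth0.
        lines = ['DEVICE=%s' % device,
                 'BOOTPROTO=static',
                 'ONBOOT=yes',
                 'IPADDR=%s' % address,
                 'NETMASK=%s' % netmask,
                 'GATEWAY=%s' % gateway]
        return '\n'.join(lines) + '\n'

    print(render_ifcfg('eth0', '10.0.0.5', '255.255.255.0', '10.0.0.1'))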
The Fedora / EPEL packaging does this.
http://fedoraproject.org/wiki/OpenStack
http://koji.fedoraproject.org/koji/packageinfo?packageID=12510
Thanks,
Nate
On Tue, Feb 14, 2012 at 1:23 PM, Leandro Reox wrote:
> Hi guys,
>
> Has anyone already implemented network injection for RHEL systems acting
Hi guys,
Has anyone already implemented network injection for RHEL systems acting as
guests? If not, are there any plans to make it into Essex final?
Regards
Thanks for the reply Dan!
Yes, I am running with the option --multi_host=T on both controller and compute
nodes [I followed the same URL that you mentioned in your email]
Controller node is running dnsmasq-dhcp correctly (There are 2 instances of
it). If the VM is launched on the controller, then
Hi Jay,
If you are running nova-network in a mode that performs DHCP, you should be
sure that the network Nova is connecting VMs to does not have your
corporate DHCP server on it. Having two conflicting DHCP servers on the
same network will give unpredictable results. What may be happening is a
Hello,
I have a two node cluster running on Ubuntu 11.04 (natty) with the Diablo
release from ppa:openstack-release/2011.3
[2011.3 (2011.3-nova-milestone-tarball:tarmac-20110922115702-k9nkvxqzhj130av2)]
Controller node runs nova-compute, nova-volume, nova-network, nova-api - 2 NICs
Compute nod
Folks,
We’ve used IRC #openstack-volumes for that and had a mailing-list set up
for the team. Check it out on https://launchpad.net/~openstack-volume
Unfortunately, there was not much going on lately. Primarily I suppose you
can blame me for this, as our team was completely overwhelmed by
produ
Hi Diego,
Many thanks. Let me try it.
Salman
From: Diego Parrilla Santamaría
To: openstack@lists.launchpad.net
Date: 02/14/2012 11:53 AM
Subject: [Openstack] Fwd: NFS for nova-volume
Sent by: openstack-bounces+sabaset=us.ibm@lists.launchpad.net
Hi Salman,
you c
Hi Salman,
You can check out our stable/diablo branch in our GitHub repo:
https://github.com/StackOps/nova/commits/stable/diablo
You need to configure it as follows:
- NovaVolume nodes must have access to the qemu-img executable. Otherwise they
won't be able to create the volumes.
-
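An untested sketch of the qemu-img step on an NFS-backed volumes directory;
both paths are assumptions:

    import subprocess

    nfs_dir = '/var/lib/nova/volumes'  # hypothetical NFS mount point
    path = '%s/volume-00000001' % nfs_dir

    # Create a 1 GiB qcow2 file to back the volume.
    subprocess.check_call(['qemu-img', 'create', '-f', 'qcow2', path, '1G'])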
On Tue, 2012-02-14 at 12:14 +, Chmouel Boudjnah wrote:
> 2012/2/14 Juan J. :
> >> https://github.com/chmouel/python-swiftclient
> > +1, we did that internally some time ago.
> > We reused the swift structure (swift.common.client) and added bufferedhttp.py
> > to the swift-client package too. What do
On Feb 14, 2012, at 2:07 AM, i3D.net - Tristan van Bokkem wrote:
> So, we can run MySQL in master-master mode on multiple hosts, we can run
> nova-api on serveral hosts and load balance those and RabbitMQ has a cluster
> ha setup as well but is this the way to go? I can't find a clear answer to
Hi Tristan,
When I saw your post, I thought: what about a pacemaker resource agent?
Corosync for the messaging layer and pacemaker for the resource
management. Maybe someone has written a resource-agent script for nova in
order to manage the failover.
Also DRBD can be useful.
And then after a coupl
2012/2/14 Juan J. :
>> https://github.com/chmouel/python-swiftclient
> +1, we did that internally some time ago.
> We reused the swift structure (swift.common.client) and added bufferedhttp.py
> to the swift-client package too. What do you think about that?
I think this is great but that would need to
Hi Christian,
Thanks for your reply.
With MySQL master/master I meant the following: http://mysql-mmm.org/ but it
seems they are using 2 different locations for MySQL reads and writes.
Something Nova is not (yet) able to configure (i.e. --sql_connection=, which is
being used for reads and right
Hi Tristan.
> So, we can run MySQL in master-master mode on multiple hosts, we can
> run nova-api on serveral hosts and load balance those and RabbitMQ
> has a cluster ha setup as well but is this the way to go? I can't
> find a clear answer to this. I am hoping one can shine some light on
> this!
We have developed a QEMUDriver for stable/diablo; sadly, the Essex build is
still broken. We are a bit overwhelmed, and we would like to contribute it
in the future (gue rivero will help us with Gerrit).
Still, if anybody wants to test it in stable/diablo, we are more than open
to help him to use i
Hi Salman.
On Mon, 13 Feb 2012 20:21:30 -0500
Salman A Baset wrote:
> I was wondering if anyone has tried setting up nova-volume on NFS
> backend without making any changes to nova-volume code?
I think it's not possible to do that without any changes to the
nova-volume code at the moment.
We op
On Mon, 2012-02-13 at 15:29 +0100, Chmouel Boudjnah wrote:
> [...]
> https://github.com/chmouel/python-swiftclient
>
> Let me know what do you think.
+1, we did that internally some time ago.
We reused the swift structure (swift.common.client) and added bufferedhttp.py
to the swift-client package too
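For anyone following along, a minimal usage sketch against the split-out
client, assuming the Connection interface carried over from
swift.common.client; endpoint and credentials are placeholders:

    from swiftclient import client

    # v1.0-style auth (e.g. tempauth); all values are placeholders.
    conn = client.Connection(authurl='http://127.0.0.1:8080/auth/v1.0',
                             user='test:tester', key='testing')
    conn.put_container('demo')
    conn.put_object('demo', 'hello.txt', contents='hello world')
    headers, body = conn.get_object('demo', 'hello.txt')
    print('fetched %d bytes' % len(body))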
Hi Stackers,
It seems running OpenStack components in High Availability hasn't really been a
focus point lately, am I right?
The general docs don't really mention HA except for nova-network. So I did some
research on how to run Nova in High Availability and have some questions
about it:
The