+1.
We're going to be running a bunch of parallel deployments of OpenStack for
the purpose of experimentation in system design. It would be nice to be
able to share Glance and Keystone between instances.
-nld
On Wed, May 15, 2013 at 1:46 PM, John Paul Walters wrote:
> Hi,
>
> We're looking at
This will depend on whether the VMs are in the same tenant network or
not. Assuming they are on the same L2 and L3 network, the packets
will transit either the Linux bridge or Open vSwitch, depending on how
you have things configured. Note that network filtering rules will be
processed on this
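A quick way to see which path is in play (bridge and chain names below
are just examples; yours will differ):
  brctl show               # Linux bridge case: the VM taps hang off a brXXX bridge
  ovs-vsctl show           # Open vSwitch case: the taps are plugged into an OVS bridge
  iptables -S | grep nova  # the per-instance filtering chains that get applied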
And if a nova reboot fails, you can always fall back to issuing virsh
commands on the node behind nova's back.
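Something like this on the compute node hosting the instance (the
libvirt domain name is usually of the form instance-0000001a; check
with the list command first):
  virsh list --all
  virsh destroy instance-0000001a   # hard power-off, does not delete anything
  virsh start instance-0000001a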
-nld
On Mon, Apr 8, 2013 at 8:28 PM, Blair Bethwaite
wrote:
> Dave,
>
> Have you tried rebooting it (via OpenStack dashboard/CLI/API)? Obviously
> you'll lose memory state, but the VM w
Make sure that the metadata server has a route back to the VM. Traffic
hitting that NAT rule ensures that data is flowing properly in one
direction, but you need to make sure bits can flow back to establish a
TCP connection. We had this problem running multiple nova-network
servers.
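The check is roughly this (addresses below are placeholders):
  # on the host running the metadata service
  ip route get 10.0.0.5          # can it reach the fixed-IP range at all?
  # if not, add a route back via the nova-network server that owns that network
  ip route add 10.0.0.0/24 via 192.168.1.10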
-nld
On Thu,
We have the same basic problems. We have four different node types
integrated into our system. They all have different ratios of CPU to
memory, and we have some specialized hardware on one class of nodes.
We ended up setting up a series of chassis-specific instance
definitions. We then use the
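For reference, chassis-specific definitions of this sort look roughly
like the following (names and sizes are made up; the idea is one flavor
family per hardware type):
  nova flavor-create chassis24.full  101  24576 100 12
  nova flavor-create chassis48.full  102  49152 100 16
  nova flavor-create bigmem.quarter  103 262144 200 16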
We're using IB (QDR ConnectX and ConnectX-2) on our system. It turns
out that the drivers included in version 3.2 of the Linux kernel are
fine. I've built a PPA for updated management tools though; all of
those bits are ancient in Precise. The PPA is here:
http://launchpad.net/~nar
I've finally finished my writeup describing the experiments that we
performed using OpenStack to drive a wide-area 100-gigabit network.
I've included all of the details for configuration and tuning, as well
as speculation about why we're seeing such good numbers.
tl;dr: you can push a whole lot of netwo
On Thu, Sep 27, 2012 at 2:20 PM, Nandavar, Divakar Padiyar (STSD)
wrote:
> From the information available in the blueprint for
> "multi-process-api-service" I see that implementation has been completed and
> would be available as part of Folsom release
> https://blueprints.launchpad.net/no
I'm not sure if it will work with FlatManager or not. We're doing this with
VlanManager.
-nld
On Mon, Sep 17, 2012 at 10:13 AM, Xin Zhao wrote:
> I have been using flatdhcp mode in a single-host network configuration.
> Will flatdhcp also work in this 2 nova-network setting, or a vlan
> mode i
Sure, we've been running in that sort of configuration since Bexar.
The only tricky part is that you need to make sure that you run
nova-api-metadata on each nova-network server, and you need to make
sure that floating IPs can get to the appropriate fixed addresses (i.e.,
if a fixed address is not rou
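Getting the metadata API onto each network host is roughly this
(Ubuntu-era sketch; package and flag names may differ on your release):
  apt-get install nova-api-metadata
  service nova-api-metadata start
  # in nova.conf, make sure metadata_host points at an address the
  # instances can actually reach (it defaults to the host's my_ip)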
On Wed, Aug 29, 2012 at 12:19 PM, Joshua Harlow wrote:
> Perhaps we should also have a CHANGELOG file to explain the major
> features/changes...
>
> Perhaps a 'MIGRATION' file as well that explains how to migrate from
> version - 1?
I think that this would be a great start.
In the long term, I w
We've managed to get things working by hardwiring the filter
scheduler to route instances to particular hosts that are running
nova-compute with different virtualization layers (in our case, KVM,
and LXC for the GPU nodes).
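Not our exact setup, but a similar effect can be had with the stock
filter scheduler (filter names and flags have moved around between
releases, so treat this as a sketch):
  # nova.conf on the scheduler node
  scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler
  scheduler_default_filters=ComputeFilter,RamFilter,CoreFilter
  # then give the GPU flavors an extra_spec that only the LXC hosts
  # advertise, so those instances can't land on the KVM nodes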
-nld
On Wed, Aug 22, 2012 at 12:34 PM, Michael J Fork wrote:
> Looking at the H
On Thu, Aug 2, 2012 at 8:42 AM, Christoph Kluenter wrote:
> * Am Thu, Aug 02 2012 at 09:24:55 -0400 , schrieb Ravi Jagannathan:
>> It should hop on to the next subnet block if available ( assuming that in
>> LAN its a private address scheme ) .
> We only use routable IPs. thats why we have some ne
On Sat, Jul 21, 2012 at 6:47 AM, Xu (Simon) Chen wrote:
> Narayan,
>
> If you do net.bridge.bridge-nf-call-iptables = 0 on the network controller,
> does floating IP still work? For each tenant/network, a subnet is created,
> and the nova-network has a .1 gateway configured on the bridge with the
Just for the record, we found the issue. There was some filtering
being applied in the bridge code which randomly (?) dropped some DNS
requests. Setting:
net.bridge.bridge-nf-call-arptables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-ip6tables = 0
completely resolved the p
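To make that stick across reboots, the same three settings can go in a
file under /etc/sysctl.d (the path below is arbitrary) and be reloaded:
  # /etc/sysctl.d/60-bridge-nf.conf
  net.bridge.bridge-nf-call-arptables = 0
  net.bridge.bridge-nf-call-iptables = 0
  net.bridge.bridge-nf-call-ip6tables = 0

  # then reload:
  sysctl -p /etc/sysctl.d/60-bridge-nf.conf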
On Fri, Jul 20, 2012 at 9:59 AM, Kiall Mac Innes wrote:
> Sounds like one solution alright..
>
> But - what about making quotas pluggable, like the scheduler?
>
> This would allow for even more complex quotas, like limiting the number of
> SSD backed instances across the entire cloud per tenant, w
On Fri, Jul 20, 2012 at 9:42 AM, Eoghan Glynn wrote:
>
>> The harder part is that we need to be able to specify
>> independent/orthogonal quota constraints on different flavors. It
>> would be really useful to be able to say basically, you can have 2TB
>> of memory from this flavor, and 4TB of mem
On Fri, Jul 20, 2012 at 4:38 AM, Eoghan Glynn wrote:
> Hi Narayan,
>
> I had the idea previously of applying a "weighting function" to the
> resource usage being allocated from the quota, as opposed to simply
> counting raw instances.
>
> The notion I had in mind was more related to image usage i
We're running a system with a really wide variety of node types. This
variety (nodes with 24 GB and 48 GB of memory, GPU nodes, and 1 TB
memory nodes) causes some real trouble with quotas. Basically, for any
tenant that is going to use the large-memory nodes (even in smaller
slices), we need to set quotas that are hi
l 19, 2012 at 11:35 AM, Joshua wrote:
>>
>> Thanks for the updates, it hurts to hear that solaris is not support. I
>> was hoping I could nest the vm, but even that has been removed as a feature.
>> Anyone have any workarounds on getting solaris running within openstack.
>
On Wed, Jul 18, 2012 at 7:38 PM, Michael March wrote:
> I don't follow Solaris that closely but I vaguely remember the Joyent folks
> ported all of KVM to Solaris, right? Or am I just missing the whole point
> here?
They did, and it is a fairly impressive piece of work. Their focus (as
a hosting
, from the sound of things.
-nld
On Tue, Jul 17, 2012 at 7:55 PM, Narayan Desai wrote:
> I suspect that you need the right solaris (more likely illumos) bits
> to get guest side support for virtio. We tried a while ago and the
> default openindiana at the time didn't work.
> -nld
I suspect that you need the right solaris (more likely illumos) bits
to get guest side support for virtio. We tried a while ago and the
default openindiana at the time didn't work.
-nld
On Tue, Jul 17, 2012 at 7:43 PM, Joshua wrote:
> I have tried with both KVM and qemu. Solaris starts to boot a
On Mon, Jul 16, 2012 at 9:20 PM, Nathanael Burton
wrote:
> Narayan,
>
> Are you doing bonding in conjunction with your bridging + vlans? Or is it
> just a single interface backing the vlan_interface?
Hi Nate. Just a single interface.
-nld
We're running into what looks like a Linux bridging bug, which causes
both substantial (20-40%) packet loss and DNS to fail about that same
fraction of the time. We're running Essex on Precise, with dedicated
nova-network servers and VlanManager. On either of our nova-network
servers, we see the s
On Thu, Jul 12, 2012 at 4:36 PM, Vishvananda Ishaya
wrote:
> Upgrading has been painful and we are striving to improve this process
> as much as possible.
I think that this needs to be a core value of the developer community
if OpenStack is going to become pervasive.
> I would like to take thi
On Thu, Jul 12, 2012 at 2:38 PM, Vishvananda Ishaya
wrote:
> Agreed, I'm a developer, so I'm clearly biased towards what is easier for
> developers. It will be a significant effort to have to maintain the
> nova-volume code, so I want to be sure it is necessary. End users really
> shouldn't ca
On Wed, Jul 11, 2012 at 1:49 PM, Adam Gandelman wrote:
> On 07/11/2012 09:22 AM, Narayan Desai wrote:
>>
>> I also vote for option 1, but the migration path really needs to be
>> solid and well documented.
>> -nld
>
>
> I feel the same. I think document
I also vote for option 1, but the migration path really needs to be
solid and well documented.
-nld
On Wed, Jul 11, 2012 at 10:52 AM, Andrew Clay Shafer
wrote:
> One vote for option 1.
>
> Remove Volumes
>
>
On Fri, Jul 6, 2012 at 11:52 AM, Stefano Maffulli wrote:
> On 07/06/2012 07:51 AM, John Paul Walters wrote:
>> One of the outputs of the design summit was that folks are
>> interested in participating in a monthly (or so) telecon to express
>> feature requests, best practices, etc. I'd like to ge
On Fri, Jul 6, 2012 at 9:51 AM, John Paul Walters wrote:
> Does something like the first Monday of the month at 4:00pm EDT (UTC-4) work?
> I'm just throwing out that time as something that seems to broadly work on
> my end, but I'd welcome any input from others.
That generally works fine for
On Sat, Jun 30, 2012 at 3:06 AM, Christian Parpart wrote:
> Hm, Pacemaker/Corosync *inside* the VM will add the Service-IP to the local
> ethernet
> interface, and thus, the outside OpenStack components do not know about.
>
> Using a dedicated floating IP pool for service IPs might feel like a gre
On Thu, Jun 21, 2012 at 4:21 PM, Rick Jones wrote:
>
> TSO and GRO can cover a multitude of path-length sins :)
Along with a 64 MB TCP window ;)
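For what it's worth, a window that size usually means bumping the
standard sysctls, something like (values are illustrative, not a tuned
recommendation):
  sysctl -w net.core.rmem_max=67108864
  sysctl -w net.core.wmem_max=67108864
  sysctl -w net.ipv4.tcp_rmem="4096 87380 67108864"
  sysctl -w net.ipv4.tcp_wmem="4096 65536 67108864"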
> That is one of the reasons netperf does more than just bulk transfer :)
> When I was/am measuring "scaling" of an SMP node I would use aggregate,
>
On Thu, Jun 21, 2012 at 11:16 AM, Rick Jones wrote:
> On 06/20/2012 08:09 PM, Huang Zhiteng wrote:
>> By 'network scaling', do you mean the aggregated throughput
>> (bandwidth, packets/sec) of the entire cloud (or part of it)? I think
>> picking up 'netperf' as micro benchmark is just 1st step, t
I vaguely recall Vish mentioning a bug in dnsmasq that had a somewhat
similar problem (it had to do with lease renewal problems on IP
aliases or something like that).
This issue was particularly pronounced with Windows VMs, apparently.
-nld
On Thu, Jun 14, 2012 at 6:02 PM, Christian Parpart wr
How integrated is the network target support for ZFS on FreeBSD? One
of the most compelling features (IMHO) of ZFS on illumos is the whole
COMSTAR stack. On the ZFS on Linux port at least, there are just
integration hooks out to the standard Linux methods (kernel NFS, etc.)
for NFS, iSCSI, etc.
I'm re
We're definitely interested in this sort of thing. So much so that
we've already hacked support into nova-volume to run directly on top
of an illumos box with ZFS. ;)
We've only gotten the basics working, and we haven't done any serious
torture testing of it yet. Our real goal is to get things wor
cp is not leasing an ip address in vlan mode
>
> do you see sent and received packets on the vlan? I would suspect that
> you actually don't have the vlans trunked on the ports so the packets
> aren't making it across the switch.
>
> Vish
>
> On May 31, 2
This sounds like it might be working properly. In VLAN mode, all
instances are connected to one of the project VLANs. The .1 address
(gateway, DHCP, etc.) exists on an interface on the nova-network node
(or one of them, in the case that you are running multiple). This
interface is bridged to a tagged
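On the network node that ends up looking roughly like this (interface
names below are examples):
  brctl show              # e.g. br100 with vlan100 as its member interface
  ip addr show br100      # the 10.x.y.1 gateway address lives here
  ps aux | grep dnsmasq   # one dnsmasq per project network, bound to that bridge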
I'm not sure that it would be particularly easy to make nova-volume
support clustered filesystems; the current model only supports
attaching a volume to a single instance at a time. Aside from that, it
shouldn't be too hard to use FC as the data path instead of iSCSI.
We're looking at using iSER in
As far as I know, the current volume service doesn't support
connecting the same volume to multiple instances at the same time, so
neither of these can work directly through the nova APIs.
-nld
On Tue, Apr 24, 2012 at 4:44 AM, Daniel Martinez wrote:
> Hello everyone.
>
> My setup is simple. A volume
We needed to set up something similar when we split out the
nova-network service to a different host than nova-api in Cactus, so
that instances could get to the metadata service. It was pretty simple
to make Quagga work, but then we needed to add a rule to bypass NAT.
Since this was just for the met
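The bypass rule is roughly of this shape (addresses are placeholders
and the exact chain depends on the release; the point is to ACCEPT the
metadata traffic before the SNAT rule matches):
  iptables -t nat -I POSTROUTING -s 10.1.0.0/16 -d 192.168.100.5/32 -j ACCEPT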
Ghe, while you're right that these two workloads are different, deployers
need developers to use a representative environment during development, or
the code doesn't work when it hits real deployments. We've now been bitten
during our initial deployment of Cactus, our upgrade to Diablo, and our
rec
Thanks for the input, everyone. We'll start filing bugs after we triage
the tracebacks.
-nld
Hello all. We've recently upgraded our Cactus system to more recent
code. In the process of doing this, we've started logging whenever we
get tracebacks out of any of the OpenStack components we are running.
Some of these are clearly bugs, while others correspond to normal
operational conditions (l
This looks more or less right. We have been running a setup like you
are describing here for quite a while, and we've found it to be stable
(and easier to set up than a lot of the other network options, IMO).
When debugging this sort of setup, trunking setup problems on the
switch are often the cul
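A quick check is to watch for the tagged frames on both ends (interface
and tag below are examples):
  tcpdump -e -n -i eth1 vlan 100
If tagged traffic leaves the compute node but never shows up on the
network node, the switch port almost certainly isn't trunking that tag.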
On Mon, Oct 3, 2011 at 4:21 PM, Caitlin Bestler
wrote:
>
>
> Narayan Desai wrote:
>
>
>> I suspect that the original poster was looking for instance access
>> (mediated in some way) to IB gear.
>> When we were trying to figure out how to best use our IB gear inside
&
I suspect that the original poster was looking for instance access
(mediated in some way) to IB gear. When we were trying to figure out
how to best use our IB gear inside of OpenStack, we decided that it
was too risky to try exposing IB at the verbs layer to instances
directly, since the security m
When we were having RabbitMQ problems, we would use rabbitmqctl
list_queues to see what the queue depth for each nova service was.
While this doesn't show what the throughput is, it does let you know
when things start to get backed up.
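For example, sorting by message count to spot the backed-up queues:
  rabbitmqctl list_queues name messages consumers | sort -k2 -n | tail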
-nld
We had to preconfigure the VLAN tags and set all network ports for
nova-compute nodes to trunk them in advance on our switching gear
(BNT and Juniper both, but I've also needed to do it on Cisco gear). I
think that is a pretty common requirement for managed switches.
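On the Cisco side it's the usual trunk configuration, something like
(port and VLAN range are examples; match the range to your vlan_start
and num_networks settings):
  interface GigabitEthernet0/10
   switchport mode trunk
   switchport trunk allowed vlan 100-200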
-nld
We've got a system composed of 336 compute nodes, a head node with
everything else except for the network and volume pieces, 12 volume
servers, and 2 network servers.
We're using MySQL. We've deployed using VlanManager. We deploy using a
custom node imaging system here for base builds and Bcfg2 f
'm interesting about quagga in your environment.
>
> 2011/5/11 Narayan Desai
>>
>> For what it's worth, we're running in a configuration similar to the
>> one in the attached diagram using VlanManager. When we moved the
>> nova-network service off of the
For what it's worth, we're running in a configuration similar to the
one in the attached diagram using VlanManager. When we moved the
nova-network service off of the machine with nova-api, we needed to
add an additional prerouting rule on the network server that prevented
the traffic from being sen