On Oct 3, 2013, at 12:49 PM, Mike Wilson wrote:
[…]
> Now that I've said all this, cells do handle these three problems very
> nicely by partitioning them all off and coordinating the API. However, there
> are some missing features that I think are not trivial to implement. I'm also
> not a
Tim,
thanks for your vision. Reducing the load on RabbitMQ is a good reason to
use cells. Technically, RPC is a kind of bottleneck with a large number of
hypervisors, but I think dividing the cluster into small pieces just increases
the number of bottlenecks. Maybe it's better to improve the RPC mechanism (in term
On 01.10.2013 17:55, Jonathan Proulx wrote:
On Tue, Oct 01, 2013 at 12:01:24PM +0300, Ilkka Tengvall wrote:
I'm not using the quantum metadata service. I can get away with this
because I have relatively few networks and they are all more or less in the
same administrative domain.
I'm running the nov
If you are able to make it here, please register so we can estimate the
arrangements better. And don't be put off by the original invite being in
Finnish; we can hold all of the presentations in English as well, as some
people have requested.
Lightning talks, anyone?
http://en.wikipedia.org/wiki/Light
Hi, the instance runs CentOS 6.4 and the host runs Ubuntu 13.
I tried running an instance with CirrOS and still get the same issue with a
different value.
On 10/4/2013 11:55 AM, Ritesh Nanda wrote:
Hello,
Which operating system is it? If Ubuntu, you would need
initramfs-growroot, so that it allocates the roo
Hello,
Which operating system is it? If Ubuntu, you would need initramfs-growroot
so that it allocates the root space at boot time. The same applies to other
operating systems too, but you need to check the package name.
Regards,
Ritesh
On Fri, Oct 4, 2013 at 9:49 AM, Mahardhika Gilang <
mahardik
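(A rough sketch to go with Ritesh's suggestion above. The package names are an
assumption of the usual ones for each distro, not quoted from the thread, and
should be checked against your repositories.)

  # Ubuntu guest image: the boot-time grow hook normally ships as
  # cloud-initramfs-growroot
  sudo apt-get install -y cloud-init cloud-initramfs-growroot
  sudo update-initramfs -u      # rebuild the initramfs so the hook is picked up

  # CentOS 6.x guest (the OS in the original question): the equivalents are
  # typically in EPEL
  sudo yum install -y cloud-init cloud-utils-growpart dracut-modules-growroot
  sudo dracut --force           # rebuild the initramfs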
Hi all,
I have deployed one instance using a flavor with a 20G disk and 10G
ephemeral storage.
When the instance is up, I check with the df -h command, and it shows this (LVM):
Filesystem                     Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root   2.5G  1.7G
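(Not from the thread: since the root here is on LVM, a boot-time grow hook only
enlarges the partition, so a manual sketch may help. The device names are guesses
based on the df output above and a typical /dev/vda2 physical volume; adapt them
before running anything.)

  growpart /dev/vda 2                           # grow the partition backing the PV
  pvresize /dev/vda2                            # let LVM see the new space
  lvextend -l +100%FREE /dev/VolGroup/lv_root   # give it all to the root LV
  resize2fs /dev/mapper/VolGroup-lv_root        # grow the ext filesystem online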
Your configuration looks right to me. It seems like a browser or plugin issue.
Can you try other browsers, such as Google Chrome or Firefox?
From: kody abney [mailto:bagelthesm...@gmail.com]
Sent: Thursday, October 03, 2013 1:06 PM
To: openstack@lists.openstack.org
Subject: [Openstack] deployment vnc issues -
He
Hello, I just joined this mailing list. The reason is that most of the
OpenStack documentation out there is too scattered and not very accurate. I
learned this when my company set up our OpenStack cluster. I have a very
simple question today about something that isn't working properly. We deployed a
full s
Hi All,
Which stable version of Keystone supports SQLite, Postgres, LDAP and OAuth?
I downloaded
"https://github.com/openstack/keystone/archive/2013.1.3.tar.gz" but was unable
to start the Keystone instance on Ubuntu 12.04.
Can anyone please provide the URL of a stable Keystone version to s
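(A minimal sketch, not verified on 12.04: 2013.1.x is Grizzly, so running it from
the stable branch looks roughly like the below. File and command names are from
memory of that era and may differ on your branch.)

  git clone -b stable/grizzly https://github.com/openstack/keystone.git
  cd keystone
  sudo pip install -r tools/pip-requires          # newer branches call this requirements.txt
  sudo python setup.py install
  cp etc/keystone.conf.sample etc/keystone.conf   # then point [sql]/[ldap] at your backends
  keystone-manage db_sync
  keystone-all --config-file etc/keystone.conf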
Tim,
We currently run a bit more than 20k hypervisors in a single cell. We had
three major problems in getting this large: RPC, the DB and the scheduler. RPC is
solvable by getting away from the hub-and-spoke topology that brokered
messaging forces you into, i.e., use 0mq. The DB was overcome by a combination
of
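(Not from Mike's message; just a hedged illustration of the 0mq switch he
describes. Option names are from memory of the Grizzly/Havana oslo rpc code and
should be checked against your release.)

  # nova.conf on every node -- peer-to-peer RPC instead of a central broker
  rpc_backend = nova.openstack.common.rpc.impl_zmq
  rpc_zmq_host = node01.example.com    # this node's name, resolvable by its peers
  # a matchmaker (e.g. the ring file mapping topics to hosts) also has to be
  # configured so casts and fanouts know where to go

  # each node additionally runs the receiver daemon:
  nova-rpc-zmq-receiver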
Got it. By RPC I was referring to RabbitMQ in particular. That's also the
rationale that Rackspace presented at the Portland summit.
Subbu
On Oct 3, 2013, at 11:42 AM, Chris Behrens wrote:
>
> On Oct 3, 2013, at 10:23 AM, Subbu Allamaraju wrote:
>
>> Hi Tim,
>>
>> Can you comment on scalab
Chris,
Great to see further improvements are in the pipeline. The cinder support for
cells in Havana is a very welcome development.
For our Grizzly instance, we're also seeing some issues around flavors /
availability zone definition along with some ceilometer functions.
It would be good to h
On Oct 3, 2013, at 10:23 AM, Subbu Allamaraju wrote:
> Hi Tim,
>
> Can you comment on scalability more? Are you referring to just the RPC layer
> in the control plane?
Not just RPC, but RPC is a big one. Cells gives the ability to split up and
distribute work. If you divide hypervisors int
We've got several OpenStack clouds at CERN (details in
http://openstack-in-production.blogspot.fr/2013/09/a-tale-of-3-openstack-clouds-5.html).
The CMS farm was the furthest ahead and encountered problems with the number of
database connections at around 1300 hypervisors. Nova conductor help
Hi Tim,
I'd also like to know what happens above 1000 hypervisors that you think needs
cells?
From experience at y!, we actually start to see the nova-scheduler (and the
filter scheduler mainly) become the problem (at around ~400 hypervisors), and that
seems addressable without cells (yes, it requir
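(Not from the mail above; a hedged sketch of the kind of tuning that keeps the
filter scheduler usable at that scale. Option names are the flat Grizzly/Havana
nova.conf ones and should be checked against your release.)

  # nova.conf on the scheduler node -- run fewer, cheaper filters per request
  scheduler_driver = nova.scheduler.filter_scheduler.FilterScheduler
  scheduler_default_filters = RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter
  # running several nova-scheduler processes also spreads the load, at the cost
  # of occasional retries when two of them pick the same host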
On Oct 3, 2013, at 8:53 AM, Tim Bell wrote:
>
> At CERN, we’re running cells for scalability. When you go over 1000
> hypervisors or so, the general recommendation is to be in a cells
> configuration.
>
> Cells are quite complex and the full functionality is not there yet so some
> parts
Hi Tim,
Can you comment on scalability more? Are you referring to just the RPC layer in
the control plane?
Subbu
> On Oct 3, 2013, at 8:53 AM, Tim Bell wrote:
>
>
> At CERN, we’re running cells for scalability. When you go over 1000
> hypervisors or so, the general recommendation is to be
At CERN, we're running cells for scalability. When you go over 1000 hypervisors
or so, the general recommendation is to be in a cells configuration.
Cells are quite complex and the full functionality is not there yet so some
parts will need to wait for Havana.
Tim
From: Dmitry Ukov [mailto:du
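(Not from Tim's mail; a minimal sketch of what a cells v1 layout looks like in
nova.conf, assuming the Grizzly/Havana option names. The cell names are
placeholders.)

  # top-level (API) cell:
  [cells]
  enable = True
  name = api
  cell_type = api

  # each child (compute) cell:
  [cells]
  enable = True
  name = cell01
  cell_type = compute

  # the cells are then told about each other's RabbitMQ, e.g. with
  # "nova-manage cell create ..." run once in each direction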
Hi Anne, David,
Yes, the transparency committee update is on the board meeting agenda, but I
think the code of conduct for our Summit is a different issue.
The transparency committee had three main tasks: develop a written policy
documenting our approach to transparency, investigate creating an
Hello all,
I'm really interested in cells but unfortunately I can't find any useful
use cases for them.
For instance, I have 4 DCs and I need a single entry point for them. In this
case cells are a rather complicated solution. It's better to use multiple
regions in Keystone instead.
The only one good
Hi John,
please find my answers below - just to clarify, you are talking about the
storage for the instances, right?
On 3 Oct 2013, at 11:04, John Ashford wrote:
> 1 – GlusterFS vs. Ceph
> I'm reading a lot of different opinions about which of these is the best
> storage backend. My need is
Hello everyone,
This morning Nova, Neutron, Heat and Horizon all published their first
release candidates for the Havana release! You can download
the RC1 tarballs at:
https://launchpad.net/nova/havana/havana-rc1
https://launchpad.net/neutron/havana/havana-rc1
https://launchpad.net/h
On 02/10/13 22:49, James Page wrote:
>> sudo ip netns exec qrouter-d3baf1b1-55ee-42cb-a3f6-9629288e3221 \
>>   traceroute -n 10.5.0.2 -p 4 --mtu
>> traceroute to 10.5.0.2 (10.5.0.2), 30 hops max, 65000 byte packets
>>  1  10.5.0.2  0.950 ms F=1500
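(Not part of the quoted mail; the F=1500 in that traceroute output is the classic
symptom of tunnel-overhead MTU trouble, so here is a rough sketch of the usual
workaround of handing guests a smaller MTU through the DHCP agent. Paths and the
exact MTU value are assumptions to adapt.)

  # /etc/neutron/dnsmasq-neutron.conf
  dhcp-option-force=26,1454   # option 26 = interface MTU; leave room for GRE/VXLAN overhead

  # /etc/neutron/dhcp_agent.ini
  dnsmasq_config_file = /etc/neutron/dnsmasq-neutron.conf

  # then restart the DHCP agent and have the guests renew their leases
  service neutron-dhcp-agent restart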
On 3 October 2013 11:04, John Ashford wrote:
> 1 – GlusterFS vs. Ceph
>
> I'm reading a lot of different opinions about which of these is the best
> storage backend. My need is for a fully stable product that has fault
> tolerance built in. It needs to support maybe 400 low-traffic web sites and
> a
1 – GlusterFS vs. Ceph
I'm reading a lot of different opinions about which of these
is the best storage backend. My need is for a fully stable product that has
fault tolerance built in. It needs to support maybe 400 low-traffic web sites
and a few very high-traffic ones. I saw a Red Hat diag suggesting thr
Hello folks,
I'm writing to ask for some information about hostname management in
OpenStack.
I looked into AWS and, if I understood correctly, it implements two types
of hostnames: external and internal.
External hostnames are associated with public IPs (floating IPs in OpenStack
parlance), while
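(Just an illustration, not from the original mail: OpenStack's EC2-compatible
metadata service exposes the same hostname keys AWS does, which shows what the
cloud thinks the internal and external names are; unlike AWS, though, nothing
registers them in DNS for you by default.)

  # from inside a guest
  curl http://169.254.169.254/latest/meta-data/local-hostname
  curl http://169.254.169.254/latest/meta-data/public-hostname
  curl http://169.254.169.254/latest/meta-data/public-ipv4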
On 10/03/2013 03:12 AM, Clement Buisson wrote:
All these variables are correct, I just double-checked them.
This is really strange because it was working fine and stopped working
all of a sudden!
Your request says that you're not using the admin project (tenant):
"X-Auth-Project-Id: main"
It
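(A hypothetical sketch, not from the reply above; the names and URL are
placeholders. The point is that the tenant the client is scoped to is what ends
up in that X-Auth-Project-Id header, so it is worth checking the environment the
failing client runs with.)

  export OS_AUTH_URL=http://keystone.example.com:5000/v2.0
  export OS_USERNAME=clement
  export OS_PASSWORD=...
  export OS_TENANT_NAME=admin     # or whichever project the user actually has a role in
  keystone token-get              # quick sanity check that the scoping works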