Hi all !!
It seems that in 2012.1 the call
"*http://kstn:35357/v2.0/tokens/[userToken]/endpoints*" with the admin token
is not implemented.
So, how can I get the endpoints for a given token through the admin
port/token?
If I do *http://kstn:35357/v2.0/endpoints* I get the endpoint list, but wit
Sorry!
Surfing the code, I've found the "*belongsTo*" query string to ask for
the endpoints as well.
Thanks!
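For reference, a tiny helper that builds both forms of the admin-port call (a sketch; the host name and token values are placeholders, and belongsTo takes a tenant id):

```python
def endpoints_url(host, user_token, tenant_id=None):
    """Keystone v2.0 admin-port URL listing the endpoints visible to
    a user token; 'belongsTo' scopes the lookup to one tenant."""
    url = "http://%s:35357/v2.0/tokens/%s/endpoints" % (host, user_token)
    if tenant_id is not None:
        url += "?belongsTo=%s" % tenant_id
    return url
```

The request itself must also carry the admin token in an X-Auth-Token header.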
On 05/15/2012 03:30 PM, Alejandro Comisario wrote:
Hi all !!
It seems that in 2012.1 the call
"*http://kstn:35357/v2.0/tokens/[userToken]/endpoints*"
One of the things I don't like in Essex:
the "autostart" flag in nova.conf with KVM doesn't work with the
autostart feature of libvirt/kvm, so if, for some reason, you need to
restart nova-compute to apply some kind of modification, the instances get
soft/hard rebooted because now nova-compute ha
a-compute-kvm
> 2012.1+stable~20120612-3ee026e-0ubuntu1.3
> > 2012-08-23 06:34:35 upgrade nova-compute
> 2012.1+stable~20120612-3ee026e-0ubuntu1.2
> 2012.1+stable~20120612-3ee026e-0ubuntu1.3
> >
> > Here is detail:
> > http://pastebin.com/juiSxCue
> >
force a reboot every time nova is
> started, but the resume_ option will only attempt to reboot them if they
> are supposed to be running and the driver says they are not.
>
> Vish
>
> On Aug 30, 2012, at 10:38 AM, Alejandro Comisario <
> alejandro.comisa...@mercadolibre.com>
If you are on Essex, you can issue a "nova rescue"; if on Cactus, you have
to manipulate the "instances" table to tell it where the new instance will be
running, and then from the new compute node issue:
virsh define /path/to/XML
virsh start instance_name
From that moment, you can manage the ins
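The two virsh steps above can be wrapped in a small helper; a sketch assuming virsh is on PATH (the XML path and instance name are placeholders, and the dry_run flag just returns the commands without executing them):

```python
import subprocess

def redefine_and_start(xml_path, instance_name, dry_run=False):
    """Re-register a guest from its libvirt XML on the new compute
    node, then start it. Paths and names here are placeholders."""
    commands = [
        ["virsh", "define", xml_path],      # register the domain definition
        ["virsh", "start", instance_name],  # boot it on this node
    ]
    if not dry_run:
        for cmd in commands:
            subprocess.check_call(cmd)
    return commands
```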
Hi Cris, maybe your problem is related to this bug?
https://bugs.launchpad.net/ubuntu/+source/qemu-kvm/+bug/997978
Regards.
Ale
On Fri, Oct 5, 2012 at 8:44 AM, Christian Parpart wrote:
> Hey all,
>
> we're pretty happy with our new OpenStack Essex installation on top of
> Ubuntu 12.04 (hyperv
Hi Stackers !
This is the thing: today we have 24 datanodes (3 copies, 90TB usable);
each datanode has 2 Intel hexacore CPUs with HT and 96GB of RAM, plus 6
proxies with the same hardware configuration, using Swift 1.4.8 with
Keystone.
Regarding the networking, each proxy / datanode has a dual 1G
It's worth knowing that the objects in the cluster are going to range from
200KB at the biggest to 50KB at the tiniest.
Any considerations regarding this?
-
alejandrito
On Thu, Oct 11, 2012 at 8:28 PM, Alejandro Comisario <
alejandro.comisa...@mercadolibre.com> wrote:
> Hi Stackers !
>
Guys ??
Anyone ??
*Alejandro Comisario
#melicloud CloudBuilders*
Arias 3751, Piso 7 (C1430CRG)
Ciudad de Buenos Aires - Argentina
Cel: +549(11) 15-3770-1857
Tel : +54(11) 4640-8443
On Mon, Oct 15, 2012 at 11:59 AM, Kiall Mac Innes wrote:
> While I can't answer your question (I
ould need to be verified by
> your testing or an examination of the memcache module being used. An
> alternative would be to look at the way swift implements its memcache
> connections in an eventlet-friendly way (see
> swift/common/memcache.py:_get_conns() in the swift codebas
a single object, but those
> improvements have not been coded yet.
>
> --John
>
>
>
> On Oct 24, 2012, at 1:20 PM, Alejandro Comisario <
> alejandro.comisa...@mercadolibre.com> wrote:
>
> > Thanks Josh, and Thanks John.
> > I know it was an exciting
n keystone, and disks aren't too
> bad, I assume over 1000 op/s can be achieved with one proxy plus 5 storage
> nodes with your pattern.
>
> -ywang
>
> On 2012-10-25, at 1:56, Alejandro Comisario <
> alejandro.comisa...@mercadolibre.com> wrote:
>
> Guys ??
> Anyone ??
>
Hi guys.
When we have any kind of trouble, we hit the logs right away, and when
we see the stack traces, what I want to do is copy & paste the error and
wait for the "search engine" to do its job, since at this point I
consider myself a user, so I try to think like one, and most of the
time wha
Hi everyone !
Since we have been using Swift for some time now, we would like to know a few
things in depth about how some things actually work in Swift.
Imagine that the setup all my doubts refer to is as follows:
+ 2 proxyNodes
+ 10 dataNodes ( 5 zones )
So, let's get down to business.
# 1
Thanks for the answers John !
Below are a couple more questions.
On 01/03/2012 07:03 PM, John Dickinson wrote:
Answers inline.
On Jan 3, 2012, at 11:32 AM, Alejandro Comisario wrote:
So, let's get down to business.
# 1 we have memcache service running on each proxy, so as far as we know
could not comply with the
request since it is either malformed or otherwise incorrect.",
"code": 400}}
PS: the token is the one obtained from Keystone for tenant 5;
listing SGs is working OK, but neither creating a new SG nor adding a
new rule to that SG works.
The Content-Type header was missing !
Thanks !
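For reference, a request that includes the header could look like this (a sketch; the host, port 8774 and tenant id are illustrative, not taken from the thread):

```python
import json
import urllib.request

def create_secgroup_request(host, token, tenant_id, name, description):
    """Build the nova-api call that creates a security group.
    Host/tenant values are placeholders; the key detail is the
    Content-Type header, whose absence produced the 400 above."""
    body = json.dumps({"security_group": {
        "name": name, "description": description}}).encode()
    url = "http://%s:8774/v2/%s/os-security-groups" % (host, tenant_id)
    return urllib.request.Request(url, data=body, headers={
        "X-Auth-Token": token,
        "Content-Type": "application/json",  # missing header -> HTTP 400
    })
```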
Alejandro Comisario
Infrastructure IT - #melicloud Cloud Builder
Arias 3751, Piso 7 (C1430CRG)
Ciudad de Buenos Aires - Argentina
Cel: +549(11
esting ESSEX Milestone 3.
Cheers.
--
Alejandro Comisario
Infrastructure IT - #melicloud Cloud Builder
Arias 3751, Piso 7 (C1430CRG)
Ciudad de Buenos Aires - Argentina
Cel: +549(11) 15-
Hey guys, we finally got MultiZone working on Essex with KVM as
hypervisor (trunk version 2012.1-dev (2012.1-LOCALBRANCH:LOCALREVISION))
we have one Parent zone and one child zone (named "zoneE") all
integrated with keystone ( nova, glance ) and we are able to spawn
instances across both zones
ters/weight functions will still carry forward, so any
investments there won't be lost.
Stay tuned, we're hoping to get all this in a new blueprint soon.
Hope it helps,
Sandy
This would be amazing for those who use NetApp as the backend storage.
Getting the controller out as the iSCSI target, and letting the driver handle
one LUN for each nova-volume, is perfect.
I hope you make an exception to let OpenStack support such a big storage
solution behind nova-volume.
Best.
Hi guys.
It's true that we are trying to make multizones work; actually we did,
but we ran into a situation where listing all VMs from the parent zone (
where it has to go through all the child zones ) is buggy ( if not
impossible by now ).
So, if there is a new zone architecture that actually works
(distributed_scheduler_v2), we would rather do the modifications there.
That way we can minimize the chances of breakage
d) it needs to be merged by the 15th
Does that seem reasonable?
Vish
On Feb 1, 2012, at 1:42 PM, Alejandro Comisario wrote:
Hi guys.
It's true that we are trying to make multizones work, actually
Thierry et al.
Responses inline.
On 02/02/2012 06:03 AM, Thierry Carrez wrote:
Chris Behrens wrote:
Well, I can actually say with confidence that the replacement would be stable
by essex release. In fact, I expect the first commit to be a completely
working solution that solves a number of
Niceee !!
Alejandro.
On 02/09/2012 02:02 PM, Chris Behrens wrote:
I should be pushing something up by end of day... Even if it's not granted an
FFE, I'll have a need to keep my branch updated and working, so I should at
least always have a branch pushed up to a github account somewhere until
Hi openstack list.
Sorry to ask this, but I have a strong doubt about how the "endpoint"
config in Keystone actually works when you make a nova API call (we are
using Essex-3).
First, let me set up a use case :
user1 -> tenant1 -> zone1 (private nova endpoint)
user2 -> tenant2 -> zone2 (p
John, what I think would be terrific ( I hope it's not implemented; if it is,
I'm gonna feel a dunce ): for latency matters, suppose you have 4 zones, 2
on each datacenter, and on each datacenter you have 2 proxies, for example.
The idea would be some kind of mechanism to tell the ring
Nic, a good reason to wait for Grizzly (?) with our arms wide open!
Thanks for confirming that, John / Lean !
On Thu, Nov
Hi everyone.
We have a production Keystone (Essex 2012.1.3) pool composed of 10 servers
reading from the same database ( running MySQL Galera Cluster ).
We have other OpenStack parts of the infrastructure, like swift, monitored
over NewRelic ( python client ).
The thing is that we are trying to monitor
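Lacking an agent, a crude latency probe can be hand-rolled; a minimal sketch (the host, port and path are placeholders for a real deployment):

```python
import time
import urllib.request

def keystone_latency(host, port=35357, path="/v2.0/", timeout=5):
    """Wall-clock response time of a Keystone endpoint, in seconds.
    Any response, even an error, still yields a latency sample."""
    start = time.time()
    try:
        urllib.request.urlopen(
            "http://%s:%d%s" % (host, port, path), timeout=timeout)
    except Exception:
        pass  # connection errors still measure time-to-failure
    return time.time() - start
```

Sampling this periodically and shipping the numbers to whatever backend you use (NewRelic, statsd, etc.) gives a basic throughput/latency view.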
Has no one needed to monitor Keystone throughput and response times?
BUMP!
On Mon, Dec 3, 2012 at 5:01 PM, Alejandro
Hi guys, we are planning to upgrade our production cluster from 1.4.8 to
1.7.4 to get the several new features of the new version.
One of the main doubts before diving into this task is as follows :
Is it possible to use SWIFT 1.7.4 with Keystone/ESSEX ? Or is it a MUST to
have Keystone from the Folsom release ?
>> But I am fairly confident it would work just fine.
>>
>> -Matt
>>
>> On Tue, Dec 11, 2012 at 2:25 PM, Alejandro Comisario <
>> alejandro.comisa...@mercadolibre.com> wrote:
>>
>>> Hi guys, we are planning to upgrade our production clust
Hi guys.
We created a swift cluster several months ago; the thing is that right
now we can't add hardware, and we configured lots of partitions thinking
about the final picture of the cluster.
Today each datanode has 2500+ partitions per device, and even tuning
the background processes ( r
loss while doing this, but you will probably
> have availability issues, depending on the data access patterns.
> >
> > I'd like to eventually see something in swift that allows for changing
> the partition power in existing rings, but that will be
> hard/tricky/non-trivial.
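For intuition on where a figure like 2500+ comes from: the average partition count per device is fixed by the ring's partition power. A quick back-of-the-envelope (the numbers below are illustrative, not this cluster's actual ring):

```python
def partitions_per_device(part_power, replicas, devices):
    """Average number of ring partitions landing on each device.
    part_power is chosen at ring-build time and, as noted above,
    cannot easily be changed afterwards."""
    return (2 ** part_power) * replicas / devices

# e.g. part_power=17 with 3 replicas over 144 devices gives
# roughly 2730 partitions per device
```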
o latency over there.
Hope you guys can shed some light.
On Mon, Jan 14, 2013 at 1:23 PM, Chuck Thier wrote:
> Hi Alejan
ccount and container servers,
> workers=48 seems too high, which will increase contention on accessing
> account or container db.
>
> -ywang
>
> On 2013-1-15, at 4:01, Alejandro Comisario
> wrote:
>
> Chuck et All.
>
> Let me go through the point one by one.
>
> #1
Hi guys.
Maybe I should ask this on a KVM list, but it's worth the try.
We have lots of instances that some tenants spawn and leave floating in the
cloud "for testing purposes".
Since we know that tools like powertop and the like can help us, the idea is
to detect instances that are basically doing nothi
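One lightweight approach (a sketch, not the powertop method): sample each domain's cumulative CPU time twice, e.g. via `virsh cpu-stats --total`, and flag guests whose usage over the window is negligible. The threshold here is an assumption to be tuned:

```python
def is_idle(cpu_seconds_before, cpu_seconds_after,
            window_seconds, threshold=0.01):
    """Classify a guest as idle if it used less than `threshold`
    (fraction of one core) of CPU over the sampling window."""
    used = cpu_seconds_after - cpu_seconds_before
    return (used / window_seconds) < threshold
```

Usage: take the two samples `window_seconds` apart per domain, then report any domain for which is_idle() returns True.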
Hi guys, we are using Keystone 2012.1.4 in production, and we are streaming
the Keystone logs into Kafka, and we wanted to modify the log
content to add the tenant id into each line.
We saw in the code that Keystone leaves that duty to the python logging
module.
We tried several methods to mod
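One way to do it with the standard python logging module (a sketch: the attribute name is my own, and how the real tenant id is pulled from the request context is deployment-specific):

```python
import logging

class TenantFilter(logging.Filter):
    """Attach a tenant_id attribute to every record so the formatter
    can reference %(tenant_id)s in its format string."""
    def __init__(self, tenant_id="unknown"):
        logging.Filter.__init__(self)
        self.tenant_id = tenant_id  # placeholder; fetch from context in real code

    def filter(self, record):
        record.tenant_id = self.tenant_id
        return True  # never drop records, only annotate them

handler = logging.StreamHandler()
handler.setFormatter(
    logging.Formatter("%(asctime)s %(tenant_id)s %(message)s"))
log = logging.getLogger("keystone-demo")
log.addHandler(handler)
log.addFilter(TenantFilter("tenant-1234"))
```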