On Wed, Oct 24, 2012 at 4:45 PM, heckj wrote:
> "Specifically, I'm concerned with the way auth_token handles memcache
> connections. I'm not sure how well it will work in swift with eventlet. If
> the memcache module being used caches sockets, then concurrency in eventlet
> (different greenthreads
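(Not from the thread - a minimal sketch of the kind of guard that concern implies, assuming python-memcached, which caches one socket per server, plus eventlet: check a dedicated client out of a pool for each lookup so two greenthreads never interleave traffic on one connection. Names and addresses here are illustrative, not swift's actual middleware code.)

    import eventlet
    eventlet.monkey_patch()

    import memcache                      # python-memcached: caches one socket per server
    from eventlet.pools import Pool

    class MemcacheClientPool(Pool):
        # each pooled item owns its own client (and therefore its own sockets)
        def create(self):
            return memcache.Client(['127.0.0.1:11211'])

    pool = MemcacheClientPool(max_size=16)

    def lookup_token(key):
        # hold one client for the whole call so concurrent greenthreads
        # never share a socket mid-request
        with pool.item() as client:
            return client.get(key)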
On 10/25/2012 06:13 PM, Chander Kant wrote:
Sure. We have published a new blog related to the summit, including a
link to our presentation slides:
http://www.zmanda.com/blogs/?p=971
http://www.zmanda.com/pdf/how-swift-is-your-Swift-SD.pdf
We plan to publish more performance results within next
On Wed, Oct 24, 2012 at 4:19 PM, Sina Sadeghi wrote:
> The guys from Zmanda presented some evaluation of swift at the summit,
> which might be useful here:
>
> http://www.zmanda.com/blogs/?p=947 - they've written a blog, but it doesn't
> have all the findings they presented at the summit.
>
> Maybe
On 10/24/2012 07:45 PM, heckj wrote:
John brought the concern over auth_token middleware up to me directly -
I don't know of anyone that's driven the keystone middleware to these rates and
determined where the bottlenecks are, other than folks deploying swift and
driving high performance numbers.
The concern that John detailed to
The guys from Zmanda presented some
evaluation of swift at the summit, which might be useful here:
http://www.zmanda.com/blogs/?p=947 - they've written a blog but it
doesn't have all the findings they presented at the summit.
Maybe Chander would be will
Wow, nice. I think we have a lot to look at, guys.
I'll get back to you as soon as we have more metrics to share regarding
this matter.
Basically, we are going to try adding more proxies, since indeed the
requests are too small (20K, not 20MB).
Thanks guys!
---
Alejandrito
On Wed, Oct 24, 20
Smaller requests, of course, will have a higher percentage of overhead per
request, so you will need more proxies for many small requests than for the
same number of larger requests (all other factors being equal).
If most of the requests are reads, then you probably won't have to worry about
key
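(A minimal back-of-envelope sketch of that sizing logic, not from the thread; the per-proxy figure is an assumption you would replace with a rate measured on your own hardware.)

    import math

    target_rpm = 200000                    # upper end of the peak rate in the thread
    req_per_sec = target_rpm / 60.0        # ~3333 req/s cluster-wide

    per_proxy_req_s = 600.0                # ASSUMED sustained rate of one proxy node
    proxies_needed = int(math.ceil(req_per_sec / per_proxy_req_s))   # -> 6 here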
On Oct 11, 2012, at 4:28 PM, Alejandro Comisario wrote:
Hi Stackers !
This is the thing: today we have 24 datanodes (3 copies, 90TB
usable); each datanode has 2 Intel hexacore CPUs with HT and 96GB
of RAM, and 6 proxies with the same hardware configuration, using
swift 1.4.8 with keystone. R
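(Not from the thread: just the raw-capacity arithmetic those figures imply, assuming equal-sized datanodes.)

    usable_tb = 90
    replicas = 3
    datanodes = 24

    raw_total_tb = usable_tb * replicas                  # 270 TB of raw disk cluster-wide
    raw_per_node_tb = raw_total_tb / float(datanodes)    # ~11.25 TB of raw disk per datanode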
Thanks Josh, and Thanks John.
I know it was an exciting Summit! Congrats to everyone !
John, let me give you extra data, and something that I've already said which
might be wrong.
First, the request sizes that will compose the 90,000 RPM - 200,000 RPM will
be 90% 20K objects and 10% 150/200K o
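(Illustrative arithmetic, not from the thread: what that mix implies for aggregate data volume, taking 175K as a rough midpoint for the larger objects.)

    avg_obj_kb = 0.9 * 20 + 0.1 * 175                    # ~35.5 KB weighted average size

    peak_req_s = 200000 / 60.0                           # ~3333 req/s at the top of the range
    aggregate_mb_s = peak_req_s * avg_obj_kb / 1024.0    # ~115 MB/s cluster-wide

    # so the proxies are far more likely to be limited by per-request overhead
    # (auth_token lookups, connection handling) than by raw bandwidth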
Sorry for the delay. You've got an interesting problem, and we were all quite
busy last week with the summit.
First, the standard caveat: Your performance is going to be highly dependent on
your particular workload and your particular hardware deployment. 3500 req/sec
in two different deploymen
Guys ??
Anyone ??
Alejandro Comisario
#melicloud CloudBuilders
Arias 3751, Piso 7 (C1430CRG)
Ciudad de Buenos Aires - Argentina
Cel: +549(11) 15-3770-1857
Tel : +54(11) 4640-8443
On Mon, Oct 15, 2012 at 11:59 AM, Kiall Mac Innes wrote:
While I can't answer your question (I've never used swift) - it's worth
mentioning many of the openstack folks are en-route/at the design summit.
Also - you might have more luck on the openstack-operators list, rather
than the general list.
Kiall
On Oct 15, 2012 2:57 PM, "Alejandro Comisario" <
a
It's worth knowing that the objects in the cluster are going to range from
50KB at the smallest to 200KB at the biggest.
Any considerations regarding this?
-
alejandrito
On Thu, Oct 11, 2012 at 8:28 PM, Alejandro Comisario <alejandro.comisa...@mercadolibre.com> wrote:
> Hi Stackers !
> This is the