If so, let me rephrase it... having 16 GB of data and having to access all
16 GB at one time is a daunting task. If you have 16 GB of data and you
work on only a piece of it, let's say an indexed tree where a client
requests just a node, you can try Redis. The problem arises only
when you have to put all 16 GB into the Python process's memory
at once.
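A minimal sketch of that pattern, assuming one key per tree node with JSON values (a plain dict stands in for a real Redis connection here; with redis-py you would call r.set/r.get against a server, and the key scheme and helper names are mine):

```python
import json

# A dict standing in for Redis: the point is that each node lives under
# its own key, so a client request pulls in one node, not the whole tree.
store = {}

def save_node(node_id, value, children):
    """Serialize one tree node as JSON under a per-node key."""
    store["node:%s" % node_id] = json.dumps(
        {"value": value, "children": children})

def load_node(node_id):
    """Fetch and deserialize a single node; the rest stays out of process memory."""
    raw = store.get("node:%s" % node_id)
    return json.loads(raw) if raw is not None else None

# Build a tiny indexed tree: root with two leaves.
save_node("root", "r", ["a", "b"])
save_node("a", "left", [])
save_node("b", "right", [])
```

Swapping the dict for a redis.Redis() client keeps the same access pattern, but the 16 GB then sits in the Redis server rather than in each Python worker.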


On 6 Feb, 22:09, "Sebastian E. Ovide" <sebastian.ov...@gmail.com>
wrote:
> On Mon, Feb 6, 2012 at 7:39 PM, Bruno Rocha <rochacbr...@gmail.com> wrote:
> > 16 GB shared across requests is called "Database", to run a memory like
> > database you should go with Redis!
>
> :D it sounds like a lot... but it is not anymore... especially if you want
> to serve a lot of requests in realtime !
>
> we are using two machines with 36G for a real commercial application. We
> use so much memory to implement a tree for fast search of addresses
> using phonetics (28M addresses)... using Oracle (a big machine optimized by
> two DBA experts) was too slow for us (around 1 second per query)... A big
> improvement was obtained using special indexes (created by Lucene) stored
> on SSD... but still too slow for us... so the only solution was to use a
> "special" tree all in memory....
>
> just investigating whether there is something in the open source world
> that could save us the WebLogic licences....
>
> --
> Sebastian E. Ovide
