I've rewritten the PHP library to use keep-alives (and various other
tweaks). Let me see what I can do about releasing the code.

The current PHP library instantiates a new curl handle for each request,
which makes it less than optimal.
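
As a rough sketch (hypothetical code, not the library's actual internals,
and assuming a local Riak node on its default HTTP port), the change
amounts to creating one curl handle up front and reusing it for every
request, so libcurl keeps the TCP connection to Riak open instead of
paying a handshake per write:

    <?php
    // Hypothetical sketch: one shared curl handle = one kept-alive connection.
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_CUSTOMREQUEST, 'PUT');
    curl_setopt($ch, CURLOPT_HTTPHEADER, array('Content-Type: application/json'));

    function store($ch, $bucket, $key, $json) {
        // Reuse the handle instead of curl_init()ing a new one per request.
        curl_setopt($ch, CURLOPT_URL, "http://127.0.0.1:8098/riak/$bucket/$key");
        curl_setopt($ch, CURLOPT_POSTFIELDS, $json);
        return curl_exec($ch);
    }

    for ($i = 0; $i < 1000; $i++) {
        store($ch, 'test', "key$i", json_encode(array('n' => $i)));
    }
    curl_close($ch);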

Mark Steele
Bering Media Inc.


On Fri, Apr 8, 2011 at 12:23 PM, Gui Pinto <gpi...@chitika.com> wrote:

> Hey Everyone, thanks for all of the recommendations.
>
> I've tried importing using the example load_data script
> <http://wiki.basho.com/Loading-Data-and-Running-MapReduce-Queries.html>
> available on the Fast Track, and have most recently tried the PHP library.
>
> Both of these execute a straightforward curl -X PUT request, which makes
> me think Mark might have guessed it: keep-alive not being used would
> certainly explain the 200-writes/second cap.
>
> I'm going to take a look into the PHP library and test this theory.
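>
> As a quick standalone check (a sketch with assumed names, not the
> library's code, and assuming a local Riak node on its default HTTP port):
> issue two PUTs over one curl handle with CURLOPT_VERBOSE on, and libcurl
> will print "Re-using existing connection" on the second request if
> keep-alive is working.
>
>     <?php
>     $ch = curl_init();
>     curl_setopt($ch, CURLOPT_VERBOSE, true);  // logs connection reuse to stderr
>     curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
>     curl_setopt($ch, CURLOPT_CUSTOMREQUEST, 'PUT');
>     curl_setopt($ch, CURLOPT_HTTPHEADER, array('Content-Type: text/plain'));
>     curl_setopt($ch, CURLOPT_POSTFIELDS, 'hello');
>     foreach (array('a', 'b') as $key) {
>         curl_setopt($ch, CURLOPT_URL, "http://127.0.0.1:8098/riak/test/$key");
>         curl_exec($ch);  // the second PUT should reuse the first socket
>     }
>     curl_close($ch);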
>
> Gui Pinto
> Software Engineer at Chitika
>
>
>
> On Fri, Apr 8, 2011 at 10:01 AM, Mark Steele <mste...@beringmedia.com> wrote:
>
>> If using HTTP, make sure you're using keep-alives. That will be a gigantic
>> speed boost.
>>
>> The protocol buffer API is much faster if your client language
>> supports it.
>>
>>
>> Mark Steele
>> Bering Media Inc.
>>
>>
>>
>> On Thu, Apr 7, 2011 at 10:58 PM, matthew hawthorne
>> <mhawtho...@gmail.com> wrote:
>>
>>> Hi Gui,
>>>
>>> I recently pushed 70 million records of size 1K each into a 5-node
>>> Riak cluster (which was replicating to another 5-node cluster) at
>>> around 1000 writes/second using basho_bench and the REST interface.  I
>>> probably could have pushed it further, but I wanted to confirm that it
>>> could maintain the load for the entire data set, which it did.
>>>
>>> My point is that your 200 writes/second speed limit is likely
>>> specific to your configuration.
>>>
>>> I wonder:
>>> 1) what's your average write latency?
>>> 2) how big is your connection pool?
>>>
>>> Because it's possible that you don't have enough connections available
>>> to handle your desired load.
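>>>
>>> As a hypothetical back-of-the-envelope check: writes issued serially
>>> over a single connection are capped at roughly 1 / average latency, so
>>> a 5 ms round trip gives 1 / 0.005 = 200 writes/second, exactly that
>>> ceiling. N concurrent connections at the same latency allow about N
>>> times that throughput.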
>>>
>>> -matt
>>>
>>>
>>> On Thu, Apr 7, 2011 at 6:01 PM, Gui Pinto <gpi...@chitika.com> wrote:
>>> > Hey guys,
>>> > I'm attempting to import 300M+ objects into a Riak cluster, but have
>>> > quickly hit the REST API's speed limit at about 200 store() calls per
>>> > second. At the rate of 200/s, I'm looking at roughly 20 days to import
>>> > this data set! That can't be the fastest way to do this.
>>> >
>>> > Any recommendations?
>>> >
>>> > Thanks!
>>> > Gui Pinto
>>> >
>>>
>>
>>
>
_______________________________________________
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
