I'm thinking that was more a problem with how Ripple used curl than a problem
with the curl client itself, as it appears the issue was the same handle being
reused for additional requests before the initial request had actually
completed, correct?
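
For clarity, the pattern I'm describing is one curl handle per request,
something along these lines (a rough, untested sketch, not actual client code;
the host, port, bucket, and key are placeholders):

    <?php
    // Sketch: one curl handle per request, closed as soon as the request
    // finishes, so interleaved requests never share a handle that still has
    // a response in flight. Host/port assume a default local Riak node.
    function riak_delete($bucket, $key)
    {
        $url = 'http://127.0.0.1:8098/riak/'
             . rawurlencode($bucket) . '/' . rawurlencode($key);
        $ch = curl_init($url);
        curl_setopt($ch, CURLOPT_CUSTOMREQUEST, 'DELETE');
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_exec($ch);
        $status = curl_getinfo($ch, CURLINFO_HTTP_CODE);
        curl_close($ch); // never reuse this handle for another request
        return $status;  // 204 on success, 404 if the key wasn't there
    }

Reusing a handle is only safe once the previous transfer has fully completed,
which is exactly what the streamed-response case violates.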

Jonathan Langevin
Systems Administrator
Loom Inc.
Wilmington, NC: (910) 241-0433 - jlange...@loomlearning.com -
www.loomlearning.com - Skype: intel352


On Thu, Jul 28, 2011 at 5:20 PM, Sean Cribbs <s...@basho.com> wrote:

> Not sure if it's a problem in PHP (and this should go without saying), but
> make sure to open a new connection when performing additional client requests
> (like a delete) while receiving a streamed response. Otherwise you might run
> into a case of "accidental concurrency" like I did (see also
> http://seancribbs.com/tech/2011/03/08/how-ripple-uses-fibers/). If your
> client has a proper connection pool/stack, this should not be a problem.
>
>
> On Thu, Jul 28, 2011 at 5:11 PM, Jonathan Langevin <
> jlange...@loomlearning.com> wrote:
>
>> Ah, nice. I checked the JS client, which mentions that ?keys=stream is a
>> valid option for streaming key listings.
>> I'll get that implemented in my PHP client. Thanks for the pointer :-)
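
(Inline note: here is roughly what I have in mind for the PHP client. It's an
untested sketch; the node address and bucket name are made up, and a real
version would need to buffer partial chunks, since a chunk boundary can split
a JSON object.)

    <?php
    // Sketch: stream the key listing instead of buffering it all at once.
    // Riak's HTTP interface emits chunked JSON objects like {"keys":[...]}.
    $keys = array();
    $ch = curl_init('http://127.0.0.1:8098/riak/mybucket?keys=stream&props=false');
    curl_setopt($ch, CURLOPT_WRITEFUNCTION, function ($ch, $chunk) use (&$keys) {
        $part = json_decode($chunk, true);
        if (isset($part['keys'])) {
            $keys = array_merge($keys, $part['keys']);
        }
        return strlen($chunk); // tell curl the chunk was consumed
    });
    curl_exec($ch);
    curl_close($ch);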
>>
>> Jonathan Langevin
>> Systems Administrator
>> Loom Inc.
>> Wilmington, NC: (910) 241-0433 - jlange...@loomlearning.com -
>> www.loomlearning.com - Skype: intel352
>>
>>
>> On Thu, Jul 28, 2011 at 4:42 PM, Jeremiah Peschka <
>> jeremiah.pesc...@gmail.com> wrote:
>>
>>> Depending on the client you're using, you can stream results back and
>>> process them in chunks rather than waiting for the full response to be
>>> buffered.
>>>
>>> It's easy enough to write something like this using Ripple or
>>> CorrugatedIron. I'm guessing it's possible with other clients.
>>> ---
>>> Jeremiah Peschka
>>> Founder, Brent Ozar PLF, LLC
>>>
>>> On Jul 28, 2011, at 1:40 PM, Jonathan Langevin wrote:
>>>
>>> > I've read on the wiki that the only way to delete a bucket is to
>>> > manually delete all of the keys within it.
>>> > So what is the recommended process for manually deleting all keys within
>>> > a bucket?
>>> >
>>> > I was initially listing all keys in the bucket and then iterating over
>>> > them to send delete requests, but I hit a wall once there were too many
>>> > keys to return in a single list request (I received "header too large"
>>> > errors).
>>> >
>>> > So I assume the alternative would be to run a MapReduce job to pull keys
>>> > from the bucket with a specified limit, and then execute the deletes?
>>> > While that's fine for an "active record" style environment (where cleanup
>>> > actions may need to run per deleted object), is there another method for
>>> > deleting all keys within a bucket in bulk? (Maybe via a map call?)
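
(Inline note: by "run a mapreduce to pull keys" I was picturing something like
the sketch below. It's untested, the bucket name and node address are made up,
and as far as I can tell a bucket-input MapReduce still performs the full key
listing under the covers, so it doesn't sidestep that cost.)

    <?php
    // Sketch: ask the MapReduce endpoint for the keys in a bucket, then
    // delete them one at a time.
    $query = json_encode(array(
        'inputs' => 'mybucket',
        'query'  => array(array('map' => array(
            'language' => 'javascript',
            'source'   => 'function(value) { return [value.key]; }',
            'keep'     => true,
        ))),
    ));

    $ch = curl_init('http://127.0.0.1:8098/mapred');
    curl_setopt($ch, CURLOPT_POST, true);
    curl_setopt($ch, CURLOPT_HTTPHEADER, array('Content-Type: application/json'));
    curl_setopt($ch, CURLOPT_POSTFIELDS, $query);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    $keys = json_decode(curl_exec($ch), true);
    curl_close($ch);

    foreach ((array) $keys as $key) {
        riak_delete('mybucket', $key); // fresh handle per delete, as in my note up top
    }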
>>> >
>>> >
>>> > Jonathan Langevin
>>> > Systems Administrator
>>> > Loom Inc.
>>> > Wilmington, NC: (910) 241-0433 - jlange...@loomlearning.com -
>>> > www.loomlearning.com - Skype: intel352
>>> >
>>>
>>>
>>
>>
>>
>
>
> --
> Sean Cribbs <s...@basho.com>
> Developer Advocate
> Basho Technologies, Inc.
> http://www.basho.com/
>
>
_______________________________________________
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
