I'm wondering if we could get a 1.1.2 version bump pretty soon. Not being able
to do 2i over PBC with 1.1.1 is rather painful and I kind of need a released
version to send it to production.
Thanks,
Sean McKibben
On Jan 10, 2013, at 2:05 PM, Sean Cribbs wrote:
> Hey riak-users,
>
> 1.0 release:
> https://github.com/basho/riak-ruby-client/commit/4fe52756d7df6ee494bfbc40552ec017f3ff4da4
>
> On Wed, Apr 3, 2013 at 3:35 PM, Sean McKibben wrote:
>> I'm wondering if we could get a 1.1.2 version bump pretty soon. Not being
>> able to do 2i over PBC with 1.1.1 …
We just upgraded to 1.4 and are having a big problem with some of our larger 2i
queries. We have a few key queries that take longer than 60 seconds (usually
about 110 seconds) to execute, but after going to 1.4 we can't seem to get
around a 60 second timeout.
I've tried:
curl -H "X-Riak-Timeout: ..."
> The timeout isn't via a header, it's a query param -> &timeout=
>
> You can also use stream=true to stream the results.
>
> - Roach
>
> Sent from my iPhone
>
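Per the advice above, the 2i timeout moved from a header to a query parameter, and &stream=true streams results. A minimal Ruby sketch of building such a request URL (the `twoi_url` helper, host, port, bucket, and index names are all illustrative assumptions, not riak-ruby-client API):

```ruby
require "uri"

# Hedged sketch: per the thread, the 2i timeout is a query parameter
# in milliseconds, not an X-Riak-Timeout header, and stream=true
# streams results. All names below are placeholders.
def twoi_url(host, bucket, index, value, timeout_ms: 120_000, stream: true)
  query = URI.encode_www_form(timeout: timeout_ms, stream: stream)
  "http://#{host}:8098/buckets/#{bucket}/index/#{index}/#{value}?#{query}"
end

puts twoi_url("localhost", "mybucket", "myindex_bin", "someval")
# => http://localhost:8098/buckets/mybucket/index/myindex_bin/someval?timeout=120000&stream=true
```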
> On Jul 26, 2013, at 3:43 PM, Sean McKibben wrote:
>
>> We just upgraded to 1.4 and are having a big problem with some of our
>> larger 2i queries.
>
> In the meantime, I wonder if streaming the results would help, or if you'd
> still hit the overall timeout?
>
> Very sorry that you've run into this. Let me know if streaming helps, I've
> raised an issue here[1] if you want to track this bug
>
> …results. Breaking your query up that
> way should duck the timeout.
>
> Furthermore, adding &stream=true will mean the first result is received very
> rapidly.
>
> I don't think the Ruby client is up to date for the new 2i features, but you
>> could monkeypatch as be…
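The "break your query up" advice above can be sketched as a loop over the continuation token that Riak 1.4 returns alongside max_results pages. Since (as noted) the Ruby client didn't yet expose the new 2i features, `fetch` below is a hypothetical stand-in for the actual HTTP call, not client API:

```ruby
# Hedged sketch of paging a 2i query with max_results plus the
# continuation token from Riak 1.4. `fetch` is assumed to perform the
# HTTP request with max_results/continuation on the query string and
# return a hash like { "keys" => [...], "continuation" => "..." or nil }.
def each_page(max_results: 1000, fetch:)
  continuation = nil
  loop do
    page = fetch.call(max_results, continuation)
    yield page["keys"]
    continuation = page["continuation"]
    break if continuation.nil? # no token means we've seen the last page
  end
end
```

Each page is a separate short query, so no single request runs long enough to hit the 60-second timeout.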
This same thing is happening to me, where both $bucket index and my own custom
indexes are returning keys that have been deleted and I can’t remove them.
I am hoping there is a way to fix this as it is causing significant problems
for us in production. It seems to be happening with some frequency. Was this
resolved in 1.4.6 or 1.4.7?
The only thing we can think of at this point might be to remove or force-remove the
member and join in a new freshly built one, but last time we attempted that (on
a different cluster) our secondary indexes got irreparably damaged and only
regained consistency when we c…
…removing the +S.
>
> And finally, those 2i queries that return "millions of results" … how long do
> those queries take to execute?
>
> Matthew
>
>> On Jan 9, 2014, at 9:33 PM, Sean McKibben wrote:
>>
>> We have a 5 node cluster using eLevelDB (1.4.2
…us. I think
the automation will significantly decrease the number of animal sacrifices
needed to appease the LevelDB gods! :)
Sean McKibben
On Jan 10, 2014, at 9:18 AM, Matthew Von-Maszewski wrote:
> Attached is the spreadsheet I used for deriving the cache_size and
> max_open_files
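As one illustration of where the derived numbers land, a hedged app.config fragment for the eleveldb section (all values are placeholders; the real cache_size and max_open_files should come from the spreadsheet for your RAM and vnode count):

```erlang
%% app.config fragment - illustrative values only
{eleveldb, [
    {data_root, "/var/lib/riak/leveldb"},
    {cache_size, 8388608},   %% per-vnode block cache, in bytes
    {max_open_files, 50}     %% per-vnode open-file budget
]}
```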
+1. LevelDB backup information is important to us.
On Jan 20, 2014, at 4:38 PM, Elias Levy wrote:
> Anyone from Basho care to comment?
>
>
> On Thu, Jan 16, 2014 at 10:19 AM, Elias Levy
> wrote:
>
> Also, while LevelDB appears to be largely an append-only format, the
> documentation currently…
…EventMachine that requires less manual intervention?
Sorry if this has been covered somewhere else but I haven't had much luck
finding anyone else using EM with Riak.
Thanks,
Sean McKibben
___
riak-users mailing list
riak-users@lists