Afternoon, Evening, Morning to all,
To everyone in the States: hope you had a relaxing and enjoyable
holiday weekend.
Apologies for the lack of Recap this past Friday.
For today's recap we have a script for deleting keys from a bucket,
some updates to Ripple, some Riak + Drupal developments, a f
Jon -
I think I can explain the behavior you're experiencing. During work leading up
to the 0.13 release the logic around how and when reduce functions are called
was subtly changed. The change caused reduce functions to be called with empty
input at the end of a job immediately prior to the r
Interesting.
I only ran into this issue once, but I remember what I was doing. So
I'll try to reproduce it later tonight, and if I succeed I'll send you
more info -- to eventually pass on to the Erlang folks.
Thanks,
Francisco
2010/11/29 David Smith :
> Thanks for the info.
>
> A more careful
Hi Jon,
If the map function is only returning '1' for each object, there is no need
to distinguish between map inputs and reduce inputs.
For example, imagine we have 5 objects in our "test" bucket and the initial
batch of results to reduceSum only includes 3 of the objects:
[1,1,1] -> reduceSum ->
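Picking up the truncated example: a reduce function like reduceSum has to tolerate being fed its own earlier output alongside fresh map results. A minimal sketch in Python (the name reduce_sum and the batch split are illustrative, not actual client code):

```python
def reduce_sum(values):
    """Re-reduce-safe sum: because each map output is the number 1 and
    each earlier reduce output is a partial sum, summing works whether
    the input holds raw map results, prior reduce results, or a mix."""
    return [sum(values)]

# First batch: 3 of the 5 objects reach the reduce phase.
partial = reduce_sum([1, 1, 1])       # [3]
# Later batch: the earlier output is folded in with the remaining two.
total = reduce_sum(partial + [1, 1])  # [5]
```

The same folding happens however Riak splits the batches, which is exactly why a plain sum survives re-reduce.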
Thanks for the info.
A more careful reading of EAGAIN related messages says that:
"A temporary resource shortage made an operation impossible. fork can
return this error. It indicates that the shortage is expected to pass,
so your program can try the call again later and it may succeed. It is
pro
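The advice in that quoted passage — the shortage is expected to pass, so retry the call — can be sketched generically in Python (the helper name, attempt count, and backoff delay are made up for illustration):

```python
import errno
import time

def retry_on_eagain(call, attempts=5, delay=0.1):
    """Retry a syscall-wrapping callable when it fails with EAGAIN,
    on the assumption that the resource shortage is temporary.
    Any other errno, or exhausting the attempts, re-raises."""
    for attempt in range(attempts):
        try:
            return call()
        except OSError as exc:
            if exc.errno != errno.EAGAIN or attempt == attempts - 1:
                raise
            time.sleep(delay)
```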
Woops, sorry for misspelling your name in the previous email.
Daniel Reverri
Developer Advocate
Basho Technologies, Inc.
d...@basho.com
On Mon, Nov 29, 2010 at 11:00 AM, Dan Reverri wrote:
> Johan,
>
> Dan is correct; Riak does not support nested buckets.
>
> Thanks,
> Dan
>
> Daniel Reverri
>
Johan,
Dan is correct; Riak does not support nested buckets.
Thanks,
Dan
Daniel Reverri
Developer Advocate
Basho Technologies, Inc.
d...@basho.com
On Mon, Nov 29, 2010 at 10:59 AM, Dan Young wrote:
> Nested buckets are not supported...to the best of my knowledge.
>
> Regards,
>
> Dan
>
>
> O
Nested buckets are not supported...to the best of my knowledge.
Regards,
Dan
On Mon, Nov 29, 2010 at 11:51 AM, Jonah Crawford wrote:
> Hi all,
>
> The Python client tests show only the creation of a bucket at the first URI
> namespace beyond riak
>
> http://127.0.0.1:8091/riak/myspecialbucket
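Since Riak's buckets are flat, one common client-side workaround (a sketch under that assumption, not an official Riak API) is to encode the would-be hierarchy in the key itself:

```python
def nested_key(*segments):
    """Encode a pseudo-hierarchy in a single flat key by joining path
    segments; listing a "subtree" then becomes a key-prefix match done
    client-side or in a map/reduce filter (illustrative helper only)."""
    return "/".join(segments)

# One real bucket, with the rest of the path folded into the key:
# bucket "myclientbucket", key "irs/filings/10K/2010/Q1"
key = nested_key("irs", "filings", "10K", "2010", "Q1")
```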
2010/11/29 David Smith :
> It looks like you had an interrupted syscall (!!) that wasn't handled
> by Erlang (which I find rather hard to believe). What version of Riak
> and what operating system are you using, please?
=> bin/riaksearch-admin status
1-minute stats for 'riaksea...@127.0.0.1'
---
On Nov 29, 2010, at 12:15 PM, Dan Reverri wrote:
> Note the following from the article:
> "The important thing to understand is that the function defining the reduce
> phase may be evaluated multiple times, and the input of later evaluations
> will include the input of earlier evaluations."
>
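The quoted caveat is easy to trip over: a reduce function that does not treat its own earlier output as valid input gives wrong answers when re-evaluated. A Python sketch of the pitfall (function names invented for illustration):

```python
def reduce_count_broken(values):
    """Counts its inputs -- wrong under re-reduce, because an earlier
    output like [3] is a partial count, not three separate items."""
    return [len(values)]

def reduce_count_safe(values):
    """Treats every input as a partial count; map outputs then emit 1
    per object, so raw and re-reduced inputs mix safely."""
    return [sum(values)]

# Re-reduce over an earlier result plus two new map outputs:
broken = reduce_count_broken(reduce_count_broken([1, 1, 1]) + [1, 1])  # [3], not [5]
safe = reduce_count_safe(reduce_count_safe([1, 1, 1]) + [1, 1])        # [5]
```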
Hi all,
The Python client tests show only the creation of a bucket at the first URI
namespace beyond riak
http://127.0.0.1:8091/riak/myspecialbucket
I'm looking to do something like this:
http://127.0.0.1:8091/riak/myclientbucket/irs/filings/10K/2010/Q1/
Would like to know how to create neste
If I continuously read from the node that I am rebooting, the request made
to that node hangs until the client times out; subsequent requests receive a
"Failed to connect" error.
I am using curl for my tests.
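One way to make those two failure modes visible from a test client (a Python sketch with stdlib urllib standing in for curl; the URL and timeout values are placeholders) is to bound every request with an explicit timeout:

```python
import urllib.error
import urllib.request

def probe(url, timeout=2.0):
    """GET a node with a bounded timeout: a node that accepts the
    connection but never answers surfaces as a timeout, while a node
    that is fully down fails fast with a connection error."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return ("ok", resp.status)
    except urllib.error.URLError as exc:
        return ("error", exc.reason)
    except OSError:
        return ("timeout", None)
```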
Thanks,
Dan
Daniel Reverri
Developer Advocate
Basho Technologies, Inc.
d...@basho.com
Jon -
I am digging into your MapReduce bug right now and will share the results of my
pokings shortly.
--Kevin
On Nov 29, 2010, at 10:08 AM, Jon Brisbin wrote:
>
> On Nov 29, 2010, at 8:59 AM, David Smith wrote:
>
>> On Mon, Nov 29, 2010 at 7:43 AM, Jon Brisbin
>> wrote:
>>> I'm working on
On Mon, Nov 29, 2010 at 8:08 AM, Jon Brisbin
wrote:
>
> There are two problems I'm seeing: one is with map/reduce; the other is that
> the server will crash under the load the tests put on it (I'll send a
> separate email with steps to reproduce).
I'd be very interested to see this crash. The onl
You may have mentioned which client you are using (the thread is deep
already) but I would think that this is a client implementation
problem. As in some sort of connection pooling thing. Try calling curl
from a sleep loop in a shell script and see what happens.
-Alexander
On Mon, Nov 29, 2010 at
Hi Dan,
Sorry for not getting back to you sooner. There is no multi-set feature in
Riak; each object would need to be updated with a separate request.
While it is possible to modify Riak objects from within an Erlang map
function, the map/reduce functionality was not intended for this type of
ope
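Lacking a multi-set, a batch update is just a loop of single requests. A Python sketch (client here is any object with a put(bucket, key, value) method — a stand-in, not the real Riak client API):

```python
def update_each(client, bucket, updates):
    """Apply a batch of updates as one request per object, since Riak
    exposes no multi-set operation; handling partial failure (retries,
    collecting errors) is left to the caller."""
    for key, value in updates.items():
        client.put(bucket, key, value)
```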
It looks like you had an interrupted syscall (!!) that wasn't handled
by Erlang (which I find rather hard to believe). What version of Riak
and what operating system are you using, please?
Also, this could be related to ulimit -n -- what do you set that to?
Thanks,
D.
On Mon, Nov 29, 2010 at 11
Some days ago, Riak crashed on my dev machine. All I know is that
several concurrent writes could have been happening at the time of the
crash (possibly around 30).
https://gist.github.com/717015
Could somebody enlighten me here... what exactly went wrong?
Thanks,
Francisco
Hm, that's curious. Are you rebooting the physical machine? When you
reboot one of the nodes, what happens to HTTP calls to that node? Do they
immediately error, or do they hang indefinitely?
In the meantime, I'll add some logging so I can see whether I'm timing out
on the writes as well, and
Hi Jon,
It looks like Riak 0.13 may be running an initial empty reduce phase; we'll
look into this issue. With that said, the reduce function you have defined
does not account for "re-reduce". The following article explains how reduce
phases work:
https://wiki.basho.com/display/RIAK/MapReduce#MapR
Hi Jay,
I'm not able to reproduce the behavior you are seeing. Here is what I am
doing to try to reproduce the issue:
1. Setup a 4 node cluster
2. Continuously write a new object to Riak every 0.5 second
3. Continuously read a known object (GET riak/test/1) from Riak every 0.5
second
4. Reboot one
Hey Dan/Sean,
Thanks for the response. sasl-error.log on node A is completely empty, and
I see this pattern in erlang.log:
= ALIVE Tue Nov 23 12:46:57 PST 2010
= Tue Nov 23 12:57:36 PST 2010
=ERROR REPORT 23-Nov-2010::12:57:36 ===
** Node 'riak@' not responding **
** Removing (time
Inline:
On Mon, Nov 29, 2010 at 12:15, Gleb Peregud wrote:
> Hello all
>
> I'm considering Riak as a datastore for a service, which has a part
> which looks like a microblog - i.e. each user will have his own feed
> of events, status updates, links and liked items from other feeds.
> Events on a
Hello all
I'm considering Riak as a datastore for a service, which has a part
which looks like a microblog - i.e. each user will have his own feed
of events, status updates, links and liked items from other feeds.
Events on a feed are usually pretty small (avg. 500 bytes).
What is the best way to
On Nov 29, 2010, at 8:59 AM, David Smith wrote:
> On Mon, Nov 29, 2010 at 7:43 AM, Jon Brisbin
> wrote:
>> I'm working on the Spring Data and Grails Gorm support for Riak and I'm
>> seeing some problems running my tests against Riak 0.13. I don't see these
>> problems when running 0.12.
>
> C
On Mon, Nov 29, 2010 at 7:43 AM, Jon Brisbin
wrote:
> I'm working on the Spring Data and Grails Gorm support for Riak and I'm
> seeing some problems running my tests against Riak 0.13. I don't see these
> problems when running 0.12.
Can you elaborate on the problems you're seeing, please?
Than
I'm working on the Spring Data and Grails Gorm support for Riak and I'm seeing
some problems running my tests against Riak 0.13. I don't see these problems
when running 0.12.
I've mentioned this before in different venues, but since it's a Monday after a
holiday, I thought I'd bring it up again
On Tue, Nov 23, 2010 at 3:33 PM, Jay Adkisson wrote:
> (many profuse apologies to Dan - hit "reply" instead of "reply all")
> Alrighty, I've done a little more digging. When I throttle the writes
> heavily (2/sec) and set R and W to 1 all around, the cluster works just fine
> after I restart the