Your ulimit is only 256, so if Riak tries to open a new file handle for your
connection and is denied by the OS, it will refuse the connection.
Here's a basic rundown on ulimit:
http://www.linuxhowtos.org/Tips%20and%20Tricks/ulimit.htm
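If you want to sanity-check the limit from inside Ruby (a quick sketch,
assuming Ruby 1.9+ on a POSIX system):

soft, hard = Process.getrlimit(Process::RLIMIT_NOFILE)
puts "open file limit: soft=#{soft} hard=#{hard}"

Riak inherits whatever limit the shell that launches it has, so raise it
(e.g. ulimit -n 4096) before starting the node.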
Hope this helps.
-Sylvain
On Wed, Mar 9, 2011 at 12:4
So I've seen a few well-written examples of Erlang map or reduce
functions in the contrib section of the wiki/github, but the missing
piece of glue for me is: where do I compile from? I've done a lot of
ejabberd development and generally I just throw it in the src
directory, add a config param to th
ut of the
previous map: [["value"], ["bucket", "key"]]
Any pointers would be greatly appreciated! If there's any open source
code out there using ripple in this fashion I'd love a pointer!
Thanks,
-Sylvain
On Tue, May 24, 2011 at 6:55 PM, Sylvain Niles w
ction from the REST api or via
Ripple; specifically, we are feeding the output of a JavaScript Map into an
Erlang Reduce (if this is not supported, please let us know!)
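This is how we're building it on our end (a sketch; the bucket/key and the
built-in riak_kv_mapreduce:reduce_sort are stand-ins for our real phases):

mr = Riak::MapReduce.new(Ripple.client)
mr.add('a-bucket', 'a-key')
mr.map("function(v){ return [v.values[0].data]; }")             # JavaScript map
mr.reduce(["riak_kv_mapreduce", "reduce_sort"], :keep => true)  # Erlang reduce
results = mr.run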
Thanks in advance,
Sylvain Niles
On Fri, May 27, 2011 at 12:16 PM, Sylvain Niles wrote:
> Still looking for advice on this,
> You can get an idea of what the body should look like by outputting the JSON
> representation of the Ripple MapReduce object:
> puts
> Riak::MapReduce.new(Ripple.client).add('a-bucket').map("function(v){return
> [[v.values[0].data], [v.bucket, v.key]];}", :keep => true)
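> That prints roughly this wire body (a sketch of the /mapred JSON; the exact
> output may vary by riak-client version):
> {"inputs":"a-bucket",
>  "query":[{"map":{"language":"javascript",
>    "source":"function(v){return [[v.values[0].data], [v.bucket, v.key]];}",
>    "keep":true}}]}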
This calls for silly hats.
It's a re-cap, heh.
http://rookery9.aviary.com.s3.amazonaws.com/8461000/8461482_11a7_625x625.jpg
If anyone likes silly hats I could certainly clean this up a lot.
On Thu, Jun 9, 2011 at 4:27 PM, Mark Phillips wrote:
> Firstly, thanks for the feedback. This is exac
So we've been encountering some input containing JSON that is parsed
just fine by Ruby but causes Riak map/reduce to die a horrible death.
Running Riak 0.14.2 with json2.js updated to the latest from github
(thanks for that suggestion, Dan).
Steps to reproduce:
bin/riak attach
Badstuff = <<123,34,
Why not write to a queue bucket with a timestamp and have a queue
processor move writes to the "final" bucket once they're over a
certain age? It can dedup/validate at that point too.
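Something like this, roughly (a sketch with riak-client; the bucket names,
the timestamped key scheme, and the age threshold are all made up for
illustration):

require 'riak'

client  = Riak::Client.new
queue   = client.bucket('events_queue')  # writers store under "#{Time.now.to_i}-#{id}"
final   = client.bucket('events')
MIN_AGE = 300                            # seconds an entry must sit in the queue

queue.keys.each do |key|                 # list-keys is expensive; fine for a cron job
  written_at = key.split('-', 2).first.to_i
  next if Time.now.to_i - written_at < MIN_AGE
  obj = queue.get(key)
  # dedup/validate obj.data here before promoting
  promoted = final.new(key)
  promoted.data = obj.data
  promoted.store
  obj.delete
end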
On Tue, Jun 21, 2011 at 2:26 PM, Les Mikesell wrote:
> Where can I find the redis hacks that get close to cluste
That looks just like the JSON of death I was experiencing. Can you try
doing a get on that key and running a JSON validator on it? Riak will
let you put invalid JSON in, but the map/reduce parser will break on
it.
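Something like this (a sketch; the bucket and key are placeholders):

require 'riak'
require 'json'

begin
  obj = Riak::Client.new.bucket('events').get('suspect-key')
  JSON.parse(obj.raw_data)   # re-parse the raw stored bytes to be explicit
  puts 'valid JSON'
rescue JSON::ParserError => e
  puts "invalid JSON: #{e.message}"
end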
On Wed, Jun 22, 2011 at 10:59 PM, Andrew Berman wrote:
> Hey Ryan,
>
> Here is the e
The example from the wiki for setting up an endpoint doesn't work:
client = Riak::Client.new :solr => "/solr"
ArgumentError: Invalid configuration options given.
from
/Library/Ruby/Gems/1.8/gems/riak-client-0.9.5/lib/riak/client.rb:98:in
`initialize'
from (irb):14:in `new'
PS: it seems like 0.9.4 has been removed from all the default gem
repos, so you can't easily downgrade once you've made the mistake of
upgrading.
On Sun, Jun 26, 2011 at 8:01 PM, Sylvain Niles wrote:
> The example from the wiki for setting up an endpoint doesn't work:
PPS: I'm dumb, old version was 0.9.3.
On Sun, Jun 26, 2011 at 8:03 PM, Sylvain Niles wrote:
> PS: it seems like 0.9.4 has been removed from all the default gem
> repos so you can't easily downgrade once you've made the mistake of
> upgrading.
>
>
> On Sun, Jun 2
> -- adding support for the :solr configuration option as well as the other
> features. On git master (which will become 1.0), search support is fully
> merged and not added through reopening the class.
> On Sun, Jun 26, 2011 at 11:01 PM, Sylvain Niles
> wrote:
>>
>> T
We recently started trying to move our production environment over to
Riak, and we're seeing some weird behavior that's preventing us from
doing so:
Running: riak HEAD from github as of last Friday, riak_search turned
on with indexing of the problem bucket "events".
When we turn on our processes th
I had to re-index a bucket and used the Erlang bucket export/import
scripts in the riak function contrib. One minor gotcha: the export function
takes a binary string argument (<<"bucketname">>) while the bucket importer
takes a plain list string ("bucketname"). Feel free to ping me if you have
any questions about how
ils on your data and the MapReduce jobs
> you're running would be great to reproduce and figure out the problem.
>
> Mathias Meyer
> Developer Advocate, Basho Technologies
>
>
> On Mittwoch, 29. Juni 2011 at 00:41, Sylvain Niles wrote:
>
>> We recently started tr
I notice you're using Riak Search 0.14.0; if possible, it might help to
upgrade to 0.14.2.
-Sylvain
On Thu, Jul 7, 2011 at 11:28 AM, Muhammad Yousaf
wrote:
> Hi,
>
> I have a 2-node Riak Search cluster. My first node is working perfectly,
> with these stats:
>
> curl -H "Accept: text/plain" http://192
Our system had been humming along fine for a week and crashed today
with almost no load. This is the only thing in the erlang.log:
=INFO REPORT==== 8-Jul-2011::16:46:14 ===
[{alarm_handler,{clear,system_memory_high_watermark}}]
=INFO REPORT==== 8-Jul-2011::16:50:14 ===
[{alarm_handler,{set,{syste
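The alarm_handler set/clear of system_memory_high_watermark points at memory
pressure. Polling the stats endpoint is a cheap way to watch for growth before
a crash (a sketch, assuming the default HTTP port; the exact stat names may
vary by version):

require 'net/http'
require 'json'
require 'uri'

stats = JSON.parse(Net::HTTP.get(URI.parse('http://127.0.0.1:8098/stats')))
puts "mem_total=#{stats['mem_total']} mem_allocated=#{stats['mem_allocated']}"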
le was created? The crash dump should
> give some indication of which processes were taking up memory.
>
> The crashdump_viewer built into Erlang is very useful for reviewing crash
> dumps.
>
> Thanks
> Dan
>
> Sent from my iPhone
>
> On Jul 8, 2011, at 4:18 PM,
dictionary will have a
> messages property.
> Thanks,
> Dan
> Daniel Reverri
> Developer Advocate
> Basho Technologies, Inc.
> d...@basho.com
>
>
> On Fri, Jul 8, 2011 at 5:03 PM, Sylvain Niles
> wrote:
>>
>> Thanks Dan, very useful tool. Here's
> lar problem recently where we had several large documents
> (500MB - 1.1gigs) that were causing Erlang to crash with eheap_alloc errors.
> Likely, the documents were so large due to high amounts of conflict.
> Deleting the documents fixed the problem.
>
> On Fri, Jul 8, 2011 at 5:29
all of the examples we've seen cause no problems for Ruby or
Erlang, only for SpiderMonkey.
Bad UTF8: <<"Afro-Belly Boogie® Fitness and Wellness-1800400">>
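A quick way to spot these ahead of time (a sketch; needs Ruby 1.9+, and the
bucket/key are placeholders):

require 'riak'

obj = Riak::Client.new.bucket('events').get('suspect-key')
raw = obj.raw_data.dup.force_encoding('UTF-8')
puts raw.valid_encoding? ? 'valid UTF-8' : 'invalid UTF-8 bytes in value'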
-Sylvain
On Tue, Jul 5, 2011 at 12:37 PM, Bryan Fink wrote:
> On Thu, Jun 30, 2011 at 4:52 PM, Sylvain N
Hi Rohman, the conversation yesterday got us thinking, and Basho confirmed
that buckets are a form of key prefix. So no matter how small the bucket, a
map/reduce over it will traverse the whole key space. We sat down and did some
thinking about how to work our data differently, as we have a similar use
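One client-side way to dodge the full traversal is to hand MapReduce explicit
bucket/key pairs instead of a bare bucket name (a sketch; the bucket and keys
are placeholders):

mr = Riak::MapReduce.new(Ripple.client)
# mr.add('events')          # whole-bucket input walks the entire key space
mr.add('events', 'key1')    # explicit inputs touch only the listed keys
mr.add('events', 'key2')
mr.map("function(v){ return [v.key]; }", :keep => true)
results = mr.run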
We recently upgraded to 1.0 via a rolling upgrade per the instructions
on basho.com. Since then we've encountered a problem with some of our
data and wanted to wipe the bucket from Ripple, but we're seeing an
error that does not make sense:
irb(main):001:0> bucket = Riak::Client.new["pinged_urls"]