Hi all,
I remember a previous discussion in the mailing-list about
doing mapreduce over an entire bucket
'{"inputs":"products","query":[...]}' being very slow independently of
how many items that bucket has stored, as Riak would have to
scan all of memory in search of the keys, etc.
Ok, this is a little simpler but I'm not sure how to proceed. When I pull
the keys via ?keys=stream API, I have a key "name-this\nBob". But
fetching it gets me a 404 not found error. I've tried %0D%0A (and other
variants) in the API call but no luck. Any way to delete this errant key?
Jim
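The raw line feed can usually be reached by percent-encoding it as %0A (LF) rather than %0D%0A (CRLF) in the request path. A minimal sketch using only Python's standard library (the host, port, and bucket name here are assumptions, not the poster's setup):

```python
from urllib.parse import quote

# The problem key contains a literal line feed.
key = "name-this\nBob"

# Percent-encode every character that needs it; the LF becomes %0A.
# safe="" ensures reserved characters like "/" are escaped as well.
encoded = quote(key, safe="")
print(encoded)  # name-this%0ABob

# A hypothetical DELETE against a local node (bucket name assumed):
url = f"http://127.0.0.1:8098/riak/mybucket/{encoded}"
print(url)
```

Issuing an HTTP DELETE against that URL should remove the errant key, since the server decodes %0A back to the newline before the key lookup.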
In what order are the sort and rows options of search via the Solr API
applied?
It would appear that rows is applied first, with the cluster waiting just
long enough to receive enough answers to fulfill the rows requirement,
and only then sorting the result set. Alas, if that is the case, then
- Original Message -
> > I just want to know if one can import CommonJS modules in Riak
> > mapreduce
> > or pre-commit hook function.
> Hi, Briche. I'm unfamiliar with how the CommonJS modules are
> structured, but if they're available as a simple collection of .js
> files, you can have
I am wondering if anyone has experienced the following, which
just occurred to me.
The set up is a 3 node cluster, multibackend, actually storing data into
eleveldb. One bucket and search enabled with a custom schema. The cluster
is lightly loaded. Load is almost all writes, with a handful of s
I'm getting a crash when running a key filter query. It looks like the
problem is having a newline in a key name (see name-this\nBob in error
below). Is there any restriction on key characters?
Jim
2011-10-24 05:08:14 =SUPERVISOR REPORT
Supervisor: {local,riak_pipe_fitting_sup}
Con
We've solved this before a few ways (to give some concrete examples):
1. On EC2 behind an ELB (which is inherently public on the internet) we
ran HAProxy on each Riak node, proxying some other port to Riak's port 8098.
The ELB's public port was 8098, but it translated that to the port fo
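A minimal haproxy.cfg fragment in that spirit (the ports, addresses, and health check below are illustrative assumptions, not the exact production configuration described above):

```
frontend riak_http
    bind *:8080
    default_backend riak_nodes

backend riak_nodes
    balance roundrobin
    # Riak's HTTP interface exposes /ping as a cheap liveness check.
    option httpchk GET /ping
    server riak1 10.0.0.1:8098 check
    server riak2 10.0.0.2:8098 check
    server riak3 10.0.0.3:8098 check
```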
Hello,
We have switched over to lager but have a problem with it truncating
large records similar to io:fwrite() ~P - how do I turn this off, or
switch it to ~p?
Thanks so much for your help. Probably something simple that I am missing.
Cheers,
Bryan
On 24 Oct 2011, at 21:56, Alexander Robbins wrote:
> Hey, sorry for the delay. I'm having trouble switching to the HTTP client.
> I'm getting this error whenever I try to use it: (Using riak java client
> version 1.0.1)
>
>
> org.apache.http.impl.conn.tsccm.ThreadSafeClientConnManager: method
Hey, sorry for the delay. I'm having trouble switching to the HTTP client.
I'm getting this error whenever I try to use it: (Using riak java client
version 1.0.1)
org.apache.http.impl.conn.tsccm.ThreadSafeClientConnManager: method
<init>()V not found
Caused by:
java.lang.NoSuchMethodError:
org.apache
Tomer,
The issues you encountered aren't related to having a mixed 0.14/1.0
cluster, or the overall upgrade cycle. They're issues with 0.14 Riak.
In pre-1.0 Riak, GETs would sometimes return 404s when adding/removing
nodes to a cluster. The situation would be transient, and would sort
itself out
On 24 Oct 2011, at 20:21, Alexander Robbins wrote:
> Using a new Riak 1.0 cluster.
>
> We added data in with a secondary index. When getting the data back out the
> results list is odd.
>
> [null, null, null, null, null, null, null, null, null, null, null, null,
> null, null, null, null, null
Using a new Riak 1.0 cluster.
We added data in with a secondary index. When getting the data back out the
results list is odd.
[null, null, null, null, null, null, null, null, null, null, null, null,
null, null, null, null, null, null, null, null, a, c, 9, d, 9, 1, d, d, 0,
e, 5, 5, 6, 8, c, 6, d
Great validation. Thanks Mark.
Jim
On 10/24/11 7:51 AM, "Mark Steele" wrote:
>It was a pretty simple benchmark test using a custom built protocol
>buffer client, so I wouldn't put too much faith in it.
>
>As far as performance, my client was able to retrieve keys at a rate of
>about 120 thousan
On Fri, Oct 7, 2011 at 8:56 PM, Nate Lawson wrote:
> Perhaps this is the flaw the author hints at? If so, what's the proper way in
> Riak MR to specify a split function to be sure proper boundaries are applied
> to Luwak files?
Hi, Nate. Indeed, you found the flaw. If you haven't stumbled on
On Mon, Oct 24, 2011 at 12:08 PM, briche arnaud wrote:
> Hi,
> I just want to know if one can import CommonJS modules in Riak mapreduce
> or pre-commit hook function.
Hi, Briche. I'm unfamiliar with how the CommonJS modules are
structured, but if they're available as a simple collection of .j
Important: Removing/leaving a node from the cluster does NOT automatically
replicate the data that was on that node to get back up to your desired
replication level.
Right now, the suggested method for doing that involves invoking a read on all
of the keys that were on that node. This forces
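That read-everything approach can be scripted against the HTTP interface. A hedged sketch with Python's standard library (the host, port, and bucket are assumptions, and enumerating keys is itself expensive, as noted elsewhere in this thread):

```python
from urllib.parse import quote

BASE = "http://127.0.0.1:8098"  # assumed local node

def repair_url(bucket, key):
    """Build the GET URL whose read (at the default R value)
    triggers read repair of any missing replicas for that key."""
    return f"{BASE}/riak/{quote(bucket, safe='')}/{quote(key, safe='')}"

# Issuing a plain GET per key forces Riak to repopulate replicas:
#   import urllib.request
#   urllib.request.urlopen(repair_url("products", k)).read()
# The keys would come from a (costly) key listing, e.g.
#   GET /riak/products?keys=stream
for k in ["sku-1", "sku-2"]:
    print(repair_url("products", k))
```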
you have to put something in front of riak, like haproxy, that will act as a
gateway into and out of your riak cluster. also block ports/access on your riak
node itself via a firewall or something like that.
-Alexander Sicular
@siculars
http://siculars.posterous.com
On Oct 24, 2011, at 10:54
Well, it caused us some problems...
The situation is as follows:
we have a production cluster with five 0.14 nodes, which we want to replace
with three new Riak 1.0.1 servers.
This is what we did:
1. join each of the 1.0 nodes to the 0.14 cluster.
2. after that all the three 1.0
Hi,
I just want to know if one can import CommonJS modules in Riak mapreduce
or pre-commit hook function.
By the way, the goal is to share json schema validation code between the
client and Riak pre-commit hook, if there isn't
another strategy I hadn't thought of.
Thanks.
Any help please?
On Mon, Oct 24, 2011 at 2:03 PM, Lyes zaiko wrote:
> By the way, how can we also compute snippets from search results??
>
>
> On Thu, Oct 20, 2011 at 2:51 PM, Lyes zaiko wrote:
>
>> Hello! I need a little help.
>>
>> How can we obtain paginated search results by querying via th
Hi All,
I'm relatively new to Riak and just loving it. :)
I have a question about Riak: why are there no user auth mechanisms? Is it
part of the design or a future candidate?
Otherwise, how can I make a Riak-based app protect files from unauthorized
access?
I searched the mailing list history,
It was a pretty simple benchmark test using a custom built protocol buffer
client, so I wouldn't put too much faith in it.
As far as performance, my client was able to retrieve keys at a rate of about
120 thousand keys per second from the key listing operation. The key listing
performance was c
Yes, using 1.0.1 with LevelDB. I moved to it from Bitcask in the hopes of
better performance.
Good to hear about your 60M key use-case. Can you share any key access
performance numbers?
Jim
On Oct 24, 2011, at 7:23 AM, Mark Steele wrote:
> Just curious Kyle, you using the 1.0 series?
>
> I'
Just curious Kyle, you using the 1.0 series?
I've done some informal testing on a 3 node 1.0 cluster and key listing was
working just peachy on 60 million keys using bitcask as the backend.
Cheers,
Mark
On Sunday 23 October 2011 12:26:35 Aphyr wrote:
> On 10/23/2011 12:11 PM, Jim Adler wrote:
Soren,
Thanks for the response. The information is useful; however, I tried using
DROP in place of REJECT and the result is much the same: activity stops for
two minutes and then resumes.
Regards,
Malcolm
On 24 October 2011 14:14, Soren Hansen wrote:
> 2011/10/24 shared mailinglists :
> > During
Soren,
Yes, that works for Search, but not for the 2I feature. Sorry I wasn't more
clear.
On Mon, Oct 24, 2011 at 9:25 AM, Soren Hansen wrote:
> 2011/10/24 Sean Cribbs
> > This functionality is not currently available, no. You can only query one
> index at a time.
>
> Well, perhaps not using t
2011/10/24 Sean Cribbs
> This functionality is not currently available, no. You can only query one
> index at a time.
Well, perhaps not using that format, but this certainly works:
curl
http://localhost:8098/solr/products/select?q=price_int:200+AND+category:sports
Or, with the Python bindings
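The curl query above can be reproduced from any HTTP client. A small sketch that just builds the same Solr-interface URL (plain urllib rather than the Riak Python bindings the author mentions; host and port are assumptions):

```python
from urllib.parse import urlencode

base = "http://localhost:8098/solr/products/select"

# Combine two indexed fields with a Lucene-style AND,
# exactly as in the curl example above.
params = {"q": "price_int:200 AND category:sports"}
url = f"{base}?{urlencode(params)}"
print(url)

# An actual request would then be:
#   import urllib.request
#   body = urllib.request.urlopen(url).read()
```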
2011/10/24 shared mailinglists :
> During the run, I use iptables to simulate a network partition in which one
> node in the cluster is disconnected from the other four (but all five remain
> connected to the client). To disconnect from node A, for example, I run:
>
> sudo /sbin/iptables -A INPUT -
By the way, how can we also compute snippets from search results??
On Thu, Oct 20, 2011 at 2:51 PM, Lyes zaiko wrote:
> Hello! I need a little help.
>
> How can we obtain paginated search results by querying via the erlang
> command line (using search:search_doc/3)??
>
> thank you!
>
Hi,
I am investigating the feasibility of using Riak for an application where
there is a firm requirement for the system to remain available in the face
of node failures and network partitions. Up to now, I have found Riak to be
quite resilient to node failures, but network partitions are causing m
This functionality is not currently available, no. You can only query one
index at a time.
On Mon, Oct 24, 2011 at 4:10 AM, Antonio Rohman Fernandez <
roh...@mahalostudio.com> wrote:
>
> Hi
>
> Is there any way to query with 2 secondary indexes at the same time?
>
> I can query 1 index like: