To do pagination in MapReduce queries I make queries based on secondary indexes
(2i). My keys are ordered by date, so I can ask for a date range. If you need
results based on the number of keys rather than on dates, you could name your
keys "1", "2", etc. But it would be a problem with a strict number of
Riak has its own cache mechanism (at least in the leveldb backend); I'm not sure
about its efficiency, but it can be tuned in the app.config file. So I am also
interested in an answer from more experienced Riak users.
--
View this message in context:
http://riak-users.197444.n3.nabble.com/risk-and-memcached-tp4024963p4024
Your email is slightly confusing - I assume by "paging" you mean "pagination"?
I recommend not trying to do pagination in Riak.
I can't help you at all with your custom map/reduce phase code in
trend_riak.erl without seeing vm.args, trend_riak.erl, &c...
On Aug 9, 2012, at 7:27 PM, 郎咸武 wrote:
Who can answer me? thanks.
2012/8/10 郎咸武
> Hi guys,
> How do I do pagination in MapReduce queries?
> I tried to write some methods in the module (trend_riak.erl; added -pa
> /home/jason/work/server/trunk/trend_riak to vm.args).
> I thought it would work well, but unfortunately it does not work
(Note: Just realized I didn't send this to the riak-users mailing
list. Re-sending so everyone sees my reply)
Yes, this makes sense unfortunately. 'riak-admin transfers' isn't
going to work for you in a mixed 0.14.2 and 1.2 cluster.
Between 0.14.2 and 1.0, the entire cluster system was revamped.
Sure thing!
Any time you can avoid a hit to disk, you should. In the RDBMS world, we put
caches in front of SQL Server, even though SQL Server will devour all the RAM
in the known universe if you let it. Likewise, why read from
potentially slow disks when you can try to hit ca
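The cache-in-front pattern described above is usually called cache-aside: check a fast in-memory store first, and only fall back to the database on a miss. A minimal sketch, with an in-process dict standing in for memcached and a plain callable standing in for the Riak/SQL fetch:

```python
import time

class CacheAside:
    """Cache-aside wrapper: consult the cache first, fall back to the
    (slow) backing store on a miss, and remember the result with a TTL."""
    def __init__(self, fetch, ttl=60.0):
        self._fetch = fetch          # callable that hits the real store
        self._ttl = ttl
        self._cache = {}             # key -> (expires_at, value)

    def get(self, key):
        hit = self._cache.get(key)
        if hit and hit[0] > time.time():
            return hit[1]            # served from memory, no disk hit
        value = self._fetch(key)     # slow path: Riak / SQL / disk
        self._cache[key] = (time.time() + self._ttl, value)
        return value

calls = []
store = CacheAside(lambda k: calls.append(k) or k.upper())
store.get("riak")
store.get("riak")   # second call is answered from the cache
```

Whether a second cache tier in front of Riak pays off depends on how effective Riak's own backend cache already is for your access pattern.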
Hi all,
my question is fairly simple. If one is developing an app, does it make sense
to combine memcached with a Riak datastore, given that Riak has its own caching
mechanism?
José
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.b
I forgot to mention, that I also ran
"riak_core_node_watcher:service_up(riak_pipe, self())." on the 0.14.2 node (got
that from here: http://wiki.basho.com/Rolling-Upgrades.html)
On 09.08.2012, at 22:16, Sebastian Cohnen wrote:
> Hey all,
>
> looks like I'm already stuck :-/
>
> I'm trying t
Hey all,
looks like I'm already stuck :-/
I'm trying to test the upgrade on a stage cluster (with 2 nodes). What I did so
far:
* downloaded 1.2
* stopped riak
* backup /var/lib/riak/ring and /etc/riak
* installed 1.2
* changed app.config and vm.args (just node name, ring creation size, config
f
Hi Daniel,
Since Riak 1.0, node IDs are used to resolve conflicts in place of
client IDs. While supplying a client ID won't hurt anything, Riak
will ignore it.
You can read more about this here:
http://wiki.basho.com/Vector-Clocks.html#More-Information
Thanks!
Brian Sparrow
On Thu, Aug 9, 2
Hi guys,
How do I do pagination in MapReduce queries?
I tried to write some methods in the module (trend_riak.erl; added -pa
/home/jason/work/server/trunk/trend_riak to vm.args).
I thought it would work well, but unfortunately it does not work: because
of the "Reduce" phase, the result is wrong.
Is ther
I've read somewhere some time ago that each client connecting to a Riak
cluster should have a unique ID to help with resolving conflicts. Is that
still the case and, if so, what would be the recommended way of selecting
such an ID?
I just found in RawClient and in IRiakClient
/**
* If you don't set a
Hi Denis,
I believe we ran into the same problem and worked with Basho to get
it fixed in the 1.2 release. We haven't yet upgraded to 1.2, but plan
to do so in the next few days. You might try it and see if it
solves your problem. We were told this commit should fix it:
https://github
Amir,
I'll add one more major consideration to Ryan's excellent list, check your
network for TCP Incast. Every cluster at reasonable scale will have to manage
this issue carefully, 20 nodes is more than enough to create this kind of
problem (I see it with as few as 9). Here's more information
Amir,
Are you using one node to run basho bench? If so, have you tried running
multiple basho bench instances on separate nodes (or tried other benchmark
tools)? There could be many reasons for your plateau, but I would first
make sure you're not maxing out the basho bench instance or the node
I hate to use your post as an example, Amir, but your post is a perfect example
of "according to my benchmark your product did not meet my performance
expectations; oh ya, I'm not gonna tell you anything about the benchmark or the
environment I was running in." Everybody, please stop doing that.
Hi there,
I have run a scalability benchmark of Riak, and we couldn't scale the
throughput beyond 20 Riak nodes. The benchmark with Basho Bench was run
on a 31-node cluster, and each node has its own hard disk, but the
maximum throughput is reached at 20 nodes.
I’d like to understand why R
Parnell,
Thanks for sending the PR, I've left you some comments.
On Thu, Aug 9, 2012 at 4:52 AM, Parnell Springmeyer wrote:
> I've been needing the Riak Python client to handle pooled connections
> (including pooled connections to multiple nodes) and decided to re-work
> (just barely) the curren
I'm actually thinking about taking the risk. We only have a small 3-node
cluster with ~50GB of data with relatively little traffic (and we don't have
any 2i, nor do we use search or MR).
I'll backup the data files, the ring state and everything else I find and give
it a try. If anything strange
I toyed with a pmap in Python a while back to attempt to speed up multiple
HTTP request to our web services layer at work. You may want to attempt
that with gevent.
Here's the code I wrote which is probably not production ready.
https://github.com/ericmoritz/pmap
On Aug 9, 2012 4:46 AM, "Parnell
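A pmap along those lines can be sketched with the standard library's thread pool; gevent greenlets would be the lighter-weight variant the post suggests, but threads already suit I/O-bound work such as parallel HTTP requests:

```python
from concurrent.futures import ThreadPoolExecutor

def pmap(fn, items, workers=8):
    """Parallel map: apply fn to each item on a thread pool, preserving
    input order. Good for I/O-bound calls (HTTP GETs to a web service);
    with gevent, a greenlet pool would play the same role."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(fn, items))
```

For example, `pmap(fetch_url, urls)` issues the requests concurrently and returns the responses in the same order as `urls`.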
The only issue with this approach is AFAIK that M/R effectively runs with R=1,
i.e. it doesn't ensure that a value is consistent across replicas.
IMHO riak_kv_mapreduce should have a map_get_object_value, which does a proper
RiakClient:get, i.e. something like this: [will be slower, but will h
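A client-side sketch of that idea: let MapReduce return only keys, then re-fetch each value with a proper quorum read. The `get` callable here is a hypothetical stand-in for a real client call (e.g. an HTTP GET with `?r=quorum`), not an actual Riak API:

```python
def mapred_keys_then_get(keys, get, r="quorum"):
    """Instead of letting a map phase read object values (effectively an
    R=1 read), collect only the keys from MapReduce and re-fetch each
    value with a quorum read. Slower, as the post notes, but each value
    then reflects a read checked against multiple replicas."""
    return [get(k, r) for k in keys]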
I've been needing the Riak Python client to handle pooled connections
(including pooled connections to multiple nodes) and decided to re-work (just
barely) the current riak-python-client to use gevent's queue to do thread-safe
and light-weight Riak connection pooling.
There is a very minimal ch
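The queue-based pooling idea translates directly to the standard library: `queue.Queue` is the thread-safe analogue of gevent's queue (which would be a drop-in replacement under gevent). The names below are illustrative, not the actual client's API:

```python
import queue

class ConnectionPool:
    """Thread-safe connection pool built on a blocking queue: acquire
    blocks until a connection is free, release returns it to the pool."""
    def __init__(self, factory, size=4):
        self._pool = queue.Queue()
        for _ in range(size):
            self._pool.put(factory())   # pre-create the connections

    def acquire(self, timeout=None):
        return self._pool.get(timeout=timeout)

    def release(self, conn):
        self._pool.put(conn)

pool = ConnectionPool(factory=lambda: {"host": "127.0.0.1"}, size=2)
conn = pool.acquire()
# ... use conn ...
pool.release(conn)
```

Because the queue itself serializes access, no extra locking is needed around acquire/release.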
Jeremy,
I was looking for something similar and first built an extra handler onto an
internal erlang cowboy API server that used maelstrom (my own worker pool OTP
application).
It was used to make a simple POST with a string of the {bucket, key} pairs and
the server would concurrently GET and
Are there any debian 6 packages for Riak Enterprise planned? I only see Ubuntu.
--
Bip Thelin
KIVRA | Lugnets Allé 1 | 120 33 Stockholm
Tel 08-533 335 37 | Mob 0735-18 18 90
www.kivra.com