We are using it on Amazon with no issues; we are on 7.5 GB instances. You can
reduce the cost by using reserved instances.
We are still using Bitcask, which makes the amount of RAM per instance very
important, since each node needs to keep all the keys it's responsible for in
RAM.
It's a trade-off.
This is certainly a tricky problem for us, because there are times we need
to perform 9 to 100 GETs per page render. Some built-in
support would certainly be useful.
On Fri, Aug 19, 2011 at 10:12 AM, Jacques wrote:
> This begs the question, is there much efficiency to gain by creating a t
Hi,
I've been using MapReduce to fake a bulk get.
I POST to the MapReduce interface, using the following:
{"inputs":
  [["bucket1", "k1"],
   ["bucket1", "k2"],
   ["bucket1", "k3"],
   ["bucket1", "k4"],
   ["bucket1", "k5"],
   ["bucket1", "k6"],
   ["bucket1", "k7"],
   ["bucket1", "k8"],
   ["bucket1", "k9"]
  ]
}
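A runnable sketch of that trick in Python. The host, bucket, and keys are hypothetical, and the query phase is an assumption (the original post truncates before it); `Riak.mapValuesJson` is the JavaScript map function Riak ships for returning each object's decoded JSON value:

```python
import json
import urllib.request

# Build a MapReduce job whose inputs are explicit bucket/key pairs and
# whose single map phase returns each object's JSON value.
job = {
    "inputs": [["bucket1", "k%d" % i] for i in range(1, 10)],
    # Assumed phase: the original post omits it, but /mapred needs a query.
    "query": [
        {"map": {"language": "javascript", "name": "Riak.mapValuesJson"}}
    ],
}

req = urllib.request.Request(
    "http://127.0.0.1:8098/mapred",  # hypothetical Riak node
    data=json.dumps(job).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# values = urllib.request.urlopen(req)  # uncomment against a live node
```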
nd the ring.
>
> On Sun, Jul 31, 2011 at 6:59 PM, Wilson MacGyver
> wrote:
>>
>> In riak, if I want to replace a node in case of upgrading the
>> hardware, is the following step correct?
>>
>> tar up riak/data on old machine
>>
>> setup riak on the n
Thank you. I tried googling for it, I guess my google-fu is weak :)
On Aug 3, 2011, at 1:15 PM, Brett Hoerner wrote:
> Yup, see r, w, etc here: http://wiki.basho.com/HTTP-Set-Bucket-Properties.html
>
>
>
> On Wed, Aug 3, 2011 at 11:55 AM, Wilson MacGyver wrote:
>> Hi,
Hi,
I know you can set the n value on a per-bucket basis,
and you can set the r value on the client side for reads.
Can you set an r value on a per-bucket basis too, so that
client HTTP GETs don't all have to pass r=somevalue?
Thanks
--
Omnem crede diem tibi diluxisse supremum.
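The bucket-properties endpoint linked above covers this: defaults like r can be PUT into the bucket's props. A minimal Python sketch (host, port, and bucket name are hypothetical) that builds such a request without sending it:

```python
import json
import urllib.request

# Set a default read quorum on the bucket so GETs need not pass r=...
props = {"props": {"r": 2}}
req = urllib.request.Request(
    "http://127.0.0.1:8098/riak/mybucket",  # hypothetical node and bucket
    data=json.dumps(props).encode(),
    headers={"Content-Type": "application/json"},
    method="PUT",
)
# urllib.request.urlopen(req)  # uncomment against a live cluster
```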
In riak, if I want to replace a node when upgrading the
hardware, are the following steps correct?
tar up riak/data on old machine
setup riak on the new machine, but don't start it
copy riak/data to the new machine and untar it
run reip (because I couldn't keep the old IP for some reason)
On Mon, Apr 18, 2011 at 6:12 PM, Jon Brisbin wrote:
>
> If true atomicity is really a concern, then use Redis and write a pub/sub
> handler to update your Riak documents whenever things change.
>
> You know we could probably take the Riak RabbitMQ postcommit hook and adapt
> it to use Redis for
You can still set each haproxy to health-check each riak node and
remove failed nodes automatically.
Your app can handle failure by retrying on timeout; since you have it set
to round-robin, the next retry
will hit a working node.
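That health-check-and-remove setup can be sketched in a few lines of haproxy config. This assumes Riak's HTTP interface on port 8098 and its /ping endpoint; the backend name, server names, and addresses are made up:

```
backend riak_cluster
    balance roundrobin
    option httpchk GET /ping
    server riak1 10.0.0.1:8098 check
    server riak2 10.0.0.2:8098 check
    server riak3 10.0.0.3:8098 check
```

With `check` enabled, haproxy stops routing to a node as soon as /ping fails and brings it back when it recovers.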
On Thu, Apr 7, 2011 at 11:27 PM, Greg Nelson wrote:
> I don't want to have
Using haproxy to ping each of the riak nodes, and removing nodes that
aren't responding, is
what we did.
We use a single haproxy instead of one per application server.
On Thu, Apr 7, 2011 at 9:47 PM, Greg Nelson wrote:
> Hello,
> I have a simple three node cluster that I have been using for testing and
The topic I'd really like covered is: what can riak-core do that other
stacks, including Erlang/OTP, can't?
Basically, what awesome unique sauce does it bring to the table that would
make you want to ditch your favorite stack?
In our case, because we use EC2 7.5 GB instances with EBS, the Mac mini setup I
tweeted is a good match for our production nodes.
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
Hey, Java users are people too. :) Besides, what do you think the kind of
companies that can afford Riak Enterprise
use? :)
While I use Groovy with the riak java client, any Java client lib
improvement will also benefit all the other JVM languages like
Clojure, Scala, etc.
On Mar 27, 2011,
Two questions jump out right away:
1: How do you fetch linked records using the API, and how do you set up
links for records?
2: It's not obvious from reading the code how to set the r and w values
during reads and writes. Is it
via the rw() method?
Thanks
On Mar 23, 2011, at 2:25 PM, Jon Brisbin wrote:
> I personally don't like using ambiguous short names. I get that you can
> technically distinguish things by package name but using a simple name that
> is very common isn't good for code readability. "RiakEntry" is unambiguous,
> self-documenti
On Wed, Mar 16, 2011 at 7:39 PM, Mark Phillips wrote:
> On a related note, @wmacgyver mentioned he is also using Riak, Redis
> and PostgreSQL:
> http://twitter.com/wmacgyver/statuses/48105509718458368 . Out of
> curiosity, anyone else using these three DBs together in their stack?
I'll expand a b
If only we wrote the webapp in haskell :)
On Fri, Feb 25, 2011 at 4:35 PM, Bryan O'Sullivan wrote:
> On Fri, Feb 25, 2011 at 12:17 PM, Joseph Blomstedt wrote:
>>
>> For protocol buffers there are client libraries that support
>> connection pooling. Of the top of my head I know riak-js does, and
ots of factors.
>
> -Alexander Sicular
>
> @siculars
>
> On Feb 25, 2011, at 2:57 PM, Wilson MacGyver wrote:
>
>> It's not an actual use case. the actual use case is in fact fairly
>> random access pattern to
>> the keys.
>>
>> it's ho
/s. This is on an 8 core system (Dual Intel Xeon E5506
> @2.13GHz), and all cores are at nearly 100%. No tuning was done on
> either riak or the OS.
>
> Also, 99% of the requests still took less than 20ms, while with 10
> concurrent requests it's more like 2ms.
>
>
> Am Freitag, den
25, 2011 at 2:38 PM, Ryan Zezeski wrote:
>
>
> On Fri, Feb 25, 2011 at 12:45 PM, Wilson MacGyver
> wrote:
>>
>> I purposely have it grab the same key/value over and over
>> again.
>>
>
> Could the fact that you're getting the same key for each request a
SO_REUSEADDR is also something you set at the socket API level, as I recall,
so I don't think it's something you can just set globally on the
TCP/IP stack.
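Both SO_REUSEADDR and TCP_NODELAY are indeed per-socket options set through the sockets API; a minimal Python illustration:

```python
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# SO_REUSEADDR: allow rebinding a local address still in TIME_WAIT.
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
# TCP_NODELAY: disable Nagle's algorithm on this connection.
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

print(s.getsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR))
print(s.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY))
```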
On Fri, Feb 25, 2011 at 1:41 PM, Les Mikesell wrote:
> Those settings shouldn't make a big difference in how the number of
> connections
ing well
> outside my expertise (although TCP slow-start comes to mind). That said,
> it's kind of hard not to use TCP if you want a reliable connection, warts and
> all.
>
> Sean Cribbs
> Developer Advocate
> Basho Technologies, Inc.
> http://basho.com/
>
> O
re's no reason to have
> "standby workers", especially when they don't need to maintain state between
> requests.
>
> Sean Cribbs
> Developer Advocate
> Basho Technologies, Inc.
> http://basho.com/
>
> On Feb 25, 2011, at 12:45 PM, Wilson MacGyver
It just occurred to me that I should be more clear. When I say 100s of
concurrent connections to
riak, I mean per node, not per riak cluster.
I.e., I don't mean 100 connections spread across 6 nodes; I mean enough
connections to result in
100 connections per node.
nologies, Inc.
> http://basho.com/
>
> On Feb 25, 2011, at 9:33 AM, Wilson MacGyver wrote:
>
>> TCP_NODELAY is something you'd set when you use the socket API call,
>> not a global tuning setting on the OS as I recall.
>>
>> On Fri, Feb 25, 2011 at 9:28 AM,
ribbs
> Developer Advocate
> Basho Technologies, Inc.
> http://basho.com/
>
> On Feb 25, 2011, at 9:33 AM, Wilson MacGyver wrote:
>
>> TCP_NODELAY is something you'd set when you use the socket API call,
>> not a global tuning setting on the OS as I recall.
>
TCP_NODELAY is something you'd set when you use the socket API call,
not a global tuning setting on the OS as I recall.
On Fri, Feb 25, 2011 at 9:28 AM, Nico Meyer wrote:
> Whenever I see latencies which are roughly multiples of 40ms it screams
> to me 'nagle algorithm'. I have seen this so often
We are using EC2 7.5 GB 64-bit instances with EBS; all riak data is
stored on EBS.
All EC2 instances are in the same zone, communicating over private IPs.
The 80ms timings we are seeing are for GET requests over the private IPs as well.
On Fri, Feb 25, 2011 at 9:14 AM, Jeremiah Peschka
wrote:
>
ing ~0.5-<1.0 load average on Riak nodes as
> iowait.
>
> Feel free to hop on the #riak IRC channel and I'd be happy to try to answer
> any other questions about our setup.
>
> - Bob Feldbauer
>
>> -- Forwarded message --
>> From: Wilson M
Are there any guidelines/howtos on tuning riak nodes for 100s
(200-500) of concurrent connections,
which are 99.9% HTTP GETs?
Thanks,
Excellent, I'll give that a shot.
Thanks,
On Sun, Feb 20, 2011 at 4:00 PM, Russell Brown wrote:
>
> When you store using RiakClient.store(RiakObject, RequestMeta) you can set up
> RequestMeta with arbitrary query params like
>
> RequestMeta requestMeta = RequestMeta.writeParams(3, 3)
We use the Java client, mostly from Groovy. In terms of a wish list:
1: Protocol Buffers support; this is addressed in the 0.14 version.
2: Make the HTTP connection max-connections setting work. I couldn't get this
to work no matter what I did; it always opens only 1 connection.
3: Support the no-data flag. I not
Hi,
A while ago, this issue was opened to track supporting multi_get:
https://issues.basho.com/show_bug.cgi?id=96
I.e., in a single HTTP request you pass a list of 20 keys, and riak returns all
20 values in a single response.
I didn't see any movement on that.
So, is there a way to "fake" it usin
Thanks, will give it a try
Hi,
I noticed the riak java client has been updated to 0.14, but there is no
info beyond the
original 0.11 changelog. Is there any place I can find out what the changes are?
Also, any more progress on protocol buffer support yet?
Thanks,
It's here:
http://www.erlang.org/doc/reference_manual/expressions.html#id75794
On Thu, Feb 3, 2011 at 9:13 PM, Joshua Partogi wrote:
> Hi,
>
> In the riak erlang client, why do we have to surround the parameter with
> << and >>? I cannot find what this means in the erlang documentation.
>
> Thanks heaps
Thanks, I'll give that a try.
On Mon, Dec 6, 2010 at 7:21 AM, Jan Buchholdt wrote:
> The config.setMaxConnections is not connected to the underlying
> httpConnectionManager. I guess that is an error in the implementation of the
> RiakClient. Try to use something like:
>
> RiakClient riakClient =
Hi,
I'm using the riak java client.
According to http://bitbucket.org/jonjlee/riak-java-client/src,
if you use the following:
config.setMaxConnections(50);
the riak client should open up to 50 concurrent connections.
In practice, I'm not seeing that. I set it to 250, and still all I see is 1
inbou
Thanks; another thing to keep in Postgres.
On Wed, Nov 24, 2010 at 5:15 PM, Alexander Sicular wrote:
> Not that I know of. I would imagine you would have to list buckets (erlang
> only now, may change shortly) then m/r buckets and sum.
>
> -Alexander
>
> On Nov 24, 2010,
Is there a way to get the total number of keys in the entire riak
cluster without having
to do a map/reduce count?
Thanks,
--
Season's Greetings!
were you guys using "copyField" in solr to fill "any"? :)
On Wed, Nov 17, 2010 at 5:18 PM, Neville Burnell
wrote:
> heh, in a previous job working with Solr, my team created the "any" field
>
> On 18 November 2010 08:49, Wilson MacGyver wrote:
>>
Thanks for the info. That means that to search multiple fields
by default, we'd have to create some sort of combined field.
On Wed, Nov 17, 2010 at 4:07 PM, Dan Reverri wrote:
> Only 1 field can be specified for the default_field property.
So far in the documentation, only string and integer are mentioned as types.
What about floats or timestamps?
Can you have more than 1 field specified for default_field in the
search schema?
you mean beyond the backup and restore command here?
https://wiki.basho.com/display/RIAK/Command-Line+Tools#Command-LineTools-backup
On Mon, Oct 25, 2010 at 8:14 PM, Cagdas Tulek wrote:
> Hi,
> Is there an article or tutorial about how to backup and recover a server?
> Best,
> Cagdas
>
> ___
I want to make sure my understanding is correct.
Riak Search is a superset of riak; as such, any existing riak client
lib (the Java one, for example)
can "PUT" data into Riak Search,
but only the PHP, Python, Ruby, and Erlang riak clients can execute a
"search" from within the client
lib.
However, y
Great, time to buy 10 more macminis :)
On Wed, Oct 13, 2010 at 1:34 PM, David Smith wrote:
> To the best of my knowledge, there are no changes between 0.12->0.13
> that would prevent a rolling upgrade. That said, we have not tested
> that specific use-case yet -- it's in the pipeline to be part o
I thought it was possible to do a "rolling upgrade", i.e., shut down 1 node,
upgrade it, and reconnect the
node to the cluster.
Otherwise you'd need 2x the nodes in production.
On Wed, Oct 13, 2010 at 1:21 PM, Alexander Sicular wrote:
> I'm pretty sure that Baho would recommend that all nodes in a c
With the new riak 0.13 release, as well as riak-search, is Erlang R13
still recommended for
production use, or has it been bumped to R14B?
Are you installing it from source? Because I run it as a non-root user and it
never requested sudo.
On Mon, Oct 11, 2010 at 10:57 AM, Mojito Sorbet wrote:
> My question is not about installing it as root, but running it as root.
> The "riak start" script immediately tries to do a sudo. Why is this
>
Yes, it's only for the keys that the node is responsible for.
On Oct 8, 2010, at 10:11 AM, Tony Novak wrote:
> Quick question about Riak's memory footprint: I know that Bitcask
> requires all keys to fit in memory. Does this mean only the keys that
> reside on a given node, or does each node
So based on this, is it fair to say the default value of 64 is only
suitable for up to 10
nodes?
Thanks,
Just use HAproxy.
http://haproxy.1wt.eu/
On Sep 4, 2010, at 6:37 PM, OJ Reeves wrote:
> The plan was to create a Riak client connection for every request coming in
> (maintained for the lifetime of the request). That connection would connect
> to any one of the nodes in the Riak cluster (vi
Assuming you are using Bitcask: maybe you have so many keys that
the single machine's RAM is being used up by both riak nodes, and there
isn't enough RAM to hold them?
On Tue, Aug 31, 2010 at 12:54 PM, Michael Colussi wrote:
> Hey guys,
> I've been successfully running component tests where I
I have a 3-node test cluster (actually 3 machines) to test the
behavior/size requirements of
the keys. I have n set to the default value of 3.
I started blasting a large amount of keys/data into the 3 nodes at the
same time. At some point, it was too
much and one of the nodes crashed with an out of memory
On Sat, Aug 21, 2010 at 10:09 PM, Justin Sheehy wrote:
> Hi, Wilson.
> All it means is that one merge was scheduled while another was
> running, so the first one did all the work and the second had nothing
> to do.
That makes me feel MUCH better :)
Hi,
On a 3-node riak system running stock 0.12, I inserted a bunch of
data repeatedly,
because I wanted to see when the merge condition would trigger.
I saw it happen. But in the erlang.log
I saw a bunch of:
=ERROR REPORT
Failed to merge
followed by a list of bitcask files
with fin
I'm curious. Let's say I only set up 3 riak nodes and leave n at 3.
This means, of course, that all 3 nodes have a copy of the full data set,
due to n being 3.
What happens if I then run riak stop on all 3 nodes at the same time?
Each node will
try to hand off data to the others before going down, b
How is the compaction phase defined/activated? I looked through the docs
and didn't see it.
On Tue, Aug 17, 2010 at 11:54 AM, Alexander Sicular wrote:
> Bitcask is a write only log (wol) that eats disk (by keeping all updates)
> until a compaction phase that reclaims disk at some defined interval.
Thanks. I'm more interested in just the keys' memory usage. riak-admin status
already reports total memory allocated, doesn't it?
Looks like someone brought over the CouchDB backend as a storage option
for riak:
http://matt.io/technobabble/The_Key-Value_Wars_of_the_Early_21st_Century/ui
table implementation. Rest assured he and Justin are
> looking for ways to continue improving on that.
>
> Sean Cribbs
> Developer Advocate
> Basho Technologies, Inc.
> http://basho.com/
>
> On Aug 7, 2010, at 10:39 PM, Wilson MacGyver wrote:
>
>> Is the bitcask key
Is the bitcask key always treated as a string, even if you pass an integer?
Trying to think of ways to reduce the RAM usage of keys in bitcask.
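If keys do end up stored in their string form, one way to shrink them is to pack integers into fixed-width binary. A hedged illustration of the potential saving (whether a given client library lets you send raw binary keys is an assumption to verify):

```python
import struct

key_int = 1234567890
as_string = str(key_int).encode()       # b'1234567890': one byte per digit
as_binary = struct.pack(">Q", key_int)  # 8-byte big-endian unsigned integer
print(len(as_string), len(as_binary))   # 10 vs 8
```

For short integers the saving is small, but for many millions of keys held in a Bitcask keydir it adds up.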
Hi,
I know one of the side effects of bitcask is that each node needs to have
enough RAM to hold
the entire list of keys it's responsible for.
Is there a tool/method to check how much is being used, so you can see
how close to
the limit you are? The alternative is to wait until bitcask blows up.
that se
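Until such a tool exists, you can at least ballpark it yourself. The static per-key overhead below is an assumption (Basho's capacity-planning notes have cited roughly 40 bytes per keydir entry; check the figure for your release):

```python
# Rough bitcask keydir RAM estimate: every key a node owns costs its
# key bytes plus a fixed in-memory entry overhead.
PER_KEY_OVERHEAD = 40   # assumed bytes/key; verify for your bitcask version
avg_key_size = 36       # e.g. a UUID string
keys_on_node = 50_000_000

ram_bytes = keys_on_node * (PER_KEY_OVERHEAD + avg_key_size)
print(f"{ram_bytes / 2**30:.1f} GiB")  # about 3.5 GiB for these numbers
```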
Thanks everyone. It seems like it's either mochijson2 or mochijson.
At the risk of being somewhat off topic: what JSON lib does everyone use with the
Erlang riak client?
I've seen this a few times in the docs and client source examples. But there
really isn't anything
I can find on why it's useful, or when to use it instead of keeping
the data in the value.
What is the use case for UserMeta?
When I did this, I thought I was abusing the concept of
buckets in riak. :)
> a powerful tool for data modeling. For example, sets of 1-to-1
> relationships can be very nicely represented as something like
> "bucket1/keyA, bucket2/keyA, bucket3/keyA", which allows related items
> to be fetched w
Does the riak 0.11 release src build under Erlang R14A?
Or do I have to hg pull from master to do that?
Thanks
Thanks for both speedy responses. The audience is most likely
new to NoSQL, and has probably never heard of riak.
I want to talk about what Riak is and what makes Riak better than
the others. I was trying to think of a good way to demo Riak,
but I'm not sure it's feasible. I mean, I don't want to lug
3-5
Hi,
I'm going to be giving a talk on riak sometime soon. Does anyone have slides
I can steal/borrow? :)
Thanks
/github.com/aitrus/riak-pbclient
>
> More officially supported clients will be getting the PBC option in the near
> future.
>
> Sean Cribbs
> Developer Advocate
> Basho Technologies, Inc.
> http://basho.com/
>
> On Jul 11, 2010, at 11:48 AM, Wilson MacGyver wrote:
There are various riak clients out there right now,
some official, some not. Most are HTTP/REST based.
As far as I know, the Erlang native client is the only protocol-buffer-based
version.
I remember hearing that the protocol buffer version is about 10x faster
than HTTP.
So if I want to bulk load