Hi,
Does anyone have any experience with a similar setup?
We have resolved questions 4 and 5 - they occurred due to a firewall
misconfiguration - but I would still very much like to hear whether there are
any drawbacks to deleting data in the MapReduce job itself compared to just
collecting the keys.
It is already in test and available for your download now:
https://github.com/basho/leveldb/tree/mv-flexcache
Discussion is here:
https://github.com/basho/leveldb/wiki/mv-flexcache
This code is slated for Riak 2.0. Enjoy!!
Matthew
On Oct 17, 2013, at 20:50, darren wrote:
> But why isn't ri
Hi Darren,
One can always configure swap to be turned on, which can prevent OOM
killing; however, the performance impact of doing so is detrimental, so it is
not recommended. I'd suggest Matthew's recommendation above as a
starting point, if you are indeed limited to 4GB of RAM.
Cheers,
Wes
But why isn't riak smart enough to adjust itself to the available memory or
lack thereof?
No serious enterprise technology should just consume everything and crash.
Original message
From: Matthew Von-Maszewski
Date: 10/17/2013
How many nodes are you running? You should aim for around 8-16 vnodes per
server (must be a power of 2). So if you're running 5 nodes, you should be fine
with 4GB since it'll be approx 12 vnodes per. If you're only running on 1
server, you'll be running 64 vnodes on that single server (which is
Greetings,
The default config targets 5 servers and 16 to 32G of RAM. Yes, the app.config
needs some adjustment to achieve happiness for you:
- change ring_creation_size from 64 to 16 (remove the % from the beginning of
the line)
- add this line before "{data_root, }" in eleveldb section:
"{m
4GB of memory is not very much, and you'll likely exhaust it quickly. If
you're attempting to do development work with that little memory, you're
going to want to lower leveldb's memory consumption by tweaking its
configuration parameters (such as cache_size).
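As a concrete illustration of that advice, a trimmed-down eleveldb section in app.config might look like the sketch below. The specific values are assumptions for a 4GB development box, not official tuning guidance - check the docs for your Riak version before copying them.

```erlang
%% app.config excerpt - illustrative values only, assuming a low-memory dev box
{eleveldb, [
    {data_root, "/var/lib/riak/leveldb"},
    %% cache_size is per vnode, so total usage scales with ring size;
    %% keep it small when RAM is scarce (8 MB here)
    {cache_size, 8388608},
    %% fewer open files also lowers the memory footprint
    {max_open_files, 20}
]}
```

Combined with a smaller ring_creation_size, this keeps the per-vnode caches from multiplying past what the machine can hold.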
Hi
I installed Riak v1.4.2 on Ubuntu 12.04 (64-bit, 4G RAM) with apt-get, ran it
with the default app.config but changed the backend to leveldb, and tested it
with https://github.com/tpjg/goriakpbc .
I just keep putting (key, value) pairs into a bucket; the memory keeps
increasing, and in the end it crashed, as
Trying to Build Riak on SUSE LE 11 SP2
make rel fails with
*
*
==> ebloom (compile)
Compiled src/ebloom.erl
Compiling /home/cstatmgrd/riak/riak-1.4.2/deps/ebloom/c_src/ebloom_nifs.cpp
sh: line 0: exec: c++: not found
ERROR: compile failed while processing
/home/cstatmgrd/riak/riak-1.4.2/deps/ebloom
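The `exec: c++: not found` line means the build can't find a C++ compiler (ebloom contains a C++ NIF). On SUSE the compiler usually comes from the gcc-c++ package (package name assumed for SLE 11; verify against your repositories):

```shell
# Install a C and C++ toolchain so the ebloom NIF can compile
sudo zypper install gcc gcc-c++

# Confirm the compiler the build invokes is now on the PATH
c++ --version

# Then retry the release build
make rel
```
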
Hi Dave,
Since kerl uses curl to download files, you should be able to set your
proxy this way and have it picked up:
$ export http_proxy=http://proxy.server.com:3128
--
Luke Bakken
CSE
lbak...@basho.com
On Thu, Oct 17, 2013 at 4:19 PM, Dave King wrote:
> I'm trying to install erlang on a ma
I'm trying to install erlang on a machine with Proxy values. curl picks up
these values. Kerl on the other hand just seems to sit and wait. Is there
a way to pass proxy settings to Kerl?
Is there a good page on Kerl? Google doesn't seem to recognize it.
Dave
May have been a network issue, it's started working.
- Peace
Dave
On Thu, Oct 17, 2013 at 5:19 PM, Dave King wrote:
> I'm trying to install erlang on a machine with Proxy values. curl picks
> up these values. Kerl on the other hand just seems to sit and wait. Is
> there a way to pass proxy
You don't say _how_ you get the last_event_id for a particular aggregate -
but presuming that's a relatively trivial operation - you could change
around your secondary index so you just have to make a range query.
Instead of - or possibly in addition to - the aggregate_id, you could have
an aggreg
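That suggestion might look roughly like the following sketch. The index name, key layout, and variable names are all invented for illustration; the idea is a composite binary 2i of aggregate id plus the k-ordered event id, so "everything after last_event_id" becomes a single range query.

```erlang
%% Hypothetical sketch: tag each event with a composite index key
%% "<AggregateId>/<EventId>" so events sort by aggregate, then by time.
Index = {binary_index, "aggregate_event"},
Obj0  = riakc_obj:new(<<"events">>, EventId, EventBody, "application/json"),
MD0   = riakc_obj:get_update_metadata(Obj0),
MD1   = riakc_obj:set_secondary_index(
          MD0, [{Index, [<<AggregateId/binary, $/, EventId/binary>>]}]),
ok    = riakc_pb_socket:put(Pid, riakc_obj:update_metadata(Obj0, MD1)),

%% Later: one range query fetches every event after LastEventId
%% for this aggregate (16#ff as a crude upper-bound sentinel).
{ok, Results} = riakc_pb_socket:get_index(
    Pid, <<"events">>, Index,
    <<AggregateId/binary, $/, LastEventId/binary>>,
    <<AggregateId/binary, $/, 16#ff>>).
```

This works because the event ids are k-ordered, so lexicographic range over the composite key matches time order within an aggregate.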
Working on an event-sourcing approach and would really appreciate some
advice.
1. Every event is tagged with a secondary index ("aggregate_id")
2. Every event's id is k-ordered (using a Flake compatible id generator)
3. Every aggregate has last_event_id
I would like the ability to select all event
On Thursday, 17 October 2013 at 12:45PM, Eric Redmond wrote:
> Apologies that it's unclear, and I'll update the docs to correct this.
>
> http://docs.basho.com/riak/latest/ops/advanced/install-custom-code/
>
> When you install custom code, you must install that code on every node.
>
> Eric
>
>
And, just to close the loop, I went ahead and patched the Go library to
support the above functionality.
Thanks for the help everyone.
--
View this message in context:
http://riak-users.197444.n3.nabble.com/Read-Before-Writes-on-Distributed-Counters-tp4029492p4029513.html
Sent from the Riak
On 17 Oct 2013, at 17:21, Jeremiah Peschka wrote:
> When you 'update' a counter, you send in an increment operation. That's added
> to an internal list in Riak. The operations are then zipped up to provide the
> correct counter value on read. The worst that you'll do is add a large(ish)
> num
Apologies that it's unclear, and I'll update the docs to correct this.
http://docs.basho.com/riak/latest/ops/advanced/install-custom-code/
When you install custom code, you must install that code on every node.
Eric
On Oct 17, 2013, at 9:17 AM, Tristan Foureur wrote:
> Hi,
>
> My question is
To Konstantin Kalin:
This is not a good place to start a discussion about NIFs, but check out
http://ferd.ca/rtb-where-erlang-blooms.html - especially the last passage.
I used mochijson2 and ejson. I found that ejson works faster since it's
built using a NIF. But both libraries use tuple wrapping around proplists,
so I developed a few wrapper functions to manipulate fields.
Thank you,
Konstantin.
On Thu, Oct 17, 2013 at 9:08 AM, Eric Redmond wrote:
> For
Hi,
My question is simple, but I really cannot find a clear answer anywhere in the documentation. I understand how a cluster works, and how a hook works, but if you have a hook on a certain bucket and commit to a node on that bucket, is the hook triggered only on that node, on all the nodes, or only
I'd also recommend jsx [1], which doesn't require wrapping your objects in
struct tuples.
[1] https://github.com/talentdeficit/jsx
- Chris
--
Christopher Meiklejohn
Software Engineer
Basho Technologies, Inc.
On Thursday, October 17, 2013 at 12:08 PM, Eric Redmond wrote:
> For building js
For building json you should also check out a tool like mochijson2.
On Oct 17, 2013 6:51 AM, "Daniil Churikov" wrote:
> {ok, Worker} = riakc_pb_socket:start_link("my_riak_node_1", 8087),
> Obj = riakc_obj:new(<<"my_bucket">>, <<"my_key">>,
> <<"{\"key\":\"val\"}">>,
> <<"application/json">>),
>
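As a sketch of the mochijson2 suggestion (using mochijson2's {struct, Proplist} convention; the bucket and key names are just carried over from the snippet above), you could build the JSON value instead of hand-writing the string:

```erlang
%% mochijson2 represents JSON objects as {struct, Proplist} tuples
Json = iolist_to_binary(mochijson2:encode({struct, [{<<"key">>, <<"val">>}]})),
%% Json is now <<"{\"key\":\"val\"}">>
Obj = riakc_obj:new(<<"my_bucket">>, <<"my_key">>, Json, "application/json").
```

This avoids escaping mistakes in hand-built JSON strings entirely.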
Hi Daniil,
On 17 Oct 2013, at 16:55, Daniil Churikov wrote:
> Correct me if I wrong, but when you blindly do update without previous read,
> you create a sibling, which should be resolved on read. In case if you make
> a lot of increments for counter and rarely reads it will lead to siblings
> e
That's why I linked to the video - it's 60 minutes of Cribbs™ brand
pedantry.
---
Jeremiah Peschka - Founder, Brent Ozar Unlimited
MCITP: SQL Server 2008, MVP
Cloudera Certified Developer for Apache Hadoop
On Thu, Oct 17, 2013 at 10:45 AM, Sean Cribbs wrote:
> Since Jeremiah loves it when I'm
Since Jeremiah loves it when I'm pedantic, it bears mentioning that the
list of operations is rolled up immediately (not kept around), grouping by
which partition took the increment. So if I increment by 2 and then by 50,
and the increment goes to different replicas, my counter will look like
[{a,
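To make that rollup concrete, here is a hypothetical sketch of the merged state (the actor names are invented for illustration): each replica that handled increments keeps one running total, and the value returned on read is the sum across entries.

```erlang
%% Hypothetical rolled-up counter state after incrementing by 2 and by 50
%% on different replicas: one entry per actor, not one per operation.
CounterState = [{a, 2}, {b, 50}],
%% The value returned on read is the sum of the per-actor totals:
Value = lists:sum([N || {_Actor, N} <- CounterState]).
%% Value =:= 52
```

However many operations you send, the state stays bounded by the number of actors, not the number of increments.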
The reasons counters are interesting are:
1) You send an "increment" or "decrement" operation rather than the new
value.
2) Any conflicts that were created by that operation get resolved
automatically.
So, no, sibling explosion will not occur.
On Thu, Oct 17, 2013 at 3:55 PM, Daniil Churikov w
When you 'update' a counter, you send in an increment operation. That's
added to an internal list in Riak. The operations are then zipped up to
provide the correct counter value on read. The worst that you'll do is add
a large(ish) number of values to the op list inside Riak.
Siblings will be crea
Correct me if I'm wrong, but when you blindly do an update without a previous
read, you create a sibling, which must be resolved on read. If you make a
lot of increments to a counter and rarely read it, that will lead to sibling
explosion.
I am not familiar with the new counter datatypes, so I am curio
Aha! Thanks for that tip.
On Wed, Oct 16, 2013 at 3:05 PM, Jared Morrow wrote:
> It is checked by 'riak-admin diag' if you run that to check your system.
>
> -Jared
>
>
>
>
> On Wed, Oct 16, 2013 at 2:33 PM, Alex Rice wrote:
>>
>> Thanks for confirming, Matthew! That might be a good check for th
I have some from a while back, if I can find my graphs I'll put them up
somewhere.
Cheers
Russell
On 17 Oct 2013, at 16:35, Weston Jossey wrote:
> Great everyone, thank you.
>
> @Russell: I specifically work with either Go
> (https://github.com/tpjg/goriakpbc) or Ruby (basho client). I
Great everyone, thank you.
@Russell: I specifically work with either Go (
https://github.com/tpjg/goriakpbc) or Ruby (basho client). I haven't
tested the ruby client, but I'd assume it will perform the write without
the read (based on my reading of the code). The Go library, on the other
hand,
Hi Wes,
The client application does not need to perform a read before a write, the riak
server must read from disk before updating the counter. Or at least it must
with our current implementation.
What PRs did you have in mind? I'm curious.
Oh, it looks like Sam beat me to it…to elaborate on h
In the context of using distributed counters (introduced in 1.4), is it
strictly necessary to perform a read prior to issuing a write for a given key? A
la, if I want to blindly increment a value by 1, regardless of what its current
value is, is it sufficient to issue the write without previously
It is perfectly safe with Counters to "blindly" issue an update. Clients (for
counters) should allow a way to blindly send updates.
You should only be aware that your updates are *not* idempotent - if you retry
an update to a counter, both updates could be preserved.
Sam
--
Sam Elliott
Enginee
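A blind update with the Erlang client might look like the sketch below. The function names follow the 1.4-era riak-erlang-client; the bucket and key are invented, and 1.4 counters require the bucket to have allow_mult enabled - verify all of this against your client version.

```erlang
%% Blindly increment - no prior read needed; replicas merge the operation.
{ok, Pid} = riakc_pb_socket:start_link("127.0.0.1", 8087),
ok = riakc_pb_socket:counter_incr(Pid, <<"page_hits">>, <<"home">>, 1),
%% Caveat from above: retrying a timed-out increment is not idempotent,
%% so the increment may end up applied twice.
{ok, Value} = riakc_pb_socket:counter_val(Pid, <<"page_hits">>, <<"home">>),
ok = riakc_pb_socket:stop(Pid).
```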
Hello,
it's my first post; I've been searching around and am having trouble right
now. I really apologize if this question seems newbie - I've only been
playing with Riak for a little while.
My problem is that today I tried to update Riak. I was working with these
versions:
ii riak
{ok, Worker} = riakc_pb_socket:start_link("my_riak_node_1", 8087),
Obj = riakc_obj:new(<<"my_bucket">>, <<"my_key">>, <<"{\"key\":\"val\"}">>,
<<"application/json">>),
ok = riakc_pb_socket:put(Worker, Obj),
ok = riakc_pb_socket:stop(Worker).