I read somewhere a while ago that each client connecting to a Riak cluster
should have a unique id to help with resolving conflicts. Is that still the
case, and if so, what would be the recommended way of selecting such an id?
I just found this in RawClient and in IRiakClient:
/**
* If you don't set a
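For reference, here is a minimal sketch of setting a client id with the legacy
1.x Java client. The host/port and the "app1" id are placeholder values, and
the factory/id method names are the 1.x API as I recall it, so treat this as
an assumption rather than a definitive recipe:

import com.basho.riak.client.IRiakClient;
import com.basho.riak.client.RiakException;
import com.basho.riak.client.RiakFactory;

public class ClientIdExample {
    public static void main(String[] args) throws RiakException {
        IRiakClient client = RiakFactory.pbcClient("127.0.0.1", 8087);

        // Option 1: let the client generate a random 4-byte client id.
        client.generateAndSetClientId();

        // Option 2: set a stable id per application instance
        // ("app1" is a made-up value; client ids are 4 bytes).
        client.setClientId("app1".getBytes());

        client.shutdown();
    }
}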
Hi
In my setup everything worked fine until I upgraded to Riak 1.2
(although this may be a coincidence).
The nodes are installed from scratch, with changes only to the db backend (I'm
using eLevelDB) and the node names.
For some reason one node cannot join another.
What am I doing wrong?
I'm using Ubuntu 10.04 but I
> this. Staged clustering was put in place to keep users from hurting their
> clusters and to make multiple changes more efficient.
>
> -Z
>
> On Tue, Aug 21, 2012 at 9:28 AM, Daniel Iwan wrote:
>>
>> Hi
>>
>> In my setup everything worked fine until I up
I hope someone can shed some light on this issue.
Part of our dev code uses the Java RiakClient like this:
KeySource fetched = getRiakClient().listKeys(bucket);
while (fetched.hasNext()) {
    result.add(fetched.next().toStringUtf8());
}
where getRiakClient() returns instance of com.basho.riak.p
Is there a repository/location where I could download 1.0.7 Java
riak-client
without building it myself?
Thanks
Daniel
One of our nodes fails to start
$ sudo riak console
Attempting to restart script through sudo -H -u riak
Exec: /usr/lib/riak/erts-5.9.1/bin/erlexec -boot
/usr/lib/riak/releases/1.2.1/riak -embedded -config
/etc/riak/app.config -pa /usr/lib/riak/lib/basho-patches
-args
automatically
and use previous version?
Regards
Daniel
On 17 January 2013 14:00, Daniel Iwan wrote:
> One of our nodes fails to start
>
> $ sudo riak console
> Attempting to restart script through sudo -H -u riak
> Exec: /usr/lib/riak/erts-5.9.1/bin/erlexec -boot
> /usr/lib/riak/r
go and it will appear in the
> 1.3 release.
>
> Jon
>
> On Jan 21, 2013, at 3:58 AM, Daniel Iwan wrote:
>
> The issue was that one of the ring snapshot files had size 0
>
> user@node1:~$ ls -la /var/lib/riak/ring/
> total 32
> drwxr-xr-x 2 riak riak 138 Jan 17 10:
ECONDS);
}
state = State.RUNNING;
}
Regards
Daniel Iwan
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
bout that. This has been corrected in the current master
> on github and version 1.1.0 of the client will be released today.
> https://github.com/basho/riak-java-client/pull/212
>
> Thanks!
> Brian Roach
>
> On Thu, Feb 14, 2013 at 9:31 AM, Daniel Iwan
> wrote:
> > I see
Hi
In our test setup (3 nodes) we've changed the number of vnodes from the
default 64 to 512.
We've noticed an increased Riak start-up time (by 7 seconds) and failures in
our test framework because of that. Our test framework wipes the Riak cluster,
recreates it, and then our application starts.
Application (ea
Awesome stuff Shane!
Thanks for sharing.
We were thinking about the same approach so that will save us some work.
Also we were planning to add some code to get/put some fixed value into
Riak to check if that succeeds, but I'm not sure if that would work
considering
Riak's HA. I suspect that even i
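For what it's worth, a minimal sketch of the get/put canary idea mentioned
above, assuming the legacy 1.x Java client; the host, bucket and key names
are made up, and with N>1 a successful round-trip only proves that enough
vnodes answered, not that the whole cluster is healthy:

import com.basho.riak.client.IRiakClient;
import com.basho.riak.client.IRiakObject;
import com.basho.riak.client.RiakException;
import com.basho.riak.client.RiakFactory;
import com.basho.riak.client.bucket.Bucket;

public class RiakCanary {
    // Returns true if a small write followed by a read succeeds.
    public static boolean probe() {
        try {
            IRiakClient client = RiakFactory.httpClient("http://127.0.0.1:8098/riak");
            Bucket bucket = client.fetchBucket("healthcheck").execute();
            bucket.store("canary", "ok").execute();
            IRiakObject read = bucket.fetch("canary").execute();
            client.shutdown();
            return read != null && "ok".equals(read.getValueAsString());
        } catch (RiakException e) {
            return false;
        }
    }
}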
In our tests we add 3000 keys to a 3-node Riak db right after the nodes
have joined.
For each key, one node reads and modifies it, and another node does the
same but also deletes the key when it sees the other change (the key is no
longer needed). After all keys are processed our test framework checks
Hi
What worries me though is:
1) The number of keys changes when I do a listing; shouldn't that number be
constant?
If I do:
curl 'http://127.0.0.1:8098/buckets/TX/index/$key/0/zzz' | grep keys | awk
'{split($0,a,/,/); for (i in a) print a[i]}' | wc -l
I'm getting 12, 15 or 20 keys at random. I believe all o
Right after setting up the 3-node cluster, before Riak finishes vnode
transfers (ring size 512), I store 3000 keys.
On some occasions, instead of having 3000 keys listed, I have 2999 or 2989.
After the transfer is finished, all 3000 keys are visible via listing.
Why is that happening and what's the best way
Somehow I cannot find a way to avoid the pre-fetch during a store operation
(Java client).
I know StoreObject has a withoutFetch() method for that purpose, but I cannot
find a corresponding method/property in DomainBucket or DomainBucketBuilder.
Am I missing something?
Also, on a related note, when without
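For anyone else hitting this, a sketch of falling back to the plain
Bucket/StoreObject API for the no-prefetch case (host, bucket, key and value
are placeholders; note that with no fetch there is no vclock, so overwriting
an existing key will create siblings when allow_mult=true):

import com.basho.riak.client.IRiakClient;
import com.basho.riak.client.RiakException;
import com.basho.riak.client.RiakFactory;
import com.basho.riak.client.bucket.Bucket;

public class WithoutFetchExample {
    public static void main(String[] args) throws RiakException {
        IRiakClient client = RiakFactory.httpClient("http://127.0.0.1:8098/riak");
        Bucket bucket = client.fetchBucket("events").execute();

        // Store a value without the read-before-write that StoreObject
        // normally performs.
        bucket.store("event-1", "{\"type\":\"login\"}")
              .withoutFetch()
              .execute();

        client.shutdown();
    }
}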
Thanks Jeremy
In our code I'm discarding ghost keys, although I'm quite sure the default
settings in the Java client should not return tombstones.
I think a bug in my code contributed to the problems I've observed. I'm
using DomainBucket and a custom converter, and in that case I think I need to
explicitly
I'm aware that listing keys is not for production.
I'm using it mainly during testing, which started to be unreliable after the
changes described above.
What I was not expecting at all was that some of the keys would not be listed.
I'm not sure that is stated in the documentation, to tell the truth.
To me i
What is the solution here? Waiting until the vnode transfer finishes is not
acceptable (availability), and recent findings show it may take a while on
big clusters.
Regards
Daniel
On 11 March 2013 23:06, Daniel Iwan wrote:
> I'm aware that listing keys is not for production.
> I
> the line of code you noted.
>
> 3. When you store, the vector clock stored in that field will be
> passed to the .fromDomain() method of your Converter. Make sure to
> call the .withVClock(vclock) method of the RiakObjectBuilder or
> explicitly set it in the IRiakObject being ret
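To make the quoted advice concrete, here is a hedged sketch of a Converter
that carries the vector clock through, written against the 1.x client API as
I remember it; the Note POJO, bucket name and content type are placeholders:

import com.basho.riak.client.IRiakObject;
import com.basho.riak.client.builders.RiakObjectBuilder;
import com.basho.riak.client.cap.VClock;
import com.basho.riak.client.convert.ConversionException;
import com.basho.riak.client.convert.Converter;

public class NoteConverter implements Converter<NoteConverter.Note> {

    // Minimal placeholder domain class.
    public static class Note {
        public String key;
        public String body;
    }

    private static final String BUCKET = "notes"; // hypothetical bucket name

    @Override
    public IRiakObject fromDomain(Note domain, VClock vclock) throws ConversionException {
        return RiakObjectBuilder.newBuilder(BUCKET, domain.key)
                .withVClock(vclock)            // pass the fetched vclock back to Riak
                .withContentType("text/plain")
                .withValue(domain.body)
                .build();
    }

    @Override
    public Note toDomain(IRiakObject riakObject) throws ConversionException {
        Note n = new Note();
        n.key = riakObject.getKey();
        n.body = riakObject.getValueAsString();
        return n;
    }
}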
Maybe someone from Basho could shed some light on that issue?
Regards
Daniel
On 12 March 2013 11:55, Daniel Iwan wrote:
> Just to add to that.
> Further testing shows that 2i searches also suffer from the problem of not
> showing all results during active vnode transfers.
> Is this a
Hi Brian
Thanks for your detailed response.
> Nothing detects whether there is a vclock or not. If there isn't one
> provided (the value is `null` in Java), then one isn't sent to Riak -
> it is not a requirement for a store operation for it to be present. If
> an object exists when such a store is
tion to the
> issue.
>
> Mark
>
> On Thursday, March 14, 2013, Daniel Iwan wrote:
>
>> Maybe someone from Basho could shed some light on that issue?
>>
>> Regards
>> Daniel
>>
>>
>> On 12 March 2013 11:55, Daniel Iwan wrote:
>>
>
When migrating from pre-1.3.1, do I run
riak-admin reformat-indexes [] []
on every node that is part of the cluster, or just on one node which then
magically applies the change to all of them? The changelog says:
Riak 1.3.1 includes a utility, as part of riak-admin, that will perform the
reformatting of th
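For what it's worth, my reading of the release notes is that the two optional
arguments are a concurrency level and a batch size; the values below are
arbitrary examples, not recommendations:

riak-admin reformat-indexes 2 100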
rades doc, I think I've read
it somewhere on the mailing list.
Daniel
On 30 April 2013 09:59, Russell Brown wrote:
>
> On 30 Apr 2013, at 09:47, Daniel Iwan wrote:
>
> > When doing migration from pre-1.3.1 do I run
> >
> > riak-admin reformat-indexes [] []
> >
Hi all
I see a node stalled at 'joining' for a good 8 hours now:
3-node cluster v1.3.1, 512 vnodes (way too high, but that's another matter),
leveldb backend.
The cluster was originally 2 nodes only, and after upgrading to 1.3.1 we
attached another node.
No active transfers on the nodes at the moment, but fro
Four days have passed and the node is still joining.
I haven't tried to restart it (which would probably fix the issue) as I
would like to find out the real reason for the stall and what to do
to avoid it in the future.
Any suggestions?
Any suggestions?
Daniel
On 27 June 2013 00:19, Daniel Iwan wrote:
Hi, my riak-admin diag shows the output below (3-node cluster).
I'm assuming the long numbers are vnodes. The strange thing is:
5708990770823839524233143877797980545530986496 exists twice for the same node
19981467697883438334816003572292931909358452736 appears only once on the list
How do I interpret this?
How can I l
Thanks Jared
I'm aware of the limitations of a 3-node cluster. If I understand correctly,
there are some corner cases where certain copies for some vnodes can land
on the same physical node. But I would assume there is no case where all 3
copies (for N=3) land on the same physical node. Hence I
If I remove all keys from a bucket, that bucket is no longer visible when I do
curl http://127.0.0.1:8098/buckets?buckets=true
I know buckets are only prefixes for keys, so in theory a bucket does not know
whether it's empty (or maybe it does), but it looks to me like only buckets
with keys are visible.
Is ther
> On Thursday, September 5, 2013 at 12:53 PM, Daniel Iwan wrote:
>
> If I remove all keys from a bucket that bucket is not visible when I do
>
> curl http://127.0.0.1:8098/buckets?buckets=true
>
> I know bucket
You can store revertIndex = (MAX_KEY_VALUE - keyvaluefromberkley) in Riak as
a secondary index on every object. Then query the full range for that index,
limiting the results to 1. That way you'll get the one result with the maximum
keyvaluefromberkley. Reversing the order, in a nutshell, because I think values
for 2i i
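As a sketch, the lookup over HTTP could look something like this (bucket and
index names and the upper bound are made up; max_results requires the 2i
pagination added in Riak 1.4):

curl 'http://127.0.0.1:8098/buckets/mybucket/index/revidx_int/0/99999999999?max_results=1'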
Is there pseudo-code or a description anywhere of the algorithm for how
vnodes (primaries and replicas) would be distributed if I had 3, 4 or more
nodes in the cluster?
Does it depend in any way on the node name or any other setting, or is it
only a function of the number of physical nodes?
Regards
Dani
Hi, I'm using Riak 1.3.1 and Java client 1.1.2.
Using HTTP and curl I see 4 siblings for an object, one of which
has X-Riak-Deleted: true,
but when I use the Java client with DomainBucket, my Converter's
toDomain method is called only 3 times.
I have set the property
builder.returnDeletedVClock(true)
orward-port to 1.4.x as well and cut new jars. Should
> be avail by tomorrow morning at the latest.
>
> Thanks!
> - Roach
>
> On Thu, Oct 3, 2013 at 9:38 AM, Daniel Iwan wrote:
> > Hi I'm using Riak 1.3.1 and Java client 1.1.2
> >
> > Using http and curl I s
atest build? I tried
http://riak-java-client.s3.amazonaws.com/riak-client-1.1.3-jar-with-dependencies.jar
but access is denied
Cheers
Daniel
On 3 October 2013 19:36, Brian Roach wrote:
> On Thu, Oct 3, 2013 at 10:32 AM, Daniel Iwan
> wrote:
> > Thanks Brian for quick response.
ndencies.jar
>
> It fixes up the DomainBucket stuff and the JSONConverter.
>
> Thanks,
> - Roach
>
> On Fri, Oct 4, 2013 at 2:58 AM, Daniel Iwan wrote:
> > Thanks Brian for putting fix together so quickly.
> >
> > I think I found something else though.
> > In
"probably doesn't").
>
> If you do a subsequent fetch after sending both your writes you'll get
> back a single vclock with siblings.
>
> Thanks,
> - Roach
>
> On Mon, Oct 7, 2013 at 12:37 PM, Daniel Iwan
> wrote:
> > Hi Brian
> >
> &
see that
On 7 October 2013 21:21, Daniel Iwan wrote:
> I tested that with curl. Should've mentioned that.
> The output shows there is no siblings for the key and returned header
> looks like this:
>
> < HTTP/1.1 200 OK
> < X-Riak-Vclock:
> a85hYGBgymDKBVIc84WrPgU
Hi
With Java client 1.1.3 and Riak 1.3.1
I'm doing:
WriteBucket wb =
    iclient.createBucket(BUCKET_NAME).nVal(3).allowSiblings(true);
Bucket b = wb.execute();
_logger.fine("Regular bucket: " + b + ", allows siblings? " +
    b.getAllowSiblings());
DomainBucketBuilder
hes Riak.
This situation is potentially very dangerous for us. As I have no way of
checking whether allow_mult has an incorrect value (the Riak client returns
true), it effectively means write loss during updates.
Is there a way to debug what's happening or check what's in the ring?
Regards
Daniel Iwan
On
> On Wed, Oct 9, 2013 at 12:35 PM, Daniel Iwan wrote:
>
>> Thanks for the reply.
>>
>> The thing is that the bucket never converges. The allow_mult remains false
>> even seve
Sometimes I get siblings like this:
- original object
- object modified from machine1
- object modified from machine2
- deleted object
Four siblings for one object. The delete happens only after both machines have
modified the object, so clearly the object was deleted but not removed
from the Riak db.
In my
Hi
I found a place in my code where allow_mult is switched to false (during
boot) and then back to true. After removing that I could not reproduce the
problem (so far). It looks like it may be related to the problems Jeremiah
reported, with allow_mult getting stuck at false. Thanks for that hint.
D.
There is no coordination between servers, so concurrent updates of properties
are possible.
That would certainly explain a lot.
In my case though I'm setting allow_mult back to true, so eventually that
should win? Or would propagation through the ring potentially break that logic
and allow_mult = false c
Any comment on that approach?
http://hackingdistributed.com/2014/01/14/back-that-nosql-up/
Snippet:
HyperDex uses HyperLevelDB as its storage backend, which, in turn,
constructs an LSM-tree on disk. The majority of data stored within
HyperLevelDB is stored within immutable .sst files. Once writte
How "heavy" for the cluster are those two operations for Riak cluster 3-5
nodes?
Listing all keys and filtering on client side is definitely not recommended
but is 2i query via $key for given bucket equally heavy and not recommended?
On related note is there a $bucket query to find all the buckets
I just got this right after installing Riak and restarting (Ubuntu 12.04.2).
The node name should be riak@10.173.240.5 but is different in this error msg.
vm.args had the correct name, i.e. riak@10.173.240.5.
Moving the contents of /var/lib/riak, killing all riak processes, and a manual
launch via riak start
fixed it,
> http://docs.basho.com/riak/latest/ops/running/tools/riak-admin/#cluster-replace
>
> Eric
> On Jan 27, 2014 7:04 AM, "Daniel Iwan" wrote:
>
>> I just got this right after installing Riak and restarting (Ubuntu
>> 12.04.2)
>>
>> Node name should be
We have /usr/lib/riak/erts-5.9.1/ installed from the official Riak apt package
(1.3.1), and we've been using it on numerous installs.
No other Erlang packages have been installed. Is this the version you are
talking about, or should we upgrade it?
D.
Hi all
Is there a reason there are no 2i querying methods in DomainBucket?
That requires keeping both Bucket and DomainBucket references, which makes it
a bit awkward when passing those around.
Thanks
Daniel
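A sketch of the workaround implied above: keep a plain Bucket alongside the
DomainBucket just for the 2i queries. Host, bucket and index names are
placeholders, and it assumes the 1.x HTTP client with a 2i-capable backend
such as LevelDB:

import com.basho.riak.client.IRiakClient;
import com.basho.riak.client.RiakException;
import com.basho.riak.client.RiakFactory;
import com.basho.riak.client.bucket.Bucket;
import com.basho.riak.client.query.indexes.BinIndex;

import java.util.List;

public class IndexQueryExample {
    public static void main(String[] args) throws RiakException {
        IRiakClient client = RiakFactory.httpClient("http://127.0.0.1:8098/riak");
        Bucket bucket = client.fetchBucket("users").execute();

        // 2i equality query on a binary index; returns the matching keys.
        List<String> keys = bucket.fetchIndex(BinIndex.named("email"))
                                  .withValue("alice@example.com")
                                  .execute();
        System.out.println(keys);

        client.shutdown();
    }
}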
On a 5-node cluster, when our servers boot our application (which runs on the
same nodes as Riak and queries localhost), I got
Caused by: com.basho.riak.client.RiakRetryFailedException:
com.basho.riak.pbc.RiakError: {error,insufficient_vnodes_available}
at com.basho.riak.client.cap.DefaultRetrier.att
Any ideas regarding that?
Thanks
Daniel
Thanks Ciprian
We already have wait-for-service in our script, and it looks like that is not
a sufficient condition for a secondary index query to succeed.
How long should the application wait before starting to query Riak using 2i?
Should we do riak-admin transfers to make sure there are no vnode transfers
ha
re it could run successfully and Riak would return
> {error,insufficient_vnodes_available} while the required primary
> partitions are coming up.
>
> I would suggest defensive programming (retrying the 2i queries on error)
> as a way to mitigate this.
>
>
> Thanks,
> Cip
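Along the lines of that suggestion, a hedged sketch of retrying the 2i query
(index name, range, retry count and backoff are all made up; assumes the 1.x
Java client):

import com.basho.riak.client.RiakException;
import com.basho.riak.client.bucket.Bucket;
import com.basho.riak.client.query.indexes.IntIndex;

import java.util.List;

public class RetryingIndexQuery {
    // Retries the query a few times, e.g. to ride out
    // {error,insufficient_vnodes_available} during start-up.
    static List<String> queryWithRetry(Bucket bucket)
            throws RiakException, InterruptedException {
        RiakException last = null;
        for (int attempt = 0; attempt < 5; attempt++) {
            try {
                return bucket.fetchIndex(IntIndex.named("created_at"))
                             .from(0)
                             .to(1000000)
                             .execute();
            } catch (RiakException e) {
                last = e;
                Thread.sleep(500L * (attempt + 1)); // simple linear backoff
            }
        }
        throw last;
    }
}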
Below is the output from my Riak cluster: 3 physical nodes, ring size 128.
As far as I can tell, when Riak is installed fresh it always places partitions
on the ring in the same way, as long as the number of vnodes and servers is
the same.
All presentations, including "A Little Riak Book", show a pretty picture o
Hi Ciprian
Thanks for the reply.
I'm assuming the 'overlay' you are talking about is the vnodes?
When creating a cluster and joining 2 nodes to the first node (3-node cluster),
it should be possible to distribute partitions so as to guarantee that 3 copies
are on distinct machines. Simple sequential vnode assignment would do
Hi
I got the following exception with Riak Java client 1.1.3 and a Riak 1.3.1
cluster.
I don't see any error messages in Riak's console log. Any idea what may be
causing this?
Caused by: com.basho.riak.client.RiakRetryFailedException:
java.io.IOException: bad message code. Expected: 14 actual: 1
at com.ba
> I’d upgrade to Java client 1.1.4 and see if the behavior continues.
>
> Best Regards,
>
> Bryan Hunt
>
>
>
> On 8 May 2014, at 15:02, Daniel Iwan wrote:
>
> > Hi
> >
> > I got following exception with riak Java client 1.1.3, Riak cluster 1.3.1
>
ading to the 1.1.4 client release and see if the problem persists.
>
> Thanks,
> - Roach
>
> On Thu, May 8, 2014 at 8:02 AM, Daniel Iwan wrote:
> > Hi
> >
> > I got following exception with riak Java client 1.1.3, Riak cluster 1.3.1
> > I don't see any
I watched the Ricon 2014 video from Martin @NHS. Towards the end of his talk
he briefly mentions the returnTerm option and also something about regular
expression matching (2i?).
https://www.youtube.com/watch?v=5Plsj6Zl-kM
http://basho.github.io/riak-java-client/1.4.4/com/basho/riak/pbc/IndexRequest.html#r
By the look of it, returnTerm is available in 1.3+ and regexp matching got
merged into 2.0?
Also, is there any documentation on what subset of Perl regexps is supported?
Thanks
Daniel
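For reference, a sketch of what a return-terms 2i range query looks like over
HTTP in 1.4+; the bucket, index and range are placeholders:

curl 'http://127.0.0.1:8098/buckets/users/index/surname_bin/a/z?return_terms=true'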
Also, it may be worth checking whether there are any 0-byte files in the AAE
folder. I've seen corruption like that in the past (although on ring files,
not AAE). If you find and remove the corrupted file, rebuilding AAE will be
faster/cheaper.
It would be good if that error showed which file could not be r
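A quick way to look for such files, assuming the default Ubuntu package layout
(the path is an assumption):

find /var/lib/riak/anti_entropy -type f -size 0 -ls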
We are experiencing crashes of beam.smp on one of the nodes in a 3-node
cluster (ring size 128).
The distro is Ubuntu 12.04 with 16GB of memory (almost exclusively for Riak).
= Sun Feb 15 10:02:23 UTC 2015
Erlang has closed/usr/lib/riak/lib/os_mon-2.2.9/priv/bin/memsup:
Erlang has closed.
Hi I've got following
Hi
On a 3-node cluster (Ubuntu 12.04, 8GB RAM per node), all nodes show 6GB taken
by beam.smp and 2GB by our process.
beam started swapping and is currently using 23GB of swap space.
vm.swappiness is set to 1.
We are using ring size 128. /var/lib/riak is 37GB in size, 11GB of which is
used by anti-entropy.
Is there a
My ideas:
1. Rewrite (read-write) the object with new values for all indexes.
2. Enable siblings on the bucket and write an empty object with the update for
your index; that will create a sibling.
Then whenever you read the object, merge the object + indexes. This may be
more appropriate if you have big objects and want t
We are using LevelDB as the backend without any tuning.
Also, we are aware that performance may suffer due to potentially storing
some of the copies (n=3) twice on the same server. We are not so much
concerned about the latencies caused by that.
What is worrying, though, is the almost unbounded growth of swap usage, wh
I absolutely agree. That is why we've changed vm.swappiness to 1,
so it swaps only when absolutely necessary. I think we underestimated how
much swap might be needed, but I also don't understand why it is so hungry
for memory.
Is there a particular activity, like 2i queries, AAE or LevelDB c
Ciprian
Thanks for the reply. I will check that as soon as I get access to the
servers again.
D.
Hi
I've checked all the logs and there is nothing regarding memory issues.
Since then I've had several Riak crashes, but it looks like other processes
are failing as well:
Feb 2 22:05:28 node2 kernel: [20052.901884] beam.smp[1830]: segfault at
8523111 ip 08523111 sp 7f03ba821be8 error 14
I moved the /var/lib/riak folder to a RAID array.
Another crash happened 20 minutes after Riak started.
Another crash 20 mins after start
=
= LOGGING STARTED Wed Feb 25 12:40:07 UTC 2015
=
Exec: /usr/lib/riak/erts-5.9.1/bin/erlexec -boot
/usr/lib/riak/releases/1.3.1/riak -embedded
Thanks Magnus
I'm running memtest86 on a set of another 3 servers with an identical
configuration to see if I can trigger that as well.
I cannot do it on the failing node at the moment since it's a remote site,
but I agree it's a strong indication of a RAM module problem.
As a test I moved /var/lib/riak to R
Hi all
Am I right in thinking that v1.1.4 does not support headOnly() on domain
buckets?
During domain.fetch(), at line 237 in
https://github.com/basho/riak-java-client/blob/1.1.4/src/main/java/com/basho/riak/client/bucket/DomainBucket.java
there is no check for / call to headOnly() on the FetchMeta object.
Cod
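As a possible workaround, a sketch of a head-only existence check through the
plain Bucket API; bucket and key names are made up, and it assumes FetchObject
itself exposes headOnly() in the 1.1.x client:

import com.basho.riak.client.IRiakClient;
import com.basho.riak.client.IRiakObject;
import com.basho.riak.client.RiakException;
import com.basho.riak.client.RiakFactory;
import com.basho.riak.client.bucket.Bucket;

public class HeadOnlyCheck {
    public static void main(String[] args) throws RiakException {
        IRiakClient client = RiakFactory.httpClient("http://127.0.0.1:8098/riak");
        Bucket bucket = client.fetchBucket("documents").execute();

        // HEAD-style fetch: metadata and vclock without the value body.
        IRiakObject meta = bucket.fetch("doc-1").headOnly().execute();
        System.out.println("exists: " + (meta != null));

        client.shutdown();
    }
}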
We are using the official 1.1.4, which is the latest recommended for the
Riak 1.3 we have installed.
Upgrading to Riak 1.4 is not possible at the moment.
D.
Hi
I'm using a 4-node Riak cluster, v1.3.1.
I wanted to know a little bit more about using the withoutFetch() option
with LevelDB.
I'm trying to write to a single key as fast as I can with n=3.
I deliberately create siblings by writing with a stale vclock. I'm limiting
the number of writes to 1000 pe
We are using Java client 1.1.4.
We haven't moved to a newer version of Riak as, for the moment, we don't need
any new features.
Also, rolling out a new version may be complicated since we have multiple
clusters.
As regards object size, it's ~250-300 bytes per write. We store simple
JSON stru
Hi Alex
>> It appears that the domain buckets api does not support headOnly(). That
>> api was written to be a higher-level abstraction around a common usage,
>> so
>> it abstracted that idea of head vs object data away.
I think it may be quite useful functionality anyway, to check the existen
Alex,
Thanks for answering this one and pointing me in the right direction.
I did an experiment and wrote 0 bytes instead of the JSON and got the same
effect: the LevelDB folder is 80-220MB in size, with activity around 20MB/s
written to disk and no reads from disk.
The Java client reports 45 secs for 1000 en
Hi
Our attach script failed: it only issued *cluster join*
but not *cluster plan* and *cluster commit*,
so the node was visible as 'joining' in member_status.
Since then both parts (the original cluster and the new node) have been taking
writes, but in our configuration each node takes writes from its local proc
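For reference, the full staged-clustering sequence the script should have run
(the node name is a placeholder):

riak-admin cluster join riak@10.0.0.1
riak-admin cluster plan      # review the proposed changes
riak-admin cluster commit    # nothing moves until this step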
Hi Jon
Thanks for confirming this.
We did do the plan/commit and everything worked as expected, no issues
whatsoever.
Thanks a bunch
Daniel
Hi all
Am I right in thinking that when a node goes offline, *riak-admin transfers*
will always show transfers to be done? E.g.
riak-admin transfers
Attempting to restart script through sudo -H -u riak
[sudo] password for myuser:
Nodes ['riak@10.173.240.12'] are currently down.
'riak@10.173.240.9' wai
Magnus
Thanks for confirming.
We've had issues with 2i (coverage queries) during node startup where some
keys might not appear in the results.
More details here:
http://riak-users.197444.n3.nabble.com/Keys-not-listed-during-vnode-transfers-td4027133.html#a4027139
We've been using
Hello riak users
I'm trying to get my head around partitioning in Riak.
A fairly recent thread was very helpful:
http://thread.gmane.org/gmane.comp.db.riak.user/6207/focus=6266
Let's say I install Riak on 3 nodes (initially), which will possibly
grow to 10 or more.
The default number of partitions is 64 a