Java clientId

2012-08-09 Thread Daniel Iwan
I've read somewhere some time ago that each client connecting to a Riak cluster should have a unique id to help with resolving conflicts. Is that still the case and, if yes, what would be a recommended way of selecting such an id? I just found in RawClient and in IRiakClient /** * If you don't set a
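The question above concerns the legacy Riak client id, a 4-byte value used when recording vclock updates. One common approach is to generate it randomly once per client process so that concurrent clients are very unlikely to collide. A minimal sketch, assuming a random per-process id is acceptable (the `generateClientId` helper is illustrative, not part of the Riak client API):

```java
import java.security.SecureRandom;
import java.util.Arrays;

public class ClientIdExample {
    // The legacy Riak client id is 4 bytes; generating it randomly per
    // process makes collisions between independent clients unlikely.
    static byte[] generateClientId() {
        byte[] id = new byte[4];
        new SecureRandom().nextBytes(id);
        return id;
    }

    public static void main(String[] args) {
        byte[] a = generateClientId();
        byte[] b = generateClientId();
        System.out.println(a.length == 4 && b.length == 4);
        // Two independently generated ids are almost certainly distinct.
        System.out.println(!Arrays.equals(a, b));
    }
}
```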

Node cannot join

2012-08-21 Thread Daniel Iwan
Hi In my setup everything worked fine until I upgraded to Riak 1.2 (although this may be a coincidence). Nodes are installed from scratch with changes only to the db backend (I'm using eLevelDB) and names. For some reason one node cannot join another. What am I doing wrong? I'm using Ubuntu 10.04 but I

Re: Node cannot join

2012-08-21 Thread Daniel Iwan
> this. Staged clustering was put in place to keep users from hurting their > clusters and to make multiple changes more efficient. > > -Z > > On Tue, Aug 21, 2012 at 9:28 AM, Daniel Iwan wrote: >> >> Hi >> >> In my setup everything worked fine until I up

Listing keys again

2012-10-11 Thread Daniel Iwan
I hope someone could shed some light on this issue Part of our dev code is using Java RiakClient like this KeySource fetched = getRiakClient().listKeys(bucket); while (fetched.hasNext()) { result.add(fetched.next().toStringUtf8()); } where getRiakClient() returns instance of com.basho.riak.p

Java riak-client 1.0.7 build with dependencies

2013-01-11 Thread Daniel Iwan
Is there a repository/location where I could download the 1.0.7 Java riak-client without building it myself? Thanks Daniel

Riak fails to start

2013-01-17 Thread Daniel Iwan
One of our nodes fails to start $ sudo riak console Attempting to restart script through sudo -H -u riak Exec: /usr/lib/riak/erts-5.9.1/bin/erlexec -boot /usr/lib/riak/releases/1.2.1/riak -embedded -config /etc/riak/app.config -pa /usr/lib/riak/lib/basho-patches -args

Re: Riak fails to start

2013-01-21 Thread Daniel Iwan
automatically and use previous version? Regards Daniel On 17 January 2013 14:00, Daniel Iwan wrote: > One of our nodes fails to start > > $ sudo riak console > Attempting to restart script through sudo -H -u riak > Exec: /usr/lib/riak/erts-5.9.1/bin/erlexec -boot > /usr/lib/riak/r

Re: Riak fails to start

2013-01-21 Thread Daniel Iwan
go and it will appear in the > 1.3 release. > > Jon > > On Jan 21, 2013, at 3:58 AM, Daniel Iwan wrote: > > The issue was that one of the ring snapshot files had size 0 > > user@node1:~$ ls -la /var/lib/riak/ring/ > total 32 > drwxr-xr-x 2 riak riak 138 Jan 17 10:

Riak Java client 100% CPU

2013-02-14 Thread Daniel Iwan
ECONDS); } state = State.RUNNING; } Regards Daniel Iwan

Re: Riak Java client 100% CPU

2013-02-15 Thread Daniel Iwan
bout that. This has been corrected in the current master > on github and version 1.1.0 of the client will be released today. > https://github.com/basho/riak-java-client/pull/212 > > Thanks! > Brian Roach > > On Thu, Feb 14, 2013 at 9:31 AM, Daniel Iwan > wrote: > > I see

Riak vnodes not available

2013-03-05 Thread Daniel Iwan
Hi In our test setup (3 nodes) we've changed the number of vnodes from the default 64 to 512. We've noticed increased Riak startup time (by 7 seconds) and failures in our test framework due to that. Our test framework wipes the Riak cluster and recreates it, and then our application starts. Application (ea

Re: Riak vnodes not available

2013-03-05 Thread Daniel Iwan
Awesome stuff Shane! Thanks for sharing. We were thinking about the same approach so that will save us some work. Also we were planning to add some code to get/put some fixed value into Riak to check if that succeeds, but I'm not sure if that would work considering Riak's HA. I suspect that even i

Delete keys still in Riak db

2013-03-07 Thread Daniel Iwan
In our tests we are adding 3000 keys into 3-node Riak db right after nodes have joined. For each key one node reads it and modifies it and another node does the same but also deletes the key when it sees other change (key is no longer needed). After all keys are processed our test framework checks

Re: Delete keys still in Riak db

2013-03-08 Thread Daniel Iwan
Hi What worries me though is: 1) the number of keys changes when I do a listing; shouldn't that number be constant? If I do: http://127.0.0.1:8098/buckets/TX/index/$key/0/zzz' | grep keys | awk '{split($0,a,/,/); for (i in a) print a[i]}' | wc -l I'm getting 12, 15 or 20 keys randomly. I believe all o

Keys not listed during vnode transfers

2013-03-08 Thread Daniel Iwan
Right after setting up the cluster of 3 nodes, before Riak finishes vnode transfers (ring 512), I store 3000 keys. On some occasions instead of having 3000 keys listed I have 2999 or 2989. After the transfer is finished we have 3000 keys visible via listing. Why is that happening and what's the best way

Using withoutFetch with DomainBucket

2013-03-08 Thread Daniel Iwan
Somehow I cannot find a way to avoid the pre-fetch during a store operation (Java client). I know that in StoreObject there is a withoutFetch method for that purpose, but I cannot find a corresponding method/property in DomainBucket or DomainBucketBuilder. Am I missing something? Also, on a related note, when without

Re: Delete keys still in Riak db

2013-03-11 Thread Daniel Iwan
Thanks Jeremy In our code I'm discarding ghost keys although I'm quite sure default settings in Java client should not return tombstones. I think my bug in the code contributed to the problems I've observed. I'm using DomainBucket and custom converter and in that case I think I need to explicitly

Re: Keys not listed during vnode transfers

2013-03-11 Thread Daniel Iwan
I'm aware that listing keys is not for production. I'm using it mainly during testing, which started to be unreliable after changes described above. What I was not expecting at all was that some of the keys won't be listed. I'm not sure if that is stated in documentation to tell the truth. To me i

Re: Keys not listed during vnode transfers

2013-03-12 Thread Daniel Iwan
d What is the solution here? Waiting until vnode transfer finishes is not acceptable (availability) and recent findings show it may take a while on big clusters. Regards Daniel On 11 March 2013 23:06, Daniel Iwan wrote: > I'm aware that listing keys is not for production. > I

Re: Using withoutFetch with DomainBucket

2013-03-12 Thread Daniel Iwan
> the line of code you noted. > > 3. When you store, the vector clock stored in that field will be > passed to the .fromDomain() method of your Converter. Make sure to > call the .withVClock(vclock) method of the RiakObjectBuilder or > explicitly set it in the IRiakObject being ret

Re: Keys not listed during vnode transfers

2013-03-14 Thread Daniel Iwan
Maybe someone from Basho could shed some light on that issue? Regards Daniel On 12 March 2013 11:55, Daniel Iwan wrote: > Just to add to that. > Further tests show that 2i searches also suffer from the problem of not > showing all results during active vnode transfer. > Is this a

Re: Using withoutFetch with DomainBucket

2013-03-14 Thread Daniel Iwan
Hi Brian Thanks for your detailed response. Nothing detects whether there is a vclock or not. If there isn't one > provided (the value is `null` in Java), then one isn't sent to Riak - > it is not a requirement for a store operation for it to be present. If > an object exists when such a store is

Re: Keys not listed during vnode transfers

2013-04-02 Thread Daniel Iwan
tion to the > issue. > > Mark > > On Thursday, March 14, 2013, Daniel Iwan wrote: > >> Maybe someone from Basho could shed some light on that issue? >> >> Regards >> Daniel >> >> >> On 12 March 2013 11:55, Daniel Iwan wrote: >> >

Reformatting 2i in Riak 1.3.1

2013-04-30 Thread Daniel Iwan
When doing migration from pre-1.3.1, do I run riak-admin reformat-indexes [] [] on every node that is part of the cluster, or just one, after which it magically applies the change to all of them? The changelog says: Riak 1.3.1 includes a utility, as part of riak-admin, that will perform the reformatting of th

Re: Reformatting 2i in Riak 1.3.1

2013-04-30 Thread Daniel Iwan
rades doc, I think I've read it somewhere on mailing list. Daniel On 30 April 2013 09:59, Russell Brown wrote: > > On 30 Apr 2013, at 09:47, Daniel Iwan wrote: > > > When doing migration from pre-1.3.1 do I run > > > > riak-admin reformat-indexes [] [] > >

Riak node joining

2013-06-26 Thread Daniel Iwan
Hi all I see a node stalled at 'joining' for a good 8 hours now: 3-node cluster v1.3.1, 512 vnodes (way too high but that's another matter), leveldb backend. The cluster was originally 2 nodes only and after upgrading to 1.3.1 we attached another node. No active transfers on the nodes at the moment, but fro

Re: Riak node joining

2013-06-30 Thread Daniel Iwan
Four days passed and the node is still joining. I haven't tried to restart it (which would probably fix the issue) as I would like to find out the real reason for that stall and what to do to avoid it in the future. Any suggestions? Daniel On 27 June 2013 00:19, Daniel Iwan wrote:

riak-admin diag output

2013-07-12 Thread Daniel Iwan
Hi My riak-admin diag shows the output below (3-node cluster). I'm assuming the long numbers are vnodes. The strange thing is: 5708990770823839524233143877797980545530986496 exists twice for the same node 19981467697883438334816003572292931909358452736 once on the list How do I interpret this? How can I l

Re: riak-admin diag output

2013-07-15 Thread Daniel Iwan
Thanks Jared I'm aware of limitations of 3-node cluster. If I understand it correctly there are some corner cases where certain copies for some vnodes can land on the same physical node. But I would assume there is no case where all 3 copies (for N=3) should land on the same physical node. Hence I

Empty bucket disappears

2013-09-05 Thread Daniel Iwan
If I remove all keys from a bucket, that bucket is not visible when I do curl http://127.0.0.1:8098/buckets?buckets=true I know buckets are only prefixes for keys, so in theory a bucket does not know if it's empty (or maybe it does), but to me it looks like only buckets with keys are visible. Is ther

Re: Empty bucket disappears

2013-09-05 Thread Daniel Iwan
nologies > > On Thursday, September 5, 2013 at 12:53 PM, Daniel Iwan wrote: > > If I remove all keys from a bucket that bucket is not visible when I do > > curl http://127.0.0.1:8098/buckets?buckets=true > > I know bucket

Re: Getting largest key

2013-09-19 Thread Daniel Iwan
You can store revertIndex = (MAX_KEY_VALUE - keyvaluefromberkley) in Riak as a secondary index for every object. Then get a full range for that index limiting results to 1. In this way you'll get one result with max keyvaluefromberkley. Reversing order in a nutshell, because I think values for 2i i
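The reversed-index trick above works because 2i range queries return results in ascending order: store `MAX_KEY_VALUE - key` as the index value, and an ascending range query limited to one result returns the entry for the largest original key. A minimal sketch of just the value computation, assuming keys fit in a long, a fixed upper bound, and zero-padding so string comparison matches numeric order (names are illustrative):

```java
public class ReverseIndexExample {
    static final long MAX_KEY_VALUE = 999_999_999_999L; // assumed upper bound on keys

    // Zero-pad so lexicographic order of the index value matches numeric
    // order; larger original keys produce lexicographically smaller values.
    static String reverseIndexValue(long key) {
        return String.format("%012d", MAX_KEY_VALUE - key);
    }

    public static void main(String[] args) {
        long small = 42, large = 123_456;
        // The larger original key sorts FIRST under the reversed index, so an
        // ascending 2i range query with max_results=1 would return it.
        System.out.println(reverseIndexValue(large).compareTo(reverseIndexValue(small)) < 0);
    }
}
```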

VNodes distribution on the ring

2013-09-19 Thread Daniel Iwan
Is there anywhere a pseudo-code or description of the algorithm for how vnodes (primaries and replicas) would be distributed if I had 3, 4 or more nodes in the cluster? Does it depend in any way on the node name or any other setting, or is it only a function of the number of physical nodes? Regards Dani
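As background to the question above: Riak hashes keys into a 2^160 keyspace split into ring_size equal partitions, and the claim algorithm assigns partitions to nodes roughly round-robin by ring position, independent of node names. A simplified sketch of that idea (the real claim algorithm additionally enforces target_n_val spacing, so this is an illustration, not Riak's actual code):

```java
import java.math.BigInteger;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.ArrayList;
import java.util.List;

public class RingSketch {
    // Riak hashes bucket/key pairs into [0, 2^160).
    static final BigInteger RING_TOP = BigInteger.valueOf(2).pow(160);

    // Which partition index a hashed key falls into, for a given ring size.
    static int partitionFor(BigInteger keyHash, int ringSize) {
        BigInteger partSize = RING_TOP.divide(BigInteger.valueOf(ringSize));
        return keyHash.divide(partSize).intValue();
    }

    // Simplified round-robin claim: partition i -> node (i mod nodeCount).
    // The real algorithm also spaces each node's partitions so that
    // target_n_val consecutive partitions land on distinct nodes.
    static List<Integer> roundRobinClaim(int ringSize, int nodeCount) {
        List<Integer> owners = new ArrayList<>();
        for (int i = 0; i < ringSize; i++) owners.add(i % nodeCount);
        return owners;
    }

    static BigInteger sha1(String key) throws NoSuchAlgorithmException {
        byte[] d = MessageDigest.getInstance("SHA-1").digest(key.getBytes(StandardCharsets.UTF_8));
        return new BigInteger(1, d);
    }

    public static void main(String[] args) throws NoSuchAlgorithmException {
        int ringSize = 64, nodes = 3;
        List<Integer> owners = roundRobinClaim(ringSize, nodes);
        int p = partitionFor(sha1("bucket/key"), ringSize);
        System.out.println("partition " + p + " owned by node " + owners.get(p));
    }
}
```

With 64 partitions and 3 nodes, 64 is not divisible by 3, which is one reason adjacent replicas can end up unevenly spread on small clusters.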

Riak Java client not returning deleted sibling

2013-10-03 Thread Daniel Iwan
Hi I'm using Riak 1.3.1 and Java client 1.1.2 Using http and curl I see 4 siblings for an object one of which has X-Riak-Deleted: true but when I'm using Java client with DomainBucket my Converter's method toDomain is called only 3 times. I have set the property builder.returnDeletedVClock(true)

Re: Riak Java client not returning deleted sibling

2013-10-03 Thread Daniel Iwan
orward-port to 1.4.x as well and cut new jars. Should > be avail by tomorrow morning at the latest. > > Thanks! > - Roach > > On Thu, Oct 3, 2013 at 9:38 AM, Daniel Iwan wrote: > > Hi I'm using Riak 1.3.1 and Java client 1.1.2 > > > > Using http and curl I s

Re: Riak Java client not returning deleted sibling

2013-10-04 Thread Daniel Iwan
atest build? I tried http://riak-java-client.s3.amazonaws.com/riak-client-1.1.3-jar-with-dependencies.jar but access is denied Cheers Daniel On 3 October 2013 19:36, Brian Roach wrote: > On Thu, Oct 3, 2013 at 10:32 AM, Daniel Iwan > wrote: > > Thanks Brian for quick response.

Re: Riak Java client not returning deleted sibling

2013-10-07 Thread Daniel Iwan
ndencies.jar > > It fixes up the DomainBucket stuff and the JSONConverter. > > Thanks, > - Roach > > On Fri, Oct 4, 2013 at 2:58 AM, Daniel Iwan wrote: > > Thanks Brian for putting fix together so quickly. > > > > I think I found something else though. > > In

Re: Riak Java client not returning deleted sibling

2013-10-07 Thread Daniel Iwan
"probably doesn't"). > > If you do a subsequent fetch after sending both your writes you'll get > back a single vclock with siblings. > > Thanks, > - Roach > > On Mon, Oct 7, 2013 at 12:37 PM, Daniel Iwan > wrote: > > Hi Brian > > > &

Re: Riak Java client not returning deleted sibling

2013-10-08 Thread Daniel Iwan
see that On 7 October 2013 21:21, Daniel Iwan wrote: > I tested that with curl. Should've mentioned that. > The output shows there is no siblings for the key and returned header > looks like this: > > < HTTP/1.1 200 OK > < X-Riak-Vclock: > a85hYGBgymDKBVIc84WrPgU

Bucket properties not updated

2013-10-09 Thread Daniel Iwan
Hi With Java client 1.1.3 and Riak 1.3.1 I'm doing: WriteBucket wb = iclient.createBucket(BUCKET_NAME).nVal(3).allowSiblings(true); Bucket b = wb.execute(); _logger.fine("Regular bucket: " + bucket + ", allows siblings? " + bucket.getAllowSiblings()); DomainBucketBuilder

Re: Bucket properties not updated

2013-10-09 Thread Daniel Iwan
hes Riak. This situation is potentially very dangerous for us. As I have no way of checking whether allow_mult has an incorrect value (the Riak client returns true), it simply means write loss during updates. Is there a way to debug what's happening or check what's in the ring? Regards Daniel Iwan On

Re: Bucket properties not updated

2013-10-09 Thread Daniel Iwan
Unlimited > MCITP: SQL Server 2008, MVP > Cloudera Certified Developer for Apache Hadoop > > > On Wed, Oct 9, 2013 at 12:35 PM, Daniel Iwan wrote: > >> Thank for reply. >> >> The thing is that bucket never converges. The allow_mult remains false >> even seve

Delete deleted object

2013-10-10 Thread Daniel Iwan
Sometimes I get siblings like this: - original object - object modified from machine1 - object modified from machine2 - deleted object 4 siblings for one object. A delete happens only if both machines have made modifications to the object, so clearly the object was deleted but not removed from the Riak db. In my

Re: Bucket properties not updated

2013-10-10 Thread Daniel Iwan
Hi I found a place in my code where allow_mult is switched to false (during boot) and then back to true. After removing that I could not reproduce the problem (so far). Looks like it may be related to problems Jeremiah reported, and allow_mult getting stuck in false. Thanks for that hint. D.

Re: Bucket properties not updated

2013-10-10 Thread Daniel Iwan
There is no coordination between servers, so a concurrent update of properties is possible. That would certainly explain a lot. In my case though I'm setting allow_mult back to true, so eventually that should win? Or would propagation through the ring potentially break that logic and allow_mult = false c

Re: Riak Search and Yokozuna Backup Strategy

2014-01-21 Thread Daniel Iwan
Any comment on that approach? http://hackingdistributed.com/2014/01/14/back-that-nosql-up/ Snippet: HyperDex uses HyperLevelDB as its storage backend, which, in turn, constructs an LSM-tree on disk. The majority of data stored within HyperLevelDB is stored within immutable .sst files. Once writte

Listing all keys and 2i $key query on a bucket

2014-01-25 Thread Daniel Iwan
How "heavy" are those two operations for a Riak cluster of 3-5 nodes? Listing all keys and filtering on the client side is definitely not recommended, but is a 2i query via $key for a given bucket equally heavy and not recommended? On a related note, is there a $bucket query to find all the buckets

Rak 13.1 error on start

2014-01-27 Thread Daniel Iwan
I just got this right after installing Riak and restarting (Ubuntu 12.04.2). The node name should be riak@10.173.240.5 but is different in this error msg. vm.args had the correct name, i.e. riak@10.173.240.5. Moving the content of /var/lib/riak, killing all riak processes and a manual launch via riak start fixed it,

Re: Rak 13.1 error on start

2014-01-28 Thread Daniel Iwan
; > http://docs.basho.com/riak/latest/ops/running/tools/riak-admin/#cluster-replace > > Eric > On Jan 27, 2014 7:04 AM, "Daniel Iwan" wrote: > >> I just got this right after installing Riak and restarting (Ubuntu >> 12.04.2) >> >> Node name should be

Re: Rak 1.3.1 error on start

2014-01-28 Thread Daniel Iwan
We have /usr/lib/riak/erts-5.9.1/ installed from official Riak apt package 1.3.1, and we've been using it on numerous installs. No other erlang packages have been installed. Is this the version you are talking about or should we upgrade it? D.

Java client, querying using domain bucket and 2i

2014-02-08 Thread Daniel Iwan
Hi all Is there a reason there are no 2i querying methods in DomainBucket? That requires keeping both Bucket and DomainBucket references, which makes it a bit awkward when passing those around. Thanks Daniel

Cluster start and 2i query

2014-02-22 Thread Daniel Iwan
On 5 node cluster when our servers boot our application (which runs on the same nodes as riak and queries localhost) I got Caused by: com.basho.riak.client.RiakRetryFailedException: com.basho.riak.pbc.RiakError: {error,insufficient_vnodes_available} at com.basho.riak.client.cap.DefaultRetrier.att

Re: Cluster start and 2i query

2014-03-05 Thread Daniel Iwan
Any ideas regarding that? Thanks Daniel

Re: Cluster start and 2i query

2014-03-05 Thread Daniel Iwan
Thanks Ciprian We already have wait-for-service in our script and it looks like it's not a sufficient condition to satisfy a secondary index query. How long should the application wait before starting to query Riak using 2i? Should we run riak-admin transfers to make sure there are no vnode transfers ha

Re: Cluster start and 2i query

2014-03-08 Thread Daniel Iwan
re it could run successfully and Riak would return > {error,insufficient_vnodes_available} while the required primary > partitions are coming up. > > I would suggest defensive programming (retrying the 2i queries on error) > as a way to mitigate this. > > > Thanks, > Cip
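The advice quoted above, to defensively retry 2i queries that fail with `insufficient_vnodes_available` while primary partitions come up, can be sketched as a generic retry-with-backoff wrapper (pure Java; the failing lambda stands in for a real 2i query, and all names here are illustrative):

```java
import java.util.concurrent.Callable;

public class RetryExample {
    // Retry an operation a few times with linear backoff -- mirrors retrying
    // a 2i query while a node's primary partitions are still starting.
    static <T> T withRetries(Callable<T> op, int attempts, long backoffMs) throws Exception {
        Exception last = null;
        for (int i = 0; i < attempts; i++) {
            try {
                return op.call();
            } catch (Exception e) {
                last = e;                       // e.g. insufficient_vnodes_available
                Thread.sleep(backoffMs * (i + 1));
            }
        }
        throw last; // give up after the final attempt
    }

    public static void main(String[] args) throws Exception {
        int[] calls = {0};
        // Fails twice, then succeeds -- like a query during cluster startup.
        String result = withRetries(() -> {
            if (++calls[0] < 3) throw new IllegalStateException("insufficient_vnodes_available");
            return "ok";
        }, 5, 10);
        System.out.println(result + " after " + calls[0] + " attempts"); // prints: ok after 3 attempts
    }
}
```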

Partitions placement

2014-03-13 Thread Daniel Iwan
Below is the output of my Riak cluster. 3 physical nodes. Ring size 128. As far as I can tell, when Riak is installed fresh it always places partitions in the same way on the ring, as long as the number of vnodes and servers is the same. All presentations, including "A Little Riak Book", show a pretty picture o

Re: Partitions placement

2014-03-17 Thread Daniel Iwan
Hi Ciprian Thanks for the reply. I'm assuming the 'overlay' you are talking about means vnodes? When creating a cluster and joining 2 nodes to the first node (3-node cluster) it should be possible to distribute partitions so as to guarantee 3 copies are on distinct machines. Simple sequential vnode assignment would do

[no subject]

2014-05-08 Thread Daniel Iwan
Hi I got the following exception with Riak Java client 1.1.3, Riak cluster 1.3.1. I don't see any error messages in Riak's console log. Any idea what may be causing this? Caused by: com.basho.riak.client.RiakRetryFailedException: java.io.IOException: bad message code. Expected: 14 actual: 1 at com.ba

Re: bad message code. Expected: 14 actual: 1

2014-05-08 Thread Daniel Iwan
> I’d upgrade to Java client 1.1.4 and see if the behavior continues. > > Best Regards, > > Bryan Hunt > > > > On 8 May 2014, at 15:02, Daniel Iwan wrote: > > Hi > > > > I got following exception with riak Java client 1.1.3, Riak cluster 1.3.1 >

Re:

2014-05-08 Thread Daniel Iwan
ading to the 1.1.4 client release and see if the problem persists. > > Thanks, > - Roach > > On Thu, May 8, 2014 at 8:02 AM, Daniel Iwan wrote: > > Hi > > > > I got following exception with riak Java client 1.1.3, Riak cluster 1.3.1 > > I don't see any

Riak client returnTerm and regexp

2015-02-04 Thread Daniel Iwan
I watched the Ricon2014 video from Martin @NHS. Before the end of his talk he briefly mentions the returnTerm option and also something about regular expression matching (2i?) https://www.youtube.com/watch?v=5Plsj6Zl-kM http://basho.github.io/riak-java-client/1.4.4/com/basho/riak/pbc/IndexRequest.html#r

Re: Riak client returnTerm and regexp

2015-02-05 Thread Daniel Iwan
By the look of it, it seems returnTerm is available in 1.3+ and regexp matching got merged into 2.0? Also, is there any documentation on what subset of Perl regexps is supported? Thanks Daniel

Re: Riak Corruption

2015-02-12 Thread Daniel Iwan
Also it may be worth checking if there is any 0-byte file in AAE folder. I've seen corruptions like that in the past (although not on AAE but on ring files). If you find and remove corrupted file, rebuilding AAE will be faster/cheaper. It would be good if that error showed which file could not be r

Riak 1.3.1 crashing with segfault

2015-02-17 Thread Daniel Iwan
We are experiencing crashes of beam.smp on one of the nodes in a 3-node cluster (ring 128). The distro is Ubuntu 12.04 with 16GB of memory (almost exclusively for Riak). = Sun Feb 15 10:02:23 UTC 2015 Erlang has closed /usr/lib/riak/lib/os_mon-2.2.9/priv/bin/memsup: Erlang has closed. Hi I've got following

Riak 1.3.1 high swap usage

2015-02-19 Thread Daniel Iwan
Hi On a 3-node cluster, Ubuntu 12.04, nodes with 8GB RAM: all nodes show 6GB taken by beam.smp, 2GB by our process. beam started swapping and is currently using 23GB of swap space. vm.swappiness is set to 1. We are using ring 128. /var/lib/riak is 37GB in size, 11GB of which is used by anti-entropy. Is there a

Re: Secondary index in riak

2015-02-19 Thread Daniel Iwan
My ideas: 1. Rewrite (read-write) object with new values for all indexes 2. Enable siblings on a bucket, write empty object with update for your index, that will create sibling. Then whenever you read object do merge of object+indexes. This may be more appropriate if you have big objects and want t
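Option 2 above (writing an empty sibling that carries only the index update) implies a read-time merge of the base object with its index-patch siblings. A minimal sketch of that merge with plain maps, standing in for what a Converter/ConflictResolver would do (class and field names are hypothetical):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class IndexMergeExample {
    // Merge a base object's indexes with "index patch" siblings. Each patch
    // overrides the entry for the index names it carries; this is the fold a
    // conflict resolver would apply before returning the merged object.
    static Map<String, String> mergeIndexes(Map<String, String> base,
                                            List<Map<String, String>> patches) {
        Map<String, String> merged = new HashMap<>(base);
        for (Map<String, String> patch : patches) merged.putAll(patch);
        return merged;
    }

    public static void main(String[] args) {
        Map<String, String> base = new HashMap<>(Map.of("status_bin", "new"));
        List<Map<String, String>> patches = List.of(Map.of("status_bin", "done"));
        System.out.println(mergeIndexes(base, patches)); // prints: {status_bin=done}
    }
}
```

Note that siblings carry no inherent order, so a real resolver needs some deterministic rule (e.g. a timestamp inside the patch) to decide which index value wins.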

Re: Riak 1.3.1 high swap usage

2015-02-19 Thread Daniel Iwan
We are using levelDB as the backend without any tuning. Also, we are aware that performance may suffer due to potentially storing some of the copies (n=3) twice on the same server. We are not so much concerned about latencies caused by that. What is worrying though is the almost unbounded growth of swap used, wh

Re: Riak 1.3.1 high swap usage

2015-02-19 Thread Daniel Iwan
I absolutely agree. That is why we've changed the vm.swappiness setting to 1, so it swaps only when absolutely necessary. I think we underestimated how much swap may be needed, but I also don't understand why it is so hungry for memory. Is there a particular activity, like 2i queries, AAE or levelDB c

Re: Riak 1.3.1 crashing with segfault

2015-02-20 Thread Daniel Iwan
Ciprian Thanks for the reply. I will check that as soon as I get access to the servers again D.

Re: Riak 1.3.1 crashing with segfault

2015-02-25 Thread Daniel Iwan
Hi I've checked all logs and there is nothing regarding memory issues. Since then I've had several Riak crashes, but it looks like other processes are failing as well Feb 2 22:05:28 node2 kernel: [20052.901884] beam.smp[1830]: segfault at 8523111 ip 08523111 sp 7f03ba821be8 error 14

Re: Riak 1.3.1 crashing with segfault

2015-02-25 Thread Daniel Iwan
I moved the /var/lib/riak folder to a RAID array. Another crash happened 20 minutes after Riak start. = = LOGGING STARTED Wed Feb 25 12:40:07 UTC 2015 = Exec: /usr/lib/riak/erts-5.9.1/bin/erlexec -boot /usr/lib/riak/releases/1.3.1/riak -embedded

Re: Riak 1.3.1 crashing with segfault

2015-02-25 Thread Daniel Iwan
Thanks Magnus I'm running memtest86 on a set of another 3 servers with identical configuration to see if I can trigger that as well. I cannot do it on the failing node at the moment since it's a remote site, but I agree it's a strong indication of a RAM module problem. As a test I moved var/lib/riak to R

Java client 1.1.4 and headOnly() in domain buckets

2015-05-11 Thread Daniel Iwan
Hi all Am I right thinking that v1.1.4 does not support headOnly() on domain buckets? During domain.fetch() line 237 in https://github.com/basho/riak-java-client/blob/1.1.4/src/main/java/com/basho/riak/client/bucket/DomainBucket.java there is no check/call headOnly() on FetchMeta object. Cod

Re: Java client 1.1.4 and headOnly() in domain buckets

2015-05-12 Thread Daniel Iwan
We are using official 1.1.4 which is the latest recommended with Riak 1.3 we have installed. Upgrade to Riak 1.4 is not possible at the moment. D.

Clarifying withoutFetch() with LevelDB and

2015-05-13 Thread Daniel Iwan
Hi I'm using a 4-node Riak cluster v1.3.1. I wanted to know a little bit more about using the withoutFetch() option with levelDB. I'm trying to write to a single key as fast as I can with n=3. I deliberately create siblings by writing with a stale vclock. I'm limiting the number of writes to 1000 pe

Re: Clarifying withoutFetch() with LevelDB and

2015-05-13 Thread Daniel Iwan
We are using Java client 1.1.4. We haven't moved to a newer version of Riak as, for the moment, we don't need any new features. Also, rolling out the new version may be complicated since we have multiple clusters. As regards object size, it's ~250-300 bytes per write. We store simple JSON stru

Re: Java client 1.1.4 and headOnly() in domain buckets

2015-05-13 Thread Daniel Iwan
Hi Alex >> It appears that the domain buckets api does not support headOnly(). That >> api was written to be a higher-level abstraction around a common usage, >> so >> it abstracted that idea of head vs object data away. I think it may be quite useful functionality anyway, to check the existen

Re: Clarifying withoutFetch() with LevelDB and

2015-05-13 Thread Daniel Iwan
Alex, Thanks for answering this one and pointing me in the right direction. I did an experiment and wrote 0 bytes instead of a JSON and got the same effect - the leveldb folder is 80-220MB in size with activity around 20MB/s written to disk, no reads from disk. The Java client reports a speed of 45 secs for 1000 en

Node join not committed

2015-10-09 Thread Daniel Iwan
Hi Our attach script failed and only issued *cluster join* but not *cluster plan* and *cluster commit*, so the node was visible as joining in member_status. Since then both parts (original cluster and new node) were taking writes, but in our configuration each node takes writes from its local proc

Re: Node join not committed

2015-10-09 Thread Daniel Iwan
Hi Jon Thanks for confirming this. We did do plan/commit and everything worked as expected, no issues whatsoever Thanks a bunch Daniel

Pending handoff when node offline

2016-01-05 Thread Daniel Iwan
Hi all Am I right thinking that when node goes offline *riak-admin transfers* will always show transfers to be done? E.g. riak-admin transfers Attempting to restart script through sudo -H -u riak [sudo] password for myuser: Nodes ['riak@10.173.240.12'] are currently down. 'riak@10.173.240.9' wai

Re: Pending handoff when node offline

2016-01-05 Thread Daniel Iwan
Magnus Thanks for confirming. We've had issues with 2i (coverage queries) during node startup where some keys potentially might not appear in results. More details here: http://riak-users.197444.n3.nabble.com/Keys-not-listed-during-vnode-transfers-td4027133.html#a4027139 We've been using

Non-standard ring size

2012-01-19 Thread Daniel Iwan GM
Hello riak users I'm trying to get my head around partitioning in Riak. A quite recent thread was very helpful: http://thread.gmane.org/gmane.comp.db.riak.user/6207/focus=6266 Let's say I install Riak on 3 nodes (initially), which will possibly grow to 10 or more. The default partition size is 64 a