Register Jackson Module with Riak-Java

2013-03-06 Thread Justin Long
Hello Riak-Java users.

Just trying to figure out how to register the Scala Jackson module with
Jackson so that when Riak converts objects to be saved in the database it
will pick up the module. In the README at
https://github.com/FasterXML/jackson-module-scala it looks rather simple, but I
have a feeling there's more to it when going through Riak-Java.
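
For reference, outside of Riak the registration from the jackson-module-scala
README is just the following. The comment about a client-side hook is my
assumption, not something I've confirmed in the Riak-Java API:

import com.fasterxml.jackson.databind.ObjectMapper
import com.fasterxml.jackson.module.scala.DefaultScalaModule

val mapper = new ObjectMapper()
mapper.registerModule(DefaultScalaModule)

// Assumption: if the riak-java-client version in use exposes a hook on its
// JSON converter for registering extra Jackson modules, the same module
// instance could be handed to it; otherwise a custom Converter built around
// this mapper would presumably be needed.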

Anyone have any experience with this?

Thanks!

p.s.: take a look at
http://stackoverflow.com/questions/15236140/jackson-cannot-map-scala-list-in-riak-client
if you need some background.


Runaway "Failed to compact" errors

2013-11-24 Thread Justin Long
Hello everyone,

Our Riak cluster has failed after what seems to be an issue in LevelDB. We noticed 
that a process running a segment compaction has started throwing errors non-stop. 
I opened a Stack Overflow question here where you will find a lot of the log data: 
http://stackoverflow.com/questions/20172878/riak-is-throwing-failed-to-compact-like-crazy

Here is exactly what we're getting in console.log:

2013-11-24 10:38:46.803 [info] 
<0.19760.0>@riak_core_handoff_receiver:process_message:99 Receiving handoff 
data for partition 
riak_search_vnode:1050454301831586472458898473514828420377701515264
2013-11-24 10:38:47.239 [info] 
<0.19760.0>@riak_core_handoff_receiver:handle_info:69 Handoff receiver for 
partition 1050454301831586472458898473514828420377701515264 exited after 
processing 5409 objects
2013-11-24 10:38:49.743 [error] emulator Error in process <0.19767.0> on node 
'riak@192.168.3.3' with exit value: {badarg,[{erlang,binary_to_term,[<<260 
bytes>>],[]},{mi_segment,iterate_all_bytes,2,[{file,"src/mi_segment.erl"},{line,167}]},{mi_server,'-group_iterator/2-fun-0-',2,[{file,"src/mi_server.erl"},{line,722}]},{mi_server,'-group_iterator/2-fun-1-'...


2013-11-24 10:38:49.743 [error] <0.580.0>@mi_scheduler:worker_loop:141 Failed 
to compact <0.11868.0>: 
{badarg,[{erlang,binary_to_term,[<<131,104,3,109,0,0,0,25,99,111,108,108,101,99,116,111,114,45,99,111,108,108,101,99,116,45,116,119,105,116,116,101,114,109,0,0,0,14,100,97,116,97,95,102,111,108,108,111,119,101,114,115,109,0,0,128,203,123,34,105,100,115,34,58,91,49,54,50,51,53,50,50,50,50,51,44,49,55,51,55,51,52,52,50,44,49,50,56,51,52,52,56,55,51,57,44,51,57,56,56,57,56,50,51,52,44,49,52,52,55,51,54,54,57,53,48,44,53,48,48,55,53,57,48,55,44,52,51,56,49,55,53,52,56,53,44,49,51,54,53,49,50,49,52,50,44,52,54,50,52,52,54,56,51,44,49,48,55,57,56,55,49,50,48,48,44,55,55,48,56,51,54,55,57,44,50,56,51,56,51,57,55,56,44,49,57,50,48,55,50,55,51,48,44,51,57,54,57,56,56,57,56,55,44,50,56,48,50,54,51,56,48,52,44,53,57,50,56,56,53,50,51,48,44,49,50,52,55,53,56,57,53,55,56,44,49,55,51,56,56,51,53,52,50,44,49,53,56,57,54,51,50,50,50,48,44,53,53,49,51>>],[]},{mi_segment,iterate_all_bytes,2,[{file,"src/mi_segment.erl"},{line,167}]},{mi_server,'-group_iterator/2-fun-0-',2,[{file,"src/mi_server.erl"},{line,722}]},{mi_server,'-group_iterator/2-fun-1-',2,[{file,"src/mi_server.erl"},{line,725}]},{mi_server,'-group_iterator/2-fun-0-',2,[{file,"src/mi_server.erl"},{line,722}]},{mi_server,'-group_iterator/2-fun-1-',2,[{file,"src/mi_server.erl"},{line,725}]},{mi_server,'-group_iterator/2-fun-0-',2,[{file,"src/mi_server.erl"},{line,722}]},{mi_segment_writer,from_iterator,4,[{file,"src/mi_segment_writer.erl"},{line,110}]}]}






The log is just full of them. We need to get this cluster back up ASAP, so any 
help is much appreciated!

- Justin


Re: Runaway "Failed to compact" errors

2013-11-24 Thread Justin Long
5487351808},{ho_stats,{1385,326398,498434},undefined,14225,2426123},gen_tcp,50226},{{<<"collector-collect-instagram-cache">>,{<<"data_follows">>,<<"who">>}},[{<<"3700758">>,[{p,[561]}],1383759429413536},{<<"368835984">>,[{p,[297,303]}],1383611963556763},{<<"368835984">>,[{p,[298,304]}],1383756713657753},{<<"31715058">>,[{p,[325]}],1383611996352193}]},4}}}
 
[{riak_core_handoff_sender,start_fold,5,[{file,"src/riak_core_handoff_sender.erl"},{line,161}]}]

--

I am aware that values in that bucket might be larger than most of our other 
objects. Not sure if that would cause the issue, though. Thanks for your help!

J



On Nov 24, 2013, at 12:51 PM, Richard Shaw  wrote:

> Hi Justin,
> 
> Please can you run this command to look for compaction errors in the leveldb 
> logs on the node with the crash log entries
> 
> grep -R "Compaction error" /var/lib/riak/leveldb/*/LOG
> 
> Where the path matches your path to the leveldb dir
> 
> Thanks
> 
> Richard
> 
> 
> 
> On 24 November 2013 10:45, Justin Long  wrote:
> Hello everyone,
> 
> Our Riak cluster has failed after what seems to be an issue in LevelDB. 
> Noticed that a process running a segment compact has started to throw errors 
> non-stop. I opened a Stack Overflow question here where you will find a lot 
> of log data: 
> http://stackoverflow.com/questions/20172878/riak-is-throwing-failed-to-compact-like-crazy
> 
> Here is exactly what we're getting in console.log:
> 
> 2013-11-24 10:38:46.803 [info] 
> <0.19760.0>@riak_core_handoff_receiver:process_message:99 Receiving handoff 
> data for partition 
> riak_search_vnode:1050454301831586472458898473514828420377701515264
> 2013-11-24 10:38:47.239 [info] 
> <0.19760.0>@riak_core_handoff_receiver:handle_info:69 Handoff receiver for 
> partition 1050454301831586472458898473514828420377701515264 exited after 
> processing 5409 objects
> 2013-11-24 10:38:49.743 [error] emulator Error in process <0.19767.0> on node 
> 'riak@192.168.3.3' with exit value: {badarg,[{erlang,binary_to_term,[<<260 
> bytes>>],[]},{mi_segment,iterate_all_bytes,2,[{file,"src/mi_segment.erl"},{line,167}]},{mi_server,'-group_iterator/2-fun-0-',2,[{file,"src/mi_server.erl"},{line,722}]},{mi_server,'-group_iterator/2-fun-1-'...
> 
> 
> 2013-11-24 10:38:49.743 [error] <0.580.0>@mi_scheduler:worker_loop:141 Failed 
> to compact <0.11868.0>: 
> {badarg,[{erlang,binary_to_term,[<<131,104,3,109,0,0,0,25,99,111,108,108,101,99,116,111,114,45,99,111,108,108,101,99,116,45,116,119,105,116,116,101,114,109,0,0,0,14,100,97,116,97,95,102,111,108,108,111,119,101,114,115,109,0,0,128,203,123,34,105,100,115,34,58,91,49,54,50,51,53,50,50,50,50,51,44,49,55,51,55,51,52,52,50,44,49,50,56,51,52,52,56,55,51,57,44,51,57,56,56,57,56,50,51,52,44,49,52,52,55,51,54,54,57,53,48,44,53,48,48,55,53,57,48,55,44,52,51,56,49,55,53,52,56,53,44,49,51,54,53,49,50,49,52,50,44,52,54,50,52,52,54,56,51,44,49,48,55,57,56,55,49,50,48,48,44,55,55,48,56,51,54,55,57,44,50,56,51,56,51,57,55,56,44,49,57,50,48,55,50,55,51,48,44,51,57,54,57,56,56,57,56,55,44,50,56,48,50,54,51,56,48,52,44,53,57,50,56,56,53,50,51,48,44,49,50,52,55,53,56,57,53,55,56,44,49,55,51,56,56,51,53,52,50,44,49,53,56,57,54,51,50,50,50,48,44,53,53,49,51>>],[]},{mi_segment,iterate_all_bytes,2,[{file,"src/mi_segment.erl"},{line,167}]},{mi_server,'-group_iterator/2-fun-0-',2,[{file,"src/mi_server.erl"},{line,722}]},{mi_server,'-group_iterator/2-fun-1-',2,[{file,"src/mi_server.erl"},{line,725}]},{mi_server,'-group_iterator/2-fun-0-',2,[{file,"src/mi_server.erl"},{line,722}]},{mi_server,'-group_iterator/2-fun-1-',2,[{file,"src/mi_server.erl"},{line,725}]},{mi_server,'-group_iterator/2-fun-0-',2,[{file,"src/mi_server.erl"},{line,722}]},{mi_segment_writer,from_iterator,4,[{file,"src/mi_segment_writer.erl"},{line,110}]}]}
> 
> 
> 
> 
> 
> 
> The log is just full of them. Thanks for your help! We need to get this 
> cluster back up ASAP, appreciated!
> 
> - Justin
> 
> 
> 



Re: Runaway "Failed to compact" errors

2013-11-24 Thread Justin Long
Thanks Joe. I agree that is probably the problem. I am concerned, though, since 
none of the fields of the objects I am storing in Riak should produce a key 
larger than 32kb. Here’s a sample Scala case class (used as a POJO through the 
Riak-Java-Client) that represents an object in the problem bucket:

case class InstagramCache(
  @(JsonProperty@field)("identityId")
  @(RiakKey@field)
  val identityId: String, // ID of user on social network
  
  @(JsonProperty@field)("userId")
  @(RiakIndex@field)(name = "userId")
  val userId: String, // associated user ID on platform
  
  @(JsonProperty@field)("data")
  val data: Map[String, Option[String]],
  
  @(JsonProperty@field)("updated")
  var updated: Date
  
)

The fields identityId and userId would rarely exceed 30 characters. Is Riak 
trying to index the whole object?
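
In the meantime I may add a crude guard on our side before writes. This is only a
sketch, assuming the 32k merge-index limit Joe describes below applies to the
serialized field value; the helper name is mine:

// Sketch of a pre-write check (assumption: a data value whose UTF-8 encoding
// approaches 32 KB is what produces the oversized search term).
val termLimitBytes = 32 * 1024

def oversizedFields(cache: InstagramCache): Map[String, Int] =
  cache.data.collect {
    case (field, Some(value)) if value.getBytes("UTF-8").length >= termLimitBytes =>
      field -> value.getBytes("UTF-8").length
  }

// Usage idea: log and skip (or truncate) any offending field before calling
// bucket.store(cache).execute().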

Thanks



On Nov 24, 2013, at 2:11 PM, Joe Caswell  wrote:

> Justin,
> 
> The terms being stored in merge index are too large. The maximum size for an 
> {Index, Field, Term} key is 32k bytes.
> The binary blob in your log entry represents a tuple that was 32952 bytes.
> Since merge index uses a 15-bit integer to store term size, if the 
> term_to_binary of the given key is larger than 32767, high bits are lost, 
> effectively storing (size mod 32767) bytes.
> When this data is read back, binary_to_term is unable to reconstruct the key 
> due the missing bytes, and throws a badarg exception.
> 
> Search index repair is documented here: 
> http://docs.basho.com/riak/1.4.0/cookbooks/Repairing-Search-Indexes/
> However, you would need to first modify your extractor to not produce search 
> keys larger than 32k, or the corruption issues will recur.
> 
> Joe Caswell
> 
> 
> From: Richard Shaw 
> Date: Sunday, November 24, 2013 4:25 PM
> To: Justin Long 
> Cc: riak-users 
> Subject: Re: Runaway "Failed to compact" errors
> 
> Ok thanks Justin, please can you change the vm.args on each node with the 
> following and then restart each node
> 
> -env ERL_MAX_ETS_TABLES 256000
> 
> I'd also like you to please confirm the ulimit on each server
> 
> $ riak attach
> os:cmd("ulimit -n").
> 
> If you're running Riak >=1.4 then exit with Ctrl+c then a and if you're 
> running 1.3 or older then Ctrl+d to exit
> 
> I would recommend upping the ulimit to 65536 if its not already there [0]
> 
> [0]http://docs.basho.com/riak/latest/ops/tuning/open-files-limit/
> 
> I'm going to need to sign off at this point Justin, I'll see if a colleague 
> can take over.
> 
> 
> Kind regards,
> 
> Richard
> 
> 
> 
> On 24 November 2013 21:00, Justin Long  wrote:
>> Hi Richard,
>> 
>> Result turned up empty on the failed node. Here's what is in vm.args:
>> 
>> --
>> 
>> # Name of the riak node
>> -name riak@192.168.3.3
>> 
>> ## Cookie for distributed erlang.  All nodes in the same cluster
>> ## should use the same cookie or they will not be able to communicate.
>> -setcookie riak
>> 
>> ## Heartbeat management; auto-restarts VM if it dies or becomes unresponsive
>> ## (Disabled by default..use with caution!)
>> ##-heart
>> 
>> ## Enable kernel poll and a few async threads
>> +K true
>> +A 64
>> 
>> ## Treat error_logger warnings as warnings
>> +W w
>> 
>> ## Increase number of concurrent ports/sockets
>> -env ERL_MAX_PORTS 4096
>> 
>> ## Tweak GC to run more often
>> -env ERL_FULLSWEEP_AFTER 0
>> 
>> ## Set the location of crash dumps
>> -env ERL_CRASH_DUMP /var/log/riak/erl_crash.dump
>> 
>> ## Raise the ETS table limit
>> -env ERL_MAX_ETS_TABLES 22000
>> 
>> --
>> 
>> 
>> Before I received your email, I have since isolated the node and 
>> force-removed it from the cluster. In the meantime, I brought up a new fresh 
>> node and joined it to the cluster. When Riak went to handoff some of the 
>> RiakSearch indexes here is what was popping up in console.log:
>> 
>> --
>> 
>> <0.4262.0>@merge_index_backend:async_fold_fun:116 failed to iterate the 
>> index with reason 
>> {badarg,[{erlang,binary_to_term,[<<131,104,3,109,0,0,0,25,99,111,108,108,101,99,116,111,114,45,99,111,108,108,101,99,116,45,116,119,105,116,116,101,114,109,0,0,0,14,100,97,116,97,95,102,111,108,108,111,119,101,114,115,109,0,0,128,199,123,34,105,100,115,34,58,91,49,52,52,55,51,54,54,57,53,48,44,53,48,48,55,53,57,48,55,44,52,51,56,49,55,53,52,56,53,44,49,51,54,53

Re: Runaway "Failed to compact" errors

2013-11-24 Thread Justin Long
Interesting. As I mentioned previously, all objects in the “collector” buckets 
share the same structure. Riak is trying to index a field that is actually 
inside a String: “data_followers” is a key inside the “data” map, and the value 
for that key is escaped JSON (FYI, it goes into offline processing later). And 
as you can see, there isn’t any indexing set for that field.
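
For illustration, here is roughly the shape of the record I believe triggered it
(the values below are only illustrative; just the structure matches our data):

// Hypothetical example of the stored structure. In the failing case the
// "data_followers" value is an escaped-JSON string of roughly 33 KB.
val sample = InstagramCache(
  identityId = "12345678",
  userId     = "platform-user-1",
  data       = Map(
    "data_followers" -> Some("""{"ids":[1623522223,17373442,1283448739]}""")
  ),
  updated    = new java.util.Date()
)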


On Nov 24, 2013, at 2:56 PM, Joe Caswell  wrote:

> Justin,
> 
>   The binary in the log entry below equates to:
> {<<"collector-collect-twitter">>,<<"data_followers">>,<<32897-byte string>>}
> 
>   Hope this helps.
> 
> Joe
> From: Justin Long 
> Date: Sunday, November 24, 2013 5:17 PM
> To: Joe Caswell 
> Cc: Richard Shaw , riak-users 
> Subject: Re: Runaway "Failed to compact" errors
> 
> Thanks Joe. I would agree that would probably be the problem. I am concerned 
> since none of the fields of objects I am storing in Riak would produce a key 
> larger than 32kb. Here’s a sample Scala (Java-based) POJO that represents an 
> object in the problem bucket using the Riak-Java-Client:
> 
> case class InstagramCache(
>   @(JsonProperty@field)("identityId")
>   @(RiakKey@field)
>   val identityId: String, // ID of user on social network
>   
>   @(JsonProperty@field)("userId")
>   @(RiakIndex@field)(name = "userId")
>   val userId: String, // associated user ID on platform
>   
>   @(JsonProperty@field)("data")
>   val data: Map[String, Option[String]],
>   
>   @(JsonProperty@field)("updated")
>   var updated: Date
>   
> )
> 
> The fields identityId and userId would rarely exceed 30 characters. Is Riak 
> trying to index the whole object?
> 
> Thanks
> 
> 
> 
> On Nov 24, 2013, at 2:11 PM, Joe Caswell  wrote:
> 
>> Justin,
>> 
>> The terms being stored in merge index are too large. The maximum size for an 
>> {Index, Field, Term} key is 32k bytes.
>> The binary blob in your log entry represents a tuple that was 32952 bytes.
>> Since merge index uses a 15-bit integer to store term size, if the 
>> term_to_binary of the given key is larger than 32767, high bits are lost, 
>> effectively storing (size mod 32767) bytes.
>> When this data is read back, binary_to_term is unable to reconstruct the key 
>> due the missing bytes, and throws a badarg exception.
>> 
>> Search index repair is documented here: 
>> http://docs.basho.com/riak/1.4.0/cookbooks/Repairing-Search-Indexes/
>> However, you would need to first modify your extractor to not produce 
>> search keys larger than 32k, or the corruption issues will recur.
>> 
>> Joe Caswell
>> 
>> 
>> From: Richard Shaw 
>> Date: Sunday, November 24, 2013 4:25 PM
>> To: Justin Long 
>> Cc: riak-users 
>> Subject: Re: Runaway "Failed to compact" errors



Re: Runaway "Failed to compact" errors

2013-11-24 Thread Justin Long
Update: after checking the Java-client and my bucket code I noticed that I am 
doing the following:

val bucket = DB.client.createBucket(bucketName).enableForSearch().execute()

I have a feeling that “enableForSearch” is causing each object to be indexed in 
its entirety, rather than just the explicitly annotated fields. The Javadoc at 
http://basho.github.io/riak-java-client/1.0.5/com/basho/riak/client/bucket/WriteBucket.html
shows that search=true is written for the bucket.

Would that cause the entire object to be indexed, not just the explicit fields?
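
If so, the obvious mitigation on our end would be to stop enabling the search
pre-commit hook on this bucket and rely on the explicit 2i annotations instead.
A minimal sketch of that change, assuming the same DB.client wrapper as above
and simply dropping enableForSearch:

// Sketch: create/fetch the bucket without search=true, relying on the
// @RiakIndex (2i) fields for queries. Existing corrupted index data would
// presumably still need the repair procedure Joe linked.
val bucket = DB.client.createBucket(bucketName).execute()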



On Nov 24, 2013, at 3:02 PM, Justin Long  wrote:

> Interesting, as I mentioned previously any objects in the “collector” buckets 
> all share the same structure. It is trying to index a field that is actually 
> inside a String. “data_followers” is a key inside the “data” map and the 
> value for that key is escaped JSON (it goes into offline processing FYI 
> later). And as you can see there isn’t any indexing set for that field.
> 
> 
> On Nov 24, 2013, at 2:56 PM, Joe Caswell  wrote:
> 
>> Justin,
>> 
>>   The binary in the log entry below equates to:
>> {<<"collector-collect-twitter">>,<<"data_followers">>,<<32897-byte string>>}
>> 
>>   Hope this helps.
>> 
>> Joe
>> From: Justin Long 
>> Date: Sunday, November 24, 2013 5:17 PM
>> To: Joe Caswell 
>> Cc: Richard Shaw , riak-users 
>> Subject: Re: Runaway "Failed to compact" errors
>> 
>> Thanks Joe. I would agree that would probably be the problem. I am concerned 
>> since none of the fields of objects I am storing in Riak would produce a key 
>> larger than 32kb. Here’s a sample Scala (Java-based) POJO that represents an 
>> object in the problem bucket using the Riak-Java-Client:
>> 
>> case class InstagramCache(
>>   @(JsonProperty@field)("identityId")
>>   @(RiakKey@field)
>>   val identityId: String, // ID of user on social network
>>   
>>   @(JsonProperty@field)("userId")
>>   @(RiakIndex@field)(name = "userId")
>>   val userId: String, // associated user ID on platform
>>   
>>   @(JsonProperty@field)("data")
>>   val data: Map[String, Option[String]],
>>   
>>   @(JsonProperty@field)("updated")
>>   var updated: Date
>>   
>> )
>> 
>> The fields identityId and userId would rarely exceed 30 characters. Is Riak 
>> trying to index the whole object?
>> 
>> Thanks
>> 
>> 
>> 
>> On Nov 24, 2013, at 2:11 PM, Joe Caswell  wrote:
>> 
>>> Justin,
>>> 
>>> The terms being stored in merge index are too large. The maximum size for 
>>> an {Index, Field, Term} key is 32k bytes.
>>> The binary blob in your log entry represents a tuple that was 32952 bytes.
>>> Since merge index uses a 15-bit integer to store term size, if the 
>>> term_to_binary of the given key is larger than 32767, high bits are lost, 
>>> effectively storing (size mod 32767) bytes.
>>> When this data is read back, binary_to_term is unable to reconstruct the 
>>> key due the missing bytes, and throws a badarg exception.
>>> 
>>> Search index repair is documented here: 
>>> http://docs.basho.com/riak/1.4.0/cookbooks/Repairing-Search-Indexes/
>>> However, you would need to first modify your extractor to not produce 
>>> search keys larger than 32k, or the corruption issues will recur.
>>> 
>>> Joe Caswell
>>> 
>>> 
>>> From: Richard Shaw 
>>> Date: Sunday, November 24, 2013 4:25 PM
>>> To: Justin Long 
>>> Cc: riak-users 
>>> Subject: Re: Runaway "Failed to compact" errors
> 



Anti-Entropy Sudden Exchange

2014-01-02 Thread Justin Long
Hi everyone,

We are experiencing some weird behaviour in a cluster of 3 nodes. As of this 
morning, we noticed a massive performance lag, and when checking the logs we saw 
that Active Anti-Entropy exchange had suddenly begun initiating a huge series of 
transfers. Here’s what we see in the logs:

2014-01-02 19:26:34.396 [info] 
<0.1571.1800>@riak_kv_exchange_fsm:key_exchange:206 Repaired 882 keys during 
active anti-entropy exchange of 
{228359630832953580969325755111919221821239459840,3} between 
{228359630832953580969325755111919221821239459840,'riak@192.168.3.2'} and 
{25119559391624893906625833062344003363405824,'riak@192.168.3.3'}

When we checked the logs on all 3 nodes, there were only a few process crashes 10 
hours ago and nothing between then and now; it all came on suddenly. Any idea 
whether this is normal and will resolve itself, or whether we need to intervene?

Thanks!


Re: Anti-Entropy Sudden Exchange

2014-01-02 Thread Justin Long
Ok, here’s an update on the issue; I’m really hoping someone can chime in. I 
disabled anti-entropy, restarted Riak, and then tried to run a repair on the 
indexes with [riak_kv_vnode:repair(P) || P <- Partitions]. Riak then returned 
the error:
** exception exit: {timeout,{gen_server,call,
       [{riak_core_vnode_manager,'riak@192.168.3.2'},
        {repair,riak_kv,
         {riak_kv_vnode,22835963083295358096932575511191922182123945984},
         {riak_kv_vnode,repair_filter}}]}}
   in function  gen_server:call/2 (gen_server.erl, line 180)

We then tried to use the Basho Riak Data Migrator tool to back up the most 
recent data on the node, but it takes 2 minutes to export even a simple KV 
object (and these are less than 32kb in size). Thanks for your help.

FYI, we are using Riak 1.4.6.



On Jan 2, 2014, at 11:31 AM, Justin Long  wrote:

> Hi everyone,
> 
> Experiencing some weird behaviour in a cluster of 3 nodes. As of this 
> morning, we noticed a massive performance lag and when checking the logs 
> Active Anti-Entropy Exchange suddenly began initiating a huge series of 
> transfers. Here’s what we see in the logs:
> 
> 2014-01-02 19:26:34.396 [info] 
> <0.1571.1800>@riak_kv_exchange_fsm:key_exchange:206 Repaired 882 keys during 
> active anti-entropy exchange of 
> {228359630832953580969325755111919221821239459840,3} between 
> {228359630832953580969325755111919221821239459840,'riak@192.168.3.2'} and 
> {25119559391624893906625833062344003363405824,'riak@192.168.3.3'}
> 
> When we check the logs on all 3 nodes we only had a few process crashes 10 
> hours ago, nothing in between that time and now. It all came on suddenly. Any 
> idea if this is normal and will fix itself or if we need to intervene?
> 
> Thanks!
