riak-cs

2016-02-26 Thread Gustav Spellauge

hello,

I'm fairly new to the world of Riak, but I succeeded in installing and
configuring riak_2.1.3 (5 instances on 5 machines running Debian Jessie),
riak-cs_2.1.1, riak-cs-control_1.0.2 and stanchion_2.1.1.


Everything works fine, but I'm observing a strange behavior:

When I had finished installation and configuration, I was able to obtain
a bucket list from Riak.


The command curl -s http://localhost:8098/buckets?buckets=true | json_pp

returned
{
   "buckets" : [
      "moss.users",
      "moss.buckets",
      "moss.access"
   ]
}

After I copied some files into Riak CS (using s3cmd), this was no longer
possible.


The command curl -s http://svc.softing.com:8098/buckets?buckets=true

now returns

500 Internal Server Error

The server encountered an error while processing this request:

{error,{exit,{ucs,{bad_utf8_character_code}},
       [{xmerl_ucs,from_utf8,1,[{file,"xmerl_ucs.erl"},{line,185}]},
        {mochijson2,json_encode_string,2,
            [{file,"src/mochijson2.erl"},{line,200}]},
        {mochijson2,'-json_encode_array/2-fun-0-',3,
            [{file,"src/mochijson2.erl"},{line,171}]},
        {lists,foldl,3,[{file,"lists.erl"},{line,1248}]},
        {mochijson2,json_encode_array,2,
            [{file,"src/mochijson2.erl"},{line,173}]},
        {mochijson2,'-json_encode_proplist/2-fun-0-',3,
            [{file,"src/mochijson2.erl"},{line,181}]},
        {lists,foldl,3,[{file,"lists.erl"},{line,1248}]},
        {mochijson2,json_encode_proplist,2,
            [{file,"src/mochijson2.erl"},{line,184}]}]}}

(mochiweb+webmachine web server)


Might I have made some kind of misconfiguration? What can I do to get a
bucket list from Riak? Should I be worried?


Thanks in advance for your answer, Gustav




Re: Testing On A Single Node

2016-02-26 Thread Christopher Mancini
To second Vitaly, losing data on restart is not normal. What OS are you
running it on? Is it in a VM or on bare metal? How did you install it? Did
you change any riak.conf vars? With this info, we should be able to help you
troubleshoot this issue better.

Chris

On Fri, Feb 26, 2016 at 12:39 AM Vitaly E <13vitam...@gmail.com> wrote:

> Hi Joe,
>
> A standalone node should behave similarly to a clustered node in terms of
> data persistence. Actually, I use single-node Riak setups (inside a VM) a
> lot for testing. The only differences are ring_size=8 and n_val=1, for
> performance reasons.
>
> So, it must be your Riak or VM configuration. Also make sure you don't
> delete Riak data directories by mistake when restarting your VM. Another
> reason could be that you restart the VM before all the writes have been
> flushed to the disk, for example when using Bitcask (
> docs.basho.com/riak/latest/ops/advanced/backends/bitcask/), but in that
> case you should have lost only the most recent writes.
>
> Hope this helps
>
>
> Vitaly
>
> I am trying to set up a simple test environment. This environment consists
> of a single Riak KV node which has not joined a cluster.
>
> I can populate the single un-clustered node with KV pairs just fine using
> curl.
>
> However, when I stop the node, and then restart it, all the KV pairs that
> were written before the stop are gone.
>
> Is there a way to get a non-cluster joined single node to reload KV data
> from disk, or is the only way for a restarted node to re-populate KV pairs
> from other nodes in a cluster?
>
> I realize this is an edge case, and I only use it for Virtualbox testing.
>
>
>
>
-- 
Sincerely,

Christopher Mancini
-

employee = {
purpose: solve problems with code,
phone:7164625591,
email: cmanc...@basho.com,
github:http://www.github.com/christophermancini
}


Yokozuna inconsistent search results

2016-02-26 Thread Oleksiy Krivoshey
Hi!

Riak 2.1.3

Having a stable data set (no documents deleted in months), I'm receiving
inconsistent search results from Yokozuna. For example, the first query can
return num_found: 3000 (correct); the same query repeated in the next few
seconds can return 2998, or 2995, then 3000 again. A similar inconsistency
happens when trying to retrieve data in pages (using the start/rows options):
sometimes I get the same document twice (in different pages), and sometimes
some documents are missing completely.

There are no errors or warnings in the Yokozuna logs. What should I look for
in order to debug the problem?

Thanks!


Re: riak-cs

2016-02-26 Thread Gustav Spellauge

I did some investigation:

- this problem seems to be the same as described in 
https://github.com/basho/riak/issues/415 (and others)

- using the Erlang interface, I retrieved the bucket list as:

[<<"moss.buckets">>,<<"moss.access">>,<<"moss.users">>,
<<48,111,58,16,121,107,99,149,64,231,6,8,234,204,240,88,62,111,225>>,
 <<"riak-cs-gc">>,
<<48,98,58,16,121,107,99,149,64,231,6,8,234,204,240,88,62,111,225>>]
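
For reference, a minimal sketch of the same lookup with the official Erlang
client (riakc); the PB address 127.0.0.1:8087 is an assumption, adjust to
your riak.conf:

%% Minimal sketch: list buckets over protocol buffers instead of HTTP.
%% Assumes riakc (riak-erlang-client) is on the code path and the node
%% accepts PB connections on 127.0.0.1:8087.
{ok, Pid} = riakc_pb_socket:start_link("127.0.0.1", 8087),
{ok, Buckets} = riakc_pb_socket:list_buckets(Pid),
%% ~p prints the names as raw Erlang binaries, so the non-UTF-8 bucket
%% names that break the HTTP/JSON listing come through untouched.
[io:format("~p~n", [B]) || B <- Buckets],
riakc_pb_socket:stop(Pid).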

So I guess it was not the best idea to retrieve this data through the HTTP
interface: the two long binary bucket names above are not valid UTF-8, which
is exactly the bad_utf8_character_code that mochijson2 hit while encoding the
JSON response. Other interfaces, Erlang for instance, return the names as raw
binaries and avoid the problem.


Sorry for any noise.



Re: Yokozuna inconsistent search results

2016-02-26 Thread Fred Dushin
I would check the coverage plans that are being used for the different queries,
which you can usually see in the headers of the resulting document.  When you
run a search query through Yokozuna, it will use a coverage plan from Riak Core
to find a minimal set of nodes (and partitions) to query to get a set of
results, and the coverage plan may change every few seconds.  You might be
hitting nodes that have inconsistencies or are in need of repair.  Do you have
AAE enabled?
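
One rough way to see this from a client is to repeat the same query and watch
num_found. A sketch with the Erlang client (riakc) follows; the PB address and
the index name <<"my_index">> are placeholders, not from this thread:

%% Rough sketch: repeat one Yokozuna query and print num_found each time.
%% Assumes riakc is on the code path, PB is on 127.0.0.1:8087, and a search
%% index named <<"my_index">> exists -- substitute your own values.
-module(search_repeat).
-include_lib("riakc/include/riakc.hrl").
-export([run/0]).

run() ->
    {ok, Pid} = riakc_pb_socket:start_link("127.0.0.1", 8087),
    lists:foreach(
      fun(N) ->
              {ok, R} = riakc_pb_socket:search(Pid, <<"my_index">>, <<"*:*">>),
              io:format("query ~p -> num_found = ~p~n",
                        [N, R#search_results.num_found]),
              timer:sleep(2000)  %% pause so successive coverage plans can differ
      end,
      lists:seq(1, 10)),
    riakc_pb_socket:stop(Pid).

If num_found fluctuates from run to run, different coverage plans are almost
certainly returning different result sets.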

-Fred



Solr Error Handling

2016-02-26 Thread Colin Walker
Hey again everyone,

Due to bad planning on my part, Solr is having trouble indexing some of the
fields I am sending to it; specifically, I ended up with string values in a
numeric field. Is there a way to retrieve the records from Riak that have
thrown errors in Solr?

Cheers,

Colin


Re: Yokozuna inconsistent search results

2016-02-26 Thread Oleksiy Krivoshey
Yes, AAE is enabled:

anti_entropy = active

anti_entropy.use_background_manager = on
handoff.use_background_manager = on

anti_entropy.throttle.tier1.mailbox_size = 0
anti_entropy.throttle.tier1.delay = 5ms

anti_entropy.throttle.tier2.mailbox_size = 50
anti_entropy.throttle.tier2.delay = 50ms

anti_entropy.throttle.tier3.mailbox_size = 100
anti_entropy.throttle.tier3.delay = 500ms

anti_entropy.throttle.tier4.mailbox_size = 200
anti_entropy.throttle.tier4.delay = 2000ms

anti_entropy.throttle.tier5.mailbox_size = 500
anti_entropy.throttle.tier5.delay = 5000ms

However the output of "riak-admin search aae-status" looks like this:
http://oleksiy.sirv.com/misc/search-aae.png




Re: Solr Error Handling

2016-02-26 Thread Alex Moore
Hey Colin,

Do you see any errors in your Solr log that would give you the info on the
bad entries?

Thanks,
Alex



Re: Ok, I am stumped. Losing data or riak stop

2016-02-26 Thread Christopher Mancini
Hey Joe,

I will do my best to help, but I am not the most experienced with Riak
operations. Your best bet to get to a solution as fast as possible is to
include the full users group, which I have added to the recipients of this
message.

1. Are the Riak data directories within Vagrant shared directories between
the host and guest? I have had issues with OS file system caching before
when working with web server files.

2. What version of Ubuntu are you using?

3. How did you install Riak on Ubuntu?

4. Have you tried restoring the original distribution riak.conf file to see
if the issue persists? This would help you determine whether the issue is
your config or something in your environment.

Chris

On Fri, Feb 26, 2016 at 10:55 AM Joe Olson  wrote:

>
> Chris -
>
> I cannot figure out what is going on. Here is my test case. Configuration
> file attached. I am running a single node of Riak on a Vagrant box with a
> LevelDB back end. I don't even have to bring the box down; merely stopping
> and restarting Riak ('riak stop' and 'riak start', or 'riak restart') causes
> all the keys to be lost. The Riak node is set up on a Vagrant box. But
> again, I do not have to bring the machine up or down to get this error.
>
> I've also deleted the ring info in /var/lib/riak/ring, and deleted all the
> leveldb files. In this case, the bucket type is just n_val = 1, and the
> ring size is the minimum of 8.
>
> Is it possible Riak is not flushing RAM to disk after write? The keys only
> reside in RAM?
>
> My test procedure:
>
>
> On a remote machine:
>
>
> riak01@ubuntu:/etc$ curl -i http://
> :8098/types/n1/buckets/test/keys?keys=true
> HTTP/1.1 200 OK
> Vary: Accept-Encoding
> Server: MochiWeb/1.1 WebMachine/1.10.8 (that head fake, tho)
> Date: Fri, 26 Feb 2016 13:14:59 GMT
> Content-Type: application/json
> Content-Length: 17
>
> {"keys":["test"]}
>
>
> riak01@ubuntu:/etc$
>
>
>
>
> On the single Riak node itself
>
>
> [vagrant@i-2016022519-9bb5c84f riak]$ sudo riak stop
> ok
> [vagrant@i-2016022519-9bb5c84f riak]$ sudo riak start
> [vagrant@i-2016022519-9bb5c84f riak]$ sudo riak ping
> pong
>
>
>
>
> Back to the remote machine
>
>
> riak01@ubuntu:/etc$ curl -i http://
> :8098/types/n1/buckets/test/keys?keys=true
> HTTP/1.1 200 OK
> Vary: Accept-Encoding
> Server: MochiWeb/1.1 WebMachine/1.10.8 (that head fake, tho)
> Date: Fri, 26 Feb 2016 13:16:34 GMT
> Content-Type: application/json
> Content-Length: 11
>
> {"keys":[]}
>
>
> riak01@ubuntu:/etc$
>
>
>
>
> --
Sincerely,

Christopher Mancini
-

employee = {
purpose: solve problems with code,
phone:7164625591,
email: cmanc...@basho.com,
github:http://www.github.com/christophermancini
}


Re: Ok, I am stumped. Losing data or riak stop

2016-02-26 Thread Matthew Von-Maszewski
Joe,

Are there any error messages in the leveldb LOG and/or LOG.old files?  These 
files are located within each vnode's directory, likely 
/var/lib/riak/data/leveldb/*/LOG* on your machine.

The LOG files are not to be confused with 000xxx.log files.  The lower case 
*.log files are the recovery files that should contain the keys you are 
missing.  If they are not loading properly, the LOG files should have clues.
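
If it helps, a rough sketch for scanning those LOG files from the Erlang
shell; the /var/lib/riak/data/leveldb path is the packaged default and may
differ on your box:

%% Rough sketch: print every LOG line that mentions an error.
%% "LOG*" matches LOG and LOG.old but not the lowercase 000xxx.log recovery
%% files; extend the keyword list if your LOG files use other wording.
Pattern = "/var/lib/riak/data/leveldb/*/LOG*",
lists:foreach(
  fun(File) ->
          {ok, Bin} = file:read_file(File),
          [io:format("~s: ~s~n", [File, Line])
           || Line <- binary:split(Bin, <<"\n">>, [global]),
              binary:match(Line, [<<"error">>, <<"Error">>, <<"corruption">>])
                  =/= nomatch]
  end,
  filelib:wildcard(Pattern)).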

Matthew



Re: Solr Error Handling

2016-02-26 Thread Jason Voegele

Hi Colin,

Can you tell us what version of Riak you are running? In recent versions of
Riak you can get this information by expiring the AAE trees for Yokozuna and
then noting the objects that are flagged as not being indexable. See
http://docs.basho.com/riak/latest/ops/advanced/aae/#AAE-and-Riak-Search for
some background info on AAE and Yokozuna, if needed.

Another possible option is to see if the "_yz_err" field that is
automatically created on certain error conditions might hold the information
you need. See http://docs.basho.com/riak/latest/dev/advanced/search-schema/
for info on "_yz_err".
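
If the "_yz_err" route fits, a rough sketch of pulling those records back
with the Erlang client (riakc); the PB address and the index name
<<"my_index">> are placeholders, and the _yz_err:1 query assumes the default
schema's error flag:

%% Rough sketch: find objects whose extraction failed, via the _yz_err field.
%% Assumes riakc is on the code path, PB is on 127.0.0.1:8087, and a search
%% index named <<"my_index">> exists -- substitute your own values.
{ok, Pid} = riakc_pb_socket:start_link("127.0.0.1", 8087),
{ok, Results} = riakc_pb_socket:search(Pid, <<"my_index">>, <<"_yz_err:1">>),
%% Each matching doc carries _yz_rb (bucket) and _yz_rk (key) fields that
%% point back at the stored Riak object, so the bad records can be fetched
%% and re-written with corrected values.
io:format("~p~n", [Results]),
riakc_pb_socket:stop(Pid).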

-- 
Jason Voegele
When my brain begins to reel from my literary labors, I make an occasional
cheese dip.
-- Ignatius Reilly



Re: Yokozuna inconsistent search results

2016-02-26 Thread Oleksiy Krivoshey
Regarding the coverage plan, is there any way to get it with the protocol
buffers API? The RpbSearchQueryResp message doesn't seem to contain anything
but docs:

message RpbSearchQueryResp {
  repeated RpbSearchDoc docs      = 1;  // Result documents
  optional float        max_score = 2;  // Maximum score
  optional uint32       num_found = 3;  // Number of results
}




Re: Ok, I am stumped. Losing data or riak stop

2016-02-26 Thread Joe Olson


Negative. 

I have ring size set to 8, leveldb split across two sets of drives ("fast" and 
"slow", but meaningless on the test Vagrant box...just two separate 
directories). I checked all of the ../leveldb/* directories. All LOG files are 
identical, and no errors in any of them. 

I will try to build another Vagrant machine with the default riak.conf and see 
if I can get this to repeat. It is almost as if the KV pairs are not persisting 
to disk at all. 
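
A quick sanity check from the Riak console (riak attach) would be to confirm
which backend the node actually booted with; the expected value below assumes
a leveldb setup:

%% Sketch: check the storage backend the running node is using.
application:get_env(riak_kv, storage_backend).
%% => {ok,riak_kv_eleveldb_backend} is what a leveldb setup should show; a
%% memory backend here would explain keys vanishing on restart.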



From: "Matthew Von-Maszewski"  
To: "Joe Olson"  
Cc: "riak-users" , "cmancini"  
Sent: Friday, February 26, 2016 10:12:15 AM 
Subject: Re: Ok, I am stumped. Losing data or riak stop 

Joe, 

Are there any error messages in the leveldb LOG and/or LOG.old files? These 
files are located within each vnode's directory, likely 
/var/lib/riak/data/leveldb/*/LOG* on your machine. 

The LOG files are not to be confused with 000xxx.log files. The lower case 
*.log files are the recovery files that should contain the keys you are 
missing. If they are not loading properly, the LOG files should have clues. 

Matthew 




On Feb 26, 2016, at 11:04 AM, Christopher Mancini < cmanc...@basho.com > wrote: 

Hey Joe, 

I will do my best to help, but I am not the most experienced with Riak 
operations. Your best bet to get to a solution as fast as possible is to 
include the full users group, which I have added to the recipients of this 
message. 

1. Are the Riak data directories within Vagrant shared directories between the 
host and guest? I have had issues with OS file system caching before when 
working with web server files. 

2. What version of Ubuntu are you using? 

3. How did you install Riak on Ubuntu? 

4. Have you tried restoring the original distribution riak.conf file and seen 
if the issue persists? This would help you determine if the issue is your 
config or something with your environment. 

Chris 

On Fri, Feb 26, 2016 at 10:55 AM Joe Olson < technol...@nododos.com > wrote: 

BQ_BEGIN


Chris - 

I cannot figure out what is going on. Here is my test case. Configuration file 
attached. I am running a single node of Riak on a vagrant box with a level DB 
back end. I don't even have to bring the box down, merely stopping and 
restarting riak '(riak stop' and 'riak start' or 'risk restart) causes all the 
keys to be lost. The riak node is set up on a Vagrant box. But againI do 
not have to bring the machine up or down to get this error. 

I've also deleted the ring info in /var/lib/riak/ring, and deleted all the 
leveldb files. In this case, the bucket type is just n_val = 1, and the ring 
size is the minimum of 8. 

Is it possible Riak is not flushing RAM to disk after write? The keys only 
reside in RAM? 

My test procedure: 

On a remote machine= 

riak01@ubuntu:/etc$ curl -i http:// 
:8098/types/n1/buckets/test/keys?keys=true 
HTTP/1.1 200 OK 
Vary: Accept-Encoding 
Server: MochiWeb/1.1 WebMachine/1.10.8 (that head fake, tho) 
Date: Fri, 26 Feb 2016 13:14:59 GMT 
Content-Type: application/json 
Content-Length: 17 

{"keys":["test"]} 

riak01@ubuntu:/etc$ 



On the single Riak node itself 

[vagrant@i- 2016022519 -9bb5c84f riak]$ sudo riak stop 
ok 
[vagrant@i- 2016022519 -9bb5c84f riak]$ sudo riak start 
[vagrant@i- 2016022519 -9bb5c84f riak]$ sudo riak ping 
pong 



Back to the remote machine 

riak01@ubuntu:/etc$ curl -i http:// 
:8098/types/n1/buckets/test/keys?keys=true 
HTTP/1.1 200 OK 
Vary: Accept-Encoding 
Server: MochiWeb/1.1 WebMachine/1.10.8 (that head fake, tho) 
Date: Fri, 26 Feb 2016 13:16:34 GMT 
Content-Type: application/json 
Content-Length: 11 

{"keys":[]} 

riak01@ubuntu:/etc$ 






-- 
Sincerely, 

Christopher Mancini 
- 

employee = { 
purpose: solve problems with code, 
phone: 7164625591, 
email: cmanc...@basho.com , 
github: http://www.github.com/christophermancini 
} 
___ 
riak-users mailing list 
riak-users@lists.basho.com 
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com 

BQ_END



___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Solr Error Handling

2016-02-26 Thread Colin Walker
Thanks for the quick response! I am using 2.1.3 and will check out that
tutorial. I can see everything in the logs but want to repair the indexes
programmatically. It sounds like Jason's solution is what I'm looking for.

Cheers,

Colin


Re: Ok, I am stumped. Losing data or riak stop

2016-02-26 Thread Matthew Von-Maszewski
Joe,

If the sample data is not confidential, how about creating a tar file of the
entire leveldb data directory and either emailing it to me directly or posting
it somewhere I can download it?  No need to copy the entire mailing list on
the file or the download location.

Matthew



Re: Ok, I am stumped. Losing data or riak stop

2016-02-26 Thread Matthew Von-Maszewski
What I failed to say was: make the copy after you populate and stop, but
before you attempt to start Riak again.

Matthew


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com