The nodes have 8G - so well more than the recommended value.
The configuration was at the default of 1G - I have now changed it to 2G.
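For anyone hitting the same limit, this is the setting I mean - a sketch assuming Riak 2.0's riak.conf; the exact default flags may differ per install:

    ## JVM options passed to the Solr process that Yokozuna manages.
    ## The shipped default uses a 1G heap (-Xms1g -Xmx1g); bumped to 2G here.
    search.solr.jvm_options = -d64 -Xms2g -Xmx2g -XX:+UseStringCache -XX:+UseCompressedOops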
Chaim Solomon
On Mon, Aug 11, 2014 at 6:14 PM, Eric Redmond wrote:
If Solr is stumbling over bad data, your node's solr.log should be filled up.
If Yokozuna is stumbling over bad data that it's trying to send Solr in a loop,
the console.log should be full. If Yokozuna is going ahead and indexing bad
values (such as unparsable JSON), it will go ahead and index a
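A quick way to eyeball both logs - a sketch assuming a default package install that puts logs under /var/log/riak (paths may differ on your setup):

    # Solr-side indexing errors:
    tail -n 100 /var/log/riak/solr.log
    # Yokozuna-side noise in the Riak console log:
    grep -i yz /var/log/riak/console.log | tail -n 50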
Hi,
I don't think that it is a resource issue now.
After removing the data, the other nodes have had low load and are handling the
workload just fine.
And the Java process - when it crashed - was really dead; on shutting down
Riak it stayed around and needed a kill -9 to go away.
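In case it helps, this is roughly what I had to do - assuming the orphaned Solr JVM is the only Java process on the box:

    # Find the Solr JVM still running after 'riak stop':
    pgrep -fl java
    # TERM did nothing; only SIGKILL made it exit:
    kill -9 <pid>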
I don't think the disks a
Chaim,
Some comments inline:
On Mon, Aug 11, 2014 at 4:14 AM, Chaim Solomon wrote:
Hi,
I've been running into an issue with the yz search acting up.
I've been getting a lot of these:
2014-08-11 06:45:22.005 [error] <0.913.0>@yz_kv:index:206 failed to index object {<<"bucketname">>,<<"123">>} with error {"Failed to index docs",{error,req_timedout}} because [{yz_solr,index,3,[{f