Hi Prometheus,
This is a known issue, tracked here (https://issues.basho.com/790) and high
on the list for the next release.
The summary is that when data is put into Riak Search too quickly, it builds
up a backlog of files waiting to be compacted. Each file maintains its own
separate ETS table.
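A quick way to see how many ETS tables a node currently holds is from an
Erlang shell on the node (e.g. via `riak attach`); this is a generic Erlang
check, not something specific to the compaction code:

```erlang
%% Number of ETS tables currently allocated on this node; compare it
%% against the limit (1400 by default, or whatever ERL_MAX_ETS_TABLES
%% is set to).
length(ets:all()).
```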
I put ERL_MAX_ETS_TABLES in vm.args and restarted riaksearch; it is OK now.
Thanks.
On Oct 31, 2010, at 11:38 AM, Ulf Wiger wrote:
It looks as if it is the ets:new/2 operation that fails due to system_limit.
Like with the number of concurrently running processes, the system limit
is by default pretty low (1400 tables), but can be increased by setting the
OS environment variable ERL_MAX_ETS_TABLES.
Now, I have no idea if Riak …
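For example, the limit can be raised either in the OS environment before the
node starts or, for Riak, with a -env line in etc/vm.args. The value 8192
below is only an illustration, not a tuned recommendation:

```
## OS environment, before starting the node:
export ERL_MAX_ETS_TABLES=8192

## or in Riak's etc/vm.args:
-env ERL_MAX_ETS_TABLES 8192
```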
ulimit = unlimited
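For reference, the relevant ulimit here is the open-file-descriptor limit,
which can be checked and (where the hard limit allows) raised per shell; the
65536 below is just an example value:

```shell
# Show the current open-file-descriptor limit for this shell.
ulimit -n

# Raise it for this shell before starting the node (example value;
# fails harmlessly if the hard limit is lower).
ulimit -n 65536 2>/dev/null || true
```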
On Oct 31, 2010, at 10:30 AM, Neville Burnell wrote:
Have you increased your ulimit?
On 31 October 2010 19:23, Prometheus WillSurvive <prometheus.willsurv...@gmail.com> wrote:
Hi,
We started a batch index test (Wikipedia); when we reached around 600K docs,
the system gave the error below. Any idea?
We cannot index any more docs in this index.
=ERROR REPORT==== 31-Oct-2010::10:22:42 ===
** Too many db tables **
DEBUG: riak_search_dir_indexer:197 - "{ error , Type , Er