> From: n...@photonhost.com [mailto:n...@photonhost.com] On Behalf Of Nikolay
> Mihaylov
> Sent: Monday, January 22, 2018 11:47 AM
> To: user@cassandra.apache.org
> Subject: Re: Too many open files
>
> You can increase system open files,
> also if you compact, open files
a global session object or to create it and shut it down
for every request?
…connection in the calculation (everything
> in *nix is a file). If it makes you feel better my laptop
> has 40k open files for Chrome.
Hi,
I keep getting a "Last error: Too many open files" followed by a list of node
IPs.
The output of "lsof -n|grep java|wc -l" is about 674970 on each node.
What is a normal number of open files?
Thank you.
Many connections?
郝加来
From: Jason Lewis
Date: 2015-11-07 10:38
To: user@cassandra.apache.org
Subject: Re: Too many open files Cassandra 2.1.11.872
cat /proc/5980/limits
Limit                     Soft Limit           Hard Limit           Units
Max cpu time              unlimited            …
On Fri, Nov 6, 2015 at 12:49 PM, Bryan Cheng wrote:
Is your compaction progressing as expected? If not, this may cause an
excessive number of tiny db files. Had a node refuse to start recently
because of this, had to temporarily remove limits on that process.
On Fri, Nov 6, 2015 at 10:09 AM, Jason Lewis wrote:
I'm getting too many open files errors and I'm wondering what the
cause may be.
lsof -n | grep java shows 1.4M files
~90k are inodes
~70k are pipes
~500k are cassandra services in /usr
~700K are the data files.
What might be causing so many files to be open?
jas
Yes, that was the problem—I actually knew better, but had briefly overlooked
this when I was doing some refactoring. I am not the OP (although he
himself realized his mistake).
If you follow the thread, I was explaining that the Datastax Java driver
allowed me to basically open a signific…
It really doesn't need to be this complicated. You only need 1
session per application. It's thread safe and manages the connection
pool for you.
http://www.datastax.com/drivers/java/2.0/com/datastax/driver/core/Session.html
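To make that concrete, here is a minimal sketch of the pattern for the 2.0 Java
driver linked above (the contact point and keyspace name are just placeholders):
build one Cluster and one Session at startup, hand the same Session to every
request thread, and close both only at application shutdown.

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

public final class CassandraConnection {
    // One Cluster and one Session for the whole application; the Session is
    // thread-safe and manages the connection pool internally.
    private static final Cluster CLUSTER =
            Cluster.builder().addContactPoint("127.0.0.1").build();
    private static final Session SESSION = CLUSTER.connect("my_keyspace");

    private CassandraConnection() {}

    public static Session session() {
        return SESSION;   // reuse everywhere; never open a session per request
    }

    public static void shutdown() {
        SESSION.close();  // call once, when the application stops
        CLUSTER.close();
    }
}

Request handlers then call CassandraConnection.session().execute(...) rather
than creating sessions of their own.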
On Sat, Aug 9, 2014 at 1:29 PM, Kevin Burton wrote:
Another idea to detect this is when the number of open sessions exceeds the
number of threads.
On Aug 9, 2014 10:59 AM, "Andrew" wrote:
…know Linux opens a FD for each connection received, and
honestly I still don't know much about the details of this. When I got a
"too many open files" error it took a good while to think about checking
the connections.
I think the documentation could point out this fact; it would help other…
I just had a generator that (in the incorrect way) had a cluster as a member
variable, and would call .connect() repeatedly. I _thought_, incorrectly, that
the Session was thread unsafe, and so I should request a separate Session each
time—obviously wrong in hindsight.
There was no special lo…
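For contrast, a rough sketch of the anti-pattern being described here (the
class and method names are invented for illustration): the cluster is kept as
a member and connect() is called per request, so every call builds a new
Session with its own connection pool, and the sockets and file descriptors
pile up if those sessions are never closed.

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

class SessionGenerator {
    private final Cluster cluster =
            Cluster.builder().addContactPoint("127.0.0.1").build();

    // Anti-pattern: each call opens another Session (another connection pool,
    // i.e. more sockets and file descriptors) instead of reusing a single one.
    Session getSession() {
        return cluster.connect();
    }
}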
Tyler,
I’ll see if I can reproduce this on a local instance, but just in case, the
error was basically—instead of storing the session in my connection factory, I
stored a cluster and called “connect” each time I requested a Session. I had
defined a max/min number of connections for the connect
From: Marcelo Elias Del Valle
Sent: Saturday, August 9, 2014 12:41 AM
To: user@cassandra.apache.org
Subject: Re: too many open files
Indeed, that was my mistake, that was exactly what we were doing in the code.
[]s
2014-08-09 0:56 GMT-03:00 Brian Zhang:
For cassandra driver, session is just…
On Fri, Aug 8, 2014 at 5:52 PM, Redmumba wrote:
> Just to chime in, I also ran into this issue when I was migrating to the
> Datastax client. Instead of reusing the session, I was opening a new
> session each time. For some reason, even though I was still closing the
> session on the client side,
I am not sure what I could do to solve the problem.
>>
>> Any hint about how to solve it?
>>
>> My client is written in python and uses Cassandra Python Driver. Here are
>> the exceptions I am having in the client:
>> [s1log] 2014-08-08 12:16:09,631 - cassandra.po
…distribution ups the file handle limit to 10. That number's hard to exceed.
Hi,
I am using Cassandra 2.0.9 running on Debian Wheezy, and I am having "too
many open files" exceptions when I try to perform a large number of
operations in my 10 node cluster.
I saw the documentation
http://www.datastax.com/documentation/cassandra/2.0/cassandra/troubleshooting/trblshootTooManyFi…
We are running ElasticMapReduce Jobs from Amazon on a 25-node Cassandra
cluster (with VNodes). Since we have increased the size of the cluster we
are facing a "too many open files" (due to sockets) exception when creating
the splits. Does anyone have an idea?
Thanks,
Here is the stacktrace:
14…
Sent: Thursday, November 07, 2013 4:22 AM
To: user@cassandra.apache.org
Subject: RE: Getting into Too many open files issues
Hi Murthy,
32768 is a bit low (I know datastax docs recommend this). But our production
env is now running on 1kk, or you can even put it on unlimited.
Pieter
From: Murthy Chelankuri
Subject: Getting into Too many open files issues
Thanks Pieter for the quick reply.
I have downloaded the tarball and have changed limits.conf as per the
documentation, like below:
* soft nofile 32768
* hard nofile 32768
root soft nofile 32768
root hard nofile 32768
* soft memlock unlimited
* hard memlock unlimited
However, with the 2.0.x I had to raise it to 1 000 000 because 100 000 was
> too low.
>
> Kind regards,
> Pieter Callewaert
>
> From: Murthy Chelankuri [mailto:kmurt...@gmail.com]
> Sent: Thursday, 7 November 2013 12:15
> To: user@cassandra…
I have been experimenting with the latest Cassandra version for storing huge
data in our application.
Writes are doing good, but when it comes to reads I have observed that
Cassandra is getting into too many open files issues. When I check the logs it
is not able to open the Cassandra data files any more because of the file
descriptor limits.
Investigated a bit more:
- I can reproduce it; it happened already on several nodes when I do some
stress testing (5 selects spread over multiple threads).
- Unexpected exception in the selector loop. Seems not related to the
"Too many open files", it just happens.
- It's not socket related.
- Using Oracle Java(TM) SE Ru…
Hi,
I've noticed some nodes in our cluster are dying after some period of time.
WARN [New I/O server boss #17] 2013-10-29 12:22:20,725 Slf4JLogger.java (line
76) Failed to accept a connection.
java.io.IOException: Too many open files
at sun.nio.ch.ServerSocketChannelImpl.accept0(N
Got this error:
WARN [Thread-8] 2013-10-29 02:58:24,565 CustomTThreadPoolServer.java
(line 122) Transport error occurred during acceptance of message.
org.apache.thrift.transport.TTransportException:
java.net.SocketException: Too many open files
        at …
I believe "too many open files" really means too many open file descriptors, so
you may want to check the number of open sockets as well to see if you hit the
open file descriptor limit. Sockets open a descriptor and count toward the
limit, I believe… I am quite rusty in this and this is from my bad memory.
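If you want to watch this from inside a JVM (client or server), something along
these lines reports the process-wide descriptor count, sockets included; it is
only a sketch and relies on the HotSpot-specific com.sun.management API, so it
will not work on every JVM.

import java.lang.management.ManagementFactory;
import com.sun.management.UnixOperatingSystemMXBean;

public class FdUsage {
    public static void main(String[] args) {
        Object os = ManagementFactory.getOperatingSystemMXBean();
        if (os instanceof UnixOperatingSystemMXBean) {
            UnixOperatingSystemMXBean unix = (UnixOperatingSystemMXBean) os;
            // Open descriptors cover data files, sockets, pipes, and so on.
            System.out.println("open fds: " + unix.getOpenFileDescriptorCount()
                    + " / max: " + unix.getMaxFileDescriptorCount());
        }
    }
}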
each of my 6 CFs.
ERROR [ReadStage:62384] 2013-07-14 18:04:26,062
AbstractCassandraDaemon.java (line 135) Exception in thread
Thread[ReadStage:62384,5,main]
java.io.IOError: java.io.FileNotFoundException:
/tmp_vol/cassandra/data/dev_a/portfoliodao/dev_a-portfoliodao-hf-166-Data.db
(Too many
It doesn't tell you anything if the file name ends with "ic-###", except
pointing out the SSTable version it uses ("ic" in this case).
Files related to a secondary index contain something like this in the
filename: -., while filenames in "regular" CFs do not contain
any dots except the one just before the file exte…
Also, looking through the log, it appears a lot of the files end with ic-
which I assume is associated with a secondary index I have on the table. Are
secondary indexes really expensive from a file descriptor standpoint? That
particular table uses the default compaction scheme...
On Jul 1
I have one table that is using leveled. It was set to 10MB, I will try
changing it to 256MB. Is there a good way to merge the existing sstables?
On Jul 14, 2013, at 5:32 PM, Jonathan Haddad wrote:
Are you using leveled compaction? If so, what do you have the file size
set at? If you're using the defaults, you'll have a ton of really small
files. I believe Albert Tobey recommended using 256MB for the
table sstable_size_in_mb to avoid this problem.
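For a table defined through CQL, the change being suggested would look roughly
like the statement below, shown here executed through the Java driver (the
keyspace and table names are placeholders); it only affects sstables written
afterwards, with existing small files folded in as compaction rewrites them.

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

public class RaiseLcsSstableSize {
    public static void main(String[] args) {
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect()) {
            // Raise the per-sstable target for a leveled-compaction table from the
            // old 10MB default to 256MB, so far fewer (but larger) files stay open.
            session.execute("ALTER TABLE my_keyspace.my_table WITH compaction = "
                    + "{'class': 'LeveledCompactionStrategy', 'sstable_size_in_mb': 256}");
        }
    }
}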
On Sun, Jul 14, 2013 at 5:10 PM, Paul In
I'm running into a problem where instances of my cluster are hitting over 450K
open files. Is this normal for a 4 node 1.2.6 cluster with replication factor
of 3 and about 50GB of data on each node? I can push the file descriptor limit
up, but I plan on having a much larger load so I'm wonderi
[mailto:jeremy.hanna1...@gmail.com]
Sent: donderdag 27 juni 2013 15:36
To: user@cassandra.apache.org
Subject: Re: Too many open files and stopped compaction with many pending
compaction tasks
Are you on SSDs?
On 27 Jun 2013, at 14:24, "Desimpel, Ignace" wrote:
> On a test with 3 cass
…(via jmx).
compaction_throughput_mb_per_sec is 0.
Concurrent_compactors is 3.
multithreaded_compaction = false.
No other load on these machines.
And when I start querying (using thrift), I get a 'too many open files' error
on the machine with pending compaction tasks.
Limits.conf setting for nofile is 65536.
Using 'lsof' and 'wc -l' I get a count of 59577 files for Cassandra.
Total count of keyspace files on disk: 20464.
This bug is fixed in 1.1.5
Andrey
On Thu, Dec 20, 2012 at 12:01 AM, santi kumar wrote:
While running the nodetool repair, we are running into
FileNotFoundException with too many open files error. We increased the
ulimit value to 32768, and still we have seen this issue.
The number of files in the data directory is around 29500+.
If we further increase the limit of ulimit, would it…
…server which hit a wall yesterday:

ERROR [CompactionExecutor:2918] 2012-01-12 20:37:06,327
AbstractCassandraDaemon.java (line 133) Fatal exception in thread
Thread[CompactionExecutor:2918,1,main] java.io.IOError:
java.io.FileNotFoundException:
/mnt/ebs/data/rslog_production/req_word_idx-hc-453661-Data.db (Too many
open files in system)

After that it stopped working and just sat there with this error
(understandable). I did an lsof and saw that it had 98567 open files,
yikes! An ls in the data directory shows 234011 files. After restarting
it…
> What does the following error mean? One of my cassandra servers prints this
> error, and nodetool shows the state of the server is down. Netstat result
> shows the socket number is very few.
>
> WARN [main] 2011-07-27 16:14:04,872 CustomTThreadPoolServer.java (line 104)
> Transport error occurred during acceptance of message.
> org.apache.thrift.transport.TTransportException: java.net.SocketException:
> Too many open files
>         at org.apache.thrift.transport.TServerSocket.acceptImpl(TServerSocket.java:124)
>         at …
The operating system enforced limits have been hit, so Cassandra is
unable to create additional file descr…
If you are using Linux, especially Ubuntu, check the linked document
below. This is my favorite: "Using sudo has side effects in terms of
open file limits. On Ubuntu they’ll be reset to 1024, no matter what’s
set in /etc/security/limits.conf"
http://wiki.basho.com/Open-Files-Limit.html
/Attila
I'm guessing you've seen this already?
http://www.datastax.com/docs/0.8/troubleshooting/index#java-reports-an-error-saying-there-are-too-many-open-files
Check out the # of file descriptors opened with the "lsof -n | grep java"
command.
On Tue, Jul 19, 2011 at 8:30 AM, cber
Hi all.
In production we want to run nodetool repair but each time we do it we get the
too many open files error.
We've increased the number of available FDs for Cassandra to 8192 but still
we get the same error after a few seconds.
Should I increase it more?
WARN [Thread-7] 2011-07-19
-----Original Message-----
From: Jonathan Ellis [mailto:jbel...@gmail.com]
Sent: Friday, 1 April 2011 06:07
To: user@cassandra.apache.org
Cc: Roland Gude; Juergen Link; Johannes Hoerle
Subject: Re: too many open files - maybe a fd leak in indexslicequeries
Index queries (ColumnFamilyStore.scan) don't do any low-level i/o
themselves, they go through CFS.getColumnFamily, which is what normal
row fetches also go through. So if there is a leak there it's
unlikely to be specific to indexes.
What is your open-file limit (remember that sockets count towar
I experience something that looks exactly like
https://issues.apache.org/jira/browse/CASSANDRA-1178
on cassandra 0.7.3 when using index slice queries (lots of them),
crashing multiple nodes and rendering the cluster useless. But I have no clue
where to look to see if index queries still leak fds.
Does any…
…my hector client to insert 5.000.000 rows but after a couple of
hours, the following Exception occurs:

WARN [main] 2010-12-15 16:38:53,335 CustomTThreadPoolServer.java (line 104)
Transport error occurred during acceptance of message.
org.apache.thrift.transport.TTransportException: java.net.SocketException:
Too many open files
        at org.apache.thrift.transport.TServerSocket.acceptImpl(TServerSocket.java:124)
        at org.apache.cassandra.thrift.TCustomServerSocket.acceptImpl(TCustomServerSocket.java:67)
        at org.a…
s to "unlimted".
> Now, I get exactly the same exception after 3.50 rows :
>
> *CustomTThreadPoolServer.java (line 104) Transport error occurred during
> acceptance of message.*
> *org.apache.thrift.transport.TTransportException:
> java.net.SocketException: Too many
the amount of the allowed file descriptors to "unlimted".
> Now, I get exactly the same exception after 3.50 rows :
>
> *CustomTThreadPoolServer.java (line 104) Transport error occurred during
> acceptance of message.*
> *org.apache.thrift.transport.TTransportException:
>
What worries me is this "/ by zero" exception when I try to restart cassandra!
At least, I want to back up the 3.50 rows to continue my insertion afterwards;
is there a way to do this?

Exception encountered during startup.
java.lang.ArithmeticException: / b…
http://www.riptano.com/docs/0.6/troubleshooting/index#java-reports-an-error-saying-there-are-too-many-open-files
On Wed, Dec 15, 2010 at 11:13 AM, Amin Sakka, Novapost <
amin.sa...@novapost.fr> wrote:
> Hello,
> I'm using cassandra 0.7.0 rc1, a single node configura…
…6, 2010 at 2:05 PM, Aaron Morton <aa...@thelastpickle.com> wrote:
> Under 0.7.0 beta1 am seeing cassandra run out of file handles...
> Caused by: java.io.FileNotFoundException:
> /local1/junkbox/cassandra/data/junkbox.wetafx.co.nz/ObjectIndex-e-31-Index.db
> (Too many open files)
>         at java.io.RandomAccessFile.open(Native Method)
>         at java.io.RandomAccessFile.<init>(RandomAccessFile.java:212)
>         at java.io.RandomAccessFile.<init>(RandomAccess…
Each of my top-level functions was allocating a Hector client connection at
the top, and releasing it when returning. The problem arose when a top-level
function had to call another top-level function, which led to the same
thread allocating two connections. Hector was not releasing one of them eve
do you mean that you don't release the connection back to the pool?
On 2010 7 14 20:51, "Jorge Barrios" wrote:
Thomas, I had a similar problem a few weeks back. I changed my code to make
sure that each thread only creates and uses one Hector connection. It seems
that client sockets are not being
Peter Schuller wrote:
> [snip]
> I'm not sure that is the case.
>
> When the server gets into the unrecoverable state, the repeating exceptions
> are indeed "SocketException: Too many open files".
[snip]
> Although this is unquestionably a network error, I don't think it is
>
> Thanks for the suggestion. I gave it a whirl, but no go. The file handles
> in use stayed at around 500 for the first 30M or so mutates, then within
> 4 seconds they jumped to about 800, stayed there for about 30 seconds,
> then within 5 seconds went ove…