Indeed, that was my mistake; that was exactly what we were doing in the
code.
Regards,


2014-08-09 0:56 GMT-03:00 Brian Zhang <yikebo...@gmail.com>:

> For the Cassandra driver, a session is like a database connection pool: it
> may contain many TCP connections. If you create a new session every
> time, more and more TCP connections will be created until you surpass the
> OS's maximum file descriptor limit.
>
> You should create one session and reuse it. The session manages
> connections automatically, creating new connections or closing old ones for
> your requests.
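>
> A minimal sketch of that pattern with the Python driver (the contact point,
> keyspace, table, and columns below are just placeholders):
>
>     from cassandra.cluster import Cluster
>
>     # Create the cluster and session once, at application startup.
>     cluster = Cluster(['200.200.200.151'])        # contact point(s)
>     session = cluster.connect('identification')   # example keyspace
>
>     def save_entity(entity_id, payload):
>         # Reuse the same session for every request; it pools and manages
>         # the TCP connections to all nodes for you.
>         session.execute(
>             "INSERT INTO entity (id, payload) VALUES (%s, %s)",
>             (entity_id, payload))
>
>     # Shut down only when the application exits.
>     cluster.shutdown()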
>
> On Aug 9, 2014, at 6:52, Redmumba <redmu...@gmail.com> wrote:
>
> Just to chime in, I also ran into this issue when I was migrating to the
> DataStax client. Instead of reusing the session, I was opening a new
> session each time. For some reason, even though I was still closing the
> session on the client side, I was getting the same error.
>
> Plus, the only way I could recover was by restarting Cassandra. I did not
> see the connections time out, even over a period of a few minutes.
>
> Andrew
> On Aug 8, 2014 3:19 PM, "Andrey Ilinykh" <ailin...@gmail.com> wrote:
>
>> You may have this problem if your client doesn't reuse connections but
>> opens a new one every time. Run netstat and check the number of established
>> connections; that number should not be large.
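>> For example, netstat -tn | grep ':9042' | grep ESTABLISHED | wc -l gives a
>> quick count of connections to the native transport port (9042 is the
>> default; adjust it if your cluster uses a different port).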
>>
>> Thank you,
>>   Andrey
>>
>>
>> On Fri, Aug 8, 2014 at 12:35 PM, Marcelo Elias Del Valle <
>> marc...@s1mbi0se.com.br> wrote:
>>
>>> Hi,
>>>
>>> I am running Cassandra 2.0.9 on Debian Wheezy, and I am getting
>>> "too many open files" exceptions when I try to perform a large number of
>>> operations on my 10-node cluster.
>>>
>>> I have read the documentation at
>>> http://www.datastax.com/documentation/cassandra/2.0/cassandra/troubleshooting/trblshootTooManyFiles_r.html
>>> and set everything to the recommended settings, but I keep getting
>>> the errors.
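>>>
>>> For reference, the values recommended there for a package install go in
>>> /etc/security/limits.d/cassandra.conf and are roughly:
>>>
>>> cassandra - memlock unlimited
>>> cassandra - nofile 100000
>>> cassandra - nproc 32768
>>> cassandra - as unlimited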
>>>
>>> In the documentation it says: "Another, much less likely possibility,
>>> is a file descriptor leak in Cassandra. Run lsof -n | grep java to
>>> check that the number of file descriptors opened by Java is reasonable and
>>> reports the error if the number is greater than a few thousand."
>>>
>>> I guess that is not the case here, or else a lot of people would be
>>> complaining about it, but I am not sure what I can do to solve the problem.
>>>
>>> Any hints on how to solve it?
>>>
>>> My client is written in Python and uses the Cassandra Python driver. Here
>>> are the exceptions I am getting on the client:
>>> [s1log] 2014-08-08 12:16:09,631 - cassandra.pool - WARNING - Error
>>> attempting to reconnect to 200.200.200.151, scheduling retry in 600.0
>>> seconds: [Errno 24] Too many open files
>>> [s1log] 2014-08-08 12:16:09,632 - cassandra.pool - WARNING - Error
>>> attempting to reconnect to 200.200.200.142, scheduling retry in 600.0
>>> seconds: [Errno 24] Too many open files
>>> [s1log] 2014-08-08 12:16:09,633 - cassandra.pool - WARNING - Error
>>> attempting to reconnect to 200.200.200.143, scheduling retry in 600.0
>>> seconds: [Errno 24] Too many open files
>>> [s1log] 2014-08-08 12:16:09,634 - cassandra.pool - WARNING - Error
>>> attempting to reconnect to 200.200.200.142, scheduling retry in 600.0
>>> seconds: [Errno 24] Too many open files
>>> [s1log] 2014-08-08 12:16:09,634 - cassandra.pool - WARNING - Error
>>> attempting to reconnect to 200.200.200.145, scheduling retry in 600.0
>>> seconds: [Errno 24] Too many open files
>>> [s1log] 2014-08-08 12:16:09,635 - cassandra.pool - WARNING - Error
>>> attempting to reconnect to 200.200.200.144, scheduling retry in 600.0
>>> seconds: [Errno 24] Too many open files
>>> [s1log] 2014-08-08 12:16:09,635 - cassandra.pool - WARNING - Error
>>> attempting to reconnect to 200.200.200.148, scheduling retry in 600.0
>>> seconds: [Errno 24] Too many open files
>>> [s1log] 2014-08-08 12:16:09,732 - cassandra.pool - WARNING - Error
>>> attempting to reconnect to 200.200.200.146, scheduling retry in 600.0
>>> seconds: [Errno 24] Too many open files
>>> [s1log] 2014-08-08 12:16:09,733 - cassandra.pool - WARNING - Error
>>> attempting to reconnect to 200.200.200.77, scheduling retry in 600.0
>>> seconds: [Errno 24] Too many open files
>>> [s1log] 2014-08-08 12:16:09,734 - cassandra.pool - WARNING - Error
>>> attempting to reconnect to 200.200.200.76, scheduling retry in 600.0
>>> seconds: [Errno 24] Too many open files
>>> [s1log] 2014-08-08 12:16:09,734 - cassandra.pool - WARNING - Error
>>> attempting to reconnect to 200.200.200.75, scheduling retry in 600.0
>>> seconds: [Errno 24] Too many open files
>>> [s1log] 2014-08-08 12:16:09,735 - cassandra.pool - WARNING - Error
>>> attempting to reconnect to 200.200.200.142, scheduling retry in 600.0
>>> seconds: [Errno 24] Too many open files
>>> [s1log] 2014-08-08 12:16:09,736 - cassandra.pool - WARNING - Error
>>> attempting to reconnect to 200.200.200.185, scheduling retry in 600.0
>>> seconds: [Errno 24] Too many open files
>>> [s1log] 2014-08-08 12:16:09,942 - cassandra.pool - WARNING - Error
>>> attempting to reconnect to 200.200.200.144, scheduling retry in 512.0
>>> seconds: Timed out connecting to 200.200.200.144
>>> [s1log] 2014-08-08 12:16:09,998 - cassandra.pool - WARNING - Error
>>> attempting to reconnect to 200.200.200.77, scheduling retry in 512.0
>>> seconds: Timed out connecting to 200.200.200.77
>>>
>>>
>>> And here is the exception I am getting on the server:
>>>
>>>  WARN [Native-Transport-Requests:163] 2014-08-08 14:27:30,499
>>> BatchStatement.java (line 223) Batch of prepared statements for
>>> [identification.entity_lookup, identification.entity] is of size 25216,
>>> exceeding specified threshold of 5120 by 20096.
>>> ERROR [Native-Transport-Requests:150] 2014-08-08 14:27:31,611
>>> ErrorMessage.java (line 222) Unexpected exception during request
>>> java.io.IOException: Connection reset by peer
>>>         at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
>>>         at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
>>>         at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
>>>         at sun.nio.ch.IOUtil.read(IOUtil.java:192)
>>>         at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:375)
>>>         at
>>> org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:64)
>>>         at
>>> org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:109)
>>>         at
>>> org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:312)
>>>         at
>>> org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:90)
>>>         at
>>> org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
>>>         at
>>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>>>         at
>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>>>         at java.lang.Thread.run(Thread.java:745)
>>>
>>> Here is the number of open files before and after restarting Cassandra:
>>>
>>> root@h:/etc/security/limits.d# lsof -n | grep java | wc -l
>>> 936580
>>> root@h:/etc/security/limits.d# sudo service cassandra restart
>>> [ ok ] Restarting Cassandra: cassandra.
>>> root@h:/etc/security/limits.d# lsof -n | grep java | wc -l
>>> 80295
>>>
>>>
>>> Best regards,
>>> Marcelo Valle.
>>>
>>
>>
>
