Yeah, that happens when some operation throws a timeout or any other sort of
error (connection refused, etc.). There is a failover logic that will try
to discover all the nodes within the cluster (not only the ones you
configured) in order to reach the cluster and execute the operation.
Have y
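
For reference, that failover behavior is configurable when the Keyspace is
created. A rough sketch (assuming Hector 0.7.x; the FailoverPolicy constants
and package names may differ between versions, and the cluster/keyspace names
here are placeholders):

    import me.prettyprint.cassandra.model.ConfigurableConsistencyLevel;
    import me.prettyprint.cassandra.service.FailoverPolicy;
    import me.prettyprint.hector.api.Cluster;
    import me.prettyprint.hector.api.Keyspace;
    import me.prettyprint.hector.api.factory.HFactory;

    public class FailoverSetup {
        public static void main(String[] args) {
            Cluster cluster = HFactory.getOrCreateCluster("TestCluster", "127.0.0.1:9160");
            // ON_FAIL_TRY_ALL_AVAILABLE retries the operation against every
            // other known host before giving up; FAIL_FAST turns retries off.
            Keyspace keyspace = HFactory.createKeyspace("MyKeyspace", cluster,
                    new ConfigurableConsistencyLevel(),
                    FailoverPolicy.ON_FAIL_TRY_ALL_AVAILABLE);
            System.out.println("keyspace ready: " + keyspace.getKeyspaceName());
        }
    }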
Indeed, Hector has a connection pool behind it; I think it uses 50
connections per node.
It also uses a node to discover the others, I assume, since I saw
connections from my app to nodes that I didn't configure in Hector.
So you may check the fds at the OS level to see if there is a bottleneck there.
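
If you want to bound that, both the pool size and the auto-discovery are
settable on the host configurator. A minimal sketch (again assuming Hector
0.7.x; the host and cluster name are placeholders):

    import me.prettyprint.cassandra.service.CassandraHostConfigurator;
    import me.prettyprint.hector.api.Cluster;
    import me.prettyprint.hector.api.factory.HFactory;

    public class PoolSetup {
        public static void main(String[] args) {
            CassandraHostConfigurator conf = new CassandraHostConfigurator("127.0.0.1:9160");
            conf.setMaxActive(50);           // cap on pooled connections per host
            conf.setAutoDiscoverHosts(true); // make ring discovery explicit
            Cluster cluster = HFactory.getOrCreateCluster("TestCluster", conf);
            System.out.println("known hosts: " + cluster.getKnownPoolHosts(true));
        }
    }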
You probably want to switch to using mutator#addInsertion for some
number of iterations (start with 1000 and adjust as needed), then
calling execute(). This will be much more efficient.
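
A rough sketch of that batching pattern (untested; the column family, column
name, and string serializers are placeholders):

    import java.util.List;

    import me.prettyprint.cassandra.serializers.StringSerializer;
    import me.prettyprint.hector.api.Keyspace;
    import me.prettyprint.hector.api.factory.HFactory;
    import me.prettyprint.hector.api.mutation.Mutator;

    public class BatchInsert {
        static void insertAll(Keyspace keyspace, List<String> keys) {
            Mutator<String> mutator =
                    HFactory.createMutator(keyspace, StringSerializer.get());
            int batchSize = 1000; // start with 1000 and adjust as needed
            int pending = 0;
            for (String key : keys) {
                // queue the insertion locally rather than sending it right away
                mutator.addInsertion(key, "MyColumnFamily",
                        HFactory.createStringColumn("payload", "value-" + key));
                if (++pending == batchSize) {
                    mutator.execute(); // one batch round trip; the queue resets
                    pending = 0;
                }
            }
            if (pending > 0) {
                mutator.execute(); // flush the remainder
            }
        }
    }

Each execute() sends the queued insertions as a single batch round trip
instead of one network call per row.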
On Thu, Dec 16, 2010 at 11:39 AM, Amin Sakka, Novapost
wrote:
>
> I'm using a single client instance (using Hec
I'm using a single client instance (using Hector) and a single connection to
Cassandra.
For each insertion I'm using a new mutator and then I release it.
I have 473 sstable "Data.db" files; the average size of each is 30 MB.
2010/12/16 Ryan King
> Are you creating a new connection for each row you
Are you creating a new connection for each row you insert (and if so
are you closing it)?
-ryan
On Wed, Dec 15, 2010 at 8:13 AM, Amin Sakka, Novapost
wrote:
> Hello,
> I'm using Cassandra 0.7.0 rc1, a single node configuration, replication
> factor 1, random partitioner, 2 GB heap size.
> I ran
How many sstable "Data.db" files do you see in your system, and how big are
they?
Also, how big are the rows you are inserting?
On Thu, Dec 16, 2010 at 7:59 AM, Amin Sakka, Novapost <
amin.sa...@novapost.fr> wrote:
>
> I increased the amount of the allowed file descriptors to "unlimited".
> Now,
Be careful with the unlimited value on ulimit; you could end up with an
unresponsive server... I mean, you might not even be able to connect via ssh
if you don't have enough handles.
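
A quick way to watch descriptor usage before raising the limit further is to
poll the JVM itself. A minimal sketch (assumes a Sun/Oracle JVM on Unix; note
this reports the client JVM's own descriptors, so for the Cassandra process
you would check /proc/<pid>/fd or lsof instead):

    import java.lang.management.ManagementFactory;
    import java.lang.management.OperatingSystemMXBean;

    import com.sun.management.UnixOperatingSystemMXBean;

    public class FdCheck {
        public static void main(String[] args) {
            OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
            if (os instanceof UnixOperatingSystemMXBean) {
                UnixOperatingSystemMXBean unix = (UnixOperatingSystemMXBean) os;
                // open vs. maximum file descriptors for this JVM
                System.out.println("fds: " + unix.getOpenFileDescriptorCount()
                        + " / " + unix.getMaxFileDescriptorCount());
            }
        }
    }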
On Thu, Dec 16, 2010 at 9:59 AM, Amin Sakka, Novapost <
amin.sa...@novapost.fr> wrote:
>
> I increased the amount of the allow
I increased the amount of the allowed file descriptors to "unlimited".
Now, I get exactly the same exception after 3.500.000 rows:
CustomTThreadPoolServer.java (line 104) Transport error occurred during
acceptance of message.
org.apache.thrift.transport.TTransportException: java.net.SocketExcept
http://www.riptano.com/docs/0.6/troubleshooting/index#java-reports-an-error-saying-there-are-too-many-open-files
On Wed, Dec 15, 2010 at 11:13 AM, Amin Sakka, Novapost <
amin.sa...@novapost.fr> wrote:
> Hello,
> I'm using Cassandra 0.7.0 rc1, a single node configuration, replication
> factor
Hello,
I'm using Cassandra 0.7.0 rc1, a single node configuration, replication
factor 1, random partitioner, 2 GB heap size.
I ran my Hector client to insert 5.000.000 rows, but after a couple of
hours the following exception occurs:
WARN [main] 2010-12-15 16:38:53,335 CustomTThreadPoolS