Thanks very much for answering!

Do you think that after a failed attempt to join a node to the cluster
I should run some repairs and cleanups?
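
For context, what I have in mind is roughly the following (assuming these
are even the right commands for this situation, which is exactly what I am
unsure about):

  # on each of the two existing nodes
  nodetool repair
  nodetool cleanup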

Thanks!

On Tue, Apr 28, 2015 at 5:13 AM, Carlos Rolo <r...@pythian.com> wrote:

> Hi,
>
> The 2.1.x series is not recommended for use, especially the first versions.
> I would downgrade to 2.0.14 or, if you must stay on 2.1, upgrade your cluster
> to 2.1.4 or the imminent release of 2.1.5.
>
> This mailing list has a few tips on how to deal with the 2.1.x releases, but
> the best way is indeed a downgrade or waiting for 2.1.5.
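>
> (Roughly, assuming a packaged install, the upgrade would be a rolling one,
> node by node, along these lines - but do check NEWS.txt for 2.1.x first:
>
>   nodetool drain
>   sudo service cassandra stop
>   # install the 2.1.4 packages
>   sudo service cassandra start
>
> and verify with "nodetool status" before moving on to the next node.)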
>
> Regards,
>
> Carlos Juzarte Rolo
> Cassandra Consultant
>
> Pythian - Love your data
>
> rolo@pythian | Twitter: cjrolo | Linkedin: linkedin.com/in/carlosjuzarterolo
> Mobile: +31 6 159 61 814 | Tel: +1 613 565 8696 x1649
> www.pythian.com
>
> On Tue, Apr 28, 2015 at 3:30 AM, Analia Lorenzatto <
> analialorenza...@gmail.com> wrote:
>
>>
>> Hello guys,
>>
>> I have a cluster comprised of 2 nodes, configured with vnodes, running
>> Cassandra version 2.1.0-2.
>>
>> I am facing an issue when I try to join a new node to the cluster.
>>
>> At first it started joining, but then it got stuck:
>>
>> UN  1x.x.x.x  348.11 GB  256     100.0%            xxxxxxxxxxxx  1c
>> UN  1x.x.x.x  342.74 GB  256     100.0%            xxxxxxxxxxxx  1c
>> UJ  1x.x.x.x  26.86 GB   256     ?                 xxxxxxxxxxxx  1c
>>
>>
>> I can see some errors on the already working nodes:
>>
>> WARN  [SharedPool-Worker-7] 2015-04-27 17:41:16,060
>> SliceQueryFilter.java:236 - Read 5001 live and 66548 tombstoned cells in
>> usmc.userpixel (see tombstone_warn_threshold). 5000 columns was requested,
>> slices=[-], delInfo={deletedAt=-9223372036854775808, localDeletion=2147483647}
>> WARN  [SharedPool-Worker-32] 2015-04-27 17:41:16,668
>> SliceQueryFilter.java:236 - Read 2012 live and 30440 tombstoned cells in
>> usmc.userpixel (see tombstone_warn_threshold). 5001 columns was requested,
>> slices=[b6d051df-0a8f-4c13-b93c-1b4ff0d82b8d:date-],
>> delInfo={deletedAt=-9223372036854775808, localDeletion=2147483647}
>>
>> ERROR [CompactionExecutor:35638] 2015-04-27 19:06:07,613
>> CassandraDaemon.java:166 - Exception in thread
>> Thread[CompactionExecutor:35638,1,main]
>> java.lang.AssertionError: Memory was freed
>>         at org.apache.cassandra.io.util.Memory.checkPosition(Memory.java:281) ~[apache-cassandra-2.1.0.jar:2.1.0]
>>         at org.apache.cassandra.io.util.Memory.getInt(Memory.java:233) ~[apache-cassandra-2.1.0.jar:2.1.0]
>>         at org.apache.cassandra.io.sstable.IndexSummary.getPositionInSummary(IndexSummary.java:118) ~[apache-cassandra-2.1.0.jar:2.1.0]
>>         at org.apache.cassandra.io.sstable.IndexSummary.getKey(IndexSummary.java:123) ~[apache-cassandra-2.1.0.jar:2.1.0]
>>         at org.apache.cassandra.io.sstable.IndexSummary.binarySearch(IndexSummary.java:92) ~[apache-cassandra-2.1.0.jar:2.1.0]
>>         at org.apache.cassandra.io.sstable.SSTableReader.getSampleIndexesForRanges(SSTableReader.java:1209) ~[apache-cassandra-2.1.0.jar:2.1.0]
>>         at org.apache.cassandra.io.sstable.SSTableReader.estimatedKeysForRanges(SSTableReader.java:1165) ~[apache-cassandra-2.1.0.jar:2.1.0]
>>         at org.apache.cassandra.db.compaction.AbstractCompactionStrategy.worthDroppingTombstones(AbstractCompactionStrategy.java:328) ~[apache-cassandra-2.1.0.jar:2.1.0]
>>         at org.apache.cassandra.db.compaction.LeveledCompactionStrategy.findDroppableSSTable(LeveledCompactionStrategy.java:365) ~[apache-cassandra-2.1.0.jar:2.1.0]
>>         at org.apache.cassandra.db.compaction.LeveledCompactionStrategy.getMaximalTask(LeveledCompactionStrategy.java:127) ~[apache-cassandra-2.1.0.jar:2.1.0]
>>         at org.apache.cassandra.db.compaction.LeveledCompactionStrategy.getNextBackgroundTask(LeveledCompactionStrategy.java:112) ~[apache-cassandra-2.1.0.jar:2.1.0]
>>         at org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:229) ~[apache-cassandra-2.1.0.jar:2.1.0]
>>         at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) ~[na:1.7.0_51]
>>         at java.util.concurrent.FutureTask.run(FutureTask.java:262) ~[na:1.7.0_51]
>>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) ~[na:1.7.0_51]
>>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [na:1.7.0_51]
>>         at java.lang.Thread.run(Thread.java:744) [na:1.7.0_51]
>>
>> But I do not see any warning or error message in the logs of the joining
>> node.  I just see an exception there when I run "nodetool info":
>>
>> root@:~# nodetool info
>> ID               : f5e49647-59fa-474f-b6af-9f65abc43581
>> Gossip active    : true
>> Thrift active    : false
>> Native Transport active: false
>> Load             : 26.86 GB
>> Generation No    : 1430163258
>> Uptime (seconds) : 18799
>> Heap Memory (MB) : 4185.15 / 7566.00
>> error: null
>> -- StackTrace --
>> java.lang.AssertionError
>> at org.apache.cassandra.locator.TokenMetadata.getTokens(TokenMetadata.java:440)
>> at org.apache.cassandra.service.StorageService.getTokens(StorageService.java:2079)
>> at org.apache.cassandra.service.StorageService.getTokens(StorageService.java:2068)
>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>> at java.lang.reflect.Method.invoke(Method.java:606)
>> at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:75)
>> at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source)
>> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>> at java.lang.reflect.Method.invoke(Method.java:606)
>> at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:279)
>> at com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:112)
>> at com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:46)
>> at com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237)
>> at com.sun.jmx.mbeanserver.PerInterface.getAttribute(PerInterface.java:83)
>> at com.sun.jmx.mbeanserver.MBeanSupport.getAttribute(MBeanSupport.java:206)
>> at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:647)
>> at com.sun.jmx.mbeanserver.JmxMBeanServer.getAttribute(JmxMBeanServer.java:678)
>> at javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1464)
>> at javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:97)
>> at javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1328)
>> at javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1420)
>> at javax.management.remote.rmi.RMIConnectionImpl.getAttribute(RMIConnectionImpl.java:657)
>> at sun.reflect.GeneratedMethodAccessor16.invoke(Unknown Source)
>> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>> at java.lang.reflect.Method.invoke(Method.java:606)
>>
>>
>> I've been checking the heap consumption, and it does not look like the
>> node is running out of memory.
>>
>> I am not sure what to do to fix this problem.  I am also not sure how to
>> proceed if I want to re-bootstrap the new node without making the
>> situation worse.  I've been reading suggestions on the net, but I am not
>> completely sure.
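>>
>> From what I have read so far, the re-bootstrap would look roughly like this
>> (assuming the default data directories and a packaged install - please
>> correct me if this is wrong):
>>
>>   # on the stuck joining node
>>   sudo service cassandra stop
>>   sudo rm -rf /var/lib/cassandra/data/* /var/lib/cassandra/commitlog/* /var/lib/cassandra/saved_caches/*
>>   sudo service cassandra start   # so it bootstraps again from scratch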
>>
>> Could you please help me with that?
>>
>> Thanks in advance, any help would be appreciated!
>> --
>> Saludos / Regards.
>>
>> Analía Lorenzatto.
>>
>>
>>
>


-- 
Saludos / Regards.

Analía Lorenzatto.

“It's possible to commit no errors and still lose. That is not weakness.
That is life.”  By Captain Jean-Luc Picard.
