I checked file system integrity and contents, and everything seems OK.

Whatever the cause, the problem hit all three nodes at the same time, so I
find it unlikely that all volumes or any other piece of hardware failed
simultaneously.
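
In case anyone wants to repeat the "contents" check, something like the sketch
below walks the data directories and lists the sstable transaction log files
Joshua points at (quoted below), grouped by file name so duplicates or
zero-length files stand out. It is only a rough sketch: /var/data/cassandra/data
is a placeholder for our data_file_directories, and the "_txn_*.log" pattern is
my reading of the naming scheme described in LogFile.java, so adjust both as
needed.

import java.io.File;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

// Rough sketch: walk the data directories and group sstable transaction log
// files by name so duplicates or zero-length files are easy to spot.
// The default path is a placeholder -- pass your data_file_directories instead.
public class TxnLogScan
{
    public static void main(String[] args)
    {
        File dataRoot = new File(args.length > 0 ? args[0] : "/var/data/cassandra/data");
        Map<String, List<File>> byName = new TreeMap<>();
        scan(dataRoot, byName);

        for (Map.Entry<String, List<File>> entry : byName.entrySet())
        {
            System.out.println(entry.getKey() + (entry.getValue().size() > 1 ? "   <-- more than one copy" : ""));
            for (File f : entry.getValue())
                System.out.println("    " + f.getAbsolutePath() + "  " + f.length() + " bytes");
        }
    }

    // Recurse into subdirectories and collect files that look like transaction logs.
    private static void scan(File dir, Map<String, List<File>> byName)
    {
        File[] children = dir.listFiles();
        if (children == null)
            return;
        for (File child : children)
        {
            if (child.isDirectory())
                scan(child, byName);
            else if (child.getName().contains("_txn_") && child.getName().endsWith(".log"))
                byName.computeIfAbsent(child.getName(), k -> new ArrayList<>()).add(child);
        }
    }
}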

On Tue, Jun 19, 2018 at 2:25 PM Joshua Galbraith
<jgalbra...@newrelic.com.invalid> wrote:

> Deniz,
>
> The assertion error you're seeing appears to be coming from this line:
>
> https://github.com/apache/cassandra/blob/cassandra-3.11.0/src/java/org/apache/cassandra/db/lifecycle/LogReplicaSet.java#L63
>
> This file describes a LogReplicaSet as "A set of physical files on disk,
> [where] each file is an identical replica":
>
> https://github.com/apache/cassandra/blob/cassandra-3.11.0/src/java/org/apache/cassandra/db/lifecycle/LogFile.java#L64
>
> This is in reference to the transaction logs:
>
> https://github.com/apache/cassandra/blob/cassandra-3.11.0/src/java/org/apache/cassandra/db/lifecycle/LogFile.java#L45-L54
>
> Do you see anything odd about the log files, log directory, docker volume,
> or underlying EBS volume on the affected nodes?
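>
> To make that concrete, here is a toy analogue (emphatically not the actual
> Cassandra code) of the bookkeeping I read out of LogReplicaSet: at most one
> replica of a given transaction log per directory. If that reading is right,
> a second log file for the same transaction inside the same directory would
> trip an assertion along these lines:
>
> import java.io.File;
> import java.util.HashMap;
> import java.util.Map;
>
> // Toy analogue only, not Cassandra source: one replica of a given
> // transaction log is expected per directory, so encountering the same
> // parent directory twice is treated as a broken invariant.
> class ToyReplicaSet
> {
>     private final Map<File, File> replicaByDirectory = new HashMap<>();
>
>     void addReplica(File file)
>     {
>         File directory = file.getParentFile();
>         assert !replicaByDirectory.containsKey(directory) : "duplicate replica in " + directory;
>         replicaByDirectory.put(directory, file);
>     }
> }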
>
> On Tue, Jun 19, 2018 at 7:13 AM, @Nandan@ <nandanpriyadarshi...@gmail.com>
> wrote:
>
>> Check which Java version you are using.
>>
>> On Tue, Jun 19, 2018 at 6:18 PM, Deniz Acay <deniza...@gmail.com> wrote:
>>
>>> Hello there,
>>>
>>> Let me get straight to the point. Yesterday our three-node Cassandra
>>> production cluster ran into a problem, and we have not found a solution
>>> yet. Before taking more radical action, I would like to consult you about
>>> the issue.
>>>
>>> We are using Cassandra version 3.11.0. The cluster runs on AWS EC2
>>> m4.2xlarge instances with 32 GB of RAM each. Each node is Dockerized using
>>> host networking mode. Two EBS SSD volumes are attached to each node: 32 GB
>>> for commit logs (io1) and 4 TB for the data directory (gp2). We ran
>>> smoothly for 7 months and have filled 55% of the data directory on each
>>> node. Now our C* nodes fail during the bootstrap phase. Let me paste the
>>> logs from the system.log file, from startup up to the error:
>>>
>>> INFO  [main] 2018-06-19 09:51:32,726 YamlConfigurationLoader.java:89 -
>>> Configuration location:
>>> file:/opt/apache-cassandra-3.11.0/conf/cassandra.yaml
>>> INFO  [main] 2018-06-19 09:51:32,954 Config.java:481 - Node
>>> configuration:[allocate_tokens_for_keyspace=botanalytics;
>>> authenticator=AllowAllAuthenticator; authorizer=AllowAllAuthorizer;
>>> auto_bootstrap=false; auto_snapshot=true; back_pressure_enabled=false;
>>> back_pressure_strategy=org.apache.cassandra.net.RateBasedBackPressure{high_ratio=0.9,
>>> factor=5, flow=FAST}; batch_size_fail_threshold_in_kb=50;
>>> batch_size_warn_threshold_in_kb=5; batchlog_replay_throttle_in_kb=1024;
>>> broadcast_address=null; broadcast_rpc_address=null;
>>> buffer_pool_use_heap_if_exhausted=true; cas_contention_timeout_in_ms=1000;
>>> cdc_enabled=false; cdc_free_space_check_interval_ms=250;
>>> cdc_raw_directory=/var/data/cassandra/cdc_raw; cdc_total_space_in_mb=0;
>>> client_encryption_options=<REDACTED>; cluster_name=Botanalytics Production;
>>> column_index_cache_size_in_kb=2; column_index_size_in_kb=64;
>>> commit_failure_policy=stop_commit; commitlog_compression=null;
>>> commitlog_directory=/var/data/cassandra_commitlog;
>>> commitlog_max_compression_buffers_in_pool=3;
>>> commitlog_periodic_queue_size=-1; commitlog_segment_size_in_mb=32;
>>> commitlog_sync=periodic; commitlog_sync_batch_window_in_ms=NaN;
>>> commitlog_sync_period_in_ms=10000; commitlog_total_space_in_mb=8192;
>>> compaction_large_partition_warning_threshold_mb=100;
>>> compaction_throughput_mb_per_sec=1600; concurrent_compactors=null;
>>> concurrent_counter_writes=32; concurrent_materialized_view_writes=32;
>>> concurrent_reads=32; concurrent_replicates=null; concurrent_writes=64;
>>> counter_cache_keys_to_save=2147483647; counter_cache_save_period=7200;
>>> counter_cache_size_in_mb=null; counter_write_request_timeout_in_ms=5000;
>>> credentials_cache_max_entries=1000; credentials_update_interval_in_ms=-1;
>>> credentials_validity_in_ms=2000; cross_node_timeout=false;
>>> data_file_directories=[Ljava.lang.String;@662b4c69;
>>> disk_access_mode=auto; disk_failure_policy=best_effort;
>>> disk_optimization_estimate_percentile=0.95;
>>> disk_optimization_page_cross_chance=0.1; disk_optimization_strategy=ssd;
>>> dynamic_snitch=true; dynamic_snitch_badness_threshold=0.1;
>>> dynamic_snitch_reset_interval_in_ms=600000;
>>> dynamic_snitch_update_interval_in_ms=100;
>>> enable_scripted_user_defined_functions=false;
>>> enable_user_defined_functions=false;
>>> enable_user_defined_functions_threads=true; encryption_options=null;
>>> endpoint_snitch=Ec2Snitch; file_cache_size_in_mb=null;
>>> gc_log_threshold_in_ms=200; gc_warn_threshold_in_ms=1000;
>>> hinted_handoff_disabled_datacenters=[]; hinted_handoff_enabled=true;
>>> hinted_handoff_throttle_in_kb=1024; hints_compression=null;
>>> hints_directory=null; hints_flush_period_in_ms=10000;
>>> incremental_backups=false; index_interval=null;
>>> index_summary_capacity_in_mb=null;
>>> index_summary_resize_interval_in_minutes=60; initial_token=null;
>>> inter_dc_stream_throughput_outbound_megabits_per_sec=200;
>>> inter_dc_tcp_nodelay=false; internode_authenticator=null;
>>> internode_compression=dc; internode_recv_buff_size_in_bytes=0;
>>> internode_send_buff_size_in_bytes=0; key_cache_keys_to_save=2147483647;
>>> key_cache_save_period=14400; key_cache_size_in_mb=null;
>>> listen_address=172.31.6.233; listen_interface=null;
>>> listen_interface_prefer_ipv6=false; listen_on_broadcast_address=false;
>>> max_hint_window_in_ms=10800000; max_hints_delivery_threads=2;
>>> max_hints_file_size_in_mb=128; max_mutation_size_in_kb=null;
>>> max_streaming_retries=3; max_value_size_in_mb=256;
>>> memtable_allocation_type=heap_buffers; memtable_cleanup_threshold=null;
>>> memtable_flush_writers=0; memtable_heap_space_in_mb=null;
>>> memtable_offheap_space_in_mb=null; min_free_space_per_drive_in_mb=50;
>>> native_transport_max_concurrent_connections=-1;
>>> native_transport_max_concurrent_connections_per_ip=-1;
>>> native_transport_max_frame_size_in_mb=256;
>>> native_transport_max_threads=128; native_transport_port=9042;
>>> native_transport_port_ssl=null; num_tokens=8;
>>> otc_backlog_expiration_interval_ms=200;
>>> otc_coalescing_enough_coalesced_messages=8;
>>> otc_coalescing_strategy=DISABLED; otc_coalescing_window_us=200;
>>> partitioner=org.apache.cassandra.dht.Murmur3Partitioner;
>>> permissions_cache_max_entries=1000; permissions_update_interval_in_ms=-1;
>>> permissions_validity_in_ms=2000; phi_convict_threshold=8.0;
>>> prepared_statements_cache_size_mb=null; range_request_timeout_in_ms=10000;
>>> read_request_timeout_in_ms=5000;
>>> request_scheduler=org.apache.cassandra.scheduler.NoScheduler;
>>> request_scheduler_id=null; request_scheduler_options=null;
>>> request_timeout_in_ms=10000; role_manager=CassandraRoleManager;
>>> roles_cache_max_entries=1000; roles_update_interval_in_ms=-1;
>>> roles_validity_in_ms=2000;
>>> row_cache_class_name=org.apache.cassandra.cache.OHCProvider;
>>> row_cache_keys_to_save=2147483647; row_cache_save_period=0;
>>> row_cache_size_in_mb=0; rpc_address=172.31.6.233; rpc_interface=null;
>>> rpc_interface_prefer_ipv6=false; rpc_keepalive=true; rpc_listen_backlog=50;
>>> rpc_max_threads=2147483647; rpc_min_threads=16;
>>> rpc_port=9160; rpc_recv_buff_size_in_bytes=null;
>>> rpc_send_buff_size_in_bytes=null; rpc_server_type=sync;
>>> saved_caches_directory=/var/data/cassandra/saved_caches;
>>> seed_provider=org.apache.cassandra.locator.SimpleSeedProvider{seeds=172.31.30.86,172.31.6.233,172.31.32.108};
>>> server_encryption_options=<REDACTED>; slow_query_log_timeout_in_ms=500;
>>> snapshot_before_compaction=false; ssl_storage_port=7001;
>>> sstable_preemptive_open_interval_in_mb=50; start_native_transport=true;
>>> start_rpc=false; storage_port=7000;
>>> stream_throughput_outbound_megabits_per_sec=200;
>>> streaming_keep_alive_period_in_secs=300;
>>> streaming_socket_timeout_in_ms=86400000;
>>> thrift_framed_transport_size_in_mb=15; thrift_max_message_length_in_mb=16;
>>> thrift_prepared_statements_cache_size_mb=null;
>>> tombstone_failure_threshold=100000; tombstone_warn_threshold=1000;
>>> tracetype_query_ttl=86400; tracetype_repair_ttl=604800;
>>> transparent_data_encryption_options=org.apache.cassandra.config.TransparentDataEncryptionOptions@fa49800;
>>> trickle_fsync=false; trickle_fsync_interval_in_kb=10240;
>>> truncate_request_timeout_in_ms=60000;
>>> unlogged_batch_across_partitions_warn_threshold=10;
>>> user_defined_function_fail_timeout=1500;
>>> user_defined_function_warn_timeout=500; user_function_timeout_policy=die;
>>> windows_timer_interval=1; write_request_timeout_in_ms=2000]
>>> INFO  [main] 2018-06-19 09:51:32,954 DatabaseDescriptor.java:366 -
>>> DiskAccessMode 'auto' determined to be mmap, indexAccessMode is mmap
>>> INFO  [main] 2018-06-19 09:51:32,954 DatabaseDescriptor.java:420 -
>>> Global memtable on-heap threshold is enabled at 2011MB
>>> INFO  [main] 2018-06-19 09:51:32,954 DatabaseDescriptor.java:424 -
>>> Global memtable off-heap threshold is enabled at 2011MB
>>> INFO  [main] 2018-06-19 09:51:33,198 RateBasedBackPressure.java:123 -
>>> Initialized back-pressure with high ratio: 0.9, factor: 5, flow: FAST,
>>> window size: 2000.
>>> INFO  [main] 2018-06-19 09:51:33,198 DatabaseDescriptor.java:710 -
>>> Back-pressure is disabled with strategy 
>>> org.apache.cassandra.net.RateBasedBackPressure{high_ratio=0.9,
>>> factor=5, flow=FAST}.
>>> INFO  [main] 2018-06-19 09:51:33,228 Ec2Snitch.java:67 - EC2Snitch using
>>> region: us-west-2, zone: 2c.
>>> INFO  [main] 2018-06-19 09:51:33,332 JMXServerUtils.java:249 -
>>> Configured JMX server at: service:jmx:rmi://
>>> 0.0.0.0/jndi/rmi://0.0.0.0:7199/jmxrmi
>>> INFO  [main] 2018-06-19 09:51:33,337 CassandraDaemon.java:471 -
>>> Hostname: cassandra3.system.botanalytics.co
>>> INFO  [main] 2018-06-19 09:51:33,338 CassandraDaemon.java:478 - JVM
>>> vendor/version: OpenJDK 64-Bit Server VM/1.8.0_131
>>> INFO  [main] 2018-06-19 09:51:33,339 CassandraDaemon.java:479 - Heap
>>> size: 7.855GiB/7.855GiB
>>> INFO  [main] 2018-06-19 09:51:33,340 CassandraDaemon.java:484 - Code
>>> Cache Non-heap memory: init = 2555904(2496K) used = 5440896(5313K)
>>> committed = 5505024(5376K) max = 251658240(245760K)
>>> INFO  [main] 2018-06-19 09:51:33,340 CassandraDaemon.java:484 -
>>> Metaspace Non-heap memory: init = 0(0K) used = 17400896(16993K) committed =
>>> 17956864(17536K) max = -1(-1K)
>>> INFO  [main] 2018-06-19 09:51:33,340 CassandraDaemon.java:484 -
>>> Compressed Class Space Non-heap memory: init = 0(0K) used = 2076056(2027K)
>>> committed = 2228224(2176K) max = 1073741824(1048576K)
>>> INFO  [main] 2018-06-19 09:51:33,340 CassandraDaemon.java:484 - G1 Eden
>>> Space Heap memory: init = 444596224(434176K) used = 85983232(83968K)
>>> committed = 444596224(434176K) max = -1(-1K)
>>> INFO  [main] 2018-06-19 09:51:33,341 CassandraDaemon.java:484 - G1
>>> Survivor Space Heap memory: init = 0(0K) used = 0(0K) committed = 0(0K) max
>>> = -1(-1K)
>>> INFO  [main] 2018-06-19 09:51:33,341 CassandraDaemon.java:484 - G1 Old
>>> Gen Heap memory: init = 7990149120(7802880K) used = 0(0K) committed =
>>> 7990149120(7802880K) max = 8434745344(8237056K)
>>> INFO  [main] 2018-06-19 09:51:33,341 CassandraDaemon.java:486 -
>>> Classpath:
>>> /opt/apache-cassandra-3.11.0/conf:/opt/apache-cassandra-3.11.0/build/classes/main:/opt/apache-cassandra-3.11.0/build/classes/thrift:/opt/apache-cassandra-3.11.0/lib/HdrHistogram-2.1.9.jar:/opt/apache-cassandra-3.11.0/lib/ST4-4.0.8.jar:/opt/apache-cassandra-3.11.0/lib/airline-0.6.jar:/opt/apache-cassandra-3.11.0/lib/antlr-runtime-3.5.2.jar:/opt/apache-cassandra-3.11.0/lib/apache-cassandra-3.11.0.jar:/opt/apache-cassandra-3.11.0/lib/apache-cassandra-thrift-3.11.0.jar:/opt/apache-cassandra-3.11.0/lib/asm-5.0.4.jar:/opt/apache-cassandra-3.11.0/lib/caffeine-2.2.6.jar:/opt/apache-cassandra-3.11.0/lib/cassandra-driver-core-3.0.1-shaded.jar:/opt/apache-cassandra-3.11.0/lib/commons-cli-1.1.jar:/opt/apache-cassandra-3.11.0/lib/commons-codec-1.9.jar:/opt/apache-cassandra-3.11.0/lib/commons-lang3-3.1.jar:/opt/apache-cassandra-3.11.0/lib/commons-math3-3.2.jar:/opt/apache-cassandra-3.11.0/lib/compress-lzf-0.8.4.jar:/opt/apache-cassandra-3.11.0/lib/concurrent-trees-2.4.0.jar:/opt/apache-cassandra-3.11.0/lib/concurrentlinkedhashmap-lru-1.4.jar:/opt/apache-cassandra-3.11.0/lib/disruptor-3.0.1.jar:/opt/apache-cassandra-3.11.0/lib/ecj-4.4.2.jar:/opt/apache-cassandra-3.11.0/lib/guava-18.0.jar:/opt/apache-cassandra-3.11.0/lib/high-scale-lib-1.0.6.jar:/opt/apache-cassandra-3.11.0/lib/hppc-0.5.4.jar:/opt/apache-cassandra-3.11.0/lib/jackson-core-asl-1.9.2.jar:/opt/apache-cassandra-3.11.0/lib/jackson-mapper-asl-1.9.2.jar:/opt/apache-cassandra-3.11.0/lib/jamm-0.3.0.jar:/opt/apache-cassandra-3.11.0/lib/javax.inject.jar:/opt/apache-cassandra-3.11.0/lib/jbcrypt-0.3m.jar:/opt/apache-cassandra-3.11.0/lib/jcl-over-slf4j-1.7.7.jar:/opt/apache-cassandra-3.11.0/lib/jctools-core-1.2.1.jar:/opt/apache-cassandra-3.11.0/lib/jflex-1.6.0.jar:/opt/apache-cassandra-3.11.0/lib/jna-4.4.0.jar:/opt/apache-cassandra-3.11.0/lib/joda-time-2.4.jar:/opt/apache-cassandra-3.11.0/lib/json-simple-1.1.jar:/opt/apache-cassandra-3.11.0/lib/jstackjunit-0.0.1.jar:/opt/apache-cassandra-3.11.0/lib/libthrift-0.9.2.jar:/opt/apache-cassandra-3.11.0/lib/log4j-over-slf4j-1.7.7.jar:/opt/apache-cassandra-3.11.0/lib/logback-classic-1.1.3.jar:/opt/apache-cassandra-3.11.0/lib/logback-core-1.1.3.jar:/opt/apache-cassandra-3.11.0/lib/lz4-1.3.0.jar:/opt/apache-cassandra-3.11.0/lib/metrics-core-3.1.0.jar:/opt/apache-cassandra-3.11.0/lib/metrics-jvm-3.1.0.jar:/opt/apache-cassandra-3.11.0/lib/metrics-logback-3.1.0.jar:/opt/apache-cassandra-3.11.0/lib/netty-all-4.0.44.Final.jar:/opt/apache-cassandra-3.11.0/lib/ohc-core-0.4.4.jar:/opt/apache-cassandra-3.11.0/lib/ohc-core-j8-0.4.4.jar:/opt/apache-cassandra-3.11.0/lib/reporter-config-base-3.0.3.jar:/opt/apache-cassandra-3.11.0/lib/reporter-config3-3.0.3.jar:/opt/apache-cassandra-3.11.0/lib/sigar-1.6.4.jar:/opt/apache-cassandra-3.11.0/lib/slf4j-api-1.7.7.jar:/opt/apache-cassandra-3.11.0/lib/snakeyaml-1.11.jar:/opt/apache-cassandra-3.11.0/lib/snappy-java-1.1.1.7.jar:/opt/apache-cassandra-3.11.0/lib/snowball-stemmer-1.3.0.581.1.jar:/opt/apache-cassandra-3.11.0/lib/stream-2.5.2.jar:/opt/apache-cassandra-3.11.0/lib/thrift-server-0.3.7.jar:/opt/apache-cassandra-3.11.0/lib/jsr223/*/*.jar:/opt/apache-cassandra-3.11.0/lib/jamm-0.3.0.jar
>>> INFO  [main] 2018-06-19 09:51:33,342 CassandraDaemon.java:488 - JVM
>>> Arguments: [-Xloggc:/opt/apache-cassandra-3.11.0/logs/gc.log, -ea,
>>> -XX:+UseThreadPriorities, -XX:ThreadPriorityPolicy=42,
>>> -XX:+HeapDumpOnOutOfMemoryError, -Xss256k, -XX:StringTableSize=1000003,
>>> -XX:+AlwaysPreTouch, -XX:-UseBiasedLocking, -XX:+UseTLAB, -XX:+ResizeTLAB,
>>> -XX:+UseNUMA, -XX:+PerfDisableSharedMem, -Djava.net.preferIPv4Stack=true,
>>> -XX:+UseG1GC, -XX:+PrintGCDetails, -XX:+PrintGCDateStamps,
>>> -XX:+PrintHeapAtGC, -XX:+PrintTenuringDistribution,
>>> -XX:+PrintGCApplicationStoppedTime, -XX:+PrintPromotionFailure,
>>> -XX:+UseGCLogFileRotation, -XX:NumberOfGCLogFiles=10,
>>> -XX:GCLogFileSize=10M, -Xms8043M, -Xmx8043M,
>>> -XX:CompileCommandFile=/opt/apache-cassandra-3.11.0/conf/hotspot_compiler,
>>> -javaagent:/opt/apache-cassandra-3.11.0/lib/jamm-0.3.0.jar,
>>> -Dcassandra.jmx.remote.port=7199,
>>> -Dcom.sun.management.jmxremote.rmi.port=7199,
>>> -Dcom.sun.management.jmxremote.password.file=/etc/cassandra/jmxremote.password,
>>> -Djava.library.path=/opt/apache-cassandra-3.11.0/lib/sigar-bin,
>>> -Dlogback.configurationFile=logback.xml,
>>> -Dcassandra.logdir=/opt/apache-cassandra-3.11.0/logs,
>>> -Dcassandra.storagedir=/opt/apache-cassandra-3.11.0/data,
>>> -Dcassandra-foreground=yes]
>>> INFO  [main] 2018-06-19 09:51:33,442 NativeLibrary.java:174 - JNA
>>> mlockall successful
>>> WARN  [main] 2018-06-19 09:51:33,443 StartupChecks.java:127 - jemalloc
>>> shared library could not be preloaded to speed up memory allocations
>>> INFO  [main] 2018-06-19 09:51:33,443 StartupChecks.java:167 - JMX is
>>> enabled to receive remote connections on port: 7199
>>> WARN  [main] 2018-06-19 09:51:33,444 StartupChecks.java:197 - OpenJDK is
>>> not recommended. Please upgrade to the newest Oracle Java release
>>> INFO  [main] 2018-06-19 09:51:33,446 SigarLibrary.java:44 - Initializing
>>> SIGAR library
>>> INFO  [main] 2018-06-19 09:51:33,459 SigarLibrary.java:180 - Checked OS
>>> settings and found them configured for optimal performance.
>>> INFO  [main] 2018-06-19 09:54:10,270 QueryProcessor.java:115 -
>>> Initialized prepared statement caches with 31 MB (native) and 31 MB (Thrift)
>>> INFO  [main] 2018-06-19 09:54:18,131 ColumnFamilyStore.java:406 -
>>> Initializing system.IndexInfo
>>> INFO  [SSTableBatchOpen:2] 2018-06-19 09:54:19,249 BufferPool.java:230 -
>>> Global buffer pool is enabled, when pool is exhausted (max is 512.000MiB)
>>> it will allocate on heap
>>> INFO  [main] 2018-06-19 09:54:19,490 CacheService.java:112 -
>>> Initializing key cache with capacity of 100 MBs.
>>> INFO  [main] 2018-06-19 09:54:19,498 CacheService.java:134 -
>>> Initializing row cache with capacity of 0 MBs
>>> INFO  [main] 2018-06-19 09:54:19,500 CacheService.java:163 -
>>> Initializing counter cache with capacity of 50 MBs
>>> INFO  [main] 2018-06-19 09:54:19,501 CacheService.java:174 - Scheduling
>>> counter cache save to every 7200 seconds (going to save all keys).
>>> INFO  [main] 2018-06-19 09:54:19,635 ColumnFamilyStore.java:406 -
>>> Initializing system.batches
>>> INFO  [main] 2018-06-19 09:54:19,750 ColumnFamilyStore.java:406 -
>>> Initializing system.paxos
>>> INFO  [main] 2018-06-19 09:54:19,990 ColumnFamilyStore.java:406 -
>>> Initializing system.local
>>> INFO  [main] 2018-06-19 09:54:20,668 ColumnFamilyStore.java:406 -
>>> Initializing system.peers
>>> INFO  [main] 2018-06-19 09:54:21,235 ColumnFamilyStore.java:406 -
>>> Initializing system.peer_events
>>> INFO  [main] 2018-06-19 09:54:21,344 ColumnFamilyStore.java:406 -
>>> Initializing system.range_xfers
>>> INFO  [main] 2018-06-19 09:54:22,274 ColumnFamilyStore.java:406 -
>>> Initializing system.compaction_history
>>> INFO  [main] 2018-06-19 09:54:24,256 ColumnFamilyStore.java:406 -
>>> Initializing system.sstable_activity
>>> INFO  [main] 2018-06-19 09:54:26,305 ColumnFamilyStore.java:406 -
>>> Initializing system.size_estimates
>>> INFO  [main] 2018-06-19 09:54:27,331 ColumnFamilyStore.java:406 -
>>> Initializing system.available_ranges
>>> INFO  [main] 2018-06-19 09:54:27,438 ColumnFamilyStore.java:406 -
>>> Initializing system.transferred_ranges
>>> INFO  [main] 2018-06-19 09:54:27,548 ColumnFamilyStore.java:406 -
>>> Initializing system.views_builds_in_progress
>>> INFO  [main] 2018-06-19 09:54:27,962 ColumnFamilyStore.java:406 -
>>> Initializing system.built_views
>>> INFO  [main] 2018-06-19 09:54:28,372 ColumnFamilyStore.java:406 -
>>> Initializing system.hints
>>> INFO  [main] 2018-06-19 09:54:28,485 ColumnFamilyStore.java:406 -
>>> Initializing system.batchlog
>>> INFO  [main] 2018-06-19 09:54:29,308 ColumnFamilyStore.java:406 -
>>> Initializing system.prepared_statements
>>> INFO  [main] 2018-06-19 09:54:30,199 ColumnFamilyStore.java:406 -
>>> Initializing system.schema_keyspaces
>>> INFO  [main] 2018-06-19 09:54:30,307 ColumnFamilyStore.java:406 -
>>> Initializing system.schema_columnfamilies
>>> INFO  [main] 2018-06-19 09:54:30,415 ColumnFamilyStore.java:406 -
>>> Initializing system.schema_columns
>>> INFO  [main] 2018-06-19 09:54:30,522 ColumnFamilyStore.java:406 -
>>> Initializing system.schema_triggers
>>> INFO  [main] 2018-06-19 09:54:30,644 ColumnFamilyStore.java:406 -
>>> Initializing system.schema_usertypes
>>> INFO  [main] 2018-06-19 09:54:30,751 ColumnFamilyStore.java:406 -
>>> Initializing system.schema_functions
>>> INFO  [main] 2018-06-19 09:54:30,857 ColumnFamilyStore.java:406 -
>>> Initializing system.schema_aggregates
>>> INFO  [main] 2018-06-19 09:54:30,895 ViewManager.java:137 - Not
>>> submitting build tasks for views in keyspace system as storage service is
>>> not initialized
>>> INFO  [main] 2018-06-19 09:54:30,995 ApproximateTime.java:44 -
>>> Scheduling approximate time-check task with a precision of 10 milliseconds
>>> INFO  [main] 2018-06-19 09:54:31,398 StorageService.java:599 -
>>> Populating token metadata from system tables
>>> INFO  [main] 2018-06-19 09:54:31,417 StorageService.java:606 - Token
>>> metadata: Normal Tokens:
>>> /172.31.6.233:[-8558296730980030069, -7525382470676135506,
>>> -5365679946543067437, -5024744468396129791, -621548782781630789,
>>> 2560648275603582213, 3045808823380974565, 4528773636540445635]
>>> /172.31.30.86:[-4996254754506875993, -4643155412471971849,
>>> -4368808458340046140, -3850506734999042317, -2258278949288203779,
>>> 595567677047792091, 2603367051083131788, 4627331559939659350]
>>> /172.31.32.108:[-8971900399054629031, -8230309605059852620,
>>> -7924781343155531759, -3886073213015791996, 1365546428882466509,
>>> 5068042707833365047, 5447578083699391181, 5537482776219757111]
>>>
>>> INFO  [main] 2018-06-19 09:54:32,136 ColumnFamilyStore.java:406 -
>>> Initializing system_schema.keyspaces
>>> INFO  [main] 2018-06-19 09:54:33,162 ColumnFamilyStore.java:406 -
>>> Initializing system_schema.tables
>>> INFO  [main] 2018-06-19 09:54:34,069 ColumnFamilyStore.java:406 -
>>> Initializing system_schema.columns
>>> INFO  [main] 2018-06-19 09:54:34,582 ColumnFamilyStore.java:406 -
>>> Initializing system_schema.triggers
>>> INFO  [main] 2018-06-19 09:54:34,901 ColumnFamilyStore.java:406 -
>>> Initializing system_schema.dropped_columns
>>> INFO  [main] 2018-06-19 09:54:35,329 ColumnFamilyStore.java:406 -
>>> Initializing system_schema.views
>>> INFO  [main] 2018-06-19 09:54:35,550 ColumnFamilyStore.java:406 -
>>> Initializing system_schema.types
>>> INFO  [main] 2018-06-19 09:54:35,660 ColumnFamilyStore.java:406 -
>>> Initializing system_schema.functions
>>> INFO  [main] 2018-06-19 09:54:35,772 ColumnFamilyStore.java:406 -
>>> Initializing system_schema.aggregates
>>> INFO  [main] 2018-06-19 09:54:35,991 ColumnFamilyStore.java:406 -
>>> Initializing system_schema.indexes
>>> INFO  [main] 2018-06-19 09:54:36,140 ViewManager.java:137 - Not
>>> submitting build tasks for views in keyspace system_schema as storage
>>> service is not initialized
>>> ERROR [main] 2018-06-19 09:54:58,186 CassandraDaemon.java:706 -
>>> Exception encountered during startup
>>> java.lang.AssertionError: null
>>> at
>>> org.apache.cassandra.db.lifecycle.LogReplicaSet.addReplica(LogReplicaSet.java:63)
>>> ~[apache-cassandra-3.11.0.jar:3.11.0]
>>> at java.util.ArrayList.forEach(ArrayList.java:1249) ~[na:1.8.0_131]
>>> at
>>> org.apache.cassandra.db.lifecycle.LogReplicaSet.addReplicas(LogReplicaSet.java:57)
>>> ~[apache-cassandra-3.11.0.jar:3.11.0]
>>> at org.apache.cassandra.db.lifecycle.LogFile.<init>(LogFile.java:146)
>>> ~[apache-cassandra-3.11.0.jar:3.11.0]
>>> at org.apache.cassandra.db.lifecycle.LogFile.make(LogFile.java:94)
>>> ~[apache-cassandra-3.11.0.jar:3.11.0]
>>> at
>>> org.apache.cassandra.db.lifecycle.LogTransaction$LogFilesByName.removeUnfinishedLeftovers(LogTransaction.java:461)
>>> ~[apache-cassandra-3.11.0.jar:3.11.0]
>>> at
>>> java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
>>> ~[na:1.8.0_131]
>>> at java.util.HashMap$EntrySpliterator.tryAdvance(HashMap.java:1712)
>>> ~[na:1.8.0_131]
>>> at
>>> java.util.stream.ReferencePipeline.forEachWithCancel(ReferencePipeline.java:126)
>>> ~[na:1.8.0_131]
>>> at
>>> java.util.stream.AbstractPipeline.copyIntoWithCancel(AbstractPipeline.java:498)
>>> ~[na:1.8.0_131]
>>> at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:485)
>>> ~[na:1.8.0_131]
>>> at
>>> java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
>>> ~[na:1.8.0_131]
>>> at
>>> java.util.stream.MatchOps$MatchOp.evaluateSequential(MatchOps.java:230)
>>> ~[na:1.8.0_131]
>>> at
>>> java.util.stream.MatchOps$MatchOp.evaluateSequential(MatchOps.java:196)
>>> ~[na:1.8.0_131]
>>> at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
>>> ~[na:1.8.0_131]
>>> at
>>> java.util.stream.ReferencePipeline.allMatch(ReferencePipeline.java:454)
>>> ~[na:1.8.0_131]
>>> at
>>> org.apache.cassandra.db.lifecycle.LogTransaction$LogFilesByName.removeUnfinishedLeftovers(LogTransaction.java:456)
>>> ~[apache-cassandra-3.11.0.jar:3.11.0]
>>> at
>>> org.apache.cassandra.db.lifecycle.LogTransaction.removeUnfinishedLeftovers(LogTransaction.java:423)
>>> ~[apache-cassandra-3.11.0.jar:3.11.0]
>>> at
>>> org.apache.cassandra.db.lifecycle.LogTransaction.removeUnfinishedLeftovers(LogTransaction.java:415)
>>> ~[apache-cassandra-3.11.0.jar:3.11.0]
>>> at
>>> org.apache.cassandra.db.lifecycle.LifecycleTransaction.removeUnfinishedLeftovers(LifecycleTransaction.java:544)
>>> ~[apache-cassandra-3.11.0.jar:3.11.0]
>>> at
>>> org.apache.cassandra.db.ColumnFamilyStore.scrubDataDirectories(ColumnFamilyStore.java:636)
>>> ~[apache-cassandra-3.11.0.jar:3.11.0]
>>> at
>>> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:275)
>>> [apache-cassandra-3.11.0.jar:3.11.0]
>>> at
>>> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:600)
>>> [apache-cassandra-3.11.0.jar:3.11.0]
>>> at
>>> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:689)
>>> [apache-cassandra-3.11.0.jar:3.11.0]
>>>
>>> Do you have any idea what may be wrong?
>>>
>>> Thanks in advance,
>>> Deniz
>>>
>>
>>
>
>
> --
> *Joshua Galbraith* | Senior Software Engineer | New Relic
> C: 907-209-1208 | jgalbra...@newrelic.com
>
