force processing of pending hinted handoffs

2017-04-11 Thread Roland Otta
hi,

sometimes we have the problem that hinted handoffs (for example because of
network problems between 2 DCs) do not get processed even after the
connection problem between the DCs recovers. Some of the files stay in the
hints directory until we restart the node that contains the hints.

after restarting cassandra we can see the proper messages for the
hints handling:

Apr 11 09:28:56 bigd006 cassandra: INFO  07:28:56 Deleted hint file
c429ad19-ee9f-4b5a-abcd-1da1516d1003-1491895717182-1.hints
Apr 11 09:28:56 bigd006 cassandra: INFO  07:28:56 Finished hinted
handoff of file c429ad19-ee9f-4b5a-abcd-1da1516d1003-1491895717182-
1.hints to endpoint c429ad19-ee9f-4b5a-abcd-1da1516d1003

is there a way (for example via jmx) to force a node to process
outstanding hints instead of restarting the node?
does anyone know what causes those hints not to be retried automatically?

br,
roland
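For reference, a hedged sketch of knobs sometimes used to poke hint delivery without a restart. The bean/operation names and the endpoint address below are from memory and not confirmed against a specific version, so treat them as assumptions to verify:

```shell
# Hedged sketch, not a confirmed fix for the stuck-hints case described above.
# Toggling hint dispatch sometimes re-triggers delivery on a running node:
nodetool pausehandoff
nodetool resumehandoff

# On 2.x there was also a JMX operation to schedule delivery for one endpoint.
# Bean and operation names are from memory -- verify with jmxterm's `beans`
# command first; 3.0+ rewrote hint storage and may not expose this bean.
java -jar jmxterm-1.0.2-uber.jar -l localhost:7199 <<'EOF'
bean org.apache.cassandra.db:type=HintedHandoffManager
run scheduleHintDelivery 10.0.0.1
EOF
```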




Re: Inconsistent data after adding a new DC and rebuilding

2017-04-11 Thread Roland Otta
well .. that's pretty much the same thing we saw in our environment (cassandra 3.7).

in our case a full repair fixed the issues.
but no doubt .. it would be more satisfying to know the root cause of that
issue

br,
roland


On Mon, 2017-04-10 at 19:12 +0200, George Sigletos wrote:
In 3 out of 5 nodes of our new DC the rebuild process finished successfully. In
the other two it did not (the process hung, doing nothing), so we killed
it, removed all data and started again. This time it finished successfully.

Here is the netstats output of one of the newly added nodes:

Mode: NORMAL
Not sending any streams.
Read Repair Statistics:
Attempted: 269142
Mismatch (Blocking): 169866
Mismatch (Background): 4
Pool Name                    Active   Pending      Completed   Dropped
Commands                        n/a         2       10031126      1935
Responses                       n/a        97       22565129       n/a


On Mon, Apr 10, 2017 at 5:28 PM, Roland Otta
<roland.o...@willhaben.at> wrote:
Hi,

we have seen similar issues here.

have you verified that your rebuilds finished successfully? we have
seen rebuilds that stopped streaming and working but never finished.
what does nodetool netstats show for your newly built nodes?

br,
roland


On Mon, 2017-04-10 at 17:15 +0200, George Sigletos wrote:
Hello,

We recently added a new datacenter to our cluster and ran "nodetool rebuild -- 
" on all 5 new nodes, one by one.

After this process finished, we noticed there is data missing from the new
datacenter, although it exists in the current one.

How is that possible? Should I have run repair on all nodes of the
current DC before adding the new one?

Running Cassandra 2.1.15

Kind regards,
George
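As I understand it, `nodetool rebuild` streams a snapshot of existing data from the source DC and does not by itself reconcile writes the new DC never received. A hedged sketch of the commands involved (keyspace and DC names are placeholders, not from this thread):

```shell
# Hedged sketch for Cassandra 2.1; "my_ks", "dc1" and "new_dc" are placeholders.
# First, the keyspace must replicate to the new DC (CQL, shown as a comment):
#   ALTER KEYSPACE my_ks WITH replication =
#     {'class': 'NetworkTopologyStrategy', 'dc1': 3, 'new_dc': 3};
# On each new node, stream existing data, naming the source DC explicitly:
nodetool rebuild -- dc1
# A subsequent full repair reconciles anything rebuild missed:
nodetool repair my_ks
```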







Re: Node always dying

2017-04-11 Thread Cogumelos Maravilha
"system_auth" is not my table.


On 04/11/2017 07:12 AM, Oskar Kjellin wrote:
> You changed to 6 nodes because you were running out of disk? But you
> still replicate 100% to all of them, so you don't gain anything.
>
>
>
> On 10 Apr 2017, at 13:48, Cogumelos Maravilha
> <cogumelosmaravi...@sapo.pt> wrote:
>
>> No.
>>
>> nodetool status, nodetool describecluster also nodetool ring shows a
>> correct cluster.
>>
>> Not all nodes need to be seeds, but they can be.
>>
>> I had also run: ALTER KEYSPACE system_auth WITH REPLICATION = { 'class' :
>> 'SimpleStrategy', 'replication_factor' : 6 } AND durable_writes = false;
>>
>> And the first command on the new node was `nodetool repair system_auth`
>>
>>
>> On 04/10/2017 12:37 PM, Chris Mawata wrote:
>>> Notice
>>> .SimpleSeedProvider{seeds=10.100.100.19, 10.100.100.85,
>>> 10.100.100.185, 10.100.100.161, 10.100.100.52, 10.100.1000.213};
>>>
>>> Why do you have all six of your nodes as seeds? Is it possible that
>>> the last one you added used itself as the seed and is now isolated?
>>>
>>> On Thu, Apr 6, 2017 at 6:48 AM, Cogumelos Maravilha
>>> <cogumelosmaravi...@sapo.pt> wrote:
>>>
>>> Yes C* is running as cassandra:
>>>
>>> cassand+  2267 1 99 10:18 ?00:02:56 java
>>> -Xloggc:/var/log/cassandra/gc.log -ea -XX:+UseThreadPriorities
>>> -XX:Threa...
>>>
>>> INFO  [main] 2017-04-06 10:35:42,956 Config.java:474 - Node
>>> configuration:[allocate_tokens_for_keyspace=null;
>>> authenticator=PasswordAuthenticator;
>>> authorizer=CassandraAuthorizer; auto_bootstrap=true;
>>> auto_snapshot=true; back_pressure_enabled=false;
>>> back_pressure_strategy=org.apache.cassandra.net.RateBasedBackPressure{
>>> high_ratio=0.9, factor=5, flow=FAST}; batch_size_fail_threshold_in_kb=50;
>>> batch_size_warn_threshold_in_kb=5;
>>> batchlog_replay_throttle_in_kb=1024; broadcast_address=null;
>>> broadcast_rpc_address=null;
>>> buffer_pool_use_heap_if_exhausted=true;
>>> cas_contention_timeout_in_ms=600; cdc_enabled=false;
>>> cdc_free_space_check_interval_ms=250; cdc_raw_directory=null;
>>> cdc_total_space_in_mb=0; client_encryption_options=;
>>> cluster_name=company; column_index_cache_size_in_kb=2;
>>> column_index_size_in_kb=64; commit_failure_policy=ignore;
>>> commitlog_compression=null;
>>> commitlog_directory=/mnt/cassandra/commitlog;
>>> commitlog_max_compression_buffers_in_pool=3;
>>> commitlog_periodic_queue_size=-1;
>>> commitlog_segment_size_in_mb=32; commitlog_sync=periodic;
>>> commitlog_sync_batch_window_in_ms=NaN;
>>> commitlog_sync_period_in_ms=1;
>>> commitlog_total_space_in_mb=null;
>>> compaction_large_partition_warning_threshold_mb=100;
>>> compaction_throughput_mb_per_sec=16; concurrent_compactors=null;
>>> concurrent_counter_writes=32;
>>> concurrent_materialized_view_writes=32; concurrent_reads=32;
>>> concurrent_replicates=null; concurrent_writes=32;
>>> counter_cache_keys_to_save=2147483647;
>>> counter_cache_save_period=7200; counter_cache_size_in_mb=null;
>>> counter_write_request_timeout_in_ms=600;
>>> credentials_cache_max_entries=1000;
>>> credentials_update_interval_in_ms=-1;
>>> credentials_validity_in_ms=2000; cross_node_timeout=false;
>>> data_file_directories=[Ljava.lang.String;@223f3642;
>>> disk_access_mode=auto; disk_failure_policy=ignore;
>>> disk_optimization_estimate_percentile=0.95;
>>> disk_optimization_page_cross_chance=0.1;
>>> disk_optimization_strategy=ssd; dynamic_snitch=true;
>>> dynamic_snitch_badness_threshold=0.1;
>>> dynamic_snitch_reset_interval_in_ms=60;
>>> dynamic_snitch_update_interval_in_ms=100;
>>> enable_scripted_user_defined_functions=false;
>>> enable_user_defined_functions=false;
>>> enable_user_defined_functions_threads=true;
>>> encryption_options=null; endpoint_snitch=SimpleSnitch;
>>> file_cache_size_in_mb=null; gc_log_threshold_in_ms=200;
>>> gc_warn_threshold_in_ms=1000;
>>> hinted_handoff_disabled_datacenters=[];
>>> hinted_handoff_enabled=true; hinted_handoff_throttle_in_kb=1024;
>>> hints_compression=null; hints_directory=/mnt/cassandra/hints;
>>> hints_flush_period_in_ms=1; incremental_backups=false;
>>> index_interval=null; index_summary_capacity_in_mb=null;
>>> index_summary_resize_interval_in_minutes=60; initial_token=null;
>>> inter_dc_stream_throughput_outbound_megabits_per_sec=200;
>>> inter_dc_tcp_nodelay=false; internode_authenticator=null;
>>> internode_compression=dc; internode_recv_buff_size_in_bytes=0;
>>> internode_send_buff_size_in_bytes=0;
>>> key_cache_keys_to_save=2147483647; key_cache_save_period=14400;
>>> key_cache_size_in_mb=null; listen_address=10.100.100.213;
>>> listen_interface=null; listen_interface_prefer_ipv6=false;
>>> listen_on_broadcast_address=false;
>>>
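On the replication point raised at the top of this thread: with `SimpleStrategy` and a replication factor equal to the node count, every node stores a full copy of the data, so adding nodes adds no capacity. A hedged illustration of lowering the factor (the numbers are illustrative, not a recommendation for system_auth specifically):

```sql
-- Illustrative only: at RF 3 on a 6-node cluster each row lives on 3 nodes,
-- so per-node disk usage drops as the cluster grows.
ALTER KEYSPACE system_auth
  WITH REPLICATION = { 'class' : 'SimpleStrategy', 'replication_factor' : 3 };

-- After changing replication, repair so replicas converge (shell):
--   nodetool repair system_auth
```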

IPv6-only host, can't seem to get Cassandra to bind to a public port

2017-04-11 Thread Martijn Pieters
I’m having issues getting a single-node Cassandra cluster to run on an Ubuntu 
16.04 VM with only IPv6 available. I’m running Oracle Java 8 
(8u121-1~webupd8~2) and Cassandra 3.10 (installed via the Debian packages at 
http://www.apache.org/dist/cassandra/debian).

I consistently get a “Protocol family unavailable” exception:

ERROR [main] 2017-04-11 09:54:23,991 CassandraDaemon.java:752 - Exception 
encountered during startup
java.lang.RuntimeException: java.net.SocketException: Protocol family 
unavailable
at 
org.apache.cassandra.net.MessagingService.getServerSockets(MessagingService.java:730)
 ~[apache-cassandra-3.10.jar:3.10]
at 
org.apache.cassandra.net.MessagingService.listen(MessagingService.java:664) 
~[apache-cassandra-3.10.jar:3.10]
at 
org.apache.cassandra.net.MessagingService.listen(MessagingService.java:648) 
~[apache-cassandra-3.10.jar:3.10]
at 
org.apache.cassandra.service.StorageService.prepareToJoin(StorageService.java:773)
 ~[apache-cassandra-3.10.jar:3.10]
at 
org.apache.cassandra.service.StorageService.initServer(StorageService.java:666) 
~[apache-cassandra-3.10.jar:3.10]
at 
org.apache.cassandra.service.StorageService.initServer(StorageService.java:612) 
~[apache-cassandra-3.10.jar:3.10]
at 
org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:394) 
[apache-cassandra-3.10.jar:3.10]
at 
org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:601) 
[apache-cassandra-3.10.jar:3.10]
at 
org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:735) 
[apache-cassandra-3.10.jar:3.10]
Caused by: java.net.SocketException: Protocol family unavailable
at sun.nio.ch.Net.bind0(Native Method) ~[na:1.8.0_121]
at sun.nio.ch.Net.bind(Net.java:433) ~[na:1.8.0_121]
at sun.nio.ch.Net.bind(Net.java:425) ~[na:1.8.0_121]
at 
sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223) 
~[na:1.8.0_121]
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74) 
~[na:1.8.0_121]
at 
org.apache.cassandra.net.MessagingService.getServerSockets(MessagingService.java:714)
 ~[apache-cassandra-3.10.jar:3.10]
... 8 common frames omitted

`lo` (loopback) has both `inet` and `inet6` addresses, but `eth0` has no `inet` 
addresses, only `inet6` entries (both a link-local and a global scope address 
are configured).

My configuration changes:

listen_address: 
listen_interface_prefer_ipv6: true

Tracing the exception through the source code shows that it is the 
listen_address value above that throws; changing it back to 
127.0.0.1 makes the server work again (but then I don’t get to use it on my 
local network). I tried both the link-local and the global scope IPv6 address.

I tried changing the JVM configuration to prefer IPv6 by editing 
/etc/cassandra/cassandra-env.sh:

--- etc/cassandra/cassandra-env.sh  2017-01-31 16:29:32.0 +
+++ /etc/cassandra/cassandra-env.sh 2017-04-11 09:52:51.45600 +
@@ -290,6 +290,9 @@
# to the location of the native libraries.
JVM_OPTS="$JVM_OPTS -Djava.library.path=$CASSANDRA_HOME/lib/sigar-bin"

+#JVM_OPTS="$JVM_OPTS -Djava.net.preferIPv4Stack=true"
+JVM_OPTS="$JVM_OPTS -Djava.net.preferIPv6Addresses=true"
+
JVM_OPTS="$JVM_OPTS $MX4J_ADDRESS"
JVM_OPTS="$JVM_OPTS $MX4J_PORT"
JVM_OPTS="$JVM_OPTS $JVM_EXTRA_OPTS"

But this makes no difference.

I also tried using `listen_interface` instead, but that only changes the error 
message to:

ERROR [main] 2017-04-11 10:35:16,426 CassandraDaemon.java:752 - Exception 
encountered during startup: Configured listen_interface "eth0" could not be 
found

What else can I do?

Martijn Pieters
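For reference, a sketch of the cassandra.yaml fragment under discussion; the address is a placeholder and none of this is a confirmed fix:

```yaml
# Hedged sketch of the settings discussed in this thread; the address is a
# placeholder for the host's global-scope IPv6 address.
listen_address: "2001:db8::10"     # global scope, not a fe80::/10 link-local
rpc_address: "2001:db8::10"
# Alternatively, name the interface and prefer its IPv6 address; as far as
# I know the *_prefer_ipv6 options only apply with the *_interface form:
# listen_interface: eth0
# listen_interface_prefer_ipv6: true
# rpc_interface: eth0
# rpc_interface_prefer_ipv6: true
```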


Re: Inconsistent data after adding a new DC and rebuilding

2017-04-11 Thread George Sigletos
Thanks for your reply. Yes, it would be nice to know the root cause.

Now running a full repair. Hopefully this will solve the problem.

On Tue, Apr 11, 2017 at 9:43 AM, Roland Otta wrote:

> well .. thats pretty much the same we saw in our environment (cassandra
> 3.7).
>
> in our case a full repair fixed the issues.
> but no doubt .. it would be more satisfying to know the root cause for
> that issue
>
> br,
> roland
>
>
> [rest of quoted thread snipped]


Deserializing a json string directly to a java class using Jackson?

2017-04-11 Thread Ali Akhtar
I have a table containing a column `foo`, which is a string holding JSON.

I have a class called `Foo` which maps to `foo_json` and can be serialized
/ deserialized using Jackson.

Is it possible to map the column as `private Foo foo` rather than
`private String foo`, instead of deserializing it manually?

From
https://docs.datastax.com/en/drivers/java/3.1/com/datastax/driver/extras/codecs/json/JacksonJsonCodec.html
it looks like one just has to add that maven dependency? Does anything else
have to be done?
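A hedged sketch of how that codec is typically registered with the 3.1 driver; `Foo` here is a minimal stand-in for the class from the question, and the contact point is an assumption:

```java
// Hedged sketch: requires the cassandra-driver-extras and Jackson artifacts
// on the classpath, in addition to the core driver.
import com.datastax.driver.core.Cluster;
import com.datastax.driver.extras.codecs.json.JacksonJsonCodec;

public class FooCodecSetup {

    // Minimal placeholder for the POJO from the question.
    public static class Foo {
        public String name;
    }

    public static Cluster connect() {
        Cluster cluster = Cluster.builder()
                                 .addContactPoint("127.0.0.1")
                                 .build();
        // Register the codec so the driver converts the JSON text column
        // to and from Foo using Jackson.
        cluster.getConfiguration().getCodecRegistry()
               .register(new JacksonJsonCodec<>(Foo.class));
        return cluster;
    }
}
```

One caveat (an assumption, not verified): for per-field use with the object mapper's `@Column(codec = ...)`, the codec class needs a no-arg constructor, so a small subclass of `JacksonJsonCodec` may be required.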


Re: IPv6-only host, can't seem to get Cassandra to bind to a public port

2017-04-11 Thread sai krishnam raju potturi
I got a similar error, and commenting out the line below helped.

JVM_OPTS="$JVM_OPTS -Djava.net.preferIPv4Stack=true"


Did you also include "rpc_interface_prefer_ipv6: true" in the YAML file?


thanks

Sai



On Tue, Apr 11, 2017 at 6:37 AM, Martijn Pieters  wrote:

> I’m having issues getting a single-node Cassandra cluster to run on a
> Ubuntu 16.04 VM with only IPv6 available. I’m running Oracle Java 8
> (8u121-1~webupd8~2), Cassandra 3.10 (installed via the Cassandra
> http://www.apache.org/dist/cassandra/debian packages.)
>
> [rest of quoted message snipped]


Re: IPv6-only host, can't seem to get Cassandra to bind to a public port

2017-04-11 Thread Martijn Pieters
From: sai krishnam raju potturi 
> I got a similar error, and commenting out the below line helped.
> JVM_OPTS="$JVM_OPTS -Djava.net.preferIPv4Stack=true"
>
> Did you also include "rpc_interface_prefer_ipv6: true" in the YAML file?

No luck at all here. Yes, I had commented out that line (and I also tried 
replacing it with `-Djava.net.preferIPv6Addresses=true`, as included in my 
email; I also deliberately introduced an error to make sure it was the right file).

It all *should* work, but doesn’t. :-(

I just tried again with “rpc_interface_prefer_ipv6: true” set as well, but 
without luck. I note that I have the default “rpc_address: localhost”, so it’ll 
bind to the lo loopback, which has IPv4 configured already. Note that using 
“rpc_address: ‘::1’” instead does not work (same error), so I can’t bind to 
the IPv6 localhost address either.

Martijn Pieters






Re: IPv6-only host, can't seem to get Cassandra to bind to a public port

2017-04-11 Thread sai krishnam raju potturi
We have included the IPv6 address with scope GLOBAL, not the IPv6 address
with scope LINK, in the YAML and topology files.

inet6 addr: 2001: *** : ** : ** : * : * :  :   Scope:Global

inet6 addr: fe80 :: *** :  :  :  Scope:Link


Not sure if this might be of relevance to the issue you are facing.


thanks

Sai



On Tue, Apr 11, 2017 at 10:29 AM, Martijn Pieters  wrote:

> From: sai krishnam raju potturi 
> > I got a similar error, and commenting out the below line helped.
> > JVM_OPTS="$JVM_OPTS -Djava.net.preferIPv4Stack=true"
> >
> > Did you also include "rpc_interface_prefer_ipv6: true" in the YAML file?
>
> No luck at all here. Yes, I had commented out that line (and also tried
> replacing it with `-Djava.net.preferIPv6Addresses=true`, included in my
> email. I also included an error to make sure it was the right file).
>
> It all *should* work, but doesn’t. :-(
>
> I just tried again with “rpc_interface_prefer_ipv6: true” set as well, but
> without luck. I note that I have the default “rpc_address: localhost”, so
> it’ll bind to the lo loopback, which has IPv4 configured already. Not that
> using “rpc_address: ‘::1’” instead works (same error, so I can’t bind to
> the IPv6 localhost address either).
>
> Martijn Pieters
>
>
>
>
>


Start Cassandra with Gossip disabled ?

2017-04-11 Thread Biscuit Ninja
We run an 8-node Cassandra v2.1.16 cluster (4 nodes in each of two discrete 
datacentres) and we're currently investigating a problem whereby 
restarting Cassandra on a node resulted in the filling of 
Eden/Survivor/Old and frequent GCs.


http://imgur.com/a/OR1dk

This hammered reads from our application tier (writes seemed okay), and 
until we determine the root cause, we'd like to be able to start 
Cassandra with gossip disabled.


Is this possible?
Thanks
.bN
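For reference, a hedged sketch of nearby knobs; as far as I know there is no single supported "start with gossip off" switch, so treat these as assumptions to verify against the 2.1 documentation:

```shell
# Hedged sketch. Option 1: start the node without joining the ring, so it
# serves no normal traffic until explicitly joined later:
JVM_OPTS="$JVM_OPTS -Dcassandra.join_ring=false"   # e.g. via cassandra-env.sh
# then later, once the node is healthy:
#   nodetool join

# Option 2: on an already-running node, gossip can be toggled:
nodetool disablegossip
nodetool enablegossip
```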


Re: Multiple nodes decommission

2017-04-11 Thread Jacob Shadix
Are you using vnodes? I typically do one-by-one as the decommission will
create additional load/network activity streaming data to the other nodes
as the token ranges are reassigned.

-- Jacob Shadix

On Sat, Apr 8, 2017 at 10:55 AM, Vlad  wrote:

> Hi,
>
> how multiple nodes should be decommissioned by "nodetool decommission"-
> one by one or in parallel ?
>
> Thanks.
>
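The one-by-one approach can be sketched as a loop that waits for each decommission to finish before starting the next (the host names and ssh access are assumptions):

```shell
# Hedged sketch: decommission nodes strictly one at a time.
# nodetool decommission blocks until streaming to the remaining
# replicas has completed, so the loop naturally serializes the moves.
for host in node4 node5 node6; do
    ssh "$host" nodetool decommission
    nodetool status    # verify the node has left the ring before continuing
done
```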


Re: Multiple nodes decommission

2017-04-11 Thread benjamin roth
I did not test it, but I'd bet that parallel decommissions will lead to
inconsistencies.
Each decommission results in range movements and range reassignments, which
become effective after a successful decommission.
If you start several decommissions at once, I guess the calculated
reassignments are invalid for at least one node after the first node
finishes the decommission process.

I hope someone will correct me if I am wrong.

2017-04-11 18:43 GMT+02:00 Jacob Shadix :

> Are you using vnodes? I typically do one-by-one as the decommission will
> create additional load/network activity streaming data to the other nodes
> as the token ranges are reassigned.
>
> -- Jacob Shadix
>
> On Sat, Apr 8, 2017 at 10:55 AM, Vlad  wrote:
>
>> Hi,
>>
>> how multiple nodes should be decommissioned by "nodetool decommission"-
>> one by one or in parallel ?
>>
>> Thanks.
>>
>
>


Re: Multiple nodes decommission

2017-04-11 Thread Jacob Shadix
Right! Another reason why I just stick with sequential decommissions. Maybe
someone here could shed some light on what happens under the covers if
parallel decommissions are kicked off.

-- Jacob Shadix

On Tue, Apr 11, 2017 at 12:55 PM, benjamin roth  wrote:

> I did not test it but I'd bet that parallel decommision will lead to
> inconsistencies.
> Each decommission results in range movements and range reassignments which
> becomes effective after a successful decommission.
> If you start several decommissions at once, I guess the calculated
> reassignments are invalid for at least one node after the first node
> finished the decommission process.
>
> I hope someone will correct me if i am wrong.
>
> 2017-04-11 18:43 GMT+02:00 Jacob Shadix :
>
>> Are you using vnodes? I typically do one-by-one as the decommission will
>> create additional load/network activity streaming data to the other nodes
>> as the token ranges are reassigned.
>>
>> -- Jacob Shadix
>>
>> On Sat, Apr 8, 2017 at 10:55 AM, Vlad  wrote:
>>
>>> Hi,
>>>
>>> how multiple nodes should be decommissioned by "nodetool decommission"-
>>> one by one or in parallel ?
>>>
>>> Thanks.
>>>
>>
>>
>


UNSUBSCRIBE

2017-04-11 Thread Lawrence Turcotte
UNSUBSCRIBE


Streaming errors during bootstrap

2017-04-11 Thread Jai Bheemsen Rao Dhanwada
Hello,

I am seeing streaming errors while adding new nodes (in the same DC) to the
cluster.

ERROR [STREAM-IN-/x.x.x.x] 2017-04-11 23:09:29,318 StreamSession.java:512 -
[Stream #a8d56c70-1f0b-11e7-921e-61bb8bdc19bb] Streaming error occurred
java.io.IOException: CF *465ed8d0-086c-11e6-9744-2900b5a9ab11* was dropped
during streaming
at
org.apache.cassandra.streaming.compress.CompressedStreamReader.read(CompressedStreamReader.java:77)
~[apache-cassandra-2.1.16.jar:2.1.16]
at
org.apache.cassandra.streaming.messages.IncomingFileMessage$1.deserialize(IncomingFileMessage.java:48)
~[apache-cassandra-2.1.16.jar:2.1.16]
at
org.apache.cassandra.streaming.messages.IncomingFileMessage$1.deserialize(IncomingFileMessage.java:38)
~[apache-cassandra-2.1.16.jar:2.1.16]
at
org.apache.cassandra.streaming.messages.StreamMessage.deserialize(StreamMessage.java:56)
~[apache-cassandra-2.1.16.jar:2.1.16]
at
org.apache.cassandra.streaming.ConnectionHandler$IncomingMessageHandler.run(ConnectionHandler.java:276)
~[apache-cassandra-2.1.16.jar:2.1.16]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_101]

The CF 465ed8d0-086c-11e6-9744-2900b5a9ab11 is actually present and all
the nodes are in sync. I am sure there are no network connectivity issues.
Not sure why this error is happening.

I tried to run repair/scrub on the CF with metadata
465ed8d0-086c-11e6-9744-2900b5a9ab11, but it didn't help.

Any idea what else to look for in this case?

Thanks in advance.


Re: UNSUBSCRIBE

2017-04-11 Thread Nate McCall
To unsubscribe from this list, please send an email to
user-unsubscr...@cassandra.apache.org

Thanks!

On Wed, Apr 12, 2017 at 6:37 AM, Lawrence Turcotte <
lawrence.turco...@gmail.com> wrote:

> UNSUBSCRIBE
>