Re: Optimizing for connections

2018-12-21 Thread Eric Stevens
RE #2, some have had good results from having coordinator-only nodes:
https://www.slideshare.net/DataStax/optimizing-your-cluster-with-coordinator-nodes-eric-lubow-simplereach-cassandra-summit-2016

Assuming finite resources, however, it might be better to make sure you have
good token awareness in your application and put those extra nodes in the
main cluster.
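
For illustration, a minimal sketch of token-aware routing with the Node.js
driver the original poster mentioned (contact points and DC name are
placeholders, and statements must be prepared so the driver can compute the
routing key):

const cassandra = require('cassandra-driver');

// Token awareness routes each request straight to a replica that owns
// the partition, skipping the extra coordinator hop.
const loadBalancing = new cassandra.policies.loadBalancing.TokenAwarePolicy(
  new cassandra.policies.loadBalancing.DCAwareRoundRobinPolicy('dc1'));

const client = new cassandra.Client({
  contactPoints: ['10.0.0.1', '10.0.0.2'],  // placeholders
  localDataCenter: 'dc1',                   // placeholder
  policies: { loadBalancing }
});

// Prepared statements carry partition key metadata, which enables routing.
client.execute('SELECT * FROM ks.tbl WHERE id = ?', ['some-id'],
  { prepare: true });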

On Fri, Dec 21, 2018 at 12:34 AM Rahul Singh 
wrote:

> See inline
>
> Rahul Singh
> Chief Executive Officer
> m 202.905.2818
>
> Anant Corporation
> 1010 Wisconsin Ave NW, Suite 250
> Washington, D.C. 20007
>
> We build and manage digital business technology platforms.
> On Dec 9, 2018, 2:02 PM -0500, Devaki, Srinivas ,
> wrote:
>
> Hi Guys,
>
> I have a couple of questions regarding connections to Cassandra:
>
> 1. What is the recommended number of connections per Cassandra node?
>
>
> Depends on hardware.
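>
> For a rough idea, the connection count is controlled from the driver side;
> a minimal sketch with the Node.js driver (the numbers are illustrative,
> not recommendations):
>
> const cassandra = require('cassandra-driver');
> const client = new cassandra.Client({
>   contactPoints: ['10.0.0.1'],   // placeholder
>   localDataCenter: 'dc1',        // placeholder
>   pooling: {
>     coreConnectionsPerHost: {
>       [cassandra.types.distance.local]: 2,   // illustrative values
>       [cassandra.types.distance.remote]: 1
>     }
>   }
> });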
>
> 2. Is it a good idea to create coordinator nodes (with `num_tokens: 0`) and
> whitelist only those hosts on the client side, so that the main worker
> nodes are isolated from handling connection threads?
>
>
> Defeats the purpose of having a masterless system.
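>
> If you did want to pin clients to specific hosts anyway, the client-side
> half would be a whitelist load-balancing policy; a sketch with the Node.js
> driver (addresses are placeholders):
>
> const cassandra = require('cassandra-driver');
> const lb = new cassandra.policies.loadBalancing.WhiteListPolicy(
>   new cassandra.policies.loadBalancing.RoundRobinPolicy(),
>   ['10.0.0.10:9042', '10.0.0.11:9042']);  // coordinator-only nodes
> const client = new cassandra.Client({
>   contactPoints: ['10.0.0.10'],
>   localDataCenter: 'dc1',
>   policies: { loadBalancing: lb }
> });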
>
> 3. Does the request time on the client side include connect time?
>
>
> Who is measuring?
>
>
> 4. Is there any hard limit on the number of connections that can be set on
> Cassandra?
>
>
>
> Read :
> https://stackoverflow.com/questions/33562374/cassandra-throttling-workload
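>
> Cassandra also exposes per-node caps in cassandra.yaml (a sketch; the
> values shown are the defaults, and -1 means unlimited):
>
> # cassandra.yaml -- values shown are the defaults
> native_transport_max_concurrent_connections: -1
> native_transport_max_concurrent_connections_per_ip: -1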
>
> Thanks a lot for your help
>
>


Writes and Reads with high latency

2018-12-21 Thread Marco Gasparini
hello all,

I have one DC of 3 nodes running Cassandra 3.11.3, with consistency level
ONE and Java 1.8.0_191.

Every day, many Node.js programs send data to the Cassandra cluster via the
Node.js cassandra-driver.
Every day I get about 600k requests. Each request makes the server:
1_ READ some data in Cassandra (by an id; usually I get 3 records),
2_ DELETE one of those records,
3_ WRITE the new data into Cassandra.

So every day I make many deletes.
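
In simplified form, each request does roughly this (the table and column
names below are placeholders, not my real schema):

const readQ   = 'SELECT id, ts, payload FROM ks.events WHERE id = ?';
const deleteQ = 'DELETE FROM ks.events WHERE id = ? AND ts = ?';
const writeQ  = 'INSERT INTO ks.events (id, ts, payload) VALUES (?, ?, ?)';

async function handleRequest(client, id, newTs, newPayload) {
  // 1_ READ by id (usually ~3 records come back)
  const rs = await client.execute(readQ, [id], { prepare: true });
  // 2_ DELETE one of those records
  const victim = rs.rows[0];
  await client.execute(deleteQ, [victim.id, victim.ts], { prepare: true });
  // 3_ WRITE the new data
  await client.execute(writeQ, [id, newTs, newPayload], { prepare: true });
}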

Every day I find errors like:
"All host(s) tried for query failed. First host tried, 10.8.0.10:9042: Host
considered as DOWN. See innerErrors"
"Server timeout during write query at consistency LOCAL_ONE (0 peer(s)
acknowledged the write over 1 required)"
"Server timeout during write query at consistency SERIAL (0 peer(s)
acknowledged the write over 1 required)"
"Server timeout during read query at consistency LOCAL_ONE (0 peer(s)
acknowledged the read over 1 required)"

nodetool tablehistograms tells me this:

Percentile  SSTables  Write Latency  Read Latency  Partition Size  Cell Count
                           (micros)      (micros)         (bytes)
50%             8.00         379.02       1955.67          379022           8
75%            10.00         785.94     155469.30          654949          17
95%            12.00       17436.92     268650.95         1629722          35
98%            12.00       25109.16     322381.14         2346799          42
99%             12.00      30130.99     386857.37         3379391          50
Min              0.00          6.87         88.15             104           0
Max             12.00      43388.63     386857.37        20924300         179

At the 99th percentile the write and read latencies are pretty high, but I
don't know how to improve them.
I can provide more statistics if needed.

Is there any improvement I can make to Cassandra's configuration so that I
don't lose any data?

Thanks

Regards
Marco


RE: [EXTERNAL] Writes and Reads with high latency

2018-12-21 Thread Durity, Sean R
Can you provide the schema and the queries? What is the RF of the keyspace for 
the data? Are you using any Retry policy on your Cluster object?
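
For reference, in the Node.js driver you mention, a retry policy is supplied
via the client options rather than a Cluster object; a sketch using the
driver's default policy (the DC name is a placeholder):

const cassandra = require('cassandra-driver');
const client = new cassandra.Client({
  contactPoints: ['10.8.0.10'],   // address from the error message
  localDataCenter: 'dc1',         // placeholder
  policies: { retry: new cassandra.policies.retry.RetryPolicy() }
});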


Sean Durity

From: Marco Gasparini 
Sent: Friday, December 21, 2018 10:45 AM
To: user@cassandra.apache.org
Subject: [EXTERNAL] Writes and Reads with high latency



