Re: Cassandra DevCenter

2018-03-13 Thread phiroc
Good morning,

when I run 

groovy -cp .\cassandra-driver-core-3.4.0.jar;C:\DevCenter\plugins Cass1.groovy 

I get the following error message:


U:\workarea\ProjetsBNP\groo>run_Cass1.bat
"groovy vopts = "
Caught: java.lang.NoClassDefFoundError: 
com/google/common/util/concurrent/AsyncFunction
java.lang.NoClassDefFoundError: com/google/common/util/concurrent/AsyncFunction
at Cass1.retrieveCities(Cass1.groovy:25)
at Cass1.run(Cass1.groovy:67)
Caused by: java.lang.ClassNotFoundException: 
com.google.common.util.concurrent.AsyncFunction
... 2 more


Here's my Groovy code:



import com.datastax.driver.core.Cluster
import com.datastax.driver.core.Host
import com.datastax.driver.core.Metadata
import com.datastax.driver.core.Session
import com.datastax.driver.core.ResultSet
import com.datastax.driver.core.Row
import com.datastax.driver.core.ProtocolOptions.Compression
import java.sql.SQLException

def retrieveCities() {

final def hostsIp = 'xxx,bbb,ccc'
final def hosts = ''
// 
def Cluster cluster = 
Cluster.builder().addContactPoints(hostsIp.split(','))  <== line 25
.withPort(1234)
// com.datastax.driver.core.ProtocolOptions.Compression.LZ4
.withCompression(Compression.LZ4)
.withCredentials( '...' , '...')
.build()




As a reminder, I can't download missing dependencies, as I have no access to the
Internet.

As a result, I must rely only on cassandra-driver-core-3.4.0.jar and/or
the DevCenter jars.

Any help would be much appreciated.


Philippe


- Original Message -
From: "Jacques-Henri Berthemet"
To: user@cassandra.apache.org
Sent: Monday, March 12, 2018, 09:44:27
Subject: RE: Cassandra DevCenter




Hi, 



There is no DevCenter 2.x; the latest is 1.6. It would help if you provided the
jar names and the exceptions you encounter. Make sure you're not mixing Guava
versions from other dependencies. DevCenter uses the DataStax driver to connect
to Cassandra, so double-check the versions of the jars you need here:

https://mvnrepository.com/artifact/com.datastax.cassandra/cassandra-driver-core

Put only the jars listed for the driver version you have on your classpath and
it should work.




-- 

Jacques-Henri Berthemet 





From: Philippe de Rochambeau [mailto:phi...@free.fr] 
Sent: Saturday, March 10, 2018 6:56 PM 
To: user@cassandra.apache.org 
Subject: Re: Cassandra DevCenter 




Hi, 


thank you for replying. 


Unfortunately, the computer DevCenter is running on doesn’t have Internet 
access (for security reasons). As a result, I can’t use the pom.xml. 


Furthermore, I’ve tried running a Groovy program whose classpath included the 
DevCenter (2.x) lib directory, but to no avail as a Google dependency was 
missing (I can’t recall the dependency’s name). 


Because DevCenter manages to connect to Cassandra without downloading
dependencies, there's bound to be a way to drive DevCenter itself from Java or
Groovy.



On March 10, 2018, at 18:34, Goutham reddy < goutham.chiru...@gmail.com > wrote:






Get the JARs from the Cassandra lib folder and put them in your build path, or
else use a Maven project with a pom.xml to download them directly from the
repository.





Thanks and Regards, 


Goutham Reddy Aenugu. 





On Sat, Mar 10, 2018 at 9:30 AM Philippe de Rochambeau < phi...@free.fr > 
wrote: 



Hello, 
has anyone tried running CQL queries from a Java program using the jars 
provided with DevCenter? 
Many thanks. 
Philippe 


-- 




Regards 

Goutham Reddy




RE: Cassandra DevCenter

2018-03-13 Thread Jacques-Henri Berthemet
Hi,

Try that:
groovy -cp C:\DevCenter\plugins\*.jar Cass1.groovy 


--
Jacques-Henri Berthemet




Re: command to view yaml file setting in use on console

2018-03-13 Thread Oleksandr Shulgin
On Tue, Mar 13, 2018 at 2:43 AM, Jeff Jirsa  wrote:

> Cassandra-7622 went patch available today
>

Jeff,

Are you sure you didn't mistype the issue number?  I see:

https://issues.apache.org/jira/browse/CASSANDRA-7622

Summary: Implement virtual tables
Status: Open

--
Alex


Re: Cassandra DevCenter

2018-03-13 Thread phiroc

I'm now getting this:

Cass1.groovy: 15: unable to resolve class 
com.datastax.driver.core.ProtocolOptions.Compression
 @ line 15, column 1.
   import com.datastax.driver.core.ProtocolOptions.Compression
   ^

Cass1.groovy: 11: unable to resolve class com.datastax.driver.core.Metadata
 @ line 11, column 1.
   import com.datastax.driver.core.Metadata
   ^

Cass1.groovy: 13: unable to resolve class com.datastax.driver.core.ResultSet
 @ line 13, column 1.
   import com.datastax.driver.core.ResultSet
   ^
Cass1.groovy: 9: unable to resolve class com.datastax.driver.core.Cluster
 @ line 9, column 1.
   import com.datastax.driver.core.Cluster
   ^

Cass1.groovy: 10: unable to resolve class com.datastax.driver.core.Host
 @ line 10, column 1.
   import com.datastax.driver.core.Host
   ^
Cass1.groovy: 14: unable to resolve class com.datastax.driver.core.Row
 @ line 14, column 1.
   import com.datastax.driver.core.Row
   ^





RE: Cassandra DevCenter

2018-03-13 Thread Jacques-Henri Berthemet
And that?
groovy -cp C:\DevCenter\plugins\* Cass1.groovy

DevCenter\plugins contains all the needed jars; your problem is just a classpath
problem. If that does not work, you'll need to build the required classpath
manually: check the jars listed under "Compile dependencies" at
https://mvnrepository.com/artifact/com.datastax.cassandra/cassandra-driver-core/3.4.0
and assemble it yourself.
--
Jacques-Henri Berthemet


Re: Cassandra vs MySQL

2018-03-13 Thread Rahul Singh
Oliver,


Here’s the criteria I have for you:

1. Do you need massive concurrency on reads and writes ?

If not, you can replicate MySQL using master-slave, or consider Galera /
MariaDB master-master. I've not used it, but then again that doesn't mean it
doesn't work. If you have time to experiment, please do a comparison of Galera
vs. Cassandra. ;)

2. Do you plan on doing both OLTP and OLAP on the same data?

Cassandra can replicate data to different Datacenters so you can concurrently 
do heavy read and write on one Logical Datacenter and simultaneously have 
another Logical Datacenter for analytics.

3. Do you have a ridiculously strict SLA to maintain? And does it need to be 
global?

If you don’t need to be up and running all the time and don’t need a global 
platform, don’t bother using Cassandra.

Exporting a relational schema and importing it into Cassandra will be a box of
hurt. In my professional experience with Cassandra (the type of experience that
comes from people paying me to make judgments and decisions), the biggest
mistake is people thinking that since CQL is similar to SQL, it is just like
SQL. It's not. The keys and the literal absence of relationships mean that all
the tables should be "report tables" or "direct object tables." That being
said, if you don't do a lot of joins and arbitrary selects on any field,
Cassandra can help you achieve massive scale.

The statement that "Cassandra is going to die in a short time" is the same
thing people said about Java and .NET. They are still here decades later.
Cassandra has achieved critical mass: so much so that a company made a C++
version of it, Microsoft supports a global database-as-a-service version of it
called Cosmos, and DataStax supports huge global brands on a commercial build
of it. It's not going anywhere.


--
Rahul Singh
rahul.si...@anant.us

Anant Corporation

On Mar 12, 2018, 3:58 PM -0400, Oliver Ruebenacker , wrote:
>
>  Hello,
>
>   We have a project currently using MySQL single-node with 5-6TB of data and 
> some performance issues, and we plan to add data up to a total size of maybe 
> 25-30TB.
>
>   We are thinking of migrating to Cassandra. I have been trying to find 
> benchmarks or other guidelines to compare MySQL and Cassandra, but most of 
> them seem to be five years old or older.
>
>   Is there some good more recent material?
>
>   Thanks!
>
>  Best, Oliver
>
> --
> Oliver Ruebenacker
> Senior Software Engineer, Diabetes Portal, Broad Institute
>


Re: Cassandra DevCenter

2018-03-13 Thread phiroc
Same result.


RE: Cassandra DevCenter

2018-03-13 Thread Jacques-Henri Berthemet
Then you need to make the cp yourself as I described.

--
Jacques-Henri Berthemet


Re: Anomaly detection

2018-03-13 Thread Rahul Singh
I've used OpsCenter, New Relic, Splunk, and ELK, and all of them have ways to
visualize what's going on. Eventually I just forked a cfstats2csv Python
program and started making formatted Excel files, which made it easy to spot
anomalies and filter keyspaces/tables across nodes. I have some basic anomaly
detection based on standard deviation, but it's only a static, snapshot-based
detection. I think you want something that may be looking at the time series of
the whole dataset.

Regardless, cfstats and tpstats are good places to see what's going on and then
determine what you need to monitor via other tools.
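As a rough illustration of the static, standard-deviation snapshot check
described above, here is a minimal Python sketch; it assumes the cfstats output
has already been parsed into per-node numbers, and all names and values are
hypothetical:

import statistics

def flag_anomalies(metric_by_node, k=1.0):
    # Flag nodes whose metric deviates more than k standard deviations
    # from the mean across nodes -- a static, snapshot-based check.
    values = list(metric_by_node.values())
    if len(values) < 2:
        return []
    mean = statistics.mean(values)
    stdev = statistics.stdev(values)
    if stdev == 0:
        return []
    return [n for n, v in metric_by_node.items() if abs(v - mean) > k * stdev]

# Hypothetical per-node pending-compaction counts for one table:
print(flag_anomalies({"node1": 2, "node2": 3, "node3": 40}))  # -> ['node3']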

--
Rahul Singh
rahul.si...@anant.us

Anant Corporation

On Mar 12, 2018, 10:02 PM -0400, Fernando Ipar , wrote:
> Hello Salvatore,
>
> > On Mon, Mar 12, 2018 at 2:12 PM, D. Salvatore  
> > wrote:
> > > Hi Rahul,
> > > I was mainly thinking about performance anomaly detection but I am also 
> > > interested in other types such as fault detection, data or queries 
> > > anomalies.
> >
> > I know VividCortex (http://vividcortex.com) supports Cassandra (2.1 or 
> > greater) and I also know it does automatic (they call it adaptive) fault 
> > detection for MySQL. I took a quick look at their website and could not 
> > find an explicit list of features they support for Cassandra but it's 
> > possible that fault detection is one of them too, so if SaaS is an option 
> > I'd recommend you take a look at them.
> >
> > Regards,
> > Fernando Ipar
> > http://fernandoipar.com


Re: Cassandra at Instagram with Dikang Gu interview by Jeff Carpenter

2018-03-13 Thread Rahul Singh
I agree with Jeff. I believe the _best_ part of Cassandra is its networking,
replication, and fault tolerance. The storage engine is as dumb as disk or
memory. If we can make it fast by replacing it, great. Maybe even optimize it
in the JVM.

With new paradigms like blockchain entering into the mainstream - eventually I 
see bridges between Cassandra and blockchain for organizations that need speed 
as well as fault tolerance for the “ledger”.


--
Rahul Singh
rahul.si...@anant.us

Anant Corporation

On Mar 12, 2018, 7:59 PM -0400, Jeff Jirsa , wrote:
>
>
> > On Mon, Mar 12, 2018 at 3:58 PM, Carl Mueller 
> >  wrote:
> > >  Rocksandra can expand its non-Java footprint without rearchitecting
> > > the Java codebase. Or are there serious concerns with DataStax and the
> > > binary protocols?
> > >
> >
> > Rocksandra should eventually become part of Cassandra. The pluggable
> > storage has other benefits beyond avoiding JVM garbage.
> >
> > I don't know what "concerns with Datastax and the binary protocols" means, 
> > Apache Cassandra owns the protocol, not any company or driver.
> >
> >
> >


Re: Cassandra storage: Some thoughts

2018-03-13 Thread Vangelis Koukis
On Fri, Mar 09, 2018 at 07:53:17pm -0500, Rahul Singh wrote:
> Interesting. Can this be used in conjunction with bare metal? As in does it 
> present containers in place if the “real” node until the node is up and 
> running?
> 
> --
> Rahul Singh
> rahul.si...@anant.us
> 
> Anant Corporation
>
Hello Rahul,

Yes, Rok can be used in conjunction with bare metal.

It works at the block layer, and there is no limitation on whether the
storage being presented is consumed by a container running Cassandra, an
EC2 instance on ephemeral NVMe-backed storage, or an actual physical node
directly.

Similarly, you can be snapshotting local storage on physical nodes,
EC2 instances, or containers and present them to containers, EC2
instances or physical nodes in the same DC, or synchronize them to
another Rok deployment on a remote DC, and present them there.

Hope the above answers your question,
Vangelis.

> On Mar 9, 2018, 10:56 AM -0500, Vangelis Koukis , wrote:
> > Hello all,
> >
> > My name is Vangelis Koukis and I am a Founder and the CTO of Arrikto.
> >
> > I'm writing to share our thoughts on how people run distributed,
> > stateful applications such as Cassandra on modern infrastructure,
> > and would love to get the community's feedback and comments.
> >
> > The fundamental question is: Where does a Cassandra node find its data?
> > Does it run over local storage, e.g., a super-fast NVMe device, or does
> > it run over some sort of external, managed storage, e.g., EBS on AWS?
> >
> > Going in one of the two directions is a tradeoff between flexibility on
> > one hand, and performance/cost on the other.
> >
> > * External storage, e.g., EBS:
> >
> > Easy backups as thin/instant EBS snapshots, and easy node recovery
> > in the case of instance failure by re-attaching the EBS data volume
> > to a newly-created instance. But then, I/O bandwidth, I/O latency,
> > and cost suffer.
> >
> > * Local NVMe:
> >
> > Blazing fast, with very low latency, excellent bandwidth, a
> > fraction of the cost, but then it is not obvious how one backs up
> > their data, or recovers from node failure.
> >
> > At Arrikto we are building decentralized storage to tackle this problem
> > for cloud-native apps. Our software, Rok, allows you to run stateful
> > apps directly over fast, local NVMe storage on-prem or on the cloud, and
> > still be able to snapshot the containers and distribute them
> > efficiently: across machines of the same cluster, or across distinct
> > locations and administrative domains over a decentralized network.
> >
> > Rok runs on the side of Cassandra, which accesses local storage
> > directly. It only has to intervene during snapshot-based node recovery,
> > which is transparent to the application. It does not invoke an
> > application-wide data recovery and rebalancing operation, which would
> > put load on the whole cluster and impact application responsiveness.
> > Instead, it performs block-level recovery of this specific node from the
> > Rok snapshot store, e.g., S3, with predictable performance.
> >
> > This solves four important issues we have seen people running Cassandra
> > at scale face today:
> >
> > * Node recovery / node migration:
> >
> > If you lose an entire Cassandra node, then your database will
> > continue operating normally, as Rok in combination with your
> > Container Orchestrator (e.g., Kubernetes) will present another
> > Cassandra node. This node will have the data of the latest
> > snapshot that resides on the Rok snapshot store. In this case,
> > Cassandra only has to recover the changed parts, which is just a
> > small fraction of the node data, and does not cause CPU load on
> > the whole cluster. Similarly, you can migrate a Cassandra node
> > from one physical host to another, without depending on external,
> > EBS-like storage.
> >
> > * Backup and recovery:
> >
> > You can use Rok to take a full backup of your whole application,
> > along with the DB, as a group-consistent snapshot of its VMs or
> > containers, and store it externally. This does not depend on app-
> > or Cassandra-specific functionality.
> >
> > * Data mobility:
> >
> > You can synchronize these snapshots to different locations, e.g.,
> > across regions or cloud providers, and across administrative
> > domains, i.e., share them with others without giving them direct
> > access to your Cassandra DB. You can then spawn your entire
> > application stack in the new location.
> >
> > * Testing / analytics:
> >
> > Being able to spawn a copy of your Cassandra DB as a thin clone
> > means you can have test & dev workflows running in parallel, on
> > independent, mutable clones, with real data underneath. Similarly,
> > your analytics team can run their lengthy reporting and analytics
> > workloads on an independent clone of your transactional DB, on
> > completely distinct hardware, or even on a different location.
> >
> > So far, initial validation of our solution with early adopters shows
> > significant performanc

Re: Cassandra storage: Some thoughts

2018-03-13 Thread Vangelis Koukis
On Sat, Mar 10, 2018 at 04:35:05am +0100, Oleksandr Shulgin wrote:
> On 9 Mar 2018 16:56, "Vangelis Koukis"  wrote:
> > 
> > Hello all,
> > 
> > My name is Vangelis Koukis and I am a Founder and the CTO of Arrikto.
> > 
> > I'm writing to share our thoughts on how people run distributed,
> > stateful applications such as Cassandra on modern infrastructure,
> > and would love to get the community's feedback and comments.
> > 
>
> Thanks, that sounds interesting.
> 

Thank you Alex.

> > At Arrikto we are building decentralized storage to tackle this problem
> > for cloud-native apps. Our software, Rok
>
> 
> Do I understand correctly that there is only white paper available, but not
> any source code?
> 

I assume you are referring to the resources which are publicly available
on our website.

Yes, Rok is not open source, at least not for now.

> >  In this case,
> >   Cassandra only has to recover the changed parts, which is just a
> >   small fraction of the node data, and does not cause CPU load on
> >   the whole cluster.
> > 
> 
> How if not running a repair? And if it's a repair why would it not put CPU
> load on other nodes?
> 

Rok will present new local storage to the node, which will be a thin,
instant clone of its latest snapshot, and will be hydrated in the
background.

This process only involves the node under recovery, which communicates
with the Rok snapshot store directly, and happens with predictable
performance without impacting the rest of the cluster. Currently, we
recover at ~1GB/s when storing snapshots on S3.

At this point, the node has been recovered to a previous point in time,
so yes, a repair needs to run, to bring it up to date with the rest of
the cluster.

We assume this will be an incremental repair, in which case the load on
the other nodes will be minimal: instead of having to transfer the full
data set, they will only have to provide the recovered node with the
data that has changed since the snapshot was last taken. Moreover, to
participate in the repair they will be reading their SSTables from
local, NVMe-based storage which has very good performance.

Since Rok can take snapshots of the whole Cassandra deployment
periodically, the amount of data to be transferred depends on the
snapshot frequency. If we assume a standard snapshot frequency of once
every 15', we can expect that only a small fraction of the whole data
set will need to be repaired, for the node to be up-to-date again.
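As a concrete sketch of that last step: on Cassandra 2.2+ an incremental
repair is simply the default repair mode, so bringing the recovered node up to
date might look like the following (keyspace name illustrative):

nodetool repair mykeyspace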

Hope the above answers your question,
Vangelis.

-- 
Vangelis Koukis
CTO, Arrikto Inc.
3505 El Camino Real, Palo Alto, CA 94306
www.arrikto.com




Re: command to view yaml file setting in use on console

2018-03-13 Thread Jeff Jirsa
Not a typo

Look at sample output

-- 
Jeff Jirsa




Re: Row cache functionality - Some confusion

2018-03-13 Thread Rahul Singh
It's pretty clear to me that the only thing that gets put into the cache is
the top N rows.

https://github.com/apache/cassandra/blob/0db88242c66d3a7193a9ad836f9a515b3ac7f9fa/src/java/org/apache/cassandra/db/SinglePartitionReadCommand.java#L523

It may fetch more, but it doesn't cache it. It may get more if it's not the
full partition being cached, but there's no code that inserts into the
CacheService except

https://github.com/apache/cassandra/blob/0db88242c66d3a7193a9ad836f9a515b3ac7f9fa/src/java/org/apache/cassandra/db/SinglePartitionReadCommand.java#L528
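For reference, the number of head-of-partition rows that get cached is set per
table through the caching option; a hypothetical example (keyspace and table
names are illustrative):

ALTER TABLE mykeyspace.mytable WITH caching = {'keys': 'ALL', 'rows_per_partition': '100'};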



--
Rahul Singh
rahul.si...@anant.us

Anant Corporation

On Mar 12, 2018, 8:56 AM -0400, Hannu Kröger , wrote:
>
> > On 12 Mar 2018, at 14:45, Rahul Singh  wrote:
> >
> > I may be wrong, but what I’ve read and used in the past assumes that the 
> > “first” N rows are cached and the clustering key design is how I change 
> > what N rows are put into memory. Looking at the code, it seems that’s the 
> > case.
>
> So we agree that we row cache is storing only N rows from the beginning of 
> the partition. So if only the last row in a partition is read, then it 
> probably doesn’t get cached assuming there are more than N rows in a 
> partition?
>
> > The language of the comment basically says that it holds in cache what 
> > satisfies the query if and only if it’s the head of the partition, if not 
> > it fetches it and saves it - I dont interpret it differently from what I 
> > have seen in the documentation.
>
> Hmm, I’m trying to understand this. Does it mean that it stores the results 
> in cache if it is head and if not, it will fetch the head and store that 
> (instead of the results for the query) ?
>
> Hannu


Best way to Drop Tombstones/after GC Grace

2018-03-13 Thread Madhu-Nosql
I know of a few ways to drop tombstones, mainly to avoid Chaos-Monkey-style
zombie data, i.e., data resurrection (you delete data and it comes back in the
future).

I am thinking of the options below; let me know if you have any best practice
for this:

1. Using nodetool garbagecollect
2. only_purge_repaired_tombstones
3. At the table level, setting gc_grace_seconds to zero and compacting
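For reference, the corresponding commands might look like the sketch below
(keyspace and table names are illustrative; the compaction class shown for
option 2 assumes size-tiered compaction):

nodetool garbagecollect mykeyspace mytable
ALTER TABLE mykeyspace.mytable WITH compaction =
  {'class': 'SizeTieredCompactionStrategy', 'only_purge_repaired_tombstones': 'true'};
ALTER TABLE mykeyspace.mytable WITH gc_grace_seconds = 0;
nodetool compact mykeyspace mytable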

Thanks,
Madhu


Re: Best way to Drop Tombstones/after GC Grace

2018-03-13 Thread Rahul Singh
Do you anticipate this happening all the time or are you just trying to rescue?

Nodetool scrub can be useful too.


--
Rahul Singh
rahul.si...@anant.us

Anant Corporation



Re: Best way to Drop Tombstones/after GC Grace

2018-03-13 Thread Madhu-Nosql
Rahul,

Nodetool scrub is good for rescue, but what if it's happening all the time?



Re: Best way to Drop Tombstones/after GC Grace

2018-03-13 Thread Rahul Singh
Are you writing nulls or does the data cycle that way?

--
Rahul Singh
rahul.si...@anant.us

Anant Corporation



Re: Best way to Drop Tombstones/after GC Grace

2018-03-13 Thread Madhu-Nosql
We assume that's because of nulls.



Migration of keyspace to another new cluster

2018-03-13 Thread Goutham reddy
Hi,
We have a requirement to migrate a single keyspace's data from one cluster to
another, after which we no longer need the old cluster. Can you suggest the
best possible ways we can achieve this?

Regards
Goutham Reddy


Fast Writes to Cassandra Failing Through Python Script

2018-03-13 Thread Faraz Mateen
 Hi everyone,

I seem to have hit a problem in which writing to Cassandra through a Python
script fails and also occasionally causes a Cassandra node to crash. Here are
the details of my problem.

I have a Python-based streaming application that reads data from Kafka at a
high rate and pushes it to Cassandra through DataStax's Cassandra driver for
Python. My Cassandra setup consists of 3 nodes and a replication factor of 2.
The problem is that my Python application crashes after writing ~12000 records
with the following error:

Exception: Error from server: code=1100 [Coordinator node timed out
waiting for replica nodes' responses] message="Operation timed out -
received only 0 responses." info={'received_responses': 0,
'consistency': 'LOCAL_ONE', 'required_responses': 1}

Sometimes the Python application crashes with this traceback:

cassandra.OperationTimedOut: errors={'10.128.1.1': 'Client request
timeout. See Session.execute[_async](timeout)'}, last_host=10.128.1.1

With the error above, one of the Cassandra nodes crashes as well. When I look
at the Cassandra system logs (/var/log/cassandra/system.log), I see the
following exception:

https://gist.github.com/farazmateen/e7aa5749f963ad2293f8be0ca1ccdc22/e3fd274af32c20eb9f534849a31734dcd33745b4

According to the suggestion in the post linked below, I have set my JVM heap
size to 8GB, but the problem still persists:
https://dzone.com/articles/diagnosing-and-fixing-cassandra-timeouts

*Cluster:*

   - Cassandra version 3.9
   - 3 nodes, with 8 cores and 30GB of RAM each.
   - Keyspace has a replication factor of 2.
   - Write consistency is LOCAL_ONE
   - MAX HEAP SIZE is set to 8GB.

Any help will be greatly appreciated.

--
Faraz


Re: Fast Writes to Cassandra Failing Through Python Script

2018-03-13 Thread Dinesh Joshi
What does your schema look like? Are you seeing any warnings or errors in the 
server log?
Dinesh 


Re: Fast Writes to Cassandra Failing Through Python Script

2018-03-13 Thread Goutham reddy
Faraz,
Can you share a code snippet showing how you are trying to save the entity
objects into Cassandra?

Thanks and Regards,
Goutham Reddy Aenugu.

Regards
Goutham Reddy


Re: Fast Writes to Cassandra Failing Through Python Script

2018-03-13 Thread Bruce Tietjen
 The following won't address any server performance issues, but will allow
your application to continue to run even if there are client or server
timeouts:

Your Python code should wrap all Cassandra statement-execution calls in
a try/except block to catch any errors and handle them appropriately.
For timeouts, you might consider retrying the statement.

You may also want to consider proactively setting your client and/or
server timeouts so your application sees fewer failures.


Any production code should include proper error handling, and during initial
development and testing it may be helpful to allow your application to keep
running so you get a better idea of if or when different timeouts occur.

see:
   cassandra.Timeout
   cassandra.WriteTimeout
   cassandra.ReadTimeout

also:
   https://datastax.github.io/python-driver/api/cassandra.html
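A minimal sketch of that retry pattern with the DataStax Python driver follows;
the statement, parameters, retry budget, and 30-second client timeout are all
illustrative:

import time
from cassandra import OperationTimedOut, WriteTimeout

MAX_RETRIES = 3  # hypothetical retry budget

def execute_with_retry(session, statement, params=None):
    # Retry timed-out requests with a short backoff instead of letting a
    # single timeout kill the whole streaming application.
    for attempt in range(1, MAX_RETRIES + 1):
        try:
            return session.execute(statement, params, timeout=30)
        except (OperationTimedOut, WriteTimeout):
            if attempt == MAX_RETRIES:
                raise
            time.sleep(2 ** attempt)  # back off 2s, then 4s, before retrying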







Re: Migration of keyspace to another new cluster

2018-03-13 Thread dba newsql
sstableloader
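A hypothetical invocation (target hosts, keyspace, table directory, and the
table ID suffix are all illustrative), run per table against the SSTable
directory copied from the old cluster:

sstableloader -d new_node1,new_node2 /var/lib/cassandra/data/mykeyspace/mytable-a1b2c3d4/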



Re: What versions should the documentation support now?

2018-03-13 Thread kurt greaves
>
> I’ve never heard of anyone shipping docs for multiple versions, I don’t
> know why we’d do that.  You can get the docs for any version you need by
> downloading C*, the docs are included.  I’m a firm -1 on changing that
> process.

We should still host versioned docs on the website however. Either that or
we specify "since version x" for each component in the docs with notes on
behaviour.


Re: What versions should the documentation support now?

2018-03-13 Thread Jonathan Haddad
Yes, I agree, we should host versioned docs.  I don't think anyone is
against it, it's a matter of someone having the time to do it.

On Tue, Mar 13, 2018 at 6:14 PM kurt greaves  wrote:

> I’ve never heard of anyone shipping docs for multiple versions, I don’t
>> know why we’d do that.  You can get the docs for any version you need by
>> downloading C*, the docs are included.  I’m a firm -1 on changing that
>> process.
>
> We should still host versioned docs on the website however. Either that or
> we specify "since version x" for each component in the docs with notes on
> behaviour.
>


Re: Migration of keyspace to another new cluster

2018-03-13 Thread Nate McCall
> Hi,
> We have a requirement to migrate the data of only one keyspace from one
> cluster to another, and we no longer need the old cluster afterwards. Can
> you suggest the best possible ways we can achieve this?
>
> Regards
> Goutham Reddy
>


Temporarily treat the new cluster as a new datacenter of the current
cluster and follow the process for adding a datacenter for that keyspace.
When complete, remove the old datacenter/cluster in the same way.
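
A minimal sketch of the keyspace replication change at the heart of that
process, issued through the Python driver (the contact point, keyspace name,
datacenter names, and replication factors are placeholders for your own):

from cassandra.cluster import Cluster

# Connect to the existing cluster; '10.0.0.1' is a placeholder contact point.
session = Cluster(['10.0.0.1']).connect()

# Extend the keyspace's replication so it also covers the new datacenter.
session.execute(
    "ALTER KEYSPACE my_keyspace WITH replication = "
    "{'class': 'NetworkTopologyStrategy', 'dc_old': 2, 'dc_new': 2}"
)

After that, running 'nodetool rebuild -- dc_old' on each node in the new
datacenter streams the existing data over; once it finishes, drop 'dc_old'
from the replication map before decommissioning the old cluster.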


sstableupgrade is giving commitlog dir is not readable and writable for the DSE

2018-03-13 Thread AI Rumman
Hi,

I am trying to upgrade DataStax Enterprise (DSE) 4.8.14 to 5.0.12, which
corresponds to Cassandra versions 2.0.1 to 3.4.0.

The software upgrade operation went fine. But during sstable upgrade, I am
getting the following error:

nodetool upgradesstables
error: commitlog directory '/cassandra_dir/commitlog' or, if it does not
already exist, an existing parent directory of it, is not readable and
writable for the DSE. Check file system and configuration.
-- StackTrace --
org.apache.cassandra.exceptions.ConfigurationException: commitlog directory
'/cassandra_dir/commitlog' or, if it does not already exist, an existing
parent directory of it, is not readable and writable for the DSE. Check
file system and configuration.
at
org.apache.cassandra.config.DatabaseDescriptor.resolveAndCheckDirectory(DatabaseDescriptor.java:801)
at
org.apache.cassandra.config.DatabaseDescriptor.applyConfig(DatabaseDescriptor.java:538)
at
org.apache.cassandra.config.DatabaseDescriptor.<clinit>(DatabaseDescriptor.java:131)
at
org.apache.cassandra.tools.NodeProbe.checkJobs(NodeProbe.java:274)
at
org.apache.cassandra.tools.NodeProbe.upgradeSSTables(NodeProbe.java:328)
at
org.apache.cassandra.tools.nodetool.UpgradeSSTable.execute(UpgradeSSTable.java:54)
at
org.apache.cassandra.tools.NodeTool$NodeToolCmd.run(NodeTool.java:253)
at org.apache.cassandra.tools.NodeTool.main(NodeTool.java:167)

Here is my directory structure:

ls -ld /cassandra_dir/*
drwxr-xr-x  2 cassandra cassandra 4096 Mar 14 01:23 /cassandra_dir/commitlog
drwxr-xr-x 11 cassandra cassandra 4096 Mar 14 01:15 /cassandra_dir/data
drwxr-xr-x  2 cassandra cassandra 4096 Mar 14 01:14 /cassandra_dir/hints
drwxr-xr-x  2 cassandra cassandra 4096 Mar 14 01:04
/cassandra_dir/saved_caches

Can anyone please tell me why I am getting the above error?

Thanks.


Changing a node IP address

2018-03-13 Thread Cyril Scetbon
Hey,

I always thought that changing the IP address of a node required using the
same procedure as for a dead node, part of which consists of starting
Cassandra with the -Dcassandra.replace_address option, as indicated at
https://docs.datastax.com/en/cassandra/3.0/cassandra/operations/opsReplaceNode.html

However, it’s said at
https://docs.datastax.com/en/dse/5.1/dse-admin/datastax_enterprise/operations/opsChangeIp.html
that we can simply start the new node after making some changes in the
configuration files that could be impacted (seed list in cassandra.yaml,
cassandra-topology.properties). Is this a feature of DSE? Is it something
that works with the community version? How does it work exactly? Does the
replacement happen because the node has the same data as the replaced node
and something like an id is found in the local files? The token list?

Thank you
—
Cyril Scetbon



Re: sstableupgrade is giving commitlog dir is not readable and writable for the DSE

2018-03-13 Thread AI Rumman
I found the issue. I was running the nodetool command as a different user.
Sorry for the noise.

Thanks.

On Tue, Mar 13, 2018 at 6:45 PM, AI Rumman  wrote:

> Hi,
>
> I am trying to upgrade DataStax Enterprise (DSE) 4.8.14 to 5.0.12, which
> corresponds to Cassandra versions 2.0.1 to 3.4.0.
>
> The software upgrade operation went fine. But during sstable upgrade, I am
> getting the following error:
>
> nodetool upgradesstables
> error: commitlog directory '/cassandra_dir/commitlog' or, if it does not
> already exist, an existing parent directory of it, is not readable and
> writable for the DSE. Check file system and configuration.
> -- StackTrace --
> org.apache.cassandra.exceptions.ConfigurationException: commitlog
> directory '/cassandra_dir/commitlog' or, if it does not already exist, an
> existing parent directory of it, is not readable and writable for the DSE.
> Check file system and configuration.
> at org.apache.cassandra.config.DatabaseDescriptor.
> resolveAndCheckDirectory(DatabaseDescriptor.java:801)
> at org.apache.cassandra.config.DatabaseDescriptor.applyConfig(
> DatabaseDescriptor.java:538)
> at org.apache.cassandra.config.DatabaseDescriptor.<clinit>(
> DatabaseDescriptor.java:131)
> at org.apache.cassandra.tools.NodeProbe.checkJobs(NodeProbe.
> java:274)
> at org.apache.cassandra.tools.NodeProbe.upgradeSSTables(
> NodeProbe.java:328)
> at org.apache.cassandra.tools.nodetool.UpgradeSSTable.
> execute(UpgradeSSTable.java:54)
> at org.apache.cassandra.tools.NodeTool$NodeToolCmd.run(
> NodeTool.java:253)
> at org.apache.cassandra.tools.NodeTool.main(NodeTool.java:167)
>
> Here is my directory structure:
>
> ls -ld /cassandra_dir/*
> drwxr-xr-x  2 cassandra cassandra 4096 Mar 14 01:23
> /cassandra_dir/commitlog
> drwxr-xr-x 11 cassandra cassandra 4096 Mar 14 01:15 /cassandra_dir/data
> drwxr-xr-x  2 cassandra cassandra 4096 Mar 14 01:14 /cassandra_dir/hints
> drwxr-xr-x  2 cassandra cassandra 4096 Mar 14 01:04
> /cassandra_dir/saved_caches
>
> Can anyone please tell me why I am getting the above error?
>
> Thanks.
>
>


Re: Changing a node IP address

2018-03-13 Thread Jeff Jirsa
If you're just trying to change an IP, you can stop the node, change the
IP, and restart the node, and it'll be fine (change it everywhere).

Replacing a node is different: replacing is when a node dies and you're
replacing it with a new node that doesn't have any data. The
-Dcassandra.replace_address option tells the starting instance it needs to
look for a dead host and get all of the data that host should have had.



On Tue, Mar 13, 2018 at 6:57 PM, Cyril Scetbon 
wrote:

> Hey,
>
> I always thought that changing the IP address of a node required using
> the same procedure as for a dead node, part of which consists of starting
> Cassandra with the -Dcassandra.replace_address option, as indicated at
> https://docs.datastax.com/en/cassandra/3.0/cassandra/operations/opsReplaceNode.html
>
> However, it’s said at
> https://docs.datastax.com/en/dse/5.1/dse-admin/datastax_enterprise/operations/opsChangeIp.html
> that we can simply start the new node after making some changes in the
> configuration files that could be impacted (seed list in cassandra.yaml,
> cassandra-topology.properties). Is this a feature of DSE? Is it something
> that works with the community version? How does it work exactly? Does the
> replacement happen because the node has the same data as the replaced node
> and something like an id is found in the local files? The token list?
>
> Thank you
> —
> Cyril Scetbon
>
>


RE: What versions should the documentation support now?

2018-03-13 Thread Kenneth Brotman
I made sub directories “2_x” and “3_x” under docs and put a copy of the doc in 
each.  No links were changed yet.  We can work on the files first and discuss 
how we want to change the template and links.  I did the pull request already.

 

Kenneth Brotman

 

From: Jonathan Haddad [mailto:j...@jonhaddad.com] 
Sent: Tuesday, March 13, 2018 6:19 PM
To: user@cassandra.apache.org
Subject: Re: What versions should the documentation support now?

 

Yes, I agree, we should host versioned docs.  I don't think anyone is against 
it, it's a matter of someone having the time to do it.

 

On Tue, Mar 13, 2018 at 6:14 PM kurt greaves  wrote:

I’ve never heard of anyone shipping docs for multiple versions, I don’t know 
why we’d do that.  You can get the docs for any version you need by downloading 
C*, the docs are included.  I’m a firm -1 on changing that process.

We should still host versioned docs on the website however. Either that or we 
specify "since version x" for each component in the docs with notes on 
behaviour.




Re: RE: What versions should the documentation support now?

2018-03-13 Thread Dinesh Joshi
Kenneth,
Only the 4.x docs should go in trunk. If you would like to contribute docs to 
the 2.x and/or 3.x releases, please make pull requests against branches for 
those versions.
During the normal development process, the docs should be updated in trunk. When a 
release is cut from trunk, any further fixes to the docs pertaining to that 
release should go into that branch. This is in principle the same process that 
the code follows. So the docs will live with their respective branches. You 
should not put the documentation for older releases in trunk because it will 
end up confusing the user.
It looks like the in-tree docs were introduced in 4.x. They seem to also be 
present in the 3.11 branch. If you're inclined, you might back port them to the 
older 3.x & 2.x releases and update them.
Personally, I think focusing on making the 4.x docs awesome is a better use of 
your time.
Thanks,
Dinesh 

On Tuesday, March 13, 2018, 11:03:04 PM PDT, Kenneth Brotman 
 wrote:  
 
I made sub directories “2_x” and “3_x” under docs and put a copy of the doc in 
each.  No links were changed yet.  We can work on the files first and discuss 
how we want to change the template and links.  I did the pull request already.

  

Kenneth Brotman

  

From: Jonathan Haddad [mailto:j...@jonhaddad.com] 
Sent: Tuesday, March 13, 2018 6:19 PM
To: user@cassandra.apache.org
Subject: Re: What versions should the documentation support now?

  

Yes, I agree, we should host versioned docs.  I don't think anyone is against 
it, it's a matter of someone having the time to do it.

  

On Tue, Mar 13, 2018 at 6:14 PM kurt greaves  wrote:



I’ve never heard of anyone shipping docs for multiple versions, I don’t know 
why we’d do that.  You can get the docs for any version you need by downloading 
C*, the docs are included.  I’m a firm -1 on changing that process.


We should still host versioned docs on the website however. Either that or we 
specify "since version x" for each component in the docs with notes on 
behaviour.


  

RE: RE: What versions should the documentation support now?

2018-03-13 Thread Kenneth Brotman
I show a 3.0 and a 3.11 branch but no 4.0.  I’m at 
https://github.com/apache/cassandra .

 

 

From: Dinesh Joshi [mailto:dinesh.jo...@yahoo.com.INVALID] 
Sent: Tuesday, March 13, 2018 11:30 PM
To: user@cassandra.apache.org
Subject: Re: RE: What versions should the documentation support now?

 

Kenneth,

 

Only the 4.x docs should go in trunk. If you would like to contribute docs to 
the 2.x and/or 3.x releases, please make pull requests against branches for 
those versions.

 

During the normal development process, the docs should be updated in trunk. When a 
release is cut from trunk, any further fixes to the docs pertaining to that 
release should go into that branch. This is in principle the same process that 
the code follows. So the docs will live with their respective branches. You 
should not put the documentation for older releases in trunk because it will 
end up confusing the user.

 

It looks like the in-tree docs were introduced in 4.x. They seem to also be 
present in the 3.11 branch. If you're inclined, you might back port them to the 
older 3.x & 2.x releases and update them.

 

Personally, I think focusing on making the 4.x docs awesome is a better use of 
your time.

 

Thanks,

 

Dinesh

 

 

On Tuesday, March 13, 2018, 11:03:04 PM PDT, Kenneth Brotman 
 wrote: 

 

 

I made sub directories “2_x” and “3_x” under docs and put a copy of the doc in 
each.  No links were changed yet.  We can work on the files first and discuss 
how we want to change the template and links.  I did the pull request already.

 

Kenneth Brotman

 

From: Jonathan Haddad [mailto:j...@jonhaddad.com] 
Sent: Tuesday, March 13, 2018 6:19 PM
To: user@cassandra.apache.org
Subject: Re: What versions should the documentation support now?

 

Yes, I agree, we should host versioned docs.  I don't think anyone is against 
it, it's a matter of someone having the time to do it.

 

On Tue, Mar 13, 2018 at 6:14 PM kurt greaves  wrote:

I’ve never heard of anyone shipping docs for multiple versions, I don’t know 
why we’d do that.  You can get the docs for any version you need by downloading 
C*, the docs are included.  I’m a firm -1 on changing that process.

We should still host versioned docs on the website however. Either that or we 
specify "since version x" for each component in the docs with notes on 
behaviour.




Re: RE: RE: What versions should the documentation support now?

2018-03-13 Thread Dinesh Joshi
trunk is the next release, which is 4.0, so you won't find a branch named 4.0 yet.
Dinesh 

On Tuesday, March 13, 2018, 11:39:44 PM PDT, Kenneth Brotman 
 wrote:  
 
I show a 3.0 and a 3.11 branch but no 4.0.  I’m at 
https://github.com/apache/cassandra .    

  

  

From: Dinesh Joshi [mailto:dinesh.jo...@yahoo.com.INVALID] 
Sent: Tuesday, March 13, 2018 11:30 PM
To: user@cassandra.apache.org
Subject: Re: RE: What versions should the documentation support now?

  

Kenneth,

  

Only the 4.x docs should go in trunk. If you would like to contribute docs to 
the 2.x and/or 3.x releases, please make pull requests against branches for 
those versions.

  

During the normal development process, the docs should be updated in trunk. When a 
release is cut from trunk, any further fixes to the docs pertaining to that 
release should go into that branch. This is in principle the same process that 
the code follows. So the docs will live with their respective branches. You 
should not put the documentation for older releases in trunk because it will 
end up confusing the user.

  

It looks like the in-tree docs were introduced in 4.x. They seem to also be 
present in the 3.11 branch. If you're inclined, you might back port them to the 
older 3.x & 2.x releases and update them.

  

Personally, I think focusing on making the 4.x docs awesome is a better use of 
your time.

  

Thanks,

  

Dinesh

  

  

On Tuesday, March 13, 2018, 11:03:04 PM PDT, Kenneth Brotman 
 wrote: 

  

  

I made sub directories “2_x” and “3_x” under docs and put a copy of the doc in 
each.  No links were changed yet.  We can work on the files first and discuss 
how we want to change the template and links.  I did the pull request already.

 

Kenneth Brotman

 

From: Jonathan Haddad [mailto:j...@jonhaddad.com] 
Sent: Tuesday, March 13, 2018 6:19 PM
To: user@cassandra.apache.org
Subject: Re: What versions should the documentation support now?

 

Yes, I agree, we should host versioned docs.  I don't think anyone is against 
it, it's a matter of someone having the time to do it.

 

On Tue, Mar 13, 2018 at 6:14 PM kurt greaves  wrote:



I’ve never heard of anyone shipping docs for multiple versions, I don’t know 
why we’d do that.  You can get the docs for any version you need by downloading 
C*, the docs are included.  I’m a firm -1 on changing that process.


We should still host versioned docs on the website however. Either that or we 
specify "since version x" for each component in the docs with notes on 
behaviour.
