Hi All
Could this be due to how the "DESC" functionality changed via
https://issues.apache.org/jira/browse/CASSANDRA-14825? Earlier, the client
drivers generated the schema output, so on 3.11.x we were able to see
the schema of COMPACT tables, but now on Cassandra 4.0.x we are seeing the
warning.
Regards
Hi All
As I understand it, there was a plan to drop *Compact Storage* support with
*Cassandra 4*, but a few issues were identified later which resulted in continued
support for Compact Storage in Cassandra 4. My cluster with a few old "compact
storage" tables was able to come up on Cassandra 4.0.5. Bu
Hi Sean,
thanks for the quick answer. I have applied your suggestion and tested it on
several environments; everything is working fine.
Other communication protected by SSL such as server-to-server and
client-to-server is working without problems as well.
Regards,
Maxim.
On Fri, Apr 30, 2021 at 3:1
.org/jira/projects/CASSANDRA/summary> with the
provided reproduction steps.
Thanks,
Paulo
On Tue, May 4, 2021 at 18:22, Bowen Song wrote:
Hi all,
I was using the cqlsh from Cassandra 4.0 RC1 and trying to connect
to a Cassandra 3.11 cluster, and it does not appear t
Hi Bowen,
This seems like a bug to me, please kindly file an issue on
https://issues.apache.org/jira/projects/CASSANDRA/summary with the provided
reproduction steps.
Thanks,
Paulo
On Tue, May 4, 2021 at 18:22, Bowen Song wrote:
> Hi all,
>
>
> I was using the cqlsh fr
Hi all,
I was using the cqlsh from Cassandra 4.0 RC1 and trying to connect to a
Cassandra 3.11 cluster, and it does not appear to be working correctly.
Specifically, the "desc" command does not work at all.
Steps to reproduce:
# ensure you have docker installed a
Try adding this into the SSL section of your cqlshrc file:
version = SSLv23
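For reference, a minimal sketch of the [ssl] section in cqlshrc (the certificate path is a placeholder for your own file):
[ssl]
version = SSLv23
certfile = /path/to/rootca.crt
validate = true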
Sean Durity
From: Maxim Parkachov
Sent: Friday, April 30, 2021 8:57 AM
To: user@cassandra.apache.org; d...@cassandra.apache.org
Subject: [EXTERNAL] Cassandra 3.11 cqlsh doesn't work with latest JDK
Hi everyone,
OpenJDK Runtime Environment (build 1.8.0_292-b10)
OpenJDK 64-Bit Server VM (build 25.292-b10, mixed mode)
Now when I try to connect to my local instance of Cassandra with cqlsh, I'm
getting an error:
$ cqlsh --ssl -u cassandra -p cassandra
Connection error: ('Unable to connect to any servers',
There isn't, no. Cheers!
>
Hi
I am storing image data as `base64` in rows of my table. If I query all records
then the image data (which is large text) makes it difficult to read results in
cqlsh. Is there a `cqlsh` method which can just tell me if the column is empty
or not?
Something like
select isPresent(image
there a way to export the TTL using CQLsh or DSBulk?
>>
>> On Thu, Jul 16, 2020 at 11:20 AM Alex Ott wrote:
>>
>>> if you didn't export TTL explicitly, and didn't load it back, then
>>> you'll get not expirable data.
>>>
look into a series of the blog posts that I sent, I think that it should be
in the 4th post
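For illustration, a minimal sketch of carrying TTL and writetime over by hand (keyspace, table, and column names are placeholders); DSBulk's unload/load with a custom -query can automate the same idea:
SELECT pk, val, ttl(val), writetime(val) FROM ks.src_table;
INSERT INTO ks.dst_table (pk, val) VALUES (1, 'x') USING TTL 86400 AND TIMESTAMP 1594910000000000;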
On Thu, Jul 16, 2020 at 8:27 PM Jai Bheemsen Rao Dhanwada <
jaibheem...@gmail.com> wrote:
> okay, is there a way to export the TTL using CQLsh or DSBulk?
>
> On Thu, Jul 16, 2020 at 11
okay, is there a way to export the TTL using CQLsh or DSBulk?
On Thu, Jul 16, 2020 at 11:20 AM Alex Ott wrote:
> if you didn't export TTL explicitly, and didn't load it back, then you'll
> get not expirable data.
>
> On Thu, Jul 16, 2020 at 7:48 PM Jai Bheemse
; time but the TTL value is showing as null. Is this expected? Does this mean
> this record will never expire after the insert?
> Is there any alternative to preserve the TTL ?
>
> In the new Table inserted with Cqlsh and Dsbulk
> cqlsh > SELECT ttl(secret) from ks_blah.cf_blah ;
>
I tried to verify the metadata. In the case of writetime it is set to the insert
time, but the TTL value is showing as null. Is this expected? Does this mean
this record will never expire after the insert?
Is there any alternative to preserve the TTL?
In the new table, inserted with cqlsh and DSBulk
ean by "preserving metadata" ? would you mind
>> explaining?
>>
>> On Tue, Jul 14, 2020 at 8:50 AM Jai Bheemsen Rao Dhanwada <
>> jaibheem...@gmail.com> wrote:
>>
>>> Thank you for the suggestions
>>>
>>> On Tue, Jul 14, 2020 at
ibheem...@gmail.com> wrote:
>
>> Thank you for the suggestions
>>
>> On Tue, Jul 14, 2020 at 1:42 AM Alex Ott wrote:
>>
>>> CQLSH definitely won't work for that amount of data, so you need to use
>>> other tools.
>>>
>>>
What exactly do you mean by "preserving metadata"? Would you mind
explaining?
On Tue, Jul 14, 2020 at 8:50 AM Jai Bheemsen Rao Dhanwada <
jaibheem...@gmail.com> wrote:
> Thank you for the suggestions
>
> On Tue, Jul 14, 2020 at 1:42 AM Alex Ott wrote:
>
>> CQLSH definitely
Thank you for the suggestions
On Tue, Jul 14, 2020 at 1:42 AM Alex Ott wrote:
> CQLSH definitely won't work for that amount of data, so you need to use
> other tools.
>
> But before selecting them, you need to define requirements. For example:
>
>1. Are you copying th
CQLSH definitely won't work for that amount of data, so you need to use
other tools.
But before selecting them, you need to define requirements. For example:
1. Are you copying the data into tables with exactly the same structure?
2. Do you need to preserve metadata, like, writetime
o copy some data from one cassandra cluster to another
> cassandra cluster using the CQLSH copy command. Is this the good approach
> if the dataset size on the source cluster is very high(500G - 1TB)? If not
> what is the safe approach? and are there any limitations/known issues to
> keep in mind before attempting this?
>
Hello,
I would like to copy some data from one Cassandra cluster to another
Cassandra cluster using the CQLSH COPY command. Is this a good approach
if the dataset size on the source cluster is very high (500G - 1TB)? If not,
what is the safe approach? And are there any limitations/known issues to
> I am looking for Cassandra GUI that supports cqlsh connection to Cassandra
> node through bastion/jump host using ssh key.
>
>
>
> Thanks,
>
> Bhavesh
>
Hi,
I am looking for Cassandra GUI that supports cqlsh connection to Cassandra node
through bastion/jump host using ssh key.
Thanks,
Bhavesh
On Sat, Jun 29, 2019 at 6:19 AM Nimbus Lin wrote:
>
> On the 2nd question, would you like to tell me how to change a
> write's and a read's consistency level separately in cqlsh?
>
Not that I know of special syntax for that, but you may add an explicit
"CONSIST
n JConsole later.
On the 2nd question, could you tell me how to change a write's
and a read's consistency level separately in cqlsh?
Otherwise, how does the documented R + W > RF (replication factor) rule guarantee
strongly consistent writes and reads?
Thank you!
Si
Do separate queries for each partition you want. There's no benefit
in using the IN() clause here, and performance is significantly worse
with multi-partition IN(), especially if the partitions are small.
On Sun, May 5, 2019 at 4:52 AM Soheil Pourbafrani wrote:
>
> Hi,
>
> I
Hi,
I want to run a cqlsh query on a Cassandra table using IN:
SELECT * from data WHERE nid = 'value' AND mm IN (201905,201904) AND
tid = 'value2' AND ts >= 155639466 AND ts <= 155699946 ;
The nid and mm columns are the partition key and ts is the clustering key.
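Following the advice above, the IN clause could be split into one query per partition, for example:
SELECT * from data WHERE nid = 'value' AND mm = 201905 AND tid = 'value2' AND ts >= 155639466 AND ts <= 155699946;
SELECT * from data WHERE nid = 'value' AND mm = 201904 AND tid = 'value2' AND ts >= 155639466 AND ts <= 155699946;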
Env:
[cqlsh 5.0.1 | Cassandra 3.11.2 | CQL spec 3.4.4 | Native protocol v4]
I am trying to ingest a CSV that has dates in MM/DD/YYYY format (%m/%d/%Y).
While trying to load, I am providing WITH DATETIMEFORMAT = '%m/%d/%Y'
but am still getting errored out: time data '03/12/2019
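For reference, the general COPY FROM shape being attempted would look something like this (table and column names are placeholders; the target column is assumed to be a timestamp):
COPY ks.events (id, event_date) FROM 'data.csv' WITH HEADER = TRUE AND DATETIMEFORMAT = '%m/%d/%Y';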
ility, extremely low
>>> latency queries (on known access patterns), high volume/low latency writes,
>>> easy scalability, etc. then you are going to have to rethink how you model
>>> the data.
>> scalability, etc. then you are going to have to rethink how you model the
>> data.
>> Sean Durity
>>
>>
>>
>> *From:* Kenneth Brotman
>> *Sent:* Thursday, February 07, 2019 7:01 AM
>> *To:* us
, easy
> scalability, etc. then you are going to have to rethink how you model the
> data.
> Sean Durity
>
>
>
> *From:* Kenneth Brotman
> *Sent:* Thursday, February 07, 2019 7:01 AM
> *To:* user@cassandra.apache.org
> *Subject:* [EXTERNAL] R
] RE: SASI queries- cqlsh vs java driver
Peter,
Sounds like you may need to use a different architecture. Perhaps you need
something like Presto or Kafka as a part of the solution. If the data from the
legacy system is wrong for Cassandra it’s an ETL problem? You’d have to
transform the data
that a proper data model
for Cassandra can be used.
From: Peter Heitman [mailto:pe...@heitman.us]
Sent: Wednesday, February 06, 2019 10:05 PM
To: user@cassandra.apache.org
Subject: Re: SASI queries- cqlsh vs java driver
Yes, I have read the material. The problem is that the application has a
> Kenneth Brotman
>
>
>
> *From:* Peter Heitman [mailto:pe...@heitman.us]
> *Sent:* Wednesday, February 06, 2019 6:33 PM
>
>
> *To:*
itman.us]
Sent: Wednesday, February 06, 2019 6:33 PM
To: user@cassandra.apache.org
Subject: Re: SASI queries- cqlsh vs java driver
Yes, I "know" that allow filtering is a sign of a (possibly fatal) inefficient
data model. I haven't figured out how to do it correctly yet
On Thu,
se ALLOW FILTERING in the queries. That is not recommended.
>
>
>
> Kenneth Brotman
>
>
>
> *From:* Peter Heitman [mailto:pe...@heitman.us]
> *Sent:* Wednesday, February 06, 2019 6:09 PM
> *To:* user@cassandra.apache.org
> *Subject:* Re: SASI queries- cqlsh vs java
queries- cqlsh vs java driver
You are completely right! My problem is that I am trying to port code for SQL
to CQL for an application that provides the user with a relatively general
search facility. The original implementation didn't worry about secondary
indexes - it just took advanta
>
> Kenneth Brotman
>
>
>
> *From:* Peter Heitman [mailto:pe...@heitman.us]
> *Sent:* Tuesday, February 05, 2019 6:59 PM
> *To:* user@cassandra.apache.org
> *Subject:* Re: SASI queries- cqlsh vs java driver
>
>
>
> The table and secondary indexes look generally lik
PM
To: user@cassandra.apache.org
Subject: Re: SASI queries- cqlsh vs java driver
The table and secondary indexes look generally like this. Note that I have
changed the names of many of the columns to be generic since they aren't
important to the question as far as I know. I left the a
On Tue, Feb 5, 2019 at 3:33 PM Oleksandr Petrov
wrote:
> Could you post full table schema (names obfuscated, if required) with
> index creation statements and queries?
>
> On Mon, Feb 4, 2019 at 10:04 AM Jacques-Henri Berthemet <
> jacques-henri.berthe...@genesys.com> wrote:
>
;user@cassandra.apache.org"
> *Date: *Monday 4 February 2019 at 07:17
> *To: *"user@cassandra.apache.org"
> *Subject: *SASI queries- cqlsh vs java driver
>
>
>
> When I create a SASI index on a secondary column, from cqlsh I can execute
> a query
>
>
: "user@cassandra.apache.org"
Date: Monday 4 February 2019 at 07:17
To: "user@cassandra.apache.org"
Subject: SASI queries- cqlsh vs java driver
When I create a SASI index on a secondary column, from cqlsh I can execute a
query
SELECT blah FROM foo WHERE IN ('mytext')
When I create a SASI index on a secondary column, from cqlsh I can execute
a query
SELECT blah FROM foo WHERE IN ('mytext') ALLOW FILTERING;
but not from the java driver:
SELECT blah FROM foo WHERE IN :val ALLOW FILTERING
Here I get an
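For context, a generic sketch of what a SASI index plus query pair looks like (index, table, and column names here are placeholders, not the actual schema from this thread):
CREATE CUSTOM INDEX foo_col_idx ON foo (some_col) USING 'org.apache.cassandra.index.sasi.SASIIndex';
SELECT blah FROM foo WHERE some_col = 'mytext' ALLOW FILTERING;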
Hello,
I am trying to copy the content of a materialized view to a CSV using the
cqlsh COPY command (doc here :
https://docs.datastax.com/en/cql/3.3/cql/cql_reference/cqlshCopy.html).
When I use the command on a regular table it works perfectly, but it is not
working when I am doing it on a
ded to do count in cassandra better if u can avoid it
>
>
> On Fri, Aug 24, 2018, 4:06 PM Vitaliy Semochkin wrote:
>>
>> Hi,
>>
>> i'm running count query for a very small table (less than 1000 000 records).
>> When the amount of records gets to 800 000 i r
than 1000 000
> records).
> When the amount of records gets to 800 000 i receive read timeout
> error in cqlsh.
> I tried to run cqlsh with option --request-timeout=3600, but receive same
> error,
> what should I do in order not to recieve timeout e
Hi,
I'm running a count query for a very small table (less than 1,000,000 records).
When the amount of records gets to 800,000 I receive a read timeout
error in cqlsh.
I tried to run cqlsh with the option --request-timeout=3600, but receive the same error.
What should I do in order not to receive ti
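If a count is unavoidable, one alternative sometimes suggested (assuming DSBulk is installed; keyspace and table names are placeholders) is its count mode, which scans by token range instead of issuing a single aggregate query:
dsbulk count -k myks -t mytable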
:15 AM, Anup Shirolkar <
anup.shirol...@instaclustr.com> wrote:
> Hi,
>
> The error shows that, the cqlsh connection with down node is failed.
> So, you should debug why it happened.
>
> Although, you have mentioned other node in cqlsh command '10.0.0.154'
> my g
Hi,
The error shows that the cqlsh connection to the down node failed,
so you should debug why that happened.
Although you have mentioned the other node in the cqlsh command ('10.0.0.154'),
my guess is the down node was present in the connection pool, hence a
connection to it was attempted.
I
) replicas.
cqlsh 10.0.0.154 -e "COPY X.Y TO 'backup/X.Y' WITH NUMPROCESSES=1"
Using 1 child processes
Starting copy of X.Y with columns [key, column1, value].
2018-06-29 19:12:23,661 Failed to create connection pool for new host
10.0.0.47:
Traceback (most recent call last):
F
rote:
>
> Hi,
> I've upgraded Cassandra from 2.1.6 to 3.0.9 on three nodes cluster. After
> upgrade
> cqlsh shows following error when trying to run "use {keyspace};" command:
> 'ResponseFuture' object has no attribute 'is_schema_agreed'
>
Hi,
I'm getting the following error on the node when trying to connect to Cassandra
through cqlsh with SSL enabled:
io.netty.handler.codec.DecoderException: javax.net.ssl.SSLHandshakeException:
Client requested protocol TLSv1 not enabled or not supported
I'm running the C* code
Looks like a bug, could you open a jira?
> On Nov 2, 2017, at 2:08 AM, Mikhail Tsaplin wrote:
>
> Hi,
> I've upgraded Cassandra from 2.1.6 to 3.0.9 on three nodes cluster. After
> upgrade
> cqlsh shows following error when trying to run "use {keyspace};" comma
Hi,
I've upgraded Cassandra from 2.1.6 to 3.0.9 on a three-node cluster. After the
upgrade, cqlsh shows the following error when trying to run the "use {keyspace};" command:
'ResponseFuture' object has no attribute 'is_schema_agreed'
Actual upgrade was done on Ubuntu 16.04 by
Hi Suresh,
cqlsh COPY does batches intelligently by only grouping inserts targeting
the same partition in a batch.
As of version 3.6, C* will not emit the "batch size exceeded" errors if all
statements in a batch belong to the same partition (CASSANDRA-13467
<https://issues.ap
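For illustration, a minimal COPY FROM sketch with the batching/chunking knobs exposed (keyspace, table, columns, and file name are placeholders):
COPY ks.my_table (id, col1, col2) FROM 'my_data.csv' WITH HEADER = TRUE AND CHUNKSIZE = 1000 AND MAXBATCHSIZE = 20 AND NUMPROCESSES = 4;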
Hi All,
Can someone provide me a code snippet for the cqlsh COPY from a CSV file?
I just want to know how the COPY mechanism works compared to normal
insert/commit, to avoid the batch size exceeding the limit.
Thanks,
Suresh.
Using COPY .. TO you can export using the DELIMITER option, does that help?
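For example, a minimal sketch (keyspace, table, and output path are placeholders):
COPY ks.my_table TO 'out.csv' WITH DELIMITER = ';' AND HEADER = TRUE;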
> On Aug 15, 2017, at 9:01 PM, Harikrishnan A wrote:
>
> Thank you all
>
> Regards,
> Hari
>
>
> On Tuesday, August 15, 2017 12:55 AM, Erick Ramirez
> wrote:
>
>
> +1 to J
Thank you all
Regards,Hari
On Tuesday, August 15, 2017 12:55 AM, Erick Ramirez
wrote:
+1 to Jim and Tobin. cqlsh wasn't designed for what you're trying to achieve.
Cheers!
On Tue, Aug 15, 2017 at 1:34 AM, Tobin Landricombe wrote:
Can't change the delimiter (I'm o
+1 to Jim and Tobin. cqlsh wasn't designed for what you're trying to
achieve. Cheers!
On Tue, Aug 15, 2017 at 1:34 AM, Tobin Landricombe
wrote:
> Can't change the delimiter (I'm on cqlsh 5.0.1). Best I can offer is
> https://docs.datastax.com/en/cql/3.3/cql/cql_
Can't change the delimiter (I'm on cqlsh 5.0.1). Best I can offer is
https://docs.datastax.com/en/cql/3.3/cql/cql_reference/cqlshExpand.html
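That page covers cqlsh's EXPAND command, which prints each row vertically; roughly (table name is a placeholder):
cqlsh> EXPAND ON;
cqlsh> SELECT * FROM ks.my_table LIMIT 1;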
> On 14 Aug 2017, at 16:17, Jim Witschey wrote:
>
> Not knowing the problem you're trying to solve, I'm going to guess
>
Not knowing the problem you're trying to solve, I'm going to guess
cqlsh is a bad tool for this job. If you want to put the results of
CQL queries into a shell pipeline, a small custom script using a
driver is probably a better fit, and should be possible to write
without m
I have column values containing the pipe character, hence I'm unable to replace this default
delimiter in the output.
Thanks Hari
On Monday, August 14, 2017 12:12 AM, algermissen1971
wrote:
On 14.08.2017, at 07:49, Harikrishnan A wrote:
Hello,
When I execute cqlsh -e "SELECT stat
> On 14.08.2017, at 07:49, Harikrishnan A wrote:
>
> Hello,
>
> When I execute cqlsh -e "SELECT statement .." , it gives the output with a
> pipe ('|') separator. Is there anyway I can change this default delimiter in
> the output of cqlsh -e "
Hello,
When I execute cqlsh -e "SELECT statement .." , it gives the output with a
pipe ('|') separator. Is there any way I can change this default delimiter in
the output of cqlsh -e "SELECT statement .."?
Thanks & Regards, Hari
lel. I wait for 1 query to complete(with schema agreement) before
>> firing another one.
>>
>> I want to dig deeper into what all things happen in C* at time of CF
>> creation to understand more about the limitation of number of keyspaces
>> which can be created. C
r into what all things happen in C* at time of CF
> creation to understand more about the limitation of number of keyspaces
> which can be created. Can you please point me to the corresponding source
> code? Specifically if you can also point me to the this 1MB per CF thingy,
> it would be
ically
if you can also point me to the this 1MB per CF thingy, it would be great.
Best Regards,
Saumitra
On Mon, Dec 19, 2016 at 11:41 PM, Vladimir Yudovin <vla...@winguzone.com>
wrote:
Hi,
Question: Does C* reads some schema/metadata on calling cqlsh
an you please point me to the corresponding source
> code? Specifically if you can also point me to the this 1MB per CF thingy,
> it would be great.
>
>
> Best Regards,
> Saumitra
> On Mon, Dec 19, 2016 at 11:41 PM, Vladimir Yudovin
> wrote:
>
>
19, 2016 at 11:41 PM, Vladimir Yudovin <vla...@winguzone.com>
wrote:
Hi,
Question: Does C* reads some schema/metadata on calling cqlsh, which is causing
timeout with large number of keyspaces?
A lot ). cqlsh reads schemas, cluster topology, each node tokens, etc. You can
just c
ing source
code? Specifically, if you can also point me to this 1MB per CF thingy,
it would be great.
Best Regards,
Saumitra
On Mon, Dec 19, 2016 at 11:41 PM, Vladimir Yudovin
wrote:
> Hi,
>
> *Question*: Does C* reads some schema/metadata on calling cqlsh, which is
> caus
Hi,
Question: Does C* read some schema/metadata on calling cqlsh, which is causing a
timeout with a large number of keyspaces?
A lot :). cqlsh reads schemas, cluster topology, each node's tokens, etc. You can
just capture TCP port 9042 (unless you use SSL) and view all the negotiation
between cqlsh
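For instance, a capture along these lines would show that startup chatter (interface name is a placeholder):
tcpdump -i eth0 -w cqlsh-startup.pcap port 9042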
Hi All,
I have a 2-node cluster (32 GB RAM / 8 CPU) running 3.0.10 and I created 50
keyspaces in it. Each keyspace has 25 CFs. The column count in each CF ranges
between 5 and 30.
I am getting a few issues once the keyspace count reaches ~50.
*Issue 1:*
When I try to use cqlsh, I get a timeout.
*$ cqlsh
It may be possible that you were using the old version of cqlsh? `which
cqlsh` on your upgraded nodes might point to the old install path, or a
copied version somewhere in your $PATH, perhaps.
Doing a fresh install and checking was a good idea, and it does show
that using the current version
Ok, I tried with a new empty one node cluster of the same DSE version and
cqlsh works without hiccups.
So, the whole issue exists because I upgraded from Cassandra 2.1.11.
The procedure I followed for the upgrade was very simple:
- nodetool drain (on all nodes)
- shutdown all nodes
- Uncompressed
7.11 or 2.7.12?
>
> Kind regards,
> Rajesh Radhakrishnan
>
> --
> *From:* Ioannis Zafiropoulos [john...@gmail.com]
> *Sent:* 27 October 2016 22:16
> *To:* user@cassandra.apache.org
> *Subject:* cqlsh fails to connect
>
> I upgraded DSE 4.8
Hi John Z,
Did you try running with the latest Python 2.7.11 or 2.7.12?
Kind regards,
Rajesh Radhakrishnan
From: Ioannis Zafiropoulos [john...@gmail.com]
Sent: 27 October 2016 22:16
To: user@cassandra.apache.org
Subject: cqlsh fails to connect
I upgraded DSE
I upgraded DSE 4.8.9 to 5.0.3, that is, from Cassandra 2.1.11 to 3.0.9.
I used the DSE 5.0.3 tarball installation. The Cassandra cluster is up and running
OK and I am able to connect through DBeaver.
Tried a lot of things and cannot connect with cqlsh:
Connection error: ('Unable to connect to any se
CASSANDRA-10959 <https://issues.apache.org/jira/browse/CASSANDRA-10959> made
the control connection timeout be the same as the connect timeout. This
patch is available since 2.2.5, 3.0.3, 3.3.
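On a cqlsh new enough to include that change, raising the timeouts from the command line should look roughly like this (host is a placeholder):
cqlsh --connect-timeout=10 --request-timeout=60 10.0.0.1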
On Fri, Oct 14, 2016 at 11:45 AM, joseph gao wrote:
> I've found the problem, in cqlsh
I've found the problem. In the cqlsh file, find all of the Cluster constructor calls,
like 'conn = Cluster(xxx, xxx)'. At the end, add the
parameter control_connection_timeout=float(_SOME_MS_VALUE_). As below:
conn = Cluster(contact_points=(self.hostname,), port=self.port,
cql_version=self.c
Hi,
We are using Cassandra 3.6 and I have been facing this issue for a while.
When I connect to a Cassandra cluster using cqlsh and disconnect the
network while keeping cqlsh open, I get really high CPU utilization on the client
from the cqlsh Python process. On network reconnect, things return back to normal
$Segment.get(LocalCache.java:2195)
> ~[guava-16.0.jar:na]
>
>
> On Tue, Sep 20, 2016 at 11:12 AM, George Sigletos
> wrote:
>
>> I am also getting the same error:
>> cqlsh -u cassandra -p cassandra
>>
>> Connection error: ('Unable to connect to
.jar:na]
at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2195)
~[guava-16.0.jar:na]
On Tue, Sep 20, 2016 at 11:12 AM, George Sigletos
wrote:
> I am also getting the same error:
> cqlsh -u cassandra -p cassandra
>
> Connection error: ('Unable to conne
I am also getting the same error:
cqlsh -u cassandra -p cassandra
Connection error: ('Unable to connect to any servers', {'':
OperationTimedOut('errors=Timed out creating connection (5 seconds),
last_host=None',)})
But it is not consistent. Sometimes I manage t
Thanks Alain
I tried to create the index before starting the tests.
The problem hasn't occurred so far.
What happen if you run it on a screen and come back later to see if this
> query completed successfully?
The index was made after I forced the hanging cqlsh to stop.
Which problem, cqlsh ha
tions.
From
https://docs.datastax.com/en/cql/3.1/cql/cql_reference/create_index_r.html:
"If data already exists for the column, Cassandra indexes the data during
the execution of this statement."
Why does cqlsh sometimes make no response in LCS setting for compaction?
So I would say it is
Hi all,
I have a question.
I use Cassandra 2.2.6.
Why does cqlsh sometimes give no response when the table uses the LCS compaction setting?
I requested as below:
cqlsh -e "create index new_index on keyspace.table (sub_column);"
When this problem happened, Cassandra process used 100% CPU
and deb
There's an effort to improve the docs, but while that's catching up, 3.0
has the latest version of the document you're looking for:
https://cassandra.apache.org/doc/cql3/CQL-3.0.html#createKeyspaceStmt
On Wed, Jun 15, 2016 at 5:28 AM Steve Anderson
wrote:
> Couple of Cqlsh que
Couple of Cqlsh questions:
1) Why, when I run the DESCRIBE CLUSTER command, is no snitch information
returned? Is this because my Cassandra cluster is a single node?
2) When I run the HELP CREATE_KEYSPACE command the following info is displayed:
*** No browser to display CQL help. URL
9160 port instead of 5.x.x。
>>>>
>>>> Hopefully this could be resolved, Thanks!
>>>>
>>>> 2016-03-30 22:13 GMT+08:00 Alain RODRIGUEZ :
>>>>
>>>>> Hi Joseph,
>>>>>
>>>>> why cassandra using tc
>>> And I already set [image: embedded image 2]
>>>
>>> Now I'm using 4.1.1 using 9160 port instead of 5.x.x.
>>>
>>> Hopefully this could be resolved, Thanks!
>>>
>>> 2016-03-30 22:13 GMT+08:00 Alain RODRIGUEZ >> >:
>>>
>
i Joseph,
>>>
>>> why cassandra using tcp6 for 9042 port like :
>>>> tcp6 0 0 0.0.0.0:9042:::*
>>>> LISTEN
>>>>
>>>
>>> if I remember correctly, in 2.1 and higher, cqlsh uses native transport,
>>
MT+08:00 Alain RODRIGUEZ :
>
>> Hi Joseph,
>>
>> why cassandra using tcp6 for 9042 port like :
>>> tcp6 0 0 0.0.0.0:9042 :::*
>>> LISTEN
>>>
>>
>> if I remember correctly, in 2.1 and higher, cqlsh uses native tr
for 9042 port like :
>> tcp6 0 0 0.0.0.0:9042:::*
>> LISTEN
>>
>
> if I remember correctly, in 2.1 and higher, cqlsh uses native transport,
> port 9042 (instead of thrift port 9160) and your clients (if any) are also
> probably using native tran
5:44 PM
To: user@cassandra.apache.org
Subject: Re: Unable to connect to CQLSH or Launch SparkContext
Check your environment variables, looks like JAVA_HOME is not properly set
On Mon, Apr 11, 2016 at 9:07 AM, Lokesh Ceeba - Vendor
mailto:lokesh.ce...@walmart.com>> wrote:
H
Check your environment variables, looks like JAVA_HOME is not properly set
On Mon, Apr 11, 2016 at 9:07 AM, Lokesh Ceeba - Vendor <
lokesh.ce...@walmart.com> wrote:
> Hi Team,
>
> Help required
>
>
>
> cassandra:/app/cassandra $ nodetool status
>
>
>
> Cassandra 2.0 and later requir
Hi Team,
Help required
cassandra:/app/cassandra $ nodetool status
Cassandra 2.0 and later require Java 7u25 or later.
cassandra:/app/cassandra $ nodetool status
Cassandra 2.0 and later require Java 7u25 or later.
cassandra:/app/cassandra $ java -version
Error occurred during initia
Hi Joseph,
why cassandra using tcp6 for 9042 port like :
> tcp6 0 0 0.0.0.0:9042:::*LISTEN
>
if I remember correctly, in 2.1 and higher, cqlsh uses native transport,
port 9042 (instead of thrift port 9160) and your clients (if any) are also
pr
why is cassandra using tcp6 for port 9042, like:
tcp6 0 0 0.0.0.0:9042 :::* LISTEN
would this be the problem?
2016-03-30 11:34 GMT+08:00 joseph gao :
> still have not fixed it . cqlsh: error: no such option: --connect-timeout
> cqlsh version
still have not fixed it . cqlsh: error: no such option: --connect-timeout
cqlsh version 5.0.1
2016-03-25 16:46 GMT+08:00 Alain RODRIGUEZ :
> Hi Joseph.
>
> As I can't reproduce here, I believe you are having network issue of some
> kind.
>
> MacBook-Pro:~ alain$ cqlsh