Thank you for reporting this.
I've filed https://issues.apache.org/jira/browse/CASSANDRA-11333.
On Thu, Mar 10, 2016 at 6:16 AM, Rakesh Kumar wrote:
Cassandra : 3.3
CQLSH : 5.0.1
If there is a typo in a column name of the COPY command, we get this:
copy mytable
(event_id,event_class_cd,event_ts,receive_ts,event_source_instance,client_id,client_id_type,event_tag,event_udf,client_event_date)
from '/pathtofile.dat'
with DELIMITER = '|'
> I don't know why I got no error in 1.0.8 with PropertyFileSnitch in
> cassandra.yaml and wrong syntax in cassandra-topology.properties.
>
Not sure either.
> PS: I had to change JVM_OPTS in /etc/cassandra/cassandra-env.sh to use 160k
> instead of 128k. This has not been fixed?
Still marked as unresolved.
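For anyone hitting the same issue, the change the PS describes is typically a one-line edit in cassandra-env.sh; a sketch, assuming the 128k/160k value in question is the JVM thread stack size set via -Xss:

```shell
# /etc/cassandra/cassandra-env.sh (sketch -- assumes the value being changed
# is the JVM thread stack size, -Xss)
JVM_OPTS="$JVM_OPTS -Xss160k"   # raised from the 128k default
```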
I forgot to change cassandra.yaml to use PropertyFileSnitch AND
cassandra-topology syntax was incorrect. Thanks, Nick.
I don't know why I got no error in 1.0.8 with PropertyFileSnitch in
cassandra.yaml and wrong syntax in cassandra-topology.properties.
PS: I had to change JVM_OPTS in /etc/cassandra/cassandra-env.sh to use 160k
instead of 128k. This has not been fixed?
The property file snitch isn't used by default. Did you change your
cassandra.yaml to use PropertyFileSnitch so it reads
cassandra-topology.properties?
Also the formatting in your dc property file isn't right. It should be
'<ip>=<datacenter>:<rack>'. So:
127.0.0.1=dc-test:my-notebook
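For reference, a minimal cassandra-topology.properties in that format (the address, datacenter, and rack names below are illustrative):

```properties
# Format: <ip>=<datacenter>:<rack>
127.0.0.1=dc-test:my-notebook
# Optional catch-all for nodes not listed explicitly
default=dc-test:rack1
```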
On Mon, Jun 11, 2012 at 1:49 PM,
Just installed Cassandra 1.1.1 and ran:
root@carlo-laptop:/tmp# cassandra-cli -h localhost
Connected to: "Test Cluster" on localhost/9160
Welcome to Cassandra CLI version 1.1.1
Type 'help;' or '?' for help.
Type 'quit;' or 'exit;' to quit.
[default@unknown] create keyspace accounts
... with
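The transcript is cut off mid-command; for context, a complete 1.1-era cassandra-cli statement of that shape might look like the following (the placement strategy and replication factor are illustrative, not from the original message):

```
create keyspace accounts
  with placement_strategy = 'org.apache.cassandra.locator.SimpleStrategy'
  and strategy_options = {replication_factor:1};
```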
For the "Too many open files" error see:
http://www.datastax.com/docs/0.8/troubleshooting/index#java-reports-an-error-saying-there-are-too-many-open-files
Restart the node and see if the node is able to complete the pending repair
this time. Your node may have just been stuck on this error that ca
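As a quick diagnostic for the "Too many open files" case, the per-process open-file limit can be read from Python's Unix-only resource module (a sketch, not part of the linked docs):

```python
# Inspect the open-file limit behind "Too many open files" errors.
# resource is a Unix-only stdlib module.
import resource

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print("soft limit:", soft)  # the value `ulimit -n` reports; raise it for Cassandra
```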
I'm running into a quirky issue with Brisk 1.0 Beta 2 (w/ Cassandra 0.8.1).
I think the last node in our cluster is having problems (10.201.x.x).
OpsCenter and nodetool ring (run from that node) show the node as down, but
the rest of the cluster sees it as up.
If I run nodetool ring from one of t
Looks like the end of June.
On Fri, Jun 18, 2010 at 8:38 PM, Corey Hulen wrote:
Awesome...thanks.
I just downloaded the patch and applied it and verified it fixes our
problems.
what's the ETA on 0.6.3? (debating whether to tolerate it or maintain
our own 0.6.2+patch).
-Corey
On Fri, Jun 18, 2010 at 8:21 PM, Jonathan Ellis wrote:
Fixed for 0.6.3: https://issues.apache.org/jira/browse/CASSANDRA-1042
On Fri, Jun 18, 2010 at 2:49 PM, Corey Hulen wrote:
OK...I just verified on a clean EC2 small single instance box using
apache-cassandra-0.6.2-src.
I'm pretty sure the Cassandra MapReduce functionality is broken.
If your MapReduce jobs are idempotent then you are OK, but if you are doing
things like word count (as in the supplied example) or key c
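To make the idempotency point concrete, here is a plain-Python sketch (not the Cassandra/Hadoop API) of why duplicated input rows corrupt a count but leave an idempotent index rebuild unharmed:

```python
# A buggy input split hands "key2" to two map tasks.
rows = ["key1", "key2", "key2", "key3"]

total = len(rows)       # word-count style job: the duplicate inflates the total
rebuilt = set(rows)     # idempotent rebuild: indexing "key2" twice is harmless

print(total, len(rebuilt))  # 4 3 -- the count is wrong, the index is not
```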
I thought the same thing, but using the supplied contrib example I just
deleted the /var/lib/data dirs and commit log.
-Corey
On Fri, Jun 18, 2010 at 3:11 PM, Phil Stanhope wrote:
> "blow all the data away" ... how do you do that? What is the timestamp
> precision that you are using when creat
"blow all the data away" ... how do you do that? What is the timestamp
precision that you are using when creating key/col or key/supercol/col items?
I have seen a key fail to write when its timestamp is identical to the
previous timestamp of a deleted key/col. While I didn't examine the source
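The behavior described above is consistent with how Cassandra reconciles a live cell against a tombstone: the higher timestamp wins, and on a tie the delete wins. A toy sketch of that rule (illustrative, not the actual source):

```python
def reconcile(a, b):
    """Pick the winning cell; each cell is (timestamp, is_tombstone).
    Higher timestamp wins; on a tie, the tombstone (delete) wins."""
    if a[0] != b[0]:
        return a if a[0] > b[0] else b
    return a if a[1] else b

delete = (1000, True)    # column deleted at timestamp 1000
rewrite = (1000, False)  # rewritten with the identical timestamp

print(reconcile(delete, rewrite))  # (1000, True): the rewrite stays invisible
```

This is why re-inserting with a strictly higher timestamp (e.g. microsecond precision) avoids the problem.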
We are using MapReduce to periodically verify and rebuild our secondary
indexes along with counting total records. We started to notice double
counting of unique keys on single machine standalone tests. We were finally
able to reproduce the problem using
the apache-cassandra-0.6.2-src/contrib/word_