Short answer: you'll need to pass something like --cqlversion="3.0.0" to
cqlsh.
Longer answer: when a CQL client connects (and cqlsh is one), it asks to
use a specific version of CQL. If it asks for a version newer than what
the server supports, you get the error message you see above. So w
hey guys,
I'm getting this exception when I try to run cqlsh.
[root@beta:/var/www/admin] #cqlsh beta.mydomain.com 9160
Traceback (most recent call last):
File "/etc/alternatives/cassandrahome/bin/cqlsh", line 2027, in
main(*read_options(sys.argv[1:], os.environ))
File "/etc/alternatives
Hi,
> 1) I will expect same row key could show up in both sstable2json
output, as this one row exists in both SSTable files, right?
Yes.
> 2) If so, what is the boundary? Will Cassandra guarantee the column
level as the boundary? What I mean is that for one column's data, it
will be guarant
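Conceptually, the per-column reconciliation being asked about can be sketched like this (a toy model in Python, not Cassandra's actual code; the fragment layout and names are illustrative):

```python
# Illustrative sketch: a row whose key appears in two SSTables is
# reconciled at read time. Each fragment maps column name ->
# (value, timestamp); the newest timestamp wins per column, which
# is why the column is the effective boundary.

def merge_row_fragments(*fragments):
    merged = {}
    for fragment in fragments:
        for column, (value, timestamp) in fragment.items():
            current = merged.get(column)
            if current is None or timestamp > current[1]:
                merged[column] = (value, timestamp)
    return merged

# Same row key in two SSTables; each column's data is whole within
# one file, so reconciliation happens column by column.
sstable1 = {"name": ("alice", 100), "city": ("nyc", 100)}
sstable2 = {"city": ("sf", 200)}

row = merge_row_fragments(sstable1, sstable2)
# "city" is taken from the newer fragment, "name" from the older one.
```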
Hi, I have some questions related to the SSTable in the Cassandra, as I am
doing a project to use it and hope someone in this list can share some thoughts.
My understanding is that SSTables are per column family, but each column family
could have multiple SSTable files. During runtime, one row COULD s
Thanks to you and Paolo and Edward. You’ve given me something to think about.
I’ll just have to figure out the most reasonable approach for my needs.
Les
From: Laing, Michael [mailto:michael.la...@nytimes.com]
Sent: Wednesday, September 11, 2013 2:39 PM
To: user@cassandra.apache.org
Subject: Re:
> My high-level understanding of how Cassandra handles a SELECT is that :
> (excuse incorrect terminology)
> 1. client connects to some node N
> 2. node N acts as a kind of coordinator and fires off the thrift or
> binary-protocol messages
> to all other nodes to fetch rows off t
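The coordinator flow described above can be sketched roughly as follows (a toy model, not the actual read path; the replica responses and consistency level are simulated values):

```python
# Toy sketch of a coordinator read: node N forwards the request to the
# replicas for the row, waits until enough have answered to satisfy the
# consistency level, then resolves conflicts by write timestamp.

def coordinator_read(replica_responses, consistency_level):
    answered = []
    for response in replica_responses:      # each response: (value, timestamp)
        answered.append(response)
        if len(answered) >= consistency_level:
            break
    if len(answered) < consistency_level:
        raise TimeoutError("not enough replicas responded")
    # The most recently written value wins.
    return max(answered, key=lambda r: r[1])[0]

# RF=3 with QUORUM: two replica answers are enough, and the newer
# timestamp decides which value the client sees.
responses = [("old", 10), ("new", 20), ("new", 20)]
value = coordinator_read(responses, consistency_level=2)
```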
> Or CqlPagingRecordReader supports paging through the entire result set?
Supports paging through the entire result set.
Cheers
-
Aaron Morton
New Zealand
@aaronmorton
Co-Founder & Principal Consultant
Apache Cassandra Consulting
http://www.thelastpickle.com
On 12/09/2013, at 5
On Mon, Sep 16, 2013 at 12:01 PM, Philippe wrote:
> Is there a way to limit the Memtable sizes on a columnFamily basis on
> cassandra 1.1.x ? I have some CF that have very very low throughput and I'd
> like to lower the amount of data in memory to keep the Heap size down.
>
No.
If you wanted to
I think you can do it by moving all the SSTables onto one drive, though I am
not sure. The SSTable names should be unique across drives.
On Mon, Sep 16, 2013 at 10:14 AM, Juan Manuel Formoso wrote:
> Because I ran out of space when shuffling, I was forced to add multiple
> disks on my Cassandra n
Is there a way to limit the Memtable sizes on a columnFamily basis on
cassandra 1.1.x ? I have some CF that have very very low throughput and I'd
like to lower the amount of data in memory to keep the Heap size down.
Thanks
Because I ran out of space when shuffling, I was forced to add multiple
disks on my Cassandra nodes.
When I finish compacting, cleaning up, and repairing, I'd like to remove
them and return to one disk per node.
What is the procedure to make the switch?
Can I just kill cassandra, move the data fr
Repair should not take that long since you have very little data. Check the
logs of the other machines it is repairing with to find anything
interesting.
On Mon, Sep 16, 2013 at 10:15 AM, Parag Patel wrote:
> Thanks. I’ve noticed that a repair takes a long time to finish. My
> data is quit
For how long do the read latencies go up once a machine is down? It takes
a configurable amount of time for machines to detect that another machine
is down; this is done through gossip. The algorithm used to detect failures
is the Phi accrual failure detector.
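For intuition, the phi value can be sketched as follows (a simplified model assuming exponentially distributed heartbeat intervals; the real detector estimates the distribution from a sliding window of recent arrival times, so treat this as illustrative only):

```python
import math

def phi(time_since_last_heartbeat, mean_interval):
    """Suspicion level for a peer. Under an exponential model,
    P(heartbeat is merely late) = exp(-t / mean_interval), and
    phi = -log10 of that probability, so suspicion grows as the
    silence lengthens relative to the usual heartbeat interval."""
    p_merely_late = math.exp(-time_since_last_heartbeat / mean_interval)
    return -math.log10(p_merely_late)

# phi grows with silence; a node is convicted as down once phi
# exceeds the configurable phi_convict_threshold (default 8 in
# cassandra.yaml), which is what makes detection time tunable.
```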
Regarding your question, if you are bootstrap
Thanks. I've noticed that a repair takes a long time to finish. My data is
quite small, 1.5GB on each node when running nodetool status. Is there any way
to speed up repairs? (FYI, I haven't actually seen a repair finish since it
didn't return after 10 mins - I figured I was doing something
On Mon, Sep 16, 2013 at 8:08 AM, Keith Freeman <8fo...@gmail.com> wrote:
> I'm spec'ing out some hardware for a small cassandra cluster. I know the
> recommendation (v1.2+) on spinning media is to have the commitlog on a
> separate physical disk from the data, but is it considered ok for
> perfor
Hi,
I am trying to generate SSTables for bulk insert. Generating an SSTable for a
composite key is nearly 1000 times slower than for a single primary key.
Is there a trick to speed up SSTable generation for a composite key?
Thanks
Koray
Hi,
I am experimenting with C* 2.0 ( and today's java-driver 2.0 snapshot) for
implementing distributed locks.
Basically, I have a table of 'states' I want to serialize access to:
create table state ( id text , lock uuid , data text, primary key (id) ) (3
nodes, replication factor 3)
in
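The usual pattern for this in C* 2.0 is a lightweight transaction (compare-and-set). A sketch of the acquire/release semantics it provides, simulated in plain Python with an in-memory stand-in for the table (the CQL in the comments shows the general LWT shape, but treat the exact statements and names as illustrative):

```python
import uuid

# In-memory stand-in for the 'state' table: id -> row. The point is
# the compare-and-set semantics that an LWT such as
#   UPDATE state SET lock = ? WHERE id = ? IF lock = null
# gives you: of several concurrent writers, only one is applied.

table = {"doc1": {"lock": None, "data": "payload"}}

def try_acquire(row_id, client_lock):
    row = table[row_id]
    if row["lock"] is None:          # the IF condition
        row["lock"] = client_lock    # applied atomically (Paxos in C*)
        return True                  # [applied] = true
    return False                     # [applied] = false: lock is held

def release(row_id, client_lock):
    row = table[row_id]
    if row["lock"] == client_lock:   # condition on our own uuid guards
        row["lock"] = None           # against releasing someone else's lock
        return True
    return False

a, b = uuid.uuid4(), uuid.uuid4()
won_a = try_acquire("doc1", a)       # first client wins
won_b = try_acquire("doc1", b)       # second client is refused
release("doc1", a)
```

With the real driver you would check the `[applied]` column of the LWT result to learn whether your write won.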
RF=3. Single dc deployment. No v-nodes.
Is there a certain amount of time I need to wait from the time the down node is
started to the point where it's ready to be used? If so, what's that time? If
it's dynamic, how would I know when it's ready?
Thanks,
Parag
From: sankalp kohli [mailto:ko
I'm spec'ing out some hardware for a small cassandra cluster. I know
the recommendation (v1.2+) on spinning media is to have the commitlog on
a separate physical disk from the data, but is it considered ok for
performance to put the commitlog on a partition of the OS's disk?