After I used sstableloader to load data from an old cluster, the data size on
each node of the new cluster doubled. The new cluster has the same number of
nodes as the old cluster (3 nodes).
The old cluster holds 500 GB per node; after sstableloader finished, the new
cluster holds nearly 1 TB per node.
The problem is that compaction
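The message is cut off here, but one plausible explanation is that sstableloader
streams data to every replica, so each node holds overlapping SSTables until
compaction merges them. A hedged first check (a sketch, assuming defaults; "sky"
is a hypothetical keyspace name, substitute your own):

  # See whether compaction is still catching up after the bulk load,
  # then force a major compaction so overlapping SSTables get merged.
  nodetool compactionstats        # pending tasks > 0: data should still shrink
  nodetool compact sky            # major compaction on the loaded keyspace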
Trying to connect to a C* v3.1.1 cluster.
It works fine with cqlsh:
$ cqlsh
Connected to Test Cluster at 127.0.0.1:9042.
[cqlsh 5.0.1 | Cassandra 3.1.1 | CQL spec 3.3.1 | Native protocol v4]
But it doesn't work with cassandra-driver-core.
I use the following Maven dependencies:
com.datastax.cassan
There are still a couple of tickets being worked on for 2.1.10 and the
release will come afterwards. You can check the list:
https://datastax-oss.atlassian.net/browse/JAVA-989?jql=project%20%3D%20JAVA%20AND%20fixVersion%20%3D%202.1.10
On Mon, Jan 4, 2016 at 5:57 AM, joseph gao wrote:
> By the wa
Hello,
I would gladly welcome the help of the community on the following issue
I am having while starting Cassandra.
I am starting Cassandra from a Bash script in this way:
- $CASSANDRA_HOME/bin/cassandra -p $CASSANDRA_PID_FILE
and then I submit some updates via
- $CASSANDRA_HOME/bin/cqlsh -f
I think you are looking for the nodetool utility:
https://docs.datastax.com/en/cassandra/2.1/cassandra/tools/toolsNodetool_r.html
On Mon, Jan 4, 2016 at 1:47 PM, Giovanni Usai
wrote:
> Hello,
> I would gladly welcome the help of the community on the following issue I
> am having while starting C
(Hit enter too fast)
In particular, `nodetool status` will give you a summary of the status of
the cluster. See the documentation for the parameters it takes.
-kr, Gerard.
On Mon, Jan 4, 2016 at 3:49 PM, Gerard Maas wrote:
> I think you are looking for the nodetool utility:
> https://docs.data
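A typical invocation looks like this (hedged sketch; the output shape is
illustrative only, not taken from this cluster):

  nodetool status
  # Datacenter: datacenter1
  # --  Address    Load    Tokens  Owns  Host ID  Rack
  # UN  10.0.0.1   500 GB  256     33%   ...      rack1
  # UN = Up/Normal; DN would mean the node is down.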
Hi all,
I have a C* cluster with 6 nodes. My Cassandra version is 2.1.1. I start 50
threads to insert data into the C* cluster; each thread inserts up to 100
million rows with the same partition key. After inserting all the data, I
start another app with 50 threads to export all the dat
A problem that I have run into repeatedly when doing schema design is how
to control partition size while still allowing for efficient multi-row
queries.
We want to limit partition size to some number between 10 and 100 megabytes
to avoid operational issues. The standard way to do that is to figur
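The message is truncated above, but one common way to bound partition size is to
add a bucket column to the partition key. A hedged sketch with a hypothetical
table; the bucket must be derived the same way by writers and readers, e.g. a
day number or hash(id) % N:

  # Cap partition growth by spreading rows over (key, bucket) partitions.
  # Keyspace and table names are hypothetical.
  cqlsh -e "
    CREATE TABLE ks.events (
      key     text,
      bucket  int,
      ts      timeuuid,
      payload text,
      PRIMARY KEY ((key, bucket), ts)
    );"

Multi-row queries then fan out over a known, small set of buckets instead of
one unbounded partition.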
You have three choices:
1. Insert with CL=ALL, with client-level retries if the write fails due to
the cluster being overloaded.
2. Insert with CL=QUORUM and then run repair after all data has been
inserted (see the sketch after this list).
3. Lower your insert rate in your client so that the cluster can keep up
with your inserts
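For option 2, a minimal sketch (assuming a hypothetical keyspace ks):

  # After the QUORUM writes finish, repair the keyspace.
  # -pr repairs only this node's primary ranges; run it on every node
  # so all ranges are covered exactly once.
  nodetool repair -pr ks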
Hello Gerard,
Thanks for your reply.
It seems nodetool works only when the cluster is up and running.
In case of a bad startup of Cassandra, running "nodetool status" gives me
one of these two errors:
1) error: No nodes present in the cluster. Has this node finished
starting up?
-- StackTrace --
Hi Giovanni,
You could use netcat (nc) to test that the Cassandra port is up, with a
timeout to decide when to take an action:
nc -z localhost 9160
Check the exit code to decide what action to take.
-kr, Gerard.
On Mon, Jan 4, 2016 at 4:56 PM, Giovanni Usai
wrote:
> Hello Gerard,
> tha
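A minimal sketch of that check (assuming the Thrift port 9160 from the message
above; use 9042 if your clients speak the native protocol):

  # Poll the port and branch on nc's exit code (0 = open).
  up=false
  for i in $(seq 1 30); do
      if nc -z localhost 9160; then
          up=true; break          # port is open, Cassandra is listening
      fi
      sleep 2                     # not up yet; retry
  done
  $up || echo "Cassandra did not come up in time" >&2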
On Sun, Jan 3, 2016 at 5:54 PM, Shuo Chen wrote:
> There have been client operations during these days. Besides, most column
> families in the cluster are super column families created by cassandra-cli.
> Most rows have on average 30 sub-rows, and each sub-row has 20 columns.
>
Supercolumns, especially pre-CQL impleme
I was surprised the other day to discover that this was a cluster-wide
setting. Why does that make sense?
In a heterogeneous Cassandra deployment, say I have some old servers
running spinning disks and I'm bringing on more nodes that perhaps utilize
SSDs. I want to have different compaction thro
This is set in cassandra.yaml on each node independently; it doesn't
have to be the same cluster-wide.
On Mon, Jan 4, 2016 at 3:59 PM, Ken Hancock wrote:
> I was surprised the other day to discover that this was a cluster-wide
> setting. Why does that make sense?
>
> In a heterogeneous cassand
>
>
> Also, as I increase my node count, I technically also have to increase my
> compaction_throughput which would require a rolling restart across the
> cluster.
>
>
You can set compaction throughput on each node dynamically via nodetool
setcompactionthroughput.
--
-
Nate McCal
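For example (hedged; 64 is an arbitrary MB/s value, and 0 disables throttling
entirely):

  # Adjust compaction throttling on one node; no restart needed.
  nodetool getcompactionthroughput      # show the current MB/s limit
  nodetool setcompactionthroughput 64   # hypothetical new limit; 0 = unthrottled

The setting applies only to the node nodetool talks to, so heterogeneous
values across the cluster are fine.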
>
>> Also, as I increase my node count, I technically also have to increase my
>> compaction_throughput which would require a rolling restart across the
>> cluster.
>>
>>
> You can set compaction throughput on each node dynamically via nodetool
> setcompactionthroughput.
>
>
>
Also, the IOPS genera
Why do you think it's cluster-wide? That param is per-node, and you can change
it at runtime with nodetool (or via the JMX interface, using jconsole to
ip:7199).
From: Ken Hancock
Reply-To: "user@cassandra.apache.org"
Date: Monday, January 4, 2016 at 12:59 PM
To: "user@cassandra.apache.org"
Thrift has been officially frozen for almost two years and unofficially for
longer. Meanwhile, maintaining Thrift support through changes like 8099
has been a substantial investment.
Thus, we are officially deprecating Thrift now and removing support in 4.0,
i.e. Nov 2016 if tick-tock goes as pl
You should endeavor to use a repeatable method of segmenting your data.
Swapping partitions every time you "fill one" seems like an anti-pattern to
me, but I suppose it really depends on what your primary key is. Can you
share some more information on this?
In the past I have utilized the consiste
Dear Jack,
Thanks!
My keyspace is as follows:
test@cqlsh> DESC KEYSPACE sky ;
CREATE KEYSPACE sky WITH replication = {'class': 'SimpleStrategy',
'replication_factor': '3'} AND durable_writes = true;
CREATE TABLE sky.user1 (pati int, uuid text, name text, name2 text,
PRIMARY KEY (pati, u