Hi,
For Cassandra 2.0, DSE 4.0 comes with Cassandra 2.0.5.
For Cassandra 1.2, DSE 3.1 comes with Cassandra 1.2.6.
You should wait for at least version 2.1.5.
2014-10-14 21:20 GMT+02:00 Jason Lewis :
> I can't find any info related to dates anywhere.
>
> jas
>
Based on history, there is typically a six-month delay between a Cassandra
release and the related DSE release.
Hannu
2014-10-14 22:20 GMT+03:00 Jason Lewis :
> I can't find any info related to dates anywhere.
>
> jas
>
I found this JIRA with a similar error message
(https://issues.apache.org/jira/browse/CASSANDRA-6276). I'm not sure it
applies to your case, especially since the issue is supposed to be fixed in
2.0.10 (your current version).
Consider filing a new JIRA if you can reproduce the issue.
On Mon, Oct 13, 20
I was going over http://www.datastax.com/docs/1.1/backup_restore, which seems
pretty clear about how to restore a snapshot. Basically, it seems the procedure
is to stop your node, wipe the commit logs, move the snapshotted sstables into
place, and restart. That makes sense… so long as you only
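A minimal sketch of that sequence, assuming a package install with data under
/var/lib/cassandra and a hypothetical keyspace/table ks/tbl (snapshot paths
vary a bit by version):

    # stop the node
    sudo service cassandra stop
    # wipe the commit logs so stale mutations aren't replayed over the snapshot
    sudo rm -f /var/lib/cassandra/commitlog/*
    # copy the snapshotted sstables back into the live data directory
    sudo cp /var/lib/cassandra/data/ks/tbl/snapshots/<snapshot_name>/* \
            /var/lib/cassandra/data/ks/tbl/
    # restart the node
    sudo service cassandra start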
On Wed, Oct 15, 2014 at 8:12 AM, Ben Chobot wrote:
> I was going over http://www.datastax.com/docs/1.1/backup_restore, which
> seems pretty clear about how to restore a snapshot. Basically, it seems the
> procedure is to stop your node, wipe the commit logs, move the snapshotted
> sstables into p
On Tue, Oct 14, 2014 at 6:37 PM, luolee.me wrote:
> I'm new to Cassandra and was told to write a script to back up the Cassandra
> cluster every day, but it throws an exception at one node as below
>
https://github.com/JeremyGrosser/tablesnap
=Rob
http://twitter.com/rcolidba
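If you do roll your own instead of using tablesnap, the heart of a daily
backup script is just a tagged snapshot per node; a rough sketch (the keyspace
name and the offsite-copy step are placeholders):

    TAG="daily_$(date +%Y%m%d)"
    # flush memtables and hard-link the current sstables under
    # .../snapshots/$TAG for each CF in the keyspace
    nodetool snapshot -t "$TAG" my_keyspace
    # ...copy the snapshots/$TAG directories offsite here...
    # then remove the snapshot so the hard links don't pin disk space
    nodetool clearsnapshot -t "$TAG" my_keyspace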
On Tue, Oct 14, 2014 at 4:52 PM, Donald Smith <
donald.sm...@audiencescience.com> wrote:
> Suppose I create a new DC with 25 nodes. I have their IPs in
> cassandra-topology.properties. Twenty-three of the nodes start up, but two
> of the nodes fail to start. If I start replicating (via "nodeto
I have a 3-node cluster running Cassandra 2.0.6 on CentOS 6.5, with Java
1.7.0_51.
I ran a CQL statement like "alter table table_name with
gc_grace_seconds=864000;" on node 1 in CQLSH, and it finished
instantaneously. "desc keyspace" listed the table with the new value for
gc_grace_seconds, and "
Even with vnodes, when you add a node to a cluster, it takes over some portions
of the token range. If the other nodes have been running for a long time, you
should bootstrap the new node so it gets the old data. Then you should run
"nodetool cleanup" on the other nodes to eliminate no-longer-needed data.
"So, my point is that to avoid the need to bootstrap and to cleanup, it's
better to bring all nodes up at about the same time. If this is wrong,
please explain why."
LGTM. That's how I do it. First balance your ring by adding all the nodes
you want, adding them with "auto_bootstrap: false", then
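For a whole new DC, that sequence looks roughly like this (the DC name is a
placeholder; adjust keyspace replication before rebuilding):

    # in cassandra.yaml on every node of the new DC, before first start:
    #     auto_bootstrap: false
    # start all the new nodes, update the keyspace replication to include
    # the new DC, then stream the existing data onto each new node:
    nodetool rebuild -- name_of_existing_dc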
+1 for "nodetool disablegossip && nodetool disablethrift && nodetool
disablebinary" (there is a binary protocol now too, port 9042, you might
want to disable it as well depending on your clients)
"nodetool enablegossip && nodetool enablethrift && nodetool enablebinary"
to come back "online"
Cheers,
Cassandra in general can't guarantee any ordering of the executed
queries, since nodes may fail or rejoin at arbitrary points in time.
But why can't it provide ordering for queries run at at least the quorum
level? Given that none of the updates get lost, why would order still be an
issue?
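One way to see why order can still be an issue even at quorum: replicas
reconcile concurrent writes to the same cell by timestamp (last-write-wins),
not by arrival order. A hedged illustration via cqlsh (keyspace/table names
are made up):

    # two "concurrent" updates carrying explicit client timestamps
    echo "UPDATE ks.users USING TIMESTAMP 1001 SET name='first' WHERE id=1;" | cqlsh
    echo "UPDATE ks.users USING TIMESTAMP 1000 SET name='second' WHERE id=1;" | cqlsh
    # even though the second statement arrived later, a quorum read
    # returns 'first', because its timestamp is higher
    echo "SELECT name FROM ks.users WHERE id=1;" | cqlsh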
streaming_socket_timeout_in_ms is the timeout per operation on
the streaming socket. The docs recommend not setting it too low (because a
timeout causes streaming to restart from the beginning), but the default of 0
never times out. What's a reasonable value?
Does it stream an en
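A hedged starting point (an assumption, not an official recommendation): pick
a value well above any healthy per-operation transfer time, for example on
the order of an hour:

    # cassandra.yaml
    # default 0 = never time out; a wedged socket then hangs streaming forever.
    # An hour is a made-up example value; size it well above the time needed
    # to stream your largest sstable:
    streaming_socket_timeout_in_ms: 3600000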
Hello,
We upgraded a Cassandra cluster from 1.2.18 to 2.0.10, and it looks like
repair is significantly more expensive now. Is this expected?
We schedule rolling repairs through the cluster. With 1.2.18 a repair
would take 3 hours or so. The first repair after the upgrade has been
going on for
On Wed, Oct 15, 2014 at 4:54 PM, Sean Bridges
wrote:
> We upgraded a Cassandra cluster from 1.2.18 to 2.0.10, and it looks like
> repair is significantly more expensive now. Is this expected?
>
It depends on what you mean by "expected." Operators usually don't expect
defaults with such dramatic
On Wed, Oct 15, 2014 at 3:41 PM, Alain RODRIGUEZ wrote:
> +1 for "nodetool disablegossip && nodetool disablethrift && nodetool
> disablebinary" (there is a binary protocol now too, port 9042, you might
> want to disable it as well depending on your clients)
>
> "nodetool enablegossip && nodetool
Hello, dlu66061.
A common issue with schema disagreements is time drift on the nodes. Are
you using NTP?
The only other issue is when the nodes were not reachable at the time the
schema update was being propagated:
http://www.datastax.com/documentation/cassandra/2.0/cassandra/dml/dml_handl
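Both suspects are quick to check from a shell on each node:

    # is this node's clock actually NTP-synchronized?
    ntpstat
    # do all live nodes report the same schema version UUID?
    nodetool describecluster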
Thanks, Robert. Does the switch from parallel to sequential explain why I/O
increases? We see significantly higher I/O with 2.0.10.
The nodetool docs [1] hint at the reason for defaulting to sequential,
"This allows the dynamic snitch to maintain performance for your
application via the other replica
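If the old parallel behavior is what you want back, 2.0's nodetool exposes it
as a flag (weigh it against the dynamic-snitch point above):

    # 2.0 defaults to sequential (snapshot-based) repair;
    # -par restores the 1.2-style parallel repair across all replicas
    nodetool repair -par my_keyspace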
We bootstrapped a node with replace_address and noticed that the
WriteCount for each CF stayed at 0 until the bootstrap was complete.
After the bootstrap completed, cfstats reported the expected values for
WriteCount on each CF.
The node wrote gigs of data to various CFs during the bootstrap so it
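For reference, the replace flow in question is kicked off like this (the IP is
a placeholder), and while it runs the node streams data rather than taking
client writes:

    # start the replacement node with the dead node's IP, e.g. via
    # cassandra-env.sh, so it takes over that node's token ranges:
    JVM_OPTS="$JVM_OPTS -Dcassandra.replace_address=10.0.0.12"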
Hi,
I am facing many problems after storing a certain number of records in
Cassandra; it is giving an OutOfMemoryError.
I have 8 GB of RAM in my system, so how many records can I expect to
retrieve using a select query?
And what should the configuration be for those people who are retrieving
15-20 GB of da
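The usual answer to questions like this: don't size a query to RAM, fetch
bounded pages instead. A hedged sketch (keyspace/table are placeholders):

    # never pull 15-20 GB in one SELECT; fetch a bounded chunk
    echo "SELECT * FROM my_ks.my_table LIMIT 1000;" | cqlsh
    # and walk the table in pages (token ranges, or a driver's automatic
    # paging) instead of raising the LIMIT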
Hello Timmy
Even when you write and read using quorum, you still don't have isolation.
Example:
Client A writes "John Doe" to 3 replicas. Since CL = Quorum, the
coordinator waits for 2 acks from the replicas before telling client A
that the write is successful.
Now suppose that between the fi