Hello,
We are running a 3-node cluster with RF=3 and 5 clients in a test environment.
The C* settings are mostly default. We noticed quite high context switching
during our tests. Over 100 000 000 keys/partitions we averaged around 260 000
context switches (with a max of 530 000).
We were running ~12 000 tran
Hi Boying,
I'm not sure I fully understand your question here, so some clarification
may be needed. However, if you are asking what steps need to be performed
on the current datacenter or on the new datacenter:
Step 1 - Current DC
Step 2 - New DC
Step 3 - Depending on the snitch you may need to m
Akhtar Hussain shared a search result with you
-
https://issues.apache.org/jira/secure/IssueNavigator.jspa?reset=true&jqlQuery=reporter+%3D+currentUser%28%29+ORDER+BY+createdDate+DESC
We have a Geo-red setup with 2 Data centers having 3 nodes each. Whe
On 19 Nov 2014, at 19:53, Robert Coli wrote:
>
> My hunch is that you originally triggered this by picking up some obsolete
> SSTables during the 1.2 era. Probably if you clean up the existing zombies
> you will not encounter them again, unless you encounter another "obsolete
> sstables marked
I have a setup that looks like this
Dc1: 9 nodes
Dc2: 9 nodes
Dc3: 9 nodes
C* version: 2.0.10
RF: 2 in each DC
Empty CF with no data at the beginning of the test
Scenario 1 (happy path): I connect to a node in DC1 using CQLsh, validate that
I am using CL=1, insert 10 rows.
Then using CQLsh conne
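For anyone trying to reproduce the scenario, the cqlsh side might look roughly like this (a sketch only: the keyspace, table, and column names are placeholders, not the poster's actual schema):

```sql
-- cqlsh session sketch (test_ks/test_cf are hypothetical names)
CONSISTENCY ONE
INSERT INTO test_ks.test_cf (id, val) VALUES (1, 'row-1');
-- ... insert the remaining rows, then from a node in DC2/DC3:
CONSISTENCY ALL
SELECT * FROM test_ks.test_cf WHERE id = 1;
```

Reading back at CONSISTENCY ALL forces all replicas to answer, which is what makes the missing-row symptom visible or not.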
I believe you were attempting to share:
https://issues.apache.org/jira/browse/CASSANDRA-8352
Your Cassandra logs output the following:
> DEBUG [Thrift:4] 2014-11-20 15:36:50,653 CustomTThreadPoolServer.java
> (line 204) Thrift transport error occurred during processing of message.
> org.apache.t
That's true. Will look into the server logs again and get back.
Br/Akhtar
On Fri, Nov 21, 2014 at 5:09 PM, Mark Reddy wrote:
> I believe you were attempting to share:
> https://issues.apache.org/jira/browse/CASSANDRA-8352
>
> Your Cassandra logs output the following:
>
>> DEBUG [Thrift:4] 2014-11-20 1
Hey guys,
Just reviving this thread. In case anyone is using the
cassandra_range_repair tool (https://github.com/BrianGallew/cassandra_range_repair),
please sync your repositories, because the tool was previously broken
due to a critical bug in the token-range definition method. For more
informat
How do the clients connect, which protocol is used, and do they use
keep-alive connections? Is it possible that the clients use Thrift and the
server type is "sync"? It is just my guess, but in this scenario, with a high
number of clients connecting and disconnecting often, there may be a high
number of conte
> The main purpose is to protect us from human errors (eg. unexpected
> manipulations: delete, drop tables, …).
If that is the main purpose, having "auto_snapshot: true" in cassandra.yaml
will be enough to protect you.
Regarding backup, I have a small script that creates a named snapshot
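Such a script might look like the following. This is a hypothetical Python sketch, not the poster's actual script: the keyspace name, the `backup_YYYYMMDD` naming scheme, and the dry-run behavior are all assumptions; it only relies on the real `nodetool snapshot -t <tag> <keyspace>` syntax.

```python
import datetime
import subprocess

def snapshot_command(keyspace, prefix="backup"):
    """Build a nodetool snapshot command tagged with today's date."""
    tag = "{}_{}".format(prefix, datetime.date.today().strftime("%Y%m%d"))
    return ["nodetool", "snapshot", "-t", tag, keyspace]

def take_snapshot(keyspace, dry_run=True):
    """Print (dry run) or execute the snapshot command."""
    cmd = snapshot_command(keyspace)
    if dry_run:
        print(" ".join(cmd))  # just show what would run
        return cmd
    return subprocess.check_call(cmd)  # requires nodetool on PATH

take_snapshot("my_keyspace")  # dry run: prints the nodetool invocation
```

A cron entry pointing at this (with `dry_run=False`) gives dated, named snapshots alongside what auto_snapshot already provides on destructive operations.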
Does Hector or Cassandra impose a limit on the max TTL value for a column?
I am trying to insert a record into one of the column families and am seeing
the following error:
Cassandra version : 1.1.12
Hector : 1.1-4
Any pointers appreciated.
me.prettyprint.hector.api.exceptions.HInvalidRequestException:
Hi Rajanish,
Cassandra imposes a max TTL of 20 years.
public static final int MAX_TTL = 20 * 365 * 24 * 60 * 60; // 20 years in seconds
See:
https://github.com/apache/cassandra/blob/8d8fed52242c34b477d0384ba1d1ce3978efbbe8/src/java/org/apache/cassandra/db/ExpiringCell.java#L37
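A quick arithmetic check of that constant (mirroring the Java expression above; 20 years of 365 days, ignoring leap years):

```python
# Mirror of Cassandra's MAX_TTL constant from ExpiringCell.java:
# 20 years * 365 days * 24 hours * 60 minutes * 60 seconds.
MAX_TTL = 20 * 365 * 24 * 60 * 60
print(MAX_TTL)  # 630720000
```

So any TTL above 630 720 000 seconds is rejected at validation time.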
Mark
On 21
With the newest versions of Cassandra, cql is not hanging, but returns the
same Invalid Query Exception you are seeing through Hector. I would assume
from the exception that 630720000 is in fact the largest TTL you can use.
What are you doing that you need to set a TTL of approximately 30 years?
Hello,
I have been bootstrapping 4 new nodes into an existing production
cluster. Each node was bootstrapped one at a time; the first two
completed without errors, but I ran into issues with the third. The 4th
node has not been started yet.
On bootstrapping the third node, the data streaming s
On Fri, Nov 21, 2014 at 8:40 AM, Jens Rantil wrote:
> > The main purpose is to protect us from human errors (eg. unexpected
> manipulations: delete, drop tables, …).
>
> If that is the main purpose, having "auto_snapshot: true" in
> cassandra.yaml will be enough to protect you.
>
OP includes "de
On Fri, Nov 21, 2014 at 1:21 AM, Jan Karlsson
wrote:
> Nothing really wrong with that however I would like to understand why
> these numbers are so high. Have others noticed this behavior? How much
> context switching is expected and why? What are the variables that affect
> this?
>
I +1 Nikola
On Fri, Nov 21, 2014 at 9:44 AM, Chris Hornung
wrote:
> On bootstrapping the third node, the data streaming sessions completed
> without issue, but bootstrapping did not finish. The node is stuck in
> JOINING state even 19 hours or so after data streaming completed.
>
Stop the joining node. Wipe
On Fri, Nov 21, 2014 at 3:38 AM, Rahul Neelakantan wrote:
> The missing rows never show up in DC2 and DC3 unless I do a CQLsh lookup
> with CL=all
>
> Why is there a difference in the replication between writes performed
> using the datastax drivers and while using CQLsh?
>
If reproducible, this
On Fri, Nov 21, 2014 at 3:11 AM, André Cruz wrote:
> Can it be that they were all in the middle of a compaction (Leveled
> compaction) and the new sstables were written but the old ones were not
> deleted? Will Cassandra blindly pick up old and new sstables when it
> restarts?
>
Yes.
https://is
We're getting strange results when reading from Cassandra 2.0 in PHP using
this driver:
http://code.google.com/a/apache-extras.org/p/cassandra-pdo/
Here's the schema:
CREATE TABLE events (
    day text,
    last_event text,
    event_text text,
    mdn text,
    PRIMARY KEY ((day), last_event)
) WITH CLU
On Wed, Nov 19, 2014 at 4:51 PM, Jimmy Lin wrote:
>
> #
> When you said send "read digest request" to the rest of the replicas, do
> you mean all replicas in the current and other DCs, or just the one last
> replica in my current DC and one coordinator node in the other DC?
>
> (our read and write