Actually, this is tunable in >= 2.0.2 ;)
http://www.datastax.com/dev/blog/rapid-read-protection-in-cassandra-2-0-2
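If I remember right, the knob is the per-table speculative_retry property. A rough
sketch of setting it through the Java driver (keyspace and table names are made up,
so adjust to your schema):

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.Session;

    // Minimal sketch: turn rapid read protection on for one table by setting its
    // speculative_retry property. Accepted values include 'ALWAYS', a percentile
    // such as '99percentile', a fixed threshold such as '10ms', or 'NONE'.
    public class SpeculativeRetryExample {
        public static void main(String[] args) {
            Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
            Session session = cluster.connect("my_keyspace");
            // Retry against another replica if the first one hasn't answered within
            // the table's observed 99th-percentile read latency.
            session.execute("ALTER TABLE users WITH speculative_retry = '99percentile'");
            cluster.close();
        }
    }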
Michael
On 03/07/2014 07:33 PM, Jonathan Lacefield wrote:
Yikes my apologies. B is not the answer
On Mar 7, 2014, at 8:24 PM, Russell Hatch wrote:
If you are using cqlsh, you can get a look at what's happening behind the
scenes by enabling tracing with 'tracing on;' before executing a query. In
this scenario you'll see 'Sending message to [ip address]' for each of the
replicas.
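The same information is available programmatically if you're going through the
DataStax Java driver rather than cqlsh; a rough sketch (keyspace, table and key
are made up):

    import com.datastax.driver.core.*;

    // Enable tracing on a single statement and print the trace events, which
    // include the "Sending message to /<ip>" steps for each replica contacted.
    public class TraceReadExample {
        public static void main(String[] args) {
            Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
            Session session = cluster.connect();
            Statement stmt = new SimpleStatement(
                    "SELECT * FROM my_keyspace.users WHERE user_id = 42").enableTracing();
            ResultSet rs = session.execute(stmt);
            QueryTrace trace = rs.getExecutionInfo().getQueryTrace();
            System.out.printf("Coordinator: %s, duration: %d us%n",
                    trace.getCoordinator(), trace.getDurationMicros());
            for (QueryTrace.Event e : trace.getEvents()) {
                System.out.printf("  %s on %s%n", e.getDescription(), e.getSource());
            }
            cluster.close();
        }
    }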
On Fri, Mar 7, 2014 at 5:44 PM, Jonathan Lacefield
wrote:
B is the answer
> On Mar 7, 2014, at 7:35 PM, James Lyons wrote:
>
> I'm wondering about the following scenario.
>
> Consider a cluster of nodes with a replication factor of, say, 3.
> When performing a read at "read one" consistency, let's say my client isn't
> smart enough to route the request to the Cassandra node housing the data at
> first. The contacted node acts as a coordinator
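As an aside on the "client isn't smart enough" part: the DataStax Java driver can
do that routing for you with a token-aware load balancing policy. A rough sketch
(contact point and keyspace are made up):

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.Session;
    import com.datastax.driver.core.policies.DCAwareRoundRobinPolicy;
    import com.datastax.driver.core.policies.TokenAwarePolicy;

    // With TokenAwarePolicy the driver hashes the partition key itself and routes
    // the request to a node that owns the data, so the coordinator is usually also
    // a replica and the extra coordinator-to-replica hop is avoided for most reads.
    public class TokenAwareClient {
        public static void main(String[] args) {
            Cluster cluster = Cluster.builder()
                    .addContactPoint("127.0.0.1")
                    .withLoadBalancingPolicy(
                            new TokenAwarePolicy(new DCAwareRoundRobinPolicy()))
                    .build();
            Session session = cluster.connect("my_keyspace");
            session.execute("SELECT * FROM users WHERE user_id = 42");
            cluster.close();
        }
    }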
Robert, please elaborate why you say "To make best use of Cassandra, my minimum
recommendation is usually RF=3, N=6."
I surmise that with fewer than 6 nodes you'd likely perform better with a
sequential/single-node solution; you need at least six nodes to overcome the
overheads from concur
I agree, that's totally unintuitive. I would have the same expectation: that
compaction is done per row/column pair instead of simply at the row level.
On Fri, Feb 28, 2014 at 11:44 AM, Keith Wright wrote:
> FYI - I recently filed
> https://issues.apache.org/jira/browse/CASSANDRA-6654 and wan
you would create a new session. Don't create a new cluster; that will
quickly exhaust the connections to the servers.
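Roughly the pattern I'd expect in a long-lived, multi-threaded server (sketch only;
class, contact point and keyspace names are made up):

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.Session;

    // One Cluster and one Session per application, built once and shared.
    // Session is thread-safe and maintains the connection pools internally,
    // so request threads should reuse it rather than build their own Cluster.
    public final class CassandraHolder {
        private static final Cluster CLUSTER = Cluster.builder()
                .addContactPoint("10.0.0.1")
                .build();
        private static final Session SESSION = CLUSTER.connect("my_keyspace");

        private CassandraHolder() {}

        public static Session session() {
            return SESSION;           // same Session for every request thread
        }

        public static void shutdown() {
            CLUSTER.close();          // also closes the Session and its pools
        }
    }

PreparedStatements can likewise be prepared once against that shared Session and
then reused from any thread.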
On Fri, Mar 7, 2014 at 3:42 PM, Green, John M (HP Education) <
john.gr...@hp.com> wrote:
> I've been tinkering with both the C++ and Java drivers, but in neither case
> have I got a good indication of how threading and resource mgmt should be
> implemented in a long-lived multi-threaded application server process. That
> is, what should be the scope of a builder, a cluster, session, and s
On Fri, Mar 7, 2014 at 6:00 AM, Oleg Dulin wrote:
> I have the following situation:
>
> 10.194.2.5   RAC1   Up   Normal   378.6 GB    50.00%   0
> 10.194.2.4   RAC1   Up   Normal   427.5 GB    50.00%   127605887595351923798765477786913079295
> 10.194.2.7   RAC1   Up   Normal   350.63 GB   50.00%
On Fri, Mar 7, 2014 at 7:26 AM, Daniel Curry wrote:
> I would like to know what the rule of thumb is for the
> "replication_factor:" number. I think the answer depends on how many nodes
> one has? I.e., three nodes would mean the number 3. What would happen if I
> put the number 2 for a three node cluster? We are using both 3.2.4 and
> 3.1.3 ( that will
Hi, Jonathan:
Thanks for your answer. My original goal with this question is not really
related to backup/restore, but to see whether we can skip the full snapshot
during the ETL that transfers data from Cassandra's SSTable files into another
Hadoop cluster.
Right now, our production generates a full sn
Hi Joel,
On 07/03/14 15:22, Joel Samuelsson wrote:
I try to fetch all the row keys from a column family (there should only be a
couple of hundred in that CF) in several different ways, but I get timeouts
whichever way I try:

Through the cassandra-cli, fetching 45 rows is fine:

    list cf limit 46 columns 0;
    .
    .
    .
    45 Rows Returned.
    Elapsed time: 298 ms
Did you check the node logs for exceptions? You can get this kind of
Are you on Cassandra 1.2 and can utilize the trace functionality? Might be
an informative route.
Ken
Thank you for the link,
On 03/07/2014 07:40 AM, Jonathan Lacefield wrote:
Hello,
The rule of thumb depends on your use case, particularly your consistency
requirements. A typical configuration is to leverage RF=3. Here's
documentation on consistency levels:
http://www.datastax.com/documentation/cassandra/1.2/cassandra/dml/dml_config_consistency_c.html
If you had a 3
Thank you. Your answer makes sense. Is this documented anywhere?
On 03/07/2014 07:37 AM, John Pyeatt wrote:
You really don't want to set your RF to the same value as the number of nodes
in your cluster, for a variety of reasons. The biggest one is that if you have
a node go down, your entire database is essentially down, because you will be
unable to fulfil any requests; the RF can never be met.
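To make the trade-off concrete: a QUORUM request needs floor(RF/2) + 1 replicas,
so with RF=3 that is 2, and a single down node still leaves requests servable;
with RF equal to the cluster size and CL=ALL, one down node blocks them. A rough
sketch of pinning a read to QUORUM with the Java driver (keyspace, table and key
are made up):

    import com.datastax.driver.core.*;

    // Read at QUORUM: with RF=3 this succeeds as long as 2 of the 3 replicas
    // for the partition are up.
    public class QuorumReadExample {
        public static void main(String[] args) {
            Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
            Session session = cluster.connect("my_keyspace");
            Statement read = new SimpleStatement("SELECT * FROM users WHERE user_id = 42")
                    .setConsistencyLevel(ConsistencyLevel.QUORUM);
            session.execute(read);
            cluster.close();
        }
    }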
Hello,
A full snapshot forces a flush, yes.
Incremental backups hard-link the SSTables, yes.
This question really depends on how your cluster was "lost".
Node Loss: you would be able to restore a node by restoring backups + commit
log, or just by using repair.
Cluster Loss: (all nod