Hi,
Are there any stats/tools to find out how many memtable hits vs. SSTable hits there are?
/Girish BK
Thanks a lot for your answers.
2013/11/29 John Sanda
> Couldn't another reason for doing cleanup sequentially be to avoid data
> loss? If data is being streamed from a node during bootstrap and cleanup is
> run too soon, couldn't you wind up in a situation with data loss if the new
> node be
Hi, I'm building a new cluster, and having problems starting Cassandra.
RHEL 5.9
Java 1.7 U40
Cassandra 2.0.2
Previous clusters have started fine using the same methods, although the
environments are a little different (newer RHEL, older Java).
I am installing from DataStax tarball, and after ma
With RHEL, there is a problem with snappy 1.0.5. You’d need to use 1.0.4.1,
which works fine, but you need to download it separately and put it in your lib
directory. You can find the 1.0.4.1 file at
https://github.com/apache/cassandra/tree/cassandra-1.1.12/lib
Jeremy
On 29 Nov 2013, at 10:1
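The workaround Jeremy describes can be sketched as a couple of shell commands. This is only a sketch: the install path, the exact jar file names, and the download URL layout are assumptions — verify the versions actually present in your lib directory before removing anything.

```shell
# Sketch: swap snappy 1.0.5 for 1.0.4.1 in a DataStax tarball install.
# CASSANDRA_HOME and the jar names below are assumptions; verify first.
CASSANDRA_HOME=/opt/cassandra

cd "$CASSANDRA_HOME/lib"
# Remove the problematic snappy 1.0.5 jar.
rm -f snappy-java-1.0.5.jar
# Fetch the 1.0.4.1 jar from the Cassandra 1.1.12 source tree.
curl -LO https://github.com/apache/cassandra/raw/cassandra-1.1.12/lib/snappy-java-1.0.4.1.jar
# Restart Cassandra afterwards so the replacement jar is picked up.
```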
Hi,
If I have the total number of requests and the "Key Cache Requests" count, does
(Total requests - Key Cache Requests) indicate the requests answered by the memtable alone?
/Girish BK
Hi John,
I am trying again :)
The way I understand it is that compression trades IO for CPU: you use far
less IO at the cost of more CPU. The bottleneck for reads is usually the IO
time needed to read the data from disk. As a figure, we
had about 25 reads/s reading from disk, while
Apart from compaction, you might also want to look at the free space required
for repairs.
This could be a problem if you have large rows, as repair is not at the column level.
> On Nov 28, 2013, at 19:21, Robert Wille wrote:
>
> I’m trying to estimate our disk space requirements and I’m wondering
Very many thanks for the swift response Jeremy, snappy 1.0.4 works perfectly.
For information, we have a working environment of RHEL 6.4 and Java 1.7 U25,
with snappy 1.0.5.
All the best, Nigel
-----Original Message-----
From: jeremy.hanna1...@gmail.com [mailto:jeremy.hanna1...@gmail.com]
Sent:
The big * in the explanation: a smaller file-size footprint leads to better
disk cache usage; however, decompression adds work for the JVM to do and increases
the churn of objects in the JVM. Additionally, compression block sizes might
be 4 KB while for some use cases a small row may be 200 bytes. This means
t
I sent this to the Pig list, but didn't get a response...
I'm trying to get Pig running with Cassandra 2.0.2. The instructions
I've been using are here:
https://github.com/jeromatron/pygmalion/wiki/Getting-Started
The Cassandra 2.0.2 src does not have a contrib directory. Am I
missing something
Having an issue with sstable2json. It appears to hang when I run it against an
SSTable that's part of a keyspace with authentication turned on. Running it
against any other keyspace works, and as far as I can tell the only difference
between the keyspaces is authentication. Has anyone run into t
I know I need to get around to upgrading. Is this (exception on startup) an
issue fixed in 2.0.3?
Caused by: java.lang.IndexOutOfBoundsException: index (1) must be less than
size (1)
at
com.google.common.base.Preconditions.checkElementIndex(Preconditions.java:306)
at
com.google
Hi Robert,
We found having about 50% free disk space is a good rule of thumb.
Cassandra will typically use less than that when running compactions,
however it is good to have free space available just in case it compacts
some of the larger SSTables in the keyspace. More information can be found
o
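The 50% rule of thumb above works out to a simple calculation (the numbers here are purely hypothetical, for illustration only):

```shell
# Hypothetical worked example of the ~50% free-space rule of thumb:
# provision roughly twice the live data size so that about half the
# disk stays free for compactions of the largest SSTables.
live_data_gb=400
provision_gb=$((2 * live_data_gb))
echo "${provision_gb} GB"
```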
Hi,
> If Cassandra only compacts one table at a time, then I should be safe if
I keep as much free space as there is data in the largest table. If
Cassandra can compact multiple tables simultaneously, then it seems that I
need as much free space as all the tables put together, which means no more
Hi Robert,
In this case would it be possible to do the following to replace a seed
node?
nodetool disablethrift
nodetool disablegossip
nodetool drain
stop Cassandra
deep copy /var/lib/cassandra/* on old seed node to new seed node
start Cassandra on new seed node
Regards,
Anthony
On Wed, Nov
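Anthony's steps above can be sketched as a command sequence. The host name, data path, and service commands are assumptions — adapt them to your environment, and note this is a sketch of the proposed procedure, not a verified seed-replacement recipe.

```shell
# On the old seed node: stop traffic and flush everything to disk.
nodetool disablethrift   # stop accepting client (Thrift) requests
nodetool disablegossip   # stop participating in gossip
nodetool drain           # flush memtables to SSTables
sudo service cassandra stop

# Copy the data directory to the new seed node
# (host name and path are assumptions).
rsync -a /var/lib/cassandra/ new-seed-host:/var/lib/cassandra/

# On the new seed node:
ssh new-seed-host 'sudo service cassandra start'
```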
Can someone please explain how to do an update using the DataStax QueryBuilder
Java API 1.0.4? I've tried:
// Assumes the usual static import:
// import static com.datastax.driver.core.querybuilder.QueryBuilder.*;
Query update = QueryBuilder
    .update("demo", "user")
    .with(set("col1", "val1"))
    .and(set("col2", "val2"))
    .where(eq("col3", "val3"));
but
https://issues.apache.org/jira/browse/CASSANDRA-5905 looks identical to your
case. It has been marked as a duplicate of
https://issues.apache.org/jira/browse/CASSANDRA-5202, which is still open.
-M
"Jacob Rhoden" wrote in message
news:18cefb6f-6d85-4084-9b08-fcdd6d3a6...@me.com...
I know I need to get around