Yes, currently the ResultSet will contain all the rows; there is no "fetch
size" supported. This will change soon, however, since Cassandra 2.0 adds
"paging" support at the protocol level and the driver will make use of
that. But that won't happen before 2.0.
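For illustration, usage might end up looking something like this once
protocol-level paging lands in the driver (a sketch only: setFetchSize and
the transparent-paging behavior are my assumptions about the future API, not
something the current driver ships):

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.ResultSet;
    import com.datastax.driver.core.Row;
    import com.datastax.driver.core.Session;
    import com.datastax.driver.core.SimpleStatement;

    Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
    Session session = cluster.connect("my_ks");   // hypothetical keyspace

    SimpleStatement stmt = new SimpleStatement("SELECT * FROM my_table");
    stmt.setFetchSize(100);   // assumed API: ask for ~100 rows per page
    ResultSet rs = session.execute(stmt);
    for (Row row : rs) {
        // iteration would fetch further pages on demand instead of
        // holding the entire result set in memory
        System.out.println(row);
    }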
As an aside, this kind of question about
Yes, that's what I am looking for. Thanks.
On Mon, Jul 15, 2013 at 10:08 PM, Jake Luciani wrote:
> Take a look at https://issues.apache.org/jira/browse/CASSANDRA-5661
>
>
> On Mon, Jul 15, 2013 at 4:18 AM, sulong wrote:
>
>> Thanks for your help. Yes, I will try to increase the sstable size. I
Good point. Just to be clear: my suggestions all assume this is a
testing/playground/get-a-feel setup. This is a bad idea for
performance testing (not to mention anywhere near production).
On Mon, Jul 15, 2013 at 3:02 PM, Tim Wintle wrote:
> I might be missing something, but if it is all on one
Hi All,
I am using the DataStax native client for Cassandra and have a question: does
the ResultSet contain all the rows? A JDBC driver has the concept of a fetch
record size. I don't think DataStax did that in their implementation, but
that is probably not a requirement.
But I b
I might be missing something, but if it is all on one machine, then why use
Cassandra or Hadoop?
Sent from my phone
On 13 Jul 2013 01:16, "Martin Arrowsmith" wrote:
> Dear Cassandra experts,
>
> I have an HP Proliant ML350 G8 server, and I want to put virtual
> servers on it. I would like to put
This is really dependent on the workload. Cassandra does well with 8GB
of RAM for the JVM, but you can do 4GB for moderate loads.
JVM requirements for Hadoop jobs and available slots are wholly
dependent on what you are doing (and again whether you are just
integration testing).
You can get away
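If you would rather pin the heap than take the auto-calculated value, the
usual place is conf/cassandra-env.sh; for example (values are illustrative,
not a recommendation for your workload):

    # conf/cassandra-env.sh -- set both or neither
    MAX_HEAP_SIZE="8G"     # total heap for the Cassandra JVM
    HEAP_NEWSIZE="800M"    # young generation; rule of thumb is ~100MB per core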
Couple of questions about the test setup:
- are you running the tests in parallel (via threadCount in Surefire or
Failsafe, for example)? See the pom.xml sketch below.
- is the instance of Cassandra per-class or per-JVM? (or is fork=true?)
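For reference, the Surefire knobs I mean look roughly like this (values are
illustrative):

    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-surefire-plugin</artifactId>
      <configuration>
        <parallel>classes</parallel>   <!-- run test classes concurrently -->
        <threadCount>4</threadCount>   <!-- worker threads for parallel runs -->
        <forkMode>always</forkMode>    <!-- fresh JVM per test class -->
      </configuration>
    </plugin>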
On Sun, Jul 14, 2013 at 5:52 PM, Tristan Seligmann wrote:
> On Mon, Jul 15, 2013 at 12
Take a look at https://issues.apache.org/jira/browse/CASSANDRA-5661
On Mon, Jul 15, 2013 at 4:18 AM, sulong wrote:
> Thanks for your help. Yes, I will try to increase the sstable size. I hope
> it can save me.
>
> 9000 SSTableReaders x 10 RandomAccessReaders x 64KB ≈ 5.6G of memory. If there
> is on
I believe "too many open files" really means too many open file descriptors,
so you may want to check the number of open sockets as well to see whether
you have hit the file descriptor limit. Sockets open a descriptor and count
toward the limit, I believe... I am quite rusty on this and it's from my bad memory.
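Something along these lines should show where you stand on Linux (the PID
placeholder is your Cassandra process):

    ulimit -n                          # per-process file descriptor limit
    lsof -p <cassandra-pid> | wc -l    # descriptors in use: files + sockets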
Odd that this discussion happens now, as I'm also getting this error. I get
a burst of error messages and then the system continues... with no apparent
ill effect.
I can't tell what the system was doing at the time; here is the stack.
BTW, OpsCenter says I only have 4 or 5 SSTables in each of my 6
A filename ending with "ic-###" doesn't tell you anything, except
pointing out the SSTable version it uses ("ic" in this case).
Files related to a secondary index contain the index name, separated by a
dot, in the filename, while filenames of "regular" CFs do not contain
any dots except the one just before the file extension.
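For example, with made-up keyspace/table/index names, the two cases look like:

    ks1-users-ic-42-Data.db                  <- regular CF, version "ic", generation 42
    ks1-users.users_email_idx-ic-7-Data.db   <- secondary index on the same CF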
My understanding is that it is not possible to change the number of
tokens after the node has been initialized.
That was my conclusion too. vnodes currently do not bring any
noticeable benefits to outweigh the trouble. Shuffle is very slow in a large
cluster. Recovery is faster with vnodes, but I h
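For context, the setting in question lives in conf/cassandra.yaml and has to
be decided before the node first joins the ring:

    # conf/cassandra.yaml
    num_tokens: 256   # enables vnodes; a node initialized without this cannot
                      # simply be switched to vnodes later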
Thanks for your help. Yes, I will try to increase the sstable size. I hope
it can save me.
9000 SSTableReaders x 10 RandomAccessReaders x 64KB ≈ 5.6G of memory. If there
is only one RandomAccessReader, the memory will be 9000 x 1 x 64KB ≈ 0.56G.
Looks great. But I think it must be reasonable to recycle
I had exactly the same problem, so I increased the sstable size (from 5 to 50
MB; the default 5MB is most certainly too low for serious use cases). Now the
number of SSTableReader objects is manageable, and my heap is happier.
Note that for immediate effect I stopped the node, removed the *.js
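For reference, the size is a per-table compaction option; with made-up
keyspace/table names, the change looks like:

    ALTER TABLE my_ks.my_table
      WITH compaction = { 'class' : 'LeveledCompactionStrategy',
                          'sstable_size_in_mb' : 50 };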
Why does Cassandra's PoolingSegmentedFile recycle the RandomAccessReader? The
RandomAccessReader objects consume too much memory.
I have a cluster of 4 nodes. Every node's Cassandra JVM has an 8GB heap.
Cassandra's memory is full after about one month, so I have to restart the
4 nodes every month.
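For anyone trying to picture where the memory goes, here is a minimal sketch
of the pooling pattern in question (my own illustration, not Cassandra's
actual code): every recycled reader keeps its 64KB buffer alive, so retained
heap grows with the number of pooled readers per sstable.

    import java.util.concurrent.ConcurrentLinkedQueue;

    // Illustration only -- not Cassandra's implementation.
    class ReaderPool {
        static final int BUFFER_SIZE = 64 * 1024;   // 64KB buffer per reader

        private final ConcurrentLinkedQueue<byte[]> pool =
                new ConcurrentLinkedQueue<>();

        byte[] borrow() {
            byte[] buf = pool.poll();
            return (buf != null) ? buf : new byte[BUFFER_SIZE];  // allocate on miss
        }

        void recycle(byte[] buf) {
            // The buffer is kept for reuse, so it stays on the heap:
            // roughly poolSize * 64KB retained per sstable.
            pool.offer(buf);
        }
    }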
Also, looking through the log, it appears a lot of the files end with "ic-",
which I assume is associated with a secondary index I have on the table. Are
secondary indexes really expensive from a file descriptor standpoint? That
particular table uses the default compaction scheme...
On Jul 1
I have one table that is using leveled compaction. It was set to 10MB; I will
try changing it to 256MB. Is there a good way to merge the existing sstables?
On Jul 14, 2013, at 5:32 PM, Jonathan Haddad wrote:
> Are you using leveled compaction? If so, what do you have the file size set
> at? If you're
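One approach, echoing what was described earlier in this thread (stop the
node, drop the LCS manifest, restart), in rough outline; treat it as a sketch
to verify on a test node, not a recipe:

    nodetool drain    # flush memtables and stop accepting writes
    # 1. stop the cassandra process
    # 2. remove the LCS manifest file for that column family
    # 3. restart the node: LCS re-levels from scratch, and subsequent
    #    compactions write sstables at the newly configured size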