Check Cassandra logs for tombstone threshold error
On Aug 3, 2015 7:32 PM, "Robert Coli" wrote:
> On Mon, Aug 3, 2015 at 2:48 PM, Sid Tantia wrote:
>
>> Any select all or select count query on a particular table is timing out
>> with "Cassandra::Errors::TimeoutError: Timed out"
>>
>> A “SELECT …
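A quick way to confirm the tombstone theory: the thresholds live in
cassandra.yaml as tombstone_warn_threshold and tombstone_failure_threshold,
and a read that trips the failure threshold is aborted and logged on the
server. A minimal sketch for spotting those log lines, assuming the default
packaged-install log path /var/log/cassandra/system.log (adjust for your
install):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

public class TombstoneLogScan {
    public static void main(String[] args) throws IOException {
        // Print every log line that mentions tombstones, e.g. the errors
        // raised when a read exceeds tombstone_failure_threshold.
        Files.lines(Paths.get("/var/log/cassandra/system.log"))
             .filter(line -> line.toLowerCase().contains("tombstone"))
             .forEach(System.out::println);
    }
}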
There's your problem: you're using the DataStax Java driver :) I just ran
into this issue in the last week and it was incredibly frustrating. If you
are doing a simple loop over a "select *" query, the DataStax Java driver
will only process 2^31 - 1 rows (i.e. Java's Integer.MAX_VALUE,
2,147,483,647) …
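For what it's worth, a bounded page size keeps the iteration from buffering
the whole table at once. This is a minimal sketch (not the original poster's
code) against the DataStax Java driver; the contact point and the
keyspace/table names "ks"/"t" are placeholders:

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.SimpleStatement;
import com.datastax.driver.core.Statement;

public class PagedScan {
    public static void main(String[] args) {
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect()) {
            Statement stmt = new SimpleStatement("SELECT * FROM ks.t");
            stmt.setFetchSize(1000);   // pull 1000 rows per page rather than the default
            ResultSet rs = session.execute(stmt);
            long count = 0;            // count in a long, not an int, to sidestep 2^31 overflow
            for (Row row : rs) {       // the driver fetches subsequent pages transparently
                count++;
            }
            System.out.println("rows: " + count);
        }
    }
}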
It could be the Linux kernel killing Cassandra because of memory usage. When
this happens, nothing is logged in Cassandra itself. Check the system logs
(/var/log/messages) and look for a message saying "Out of Memory"... "kill
process"...
On Mon, Jun 8, 2015 at 1:37 PM, Paulo Motta wrote:
> try checking your s…
Try breaking it up into smaller chunks using multiple threads and token
ranges. 86400 is pretty large; I found that ~1000 results per query works
well. This also spreads the load across all servers a little more evenly.
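Something along these lines; this is only a sketch of the token-range
pattern (assuming Java driver 3.x with the default Murmur3Partitioner), and
the keyspace/table/partition-key names "ks"/"t"/"pk" are placeholders:

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.SimpleStatement;
import com.datastax.driver.core.Statement;
import com.datastax.driver.core.TokenRange;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class TokenRangeScan {
    public static void main(String[] args) throws InterruptedException {
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect()) {
            ExecutorService pool = Executors.newFixedThreadPool(8);
            for (TokenRange range : cluster.getMetadata().getTokenRanges()) {
                for (TokenRange r : range.unwrap()) {  // the wrapping range splits in two
                    pool.submit(() -> {
                        // One small subquery per token range, so each request
                        // hits a single replica set instead of the whole ring.
                        Statement stmt = new SimpleStatement(
                                "SELECT * FROM ks.t WHERE token(pk) > ? AND token(pk) <= ?",
                                r.getStart().getValue(), r.getEnd().getValue());
                        stmt.setFetchSize(1000);       // ~1000 rows per page, as above
                        for (Row row : session.execute(stmt)) {
                            // process row
                        }
                    });
                }
            }
            pool.shutdown();
            pool.awaitTermination(1, TimeUnit.HOURS);
        }
    }
}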
On Thu, May 7, 2015 at 4:27 AM, Alprema wrote:
> Hi,
>
> I am writing an application …
What other storage-impacting commands or nuances do you have to consider
when you switch to leveled compaction? For instance, the nodetool cleanup
documentation says: "Running the nodetool cleanup command causes a temporary
increase in disk space usage proportional to the size of your largest
SSTable." Are sstables …
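For context, the switch itself is just a table property change. A
hypothetical example (the table name "ks.t" is a placeholder, and 160 MB is
the usual default SSTable size for leveled compaction):

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

public class SwitchToLcs {
    public static void main(String[] args) {
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect()) {
            // Existing SSTables are gradually recompacted into the leveled
            // layout after this statement runs.
            session.execute("ALTER TABLE ks.t WITH compaction = "
                    + "{'class': 'LeveledCompactionStrategy', 'sstable_size_in_mb': 160}");
        }
    }
}

One relevant nuance: under leveled compaction, SSTables above L0 are capped
near sstable_size_in_mb, so the "largest SSTable" overhead that the cleanup
documentation warns about tends to be much smaller than under size-tiered
compaction.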
I have 1 DC that was originally 3 nodes each set with a single token:
'-9223372036854775808', '-3074457345618258603', '3074457345618258602'
I added two more nodes and ran nodetool move and nodetool cleanup one
server at a time with these tokens: '-9223372036854775808',
'-5534023222112865485', '-18…
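For what it's worth, those values look like evenly spaced Murmur3 tokens.
Here is a small sketch (not from the original mail) of how to compute them
for an N-node ring, whose token range is [-2^63, 2^63 - 1]:

import java.math.BigInteger;

public class BalancedTokens {
    public static void main(String[] args) {
        int n = 5;  // number of nodes in the ring
        BigInteger ringSize = BigInteger.ONE.shiftLeft(64);          // 2^64 tokens in total
        BigInteger minToken = BigInteger.ONE.shiftLeft(63).negate(); // -2^63
        for (int i = 0; i < n; i++) {
            // i-th token: minimum token plus i/n-ths of the ring
            BigInteger token = minToken.add(
                    ringSize.multiply(BigInteger.valueOf(i)).divide(BigInteger.valueOf(n)));
            System.out.println(token);
        }
    }
}

With n = 3 this reproduces the original three tokens above; with n = 5 it
gives the five targets for nodetool move.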