If you exhaust your off-heap memory, Linux has an OOM killer that will kill a
random task.
On Fri, May 10, 2013 at 11:34 AM, Bryan Talbot wrote:
> If off-heap memory (for index samples, bloom filters, row caches, key
> caches, etc.) is exhausted, will Cassandra experience a memory allocation
> error
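Regarding the OOM killer mentioned above: a quick way to tell whether Linux
killed the process (as opposed to the JVM dying with its own OutOfMemoryError)
is to look for the kernel's kill message. A minimal sketch:

    import subprocess

    # Scan the kernel ring buffer for OOM-killer activity; if the Cassandra
    # PID shows up here, the kill came from Linux, not from the JVM.
    # (The exact wording of the message varies between kernel versions.)
    dmesg = subprocess.check_output(["dmesg"]).decode(errors="replace")
    for line in dmesg.splitlines():
        if "out of memory" in line.lower() or "oom-killer" in line.lower():
            print(line)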
Thanks, this is interesting, but if I'm not mistaken, Astyanax uses
CQL2. I'm trying to find a CQL3 solution on top of the binary protocol.
There has to be a way to do this in CQL3...?
Thorsten
On 5/10/2013 1:33 PM, Keith Wright wrote:
What you are proposing should work, and I started to implement that using
multiple threads over the token ranges, but decided instead to use
Astyanax's read-all-rows recipe as it does much of that already. It
required some work to convert the composite CQL2 format returned from
Astyanax into what
Hi all,
I am using C* 1.2.4 with vnodes and am getting the following error when
attempting to fetch some keys in a CQL2 table that was dropped and recreated
programmatically. I'm wondering how I can recover from this? I tried a scrub
but basically got the same error, and so far a repair has to
Hi!
I have been wondering how Repair is actually used by operators. If
people operating Cassandra in production could answer the following
questions, I would greatly appreciate it.
1) What version of Cassandra do you run, on what hardware?
2) What consistency level do you write at? Do you do DELE
On Thu, May 9, 2013 at 7:40 PM, Techy Teck wrote:
> How to figure out from the Datastax OPSCENTER whether the compaction is
> finished/done?
If you triggered a compaction through OpsCenter and you're using the latest
version of OpsCenter (3.1), you will get a notification at the top of the
screen.
My cluster of 11 nodes running Cassandra 1.1.5 is pausing too long for ParNew
GC, which increases our response latency. Is it a good idea to have a smaller
HEAP_NEWSIZE so that we can collect more often, but not pause as long?
INFO [ScheduledTasks:1] 2013-05-10 01:00:17,245 GCInspector.java
On Thu, May 9, 2013 at 3:38 PM, aaron morton wrote:
>> At what point does compression start?
> It starts for new SSTables created after the schema was altered.
@OP:
If you want to compress all existing SSTables, use "upgradesstables"
or "cleanup", both of which rewrite existing SSTables 1:1.
On Thu, May 9, 2013 at 5:40 PM, Techy Teck wrote:
> How to figure out from the Datastax OPSCENTER whether the compaction is
> finished/done?
Minor compaction is conceptually never "done" unless you don't write
to your cluster. I don't know the answer to your OpsCenter question,
but Cassandra expo
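If you'd rather check from the command line than from OpsCenter, one option is
to poll nodetool. A minimal sketch, assuming nodetool is on the PATH and that
"compactionstats" prints a "pending tasks: N" line (worth verifying against
your version's output):

    import re
    import subprocess

    def pending_compactions(host="127.0.0.1"):
        """Return the pending compaction task count reported by nodetool,
        or None if the line isn't found (output varies by version)."""
        out = subprocess.check_output(["nodetool", "-h", host, "compactionstats"])
        match = re.search(r"pending tasks:\s*(\d+)", out.decode())
        return int(match.group(1)) if match else None

    if __name__ == "__main__":
        print(pending_compactions())

A count of 0 only means the backlog has drained for now; as noted above, minor
compaction will start again as soon as new SSTables are flushed.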
C* users,
The simple code below demonstrates pycassa failing to write values
containing more than one-tenth of thrift_framed_transport_size_in_mb.
It writes a single column row using a UUID key.
For example, with the default of
thrift_framed_transport_size_in_mb: 15
the code below
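The code itself is cut off in the preview above; a hypothetical reconstruction
of that kind of test might look like the sketch below (keyspace, column-family,
and column names are assumptions, not the OP's actual schema):

    import uuid
    from pycassa.pool import ConnectionPool
    from pycassa.columnfamily import ColumnFamily

    FRAME_MB = 15   # thrift_framed_transport_size_in_mb from cassandra.yaml

    pool = ConnectionPool('MyKeyspace', ['localhost:9160'])   # hypothetical keyspace
    cf = ColumnFamily(pool, 'MyCF')                           # hypothetical CF

    # A value just over one tenth of the frame size; per the report above, a
    # write of this size fails even though it is well under the 15 MB frame.
    value = 'x' * (FRAME_MB * 1024 * 1024 // 10 + 1)
    cf.insert(str(uuid.uuid4()), {'payload': value})          # single-column row, UUID key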
What is the proper way to scan a table in CQL3 when using the random
partitioner? Specifically, what is the proper way to *start* the scan?
E.g., is it something like:
select rowkey from my_table limit N;
while some_row_is_returned do
select rowkey from my_table where token(rowkey) >
token(last_
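For reference, a minimal runnable version of that token-paging pattern over the
binary protocol with the DataStax Python driver; the keyspace name and page size
are assumptions, "my_table"/"rowkey" come from the pseudocode above, and a text
row key is assumed:

    from cassandra.cluster import Cluster

    cluster = Cluster(['127.0.0.1'])
    session = cluster.connect('my_keyspace')    # hypothetical keyspace
    PAGE = 1000                                 # arbitrary page size

    rows = list(session.execute("SELECT rowkey FROM my_table LIMIT %s", (PAGE,)))
    while rows:
        for row in rows:
            print(row.rowkey)                   # placeholder for real per-row work
        last = rows[-1].rowkey
        # Resume strictly after the last key seen, in token order.
        rows = list(session.execute(
            "SELECT rowkey FROM my_table WHERE token(rowkey) > token(%s) LIMIT %s",
            (last, PAGE)))

    cluster.shutdown()

Starting with a plain "LIMIT N" query, as in the pseudocode, works because rows
come back in token order, so the first page is simply the lowest tokens.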
If off-heap memory (for index samples, bloom filters, row caches, key
caches, etc.) is exhausted, will Cassandra experience a memory allocation
error and quit? If so, are there plans to make the off-heap usage more
dynamic to allow less-used pages to be replaced with "hot" data and the
paged-out /
Do you use vnodes?
On Fri, May 10, 2013 at 10:19 AM, 杨辉强 wrote:
> Hi, all
> I use ./bin/nodetool -h 10.21.229.32 ring
>
> It generates lots of info of same host like this:
>
> 10.21.229.32  rack1  Up  Normal  928.3 MB  24.80%  8875305964978355793
> 10.21.229.32 rack1
Same host, multiple Cassandra instances? But it looks wrong. What Cassandra
version?
On Fri, May 10, 2013 at 3:19 PM, 杨辉强 wrote:
> Hi, all
> I use ./bin/nodetool -h 10.21.229.32 ring
>
> It generates lots of info of same host like this:
>
> 10.21.229.32 rack1 Up Normal 928.3 MB
> On Wed, May 8, 2013 at 10:43 PM, Nicolai Gylling wrote:
>> At the time of normal operation there was 800 GB free space on each node.
>> After the crash, C* started using a lot more, resulting in an
>> out-of-disk-space situation on 2 nodes, e.g. C* used up the 800 GB in just 2
>> days, giving us v