Are you sure you're using the latest version of the 1.2 branch? The
exceptions are in RangeTombstoneList, which was introduced by the patch
for CASSANDRA-5677, and the initial versions of that patch could indeed
produce that kind of error. Could it be that you're using such a
development version of
Obviously, that should not happen. That being said, we did fix a bug
yesterday that could produce that kind of trace:
https://issues.apache.org/jira/browse/CASSANDRA-5799. It will be part of
1.2.7, which should be released tomorrow, though the artifacts being
voted on are at
https://reposit
Yeah, Rob is smart. Don't run crap in production. Run what others have found
to be stable. If you are running the latest, greatest, dumbest, craziest thing
in prod then you are asking for fail, and you will get just that.
FAIL
On Jul 24, 2013, at 12:06 PM, Robert Coli wrote:
> A better solution would likely involve
Mysql?
--
Colin
+1 320 221 9531
On Jul 25, 2013, at 6:08 AM, Derek Andree wrote:
> Yeah, Rob is smart. don't run crap in production. Run what others are
> stable at. If you are running the latest greatest dumbest craziest in prod
> then you ask for fail, and you will get just that.
>
The Cassandra team is pleased to announce the release of the second beta for
the future Apache Cassandra 2.0.0.
Let me first stress that this is still beta software and as such is *not*
ready for production use.
As with the first beta, the goal is to give a preview of what will become
Cassandra 2.0.
Unfortunately the table in question is a CQL3 table, so cassandra-cli will not
include it in its 'describe' output:
WARNING: CQL3 tables are intentionally omitted from 'describe' output.
See https://issues.apache.org/jira/browse/CASSANDRA-4377 for details.
However, I did figure out that apparently I was not sett
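For what it's worth, the CQL3 definition can still be pulled from cqlsh
instead of cassandra-cli; a minimal sketch (keyspace and table names are
made up):

    cqlsh> USE my_keyspace;
    cqlsh> DESCRIBE TABLE my_table;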
Hello,
Quick question on Cassandra, TTLs, tombstones, and GC grace. If we have a
column family whose only mechanism of deleting columns is utilizing TTLs, is
repair really necessary to make tombstones consistent, and therefore would it
be safe to set the gc grace period of the column family to
Hi Michael,
yes, you should never lose a delete, because there are no real deletes, no
matter what version you are using.
btw: There is actually a ticket that builds an optimization on top of that
assumption: CASSANDRA-4917. Basically, if TTL > gc_grace then do not create
tombstones for expiring columns.
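To make the TTL-only pattern concrete, a small sketch in CQL3 (keyspace,
table and column names are made up; gc_grace_seconds = 0 is only safe when
TTLs really are the sole delete mechanism):

    -- every write carries a TTL, so no explicit deletes are ever issued
    INSERT INTO ks.events (id, payload) VALUES (1, 'data') USING TTL 86400;
    -- with no real deletes, the grace period can be reduced on this table
    ALTER TABLE ks.events WITH gc_grace_seconds = 0;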
Some bug was fixed in 2.0.0-beta2 by C* developers. Try it!
2013/7/22 Andrew Cobley
> I've been noticing some strange cassandra-stress results with 2.0.0 beta
> 1. I've set up a single node on a Mac (4 GB RAM, 2.8 GHz Core 2 Duo) and
> installed 2.0.0 beta1.
>
> When I run ./cassandra-stress
With 1.2.7 you can use -Dcassandra.unsafesystem. That will speed up CF
creation. So you will get into even more trouble even faster!
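For reference, one way to pass that property (a sketch, assuming the stock
cassandra-env.sh is used to start the node; test/dev only):

    # cassandra-env.sh: skip durability work on the system keyspace,
    # which speeds up column family creation
    JVM_OPTS="$JVM_OPTS -Dcassandra.unsafesystem=true"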
On Tue, Jul 23, 2013 at 12:23 PM, bjbylh wrote:
> Hi all:
> I have two questions to ask:
> 1. How many column families can be created in a cluster? Is there a limit
I'm getting something similar for beta 2; you can see here it's dropping from
12000 to 400 quite quickly. The output from Cassandra is available on Dropbox
here:
https://dl.dropboxusercontent.com/u/1638201/output.txt (184 KB)
I'm wondering if it's a GC issue?
./cassandra-stress -d 134.36
Does anyone have opinions on the maximum amount of data reasonable to store on
one Cassandra node? If there are limitations, what are the reasons for it?
Thanks,
Anne
Between 500 GB and 1 TB per node is recommended.
But it also depends on your hardware, traffic characteristics and
requirements. Can you give some details on that?
Best Regards,
Cem
On Thu, Jul 25, 2013 at 5:35 PM, Pruner, Anne (Anne) wrote:
> Does anyone have opinions on the maximum amount of data reasonabl
We're storing fairly large files (about 1MB apiece) for a few months and then
deleting the oldest to get more space to add new ones. We have large
requirements (maybe up to 100 TB), so having a 1TB limit would be unworkable.
What is the reason for the limit? Does something fail after that?
If
Issues with large data nodes would be -
* Nodetool repair will be impossible to run
* Your read I/O will suffer since you will almost always go to disk
(each read will take 3 IOPS worst case)
* Bootstrapping the node in case of failure will take days/weeks
From: Pru
You will suffer from long compactions if you are planning to get rid of
old records by TTL.
Best Regards,
Cem.
On Thu, Jul 25, 2013 at 5:51 PM, Kanwar Sangha wrote:
> Issues with large data nodes would be –
>
> * Nodetool repair will be impossible to run
>
I actually wrote my own compactor that deals with this problem.
Anne
From: cem [mailto:cayiro...@gmail.com]
Sent: Thursday, July 25, 2013 11:59 AM
To: user@cassandra.apache.org
Subject: Re: maximum storage per node
You will suffer from long compactions if you are planning to get rid of
old
I'm wondering if it's a GC issue?
yes it is:
1039280992 used; max is 1052770304
most likely a memory leak.
So is that in Cassandra somewhere? Any idea on how I can go about pinpointing
the problem to raise a JIRA issue?
Andy
On 25 Jul 2013, at 17:50, Radim Kolar wrote:
>
>> I'm wondering if it's a GC issue ?
> yes it is:
>
> 1039280992 used; max is 1052770304
>
> most likely memory leak.
>
>
On 25.7.2013 20:03, Andrew Cobley wrote:
Any idea on how I can go about pinpointing the problem to raise a JIRA issue?
http://www.ehow.com/how_8705297_create-java-heap-dump.html
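For example, jmap can take a heap dump that you can then open in a heap
analyzer such as Eclipse MAT (the pid and output path below are placeholders):

    # dump the Cassandra JVM heap for offline analysis
    jmap -dump:format=b,file=/tmp/cassandra-heap.hprof <cassandra-pid>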
Hi,
I've been playing around with CAS (compare-and-swap) in the 2.0 beta, using
the Thrift API. I can't figure out at all how to delete columns while using
CAS, however. I'm able to update/insert columns by adding them to the list
of updates that is passed as a parameter to
org.apache.cassandra.th
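For comparison, the same CAS machinery is exposed in 2.0 as CQL3 lightweight
transactions; a sketch with made-up table and column names (whether a
conditional delete can be expressed through the Thrift cas() call is exactly
the open question here):

    -- insert only if the row does not already exist
    INSERT INTO users (id, name) VALUES (1, 'alice') IF NOT EXISTS;
    -- update only if the current value matches the expected one
    UPDATE users SET name = 'bob' WHERE id = 1 IF name = 'alice';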
Hi,
I am upgrading my Cassandra cluster from 0.8 to 1.2.5.
In Cassandra 1.2.5 the 'num_tokens' attribute confuses me.
I understand that it assigns multiple tokens per node, but I am not
clear on how that helps performance or load balancing. Can anyone
elaborate? Has anyone used this feature
Hey Sylvain,
I pulled the latest from the 1.2 branch and these exceptions were fixed. The
only issue that remains is compacting large rows, which fails with an
assertion error.
Thanks for the tip!
Paul
On Jul 25, 2013, at 12:26 AM, Sylvain Lebresne wrote:
> Are you sure you're using the las
It is very useful for improving the app's performance.
For example, if you have a machine with X capacity, you can set
num_tokens=256. If you add a machine with 2X capacity to your cluster, you
can set num_tokens=512.
So this new machine will automatically receive twice the load.
Moreover, you
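For reference, the setting lives in cassandra.yaml (it is spelled num_tokens
there); a sketch of the example above, assuming the new machine has twice the
capacity of the existing ones:

    # cassandra.yaml on the existing nodes
    num_tokens: 256

    # cassandra.yaml on the new, double-capacity node
    num_tokens: 512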
I ran with the latest from the 1.2 branch, and although the NPE and index out
of bounds exception went away, these still stuck around.
I'll see if I can figure out something to put into JIRA...
On Jul 25, 2013, at 12:32 AM, Sylvain Lebresne wrote:
> Obvisously, that should not happen. That bei
Hi,
I have a test setup where clients randomly make a controlled number of cas()
requests (among other requests) against a cluster of Cassandra 2.0 servers.
After a point, I'm seeing that all requests are pending and my client's
throughput has dropped to 0.0 for all kinds of requests. For this specific
We managed to make it work with RandomPartitioner... Guess we will rely on
it for now...
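For anyone following along, the partitioner is a cluster-wide setting in
cassandra.yaml (a minimal sketch; it must match on every node and cannot be
changed once data has been written):

    partitioner: org.apache.cassandra.dht.RandomPartitioner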
2013/7/23 Hiller, Dean
> Oh, and in the past 0.20.x has been pretty stable by the way... they
> finally switched their numbering scheme, thank god.
>
> Dean
>
> On 7/23/13 2:13 PM, "Hiller, Dean" wrote:
>
>
Try putting multiple instances per machine, with each instance mapped to its
own disk. This might not work with vnodes.
On Thu, Jul 25, 2013 at 9:04 AM, Pruner, Anne (Anne) wrote:
> I actually wrote my own compactor that deals with this problem.
>
> Anne
>
> *From:* c