Thank you very much Aaron. Your explanation is clear and very helpful!
On Mon, Sep 26, 2011 at 4:58 AM, aaron morton wrote:
> If you had RF 3 in a 3-node cluster and everything was repaired, you *should*
> be ok to only take the data from 1 node, if the cluster is not receiving
> writes.
>
Hi Aaron,
Thanks for the explanation. I know the performance will vary when the
offset is a very large number, as mentioned
on CASSANDRA-261. Even if users implement the offset on the client side,
they suffer the same issues; I just think it would be nice if Cassandra
just did
Could there be data corruption or will repairs do this?
Thanks
On Sep 25, 2011 at 15:30, "Jonathan Ellis" wrote:
> Assertion errors are bugs, so that should worry you.
>
> However, I'd upgrade before filing a ticket. There were a lot of
> fixes in 0.8.5.
>
> On Sun, Sep 25, 2011 at 2:2
Surge [1] is a scalability-focused conference held in late September in
Baltimore. It's a pretty cool conference with a good mix of
operationally minded people interested in scalability, distributed
systems, systems-level performance, and good stuff like that. You should
go! [2]
Anyway, I'll be t
Then there is nothing to repair.
Set a better token, use cassandra-cli to increase the RF to 2, and then kick off
a repair.
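For reference, a rough sketch of what that looks like from the shell; the
keyspace name and host below are placeholders, and the exact strategy_options
syntax varies a little between 0.7/0.8 and 1.0, so check the CLI's built-in
help first:

    $ cassandra-cli -h 127.0.0.1
    [default@unknown] update keyspace MyKeyspace
        with strategy_options = [{replication_factor:2}];
    [default@unknown] quit;

    # then run a repair on each node so the new replicas receive the data
    $ nodetool -h 127.0.0.1 repair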
A
-
Aaron Morton
Freelance Cassandra Developer
@aaronmorton
http://www.thelastpickle.com
On 26/09/2011, at 10:12 AM, Radim Kolar wrote:
> On 25.9.2011 22:4
On 25.9.2011 22:40, aaron morton wrote:
That message will be logged if the RF on the keyspace is 1 or if the other
nodes are not up.
What's the RF ?
rf is 1.
If you had RF 3 in a 3-node cluster and everything was repaired, you *should* be
ok to only take the data from 1 node, if the cluster is not receiving writes.
If you want to merge the data from the 3 nodes, rename the files; AFAIK they do
not have to have contiguous file numbers.
Cheers
---
Seeds will not auto-bootstrap themselves when you add them to the cluster.
The normal approach is to have 2 or 3 per DC.
You may also be interested in how Gossip uses the seed list
http://wiki.apache.org/cassandra/ArchitectureGossip
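A quick way to sanity-check what a node thinks its seeds are (a sketch;
depending on the version the addresses live either in a top-level seeds: list
or under seed_provider in conf/cassandra.yaml):

    $ grep -n -A 4 'seeds' conf/cassandra.yaml
    # expect only 2 or 3 addresses per DC to be listed here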
cheers
-
Aaron Morton
Freelance Cassandra De
That message will be logged if the RF on the keyspace is 1 or if the other
nodes are not up.
What's the RF ?
You should also sort out the tokens before going too far.
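Both are quick to check; the keyspace name and host below are placeholders and
the describe output varies a little by version:

    $ cassandra-cli -h 127.0.0.1
    [default@unknown] describe keyspace MyKeyspace;   # shows the replication strategy and RF
    [default@unknown] quit;

    $ nodetool -h 127.0.0.1 ring                      # shows each node's token and ownership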
Cheers
-
Aaron Morton
Freelance Cassandra Developer
@aaronmorton
http://www.thelastpickle.com
On 25/09/201
On Sun, Sep 25, 2011 at 1:10 PM, Yang wrote:
> Thanks Brandon.
>
> I'll try this.
>
> but you can also see my later post regarding message drop :
> http://mail-archives.apache.org/mod_mbox/cassandra-user/201109.mbox/%3ccaanh3_8aehidyh9ybt82_emh3likbcdsenrak3jhfzaj2l+...@mail.gmail.com%3E
>
> that
Thanks Brandon.
I'll try this.
but you can also see my later post regarding message drop :
http://mail-archives.apache.org/mod_mbox/cassandra-user/201109.mbox/%3ccaanh3_8aehidyh9ybt82_emh3likbcdsenrak3jhfzaj2l+...@mail.gmail.com%3E
that seems to show something in either code or background load c
On Sun, Sep 25, 2011 at 12:52 PM, Yang wrote:
> Thanks Brandon.
>
> I suspected that, but I think that's precluded as a possibility, since
> I set up another background job that runs
> echo | nc other_box 7000
> in a loop.
> That job seems to be working fine all the time, so the network seems fine.
This is
Thanks Brandon.
I suspected that, but I think that's precluded as a possibility, since
I set up another background job that runs
echo | nc other_box 7000
in a loop.
That job seems to be working fine all the time, so the network seems fine.
Yang
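For anyone following along, the background check described above was just a
loop along these lines (other_box stands in for the peer's address; 7000 is
Cassandra's internode/storage port):

    while true; do
        date                                              # timestamp so gaps are easy to spot
        echo | nc other_box 7000 || echo "connect to port 7000 failed"
        sleep 5
    done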
On Sun, Sep 25, 2011 at 10:39 AM, Brandon Williams wrote:
>
Thanks Peter and Aaron.
Right now I have too much logging, so the CMS logging gets flushed out
(somehow it does not appear in system.log, only on stdout). I'll
keep an eye on the correlation with ParNew as I get more logging.
Yang
On Sun, Sep 25, 2011 at 3:59 AM, Peter Schuller
wrote:
>> I see th
On Sat, Sep 24, 2011 at 4:54 PM, Yang wrote:
> I'm using 1.0.0
>
>
> there seem to be too many node Up/Dead events detected by the failure
> detector.
> I'm using a 2-node cluster on EC2, in the same region, same security
> group, so I assume the message drop
> rate should be fairly low.
> but i
Thanks Jonathan,
I really don't know. I just ran further tests over the last night to catch
jstacks on the receiving side, and I'm going through those stacks
now. If I can't find anything suspicious, I'll add this debugging to
the sending side too.
Another useful piece of info: when I did a single-
On 25.9.2011 14:31, Radim Kolar wrote:
On 25.9.2011 9:29, Philippe wrote:
I have this happening on 0.8.x. It looks to me as if this happens when
the node is under heavy load, such as unthrottled compactions or a
huge GC.
I have this problem too. Node-down detection must be improved -
incr
What makes you think the problem is on the receiving node, rather than
the sending node?
On Sun, Sep 25, 2011 at 1:19 AM, Yang wrote:
> I constantly see TimedOutException, then followed by
> UnavailableException in my logs,
> so I added some extra debugging to Gossiper.notifyFailureDetector()
>
Assertion errors are bugs, so that should worry you.
However, I'd upgrade before filing a ticket. There were a lot of
fixes in 0.8.5.
On Sun, Sep 25, 2011 at 2:27 AM, Philippe wrote:
> Hello,
> I've seen a couple of these in my logs, running 0.8.4.
> This is an RF=3, 3-node cluster. 2 nodes incl
On 25.9.2011 9:29, Philippe wrote:
I have this happening on 0.8.x. It looks to me as if this happens when the
node is under heavy load, such as unthrottled compactions or a huge GC.
I have this problem too. Node-down detection must be improved -
increase the timeouts a bit or make more tries before
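For what it's worth, the sensitivity of the failure detector is controlled by
phi_convict_threshold in conf/cassandra.yaml (higher values tolerate longer
pauses before a node is marked down). A rough check; the setting may be
commented out or absent in the stock file, and the default is 8:

    $ grep phi_convict_threshold conf/cassandra.yaml
    # phi_convict_threshold: 8    <- default; raising it (e.g. 10-12) makes the
    #                                detector slower to declare a node dead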
> I see the following in my GC log
>
> 1910.513: [GC [1 CMS-initial-mark: 2598619K(26214400K)]
> 13749939K(49807360K), 6.0696680 secs] [Times: user=6.10 sys=0.00,
> real=6.07 secs]
>
> So there is a stop-the-world period of 6 seconds. Does this sound bad?
> Or is 6 seconds OK and we should expect
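As an aside, it can help to route GC output to its own file instead of stdout
so pauses like this are easy to track over time; a sketch of the usual HotSpot
flags added in conf/cassandra-env.sh (the log path is just an example):

    # in conf/cassandra-env.sh
    JVM_OPTS="$JVM_OPTS -XX:+PrintGCDetails"
    JVM_OPTS="$JVM_OPTS -XX:+PrintGCDateStamps"
    JVM_OPTS="$JVM_OPTS -XX:+PrintGCApplicationStoppedTime"
    JVM_OPTS="$JVM_OPTS -Xloggc:/var/log/cassandra/gc.log"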
Thanks! Another problem: what if the number of nodes in the clusters is not the
same? In my case I am moving data from a 3-node cluster to 1 node, and the
keyspace files on the 3 nodes might use the same names...
I am using the new cluster only for emergency usage, so only 1 node is
attached.
On Sun, Sep 25, 2011 at 5:20 PM, aa
It does seem long and will be felt by your application.
Are you running a 47GB heap? Most peeps seem to think 8 to 12 GB is about the
viable maximum.
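For reference, the heap is normally pinned in conf/cassandra-env.sh; a sketch
of bringing it into that 8-12 GB range (the values are only examples):

    # in conf/cassandra-env.sh
    MAX_HEAP_SIZE="8G"
    HEAP_NEWSIZE="800M"   # young gen; roughly 100MB per physical CPU core is the usual advice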
Cheers
-
Aaron Morton
Freelance Cassandra Developer
@aaronmorton
http://www.thelastpickle.com
On 25/09/2011, at 7:14 PM, Yang w
Sounds like it.
A
-
Aaron Morton
Freelance Cassandra Developer
@aaronmorton
http://www.thelastpickle.com
On 25/09/2011, at 6:10 PM, Yan Chunlu wrote:
> Thanks! Is that similar to the problem described in this thread?
>
>
> http://cassandra-user-incubator-apache-org.3065146.n2.nabbl
That can read data from previous versions, i.e. if you upgrade to 0.8 it can
read the existing files from 0.7.
But what you are doing with the sstable loader is (AFAIK) only copying the Data
portion of the CF. Once the table is loaded the node will then build the Index
and the Filter; this is
Check the schema agreement using the CLI by running describe cluster; it will
tell you if the nodes are in agreement.
It may have been a temporary thing while the new machine was applying its
schema.
If the nodes are not in agreement, or you want to dig deeper, look for log
messages from "Migratio
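For example, the describe cluster check looks roughly like this (host is a
placeholder; when the nodes agree you should see a single schema version with
every node listed under it):

    $ cassandra-cli -h 127.0.0.1
    [default@unknown] describe cluster;
    Cluster Information:
       ...
       Schema versions:
            <schema-uuid>: [10.0.0.1, 10.0.0.2, 10.0.0.3]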
Make sure that the directory /var/log/cassandra exists and the user running
cassandra has permission to use it.
There are some instructions here in the readme file
https://github.com/apache/cassandra/blob/cassandra-0.7.9/README.txt#L27
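Something along these lines usually does it (adjust the user to whoever
actually runs the Cassandra process):

    $ sudo mkdir -p /var/log/cassandra
    $ sudo chown -R "$(whoami)" /var/log/cassandra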
Good luck.
A
-
Aaron Morton
Freelance C
Some discussion of large data here
http://wiki.apache.org/cassandra/LargeDataSetConsiderations
When creating large rows you also need to be aware of
in_memory_compaction_limit_in_mb (see the yaml) and that all columns for a row
are stored on the same node. So if you store one file in one row
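For reference, that limit lives in conf/cassandra.yaml; rows that grow past it
are compacted with the slower two-pass path, so it is worth checking it against
your expected row sizes (64 is the usual default):

    $ grep in_memory_compaction_limit conf/cassandra.yaml
    in_memory_compaction_limit_in_mb: 64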
I have this happening on 0.8.x. It looks to me as if this happens when the node
is under heavy load, such as unthrottled compactions or a huge GC.
2011/9/24 Yang
> I'm using 1.0.0
>
>
> there seem to be too many node Up/Dead events detected by the failure
> detector.
> I'm using a 2-node cluster on
Hello,
I'm deploying my cluster with Puppet, so it's actually easier for me to add
all Cassandra nodes to the seed list in the YAML file than to choose a few.
Would there be any reason NOT to do this?
Thanks
Hello,
I've seen a couple of these in my logs, running 0.8.4.
This is an RF=3, 3-node cluster. 2 nodes including this one are on 0.8.4 and
one is on 0.8.5.
The node is still functioning hours later. Should I be worried?
Thanks
ERROR [ReadStage:94911] 2011-09-24 22:40:30,043 AbstractCassandraDaem