I attempted to manually load the Schema sstables onto the new node and
bootstrap it. Unfortunately, when doing so, the new node believed it was
already bootstrapped and just joined the ring with zero data.
To fix (read: hack) that, I removed the following logic from
StorageService.java:523:
is the case here, as getVersion is blank. Don't
all nodes bootstrap with a blank schema version? Why would the Migration
logic expect the lastVersion to match the bootstrapping node's getVersion?
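To make the question concrete, here is a rough sketch (hypothetical names only, not the actual 1.0.11 StorageService code) of the kind of guard being described: the joining node loops until its own schema version matches the last version seen from the ring, and if lastVersion is compared against a node whose own version is still blank, the loop can never exit.

import java.util.Objects;
import java.util.UUID;
import java.util.function.Supplier;

// Illustrative sketch only; the class and method names are made up.
final class SchemaAgreementWait
{
    // Block until the joining node's own schema version matches the version
    // last gossiped by the ring. A freshly bootstrapping node starts with a
    // blank schema version, so if the ring's lastVersion is compared against
    // it here, the loop never exits and the node sits in
    // "JOINING: waiting for schema information to complete".
    static void awaitSchemaAgreement(Supplier<UUID> localVersion,
                                     Supplier<UUID> lastRingVersion) throws InterruptedException
    {
        while (!Objects.equals(localVersion.get(), lastRingVersion.get()))
        {
            System.out.println("JOINING: waiting for schema information to complete");
            Thread.sleep(1000);
        }
    }
}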
On Wednesday, September 5, 2012 4:29:34 AM UTC-7, Jason Harvey wrote:
>
> Hey folks,
>
Hey folks,
I have a 1.0.11 ring running in production with 6 nodes. Trying to
bootstrap a new node in, and I'm getting the following consistently:
INFO [main] 2012-09-05 04:24:13,317 StorageService.java (line 668)
JOINING: waiting for schema information to complete
After waiting for over 30
Got a response from jbellis in IRC saying that the node will have to
build its own hash tree. The request to itself is normal.
On Mon, Sep 19, 2011 at 7:01 AM, Jason Harvey wrote:
> I have a node in my 0.8.5 ring that I'm attempting to repair. I sent
> it the repair command and let i
I have a node in my 0.8.5 ring that I'm attempting to repair. I sent
it the repair command and let it run for a few hours. After checking
the logs it didn't appear to have repaired at all. This was the last
repair-related thing in the logs:
INFO [AntiEntropyStage:1] 2011-09-19 05:53:55,823
AntiEn
Interesting issue this morning.
My apps started throwing a bunch of pycassa timeouts all of a sudden.
The ring looked perfect. No load issues anywhere, and no errors in the
logs.
The site was basically down, so I got desperate and whacked a random
node in the ring. As soon as gossip saw it go dow
Greetings all,
I removetoken'd a node a few weeks back and completely shut down the
node which owned that token. Every few days, it shows back up in the
ring as "Down" and I have to removetoken it again. Thinking it was an
issue with gossip, I shut the ring completely down, deleted all of the
hint
My Xmx and Xms are both 7.5GB. However, I never see the heap usage
reach past 5.5GB. Do you think it is still a good idea to increase the heap?
Thanks,
Jason
On Apr 2, 2:45 am, Peter Schuller wrote:
> > Previously, mark-and-sweep would run around 5.5GB, and would cut heap
> > usage to 4GB. Now, it still
Ah, that would probably explain it. Thanks!
On Apr 1, 8:49 pm, Edward Capriolo wrote:
> On Fri, Apr 1, 2011 at 11:27 PM, Jason Harvey wrote:
> > On further analysis, it looks like this behavior occurs when a node is
> > simply restarted. Is that normal behavior? If mark-an
On further analysis, it looks like this behavior occurs when a node is
simply restarted. Is that normal behavior? If mark-and-sweep becomes
less and less effective over time, does that suggest an issue with GC,
or an issue with memory use?
On Apr 1, 8:21 pm, Jason Harvey wrote:
> Af
After increasing read concurrency from 8 to 64, GC mark-and-sweep was
suddenly able to reclaim much more memory than before.
Previously, mark-and-sweep would run around 5.5GB, and would cut heap
usage to 4GB. Now, it still runs at 5.5GB, but it shrinks all the way
down to 2GB used. This
Nvm. Found the answer in the FAQ :P It is normal.
Thx,
Jason
On Fri, Mar 25, 2011 at 1:24 AM, Jason Harvey wrote:
> I am running a get_range_slices on one of my larger CFs. I am then
> running a 'get' call on each of those keys. I have run into 50 or so
> keys that were re
I am running a get_range_slices on one of my larger CFs. I am then
running a 'get' call on each of those keys. I have run into 50 or so
keys that were returned in the range but come back NotFound when queried
with 'get'.
I repeated the range call to ensure they weren't simply recently
modified/dele
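For reference, the FAQ entry is about "range ghosts": rows that were deleted but whose keys still come back from a range scan with an empty column list. A rough sketch against the 0.7-era Thrift API of how one might filter them out (the host, keyspace, and column family names below are placeholders):

import java.nio.ByteBuffer;
import java.util.List;
import org.apache.cassandra.thrift.*;
import org.apache.thrift.protocol.TBinaryProtocol;
import org.apache.thrift.transport.TFramedTransport;
import org.apache.thrift.transport.TSocket;

public class RangeGhostCheck
{
    public static void main(String[] args) throws Exception
    {
        // Placeholder connection details; substitute your own.
        TFramedTransport transport = new TFramedTransport(new TSocket("localhost", 9160));
        transport.open();
        Cassandra.Client client = new Cassandra.Client(new TBinaryProtocol(transport));
        client.set_keyspace("MyKeyspace");

        // Ask for at most one column per row; a deleted row still returns its key,
        // just with no columns attached.
        SlicePredicate predicate = new SlicePredicate();
        predicate.setSlice_range(new SliceRange(ByteBuffer.allocate(0), ByteBuffer.allocate(0), false, 1));

        KeyRange range = new KeyRange();
        range.setStart_key(ByteBuffer.allocate(0));
        range.setEnd_key(ByteBuffer.allocate(0));
        range.setCount(100);

        List<KeySlice> slices = client.get_range_slices(new ColumnParent("MyCF"), predicate, range, ConsistencyLevel.ONE);
        for (KeySlice slice : slices)
        {
            // No columns => range ghost; a follow-up get() on this key throws NotFoundException.
            if (slice.getColumns().isEmpty())
                continue;
            // ... live row, safe to fetch with get() ...
        }
        transport.close();
    }
}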
Since the 0.7 upgrade, I've been going through and scrubbing all of
our sstables on 0.7.4. Some of the tables have completely unordered
keys, and the scrub fails to work on those tables. In those cases, I
export the sstable via sstable2json, and reimport it with
json2sstable.
Tonight I've run into
Gah! Thx :)
Jason
On Mar 21, 10:34 pm, Chris Goffinet wrote:
> -Dcassandra.join_ring=false
>
> -Chris
>
> On Mar 21, 2011, at 10:32 PM, Jason Harvey wrote:
>
> > I set join_ring=false in my java opts:
> > -Djoin_ring=false
>
> > However, when the node s
I set join_ring=false in my java opts:
-Djoin_ring=false
However, when the node started up, it joined the ring. Is there
something I am missing? Using 0.7.4
Thanks,
Jason
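(Note for anyone finding this later: per Chris's reply above, the property is namespaced as cassandra.join_ring, so a bare -Djoin_ring=false is silently ignored. Assuming the stock conf/cassandra-env.sh layout, one place to set it is:

# conf/cassandra-env.sh -- append the correctly-namespaced flag to JVM_OPTS
JVM_OPTS="$JVM_OPTS -Dcassandra.join_ring=false"
)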
On Mar 20, 2011 at 7:17 PM, Jason Harvey wrote:
> > Just ran into a Java segfault on 0.7.4 when Cassandra created a new
> > commitlog segment. Does that point to a bug in the JVM, or in
> > Cassandra? My guess would be the JVM, but I wanted to check before
> > submitting a
Just ran into a Java segfault on 0.7.4 when Cassandra created a new
commitlog segment. Does that point to a bug in the JVM, or in
Cassandra? My guess would be the JVM, but I wanted to check before
submitting a bug report to anyone.
Thanks!
Jason
Hola everyone,
I have been considering making a few nodes only manage 1 token and
entirely dedicating them to talking to clients. My reasoning behind
this is that I don't like the idea of a node having the dual duty of
handling data and talking to all of the client stuff.
Is there any merit to this tho
Got my answer from the #cassandra channel:
I can set max_compaction_threshold to 0 to prevent compaction from
occurring while I rebuild everything.
Thanks!
Jason Harvey
On Mar 18, 5:45 pm, Jason Harvey wrote:
> Hey everyone,
>
> Is there a way to prevent cassandra from compacting wh
Hey everyone,
Is there a way to prevent cassandra from compacting while it is
running? I am having to do some scrub+sstable2json->json2sstable
magic, and I don't want the data changing at all while I am in the
process.
Thanks,
Jason
>
> Bye
> Norman
>
> 2011/3/13, Jason Harvey :
>
> > nvm, I found the problem. Sstable2json and json2sstable require a
> > log4j-tools properties file. I created one and all was well. I guess
> > that should be added to the default install packages.
It eventually died with an OOM error. Guess the table was just too
big :( Created an improvement request ticket:
https://issues.apache.org/jira/browse/CASSANDRA-2322
Jason
On Mar 12, 10:50 pm, Jason Harvey wrote:
> Trying to import a 3GB JSON file which was exported from sstable2json.
>
Trying to import a 3GB JSON file which was exported from sstable2json.
I let it run for over an hour and saw zero IO activity. The last thing
it logs is the following:
DEBUG 23:19:32,638 collecting 0 of 2147483647:
Avro/Schema:false:2042@1298067089267
DEBUG 23:19:32,638 collecting 1 of 2147483647:
nvm, I found the problem. Sstable2json and json2sstable require a
log4j-tools properties file. I created one and all was well. I guess
that should be added to the default install packages.
Cheers,
Jason
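For the archives, a minimal example of such a properties file (a guessed-at sensible default, not necessarily the exact file that was created): it simply gives the command-line tools a console appender so the "No appenders could be found" warning goes away.

# Minimal log4j config for the command-line tools: log everything to the console.
log4j.rootLogger=INFO, stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d{HH:mm:ss,SSS} %-5p %c - %m%n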
On Sat, Mar 12, 2011 at 12:09 AM, Jason Harvey wrote:
> Sstable2json always spits out
Hey everyone,
I ran into some severely broken SSTables which I ran through
sstable2json to preserve all of the info I could. The scrub process
deleted all of the screwed up rows, so I am now trying to reimport
that data back into cassandra from JSON. I know I must specify an
sstable for json2sstab
Sstable2json always spits out the following when I execute it:
log4j:WARN No appenders could be found for logger
(org.apache.cassandra.config.DatabaseDescriptor).
log4j:WARN Please initialize the log4j system properly.
I verified that the run script sets the CLASSPATH properly, and I even
tried
I applied the #2296 patch and retried a scrub. Now getting thousands
of the following:
java.io.IOException: Keys must be written in ascending order.
at
org.apache.cassandra.io.sstable.SSTableWriter.beforeAppend(SSTableWriter.java:111)
at
org.apache.cassandra.io.sstable.SSTableWri