Hi,
I am currently trying to migrate my test cluster to incremental repairs.
These are the steps I'm doing on every node (a rough command sketch
follows below the list):
- touch marker
- nodetool disableautocompaction
- nodetool repair
- cassandra stop
- find all *Data*.db files older than marker
- invoke sstablerepairedset on those
- cassandra start
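A minimal sketch of that sequence as shell commands; the marker path,
data directory and service commands below are assumptions, not
necessarily what I actually use:

touch /tmp/repair-marker
nodetool disableautocompaction
nodetool repair
sudo service cassandra stop
# mark every sstable written before the marker as repaired
find /var/lib/cassandra/data -name "*Data*.db" ! -newer /tmp/repair-marker \
  -exec sstablerepairedset --really-set --is-repaired {} \;
sudo service cassandra start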
Hi Marcus,
thanks for that quick reply. I also looked at:
http://www.datastax.com/documentation/cassandra/2.1/cassandra/operations/ops_repair_nodes_c.html
which describes the same process. It is for 2.1.x, so I see that 2.1.2+
is not covered there. I did upgrade my test cluster to 2.1.2 and with y
Hi Marcus,
thanks a lot for those pointers. Now further testing can begin - and
I'll wait for 2.1.3. Right now repair times on production are really
painful; maybe that will get better. At least I hope so :-)
Hi,
I'm testing around with Cassandra a fair bit, using 2.1.2, which I know
has some major issues, but it is a test environment. After some bulk
loading, testing with incremental repairs and running out of heap once,
I found that I now have a quite large number of sstables which are
really small:
suggests you're falling behind on compaction in general (check nodetool
compactionstats; you should have <5 outstanding/pending, preferably
0-1). To see whether and how much it is impacting your read performance,
check nodetool cfstats and nodetool cfhistograms.
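For reference, the commands mentioned above in one place; keyspace and
table names are placeholders:

nodetool compactionstats                   # pending/outstanding compactions
nodetool cfstats my_keyspace               # per-table sstable counts and latencies
nodetool cfhistograms my_keyspace my_table # sstables touched per read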
Hi,
just as a short follow-up: it worked - all nodes now have 20-30
sstables instead of thousands.
Cheers,
Roland
Hi Flavien,
I hit a problem with minor compactions recently (just a few days ago) -
but with many more tables. In my case compactions did not get triggered;
you can check this with nodetool compactionstats.
The reason for me was that those minor compactions did not get triggered
since there were almost no reads.
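In case it is useful, what I would check and try in that situation;
keyspace and table names are placeholders:

nodetool compactionstats                            # anything pending at all?
nodetool enableautocompaction my_keyspace my_table  # undo disableautocompaction
# as a last resort, force a one-off major compaction:
# nodetool compact my_keyspace my_table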
Hi,
are you running 2.1.2, by any chance? I had this problem recently and
there were two threads here about it. The problem was that my test
cluster had almost no reads and did not compact sstables.
The reason for me was that those minor compactions did not get triggered
since there were almost no reads.
Hi,
a short question about the new incremental repairs again. I am running
2.1.2 (for testing). Marcus pointed out that 2.1.2 should do incremental
repairs automatically, so I rolled back all the steps I had taken. I
expect that routine repair times will decrease when I do not put much
new data on the c
Hi,
the "automatically" meant this reply earlier:
If you are on 2.1.2+ (or using STCS) you don't those steps (should
probably update the blog post).
Now we keep separate levelings for the repaired/unrepaired data and
move the sstables over after the first incremental repair
My understandin
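One way to check whether sstables actually ended up in the repaired set
is sstablemetadata; the path below is just the 2.1 default layout with
placeholder names:

for f in /var/lib/cassandra/data/my_keyspace/my_table-*/*Data.db; do
  echo "$f: $(sstablemetadata "$f" | grep 'Repaired at')"
done
# a non-zero "Repaired at" value means the sstable is marked repaired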
Hi,
maybe you are running into an issue that I also had on my test cluster.
Since there were almost no reads on it, Cassandra did not run any minor
compactions at all. The solution for me (in this case) was:
ALTER TABLE WITH compaction = {'class':
'SizeTieredCompactionStrategy', 'min_threshold':
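For completeness, the kind of statement meant here, run through cqlsh;
the table name and the threshold value are placeholders, not necessarily
the ones I used:

cqlsh -e "ALTER TABLE my_keyspace.my_table
  WITH compaction = {'class': 'SizeTieredCompactionStrategy', 'min_threshold': 2};"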
Hi,
1) Actual Cassandra 2.1.3, it was upgraded from 2.1.0 (suggested by Al
Tobey from DataStax)
7) minimal reads (usually none, sometimes few)
those two points make me repeat an answer I got earlier. First, where
did you get 2.1.3 from? Maybe I missed it; I will have a look. But if it
is 2.1.2 whi
Hi,
2.1.3 is now the official latest release - I checked this morning and
got this nice surprise. Now it's update time - thanks to everyone
involved; if I ever meet any of you, there's a beer from me :-)
The changelist is rather long:
https://git1-us-west.apache.org/repos/asf?p=cassandra.git;a=blob_plain;f=C
Hi,
try 2.1.3 - with 2.1.2 this is "normal". From the changelog:
* Make sure we don't add tmplink files to the compaction strategy
(CASSANDRA-8580)
* Remove tmplink files for offline compactions (CASSANDRA-8321)
In most cases they are safe to delete; I did this while the node was down.
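If you want to clean them up by hand, something along these lines with
the node stopped (default data path assumed); list first, delete only
after checking:

find /var/lib/cassandra/data -name "*tmplink*" -ls
# find /var/lib/cassandra/data -name "*tmplink*" -delete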
Cheers
Hi Cass,
just a hint from the sidelines - if I got it right, you have:
Table 1: PRIMARY KEY ( (event_day,event_hr),event_time)
Table 2: PRIMARY KEY (event_day,event_time)
Assuming the events you write come in by wall-clock time, the first
table design will have a hotspot, with a specific node getting al
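To make that concrete, hypothetical versions of the two tables; the
keyspace, column names and types are made up for illustration:

cqlsh <<'EOF'
CREATE KEYSPACE IF NOT EXISTS demo
  WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1};

-- partition key (event_day, event_hr): writes rotate to a new partition each hour
CREATE TABLE IF NOT EXISTS demo.events_by_hour (
    event_day  text,
    event_hr   int,
    event_time timestamp,
    payload    text,
    PRIMARY KEY ((event_day, event_hr), event_time)
);

-- partition key event_day only: a whole day's writes land in one partition
CREATE TABLE IF NOT EXISTS demo.events_by_day (
    event_day  text,
    event_time timestamp,
    payload    text,
    PRIMARY KEY (event_day, event_time)
);
EOF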
Hi Piotrek,
your disks are mostly idle as far as I can see (the one at 17% busy
isn't under that much load either). One thing that came to my mind: did
you look at the sizes of your sstables? I did this with something like
find /var/lib/cassandra/data -type f -size -1k -name "*Data.db" | wc
find /var/lib/c
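A slightly extended variant that groups the tiny sstables per
keyspace/table directory (the size threshold is arbitrary):

find /var/lib/cassandra/data -type f -name "*Data.db" -size -10k \
  | awk -F/ '{print $(NF-2) "/" $(NF-1)}' | sort | uniq -c | sort -rn | head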
Hi,
An 8 GB heap is a good value already - going above 8 GB will often
result in noticeable GC pause times in Java, but you can give 12 GB a
try just to see if that helps (and turn it back down again). You can add
a "Heap Used" graph in OpsCenter to get a quick overview of your heap
state.
Best reg
Hi,
I think your clocks are not in sync. Do you have NTP up and running
with a low offset on all your nodes? If not, setting up NTP is the most
probable fix. Cassandra relies on accurate clocks on all cluster nodes
for its (internal) timestamps.
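To check the offset quickly on each node (assuming the classic ntp
daemon; chrony has an equivalent command):

ntpq -p            # peers and current offset in milliseconds
# or, if chrony is used instead:
# chronyc tracking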
Do you see any error while writing? Or just w