Minor compactions will still be triggered whenever a size tier gets 4+ sstables
(for the default size-tiered compaction strategy), so new data is not affected.
It just takes longer for the biggest size tier to reach 4 files, and therefore
longer for the big sstable produced by the major compaction to be compacted again.
Assu
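If you want to verify this on a live cluster, here is a rough sketch that just polls
nodetool for compaction activity after the major compaction. It assumes nodetool is
on the PATH; the interval and number of rounds are arbitrary.

import subprocess
import time

def watch_compactions(interval=30, rounds=10):
    # Print `nodetool compactionstats` every `interval` seconds so you can see
    # whether minor compactions keep firing after the major compaction.
    for _ in range(rounds):
        print(time.strftime("%Y-%m-%d %H:%M:%S"))
        print(subprocess.check_output(["nodetool", "compactionstats"]).decode())
        time.sleep(interval)

if __name__ == "__main__":
    watch_compactions()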
Correct
On Nov 13, 2012, at 5:21 AM, André Cruz wrote:
> On Nov 13, 2012, at 8:54 AM, aaron morton wrote:
>
>>> I don't think that statement is accurate.
>> Which part?
>
> Probably this part:
> "After running a major compaction, automatic minor compactions are no longer
> triggered, freque
On Nov 13, 2012, at 8:54 AM, aaron morton wrote:
>> I don't think that statement is accurate.
> Which part?
Probably this part:
"After running a major compaction, automatic minor compactions are no longer
triggered, frequently requiring you to manually run major compactions on a
routine basis
> I don't think that statement is accurate.
Which part?
Cheers
-
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com
On 13/11/2012, at 6:31 AM, Binh Nguyen wrote:
> I don't think that statement is accurate. The minor compaction is still
> triggered for small sstables, but for the big sstables it may or may not be.
I don't think that statement is accurate. The minor compaction is still
triggered for small sstables, but for the big sstables it may or may not be.
By default Cassandra waits until it finds 4 sstables of the same size before
triggering a compaction, so if the sstables are big then it may take a
while to
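To make the "4 sstables of the same size" rule concrete, here is a toy sketch of the
bucketing idea behind size-tiered compaction. It is not Cassandra's actual code; the
0.5x/1.5x bucket bounds and the threshold of 4 just mirror the documented defaults.

def bucket_sstables(sizes, bucket_low=0.5, bucket_high=1.5, min_threshold=4):
    # Group sstable sizes (in bytes) into tiers of similar size and report
    # which tiers have enough files to be eligible for a minor compaction.
    buckets = []  # list of (average_size, [sizes])
    for size in sorted(sizes):
        for i, (avg, members) in enumerate(buckets):
            if bucket_low * avg <= size <= bucket_high * avg:
                members.append(size)
                buckets[i] = (sum(members) / float(len(members)), members)
                break
        else:
            buckets.append((float(size), [size]))
    return [(avg, members, len(members) >= min_threshold) for avg, members in buckets]

# Example: four small flushes plus the one huge sstable left by a major compaction.
for avg, members, eligible in bucket_sstables([40e6, 42e6, 45e6, 44e6, 900e9]):
    print("tier ~%.0f MB: %d sstable(s), minor compaction eligible: %s"
          % (avg / 1e6, len(members), eligible))

The huge sstable ends up alone in its own tier, which is exactly why it takes so long
before it is compacted again.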
If you have a long-lived row with a lot of tombstones or overwrites, it's often
more efficient to select a known list of columns. There are short circuits in
the read path that can avoid reading older, tombstone-filled fragments of the
row. (Obviously this is hard to do if you don't know the
On Nov 11, 2012, at 12:01 AM, Binh Nguyen wrote:
> FYI: Repair does not remove tombstones. To remove tombstones you need to run
> compaction.
> If you have a lot of data then make sure you run compaction on all nodes
> before running repair. We had big trouble with our system regarding
> tombstones
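To put the suggestion above about selecting a known list of columns into pycassa
terms, here is a rough sketch. The keyspace name, host and key type are assumptions
on my part; NamespaceRevision and the UUIDs are the ones quoted elsewhere in the
thread.

import uuid
import pycassa

pool = pycassa.ConnectionPool('Disco', server_list=['localhost:9160'])  # assumed keyspace/host
nsrev = pycassa.ColumnFamily(pool, 'NamespaceRevision')

row_key = uuid.UUID('3cd88d97-ffde-44ca-8ae9-5336caaebc4e')  # may be a plain string, depending on key_validation_class

# Name-based read: ask for specific super columns by name. The read path can
# seek straight to them instead of walking tombstone-filled stretches of the row.
wanted = [uuid.UUID('13957152-234b-11e2-92bc-e0db550199f4')]
by_name = nsrev.get(row_key, columns=wanted)

# Slice read: walks the row in comparator order and may have to step over a
# large number of column tombstones before it finds enough live columns.
by_slice = nsrev.get(row_key, column_count=34)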
FYI: Repair does not remove tombstones. To remove tombstones you need to
run compaction.
If you have a lot of data then make sure you run compaction on all nodes
before running repair. We had big trouble with our system regarding
tombstones, and it took us a long time to figure out the reason. It tur
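As a reminder of why the order matters: compaction can only drop a tombstone once it
is older than gc_grace_seconds, and repair has to run within that window or deleted
data can reappear. A back-of-the-envelope sketch of that rule (it ignores the
overlapping-sstable checks compaction also performs; 10 days is the default grace
period):

import time

DEFAULT_GC_GRACE = 10 * 24 * 3600  # default gc_grace_seconds (10 days)

def tombstone_purgeable(deleted_at, gc_grace_seconds=DEFAULT_GC_GRACE, now=None):
    # A tombstone can only be dropped by compaction once it is older than
    # gc_grace_seconds; until then it has to stick around so repair can
    # propagate the deletion to any replica that missed it.
    now = time.time() if now is None else now
    return (now - deleted_at) > gc_grace_seconds

print(tombstone_purgeable(time.time() - 12 * 24 * 3600))  # True: 12 days old
print(tombstone_purgeable(time.time() - 1 * 24 * 3600))   # False: 1 day old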
That must be it. I dumped the sstables to json and there are lots of records,
including ones that are returned to my application, that have the deletedAt
attribute. I think this is because the regular repair job was not running for
some time, surely more than the grace period, and lots of tombst
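For anyone who wants to put a number on that, here is a rough sketch that counts
tombstone markers in an sstable2json dump. It assumes the 1.1-era dump layout, where
a live super column shows deletedAt = Long.MIN_VALUE and a deleted column entry is
flagged with 'd', so treat the count as an approximation.

import json
import sys

LIVE = -0x8000000000000000  # Long.MIN_VALUE: a super column with no deletion shows this deletedAt

def count_tombstones(node):
    # Walk the parsed dump and count anything that looks tombstoned:
    # super columns with a real deletedAt, and columns flagged 'd'.
    count = 0
    if isinstance(node, dict):
        if node.get('deletedAt', LIVE) != LIVE:
            count += 1
        for value in node.values():
            count += count_tombstones(value)
    elif isinstance(node, list):
        if len(node) >= 4 and node[3] == 'd':  # [name, value, timestamp, 'd', ...]
            count += 1
        for value in node:
            count += count_tombstones(value)
    return count

if __name__ == '__main__':
    # usage: sstable2json <sstable Data.db file> > dump.json, then pass dump.json here
    with open(sys.argv[1]) as f:
        print(count_tombstones(json.load(f)))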
Can it be that you have tons and tons of tombstoned columns in the middle
of these two? I've seen plenty of performance issues with wide
rows littered with column tombstones (you could check by dumping the
sstables...)
Just a thought...
Josep M.
On Thu, Nov 8, 2012 at 12:23 PM, André Cruz wrote:
These are the two columns in question:
=> (super_column=13957152-234b-11e2-92bc-e0db550199f4,
(column=attributes, value=, timestamp=1351681613263657)
(column=blocks,
value=A4edo5MhHvojv3Ihx_JkFMsF3ypthtBvAZkoRHsjulw06pez86OHch3K3OpmISnDjHODPoCf69bKcuAZSJj-4Q,
timestamp=1351681613263657)
What is the size of the columns? Probably those two are huge.
On Thu, Nov 8, 2012 at 4:01 AM, André Cruz wrote:
> On Nov 7, 2012, at 12:15 PM, André Cruz wrote:
>
> > This error also happens on my application that uses pycassa, so I don't
> think this is the same bug.
>
> I have narrowed it down to a slice between two consecutive columns.
On Nov 7, 2012, at 12:15 PM, André Cruz wrote:
> This error also happens on my application that uses pycassa, so I don't think
> this is the same bug.
I have narrowed it down to a slice between two consecutive columns. Observe
this behaviour using pycassa:
>>> DISCO_CASS.col_fam_nsrev.get(uui
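A self-contained version of that reproduction, for anyone who wants to try it. The
keyspace name and host are guesses; the row key and the starting super column are the
ones quoted elsewhere in the thread.

import uuid
import pycassa

pool = pycassa.ConnectionPool('Disco', server_list=['localhost:9160'])  # assumed keyspace/host
nsrev = pycassa.ColumnFamily(pool, 'NamespaceRevision')

row_key = uuid.UUID('3cd88d97-ffde-44ca-8ae9-5336caaebc4e')
start_col = uuid.UUID('13957152-234b-11e2-92bc-e0db550199f4')

# Asking for a 2-column slice starting at a known super column forces the read
# path to walk whatever sits between it and the next live column on disk.
result = nsrev.get(row_key, column_start=start_col, column_count=2)
print(result.keys())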
On Nov 7, 2012, at 2:12 AM, Chuan-Heng Hsiao wrote:
> I assume you are using cassandra-cli and connecting to some specific node.
>
> You can check the following steps:
>
> 1. Can you still reproduce this issue? (if not, it may be a system/node issue)
Yes. I can reproduce this issue on all 3 nodes
Hi Andre,
I am just a Cassandra user, so the following suggestions may not be valid.
I assume you are using cassandra-cli and connecting to some specific node.
You can check the following steps:
1. Can you still reproduce this issue? (if not, it may be a system/node issue)
2. What's the result when q
Can anyone shed some light on this matter, please? I don't want to just
increase the timeout without understanding why this is happening. Some pointers
for me to investigate would be helpful.
I'm running Cassandra 1.1.5 and these are wide rows (lots of small columns). I
would think that fetching
Hello.
I have a SCF that is acting strange. See these 2 query times:
get NamespaceRevision[3cd88d97-ffde-44ca-8ae9-5336caaebc4e] limit 33;
...
Returned 33 results.
Elapsed time: 41 msec(s).
get NamespaceRevision[3cd88d97-ffde-44ca-8ae9-5336caaebc4e] limit 34;
...
Returned 34 results.
Elapsed ti
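The same comparison can be driven from pycassa with a timer around it. The keyspace
name and host are assumptions; the row key is the one from the queries above.

import time
import uuid
import pycassa

pool = pycassa.ConnectionPool('Disco', server_list=['localhost:9160'])  # assumed keyspace/host
nsrev = pycassa.ColumnFamily(pool, 'NamespaceRevision')
row_key = uuid.UUID('3cd88d97-ffde-44ca-8ae9-5336caaebc4e')

for count in (33, 34):
    start = time.time()
    try:
        result = nsrev.get(row_key, column_count=count)
        print("limit %d: %d columns in %.0f ms"
              % (count, len(result), 1000 * (time.time() - start)))
    except Exception as exc:  # timeouts surface as pycassa/thrift exceptions
        print("limit %d: failed after %.0f ms: %r"
              % (count, 1000 * (time.time() - start), exc))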