Use the community edition and try it out. Compaction has nothing to do with the 
CPU; it's all about raw disk speed. What kind of disks do you have? 7.2k, 10k, 
or 15k RPM?

Are your keys unique, or are you doing updates? If the writes are unique, I would not 
worry about compaction too much and would let it run faster during off-peak hours.
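If you do want compaction to run harder during off-peak hours, one option is to raise 
the throughput cap temporarily and lower it again before traffic picks up. A minimal 
sketch, assuming localhost and the default 16 MB/s cap:

# Off-peak: raise or remove the compaction throughput cap (0 disables throttling)
nodetool -h localhost setcompactionthroughput 0
# Before peak traffic: throttle compaction back to the default
nodetool -h localhost setcompactionthroughput 16
# Watch pending and active compaction tasks
nodetool -h localhost compactionstats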

From: Jay Svc [mailto:jaytechg...@gmail.com]
Sent: 18 April 2013 14:28
To: user@cassandra.apache.org; Wei Zhu
Subject: Re: How to make compaction run faster?

Hi Wei,

Thank you for your reply.

Yes, I observed that concurrent_compactors and multithreaded_compaction have no 
effect on LCS. I also tried a larger SSTable size; it helped keep the SSTable count 
low, which in turn kept the pending compactions low. But even though I have spare 
CPU, I am not able to use it to make compaction faster. Compaction takes a few hours 
to complete.

By the way, are you using DSE 3.0+ or the community edition? How can we use 
Cassandra 1.2? It's not supported by DSE yet.

Thanks,
Jayant K Kenjale


On Thu, Apr 18, 2013 at 1:25 PM, Wei Zhu 
<wz1...@yahoo.com> wrote:
We have tried very hard to speed up LCS on 1.1.6 with no luck. It seems to be 
single-threaded, and there is not much parallelism you can achieve. 1.2 does come 
with parallel LCS, which should help.
One more thing to try is to enlarge the SSTable size, which will reduce the 
number of SSTables. It *might* help LCS.
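If you try that, the knob is the per-column-family sstable_size_in_mb compaction 
option. A rough cassandra-cli sketch with placeholder keyspace/column family names 
and an arbitrary 256 MB size; the exact syntax varies between versions, so treat it 
as a starting point only:

# Placeholder names; existing SSTables are only rewritten as they get compacted again
cat > raise_sstable_size.txt <<'EOF'
use my_keyspace;
update column family my_cf with compaction_strategy = LeveledCompactionStrategy and compaction_strategy_options = {sstable_size_in_mb: 256};
EOF
cassandra-cli -h localhost -f raise_sstable_size.txt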


-Wei
________________________________
From: "Alexis Rodríguez" 
<arodrig...@inconcertcc.com>
To: user@cassandra.apache.org
Sent: Thursday, April 18, 2013 11:03:13 AM
Subject: Re: How to make compaction run faster?

Jay,

await, according to iostat's man page, is the average time for a request to the disk 
to be served. You may try changing the I/O scheduler. I've read that noop is 
recommended for SSDs; you can check here: http://goo.gl/XMiIA
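To inspect and switch the scheduler at runtime, something along these lines works on 
most Linux boxes; sdb is only an assumption for the commit log device, and the change 
does not survive a reboot:

# Show the available schedulers for the device (the active one is in brackets)
cat /sys/block/sdb/queue/scheduler
# Switch to noop at runtime; add elevator=noop to the kernel command line to persist it
echo noop | sudo tee /sys/block/sdb/queue/scheduler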

Regarding compaction, a week ago we had serious problems with compaction on a 
test machine, which we solved by switching from OpenJDK 1.6 to the Sun JDK 1.6.
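It is worth confirming which JVM the node is actually running; on a RHEL-style box 
something like this would do, assuming the Sun/Oracle JDK is already installed:

# Check the JVM currently on the PATH
java -version
# Pick the Sun/Oracle JDK from the installed alternatives
sudo alternatives --config java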



On Thu, Apr 18, 2013 at 2:08 PM, Jay Svc 
<jaytechg...@gmail.com> wrote:
By the way, the compaction and the commit log disk latency are two separate 
problems as I see it.

The important one is the compaction problem. How can I speed that up?

Thanks,
Jay

On Thu, Apr 18, 2013 at 12:07 PM, Jay Svc 
<jaytechg...@gmail.com> wrote:
Looks like the formatting is a bit messed up. Please let me know if you want the same 
in a clean format.

Thanks,
Jay

On Thu, Apr 18, 2013 at 12:05 PM, Jay Svc 
<jaytechg...@gmail.com> wrote:
Hi Aaron, Alexis,

Thanks for the reply. Please find some more details below.

Core problem: compaction is taking a long time to finish, which affects my reads. I 
have spare CPU and memory and want to use them to speed up the compaction process.
Parameters used (most map to cassandra.yaml settings; see the check command after this list):
1. SSTable size: 500 MB (tried various sizes from 20 MB to 1 GB)
2. Compaction throughput: 250 MB/s (tried from 16 to 640 MB/s)
3. Concurrent writes: 196 (tried from 32 to 296)
4. Concurrent compactors: 72 (tried everything from disabling it to setting 172)
5. Multithreaded compaction: true (tried both true and false)
6. Compaction strategy: LCS (tried STCS as well)
7. Memtable total space: 4096 MB (tried the default and some other values too)
Note: I have tried almost all permutations and combinations of these parameters.
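For reference, a quick way to confirm what a node is actually running with, assuming 
a typical package install path:

# Config path is an assumption; adjust for your install
grep -E 'compaction_throughput_mb_per_sec|concurrent_compactors|multithreaded_compaction|concurrent_writes|memtable_total_space_in_mb' /etc/cassandra/conf/cassandra.yaml
# The LCS SSTable size (sstable_size_in_mb) is a per-column-family compaction option, not a yaml setting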
Observations:
I ran the test for 1.15 hrs with writes at a rate of 21,000 records/sec (about 60 GB 
of data over the 1.15 hrs). After I stopped the test, compaction took an additional 
1.30 hrs to finish, which reduced the SSTable count from 170 to 17.
CPU (24 cores): almost 80% idle during the run
JVM: 48 GB RAM, 8 GB heap (3 GB to 5 GB of heap used)
Pending writes: occasional brief spikes, otherwise pretty flat
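The compaction backlog and write pressure can be watched with nodetool; a small 
sketch, with the host assumed:

# Pending and active compaction tasks
nodetool -h localhost compactionstats
# Thread pool stats; the MutationStage row shows pending/blocked writes
nodetool -h localhost tpstats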
Aaron, please find the iostat output below; sdb and dm-2 are the commit log disks. 
Here is the output from 3 different boxes in my cluster.
-bash-4.1$ iostat -xkcd
Linux 2.6.32-358.2.1.el6.x86_64 (edc-epod014-dl380-3) 04/18/2013 _x86_64_ (24 
CPU)
avg-cpu: %user %nice %system %iowait %steal %idle
1.20 1.11 0.59 0.01 0.00 97.09
Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util
sda 0.03 416.56 9.00 7.08 1142.49 1694.55 352.88 0.07 4.08 0.57 0.92
sdb 0.00 172.38 0.08 3.34 10.76 702.89 416.96 0.09 24.84 0.94 0.32
dm-0 0.00 0.00 0.03 0.75 0.62 3.00 9.24 0.00 1.45 0.33 0.03
dm-1 0.00 0.00 0.00 0.00 0.00 0.00 8.00 0.00 0.74 0.68 0.00
dm-2 0.00 0.00 0.08 175.72 10.76 702.89 8.12 3.26 18.49 0.02 0.32
dm-3 0.00 0.00 0.00 0.00 0.00 0.00 7.97 0.00 0.83 0.62 0.00
dm-4 0.00 0.00 8.99 422.89 1141.87 1691.55 13.12 4.64 10.71 0.02 0.90
-bash-4.1$ iostat -xkcd
Linux 2.6.32-358.2.1.el6.x86_64 (ndc-epod014-dl380-1) 04/18/2013 _x86_64_ (24 
CPU)
avg-cpu: %user %nice %system %iowait %steal %idle
1.20 1.12 0.52 0.01 0.00 97.14
Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await svc
sda 0.01 421.17 9.22 7.43 1167.81 1714.38 346.10 0.07 3.99 0.
sdb 0.00 172.68 0.08 3.26 10.52 703.74 427.79 0.08 25.01 0.
dm-0 0.00 0.00 0.04 1.04 0.89 4.16 9.34 0.00 2.58 0.
dm-1 0.00 0.00 0.00 0.00 0.00 0.00 8.00 0.00 0.77 0.
dm-2 0.00 0.00 0.08 175.93 10.52 703.74 8.12 3.13 17.78 0.
dm-3 0.00 0.00 0.00 0.00 0.00 0.00 7.97 0.00 1.14 0.
dm-4 0.00 0.00 9.19 427.55 1166.91 1710.21 13.18 4.67 10.65 0.
-bash-4.1$ iostat -xkcd
Linux 2.6.32-358.2.1.el6.x86_64 (edc-epod014-dl380-1) 04/18/2013 _x86_64_ (24 
CPU)
avg-cpu: %user %nice %system %iowait %steal %idle
1.15 1.13 0.52 0.01 0.00 97.19
Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util
sda 0.02 429.97 9.28 7.29 1176.81 1749.00 353.12 0.07 4.10 0.55 0.91
sdb 0.00 173.65 0.08 3.09 10.50 706.96 452.25 0.09 27.23 0.99 0.31
dm-0 0.00 0.00 0.04 0.79 0.82 3.16 9.61 0.00 1.54 0.27 0.02
dm-1 0.00 0.00 0.00 0.00 0.00 0.00 8.00 0.00 0.68 0.63 0.00
dm-2 0.00 0.00 0.08 176.74 10.50 706.96 8.12 3.46 19.53 0.02 0.31
dm-3 0.00 0.00 0.00 0.00 0.00 0.00 7.97 0.00 0.85 0.83 0.00
dm-4 0.00 0.00 9.26 436.46 1175.98 1745.84 13.11 0.03 0.03 0.02 0.89
Thanks,
Jay

On Thu, Apr 18, 2013 at 2:50 AM, aaron morton 
<aa...@thelastpickle.com> wrote:
> I believe that compaction occurs on the data directories and not in the 
> commitlog.
Yes, compaction only works on the data files.

> When I ran iostat, I saw an "await" of 26 ms to 30 ms for my commit log disk. My CPU 
> is less than 18% used.
>
> How do I reduce the disk latency for my commit log disk? They are SSDs.
That does not sound right. Can you include the output from iostat for the 
commit log and data volumes, along with some information on how many writes you are 
processing and the size of the rows?
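Something like the following would capture that; the interval and sample count are 
arbitrary:

# Extended disk and CPU stats, sampled live rather than the since-boot averages
iostat -xkcd 5 3
# Per-column-family write counts, latencies and mean compacted row sizes
nodetool -h localhost cfstats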

Cheers

-----------------
Aaron Morton
Freelance Cassandra Consultant
New Zealand

@aaronmorton
http://www.thelastpickle.com

On 18/04/2013, at 11:58 AM, Alexis Rodríguez 
<arodrig...@inconcertcc.com> wrote:

> Jay,
>
> I believe that compaction occurs on the data directories and not in the 
> commitlog.
>
> http://wiki.apache.org/cassandra/MemtableSSTable
>
>
>
>
> On Wed, Apr 17, 2013 at 7:58 PM, Jay Svc 
> <jaytechg...@gmail.com> wrote:
> Hi Alexis,
>
> Thank you for your response.
>
> My commit log is on an SSD, which shows me 30 to 40 ms of disk latency.
>
> When I ran iostat, I saw an "await" of 26 ms to 30 ms for my commit log disk. My CPU 
> is less than 18% used.
>
> How do I reduce the disk latency for my commit log disk? They are SSDs.
>
> Thank you in advance,
> Jay
>
>
> On Wed, Apr 17, 2013 at 3:58 PM, Alexis Rodríguez 
> <arodrig...@inconcertcc.com> wrote:
> :D
>
> Jay, check whether your disk utilization allows you to change the configuration 
> the way Edward suggests. iostat -xkcd 1 will show you how much of your disks 
> are in use.
>
>
>
>
> On Wed, Apr 17, 2013 at 5:26 PM, Edward Capriolo 
> <edlinuxg...@gmail.com> wrote:
> three things:
> 1) compaction throughput is fairly low (cassandra.yaml or nodetool)
> 2) concurrent compactors is fairly low (cassandra.yaml)
> 3) multithreaded compaction might be off in your version
>
> Try raising these things. Otherwise consider option 4.
>
> 4) $$$$$$$$$$$$$$$$$$$$$$$ RAID, RAM, CPU $$$$$$$$$$$$$$
>
>
> On Wed, Apr 17, 2013 at 4:01 PM, Jay Svc 
> <jaytechg...@gmail.com> wrote:
> Hi Team,
>
>
> I have high write traffic to my Cassandra cluster and I am seeing a very high 
> number of pending compactions. As the write volume grows, the pending 
> compactions keep increasing. Even when I stop my writes, it takes several 
> hours to finish the pending compactions.
>
> My CF is configured with LCS with sstable_size_in_mb=20. My CPU is below 20%, 
> JVM memory usage is between 45% and 55%. I am using Cassandra 1.1.9.
>
> How can I increase the compaction rate so it runs a bit faster and keeps up with my 
> write speed?
>
> Your inputs are appreciated.
>
> Thanks,
> Jay
>
>
>
>
>





