How many columns are in the rows you are reading from?

30ms is quite high, so I suspect you have relatively large rows, in
which case decreasing the column index threshold may help.
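For example, in storage-conf.xml (64KB is the 0.6 default, if I remember
right; halving it is just a starting point to experiment with):

    <ColumnIndexSizeInKB>32</ColumnIndexSizeInKB>

Finer granularity means a single-column read from a wide row deserializes
less data per lookup, at the cost of a slightly larger index per row.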

On Wed, May 5, 2010 at 4:59 PM, Ran Tavory <ran...@gmail.com> wrote:
> let's see if I can make some assertions, feel free to correct me...
> Well, obviously, reads are much slower in cassandra than writes; everyone
> knows that. But by what factor?
> In my case I read/write only one column at a time. Key, column and value
> are pretty small (< 200b).
> So the numbers are usually: Write Latency ~0.05ms, Read Latency ~30ms. So
> it looks like reads are 600x slower, at least on my hardware. This happens
> when the cache is cold. If the cache is warm reads are better, but
> unfortunately my cache is usually cold...
> If my application keeps reading a cold cache there's nothing I can do to
> make reads faster from cassandra's side. With one client thread this implies
> 1000/30=33 reads/sec, not great.
> However, although read latency is a bottleneck, read throughput isn't. So I
> need to add more reading threads, and I can actually add many of them before
> read latency starts degrading; according to ycsb I can have ~10000 reads/sec
> (on the cluster they tested, numbers may vary) before read latency starts to
> climb. So numbers may vary by cluster size, hardware, data size etc., but
> the idea is: if read latency is 30ms and the cache misses most of the time,
> that's normal; just add more reader threads and you get better throughput.
> Sorry if this sounds trivial, I was just trying to improve on the 30ms reads
> until I realized I actually can't...
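> To make "add more reader threads" concrete, here's a minimal sketch of what
> I mean (Python, with a hypothetical read(key) standing in for whatever
> single-column client call you use):
>
> import concurrent.futures
>
> def read(key):
>     # hypothetical single-column read; ~30ms each when the cache is cold
>     pass
>
> keys = ["key%d" % i for i in range(1000)]
>
> # One thread gives ~1000/30 = 33 reads/sec. With 32 threads the same 30ms
> # per-read latency yields roughly 32 * 33 = ~1000 reads/sec, until the
> # cluster itself saturates (ycsb saw ~10000 reads/sec on their cluster).
> with concurrent.futures.ThreadPoolExecutor(max_workers=32) as pool:
>     list(pool.map(read, keys))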
> On Wed, May 5, 2010 at 7:08 PM, Jonathan Ellis <jbel...@gmail.com> wrote:
>>
>>  - your key cache isn't warm.  capacity 17M, size 0.5M, 468083 reads
>> sounds like most of your reads have been for unique keys.
>>  - the kind of reads you are doing can have a big effect (mostly
>> number of columns you are asking for).  column index granularity plays
>> a role (for non-rowcached reads); so can column comparator (see e.g.
>> https://issues.apache.org/jira/browse/CASSANDRA-1043)
>>  - the slow system reads are all on HH rows, which can get very wide
>> (hence, slow to read the whole row, which is what the HH code does).
>> clean those out either by bringing back the nodes it's hinting for, or
>> just removing the HH data files.
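>>
>> If you go the file-removal route, a sketch (with the node stopped first;
>> the path assumes the default DataFileDirectory in storage-conf.xml, so
>> adjust to yours):
>>
>> $ rm /var/lib/cassandra/data/system/HintsColumnFamily-*
>>
>> Then restart the node.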
>>
>> On Wed, May 5, 2010 at 10:19 AM, Ran Tavory <ran...@gmail.com> wrote:
>> > I'm still trying to figure out where my slowness is coming from...
>> > By now I'm pretty sure it's the reads that are slow, but I'm not sure how
>> > to improve them.
>> > I'm looking at cfstats. Can you say if there are better configuration
>> > options? So far I've used all default settings, except for:
>> >     <Keyspace Name="outbrain_kvdb">
>> >       <ColumnFamily CompareWith="BytesType" Name="KvImpressions"
>> >                     KeysCached="50%"/>
>> >       <ReplicaPlacementStrategy>org.apache.cassandra.locator.RackAwareStrategy</ReplicaPlacementStrategy>
>> >       <ReplicationFactor>2</ReplicationFactor>
>> >       <EndPointSnitch>org.apache.cassandra.locator.EndPointSnitch</EndPointSnitch>
>> >     </Keyspace>
>> >
>> > What does a good read latency look like? I was expecting 10ms, however so
>> > far it seems that my KvImpressions read latency is 30ms, and in the system
>> > keyspace I have 800ms :(
>> > I thought adding KeysCached="50%" would improve my situation, but
>> > unfortunately it looks like the hit rate is about 0. I realize that's
>> > application specific, but maybe there are other magic bullets...
>> > Is there something like adding cache to the system keyspace? 800ms is
>> > pretty bad, isn't it?
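>> > One thing I haven't tried yet is the row cache; since my rows are small,
>> > maybe something like this would help (just guessing at a size, and
>> > assuming the 0.6 RowsCached attribute works like KeysCached):
>> >     <ColumnFamily CompareWith="BytesType" Name="KvImpressions"
>> >                   KeysCached="50%" RowsCached="100000"/>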
>> > See stats below and thanks.
>> >
>> > Keyspace: outbrain_kvdb
>> >         Read Count: 651668
>> >         Read Latency: 34.18622328547666 ms.
>> >         Write Count: 655542
>> >         Write Latency: 0.041145092152752985 ms.
>> >         Pending Tasks: 0
>> >                 Column Family: KvImpressions
>> >                 SSTable count: 13
>> >                 Space used (live): 23304548897
>> >                 Space used (total): 23304548897
>> >                 Memtable Columns Count: 895
>> >                 Memtable Data Size: 2108990
>> >                 Memtable Switch Count: 8
>> >                 Read Count: 468083
>> >                 Read Latency: 151.603 ms.
>> >                 Write Count: 552566
>> >                 Write Latency: 0.023 ms.
>> >                 Pending Tasks: 0
>> >                 Key cache capacity: 17398656
>> >                 Key cache size: 567967
>> >                 Key cache hit rate: 0.0
>> >                 Row cache: disabled
>> >                 Compacted row minimum size: 269
>> >                 Compacted row maximum size: 54501
>> >                 Compacted row mean size: 933
>> > ...
>> > ----------------
>> > Keyspace: system
>> >         Read Count: 1151
>> >         Read Latency: 872.5014448305822 ms.
>> >         Write Count: 51215
>> >         Write Latency: 0.07156788050375866 ms.
>> >         Pending Tasks: 0
>> >                 Column Family: HintsColumnFamily
>> >                 SSTable count: 5
>> >                 Space used (live): 437366878
>> >                 Space used (total): 437366878
>> >                 Memtable Columns Count: 14987
>> >                 Memtable Data Size: 87975
>> >                 Memtable Switch Count: 2
>> >                 Read Count: 1150
>> >                 Read Latency: NaN ms.
>> >                 Write Count: 51211
>> >                 Write Latency: 0.027 ms.
>> >                 Pending Tasks: 0
>> >                 Key cache capacity: 6
>> >                 Key cache size: 4
>> >                 Key cache hit rate: NaN
>> >                 Row cache: disabled
>> >                 Compacted row minimum size: 0
>> >                 Compacted row maximum size: 0
>> >                 Compacted row mean size: 0
>> >                 Column Family: LocationInfo
>> >                 SSTable count: 2
>> >                 Space used (live): 3504
>> >                 Space used (total): 3504
>> >                 Memtable Columns Count: 0
>> >                 Memtable Data Size: 0
>> >                 Memtable Switch Count: 1
>> >                 Read Count: 1
>> >                 Read Latency: NaN ms.
>> >                 Write Count: 7
>> >                 Write Latency: NaN ms.
>> >                 Pending Tasks: 0
>> >                 Key cache capacity: 2
>> >                 Key cache size: 1
>> >                 Key cache hit rate: NaN
>> >                 Row cache: disabled
>> >                 Compacted row minimum size: 0
>> >                 Compacted row maximum size: 0
>> >                 Compacted row mean size: 0
>> >
>> > On Tue, May 4, 2010 at 10:57 PM, Kyusik Chung <kyu...@discovereads.com>
>> > wrote:
>> >>
>> >> I'm using Ubuntu 8.04 on 64-bit hosts on Rackspace Cloud.
>> >>
>> >> I'm in the middle of repeating some perf tests, but so far I get as-good
>> >> or slightly better read perf by using standard disk access mode vs mmap.
>> >> So far consecutive tests are returning consistent numbers.
>> >>
>> >> I'm not sure how to explain it... maybe it's an Ubuntu 8.04 issue with
>> >> mmap. Back when I was using mmap, I was definitely seeing the kswapd0
>> >> process start using CPU as the box ran out of memory, and read
>> >> performance significantly degraded.
>> >>
>> >> Next, I'll run some tests with mmap_index_only, and I'll test with heavy
>> >> concurrent writes as well as reads. I'll let everyone know what I find.
>> >>
>> >> Kyusik Chung
>> >> CEO, Discovereads.com
>> >> kyu...@discovereads.com
>> >>
>> >> On May 4, 2010, at 12:27 PM, Jonathan Ellis wrote:
>> >>
>> >> > Are you using 32-bit hosts?  If not, don't be scared of mmap using a
>> >> > lot of address space; you have plenty.  It won't make you swap more
>> >> > than using buffered I/O.
>> >> >
>> >> > On Tue, May 4, 2010 at 1:57 PM, Ran Tavory <ran...@gmail.com> wrote:
>> >> >> I disabled mmap and indeed memory usage is sane again. So far
>> >> >> performance hasn't been great, but I'll wait and see.
>> >> >> I'm also interested in a way to cap mmap so I can take advantage of it
>> >> >> but not swap the host to death...
>> >> >>
>> >> >> On Tue, May 4, 2010 at 9:38 PM, Kyusik Chung
>> >> >> <kyu...@discovereads.com>
>> >> >> wrote:
>> >> >>>
>> >> >>> This sounds just like the slowness I was asking about in another
>> >> >>> thread - after a lot of reads, the machine uses up all available
>> >> >>> memory on the box and then starts swapping.
>> >> >>> My understanding was that mmap helps greatly with read and write perf
>> >> >>> (until the box starts swapping, I guess)... is there any way to use
>> >> >>> mmap and cap how much memory it takes up?
>> >> >>> What do people use in production?  mmap or no mmap?
>> >> >>> Thanks!
>> >> >>> Kyusik Chung
>> >> >>> On May 4, 2010, at 10:11 AM, Schubert Zhang wrote:
>> >> >>>
>> >> >>> 1. When you initially start up your nodes, please plan the
>> >> >>> InitialToken of each node evenly.
>> >> >>> 2. <DiskAccessMode>standard</DiskAccessMode>
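>> >> >>>
>> >> >>> If standard costs you too much read performance, mmap_index_only is
>> >> >>> the middle ground; in storage-conf.xml (the valid values, as I
>> >> >>> understand them, are auto, mmap, mmap_index_only and standard):
>> >> >>>
>> >> >>>   <DiskAccessMode>mmap_index_only</DiskAccessMode>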
>> >> >>>
>> >> >>> On Tue, May 4, 2010 at 9:09 PM, Boris Shulman <shulm...@gmail.com>
>> >> >>> wrote:
>> >> >>>>
>> >> >>>> I think that the extra (more than 4GB) memory usage comes from the
>> >> >>>> mmapped I/O; that is why it happens only for reads.
>> >> >>>>
>> >> >>>> On Tue, May 4, 2010 at 2:02 PM, Jordan Pittier
>> >> >>>> <jordan.pitt...@gmail.com>
>> >> >>>> wrote:
>> >> >>>>> I'm facing the same issue with swap. It only occurs when I perform
>> >> >>>>> read operations (writes are very fast :)). So I can't help you with
>> >> >>>>> the memory problem.
>> >> >>>>>
>> >> >>>>> But to balance the load evenly between the nodes in the cluster,
>> >> >>>>> just manually fix their tokens (the "formula" is
>> >> >>>>> i * 2^127 / nb_nodes).
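>> >> >>>>>
>> >> >>>>> For example, for a 6-node cluster (a quick Python sketch of that
>> >> >>>>> formula):
>> >> >>>>>
>> >> >>>>> nb_nodes = 6
>> >> >>>>> for i in range(nb_nodes):
>> >> >>>>>     # evenly spaced RandomPartitioner tokens: i * 2**127 / nb_nodes
>> >> >>>>>     print(i, i * 2**127 // nb_nodes)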
>> >> >>>>>
>> >> >>>>> Jordzn
>> >> >>>>>
>> >> >>>>> On Tue, May 4, 2010 at 8:20 AM, Ran Tavory <ran...@gmail.com>
>> >> >>>>> wrote:
>> >> >>>>>>
>> >> >>>>>> I'm looking into performance issues on a 0.6.1 cluster. I see two
>> >> >>>>>> symptoms:
>> >> >>>>>> 1. Reads and writes are slow.
>> >> >>>>>> 2. One of the hosts is doing a lot of GC.
>> >> >>>>>> 1 is slow in the sense that in its normal state the cluster used
>> >> >>>>>> to make around 3-5k reads and writes per second (6-10k operations
>> >> >>>>>> per second), but now it's in the order of 200-400 ops per second,
>> >> >>>>>> sometimes even less.
>> >> >>>>>> 2 looks like this:
>> >> >>>>>> $ tail -f /outbrain/cassandra/log/system.log
>> >> >>>>>>  INFO [GC inspection] 2010-05-04 00:42:18,636 GCInspector.java (line 110) GC for ParNew: 672 ms, 166482384 reclaimed leaving 2872087208 used; max is 4432068608
>> >> >>>>>>  INFO [GC inspection] 2010-05-04 00:42:28,638 GCInspector.java (line 110) GC for ParNew: 498 ms, 166493352 reclaimed leaving 2836049448 used; max is 4432068608
>> >> >>>>>>  INFO [GC inspection] 2010-05-04 00:42:38,640 GCInspector.java (line 110) GC for ParNew: 327 ms, 166091528 reclaimed leaving 2796888424 used; max is 4432068608
>> >> >>>>>> ... and it goes on and on for hours, no stopping...
>> >> >>>>>> The cluster is made of 6 hosts, 3 in one DC and 3 in another.
>> >> >>>>>> Each host has 8G RAM, and the JVM heap is -Xmx4G.
>> >> >>>>>> For some reason the load isn't distributed evenly between the
>> >> >>>>>> hosts, although I'm not sure this is the cause of the slowness:
>> >> >>>>>> $ nodetool -h localhost -p 9004 ring
>> >> >>>>>> Address         Status  Load       Range                                      Ring
>> >> >>>>>>                                    144413773383729447702215082383444206680
>> >> >>>>>> 192.168.252.99  Up      15.94 GB   66002764663998929243644931915471302076    |<--|
>> >> >>>>>> 192.168.254.57  Up      19.84 GB   81288739225600737067856268063987022738    |   ^
>> >> >>>>>> 192.168.254.58  Up      973.78 MB  86999744104066390588161689990810839743    v   |
>> >> >>>>>> 192.168.252.62  Up      5.18 GB    88308919879653155454332084719458267849    |   ^
>> >> >>>>>> 192.168.254.59  Up      10.57 GB   142482163220375328195837946953175033937   v   |
>> >> >>>>>> 192.168.252.61  Up      11.36 GB   144413773383729447702215082383444206680   |-->|
>> >> >>>>>> The slow host is 192.168.252.61, and it isn't the most loaded one.
>> >> >>>>>> The host is waiting a lot on IO, and the load average is usually 6-7:
>> >> >>>>>> $ w
>> >> >>>>>>  00:42:56 up 11 days, 13:22,  1 user,  load average: 6.21, 5.52, 3.93
>> >> >>>>>> $ vmstat 5
>> >> >>>>>> procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
>> >> >>>>>>  r  b   swpd   free   buff  cache   si   so    bi    bo   in     cs us sy id wa st
>> >> >>>>>>  0  8 2147844  45744   1816 4457384    6    5    66    32    5      2  1  1 96  2  0
>> >> >>>>>>  0  8 2147164  49020   1808 4451596  385    0  2345    58 3372   9957  2  2 78 18  0
>> >> >>>>>>  0  3 2146432  45704   1812 4453956  342    0  2274   108 3937  10732  2  2 78 19  0
>> >> >>>>>>  0  1 2146252  44696   1804 4453436  345  164  1939   294 3647   7833  2  2 78 18  0
>> >> >>>>>>  0  1 2145960  46924   1744 4451260  158    0  2423   122 4354  14597  2  2 77 18  0
>> >> >>>>>>  7  1 2138344  44676    952 4504148 1722  403  1722   406 1388    439 87  0 10  2  0
>> >> >>>>>>  7  2 2137248  45652    956 4499436 1384  655  1384   658 1356    392 87  0 10  3  0
>> >> >>>>>>  7  1 2135976  46764    956 4495020 1366  718  1366   718 1395    380 87  0  9  4  0
>> >> >>>>>>  0  8 2134484  46964    956 4489420 1673  555  1814   586 1601 215590 14  2 68 16  0
>> >> >>>>>>  0  1 2135388  47444    972 4488516  785  833  2390   995 3812   8305  2  2 77 20  0
>> >> >>>>>>  0 10 2135164  45928    980 4488796  788  543  2275   626   36
>> >> >>>>>> So, the host is swapping like crazy...
>> >> >>>>>> top shows that it's using a lot of memory. As noted before,
>> >> >>>>>> -Xmx4G, and nothing else seems to be using a lot of memory on the
>> >> >>>>>> host except for the cassandra process; yet of the 8G RAM on the
>> >> >>>>>> host, 92% is used by cassandra. How's that?
>> >> >>>>>> top shows 3.9g shared, 7.2g resident and 15.9g virtual. Why does
>> >> >>>>>> it have 15.9g virtual? And why 7.2g RES? This could explain the
>> >> >>>>>> slowness and the swapping.
>> >> >>>>>> $ top
>> >> >>>>>>   PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
>> >> >>>>>> 20281 cassandr  25   0 15.9g 7.2g 3.9g S 33.3 92.6 175:30.27 java
>> >> >>>>>> So, can the total memory be controlled?
>> >> >>>>>> Or perhaps I'm looking in the wrong direction...
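>> >> >>>>>> In the meantime I may just make the kernel less eager to swap -
>> >> >>>>>> generic Linux tuning, nothing Cassandra-specific:
>> >> >>>>>> $ sudo sysctl vm.swappiness=0   # discourage the kernel from swapping
>> >> >>>>>> $ sudo swapoff -a               # or turn swap off entirely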
>> >> >>>>>> I've looked at all the cassandra JMX counts and nothing seemed
>> >> >>>>>> suspicious so far. By suspicious I mean a large number of pending
>> >> >>>>>> tasks - there were always very small numbers in each pool.
>> >> >>>>>> About read and write latencies, I'm not sure what the normal state
>> >> >>>>>> is, but here's an example of what I see on the problematic host:
>> >> >>>>>> #mbean = org.apache.cassandra.service:type=StorageProxy:
>> >> >>>>>> RecentReadLatencyMicros = 30105.888180684495;
>> >> >>>>>> TotalReadLatencyMicros = 78543052801;
>> >> >>>>>> TotalWriteLatencyMicros = 4213118609;
>> >> >>>>>> RecentWriteLatencyMicros = 1444.4809201925639;
>> >> >>>>>> ReadOperations = 4779553;
>> >> >>>>>> RangeOperations = 0;
>> >> >>>>>> TotalRangeLatencyMicros = 0;
>> >> >>>>>> RecentRangeLatencyMicros = NaN;
>> >> >>>>>> WriteOperations = 4740093;
>> >> >>>>>> And the only pool in which I do see some pending tasks is the
>> >> >>>>>> ROW-READ-STAGE, but it doesn't look like much, usually around 6-8:
>> >> >>>>>> #mbean = org.apache.cassandra.concurrent:type=ROW-READ-STAGE:
>> >> >>>>>> ActiveCount = 8;
>> >> >>>>>> PendingTasks = 8;
>> >> >>>>>> CompletedTasks = 5427955;
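>> >> >>>>>> (ActiveCount = 8 with 8 pending makes me wonder if the read stage
>> >> >>>>>> is simply saturated at the default read pool size - I believe
>> >> >>>>>> that's <ConcurrentReads>8</ConcurrentReads> in storage-conf.xml -
>> >> >>>>>> so maybe raising it is worth a try:
>> >> >>>>>>   <ConcurrentReads>16</ConcurrentReads>
>> >> >>>>>> though with the host already IO-bound, that may not buy much.)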
>> >> >>>>>> Any help finding the solution is appreciated, thanks...
>> >> >>>>>> Below are a few more JMX readings I collected from the system
>> >> >>>>>> that may be interesting.
>> >> >>>>>> #mbean = java.lang:type=Memory:
>> >> >>>>>> Verbose = false;
>> >> >>>>>> HeapMemoryUsage = {
>> >> >>>>>>   committed = 3767279616;
>> >> >>>>>>   init = 134217728;
>> >> >>>>>>   max = 4293656576;
>> >> >>>>>>   used = 1237105080;
>> >> >>>>>>  };
>> >> >>>>>> NonHeapMemoryUsage = {
>> >> >>>>>>   committed = 35061760;
>> >> >>>>>>   init = 24313856;
>> >> >>>>>>   max = 138412032;
>> >> >>>>>>   used = 23151320;
>> >> >>>>>>  };
>> >> >>>>>> ObjectPendingFinalizationCount = 0;
>> >> >>>>>> #mbean = java.lang:name=ParNew,type=GarbageCollector:
>> >> >>>>>> LastGcInfo = {
>> >> >>>>>>   GcThreadCount = 11;
>> >> >>>>>>   duration = 136;
>> >> >>>>>>   endTime = 42219272;
>> >> >>>>>>   id = 11719;
>> >> >>>>>>   memoryUsageAfterGc = {
>> >> >>>>>>     ( CMS Perm Gen ) = {
>> >> >>>>>>       key = CMS Perm Gen;
>> >> >>>>>>       value = {
>> >> >>>>>>         committed = 29229056;
>> >> >>>>>>         init = 21757952;
>> >> >>>>>>         max = 88080384;
>> >> >>>>>>         used = 17648848;
>> >> >>>>>>        };
>> >> >>>>>>      };
>> >> >>>>>>     ( Code Cache ) = {
>> >> >>>>>>       key = Code Cache;
>> >> >>>>>>       value = {
>> >> >>>>>>         committed = 5832704;
>> >> >>>>>>         init = 2555904;
>> >> >>>>>>         max = 50331648;
>> >> >>>>>>         used = 5563520;
>> >> >>>>>>        };
>> >> >>>>>>      };
>> >> >>>>>>     ( CMS Old Gen ) = {
>> >> >>>>>>       key = CMS Old Gen;
>> >> >>>>>>       value = {
>> >> >>>>>>         committed = 3594133504;
>> >> >>>>>>         init = 112459776;
>> >> >>>>>>         max = 4120510464;
>> >> >>>>>>         used = 964565720;
>> >> >>>>>>        };
>> >> >>>>>>      };
>> >> >>>>>>     ( Par Eden Space ) = {
>> >> >>>>>>       key = Par Eden Space;
>> >> >>>>>>       value = {
>> >> >>>>>>         committed = 171835392;
>> >> >>>>>>         init = 21495808;
>> >> >>>>>>         max = 171835392;
>> >> >>>>>>         used = 0;
>> >> >>>>>>        };
>> >> >>>>>>      };
>> >> >>>>>>     ( Par Survivor Space ) = {
>> >> >>>>>>       key = Par Survivor Space;
>> >> >>>>>>       value = {
>> >> >>>>>>         committed = 1310720;
>> >> >>>>>>         init = 131072;
>> >> >>>>>>         max = 1310720;
>> >> >>>>>>         used = 0;
>> >> >>>>>>        };
>> >> >>>>>>      };
>> >> >>>>>>    };
>> >> >>>>>>   memoryUsageBeforeGc = {
>> >> >>>>>>     ( CMS Perm Gen ) = {
>> >> >>>>>>       key = CMS Perm Gen;
>> >> >>>>>>       value = {
>> >> >>>>>>         committed = 29229056;
>> >> >>>>>>         init = 21757952;
>> >> >>>>>>         max = 88080384;
>> >> >>>>>>         used = 17648848;
>> >> >>>>>>        };
>> >> >>>>>>      };
>> >> >>>>>>     ( Code Cache ) = {
>> >> >>>>>>       key = Code Cache;
>> >> >>>>>>       value = {
>> >> >>>>>>         committed = 5832704;
>> >> >>>>>>         init = 2555904;
>> >> >>>>>>         max = 50331648;
>> >> >>>>>>         used = 5563520;
>> >> >>>>>>        };
>> >> >>>>>>      };
>> >> >>>>>>     ( CMS Old Gen ) = {
>> >> >>>>>>       key = CMS Old Gen;
>> >> >>>>>>       value = {
>> >> >>>>>>         committed = 3594133504;
>> >> >>>>>>         init = 112459776;
>> >> >>>>>>         max = 4120510464;
>> >> >>>>>>         used = 959221872;
>> >> >>>>>>        };
>> >> >>>>>>      };
>> >> >>>>>>     ( Par Eden Space ) = {
>> >> >>>>>>       key = Par Eden Space;
>> >> >>>>>>       value = {
>> >> >>>>>>         committed = 171835392;
>> >> >>>>>>         init = 21495808;
>> >> >>>>>>         max = 171835392;
>> >> >>>>>>         used = 171835392;
>> >> >>>>>>        };
>> >> >>>>>>      };
>> >> >>>>>>     ( Par Survivor Space ) = {
>> >> >>>>>>       key = Par Survivor Space;
>> >> >>>>>>       value = {
>> >> >>>>>>         committed = 1310720;
>> >> >>>>>>         init = 131072;
>> >> >>>>>>         max = 1310720;
>> >> >>>>>>         used = 0;
>> >> >>>>>>        };
>> >> >>>>>>      };
>> >> >>>>>>    };
>> >> >>>>>>   startTime = 42219136;
>> >> >>>>>>  };
>> >> >>>>>> CollectionCount = 11720;
>> >> >>>>>> CollectionTime = 4561730;
>> >> >>>>>> Name = ParNew;
>> >> >>>>>> Valid = true;
>> >> >>>>>> MemoryPoolNames = [ Par Eden Space, Par Survivor Space ];
>> >> >>>>>> #mbean = java.lang:type=OperatingSystem:
>> >> >>>>>> MaxFileDescriptorCount = 63536;
>> >> >>>>>> OpenFileDescriptorCount = 75;
>> >> >>>>>> CommittedVirtualMemorySize = 17787711488;
>> >> >>>>>> FreePhysicalMemorySize = 45522944;
>> >> >>>>>> FreeSwapSpaceSize = 2123968512;
>> >> >>>>>> ProcessCpuTime = 12251460000000;
>> >> >>>>>> TotalPhysicalMemorySize = 8364417024;
>> >> >>>>>> TotalSwapSpaceSize = 4294959104;
>> >> >>>>>> Name = Linux;
>> >> >>>>>> AvailableProcessors = 8;
>> >> >>>>>> Arch = amd64;
>> >> >>>>>> SystemLoadAverage = 4.36;
>> >> >>>>>> Version = 2.6.18-164.15.1.el5;
>> >> >>>>>> #mbean = java.lang:type=Runtime:
>> >> >>>>>> Name = 20...@ob1061.nydc1.outbrain.com;
>> >> >>>>>>
>> >> >>>>>> ClassPath = /outbrain/cassandra/apache-cassandra-0.6.1/bin/../conf:/outbrain/cassandra/apache-cassandra-0.6.1/bin/../build/classes:/outbrain/cassandra/apache-cassandra-0.6.1/bin/../lib/antlr-3.1.3.jar:/outbrain/cassandra/apache-cassandra-0.6.1/bin/../lib/apache-cassandra-0.6.1.jar:/outbrain/cassandra/apache-cassandra-0.6.1/bin/../lib/avro-1.2.0-dev.jar:/outbrain/cassandra/apache-cassandra-0.6.1/bin/../lib/clhm-production.jar:/outbrain/cassandra/apache-cassandra-0.6.1/bin/../lib/commons-cli-1.1.jar:/outbrain/cassandra/apache-cassandra-0.6.1/bin/../lib/commons-codec-1.2.jar:/outbrain/cassandra/apache-cassandra-0.6.1/bin/../lib/commons-collections-3.2.1.jar:/outbrain/cassandra/apache-cassandra-0.6.1/bin/../lib/commons-lang-2.4.jar:/outbrain/cassandra/apache-cassandra-0.6.1/bin/../lib/google-collections-1.0.jar:/outbrain/cassandra/apache-cassandra-0.6.1/bin/../lib/hadoop-core-0.20.1.jar:/outbrain/cassandra/apache-cassandra-0.6.1/bin/../lib/high-scale-lib.jar:/outbrain/cassandra/apache-cassandra-0.6.1/bin/../lib/ivy-2.1.0.jar:/outbrain/cassandra/apache-cassandra-0.6.1/bin/../lib/jackson-core-asl-1.4.0.jar:/outbrain/cassandra/apache-cassandra-0.6.1/bin/../lib/jackson-mapper-asl-1.4.0.jar:/outbrain/cassandra/apache-cassandra-0.6.1/bin/../lib/jline-0.9.94.jar:/outbrain/cassandra/apache-cassandra-0.6.1/bin/../lib/json-simple-1.1.jar:/outbrain/cassandra/apache-cassandra-0.6.1/bin/../lib/libthrift-r917130.jar:/outbrain/cassandra/apache-cassandra-0.6.1/bin/../lib/log4j-1.2.14.jar:/outbrain/cassandra/apache-cassandra-0.6.1/bin/../lib/slf4j-api-1.5.8.jar:/outbrain/cassandra/apache-cassandra-0.6.1/bin/../lib/slf4j-log4j12-1.5.8.jar;
>> >> >>>>>>
>> >> >>>>>> BootClassPath = /usr/java/jdk1.6.0_17/jre/lib/alt-rt.jar:/usr/java/jdk1.6.0_17/jre/lib/resources.jar:/usr/java/jdk1.6.0_17/jre/lib/rt.jar:/usr/java/jdk1.6.0_17/jre/lib/sunrsasign.jar:/usr/java/jdk1.6.0_17/jre/lib/jsse.jar:/usr/java/jdk1.6.0_17/jre/lib/jce.jar:/usr/java/jdk1.6.0_17/jre/lib/charsets.jar:/usr/java/jdk1.6.0_17/jre/classes;
>> >> >>>>>>
>> >> >>>>>> LibraryPath = /usr/java/jdk1.6.0_17/jre/lib/amd64/server:/usr/java/jdk1.6.0_17/jre/lib/amd64:/usr/java/jdk1.6.0_17/jre/../lib/amd64:/usr/java/packages/lib/amd64:/lib:/usr/lib;
>> >> >>>>>>
>> >> >>>>>> VmName = Java HotSpot(TM) 64-Bit Server VM;
>> >> >>>>>> VmVendor = Sun Microsystems Inc.;
>> >> >>>>>> VmVersion = 14.3-b01;
>> >> >>>>>> BootClassPathSupported = true;
>> >> >>>>>> InputArguments = [ -ea, -Xms128M, -Xmx4G, -XX:TargetSurvivorRatio=90,
>> >> >>>>>> -XX:+AggressiveOpts, -XX:+UseParNewGC, -XX:+UseConcMarkSweepGC,
>> >> >>>>>> -XX:+CMSParallelRemarkEnabled, -XX:+HeapDumpOnOutOfMemoryError,
>> >> >>>>>> -XX:SurvivorRatio=128, -XX:MaxTenuringThreshold=0,
>> >> >>>>>> -Dcom.sun.management.jmxremote.port=9004,
>> >> >>>>>> -Dcom.sun.management.jmxremote.ssl=false,
>> >> >>>>>> -Dcom.sun.management.jmxremote.authenticate=false,
>> >> >>>>>> -Dstorage-config=/outbrain/cassandra/apache-cassandra-0.6.1/bin/../conf,
>> >> >>>>>> -Dcassandra-pidfile=/var/run/cassandra.pid ];
>> >> >>>>>> ManagementSpecVersion = 1.2;
>> >> >>>>>> SpecName = Java Virtual Machine Specification;
>> >> >>>>>> SpecVendor = Sun Microsystems Inc.;
>> >> >>>>>> SpecVersion = 1.0;
>> >> >>>>>> StartTime = 1272911001415;
>> >> >>>>>> ...
>> >> >>>>>
>> >> >>>
>> >> >>>
>> >> >>
>> >> >>
>> >> >
>> >> >
>> >> >
>> >> > --
>> >> > Jonathan Ellis
>> >> > Project Chair, Apache Cassandra
>> >> > co-founder of Riptano, the source for professional Cassandra support
>> >> > http://riptano.com
>> >>
>> >
>> >
>>
>>
>>
>> --
>> Jonathan Ellis
>> Project Chair, Apache Cassandra
>> co-founder of Riptano, the source for professional Cassandra support
>> http://riptano.com
>
>



-- 
Jonathan Ellis
Project Chair, Apache Cassandra
co-founder of Riptano, the source for professional Cassandra support
http://riptano.com
