Hi Jürgen,

On our Thumpers running MySQL we limit the ARC to 4GB; on systems
with less RAM we limit it to 1GB. What kind of disks are backing
the pools? There might be some kind of array cache flushing issue
going on. You might also try putting your InnoDB log files on a UFS
partition and seeing whether the performance problems go away.
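
For reference, capping the ARC is done with a line like the following
in /etc/system (the value is in bytes, so this is our 4GB cap; a reboot
is needed for it to take effect):

* cap the ZFS ARC at 4GB (value is in bytes)
set zfs:zfs_arc_max = 0x100000000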

Best Regards,
Jason

On Fri, Apr 11, 2008 at 1:21 PM, Jürgen Keil <[EMAIL PROTECTED]> wrote:
> > And another case was an attempt to use "xsltproc(1)" on a
> > big xml file, this time on an amd64 x2 machine with 4GB of
> > memory, using zfs, and the xsltproc process had grown to
> > use > 2GB of memory.  Again heavy disk thrashing, and I
> > didn't have the impression that the ARC shrank enough to
> > prevent that thrashing.
>
>  It looks like this, in top:
>
>  load averages:  0.01,  0.02,  0.05                            21:11:47
>  75 processes:  74 sleeping, 1 on cpu
>  CPU states: 99.4% idle,  0.1% user,  0.5% kernel,  0.0% iowait,  0.0% swap
>  Memory: 4031M real, 108M free, 2693M swap in use, 1310M swap free
>
>    PID USERNAME LWP PRI NICE  SIZE   RES STATE    TIME    CPU COMMAND
>   7587 jk         1  60    0 2430M 2306M sleep    0:25  0.24% xsltproc
>   7588 jk         1  59    0 3916K 1472K cpu/0    0:00  0.02% top
>     44 root       1  59    0    0K    0K sleep    0:10  0.01% Xorg
>   7613 jk         1  59    0 2912K 1792K sleep    0:00  0.01% iostat
>   15634 postgres   1  59    0   19M 1152K sleep    0:53  0.00% postgres
>     80 root       1  59    0   13M 1644K sleep    0:07  0.00% dtgreet
>   4276 daemon     2  60  -20 2756K  676K sleep    3:48  0.00% nfsd
>   27392 root       1  59    0 6300K 1680K sleep    3:17  0.00% ypserv
>   3307 root       1  59    0 7080K 2716K sleep    3:13  0.00% intrd
>   7137 root       1 100  -20 2716K 1364K sleep    1:39  0.00% xntpd
>   2876 root       5  59    0 3568K 1780K sleep    1:21  0.00% automountd
>    579 daemon     1  59    0 3588K 1384K sleep    1:20  0.00% rpcbind
>      9 root      15  59    0   20M 1316K sleep    1:20  0.00% svc.configd
>   26803 root      35  59    0 5576K 2944K sleep    0:52  0.00% nscd
>    305 root       7  59    0 3996K 1000K sleep    0:42  0.00% devfsadm
>
>
>  The resident set size of xsltproc is *slowly* increasing.
>
>  truss on the xsltproc process shows no system calls:
>
>  % truss -p 7587
>  ^C
>
>
>  But there are lots of page faults:
>
>  % truss -m all -p 7587
>     Incurred fault #11, FLTPAGE  %pc = 0xFEE68B92  addr = 0x0F19A024
>     Incurred fault #11, FLTPAGE  %pc = 0xFEE68B9A  addr = 0x0F19B018
>     Incurred fault #11, FLTPAGE  %pc = 0xFEE68B92  addr = 0x0F19D004
>     Incurred fault #11, FLTPAGE  %pc = 0xFEE68B92  addr = 0x0F19E00C
>     Incurred fault #11, FLTPAGE  %pc = 0xFEE68BA8  addr = 0x0F1A1008
>     Incurred fault #11, FLTPAGE  %pc = 0xFEE68B92  addr = 0x0F1A3004
>     [... ~85 further FLTPAGE faults, all at the same three %pc values,
>      omitted ...]
>  ^C    Incurred fault #11, FLTPAGE  %pc = 0xFEE68BA8  addr = 0x0F2E5008
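
(As a cross-check from the time side, prstat's microstate columns show
where the process spends its time; in that output the DFL column is the
percentage of time waiting on data page faults. A suggested invocation:

  % prstat -mLp 7587 5

A process thrashing through the swap device should show most of its
time under DFL.)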
>
>
>
>  In iostat we see lots of 4 KB transfers (~145 reads/s at ~580 KB/s,
>  i.e. about 4 KB per read), probably page-ins from the swap device:
>
>  % iostat -xnzc 5
>      cpu
>   us sy wt id
>   1  4  0 95
>                     extended device statistics
>     r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
>     5.3    1.2   67.4   22.2  0.1  0.0   18.6    2.9   0   1 c4d0
>     0.0    0.0    0.0    0.0  0.0  0.0    0.0    9.5   0   0 c6t0d0
>     0.0    0.0    0.1    0.1  0.0  0.0    0.0    9.3   0   0 c7t0d0
>     0.0    0.0    0.0    0.0  0.0  0.0    0.0   11.7   0   0 c8t0d0
>     0.0    0.0    0.0    0.0  0.0  0.0    0.0    1.7   0   0 leo:/files/jk
>      cpu
>   us sy wt id
>   0  0  0 100
>                     extended device statistics
>     r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
>   144.6    0.2  578.4    0.9  0.0  1.0    0.0    6.9   0 100 c4d0
>      cpu
>   us sy wt id
>   0  0  0 99
>                     extended device statistics
>     r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
>   149.0    0.0  596.0    0.0  0.0  1.0    0.0    6.7   0 100 c4d0
>      cpu
>   us sy wt id
>   0  0  0 100
>                     extended device statistics
>     r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
>   143.6    0.0  574.4    0.0  0.0  1.0    0.0    7.0   0 100 c4d0
>      cpu
>   us sy wt id
>   0  0  0 99
>                     extended device statistics
>     r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
>   149.4    0.2  597.6    1.3  0.0  1.0    0.0    6.7   0 100 c4d0
>      cpu
>   us sy wt id
>   0  1  0 99
>                     extended device statistics
>     r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
>   148.4    0.0  593.6    0.0  0.0  1.0    0.0    6.7   0 100 c4d0
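
(One way to confirm that those reads are anonymous page-ins from the
swap device rather than file-system reads is vmstat's paging breakdown,
where the "api" column counts anonymous page-ins. A sketch:

  % vmstat -p 5

The api rate should roughly track the ~145 reads/s iostat reports for
c4d0 if the reads really are swap page-ins.)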
>
>
>  And the ARC has not yet shrunk to its minimum size:
>
>  > ::arc
>  hits                      =  25265436
>  misses                    =   1907270
>  demand_data_hits          =  19051747
>  demand_data_misses        =    983231
>  demand_metadata_hits      =   5766668
>  demand_metadata_misses    =     99264
>  prefetch_data_hits        =    219895
>  prefetch_data_misses      =    812873
>  prefetch_metadata_hits    =    227126
>  prefetch_metadata_misses  =     11902
>  mru_hits                  =  12030281
>  mru_ghost_hits            =     55142
>  mfu_hits                  =  12896265
>  mfu_ghost_hits            =    143230
>  deleted                   =   1997843
>  recycle_miss              =    115508
>  mutex_miss                =      1716
>  evict_skip                =   8146750
>  hash_elements             =     59206
>  hash_elements_max         =    132131
>  hash_collisions           =   2110982
>  hash_chains               =     15058
>  hash_chain_max            =        11
>  p                         =      1155 MB
>  c                         =      1356 MB
>  c_min                     =       377 MB
>  c_max                     =      3017 MB
>  size                      =      1356 MB
>  hdr_size                  =   9954840
>  l2_hits                   =         0
>  l2_misses                 =         0
>  l2_feeds                  =         0
>  l2_rw_clash               =         0
>  l2_writes_sent            =         0
>  l2_writes_done            =         0
>  l2_writes_error           =         0
>  l2_writes_hdr_miss        =         0
>  l2_evict_lock_retry       =         0
>  l2_evict_reading          =         0
>  l2_free_on_write          =         0
>  l2_abort_lowmem           =         0
>  l2_cksum_bad              =         0
>  l2_io_error               =         0
>  l2_size                   =         0
>  l2_hdr_size               =         0
>  arc_no_grow               =         1
>  arc_tempreserve           =         0 MB
>  arc_meta_used             =       448 MB
>  arc_meta_limit            =       754 MB
>  arc_meta_max              =       464 MB
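
(A quick way to watch whether the ARC actually shrinks over time,
without dropping into mdb, is the arcstats kstat; for example, this
samples the current ARC size in bytes every 5 seconds:

  % kstat -p zfs:0:arcstats:size 5

c, c_min, and the other counters shown above live under the same kstat.)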