> A couple more questions here.
... 
> What do you have zfs compression set to?  The gzip level is
> tunable, according to zfs set, anyway:
> 
> PROPERTY       EDIT  INHERIT   VALUES
> compression     YES      YES   on | off | lzjb | gzip | gzip-[1-9]

I've used the "default" gzip compression level, that is, I used:

    zfs set compression=gzip gzip_pool
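
A specific gzip level can also be requested directly; a sketch, assuming
the same pool name as above (plain "gzip" is equivalent to gzip-6):

```shell
# Lower levels trade compression ratio for less CPU time, which may
# matter here since the compression runs in kernel taskq threads.
zfs set compression=gzip-1 gzip_pool

# Verify the active setting:
zfs get compression gzip_pool
```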

> You still have idle time in this lockstat (and mpstat).
> 
> What do you get for a lockstat -A -D 20 sleep 30?

# lockstat -A -D 20 /usr/tmp/fill /gzip_pool/junk
lockstat: warning: 723388 aggregation drops on CPU 0
lockstat: warning: 239335 aggregation drops on CPU 1
lockstat: warning: 62366 aggregation drops on CPU 0
lockstat: warning: 51856 aggregation drops on CPU 1
lockstat: warning: 45187 aggregation drops on CPU 0
lockstat: warning: 46536 aggregation drops on CPU 1
lockstat: warning: 687832 aggregation drops on CPU 0
lockstat: warning: 575675 aggregation drops on CPU 1
lockstat: warning: 46504 aggregation drops on CPU 0
lockstat: warning: 40874 aggregation drops on CPU 1
lockstat: warning: 45571 aggregation drops on CPU 0
lockstat: warning: 33422 aggregation drops on CPU 1
lockstat: warning: 501063 aggregation drops on CPU 0
lockstat: warning: 361041 aggregation drops on CPU 1
lockstat: warning: 651 aggregation drops on CPU 0
lockstat: warning: 7011 aggregation drops on CPU 1
lockstat: warning: 61600 aggregation drops on CPU 0
lockstat: warning: 19386 aggregation drops on CPU 1
lockstat: warning: 566156 aggregation drops on CPU 0
lockstat: warning: 105502 aggregation drops on CPU 1
lockstat: warning: 25362 aggregation drops on CPU 0
lockstat: warning: 8700 aggregation drops on CPU 1
lockstat: warning: 585002 aggregation drops on CPU 0
lockstat: warning: 645299 aggregation drops on CPU 1
lockstat: warning: 237841 aggregation drops on CPU 0
lockstat: warning: 20931 aggregation drops on CPU 1
lockstat: warning: 320102 aggregation drops on CPU 0
lockstat: warning: 435898 aggregation drops on CPU 1
lockstat: warning: 115 dynamic variable drops with non-empty dirty list
lockstat: warning: 385192 aggregation drops on CPU 0
lockstat: warning: 81833 aggregation drops on CPU 1
lockstat: warning: 259105 aggregation drops on CPU 0
lockstat: warning: 255812 aggregation drops on CPU 1
lockstat: warning: 486712 aggregation drops on CPU 0
lockstat: warning: 61607 aggregation drops on CPU 1
lockstat: warning: 1865 dynamic variable drops with non-empty dirty list
lockstat: warning: 250425 aggregation drops on CPU 0
lockstat: warning: 171415 aggregation drops on CPU 1
lockstat: warning: 166277 aggregation drops on CPU 0
lockstat: warning: 74819 aggregation drops on CPU 1
lockstat: warning: 39342 aggregation drops on CPU 0
lockstat: warning: 3556 aggregation drops on CPU 1
lockstat: warning: ran out of data records (use -n for more)

Adaptive mutex spin: 4701 events in 64.812 seconds (73 events/sec)

Count indv cuml rcnt     spin Lock                   Caller                  
-------------------------------------------------------------------------------
 1726  37%  37% 0.00        2 vph_mutex+0x17e8       pvn_write_done+0x10c    
 1518  32%  69% 0.00        1 vph_mutex+0x17e8       hat_page_setattr+0x70   
  264   6%  75% 0.00        2 vph_mutex+0x2000       page_hashin+0xad        
  194   4%  79% 0.00        4 0xfffffffed2ee0a88     cv_wait+0x69            
  106   2%  81% 0.00        2 vph_mutex+0x2000       page_hashout+0xdd       
   91   2%  83% 0.00        4 0xfffffffed2ee0a88     taskq_dispatch+0x2c9    
   83   2%  85% 0.00        4 0xfffffffed2ee0a88     taskq_thread+0x1cb      
   83   2%  86% 0.00        1 0xfffffffec17a56b0     ufs_iodone+0x3d         
   47   1%  87% 0.00        4 0xfffffffec1e4ce98     vdev_queue_io+0x85      
   43   1%  88% 0.00        6 0xfffffffec139a2c0     trap+0xf66              
   38   1%  89% 0.00        6 0xfffffffecb5f8cd0     cv_wait+0x69            
   37   1%  90% 0.00        4 0xfffffffec143ee90     dmult_deque+0x36        
   26   1%  91% 0.00        2 htable_mutex+0x108     htable_release+0x79     
   26   1%  91% 0.00        1 0xfffffffec17a56b0     ufs_putpage+0xa4        
   18   0%  91% 0.00        4 0xfffffffec00dca48     ghd_intr+0xa8           
   17   0%  92% 0.00        2 0xfffffffec00dca48     ghd_waitq_delete+0x35   
   12   0%  92% 0.00        2 htable_mutex+0x248     htable_release+0x79     
   11   0%  92% 0.00        8 0xfffffffec1e4ce98     vdev_queue_io_done+0x3b 
   10   0%  93% 0.00        3 0xfffffffec00dca48     ghd_transport+0x71      
   10   0%  93% 0.00        2 0xffffff00077dc138     page_get_mnode_freelist+0xdb
-------------------------------------------------------------------------------

Adaptive mutex block: 167 events in 64.812 seconds (3 events/sec)

Count indv cuml rcnt     nsec Lock                   Caller                  
-------------------------------------------------------------------------------
   78  47%  47% 0.00    31623 vph_mutex+0x17e8       pvn_write_done+0x10c    
   25  15%  62% 0.00    97897 0xfffffffed2ee0a88     cv_wait+0x69            
   13   8%  69% 0.00    70426 0xfffffffed2ee0a88     taskq_dispatch+0x2c9    
   10   6%  75% 0.00    30559 0xfffffffec17a56b0     ufs_iodone+0x3d         
    9   5%  81% 0.00    34569 0xfffffffec143ee90     dmult_deque+0x36        
    4   2%  83% 0.00    30579 0xfffffffec00dca48     ghd_waitq_delete+0x35   
    3   2%  85% 0.00   117726 0xfffffffec14365f0     ohci_hcdi_pipe_bulk_xfer+0x50
    2   1%  86% 0.00    32324 pse_mutex+0x1a80       page_unlock+0x3b        
    1   1%  87% 0.00 381871279 ph_mutex+0xac0         page_create_va+0x334    
    1   1%  87% 0.00    34696 pse_mutex+0x1cc0       page_unlock+0x3b        
    1   1%  88% 0.00    29577 pse_mutex+0x1e80       page_unlock+0x3b        
    1   1%  89% 0.00    29324 pse_mutex+0x1580       page_unlock+0x3b        
    1   1%  89% 0.00    29029 pse_mutex+0x1400       page_unlock+0x3b        
    1   1%  90% 0.00    29806 pse_mutex+0x1700       page_unlock+0x3b        
    1   1%  90% 0.00    30873 pse_mutex+0x8c0        page_unlock+0x3b        
    1   1%  91% 0.00    34591 pse_mutex+0x700        page_unlock+0x3b        
    1   1%  92% 0.00    32196 pse_mutex+0x140        page_unlock+0x3b        
    1   1%  92% 0.00    31986 pse_mutex+0x440        page_unlock+0x3b        
    1   1%  93% 0.00    27500 pse_mutex+0xd00        page_unlock+0x3b        
    1   1%  93% 0.00    36400 pse_mutex+0xb00        page_unlock+0x3b        
-------------------------------------------------------------------------------

Spin lock spin: 252 events in 64.812 seconds (4 events/sec)

Count indv cuml rcnt     spin Lock                   Caller                  
-------------------------------------------------------------------------------
   97  38%  38% 0.00        4 cpu0_disp              disp_lock_enter+0x31    
   44  17%  56% 0.00        3 cpu0_disp              disp_lock_enter_high+0x11
   44  17%  73% 0.00        3 0xfffffffec1a82708     disp_lock_enter+0x31    
   36  14%  88% 0.00        3 xc_mbox_lock+0x11      mutex_vector_enter+0x4ee
   16   6%  94% 0.00        4 0xfffffffec1a82708     disp_lock_enter_high+0x11
   14   6% 100% 0.00        5 turnstile_table+0xd38  disp_lock_enter+0x31    
    1   0% 100% 0.00        4 turnstile_table+0xad8  disp_lock_enter+0x31    
-------------------------------------------------------------------------------

R/W writer blocked by writer: 8281 events in 64.812 seconds (128 events/sec)

Count indv cuml rcnt     nsec Lock                   Caller                  
-------------------------------------------------------------------------------
 8281 100% 100% 0.00 20609451 0xfffffffec17a56a0     ufs_rwlock+0xfd         
-------------------------------------------------------------------------------

Adaptive mutex hold: 14476537 events in 64.812 seconds (223362 events/sec)

Count indv cuml rcnt     nsec Lock                   Caller                  
-------------------------------------------------------------------------------
289666   2%   2% 0.00     1242 vph_mutex+0x2000       page_hashout+0xed       
274484   2%   4% 0.00     1295 vph_mutex+0x2000       page_hashin+0xc6        
256452   2%   6% 0.00     1359 vph_mutex+0x17e8       hat_page_setattr+0xbf   
255344   2%   7% 0.00     1245 vph_mutex+0x17e8       pvn_write_done+0x14b    
213470   1%   9% 0.00     1181 pcf+0x40               page_free+0x1ba         
146344   1%  10% 0.00     4043 0xfffffffec14742a0     getblk_common+0x1e1     
146344   1%  11% 0.00     1238 0xfffffffec14742a0     brelse+0x15d            
131168   1%  12% 0.00     1274 0xfffffffec142fdf0     deltamap_add+0x1f5      
106877   1%  13% 0.00     1277 0xfffffffec17a56b0     ufs_putpage+0xf8        
64537   0%  13% 0.00     1283 0xfffffffec0208cc8     free_vpmap+0x85         
64508   0%  13% 0.00     1293 0xfffffffec0208d60     free_vpmap+0x85         
61387   0%  14% 0.00     4756 0xfffffffec0208d08     get_free_vpmap+0x357    
61261   0%  14% 0.00     4640 0xfffffffec0208da0     get_free_vpmap+0x357    
59765   0%  15% 0.00     1294 0xfffffffec0208da0     free_vpmap+0x85         
59731   0%  15% 0.00     1251 0xfffffffec0208d08     free_vpmap+0x85         
57656   0%  16% 0.00     4686 0xfffffffec0208cc8     get_free_vpmap+0x357    
57655   0%  16% 0.00     4555 0xfffffffec0208d60     get_free_vpmap+0x357    
49703   0%  16% 0.00     1270 0xffffff00077dd718     page_ctr_sub+0x80       
48208   0%  17% 0.00     1273 0xffffff00077dd720     page_ctr_sub+0x80       
47333   0%  17% 0.00     1276 0xffffff00077dd710     page_ctr_sub+0x80       
-------------------------------------------------------------------------------

Spin lock hold: 1092671 events in 64.812 seconds (16859 events/sec)

Count indv cuml rcnt     nsec Lock                   Caller                  
-------------------------------------------------------------------------------
508900  47%  47% 0.00     2965 xc_mbox_lock+0x11      mutex_vector_exit+0xf1  
292686  27%  73% 0.00     1353 sleepq_head+0x12f8     disp_lock_exit+0x56     
32656   3%  76% 0.00     1537 pcicfg_mutex+0x1       mutex_vector_exit+0xf1  
24638   2%  79% 0.00     1555 cpu0_disp              disp_lock_exit_high+0x34
21397   2%  81% 0.00     1580 0xfffffffec1a82708     disp_lock_exit_high+0x34
13948   1%  82% 0.00     1787 cpu[0]+0xe8            disp_lock_exit_high+0x34
12960   1%  83% 0.00     1742 hres_lock              dtrace_hres_tick+0x55   
12763   1%  84% 0.00     1943 cpu[1]+0xe8            disp_lock_exit_high+0x34
 8280   1%  85% 0.00     6327 turnstile_table+0xd38  disp_lock_exit+0x56     
 8268   1%  86% 0.00     4912 turnstile_table+0xd38  disp_lock_exit_high+0x34
 5670   1%  86% 0.00     5425 sleepq_head+0x1ab8     disp_lock_exit+0x56     
 5237   0%  87% 0.00     2626 cpu0_disp              disp_lock_exit_nopreempt+0x44
 4854   0%  87% 0.00     1204 sleepq_head+0x1eb8     disp_lock_exit+0x56     
 4843   0%  88% 0.00     1223 sleepq_head+0x1a08     disp_lock_exit+0x56     
 4671   0%  88% 0.00     1382 sleepq_head+0xf58      disp_lock_exit+0x56     
 4599   0%  88% 0.00     1272 sleepq_head+0x1f98     disp_lock_exit+0x56     
 4308   0%  89% 0.00     3844 sleepq_head+0x1ab8     disp_lock_exit_nopreempt+0x44
 4138   0%  89% 0.00     1405 cpu0_disp              disp_lock_exit+0x56     
 4130   0%  90% 0.00     1353 sleepq_head+0x17c8     disp_lock_exit+0x56     
 4111   0%  90% 0.00     1526 0xfffffffec1a82708     disp_lock_exit+0x56     
-------------------------------------------------------------------------------

R/W writer hold: 164764 events in 64.812 seconds (2542 events/sec)

Count indv cuml rcnt     nsec Lock                   Caller                  
-------------------------------------------------------------------------------
125018  76%  76% 0.00    24819 0xfffffffec17a56a8     wrip+0x489              
12533   8%  83% 0.00     7098 0xfffffffec17a56a8     ufs_write+0x54b         
12532   8%  91% 0.00  1414263 0xfffffffec17a56a0     ufs_rwunlock+0x24       
 4729   3%  94% 0.00     1444 0xffffff01efb81d08     dnode_new_blkid+0x19d   
 3630   2%  96% 0.00     2009 0xfffffffec17a56a8     wrip+0x3cd              
 1648   1%  97% 0.00    31342 0xfffffffed09c3490     as_map_locked+0x305     
 1647   1%  98% 0.00     2829 0xfffffffec2b2bc88     segvn_extend_prev+0x149 
  438   0%  98% 0.00    40571 0xfffffffed09c33b0     as_unmap+0x25c          
  196   0%  99% 0.00     1468 0xfffffffed1ebc7d8     dnode_new_blkid+0x19d   
   89   0%  99% 0.00     1551 0xfffffffecba72700     spa_sync+0x4cc          
   89   0%  99% 0.00     1449 0xfffffffecb5f8cd8     txg_sync_thread+0x1c6   
   86   0%  99% 0.00     8430 0xfffffffecb5f8cd8     txg_sync_thread+0x186   
   61   0%  99% 0.00   910403 0xfffffffec142fef8     ldl_waito+0x6d          
   61   0%  99% 0.00     4446 0xfffffffec142fef8     get_write_bp+0xac       
   61   0%  99% 0.00     7958 0xfffffffec142fef8     inval_range+0xbd        
   53   0%  99% 0.00     1454 0xfffffffed37a2560     dnode_new_blkid+0x19d   
   40   0%  99% 0.00     1469 0xfffffffed3e58ad8     dnode_new_blkid+0x19d   
   34   0%  99% 0.00     1549 0xfffffffed379ea78     dnode_new_blkid+0x19d   
   32   0%  99% 0.00    16405 0xfffffffec7bec560     zfs_getpage+0x2ff       
   24   0%  99% 0.00     1497 0xfffffffed37a32d0     dnode_new_blkid+0x19d   
-------------------------------------------------------------------------------

R/W reader hold: 290702 events in 64.812 seconds (4485 events/sec)

Count indv cuml rcnt     nsec Lock                   Caller                  
-------------------------------------------------------------------------------
125018  43%  43% 0.00    28827 0xfffffffec1435648     wrip+0x49e              
23000   8%  51% 0.00   188307 0xfffffffec17a56a8     ufs_putpages+0x340      
22368   8%  59% 0.00  8206227 0xfffffffed2ee0a90     taskq_thread+0x1c3      
14403   5%  64% 0.00     1508 0xfffffffed37a3518     dmu_zfetch_find+0x3c1   
14403   5%  69% 0.00     6107 0xfffffffed37a32d0     dnode_hold_impl+0xbc    
14403   5%  73% 0.00     6606 0xfffffffed37a32d0     dbuf_read+0x250         
12533   4%  78% 0.00    10450 0xfffffffec1435648     ufs_write+0x554         
 6722   2%  80% 0.00    75280 0xfffffffed09c3490     as_fault+0x658          
 6704   2%  82% 0.00    70784 0xfffffffed1e95958     segvn_fault+0xbfc       
 6704   2%  85% 0.00    66973 0xfffffffec2b2bc88     segvn_fault+0xbf4       
 4766   2%  86% 0.00     6106 0xffffff01efb81d08     dbuf_dirty+0x39c        
 4734   2%  88% 0.00     3757 0xffffff01efb81d08     dbuf_read+0x250         
 4729   2%  90% 0.00   180997 0xffffffff76afd928     zfs_write+0x3f5         
 4728   2%  91% 0.00    21145 0xffffff01efb81d08     dmu_buf_hold_array_by_dnode+0x1eb
 3630   1%  92% 0.00     5639 0xfffffffec1435648     wrip+0x3e2              
 3576   1%  94% 0.00 25200843 0xfffffffed2ee0b68     taskq_thread+0x1c3      
 2764   1%  95% 0.00     5248 0xfffffffed37a3a68     dbuf_read+0x250         
 2519   1%  96% 0.00     4934 0xfffffffec17a56a8     ufs_getpage+0x8d5       
 1379   0%  96% 0.00    68439 0xfffffffed09c33b0     as_fault+0x658          
 1229   0%  96% 0.00    56389 0xffffffff76b28d88     segvn_fault+0xbf4       
-------------------------------------------------------------------------------


> Do you see anyone with long lock hold times, long
> sleeps, or excessive spinning?

Not sure if the above lockstat shows anything excessive.

> The largest numbers from mpstat are for interrupts
> and cross calls.
> What does intrstat(1M) show?

The first two samples below are from the idle machine; starting with
sample #3, the fill of the gzip-compressed zpool is running:

      device |      cpu0 %tim      cpu1 %tim
-------------+------------------------------
       ata#0 |         0  0.0         0  0.0
       ata#1 |         0  0.0         0  0.0
   audiohd#0 |         1  0.0         0  0.0
      ehci#0 |         1  0.0         0  0.0
       nge#0 |         0  0.0         0  0.0
    nvidia#0 |        60  0.1         0  0.0
      ohci#0 |         0  0.0         0  0.0
   pci-ide#1 |         0  0.0         0  0.0

      device |      cpu0 %tim      cpu1 %tim
-------------+------------------------------
       ata#0 |         0  0.0         0  0.0
       ata#1 |         0  0.0         0  0.0
   audiohd#0 |         2  0.0         0  0.0
      ehci#0 |         2  0.0         0  0.0
       nge#0 |         0  0.0       158  0.0
    nvidia#0 |        61  0.2         0  0.0
      ohci#0 |         0  0.0         0  0.0
   pci-ide#1 |         0  0.0       158  0.1

      device |      cpu0 %tim      cpu1 %tim
-------------+------------------------------
       ata#0 |         0  0.0         0  0.0
       ata#1 |         2  0.0         0  0.0
   audiohd#0 |         4  0.0         0  0.0
      ehci#0 |         4  0.0         0  0.0
       nge#0 |         0  0.0      3547  0.3
    nvidia#0 |        60  0.2         0  0.0
      ohci#0 |         0  0.0        35  0.1
   pci-ide#1 |         0  0.0      3547  2.8

      device |      cpu0 %tim      cpu1 %tim
-------------+------------------------------
       ata#0 |         0  0.0         0  0.0
       ata#1 |         0  0.0         0  0.0
   audiohd#0 |         1  0.0         0  0.0
      ehci#0 |         1  0.0         0  0.0
       nge#0 |         0  0.0      2861  0.2
    nvidia#0 |        60  0.1         0  0.0
      ohci#0 |         0  0.0         0  0.0
   pci-ide#1 |         0  0.0      2861  2.2

      device |      cpu0 %tim      cpu1 %tim
-------------+------------------------------
       ata#0 |         0  0.0         0  0.0
       ata#1 |         1  0.0         0  0.0
   audiohd#0 |         2  0.0         0  0.0
      ehci#0 |         2  0.0         0  0.0
       nge#0 |         0  0.0       380  0.0
    nvidia#0 |        61  0.3         0  0.0
      ohci#0 |         0  0.0         0  0.0
   pci-ide#1 |         0  0.0       380  0.3

      device |      cpu0 %tim      cpu1 %tim
-------------+------------------------------
       ata#0 |         0  0.0         0  0.0
       ata#1 |         0  0.0         0  0.0
   audiohd#0 |         0  0.0         0  0.0
      ehci#0 |         0  0.0         0  0.0
       nge#0 |         0  0.0      3302  0.2
    nvidia#0 |        60  0.2         0  0.0
      ohci#0 |         0  0.0        30  0.1
   pci-ide#1 |         0  0.0      3302  2.6

      device |      cpu0 %tim      cpu1 %tim
-------------+------------------------------
       ata#0 |         0  0.0         0  0.0
       ata#1 |         0  0.0         0  0.0
   audiohd#0 |         2  0.0         0  0.0
      ehci#0 |         2  0.0         0  0.0
       nge#0 |         0  0.0      3165  0.3
    nvidia#0 |        61  0.2         0  0.0
      ohci#0 |         0  0.0         0  0.0
   pci-ide#1 |         0  0.0      3165  2.5

      device |      cpu0 %tim      cpu1 %tim
-------------+------------------------------
       ata#0 |         0  0.0         0  0.0
       ata#1 |         1  0.0         0  0.0
   audiohd#0 |         3  0.0         0  0.0
      ehci#0 |         3  0.0         0  0.0
       nge#0 |         0  0.0       304  0.0
    nvidia#0 |        61  0.3         0  0.0
      ohci#0 |         0  0.0         0  0.0
   pci-ide#1 |         0  0.0       304  0.2

      device |      cpu0 %tim      cpu1 %tim
-------------+------------------------------
       ata#0 |         0  0.0         0  0.0
       ata#1 |         2  0.0         0  0.0
   audiohd#0 |         0  0.0         0  0.0
      ehci#0 |         0  0.0         0  0.0
       nge#0 |         0  0.0      3878  0.3
    nvidia#0 |        60  0.2         0  0.0
      ohci#0 |         0  0.0        49  0.1
   pci-ide#1 |         0  0.0      3878  3.0

      device |      cpu0 %tim      cpu1 %tim
-------------+------------------------------
       ata#0 |         0  0.0         0  0.0
       ata#1 |         0  0.0         0  0.0
   audiohd#0 |         2  0.0         0  0.0
      ehci#0 |         2  0.0         0  0.0
       nge#0 |         0  0.0      2679  0.2
    nvidia#0 |        61  0.1         0  0.0
      ohci#0 |         0  0.0         0  0.0
   pci-ide#1 |         0  0.0      2679  2.1

      device |      cpu0 %tim      cpu1 %tim
-------------+------------------------------
       ata#0 |         1  0.0         0  0.0
       ata#1 |         1  0.0         0  0.0
   audiohd#0 |         3  0.0         0  0.0
      ehci#0 |         3  0.0         0  0.0
       nge#0 |         0  0.0       312  0.0
    nvidia#0 |        60  0.2         0  0.0
      ohci#0 |         0  0.0         0  0.0
   pci-ide#1 |         0  0.0       312  0.2

      device |      cpu0 %tim      cpu1 %tim
-------------+------------------------------
       ata#0 |         0  0.0         0  0.0
       ata#1 |         0  0.0         0  0.0
   audiohd#0 |         0  0.0         0  0.0
      ehci#0 |         0  0.0         0  0.0
       nge#0 |         0  0.0      6403  0.5
    nvidia#0 |       103  0.5         0  0.0
      ohci#0 |         0  0.0       413  1.2
   pci-ide#1 |         0  0.0      6403  5.1

 
> Have you run dtrace to determine the most frequent
> cross-callers?
> 
> #!/usr/sbin/dtrace -s
> 
> sysinfo:::xcalls
> {
>         @a[stack(30)] = count();
> 
>       trunc(@a, 30);
> }
> 
> is an easy way to do this.

The cross calls seem to come from the taskq threads running ZFS's gzip
compression:

dtrace: script '/home/jk/src/dtrace/xcalls.d' matched 3 probes
^C
CPU     ID                    FUNCTION:NAME
  1      2                             :END 


              unix`xc_do_call+0xfd
              unix`xc_wait_sync+0x2b
              unix`x86pte_inval+0x139
              unix`hat_pte_unmap+0xfc
              unix`hat_unload_callback+0x1df
              unix`hat_unload+0x41
              genunix`segkp_release_internal+0xb2
              genunix`segkp_release+0xb6
              genunix`schedctl_freepage+0x30
              genunix`schedctl_proc_cleanup+0x5c
              genunix`exec_args+0x1c2
              elfexec`elf32exec+0x410
              genunix`gexec+0x36d
              genunix`exec_common+0x417
              genunix`exece+0x1b
              unix`sys_syscall32+0x101
                1

              unix`xc_do_call+0xfd
              unix`xc_sync+0x2b
              unix`dtrace_xcall+0x6f
              dtrace`dtrace_state_go+0x3ec
              dtrace`dtrace_ioctl+0x79c
              genunix`cdev_ioctl+0x48
              specfs`spec_ioctl+0x86
              genunix`fop_ioctl+0x37
              genunix`ioctl+0x16b
              unix`sys_syscall+0x17b
                1

              unix`xc_do_call+0xfd
              unix`xc_wait_sync+0x2b
              unix`x86pte_inval+0x139
              unix`hat_pte_unmap+0xfc
              unix`hat_unload_callback+0x1df
              unix`hat_unload+0x41
              swrand`physmem_ent_gen+0x190
              swrand`rnd_handler+0x29
              genunix`callout_execute+0xb1
              genunix`taskq_thread+0x1a7
              unix`thread_start+0x8
               24

              unix`xc_do_call+0xfd
              unix`xc_wait_sync+0x2b
              unix`x86pte_inval+0x139
              unix`hat_pte_unmap+0xfc
              unix`hat_unload_callback+0x148
              unix`hat_unload+0x41
              genunix`segkp_release_internal+0xb2
              genunix`segkp_release+0xb6
              genunix`thread_free+0x238
              genunix`thread_reap_list+0x1f
              genunix`thread_reaper+0x130
              unix`thread_start+0x8
               45

              unix`xc_do_call+0xfd
              unix`xc_wait_sync+0x2b
              unix`x86pte_inval+0x139
              unix`hat_pte_unmap+0xfc
              unix`hat_unload_callback+0x148
              unix`hat_unload+0x41
              unix`segkmem_free_vn+0x6a
              unix`segkmem_free+0x20
              genunix`vmem_xfree+0x10c
              genunix`vmem_free+0x25
              genunix`kmem_free+0x47
              kstat`read_kstat_data+0x484
              kstat`kstat_ioctl+0x4a
              genunix`cdev_ioctl+0x48
              specfs`spec_ioctl+0x86
              genunix`fop_ioctl+0x37
              genunix`ioctl+0x16b
              unix`sys_syscall32+0x101
               76

              unix`xc_do_call+0xfd
              unix`xc_wait_sync+0x2b
              unix`x86pte_inval+0x139
              unix`hat_pte_unmap+0xfc
              unix`hat_unload_callback+0x148
              unix`hat_unload+0x41
              unix`segkmem_free_vn+0x6a
              unix`segkmem_free+0x20
              genunix`vmem_xfree+0x10c
              genunix`vmem_free+0x25
              unix`kfreea+0x5a
              unix`i_ddi_mem_free+0x5d
              genunix`ddi_dma_mem_free+0x26
              ehci`ehci_free_tw+0x45
              ehci`ehci_deallocate_tw+0xb6
              ehci`ehci_traverse_active_qtd_list+0xdf
              ehci`ehci_intr+0x162
              unix`av_dispatch_autovect+0x78
              unix`dispatch_hardint+0x2f
              unix`switch_sp_and_call+0x13
              120

              unix`xc_do_call+0xfd
              unix`xc_sync+0x2b
              unix`dtrace_xcall+0x6f
              unix`dtrace_sync+0x18
              dtrace`dtrace_state_deadman+0x18
              genunix`cyclic_softint+0xc9
              unix`cbe_low_level+0x17
              unix`av_dispatch_softvect+0x5f
              unix`dispatch_softint+0x34
              unix`switch_sp_and_call+0x13
              149

              unix`xc_do_call+0xfd
              unix`xc_wait_sync+0x2b
              unix`x86pte_inval+0x139
              unix`hat_pte_unmap+0xfc
              unix`hat_unload_callback+0x148
              unix`hat_unload+0x41
              unix`segkmem_free_vn+0x6a
              unix`segkmem_free+0x20
              genunix`vmem_xfree+0x10c
              genunix`vmem_free+0x25
              unix`kfreea+0x5a
              unix`i_ddi_mem_free+0x5d
              rootnex`rootnex_teardown_copybuf+0x20
              rootnex`rootnex_dma_unbindhdl+0xbe
              genunix`ddi_dma_unbind_handle+0x22
              ata`ghd_dmafree_attr+0x29
              ata`ata_disk_memfree+0x20
              gda`gda_free+0x39
              dadk`dadk_iodone+0xbf
              dadk`dadk_pktcb+0xa5
              ata`ata_disk_complete+0x115
              ata`ata_hba_complete+0x34
              ata`ghd_doneq_process+0x85
              unix`av_dispatch_softvect+0x5f
              unix`dispatch_softint+0x34
              unix`switch_sp_and_call+0x13
              290

              unix`xc_do_call+0xfd
              unix`xc_sync+0x2b
              unix`dtrace_xcall+0x6f
              dtrace`dtrace_ioctl+0x47f
              genunix`cdev_ioctl+0x48
              specfs`spec_ioctl+0x86
              genunix`fop_ioctl+0x37
              genunix`ioctl+0x16b
              unix`sys_syscall+0x17b
              300

              unix`xc_do_call+0xfd
              unix`xc_wait_sync+0x2b
              unix`x86pte_inval+0x139
              unix`hat_pte_unmap+0xfc
              unix`hat_unload_callback+0x148
              unix`hat_unload+0x41
              unix`segkmem_free_vn+0x6a
              unix`segkmem_free+0x20
              genunix`vmem_xfree+0x10c
              genunix`vmem_free+0x25
              unix`kfreea+0x5a
              unix`i_ddi_mem_free+0x5d
              genunix`ddi_dma_mem_free+0x26
              ohci`ohci_free_tw+0x59
              ohci`ohci_deallocate_tw_resources+0xa4
              ohci`ohci_traverse_done_list+0xfc
              ohci`ohci_intr+0x1d0
              unix`av_dispatch_autovect+0x78
              unix`dispatch_hardint+0x2f
              unix`switch_sp_and_call+0x13
             1020

              unix`xc_do_call+0xfd
              unix`xc_wait_sync+0x2b
              unix`x86pte_inval+0x139
              unix`hat_pte_unmap+0xfc
              unix`hat_unload_callback+0x148
              unix`hat_unload+0x41
              unix`segkmem_free_vn+0x6a
              unix`segkmem_free+0x20
              genunix`vmem_xfree+0x10c
              genunix`vmem_free+0x25
              genunix`kmem_free+0x47
              unix`kobj_free+0x23
              unix`zcfree+0x36
              unix`z_deflateEnd+0x74
              unix`z_compress_level+0x9d
              zfs`gzip_compress+0x4b
              zfs`zio_compress_data+0xbc
              zfs`zio_write_compress+0x8d
              genunix`taskq_thread+0x1a7
              unix`thread_start+0x8
           133773

              unix`xc_do_call+0xfd
              unix`xc_wait_sync+0x2b
              unix`x86pte_inval+0x139
              unix`hat_pte_unmap+0xfc
              unix`hat_unload_callback+0x148
              unix`hat_unload+0x41
              unix`segkmem_free_vn+0x6a
              unix`segkmem_free+0x20
              genunix`vmem_xfree+0x10c
              genunix`vmem_free+0x25
              genunix`kmem_free+0x47
              unix`kobj_free+0x23
              unix`zcfree+0x36
              unix`z_deflateEnd+0x91
              unix`z_compress_level+0x9d
              zfs`gzip_compress+0x4b
              zfs`zio_compress_data+0xbc
              zfs`zio_write_compress+0x8d
              genunix`taskq_thread+0x1a7
              unix`thread_start+0x8
           133773

              unix`xc_do_call+0xfd
              unix`xc_wait_sync+0x2b
              unix`x86pte_inval+0x139
              unix`hat_pte_unmap+0xfc
              unix`hat_unload_callback+0x148
              unix`hat_unload+0x41
              unix`segkmem_free_vn+0x6a
              unix`segkmem_free+0x20
              genunix`vmem_xfree+0x10c
              genunix`vmem_free+0x25
              genunix`kmem_free+0x47
              unix`kobj_free+0x23
              unix`zcfree+0x36
              unix`z_deflateEnd+0xae
              unix`z_compress_level+0x9d
              zfs`gzip_compress+0x4b
              zfs`zio_compress_data+0xbc
              zfs`zio_write_compress+0x8d
              genunix`taskq_thread+0x1a7
              unix`thread_start+0x8
           133773

              unix`xc_do_call+0xfd
              unix`xc_wait_sync+0x2b
              unix`x86pte_inval+0x139
              unix`hat_pte_unmap+0xfc
              unix`hat_unload_callback+0x148
              unix`hat_unload+0x41
              unix`segkmem_free_vn+0x6a
              unix`segkmem_free+0x20
              genunix`vmem_xfree+0x10c
              genunix`vmem_free+0x25
              genunix`kmem_free+0x47
              unix`kobj_free+0x23
              unix`zcfree+0x36
              unix`z_deflateEnd+0xcb
              unix`z_compress_level+0x9d
              zfs`gzip_compress+0x4b
              zfs`zio_compress_data+0xbc
              zfs`zio_write_compress+0x8d
              genunix`taskq_thread+0x1a7
              unix`thread_start+0x8
           133773
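
A cheaper way to attribute the cross calls than collecting full stacks
would be to key the aggregation on execname; kernel taskq threads show
up as "sched" on Solaris. A sketch:

```shell
# Count cross calls per process name; a dominant "sched" entry would
# point at kernel threads (e.g. the zio/gzip taskqs) as the source.
dtrace -n 'sysinfo:::xcalls { @[execname] = count(); }'
```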
 
 
This message posted from opensolaris.org
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
