It's not only when I try to stop nfsd - during normal operation I see that 
one CPU is at 0% idle, all traffic goes to a single pool (and even that 
traffic is very small), and all nfsd threads are hung - I assume they are all 
waiting on that pool.
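
If it helps to confirm which CPU is pegged and what the nfsd threads are doing, a quick check (assuming standard Solaris tools; the flags below are the usual ones, untested on this box) could be:

bash-3.00# mpstat 1          # per-CPU usr/sys/idle - spots the CPU at 0% idle
bash-3.00# prstat -mL 1      # per-LWP microstates - hung threads show high SLP/LCK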

bash-3.00# zpool iostat 1
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
nfs-s5-p0   2.46T  2.07T      9     44   591K  2.13M
nfs-s5-p1   1.61T  2.92T     11     57   690K  2.79M
nfs-s5-p2   1.41T  3.12T     14     98   923K  7.71M
nfs-s5-p3   42.4G  4.49T      1     19   122K   570K
nfs-s5-s8   4.40T   134G     40     36  1.69M   775K
----------  -----  -----  -----  -----  -----  -----
nfs-s5-p0   2.46T  2.07T      0      0      0      0
nfs-s5-p1   1.61T  2.92T      0      0      0      0
nfs-s5-p2   1.41T  3.12T      0      0      0      0
nfs-s5-p3   42.4G  4.49T      0      0      0      0
nfs-s5-s8   4.40T   134G      0     27      0  31.6K
----------  -----  -----  -----  -----  -----  -----
nfs-s5-p0   2.46T  2.07T      0      0      0      0
nfs-s5-p1   1.61T  2.92T      0      0      0      0
nfs-s5-p2   1.41T  3.12T      0      0      0      0
nfs-s5-p3   42.4G  4.49T      0      0      0      0
nfs-s5-s8   4.40T   134G      0     27      0  36.6K
----------  -----  -----  -----  -----  -----  -----
nfs-s5-p0   2.46T  2.07T      0      0      0      0
nfs-s5-p1   1.61T  2.92T      0      0      0      0
nfs-s5-p2   1.41T  3.12T      0      0      0      0
nfs-s5-p3   42.4G  4.49T      0      0      0      0
nfs-s5-s8   4.40T   134G      0     29      0  37.1K
----------  -----  -----  -----  -----  -----  -----
^C
bash-3.00#

The datasets in pool nfs-s5-s8 are:

nfs-s5-s8             4.40T  61.3G  39.5K  /nfs-s5-s8
nfs-s5-s8/d5201        395G  61.3G   357G  /nfs-s5-s8/d5201
nfs-s5-s8/[EMAIL PROTECTED]    17.2G      -   331G  -
nfs-s5-s8/[EMAIL PROTECTED]     467M      -   314G  -
nfs-s5-s8/d5202        385G  61.3G   349G  /nfs-s5-s8/d5202
nfs-s5-s8/[EMAIL PROTECTED]    16.9G      -   320G  -
nfs-s5-s8/[EMAIL PROTECTED]     457M      -   304G  -
nfs-s5-s8/d5203        392G  61.3G   353G  /nfs-s5-s8/d5203
nfs-s5-s8/[EMAIL PROTECTED]    17.8G      -   326G  -
nfs-s5-s8/[EMAIL PROTECTED]     496M      -   309G  -
nfs-s5-s8/d5204        381G  61.3G   344G  /nfs-s5-s8/d5204
nfs-s5-s8/[EMAIL PROTECTED]    17.1G      -   315G  -
nfs-s5-s8/[EMAIL PROTECTED]     482M      -   299G  -
nfs-s5-s8/d5205        381G  61.3G   346G  /nfs-s5-s8/d5205
nfs-s5-s8/[EMAIL PROTECTED]    14.9G      -   316G  -
nfs-s5-s8/[EMAIL PROTECTED]     357M      -   302G  -
nfs-s5-s8/d5206        383G  61.3G   348G  /nfs-s5-s8/d5206
nfs-s5-s8/[EMAIL PROTECTED]    14.6G      -   317G  -
nfs-s5-s8/[EMAIL PROTECTED]     355M      -   303G  -
nfs-s5-s8/d5207        331G  61.3G   321G  /nfs-s5-s8/d5207
nfs-s5-s8/[EMAIL PROTECTED]    10.3G      -   243G  -
nfs-s5-s8/d5208        314G  61.3G   303G  /nfs-s5-s8/d5208
nfs-s5-s8/[EMAIL PROTECTED]    10.9G      -   250G  -
nfs-s5-s8/d5209        323G  61.3G   311G  /nfs-s5-s8/d5209
nfs-s5-s8/[EMAIL PROTECTED]    11.4G      -   258G  -
nfs-s5-s8/d5210        382G  61.3G   369G  /nfs-s5-s8/d5210
nfs-s5-s8/[EMAIL PROTECTED]    13.2G      -   317G  -
nfs-s5-s8/d5211        409G  61.3G   396G  /nfs-s5-s8/d5211
nfs-s5-s8/[EMAIL PROTECTED]    13.2G      -   323G  -
nfs-s5-s8/d5212        429G  61.3G   417G  /nfs-s5-s8/d5212
nfs-s5-s8/[EMAIL PROTECTED]    11.5G      -   252G  -


Right now I can't export that pool or destroy a snapshot in it - both 
commands hang.
bash-3.00# zfs destroy nfs-s5-s8/[EMAIL PROTECTED]
[it's here now for ~3 minutes]


bash-3.00# zpool export nfs-s5-s8
^C^C^C^C
[it's here for ~3 minutes]


bash-3.00# mdb -kw
Loading modules: [ unix krtld genunix specfs dtrace ufs sd px md ip sctp usba 
lofs zfs random qlc fctl fcp ssd nfs crypto ptm ]
> ::ps!grep zfs
R    720    348    720    342      0 0x4a004000 0000060005673378 zfs
> 0000060005673378::walk thread|::findstack -v
stack pointer for thread 300024ea060: 2a102c4cc31
[ 000002a102c4cc31 cv_wait+0x40() ]
  000002a102c4cce1 txg_wait_synced+0x54(3000052eb90, 43ca2, 300419b600f, 
3000052ebd0, 3000052ebd2, 3000052eb88)
  000002a102c4cd91 dsl_dataset_destroy+0x64(300419b6000, 5, 7ba13d84, 
2a102c4d828, 300419b600f, 3000052eac0)
  000002a102c4cf71 dmu_objset_destroy+0x3c(300419b6000, 0, 0, 2, 0, 700d0f08)
  000002a102c4d031 zfsdev_ioctl+0x158(700d0c00, 54, ffbfedb8, 1c, 70, 
300419b6000)
  000002a102c4d0e1 fop_ioctl+0x20(600189b61c0, 5a1c, ffbfedb8, 100003, 
60019bd6850, 11fc888)
  000002a102c4d191 ioctl+0x184(4, 600011c4078, ffbfedb8, 4, 40490, 5a1c)
  000002a102c4d2e1 syscall_trap32+0xcc(4, 5a1c, ffbfedb8, 4, 40490, 2)
> ::ps!grep zpool
R    568    445    568    439      0 0x4a004000 00000301249b1000 zpool
> 00000301249b1000::walk thread|::findstack -v
stack pointer for thread 300027adc40: 2a103936f41
[ 000002a103936f41 cv_wait+0x40() ]
  000002a103936ff1 zil_commit+0x74(6001969f72c, ffffffffffffffff, 10, 
6001969f6c0, 300011ba000, bfa)
  000002a1039370a1 zfs_sync+0x9c(6001977ca80, 0, 60019759540, 60019759594, 0, 0)
  000002a103937151 dounmount+0x28(6001977ca80, 0, 6002a3312a8, 600196ba940, 
300011ba000, 300c8d92780)
  000002a103937201 umount2+0x12c(4c4658, 0, ffbfb6b8, 3ef6c, 0, ff38c194)
  000002a1039372e1 syscall_trap32+0xcc(4c4658, 0, ffbfb6b8, 3ef6c, 20a50, 
ff38c194)
>
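
The hung nfsd threads could presumably be dumped the same way from this mdb session (::pgrep should work here; if not, ::ps!grep nfsd gives the proc address as above):

> ::pgrep nfsd | ::walk thread | ::findstack -v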

bash-3.00# dtrace -n fbt:::entry'{self->vt=vtimestamp;}' -n 
fbt:::return'/self->vt/{@[probefunc]=sum(vtimestamp-self->vt);self->vt=0;}' -n 
tick-10s'{printa(@);exit(0);}'
[...]
  resume                                                      6644840
  hwblkclr                                                    8601940
  lock_set_spl_spin                                          11818300
  send_mondo_set                                             15736288
  pid_entry                                                  25685856
  page_next_scan_large                                       49429932
  xc_serv                                                    50147308
  disp_getwork                                               65131372
  mutex_vector_enter                                        220272764
  avl_walk                                                  253932168
  disp_anywork                                              393395416

Maybe it's because of the snapshots in nfs-s5-s8?
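
To see where the hot functions from the profile above (avl_walk, mutex_vector_enter) are being called from - e.g. whether snapshot destruction is walking a large AVL tree - a follow-up one-liner in the same style might help (a sketch, not run here):

bash-3.00# dtrace -n 'fbt::avl_walk:entry{@[stack()]=count();}' -n 
'tick-10s{trunc(@,5);printa(@);exit(0);}'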


OK, I did BREAK/sync while nfsd, zpool and zfs were hanging.
I/Os were going only to nfs-s5-s8.

A crash dump can be provided - sorry, but not for public eyes.
 
 
This message posted from opensolaris.org
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss