Hi Bruno,

I've tried to reproduce this panic you are seeing. However, I had difficulty following your procedure. See below:

On 02/08/10 15:37, Bruno Damour wrote:
On 02/08/10 06:38 PM, Lori Alt wrote:

Can you please send a complete list of the actions taken: the commands you used to create the send stream, the commands you used to receive it, and the output of `zfs list -t all` on both the sending and receiving sides. If you were able to collect a crash dump (it should be in /var/crash/<hostname>), it would be good to upload it.
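If you're not sure whether a dump was saved, something like this should show both the dump configuration and any saved files (assuming the default savecore setup; the directory name is usually the hostname):

    # show the dump device and the savecore directory
    dumpadm
    # list any saved dumps (unix.N/vmcore.N, or compressed vmdump.N)
    ls -l /var/crash/`hostname`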

The panic you're seeing is in the code that is specific to receiving a dedup'ed stream. It's possible that you could do the migration if you turned off dedup (i.e. didn't specify -D) when creating the send stream. However, then we wouldn't be able to diagnose and fix what appears to be a bug.
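For example, a non-dedup'ed version of the send would look something like this (a sketch using the pool and snapshot names that appear later in this thread):

    # same recursive replication stream, just without -D,
    # so no dedup records are written into the stream
    zfs send -R data@prededup | zfs receive -dF ezdata/data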

The best way to get us the crash dump is to upload it here:

https://supportfiles.sun.com/upload

We need either both vmcore.X and unix.X OR you can just send us vmdump.X.
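If you only have the compressed vmdump.X, it can be expanded back into the unix.X/vmcore.X pair with savecore, roughly like this (assuming dump number 0):

    # -f names the compressed dump to expand; the trailing "."
    # writes unix.0 and vmcore.0 into the current directory
    savecore -vf vmdump.0 .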

Big uploads sometimes have mixed results, so if you run into a problem, there are helpful hints at http://wikis.sun.com/display/supportfiles/Sun+Support+Files+-+Help+and+Users+Guide, specifically in section 7.

It's best to include your name or initials in the name of the file you upload. As you might imagine, we get a lot of files uploaded named vmcore.1.

You might also create a defect report at http://defect.opensolaris.org/bz/

Lori


On 02/08/10 09:41, Bruno Damour wrote:
<copied from opensolaris-discuss as this probably belongs here.>

I kept trying to migrate my pool with children (see previous threads) and had the (bad) idea of trying the -d option on the receive side.
The system reboots immediately.

Here is the log in /var/adm/messages

Feb 8 16:07:09 amber unix: [ID 836849 kern.notice]
Feb 8 16:07:09 amber ^Mpanic[cpu1]/thread=ffffff014ba86e40:
Feb 8 16:07:09 amber genunix: [ID 169834 kern.notice] avl_find() succeeded inside avl_add()
Feb 8 16:07:09 amber unix: [ID 100000 kern.notice]
Feb 8 16:07:09 amber genunix: [ID 655072 kern.notice] ffffff00053c4660 genunix:avl_add+59 ()
Feb 8 16:07:09 amber genunix: [ID 655072 kern.notice] ffffff00053c46c0 zfs:find_ds_by_guid+b9 ()
Feb 8 16:07:09 amber genunix: [ID 655072 kern.notice] ffffff00053c46f0 zfs:findfunc+23 ()
Feb 8 16:07:09 amber genunix: [ID 655072 kern.notice] ffffff00053c47d0 zfs:dmu_objset_find_spa+38c ()
Feb 8 16:07:09 amber genunix: [ID 655072 kern.notice] ffffff00053c4810 zfs:dmu_objset_find+40 ()
Feb 8 16:07:09 amber genunix: [ID 655072 kern.notice] ffffff00053c4a70 zfs:dmu_recv_stream+448 ()
Feb 8 16:07:09 amber genunix: [ID 655072 kern.notice] ffffff00053c4c40 zfs:zfs_ioc_recv+41d ()
Feb 8 16:07:09 amber genunix: [ID 655072 kern.notice] ffffff00053c4cc0 zfs:zfsdev_ioctl+175 ()
Feb 8 16:07:09 amber genunix: [ID 655072 kern.notice] ffffff00053c4d00 genunix:cdev_ioctl+45 ()
Feb 8 16:07:09 amber genunix: [ID 655072 kern.notice] ffffff00053c4d40 specfs:spec_ioctl+5a ()
Feb 8 16:07:09 amber genunix: [ID 655072 kern.notice] ffffff00053c4dc0 genunix:fop_ioctl+7b ()
Feb 8 16:07:09 amber genunix: [ID 655072 kern.notice] ffffff00053c4ec0 genunix:ioctl+18e ()
Feb 8 16:07:09 amber genunix: [ID 655072 kern.notice] ffffff00053c4f10 unix:brand_sys_syscall32+1ca ()
Feb 8 16:07:09 amber unix: [ID 100000 kern.notice]
Feb 8 16:07:09 amber genunix: [ID 672855 kern.notice] syncing file systems...
Feb 8 16:07:09 amber genunix: [ID 904073 kern.notice] done
Feb 8 16:07:10 amber genunix: [ID 111219 kern.notice] dumping to /dev/zvol/dsk/rpool/dump, offset 65536, content: kernel
Feb 8 16:07:10 amber ahci: [ID 405573 kern.info] NOTICE: ahci0: ahci_tran_reset_dport port 3 reset port
Feb 8 16:07:35 amber genunix: [ID 100000 kern.notice]
Feb 8 16:07:35 amber genunix: [ID 665016 kern.notice] ^M100% done: 107693 pages dumped,
Feb 8 16:07:35 amber genunix: [ID 851671 kern.notice] dump succeeded
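For what it's worth, once savecore has written the dump files, the same panic message and stack should be recoverable with mdb; a rough sketch, assuming dump number 0 under /var/crash/amber:

    cd /var/crash/amber
    mdb unix.0 vmcore.0
    # then, at the mdb prompt:
    #   ::status   - prints the panic string seen above
    #   ::stack    - prints the panicking thread's stack
    #   $q         - quit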

Hello,
I'll try to do my best.

Here are the commands:

    amber ~ # zfs unmount data
    amber ~ # zfs snapshot -r data@prededup
    amber ~ # zpool destroy ezdata
    amber ~ # zpool create ezdata c6t1d0
    amber ~ # zfs set dedup=on ezdata
    amber ~ # zfs set compress=on ezdata
    amber ~ # zfs send -RD data@prededup |zfs receive ezdata/data
    cannot receive new filesystem stream: destination 'ezdata/data' exists
    must specify -F to overwrite it
    amber ~ # zpool destroy ezdata
    amber ~ # zpool create ezdata c6t1d0
    amber ~ # zfs set compression=on ezdata
    amber ~ # zfs set dedup=on ezdata
    amber ~ # zfs send -RD data@prededup |zfs receive -F ezdata/data
    cannot receive new filesystem stream: destination has snapshots
    (eg. ezdata/data@prededup)
    must destroy them to overwrite it



This send piped to recv didn't even get started because of the above error.

Are you saying that the command ran for several hours and THEN produced that message?

I created a hierarchy of datasets and snapshots to match yours (as shown below), though with only a small amount of data, and used this command:


zfs send -RD data@prededup | zfs receive -dF ezdata/data


to do the copy, and got this result after completion:

# zfs list
NAME                           USED  AVAIL  REFER  MOUNTPOINT
data                          2.64M  66.9G    25K  /data
data/archive                  2.12M  66.9G   125K  /data/archive
data/archive/scanrisk          864K  66.9G   864K  /data/archive/scanrisk
data/archive/slbp_g           1.14M  66.9G   958K  /data/archive/slbp_g
data/cyrus23                    90K  66.9G    90K  /data/cyrus23
data/postgres84_64             181K  66.9G    91K  /data/postgres84_64
data/postgres84_64/8k           90K  66.9G    90K  /data/postgres84_64/8k
ezdata                        1.85M   274G    23K  /ezdata
ezdata/data                   1.52M   274G    25K  /ezdata/data
ezdata/data/archive           1.32M   274G    90K  /ezdata/data/archive
ezdata/data/archive/scanrisk   464K   274G   464K  /ezdata/data/archive/scanrisk
ezdata/data/archive/slbp_g     773K   274G   568K  /ezdata/data/archive/slbp_g
ezdata/data/cyrus23           61.5K   274G  61.5K  /ezdata/data/cyrus23
ezdata/data/postgres84_64      124K   274G  62.5K  /ezdata/data/postgres84_64
ezdata/data/postgres84_64/8k  61.5K   274G  61.5K  /ezdata/data/postgres84_64/8k
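The -d flag is what produces those names: the receive strips the pool name from each sent dataset path and appends the remainder to the target filesystem. Roughly:

    # with -d, data/archive/scanrisk@prededup from the stream
    # is received as ezdata/data/archive/scanrisk@prededup
    zfs send -RD data@prededup | zfs receive -dF ezdata/data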


What release of Solaris are you using?  Can this be reproduced?

lori

Each time, the send/receive command ran for some hours and transferred 151G of data before issuing the message.

    amber ~ # zfs list
    NAME                         USED  AVAIL  REFER  MOUNTPOINT
    data                         295G   621G   161G  /data
    data/archive                 134G   621G  67.7G  /data/archive
    data/archive/scanrisk       9.45G   621G  8.03G  /data/archive/scanrisk
    data/archive/slbp_g         56.6G   621G  13.5G  /data/archive/slbp_g
    data/cyrus23                 219M   621G   219M  /data/cyrus23
    data/postgres84_64           373M   621G   199M  /data/postgres84_64
    data/postgres84_64/8k        174M   621G   174M  /data/postgres84_64/8k
    ezdata                       151G   144G    21K  /ezdata
    ezdata/data                  151G   144G   151G  /ezdata/data
    rpool                       16.2G  98.0G    87K  /rpool
    ...

And the complete listing:

    amber ~ # zfs list -t all
    NAME                            USED  AVAIL  REFER  MOUNTPOINT
    data                            295G   621G   161G  /data
    data/archive                    134G   621G  67.7G  /data/archive
    data/archive@20090521          32.9M      -  36.4G  -
    data/archive/scanrisk          9.45G   621G  8.03G  /data/archive/scanrisk
    data/archive/scanrisk@2008pre  1.42G      -  9.45G  -
    data/archive/slbp_g            56.6G   621G  13.5G  /data/archive/slbp_g
    data/archive/slbp_g@20081129   2.35G      -  9.02G  -
    data/archive/slbp_g@20081212   2.33G      -  9.70G  -
    data/archive/slbp_g@20090110   9.98M      -  9.34G  -
    data/archive/slbp_g@20090521   15.3M      -  9.35G  -
    data/archive/slbp_g@20090702   1.58G      -  14.1G  -
    data/archive/slbp_g@20090809     67K      -  18.4G  -
    data/archive/slbp_g@20090912     68K      -  18.4G  -
    data/archive/slbp_g@20090915   4.25G      -  22.1G  -
    data/archive/slbp_g@20091128   97.7M      -  19.0G  -
    data/archive/slbp_g@20091130    438M      -  19.2G  -
    data/cyrus23                    219M   621G   219M  /data/cyrus23
    data/postgres84_64              373M   621G   199M  /data/postgres84_64
    data/postgres84_64/8k           174M   621G   174M  /data/postgres84_64/8k
    ezdata                          151G   144G    24K  /ezdata
    ezdata@now                       29K      -    31K  -
    ezdata/data                     151G   144G   151G  /ezdata/data
    ezdata/data@prededup             46K      -   151G  -
    ezdata/test                      31K   144G    31K  /ezdata/test
    ezdata/test@now                    0      -    31K  -
    rpool                          16.3G  97.8G    87K  /rpool
    rpool/ROOT                     8.29G  97.8G    21K  legacy
    rpool/ROOT/snv_132             8.29G  97.8G  7.19G  /
    rpool/ROOT/snv_132@install     1.10G      -  3.24G  -
    rpool/dump                     4.00G  97.8G  4.00G  -
    rpool/export                   49.9M  97.8G    23K  /export
    rpool/export/home              49.8M  97.8G    23K  /export/home
    rpool/export/home/admin        49.8M  97.8G  49.8M  /export/home/admin
    rpool/swap                     4.00G   102G   109M  -
    tank                            292G   165G   166G  /tank
    tank@20090517                  6.75G      -  63.6G  -
    tank/corwin.raw                  15G   166G  14.2G  -
    tank/dara.raw                    15G   168G  12.2G  -
    tank/deirdre.raw                 15G   172G  8.36G  -
    tank/fiona.raw                   20G   179G  6.49G  -
    tank/oberon.raw                  15G   180G    16K  -
    tank/rebma.raw                 22.9G   173G  7.93G  -
    tank/rebma.raw@20100202        7.34G      -  7.93G  -
    tank/soas.raw                    15G   180G   494M  -
    tank/test                        79K   165G    31K  /tank/test
    tank/test@now                    18K      -    31K  -
    tank/test/child                  30K   165G    30K  /tank/test/child
    tank/test/child@now                0      -    30K  -
    tank/zones                      994M   165G    36K  /tank/zones
    tank/zones/avalon               994M   165G    24K  /tank/zones/avalon
    tank/zones/avalon/ROOT          994M   165G    21K  legacy
    tank/zones/avalon/ROOT/zbe      994M   165G   994M  legacy

I will upload the core as vmdump.amber.0.7z
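Something like this, assuming p7zip is installed and the dump is number 0:

    cd /var/crash/amber
    # pack the dump under an identifying name, as suggested above
    7z a vmdump.amber.0.7z vmdump.0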
Good luck

Bruno

