OK, I tested this myself ...

(same hardware used for both tests) 

OpenSolaris snv_104 (actually Nexenta Core 2):

1) 100 snaps
------------
r...@nexenta:/volumes# time for i in $(seq 1 100); do zfs snapshot ssd/v...@test1_$i; done


real    0m24.991s
user    0m0.297s
sys     0m0.679s

Import: 
r...@nexenta:/volumes# time zpool import ssd

real    0m25.053s
user    0m0.031s
sys     0m0.216s

2) 500 snaps (400 created, 500 imported)
-------------------------------------------------

r...@nexenta:/volumes# time for i in $(seq 101 500); do zfs snapshot ssd/v...@test1_$i; done

real    3m6.257s
user    0m1.190s
sys     0m2.896s

r...@nexenta:/volumes# time zpool import ssd

real    3m59.206s
user    0m0.091s
sys     0m0.956s

3) 1500 snaps (1000 created, 1500 imported)
-----------------------------------------------------

r...@nexenta:/volumes# time for i in $(seq 501 1500); do zfs snapshot ssd/v...@test1_$i; done

real    22m23.206s
user    0m3.041s
sys     0m8.785s

r...@nexenta:/volumes# time zpool import ssd

real    36m26.765s
user    0m0.233s
sys     0m4.545s

 .... you see where this goes - it's exponential!
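
(Rough per-snapshot numbers from the runs above: creating the first 100 snapshots cost ~0.25 s each, snapshots 101-500 ~0.47 s each, and snapshots 501-1500 ~1.34 s each. The import shows the same pattern: ~0.25 s per snapshot at 100 snaps, ~0.48 s at 500 and ~1.46 s at 1500. Every additional snapshot costs more than the one before.)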

Now with snv_130 (same pool, still 1500 snaps on it)

.. now we are booting OpenSolaris snv_130.
Sun Microsystems Inc.  SunOS 5.11      snv_130 November 2008

r...@osol_dev130:~# zpool import
  pool: ssd
    id: 16128137881522033167
 state: ONLINE
status: The pool is formatted using an older on-disk version.
action: The pool can be imported using its name or numeric identifier, though
        some features will not be available without an explicit 'zpool upgrade'.
config:

        ssd         ONLINE
          c9d1      ONLINE

r...@osol_dev130:~# time zpool import ssd

real    0m0.756s
user    0m0.014s
sys     0m0.056s

r...@osol_dev130:~# zfs list -t snapshot | wc -l
    1502
        
r...@osol_dev130:~# time zpool export ssd

real    0m0.425s
user    0m0.003s
sys     0m0.029s

I like this one :)

... just for fun ... (5K Snaps)

r...@osol_dev130:~# time for i in $(seq 1501 5000); do zfs snapshot ssd/v...@test1_$i; done

real    1m18.977s
user    0m9.889s
sys     0m19.969s

r...@osol_dev130:~# zpool export ssd
r...@osol_dev130:~# time zpool import  ssd

real    0m0.421s
user    0m0.014s
sys     0m0.055s

... just for fun ... (10K Snaps)

r...@osol_dev130:~# time for i in $(seq 5001 10000); do zfs snapshot ssd/v...@test1_$i; done

real    2m6.242s
user    0m14.107s
sys     0m28.573s

r...@osol_dev130:~# time zpool import ssd

real    0m0.405s
user    0m0.014s
sys     0m0.057s
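
(For comparison: on snv_130 those 5000 extra snapshots took roughly 0.025 s each to create, versus ~1.3 s each for snapshots 501-1500 on snv_104, and the import stays well under a second no matter how many snapshots are on the pool.)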

Very nice, so pool import is solved. 

.. however ... a lot of snapshots still has an impact on system performance. After 
importing the pool with the 10000 snapshots, I saw "devfsadm" eating up all the CPU: 

load averages:  5.00,  3.32,  1.58;               up 0+00:18:12        18:50:05
99 processes: 95 sleeping, 2 running, 2 on cpu
CPU states:  0.0% idle,  4.6% user, 95.4% kernel,  0.0% iowait,  0.0% swap
Kernel: 409 ctxsw, 14 trap, 47665 intr, 1223 syscall
Memory: 8190M phys mem, 5285M free mem, 4087M total swap, 4087M free swap

   PID USERNAME NLWP PRI NICE  SIZE   RES STATE    TIME    CPU COMMAND
   167 root        6  22    0   25M   13M run      3:14 49.41% devfsadm

... a truss showed that it is the device node creation that is eating up the CPU: 

/5:      0.0010 xstat(2, "/devices/pseudo/z...@0:8941,raw", 0xFE32FCE0) = 0
/5:      0.0005 fcntl(7, F_SETLK, 0xFE32FED0)                   = 0
/5:      0.0000 close(7)                                        = 0
/5:      0.0001 lwp_unpark(3)                                   = 0
/3:      0.0200 lwp_park(0xFE61EF58, 0)                         = 0
/3:      0.0000 time()                                          = 1263232337
/5:      0.0001 open("/etc/dev/.devfsadm_dev.lock", O_RDWR|O_CREAT, 0644) = 7
/5:      0.0001 fcntl(7, F_SETLK, 0xFE32FEF0)                   = 0
/5:      0.0000 read(7, "A7\0\0\0", 4)                          = 4
/5:      0.0001 getpid()                                        = 167 [1]
/5:      0.0000 getpid()                                        = 167 [1]
/5:      0.0001 open("/devices/pseudo/devi...@0:devinfo", O_RDONLY) = 10
/5:      0.0000 ioctl(10, DINFOIDENT, 0x00000000)               = 57311
/5:      0.0138 ioctl(10, 0xDF06, 0xFE32FA60)                   = 2258109
/5:      0.0027 ioctl(10, DINFOUSRLD, 0x086CD000)               = 2260992
/5:      0.0001 close(10)                                       = 0
/5:      0.0015 modctl(MODGETNAME, 0xFE32F060, 0x00000401, 0xFE32F05C, 0xFD1E0008) = 0
/5:      0.0010 xstat(2, "/devices/pseudo/z...@0:8941", 0xFE32FCE0) = 0
/5:      0.0005 fcntl(7, F_SETLK, 0xFE32FED0)                   = 0
/5:      0.0000 close(7)                                        = 0
/5:      0.0001 lwp_unpark(3)                                   = 0
/3:      0.0201 lwp_park(0xFE61EF58, 0)                         = 0
/3:      0.0001 time()                                          = 1263232337
/5:      0.0001 open("/etc/dev/.devfsadm_dev.lock", O_RDWR|O_CREAT, 0644) = 7
/5:      0.0001 fcntl(7, F_SETLK, 0xFE32FEF0)                   = 0
/5:      0.0000 read(7, "A7\0\0\0", 4)                          = 4
/5:      0.0001 getpid()                                        = 167 [1]
/5:      0.0000 getpid()                                        = 167 [1]
/5:      0.0001 open("/devices/pseudo/devi...@0:devinfo", O_RDONLY) = 10
/5:      0.0000 ioctl(10, DINFOIDENT, 0x00000000)               = 57311
/5:      0.0138 ioctl(10, 0xDF06, 0xFE32FA60)                   = 2258109
/5:      0.0027 ioctl(10, DINFOUSRLD, 0x086CD000)               = 2260992
/5:      0.0001 close(10)                                       = 0
/5:      0.0015 modctl(MODGETNAME, 0xFE32F060, 0x00000401, 0xFE32F05C, 0xFD1E0008) = 0
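
(That trace was taken by attaching to the devfsadm process, something like "truss -D -p 167"; the leading column is the per-call time. The xstat() calls with and without the ",raw" suffix show devfsadm touching both a raw and a block device node for every zvol minor.)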

After about 5 minutes all the device nodes were created. 

.. another strange issue: 
r...@osol_dev130:/dev/zvol/dsk/ssd# ls -al

load averages:  1.16,  2.17,  1.50;               up 0+00:22:07        18:54:00
99 processes: 97 sleeping, 2 on cpu
CPU states: 49.1% idle,  0.1% user, 50.8% kernel,  0.0% iowait,  0.0% swap
Kernel: 257 ctxsw, 1 trap, 607 intr, 282 syscall
Memory: 8190M phys mem, 5280M free mem, 4087M total swap, 4087M free swap

   PID USERNAME NLWP PRI NICE  SIZE   RES STATE    TIME    CPU COMMAND
   860 rah         2  59    0   41M   14M sleep    0:01  0.05% gnome-netstatus
   866 root        1  59    0 1656K 1080K sleep    0:00  0.02% gnome-netstatus

.. it seems to be an issue related to /dev ... 
.. so having 10000 snaps of a single zvol is not nice :)
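
A quick way to see how much work devfsadm has to do here (assuming the snapshot device nodes land next to the zvol node, which is what the truss output and the slow ls suggest) is simply to count them:

r...@osol_dev130:~# ls /dev/zvol/dsk/ssd | wc -l
r...@osol_dev130:~# ls /dev/zvol/rdsk/ssd | wc -l

Each snapshot shows up as both a block and a raw node, so 10000 snapshots mean on the order of 20000 entries for devfsadm and anything that walks /dev to deal with.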

Robert