Concurrency/Parallelism testing.
I have 6 different filesystems populated with email data on our mail
development server.
I rebooted the server before beginning the tests.
The server is a T2000 (sun4v) machine, so it's ideally suited to this
type of testing.
The test was to tar (to /dev/null) each of the filesystems: launch one,
gather stats, launch another, gather stats, and so on.
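
The driver was nothing fancy; a rough sketch (the /mail/fs1 ...
/mail/fs6 mount points are stand-ins, not the real names):

    #!/bin/sh
    # Ramp-up: start one tar at a time, watching pool throughput
    # for about a minute before adding the next job.
    for fs in /mail/fs1 /mail/fs2 /mail/fs3 /mail/fs4 /mail/fs5 /mail/fs6
    do
        tar cf /dev/null $fs &
        zpool iostat space 5 12
    done
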
The underlying storage system is a Network Appliance, our only one, and
it is in production serving NFS, CIFS, and iSCSI. Other work the
appliance is doing may affect these tests, and vice versa :). No one
seemed to notice I was running them.

With 6 concurrent tars running we are probably seeing the benefits of
the ARC.
At certain points I included load averages and traffic stats for each
of the iSCSI Ethernet interfaces, which are configured with MPxIO.
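
The load and traffic numbers came from something like the following;
the e1000g driver name is an assumption (the T2000's on-board ports),
so substitute whatever the iSCSI ports actually are:

    # system load
    uptime
    # cumulative receive byte counters for every e1000g instance;
    # sample twice, 60 seconds apart, and difference them to get
    # the 1-minute average per port
    kstat -p 'e1000g:::rbytes64'
    sleep 60
    kstat -p 'e1000g:::rbytes64'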

After the first 6 jobs, I launched duplicates of the original 6, then
another 6, and so on.
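
Each round was just the same 6 tars launched again; something like
this (mount points hypothetical, as before):

    # Scale-up: add another round of the same 6 jobs, then
    # re-sample throughput and load.
    for fs in /mail/fs1 /mail/fs2 /mail/fs3 /mail/fs4 /mail/fs5 /mail/fs6
    do
        tar cf /dev/null $fs &
    done
    zpool iostat space 5 8
    uptime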

At the end I included the ZFS kernel statistics:

1 job
=====
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
space       70.5G  29.0G      0      0      0      0
space       70.5G  29.0G     19      0  1.04M      0
space       70.5G  29.0G    268      0  8.71M      0
space       70.5G  29.0G    196      0  11.3M      0
space       70.5G  29.0G    171      0  11.0M      0
space       70.5G  29.0G    182      0  5.01M      0
space       70.5G  29.0G    273      0  9.71M      0
space       70.5G  29.0G    292      0  8.91M      0
space       70.5G  29.0G    279      0  15.4M      0
space       70.5G  29.0G    219      0  11.3M      0
space       70.5G  29.0G    175      0  8.67M      0

2 jobs
======
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
space       70.5G  29.0G    381      0  23.8M      0
space       70.5G  29.0G    422      0  28.0M      0
space       70.5G  29.0G    386      0  26.5M      0
space       70.5G  29.0G    380      0  22.9M      0
space       70.5G  29.0G    411      0  18.8M      0
space       70.5G  29.0G    393      0  20.7M      0
space       70.5G  29.0G    302      0  15.0M      0
space       70.5G  29.0G    267      0  15.6M      0
space       70.5G  29.0G    304      0  18.7M      0
space       70.5G  29.0G    534      0  19.7M      0
space       70.5G  29.0G    339      0  17.0M      0

3 jobs
======
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
space       70.5G  29.0G    530      0  22.9M      0
space       70.5G  29.0G    428      0  16.3M      0
space       70.5G  29.0G    439      0  16.4M      0
space       70.5G  29.0G    511      0  22.1M      0
space       70.5G  29.0G    464      0  17.9M      0
space       70.5G  29.0G    371      0  12.1M      0
space       70.5G  29.0G    447      0  16.5M      0
space       70.5G  29.0G    379      0  15.5M      0

4 jobs
======
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
space       70.5G  29.0G    434      0  22.0M      0
space       70.5G  29.0G    506      0  29.5M      0
space       70.5G  29.0G    424      0  21.3M      0
space       70.5G  29.0G    643      0  36.0M      0
space       70.5G  29.0G    688      0  31.1M      0
space       70.5G  29.0G    726      0  37.6M      0
space       70.5G  29.0G    652      0  24.8M      0
space       70.5G  29.0G    646      0  33.9M      0

5 jobs
======
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
space       70.5G  29.0G    629      0  31.1M      0
space       70.5G  29.0G    774      0  45.8M      0
space       70.5G  29.0G    815      0  39.8M      0
space       70.5G  29.0G    895      0  44.4M      0
space       70.5G  29.0G    800      0  48.1M      0
space       70.5G  29.0G    857      0  51.8M      0
space       70.5G  29.0G    725      0  47.6M      0

6 jobs
======
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
space       70.5G  29.0G    924      0  58.8M      0
space       70.5G  29.0G    767      0  51.8M      0
space       70.5G  29.0G    862      0  48.4M      0
space       70.5G  29.0G    977      0  43.9M      0
space       70.5G  29.0G    954      0  53.7M      0
space       70.5G  29.0G    903      0  48.3M      0

# uptime
  2:19pm  up 15 min(s),  2 users,  load average: 1.44, 1.10, 0.67

26 MB/s (1-minute average) on each iSCSI Ethernet port

12 jobs
======
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
space       70.5G  29.0G    868      0  48.6M      0
space       70.5G  29.0G    903      0  45.3M      0
space       70.5G  29.0G    919      0  52.4M      0
space       70.5G  29.0G  1.20K      0  73.3M      0
space       70.5G  29.0G  1.16K      0  63.3M      0
space       70.5G  29.0G  1.12K      0  71.2M      0
space       70.5G  29.0G  1.29K      0  68.8M      0

# uptime
  2:22pm  up 18 min(s),  2 users,  load average: 1.75, 1.29, 0.80
33 MB/s (1-minute average) on each iSCSI Ethernet port

18 jobs
======
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
space       70.5G  29.0G  1.31K      0  69.3M      0
space       70.5G  29.0G  1.25K      0  74.7M      0
space       70.5G  29.0G  1.23K      0  74.4M      0
space       70.5G  29.0G  1.25K      0  72.1M      0
space       70.5G  29.0G  1.34K      0  75.3M      0
space       70.5G  29.0G  1.31K      0  77.4M      0
space       70.5G  29.0G    892      0  51.8M      0
space       70.5G  29.0G  1.12K      0  69.6M      0

24 jobs
======
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
space       70.5G  29.0G  1.56K      0  84.5M      0
space       70.5G  29.0G  1.46K      0  86.3M      0
space       70.5G  29.0G  1.43K      0  75.7M      0
space       70.5G  29.0G  1.35K      0  67.6M      0
space       70.5G  29.0G  1.38K      0  72.6M      0
space       70.5G  29.0G  1.14K      0  69.8M      0
space       70.5G  29.0G  1.19K      0  66.4M      0

# uptime
  2:26pm  up 23 min(s),  2 users,  load average: 2.29, 1.89, 1.20

36 MB/s (1-minute average) on each iSCSI Ethernet port

30 jobs
======
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
space       70.5G  29.0G  1.20K      0  63.9M      0
space       70.5G  29.0G  1.76K      0  82.3M      0
space       70.5G  29.0G  1.57K      0  79.8M      0
space       70.5G  29.0G  1.82K      0  96.2M      0
space       70.5G  29.0G  1.81K      0  82.7M      0
space       70.5G  29.0G  1.55K      0  74.9M      0
space       70.5G  29.0G  1.53K      0  77.9M      0
space       70.5G  29.0G  1.50K      0  81.6M      0

# uptime
  2:29pm  up 26 min(s),  2 users,  load average: 2.57, 2.12, 1.39

40 MB/s (1-minute average) on each iSCSI Ethernet port

35 jobs
======
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
space       70.5G  29.0G  1.41K      0  69.7M      0
space       70.5G  29.0G  1.58K      0  83.0M      0
space       70.5G  29.0G  1.31K      0  69.3M      0
space       70.5G  29.0G  1.53K      0  79.5M      0
space       70.5G  29.0G  1.42K      0  73.7M      0
space       70.5G  29.0G  1.45K      0  71.3M      0

# uptime
  2:34pm  up 30 min(s),  2 users,  load average: 2.70, 2.55, 1.79

45 MB/s (1-minute average) on each iSCSI Ethernet port

# kstat zfs
module: zfs                             instance: 0
name:   arcstats                        class:    misc
        c                               4294967296
        c_max                           4294967296
        c_min                           536870912
        crtime                          5674386.62393914
        deleted                         1484966
        demand_data_hits                8323333
        demand_data_misses              1391606
        demand_metadata_hits            1320089
        demand_metadata_misses          83372
        evict_skip                      15986
        hash_chain_max                  10
        hash_chains                     47700
        hash_collisions                 1104590
        hash_elements                   166476
        hash_elements_max               188996
        hdr_size                        29907360
        hits                            10033815
        l2_abort_lowmem                 0
        l2_cksum_bad                    0
        l2_evict_lock_retry             0
        l2_evict_reading                0
        l2_feeds                        0
        l2_free_on_write                0
        l2_hdr_size                     0
        l2_hits                         0
        l2_io_error                     0
        l2_misses                       0
        l2_rw_clash                     0
        l2_size                         0
        l2_writes_done                  0
        l2_writes_error                 0
        l2_writes_hdr_miss              0
        l2_writes_sent                  0
        memory_throttle_count           0
        mfu_ghost_hits                  56647
        mfu_hits                        1963736
        misses                          1735570
        mru_ghost_hits                  27411
        mru_hits                        7715952
        mutex_miss                      82794
        p                               1918981120
        prefetch_data_hits              3017
        prefetch_data_misses            225803
        prefetch_metadata_hits          387376
        prefetch_metadata_misses        34789
        recycle_miss                    171217
        size                            3914208576
        snaptime                        5676565.69946945

module: zfs                             instance: 0
name:   vdev_cache_stats                class:    misc
        crtime                          5674386.6242014
        delegations                     15022
        hits                            38616
        misses                          64786
        snaptime                        5676565.7082284
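
One number worth pulling out of the arcstats above: the overall ARC
hit rate for the run works out to hits / (hits + misses) =
10033815 / (10033815 + 1735570), or roughly 85%, which fits the
earlier guess that the later jobs were largely being fed from cache.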

-- 
Ed  

