Hi

Okay, it's not what I feared. The L2ARC is probably caching every bit of data
and metadata you have written so far, and why shouldn't it? You have the space
on the cache device, and it can't serve a block back to you later if that block
was never written into the cache. Once the cache is full or nearly full, it
will start choosing more carefully what to keep and what to throw away.
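
Your kstat output actually shows this: l2_write_bytes is already up around
3 GB, but l2_hits is still 0, so the cache device is being filled but nothing
has been read back from it yet. If you want to watch when that changes, you
can poll the same counters directly (just a sketch using the kstat names from
your own output; the trailing 5 is a sampling interval in seconds):

    # print L2ARC size, hit and miss counters every 5 seconds
    kstat -p zfs:0:arcstats:l2_size zfs:0:arcstats:l2_hits zfs:0:arcstats:l2_misses 5

Once l2_hits starts climbing, you'll know the L2ARC is actually serving reads
rather than just being filled.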

James Dickens
http://uadmin.blogspot.com


On Sat, Mar 6, 2010 at 2:15 AM, Abdullah Al-Dahlawi <dahl...@ieee.org> wrote:

> hi James
>
>
> here is the output you've requested:
>
> abdul...@hp_hdx_16:~/Downloads# zpool status -v
>   pool: hdd
>  state: ONLINE
>  scrub: none requested
> config:
>
>     NAME        STATE     READ WRITE CKSUM
>     hdd         ONLINE       0     0     0
>       c7t0d0p3  ONLINE       0     0     0
>     cache
>       c8t0d0p0  ONLINE       0     0     0
>
> errors: No known data errors
>
>   pool: rpool
>  state: ONLINE
>  scrub: none requested
> config:
>
>     NAME        STATE     READ WRITE CKSUM
>     rpool       ONLINE       0     0     0
>       c7t0d0s0  ONLINE       0     0     0
>
> -----------------------
>
> abdul...@hp_hdx_16:~/Downloads# zpool iostat -v hdd
>                capacity     operations    bandwidth
> pool         used  avail   read  write   read  write
> ----------  -----  -----  -----  -----  -----  -----
> hdd         1.96G  17.7G     10     64  1.27M  7.76M
>   c7t0d0p3  1.96G  17.7G     10     64  1.27M  7.76M
> cache           -      -      -      -      -      -
>   c8t0d0p0  *2.87G*  12.0G      0     17    103  2.19M
> ----------  -----  -----  -----  -----  -----  -----
>
> abdul...@hp_hdx_16:~/Downloads# kstat -m zfs
> module: zfs                             instance: 0
> name:   arcstats                        class:    misc
>     c                               2147483648
>     c_max                           2147483648
>     c_min                           268435456
>     crtime                          34.558539423
>     data_size                       2078015488
>     deleted                         9816
>     demand_data_hits                382992
>     demand_data_misses              20579
>     demand_metadata_hits            74629
>     demand_metadata_misses          6434
>     evict_skip                      21073
>     hash_chain_max                  5
>     hash_chains                     7032
>     hash_collisions                 31409
>     hash_elements                   36568
>     hash_elements_max               36568
>     hdr_size                        7827792
>     hits                            481410
>     l2_abort_lowmem                 0
>     l2_cksum_bad                    0
>     l2_evict_lock_retry             0
>     l2_evict_reading                0
>     l2_feeds                        1157
>     l2_free_on_write                475
>     l2_hdr_size                     0
>     l2_hits                         0
>     l2_io_error                     0
>     l2_misses                       14997
>     l2_read_bytes                   0
>     l2_rw_clash                     0
>     l2_size                         588342784
>     l2_write_bytes                  3085701632
>     l2_writes_done                  194
>     l2_writes_error                 0
>     l2_writes_hdr_miss              0
>     l2_writes_sent                  194
>     memory_throttle_count           0
>     mfu_ghost_hits                  9410
>     mfu_hits                        343112
>     misses                          33011
>     mru_ghost_hits                  4609
>     mru_hits                        116739
>     mutex_miss                      90
>     other_size                      51590832
>     p                               1320449024
>     prefetch_data_hits              4775
>     prefetch_data_misses            1694
>     prefetch_metadata_hits          19014
>     prefetch_metadata_misses        4304
>     recycle_miss                    484
>     size                            2137434112
>     snaptime                        1945.241664714
>
> module: zfs                             instance: 0
> name:   vdev_cache_stats                class:    misc
>     crtime                          34.558587713
>     delegations                     3415
>     hits                            5578
>     misses                          3647
>     snaptime                        1945.243484925
>
>
>
>
> On Fri, Mar 5, 2010 at 9:02 PM, James Dickens <jamesd...@gmail.com> wrote:
>
>> please post the output of zpool status -v.
>>
>>
>> Thanks
>>
>> James Dickens
>>
>>
>> On Fri, Mar 5, 2010 at 3:46 AM, Abdullah Al-Dahlawi <dahl...@ieee.org> wrote:
>>
>>> Greeting All
>>>
>>> I have created a pool that consists of a hard disk and an SSD as a cache:
>>>
>>> zpool create hdd c11t0d0p3
>>> zpool add hdd cache c8t0d0p0     - cache device
>>>
>>> I ran an OLTP benchmark to emulate a DBMS.
>>>
>>> Once I ran the benchmark, the pool started creating the database files on
>>> the SSD cache device.
>>>
>>>
>>> Can anyone explain why this is happening?
>>>
>>> Isn't the L2ARC supposed to absorb data evicted from the ARC?
>>>
>>> Why is it being used this way?
>>>
>>>
>>>
>>>
>>>
>>> --
>>> Abdullah Al-Dahlawi
>>> George Washington University
>>> Department. Of Electrical & Computer Engineering
>>> ----
>>> Check The Fastest 500 Super Computers Worldwide
>>> http://www.top500.org/list/2009/11/100
>>>
>>>
>>>
>>
>
>
> --
> Abdullah Al-Dahlawi
> PhD Candidate
> George Washington University
> Department. Of Electrical & Computer Engineering
> ----
> Check The Fastest 500 Super Computers Worldwide
> http://www.top500.org/list/2009/11/100
>
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
