
On 09/07/2010 01:37, Edward Ned Harvey wrote:
>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>> boun...@opensolaris.org] On Behalf Of Philippe Schwarz
>>
[...]
> But if you're impressed with performance by enabling writeback, you can
> still do better ...
> 
> The most effective thing you could possibly do is to disable the writeback,
> and add SSD for log device.  ZFS is able to perform in this configuration,
> better than the WriteBack.  And in this situation, surprisingly, enabling
> the WriteBack actually hurts performance slightly.
> 
[...]

Hi,
I bought a small SSD (OCZ Agility 30 GB) and added one half as L2ARC and
the other half as ZIL:

zpool add zfsda1 log da3s2
zpool add zfsda1 cache da3s1
zpool status

  pool: zfsda1
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        zfsda1      ONLINE       0     0     0
          da1       ONLINE       0     0     0
        logs        ONLINE       0     0     0
          da3s2     ONLINE       0     0     0
        cache
          da3s1     ONLINE       0     0     0

errors: No known data errors


OK, let's try a write burst (write cache disabled on the RAID controller)
...
The result is... awful!

A `zpool iostat -v 1` shows:

- Although the L2ARC (da3s1) is shown separately from the pool, the ZIL
(da3s2) is shown within the pool. Is that the normal behaviour?

- The ZIL seems to be quiet almost all the time, with occasional bursts.
OK, that may be the normal behaviour of a cache.

               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
zfsda1      2.36G   694G      0    176      0  21.8M
  da1       2.36G   694G      0     56      0  6.88M
  da3s2      128K  15.0G      0    119      0  15.0M
cache           -      -      -      -      -      -
  da3s1     3.06G  11.7G      0      0      0      0
----------  -----  -----  -----  -----  -----  -----
But at the end of the copy process (copying a 1 GB file from and to the
same pool), the used capacity of the ZIL remains unchanged... Puzzling.

- And, last but not least... the copy isn't any faster!

-- Without ZIL & L2ARC
 time cp /zfsda1/rnd /zfsda1/rn2
real    3m23.297s

-- With ZIL & L2ARC
 time cp /zfsda1/rnd /zfsda1/rn2
real    3m34.847s

Should I call my (naive) test into question?
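For what it's worth, a local `cp` issues only asynchronous writes, which ZFS commits from RAM at transaction-group sync and never routes through the intent log; the ZIL (and therefore the slog on da3s2) is only exercised by synchronous writes (fsync(), O_SYNC, NFS commits, databases). A sketch of a comparison that should actually hit the log device, assuming that behaviour; paths, sizes, and the GNU dd `oflag=` flag are illustrative assumptions (the native Solaris/FreeBSD dd may lack `oflag=`):

```shell
# Point TARGET at the pool's mountpoint (illustrative default).
TARGET=${TARGET:-/zfsda1}

# Async writes: roughly what cp does; the slog should stay idle.
time dd if=/dev/zero of="$TARGET/async-test" bs=128k count=8192

# Sync writes: oflag=sync (GNU dd) makes every 128k block a synchronous
# write, which is the path an SSD slog accelerates. During this run,
# `zpool iostat -v 1` should show steady write traffic on da3s2.
time dd if=/dev/zero of="$TARGET/sync-test" bs=128k count=8192 oflag=sync
```

If the sync run is much faster with the slog attached than without it, the device is doing its job even though the plain `cp` test cannot show it.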

Thanks.
Best regards.

-- 
Lycée Maximilien Perret, Alfortville

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
