The 8K recordsize is what optimises transactional performance, at the
expense of sequential file scans.
You could revert to a 32K recordsize if you want a different tradeoff,
but did you also consider using snapshots and zfs send | zfs recv?
That would let you replicate the DB incrementally, copying only the
latest block changes.
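
As a minimal sketch, assuming the database lives in a dataset called
tank/mysql and the replica pool is called backup (both names are just
placeholders here):

  # one-time: snapshot the dataset and seed the replica with a full send
  zfs snapshot tank/mysql@base
  zfs send tank/mysql@base | zfs recv backup/mysql

  # on each later run: snapshot again, then send only the blocks that
  # changed since the previous snapshot
  zfs snapshot tank/mysql@incr1
  zfs send -i tank/mysql@base tank/mysql@incr1 | zfs recv backup/mysql

Each incremental send only transfers the blocks modified between the
two snapshots, so the copy time scales with the change rate rather
than with the full 140 GB.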

-r

On 22 Nov 2008, at 02:20, Vincent Kéravec wrote:

> I just tried ZFS on one of our slaves and got some really bad
> performance.
>
> When I started the server yesterday, it was able to keep up with the
> main server without problem, but after two days of continuous
> operation the server is crushed by IO.
>
> After running the dtrace script iopattern, I noticed that the
> workload is now 100% random IO. Copying the database (140 GB) from
> one directory to another took more than 4 hours with no other tasks
> running on the server, and all the reads on tables that were updated
> were random... Keeping an eye on iopattern and zpool iostat, I saw
> that when the system was accessing files that had not been changed,
> the disk was reading sequentially at more than 50 MB/s, but when
> reading files that change often the speed dropped to 2-3 MB/s.
>
> The server has plenty of disk space, so it should not have reached
> such a level of file fragmentation in such a short time.
>
> For information, I'm running Solaris 10/08 with a mirrored root pool
> on two 1 TB SATA hard disks (slow with random IO), and MySQL 5.0.67
> with the MyISAM engine. The ZFS recordsize is 8k, as recommended in
> the ZFS guide.

