On Sat, Dec 01, 2007 at 12:22:34PM -1000, Jeff Roberson wrote:
> On Sat, 1 Dec 2007, Gergely CZUCZY wrote:
>
> > On Sat, Dec 01, 2007 at 04:06:55PM -0500, Mike Tancsa wrote:
> >> At 03:56 PM 12/1/2007, Gergely CZUCZY wrote:
> >>> I don't quite understand the question. It's the very same box, with
> >>> a dualboot configuration.
> >>
> >> Fire up the 3ware controller's RAID management software and make sure the
> >> same write caching strategy is set for FreeBSD and Linux. The driver may
> >> default to different values.
> >>
> >> i.e. under "controller settings" make sure "write cache" and "queuing" are
> >> set to the same values for linux and freebsd.
> >
> > Let's get back to this on monday. I'm at home now, and the box is at my
> > workplace, still running a test (I can't reboot it).
>
> Also, can you verify with a read-only test to see where it's at? I have not
> tested writes with that many threads. I notice mysql goes much faster with a
> fresh table too. So can you blow away and recreate the sysbench tables and
> then do read-only? If that is much slower we'll know there is some
> configuration problem or similar.

It will all be available here: http://phoemix.harmless.hu/mysql/
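Regarding Mike's suggestion: when I'm back at the box on monday I'll compare the
controller settings from the CLI as well. Roughly something like this, if I
remember the tw_cli syntax right (c0/u0 are just example controller/unit
numbers, not necessarily mine):

    tw_cli /c0 show              # list units and their cache/queueing status
    tw_cli /c0/u0 show all       # per-unit details
    tw_cli /c0/u0 set cache=on   # only if the write cache differs from linux
    tw_cli /c0/u0 set qpolicy=on # only if the queueing policy differs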
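For reference, the read-only numbers below come from sysbench's OLTP mode; an
invocation looks roughly like this (table size, user and thread count here are
just illustrative placeholders, not necessarily my exact parameters):

    sysbench --test=oltp --mysql-user=sbtest --oltp-table-size=1000000 cleanup
    sysbench --test=oltp --mysql-user=sbtest --oltp-table-size=1000000 prepare
    sysbench --test=oltp --mysql-user=sbtest --oltp-table-size=1000000 \
        --oltp-read-only=on --num-threads=32 --max-time=60 --max-requests=0 run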
Some notes. With the ZFS tests, mysql seems to spend a lot of time in a
zfs-related wait state in top, and vmstat shows lots of the CPU spent in system:

 r  b w     avm      fre  flt re pi po   fr sr da0 da1   in    sy    cs us sy id
(32 threads)
 5  0 0 2904868  8563836 7259  0  0  0 7783  0   0   0 1009 33097 24196 17 24 59
32  0 0 2921252  8565732 7445  0  0  0 7810  0   0   3 1579 48135 25277 19 80  1
 6  0 0 3167012  8563304 7731  0  0  0 7789  0   0   0 1581 49608 24088 20 79  1
 7  0 0 2861860  8564460 7226  0  0  0 7427  0   0   0 1547 47430 25276 17 82  1
 7  0 0 2968356  8563624 7591  0  0  0 7752  0   2   0 1588 48899 23958 20 80  1
32  0 0 2984740  8562660 7495  0  0  0 7914  0   0   8 1583 48698 25508 17 82  1
26  0 0 3040036  8563708 6852  0  0  0 7035  0   0   0 1446 44358 25176 18 82  1
(64 threads)
 5  0 0 3646244  8549136 6322  0  0  0 6552  0   0   0 1368 41438 30397 17 83  0
47  0 0 3908388  8547924 6425  0  0  0 6525  0   0   0 1395 41779 33059 18 81  1
65  0 0 3748644  8548356 6507  0  0  0 6689  0   0   0 1426 42855 29754 18 82  0
57  0 0 3785508  8549040 6452  0  0  0 6583  0   0   0 1390 42103 30140 18 81  1
 8  0 0 4180772  8547492 6480  0  0  0 6604  0   0   0 1426 42261 30397 15 84  1

and so on. "zpool iostat" shows no activity on the zm pool I have, only
occasionally 1-3K in 5-second intervals, which is nothing. So I think
everything is returned from the fscache/zfs cache. I've increased vm.kmem_size
a bit to make room for zfs:

vm.kmem_size: 1073741824

The test hasn't finished yet, but performance still seems to be very poor:

threads:      1        2       4       8      16      32      64
qps:     436.83  1038.33  879.85  826.63  757.92  969.31  845.84

(keep in mind this is the read-only test)

With UFS:
1926.87 2172.59 2093.41 2478.06 2577.58 2543.55 2341.46 2166.81 2026.50 1891.09 1753.52 1647.64

and with linux-2.6.19.2 + mysql-5.0.41 + tcmalloc:
3431.56 4135.05 4984.12 5487.01 5448.19 5354.64 5226.64 5113.96 5011.94 4705.62 4362.06 3996.42

vmstat when running the test on UFS:

procs    memory       page                      disks     faults        cpu
 r  b w     avm      fre   flt re pi po    fr sr da0 da1   in     sy    cs us sy id
(8 threads)
 7  0 0 2385660  9399000 19128  0  0  0 19601  0   0   0 3235 123806 43490 37 61  2
 8  0 0 2461436  9399180 18975  0  0  0 19468  0   0   0 3213 122856 51389 39 60  1
 6  0 0 2410236  9399508 19141  0  0  0 19706  0   0   0 3230 123783 50353 38 61  2
 5  0 0 2348796  9399744 19273  0  0  0 19817  0   0   0 3272 124558 51281 38 60  2
(16 threads)
14  0 2 2664228  9393172 19988  0  0  0 20462  0   0   0 3148 123556 17475 35 65  0
 9  0 0 2666276  9393004 20146  0  0  0 20661  0   0   0 3231 125252 17340 37 63  0
16  0 0 2596644  9394436 20157  0  0  0 20704  0   0   0 3204 124366 17421 38 62  0
 9  0 0 2590500  9394556 19712  0  0  0 20197  0   0   0 3113 122209 17610 36 64  0
(32 threads)
30  0 0 2930468  9386688 19357  0  0  0 19919  0   0   0 3096 120375 18285 39 61  0
26  0 0 2760484  9386848 19372  0  0  0 19913  0   0   0 3112 121284 18020 39 60  0
10  0 0 2908964  9385772 19238  0  0  0 19672  0   1   0 3019 119013 18037 35 64  0
17  0 0 2981668  9384308 19265  0  0  0 19715  0   0   0 3088 120462 18040 39 61  0
(64 threads)
43  1 0 3662632  9372396 18201  0  0  0 18612  0   0   0 2864 113344 20063 38 62  0
18  0 0 4131624  9372004 17703  0  0  0 18172  0   0   0 2808 110922 21348 36 64  0
58  0 0 3562280  9374428 18016  0  0  0 18593  0   0   0 2840 111615 21078 36 64  0
58  0 0 3990312  9375276 17834  0  0  0 18361  0   0   0 2886 112559 20662 38 61  0

Roughly 20% more CPU time was spent in system state when mysql was running off
a ZFS filesystem than off a UFS one. There were also more context switches, but
fewer system calls and interrupts. So the result is basically the same as in
the read-write case.

Where should I start investigating this issue? Should I try again with the
4BSD scheduler? Currently, as you can see, I'm using the new ULE one.
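In case it matters, the zfs cache observation above comes from simply watching
the pool at that interval, i.e.:

    zpool iostat zm 5    # zm is the pool, 5 is the sampling interval in seconds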
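The kmem bump is just the usual loader tuning; a sketch of the relevant
/boot/loader.conf lines using the value I quoted above (the kmem_size_max line
is only the companion knob one usually sets alongside it, I didn't quote it
above):

    vm.kmem_size="1073741824"
    vm.kmem_size_max="1073741824"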
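If switching schedulers is worth trying, as far as I know it's just a kernel
config change and rebuild, roughly like this (MYKERNEL is a placeholder for my
kernel config name):

    # in the kernel configuration file, replace
    #   options SCHED_ULE
    # with
    #   options SCHED_4BSD
    cd /usr/src
    make buildkernel KERNCONF=MYKERNEL
    make installkernel KERNCONF=MYKERNEL
    shutdown -r now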
Sincerely,
Gergely Czuczy
mailto: [EMAIL PROTECTED]

--
Weenies test.
Geniuses solve problems that arise.