In news:[EMAIL PROTECTED], "Ian P. Christian" <[EMAIL PROTECTED]> wrote:
> This database I'm dumping has something like 17 million rows, all but
> 1 table (which uses FULLTEXT, and only has 3-4k rows) run innodb.
> There is only one table of any real size, and this table has all but
> about 100k of the total rows in. My understanding of this command is
> that the database should not be locked whilst this command is running.
>
> However, here's my problem...
> When the dump starts to read from the large table, the database just
> grinds to a halt - my website running from the database just stops,
> and the dump (which I was watching progress with a primitive
> `watch ls -la`) slows down a bit.

Unless the isolation level on your server is set to SERIALIZABLE, no locks should be set during the dump.

My guess is that in any given period of time your web applications use only a fraction of the data stored in the tables. Most of the needed information is therefore constantly cached in the internal buffers, so the queries are executed quickly and efficiently. Since InnoDB only reads data through those buffers, requesting all the rows of a multi-million-row table overwrites the buffer pool contents with random data pages. As a consequence, most queries can no longer be satisfied by a fast memory access, because the data is no longer there. The workload becomes heavily disk-bound, which is not fast enough for your web traffic.

Maciek
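
P.S. A quick way to check the two points above (a minimal sketch, assuming a pre-8.0 server where the variable is still called tx_isolation; the user and database names are placeholders to replace with your own):

    -- check the transaction isolation level and the InnoDB buffer pool size
    SHOW VARIABLES LIKE 'tx_isolation';
    SHOW VARIABLES LIKE 'innodb_buffer_pool_size';

    # dump the InnoDB tables inside one consistent snapshot instead of
    # taking table locks; the FULLTEXT (MyISAM) table is not covered by
    # the snapshot, but it is tiny in your case
    mysqldump --single-transaction -u dbuser -p mydb > mydb.sql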