Maybe you could get some speed increase for your queries by setting
record_buffer to a higher value, because with a fixed row length this
buffer fills up faster too. Although I doubt that you will gain a lot...
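For concreteness, a minimal my.cnf sketch of that suggestion. The 4M value and the set-variable syntax are just era-typical examples, not a recommendation; in later MySQL versions this variable is, as far as I know, called read_buffer_size.

```ini
# Sketch only: raises the record buffer used for sequential scans.
# 4M is an arbitrary example value; tune against your own workload.
[mysqld]
set-variable = record_buffer=4M
```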
/rudy
-Original Message-
From: Alexander Schulz [mailto:[EMAIL PROTECTED]
ANALYZE could be handy.
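Presumably this refers to MySQL's ANALYZE TABLE statement; a minimal sketch, with big_table as a placeholder name:

```sql
-- Rebuilds key distribution statistics so the optimizer can pick
-- better query plans; big_table is a hypothetical table name.
ANALYZE TABLE big_table;
```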
-Original Message-
From: Veysel Harun Sahin [mailto:[EMAIL PROTECTED]
Sent: dinsdag 15 juli 2003 16:24
To: Rudy Metzger
Cc: [EMAIL PROTECTED]; [EMAIL PROTECTED]; [EMAIL PROTECTED]
Subject: Re: Managing big tables
Sorry rudy, but I can not understand what you are trying to say.
-Original Message-
From: Veysel Harun Sahin [mailto:[EMAIL PROTECTED]
Sent: dinsdag 15 juli 2003 15:22
To: [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]; [EMAIL PROTECTED]
Subject: Re: Managing big tables
http://www.mysql.com/doc/en/Data_size.html
thing completely
different.
Cheers
/rudy
I do not really think that optimizing (in your case "compressing", thus
cleaning up free space) is much faster with fixed record length on LARGE
tables. Why? When optimizing the table, the DB rebuilds the file "record
for record" into a temporary file and then moves it back to the original
file.
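The rebuild described here can be triggered with OPTIMIZE TABLE; a minimal sketch, with big_table as a placeholder name:

```sql
-- Reclaims space left by deleted rows by rebuilding the table into a
-- temporary file; the table stays locked for the duration.
OPTIMIZE TABLE big_table;
```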
http://www.mysql.com/doc/en/Data_size.html
[EMAIL PROTECTED] wrote:
Hello,
i've got a little problem:
we're using MySQL with two big tables (one has 90 million rows (60 GB on
HD), the other contains nearly 200,000,000 (130 GB on HD)).
Now we want to delete some rows from these tables to free diskspace
Hi
The only thing I can say is that if you optimise the table often, there is
less work for it to do, so your table will be left locked for a shorter time.
I have not looked into this, but if you use the RAID option, splitting the
table up might let you work on one part at a time?
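For what it's worth, a hypothetical sketch of the MyISAM RAID table option mentioned above (this assumes a server built with RAID support; the table and column names are made up, and the chunk values are arbitrary examples):

```sql
-- Striped storage: the data file is split into 4 separate chunk files,
-- which can spread I/O and sidestep per-file size limits.
CREATE TABLE big_table (
  id INT NOT NULL,
  payload CHAR(100) NOT NULL
) TYPE=MyISAM
  RAID_TYPE=STRIPED
  RAID_CHUNKS=4
  RAID_CHUNKSIZE=1024;
```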