OK, the picture is now a little bit simpler:

1. Your performance problem is coming mainly from one BIG database.
2. Although your servers have a lot of RAM (at least 4GB), InnoDB can only use a small fraction of it because of the addressing limitations of a 32-bit CPU.
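You can see how much memory mysqld is actually configured to use with something like the rough sketch below; the point is that all of these buffers together have to fit in the roughly 2GB of address space a 32-bit process gets:

    # how much memory is mysqld currently allowed to use?
    # (all of these must fit in the ~2GB a 32-bit process can address)
    mysql -e "SHOW VARIABLES LIKE 'innodb_buffer_pool_size'"
    mysql -e "SHOW VARIABLES LIKE 'innodb_additional_mem_pool_size'"
    mysql -e "SHOW VARIABLES LIKE 'innodb_log_buffer_size'"
    mysql -e "SHOW VARIABLES LIKE 'key_buffer_size'"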
Some random ideas:

1. Switch to a 64-bit system so that you can efficiently use more RAM.
2. Do all tables in this BIG database need to be transaction safe? If not, you can convert some (all?) of them to MyISAM format (see the sketch just after this list). In that case, file caching is done by the kernel outside the mysqld process, allowing more efficient use of RAM.
3. Sponsor Innobase so that InnoDB can use AWE memory on Linux/x86 (up to 64GB). This has already been discussed on this mailing list: http://groups.google.com/groups?hl=en&lr=&ie=UTF-8&oe=UTF-8&th=cddc8bdca9e4304a&seekm=bbkmji%2417qt%241%40FreeBSD.csie.NCTU.edu.tw
4. Make sure that you are really limited by RAM and disk I/O, not by CPU; otherwise you won't gain anything from more RAM.
5. If you have more selects than updates/inserts, you should consider replication (I use it, it rocks!).
6. If you have more selects than updates/inserts and identical selects are run several times, you could consider upgrading to MySQL 4.x. Its query cache may speed up your server (I have not tried it myself, but there have been positive reports on this list).
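To make ideas 2 and 6 a bit more concrete, here is a minimal sketch from the command line. The database and table names are made-up placeholders, and the query cache variables only exist once you are on 4.x:

    # convert a table that does not need transactions to MyISAM
    # (3.23 uses TYPE= rather than ENGINE=); bigdb/history_log are placeholder names
    mysql bigdb -e "ALTER TABLE history_log TYPE=MyISAM"

    # on 4.x, check and size the query cache (SET GLOBAL needs 4.0.3+;
    # you can also put query_cache_size=32M in my.cnf under [mysqld])
    mysql -e "SHOW VARIABLES LIKE 'query_cache%'"
    mysql -e "SET GLOBAL query_cache_size = 33554432"
    mysql -e "SHOW STATUS LIKE 'Qcache%'"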
regards, Joseph Bueno
Wendell Dingus wrote:
Ahh, there's something I could have explained better. Each of the databases represents a remote office of a different customer. I specifically want/need to keep the data separate; it doubles as an offsite backup for these offices, and the data would conflict if it were merged. I currently use replication to get the data from those offices up to my web server, but not exactly in the way it was intended. I just turn on binary logging, and a process every 30 minutes or so checks the latest /var/lib/mysql/systemname-bin.nnn to see if it contains anything, starts up a new binary log file, uses mysqlbinlog to grab the contents of that one, and sends them to the web server, which integrates them with its offsite copy. Works great since some offices have live internet connections and others are dialup, etc...
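In case it helps to see it, the cycle is roughly this (the paths, file names, and host names below are placeholders for the real ones):

    # run from cron every ~30 minutes on the office server
    cd /var/lib/mysql
    LAST=`ls systemname-bin.[0-9]* | tail -1`   # the current binary log
    mysql -e "FLUSH LOGS"                       # start a new binary log file
    mysqlbinlog $LAST > /tmp/$LAST.sql          # extract the logged statements
    scp /tmp/$LAST.sql webserver:/var/spool/binlogs/
    # a matching job on the web server replays the file into the offsite copy:
    #   mysql officedb < /var/spool/binlogs/systemname-bin.NNN.sql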
Now then, I could divvy things up across multiple servers, except that that one largest database is almost as slow at the customer's own office with live local (non-web) usage, and that server has over 5GB of RAM. It has similar specs to the web server I described otherwise, and it only holds that ONE database, not the ones of all the other customers.
Anyway, beyond that, the LVS approach would still involve having one master write server and all the backend "real" servers being read-only (to the application) and kept updated via replication slaving from the master. Just across multiple actual databases and not one... From what I've read so far, that is!
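From the docs, pointing a read-only slave at the master looks roughly like this (host names, the replication user/password, and the log coordinates are all placeholders; on 3.23 the replication user needs the FILE privilege, while 4.0.2+ uses REPLICATION SLAVE instead):

    # on the master: create a user the slaves can replicate as (3.23 syntax)
    mysql -e "GRANT FILE ON *.* TO repl@'%.example.com' IDENTIFIED BY 'secret'"
    mysql -e "SHOW MASTER STATUS"   # note the File and Position values to use below

    # on each read-only slave, after loading a snapshot of the data:
    mysql -e "CHANGE MASTER TO MASTER_HOST='master.example.com',
              MASTER_USER='repl', MASTER_PASSWORD='secret',
              MASTER_LOG_FILE='master-bin.001', MASTER_LOG_POS=4"
    mysql -e "SLAVE START"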
Thanks!
-----Original Message----- From: Joseph Bueno [mailto:[EMAIL PROTECTED] Sent: Saturday, June 28, 2003 2:45 PM To: Wendell Dingus Cc: [EMAIL PROTECTED] Subject: Re: MySQL/INNODB speed on large databases
Maybe my question is too obvious, but since you have several databases instead of only one, have you considered moving some of them to a separate server? Also, have you checked MySQL's replication capabilities? It is a very efficient way to distribute the load across several servers, unless you do a lot of inserts/updates and very few selects.
Regards, Joseph Bueno
Wendell Dingus wrote:
Hello, I've got a situation I'm hoping someone here can help me out with. We have a web server serving up data via some PHP programs pulling from MySQL (3.23.56 currently) in InnoDB tables. There are 40+ (active) databases totalling a bit over 28 gigabytes at this point. As we add more users, speed is beginning to drop...
<snip>