On Thu, Jan 10, 2002 at 02:27:29PM +0100, Henning Sprang wrote:
> We have a web application with a MySQL DB running for our clients on
> one server hosted at a provider's site on the Internet, and we want
> to keep a copy of the files and the database on a machine in our
> local network. That machine is also reachable from outside, but with
> lower bandwidth, and we only want to use it in an emergency when
> something goes wrong with the main machine. We have started
> discussing whether we should have the main server write update logs,
> then fetch them by scp or ftp to the local machine and feed them
> into the database with the mysql client, or whether it would be a
> better, and not really much more difficult, solution to use the
> MySQL replication mechanisms, which we don't know much about at this
> point.
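For concreteness, the manual scheme you describe would look roughly
like this. It is only a sketch: the host name, log file name, and
database name are made up, and you would need extra bookkeeping to
avoid replaying the same log twice:

  # my.cnf on the main server: log every statement that modifies data
  [mysqld]
  log-update

  # Periodically, on the local machine:
  mysqladmin --host=main.example.com flush-logs   # close out the current log
  scp main.example.com:/var/lib/mysql/main.001 /tmp/
  mysql mydb < /tmp/main.001                      # replay the statements

That bookkeeping (which logs have been fetched, which have been
applied, what to do when a copy fails halfway) is exactly what the
built-in mechanism handles for you.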
The built-in replication should serve you well in that case. It
essentially automates the manual process you are describing.

> I already started reading the appropriate manual pages, but in any
> case I wanted to ask the list which is the better/easier way. What
> we don't want to do is make bigger changes to the MySQL servers, and
> we cannot be sure how reliable the connection between the two
> machines will be; that should be taken into account.

MySQL replication works well over unreliable links. It has an
automatic retry system built in. (A minimal configuration sketch
follows after my signature.)

Jeremy
--
Jeremy D. Zawodny, <[EMAIL PROTECTED]>
Technical Yahoo - Yahoo Finance
Desk: (408) 349-7878  Fax: (408) 349-5454  Cell: (408) 685-5936

MySQL 3.23.41-max: up 7 days, processed 188,894,422 queries (287/sec. avg)
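P.S. Here is a minimal sketch of the built-in setup under 3.23. The
host name, user, and password are placeholders, and you still need to
seed the slave with a snapshot of the master's data (taken while the
master is read-locked) before starting it:

  # my.cnf on the main server (the master):
  [mysqld]
  log-bin        # log all updates to a binary log
  server-id=1    # must be unique across master and slaves

  # Run once on the master: the account the slave connects as
  # (in 3.23 the replication user needs the FILE privilege):
  GRANT FILE ON *.* TO repl@"%" IDENTIFIED BY 'secret';

  # my.cnf on the local machine (the slave):
  [mysqld]
  server-id=2
  master-host=main.example.com
  master-user=repl
  master-password=secret

Once started, the slave connects to the master, reads the binary log,
and picks up where it left off after a dropped connection.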