On Wed, Aug 7, 2013 at 4:43 AM, Luca Ferrari <fluca1...@infinito.it> wrote:
> Not really helpful, but here are my considerations.
> The low frequency and the preference for a single server suggest me a
> dump and restore cycle on two databases, assuming this is possible due
> to not high volume data.
> I would also consider some way of data partitioning in order to
> isolate the data that has to be pushed from staging into the master
> (you say data is only added or queried).
> The problem for replication is that both the staging and the master
> would be in read-write mode, so sounds to me like a multi-master
> setup

I wasn't very careful with my wording; sorry about that. There will be updates and possibly deletions as well as additions. Also, the public version would be read-only, I believe: the client would be modifying data, not end users. (It's a catalog site; the client is a non-profit publishing information in their field.)

You're right in guessing that the data is not high volume, so a dump/restore cycle does seem logical, and replication might be complete overkill. The part that makes it a little more difficult is managing the removal of the old database before restoring -- and preferably, not getting rid of the old DB unless the restore succeeds. The best way I can think of to avoid doing all those steps manually is to write a bash script, unless someone has better ideas.

Also, don't undervalue your contributions. =) This is very helpful, even if my question is kind of vague because I'm not actually doing this yet. I'd much rather have a rough plan in mind when this comes up again than be totally unprepared. Thank you.
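For what it's worth, here's a rough sketch of the script I have in mind. All the names (stagedb, publicdb, the dump path) are placeholders, and it assumes standard pg_dump/pg_restore and that nothing is connected to the public DB during the swap -- the renames will fail if there are active connections, which is actually a nice safety property:

```shell
#!/bin/sh
# Sketch: refresh the public database from a staging dump, keeping the
# old copy around until the new one has restored cleanly.
# Database names and paths are placeholders.
refresh_public_db() {
    set -e  # abort the whole sequence on any failure

    dump=/tmp/stagedb.dump

    # Dump staging in custom format (compressed, usable by pg_restore).
    pg_dump -Fc -f "$dump" stagedb

    # Restore into a brand-new database; the live one stays untouched.
    createdb publicdb_new
    pg_restore -d publicdb_new "$dump"

    # Only after a successful restore do we swap the databases.
    # ALTER DATABASE ... RENAME fails if anyone is connected, so a busy
    # public DB stops the swap rather than breaking it.
    psql -c "ALTER DATABASE publicdb RENAME TO publicdb_old;"
    psql -c "ALTER DATABASE publicdb_new RENAME TO publicdb;"

    # Drop the old copy last, once the new one is live.
    dropdb publicdb_old
}
```

Because of `set -e`, a failed dump or restore leaves the current public database untouched; the old DB only goes away after everything else has succeeded. Just a sketch, of course -- happy to hear of a cleaner approach.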