On 2/07/2013 4:01pm, Simon Riggs wrote:
On 2 July 2013 05:49, Mike Dewhirst <mi...@dewhirst.com.au> wrote:
On Monday, July 1, 2013 10:32:53 PM UTC+10, si...@2ndquadrant.com wrote:
On 1 July 2013 13:02, Mike Dewhirst <mi...@dewhirst.com.au> wrote:
On 1/07/2013 9:35pm, Tom Evans wrote:
On Sun, Jun 30, 2013 at 11:24 PM, Mike Dewhirst <mi...@dewhirst.com.au> wrote:
mulianto and Avraham

Thanks for your suggestions. Dumping data isn't the entire problem. The
real problem is this:

There will be an *ongoing* need to add new data from tables in one
database to the same-named tables in another database on a remote
machine. Two of the tables are in an m2m relationship.
You can use direct access via "foreign tables", the name of the
Postgres distributed database feature.
http://www.postgresql.org/docs/devel/static/sql-createforeigntable.html
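As a rough illustration only (the host, database, user and table names
below are all invented, and this assumes the postgres_fdw wrapper that
ships with PostgreSQL 9.3), the setup could be scripted from Python
with psycopg2:

import psycopg2

# DDL that makes a table in the remote development database queryable
# from the production database. Every name here is hypothetical.
ddl = """
CREATE EXTENSION IF NOT EXISTS postgres_fdw;

CREATE SERVER dev_server
    FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'dev.example.com', dbname 'devdb');

CREATE USER MAPPING FOR CURRENT_USER
    SERVER dev_server
    OPTIONS (user 'devuser', password 'secret');

-- Selects against ref_material here are forwarded to devdb.
CREATE FOREIGN TABLE ref_material (
    id   integer,
    name text
) SERVER dev_server
  OPTIONS (schema_name 'public', table_name 'ref_material');
"""

conn = psycopg2.connect("dbname=proddb user=postgres")
with conn.cursor() as cur:
    cur.execute(ddl)
conn.commit()
conn.close()

After that, an ordinary SELECT against ref_material on the production
server reads live data from the development database.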
Simon
I read the sql-createforeigntable page
(http://www.postgresql.org/docs/devel/static/sql-createforeigntable.html)
and I think I know what a distributed database is versus a replicated
database. But I have no idea why I would choose one over the other. Can
you suggest?
Distributed access is more dynamic but likely somewhat slower when
access is required. Replication could be thought of as pre-caching the
data you want to see.
There is only one production database, one staging database and one
development database.
New reference data gets entered into the development database as
required for new features. The entire database is dumped and loaded over
the top of the staging database from time to time. We obviously can't do
that to the production database.
So it isn't really pre-caching, nor (perhaps) distribution, except that
such facilities might be used as "tools" for accomplishing selective
transfer of *only* the reference data from the development database to
the production database.
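For what it's worth, that kind of selective transfer might not need
anything beyond Django's own dumpdata/loaddata management commands. A
minimal sketch, assuming the reference tables live in a hypothetical
app called "refdata":

from django.core.management import call_command

# On the development machine: serialise only the refdata app's tables.
with open("refdata.json", "w") as fh:
    call_command("dumpdata", "refdata", indent=2, stdout=fh)

# Copy refdata.json across, then on the production machine:
# call_command("loaddata", "refdata.json")

Natural keys (dumpdata's --natural option; --natural-foreign in newer
Django) may help keep the m2m links stable between databases whose
auto-increment ids have diverged.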
In any case, I'm 99% sure I should "refactor" (if that's the right
word) the reference tables out of the database into a separate
database and get to that data via a Django router.
Why take them out and then bring them back in? Why not leave them where
they are and copy elsewhere?
Because reference information is fundamentally different from user data.
Existing entries won't change, but new ones will be added over time, and
the data is commonly referred to by many different users.
The reference data tables are self-contained. All relationships are
within that group of tables.
So it feels (99%) like a good idea to treat the reference data
separately. While updating reference data on the production server, I
would hate for an accident to interfere with user data.
Django database routing ought to make such separation relatively easy,
since everything (i.e. both databases) would be on the same Postgres
server.
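One rough shape such a router could take (the app label "refdata", the
alias "reference" and the settings path are invented for illustration):

# settings.py -- both aliases can point at the same Postgres server:
#   DATABASES = {"default": {...}, "reference": {...}}
#   DATABASE_ROUTERS = ["myproject.routers.ReferenceRouter"]

class ReferenceRouter(object):
    """Send every model in the 'refdata' app to the 'reference' db."""

    app_label = "refdata"

    def db_for_read(self, model, **hints):
        if model._meta.app_label == self.app_label:
            return "reference"
        return None  # fall through to the default database

    def db_for_write(self, model, **hints):
        if model._meta.app_label == self.app_label:
            return "reference"
        return None

    def allow_relation(self, obj1, obj2, **hints):
        # The reference tables are self-contained, so relations are only
        # allowed when both ends live in the refdata app.
        if (obj1._meta.app_label == self.app_label and
                obj2._meta.app_label == self.app_label):
            return True
        return None

    def allow_syncdb(self, db, model):
        # Keep refdata tables out of 'default' and all other tables out
        # of 'reference'. (This hook is named allow_migrate in Django 1.7+.)
        if db == "reference":
            return model._meta.app_label == self.app_label
        return model._meta.app_label != self.app_label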
Not sure how I'll do that just yet, but it has to come ahead of a
distribute/replicate solution.

Mike
--
Simon Riggs http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services