Yes, but the implication is that large databases probably don't update
every row between backup periods.
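To put rough numbers on it (purely illustrative, not from this thread): if a 500 GB database churns about 1% of its data per day, the WAL generated between backups is on the order of 5-10 GB per day, so archiving WAL on top of an occasional base backup moves far less data than dumping all 500 GB every night.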
On Thu, 17 May 2007, Ron Johnson wrote:
On 05/17/07 11:04, Jim C. Nasby wrote:
[snip]
Ultimately though, once your database gets past a certain size, you
really want to be using PITR and not pg_dump as your main recovery
strategy.
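For anyone following along, a minimal sketch of what that involves on 8.x (the archive path, backup label, and data directory here are examples, not anything from this thread):

    # postgresql.conf: have the server copy each completed WAL
    # segment to an archive area
    archive_command = 'cp %p /mnt/wal_archive/%f'

    # take a base backup while the server keeps running
    psql -c "SELECT pg_start_backup('weekly_base');"
    tar czf /mnt/backups/base.tar.gz /var/lib/pgsql/data
    psql -c "SELECT pg_stop_backup();"

    # to recover: restore the base backup, then let recovery.conf
    # replay the archived WAL up to the point of failure
    restore_command = 'cp /mnt/wal_archive/%f %p'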
But doesn't that just replay each transaction? It still has to maintain
the index pages during each update/delete/insert, and multiple UPDATE
statements against the same rows mean that you hit the same pages over
and over again.
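(A quick way to watch the logging that replay would repeat; the table
here is hypothetical, and pg_current_xlog_location() is the 8.2 name
for the WAL-position function:)

    -- note the WAL insert position before and after a change;
    -- each UPDATE logs the modified heap page plus whatever index
    -- pages it touched, and recovery replays exactly those records
    SELECT pg_current_xlog_location();
    UPDATE accounts SET balance = balance + 1 WHERE id = 42;
    SELECT pg_current_xlog_location();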
--
Ron Johnson, Jr.
Jefferson LA USA
Give a man a fish, and he eats for a day.
Hit him with a fish, and he goes away for good!