Okay, I see your point about the staging table. That's a good idea!
The only problem I see here is the transfer-to-archive-table process. As
you've correctly noticed, the system is more or less real-time and there
can be dozens of processes writing to the staging table, so I cannot see
how to make the transfer…
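For what it's worth, a minimal sketch of doing the move in one transaction, so that concurrent readers see each row either in the staging table or in the archive, never in both. The names (staging, archive, msg_time) are assumptions for illustration, the two tables are assumed to have identical column layouts, and it presumes writers only insert rows stamped with the current time, so nothing older than the cut-off can arrive between the two statements:

    BEGIN;

    -- copy everything older than the cut-off into the archive...
    INSERT INTO archive
    SELECT * FROM staging WHERE msg_time < '2009-08-08 00:00:00';

    -- ...then remove exactly the same rows from the staging table
    DELETE FROM staging WHERE msg_time < '2009-08-08 00:00:00';

    COMMIT;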
That may help with query speed (not a problem now), but we'll then
have to add a UNION over the daily staging table for the other 5% of
requests, right? And there would be a moment when a daily message is in
the archive table AND in the daily table (while it is being transferred from
the daily table to the archive).
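If it helps, the extra UNION can be hidden behind a view, so those 5% of requests keep querying a single name. A minimal sketch, assuming hypothetical names daily_staging and archive with identical column lists, plus the msg_time column assumed earlier:

    CREATE VIEW all_messages AS
        SELECT * FROM daily_staging
        UNION ALL
        SELECT * FROM archive;

    -- the ~5% of requests that need full history go through the view
    SELECT * FROM all_messages
    WHERE msg_time >= now() - interval '7 days';

UNION ALL is used because it is cheaper than UNION; it would show a row twice if the row were momentarily in both tables, which is exactly what the single-transaction move sketched above is meant to avoid.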
I do not see any way to normalize this table any further. Its size is 4 GB
for ~4M rows, i.e. about 1 KB per row, which I think is OK.
Also there are 2 indexes: one on date_time and one on a couple of service fields
(total index size is 250 MB now).
I think I'm going to partition by month (approx. 1M rows or 1 GB…
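A rough sketch of what month-based partitioning can look like with table inheritance (the usual approach on the 8.x series). All names below are assumptions; the real column list, cut-over dates and routing of new inserts would need to be adapted:

    -- child table holding one month of data
    CREATE TABLE archive_2009_08 (
        CHECK (date_time >= DATE '2009-08-01'
           AND date_time <  DATE '2009-09-01')
    ) INHERITS (archive);

    -- each partition carries its own copies of the indexes
    CREATE INDEX archive_2009_08_date_time_idx
        ON archive_2009_08 (date_time);

    -- with constraint exclusion on, the planner skips partitions whose
    -- CHECK constraint rules them out for a given date range
    SET constraint_exclusion = on;
    SELECT count(*) FROM archive WHERE date_time >= DATE '2009-08-10';

New rows then have to be routed into the right child table, either directly by the application or by an insert trigger on the parent.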
Hi to all,
Here is my typical configuration: 1 (or 2) GB of RAM, an HP ML 350 (or 150) series
server, SATA RAID, Linux.
I have one big table (called "archive") which contains short text messages
with plenty of additional service info.
Currently this table contains more than 4M rows for a period of 4.5
months…
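For concreteness, a sketch of what such a table and its two indexes might look like; every column name below is made up for illustration and is not the real schema:

    CREATE TABLE archive (
        id         bigserial PRIMARY KEY,
        date_time  timestamp NOT NULL,   -- when the message was logged
        message    text      NOT NULL,   -- the short text message itself
        service_a  integer,              -- "a couple of service fields"
        service_b  integer
        -- ...plus the rest of the service info columns
    );

    CREATE INDEX archive_date_time_idx ON archive (date_time);
    CREATE INDEX archive_service_idx   ON archive (service_a, service_b);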
I also have a question about warm standby replication.
What would be the best solution for a system with 2 DB servers (nodes), 1
database, and at most 10 seconds to switch between them (ready-to-switch time)?
Currently I'm using Slony, but it's kind of slow when doing a subscribe
after failover on the fa…
Tom Lane wrote:
Nickolay writes:
BUT it seems that, on rare occasions, this transaction is delayed in being
applied and the log entry is inserted in the wrong order:
ID timestamp
1 2009-08-08 00:00:00.111
2 2009-08-08 00:00:30.311
3 2009-08-08 00:00:00.211
Yep, that's right - sometimes f…
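For what it's worth, one way this kind of skew can arise is when the timestamp column defaults to now(), which PostgreSQL freezes at transaction start: a transaction that is held open before its INSERT commits gets a later id than rows inserted in the meantime, but an earlier timestamp. A minimal sketch of that effect under those assumptions (table and column names are made up, and this may well not be the actual cause here):

    CREATE TABLE msg_log (
        id        bigserial,
        ts_start  timestamp DEFAULT now(),             -- frozen at transaction start
        ts_actual timestamp DEFAULT clock_timestamp()  -- taken at statement execution
    );

    BEGIN;
    SELECT pg_sleep(30);                 -- simulate the delayed transaction
    INSERT INTO msg_log DEFAULT VALUES;  -- ts_start is now ~30 s behind ts_actual
    COMMIT;

Any row another session inserted during those 30 seconds gets a smaller id but a larger ts_start than this one, which is exactly the ordering shown above. Reading the log ORDER BY the timestamp (with id as a tie-breaker), or defaulting the column to clock_timestamp(), keeps it in actual insert order.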
Hello All,
I'm developing a specialized message switching system and I've chosen to
use PostgreSQL as the general tool to handle transactions and to store and manage
all the data.
This system has pretty strong timing requirements. For example, it must
process no fewer than 10 messages per second. FYI: mes…