On 9/01/2010 6:32 PM, Nickolay wrote:
Hi 2 all,
Here is my typical configuration: 1(2) GB of RAM, HP ML 350(150) series
server, SATA raid, Linux.
I have 1 big table (called "archive") which contains short text messages
with plenty of additional service info.
Currently this table contains more than 4M rows for a period of 4,5 months.
Okay, I see your point with the staging table. That's a good idea!
The only problem I see here is the transfer-to-archive-table process. As
you've correctly noticed, the system is kind of real-time and there
can be dozens of processes writing to the staging table; I cannot see
how to make the transfer ...
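A minimal single-transaction sketch of the transfer step being discussed (the table names daily_staging and archive and the cutoff predicate are assumptions, not from the thread):

```sql
-- Move yesterday's rows from the staging table to the archive in one
-- transaction. Concurrent writers inserting rows stamped with the
-- current time are untouched, since the predicate only selects rows
-- older than the cutoff (assuming writers do not backdate rows).
BEGIN;
INSERT INTO archive
    SELECT * FROM daily_staging WHERE date_time < current_date;
DELETE FROM daily_staging WHERE date_time < current_date;
COMMIT;
```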
Thanks for advice, Dave!
This saga ended in an unexpected way: the firewall died.
Since the replacement firewall was installed I have not seen any 3-second
connects. Well, there has been no real load so far, but I will keep checking.
Thanks to everyone who replied; it was very helpful.
Cheers,
Dmitri.
That may help with query speed (not a problem now), but we'll then
have to add a UNION over the daily staging table for the other 5% of
requests, right? And there would be a moment when a daily message is in
the archive table AND in the daily table (while transferring from the
daily table to the archive).
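A hedged sketch of the UNION shape being described (table names archive and daily_staging and the column list are assumptions). Using UNION rather than UNION ALL also hides the brief window in which a row exists in both tables, provided the selected columns identify a row uniquely:

```sql
-- Query recent messages from both tables; UNION deduplicates the
-- rows that are momentarily present in both during the transfer.
SELECT msg_id, date_time, msg_text FROM archive
 WHERE date_time >= current_date
UNION
SELECT msg_id, date_time, msg_text FROM daily_staging
 WHERE date_time >= current_date;
```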
Hi,
My first suggestion would be to build the index either on
parcel_id_code alone or on (parcel_id_code, id).
But I am not sure, because I am new to pg :)
cheers,
lefteris
On Sat, Jan 9, 2010 at 1:46 PM, Richard Neill wrote:
> Dear All,
>
> I'm trying to optimise the speed of some selects with the ...
You can also try:
    SELECT val FROM table ORDER BY val DESC LIMIT 1;
which is usually much quicker.
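Applied to the parcel query from this thread, the equivalent rewrite would look like the sketch below (table and column names are taken from the quoted query; the claim that this form reduces to a single backward index scan assumes an index with parcel_id_code leading and id trailing):

```sql
-- Equivalent to SELECT MAX(id) ... WHERE parcel_id_code = '43024',
-- but expressed as a sort-and-limit, which the planner of that era
-- more reliably turned into one backward scan of a
-- (parcel_id_code, id) index.
SELECT id
FROM   tbl_sort_report
WHERE  parcel_id_code = '43024'
ORDER  BY id DESC
LIMIT  1;
```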
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
On Sat, Jan 09, 2010 at 03:42:08PM +0300, Nickolay wrote:
> I do not see any way to normalize this table anymore. [...]
Dear All,
I'm trying to optimise the speed of some selects with the where condition:
WHERE id =
(SELECT MAX(id) FROM tbl_sort_report WHERE parcel_id_code='43024')
This is relatively slow, taking about 15-20ms, even though I have a
joint index on both fields:
CREATE INDEX testidx3 ON tbl_sort_report ...
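The original CREATE INDEX statement is cut off, so the column list below is an assumption; the key point is the column order. With parcel_id_code leading, the rows for one parcel are contiguous in the index and sorted by id, so the maximum can be read from one edge of that range:

```sql
-- A composite index suited to the query above: equality column
-- first, then the column being maximised.
CREATE INDEX testidx3 ON tbl_sort_report (parcel_id_code, id);

-- This can now be answered from the right edge of a single
-- index range, rather than scanning all rows for the parcel.
SELECT MAX(id) FROM tbl_sort_report WHERE parcel_id_code = '43024';
```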
I do not see any way to normalize this table anymore. Its size is 4 GB
for ~4M rows, i.e. ~1 KB per row, which I think is OK.
Also there are 2 indexes: one by date_time and one by a couple of service
fields (total index size is 250 MB now).
I think I'll be going to partition by months (approx. 1M rows or 1 GB per
month) ...
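A sketch of what monthly partitioning looked like on the PostgreSQL releases of that era (inheritance plus CHECK constraints; the partition and index names are assumptions, only archive and date_time come from the thread):

```sql
-- One child table per month, constrained to its date range.
CREATE TABLE archive_2010_01 (
    CHECK (date_time >= DATE '2010-01-01'
       AND date_time <  DATE '2010-02-01')
) INHERITS (archive);

CREATE INDEX archive_2010_01_date_time
    ON archive_2010_01 (date_time);

-- With constraint_exclusion = on, a query bounded on date_time
-- scans only the matching ~1M-row partition instead of all 4M rows.
```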
Nickolay wrote on 09.01.2010 11:24:
> it would be pretty hard with one table because of blocking
What do you mean by "because of blocking"?
Thomas
Maybe that 'one big table' needs something called 'normalisation'
first. See how much that will shed off. You might be surprised.
The partitioning needs to be done at some constant interval: of time,
in your case. Whatever suits you; I would suggest to use a rate
that will give you both ease of ...
Crank it up more and delay the checkpoints as much as possible during
these updates. 64 segments is already 1024 MB.
We have 425M rows; total table size is 78 GB, so we can imagine a
worst-case UPDATE write of less than 200 bytes * number of rows specified
in the update (is that logic correct?)
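As a rough postgresql.conf sketch of the settings being discussed (8.x-era parameter names; each WAL segment is 16 MB, so 64 segments is the 1024 MB mentioned above; the values are illustrative, not recommendations):

```sql
-- checkpoint_segments          = 64      -- 64 x 16 MB = 1024 MB of WAL
-- checkpoint_timeout           = 30min   -- delay time-based checkpoints
-- checkpoint_completion_target = 0.9     -- spread checkpoint I/O out
```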