On Sat, 2026-03-28 at 15:07 +0800, Hua W Peng wrote:
> I have a common table for telemetry data. The structure is:
> [25 columns]
> 
> Data records are growing by about 10 million every day, reaching 300 million 
> per month.
> In this case, even a simple COUNT(*) query becomes extremely slow, taking 
> about
> 7-8 minutes to finish.
> I am running PostgreSQL 14 on Ubuntu 22.04 with a 24GB shared buffer.
> And, though in our test env we have timescaledb enabled:
> 
> Triggers:
>     ts_insert_blocker BEFORE INSERT ON dms_data_gzdy FOR EACH ROW EXECUTE 
> FUNCTION _timescaledb_functions.insert_blocker()
> Number of child tables: 9 (Use \d+ to list them.)
> 
> But in the production env there is no TimescaleDB, and it cannot be
> installed there either.
> 
> Can you help me?

First, a test environment should be as similar to production as possible,
otherwise it cannot serve its purpose.

There is little you can do to speed up count(*); on a table that size it is
bound to be slow.
See https://www.cybertec-postgresql.com/en/postgresql-count-made-fast/
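
If an estimate is good enough, the planner's statistics give you an answer
instantly.  For instance (using the table name from your \d output;
reltuples is only as accurate as the last VACUUM or ANALYZE):

    SELECT reltuples::bigint AS estimated_rows
    FROM pg_class
    WHERE relname = 'dms_data_gzdy';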

But if counting all the rows is really your use case, you are doing something
wrong.  I suspect that your real problem is other queries.

The way to get good performance with large tables is to have your queries use
an index scan.  The indexes you need will depend on your queries, so without
knowing the queries, it is impossible to recommend anything.
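
Just to illustrate the idea (the column names here are invented, since you
only posted "[25 columns]"): if your typical query filters on a device and a
time range, an index like

    -- hypothetical columns; substitute your real ones
    CREATE INDEX ON dms_data_gzdy (device_id, recorded_at);

would let a query such as

    SELECT ...
    FROM dms_data_gzdy
    WHERE device_id = 42
      AND recorded_at >= now() - interval '1 day';

use an index scan instead of reading all 300 million rows.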

Partitioning is not primarily a measure for improving query performance,
but it would still be a smart idea, primarily to be able to delete old data
efficiently.  It doesn't matter if you use TimescaleDB for partitioning or
use PostgreSQL's support directly.
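
For example, with PostgreSQL's declarative range partitioning on a timestamp
column (again with an invented column name):

    CREATE TABLE dms_data_gzdy_part (LIKE dms_data_gzdy)
      PARTITION BY RANGE (recorded_at);

    CREATE TABLE dms_data_gzdy_2026_03
      PARTITION OF dms_data_gzdy_part
      FOR VALUES FROM ('2026-03-01') TO ('2026-04-01');

Then getting rid of a month of old data is just

    DROP TABLE dms_data_gzdy_2026_03;

rather than a slow DELETE that bloats the table.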

Yours,
Laurenz Albe
