On Wed, 2019-11-20 at 15:56 +0800, James(王旭) wrote:
> I am running a query that fetches about 10000000 records at a time, but the
> query is very slow, like "mission impossible".
> I am confident that these records should fit into my shared_buffers
> setting (20G), and my query runs entirely on my index, which is 19M x 100
> partitions in size; that index can also easily fit into shared_buffers.
> (Actually, I even created a new, smaller partial index and deleted the
> bigger old one.)
> 
> This situation is very disappointing. How can I make my queries much
> faster if my data grows beyond 10000000 records in one partition?
> I am using PostgreSQL 11.6.

There are no parameters that make queries faster wholesale.

If you need help with a query, please include the table definitions
and the EXPLAIN (ANALYZE, BUFFERS) output for the query.
Including a list of parameters you changed from the default is helpful too.
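For example, something like this (the table and column names below are just
placeholders; substitute your actual query):

    EXPLAIN (ANALYZE, BUFFERS)
    SELECT * FROM my_partitioned_table
    WHERE some_indexed_column = 42;

    -- settings changed from their built-in defaults
    SELECT name, setting, source
    FROM pg_settings
    WHERE source NOT IN ('default', 'override');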

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com
