Re: Declarative partitioning and automatically generated row-IDs using BIGSERIAL

2021-01-05 Thread Laurenz Albe
On Thu, 2020-12-31 at 17:38 +0100, Thorsten Schöning wrote:
> I have the following table containing 100+ million rows currently and
> which needs to be queried by "captured_at" a lot. That table stores
> rows for the last 6 years, but most of the queries focus on the last
> 15 months, 15 days or r

Re: Declarative partitioning and automatically generated row-IDs using BIGSERIAL

2020-12-31 Thread Michael Lewis
My apologies. You are correct. My brain may have already switched to holiday mode. Hopefully others will chime in shortly.

Re: Declarative partitioning and automatically generated row-IDs using BIGSERIAL

2020-12-31 Thread Thorsten Schöning
Good day Michael Lewis, on Thursday, 31 December 2020 at 19:28 you wrote:
> select
>   t.reloptions
> from pg_class t
> join pg_namespace n on n.oid = t.relnamespace
> where t.relname = 'clt_rec'
>   and n.nspname = 'public';
That outputs NULL, as it does for the other indexes I tested. Add
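For reference, a slightly extended version of the catalog query quoted above. This sketch also pulls in the table's indexes (via `pg_index`), since the thread mentions checking indexes too; a NULL `reloptions` means every storage parameter, including `fillfactor`, is at its default. The table name `clt_rec` is taken from the thread.

```sql
-- List storage parameters for public.clt_rec and its indexes.
-- NULL reloptions = no non-default options are set.
SELECT c.relname, c.relkind, c.reloptions
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE n.nspname = 'public'
  AND (c.relname = 'clt_rec'
       OR c.oid IN (SELECT indexrelid
                    FROM pg_index
                    WHERE indrelid = 'public.clt_rec'::regclass));
```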

Re: Declarative partitioning and automatically generated row-IDs using BIGSERIAL

2020-12-31 Thread Michael Lewis
On Thu, Dec 31, 2020 at 11:18 AM Thorsten Schöning wrote:
> Good day Michael Lewis,
> on Thursday, 31 December 2020 at 18:20 you wrote:
> > Why is your fillfactor so low?[...]
> I've just copied what my GUI tool pgModeler generated as SQL right
> now, that fill factor might have never

Re: Declarative partitioning and automatically generated row-IDs using BIGSERIAL

2020-12-31 Thread Thorsten Schöning
Good day Michael Lewis, on Thursday, 31 December 2020 at 18:20 you wrote:
> Why is your fillfactor so low?[...]
I've just copied what my GUI tool pgModeler generated as SQL right now; that fill factor might never have been applied at all.
> Perhaps a silly question, but do you have an in

Re: Declarative partitioning and automatically generated row-IDs using BIGSERIAL

2020-12-31 Thread Michael Lewis
Why is your fillfactor so low? That seems pretty crazy, especially for a table with only 4 fixed-width columns. 100 million rows with so little data in each row is not very much at all. You should be looking at other solutions before partitioning, I expect. Perhaps a silly question, but do
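If the low fillfactor really was an accident of the generated DDL, one way to undo it is sketched below. This is an assumption-laden example (table name from the thread; the default fillfactor for heap tables is 100), and note that changing the setting only affects newly written pages; existing pages keep their free space until the table is rewritten.

```sql
-- Drop the explicit fillfactor so the default (100 for heap tables)
-- applies to pages written from now on:
ALTER TABLE public.clt_rec RESET (fillfactor);

-- Repack existing pages by rewriting the table. VACUUM FULL takes an
-- ACCESS EXCLUSIVE lock for the duration, so schedule it accordingly.
VACUUM FULL public.clt_rec;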

Declarative partitioning and automatically generated row-IDs using BIGSERIAL

2020-12-31 Thread Thorsten Schöning
Hi all,

I have the following table containing 100+ million rows currently and which needs to be queried by "captured_at" a lot. That table stores rows for the last 6 years, but most of the queries focus on the last 15 months, 15 days or really only 15 minutes.

> CREATE TABLE public.clt_rec(
>
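Since the CREATE TABLE statement above is cut off, the following is only an illustrative sketch of what declarative range partitioning on "captured_at" could look like for such a table. The column list, types, and partition bounds are assumptions for illustration, not taken from the thread; a BIGSERIAL column would behave the same way as the identity column shown, since both are backed by a single sequence on the parent.

```sql
-- Sketch only: the real clt_rec definition is truncated in the thread,
-- so these columns are assumed for illustration.
CREATE TABLE public.clt_rec_part (
    id          bigint GENERATED BY DEFAULT AS IDENTITY,
    captured_at timestamptz NOT NULL
) PARTITION BY RANGE (captured_at);

-- One partition per year. The sequence behind "id" belongs to the
-- parent table, so generated IDs stay unique across all partitions.
CREATE TABLE public.clt_rec_part_2020
    PARTITION OF public.clt_rec_part
    FOR VALUES FROM ('2020-01-01') TO ('2021-01-01');
```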