t the query into chunks to reduce the IN()-statement list size
makes no measurable difference
- I don't want to use JSONB columns for b,c,d and e because future
changes of b,c,d or e's structure (e.g. new fields, changing a datatype)
are harder to achieve with JSONB and it lacks constraint checks on
insert (e.g. not null on column b.xy)
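The constraint-check point can be sketched in miniature. Below is a
hedged illustration using SQLite through Python (purely for
demonstration; the thread concerns PostgreSQL, and the table shapes are
made up): a plain NOT NULL column rejects a bad row on insert, while
the same missing field inside a document column goes unchecked.

```python
import sqlite3

con = sqlite3.connect(":memory:")

# Plain columns: the database enforces NOT NULL on insert.
con.execute("CREATE TABLE b (a_id INTEGER, xy TEXT NOT NULL)")
# Document column: fields inside the blob cannot carry such constraints.
con.execute("CREATE TABLE b_doc (a_id INTEGER, doc TEXT)")

try:
    con.execute("INSERT INTO b (a_id, xy) VALUES (1, NULL)")
    rejected = False
except sqlite3.IntegrityError:
    rejected = True  # the bad row never makes it into b

# The same "missing xy" slips into the document column unchecked.
con.execute("""INSERT INTO b_doc (a_id, doc) VALUES (1, '{"other": 1}')""")

print(rejected)  # True
```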
Kind regards and thank you for your time,
Nico Heller
P.S.: Sorry for the long list of "I don't want to do this"; some of
these options are ruled out by other requirements
On 12.08.2022 at 21:02, Rick Otten wrote:
On Fri, Aug 12, 2022 at 2:50 PM Nico Heller wrote:
Good day,
consider the following query:
WITH aggregation AS (
SELECT
a.*,
(SELECT array_agg(b.*) FROM b WHERE b.a_id = a.id
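The array_agg-in-a-correlated-subquery pattern quoted here can be
sketched as follows. This is a hedged illustration using SQLite through
Python (SQLite's json_group_array stands in for PostgreSQL's array_agg,
and the table contents are invented): each row of a runs one subquery
that aggregates its matching b rows into a single value.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE a (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE b (id INTEGER PRIMARY KEY, a_id INTEGER, val TEXT);
    INSERT INTO a VALUES (1, 'first'), (2, 'second');
    INSERT INTO b VALUES (10, 1, 'x'), (11, 1, 'y'), (12, 2, 'z');
""")

# One correlated subquery per row of a, aggregating the matching b rows,
# mirroring the (SELECT array_agg(b.*) FROM b WHERE b.a_id = a.id) shape.
rows = con.execute("""
    SELECT a.id,
           (SELECT json_group_array(b.val) FROM b WHERE b.a_id = a.id) AS bs
    FROM a
    ORDER BY a.id
""").fetchall()
print(rows)
```

The planner executes the subquery once per outer row (visible as a
SubPlan with loops equal to the outer row count), which is exactly the
behavior discussed later in the thread.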
On 12.08.2022 at 21:15, Rick Otten wrote:
On Fri, Aug 12, 2022 at 3:07 PM Nico Heller wrote:
On 12.08.2022 at 21:02, Rick Otten wrote:
On Fri, Aug 12, 2022 at 2:50 PM Nico Heller wrote:
d: (a_id = a.id)
SubPlan 4
  ->  Aggregate  (cost=1.27..1.28 rows=1 width=32) (actual time=0.005..0.005 rows=1 loops=200)
        ->  Seq Scan on e  (cost=0.00..1.26 rows=1 width=76) (actual time=0.004..0.004 rows=0 loops=200)
              Filter: (a_id = a.id)
Rows Re
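For reading the plan fragment above: the "actual time" figures in
EXPLAIN ANALYZE are per execution of the node, and loops=200 means the
subplan ran once per outer row, so the per-loop time and loop count
multiply out. A quick back-of-the-envelope check of the numbers quoted
(not part of the original mail):

```python
# "actual time" in EXPLAIN ANALYZE is measured per loop, so a node's
# total contribution is its per-loop time multiplied by loops.
# For the Aggregate node quoted above:
per_loop_ms = 0.005   # upper "actual time" bound per loop
loops = 200           # the subplan ran once per outer row of a
total_ms = per_loop_ms * loops
print(total_ms)  # about 1 ms across all 200 executions
```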