On Tue, 2023-06-20 at 08:13 +0200, Les wrote:
> I'm aware of TOAST and how it works. I was referring to it ("I think that
> it should be as large as possible, without hitting the TOAST.") I have
> designed a separate "block" table specifically to avoid storing binary
> data in the TOAST.
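A minimal sketch of what such a "block" table might look like, assuming a block size kept below the roughly 2 kB threshold at which PostgreSQL starts TOASTing a row (all table and column names here are illustrative, not from the thread):

```sql
-- Hypothetical "block" table: block payloads small enough to stay inline,
-- plus a linking table mapping files to their ordered blocks.
CREATE TABLE file_block (
    hash_key bytea PRIMARY KEY,   -- e.g. a digest of the block contents
    data     bytea NOT NULL CHECK (octet_length(data) <= 1900)
);

CREATE TABLE file_entry (
    file_id  bigint NOT NULL,
    block_no int    NOT NULL,
    hash_key bytea  NOT NULL REFERENCES file_block (hash_key),
    PRIMARY KEY (file_id, block_no)
);
```

The CHECK constraint is only a guess at a safe inline size; the exact TOAST threshold depends on page size and row overhead.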
David G. Johnston wrote (on Mon, 19 Jun 2023, 22:30):
> On Mon, Jun 19, 2023 at 1:05 PM Les wrote:
>
>> AFAIK PostgreSQL does not allow a row to occupy multiple blocks.
>>
>
> Your plan is going to heavily involve out-of-band storage. Please read up
> on it here:
>
> https://www.postgresql.org/docs/current/storage-toast.html
On 6/19/23 17:45, nicolas paris wrote:
>> But you wrote that in both cases the query is:
>
> that was indeed yet another typo, hope to do better in the future.
>
>
>> I'm willing to continue to investigate, but only if you prepare a
>> reproducer,
>
> Thanks for your starter script. Please
On Mon, Jun 19, 2023 at 1:05 PM Les wrote:
> AFAIK PostgreSQL does not allow a row to occupy multiple blocks.
>
Your plan is going to heavily involve out-of-band storage. Please read up
on it here:
https://www.postgresql.org/docs/current/storage-toast.html
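As a side note, the TOAST strategy can be tuned per column with ALTER TABLE; a sketch, using the hypothetical table names from earlier in the thread:

```sql
-- Keep the column uncompressed but still allow out-of-line storage:
ALTER TABLE file_block ALTER COLUMN data SET STORAGE EXTERNAL;

-- PLAIN would forbid out-of-line storage entirely, but inserts then fail
-- once a row no longer fits in a heap page, so use it with care:
-- ALTER TABLE file_block ALTER COLUMN data SET STORAGE PLAIN;
```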
I'm not looking for a definitive ans
Dear fellow list members,
I'm in the process of implementing a file storage system that is based on
PostgreSQL and streaming replication. There will possibly be many similar
files stored. I would like to implement block-level deduplication: each
file consists of a series of blocks, and each uni
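The block-level deduplication described above could be sketched as a content-addressed insert, assuming PostgreSQL 11+ (for the built-in sha256()) and the illustrative table from before:

```sql
-- Store a block only if its content hash has not been seen yet;
-- $1 is the raw block bytea supplied by the application.
INSERT INTO file_block (hash_key, data)
VALUES (sha256($1), $1)
ON CONFLICT (hash_key) DO NOTHING;
```

The application would then record (file_id, block_no, hash) in the linking table regardless of whether the block itself was new.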
No it is not. But do you think there is an impact here?
On 18/06/2023 at 23:23, msal...@msym.fr wrote:
Hi,
Do you really need to do “select *”?
In other words, is it necessary to have all columns in the result?
/Michel SALAIS/
*From:* benoit
*Sent:* Monday, 12 June 2023 23:35
*To:* Chri
> But you wrote that in both cases the query is:
that was indeed yet another typo, hope to do better in the future.
> I'm willing to continue to investigate, but only if you prepare a
> reproducer,
Thanks for your starter script. Please find attached 2 scripts which
now illustrate two trouble
On 6/19/23 14:20, nicolas paris wrote:
>> IMHO the thing that breaks it is the ORDER BY in the merge, which
>> likely
>> acts as an optimization fence and prevents all sorts of smart things
>> including the partitionwise join. I'd bet that if Nicolas replaces
>>
>> MERGE INTO "goliath" ca
>> US
> IMHO the thing that breaks it is the ORDER BY in the merge, which
> likely
> acts as an optimization fence and prevents all sorts of smart things
> including the partitionwise join. I'd bet that if Nicolas replaces
>
> MERGE INTO "goliath" ca
> USING (SELECT * FROM "david" ORDER BY "list_id"
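A sketch of the suggested rewrite: dropping the ORDER BY from the source subquery so the planner is free to use a partitionwise join. The join condition and WHEN actions below are assumptions filled in from context, not the thread's actual statement, and MERGE itself requires PostgreSQL 15+:

```sql
-- Without the ORDER BY fence, the planner can match partitions of
-- "david" against partitions of "goliath" directly.
MERGE INTO "goliath" ca
USING (SELECT * FROM "david") d            -- no ORDER BY here
   ON ca."list_id"  = d."list_id"
  AND ca."hash_key" = d."hash_key"
WHEN MATCHED THEN
    UPDATE SET ...                          -- payload columns, as in the thread
WHEN NOT MATCHED THEN
    INSERT VALUES (...);
```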
On Mon, 2023-06-19 at 13:34 +0200, Tomas Vondra wrote:
>
>
> On 6/18/23 22:57, nicolas paris wrote:
> > > ...
> > > Well, I kinda doubt you have 17023331531230 rows (not even
> > > physically
> > > possible with 2TB disk), so that's immediately suspicious.
> >
> > Below is the full plan for the
On 6/19/23 09:46, Alvaro Herrera wrote:
> I came here to talk about partitionwise join, but then noticed you have
> already thought of that:
>
> On 2023-Jun-18, nicolas paris wrote:
>
>> Note that both plans come from the same hash-partitioned table with
>> 100 parts, with a unique index on
On 6/18/23 22:57, nicolas paris wrote:
>> ...
>> Well, I kinda doubt you have 17023331531230 rows (not even physically
>> possible with 2TB disk), so that's immediately suspicious.
>
> Below is the full plan for the strategy 2.1 (Indeed the previous email
> plan was truncated and wrong, sorry f
I came here to talk about partitionwise join, but then noticed you have
already thought of that:
On 2023-Jun-18, nicolas paris wrote:
> Note that both plans come from the same hash-partitioned table with
> 100 parts, with a unique index on the list_id + hash_key. For strategy
> 2.1, I turned o
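For reference, the setup described (hash partitioning with a unique index on list_id + hash_key) might look like the following sketch; column types and the payload are assumptions:

```sql
CREATE TABLE goliath (
    list_id  bigint NOT NULL,
    hash_key bigint NOT NULL
    -- payload columns elided
) PARTITION BY HASH (list_id);

-- 100 partitions as in the thread; one shown, the rest are analogous:
CREATE TABLE goliath_p0 PARTITION OF goliath
    FOR VALUES WITH (MODULUS 100, REMAINDER 0);

-- The unique index must include the partition key (list_id), which it does:
CREATE UNIQUE INDEX ON goliath (list_id, hash_key);

-- Partitionwise join is off by default and must be enabled explicitly:
SET enable_partitionwise_join = on;
```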