On Fri, 5 Jul 2024 at 20:53, Lok P wrote:
> However out of curiosity, if the TOASTed/compressed component or column which
> is JSON itself goes beyond 8k post compression, will it break then?
No. The size limit of a varlena field such as TEXT, JSON or JSONB is
1GB. See "field size" in [1].
Ple…
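(For what it's worth, pg_column_size() reports the stored, post-compression
size of a datum, so you can check how close a value actually is to those
limits; the table and column names below are made up for illustration:)

  -- compressed on-disk size vs. raw serialized length of a jsonb value
  SELECT pg_column_size(payload)     AS compressed_bytes,
         octet_length(payload::text) AS raw_text_bytes
  FROM   transactions
  WHERE  txn_id = 12345;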
On Fri, 5 Jul 2024, 1:44 pm David Rowley wrote:
> On Fri, 5 Jul 2024 at 19:53, Lok P wrote:
> > As David suggested, it breaks if a row exceeds the 8k limit, i.e. a single
> > page size. Will that still hold true if we have a column with JSON in it?
>
> You wouldn't be at risk of the same tuple length…
On Fri, 5 Jul 2024 at 19:53, Lok P wrote:
> As David suggested, it breaks if a row exceeds the 8k limit, i.e. a single
> page size. Will that still hold true if we have a column with JSON in it?
You wouldn't be at risk of the same tuple length problem if you
reduced the column count and stored the remaining attributes in a JSON
column: a large varlena value is compressed and, if still too big, moved
out of line into the table's TOAST storage rather than kept in the 8k page.
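As a minimal sketch of that behaviour (table name hypothetical), a jsonb
value well over 8kB inserts without trouble because it is compressed and,
if needed, moved out of line:

  CREATE TABLE toast_demo (id int PRIMARY KEY, payload jsonb);

  -- ~500 keys of hard-to-compress text: far more than one 8kB page.
  -- The row still fits because the jsonb datum is TOASTed out of line.
  INSERT INTO toast_demo
  SELECT 1, jsonb_object_agg('key_' || g, md5(g::text))
  FROM   generate_series(1, 500) AS g;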
Some folks in the team suggested keeping the key business attributes, i.e.
the frequently queried attributes, in individual columns, and clubbing the
others into a single JSON column in the same table. Is that advisable, or
can any issues occur with this approach? Also, I am not sure how effectively
Postgres processes JSON…
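For what that hybrid design could look like, here is a sketch (all names
invented, and the jsonb_path_ops GIN index is one common choice for
containment queries, not the only option):

  CREATE TABLE txn_hybrid (
      txn_id   bigint  PRIMARY KEY,
      txn_date date    NOT NULL,     -- frequently queried: real columns
      amount   numeric NOT NULL,
      attrs    jsonb   NOT NULL      -- the long tail of rarely used attributes
  );

  -- Supports @> containment searches into the JSON attributes.
  CREATE INDEX ON txn_hybrid USING gin (attrs jsonb_path_ops);

  -- Mixing a real column with a JSON attribute in one query:
  SELECT txn_id, amount
  FROM   txn_hybrid
  WHERE  txn_date = '2024-07-04'
  AND    attrs @> '{"channel": "mobile"}';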
On Fri, Jul 5, 2024 at 10:45 AM Guyren Howe wrote:
> On Jul 4, 2024, at 22:07, Lok P wrote:
>
> If you stick to the principle of grouping columns in a table when you use
> those columns together, you should be good.
>
> Note that you might want to split up the “parent” table if that naturally…
>
On Thursday, July 4, 2024, Lok P wrote:
>
> But do you also suggest keeping those table pieces related to each other
> through the same primary key?
>
>
Yes, every row produced from the input data “row” should get the same ID
associated with it - either as an entire PK or a component of a
multi-column key.
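As a sketch of that shared-ID layout (hypothetical names, one addendum
table shown; a strictly 1:1 split keyed on the same ID):

  CREATE TABLE txn_main (
      txn_id   bigint  PRIMARY KEY,
      txn_date date    NOT NULL,
      amount   numeric NOT NULL
  );

  -- The addendum reuses the same ID as both its primary key and a
  -- foreign key back to the main table, keeping the split 1:1.
  CREATE TABLE txn_settlement (
      txn_id         bigint PRIMARY KEY REFERENCES txn_main (txn_id),
      settled_on     date,
      settlement_ref text
  );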
On Fri, 5 Jul 2024 at 17:07, Lok P wrote:
> Also, I understand the technical limitation of the max number of columns
> per table is ~1600. But would you advise restricting us to some much lower
> number, long before reaching that limit, so that we will not face any
> anomalies when we grow in future?
On Fri, Jul 5, 2024 at 1:26 AM David G. Johnston wrote:
> On Thu, Jul 4, 2024 at 12:38 PM Lok P wrote:
>
>>
>> Should we break the single transaction into multiple tables like one main
>> table and other addenda tables with the same primary key to join and fetch
>> the results wherever necessary?
Ultimately, the way you should store the data depends on how you will use it.
When you retrieve these values, what are the different ways you’ll be using
them?
Normalised representations are more flexible, and the pragmatic,
performance-based consideration is that all the values in a row are always
read together, whether or not the query needs them.
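With a split like the txn_main / txn_settlement sketch earlier in the
thread, most queries read only the main table and join an addendum in
only when a report actually needs it, e.g.:

  -- Settlement details joined in only for the queries that need them.
  SELECT m.txn_id, m.amount, s.settled_on
  FROM   txn_main m
  LEFT JOIN txn_settlement s USING (txn_id)
  WHERE  m.txn_date >= '2024-07-01';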
On Thu, Jul 4, 2024 at 3:38 PM Lok P wrote:
> Hello,
> In one of the applications we receive transactions in message/event
> format and also in files; they are then parsed and stored into the
> relational database. The number of attributes/columns each transaction
> has is ~900+.
On 7/4/24 15:37, Lok P wrote:
> Or say, to what maximum number of columns per table should we restrict
> ourselves? Should we break the single transaction into multiple tables
> like one main table and other addenda tables with the same primary key
> to join and fetch the results wherever necessary?
900 columns…
On Thu, Jul 4, 2024 at 12:38 PM Lok P wrote:
>
> Should we break the single transaction into multiple tables like one main
> table and other addenda tables with the same primary key to join and fetch
> the results wherever necessary?
>
>
I would say yes. Find a way to logically group sets of columns…
Hello,
In one of the applications we receive transactions in message/event
format and also in files; they are then parsed and stored into the
relational database. The number of attributes/columns each transaction
has is ~900+. Logically they are all part of one single transaction and s…