On 4/25/15 1:19 PM, Bruce Momjian wrote:
> Note if you are storing a table with rows that exceed 2KB in size
> (aggregate size of each row) then the "Maximum number of rows in a
> table" may be limited to 4 Billion, see TOAST.

That's not accurate, though; you could be limited to far less than 4
billion rows: every out-of-line value, not every row, consumes an OID
in the table's TOAST table.
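To see why, note that each value stored out of line consumes one OID
(the chunk_id) in the owning table's TOAST table. A minimal sketch for
inspecting this, assuming a table named "mytable" (a placeholder):

    -- Find the TOAST table backing a given table, if it has one:
    SELECT reltoastrelid::regclass AS toast_table
    FROM pg_class
    WHERE oid = 'mytable'::regclass;

    -- Count the OIDs already consumed; substitute the TOAST table name
    -- returned above (pg_toast.pg_toast_16384 is a made-up example):
    SELECT count(DISTINCT chunk_id) AS toast_oids_used
    FROM pg_toast.pg_toast_16384;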
* Álvaro Hernández Tortosa (a...@8kdata.com) wrote:
> It's worth documenting, but also, as I said, maybe also fixing them, so
> that if they really do show up three years from now, the solution is
> already in production (rather than still in a patching state).

With the proliferation of JSON usage in PG thanks to [...]
On 4/24/15 7:11 PM, Álvaro Hernández Tortosa wrote:
> On 24/04/15 05:24, Tom Lane wrote:
>> ...
>> TBH, I've got very little enthusiasm for fixing this given the number
>> of reports of trouble from the field, which so far as I recall is zero.
>> Álvaro's case came up through intentionally trying to create a [...]
On Thu, Apr 23, 2015 at 11:24 PM, Tom Lane wrote:
> Having said all that, if we did try to fix it today, I'd imagine changing
> TOAST value identifiers to int64 and inventing a new TOAST pointer format
> for use when 32 bits isn't wide enough for the ID. But I think we're best
> advised to hold off [...]
* Bruce Momjian (br...@momjian.us) wrote:
> On Sun, Feb 1, 2015 at 03:54:03PM +0100, Álvaro Hernández Tortosa wrote:
>> The problem here is that performance degrades exponentially, or worse.
>> Speaking here from experience, we already tested this for a very
>> similar case (table creation, [...]
On 2/3/15 5:27 PM, Jim Nasby wrote:
> On 2/3/15 9:50 AM, David Steele wrote:
>> EXTENDED is the default for most TOAST-able types and is still subject
>> to TOAST_TUPLE_THRESHOLD, which is normally 2K. EXTERNAL is the same,
>> but with no compression.
>>
>> See: http://www.postgresql.org/docs/9.4/sta [...]
On 2/3/15 10:01 AM, José Luis Tallón wrote:
> Hmmm... ALTER COLUMN ... SET STORAGE EXTERNAL / SET STORAGE EXTENDED?
>
> From http://www.postgresql.org/docs/9.4/static/sql-altertable.html :
> ALTER [ COLUMN ] column_name SET STORAGE { PLAIN | EXTERNAL | EXTENDED | MAIN }
>
> This would do w [...]
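For reference, here is the quoted command in a minimal, runnable form
(the table and column names are made up):

    CREATE TABLE blobs (id serial PRIMARY KEY, payload bytea);

    -- EXTERNAL: allow out-of-line storage but skip compression.
    -- (EXTENDED, the default for bytea, compresses first and only then
    -- moves the value out of line.)
    ALTER TABLE blobs ALTER COLUMN payload SET STORAGE EXTERNAL;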
On 02/03/2015 03:44 AM, Jim Nasby wrote:
> [snip]

The alternative could be some "long LOB" ("HugeObject"?) using the
equivalent of "serial8", whereas regular LOBs would use "serial4".

Well, it depends on how we did this. We could (for example) add a
field to pg_class that determines what type to [...]
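The serial4 / serial8 analogy, at SQL level (purely illustrative;
"hugeobject" is a made-up name, not a proposed catalog):

    -- serial is backed by a 32-bit integer (~2.1e9 ids at most);
    -- bigserial by a 64-bit integer (~9.2e18 ids):
    CREATE TABLE lob_like   (id serial    PRIMARY KEY, data bytea);
    CREATE TABLE hugeobject (id bigserial PRIMARY KEY, data bytea);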
On 2/2/15 3:50 PM, José Luis Tallón wrote:
> Hmm... 2^32 times approx. 2 kB (as per the usual heuristic of ~4 rows
> per heap page) is 8796093022208 bytes (~8.8e12)...
> which results in 8192 1 GB segments :O
> Looks like partitioning might be needed much sooner than that (if only
> for index efficiency reasons).
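The arithmetic can be sanity-checked directly in SQL:

    -- 2^32 out-of-line values of ~2 kB each:
    SELECT (2^32 * 2048)::bigint AS bytes,         -- 8796093022208
           pg_size_pretty((2^32 * 2048)::bigint);  -- "8192 GB", i.e. 8 TB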
Roger Pack writes:
> On 2/2/15, José Luis Tallón wrote:
>> So yes, using one sequence per TOAST table should help.
>> Combined with the new SequenceAMs / sequence implementation being
>> proposed (specifically: one file for all sequences in a certain
>> tablespace), this should scale much better [...]
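The idea, sketched at SQL level only (the real change would live inside
the backend; the sequence name is hypothetical): a dedicated counter per
TOAST table means one table's ID consumption no longer burns through the
OID counter shared with every other table.

    CREATE SEQUENCE my_toast_ids MINVALUE 1 MAXVALUE 4294967295;
    SELECT nextval('my_toast_ids');  -- private to this one TOAST table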
On 01/31/2015 12:25 AM, Jim Nasby wrote:
> [snip]
> It's a bit more complex than that. First, TOAST isn't limited to bytea;
> it holds for ALL varlena fields in a table that are allowed to store
> externally. Second, the limit is actually per-table: every table gets
> its own TOAST table, and each TOAST [...]
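A quick way to see which columns of a table can be stored externally
(the table name is a placeholder):

    -- attstorage: 'p' = PLAIN (never toasted), 'x' = EXTENDED,
    -- 'e' = EXTERNAL, 'm' = MAIN. Anything but 'p' can end up out of
    -- line, MAIN only as a last resort.
    SELECT attname, attstorage
    FROM pg_attribute
    WHERE attrelid = 'mytable'::regclass
      AND attnum > 0 AND NOT attisdropped;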
On 31/01/15 14:55, Roger Pack wrote:
> [...]
> Now, the reality is that GetNewOidWithIndex() is going to keep
> incrementing the global OID counter until it finds an OID that isn't in
> the TOAST table. That means that if you actually get anywhere close to
> using 4B OIDs, you're going to become extremely [...]
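The retry behavior being described, sketched in plpgsql rather than the
real C (GetNewOidWithIndex() lives in src/backend/catalog/catalog.c; the
sequence and the "id" column here are hypothetical stand-ins):

    CREATE SEQUENCE shared_counter;  -- stand-in for the global OID counter

    CREATE OR REPLACE FUNCTION new_unique_id(tbl regclass) RETURNS bigint
    LANGUAGE plpgsql AS $$
    DECLARE
        candidate bigint;
        taken     boolean;
    BEGIN
        LOOP
            candidate := nextval('shared_counter');
            EXECUTE format('SELECT EXISTS (SELECT 1 FROM %s WHERE id = $1)',
                           tbl)
                INTO taken USING candidate;
            -- The denser the already-used range, the more iterations this
            -- loop needs: that is the slowdown near 4B consumed OIDs.
            EXIT WHEN NOT taken;
        END LOOP;
        RETURN candidate;
    END;
    $$;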
Oops, forgot to forward to the list (suggestion/feature request to the
admins of the various pg lists: make the default "reply to" go to the
list, not the sender, if at all possible).

Response below:

On 1/30/15, Jim Nasby wrote:
> On 1/30/15 11:54 AM, Roger Pack wrote:
>> On 1/29/15, Roger [...]
On 1/30/15 11:54 AM, Roger Pack wrote:
> On 1/29/15, Roger Pack wrote:
>> Hello. I see on this page a mention of basically a 4B row limit for
>> tables that have BLOBs.
>
> Oops, I meant for BYTEA or TEXT columns, but it's possible the
> reasoning is the same...

It only applies to large objects, not bytea or text.
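Large objects are the case where each stored object consumes one OID
from the shared 32-bit space, which is where the 4B ceiling comes from:

    -- Each call allocates a fresh OID for a new large object
    -- (passing 0 lets the server pick the OID):
    SELECT lo_create(0);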
> It only applies to large objects, not bytea or text.

OK, I think [...]