On 1/29/15, Steve Atkins wrote:
>
> On Jan 29, 2015, at 9:53 AM, Roger Pack wrote:
>
>> On 1/29/15, Roger Pack wrote:
>>> Hello. I see on this page a mention of basically a 4B row limit for
>>> tables that have BLOBs
>>
>> Oops I meant
Forgot to reply all on this one; many thanks to Steve, Adrian, and Bill
for their answers.
On 1/29/15, Roger Pack wrote:
> Hello. I see on this page a mention of basically a 4B row limit for
> tables that have BLOBs
Oops I meant for BYTEA or TEXT columns, but it's possible the
reasoning is the same...
> https://wiki.postgresql.org/wiki/BinaryFilesInDB
Hello. I see on this page a mention of basically a 4B row limit for
tables that have BLOBs:
https://wiki.postgresql.org/wiki/BinaryFilesInDB
Is this fact mentioned in the documentation anywhere? Is there an
official source for this? (If not, maybe consider this a feature
request to mention it in the official docs.)
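For background (my own summary, not an official statement): the 4B figure comes from PostgreSQL's 32-bit OIDs. Each out-of-line value that TOAST moves into a table's toast relation is identified by an OID chunk_id, so a single table can hold at most roughly 4 billion toasted values; large objects in pg_largeobject are addressed by OID the same way. A rough way to inspect this (the table name "mytable" and the toast relation name are placeholders):

```sql
-- Find the TOAST relation backing a given table:
select reltoastrelid::regclass as toast_table
from pg_class
where relname = 'mytable';

-- Count the out-of-line values currently stored; each distinct
-- chunk_id is one TOASTed value drawn from the 32-bit OID space:
select count(distinct chunk_id) from pg_toast.pg_toast_12345;  -- use the name found above
```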
Hello.
As a note, I ran into the following today: doing a select distinct is fast,
but doing a count distinct is significantly slower.
Assume a table "issue" with a column nodename character varying(64) and 7.5M
rows:
select distinct substring(nodename from 1 for 9) from issue;
-- 5.8s
select count(distinct substring(nodename from 1 for 9)) from issue;
-- significantly slower
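A workaround I've seen suggested (a sketch, under the assumption that count(distinct ...) always sorts its input while a plain distinct is free to hash-aggregate) is to count over a distinct subquery:

```sql
-- Same result as count(distinct ...), but lets the planner
-- use a HashAggregate for the inner distinct step:
select count(*)
from (select distinct substring(nodename from 1 for 9) from issue) as t;
```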
Hello.
I was trying to get postgres to return the "correct" number of rows
inserted for batch inserts to a partitioned table (using the triggers
suggested here
http://www.postgresql.org/docs/9.1/static/ddl-partitioning.html results in
it always returning 0 by default).
What I ideally wanted it to do was report the true number of rows inserted.
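For reference, a minimal sketch of the trigger approach from that page (table and trigger names are illustrative): the BEFORE INSERT trigger redirects each row into a child table and returns NULL, so no row is ever inserted into the parent, and the command tag comes back as INSERT 0 0 regardless of how many rows went into the children:

```sql
create table measurement (logdate date, value int);
create table measurement_2015 (
    check (logdate >= date '2015-01-01' and logdate < date '2016-01-01')
) inherits (measurement);

create or replace function measurement_insert_trigger()
returns trigger as $$
begin
    insert into measurement_2015 values (new.*);
    return null;  -- row never reaches the parent table,
                  -- so the reported insert count stays 0
end;
$$ language plpgsql;

create trigger insert_measurement_trigger
    before insert on measurement
    for each row execute procedure measurement_insert_trigger();
```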