""Dann Corbit"" <[EMAIL PROTECTED]> wrote:
> /*
> ** This will generate a 28 megabyte SQL script.
> ** 1600 table definitions will be created for tables
> ** with from 1 to 1600 columns.
> */
That's easy; now you have to run a real query, a real VACUUM and a real
REINDEX on it.
Regards
Gaetano Mendola
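For anyone who wants to reproduce the test, here is a minimal sketch of such a
generator in C (the file, table and column names are made up for illustration;
Dann's actual script may well differ):

/*
** Hypothetical sketch: emits CREATE TABLE statements for tables t1..t1600,
** where table tN has N integer columns (c1..cN).
*/
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    FILE *f = fopen("widetables.sql", "w");
    if (f == NULL) {
        perror("widetables.sql");
        return EXIT_FAILURE;
    }
    for (int tab = 1; tab <= 1600; tab++) {
        fprintf(f, "CREATE TABLE t%d (", tab);
        for (int col = 1; col <= tab; col++)
            fprintf(f, "%sc%d integer", col > 1 ? ", " : "", col);
        fprintf(f, ");\n");
    }
    fclose(f);
    return EXIT_SUCCESS;
}

Feeding the resulting file to psql exercises the 1600-column limit; as Gaetano
points out, querying, vacuuming and reindexing the result is the interesting part.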
-On [20030909 23:02], Andrew Dunstan ([EMAIL PROTECTED]) wrote:
>They must be very big images or there must be an awful lot of them :-)
*grin*
I was thinking more of organizations such as NASA and commercial
entities storing satellite images in databases.
--
Jeroen Ruigrok van der Werven / asmodai
Jeroen Ruigrok/asmodai wrote:
At work right now I have a bunch of 2-3 TB databases using Oracle 8.
We're expected to be using 60 TB in total storage about 2 years down the
road (right now we're using about 20).
I guess GIS databases and image databases might be the ones most concerned
about limits like these.
> -----Original Message-----
> From: Jeroen Ruigrok/asmodai [mailto:[EMAIL PROTECTED]
> Sent: Tuesday, September 09, 2003 1:23 PM
> To: Bruce Momjian
> Cc: Tatsuo Ishii; [EMAIL PROTECTED]; [EMAIL PROTECTED]
> Subject: Re: [HACKERS] Maximum table size
>
>
> -On [20030909 20:32], Bruce Momjian ([EMAIL PROTECTED]) wrote:
From: "Gaetano Mendola" <[EMAIL PROTECTED]>
> Why this? Just because bigger is better? I agree with Tom Lane: it's
> better to underpromise than overpromise.
My $0.02:
You are talking about pg's theoretical limits.
Why not add to the docs some information about the lack of resources
for testing these limits?
Jeroen Ruigrok/asmodai <[EMAIL PROTECTED]> writes:
> -On [20030909 20:32], Bruce Momjian ([EMAIL PROTECTED]) wrote:
>> I know Tom is concerned because we haven't tested it, but I don't think
>> anyone has tested 16TB either, nor our 1600-column limit.
> The 1600 column limit should be easy to test
-On [20030909 20:32], Bruce Momjian ([EMAIL PROTECTED]) wrote:
>I know Tom is concerned because we haven't tested it, but I don't think
>anyone has tested 16TB either, nor our 1600-column limit.
If I had the space free on my SAN right now I'd try it.
The 1600 column limit should be easy to test.
On Tue, 9 Sep 2003 14:25:19 -0400 (EDT), [EMAIL PROTECTED] (Bruce
Momjian) wrote:
>Tatsuo Ishii wrote:
>> > Tom Lane wrote:
>> > > Bruce Momjian <[EMAIL PROTECTED]> writes:
>> > > > Is our maximum table size limited by the maximum block number?
>> > >
>> > > Certainly.
>> > >
>> > > > Is the 16TB number a hold-over from when we weren't sure block number
>> > > > was unsigned, though now we are pretty sure it is handled as unsigned
>> > > > consistently?
Tatsuo Ishii wrote:
> > Tom Lane wrote:
> > > Bruce Momjian <[EMAIL PROTECTED]> writes:
> > > > Is our maximum table size limited by the maximum block number?
> > >
> > > Certainly.
> > >
> > > > Is the 16TB number a hold-over from when we weren't sure block number
> > > > was unsigned, though now we are pretty sure it is handled as unsigned
> > > > consistently?
On Tue, Sep 09, 2003 at 02:04:43AM -0400, Tom Lane wrote:
> It's a holdover. As to how certain we are that all the
> signed-vs-unsigned bugs are fixed, who have you heard from running a
> greater-than-16Tb table? And how often have they done CLUSTER, REINDEX,
> or even VACUUM FULL on it? AFAIK
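To illustrate the class of bug Tom is alluding to: with the default 8 kB block
size, a table crosses 16 TB exactly when its block numbers exceed 0x7FFFFFFF,
so any code path that stuffs a BlockNumber into a signed 32-bit integer
misbehaves from that point on. A small stand-alone sketch (this is not
PostgreSQL code; the cast is deliberately sloppy):

#include <stdio.h>
#include <stdint.h>

typedef uint32_t BlockNumber;   /* PostgreSQL block numbers are unsigned 32-bit */

int main(void)
{
    BlockNumber blkno = 0x80000000u;  /* first block past the 16 TB mark with 8 kB blocks */
    int buggy = (int) blkno;          /* hypothetical sloppy signed conversion */

    printf("unsigned: %lu\n", (unsigned long) blkno);  /* 2147483648 */
    printf("signed:   %ld\n", (long) buggy);           /* -2147483648 on typical two's-complement platforms */

    if (buggy < 0)
        printf("a signed comparison now sorts this block before block 0\n");
    return 0;
}

(CLUSTER, REINDEX and VACUUM FULL scan or rewrite the whole relation, which is
presumably why Tom singles them out as the operations most likely to hit such
a bug.)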
Bruce Momjian <[EMAIL PROTECTED]> writes:
> I guess the big question is what do we report as the maximum table size?
> Do we report 32TB and fix any bug that happen over 16TB?
[shrug] I'm happy with what the docs say now. I'd rather underpromise
than overpromise.
regards,
> Tom Lane wrote:
> > Bruce Momjian <[EMAIL PROTECTED]> writes:
> > > Is our maximum table size limited by the maximum block number?
> >
> > Certainly.
> >
> > > Is the 16TB number a hold-over from when we weren't sure block number
> > > was unsigned, though now we are pretty sure it is handled as unsigned
> > > consistently?
Tom Lane wrote:
> Bruce Momjian <[EMAIL PROTECTED]> writes:
> > Is our maximum table size limited by the maximum block number?
>
> Certainly.
>
> > Is the 16TB number a hold-over from when we weren't sure block number
> > was unsigned, though now we are pretty sure it is handled as unsigned
> > consistently?
On Tue, 9 Sep 2003, Tom Lane wrote:
> Bruce Momjian <[EMAIL PROTECTED]> writes:
> > Is our maximum table size limited by the maximum block number?
>
> Certainly.
>
> > Is the 16TB number a hold-over from when we weren't sure block number
> > was unsigned, though now we are pretty sure it is handled as unsigned
> > consistently?
Bruce Momjian <[EMAIL PROTECTED]> writes:
> Is our maximum table size limited by the maximum block number?
Certainly.
> Is the 16TB number a hold-over from when we weren't sure block number
> was unsigned, though now we are pretty sure it is handled as unsigned
> consistently?
It's a holdover.
Is our maximum table size limited by the maximum block number?
With our block number maximum of:
#define MaxBlockNumber ((BlockNumber) 0xFFFFFFFE)
0xFFFFFFFE = 4294967294
would the max table size really be (4G blocks * 8k) or 32 TB, not 16TB, as
listed in the FAQ?
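Working the arithmetic through (a quick stand-alone check, assuming the
default 8 kB BLCKSZ; this is not PostgreSQL source, just the numbers from the
question above):

#include <stdio.h>
#include <stdint.h>

#define MaxBlockNumber 0xFFFFFFFEu   /* from the #define quoted above */
#define BLCKSZ 8192                  /* default block size, assumed */

int main(void)
{
    /* blocks 0 .. MaxBlockNumber are usable; 0xFFFFFFFF is InvalidBlockNumber */
    uint64_t nblocks = (uint64_t) MaxBlockNumber + 1;   /* 4294967295 blocks */
    uint64_t bytes   = nblocks * BLCKSZ;

    printf("%llu blocks * %d bytes = %llu bytes (~%.3f TB)\n",
           (unsigned long long) nblocks, BLCKSZ,
           (unsigned long long) bytes,
           bytes / (1024.0 * 1024.0 * 1024.0 * 1024.0));
    return 0;
}

which comes out one 8 kB block short of exactly 32 TB, i.e. effectively the
32 TB figure asked about, assuming the default block size.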