 		Assert(i == PG_STAT_STATEMENTS_COLS);
+	else
+		Assert(i == (sql_supports_v1_1_counters ?
+					 PG_STAT_STATEMENTS_COLS :
+					 PG_STAT_STATEMENTS_COLS_V1_0));
 	tuplestore_putvalues(tupstore, tupdesc, values, nulls);
-- end of diff
--
n wrong.
Well, the API is there, it is where, I guess, PostgreSQL is going, but I
think, philosophically, the API needs to see the XML contained within SQL
columns as being able to represent variable and optional columns in object
oriented environments easily. The harder it is to use a feature, the
ems is that it is intended to use extracted
XML within a query. The new xpath functionality seems not to be designed
to facilitate this, requiring a pretty arcane query structure to do the
same thing:
select datum from objects where key='GUID' and (xpath(E'foo/bar',
XMLPARSE(CONTENT da
used uuid, and if one
substitutes "uuid()" for "text()" that doesn't work.
The API is less intuitive than the previous incarnation and is, indeed,
more difficult to use.
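To make the complaint concrete, here is a minimal libpq sketch of the query shape being described, completed from the truncated example above. The table and path names (objects, key, datum, foo/bar) follow the message; the connection string and the compared value are hypothetical.

#include <stdio.h>
#include <libpq-fe.h>

int
main(void)
{
    PGconn   *conn = PQconnectdb("dbname=mydb");   /* hypothetical conninfo */
    PGresult *res;

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "%s", PQerrorMessage(conn));
        return 1;
    }

    /*
     * The text column must be wrapped in XMLPARSE, and the xml[] result of
     * xpath() must be indexed and cast -- the "arcane" structure objected to.
     */
    res = PQexec(conn,
                 "SELECT datum FROM objects"
                 " WHERE key = 'GUID'"
                 "   AND (xpath('/foo/bar/text()',"
                 "              XMLPARSE(CONTENT datum)))[1]::text = 'expected-value'");

    printf("%d matching rows\n", PQntuples(res));
    PQclear(res);
    PQfinish(conn);
    return 0;
}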
>
> -Kevin
>
hich produces an unusable:
{b5212259-a91f-4dca-a547-4fe89cf2f32c}
--
How difficult would it be, and does anyone think it is possible to have a
continuous "restore_command" ala pg_standby running AND have the database
operational in a "read-only" mode?
--
is copied. You
wouldn't want it changing "mid-copy" would you? How is this any less of a
hit than just calculating the checksum?
--
pt
to regenerate the data, then we could certainly optimize the check
algorithm. A simple checksum may be good enough.
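For scale, a "simple checksum" over an 8 kB block can be as small as the sketch below -- an Adler-32-style running sum. The names and the block-size constant are illustrative, not PostgreSQL's actual page API.

#include <stddef.h>
#include <stdint.h>

#define DEMO_BLCKSZ 8192                /* illustrative 8 kB page size */

/* Adler-32-style running sum over a page image. */
static uint32_t
demo_block_checksum(const unsigned char *page, size_t len)
{
    uint32_t a = 1, b = 0;
    size_t   i;

    for (i = 0; i < len; i++)
    {
        a = (a + page[i]) % 65521;      /* 65521: largest prime below 2^16 */
        b = (b + a) % 65521;
    }
    return (b << 16) | a;
}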
--
may be in older CPUs, but these days CPUs are so fast on data already in RAM, and a block is very small. On x86 systems, depending on page alignment, we are talking about two or three pages that will be "in memory" (they were just used to read the block from disk, or were previously accessed).
--
eaning a false OK.
Also, regardless of whether or not the block is full, the block is read and written as a block, and the underlying data is unimportant.
--
ll, if there are any spare bits in a block header,
they could be used for the check value.
--
t -hackers at
> large approve of this feature before starting serious coding.
>
> Opinions?
If it's fast enough, it's a good idea. It could be very helpful in
protecting users' data.
>
> --
> Alvaro Herrera
> http://www.CommandPrompt.com/
> PostgreSQL Replication, Consulting, Custom Develo
ble to insert arbitrary named values, and extracting them
>> similarly, IMHO works "better" and more naturally than some external
>> aggregate system built on a column. I know it is a little "outside the
>> box" thinking, what do you think?
>>
>>
>
right?
For what it's worth, I don't expect you to jump all over this. It really is
a divergence from classic SQL design. I'm not even sure I like it. In
fact, I don't like it, but the argument that you are being forced to
create a second-class data storage mechanism or a relational join for data
that is logically in a single relation does cause one to ponder the
problem.
--
strategy to the
web guys.
Being able to insert arbitrary named values, and extracting them
similarly, IMHO works "better" and more naturally than some external
aggregate system built on a column. I know it is a little "outside the
box" thinking, what do you think?
--
s an important problem of easily mapping programmatic types to a
database.
Anyone think it's interesting?
--
clause? would that be
called out of order of the select target list? I'm doing a fairly large
amount of processing and doing it once is important.
--
irst come first served" strategy, is there any discontinuity
between the function calls for t1.col1 and t2.col2. Will they all be
called for a particular combination of t1.col1 and t2.col2, in some
unpredictable order before the next row(s) combination is evaluated or
will I have to execute
t1.column1 and t2.column2 will only be evaluated once and
that myscore(...) and myrank(...) will all be called before the next
permutation is evaluated?
So, basically, I don't want to recalculate the values for each and every
function call as that would make the system VERY slow.
--
> This is just a bad, bad idea. Side-effects in a WHERE-clause function
> are guaranteed to cause headaches. When (not if) it breaks, you get
> to keep both pieces.
I was kind of afraid of that. So, how could one implement such a function
set?
--
w? Is this possible? Is
there a specific order on which you can count?
Would it be something like: "where" clause first, left to right, followed
by select terms, left to right, and lastly the "order by" clause?
--
Here is the SSL patch we discussed previously for 8.3.1.
sslconfig.patch.8.3.1
Description: Binary data
--
sslcrl=fullpath_to_revocation_list
--
be debated is if we should also somehow allow it
> to be specified in .pgpass for example?
>
I am testing a patch that is currently against the 8.2 series.
It implements in PQconnectdb(...)
sslmode=require sslkey=client.key sslcert=client.crt ssltrustcrt=certs.pem
sslcrl=crl.pem"
BTW: th
ct. The client
on the other hand, needs to access one or more postgresql servers.
It makes sense that the server keys and credentials be hard coded to its
protected data directory, while the client needs the ability to have
multiple keys.
Think of it this way, a specific lock only takes one key wh
s/client.key
sslcert=/opt/myapp/share/keys/client.crt");
Any comments?
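For reference, this is what the proposed keywords would look like from C -- a sketch only: ssltrustcrt and sslcrl are the names used by the patch under discussion, not stock 8.2 libpq, and the paths are the illustrative /opt/myapp ones from the message.

#include <stdio.h>
#include <libpq-fe.h>

int
main(void)
{
    PGconn *conn = PQconnectdb(
        "host=db.example.com dbname=mydb "
        "sslmode=require "
        "sslkey=/opt/myapp/share/keys/client.key "
        "sslcert=/opt/myapp/share/keys/client.crt "
        "ssltrustcrt=/opt/myapp/share/keys/certs.pem "
        "sslcrl=/opt/myapp/share/keys/crl.pem");

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }

    /* ... use the SSL-protected connection ... */
    PQfinish(conn);
    return 0;
}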
--
nctionality you suggest above,
> given support for environment variables:
>
> void PQsetSSLUserCertFileName(char *filename)
> {
>     setenv("PGCERTFILE", filename, 1);
> }
>
> void PQsetSSLUserKeyFileName(char *filename)
> {
>     setenv("PGKEYFILE", filename, 1);
> }
>
> Or, in perl, $ENV{PGKEYFILE} = $file and so on. It seems
> less intrusive than adding new API calls.
>
> Cheers,
>Steve
Doesn't it make sense that the connection be configured in one place? I
agree with Tom, if it should be done, it should be done in PQconnectdb.
--
es, not me. In addition, my application also communicates with other
SSL-enabled versions of itself.
I think you would agree that a hard-coded, immutable location for the
"client" interface is problematic.
--
d almost always lead to configuration issues. As a
methodology for default configuration, it adds flexibility. Also, the
current configuration does not easily take into consideration the idea
that different databases with different keys can be used from the same
system by the same user.
Maybe we need
unsubscribe
> [EMAIL PROTECTED] wrote:
>> The point is that this *is* silly, but I am at a loss to understand why
>> it
>> isn't a no-brainer to change. Why is there a fight over a trivial change
>> which will ensure that PostgreSQL aligns to the documented behavior of
>> "open()"
>
> (Why characterise this as
> My copy of APUE says on page 49: "The file descriptor returned by open
> is the lowest numbered unused descriptor. This is used by some
> applications to open a new file on standard input, standard output, or
> standard error."
Yes, I'll restate my questions:
What is meant by "unused?" Is it r
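"Unused" in the POSIX sense means "not currently open in the calling process," so the lowest such number is handed out. A minimal sketch, assuming a POSIX system (the file path is hypothetical):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int
main(void)
{
    int fd;

    close(STDIN_FILENO);                       /* make fd 0 "unused" */

    fd = open("/tmp/demo.txt", O_RDWR | O_CREAT, 0600);

    printf("open() returned fd %d\n", fd);     /* prints 0 on a POSIX system */
    return 0;
}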
> [EMAIL PROTECTED] writes:
>> That is hardly anything that I would feel comfortable with. Lets break
>> this down into all the areas that are ambiguous:
>
> There isn't anything ambiguous about this, nor is it credible that there
> are implementations that don't follow the intent of the spec.
How
> [EMAIL PROTECTED] writes:
>>> The POSIX spec requires open() to assign fd's consecutively from zero.
>>> http://www.opengroup.org/onlinepubs/007908799/xsh/open.html
>
>> With all due respect, PostgreSQL now runs natively on Win32.
>
> ... using the POSIX APIs that Microsoft so kindly provides.
>
>
>> Maybe we make the assumption that all OS will
>> implement "fd" as an array index
>
> The POSIX spec requires open() to assign fd's consecutively from zero.
> http://www.opengroup.org/onlinepubs/007908799/xsh/open.html
With all due respect, PostgreSQL now runs natively on Win32. Having a
POS
> Tom Lane wrote:
>> [EMAIL PROTECTED] writes:
>> >>> Please see my posting about using a macro for snprintf.
>>
>> > Wasn't the issue about odd behavior of the Win32 linker choosing the
>> wrong
>> > vnsprintf?
>>
>> You're right, the point about the macro was to avoid linker weirdness on
>> Windo
> Tom Lane wrote:
>> Bruce Momjian writes:
>> > Please see my posting about using a macro for snprintf. If the
>> current
>> > implementation of snprintf is enough for our existing translation
>> users
>> > we probably don't need to add anything more to it because snprintf
>> will
>> > not be exp
From what I recall of the conversation, I would say rename the vsnprintf
and snprintf functions in postgres to pq_vsnprintf and pq_snprintf.
Define a couple of macros (in some common header, pqprintf.h?):
#define snprintf pq_snprintf
#define vsnprintf pq_vsnprintf
Then just maintain the postgres
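Spelled out, the proposal would amount to a small header along these lines (a hypothetical pqprintf.h; the names are the ones floated above, not necessarily what was committed):

#ifndef PQPRINTF_H
#define PQPRINTF_H

#include <stdarg.h>
#include <stddef.h>

extern int pq_snprintf(char *str, size_t count, const char *fmt, ...);
extern int pq_vsnprintf(char *str, size_t count, const char *fmt, va_list args);

/* Route all existing call sites through the portable implementations. */
#define snprintf  pq_snprintf
#define vsnprintf pq_vsnprintf

#endif   /* PQPRINTF_H */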
Tom recently said, when talking about allowing the user (in this case me)
from passing a hash table size to "create index:"
"but that doesn't mean I want to make the user deal with it."
I started thinking about this and, maybe I'm old fashioned, but I would
like the ability to deal with it. So m
> Hello hackers,
>
> i'm wondering if is possible to somehow spread pretty big db (aprox 50G)
> over few boxes to get more speed ?
> if anyone did that i'd be glad to have some directions in right way,
>
I have done different elements of clustering with PostgreSQL on a per-task
basis, but not a ful
> [EMAIL PROTECTED] writes:
>> Anyway, IMHO, hash indexes would be dramatically improved if you could
>> specify your own hashing function
>
> That's called a custom operator class.
Would I also be able to query the bucket size and all that?
>
>> and declare initial table size.
>
> It would be in
> Pailloncy Jean-Gerard wrote:
>> You should have a look to this thread
>> http://archives.postgresql.org/pgsql-hackers/2005-02/msg00263.php
>>
>> Take a look at this paper about "lock-free parallel hash table"
>> http://www.cs.rug.nl/~wim/mechver/hashta
> On a fine day (Tuesday, 1 March 2005, 14:54-0500), [EMAIL PROTECTED]
> wrote:
>> Now, it occurs to me that if my document reference table can refer to
>> something other than an indexed primary key, I can save a lot of index
>> processing time in PostgreSQL if I can have a "safe" analog
> I'm wondering,
> is there any sense to cluster table using two-column index ?
>
>
We had this discussion a few weeks ago. Look at the archives for my
post "One Big Trend "
The problem is that while the statistics can reasonably deal with the
primary column, it completely misses the trends p
> Bruce Momjian writes:
>> Tom Lane wrote:
>>> First line of thought: we surely must not insert a snprintf into
>>> libpq.so unless it is 100% up to spec *and* has no performance issues
>>> ... neither of which can be claimed of the CVS-tip version.
>
>> Agreed, and we have to support all the 64-b
>
> Yes, strangely the Windows linker is fine because libpqdll.def defines
> what symbols are exported. I don't think Unix has that capability.
A non-static "public" function in a Windows DLL is not available for
dynamic linking unless explicitly declared as dll export. This behavior is
completel
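A minimal sketch of that rule (function and macro names are hypothetical): on Windows the symbol must either carry dllexport or be listed in a .def file such as libpqdll.def, whereas on Unix every non-static extern symbol is visible by default.

#ifdef _WIN32
#define DEMO_EXPORT __declspec(dllexport)
#else
#define DEMO_EXPORT                     /* Unix: extern symbols exported by default */
#endif

DEMO_EXPORT int
demo_add(int a, int b)
{
    return a + b;
}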
>
> The big question is why our own vsnprintf() is not being called from
> snprintf() in our port file.
>
I have seen this "problem" before, well, it isn't really a problem I guess.
I'm not sure of the gcc compiler options, but
On the Microsoft compiler if you specify the option "/Gy" it sep
> On Tue, 1 Mar 2005 15:38:58 -0500 (EST), [EMAIL PROTECTED]
> <[EMAIL PROTECTED]> wrote:
>> Is there a reason why we don't use the snprintf that comes with the
>> various C compilers?
>
> snprintf() is usually buried in OS libraries. We implement
> our own snprintf to make things like this:
> snpr
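The truncated example is presumably showing positional ("%n$") arguments, which translated message catalogs need in order to reorder parameters; POSIX printf supports them, but the native Win32 CRT of that era did not, which is one reason to carry a private snprintf. A small sketch:

#include <stdio.h>

int
main(void)
{
    /* English ordering, and a "translation" that swaps the arguments. */
    printf("%1$s has %2$d rows\n", "pg_class", 42);
    printf("%2$d rows in %1$s\n", "pg_class", 42);
    return 0;
}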
> I spent all day debugging it. Still have absolutely
> no idea what could possibly go wrong. Does
> anyone have a slightest clue what can it be and
> why it manifests itself only on win32?
It may be that the CLIB has badly broken support for 64-bit integers on
32-bit platforms. Does anyone know o
> Nicolai Tufar wrote:
>> On Tue, 1 Mar 2005 00:55:20 -0500 (EST), Bruce Momjian
>> > My next guess
>> > is that Win32 isn't handling va_arg(..., long long int) properly.
>> >
>>
>> I am trying various combination of number and types
>> of parameters in my test program and everything prints fine.
>
OK, let's step back a bit and see if there is a solution that fits what we
think we need and PostgreSQL.
Let's talk about FTSS; it's something I can discuss easily. It is a two-stage
system with an indexer and a server. Only the data to be indexed is
in the database; all the FTSS data structures are
> [EMAIL PROTECTED] writes:
>> Tom, I posted a message about a week ago (I forget the name) about a
>> persistent reference index, sort of like CTID, but basically a table
>> lookup. The idea is to simulate a structure that ISAM sort of techniques
>> can work in PostgreSQL.
>
>> Eliminating the bit
> I don't think we really need any more fundamentally nonconcurrent index
> types :-(
>
Tom, I posted a message about a week ago (I forget the name) about a
persistent reference index, sort of like CTID, but basically a table
lookup. The idea is to simulate a structure that ISAM sort of technique
> Linux and Solaris 10 x86 pass regression tests fine when I force the use
> of new
> snprintf(). The problem should be win32 - specific. I will
> investigate it throughly
> tonight. Can someone experienced in win32 what can possibly be the
> problem?
Do we have any idea about what format string
> "Magnus Hagander" <[EMAIL PROTECTED]> writes:
>> My results are:
>> Fisrt, baseline:
>> * Linux, with fsync (default), write-cache disabled: no data corruption
>> * Linux, with fsync (default), write-cache enabled: usually no data
>> corruption, but two runs which had
>
> That makes sense.
>
>> *
> Jim C. Nasby wrote:
>> On Mon, Feb 14, 2005 at 09:55:38AM -0800, Ron Mayer wrote:
>>
>> > I still suspect that the correct way to do it would not be
>> > to use the single "correlation", but 2 stats - one for estimating
>> > how sequential/random accesses would be; and one for estimating
>> > the
> On Sun, 20 Feb 2005 [EMAIL PROTECTED] wrote:
>
>> > On Sat, Feb 19, 2005 at 18:04:42 -0500,
>> >>
>> >> Now, lets imagine PostgreSQL is being developed by a large company.
>> QA
>> >> announces it has found a bug that will cause all the users data to
>> >> disappear if they don't run a maintenenc
> On Sat, Feb 19, 2005 at 18:04:42 -0500,
>>
>> Now, lets imagine PostgreSQL is being developed by a large company. QA
>> announces it has found a bug that will cause all the users data to
>> disappear if they don't run a maintenence program correctly. Vacuuming
>> one
>> or two tables is not enoug
> [ Shrugs ] and looks at other database systems ...
>
> CA has put Ingres into Open Source last year.
>
> Very reliable system with a replicator worth looking at.
>
> Just a thought.
The discussion on hackers is how to make PostgreSQL better. There are many
different perspectives, differences are
> On Sat, Feb 19, 2005 at 13:35:25 -0500,
> [EMAIL PROTECTED] wrote:
>>
>> The catastrophic failure of the database because a maintenence function
>> is
>> not performed is a problem with the software, not with the people using
>> it.
>
> There doesn't seem to be disagreement that something shoul
> On Fri, 18 Feb 2005 22:35:31 -0500, Tom Lane <[EMAIL PROTECTED]> wrote:
>> [EMAIL PROTECTED] writes:
>> > I think there should be a 100% no data loss fail safe.
>>
>> Possibly we need to recalibrate our expectations here. The current
>> situation is that PostgreSQL will not lose data if:
>>
>>
> [EMAIL PROTECTED] writes:
>> I think there should be a 100% no data loss fail safe.
OK, maybe I was overly broad in my statement, but I assumed a context that
I guess you missed. Don't you think that in normal operations, i.e. with
no hardware or OS failure, we should see any data loss as unacce
> On Sat, 19 Feb 2005 04:10 am, Tom Lane wrote:
>> [EMAIL PROTECTED] writes:
>> > In fact, I think it is so bad, that I think we need to back-port a fix
>> to
>> > previous versions and issue a notice of some kind.
>>
>> They already do issue notices --- see VACUUM.
>>
>> A real fix (eg the forcibl
More suggestions:
(1) At startup, the postmaster checks the XID counter; if it is close to a
problem, force a vacuum.
(2) At "sig term" shutdown, can the postmaster start a vacuum?
(3) When the XID count goes past the "trip wire," can it spontaneously
issue a vacuum?
NOTE:
Suggestions 1 and 2 are for 8.
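Suggestions (1) and (3) boil down to a check like the one sketched below (shown via libpq for illustration; the connection string and any trip-wire threshold are hypothetical): how many transactions old each database's datfrozenxid is.

#include <stdio.h>
#include <libpq-fe.h>

int
main(void)
{
    PGconn   *conn = PQconnectdb("dbname=postgres");    /* hypothetical conninfo */
    PGresult *res;
    int       i;

    if (PQstatus(conn) != CONNECTION_OK)
        return 1;

    /* Age of each database's frozen XID; large values mean vacuum is overdue. */
    res = PQexec(conn,
                 "SELECT datname, age(datfrozenxid)"
                 "  FROM pg_database ORDER BY 2 DESC");

    for (i = 0; i < PQntuples(res); i++)
        printf("%s: %s xacts since datfrozenxid\n",
               PQgetvalue(res, i, 0), PQgetvalue(res, i, 1));

    PQclear(res);
    PQfinish(conn);
    return 0;
}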
I want to see if there is a consensus of opinion out there.
We've all known that data loss "could" happen if vacuum is not run and you
perform more than 2B transactions. These days, with faster and bigger
computers and disks, it is more likely that this problem can be hit in months
-- not years.
To
> Gaetano Mendola <[EMAIL PROTECTED]> writes:
>
>> We do ~4000 txn/minute so in 6 month you are screewd up...
>
> Sure, but if you ran without vacuuming for 6 months, wouldn't you notice
> the
> huge slowdowns from all those dead tuples before that?
>
>
I would think that only applies to databases
>
> On Wed, 16 Feb 2005 [EMAIL PROTECTED] wrote:
>
>> > On Wed, 16 Feb 2005 [EMAIL PROTECTED] wrote:
>> >
>> >> >
>> >> > Once autovacuum gets to the point where it's used by default, this
>> >> > particular failure mode should be a thing of the past, but in the
>> >> > meantime I'm not going to pa
> Stephan Szabo <[EMAIL PROTECTED]> writes:
>> Right, but since the how to resolve it currently involves executing a
>> query, simply stopping dead won't allow you to resolve it. Also, if we
>> stop at the exact wraparound point, can we run into problems actually
>> trying to do the vacuum if that'
>
> On Wed, 16 Feb 2005, Joshua D. Drake wrote:
>
>>
>> >Do you have a useful suggestion about how to fix it? "Stop working" is
>> >handwaving and merely basically saying, "one of you people should do
>> >something about this" is not a solution to the problem, it's not even
>> an
>> >approach towa
> On Wed, 16 Feb 2005 [EMAIL PROTECTED] wrote:
>
>> >
>> > Once autovacuum gets to the point where it's used by default, this
>> > particular failure mode should be a thing of the past, but in the
>> > meantime I'm not going to panic about it.
>>
>> I don't know how to say this without sounding lik
> [EMAIL PROTECTED] writes:
>> Maybe I'm missing something, but shouldn't the prospect of data loss
>> (even
>> in the presense of admin ignorance) be something that should be
>> unacceptable? Certainly within the realm "normal PostgreSQL" operation.
>
> [ shrug... ] The DBA will always be able to
>> The checkpointer is entirely incapable of either detecting the problem
>> (it doesn't have enough infrastructure to examine pg_database in a
>> reasonable way) or preventing backends from doing anything if it did
>> know there was a problem.
>
> Well, I guess I meant 'some regularly running proc
I was at Linux world Tuesday, it was pretty good. I was in the "org"
pavilion, where the "real" Linux resides. The corporate people were on the
other side of the room. (There was a divider where the rest rooms and
elevators were.)
I say that this was where the "real" linux resides because all the
I will be at the BLU booth Tuesday.
Any and all, drop by.
> I will be on Boston for Linuxworld from Tuesday through Thursday. I
> will read email only occasionally.
>
> --
> Bruce Momjian| http://candle.pha.pa.us
> pgman@candle.pha.pa.us | (610) 359-1
> [EMAIL PROTECTED] writes:
>> I think that is sort of arrogant. Look at Oracle, you can give the
>> planner
>> hints in the form of comments.
>
> Arrogant or not, that's the general view of the people who work on the
> planner.
>
> The real issue is not so much whether the planner will always get
> [EMAIL PROTECTED] wrote:
>> Might it be possible to contact IBM directly and ask if they will allow
>> usage of the patent for PostgreSQL. They've let 500 patents for open
>> source, maybe they'll give a write off for this as well.
>>
>> There is an advantage beyond just not having to re-write th
Might it be possible to contact IBM directly and ask if they will allow
usage of the patent for PostgreSQL. They've let 500 patents for open
source, maybe they'll give a write off for this as well.
There is an advantage beyond just not having to re-write the code, but it
would also be sort of an I
> On Thu, 2005-02-10 at 14:37 -0500, Bruce Momjian wrote:
>> No, we feel that is of limited value. If the optimizer isn't doing
>> things properly, we will fix it.
>
> I agree that improving the optimizer is the right answer for normal
> usage, so I can't get excited about query-level plan hints,
>> I think you're pretty well screwed as far as getting it *all* back goes,
>> but you could use pg_resetxlog to back up the NextXID counter enough to
>> make your tables and databases reappear (and thereby lose the effects of
>> however many recent transactions you back up over).
>>
>> Once you've
It must be possible to create a tool based on the PostgreSQL sources that
can read all the tuples in a database and dump them to a file stream. All
the data remains in the file until overwritten with data after a vacuum.
It *should* be doable.
If the data in the table is worth anything, then it
> Probably off-topic, but I think it's worth to see what astronomers are
> doing with their very big spatial databases. For example, we are working
> with more than 500,000,000 rows catalog and we use some special
> transformation
> of coordinates to integer numbers with preserving objects closenes
s to the table, re-cluster it.
Yea, like I said, there are easier ways of doing that with fairly static
data.
>
>> -Original Message-
>> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
>> Sent: Thursday, February 10, 2005 11:22 AM
>> To: pgsql-hackers@postgr
For about 5 years now, I have been using a text search engine that I wrote
and maintain.
In the beginning, I hacked up function mechanisms to return multiple value
sets and columns. Then PostgreSQL added "setof" and it was cool. Then it
was able to return a set of rows, which was even better.
L
> On Wed, Feb 09, 2005 at 07:30:16PM -0500, [EMAIL PROTECTED] wrote:
>> I would love to keep these things current for PG development, but my
>> company's server is on a plan that gets 1G free, and is billed after
>> that. Also, I am on a broadband line at my office, and uploading the
>> data
>> wo
> Mark,
>
>> Hey, I can give you a copy of RT1 which is fine, but it is 1.1G
>> compressed. I'd have to mail you a DVD.
>
> Sure, cool.
>
[address info sniped]
I would be willing to send a couple DVDs (on a regular basis) to anyone
who is able to post this on a good mirror that anyone could get at
I wrote a message called "One Big trend vs multiple smaller trends in table
statistics" that, I think, explains what we've been seeing.
> [EMAIL PROTECTED] wrote:
>>
>> In this case, the behavior observed could be changed by altering the
>> sample size for a table. I submit that an arbitrary fixed
> Mark, Stephen, etc:
>
>> > I can see your point, however I wonder if the issue is that the
>> default
>> > stats settings of '10' (3000 rows, 10 histogram buckets) is too low,
>> and
>> > maybe we should consider making a higher value (say '100') the
>> default.
>>
>> Personally, I think that'd b
> [EMAIL PROTECTED] writes:
>> Is there a way, and if I'm being stupid please tell me, to use something
>> like a row ID to reference a row in a PostgreSQL database? Allowing the
>> database to find a specific row without using an index?
>
> ctid ... which changes on every update ...
Well, how doe
A question to the hackers:
Is there a way, and if I'm being stupid please tell me, to use something
like a row ID to reference a row in a PostgreSQL database? Allowing the
database to find a specific row without using an index?
I mean, an index has to return something like a row ID for the databa
I haven't worked with GiST, although I have been curious from time to
time. Just never had the time to sit, read, and try out the GiST system.
On my text search system (FTSS) I use functions that return sets of data.
It may be easier to implement that than a GiST.
Basically, I create a unique ID
ery; as well as estimate how many pages will be read.
>
> Unfortunately, many tables in my larger databases have
> columns with values that are tightly packed on a few pages;
> even though there is no total-ordering across the whole table.
> Stephan Szabo described this as a "
A couple of us using the US Census TIGER database have noticed something
about the statistics gathering of analyze. If you follow the thread "Query
Optimizer 8.0.1" you'll see the progression of the debate.
To summarize what I think we've seen:
The current implementation of analyze is designed ar
> [EMAIL PROTECTED] writes:
>
>> The basic problem with a fixed sample is that is assumes a normal
>> distribution.
>
> That's sort of true, but not in the way you think it is.
>
[snip]
Greg, I think you have an excellent ability to articulate stats, but I
think that the view that this is like ele
> [EMAIL PROTECTED] wrote:
>>
>> In this case, the behavior observed could be changed by altering the
>> sample size for a table. I submit that an arbitrary fixed sample size is
>> not a good base for the analyzer, but that the sample size should be
>> based
>> on the size of the table or some calc
> On Mon, Feb 07, 2005 at 05:16:56PM -0500, [EMAIL PROTECTED] wrote:
>> > On Mon, Feb 07, 2005 at 13:28:04 -0500,
>> >
>> > What you are saying here is that if you want more accurate statistics,
>> you
>> > need to sample more rows. That is true. However, the size of the
>> sample
>> > is essential
> Maybe I am missing something - ISTM that you can increase your
> statistics target for those larger tables to obtain a larger (i.e.
> better) sample.
No one is arguing that you can't manually do things, but I am not the
first to notice this. I saw the query planner doing something completely
stu
> On Mon, Feb 07, 2005 at 13:28:04 -0500,
>
> What you are saying here is that if you want more accurate statistics, you
> need to sample more rows. That is true. However, the size of the sample
> is essentially only dependent on the accuracy you need and not the size
> of the population, for large