Aha,
I got the same problem on 8.2dev.
Oleg
On Fri, 2 Jun 2006, Rodrigo Hjort wrote:
Oleg,
Actually I got PG 8.1.4 compiled from source on a Debian GNU/Linux
2.6.16-k7-2.
My locale is pt_BR, but I configured TSearch2 to use rules from the
'simple'.
Then I just followed the instructio
On Fri, 2 Jun 2006, Rodrigo Hjort wrote:
Oleg,
Actually I got PG 8.1.4 compiled from source on a Debian GNU/Linux
2.6.16-k7-2.
My locale is pt_BR, but I configured TSearch2 to use rules from the
'simple'.
Then I just followed the instructions from the link. The fact is that it
only works at the
"Jim Nasby" <[EMAIL PROTECTED]> wrote
> Now that we've got a nice amount of tuneability in the bgwriter, it
> would be nice if we had as much insight into how it's actually doing.
> I'd like to propose that the following info be added to the stats
> framework to assist in tuning it:
>
In gener
Tino Wildenhain <[EMAIL PROTECTED]> writes:
> Tom Lane wrote:
>> You're not seriously suggesting we reimplement evaluation of WHERE clauses
>> on the client side, are you?
> no, did I? But what is wrong with something like:
> \COPY 'SELECT foo,bar,baz FROM footable WHERE baz=5 ORDER BY foo' TO
>
Alvaro Herrera <[EMAIL PROTECTED]> writes:
> Joshua D. Drake wrote:
>> I was looking at this todo item and I was wondering why we want to do
>> this? I have had to use -o -P on many occasions and was wondering if
>> there is something new to replace it in newer PostgreSQL?
> Keep in mind that po
Joshua D. Drake wrote:
> Hello,
>
> I was looking at this todo item and I was wondering why we want to do
> this? I have had to use -o -P on many occasions and was wondering if
> there is something new to replace it in newer PostgreSQL?
Uh, are you confusing it with
postgres -O -P?
Keep in min
VIEW.
Not to be a sour apple or anything but I don't see how any of this is
needed in the backend since we can easily use psql to do it, or pretty
much any other tool.
There is an important difference between a capability in the backend vs
one synthesized in the frontend.
After much patience
Mark Woodward wrote:
Allow COPY to output from views
Another idea would be to allow actual SELECT statements in a COPY.
Personally I strongly favor the second option as being more flexible
than the first.
I second that - allowing arbitrary SELECT statements as a COPY source
seems much more p
Hello,
I was looking at this todo item and I was wondering why we want to do
this? I have had to use -o -P on many occasions and was wondering if
there is something new to replace it in newer PostgreSQL?
Joshua D. Drake
--
=== The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support
Not to be a sour apple or anything but I don't see how any of this is
needed in the backend since we can easily use psql to do it, or pretty
much any other tool.
There is an important difference between a capability in the backend vs
one synthesized in the frontend.
And that would be? The su
>
>>> Allow COPY to output from views
>>> Another idea would be to allow actual SELECT statements in a COPY.
>>>
>>> Personally I strongly favor the second option as being more flexible
>>> than the first.
>>
>>
>> I second that - allowing arbitrary SELECT statements as a COPY source
>> seems muc
Allow COPY to output from views
Another idea would be to allow actual SELECT statements in a COPY.
Personally I strongly favor the second option as being more flexible
than the first.
I second that - allowing arbitrary SELECT statements as a COPY source
seems much more powerful and flexibl
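For concreteness, the two options being compared would look roughly like this; a sketch using the syntax floated elsewhere in this thread (8.1 accepts neither form, and the object names are made up):
  -- option 1: COPY reading from a view ("myview" is illustrative)
  COPY myview TO STDOUT;
  -- option 2: COPY taking an arbitrary SELECT, as proposed in this thread
  COPY (SELECT foo, bar FROM mytable WHERE foo = 'bar') TO STDOUT;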
Rod Taylor <[EMAIL PROTECTED]> writes:
>> One objection to this is that after moving "off the gold standard" of
>> 1.0 = one page fetch, there is no longer any clear meaning to the
>> cost estimate units; you're faced with the fact that they're just an
>> arbitrary scale. I'm not sure that's such
We got another report of this failure today:
http://archives.postgresql.org/pgsql-novice/2006-06/msg00020.php
which I found particularly interesting because it happened on a Fedora
machine, and I had thought Fedora impervious because it considers
glibc-common a standard component. Seems it can hap
> One objection to this is that after moving "off the gold standard" of
> 1.0 = one page fetch, there is no longer any clear meaning to the
> cost estimate units; you're faced with the fact that they're just an
> arbitrary scale. I'm not sure that's such a bad thing, though. For
> instance, some
Josh Berkus writes:
> Greg, Tom,
>
> > But for most users analyze doesn't really have to run as often as
> > vacuum. One sequential scan per night doesn't seem like that big a deal
> > to me.
>
> Clearly you don't have any 0.5 TB databases.
Actually I did, not so long ago.
Sequential scans
Oleg,
Actually I got PG 8.1.4 compiled from source on a Debian GNU/Linux 2.6.16-k7-2.
My locale is pt_BR, but I configured TSearch2 to use rules from the 'simple'.
Then I just followed the instructions from the link. The fact is that it only works the first time.
Regards,
Rodrigo Hjort
http://
I wrote:
> In general it seems to me that for CPU-bound databases, the default values
> of the cpu_xxx_cost variables are too low. ... rather than telling people
> to manipulate all three of these variables individually, I think it might
> also be a good idea to provide a new GUC variable named so
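For reference, the three variables in question can already be adjusted individually; the values below are only an illustration of scaling them together, not a recommendation:
  SET cpu_tuple_cost = 0.04;        -- default 0.01
  SET cpu_index_tuple_cost = 0.004; -- default 0.001
  SET cpu_operator_cost = 0.01;     -- default 0.0025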
> Mark Woodward wrote:
> ...
>
pg_dump -t mytable | psql -h target -c "COPY mytable FROM STDIN"
With a more selective copy, you can use pretty much this mechanism to
limit a copy to a subset of the records in a table.
>>> Ok, but why not just implement this into pg_dump or psql?
Josh Berkus wrote:
Greg, Tom,
But for most users analyze doesn't really have to run as often as
vacuum. One sequential scan per night doesn't seem like that big a deal
to me.
Clearly you don't have any 0.5 TB databases.
Perhaps something like "ANALYZE FULL"? Then only those who need the
Greg, Tom,
> But for most users analyze doesn't really have to run as often as
> vacuum. One sequential scan per night doesn't seem like that big a deal
> to me.
Clearly you don't have any 0.5 TB databases.
> > I'd still be worried about the CPU pain though. ANALYZE can afford to
> > expend a
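For reference, the knob that already controls how much work ANALYZE spends per column is the statistics target; a minimal sketch, with illustrative table and column names:
  -- raise the sampling detail for one column only, then re-analyze just that column
  ALTER TABLE orders ALTER COLUMN customer_id SET STATISTICS 500;
  ANALYZE orders (customer_id);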
Tino Wildenhain <[EMAIL PROTECTED]> writes:
> Tom Lane wrote:
> > Tino Wildenhain <[EMAIL PROTECTED]> writes:
> >> Ok, but why not just implement this into pg_dump or psql?
> >> Why bother the backend with that functionality?
> >
> > You're not seriously suggesting we reimplement evaluation of WH
Tom Lane <[EMAIL PROTECTED]> writes:
> Greg Stark <[EMAIL PROTECTED]> writes:
> > And a 5% sample is pretty big. In fact my tests earlier showed the i/o from
> > 5% block sampling took just as long as reading all the blocks. Even if we
> > figure out what's causing that (IMHO surprising) re
On Fri, Jun 02, 2006 at 09:56:07AM -0400, Andrew Dunstan wrote:
> Mark Woodward wrote:
> >Tom had posted a question about file compression with copy. I thought
> >about it, and I want to throw this out and see if anyone thinks it is a
> >good idea.
> >
> >Currently, the COPY command only copies a
Mark Woodward wrote:
...
>>> pg_dump -t mytable | psql -h target -c "COPY mytable FROM STDIN"
>>>
>>> With a more selective copy, you can use pretty much this mechanism to
>>> limit a copy to a subset of the records in a table.
>> Ok, but why not just implement this into pg_dump or psql?
>> Why bo
Now that we've got a nice amount of tuneability in the bgwriter, it
would be nice if we had as much insight into how it's actually doing.
I'd like to propose that the following info be added to the stats
framework to assist in tuning it:
bgwriter_rounds - number of rounds that have run
bgwr
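If those counters were exposed through the stats views, reading them might look something like this; purely a sketch, with a hypothetical view name and only the one column named above (neither exists today):
  SELECT bgwriter_rounds FROM pg_stat_bgwriter;  -- hypothetical view and column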
Tom Lane wrote:
> Tino Wildenhain <[EMAIL PROTECTED]> writes:
>> Ok, but why not just implement this into pg_dump or psql?
>> Why bother the backend with that functionality?
>
> You're not seriously suggesting we reimplement evaluation of WHERE clauses
> on the client side, are you?
no, did I? Bu
> Mark Woodward wrote:
> ...
> create table as select ...; followed by a copy of that table
> if it really is faster than just the usual select & fetch?
Why "create table?"
>>> Just to simulate and time the proposal.
>>> SELECT ... already works over the network and if COPY from a
>>>
On Fri, Jun 02, 2006 at 01:39:32PM -0700, Michael Dean wrote:
> I'm sorry to interrupt your esoteric (to me) discussion, but I have
> a very simple question: would you define a "good unbiased sample"?
> My statistics professor Dan Price (rest his soul) would tell me
> there are only random sample
Greg Stark <[EMAIL PROTECTED]> writes:
> And a 5% sample is pretty big. In fact my tests earlier showed the i/o from
> 5% block sampling took just as long as reading all the blocks. Even if we
> figure out what's causing that (IMHO surprising) result and improve matters I
> would only expect it t
Greg Stark wrote:
Josh Berkus writes:
Using a variety of synthetic and real-world data sets, we show that
distinct sampling gives estimates for distinct values queries that
are within 0%-10%, whereas previous methods were typically 50%-250% off,
across the spectrum of data sets and q
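For anyone wanting to see how far off the current estimator is on their own data, the planner's stored estimate can be compared with the true count (table and column names are illustrative):
  SELECT n_distinct FROM pg_stats
   WHERE tablename = 'mytable' AND attname = 'foo';
  -- n_distinct < 0 means a fraction of the row count rather than an absolute value
  SELECT count(DISTINCT foo) FROM mytable;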
Tino Wildenhain <[EMAIL PROTECTED]> writes:
> Ok, but why not just implement this into pg_dump or psql?
> Why bother the backend with that functionality?
You're not seriously suggesting we reimplement evaluation of WHERE clauses
on the client side, are you?
regards, tom la
Josh Berkus writes:
> > Using a variety of synthetic and real-world data sets, we show that
> > distinct sampling gives estimates for distinct values queries that
> > are within 0%-10%, whereas previous methods were typically 50%-250% off,
> > across the spectrum of data sets and queries
Mark Woodward wrote:
...
create table as select ...; followed by a copy of that table
if it really is faster than just the usual select & fetch?
>>> Why "create table?"
>> Just to simulate and time the proposal.
>> SELECT ... already works over the network and if COPY from a
>> select (wh
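Spelled out, the timing simulation being suggested is roughly the following (names are illustrative):
  CREATE TABLE tmp_copy AS
    SELECT foo, bar FROM mytable WHERE baz = 5;
  COPY tmp_copy TO STDOUT;
  DROP TABLE tmp_copy;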
Greg,
> Using a variety of synthetic and real-world data sets, we show that
> distinct sampling gives estimates for distinct values queries that
> are within 0%-10%, whereas previous methods were typically 50%-250% off,
> across the spectrum of data sets and queries studied.
Aha. It's a
Rodrigo,
you gave us too little information. Did you use your own dictionary ?
What's your configuration, version, etc.
Oleg
On Fri, 2 Jun 2006, Rodrigo Hjort wrote:
Sorry, but I thought that was the most appropriate list for the issue.
I was following these instructions:
http://www.sai.ms
Tom et al,
Bruce and I talked a little bit about modularizing the xlog code a bit
more. As you know, one of the PostgreSQL Summer of Code projects is
to enhance xlogdump. This and other projects which would like to be
able to use the xlog code directly (like resetxlog in the -f option)
rather t
Sorry, but I thought that was the most appropriate list for the issue.
I was following these instructions:
http://www.sai.msu.su/~megera/postgres/gist/tsearch/V2/docs/custom-dict.html
And what happens is that the function works just once. Perhaps a malloc/free issue?
$ psql fuzzy
fuzzy=# select to_t
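A minimal way to exercise the dictionary repeatedly from SQL, which should make the works-only-once behaviour easy to show; 'mydict' is only a placeholder for whatever name the custom dictionary was registered under:
  SELECT lexize('mydict', 'exemplo');
  SELECT lexize('mydict', 'exemplo');  -- per the report, only the first call behaves correctly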
Larry Rosenman wrote:
> Larry Rosenman wrote:
>> Tom Lane wrote:
>>> "Andrew Dunstan" <[EMAIL PROTECTED]> writes:
Larry Rosenman said:
> If I generate fixes for firefly (I'm the owner), would they have a
> prayer of being applied?
>>>
Sure, although I wouldn't bother with 7.3 -
David Fetter <[EMAIL PROTECTED]> writes:
> > In the prior discussions someone posted the paper with the algorithm
> > I mentioned. That paper mentions that previous work showed poor
> > results at estimating n_distinct even with sample sizes as large as
> > 50% or more.
>
> Which paper? People
Neil Conway wrote:
> On Fri, 2006-06-02 at 09:56 -0400, Andrew Dunstan wrote:
> > Allow COPY to output from views
>
> FYI, there is a patch for this floating around -- I believe it was
> posted to -patches a few months back.
I have it. The pieces of it that I can use to implement the idea belo
On Fri, 2006-06-02 at 09:56 -0400, Andrew Dunstan wrote:
> Allow COPY to output from views
FYI, there is a patch for this floating around -- I believe it was
posted to -patches a few months back.
> Another idea would be to allow actual SELECT statements in a COPY.
>
> Personally I strongly f
Tom Lane wrote:
Sudden thought: is there any particularly good reason to use the cvs
update -P switch in buildfarm repositories? If we simply eliminated
the create/prune thrashing for these directories, it'd fix the problem,
if Andrew's idea is correct. Probably save a few cycles too. And sinc
Andrew Dunstan <[EMAIL PROTECTED]> writes:
> I suppose I could provide a switch to turn it off ... in one recent case
> the repo was genuinely not clean, though, so I am not terribly keen on
> that approach - but I am open to persuasion.
No, I agree it's a good check. Just wondering if we can r
Joshua D. Drake wrote:
Tom Lane wrote:
Andrew Dunstan <[EMAIL PROTECTED]> writes:
What's happening here is that cvs actually creates the directory and
then later prunes it when it finds it is empty.
I find that explanation pretty unconvincing. Why would cvs print a "?"
for such a directory?
"Joshua D. Drake" <[EMAIL PROTECTED]> writes:
> Tom Lane wrote:
>> Andrew Dunstan <[EMAIL PROTECTED]> writes:
>>> What's happening here is that cvs actually creates the directory and
>>> then later prunes it when it finds it is empty.
>>
>> I find that explanation pretty unconvincing. Why would
Larry Rosenman wrote:
> Tom Lane wrote:
>> "Andrew Dunstan" <[EMAIL PROTECTED]> writes:
>>> Larry Rosenman said:
If I generate fixes for firefly (I'm the owner), would they have a
prayer of being applied?
>>
>>> Sure, although I wouldn't bother with 7.3 - just take 7.3 out of
>>> firefl
Tom Lane wrote:
Andrew Dunstan <[EMAIL PROTECTED]> writes:
What's happening here is that cvs actually creates the directory and
then later prunes it when it finds it is empty.
I find that explanation pretty unconvincing. Why would cvs print a "?"
for such a directory?
cvs will print a ? if
Tom Lane wrote:
Andrew Dunstan <[EMAIL PROTECTED]> writes:
What's happening here is that cvs actually creates the directory and
then later prunes it when it finds it is empty.
I find that explanation pretty unconvincing. Why would cvs print a "?"
for such a directory?
A
Tom Lane wrote:
> Stefan Kaltenbrunner <[EMAIL PROTECTED]> writes:
>
>>FWIW: lionfish had a weird make check error 3 weeks ago which I
>>(unsuccessfully) tried to reproduce multiple times after that:
>
>
>>http://www.pgbuildfarm.org/cgi-bin/show_log.pl?nm=lionfish&dt=2006-05-12%2005:30:14
>
>
Andrew Dunstan <[EMAIL PROTECTED]> writes:
> What's happening here is that cvs actually creates the directory and
> then later prunes it when it finds it is empty.
I find that explanation pretty unconvincing. Why would cvs print a "?"
for such a directory?
regards, tom l
Tom Lane wrote:
"Andrew Dunstan" <[EMAIL PROTECTED]> writes:
I strongly suspect that snake is hitting the "file/directory doesn't
disappear immediately when you unlink/rmdir" problem on Windows that we have
had to code around inside Postgres. It looks like cvs is trying to prune an
empty dire
Andrew Dunstan wrote:
> Mark Woodward wrote:
>
>> Tom had posted a question about file compression with copy. I thought
>> about it, and I want to throw this out and see if anyone thinks it is a
>> good idea.
>>
>> Currently, the COPY command only copies a table, what if it could operate
>> with
> Mark Woodward wrote:
>>> Mark Woodward wrote:
Tom had posted a question about file compression with copy. I thought
about it, and I want to throw this out and see if anyone thinks it is a
good idea.
Currently, the COPY command only copies a table, what if it c
Mark Woodward wrote:
>> Mark Woodward wrote:
>>> Tom had posted a question about file compression with copy. I thought
>>> about it, and I want to throw this out and see if anyone thinks it is a
>>> good idea.
>>>
>>> Currently, the COPY command only copies a table, what if it could
>>> opera
Mark Woodward wrote:
Tom had posted a question about file compression with copy. I thought
about it, and I want to throw this out and see if anyone thinks it is a
good idea.
Currently, the COPY command only copies a table, what if it could operate
with a query, as:
COPY (select * from mytable
> Mark Woodward wrote:
>> Tom had posted a question about file compression with copy. I thought
>> about it, and I want to throw this out and see if anyone thinks it is a
>> good idea.
>>
>> Currently, the COPY command only copies a table, what if it could
>> operate with a query, as:
>>
>>
Mark Woodward wrote:
> Tom had posted a question about file compression with copy. I thought
> about it, and I want to throw this out and see if anyone thinks it is a
> good idea.
>
> Currently, the COPY command only copies a table, what if it could operate
> with a query, as:
>
> COPY (select
Tom Lane wrote:
> "Andrew Dunstan" <[EMAIL PROTECTED]> writes:
>> Larry Rosenman said:
>>> If I generate fixes for firefly (I'm the owner), would they have a
>>> prayer of being applied?
>
>> Sure, although I wouldn't bother with 7.3 - just take 7.3 out of
>> firefly's build schedule. That's not
"Andrew Dunstan" <[EMAIL PROTECTED]> writes:
> Larry Rosenman said:
>> If I generate fixes for firefly (I'm the owner), would they have a
>> prayer of being applied?
> Sure, although I wouldn't bother with 7.3 - just take 7.3 out of firefly's
> build schedule. That's not carte blanche on fixes, o
"Andrew Dunstan" <[EMAIL PROTECTED]> writes:
> I strongly suspect that snake is hitting the "file/directory doesn't
> disappear immediately when you unlink/rmdir" problem on Windows that we have
> had to code around inside Postgres. It looks like cvs is trying to prune an
> empty directory but isn'
Tom had posted a question about file compression with copy. I thought
about it, and I want to throw this out and see if anyone thinks it is a
good idea.
Currently, the COPY command only copies a table, what if it could operate
with a query, as:
COPY (select * from mytable where foo='bar') as BA
Stefan Kaltenbrunner <[EMAIL PROTECTED]> writes:
> FWIW: lionfish had a weird make check error 3 weeks ago which I
> (unsuccessfully) tried to reproduce multiple times after that:
> http://www.pgbuildfarm.org/cgi-bin/show_log.pl?nm=lionfish&dt=2006-05-12%2005:30:14
Weird.
SELECT ''::text AS el
> -Original Message-
> From: Andrew Dunstan [mailto:[EMAIL PROTECTED]
> Sent: 02 June 2006 12:18
> To: Dave Page
> Cc: [EMAIL PROTECTED]; pgsql-hackers@postgresql.org
> Subject: RE: [HACKERS] 'CVS-Unknown' buildfarm failures?
>
>
> That's why I said "almost always" :-)
:-)
> I stron
Tom Lane wrote:
Or is
it worth improving buildfarm to be able to skip specific tests?
There is a session on buildfarm improvements scheduled for the Toronto
conference. This is certainly one possibility.
cheers
andrew
Larry Rosenman said:
> Tom Lane wrote:
>> I've been making another pass over getting rid of buildfarm failures.
>> The remaining ones I see at the moment are:
>>
>> firefly HEAD: intermittent failures in the stats test. We seem to
>> have fixed every other platform back in January, but not this on
Dave Page said:
>> I have
>> repeatedly
>> advised buildfarm member owners not to build by hand in the
>> buildfarm repos.
>> Not everybody listens, apparently.
>
> The owner of snake can guarantee that that is not the case - that box
> is not used for *anything* other than the buildfarm and hasn
Josh, Greg, and Tom,
I do not know how sensitive the plans will be to the correlation,
but one thought might be to map the histogram X histogram correlation
to a square grid of values. Then you can map them to an integer which
would give you 8 x 8 with binary values, a 5 x 5 with 4 values per
poin
Hi All,
Just a small comment from a mortal user.
On Thursday 01 June 2006 19:28, Josh Berkus wrote:
> 5. random_page_cost (as previously discussed) is actually a function of
> relatively immutable hardware statistics, and as such should not need to
> exist as a GUC once the cost model is fixed.
I
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of
> Andrew Dunstan
> Sent: 02 June 2006 03:31
> To: [EMAIL PROTECTED]
> Cc: pgsql-hackers@postgresql.org
> Subject: Re: [HACKERS] 'CVS-Unknown' buildfarm failures?
>
> cvs-unknown means there are unk
Tom Lane wrote:
> I've been making another pass over getting rid of buildfarm failures.
> The remaining ones I see at the moment are:
>
> firefly HEAD: intermittent failures in the stats test. We seem to
> have fixed every other platform back in January, but not this one.
>
>
> firefly 7.4: db
On Friday, 2 June 2006 09:46, Zdenek Kotala wrote:
> I would like to implement "Allow postgresql.conf file values to be
> changed via an SQL API, perhaps using SET GLOBAL" functionality. Is
> there anybody who works on it? Is there any detailed explanation?
I don't think the semantics are all t
I would like to implement "Allow postgresql.conf file values to be
changed via an SQL API, perhaps using SET GLOBAL" functionality. Is
there anybody who works on it? Is there any detailed explanation?
Thanks Zdenek
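Taking the TODO wording literally, the API being asked about might eventually be spelled something like this; this is only the syntax proposed in the TODO item, nothing of the sort is implemented:
  -- proposed syntax only: persist a value into postgresql.conf via SQL
  SET GLOBAL log_min_duration_statement = 250;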