If you happen to use Fedora 8, you might be interested in these
just-closed bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=430983
Fixes the problem of gssapi reporting uninterpreted numeric error codes
instead of useful error messages when there's a Kerberos problem, as
we previously saw in th
On Fri, 15 Feb 2008, Michael Meskes wrote:
Could anyone please tell me how NetBSD defines dtime_t? We have a
buildfarm failure on canary.
/usr/include/sys/types.h says:
typedef int32_t dtime_t;  /* on-disk time_t */
Kris Jurka
Could anyone please tell me how NetBSD defines dtime_t? We have a
buildfarm failure on canary. I know which change triggered this. I
enabled
typedef timestamp dtime_t;
which was previously mishandled by ecpg in Informix mode and didn't work at
all if you used the data type.
Michael
--
Michael Meskes
On Fri, Feb 15, 2008 at 10:06:49AM +0000, Richard Huxton wrote:
> Decibel! wrote:
>> On Feb 14, 2008, at 10:06 AM, Tom Lane wrote:
>>> Raphaël Jacquot <[EMAIL PROTECTED]> writes:
so, I propose the use of
NEW[variable_containing_the_column_name]
(which can obviously be extended to any tuples)
Is anyone opposed to applying the patch shown here
http://archives.postgresql.org/pgsql-hackers/2008-02/msg00434.php
(plus appropriate docs changes etc)? This allows AS to be omitted
in a SELECT output-column alias clause, but only if the desired
alias name lexes as an IDENT, that is, it's not any
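For concreteness, a hypothetical example of what the patch would permit (not
taken from the thread):
-- without the patch, omitting AS here is a syntax error; with it, the
-- alias may stand alone because it lexes as a plain identifier:
select count(*) total from pg_class;
-- an alias that is a keyword would still need AS (or quoting):
select count(*) as "select" from pg_class;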
Hello there,
I'm running a conversion of PostgreSQL's CVS repository, but I'm
stuck on a revision that cscvs fails to parse. The hint that the error
gives me is:
Parser error: failed to parse revision data line (line: 'date: 2000/12/04
01:20:38; author: tgl; state: Exp; lines:
')
That's
> "Roberts, Jon" <[EMAIL PROTECTED]> writes:
> > I have no idea why you can't do a subquery in the limit
>
> It hasn't seemed worth putting any effort into --- AFAIR this is the
> first time anyone's even inquired about it. As you say, you can always
> use a function.
>
> > And I'm sure someone will point out a
"Roberts, Jon" <[EMAIL PROTECTED]> writes:
> I have no idea why you can't do a subquery in the limit
It hasn't seemed worth putting any effort into --- AFAIR this is the
first time anyone's even inquired about it. As you say, you can always
use a function.
> And I'm sure someone will point out a
ITAGAKI Takahiro <[EMAIL PROTECTED]> writes:
> In my workload, ANALYZE takes a long time (1min at statistics_target = 10,
> and 5min at 100), but the updated table needs to be vacuumed every 30 seconds
> because seqscans run on the table repeatedly.
There is something *seriously* wrong with that. I
I have no idea why you can't do a subquery in the limit but you can
reference a function:
create table test as select * from pg_tables;
create or replace function fn_count(p_sql varchar) returns int as
$$
declare
    v_count int;
begin
    -- run the caller-supplied count query and capture its single result
    execute p_sql into v_count;
    return v_count;
end;
$$
language plpgsql;
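With the function in place, the query from the original question can be
phrased like this (hypothetical usage; test is the table created above, so
it has pg_tables' columns such as tablename):
select * from test order by tablename
limit fn_count('select count(*)*0.9 from test');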
Raphaël Jacquot wrote:
I recently found myself trying to build a trigger to modify some fields
in a good dozen similarly structured tables in which the similar columns
had different names.
In fact, I got stuck in pl/pgsql because there's no way to
access the NEW tuple in an indirect
On Thu, Feb 14, 2008 at 11:06:42AM -0500, Tom Lane wrote:
> Raphaël Jacquot <[EMAIL PROTECTED]> writes:
> > so, I propose the use of
> > NEW[variable_containing_the_column_name]
> > (which can obviously be extended to any tuples)
> > to allow such access.
>
> > what do you experts think
(just as an example):
select * from test order by a limit (select count(*)*0.9 from test);
is not doable in postgresql. Someone recently asked on IRC about a
"SELECT TOP 90 PERCENT" type of query in m$sql.
Any ideas how this should be approached in psql? I ask here, because you
guys probably can
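A sketch of an alternative for later archive readers: window functions,
which arrived in PostgreSQL 8.4 and so postdate this thread, can express
the same idea without a function call in LIMIT (the column a is taken from
the example above):
select a
from (select a, cume_dist() over (order by a) as cd from test) s
where cd <= 0.9
order by a;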
Decibel! wrote:
On Feb 14, 2008, at 10:06 AM, Tom Lane wrote:
Raphaël Jacquot <[EMAIL PROTECTED]> writes:
so, I propose the use of
NEW[variable_containing_the_column_name]
(which can obviously be extended to any tuples)
to allow such access.
what do you experts think
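An aside for archive readers: much newer PostgreSQL releases (9.5+, long
after this thread) can approximate the proposed NEW[colname] by
round-tripping the row through jsonb. A minimal sketch, with the function
name and trigger argument hypothetical:
create or replace function log_named_column() returns trigger as
$$
begin
    -- TG_ARGV[0] carries a column name chosen at trigger-creation time;
    -- to_jsonb(NEW) lets us look that column up dynamically
    raise notice 'column % = %', TG_ARGV[0], to_jsonb(NEW) ->> TG_ARGV[0];
    return NEW;
end;
$$
language plpgsql;
It would be attached with something like:
create trigger t before update on some_table
for each row execute procedure log_named_column('colname');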
On Thu, 2008-02-14 at 16:50 -0500, Tom Lane wrote:
> Simon Riggs <[EMAIL PROTECTED]> writes:
> > On Thu, 2008-02-14 at 13:52 -0500, Tom Lane wrote:
> >> [ still staring at the code ... ] Something that might be interesting
> >> though is to try to move some of the buffer control logic overhead out
When there is a heavily updated table and a large table at the same time,
ANALYZE against the large table disturbs other autovacuums and HOT updates.
In my workload, ANALYZE takes a long time (1min at statistics_target = 10,
and 5min at 100), but the updated table needs to be vacuumed every 30 seconds
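For readers hitting the same problem: one standard lever, already available
in releases of this era, is a per-column statistics override, so ANALYZE on
the big table samples less without lowering default_statistics_target
globally (table and column names hypothetical):
alter table big_table alter column wide_col set statistics 10;
analyze big_table;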