I'm running our product's test suite against PostgreSQL 8.4.1. The test
suite runs fine against 8.3.7.
With 8.4.1, some of our tests are failing with the exception
'attempted to lock invisible tuple'. The failures are repeatable -
they crash every time at the same point. They crash no matter if the
I just did an upgrade on two of my servers (the main and the
failover). The main went OK but the postgres on the failover won't
start.
Unfortunately there is nothing anywhere telling me what the problem
is. The log file is empty, and there is nothing in /var/log/messages
or /var/log/syslog either.
On Oct 4, 2009, at 7:09 PM, Guy Rouillier wrote:
There is no reason why PG could not support packed decimal.
Is that not NUMERIC?
--
Christophe Pettus
x...@thebuild.com
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://w
Scott Ribe writes:
>> Can you show an actual test case?
> create sequence "DbRowIds";
> create table "PatientRelated" (id int8 not null default
> nextval('"DbRowIds"'));
> create table "Document" (id int8 not null default nextval('"DbRowIds"'));
> create table "PatientDocument" () inherits ("
Rich Shepard wrote:
On Sun, 4 Oct 2009, Sam Mason wrote:
Within PG procedures, at least in pgsql, it is impossible to do 'money'
calculations without a loss of precision.
The point is that on *any* computer it's impossible to perform arbitrary
calculations to infinite precision (i.e. "without
> Can you show an actual test case?
create sequence "DbRowIds";
create table "PatientRelated" (id int8 not null default
nextval('"DbRowIds"'));
create table "Document" (id int8 not null default nextval('"DbRowIds"'));
create table "PatientDocument" () inherits ("PatientRelated", "Document");
Scott Ribe writes:
> Should I really have to re-specify the default in this case???
Works for me:
regression=# create sequence s1;
CREATE SEQUENCE
regression=# create table t1 (f1 bigint default nextval('s1'::text::regclass));
CREATE TABLE
regression=# create table t2 (f1 bigint default nextval(
In 8.4.1, trying to load a dump from 8.3.5, I get that error from this
statement:
CREATE TABLE "PatientDocument" (
)
INHERITS ("PatientRelated", "Document");
But I do not see any conflict:
# \d "PatientRelated"
Table "v2.PatientRelated"
Column | Type |
Sam Mason wrote:
>> > 8.4 has a generate_series(timestamp,timestamp,interval) which would seem
>> > to be a bit more flexible than you want.
>> Yes, I know :-). But as "generate_series(A, B, C)" can also
>> be written as "A + generate_series(0, (B - A) / C) * C" (or
>> something "flexible" like
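A runnable sketch of that identity in Python (names here are illustrative, not PostgreSQL internals; the step count assumed is (B - A) / C):

```python
from datetime import datetime, timedelta

# Sketch of the identity discussed above: generate_series(A, B, C)
# enumerates A + i*C for i = 0 .. (B - A) / C inclusive.
def generate_series(a, b, c):
    n = int((b - a) / c)                 # whole steps that fit in [a, b]
    return [a + i * c for i in range(n + 1)]

days = generate_series(datetime(2009, 10, 1),
                       datetime(2009, 10, 4),
                       timedelta(days=1))
# days contains Oct 1, 2, 3, 4
```

This mirrors what 8.4's built-in generate_series(timestamp, timestamp, interval) produces for an evenly dividing interval.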
Rich Shepard wrote:
In the early and mid-1980s we used a procedure for business applications
involving money that worked regardless of programming language or platform.
To each (float, real) monetary amount we added 0.005 and truncated the
result to two digits on the right of the decimal point.
On Fri, 2 Oct 2009, Greg Smith wrote:
On Fri, 2 Oct 2009, Scott Marlowe wrote:
I found that lowering checkpoint_completion_target was what helped.
Does that seem counter-intuitive to you?
I set it to 0.0 now.
Generally, but there are plenty of ways you can get into a state where a
short
On Sun, Oct 04, 2009 at 09:31:02AM -0700, Rich Shepard wrote:
> On Sun, 4 Oct 2009, Sam Mason wrote:
> >The point is that on *any* computer it's impossible to perform arbitrary
> >calculations to infinite precision (i.e. "without a loss of precision as
> >you put it).
>
> I've not followed this
Gerhard Heift writes:
> I'm playing with postgres in OpenVZ. When I migrate my machine with
> the database from one host to another I get following errors:
> 2009-10-04 20:20:43 CEST PANIC: hash table "LOCK hash" corrupted
> 2009-10-04 20:20:43 CEST STATEMENT: begin
> 2009-10-04 20:20:43 CEST
Hello,
I'm playing with postgres in OpenVZ. When I migrate my machine with
the database from one host to another I get following errors:
2009-10-04 20:20:43 CEST PANIC: hash table "LOCK hash" corrupted
2009-10-04 20:20:43 CEST STATEMENT: begin
2009-10-04 20:20:43 CEST @ LOG: server process (P
The problem is identical if I run this example:
#include "postgres.h"
#include "executor/spi.h"
#include "commands/trigger.h"
extern Datum trigf(PG_FUNCTION_ARGS);
PG_FUNCTION_INFO_V1(trigf);
Datum
trigf(PG_FUNCTION_ARGS)
{
TriggerData *trigdata = (TriggerData *) fcinfo->context;
TupleDesc tupdesc;
HeapTuple rettuple;
cha
On Sun, Oct 04, 2009 at 01:07:54PM -0400, Yadisnel Galvez Velazquez wrote:
> The problem is identical if I run this example:
Please attach files instead of posting them in-line :) I've attached
your example as a file.
Please also remember to group-reply instead of replying just to the poster.
Cheers,
Da
Yadisnel Galvez Velazquez wrote:
> I am actually working on a replication project. I need a trigger function,
> but when I compile it, the C compiler returns this message:
>
> # cc -fpic -c main.c
You're missing some -I directives (among other things). I suggest you
compile this way:
cc
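The truncated advice presumably points cc at PostgreSQL's server headers. A hedged sketch (pg_config's path is queried at run time; the fallback path is an assumption for Debian-style installs):

```shell
# Sketch: build the compile command for a server-side C function.
# Assumes pg_config is on PATH; otherwise falls back to a common
# Debian-style location (adjust for your system).
INCLUDEDIR=$(pg_config --includedir-server 2>/dev/null \
             || echo /usr/include/postgresql/server)
CMD="cc -fpic -I$INCLUDEDIR -c main.c"
echo "$CMD"
# then link the shared object: cc -shared -o trigf.so main.o
```

Using `pg_config --includedir-server` rather than a hard-coded path keeps the command correct across PostgreSQL versions and distributions.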
I am actually working on a replication project. I need a trigger function, but
when I compile it, the C compiler returns this message:
# cc -fpic -c main.c
In file included from /usr/include/postgresql/postgres.h:49,
from main.c:1:
/usr/include/postgresql/utils/elog.h:68:28
On Sun, Oct 04, 2009 at 11:39:32AM -0400, Yadisnel Galvez Velazquez wrote:
>
> I am actually working on a replication project. I need a trigger
> function, but when I compile it, the C compiler returns this
> message:
Please include the actual code, or at least a pointer to your source
code
I am actually working on a replication project. I need a trigger function, but
when I compile it, the C compiler returns this message:
# cc -fpic -c main.c
In file included from /usr/include/postgresql/postgres.h:49,
from main.c:1:
/usr/include/postgresql/utils/elog.h:68:
On Sun, 4 Oct 2009, Sam Mason wrote:
Within PG procedures, at least in pgsql, it is impossible to do 'money'
calculations without a loss of precision.
The point is that on *any* computer it's impossible to perform arbitrary
calculations to infinite precision (i.e. "without a loss of precision a
On Sat, Oct 03, 2009 at 10:14:53PM -0400, V S P wrote:
> Within PG procedures, at least in pgsql, it is impossible to do 'money'
> calculations without a loss of precision.
The point is that on *any* computer it's impossible to perform arbitrary
calculations to infinite precision (i.e. "without a l
On Sun, Oct 04, 2009 at 11:08:01AM -0400, Tom Lane wrote:
> Sam Mason writes:
> > On Sun, Oct 04, 2009 at 01:44:30AM -0700, tomrevam wrote:
> >> -> Bitmap Index Scan on session_allocation_info_status_idx
> >> (cost=0.00..5.28 rows=1 width=0) (actual time=1619.652..1619.652
> >> rows=51025 loops
tomrevam writes:
> Bill Moran wrote:
>> My apologies, I should have asked for the output of VACUUM VERBOSE on
>> the problem table in conjunction with these settings. (make sure you
>> do VACUUM VERBOSE when the table is exhibiting the speed problem)
> INFO: "session_allocation_info": found 388
Sam Mason writes:
> On Sun, Oct 04, 2009 at 01:44:30AM -0700, tomrevam wrote:
>> -> Bitmap Index Scan on session_allocation_info_status_idx (cost=0.00..5.28
>> rows=1 width=0) (actual time=1619.652..1619.652 rows=51025 loops=1)
>> Index Cond: ((status)::text = 'active'::text)
>> -> Bitmap Index
Within PG procedures, at least in pgsql, it is impossible to do 'money'
calculations without a loss of precision.
There is an open source library by IBM that I use in my C++ code to do
this, and maybe it can be incorporated into PG. It is called decNumber:
http://speleotrove.com/decimal/decnumber.
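Python's decimal module implements the same General Decimal Arithmetic specification that decNumber does, so the exactness argument can be demonstrated without C++ (an illustration of the arithmetic model, not the decNumber library itself):

```python
from decimal import Decimal

# Binary floats cannot represent 0.10 exactly, so cents drift:
float_total = sum([0.10] * 3)            # 0.30000000000000004, not 0.3
# Decimal arithmetic (the model behind decNumber, and behind
# PostgreSQL's NUMERIC) keeps base-10 amounts exact:
dec_total = sum([Decimal("0.10")] * 3)   # exactly Decimal('0.30')
```

This is the practical reason NUMERIC, rather than float/real, is the usual recommendation for money columns.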
Bill Moran wrote:
>
> My apologies, I should have asked for the output of VACUUM VERBOSE on
> the problem table in conjunction with these settings. (make sure you
> do VACUUM VERBOSE when the table is exhibiting the speed problem)
>
INFO: vacuuming "public.session_allocation_info"
INFO: sc
On Sun, Oct 04, 2009 at 01:44:30AM -0700, tomrevam wrote:
> -> Bitmap Index Scan on session_allocation_info_status_idx
> (cost=0.00..5.28 rows=1 width=0) (actual time=1619.652..1619.652 rows=51025
> loops=1)
>Index Cond: ((status)::text = 'active'::text)
> -> B
On Sat, Oct 03, 2009 at 04:20:59PM +1000, ? wrote:
> Since I also need to consider geqo, is this the best way to do it?
I think you need to be clearer about what you're trying to count.
Consider a nestjoin plan where:
- For the inner side it considers 7 paths and throws away 4.
- For the
Andy Colson wrote:
>
> Can you post EXPLAIN ANALYZE output for (1) when it's quick and (2)
> when it's slow?
>
Here are results:
1. Upon startup:
QUERY PLAN