Can anyone provide me with some direction on how to write a function
I can load into postgres that will execute a function specified by
OID (or regproc/regprocedure) at runtime, with type safety? I've been
able to write such a function in C, but I was unable to figure out
how to determine t
On Mar 13, 2006, at 11:12 PM, Tom Lane wrote:
The relation-extension race condition could explain recently-added
tuples simply disappearing, though if it happened in more than one
table
you'd have to assume that the race condition window got hit more than
once. The slru race condition is eve
"Eric B. Ridge" <[EMAIL PROTECTED]> writes:
> Does anyone here have any kind of explanation other than bad hardware?
Well, there are several data-corruption bugs fixed between 7.4.8 and
7.4.12, though whether any of them explains your symptoms is difficult
to say:
2005-11-02 19:23 tgl
*
First off, let me make clear that I blame the strange data corruption
problem we encountered today on our hardware raid controller -- some
versions of its firmware are known-to-be-buggy and cause the raid set
to "drop" off, and we've encountered this problem in the past on this
particular s
I'm updating a field via a web form, and an em-dash is
getting stored in the database as 'âÃ-', and is
getting displayed back on the web page as 'âÂÂÃÂ'. The
encoding of the database is SQL_ASCII - should I
change it? And if so, to what and how?
Thanks,
CSN
Brandon Keepers <[EMAIL PROTECTED]> writes:
> Thanks for your quick response! I had actually just been trying that
> (with 7.1) and came across another error:
> NOTICE: ShmemAlloc: out of memory
> NOTICE: LockAcquire: xid table corrupted
> dumpBlobs(): Could not open large object. Explanation
Tom,
On Mon, 2006-03-13 at 20:38 -0500, Tom Lane wrote:
> pg_dump should work. If using a pg_dump version older than 8.1, you
> need to use -b switch and a non-default output format (I'd suggest -Fc).
>
> regards, tom lane
Thanks for your quick response! I had actually ju
Brandon Keepers <[EMAIL PROTECTED]> writes:
> I'm trying to upgrade a postgresql 7.0.3 database that uses large
> objects to a more recent version, but I'm not able to export the blobs.
pg_dump should work. If using a pg_dump version older than 8.1, you
need to use -b switch and a non-default out
On Mar 13, 2006, at 9:50 AM, Michael Fuhr wrote:
On Sun, Mar 12, 2006 at 11:36:23PM -0800, Casey Duncan wrote:
SELECT count(*) FROM webhits
WHERE path LIKE '/radio/tuner_%.swf' AND status = 200
AND date_recorded >= '3/10/2006'::TIMESTAMP
AND date_recorded < '3/11/
I'm trying to upgrade a postgresql 7.0.3 database that uses large
objects to a more recent version, but I'm not able to export the blobs.
pg_dumplo was added in 7.1, so I tried compiling and running that
against the 7.0.3 database, but I get the following error:
./contrib/pg_dumplo/pg_dumplo: Fail
>>> On Mon, Mar 13, 2006 at 3:16 pm, in message
<[EMAIL PROTECTED]>, Tony Caduto
<[EMAIL PROTECTED]> wrote:
> Kevin Grittner wrote:
>> Overall, PostgreSQL
>> has been faster than the commercial product from which we converted.
>
> Are you allowed to say what commercial product you converted fro
Tom Lane <[EMAIL PROTECTED]> writes:
> Harco de Hilster <[EMAIL PROTECTED]> writes:
> > What is the definition of a merge-joinable condition?
>
> Equality on a sortable datatype.
>
> > Can I create an type/operator that compares both records that is
> > considered merge-joinable?
>
> I think y
Kevin Grittner wrote:
The Consolidated Court Automation Programs (CCAP) of the Wisconsin Court
System has migrated to PostgreSQL for all of its Circuit Court web
operations. Eight production databases have been converted, six of them
around 180 GB each, holding statewide information replicated r
gkoskenmaki wrote:
Has anyone used ExtenDB? Our company is going to be putting a database
server cluster in the coming months and we would like some feedback from
anyone who has used ExtenDB as to what their experience with it has been
like.
Haven't had any experience with it myself, and ha
On Mon, 2006-03-13 at 15:26, Scott Marlowe wrote:
> On Mon, 2006-03-13 at 15:16, Tony Caduto wrote:
> > Kevin Grittner wrote:
> > > Overall, PostgreSQL
> > > has been faster than the commercial product from which we converted.
> > >
> >
> >
> > Kevin,
> > Are you allowed to say what commercia
I got the answer from the docs: initcap(text)
thanks anyway,
Ying
Hello all,
Does anyone have available plpgsql codes to update all capital letters
in a column to "the first character is capital and the rest is small" ?
For example, in tableA(id, description)
001, 'ZHANG ZHE XIN'
Emi Lu wrote:
> Hello all,
>
> Does anyone have available plpgsql codes to update all capital letters
> in a column to "the first character is capital and the rest is small" ?
I don't know about plpgsql codes, but there is a function initcap() that
you can use for that.
alvherre=# select initca
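A minimal sketch of initcap() applied to the example from the question (tableA and description are the names given there):

```sql
-- initcap() lowercases the string and upper-cases the first
-- letter of each word, so a plain UPDATE is enough; no plpgsql
-- loop is needed:
SELECT initcap('ZHANG ZHE XIN');   -- 'Zhang Zhe Xin'

UPDATE tableA SET description = initcap(description);
```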
Hello all,
Does anyone have available plpgsql codes to update all capital letters
in a column to "the first character is capital and the rest is small" ?
For example, in tableA(id, description)
001, 'ZHANG ZHE XIN' =>
'Zhang Zhe Xin'
002, 'LIU
Regarding other government users, I read a case study awhile back on the
web regarding ease of installation of databases. This was from a fellow in
Texas - I think he was with the department of agriculture. He
said that he had tried Oracle but had a lot of trouble getting it up and
runnin
Hi All,
Thought I'd give this one more try.
Has anyone used ExtenDB? Our company is going to be putting
a database server cluster in the coming months and we would like some feedback
from anyone who has used ExtenDB as to what their experience with it has been
like.
"Peter" <[EMAIL PROTECTED]> writes:
> I have no triggers defined on any of the tables, and no foreign keys that
> could cause cascaded updates and stuff. Care to see full text of the proc?
> It's pl/PgPerlU
If there's no triggers involved then it sounds like a memory leak. What
PG version is th
Tom Lane <[EMAIL PROTECTED]> writes:
> Of course, there's no free lunch --- the price we pay for escaping
> rollback-segment-overflow is table bloat if you don't vacuum often
> enough.
Well it's worse than that. If you have long-running transactions that would
cause rollback-segment-overflow in
> >> I have stored proc that retrieves a bunch of data, stores it in temp
> >> table, computes all sorts of totals/averages/whatnots from the temp
> >> table, and inserts results in another table. It works fine (except I
> >> don't like wrapping all SQL statements in 'execute'), but multiple
On Mon, 2006-03-13 at 15:16, Tony Caduto wrote:
> Kevin Grittner wrote:
> > Overall, PostgreSQL
> > has been faster than the commercial product from which we converted.
> >
>
>
> Kevin,
> Are you allowed to say what commercial product you converted from?
And whether he can or not, this would
Kevin Grittner wrote:
Overall, PostgreSQL
has been faster than the commercial product from which we converted.
Kevin,
Are you allowed to say what commercial product you converted from?
Thanks,
Tony Caduto
AM Software Design
Milwaukee WI
http://www.amsoftwaredesign.com
--
On Monday 13 March 2006 03:21 pm, Tom Lane wrote:
> Chris Kratz <[EMAIL PROTECTED]> writes:
> > Thanks for the reply. Yes, subselects would work very well and in some
> > ways are more elegant than the hand waving we had to do to get the
> > multi-column aggregates to work.
>
> BTW, there is not a
-Original Message-
From: Richard Huxton [mailto:[EMAIL PROTECTED]
Sent: Monday, March 13, 2006 11:32 AM
To: gkoskenmaki
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] ExtenDB
gkoskenmaki wrote:
>
> Has anyone used ExtenDB? Our company is going to be putting a database
> serve
Tom Lane wrote:
Chris Kratz <[EMAIL PROTECTED]> writes:
Thanks for the reply. Yes, subselects would work very well and in some ways
are more elegant than the hand waving we had to do to get the multi-column
aggregates to work.
BTW, there is not any fundamental reason why we can't su
Greg Stark <[EMAIL PROTECTED]> writes:
> Tom Lane <[EMAIL PROTECTED]> writes:
>> I think you could do something involving a time interval datatype that
>> considers "overlap" as equality and does something reasonable for
>> sorting non-overlapping intervals.
> How could a non-transitive property
> I have to confess I'm not real familiar with rowwise comparisons. Would this
> work when you have a large number of rows. For example, give me all
> individuals and their income their favorite TV Show the first and last times
> they were contacted. ie | Person | First Favorite | Last Favorite
Chris Kratz <[EMAIL PROTECTED]> writes:
> Thanks for the reply. Yes, subselects would work very well and in some ways
> are more elegant than the hand waving we had to do to get the multi-column
> aggregates to work.
BTW, there is not any fundamental reason why we can't support aggregate
functi
On Thursday 09 March 2006 02:18 pm, Merlin Moncure wrote:
> Chris Kratz wrote:
> > Well for anyone else who may be interested in doing something similar,
> > here is what we did. It does require typecasting going into the
> > functions, composite types and using the dot notation to get the value
>
The Consolidated Court Automation Programs (CCAP) of the Wisconsin Court
System has migrated to PostgreSQL for all of its Circuit Court web
operations. Eight production databases have been converted, six of them
around 180 GB each, holding statewide information replicated real-time
from 72 county
Hello Berend,
Thanks for the reply. Yes, subselects would work very well and in some ways
are more elegant than the hand waving we had to do to get the
aggregates to work. The reason we moved away from the subselects is that the
queries tend to be quite complex and all of the joi
The order of terms in a ts_query has no meaning in the current implementation. But you
can use your own ranking function.
Hannes Dorbath wrote:
2 rows of tsvector:
'bar':2 'baz':3 'foo':1
'bar':2 'baz':1 'foo':3
so source text was:
foo bar baz
baz bar foo
ts_query now is 'foo&baz&baz', so both ma
Hello Bruno,
Yes, we have used the distinct on operator in the past and that works quite
well when you have a single ordering column or multiples which don't
contradict each other. The joins would work, but I was hoping for a simpler
solution as this is sql generated from a general purpose que
Harco de Hilster <[EMAIL PROTECTED]> writes:
> What is the definition of a merge-joinable condition?
Equality on a sortable datatype.
> Can I create an type/operator that compares both records that is
> considered merge-joinable?
I think you could do something involving a time interval datatype
On 3/11/06, Bill Moseley <[EMAIL PROTECTED]> wrote:
> I need to insert a row, but how that row is inserted depends on the
> number of items existing in the table. I initially thought
> SERIALIZABLE would help, but that only keeps me from seeing changes
> until the commit in that session.
serializ
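One common way to make a count-dependent insert safe under concurrency (my suggestion, not necessarily what the truncated reply goes on to recommend; the table name is hypothetical) is an explicit table lock:

```sql
BEGIN;
-- SHARE ROW EXCLUSIVE conflicts with itself and with writers,
-- so no other session can insert between our count and our
-- insert; plain reads are still allowed.
LOCK TABLE items IN SHARE ROW EXCLUSIVE MODE;
SELECT count(*) FROM items;
-- ... decide how to insert based on the count, then insert ...
COMMIT;
```

The lock is released at COMMIT, so the window where other writers wait should be kept as short as possible.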
> I will try separate my huge data computation into several pieces
> something like:
[...]
> If I understood correctly, "begin ... exception when .. then ... end"
> can work the same way as commit. In another way, if commands in the
> sub-block (such as step1) run successfully, data in this part (
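A sketch of the sub-block pattern being discussed (step contents are placeholders): each BEGIN ... EXCEPTION ... END block in PL/pgSQL runs as a subtransaction, so a failure inside it rolls back only that block's work rather than the whole function.

```sql
CREATE OR REPLACE FUNCTION process_in_steps() RETURNS void AS $$
BEGIN
    -- Step 1 runs in its own subtransaction: if it fails, only
    -- its changes are rolled back and control passes to the
    -- EXCEPTION clause instead of aborting the whole call.
    BEGIN
        PERFORM 1;  -- step 1 work goes here
    EXCEPTION WHEN others THEN
        RAISE NOTICE 'step 1 failed: %', SQLERRM;
    END;
    -- step 2, step 3, ... follow the same pattern
END;
$$ LANGUAGE plpgsql;
```

Note this is not literally a commit: if the outer transaction later aborts, the work of every step is still rolled back.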
On Sun, Mar 12, 2006 at 11:36:23PM -0800, Casey Duncan wrote:
> SELECT count(*) FROM webhits
>WHERE path LIKE '/radio/tuner_%.swf' AND status = 200
>AND date_recorded >= '3/10/2006'::TIMESTAMP
>AND date_recorded < '3/11/2006'::TIMESTAMP;
[...]
> Aggregate (cost=79
[EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
> hi,
>
> I try to dump a database 8.1.3 on windows
> from pg_dump from 8.0 on Linux
> no result
> is it a normal behavior ?
You want to dump a new version with an old pg_dump? Silly.
My question: 'no result', how can I understand this? Nothing
er
Thanks for your thoughts.
What is the definition of a merge-joinable condition?
Even if I put ExpTime = Infinity (I saw that one coming ;-)), the same
error is reported. My only option here is to add A.exptime = B.exptime
(which is only true for live data if I use INFINITY), and lose the
abil
I have stored proc that retrieves a bunch of data, stores it in temp
table, computes all sorts of totals/averages/whatnots from the temp
table, and inserts results in another table. It works fine (except I
don't like wrapping all SQL statements in 'execute'), but multiple calls
to that p
Hi Merlin,
>> In general, if you have the choice between looping over a large result
>> in a stored procedure (or, even worse, in a client app) and letting the
>> backend do the looping, then letting the backend handle it is nearly
>> always
>> faster.
There are different reasons why a la
Martijn van Oosterhout writes:
> I think the reason it hasn't been done for general join conditions is
> because we havn't thought of an efficient algorithm.
Right, it's keeping track of the unmatched right-hand rows that's a
problem.
> However, I wonder if your case couldn't be handled with a
hi,
I try to dump a database 8.1.3 on windows
from pg_dump from 8.0 on Linux
no result
is it a normal behavior ?
Thanks ...
---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
subscribe-nomail co
"Peter" <[EMAIL PROTECTED]> writes:
> I have stored proc that retrieves a bunch of data, stores it in temp
> table, computes all sorts of totals/averages/whatnots from the temp
> table, and inserts results in another table. It works fine (except I
> don't like wrapping all SQL statements in '
> Please, what is the meaning of 'AIUI' ..
This site was a big help for me as acronyms are popular on this list:
http://www.acronymfinder.com
Regards,
Richard
On Mon, Mar 13, 2006 at 04:51:03PM +0100, Agnes Bocchino wrote:
> Please, what is the meaning of 'AIUI' ..
As I Understand It
--
Martijn van Oosterhout http://svana.org/kleptog/
> Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a
> tool for doing 5% of the work and
Martijn van Oosterhout wrote:
On Mon, Mar 13, 2006 at 03:59:33PM +0100, Agnes Bocchino wrote:
I would like to know how Postgresql works when all the files
(checkpoint_segment *2 + 1)
are full ,
does Postgresql rollback the transaction when all the wal segments are used,
or does the server s
I have stored proc that retrieves a bunch of data,
stores it in temp table, computes all sorts of totals/averages/whatnots from the
temp table, and inserts results in another table. It works fine (except I don't
like wrapping all SQL statements in 'execute'), but multiple calls to that proc
"surabhi.ahuja" <[EMAIL PROTECTED]> writes:
> .DMException: java.sql.SQLException: FATAL: terminating connection due
> to administrator command
> <2006-02-27 18:40:44 CST%idle>LOG: unexpected EOF on client connection
> please note the lines in bold, is it because of this EOF on client
> conn
Martijn van Oosterhout writes:
> On Mon, Mar 13, 2006 at 03:59:33PM +0100, Agnes Bocchino wrote:
>> (I have tried to make the test but without success for finding a long
>> transaction)
> AIUI it just keeps creating more segments. i.e. checkpoint_segment is
> not a hard limit. It's just the numbe
On 13 mar 2006, at 11.35, Helge Elvik wrote:
Is there any way for me to force plpgsql not to use a cached query
plan, but instead figure out what's best based on the LIKE-string
that actually gets passed to the function?
You can build the query as a string and EXECUTE it. This will force a
n
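A minimal sketch of that EXECUTE approach (function, table, and column names are invented): building the statement as a string makes the planner see the actual LIKE pattern on every call instead of reusing a generic cached plan.

```sql
CREATE OR REPLACE FUNCTION count_matches(pat text) RETURNS bigint AS $$
DECLARE
    result bigint;
BEGIN
    -- EXECUTE plans the statement at run time, so the planner can
    -- pick an index scan for a selective prefix pattern and a
    -- seqscan otherwise. quote_literal() guards against quoting
    -- problems in the pattern string.
    EXECUTE 'SELECT count(*) FROM some_table WHERE col LIKE '
            || quote_literal(pat)
        INTO result;
    RETURN result;
END;
$$ LANGUAGE plpgsql;
```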
On Mon, Mar 13, 2006 at 03:59:33PM +0100, Agnes Bocchino wrote:
> I would like to know how Postgresql works when all the files
> (checkpoint_segment *2 + 1)
> are full ,
> does Postgresql rollback the transaction when all the wal segments are used,
> or does the server stop with an error message ?
I would like to know how Postgresql works when all the files
(checkpoint_segment *2 + 1)
are full ,
does Postgresql rollback the transaction when all the wal segments are used,
or does the server stop with an error message ?
(I have tried to make the test but without success for finding a long
t
> >> In general, if you have the choice between looping over a large result
> >> in a stored procedure (or, even worse, in a client app) and letting the
> >> backend do the looping, then letting the backend handle it is nearly
> >> always
> >> faster.
There are different reasons why a large q
On 3/11/06, Frank Church <[EMAIL PROTECTED]> wrote:
>
> I need to access PostgreSQL on a low level using libpq.dll.
>
> Are there any programmers using Delphi here? Free Pascal users is also fine.
I do a lot of programming with C++ builder. If you haven't already,
check out Zeos Database Objects
On Mon, 13 Mar 2006, Hannes Dorbath wrote:
2 rows of tsvector:
'bar':2 'baz':3 'foo':1
'bar':2 'baz':1 'foo':3
so source text was:
foo bar baz
baz bar foo
ts_query now is 'foo&baz&baz', so both matched.
How can I honor the correct order of the first row and rank it higher? The
position inf
ts_query now is 'foo&bar&baz'
Sorry, typo.
On 13.03.2006 12:38, Hannes Dorbath wrote:
ts_query now is 'foo&baz&baz', so both matched.
--
Regards,
Hannes Dorbath
2 rows of tsvector:
'bar':2 'baz':3 'foo':1
'bar':2 'baz':1 'foo':3
so source text was:
foo bar baz
baz bar foo
ts_query now is 'foo&baz&baz', so both matched.
How can I honor the correct order of the first row and rank it higher?
The position information is there, why doesn't rank() / rank_
On Mon, Mar 13, 2006 at 11:02:35AM +0100, Harco de Hilster wrote:
> Hi all,
>
> I am porting my application from Ingres to Postgres, and I have the
> following problem. I am not sure if this is a known limitation of
> Postgresql or a bug. My code works under Ingres but fails in Postgres
> with the
Hi,
I'm having trouble making plpgsql choose the
right query plan for a query. From what I understand from googling around, my
problem happens because plpgsql is very eager to cache query plans, and
therefore often works with "worst-case-scenario" query plans.
The query I'm trying t
Hi all,
I am porting my application from Ingres to Postgres, and I have the
following problem. I am not sure if this is a known limitation of
Postgresql or a bug. My code works under Ingres but fails in Postgres
with the following error:
ERROR: FULL JOIN is only supported with merge-joinable jo
please see the following snippet
Feb 27, 2006 6:23:51 .DMException: java.sql.SQLException: FATAL:
terminating connection due to administrator command
this error message is in one of the log files, and because of the above
error the desired job does not get executed
i tried to see the p
please see the following snippet
Feb 27, 2006 6:23:51 .DMException: java.sql.SQLException: FATAL:
terminating connection due to administrator command
this error message is in one of the log files, and because of the above
error the desired job does not get executed
i tried to see the postg
I have this report query that runs daily on a table with several
hundred million rows total using pg 8.1.3 on Debian Linux on hw with
dual opteron processors:
SELECT count(*) FROM webhits
WHERE path LIKE '/radio/tuner_%.swf' AND status = 200
AND date_recorded >= '3/10/2006'
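For a query of this shape, one option worth testing (my suggestion, not something stated in the thread) is a partial index whose predicate matches the query's WHERE clause exactly, so the date-range scan only touches the rows of interest:

```sql
-- Using the identical status test and LIKE pattern in the index
-- predicate lets the planner prove the index applies; the scan
-- then covers only successful tuner hits.
CREATE INDEX webhits_tuner_date_idx
    ON webhits (date_recorded)
    WHERE status = 200 AND path LIKE '/radio/tuner_%.swf';
```

This only helps if the pattern and status test are fixed across runs; a pattern that changes per query would defeat the partial-index predicate match.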