You should try directing this request to the postgres jdbc list.
Have you looked at
http://jdbc.postgresql.org/documentation/80/datasource.html?
John Sidney-Woollett
Sharon Abu wrote:
Dear Postgresql experts,
First I want to mention that I'm new to the PostgreSQL DB.
I have a J2EE applic
te simplistic and may not be suitable for your
environment. You are free to modify the code to suit your own needs.
John Sidney-Woollett
check_slon
==
#!/bin/sh
# nagios plugin that checks whether the slon daemon is running
# if the 3rd parameter (LOGFILE) is specified then the log file i
e, w.title,
w.inventory_number
from works w
) as t
order by artist_name, title
BTW, I haven't checked any of this, but hopefully it will give you some
pointers or ideas.
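A fuller version of the (truncated) query above might look something like
this - table and column names are assumed, untested:
select artist_name, title, inventory_number
from (
  select distinct on (w.artist_id)
         a.name as artist_name, w.title, w.inventory_number
  from works w
  join artists a on a.id = w.artist_id
  order by w.artist_id
) as t
order by artist_name, title;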
John Sidney-Woollett
tony wrote:
Hello,
I am having a problem with returning distinct rows; this is probably a
newbie questio
pplication) you
will probably end up making your life a little easier.
John Sidney-Woollett
Martijn van Oosterhout wrote:
You can put the DISTINCT ON() query as a subquery, and an ORDER BY in
the outer query.
Sorting by surname is tricky since you need to tell the computer how to
find it...
Hope this
out having to explicitly install it in each
new db I create in the cluster.
Recreating the database using psql dbname < dump.sql does work OK, but
not pg_restore.
Can pg_restore be made to ignore the error?
John Sidney-Woollett
Martijn van Oosterhout wrote:
On Thu, Mar 10, 2005 at 01:
Tom Lane wrote:
John Sidney-Woollett <[EMAIL PROTECTED]> writes:
I'm pretty sure I had the same problem when using pg_restore. If
pl/pgsql is installed in template1, then the restore fails.
And I couldn't find any solution to this on the list either.
You're supposed to re
files?
Thanks
John Sidney-Woollett
Hmm you're right, they do seem to be working OK now.
Thanks for your response - maybe it was just a temporary glitch.
John Sidney-Woollett
Michael Fuhr wrote:
On Mon, Jun 06, 2005 at 08:13:10AM +0100, John Sidney-Woollett wrote:
Can anyone explain why none of the links to download the
Oh my god!
DB is pg 7.4.6 on linux
2005-10-27 05:55:55 WARNING: some databases have not been vacuumed in
2129225822 transactions
HINT: Better vacuum them within 18257825 transactions, or you may have
a wraparound failure.
2005-10-28 05:56:58 WARNING: some databases have not been vacu
Oosterhout wrote:
On Sun, Oct 30, 2005 at 08:50:18AM +, John Sidney-Woollett wrote:
Oh my god!
DB is pg 7.4.6 on linux
Firstly, check pg_database, it should tell you which databases need to
be vacuumed. Any database you regularly vacuumed is fine so maybe the
corruption is in some o
am I likely to have missed in my vacuuming? Because whatever I did
wrong is going to break our current live database at some point soon.
Thanks
John
Tom Lane wrote:
John Sidney-Woollett <[EMAIL PROTECTED]> writes:
I decided to switch over to the slave which is now our live database
template0| -2074187459
(4 rows)
mail_lxtreme is a test mail db and I don't care about it. So it could
have been deleted without any worries...
Which databases are a problem? Is it template0 or bp_live and template1?
Thanks
John
Tom Lane wrote:
John Sidney-Woollett <[EMAIL PROTECTED]>
ve deleted mail_lxtreme and then continued to use bp_live
as though nothing had happened?
Or had database bp_live already been damaged by the wraparound?
Thanks for your great help/advice - it's much appreciated.
John
Tom Lane wrote:
John Sidney-Woollett <[EMAIL PROTECTED]> writes:
OK, I re
before it
bit me in the ass!
Stressful day, but learnt a lot...
Thanks to everyone for their input - great product and great support!
John
Martijn van Oosterhout wrote:
On Sun, Oct 30, 2005 at 06:41:45PM +, John Sidney-Woollett wrote:
Hmm. I'm pretty sure that database mail_lxtrem
ing other vacuums once a month, and running the "vacuumdb
-a" option instead wouldn't be a bad idea...
Many thanks for all your support and advice - you've been great help
(and comfort).
John
Tom Lane wrote:
John Sidney-Woollett <[EMAIL PROTECTED]> writes:
Lincoln Yeoh said:
> At 07:48 PM 10/30/2005 +0000, John Sidney-Woollett wrote:
>
>>"Panic" - that's my middle name. ;)
>>
>>Had I known how to identify the database at fault, and that it would have
>>had no effect on the other databases, then I
Execute the following SQL on your pg cluster:
SELECT datname, age(datfrozenxid) FROM pg_database;
 datname   |    age
-----------+------------
 bp_live   | 1075940691
 template1 | 1130066178
 template0 |   56361936
(3 rows)
Apart from template0 which is a special case (provided i
I think that something like this should work for you
SELECT child_pk, coalesce(cnt, 0)
FROM childtable c
left outer join (
SELECT child_fk, count(*) as cnt
FROM grandchildtable
GROUP BY child_fk
) t
ON (c.child_pk = t.child_fk);
IMHO, if Postgres had Oracle's (+) notation these would be a lot eas
You can use a union if the column types match
{...off the top of my head...}
select id, title, added as sortcol from table1
union
select id, headline, datetime as sortcol from table2
order by sortcol
limit 25;
John
CSN wrote:
I have two tables:
items: id, title, added, ...
news: id, headlin
Stepping back a bit...
Why not use an update trigger on the affected tables to record a
lastupdated timestamp value when the record is changed.
Surely this is simpler than computing some kind of row hash?
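A minimal sketch of such a trigger (table and column names are assumed, and
plpgsql needs to be installed in the database):
create or replace function set_lastupdated() returns trigger as '
begin
  NEW.lastupdated := now();
  return NEW;
end;
' language plpgsql;
create trigger trg_set_lastupdated
before update on yourtable
for each row execute procedure set_lastupdated();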
John
Karen Hill wrote:
Tom Lane wrote:
"Karen Hill" <[EMAIL PROTECTED]> writes:
Ra
Ah, good point! Missed the subtlety of what was being asked.
John
Joachim Wieland wrote:
On Thu, Sep 28, 2006 at 07:09:43AM +0100, John Sidney-Woollett wrote:
Why not use an update trigger on the affected tables to record a
lastupdated timestamp value when the record is changed.
Surely
Merlin Moncure wrote:
i'm wondering if anybody has ever attempted to manage large
collections of binary objects inside the database and has advice here.
We have designed and built an image library using Postgres and NFS
servers which currently holds 1.4 million images totalling more than 250Gb
If you could alter the foreign key constraint such that the update on
t1's primary key cascaded to t2, that would help.
However, I'm not sure that you can alter the constraint in postgres once
it's created.
Hopefully someone more knowledgeable will be able to respond.
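If it can't be altered in place, one workaround is to drop and recreate the
constraint with a cascade rule - a sketch, with constraint and column names
assumed:
alter table t2 drop constraint t2_t1_fk;
alter table t2 add constraint t2_t1_fk
  foreign key (t1_id) references t1 (id) on update cascade;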
John
Rafal Pietrak wrote:
We're looking to upgrade from 7.4 - should we go to 8.1.5 or 8.2?
We have two databases; 7.4.6 and 7.4.11 in a master slave config using
Slony. Both databases use the C locale with UTF-8 encoding on unix.
We've dumped and loaded the data into an 8.1.4 database and have seen no
problems with i
On 10/20/06 10:09, John Sidney-Woollett wrote:
We're looking to upgrade from 7.4 - should we go to 8.1.5 or 8.2?
We have two databases; 7.4.6 and 7.4.11 in a master slave config using
Slony. Both databases use the C locale with UTF-8 encoding on unix.
We've dumped
Tom Lane wrote:
John Sidney-Woollett <[EMAIL PROTECTED]> writes:
Are the differences between 8.1.5 and 8.2 significant? Is 8.2 more about
speed or new features?
8.2 is still all about beta testing.
regards, tom lane
Thanks for the link - that was very useful.
John
Alvaro Herrera wrote:
John Sidney-Woollett wrote:
Is there a page that describes the new features/improvements between
8.1.5 and 8.2? I couldn't find one on the postgres website.
If there aren't *big* performance gains between 8.1
Thanks for the info. I think that we'll move to 8.1.5 first, and then
8.2 when it's stable. We have to keep our test and production systems in
sync (version-wise).
John
Merlin Moncure wrote:
I've been developing against 8.2 for months without a single 8.2
specific problem. I run both linux an
Off the top of my head (and not sure how optimized)...
select t1.id, t1.score, t1.description
from scoretable t1, (
select id, max(score) as score
from scoretable
group by id
) as t2
where t1.id = t2.id
and t1.score = t2.score
order by t1.id
If you get duplicated rows back, then try using
Alternative options for what they're worth - you'd have to run EXPLAIN on
them to see how efficient they are
select id, name from (
select lower(name) as sortkey, id, name from table where name != 'Other'
union
select 'z' as sortkey, id, name from table where name = 'Other'
) as t
order by sortkey
To recover disk space, reindex the heavily updated tables. You can do
this while the database is in production.
Check the REINDEX command.
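For example (table name assumed):
REINDEX TABLE yourtable;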
John
Schwenker, Stephen wrote:
Hello,
I'm having a major Vacuuming problem. I used to do a full vacuum every
morning on my postgres database to clean
Take a look at contrib/tablefunc there is a function called connectby()
that should do what you're looking for.
Otherwise roll your own...
http://archives.postgresql.org/pgsql-sql/2003-10/msg00374.php
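A sketch of a connectby() call (table and column names assumed; the column
definition list in the AS clause is required):
select * from connectby('category', 'id', 'parent_id', '1', 0)
  as t(id int, parent_id int, level int);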
John
Thomas Kellerer wrote:
On 08.01.2007 17:24 Scott Marlowe wrote:
On Mon, 2007-01-08 at
Centric CRM works with postgres
John
Mario Guenterberg wrote:
On Fri, Mar 09, 2007 at 01:22:22AM +, Bradley Kieser wrote:
I hope that someone has cracked this one because I have run into a brick
wall the entire week and after 3 all-nighters with bad installations, I
would appreciate heari
'X'
) as t
order by coalesce(col1, col3);
John Sidney-Woollett
Chris Green wrote:
It's not quite clear (to me at least) whether I can have a UNION and
an ORDER BY in a SELECT statement.
What I want to do is:-
SELECT
col1, col2, col5, col6
FROM
table
I've got a few plpgsql stored functions (in 7.4.x) that use the
select x into y from table where condition for update
syntax without any problem.
Maybe there's something else going on?
John Sidney-Woollett
Eric Brown wrote:
I'm trying to write a stored procedure in plpgsql that se
(which you may
not want it to do).
John Sidney-Woollett
Joost Kraaijeveld wrote:
Hi all,
Is it possible to count and display the number of children of a parent in a
generic query?
parent table: id
child table: id, parent_id
Example output of the query:
parentidnumber_of_children
parent1
). When GROUP BY is present, it is not valid for
the SELECT list expressions to refer to ungrouped columns except within
aggregate functions, since there would be more than one possible value
to return for an ungrouped column.
John Sidney-Woollett
Joost Kraaijeveld wrote:
Hi John,
John Sidney
r)
3) or find someone who has already solved this problem
Does anyone have a reliable way of monitoring the state of a Perc 3/di
array which works with nagios that they'd like to share?
Thanks to anyone who can help.
John Sidney-Woollett
plugin gives an OK or Error message.
Hope that helps someone else in the future.
John Sidney-Woollett
John Sidney-Woollett wrote:
I know that this is off topic, but I'm not getting anywhere and wondered
if anyone has come across and solved this problem before.
We use Nagios to monitor our servers
Try
select a.col1 as ename from a
union
select b.othercolumn as ename from b
order by ename
Give the columns you want to order on the same name using the "as XXX"
syntax, and remove the "a." prefix from the order statement.
John Sidney-Woollett
Antony Paul wrote:
Hi,
I
me from file_info_3 WHERE file_parent_name='/'
order by file_name;
Does that do what you want?
John Sidney-Woollett
Madison Kelly wrote:
Hi all,
I have another question, I hope it isn't too basic. ^.^
I want to do a select from multiple tables but not join them. What I
am trying to do is somet
can't afford to trash our db or data.
Thanks for any help anyone can give.
John Sidney-Woollett
Does anyone know if it's permitted to use the 'C' locale with a UNICODE
encoded database in 7.4.6? And will it work correctly?
Or do you have to use a en_XX.utf8 locale if you want to use unicode
encoding for your databases?
John Sidney-Woollett
etween a
database using a UTF-8 locale and the C locale (if you don't care about
the small issues you detailed below)?
Thanks.
John Sidney-Woollett
Tom Lane wrote:
John Sidney-Woollett <[EMAIL PROTECTED]> writes:
Does anyone know if it's permitted to use the 'C' locale with a
Thanks for the info - to the point and much appreciated!
John Sidney-Woollett
Tom Lane wrote:
John Sidney-Woollett <[EMAIL PROTECTED]> writes:
Do upper() and lower() only work correctly for postgres v8 UTF-8 encoded
databases? (They don't seem to work on chars > standard ascii o
used it, so can't give any advice...
Hope that helps.
John Sidney-Woollett
Max wrote:
Hello,
Our postgresql database is getting too big to be handled by one server. We
need the database to be in RAM and cannot afford swapping. At the moment,
we're using only 3GB of RAM, however our bus
your database. Or install directly
into your database...
Hope that helps.
John Sidney-Woollett
Andre Schnoor wrote:
"Joshua D. Drake" wrote:
Andre Schnoor wrote:
Hi,
I am moving from Sybase to pgsql but have problems with stored procedures.
The typical procedure uses
a) named parameters
e::date;
vInv.Currency := vCustOrder.Currency;
vInv.TaxCode := vCustOrder.WSTaxCode;
vInv.TaxRate := vCustOrder.TaxRate;
vInv.Gross := round(vCustOrder.Gross,2);
vInv.Net := round(vCustOrder.Net,2);
vInv.Tax := round(vCustOrder.Gross - vInv.Net,2);
RETURN NEXT vInv;
END LOO
Yes, it does. And it's a lot easier than DBMS_SQL too!
Look at the EXECUTE command in the pl/pgsql programming language.
See http://www.postgresql.org/docs/7.4/interactive/plpgsql-statements.html
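A small sketch of dynamic SQL via EXECUTE (function and table names are
assumed; written 7.4-style with ALIAS and FOR ... IN EXECUTE, since
EXECUTE ... INTO arrived in a later release):
create or replace function count_rows(text) returns bigint as '
declare
  p_table alias for $1;
  rec record;
begin
  for rec in execute ''select count(*) as n from '' || quote_ident(p_table) loop
    return rec.n;
  end loop;
  return 0;
end;
' language plpgsql;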
John Sidney-Woollett
Mark Battersby wrote:
Hi
When looking at PostGres through the eyes
Can you not use your table's primary key value instead?
If your table is created with OIDs you may be able to use those -
although I don't know whether this is advisable or not since I never use
OIDs...
John Sidney-Woollett
Clodoaldo Pinto wrote:
This one must be obvious for most here.
I
John Sidney-Woollett
Clodoaldo Pinto wrote:
On Fri, 04 Feb 2005 07:38:26 +0000, John Sidney-Woollett
<[EMAIL PROTECTED]> wrote:
Can you not use your table's primary key value instead?
John, If a primary key existed there would be no duplicates.
The row insertion is made in 650k rows batche
n 2013308218 transactions
HINT: Better vacuum them within 134175429 transactions, or you may have
a wraparound failure.
VACUUM
Why are we seeing this message when the only databases in this cluster
is the one we are vacuuming (each night), and template0 and template1?
Is there something that we
e a week rather
than every night (like our main database)?
Thanks for your help.
John Sidney-Woollett
Richard Huxton wrote:
John Sidney-Woollett wrote:
I've got a 7.4.6 db running on linux where we've been vacuuming "full"
the database each night, and have recently switched to
it reference to vacuuming template0/1 in the
section on "Preventing transaction ID wraparound failures" would be helpful.
I'll add a weekly cron job to vacuum these two template databases.
Thanks for your help again.
John Sidney-Woollett
Richard Huxton wrote:
John Sidney-Woollett wrote
loss situations
caused by people deciding that they only needed to vacuum their active
user tables, rather than issuing database-wide vacuum commands. That
will appear to work fine ... for a while.
were added to the 7.4.x docs that would help others too.
Thanks
John Sidney-Woollett
Richard Hux
were added to the 7.4.x docs that would help others too.
Thanks
John Sidney-Woollett
John Sidney-Woollett wrote:
Thanks Richard, I found the page too...
However the implication (and my confusion) is that you need to vacuum
your own databases only. It's not clear (to me) that you have to do the
.
Thanks
John Sidney-Woollett
Tom Lane wrote:
John Sidney-Woollett <[EMAIL PROTECTED]> writes:
However the implication (and my confusion) is that you need to vacuum
your own databases only. It's not clear (to me) that you have to do the
same for template0 and template1 as well.
You hav
sible data loss??)...
Question: Why do our database objects still refer to removed slony
objects after they are removed?
John Sidney-Woollett
More info...
Our system is a web application which runs against a postgres 7.4.6
database. Tomcat is restarted at 5am each day.
Last Friday afternoon we uninstalled th
; recompile function, and re-execute
OK -> return result
ERROR - OID -> report error
This would help get rid of the temp-table-in-functions workaround of
having to use an EXECUTE statement.
Thanks for your help and feedback.
John Sidney-Woollett
Richard Huxton wrote:
John Sidney-Woollett
or OID issues... :(
Maybe someone will get to this for 8.1?
John Sidney-Woollett
ps That having been said I still think postgres is very good, and we're
only using 7.4 at the moment!...
slony, even if the
issue is an artifact of postgres rather than slony itself.
At any rate, I think that a warning is definitely needed in the slony
manuals for the slonik uninstall node command.
John Sidney-Woollett
David Parker wrote:
We recently ran into this as well, because in testing we had p
Have you tried creating a script to do the import operation, and then
scheduling it to run periodically using cron (on unix) or at (on windows)?
Seems like that is all you need to do...
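For a simple comma-delimited file, the scripted step could be little more
than a COPY (path and table name assumed; use psql's \copy instead if the
file lives on the client machine):
copy yourtable from '/path/to/data.csv' with delimiter ',';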
John Sidney-Woollett
walker1145 wrote:
I get a dump of some data in a CSV file and am able to import it
Try this for starters - it's a good introductory article
http://www.onlamp.com/pub/a/onlamp/2004/11/18/slony.html
Maybe someone else can provide other links.
John Sidney-Woollett
Dieter Schröder wrote:
Hello all,
I am currently migrating a few Oracle servers to PostgreSQL and
management wan
Take a look at postfix's proxymap feature - I believe it works for MySQL
and Postgres, and may solve your problem.
http://www.postfix.org/proxymap.8.html
From the docs
"To consolidate the number of open lookup tables by
sharing one open table among multiple processes.
For example, making m
We're seeing memory problems on one of our postgres databases. We're
using 7.4.6, and I suspect the kernel version is a key factor with this
problem.
One runs under Red Hat Linux 2.4.18-14smp #1 SMP and the other under
Debian Linux 2.6.8.1-4-686-smp #1 SMP.
The second Debian server is a replicate
er back to
normality. Stopping slon on its own is not enough.
John
Tom Lane wrote:
John Sidney-Woollett <[EMAIL PROTECTED]> writes:
This server just consumes more and more memory until it goes swap crazy
and the load averages start jumping through the roof.
*What* is consuming
Not sure if there is a numeric formatting option that allows what you want.
But how about?
substr(to_char(1029, '9,999'),2)
John
Terry Lee Tucker said:
> Greetings List:
>
> I am using to_char to format numeric data into a string that is ultimately
> displayed in an XmText widget. Much of the
Tom Lane said:
> John Sidney-Woollett <[EMAIL PROTECTED]> writes:
>> Tom Lane wrote:
>>> *What* is consuming memory, exactly --- which processes?
>
>> Sorry but I don't know how to determine that.
>
> Try "ps auxw", or some other incantation i
Martijn
Thanks for the tip.
Since the connections on this server are from slon, I'm hoping that they
hang around for a *long* time - long enough to take a look to see
what is going on.
John
Martijn van Oosterhout wrote:
On Tue, Dec 13, 2005 at 04:37:42PM -0000, John Sidney-Woo
In trying to investigate a possible memory issue that affects only one
of our servers, I have been logging the process list for postgres
related items 4 times a day for the past few days.
This server uses postgres 7.4.6 + slon 1.1.0 on Debian i686 (Linux
server2 2.6.8.1-4-686-smp) and is a slo
arting ALL slon processes on all
nodes mean that I wouldn't have to stop and restart the slave postgres DB?
Thanks
John
Ian Burrell wrote:
On 12/22/05, John Sidney-Woollett <[EMAIL PROTECTED]> wrote:
In trying to investigate a possible memory issue that affects only one
of our server
You need to create a servlet (or the equivalent in ASP, PHP etc) that
sits between the client's browser and your database. In this layer you
implement a cache. The first time the image is requested you retrieve it
from the database and serve it normally, but you also copy the byte
stream to your
Either date_trunc
eg, SELECT date_trunc('hour', TIMESTAMP '2001-02-16 20:38:40');
Result: 2001-02-16 20:00:00
Or format the timestamp as a string
select to_char(now(), 'YYYY-MM-DD HH24:MI:SS');
Hope that helps.
John
Tadej Kanizar wrote:
Hi!
I've got a timestamp field.. and it returns s
Select last_value from your_sequence_name;
John
MG wrote:
Hello,
I use PostgreSQL 8.0.3.
I want to get the information of the last value of a sequence.
The function 'currval' only gives the value back if a nextval was
executed before.
Return the value most recently obtained by nextval for t
I'm no expert on this but can't you limit the points to checking any
hotel whose lat is +- 25km north/south of your city, AND whose longitude
is also +- 25km of your city. It's crude but will probably eliminate
lots of points you should never be checking...
If you could index an approx lat and
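A rough sketch of that bounding-box filter (table and column names assumed;
0.225 degrees of latitude is roughly 25km, and the longitude window is
widened by the cosine of the latitude):
select h.*
from hotels h, cities c
where c.name = 'SomeCity'
and h.lat between c.lat - 0.225 and c.lat + 0.225
and h.lon between c.lon - 0.225 / cos(radians(c.lat))
            and c.lon + 0.225 / cos(radians(c.lat));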
Have you tried the initcap function?
select initcap('abcd efgh');
initcap
---
Abcd Efgh
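Applied to a whole column it would be something like (table and column
names assumed):
update yourtable set yourcolumn = initcap(yourcolumn);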
John
Emi Lu wrote:
Hello all,
Does anyone have available plpgsql codes to update all capital letters
in a column to "the first character is capital and the rest is small" ?
For example, in
I just added a new table to a slony replication set. The new table seems
to have a really high tab_reloid value of 94,198,669
I'm using Slon 1.1.5 with pg 7.4.6 and 7.4.11 on unix
On the master I see (wcprogallery is the new table added to set 4)
tab_id|tab_reloid|tab_relname |tab_nspname|tab_
My tables are defined "WITHOUT OID" - does that make a difference?
John
Hannu Krosing wrote:
On Thu, 2006-04-13 at 10:06, John Sidney-Woollett wrote:
I just added a new table to a slony replication set. The new table seems
to have a really high tab_reloi
Thanks for the reassurance. You're right the db has been around for a while.
Does anyone know if OIDs for data and system (DDL) objects come from the
same number generator?
John
Christopher Browne wrote:
In an attempt to throw the authorities off his trail, [EMAIL PROTECTED] (John
S
Not sure if I have this the right way round, but one option is a self-join
select p2.first, p2.last, p1.first, p1.last
from people p1, people p2
where p1.person_id = p2.alias
and p2.isalias = true;
Another is to use a sub-select as a column result (haven't checked the
SQL for mistakes)
select
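The truncated sub-select might look something like this (column names
assumed from the self-join above, untested):
select p2.first, p2.last,
  (select p1.first || ' ' || p1.last
   from people p1
   where p1.person_id = p2.alias) as real_name
from people p2
where p2.isalias = true;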
If you're using a (java) web app with a connection pool, is there a
possibility that some connections are configured with auto-commit=false
and that some future transaction issues a rollback which may include the
insert of the order items?
Perhaps some kind of persistence manager is doing this
I would think the answer is yes.
pljava allows you to create java functions, and triggers invoke functions
eg
CREATE TRIGGER sometrigger
BEFORE UPDATE
ON schema.table
FOR EACH ROW
EXECUTE PROCEDURE yourfunction();
Here's a link for the pljava language.
http://gborg.postgresql.org/proj
Do you mean?
select replace(address, 'mailto:', '') from people
... and if you only want to find the ones that start with "mailto:";
select replace(address, 'mailto:', '') from people
where address like 'mailto:%'
John
badlydrawnbhoy wrote:
Hi all,
I hope this is the right forum for this, b
Or something like
select ltrim(substr(address, 8)) from people
where address like 'mailto:%'
union
select address from people
where address not like 'mailto:%'
John
John Sidney-Woollett wrote:
Do you mean?
select replace(address, 'mailto:', '') from
s all email address that don't need the prefix stripped off
The UNION of the two gives you all the unique/distinct addresses by
combining the results from the first and second query.
John
brian ally wrote:
John Sidney-Woollett wrote:
I need to locate all the entries in a table that
I don't think that null values are indexed - you'll probably need to
coalesce your null data value to some value if you want it indexed.
You can coalesce those values back to null when you retrieve the data
from the query.
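A sketch using an index on the coalesced expression (index, table, column
and placeholder value are all assumed):
create index idx_mytable_col on mytable (coalesce(col, 'N/A'));
select id, nullif(coalesce(col, 'N/A'), 'N/A') as col
from mytable
where coalesce(col, 'N/A') = 'N/A';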
John
Carlos Oliva wrote:
Thank for your response Alan. This indeed c
specific location (either the beginning or the
end depending on your substitution value).
John
Greg Stark wrote:
John Sidney-Woollett <[EMAIL PROTECTED]> writes:
I don't think that null values are indexed - you'll probably need to coalesce
your null data value to some valu
Take a look at http://www.databaseanswers.com/modelling_tools.htm
I have used Case Studio 2 (fine, weak on documentation generation), and
PowerDesigner by Sybase (truly excellent! but $$$)
John
[EMAIL PROTECTED] wrote:
Bjørn T Johansen wrote:
Have you tried Druid (http://druid.sourceforge.
Back in April 2006 I emailed about high OIDs in a 7.4.6 database.
I've just added new tables to the database (and the slony replication
set) and I'm seeing much higher OID values for the new tables.
In April I added a table, it was assigned an OID value of 94198669 (94
million). I've just adde
Jim C. Nasby wrote:
> Except IIRC the OP is running 7.4 which doesn't have checks in DDL
> code to deal with OID collisions. :(
This is not good news! :(
What about other long-running 7.4.x DBs? Do you really have to dump, init
and restore every once in a while?
Also, do you know what is actua
We'll probably upgrade to 8.1.x before we hit the wraparound problem! :)
Hmm, looks like slony uses OIDs... And I found a couple of my own tables
which were incorrectly created with OIDs.
select relname, relnamespace, reltype from pg_catalog.pg_class where
relhasoids=true;
relname
Slony does appear to use OIDs.
John
Jim C. Nasby wrote:
On Mon, Jun 12, 2006 at 05:02:09PM +0100, John Sidney-Woollett wrote:
Jim C. Nasby wrote:
Except IIRC the OP is running 7.4 which doesn't have checks in DDL
code to deal with OID collisions. :(
This is not good news! :(
What
Save yourself some effort and use Lucene to index a directory of your 300
word documents. I'm pretty sure that Lucene includes an extension to read
Word documents, and you can use PDFBox to read/write PDF files. Marrying
the searching and displaying of results to your web application should be
triv
at I may be able to run the demo.
thanks for any help you can provide.
James
"John Sidney-Woollett" wrote:
Save yourself some effort and use Lucene to index a directory of your 300
word documents. I'm pretty sure that Lucene includes an extension to read
Word documents, and you ca
Without trying it out, how about something like:
select username, maxbid
from users u, (
select user_id, max(amount) as maxbid
from bids
where auction_id = XXX
group by user_id
) as b
where u.id = b.user_id;
John
Alan Bullock wrote:
hi all, I have the following schema:
CREATE TABLE a
In addition to making sure databases are vacuumed regularly, it is worth
running REINDEX on tables that see a lot of updates (or insert/deletes).
Running REINDEX on a regular basis will keep the indexes compacted and
can noticeably improve the database performance.
The other benefit is that t
I'd say that the biggest benefit of pl/pgsql for postgres is that it is
so close to Oracle's own procedural language. This makes the job of
porting from Oracle to postgres *nearly* trivial.
Convincing a site to switch from Oracle to Postgres is therefore easier
and a major feather in postgres'
Disagree.
We only apply reindex on tables that see lots of updates...
With our 7.4.x databases we vacuum each day, but we see real performance
gains after re-indexing too - we see lower load averages and no decrease
in responsiveness over time. Plus we have the benefit of reduced disk
space u
If you have a primary key value (or OID?) then you can delete the
duplicates in situ using something like (untested)
-- should work if never more than 1 duplicate row for colname1, colname2
delete from table where pk_value in (
select min(pk_value)
from table
group by colname1, colname2
having count(*) > 1
);