On Tue, Dec 14, 2004 at 09:25:19AM -0800, Andrew Lapides wrote:
> I tried the second option, i.e. calling psql from the command
> line redirecting output to the file:
>
> psql -U ... -W all_tables.sql > log
>
> Again not everything is recorded into the log file.
Some messages go to standard error, so redirecting only standard output will not capture them.
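For example, something like this should capture both streams (the user/database names and the -f option are placeholders for whatever you actually run; 2>&1 sends standard error to the same file):
  psql -U youruser -W -f all_tables.sql yourdb > log 2>&1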
Add:
host    all    all    10.1.9.15    [your subnet mask number]    trust
in the pg_hba.conf file.
Have you set the ODBC parameters correctly -- database name, etc.?
On Tue, 14 Dec 2004 16:55:20 -0500, Nadia Kunkov
<[EMAIL PROTECTED]> wrote:
> Hi all,
>
> After wasting about half a day I thought I'd ask you for help.
> What's the purpose of the updates? If we knew more about what
> you're trying to do then maybe we could suggest alternate solutions.
There are two fields (of all the records) I need to update when I
insert/update/delete a record. These two fields contain numbers and
they act as a link between t
Hi, I have Postgres 7.4 installed on Red Hat 9. I
downloaded and unzipped the tsearch2 module into the Postgres source dir under contrib/tsearch2.
gmake produced a tsearch2.sql that contains references to '$libdir'. If
I try to use that (after successful gmake install) in a line like
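As a side note (not from the original message): '$libdir' resolves to the installation's package library directory, which you can check with pg_config if it is on your PATH:
  pg_config --pkglibdir    # prints the directory '$libdir' expands to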
pgEdit 1.0b5 is now available. This version adds support for Windows NT
4.0 and Windows 2000. In addition to some minor bug fixes, this version
includes some new editing features and another option for executing
files with psql. The final version of pgEdit 1.0 is expected shortly
after PostgreS
Hi all,
After wasting about half a day I thought I'd ask you for help.
I have a Fedora Core 3 box that came with postgresql 7.4.6 installed.
Locally I have no problems connecting to the database and running psql.
I need to set up an ODBC connection from my Windows box.
Here is what I've done:
1.
I've come across a problem which is only happening intermittently, and I'm
having a difficult time finding information to help me determine the
source of my frustration.
The symptom is the error reported in the subject. We invoke SQL
statements from a Java servlet via pg74jdbc3.jar. The r
>>> Michael Fuhr <[EMAIL PROTECTED]> 12/14/04 1:45 PM >>>
>>On Tue, Dec 14, 2004 at 11:48:21AM -0600, Kall, Bruce A. wrote:
>> What is the easiest way to copy a particular row from one table to
>> another (assuming all the columns are exactly the same). I want to
>> maintain a table ('backup_table') that has rows I have (or will be)
I am attempting to connect to a PostgreSQL database via C# or VB.net
using the Npgsql .Net data provider, but I get the following error when
I attempt to open the connection:
***
An unhandled exception of type 'System.IO.IOException' occurred in
npgsql.dll
Additional information: Unable to
REQUEST FOR DISCUSSION (RFD)
unmoderated group comp.databases.postgresql
This is a formal Request For Discussion (RFD) for the creation of
the worldwide unmoderated Usenet newsgroup comp.databases.postgresql.
This is not a Call for Votes (CFV); you cannot vote at this time.
Hi,
I have a problem capturing the output of my scripts in a file.
For example, I have a set of scripts to create my schema. In psql I just type
\i all_tables.sql and it calls a bunch of scripts - a drop script and then the
individual table creation scripts.
I have all the output inside my psql (wh
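One option (just a sketch; the file name is an example, and note that notices and errors still go to standard error, which \o does not capture) is psql's \o meta-command:
  \o schema_build.log
  \i all_tables.sql
  \o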
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Greg Stark
Sent: Tuesday, December 14, 2004 8:49 PM
To: [EMAIL PROTECTED]
Subject: Re: [GENERAL] Corrupt RTREE index
Scott Marlowe <[EMAIL PROTECTED]> writes:
> IS this same issue true for hash or GiST indexes?
On Tue, Dec 14, 2004 at 07:23:31PM -0800, Scott Frankel wrote:
> is there a reasonable way to traverse the dependencies from the
> students table back to the universities table?
See the documentation for joined tables in the Tutorial and in the
SQL Language documentation. It sounds like you want a join.
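Something along these lines, with made-up table and column names since the actual schema wasn't shown (an intermediate departments table is assumed to link students to universities):
  SELECT u.name
  FROM students s
  JOIN departments d  ON d.dept_id = s.dept_id
  JOIN universities u ON u.univ_id = d.univ_id
  WHERE s.name = 'some student';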
Scott Marlowe <[EMAIL PROTECTED]> writes:
> IS this same issue true for hash or GiST indexes?
I think that's true; AFAIK rtree, GiST, and hash indexes are all not WAL-logged.
> On Tue, 2004-12-14 at 13:49, Dann Corbit wrote:
> > I suggest a warning (if there is not already one generated) on create
> > index for rtree indexes so that users know that they are not fully supported.
On Tue, 2004-12-14 at 17:15 -0800, Tim Vadnais wrote:
> My question is:
> What is stand-alone mode?
http://www.postgresql.org/docs/7.4/static/app-postgres.html
-Neil
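For reference, invoking the stand-alone backend on 7.4 looks roughly like this (untested sketch; the data directory path and database name are placeholders, and it should be run as the postgres user with the postmaster stopped):
  postgres -D /usr/local/pgsql/data mydb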
On Wed, Dec 15, 2004 at 11:55:34AM +1100, Ben wrote:
> I have this scenario and would like to use lock table for this but I
> don't know which mode I should use.
>
> When I delete/update/insert a record from/to a table, I need to update
> some fields of all the records in this table.
What's the purpose of the updates? If we knew more about what you're
trying to do then maybe we could suggest alternate solutions.
Title: VACUUM FULL [ANALYZE] problem
If I run a vacuum full on my system I get the following warning. The optional analyze has no effect on the output.
bwks=# VACUUM FULL;
WARNING: index "pg_database_datname_index" contains 3 row versions, but table contains 4 row versions
HINT: Rebuild the index with REINDEX.
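Acting on that hint would look something like the following (note that pg_database is a shared system catalog, so this needs a superuser and, on some versions, the stand-alone backend):
  REINDEX INDEX pg_database_datname_index;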
My first schema design has passed all the tests I've thrown at it so far,
except one -- and a simple one at that. I wonder if the following boils down
to a question of query construction or if I need to redesign my schema.
Consider the (contrived) example of 3 universities, where each hosts a
*uni
Hi
I have this scenario and would like to use lock table for this but I
don't know which mode I should use.
When I delete/update/insert a record from/to a table, I need to update
some fields of all the records in this table. During this process I
don't want anyone to insert, update or delete the
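One possible shape for this (table and column names are made up): take a lock that blocks other writers for the whole transaction. EXCLUSIVE mode blocks concurrent INSERT/UPDATE/DELETE but still allows plain SELECTs; use ACCESS EXCLUSIVE if readers must be blocked as well.
  BEGIN;
  LOCK TABLE mytable IN EXCLUSIVE MODE;
  -- insert/update/delete the record here ...
  UPDATE mytable SET link_a = 0, link_b = 0;  -- placeholder for recomputing the two fields
  COMMIT;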
Would it be possible to rebuild all non-btree indexes when a recovery
takes place?
Another thing that seems it might be nice is to check the non-btree
indexes during analyze (if that is possible and not too expensive).
-Original Message-
From: Neil Conway [mailto:[EMAIL PROTECTED]
Sent:
On Tue, 2004-12-14 at 14:12 -0600, Scott Marlowe wrote:
> IS this same issue true for hash or GiST indexes?
Yes, it is: currently, only btree indexes are WAL safe.
(I spent some time recently looking into adding page-level concurrency
and WAL to GiST, but I haven't had a chance to finish that work yet.)
On Wed, 2004-12-15 at 10:22 +1100, Harvey, Allan AC wrote:
> I have a small table, about 20 rows (a constant), that is receiving
> about 160 updates per second.
> The table is used to share gathered data with other processes asynchronously.
> After 5 min it is 12 updates per second.
> Performance returns
On Mon, 2004-12-13 at 17:43 -0500, Jimmie H. Apsey wrote:
> Hello all,
> I have just loaded Postgresql 7.3.6-7 onto a new server on the
> recommendation of Tom Lane. It is part of Red Hat AS 3.
> I have Postgresql 7.1.3-5 running on Red Hat AS 2.1.
> I have a simple view from which I select on bo
On the 'old' Red Hat AS 2.1 here are the results of explain and the query:
[ ~]$ time /usr/bin/psql mpt -c"explain select count(*) from tpv;"
NOTICE: QUERY PLAN:
Aggregate (cost=4563.87..4563.87 rows=1 width=56)
-> Nested Loop (cost=870.92..4563.01 rows=342 width=56)
-> Hash Join (cost
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Harvey, Allan
AC
Sent: Tuesday, December 14, 2004 3:23 PM
To: [EMAIL PROTECTED]
Subject: [GENERAL] Performance suggestions?
Hi all,
I'm after suggestions on how to keep the initial performance
of a system I've put together for longer than 2 minutes.
On Tue, 2004-12-14 at 16:50, Jimmie H. Apsey wrote:
> I think my indexes are OK.
>
> I have no settings for 'shared memory buffers' in postgresql.conf on
> either system. Both systems have same, i.e.
> #sort_mem = 512
> #shared_buffers = 2*max_connections # min 16
> #fsync = true
>
> Do you, or
Hi all,
I'm after suggestions on how to keep the initial performance
of a system I've put together for longer than 2 minutes.
I have a small table, about 20 rows (a constant), that is receiving
about 160 updates per second.
The table is used to share gathered data with other processes asynchronously.
After 5 min it is 12 updates per second.
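One likely factor (a guess, not from the thread): dead row versions pile up very quickly in a tiny table updated 160 times per second, so vacuuming it very frequently -- from cron or from the application -- usually keeps performance flat. The table name below is a placeholder:
  VACUUM shared_data;            -- cheap on a 20-row table; run it every minute or so
  VACUUM ANALYZE shared_data;    -- occasionally, to refresh planner statistics too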
"Jimmie H. Apsey" <[EMAIL PROTECTED]> writes:
> On the 'old' Red Hat AS 2.1 here is the results of explain and the query:
The major problem seems to be that the old system is using a nestloop
with inner indexscan on ada_code:
> -> Nested Loop (cost=870.92..4563.01 rows=342 width=56)
>
I think my indexes are OK.
I have no settings for 'shared memory buffers' in postgresql.conf on
either system. Both systems have the same, i.e.
#sort_mem = 512
#shared_buffers = 2*max_connections # min 16
#fsync = true
Do you, or any of you, have any other suggestions for why such a simple
"sele
-Original Message-
From: Jimmie H. Apsey [mailto:[EMAIL PROTECTED]
Sent: Tuesday, December 14, 2004 1:18 PM
To: Dann Corbit
Cc: [EMAIL PROTECTED]
Subject: Re: [GENERAL] Performance differences 7.1 to 7.3
On the 'old' Red Hat AS 2.1 here is the results of explain and the
query:
[ ~]$ ti
It's part of the conventional wisdom of performance tuning to keep
pg_xlog on a separate physical disk, and this seems to be common sense
to me. Am I correct in deducing that the performance gain comes from
preventing a file being written to sequentially from competing for I/O
resources with th
IS this same issue true for hash or GiST indexes?
On Tue, 2004-12-14 at 13:49, Dann Corbit wrote:
> I suggest a warning (if there is not already one generated) on create
> index for rtree indexes so that users know that they are not fully
> supported.
>
> -Original Message-
> From: [EMAIL
Oracle and PostgreSQL schemas seem to have fairly equivalent functionality
from what I've read, except that PostgreSQL schemas aren't as tied to a
particular user as are Oracle's.
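For illustration (names made up), the PostgreSQL side looks like this -- a schema is just a namespace that any sufficiently privileged user can create and use, independent of any particular user account:
  CREATE SCHEMA accounting;
  CREATE TABLE accounting.invoices (id integer PRIMARY KEY, total numeric);
  SET search_path TO accounting, public;  -- resolve unqualified names in accounting first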
I suggest a warning (if there is not already one generated) on create
index for rtree indexes so that users know that they are not fully
supported.
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Tom Lane
Sent: Monday, December 13, 2004 4:14 PM
To: Greg St
Hi,
one more question: how can I set up vacuum handling in postgres to make it
less aggressive? I'm facing a problem where a batch process (inserting 184
records at once, which takes 1 second in normal cases) stops because of a
deadlock. This occurs (seemingly) only when a vacuum analyze is running.
On Tue, Dec 14, 2004 at 11:48:21AM -0600, Kall, Bruce A. wrote:
> What is the easiest way to copy a particular row from one table to
> another (assuming all the columns are exactly the same). I want to
> maintain a table ('backup_table') that has rows I have (or will be)
> deleting from my 'original_table'. (Both tables have the same 68 columns).
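The usual answer is INSERT ... SELECT; a sketch using the table names from the post (the WHERE clause and key column are made up, and no column list is needed since both tables have identical columns):
  INSERT INTO backup_table
  SELECT * FROM original_table WHERE id = 42;  -- id = 42 stands in for the row's key

  DELETE FROM original_table WHERE id = 42;    -- then remove it from the original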
On Tue, Dec 14, 2004 at 06:34:05PM +0100, Christian Kratzer wrote:
>
> At least on FreeBSD 5.x there is no need to build a custom kernel.
> The following can be set in /boot/loader.conf
I forgot about /boot/loader.conf. If I get a chance I'll check if
the settings you posted also work in FreeB
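For reference, such /boot/loader.conf entries look roughly like this (tunable names and values are illustrative; check which ones your FreeBSD release actually accepts as loader tunables):
  kern.ipc.semmni=256
  kern.ipc.semmns=512
  kern.ipc.semmnu=256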
What is the easiest way to copy a particular row from one table to
another (assuming all the columns are exactly the same). I want to
maintain a table ('backup_table') that has rows I have (or will be)
deleting from my 'original_table'. (Both tables have the same 68 columns).
I've tried
INSERT
Hi Greg,
thank you, I realized the problem. I treated these values as per-server
values, not per-connection.
It's now working well with 20 or more concurrent connections.
bye,
-- Csaba
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Greg Stark
Sent: Tue
On Tue, 14 Dec 2004, Michael Fuhr wrote:
On Tue, Dec 14, 2004 at 10:01:47AM -0500, Frank D. Engel, Jr. wrote:
[snipp]
The error probably says something about not having enough shared
memory or semaphores. On FreeBSD some IPC settings can be configured
via sysctl but others might need to be built into the kernel.
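The sysctl route, for the settings that are runtime-tunable, looks something like this (values are placeholders; shmall is counted in pages):
  sysctl kern.ipc.shmmax=134217728   # max shared memory segment size, in bytes
  sysctl kern.ipc.shmall=32768       # total shared memory, in 4KB pages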
Együd Csaba (Freemail) <[EMAIL PROTECTED]> writes:
> shared_buffers = 2 # min 16, at least max_connections*2, 8KB each
You can lower this to 10,000 or even lower.
> max_connections = 100
> work_mem = 16384    # min 64, size in KB
That's 16MB per connection; with a maximum of 100 connections that could
add up to around 1.6GB.
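As a rough illustration (these numbers are starting points worth checking, not values from the thread), more conservative postgresql.conf settings might look like:
  shared_buffers = 10000    # ~80MB at 8KB per buffer
  work_mem = 4096           # 4MB per sort/hash operation, per connection
  max_connections = 100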
On Tue, Dec 14, 2004 at 10:01:47AM -0500, Frank D. Engel, Jr. wrote:
> On Dec 14, 2004, at 5:46 AM, Hengki Suhartoyo wrote:
> >
> >I want to increase my max_connections up to 128
> >connections, but I got that I need to recompile my
> >kernel. I'm a newbie in postgresql and freebsd. How do I
> >increase
> Thank you Oleg,
>
> It works.
Glad to hear that. You might submit this to the PostGIS FAQ.
>
> Greetings
> Jens
>
> -Original Message-
> From: Oleg Bartunov [mailto:[EMAIL PROTECTED]
> Sent: Tuesday, December 7, 2004 12:20
> To: Werdin Jens
> Cc: [EMAIL PROTECTED]
> Subject: Re:
Alex wrote:
Hi,
I have a query that runs pretty slow and tried to use explain to see
where the problem is.
Both tables have 2.1 and 2.8 million records.
In the Explain output I see that a Seq Scan is used on nval_price_hist,
which I don't quite understand.
Could someone help me create the correct indexes?
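Something along these lines usually lets the planner avoid the sequential scan (the column name is made up, since the query itself was cut off): index the column(s) nval_price_hist is joined or filtered on, then re-analyze.
  CREATE INDEX nval_price_hist_security_id_idx
      ON nval_price_hist (security_id);  -- security_id is a hypothetical column
  ANALYZE nval_price_hist;               -- refresh statistics so the planner can use it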
(WinXP, PG8b5, 1GB, 3,2GHz)
Hi,
I regularly get the above error message when I run my applications in
parallel. There are a minimum of 5 applications which have to access the
server in parallel, but in the production environment this number will be about
30-50 (with the additional clients). Each of th
Hi,
I can't afford to move the database to another location; I will still end up
short on disk space. I was wondering if there's a way in version 7.3.4 to
move tables to a different partition, or to have newly created tables
automatically put their files on a different partition.
TIA,
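Tablespaces only arrive in 8.0, so on 7.3.4 the usual (fragile) workaround is to move a table's data file to the other partition and symlink it back. A rough sketch -- the paths, database OID, and relfilenode are placeholders (look the real ones up in pg_database and pg_class), and note that anything that rewrites the table (CLUSTER, some ALTER TABLEs) creates a new file back on the original partition:
  # with the postmaster stopped:
  mv $PGDATA/base/<db-oid>/<relfilenode> /otherdisk/pgdata/<relfilenode>
  ln -s /otherdisk/pgdata/<relfilenode> $PGDATA/base/<db-oid>/<relfilenode>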