Carl R. Brune wrote:
I should have added that I want to make further use of the temporary
table after the COMMIT -- the rollback approach you propose makes it
go away.
In which case the transaction isn't READ ONLY. You have two options:
CREATE TEMPORARY TABLE ... AS SELECT ...
BEGIN READ ONLY;
.
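A minimal sketch of that approach, assuming a hypothetical source table "orders": build the temporary table outside the read-only transaction, then open the transaction and query it:
CREATE TEMPORARY TABLE tmp_report AS
    SELECT * FROM orders WHERE order_date < DATE '2006-01-01';
BEGIN READ ONLY;
SELECT count(*) FROM tmp_report;
COMMIT;
-- tmp_report still exists after the COMMIT and can be reused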
[EMAIL PROTECTED] wrote:
I would like to set up a database server for a postgres web application.
My DB size will be 20 GB.
I will write about once a day, with up to 10 to 15 simultaneous writes.
How should I size my server (memory, CPU, ...)?
The only reliable way is to do some testing. Now 10,000 rows
On Fri, Jul 07, 2006 at 03:48:00PM +0200, Marc Haber wrote:
> I am the maintainer of Debian's packages for exim4, a powerful and
> versatile Mail Transfer Agent developed in Cambridge and in wide use
> throughout the Free Software Community (http://www.exim.org/).
>
> One of our daemon flavours ha
On Tue, Aug 08, 2006 at 05:25:12PM -0400, Jasbinder Bali wrote:
> Why I'm asking this is (though generally the m/c hosting the middle-tier
> component can connect to the m/c hosting the DB without any DB client
> installed) because I tried to run ecpg on the middle-tier m/c and it doesn't
> work. Well
hi,
I'm working on the implementation of a data warehouse on Postgres.
For analysis of aggregated data I'd like to use some OLAP tools, for
example Mondrian, or use materialized views (if better).
My questions:
is there any documentation about
warehouse (with OLAP or materialized
The best way of finding out is to do some testing - use a standard
desktop machine (or laptop) and see how that performs. Make sure you
have a realistic mix and amount of data though.
If I had to guess, I'd buy a cheap server with some SATA disks (at least
2 so you can mirror them) and as mu
Jasbinder Bali wrote:
On my database server, I'm using a packaged version of postgres.
I installed it using "yum install postgres", and this in turn searches for
the RPM.
So, it's a packaged version of postgresql.
Now as far as my client is concerned, that has the middle-tier component
written in C,
is there a way to configure psql/readline so that it doesn't clear the
screen after browsing a query with 'q'?
regards,
merlin
On Wed, Aug 09, 2006 at 09:02:55AM -0400, Merlin Moncure wrote:
> is there a way to configure psql/readline so that it doesn't clear the
> screen after browsing a query with 'q'?
You don't say the OS, but you may be seeing the effects of less working
on a terminal with a separate interactive scree
On Aug 9, 2006, at 9:02 , Merlin Moncure wrote:
is there a way to configure psql/readline so that it doesn't clear the
screen after browsing a query with 'q'?
Assuming you are using less as your PAGER, then see:
http://www.greenwoodsoftware.com/less/#tite
One can add the -X option to your LE
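If changing the less settings isn't convenient, a workaround (an alternative to the advice above, assuming the pager itself is the issue) is to turn psql's pager off entirely, either interactively or in ~/.psqlrc:
\pset pager off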
Hi,
I'm running PostgreSQL version 8 on a dual 2.4GHz Xeon with 1GB of RAM
and an IDE hard drive. My big table has around 9 million records.
Is there a tuning parameter I can change to increase speed of selects?
Clearly, there's already some buffering going on since selecting an
indexed ~50,
On 8/9/06, Stefano B. <[EMAIL PROTECTED]> wrote:
hi,
I'm working on the implementation of a data warehouse on Postgres.
For analysis of aggregated data I'd like to use some OLAP tools, for
example Mondrian, or use materialized views (if better).
My questions:
is there any documentation
On Wed, Aug 09, 2006 at 09:19:31AM -0400, Tom Laudeman wrote:
> Hi,
>
> I'm running PostgreSQL version 8 on a dual 2.4GHz Xeon with 1GB of RAM
> and an IDE hard drive. My big table has around 9 million records.
>
> Is there a tuning parameter I can change to increase speed of selects?
> Clearly
On Wed, Aug 09, 2006 at 03:46:38PM +0200, Martijn van Oosterhout wrote:
> On Wed, Aug 09, 2006 at 09:19:31AM -0400, Tom Laudeman wrote:
> > Is there a tuning parameter I can change to increase speed of selects?
> > Clearly, there's already some buffering going on since selecting an
> > indexed ~5
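As a starting point for questions like this, a minimal sketch (with a hypothetical table, column, and value) of how to check whether the planner uses the index and where the time goes, plus the memory settings usually looked at first:
EXPLAIN ANALYZE
SELECT * FROM big_table WHERE indexed_col = 42;
SHOW shared_buffers;
SHOW effective_cache_size;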
I'm having some trouble getting a google hit on this topic, and the docs
aren't doing much for me.
What I'm wondering is, how do I limit any particular postgres operation,
in particular a long-running batch operation with lots of inserts, from
bogging down a server?
This is not so much a que
Richard Huxton writes:
> Carl R. Brune wrote:
>> I should have added that I want to make further use of the temporary
>> table after the COMMIT -- the rollback approach you propose makes it
>> go away.
> In which case the transaction isn't READ ONLY.
It does seem a bit inconsistent that we allow
On Wed, 2006-08-09 at 09:02 -0400, Merlin Moncure wrote:
is there a way to configure psql/readline so that it doesn't clear the
screen after browsing a query with 'q'?
I assume you're using the less text pager on a Unix box. I think what you're seeing is less clearing the screen aft
Hello all.
I am trying to build postgres from source on a WINXP system. I am using
MSVC++.Net and I use the link with it to get to the command prompt. After
I do that I run vcvars32.bat and then nmake /f win32.mak, and I get:
C:\Dev\postgresql-8.1.4\src>nmake /f win32.mak
On 8/9/06, Reece Hart <[EMAIL PROTECTED]> wrote:
On Wed, 2006-08-09 at 09:02 -0400, Merlin Moncure wrote:
is there a way to configure psql/readline so that it doesn't clear the
screen after browsing a query with 'q'?
I assume you're using the less text pager on a Unix box. I think wh
Hi, I'm having a problem with one of my functions, where I delete all
rows containing a particular date and then re-insert a row with that
same date. When I try this I get a constraint error. This just started
recently, after upgrading our database from 7.4 to 8.1.4 (now on RH EL).
here's the
On Wed, Aug 09, 2006 at 10:33:02AM -0700, Ron St-Pierre wrote:
> ERROR: duplicate key violates unique constraint "systemcounts_pkey"
> CONTEXT: SQL statement "INSERT INTO dm.systemCounts (updateDate) VALUES (
> $1 ::date)"
> PL/pgSQL function "updatesystemcounts" line 8 at SQL statement
Is the
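For readers following the thread, the pattern being described looks roughly like this; the schema, table, and function names come from the error text above, but the parameter and body are an assumption, not the poster's actual code:
CREATE OR REPLACE FUNCTION updatesystemcounts(p_date date) RETURNS void AS $$
BEGIN
    -- delete all rows for the given date, then re-insert a row for that date
    DELETE FROM dm.systemCounts WHERE updateDate = p_date;
    INSERT INTO dm.systemCounts (updateDate) VALUES (p_date);
END;
$$ LANGUAGE plpgsql;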
Hello,
I will be basically unavailable from this Saturday until the 21st of
August. I will be spending a long week in SF at LinuxWorld West.
Please use email to contact me if it is important.
Sincerely,
Joshua D. Drake
--
=== The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Suppor
On 8/9/06, Ron St-Pierre <[EMAIL PROTECTED]> wrote:
Hi, I'm having a problem with one of my functions, where I delete all
rows containing a particular date and then re-insert a row with that
same date. When I try this I get a constraint error. This just started
recently, after upgrading our datab
> I don't see how checkpoint_timeout is relevant. Just because we
> checkpoint doesn't mean the WAL file will get archived. I have to have
> 16M of WAL traffic before a file gets archived regardless of
> check-pointing, or am I missing something?
Right, I think ;-) If you want finer-grained back
What about creating views on areas of the table that are queried often?
I don't know if you have access or the ability to find what type of
trends the table has, in terms of queries, but if you create some views
on frequently visited information, this could also help.
Tom Laudeman wrote:
Hi
I too faced the same kind of problem building libpq on Windows and as a solution removed "inline" from the file "\backend\utils\mb\wchar.c".
Did the following ... changed "static inline int" to "static int" at a couple of places in the wchar.c file.
Run "nmake" after doing this specific c
Louis,
Views certainly help in managing complexity. They do nothing to improve
query speed. Querying a view gets rewritten to queries on the underlying
tables on the fly (as long as there are no materialized views, which are
still on the TODO list).
--
GHUM Harald Massa
persuadere et programmare
Hara
Shoaib Mir wrote:
> I too faced the same kind of problem building libpq on Windows and as a
> solution removed "inline" from the file
> "\backend\utils\mb\wchar.c"
>
> Did the following ... changed "static inline int" to "static int"
> instead at a couple of places in the wchar.c file
>
> Run
> Views certainly help in managing complexity. They do nothing to improve
> query-speed.
>
> Querying a view gets rewritten to queries to the underlying tables on the
> fly.
> (as long as there are no materialized views, which are still on the TODO
> list)
Would partial indexes on the most queri
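For reference, a partial index only covers rows matching its WHERE clause, so lookups on that hot subset use a much smaller index; a minimal sketch with hypothetical names:
CREATE INDEX idx_orders_recent
    ON orders (customer_id)
    WHERE order_date >= DATE '2006-01-01';
-- a query that repeats the same condition can use the smaller index:
SELECT * FROM orders
WHERE customer_id = 42 AND order_date >= DATE '2006-01-01';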
Shoaib,
That looks like it worked, thank you!
Bruce,
I am using the following compiler:
C:\Dev\postgresql-8.1.4\src\interfaces\libpq\Release>nmake --help
Microsoft (R) Program Maintenance Utility Version 7.10.3077
Copyright (C) Microsoft Corporation. All rights reserved.
Which c
DEV wrote:
> Shoaib,
> That looks like it worked, thank you!
>
> Bruce,
> I am using the following compiler:
> C:\Dev\postgresql-8.1.4\src\interfaces\libpq\Release>nmake --help
>
> Microsoft (R) Program Maintenance Utility Version 7.10.3077
> Copyright (C) Microsoft Corporation. All
I'm not so sure about that. When you create a view on a table - at least
with Oracle - which is a subset (the trivial or 'proper' subset being the
entire table) of the information in the table, then when a select is
issued against the table, Oracle at least determines if there is a view
already on a t
Bruce Momjian <[EMAIL PROTECTED]> writes:
> Interesting. It seems the key line is here:
> cl : Command line warning D4029 : optimization is not available in the
> standard edition compiler
> So the "standard" version doesn't support inline functions.
Doesn't it simply ignore the inline keyword t
Tom Lane wrote:
> Bruce Momjian <[EMAIL PROTECTED]> writes:
> > Interesting. It seems the key line is here:
>
> > cl : Command line warning D4029 : optimization is not available in the
> > standard edition compiler
>
> > So the "standard" version doesn't support inline functions.
>
> Doesn't it
Michael Fuhr wrote:
On Wed, Aug 09, 2006 at 10:33:02AM -0700, Ron St-Pierre wrote:
ERROR: duplicate key violates unique constraint "systemcounts_pkey"
CONTEXT: SQL statement "INSERT INTO dm.systemCounts (updateDate) VALUES (
$1 ::date)"
PL/pgSQL function "updatesystemcounts" line 8 at SQL
On Wed, 2006-08-09 at 14:58, louis gonzales wrote:
> I'm not so sure about that, when you create a view on a table - at least
> with Oracle - which is a subset(the trivial or 'proper' subset is the
> entire table view) of the information on a table,
Okay, I get a build and I get the release of the libpq built, but it does
not build the Debug version of that. I have tried going to the directory
and running nmake /f win32.mak DEBUG but it errors with:
Building the Win32 static library...
NMAKE : fatal error U1073: don't know how to m
Michael,
Great suggestion. I've read about CLUSTER, but never had a chance to
use it. The only problem is that this table with 9 million records has
5 or 6 indexes. It is hard to pick the most used, but I'll bet CLUSTER
will make at least one of the queries run very fast, especially for an
inde
On Wed, Aug 09, 2006 at 04:54:00PM -0400, Tom Laudeman wrote:
> I'll try CLUSTER (I'm looking forward to that test), but if we really
> need speed, it will probably be necessary to create copies of the table,
> or copy portions of the table elsewhere (essentially creating
> materialized views, I
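For reference, a minimal sketch of the command under discussion, using the 8.1-era syntax and hypothetical table/index names; CLUSTER physically reorders the table by the chosen index, so only queries that follow that ordering benefit:
CLUSTER big_table_date_idx ON big_table;
ANALYZE big_table;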
Dev,
You are doing it a little wrong for the Debug version; in order to build
the Debug version use the following:
nmake /f win32.mak DEBUG=1
Bruce,
If you remove "inline" the build process goes fine, and if you don't, it
first gives a few warnings and in the end quits the build process with a
fatal error.
Lo
Shoaib Mir wrote:
> Dev,
>
> You are doing it a little wrong for Debug version, in order to build the
> Debug version use the following:
>
> nmake /f win32.mak DEBUG=1
>
> Bruce,
>
> If you remove "inline" the build process goes fine and if you don't, it first
> gives a few warnings and in the en
As a workaround, if you don't want to make the change and delete "inline"
from wchar.c, then you can make the following change in the win32.mak file
for libpq. You can see line 121 in the win32.mak file as:
"WIN32" /D "_WINDOWS" /Fp"$(INTDIR)\libpq.pch" /YX\
Change it to
"WIN32" /D "_WINDOWS" /D "inline=__i
On Wed, 2006-08-09 at 16:54 -0400, Tom Laudeman wrote:
Great suggestion. I've read about CLUSTER, but never had a chance to use it. The only problem is that this table with 9 million records has 5 or 6 indexes. It is hard to pick the most used, but I'll bet CLUSTER will make at least one o
On Fri, Aug 04, 2006 at 03:46:09PM -0400, Matthew T. O'Connor wrote:
> Chander Ganesan wrote:
> >Matthew T. O'Connor wrote:
> >>I have done some googling for real world archive_command examples and
> >>haven't really found anything. The example in the PGSQL Docs are
> >>qualified by (This is an
On Thu, Aug 03, 2006 at 05:46:44PM -0500, Scott Marlowe wrote:
> On Thu, 2006-08-03 at 17:44, John Sidney-Woollett wrote:
> > In addition to making sure databases are vacuumed regularly, it is worth
> > running REINDEX on tables that see a lot of updates (or insert/deletes).
> >
> > Running REIND
On Wed, Aug 09, 2006 at 10:22:20AM -0400, Kenneth Downs wrote:
> I'm having some trouble getting a google hit on this topic, and the docs
> aren't doing much for me.
>
> What I'm wondering is, how do I limit any particular postgres operation,
> in particular a long-running batch operation with l
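One common low-tech answer to the "don't bog down the server" problem is to break the batch into smaller transactions instead of one huge statement; a minimal sketch with hypothetical table names, offered as an illustration rather than the list's recommendation:
-- instead of a single huge INSERT ... SELECT, work through the source in ranges
BEGIN;
INSERT INTO target_table SELECT * FROM staging_table WHERE id BETWEEN 1 AND 50000;
COMMIT;
BEGIN;
INSERT INTO target_table SELECT * FROM staging_table WHERE id BETWEEN 50001 AND 100000;
COMMIT;
-- pausing between batches gives other sessions a chance to run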
> -Original Message-
> From: [EMAIL PROTECTED] [mailto:pgsql-general-
> [EMAIL PROTECTED] On Behalf Of Bruce Momjian
> Sent: Wednesday, August 09, 2006 2:19 PM
> To: Shoaib Mir
> Cc: DEV; pgsql-general@postgresql.org
> Subject: Re: [GENERAL] WIN32 Build?
>
> Shoaib Mir wrote:
> > Dev,
> >
On Aug 3, 2006, at 3:01 PM, Tom Lane wrote:
Jim Nasby <[EMAIL PROTECTED]> writes:
If you're using multiple timelines with PITR, do you have to re-copy
all the data files into the cluster every time you start a new
timeline? Or can you copy the data files out of the backup once, and
then perform
Jim Nasby <[EMAIL PROTECTED]> writes:
> On Aug 3, 2006, at 3:01 PM, Tom Lane wrote:
>> The only way to "roll time backwards" is to replace the data directory
>> with a backup, so if I understand your question correctly, the answer
>> is no.
> Well, specifically I was thinking of...
> Roll PITR fo
Jim C. Nasby wrote:
> On Thu, Aug 03, 2006 at 05:46:44PM -0500, Scott Marlowe wrote:
>> On Thu, 2006-08-03 at 17:44, John Sidney-Woollett wrote:
[snip]
> And if you're vacuuming frequently enough, there shouldn't be
> that much need to reindex.
How ag
In article <[EMAIL PROTECTED]>,
RPK <[EMAIL PROTECTED]> wrote:
% I tried to modify the parameters of the .CONF file when the pgsql-8.1
% service was running. The next time I started, it displayed error: "The
% service did not listen to local host". (something like this).
%
% So I copied the old D
On 8/8/06, Michael Fuhr <[EMAIL PROTECTED]> wrote:
On Tue, Aug 08, 2006 at 12:16:18PM -0700, [EMAIL PROTECTED] wrote:
> I'd like to write a C Function for Postgresql that can access data in
> the database, use that to conduct additional queries to find more
> information in the database and event
Hello All,
When we pass a file name to pg_ctl (with option -l), what should the syslog
value be?
Should it be enabled? If so, what could the value for syslog be?
Though I have given an appropriate file name with the -l option, if I use
the default value for syslog (0) then
I am getting the messages on the console.
Also (obviously) we aren't able to get a new connection after hitting the
max fd limit.
Thx,
Prasanna.
-Original Message-
From: Mavinakuli, Prasanna (STSD)
Sent: Wednesday, August 09, 2006 10:10 AM
To: pgsql-general@postgresql.org
Cc: Mavinakuli, Prasanna (STSD)
Subject: Too many open files fr
Hello All,
We are using postgres with HP-UX.
Usually we will have many records, and will do a vacuum once every 2 days.
We have observed that postgres opens many files in the data directory
and won't close them till the next vacuum (I experience it).
When it hits the max limit, we won't be able to open any other files.
S
Hi,
I'm trying to connect to a postgres database in a distributed environment,
say from machine X to machine Y (machine Y has the postgres DB).
What should my connection string look like in a program on machine X?
EXEC SQL CONNECT TO --??? (Do I need to specify the IP of the DB server?)
I tried a few options
Jim C. Nasby wrote:
Take a look at http://pgfoundry.org/projects/pgpitrha/
I had already seen this however it says that this project has yet to
release any files, so I thought it was a dead project. Am I missing
something?
Also, note that in 8.1, you have to manually archive the last WAL
On Wed, Aug 09, 2006 at 11:02:00AM -0400, Harpreet Dhaliwal wrote:
> I'm trying to connect to postgres database in a distributed environment,
> say from machine X to Machine Y (Machine Y has postgres DB)
> What should my connection string look like in a program on machine X?
>
> EXEC SQL CONNECT T
"Jeffrey Bigham" <[EMAIL PROTECTED]> writes:
> I also have some vague understanding that C might be faster. I hope I'm
> not opening a can-of-worms, but is C actually faster?
C should theoretically be faster than any of the alternatives you
mention, all else being equal (eg, you are implementing
Hi
I already read that documentation.
My ECPG code for connecting to the DB server is:
EXEC SQL CONNECT TO 192.168.1.100:/xyz
I also tried
tcp:postgresql://192.168.1.100[:port][/dbname][?options]
unix:postgresql://192.168.1.100[:port][/dbname][?options]
but unfortunately it says the DB do
"Mavinakuli, Prasanna (STSD)" <[EMAIL PROTECTED]> writes:
> We have observed that postgres opens many files in data directory
> And won't close till next vacuum (I experience it)
> When it hits the max limit, we won't be able to open any other files.
> So what's the solution ..?
Reduce PG's max_files_p
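For reference, the current value can be checked from psql; the setting name is the same in postgresql.conf, and the value below is only illustrative:
SHOW max_files_per_process;
-- then lower it in postgresql.conf, e.g. max_files_per_process = 200,
-- and restart the server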
On Aug 8, 2006, at 8:33 PM, Jeffrey Bigham wrote:
Do you have a reason to use C instead of a higher-level language?
Not really. Basically I know C decently enough and would have to
learn PL/pgSQL (although it looks pretty easy). Perl sounds quite
attractive for the added benefit of text-p
On Thu, Aug 10, 2006 at 12:02:24AM -0400, Harpreet Dhaliwal wrote:
> I already read that documentation.
>
> My ECPG code for connecting to the DB server is:
>
> EXEC SQL CONNECT TO 192.168.1.100:/xyz
That format isn't shown in the documentation; the ecpg preprocessor
should fail with a syntax er
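A minimal sketch of the documented connect syntax, with an assumed port, database name, and user (the server must also accept remote connections via listen_addresses and pg_hba.conf):
EXEC SQL CONNECT TO tcp:postgresql://192.168.1.100:5432/xyz USER dbuser;
/* or, equivalently, the short target form: */
EXEC SQL CONNECT TO xyz@192.168.1.100 USER dbuser;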
Disagree.
We only apply reindex on tables that see lots of updates...
With our 7.4.x databases we vacuum each day, but we see real performance
gains after re-indexing too - we see lower load averages and no decrease
in responsiveness over time. Plus we have the benefit of reduced disk
space u
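For reference, a minimal sketch of the command being discussed, with a hypothetical table name; REINDEX takes a lock on the table while rebuilding, so it is usually scheduled off-peak:
REINDEX TABLE orders;
-- or rebuild a single index:
REINDEX INDEX orders_pkey;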