Yes. Look at the pg_statio% views in the system catalogs.
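For instance, a sketch (assuming the statistics collector is enabled, which it is by default): pg_stat_user_indexes flags indexes the planner has never used, and pg_statio_user_tables shows per-table buffer hits versus disk reads:

  -- indexes that have never been scanned (candidates for dropping)
  SELECT schemaname, relname, indexrelname, idx_scan
  FROM pg_stat_user_indexes
  WHERE idx_scan = 0
  ORDER BY schemaname, relname;

  -- per-table cache behaviour: a high heap_blks_read relative to
  -- heap_blks_hit can point at tables that might need better indexing
  SELECT relname, heap_blks_read, heap_blks_hit
  FROM pg_statio_user_tables
  ORDER BY heap_blks_read DESC;

There is no view that directly reports "missing" indexes; the seq_scan and seq_tup_read columns of pg_stat_user_tables are the usual starting point for guessing where one might help.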
On 2010/8/30, Valoo, Julian wrote:
> Hi
>
> Are there any performance tuning tools for PostgreSQL, besides EXPLAIN?
> Any system management views to find out missing indexes or indexes that
> are not being used?
>
> Thanks
>
> Julia
Hi
Are there any performance tuning tools for PostgreSQL, besides EXPLAIN?
Any system management views to find out missing indexes or indexes that
are not being used?
Thanks
Julian Valoo
SQL Database Administrator
Corporate and Transactional Banking IT
BankCity
e-mail julia
Hello there,
1. Try using the COPY command (a sketch follows below); you will see a
significant decrease in the loading time.
2. Turn off autocommit, and remove foreign key constraints if it is only a
one-time load - this will also help decrease the load time.
Try these options and let us know how it went.
We load around 6
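To illustrate point 1, a minimal sketch of a COPY-based load (the table and file names here are made up):

  -- server-side load; the file must be readable by the server process
  COPY my_table FROM '/path/to/data.csv' WITH CSV;

  -- or, client-side from psql, which reads the file where psql runs:
  -- \copy my_table FROM 'data.csv' WITH CSV

A single COPY replaces hundreds of thousands of individual INSERTs and is usually the single biggest win for bulk loads.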
I am curious to know how much of your delay is due to PostgreSQL and how
much to your Java batch program. If you comment out the call to the
database function, so that you are reading your input file but not doing
anything with the data, how long does your batch program take to run?
RobR
On 2010/4/27, Greg Smith wrote:
> a.bhattacha...@sungard.com wrote:
>>
>> I have *622,000 records* but it is taking almost *4 and a half
>> hours* to load this data into the tables.
Without the schema and the queries, all you can get is guesswork.
--
Vincenzo Romano
NotOrAnd Information Technolog
a.bhattacha...@sungard.com wrote:
I have *622,000 records* but it is taking almost *4 and a half
hours* to load this data into the tables.
I have a simple function in the db which is being called from a Java batch
program to populate the records into the tables from flat files.
Four likely
Thanks a lot for your help. However, I am new to the Postgres database,
so it would be nice if you could let me know how to turn autocommit off.
I know that issuing “\set Autocommit Off” from the psql client should set
it off, but unfortunately it doesn’t.
It's a client-side setting, not a server-side one.
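For the psql side, note that psql's special variables are case-sensitive, so the mixed-case spelling above is silently treated as an ordinary variable; the working form is:

  \set AUTOCOMMIT off

That only affects psql sessions, though; a Java batch program has to disable autocommit through its own driver settings.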
On Tue, Apr 27, 2010 at 5:17 AM, Alban Hertroys
wrote:
> On 27 Apr 2010, at 10:11,
> wrote:
>
>> Dear All Experts,
>>
>> I am using a PostgreSQL 8.3.5 database on a Windows 64-bit OS.
>> However, I have a batch program written in Java which processes the data and
>> populates it into tables in the Postgres database.
On 27 Apr 2010, at 11:15,
wrote:
> Thanks a lot for your help. However, I am new to the Postgres database, so
> it would be nice if you could let me know how to turn autocommit off.
> I know that issuing “\set Autocommit Off” from the psql client should set
> it off, but unfortunately it doesn’t set it off.
Thanks for your help.
From: Magnus Hagander [mailto:mag...@hagander.net]
Sent: Tuesday, April 27, 2010 2:19 PM
To: Bhattacharya, A
Cc: pgsql-general
Subject: Re: [GENERAL] PostgreSQL Performance issue
On Tue, Apr 27, 2010 at 10:11, wrote:
Dear All Experts,
I am usin
On 27 Apr 2010, at 10:11,
wrote:
> Dear All Experts,
>
> I am using a PostgreSQL 8.3.5 database on a Windows 64-bit OS.
> However, I have a batch program written in Java which processes the data and
> populates it into tables in the Postgres database.
>
> I have 622,000 records but
a.bhattacha...@sungard.com wrote:
Dear All Experts,
I am using a PostgreSQL 8.3.5 database on a Windows 64-bit OS.
However, I have a batch program written in Java which processes the
data and populates it into tables in the Postgres database.
I have *622,000 records* but it is taking
On Tue, Apr 27, 2010 at 10:11, wrote:
> Dear All Experts,
>
> I am using a PostgreSQL 8.3.5 database on a Windows 64-bit OS.
>
You really need to upgrade. At least to 8.3.10. It has many important
bugfixes.
> However, I have a batch program written in Java which processes the data
> and po
Most likely you are inserting one row per transaction. Set autocommit to
false and commit only after all the inserts are done.
-n.
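As a sketch of what that means at the SQL level (the table name is hypothetical), all rows go through a single transaction instead of one transaction per row:

  BEGIN;
  INSERT INTO my_table VALUES (1, 'first row');
  INSERT INTO my_table VALUES (2, 'second row');
  -- ... the remaining rows in the same transaction ...
  COMMIT;

Each commit forces the WAL to be flushed to disk, so committing once instead of 622,000 times removes most of the per-row overhead.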
On 27-04-2010 13:41, a.bhattacha...@sungard.com wrote:
Dear All Experts,
I am using a PostgreSQL 8.3.5 database on a Windows 64-bit OS.
However, I have a batch pr
On 27 April 2010 09:11, wrote:
> Dear All Experts,
>
> I am using a PostgreSQL 8.3.5 database on a Windows 64-bit OS.
>
> However, I have a batch program written in Java which processes the data
> and populates it into tables in the Postgres database.
>
> I have *622,000 records*
Dear All Experts,
I am using a PostgreSQL 8.3.5 database on a Windows 64-bit OS.
However, I have a batch program written in Java which processes the data
and populates it into tables in the Postgres database.
I have 622,000 records but it is taking almost 4 and a half
hours to load this data into the tables.
Hi all,
I want this thread to continue as a summary of performance tuning
tools and other Postgres-related tools.
I'll start by saying there is a tool called SCHEMASPY (I got to know about
it from another group); it will draw an ER diagram and give interesting
information about our Postgres database.
Hi,
I have a few questions about storage and performance.
1. How do you estimate table size in PostgreSQL?
For example, if I have a table 'Dummy' with 1 varchar(3) and 1
numeric(8,0) field and 1000 rows, what is the table size estimate for
this (including the row overhead etc.)? How many pages
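There is no built-in estimator for a hypothetical table, but once a table exists you can measure it; a sketch, using the 'dummy' table from the example above:

  -- size of the main heap, and of heap + indexes + TOAST together
  SELECT pg_relation_size('dummy') AS heap_bytes,
         pg_size_pretty(pg_total_relation_size('dummy')) AS total;

  -- page and row counts as of the last VACUUM/ANALYZE
  SELECT relpages, reltuples FROM pg_class WHERE relname = 'dummy';

For a rough manual estimate, add the per-row header overhead (roughly 28 bytes in 8.x) to the field widths and divide the 8 kB page size by that.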
> The command you gave only shows currently
> active users, not the historic peak of connections for instance. I'll
> keep digging the manual but would love any nudges in the right
> direction, thanks!
Can you set up a snapshot in a cronjob? It would still only be a sample
of a sample, but it might be enough.
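A sketch of such a snapshot (the table name is made up): create a small log table and have cron run the INSERT through psql every minute or so.

  CREATE TABLE conn_snapshots (
      taken_at   timestamptz DEFAULT now(),
      n_backends integer
  );

  -- run from cron, e.g.:
  --   * * * * * psql -d mydb -c "INSERT INTO conn_snapshots (n_backends) SELECT count(*) FROM pg_stat_activity"
  INSERT INTO conn_snapshots (n_backends)
  SELECT count(*) FROM pg_stat_activity;

max(n_backends) over a day then approximates the peak, to within the sampling interval.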
On 19/08/07, Magnus Hagander <[EMAIL PROTECTED]> wrote:
> Phoenix Kiula wrote:
> No need to match. If you have 150 relations, 200 is a reasonable value.
> But once you get the proper output from the vacuum command, it tells you
> that as well (74 in my example above)
Found it! You get those word
Phoenix Kiula wrote:
> On 19/08/07, Magnus Hagander <[EMAIL PROTECTED]> wrote:
>> Phoenix Kiula wrote:
>
> ...snip
>
>> There should be a line like this at the end of a "VACUUM VERBOSE" command:
>> INFO: free space map contains 33 pages in 74 relations
> DETAIL: A total of 1184 page slots are in use (including overhead).
Btw, related to one of my earlier questions: where can I see how many
connections are being made to the DB, what was the maximum number
attempted at any given time, and so on? The connection-related info.
Thanks!
On 19/08/07, Magnus Hagander <[EMAIL PROTECTED]> wrote:
> Phoenix Kiula wrote:
...snip
> There should be a line like this at the end of a "VACUUM VERBOSE" command:
> INFO: free space map contains 33 pages in 74 relations
> DETAIL: A total of 1184 page slots are in use (including overhead).
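For reference, those two numbers map onto two settings in postgresql.conf (pre-8.4 only; the free space map became automatic in 8.4). The values below are illustrative, not recommendations, and changing them requires a server restart:

  max_fsm_pages = 200000     # must cover the page slots VACUUM VERBOSE says are needed
  max_fsm_relations = 1000   # must cover the number of relations it reports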
Phoenix Kiula wrote:
> On 19/08/07, Magnus Hagander <[EMAIL PROTECTED]> wrote:
>> Phoenix Kiula wrote:
>>> On 19/08/07, Magnus Hagander <[EMAIL PROTECTED]> wrote:
>
>
>>> should we do one (VACUUM FULL) now given that we've overrun our
>>> max_fsm_pages?
>> Yes, but not until you've fixed it. And only once.
On 19/08/07, Magnus Hagander <[EMAIL PROTECTED]> wrote:
> Phoenix Kiula wrote:
> > On 19/08/07, Magnus Hagander <[EMAIL PROTECTED]> wrote:
> > should we do one (VACUUM FULL) now given that we've overrun our
> > max_fsm_pages?
>
> Yes, but not until you've fixed it. And only once.
>
Fixed what?
Phoenix Kiula wrote:
> On 19/08/07, Magnus Hagander <[EMAIL PROTECTED]> wrote:
>> Phoenix Kiula wrote:
>
>> There are ways to do this, but if you can't just use timeouts to expire
>> from the cache, things can become pretty complicated pretty fast. But
>> perhaps you can isolate some kinds of queries that can be cached for
Phoenix Kiula wrote:
> On 19/08/07, Magnus Hagander <[EMAIL PROTECTED]> wrote:
>> Phoenix Kiula wrote:
>
> ...snip
>
>>> I can merrily increase the "max_fsm_pages" directive, but the manual
>>> also caveats that with "this can use more System V memory than
>>> available on your system".
On 19/08/07, Magnus Hagander <[EMAIL PROTECTED]> wrote:
> Phoenix Kiula wrote:
> There are ways to do this, but if you can't just use timeouts to expire
> from the cache, things can become pretty complicated pretty fast. But
> perhaps you can isolate some kinds of queries that can be cached for
>
On 19/08/07, Magnus Hagander <[EMAIL PROTECTED]> wrote:
> Phoenix Kiula wrote:
...snip
> > I can merrily increase the "max_fsm_pages" directive, but the manual
> > also caveats that with "this can use more System V memory than
> > available on your system". My full verbose vacuum info is below.
Phoenix Kiula wrote:
> On 19/08/07, Phoenix Kiula <[EMAIL PROTECTED]> wrote:
[... ]
> Well, based on some past posts, I looked into my pg_log stuff and found
> a number of these lines:
>
> LOG: could not fork new process for connection: Resource temporarily
> unavailable
>
Phoenix Kiula wrote:
> [Sorry for the length of this post. It stretched as I provided as much
> info as possible..]
>
> So the rubber meets the road. We've put postgresql in a production
> environment with some heavy simultaneous usage. It works well in
> general, but often PG doesn't respond. How should I test what is going wrong?
Phoenix Kiula wrote:
>
> Well, based on some past posts, I looked into my pg_log stuff and found
> a number of these lines:
>
> LOG: could not fork new process for connection: Resource temporarily
> unavailable
> LOG: could not fork new process for connection: Resource temporarily
> unavailable
On Sun, Aug 19, 2007 at 05:15:34PM +0800, Phoenix Kiula wrote:
> Well based on some past posts, I looked into my pg_log stuff and found
> a number of these lines:
> LOG: could not fork new process for connection: Resource temporarily
> unavailable
Usually this message means th
On 19/08/07, Phoenix Kiula <[EMAIL PROTECTED]> wrote:
> [Sorry for the length of this post. It stretched as I provided as much
> info as possible..]
>
> So the rubber meets the road. We've put postgresql in a production
> environment with some heavy simultaneous usage. It works well in
> general, b
[Sorry for the length of this post. It stretched as I provided as much
info as possible..]
So the rubber meets the road. We've put postgresql in a production
environment with some heavy simultaneous usage. It works well in
general, but often PG doesn't respond. How should I test what is going
wrong?
Dear,
We are facing a performance tuning problem while using a PostgreSQL database
over the network on a Linux OS.
Our database consists of more than 500 tables, with an average of 10K
records per table and an average of 20 users accessing the database
simultaneously over the network. Each table has
Bruno Wolff III wrote:
On Sat, Aug 27, 2005 at 18:19:54 +0530,
sunil arora <[EMAIL PROTECTED]> wrote:
Bruno,
thanks for the reply,
we did run vacuum on it, and we do it regularly to maintain its
performance, but it's not giving the expected results.
Did you do VACUUM FULL or just plain VACUUM?
On Sat, Aug 27, 2005 at 18:19:54 +0530,
sunil arora <[EMAIL PROTECTED]> wrote:
> Bruno,
> thanks for the reply,
> we did run vacuum on it, and we do it regularly to maintain its
> performance, but it's not giving the expected results.
Did you do VACUUM FULL or just plain VACUUM?
> I don't know, but
Bruno,
thanks for the reply,
we did run vacuum on it, and we do it regularly to maintain its
performance, but it's not giving the expected results.
I don't know, but if we delete the entire database and restore it from
the dump, then things seem to improve a _LOT_.
Isn't vacuum supposed to do the same?
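For what it's worth, a sketch of the difference (the table name is hypothetical):

  -- plain VACUUM: marks dead row versions as reusable space inside
  -- the table files, but does not shrink the files themselves
  VACUUM ANALYZE mytable;

  -- VACUUM FULL: compacts the table and gives the space back to the
  -- OS, much like dump-and-restore, but holds an exclusive lock
  VACUUM FULL mytable;

If plain VACUUM isn't keeping up, the usual suspects are running it too rarely or an undersized free space map.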
On Fri, Aug 26, 2005 at 22:13:04 +0530,
sunil arora <[EMAIL PROTECTED]> wrote:
> Hi folks,
>
> this is my first post to this mailing list. We are using postgres-7.4
> in a server-based application which requires frequent updates and
> inserts to the database. We have observed a substantial fall
Hi folks,
this is my first post to this mailing list. We are using postgres-7.4
in a server-based application which requires frequent updates and
inserts to the database. We have observed a substantial fall in the
performance of the database server over time. It works fine for some
initial days a
Does anyone have suggestions that could improve performance?
If you have doubled your RAM you could definitely increase your
effective_cache_size.
As you also doubled your memory you could increase your work_mem, but be
careful with this setting.
Also, just to be certain, are you running 32-bit or 64-bit?
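As a sketch of what that could look like for a 32GB machine (8.2+ unit syntax; the numbers are illustrative starting points, not recommendations):

  # postgresql.conf
  effective_cache_size = 16GB   # planner's estimate of OS + shared cache available
  work_mem = 32MB               # per sort/hash operation, so total use scales
                                # with the number of concurrent operations

work_mem deserves the caution above because every sort or hash step in every concurrent query can claim that much memory.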
Hi,
We've been using a 4 CPU Opteron (16GB) for the past several months, and
have now upgraded to an 8 CPU machine with 32GB. Basically, we've taken
the postgresql.conf from the 4 way, and used that in the 8 way without
making any changes. Shared memory for postgresql in both machines is
256M
>We support servers on NT, but not on Win95/98. Cygwin supports both,
What's the performance like?
NT doesn't do forks well, and Postgres uses fork(), right?
Cheerio,
Link.