Hi,
>>What I am worried about is "93.5% wa" ...
>>
>>Could someone explain to me what the VACUUM process is waiting for?
>>
>
>
> Disk I/O.
>
CPU
wa: Time spent waiting for IO. Prior to Linux 2.5.41, shown as zero.
Just a little more info to help understand what Alan has pointed out.
Your CPU pr
Hi,
>I do my batch processing daily using a python script I've written. I
>found that trying to do it with pl/pgsql took more than 24 hours to
>process 24 hours worth of logs. I then used C# and in memory hash
>tables to drop the time to 2 hours, but I couldn't get mono installed
>on some of my ol
Hi,
I had some disk I/O issues recently with NFS; I found the command 'iostat
-x 5' to be a great help on Linux.
For example, here is the output when I do a 10GB file transfer onto hdc:
Device:    rrqm/s wrqm/s   r/s   w/s  rsec/s  wsec/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
Hi,
I have a web app using PostgreSQL which indexes, searches and
streams/downloads online movies.
I think I have a problem with NFS and RAID. It is not strictly a
PostgreSQL issue, but it is closely linked, and I know many people on this
list are experienced with this technology. Apologies if it is off topic.
Hi,
But is there a tool that could compile a summary out of the log? The
log grows awfully big after a short time.
There's also pg_analyzer to check out.
http://www.samse.fr/GPL/pg_analyzer/
Some of its features: it's written in Perl and produces HTML output.
You might want to look at the "Practi
Hi,
This email is picking up a thread from yesterday on INSERTS and INDEXES.
In this case the question is whether to use an index or a sequential scan.
I have included the db DDL and SELECT query.
For each month I have a csv data dump of council property data.
So the First CD will have almost all unique r
Hi,
In an attempt to throw the authorities off his trail, [EMAIL PROTECTED] (Rudi
Starcevic) transmitted:
A minute for your thoughts and/or suggestions would be great.
Heh heh
Could you give a more concrete example? E.g. - the DDL for the
table(s), most particularly.
Thanks, I didn'
the final
table like:
insert into original (x,x,x) (select temp.1, temp.2, etc from temp left
join original on temp.street=original.street where original.street is null)
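For what it's worth, here is a fuller sketch of that approach. The staging table and
column names (temp_owners, street, owner_name, valuation) and the file path are only
assumptions standing in for the real schema:

    -- one-off: an index on the match column keeps the duplicate check cheap
    create index original_street_idx on original (street);

    -- staging table loaded from the monthly CSV (column names are assumptions)
    create temp table temp_owners (
        street     text,
        owner_name text,
        valuation  numeric
    );

    -- server-side path is hypothetical; from psql, \copy reads a client-side file instead
    copy temp_owners from '/path/to/monthly_dump.csv' with csv;

    -- anti-join: insert only rows whose street is not already in the final table
    insert into original (street, owner_name, valuation)
    select t.street, t.owner_name, t.valuation
    from temp_owners t
    left join original o on o.street = t.street
    where o.street is null;

    analyze original;

The left join plus "is null" check inserts only the new rows in one statement, so you
don't have to test each of the 300,000 CSV lines individually, and the index on street
keeps that check fast as the final table grows.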
Good Luck
Jim
Rudi Starcevic wrote:
Hi,
I have a question on bulk checking, inserting into a table and
how best to use an index
Hi,
I have a question on bulk checking, inserting into a table and
how best to use an index for performance.
The data I have to work with is a monthly CD Rom csv data dump of
300,000 property owners from one area/shire.
So every CD has 300,000 odd lines, each line of data which fills the
'property
Hi,
Yes I Analyze also, but there was no need to because it was a fresh brand
new database.
Hmm ... Sorry I'm not sure then. I only use Linux with PG.
Even though it's 'brand new', you still need to Analyze after loading data so
the planner has up-to-date statistics and can make use of any indexes.
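For example, after a bulk load, something like this (just a sketch; 'original' is the
table name from earlier in the thread) refreshes the planner statistics:

    -- gather planner statistics for the freshly loaded table
    analyze original;

    -- or reclaim dead space and gather statistics for the whole database
    vacuum analyze;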
I'll keep an eye on this thread - Good luck.
Regards
Hi,
And yes I did a vacuum.
Did you 'Analyze' too ?
Cheers
Rudi.
Hi,
> I have some tables (which can get pretty large) in which I want to
> record 'current' data as well as 'historical' data.
Another solution is to use a trigger and function to record every change
to a 'logging' table.
This way you'll have one 'current' table and one 'historical' table.
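A minimal sketch of that idea, assuming a hypothetical 'accounts' table with 'id' and
'balance' columns (the names are examples only, not from the original post):

    -- history table mirrors the current table plus audit columns
    create table accounts_history (
        id         integer,
        balance    numeric,
        operation  text        not null,
        changed_at timestamptz not null default now()
    );

    -- trigger function copies every insert, update or delete into the history table
    create or replace function log_accounts_change() returns trigger as $$
    begin
        if tg_op = 'DELETE' then
            insert into accounts_history (id, balance, operation)
            values (old.id, old.balance, tg_op);
            return old;
        else
            insert into accounts_history (id, balance, operation)
            values (new.id, new.balance, tg_op);
            return new;
        end if;
    end;
    $$ language plpgsql;

    create trigger accounts_audit
        after insert or update or delete on accounts
        for each row execute procedure log_accounts_change();

The 'current' table stays small and fast for day-to-day queries, while the
'historical' table quietly accumulates every version of every row.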
Chris,
Oops - it's changed !
Here are the links you need:
http://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html
http://www.varlena.com/varlena/GeneralBits/Tidbits/annotated_conf_e.html
Cheers
Rudi.
Chris_Wu wrote:
>Hello all!
> I'm new to PostgreSQL, I have never used it be
Hi Chris,
I suggest you read this tech. document:
http://www.varlena.com/GeneralBits/
I think you'll find it's the best place to start.
Cheers
Rudi.
Chris_Wu wrote:
>Hello all!
> I'm new to PostgreSQL; I have never used it before.
> I am having an issue with configuring the pos