Jim C. Nasby wrote:
Well, given that perl is using an entire CPU, it sounds like you should
start looking either at ways to remove some of the overhead from perl,
or at ways to split that perl work across multiple processes.
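(A minimal sketch of the multi-process approach, assuming the input is a
flat file and that transform.pl is a hypothetical filter script; each
chunk gets its own perl and psql process:)

$ split -l 1000000 big_input.dat chunk_
$ for f in chunk_*; do perl transform.pl < "$f" | psql -d target_db -c "COPY my_table FROM STDIN" & done; wait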
I use Perl for big database copies (usually with some processing/transformation
along the way).
http://stats.distributed.net used to use a perl script to do some
transformations before loading data into the database. IIRC, when we
switched to using C we saw 100x improvement in speed, so I suspect that
if you want performance perl isn't the way to go. I think you can
compile perl into C, so maybe that would help some.
On Mon, Oct 23, 2006 at 05:51:40PM -0400, Steve wrote:
> Hello there;
>
> I've got an application that has to copy an existing database to a new
> database on the same machine.
>
> I used to do this with a pg_dump command piped to psql to perform the
> copy; however, the database is 18 gigs large.
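(For reference, the pg_dump-piped-to-psql copy described above is
typically done as follows; the database names are placeholders:)

$ createdb new_db
$ pg_dump old_db | psql new_db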
On Mon, Oct 23, 2006 at 04:54:00PM -0300, Mara Dalponte wrote:
> Hello,
>
> I have a query with several join operations, applying the same
> filter condition to each involved table. This condition is a complex
> predicate over an indexed timestamp field, depending on some
> parameters.
> To
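(A minimal sketch of the query shape being described, with hypothetical
tables a and b, an indexed timestamp column ts on each, and parameters
$1 and $2; note that the same predicate is repeated for every table:)

SELECT *
FROM a
JOIN b ON b.a_id = a.id
WHERE a.ts >= $1 AND a.ts < $2
  AND b.ts >= $1 AND b.ts < $2;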
On Tue, Oct 24, 2006 at 09:17:08AM -0400, Worky Workerson wrote:
> >http://stats.distributed.net used to use a perl script to do some
> >transformations before loading data into the database. IIRC, when we
> >switched to using C we saw 100x improvement in speed, so I suspect that
> >if you want performance perl isn't the way to go.
On Mon, Oct 23, 2006 at 03:37:47PM -0700, Craig A. James wrote:
> Jim C. Nasby wrote:
> >http://stats.distributed.net used to use a perl script to do some
> >transformations before loading data into the database. IIRC, when we
> >switched to using C we saw 100x improvement in speed, so I suspect that
> >if you want performance perl isn't the way to go.
> Try Command Prompt's ODBC driver. Lately it has been measured to be
> consistently faster than psqlODBC.
>
> http://projects.commandprompt.com/public/odbcng
Thanks,
I tried this, but via Access it always reports a login (username/password)
failure to the db. However, this is an Alpha - is there a
Markus,
Could you COPY one of your tables out to disk via psql, and then COPY it
back into the database, to reproduce this measurement with your real data?
$ psql -c "COPY my_table TO STDOUT" > my_data
$ ls -s my_data
2018792 my_data
$ time cat my_data | psql -c "COPY my_table FROM STDIN"
> -----Original Message-----
> From: [EMAIL PROTECTED] [mailto:pgsql-performance-[EMAIL PROTECTED]] On Behalf Of John Philips
> Sent: Monday, October 23, 2006 8:17 AM
> To: Ben Suffolk
> Cc: pgsql-performance@postgresql.org
> Subject: Re: [PERFORM] Optimizing disk throughput on quad Opteron
Steve,
Are you using the latest update release of Solaris 10?
While the copy was running, did you check with prstat -amL to see
whether it was saturating any CPU?
If it is saturating a CPU, then at least that narrows things down: you
need to improve the CPU utilization of the copy process.
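(A hypothetical invocation, sampling every five seconds; a single LWP
pegged near 100 in the USR or SYS columns of the microstate output
would indicate the copy is CPU-bound:)

$ prstat -amL 5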
Hi, Tom,
Tom Lane wrote:
> You're wrong. An UPDATE always writes a new version of the row (if it
> overwrote the row in-place, it wouldn't be rollback-able). The new
> version has a different TID and therefore the index entry must change.
> To support MVCC, our approach is to always insert a new index entry.
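(The row-versioning behavior Tom describes can be observed directly
through the system column ctid; a minimal illustration against a
hypothetical table t with columns id and val:)

SELECT ctid FROM t WHERE id = 1;   -- e.g. (0,1)
UPDATE t SET val = val + 1 WHERE id = 1;
SELECT ctid FROM t WHERE id = 1;   -- a different TID now, e.g. (0,2)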
Tom Lane wrote:
> Stuart Bishop <[EMAIL PROTECTED]> writes:
>> I would like to understand what causes some of my indexes to be slower to
>> use than others with PostgreSQL 8.1.
>
> I was about to opine that it was all about different levels of
> correlation between the index order and the physical table order.
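(The correlation Tom refers to is exposed in the planner statistics;
for a hypothetical table my_table:)

SELECT attname, correlation
FROM pg_stats
WHERE tablename = 'my_table';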