Re: [PERFORM] 3-table query optimization

2006-08-10 Thread Michal Taborsky - Internet Mall

Tom Lane wrote:

Michal Taborsky - Internet Mall <[EMAIL PROTECTED]> writes:

SELECT product.product_id
  FROM action
  JOIN product ON (product.product_id = action.product_id)
 WHERE action.shop_group_id = 1
   AND EXISTS (SELECT 1
                 FROM catalog.product_program
                WHERE product_id = product.product_id
                  AND product_program.program_id = 1104322)


Try converting the EXISTS subquery to an IN.
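
For reference, the IN form would look roughly like this (a sketch, untested
against the original schema):

SELECT product.product_id
  FROM action
  JOIN product ON (product.product_id = action.product_id)
 WHERE action.shop_group_id = 1
   AND product.product_id IN (SELECT product_id
                                FROM catalog.product_program
                               WHERE program_id = 1104322);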


The performance is roughly the same: for some groups it's better, and for 
some of the bigger groups it's a bit worse. I forgot to mention that the 
server is running 8.0.2. Upgrading would be a bit painful, as it is a 24/7 
production system, but if it would help significantly, we'd give it a go.


--
Michal Táborský


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[PERFORM] setting up foreign keys

2006-08-10 Thread Sue Fitt

Hi all,

This is my first post to the performance list, I hope someone can help me.

I'm setting up a table with 2 columns, both of which reference a column 
in another table:


CREATE TABLE headwords_core_lexemes (
core_id int REFERENCES headwords_core(core_id),
lexeme_id int REFERENCES headwords_core(core_id)
);

Trouble is, it's taken 18 hours and counting!  The table headwords_core 
only has about 13,000 lines, and core_id is the primary key on that 
table. However, I assume it must be those 13,000 lines that are the 
problem, since if I try it referencing a similar table with 360 lines 
the new table is created almost instantly.


I found a post on a similar subject from quite a while ago, but no 
answer, and that was for millions of rows anyway. I only have 13,000. 
Surely it should be faster than this? Is there a way to speed it up?


Sue Fitt


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [PERFORM] setting up foreign keys

2006-08-10 Thread Chris

Sue Fitt wrote:

Hi all,

This is my first post to the performance list, I hope someone can help me.

I'm setting up a table with 2 columns, both of which reference a column 
in another table:


CREATE TABLE headwords_core_lexemes (
core_id int REFERENCES headwords_core(core_id),
lexeme_id int REFERENCES headwords_core(core_id)
);


One problem here is that both of these reference the same column ;) I'm 
sure that's a typo.


It sounds like you have something blocking or locking the other table. 
Check pg_locks (I think it is); 13,000 rows shouldn't take *that* long.
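
As a minimal sketch of that check (column names per 8.0/8.1; adjust for
your version), something like this shows who holds or is waiting on locks
for the table:

SELECT pid, relation::regclass AS locked_rel, mode, granted
  FROM pg_locks
 WHERE relation = 'headwords_core'::regclass;

A row with granted = false is the waiter; another pid holding a conflicting
mode on the same relation is the blocker.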



Make sure there is an index on headwords_core(core_id) and on whatever the 
other column should reference.

Foreign keys have to check the other table, so without those indexes it 
will be slow(er).
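
In this case the referenced column, headwords_core(core_id), is already
indexed by its primary key; what's easy to forget are indexes on the
referencing columns, which the FK triggers consult whenever rows in
headwords_core are updated or deleted. A sketch (index names made up):

CREATE INDEX hcl_core_idx   ON headwords_core_lexemes (core_id);
CREATE INDEX hcl_lexeme_idx ON headwords_core_lexemes (lexeme_id);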


--
Postgresql & php tutorials
http://www.designmagick.com/

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [PERFORM] setting up foreign keys

2006-08-10 Thread Sue Fitt

Thanks Chris and Chris, you've solved it.

I had a GUI open that connects to the database. It was doing nothing 
(and not preventing me from adding to or altering headwords_core via psql), 
but having closed it, the table was instantly created. Weird.


BTW, referencing the same column twice is deliberate, it's a 
cross-reference.


Sue

Chris Mair wrote:
>> This is my first post to the performance list, I hope someone can help me.
>>
>> I'm setting up a table with 2 columns, both of which reference a column
>> in another table:
>>
>> CREATE TABLE headwords_core_lexemes (
>> core_id int REFERENCES headwords_core(core_id),
>> lexeme_id int REFERENCES headwords_core(core_id)
>> );
>>
>> Trouble is, it's taken 18 hours and counting!  The table headwords_core
>> only has about 13,000 lines, and core_id is the primary key on that
>> table. However, I assume it must be those 13,000 lines that are the
>> problem, since if I try it referencing a similar table with 360 lines
>> the new table is created almost instantly.
>
> Hi,
>
> the 13,000 rows in headwords_core don't matter at all for what this
> statement concerns. I bet you have another idle transaction that keeps
> headwords_core locked, for example because you did an
> alter table headwords_core there...
>
> Bye,
> Chris.
>

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


Re: [PERFORM] setting up foreign keys

2006-08-10 Thread Chris

Sue Fitt wrote:

Thanks Chris and Chris, you've solved it.

I had a gui open that connects to the database. It was doing nothing 
(and not preventing me adding to or altering headwords_core via psql), 
but having closed it the table is instantly created. Weird.


BTW, referencing the same column twice is deliberate, it's a 
cross-reference.


The same column and the same table?

Same column different table I could understand but not the same column & 
table ;)


I'm sure there's a reason for it though :)

--
Postgresql & php tutorials
http://www.designmagick.com/

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


Re: [PERFORM] setting up foreign keys

2006-08-10 Thread Sue Fitt

Well, they don't necessarily have the same value!

It's a dictionary with cross-referenced words, e.g. 'bring' and 
'brought' are both headwords in the dictionary, but 'brought' is 
cross-referenced to 'bring'. So, the table stores the information (using 
integer ids rather than words) that

   bring: bring
   brought: see bring
   sing: sing
   sang: see sing
etc.
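
As a sketch with made-up ids (and guessing at which column plays which
role), the rows might look like:

-- hypothetical ids: 1 = bring, 2 = brought, 3 = sing, 4 = sang
INSERT INTO headwords_core_lexemes (core_id, lexeme_id) VALUES (2, 1); -- brought: see bring
INSERT INTO headwords_core_lexemes (core_id, lexeme_id) VALUES (4, 3); -- sang: see sing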

Sue

Chris wrote:

Sue Fitt wrote:

Thanks Chris and Chris, you've solved it.

I had a gui open that connects to the database. It was doing nothing 
(and not preventing me adding to or altering headwords_core via 
psql), but having closed it the table is instantly created. Weird.


BTW, referencing the same column twice is deliberate, it's a 
cross-reference.


The same column and the same table?

Same column different table I could understand but not the same column 
& table ;)


I'm sure there's a reason for it though :)



---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [PERFORM] Beginner optimization questions, esp. regarding Tsearch2

2006-08-10 Thread Richard Huxton

Carl Youngblood wrote:

- I noticed that there are six different postmaster daemons running.
Only one of them is taking up a lot of RAM (1076m virtual and 584m
resident).  The second one is using 181m resident while the others are
less than 20m each.  Is it normal to have multiple postmaster
processes?


You should have one master backend process plus one per connection. PG is 
a classic multi-process server design.


> Even the biggest process doesn't seem to be using near as much RAM as I
> have on this machine.  Is that bad?  What percentage of my physical
> memory should I expect postgres to use for itself?  How can I encourage
> it to cache more query results in memory?


OK - one of the key things with PostgreSQL is that it relies on the O.S. 
to cache its disk files. So, allocating too much memory to PG can be 
counterproductive.


From your figures, you're allocating about 64MB to work_mem, which is 
per sort. So, a complex query could use several times that amount. If 
you don't have many concurrent queries that might be what you want.


Also, you've allocated 1GB to your shared_buffers which is more than I'd 
use as a starting point.


You've only mentioned one main table with 100,000 rows, so presumably 
you're going to cache the entire DB in RAM. So, you'll want to increase 
effective_cache_size and reduce random_page_cost.
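
As rough, illustrative starting points only (postgresql.conf syntax for
8.0/8.1, where values are unitless: shared_buffers and effective_cache_size
are in 8kB pages, work_mem in kB; adjust against your own RAM and workload):

shared_buffers = 32768         # ~256MB in 8kB pages, down from ~1GB
work_mem = 16384               # ~16MB per sort/hash, down from ~64MB
effective_cache_size = 262144  # ~2GB, roughly what the OS can cache
random_page_cost = 2.0         # below the default 4.0 when the DB fits in RAM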


--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [PERFORM] Beginner optimization questions, esp. regarding Tsearch2

2006-08-10 Thread Markus Schaber
Hi, Richard and Carl,

Richard Huxton wrote:
> Carl Youngblood wrote:
>> - I noticed that there are six different postmaster daemons running.
>> Only one of them is taking up a lot of RAM (1076m virtual and 584m
>> resident).  The second one is using 181m resident while the others are
>> less than 20m each.  Is it normal to have multiple postmaster
>> processes?
> 
> You should have one master backend process and one per connection. PG is
> a classic multi-process designed server.

There may be some additional background processes, such as the
background writer, stats collector or autovacuum, depending on your
version and configuration.

HTH,
Markus

-- 
Markus Schaber | Logical Tracking&Tracing International AG
Dipl. Inf. | Software Development GIS

Fight against software patents in EU! www.ffii.org www.nosoftwarepatents.org

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [PERFORM] Postgresql Performance on an HP DL385 and

2006-08-10 Thread Michael Stone

On Wed, Aug 09, 2006 at 08:29:13PM -0700, Steve Poe wrote:

I tried as you suggested and my performance dropped by 50%. I went from
32 TPS to 16. Oh well.


If you put data & xlog on the same array, put them on separate 
partitions, probably formatted differently (ext2 on xlog).


Mike Stone

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


Re: [PERFORM] most bang for buck with ~ $20,000

2006-08-10 Thread Jeff Trout


On Aug 9, 2006, at 5:35 PM, Jim C. Nasby wrote:

Note that some controllers (such as 3ware) need to periodically test the
life of the BBU, and they disable write caching when they do so, which
would tank performance.


Yep. I did the battery capacity test before I went live with our 
9550sx controller. The only downside I see to not doing it is that its 
estimated battery lifetime number may be inaccurate, and once a week you 
get an alarm message about the capacity test being overdue.


It does seem like a big design flaw that the test is needed, but if you 
think about it, you don't want to have data in the cache while seeing 
how long it takes for the battery to drain :)


--
Jeff Trout <[EMAIL PROTECTED]>
http://www.dellsmartexitin.com/
http://www.stuarthamm.net/




---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [PERFORM] setting up foreign keys

2006-08-10 Thread Merlin Moncure

On 8/10/06, Chris <[EMAIL PROTECTED]> wrote:

Sue Fitt wrote:
> Thanks Chris and Chris, you've solved it.
>
> I had a gui open that connects to the database. It was doing nothing
> (and not preventing me adding to or altering headwords_core via psql),
> but having closed it the table is instantly created. Weird.
>
> BTW, referencing the same column twice is deliberate, it's a
> cross-reference.

The same column and the same table?

Same column different table I could understand but not the same column &
table ;)


create table color(color text primary key);  -- the referenced column needs a unique constraint

create table person(eye_color text references color(color), hair_color
text references color(color));

;)
merlin

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [PERFORM] Postgresql Performance on an HP DL385 and

2006-08-10 Thread Luke Lonergan
Mike,

On 8/10/06 4:09 AM, "Michael Stone" <[EMAIL PROTECTED]> wrote:

> On Wed, Aug 09, 2006 at 08:29:13PM -0700, Steve Poe wrote:
>> I tried as you suggested and my performance dropped by 50%. I went from
>> 32 TPS to 16. Oh well.
> 
> If you put data & xlog on the same array, put them on separate
> partitions, probably formatted differently (ext2 on xlog).

If he's doing the same thing on both systems (Sun and HP) and the HP
performance is dramatically worse despite using more disks and having faster
CPUs and more RAM, ISTM the problem isn't the configuration.

Add to this the fact that the Sun machine is CPU bound while the HP is I/O
wait bound and I think the problem is the disk hardware or the driver
therein.

- Luke



---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [PERFORM] Postgresql Performance on an HP DL385 and

2006-08-10 Thread Scott Marlowe
On Thu, 2006-08-10 at 10:15, Luke Lonergan wrote:
> Mike,
> 
> On 8/10/06 4:09 AM, "Michael Stone" <[EMAIL PROTECTED]> wrote:
> 
> > On Wed, Aug 09, 2006 at 08:29:13PM -0700, Steve Poe wrote:
> >> I tried as you suggested and my performance dropped by 50%. I went from
> >> 32 TPS to 16. Oh well.
> > 
> > If you put data & xlog on the same array, put them on separate
> > partitions, probably formatted differently (ext2 on xlog).
> 
> If he's doing the same thing on both systems (Sun and HP) and the HP
> performance is dramatically worse despite using more disks and having faster
> CPUs and more RAM, ISTM the problem isn't the configuration.
> 
> Add to this the fact that the Sun machine is CPU bound while the HP is I/O
> wait bound and I think the problem is the disk hardware or the driver
> therein.

I agree.  The problem here looks to be the RAID controller.

Steve, got access to a different RAID controller to test with?

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [PERFORM] Postgresql Performance on an HP DL385 and

2006-08-10 Thread Steve Poe
Scott,

I *could* rip out the LSI MegaRAID 2X from my Sun box. This belongs to me
for testing, but I don't know if it will fit in the DL385. Do they have
full-height/full-length slots? I've not worked on this type of box before.
I was thinking this is the next step. In the meantime, I've discovered
there's no email support for them, so I am hoping to find a support contact
through the sales rep that this box was purchased from.

Steve

On 8/10/06, Scott Marlowe <[EMAIL PROTECTED]> wrote:
> I agree.  The problem here looks to be the RAID controller.
>
> Steve, got access to a different RAID controller to test with?



Re: [PERFORM] 3-table query optimization

2006-08-10 Thread Alvaro Herrera
Michal Taborsky - Internet Mall wrote:
> Tom Lane wrote:
> >Michal Taborsky - Internet Mall <[EMAIL PROTECTED]> writes:
> >>SELECT product.product_id
> >>   FROM action
> >>   JOIN product ON (product.product_id=action.product_id)
> >>  WHERE action.shop_group_id=1
> >>AND EXISTS (SELECT 1
> >>  FROM catalog.product_program
> >>  WHERE product_id=product.product_id
> >>AND product_program.program_id =1104322
> >>)
> >
> >Try converting the EXISTS subquery to an IN.
> 
> The performance is roughly the same.

That's strange -- IN is usually much more amenable to better plans than
EXISTS.  Please post EXPLAIN ANALYZE output for the queries to see what's
going on.  It may be that the query is bound to be "slow" for some
cases (depending on the program_id, I guess?).

-- 
Alvaro Herrera                http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [PERFORM] [BUGS] BUG #2567: High IOWAIT

2006-08-10 Thread Jim Nasby
Please cc the list so others can help.

How large is the database? What indexes are on the tables you're inserting 
into? What speed is the drive?

Since it's a single SCSI drive I'm assuming it's only 10k RPM, which means the 
theoretical maximum you can hit is 160 transfers per second. At 40 inserts per 
second (I'm assuming each insert is its own transaction), you're already at 40 
WAL operations per second, minimum. Plus whatever traffic you have to the data 
tables.

Your biggest win would be to batch those inserts together into transactions, if 
possible. If not, the commit_delay settings might help you out.
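
A sketch of the batching idea, with hypothetical table and column names;
the point is one commit (one synchronous WAL flush) per batch rather than
one per row:

BEGIN;
INSERT INTO readings (sensor_id, val) VALUES (1, 10.5);
INSERT INTO readings (sensor_id, val) VALUES (2, 11.2);
-- ... a few hundred rows per batch ...
COMMIT;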

There may be some further gains to be had by tweaking the background writer 
settings; it might be too aggressive in your application.

That update statement could also be causing a lot of activity, depending on 
what it's doing.
--
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software    http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461



-Original Message-
From: Kumarselvan S [mailto:[EMAIL PROTECTED]
Sent: Wed 8/9/2006 11:33 PM
To: Jim Nasby
Subject: RE: [BUGS] BUG #2567: High IOWAIT
 
Yes, it is not a bug. 
Here is some info about the hardware:
It has a SCSI drive.
It is a Dell-made quad-processor machine. 

The changes to postgresql.conf:
1. max_connections = 50
2. shared buffer = 3
3. Temp buffer 2

Regards,
Kumar
-Original Message-
From: Jim C. Nasby [mailto:[EMAIL PROTECTED] 
Sent: Thursday, August 10, 2006 3:57 AM
To: kumarselvan
Cc: pgsql-performance@postgresql.org
Subject: Re: [BUGS] BUG #2567: High IOWAIT

This isn't a bug; moving to pgsql-performance.

On Tue, Aug 08, 2006 at 08:42:02AM +, kumarselvan wrote:
> i have installed the postgres as mentioned in the Install file. it is a 4
> cpu 8 GB Ram Machine installed with Linux Enterprise version 3. when i am
> running a load which will perfrom 40 inserts persecond on 2 tables and 10
> updates per 10seconds on differnt table IOWait on avg going upto 70% due
to
> which i am not able to increase the load. Is there is any other way to
> install the postgres on multiprocessor machine.. can any one help me on
> this...

You haven't given us nearly enough information. What kind of hardware is
this? RAID? What changes have you made to postgresql.conf?
-- 
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software    http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461






---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [PERFORM] setting up foreign keys

2006-08-10 Thread Stephan Szabo
On Thu, 10 Aug 2006, Sue Fitt wrote:

> Hi all,
>
> This is my first post to the performance list, I hope someone can help me.
>
> I'm setting up a table with 2 columns, both of which reference a column
> in another table:
>
> CREATE TABLE headwords_core_lexemes (
> core_id int REFERENCES headwords_core(core_id),
> lexeme_id int REFERENCES headwords_core(core_id)
> );
>
> Trouble is, it's taken 18 hours and counting!

What precisely is taking the time, the create table itself? The only thing
that the create should be waiting for as far as I know is a lock on
headwords_core to add the triggers.

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


[PERFORM] Migrating data from DB2 to SQL Server

2006-08-10 Thread contact1981
Hello,

I am trying to migrate data from a DB2 database to a SQL Server 2005
database.  Does anyone know about any migration tool that does that?  I
have heard about the DB2 Migration Tool Kit, but I think you can only
migrate data to a DB2 database with that.  Thank you.


Sincerely, 


Eldhose Cyriac


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


[PERFORM] slow transfer speeds with PostgreSQL

2006-08-10 Thread hansell baran
Hi. I'm new at using PostgreSQL.
Where I work, all databases were built with MS Access. The Access files are
hosted by computers with Windows 2000 and Windows XP. A new server is on its
way and only Open Source Software is going to be installed. The OS is going
to be SUSE Linux 10.1 and we are making comparisons between MySQL,
PostgreSQL and MS Access. We installed MySQL and PostgreSQL on both SUSE and
Windows XP (MySQL & PostgreSQL do NOT run at the same time; there is one HDD
for Windows and one for Linux).

The "Test Server" on which we install the DBMS has the following
characteristics:

CPU speed = 1.3 GHz
RAM = 512 MB
HDD = 40 GB

The biggest table has 544371 rows (tuples?) and 55 columns. All fields are
float8, except one varchar(255) and one timestamp. We query the MS Access
databases through Visual Basic programs and ODBC drivers. We made a Visual
Basic program that uses ADO to connect to ALL three DBMS using ODBC drivers.

When we run the query "SELECT * FROM big_table", we get the following
results:

MS Access
- Execution time ~ 51 seconds (depending on the client machine, it can go
as low as 20 seconds)
- Network Utilization ~ 80 Mbps (according to Windows Task Manager)

MySQL 5.0 (under Windows)
- Execution time ~ 630 seconds
- Network Utilization ~ 8 Mbps

PostgreSQL 8.1 (under Windows)
- Execution time ~ 290 seconds
- Network Utilization ~ 13 Mbps

MS Access (under Linux: the MS Access files are on the Linux computer,
which runs a SAMBA server. The client computer has a mapped network drive
that connects to the Linux files.)
- Execution time ~ 55 seconds (depending on the client machine, it can go
as low as 20 seconds)
- Network Utilization ~ 76 Mbps (according to Windows Task Manager)

MySQL 5.0 (under Linux)
- Execution time ~ 440 seconds
- Network Utilization ~ 11 Mbps

PostgreSQL 8.1 (under Linux)
- Execution time ~ 180 seconds
- Network Utilization ~ 18 Mbps

Very different results are obtained with the query "SELECT * from big_table
ORDER BY some_column". In this scenario PostgreSQL is faster than MS Access
or MySQL by more than 100 seconds.

We have run many other queries (not complex, at most nesting 5 inner joins)
and MS Access is always faster. We have seen by looking at the network
activity in the Windows Task Manager that the main problem is the transfer
speed. We have also noticed that MS Access quickly downloads the file that
has the necessary information and works on it locally on the client
computer. The queries, obviously, run faster if the client computer has
more resources (CPU speed, RAM, etc.). The fact that the client computer
does not use any resources to execute the query, only to receive the
results, is one big plus for PostgreSQL (we think). We need, however, to
improve the performance of the queries that return a lot of rows, because
those are the most used queries.

We searched the postgresql archives, mailing lists, etc. and have tried
changing the parameters of the PostgreSQL server (both on Linux and
Windows; we also tried the default parameters) and changing the parameters
of the ODBC driver as suggested. We still get approximately the same
results. We have even changed some TCP/IP parameters (only in Windows) but
saw no improvement.

To get to the point: Is this problem with the transfer rates a PostgreSQL
server / PostgreSQL ODBC driver limitation? Is there a way to increase the
transfer rates?

Thank you very much for any help received!

Hansell E. Baran Altuve

P.S.: I apologize for the length of this post and for any missing
information you might need. I will gladly hand out all the necessary
information to receive any help with my problem. Thanks again!

Re: [PERFORM] Migrating data from DB2 to SQL Server

2006-08-10 Thread Pit M.

contact1981 wrote:

Hello,

I am trying to migrate data from a DB2 database to a SQL Server 2005
database.  Does anyone know about any migration tool that does that?  I
have heard about the DB2 Migration Tool Kit, but I think you can only
migrate data to a DB2 database with that.  Thank you.


Sincerely, 



Eldhose Cyriac



We use SQLWays to migrate from SQL Server to PostgreSQL.

P.M.

---(end of broadcast)---
TIP 6: explain analyze is your friend


[PERFORM] Slow access to PostgreSQL server

2006-08-10 Thread Phil Cairns

Hi all,

I have an application that uses PostgreSQL to store its data. The 
application and an instance of the database have been installed in three 
different locations, and none of these three locations have anything to 
do with any of the others. I'm observing a problem in that large 
transfers to some machines on the network (specifically while running 
pg_dump) are dead slow. In fact, the information is going from the 
server to the client machine at dialup speeds over a 100 Mb LAN to some 
machines, and full speed to others.


This not a universal problem. Obviously, I'm not experiencing it at my 
development location, or I would have found and fixed it by now. One of 
the production installations had no problems. The second of the 
production environments experienced the problem on one out of 4 laptops 
(all the desktop machines were OK) until their technical guy uninstalled 
AVG (anti-virus). The third location has 4 laptops that are all slow in 
transferring PostgreSQL data, while the desktop machines are OK. There 
are no problems with copying files across the network. At the third 
location, they have the same software installed on the laptops and 
desktops, including the Vet security suite. Suspecting that something 
was screwing up the transfers by fiddling with packets, we suspended 
Vet, but that didn't help. We're going to try changing NICs and checking 
to see what happens when Pg runs on port 80.


Has anyone experienced this sort of thing before? We're running with 
8.0.4. My application uses libpq, while another application is using 
OLEDB. Both the native and OLEDB layers exhibit the delay on the "slow" 
machines, and have no problems on the "fast" machines. Note that the 
laptops are in no way inferior to the desktop machines in terms of CPU, 
RAM, etc.


TIA,
   Phil (yak from the build farm).

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [PERFORM] slow transfer speeds with PostgreSQL

2006-08-10 Thread AgentM
On Aug 3, 2006, at 19:39, hansell baran wrote:
> When we run the following query "SELECT * FROM big_table", we get the
> following results: [...] Very different results are obtained with the
> query "SELECT * from big_table ORDER BY some_column". In this scenario [...]

You should perform your tests with queries which are identical or similar
to the queries which the database will really be seeing. Anything else
isn't really relevant for tuning because different configurations cater to
different types of workloads.

-M

Re: [PERFORM] slow transfer speeds with PostgreSQL

2006-08-10 Thread Merlin Moncure

On 8/3/06, hansell baran <[EMAIL PROTECTED]> wrote:

Hi. I'm new at using PostgreSQL.
Where I work, all databases were built with MS Access. The Access files are
hosted by computers with Windows 2000 and Windows XP. A new server is on
its way and only Open Source Software is going to be installed. The OS is
going to be SUSE Linux 10.1 and we are making comparisons between MySQL,
PostgreSQL and MS Access. We installed MySQL and PostgreSQL on both SUSE
and Windows XP (MySQL & PostgreSQL do NOT run at the same time; there is
one HDD for Windows and one for Linux).

The "Test Server" on which we install the DBMS has the following
characteristics:

CPU speed = 1.3 GHz
RAM = 512 MB
HDD = 40 GB

The biggest table has 544371 rows (tuples?) and 55 columns. All fields are
float8, except one varchar(255) and one timestamp. We query the MS Access
databases through Visual Basic programs and ODBC drivers. We made a Visual
Basic program that uses ADO to connect to ALL three DBMS using ODBC
drivers.

When we run the following query "SELECT * FROM big_table", we get the
following results:

MS Access
- Execution time ~ 51 seconds (Depending on the client machine, it can go as
low as 20 seconds)
- Network Utilization ~ 80 Mbps (According to Windows Task Manager)

MySQL 5.0 (under Windows)
- Execution time ~ 630 seconds
- Network Utilization ~ 8 Mbps

PostgreSQL 8.1 (under Windows)
- Execution time ~ 290 seconds)
- Network Utilization ~ 13 Mbps


MS Access (under Linux: the MS Access files are on the Linux computer,
which runs a SAMBA server. The client computer has a mapped network drive
that connects to the Linux files.)
- Execution time ~ 55 seconds (depending on the client machine, it can go
as low as 20 seconds)
- Network Utilization ~ 76 Mbps (according to Windows Task Manager)

MySQL 5.0(under Linux)
- Execution time ~ 440 seconds
- Network Utilization ~ 11 Mbps

PostgreSQL 8.1(under Linux)
- Execution time ~ 180 seconds)
- Network Utilization ~ 18 Mbps


Very different results are obtained with the query "SELECT * from big_table
ORDER BY some_column". In this scenario


you have to be careful comparing access to mysql/postgresql in this
way because the architecture is different...these results are a bit
misleading.  access can do some optimization tricks on very simple
queries, especially select * from bigtable, because the result does not
have to be fully materialized and returned to the client.
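
One way to get similar incremental behavior out of PostgreSQL, when the
client doesn't need the whole result at once, is a server-side cursor (a
sketch, reusing the table name from the post):

BEGIN;
DECLARE big_cur CURSOR FOR SELECT * FROM big_table;
FETCH FORWARD 1000 FROM big_cur;   -- repeat until no rows come back
CLOSE big_cur;
COMMIT;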


PostgreSQL is faster than MS Access or MySQL by more than 100 seconds.

We have run many other queries (not complex, at most nesting of 5 inner
joins) and MS Access is always faster. We have seen


i find this really hard to believe. is your postgresql database
properly indexed and did you run analyze?  do the standard
-performance thing and run the query with explain analyze:

explain analyze 5_table_join_query

and post the results to this list.

merlin

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [PERFORM] Slow access to PostgreSQL server

2006-08-10 Thread Merlin Moncure

On 8/10/06, Phil Cairns <[EMAIL PROTECTED]> wrote:

Hi all,

I have an application that uses PostgreSQL to store its data. The
application and an instance of the database have been installed in three
different locations, and none of these three locations have anything to
do with any of the others. I'm observing a problem in that large
transfers to some machines on the network (specifically while running
pg_dump) are dead slow. In fact, the information is going from the
server to the client machine at dialup speeds over a 100 Mb LAN to some
machines, and full speed to others.


there have been numerous problems reported on windows due to various
applications, especially malware and virus scanners, that cause this
problem.  be especially cautious about anything that runs in kernel
mode or runs as an LSP.

merlin

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [PERFORM] setting up foreign keys

2006-08-10 Thread Chris

Merlin Moncure wrote:

On 8/10/06, Chris <[EMAIL PROTECTED]> wrote:

Sue Fitt wrote:
> Thanks Chris and Chris, you've solved it.
>
> I had a gui open that connects to the database. It was doing nothing
> (and not preventing me adding to or altering headwords_core via psql),
> but having closed it the table is instantly created. Weird.
>
> BTW, referencing the same column twice is deliberate, it's a
> cross-reference.

The same column and the same table?

Same column different table I could understand but not the same column &
table ;)


create table color(color text primary key);

create table person(eye_color text references color(color), hair_color
text references color(color));


lol. Good point :)

*back to the hidey hole!*

--
Postgresql & php tutorials
http://www.designmagick.com/

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [PERFORM] Beginner optimization questions, esp. regarding Tsearch2

2006-08-10 Thread Carl Youngblood

Thanks a lot for the advice Richard.  I will try those things out and
report back to the list.

Carl

On 8/10/06, Richard Huxton  wrote:

 From your figures, you're allocating about 64MB to work_mem, which is
per sort. So, a complex query could use several times that amount. If
you don't have many concurrent queries that might be what you want.

Also, you've allocated 1GB to your shared_buffers which is more than I'd
use as a starting point.

You've only mentioned one main table with 100,000 rows, so presumably
you're going to cache the entire DB in RAM. So, you'll want to increase
effective_cache_size and reduce random_page_cost.


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings