Re: [PERFORM] libpq vs ODBC

2010-12-08 Thread Pavel Stehule
Hello 2010/12/9 Divakar Singh : > hmm > If I understand it correctly your argument is valid from a performance point of > view. > But in practical scenarios, it would make more sense to do ODBC if the > difference is only 5% or so, because it opens up so many choices of > databases for me. > Do we ha

Re: [PERFORM] libpq vs ODBC

2010-12-08 Thread Divakar Singh
hmm If I understand it correctly your argument is valid from a performance point of view. But in practical scenarios, it would make more sense to do ODBC if the difference is only 5% or so, because it opens up so many choices of databases for me. Do we have some published data in this area? Best

Re: [PERFORM] libpq vs ODBC

2010-12-08 Thread Alex Goncharov
,--- You/Divakar (Wed, 8 Dec 2010 21:17:22 -0800 (PST)) * | So it means there will be a visible impact if the nature of DB interaction is DB | insert/select. We do that mostly in my app. You can't say a "visible impact" unless you can measure it in your specific application. Let's say ODBC ta

Re: [PERFORM] Slow BLOBs restoring

2010-12-08 Thread Tom Lane
Vlad Arkhipov writes: > 08.12.2010 22:46, Tom Lane writes: >> Are you by any chance restoring from an 8.3 or older pg_dump file made >> on Windows? If so, it's a known issue. > No, I tried Linux only. OK, then it's not the missing-data-offsets issue. > I think you can reproduce it. First I cre

Re: [PERFORM] Hardware recommendations

2010-12-08 Thread mark
-Original Message- From: pgsql-performance-ow...@postgresql.org [mailto:pgsql-performance-ow...@postgresql.org] On Behalf Of Andy Sent: Wednesday, December 08, 2010 5:24 PM To: Marti Raudsepp Cc: pgsql-performance@postgresql.org; Benjamin Krajmalnik Subject: Re: [PERFORM] Hardware recommen

Re: [PERFORM] libpq vs ODBC

2010-12-08 Thread Divakar Singh
So it means there will be a visible impact if the nature of DB interaction is DB insert/select. We do that mostly in my app. The performance difference would be negligible if the query is server-intensive, where execution time is far more than the time taken by e.g. the communication interface or transaction

Re: [PERFORM] libpq vs ODBC

2010-12-08 Thread Alex Goncharov
,--- You/Divakar (Wed, 8 Dec 2010 20:31:30 -0800 (PST)) * | Is there any performance penalty when I use the ODBC library vs using libpq? In general, yes. In degenerate cases when most of the work happens in the server, no. You need to measure in the context of your specific application. -- Ale
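
A rough way to follow Alex's advice and see whether driver overhead could even be visible (a sketch only; the table and query are placeholders, not from this thread): compare the server-side time reported by EXPLAIN ANALYZE with the total round-trip time psql's \timing shows for the same statement. If the two are close, the server dominates and the libpq-vs-ODBC choice is unlikely to matter; if the round trip is much larger, the client side (driver, protocol, network) is where the time goes.

    -- Hypothetical table and query, for illustration only.
    \timing on
    EXPLAIN ANALYZE SELECT * FROM my_table WHERE id = 42;  -- server-side execution time
    SELECT * FROM my_table WHERE id = 42;                  -- \timing prints the client-observed elapsed time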

[PERFORM] libpq vs ODBC

2010-12-08 Thread Divakar Singh
Is there any performance penalty when I use the ODBC library vs using libpq? Best Regards, Divakar

Re: [PERFORM] Slow BLOBs restoring

2010-12-08 Thread Vlad Arkhipov
08.12.2010 22:46, Tom Lane writes: Are you by any chance restoring from an 8.3 or older pg_dump file made on Windows? If so, it's a known issue. No, I tried Linux only. Not without a complete reproducible example ... and not at all if it's the known problem. The fix for that is to update pg

Re: [PERFORM] Hardware recommendations

2010-12-08 Thread Scott Marlowe
On Wed, Dec 8, 2010 at 5:03 PM, Benjamin Krajmalnik wrote: > John, > > The platform is a network monitoring system, so we have quite a lot of > inserts/updates (every data point has at least one record insert as well as > at least 3 record updates).  The management GUI has a lot of selects.  We

Re: [PERFORM] Performance under contention

2010-12-08 Thread Robert Haas
2010/12/8 Tom Lane : > Robert Haas writes: >> 2010/12/8 Tom Lane : >>> Now, it's possible that you could avoid *ever* needing to search for a >>> specific PROCLOCK, in which case eliminating the hash calculation >>> overhead might be worth it. > >> That seems like it might be feasible.  The backen

Re: [PERFORM] Hardware recommendations

2010-12-08 Thread alaricd
Sent from my android device. -Original Message- From: Benjamin Krajmalnik To: pgsql-performance@postgresql.org Sent: Wed, 08 Dec 2010 17:14 Subject: [PERFORM] Hardware recommendations

Re: [PERFORM] Hardware recommendations

2010-12-08 Thread Andy
>> If you are IO-bound, you might want to consider using SSD. >> A single SSD could easily give you more IOPS than 16 15k SAS in RAID 10. > Are there any that don't risk your data on power loss, AND are cheaper than SAS RAID 10? Vertex 2 Pro has a built-in supercapacitor to s

Re: [PERFORM] Hardware recommendations

2010-12-08 Thread Marti Raudsepp
On Thu, Dec 9, 2010 at 01:26, Andy wrote: > If you are IO-bound, you might want to consider using SSD. > > A single SSD could easily give you more IOPS than 16 15k SAS in RAID 10. Are there any that don't risk your data on power loss, AND are cheaper than SAS RAID 10? Regards, Marti

Re: [PERFORM] Hardware recommendations

2010-12-08 Thread John W Strange
Ben, It would help if you could tell us a bit more about the read/write mix and transaction requirements. *IF* you are doing heavy writes I would suggest moving off the RAID1 configuration to a RAID10 setup. I would highly suggest looking at SLC-based solid state drives or, if your budget has legs, l

Re: [PERFORM] Hardware recommendations

2010-12-08 Thread Benjamin Krajmalnik
John, The platform is a network monitoring system, so we have quite a lot of inserts/updates (every data point has at least one record insert as well as at least 3 record updates). The management GUI has a lot of selects. We are refactoring the database to some degree to aid in the performanc

Re: [PERFORM] Compared MS SQL 2000 to Postgresql 9.0 on Windows

2010-12-08 Thread Pierre C
The hardware it is running on is fairly good, dual Xeon CPUs, 4 GB of RAM, RAID 5. For a database you'd want to consider replacing the RAID5 with a RAID1 (or RAID10). RAID5 is slow for small random updates, which are common in databases. Since you probably have enough hard disks anyway, this

Re: [PERFORM] Hardware recommendations

2010-12-08 Thread Andy
If you are IO-bound, you might want to consider using SSD. A single SSD could easily give you more IOPS than 16 15k SAS in RAID 10. --- On Wed, 12/8/10, Benjamin Krajmalnik wrote: > From: Benjamin Krajmalnik > Subject: [PERFORM] Hardware recommendations > To: pgsql-performance@postgresql.org

[PERFORM] Hardware recommendations

2010-12-08 Thread Benjamin Krajmalnik
I need to build a new high performance server to replace our current production database server. The current server is a SuperMicro 1U with 2 RAID-1 containers (one for data, one for log, SAS - data is 600GB, Logs 144GB), 16GB of RAM, running 2 quad core processors (E5405 @ 2GHz), Adaptec 5405 C

Re: [PERFORM] Performance under contention

2010-12-08 Thread Tom Lane
Robert Haas writes: > 2010/12/8 Tom Lane : >> Now, it's possible that you could avoid *ever* needing to search for a >> specific PROCLOCK, in which case eliminating the hash calculation >> overhead might be worth it. > That seems like it might be feasible. The backend that holds the lock > ought

Re: [PERFORM] Performance under contention

2010-12-08 Thread Robert Haas
2010/12/8 Tom Lane : > Robert Haas writes: >>> Yeah, that was my concern, too, though Tom seems skeptical (perhaps >>> rightly). And I'm not really sure why the PROCLOCKs need to be in a >>> hash table anyway - if we know the PROC and LOCK we can surely look up >>> the PROCLOCK pretty expensively

Re: [PERFORM] hashed subplan 5000x slower than two sequential operations

2010-12-08 Thread Bryce Nesbitt
Marc Mamin wrote: Another point: would a conditional index help? on articles (context_key) where indexed no. production=> select count(*),indexed from articles group by indexed;
  count  | indexed
 --------+---------
  517433 | t
  695814 | f
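
For readers following along, a minimal sketch of the conditional (partial) index Marc is asking about, using the table and column names quoted in this thread; whether it actually helps depends on the planner choosing it:

    -- Partial index covering only rows with indexed = true.
    CREATE INDEX articles_context_key_indexed_idx
        ON articles (context_key)
        WHERE indexed;
    ANALYZE articles;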

Re: [PERFORM] hashed subplan 5000x slower than two sequential operations

2010-12-08 Thread Bryce Nesbitt
Marc Mamin wrote: Hello, are the tables freshly analyzed, with a sufficient default_statistics_target? autovacuum = on        # Enable autovacuum subprocess?  'on' autovac
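
A sketch of what "freshly analyzed, with a sufficient default_statistics_target" can look like in practice; the column choice and the value 500 are illustrative assumptions, not recommendations from the thread:

    -- Raise the per-column statistics target, then refresh the statistics.
    ALTER TABLE contexts ALTER COLUMN parent_key SET STATISTICS 500;  -- example value
    ANALYZE contexts;
    ANALYZE articles;
    ANALYZE collection_data;
    SHOW default_statistics_target;  -- instance-wide default used when no per-column target is set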

Re: [PERFORM] hashed subplan 5000x slower than two sequential operations

2010-12-08 Thread Pavel Stehule
2010/12/8 Tom Lane : > Shrirang Chitnis writes: >> Bryce, >> The two queries are different: > > I suspect the second one is a typo and not what he really wanted. > >> WHERE (contexts.parent_key = 392210 >>       OR contexts.context_key IN >>          (SELECT collection_data.context_key >>        

Re: [PERFORM] hashed subplan 5000x slower than two sequential operations

2010-12-08 Thread Marc Mamin
Another point: would a conditional index help? on articles (context_key) where indexed regards, -Original Message- From: pgsql-performance-ow...@postgresql.org on behalf of Marc Mamin Sent: Wed 12/8/2010 9:06 To: Shrirang Chitnis; Bryce Nesbitt; pgsql-performance@postgresql.o

Re: [PERFORM] hashed subplan 5000x slower than two sequential operations

2010-12-08 Thread Tom Lane
Shrirang Chitnis writes: > Bryce, > The two queries are different: I suspect the second one is a typo and not what he really wanted. > WHERE (contexts.parent_key = 392210 > OR contexts.context_key IN > (SELECT collection_data.context_key > FROM collection_data >

Re: [PERFORM] hashed subplan 5000x slower than two sequential operations

2010-12-08 Thread Marc Mamin
Hello, are the tables freshly analyzed, with a sufficient default_statistics_target? You may try to get a better plan by rewriting the query as a UNION to get rid of the OR clause. Something like (not tested): SELECT contexts.context_key FROM contexts JOIN articles ON (articles.
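
Filling out Marc's (not tested) suggestion as a sketch: the OR becomes two branches of a UNION so each branch can be planned independently. The join condition on articles and the constant in the second branch are assumptions, since both are truncated in the archive:

    SELECT contexts.context_key
      FROM contexts
      JOIN articles ON (articles.context_key = contexts.context_key)      -- assumed join condition
     WHERE contexts.parent_key = 392210
    UNION
    SELECT contexts.context_key
      FROM contexts
      JOIN articles ON (articles.context_key = contexts.context_key)      -- assumed join condition
      JOIN collection_data ON (collection_data.context_key = contexts.context_key)
     WHERE collection_data.collection_context_key = :some_key;            -- placeholder; value truncated above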

Re: [PERFORM] hashed subplan 5000x slower than two sequential operations

2010-12-08 Thread Bryce Nesbitt
Shrirang Chitnis wrote: Bryce, The two queries are different: Ah, due to a mistake. The first version with the hashed subplan is from production. The second version should have read: production=> SELECT c

Re: [PERFORM] hashed subplan 5000x slower than two sequential operations

2010-12-08 Thread Shrirang Chitnis
Bryce, The two queries are different: You are looking for contexts.context_key in the first query WHERE (contexts.parent_key = 392210 OR contexts.context_key IN (SELECT collection_data.context_key FROM collection_data WHERE collection_data.collection_context_key = 3

Re: [PERFORM] Group commit and commit delay/siblings

2010-12-08 Thread Simon Riggs
On Mon, 2010-12-06 at 23:52 -0500, Greg Smith wrote: > Jignesh Shah wrote: > > On Tue, Dec 7, 2010 at 1:55 AM, Tom Lane wrote: > > > >> I could have sworn we'd refactored that to something like > >>bool ThereAreAtLeastNActiveBackends(int n) > >> which could drop out of the loop as soon
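
For context, the two settings under discussion can be inspected from SQL; the example values in the comments are illustrative only, not recommendations from this thread:

    SHOW commit_delay;     -- microseconds to wait before flushing WAL at commit; 0 disables the delay
    SHOW commit_siblings;  -- minimum number of other active transactions required before the delay is taken
    -- Typical postgresql.conf form (example values):
    --   commit_delay = 10
    --   commit_siblings = 5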

[PERFORM] hashed subplan 5000x slower than two sequential operations

2010-12-08 Thread Bryce Nesbitt
Can you help me understand how to optimize the following? There's a subplan which in this case returns 3 rows, but it is really expensive: explain analyze SELECT contexts.context_key FROM contexts JOIN articles
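
Since the preview is cut off, here is a sketch of the query shape being discussed, pieced together from the fragments quoted in the replies above; the join condition and the subselect's constant are assumptions:

    EXPLAIN ANALYZE
    SELECT contexts.context_key
      FROM contexts
      JOIN articles ON (articles.context_key = contexts.context_key)      -- assumed join condition
     WHERE (contexts.parent_key = 392210
            OR contexts.context_key IN
               (SELECT collection_data.context_key
                  FROM collection_data
                 WHERE collection_data.collection_context_key = :some_key));  -- placeholder; value truncated above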

Re: [PERFORM] Slow BLOBs restoring

2010-12-08 Thread Tom Lane
Vlad Arkhipov writes: > I investigated this issue a bit more. The -j option is slowing down BLOB restoring. > It's about 1000x slower if you specify this option. Are you by any chance restoring from an 8.3 or older pg_dump file made on Windows? If so, it's a known issue. > Does anybody plan

Re: [PERFORM] Performance under contention

2010-12-08 Thread Tom Lane
Robert Haas writes: >> Yeah, that was my concern, too, though Tom seems skeptical (perhaps >> rightly). And I'm not really sure why the PROCLOCKs need to be in a >> hash table anyway - if we know the PROC and LOCK we can surely look up >> the PROCLOCK pretty expensively by following the PROC SHM_

Re: [PERFORM] Slow BLOBs restoring

2010-12-08 Thread Vlad Arkhipov
I investigated this issue a bit more. The -j option is slowing down BLOB restoring. It's about 1000x slower if you specify this option. Does anybody plan to fix it? I have encountered a problem while restoring the database. There is a table that contains XML data (BLOB), ~ 3 000 000 records, ~