Re: [PERFORM] Big number of connections

2016-03-31 Thread Andrew Dunstan
On 03/31/2016 03:12 PM, Igor Neyman wrote: -Original Message- From: pgsql-performance-ow...@postgresql.org [mailto:pgsql-performance-ow...@postgresql.org] On Behalf Of Jarek Sent: Thursday, March 31, 2016 3:08 PM To: pgsql-performance@postgresql.org Subject: [PERFORM] Big number of con

Re: [PERFORM] jsonb_agg performance

2016-01-29 Thread Andrew Dunstan
On 01/29/2016 05:06 PM, jflem...@kispring.com wrote: The jsonb_agg function seems to have significantly worse performance than its json_agg counterpart: => explain analyze select pa.product_id, jsonb_agg(attributes) from product_attributes2 pa group by pa.product_id; QUERY PLAN

Re: [PERFORM] problem with select *

2015-08-24 Thread Andrew Dunstan
On 08/24/2015 03:04 AM, bhuvan Mitra wrote: Hello, I have a table with 12 columns and 20 Million rows. While writing to the table I find no problem, but when reading it I face some issues. When I perform a 'select * from table limit 1400;' (selecting 14 million rows), it is w

Re: [PERFORM] pg bouncer issue what does sv_used column means

2015-06-12 Thread Andrew Dunstan
Please do not cross-post on the PostgreSQL lists. Pick the most appropriate list to post to and just post there. cheers andrew -- Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-perform

Re: [PERFORM] Fastest Backup & Restore for perf testing

2015-05-27 Thread Andrew Dunstan
On 05/27/2015 04:24 PM, Wes Vaske (wvaske) wrote: Hi, I’m running performance tests against a PostgreSQL database (9.4) with various hardware configurations and a couple different benchmarks (TPC-C & TPC-H). I’m currently using pg_dump and pg_restore to refresh my dataset between runs but

Re: [PERFORM] PostgreSQL disk fragmentation causes performance problems on Windows

2015-04-29 Thread Andrew Dunstan
On 04/29/2015 10:35 AM, k...@rice.edu wrote: On Wed, Apr 29, 2015 at 07:07:04AM -0700, Joshua D. Drake wrote: On 04/29/2015 01:08 AM, Andres Freund wrote: Which OS and filesystem is this done on? Because many halfway modern systems, like e.g. ext4 and xfs, implement this in the background as '

Re: [PERFORM] Number of Columns and Update

2014-12-22 Thread Andrew Dunstan
On 12/22/2014 03:53 PM, Robert DiFalco wrote: This may fall into the category of over-optimization but I've become curious. I have a user table with about 14 columns that are all 1:1 data - so they can't be normalized. When I insert a row all columns need to be set. But when I update, I so

Re: [PERFORM] Re: [SQL] querying with index on jsonb slower than standard column. Why?

2014-12-12 Thread Andrew Dunstan
On 12/12/2014 08:20 PM, Tom Lane wrote: We can't just add the operator and worry about usability later; if we're thinking we might want to introduce such an automatic transformation, we have to be sure the new operator is defined in a way that allows the transformation to not change any semantic

Re: [PERFORM] Re: [SQL] querying with index on jsonb slower than standard column. Why?

2014-12-12 Thread Andrew Dunstan
On 12/12/2014 04:44 PM, Tom Lane wrote: Josh Berkus writes: Yeah, I believe the core problem is that Postgres currently doesn't have any way to have variadic return types from a function which don't match variadic input types. Returning a value as an actual numeric from JSONB would require re

Re: [PERFORM] Postgres does not use indexes with OR-conditions

2014-11-07 Thread Andrew Dunstan
On 11/07/2014 12:06 AM, Vlad Arkhipov wrote: It was just a minimal example. The real query looks like this. select * from commons.financial_documents fd where fd.creation_time < '2011-11-07 10:39:07.285022+08' or (fd.creation_time = '2011-11-07 10:39:07.285022+08' and fd.financial_document_i
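The OR form quoted above can often be replaced by a row-value comparison, which PostgreSQL evaluates lexicographically and can satisfy with a single composite index. A hedged sketch (the index definition and the cut-off id value are assumptions, since the preview truncates the original query):

```sql
-- Hypothetical composite index matching the sort order:
CREATE INDEX ON commons.financial_documents (creation_time DESC, financial_document_id DESC);

-- Row-value form of the keyset predicate: (a, b) < (X, Y) means
-- a < X OR (a = X AND b < Y), exactly the OR/AND expansion above.
SELECT *
FROM commons.financial_documents fd
WHERE (fd.creation_time, fd.financial_document_id)
      < ('2011-11-07 10:39:07.285022+08', 100)   -- id value is illustrative
ORDER BY fd.creation_time DESC, fd.financial_document_id DESC
LIMIT 200;
```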

Re: [PERFORM] assignment vs SELECT INTO

2014-11-03 Thread Andrew Dunstan
On 11/03/2014 03:24 PM, Tom Lane wrote: Andrew Dunstan writes: andrew=# do $x$ declare r abc; begin for i in 1 .. 1000 loop select 'a','b',i into r.x,r.y,r.z; end loop; end; $x$; DO Time: 63731.434 ms andrew=# do $x$ declare r abc; begin fo

[PERFORM] assignment vs SELECT INTO

2014-11-03 Thread Andrew Dunstan
I found out today that direct assignment to a composite type is (at least in my test) about 70% faster than setting it via SELECT INTO. That seems like an enormous difference in speed, which I haven't really been able to account for. Test case: andrew=# \d abc Table "public.abc
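The truncated test case appears to compare these two forms. A minimal sketch, assuming a composite value "abc" with fields x, y, z matching the SELECT list quoted elsewhere in the thread:

```sql
CREATE TYPE abc AS (x text, y text, z int);

DO $x$
DECLARE r abc;
BEGIN
  FOR i IN 1 .. 10000000 LOOP
    -- direct field assignments (the faster form in the test):
    r.x := 'a'; r.y := 'b'; r.z := i;
    -- versus the equivalent SELECT INTO (the slower form):
    -- SELECT 'a', 'b', i INTO r.x, r.y, r.z;
  END LOOP;
END;
$x$;
```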

Re: [PERFORM] Query with large number of joins

2014-10-21 Thread Andrew Dunstan
On 10/21/2014 12:09 PM, Marco Di Cesare wrote: I did not mean to imply this works any better on SQL Server. We never tried. I just meant to say this is the first time we are using Postgres so we don't have much experience with it. We tried with work_mem set to 1GB (even as high as 3GB) but it

Re: [PERFORM] char(N), varchar(N), varchar, text

2014-10-08 Thread Andrew Dunstan
On 10/08/2014 10:22 AM, Emi Lu wrote: Good morning, From a performance point of view, are there big differences between char(N), varchar(N), varchar, and text? Some comments from Google show: No difference, under the hood it's all varlena. Check this article from Depesz: http://www.depesz.com/i

Re: [PERFORM] help: function failing

2014-10-07 Thread Andrew Dunstan
On 10/07/2014 04:41 PM, Sergey Konoplev wrote: On Thu, Oct 2, 2014 at 4:00 PM, George Neuner wrote: --- code CREATE OR REPLACE FUNCTION gen_random() RETURNS double precision AS $BODY$ DECLARE num float8 := 0; den float8 := 281474976710655; -- 0xF

Re: [PERFORM] postgres 9.3 vs. 9.4

2014-09-18 Thread Andrew Dunstan
On 09/18/2014 03:09 PM, Mkrtchyan, Tigran wrote: - Original Message - From: "Josh Berkus" To: pgsql-performance@postgresql.org Sent: Thursday, September 18, 2014 7:54:24 PM Subject: Re: [PERFORM] postgres 9.3 vs. 9.4 On 09/18/2014 08:09 AM, Mkrtchyan, Tigran wrote: 9.4beta2: ...

Re: [PERFORM] Tuning Postgres for Single connection use

2014-04-14 Thread Andrew Dunstan
On 04/14/2014 05:46 PM, Nick Eubank wrote: Any rules of thumb for work_mem, maintenance_work_mem, shared_buffers, etc. for a database that DOESN'T anticipate concurrent connections and that is doing lots of aggregate functions on large tables? All the advice I can find online on tuning
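With only one session running, the usual caution about many backends each claiming per-sort memory largely disappears. A hedged postgresql.conf sketch for a single-connection, aggregate-heavy workload (values are illustrative, not recommendations from the thread):

```
shared_buffers = 2GB          # a sizeable fraction of RAM is safe here
work_mem = 256MB              # per sort/hash node; generous is fine with one session
maintenance_work_mem = 1GB    # speeds up CREATE INDEX and VACUUM
```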

Re: [PERFORM] Are there some additional postgres tuning to improve performance in multi tenant system

2013-12-28 Thread Andrew Dunstan
On 12/28/2013 08:46 AM, ankush upadhyay wrote: On Sat, Dec 28, 2013 at 6:50 PM, Andrew Dunstan wrote: On 12/28/2013 12:19 AM, ankush upadhyay wrote: Hello All, I am using a multi tenant system and doing performance testing

Re: [PERFORM] Are there some additional postgres tuning to improve performance in multi tenant system

2013-12-28 Thread Andrew Dunstan
On 12/28/2013 12:19 AM, ankush upadhyay wrote: Hello All, I am using a multi tenant system and doing performance testing of a multi tenant application. In the case of a single tenant it works fine, but once I enable tenants, the database server sometimes stops responding. Any clue? I usuall

Re: [PERFORM] Current query of the PL/pgsql procedure.

2013-12-16 Thread Andrew Dunstan
On 12/16/2013 05:26 AM, hubert depesz lubaczewski wrote: On Sun, Dec 15, 2013 at 04:18:18PM +, Yuri Levinsky wrote: Dear ALL, I am running PL/pgsql procedure with sql statements that taking a long time. I able to see them in the log just after their completion. How can I see currently runni

Re: [PERFORM] Postgresql in a Virtual Machine

2013-11-26 Thread Andrew Dunstan
On 11/26/2013 08:51 AM, Boszormenyi Zoltan wrote: On 2013-11-25 21:19, Heikki Linnakangas wrote: On 25.11.2013 22:01, Lee Nguyen wrote: Hi, Having attended a few PGCons, I've always heard the remark from a few presenters and attendees that Postgres shouldn't be run inside a VM. That

Re: [PERFORM] Postgresql in a Virtual Machine

2013-11-26 Thread Andrew Dunstan
On 11/26/2013 09:26 AM, Craig James wrote: On 25.11.2013 22:01, Lee Nguyen wrote: Why shouldn't we run Postgres in a VM? What are the downsides? Does anyone have any metrics or benchmarks with the latest Postgres? For those of us with small (a fe

Re: [PERFORM] Postgresql in a Virtual Machine

2013-11-25 Thread Andrew Dunstan
On 11/25/2013 03:19 PM, Heikki Linnakangas wrote: On 25.11.2013 22:01, Lee Nguyen wrote: Hi, Having attended a few PGCons, I've always heard the remark from a few presenters and attendees that Postgres shouldn't be run inside a VM. That bare metal is the only way to go. Here at work we were

Re: [PERFORM] Performance bug in prepared statement binding in 9.2?

2013-09-11 Thread Andrew Dunstan
On 09/11/2013 02:35 PM, Josh Berkus wrote: All, We've confirmed that this issue is caused by having long-running idle transactions on the server. When we disabled their queueing system (which prodiced hour-long idle txns), the progressive slowness went away. Why that should affect 9.X far mor

Re: [PERFORM] slow sort

2013-09-11 Thread Andrew Dunstan
On 09/11/2013 06:36 AM, Maximilian Tyrtania wrote: Hi there, here is another one from the "why is my query so slow?" category. First post, so please bear with me. The query (which takes around 6 seconds) is this: SET work_mem TO '256MB'; -- else sort spills to disk SELECT et.subject,

Re: [PERFORM] Intermittent hangs with 9.2

2013-09-10 Thread Andrew Dunstan
On 09/10/2013 11:04 AM, David Whittaker wrote: Hi All, I've been seeing a strange issue with our Postgres install for about a year now, and I was hoping someone might be able to help point me at the cause. At what seem like fairly random intervals Postgres will become unresponsive to the 3

Re: [PERFORM] Performance bug in prepared statement binding in 9.2?

2013-09-10 Thread Andrew Dunstan
On 09/10/2013 09:23 AM, Andres Freund wrote: On 2013-09-10 15:21:33 +0200, Andres Freund wrote: If I interpret things correctly you're using serializable? I guess there is no chance to use repeatable read instead? Err, that wouldn't help much. Read committed. That lets PGXACT->xmin advance the

Re: [PERFORM] Performance bug in prepared statement binding in 9.2?

2013-09-10 Thread Andrew Dunstan
On 09/10/2013 08:20 AM, Andres Freund wrote: A backtrace for this would be useful. Alternatively you could recompile postgres using -fno-omit-frame-pointer in CFLAGS and use perf record -g. It's using a custom build, so this should be doable. Any chance you have older prepared xacts, older

Re: [PERFORM] Performance bug in prepared statement binding in 9.2?

2013-09-09 Thread Andrew Dunstan
On 08/01/2013 03:20 PM, Jeff Janes wrote: On Thu, Aug 1, 2013 at 10:58 AM, Josh Berkus wrote: Amit, All: So we just retested this on 9.3b2. The performance is the same as 9.1 and 9.2; that is, progressively worse as the test cycles go on, and unacceptably slow compared to 8.4. Some issue in

Re: [PERFORM] Varchar vs foreign key vs enumerator - table and index size

2013-09-02 Thread Andrew Dunstan
On 09/02/2013 05:53 AM, Łukasz Walkowski wrote: On 1 Sep 2013, at 05:10, Vitalii Tymchyshyn wrote: Well, there are some more options: a) Store int keys and do mapping in the application (e.g. with java enums). This can save you a join, that is especially useful if you are going to do paged

Re: [PERFORM] Hstore VS. JSON

2013-07-16 Thread Andrew Dunstan
On 07/16/2013 11:05 AM, Niels Kristian Schjødt wrote: Hi, I'm in the process of implementing a table for storing some raw data in the format of a hash containing one level of keys and values. The hash can be quite big (up to 50 keys pointing at values varying from one to several hundred char

Re: [PERFORM] Process in state BIND, authentication, PARSE

2013-07-08 Thread Andrew Dunstan
On 07/08/2013 12:22 PM, Jeison Bedoya wrote: Hi, I want to know why in my database processes stay in BIND, PARSE, authentication, startup for a couple of minutes, causing slowness. Perhaps tuning parameters? Or configuration of the operating system (Linux RHEL 6)? You haven't given

Re: [PERFORM] Dynamic queries in stored procedure

2013-07-05 Thread Andrew Dunstan
On 07/05/2013 09:57 AM, Greg Jaskiewicz wrote: Hey, We have a search method that depending on search params will join 3-5 tables, craft the joins and where section. Only problem is, this is done in rather horrible java code. So using pgtap for tests is not feasible. I want to move the databas

Re: [PERFORM] Partitions not Working as Expected

2013-06-27 Thread Andrew Dunstan
On 06/27/2013 03:14 PM, Shaun Thomas wrote: On 06/27/2013 01:42 PM, Tom Lane wrote: That will break things: CURRENT_DATE will then be equivalent to just writing today's date as a literal. Interesting. I tested it by creating a view and a table with a default, and it always seems to get tran
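The behavior being debated can be seen directly: constraint exclusion prunes on plan-time constants, while CURRENT_DATE is a stable function resolved only at execution time, after exclusion has already run on these versions. A sketch with a hypothetical date-partitioned table:

```sql
-- Prunes: the literal is known when the plan is built.
EXPLAIN SELECT * FROM measurements WHERE taken_on = DATE '2013-06-27';

-- Does not prune on 9.x-era planners: CURRENT_DATE is evaluated at
-- runtime, so every partition stays in the plan.
EXPLAIN SELECT * FROM measurements WHERE taken_on = CURRENT_DATE;
```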

Re: [PERFORM] PostgreSQL settings for running on an SSD drive

2013-06-20 Thread Andrew Dunstan
On 06/20/2013 05:23 PM, Shaun Thomas wrote: On 06/20/2013 03:32 PM, Josh Berkus wrote: Did you compare setting RPC to 1.0 vs. setting it to 1.1, or something else just slightly higher than SPC? Yes, actually. My favored setting when we were on 8.3 was 1.5. But something with the planner cha

Re: [PERFORM] Thinking About Correlated Columns (again)

2013-05-15 Thread Andrew Dunstan
On 05/15/2013 12:23 PM, Craig James wrote: On Wed, May 15, 2013 at 8:31 AM, Shaun Thomas wrote: [Inefficient plans for correlated columns] has been a pain point for quite a while. While we've had several discussions in the area, it always seems to

Re: [PERFORM] template1 vacuuming consuming much time compared to another production DBs

2013-04-22 Thread Andrew Dunstan
On 04/22/2013 07:31 AM, pradeep singh wrote: Hi, I am using postgresql 8.1 DB. Why are you using a release of Postgres that is way out of date and unsupported? cheers andrew

Re: [PERFORM] Speed of EXCECUTE in PL/PGSQL

2013-03-14 Thread Andrew Dunstan
On 03/14/2013 03:22 PM, Artur Zając wrote: Why is executing (or planning) some very simple query from a string in pl/pgsql dependent on the whole query, or why is “FOR r IN EXECUTE q” significantly slower than “FOR r IN query”? The whole point of EXECUTE is that it's reparsed and plan
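The distinction Andrew is pointing at can be sketched like this (the queries and loop bodies are hypothetical):

```sql
DO $$
DECLARE r record;
BEGIN
  -- Static query: parsed and planned once, plan cached for the session.
  FOR r IN SELECT g AS n FROM generate_series(1, 3) g LOOP
    NULL;
  END LOOP;

  -- EXECUTE: the string is re-parsed and re-planned on every run, which
  -- is the point (it sees the current search_path, fresh statistics, etc.).
  FOR r IN EXECUTE 'SELECT g AS n FROM generate_series(1, 3) g' LOOP
    NULL;
  END LOOP;
END;
$$;
```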

Re: [PERFORM] High CPU usage after partitioning

2013-01-22 Thread Andrew Dunstan
On 01/22/2013 09:21 AM, rudi wrote: On Tue, Jan 22, 2013 at 3:04 PM, Merlin Moncure wrote: let's see the query -- it's probably written in such a way so as to not be able to be optimized through CE. The query is pretty simple and standard, the behaviour (a

Re: [PERFORM] High CPU usage after partitioning

2013-01-21 Thread Andrew Dunstan
On 01/21/2013 10:05 AM, rudi wrote: Hello, I'm running postgresql 9.0. After partitioning a big table, CPU usage raised from an average of 5-10% to an average of 70-80%. - the table contains about 20.000.000 rows - partitions are selected using a trigger, based on an indexed field, a date (IF date_taken

Re: [PERFORM] backend suddenly becomes slow, then remains slow

2012-12-27 Thread Andrew Dunstan
On 12/26/2012 11:03 PM, Jeff Janes wrote: On Fri, Dec 14, 2012 at 10:40 AM, Andrew Dunstan wrote: > One of my clients has an odd problem. Every so often a backend will suddenly > become very slow. The odd thing is that once this has happened it remains > slowed down, for all s

Re: [PERFORM] backend suddenly becomes slow, then remains slow

2012-12-14 Thread Andrew Dunstan
On 12/14/2012 02:56 PM, Tom Lane wrote: Andrew Dunstan writes: One of my clients has an odd problem. Every so often a backend will suddenly become very slow. The odd thing is that once this has happened it remains slowed down, for all subsequent queries. Zone reclaim is off. There is no IO or

[PERFORM] backend suddenly becomes slow, then remains slow

2012-12-14 Thread Andrew Dunstan
One of my clients has an odd problem. Every so often a backend will suddenly become very slow. The odd thing is that once this has happened it remains slowed down, for all subsequent queries. Zone reclaim is off. There is no IO or CPU spike, no checkpoint issues or stats timeouts, no other symp

Re: [PERFORM] Why does the number of rows are different in actual and estimated.

2012-12-13 Thread Andrew Dunstan
On 12/13/2012 05:42 PM, Claudio Freire wrote: And it looks like it all may be starting to go south here: -> Hash Join (cost=9337.97..18115.71 rows=34489 width=244) (actual time=418.054..1156.453 rows=205420 loops=1)

Re: [PERFORM] Why does the number of rows are different in actual and estimated.

2012-12-13 Thread Andrew Dunstan
On 12/13/2012 05:12 PM, AI Rumman wrote: Why is the number of rows different between actual and estimated? Isn't that in the nature of estimates? An estimate is a heuristic guess at the number of rows it will find for the given query or part of a query. It's not uncommon for estimates to

Re: [PERFORM] encouraging index-only scans

2012-12-12 Thread Andrew Dunstan
On 12/12/2012 05:12 PM, Andrew Dunstan wrote: On 12/12/2012 04:32 PM, Tom Lane wrote: Andrew Dunstan writes: A client is testing a migration from 9.1 to 9.2, and has found that a large number of queries run much faster if they use index-only scans. However, the only way he has found to get

Re: [PERFORM] encouraging index-only scans

2012-12-12 Thread Andrew Dunstan
On 12/12/2012 04:32 PM, Tom Lane wrote: Andrew Dunstan writes: A client is testing a migration from 9.1 to 9.2, and has found that a large number of queries run much faster if they use index-only scans. However, the only way he has found to get such a plan is by increasing the seq_page_cost

[PERFORM] encouraging index-only scans

2012-12-12 Thread Andrew Dunstan
A client is testing a migration from 9.1 to 9.2, and has found that a large number of queries run much faster if they use index-only scans. However, the only way he has found to get such a plan is by increasing the seq_page_cost to insanely high levels (3.5). Is there any approved way to enco
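Besides cost tweaks, index-only scans also require the visibility map to be set, which only VACUUM maintains; a freshly restored or heavily updated table will incur heap fetches no matter how the costs are skewed. A hedged sketch (table and column names are made up):

```sql
VACUUM ANALYZE big_table;   -- sets visibility-map bits and refreshes stats

EXPLAIN (ANALYZE, BUFFERS)
SELECT indexed_col FROM big_table WHERE indexed_col BETWEEN 1 AND 100;
-- An effective plan shows "Index Only Scan" with "Heap Fetches: 0".
```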

Re: [PERFORM] Read rows deleted

2012-12-12 Thread Andrew Dunstan
On 12/12/2012 03:24 PM, Alejandro Carrillo wrote: Hi, Does anybody know how to create a table using a table file? It isn't an fdw; it is the file that composes the table in postgresql, obtained with the pg_relation_filepath function. Ex: select pg_relation_filepath('pg_proc'); Does anybody know a JDBC or a

[PERFORM] track_activity_query_size

2012-12-12 Thread Andrew Dunstan
Is there a performance downside to setting track_activity_query_size to a significantly larger value than the default 1024 (say 10240), given that there's plenty of memory to spare? cheers andrew
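For context on the cost being asked about: the query-text buffer is reserved per backend in shared memory at server start, so the footprint is roughly max_connections × track_activity_query_size. A postgresql.conf sketch (illustrative, not a recommendation from the thread):

```
track_activity_query_size = 10240   # default is 1024; change requires restart
# e.g. with max_connections = 100 this reserves roughly 1 MB of shared memory
```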

Re: xfs perform a lot better than ext4 [WAS: Re: [PERFORM] Two identical systems, radically different performance]

2012-12-05 Thread Andrew Dunstan
On 12/05/2012 11:51 AM, Jean-David Beyer wrote: I thought that postgreSQL did its own journalling, if that is the proper term, so why not use an ext2 file system to lower overhead? Postgres journalling will not save you from a corrupt file system. cheers andrew

Re: [PERFORM] Postgres configuration for 8 CPUs, 6 GB RAM

2012-11-27 Thread Andrew Dunstan
On 11/27/2012 02:47 AM, Syed Asif Tanveer wrote: Hi, I am using PostgreSQL 9.1.5 for data warehousing and OLAP purposes. Data size is around 100 GB and I have tuned my PostgreSQL accordingly, yet I am still facing performance issues. The query performance is too low despite tables being properly

Re: [PERFORM] Poor performance using CTE

2012-11-21 Thread Andrew Dunstan
On 11/21/2012 02:30 PM, Gavin Flower wrote: WITH FENCE foo AS (SELECT ...) default? WITHOUT FENCE foo AS (SELECT ...) :-) Nah! I prefer this, but it is too specific to 'WITH', and very unSQL standardish! Alternatively one of the following 1. WITH UNFENCED foo AS (SELECT ...) 2. WITH NO

Re: [PERFORM] Poor performance using CTE

2012-11-21 Thread Andrew Dunstan
On 11/21/2012 11:32 AM, Claudio Freire wrote: On Wed, Nov 21, 2012 at 1:24 PM, Andres Freund wrote: On 2012-11-21 13:16:25 -0300, Claudio Freire wrote: On Wed, Nov 21, 2012 at 12:32 PM, Andres Freund wrote: +1 WITH foo AS (SELECT ...) (barrier=on|off)? 9.3 introduces the syntax, defaultin

Re: [PERFORM] Poor performance using CTE

2012-11-21 Thread Andrew Dunstan
On 11/21/2012 09:59 AM, Tom Lane wrote: Andrew Dunstan writes: If we're going to do it can we please come up with something more intuitive and much, much more documented than "OFFSET 0"? And if/when we do this we'll need to have big red warnings all over then release n

Re: [PERFORM] Poor performance using CTE

2012-11-21 Thread Andrew Dunstan
On 11/21/2012 08:04 AM, Heikki Linnakangas wrote: On 21.11.2012 01:53, Tom Lane wrote: I think the more interesting question is what cases wouldn't be covered by such a rule. Typically you need to use OFFSET 0 in situations where the planner has guessed wrong about costs or rowcounts, and I th
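The OFFSET 0 idiom under discussion: a dummy OFFSET clause keeps the planner from flattening a subquery into the outer query, acting as an ad-hoc optimization fence. A hedged sketch with hypothetical names:

```sql
-- Without OFFSET 0 the planner may flatten the subquery and evaluate
-- slow_func() more often, or in a worse order, than intended.
SELECT *
FROM (SELECT id, slow_func(payload) AS v
      FROM raw_data
      OFFSET 0) s          -- fence: the subquery is planned as written
WHERE s.v > 10;
```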

Re: SOLVED - RE: [PERFORM] Poor performance using CTE

2012-11-14 Thread Andrew Dunstan
On 11/14/2012 08:17 PM, Craig Ringer wrote: On 11/15/2012 12:29 AM, Tom Lane wrote: David Greco writes: Thanks, that did the trick. Though I'm still not clear as to why. PG treats WITH as an optimization fence --- the WITH query will be executed pretty much as-is. It may be that Oracle flat

Re: [PERFORM] Poor performance using CTE

2012-11-14 Thread Andrew Dunstan
On 11/14/2012 10:56 AM, David Greco wrote: You're right. I was translating an oracle query , but looks like PG will allow some syntax that is different. Trying to find entries in fedexinvoices where smp_pkg.get_invoice_charges(id) returns a record containing charge_name in ('ADDRESS CORRECTIO

Re: [PERFORM] Poor performance using CTE

2012-11-14 Thread Andrew Dunstan
On 11/14/2012 10:23 AM, David Greco wrote: Have a query using a CTE that is performing very poorly. The equivalent query against the same data in an Oracle database runs in under 1 second, in Postgres it takes 2000 seconds. The smp_pkg.get_invoice_charges queries fedexinvoices for some dat
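The fence behavior at the heart of this thread can be shown with the table name from the quote (the predicate is hypothetical): in PostgreSQL of this era (through v11), a WITH query is always materialized as written, while the plain subquery form lets the planner push the filter down.

```sql
-- Fenced: the CTE scans all of fedexinvoices before the filter applies.
WITH inv AS (SELECT * FROM fedexinvoices)
SELECT * FROM inv WHERE id = 12345;

-- Not fenced: the planner flattens this and can use an index on id.
SELECT * FROM (SELECT * FROM fedexinvoices) s WHERE id = 12345;
```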

Re: [HACKERS] [PERFORM] pg_dump and thousands of schemas

2012-11-13 Thread Andrew Dunstan
On 11/13/2012 10:12 PM, Denis wrote: Please don't think that I'm trying to nitpick here, but pg_dump has options for dumping separate tables and that's not really consistent with the idea that "pg_dump is primarily designed for dumping entire databases". Sure it is. The word "primarily" is

Re: [PERFORM] add column with default value is very slow

2012-09-11 Thread Andrew Dunstan
On 09/11/2012 09:55 AM, AI Rumman wrote: I added the excel file for locks data. I was surprised to see that while I was updating a single column value for all records in a tables, all indexes are locked by the server. Any ALTER TABLE command locks the whole table in ACCESS EXCLUSIVE mode, i

Re: [PERFORM] best practice to avoid table bloat?

2012-08-16 Thread Andrew Dunstan
On 08/16/2012 04:33 PM, Anibal David Acosta wrote: Hi, if I have a table from which about 8 million rows (of maybe 9 million) are deleted nightly, is it recommended to do a vacuum analyze after the delete completes, or can I leave this job to autovacuum? This table is very active during

Re: [PERFORM] Odd blocking (or massively latent) issue - even with EXPLAIN

2012-07-23 Thread Andrew Dunstan
On 07/23/2012 04:41 AM, Jim Vanns wrote: We're seeing SELECT statements and even EXPLAIN (no ANAYLZE) statements hang indefinitely until *something* (we don't know what) releases some kind of resource or no longer becomes a massive bottle neck. These are the symptoms. I have seen this sort of

Re: [PERFORM] queries are fast after dump->restore but slow again after some days dispite vacuum

2012-07-19 Thread Andrew Dunstan
On 07/19/2012 11:13 AM, Felix Scheicher wrote: Andrew Dunstan writes: Try running CLUSTER on the relevant tables and see if it makes a difference. If it does you might want to look into using pg_reorg periodically. That worked like a charm! Many thanks. But how come,

Re: [PERFORM] queries are fast after dump->restore but slow again after some days dispite vacuum

2012-07-19 Thread Andrew Dunstan
On 07/19/2012 07:33 AM, Felix Scheicher wrote: Hi, I am running a 9.1 server at Ubuntu. When I upgraded to the current version I did a pg_dump followed by pg_restore and found that the db was much faster. But slowed down again after two days. I did the dump-restore again and could now compare th

Re: [PERFORM] DELETE vs TRUNCATE explanation

2012-07-11 Thread Andrew Dunstan
On 07/11/2012 04:47 PM, Shaun Thomas wrote: On 07/11/2012 03:18 PM, Craig James wrote: It strikes me as a contrived case rather than a use case. What sort of app repeatedly fills and truncates a small table thousands of times ... other than a test app to see whether you can do it or not? Te

Re: [PERFORM] Paged Query

2012-07-09 Thread Andrew Dunstan
On 07/09/2012 01:41 PM, Misa Simic wrote: From my experience users even very rare go to ending pages... easier to them would be to sort data by field to get those rows in very first pages... Yeah, the problem really is that most client code wants to know how many pages there are, even

Re: [PERFORM] query optimization

2012-04-26 Thread Andrew Dunstan
On 04/26/2012 04:08 PM, Tom Lane wrote: Thomas Kellerer writes: Tom Lane wrote on 26.04.2012 21:17: Um ... did you analyze all the tables, or just some of them? I get sub-millisecond runtimes if all four tables have been analyzed, but it does seem to pick lousy plans if, say, only a and b h

Re: [PERFORM] scale up (postgresql vs mssql)

2012-04-13 Thread Andrew Dunstan
On 04/11/2012 06:11 PM, Eyal Wilde wrote: hi, I had a stored procedure in ms-sql server. This stored procedure gets a parameter (account-id), does about 20 queries, fills some temporary tables, and finally returns a few result-sets. This stored procedure was converted to a stored function in pos

Re: [PERFORM] Update join performance issues

2012-04-03 Thread Andrew Dunstan
On 04/03/2012 01:29 PM, Kevin Kempter wrote: Hi All; I have a query that wants to update a table based on a join like this: update test_one set f_key = t.f_key from upd_temp1 t, test_one t2 where t.id_number = t2.id_number Why is test_one in the from clause? update joins whatev
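The form Andrew is pointing at: in PostgreSQL's UPDATE ... FROM, the target table must not be repeated in the FROM list, or it becomes an unconstrained self-join. A sketch of the corrected statement, using the names from the quote:

```sql
UPDATE test_one t2
SET    f_key = t.f_key
FROM   upd_temp1 t
WHERE  t.id_number = t2.id_number;
```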

Re: [PERFORM] Performance of SQL Function versus View

2012-04-03 Thread Andrew Dunstan
On 04/03/2012 10:21 AM, Robert Haas wrote: You should probably test this in your environment, but I'd expect the view to be better. Wrapping logic inside PL/pgsql functions needlessly rarely turn outs to be a win. Right, But also note that auto_explain is very useful in getting plans and

Re: [PERFORM] TCP Overhead on Local Loopback

2012-04-02 Thread Andrew Dunstan
On 04/01/2012 09:11 PM, Andrew Dunstan wrote: On 04/01/2012 08:29 PM, Claudio Freire wrote: On Sun, Apr 1, 2012 at 8:54 PM, Andrew Dunstan wrote: You could try using Unix domain socket and see if the performance improves. A relevant link: He said Windows. There are no Unix domain

Re: [PERFORM] TCP Overhead on Local Loopback

2012-04-01 Thread Andrew Dunstan
On 04/01/2012 08:29 PM, Claudio Freire wrote: On Sun, Apr 1, 2012 at 8:54 PM, Andrew Dunstan wrote: You could try using Unix domain socket and see if the performance improves. A relevant link: He said Windows. There are no Unix domain sockets on Windows. (And please don't top

Re: [PERFORM] TCP Overhead on Local Loopback

2012-04-01 Thread Andrew Dunstan
On 04/01/2012 06:01 PM, Andy wrote: You could try using Unix domain socket and see if the performance improves. A relevant link: He said Windows. There are no Unix domain sockets on Windows. (And please don't top-post) cheers andrew

Re: [PERFORM] Tablespaces on a raid configuration

2012-03-30 Thread Andrew Dunstan
On 03/30/2012 10:45 AM, Campbell, Lance wrote: PostgreSQL 9.0.x When PostgreSQL storage is using a relatively large raid 5 or 6 array is there any value in having your tables distributed across multiple tablespaces if those tablespaces will exists on the same raid array? I understand th

Re: [PERFORM] database slowdown while a lot of inserts occur

2012-03-29 Thread Andrew Dunstan
On 03/29/2012 03:27 PM, Bob Lunney wrote: Lance, Many small inserts cause frequent fsyncs. Is there any way those small inserts can be batched into some larger sets of inserts that use COPY to perform the load? Or possibly a prepared statement called many times in a single transaction,

Re: [PERFORM] slow self-join query

2012-03-18 Thread Andrew Dunstan
On 03/18/2012 10:37 AM, Robert Poor wrote: On Sat, Mar 17, 2012 at 23:09, Scott Marlowe wrote: Also it looks like you're still not using the index on this: Subquery Scan u1 (cost=0.00..313.55 rows=50 width=4) (actual time=0.030..147.136 rows=100

Re: [PERFORM] PG as in-memory db? How to warm up and re-populate buffers? How to read in all tuples into memory?

2012-03-01 Thread Andrew Dunstan
On 03/01/2012 07:58 PM, Claudio Freire wrote: On Thu, Mar 1, 2012 at 9:28 PM, Peter van Hardenberg wrote: Setting work_mem to hundreds of MB in a 4G system is suicide. Tens even is dangerous. Why do you say that? We've had work_mem happily at 100MB for years. Is there a particular degenerat

Re: [PERFORM] PG as in-memory db? How to warm up and re-populate buffers? How to read in all tuples into memory?

2012-03-01 Thread Andrew Dunstan
On 03/01/2012 05:52 PM, Stefan Keller wrote: These are the current modified settings in postgresql.conf: shared_buffers = 128MB work_mem = 3MB These are extremely low settings on virtually any modern computer. I usually look to set shared buffers in numbers of Gb and work_mem at least in te

Re: [PERFORM] Very long deletion time on a 200 GB database

2012-02-27 Thread Andrew Dunstan
On 02/27/2012 09:45 AM, Shaun Thomas wrote: On 02/27/2012 02:08 AM, Reuven M. Lerner wrote: In the end, it was agreed that we could execute the deletes over time, deleting items in the background, or in parallel with the application's work. After all, if the disk is filling up at the rate of

Re: [PERFORM] Very long deletion time on a 200 GB database

2012-02-23 Thread Andrew Dunstan
On 02/23/2012 05:07 AM, Marcin Mańk wrote: DELETE FROM B WHERE r_id IN (SELECT R.id FROM R, B WHERE r.end_date< (NOW() - (interval '1 day' * 30)) AND r.id = b.r_id How about: DELETE FROM B WHERE r_id IN (SELECT distinct R.id FROM R WHERE r.end_date< (NOW() - (in
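Written out in full (the second statement is reconstructed from the truncated quote, reusing the interval expression from the first):

```sql
-- Original: joins B inside the subquery even though the outer DELETE
-- already scans B.
DELETE FROM B
WHERE r_id IN (SELECT r.id
               FROM R, B
               WHERE r.end_date < (NOW() - (interval '1 day' * 30))
                 AND r.id = b.r_id);

-- Suggested rewrite: derive the doomed ids from R alone.
DELETE FROM B
WHERE r_id IN (SELECT DISTINCT r.id
               FROM R
               WHERE r.end_date < (NOW() - (interval '1 day' * 30)));
```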

Re: [PERFORM] Query slow as Function

2012-02-18 Thread Andrew Dunstan
On 02/18/2012 11:37 AM, Tom Lane wrote: Andreas Kretschmer writes: You can check the plan with the auto_explain - Extension, and you can force the planner to create a plan based on the actual input-value by using dynamic SQL (EXECUTE 'your query string' inside the function) Steve *is* using

Re: [PERFORM] Performance costs of various PL languages

2011-12-27 Thread Andrew Dunstan
On 12/27/2011 05:54 PM, Merlin Moncure wrote: On Tue, Dec 27, 2011 at 4:20 PM, Pavel Stehule wrote: Hello 2011/12/27 Carlo Stonebanks: We are currently using pltclu as our PL of choice AFTER plpgSql. I'd like to know if anyone can comment on the performance costs of the various PL language

Re: [PERFORM] Exploring memory usage

2011-12-27 Thread Andrew Dunstan
On 12/27/2011 11:00 AM, Scott Marlowe wrote: On Tue, Dec 27, 2011 at 8:33 AM, Claudio Freire wrote: On Sat, Dec 24, 2011 at 4:22 PM, Michael Smolsky wrote: work_mem = 128MB (tried 257MB, didn't change anything) This is probably your problem. Without an EXPLAIN output, I cannot be sure, bu

Re: [HACKERS] [PERFORM] Slow count(*) again...

2011-02-06 Thread Andrew Dunstan
On 02/04/2011 02:32 AM, da...@lang.hm wrote: when a copy command is issued, I assume that there is some indication of how much data is going to follow. No of course there isn't. How would we do that with a stream like STDIN? Read the code. cheers andrew

Re: [HACKERS] [PERFORM] Slow count(*) again...

2011-02-03 Thread Andrew Dunstan
On 02/02/2011 07:17 PM, Greg Smith wrote: I direct anyone who thought Mladen was making a serious comment to http://www.nydailynews.com/news/politics/2009/01/08/2009-01-08_misunderestimate_tops_list_of_notable_bu-3.html if you want to get his little joke there. I plan to start using "misunde

Re: [HACKERS] [PERFORM] Sun Donated a Sun Fire T2000 to the PostgreSQL

2006-06-21 Thread Andrew Dunstan
Jim Nasby wrote: On Jun 16, 2006, at 12:01 PM, Josh Berkus wrote: First thing as soon as I have a login, of course, is to set up a Buildfarm instance. Keep in mind that buildfarm clients and benchmarking stuff don't usually mix well. On a fast machine like this a buildfarm run is

Re: [HACKERS] [PERFORM] A Better External Sort?

2005-10-04 Thread Andrew Dunstan
Ron Peacetree wrote: The good news is all this means it's easy to demonstrate that we can improve the performance of our sorting functionality. Assuming we get the abysmal physical IO performance fixed... (because until we do, _nothing_ is going to help us as much) I for one would be p

Re: [HACKERS] [PERFORM] Bad n_distinct estimation; hacks suggested?

2005-04-28 Thread Andrew Dunstan
Mischa Sandberg wrote: Perhaps I can save you some time (yes, I have a degree in Math). If I understand correctly, you're trying extrapolate from the correlation between a tiny sample and a larger sample. Introducing the tiny sample into any decision can only produce a less accurate result than

Re: [HACKERS] [PERFORM] Bad n_distinct estimation; hacks suggested?

2005-04-26 Thread Andrew Dunstan
Simon Riggs wrote: The comment * Every value in the sample appeared more than once. Assume * the column has just these values. doesn't seem to apply when using larger samples, as Josh is using. Looking at Josh's application it does seem likely that when taking a sample, all site

Re: [HACKERS] [PERFORM] Bad n_distinct estimation; hacks suggested?

2005-04-26 Thread Andrew Dunstan
Tom Lane wrote: Josh Berkus writes: Overall, our formula is inherently conservative of n_distinct. That is, I believe that it is actually computing the *smallest* number of distinct values which would reasonably produce the given sample, rather than the *median* one. This is contrary to

Re: [HACKERS] [PERFORM] Bad n_distinct estimation; hacks suggested?

2005-04-26 Thread Andrew Dunstan
Josh Berkus wrote: Simon, Tom: While it's not possible to get accurate estimates from a fixed size sample, I think it would be possible from a small but scalable sample: say, 0.1% of all data pages on large tables, up to the limit of maintenance_work_mem. Setting up these samples as a % of da

Re: [HACKERS] [PERFORM] Bad n_distinct estimation; hacks suggested?

2005-04-23 Thread Andrew Dunstan
Josh Berkus said: Well, unusual distributions are certainly tough. But I think the problem exists even for relatively well-distributed populations. Part of it is, I believe, the formula we are using: n*d / (n - f1 + f1*n/N) [snip] This is so broken, in fact, that I'm wonderin