Re: [PERFORM] performance problem on big tables

2017-08-27 Thread Mariel Cherkassky
I have the newest version : select oracle_diag(); oracle_diag - oracle_fdw 1.5.0, PostgreSQL 9.6.3, Oracle client 11.2.0.4.0, ORA

Re: [PERFORM] performance problem on big tables

2017-08-27 Thread Claudio Freire
On Sun, Aug 27, 2017 at 1:34 PM, Mariel Cherkassky wrote: > Hi, yes indeed I'm using laurenz`s oracle_fdw extension. I tried to run it > but I'm getting error > > dbch=# ALTER FOREIGN TABLE tc_sub_rate_ver_prod OPTIONS ( SET prefetch 10240 > ); > ERROR: syntax error at or near "10240" > LINE 1: .

Re: [PERFORM] performance problem on big tables

2017-08-27 Thread Mariel Cherkassky
Hi, yes indeed I'm using Laurenz's oracle_fdw extension. I tried to run it but I'm getting an error dbch=# ALTER FOREIGN TABLE tc_sub_rate_ver_prod OPTIONS ( SET prefetch 10240 ); ERROR: syntax error at or near "10240" LINE 1: ...N TABLE tc_sub_rate_ver_prod OPTIONS ( SET prefetch 10240 ); dbch=#
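The likely fix for the syntax error above (an assumption based on how foreign-table options are parsed, not something stated in this snippet) is that option values must be single-quoted string literals, and SET only works for an option that was already set; otherwise ADD is needed:

```sql
-- Hedged sketch: foreign table option values must be quoted string
-- literals. The table name and 'prefetch' option come from the thread;
-- use ADD if the option was never set before.
ALTER FOREIGN TABLE tc_sub_rate_ver_prod
    OPTIONS (ADD prefetch '10240');
-- If the option already exists on the table, use SET instead:
-- ALTER FOREIGN TABLE tc_sub_rate_ver_prod OPTIONS (SET prefetch '10240');
```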

Re: [PERFORM] performance problem on big tables

2017-08-24 Thread Claudio Freire
On Thu, Aug 24, 2017 at 4:51 AM, Mariel Cherkassky wrote: > Hi Claudio, how can I do that ? Can you explain me what is this option ? > > 2017-08-24 2:15 GMT+03:00 Claudio Freire : >> >> On Mon, Aug 21, 2017 at 5:00 AM, Mariel Cherkassky >> wrote: >> > To summarize, I still have performance proble

Re: [PERFORM] performance problem on big tables

2017-08-24 Thread Mariel Cherkassky
Hi Claudio, how can I do that ? Can you explain me what is this option ? 2017-08-24 2:15 GMT+03:00 Claudio Freire : > On Mon, Aug 21, 2017 at 5:00 AM, Mariel Cherkassky > wrote: > > To summarize, I still have performance problems. My current situation : > > > > I'm trying to copy the data of man

Re: [PERFORM] performance problem on big tables

2017-08-23 Thread Claudio Freire
On Mon, Aug 21, 2017 at 5:00 AM, Mariel Cherkassky wrote: > To summarize, I still have performance problems. My current situation : > > I'm trying to copy the data of many tables in the oracle database into my > postgresql tables. I'm doing so by running insert into local_postgresql_temp > select

Re: [PERFORM] performance problem on big tables

2017-08-21 Thread Michael DNA
ehalf Of Mariel Cherkassky > Sent: Monday, August 21, 2017 10:20 AM > To: MichaelDBA > Cc: pgsql-performance@postgresql.org > Subject: Re: [PERFORM] performance problem on big tables > > I had a system that consist from many objects(procedures,functions..) on an > oracle da

Re: [PERFORM] performance problem on big tables

2017-08-21 Thread Mariel Cherkassky
@postgresql.org > *Subject:* Re: [PERFORM] performance problem on big tables > > > > I had a system that consist from many objects(procedures,functions..) on > an oracle database. We decided to integrate that system to postgresql. That > system coppied alot of big tables from a diff

Re: [PERFORM] performance problem on big tables

2017-08-21 Thread Igor Neyman
From: pgsql-performance-ow...@postgresql.org [mailto:pgsql-performance-ow...@postgresql.org] On Behalf Of Mariel Cherkassky Sent: Monday, August 21, 2017 10:20 AM To: MichaelDBA Cc: pgsql-performance@postgresql.org Subject: Re: [PERFORM] performance problem on big tables I had a system that

Re: [PERFORM] performance problem on big tables

2017-08-21 Thread Mariel Cherkassky
I had a system that consisted of many objects (procedures, functions, ...) on an oracle database. We decided to migrate that system to postgresql. That system copied a lot of big tables from a different read-only oracle database and performed a lot of queries on them to produce reports. The part of getti

Re: [PERFORM] performance problem on big tables

2017-08-21 Thread MichaelDBA
Maybe I missed it in this continuous thread activity, but have you tried "ora2pg"? You can export from Oracle and import to Postgres in parallel jobs. The import commands use the efficient COPY command by default (unless you override it in the ora2pg configuration file). You can do the exp
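A minimal ora2pg configuration along the lines suggested above might look like this (directive names are from ora2pg's documentation; the connection details and job counts are illustrative placeholders, not values from the thread):

```
# Illustrative ora2pg.conf fragment -- all values are placeholders
ORACLE_DSN      dbi:Oracle:host=orahost;sid=ORCL;port=1521
ORACLE_USER     scott
ORACLE_PWD      tiger
TYPE            COPY    # stream data with PostgreSQL's COPY (the default)
JOBS            4       # parallel processes writing to PostgreSQL
ORACLE_COPIES   4       # parallel processes extracting from Oracle
```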

Re: [PERFORM] performance problem on big tables

2017-08-21 Thread Mariel Cherkassky
I'm searching for a way to improve the current performance; I'm not interested in using a different tool or writing something new, because I'm trying to migrate a system on an oracle database to a postgresql database. 2017-08-21 14:53 GMT+03:00 Daniel Blanch Bataller < daniel.blanch.batal...@gmail.co

Re: [PERFORM] performance problem on big tables

2017-08-21 Thread Daniel Blanch Bataller
> On 21 Aug 2017, at 13:27, Mariel Cherkassky > wrote: > > All this operation runs as part of a big transaction that I run. > How can I create a dump on the oracle server and copy it to the postgresql > server from a postgresql transaction ? I guess you could create a user defined funct

Re: [PERFORM] performance problem on big tables

2017-08-21 Thread Mariel Cherkassky
All this operation runs as part of a big transaction that I run. How can I create a dump in the oracle server and copy it to the postgresql server from a postgresql transaction ? Chopping the table is optional when I use copy, but when I use copy to remote oracle table it takes longer to create the

Re: [PERFORM] performance problem on big tables

2017-08-21 Thread Daniel Blanch Bataller
> On 21 Aug 2017, at 10:00, Mariel Cherkassky > wrote: > > To summarize, I still have performance problems. My current situation : > I'm trying to copy the data of many tables in the oracle database into my > postgresql tables. I'm doing so by running insert into local_postgresql_temp

Re: [PERFORM] performance problem on big tables

2017-08-21 Thread Mariel Cherkassky
To summarize, I still have performance problems. My current situation : I'm trying to copy the data of many tables in the oracle database into my postgresql tables. I'm doing so by running insert into local_postgresql_temp select * from remote_oracle_table. The performance of this operation are ve

Re: [PERFORM] performance problem on big tables

2017-08-20 Thread Mariel Cherkassky
When I run copy from a local table the speed of the writing is 22 M/s. When I use the copy from remote_oracle_table it writes 3 M/s. SCP between the servers copies very fast. How should I continue ? 2017-08-20 14:00 GMT+03:00 Mariel Cherkassky : > I realized something weird. When I`m performing th

Re: [PERFORM] performance problem on big tables

2017-08-20 Thread Mariel Cherkassky
I realized something weird. When I'm using the copy utility of postgresql to create a dump from a local table in my postgresql db, it takes 20 minutes for a 32G table. When I try to use copy for a foreign table (on an oracle database) it takes more than 2 hours.. During the copy operation fro

Re: [PERFORM] performance problem on big tables

2017-08-19 Thread Mariel Cherkassky
This server is dedicated to being a postgresql production database, therefore postgresql is the only thing that runs on the server. The fs that I'm using is xfs. I'll add two different disks - one for the wals and one for the temp tablespace. Regarding the disks, what size should they be, considering tha

Re: [PERFORM] performance problem on big tables

2017-08-17 Thread Claudio Freire
On Thu, Aug 17, 2017 at 6:00 AM, Mariel Cherkassky wrote: > I checked with the storage team in the company and they saw that I have alot > of io on the server. How should I reduce the io that the postgresql uses ? Do you have concurrent activity on that server? What filesystem are you using wher

Re: [PERFORM] performance problem on big tables

2017-08-17 Thread Daniel Blanch Bataller
I would just check how long it takes to copy 3GB using a standard copy command. On my computer it took 10 secs. > On 17 Aug 2017, at 11:00, Mariel Cherkassky > wrote: > > I checked with the storage team in the company and they saw that I have a lot > of io on the server. How should I r

Re: [PERFORM] performance problem on big tables

2017-08-17 Thread Mariel Cherkassky
I checked with the storage team in the company and they saw that I have a lot of io on the server. How should I reduce the io that postgresql uses ? 2017-08-17 9:25 GMT+03:00 Mariel Cherkassky : > Hi Daniel, > I already tried to set the destination table to unlogged - it improved the > perform

Re: [PERFORM] performance problem on big tables

2017-08-16 Thread Mariel Cherkassky
Hi Daniel, I already tried to set the destination table to unlogged - it improved the performance slightly. Is there a way to make sure that I/O is the problem ? 2017-08-17 0:46 GMT+03:00 Daniel Blanch Bataller < daniel.blanch.batal...@gmail.com>: > Seems your disks are too slow. On my laptop (no
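The unlogged-table approach mentioned above can be sketched as follows (a sketch only, assuming PostgreSQL 9.5+ for ALTER TABLE ... SET LOGGED; the table names are the ones used elsewhere in the thread):

```sql
-- Load into an UNLOGGED table to skip WAL writes during the bulk load,
-- then make the table durable once the load is done.
CREATE UNLOGGED TABLE local_postgresql_table (LIKE remote_oracle_table);
INSERT INTO local_postgresql_table SELECT * FROM remote_oracle_table;
ALTER TABLE local_postgresql_table SET LOGGED;   -- available in 9.5+
```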

Re: [PERFORM] performance problem on big tables

2017-08-16 Thread Daniel Blanch Bataller
Seems your disks are too slow. On my laptop (nothing special, just one disk) using COPY I can dump 3G in ~ 20 secs, loading takes 120 secs, bare copying 3G takes 10 secs. Similar proportion you had, but much faster. confirm I/O is your bottleneck, and tell us how you solved your problem Anyw

Re: [PERFORM] performance problem on big tables

2017-08-16 Thread Mariel Cherkassky
My server is virtual and it has a virtual HD from a vnx storage machine. The logs and the data are on the same disk. 2017-08-16 17:04 GMT+03:00 Daniel Blanch Bataller < daniel.blanch.batal...@gmail.com>: > Considering it has to write logs and data at checkpoints I don’t see it > particularly slow

Re: [PERFORM] performance problem on big tables

2017-08-16 Thread Daniel Blanch Bataller
Considering it has to write logs and data at checkpoints I don’t see it particularly slow compared to the extract phase. What kind of disks do you have, SSD or regular disks? Different disks for transaction logs and data? > On 16 Aug 2017, at 15:54, Mariel Cherkassky > wrote: > > I run th

Re: [PERFORM] performance problem on big tables

2017-08-16 Thread Mariel Cherkassky
I run the copy command via psql to create a local dump of a 3G table and it took me 134059.732ms =~2 minutes. After that I imported the data via copy and it took 458648.677ms =~7 minutes. So the copy command works but pretty slow. 2017-08-16 16:08 GMT+03:00 Daniel Blanch Bataller < daniel.blanch.b

Re: [PERFORM] performance problem on big tables

2017-08-16 Thread Daniel Blanch Bataller
See if the copy command is actually working; copy should be very fast from your local disk. > On 16 Aug 2017, at 14:26, Mariel Cherkassky > wrote: > > > After all the changes of the memory parameters, the same operation (without the > copy utility) didn't run much faster - it took one m

Re: [PERFORM] performance problem on big tables

2017-08-16 Thread Mariel Cherkassky
After all the changes of the memory parameters, the same operation (without the copy utility) didn't run much faster - it took one minute less. I made a test with the copy command (without the 'with binary') and it took 1.5 hours to create the dump file in my local postgresql server. Then I tried to r

Re: [PERFORM] performance problem on big tables

2017-08-15 Thread Pavel Stehule
2017-08-15 18:13 GMT+02:00 Jeff Janes : > On Tue, Aug 15, 2017 at 3:06 AM, Mariel Cherkassky < > mariel.cherkas...@gmail.com> wrote: > >> Hi, >> So I I run the cheks that jeff mentioned : >> \copy (select * from oracle_remote_table) to /tmp/tmp with binary - 1 >> hour and 35 minutes >> \copy local

Re: [PERFORM] performance problem on big tables

2017-08-15 Thread Scott Marlowe
On Tue, Aug 15, 2017 at 4:06 AM, Mariel Cherkassky wrote: > Hi, > So I ran the checks that Jeff mentioned : > \copy (select * from oracle_remote_table) to /tmp/tmp with binary - 1 hour > and 35 minutes So 26G takes 95 minutes, or about 280 MB/minute or 4.7 MB/second? Sound about right (it's early, I have

Re: [PERFORM] performance problem on big tables

2017-08-15 Thread Jeff Janes
On Tue, Aug 15, 2017 at 3:06 AM, Mariel Cherkassky < mariel.cherkas...@gmail.com> wrote: > Hi, > So I I run the cheks that jeff mentioned : > \copy (select * from oracle_remote_table) to /tmp/tmp with binary - 1 hour > and 35 minutes > \copy local_postresql_table from /tmp/tmp with binary - Didnt

Re: [PERFORM] performance problem on big tables

2017-08-15 Thread Mariel Cherkassky
Hi, So I ran the checks that Jeff mentioned : \copy (select * from oracle_remote_table) to /tmp/tmp with binary - 1 hour and 35 minutes \copy local_postresql_table from /tmp/tmp with binary - Didn't run because the remote oracle database is currently under maintenance work. So I decided to follow

Re: [PERFORM] performance problem on big tables

2017-08-14 Thread Jeff Janes
On Mon, Aug 14, 2017 at 6:24 AM, Mariel Cherkassky < mariel.cherkas...@gmail.com> wrote: > I have performance issues with two big tables. Those tables are located on > an oracle remote database. I'm running the quert : insert into > local_postgresql_table select * from oracle_remote_table. > > The

Re: [PERFORM] performance problem on big tables

2017-08-14 Thread Rick Otten
Moving that many gigs of data across your network could also take a long time, simply depending on your network configuration. Before spending a huge amount of energy tuning postgresql, I'd probably look at how long it takes to simply copy 20 or 30 G of data between the two machines. > On 14 Aug

Re: [PERFORM] performance problem on big tables

2017-08-14 Thread Daniel Blanch Bataller
Hi. In general using COPY is *much faster* than anything else. You can even split the data load and run it in parallel, start with as many jobs as processors you have. Same with indexes, run them in parallel. With parallel I mean various psql running at the same time. Tuning postgres will help

Re: [PERFORM] performance problem on big tables

2017-08-14 Thread MichaelDBA
Total RAM on your host is 5GB, really? Before touching anything else, increase your RAM. That will be your big performance boost right there. Then, you can "up" your effective_cache_size and maintenance_work_mem. Regards, Michael Vitale Mariel Cherkassky
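The two settings mentioned can be raised without a restart, for example (the values are illustrative only — the right numbers depend on the actual RAM, which is exactly the point being made above):

```sql
-- Illustrative only: both parameters take effect on a config reload,
-- no server restart needed.
ALTER SYSTEM SET effective_cache_size = '4GB';
ALTER SYSTEM SET maintenance_work_mem = '512MB';
SELECT pg_reload_conf();
```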

[PERFORM] performance problem on big tables

2017-08-14 Thread Mariel Cherkassky
I have performance issues with two big tables. Those tables are located on an oracle remote database. I'm running the query : insert into local_postgresql_table select * from oracle_remote_table. The first table has 45M records and its size is 23G. The import of the data from the oracle remote dat

Re: [PERFORM] Performance problem with gin index

2015-09-29 Thread Bertrand Paquet
Thank you for your hints. I found a lot of information in this thread http://postgresql.nabble.com/how-to-investigate-GIN-fast-updates-and-cleanup-cycles-td5863756.html Currently, we are monitoring pending_pages (pgstatginindex works on 9.4.4), and run a vacuum every night. We hope it will solve the

Re: [PERFORM] Performance problem with gin index

2015-09-29 Thread Jeff Janes
On Tue, Sep 29, 2015 at 8:45 AM, Bertrand Paquet < bertrand.paq...@doctolib.fr> wrote: > Hi, > > We have got big slow down on our production plateform (PG 9.4.4). > What is it slow compared to? Did your version change, or your workload/usage change? > > After analyzing wals with pg_xlogdump, w

Re: [PERFORM] Performance problem with gin index

2015-09-29 Thread k...@rice.edu
On Tue, Sep 29, 2015 at 05:45:41PM +0200, Bertrand Paquet wrote: > Hi, > > We have got big slow down on our production plateform (PG 9.4.4). > After analyzing wals with pg_xlogdump, we see lot of writing in Gin Indexes. > We suspect slow down are related to the write of pending update on the > ind

[PERFORM] Performance problem with gin index

2015-09-29 Thread Bertrand Paquet
Hi, We have got a big slow down on our production platform (PG 9.4.4). After analyzing wals with pg_xlogdump, we see a lot of writing in Gin Indexes. We suspect the slow down is related to the write of pending updates on the index. So, is there any method to see - what is the current config of gin_pen
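A sketch of how the pending-list inspection discussed in this thread can be done on 9.4 (the index name is a placeholder; pgstatginindex ships with the pgstattuple extension):

```sql
CREATE EXTENSION IF NOT EXISTS pgstattuple;
-- How much pending (not yet merged) data does the GIN index hold?
SELECT version, pending_pages, pending_tuples
FROM pgstatginindex('my_gin_index');
-- If pending-list flushes are the slowdown, fast update can be disabled
-- so entries go straight into the main index structure at write time:
ALTER INDEX my_gin_index SET (fastupdate = off);
```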

Re: [PERFORM] Performance Problem with postgresql 9.03, 8GB RAM,Quadcore Processor Server--Need help!!!!!!!

2011-11-13 Thread Lucas Mocellin
How about your hard disks? You could get a little help from RAID10 SAS 15k disks. If you don't even have RAID, it would help a lot! Lucas. 2011/11/8 Sam Gendler > > > Sent from my iPhone > > On Nov 7, 2011, at 7:21 PM, Mohamed Hashim wrote: > > Hi all, > > Thanks for all your responses. > >

Re: [PERFORM] Performance Problem with postgresql 9.03, 8GB RAM,Quadcore Processor Server--Need help!!!!!!!

2011-11-08 Thread Mario Weilguni
Am 08.11.2011 13:15, schrieb Mohamed Hashim: Hi Sam,Tomas In my first post i have mentioned all how much shared (shared buffers, effective cache size, work mem, etc.) and my OS and hardware information and what are the basic settings i have changed and regarding Explain analyze i gave one sa

Re: [PERFORM] Performance Problem with postgresql 9.03, 8GB RAM,Quadcore Processor Server--Need help!!!!!!!

2011-11-08 Thread Tomas Vondra
On 8 Listopad 2011, 13:15, Mohamed Hashim wrote: > Hi Sam,Tomas > > In my first post i have mentioned all how much shared (shared buffers, > effective cache size, work mem, etc.) and my OS and hardware information > and what are the basic settings i have changed Sorry, I've missed that first messa

Re: [PERFORM] Performance Problem with postgresql 9.03, 8GB RAM,Quadcore Processor Server--Need help!!!!!!!

2011-11-08 Thread Mohamed Hashim
Hi Sam, Tomas. In my first post I mentioned how much shared memory I have (shared buffers, effective cache size, work mem, etc.), my OS and hardware information, and the basic settings I have changed; regarding Explain analyze, I gave one sample query because if I tune that particular table

Re: [PERFORM] Performance Problem with postgresql 9.03, 8GB RAM,Quadcore Processor Server--Need help!!!!!!!

2011-11-08 Thread Tomas Vondra
On 8 Listopad 2011, 4:21, Mohamed Hashim wrote: > Hi all, > > Thanks for all your responses. > > Sorry for late response > > Earlier we used Postgres8.3.10 with Desktop computer (as server) and > configuration of the system (I2 core with 4GB RAM) and also the > application > was slow i dint change

Re: [PERFORM] Performance Problem with postgresql 9.03, 8GB RAM,Quadcore Processor Server--Need help!!!!!!!

2011-11-08 Thread Sam Gendler
Sent from my iPhone On Nov 7, 2011, at 7:21 PM, Mohamed Hashim wrote: > Hi all, > > Thanks for all your responses. > > Sorry for late response > > Earlier we used Postgres8.3.10 with Desktop computer (as server) and > configuration of the system (I2 core with 4GB RAM) and also the applicat

Re: [PERFORM] Performance Problem with postgresql 9.03, 8GB RAM,Quadcore Processor Server--Need help!!!!!!!

2011-11-07 Thread Mohamed Hashim
Hi all, Thanks for all your responses. Sorry for the late response. Earlier we used Postgres 8.3.10 with a desktop computer (as server) and configuration of the system (I2 core with 4GB RAM) and the application was also slow; I didn't change any postgres config settings. Maybe because of the low config. We t

Re: [PERFORM] Performance Problem with postgresql 9.03, 8GB RAM,Quadcore Processor Server--Need help!!!!!!!

2011-11-03 Thread Mario Weilguni
Am 03.11.2011 17:08, schrieb Tomas Vondra: On 3 Listopad 2011, 16:02, Mario Weilguni wrote: No doubt about that, querying tables using conditions on array columns is not the best direction in most cases, especially when those tables are huge. Still, the interesting part here is that the OP clai

Re: [PERFORM] Performance Problem with postgresql 9.03, 8GB RAM,Quadcore Processor Server--Need help!!!!!!!

2011-11-03 Thread Tomas Vondra
On 3 Listopad 2011, 16:02, Mario Weilguni wrote: > Am 02.11.2011 08:12, schrieb Mohamed Hashim: >> Dear All >> >> Thanks for your suggestions & replies. >> >> The below are the sample query which i put for particular one bill_id >> >> EXPLAIN ANALYZE SELECT abd.bill_no as >> bill_no,to_char(abd.bil

Re: [PERFORM] Performance Problem with postgresql 9.03, 8GB RAM,Quadcore Processor Server--Need help!!!!!!!

2011-11-03 Thread Mario Weilguni
Am 02.11.2011 08:12, schrieb Mohamed Hashim: Dear All Thanks for your suggestions & replies. The below are the sample query which i put for particular one bill_id EXPLAIN ANALYZE SELECT abd.bill_no as bill_no,to_char(abd.bill_date,'dd/mm/') AS date,mp.product_desc as product_desc,std.qua

Re: [PERFORM] Performance Problem with postgresql 9.03, 8GB RAM,Quadcore Processor Server--Need help!!!!!!!

2011-11-02 Thread Tom Lane
Mohamed Hashim writes: > The below are the sample query which i put for particular one bill_id > EXPLAIN ANALYZE SELECT abd.bill_no as > bill_no,to_char(abd.bill_date,'dd/mm/') AS date,mp.product_desc as > product_desc,std.quantity,std.area,rip.price AS rate > FROM acc_bill_items_106 abi >

Re: [PERFORM] Performance Problem with postgresql 9.03, 8GB RAM,Quadcore Processor Server--Need help!!!!!!!

2011-11-02 Thread Mohamed Hashim
Dear All Thanks for your suggestions & replies. The below are the sample query which i put for particular one bill_id EXPLAIN ANALYZE SELECT abd.bill_no as bill_no,to_char(abd.bill_date,'dd/mm/') AS date,mp.product_desc as product_desc,std.quantity,std.area,rip.price AS rate FROM acc_bill_it

Re: [PERFORM] Performance Problem with postgresql 9.03, 8GB RAM,Quadcore Processor Server--Need help!!!!!!!

2011-11-01 Thread Tom Lane
Marcus Engene writes: > After I upgraded from Postgres 8.3/8.4 to 9.0 I had all sorts of > problems with queries with many joins. Queries that used to take 1ms > suddenly take half a minute for no apparent reason. Could we see a concrete test case, rather than hand waving? If there's really a

Re: [PERFORM] Performance Problem with postgresql 9.03, 8GB RAM,Quadcore Processor Server--Need help!!!!!!!

2011-11-01 Thread Tomas Vondra
On 1 Listopad 2011, 10:57, Marcus Engene wrote: > Hi Hashim, > > One workaround I've done is if something looking like this > > select > ... > from > table_linking_massive_table tlmt > ,massive_table mt > ,some_table1 st1 > ,some_table2 st2 > ,some_table3 st3 > ,so

Re: [PERFORM] Performance Problem with postgresql 9.03, 8GB RAM,Quadcore Processor Server--Need help!!!!!!!

2011-11-01 Thread k...@rice.edu
On Tue, Nov 01, 2011 at 08:33:51AM +0530, Mohamed Hashim wrote: > Any idea or suggestions how to improve my database best > performance.??? > > Regards > Hashim > Hi Hashim, Ignoring the description of your tables, you should probably try updating to the latest release 9.0.5. You

Re: [PERFORM] Performance Problem with postgresql 9.03, 8GB RAM,Quadcore Processor Server--Need help!!!!!!!

2011-11-01 Thread Marcus Engene
Hi Hashim, After I upgraded from Postgres 8.3/8.4 to 9.0 I had all sorts of problems with queries with many joins. Queries that used to take 1ms suddenly take half a minute for no apparent reason. I have 72GB which I think makes the planner go bonkers and be too eager doing a seq scan. I tri

Re: [PERFORM] Performance Problem with postgresql 9.03, 8GB RAM,Quadcore Processor Server--Need help!!!!!!!

2011-10-31 Thread Mohamed Hashim
Any idea or suggestions how to improve my database best performance.??? Regards Hashim On Sat, Oct 29, 2011 at 9:40 AM, Mohamed Hashim wrote: > Thanks Alban & Gregg. > > > i will describe little more about that table > > >- We are using PHP application with Apache server & P

Re: [PERFORM] Performance Problem with postgresql 9.03, 8GB RAM,Quadcore Processor Server--Need help!!!!!!!

2011-10-28 Thread Mohamed Hashim
Thanks Alban & Gregg. i will describe little more about that table - We are using PHP application with Apache server & Postgresql 9.0.3 in a dedicated server. - stk_source table is mainly used to track the transactions from parent to child Table "_100

Re: [PERFORM] Performance Problem with postgresql 9.03, 8GB RAM,Quadcore Processor Server--Need help!!!!!!!

2011-10-28 Thread Mohamed Hashim
Actually we are using various views and functions to get the info for reporting purposes; in those views or functions we have used or joined the above mentioned table. I thought I would get a reply from someone on the lists, which is the only reason I posted; anyway I will continue with only the pgsql-performance mailing list

Re: [PERFORM] Performance Problem with postgresql 9.03, 8GB RAM,Quadcore Processor Server--Need help!!!!!!!

2011-10-28 Thread Gregg Jaskiewicz
what sort of queries you are running against it ? the select * from.. is not really (hopefully) a query you are running from your php app. -- Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-p

[PERFORM] Performance Problem with postgresql 9.03, 8GB RAM,Quadcore Processor Server--Need help!!!!!!!

2011-10-28 Thread Mohamed Hashim
I have Quadcore server with 8GB RAM vendor_id : GenuineIntel cpu family : 6 model : 44 model name : Intel(R) Xeon(R) CPU E5607 @ 2.27GHz stepping: 2 cpu MHz : 1197.000 cache size : 8192 KB MemTotal:8148636 kB MemFree: 4989

Re: [PERFORM] Performance problem with a table with 38928077 record

2011-10-27 Thread Tom Lane
Shaun Thomas writes: > On 10/27/2011 02:13 PM, Robert Haas wrote: >> If I had to guess, I'd bet that the second one is trying to spool the >> resultset in memory someplace and that's driving the machine into >> swap. > That would be my guess too. SELECT * on a 40-million row table is a > *lot* d

Re: [PERFORM] Performance problem with a table with 38928077 record

2011-10-27 Thread Shaun Thomas
On 10/27/2011 02:13 PM, Robert Haas wrote: If I had to guess, I'd bet that the second one is trying to spool the resultset in memory someplace and that's driving the machine into swap. That would be my guess too. SELECT * on a 40-million row table is a *lot* different than getting the count,
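One standard way around the client-side spooling described above (a sketch, not something proposed in this snippet) is to read the table through a cursor, so rows are fetched in batches instead of materializing all ~40 million at once:

```sql
-- Fetch a huge result set incrementally instead of spooling it on the
-- client. Cursor name and batch size are illustrative.
BEGIN;
DECLARE perm_cur NO SCROLL CURSOR FOR SELECT * FROM dm_object_perm;
FETCH FORWARD 10000 FROM perm_cur;  -- repeat until no rows come back
CLOSE perm_cur;
COMMIT;
```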

Re: [PERFORM] Performance problem with a table with 38928077 record

2011-10-27 Thread Robert Haas
What is a bit strange about this is that you can do this: On Fri, Oct 7, 2011 at 6:04 AM, Giovanni Mancuso wrote: > select count(*) from dm_object_perm; > count > -- > 38928077 > (1 row) > But not this: If i run "explain analyze select * from dm_object_perm;" it goes on for many >

Re: [PERFORM] Performance problem with a table with 38928077 record

2011-10-08 Thread Giovanni Mancuso
I cleaned all unused data, ran VACUUM FULL and ran REINDEX on dm_object_perm. In my table, now i have: -- 24089952 But the problem is the same. Thanks On 07/10/2011 17:12, Giovanni Mancuso wrote:

Re: [PERFORM] Performance problem with a table with 38928077 record

2011-10-07 Thread Giovanni Mancuso
On 07/10/2011 12:24, Guillaume Cottenceau wrote: Giovanni Mancuso writes: select count(*) from dm_object_perm; count -- 38928077 (1 row) [...] If i run "explain analyze select *

Re: [PERFORM] Performance problem with a table with 38928077 record

2011-10-07 Thread Cédric Villemain
2011/10/7 Giovanni Mancuso > Hi, > > I have a problem with my postgres 8.2. > > I have an application that writes objects (file, folder, etc.) and another > table that has accounts. These two tables are linked with another table that > has permissions for each object + account. > > My structure

Re: [PERFORM] Performance problem with a table with 38928077 record

2011-10-07 Thread Gregg Jaskiewicz
Do you need the left join ? Can you further normalize the tables? (to lower the I/O) Can you upgrade to at least 8.3 ? It has huge performance improvements over 8.2. -- Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org) To make changes to your subscription: http://www.postgre

Re: [PERFORM] Performance problem with a table with 38928077 record

2011-10-07 Thread Guillaume Cottenceau
Giovanni Mancuso writes: > select count(*) from dm_object_perm; > count > -- > 38928077 > (1 row) [...] > If i run "explain analyze select * from dm_object_perm;" it goes on for many > hours. Almost 39 million records is not small, especially if you run on poor hardware[1], poor

[PERFORM] Performance problem with a table with 38928077 record

2011-10-07 Thread Giovanni Mancuso
Hi, I have a problem with my postgres 8.2. I have an application that writes objects (file, folder, etc.) and another table that has accounts. These two tables are linked with another table that has permissions for each object + acc

Re: [PERFORM] performance problem with LIMIT (order BY in DESC order). Wrong index used?

2011-04-12 Thread Tom Lane
Claudio Freire writes: > Did you try increasing the statistic targets? > AFAIK, it looks a lot like the planner is missing stats, since it > estimates the index query on idx_nfi_newsfeed will fetch 10k rows - > instead of 25. BTW, this is the right suggestion, but for the wrong reason. You seem

Re: [PERFORM] performance problem with LIMIT (order BY in DESC order). Wrong index used?

2011-04-12 Thread Dieter Rehbein
Thanks a lot guys, I will try that out. regards Dieter On 12.04.2011 at 11:07, Claudio Freire wrote: On Tue, Apr 12, 2011 at 10:59 AM, Dieter Rehbein wrote: > I just executed a VACUUM ANALYZE and now everything performs well. hm, > strange. That probably means you need more statistics -

Re: [PERFORM] performance problem with LIMIT (order BY in DESC order). Wrong index used?

2011-04-12 Thread tv
> On Tue, Apr 12, 2011 at 10:59 AM, Dieter Rehbein > wrote: >> I just executed a VACUUM ANALYZE and now everything performs well. hm, >> strange. > > That probably means you need more statistics - try increasing the > newsfeed's statistics target count. > > ALTER TABLE newsfeed_item ALTER COLUMN n

Re: [PERFORM] performance problem with LIMIT (order BY in DESC order). Wrong index used?

2011-04-12 Thread Claudio Freire
On Tue, Apr 12, 2011 at 10:59 AM, Dieter Rehbein wrote: > I just executed a VACUUM ANALYZE and now everything performs well. hm, > strange. That probably means you need more statistics - try increasing the newsfeed's statistics target count. ALTER TABLE newsfeed_item ALTER COLUMN newsfeed SET S
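The truncated command above presumably continues along these lines (a sketch — the column name is inferred from the index name idx_nfi_newsfeed mentioned earlier in the thread, and the target value is an illustrative assumption):

```sql
-- Sketch: raise the per-column statistics target and re-analyze so the
-- planner gets better row estimates. Column name and target value are
-- assumptions, not confirmed by the thread.
ALTER TABLE newsfeed_item ALTER COLUMN newsfeed SET STATISTICS 1000;
ANALYZE newsfeed_item;
```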

Re: [PERFORM] performance problem with LIMIT (order BY in DESC order). Wrong index used?

2011-04-12 Thread Dieter Rehbein
What I did was an ANALYZE, which did not change anything. I just executed a VACUUM ANALYZE and now everything performs well. hm, strange. thanks Dieter On 12.04.2011 at 09:42, Claudio Freire wrote: On Tue, Apr 12, 2011 at 7:20 AM, Dieter Rehbein wrote: > Hi everybody, > > I have a perfor

Re: [PERFORM] performance problem with LIMIT (order BY in DESC order). Wrong index used?

2011-04-12 Thread Claudio Freire
On Tue, Apr 12, 2011 at 7:20 AM, Dieter Rehbein wrote: > Hi everybody, > > I have a performance-problem with a query using a LIMIT. There are other > threads rergading performance issues with LIMIT, but I didn't find useful > hints for our problem and it might > be interesting for other postgres

[PERFORM] performance problem with LIMIT (order BY in DESC order). Wrong index used?

2011-04-11 Thread Dieter Rehbein
Hi everybody, I have a performance-problem with a query using a LIMIT. There are other threads rergading performance issues with LIMIT, but I didn't find useful hints for our problem and it might be interesting for other postgres-users. There are only 2 simple tables: CREATE TABLE newsfeed (

Re: [PERFORM] Performance problem with joined aggregate query

2010-09-16 Thread Merlin Moncure
On Thu, Sep 16, 2010 at 1:51 AM, Anssi Kääriäinen wrote: > Yes, looks like this is the same case. This makes it hard to use views > having group by in them, as the whole group by part will always be > executed. Back to planning board then... > > I guess my possibilities for pivot views are: >  - c

Re: [PERFORM] Performance problem with joined aggregate query

2010-09-16 Thread Anssi Kääriäinen
On 09/16/2010 01:25 AM, Merlin Moncure wrote: Take a look at this, and the responses. Is it the same case?: http://www.mail-archive.com/pgsql-performance@postgresql.org/msg21756.html merlin Yes, looks like this is the same case. This makes it hard to use views having group by in them, as th

Re: [PERFORM] Performance problem with joined aggregate query

2010-09-15 Thread Merlin Moncure
On Wed, Sep 15, 2010 at 2:26 AM, Anssi Kääriäinen wrote: > Hello all, > > I am trying to use aggregate queries in views, and when joining these views > to other > tables, I get seq scan in the view, even if index scan would be clearly > better. The views > I am using in my Db are actually long piv

[PERFORM] Performance problem with joined aggregate query

2010-09-15 Thread Anssi Kääriäinen
Hello all, I am trying to use aggregate queries in views, and when joining these views to other tables, I get seq scan in the view, even if index scan would be clearly better. The views I am using in my Db are actually long pivot queries, but the following simple test case is enough to show t
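A minimal reconstruction of the reported pattern (all names are assumptions; the original test case is truncated above): a GROUP BY view joined to another table forces the whole aggregate to be computed first, because the outer restriction cannot be pushed down through the aggregation.

```sql
CREATE TABLE dim  (id int PRIMARY KEY, name text);
CREATE TABLE fact (id int PRIMARY KEY, grp int REFERENCES dim, val int);

CREATE VIEW fact_sums AS
    SELECT grp, sum(val) AS total
      FROM fact
     GROUP BY grp;

-- The WHERE d.id = 1 restriction is not pushed below the GROUP BY,
-- so fact is aggregated in full (seq scan) before the join:
EXPLAIN
SELECT d.name, s.total
  FROM dim d
  JOIN fact_sums s ON s.grp = d.id
 WHERE d.id = 1;
```

A common workaround in this situation is to inline the aggregate as a correlated subquery against the join key, so the restriction reaches the base table.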

Re: [PERFORM] Performance problem with UNION ALL view and domains

2007-11-24 Thread Dean Rasheed
> It looks like the problem is that the UNION is taken as producing plain
> text output, as you can see with \d:
>
> regression=# \d foo
>      Table "public.foo"
>  Column |   Type   | Modifiers
> --------+----------+-----------
>  a      | foo_text | not null
>  b      | text     |
> Indexes:
>     "foo_pkey" PRIMARY KEY, bt

Re: [PERFORM] Performance problem with UNION ALL view and domains

2007-11-23 Thread Tom Lane
"Jeff Larsen" <[EMAIL PROTECTED]> writes: > On Nov 23, 2007 7:29 AM, Dean Rasheed <[EMAIL PROTECTED]> wrote: >> I am having a performance problem trying to query a view which is a >> UNION ALL of 2 tables. I have narrowed the problem down to my use of >> DOMAINS in the underlying table. > In my ca

Re: [PERFORM] Performance problem with UNION ALL view and domains

2007-11-23 Thread Jeff Larsen
On Nov 23, 2007 7:29 AM, Dean Rasheed <[EMAIL PROTECTED]> wrote: > I am having a performance problem trying to query a view which is a > UNION ALL of 2 tables. I have narrowed the problem down to my use of > DOMAINS in the underlying table. So in the test-case below, when the > column "a" is of dom

[PERFORM] Performance problem with UNION ALL view and domains

2007-11-23 Thread Dean Rasheed
Hi, I am having a performance problem trying to query a view which is a UNION ALL of 2 tables. I have narrowed the problem down to my use of DOMAINS in the underlying table. So in the test-case below, when the column "a" is of domain type foo_text, the query runs slowly using the following plan:
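The test case can be sketched roughly as follows (an assumption; the original DDL is truncated above):

```sql
CREATE DOMAIN foo_text AS text;

CREATE TABLE t1 (a foo_text NOT NULL PRIMARY KEY, b text);
CREATE TABLE t2 (a foo_text NOT NULL PRIMARY KEY, b text);

CREATE VIEW v AS
    SELECT a, b FROM t1
    UNION ALL
    SELECT a, b FROM t2;

-- The UNION resolves column "a" to the domain's base type (plain text),
-- and the resulting coercion can prevent the planner from using the
-- indexes on "a" inside each branch:
EXPLAIN SELECT * FROM v WHERE a = 'x';
```

With `a` declared as plain `text` instead of the domain, the same query plans as two index scans, which matches the slowdown the poster narrowed down to domain use.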

RE: [PERFORM] Performance problem (outer join + view + non-strict functions)

2007-11-18 Thread Dean Rasheed
Ah yes, I see the problem. I see that it is also going to be a problem where I have used CASE..WHEN in the select list of views :-( Naively, couldn't the subquery be pulled up if any non-nullable columns from the right table t2 were automatically wrapped in a simple function which returned NUL

Re: [PERFORM] Performance problem (outer join + view + non-strict functions)‏

2007-11-18 Thread Tom Lane
Dean Rasheed <[EMAIL PROTECTED]> writes: > I am having performance problems running a number of queries > involving views based on non-strict functions. I have reproduced the > problem with the simple test-case below which shows how the query plan > is different depending on whether the view uses s

[PERFORM] Performance problem (outer join + view + non-strict functions)

2007-11-18 Thread Dean Rasheed
Hi, I am having performance problems running a number of queries involving views based on non-strict functions. I have reproduced the problem with the simple test-case below which shows how the query plan is different depending on whether the view uses strict or non-strict functions (even though
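The effect described can be reproduced with a view whose select list contains a non-strict expression such as COALESCE or CASE (a sketch under assumed names; the original test case is truncated above):

```sql
CREATE TABLE t1 (id int PRIMARY KEY);
CREATE TABLE t2 (id int PRIMARY KEY, val int);

-- coalesce() is not strict: it can return non-NULL for a NULL input,
-- so the planner cannot flatten this subquery through a left join --
-- doing so would wrongly yield 0 instead of NULL for unmatched rows.
CREATE VIEW v2 AS
    SELECT id, coalesce(val, 0) AS val
      FROM t2;

EXPLAIN
SELECT t1.id, v2.val
  FROM t1
  LEFT JOIN v2 ON v2.id = t1.id;
```

The same view built from only strict expressions (e.g. `val + 0`) is pulled up into the outer query and plans as a plain outer join, which is the difference the poster observed.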

Re: [PERFORM] Performance problem with table containing a lot of text (blog)

2007-08-29 Thread Heikki Linnakangas
Kari Lavikka wrote: > On Wed, 29 Aug 2007, Heikki Linnakangas wrote: > >> The idea of being able to set the toast threshold per column was >> discussed during 8.3 development, but no patch was produced IIRC. We >> might do that in the future. If you're willing to compile from source, >> you can lo

Re: [PERFORM] Performance problem with table containing a lot of text (blog)

2007-08-29 Thread Kari Lavikka
On Wed, 29 Aug 2007, Heikki Linnakangas wrote: The idea of being able to set the toast threshold per column was discussed during 8.3 development, but no patch was produced IIRC. We might do that in the future. If you're willing to compile from source, you can lower TOAST_TUPLE_THRESHOLD. We ar

Re: [PERFORM] Performance problem with table containing a lot of text (blog)

2007-08-29 Thread Heikki Linnakangas
Kari Lavikka wrote: > It would be nice if I could flag a column to be toasted always, > regardless of it's length. The idea of being able to set the toast threshold per column was discussed during 8.3 development, but no patch was produced IIRC. We might do that in the future. If you're willing to
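While the compile-time TOAST_TUPLE_THRESHOLD was the only size knob in the era discussed here, the per-column storage strategy can at least be influenced (a sketch; the `blog`/`content` names are assumptions based on the thread):

```sql
-- Store the article body out of line and uncompressed,
-- keeping the main heap tuples narrow:
ALTER TABLE blog ALTER COLUMN content SET STORAGE EXTERNAL;

-- Or out of line with compression (the default for text):
ALTER TABLE blog ALTER COLUMN content SET STORAGE EXTENDED;
```

Note that SET STORAGE only changes how a value is treated once the toaster runs; it does not lower the tuple-size threshold that triggers toasting, which is precisely the limitation the thread is about.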

Re: [PERFORM] Performance problem with table containing a lot of text (blog)

2007-08-28 Thread Kari Lavikka
I didn't see your schema, but couldn't these problems be solved by storing the article id, owner id, and blog date in a separate table? It seems that if you don't actually need the content of the blogs, all of those questions could be answered by querying a very simple table with minimal I/O
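The suggestion above — keep the blog metadata in a narrow table so that listing queries never touch the wide text column — can be sketched as follows (the schema is an assumption, since the original was not posted):

```sql
CREATE TABLE blog_meta (
    article_id bigint PRIMARY KEY,
    owner_id   bigint NOT NULL,
    blog_date  date   NOT NULL
);
CREATE INDEX blog_meta_owner_date_idx ON blog_meta (owner_id, blog_date);

-- The wide content lives in its own table, fetched only per article:
CREATE TABLE blog_body (
    article_id bigint PRIMARY KEY REFERENCES blog_meta,
    content    text   NOT NULL
);

-- Listing a user's recent entries touches only the narrow table:
SELECT article_id, blog_date
  FROM blog_meta
 WHERE owner_id = 123
 ORDER BY blog_date DESC
 LIMIT 20;
```

Because `blog_meta` rows are a few dozen bytes, hundreds of them fit in each 8k page, turning the many random seeks the poster describes into a handful of page reads.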

Re: [PERFORM] Performance problem with table containing a lot of text (blog)

2007-08-28 Thread Dan Harris
Kari Lavikka wrote: Hello! Some background info.. We have a blog table that contains about eight million blog entries. Average length of an entry is 1200 letters. Because each 8k page can accommodate only a few entries, every query that involves several entries causes several random seeks to

[PERFORM] Performance problem with table containing a lot of text (blog)

2007-08-28 Thread Kari Lavikka
Hello! Some background info.. We have a blog table that contains about eight million blog entries. Average length of an entry is 1200 letters. Because each 8k page can accommodate only a few entries, every query that involves several entries causes several random seeks to disk. We are having

Re: [PERFORM] Performance Problem

2007-06-05 Thread Gregory Stark
"Gauri Kanekar" <[EMAIL PROTECTED]> writes: > Before doing vacuum full on the database this query used to take less > than 4 min. But now after vacuuming and reindexing the tables it is > taking 73 min. Vacuum full is generally not necessary. You do need to ensure regular vacuum is run frequently
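The advice in this thread reduces to: rely on routine VACUUM (or autovacuum) plus ANALYZE, and if VACUUM FULL is ever run, follow it with ANALYZE, since it rewrites the tables without refreshing planner statistics (a sketch; `some_table` is a placeholder):

```sql
-- Routine maintenance: reclaims dead space and keeps statistics fresh:
VACUUM ANALYZE;

-- VACUUM FULL rewrites the table but does not update planner statistics;
-- it also takes an exclusive lock, so it is rarely the right tool:
VACUUM FULL some_table;
ANALYZE some_table;
```

Stale statistics after a full-table rewrite are a common cause of exactly the kind of regression reported here (4 minutes before, 73 minutes after), which is why the first follow-up asks whether the table was analyzed recently.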

Re: [PERFORM] Performance Problem

2007-06-05 Thread Steinar H. Gunderson
On Tue, Jun 05, 2007 at 03:23:35PM +0530, Gauri Kanekar wrote: > Before doing vacuum full on the database this query used to take less > than 4 min. But now after vacuuming and reindexing the tables it is > taking 73 min. Did you analyze the table recently? Some of the selectivity estimates seem q

[PERFORM] Performance Problem

2007-06-05 Thread Gauri Kanekar
Hi,

explain analyze
SELECT am.campaign_id, am.optimize_type, am.creative_id, am.optimize_by_days,
       am.impressions_delta, am.clicks_delta, am.channel_code, am.cost,
       dm.allocation_map_id,
       SUM(CASE dm.sqldate when 20070602 then dm.impressions_delivered else 0 end) as deliv_yest,
       SUM(CASE sign(20070526
