set log_min_duration_statement = 300000
(300,000 ms = 5 min)
From the docs:
log_min_duration_statement (integer)
Causes the duration of each completed statement to be logged if the
statement ran for at least the specified number of milliseconds. Setting
this to zero prints all statement durations.
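For illustration, a minimal session-level sketch (to apply it server-wide, put the same value in postgresql.conf and reload):
SET log_min_duration_statement = 300000;  -- milliseconds, i.e. 5 minutes
SHOW log_min_duration_statement;          -- confirm the value took effect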
Thanks, you guys are correct... the size of the table caused the optimizer
to do a seq scan instead of using the index. I tried it on a 24 MB and a 1
GB table and the expected index was used.
On Fri, Feb 17, 2017 at 7:04 PM, Tom Lane wrote:
> "David G. Johnston" writes:
> > On Fri, Feb 17, 2017 at 5:52 PM, Tomas Vondra wrote:
> On 02/17/2017 11:42 PM, David G. Johnston wrote:
>
>> On Fri, Feb 17, 2017 at 3:19 PM, Hustler DBA <hustler...@gmail.com> wrote:
>>
>>
>>
>> my_db=# create index tab_idx1 on tab(ID);
I am seeing this strange behavior, I don't know if this is by design in
Postgres.
I have an index on a column which is defined as "character varying(255)".
When the value I am searching for is of a certain length, the optimizer
uses the index, but when the value is long, the optimizer doesn't use the index.
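As a quick sanity check (a sketch using the tab / ID names from the snippet above), temporarily disabling sequential scans in a test session shows whether the planner is merely preferring a seq scan on a small table rather than being unable to use the index:
SET enable_seqscan = off;   -- for testing in a session only, never as a server-wide setting
EXPLAIN SELECT * FROM tab WHERE ID = 'some-value';
RESET enable_seqscan;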
On 01/06/2017 12:24 PM, Ivan Voras wrote:
Hello,
I'm investigating options for an environment which has about a dozen
servers and several dozen databases on each, and they occasionally
need to run huge reports which slow down other services. This is of
course "legacy code". After some discu
Possibly some buffer caching is happening; what happens if you then
remove the added index and run the query again?
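A hedged sketch of that check (the index name here is hypothetical; substitute the one that was just added):
DROP INDEX IF EXISTS newly_added_idx;  -- hypothetical name for the index in question
-- then re-run the original query twice with EXPLAIN ANALYZE; if the second run is much
-- faster than the first, buffer caching (not the index) explains the earlier speedup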
On 12/27/2016 10:38 AM, Valerii Valeev wrote:
Thank you David,
I used the same rationale to convince my colleague; it didn't work :)
Sort of “pragmatic” person who does what seems
Something like this in a bash script?
#!/bin/bash
# list every user table except the excluded ones, then process them one per line
echo "select schemaname, tablename from pg_tables where tablename not in
(your list of excluded tables) and schemaname not in
('information_schema', 'pg_catalog')" | psql -t > /tmp/tablist
exec < /tmp/tablist
while read line
do
set -- $line               # psql -t prints: schemaname | tablename
schema=$1
table=$3                   # $2 is the '|' separator in psql's default output
# per-table work goes here
done
Hi All;
We have a client running on VMware; they have heavy write traffic and we
want to isolate the IO for the tx logs (pg_xlog). However, it seems the
best plan based on feedback from the client is either
(a) simply leave the pg_xlog dir in the VMDK
or
(b) relocate pg_xlog to NAS/NFS
I'
This is on a CentOS 6.5 box running PostgreSQL 9.2
On 11/13/14 4:09 PM, CS DBA wrote:
All;
We have a large db server with 128GB of ram running complex functions.
with the server set to have the following we were seeing a somewhat
low hit ratio and lots of temp buffers
shared_buffers
All;
We have a large db server with 128GB of RAM running complex functions.
With the server configured as follows, we were seeing a somewhat low
hit ratio and lots of temp buffers:
shared_buffers = 18GB
work_mem = 75MB
effective_cache_size = 105GB
checkpoint_segments = 128
when we increas
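A quick way to put numbers on the hit ratio and the temp-file spill is a query against pg_stat_database (a sketch; the temp_files / temp_bytes columns exist on 9.2 and later):
SELECT datname,
       blks_hit,
       blks_read,
       round(blks_hit::numeric / nullif(blks_hit + blks_read, 0), 4) AS hit_ratio,
       temp_files,
       temp_bytes
FROM pg_stat_database;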
All;
I'm working with a client running PostgreSQL on a Fusion-IO drive.
They have a PostgreSQL setup guide from Fusion recommending the
following settings:
effective_io_concurrency=0
bgwriter_lru_maxpages=0
random_page_cost=0.1
sequential_page_cost=0.1
These seem odd to me, effectively turni
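For comparison, a small sketch for checking those recommendations against the running values (note that the parameter PostgreSQL actually recognizes is seq_page_cost, not sequential_page_cost):
SELECT name, setting, boot_val
FROM pg_settings
WHERE name IN ('effective_io_concurrency', 'bgwriter_lru_maxpages',
               'random_page_cost', 'seq_page_cost');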
On 11/03/2011 09:40 AM, Robert Haas wrote:
On Thu, Nov 3, 2011 at 11:31 AM, Rodrigo Gonzalez
wrote:
El 03/11/11 11:42, Robert Haas escribió:
On Fri, Oct 28, 2011 at 9:39 AM, CS DBA wrote:
No parameters, one of them looks like this:
[ code snippet ]
It's hard to believe this is the
On 11/02/2011 02:45 PM, Scott Marlowe wrote:
On Wed, Nov 2, 2011 at 2:21 PM, CS DBA wrote:
Hi All;
The contab2 table below contains ~400,000 rows. This query should not take
this long. We've tweaked work_mem up to 50MB, ensured that the appropriate
indexes are in place, etc...
Thoughts?
Hi All;
The contab2 table below contains ~400,000 rows. This query should not
take this long. We've tweaked work_mem up to 50MB, ensured that the
appropriate indexes are in place, etc...
Thoughts?
Thanks in advance
Explain analyze:
SELECT contab2.contacts_tab
FROM contab2
INNER JOIN scta
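Before digging into the plan, it may be worth confirming that the planner statistics on contab2 are current; a minimal sketch:
ANALYZE contab2;
SELECT relname, n_live_tup, last_analyze, last_autoanalyze
FROM pg_stat_user_tables
WHERE relname = 'contab2';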
On 10/27/2011 11:10 PM, Tom Lane wrote:
CS DBA writes:
I have code that drops a table, re-creates it (based on a long set of
joins) and then re-creates the indexes.
It runs via psql in about 10 seconds. I took the code and simply
wrapped it into a plpgsql function and the function version takes almost
60 seconds to run.
Hi All ;
I have code that drops a table, re-creates it (based on a long set of
joins) and then re-creates the indexes.
It runs via psql in about 10 seconds. I took the code and simply
wrapped it into a plpgsql function and the function version takes almost
60 seconds to run.
I always tho
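One workaround often suggested for this drop/re-create pattern (a hedged sketch with hypothetical object names, not necessarily what this thread concluded) is to run the statements through EXECUTE so plpgsql does not hold plans that reference the old table:
CREATE OR REPLACE FUNCTION rebuild_report_tab() RETURNS void AS $$
BEGIN
    EXECUTE 'DROP TABLE IF EXISTS report_tab';
    -- placeholder SELECT; the real function would use the long set of joins
    EXECUTE 'CREATE TABLE report_tab AS SELECT 1 AS id';
    EXECUTE 'CREATE INDEX report_tab_idx1 ON report_tab (id)';
END;
$$ LANGUAGE plpgsql;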
On 10/11/2011 12:03 PM, Szymon Guz wrote:
On 11 October 2011 19:52, CS DBA <cs_...@consistentstate.com> wrote:
Hi all ;
I'm trying to tune a difficult query.
I have 2 tables:
cust_acct (9 million rows)
cust_orders (200,000 rows)
Here's the query:
On 10/11/2011 12:02 PM, Pavel Stehule wrote:
Hello
please, send EXPLAIN ANALYZE output instead.
Regards
Pavel Stehule
2011/10/11 CS DBA:
Hi all ;
I'm trying to tune a difficult query.
I have 2 tables:
cust_acct (9 million rows)
cust_orders (200,000 rows)
Here's the query:
Hi all ;
I'm trying to tune a difficult query.
I have 2 tables:
cust_acct (9 million rows)
cust_orders (200,000 rows)
Here's the query:
SELECT
a.account_id, a.customer_id, a.order_id, a.primary_contact_id,
a.status, a.customer_location_id, a.added_date,
o.agent_id, p.order_location
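Alongside the EXPLAIN ANALYZE output Pavel asked for, it may help to list what indexes the two tables actually have; a small sketch:
SELECT tablename, indexname, indexdef
FROM pg_indexes
WHERE tablename IN ('cust_acct', 'cust_orders');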
On 06/02/2011 11:31 AM, Shaun Thomas wrote:
On 06/02/2011 11:15 AM, Kevin Grittner wrote:
They all gave the same result, of course, and they all used a seq
scan.
And they all will. I created a test table with a bunch of
generate_series and emulated 200 unique matches of column1 and
column2
On 06/01/2011 03:15 PM, Merlin Moncure wrote:
On Wed, Jun 1, 2011 at 3:14 PM, CS DBA wrote:
Hi All;
We have a table with approx 200 columns. About a dozen columns are text data
types and the rest are a mix of integers, bigints and double precision
types.
The table has about 25 million rows.
On 06/01/2011 03:38 PM, Kevin Grittner wrote:
CS DBA wrote:
The app wants to run a query like this:
select count(pri_num) from max_xtrv_st_t
where pri_num in (select max(pri_num) from max_xtrv_st_t where 1=1
group by tds_cx_ind, cxs_ind_2)
Why not something
Hi All;
We have a table with approx 200 columns. About a dozen columns are text
data types and the rest are a mix of integers, bigints and double
precision types.
The table has about 25 million rows.
The app wants to run a query like this:
select count(pri_num) from max_xtrv_st_t
where pri_num in (select max(pri_num) from max_xtrv_st_t where 1=1
group by tds_cx_ind, cxs_ind_2)
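For reference, a sketch of one common rewrite, assuming the intent is simply to count one row per (tds_cx_ind, cxs_ind_2) group (it is not strictly equivalent if other rows happen to share a group's maximum pri_num):
SELECT count(*)
FROM (SELECT max(pri_num) AS max_pri
      FROM max_xtrv_st_t
      GROUP BY tds_cx_ind, cxs_ind_2) AS grp_max;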
I have two identical tables, but the widths of the fields are different. I need
to know whether changing from varchar(100) to varchar(30) will increase the
performance, or if it's just the memory access.
create table a (address1 int, address2 int, address3 int)
create table b (address int[3])
I have created two tables. In the first table I am using separate fields to
store 3 addresses.
In table b, I am using an array data type to store the 3 addresses. Is
there any performance-related issue I would face?
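A minimal sketch comparing reads on the two layouts (note that PostgreSQL treats the declared size in int[3] as documentation only; it does not enforce three elements):
create table a (address1 int, address2 int, address3 int);
create table b (address int[3]);
insert into a values (1, 2, 3);
insert into b values (array[1, 2, 3]);
select address2 from a;    -- plain column reference
select address[2] from b;  -- array element access (arrays are 1-based)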