Re: [PERFORM] Query planner chooses index scan backward instead of better index option

2016-11-15 Thread Seckin Pulatkan
Thank you, Jeff, for your reply.

Yes, we tested with a CTE as well, but we use Hibernate to generate the
query, and some additional conditions can be added when certain parameters
are supplied. To my knowledge, Hibernate still does not support CTEs, so
converting this to a native query will remain my last resort. Much
appreciated for the explanation of how the query planner is thinking.

explain (analyze, buffers)
with cte as (select booking0_.*
from booking booking0_
where (booking0_.customer_id in (select customer1_.id from customer
customer1_ where (lower((customer1_.first_name||' '||customer1_.last_name))
like '%sahby%')))
)
select * from cte
order by cte.id desc limit 30

 QUERY PLAN
---
 Limit  (cost=34171.73..34171.80 rows=30 width=1237) (actual
time=321.370..321.371 rows=4 loops=1)
   Buffers: shared hit=18 read=1680
   CTE cte
 ->  Nested Loop  (cost=3384.39..33967.93 rows=5155 width=241) (actual
time=309.167..321.312 rows=4 loops=1)
   Buffers: shared hit=15 read=1680
   ->  Bitmap Heap Scan on customer customer1_
(cost=3383.96..11612.18 rows=5155 width=4) (actual time=302.196..310.625
rows=4 loops=1)
             Recheck Cond: (lower((((first_name)::text || ' '::text) || (last_name)::text)) ~~ '%sahby%'::text)
 Heap Blocks: exact=3
 Buffers: shared hit=5 read=1674
 ->  Bitmap Index Scan on idx_customer_name_lower
(cost=0.00..3382.67 rows=5155 width=0) (actual time=300.142..300.142 rows=4
loops=1)
                   Index Cond: (lower((((first_name)::text || ' '::text) || (last_name)::text)) ~~ '%sahby%'::text)
   Buffers: shared hit=5 read=1671
   ->  Index Scan using idx_booking_customer_id on booking
booking0_  (cost=0.43..4.33 rows=1 width=241) (actual time=2.666..2.667
rows=1 loops=4)
 Index Cond: (customer_id = customer1_.id)
 Buffers: shared hit=10 read=6
   ->  Sort  (cost=203.80..216.69 rows=5155 width=1237) (actual
time=321.368..321.369 rows=4 loops=1)
 Sort Key: cte.id
 Sort Method: quicksort  Memory: 25kB
 Buffers: shared hit=18 read=1680
 ->  CTE Scan on cte  (cost=0.00..51.55 rows=5155 width=1237)
(actual time=309.173..321.327 rows=4 loops=1)
   Buffers: shared hit=15 read=1680
 Planning time: 92.501 ms
 Execution time: 321.521 ms
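Since Hibernate could not emit WITH clauses at the time, one alternative that a plain SQL generator or native query can produce on pre-12 PostgreSQL is an OFFSET 0 fence inside a subquery, which also stops the planner from pulling the ORDER BY/LIMIT into the subquery. This is only a sketch against the schema above, not something tested in this thread:

```sql
-- Sketch only: in PostgreSQL < 12, OFFSET 0 prevents the subquery from being
-- flattened, acting as an optimization fence much like the CTE above.
SELECT *
FROM (
    SELECT booking0_.*
    FROM booking booking0_
    WHERE booking0_.customer_id IN (
        SELECT customer1_.id
        FROM customer customer1_
        WHERE lower(customer1_.first_name || ' ' || customer1_.last_name)
              LIKE '%sahby%')
    OFFSET 0  -- fence: keeps the outer ORDER BY/LIMIT out of this subquery
) fenced
ORDER BY id DESC
LIMIT 30;
```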


One more piece of information: we also have a passenger table with the same
name fields and search as customer, but the relation is reversed. Passenger
(4.2 million records) holds a booking_id, and there the query planner behaves
differently: it runs the IN-clause subquery first.

explain (analyze, buffers)
select booking0_.*
from booking booking0_
where (booking0_.id in (select p.booking_id from passenger p where
(lower((p.first_name||' '||p.last_name)) like '%sahby%')))
order by booking0_.id desc limit 30

 QUERY PLAN
---
 Limit  (cost=4871.81..4871.88 rows=30 width=241) (actual
time=91.867..91.868 rows=4 loops=1)
   Buffers: shared hit=22 read=1683
   ->  Sort  (cost=4871.81..4872.76 rows=383 width=241) (actual
time=91.866..91.866 rows=4 loops=1)
 Sort Key: booking0_.id
 Sort Method: quicksort  Memory: 25kB
 Buffers: shared hit=22 read=1683
 ->  Nested Loop  (cost=4107.13..4860.49 rows=383 width=241)
(actual time=90.791..91.850 rows=4 loops=1)
   Buffers: shared hit=22 read=1683
   ->  HashAggregate  (cost=4106.70..4107.55 rows=170 width=4)
(actual time=86.624..86.627 rows=4 loops=1)
 Group Key: p.booking_id
 Buffers: shared hit=10 read=1679
 ->  Bitmap Heap Scan on passenger p
(cost=3366.97..4105.74 rows=383 width=4) (actual time=86.561..86.613 rows=4
loops=1)
                           Recheck Cond: (lower((((first_name)::text || ' '::text) || (last_name)::text)) ~~ '%sahby%'::text)
   Heap Blocks: exact=4
   Buffers: shared hit=10 read=1679
   ->  Bitmap Index Scan on
idx_passenger_name_lower  (cost=0.00..3366.88 rows=383 width=0) (actual
time=80.148..80.148 rows=4 loops=1)
                                 Index Cond: (lower((((first_name)::text || ' '::text) || (last_name)::text)) ~~ '%sahby%'::text)
 Buffers: shared hit=7 read=1678
   ->  Index Scan using pk_booking_id on booking booking0_
(cost=0.43..4.42 rows=1 width=241) (actual time=1.300..1.301 rows=1 loops=4)
 Index Cond: (id = p.booking_id)
 Buffers: shared hit=12 read=4
 Planning time: 39.774 ms
 Execution time: 92.

[PERFORM] Sql Query :: Any advice ?

2016-11-15 Thread Henrik Ekenberg

Hi,

I have some data to join and I would like some advice from you.

Any tips? Any comments are appreciated.

//H

select trade_no
from
forecast_trades.hist_account_balance
left join trades using (trade_no)
where  trade_date > current_date - 120
   and    trade_date < current_date - 30
   and    forex = 'f'
   and    options = 'f'
   group by trade_no
   having max(account_size) > 0
;

( Query Plan : https://explain.depesz.com/s/4lOD )

 QUERY PLAN
--
 HashAggregate  (cost=34760605.76..34773866.26 rows=1060840 width=15)
(actual time=1142816.632..1150194.076 rows=2550634 loops=1)
   Group Key: hist_account_balance.trade_no
   Filter: (max(hist_account_balance.account_size) > 0::numeric)
   Rows Removed by Filter: 18240023
   ->  Hash Join  (cost=3407585.35..34530512.29 rows=46018694 width=15)
(actual time=60321.201..1108647.151 rows=44188963 loops=1)
 Hash Cond: (hist_account_balance.trade_no =
trades.trade_no)
 ->  Seq Scan on hist_account_balance 
(cost=0.00..14986455.20 rows=570046720 width=15) (actual
time=0.016..524427.140 rows=549165594 loops=1)
 ->  Hash  (cost=3159184.13..3159184.13 rows=19872098
width=12) (actual time=60307.001..60307.001 rows=20790658 loops=1)
   Buckets: 2097152  Batches: 1  Memory Usage:
913651kB
   ->  Index Scan using trades_trade_date_index
on trades  (cost=0.58..3159184.13 rows=19872098 width=12) (actual
time=0.078..52213.976 rows=20790658 loops=1)
 Index Cond: ((trade_date >
(('now'::cstring)::date - 120)) AND (trade_date < (('now'::cstring)::date -
30)))
 Filter: ((NOT forex) AND (NOT
options))
 Rows Removed by Filter: 2387523
 Planning time: 2.157 ms
 Execution time: 1151234.290 ms
(15 rows)


Re: [PERFORM] Sql Query :: Any advice ?

2016-11-15 Thread vinny

On 2016-11-15 14:27, Henrik Ekenberg wrote:

Hi,

I have some data to join and I would like some advice from you.

Any tips? Any comments are appreciated.

//H

select trade_no
from
forecast_trades.hist_account_balance
left join trades using (trade_no)
where trade_date > current_date - 120
  and trade_date < current_date - 30
  and forex = 'f'
  and options = 'f'
group by trade_no
having max(account_size) > 0
;

( Query Plan : https://explain.depesz.com/s/4lOD )




What kind of indexes have you created for those tables?


--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Sql Query :: Any advice ?

2016-11-15 Thread Henrik Ekenberg

 Here are the indexes I have for those queries

Indexes:

hist_account_balance :: "hist_account_balance_ix1" btree (trade_no)

trades :: "trades_pkey" PRIMARY KEY, btree (trade_no)
          "trades_trade_date_index" btree (trade_date)

//H

Quoting vinny :




What kind of indexes have you created for those tables?



Re: [PERFORM] Sql Query :: Any advice ?

2016-11-15 Thread vinny

Are forex and options in the hist_account_balance table?
The sequential scan is on that table, so if they are,
I'm guessing they should probably be in the index.
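If those boolean columns do live on the table being scanned (the plan's Filter line shows them applied on the trades index scan), one possibility is a partial index that excludes the filtered rows up front. A sketch only, not from the thread:

```sql
-- Sketch: a partial index restricted to the rows the query keeps, so the
-- date-range scan no longer has to filter out ~2.4M forex/options rows.
CREATE INDEX trades_date_no_forex_options_idx
    ON trades (trade_date, trade_no)
    WHERE NOT forex AND NOT options;
```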

On 2016-11-15 15:30, Henrik Ekenberg wrote:

Here are the indexes I have for those queries

Indexes:

hist_account_balance  :: "hist_account_balance_ix1" btree (trade_no)

trades :: "trades_pkey" PRIMARY KEY, btree  (trade_no)
 "trades_trade_date_index" btree (trade_date)

//H

Quoting vinny :



What kind of indexes have you created for those tables?






Re: [PERFORM] Sql Query :: Any advice ?

2016-11-15 Thread Henrik
Hi,

Forex and options are in the trades table.

Best regards,
Henrik



Sent from my Mi pad

On Nov 15, 2016 6:46 PM, vinny wrote:

Are forex and options in the hist_account_balance table?
The sequential scan is on that table, so if they are,
I'm guessing they should probably be in the index.

On 2016-11-15 15:30, Henrik Ekenberg wrote:
> Here are the indexes I have for those queries
> 
> Indexes:
> 
> hist_account_balance  :: "hist_account_balance_ix1" btree (trade_no)
> 
> trades :: "trades_pkey" PRIMARY KEY, btree  (trade_no)
>  "trades_trade_date_index" btree (trade_date)
> 
> //H
> 
> Quoting vinny :
> 
>> What kind of indexes have you created for those tables?
>> 





[PERFORM] Performance decrease after upgrade to 9.6.1

2016-11-15 Thread Gabriela Serventi
Hello!
We have a server with 8.4.1 that we want to migrate to 9.6.1.
Before doing anything, we ran pgbench several times.
The results were always similar to the following:

$ pgbench -l -c 100 -T 30 pgbench
starting vacuum...end.
transaction type: TPC-B (sort of)
scaling factor: 1
query mode: simple
number of clients: 100
duration: 30 s
number of transactions actually processed: 36049
tps = 1193.682690 (including connections establishing)
tps = 1198.838960 (excluding connections establishing)

Then, we followed the procedure in
https://www.postgresql.org/docs/9.6/static/pgupgrade.html to upgrade the server
using pg_upgrade.
To install the new version, we downloaded and compiled the sources with the
same options that we used for the previous version (configure
--prefix=/var/lib/pgsql).
We upgraded only one server, so we didn't run the steps for replication.

After this, we ran the analyze_new_cluster.sh script, created by pg_upgrade,
to generate statistics.

At this point, we ran pgbench again, several times, to make the comparison.
The results were always similar to the following:

$ pgbench -l -c 100 -T 30 pgbench
starting vacuum...end.
transaction type: 
scaling factor: 1
query mode: simple
number of clients: 100
number of threads: 1
duration: 30 s
number of transactions actually processed: 27428
latency average = 110.104 ms
tps = 908.234296 (including connections establishing)
tps = 908.278187 (excluding connections establishing)

We ran the statistics again, this time with vacuumdb --all --analyze; no
change at all.

In the postgresql.conf of the new version (9.6.1), we use these values:
max_connections = 100
superuser_reserved_connections = 3
shared_buffers = 512MB
work_mem = 5MB
maintenance_work_mem = 128MB
effective_cache_size = 1500MB
max_wal_size = 2GB
min_wal_size = 1GB
wal_level = replica

In the postgresql.conf of the old version (8.4.1), we use these values:
max_connections = 100
shared_buffers = 512MB
(The other values are set by default)

We also tried the default values in the new installation, with no change in
the timings.

The hardware didn't change; it's an Intel(R) Pentium(R) CPU G3220 @ 3.00GHz
with 2 cores, 2GB of RAM, and a 500GB SCSI hard disk. The operating system is
Enterprise Linux Enterprise Linux Server release 5.8, 64 bits.

Any suggestion about what could be the problem?
Thanks!
Gabriela
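When comparing many runs like the two above, the tps figures can be pulled out of pgbench's summary text programmatically. A small helper sketch in Python (not part of the original report; the sample string is the 9.6.1 output quoted above):

```python
import re

def parse_tps(pgbench_output: str) -> dict:
    """Extract the tps figures (including/excluding connection establishing)
    from the summary text pgbench prints at the end of a run."""
    tps = {}
    for kind in ("including", "excluding"):
        m = re.search(
            r"tps = ([0-9.]+) \(" + kind + r" connections establishing\)",
            pgbench_output)
        if m:
            tps[kind] = float(m.group(1))
    return tps

# Usage example: the 9.6.1 figures reported above.
sample = """number of transactions actually processed: 27428
latency average = 110.104 ms
tps = 908.234296 (including connections establishing)
tps = 908.278187 (excluding connections establishing)"""

print(parse_tps(sample))
```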



Re: [PERFORM] Performance decrease after upgrade to 9.6.1

2016-11-15 Thread Tom Lane
Gabriela Serventi  writes:
> $ pgbench -l -c 100 -T 30 pgbench
> starting vacuum...end.
> transaction type: 
> scaling factor: 1
> query mode: simple
> number of clients: 100
> number of threads: 1
> duration: 30 s
> number of transactions actually processed: 27428
> latency average = 110.104 ms
> tps = 908.234296 (including connections establishing)
> tps = 908.278187 (excluding connections establishing)

That's not a tremendously exciting benchmark case, for a number of
reasons:

* 100 sessions in a scale-factor-1 database are all going to be fighting
over updating the single row in the pgbench_branches table.

* 100 sessions driven by a single pgbench thread are probably going to be
bottlenecked by that thread, not by the server.

* 100 sessions on a machine with only 2 cores is going to be all about
process-swap contention anyhow.


My first thought about the difference from 8.4 to 9.6 is that pgbench
has grown a lot more measurement apparatus since then (for example, the
transaction latency numbers, which weren't there at all in 8.4).  You
might try testing the 9.6 server with 8.4's pgbench and vice versa to tease
out how much of this is due to pgbench changes rather than the server.  But in
the end, what you're measuring here is mostly contention, and you'd need
to alter the test parameters to make it not so.  The "Good Practices"
section at the bottom of the pgbench reference page has some tips about
that.

regards, tom lane




Re: [PERFORM] Some tuning suggestions on a Red Hat 6.7 - PG 9.5.3 production environment

2016-11-15 Thread Jeff Janes
On Mon, Nov 14, 2016 at 3:45 AM, Pietro Pugni 
wrote:

>
> The first issue I faced was about maintenance_work_mem because I set it to
> 16GB and the server silently crashed during a VACUUM because I didn’t
> consider that it could take up to autovacuum_max_workers *
> maintenance_work_mem (roughly 48GB).
>

I don't think that this is the true cause of the problem. In current
versions of PostgreSQL, VACUUM cannot make use of more than 1GB of
process-local memory, even if maintenance_work_mem is set to a far greater
value.
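The interaction being described can be written down as a postgresql.conf sketch (values here are illustrative only, not recommendations):

```
# postgresql.conf sketch -- illustrative values only.
# Each autovacuum worker can allocate up to maintenance_work_mem
# (or autovacuum_work_mem, if that is set), so worst-case usage is roughly
# autovacuum_max_workers * maintenance_work_mem.  A single VACUUM's
# dead-tuple array is additionally capped at 1GB internally.
maintenance_work_mem = 1GB
autovacuum_max_workers = 3
```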

Cheers,

Jeff


Re: [PERFORM] Performance decrease after upgrade to 9.6.1

2016-11-15 Thread Gabriela Serventi
Hi Tom!

Thanks for the answer.

This is just one of the benchmarks we ran; we also tested with fewer clients
and much more time. But you're right about the scale factor; we hadn't
realized that.

We are going to test using your recommendations.

Thanks!


