Re: [PERFORM] Solaris Performance (Again)

2003-12-10 Thread Jeff
On Wed, 10 Dec 2003 18:56:38 +1300
Mark Kirkwood <[EMAIL PROTECTED]> wrote:

> The major performance killer appeared to be mounting the filesystem
> with the logging option. The next most significant seemed to be the
> choice of sync_method for Pg - the default (open_datasync), which we
> initially thought should be the best - appears noticeably slower than
> fdatasync.
> 

Some interesting stuff; I'll have to play with it. Currently I'm pleased
with my Solaris performance.

What version of PG?

If it is before 7.4, PG compiles with _NO_ optimization by default, which
was a huge part of the slowness of PG on Solaris.
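
For anyone who wants to try the same comparison, the setting Mark is
describing is wal_sync_method in postgresql.conf. A minimal sketch, using
the value names valid as of 7.4 (which ones are available is
platform-dependent):

  # postgresql.conf
  # possible values: fsync, fdatasync, open_sync, open_datasync
  wal_sync_method = fdatasync   # the default in Mark's tests was open_datasync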


-- 
Jeff Trout <[EMAIL PROTECTED]>
http://www.jefftrout.com/
http://www.stuarthamm.net/



Re: [PERFORM] TRUNCATE veeeery slow compared to DELETE in 7.4

2003-12-10 Thread Josh Berkus
Hartmut,

> DELETE:
> 1) 0.03u 0.04s 0:02.46 2.8% (already empty)
> 2) 0.05u 0.06s 0:01.19 9.2% (already empty)
>
> TRUNCATE:
> 1) 0.10u 0.06s 6:58.66 0.0% (already empty, compile running simult.)
> 2) 0.10u 0.02s 2:51.71 0.0% (already empty)

How about some times for a full table?

Incidentally, I believe that TRUNCATE has always been slightly slower than 
DROP TABLE.

-- 
Josh Berkus
Aglio Database Solutions
San Francisco



Re: [PERFORM] Solaris Performance (Again)

2003-12-10 Thread Neil Conway
Mark Kirkwood <[EMAIL PROTECTED]> writes:
> Note : The Pgbench runs were conducted using -s 10 and -t 1000 -c
> 1->64, 2 - 3 runs of each setup were performed (averaged figures
> shown).

FYI, the pgbench docs state:

  NOTE: scaling factor should be at least as large as the largest
  number of clients you intend to test; else you'll mostly be
  measuring update contention.
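
In other words, for the 64-client runs the database would need to be
initialised with a scaling factor of at least 64. A sketch with the
standard pgbench flags ("benchdb" is just a placeholder database name):

  pgbench -i -s 64 benchdb        # initialise with scaling factor 64
  pgbench -c 64 -t 1000 benchdb   # then run the 64-client test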

-Neil




[PERFORM]

2003-12-10 Thread nbarraza
I have some problems with performance using PostgreSQL v7.3.2 running on Linux
Red Hat 9. An update involving several rows (about 50) on a table having 280
tuples takes on the order of 6 minutes. That is more than it takes on other
platforms (SqlServer, FOX). I think there's something wrong in my
configuration. I've already adjusted some parameters as far as I could
understand memory and disk usage. Below is a description of the parameters
changed in postgresql.conf, the schema of the table, and an EXPLAIN ANALYZE of
the command. The hardware configuration is a Pentium III 1 GHz, 512 MB of
memory, and a 20 GB SCSI drive.

-- Values changed in postgresql.conf

tcpip_socket = true
max_connections = 64
shared_buffers = 4096
wal_buffers = 100
vacuum_mem = 16384
sort_mem = 32168
checkpoint_segments = 8
effective_cache_size = 1


--
-- PostgreSQL database dump
--

\connect - nestor

SET search_path = public, pg_catalog;

--
-- TOC entry 2 (OID 22661417)
-- Name: jugadas; Type: TABLE; Schema: public; Owner: nestor
--

CREATE TABLE jugadas (
fecha_ju character(8),
hora_ju character(4),
juego character(2),
juego_vta character(2),
sorteo_p character(5),
sorteo_v character(5),
nro_servidor character(1),
ticket character(9),
terminal character(4),
sistema character(1),
agente character(5),
subagente character(3),
operador character(2),
importe character(7),
anulada character(1),
icode character(15),
codseg character(15),
tipo_moneda character(1),
apuesta character(100),
extraido character(1)
);


--
-- TOC entry 4 (OID 25553754)
-- Name: key_jug_1; Type: INDEX; Schema: public; Owner: nestor
--

CREATE UNIQUE INDEX key_jug_1 ON jugadas USING btree (juego, juego_vta, sorteo_p, 
nro_servidor, ticket);

boss=# explain analyze update jugadas set extraido = 'S' where juego = '03' and
juego_vta = '03' and sorteo_p = '89353' and extraido = 'N';
                                                      QUERY PLAN
------------------------------------------------------------------------------------------------------------------------
 Seq Scan on jugadas  (cost=0.00..174624.96 rows=70061 width=272) (actual time=21223.88..51858.07 rows=517829 loops=1)
   Filter: ((juego = '03'::bpchar) AND (juego_vta = '03'::bpchar) AND (sorteo_p = '89353'::bpchar) AND (extraido = 'N'::bpchar))
 Total runtime: 291167.36 msec
(3 rows)

boss=# show enable_seqscan;
 enable_seqscan
----------------
 on
(1 row)


*** FORCING INDEX SCAN ***

boss=# set enable_seqscan = false;
SET

boss=# explain analyze update jugadas set extraido = 'N' where juego = '03' and
juego_vta = '03' and sorteo_p = '89353' and extraido = 'S';
                                                        QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------
 Index Scan using key_jug_1 on jugadas  (cost=0.00..597959.76 rows=98085 width=272) (actual time=9.93..39947.93 rows=517829 loops=1)
   Index Cond: ((juego = '03'::bpchar) AND (juego_vta = '03'::bpchar) AND (sorteo_p = '89353'::bpchar))
   Filter: (extraido = 'S'::bpchar)
 Total runtime: 335280.56 msec
(4 rows)

boss=#

Thank you in advance for any help.

Nestor




Re: [PERFORM] TRUNCATE veeeery slow compared to DELETE in 7.4

2003-12-10 Thread Tom Lane
Josh Berkus <[EMAIL PROTECTED]> writes:
> Incidentally, I believe that TRUNCATE has always been slightly slower than 
> DROP TABLE.

Well, it would be: it has to delete the original files and then create
new ones.  I imagine the time to create new, empty indexes is the bulk
of the time Hartmut is measuring.  (Remember that an "empty" index has
at least one page in it, the metadata page, for all of our index types,
so there is some actual I/O involved to do this.)

It does not bother me that TRUNCATE takes nonzero time; it's intended
to be used in situations where DELETE would take huge amounts of time
(especially after you factor in the subsequent VACUUM activity).
The fact that DELETE takes near-zero time on a zero-length table is
not very relevant.
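
To put numbers on this, one can time the operations directly -- a sketch only
(\timing has been available in psql since around 7.2, and "big_table" is just
a hypothetical populated table):

  \timing
  DELETE FROM big_table;     -- marks every row dead: quick now, deferred cost later
  VACUUM big_table;          -- here is where the DELETE's real cost shows up
  TRUNCATE TABLE big_table;  -- unlinks the files and recreates empty ones (plus index metapages)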

regards, tom lane



Re: [PERFORM]

2003-12-10 Thread Stephan Szabo

On Wed, 10 Dec 2003 [EMAIL PROTECTED] wrote:

> I have some problems with performance using PostgreSQL v7.3.2 running on
> Linux Red Hat 9. An update involving several rows (about 50) on a
> table having 280 tuples takes on the order of 6 minutes. That is more
> than it takes on other platforms (SqlServer, FOX). I think there's
> something wrong in my configuration. I've already adjusted some
> parameters as far as I could understand memory and disk usage. Below is a
> description of the parameters changed in postgresql.conf, the schema of the
> table, and an EXPLAIN ANALYZE of the command. The hardware configuration
> is a Pentium III 1 GHz, 512 MB of memory, and a 20 GB SCSI drive.

> -- Values changed in postgresql.conf

> CREATE TABLE jugadas (
> fecha_ju character(8),
> hora_ju character(4),
> juego character(2),
> juego_vta character(2),
> sorteo_p character(5),
> sorteo_v character(5),
> nro_servidor character(1),
> ticket character(9),
> terminal character(4),
> sistema character(1),
> agente character(5),
> subagente character(3),
> operador character(2),
> importe character(7),
> anulada character(1),
> icode character(15),
> codseg character(15),
> tipo_moneda character(1),
> apuesta character(100),
> extraido character(1)
> );

Are there any tables that reference this one, or any other triggers on it? If
so, what do those tables/constraints/triggers look like?

I'm guessing there might be, given the difference between the actual time
numbers and the total runtime in the EXPLAIN ANALYZE.
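
If it helps, a query along these lines should show any triggers on the table
(a sketch against the 7.3 catalogs; foreign keys show up as
RI_ConstraintTrigger_* entries):

  SELECT tgname, tgisconstraint
  FROM pg_trigger
  WHERE tgrelid = 'jugadas'::regclass;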




[PERFORM] Performance problems with a higher number of clients

2003-12-10 Thread Alfranio Correia Junior
Hello,

I am facing a problem when trying to run 500 concurrent users against a
PostgreSQL instance. Basically, the machine begins to do a lot of I/O and the
swap area keeps growing. The vmstat swpd figure started at 9200, and after 20
minutes it looked like this:

VMSTAT:

   procs                      memory      swap          io     system         cpu
 r  b  w   swpd   free   buff  cache   si   so    bi    bo   in    cs  us  sy  id
 2 29  1 106716   9576   7000 409876   32  154  5888  1262  616  1575   8  12  80
 0 29  1 107808   9520   6896 409904   60  220  5344  1642  662  1510   9  15  76
 0 89  1 108192   9528   6832 410184  172  138  6810  1750  693  2466  11  16  73
 0 27  1 108192   9900   6824 409852   14  112  4488  1294  495   862   2   9  88
 8 55  1 108452   9552   6800 410284   26   12  6266  1082  651  2284   8  11  81
 5 78  2 109220   8688   6760 410816  148  534  6318  1632  683  1230   6  13  81

The application that I am trying to run mimics the TPC-C benchmark. Actually,
I am simulating the TPC-C workload without considering screens and other
details; the only interest is in the database workload proposed by the
benchmark and its distributions.

The machine is a dual-processor Pentium III with 1 GB of RAM and an external
storage device. It runs Linux version 2.4.21-dt1 ([EMAIL PROTECTED]) (gcc
version 2.96 2731 (Red Hat Linux 7.3 2.96-113)) #7 SMP Mon Apr 21 19:43:17
GMT 2003, and PostgreSQL 7.5devel.

Postgresql configuration:

effective_cache_size = 35000
shared_buffers = 5000
random_page_cost = 2
cpu_index_tuple_cost = 0.0005
sort_mem = 10240

I would like to know if this behavior is normal considering the number of
clients, the workload, and the database size (7.8 GB), or if there is
something that I can change to get better results.
Best regards,

Alfranio Junior.



Re: [PERFORM] Solaris Performance (Again)

2003-12-10 Thread Mark Kirkwood
Good point -

It is Pg 7.4beta1, compiled with

CFLAGS += -O2 -funroll-loops -fexpensive-optimizations

Jeff wrote:

What version of PG?

If it is before 7.4 PG compiles with _NO_ optimization by default and
was a huge part of the slowness of PG on solaris. 

 





Re: [PERFORM] Solaris Performance (Again)

2003-12-10 Thread Mark Kirkwood
Yes - originally I was going to stop at 8 clients, but once the bit was
between the teeth... If I get another box to myself I will try -s 50 or
100 and see what that shows up.

cheers

Mark

Neil Conway wrote:

FYI, the pgbench docs state:

 NOTE: scaling factor should be at least as large as the largest
 number of clients you intend to test; else you'll mostly be
 measuring update contention.
-Neil

 





Re: [PERFORM] Performance problems with a higher number of clients

2003-12-10 Thread Shridhar Daithankar
Alfranio Correia Junior wrote:
Postgresql configuration:

effective_cache_size = 35000
shared_buffers = 5000
random_page_cost = 2
cpu_index_tuple_cost = 0.0005
sort_mem = 10240

Lower sort_mem to, say, 2000-3000, raise shared_buffers to 10K, and raise
effective_cache_size to around 65K. That should make it behave a bit better.

I guess tuning sort_mem alone would give you the performance you are
expecting. Tune them one by one.
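
A sketch of those changes in postgresql.conf (the exact numbers are only
starting points, not tested values):

  shared_buffers = 10000         # up from 5000; ~80 MB at 8 KB per buffer
  sort_mem = 2048                # in KB, allocated per sort - 500 clients can each grab this
  effective_cache_size = 65536   # in 8 KB pages, ~512 MB of the 1 GB box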

HTH

 Shridhar
