[PERFORM] select count(*) from anIntColumn where int_value = 0; is very slow

2004-02-11 Thread David Teran
Hi

we have a table with about 4 million rows. One column has an int value, 
there is a btree index on it. We tried to execute the following 
statement and it is very slow on a dual G5 2GHZ with 4 GB of RAM.

explain analyze select count(*) from job_property where int_value = 0;

Aggregate  (cost=144348.80..144348.80 rows=1 width=0) (actual time=13536.852..13536.852 rows=1 loops=1)
  ->  Seq Scan on job_property  (cost=0.00..144255.15 rows=37459 width=0) (actual time=19.422..13511.653 rows=42115 loops=1)
        Filter: (int_value = 0)
Total runtime: 13560.862 ms



Is this more or less normal, or can we optimize this a little bit?
FrontBase (which we are currently comparing against) takes 2 seconds the
first time and about 0.2 seconds on second and later runs.

regards David



Re: [PERFORM] select count(*) from anIntColumn where int_value = 0;

2004-02-11 Thread Pavel Stehule
Hello, 

If you have an index on id, then you can use
SELECT id FROM tabulka ORDER BY id DESC LIMIT 1;

See FAQ item 4.8.

Regards
Pavel Stehule

On Wed, 11 Feb 2004, David Teran wrote:

> Hi
> 
> we have a table with about 4 million rows. One column has an int value, 
> there is a btree index on it. We tried to execute the following 
> statement and it is very slow on a dual G5 2GHZ with 4 GB of RAM.
> 
> explain analyze select count(*) from job_property where int_value = 0;
> 
> Aggregate  (cost=144348.80..144348.80 rows=1 width=0) (actual 
> time=13536.852..13536.852 rows=1 loops=1)
>->  Seq Scan on job_property  (cost=0.00..144255.15 rows=37459 
> width=0) (actual time=19.422..13511.653 rows=42115 loops=1)
>  Filter: (int_value = 0)
> Total runtime: 13560.862 ms
> 
> 
> 
> Is this more or less normal, or can we optimize this a little bit?
> FrontBase (which we are currently comparing against) takes 2 seconds the
> first time and about 0.2 seconds on second and later runs.
> 
> regards David
> 
> 
> 




Re: [PERFORM] select count(*) from anIntColumn where int_value = 0; is very slow

2004-02-11 Thread Rigmor Ukuhe

>
> Hi
>
> we have a table with about 4 million rows. One column has an int value,
> there is a btree index on it. We tried to execute the following
> statement and it is very slow on a dual G5 2GHZ with 4 GB of RAM.
>
> explain analyze select count(*) from job_property where int_value = 0;
>
> Aggregate  (cost=144348.80..144348.80 rows=1 width=0) (actual
> time=13536.852..13536.852 rows=1 loops=1)
>->  Seq Scan on job_property  (cost=0.00..144255.15 rows=37459
> width=0) (actual time=19.422..13511.653 rows=42115 loops=1)
>  Filter: (int_value = 0)
> Total runtime: 13560.862 ms


Is your int_value data type int4? If not, then use "... from job_property
where int_value = '0'".
Indexes are used only if the data types match.
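
A minimal sketch of that advice, assuming int_value were actually declared
as int8/bigint rather than int4 (on 7.x a bare integer literal then does not
match the index, but a quoted or cast literal does):

  SELECT count(*) FROM job_property WHERE int_value = '0';
  SELECT count(*) FROM job_property WHERE int_value = 0::int8;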

Rigmor Ukuhe


>
>
>
> Is this more or less normal, or can we optimize this a little bit?
> FrontBase (which we are currently comparing against) takes 2 seconds the
> first time and about 0.2 seconds on second and later runs.
>
> regards David
>
>





Re: [PERFORM] select count(*) from anIntColumn where int_value = 0; is very slow

2004-02-11 Thread David Teran
Hi,

> Is your int_value data type int4? If not, then use "... from job_property
> where int_value = '0'".
> Indexes are used only if the data types match.

We tried those variations already. Strangely enough, after dropping and
recreating the index, everything worked fine.
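
A minimal sketch of that drop-and-recreate step, with a hypothetical index
name (REINDEX does roughly the same job in place):

  DROP INDEX job_property_int_value_idx;
  CREATE INDEX job_property_int_value_idx ON job_property (int_value);
  -- or rebuild the existing index without dropping it:
  REINDEX INDEX job_property_int_value_idx;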

regards David



Re: [PERFORM] select count(*) from anIntColumn where int_value = 0; is very slow

2004-02-11 Thread PC Drew
Had you done a VACUUM ANALYZE at all?  There has been much discussion 
lately about the planner needing to be updated to know that the index 
is a better choice.
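
A quick sketch of that check (refresh the statistics, then look at the plan
again):

  VACUUM ANALYZE job_property;
  EXPLAIN ANALYZE SELECT count(*) FROM job_property WHERE int_value = 0;
  -- If stale statistics were the problem, the Seq Scan should now be
  -- replaced by an index scan on the int_value index.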

On Feb 11, 2004, at 6:32 AM, David Teran wrote:

> Hi,
>
> > Is your int_value data type int4? If not, then use "... from job_property
> > where int_value = '0'".
> > Indexes are used only if the data types match.
>
> We tried those variations already. Strangely enough, after dropping and
> recreating the index, everything worked fine.
>
> regards David


--
PC Drew
Manager, Dominet
IBSN
1600 Broadway, Suite 400
Denver, CO 80202
Phone: 303-984-4727 x107
Cell: 720-841-4543
Fax: 303-984-4730
Email: [EMAIL PROTECTED]


[PERFORM] update performance

2004-02-11 Thread stefan bogdan
hello
i have postgres 7.3.2 on linux redhat 9.0,
a database, and 20 tables.
a lot of fields are char(x).
when i have to update all the fields except the indexed one,
postgres works very hard.
what should i change in the configuration to make it work faster?
thanks
bogdan


[PERFORM] slow database

2004-02-11 Thread alemon


My database is very slow. The machine has a 2 GHz Xeon processor with
256 MB of DDR RAM. My configuration file is this:



#
# PostgreSQL configuration file
# -
#
# This file consists of lines of the form:
#
#   name = value
#
# (The '=' is optional.) White space may be used. Comments are introduced
# with '#' anywhere on a line. The complete list of option names and
# allowed values can be found in the PostgreSQL documentation. The
# commented-out settings shown in this file represent the default values.
#
# Any option can also be given as a command line switch to the
# postmaster, e.g. 'postmaster -c log_connections=on'. Some options
# can be changed at run-time with the 'SET' SQL command.
#
# This file is read on postmaster startup and when the postmaster
# receives a SIGHUP. If you edit the file on a running system, you have
# to SIGHUP the postmaster for the changes to take effect, or use
# "pg_ctl reload".
#


#
#   Connection Parameters
#
#tcpip_socket = false
#ssl = false

max_connections = 50
superuser_reserved_connections = 2

#port = 5432
#hostname_lookup = false
#show_source_port = false

#unix_socket_directory = ''
#unix_socket_group = ''
#unix_socket_permissions = 0777 # octal

#virtual_host = ''

#krb_server_keyfile = ''

#
#   Shared Memory Size
#
shared_buffers = 5000   # min max_connections*2 or 16, 8KB each
max_fsm_relations = 400 # min 10, fsm is free space map, ~40 bytes
max_fsm_pages = 8   # min 1000, fsm is free space map, ~6 bytes
max_locks_per_transaction = 128 # min 10
wal_buffers = 4 # min 4, typically 8KB each

#
#   Non-shared Memory Sizes
#
sort_mem = 131072   # min 64, size in KB
#vacuum_mem = 8192  # min 1024, size in KB


#
#   Write-ahead log (WAL)
#
checkpoint_segments = 3 # in logfile segments, min 1, 16MB each
checkpoint_timeout = 300# range 30-3600, in seconds
#
commit_delay = 0# range 0-10, in microseconds
commit_siblings = 5 # range 1-1000
#
fsync = false
wal_sync_method = fdatasync # the default varies across platforms:
#   # fsync, fdatasync, open_sync, or open_datasync
wal_debug = 0   # range 0-16


#
#   Optimizer Parameters
#
enable_seqscan = false
enable_indexscan = false
enable_tidscan = false
enable_sort = false
enable_nestloop = false
enable_mergejoin = false
enable_hashjoin = false

effective_cache_size = 17   # typically 8KB each
random_page_cost = 10   # units are one sequential page fetch cost
cpu_tuple_cost = 0.3# (same)
cpu_index_tuple_cost = 0.6  # (same)
cpu_operator_cost = 0.7 # (same)

default_statistics_target = 1   # range 1-1000

#
#   GEQO Optimizer Parameters
#
geqo = true
geqo_selection_bias = 2.0   # range 1.5-2.0
geqo_threshold = 2000
geqo_pool_size = 1024   # default based on tables in statement,
# range 128-1024
geqo_effort = 1
geqo_generations = 0
geqo_random_seed = -1   # auto-compute seed


#
#   Message display
#
server_min_messages = fatal # Values, in order of decreasing detail:
#   debug5, debug4, debug3, debug2, debug1,
#   info, notice, warning, error, log, fatal,
#   panic
client_min_messages = fatal # Values, in order of decreasing detail:
#   debug5, debug4, debug3, debug2, debug1,
#   log, info, notice, warning, error
silent_mode = false

log_connections = false
log_pid = false
log_statement = false
log_duration = false
log_timestamp = false

#log_min_error_statement = error # Values in order of increasing severity:
 #   debug5, debug4, debug3, debug2, debug1,
 #   info, notice, warning, error, panic(off)


#debug_print_parse = false
#debug_print_rewritten = false
#debug_print_plan = false
#debug_pretty_print = false

#explain_pretty_print = true

# requires USE_ASSERT_CHECKING
#debug_assertions = true


#
#   Syslog
#
#syslog = 0 # range 0-2
#syslog_facility = 'LOCAL0'
#syslog_ident = 'postgres'


#
#   Statistics
#
#show_parser_stats = false
#show_planner_stats = false
#show_executor_stats = false
#show_statement_stats = false

# requires BTREE_BUILD_STATS
#show_btree_build_stats = false


#
#   Access statistics collection
#
#stats_start_collector = true
#stats_reset_on_server_start = true
#stats_command_string = false
#stats_row_level = false
#stats_block_level = false
#
#   Lock Tracing
#
#trace_notify = false

# requires LOCK_DEBUG
#trace_locks = false
#trace_userlocks = false
#trace_lwlocks = false
#debug_deadlocks = false
#trace_

Re: [PERFORM] slow database

2004-02-11 Thread Dennis Bjorklund
On Wed, 11 Feb 2004 [EMAIL PROTECTED] wrote:

> could somebody please give me some tips to increase the
> performance

Run VACUUM ANALYZE. Find one query that is slow. Run EXPLAIN ANALYZE on
that query. Read the plan and figure out why it is slow. Fix it.

-- 
/Dennis Björklund




Re: [PERFORM] slow database

2004-02-11 Thread PC Drew
On Feb 11, 2004, at 7:23 AM, [EMAIL PROTECTED] wrote:

> #
> #   Optimizer Parameters
> #
> enable_seqscan = false
> enable_indexscan = false
> enable_tidscan = false
> enable_sort = false
> enable_nestloop = false
> enable_mergejoin = false
> enable_hashjoin = false

Why did you disable *every* type of query method?  Try commenting all
of these out or changing them to "true" instead of "false".

--
PC Drew
Manager, Dominet
IBSN
1600 Broadway, Suite 400
Denver, CO 80202
Phone: 303-984-4727 x107
Cell: 720-841-4543
Fax: 303-984-4730
Email: [EMAIL PROTECTED]


Re: [PERFORM] select count(*) from anIntColumn where int_value = 0;

2004-02-11 Thread Christopher Browne
Oops! [EMAIL PROTECTED] (Pavel Stehule) was seen spray-painting on a wall:
>
> Regards
> Pavel Stehule
>
> On Wed, 11 Feb 2004, David Teran wrote:
>
>> Hi
>> 
>> we have a table with about 4 million rows. One column has an int value, 
>> there is a btree index on it. We tried to execute the following 
>> statement and it is very slow on a dual G5 2GHZ with 4 GB of RAM.
>> 
>> explain analyze select count(*) from job_property where int_value = 0;
>> 
>> Aggregate  (cost=144348.80..144348.80 rows=1 width=0) (actual 
>> time=13536.852..13536.852 rows=1 loops=1)
>>->  Seq Scan on job_property  (cost=0.00..144255.15 rows=37459 
>> width=0) (actual time=19.422..13511.653 rows=42115 loops=1)
>>  Filter: (int_value = 0)
>> Total runtime: 13560.862 ms
>> 
> If you have an index on id, then you can use
> SELECT id FROM tabulka ORDER BY id DESC LIMIT 1;
>
> See FAQ item 4.8.

I'm afraid that's not the answer.  That would be the faster
alternative to "select max(id) from tabulka;"

I guess the question is, is there a faster way of coping with the
"int_value = 0" part?

It seems a little odd that the index was not selected; it appears that
the count was 42115, right?

The estimated number of rows was 37459, and if the table size is ~4M,
then I would have expected the query optimizer to use the index.

Could you try doing "ANALYZE JOB_PROPERTY;" and then try again?  

One thought that comes to mind is that perhaps the statistics are
outdated.

Another thought is that perhaps there are several really common
values, and the statistics are crummy.  You might relieve that by:

  alter table job_property alter column int_value set statistics 20;
  analyze job_property;

(Or perhaps some higher value...)

If there are a few very common discrete values in a particular field,
then the default statistics may get skewed because the histogram
hasn't enough bins...
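
After re-analyzing, the standard pg_stats view shows whether 0 now appears
among the most common values with a believable frequency (a sketch):

  SELECT n_distinct, most_common_vals, most_common_freqs
  FROM pg_stats
  WHERE tablename = 'job_property' AND attname = 'int_value';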
-- 
let name="cbbrowne" and tld="acm.org" in name ^ "@" ^ tld;;
http://cbbrowne.com/info/wp.html
Rules of  the Evil  Overlord #102.  "I will not  waste time  making my
enemy's death look  like an accident -- I'm  not accountable to anyone
and my other enemies wouldn't believe it."



Re: [PERFORM] slow database

2004-02-11 Thread Rod Taylor
On Wed, 2004-02-11 at 09:23, [EMAIL PROTECTED] wrote:
> My database is very slow. The machine has a 2 GHz Xeon processor with
> 256 MB of DDR RAM. My configuration file is this:

I'm not surprised. New values below old.


> sort_mem = 131072   # min 64, size in KB

sort_mem = 8192.

> fsync = false

Are you aware of the potential for data corruption during a hardware,
power or software failure?

> enable_seqscan = false
> enable_indexscan = false
> enable_tidscan = false
> enable_sort = false
> enable_nestloop = false
> enable_mergejoin = false
> enable_hashjoin = false

You want all of these set to true, not false.

> effective_cache_size = 17   # typically 8KB each

effective_cache_size =  16384.

> random_page_cost = 10   # units are one sequential page fetch cost

random_page_cost = 3

> cpu_tuple_cost = 0.3# (same)

cpu_tuple_cost = 0.01

> cpu_index_tuple_cost = 0.6  # (same)

cpu_index_tuple_cost = 0.001

> cpu_operator_cost = 0.7 # (same)

cpu_operator_cost = 0.0025

> default_statistics_target = 1   # range 1-1000

default_statistics_target = 10
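
Most of the planner-related values above can also be tried per session before
editing postgresql.conf (fsync and shared_buffers are not SET-able per
session; they need a postgresql.conf change). A sketch of that experiment:

  SET enable_seqscan = true;          -- likewise the other enable_* settings
  SET sort_mem = 8192;
  SET random_page_cost = 3;
  SET cpu_tuple_cost = 0.01;
  SET cpu_index_tuple_cost = 0.001;
  SET cpu_operator_cost = 0.0025;
  SET effective_cache_size = 16384;
  SET default_statistics_target = 10;
  -- then re-run EXPLAIN ANALYZE on one of the slow queries in this session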





Re: [PERFORM] slow database

2004-02-11 Thread PC Drew
On Feb 11, 2004, at 7:23 AM, [EMAIL PROTECTED] wrote:



> My database is very slow. The machine has a 2 GHz Xeon processor with
> 256 MB of DDR RAM. My configuration file is this:

After looking through the configuration some more, I would definitely
recommend getting rid of your current postgresql.conf file and
replacing it with the default.  You have some very, very odd settings,
namely:

This is dangerous, but maybe you need it:

> fsync = false

You've essentially disabled the optimizer:

> enable_seqscan = false
> enable_indexscan = false
> enable_tidscan = false
> enable_sort = false
> enable_nestloop = false
> enable_mergejoin = false
> enable_hashjoin = false

WOAH, this is huge:

> random_page_cost = 10

Take a look at this page which goes through each option in the 
configuration file:

http://www.varlena.com/varlena/GeneralBits/Tidbits/annotated_conf_e.html

--
PC Drew
Manager, Dominet
IBSN
1600 Broadway, Suite 400
Denver, CO 80202
Phone: 303-984-4727 x107
Cell: 720-841-4543
Fax: 303-984-4730
Email: [EMAIL PROTECTED]


Re: [PERFORM] slow database

2004-02-11 Thread Chris Trawick
[EMAIL PROTECTED] wrote:

> My database is very slow. The machine has a 2 GHz Xeon processor with
> 256 MB of DDR RAM. My configuration file is this:


This is a joke, right?

chris



Re: [PERFORM] select count(*) from anIntColumn where int_value = 0;

2004-02-11 Thread scott.marlowe
On Wed, 11 Feb 2004, David Teran wrote:

> Hi,
> 
> > Is your int_value data type int4? If not, then use "... from job_property
> > where int_value = '0'".
> > Indexes are used only if the data types match.
> >
> We tried those variations already. Strangely enough, after dropping and
> recreating the index, everything worked fine.

Has that table been updated a lot in its life?  If so, it may have had a 
problem with index bloat...
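
A rough way to check for that (a sketch; the index name here is hypothetical,
use whatever \d job_property reports):

  SELECT relname, relpages, reltuples
  FROM pg_class
  WHERE relname = 'job_property_int_value_idx';
  -- A relpages count far out of proportion to reltuples suggests bloat;
  -- REINDEX INDEX job_property_int_value_idx; rebuilds it compactly.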





Re: [PERFORM] slow database

2004-02-11 Thread Chris Browne
[EMAIL PROTECTED] writes:
> My database is very slow. The machine has a 2 GHz Xeon processor with
> 256 MB of DDR RAM. My configuration file is this:

> sort_mem = 131072   # min 64, size in KB
> #vacuum_mem = 8192  # min 1024, size in KB

Change it back to 8192, or perhaps even less.  This large value is
probably causing swapping: sort_mem is in KB, so 131072 allows every sort
to use up to 128 MB of memory, and with several sorts per query and up to
50 connections that is considerably more than the 256 MB you have.

> fsync = false
> wal_sync_method = fdatasync # the default varies across platforms:

I presume that you are aware that you have chosen the value that
leaves your data vulnerable to corruption?  I wouldn't set this to false...

> enable_seqscan = false
> enable_indexscan = false
> enable_tidscan = false
> enable_sort = false
> enable_nestloop = false
> enable_mergejoin = false
> enable_hashjoin = false

Was there some reason why you wanted to disable every query
optimization strategy that can be disabled?  If you're looking to get
slow queries, this would accomplish that nicely.

> effective_cache_size = 17   # typically 8KB each
> random_page_cost = 10   # units are one sequential page fetch cost
> cpu_tuple_cost = 0.3# (same)
> cpu_index_tuple_cost = 0.6  # (same)
> cpu_operator_cost = 0.7 # (same)

Where did you get those numbers?  The random_page_cost alone will
probably force every query to do seq scans, ignoring indexes, and is
_really_ nonsensical.  The other values seem way off.

> default_statistics_target = 1   # range 1-1000

... Apparently it didn't suffice to try to disable query optimization,
and modify the cost parameters into nonsense; it was also "needful" to
tell the statistics analyzer to virtually eliminate statistics
collection.

If you want a value other than 10, then pick a value slightly LARGER than 10.

> could somebody please give me some tips to increase the performance

Delete the postgresql.conf file, create a new database using initdb,
and take the file produced by _that_, and replace yours with it.  The
default values, while not necessarily perfect, are likely to be 100x
better than what you have got.

Was this the result of someone trying to tune the database for some
sort of anti-benchmark?
-- 
let name="cbbrowne" and tld="cbbrowne.com" in name ^ "@" ^ tld;;
http://www.ntlug.org/~cbbrowne/rdbms.html
Rules  of  the  Evil  Overlord   #179.  "I  will  not  outsource  core
functions." 



Re: [PERFORM] slow database

2004-02-11 Thread alemon

The normal queries do not present problems, but all the ones
that have joins are very slow.

Note: I am connecting via ODBC. Could there be some configuration
specific to the database itself, or to the ODBC driver?


Quoting Rod Taylor <[EMAIL PROTECTED]>:

> On Wed, 2004-02-11 at 09:23, [EMAIL PROTECTED] wrote:
> > My database is very slow. The machine has a 2 GHz Xeon processor with
> > 256 MB of DDR RAM. My configuration file is this:
> 
> I'm not surprised. New values below old.
> 
> 
> > sort_mem = 131072   # min 64, size in KB
> 
> sort_mem = 8192.
> 
> > fsync = false
> 
> Are you aware of the potential for data corruption during a hardware,
> power or software failure?
> 
> > enable_seqscan = false
> > enable_indexscan = false
> > enable_tidscan = false
> > enable_sort = false
> > enable_nestloop = false
> > enable_mergejoin = false
> > enable_hashjoin = false
> 
> You want all of these set to true, not false.
> 
> > effective_cache_size = 17   # typically 8KB each
> 
> effective_cache_size =  16384.
> 
> > random_page_cost = 10   # units are one sequential page fetch cost
> 
> random_page_cost = 3
> 
> > cpu_tuple_cost = 0.3# (same)
> 
> cpu_tuple_cost = 0.01
> 
> > cpu_index_tuple_cost = 0.6  # (same)
> 
> cpu_index_tuple_cost = 0.001
> 
> > cpu_operator_cost = 0.7 # (same)
> 
> cpu_operator_cost = 0.0025
> 
> > default_statistics_target = 1   # range 1-1000
> 
> default_statistics_target = 10
> 
> 
> 
> 






Re: [PERFORM] update performance

2004-02-11 Thread scott.marlowe
On Wed, 11 Feb 2004, stefan bogdan wrote:

> hello
> i have postgres 7.3.2 on linux redhat 9.0,
> a database, and 20 tables.
> a lot of fields are char(x).
> when i have to update all the fields except the indexed one,
> postgres works very hard.
> what should i change in the configuration to make it work faster?

1:  Upgrade to 7.3.5 (or 7.4.1 if you're feeling adventurous)
2:  Read this: 
http://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html




Re: [PERFORM] slow database

2004-02-11 Thread scott.marlowe
If my boss came to me and asked me to make my database server run as 
slowly as possible, I might come up with the exact same postgresql.conf 
file as what you posted.

Just installing the default postgresql.conf that came with postgresql 
should make this machine run faster.

Read this:

http://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html




Re: [PERFORM] slow database

2004-02-11 Thread Tom Lane
[EMAIL PROTECTED] writes:
> The normal queries do not present problems, but all the ones
> that have joins are very slow.

No surprise, as you've disabled all but the stupidest join algorithm...

As others already pointed out, you'd be a lot better off with the
default configuration settings than with this set.

regards, tom lane



Re: [PERFORM] slow database

2004-02-11 Thread alemon
I have already gone back to the old settings and it is still just as
slow!

Quoting Tom Lane <[EMAIL PROTECTED]>:

> [EMAIL PROTECTED] writes:
> > The normal queries do not present problems, but all the ones
> > that have joins are very slow.
> 
> No surprise, as you've disabled all but the stupidest join algorithm...
> 
> As others already pointed out, you'd be a lot better off with the
> default configuration settings than with this set.
> 
>   regards, tom lane
> 
> 
> 







Re: [PERFORM] update performance

2004-02-11 Thread scott.marlowe
On Wed, 11 Feb 2004, scott.marlowe wrote:

> On Wed, 11 Feb 2004, stefan bogdan wrote:
> 
> > hello
> > i have postgres 7.3.2 on linux redhat 9.0,
> > a database, and 20 tables.
> > a lot of fields are char(x).
> > when i have to update all the fields except the indexed one,
> > postgres works very hard.
> > what should i change in the configuration to make it work faster?
>
> 1:  Upgrade to 7.3.5 (or 7.4.1 if you're feeling adventurous)
> 2:  Read this: 
> http://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html

As a followup, do you have mismatched foreign keys or big ugly 
constraints?  Sometimes those can slow things down too.
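
7.3 has no convenient catalog query for spotting mismatched foreign keys, but
on a modern PostgreSQL something like this sketch lists foreign-key columns
whose type differs from the referenced column (adjust to taste):

  SELECT c.conname,
         a.attname  AS fk_column,  format_type(a.atttypid, a.atttypmod)   AS fk_type,
         af.attname AS ref_column, format_type(af.atttypid, af.atttypmod) AS ref_type
  FROM pg_constraint c
  JOIN LATERAL unnest(c.conkey)  WITH ORDINALITY AS k(attnum, ord)  ON true
  JOIN LATERAL unnest(c.confkey) WITH ORDINALITY AS rk(attnum, ord) ON rk.ord = k.ord
  JOIN pg_attribute a  ON a.attrelid  = c.conrelid  AND a.attnum  = k.attnum
  JOIN pg_attribute af ON af.attrelid = c.confrelid AND af.attnum = rk.attnum
  WHERE c.contype = 'f'
    AND a.atttypid <> af.atttypid;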




Re: [PERFORM] slow database

2004-02-11 Thread alemon
The version is 7.3.2 on Conectiva 9.
The chicken foot has no claws at the moment: =)
 |
/|\

this is a principal table of system:

CREATE TABLE public.compra_prod_forn
(
  nu_seq_prod_forn numeric(12) NOT NULL,
  cd_fabricante numeric(6),
  cd_moeda numeric(4) NOT NULL,
  cd_compra numeric(12) NOT NULL,
  cd_produto numeric(9),
  cd_fornecedor numeric(6),
  cd_incotermes numeric(3) NOT NULL,
  qtde_compra numeric(12,3),
  perc_comissao_holding numeric(5,2),
  vl_cotacao_unit_negociacao numeric(20,3),
  dt_retorno date,
  cd_status_cv numeric(3) NOT NULL,
  cd_usuario numeric(6) NOT NULL,
  tp_comissao varchar(25),
  vl_pif numeric(9,2),
  cd_fornecedor_contato numeric(6),
  cd_contato numeric(6),
  cd_un_peso varchar(20),
  vl_currier numeric(9,2),
  cd_iqf numeric(3),
  cd_un_peso_vl_unit varchar(10),
  dt_def_fornecedor date,
  vl_cotacao_unit_forn numeric(20,3),
  vl_cotacao_unit_local numeric(20,3),
  tp_vl_cotacao_unit numeric(1),
  cd_moeda_forn numeric(4),
  cd_moeda_local numeric(4),
  vl_cotacao_unit numeric(20,3),
  peso_bruto_emb varchar(20),
  id_fax numeric(1),
  id_email numeric(1),
  fob varchar(40),
  origem varchar(40),
  tipo_comissao varchar(40),
  descr_fabricante_select varchar(200),
  farmacopeia varchar(100),
  vl_frete numeric(10,3),
  descr_abandono_representada varchar(2000),
  descr_abandono_interno varchar(2000),
  vl_frete_unit numeric(10,3),
  CONSTRAINT compra_prod_forn_pkey PRIMARY KEY (cd_compra, nu_seq_prod_forn),
  CONSTRAINT "$1" FOREIGN KEY (cd_moeda_local) REFERENCES public.moeda 
(cd_moeda) ON UPDATE NO ACTION ON DELETE NO ACTION,
  CONSTRAINT "$10" FOREIGN KEY (cd_usuario) REFERENCES public.usuario_sistema 
(cd_usuario) ON UPDATE NO ACTION ON DELETE NO ACTION,
  CONSTRAINT "$11" FOREIGN KEY (cd_status_cv) REFERENCES 
public.status_compra_venda (cd_status_cv) ON UPDATE NO ACTION ON DELETE NO 
ACTION,
  CONSTRAINT "$12" FOREIGN KEY (cd_moeda) REFERENCES public.moeda (cd_moeda) ON 
UPDATE NO ACTION ON DELETE NO ACTION,
  CONSTRAINT "$2" FOREIGN KEY (cd_moeda_forn) REFERENCES public.moeda 
(cd_moeda) ON UPDATE NO ACTION ON DELETE NO ACTION,
  CONSTRAINT "$3" FOREIGN KEY (cd_un_peso_vl_unit) REFERENCES 
public.unidades_peso (cd_un_peso) ON UPDATE NO ACTION ON DELETE NO ACTION,
  CONSTRAINT "$4" FOREIGN KEY (cd_un_peso) REFERENCES public.unidades_peso 
(cd_un_peso) ON UPDATE NO ACTION ON DELETE NO ACTION,
  CONSTRAINT "$5" FOREIGN KEY (cd_fornecedor_contato, cd_contato) REFERENCES 
public.fornecedor_contato (cd_fornecedor, cd_contato) ON UPDATE NO ACTION ON 
DELETE NO ACTION,
  CONSTRAINT "$6" FOREIGN KEY (cd_fabricante) REFERENCES public.fabricante 
(cd_fabricante) ON UPDATE NO ACTION ON DELETE NO ACTION,
  CONSTRAINT "$7" FOREIGN KEY (cd_produto, cd_fornecedor) REFERENCES 
public.fornecedor_produto (cd_produto, cd_fornecedor) ON UPDATE NO ACTION ON 
DELETE NO ACTION,
  CONSTRAINT "$8" FOREIGN KEY (cd_incotermes) REFERENCES public.incotermes 
(cd_incotermes) ON UPDATE NO ACTION ON DELETE NO ACTION,
  CONSTRAINT "$9" FOREIGN KEY (cd_compra) REFERENCES public.compra (cd_compra) 
ON UPDATE NO ACTION ON DELETE CASCADE
) WITH OIDS;



Quoting "scott.marlowe" <[EMAIL PROTECTED]>:

> On Wed, 11 Feb 2004 [EMAIL PROTECTED] wrote:
> 
> > I have already gone back to the old settings and it is still just as
> > slow!
> 
> OK, we need some things from you to help troubleshoot this problem.
> 
> Postgresql version
> schema of your tables
> output of "explain analyze your query here"
> a chicken foot (haha, just kidding. :-)
> 
> 






Re: [PERFORM] slow database

2004-02-11 Thread scott.marlowe
On Wed, 11 Feb 2004 [EMAIL PROTECTED] wrote:

> I have already gone back to the old settings and it is still just as
> slow!

OK, we need some things from you to help troubleshoot this problem.

Postgresql version
schema of your tables
output of "explain analyze your query here"
a chicken foot (haha, just kidding. :-)




Re: [PERFORM] slow database

2004-02-11 Thread Rod Taylor
On Wed, 2004-02-11 at 12:15, [EMAIL PROTECTED] wrote:
> I have already gone back to the old settings and it is still just as
> slow!

Dumb question, but did you restart the database after changing the
config file?




Re: [PERFORM] slow database

2004-02-11 Thread scott.marlowe
First thing I would check is to make sure all those foreign keys are the 
same type.

Second, make sure you've got indexes to go with them.  I.e. on a multi-key 
fk, have a multi-key index.
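
For the schema quoted below, that would mean indexes on the referencing
columns of the multi-column foreign keys, for example (a sketch; the index
names are made up):

  CREATE INDEX compra_prod_forn_forn_contato_idx
      ON compra_prod_forn (cd_fornecedor_contato, cd_contato);
  CREATE INDEX compra_prod_forn_prod_forn_idx
      ON compra_prod_forn (cd_produto, cd_fornecedor);
  -- Single-column foreign keys (cd_moeda, cd_usuario, cd_status_cv, ...)
  -- each want an index too if they are used in joins.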


On Wed, 11 Feb 2004 [EMAIL PROTECTED] wrote:

> The version is 7.3.2 on Conectiva 9.
> The chicken foot has no claws at the moment: =)
>  |
> /|\
> 
> this is a principal table of system:
> 
> CREATE TABLE public.compra_prod_forn
> (
>   nu_seq_prod_forn numeric(12) NOT NULL,
>   cd_fabricante numeric(6),
>   cd_moeda numeric(4) NOT NULL,
>   cd_compra numeric(12) NOT NULL,
>   cd_produto numeric(9),
>   cd_fornecedor numeric(6),
>   cd_incotermes numeric(3) NOT NULL,
>   qtde_compra numeric(12,3),
>   perc_comissao_holding numeric(5,2),
>   vl_cotacao_unit_negociacao numeric(20,3),
>   dt_retorno date,
>   cd_status_cv numeric(3) NOT NULL,
>   cd_usuario numeric(6) NOT NULL,
>   tp_comissao varchar(25),
>   vl_pif numeric(9,2),
>   cd_fornecedor_contato numeric(6),
>   cd_contato numeric(6),
>   cd_un_peso varchar(20),
>   vl_currier numeric(9,2),
>   cd_iqf numeric(3),
>   cd_un_peso_vl_unit varchar(10),
>   dt_def_fornecedor date,
>   vl_cotacao_unit_forn numeric(20,3),
>   vl_cotacao_unit_local numeric(20,3),
>   tp_vl_cotacao_unit numeric(1),
>   cd_moeda_forn numeric(4),
>   cd_moeda_local numeric(4),
>   vl_cotacao_unit numeric(20,3),
>   peso_bruto_emb varchar(20),
>   id_fax numeric(1),
>   id_email numeric(1),
>   fob varchar(40),
>   origem varchar(40),
>   tipo_comissao varchar(40),
>   descr_fabricante_select varchar(200),
>   farmacopeia varchar(100),
>   vl_frete numeric(10,3),
>   descr_abandono_representada varchar(2000),
>   descr_abandono_interno varchar(2000),
>   vl_frete_unit numeric(10,3),
>   CONSTRAINT compra_prod_forn_pkey PRIMARY KEY (cd_compra, nu_seq_prod_forn),
>   CONSTRAINT "$1" FOREIGN KEY (cd_moeda_local) REFERENCES public.moeda 
> (cd_moeda) ON UPDATE NO ACTION ON DELETE NO ACTION,
>   CONSTRAINT "$10" FOREIGN KEY (cd_usuario) REFERENCES public.usuario_sistema 
> (cd_usuario) ON UPDATE NO ACTION ON DELETE NO ACTION,
>   CONSTRAINT "$11" FOREIGN KEY (cd_status_cv) REFERENCES 
> public.status_compra_venda (cd_status_cv) ON UPDATE NO ACTION ON DELETE NO 
> ACTION,
>   CONSTRAINT "$12" FOREIGN KEY (cd_moeda) REFERENCES public.moeda (cd_moeda) ON 
> UPDATE NO ACTION ON DELETE NO ACTION,
>   CONSTRAINT "$2" FOREIGN KEY (cd_moeda_forn) REFERENCES public.moeda 
> (cd_moeda) ON UPDATE NO ACTION ON DELETE NO ACTION,
>   CONSTRAINT "$3" FOREIGN KEY (cd_un_peso_vl_unit) REFERENCES 
> public.unidades_peso (cd_un_peso) ON UPDATE NO ACTION ON DELETE NO ACTION,
>   CONSTRAINT "$4" FOREIGN KEY (cd_un_peso) REFERENCES public.unidades_peso 
> (cd_un_peso) ON UPDATE NO ACTION ON DELETE NO ACTION,
>   CONSTRAINT "$5" FOREIGN KEY (cd_fornecedor_contato, cd_contato) REFERENCES 
> public.fornecedor_contato (cd_fornecedor, cd_contato) ON UPDATE NO ACTION ON 
> DELETE NO ACTION,
>   CONSTRAINT "$6" FOREIGN KEY (cd_fabricante) REFERENCES public.fabricante 
> (cd_fabricante) ON UPDATE NO ACTION ON DELETE NO ACTION,
>   CONSTRAINT "$7" FOREIGN KEY (cd_produto, cd_fornecedor) REFERENCES 
> public.fornecedor_produto (cd_produto, cd_fornecedor) ON UPDATE NO ACTION ON 
> DELETE NO ACTION,
>   CONSTRAINT "$8" FOREIGN KEY (cd_incotermes) REFERENCES public.incotermes 
> (cd_incotermes) ON UPDATE NO ACTION ON DELETE NO ACTION,
>   CONSTRAINT "$9" FOREIGN KEY (cd_compra) REFERENCES public.compra (cd_compra) 
> ON UPDATE NO ACTION ON DELETE CASCADE
> ) WITH OIDS;
> 
> 
> 
> Quoting "scott.marlowe" <[EMAIL PROTECTED]>:
> 
> > On Wed, 11 Feb 2004 [EMAIL PROTECTED] wrote:
> > 
> > > I have already gone back to the old settings and it is still just as
> > > slow!
> > 
> > OK, we need some things from you to help troubleshoot this problem.
> > 
> > Postgresql version
> > schema of your tables
> > output of "explain analyze your query here"
> > a chicken foot (haha, just kidding. :-)
> > 
> > 
> 
> 
> 
> 




[PERFORM] coercing int to bigint for indexing purposes

2004-02-11 Thread Mark Harrison
Is there a way to automatically coerce an int into a bigint for
indexing purposes?
We have a table with a bigint column that is indexed.
For me, there's no problem, since I now know to say
select * from foo where id = 123::bigint
but our casual users still say
select * from foo where id = 123
causing a sequential scan because the type of 123 is not
a bigint.
As you can see, there's nearly 4 orders of magnitude difference
in time, and we anticipate this will only get worse as our
tables increase in size:
LOG:  duration: 0.861 ms  statement: select * from big where id = 123123123123123;
LOG:  duration: 6376.917 ms  statement: select * from big where id = 123;
One thing I have considered is starting our id sequence at 50
so that "real" queries will always be bigint-sized, but this seems
to me a bit of a hack.
Many TIA,
Mark
--
Mark Harrison
Pixar Animation Studios


Re: [PERFORM] coercing int to bigint for indexing purposes

2004-02-11 Thread Tom Lane
Mark Harrison <[EMAIL PROTECTED]> writes:
> Is there a way to automatically coerce an int into a bigint for
> indexing purposes?

This problem is fixed in CVS tip.

regards, tom lane
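
For releases before that fix, the usual workarounds are to give the literal
a matching type explicitly, either by casting it (as in the original post) or
by quoting it so the parser resolves it against the column's type; a sketch:

  SELECT * FROM foo WHERE id = 123::bigint;   -- explicit cast
  SELECT * FROM foo WHERE id = '123';         -- quoted literal resolves to bigint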



Re: [PERFORM] slow database

2004-02-11 Thread Mark Kirkwood
If things are still slow after you have checked your keys as indicated, 
then pick one query and post the output from EXPLAIN ANALYZE for the 
list to examine.

oh - and ensure you are *not* still using your original postgresql.conf :-)

best wishes

Mark

scott.marlowe wrote:

> First thing I would check is to make sure all those foreign keys are the
> same type.
>
> Second, make sure you've got indexes to go with them.  I.e. on a multi-key
> fk, have a multi-key index.




Re: [PERFORM] select count(*) from anIntColumn where int_value = 0;

2004-02-11 Thread Christopher Kings-Lynne

> > > Is your int_value data type int4? If not, then use "... from job_property
> > > where int_value = '0'".
> > > Indexes are used only if the data types match.
> >
> > We tried those variations already. Strangely enough, after dropping and
> > recreating the index, everything worked fine.
>
> Has that table been updated a lot in its life?  If so, it may have had a
> problem with index bloat...
Try creating a partial index: create index blah on job_property (int_value) where int_value = 0;
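
To confirm the planner picks up such a partial index, one would re-run the
original query under EXPLAIN ANALYZE (sketch):

  EXPLAIN ANALYZE SELECT count(*) FROM job_property WHERE int_value = 0;
  -- The Seq Scan should be replaced by a scan of the partial index,
  -- restricted to the int_value = 0 rows.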

Chris
