I'm on 7.3.4 and this query gets horrible performance. Is there a way to rewrite it
with an exists or some way to get better performance?
select code, id, name, date_of_service
from tbl
where date_of_service in
(select date_of_service
from tbl
where xxx >= '29800'
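Releases before 7.4 planned `IN (subquery)` very poorly, so the standard advice was exactly this rewrite to a correlated `EXISTS`. A sketch using the names from the post (assuming `xxx` is a column of `tbl`):

```sql
-- EXISTS rewrite of the IN query above; on 7.3 the planner handles the
-- correlated subquery far better than the uncorrelated IN list.
SELECT t.code, t.id, t.name, t.date_of_service
FROM tbl t
WHERE EXISTS (
    SELECT 1
    FROM tbl t2
    WHERE t2.date_of_service = t.date_of_service
      AND t2.xxx >= '29800'
);
```

An index on `tbl (date_of_service)` lets the inner probe be an index lookup per outer row rather than a repeated scan.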
Oops! [EMAIL PROTECTED] (Josh Berkus) was seen spray-painting on a wall:
>> I understand this needs an exclusive lock on the whole table, which is
>> simply not possible more than once a month, if that... Workarounds/hack
>> suggestions are more than welcome :-)
>
> Would it be reasonable to use p
Rhaoni,
First off, thanks for posting such complete info.
>... gsames00.ano_mes = to_char(ftnfco00.data_emissao,'MM') AND ...
>
>ftnfco00.data_emissao is a timestamp. When I run the explain analyze it
>says:
>
> ...
> -> Seq Scan on gsames00 (cost=1.00..10006.72 rows=
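One hedged workaround for a join condition like `gsames00.ano_mes = to_char(ftnfco00.data_emissao,'MM')` is to index the expression side. Plain `to_char()` is not IMMUTABLE (its output can depend on locale settings), so this sketch wraps it in a function marked IMMUTABLE by hand — safe only if those settings never change; the function name is invented for illustration:

```sql
-- Hypothetical wrapper so the expression can be indexed on 7.3
-- (7.3 allows functional indexes on a single function call):
CREATE FUNCTION mes_emissao(timestamptz) RETURNS text AS
    'SELECT to_char($1, ''MM'')'
    LANGUAGE sql IMMUTABLE;

CREATE INDEX ftnfco00_mes ON ftnfco00 (mes_emissao(data_emissao));

-- ...and join with the indexed expression:
-- WHERE gsames00.ano_mes = mes_emissao(ftnfco00.data_emissao)
```

Whether this helps depends on which side of the join the planner is scanning; `EXPLAIN ANALYZE` before and after is the only real test.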
Matt,
> I understand this needs an exclusive lock on the whole table, which is
> simply not possible more than once a month, if that... Workarounds/hack
> suggestions are more than welcome :-)
Would it be reasonable to use partial indexes on the table?
--
-Josh Berkus
__AGLIO DATABASE SOL
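The partial-index suggestion above, as a sketch — table and column names are hypothetical, the point is that the index covers only the rows matching its predicate, so it stays small and updates to other rows never touch it:

```sql
-- Index only the hot subset of rows:
CREATE INDEX recent_active_idx ON big_table (updated_at)
    WHERE status = 'active';

-- The planner uses it when the query's WHERE clause implies the predicate:
SELECT *
FROM big_table
WHERE status = 'active'
  AND updated_at > '2003-09-01';
```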
Hi list,
I have a table like this:
CREATE TABLE "gsames00" (
"ano_mes" varchar(6) NOT NULL,
"descricao" varchar(30),
PRIMARY KEY ("ano_mes")
);
and an index like this:
CREATE INDEX GSAMES01 ON GSAMES00 (ANO_MES);
When I run an explain analyze with this where clause
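An aside on the schema above: in PostgreSQL, the PRIMARY KEY constraint already creates a unique index on `ano_mes`, so GSAMES01 indexes the same column twice and only adds maintenance overhead on every insert. A sketch of removing it:

```sql
-- The primary key's implicit unique index already covers ano_mes,
-- so the explicitly created duplicate can be dropped:
DROP INDEX gsames01;
```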
> 2) Are you sure that ANALYZE is needed? Vacuum is required
> whenever lots of
> rows are updated, but analyze is needed only when the *distribution* of
> values changes significantly.
You are right. I have a related question in this thread about random vs. monotonic
values in indexed fields.
> 3) u
> "RT" == Robert Treat <[EMAIL PROTECTED]> writes:
RT> hmm... i wonder what would happen if you pushed your sort_mem higher...
RT> on some of our development boxes and upgrade scripts, i push the
RT> sort_mem to 102400 and sometimes even higher depending on the box. this
RT> really speeds up m
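`sort_mem` (kilobytes per sort, renamed `work_mem` in later releases) can be raised per session, so a bulk job can use a large value without changing postgresql.conf for everyone. A sketch — the index build is hypothetical:

```sql
-- Raise sort memory for this session only (value in kB, here ~100 MB):
SET sort_mem = 102400;

-- Large sorts such as index builds now run in memory instead of
-- spilling to disk (table/index names are illustrative):
CREATE INDEX big_idx ON big_table (some_col);

-- Return to the server default for the rest of the session:
RESET sort_mem;
```

The caveat is that every concurrent sort can claim this much memory, so a global value this high is risky on a busy server.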
Yes, that makes sense. My worry is really the analyzes. I gather/imagine
that:
1) Indexes on fields that are essentially random gain little from being
analyzed.
2) Fields that increase monotonically with insertion order have a problem
with index growth in 7.2. There may be a performan
Matt,
> Is there any reason for me not to run continuous sequential vacuum analyzes?
> At least for the 6 tables that see a lot of updates?
No. You've already proven that the performance gain on queries offsets the
loss from the vacuuming. There is no other "gotcha".
However:
1) You may
On Mon, 2003-09-15 at 15:15, Vivek Khera wrote:
> And the winner is... checkpoint_segments.
>
> Restore of a significanly big database (~19.8GB restored) shows nearly
> no time difference depending on sort_mem when checkpoint_segments is
> large. There are quite a number of tables and indexes. T
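The setting being credited above lives in postgresql.conf; each WAL segment is 16 MB, so raising the count lets a bulk restore run longer between checkpoints at the cost of more disk for WAL and longer crash recovery. A sketch (the value is an example, not a recommendation):

```
# postgresql.conf fragment -- the 7.x default was 3 segments;
# a larger value spaces checkpoints out during bulk loads.
# Takes effect on server reload/restart.
checkpoint_segments = 32
```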
On Wed, 17 Sep 2003, Matt Clark wrote:
> *** THE QUESTION(S) ***
> Is there any reason for me not to run continuous sequential vacuum analyzes?
> At least for the 6 tables that see a lot of updates?
> I hear 10% of tuples updated as a good time to vac-an, but does my typical
> count of 3 indexes p
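The continuous vacuum-analyze being discussed is usually driven from cron with `psql` or `vacuumdb`; per-table `VACUUM ANALYZE` takes only routine locks, unlike `VACUUM FULL`. A sketch with a hypothetical table name:

```sql
-- Run periodically (e.g. every 15 minutes from cron, via
-- psql -c or vacuumdb --analyze --table) against each hot table:
VACUUM ANALYZE orders;
```

Spacing the runs so one finishes before the next starts avoids stacking vacuums on the same table.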
I'm running a load of stress scripts against my staging environment to
simulate user interactions, and watching the various boxen as time goes by.
I noticed that the CPU utilisation on the DB server (PG 7.2.3, RH7.3, Dual
PII 550MHz, 1GB RAM, 1GB database on disk, Single 10k SCSI drive) was
increa
Michael Adler <[EMAIL PROTECTED]> writes:
> I have been experimenting with a new Seagate Cheetah 10k-RPM SCSI to
> compare with a cheaper Seagate Barracuda 7200-RPM IDE (each in a
> single-drive configuration). The Cheetah definitely dominates the generic
> IO tests such as bonnie++, but fares poor
I have been experimenting with a new Seagate Cheetah 10k-RPM SCSI to
compare with a cheaper Seagate Barracuda 7200-RPM IDE (each in a
single-drive configuration). The Cheetah definitely dominates the generic
IO tests such as bonnie++, but fares poorly with pgbench (and other
postgresql operations)
On 17 Sep 2003 at 11:48, Nick Barr wrote:
> Hi,
>
> I have been following a thread on this list "Inconsistent performance"
> and had a few questions, especially about the bits on effective_cache_size.
> I have read some of the docs, and some other threads on this setting,
> and it seems to be used by th
Hi,
I have been following a thread on this list "Inconsistent performance"
and had a few questions, especially about the bits on effective_cache_size.
I have read some of the docs, and some other threads on this setting,
and it seems to be used by the planner to choose either a sequential or
index scan.
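That matches the documented behavior: `effective_cache_size` is a planner hint about how much of the database the OS page cache likely holds; it changes cost estimates only and allocates no memory. A sketch (example value, not a recommendation):

```
# postgresql.conf fragment -- in 7.x the unit is 8 kB disk pages,
# so 96000 pages is roughly 750 MB of presumed OS cache:
effective_cache_size = 96000
```

A larger value makes index scans look cheaper relative to sequential scans, since the planner assumes fewer reads will actually hit disk.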