Garrett Bladow wrote:
Recently we upgraded the RAM in our server. After the install a LIKE query that used to take 5 seconds now takes 5 minutes. We have tried the usual suspects, VACUUM, ANALYZE and Re-indexing.
Any thoughts on what might have happened?
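A first diagnostic step (sketch only; the table and column names here are hypothetical, and the pattern-ops index requires a release that supports those operator classes):

```sql
-- 1. See what the planner actually does now:
EXPLAIN ANALYZE SELECT * FROM customers WHERE name LIKE 'Smith%';

-- 2. If a sequential scan replaced an index scan, check the database
--    locale: an initdb in a non-C locale prevents an ordinary btree
--    index from serving LIKE 'prefix%' searches.
SHOW lc_collate;   -- in older releases, check the locale via pg_controldata

-- 3. One workaround in releases that support it is an index built
--    with the pattern-matching operator class:
CREATE INDEX customers_name_like
    ON customers (name varchar_pattern_ops);
```

If the server was reinstalled along with the RAM upgrade, a changed initdb locale is a common way for a previously index-assisted LIKE to fall back to a sequential scan.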
What tuning have you done? Have you se
On Tue, 23 Sep 2003, Garrett Bladow wrote:
> Recently we upgraded the RAM in our server. After the install a LIKE query that used
> to take 5 seconds now takes 5 minutes. We have tried the usual suspects, VACUUM,
> ANALYZE and Re-indexing.
>
> Any thoughts on what might have happened?
>
Did you
On Tue, 23 Sep 2003, Garrett Bladow wrote:
> Recently we upgraded the RAM in our server. After the install a LIKE
> query that used to take 5 seconds now takes 5 minutes. We have tried the
> usual suspects, VACUUM, ANALYZE and Re-indexing.
If you mean that you reinstalled postgresql then it's pro
Hello,
I have been trying to get my Postgres database to do faster inserts.
The environment is basically a single user situation.
The part that I would like to speed up is when a user copies a Project.
A Project consists of a number of Rooms (say 60), and each room contains a
number of items.
A proje
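One way to make such a copy fast is to do it server-side in a single transaction with INSERT ... SELECT, rather than row-by-row from the client. A minimal sketch, assuming hypothetical table names (projects, rooms, items) and a simple key-offset scheme:

```sql
BEGIN;

-- Copy the project row itself (new id 2 cloned from id 1):
INSERT INTO projects (project_id, name)
    SELECT 2, name || ' (copy)' FROM projects WHERE project_id = 1;

-- Copy all rooms of the project, offsetting the keys:
INSERT INTO rooms (room_id, project_id, name)
    SELECT room_id + 1000, 2, name FROM rooms WHERE project_id = 1;

-- Copy every item in those rooms, re-pointing them at the new rooms:
INSERT INTO items (item_id, room_id, description)
    SELECT i.item_id + 100000, i.room_id + 1000, i.description
    FROM items i JOIN rooms r ON i.room_id = r.room_id
    WHERE r.project_id = 1;

COMMIT;
```

The single transaction matters: each statement committed separately forces its own flush to disk, while one enclosing transaction pays that cost once.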
The performance list seemed to be off-line for a while, so I posed the same
question on the admin list and Tom Lane has been helping in that forum.
-Nick
> -----Original Message-----
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] Behalf Of Nick
> Fankhauser
> Sent: Monday, September 22, 2
Hi,
I have a table containing columns:
"END_DATE" timestamptz NOT NULL
"REO_ID" int4 NOT NULL
and I have indexed the "REO_ID" column.
I have a query:
select "REO_ID", "END_DATE" from "PRIORITY_STATISTICS" where "REO_ID" IN
('112851' ,'112859' ,'112871' ,'112883' ,'112891' ,'112904' ,'11291
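To see whether the index on "REO_ID" is actually being used for the IN list, gather fresh statistics and look at the real plan (value list shortened here for illustration):

```sql
-- Refresh planner statistics for the table first:
VACUUM VERBOSE ANALYZE "PRIORITY_STATISTICS";

-- Then compare estimated vs. actual behaviour:
EXPLAIN ANALYZE
SELECT "REO_ID", "END_DATE"
FROM "PRIORITY_STATISTICS"
WHERE "REO_ID" IN ('112851', '112859', '112871');
```

An index scan per IN value is the plan to hope for; a sequential scan in the output is the thing to investigate.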
On Tue, 2003-09-23 at 20:24, Garrett Bladow wrote:
> Recently we upgraded the RAM in our server. After the install a LIKE query that used
> to take 5 seconds now takes 5 minutes. We have tried the usual suspects, VACUUM,
> ANALYZE and Re-indexing.
>
> Any thoughts on what might have happened?
W
Garrett,
> Recently we upgraded the RAM in our server. After the install a LIKE query
that used to take 5 seconds now takes 5 minutes. We have tried the usual
suspects, VACUUM, ANALYZE and Re-indexing.
>
> Any thoughts on what might have happened?
Bad RAM? Have you tested it?
--
-Josh Berkus
On Tue, 23 Sep 2003, Bruce Momjian wrote:
> With the new warning about too-frequent checkpoints, people have actual
> feedback to encourage them to increase checkpoint_segments. One issue
> is that it is likely to recommend increasing checkpoint_segments during
> restore, even if there is no valu
On Tue, 23 Sep 2003, Josh Berkus wrote:
> Garrett,
>
> > Recently we upgraded the RAM in our server. After the install a LIKE query
> that used to take 5 seconds now takes 5 minutes. We have tried the usual
> suspects, VACUUM, ANALYZE and Re-indexing.
> >
> > Any thoughts on what might have ha
Peter,
One possibility is to drop all the indexes, do the insert and re-add the
indexes.
The more indexes that exist and the more rows that exist, the more costly
the insert.
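A sketch of that pattern, with hypothetical index and table names; the real gain depends on how many rows and indexes are involved:

```sql
-- Remove the index so the bulk load doesn't maintain it row by row:
DROP INDEX items_room_id_idx;

-- ... perform the bulk INSERTs here, ideally inside one transaction ...

-- Rebuild the index once, over the full data set:
CREATE INDEX items_room_id_idx ON items (room_id);

-- Refresh statistics so the planner sees the new row counts:
ANALYZE items;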
Regards,
Joseph
At 05:48 PM 9/24/2003 +1200, peter wrote:
Hello,
I have been trying to get my Postgres database to d
Hi,
I have a table containing columns:
"END_DATE" timestamptz NOT NULL
"REO_ID" int4 NOT NULL
and I have indexed the "REO_ID" column.
I have a query:
select "REO_ID", "END_DATE" from "PRIORITY_STATISTICS" where "REO_ID" IN
('112851' ,'112859' ,'112871' ,'112883' ,'112891' ,'112904' ,'112915'
Get rid of any unnecessary indexes. I've found that dropping indexes and
re-creating them isn't usually worth the effort.
Mount the disk with the noatime option, which saves the time involved in
updating the last access time on files.
Make sure you're doing all the inserts in one transaction.
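The last point is the cheapest win. A sketch, with hypothetical table and column names:

```sql
BEGIN;
INSERT INTO items (item_id, room_id, description) VALUES (1, 1, 'chair');
INSERT INTO items (item_id, room_id, description) VALUES (2, 1, 'desk');
-- ... the rest of the batch ...
COMMIT;
```

Outside an explicit transaction every INSERT is its own transaction and forces its own commit to disk; wrapping the whole batch pays that cost once.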
> What causes this behaviour? is there any workaround? Suggestions?
>
How many rows are there in the table, and can you post the 'explain analyze' for both
queries after doing a 'vacuum verbose analyze
[tablename]'?
Cheers
Matt
> My statistics(Athlon 1.8Ghz)
>
> 20,000 items    takes on average 0.078 seconds/room
> 385,000 items   takes on average 0.110 seconds/room
> 690,000 items   takes on average 0.270 seconds/room
> 1,028,000 items takes on average 0.475 seconds/room
[snip]
> I a
> 20,000 items    takes on average 0.078 seconds/room
> 385,000 items   takes on average 0.110 seconds/room
> 690,000 items   takes on average 0.270 seconds/room
> 1,028,000 items takes on average 0.475 seconds/room
>
> As can be seen the time taken to process each room increas
All this talk of checkpoints got me wondering if I have them set at an
optimum level on my production servers. I noticed the following in the
docs:
"There will be at least one 16 MB segment file, and will normally not
be more than 2 * checkpoint_segments + 1 files. You can use this to
estimate sp
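Plugging the default setting into that formula gives a rough upper bound on pg_xlog disk usage:

```
checkpoint_segments = 3              (the default)
max segment files  ~= 2 * 3 + 1 = 7
disk usage         ~= 7 * 16 MB = 112 MB
```

So with the default, steady-state WAL should normally not exceed roughly 112 MB on disk.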
Robert Treat <[EMAIL PROTECTED]> writes:
> In .conf file I have default checkpoints set to 3, but I noticed that in
> my pg_xlog directory I always seem to have at least 8 log files. Since
> this is more than the suggested 7, I'm wondering if this means I ought
> to bump my checkpoint segments up t
On Fri, 19 Sep 2003 11:35:35 -0700, Jenny Zhang <[EMAIL PROTECTED]>
wrote:
>I posted more results as you requested:
Unfortunately they only confirm what I suspected earlier:
>> 2) -> Index Scan using i_ps_suppkey on partsupp
>> (cost=0.00..323.16 rows=80 width=34)
>>
Manfred Koizar <[EMAIL PROTECTED]> writes:
> Cutting down the number of heap page fetches if PF1 * L > P and P <
> effective_cache_size seems like an obvious improvement, but I was not
> able to figure out where to make this change. Maybe it belongs into
> costsize.c near
> run_cost += outer