Tom,
The ANALYZE did in fact fix the issue. Thanks.
--sean
On 9/27/04 11:54 PM, "Sean Shanny" <[EMAIL PROTECTED]> wrote:
> Tom,
>
> We have been running pg_autovacuum on this entire DB so I did not even
> consider that. I am running an analyze verbose now.
49 PM, "Tom Lane" <[EMAIL PROTECTED]> wrote:
> Sean Shanny <[EMAIL PROTECTED]> writes:
>> -> Seq Scan on f_pageviews t1 (cost=0.00..11762857.88
>> rows=1 width=8)
>> Filter: ((date_key >= 610) AND (date
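For the archives, the fix under discussion is just a targeted statistics
refresh; a minimal sketch, using the table name visible in the plan above
(ANALYZE with no table name would refresh the whole database instead):

analyze verbose f_pageviews;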
To all,
Running into an out of memory error on our data warehouse server. This
occurs only with our data from the 'September' section of a large fact
table. The exact same query running over data from August or any prior
month for that matter works fine which is why this is so weird. Note that
J
the query again to
see what happens.
You were right on the analyze; we do that infrequently as it takes a
whole bunch of time over this much data. Something to cron in the
middle of the night, I think.
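A minimal sketch of that nightly job, assuming it covers the whole
database (the database name in the comment is illustrative):

-- scheduled from cron, e.g. via: psql -d warehouse -c "analyze;"
analyze;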
Thanks.
--sean
Tom Lane wrote:
Sean Shanny <[EMAIL PROTECTED]> writes:
When
Tom,
Let me clarify: I meant shutdown in the context of issuing a stop
against postgres, not shutting down the OS. Sorry if I am confusing things.
The scripts we are using to issue start, stop, etc. for postgres seem to
be causing the issue. I changed the config to use timestamps in the l
"Ed L." <[EMAIL PROTECTED]> writes:
On Monday February 23 2004 8:43, Sean Shanny wrote:
LOG: received smart shutdown request
FATAL: the database system is shutting down
FATAL: the database system is shutting down
LOG: server process (PID 4691) was terminated by signal 9
Tom,
Sort of piggybacking on this thread, but why the suggestion to drop the
use of DISTINCT in 7.4? We use DISTINCT all over the place to eliminate
duplicates in sub-select statements. Running 7.4.0 currently on
FreeBSD 5.1, Dell 2650, 4GB RAM, 5-disk SCSI array, hardware RAID 0.
Example:
explain
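The short answer, sketched with a hypothetical dimension table d_date
(the column names are illustrative, since the original example is cut
off): 7.4 plans IN (sub-select) as a hashed operation that removes
duplicates itself, so a DISTINCT inside the sub-select only adds an
extra sort/unique step:

-- 7.3-era habit: de-duplicate the sub-select by hand
select count(*) from f_pageviews
where date_key in (select distinct date_key from d_date
                   where month_name = 'September');
-- 7.4: a plain sub-select plans just as well, minus the sort
select count(*) from f_pageviews
where date_key in (select date_key from d_date
                   where month_name = 'September');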
so we
can exchange the info you need?
Thanks.
--sean
Tom Lane wrote:
Sean Shanny <[EMAIL PROTECTED]> writes:
I run this:
explain update f_commerce_impressions set servlet_key = 60 where
servlet_key in (68,69,70,71,87,90,94,91,98,105,106);
ERROR: out of memory
DETAIL: Failed on
(servlet_key = 94))
(2 rows)
Tom Lane wrote:
Sean Shanny <[EMAIL PROTECTED]> writes:
There are no FKs or triggers on this or any of the tables in our
warehouse schema. Also, I should have mentioned that this update will
produce 0 rows, as these values do not exist in this table.
Hm,
host sss.pgh.pa.us [192.204.191.242]: 550 5.0.0 If you would like to talk to me,
find a more responsible ISP than earthlink
Tom Lane wrote:
Sean Shanny <[EMAIL PROTECTED]> writes:
sort_mem = 64000        # min 64, size in KB
You might want to lower that; a complex query cou
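sort_mem is a per-sort-step limit, not a per-backend one, so a single
complex query can allocate several multiples of it. A sketch of lowering
it for one session before retrying (the value is illustrative):

set sort_mem = 8192;  -- 8MB per sort/hash step instead of ~64MB
explain update f_commerce_impressions set servlet_key = 60 where
servlet_key in (68,69,70,71,87,90,94,91,98,105,106);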
ERROR: out of memory
DETAIL: Failed on request of size 1024.
Thanks
--sean
Tom Lane wrote:
Sean Shanny <[EMAIL PROTECTED]> writes:
update f_commerce_impressions set servlet_key = 60 where servlet_key in
(68,69,70,71,87,90,94,91,98,105,106);
ERROR: out of memory
How many rows will this try to
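A quick way to answer that before running the update itself, using the
same table and key list from the post:

select count(*) from f_commerce_impressions where servlet_key in
(68,69,70,71,87,90,94,91,98,105,106);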
wrote:
Sean Shanny <[EMAIL PROTECTED]> writes:
Does anyone have an explanation as to why this might occur?
What have you got vacuum_mem set to? Also, what ulimit settings is the
postmaster running under? (I'm wondering exactly how large the backend
process has grown when
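Both server-side settings are easy to check from any psql session (the
ulimits have to be inspected in the shell environment the postmaster was
started from, e.g. with ulimit -a):

show vacuum_mem;
show sort_mem;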
To all,
The facts:
PostgreSQL 7.4.0 running on FreeBSD 5.1 on a Dell 2650 with 4GB RAM, 5 SCSI
drives in a hardware RAID 0 configuration. Database size with indexes is
currently 122GB. Schema for the table in question is at the end of this
email. The DB has been vacuumed full and analyzed. Between
To all,
The facts:
PostgreSQL 7.4.0 running on FreeBSD 5.1 on a Dell 2650 with 4GB RAM, 5 SCSI
drives in a hardware RAID 0 configuration. Database size with indexes is
currently 122GB. DB size before we completed the vacuum full was 150GB.
We have recently done a major update to a table, f_pageviews
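For reference, the maintenance pass described above, spelled out against
the table named in the post (note that VACUUM FULL takes an exclusive
lock on the table for the duration, which is significant at 100GB+ sizes):

vacuum full verbose analyze f_pageviews;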