Tom Lane writes:
> Rob Nagler <[EMAIL PROTECTED]> writes:
> > q5 and q6 are too complex to discuss here,
>
> How do you expect us to get better if you don't show us the problems?
With all due respect and thanks for the massive amount of help, I have
presented the
scott.marlowe writes:
> t2 was 'vacuum full'ed and analyzed, right? Just guessing.
Fresh import. I've been told this includes an ANALYZE.
Rob
Josh Berkus writes:
> I hope that you'll stay current with PostgreSQL developments so that you can
> do a similarly thorough evaluation for your next project.
Oh, no worries. This project just happens to be a tough one. We're
heavily invested in Postgres. Other projects we maintain that use
Postgres
Greg Stark writes:
> > > SELECT a, (SELECT name FROM t2 WHERE t2.f2 = t1.f2)
> > > FROM t1
> > > GROUP BY f2
> >
> > This doesn't solve the problem. It's the GROUP BY that is doing the
> > wrong thing. It's grouping, then aggregating.
>
> But at least in the form above it will consider using
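For concreteness, the aggregate-first shape under discussion would look
something like this (the real column list was never posted, so the names
and the avg() are only illustrative):

  SELECT agg.f2, agg.a, t2.name
  FROM (SELECT f2, avg(a) AS a FROM t1 GROUP BY f2) AS agg
  JOIN t2 ON t2.f2 = agg.f2;

Doing the GROUP BY in a subquery and joining afterwards gives the planner
the option of a plain join against t2 rather than a correlated lookup.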
Greg Stark writes:
> I don't understand why you would expect overwriting to win here.
> What types of updates do you do on these tables?
These are statistics that we're adjusting. I think that's pretty
normal stuff. The DSS component is the avg() of these numbers on
particular groups. The gro
Greg Stark writes:
> Sorry I was unclear. By "usual case" I meant reading, as opposed to updates.
> The size of the on-disk representation turns out to be a major determinant in
> a lot of database applications, since the dominant resource is i/o bandwidth.
> Try doing a fresh import of a large tab
Greg Stark writes:
> Note that pctfree/pctused are a big performance drain on the usual case. Try
> setting them to 0/100 on a table that doesn't get updates (like a many-many
> relation table) and see how much faster it is to insert and scan.
Right. You can optimize each table independently. Th
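In Oracle DDL the suggestion looks something like this (the table
definition is invented for illustration):

  CREATE TABLE t1_t2_link (
      t1_id  INTEGER NOT NULL,
      t2_id  INTEGER NOT NULL
  ) PCTFREE 0 PCTUSED 100;  -- reserve no per-block space for updates

Since the rows are never updated in place, leaving no free space packs
more rows per block, which is exactly what helps the insert and scan cases.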
Vivek Khera writes:
> Also, how close are you to the capacity of your disk bandwidth? I
> don't see that in your numbers. I know in freebsd I can run "systat
> -vmstat" and it gives me a percentage of utilization that lets me know
> when I'm near the capacity.
The vacuum totally consumes the system.
Mario Weilguni writes:
> of course both approaches have advantages, it simply depends on the usage
> pattern. A case where oracle really rules over postgresql are m<-->n
> connection tables where each record consist of two foreign keys, the
> overwrite approach is a big win here.
That's usually
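For reference, the kind of m<-->n connection table being described,
sketched in SQL (names invented, and assuming the referenced tables
exist with primary keys):

  CREATE TABLE person_project_t (
      person_id  INTEGER NOT NULL REFERENCES person_t,
      project_id INTEGER NOT NULL REFERENCES project_t,
      PRIMARY KEY (person_id, project_id)
  );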
Stephen writes:
> I ran into the same problem with VACUUM on my Linux box. If you are running
> Linux, take a look at "elvtune" or read this post:
The default values were -r 64 -w 8192. The article said this was
"optimal". I just futzed with different values anywere from -w 128 -r
128 to -r 16 -
Tom Lane writes:
> ... if all tuples are the same size, and if you never have any
Incorrect. If the tuples are smaller, Oracle does the right thing. If
there's enough space in the page, it shifts the tuples to make room.
That's what pctfree, pctused and pctincrease allow you to control.
It's all in
Vivek Khera writes:
> AMI or Adaptec based?
Adaptec, I think. AIC-7899 LVD SCSI is what dmidecode says, and
Red Hat/Adaptec aacraid driver, Aug 18 2003 is what comes up when it
boots. I haven't been able to use the aac utilities with this driver,
however, so it's hard to interrogate the device.
Josh Berkus writes:
> Yes, but it will have less of an impact on the system while it's running.
We'll find out. I lowered vacuum_mem to 32000.
> What sort of disk array do you have? That seems like a lot of time
> considering how little work VACUUM is doing.
Vendor: DELL Model: PE
Manfred Koizar writes:
> ISTM you are VACUUMing too aggressively. You are reclaiming less than
> 1% and 0.005%, respectively, of tuples. I would increase FSM settings
> to ca. 1000 fsm_relations, 10 fsm_pages and VACUUM *less* often,
> say every two hours or so.
I did this. We'll see how it
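In postgresql.conf terms that maps to something like this (the
fsm_pages value shown is only illustrative, not Manfred's exact figure):

  max_fsm_relations = 1000
  max_fsm_pages = 100000    # illustrative value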
Shridhar Daithankar writes:
> You should try 7.4 beta and pg_autovacuum which is a contrib module
> in CVS tip.
It's on our todo list. :)
How does pg_autovacuum differ from vacuumdb? I mean it seems to call
the vacuum operation underneath just as vacuumdb does. I obviously
didn't follow the lo
It seems a simple "vacuum" (not full or analyze) slows down the
database dramatically. I am running vacuum every 15 minutes, but it
takes about 5 minutes to run even after a fresh import. Even with
vacuuming every 15 minutes, I'm not sure vacuuming is working
properly.
There are a lot of updates
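One way to see what each run actually reclaims is the verbose form,
sketched here with an invented table name (the exact output wording
varies by version):

  VACUUM VERBOSE stats_t;
  -- per-table output reports pages examined and how many dead
  -- row versions were removed or could not be removed yet

If the removed-tuple counts stay tiny, that suggests the 5 minutes is
mostly scanning, not reclaiming.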
Greg Stark writes:
> Call it a wishlist bug. The problem is it would be a hard feature to
> implement properly. And none of the people paid to work on postgres
> by various companies seem to have this on their to-do lists. So
> don't expect it in the near future.
We are using Postgres heavily, and
> vacuum full does require exclusive lock, plain vacuum does not.
I think I need full, because there are updates on the table. As I
understand it, an update in pg is an insert/delete, so it needs
to be garbage collected.
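A minimal illustration of that, with invented names: the old row version
stays behind as a dead tuple, plain VACUUM just marks its space reusable,
and only VACUUM FULL compacts the file.

  UPDATE stats_t SET hit_count = hit_count + 1 WHERE server_id = 21;
  -- the pre-update row version is now dead
  VACUUM stats_t;       -- space marked reusable, file not shrunk
  VACUUM FULL stats_t;  -- rows moved, file truncated; takes exclusive lock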
> It's considerably more likely that the vacuum was waiting for an open
> c
I've read some posts that say vacuum doesn't lock, but my experience
today indicates the opposite. It seemed that "vacuum full analyze"
was blocked waiting and so were other postmaster processes. It
appeared to be a deadlock, because all were in "WAITING" state according
to ps. I let this go for a
Tom Lane writes:
> Keep in mind though that you seem to be experimenting with a
> fully-cached database; you may find that the planner's beliefs more
> nearly approach reality when actual I/O has to occur.
My hope is that the entire database should fit in memory. This may
not be the case right
Ron Johnson writes:
> Dumb question: given your out-of-the-box satisfaction, could it be
> that postgresql.conf hasn't been tweaked?
Here are the modified values:
shared_buffers = 8000
wal_buffers = 80
sort_mem = 32000
effective_cache_size = 40
random_page_cost = 4
autocommit = false
timezone
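Given Tom's fully-cached caveat above, two of those settings are worth
revisiting; the values below are only illustrative, not recommendations:

  effective_cache_size = 40000   # in 8KB pages, ~320MB
  random_page_cost = 2           # closer to 1 as reads stop touching disk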
Tom Lane writes:
> That doesn't really tell me anything. What's the proportion of 21
> records out of the total table?
Currently we have about 15 servers, so 6% of the data is uniformly
distributed with the value 21.
> create index fooi on foo (min_date_time) where server_id = 21;
>
> This
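For the planner to use that partial index, the query has to repeat the
predicate; a sketch using the same example names:

  SELECT min_date_time
  FROM foo
  WHERE server_id = 21
  ORDER BY min_date_time
  LIMIT 1;

The ORDER BY/LIMIT form also lets the index supply the minimum directly,
which plain min() won't do in this version.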
Tom Lane writes:
> The reason the planner does not much like this plan is that it's
> estimating that quite a lot of rows will have to be hit in min_date_time
> order before it finds enough rows with server_id = 21. Thus the high
> cost estimate for the above step.
Thanks for the speedy and usefu
I'm trying to understand how I can get the planner to always do the
right thing with this query:
EXPLAIN ANALYZE
SELECT
aa_t.min_date_time
FROM
aa_t
, bb_t
, cc_t
WHERE bb_t.bb_id = aa_t.bb_id
AND aa_t.realm_id = cc_t.realm_id
AND aa_