[PERFORM] Simple filesystem benchmark on Linux 2.6

2003-08-12 Thread Shridhar Daithankar
http://kerneltrap.org/node/view/715

Might be interesting for people running 2.6. Last I heard, the anticipatory 
scheduler did not yield its maximum throughput for random reads, so they said 
database guys would not want it right away.

Is anybody using it for testing? A couple of guys in my company are running it 
on a moderate desktop-cum-server. So far it's good.. in fact far better..


Bye
 Shridhar

--
Wedding, n: A ceremony at which two persons undertake to become one, one 
undertakes to become nothing, and nothing undertakes to become supportable.
 -- Ambrose Bierce


---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [PERFORM] Performance Tuning

2003-08-12 Thread Ron Johnson
On Mon, 2003-08-11 at 19:50, Christopher Kings-Lynne wrote:
> > Well, yeah.  But given the Linux propensity for introducing major
> > features in "minor" releases (and thereby introducing all the
> > attendant bugs), I'd think twice about using _any_ Linux feature
> > until it's been through a major version (e.g. things introduced in
> > 2.4.x won't really be stable until 2.6.x) -- and even there one is
> > taking a risk[1].
> 
> Dudes, seriously - switch to FreeBSD :P

But, like, we want a *good* OS... 8-0

-- 
Ron Johnson, Jr.        Home: [EMAIL PROTECTED]
Jefferson, LA  USA

"Man, I'm pretty.  Hoo Hah!"
    Johnny Bravo





Re: [PERFORM] Odd problem with performance in duplicate database

2003-08-12 Thread Ron Johnson
On Mon, 2003-08-11 at 17:03, Josh Berkus wrote:
> Folks,
> 
> More followup on this:
> 
> The crucial difference between the two execution plans is this clause:
> 
> test db has:
> ->  Seq Scan on case_clients  (cost=0.00..3673.48 rows=11274 width=11) (actual 
> time=0.02..302.20 rows=8822 loops=855)
> 
> whereas live db has:
> ->  Index Scan using idx_caseclients_case on case_clients  (cost=0.00..5.10 
> rows=1 width=11) (actual time=0.03..0.04 rows=1 loops=471)
> 
> Using enable_seqscan = false fixes this, but is obviously not a long-term 
> solution.
> 
> I've re-created the test system from an immediate copy of the live database, 
> and checked that the main tables and indexes were reproduced faithfully.
> 
> Lowering random_page_cost seems to do the trick.  But I'm still mystified; why 
> would one identical database pick a different plan than its copy?

If the databases are on different machines, maybe the postgresql.conf
or pg_hba.conf files are different, and the buffer counts are affecting
the optimizer?
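A quick way to check that hypothesis is to compare the planner-related
settings on both boxes (the setting names are standard PostgreSQL; the
values and the filter column in the sample query are only illustrative):

```sql
-- Run on both the test and live servers and diff the output.
SHOW shared_buffers;
SHOW effective_cache_size;
SHOW random_page_cost;

-- Then experiment per-session: a lower random_page_cost makes the
-- index scan look cheaper to the planner.
SET random_page_cost = 2;
EXPLAIN ANALYZE
SELECT * FROM case_clients WHERE case_id = 42;  -- hypothetical filter column
```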

-- 
Ron Johnson, Jr.        Home: [EMAIL PROTECTED]
Jefferson, LA  USA

"Man, I'm pretty.  Hoo Hah!"
    Johnny Bravo





Re: [PERFORM] Performance Tuning

2003-08-12 Thread Bjoern Metzdorf
> be able to handle at least 8M at a time. The machine has
> two P III 933MHz CPU's, 1.128G RAM (512M*2 + 128M), and
> a 36 Gig hd with 1 Gig swap and 3 equal size ext3 partitions.
> What would be the recommended setup for good performance
> considering that the db will have about 15 users for
> 9 hours in a day, and about 10 or so users throughout the day
> who won't be consistently using the db.

For 15 users you won't need much tuning at all. Just make sure that you have
the right indexes on the tables and that your queries get good plans.
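As a minimal illustration of that advice (the table and column names here
are invented, not from the original question):

```sql
-- Index the columns your WHERE clauses actually filter on...
CREATE INDEX idx_orders_customer ON orders (customer_id);

-- ...and then check that the planner actually uses the index:
EXPLAIN ANALYZE
SELECT * FROM orders WHERE customer_id = 1234;
```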

About the 8Meg blobs, I don't know. Other people on this list may be able to
give you hints here.

Regards,
Bjoern




Re: [PERFORM] Some vacuum & tuning help

2003-08-12 Thread Shridhar Daithankar
On 5 Aug 2003 at 10:29, Christopher Browne wrote:

> Shridhar Daithankar wrote: 
> > I agree, specifying per table thresholds would be good in autovacuum.. 
>  
> Which begs the question of what the future direction is for pg_autovacuum.
> 
> There would be some merit to having pg_autovacuum throw in some tables
> in which to store persistent information, and at that point, it would
> make sense to add some flags to support the respective notions that:

Well, the C++ version I wrote quite a while back, which sits unmaintained on 
gborg, did exactly that. It was considered table pollution. However, whenever 
the autovacuum stuff goes into the backend proper, it is going to need a 
catalogue.
 
>  -> Some tables should _never_ be touched;

That can be determined at runtime from the stats. Not required as a special 
feature, IMHO..

> 
>  -> Some tables might get "reset" to indicate that they should be
> considered as having been recently vacuumed, or perhaps that they
> badly need vacuuming;

Well, the stats collector takes care of that. The autovacuum daemon reads 
those statistics and maintains a periodic snapshot of them to determine 
whether or not it needs to vacuum.
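The snapshot-comparison cycle can be sketched in a few lines (a sketch only; 
the real daemon tracks more than one counter per table, and the numbers below 
are invented):

```python
def tables_to_vacuum(snapshot, current, threshold):
    """One autovacuum cycle: compare current stats-collector activity
    counters against the previous snapshot and return the tables whose
    activity since that snapshot exceeds the threshold."""
    return [table for table, count in current.items()
            if count - snapshot.get(table, 0) > threshold]
```

For example, with a threshold of 500 modified tuples, a table whose counter 
went from 100 to 700 gets vacuumed, while one with only 60 new modifications 
does not.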

Why it crawls on a dirty database is as follows. The autovacuum daemon starts, 
reads the statistics, sets them as the base level, and lets a cycle pass, 
which is typically a few minutes. When it checks again, it finds that lots of 
things have been modified and need vacuuming, so it triggers a vacuum.

Now vacuum goes on cleaning the entire table, which might be a days-long job 
that was continuously postponed for one reason or another. Oops.. your 
database is on its knees..

 
>  -> As you suggest, per-table thresholds;

I would rather put it in terms of pages. If any table wastes more than, say, 
100 pages, it deserves a vacuum..
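That page-based rule could be sketched as follows (the 100-page figure comes 
from the paragraph above; the tuples-per-page number is an invented example, 
not a PostgreSQL constant):

```python
WASTED_PAGE_THRESHOLD = 100  # pages of dead tuples a table may waste

def needs_vacuum(dead_tuples, tuples_per_page):
    """Return True once dead tuples occupy more than the page threshold."""
    if tuples_per_page <= 0:
        raise ValueError("tuples_per_page must be positive")
    wasted_pages = dead_tuples // tuples_per_page
    return wasted_pages > WASTED_PAGE_THRESHOLD
```

With roughly 50 tuples per 8K page, a table would earn a vacuum after about 
5,000 dead tuples, regardless of its total size.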

 
>  -> pg_autovacuum would know when tables were last vacuumed by
> it...

If you maintain a table in the database, there are lots of things you can 
track. And you need to connect to the database anyway to fire a vacuum..

 
>  -> You could record vacuum times to tell pg_autovacuum that you
> vacuumed something "behind its back."

It should notice..

>  -> If the system queued up proposed vacuums by having a "queue"
> table, you could request that pg_autovacuum do a vacuum on a
> particular table at the next opportunity.

That won't ever happen if autovacuum is constantly running..

> Unfortunately, the "integrate into the backend" thing has long seemed
> "just around the corner."  I think we should either:
>  a) Decide to enhance pg_autovacuum, or
>  b) Not.

In fact, I would say that now that we have pg_autovacuum, we should not 
integrate it. It is a very handy tool for keeping a database in shape. Other 
databases go the other way round: they build the maintenance functionality in 
and then create a tool on top of it. Here we have it already done.

It's just that it should be enabled by default. That would rock..

Bye
 Shridhar

--
Bubble Memory, n.:  A derogatory term, usually referring to a person's 
intelligence.  See also "vacuum tube".

