Hi,
I currently have a MySQL server running with a database of around 800
GB. The problem is that the server is old (a 500 MHz Pentium III with 512
MB RAM) and I want to move to a new server and convert the
existing database to PostgreSQL on Debian (I assume that PostgreSQL
offers better perf
Hi, I am trying to make a cluster out of any
database, PostgreSQL or MySQL or any other free database. I have looked at
openMosix patched with the migshm patch for shared memory support and it seems
that neither works fully. PostgreSQL in particular uses shared memory but
not the system se
ROTECTED]>
To: "Bill" <[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>
Sent: Tuesday, June 22, 2004 11:31 AM
Subject: Re: [PERFORM] postgresql and openmosix migration
> Bill,
>
> > Any ideas of how I can cluster my database (around 800 GB
> > in size so even par
Actually, I have some
queries that are slow, however I was wondering if you could help me write a
query that is rather simple, but I, as a true database novice, can't seem to
conjure. So we have stocks, as I have previously said, and I have a huge table
which contains all of the opening an
high | numeric(12,2) | not null default '0.00'
Indexes: symbol_2_oclh_index btree (symbol, date),
symbol_oclh_index btree (symbol, date)
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Richard Huxton
Sent: Monday, June 28, 2004 4:14
TECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Richard Huxton
Sent: Tuesday, June 29, 2004 3:38 AM
To: Bill
Cc: [EMAIL PROTECTED]
Subject: Re: [PERFORM] Query performance
Bill wrote:
> Ok, so here is the output of oclh (i.e. "\d oclh")
>
> Table "
Scan on oclh (cost=0.00..228404.40
rows=10674140 width=29) (actual time=20.00..137720.61 rows=10674140 loops=1)
Total runtime: 1986748.44 msec
(9 rows)
Can I get any better performance?
Thanks.
-Original Message-
From: Bruno Wolff III [mailto:[EMAIL PROTECTED]
Sent: Tuesday, June 29, 2004 2:
Hi, I have downloaded the new PostgreSQL (version 8.0 beta2) and I was
wondering what performance features I can take advantage of before I start
to dump my 3/4 terabyte database into the new format. More specifically
I am interested in tablespaces--what exactly is this feature, some sort of
orga
constantly. REINDEX locks
client apps out while in progress. Same with "CREATE
INDEX" when we drop/create. The table can have over
10 million rows. Recreating the indexes seems to take
hours. This is too long to lock the client apps out.
Is there any other solu
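One answer that arrived after this thread (PostgreSQL 8.2 added CREATE INDEX CONCURRENTLY) avoids the long exclusive lock entirely; a minimal sketch, with hypothetical table and index names:

```sql
-- Build a replacement index without the exclusive lock that plain
-- CREATE INDEX holds (requires PostgreSQL 8.2 or later).
CREATE INDEX CONCURRENTLY big_table_symbol_idx_new ON big_table (symbol);
-- Swap it in; only these two statements take a (brief) lock.
DROP INDEX big_table_symbol_idx;
ALTER INDEX big_table_symbol_idx_new RENAME TO big_table_symbol_idx;
```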
date query planning statistics?
Or are those two completely separate operations
requiring separate invocations of 'vacuumdb'?
thanks,
Bill
__
Do you Yahoo!?
Plan great trips with Yahoo! Travel: Now over 17,000 guide
of-the-ordinary number of rows
were deleted (say 75% of rows in the table, as opposed
to normal 5%) followed by a 'VACUUM ANALYZE'? Could
things get out of whack because of that situation?
thanks,
Bill
--- Josh Berkus wrote:
> Bill,
>
> > What about if an out-of-the-ordinary number of rows
> > were deleted (say 75% of rows in the table, as opposed
> > to normal 5%) followed by a 'VACUUM ANALYZE'? Could
> > things get out of whack becaus
be dropped, there
> is a big prologue txn, which moves (INSERT then
> DELETE) the good rows into a
> child table that is NOT to be dropped. Then BANG
> pull the plug on the tables you
> don't want. MUCH faster than DELETE: the dropped
> tables' files' disk space go
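The move-then-drop trick described above can be sketched as follows (table and column names are hypothetical):

```sql
-- Copy the rows worth keeping into a table that will survive,
-- then drop the old table outright. DROP TABLE just unlinks the
-- table's files, so it is far cheaper than a row-by-row DELETE.
BEGIN;
INSERT INTO keepers SELECT * FROM big_table WHERE keep_me;
COMMIT;
DROP TABLE big_table;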
thing else.
If I deleted 75% of the rows but had a max_fsm_pages
setting that still exceeded the pages required (as
indicated in VACUUM output), would that solve my
indexing problem or would I still need to REINDEX
after such a purge?
regards,
Bill
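The check implied by the question above can be sketched like this (the max_fsm_pages value is purely illustrative):

```sql
-- The final lines of VACUUM VERBOSE report how many pages hold
-- reclaimable free space; if that total exceeds max_fsm_pages
-- (postgresql.conf, e.g. max_fsm_pages = 500000), the free space
-- map cannot track it all and bloat will accumulate until a
-- REINDEX or VACUUM FULL.
VACUUM VERBOSE;
```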
--- Mischa Sandberg <[EMAIL PROTECTED]>
nd
we will be your most ardent supporters.
==Bill==
--
Bill Ewing
Posted via http://www.codecomments.com
--
s
making a bad guess about something but I'm not sure what. I didn't see
any hard-coded limits grepping through the source (though it is close
to the 16-bit unsigned boundary - probably coincidental) so if anyone
has ideas or pointers to how I might figure out what's going wrong that
ower on disk but makes up for it in
single-threaded access to CPU and memory speed. But if this app were to
be scaled it would make much more sense to cluster several $800
machines than it would to buy 'big-iron'.
-Bill
-
Bill McGonigle, Owner Work: 603.448.4440
BFC
of system-level settings.
I haven't been on this list for long, and only using postgres for a
handful of years, so forgive it if this has been hashed out before.
-Bill
-
Bill McGonigle, Owner Work: 603.448.4440
BFC Computing, LLC Home: 603.448.1668
[
when
things are more resilient. I've tried the superhero thing before and
it's just not worth it.
-Bill
-
Bill McGonigle, Owner Work: 603.448.4440
BFC Computing, LLC Home: 603.448.1668
[EMAIL PROTECTED] Mobile: 603.252.2606
http://www.bfccomputi
Bruno Wolff III <[EMAIL PROTECTED]> wrote:
>
> This really should have been asked on pgsql-performance and would probably
> get a better response there..
>
> On Sun, Nov 26, 2006 at 16:35:52 +,
> Michael Simms <[EMAIL PROTECTED]> wrote:
> > PostgreSQL version: 8.1.4
> > Operating system: L
sible for the most
common queries, then force the developers to use "set work_mem to x"
to adjust it for big queries.
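The per-query override mentioned above looks like this (the query and the 128MB value are illustrative; the unit suffix in the SET string requires 8.2+, earlier versions take an integer number of kB):

```sql
-- Raise work_mem for this session only, run the big sort/hash
-- query, then drop back to the server-wide default.
SET work_mem = '128MB';
SELECT customer_id, sum(amount)
  FROM orders
 GROUP BY customer_id
 ORDER BY 2 DESC;
RESET work_mem;
```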
--
Bill Moran
Collaborative Fusion Inc.
[EMAIL PROTECTED]
Phone: 412-422-3463x4023
---(end of broadcast)---
TIP 9: In ve
ase can fit in 100M, that 900M
might be better used as work_mem, or something else.
I haven't been able to find anything regarding how much of the
shared buffer space PostgreSQL is actually using, as opposed to
simply allocating.
--
Bill Moran
Collaborative Fusion Inc.
[EMAIL PROTE
> not running a system optimized from the ground up for its HW do not see.
http://www.potentialtech.com/wmoran/source.php
You get an idea of how old these tests are by the fact that the latest
and greatest was FreeBSD 4.9 at the time, but I suppose it may still
be relevant.
--
Bill Moran
u created a separate table for each emailid (call them
Messages_1, Messages_2, etc). I expect that no one user will have an
unbearable number of messages, thus each user will see reasonable
performance when working with their mailbox.
You can handle the management of this entirely in your applica
In response to Tom Lane <[EMAIL PROTECTED]>:
> Bill Moran <[EMAIL PROTECTED]> writes:
> > Does the creation of a temp file trigger any logging?
>
> No; but it wouldn't be hard to add some if you wanted. I'd do it at
> deletion, not creation, so you co
In response to Stephen Frost <[EMAIL PROTECTED]>:
> * Bill Moran ([EMAIL PROTECTED]) wrote:
> > What I'm fuzzy on is how to discretely know when I'm overflowing
> > work_mem? Obviously, if work_mem is exhausted by a particular
> > query, temp files will be
that the docs on this are very good. If the
amount of data that changes between runs of autovacuum is greater
than max_fsm_pages, then vacuum will be unable to reclaim all the
space.
--
Bill Moran
Collaborative Fusion Inc.
---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster
improvements in both the usage of shared memory, and the
optimization of count(*) since 7.4, so the first suggestion I have is to
upgrade your installation.
--
Bill Moran
Collaborative Fusion Inc.
---(end of broadcast)---
TIP 7: You can help support the PostgreSQL project by donating at
http://www.postgresql.org/about/donate
stem to enforce a certain amount of policy on a per-connection basis.
HTH,
Bill
---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly
h exists, but
> when I do the queries they don't appear.
[snip]
Just a guess, but perhaps your index is damaged. Have you tried
REINDEXing?
--
Bill Moran
Collaborative Fusion Inc.
would appear to be enough space to do the entire sort in memory.
The second is with work_mem set to 128k.
More interesting is that the actual runtime doesn't differ by nearly
that much: 3100ms vs 2200ms. (I've realized that my setting for
random_page_cost is too damn high for this hardwar
, but my table is hidden behind a view, and
all I get is the ugly sequential scan. Any ideas on how to convince the
optimizer to unfold the subquery properly?
Bill
---(end of broadcast)---
TIP 4: Have you searched our list archives?
http://archives.postgresql.org
e of my post: I was attempting to be jovial,
but in retrospect, I see how it reads as a "gripe," which I guess
evoked your frowny-face emoticon.
Thanks for the quick response, elegant fix, and ongoing excellent work!
Cheers,
Bill
> Index: planagg.c
> =
ally did more tests,
you should provide that information, otherwise, people will criticize your
testing strategy instead of looking at the problem.
The other side to this is that you haven't shown enough information about
your alleged problem to even start to investigate it.
--
Bill Moran
Collaborative Fusion Inc.
t the case, and large shared_buffers improve performance.
I've yet to do any in-depth testing on this, though.
--
Bill Moran
Collaborative Fusion Inc.
---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desir
ou consent to this
policy and agree that it overrides any disclaimers or policies that may
exist elsewhere."
I have no idea if that's legally binding or not, but I've talked to a few
associates who have some experience in law, and they all argue that email
disclaimers
> > protected by law, and any opinions expressed are those of the
> > individual sender. Internet e-mail guarantees neither the
> > confidentiality nor the proper receipt of the message sent.
> > If the addressee of this
setup, my only suggestion would be to double-verify
that your postgresql.conf settings are correct on the 64 bit system.
--
Bill Moran
Collaborative Fusion Inc.
[EMAIL PROTECTED]
Phone: 412-422-3463x4023
IMPORTANT: This message c
In response to David Brain <[EMAIL PROTECTED]>:
>
> Thanks for the response.
> Bill Moran wrote:
> > In response to David Brain <[EMAIL PROTECTED]>:
> >> I recently migrated one of our large (multi-hundred GB) dbs from an
> >> Intel 32bit platform (D
In response to David Brain <[EMAIL PROTECTED]>:
> Bill Moran wrote:
>
> >
> > Install the pg_buffercache contrib module and take a look at how shared
> > memory is being used. I like to use MRTG to graph shared buffer usage
> > over time, but you can just do
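With the contrib module installed, a query along these lines shows how much of the shared buffer space is actually in use versus merely allocated (a sketch against the pg_buffercache view):

```sql
-- Buffers whose relfilenode is NULL have never been used;
-- everything else is caching some relation's blocks.
SELECT relfilenode IS NOT NULL AS in_use, count(*)
  FROM pg_buffercache
 GROUP BY 1;
```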
field foo.id in Schema 1 superfluous? For example, wouldn't the
> referencing from bar to foo really be done "behind the scenes" through some
> hidden field (oid?) instead of through the frobnitz text field? Which of
> the two schemas would give better performance?
--
launching
the DB.
> Can anyone else confirm this? I don't know if this is a windows-only
> issue, but I don't know of a way to check fragmentation in unix.
I can confirm that it's only a Windows problem. No UNIX filesystem
that I'm aware of suffers from fragmentation
is and prevent it from becoming
a performance issue.
> There's a tool for Linux in the e2fsprogs package called filefrag that
> shows the fragmentation of a file, but I've never used it myself.
Interesting. However, the existence of a tool does not particularly
indicate the _need_ f
ght also need bumped.
Figure out which postgresql.conf your system is using and get it dialed
in for your hardware. You can make all the indexes you want, but if
you've told Postgres that it only has 8M of RAM to work with, performance
is going to suck. I don't see hardware specs on th
ple.
Personally, I wouldn't be opposed to more automagic stuff, just as long as
I have the option to disable it. There are some cases where I still
disable autovac.
> In closing, I am not bashing PG! I love it and swear by it. These comments
> are
> purely from an advocacy perspective. I'd love to see PG user base continue
> to grow.
I expect that part of the problem is "who's going to do it?"
--
Bill Moran
Collaborative Fusion Inc.
http://people.collaborativefusion.com/~wmoran/
[EMAIL PROTECTED]
Phone: 412-422-3463x4023
---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings
ion about which OS to use ;). I would stick with the long supported
> versions of Linux, thus CentOS 5, Debian 4, Ubuntu Dapper.
There used to be a prominent site that recommended FreeBSD for Postgres.
Don't know if that's still recommended or not -- but bringing it up is
likely to st
ason I can see for vacuum full/reindex is if you _can_.
For example, if there is a period when you know the database will be
unused that is sufficiently long that you know these operations can
complete. Keep in mind that both reindex and vacuum full create performance
problems while they are
act?
>
> # alter database test set lc_collate = 'C';
> ERROR: parameter "lc_collate" cannot be changed
How would that command handle UTF data that could not be converted to C?
--
Bill Moran
Collaborative Fusion Inc.
http://people.collaborativefusion.com/~wmoran/
[EMAIL
want take a backseat to your
rendering process, why not just nice the initial PostgreSQL daemon? All
children will inherit the nice value, and there's no chance of priority
inversion because all the PostgreSQL backends are running at the same
priority.
Just a thought.
--
my $uq = $dbh2->prepare(sprintf("SELECT db.name, accession,
> version, is_current from feature_dbxref fd, dbxref dx, db where fd.feature_id
> = %d and fd.dbxref_id = dx.dbxref_id and dx.db_id = db.db_id and db.name =
> '%s'",$pr{feature_id},$uds{$uh{$rec}{stflag}}
ters control).
> > > >
> > > >That actually has nothing at all to do with pg_stats or pg_statistics.
> > > >Those deal with statistics about the data in the database, and not
> > about
> > > >statistics from the engine (which is what the pg_stat*
Dave Cramer <[EMAIL PROTECTED]> wrote:
> Since PITR has to enable archiving does this not increase the amount
> of disk I/O required ?
It does increase the required amount of I/O.
--
Bill Moran
http://www.potentialtech.com
---(end
.
>
> Would that scenario work, or I am simply too naive considering it?
It won't work, unfortunately. The on-disk representation of the data is
different between ia32 and amd64.
--
Bill Moran
Collaborative Fusion Inc.
http://people.collaborativefusion.com/~wmoran/
[EMAIL PROTECTED]
Ph
it
done any faster, but there's just not enough information in your
post to know for sure.
Post some explains of some problem queries. Let us know more about
your IO load. Give us some snapshots of top under load. Find out
how large the database is. Provide the output of vacuum verbose.
to shared_buffers (only a few 100 meg?).
As a result, PostgreSQL is constantly asking the OS for disk blocks that it
doesn't have cached, but the OS has those disk blocks cached in RAM.
If my guess is right, you'll probably see improved performance by allocating
more share
In response to "Chris Hoover" <[EMAIL PROTECTED]>:
> On 6/8/07, Bill Moran <[EMAIL PROTECTED]> wrote:
> >
> > In response to "Chris Hoover" <[EMAIL PROTECTED]>:
> >
> > > I need some help. I have started taking snapshots of perf
l backend processes will allocate more memory above shared_buffers
for processing individual queries. See work_mem.
--
Bill Moran
Collaborative Fusion Inc.
http://people.collaborativefusion.com/~wmoran/
[EMAIL PROTECTED]
Phone: 412-422-3463x4023
---(end of broadcast)--
o see how well autovacuum is keeping up. There's
no problem with running manual vacuum and autovacuum together, and you'll
be able to gather _some_ information about how well autovacuum is
keeping up.
--
Bill Moran
Collaborative Fusion Inc.
http://people.collab
In response to "Sabin Coanda" <[EMAIL PROTECTED]>:
> Hi Bill,
>
> ...
> >
> > However, you can get some measure of tracking my running VACUUM VERBOSE
> > on a regular basis to see how well autovacuum is keeping up. There's
> > no problem w
that doesn't apply here.
Why not? I see no reason why an appropriate autovaccum schedule would not
apply to your scenario. I'm not saying it does, only that your response
does not indicate that it doesn't, and thus I'm concerned that you're
writing auto
exes --- VACUUM FULL isn't good at that.
> >
> > regards, tom lane
> >
> > ---(end of
> > broadcast)---
> > TIP 4: Have you searched our list archives?
> >
> >
G's shared_buffers -- if that's the case, have
a look at the pg_buffercache contrib module.
--
Bill Moran
Collaborative Fusion Inc.
http://people.collaborativefusion.com/~wmoran/
[EMAIL PROTECTED]
Phone: 412-422-3463x4023
---(end of broadcast)---
TIP 6: explain analyze is your friend
ring the reindex, but everything should
continue to chug along. A drop/recreate of the index should be OK as well.
> Tom, Bill, Chris and Richard, thank you so much for your thoughts on
> this matter so far. It helps to not feel "so alone" when dealing
> with difficult issues
iodic.conf (just like other
periodic job):
daily_pgsql_vacuum_enable="YES"
daily_pgsql_backup_enable="NO"
are the defaults.
> Hmmm... I wonder why this would just start now, three days ago. Everything
> seemed to be normal for the last two weeks.
Someone alter /etc/periodic.conf? Perhaps it
> #cpu_operator_cost = 0.0025 # (same)
> #---
> # LOCK MANAGEMENT
> #---
>
> #deadlock_timeout = 1000
prove/disprove that theory:
http://www.postgresql.org/docs/8.2/interactive/view-pg-locks.html
If none of those are the case, then please describe the actual problem
you are having.
HTH.
--
Bill Moran
Collaborative Fusion Inc.
http://people.collaborativefusion.com/~wmoran/
[EMAIL PROTEC
r problem, then
you either need to implement a data expiration policy to get rid of
old data, or increase the amount of storage to accommodate the data
you want to keep.
--
Bill Moran
Collaborative Fusion Inc.
http://people.collaborativefusion.com/~wmoran/
[EMAIL PROTECTED]
Phone: 412-422-3463x4023
that process, unless you're running
a 486dx2. You didn't mention your hardware or your postgresql.conf
settings. What other activity is occurring during this long count()?
Can you give us a shot of the iostat output and/or top during this
phenomenon?
>
> Jozsef
>
>
>
fers = 16
> >
> > checkpoint_segments = 16
> >
> >
> >
> >
> >
> > top reports
> >
> >
> >
> > PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
> >
> > 19478 postgres 25 0 740m 721m 536m R 99.7 4
UM seems to fix. A FULL seems to fix that, but it appears
to bloat the indexes, thus a REINDEX helps.
I would expect that setting fillfactor to 100 will encourage indexes to bloat
faster, and would only be recommended if you didn't expect the index contents
to change?
--
Bill
In response to "Decibel!" <[EMAIL PROTECTED]>:
> On Wed, Aug 08, 2007 at 03:27:57PM -0400, Bill Moran wrote:
> > I've had similar experience. One thing you didn't mention that I've noticed
> > is that VACUUM FULL often bloats indexes. I've mad
e any difference ?
Does the documentation leave anything unanswered?
http://www.postgresql.org/docs/8.2/static/runtime-config-locks.html
--
Bill Moran
Collaborative Fusion Inc.
http://people.collaborativefusion.com/~wmoran/
[EMAIL PROTECTED]
Phone: 412-422-3463x4023
---(end o
ons on a server if needed, the
> ability to quickly revert to a previous version, etc however this is a
> discussion for another day - I only mention it in case the question "why not
> just use RPM's?" arises...
>
>
> So here's my questions:
>
> 1
> You can't use these techniques for a major version upgrade.
> Use pg_dump piped to psql. That will also eliminate all bloat.
If you can't afford any downtime, you may be able to use Slony to
do your upgrade. However, slony adds overhead, and if this system
is tapped out already, i
ou autovacuum or vacuum manually?
> Tell us more...
>
>
> Bye,
> Chris.
>
>
>
> ---(end of broadcast)---
> TIP 3: Have you checked our extensive FAQ?
>
>http://www.postgresql
re not obvious to me. The explain analyze output is
going to be key to working this out -- unless it's something like
your postgresql.conf isn't properly tuned.
> I vacuum and
> reindex the database daily.
>
> I'd prefer not to have to rewrite th
ay with a lot less
space than many other usages, so they might be very practical.
--
Bill Moran
Collaborative Fusion Inc.
http://people.collaborativefusion.com/~wmoran/
[EMAIL PROTECTED]
Phone: 412-422-3463x4023
mance related stuff.
>
> Thank you all in advance
To add to Mikko's comments:
Periodic vacuuming and analyzing is a mandatory part of running a
PostgreSQL database server. You'll probably be best served to configure
the autovacuum daemon to handle this for you. See the postgre
'll see detailed log
messages about its activities.
There were discussions on other lists about improving autovacuum's log
messages, I'm pretty sure it will log more helpful information in 8.3.
--
Bill Moran
Collaborative Fusion Inc.
http://people.collaborativefusion.com/~wmoran/
[EMAIL PR
> Generally it's faster to run more commands in a single transaction but what
> I'm worried about is that you may have a transaction open which you aren't
> committing for a long time. This can stop vacuum from being able to clean up
> dead space and if it's in t
anner faster.
>
> Do you think it could be reasonable?
Based on the information you've given and the responses you've made,
I think you're as likely to roll a 1d6 and get the right solution as
anything else.
Good luck.
> -Original Message-
> From: Bil
>
> Index Cond: (qa.answer_id = "outer".id)
>
>
> -> BitmapOr (cost=55.08..55.08 rows=6596 width=0) (never e
ery, in the old one, in the new one), without difference and I only
> retrieve the first 100 records (I didn't count the network time in any case).
> But the weird thing is running the query in the new server there are many disk
> accesses and cpu usage. And wi
in the same server there
> aren't networks delays!
Not network, no. But the results of your explains seem to show that the
query is executing much faster on the new system than the old, so the
problem still becomes, "what is happening after the query completes that
is so slow?"
ep up the vacuum schedule you've already
established. You may want to (as has already been suggested) explicitly
vacuum this table after large delete operations as well.
--
Bill Moran
Collaborative Fusion Inc.
http://people.collaborativefusion.com/~wmoran/
[EMAIL PROTECTED]
Phone: 412-422-3463x4023
us the explain.
However, 2 guesses:
1) You never analyzed the table, thus PG has awful statistics and
doesn't know how to pick a good plan.
2) You have so few rows in the table that a seq scan is actually
faster than an index scan, which is why PG uses it
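The first guess can be ruled out in a minute; a sketch with a hypothetical table:

```sql
-- Refresh the planner's statistics for the table, then see
-- whether the chosen plan changes.
ANALYZE orders;
EXPLAIN SELECT * FROM orders WHERE customer_id = 42;
```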
78 kB shared
> memory.
> VACUUM
This doesn't look problematic, so I doubt your vacuum policy is to blame.
--
Bill Moran
Collaborative Fusion Inc.
http://people.collaborativefusion.com/~wmoran/
[EMAIL PROTECTED]
Phone: 412-422-3463x4023
---(end of broadcast)---
2000
> deadlock_timeout = 1
> max_locks_per_transaction = 640
> add_missing_from = on
>
> As I mentioned, any insights into changing the configuration to optimize
> performance are most welcome.
>
> Thanks
>
> Ron
>
> ---(end of broadc
In response to Ron St-Pierre <[EMAIL PROTECTED]>:
> Bill Moran wrote:
> > In response to Ron St-Pierre <[EMAIL PROTECTED]>:
> >
> >
> >> We vacuum only a few of our tables nightly, this one is the last one
> >> because it takes longer to run
CPUs seem to be the limiting
factor. As a result, this decision on what kind of CPUs to get in
the next round of servers is pretty important.
Any advice is much appreciated.
--
Bill Moran
Collaborative Fusion Inc.
s done once to get you back on track, assuming that #1 is
done properly.
A little bit of wasted space in the database is OK, and lazy vacuum done
on a reasonable schedule will keep the level of wasted space to an
acceptable level.
--
Bill Moran
Potential Technologies
http://www.potentialtech.
"Jim C. Nasby" <[EMAIL PROTECTED]> wrote:
> On Sun, Apr 30, 2006 at 10:03:46AM -0400, Bill Moran wrote:
> > At this point, it seems like you need to do 2 things:
> > 1) Schedule lazy vacuum to run, or configure autovacuum.
> > 2) Schedule some downtime to r
s that we did. We pulled the plug before doing any extensive
testing, because it just didn't seem as if it was going to be worth it.
--
Bill Moran
I already know the ending it's the part that makes your face implode.
I don't know what makes your face implode, but that's the
ose some problems - e.g. effective_cache_size.
The only reason I can see for doing this is when you need to run two
different versions of PostgreSQL. Which is what I've been forced to
do on one of our servers.
It works, but it's a pain to admin. If you can just put all the databases
in
P. If you've got a cheap,
laggy switch, no amount of TCP or PostgreSQL tuning is going to overcome
it.
Hope some of this is helpful.
--
Bill Moran
Collaborative Fusion Inc.
made almost no difference in our tests. If your DB is
very IO intensive, you may have different results.
--
Bill Moran
Collaborative Fusion Inc.
In response to "Gourish Singbal" <[EMAIL PROTECTED]>:
If you look in the mail headers:
List-Unsubscribe: <mailto:[EMAIL PROTECTED]>
--
Bill Moran
Collaborative Fusion Inc.
r:
http://www.postgresql.org/docs/8.1/interactive/manage-ag-templatedbs.html
--
Bill Moran
Collaborative Fusion Inc.
autovac, you may want to experiment with
vacuum_cost_delay and associated parameters, which can lessen the
impact of vacuuming.
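For a manual VACUUM, those parameters can be set per session (the values below are illustrative, not recommendations):

```sql
-- Sleep 20ms whenever the accumulated I/O cost reaches the limit,
-- throttling vacuum so client queries keep their share of the disk.
SET vacuum_cost_delay = 20;
SET vacuum_cost_limit = 200;
VACUUM ANALYZE;
```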
--
Bill Moran
Collaborative Fusion Inc.
---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster