I'm trying to query a table with 250,000+ rows. My
query requires I provide 5 columns in my "order by" clause:
select column
from table
where column >= '2004-3-22 0:0:0'
order by
    ds.receipt desc,
    ds.carrier_id asc,
    ds.batchnum asc,
    encounternum asc,
    ds.encounter_id asc
new index.
Am I doing something wrong?
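For what it's worth, on a reasonably recent PostgreSQL (8.3 or later, which supports DESC in index definitions) a single composite index matching the sort directions sidesteps the whole problem. This is only a sketch; the table name "ds" and the column list mirror the query above:

```sql
-- One index whose column order and directions match the ORDER BY
-- lets the planner read rows pre-sorted instead of sorting 250k rows.
CREATE INDEX ds_sort_idx ON ds (
    receipt      DESC,
    carrier_id   ASC,
    batchnum     ASC,
    encounternum ASC,
    encounter_id ASC
);
```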
- Original Message -
From: "Josh Berkus"
To:
Cc: <[EMAIL PROTECTED]>
Sent: Tuesday, January 17, 2006 5:25 PM
Subject: Re: [PERFORM] Multiple Order By Criteria
J,
I have an index built for each of these columns in my order by clause.
To: "Josh Berkus"
Cc: ; <[EMAIL PROTECTED]>
Sent: Tuesday, January 17, 2006 5:40 PM
Subject: Re: [PERFORM] Multiple Order By Criteria
On Tue, 17 Jan 2006, Josh Berkus wrote:
J,
> I have an index built for each of these columns in my order by clause.
> This query takes an
I've read all of this info closely. I wish these pages had come up when I was
searching for an answer to my problem. Oh well.
I am getting an idea of what I need to do to make this work well. I was
wondering if there is more information to read on how to implement this
solution in a simpler way
I have the answer I've been looking for, and I'd like to share it with all.
After help from you guys, it turned out the real issue was getting an index
used for my ORDER BY X DESC clauses. For some reason that doesn't make much
sense to me, Postgres doesn't support this out of the box, when it arguably
should. Here's some C to use to create the operator classes; it seems to work OK.
---
#include "postgres.h"
#include "fmgr.h"
#include "utils/date.h"

/* For date sorts: a btree comparison function with the order reversed */
PG_FUNCTION_INFO_V1(ddd_date_revcmp);

Datum ddd_date_revcmp(PG_FUNCTION_ARGS){
        DateADT arg1=PG_GETARG_DATEADT(0);
        DateADT arg2=PG_GETARG_DATEADT(1);

        /* reversed: positive when arg1 sorts before arg2 */
        PG_RETURN_INT32(arg2 - arg1);
}
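To actually use a comparator like that, the usual approach (sketched here; the .so path, symbol, and opclass/index names are placeholders, not from the original message) is to wrap it in a btree operator class with the operator strategy slots reversed, then build the index with that opclass:

```sql
CREATE FUNCTION ddd_date_revcmp(date, date) RETURNS integer
    AS '/path/to/ddd_cmp.so', 'ddd_date_revcmp'
    LANGUAGE C STRICT;

-- Strategy slots 1..5 are normally <, <=, =, >=, >.
-- Reversing them makes the btree sort descending.
CREATE OPERATOR CLASS date_revop FOR TYPE date USING btree AS
    OPERATOR 1 > ,
    OPERATOR 2 >= ,
    OPERATOR 3 = ,
    OPERATOR 4 <= ,
    OPERATOR 5 < ,
    FUNCTION 1 ddd_date_revcmp(date, date);

CREATE INDEX receipt_rev_idx ON ds (receipt date_revop, carrier_id);
```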
Hey guys, how have you been? This is quite a newbie
question, but I need to ask it. I'm trying to wrap my mind around the syntax of
JOIN and why and when to use it. I understand the concept of making a query go
faster by creating indexes, but it seems that when I want data from multiple
tables that
If I want my database to go faster due to X, then I would think that the
issue is about performance. I wasn't aware of a particular constraint on X.
I have more than a rudimentary understanding of what's going on here; I was
just hoping that someone could shed some light on the basic principle o
Yes, that helps a great deal. Thank you so much.
- Original Message -
From: "Richard Huxton"
To: <[EMAIL PROTECTED]>
Cc:
Sent: Thursday, January 26, 2006 11:47 AM
Subject: Re: [PERFORM] Query optimization with X Y JOIN
[EMAIL PROTECTED] wrote:
If I want my database to go faster, du
On Wednesday 02 June 2010 13:37:37 Mozzi wrote:
> Hi
>
> Thanx mate Create Index seems to be the culprit.
> Is it normal to just use 1 cpu tho?
If it is a single-threaded process, then yes.
And a "Create index" on a single table will probably be single-threaded.
If you now start a "create index"
On Friday 04 June 2010 14:17:35 Jon Schewe wrote:
> Some interesting data about different filesystems I tried with
> PostgreSQL and how it came out.
>
> I have an application that is backed in postgres using Java JDBC to
> access it. The tests were all done on an opensuse 11.2 64-bit machine,
> on
I think I have read what there is to read about queries being prepared in
plpgsql functions, but I still cannot explain the following, so I thought
I'd post it here:
Suppose 2 functions: factor(int,int) and offset(int, int).
Suppose a third function: convert(float,int,int) which simply returns
$1*
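For readers following along, a minimal version of the three functions might look like the sketch below. The bodies are my guess (the original message is cut off at "$1*"); only the signatures come from the text. Note that "offset" is a reserved word and needs quoting:

```sql
CREATE FUNCTION factor(int, int) RETURNS float AS $$
    SELECT 2.0::float;  -- stand-in body, not from the original post
$$ LANGUAGE sql;

CREATE FUNCTION "offset"(int, int) RETURNS float AS $$
    SELECT 1.0::float;  -- stand-in body, not from the original post
$$ LANGUAGE sql;

-- plpgsql caches the plan for this expression on first execution,
-- which is what makes prepared-query behaviour relevant here.
CREATE FUNCTION convert(float, int, int) RETURNS float AS $$
BEGIN
    RETURN $1 * factor($2, $3) + "offset"($2, $3);
END;
$$ LANGUAGE plpgsql;
```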
out time zone))"
"->  Bitmap Index Scan on tbl_sensor_channel_data_pkey  (cost=0.00..2978.92 rows=105456 width=0) (actual time=27.433..27.433 rows=150678 loops=1)"
"      Index Cond: ((tree_id = 70) AND (sens_chan_data_timestamp >= '2008-06-11 00:00:00
a way to affect the functions. So, as far as I understand the
Postgres workings, this shouldn't pose a problem.
Regards,
Davor
"Tom Lane" wrote in message
news:25116.1277047...@sss.pgh.pa.us...
> "Davor J." writes:
>> Suppose 2 functions: factor(int,int) and
> "Tom Lane" wrote in message
> news:25116.1277047...@sss.pgh.pa.us...
>> "Davor J." writes:
>>> Suppose 2 functions: factor(int,int) and offset(int, int).
>>> Suppose a third function: convert(float,int,int) which simply returns
>>>
On Wednesday 17 November 2010 15:26:56 Eric Comeau wrote:
> This is not directly a PostgreSQL performance question but I'm hoping
> some of the chaps that build high IO PostgreSQL servers on here can help.
>
> We build file transfer acceleration s/w (and use PostgreSQL as our
> database) but we ne
Odds are that a table of 14 rows is more likely to be cached in RAM
than a table of 14 million rows. PostgreSQL would certainly be more
"open-minded" about using an index if chances are low that the table is
cached. If the table *is* cached, though, what point would there be
in reading an index?
Also
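One concrete knob for that trade-off (a general illustration, not from the quoted message): telling the planner that random page reads are cheap, as they effectively are when the table is cached. The table name below is hypothetical:

```sql
-- Default random_page_cost is 4.0, tuned for spinning disks. For a
-- mostly-cached table, a value near seq_page_cost (1.0) makes index
-- scans look as cheap as they really are.
SET random_page_cost = 1.1;
EXPLAIN SELECT * FROM some_table WHERE id = 42;
```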
On Thu, Mar 17, 2011 at 10:13 AM, Jeff wrote:
> hey folks,
>
> Running into some odd performance issues between a few of our db boxes.
We've noticed similar results both in OLTP and data warehousing conditions here.
Opteron machines just seem to lag behind *especially* in data
warehousing. Smal
If the bulk load has the possibility of duplicating data, then you need
to change methods. Try bulk loading into a temp table, index it like
the original, eliminate the dups and merge the tables.
It is also possible to do an insert from the temp table into the final
table like:
insert into or
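The truncated INSERT above presumably looks something like this sketch (table and key names invented for illustration):

```sql
-- Stage the bulk load, indexed like the original.
CREATE TEMP TABLE load_tmp (LIKE target_tbl INCLUDING ALL);
COPY load_tmp FROM '/tmp/bulk.csv' WITH CSV;

-- Eliminate dups, then merge only rows not already in the target.
INSERT INTO target_tbl
SELECT DISTINCT t.*
FROM load_tmp t
WHERE NOT EXISTS (
    SELECT 1 FROM target_tbl o WHERE o.id = t.id
);
```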
There are a few things you didn't mention...
First off, what is the context this database is being used in? Is it the
backend for a web server? Data warehouse? Etc?
Second, you didn't mention the use of indexes. Do you have any indexes on
the table in question, and if so, does EXPLAIN ANALYZE
Let's say you have one partitioned table, "tbl_p", partitioned according to
the PK "p_pk". I have made something similar with triggers, basing myself on
the manual's section on partitioned tables.
According to the manual, the optimizer searches the CHECK constraints of the
partitions to determine which table(s)
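A minimal inheritance-based setup of the kind described (partition boundaries and names invented) looks like:

```sql
CREATE TABLE tbl_p (p_pk int, payload text);
CREATE TABLE tbl_p_low  (CHECK (p_pk <  1000)) INHERITS (tbl_p);
CREATE TABLE tbl_p_high (CHECK (p_pk >= 1000)) INHERITS (tbl_p);

-- With constraint_exclusion enabled, the optimizer compares the
-- query's WHERE clause against each partition's CHECK constraint
-- and skips partitions that cannot match.
SET constraint_exclusion = on;
EXPLAIN SELECT * FROM tbl_p WHERE p_pk = 42;  -- should scan only tbl_p_low
```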
2010/2/1 :
> * joke 1: an insert operation takes an exclusive lock on the row referenced
> by the foreign key. A big big big performance killer; I think this is a stupid
> design.
>
> * joke 2: a concurrent update on the same row means that the other
> transaction must wait for the earlier transaction to comp
On Thu, Apr 21, 2011 at 3:04 PM, Scott Marlowe wrote:
> Just because you've been walking around with a gun pointing at your
> head without it going off does not mean walking around with a gun
> pointing at your head is a good idea.
+1
--
Sent via pgsql-performance mailing list (pgsql-performanc
Not sure if this is the right list...but:
Disclaimer: I realize this is comparing apples to oranges. I'm not
trying to start a database flame-war. I just want to say thanks to
the PostgreSQL developers who make my life easier.
I manage thousands of databases (PostgreSQL, SQL Server, and MySQL)
On Mon, Apr 25, 2011 at 10:04 PM, Rob Wultsch wrote:
> Tip from someone that manages thousands of MySQL servers: Use InnoDB
> when using MySQL.
Granted, my knowledge of PostgreSQL (and even MSSQL) far surpasses my
knowledge of MySQL, but if InnoDB has such amazing benefits as being
crash safe, an
On Wed, Aug 17, 2011 at 1:55 PM, Ogden wrote:
>
>
> What about the OS itself? I put the Debian linux sysem also on XFS but
> haven't played around with it too much. Is it better to put the OS itself on
> ext4 and the /var/lib/pgsql partition on XFS?
>
>
We've always put the OS on whatever default
On Sun, Sep 11, 2011 at 1:36 PM, Ogden wrote:
> As someone who migrated a RAID 5 installation to RAID 10, I am getting far
> better read and write performance on heavy calculation queries. Writing on
> the RAID 5 really made things crawl. For lots of writing, I think RAID 10 is
> the best. It sho
Sorry, meant to send this to the list.
For really big data-warehousing, this document really helped us:
http://pgexperts.com/document.html?id=49
On Sun, Sep 11, 2011 at 5:22 PM, Maciek Sakrejda wrote:
> performance guidelines, I recommend Greg Smith's "PostgreSQL 9.0 High
> Performance" [1] (disclaimer: I used to work with Greg and got a free
> copy)
>
> I'll second that. "PostgreSQL 9.0 High Performance" is an excellent
resource
(I recom
we are using a cloud server
*this is the memory info*
free -h
             total       used       free     shared    buffers     cached
Mem:           15G        15G       197M       194M       121M       14G
-/+ buffers/cache:        926M       14G
Swap:          15G        32M       15G
*these are
You asked for it! ;-)
If you want cheap, get SATA. If you want fast under
*load* conditions, get SCSI. Everything else at this
time is marketing hype, either intentional or learned.
Ignoring dollars, expect to see SCSI beat SATA by 40%.
* * * What I tell you three times is true * * *
Also, c
A good one page discussion on the future of SCSI and SATA can
be found in the latest CHIPS (The Department of the Navy Information
Technology Magazine, formerly CHIPS AHOY) in an article by
Patrick G. Koehler and Lt. Cmdr. Stan Bush.
Click below if you don't mind being logged visiting Space and Na
pending on your needs.
Cheers,
J. Andrew Rogers
---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
grossly out of
date anyway).
J. Andrew Rogers
few benchmarks that suggest it
is at least as fast as XFS for Postgres.
Since XFS is more mature than JFS on Linux, I go with XFS
by default. If some tragically bad problems develop with
XFS I may reconsider that position, but we've been very
happy with it so far
the individuals with this issue. Using the patched kernel, one gets the
performance most people were expecting.
The v2.6.12+ kernels are a bit new, but they contain a very important
performance patch for systems like the one above. It would definitely be
worth testing if possible.
J. Andrew Roger
ne.
The closest thing to a supported v2.6.12 kernel that I know of is FC4, which
is not really supported in the enterprise sense of course.
J. Andrew Rogers
design.
In short, what you are trying to do is easily doable on PostgreSQL in
theory. However, restrictions on design choices may pose significant
hurdles. We did not start out with an ideal system either; it took a fair
amount of re-engineering to solve all the bottlene
s all its time waiting for disks, no
quantity of processors will help you unless you are doing a lot of math on
the results.
YMMV, as always. Recommendations more specific than "Opterons rule, Xeons
suck" depend greatly on what you plan on doing with the database.
Ron Peacetree sounds like someone talking out of his _AZZ_.
He can save his unreferenced flapdoodle for his SQL Server
clients. Maybe he will post references so that we may all
learn at the feet of Master Peacetree. :-)
douglas
On Oct 4, 2005, at 7:33 PM, Ron Peacetree wrote:
pg is _ver
Hey, you can say what you want about my style, but you
still haven't pointed to even one article from the vast literature
that you claim supports your argument. And I did include a
smiley. Your original email that PostgreSQL is wrong and
that you are right led me to believe that you, like other
A blast from the past is forwarded below.
douglas
Begin forwarded message:
From: Tom Lane <[EMAIL PROTECTED]>
Date: August 23, 2005 3:23:43 PM EDT
To: Donald Courtney <[EMAIL PROTECTED]>
Cc: pgsql-performance@postgresql.org, Frank Wiles <[EMAIL PROTECTED]>, gokulnathbabu manoharan <[EMAIL PROTEC
AMD added quad-core processors to their public roadmap for 2007.
Beyond 2007, the quad-cores will scale up to 32 sockets
(using Direct Connect Architecture 2.0)
Expect Intel to follow.
douglas
On Nov 16, 2005, at 9:38 AM, Steve Wampler wrote:
[...]
Got it - the cpu is only
d in theory and the upgrade cost is below the noise floor for
most database servers.
J. Andrew Rogers
On Dec 12, 2005, at 2:19 PM, Vivek Khera wrote:
On Dec 12, 2005, at 5:16 PM, J. Andrew Rogers wrote:
We've swapped out the DIMMs on MegaRAID controllers. Given the
cost of a standard low-end DIMM these days (which is what the LSI
controllers use last I checked), it is a very cheap up
r our apps, about half the CPU time will be spent inside the
geometry ops. Fortunately, there is significant opportunity for
improvement in the performance of the underlying code if anyone found
the time to optimize (and uglify) it for raw speed.
Cheers,
J. Andrew Rogers
.ELsmp. Hyperthreading is disabled in the BIOS and there are 2 Xeon
3.4Ghz cpus. There is 8Gb of RAM in the machine, and another 8Gb of swap.
Thank you in advance for any and all assistance you can provide.
--
Douglas J Hunley (doug at hunley.homeip.net) - Linux User #174778
http://doug.hunley
enable_mergejoin = off
> geqo = off
>
> I've occasionally had to tweak planner settings but I prefer to do
> so for specific queries instead of changing them server-wide.
I concur. Unfortunately, our Engr group doesn't actually write the SQL for the
app. It's generated, and
'initdb' will make changes to
the file? The file I sent is the working copy from the machine in question.
--
Douglas J Hunley (doug at hunley.homeip.net) - Linux User #174778
-
nterim, I did an 'initdb' to
another location on the same box and then copied those values into the config
file. That's cool to do, I assume?
--
Douglas J Hunley (doug at hunley.homeip.net) - Linux User #174778
I'm on the list, so there's no need to reply direct. I can get the
replies from the list
Thanks again for everyone's assistance thus far. Y'all rock!
--
Douglas J Hunley (doug at hunley.homeip.net) - Linux User #174778
LIKE 'tracker.peer_review_tracker.%' OR folder.path LIKE 'tracker.tars_0.%'
OR folder.path LIKE 'tracker.reviews.%' OR folder.path LIKE 'tracker.defects.
%' OR folder.path LIKE 'tracker.tars.%' OR folder.path
LIKE 'tracker.database_change_r
On Monday 04 June 2007 17:17:03 Heikki Linnakangas wrote:
> And did you use the same encoding and locale? Text operations on
> multibyte encodings are much more expensive.
The db was created as:
createdb -E UNICODE -O
--
Douglas J Hunley (doug at hunley.homeip.net) - Linux User #17477
"f_val_fid_val_idx" UNIQUE, btree (field_id, value)
"field_class_idx" btree (value_class)
"field_value_idx" btree (value)
item table:
Indexes:
"item_pk" PRIMARY KEY, btree (id)
"item_created_by_id" btree (created_by_id)
"item_folder" btree
On Tuesday 05 June 2007 10:34:04 Douglas J Hunley wrote:
> On Monday 04 June 2007 17:11:23 Gregory Stark wrote:
> > Those plans look like they have a lot of casts to text in them. How have
> > you defined your indexes? Are your id columns really text?
>
> pro
On Sat, Mar 20, 2010 at 10:47:30PM -0500, Andy Colson wrote:
>
> I guess, for me, once I started using PG and learned enough about it (all
> db have their own quirks and dark corners) I was in love. It wasn't
> important which db was fastest at xyz, it was which tool do I know, and
> trust, tha
On Tue, Mar 23, 2010 at 03:22:01PM -0400, Tom Lane wrote:
> "Ross J. Reedstrom" writes:
>
> > Andy, you are so me! I have the exact same one-and-only-one mission
> > critical mysql DB, but the gatekeeper is my wife. And experience with
> > that instance has made
On Sun, Jan 30, 2011 at 05:18:15PM -0500, Tom Lane wrote:
> Andres Freund writes:
> > What happens if you change the
> > left join event.origin on event.id = origin.eventid
> > into
> > join event.origin on event.id = origin.eventid
> > ?
>
> > The EXISTS() requires that origin is not nul
On Thu, Feb 03, 2011 at 12:44:23PM -0500, Chris Browne wrote:
> mladen.gog...@vmsinfo.com (Mladen Gogala) writes:
> > Hints are not even that complicated to program. The SQL parser should
> > compile the list of hints into a table and optimizer should check
> > whether any of the applicable access
On Tue, Feb 08, 2011 at 03:52:31PM -0600, Kevin Grittner wrote:
> Scott Marlowe wrote:
> > Greg Smith wrote:
>
> >> Kevin and I both suggested a "fast plus timeout then immediate"
> >> behavior is what many users seem to want.
>
> > Are there any settings in postgresql.conf that would make it
or am I
just crazy)?
Thanks for any insight!
--
Daniel J. Luke
++
| * [EMAIL PROTECTED] * |
| *-- http://www.geeklair.net -* |
+==
utes and I potentially
have people querying it constantly, so I can't remove and re-create
the index.
--
Daniel J. Luke
bably :) I'll keep searching the list archives and see if I find
anything else (I did some searching and didn't find anything that I
hadn't already tried).
Thanks!
--
Daniel J. Luke
On May 24, 2006, at 4:13 PM, Steinar H. Gunderson wrote:
On Wed, May 24, 2006 at 04:09:54PM -0400, Daniel J. Luke wrote:
no warnings in the log (I did change the checkpoint settings when I
set up the database, but didn't notice an appreciable difference in
insert performance).
How
8.1.x (I think we're upgrading from 8.1.3 to 8.1.4 today).
--
Daniel J. Luke
ite
capacity, so I don't think that's currently limiting performance).
--
Daniel J. Luke
"/copy", having the
file sitting on the client?
COPY table FROM STDIN using psql on the server
I should have gprof numbers on a similarly set up test machine soon ...
--
Daniel J. Luke
On May 30, 2006, at 3:59 PM, Daniel J. Luke wrote:
I should have gprof numbers on a similarly set up test machine
soon ...
gprof output is available at http://geeklair.net/~dluke/
postgres_profiles/
(generated from CVS HEAD as of today).
Any ideas are welcome.
Thanks!
--
Daniel J. Luke
replaced with 64-bit Linux on
Opterons because the AMD64 systems tend to be both faster and
cheaper. Architectures like Sparc have never given us problems, but
they have not exactly thrilled us with their performance either.
Cheers,
J. Andrew Rogers
, since those differ significantly from the P4 in
capability.
J. Andrew Rogers
would suggest that XFS is a fine
and safe choice for your application.
J. Andrew Rogers
spot if you use your
disk system hard (like we do). For databases with low disk I/O
intensity, stay with IDE/SATA and save a little money. For databases
that have high disk I/O intensity, use SCSI. The price premium for SCSI
is about 50%, but the performance difference is an integer factor
good database hardware in general
for us.
j. andrew rogers
I verified problem on a Dual Opteron server. I temporarily killed the
normal load, so the server was largely idle when the test was run.
Hardware:
2x Opteron 242
Rioworks HDAMA server board
4Gb RAM
OS Kernel:
RedHat9 + XFS
1 proc: 10-15 cs/sec
2 proc: 400,000-420,000 cs/sec
j. andrew
<$6k inclusive of everything. Add the money for 4 of the 8xx
processors and the Tyan quad motherboard, and the sum comes out to a
very reasonable number for what you are getting.
j. andrew rogers
th write-back and write-through, and IIRC, three
different algorithms for reading (none, read-ahead, adaptive). Plenty
of configuration options.
It is a pretty mature and feature complete hardware RAID implementation.
j. andrew rogers
narrow types of workloads that it doesn't do so
well on, but for many normal DBMS loads it scales quite well.
j. andrew rogers
Hi all -
I've got a schema I'm working on modifying, and I need some help getting
the best performance out of it. The original schema has a many-to-many linkage
between a couple of tables, using a two-column linkage table. This is used
to represent groups of people and their relationship to an object
(autho
On Mon, Jul 26, 2004 at 02:27:20AM -0400, Greg Stark wrote:
>
> "Ross J. Reedstrom" <[EMAIL PROTECTED]> writes:
>
> > In the new schema, the same thing is:
> >
> > SELECT * from content where 42 = ANY (authors);
> >
> > Works fine, bu
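As an aside, the usual way to make that kind of array-membership test indexable (using core GIN array support, which may postdate the original thread) is a containment query:

```sql
-- A GIN index on the array column makes containment queries indexable;
-- "authors @> ARRAY[42]" is the indexable spelling of "42 = ANY (authors)".
CREATE INDEX content_authors_gin ON content USING gin (authors);
SELECT * FROM content WHERE authors @> ARRAY[42];
```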
is much less necessary at that point
because, at least for my particular purposes, the rows are mostly
ordered due to how the data was partitioned.
So there are ways to work around CLUSTER, but you'll have to be clever
and it will require tailoring the solution to your particular
requirements.
of heaps in Oracle 8 on heavily used tables
with tens of millions of rows, we frequently got a 10x or better
performance improvement on queries against those tables. It is only
really useful for tables with vast quantities of relatively small rows,
but it can be a lifesaver in those c
o having
locking contention under some access patterns. This is one of those
options that needs to be used knowledgeably; it is not a general
architectural improvement that you would want to apply to every table
all the time.
J. Andrew Rogers
Ron St-Pierre wrote:
We have a web based application with data that is updated daily. The
biggest bottleneck occurs when we try to update
one of the tables. This table contains 58,000 rows and 62 columns, and
EVERY column is indexed.
Have you thought of / tried using 2 separate databases or table
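When every column carries an index, each UPDATE must maintain all 62 of them. A first step (a general technique, not from the quoted thread; the table name is hypothetical) is to find the indexes that never earn their keep:

```sql
-- idx_scan counts how often each index has been used since the stats
-- were last reset; zero-scan indexes on a write-heavy table are pure
-- update overhead and are candidates for dropping.
SELECT indexrelname, idx_scan
FROM pg_stat_user_indexes
WHERE relname = 'the_busy_table'
ORDER BY idx_scan ASC;
```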
y already exist, and managing the partition
hashing and association is the ugly part when rolling your own.
Intercepting DML when necessary and making it behave correctly is
already pretty easy, but could probably be streamlined.
j. andrew rogers
---(end of broadc
ay normal. I'm going to think
about this some more and see if I can't construct a generic solution.
cheers,
j. andrew rogers
d on cache line replacement.
My random thought of the day,
j. andrew rogers
've gotten really bad performance before under postgres, but once I
isolated the reason I've always gotten performance that was comparable
to any other commercial RDBMS on the same hardware.
J. Andrew Rogers
Hi,
according to
http://www.postgresql.org/docs/8.0/interactive/limitations.html ,
concurrent access to GiST indexes isn't possible at the moment. I
haven't read the thesis mentioned there, but I presume that concurrent
read access is also impossible. Is there any workaround for this, esp.
if
Oleg Bartunov wrote:
On Thu, 3 Feb 2005, Marinos J. Yannikos wrote:
concurrent access to GiST indexes isn't possible at the moment. I [...]
there should be no problem with READ access.
OK, thanks everyone (perhaps it would make sense to clarify this in the
manual).
I'm willing t
sub reaper { my $waitedpid = wait; $running--; $SIG{CHLD} = \&reaper; }
$SIG{CHLD} = \&reaper;
for $i (1..$n)
{
    if (fork() > 0) { $running++; }
    else
    {
        my $dbh = DBI->connect('dbi:Pg:host=daedalus;dbname=', 'root', '', {
            AutoCommit =>
Tom Lane wrote:
What's the platform exactly (hardware and OS)?
Hardware: http://www.appro.com/product/server_1142h.asp
- SCSI version, 2 x 146GB 10k rpm disks in software RAID-1
- 32GB RAM
OS: Linux 2.6.10-rc3, x86_64, debian GNU/Linux distribution
- CONFIG_K8_NUMA is currently turned off (no cha
Tom Lane wrote:
You might try the attached patch (which I just applied to HEAD).
It cuts down the number of acquisitions of the BufMgrLock by merging
adjacent bufmgr calls during a GIST index search. [...]
Thanks - I applied it successfully against 8.0.0, but it didn't seem to
have a noticeable e
Tom Lane wrote:
I'm not completely convinced that you're seeing the same thing,
but if you're seeing a whole lot of semops then it could well be.
I'm seeing ~280 semops/second with spinlocks enabled and ~80k
semops/second (> 4 mil. for 100 queries) with --disable-spinlocks, which
increases total
'll grant you that it's a 5.1G tar file, but 7 hours seems excessive.
Is that kind of timeframe 'abnormal' or am I just impatient? :) If the former,
I can provide whatever you need, just ask for it.
Thanks!
--
Douglas J Hunley (doug at hunley.homeip.net) -
le HD drive? What
> are your settings for postgresql?
It wasn't doing anything but the restore. Dedicated DB box
postgresql.conf attached
system specs:
Intel(R) Xeon(TM) CPU 3.40GHz (dual, so shows 4 in Linux)
MemTotal: 8245524 kB
The db resides on a HP Modular Storage Array 500 G2. 4x7
On Tuesday 19 February 2008 13:13:37 Richard Huxton wrote:
> Douglas J Hunley wrote:
> > I spent a whopping seven hours restoring a database late Fri nite for a
> > client. We stopped the application, ran pg_dump -v -Ft -b -o $db >
> > ~/pre_8.3.tar on the 8.2.x db, and the
On Tuesday 19 February 2008 13:22:58 Tom Lane wrote:
> Richard Huxton <[EMAIL PROTECTED]> writes:
> > Douglas J Hunley wrote:
> >> I spent a whopping seven hours restoring a database late Fri nite for a
> >
> > Oh, and have you tweaked the configuration s
On Tuesday 19 February 2008 13:23:23 Jeff Davis wrote:
> On Tue, 2008-02-19 at 13:03 -0500, Douglas J Hunley wrote:
> > I spent a whopping seven hours restoring a database late Fri nite for a
> > client. We stopped the application, ran pg_dump -v -Ft -b -o $db >
> > ~/pre
On Tuesday 19 February 2008 14:28:54 Dave Cramer wrote:
> shared buffers is *way* too small as is effective cache
> set them to 2G/6G respectively.
>
> Dave
pardon my ignorance, but is this in the context of a restore only? or 'in
general'?
--
Douglas J Hunley (dou
On Tuesday 19 February 2008 15:16:42 Dave Cramer wrote:
> On 19-Feb-08, at 2:35 PM, Douglas J Hunley wrote:
> > On Tuesday 19 February 2008 14:28:54 Dave Cramer wrote:
> >> shared buffers is *way* too small as is effective cache
> >> set them to 2G/6G respectively.
>