I'm running a 64-bit build of Postgres 8.3.5 on AIX 5.3, and have a really
strange, annoying transient problem with one particular query stalling.
The symptom here is that when this query is made with X or more records in
a temp table involved in the join (where X is constant when the problem
manifests itself). It's possible something's going wrong in that, but the code's
pretty simple.
Arguably in this case the actual query should run faster than the EXPLAIN
ANALYZE version, since the cache is hot. (Though that'd only likely shave
a few dozen ms off the runtime)
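(For anyone reproducing this: a minimal way to compare the two timings from psql, with stand-in table and column names rather than the real query:

  \timing on
  EXPLAIN ANALYZE SELECT * FROM some_table WHERE some_col = 42;  -- instrumented run
  SELECT * FROM some_table WHERE some_col = 42;                  -- plain run, timed by \timing

The EXPLAIN ANALYZE pass adds per-node instrumentation overhead, which is why the plain run should come out a bit faster when the cache is warm.)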
-Dan
e problems, but
that isn't helping as it shows perfectly sane results. That leaves
abnormal means, and outside of trussing the back end or attaching with dbx
to get a stack trace I just don't have any of those. I'm not even sure
what I should be looking for when I do get a stack trace.
-Dan
Are you joining a lot of tables together? Could be GEQO kicking in.
Only if I get different query plans for the query depending on whether
it's being EXPLAIN ANALYZEd or not. That seems unlikely...
-Dan
The query has seven
tables (one of them a temp table) and my geqo_threshold is set to 12. If
I'm reading the docs right GEQO shouldn't kick in.
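(For the record, an easy way to rule GEQO out entirely for a test run; these are just the standard knobs, nothing specific to this query:

  SHOW geqo_threshold;  -- confirm the current threshold
  SET geqo = off;       -- disable the genetic optimizer for this session, then re-run

If the plan and runtime don't change with geqo off, GEQO isn't the culprit.)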
-Dan
')
and libobject.objinstance = provide_symbol.objinstance
and libinstance.branchid = ?
and provide_symbol.symbolid = temp_symbol.symbolid
and objectinstance.objinstance = libobject.objinstance
and libinstance.istemp = 0
The explain analyze for the query's attached in a (poss
undef, $db->{arch},
$db->{basebranch});
There's no transform of the sql variable between the two statements, just
a quick loop over the returned rows from the explain analyze to print them
out. (I did try to make sure that the debugging bi
The first ? (for architecture) is 1; the second ? (for branchid) is 0.
They both should get passed to Postgres as $1 and $2, respectively,
assuming DBD::Pg does its substitution right. (They're both supposed to go
in as placeholders)
-Dan
over psql.
Fair enough. (And sorry about the misread.) Next time this occurs I'll try
to duplicate it in psql. FWIW, a quick read of the C underlying the
DBD::Pg module shows it using PQexecPrepared, so I'm pretty sure it is
using prepared statements with placeholders, but
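(A minimal way to reproduce what PQexecPrepared does from psql, so the stalled case can be compared side by side with the EXPLAIN ANALYZE case; the statement below is only a stand-in for the real query, with the two placeholders standing in for arch and branchid:

  PREPARE q(int, int) AS
    SELECT * FROM some_table WHERE arch = $1 AND branchid = $2;
  EXPLAIN ANALYZE EXECUTE q(1, 0);
  DEALLOCATE q;

On 8.3 a prepared statement is planned once with generic parameters, so the plan it runs can differ from the plan EXPLAIN ANALYZE shows for the same query written with literal values.)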
s and have not found them to be congruous with my
application.
Why is it choosing a sequential scan for one part of the query when
searching for the words, yet using an index scan for another part of
it? Is there a better way to structure the query to give it better
hints?
I'm using 8.0.1 o
On May 8, 2005, at 6:51 PM, Russell Smith wrote:
On Mon, 9 May 2005 09:20 am, Dan Harris wrote:
You cannot use an index for %CORVETTE%, or %RED%. There is no way
for the index to know if a row had that in the middle without scanning
the whole
index. So it's much cheaper to do a sequence scan.
Can I reach my goal without Tsearch2
by just restructuring my query to narrow down the results by date
first, then seq scan for the 'likes'?
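(Roughly the shape being asked about, as a sketch; recordtext and eventactivity follow the examples later in the thread, and evt_date is a made-up column name. The date predicate can use an index, and the LIKE then only has to scan whatever survives it:

  SELECT recordtext
    FROM eventactivity
   WHERE evt_date >= '2005-05-01'   -- evt_date is a stand-in column
     AND evt_date <  '2005-05-08'
     AND recordtext LIKE '%RED%';
)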
-Dan
'%RED%'))
JOIN eg ON em.incidentid = eg.incidentid)
Yes, it is nasty, but so was my previous query :) So long as this is
faster, I'm ok with that. I'll see if I can make this work. Thank you
very much.
-Dan
ption and the formatting of the log
file and the fact that EVERY query is logged is not what I'm after
for this project. The "infinite-running" queries are unpredictable
and may only happen once a week. Logging 24/7 in anticipation of one
of these o
stats_command_string = true
stats_row_level = true
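(If the goal is just to catch the occasional runaway query rather than log everything, the usual knob is the duration threshold in postgresql.conf; the value here is arbitrary:

  log_min_duration_statement = 60000   # log only statements running longer than 60s (value in ms)

Everything under the threshold stays out of the log, so it can be left on permanently.)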
Thanks in advance,
Dan
So sorry, I forgot to mention I'm running version 8.0.1
Thanks
s
system, same schema but varying sizes, and the small ones perform
very well. It feels like there is an O(n) increase in wait time that
has recently become very noticeable on the largest of them.
-Dan
Next time it comes up, I will be sure to do the explain without analyze.
FYI that update query I mentioned in the initial thread just finished
after updating 8.3 million rows.
-Dan
On Jul 14, 2005, at 12:12 AM, Greg Stark wrote:
Dan Harris <[EMAIL PROTECTED]> writes:
I keep the entire database vacuumed regularly.
How often is "regularly"?
Well, once every day, but there aren't a ton of inserts or updates
going on a daily basis. Maybe 1,000
vity using the
index on incidentid? It seems like that shouldn't be too hard to
make fast but my SQL query skills are only average.
Thanks
-Dan
On Jul 14, 2005, at 9:47 AM, Alvaro Herrera wrote:
On Thu, Jul 14, 2005 at 12:28:05AM -0600, Dan Harris wrote:
. Ext3 must really be crappy
for postgres, or at least is on this box.
Were you using the default journal settings for ext3?
Yes, I was. Next time I get a chance to reboot
On Jul 14, 2005, at 9:42 AM, John A Meinel wrote:
You might try giving it a little bit more freedom with:
EXPLAIN ANALYZE
SELECT recordtext FROM eventactivity, k_r, k_b
WHERE eventactivity.incidentid = k_r.incidentid
AND eventactivity.incidentid = k_b.incidentid
AND k_r.id = 94
AND
On Jul 14, 2005, at 5:12 PM, John A Meinel wrote:
Dan Harris wrote:
Well, postgres is estimating around 500 rows each, is that way
off? Try
just doing:
EXPLAIN ANALYZE SELECT incidentid FROM k_b WHERE id = 107;
EXPLAIN ANALYZE SELECT incidentid FROM k_r WHERE id = 94;
Once again, do
I
can certainly live with these numbers.
Thanks again for your continued efforts.
-Dan
On Jul 14, 2005, at 10:12 PM, John A Meinel wrote:
My biggest question is why the planner thinks the Nested Loop would be
so expensive.
Have you tuned any of the parameters? It seems like something is
out of
whack. (cpu_tuple_cost, random_page_cost, etc...)
here's some of my postgresql.conf:
On Jul 15, 2005, at 9:09 AM, Dan Harris wrote:
On Jul 14, 2005, at 10:12 PM, John A Meinel wrote:
My biggest question is why the planner thinks the Nested Loop
would be
so expensive.
Have you tuned any of the parameters? It seems like something is
out of
whack. (cpu_tuple_cost
lprit.
-Dan
e 10k SCSI RAID 0+1 for a heavy-loaded database?
If not software RAID, how about their dedicated RAID controller blade?
I'm definitely IO bound right now and starving for spindles. Does
this make sense or is it too good to be true?
Thanks
-Dan
Even a small increase per record, just a few percent, compounded over the
whole load could be a significant reduction.
Thanks for any ideas you might have.
-Dan
ut it sounds similar.
-Dan
XFS, but I sure
hope not. How should I go about troubleshooting the "problem"
queries? They don't seem to be specific to a single table or single
database.
I'm running 8.0.1 on kernel 2.6.12-3 on 64-bit Opterons if that
matters..
-Dan
debug from there.
Here are some things to look for:
Is it a large table (and thus large indexes) that it is updating?
Is the query using indexes?
Is the query modifying a LOT of rows?
Another thing to look at is foreign keys. Dan could be running into
problems with an update on one side of an
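(A low-impact way to check several of those points at once, assuming the statement can be wrapped in a transaction; the table and predicate here are made up:

  BEGIN;
  EXPLAIN ANALYZE
    UPDATE big_table SET some_col = some_col + 1 WHERE some_flag = true;
  ROLLBACK;  -- keeps the timing and row counts, discards the changes

Comparing actual and estimated row counts in the output usually points at the problem.)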
On Aug 10, 2005, at 12:49 AM, Steve Poe wrote:
Dan,
Do you mean you did RAID 1 + 0 (RAID 10) or RAID 0 + 1? Just a
clarification, since RAID 0 is still a single point of failure even if
RAID1 is on top of RAID0.
Well, you tell me if I stated incorrectly. There are two raid
enclosures
On Aug 9, 2005, at 3:51 PM, John A Meinel wrote:
Dan Harris wrote:
On Aug 10, 2005, at 12:49 AM, Steve Poe wrote:
Dan,
Do you mean you did RAID 1 + 0 (RAID 10) or RAID 0 + 1? Just a
clarification, since RAID 0 is still a single point of failure
even if
RAID1 is on top of RAID0.
Well
ng "select ... group by ... order by .. limit 1" to get
the min/max since I have already been bit by the issue of min() max()
being slower.
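(For anyone reading this in the archives, the two shapes in question, with a made-up table; before 8.1 only the second form can use an index on the column:

  SELECT min(ts) FROM some_table;                     -- planned as a full scan on old releases
  SELECT ts FROM some_table ORDER BY ts ASC LIMIT 1;  -- can walk an index on ts
)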
-Dan
Thanks for all the great ideas. I have more options to evaluate now.
-Dan
as
noticeable, especially in VACUUM operations.
Specs for the server:
PG 8.0.1
Linux 2.6.12-3 kernel
4xOpteron 2.2
12GB RAM
16-drive RAID 10
XFS mounted with noatime
pg_xlog on separate RAID controller
-Dan
XFS seems to be a trusted choice, followed by Reiser and JFS both
with the occasional controversy when the comparisons pop up.
-Dan
Do you have any sources for that information? I am running dual
SmartArray 6402's in my DL585 and haven't noticed anything poor about
their performance.
On Sep 1, 2005, at 2:24 PM, Luke Lonergan wrote:
Are you using the built-in HP SmartArray RAID/SCSI controllers? If
so, that
could be
that runs plain old top. I have
customized my settings enough that I can pretty much see anything I
need to from there.
-Dan
bus to provide power to the device, not for I/O
b) It is limited to SATA bandwidth
c) The benchmarks did not prove it to be noticeably faster than a
good single SATA drive
A few of us were really excited at first too, until seeing the
benchmarks..
-Dan
last article I read on
this. There may be more interest now..
-Dan
2.6.18 fairly recently, I am *very* interested in
what caused the throughput to drop in 2.6.18? I haven't done any
benchmarking on my system to know if it affected my usage pattern
negatively, but I am curious if anyone knows why this happened?
-Dan
key piece of knowledge is escaping me on this.
I don't expect someone to write this for me, I just need a nudge in the
right direction and maybe a URL or two to get me started.
Thank you for reading this far.
-Dan
Thank you all for your ideas. I appreciate the quick response.
-Dan
idea for obvious security reasons...
So far, that's all I can come up with, other than a dedicated socket
server daemon on the DB machine to do it.
Any creative ideas are welcomed :)
Thanks
-Dan
Dan Harris wrote:
I've found that it would be helpful to be able to tell how busy my
dedicated PG server is ( Linux 2.6 kernel, v8.0.3 currently ) before
pounding it with some OLAP-type queries.
..snip
Thank you all for your great ideas! I'm going to try the perl function
as
d then do the seq scan for the LIKE condition. Instead, it seems that it's seqscanning the
whole 70 million rows first and then doing the join, which takes a lot longer than I'd like to wait for it. Or, maybe I'm
misreading the explain output?
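(The shape being described, as a sketch with made-up names: filter the small side first, join on the result, and only then apply the LIKE to the survivors:

  SELECT b.body_text
    FROM (SELECT id FROM small_table WHERE category = 42) s
    JOIN big_table b ON b.small_id = s.id
   WHERE b.body_text LIKE '%keyword%';

Whether the planner actually honors that ordering still depends on its row estimates, but it makes the intended join order explicit.)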
Thanks again
-Dan
run.
I have been able to do this with tables, using a helpful view posted to this
list a few months back, but I'm not sure if I can get the same results on indexes.
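(The per-index statistics view makes this straightforward, assuming the stats collector is enabled; idx_scan counts how many times each index has been used since the counters were last reset:

  SELECT schemaname, relname, indexrelname, idx_scan
    FROM pg_stat_user_indexes
   ORDER BY idx_scan;   -- indexes stuck at 0 are candidates for dropping
)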
Thanks
-Dan
We have a table which we want to normalize and use the same SQL to
perform selects using a view.
The old table had 3 columns in its index
(region_id,wx_element,valid_time).
The new table meteocode_elmts has a similar index but the region_id is a
reference to another table region_lookup and wx_e
3.2.3-20)
(1 row)
We used the rpm source from postgresql-7.4-0.5PGDG.
You make it sound so easy. Our database size is at 308 GB. We actually
have 8.2.3 running and would like to transfer in the future. We have to
investigate the best way to do it.
Dan.
In closing, I am not bashing PG! I love it and swear by it. These comments are
purely from an advocacy perspective. I'd love to see the PG user base continue to grow.
My .02
-Dan
Bill Moran wrote:
In response to Dan Harris <[EMAIL PROTECTED]>:
Why does the user need to manually track max_fsm_pages and max_fsm_relations? I
bet there are many users who have never taken the time to understand what this
means and are wondering why performance still stinks after vac
Orhan Aglagul wrote:
Hi Everybody,
I was trying to see how many inserts per seconds my application could
handle on various machines.
I read that postgres does have issues with MP Xeon (costly context
switching). But I still think that with fsync=on 65 seconds is ridiculous.
CPU is unlikel
about. This should be a
cleaner solution for you.
-Dan
select count(*) will *always* do a sequential scan, due to the MVCC
architecture. See archives for much discussion about this.
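(If an approximate number is good enough, the planner's estimate is available without scanning anything; it is only as fresh as the last VACUUM/ANALYZE, and 'some_table' is a placeholder:

  SELECT reltuples::bigint AS approx_rows
    FROM pg_class
   WHERE relname = 'some_table';
)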
-Dan
Andrew Sullivan wrote:
On Thu, Jun 07, 2007 at 03:26:56PM -0600, Dan Harris wrote:
They don't always have to be in a single transaction; breaking it up and
vacuuming in between is a good idea, and I'll consider that. Thanks
If you can do it this way, it helps _a lot_. I've
Some of our databases are doing about 250,000 commits/min.
Best Regards,
Dan Gorman
level) which a lot of storage vendors provide,
the backup data
can be corrupted as Dan said. During recovery we can't even
read it,
especially if meta-data was corrupted.
I can't see any explanation for how this could happen, other
than your hardware vendor is lying about snapsh
This snapshot is done at the LUN (filer) level, postgres is un-aware
we're creating a backup, so I'm not sure how pg_start_backup() plays
into this ...
Regards,
Dan Gorman
On Jun 22, 2007, at 3:55 AM, Simon Riggs wrote:
On Fri, 2007-06-22 at 11:30 +0900, Toru SHIMOGAKI wrote:
Ah okay. I understand now. So how can I signal postgres that I'm about to
take a backup? (read the doc from the previous email?)
Regards,
Dan Gorman
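(For reference, the call being pointed to is presumably the 8.x backup-mode pair, run around the filer snapshot; the label is arbitrary:

  SELECT pg_start_backup('filer-snapshot');
  -- take the LUN snapshot here
  SELECT pg_stop_backup();

With a truly atomic snapshot of the whole volume this isn't strictly required, but it is the documented way to tell the server a base backup is in progress.)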
On Jun 22, 2007, at 4:38 AM, Simon Riggs wrote:
On Fri, 2007-06-22 at 04:10 -0700, Dan Gorman wrote:
This snapshot is done at the LUN (filer)
It's the latter, a snapshot of the durable state of the storage
system (i.e. it will never be corrupted).
Regards,
Dan Gorman
On Jun 22, 2007, at 11:02 AM, Tom Lane wrote:
"Simon Riggs" <[EMAIL PROTECTED]> writes:
On Fri, 2007-06-22 at 13:12 -0400, Tom Lane wrote:
If
I took several snapshots. In all cases the FS was fine. In one case
the db looked like on recovery it thought there were outstanding
pages to be written to disk as seen below and the db wouldn't start.
Jun 21 00:39:43 sfmedstorageha001 postgres[3506]: [9-1] 2007-06-21
00:39:43 PDTLOG: redo
Greg,
PG 8.2.4
Regards,
Dan Gorman
On Jun 25, 2007, at 9:02 AM, Gregory Stark wrote:
"Dan Gorman" <[EMAIL PROTECTED]> writes:
I took several snapshots. In all cases the FS was fine. In one
case the db
looked like on recovery it thought there were outstanding pages
to be written to disk.
If you guys would like me to try to 'break' it again and keep the db
around for further testing let me know.
Regards,
Dan Gorman
On Jun 25, 2007, at 9:34 AM, Tom Lane wrote:
Dan Gorman <[EMAIL PROTECTED]> writes:
Jun 21 00:39:43 sfmedstorageha001 postgres[3506]: [9-1] 2007-06
No, however, I will attach the postgresql.conf so everyone can look at
other settings just in case.
Regards,
Dan Gorman
On Jun 25, 2007, at 10:07 AM, Gregory Stark wrote:
"Simon Riggs" <[EMAIL PROTECTED]> writes:
WARNING: page 2
doesn't really make that big of a difference.
My recommendation: each database gets its own aggregate unless the
IO footprint is very low.
Let me know if you need more details.
Regards,
Dan Gorman
On Jul 11, 2007, at 6:03 AM, Dave Cramer wrote:
Assuming we have 24 73G drives is it bett
thomas
I'd say that "it depends". We run an OLAP workload on 350+ gigs of database on
a system with 64GB of RAM. I can tell you for certain that fetching non-cached
data is very sensitive to disk throughput!
Different types of workloads will find different bottlenecks in the
there is no index on word (there should be!). Would this have caused the problem?
This is 8.0.12
Linux sunrise 2.6.15-26-amd64-server #1 SMP Fri Sep 8 20:33:15 UTC 2006 x86_64
GNU/Linux
Any idea what might have set it into this loop?
-Dan
Tom Lane wrote:
Dan Harris <[EMAIL PROTECTED]> writes:
Here's the strace summary as run for a few second sample:
% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 97.25    0.671629           9
Have you confirmed via explain (or explain analyse) that the index is
being used?
> So I'm asking myself whether it is worth upgrading to the current 8.2 version
> and whether we could see a performance improvement just by upgrading.
There are other benefits from upgrading, but you may be able to
Kari Lavikka wrote:
Hello!
Some background info.. We have a blog table that contains about eight
million blog entries. Average length of an entry is 1200 letters.
Because each 8k page can accommodate only a few entries, every query
that involves several entries causes several random seeks to
My PG server came to a screeching halt yesterday. Looking at top, I saw a very
large number of "startup waiting" tasks. A pg_dump was running and one of my
scripts had issued a CREATE DATABASE command. It looks like the CREATE DATABASE
was exclusive but was having to wait for the pg_dump to fin
e more to that original table. What about triggers?
rules? Perhaps there are other things going on in the background.
--
Dan Langille - http://www.langille.org/
Available for hire: http://www.freebsddiary.org/dan_langille.php
On 3/22/10 4:36 PM, Carlo Stonebanks wrote:
Here we go again!
Can anyone see any obvious faults?
Carlo
maintenance_work_mem = 256MB
I'm not sure how large your individual tables are, but you might want to
bump this value up to get faster vacuums.
max_fsm_relations = 1000
I think this will d
The other major bottleneck they ran into was a kernel one: reading from
the heap file requires a couple lseek operations, and Linux acquires a
mutex on the inode to do that. The proper place to fix this is
certainly in the kernel but it may be possible to work around in
Postgres.
Dan
--
Dan R. K
Consider using a connection pooler
like pgpool to reduce your connection memory overhead.
-Dan
tremendously.
We just had a corrupt table caused by an XFS online defrag. I'm scared
to use this again while the db is live. Has anyone else found this to
be safe? But I can vouch for the fragmentation issue; it happens very
quickly in our system.
-Dan
On 10/12/10 10:44 AM, Scott Carey wrote:
On Oct 12, 2010, at 8:39 AM, Dan Harris wrote:
On 10/11/10 8:02 PM, Scott Carey wrote:
would give you a 1MB read-ahead. Also, consider XFS and its built-in
defragmentation. I have found that a longer lived postgres DB will get extreme
file
On 10/12/10 4:33 PM, Neil Whelchel wrote:
On Tuesday 12 October 2010 08:39:19 Dan Harris wrote:
On 10/11/10 8:02 PM, Scott Carey wrote:
would give you a 1MB read-ahead. Also, consider XFS and its built-in
defragmentation. I have found that a longer lived postgres DB will get
extreme file
tion. (We actually
tried Tokyo Cabinet and found it to perform quite well. However it does not measure up to Postgres in terms of replication, data
interrogation, community support, acceptance, etc).
Thanks
Dan Schaffer
Paul Hamer
Nick Matheson
Hi,
My name is Dan and I'm a co-worker of Nick Matheson, who initially submitted this question (because the mail group had me blacklisted
for a while for some reason).
Thank you for all of the suggestions. We were able to improve our bulk read performance from 3 MB/s to 60 MB/s (assumin
eplication was keeping up, which would be
monitored).
-Dan
Thank you everybody for the detailed answers, the help is well appreciated.
A couple of follow-up questions:
- Is the supercap + flash memory considered superior to the BBU in practice?
Is that type of system well tested?
- Is the linux support of the LSI and Adaptec cards comparable?
-Dan
On
On 3/4/11 11:03 AM, Wayne Conrad wrote:
On 03/04/11 10:34, Glyn Astill wrote:
> I'm wondering (and this may be a can of worms) what people's opinions
are on these schedulers?
When testing our new DB box just last month, we saw a big improvement
in bonnie++ random I/O rates when using the noop
13 rows=444613 width=113)
One general question: does the width of the tables (i.e. the number
of columns not being joined and the size of those fields) matter? The
tables do have a lot of extra columns that I could slice out.
Thanks so much!
Dan
System:
client: pgadmin III, Mac OS
ry.
And you're right fork, Record Linkage is in fact an entire academic
discipline! I had no idea, this is fascinating and helpful:
http://en.wikipedia.org/wiki/Record_linkage
Thanks so much!
Dan
ting about 40-50MB/s on a PV with 14 disks on a RAID10 in
real world usage. (random IO and fully saturating a Dell 1850 with 4
concurrent threads (to peg the cpu on selects) and raw data files)
Best Regards,
Dan Gorman
On Feb 24, 2006, at 4:29 PM, Mark Kirkwood wrote:
Luke Lonergan wrote:
I'
em to drag it down
significantly.
Is it because there are such slight differences between the records,
since they are all within a few hundredths of a degree of each other?
Thanks for your time and ideas.
-Dan
Dan Harris wrote:
explain analyze
doh.. sorry to reply to my own post. But I messed up copying some
of the fields into the select statement that you'll see in the "Sort
Key" section of the analyze results. The mistake was mine. Everything
else is "normal"
500 rows matched. I'll look into
GIST indexes, thanks for the feedback.
-Dan
Dan Harris wrote:
Markus Bertheau wrote:
Have you tried using a GIST index on lat & long? These things are
meant for two-dimensional data, whereas btree doesn't handle
two-dimensional data that well. How many rows satisfy either of the
long / lat condition?
According to th
Merlin Moncure wrote:
As others will probably mention, effective queries on lot/long which
is a spatial problem will require r-tree or gist. I don't have a lot
of experience with exotic indexes but this may be the way to go.
One easy optimization to consider is to make an index on either lat or long.
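(A minimal version of that suggestion, with made-up names; a plain btree on one of the two columns lets the planner narrow to a latitude band first and filter the rest:

  CREATE INDEX points_lat_idx ON points (lat);

  SELECT *
    FROM points
   WHERE lat  BETWEEN 39.001 AND 39.011
     AND long BETWEEN -104.995 AND -104.985;
)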
I have a table with 1 live row that I found has 115000 dead rows in it (
from a testing run ). I'm trying to VACUUM FULL the table and it has
run for over 18 hours without completion. Considering the hardware on
this box and the fact that performance seems reasonable in all other
aspects, I'm
before joining the
small ones, resulting in a huge amount of disk I/O. How would I make
this query join the large table only after narrowing down the possible
selections from the smaller tables? This is running on version 8.0.3.
Thanks for any ideas.
-Dan
QUERY
##
Tom Lane wrote:
That's very strange --- the estimated cost of the seqscan is high enough
that the planner should have chosen a nestloop with inner indexscan on
the big table. I'm not sure about the join-order point, but the hash
plan for the first join seems wrong in any case.
Um, you do have a
e table, if that matters.
Thanks again,
Dan
r off the first time? This table
has been ANALYZED regularly ever since creation.
Once again, thank you and all of the developers for your hard work on
PostgreSQL. This is by far the most pleasant management experience of
any database I've worked on.
-Dan
and I would stay away from iscsi.
Regards,
Dan Gorman
On Apr 26, 2006, at 7:35 PM, Steve Wampler wrote:
On Wed, Apr 26, 2006 at 10:06:58PM -0400, Ketema Harris wrote:
I was wondering if there were any performance issues with having a
data
directory that was an nfs mounted drive? Say like a
So do NAS's
Dan
On Apr 27, 2006, at 6:42 AM, Ketema Harris wrote:
The SAN has the snapshot capability.
On 4/27/06 9:31 AM, "Bruno Wolff III" <[EMAIL PROTECTED]> wrote:
On Thu, Apr 27, 2006 at 09:06:48 -0400,
Ketema Harris <[EMAIL PROTECTED]> wrote:
Yes, your