help?
TIA
Peter Mcgregor
when it's not actually
committed.
My question is, if the power goes off, and the drives stop, how does the
battery backed cache save things out to the dead drives? Is there another
component that is implied that will provide power to the drives that I
should be looking into as well?
Tha
data, admittedly increasing it by 1/3, but
it does at least convert it to text, which means that it's more
understandable. Base64 is also pretty standard, being what's used in
e-mails for MIME attachments.
Peter
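As a rough illustration of that size overhead (the 3000-byte dummy value is invented for the example), the comparison can be run straight from psql:
-- base64 output is roughly 4/3 the size of the raw bytes,
-- plus a newline every 76 output characters
select octet_length(raw) as raw_bytes,
       octet_length(encode(raw, 'base64')) as base64_bytes
from (select decode(repeat('00', 3000), 'hex') as raw) as t;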
't think of any schema/index changes that
would help, since everything looks pretty optimal from the 'explain
analyze' output. I'd like to get a 10x improvement when querying from
the 'cold' state.
Thanks for any assistance. The advice from reading this list to
On Jan 8, 2006, at 1:42 PM, Luke Lonergan wrote:
Have you tested the underlying filesystem for its performance?
Run this:
time bash -c 'dd if=/dev/zero of=/my_file_system/bigfile bs=8k
count= && sync'
This is a 2-disk RAID0
[EMAIL PROTECTED] /opt/alt-2]# time bash -c 'dd if=/dev/zero of=
ect the
table to grow to this size regularly; otherwise vacuum full will take
ages.
Peter.
r than distinct. I
always thought distinct was just shorthand for "group by the same columns
as I've just selected".
Is it actually in the SQL spec to sort in a distinct, or could we just
get the parser to rewrite distinct into group by and hence remove the
ou can say what it might be rather than what it is, the
index is more likely to be used.
Peter.
A related question:
Is it sufficient to disable write cache only on the disk where pg_xlog
is located? Or should write cache be disabled on both disks?
Thanks
Peter
On 2/25/07, Tom Lane <[EMAIL PROTECTED]> wrote:
Carlos Moreno <[EMAIL PROTECTED]> writes:
> The question is: does
actually
recover space you need to either run vacuum full or cluster.
This ought to be in the manual somewhere, as this question gets asked
about once a week.
Peter.
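A minimal sketch of the two options just mentioned, with a hypothetical table and index name (the CLUSTER syntax shown is the one used by the 8.2-era releases discussed here):
-- plain vacuum only marks dead rows as reusable; to give space back to the OS:
vacuum full my_table;
-- or rewrite the table in index order:
cluster my_table_pkey on my_table;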
On 2/26/07, Jeff Davis <[EMAIL PROTECTED]> wrote:
On Sun, 2007-02-25 at 23:11 +0100, Peter Kovacs wrote:
> A related question:
> Is it sufficient to disable write cache only on the disk where pg_xlog
> is located? Or should write cache be disabled on both disks?
>
When
hine on a UPS + 1 hot standby internal PS is
equivalent, in terms of data integrity, to using battery backed write
cache. Instinctively, I'd think that UPS + 1 hot standby internal PS
is better, since this setup also provides for the disk to actually
write out the content
ed to be doing that would increase
performance above a "seek once per matching row" plan? I haven't been
able to Google my way to what the intended benefit is of a heap scan
vs. a plain index scan.
--
/ Peter Schuller
PGP userID: 0xE9758B7D or 'Peter Schuller <[EMAIL PROTECTED]
ful in some
other context down the road.
--
/ Peter Schuller
PGP userID: 0xE9758B7D or 'Peter Schuller <[EMAIL PROTECTED]>'
Key retrieval: Send an E-Mail to [EMAIL PROTECTED]
E-Mail: [EMAIL PROTECTED] Web: http://www.scode.org
he index for
a few tens of thousands of entries sounds a bit excessive. Or does it
not? Because at that level, the CPU bound period alone is approaching
the time it would take to seek for each entry instead. But then I
presume the amount of work is similar/the same for the other case,
except it's
rspace vs. kernelspace CPU
concerns (since obviously the data being worked on is in RAM). Or am I
missing something?
It is worth noting that the SELECT of fewer entries is entirely disk
bound; there is almost no CPU usage whatsoever. Even taking the
cumulative CPU usage into account (gut feeli
This may be a silly question, but: will not 3 times as many disk drives
mean a 3 times higher probability of disk failure? Also, rumor has it
that SATA drives are more prone to fail than SCSI drives. More
failures will result, in turn, in more administration costs.
Thanks
Peter
On 4/4/07, [EMAIL
But if an individual disk fails in a disk array, sooner rather than later you
would want to purchase a suitable new disk, walk/drive to the location
of the disk array, replace the broken disk in the array and activate
the new disk. Is this correct?
Thanks
Peter
On 4/4/07, Alvaro Herrera <[EM
I explicitly did not want to turn it up so
that I could benchmark the raw performance of disk I/O, rather than
having things be cached in memory more than they would already be. But
apparently it had other side-effects I did not consider.
Thanks again,
--
/ Peter Schuller
PGP userID: 0xE9758B7D or
ation.
Most likely, all of this database is cached, so I tried reducing
seq_page_cost and random_page_cost, but I needed to turn them all the way
down to 0.02 or 0.03, which is almost like cpu_tuple_cost. Is that
reasonable? Or what is wrong here?
PostgreSQL 8.2.1 on x86_64-unknown-li
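The sort of experiment being described can be done per session; the cost values and the query below are purely illustrative:
-- defaults are seq_page_cost = 1.0 and random_page_cost = 4.0
set seq_page_cost = 0.1;
set random_page_cost = 0.1;
explain analyze select * from some_table where id = 42;  -- hypothetical query
reset seq_page_cost;
reset random_page_cost;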
effects are much more significant than the
model takes into account.
--
Peter Eisentraut
http://developer.postgresql.org/~petere/
n all
of them have the same predicate and none of them has an indexed
expression that appears in the query?
There are more partial indexes with the same predicate, but it appears
to always use three. (The two "dummy" indexes are just leftovers from
these experiments.)
--
Peter Eisentraut
ht
On Wednesday, 9 May 2007 at 16:29, Alvaro Herrera wrote:
> Peter Eisentraut wrote:
> > There's another odd thing about this plan from yesterday.
>
> Is this still 8.2.1? The logic to choose bitmap indexes was rewritten
> just before 8.2.4,
OK, upgrading to 8.2.4 fixes thi
Cond: ((email.email_id = eh_subj.email_id) AND (0 =
eh_subj.mime_part_id))
Filter: (header_name = 'subject'::text)
Total runtime: 5161.390 ms
> Are you using any nondefault planner settings?
random_page_cost = 3
effective_cache_size = 384MB
> How big are th
o go that far.
Heh, when I decrease these parameters, the hash join gets cheaper as well. I
can't actually get it to pick the nested-loop join.
--
Peter Eisentraut
http://developer.postgresql.org/~petere/
ks cache space is oversubscribed nearly 3X when in reality
> the database is fully cached.
I should add that other, similar queries in this database that do not involve
joining the same table twice produce seemingly optimal plans. (It picks hash
joins which are actually faster than neste
ible I could get all the performance
benefit without the code complexity, and with no penalty (because in
this case persistence is not important).
--
/ Peter Schuller
PGP userID: 0xE9758B7D or 'Peter Schuller <[EMAIL PROTECTED]>'
Key retrieval: Send an E-Mail to [EMAIL PROTECTED]
s more efficient than update because update is always a delete
followed by an insert.
Oh, and group by is nearly always quicker than distinct and can (always?) be
rewritten as such. I'm not 100% sure why it's different, but it is.
Peter.
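The rewrite being referred to looks like this for a hypothetical table t(a, b); both forms return the same rows:
select distinct a, b from t;
select a, b from t group by a, b;
-- the group by form can use a HashAggregate plan, which is the usual reason
-- it was faster than distinct in the releases discussed here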
I have some PL/pgSQL code in a stored procedure like
FOR r
ve the RC script will cause the message to be
printed interactively at the console too, if you run it. (Assuming you
are using it installed from ports).
--
/ Peter Schuller
PGP userID: 0xE9758B7D or 'Peter Schuller <[EMAIL PROTECTED]>'
Key retrieval: Send an E-Mail to [EMAIL PROTECTED]
ou had RAID 1 with 3 drives with some bit errors, at
least you can take a vote on what's right. Whereas if you only have 2 and
they disagree, how do you know which is right other than to pick one and hope...
But whatever, it will be slower to keep in sync on a heavy-write system.
Peter.
er source for the list of
> distinct
> ids you're interested in than my_table. If you have a source that just has
> one
> record for each id then you won't need an extra step to eliminate
> duplicates.
>
>
My personal reaction is why are you using distinct at all?
why not
select id,
min(the_date) as min_date,
max(the_date) as max_date
from my_table group by id;
Since 8.0 (or was it earlier?) this will use an index, should a reasonable one
exist.
Peter.
hought oh, just one of those things, but if they
can be reduced by changing a few config variables that would be great. I'm
just trying to work out what figures are worth trying to see if I can reduce
them.
From time to time I get commits that take 6 or 7 seconds but not all the
time.
I'm currently working with the defaults.
Peter Childs
On 14/09/2007, Peter Childs <[EMAIL PROTECTED]> wrote:
>
>
>
> On 13/09/2007, Greg Smith <[EMAIL PROTECTED]> wrote:
> >
> >
> > Every time the all scan writes a buffer that is frequently used, that
> > write has a good chance that it was wasted bec
cking mechanisms on shared file systems don't play nice
with databases. You're better off using something else to load balance
or replicate data.
Peter
P.S. Why not just set up those servers you're planning on using as NFS
shares as your postgres server(s)?
> Anyway... One detail I don't understand --- why do you claim that
> "You can't take advantage of the shared file system because you can't
> share tablespaces among clusters or servers" ???
I say that because you can't set up two servers to point to the same
tablespace (i.e. you can't have serve
ve 2 questions:
- Is there a better query for this purpose? Mine works when coalesced,
but it seems a little brute-force and the random() sorting, while
kinda nice, is slow.
- Is this in any way expected? I know that nulls sometimes cause
problems, but why is it taking forever even when trying to f
ess the solution is either
to cast the column or wait for 8.3 (which isn't a problem since the
port won't be done until 8.3 is released anyway).
Thanks again.
Peter
R when you want it for reasons other than
perfect order ("mostly sorted").
Opinions/thoughts?
--
/ Peter Schuller
PGP userID: 0xE9758B7D or 'Peter Schuller <[EMAIL PROTECTED]>'
Key retrieval: Send an E-Mail to [EMAIL PROTECTED]
E-Mail: [EMAIL PROTECTED] Web: http://www.scode.org
ms of each other no matter what
limit and ordering constraints you put on the queries.
Peter
=> explain analyze select a.uid from generate_series(1000, 32767) as
a(uid) where a.uid not in (select coalesce(uid, 0) from people);
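A rewrite that is often suggested for this kind of anti-join sidesteps both the NULL problem and the coalesce() workaround (same table and column names as above):
explain analyze select a.uid from generate_series(1000, 32767) as
a(uid) where not exists (select 1 from people p where p.uid = a.uid);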
hould be set to.
Go here for full details:
http://www.postgresql.org/docs/8.2/static/runtime-config.html, especially
http://www.postgresql.org/docs/8.2/static/runtime-config-resource.html
Peter
On 10/22/07, Lee Keel <[EMAIL PROTECTED]> wrote:
>
>
>
> I have a client server that
> Intel(R) Pentium(R) D CPU 2.80GHz
> 2 GB RAM
> Storage is local disk.
>
> Postgresql parameters (what I felt are relevant) -
> max_connections = 100
> shared_buffers = 64MB
> work_mem = 16MB
> maintenance_work_mem = 16MB
> synchronous_commit on
Do the big tables have
On Thu, 2010-06-17 at 18:28 -0400, Brant Fitzsimmons wrote:
> Performance has dropped through the floor after converting my db from
> ASCI to UTF8.
Converting from ASCII to UTF8 is a noop.
If you did some configuration changes, you need to tell us which.
fit in the current work_mem, and choose a
low-memory plan instead?
Excuse the long-winded post; I was trying to give the facts and nothing but
the facts.
Thanks,
Peter Hussey
LabKey Software
sis and have it
work well for all queries.
thanks
Peter
On Fri, Jul 30, 2010 at 7:03 AM, Tom Lane wrote:
> Peter Hussey writes:
> > Using the default of 1MB work_mem, the planner chooses a hash join plan :
> > "Hash Left Join (cost=252641.82..11847353.87 rows=971572 width
On Jan 9, 2006, at 2:01 PM, Luke Lonergan wrote:
Peter,
On 1/9/06 9:23 AM, "peter royal" <[EMAIL PROTECTED]> wrote:
This is a 2-disk RAID0
Your 2-disk results look fine - what about your 8-disk results?
after some further research the 2-disk RAID0 numbers are not bad.
cess.
Hmm, but then you would have to include Access's vacuum too. I think you
will find "Tools -> Database Utils -> Compact Database" performs
a similar purpose and is just as important, as I've seen many Access
databases bloat in my time.
Peter Childs
ts are: 4 page slots, 1000 relations, using 299 KB.
if the required page slots (9760 in my case) go above the current
limit (4 in my case) you will need to do a vacuum full to reclaim
the free space (a cluster of the relevant tables may work).
If you run vacuum verbose regularly you can check that you are vacuuming
often enough and that your free space map is big enough to hold your
free space.
Peter Childs
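For the pre-8.4 releases being discussed (the free space map became automatic in 8.4), that check amounts to comparing the configured limits against what vacuum reports:
show max_fsm_pages;
show max_fsm_relations;
vacuum verbose;   -- the last lines report how many page slots are actually needed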
so it's not just a case of a
temporary I/O bottleneck, say.
(PostgreSQL 8.0.3)
--
Peter Eisentraut
http://developer.postgresql.org/~petere/
On Thursday, 30 March 2006 at 14:02, Steinar H. Gunderson wrote:
> On Thu, Mar 30, 2006 at 01:59:10PM +0200, Peter Eisentraut wrote:
> > EXPLAIN ANALYZE select activity_id from activity where state in (1,
> > 10001) order by activity_id limit 100;
> >
> > QUERY PLA
On Thursday, 30 March 2006 at 14:06, Michael Stone wrote:
> On Thu, Mar 30, 2006 at 01:59:10PM +0200, Peter Eisentraut wrote:
> >The table has seen VACUUM FULL and REINDEX before this.
>
> But no analyze?
ANALYZE as well, but the plan choice is not the point anyway.
--
Peter
ds to scan all
> those records before it finds 100 it can output. This is the “startup
> cost” you're seeing.
The startup cost is the cost until the plan is set up to start outputting
rows. It is not the time until the first row is found.
--
Peter Eisentraut
http://develop
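For reference, each node in EXPLAIN output carries two cost figures written as "startup..total"; the plan line below is invented, only the shape matters:
explain select activity_id from activity
where state in (1, 10001) order by activity_id limit 100;
-- Limit  (cost=11453.94..11454.19 rows=100 width=4)
-- the first figure is the estimated cost incurred before the node can begin
-- returning rows; the second is the estimated cost to return them all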
poor startup time.
So from when to when is the startup time (the "x" in "x..y") actually
measured? When does the clock start ticking and when does it stop?
That is what's confusing me.
--
Peter Eisentraut
http://developer.postgresql.org/~petere
I've recently configured a new high-performance database server:
2xXeon 3.4G, 2G RAM, 4x15K SCSI disks in RAID 10, h/w RAID
This has been live for a couple of weeks.
The box is running Fedora Core 4.
The only thing running on this box is PostgreSQL 8.1.4 and some stub
applications that
nothing to do with performance, I gather, so it might be more
appropriate for the pgsql-sql list.
--
Peter Eisentraut
http://developer.postgresql.org/~petere/
ies.
--
Peter Eisentraut
http://developer.postgresql.org/~petere/
rovement, but it's still
an
order of magnitude slower than MySQL.
Can anyone suggest why PostgreSQL performs the original query so much slower
than even BDE?
--
Peter Hardman
Acre Cottage, Horsebridge
King's Somborne
Stockbridge
SO20 6PT
== Breeder of Shetland Cattl
On 16 Aug 2006 at 20:02, Arjen van der Meijden wrote:
> On 16-8-2006 18:48, Peter Hardman wrote:
> > Using identically structured tables and the same primary key, if I run this
> > on
> > Paradox/BDE it takes about 120ms, on MySQL (5.0.24, local server) about
> > 3
ne sheep if several were bought or registered on
the same day.
Forgive me for being verbose - I want to make sure I understand it properly
myself!
regards,
--
Peter Hardman
Acre Cottage, Horsebridge
King's Somborne
Stockbridge
SO20 6PT
== Breeder of Shetland Cattle and Shetland She
On 16 Aug 2006 at 18:51, Tom Lane wrote:
> "Peter Hardman" <[EMAIL PROTECTED]> writes:
> > I'm in the process of migrating a Paradox 7/BDE 5.01 database from
> > single-user
Arjen van der Meijden has proposed a very elegant query in another post.
> Wh
On 17 Aug 2006 at 12:11, Markus Schaber wrote:
> Hi, Peter,
>
> Peter Hardman wrote:
>
> >> BTW, are you *sure* you are testing PG 8.1? The "Subquery Scan f2" plan
> >> node looks unnecessary to me, and I'd have expected 8.1 to drop it out.
>
On 16 Aug 2006 at 17:48, Peter Hardman wrote:
> I'm in the process of migrating a Paradox 7/BDE 5.01 database from
> single-user
> Paradox to a web based interface to either MySQL or PostgreSQL.
I've uploaded my data to www.shetland-sheep.org.uk/pgdata/sheep-flock.zip
ator).
As someone else suggested, these fields ought really to be CHAR, not VARCHAR.
I chose VARCHAR because the data mostly is shorter than the maximum lengths
(although probably not enough to matter). I'd not really got into the
subtleties of
different behaviour of CHAR and VARCHAR.
>
On 17 Aug 2006 at 20:58, Peter Hardman wrote:
>
>
> On 16 Aug 2006 at 17:48, Peter Hardman wrote:
>
> > I'm in the process of migrating a Paradox 7/BDE 5.01 database from single-user
> > Paradox to a web based interface to either MySQL or PostgreSQL.
>
e significant in small
record tables and can cause trouble even with a small number of
records. Hence I've got a table that's static and freshly "vacuum full"-ed
which reads with a bloat of 4.
Easy to recreate the problem too:
Create table regionpostcode (area varchar(4), regionid int);
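A hypothetical continuation of that demonstration (not the original poster's actual steps) is to load some rows, update them all, and then see what a plain vacuum leaves behind:
insert into regionpostcode
  select 'AB' || (i % 10)::text, i from generate_series(1, 1000) as g(i);
update regionpostcode set regionid = regionid + 1;  -- leaves 1000 dead row versions
vacuum verbose regionpostcode;
select relpages, reltuples from pg_class where relname = 'regionpostcode';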
f I have 5 columns in a view but
only select 1 column, is the system smart enough to not calculate the unused
columns, or am I taking a performance hit over a smaller view that doesn't
have the extra 4 columns?
Thanks,
Peter Darley
ated above.
The whole database has been vacuum analysed right before the explain
analyse output has been captured.
What can I do to reduce the time this query takes? And why is the
function
executed although there is no row in the result set of t1 in my
rephrased
query?
TIA, peter
Ps
but
> > you can fsync less often and thus be faster" (with a possibility of
> > setting that on a per-table basis) ?
Then it would be "ACI" compliant.
--
Peter Eisentraut
http://developer.postgresql.org/~petere/
All,
Well, you should still escape any strings you're getting from a web
page so you can ensure you're not subject to a SQL injection attack,
even if you're expecting integers.
Thanks,
Peter Darley
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] B
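When the escaping has to happen inside the database itself (say, when building dynamic SQL in a PL/pgSQL function), quote_literal() does the quoting; the input value here is an invented example:
select quote_literal('1; drop table users; --');
-- returns the whole string as one harmless quoted literal
-- in PL/pgSQL: execute 'select ... where name = ' || quote_literal(user_input);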
Tatsuo,
What would happen with SELECT queries that, through a function or some
other mechanism, update data in the database? Would those need to be
passed to pgpool in some special way?
Thanks,
Peter Darley
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED
servers, at which
point we won't even be getting requests from the same machine, much less the
same connection.
Thanks,
Peter Darley
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] Behalf Of Josh Berkus
Sent: Monday, January 24, 2005 3:46 PM
To: Ragnar Hafsta
ctual
time=0.033..0.040 rows=2 loops=1)
Filter: ((user_id IS NOT NULL) AND ((subplan) IS NULL))
SubPlan
-> Seq Scan on assignment_settings (cost=0.00..0.00 rows=1
width=13) (actual time=0.001..0.001 rows=0 loops=2)
Filter: (((setti
s likely to be faster than a distinct, though I
can't really recall where I got that idea from.
Thanks for your suggestions!
Peter Darley
-Original Message-
From: Richard Huxton [mailto:[EMAIL PROTECTED]
Sent: Wednesday, January 26, 2005 1:36 AM
To: Peter Darley
Cc: Pgsql-Performan
have human
readable/editable tables.
Thanks,
Peter Darley
-Original Message-
From: Manfred Koizar [mailto:[EMAIL PROTECTED]
Sent: Monday, January 31, 2005 3:06 AM
To: Peter Darley
Cc: Richard Huxton; Pgsql-Performance
Subject: Re: [PERFORM] Possibly slow query
On Wed, 26 Jan 2005 07:
roblems. pg_dump will show a consistent state as of the
time when the backup was started. If your database was not "consistent" at
that time you may have issues, but it will be consistent from a database
point of view, i.e. foreign keys, primary keys, check constraints, triggers
etc.
It all depends what you mean by consistent.
Peter.
On 25/11/2007, Pablo Alcaraz <[EMAIL PROTECTED]> wrote:
>
> Tom Lane wrote:
> > "Peter Childs" <[EMAIL PROTECTED]> writes:
> >
> >> On 25/11/2007, Erik Jones <[EMAIL PROTECTED]> wrote:
> >>
> >>>> Does the pg_dump
would fully support). If any of you have ideas of how
well PostgreSQL compares to SQL Server, especially in TB-sized
databases, that would be much appreciated.
We're running PG 8.2.5, by the way.
Peter
e client's database the biggest table has 237Gb+ (only 1
> > table!) and postgresql run the database without problem using
> > partitioning, triggers and rules (using postgresql 8.2.5).
> >
> > Pablo
> >
> > Peter Koczan wrote:
> >> Hi all,
> >>
&
Using PostgreSQL 8.1.10, every so often I get a transaction that takes a
while to commit.
I log everything that takes over 500ms and quite regularly it says things
like
707.036 ms statement: COMMIT
Is there any way to speed this up?
Peter Childs
On 03/01/2008, Tom Lane <[EMAIL PROTECTED]> wrote:
>
> "Peter Childs" <[EMAIL PROTECTED]> writes:
> > Using Postgresql 8.1.10 every so often I get a transaction that takes a
> > while to commit.
>
> > I log everything that takes over 50
best performance.
Can you send the table definitions of the tables involved in the
query, including index information? Might be if we look hard enough we
can find something.
Peter
SANs?
- Are there any killer features with SANs compared to local storage?
Any other comments are certainly welcome.
Peter
other important factors.
Thanks again.
Peter
- the database working set is larger than RAM
it would be generally advisable to pump up shared_buffers pretty much
as far as possible instead of relying on the buffer cache?
--
/ Peter Schuller
PGP userID: 0xE9758B7D or 'Peter Schuller <[EMAIL PROTECTED]>'
Key retrieval: Sen
nd make appropriate decisions.
Or is it a matter of PostgreSQL doing non-direct I/O, such that
anything cached in shared_buffers will also be cached by the OS?
--
/ Peter Schuller
PGP userID: 0xE9758B7D or 'Peter Schuller <[EMAIL PROTECTED]>'
Key retrieval: Send an E-Mail to [EMAIL
uld require flushing the data in question first (i.e.,
normally you just fsync the WAL, but when you want to recycle space you need
fsync() for the barrier and are then free to nuke the old WAL).
--
/ Peter Schuller
PGP userID: 0xE9758B7D or 'Peter Schuller <[EMAIL PROTECTED]>'
e to invest in more UPS capacity
to protect our data. Are there any other similar points that people
don't initially consider regarding a SAN?
Again, thanks for all your help.
Peter
ve to drop and ultimately
recreate a particular index, as some of the data sets are quite large.
So, is there any way to do this, or at least mimic this sort of behavior?
Peter
to be
able to put together a proof of concept and a test using 3par's
technology and commodity 80GB slow disks, but I really can't. You're
stuck with going all-in right away, and enough people have had
problems being married to specific techs or vendors that it's really
hard
Vinubalaji Gopal wrote:
> I tried using the nice command (Linux system) on the maintenance script
> - it did not have any effect - guess it does not change the niceness of
> the postgresql vacuum process.
You are probably looking for the command ionice. nice only affects the CPU
priority, and th
k systems. I have no
idea which one is faster (it depends, I'm sure) nor do I have much of
an idea of how to benchmark this properly.
Peter
esn't allow for any disaster
recovery, RAID 1 is ok as long as you can handle having only 2 disks
available, and RAID 5 and RAID 6 are just huge pains and terribly slow
for writes.
Note that you should go for a battery-backup if you use hardware RAID.
Hope this helps.
Peter
d concepts. Unfortunately, this comes at a higher
price. To each his own, I guess.
Our meetings didn't focus a lot on scalability of capacity, as we just
didn't think to ask. I think the basic pitch was "it scales well"
without any real hard data.
Peter
y swapping from ram-disk to memory. Not much would
be cached, and performance would suffer.
FWIW, I did a select count(*) on a table with just over 30 rows,
and it only took 0.28 sec.
Peter
Laszlo Nagy wrote:
> Question 1. We are going to use PostgreSQL 3.1 with FreeBSD. The pg docs
> say that it is better to use FreeBSD because it can alter the I/O
> priority of processes dynamically.
Where does it say that?
ur) and it should not take as long and will save space.
If you still have trouble, run "vacuum analyse verbose table1;" and see what it
says.
If you're doing it right you should be able to vacuum with the database up.
Sounds like you might be happier with a fix for the problem rather than a complex
workaround which will actually solve a completely different problem.
Regards
Peter.
x releases above .4, and the 8.3
releases, but did not see anything that indicated locking/conflict
related fixes in relation to vacuums.
--
/ Peter Schuller
PGP userID: 0xE9758B7D or 'Peter Schuller <[EMAIL PROTECTED]>'
Key retrieval: Send an E-Mail to [EMAIL PROTECTED]
E-Mail: [
informative response.
--
/ Peter Schuller
PGP userID: 0xE9758B7D or 'Peter Schuller <[EMAIL PROTECTED]>'
Key retrieval: Send an E-Mail to [EMAIL PROTECTED]
E-Mail: [EMAIL PROTECTED] Web: http://www.scode.org
short periods of time
even for things that are officially declared non-blocking; the
question is whether this falls into this category.)
--
/ Peter Schuller
PGP userID: 0xE9758B7D or 'Peter Schuller <[EMAIL PROTECTED]>'
Key retrieval: Send an E-Mail to [EMAIL PROTECTED]
E-Mail: [
Tom, for the very insightful
discussion!
--
/ Peter Schuller
PGP userID: 0xE9758B7D or 'Peter Schuller <[EMAIL PROTECTED]>'
Key retrieval: Send an E-Mail to [EMAIL PROTECTED]
E-Mail: [EMAIL PROTECTED] Web: http://www.scode.org
Dan Harris wrote:
> My desire would be to have a parallel server that could act as a hot
> standby system with automatic fail over in a multi-master role.
I will add my "me too" for DRBD + Heartbeat.