On 17/08/2011 7:56 PM, Ogden wrote:
On Aug 17, 2011, at 1:33 PM, Gary Doades wrote:
On 17/08/2011 7:26 PM, Ogden wrote:
I am using bonnie++ to benchmark our current Postgres system (on RAID 5) with
the new one we have, which I have configured with RAID 10. The drives are the
same (SAS 15K
On 17/08/2011 7:26 PM, Ogden wrote:
I am using bonnie++ to benchmark our current Postgres system (on RAID
5) with the new one we have, which I have configured with RAID 10. The
drives are the same (SAS 15K). I tried the new system with ext3 and
then XFS but the results seem really outrageous as
On 07/12/2010 9:29 PM, Tom Polak wrote:
From EXPLAIN ANALYZE I can see the query ran much faster.
"Nested Loop Left Join (cost=0.00..138.04 rows=1001 width=1298) (actual
time=0.036..4.679 rows=1001 loops=1)"
" Join Filter: (pgtemp1.state = pgtemp2.stateid)"
" -> Seq Scan on pgtemp1 (cost=
On 07/12/2010 7:43 PM, Andy Colson wrote:
On 12/7/2010 1:22 PM, Justin Pitts wrote:
Also, as a fair warning: mssql doesn't really care about
transactions, but PG really does. Make sure all your code is properly
starting and committing transactions.
-Andy
I do not understand that statement
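The point Andy is making, that uncommitted work is invisible to other sessions until an explicit commit, can be illustrated with any transactional engine. A minimal sketch using Python's stdlib sqlite3 driver (an analogy only, not PostgreSQL itself; all names are invented):

```python
import os
import sqlite3
import tempfile

# Two connections to the same on-disk database simulate two sessions.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
writer = sqlite3.connect(path)
reader = sqlite3.connect(path)

writer.execute("CREATE TABLE t (n INTEGER)")
writer.commit()

writer.execute("INSERT INTO t VALUES (1)")  # implicitly opens a transaction

# The reader cannot see the uncommitted row.
count_before = reader.execute("SELECT COUNT(*) FROM t").fetchone()[0]

writer.commit()  # the row becomes visible only now

count_after = reader.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(count_before, count_after)  # 0 1
```

Code that forgets the final commit leaves its changes invisible (and, in PG, its transaction open), which is the trap being warned about.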
On 28/03/2010 10:07 AM, Tadipathri Raghu wrote:
Hi All,
I want to shed some more light on this by analysing it further:
1. In my example I have created a table with one column of type INT (which
occupies 4 bytes).
2. Initially it occupies one page of space in the file, that is 8 kB.
So, here is i
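The arithmetic above can be checked directly from psql; a sketch (the table name is invented, and it relies only on standard admin functions):

```sql
CREATE TABLE one_int (n int);          -- a single 4-byte column
INSERT INTO one_int VALUES (1);
SELECT pg_relation_size('one_int');    -- 8192: one 8 kB page holds the row
```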
On 04/01/2010 8:30 PM, Madison Kelly wrote:
Steve Crawford wrote:
Madison Kelly wrote:
Hi all,
I've got a fairly small DB...
It slows down over time and I can't seem to find a way to get the
performance to return without doing a dump and reload of the
database...
Some questions:
Is
On 04/01/2010 7:10 PM, Madison Kelly wrote:
Hi all,
I've got a fairly small DB (~850MB when pg_dump'ed) running on PgSQL
v8.1.11 on a CentOS 5.3 x86_64 Xen-based virtual machine. The filesystem is
ext3 on LVM with 32MB extents. It's about the only real resource-hungry
VM on the server.
It slow
Rajesh Kumar Mallah wrote:
Hi,
I am going to get a Dell 2950 with PERC6i with
8 * 73 15K SAS drives +
300 GB EMC SATA SAN STORAGE,
I seek suggestions from users sharing their experience with
similar hardware, if any. I have the following specific concerns:
1. On the list I read that the RAID10 function in
> "Gary Doades" <[EMAIL PROTECTED]> writes:
>> I think the reason I wasn't seeing performance issues with normal sort
>> operations is because they use work_mem not maintenance_work_mem which
>> was
>> only set to 2048 anyway. Does that sound ri
Tom Lane wrote:
> I increased the size of the test case by 10x (basically s/10/100/)
> which is enough to push it into the external-sort regime. I get
> amazingly stable runtimes now --- I didn't have the patience to run 100
> trials, but in 30 trials I have slowest 11538 msec and fastest
Tom Lane wrote:
For some reason I hadn't immediately twigged to the fact that your test
script is just N repetitions of the exact same structure with random data.
So it's not so surprising that you get random variations in behavior
with different test data sets.
> It seems clear that our qsort
Tom Lane wrote:
So it sure looks like this script does expose a problem on BSD-derived
qsorts. Curiously, the case that's much the worst for me is the third
in the script, while the shortest time is the first case, which was slow
for Gary. So I'd venture that the *BSD code has been tweaked som
Tom Lane wrote:
> I tried forcing PG to use src/port/qsort.c on the Fedora machine,
and lo and behold:
new Fedora 4 machine: shortest 434 msec, longest 8530 msec
So it sure looks like this script does expose a problem on BSD-derived
qsorts. Curiously, the case that's much the worst for
Tom Lane wrote:
shared_buffers is unlikely to impact index build time noticeably in
recent PG releases. maintenance_work_mem would affect it a lot, though.
What setting were you using for that?
Also, I tried upping maintenance_work_mem to 65536 and it didn't make
much difference (maybe 10% f
Tom Lane wrote:
Interesting. I tried your test script and got fairly close times
for all the cases on two different machines:
old HPUX machine: shortest 5800 msec, longest 7960 msec
new Fedora 4 machine: shortest 461 msec, longest 608 msec
(the HPUX machine was doing other stuff
Platform: FreeBSD 6.0, Postgresql 8.1.2 compiled from the ports collection.
Not sure if this belongs in performance or bugs..
A pg_restore of my 2.5GB database was taking up to 2 hours to complete
instead of the expected 10-15 minutes. Checking the server it was mostly
CPU bound. Testing furth
Dave Cramer wrote:
I understand that but I have seen VM's crash.
This does bring up another point. Since postgresql is not threaded, a
.NET pl would require a separate VM for each connection (unless you can
share the vm ?). One of the java pl's (pl-j) for postgres has dealt
with this issue.
For
Dave Cramer wrote:
Ok, so one use case is to select a large number of rows and do some
non-trivial operation on them.
I can see where getting the rows inside the server process ( ie some
procedural language ) thereby reducing the round trip overhead would be
beneficial. However how do you deal w
Dave Cramer wrote:
I'm curious, why do you think that's serious ? What do you really expect
to do in the stored procedure ? Anything of consequence will seriously
degrade performance if you select it in say a million rows.
I'm not sure what you mean by "select it in a million rows". I would
exp
Rosser Schwarz wrote:
while you weren't looking, Gary Doades wrote:
The .NET Runtime will be a part of the next MS SQLServer engine.
It won't be long before someone writes a procedural language binding
to PostgreSQL for Parrot [1]. That should offer us a handful or six
more language
Pierre-Frédéric Caillaud wrote:
On Mon, 10 Jan 2005 12:46:01 -0500, Alex Turner <[EMAIL PROTECTED]> wrote:
You sir are correct! You can't use perl in MS-SQL or Oracle ;).
Can you benefit from the luminous power of Visual Basic as a pl in
MSSQL ?
The .NET Runtime will be a part of the next M
Randolf Richardson wrote:
I'm looking for recent performance statistics on PostgreSQL vs. Oracle
vs. Microsoft SQL Server. Recently someone has been trying to convince my
client to switch from Sybase to Microsoft SQL Server (they originally wanted
to go with Oracle but have since fallen in
Tom Lane wrote:
It's also worth noting that work_mem is temporarily set to
maintenance_work_mem, which you didn't tell us the value of:
It's left at the default (16384).
This would be OK if that is all it used for this type of thing.
My recollection is that hash join chooses hash table partitions
How much RAM can a single postgres backend use?
I've just loaded a moderately sized dataset into postgres and was
applying RI constraints to the tables (using pgadmin on windows). Part
way though I noticed the (single) postgres backend had shot up to using
300+ MB of my RAM!
The two tables are:
cre
On 21 Oct 2004 at 15:50, Thomas F.O'Connell wrote:
> If not, should I be REINDEXing manually, as well as VACUUMing manually
> after large data imports (whether via COPY or INSERT)? Or will a VACUUM
> FULL ANALYZE be enough?
>
It's not the vacuuming that's important here, just the analyze. If
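The advice above (cut off in the archive) boils down to running a plain ANALYZE after a bulk load rather than a full vacuum; a sketch with a hypothetical table and file path:

```sql
COPY orders FROM '/tmp/orders.dat';  -- bulk import (invented table/path)
ANALYZE orders;                      -- refresh planner statistics; no VACUUM FULL needed
```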
On 8 Oct 2004 at 16:04, Tom Lane wrote:
> "Gary Doades" <[EMAIL PROTECTED]> writes:
> > If I remove the redundant clauses, the planner now estimates 1000 rows returned
> > from
> > the table, not unreasonable since it has no statistics. But *why* in that
Oops, forgot to mention:
PostgreSQL 8.0 beta 2 Windows.
Thanks,
Gary.
On 8 Oct 2004 at 20:32, Gary Doades wrote:
>
> I'm looking at one of my standard queries and have encountered some strange
> performance
> problems.
>
> The query below is to search for vacant
I'm looking at one of my standard queries and have encountered some strange
performance problems.
The query below is to search for vacant staff member date/time slots given a series of
target date/times. The data contained in the booking_plan/staff_booking tables contain
the existing bookin
On 22 Sep 2004 at 15:59, Tom Lane wrote:
> Scott Kirkwood <[EMAIL PROTECTED]> writes:
> > What do you think?
>
> I think this would allow the problems of cached plans to bite
> applications that were previously not subject to them :-(.
> An app that wants plan re-use can use PREPARE to identify t
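Tom's suggestion, truncated above, refers to explicit prepared statements; a minimal sketch (table, column, and parameter value are invented for illustration):

```sql
PREPARE vacant_slots (int) AS
    SELECT * FROM bookings WHERE staff_id = $1;  -- plan built once
EXECUTE vacant_slots(42);                        -- re-uses the cached plan
DEALLOCATE vacant_slots;
```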
On 31 Aug 2004 at 22:24, Jean-Max Reymond wrote:
> On Tue, 31 Aug 2004 21:16:46 +0100, Gary Doades <[EMAIL PROTECTED]> wrote:
>
> > I can only presume you mean 1 GB RAM. What exactly are your
> > settings for shared buffers and effective_cache_size?
>
> for 1 GB
On 31 Aug 2004 at 21:42, Jean-Max Reymond wrote:
> - Original Message -
> From: Gary Doades <[EMAIL PROTECTED]>
> Date: Tue, 31 Aug 2004 20:21:49 +0100
> Subject: Re: [PERFORM] Optimizing a request
> To: [EMAIL PROTECTED]
>
>
>
> > Have you run ANA
On 31 Aug 2004 at 20:59, Jean-Max Reymond wrote:
> hi,
>
> I want to optimize the following request and avoid the seq scan on the
> table article (1000 rows).
>
> explain SELECT art_id, art_titre, art_texte, rub_titre
> FROM article inner join rubrique on article.rub_id = rubrique.rub_id
On 2 Jun 2004 at 16:45, Merlin Moncure wrote:
>
> 'better' does not mean 'faster'. Win32 has a pretty decent journaling
> filesystem (ntfs) and a good I/O subsystem which includes IPC. Process
> management is poor compared to newer linux kernels but this is
> unimportant except in extreme cases.
Try using
select * from files_t where parent = ::int8
You have declared parent as int8, but the query will assume int4 for "" and may not
use the index.
Also make sure you have ANALYZEd this table.
Regards,
Gary.
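The cast advice above, spelled out with a placeholder literal (12345 is invented; in PostgreSQL releases of that era an int4 literal would not match an int8 index without the explicit cast):

```sql
-- parent is declared int8; cast the literal so the index can be used
SELECT * FROM files_t WHERE parent = 12345::int8;
ANALYZE files_t;  -- keep the planner's statistics current
```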
On 20 May 2004 at 0:07, Eugeny Balakhonov wrote:
> Hello for
On 1 May 2004 at 13:18, Jochem van Dieten wrote:
> Yes, really :-)
>
> When a tuple is inserted the visible-to-all flag is set to false.
> The effect of this is that every index scan that finds this tuple
> has to visit the heap to verify visibility. If it turns out the
> tuple is not only vis
On 30 Apr 2004 at 9:37, Kevin Barnard wrote:
>
> I was always under the impression that MSSQL used leaf and row level
> locking and therefore was not a concurrent database in the same sense
> that postgres is. It would still allow for concurrent connections and
> such but updates will get
On 30 Apr 2004 at 8:32, Jeff wrote:
>
> A better comparision query may be a simple "select a from mytable
> where a between foo and bar" to get an index scan. In that case its a
> straight up, vanilla index scan. Nothing else getting in the way.
>
Yes, you're right and I have done this ju
On 30 Apr 2004 at 7:26, Dennis Bjorklund wrote:
> On Fri, 30 Apr 2004, Gary Doades wrote:
>
> > I should have also pointed out that MSSQL reported that same index scan
> > as taking 65% of the overall query time. It was just "faster". The
> > overall query
On 29 Apr 2004 at 19:17, Tom Lane wrote:
> Josh Berkus <[EMAIL PROTECTED]> writes:
> > Certainly the fact that MSSQL is essentially a single-user database makes
> > things easier for them.
>
> Our recent testing (cf the "Xeon" thread) says that the interlocking we
> do to make the world safe for
>
> Having picked out an index scan as being the highest time user I
> concentrated on that in this case and compared the same index scan on
> MSSQL. At least MSSQL reported it as an index scan on the same index
> for the same number of rows.
>
I should have also pointed out that MSSQL rep
On 29 Apr 2004 at 17:54, Tom Lane wrote:
> "Gary Doades" <[EMAIL PROTECTED]> writes:
> > In this example the statistics don't matter.
>
> Don't they?
>
> A prior poster mentioned that he thought MSSQL tries to keep all its
> indexes in m
On 29 Apr 2004 at 13:54, Josh Berkus wrote:
> Gary,
>
>
> It's also quite possible that MSSQL simply has a more efficient index
> scanning implementation than we do. They've certainly had incentive; their storage
> system sucks big time for random lookups and they need those fast indexes.
> (
On 29 Apr 2004 at 15:35, Kenneth Marshall wrote:
> Did you try to cluster based on the index?
>
> --Ken
Yes, this speeds up the index scan a little (12%). To me this just
reinforces the overhead that subsequently having to go and fetch the
data tuple has on performance.
Cheers,
G
> It's also entirely possible your indices are using inaccurate
> statistical information. Have you ANALYZEd recently?
>
In this example the statistics don't matter. The plans used were the same for
MSSQL and Postgres. I was trying to eliminate the difference in plans
between the two, which o
>
> I guess the real question is, why maintain index flags and not simply
> drop the index entry altogether?
>
> A more interesting case would be to have the backend process record
> index tuples that it would invalidate (if committed), then on commit
> send that list to a garbage collection proc
On 29 Apr 2004 at 19:03, Manfred Koizar wrote:
> While the storage overhead could be reduced to 1 bit (not a joke) we'd
> still have the I/O overhead of locating and updating index tuples for
> every heap tuple deleted/updated.
But this is what a lot of DBMSs do and seem to do well enough. I can
I can understand the performance loss on non-selects from keeping the
index validity state tracking the row validity, but would that outweigh
the performance gains on selects? It depends on your mix of selects to
non-selects, I guess, but other database systems seem to imply that keeping
the index o
I know you will shoot me down, but...
Why is there an entry in the index for a row if the row is not valid?
Wouldn't it be better for the index entry validity to track the row validity?
If a particular data value for a query (join, where etc.) can be satisfied
by the index entry itself this wou
On 5 Apr 2004 at 8:36, Josh Berkus wrote:
>
> Point taken, though, SQL Server has done a better job in opitimizing for
> "dumb" queries. This is something that PostgreSQL needs to work on, as is
> self-referential updates for large tables, which also tend to be really slow.
> Mind you, in
It says:
Using anticipatory io scheduler.
This then fits with the earlier post on other observations of up to 10
times better performance, which is what I was seeing in certain
circumstances.
Cheers,
Gary.
On 4 Apr 2004 at 6:04, Cott Lang wrote:
> On Sun, 2004-04-04 at 01:56, Gary Doa
On 3 Apr 2004 at 21:23, Mike Nolan wrote:
> > Almost any cross dbms migration shows a drop in performance. The engine
> > effectively trains developers and administrators in what works and what
> > doesn't. The initial migration thus compares a tuned to an untuned version.
>
> I think it is also
fectively trains developers and administrators in what works and what
> doesn't. The initial migration thus compares a tuned to an untuned version.
>
> /Aaron
>
> - Original Message -
> From: "Josh Berkus" <[EMAIL PROTECTED]>
> To: "Gary Doades"
u the information.
Regards,
Gary.
On 3 Apr 2004 at 16:52, Cott Lang wrote:
> On Sat, 2004-04-03 at 03:50, Gary Doades wrote:
> > On 2 Apr 2004 at 22:36, [EMAIL PROTECTED] wrote:
> >
> > OK, some more detail:
> >
> > Before wiping 2.4 off my test box for the second time
s" <[EMAIL PROTECTED]>
To: "Gary Doades" <[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>
Sent: Saturday, April 03, 2004 1:59 PM
Subject: Re: [PERFORM] PostgreSQL and Linux 2.6 kernel.
> Gary,
>
> > There are no indexes on the columns involved in the update, the
Following on from Josh's response and my previous reply on SQLServer planning.
The main problem query is this one:
SELECT VS.*,VL.TEL1,SC.CONTRACT_ID,SC.CONTRACT_REF, SC.MAX_HOURS,
SC.MIN_HOURS,
(SELECT COUNT(*) FROM TIMESHEET_DETAIL JOIN MAIN_ORDER ON
(MAIN_ORDER.ORDER_ID = TIMESHEET_DETAIL.
Sorry, I think I misread your post in my last reply. I thought you were still talking
about
the big update
The main thing I have noticed about SQLServer is it seems more willing to do hash or
merge joins than PostgreSQL. I have experimented with various postgresql.conf
parameters and even
text form.
Cheers,
Gary.
On 3 Apr 2004 at 6:50, @g v t c wrote:
Use "Set Show_Plan" or something of the sort in Query Analyzer. Then
run your SQL. This will change the graphical plan to a text plan
similar to Postgresql or at least something close to readable.
Gary Doades wrote:
&g
Actually it hasn't been my experience either. Most of my queries against
the database, large and small, are either a little quicker or show no real
difference. I have only really noticed big differences under stress when
memory (RAM) is being squeezed. The main winner on 2.6 seems to be
write perform
r, after deciphering SQLServers graphical plans I will attempt to
post comparative performance/access plans, using the same data of
course, if anyone would be interested
Cheers,
Gary.
On 2 Apr 2004 at 1:32, Tom Lane wrote:
> "Gary Doades" <[EMAIL PROTECTED]> w
ssion.
As this could become important I will be doing more analysis, but due to
the nature of the issue and trying to keep as many factors constant as
possible, this may take some time.
Cheers,
Gary.
On 2 Apr 2004 at 1:32, Tom Lane wrote:
> "Gary Doades" <[EMAIL PROTECTED]>
As part of my ongoing evaluation of PostgreSQL I have been doing a little stress
testing.
I thought I would share an interesting result here...
Machine spec:
500 MHz PIII
256MB RAM
"old-ish" IDE HD (5400RPM)
Linux 2.4.22 kernel (Mandrake 9.2)
I have PostgreSQL 7.4.1 installed and have managed to l