Josh,
> I'm trying to figure out what I need to do to get my postgres server
> moving faster. It's just crawling right now. It's on a p4 HT with 2
> gigs of mem.
There have been issues with Postgres+HT, especially on Linux 2.4. Try
turning HT off if other tuning doesn't solve things.
Otherwi
Josh Close <[EMAIL PROTECTED]> writes:
> I'm trying to figure out what I need to do to get my postgres server
> moving faster. It's just crawling right now.
I suspect that fooling with shared_buffers is entirely the wrong tree
for you to be barking up. My suggestion is to be looking at individual
On Wed, 20 Oct 2004 01:33:16 +0100, Simon Riggs <[EMAIL PROTECTED]> wrote:
> and using what version of PostgreSQL are you using? 8.0beta, I hope?
I'm using version 7.4.5.
> > I was thinking I need to increase the amount of shared buffers, but
> > I've been told "the sweet spot for shared_buff
> I've done some manual benchmarking running my script 'time script.pl'
> I realise my script uses some of the time; benchmarking shows that
> 50% of the time is spent in dbd:execute.
The Perl drivers don't currently use database-level prepared statements,
which would give a small boost.
But your
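A possible workaround while the driver lacks support: PostgreSQL (7.3 and later) exposes prepared statements at the SQL level, so a script can issue PREPARE/EXECUTE itself. A minimal sketch, with a hypothetical table and statement name:

```sql
-- PREPARE parses and plans the statement once; each EXECUTE reuses
-- that plan, saving parse/plan overhead on repeated calls.
PREPARE log_hit (integer, text) AS
    INSERT INTO hits (client_id, url) VALUES ($1, $2);

EXECUTE log_hit (42, '/index.html');
EXECUTE log_hit (42, '/about.html');

DEALLOCATE log_hit;
```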
Brock Henry wrote:
Hi,
I'm after some opinions about insert performance.
Have you looked into using the copy command instead of inserts? For
bulk loading of data it can be significantly faster.
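A sketch of what that could look like for a daily load (table and file path are hypothetical):

```sql
-- COPY streams all rows in a single command, avoiding the per-row
-- parse/plan/network overhead of individual INSERTs.
COPY imports (line_no, payload)
    FROM '/tmp/daily_load.dat'
    WITH DELIMITER ',';
```

Note that the file path is read server-side; 8.0 adds a CSV mode, but plain delimited files work on 7.4 as above.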
Hi,
I'm after some opinions about insert performance.
I'm importing a file with 13,002 lines to a database that ends up with
75,703 records across 6 tables. This is a partial file; the real data
is 4 files with total lines 95174. I'll be loading these files each
morning, and then running a num
>Josh Close
> I'm trying to figure out what I need to do to get my postgres server
> moving faster. It's just crawling right now. It's on a p4 HT with 2
> gigs of mem.
and using what version of PostgreSQL are you using? 8.0beta, I hope?
> I was thinking I need to increase the amount of shared
How many lines do you have in your daily logfiles
As you can see, this looks for clients who have visited the same site
within 20 min. If there is no match, a unique sessionid is assigned
from a sequence. If there is a visit, the session id assigned to them
is used. I'm only able to process
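The lookup described above might look something like the following (schema and column names are guesses, not the poster's actual script):

```sql
-- Find an existing session for this client on this site within the
-- last 20 minutes; if no row comes back, a new id is drawn from a
-- sequence instead.
SELECT session_id
  FROM sessions
 WHERE client_ip = '10.0.0.1'
   AND site      = 'www.example.com'
   AND last_hit  > now() - interval '20 minutes';

-- No match? Assign a fresh session id:
SELECT nextval('session_id_seq');
```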
> My basic question to the community is "is PostgreSQL approximately
> as fast as Oracle?"
>
> I don't want benchmarks, they're BS. I want a gut feel from this community
> because I know many of you are in mixed shops that run both products, or
>
On Fri, 15 Oct 2004 08:47 pm, Gavin Sherry wrote:
> On Fri, 15 Oct 2004, Bernd wrote:
>
> > Hi,
[snip]
> > Table-def:
> > Table "public.scr_well_compound"
> >   Column  |  Type   | Modifiers
> > ----------+---------+-----------
> > mat_id | numeric(1
Josh Berkus <[EMAIL PROTECTED]> writes:
>> The bigger problem here is that the SMP locking bottlenecks we are
>> currently seeing are *hardware* issues (AFAICT anyway).
> Well, initial results from Gavin/Neil's patch seem to indicate that, while
> futexes do not cure the CSStorm bug, they do less
I'm trying to figure out what I need to do to get my postgres server
moving faster. It's just crawling right now. It's on a p4 HT with 2
gigs of mem.
I was thinking I need to increase the amount of shared buffers, but
I've been told "the sweet spot for shared_buffers is usually on the
order of 100
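For reference, the commonly cited range works out to something like this in postgresql.conf (values are illustrative for a 2 GB machine on 7.4, not a recommendation for this specific workload):

```
# postgresql.conf -- illustrative values only
shared_buffers = 10000          # ~80 MB in 8 KB pages; well below total RAM
effective_cache_size = 100000   # ~800 MB; tells the planner what the OS caches
sort_mem = 8192                 # KB per sort (renamed work_mem in 8.0)
```

On 7.4, most of the 2 GB is better left to the kernel's page cache than given to shared_buffers.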
Tom,
> The bigger problem here is that the SMP locking bottlenecks we are
> currently seeing are *hardware* issues (AFAICT anyway). The only way
> that futexes can offer a performance win is if they have a smarter way
> of executing the basic atomic-test-and-set sequence than we do;
> and if so,
Manfred Spraul <[EMAIL PROTECTED]> writes:
> Has anyone tried to replace the whole lwlock implementation with
> pthread_rwlock? At least for Linux with recent glibcs, pthread_rwlock is
> implemented with futexes, i.e. we would get a fast lock handling without
> os specific hacks.
"At least for
Neil wrote:
In any case, the "futex patch"
uses the Linux 2.6 futex API to implement PostgreSQL spinlocks.
Has anyone tried to replace the whole lwlock implementation with
pthread_rwlock? At least for Linux with recent glibcs, pthread_rwlock is
implemented with futexes, i.e. we would get a fa
On Fri, 15 Oct 2004, Bernd wrote:
> Hi,
>
> we are working on a product which was originally developed against an Oracle
> database and which should be changed to also work with postgres.
>
> Overall the changes we had to make are very small and we are very pleased with
> the good performance of p
Magnus Hagander schrieb:
IIRC, previous versions of postgresql (< 8.0) did not correctly sync
disks when running on Cygwin. I'm not 100% sure, can someone confirm?
8.0 does, and I believe it does both under native win32 and cygwin.
yes, sync is a NOOP on cygwin.
It's been my experience that the nat
Hi,
We are experiencing slow performance on 8 Beta 2 Dev3 on Win32 and are
trying to determine why. Any info is appreciated.
We have a Web Server and a DB server both running Win2KServer with all
service packs and critical updates.
An ASP page on the Web Server hits the DB Server with a simple q
Right - if you split a table to a lot of more selective tables, it can often
dramatically change the plan options (e.g. - in a single table, selectivity
for a query may be 1% and require an expensive nested loop while in the more
restrictive table it may match 14% of the data and do a cheaper scan)
Thanks Magnus,
So are we correct to rely on
- 8 being slower than 7.x in general and
- 8 on Win32 being a little faster than 8 on Cygwin?
Will the final release of 8 be faster than the beta?
Thanks,
Mike
- Original Message -
From: "Magnus Hagander" <[EMAIL PROTECTED]>
To: "[EMAIL PRO
hi,
[EMAIL PROTECTED] wrote:
Hello
I am doing a comparison between MySQL and PostgreSQL.
In the MySQL manual it says that MySQL performs best with Linux 2.4 with
ReiserFS on x86. Can anyone official, or in the know, give similar
information regarding PostgreSQL?
Also, any links to benchmarking test
This may sound more elaborate than it's worth, but I don't know of
a better way to avoid a table scan.
You want to index on a computed value that is a common prefix of your
FROM and TO fields.
The next step is to search on a fixed SET of prefixes of different
lengths. For example, some of your ran
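PostgreSQL supports indexes on expressions, so the computed prefix can be indexed directly. A sketch with hypothetical table and column names:

```sql
-- Index a fixed-length prefix of the FROM field, then search on
-- equality against the same expression so the planner can use it
-- instead of scanning the whole table.
CREATE INDEX ip_prefix_idx ON ranges (substr(from_ip, 1, 7));

SELECT *
  FROM ranges
 WHERE substr(from_ip, 1, 7) = substr('64.68.1.100', 1, 7);
```

Repeating the search for each prefix length in the fixed set then narrows candidates without a full table scan.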
On Tue, 19 Oct 2004 15:49:45 -0400, Jeremy Dunn <[EMAIL PROTECTED]> wrote:
> > -Original Message-
> > From: [EMAIL PROTECTED]
> > [mailto:[EMAIL PROTECTED] On Behalf Of
> > Matt Nuzum
> > Sent: Tuesday, October 19, 2004 3:35 PM
> > To: pgsql-performance
> > Subject: [PERFORM] Speeding up th
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of
> Matt Nuzum
> Sent: Tuesday, October 19, 2004 3:35 PM
> To: pgsql-performance
> Subject: [PERFORM] Speeding up this function
>
>
> All it does is try to "link" pageviews together into a session.
>
Hello, I've thought it would be nice to index certain aspects of my
apache log files for analysis. I've used several different techniques
and have something usable now, but I'd like to tweak it one step
further.
My first performance optimization was to change the logformat into a
CSV format. I pr
On Wed, Oct 13, 2004 at 12:21:27PM -0400, Aaron Mulder wrote:
> All,
> My company (Chariot Solutions) is sponsoring a day of free
> PostgreSQL training by Bruce Momjian (one of the core PostgreSQL
> developers). The day is split into 2 sessions (plus a Q&A session):
>
> * Mastering Postgre
Hi Rod,
On Tue, Oct 19, 2004 at 11:40:17AM -0400, Rod Taylor wrote:
> > Whatever the case, the database still slows down to a halt after a month or
> > so, and I have to go in and shut everything down and do a VACUUM FULL by
> > hand. One index (of many many) takes 2000 seconds to vacuum. The wh
"Andrei Bintintan" <[EMAIL PROTECTED]> writes:
> Hi to all! I have the following query. The execution time is very big, it
> doesn't use the indexes and I don't understand why...
Indexes are not necessarily the best way to do a large join.
> If I use the following query the indexes are used:
The
Hi to all! I have the following query. The execution time is very big, it
doesn't use the indexes and I don't understand why...
SELECT count(o.id) FROM orders o
INNER JOIN report r ON o.id=r.id_order
INNER JOIN status s ON o
Max Baker <[EMAIL PROTECTED]> writes:
> I've been having problems maintaining the speed of the database in the
> long run. VACUUMs of the main tables happen a few times a day after maybe
> 50,000 or less rows are added and deleted (say 6 times a day).
> I have a whole lot (probably too much) inde
On Tue, Oct 19, 2004 at 11:33:50AM -0400, Knutsen, Mark wrote:
> (Why don't replies automatically go to the list?)
Because sometimes you don't want them to. There's been dozens of
discussions about this. BTW, mutt has a nice feature which allows
you to reply to lists -- I imagine other MUAs have
> Whatever the case, the database still slows down to a halt after a month or
> so, and I have to go in and shut everything down and do a VACUUM FULL by
> hand. One index (of many many) takes 2000 seconds to vacuum. The whole
> process takes a few hours.
Do a REINDEX on that table instead, and r
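In 7.4, routine VACUUM cannot shrink bloated indexes, so rebuilding them is the usual fix. A sketch (the table name is hypothetical):

```sql
-- Rebuild all indexes on the bloated table in one step; this takes an
-- exclusive lock but reclaims the index bloat VACUUM leaves behind...
REINDEX TABLE sessions;

-- ...then keep up with regular (non-FULL) vacuums so the table and
-- indexes don't bloat again between maintenance windows:
VACUUM ANALYZE sessions;
```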
Hi Folks,
This is my _4th_ time trying to post this, me and the mailing list software
are fighting. I think it's because of the attachments so I'll just put
links to them instead. All apologies if this gets duplicated.
I've been having problems maintaining the speed of the database in the
long
(Why don't replies automatically go to the list?)
Sure enough, quoting the constants fixes the problem.
Is it a best practice to always quote constants?
> -Original Message-
> From: Doug Y [mailto:[EMAIL PROTECTED]
> Sent: Tuesday, October 19, 2004 11:28 AM
> To: Knutsen, Mark
> Cc: [EMA
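For the archives: on 7.x the planner does not match a bare integer literal against a smallint or bigint index, so quoting (or casting) the constant lets the index be considered. A brief sketch against the table from this thread:

```sql
-- A bare 90 is typed as integer and won't match a smallint index;
-- a quoted or explicitly cast constant will.
SELECT * FROM db WHERE type = 90;              -- may fall back to seq scan
SELECT * FROM db WHERE type = '90';            -- index usable
SELECT * FROM db WHERE type = 90::smallint;    -- index usable
```

This quirk was fixed in 8.0, where cross-type comparisons can use indexes directly.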
Hi, I ran into a similar problem using bigints...
See:
http://www.postgresql.org/docs/7.3/static/datatype.html#DATATYPE-INT
small & big int have to be cast when used in queries... try:
explain select * from db where type=90::smallint and
subtype=70::smallint and date='7/1/2004';
or
explain select
The following is from a database of several hundred million
rows of real data that has been VACUUM ANALYZEd.
Why isn't the index being used for a query that seems
tailor-made for it? The results (6,300 rows) take about ten minutes to retrieve
with a sequential scan.
A copy of this d
You don't seem to have an index on botnumber, but in
your query botnumber is in the WHERE clause.
I can't explain why the same query takes so
long,
but have you tried a procedure with a loop structure
(which creates a cursor)?
you could try
CREATE OR REPLACE FUNCTION sp_test_Alban1()
returns in