12:28p
Dear All,
This question is regarding the performance of queries in general.
The performance of queries would vary depending on the number of tuples being
returned, and the sort of algorithm that will be implemented for the retrieval.
Now if the relation returns zero tuples.. (the seq, a
No: the function is calc_total(int,int,int) and the table has the
same types.
As Tom said, my problem is because of planning in PL/pgSQL. As
is written in
http://www.postgresql.org/docs/7.3/static/plpgsql.html#PLPGSQL-OVERVIEW
plans for queries in pl/pgsql are made just once - when they are
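For readers hitting the same issue: the usual workaround in PL/pgSQL of this era is to build the query as a string and run it with EXECUTE, which forces a fresh plan on every call instead of reusing the one cached at first execution. A sketch only — the function, table, and column names below (calc_total_dyn, some_table, col1, col2) are invented for illustration, not taken from the thread:

```sql
-- EXECUTE re-plans the statement on every call; 7.3 has no
-- EXECUTE ... INTO, so loop over the result set instead.
CREATE OR REPLACE FUNCTION calc_total_dyn(int, int) RETURNS bigint AS '
DECLARE
    r RECORD;
BEGIN
    FOR r IN EXECUTE ''SELECT count(*) AS n FROM some_table ''
                  || ''WHERE col1 = '' || quote_literal($1)
                  || '' AND col2 = '' || quote_literal($2) LOOP
        RETURN r.n;
    END LOOP;
    RETURN 0;
END;
' LANGUAGE plpgsql;
```

The trade-off is that each call pays parse and plan cost, but the planner sees the actual parameter values rather than planning blind.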
On Friday 03 October 2003 07:34, [EMAIL PROTECTED] wrote:
> 12:28p
> Dear All,
> This question is regarding the performance of queries in general.
> The performance of queries would vary depending on the number of tuples
> it is returning, and the sort of algorithm that will be implemented for th
Oops! [EMAIL PROTECTED] (Dror Matalon) was seen spray-painting on a wall:
> I smell a religious war in the air :-).
> Can you go several days in a row without doing select count(*) on any
> of your tables?
I would be more likely, personally, to run "VACUUM VERBOSE ANALYZE",
which has useful side-
On Thu, 2 Oct 2003, Christopher Browne wrote:
> I can't imagine why the raw number of tuples in a relation would be
> expected to necessarily be terribly useful.
>
We use stuff like that for reporting queries.
example:
On our message boards each post is a row. The powers that be like to know
ho
Hi everyone,
I've been trying to find out if some guidelines
exist, somewhere, describing how postgres
can possibly run on less than 8MB of RAM.
(Disk space not an issue).
The closest thread I could find in the list
archives is :
http://archives.postgresql.org/pgsql-general/2002-06/msg01343.php
Hi,
I have a select like this:
SELECT MAX(transactionid) FROM cbntransaction WHERE transactiontypeid=0;
In the query:
transactionid is the primary key of the cbntransaction table,
but transactiontypeid is a low-cardinality column; there are over 100,000
records with the same transactiontypeid.
I was tr
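In PostgreSQL of this vintage, MAX() goes through the generic aggregate machinery and cannot use an index; the standard rewrite is ORDER BY ... DESC LIMIT 1 with a matching multi-column index. A sketch against the table described above (the index name is invented):

```sql
-- With an index on (transactiontypeid, transactionid), the ORDER BY /
-- LIMIT form can walk the index backwards to the first matching row
-- instead of scanning all 100,000+ rows the way MAX() must in 7.x.
CREATE INDEX cbntransaction_type_id_idx
    ON cbntransaction (transactiontypeid, transactionid);

SELECT transactionid
FROM cbntransaction
WHERE transactiontypeid = 0
ORDER BY transactiontypeid DESC, transactionid DESC
LIMIT 1;
```

Ordering by both columns, in index order, is what lets the planner match the query to a backward index scan.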
> "CB" == Christopher Browne <[EMAIL PROTECTED]> writes:
CB> Unfortunately, while there are companies hawking SSDs, they are in the
CB> "you'll have to talk to our salescritter for pricing" category, which
CB> means that they must be ferociously expensive. :-(.
You ain't kidding. Unfortunat
On Thu, 2 Oct 2003, CN wrote:
> Hi!
>
> It's just my curiosity. I wonder if there is any way to break my speed
> limit on an AMD 450MHz:
You're most likely I/O bound, not CPU bound here. So, if you want better
speed, you'll likely need a better storage subsystem.
---(end
Well, I can think of many more cases where it would be useful:
SELECT COUNT(DISTINCT x) FROM ...
SELECT COUNT(*) FROM ... WHERE x = ?
Also, having the transaction number (visibility) would tip the balance more
toward index_scan than seq_scan because you would not have to look up
visibility in the data fi
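When an exact, cheap COUNT(*) really is required, one common workaround is to maintain the count yourself with triggers. A rough sketch — every name here (row_counts, mytable, track_count) is hypothetical:

```sql
-- Keep a running row count in a side table, maintained by triggers.
CREATE TABLE row_counts (tablename text PRIMARY KEY, n bigint);
INSERT INTO row_counts SELECT 'mytable', count(*) FROM mytable;

CREATE OR REPLACE FUNCTION track_count() RETURNS trigger AS '
BEGIN
    IF TG_OP = ''INSERT'' THEN
        UPDATE row_counts SET n = n + 1 WHERE tablename = ''mytable'';
    ELSIF TG_OP = ''DELETE'' THEN
        UPDATE row_counts SET n = n - 1 WHERE tablename = ''mytable'';
    END IF;
    RETURN NULL;  -- result ignored for AFTER triggers
END;
' LANGUAGE plpgsql;

CREATE TRIGGER mytable_count AFTER INSERT OR DELETE ON mytable
    FOR EACH ROW EXECUTE PROCEDURE track_count();
```

The catch is contention: every insert or delete now updates the same counter row, which serializes concurrent writers, so this suits read-mostly tables far better than hot ones.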
Stef <[EMAIL PROTECTED]> writes:
> I've been trying to find out if some guidelines
> exist, somewhere, describing how postgres
> can possibly run on less than 8MB of RAM.
Are you sure you want Postgres, and not something smaller? BDB,
or SQL Lite, for example?
"Postgres is bloatware by design: i
On Fri, 03 Oct 2003 11:42:54 -0400
Tom Lane <[EMAIL PROTECTED]> wrote:
=> Are you sure you want Postgres, and not something smaller? BDB,
=> or SQL Lite, for example?
I have considered various options, including BDB and SQL Lite, but
alas, it will have to be postgres if it's going to be a databas
Stef <[EMAIL PROTECTED]> writes:
> Crawling is ok. Won't differ much from normal operation on a machine
> like that. Any tips on how to achieve the most diminutive vmem and
> conf settings?
The out-of-the-box settings are already pretty diminutive on current
releases :-(. In 7.4 you'd likely want
We frequently need to know the number of tuples in a table, although sometimes we do
have WHERE status='X', for example, but this often doesn't guarantee an indexed scan.
And yes, my reasons are the same - reporting figures, e.g. the number of bookings made
since the system was introduced. Have you trie
Josh,
I declared all the indexes that you suggested and ran vacuum full
analyze. The query plan has not changed and it's still trying to use
seqscan. I tried to disable seqscan, but the plan didn't change. Any
other suggestions?
I started explain analyze on the query, but I doubt it will finish an
Oleg,
> I declared all the indexes that you suggested and ran vacuum full
> analyze. The query plan has not changed and it's still trying to use
> seqscan. I tried to disable seqscan, but the plan didn't change. Any
> other suggestions?
> I started explain analyze on the query, but I doubt it will
On Fri, 03 Oct 2003 12:32:00 -0400
Tom Lane <[EMAIL PROTECTED]> wrote:
=> What exactly is failing? And what's the platform, anyway?
Nothing is really failing atm, except the funds for better
hardware. JBOSS and some other servers need to be
run on these machines, along with linux, which will b
Josh,
My data directory is 3.8 GB.
I can send you flat data files and scripts to create indices, but still
it would be about 1.3 GB of data. Do you still want me to transfer data
to you? If yes, then just give me your FTP address.
Thanks.
Oleg
-Original Message-
From: Josh Berkus [mailto:
Stef,
> I've been trying to find out if some guidelines
> exist, somewhere, describing how postgres
> can possibly run on less than 8MB of RAM.
> (Disk space not an issue).
I can tell you from experience that you will get some odd behaviour, and even
connection failures, when Postgres is forced
On Fri, 2003-10-03 at 12:52, Stef wrote:
> On Fri, 03 Oct 2003 12:32:00 -0400
> Tom Lane <[EMAIL PROTECTED]> wrote:
>
> => What exactly is failing? And what's the platform, anyway?
>
> Nothing is really failing atm, except the funds for better
> hardware. JBOSS and some other servers need to be
On Fri, 2003-10-03 at 14:08, Josh Berkus wrote:
> I can tell you from experience that you will get some odd behaviour, and even
> connection failures, when Postgres is forced into swap by lack of memory.
Why would you get a connection failure? And other than poor performance,
why would you get "o
On Fri, 3 Oct 2003, Ron Johnson wrote:
> On Fri, 2003-10-03 at 12:52, Stef wrote:
> > On Fri, 03 Oct 2003 12:32:00 -0400
> > Tom Lane <[EMAIL PROTECTED]> wrote:
> >
> > => What exactly is failing? And what's the platform, anyway?
> >
> > Nothing is really failing atm, except the funds for bette
On Fri, 03 Oct 2003 11:42:54 -0400 Tom Lane <[EMAIL PROTECTED]> wrote:
> "Postgres is bloatware by design: it was built to house PhD theses."
> -- J. Hellerstein (who ought to know)
if postgres is bloatware, what is oracle 9i?
(after i downloaded a copy of oracle 8i a couple of months back, i swo
OK, so I've had a bit of time to look things over, and appear to be
making headway. Here's how things stand right now:
I added a function called best_inner_scan used the same way as
best_inner_indexscan, but it's a bit more generalized in the sense that
it can make append plans comprising of t
Hi,
I have a query that ran quite well initially, but slowed down quite a
bit once I introduced an aggregate into the equation. The average
execution time went up from around 15 msec to around 300 msec.
The original query fetches a bunch of articles:
select articlenumber, channel, description,
I've read some posts that say vacuum doesn't lock, but my experience
today indicates the opposite. It seemed that "vacuum full analyze"
was locked waiting and so were other postmaster processes. It
appeared to be a deadlock, because all were in "WAITING" state according
to ps. I let this go for a
Rob Nagler <[EMAIL PROTECTED]> writes:
> I've read some posts that say vacuum doesn't lock, but my experience
> today indicates the opposite. It seemed that "vacuum full analyze"
> was locked waiting and so were other postmaster processes.
vacuum full does require exclusive lock, plain vacuum do
Dror,
> select articlenumber, channel, description, title, link, dtstamp from
> items, my_channels where items.channel = '2' and my_channels.id =
> '2' and owner = 'drormata' and dtstamp > last_viewed and
> articlenumber not in (select item from viewed_items where chann
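The NOT IN (subselect) form was a notorious performance trap before 7.4; the usual rewrite is a correlated NOT EXISTS. A sketch of that transformation — the truncated subquery's WHERE clause is not preserved in the archive, so the join conditions on viewed_items below are assumptions, not the original:

```sql
-- Hypothetical NOT EXISTS rewrite of "articlenumber NOT IN (...)";
-- the viewed_items correlation columns are assumed for illustration.
SELECT articlenumber, channel, description, title, link, dtstamp
FROM items, my_channels
WHERE items.channel = '2'
  AND my_channels.id = '2'
  AND owner = 'drormata'
  AND dtstamp > last_viewed
  AND NOT EXISTS (
        SELECT 1
        FROM viewed_items v
        WHERE v.item = items.articlenumber
          AND v.channel = items.channel   -- assumed correlation
      );
```

On 7.3 and earlier, NOT IN materializes the subselect and probes it per row without hashing, while NOT EXISTS can use an index on the correlated columns.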
Hi Josh,
On Fri, Oct 03, 2003 at 02:07:10PM -0700, Josh Berkus wrote:
> Dror,
>
> > select articlenumber, channel, description, title, link, dtstamp from
> > items, my_channels where items.channel = '2' and my_channels.id =
> > '2' and owner = 'drormata' and dtstamp > last_view
Dror,
> I am using 7.4, and had tried NOT EXISTS and didn't see any
> improvements.
It wouldn't if you're using 7.4, which has improved IN performance immensely.
What happens if you stop using a function and instead use a subselect?
--
-Josh Berkus
Aglio Database Solutions
San Francisco
--
> item_max_date() looks like this:
>select max(dtstamp) from items where channel = $1 and link = $2;
It is too bad the (channel, link) index doesn't have dtstamp at the end
of it, otherwise the below query would be a gain (might be a small one
anyway).
select dtstamp
from items
where
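The rewrite being hinted at is presumably the usual pre-8.0 MAX() workaround; a hedged completion of the idea, under the assumption Rod states (a three-column index ending in dtstamp, which the schema currently lacks — the index name is invented):

```sql
-- With an index on (channel, link, dtstamp), this form can read the
-- maximum straight off the end of the index instead of aggregating.
CREATE INDEX items_chan_link_dt_idx ON items (channel, link, dtstamp);

SELECT dtstamp
FROM items
WHERE channel = $1 AND link = $2
ORDER BY channel DESC, link DESC, dtstamp DESC
LIMIT 1;
```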
> vacuum full does require exclusive lock, plain vacuum does not.
I think I need full, because there are updates on the table. As I
understand it, an update in pg is an insert/delete, so it needs
to be garbage collected.
> It's considerably more likely that the vacuum was waiting for an open
> c
On Fri, Oct 03, 2003 at 15:47:01 -0600,
Rob Nagler <[EMAIL PROTECTED]> wrote:
> > vacuum full does require exclusive lock, plain vacuum does not.
>
> I think I need full, because there are updates on the table. As I
> understand it, an update in pg is an insert/delete, so it needs
> to be garba
On Fri, Oct 03, 2003 at 05:44:49PM -0400, Rod Taylor wrote:
> > item_max_date() looks like this:
> >select max(dtstamp) from items where channel = $1 and link = $2;
>
> It is too bad the (channel, link) index doesn't have dtstamp at the end
> of it, otherwise the below query would be a gain (m
On Fri, Oct 03, 2003 at 02:35:46PM -0700, Josh Berkus wrote:
> Dror,
>
> > I am using 7.4, and had tried NOT EXISTS and didn't see any
> > improvements.
>
> It wouldn't if you're using 7.4, which has improved IN performance immensely.
>
> What happens if you stop using a function and instead use
On Fri, 2003-10-03 at 17:53, Dror Matalon wrote:
> On Fri, Oct 03, 2003 at 05:44:49PM -0400, Rod Taylor wrote:
> > > item_max_date() looks like this:
> > >select max(dtstamp) from items where channel = $1 and link = $2;
> >
> > It is too bad the (channel, link) index doesn't have dtstamp at th
On Fri, 2003-10-03 at 17:47, Rob Nagler wrote:
> They don't deadlock normally,
> only with reindex and vacuum did I see this behavior.
If you can provide a reproducible example of a deadlock induced by
REINDEX + VACUUM, that would be interesting.
(FWIW, I remember noticing a potential deadlock in
On Fri, Oct 03, 2003 at 06:10:29PM -0400, Rod Taylor wrote:
> On Fri, 2003-10-03 at 17:53, Dror Matalon wrote:
> > On Fri, Oct 03, 2003 at 05:44:49PM -0400, Rod Taylor wrote:
> > > > item_max_date() looks like this:
> > > >select max(dtstamp) from items where channel = $1 and link = $2;
> > >
[EMAIL PROTECTED] (Rob Nagler) writes:
> I've read some posts that say vacuum doesn't lock, but my experience
> today indicates the opposite. It seemed that "vacuum full analyze"
> was locked waiting and so were other postmaster processes. It
> appeared to be a deadlock, because all were in "WAITI
In the last exciting episode, [EMAIL PROTECTED] (Jean-Luc Lachance) wrote:
> Well, I can think of many more cases where it would be useful:
>
> SELECT COUNT(DISTINCT x) FROM ...
> SELECT COUNT(*) FROM ... WHERE x = ?
Those are precisely the cases that the "other databases" ALSO fall
down on.
Maint
> > vacuum full does require exclusive lock, plain vacuum does not.
>
> I think I need full, because there are updates on the table. As I
> understand it, an update in pg is an insert/delete, so it needs
> to be garbage collected.
Yes and no. You only need a plain VACUUM that is run often enough
Rob,
> > I think I need full, because there are updates on the table. As I
> > understand it, an update in pg is an insert/delete, so it needs
> > to be garbage collected.
>
> Yes and no. You only need a plain VACUUM that is run often enough to
> recover space as fast as you need to grab it. F
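In the pre-autovacuum era, "run often enough" usually meant cron. A hypothetical crontab sketch — the database and table names are invented, and the frequencies are only illustrative:

```
# Plain (non-FULL) VACUUM takes no exclusive lock, so it can run
# alongside normal traffic: half-hourly on a hot table, nightly
# database-wide with statistics refresh.
*/30 * * * *  psql -d mydb -c 'VACUUM ANALYZE hot_table'
15 3 * * *    vacuumdb --analyze mydb
```

The right frequency depends on the update rate: often enough that the free space map can recycle dead tuples before the table bloats.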
> > In summary, I suspect that it is better from a UI perspective to
> > bring down the app on Sat at 3 a.m and reimport with a fixed time
> > period than to live through reindexing/vacuuming which may deadlock.
> > Am I missing something?
>
> Consider running pg_autovacuum, and thereby do a little
> > I hope it isn't the first or second one ;)
>
> CREATE or REPLACE FUNCTION item_max_date (int4, varchar) RETURNS
> timestamptz AS '
> select max(dtstamp) from items where channel = $1 and link = $2;
> ' LANGUAGE 'sql';
How about the below?
CREATE or REPLACE FUNCTION item_max_date (int4, var
> Also, if you find that you need to run VACUUM FULL often, then
> you need to
> raise your max_fsm_pages.
Yes and no. If it's run often enough then the number of tracked pages
shouldn't need to be raised, but then again...
...max_fsm_pages should be raised anyway. I'm about to reclaim a Pentiu
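For reference, the free-space-map settings under discussion live in postgresql.conf; the numbers below are illustrative only, not recommendations from the thread:

```
# postgresql.conf -- free space map sizing (7.3/7.4)
max_fsm_pages = 100000      # pages with free space VACUUM can remember
max_fsm_relations = 1000    # distinct relations tracked in the map
```

If max_fsm_pages is too small, plain VACUUM "forgets" reclaimable pages between runs and tables bloat until a VACUUM FULL becomes necessary.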
On Fri, 2003-10-03 at 17:34, Christopher Browne wrote:
> Not surprising either. While the reindex takes place, updates to that
> table have to be deferred.
Right, but that's no reason not to let SELECTs proceed, for example.
(Whether that would actually be *useful* is another question...)
-Neil
> > Also, if you find that you need to run VACUUM FULL often, then
> > you need to
> > raise your max_fsm_pages.
>
> Yes and no. If it's run often enough then the number of tracked pages
> shouldn't need to be raised, but then again...
Oops, sorry, didn't pay attention and missed the mention of F
I have updated the FAQ to be:
In comparison to MySQL or leaner database systems, we are
faster for multiple users, complex queries, and a read/write query
load. MySQL is faster for SELECT queries done by a few users.
Is this accurate? It seems so.
--
On Fri, 2003-10-03 at 21:39, Bruce Momjian wrote:
> I have updated the FAQ to be:
>
> In comparison to MySQL or leaner database systems, we are
> faster for multiple users, complex queries, and a read/write query
> load. MySQL is faster for SELECT queries done by a few users.
>
Rod Taylor wrote:
-- Start of PGP signed section.
> On Fri, 2003-10-03 at 21:39, Bruce Momjian wrote:
> > I have updated the FAQ to be:
> >
> > In comparison to MySQL or leaner database systems, we are
> > faster for multiple users, complex queries, and a read/write query
> > loa
Centuries ago, Nostradamus foresaw when [EMAIL PROTECTED] (Bruce Momjian) would write:
> I have updated the FAQ to be:
>
> In comparison to MySQL or leaner database systems, we are
> faster for multiple users, complex queries, and a read/write query
> load. MySQL is faster for SE
Neil Conway <[EMAIL PROTECTED]> writes:
> On Fri, 2003-10-03 at 17:34, Christopher Browne wrote:
>> Not surprising either. While the reindex takes place, updates to that
>> table have to be deferred.
> Right, but that's no reason not to let SELECTs proceed, for example.
What if said SELECTs are
Rod Taylor <[EMAIL PROTECTED]> writes:
> I've often wondered if they win on those because they have a lighter
> weight parser / optimizer with fewer "let's try simplifying this query"
> steps or if the MyISAM storage mechanism is simply quicker at pulling
> data off the disk.
Comparing pre-PREPAREd