Adam Palmblad wrote:
can I actually look at the call tree that occurs when my function is
being executed or will I be limited to viewing calls to functions in
the postmaster binary?
You're the one with the gprof data, you tell us :)
It wouldn't surprise me if gprof didn't get profiling data for dlo
Tom Lane wrote:
Not too many releases ago, there were several columns in pg_proc that
were intended to support estimation of the runtime cost and number of
result rows of set-returning functions. I believe in fact that these
were the remains of Joe Hellerstein's thesis on expensive-function
evaluation.
Tom Lane wrote:
The larger point is that writing an estimator for an SRF is frequently a
task about as difficult as writing the SRF itself
True, although I think this doesn't necessarily kill the idea. If
writing an estimator for a given SRF is too difficult, the user is no
worse off than they are now.
Keith Worthington wrote:
-> Seq Scan on tbl_current  (cost=0.00..1775.57 rows=76457 width=31) (actual time=22.870..25.024 rows=605 loops=1)
This rowcount is way off -- have you run ANALYZE recently?
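For instance (a minimal sketch -- the table name comes from the plan above,
everything else is illustrative):

    -- refresh the planner's statistics for that table
    ANALYZE tbl_current;
    -- then re-check whether the row estimate tracks the actual count
    EXPLAIN ANALYZE SELECT * FROM tbl_current;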
-Neil
Ying Lu wrote:
May I know for simple "=" operation query, for "Hash index" vs. "B-tree"
index, which can provide better performance please?
I don't think we've found a case in which the hash index code
outperforms B+-tree indexes, even for "=". The hash index code also has
a number of additional
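For what it's worth, the two index types are declared the same way, and either
one can in principle serve the equality predicate (table and column names here
are just placeholders):

    CREATE INDEX foo_id_btree ON foo USING btree (id);
    CREATE INDEX foo_id_hash  ON foo USING hash  (id);
    -- both are candidates for a query like:
    SELECT * FROM foo WHERE id = 42;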
Christopher Petrilli wrote:
This being the case, is there ever ANY reason for someone to use it?
Well, someone might fix it up at some point in the future. I don't think
there's anything fundamentally wrong with hash indexes, it is just that
the current implementation is a bit lacking.
If not, t
Jim C. Nasby wrote:
Having indexes that people shouldn't be using does add confusion for
users, and presents the opportunity for foot-shooting.
Emitting a warning/notice on hash-index creation is something I've
suggested in the past -- that would be fine with me.
Even if there is some kind of adv
Jim C. Nasby wrote:
>> No, hash joins and hash indexes are unrelated.
I know they are now, but does that have to be the case?
I mean, the algorithms are fundamentally unrelated. They share a bit of
code such as the hash functions themselves, but they are really solving
two different problems (dis
Tom Lane wrote:
On the other hand, once you reach the target index page, a hash index
has no better method than linear scan through all the page's index
entries to find the actually wanted key(s)
I wonder if it would be possible to store the keys in a hash bucket in
sorted order, provided that the
Tom Lane wrote:
I have a gut reaction against that: it makes hash indexes fundamentally
subservient to btrees.
I wouldn't say "subservient" -- if there is no ordering defined for the
index key, we just do a linear scan.
However: what about storing the things in hashcode order? Ordering uint32s
d
Josh Berkus wrote:
Don't hold your breath. MySQL, to judge by their first "clustering"
implementation, has a *long* way to go before they have anything usable.
Oh? What's wrong with MySQL's clustering implementation?
-Neil
Joshua D. Drake wrote:
Neil Conway wrote:
Oh? What's wrong with MySQL's clustering implementation?
Ram only tables :)
Sure, but that hardly makes it not "usable". Considering the price of
RAM these days, having enough RAM to hold the database (distributed over
the entire c
Josh Berkus wrote:
The other problem, as I was told it at OSCON, was that these were not
high-availability clusters; it's impossible to add a server to an existing
cluster
Yeah, that's a pretty significant problem.
a server going down is liable to take the whole cluster down.
That's news to me. D
Tom Lane wrote:
Performance?
I'll run some benchmarks tomorrow, as it's rather late in my time zone.
If anyone wants to post some benchmark results, they are welcome to.
I disagree completely with the idea of forcing this behavior for all
datatypes. It could only be sensible for fairly wide values.
mark durrant wrote:
PostgreSQL Machine:
"Aggregate (cost=140122.56..140122.56 rows=1 width=0)
(actual time=24516.000..24516.000 rows=1 loops=1)"
" -> Index Scan using "day" on mtable
(cost=0.00..140035.06 rows=35000 width=0) (actual
time=47.000..21841.000 rows=1166025 loops=1)"
"Inde
On Sun, 2005-05-29 at 16:17 -0400, Eric Lauzon wrote:
> So OID can be beneficial on static tables
OIDs aren't beneficial on "static tables"; unless you have unusual
requirements[1], there is no benefit to having OIDs on user-created
tables (see the default_with_oids GUC var, which will default to false in 8.1).
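Roughly, assuming a release that still supports per-table OIDs:

    -- session-wide default
    SHOW default_with_oids;
    SET default_with_oids = false;
    -- or decide it per table
    CREATE TABLE t (id serial PRIMARY KEY, val text) WITHOUT OIDS;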
Mark Stosberg wrote:
I've used PQA to analyze my queries and am happy overall with how they are
running. About 55% of the query time is going to variations of the pet
searching query, which seems like where it should be going. The query is
frequent and complex. It has already been combed over for ap
Mark Rinaudo wrote:
I'm running the Redhat Version of Postgresql which came pre-installed
with Redhat ES. It's version number is 7.3.10-1. I'm not sure what
options it was compiled with. Is there a way for me to tell?
`pg_config --configure` in recent releases.
Should i just compile my own p
Tom Arthurs wrote:
Yes, shared buffers in postgres are not used for caching
Shared buffers in Postgres _are_ used for caching, they just form a
secondary cache on top of the kernel's IO cache. Postgres does IO
through the filesystem, which is then cached by the kernel. Increasing
shared_buff
Tom Arthurs wrote:
I just pushed 8.0.3 to production on Sunday, and haven't had time to
really monitor it under load, so I can't tell if it's helped the context
switch problem yet or not.
8.0 is unlikely to make a significant difference -- by "current sources"
I meant the current CVS HEAD so
Gnanavel S wrote:
reindex the tables separately.
Reindexing should not affect this problem, anyway.
-Neil
Jim C. Nasby wrote:
Actually, from what I've read 4.2BSD actually took priority into account
when scheduling I/O.
FWIW, you can set I/O priority in recent versions of the Linux kernel
using ionice, which is part of RML's schedutils package (which was
recently merged into util-linux).
-Neil
Jignesh Shah wrote:
Now the question is why there are so many calls to MemoryContextSwitchTo
in a single SELECT query command? Can it be minimized?
I agree with Tom -- if profiling indicates that MemoryContextSwitchTo()
is the bottleneck, I would be suspicious that your profiling setup is
mis
Pryscila B Guttoski wrote:
In my master's course, I'm studying PostgreSQL's optimizer.
I don't know whether anyone on this list has participated in the development
of PostgreSQL's optimizer, but maybe someone can help me with this question.
pgsql-hackers might be more appropriate.
PostgreSQL
Cristian Prieto wrote:
Anyway, do you know where could I get more info and theory about
database optimizer plan? (in general)
Personally I like this survey paper on query optimization:
http://citeseer.csail.mit.edu/371707.html
The paper also cites a lot of other papers that cover specific
On Mon, 2005-26-09 at 12:54 -0500, Announce wrote:
> Is there a performance benefit to using int2 (instead of int4) in cases
> where I know I will be well within its numeric range?
int2 uses slightly less storage space (2 bytes rather than 4). Depending
on alignment and padding requirements, as w
On Fri, 2005-21-10 at 07:34 -0500, Martin Nickel wrote:
> Let's say I do the same thing in Postgres. I'm likely to have my very
> fastest performance for the first few queries until memory gets filled up.
No, you're not: if a query doesn't hit the cache (both the OS cache and
the Postgres userspa
On Sun, 2005-23-10 at 21:36 -0700, Josh Berkus wrote:
> SELECT id INTO v_check
> FROM some_table ORDER BY id LIMIT 1;
>
> IF id > 0 THEN
>
> ... that says pretty clearly to code maintainers that I'm only interested in
> finding out whether there's any rows in the table, while making sure I
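An alternative spelling of that existence test, sketched in plain SQL (table
name taken from the snippet above):

    -- EXISTS stops at the first matching row, much like the LIMIT 1 probe
    SELECT EXISTS (SELECT 1 FROM some_table) AS has_rows;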
On Mon, 2005-31-10 at 17:16 -0600, PostgreSQL wrote:
> We're running 8.1beta3 on one server and are having ridiculous performance
> issues. This is a 2 cpu Opteron box and both processors are staying at 98
> or 99% utilization processing not-that-complex queries. Prior to the
> upgrade, our I/
On Mon, 2005-07-11 at 19:07 +0100, Enrico Weigelt wrote:
> I've got a similar problem: I have to match different datatypes,
> ie. bigint vs. integer vs. oid.
>
> Of course I tried to use casted index (aka ON (foo::oid)), but
> it didn't work.
Don't include the cast in the index definition, inc
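A rough sketch of the usual workaround on older releases (hypothetical names;
the idea is to index the plain column and cast the comparison value to the
column's type, rather than building the index on a cast expression):

    CREATE INDEX foo_big_id_idx ON foo (big_id);    -- big_id is bigint
    SELECT * FROM foo WHERE big_id = 42::bigint;    -- cast the literal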
On Mon, 2005-12-05 at 09:42 +0200, Howard Oblowitz wrote:
> I am trying to run a query that selects 26 million rows from a
> table with 68 byte rows.
>
> When run on the Server via psql the following error occurs:
>
> calloc : Cannot allocate memory
That's precisely what I'd expect: the backend
On Fri, 2006-01-13 at 15:10 -0500, Michael Stone wrote:
> OIDs seem to be on their way out, and most of the time you can get a
> more helpful result by using a serial primary key anyway, but I wonder
> if there's any extension to INSERT to help identify what unique id a
> newly-inserted key will ge
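One long-standing way to get the generated key, sketched with a hypothetical
serial column and the sequence it implicitly creates:

    CREATE TABLE t (id serial PRIMARY KEY, val text);
    INSERT INTO t (val) VALUES ('hello');
    -- currval() reports the id just assigned within this session
    SELECT currval('t_id_seq');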
On Fri, 2006-01-20 at 18:14 +0900, James Russell wrote:
> I am looking to speed up performance, and since each page executes a
> static set of queries where only the parameters change, I was hoping
> to take advantage of stored procedures since I read that PostgreSQL
> caches the execution plans
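A minimal sketch of preparing such a parameterized query once per connection
and re-executing it (hypothetical table and parameters):

    PREPARE page_query (text, int) AS
        SELECT * FROM items WHERE category = $1 AND page_no = $2;
    EXECUTE page_query('books', 1);
    EXECUTE page_query('music', 2);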
On Wed, 2006-02-15 at 18:28 -0500, Tom Lane wrote:
> It seems clear that our qsort.c is doing a pretty awful job of picking
> qsort pivots, while glibc is mostly managing not to make that mistake.
> I haven't looked at the glibc code yet to see what they are doing
> differently.
glibc qsort is act
On Thu, 2006-02-16 at 12:35 +0100, Steinar H. Gunderson wrote:
> glibc-2.3.5/stdlib/qsort.c:
>
> /* Order size using quicksort. This implementation incorporates
> four optimizations discussed in Sedgewick:
>
> I can't see any references to merge sort in there at all.
stdlib/qsort.c defin
On Mon, 2004-04-05 at 11:36, Josh Berkus wrote:
> Unfortunately, these days only Tom and Neil seem to be seriously working on
> the query planner (beg pardon in advance if I've missed someone)
Actually, Tom is the only person actively working on the planner --
while I hope to contribute to it in
On Wed, 2004-05-12 at 05:02, Shridhar Daithankar wrote:
> I agree. For shared buffers start with 5000 and increase in batches of 1000. Or
> set it to a high value and check with ipcs for maximum shared memory usage. If
> shared memory usage peaks at 100MB, you don't need more than say 120MB of buf
On Fri, 2004-05-14 at 17:08, Jaime Casanova wrote:
> is there any diff. in performance if i use smallint in place of integer?
Assuming you steer clear of planner deficiencies, smallint should be
slightly faster (since it consumes less disk space), but the performance
difference should be very small.
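A contrived illustration of the storage point (the column layouts are
assumptions, not anything from the original thread):

    -- int2 stores 2 bytes per value, int4 stores 4 ...
    CREATE TABLE narrow (a int2, b int8);
    CREATE TABLE wide   (a int4, b int8);
    -- ... but alignment padding before the int8 column can eat the saving,
    -- so measure with your actual row layout before deciding.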
Eugeny Balakhonov wrote:
I tried to run a simple query:
select * from files_t where parent =
Use this instead:
select * from files_t where parent = '';
("parent = ::int8" would work as well.)
PostgreSQL (< 7.5) won't consider using an indexscan when the predicate
involves an integer literal compared against an int8 column: the bare literal
is typed as int4, and the cross-type comparison doesn't match the index.
Rosser Schwarz wrote:
PostgreSQL uses the operating system's disk cache.
... in addition to its own buffer cache, which is stored in shared
memory. You're correct though, in that the best practice is to keep the
PostgreSQL cache small and give more memory to the operating system's
disk cache.
P
Christopher Browne wrote:
One of our sysadmins did all the "configuring OS stuff" part; I don't
recall offhand if there was a need to twiddle something in order to
get it to have great gobs of shared memory.
FWIW, the section on configuring kernel resources under various
Unixen[1] doesn't have any
Tom Lane wrote:
Markus Schaber <[EMAIL PROTECTED]> writes:
So, now my question is, why does the query optimizer not recognize that
it can throw away those "non-unique" Sort/Unique passes?
Because the issue doesn't come up often enough to justify expending
cycles to check for it.
How many cycles are
On Thu, 2004-09-23 at 05:59, Tom Lane wrote:
> I think this would allow the problems of cached plans to bite
> applications that were previously not subject to them :-(.
> An app that wants plan re-use can use PREPARE to identify the
> queries that are going to be re-executed.
I agree; if you want
On Mon, 2004-09-20 at 17:57, Guy Thornley wrote:
> According to the manpage, O_DIRECT implies O_SYNC:
>
> File I/O is done directly to/from user space buffers. The I/O is
> synchronous, i.e., at the completion of the read(2) or write(2)
> system call, data is guaranteed to
On Thu, 2004-10-14 at 04:57, Mark Wong wrote:
> I have some DBT-3 (decision support) results using Gavin's original
> futex patch fix.
I sent an initial description of the futex patch to the mailing lists
last week, but it never appeared (from talking to Marc I believe it
exceeded the size limit
On Fri, 2004-10-15 at 04:38, Igor Maciel Macaubas wrote:
> I have around 100 tables, and divided them in 14 different schemas,
> and then adapted my application to use schemas as well.
> I could perceive that the query / insert / update times got pretty much
> faster than when I was using the old un
On Tue, 2004-09-28 at 08:42, Gaetano Mendola wrote:
> Now I'm reading an article, written by the same author that inspired the magic "300"
> in analyze.c, about "Self-tuning Histograms". If this is implemented, I understand
> we can get rid of "vacuum analyze" for keeping the statistics up to date.
On Thu, 2004-10-07 at 08:26, Paul Ramsey wrote:
> The shared_buffers are shared (go figure) :). It is all one pool shared
> by all connections.
Yeah, I thought this was pretty clear. Doug, can you elaborate on where
you saw the misleading docs?
> The sort_mem and vacuum_mem are *per*connection*
Matt Clark wrote:
I'm thinking along the lines of an FS that's aware of PG's strategies and
requirements and therefore optimised to make those activities as efficient
as possible - possibly even being aware of PG's disk layout and treating
files differently on that basis.
As someone else noted, thi
On Mon, 2004-10-25 at 17:17, Curt Sampson wrote:
> When you select all the columns, you're going to force it to go to the
> table. If you select only the indexed column, it ought to be able to use
> just the index, and never read the table at all.
Perhaps in other database systems, but not in Post
On Mon, 2004-11-01 at 11:01, Josh Berkus wrote:
> > Gist indexes take a long time to create as compared
> > to normal indexes is there any way to speed them up ?
> >
> > (for example by modifying sort_mem or something temporarily )
>
> More sort_mem will indeed help.
How so? sort_mem improves ind
On Fri, 2004-11-05 at 06:20, Steinar H. Gunderson wrote:
> You mean, like, open(filename, O_DIRECT)? :-)
This disables readahead (at least on Linux), which is certainly not what we
want: for the very case where we don't want to keep the data in cache
for a while (sequential scans, VACUUM), we also want
On Thu, 2004-11-04 at 23:29, Pierre-Frédéric Caillaud wrote:
> There is also the fact that syncing after every transaction could be
> changed to syncing every N transactions (N fixed or depending on the data
> size written by the transactions) which would be more efficient than the
> cu
On Fri, 2004-11-05 at 02:47, Chris Browne wrote:
> Another thing that would be valuable would be to have some way to say:
>
> "Read this data; don't bother throwing other data out of the cache
>to stuff this in."
This is similar, although not exactly the same thing:
http://www.opengroup.or
Dawid Kuroczko wrote:
Side question: Do TEMPORARY tables operations end up in PITR log?
No.
-Neil
Josh Berkus wrote:
I was under the impression that work_mem would be used for the index if there
was an index for the RI lookup. Wrong?
Yes -- work_mem is not used for doing index scans, whether for RI
lookups or otherwise.
-Neil
On Fri, 2004-11-26 at 14:37 +1300, Andrew McMillan wrote:
> In PostgreSQL the UPDATE will result
> internally in a new record being written, with the old record being
> marked as deleted. That old record won't be re-used until after a
> VACUUM has run, and this means that the on-disk tables will h
On Sat, 2005-02-05 at 14:42 -0500, Tom Lane wrote:
> Marinos Yannikos <[EMAIL PROTECTED]> writes:
> > Some more things I tried:
>
> You might try the attached patch (which I just applied to HEAD).
> It cuts down the number of acquisitions of the BufMgrLock by merging
> adjacent bufmgr calls during
Magnus Hagander wrote:
You can *never* get above 80 without using write cache, regardless of
your OS, if you have a single disk.
Why? Even with, say, a 15K RPM disk? Or the ability to fsync() multiple
concurrently-committing transactions at once?
-Neil
Magnus Hagander wrote:
Yes, fsync=false is very good for bulk loading *IFF* you can live with
data loss in case you get a crash during load.
It's not merely data loss -- you could encounter potentially
unrecoverable database corruption.
There is a TODO item about allowing the delaying of WAL writ
Bruno Wolff III wrote:
Functions are just black boxes to the planner.
... unless the function is a SQL function that is trivial enough for the
planner to inline it into the plan of the invoking query. Currently, we
won't inline set-returning SQL functions that are used in the query's
rangetable,
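For example, a trivial SQL function like this (hypothetical) one can be
expanded inline, so the planner sees the underlying expression rather than a
black box:

    CREATE FUNCTION discounted(price numeric) RETURNS numeric AS
        'SELECT $1 * 0.9' LANGUAGE sql IMMUTABLE;
    -- can be planned much as if it were written price * 0.9
    SELECT discounted(price) FROM products;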
On Wed, Aug 06, 2003 at 03:03:41PM -0300, Wilson A. Galafassi Jr. wrote:
> I'm installing PostgreSQL under Linux for better performance and I want to know
> what the best configuration is.
> 1. What is the best Linux distribution for better performance?
The Linux distribution itself isn't that important.
On Tue, Aug 12, 2003 at 12:52:46AM -0400, Bruce Momjian wrote:
> I don't use Linux and was just repeating what I had heard from others,
> and read in postings. I don't have any first-hand experience with ext2
> (except for a laptop I borrowed that wouldn't boot after being shut
> off), but others o
On Wed, Aug 06, 2003 at 12:45:34AM -0400, Tom Lane wrote:
> For core code, the answer would be a big NYET. We do not do feature
> additions in point releases, only bug fixes. While contrib code is more
> under the author's control than the core committee's control, I'd still
> say that you'd be m
On Mon, Aug 11, 2003 at 06:59:30PM -0400, Bruce Momjian wrote:
> Uh, the ext2 developers say it isn't 100% reliable --- at least that is
> that was told. I don't know any personally, but I mentioned it while I
> was visiting Red Hat, and they didn't refute it.
IMHO, if we're going to say "don't u
On Tue, Aug 26, 2003 at 02:18:23AM -0700, Bupp Phillips wrote:
> Is there a way to get server side cursors with Postgresql like SQLServer has
> or is this a limitation that it has?
http://www.postgresql.org/docs/7.3/static/sql-declare.html
http://www.postgresql.org/docs/7.3/static/sql-fetch.html
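A bare-bones example of the DECLARE/FETCH pattern those pages describe
(hypothetical table name):

    BEGIN;
    DECLARE big_cur CURSOR FOR SELECT * FROM big_table;
    FETCH 100 FROM big_cur;   -- repeat until no more rows come back
    CLOSE big_cur;
    COMMIT;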
On Tue, Aug 26, 2003 at 03:05:12PM -0400, Jeff wrote:
> On Tue, 26 Aug 2003, Darcy Buskermolen wrote:
> > I'm still seeing differences in the planner estimates, have you run a VACUUM
> > ANALYZE prior to running these tests?
> >
> I did. I shall retry that.. but the numbers (the cost estimates) are
On Wed, Aug 27, 2003 at 09:02:25PM +0530, Shridhar Daithankar wrote:
> IIRC in a kernel release note recently, it was commented that IO scheduler is
> still being worked on and does not perform as well for random seeks, which is
> exactly what a database needs.
Yeah, I've read that as well. It would
On Wed, Aug 27, 2003 at 05:40:05PM -0400, Michael Guerin wrote:
> ex. query: select * from x where id in (select id from y);
>
> There's an index on each table for id. SQL Server takes <1s to return,
> postgresql doesn't return at all, neither does explain analyze.
This particular form of query
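On releases before 7.4 the usual workaround was to rewrite the IN as an
EXISTS, roughly (using the names from the quoted query):

    SELECT * FROM x
    WHERE EXISTS (SELECT 1 FROM y WHERE y.id = x.id);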
On Wed, 2003-09-03 at 21:32, Rudi Starcevic wrote:
> Hmm ... Sorry I'm not sure then. I only use Linux with PG.
> Even though it's 'brand new' you still need to Analyze so that any
> Indexes etc. are built.
ANALYZE doesn't build indexes, it only updates the statistics used by
the query optimizer
On Wed, 2003-09-03 at 15:32, Naveen Palavalli wrote:
> shared_buffers = 200
If you're using a relatively modern machine, this is probably on the low
side.
> 1) Effects related to Vaccum :- I performed 10 trials of adding and
> deleting entries . In each trial , 1 client adds 10,000 entries and
On Thu, 2003-09-04 at 13:46, Rod Taylor wrote:
> Run a VACUUM FULL ANALYZE between runs. This will force a full scan of
> all data for stats
It will? Are you sure about that?
-Neil
On Thu, 2003-09-04 at 22:13, Relaxin wrote:
> Finally, someone who will actually assume/admit that it is returning the
> entire result set to the client.
> Whereas other DBMSs manage the records at the server.
Is there a reason you can't use cursors (explicitly, or via ODBC if it
provides some gl
On Fri, 2003-09-05 at 14:18, Relaxin wrote:
> Except that Declare/Fetch only creates a forward-only cursor; you can't go
> backwards through the result set.
No, DECLARE can create scrollable cursors, read the ref page again. This
functionality is much improved in PostgreSQL 7.4, though.
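For reference, a scrollable cursor looks roughly like this (hypothetical table
name):

    BEGIN;
    DECLARE c SCROLL CURSOR FOR SELECT * FROM t ORDER BY id;
    FETCH FORWARD 10 FROM c;
    FETCH BACKWARD 5 FROM c;
    CLOSE c;
    COMMIT;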
-Neil
On Fri, 2003-09-05 at 06:07, Richard Huxton wrote:
> PG's parser will assume an explicit number is an int4 - if you need an int8
> etc you'll need to cast it, yes.
Or enclose the integer literal in single quotes.
> You should find plenty of discussion of why in the archives, but the short
> rea
On Mon, 2003-09-08 at 11:56, scott.marlowe wrote:
> Basically, Postgresql uses an MVCC locking system that makes massively
> parallel operation possible, but costs in certain areas, and one of those
> areas is aggregate performance over large sets. MVCC makes it very hard
> to optimize all but
On Wed, 2003-10-01 at 13:45, Dimitri Nagiev wrote:
> template1=# explain analyze select * from mytable where
> mydate>='2003-09-01';
> QUERY PLAN
On Fri, 2003-10-03 at 14:08, Josh Berkus wrote:
> I can tell you from experience that you will get some odd behaviour, and even
> connection failures, when Postgres is forced into swap by lack of memory.
Why would you get a connection failure? And other than poor performance,
why would you get "o
On Fri, 2003-10-03 at 17:47, Rob Nagler wrote:
> They don't deadlock normally,
> only with reindex and vacuum did I see this behavior.
If you can provide a reproducible example of a deadlock induced by
REINDEX + VACUUM, that would be interesting.
(FWIW, I remember noticing a potential deadlock in
On Fri, 2003-10-03 at 17:34, Christopher Browne wrote:
> Not surprising either. While the reindex takes place, updates to that
> table have to be deferred.
Right, but that's no reason not to let SELECTs proceed, for example.
(Whether that would actually be *useful* is another question...)
-Neil
On Sat, 2003-10-04 at 11:22, Andrew Sullivan wrote:
> Also, a vacuum pretty much destroys your shared buffers, so you have
> to be aware of that trade-off too.
True, although there is no reason that this necessarily needs to be the
case (at least, as far as the PostgreSQL shared buffer goes). As has
On Sun, 2003-10-05 at 19:43, Tom Lane wrote:
> This would be relatively easy to fix as far as our own buffering is
> concerned, but the thing that's needed to make it really useful is
> to prevent caching of seqscan-read pages in the kernel disk buffers.
True.
> I don't know any portable way to d
On Sun, 2003-10-05 at 19:50, Neil Conway wrote:
> On Sun, 2003-10-05 at 19:43, Tom Lane wrote:
> > This would be relatively easy to fix as far as our own buffering is
> > concerned, but the thing that's needed to make it really useful is
> > to prevent caching of seqsca
On Wed, 2003-10-08 at 08:36, Jeff wrote:
> So here's the results using my load tester (single connection per beater,
> repeats the query 1000 times with different input each time (we'll get
> ~20k rows back), the query is a common query around here.
What is the query?
> Linux - 1x - 35 seconds, 2
On Wed, 2003-10-08 at 10:48, Andrew Sullivan wrote:
> My worry about this test is that it gives us precious little
> knowledge about concurrent connection slowness, which is where I find
> the most significant problems.
As Jeff points out, the second set of results is for 20 concurrent
connections
On Wed, 2003-10-08 at 11:46, Jeff wrote:
> Yeah - like I expected it was able to generate much better code for
> _bt_checkkeys which was the #1 function in gcc on both sun & linux.
>
> and as you can see, suncc was just able to generate much nicer code.
What CFLAGS does configure pick for gcc? Fr
On Wed, 2003-10-08 at 14:05, Josh Berkus wrote:
> Hmmm ... both, I think. The Install Docs should have:
>
> "Here are the top # things you will want to adjust in your PostgreSQL.conf:
> 1) Shared_buffers
> 2) Sort_mem
> 3) effective_cache_size
> 4) random_page_cost
> 5) Fsync
> etc."
> Bar
On Wed, 2003-10-08 at 11:02, Jeff wrote:
> The boss cleared my de-company info-ified pg presentation.
Slide 37: as far as I know, reordering of outer joins is not implemented
in 7.4
-Neil
On Wed, 2003-10-08 at 15:38, Jeff wrote:
> Huh. I could have sworn Tom did something like that.
> Perhaps I am thinking of something else.
> You had to enable some magic GUC.
Perhaps you're thinking of the new GUC var join_collapse_limit, which is
related, but doesn't affect the reordering of outer joins.
On Wed, 2003-10-08 at 14:31, Bruce Momjian wrote:
> Well, this is really embarrassing. I can't imagine why we would not set
> at least -O on all platforms.
ISTM the most legitimate reason for not enabling compiler
optimizations on a given compiler/OS/architecture combination is that they might
cause compilation problems.
On Wed, 2003-10-08 at 21:44, Bruce Momjian wrote:
> Agreed. Do we set them all to -O2, then remove it from the ones we
> don't get successful reports on?
I took the time to compile CVS tip with a few different machines from
HP's TestDrive program, to see if there were any regressions using the
ne
On Mon, 2003-10-13 at 15:43, David Griffiths wrote:
> Here are part of the contents of my sysctl.conf file (note that I've
> played with values as low as 60 with no difference)
> kernel.shmmax=14
> kernel.shmall=14
This is only a system-wide limit -- it either allows the share
On Sat, 2003-10-25 at 13:49, Reece Hart wrote:
> Having to explicitly cast criterion is very non-intuitive. Moreover,
> it seems quite straightforward that PostgreSQL might incorporate casts
This is a well-known issue with the query optimizer -- search the
mailing list archives for lots more infor
On Mon, 2003-10-27 at 10:15, Tarhon-Onu Victor wrote:
> select count(*) from items where channel <
> 5000; will never use any of the current indexes because none matches
> your WHERE clause (channel appears now only in multicolumn indexes).
No -- a multi-column index can be used to answer queries that constrain only
its leading column(s).
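A quick illustration (the second index column is just a placeholder):

    CREATE INDEX items_channel_other_idx ON items (channel, other_col);
    -- a predicate on the leading column alone can still use this index
    EXPLAIN SELECT count(*) FROM items WHERE channel < 5000;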
On Sun, 2003-10-26 at 22:49, Greg Stark wrote:
> What version of postgres is this?. In 7.4 (and maybe 7.3?) count() uses an
> int8 to store its count so it's not limited to 4 billion records.
> Unfortunately int8 is somewhat inefficient as it has to be dynamically
> allocated repeatedly.
Uh, what?
On Fri, 2003-10-24 at 20:11, Allen Landsidel wrote:
> However, I do the same thing with the reindex, so I'll definitely be taking
> it out there, as that one does lock.. although I would think the worst this
> would do would be a making the index unavailable and forcing a seq scan..
> is that no
On Mon, 2003-10-27 at 12:56, Greg Stark wrote:
> Neil Conway <[EMAIL PROTECTED]> writes:
> > Uh, what? Why would an int8 need to be "dynamically allocated
> > repeatedly"?
>
> Perhaps I'm wrong, I'm extrapolating from a comment Tom Lane made that
On Mon, 2003-10-27 at 13:52, Tom Lane wrote:
> Greg is correct. int8 is a pass-by-reference datatype and so every
> aggregate state-transition function cycle requires at least one palloc
> (to return the function result).
Interesting. Is there a reason why int8 is pass-by-reference? (ISTM that
pa
On Fri, 2003-10-31 at 13:27, Allen Landsidel wrote:
> I had no idea analyze was playing such a big role in this sense.. I really
> thought that other than saving space, it wasn't doing much for tables that
> don't have indexes on them.
ANALYZE doesn't save any space at all -- VACUUM is probably what you're thinking of.
On Fri, 2003-10-31 at 11:37, Greg Stark wrote:
> My understanding is that the case where HT hurts is precisely your case. When
> you have two real processors with HT the kernel will sometimes schedule two
> jobs on the two virtual processors on the same real processor leaving the two
> virtual proc