On Sun, 2005-02-06 at 18:05 -0500, Tom Lane wrote:
> Can anyone think of a reason we aren't inlining MemoryContextSwitchTo()
> in GCC builds, similarly to the way list_head() et al are handled?
>
> It wouldn't be a huge gain, but I consistently see MemoryContextSwitchTo
> eating a percent or three
Tom Lane writes:
> But without a smart idea about data structures I don't see how to do
> better.
>
> Thoughts?
>
Hmm, seems like you summed up the lack of algorithmic choices very well.
After much thought, I believe the best way is to implement bufferpools
(BPs). That is, we don't just have one bufferhash and one LRU, we have
many pairs.
Neil Conway writes:
> We're only concerned with a buffer's access recency when it is on the
> free list, right? (That is certainly true with naive LRU; I confess I
> haven't had a chance to grok the 2Q paper yet). If so:
> - we need only update a pinned buffer's LRU position when its shared
> refcount
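A minimal sketch of that scheme (all names here are hypothetical, not
bufmgr.c's): the buffer's recency is recorded only when its shared
refcount falls to zero and it returns to the free list.

    typedef struct BufferDesc
    {
        int         refcount;       /* shared pin count */
        /* ... tag, flags, list links ... */
    } BufferDesc;

    extern void MoveToFreeListMRU(BufferDesc *buf);     /* hypothetical */

    static void
    UnpinBuffer(BufferDesc *buf)
    {
        if (--buf->refcount == 0)
            MoveToFreeListMRU(buf);     /* LRU position updated only here */
    }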
Karel Zak wrote:
> On Sun, 2005-02-06 at 18:05 -0500, Tom Lane wrote:
> > Can anyone think of a reason we aren't inlining MemoryContextSwitchTo()
> > in GCC builds, similarly to the way list_head() et al are handled?
> >
> > It wouldn't be a huge gain, but I consistently see MemoryContextSwitchTo
Hi
As VACUUM is not something that can be rolled back, could we not make it
run completely outside transactions? It already needs to be run outside
a transaction block.
I try to explain the problem more thoroughly below (I'm quite sleepy, so
the explanation may not be too clear ;)
My problem is
> [EMAIL PROTECTED] writes:
>> One of the things that is disturbing to me about the analyze settings is
>> that it wants to sample the same number of records from a table regardless
>> of the size of that table.
>
> The papers that I looked at say that this rule has a good solid
> statistical foundation, at least for the case of estimating histograms.
"Simon Riggs" <[EMAIL PROTECTED]> writes:
> But why MemoryContextSwitchTo?
Because (a) it's so small that inlining it will probably be a net code
savings rather than expenditure, and (b) it does have noticeable cost.
For example, in this gprof profile taken Saturday:
% cumulative self
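The pattern being proposed, sketched after the way list_head() et al are
handled; the declarations below are illustrative, not the actual header
contents:

    typedef struct MemoryContextData *MemoryContext;

    extern MemoryContext CurrentMemoryContext;

    #if defined(__GNUC__)
    static __inline__ MemoryContext
    MemoryContextSwitchTo(MemoryContext context)
    {
        MemoryContext old = CurrentMemoryContext;

        CurrentMemoryContext = context;
        return old;
    }
    #else
    extern MemoryContext MemoryContextSwitchTo(MemoryContext context);
    #endif

With GCC the call collapses to roughly a load and a store at each call
site, which is why inlining can be a net code-size savings.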
"Simon Riggs" <[EMAIL PROTECTED]> writes:
> After much thought, I believe the best way is to implement bufferpools
> (BPs). That is, we don't just have one bufferhash and one LRU, we have
> many pairs. We then work out some mapping by which a block can be known
> to be in a particular BP, then acquire
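One way the mapping could look, as a sketch with invented names and
constants:

    #define NUM_BUFFER_POOLS 16

    typedef struct BufferTag
    {
        unsigned    relNode;        /* which relation */
        unsigned    blockNum;       /* which block within it */
    } BufferTag;

    static unsigned
    BufferPoolFor(const BufferTag *tag)
    {
        /* any cheap mix of the tag fields works; modulo picks the pool */
        return (tag->relNode ^ (tag->blockNum * 2654435761u))
               % NUM_BUFFER_POOLS;
    }

Backends touching blocks in different pools then contend on different
locks and different LRU lists.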
Hannu Krosing <[EMAIL PROTECTED]> writes:
> As VACUUM is not something that can be rolled back, could we not make it
> run completely outside transactions.
No, because it has to be able to hold a table-level lock on the target
table. Besides, where did you get the idea that it can't be rolled back?
> [EMAIL PROTECTED] writes:
>> On a very basic level, why bother sampling the whole table at all? Why not
>> check one block and infer all information from that? Because we know that
>> isn't enough data. In a table of 4.6 million rows, can you say with any
>> mathematical certainty that a sample
[EMAIL PROTECTED] writes:
> On a very basic level, why bother sampling the whole table at all? Why not
> check one block and infer all information from that? Because we know that
> isn't enough data. In a table of 4.6 million rows, can you say with any
> mathematical certainty that a sample of 100 p
On 1/25/2005 6:23 PM, Marc G. Fournier wrote:
On Tue, 25 Jan 2005, Bruce Momjian wrote:
pgman wrote:
Not yet --- I suggested it but didn't get any yeas or nays. I don't
feel this is solely core's decision anyway ... what do the assembled
hackers think?
I am not in favor of adjusting the 8.1 release
On Mon, Feb 07, 2005 at 11:27:59 -0500,
[EMAIL PROTECTED] wrote:
>
> It is inarguable that increasing the sample size increases the accuracy of
> a study, especially when diversity of the subject is unknown. It is known
> that reducing a sample size increases probability of error in any poll or
Jan Wieck wrote:
No, as an 8.0.x is meant to be for minor changes/fixes/improvements
... 'addressing a patent conflict', at least in ARC's case, is a major
change, which is why we are looking at a short dev cycle for 8.1 ...
Then we better make sure that 8.0 -> 8.1 does not require dump&reload.
On a fine day (Monday, 7 February 2005, 10:51 -0500), Tom Lane wrote:
> Hannu Krosing <[EMAIL PROTECTED]> writes:
> > As VACUUM is not something that can be rolled back, could we not make it
> > run completely outside transactions.
>
> No, because it has to be able to hold a table-level lock
Andrew Dunstan <[EMAIL PROTECTED]> writes:
> Jan Wieck wrote:
>> Then we better make sure that 8.0 -> 8.1 does not require dump&reload.
> There was some mention of an upgrade tool which would avoid the need for
> a dump/restore - did that idea die?
No, but I don't see anyone volunteering to work on it now
At 2005-02-07 12:28:34 -0500, [EMAIL PROTECTED] wrote:
>
> > There was some mention of an upgrade tool which would avoid the need
> > for a dump/restore - did that idea die?
>
> No, but I don't see anyone volunteering to work on it now
I like the idea of having a working pg_upgrade (independent of
> On Mon, Feb 07, 2005 at 11:27:59 -0500,
> [EMAIL PROTECTED] wrote:
>>
>> It is inarguable that increasing the sample size increases the accuracy of
>> a study, especially when diversity of the subject is unknown. It is known
>> that reducing a sample size increases probability of error in
On Mon, Feb 07, 2005 at 13:28:04 -0500,
> >
> > For large populations the accuracy of estimates of statistics based on
> > random samples from that population is not very sensitive to population
> > size and depends primarily on the sample size. So that you would not
> > expect to need to use l
Maybe I am missing something - ISTM that you can increase your
statistics target for those larger tables to obtain a larger (i.e.
better) sample.
regards
Mark
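For example, per Mark's suggestion (table and column names here are
hypothetical; the per-column default target in this era was 10):

    ALTER TABLE bigtab ALTER COLUMN somecol SET STATISTICS 1000;
    ANALYZE bigtab;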
[EMAIL PROTECTED] wrote:
[EMAIL PROTECTED] writes:
Any and all random sampling assumes a degree of uniform distribution. This
is the basis
On Mon, Feb 07, 2005 at 07:16:41PM +0200, Hannu Krosing wrote:
> If all the changes it does to internal data storage can be rolled back,
> then I can't see how VACUUM FULL can work at all without requiring 2x
> the filesize for the ROLLBACK.
I think the point is that the table is still consistent
> On Mon, Feb 07, 2005 at 13:28:04 -0500,
>
> What you are saying here is that if you want more accurate statistics, you
> need to sample more rows. That is true. However, the size of the sample
> is essentially only dependent on the accuracy you need and not the size
> of the population, for large populations.
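A quick numeric check of that claim (my own illustration, not from the
thread): the standard error of a sampled proportion is sqrt(p*(1-p)/n)
scaled by the finite-population correction sqrt((N-n)/(N-1)), and the
correction goes to 1 as N grows, so past a point only n matters.

    #include <math.h>
    #include <stdio.h>

    int
    main(void)
    {
        const double p = 0.5;                   /* worst-case proportion */
        const double n = 3000;                  /* sample size */
        const double N[] = {1e4, 1e6, 1e8};     /* table row counts */

        for (int i = 0; i < 3; i++)
        {
            double fpc = sqrt((N[i] - n) / (N[i] - 1));

            printf("N = %.0e: std error = %.5f\n",
                   N[i], sqrt(p * (1 - p) / n) * fpc);
        }
        return 0;   /* prints ~0.00764, ~0.00912, ~0.00913 */
    }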
On Mon, Feb 07, 2005 at 05:16:56PM -0500, [EMAIL PROTECTED] wrote:
> > On Mon, Feb 07, 2005 at 13:28:04 -0500,
> >
> > What you are saying here is that if you want more accurate statistics, you
> > need to sample more rows. That is true. However, the size of the sample
> > is essentially only dependent
> Maybe I am missing something - ISTM that you can increase your
> statistics target for those larger tables to obtain a larger (i.e.
> better) sample.
No one is arguing that you can't manually do things, but I am not the
first to notice this. I saw the query planner doing something completely
stupid
Hi,
There is something better than a lock-free algorithm: wait-free.
A lock-free algorithm guarantees progress regardless of whether some
processes are delayed or even killed and regardless of scheduling
policies. By definition, a lock-free object must be immune to deadlock
and livelock.
A wait-free algorithm further guarantees that every process completes its
own operation in a finite number of steps, regardless of the other
processes.
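The distinction in miniature (sketched with C11 atomics, a much newer
toolchain than this thread's):

    #include <stdatomic.h>

    static _Atomic unsigned counter;

    void
    inc_lock_free(void)
    {
        unsigned old = atomic_load(&counter);

        /* on failure, 'old' is refreshed with the current value */
        while (!atomic_compare_exchange_weak(&counter, &old, old + 1))
            ;
    }

    void
    inc_wait_free(void)
    {
        atomic_fetch_add(&counter, 1);  /* one hardware op, never retries */
    }

The CAS loop is lock-free (if my CAS fails, it is because someone else's
succeeded), but a single thread can retry indefinitely; fetch_add bounds
every caller's steps.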
Tom Lane <[EMAIL PROTECTED]> writes:
> The papers that I looked at say that this rule has a good solid
> statistical foundation, at least for the case of estimating histograms.
Earlier I was discussing the issue of how to measure cross-column
"correlation" with someone who works on precisely the
What operations does 2Q require on the shared lists? (Assuming that's
the replacement policy we're going with.) Depending on how complex the
list modifications are, non-blocking algorithms might be worth
considering. For example, removing a node from the middle of a linked
list can be done via atomic
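For flavor, the simplest such operation: a CAS push onto the list head
(my sketch, C11 atomics). Removal from the middle of a shared list is far
trickier and needs Harris-style marked pointers or similar to stay
correct under concurrent deletes.

    #include <stdatomic.h>

    typedef struct Node
    {
        struct Node *next;
        int         buf_id;
    } Node;

    static _Atomic(Node *) list_head;

    void
    push(Node *n)
    {
        Node   *head = atomic_load(&list_head);

        do
            n->next = head;     /* 'head' is refreshed if the CAS fails */
        while (!atomic_compare_exchange_weak(&list_head, &head, n));
    }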
> On Mon, Feb 07, 2005 at 05:16:56PM -0500, [EMAIL PROTECTED] wrote:
>> > On Mon, Feb 07, 2005 at 13:28:04 -0500,
>> >
>> > What you are saying here is that if you want more accurate statistics,
>> > you need to sample more rows. That is true. However, the size of the
>> > sample is essential
Hello,
Has anyone seen the following:
http://pecl.php.net/package/PDO
The description from the site:
PDO provides a uniform data access interface, sporting advanced features
such as prepared statements and bound parameters. PDO drivers are
dynamically loadable and may be developed independently from
Tom Lane writes:
> "Simon Riggs" <[EMAIL PROTECTED]> writes:
> > After much thought, I believe the best way is to implement bufferpools
> > (BPs). That is, we don't just have one bufferhash and one LRU, we have
> > many pairs. We then work out some mapping by which a block can be known
> > to be in
I was looking at some stubborn queries in one of our applications, and
not knowing the internals of the query planner, thought I might ask if
this planner improvement is possible at all. We have queries with the
general form of IN (SELECT FROM AA JOIN (SELECT foo UNION ALL SELECT
bar)) clauses.
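Roughly this shape, with all table and column names invented for
illustration:

    SELECT *
    FROM t
    WHERE t.id IN (SELECT aa.id
                   FROM aa
                   JOIN (SELECT foo AS k FROM src1
                         UNION ALL
                         SELECT bar AS k FROM src2) u ON u.k = aa.k);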
[EMAIL PROTECTED] wrote:
In this case, the behavior observed could be changed by altering the
sample size for a table. I submit that an arbitrary fixed sample size is
not a good basis for the analyzer, but that the sample size should be based
on the size of the table or some calculation of its deviation
[EMAIL PROTECTED] writes:
> The basic problem with a fixed sample is that it assumes a normal
> distribution.
That's sort of true, but not in the way you think it is.
What's under discussion here, w.r.t. the histograms and the validity of
sampling, is really basic statistics. You don't have to b
On Mon, Feb 07, 2005 at 02:33:24PM +0700, Premsun Choltanwanich wrote:
> > I'd guess that you haven't installed some third-party modules that
> > you need on the Linux box, or that you've installed them in the wrong
> > place.
>
> I am not sure about the third-party modules, because all of the module
Robby Russell wrote:
> It hasn't been updated since May 2004 though. :-/
Hmm... Well, there must be another home for it then, because it is set to
be the default database API for 5.1. Ah, now I see it has already been
pushed into the PHP
Sincerely,
Joshua D. Drake
Command Prompt, Inc.
503-667-4564
"Joshua D. Drake" <[EMAIL PROTECTED]> writes:
> it would be great if we could get some people to review the pgsql
> driver to make sure it is fully up to snuff.
>
> Anyone up for it? This is our chance to get a really top notch PHP
> driver for PostgreSQL that supports all the appropriate goodies
Arthur Ward <[EMAIL PROTECTED]> writes:
> The problem for us is that the default estimate at the HashAggregate is
> absurdly low, undercutting the other available join candidates' row
> estimates resulting in _bad_ plans. What I was wondering is whether the
> planner has enough information available
Hi everyone,
I had a crash last night, and since then while vacuuming databases (either
full or lazy) I get this error:
duplicate key violates unique constraint "pg_statistic_relid_att_index"
this is 7.4.6 on unixware.
How bad is this?
TIA
--
Olivier PRENANT Tel: +33-5-61-50-97-00 (W
Does anyone know what to do with this?
C:\Downloads\PostGreSQL\v8.0.0\src\interfaces\libpq>"c:\Program Files\Borland\CBuilder6\Bin\make.exe" -DCFG=Release /f bcc32.mak
MAKE Version 5.2 Copyright (c) 1987, 2000 Borland
Building the Win32 DLL and Static Library...
Configuration "Release"
Fatal: '