On Wed, 2004-02-11 at 16:36, [EMAIL PROTECTED] wrote:
> Hi, all
>
> What should I do if I want to have two completely separated databases in
> PostgreSQL? I want each database to have its own data, log and
> everything needed to access that database. I don't want them to share
> anything. Has anyone
On Thu, 27 May 2004, Alvaro Herrera wrote:
> On Wed, May 26, 2004 at 04:35:52PM -0700, Stephan Szabo wrote:
>
> > On Wed, 26 May 2004, Alvaro Herrera wrote:
> >
> > > I'm missing one item: deferred triggers. The problem with this is that
> > > the deftrig queue is not implemented using normal Lists
On Wed, 26 May 2004, Tom Lane wrote:
> if you have to do an UPDATE that affects every row of a large table
>
> UPDATE tab SET col = col + 1
>
> which leaves you with N live rows, N dead rows, and lots of pain to get
> back down to a less-than-twice-normal-size table. (Traditional way is
>
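A minimal sketch of the failure mode Tom describes, using the placeholder names "tab" and "col" from his example:

    UPDATE tab SET col = col + 1;  -- every row gets a new version; the old versions stay behind as dead tuples
    VACUUM tab;                    -- marks the dead space reusable for future rows, but does not shrink the file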
On Wed, May 26, 2004 at 04:35:52PM -0700, Stephan Szabo wrote:
> On Wed, 26 May 2004, Alvaro Herrera wrote:
>
> > I'm missing one item: deferred triggers. The problem with this is that
> > the deftrig queue is not implemented using normal Lists, so there's no
> > efficient way to reassign to the parent when the subtransaction commits.
Tom Lane wrote:
> Manfred Spraul <[EMAIL PROTECTED]> writes:
> > But what about kerberos: I'm a bit reluctant to add a fourth mutex: what
> > if kerberos calls gethostbyname or getpwuid internally?
>
> Wouldn't help anyway, if some other part of the app also calls kerberos.
> I think we should jus
Tom Lane wrote:
of dead tuples followed by a lot of pages worth of live tuples. Plain
VACUUM cannot do much to fix this since it doesn't move rows around.
VACUUM FULL will fix it, but its index-update overhead is high enough
that CLUSTER is a better deal.
Tom: I was interested in performance impro
On May 26, 2004, at 7:14 PM, [EMAIL PROTECTED] wrote:
[EMAIL PROTECTED] writes:
First, we keep the standard PostgreSQL directory the way it has always
been with template0, template1, pg_xlog, pg_clog, etc. in the same place.
We can refer to this as the "system" directory. This makes sense because
David Blasby <[EMAIL PROTECTED]> writes:
> I have another copy of this table in another database - vacuum analyse
> verbose says its "only" 1,500,000 pages (vs 2,800,000).
Hmm ... this is consistent with the idea that you did an UPDATE
affecting every row of the table. That would generate N new
Christopher Kings-Lynne <[EMAIL PROTECTED]> writes:
>> It wouldn't; you'd need vacuum full to collapse out the dead space.
>> You could also try CLUSTER which is faster than VACUUM FULL when most
>> of the data has to be moved anyway. (Some days I think we should dump
>> VACUUM FULL, because it's optimized for a case that's no longer very interesting...)
BTW: Is there a public spec on what will be tablespace compatible and how?
For instance: will it be possible to create a table in a tablespace
separate from the DB's? Will it be possible to create an index in a
tablespace separate from its table's?
(Since Gavin hasn't replied yet)
1. There are two defa
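For the record, the tablespace syntax that eventually shipped in 8.0 allows both of the things asked about above. A minimal sketch; the path, table, and index names are invented for illustration, and pg_default is the built-in default tablespace:

    CREATE TABLESPACE fastdisk LOCATION '/mnt/fast/pgdata';

    -- a table placed in a tablespace other than its database's default
    CREATE TABLE big_table (id integer) TABLESPACE fastdisk;

    -- an index placed in a different tablespace than its table
    CREATE INDEX big_table_id_idx ON big_table (id) TABLESPACE pg_default;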
It wouldn't; you'd need vacuum full to collapse out the dead space.
You could also try CLUSTER which is faster than VACUUM FULL when most
of the data has to be moved anyway. (Some days I think we should dump
VACUUM FULL, because it's optimized for a case that's no longer very
interesting...)
Out o
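The two recovery paths being compared, written out in the CLUSTER syntax of that era (table and index names are placeholders; later releases also spell the second one CLUSTER tab USING tab_pkey):

    VACUUM FULL tab;          -- compacts the table in place; every moved row forces index updates, hence the cost
    CLUSTER tab_pkey ON tab;  -- rewrites the table and its indexes from scratch, in index order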
On Wed, 26 May 2004, Alvaro Herrera wrote:
> I'm missing one item: deferred triggers. The problem with this is that
> the deftrig queue is not implemented using normal Lists, so there's no
> efficient way to reassign to the parent when the subtransaction commits.
> Also I'm not sure what should
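For readers unfamiliar with the deferred-trigger queue being discussed: it holds the per-row check events generated by deferrable constraints, which fire only at commit. A minimal sketch with invented table names:

    CREATE TABLE parent (id integer PRIMARY KEY);
    CREATE TABLE child  (parent_id integer REFERENCES parent (id)
                         DEFERRABLE INITIALLY DEFERRED);

    BEGIN;
    INSERT INTO child  VALUES (1);  -- FK check event goes into the deferred-trigger queue, no error yet
    INSERT INTO parent VALUES (1);
    COMMIT;                         -- the queued check fires here and passes

The question in the thread is which queue such events should land in when they are generated inside a subtransaction that later commits or aborts.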
> David Blasby <[EMAIL PROTECTED]> writes:
>> I just did another vacuum analyse on the table:
>
> Ah, here we go:
>
>> INFO: "csn_edges": found 0 removable, 16289929 nonremovable row
>> versions in 2783986 pages
>
> That works out to just under 6 rows per 8K page, which wouldn't be too
> bad if the rows are 1K wide on average, but are they?
> [EMAIL PROTECTED] writes:
>> First, we keep the standard PostgreSQL directory the way it has always
>> been with template0, template1, pg_xlog, pg_clog, etc. in the same
>> place.
>> We can refer to this as the "system" directory. This makes sense because
>> all the system level stuff is there. U
Tom Lane wrote:
INFO: "csn_edges": found 0 removable, 16289929 nonremovable row
versions in 2783986 pages
That works out to just under 6 rows per 8K page, which wouldn't be too
bad if the rows are 1K wide on average, but are they? (You might want
to run contrib/pgstattuple to get some exact inf
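The arithmetic behind "just under 6 rows per 8K page", plus the pgstattuple call Tom suggests; the CREATE EXTENSION step is the modern way to install the contrib module (in 2004 it was loaded from an SQL script):

    SELECT 16289929.0 / 2783986 AS rows_per_page;   -- about 5.85 live rows per 8K page

    CREATE EXTENSION pgstattuple;
    SELECT * FROM pgstattuple('csn_edges');         -- exact live/dead tuple counts and free space for the table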
David Blasby <[EMAIL PROTECTED]> writes:
> I just did another vacuum analyse on the table:
Ah, here we go:
> INFO: "csn_edges": found 0 removable, 16289929 nonremovable row
> versions in 2783986 pages
That works out to just under 6 rows per 8K page, which wouldn't be too
bad if the rows are 1K wide on average, but are they?
David Blasby <[EMAIL PROTECTED]> writes:
> I have a table with about 16,000,000 rows in it.
> When I do a:
> SELECT * FROM LIMIT 1;
> it takes about 10 minutes (that's about how long it takes to do a full
> sequential scan).
The only explanation that comes to mind is huge amounts of dead space
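A cheap cross-check for this kind of bloat is to compare the page and row counts left in pg_class; the figures are only as fresh as the last VACUUM or ANALYZE, and the table name here is taken from the vacuum output quoted elsewhere in the thread:

    SELECT relpages,                             -- 8K pages the table occupies on disk
           reltuples,                            -- estimated live rows
           reltuples / relpages AS rows_per_page -- (assumes the table has been vacuumed/analyzed at least once)
    FROM pg_class
    WHERE relname = 'csn_edges';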
Hackers,
Ok, I've finally coded solutions to most problems regarding nested
transactions. This means:
- reversing for the lock manager, catcache, relcache, buffer manager,
asynchronous notifies, storage manager.
- transaction block state support, including appropriate XLog recording
- pg_subt
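For anyone tracking this from the user side: the nested-transaction work described here is what was eventually exposed through the standard SAVEPOINT syntax. A rough sketch, with an invented table:

    BEGIN;
    CREATE TABLE t (id integer);
    INSERT INTO t VALUES (1);
    SAVEPOINT s1;                 -- opens a subtransaction
    INSERT INTO t VALUES (2);
    ROLLBACK TO SAVEPOINT s1;     -- aborts only the subtransaction; the first INSERT survives
    COMMIT;                       -- t now contains just the row (1)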
Gaetano Mendola wrote:
David Blasby wrote:
I have a table with about 16,000,000 rows in it.
When I do a:
SELECT * FROM LIMIT 1;
it takes about 10 minutes (that's about how long it takes to do a full
sequential scan).
I had originally thought that there might be a large number of
"wasted/retired"
David Blasby wrote:
I have a table with about 16,000,000 rows in it.
When I do a:
SELECT * FROM LIMIT 1;
it takes about 10 minutes (that's about how long it takes to do a full
sequential scan).
I had originally thought that there might be a large number of
"wasted/retired" tuples in the table so
I have a table with about 16,000,000 rows in it.
When I do a:
SELECT * FROM LIMIT 1;
it takes about 10 minutes (that's about how long it takes to do a full
sequential scan).
I had originally thought that there might be a large number of
"wasted/retired" tuples in the table so I "vacuum analysed"
[EMAIL PROTECTED] writes:
> First, we keep the standard PostgreSQL directory the way it has always
> been with template0, template1, pg_xlog, pg_clog, etc. in the same place.
> We can refer to this as the "system" directory. This makes sense because
> all the system level stuff is there. User datab
Now that it looks like tablespaces will become a practical reality, I want
to suggest some changes in documented procedure and preferred setup. The
reason I suggest these changes is to introduce new PostgreSQL users to
the new features and, perhaps, make a more enterprise-familiar environment
f
Patch applied. Thanks.
Backpatched to 7.4.X.
---
Matthew T. O'Connor wrote:
> This weekend I am trying to fix up all the known pg_autovacuum issues
> that should be resolved for 7.4.3. I am aware of only two issues: tem
>> Ingres is to be released as open source:
>>
>>
>> http://developers.slashdot.org/article.pl?sid=04/05/25/0043219&mode=nested&tid=126&tid=137&tid=163&tid=185&tid=198
>
> Like the article says, I wonder if there is any synergy between the
> products, i.e. can we grab features from their code
On May 26, 2004, at 12:47 AM, Neil Conway wrote:
I've applied the list rewrite patch to CVS HEAD. I've also sent a copy
of the patch I applied to the -patches list.
Nifty.
Do we have any numbers as to how much this will help things?
If not, would something like a pg_bench exercise the new code eno