Here is a new replication documentation section I want to add for 8.2:
ftp://momjian.us/pub/postgresql/mypatches/replication
Comments welcomed.
--
Bruce Momjian [EMAIL PROTECTED]
EnterpriseDB    http://www.enterprisedb.com
+ If your life is a hard drive, Christ can be your backup.
Please disregard. I am redoing it and will post a URL with the most
recent version.
---
Bruce Momjian wrote:
>
> Here is my first draft of a new replication section for our
> documentation. I am looking for any comments.
Here is my first draft of a new replication section for our
documentation. I am looking for any comments.
---
Replication
===========
Database replication allows multiple computers to work together, making
them appear as a
This behavior exists in 8.1.4 and CVS HEAD.
I list below my preexisting schema, a set of commands that behave as I
expect (and result in an ERROR), and a similar set of commands that do
not behave as I expect (and result in a PANIC). Note the position of
"BEGIN" in each.
This is quite finicky be
Jim C. Nasby wrote:
> On Mon, Oct 23, 2006 at 03:08:03PM -0400, Tom Lane wrote:
> > "Jim C. Nasby" <[EMAIL PROTECTED]> writes:
> > > The only case I can think of where autovac might not work as well as
> > > smartvacuum would be if you had a lot of databases in the cluster, since
> > > autovacuum w
On Mon, Oct 23, 2006 at 03:08:03PM -0400, Tom Lane wrote:
> "Jim C. Nasby" <[EMAIL PROTECTED]> writes:
> > The only case I can think of where autovac might not work as well as
> > smartvacuum would be if you had a lot of databases in the cluster, since
> > autovacuum will only vacuum one database a
Tom Lane wrote:
Mark Kirkwood <[EMAIL PROTECTED]> writes:
Right - I think the regression is caused by libc and kernel being built
with gcc 3.4.6 and the test program being built with gcc 4.1.2.
Why do you think that? The performance of the CRC loop shouldn't depend
at all on either libc or t
Tom Lane wrote:
"Jim C. Nasby" <[EMAIL PROTECTED]> writes:
The only case I can think of where autovac might not work as well as
smartvacuum would be if you had a lot of databases in the cluster, since
autovacuum will only vacuum one database at a time.
It's conceivable that it'd make sense to
Mark Kirkwood <[EMAIL PROTECTED]> writes:
> Right - I think the regression is caused by libc and kernel being built
> with gcc 3.4.6 and the test program being built with gcc 4.1.2.
Why do you think that? The performance of the CRC loop shouldn't depend
at all on either libc or the kernel, beca
Benny Amorsen wrote:
"MK" == Mark Kirkwood <[EMAIL PROTECTED]> writes:
MK> Here are the results after building gcc 4.1.2 (repeating results
MK> for gcc 3.4.6 for comparison). I suspect that performance is
MK> probably impacted because gcc 4.1.2 (and also the rest of the
MK> tool-chain) is built
Andrew Dunstan <[EMAIL PROTECTED]> writes:
> Zdenek Kotala wrote:
>> I'm not sure if it is important, but I think that preserving OIDs is
>> important, and the SQL level does not allow setting an OID.
> Does it matter in any case other than where it refers to an on-disk
> object? And does that need anything ot
2. I have a tsearch2 index which is 756MB in size in 8.1.2 but balloons to
910MB in 8.2!
FILLFACTOR?
Tom,
Of course! I had it in my head that fillfactor had to be explicitly set.
But then, after RTFM, it looks like there are defaults! Thank you!
One more inane question, though. Th
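For readers following along, a minimal sketch of what an explicit setting would
look like in 8.2, where fillfactor is an index storage parameter with a built-in
default; the table and index names below are made up for illustration:

-- create a GiST index with an explicit fillfactor rather than the default
CREATE INDEX docs_fti_idx ON docs USING gist (fti)
  WITH (fillfactor = 90);

-- an existing index's setting can be changed later; the new value applies
-- to pages written after the change (for example during a REINDEX)
ALTER INDEX docs_fti_idx SET (fillfactor = 100);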
Zdenek Kotala wrote:
Tom Lane wrote:
The right way to implement pg_upgrade is to transfer the catalog data
at the SQL-command level of abstraction, ie, "pg_dump -s" and reload.
I'm not sure if it is important, but I think that preserving OIDs is
important, and the SQL level does not allow setting an OID.
Tom Lane wrote:
Zdenek Kotala <[EMAIL PROTECTED]> writes:
I'm playing with catalog upgrade. The very basic idea of my experiment
is to export data from the catalog and import it back into a freshly
initialized catalog.
That is never going to work, at least not for any interesting catalogs.
A syste
Zdenek Kotala <[EMAIL PROTECTED]> writes:
>    if (donot_resolve_procname == TRUE)
>    {
>        result = (char *) palloc(NAMEDATALEN);
>        snprintf(result, NAMEDATALEN, "%u", proid);
>    }
What for? If you want numeric OIDs you can have that today by casting
the column to OID. More to the
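As an aside, a small example of the cast Tom mentions, shown against a few
regproc columns of pg_aggregate (column names as found in the 8.x catalogs):

-- regproc columns display as function names; casting to oid shows the
-- numeric value instead
SELECT aggfnoid::oid, aggtransfn::oid, aggfinalfn::oid
FROM pg_aggregate
LIMIT 5;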
Zdenek Kotala <[EMAIL PROTECTED]> writes:
> I'm playing with catalog upgrade. The very basic idea of my experiment
> is to export data from the catalog and import it back into a freshly
> initialized catalog.
That is never going to work, at least not for any interesting catalogs.
A system with a "fre
Tom Lane wrote:
Alvaro Herrera <[EMAIL PROTECTED]> writes:
Hmm, maybe it should be using regprocedure instead?
Not unless you want to break initdb. The only reason regproc still
exists, really, is to accommodate loading of pg_type during initdb.
Guess what: we can't do type lookup at that poi
Zdenek Kotala wrote:
Andrew Dunstan wrote:
Zdenek Kotala wrote:
I tried to use COPY command to export and import tables from catalog
Is it just me or does this seem like a strange thing to want to do? I
am trying to think of a good use case, so far without much success.
I'm playing with
Tom Lane wrote:
> Alvaro Herrera <[EMAIL PROTECTED]> writes:
> > Hmm, maybe it should be using regprocedure instead?
>
> Not unless you want to break initdb. The only reason regproc still
> exists, really, is to accommodate loading of pg_type during initdb.
> Guess what: we can't do type lookup a
Alvaro Herrera <[EMAIL PROTECTED]> writes:
> Hmm, maybe it should be using regprocedure instead?
Not unless you want to break initdb. The only reason regproc still
exists, really, is to accommodate loading of pg_type during initdb.
Guess what: we can't do type lookup at that point.
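For illustration (not part of the original exchange), the practical difference
between the two types, assuming only the standard built-in functions are
installed:

-- regproc resolves by name alone and fails on overloaded names
SELECT 'length'::regproc;                  -- ERROR: more than one function named "length"

-- regprocedure includes the argument types, so the lookup is unambiguous
SELECT 'length(text)'::regprocedure;       -- length(text)
SELECT 'length(text)'::regprocedure::oid;  -- the underlying OID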
Andrew Dunstan wrote:
Zdenek Kotala wrote:
I tried to use COPY command to export and import tables from catalog
Is it just me or does this seem like a strange thing to want to do? I am
trying to think of a good use case, so far without much success.
I'm playing with catalog upgrade. The
Zdenek Kotala wrote:
> I tried to use COPY command to export and import tables from catalog,
> but COPY command has problem with data type regproc. See example
>
> create table test (like pg_aggregate);
> copy pg_aggregate to '/tmp/pg_agg.out';
> copy test from '/tmp/pg_agg.out';
>
> ERROR
Zdenek Kotala wrote:
I tried to use COPY command to export and import tables from catalog
Is it just me or does this seem like a strange thing to want to do? I am
trying to think of a good use case, so far without much success.
cheers
andrew
I tried to use the COPY command to export and import tables from the catalog,
but COPY has a problem with the regproc data type. See the example:
create table test (like pg_aggregate);
copy pg_aggregate to '/tmp/pg_agg.out';
copy test from '/tmp/pg_agg.out';
ERROR: more than one function named "pg_
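One possible workaround sketch, using the COPY (SELECT ...) form added in 8.2
to export the regproc columns as plain OIDs so that the reload never has to
resolve function names; the column list follows the 8.1/8.2 pg_aggregate layout
and would need adjusting for other releases:

-- export regproc columns as numeric OIDs
COPY (SELECT aggfnoid::oid, aggtransfn::oid, aggfinalfn::oid,
             aggsortop, aggtranstype, agginitval
      FROM pg_aggregate)
TO '/tmp/pg_agg.out';

The target table would then need oid columns in place of the regproc ones.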
On Mon, 23 Oct 2006, Tom Lane wrote:
> Hmm. Maybe store the CRCs into a global array somewhere?
>
> uint32 results[NTESTS];
>
> for ...
> {
>     INIT/COMP/FIN_CRC32...
>     results[j] = mycrc;
> }
>
> This still adds a bit of overhead to the outer loo
[EMAIL PROTECTED] writes:
> 1. The release notes indicate "more efficient vacuuming." However, both
> vacuums seem to take about the same amount of time, i.e. approx. 9 hours.
I think the improvements were only in btree index vacuuming, which it
sounds like isn't your big problem.
> 2. I have
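Not from the thread, but one way to see where the vacuum time actually goes:
VACUUM VERBOSE reports each index separately, which makes it easier to tell
whether btree index vacuuming is the bottleneck (the table name here is
hypothetical):

-- per-index pages/tuples removed and CPU/elapsed times appear in the output
VACUUM VERBOSE my_big_table;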
Jeremy Drake <[EMAIL PROTECTED]> writes:
> On Mon, 23 Oct 2006, Tom Lane wrote:
>> That's not a good workaround, because making mycrc expensive to access
>> means your inner loop timing isn't credible at all. Instead try making the
>> buffer array nonlocal --- malloc it, perhaps.
> That did not m
On Mon, 23 Oct 2006, Tom Lane wrote:
> Jeremy Drake <[EMAIL PROTECTED]> writes:
> > So at this point I realize that intel's compiler is optimizing the loop
> > away, at least for the std crc and probably for both. So I make mycrc an
> > array of 2, and subscript mycrc[j&1] in the loop.
>
> That's
"Simon Riggs" <[EMAIL PROTECTED]> writes:
> On Mon, 2006-10-23 at 13:52 -0400, Tom Lane wrote:
>> No can do --- we rely on the checksums to be able to tell when we've hit
>> the end of WAL during replay.
> No we don't: Zero length records are the trigger for EOF.
Only if the file happens to be
I am running versions 8.1.2 and I installed 8.2B last week. I dumped
the data from the old version into the new version. The db consists of
several million records. Total disk usage is approximately 114GB.
My two observations are as follows... Also, keep in mind these are truly
just observat
"Jim C. Nasby" <[EMAIL PROTECTED]> writes:
> The only case I can think of where autovac might not work as well as
> smartvacuum would be if you had a lot of databases in the cluster, since
> autovacuum will only vacuum one database at a time.
It's conceivable that it'd make sense to allow multiple
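As a side note for the many-databases case, a quick way to see which databases
are building up vacuum (transaction wraparound) pressure; pg_database columns
as of 8.1/8.2:

SELECT datname, age(datfrozenxid) AS xid_age
FROM pg_database
ORDER BY age(datfrozenxid) DESC;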
> In most cases, it would be foolish to avoid: but there are cases where
> the data is CRC checked by the hardware/system already, so I'd like to
> make an option to turn this off, defaulting to on, for safety.
How would we know? What are those cases?
Sounds like a foot gun to me.
Sincerely,
J
On Mon, 2006-10-23 at 13:52 -0400, Tom Lane wrote:
> "Simon Riggs" <[EMAIL PROTECTED]> writes:
> > Instead, I'd like to include a parameter to turn off CRC altogether, for
> > heavily CPU bound operations and the WAL drive on trustworthy hardware.
>
> No can do --- we rely on the checksums to be a
Bruce, Tom, All:
> > Given the numbers I posted earlier today, the proposal is dead in the
> > water anyway, quite aside from any legal considerations.
>
> Agreed. I just wanted to point out we have other sharks in the water.
*IF* Slice-by-8 turned out to be a winner, I could get the legal issue
If the decision to vacuum based on autovacuum criteria is good enough
for you then I think you should just focus on getting autovac to do what
you want it to do. Perhaps you just need to decrease the sleep time to a
few seconds, so that autovac will quickly detect when something needs to
be vacuume
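For reference, a quick way to check the relevant settings from SQL; the GUC
names are as of 8.1/8.2, and the values themselves are changed in
postgresql.conf:

SHOW autovacuum;           -- must be on
SHOW stats_row_level;      -- autovacuum needs row-level stats in 8.1/8.2
SHOW autovacuum_naptime;   -- sleep time between autovacuum runs, in seconds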
Hello, I'm from Venezuela. I've been making some modifications to PostgreSQL's catalog, but there seems to be a problem creating the template1 database. When the creation of the database starts, this is what happens:
[EMAIL PROTECTED]:~> /home/luis/pgsql/bin/initdb -D /home/luis/pgsql/data/
The fi
On 10/23/06, Tom Lane <[EMAIL PROTECTED]> wrote:
It's not so much that I don't trust Intel as that a CRC algorithm is
exactly the sort of nice little self-contained thing that people love
to try to patent these days. What I am really afraid of is that someone
else has already invented this same
"Simon Riggs" <[EMAIL PROTECTED]> writes:
> Instead, I'd like to include a parameter to turn off CRC altogether, for
> heavily CPU bound operations and the WAL drive on trustworthy hardware.
No can do --- we rely on the checksums to be able to tell when we've hit
the end of WAL during replay. You
On Sun, 2006-10-22 at 18:06 -0400, Tom Lane wrote:
> These numbers are um, not impressive. Considering that a large fraction
> of our WAL records are pretty short, the fact that slice8 consistently
> loses at short buffer lengths is especially discouraging. Much of that
> ground could be made up
On 10/23/06, Jim C. Nasby <[EMAIL PROTECTED]> wrote:
Since Jonah hasn't done anything with it he's presumably lost interest,
so you'd need to find someone else looking for an itch to scratch. And
it appears the original patch was against 7.4, so it'd probably need a
decent amount of work to make
On Fri, Oct 20, 2006 at 03:30:40AM +0300, Tux P wrote:
> Hi .*
>
> Is there any chance to see the quota implementation described in this post
> in any next releases?
>
> http://archives.postgresql.org/pgsql-hackers/2004-07/msg00392.php
Since Jonah hasn't done anything with it he's presumably los
Jeremy Drake <[EMAIL PROTECTED]> writes:
> So at this point I realize that intel's compiler is optimizing the loop
> away, at least for the std crc and probably for both. So I make mycrc an
> array of 2, and subscript mycrc[j&1] in the loop.
That's not a good workaround, because making mycrc expe
On Monday, 23 October 2006 at 13:50, Gevik Babakhani wrote:
> How long are we supporting MVCC?
6.5
--
Peter Eisentraut
http://developer.postgresql.org/~petere/
Release 6.5 brings MVCC to PostgreSQL.
Check out the following doc for details
http://www.postgresql.org/docs/7.1/static/release-6-5.html
--Imad
EnterpriseDB
(http://www.enterprisedb.com)
On 10/23/06, Gevik Babakhani <[EMAIL PROTECTED]> wrote:
Folks,
How long are we supporting MVCC?
It is from
> "MK" == Mark Kirkwood <[EMAIL PROTECTED]> writes:
MK> Here are the results after building gcc 4.1.2 (repeating results
MK> for gcc 3.4.6 for comparison). I suspect that performance is
MK> probably impacted because gcc 4.1.2 (and also the rest of the
MK> tool-chain) is built with gcc 3.4.6 -
Folks,
How long are we supporting MVCC?
Is it from the beginning, or was it added to PG later?
--
Regards,
Gevik Babakhani
www.postgresql.nl
www.truesoftware.nl
On Mon, 2006-10-23 at 05:22 -0400, Gregory Stark wrote:
> "Simon Riggs" <[EMAIL PROTECTED]> writes:
>
> > Slice-By-8 was first mentioned here:
>
> Are you sure?
>
> US patent 7,047,479 filed in 2002 sounds like it may be relevant:
>
> http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO1&Sect2
Mark Kirkwood wrote:
Tom Lane wrote:
Are you running similar gcc versions on both? I realize I forgot to
document what I was using:
Ah - good point, FreeBSD is using an older compiler:
FreeBSD: gcc (GCC) 3.4.6 [FreeBSD] 20060305
Linux: gcc (GCC) 4.1.1 (Gentoo 4.1.1)
Hmm - there is a Free
"Simon Riggs" <[EMAIL PROTECTED]> writes:
> Slice-By-8 was first mentioned here:
Are you sure?
US patent 7,047,479 filed in 2002 sounds like it may be relevant:
http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO1&Sect2=HITOFF&d=PALL&p=1&u=%2Fnetahtml%2FPTO%2Fsrchnum.htm&r=1&f=G&l=50&s1=7047
Mario Weilguni wrote:
>> This has been discussed before, but Oracle behaves differently, and
>> IMHO in a more correct way.
>>
>> The following query returns NULL in PG:
>> SELECT NULL || 'fisk';
>>
>> But in Oracle, it returns 'fisk':
>> SELECT NULL || 'fisk' FROM DUAL;
>>
>> The latter seems
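Not part of the quoted exchange, but the usual PostgreSQL-side workaround when
the Oracle-style result is wanted:

SELECT NULL || 'fisk';                 -- NULL, per the SQL standard
SELECT coalesce(NULL, '') || 'fisk';   -- 'fisk'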
> > Yup, that would be the scenario where it helps (provided that you have
> > a smart disk or a disk array and an intelligent OS aio implementation).
> > It would be used to fetch the data pages pointed at from an index
> > leaf, or the next level index pages.
> > We measured the IO bandwidth d
> > So far I've seen no evidence that async I/O would help us, only a lot
> > of wishful thinking.
>
> is this thread moot? while researching this thread I came across this
> article: http://kerneltrap.org/node/6642 describing claims of
> 30% performance boost when using posix_fadvise to ask t
> > > So far I've seen no evidence that async I/O would help us, only a lot
> > > of wishful thinking.
> >
> > is this thread moot? while researching this thread I came across this
> > article: http://kerneltrap.org/node/6642 describing claims of 30%
> > performance boost when using posix_fadvis