-- Forwarded message --
From: Michael Nolan
Date: Tue, Apr 10, 2012 at 9:47 PM
Subject: Re: [GENERAL] [streaming replication] 9.1.3 streaming replication bug ?
To: Fujii Masao
On Tue, Apr 10, 2012 at 9:09 PM, Fujii Masao wrote:
> On Wed, Apr 11, 2012 at 10:06 AM,
On 4/11/12, 乔志强 wrote:
>
>> Yes, increase wal_keep_segments. Even if you set wal_keep_segments to 64,
>> the amount of disk space for WAL files is only 1GB, so there is no need to
>> worry so much, I think. No?
>
> But when a transaction is larger than 1GB...
Then you may need WAL space larger than 1GB as well.
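For sizing (a sketch, assuming the default 16 MB WAL segment size; the larger value below is illustrative):

# wal_keep_segments = 64 retains 64 * 16 MB = 1 GB in pg_xlog; a single
# transaction generating more WAL than that can outrun the standby.
cat >> "$PGDATA/postgresql.conf" <<'EOF'
wal_keep_segments = 256   # 256 * 16 MB = 4 GB of retained WAL
EOF
pg_ctl -D "$PGDATA" reload   # wal_keep_segments takes effect on reload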
On 4/11/12, Kevin Grittner wrote:
> Michael Nolan wrote:
>> On 4/11/12, 乔志强 wrote:
>
>>> But when a transaction is larger than 1GB...
>>
>> Then you may need WAL space larger than 1GB as well. For
>> replication to work, it seems likely that you may need to
On 4/11/12, Fujii Masao wrote:
> On Wed, Apr 11, 2012 at 3:31 PM, 乔志强 wrote:
>> So in sync streaming replication, if the master deletes WAL before it has
>> been sent to the only standby, all transactions will fail forever,
>> "the master tries to avoid a PANIC error rather than termination of
>> replication." but
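The usual safety net here is WAL archiving, so a standby that falls behind can
catch up from the archive instead of depending on pg_xlog retention (a sketch;
the archive path is illustrative):

cat >> "$PGDATA/postgresql.conf" <<'EOF'
archive_mode = on                # needs a server restart
archive_command = 'test ! -f /archive/%f && cp %p /archive/%f'
EOF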
-- Forwarded message --
From: Michael Nolan
Date: Wed, 11 Apr 2012 14:48:18 -0400
Subject: Re: [HACKERS] [GENERAL] [streaming replication] 9.1.3 streaming replication bug ?
To: Robert Haas
On Wed, Apr 11, 2012 at 2:14 PM, Robert Haas wrote:
>
>
> We've talked
On Thu, Apr 12, 2012 at 4:50 PM, Gavin Flower wrote:
> On 11/04/12 21:24, Gavin Flower wrote:
>
> On 11/04/12 19:15, Sidney Cadot wrote:
>
> Dear all,
>
> As a hobby project, I am toying around with a database containing
> about 5 million chess games. On average, these games have about 80
> positions
On Tue, Apr 17, 2012 at 5:20 PM, Eliot Gable wrote:
>
>
>
> I cannot find a single non-volatile function in the call path; so I am
> baffled on where this error message is coming from. I would be thankful for
> any ideas anyone might have on where this error message might be coming
> from or how
On Thu, Apr 19, 2012 at 12:46 PM, Jen wrote:
> I have been working on a hot backup for Postgres 9.1 for a while and have
> run into a consistent issue.
>
The instructions in the Binary Replication Tutorial work well for me; I
suggest you read through the '10 minute' version.
Specifically, look
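For 9.1, the condensed form of that recipe looks roughly like this (a sketch;
the host, user, and paths are illustrative):

# take a base backup over a replication connection:
pg_basebackup -h master.example.com -U replicator -D /var/lib/pgsql/9.1/standby -P
# then point the standby at the master:
cat > /var/lib/pgsql/9.1/standby/recovery.conf <<'EOF'
standby_mode = 'on'
primary_conninfo = 'host=master.example.com user=replicator'
EOF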
On Tue, Apr 24, 2012 at 4:08 PM, Yunong J Xiao wrote:
> I am currently backing up my postgres instances using ZFS snapshots
> instead of the sanctioned pg_dump utility mainly because I am running on
> Solaris and it offers a copy-on-write file system. Anecdotally this has
> been working fine for
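For extra safety, the snapshot can be bracketed with a backup window (a
sketch; the dataset name is illustrative, and this assumes the data directory
lives on a single ZFS filesystem):

psql -c "SELECT pg_start_backup('zfs-nightly');"
zfs snapshot tank/pgdata@nightly
psql -c "SELECT pg_stop_backup();"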
Your options range from doing something simple to something complex.
A simple option on a Linux server would be placing a command like this in
/etc/rc.d/rc.local:
su - postgres -c "/usr/local/pgsql/bin/pg_ctl -D /usr/local/pgsql/data -l
/usr/local/pgsql/logfile start"
However, that might not be t
On Thu, May 3, 2012 at 11:49 AM, Samba wrote:
> Hi,
>
>
> Please advise me if what I'm doing makes sense and is an accepted
> mechanism for taking backups or if there is any other procedure that I can
> employ to avoid unnecessarily archiving gigabytes of WAL logs which may be
> growing many
This is due to how sequences are pre-allocated in blocks to sessions
running on the master.
Since the slave is updated via the WALs, and not via 'nextval' function
calls in queries, the sequences that are actually used will remain in sync
with the master.
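A quick way to see the pre-allocation at work (a sketch; the sequence name
and CACHE value are illustrative):

psql <<'EOF'
CREATE SEQUENCE demo_seq CACHE 32;  -- each session pre-allocates 32 values
SELECT nextval('demo_seq');         -- returns 1; this session now holds 1..32
SELECT last_value, log_cnt FROM demo_seq;  -- last_value already shows the block top
EOF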
--
Mike Nolan
On Sat, May 5, 2012 at 4:19 PM, Stefan Tzeggai wrote:
> Hi
>
> postgresql 9.1 on Ubuntu 10.04
>
> All important information is in the other tablespaces. I would be
> totally happy to just lose all relations in that lost tablespace. It's
> just indexes. Is there any way to tell PG to drop/ignore t
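A starting point might be to list exactly which indexes lived there, then
drop and recreate them (a sketch; 'lost_ts' is an illustrative tablespace name):

psql -c "SELECT c.relname FROM pg_class c
         JOIN pg_tablespace t ON c.reltablespace = t.oid
         WHERE t.spcname = 'lost_ts' AND c.relkind = 'i';"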
On Mon, May 7, 2012 at 4:01 AM, Vincent de Phily
<vincent.deph...@mobile-devices.fr> wrote:
> On Sunday 06 May 2012 10:29:17 Simon Riggs wrote:
> > On 4 May 2012 14:55, Vincent de Phily wrote:
>
>
> Would be nice to see it added to the documentation (unless I just didn't
> find it?), as
On Tue, Apr 21, 2009 at 12:04 PM, jc_mich wrote:
>
> The problem is not the formula, the problem is the join. I've summarized it
> because it is a large formula and I've got the right results
>
>
You're quite correct that the problem is the join, because neither of the
examples you gave has any code to
I have a table with over 6 million rows in it that I do a dump on every
night. It takes less than 2 minutes to create a file that is around 650 MB.
Are you maybe dumping this file in 'insert' mode?
--
Mike Nolan
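For comparison (a sketch; the database name is illustrative):

pg_dump mydb > mydb.sql                 # default COPY format, fast to restore
pg_dump --inserts mydb > mydb_ins.sql   # one INSERT per row, much slower to restore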
On 9/21/06, Charles Ambrose <[EMAIL PROTECTED]> wrote:
Hi!
I have a fairly large databas
I'm a little confused about what you mean when you say you can't specify where the index should go. Schemas are a logical division, not a physical one. There's no logical reason to have the index for a table in a separate schema. (And if one were limiting which schemas a user could access, there
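If the underlying goal is physical placement, tablespaces are the mechanism
for that (a sketch; the tablespace name, path, and table are illustrative):

psql <<'EOF'
CREATE TABLESPACE fastdisk LOCATION '/mnt/fast/pgdata';
-- the index keeps its schema; only its physical storage moves:
CREATE INDEX orders_created_idx ON public.orders (created) TABLESPACE fastdisk;
EOF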
On 11/28/06, Andrus <[EMAIL PROTECTED]> wrote:
My goal is to create an ERP system which creates backups without any
attendance.
I don't know how to automate this 100% and haven't found any such sample.
Depending on what you plan to do with the backups (like create a failover
server), I don't kno
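A minimal unattended nightly dump via cron might look like this (a sketch;
paths and the database name are illustrative):

# crontab entry for the postgres user: 2 AM nightly compressed dump
# (note that % must be escaped as \% inside a crontab line)
0 2 * * * /usr/local/pgsql/bin/pg_dump erpdb | gzip > /backups/erpdb_$(date +\%Y\%m\%d).sql.gz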
I don't see where doing the backup directly to another computer increases
your safety margin; it may even lower it due to the increased potential for
network issues messing up the backup cycle. Do it locally, then SCP the
completed (and compressed) file to another computer, which is what I do.
(In
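That is, something along these lines (a sketch; the host and paths are
illustrative):

pg_dump mydb | gzip > /backups/mydb.sql.gz           # dump and compress locally
scp /backups/mydb.sql.gz backup@otherhost:/backups/  # ship the finished file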
IMHO you need at least five values (see the sketch after the list):
Male
Female
Unknown (aka NULL)
Not Available
Not Applicable
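One way to model those values (a sketch; NULL standing in for Unknown, and
the code values are illustrative):

psql <<'EOF'
CREATE TABLE person (
    name   text,
    -- NULL stands in for Unknown
    gender text CHECK (gender IN ('male', 'female', 'not available', 'not applicable'))
);
EOF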
BTW, my wife's grandfather's given name was "Pearl".
A few years ago I taught a lesson to a group of about 30 third grade
students. There were 6 students in that class with a first name pronounced
l
On several occasions I have thought that each row in a table should have a
SYSTEM COLUMN which gave the timestamp of the last update of that row. This
could get a bit expensive on space and in some cases might be redundant with
(or have a slightly different value from) a user-maintained timestamp
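Absent such a system column, the usual workaround is a trigger-maintained
column (a sketch; the table and column names are illustrative):

psql <<'EOF'
ALTER TABLE mytable ADD COLUMN updated_at timestamptz;

CREATE FUNCTION touch_updated_at() RETURNS trigger AS $$
BEGIN
    NEW.updated_at := now();
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER mytable_touch
    BEFORE UPDATE ON mytable
    FOR EACH ROW EXECUTE PROCEDURE touch_updated_at();
EOF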
I have a MySQL table on our public website that is populated from a similar
table on our internal site, which runs PostgreSQL.
Recently I was trying to enhance one of our website queries and ran across
an interesting phenomenon:
The following query runs very quickly in both PostgreSQL (8.1.3) and MySQL:
This is a generated query in a web form where there could be a series of 12
digit event IDs input by the user, hence using the 'in' form. This is
slightly lazy programming on my part, but it makes little difference in
either PostgreSQL or MySQL whether I use = or 'in'.
--
Mike Nolan
On 1/20/07,
reSQL any better, or vice versa.
Tom Lane wrote:
> "Michael Nolan" <[EMAIL PROTECTED]> writes:
>> select count(*) from memmast where memid in (select plr_rated_memid from
>> tnmt_plr where plr_eventid in ('200607163681'));
>
>> This query takes abo
Am I just missing it or is there no way to tell if a function is noted as
immutable from the \df or \df+ output in psql?
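Failing that, the catalog has it directly (a sketch; 'myfunc' is an
illustrative function name):

psql -c "SELECT proname, provolatile FROM pg_catalog.pg_proc WHERE proname = 'myfunc';"
# provolatile is 'i' (immutable), 's' (stable), or 'v' (volatile)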
--
Mike Nolan
I get a segmentation fault in pg_dump (8.2.3 on a Linux Fedora Core 5
system) when dumping some (but not all) of the sequences in the public
schema:
Here's the output from /usr/local/pgsql/bin/pg_dump -v -t
uscf_dues_dues_key_seq -U postgres uscf >uscf_dues_dues_key_seq.seq
pg_dump: reading sch
GDB produces:
Program received signal SIGSEGV, Segmentation fault.
0x0804fd6f in dumpSequence ()
The sequence definitely exists and works:
select * from uscf_dues_dues_key_seq;
 sequence_name | last_value | increment_by | max_value | min_value | cache_value | log_cnt | is_cy
What else can I do to help track this down?
--
Mike Nolan
On 4/5/07, Alvaro Herrera <[EMAIL PROTECTED]> wrote:
Michael Nolan wrote:
> GDB produces:
>
> Program received signal SIGSEGV, Segmentation fault.
> 0x0804fd6f in dumpSequence ()
Not very helpful -- what does it say if
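Presumably the next step is a full symbolic backtrace (a sketch; this assumes
a core file and an unstripped pg_dump, e.g. one built with --enable-debug):

gdb /usr/local/pgsql/bin/pg_dump core
(gdb) bt full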
I've narrowed down the conditions under which pg_dump in 8.2.3 is creating a
segmentation fault.
It appears to happen only when dumping a sequence that is created for a
serial data element.
uscf=> create table public.seqtest
uscf-> (field1 text,
uscf(> field2 serial);
create table public.seqtes
foreign key constraints?
Dumping the sequence separately should also be permissible, but possibly
with some kind of warning note in the dump output that the sequence is
linked to a data field.
--
Mike Nolan
On 4/15/07, Tom Lane <[EMAIL PROTECTED]> wrote:
I wrote:
> "Michael
On 4/16/07, Tom Lane <[EMAIL PROTECTED]> wrote:
It's still
the case that not using a proper escaping function will come back to
haunt you someday, though.
Has anybody written a 'proper escaping function' in PHP, something along the
lines of addslashes()?
--
Mike Nolan
Can WAL files be used to create/update a warm standby on a different minor
version of PostgreSQL (e.g., using files from a server running 8.2.3 on an
8.2.4 server, or vice versa)?
I suspect this is a FAQ, but I didn't see it in the docs on WALs and PITR
(section 23.3)
--
Mike Nolan
Steps taken that work:
1. Take a low-level (tar) backup on the live server
2. Restore the files on a second server (identical OS and PG versions)
3. Copy the archived WAL files to the backup server
4. Restore the archived WAL files on the backup server
However, why don't steps 5 through 8 wo
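For steps 3 and 4, a minimal recovery.conf on the backup server would be (a
sketch; the archive path is illustrative):

cat > "$PGDATA/recovery.conf" <<'EOF'
restore_command = 'cp /archive/%f %p'
EOF
pg_ctl -D "$PGDATA" start   # replays the archived WAL, then opens for connections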
I don't know if my database is typical (as there probably is no such thing),
but restoring a full dump (pg_dumpall) takes over 4 hours on my backup
server, whereas restoring a low-level backup (about 35GB) and then processing
145 WAL files (since Tuesday morning when the last low level backup was run)
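For scale (a rough sketch, assuming the default 16 MB segment size):

# 145 WAL segments * 16 MB each = 2320 MB, about 2.3 GB of WAL to replay
echo "$(( 145 * 16 )) MB"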
On 5/17/07, Ron Johnson <[EMAIL PROTECTED]> wrote:
On 05/17/07 16:49, Michael Nolan wrote:
> I don't know if my database is typical (as there probably is no such
> thing), but to restore a full dump (pg_dumpall) takes over 4 hou
Recently I went through the steps to refresh the database (8.3.4) on my
development server (running Fedora Core 5 Linux),
making a tarball of the live database, then restoring it on the development
server, and running all the archived WAL files.
Everything worked fine as far as I can tell, I don'
Sorry, I meant 8.2.4 (darn typo virus)
On 5/24/07, Michael Nolan <[EMAIL PROTECTED]> wrote:
Recently I went through the steps to refresh the database (8.3.4) on my
development server (running Fedora Core 5 Linux),
making a tarball of the live database, then restoring it on the
devel
On 5/24/07, Tom Lane <[EMAIL PROTECTED]> wrote:
It sounds like you have a reproducible test case --- can you make it
available to someone else?
Not very easily, Tom. The tarball files are around 35 GB (uncompressed)
and I assume they'd only work on a fairly similar system anyway.
I assume s
I need to be able to have postmaster (8.1.2) listen on more than one port,
e.g., 5432 and some other port, such as 5431.
There does not appear to be a way to configure that. You can configure
multiple IP addresses, but not multiple ports.
If I set up a TCP port redirector like rinetd to forward packe
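rinetd makes that a two-line config (a sketch; the addresses are illustrative):

cat >> /etc/rinetd.conf <<'EOF'
# bindaddress bindport connectaddress connectport
0.0.0.0 5431 127.0.0.1 5432
EOF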
On 8/30/06, Martijn van Oosterhout <kleptog@svana.org> wrote:
On Wed, Aug 30, 2006 at 02:42:58PM -0500, Michael Nolan wrote:
> I need to be able to have postmaster (8.1.2) listen on more than one port,
> e.g., 5432 and some other port, such as 5431.
Why?
Situation in a nutshell:
Producti
parate configurations to the live section - simply have 2 databases defined
in the same server, 'ourlivedatabase' and 'ourtestdatabase'. This still gives
one small login/config setting that changes between live and dev.
On 31/8/2006 12:20, "Kelly Burkhart" <[EMAI
Setting PostgreSQL 8.1.3 to log all queries only logs direct queries, not
ones made as part of a stored procedure (function) call.
Is there a way to get it to log queries that are part of functions?
--
Mike Nolan
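Not that I know of in 8.1; one later-release option (an assumption: the
contrib module auto_explain, available from 8.4 on) can log statements
executed inside functions:

cat >> "$PGDATA/postgresql.conf" <<'EOF'
shared_preload_libraries = 'auto_explain'  # needs a server restart
auto_explain.log_min_duration = 0          # log every statement's plan
auto_explain.log_nested_statements = on    # include statements inside functions
EOF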