On Fri, Jun 1, 2012 at 8:07 AM, Bryan Murphy wrote:
> On Thu, May 31, 2012 at 4:28 PM, Jeff Davis wrote:
>
>> On Thu, 2012-05-31 at 15:55 -0500, Bryan Murphy wrote:
>> > I'm having a problem upgrading a cluster from 9.0.7 to 9.1.3. Here's
>>
On Thu, May 31, 2012 at 4:28 PM, Jeff Davis wrote:
> On Thu, 2012-05-31 at 15:55 -0500, Bryan Murphy wrote:
> > I'm having a problem upgrading a cluster from 9.0.7 to 9.1.3. Here's
> > the error:
>
> Please send /srv/pg_upgrade_dump_globals.sql
>
> Als
I'm having a problem upgrading a cluster from 9.0.7 to 9.1.3. Here's the
error:
psql:/srv/pg_upgrade_dump_globals.sql:54: ERROR: duplicate key value
violates unique constraint "pg_authid_oid_index"
DETAIL: Key (oid)=(10) already exists.
Any ideas what I'm doing wrong?
Here's the verbose output
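OID 10 is reserved for the bootstrap superuser, so the globals dump is evidently trying to recreate a role on top of it. A minimal diagnostic against the new cluster, as a sketch (the role name will vary):

-- OID 10 is the bootstrap superuser; a CREATE ROLE restored on top of
-- it collides in pg_authid_oid_index.
SELECT oid, rolname FROM pg_authid WHERE oid = 10;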
On Mon, Jan 31, 2011 at 10:35 AM, Ben Chobot wrote:
> > Any advice? What should I be looking for?
>
> Any particular reason you are running the OOM killer on a database server?
> Why have the kernel set to overcommit memory in the first place?
Simply an oversight. That being said, it does not
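For anyone following along, the PostgreSQL docs recommend disabling overcommit on a dedicated database host. A minimal /etc/sysctl.conf sketch (values illustrative, not recommendations):

# Refuse allocations up front instead of OOM-killing postgres later.
vm.overcommit_memory = 2
# Percentage of RAM (plus swap) that may be committed; tune per host.
vm.overcommit_ratio = 80

Apply with sysctl -p.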
Last night we were hit by the out of memory killer. Looking at the
following graph, you can clearly see unusual memory growth. This is a
database server running Postgres 9.0.0.
http://mediafly-public.s3.amazonaws.com/dbcluster02-master-month.png
We have another server, running Postgres 9.0.1 wh
On Tue, Sep 21, 2010 at 8:08 PM, Tatsuo Ishii wrote:
> Unfortunately, the gdb backtrace does not show enough information
> because of optimization, I guess. Can you take a backtrace with an
> optimization-disabled binary?
>
> You can obtain this by editing the Makefile around line 147.
>
>
I edited conf
On Tue, Sep 21, 2010 at 10:45 AM, Bryan Murphy wrote:
> I'm sorry, when I went back over to double check my steps I realized I ran
> the wrong command. I am *still* having the problem. It appears that the
> MD5 hashes now match, but it's still failing. I have postgres and pg
On Tue, Sep 21, 2010 at 10:26 AM, Bryan Murphy wrote:
> On Mon, Sep 20, 2010 at 6:23 PM, Tatsuo Ishii wrote:
>
>> I have used PostgreSQL 9.0 + pgpool-II 3.0 and they work fine with md5
>> auth. Your log seems to indicate that the password in pool_passwd and
>> the
On Mon, Sep 20, 2010 at 6:23 PM, Tatsuo Ishii wrote:
> I have used PostgreSQL 9.0 + pgpool-II 3.0 and they work fine with md5
> auth. Your log seems to indicate that the password in pool_passwd and
> the one in pg_shadow are not identical. Can you verify that?
> The query result:
>
> select pass
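The comparison Tatsuo asks for can be made with a query along these lines, assuming superuser access ('bryan' is a placeholder role name):

-- pg_shadow stores 'md5' || md5(password || username); the pool_passwd
-- entry for the same user must be byte-identical to this value.
SELECT usename, passwd FROM pg_shadow WHERE usename = 'bryan';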
On Sun, Sep 19, 2010 at 11:31 PM, Tatsuo Ishii wrote:
> Sorry for delay. I had a trip outside Japan.
>
No problem.
> I found a nasty bug with pgpool. Please try the attached patches.
>
I tried the patch file and I still cannot connect. The only other
difference is that I've already upgraded our im
On Tue, Sep 14, 2010 at 6:55 PM, Tatsuo Ishii wrote:
> Sorry for the insufficient description of pool_passwd. It's located in
> the same directory as pgpool.conf. So the default is
> /usr/local/etc/pool_passwd.
>
> You need to create /usr/local/etc/pool_passwd if the uid to run pgpool
> server d
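For reference, pool_passwd is a plain text file with one entry per line, in the same username:md5-hash format PostgreSQL itself uses; a sketch (username and hash are placeholders):

# /usr/local/etc/pool_passwd
# <username>:md5<hex digest of md5(password || username)>
bryan:md5<32-hex-digit-hash>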
I can't get md5 authentication working with postgres 9rc1 and pgpool-II 3.0.
I see references to "pool_passwd" in the pgpool documentation, but I see
nothing indicating *where* this file should exist and how pgpool finds it.
I've set my accounts up in pcp.conf, however, I do not believe this is w
On Fri, May 7, 2010 at 9:02 AM, Magnus Hagander wrote:
> Try doing a binary search with LIMIT. E.g., if you have 20M records,
> do a SELECT * FROM ... LIMIT 10M (throw away the results). If that
> broke, check the upper half; if not, check the lower one (with
> OFFSET).
>
> If you have a serial p
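Spelled out, the bisection looks something like this (table name taken from the thread; row counts illustrative, and a serial primary key is assumed for stable ordering):

-- If this errors, the bad row is in the first half; otherwise it is in
-- the second. Repeat, halving the window, until one row remains.
SELECT * FROM items_extended ORDER BY id LIMIT 10000000;
SELECT * FROM items_extended ORDER BY id LIMIT 10000000 OFFSET 10000000;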
I'm running into this issue again:
psql --version
psql (PostgreSQL) 8.3.7
COPY items_extended TO '/dev/null';
ERROR: missing chunk number 0 for toast value 25693266 in pg_toast_25497233
Unfortunately, I do not know where these are coming from and I cannot
replicate the data in at least one of m
We have a production database that contains data which is easily
recreated at runtime. I'm considering upgrading this to 9.0 beta1 to
get some experience with the new hot standby system on a server that
is under medium to heavy load.
Obviously, being a production database, it's inconvenient if th
On Tue, Apr 27, 2010 at 11:32 PM, Greg Smith wrote:
> What do you mean by an instance failure here? The actual EC2 image getting
> corrupted so that it won't boot anymore, or just the instance going down
> badly?
The instance going down, badly. The last time it happened, what logs
I was able to
On Tue, Apr 27, 2010 at 11:31 AM, Greg Smith wrote:
> Nikhil G. Daddikar wrote:
>> I was wondering if any of you are using (or tried to use) PG+EC2/EBS on a
>> production system. Are there any best practices? Googling didn't help much. A few
>> articles I came across scared me a bit.
>
> There have been
On Fri, Mar 26, 2010 at 1:32 PM, Greg Smith wrote:
> If there's another server around, you can have your archive_command on the
> master ship to two systems, then use the second one as a way to jump-start
> this whole process. After fail-over, just start shipping from the new
> primary to that 3
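Concretely, that can be a single postgresql.conf entry along these lines (hosts and paths hypothetical; the command must fail unless both copies succeed, so a segment is retried rather than lost):

archive_command = 'rsync -a %p standby1:/walarchive/%f && rsync -a %p standby2:/walarchive/%f'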
On Mon, Mar 22, 2010 at 9:21 AM, Ogden wrote:
> I have looked all over but could not find any detailed docs on setting up a
> warm standby solution using PostgreSQL 8.4. I do know of
> http://www.postgresql.org/docs/8.4/static/warm-standby.html but was
> wondering if there was a more detailed doc
On Sun, Aug 16, 2009 at 9:35 PM, Bryan Murphy wrote:
> Assuming we are running a Postgres instance that is shipping log files to 2
> or more warm spares, is there a way I can fail over to one of the spares,
> and have the second spare start receiving updates from the new master
> wit
On Mon, Aug 17, 2009 at 4:02 PM, Greg Stark wrote:
> For what it's worth, at EDB I dealt with another case like this and I
> imagine others have too. I think it's too easy to do things in the
> wrong order or miss a step and end up with these kinds of problems.
>
> I would really like to know what
On Mon, Aug 17, 2009 at 12:41 PM, Tom Lane wrote:
> Bryan Murphy writes:
> > On Mon, Aug 17, 2009 at 12:17 PM, Tom Lane wrote:
> >> Hm, what's your current XID counter? (pg_controldata would give an
> >> approximate answer.) I'm wondering if the xmax
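The approximate counter Tom mentions can be read straight off the control file, e.g. (data directory path hypothetical):

pg_controldata /var/lib/postgresql/data | grep NextXID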
On Mon, Aug 17, 2009 at 12:17 PM, Tom Lane wrote:
> Bryan Murphy writes:
> > Here's the xmin/xmax/ctid for three problematic records:
>
> > prodpublic=# select xmin,xmax,ctid from items_extended where id in
> > ('34537ed90d7546d78f2c172fc8eed687
Could I run pg_resetxlog on a warm spare? Would that give the same result?
Unfortunately, this is our production system and I simply cannot bring it
down at the moment to run pg_resetxlog.
Bryan
On Mon, Aug 17, 2009 at 11:35 AM, Greg Stark wrote:
> On Mon, Aug 17, 2009 at 4:23 PM, Br
On Mon, Aug 17, 2009 at 11:35 AM, Greg Stark wrote:
> On Mon, Aug 17, 2009 at 4:23 PM, Bryan Murphy
> wrote:
> > I've identified 82 bad records. When I try to query for the records,
> > I get the following:
> > ERROR: missing chunk number 0 for toast value 25
We had a hardware failure last week and had to switch over to our spare.
Unfortunately, at some point we managed to get some data corruption. I've
been going through the database table by table, record by record, trying to
find the problems and fix them.
This one has me stumped. We have one tab
Assuming we are running a Postgres instance that is shipping log files to 2
or more warm spares, is there a way I can fail over to one of the spares,
and have the second spare start receiving updates from the new master
without missing a beat? I can live with losing the old master, and at least
at
On Fri, Jun 12, 2009 at 11:08 AM, Bryan Murphy wrote:
> I've read through the PITR documentation many times. I do not see anything
> that sheds light on what I'm doing wrong, and I've restored older backups
> successfully many times in the past few months using this
On Fri, Jun 12, 2009 at 10:48 AM, Alan Hodgson wrote:
> On Friday 12 June 2009, Bryan Murphy wrote:
> > What am I doing wrong? FYI, we're running 8.3.7.
>
> See the documentation on PITR backups for how to do this correctly.
>
I've read through the PITR documentati
Hey guys, I'm having difficulty restoring some of our backups. Luckily, I'm
only trying to do this to bring up a copy of our database for testing
purposes, but this still has me freaked out because it means we currently
have no valid backups and are only running with a single warm spare.
Our prima
On Sun, Apr 12, 2009 at 5:52 AM, Simon Riggs wrote:
> The database is performing too frequent restartpoints.
>
> This has been optimised in PostgreSQL 8.4 by the addition of the
> bgwriter running during recovery. This will mean that your hot spare
> will not pause while waiting for restartpoint t
On Thu, Apr 9, 2009 at 7:33 PM, Greg Smith wrote:
>>> 1) Decrease the maximum possible segment backlog so you can never get
>>> this far behind
>>
>> I understand conceptually what you are saying, but I don't know how to
>> practically realize this. :) Do you mean lower checkpoint_segments?
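For readers hitting the same wall: that knob lives in postgresql.conf, and lowering it bounds how much WAL can pile up between checkpoints, and hence between the standby's restartpoints (values illustrative, not recommendations):

checkpoint_segments = 16   # fewer segments = smaller backlog, more frequent checkpoints
checkpoint_timeout = 5min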
On Thu, Apr 9, 2009 at 6:38 PM, Greg Smith wrote:
> What does vmstat say about the bi/bo during this time period? It sounds
> like the volume of random I/O produced by recovery is just backing up as
> expected. Some quick math:
I'll have to capture this, unfortunately I won't be able to do that
I have two hot-spare databases that use wal archiving and continuous
recovery mode. I want to minimize recovery time when we have to fail
over to one of our hot spares. Right now, I'm seeing the following
behavior which makes a quick recovery seem problematic:
(1) hot spare applies 70 to 75 wal
On Sun, Feb 22, 2009 at 7:30 PM, Tim Uckun wrote:
>> 1. It's OK if we lose a few seconds (or even minutes) of transactions
>> should one of our primary databases crash.
>> 2. It's unlikely we'll need to load a backup that's more than a few days
>> old.
>
> How do you handle failover and falling ba
Hey guys, we just moved our system to Amazon's EC2 service. I'm a bit
paranoid about backups, and this environment is very different than
our previous environment. I was hoping you guys could point out any
major flaws in our backup strategy that I may have missed.
A few assumptions:
1. It's OK
Hey guys, we're changing the way we version our database from some old
unsupported (and crappy) ruby migrations-like methodology to straight
SQL scripts. We currently run CruiseControl.NET on a windows machine
pointed to a test database server hosted on linux for our builds. At
each build we atte
I'm a bit of a novice writing tsearch2 queries, so forgive me if this
is a basic question.
We have a table with 2 million+ records which has a considerable amount
of text content. Some search terms (such as comedy, new, news, music,
etc.) cause a significant performance hit on our web site. There
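The usual fix is a precomputed full-text index; a sketch in the 8.3-and-later built-in syntax (table, column, and configuration names hypothetical; the tsearch2 contrib spelling differs slightly):

-- Index the tsvector so frequent terms are answered from the index
-- rather than by rescanning two million rows of text.
CREATE INDEX items_fts_idx ON items USING gin (to_tsvector('english', content));

SELECT id
FROM items
WHERE to_tsvector('english', content) @@ to_tsquery('english', 'comedy & music');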
On Dec 6, 2007 10:09 AM, Tom Lane wrote:
> Why dump such a table at all? It evidently doesn't contain any
> data you need to preserve ...
>
> I forget which version you are running, but 8.2 pg_dump has an
> --exclude-table switch which'd work peachy for this.
I did not know ab
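The switch in question, as a one-liner (table and database names hypothetical):

pg_dump --exclude-table=scratch_table database_name | gzip > output_file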
On Dec 5, 2007 9:49 PM, Tom Lane wrote:
> Only access-share locks, but that could still be an issue if anything in
> your system likes to take exclusive locks. Have you looked into
> pg_locks to see if anything's getting blocked?
>
> pg_dump is entirely capable of causing an un
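That check can be as simple as the following; any rows returned are lock requests currently waiting:

-- Ungranted entries are blocked lock requests; join to pg_stat_activity
-- on pid to see which queries hold the conflicting locks.
SELECT locktype, relation::regclass AS locked_relation, mode, granted, pid
FROM pg_locks
WHERE NOT granted;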
On Dec 5, 2007 10:14 AM, Joshua D. Drake wrote:
> Pg_dump uses Access Share if I recall. You can operate normally while
> running pg_dump. I am having a hard time parsing that. Could you instead
> go over to pgsql.privatepaste.com and send back a paste link?
http://pgsql.privat
Sorry about the formatting, here's the dump as a text file.
Thanks,
Bryan
On Dec 5, 2007 10:05 AM, Bryan Murphy wrote:
> When we run pg_dump on our database, our web site becomes completely
> unresponsive. I thought pg_dump was runnable while the database
When we run pg_dump on our database, our web site becomes completely
unresponsive. I thought pg_dump was runnable while the database was
still being actively used? Anyway, I'm not entirely sure why, but
here's what I'm seeing.
pg_dump -v database_name | gzip > output_file
25% to 50% CPU usage (4
On 10/17/07, Joao Miguel Ferreira wrote:
> If your intention is to eliminate the unused index rows, you should run
> 'vacuum' and/or 'vacuum full' and/or 'reindex'.
>
> This also has the consequence of freeing filesystem space and returning
> it back to the OS.
>
> Check it out
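The commands in question, against a hypothetical table:

VACUUM ANALYZE items;  -- marks dead heap and index rows reusable, refreshes stats
VACUUM FULL items;     -- compacts the table and returns space to the OS (exclusive lock)
REINDEX TABLE items;   -- rebuilds the table's indexes from scratch, shedding bloat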
Is there a way I can track index usage over a long period of time?
Specifically, I'd like to identify indexes that aren't being regularly
used and drop them.
Bryan
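The statistics collector already tracks this; a sketch of the query (it assumes stats have been accumulating long enough to be representative):

-- Indexes never scanned since statistics were last reset are
-- candidates for dropping.
SELECT schemaname, relname, indexrelname, idx_scan
FROM pg_stat_user_indexes
WHERE idx_scan = 0
ORDER BY relname;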
First question... did you create the appropriate indexes on the appropriate
columns for these tables? Foreign keys do not implicitly create indexes in
postgres.
Bryan
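Concretely, for each foreign key column that gets joined or cascaded on, something like (names hypothetical):

-- The FK constraint alone does not create this index; without it, every
-- delete or key update on the parent table scans the child.
CREATE INDEX orders_customer_id_idx ON orders (customer_id);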
On 7/30/07, Cultural Sublimation wrote:
>
> Hi,
>
> I'm fairly new to PostgreSQL, so I am not sure if the p
I highly recommend you use the Npgsql driver, and if you're feeling really
saucy try NHibernate on top of that.
http://pgfoundry.org/projects/npgsql
http://www.nhibernate.org/
Bryan
On 7/23/07, longlong wrote:
Hi all,
I have a local system with Windows XP.
I want to use C#
...of those areas where I have yet to find a lot of guidance on the issue.
Bryan
On 3/29/07, Lew wrote:
Bryan Murphy wrote:
> I think the other guy's suggestion will work better. ;)
>
> Really, the table was just an example off the top of my head. I believe
> w
On Mar 29, 2007, at 17:39, Bryan Murphy wrote:
> Is it possible to declare a unique constraint in combination with a
> deleted flag?
> For example, if I have a table like this:
>
> CREATE TABLE
> (
> ID NOT NULL PRIMARY KEY,
> Key VARCHAR(32) NOT NULL,
> Value VARCHAR(32)
Thanks! That works great!
Bryan
On 3/29/07, Jonathan Hedstrom wrote:
Bryan Murphy wrote:
> Is it possible to declare a unique constraint in combination with a
> deleted flag?
>
> For example, if I have a table like this:
>
> CREATE TABLE
> (
Is it possible to declare a unique constraint in combination with a deleted
flag?
For example, if I have a table like this:
CREATE TABLE example -- table name and ID type supplied here; the original sketch omitted them
(
ID INT NOT NULL PRIMARY KEY,
Key VARCHAR(32) NOT NULL,
Value VARCHAR(32) NOT NULL,
Deleted INT NOT NULL DEFAULT 0
);
can I declare a unique constraint that
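The message cuts off here, but the standard answer (and presumably the fix that worked above) is a partial unique index over the live rows, sketched against the table above:

-- Uniqueness of Key is enforced only among rows where Deleted = 0;
-- any number of deleted rows may share the same Key.
CREATE UNIQUE INDEX example_key_live_uq ON example (Key) WHERE Deleted = 0;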