Any ideas why we would be receiving this cache lookup failed message?
PostgreSQL 8.3.7 64 bit, RHEL 5 64 bit OS
Linux hostname.net 2.6.18-92.el5 #1 SMP Tue Apr 29 13:16:15 EDT 2008 x86_64
x86_64 x86_64 GNU/Linux
<2010-02-07 08:05:36 MST>ERROR: cache lookup failed for relation 391262678
<2010-
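One quick check, assuming the OID from the error message above, is whether that relation still exists in the catalog; this error usually points at a table that was dropped or recreated while another session still held a cached reference to it:
select oid, relname, relkind from pg_class where oid = 391262678;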
8, 2010, at 4:50 PM, Erik Jones wrote:
>
> On Jan 8, 2010, at 9:50 AM, Keaton Adams wrote:
>
>> OK,
>>
>> So what am I doing wrong here?
>>
>> Installed PG 8.3.7 on Slave machine
>>
>> Restored from last evening's backup from the master
To clean up from a prior run.
Erik Jones wrote:
On Jan 8, 2010, at 9:50 AM, Keaton Adams wrote:
> OK,
>
> So what am I doing wrong here?
>
> Installed PG 8.3.7 on Slave machine
>
> Restored from last evening's backup from the master DB to make the rsync
> acr
"000100F600E8" from archive: return code 15
Again, nothing was changed with the scripts or the replication process and this
worked just fine under 8.1.4.
Thanks!
On 1/8/10 8:10 AM, "Keaton Adams" wrote:
I did find some references to a fix of last-completed-transaction time
during PITR. Last transaction end time is now logged at end of recovery
and at each logged restart point (Simon) ...
On 1/7/10 12:53 PM, "Keaton Adams" wrote:
We had WAL Log shipping (warm standby) working fine under 8.1.4 but under 8.3.7
we can't get the slave to come up properly. Nothing has changed in our process
with regard to start_backup, rsync, stop_backup, bring up the warm standby
server in continuous recovery mode, but the failover DB won't
listenerdb | 1074114794
template1 | 1073908727
template0 | 30121699
(4 rows)
Thanks again.
On 7/27/09 3:10 PM, "Martijn van Oosterhout" wrote:
> On Mon, Jul 27, 2009 at 02:21:02PM -0600, Keaton Adams wrote:
>> We are upgrading to 8.3.7 in September, if that helps the situatio
losing data in a table that is only around for a two-week (14-day) period if I
never do this "database-wide VACUUM" on the actual production DB?
Thanks again for your response.
On 7/27/09 1:41 PM, "Bill Moran" wrote:
In response to Keaton Adams :
> We are to a point
We have reached a point in the size and utilization of a set of our Postgres
8.1.17 databases where a vacuumdb -a -z -U postgres is still running after 50
hours and we have to kill it off because the additional I/O is causing queries
to stack up. We have archived off as much data as possible (100+ GB) and
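For reference, the standard way to see how close each database is to the transaction-ID wraparound limit (and therefore how urgent the database-wide vacuum really is):
select datname, age(datfrozenxid) from pg_database order by 2 desc;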
On 6/8/09 11:46 AM, "Tom Lane" wrote:
Keaton Adams writes:
> This is a QA system and unfortunately there is no recent backup. So as a
> last resort I am looking for any way to bring up Postgres when it has corrupt
> data in it:
pg_resetxlog?
regards, tom lane
This is a QA system and unfortunately there is no recent backup. So as a
last resort I am looking for any way to bring up Postgres when it has corrupt
data in it:
FATAL: could not remove old lock file "postmaster.pid": Read-only file system
HINT: The file seems accidentally left over, but i
This looks great and is a much easier solution to the problem than what I had
planned.
Thanks!
Keaton
mydb=# select ( date('2009-01-01') + ('1 day'::interval * (21-1)*7)) - ('1
day'::interval * (select 7 - extract(dow from timestamp '2009-01-01')));
  ?column?
------------
 2009-05-18
PG 8.1.17
For a given week number (2009w22) I need to calculate the beginning and ending
date that makes up that particular week in the year. I want to use the
beginning/ending date as part of a CHECK constraint on an inherited table, with
each child table based on a week of the year.
This is
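To illustrate where those boundaries land, here is a minimal sketch of a child table with such a CHECK constraint; the parent table mylog, the column logdate, and Monday-to-Sunday weeks matching the arithmetic in the query above are all assumptions:
create table mylog_2009w22 (
    check (logdate >= date '2009-05-25'
       and logdate <  date '2009-06-01')
) inherits (mylog);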
You can wrap a temporary function in a script and call it this way:
keaton:811:~$more my_shell_script.sh
#!/bin/bash
OS=`uname -s`
PSQL="/usr/bin/psql"
USERNAME="postgres"
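# NOTE: PASSWORD is assumed to already be set in the environment when this runs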
export PGPASSWORD="${PASSWORD}"
DATABASE="mydatabase"
${PSQL} "${DATABASE}" -U "${USERNAME}" << EOF
BEGIN;
CREATE OR REPLA
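For anyone reading this later: the heredoc then continues with the usual create / call / drop pattern before COMMIT and the closing EOF. A minimal sketch of that SQL (the function name, table, and column are hypothetical, plpgsql must already be installed in the database, and the heredoc delimiter should be quoted as << 'EOF' so the shell does not expand the $$ quoting):
CREATE OR REPLACE FUNCTION tmp_purge_old_rows() RETURNS integer AS $$
DECLARE
    n integer;
BEGIN
    DELETE FROM my_log_table WHERE created_at < now() - interval '14 days';
    GET DIAGNOSTICS n = ROW_COUNT;
    RETURN n;
END;
$$ LANGUAGE plpgsql;
SELECT tmp_purge_old_rows();
DROP FUNCTION tmp_purge_old_rows();
COMMIT;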
Any ideas on how to debug these types of error messages?
Apr 30 01:36:02 mxlqa401 postgres[23600]: [3-1] FATAL: invalid frontend
message type 77
Apr 30 01:36:02 mxlqa401 postgres[23601]: [3-1] LOG: unexpected EOF on client
connection
Apr 30 01:36:02 mxlqa401 postgres[23602]: [3-1] LOG: unexp
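For what it's worth, "invalid frontend message type 77" usually means something that does not speak the PostgreSQL wire protocol connected to the port (77 is ASCII 'M'), for example a port scanner or a misconfigured client; the unexpected-EOF lines fit that picture. A first step is to confirm connection logging is on so the connection sources show up in the log:
select name, setting from pg_settings
 where name in ('log_connections', 'log_hostname');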
, in the future, do in-place upgrades from 8.5
onward.
Any comments you can make on this suggestion would be very much
appreciated.
Thank you,
Keaton Adams
being able to do an in-place upgrade from an 8.3 release is off the
table?
Thanks again,
Keaton
On 2/23/09 12:03 PM, "Tom Lane" wrote:
Keaton Adams writes:
> We are currently at PG 8.1 and are in the process of upgrading to 8.3.6. I
> read on your development roadmap
What would you do in this situation?
We are currently at PG 8.1 and are in the process of upgrading to 8.3.6. I
read on your development roadmap page that 8.4 is slated for release in Q1 of
this year, possibly on the 31st of March:
"The next release of PostgreSQL is planned to be the 8.4 relea
PostgreSQL 8.1 question:
I have two columns.
policyNumber contains a 12-13 character varchar string
AllPolicyNumbersIncluded contains one or more 12-13 character varchar strings
(policy numbers) separated by commas
I want to check if policyNumber is contained in AllPolicyNumbersIncluded.
In SQL Server the PATINDEX fu
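For reference, the closest PostgreSQL equivalents are position()/strpos(), but splitting the comma-separated list and comparing whole elements avoids false hits on partial policy numbers. A sketch against a hypothetical policies table (it assumes no spaces around the commas):
-- substring match, PATINDEX-style (can match a partial number):
select * from policies
 where position(policynumber in allpolicynumbersincluded) > 0;
-- exact element match after splitting on commas:
select * from policies
 where policynumber = any (string_to_array(allpolicynumbersincluded, ','));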
Just wanted to clarify something in the Docs.
An "index" page.
If I create an index on a table for column foo character(8) would that require:
Character value overhead: 4 bytes
Per index page: 20 bytes
Row offset per index entry: 4 bytes
Row fixed-size header per index entry: 27 bytes
Doe
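Rather than adding up the per-entry overheads, it is often simpler to build the index and measure it; pg_relation_size and pg_size_pretty exist from 8.1 on (the index name here is hypothetical):
select pg_size_pretty(pg_relation_size('foo_idx'));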
What is the largest PostgreSQL 8.x database that is running in a production
environment that you are aware of? We top out at roughly 400 GB but have a
need for a new project to go much, much larger (in the several TB range). I am
attempting to get a feel for how large one should take a sin
PostgreSQL 8.1.4
RHEL 4.x
So we have run into an interesting problem that I want to know if anyone else
has encountered before. We have a scheduler process that 'hangs' on occasion and
we have isolated the issue to Postgres not returning any records when there are
actual records to return. Here i
issue.
Thanks,
Keaton
On 7/8/08 6:24 AM, "Keaton Adams" <[EMAIL PROTECTED]> wrote:
Good point. But
This is a database used to capture logged information, such as success/failure
of an operation. Daily tables are rolled up to weekly tables, weekly tables
are rolled up t
eekly table is dropped.
I'll send out a full EXPLAIN from the original query, which was against the
view, so you can see the scope of the issue.
Thanks for the reply,
Keaton
On 7/8/08 4:32 AM, "Dennis Brakhane" <[EMAIL PROTECTED]> wrote:
On Tue, Jul 8, 2008 at 12:0
PG: PostgreSQL 8.1.4
OS: RHEL 4.x
I have a set of queries on a production server that have been running fine for
the past few months but as of last Friday started performing poorly. I have
isolated the problem down to a particular part that is common to all queries
involved and have provided
Sorry. Good point. I always try to remember to include that important info:
PostgreSQL 8.1.4 on RHEL 4 (we're moving to 8.3.x on RHEL 5 this summer)
Thanks again,
Keaton
On 6/11/08 2:20 PM, "Tom Lane" <[EMAIL PROTECTED]> wrote:
Keaton Adams <[EMAIL PROTECTED]&g
I searched the archives and found references to the "missing chunk number 0 for
toast value X" error and tried the suggestions, but can't seem to resolve this
particular problem.
This is a test/QA system that uses an iSCSI disk array. Yesterday the array
hiccupped and had to be rebooted while
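One thing that sometimes clears this after a storage hiccup is rebuilding the table's indexes, which also reindexes its TOAST table (the table name is hypothetical); if the TOAST data itself is damaged, the affected rows usually have to be located and deleted or restored from backup instead:
reindex table my_damaged_table;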
I did search the archives for this but didn't see a posting that directly
answered the question.
How do I tell if the 32-bit version of PostgreSQL is running on a 64-bit
machine, or if the 64-bit version was installed? Is there a pg_ table that I
can query or a config file I can look in to
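The quickest check from inside the database is version(), whose output includes the build target the server binary was compiled for (x86_64 for a 64-bit build, i386/i686 for 32-bit):
select version();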
Is there a way to manage a queue in memory within Postgres?
We have a scenario where we have a list of servers stored in a table. Multiple
clients make a request for the next "group" of servers to process. Something
within the database needs to keep track of the last server set that was passe
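One common way to do this without holding anything in memory is a claim column on the table itself, updated inside a single transaction. A minimal sketch with hypothetical table and column names; note that 8.1 has neither UPDATE ... RETURNING nor SKIP LOCKED, so each client tags its batch and reads it back (under concurrency a client may get fewer rows than requested and simply retries):
begin;
update server_queue
   set claimed_by = 'client-42'
 where claimed_by is null
   and id in (select id from server_queue
               where claimed_by is null
               order by id limit 10);
select id, hostname from server_queue where claimed_by = 'client-42';
commit;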
Our organization is looking for a hot-standby option for PostgreSQL that
uses the WAL (transaction) data to keep the standby current and also allows
the standby to be read-only accessible for reporting. We have implemented
WAL shipping through a set of scripts we developed and that works well to
25, 2008 at 3:08 PM, Simon Riggs <[EMAIL PROTECTED]> wrote:
>> On Tue, 2008-03-25 at 14:11 -0600, Keaton Adams wrote:
>> Some funding would help that move forwards. If you or others would
>> consider that, it would help, even if just to provide the seed for
>> additional
done in batch or after hours.
Thanks for the reply,
Keaton
On 3/25/08 2:18 PM, "Richard Broersma" <[EMAIL PROTECTED]> wrote:
> On Tue, Mar 25, 2008 at 1:11 PM, Keaton Adams <[EMAIL PROTECTED]> wrote:
>> Our organization is looking for a hot-standby option for
But will that stand-by replication provide for a read-only slave?
On 3/25/08 2:26 PM, "Richard Broersma" <[EMAIL PROTECTED]> wrote:
> On Tue, Mar 25, 2008 at 1:17 PM, salman <[EMAIL PROTECTED]> wrote:
>> IIRC, it was mentioned previously in one posting that this a TODO for a
>> future version of
Postgres 8.1 on RHEL
How do I formulate this EXECUTE statement so that cust_hold (boolean column
in the table) can be set to the value in v_cust_on_hold (boolean variable in
a function)?
v_cust_on_hold BOOLEAN;
EXECUTE 'UPDATE customer_action_ytd_' || v_tyear || ' ' ||
'SET bytes_sc = b
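On 8.1 the value has to be spliced into the string (EXECUTE ... USING only appeared in 8.4), so the usual trick is to render the boolean as a literal, e.g. with a CASE expression. A sketch; the WHERE clause and v_cust_id (assumed integer) are hypothetical:
EXECUTE 'UPDATE customer_action_ytd_' || v_tyear
     || ' SET cust_hold = '
     || CASE WHEN v_cust_on_hold THEN 'true' ELSE 'false' END
     || ' WHERE cust_id = ' || v_cust_id;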
The GROUP BY was the fastest method.
Thanks for the suggestions,
Keaton
On 2/15/08 3:12 PM, "Gregory Stark" <[EMAIL PROTECTED]> wrote:
> "Keaton Adams" <[EMAIL PROTECTED]> writes:
>
>> Version: Postgres 8.1.4
>> Platform: RHEL
>>
>
Version: Postgres 8.1.4
Platform: RHEL
Given this scenario with the indexes in place, when I ask for the distinct
field1_id values, why does the optimizer choose a sequential scan instead of
just reading from the kda_log_fid_cre_20080123_idx index? The time it takes
to perform the sequential scan
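For reference, the GROUP BY workaround mentioned above looks like this (only the index name appears in the post, so the table name is hypothetical); on 8.1 GROUP BY can use a HashAggregate while SELECT DISTINCT always sorts, which is usually why it wins:
select field1_id
  from kda_log_20080123
 group by field1_id;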
We're running PostgreSQL 8.1.4 on RHEL. I'm running a vacuum analyze on the
mxl_fs_size table to see if that shows anything.
-Keaton
On 12/4/07 10:50 PM, "Keaton Adams" <[EMAIL PROTECTED]> wrote:
>
> We have two servers configured the same way running the same type of
We have two servers configured the same way running the same type of
processes that write/read to the database. Server 2 filled up pg_xlog and
crashed. When it came back we began to experience slow query performance.
I ran an ANALYZE against the tables involved in the query, but for some
reason
I'm looking for a way to see how many rows have been processed while a COPY
is actually running. I can't seem to find a pg_stat table/view that will
give me this level of visibility into the process.
Is there any way to do this, to tell the number of rows processed during a
COPY into a table whi
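A crude but workable proxy in 8.1: the table's on-disk size grows as COPY appends pages, so polling pg_relation_size from another session gives a rough progress indicator (the table name is hypothetical):
select pg_size_pretty(pg_relation_size('my_load_table'));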
When loading (inserting) data into a table with COPY I have read in the
documentation that rows are appended to the end of the table instead of
being added to existing table pages, so I'm wondering about memory
utilization. Our application uses a number of COPY statements in
parallel, so COPY perf
After reading several articles on the performance drag that Linux atime
has on file systems we would like to mount our DB volumes with the
noatime parameter to see just what type of a performance gain we will
achieve. Does PostgreSQL use atime in any way when reading/writing
data? If we turn off/
1497605405
So before it reaches the 2.1 billion mark we'll need to do a manual
reset of the FrozenXID on this DB as well.
Of course, we are looking into an 8.1.9 upgrade to avoid this problem in
the future.
Thanks,
Keaton
-Original Message-
From: Tom Lane [mailto:[EMAIL P
From: Tom Lane [mailto:[EMAIL PROTECTED]
Sent: Monday, August 06, 2007 11:02 AM
To: Joshua D. Drake
Cc: Keaton Adams; pgsql-general@postgresql.org
Subject: Re: [GENERAL] Template zero xid issue
"Joshua D. Drake" <[EMAIL PROTECTED]> writes:
> 2. IIRC there is a bug in 8.1.4 that c
FATAL: database "template0" is not currently accepting connections
Previous connection kept
-Original Message-
From: Tom Lane [mailto:[EMAIL PROTECTED]
Sent: Monday, August 06, 2007 10:30 AM
To: Keaton Adams
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Template zero xid
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Keaton Adams
Sent: Monday, August 06, 2007 10:14 AM
To: Tom Lane
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Template zero xid issue
Our production DB shut down because of it:
<2007-08-06 09:44:08 MDT>FATAL: the database system is shutting down
<2007-08-06 09:44:08 MDT>FATAL: the database system is shutting down
<2007-08-06 09:44:08 MDT>LOG: database system is shut down
-Original Message-
From: Tom Lane [mailto:[EMAIL PROTECTED]
Sent: Mon
Our template 0 database has an XID issue:
postgres=# select datname, age(datfrozenxid) from pg_database;
  datname   |    age
------------+------------
 postgres   | 1073965753
 .
 .
 .
 template1  | 1073892377
 template0  | 2146706522
(7 rows)
When we try a vacuum full,
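For the record, the usual way out of that state is to let template0 accept connections just long enough to vacuum it, then lock it down again (run as a superuser; something along these lines):
update pg_database set datallowconn = true where datname = 'template0';
-- connect to template0 (\c template0 in psql) and run:
vacuum freeze;
-- then, from another database:
update pg_database set datallowconn = false where datname = 'template0';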
I am being asked by management when PostgreSQL 8.3 will become generally
available. Is there an updated timeline for 8.3?
Thanks,
Keaton