Hello Community,
We have used "pg_ctl kill QUIT " to terminate one of the processes on
the production database and the database went into recovery mode.
We understand that we should not use "kill -9" and we did not do that.
Could you please help us avoid this problem permanently.
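Our understanding is that SIGQUIT makes the backend exit without cleanup, so
the postmaster then resets all other sessions and runs crash recovery, which
matches what we saw. For a single runaway backend, a sketch of the built-in
alternatives we are now considering (the PID below is hypothetical):

-- find the backend (the column is procpid on 9.1 and earlier, pid on 9.2+)
SELECT procpid, usename, current_query FROM pg_stat_activity;
-- cancel only the running query, keeping the session connected
SELECT pg_cancel_backend(12345);
-- or end the whole session cleanly (a polite SIGTERM, not SIGQUIT)
SELECT pg_terminate_backend(12345);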
Regards,
Venkat
I have restarted the cluster with "ignore_system_indexes=true" and was able
to connect to the databases.
I have started re-indexing; it seems to be working fine. I will get back if I
find further issues.
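A sketch of the kind of sequence involved, assuming the damaged index is the
one from the earlier error (adjust the database name and paths to your setup):

# start with system indexes ignored so catalog lookups bypass the damaged index
pg_ctl start -o "-c ignore_system_indexes=true"

Then, connected to the affected database with psql:

REINDEX INDEX pg_attribute_relid_attnum_index;   -- or: REINDEX SYSTEM oltp_db;

Finally, restart the cluster without ignore_system_indexes.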
Regards,
Venkat
On Wed, Jul 4, 2012 at 3:35 PM, Raghavendra <raghavendra@enterprisedb.com> wrote:
On Wed, Jul 4, 2012 at 2:12 PM, Raghavendra <raghavendra@enterprisedb.com> wrote:
> On Wed, Jul 4, 2012 at 2:11 PM, Raghavendra <raghavendra@enterprisedb.com> wrote:
>
>>> postgres=# \c oltp_db
>>> FATAL: index "pg_attribute_relid_attnum_index" contains unexpected
>>> zero page
Hello Community,
We have a critical situation wherein our production database server was
affected by a rootkit.
When we tried to build a replication site by copying the data directory to a
different server, many files were missed during the copy (an effect of the
rootkit).
So, we moved the
On Wed, Feb 22, 2012 at 5:40 PM, Stuart Bishop wrote:
Hi.
>
> I need to measure how far in the past a hot standby is, with async
> streaming replication.
>
> On the Hot Standby, "select
> age(current_timestamp,pg_last_xact_replay_timestamp())" gets me this
> (or close enough for my purposes - I underst
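For reference, a minimal sketch of the lag checks available on a 9.1 hot
standby (note that pg_last_xact_replay_timestamp() stops advancing when the
master is idle, so the computed age can look larger than the real lag):

-- time since the last replayed transaction committed on the master
SELECT now() - pg_last_xact_replay_timestamp() AS replay_lag;
-- WAL positions: received from the master vs. replayed locally
SELECT pg_last_xlog_receive_location() AS received,
       pg_last_xlog_replay_location()  AS replayed;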
On Thu, Feb 16, 2012 at 8:14 PM, Adrian Klaver wrote:
> On Wednesday, February 15, 2012 10:21:02 pm Venkat Balaji wrote:
> > Adrian,
> >
> > Thanks a lot !
> >
> > So in this case you are not waiting for confirmation of the commit being
> >
> > >
Adrian,
Thanks a lot!
> So in this case you are not waiting for confirmation of the commit being
> flushed to disk on the standby. In that case you are bypassing the primary
> reason for sync replication. The plus is transactions on the master will
> complete faster and do so in the absenc
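For context, a minimal sketch of the 9.1 setting being discussed (shown per
session here; it can also be set in postgresql.conf or per transaction):

-- wait only for the local WAL flush, not for the synchronous standby
SET synchronous_commit = local;
-- default behaviour under sync rep: wait for the standby to confirm
SET synchronous_commit = on;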
On Wed, Feb 15, 2012 at 4:12 PM, Andres Freund wrote:
> On Wednesday, February 15, 2012 10:38:23 AM Venkat Balaji wrote:
> > On Wed, Feb 15, 2012 at 12:21 PM, Scott Marlowe wrote:
> > > On Tue, Feb 14, 2012 at 10:57 PM, Venkat Balaji <venkat.bal...@verse.in>
> > Data loss would be an issue when there is a server crash or pg_xlog crash,
> > etc. That many pg_xlog files (1000) would contribute to huge data loss
> > (data changes not yet synced to the base are not guaranteed). Of course,
> > this is not related to the current situation. Normally
the same behavior both in async and sync rep.
Thanks,
VB
On Wed, Feb 15, 2012 at 11:11 AM, Venkat Balaji wrote:
> On Wed, Feb 15, 2012 at 11:01 AM, Venkat Balaji wrote:
>
>>
>> On Tue, Feb 14, 2012 at 8:09 PM, Adrian Klaver wrote:
>>
>>> On Tuesday, Febr
On Wed, Feb 15, 2012 at 12:21 PM, Scott Marlowe wrote:
> On Tue, Feb 14, 2012 at 10:57 PM, Venkat Balaji wrote:
> >
> > On Wed, Feb 15, 2012 at 1:35 AM, Jay Levitt wrote:
> >>
> >> We need to do a few bulk updates as Rails migrations. We're a
On Wed, Feb 15, 2012 at 1:35 AM, Jay Levitt wrote:
> We need to do a few bulk updates as Rails migrations. We're a typical
> read-mostly web site, so at the moment, our checkpoint settings and WAL are
> all default (3 segments, 5 min, 16MB), and updating a million rows takes 10
> minutes due to
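A sketch of the sort of postgresql.conf changes usually suggested for bulk
updates on pre-9.5 releases (the numbers are illustrative, not tuned for this
particular system):

checkpoint_segments = 32            # default 3; fewer forced checkpoints during the bulk UPDATE
checkpoint_timeout = 15min          # default 5min
checkpoint_completion_target = 0.9  # spread checkpoint I/O over a longer window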
On Wed, Feb 15, 2012 at 11:01 AM, Venkat Balaji wrote:
>
> On Tue, Feb 14, 2012 at 8:09 PM, Adrian Klaver wrote:
>
>> On Tuesday, February 14, 2012 4:21:22 am Venkat Balaji wrote:
>> > Hello,
>> >
>> > Disaster Recovery testing for Synchronous replicati
On Tue, Feb 14, 2012 at 8:09 PM, Adrian Klaver wrote:
> On Tuesday, February 14, 2012 4:21:22 am Venkat Balaji wrote:
> > Hello,
> >
> > Disaster Recovery testing for Synchronous replication setup -
> >
> > When the standby site is down, transactions at the prod
g that this is as good as putting "synchronous_commit=on" on a
stand-alone system.
We need to get this setup live on production shortly.
Thanks
VB
On Fri, Feb 10, 2012 at 4:47 PM, Venkat Balaji wrote:
>
> This issue stands resolved!
>
> The statements are no longer hanging on production
s fast and awesome.
Thanks
VB
On Fri, Feb 3, 2012 at 9:45 PM, Adrian Klaver wrote:
> On Thursday, February 02, 2012 10:21:28 pm Venkat Balaji wrote:
>
> >
> > Connection is working fine between primary and standby, ping is working
> > fine and wal archive file tran
On Thu, Feb 2, 2012 at 8:37 PM, Adrian Klaver wrote:
> On Wednesday, February 01, 2012 10:51:44 pm Venkat Balaji wrote:
> > Hello,
> >
> > I was testing the Postgres-9.1.1 synchronous streaming replication on our
> > UAT system.
> >
> > Without synchronous r
spot.com/
>
>
>
> On Thu, Feb 2, 2012 at 12:21 PM, Venkat Balaji wrote:
>
>> Hello,
>>
>> I was testing the Postgres-9.1.1 synchronous streaming replication on our
>> UAT system.
>>
>> Without synchronous replication, everything was working fine.
&g
Hello,
I was testing the Postgres-9.1.1 synchronous streaming replication on our
UAT system.
Without synchronous replication, everything was working fine.
But when I enabled synchronous_standby_names='*', "create table" started
hanging for a long time.
When I pressed "Ctrl+C" I got the f
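For reference, a sketch of the 9.1 pieces involved (the host and standby name
are placeholders). With synchronous_standby_names set and no synchronous
standby connected, every commit, including a CREATE TABLE, waits indefinitely,
which is the hang described above:

# postgresql.conf on the master
synchronous_standby_names = '*'    # any connected standby may serve as the sync standby

# recovery.conf on the standby
standby_mode = 'on'
primary_conninfo = 'host=master-host port=5432 application_name=standby1'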
Hello,
We have configured "streaming replication" (not synchronous) for our
production in PG-9.1.1.
Replication is working fine, we can see the transactions getting replicated
without any issues.
I see the below problem -
pg_stat_replication on master shows no rows all the time..
ps -Af | grep
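A couple of checks worth running (a sketch; run the SQL on the master as a
superuser, since unprivileged users may not see the full details):

SELECT procpid, usename, application_name, client_addr, state, sync_state
FROM pg_stat_replication;

ps -ef | grep "wal sender"   # a connected standby shows up as a wal sender process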
2011/12/2 Oleg Serov
> And, i'm an idiot.
>
> My DB version:
> PostgreSQL 8.4.9 on x86_64-redhat-linux-gnu, compiled by GCC gcc (GCC)
> 4.1.2 20080704 (Red Hat 4.1.2-51), 64-bit
>
>
>
> 2011/12/2 Oleg Serov
>
>> Hello, i have a problem.
>>
>> I've got a production server, working fine. Then i've
Do you have tablespace directories with a symlink to the data directory?
Thanks
VB
On Wed, Nov 30, 2011 at 7:42 PM, Samba wrote:
> Hi all,
>
> I have taken a base backup of my master server using pg_basebackup command
> as below:
> pg_basebackup -D /tmp/PostgresBackup/ -Ft -Z 9 -l masterback
Hello Everyone,
Can someone please let me know whether there is a "pglesslog" release for
PG-9.0.
I only see a beta version (1.4.2) for PG 9.0, released some time ago.
Is anyone using "pg_lesslog_1.4.2_pg90_beta.tar" for PG-9.0 in production
successfully?
Can we use the above said version on produc
On Fri, Nov 18, 2011 at 6:08 AM, Phoenix Kiula wrote:
> On Mon, Nov 14, 2011 at 1:45 PM, Venkat Balaji wrote:
> >> Question: what can I do to rsync only the new additions in every table
> >> starting 00:00:01 until 23:59:59 for each day?
> >
> > A table l
, Alban Hertroys wrote:
> On 15 November 2011 12:58, Venkat Balaji wrote:
> > Hello,
> > We are facing an issue while installing Postgres-9.0.1 on CentOS-5.
>
> That name always makes me wonder when they're releasing PennyOS or
> DollarOS :P
>
> >
Hello,
We are facing an issue while installing Postgres-9.0.1 on CentOS-5.
Below is the error we are encountering -
./configure -- output
checking for inflate in -lz... no
configure: error: zlib library not found
If you have zlib already installed, see config.log for details on the
failure. It
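In case it helps anyone hitting the same configure error, the usual fix on
CentOS 5 is to install the zlib development headers (a sketch; package names
assume the stock CentOS repositories):

yum install zlib-devel
./configure                  # re-run after installing the headers
# or, if compression support is genuinely not wanted:
./configure --without-zlib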
>
> Question: what can I do to rsync only the new additions in every table
> starting 00:00:01 until 23:59:59 for each day?
>
Table-level replication (like Slony) should help here.
Or
A trigger-based approach with dblink would be another (but a bit more
complex) option.
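A very rough sketch of the trigger-based option (the table, columns, and
connection string are hypothetical; it requires the dblink contrib module and
ignores error handling):

CREATE OR REPLACE FUNCTION push_row_to_replica() RETURNS trigger AS $$
BEGIN
    -- push the newly inserted row to the remote server
    PERFORM dblink_exec(
        'host=replica.example.com dbname=mydb user=repl',
        'INSERT INTO my_table(id, payload) VALUES ('
          || quote_literal(NEW.id) || ', ' || quote_literal(NEW.payload) || ')');
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER my_table_push
    AFTER INSERT ON my_table
    FOR EACH ROW EXECUTE PROCEDURE push_row_to_replica();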
Thanks
VB
This problem has been resolved !!
Thanks
VB
On Fri, Nov 11, 2011 at 9:58 PM, Venkat Balaji wrote:
> Hello,
>
> WAL Archive process in our production is not working.
>
> [postgres@hostname]$ ps -ef | grep archive
> postgres 12077 16015 0 10:19 pts/400:00:00 grep archiv
Hello,
WAL Archive process in our production is not working.
[postgres@hostname]$ ps -ef | grep archive
postgres 12077 16015  0 10:19 pts/4    00:00:00 grep archive
postgres 31126 27607  0 Nov10 ?        00:01:18 postgres: archiver process
failed on 00010F7200F0
I see WAL files get
Sorry forgot to mention the thread I referred to -
http://archives.postgresql.org/pgsql-general/2010-12/msg01000.php
Thanks
VB
On Thu, Nov 3, 2011 at 3:48 PM, Venkat Balaji wrote:
> Hello Everyone,
>
> We had recently taken an online backup of our production database cluster
> (pg_
Hello Everyone,
We had recently taken an online backup of our production database cluster
(pg_start_backup() - rsync - pg_stop_backup()).
We had built the testing cluster with the backup.
When we try to vacuum the database or vacuum full the testing database, we
are getting the following error.
>
> We're not doing this long-term, in order to have a backup server we can
> fail-over to, but rather as a one-off low impact move of our database.
> Consequently, instead of using pg_start_backup and pg_stop_backup, and
> keeping all WAL, we're stopping the database, rsync of everything, and
> st
>
> "Another option is to use rsync to perform a file system backup. This is
> done by first running rsync while the database server is running, then
> shutting down the database server just long enough to do a second rsync. The
> second rsync will be much quicker than the first, because it has rel
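For what it is worth, a bare-bones sketch of that two-pass approach (paths and
the standby host are placeholders; the second pass runs while the server is
stopped):

rsync -a /var/lib/pgsql/data/ standby:/var/lib/pgsql/data/             # first pass, server running
pg_ctl -D /var/lib/pgsql/data stop -m fast                             # brief shutdown
rsync -a --delete /var/lib/pgsql/data/ standby:/var/lib/pgsql/data/    # quick second pass
pg_ctl -D /var/lib/pgsql/data start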
On Thu, Oct 27, 2011 at 7:57 PM, rihad wrote:
> Hi, I'm backing up the entire server directory from time to time. pg_xlog/
> directory containing WAL files is pretty heavy (wal_level=archive). Can I
> exclude it from the regular tar archive?
>
The best would be to perform "pg_switch_xlog()" and
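Roughly what that would look like (a sketch; paths are placeholders, and
skipping pg_xlog is only safe for a base backup bracketed by
pg_start_backup()/pg_stop_backup() with WAL archiving in place):

psql -c "SELECT pg_switch_xlog();"     # force the current WAL segment out to the archive
tar --exclude=pg_xlog -czf backup.tar.gz /var/lib/pgsql/data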
My answers are inline below -

> How can I list a user's permissions table by table?
>
> i.e. User Joe
> has read/write on table1
> has read on table2
> no access on table 3
>
For a particular user you can use the function below. You can write a SQL
query or script which takes table names from
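A sketch of such a query using has_table_privilege (the user name 'joe' is
taken from the example above; adjust the schema filter as needed):

SELECT c.relname,
       has_table_privilege('joe', c.oid, 'SELECT') AS can_select,
       has_table_privilege('joe', c.oid, 'INSERT') AS can_insert,
       has_table_privilege('joe', c.oid, 'UPDATE') AS can_update,
       has_table_privilege('joe', c.oid, 'DELETE') AS can_delete
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind = 'r'
  AND n.nspname = 'public';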
Sorry for not responding to this email for so long.
Alan,
We had mentioned the following line in recovery.conf file (we had given
pg_xlog location since we did not have WAL archives) -
restore_command = 'cp /pg_xlog/%f %p'
We found where the problem was -
Here is what i did -
1. We had taken
256K).
Thanks
VB
2011/10/3 Alan Hodgson
> On October 3, 2011 05:33:35 AM Venkat Balaji wrote:
> > Did anyone observe this behavior ?? Please help !
> >
> > This is critical for us. I want to recommend not to use "rsync" (use cp
> or
> > scp instead) fo
critical for us. I want to recommend not to use "rsync" (use cp or
scp instead) for production backup.
Thanks
VB
On Tue, Sep 27, 2011 at 2:36 PM, Albe Laurenz wrote:
> Venkat Balaji wrote:
> > Our problem is -
> >
> > We had mistakenly executed "rsync"
One way could be -
If the restore you are performing is not very big, the dump can be taken with
pg_dump "--inserts", and the "create function" lines can be removed from the
output file before restoring.
Thanks
VB
On Fri, Sep 30, 2011 at 10:59 AM, Venkat Balaji wrote:
> I had faced
I had faced the same problem two days earlier, and that was for the
"pg_freespacemap" contrib module.
I did not know of a way to ignore these functions, so I installed the contrib
module and restored. It worked!
I am also looking for a way to ignore these functions.
Thanks
VB
On Fri, Sep 30, 2011 at 8:
Sep 28, 2011 at 1:14 PM, Venkat Balaji wrote:
>
>> Hello Everyone,
>>
>> I have been working on PostgreSQL for quite a while (2 yrs) now.
>>
>> I have got "PostgreSQL 9.0 High Performance" book and quite excited to go
>> through it.
>>
>>
Hello Everyone,
I have been working on PostgreSQL for quite a while (2 yrs) now.
I have got "PostgreSQL 9.0 High Performance" book and quite excited to go
through it.
Please let me know of any sources where I can get more books on PG; I am
especially looking for books on PG internals, architecture,
Hi Tuan Hoang Anh,
Are you able to bring up the cluster?
Please let us know what problem you are facing.
Thanks
Venkat
On Tue, Sep 27, 2011 at 12:08 PM, tuanhoanganh wrote:
> I am running PostgreSQL 9.0.1 32bit on windows 2003. Last night my disk had
> some problem and i lost some file in d
:29 PM, Alan Hodgson wrote:
> On September 26, 2011 05:49:50 AM Venkat Balaji wrote:
> > I tried restoring the backup, after taking the full backup.
> >
> > Below is what i see in the "archive destination".
> >
> > Postgres was asking for "0001
10193006F.gz
-rw--- 1 postgres postgres 219 Sep 26 02:53
00010193006F.00328508.backup.gz
Why is PG (9.0) giving the WAL archive file a "backup.gz" extension?
Please help!
Thanks
VB
On Mon, Sep 26, 2011 at 5:11 PM, Venkat Balaji wrote:
> Hello Eve
Hello Everyone,
We have had situations wherein "rsync" was executed without first executing
"pg_start_backup()" on the production data directory, and on subsequent runs
"pg_start_backup()" was executed along with "rsync". This was to avoid high
I/O load on production. We ended up getting unmatched files (e
Thanks, Richard!
I realized that I was restoring on a 32-bit server.
Regards,
Venkat
On Fri, Sep 23, 2011 at 6:59 PM, Richard Huxton wrote:
> On 23/09/11 13:53, Venkat Balaji wrote:
>
>> Linux *prod-server* 2.6.18-194.17.1.el5 #1 SMP Mon Sep 20 07:12:06 EDT
>> 2010 x86_64
restored the production backup is as follows -
Linux *backup-server* 2.6.18-194.3.1.el5PAE #1 SMP Sun May 2 04:42:25 EDT
2010 i686 i686 i386 GNU/Linux
I read somewhere that Postgres data files are not architecture-independent.
Please help!
Thanks
Venkat
On Fri, Sep 23, 2011 at 6:11 PM, Venkat
Hello Everyone,
I am testing the online backups of our production databases (this is part
of our disaster recovery plan).
After restoring the Online Backup, we tried to bring up the cluster and
ended up with the following error -
2011-09-23 07:29:04 CDT [24092]: [1-1] FATAL: incorrect checksu
Hello Everyone,
I am in the process of scheduling a VACUUM FULL for our production databases,
where downtime is extremely critical.
Can someone please help me calculate the amount of free space (or free
pages) in a table and its indexes (even after regular autovacuum or vacuum
analyze is performed)
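One way to get those numbers is the pgstattuple contrib module (a sketch;
'my_table' and 'my_table_pkey' are placeholders, and the functions scan the
whole relation, so run them off-peak):

SELECT * FROM pgstattuple('my_table');      -- dead_tuple_percent, free_space, free_percent
SELECT * FROM pgstatindex('my_table_pkey'); -- btree leaf density and fragmentation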
Syncing just the WAL archive directory every minute should not be a problem
at all (running rsync every minute against the data directory is not
recommended).
As said earlier, we had configured a warm standby for a 2 TB database, and
WAL archive generation was in the hundreds.
We did not encounter any issues in run
Yes. I would be excited to know if there is a possibility of a multi-master
replication system for Postgres.
We will be soon using 9.1 Streaming replication.
Thanks
Venkat
On Tue, Sep 13, 2011 at 1:31 AM, Aleksey Tsalolikhin <atsaloli.t...@gmail.com> wrote:
> Congratulations on the release of 9.1
Considering the size of the WAL archives (200 GB), compressing them with gzip
(you can put the command in a shell script and use it in archive_command as
well) could reduce the size to as little as 10-20 GB.
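For example, something along these lines (archive_command in postgresql.conf,
with the matching restore_command in recovery.conf on the restoring side; the
archive path is a placeholder):

archive_command = 'gzip < %p > /wal_archive/%f.gz'
restore_command = 'gunzip < /wal_archive/%f.gz > %p'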
Please let us know the results.
Thanks
Venkat
On Tue, Sep 6, 2011 at 1:03 PM, Mir
In my experience, I had configured a warm standby for a 2 TB Postgres cluster
(PostgreSQL 8.4).
Note: I do not know your database size or WAL archive generation rate.
The important considerations I made were as follows -
1. WAL archive transfer from production to standby depends on the network
bandw
files must
be fully copied as on the backup time. We cannot afford to miss any of them.
Thanks
Venkat
On Wed, Aug 31, 2011 at 5:46 AM, Craig Ringer wrote:
> On 30/08/2011 6:59 PM, Venkat Balaji wrote:
>
>> Hello Everyone,
>>
>> I have a situation here -
>>
>> I
Hello Everyone,
I have a situation here -
I am trying to restore the production online backup and recover the same.
- I had initially rsynced the data directory (excluding pg_log) and then
tarred and zipped it
- SCP'd the tarball to a different server, then untarred and unzipped it
- I go
It is recommended to identify the processes using high work_mem and to set
work_mem to a higher value at the session level instead.
In this case, all the connections using the maximum work_mem are the
potential threat. As said by Zoltan, work_mem is very high, and so is
shared_buffers.
Other considera
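For example, rather than raising work_mem globally, it can be raised just for
the session (or transaction) that runs the heavy sort or hash; a sketch with a
placeholder value:

SET work_mem = '256MB';   -- applies only to this session
-- run the expensive query here
RESET work_mem;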