On Thu, Aug 11, 2016 at 1:30 PM, Patrick B wrote:
> Hi guys,
>
> I got the following index:
>
> CREATE INDEX "ix_mo_pk" ON "mo" USING "btree" ((("id")::"text"))
>
>
> The "ID" is my primary key:
>
>> "mo_pkey" PRIMARY KEY, "btree" ("id")
>
>
> The ix_mo_pk index is not being used... But as it ha
> Now the logical decoding client connects to B (the new primary). The
> replication slot doesn't exist. So, it creates it and starts streaming.
> This is where the problem lies - as it would begin streaming from LSN 4
> (anything after what has already been committed), because I have no way
> (tha
On Sun, Aug 7, 2016 at 9:29 PM, Colin Morelli
wrote:
> Venkata,
>
> Thanks for the reply. Unfortunately something like PgPool still won't
> create the replication slots on all hosts, and record the LSN in a way that
> is reusable on the secondary.
>
Yes, that's correct; pgPool does not have anyth
On Sat, Aug 6, 2016 at 1:17 PM, Colin Morelli
wrote:
> Hey all,
>
> I'm using logical decoding in my application to capture change streams and
> ship them to Kafka. However, logical replication slots aren't included in
> the WAL and thus don't make it to replicas. In the case of a failover, it's
>
> I use a bash script to ship them. The script hasn't been changed, so
> it isn't the problem.
>
>
> *postgresql.conf:*
>
>> archive_command = 'exec nice -n 19 ionice -c 2 -n 7 archive_command.bash
>> "%p" slave01 slave02'
>
> *archive_command.bash:*
>
> Basically we use TAR to ship through s
sible that
1000s of requests can be served, but serving them concurrently will be far
from realistic with the kind of hardware capacity you have. So, the solution
would be to have an appropriate tuning and benchmarking process in place.
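Benchmarking of that kind is commonly done with pgbench, which ships with PostgreSQL. A minimal sketch (the database name `bench` and the scale/client counts below are illustrative, not from the thread):

```shell
# Initialize a pgbench database at scale factor 50
pgbench -i -s 50 bench

# Run 100 concurrent clients across 4 worker threads for 60 seconds
pgbench -c 100 -j 4 -T 60 bench
```

The reported TPS under increasing `-c` values gives a realistic ceiling for the hardware, rather than a theoretical request count.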
Regards,
Venkata B N
Fujitsu, Australia
> On August 1, 2016 at 10:30:48
On Sun, Jul 31, 2016 at 12:07 PM, Silk Parrot wrote:
> Hi,
>
> We are trying to build a multi-tenant application and are debating which
> approach we should take (also, my understanding is that the pgbouncer
> connection pool doesn't work across different user/database pairs):
>
> 1. For each
On Thu, Jun 30, 2016 at 11:15 PM, Nick Babadzhanian wrote:
> Setup:
> 2 PostgreSQL servers are geographically spread. The first one is used for
> an application that gathers data. It is connected to the second database
> that is used to process the said data. Connection is not very stable nor is
On Fri, Jun 17, 2016 at 5:36 AM, avi Singh
wrote:
> I am working on a project to upgrade postgresql from 9.4 to 9.5 we use
> slony for replication
>
>
> Phase 1 problem
>
> 1. We had slony version slony1-94-2.2.2-1.rhel5.x86_64 installed; when I
> tried testing the upgrade on my test env it failed because
On Thu, Jun 9, 2016 at 8:01 PM, Sunil N Shinde
wrote:
> Thanks Venkata.
>
>
>
> I am considering the latest version now, i.e. 9.4 or 9.5, on Linux 6.
>
> Is there any difference in setup from 9.1 to 9.5?
>
There is no difference in the setup. Streaming replication in version
9.5 is a lot better wi
>
>
>
> I need to do the setup for High availability function.
>
> Also want to implement load balancing for 02 nodes.
>
You will have to build streaming replication, which was introduced in
PostgreSQL 9.0.
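For the 9.x series, that setup boils down to taking a base backup of the master and pointing the standby at it with a recovery.conf. A minimal sketch (the hostname and user below are placeholders):

```
# recovery.conf on the standby (pre-v10 style)
standby_mode = 'on'
primary_conninfo = 'host=master.example.com port=5432 user=replicator'
```

With `hot_standby = on` in the standby's postgresql.conf, the standby can also serve read-only queries, which is the basis for load balancing across the two nodes.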
> I think PGPool will be required for that. Can I use PGPool without cost?
>
pgpool-II is a
On Thu, Jun 2, 2016 at 11:43 AM, Jeff Beck wrote:
> Hi-
> We have a master (pg 9.4.4 on Ubuntu 14.10) and a slave (pg 9.4.8 on
> Centos 7). During a period of heavy use, the slave began complaining
> that the “requested WAL segment xx has already been removed”. But
> the WAL segment was still
Oops, I missed including pgsql-general in my earlier replies.
> > I have data_checksums switched on so am suspecting a streaming
> > replication bug. Anyone know of a recent bug which could have caused
> > this?
> >
>
> I cannot conclude at this point. I encoun
On Wed, Jun 1, 2016 at 3:13 AM, Jim Longwill wrote:
> I am trying to set up a 2nd, identical db server (M2) for development, and
> I've run into a problem with starting up the 2nd Postgres installation.
>
> Here's what I've done:
> 1) did a 'clone' of 1st (production) machine M1 (so both machine
On Mon, May 30, 2016 at 11:37 PM, Brian Sutherland
wrote:
> I'm running a streaming replication setup with PostgreSQL 9.5.2 and have
> started seeing these errors on a few INSERTs:
>
> ERROR: could not read block 8 in file "base/3884037/3885279": read
> only 0 of 8192 bytes
>
These errors a
On Sun, May 22, 2016 at 6:38 PM, Venkata Balaji N wrote:
>
> On Sun, May 22, 2016 at 1:59 AM, Tom Lane wrote:
>
>> Venkata Balaji N writes:
>> > On Sat, May 21, 2016 at 1:04 AM, Tom Lane wrote:
>> >> http://www.unix.com/man-page/opensolaris/3c/atomic_
Hi Asif,
Sorry, I should have responded first. Thanks for the inputs; my replies are
inline -
> Any help would be appreciated. Do i need to give any particular CFLAGS ?
>>
>>
>> Below is the *configure *command -
>>
>>
>> ./configure --prefix=/opt/postgres/9.5.3 CC=/opt/SUNWspro/bin/cc
>> 'CFLAGS
On Sun, May 22, 2016 at 1:59 AM, Tom Lane wrote:
> Venkata Balaji N writes:
> > On Sat, May 21, 2016 at 1:04 AM, Tom Lane wrote:
> >> http://www.unix.com/man-page/opensolaris/3c/atomic_cas/
> >> http://docs.oracle.com/cd/E23824_01/html/821-1465/atomic-cas-3c.html
>
On Sat, May 21, 2016 at 1:04 AM, Tom Lane wrote:
> Venkata Balaji N writes:
> > "make" command is generating the following error while compiling
> > postgresql-9.5.3 on Solaris SPARC.
>
> > Undefined first referenced
> > sy
Hi,
"make" command is generating the following error while compiling
postgresql-9.5.3 on Solaris SPARC.
I tried compiling 9.2 and 9.3, works fine. This is only happening on 9.5.
../../src/port/libpgport_srv.a ../../src/common/libpgcommon_srv.a -lnsl
-lrt -lsocket -lm -o postgres
Undefined
On Tue, May 17, 2016 at 6:28 AM, Lucas Possamai
wrote:
> hmm.. thanks for all the answers guys...
>
>
> One more question: are those IDLE connections using the server's
> resources?
> To solve that problem I would need a connection pool, right?
>
> Would the connection pool solve that IDLE conn
On Sat, May 14, 2016 at 5:38 PM, Venkata Balaji N wrote:
>
> On Wed, May 11, 2016 at 9:04 PM, wrote:
>
>> I apologise for the missing data.
>>
>> we are running 9.1.15 on debian servers.
>>
>
> There is a possibility of making the old master standby if
On Sat, May 14, 2016 at 12:27 PM, Jay Howard
wrote:
> I'm seeing long-running transactions (pg_dump) canceled on the standby
> when there are a lot of inserts happening on the master. This despite my
> having set max_standby_streaming_delay to -1 on the standby.
>
Do you have hot_standby_feedback enabled on the standby?
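The standby's feedback setting can be checked and enabled like this (a sketch; changing it requires a reload on the standby):

```sql
-- On the standby: check the current setting
SHOW hot_standby_feedback;

-- To enable it, set in the standby's postgresql.conf and reload:
--   hot_standby_feedback = on
```

With feedback on, the master avoids vacuuming away rows the standby's long-running queries still need, which is a common cause of such cancellations alongside `max_standby_streaming_delay`.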
On Wed, May 11, 2016 at 9:04 PM, wrote:
> I apologise for the missing data.
>
> we are running 9.1.15 on debian servers.
>
There is a possibility of making the old master a standby if you have
promoted the standby after cleanly shutting down the master. I tested this in
9.2.x and later versions. This
On Wed, May 11, 2016 at 2:31 PM, wrote:
> Hi All,
>
> we are currently using streaming replication on multiple node pairs. We
> are seeing some issues, but I am mainly interested in clarification.
>
> When a failover occurs, we touch the trigger file, promoting the previous
> slave to master. Th
On Sun, Apr 24, 2016 at 2:30 AM, Tomas J Stehlik
wrote:
> Hello,
>
> There was a corruption to the file system due to sudden shutdown of Windows
> 7.
>
> The attempt to connect to one of the databases results in multiple errors,
> like the following one:
> ERROR: invalid page in block 58 of rela
> What would be the effect of suddenly introducing a 1-2 GB of WAL archives
> to the WAL restore folder on the slave? Would there be a big performance
> effect on the incoming queries to the slave? Would the slave be available
> for queries while the WAL logs are restored into the DB?
>
If the Que
On Tue, Feb 23, 2016 at 10:02 AM, Ashish Chauhan wrote:
> Thanks Venkata, I am able to set up replication now. Just wondering: when I
> check replication_delay and lag, I am getting a negative number, any idea why?
>
>
>
> receive|replay| replication_delay | lag
>
> --+---
On Mon, Feb 22, 2016 at 3:20 PM, Andrew Smith wrote:
> Hello,
>
> I am setting up a proof of concept database to store some historical
> data. Whilst I've used PostgreSQL a bit in the past this is the first time
> I've looked into disk usage due to the amount of data that could
> potentially be
On Fri, Feb 19, 2016 at 6:24 PM, Ashish Chauhan
wrote:
> Below is recovery.conf on slave
>
>
> #---
> # STANDBY SERVER PARAMETERS
>
> #---
> #
> # standb
>
> How do I setup replication between DR server and slave server while slave
> server and master server are running? I cannot stop master server. Can
> someone please guide with steps?
>
>
>
Steps are pretty much similar. You can set up replication between the slave and
DR by using the backup of the master
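As a sketch, the DR standby could instead be seeded from the running slave with pg_basebackup, so the master stays untouched (cascading from a standby requires 9.2 or later; hostname, user, and path below are placeholders):

```shell
# Take a base backup from the running slave, streaming the needed WAL
pg_basebackup -h slave.example.com -U replicator \
    -D /var/lib/postgresql/9.x/main -X stream -P
```

The DR node's recovery.conf then points its primary_conninfo at whichever upstream it should follow.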
On Thu, Feb 18, 2016 at 1:54 AM, Data Cruncher wrote:
> We will be creating multiple databases in a cluster (instance). Is there
> any way to separate shared_buffers for each database? Looks like not since
> PG does not allow user created shared buffers.
The shared_buffers parameter is for the whole cluster
On Tue, Nov 17, 2015 at 3:24 PM, zh1029 wrote:
> Hi,
>
> While start PostgreSQL(9.3.6) and execute pg_restore soon after PostgreSQL
> bringing up. I encounter pg_restore failure because of deadlock detected.
>
> postgres[2737]: [3-1] LOG: process 2737 detected deadlock while waiting
> for
> Acce
On Fri, Nov 13, 2015 at 6:49 AM, db042190
wrote:
> Hi. We have a postgres 9.1 query in a pentaho job (table input component
> contains the query and is followed by various switches) that runs nightly.
> More and more frequently (about half the time now), the query returns no
> data (or appears t
be upgraded to 9.4.x.
After a detailed analysis on memory usage by OS and other processes, is it
safe to advise on configuring shared_buffers to 2 GB ?
Any advise will be appreciated.
Regards,
Venkata Balaji N
Fujitsu Australia
On Thu, Sep 17, 2015 at 1:27 AM, Ramesh T
wrote:
> Hi All,
>    I'm using the ora2pg tool. I exported the data, but I have to change the
> schema before importing to the Postgres database; the exported data is more than a GB.
> Can you please let me know how to change the name in the data script?
>
Do you mean that
d to be careful regarding
privileges/grants and dependencies on the table.
Or the second approach would be --
Create a table called users_orig from the "users" table, execute SELECT
on the users_orig table, and let the TRUNCATE/data-repopulation operation run on
the "users" table. This will be a problem if the data is huge; it might take up
your hardware resources.
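That second approach, sketched in SQL (table names follow the discussion; the reload step is elided):

```sql
-- Snapshot the table for readers
CREATE TABLE users_orig AS SELECT * FROM users;

-- Readers query users_orig while the original is repopulated
TRUNCATE users;
-- ... reload data into users ...

-- Once readers have switched back to users
DROP TABLE users_orig;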
The third and simplest approach would be to execute the SELECT and the
TRUNCATE at different times.
All of the above approaches are suggested without considering data size and other
critical aspects of your environment, which you need to take into account.
Regards,
Venkata Balaji N
Fujitsu Australia
he database end. If you want the process not to continue
anymore at the database end, then you need to manually cancel or terminate it
there using pg_cancel_backend() or pg_terminate_backend().
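A sketch of doing that from psql (the PID shown is hypothetical; note that before 9.2 the pg_stat_activity column is named procpid):

```sql
-- Find the backend's PID
SELECT pid, state, query FROM pg_stat_activity;

-- Cancel only the running query; the session stays connected
SELECT pg_cancel_backend(12345);

-- Or terminate the whole backend connection
SELECT pg_terminate_backend(12345);
```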
Regards,
Venkata Balaji N
Fujitsu Australia
/pg_archivecleanup
> /var/lib/postgresql/9.1/wal_archive/ %r'
>
>
> How can I reduce the number of WAL files on the hot_stanby slave ?
>
If the number of WAL files in pg_xlog is growing, then you need to look at
why the files are not getting deleted.
Are the master and standby in sync? You can check that by getting the
current pg_xlog position on the standby.
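A sketch of that check, using the 9.x function names (these were renamed to *_wal_* in version 10):

```sql
-- On the master
SELECT pg_current_xlog_location();

-- On the standby: last WAL received and last WAL replayed
SELECT pg_last_xlog_receive_location(),
       pg_last_xlog_replay_location();
```

If the standby's receive location keeps pace with the master's current location, the two are in sync and the WAL growth has another cause.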
Regards,
Venkata Balaji N
Fujitsu Australia
e
slave to a consistent state. Nothing can be advised outright without
knowing your replication configuration/architecture details.
Regards,
Venkata Balaji N
Fujitsu Australia
t.relname
> WHERE idstat.idx_scan < 200
> AND indexdef !~* 'unique'
> ORDER BY idstat.schemaname,
> idstat.relname,
> indexrelname;
>
Not sure why you have "< 200" there.
Regards,
Venkata Balaji N
Fujitsu Australia
On Fri, May 22, 2015 at 6:52 AM, Piotr Gasidło
wrote:
> Got strange problem. Unable to repeat, but got logs.
>
> Simple master-slave using streaming replication.
> Master is running. Slave is down.
> Segment 00044C4D0090 was successfully archived and sent
> from master to slave.
>
> N
r archiving is not working as expected.
It is the checkpoint process that cleans up the WAL files from pg_xlog; if
that is not happening
Regards,
Venkata Balaji N
Fujitsu Australia
?
> How do I handle this case (pg_xlog folder size) when production people are
> entering data in bulk? Kindly suggest. Am I missing something in my
> postgresql.conf or somewhere else?
>
What is the *archive_timeout* value you have set?
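For reference, archive_timeout forces a WAL segment switch (and hence archiving) after the given number of seconds even when the system is idle; the value below is illustrative:

```
# postgresql.conf
archive_timeout = 300   # switch a segment at least every 5 minutes
```

A low value bounds how stale the archive can get, at the cost of more 16 MB segments being shipped during quiet periods.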
Regards,
Venkata Balaji N
Fujitsu Australia
On Tue, May 5, 2015 at 7:57 AM, Edson F. Lidorio
wrote:
>
>
> On 04-05-2015 00:46, Venkata Balaji N wrote:
>
>> You do not see the above WAL file in the archive directory ?
>> "/mnt/server/archivedir" is shared between master and slave databases ? The
>
re rare and are not deleting much, then frequent
VACUUM FULL is not ideal.
Regards,
Venkata Balaji N
Fujitsu Australia
e at the OS level
(this is not related to the pg_xlog directory).
As VACUUM is a data-changing operation, "pg_xlog" will also have the
WAL data (modifications) written at the time of vacuuming.
http://www.postgresql.org/docs/9.4/static/sql-vacuum.html
Regards,
Venkata Balaji N
Fujitsu Australia