[GENERAL] Duplicate rows inserted in PGSQL 9.4 for Windows, and accented character issue

2015-09-04 Thread phiroc

Hello,

I am using PGSQL 9.4 for Windows on Windows 7.

When I insert 9 rows into a table called FINANCIAL_INSTIT, 24 rows end up being 
created!

Furthermore, accented characters encoded in UTF-8 in the original SQL file, 
such as é, end up garbled (for example as "Ã©").

Any help would be much appreciated.

Philippe




---

insert into "FINANCIAL_INSTIT" ("FINANCIAL_INSTIT_ID", "NAME", "BANK_ID_CODE", 
"ADDRESS_ID")
values(DEFAULT, 'Crédit Mutuel Arkea', 'CMBRFR2BPBS', 1);

insert into "FINANCIAL_INSTIT" ("FINANCIAL_INSTIT_ID", "NAME", "BANK_ID_CODE", 
"ADDRESS_ID")
values(DEFAULT, 'LCL Banque Privée', 'LCLPFRP1', 2);

insert into "FINANCIAL_INSTIT" ("FINANCIAL_INSTIT_ID", "NAME",  "BANK_ID_CODE", 
"ADDRESS_ID")
values(DEFAULT, 'BNP Paribas', 'BNPAFRPH', 3);

insert into "FINANCIAL_INSTIT" ("FINANCIAL_INSTIT_ID", "NAME", "BANK_ID_CODE", 
"ADDRESS_ID")
values(DEFAULT, 'Abn Amro Bank N V Uk Branch', 'FTSBGB2L', 4);

insert into "FINANCIAL_INSTIT" ("FINANCIAL_INSTIT_ID", "NAME", "BANK_ID_CODE", 
"ADDRESS_ID")
values(DEFAULT, 'Aig Investments Europe Limited', 'AIGOGB21', 5); 

insert into "FINANCIAL_INSTIT" ("FINANCIAL_INSTIT_ID", "NAME", "BANK_ID_CODE", 
"ADDRESS_ID")
values(DEFAULT, 'Abn Amro Mezzanine (Uk) Limited', 'ABAZGB21', 6);

insert into "FINANCIAL_INSTIT" ("FINANCIAL_INSTIT_ID", "NAME", "BANK_ID_CODE", 
"ADDRESS_ID")
values(DEFAULT, 'Bank of America N A', 'FTSBGB2L', 7);

insert into "FINANCIAL_INSTIT" ("FINANCIAL_INSTIT_ID", "NAME", "BANK_ID_CODE", 
"ADDRESS_ID")
values(DEFAULT, 'Amerasia Bank', 'AIABUS31', 8);

insert into "FINANCIAL_INSTIT" ("FINANCIAL_INSTIT_ID", "NAME", "BANK_ID_CODE", 
"ADDRESS_ID")
values(DEFAULT, 'Falcon International Bank', 'FAIOUS41', 9);

---

-- Table: "FINANCIAL_INSTIT"

-- DROP TABLE "FINANCIAL_INSTIT";

CREATE TABLE "FINANCIAL_INSTIT"
(
"FINANCIAL_INSTIT_ID" serial NOT NULL,
  "NAME" character varying(40),
  "BANK_ID_CODE" character varying(34),
  "ADDRESS_ID" integer,
  CONSTRAINT "FINANCIAL_INSTIT_pkey" PRIMARY KEY ("FINANCIAL_INSTIT_ID"),
  CONSTRAINT "FINANCIAL_INSTIT_ADDRESS_ID_fkey" FOREIGN KEY ("ADDRESS_ID")
  REFERENCES "ADDRESS" ("ADDRESS_ID") MATCH SIMPLE
  ON UPDATE NO ACTION ON DELETE NO ACTION
)
WITH (
  OIDS=FALSE
);
ALTER TABLE "FINANCIAL_INSTIT"
  OWNER TO postgres;
COMMENT ON TABLE "FINANCIAL_INSTIT"
  IS 'Financial Institution';
COMMENT ON COLUMN "FINANCIAL_INSTIT"."BANK_ID_CODE" IS 'Bank Identification 
Code (BIC) = SWIFT Code';
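
One way to double-check the load, assuming the file is run through psql (the
database name "bankdb" and file name "inserts.sql" below are placeholders):
force a UTF-8 client encoding, stop at the first error, and count the rows
afterwards.

PGCLIENTENCODING=UTF8 psql -d bankdb -v ON_ERROR_STOP=1 -f inserts.sql
psql -d bankdb -c 'SHOW server_encoding;'
psql -d bankdb -c 'SHOW client_encoding;'
psql -d bankdb -c 'SELECT count(*) FROM "FINANCIAL_INSTIT";'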

--




Re: [GENERAL] Duplicate rows inserted in PGSQL 9.4 for Windows, and accented character issue

2015-09-04 Thread phiroc

If I drop the FINANCIAL_INSTIT table, create it again, and run the insert 
commands, it now contains 9 rows as expected.
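
For completeness, a quick way to check whether the inserts were applied more
than once (a sketch; it assumes the NAME values should be unique and that the
database is called "bankdb"):

psql -d bankdb -c 'SELECT "NAME", count(*) FROM "FINANCIAL_INSTIT" GROUP BY "NAME" HAVING count(*) > 1;'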


----- Original Message -----
From: phi...@free.fr
To: pgsql-general@postgresql.org
Sent: Friday, 4 September 2015 09:45:36
Subject: [GENERAL] Duplicate rows inserted in PGSQL 9.4 for Windows, and accented 
character issue






Re: [GENERAL] Slave promotion problem...

2015-09-04 Thread marin

On 2015-08-31 16:05, ma...@kset.org wrote:

On 2015-08-31 14:38, Martín Marqués wrote:

On 31/08/15 at 03:29, ma...@kset.org wrote:

Last week we had some problems on the master server which caused a
failover on the slave (the master was completely unresponsive due to
reasons still unknown). The slave received the promote signal (pg_ctl
promote) and logged that in the logs:
2015-08-28 23:05:10 UTC [6]: [50-1] user=,db= LOG:  received promote
request
2015-08-28 23:05:10 UTC [467]: [2-1] user=,db= FATAL:  terminating
walreceiver process due to administrator command

5 hours later the slave still didn't promote. Meanwhile we fixed the
master and restarted it. The slave was restarted and it behaved just
like the promote signal didn't arrive, connecting to the master as a
regular slave.


Aren't there any further logs after the walreceiver termination?
Up to here everything looks fine, but we have no idea what was logged
afterwards.

There are logs (quite a few, roughly 5 hours' worth); every second there is
something like this:
2015-08-28 23:05:12 UTC [79867]: [1-1] user=[unknown],db=[unknown]
LOG:  connection received: host=[local]
2015-08-28 23:05:12 UTC [79867]: [2-1] user=postgres,db=postgres LOG:
connection authorized: user=postgres database=postgres
This logs the connection of the process that probes whether the server is
alive.


I was expecting to see something like:
redo done at x
last completed transaction was at log time xxx

But those lines didn't appear even after 5 hours. As I understand it, these
are written before the server uses the restore_command to get WAL and
history files from the archive.


After careful testing of two scenarios:
1. The master is alive (or at least the slave thinks so) and the slave is 
signaled to promote.
2. The master is dead and the slave is signaled to promote.
We have come to the conclusion that in scenario 1 the slave requires an 
additional WAL file from the archive before the "redo done at xx" message 
appears. Knowing that, we established that the problem was in the 
restore_command script (a wrapper around several rsync commands), which 
hung indefinitely.
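
For illustration, a minimal sketch of a restore_command wrapper that cannot
hang indefinitely (the archive host, paths and the 300-second limit are
hypothetical; GNU timeout is assumed to be available). If the rsync stalls,
timeout kills it and the command returns non-zero, which recovery treats as
"file not available" instead of blocking forever.

#!/bin/sh
# restore_wal.sh -- called from recovery.conf as:
#   restore_command = '/usr/local/bin/restore_wal.sh %f %p'
WALFILE="$1"   # %f: name of the WAL or history file requested
TARGET="$2"    # %p: path where PostgreSQL expects the file
# Kill the transfer if the archive host stops responding.
exec timeout 300 rsync -a "archive-host:/wal_archive/${WALFILE}" "${TARGET}"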





I am unsure if this promotion failure is a bug/glitch, but the promote 
procedure is automated and has been tested a couple of hundred times, so I am 
certain we initiated the promote correctly.


Are you using homemade scripts? Maybe you need to test them more
thoroughly, with different environment parameters.


We use a custom script for the restore_command, but it seems that it
was not invoked.

Regards,
Mladen Marinović






Re: [GENERAL] Postgresql C extension and SIGSEGV

2015-09-04 Thread Albe Laurenz
Etienne Champetier wrote:
> We are planning to add a C extension 
> (https://github.com/petropavel13/pg_rrule) to our shared
> PostgreSQL cluster, and are wondering what the risks are (looking for the 
> worst-case scenario here).
> 
> If there is a SIGSEGV, SIGBUS, SIGABRT, etc., is the whole server stopping, or 
> just the request?

All client connections will be terminated and the server will initiate
recovery from the latest checkpoint.  Until that is done, no client
can connect to the database.

That is something you normally don't want to have in a production database.

> Knowing that the extension is only used in SELECT statements, is there a risk 
> of (on-disk) data corruption?

Even when run from a SELECT, a C function can do anything it wants with the 
server.

> Is the risk limited to the current database? (the extension will only be used 
> by 1 application with 1
> database, and we prefer not to impact other applications/databases)

The C function can happily start removing arbitrary files owned by
the PostgreSQL user if it chooses to, so no.

> Are there any techniques to limit/mitigate these risks? 
> (configuration/compile flags/...)

You should only use C functions that you trust.

Code review of the extension and good testing are your best protection.
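
For example, one narrow mitigation is to keep the extension's functions out
of PUBLIC and grant EXECUTE only to the application role. This is a sketch:
the schema, role and database names are placeholders, and the extension name
is assumed from the repository; it only limits who can call the code, it does
nothing about bugs triggered by a legitimate caller.

psql -d appdb <<'SQL'
CREATE SCHEMA IF NOT EXISTS rrule_ext;
CREATE EXTENSION IF NOT EXISTS pg_rrule SCHEMA rrule_ext;  -- requires superuser
REVOKE ALL ON ALL FUNCTIONS IN SCHEMA rrule_ext FROM PUBLIC;
GRANT USAGE ON SCHEMA rrule_ext TO app_role;
GRANT EXECUTE ON ALL FUNCTIONS IN SCHEMA rrule_ext TO app_role;
SQL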

Yours,
Laurenz Albe



Re: [GENERAL] bdr admin role

2015-09-04 Thread Ray Stell

On 9/3/15 11:36 PM, Craig Ringer wrote:

Please show your connection strings, and the contents of bdr.bdr_nodes
and bdr.bdr_connections.

On 3 September 2015 at 05:00, Ray Stell  wrote:

This doc specifies running initdb with the admin user "postgres":
http://bdr-project.org/docs/stable/quickstart-instances.html
but if I do that, the supervisor falls over with:
$ cat bdr5598.log
LOG:  registering background worker "bdr supervisor"
LOG:  database system was shut down at 2015-09-02 16:04:45 EDT
LOG:  starting up replication identifier with ckpt at 0/171EBF8
LOG:  MultiXact member wraparound protections are now enabled
LOG:  starting background worker process "bdr supervisor"
LOG:  autovacuum launcher started
LOG:  database system is ready to accept connections
LOG:  Created database bdr_supervisordb (oid=16384) during BDR startup
LOG:  worker process: bdr supervisor (PID 21666) exited with exit code 1
FATAL:  role "postgresql" does not exist
LOG:  starting background worker process "bdr supervisor"

It works if I init with "-U postgresql"


Forgive me, I'm a little confused by your request.  As a general rule, I 
never use the commonly used defaults for the admin username; that's just 
a long-standing habit of mine in most environments. The less the 
black hats know, the better.  So, if I initdb with "-U fred" or 
whatever, I would expect that to be detected by the bdr process, rather 
than having some default value hardcoded.  At the point of startup with 
pg_ctl, there is no connection string or nodes in play.  The log records 
I posted were right after the "pg_ctl ... start" of the first db.

I was walking through the demo exercise here and, silly me, changed the 
admin role name without thinking about it.  I was just reporting that it 
seems to work only if the admin value is this default.  The demo 
works fine if I use the string "postgresql" as the admin role.  It looks 
like you might want to build -U support into bdr at some point.
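
For reference, a minimal sketch of the setup being described, i.e. initdb
with a non-default superuser name (the data directory and the role name
"fred" are hypothetical):

initdb -D /var/lib/pgsql/9.4/bdr_node1 -U fred
pg_ctl -D /var/lib/pgsql/9.4/bdr_node1 -l bdr_node1.log start
psql -U fred -d postgres -c 'SELECT current_user;'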


I did build from source as we use some non-standard flags locally, so 
maybe something is amiss in my build?   I'll try to go back and use the 
pre-built software sometime soon.


Ray




Re: [GENERAL] Postgresql C extension and SIGSEGV

2015-09-04 Thread Etienne Champetier
Hi and thanks for the answer

2015-09-04 11:45 GMT+02:00 Albe Laurenz :

> Etienne Champetier wrote:
> > We are planning to add a C extension (
> https://github.com/petropavel13/pg_rrule) to our shared
> > postgresql cluster, and wondering what are the risk? (looking for the
> worst case scenario here)
> >
> > If there is a SIGSEGV, SIGBUS, SIGABRT ..., is the whole server
> stopping, or just the request?
>
> All client connections will be terminated and the server will initiate
> recovery from the latest checkpoint.  Until that is done, no client
> can connect to the database.
>
> That is something you normally don't want to have in a production database.
>
OK, so hitting assert() is bad(tm) and I should really prevent all SIG*
from happening.


>
> > Knowing that the extension is only used in select statement, is there a
> risk of (on disk) data
> > corruption?
>
> Even when run from a SELECT, a C function can do anything it wants with
> the server.
>
I'm thinking about "trusted" code with bugs in it, like an out-of-bounds write
or similar.


>
> > Is the risk limited to the current database? (the extension will only be
> used by 1 application with 1
> > database, and we prefer not to impact other applications/databases)
>
> The C function can happily start removing arbitrary file owned by
> the PostgreSQL user if it chooses to, so no.
>
> > Are there any techniques to limit/mitigate these risks?
> (configuration/compile flags/...)
>
> You should only use C functions that you trust.
>
> Code review of the extension and good testing are your best protection.
>

I trust that the extension will not do harm on purpose, but it's C, and
there is almost always a bug :)
pg_rrule uses libical, which is 50k SLOC, so code review is out, but I've
already tested all the data in the database,
and I'm playing with afl-fuzz (and have already found a nice out-of-bounds write).
I just wanted to know the worst-case scenario.


> Yours,
> Laurenz Albe
>

Thanks again
Etienne


[GENERAL] Any thoughts on a better approach to this query?

2015-09-04 Thread David G. Johnston
Formatted query attached in addition to placing it inline.  The commentary
is inline with the query.  Basically I've already solved this problem but
was wondering if someone has a different perspective, or simply
observations.

TIA,

David J.

/*
For a given id there are multiple linked values of differing types.
Types "A" and "B" are important and, if present, should be explicitly
assigned.
It is possible that more than one link is associated with a given type.
If either A or B is lacking an explicit value it is assigned a value
from:
1. any extra As that are present
2. any extra Bs that are present
3. any extra non-A/B values that are present

The final result contains values for A and B and
an array of values for whatever links went unused.
*/
WITH demo AS (
-- A sample record where B needs to be assigned and ends
-- up using the excess A
SELECT * FROM (
VALUES (1,1,'A'), (1,2,'A'), (1,3,'C'), (1,4,'D')
) vals (id, link, type)
)
, link_allocation_1 AS (
SELECT id,

-- Grab the first A
(SELECT link
   FROM demo
  WHERE demo.id = master.id
AND type = 'A' LIMIT 1
) AS type_a_first_link,

-- Grab the first B
(SELECT link
   FROM demo
  WHERE demo.id = master.id
AND type = 'B' LIMIT 1
) AS type_b_first_link,

-- Any additional As and Bs are placed into an array
-- and appended to an array constructed from all of the non-A/B
ARRAY(
SELECT link
  FROM demo
 WHERE demo.id = master.id
   AND type = 'A'
  ORDER BY link
OFFSET 1) ||
ARRAY(
SELECT link
  FROM demo
 WHERE demo.id = master.id
   AND type = 'B'
  ORDER BY link
OFFSET 1) ||
ARRAY(
SELECT link
  FROM demo
 WHERE demo.id = master.id
   AND type NOT  IN ('A','B')
  ORDER BY link)  AS unassigned_links
FROM (SELECT DISTINCT id FROM demo) master
)
, allocate_unassigned_links AS (
SELECT *,
-- Determine how many allocations from the "extra" array are required
-- so that we can slice them out of the final result
CASE WHEN type_a_first_link IS NULL
 THEN 1
 ELSE 0 END +
CASE WHEN type_b_first_link IS NULL
 THEN 1
 ELSE 0
END   AS reassign_count,
-- A always gets the first extra if needed
CASE WHEN type_a_first_link IS NULL
THEN unassigned_links[1]
ELSE type_a_first_link
END   AS actual_a_link,
-- B gets the first extra unless A took it, in which case it gets the second one
CASE WHEN type_b_first_link IS NULL THEN
 CASE WHEN type_a_first_link IS NOT NULL
  THEN unassigned_links[1]
  ELSE unassigned_links[2]
 END
 ELSE type_b_first_link
END  AS actual_b_link
FROM link_allocation_1
)
SELECT id,
-- For A and B, flag whether the value was pulled from the extras
   type_a_first_link IS NULL AS a_link_is_missing,
   actual_a_link,
   type_b_first_link IS NULL AS b_link_is_missing,
   actual_b_link,
-- Now slice off the first portion of the extras array based upon the assignment count
   unassigned_links[1+reassign_count:array_length(unassigned_links,1)]
AS final_unassigned_links,
-- output the unsliced array for visual comparison
   unassigned_links AS pre_allocation_unassigned_links
  FROM allocate_unassigned_links


Output =>
 id | a_link_is_missing | actual_a_link | b_link_is_missing | actual_b_link | final_unassigned_links | pre_allocation_unassigned_links
----+-------------------+---------------+-------------------+---------------+------------------------+---------------------------------
  1 | False             | 1             | True              | 2             | {3,4}                  | {2,3,4}





[GENERAL] in defensive of zone_reclaim_mode on linux

2015-09-04 Thread Ben Chobot
Over the last several months, I've seen a lot of grumbling about how 
zone_reclaim_mode eats babies, kicks puppies, and basically how you should just 
turn it off and live happily ever after. I thought I should add a 
counterexample, because that advice has not proven very good for us.

Some facts about us:
- postgres 9.3.9
- ubuntu trusty kernels (3.13.0-29-generic #53~precise1-Ubuntu) 
- everything in AWS, on 32-core HVM instances with 60GB of ram
- 6GB shared buffers
- mostly simple queries

Generally, this has worked out pretty well for us. However, we've recently 
added a bunch more load, which, because we're sharded and each shard has its 
own user, means we've added more concurrently active users. ("A bunch" = ~300.) 
We are big pgBouncer users, but because we also use transaction pooling, 
pgBouncer can only do so much to reuse existing connections. (SET ROLE isn't an 
option.)

The end result is that recently, we've been running a dumb number of backends 
(between 600 and 1k) - which are *usually* mostly idle, but there are frequent 
spikes of activity when dozens of them wake up at once. Even worse, those 
spikes tend to also come with connection churn, as pgBouncer tears down 
existing idle connections to build up new backends for different users.

So our load would hover under 10 most of the time, then spike to over 100 for a 
minute or two. Connections would get refused, the system would freeze up... and 
then everything would go back to normal. The solution? Turning on 
zone_reclaim_mode.

It appears that connection churn is far more manageable to Linux with 
zone_reclaim_mode enabled. I suspect that our dearth of large, complex queries 
helps us out as well. Regardless, our systems no longer desperately seek free 
memory when many idle backends wake up while others are getting torn down 
and replaced. Babies and puppies rejoice. 

Our situation might not apply to you. But if it does, give zone_reclaim_mode a 
chance. It's not (always) as bad as others have made it out to be.
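
For reference, enabling it on a running system and persisting it across
reboots is just a sysctl (a sketch; these are the standard Linux kernel knobs
and no PostgreSQL restart is needed):

sysctl -w vm.zone_reclaim_mode=1
echo 'vm.zone_reclaim_mode = 1' >> /etc/sysctl.conf
cat /proc/sys/vm/zone_reclaim_mode    # verify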



[GENERAL] Buffers: shared hit/read to shared_buffers dependence

2015-09-04 Thread Pavel Suderevsky
Hi,

While working through the "Understanding explain" manual
(http://www.dalibo.org/_media/understanding_explain.pdf) I ran into a strange
situation: a 65MB table is completely placed in the cache with
shared_buffers=320MB, but not with shared_buffers <= 256MB. In fact, the
caching behaviour in my case is the same with either 256MB or 32MB. In my
mind, a shared_buffers of 256MB should be enough to cache a 65MB table, but
it isn't. Could you please explain this behaviour?

Steps:

understanding_explain=# select pg_size_pretty(pg_relation_size('foo'));
 pg_size_pretty

 65 MB
(1 row)
=

postgres=# show shared_buffers ;
 shared_buffers

 320MB
(1 row)

postgres=# \c understanding_explain
You are now connected to database "understanding_explain" as user
"postgres".
understanding_explain=# EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM foo;
   QUERY PLAN


 Seq Scan on foo  (cost=0.00..17500.60 rows=100 width=37) (actual
time=0.786..143.686 rows=100 loops=1)
   Buffers: shared read=8334
 Planning time: 3.796 ms
 Execution time: 195.557 ms
(4 rows)

understanding_explain=# EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM foo;
  QUERY PLAN

---
 Seq Scan on foo  (cost=0.00..17500.60 rows=100 width=37) (actual
time=0.009..83.546 rows=100 loops=1)
   Buffers: shared hit=8334
 Planning time: 0.029 ms
 Execution time: 129.499 ms
(4 rows)

=
[root@dbtest3 ~]# systemctl stop postgres
[root@dbtest3 ~]# sync
[root@dbtest3 ~]# echo 3 > /proc/sys/vm/drop_caches
[root@dbtest3 ~]# systemctl start postgres
=

understanding_explain=# show shared_buffers;
 shared_buffers

 256MB
(1 row)

understanding_explain=# EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM foo;
   QUERY PLAN


 Seq Scan on foo  (cost=0.00..17500.60 rows=100 width=37) (actual
time=0.772..126.242 rows=100 loops=1)
   Buffers: shared read=8334
 Planning time: 5.164 ms
 Execution time: 181.306 ms
(4 rows)

understanding_explain=# EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM foo;
  QUERY PLAN

---
 Seq Scan on foo  (cost=0.00..17500.60 rows=100 width=37) (actual
time=0.029..91.686 rows=100 loops=1)
   Buffers: shared hit=32 read=8302
 Planning time: 0.025 ms
 Execution time: 136.584 ms
(4 rows)

With every new query execution, 32 hits are added to the shared hit value.
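
One way to see how many 8kB buffers of "foo" actually sit in shared_buffers
is the pg_buffercache contrib module (a sketch; it assumes the extension can
be installed in this database):

psql -d understanding_explain <<'SQL'
CREATE EXTENSION IF NOT EXISTS pg_buffercache;
SELECT count(*) AS buffers_of_foo
  FROM pg_buffercache
 WHERE relfilenode = pg_relation_filenode('foo')
   AND reldatabase = (SELECT oid FROM pg_database
                       WHERE datname = current_database());
SQL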


[GENERAL] Trouble setting up replication

2015-09-04 Thread clmartin
I have been trying to set up "hot standby" replication, but have run 
into problems, and hope someone here can help.


I have configured an ssh tunnel from a computer that is not on a public 
IP, and was unable to figure out how to run pg_basebackup from that 
server to back up the data from the main server, so I decided to run 
pg_basebackup on the main server and copy it later. So I did this:


pg_basebackup -D /var/lib/pgsql/9.3/backups/rep/init -P -v -X s

(after a number of other attempts), and got errors. First I got a 
permissions error, and found that the directory was owned by root 
instead of postgres. I changed that, but eventually I got this error:


pg_basebackup: directory "/var/lib/pgsql/9.3/backups/rep" exists but is 
not empty


So my question is whether I can safely delete the directory and run 
pg_basebackup again.
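
A sketch of one way forward, assuming /var/lib/pgsql/9.3/backups/rep only
contains earlier (failed) backup attempts and is not the live data directory:
emptying it and rerunning pg_basebackup is safe, since pg_basebackup only
requires an empty or non-existent target directory.

sudo -u postgres rm -rf /var/lib/pgsql/9.3/backups/rep/init
sudo -u postgres pg_basebackup -D /var/lib/pgsql/9.3/backups/rep/init -P -v -X s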


I appreciate any help you can give. Let me know what additional 
information is needed to answer the question.


Chuck

