Re: Deadlock with 2 processes waiting on transactionids and locking unrelated tables

2018-02-19 Thread David Wheeler
Thanks for your response

> Does any of the two tables have triggers?

Yes, the ticket table has triggers that insert changes into a ticketstatuslog 
table when the ticket.ticketstatus column changes, and on insert.

ticket_status_insert_trigger AFTER INSERT ON ticket
    FOR EACH ROW EXECUTE PROCEDURE ticket_status_trigger_function()

ticket_status_update_trigger AFTER UPDATE OF ticketstatus ON ticket
    FOR EACH ROW WHEN (old.ticketstatus <> new.ticketstatus)
    EXECUTE PROCEDURE ticket_status_trigger_function()
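
For illustration, a function of that shape would look something like this (the
body and the ticketstatuslog columns below are guesses, not our actual schema):

CREATE OR REPLACE FUNCTION ticket_status_trigger_function()
RETURNS trigger AS $$
BEGIN
    -- the INSERT takes its own RowExclusiveLock on ticketstatuslog and its indexes
    INSERT INTO ticketstatuslog (ticketid, ticketstatus, changedtime)
    VALUES (NEW.ticketid, NEW.ticketstatus, now());
    RETURN NULL;  -- return value is ignored for AFTER ... FOR EACH ROW triggers
END;
$$ LANGUAGE plpgsql;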

> What's the database / transaction isolation level?

Both read committed

> Do the updates run in a transaction among other read / write operations 
> within the same transaction ?

Yes, they will both have many reads and writes before running the deadlocking 
query. 

Cheers, 

-- David


 

 David Wheeler • software engineer
Inomial Pty Ltd • Automatic Billing 
p +61 3 9663 3554


  
 
 
> On 19 Feb 2018, at 4:43 pm, Rene Romero Benavides  
> wrote:
> 
> Hi. Does any of the two tables have triggers? What's the database / 
> transaction isolation level? Do the updates run in a transaction among other 
> read / write operations within the same transaction ?
> Regards.
> 
> 2018-02-18 23:28 GMT-06:00 David Wheeler  >:
> Hi,
> 
> We’re seeing deadlock semi-regularly (0-2 per day) that I’m really having 
> trouble getting to the bottom of. 
> 
> Process 7172 waits for ShareLock on transaction 4078724272; blocked by process 7186.
> Process 7186 waits for ShareLock on transaction 4078724210; blocked by process 7172.
> 
> The two queries in question are updates on unrelated tables. Running the 
> queries on their own shows no overlapping entries in pg_locks. 
> 
> Process 7172: update ticket set unread = true where ticketid = $1
> Process 7186: update "planscheduleitem" set "planschedule"=$1 where "psi"=$2
> 
> How can I work out why Postgres has decided that the two processes are in 
> deadlock? Is there an explainer somewhere on transaction level locks? I can’t 
> see anything in the docs besides that they exist. 
> 
> 
> 
> Details below
> 
> select version();
>   version
> ---
>  PostgreSQL 10.2 (Ubuntu 10.2-1.pgdg14.04+1) on x86_64-pc-linux-gnu, compiled 
> by gcc (Ubuntu 4.8.4-2ubuntu1~14.04.3) 4.8.4, 64-bit
> (1 row)
> 
> ---
> 
> 
> after running update "planscheduleitem" set "planschedule"=$1 where "psi"=$2
> 
> SELECT locktype, virtualxid, transactionid, virtualtransaction, pid, mode, 
> relname, page, tuple
> FROM pg_locks LEFT JOIN pg_class ON (relation=oid) where pid =41715;
>    locktype   | virtualxid | transactionid | virtualtransaction |  pid  |       mode       |         relname         | page | tuple
> --------------+------------+---------------+--------------------+-------+------------------+-------------------------+------+-------
>  relation     |            |               | 56/2306863         | 41715 | RowExclusiveLock | ticket_parentticketid   |      |
>  relation     |            |               | 56/2306863         | 41715 | RowExclusiveLock | ticket_originalticketid |      |
>  relation     |            |               | 56/2306863         | 41715 | RowExclusiveLock | ticket_tickettypeid_idx |      |
>  relation     |            |               | 56/2306863         | 41715 | RowExclusiveLock | ticket_subject_idx      |      |
>  relation     |            |               | 56/2306863         | 41715 | RowExclusiveLock | ticket_closedtime_idx   |      |
>  relation     |            |               | 56/2306863         | 41715 | RowExclusiveLock | ticket_assignedto_idx   |      |
>  relation     |            |               | 56/2306863         | 41715 | RowExclusiveLock | ticket_serviceuid_idx   |      |
>  relation     |            |               | 56/2306863         | 41715 | RowExclusiveLock | ticket_parentuid_idx    |      |
>  relation     |            |               | 56/2306863         | 41715 | RowExclusiveLock | ticket_createdtime_idx  |      |
>  relation     |            |               | 56/2306863         | 41715 | RowExclusiveLock | ticket_txid             |      |
>  relation     |            |               | 56/2306863         | 41715 | RowExclusiveLock | ticket_tickettype       |      |
>  relation     |            |               | 56/2306863         | 41715 | RowExclusiveLock | ticket_ticketpriority   |      |
>  relation     | 

Re: Deadlock with 2 processes waiting on transactionids and locking unrelated tables

2018-02-19 Thread Rene Romero Benavides
My guess is that the transaction doing:

update "planscheduleitem" set "planschedule"=$1 where "psi"=$2

updates ticket before reaching that point

And

update ticket set unread = true where ticketid = $1

updates planscheduleitem before that

Does it make sense to you? Btw, do the transactions use explicit locking?
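
For illustration, the classic opposite-order pattern that produces exactly this
pair of waits on the other session's transactionid looks like this (only the
table names come from your report; the statement details are assumed):

-- session A
BEGIN;
UPDATE ticket SET unread = false WHERE ticketid = 1;         -- A locks the ticket row

-- session B
BEGIN;
UPDATE planscheduleitem SET planschedule = 2 WHERE psi = 7;  -- B locks the planscheduleitem row

-- session A
UPDATE planscheduleitem SET planschedule = 3 WHERE psi = 7;  -- blocks, waiting on B's transactionid

-- session B
UPDATE ticket SET unread = true WHERE ticketid = 1;          -- blocks, waiting on A's transactionid
-- after deadlock_timeout the deadlock detector aborts one of them (ERROR 40P01)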

2018-02-18 23:28 GMT-06:00 David Wheeler :

> Hi,
>
> We’re seeing deadlock semi-regularly (0-2 per day) that I’m really having
> trouble getting to the bottom of.
>
> Process 7172 waits for ShareLock on transaction 4078724272; blocked by process 7186.
> Process 7186 waits for ShareLock on transaction 4078724210; blocked by process 7172.
>
> The two queries in question are updates on unrelated tables. Running the
> queries on their own shows no overlapping entries in pg_locks.
>
> Process 7172: update ticket set unread = true where ticketid = $1
> Process 7186: update "planscheduleitem" set "planschedule"=$1 where
> "psi"=$2
>
> How can I work out why Postgres has decided that the two processes are in
> deadlock? Is there an explainer somewhere on transaction level locks? I
> can’t see anything in the docs besides that they exist.
>
>
>
> Details below
>
> select version();
>   version
> 
> ---
>  PostgreSQL 10.2 (Ubuntu 10.2-1.pgdg14.04+1) on x86_64-pc-linux-gnu,
> compiled by gcc (Ubuntu 4.8.4-2ubuntu1~14.04.3) 4.8.4, 64-bit
> (1 row)
>
> 
> 
> ---
>
>
> after running update "planscheduleitem" set "planschedule"=$1 where
> "psi"=$2
>
> SELECT locktype, virtualxid, transactionid, virtualtransaction, pid, mode,
> relname, page, tuple
> FROM pg_locks LEFT JOIN pg_class ON (relation=oid) where pid =41715;
>locktype| virtualxid | transactionid | virtualtransaction |  pid  |
>   mode   | relname | page | tuple
> ---++---+---
> -+---+--+-+--+---
>  relation  ||   | 56/2306863 | 41715 |
> RowExclusiveLock | ticket_parentticketid   |  |
>  relation  ||   | 56/2306863 | 41715 |
> RowExclusiveLock | ticket_originalticketid |  |
>  relation  ||   | 56/2306863 | 41715 |
> RowExclusiveLock | ticket_tickettypeid_idx |  |
>  relation  ||   | 56/2306863 | 41715 |
> RowExclusiveLock | ticket_subject_idx  |  |
>  relation  ||   | 56/2306863 | 41715 |
> RowExclusiveLock | ticket_closedtime_idx   |  |
>  relation  ||   | 56/2306863 | 41715 |
> RowExclusiveLock | ticket_assignedto_idx   |  |
>  relation  ||   | 56/2306863 | 41715 |
> RowExclusiveLock | ticket_serviceuid_idx   |  |
>  relation  ||   | 56/2306863 | 41715 |
> RowExclusiveLock | ticket_parentuid_idx|  |
>  relation  ||   | 56/2306863 | 41715 |
> RowExclusiveLock | ticket_createdtime_idx  |  |
>  relation  ||   | 56/2306863 | 41715 |
> RowExclusiveLock | ticket_txid |  |
>  relation  ||   | 56/2306863 | 41715 |
> RowExclusiveLock | ticket_tickettype   |  |
>  relation  ||   | 56/2306863 | 41715 |
> RowExclusiveLock | ticket_ticketpriority   |  |
>  relation  ||   | 56/2306863 | 41715 |
> RowExclusiveLock | ticket_idx_0|  |
>  relation  ||   | 56/2306863 | 41715 |
> RowExclusiveLock | ticket_pkey |  |
>  relation  ||   | 56/2306863 | 41715 |
> RowExclusiveLock | number_constraint   |  |
>  relation  ||   | 56/2306863 | 41715 |
> RowExclusiveLock | ticket  |  |
>  virtualxid| 56/2306863 |   | 56/2306863 | 41715 |
> ExclusiveLock| |  |
>  transactionid |            | 4089785154    | 56/2306863 | 41715 | ExclusiveLock    |             |      |
>  relation  ||   | 56/2306863 | 41715 |
> RowExclusiveLock | ticket_fromuid_idx  |  |
> (19 rows)
>
> 
> 
> 
>
> after running 

Re: Join query

2018-02-19 Thread Laurenz Albe
hmidi slim wrote:
> I have two tables: establishment which contains these columns: id, name, 
> longitude, latitude, geom (Geometric column)
> Product contains: id, name, establishment_id
> First of all I want to select the establishment within a radius. 
> I run this query:
> select e.name, e1.name
> from establishment as e, establishment as e1
> where e.id <> e1.id
> and e1.id = 1
> and ST_DWithin(geom, ST_MakePoint(e1.longitude, e1.latitude)::geography, 1000)
> 
> The result of this query returns all the establishments within a radius of 1 km 
> from a given establishment which has an id = 1.
> 
> After that I want to get the product's name of each establishment from the 
> query's result.
> 
> Is there another, more optimized way to write a query like this:
> select * from (
> select e.name, e1.name, e.id
> from establishment as e, establishment as e1
> where e.id <> e1.id
> and e1.id = 1
> and ST_DWithin(geom, ST_MakePoint(e1.longitude, e1.latitude)::geography, 
> 1000)) as tmp inner join product as p on p.establishment_id = tmp.id

A simple join is what you need:

   SELECT e.name, e1.name
   FROM establishment AS e
      JOIN establishment AS e1
         ON ST_DWithin(e.geom, ST_MakePoint(e1.longitude, e1.latitude)::geography, 1000)
        AND e.id <> e1.id
      JOIN product AS p
         ON p.establishment_id = e.id
   WHERE e1.id = 1;

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com



Re: Connection losing at some places - caused by firewall

2018-02-19 Thread Durumdara
Hello!


2018-02-13 21:21 GMT+01:00 George Neuner :

> On Tue, 14 Nov 2017 12:09:31 +0100, Durumdara 
> wrote:
>
>
> >*I disabled my firewall at home - the [keepalive] problem vanished!!!*
>
> What firewall are you using?  Windows own firewall doesn't interfere
> with keepalive packets.  Most commercial SOHO firewalls won't either.
>
>
The normal Windows Firewall (Defender). An ASUS test notebook works; the DELL
doesn't. So something is different in the firewall configuration on the DELL.
With manually defined (faster) keepalives it works.
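
For reference, the keepalive timing can also be tightened on the server side;
the values below are only an example, not the ones we ended up using:

-- server-side TCP keepalive settings (example values only)
ALTER SYSTEM SET tcp_keepalives_idle = 60;      -- seconds of idle time before the first probe
ALTER SYSTEM SET tcp_keepalives_interval = 10;  -- seconds between probes
ALTER SYSTEM SET tcp_keepalives_count = 5;      -- unanswered probes before the connection is dropped
SELECT pg_reload_conf();

libpq clients can do the same from their side with the keepalives_idle,
keepalives_interval and keepalives_count connection parameters.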

dd


Re: shared_buffers 8GB maximum

2018-02-19 Thread Vitaliy Garnashevich


> Yes.  I don't know the exact reason, but reading a buffer from OS 
> cache is quite a bit more expensive than just pinning a buffer already 
> in the buffer_pool, about 5 times more expensive the last time I 
> tested it, which was before Meltdown.  (And just pinning a buffer 
> which is already in the cache is already pretty expensive--about 15 
> times as expensive as reading the next tuple from an already-pinned 
> buffer).


Thanks for the numbers. Just out of curiosity, do you happen to know how much 
more expensive, compared to that, a read from disk is? And also, how much can 
the pinning be slowed down when having to iterate with the clock-sweep method 
over large shared_buffers?


> I don't think that there is any reason to think that buffers_clean > 
> buffers_checkpoint is a problem.  In fact, you could argue that it is 
> the way it was designed to work.  Although the background writer does 
> need to tell the checkpointer about every file it dirties, so it can 
> be fsynced at the end of the checkpoint.  The overhead of this was 
> minimal in my testing.




The reason why I mentioned buffers_clean is because I was assuming that 
under "healthy" conditions, most writes should be done by the checkpointer, 
because, as was already mentioned, that's the most efficient way of writing 
(no duplicate writes of the same buffer, write optimizations, etc.). I was 
thinking of the bgwriter as a way of reducing latency by avoiding the case 
where a backend has to write buffers by itself. So big numbers in 
buffers_clean and buffers_backend compared to buffers_checkpoint would mean 
that a lot of writes are done not by the checkpointer, and thus probably less 
efficiently than they could be. That might result in IO writes being more 
random, and in more IO writes being done in general, because the same buffer 
can be written multiple times between checkpoints.


> But buffers_backend > buffers_checkpoint could be a problem, 
> especially if they are also much larger than buffers_clean.  But the 
> wrinkle here is that if you do bulk inserts or bulk updates (what 
> about vacuums?), the backends by design write their own dirty 
> buffers.  So if you do those kinds of things, buffers_backend being 
> large doesn't indicate much.  There was a patch someplace a while ago 
> to separate the counters of backend-intentional writes from 
> backend-no-choice writes, but it never went anywhere.


We do daily manual vacuuming. Knowing what part of the total writes is 
accounted for by them would indeed be nice.


When looking at buffers_checkpoint/buffers_clean/buffers_backend, I was saving 
the numbers at intervals of several hours, knowing that there were no vacuums 
running at that time, and calculating the difference.
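
A snapshot query along these lines is enough for that kind of comparison (the
percentage column is just for convenience):

SELECT buffers_checkpoint,
       buffers_clean,
       buffers_backend,
       round(100.0 * buffers_checkpoint
             / nullif(buffers_checkpoint + buffers_clean + buffers_backend, 0), 1)
           AS pct_written_by_checkpointer,
       stats_reset
FROM pg_stat_bgwriter;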


> It is not clear to me that this is the best way to measure health.  
> Did your response time go down?  Did your throughput go up?


We have a mixed type of DB usage. There is an OLTP-like part with many small 
read/write transactions. Predictable latency does not matter in that case, but 
throughput does, because that is basically a background data-loading job. Then 
there is an OLAP-like part where heavier report queries are being run. Then 
there are more background jobs which are a combination of both: they first run 
long queries and then do lots of small inserts, thus pre-calculating some data 
for bigger reports.


After increasing shared_buffers from 8GB to 64GB, there was a 7% improvement in 
the run time of the background pre-calculating job (measured by running it 
several times in a row, with hot caches).


When we configured hugepages for the bigger shared_buffers, the 
additional improvement was around 3%.


Regards,
Vitaliy



Re: shared_buffers 8GB maximum

2018-02-19 Thread Vitaliy Garnashevich


> When we did calculations for some analytic tasks, increasing shared_buffers 
> had a negative impact on speed. Probably the hit ratio was too low after the 
> change, but the maintenance of shared buffers (searching for free blocks) was 
> slower.


What was the size of shared_buffers when the slowdown happened (approximately)?

Regards,
Vitaliy



Re: pgBackRest backup from standby

2018-02-19 Thread David Steele
Hi Don,

On 2/18/18 7:34 PM, Don Seiler wrote:
> 
> Looking to use pgBackRest to take a backup from a hot standby. I'm
> reading that pgBackRest still needs to connect to the primary and copy
> some files. My questions are:
> 
>  1. What files does it need to copy? Config files? WAL files?

It copies, from the primary, the files that are not replicated, so that a
primary-style backup is created. Anything that is replicated (which is
by far the bulk of the data) is copied from the standby.

>  2. How does it connect? SSH?

Yes.

>  3. Does pgBackRest need to be installed and configured on the primary
> as well?

Yes.  Anyway, it's best to archive from the primary so a replication
failure does not affect your archiving.

Configuring pgBackRest, SSH, standby, and backup from standby are all
covered in the user guide.

http://pgbackrest.org/user-guide.html

In particular:

http://pgbackrest.org/user-guide.html#backup-host
http://pgbackrest.org/user-guide.html#replication
http://pgbackrest.org/user-guide.html#standby-backup

Regards,
-- 
-David
da...@pgmasters.net



Re: pgBackRest backup from standby

2018-02-19 Thread David Steele
On 2/18/18 10:20 PM, Michael Paquier wrote:
> On Sun, Feb 18, 2018 at 06:48:30PM -0600, Don Seiler wrote:
>> On Sun, Feb 18, 2018 at 6:43 PM, Michael Paquier  wrote:
>>> You may want to contact the maintainers directly through github where
>>> the project is maintained:
>>> https://github.com/pgbackrest/pgbackrest
>>
>> Is that the place to just ask questions? I wasn't sure that "Issues" meant
>> just questions versus problems or bugs. I didn't see any mention of a forum
>> or list on their website, and there have been lots of pgBackRest questions
>> on this list in the past so I settled on this one.
> 
> Stephen and David are around all the time, so it is not really an issue
> to discuss things related to pgBackRest on this list I guess :)
> Attaching related folks directly in CC: also usually helps.
> 
> You may get faster feedback by opening an issue directly on github
> though, as there are a lot of emails on -general so it is easy to get
> things lost.  My 2c.

Either is fine with me, but as Michael says I might miss postings to
-general.  I'm sure somebody else would catch it, though.

-- 
-David
da...@pgmasters.net





Re: pgBackRest backup from standby

2018-02-19 Thread Don Seiler
On Mon, Feb 19, 2018 at 8:18 AM, David Steele  wrote:

> It copies files that are not replicated from the primary so that a
> primary-style backup is created. Anything that is replicated (which is
> by far the bulk of the data) is copied from the standby.
>

OK so all data files would be copied from standby. Can you give me an
example of the types of files that need to be copied from primary?


> it's best to archive from the primary so a replication
> failure does not affect your archiving.
>

Understood, just not something I can change in production primary at the
moment. Hence looking to see about a quick one-off backup from standby.


> Configuring pgBackRest, SSH, standby, and backup from standby are all
> covered in the user guide.


Thanks, I've been through it a few times and played with some test backups
from primary clones. I just ditched my master/replica clone setup but I'll
test there as well. I just had a couple questions about the mechanics.



-- 
Don Seiler
www.seiler.us


Re: pgBackRest backup from standby

2018-02-19 Thread Don Seiler
On Mon, Feb 19, 2018 at 8:23 AM, David Steele  wrote:

>
> Either is fine with me, but as Michael says I might miss postings to
> -general.  I'm sure somebody else would catch it, though.
>

OK, I'll make use of the issues tracker going forward.

-- 
Don Seiler
www.seiler.us


Re: pgBackRest backup from standby

2018-02-19 Thread David Steele
Hi Don,

On 2/19/18 9:25 AM, Don Seiler wrote:
> On Mon, Feb 19, 2018 at 8:18 AM, David Steele  > wrote:
> 
> It copies files that are not replicated from the primary so that a
> primary-style backup is created. Anything that is replicated (which is
> by far the bulk of the data) is copied from the standby.
> 
> OK so all data files would be copied from standby. Can you give me an
> example of the types of files that need to be copied from primary?
>  

Anything *not* in global (except pg_control), base, pg_tblspc,
pg_xact/pg_clog, and pg_multixact is copied from the primary.

For example, pg_stat is copied from the primary so these stats are
preserved on a standby backup.

pgBackRest uses all the same exclusions as pg_basebackup, so many
dirs/files are not copied at all: pg_dynshmem, pg_notify, pg_replslot,
pg_serial, pg_snapshots, pg_stat_tmp, pg_subtrans, etc.

Full list here
https://www.postgresql.org/docs/10/static/protocol-replication.html.

> it's best to archive from the primary so a replication
> failure does not affect your archiving.
> 
> Understood, just not something I can change in production primary at the
> moment. Hence looking to see about a quick one-off backup from standby.

For a quick one-off, pg_basebackup is your friend.

Regards,
-- 
-David
da...@pgmasters.net



Re: pgBackRest backup from standby

2018-02-19 Thread Don Seiler
On Mon, Feb 19, 2018 at 8:53 AM, David Steele  wrote:

> Anything *not* in global (except pg_control), base, pg_tblspc,
> pg_xact/pg_clog, and pg_multixact are copied from the primary.
>
> For example, pg_stat is copied from the primary so these stats are
> preserved on a standby backup.
>

So if I have tablespaces outside of $PGDATA (but symlinked from within
pg_tblspc, of course), those will still be backed up from the standby,
right?

Is it right to say that the files that would be copied from primary are
very small, typically? So it isn't a huge transfer over the WAN (in my
case)?



> pgBackRest uses all the same exclusions as pg_basebackup, so many
> dirs/files are not copied at all: pg_dynshmem, pg_notify, pg_replslot,
> pg_serial, pg_snapshots, pg_stat_tmp, pg_subtrans, etc.
>
> Full list here
> https://www.postgresql.org/docs/10/static/protocol-replication.html.
>
> > it's best to archive from the primary so a replication
> > failure does not affect your archiving.
> >
> > Understood, just not something I can change in production primary at the
> > moment. Hence looking to see about a quick one-off backup from standby.
>
> For a quick one-off, pg_basebackup is your friend.


One of the requirements for this backup is encryption, which I don't see any
notes about for pg_basebackup. Also, due to the size, parallel workers.
pgBackRest gives me both of these. I need compression as well, but
pg_basebackup does do that.

I did come up with a sort of Rube Goldberg-esque workaround for now
involving using a clone of the prod standby VM from Veeam backup to use as
the backup source (after stopping recovery and opening it as a standalone
DB).

Don.

-- 
Don Seiler
www.seiler.us


Re: pgBackRest backup from standby

2018-02-19 Thread David Steele
Hi Don,

On 2/19/18 10:01 AM, Don Seiler wrote:
> On Mon, Feb 19, 2018 at 8:53 AM, David Steele  > wrote:
> 
> Anything *not* in global (except pg_control), base, pg_tblspc,
> pg_xact/pg_clog, and pg_multixact are copied from the primary.
> 
> For example, pg_stat is copied from the primary so these stats are
> preserved on a standby backup.
> 
> So if I have tablespaces outside of $PGDATA (but symlinked from within
> pg_tblspc, of course), those will still be backed up from the standby,
> right?

Correct.

> Is it right to say that the files that would be copied from primary are
> very small, typically? So it isn't a huge transfer over the WAN (in my
> case)?

Yes, they are typically very small.  The general exception to this rule
is if logs are stored in pg_log.  I recommend storing logs out of the
PGDATA dir as they can be quite large and don't really make sense to
restore to another server.

Files copied from the master will be marked as such in backup.manifest
(master:true) so you can check for yourself.

> I did come up with a sort of Rube Goldberg-esque workaround for now
> involving using a clone of the prod standby VM from Veeam backup to use
> as the backup source (after stopping recovery and opening it as a
> standalone DB).

You don't get PITR that way, of course, but at least it's a backup.  As
long as your clone is consistent.

-- 
-David
da...@pgmasters.net



Re: pgBackRest backup from standby

2018-02-19 Thread Don Seiler
On Mon, Feb 19, 2018 at 9:21 AM, David Steele  wrote:

>
> Yes, they are typically very small.  The general exception to this rule
> is if logs are stored in pg_log.  I recommend storing logs out of the
> PGDATA dir as they can be quite large and don't really make sense to
> restore to another server.
>
> Files copied from the master will be marked as such in backup.manifest
> (master:true) so you can check for yourself.
>

Good to know. And fortunately for this DB we do have pg_log (and pg_xlog)
symlinked to different volumes outside of $PGDATA.

> I did come up with a sort of Rube Goldberg-esque workaround for now
> > involving using a clone of the prod standby VM from Veeam backup to use
> > as the backup source (after stopping recovery and opening it as a
> > standalone DB).
>
> You don't get PITR that way, of course, but at least it's a backup.  As
> long as your clone is consistent.


Yes it's a crash-consistent snapshot-based backup. I've done quite a few
restores from it and it works great. It can do PITR as well since I would
have all the WAL files from prod needed to keep recovering. But for these
cases I just recover it to the first consistent point and open it for
testing (or backups in this case).

Thanks for all your help!

Don.

-- 
Don Seiler
www.seiler.us


Re: pgBackRest backup from standby

2018-02-19 Thread David Steele
On 2/19/18 10:32 AM, Don Seiler wrote:
> On Mon, Feb 19, 2018 at 9:21 AM, David Steele  > wrote:
> 
> 
> Yes, they are typically very small.  The general exception to this rule
> is if logs are stored in pg_log.  I recommend storing logs out of the
> PGDATA dir as they can be quite large and don't really make sense to
> restore to another server.
> 
> Files copied from the master will be marked as such in backup.manifest
> (master:true) so you can check for yourself.
> 
> 
> Good to know. And fortunately for this DB we do have pg_log (and
> pg_xlog) symlinked to different volumes outside of $PGDATA.

If pg_log is symlinked from inside PGDATA it will still be copied.  pg_xlog is not
copied in any backup.

> 
> > I did come up with a sort of Rube Goldberg-esque workaround for now
> > involving using a clone of the prod standby VM from Veeam backup to use
> > as the backup source (after stopping recovery and opening it as a
> > standalone DB).
> 
> You don't get PITR that way, of course, but at least it's a backup.  As
> long as your clone is consistent.
> 
> 
> Yes it's a crash-consistent snapshot-based backup. I've done quite a few
> restores from it and it works great. It can do PITR as well since I
> would have all the WAL files from prod needed to keep recovering. But
> for these cases I just recover it to the first consistent point and open
> it for testing (or backups in this case). 

I don't think it would be safe to do PITR on a backup taken in this way.
 The WAL diverges even if you suppress a timeline switch.

-- 
-David
da...@pgmasters.net



Re: pgBackRest backup from standby

2018-02-19 Thread Simon Riggs
On 19 February 2018 at 16:17, David Steele  wrote:
>> > I did come up with a sort of Rube Goldberg-esque workaround for now
>> > involving using a clone of the prod standby VM from Veeam backup to use
>> > as the backup source (after stopping recovery and opening it as a
>> > standalone DB).
>>
>> You don't get PITR that way, of course, but at least it's a backup.  As
>> long as your clone is consistent.
>>
>>
>> Yes it's a crash-consistent snapshot-based backup. I've done quite a few
>> restores from it and it works great. It can do PITR as well since I
>> would have all the WAL files from prod needed to keep recovering. But
>> for these cases I just recover it to the first consistent point and open
>> it for testing (or backups in this case).
>
> I don't think it would be safe to do PITR on a backup taken in this way.

If you have all the WAL files, then it would be safe.

>  The WAL diverges even if you suppress a timeline switch.

Which is exactly why we have timelines.

-- 
Simon Riggs    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Error when compiling postgresql 9.6.7 with Visual Studio 15.5.6

2018-02-19 Thread Aditya Nugraha
Hello everyone,

   When trying to compile postgresql 9.6.7 with Visual Studio 15.5.6,
I am getting the following errors:
"C:\Users\nameless\Documents\Works\Compiling\pgwininstall\builddir\postgresql\postgresql-9.6.7\pgsql.sln"
(default targe
t) (1) ->
"C:\Users\nameless\Documents\Works\Compiling\pgwininstall\builddir\postgresql\postgresql-9.6.7\ascii_and_mic.vcxproj"
(d
efault target) (5) ->
"C:\Users\nameless\Documents\Works\Compiling\pgwininstall\builddir\postgresql\postgresql-9.6.7\postgres.vcxproj"
(defaul
t target) (6) ->
(ClCompile target) ->
  src/backend/utils/adt/pg_locale.c(927): error C2037: left of
'locale_name' specifies undefined struct/union '__crt_lo
cale_data' 
[C:\Users\nameless\Documents\Works\Compiling\pgwininstall\builddir\postgresql\postgresql-9.6.7\postgres.vcxpr
oj]
  src/backend/utils/adt/pg_locale.c(928): error C2198: 'wchar2char':
too few arguments for call [C:\Users\nameless\Docum
ents\Works\Compiling\pgwininstall\builddir\postgresql\postgresql-9.6.7\postgres.vcxproj]

4 Warning(s)
2 Error(s)

Searching through the mailinglist archive i am getting this thread :
http://www.postgresql-archive.org/Building-PostgreSQL-9-6devel-sources-with-Microsoft-Visual-C-2015-td5880108.html

But the file patched there is different from the one producing the error
above, which is src/backend/utils/adt/pg_locale.c.

Here is a snippet of the offending code from the pg_locale.c file:
/* Locale names use only ASCII, any conversion locale suffices. */
rc = wchar2char(iso_lc_messages, loct->locinfo->locale_name[LC_CTYPE],
sizeof(iso_lc_messages), NULL);

Can anyone tell me the fix?


Cheers.



Aditya



Empty ./share/timezone/UTC and failure to set UTC as timezone

2018-02-19 Thread Sasa Vilic
Hello,

we are using "embedded" PostgreSQL for integration tests (see
https://github.com/yandex-qatools/postgresql-embedded) and we have noticed
that our (java) clients fail to connect when their timezone is UTC, with the
PG server giving the following error message:

invalid value for parameter "TimeZone": "UTC"

To double-check, I took the official 9.6.7 sources, compiled and installed
them locally (/tmp/pgsql), and tried connecting with the client again; again
we got the same error message.

When using psql, the same happens:

postgres=# set timezone='UTC';
ERROR:  invalid value for parameter "TimeZone": "UTC"
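
A quick check of whether the server can see the zone at all is the catalog view:

-- if the tzdata files are readable, UTC should show up here
SELECT name, abbrev, utc_offset
FROM pg_timezone_names
WHERE name IN ('UTC', 'Etc/UTC');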

In order to investigate this further, I attached strace to the backend and
noticed that PG is able to find and read the file ./share/timezone/UTC.
But right after reading the file, we get the error message from above.

We noticed that the UTC file is empty! Is this on purpose or is this a bug? Am
I missing something?

Kind Regards,

Sasa Vilic


Re: Error when compiling postgresql 9.6.7 with Visual Studio 15.5.6

2018-02-19 Thread Tom Lane
Aditya Nugraha  writes:
>When trying to compile postgresql 9.6.7 with Visual Studio 15.5.6,
> i am getting errors at these following line :
>   src/backend/utils/adt/pg_locale.c(927): error C2037: left of
> 'locale_name' specifies undefined struct/union '__crt_lo
> cale_data' 
> [C:\Users\nameless\Documents\Works\Compiling\pgwininstall\builddir\postgresql\postgresql-9.6.7\postgres.vcxpr
> oj]
>   src/backend/utils/adt/pg_locale.c(928): error C2198: 'wchar2char':
> too few arguments for call [C:\Users\nameless\Docum
> ents\Works\Compiling\pgwininstall\builddir\postgresql\postgresql-9.6.7\postgres.vcxproj]

This was previously discussed here:

https://www.postgresql.org/message-id/flat/CANFyU959RRvAqmrGvLor%3Dgo3PSi4LdYcjDw8TQ6YRvzpouZrog%40mail.gmail.com

Apparently Microsoft have changed their APIs in VS2015.  Somebody needs to
figure out what's the approved way now to get the locale name out of a
_locale_t object.

regards, tom lane



Re: Empty ./share/timezone/UTC and failure to set UTC as timezone

2018-02-19 Thread Tom Lane
Sasa Vilic  writes:
> To double check, I took official 9.6.7 sources, compiled and installed it
> locally (/tmp/pgsql) and tried connecting with client again and I again we
> got same error message.

Hm, what platform and compiler are you using, exactly?

> In order to further investigate this, I have attached strace on backend and
> I have noticed that PG is able to find and read file ./share/timezone/UTC.
> But right after reading the file, we get error message from above.
> We noticed that UTC file is empty! Is this on purpose or is this is bug? Am
> I missing something?

It should certainly not be empty.  On my machine it's 127 bytes long:

$ ls -l share/timezone/UTC
-rw-r--r--. 6 postgres postgres 127 Feb 19 12:27 share/timezone/UTC

and file(1) knows what it is:

$ file share/timezone/UTC
share/timezone/UTC: timezone data, version 2, 1 gmt time flag, 1 std time flag, 
no leap seconds, no transition times, 1 abbreviation char

I wonder if you've tripped over some portability issue in the zic
compiler.  Another idea, seeing that this file is multiply-linked
in the timezone install tree, is that maybe you're on a filesystem
that does strange things with hard links.

It'd be interesting to see the rest of your timezone directory.  Mine
looks like

total 248
drwxr-xr-x.  2 postgres postgres 4096 Feb 19 12:27 Africa
drwxr-xr-x.  6 postgres postgres 4096 Feb 19 12:27 America
drwxr-xr-x.  2 postgres postgres 4096 Feb 19 12:27 Antarctica
drwxr-xr-x.  2 postgres postgres 4096 Feb 19 12:27 Arctic
drwxr-xr-x.  2 postgres postgres 4096 Feb 19 12:27 Asia
drwxr-xr-x.  2 postgres postgres 4096 Feb 19 12:27 Atlantic
drwxr-xr-x.  2 postgres postgres 4096 Feb 19 12:27 Australia
drwxr-xr-x.  2 postgres postgres 4096 Feb 19 12:27 Brazil
-rw-r--r--.  1 postgres postgres 2102 Feb 19 12:27 CET
-rw-r--r--.  1 postgres postgres 2294 Feb 19 12:27 CST6CDT
drwxr-xr-x.  2 postgres postgres 4096 Feb 19 12:27 Canada
drwxr-xr-x.  2 postgres postgres 4096 Feb 19 12:27 Chile
-rw-r--r--.  2 postgres postgres 2437 Feb 19 12:27 Cuba
-rw-r--r--.  1 postgres postgres 1876 Feb 19 12:27 EET
-rw-r--r--.  1 postgres postgres  127 Feb 19 12:27 EST
-rw-r--r--.  1 postgres postgres 2294 Feb 19 12:27 EST5EDT
-rw-r--r--.  2 postgres postgres 1972 Feb 19 12:27 Egypt
-rw-r--r--.  2 postgres postgres 3543 Feb 19 12:27 Eire
drwxr-xr-x.  2 postgres postgres 4096 Feb 19 12:27 Etc
drwxr-xr-x.  2 postgres postgres 4096 Feb 19 12:27 Europe
-rw-r--r--.  1 postgres postgres  148 Feb 19 12:27 Factory
-rw-r--r--.  7 postgres postgres 3687 Feb 19 12:27 GB
-rw-r--r--.  7 postgres postgres 3687 Feb 19 12:27 GB-Eire
-rw-r--r--. 10 postgres postgres  127 Feb 19 12:27 GMT
-rw-r--r--. 10 postgres postgres  127 Feb 19 12:27 GMT+0
-rw-r--r--. 10 postgres postgres  127 Feb 19 12:27 GMT-0
-rw-r--r--. 10 postgres postgres  127 Feb 19 12:27 GMT0
-rw-r--r--. 10 postgres postgres  127 Feb 19 12:27 Greenwich
-rw-r--r--.  1 postgres postgres  128 Feb 19 12:27 HST
-rw-r--r--.  2 postgres postgres 1189 Feb 19 12:27 Hongkong
-rw-r--r--.  2 postgres postgres 1188 Feb 19 12:27 Iceland
drwxr-xr-x.  2 postgres postgres 4096 Feb 19 12:27 Indian
-rw-r--r--.  2 postgres postgres 1718 Feb 19 12:27 Iran
-rw-r--r--.  3 postgres postgres 2265 Feb 19 12:27 Israel
-rw-r--r--.  2 postgres postgres  507 Feb 19 12:27 Jamaica
-rw-r--r--.  2 postgres postgres  318 Feb 19 12:27 Japan
-rw-r--r--.  2 postgres postgres  259 Feb 19 12:27 Kwajalein
-rw-r--r--.  2 postgres postgres  655 Feb 19 12:27 Libya
-rw-r--r--.  1 postgres postgres 2102 Feb 19 12:27 MET
-rw-r--r--.  1 postgres postgres  127 Feb 19 12:27 MST
-rw-r--r--.  1 postgres postgres 2294 Feb 19 12:27 MST7MDT
drwxr-xr-x.  2 postgres postgres 4096 Feb 19 12:27 Mexico
-rw-r--r--.  4 postgres postgres 2460 Feb 19 12:27 NZ
-rw-r--r--.  2 postgres postgres 2087 Feb 19 12:27 NZ-CHAT
-rw-r--r--.  4 postgres postgres 2453 Feb 19 12:27 Navajo
-rw-r--r--.  5 postgres postgres  414 Feb 19 12:27 PRC
-rw-r--r--.  1 postgres postgres 2294 Feb 19 12:27 PST8PDT
drwxr-xr-x.  2 postgres postgres 4096 Feb 19 12:27 Pacific
-rw-r--r--.  2 postgres postgres 2705 Feb 19 12:27 Poland
-rw-r--r--.  2 postgres postgres 3453 Feb 19 12:27 Portugal
-rw-r--r--.  2 postgres postgres  790 Feb 19 12:27 ROC
-rw-r--r--.  2 postgres postgres  531 Feb 19 12:27 ROK
-rw-r--r--.  2 postgres postgres  424 Feb 19 12:27 Singapore
-rw-r--r--.  3 postgres postgres 2166 Feb 19 12:27 Turkey
-rw-r--r--.  2 postgres postgres  127 Feb 19 12:27 UCT
drwxr-xr-x.  2 postgres postgres 4096 Feb 19 12:27 US
-rw-r--r--.  6 postgres postgres  127 Feb 19 12:27 UTC
-rw-r--r--.  6 postgres postgres  127 Feb 19 12:27 Universal
-rw-r--r--.  2 postgres postgres 1544 Feb 19 12:27 W-SU
-rw-r--r--.  1 postgres postgres 1873 Feb 19 12:27 WET
-rw-r--r--.  6 postgres postgres  127 Feb 19 12:27 Zulu
-rw-r--r--.  3 postgres postgres 3545 Feb 19 12:27 posixrules

regards, tom lane



Re: pgBackRest backup from standby

2018-02-19 Thread David Steele
On 2/19/18 11:29 AM, Simon Riggs wrote:
> On 19 February 2018 at 16:17, David Steele  wrote:
>>> > I did come up with a sort of Rube Goldberg-esque workaround for now
>>> > involving using a clone of the prod standby VM from Veeam backup to 
>>> use
>>> > as the backup source (after stopping recovery and opening it as a
>>> > standalone DB).
>>>
>>> You don't get PITR that way, of course, but at least it's a backup.  As
>>> long as your clone is consistent.
>>>
>>>
>>> Yes it's a crash-consistent snapshot-based backup. I've done quite a few
>>> restores from it and it works great. It can do PITR as well since I
>>> would have all the WAL files from prod needed to keep recovering. But
>>> for these cases I just recover it to the first consistent point and open
>>> it for testing (or backups in this case).
>>
>> I don't think it would be safe to do PITR on a backup taken in this way.
> 
> If you have all the WAL files, then it would be safe.

I read "open it for testing (or backups in this case)" as letting
recovery complete and promoting the cluster to a master before taking
the backup.

Don, is that the case?  If it is, I think there's a problem with or
without a timeline switch.  If you confirm the backup is being taken as
above then I'll detail my concerns.

-- 
-David
da...@pgmasters.net



Re: pgBackRest backup from standby

2018-02-19 Thread Don Seiler
On Mon, Feb 19, 2018 at 12:39 PM, David Steele  wrote:
>
>
> I read "open it for testing (or backups in this case)" as letting
> recovery complete and promoting the cluster to a master before taking
> the backup.
>
> Don, is that the case?  If it is, I think there's a problem with or
> without a timeline switch.  If you confirm the backup is being taken as
> above then I'll detail my concerns.
>

Note that this is just for creating a couple of one-off backups to restore
for our dev and pre-prod environments. Given that, I was going to open a
new clone as its own cluster and take backups from that. The data would be
the same though and suit purposes of the dev and pre-prod refreshes.

If I were taking backups for the purpose of production backups, I would not
do things this way. That is the eventual plan but right now we aren't ready
to make the changes necessary in the production environment.

-- 
Don Seiler
www.seiler.us


Re: pgBackRest backup from standby

2018-02-19 Thread David Steele
On 2/19/18 2:05 PM, Don Seiler wrote:
> On Mon, Feb 19, 2018 at 12:39 PM, David Steele  > wrote:
> 
> 
> I read "open it for testing (or backups in this case)" as letting
> recovery complete and promoting the cluster to a master before taking
> the backup.
> 
> Don, is that the case?  If it is, I think there's a problem with or
> without a timeline switch.  If you confirm the backup is being taken as
> above then I'll detail my concerns.
> 
> 
> Note that this is just for creating a couple of one-off backups to
> restore for our dev and pre-prod environments. Given that, I was going
> to open a new clone as its own cluster and take backups from that. The
> data would be the same though and suit purposes of the dev and pre-prod
> refreshes.
> 
> If I were taking backups for the purpose of production backups, I would
> not do things this way. That is the eventual plan but right now we
> aren't ready to make the changes necessary in the production environment.

OK, that's fine then.  You can play these to consistency and they'll be
fine.  I just wouldn't try to do any PITR using the production WAL archive.

-- 
-David
da...@pgmasters.net



Re: Empty ./share/timezone/UTC and failure to set UTC as timezone

2018-02-19 Thread Sasa Vilic
Hi Tom,

thanks for the quick response. I have to apologize: PG built from source is
fine. Sorry for wasting your time.

It just happens that after I put it into an archive (tar czf
postgresql-9.6.7-linux-x64-binaries.tar.gz pgsql) and extract it, the file is
empty; it looks like a bug in the java library implementation of tar.gz.

Once again, sorry for wasting your time. Thank you very much for your
support.

Kind Regards,

Sasa Vilic

2018-02-19 17:33 GMT+00:00 Tom Lane :

> Sasa Vilic  writes:
> > To double check, I took official 9.6.7 sources, compiled and installed it
> > locally (/tmp/pgsql) and tried connecting with client again and I again
> we
> > got same error message.
>
> Hm, what platform and compiler are you using, exactly?
>
> > In order to further investigate this, I have attached strace on backend
> and
> > I have noticed that PG is able to find and read file
> ./share/timezone/UTC.
> > But right after reading the file, we get error message from above.
> > We noticed that UTC file is empty! Is this on purpose or is this is bug?
> Am
> > I missing something?
>
> It should certainly not be empty.  On my machine it's 127 bytes long:
>
> $ ls -l share/timezone/UTC
> -rw-r--r--. 6 postgres postgres 127 Feb 19 12:27 share/timezone/UTC
>
> and file(1) knows what it is:
>
> $ file share/timezone/UTC
> share/timezone/UTC: timezone data, version 2, 1 gmt time flag, 1 std time
> flag, no leap seconds, no transition times, 1 abbreviation char
>
> I wonder if you've tripped over some portability issue in the zic
> compiler.  Another idea, seeing that this file is multiply-linked
> in the timezone install tree, is that maybe you're on a filesystem
> that does strange things with hard links.
>
> It'd be interesting to see the rest of your timezone directory.  Mine
> looks like
>
> total 248
> drwxr-xr-x.  2 postgres postgres 4096 Feb 19 12:27 Africa
> drwxr-xr-x.  6 postgres postgres 4096 Feb 19 12:27 America
> drwxr-xr-x.  2 postgres postgres 4096 Feb 19 12:27 Antarctica
> drwxr-xr-x.  2 postgres postgres 4096 Feb 19 12:27 Arctic
> drwxr-xr-x.  2 postgres postgres 4096 Feb 19 12:27 Asia
> drwxr-xr-x.  2 postgres postgres 4096 Feb 19 12:27 Atlantic
> drwxr-xr-x.  2 postgres postgres 4096 Feb 19 12:27 Australia
> drwxr-xr-x.  2 postgres postgres 4096 Feb 19 12:27 Brazil
> -rw-r--r--.  1 postgres postgres 2102 Feb 19 12:27 CET
> -rw-r--r--.  1 postgres postgres 2294 Feb 19 12:27 CST6CDT
> drwxr-xr-x.  2 postgres postgres 4096 Feb 19 12:27 Canada
> drwxr-xr-x.  2 postgres postgres 4096 Feb 19 12:27 Chile
> -rw-r--r--.  2 postgres postgres 2437 Feb 19 12:27 Cuba
> -rw-r--r--.  1 postgres postgres 1876 Feb 19 12:27 EET
> -rw-r--r--.  1 postgres postgres  127 Feb 19 12:27 EST
> -rw-r--r--.  1 postgres postgres 2294 Feb 19 12:27 EST5EDT
> -rw-r--r--.  2 postgres postgres 1972 Feb 19 12:27 Egypt
> -rw-r--r--.  2 postgres postgres 3543 Feb 19 12:27 Eire
> drwxr-xr-x.  2 postgres postgres 4096 Feb 19 12:27 Etc
> drwxr-xr-x.  2 postgres postgres 4096 Feb 19 12:27 Europe
> -rw-r--r--.  1 postgres postgres  148 Feb 19 12:27 Factory
> -rw-r--r--.  7 postgres postgres 3687 Feb 19 12:27 GB
> -rw-r--r--.  7 postgres postgres 3687 Feb 19 12:27 GB-Eire
> -rw-r--r--. 10 postgres postgres  127 Feb 19 12:27 GMT
> -rw-r--r--. 10 postgres postgres  127 Feb 19 12:27 GMT+0
> -rw-r--r--. 10 postgres postgres  127 Feb 19 12:27 GMT-0
> -rw-r--r--. 10 postgres postgres  127 Feb 19 12:27 GMT0
> -rw-r--r--. 10 postgres postgres  127 Feb 19 12:27 Greenwich
> -rw-r--r--.  1 postgres postgres  128 Feb 19 12:27 HST
> -rw-r--r--.  2 postgres postgres 1189 Feb 19 12:27 Hongkong
> -rw-r--r--.  2 postgres postgres 1188 Feb 19 12:27 Iceland
> drwxr-xr-x.  2 postgres postgres 4096 Feb 19 12:27 Indian
> -rw-r--r--.  2 postgres postgres 1718 Feb 19 12:27 Iran
> -rw-r--r--.  3 postgres postgres 2265 Feb 19 12:27 Israel
> -rw-r--r--.  2 postgres postgres  507 Feb 19 12:27 Jamaica
> -rw-r--r--.  2 postgres postgres  318 Feb 19 12:27 Japan
> -rw-r--r--.  2 postgres postgres  259 Feb 19 12:27 Kwajalein
> -rw-r--r--.  2 postgres postgres  655 Feb 19 12:27 Libya
> -rw-r--r--.  1 postgres postgres 2102 Feb 19 12:27 MET
> -rw-r--r--.  1 postgres postgres  127 Feb 19 12:27 MST
> -rw-r--r--.  1 postgres postgres 2294 Feb 19 12:27 MST7MDT
> drwxr-xr-x.  2 postgres postgres 4096 Feb 19 12:27 Mexico
> -rw-r--r--.  4 postgres postgres 2460 Feb 19 12:27 NZ
> -rw-r--r--.  2 postgres postgres 2087 Feb 19 12:27 NZ-CHAT
> -rw-r--r--.  4 postgres postgres 2453 Feb 19 12:27 Navajo
> -rw-r--r--.  5 postgres postgres  414 Feb 19 12:27 PRC
> -rw-r--r--.  1 postgres postgres 2294 Feb 19 12:27 PST8PDT
> drwxr-xr-x.  2 postgres postgres 4096 Feb 19 12:27 Pacific
> -rw-r--r--.  2 postgres postgres 2705 Feb 19 12:27 Poland
> -rw-r--r--.  2 postgres postgres 3453 Feb 19 12:27 Portugal
> -rw-r--r--.  2 postgres postgres  790 Feb 19 12:27 ROC
> -rw-r--r--.  2 postgres postgres  531 Feb 19 12:27 ROK
> -rw-r--r--.  2 postgres postgres  424 Feb 19 12:27 Singapore
> -rw-r--r--.  3 post

Re: pgBackRest backup from standby

2018-02-19 Thread Don Seiler
On Mon, Feb 19, 2018 at 10:17 AM, David Steele  wrote:

> If pg_log is symlinked to PGDATA it will be copied.  pg_xlog is not
> copied in any backup.


So an external pg_log directory symlinked into $PGDATA will have its log
contents copied?

I'm curious, why even copy the pg_log logs? They aren't needed for database
restore or recovery.

Don.
-- 
Don Seiler
www.seiler.us


Re: pgBackRest backup from standby

2018-02-19 Thread David Steele
On 2/19/18 3:41 PM, Don Seiler wrote:
> On Mon, Feb 19, 2018 at 10:17 AM, David Steele  > wrote:
> 
> If pg_log is symlinked to PGDATA it will be copied.  pg_xlog is not
> copied in any backup.
> 
> 
> So an external pg_log directory symlinked into $PGDATA will have its log
> contents copied?

Yes.

> I'm curious, why even copy the pg_log logs? They aren't needed for
> database restore or recovery.

The general philosophy is to copy everything except what we know for
sure can be excluded.  In practice, this means sticking to what
pg_basebackup excludes because that list is vetted by the community.

Also, relocating the log directory is easy using the log_directory
setting, so that's what I recommend if it's an issue.  Some users do
want to backup their logs.
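
For example, something like this is usually enough (the path is just a
placeholder, and it assumes logging_collector is on):

ALTER SYSTEM SET log_directory = '/var/log/postgresql';  -- any absolute path outside PGDATA
SELECT pg_reload_conf();                                  -- log_directory only requires a reload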

-- 
-David
da...@pgmasters.net



Re: Error when compiling postgresql 9.6.7 with Visual Studio 15.5.6

2018-02-19 Thread Thomas Munro
On Tue, Feb 20, 2018 at 6:23 AM, Tom Lane  wrote:
> Aditya Nugraha  writes:
>>When trying to compile postgresql 9.6.7 with Visual Studio 15.5.6,
>> i am getting errors at these following line :
>>   src/backend/utils/adt/pg_locale.c(927): error C2037: left of
>> 'locale_name' specifies undefined struct/union '__crt_lo
>> cale_data' 
>> [C:\Users\nameless\Documents\Works\Compiling\pgwininstall\builddir\postgresql\postgresql-9.6.7\postgres.vcxpr
>> oj]
>>   src/backend/utils/adt/pg_locale.c(928): error C2198: 'wchar2char':
>> too few arguments for call [C:\Users\nameless\Docum
>> ents\Works\Compiling\pgwininstall\builddir\postgresql\postgresql-9.6.7\postgres.vcxproj]
>
> This was previously discussed here:
>
> https://www.postgresql.org/message-id/flat/CANFyU959RRvAqmrGvLor%3Dgo3PSi4LdYcjDw8TQ6YRvzpouZrog%40mail.gmail.com
>
> Apparently Microsoft have changed their APIs in VS2015.  Somebody needs to
> figure out what's the approved way now to get the locale name out of a
> _locale_t object.

[Not a Windows person, just curious]

I wonder if the result of ResolveLocaleName("foo", output_buffer,
output_buffer_size) would be the same as
locale->locinfo->locale_name[LC_CTYPE] on the result of
_create_locale(LC_CTYPE, "foo").

https://msdn.microsoft.com/en-us/library/windows/desktop/dd319112(v=vs.85).aspx

-- 
Thomas Munro
http://www.enterprisedb.com



Re: pgBackRest backup from standby

2018-02-19 Thread Don Seiler
On Mon, Feb 19, 2018 at 2:53 PM, David Steele  wrote:

>
> Also, relocating the log directory is easy using the log_directory
> setting, so that's what I recommend if it's an issue.  Some users do
> want to backup their logs.
>
>
That's probably a lot better idea than symlinking anyway. I'll look to do
that in my next round of config changes.



-- 
Don Seiler
www.seiler.us


Re: pgBackRest backup from standby

2018-02-19 Thread Martin Marques
On 19/02/18 at 17:53, David Steele wrote:
> On 2/19/18 3:41 PM, Don Seiler wrote:
> 
>> I'm curious, why even copy the pg_log logs? They aren't needed for
>> database restore or recovery.
> 
> Also, relocating the log directory is easy using the log_directory
> setting, so that's what I recommend if it's an issue.  Some users do
> want to backup their logs.

I was about to reply to Don about this, but David's reply was very clear.

If you don't want the logs inside pgdata (a very wise idea, especially if
you place them on a different partition), just change the configuration GUC.

Regards,

-- 
Martín Marqués    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services



Re: Error when compiling postgresql 9.6.7 with Visual Studio 15.5.6

2018-02-19 Thread Aditya Nugraha
I found out that the patch included in that discussion works fine for
Visual Studio 2017 15.5.6; I think the postgresql developers should include
it in the upstream branch or the 9.6.x variant.

Thanks a lot Tom!

@Thomas

I haven't checked it out, but as mentioned above, the patch is working
fine so far, although I haven't tested it thoroughly.

On Tue, Feb 20, 2018 at 3:54 AM, Thomas Munro
 wrote:
> On Tue, Feb 20, 2018 at 6:23 AM, Tom Lane  wrote:
>> Aditya Nugraha  writes:
>>>When trying to compile postgresql 9.6.7 with Visual Studio 15.5.6,
>>> i am getting errors at these following line :
>>>   src/backend/utils/adt/pg_locale.c(927): error C2037: left of
>>> 'locale_name' specifies undefined struct/union '__crt_lo
>>> cale_data' 
>>> [C:\Users\nameless\Documents\Works\Compiling\pgwininstall\builddir\postgresql\postgresql-9.6.7\postgres.vcxpr
>>> oj]
>>>   src/backend/utils/adt/pg_locale.c(928): error C2198: 'wchar2char':
>>> too few arguments for call [C:\Users\nameless\Docum
>>> ents\Works\Compiling\pgwininstall\builddir\postgresql\postgresql-9.6.7\postgres.vcxproj]
>>
>> This was previously discussed here:
>>
>> https://www.postgresql.org/message-id/flat/CANFyU959RRvAqmrGvLor%3Dgo3PSi4LdYcjDw8TQ6YRvzpouZrog%40mail.gmail.com
>>
>> Apparently Microsoft have changed their APIs in VS2015.  Somebody needs to
>> figure out what's the approved way now to get the locale name out of a
>> _locale_t object.
>
> [Not a Windows person, just curious]
>
> I wonder if the result of ResolveLocaleName("foo", output_buffer,
> output_buffer_size) would be the same as
> locale->locinfo->local_name[LC_TYPE] on the result of
> _create_locale(LC_CTYPE, "foo").
>
> https://msdn.microsoft.com/en-us/library/windows/desktop/dd319112(v=vs.85).aspx
>
> --
> Thomas Munro
> http://www.enterprisedb.com



Re: Error when compiling postgresql 9.6.7 with Visual Studio 15.5.6

2018-02-19 Thread Tom Lane
Aditya Nugraha  writes:
> Found out that the patch that is included in that discussion is
> working fine for Visual Studio 2017 15.5.6, i think postgresql
> developer should include it on upstream branch or 9.6.x variant.

Well, as far as I'm concerned the patch that was proposed there is
far too fragile to be acceptable.  We need to find out what the
Microsoft-approved way of getting the information is.  Assuming that
we know the contents of non-exported data structures is surely not
what they have in mind, and it would undoubtedly break in the next VS
update.

regards, tom lane



Re: Deadlock with 2 processes waiting on transactionids and locking unrelated tables

2018-02-19 Thread David Wheeler
Oh I see. Yeah it’s kind of obvious now that you point it out! 

> Btw, do the transactions use explicit locking?


We occasionally use FOR UPDATE, so that could be involved. We’ll have a closer 
look at the relevant code. 

I’m still curious about why the locks are both transaction locks rather than 
one of them being a row lock. 
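
In the meantime, while one of these waits is in progress we can at least see
who is blocking whom without decoding pg_locks by hand (a sketch; this needs
9.6 or later, and we're on 10.2):

SELECT pid,
       pg_blocking_pids(pid) AS blocked_by,
       wait_event_type,
       wait_event,
       query
FROM pg_stat_activity
WHERE cardinality(pg_blocking_pids(pid)) > 0;

Setting log_lock_waits = on also makes the server log the full lock detail for
any wait that lasts longer than deadlock_timeout.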

Thanks for your help!

Cheers, 

-- David


 

 David Wheeler • software engineer
Inomial Pty Ltd • Automatic Billing 
p +61 3 9663 3554


  
 
 
> On 19 Feb 2018, at 5:54 pm, Rene Romero Benavides  
> wrote:
> 
> My guess is that the transaction doing:
>  
> update "planscheduleitem" set "planschedule"=$1 where "psi"=$2
> 
> updates ticket before reaching that point
> 
> And
> 
> update ticket set unread = true where ticketid = $1
> 
> updates planscheduleitem before that
> 
> Does it make sense to you? Btw, do the transactions use explicit locking?
> 
> 2018-02-18 23:28 GMT-06:00 David Wheeler  >:
> Hi,
> 
> We’re seeing deadlock semi-regularly (0-2 per day) that I’m really having 
> trouble getting to the bottom of. 
> 
> Process 7172 waits for ShareLock on transaction 4078724272; blocked by process 7186.
> Process 7186 waits for ShareLock on transaction 4078724210; blocked by process 7172.
> 
> The two queries in question are updates on unrelated tables. Running the 
> queries on their own shows no overlapping entries in pg_locks. 
> 
> Process 7172: update ticket set unread = true where ticketid = $1
> Process 7186: update "planscheduleitem" set "planschedule"=$1 where "psi"=$2
> 
> How can I work out why Postgres has decided that the two processes are in 
> deadlock? Is there an explainer somewhere on transaction level locks? I can’t 
> see anything in the docs besides that they exist. 
> 
> 
> 
> Details below
> 
> select version();
>   version
> ---
>  PostgreSQL 10.2 (Ubuntu 10.2-1.pgdg14.04+1) on x86_64-pc-linux-gnu, compiled 
> by gcc (Ubuntu 4.8.4-2ubuntu1~14.04.3) 4.8.4, 64-bit
> (1 row)
> 
> ---
> 
> 
> after running update "planscheduleitem" set "planschedule"=$1 where "psi"=$2
> 
> SELECT locktype, virtualxid, transactionid, virtualtransaction, pid, mode, 
> relname, page, tuple
> FROM pg_locks LEFT JOIN pg_class ON (relation=oid) where pid =41715;
>locktype| virtualxid | transactionid | virtualtransaction |  pid  |
>mode   | relname | page | tuple
> ---++---++---+--+-+--+---
>  relation  ||   | 56/2306863 | 41715 | 
> RowExclusiveLock | ticket_parentticketid   |  |
>  relation  ||   | 56/2306863 | 41715 | 
> RowExclusiveLock | ticket_originalticketid |  |
>  relation  ||   | 56/2306863 | 41715 | 
> RowExclusiveLock | ticket_tickettypeid_idx |  |
>  relation  ||   | 56/2306863 | 41715 | 
> RowExclusiveLock | ticket_subject_idx  |  |
>  relation  ||   | 56/2306863 | 41715 | 
> RowExclusiveLock | ticket_closedtime_idx   |  |
>  relation  ||   | 56/2306863 | 41715 | 
> RowExclusiveLock | ticket_assignedto_idx   |  |
>  relation  ||   | 56/2306863 | 41715 | 
> RowExclusiveLock | ticket_serviceuid_idx   |  |
>  relation  ||   | 56/2306863 | 41715 | 
> RowExclusiveLock | ticket_parentuid_idx|  |
>  relation  ||   | 56/2306863 | 41715 | 
> RowExclusiveLock | ticket_createdtime_idx  |  |
>  relation  ||   | 56/2306863 | 41715 | 
> RowExclusiveLock | ticket_txid |  |
>  relation  ||   | 56/2306863 | 41715 | 
> RowExclusiveLock | ticket_tickettype   |  |
>  relation  ||   | 56/2306863 | 41715 | 
> RowExclusiveLock | ticket_ticketpriority   |  |
>  relation  ||   | 56/2306863 | 41715 | 
> RowExclusiveLock | ticket_idx_0|  |
>  relation  ||   | 56/2306863 | 41715 | 
> RowExclusiveLock | ticket_pkey |  |
>  relation  ||   | 56/2306863  

Re: Error when compiling postgresql 9.6.7 with Visual Studio 15.5.6

2018-02-19 Thread Michael Paquier
On Mon, Feb 19, 2018 at 04:17:18PM -0500, Tom Lane wrote:
> Well, as far as I'm concerned the patch that was proposed there is
> far too fragile to be acceptable.  We need to find out what the
> Microsoft-approved way of getting the information is.  Assuming that
> we know the contents of non-exported data structures is surely not
> what they have in mind, and it would undoubtedly break in the next VS
> update.

Definitely agreed.  The locale-related code is *the* pain point when it
comes to MSVC things.  And each time we add support for a new version of
MSVC there is always something different happening and breaking.  There
has never been any discussion around ResolveLocaleName() though.  A
downside is that this would increase the minimal version support bar on
Windows.  Still that would be worth a serious look.
--
Michael




Performance issues during backup

2018-02-19 Thread Dylan Luong
Hi

We perform a nightly base backup of our production PostgreSQL instance. We have a 
script that basically puts the instance into backup mode, backs up (tar) the /Data 
directory, and then takes it out of backup mode, i.e.:
psql -c "SELECT pg_start_backup('${DATE}');"
tar -cvf - ${DATA_DIR} --exclude ${DATA_DIR}/pg_log | split -d -b 
$TAR_SPLIT_SIZE - ${BACKUP_DIR}/${BACKUP_NAME}
psql -c "SELECT pg_stop_backup();"

The size of our database is about 250GB and it usually takes about 1 hour to back up.
During this time, we have a performance issue where queries can take up to 15 seconds 
to return, where normally they take 2 to 3 seconds.
During this time (1:30am) usage on the system is low (fewer than 10 users).
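
One way to narrow this down would be to sample what the slow sessions are
waiting on while the tar is running (assuming PostgreSQL 9.6 or later, which
exposes wait events):

-- run this repeatedly (e.g. every few seconds) during the backup window
SELECT now() AS sample_time,
       pid,
       state,
       wait_event_type,
       wait_event,
       left(query, 60) AS query
FROM pg_stat_activity
WHERE state <> 'idle';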

Has anyone experienced the same problem, and are there any suggestions on where 
to look to resolve it?

Thanks.
Dylan




Re: Connection losing at some places - caused by firewall

2018-02-19 Thread George Neuner
On Mon, 19 Feb 2018 13:02:30 +0100, Durumdara 
wrote:

>2018-02-13 21:21 GMT+01:00 George Neuner :
>
>> On Tue, 14 Nov 2017 12:09:31 +0100, Durumdara 
>> wrote:
>>
>> >*I disabled my firewall at home - the [keepalive] problem vanished!!!*
>>
>> What firewall are you using?  Windows own firewall doesn't interfere
>> with keepalive packets.  Most commercial SOHO firewalls won't either.
>>
>Normal Windows Firewall (Defender). An ASUS test notebook works. The DELL
>isn't.
>So something is different in the FW on DELL. With manually defined (faster)
>keepalive it's working.

Keepalive is part of the TCP protocol - it's an ACK that repeats the
last used packet sequence number.  Under normal circumstances, the
sequence number is incremented (modulo rollover) for each new packet.
Keepalive is recognized as a deliberate and particular breaking of the
packet sequence.

Windows firewall has no settings pertaining to keepalive separate from
the operating system.  Once you have permitted TCP connections to the
port or program, if any keepalive packets are sent, the firewall must
allow them through.

[Of course, both sides must agree on the keepalive settings for it to
work, but that is a different issue.]


It may be that your firewall is f'd up.  The firewall does perform
"stateful inspection" of open TCP connections - essentially looking
for protocol mistakes that may indicate intrusion hacks or replayed
connections.  It could be that the SI module is losing synchronization
when the connection goes idle.
[Back in the day, there were a number of SOHO NAT routers that had
this and other problems with their stateful inspection code.]


You can try running a system file check to see if something has gotten
corrupted:
https://support.microsoft.com/en-us/help/929833/use-the-system-file-checker-tool-to-repair-missing-or-corrupted-system

You also can try resetting the firewall rules to default from the
control panel.  Of course then you'll have to reauthorize every
program that requires a connection.  


for more information on how the firewall works, see: 
https://technet.microsoft.com/en-us/library/cc779199(v=ws.10).aspx


Hope this helps,
George




stored procedure call is not working with "select procedure()" option

2018-02-19 Thread Abhra Kar
Hi,
 I have a stored procedure named "procedure()", which I am calling
with --

Session sess = (Session) entityManager.getDelegate(); // entityManager is a
properly initialised javax.persistence.EntityManager
  sess.createSQLQuery("select procedure()");

The procedure contains some truncate queries like --  EXECUTE('truncate
table abc');

It's not throwing any exception, but it is not executing the procedure. Using the
Oracle-style call -- sess.createSQLQuery("{ call procedure() }").executeUpdate();
-- the procedure executes fine.

What needs to be changed here?
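
For reference, a PostgreSQL function of that shape would be defined and invoked
like this (the body below is a guess based on the description above):

CREATE OR REPLACE FUNCTION procedure() RETURNS void AS $$
BEGIN
    EXECUTE 'truncate table abc';   -- dynamic SQL inside PL/pgSQL
END;
$$ LANGUAGE plpgsql;

-- invoked from plain SQL:
SELECT procedure();

Note that building the query object alone does not run anything; the statement
has to actually be executed by the client, which is what the executeUpdate()
call in the working variant does.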

Thanks and Regards,
Abhra