Sv: Re: Re: Removing INNER JOINs

2017-12-01 Thread Andreas Joseph Krogh
On Friday, 1 December 2017 at 03:53:29, David Rowley <
david.row...@2ndquadrant.com > wrote:
On 1 December 2017 at 15:34, Andreas Joseph Krogh <andr...@visena.com> wrote:  Can someone please explain, in 
layman's terms, what the problems with FKs are related to JOIN-removal?
 
Pretty much what I just wrote after "Unfortunately not," above, although you 
asked a few seconds before I sent.
 
We're able to use UNIQUE INDEXes as proofs to remove LEFT JOINs because (with 
the exception of deferred unique indexes) these are updated right away, rather 
than deferred until the end of the statement, as is the case with NOT DEFERRABLE 
and NOT DEFERRED foreign keys. Because the foreign key checks and actions are 
not applied right away, there is a non-zero window of time during which the 
constraint is violated; therefore, for a query that is run, or is running, 
during that window, we could return incorrect results if we had removed an 
INNER JOIN during the planning of that query.
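
To make that concrete, here is a minimal sketch (the table and column names
are made up) of the LEFT JOIN case the planner can already remove, and the
INNER JOIN case it currently cannot:

CREATE TABLE account (id int PRIMARY KEY, name text);
CREATE TABLE invoice (id int PRIMARY KEY,
                      account_id int NOT NULL REFERENCES account (id));

-- No columns from "account" are used and account.id is unique, so the
-- planner can drop the LEFT JOIN entirely:
EXPLAIN SELECT i.id
FROM invoice i
LEFT JOIN account a ON a.id = i.account_id;

-- The INNER JOIN form is not removed today, because the FK that would
-- guarantee a matching row is only enforced at the end of the statement:
EXPLAIN SELECT i.id
FROM invoice i
JOIN account a ON a.id = i.account_id;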



 
Hm...
The fact that Oracle has solved this makes me think you guys can solve this 
too if you put enough brain-power into it :-)
 
-- Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com 
www.visena.com 
 


 


Re: large numbers of inserts out of memory strategy

2017-12-01 Thread Steven Lembark
On Thu, 30 Nov 2017 08:43:32 -0600
Ted Toth  wrote:

> What is the downside of using a DO block? I'd have to do a nextval on
> each sequence before I could use currval, right? Or I could do 'select
> last_value from '.

You are creating a piece of code that has to be parsed, tokenized,
and compiled prior to execution. What's biting you is that you've
created a function the size of your dataset.

If you like do-blocks then write a short block to insert one record
using placeholders and call it a few zillion times.
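
A minimal server-side sketch of that idea (table and column names are made
up): note that a plain DO block doesn't accept parameters, so this sketch
uses PREPARE/EXECUTE instead:

-- prepare once per session, execute once per row
PREPARE ins_row (text, text) AS
    INSERT INTO yourtable (field1, field2) VALUES ($1, $2);

EXECUTE ins_row ('some', 'value');
EXECUTE ins_row ('another', 'value');

DEALLOCATE ins_row;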

That or (in DBI-speak):


eval
{
    $dbh->{ RaiseError } = 1;
    $dbh->{ AutoCommit } = 0;   # one transaction for the whole batch

    # prepare once; DBD::Pg accepts $1-style placeholders
    # (table and column names here are just placeholders)
    my $sth = $dbh->prepare
    (
        'insert into yourtable ( field1, field2 ) values ( $1, $2 )'
    );

    # execute once per row
    $sth->execute( @$_ ) for @rows;

    $dbh->commit;
}
or die "Failed execution: $@";

which will be nearly as efficient in the long run.

That or just import the data from a csv/tsv (there are good 
examples of data import available in the PG docs).
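
For example, a rough COPY sketch (table, column and file names are made up;
COPY reads the file on the server, while \copy in psql reads it on the
client):

COPY yourtable (field1, field2)
    FROM '/path/to/data.csv'
    WITH (FORMAT csv, HEADER true);

-- or, reading the file on the client side from psql:
-- \copy yourtable (field1, field2) from 'data.csv' with (format csv, header true)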

-- 
Steven Lembark   1505 National Ave
Workhorse Computing Rockford, IL 61103
lemb...@wrkhors.com    +1 888 359 3508



pg data backup from vps

2017-12-01 Thread support-tiger
To diversify risk, we would like to have a daily or weekly data backup 
stored in another location besides the VPS service we are using - 
pg_dump is great for the backup but transferring a growing db across the 
internet to a local machine disk seems slow - how are others handling 
this with postgresql ?  Thks.


--
Support Dept
Tiger Nassau, Inc.
www.tigernassau.com
406-624-9310





Re: pg data backup from vps

2017-12-01 Thread Rob Sargent



On 12/01/2017 12:56 PM, support-tiger wrote:
To diversify risk, we would like to have a daily or weekly data backup 
stored in another location besides the VPS service we are using - 
pg_dump is great for the backup but transferring a growing db across 
the internet to a local machine disk seems slow - how are others 
handling this with postgresql ? Thks.


So long as you're looking for more kinds of risk, why not stream the WAL 
files to another instance?
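
A rough sketch of the primary-side setup for that, with a made-up role name
and password (pg_hba.conf and the standby's recovery configuration still need
to be set up separately):

CREATE ROLE replicator WITH REPLICATION LOGIN PASSWORD 'changeme';
ALTER SYSTEM SET wal_level = 'replica';        -- needs a restart
ALTER SYSTEM SET max_wal_senders = '5';        -- needs a restart
-- then add a replication entry for the standby in pg_hba.conf and
-- point the standby's primary_conninfo at this server.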





Re: pg data backup from vps

2017-12-01 Thread Melvin Davidson
On Fri, Dec 1, 2017 at 3:07 PM, Rob Sargent  wrote:

>
>
> On 12/01/2017 12:56 PM, support-tiger wrote:
>
>> To diversify risk, we would like to have a daily or weekly data backup
>> stored in another location besides the VPS service we are using - pg_dump
>> is great for the backup but transferring a growing db across the internet
>> to a local machine disk seems slow - how are others handling this with
>> postgresql ? Thks.
>>
>> So long as you're looking for more kinds of risk, why not stream the WAL
> files to another instance?
>
>
>
>how are others handling this with postgresql ?

One technique is to replicate to a slave on another server and do the
pg_dump on the slave.

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: pg data backup from vps

2017-12-01 Thread Joshua D. Drake

On 12/01/2017 11:56 AM, support-tiger wrote:
To diversify risk, we would like to have a daily or weekly data backup 
stored in another location besides the VPS service we are using - 
pg_dump is great for the backup but transferring a growing db across 
the internet to a local machine disk seems slow - how are others 
handling this with postgresql ?  Thks.


I would consider something like an archiving slave (PITR).
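
A minimal sketch of the archiving side, with a made-up archive destination
(the restore/PITR side then uses restore_command and a recovery target in its
recovery configuration):

ALTER SYSTEM SET archive_mode = 'on';     -- needs a restart
ALTER SYSTEM SET archive_command = 'rsync %p backuphost:/wal_archive/%f';
SELECT pg_reload_conf();                  -- archive_command is reloadable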

JD



--
Command Prompt, Inc. || http://the.postgres.company/ || @cmdpromptinc

PostgreSQL Centered full stack support, consulting and development.
Advocate: @amplifypostgres || Learn: https://pgconf.org
* Unless otherwise stated, opinions are my own.   *




Re: PostgreSQL and Data Warehouse optimizations

2017-12-01 Thread legrand legrand
Hi,

Parallel Queries are now available, and development is very active for
Declarative Partitioning (which arrived with pg 10).
Many new features are currently being developed in pg 11 (for example hash
partitioning, partition-wise join/aggregate ... mostly around partition usage
optimizations); see https://commitfest.postgresql.org/16/

There is also a lot of work on Sharding (using FDW with remote
partitions) and PoCs on compression, vectorization ...

Like you, I'm very interested in data warehouse / datamart / OLAP queries
...
I've asked about
http://www.postgresql-archive.org/Partition-pruning-agg-push-down-for-star-schema-in-pg-v11-td5994678.html
which seems to me a kind of prerequisite before trying to migrate ...

I'm sure pg 11 will be great for us ;o)

Regards
PAscal






Re: pg data backup from vps

2017-12-01 Thread basti
Replication is not a backup.
It's more like RAID.

That means tuples that are deleted on the master by mistake are also
deleted on the slave.

Correct me if I'm wrong.

Best regards,
basti

On 01.12.2017 21:10, Melvin Davidson wrote:
> 
> 
> On Fri, Dec 1, 2017 at 3:07 PM, Rob Sargent  > wrote:
> 
> 
> 
> On 12/01/2017 12:56 PM, support-tiger wrote:
> 
> To diversify risk, we would like to have a daily or weekly data
> backup stored in another location besides the VPS service we are
> using - pg_dump is great for the backup but transferring a
> growing db across the internet to a local machine disk seems
> slow - how are others handling this with postgresql ? Thks.
> 
> So long as you're looking for more kinds of risk, why not stream the
> WAL files to another instance?
> 
> 
> 
>>how are others handling this with postgresql ?
> 
> One technique is to replicate to a slave on another server and do the
> pg_dump on the slave.
> 
> -- 
> *Melvin Davidson*
> I reserve the right to fantasize.  Whether or not you
> wish to share my fantasy is entirely up to you.



Re: pg data backup from vps

2017-12-01 Thread Rory Campbell-Lange
On 01/12/17, support-tiger (supp...@tigernassau.com) wrote:
> To diversify risk, we would like to have a daily or weekly data backup
> stored in another location besides the VPS service we are using - pg_dump is
> great for the backup but transferring a growing db across the internet to a
> local machine disk seems slow - how are others handling this with postgresql
> ?  Thks.

Speed is related to size. If you have a problem with the speed of data
transfer, you really might want to look at different ways of backing up.
Here is one pretty straightforward strategy based on both pg_dump and
streaming replication:

1. pg_dump
   use the -Fc flag for a compressed custom-format dump, or -Fd
   (directory format) if you also want the -j option to parallelize the dump
   consider adding an audit schema and inserting a row just before the
   dump to make it easy to check the dump worked with pg_restore (you
   can grep pg_restore output; a sketch follows after this list)
   consider dumping twice a day and then rsync those files to nearby
   machines and offsite.
   'nearby machines' is in case we have to restore quickly and it can
   take a lot of time to get big files back into production.
   'offsite' because the place you have your database server might
   evaporate

2. streaming replication
   keep streaming changes to other servers
   https://www.postgresql.org/docs/current/static/warm-standby.html
   again you might want local and remote servers to the existing main
   database
   'local' so if you have other servers using the database they can
   switch over to another server in the same space
   'remote' so you have pretty up-to-date information on a remote server
   that you can use if the place where your main database server
   evaporates
   a cool thing is you can play with the WAL file replay mechanisms so
   that you can keep a slave roughly an hour behind the main database
   server, for example, which can be nice if someone just did something
   really bad in production.
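
A minimal sketch of the audit-schema idea from point 1, with made-up names:

CREATE SCHEMA IF NOT EXISTS audit;
CREATE TABLE IF NOT EXISTS audit.dump_marker (
    dumped_at timestamptz NOT NULL DEFAULT now(),
    note      text
);
-- run this just before each pg_dump, then grep the pg_restore output
-- for the marker to confirm the dump is complete and restorable:
INSERT INTO audit.dump_marker (note) VALUES ('pre-dump marker');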

These two approaches serve different purposes. Generally having a live,
up-to-date version of your database elsewhere based on streaming
replication is the most valuable thing to have if your main database
server goes down, and it can be brilliant having those other servers for
read-only tests and - if you are careful about offlining them and making
them masters - upgrade testing.  But if a data bug or some other issue
started affecting data over time, or you need to retrieve the state of
something a month ago, you really might need dumps too.

I'm sure others will have much more sage advice, but that is a starter
for 10. 

Rory




Problems with triggers and table lock

2017-12-01 Thread Job
Dear guys,

we are using PostgreSQL 9.6.1 with Rubyrep in order to replicate some tables 
across two different, and remote, database servers.
For a few weeks now, when inserting/updating some rows, the statement 
sometimes remains waiting and the table seems locked for inserts/updates.

When I issue "select * from pg_stat_activity", no query shows as locked.
I only get the error message below when I cancel the stuck insert with 
"ctrl+c" (I also tried directly from the shell with psql):

ERROR:  canceling statement due to user request
CONTEXT:  while updating tuple (0,7) in relation "TABLE"
SQL statement "UPDATE TABLE SET FIELD=NOW() WHERE FIELD IS NULL"
PL/pgSQL function TRIGGER_NAME() line 3 at SQL statement

How can I see what is locking my table/tables?

Thank you!
F


Re: pg data backup from vps

2017-12-01 Thread John R Pierce

On 12/1/2017 12:44 PM, basti wrote:

Replication is not a backup.
It's more like RAID.

That means tuples that are deleted on the master by mistake are also
deleted on the slave.

Correct me if I'm wrong.


a wal archive plus occasional basebackups lets you restore to any point 
in time (PITR) covered since the oldest basebackup.


think of a base backup as a 'full' backup, and the wal logs in the 
archive as incrementals. one such approach might be a weekly 
basebackup, where you keep the last 4 weeks, and keep all wal files 
since the start of the oldest basebackup. yes, this will take quite a 
bit of space.



--
john r pierce, recycling bits in santa cruz