Re: stable for each row before insert trigger

2019-10-17 Thread Олег Самойлов
Eh, stupid answer. Of course I read the documentation. But what about you? Do you know what a trigger function is? (for each row before insert) A trigger function doesn't have parameters; instead it gets special variables, OLD and NEW for instance, or TG_ARGV[]. If the trigger function depends only on
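For reference, a minimal sketch of the kind of for each row before insert trigger function being discussed, one that reads and modifies only NEW (table and column names are hypothetical; EXECUTE FUNCTION is the PostgreSQL 11+ spelling, older releases use EXECUTE PROCEDURE):

    CREATE FUNCTION normalize_name() RETURNS trigger
    LANGUAGE plpgsql AS $$
    BEGIN
        -- NEW is the row about to be inserted; a BEFORE trigger may change it
        NEW.name := lower(NEW.name);
        RETURN NEW;
    END $$;

    CREATE TRIGGER trg_normalize_name
        BEFORE INSERT ON my_table
        FOR EACH ROW EXECUTE FUNCTION normalize_name();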

Postgres Point in time Recovery (PITR),

2019-10-17 Thread Daulat Ram
Hello all, can you please share some ideas and scenarios for how we can do PITR in case of a disaster? Thanks,
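Not an answer from the thread, just a hedged sketch of the usual ingredients: continuous WAL archiving plus a base backup, then a restore that recovers to a target time. The paths and timestamp below are placeholders; the parameters (restore_command, recovery_target_time, recovery_target_action) are standard, and go in recovery.conf up to PostgreSQL 11, or in postgresql.conf together with an empty recovery.signal file from PostgreSQL 12 on:

    # set on the restored data directory, after copying in the base backup
    restore_command = 'cp /mnt/wal_archive/%f "%p"'   # hypothetical WAL archive location
    recovery_target_time = '2019-10-17 12:00:00'      # hypothetical recovery target
    recovery_target_action = 'promote'                # leave recovery and accept writes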

Re: Using PostgreSQL for Machine Learning Data Pipelines

2019-10-17 Thread chris
Hi there, there is a project named Apache MADlib that may help you. http://madlib.apache.org Thanks, Chris | yuanzefuwa...@126.com On 10/18/2019 02:04, Pankaj Jangid wrote: Hi, I am working on a machine-learning project. Because of the available study material in the ML area

Re: drop database

2019-10-17 Thread Ron
There are other databases in the cluster. On 10/17/19 5:12 PM, Andrew Kerber wrote: If you are decommissioning the database, why not just rm -rf the whole system? On Thu, Oct 17, 2019 at 4:31 PM Michael Lewis wrote: Your plan to loop over tables and truncate t

Re: stable for each row before insert trigger

2019-10-17 Thread Adrian Klaver
On 10/17/19 4:31 PM, Олег Самойлов wrote: Hi all. According to the documentation, the words "VOLATILE, STABLE, IMMUTABLE" are somehow useful with trigger functions; for instance, it is mentioned that an AFTER INSERT trigger should be VOLATILE. The question is how these words affect a for each row before in

stable for each row before insert trigger

2019-10-17 Thread Олег Самойлов
Hi all. According to the documentation, the words "VOLATILE, STABLE, IMMUTABLE" are somehow useful with trigger functions; for instance, it is mentioned that an AFTER INSERT trigger should be VOLATILE. The question is how these words affect a for each row before insert trigger? Can there be some optimisation here

Re: drop database

2019-10-17 Thread pabloa98
Perhaps you want to TRUNCATE TABLEs. That will mitigate any I/O impact. On Thu, Oct 17, 2019 at 3:13 PM Andrew Kerber wrote: > If you are decommissioning the database, why not just rm -rf the whole > system? > > On Thu, Oct 17, 2019 at 4:31 PM Michael Lewis wrote: > >> Your plan to loop over ta

Re: drop database

2019-10-17 Thread Andrew Kerber
If you are decommissioning the database, why not just rm -rf the whole system? On Thu, Oct 17, 2019 at 4:31 PM Michael Lewis wrote: > Your plan to loop over tables and truncate them seems great if you are > worried. It seems simple to verify that space is being freed as you go, and > also easy t

Re: drop database

2019-10-17 Thread Michael Lewis
Your plan to loop over tables and truncate them seems great if you are worried. It seems simple to verify that space is being freed as you go, and also easy to change tactics if the need arises. >
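A sketch of the kind of loop being suggested, using a DO block over pg_tables (the schema filter is a placeholder; verify the resulting table list before running anything like this against production):

    DO $$
    DECLARE
        t text;
    BEGIN
        FOR t IN
            SELECT format('%I.%I', schemaname, tablename)
            FROM pg_tables
            WHERE schemaname = 'public'   -- placeholder: restrict to the schema(s) being emptied
        LOOP
            EXECUTE format('TRUNCATE TABLE %s CASCADE', t);   -- drops the table's old data files at commit
        END LOOP;
    END $$;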

Re: drop database

2019-10-17 Thread Julie Nishimura
I think you don't swallow an entire elephant at once, you can choke on it; it is safer to eat it piece by piece :) From: Julie Nishimura Sent: Thursday, October 17, 2019 2:19 PM To: Ron; pgsql-general@lists.postgresql.org Subject: Re: drop database I've s

Re: drop database

2019-10-17 Thread Julie Nishimura
I've seen some performance degradation on some other RDBMS systems when "drop database" was in progress. We need to drop a database which is 16 TB with minimal impact on our end users. There are 32 other databases with hundreds of connections on the same cluster, and I just want to release the spa

Re: Sv: drop database

2019-10-17 Thread Ron
On 10/17/19 3:58 PM, Andreas Joseph Krogh wrote: On Thursday, 17 October 2019 at 22:44:15, Julie Nishimura <juliez...@hotmail.com> wrote: Guys, can anybody reply whether they have dropped any PostgreSQL database larger than 1 TB, and how long did it take? Thanks About 280ms: andreak@[loca

Sv: drop database

2019-10-17 Thread Andreas Joseph Krogh
On Thursday, 17 October 2019 at 22:44:15, Julie Nishimura <juliez...@hotmail.com> wrote: Guys, can anybody reply whether they have dropped any PostgreSQL database larger than 1 TB, and how long did it take? Sorry, I misread your question as 1 GB (not TB)... -- Andreas Jose

Sv: drop database

2019-10-17 Thread Andreas Joseph Krogh
On Thursday, 17 October 2019 at 22:44:15, Julie Nishimura <juliez...@hotmail.com> wrote: Guys, can anybody reply whether they have dropped any PostgreSQL database larger than 1 TB, and how long did it take? Thanks About 280ms: andreak@[local]:5432 12.0 andreak=# select p

Re: drop database

2019-10-17 Thread Ron
On 10/17/19 3:44 PM, Julie Nishimura wrote: Guys, can anybody reply whether they have dropped any PostgreSQL database larger than 1 TB, and how long did it take? A lot has to do with how quickly the underlying file system can delete files. To be honest, though... *does it really matter* how long it takes?

drop database

2019-10-17 Thread Julie Nishimura
Guys, can anybody reply whether they have dropped any PostgreSQL database larger than 1 TB, and how long did it take? Thanks
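Not from the thread itself, but a quick way to see how much data such a drop has to unlink (the replies above note the duration mostly depends on how fast the filesystem can remove those files); the database name is a placeholder:

    SELECT pg_size_pretty(pg_database_size('big_db'));   -- placeholder database name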

Re: Changing PK on replicated database

2019-10-17 Thread PegoraroF10
Very good, records were changed and everything is ok. Well, sorry for this stupid question, but some time ago we did something similar to this and our replica stopped, so I was just afraid of that happening again. -- Sent from: https://www.postgresql-archive.org/PostgreSQL-general-f1843780.html

Re: releasing space

2019-10-17 Thread Adrian Klaver
On 10/17/19 10:20 AM, Julie Nishimura wrote: Hello everybody, We are running a PostgreSQL 9.6.2 cluster, master -> standby (streaming replication), with 22 TB of space (constantly struggling with the space, pruning the old data, but not fast enough). The biggest db takes 16 TB. So, we've copied it to

Re: Vacuum very big table - how the full vacuum works in background/internally?

2019-10-17 Thread Michael Lewis
> > We must know it to avoid out-of-disk problems and too much downtime. > You may be interested in this extension: https://github.com/reorg/pg_repack

Using PostgreSQL for Machine Learning Data Pipelines

2019-10-17 Thread Pankaj Jangid
Hi, I am working on a machine-learning project. Because of the available study material in the ML area, the team is inclined towards Apache Kafka, Apache Spark for data-pipelines and analytics. Our requirement is to store huge amounts of continuously increasing data that cannot fit into a single

releasing space

2019-10-17 Thread Julie Nishimura
Hello everybody, We are running a PostgreSQL 9.6.2 cluster, master -> standby (streaming replication), with 22 TB of space (constantly struggling with the space, pruning the old data, but not fast enough). The biggest db takes 16 TB. So, we've copied it to another server, and now we would like to delete

Re: Can you please tell us how set this prefetch attribute in following lines.

2019-10-17 Thread Laurenz Albe
On Thu, 2019-10-17 at 11:16 +, M Tarkeshwar Rao wrote: > How do I fetch a certain number of tuples from a Postgres table? > > I do the same in Oracle using the following lines, by setting the prefetch attribute. > > For Oracle > // Prepare query > if( OCIStmtPrepare( myOciStatement, myOciError, (tex

Re: Can you please tell us how set this prefetch attribute in following lines.

2019-10-17 Thread Reid Thompson
On Thu, 2019-10-17 at 11:16 +, M Tarkeshwar Rao wrote: > Hi all, > > How do I fetch a certain number of tuples from a Postgres table? > > I do the same in Oracle using the following lines, by setting the prefetch attribute. > > For Oracle > // Prepare query > if( OC

Re: Vacuum very big table - how the full vacuum works in background/internally?

2019-10-17 Thread Luca Ferrari
On Thu, Oct 17, 2019 at 5:10 PM Durumdara wrote: > Please help me: how does the PGSQL full vacuum work internally? (case 1, case 2, or > something else) > As far as I know, a VACUUM FULL will rewrite the whole table without inserting (and therefore later deleting) tuples that are not visible. I would be quite su
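For the large-object case in the original question, a minimal sketch of checking and reclaiming the space, assuming it lives in the system table pg_largeobject (where large object data is stored). VACUUM FULL takes an exclusive lock and needs enough free disk for the rewritten copy, so it belongs in a maintenance window:

    SELECT pg_size_pretty(pg_total_relation_size('pg_largeobject'));   -- size before
    VACUUM FULL pg_largeobject;                                        -- rewrites the table compactly
    SELECT pg_size_pretty(pg_total_relation_size('pg_largeobject'));   -- size after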

Vacuum very big table - how the full vacuum works in background/internally?

2019-10-17 Thread Durumdara
Hello! We stored some binaries in large objects. Because of that, the table size is now 80 GB. We deleted 80% of the records (lo_unlink), and autovacuum reclaimed the space for new elements. So the table doesn't grow anymore, but we have to get more space on this server. We can delete 99% of these records, but

Re: Changing PK on replicated database

2019-10-17 Thread Adrian Klaver
On 10/17/19 3:13 AM, PegoraroF10 wrote: Correct, those records are on the replica too. I'm just talking about the best way to update those keys and all their dependent tables. If I change them first on the master they will not be replicated, because it will be an update and their PK will not be found on r

RE: CVE-2018-1058

2019-10-17 Thread Lizeth Solis Aramayo
Forgot a fourth option: 4) Use the 9.6.15 pg_restore to restore the 9.6.15 pg_dump to the 9.6.5 database. I don't know how to do that. Could you help me please? Is it just a matter of copying pg_restore from one server to another? -Original message- From: Adrian Klaver [mailto:adrian.kla...@akl

Re: CVE-2018-1058

2019-10-17 Thread Adrian Klaver
On 10/17/19 6:46 AM, Lizeth Solis Aramayo wrote: Forgot a fourth option: 4) Use the 9.6.15 pg_restore to restore the 9.6.15 pg_dump to the 9.6.5 database. I don't know how to do that. Could you help me please? Is it just a matter of copying pg_restore from one server to another? It would be easier to

Sv: Conflict between autovacuum and backup restoration

2019-10-17 Thread Andreas Joseph Krogh
On Thursday, 17 October 2019 at 15:48:09, Ekaterina Amez <ekaterina.a...@zunibal.com> wrote: Good afternoon, We've finally made the migration+upgrade from the old server with v7.14 to the new server with v8.4 and, before we test and plan an upgrade to v9.6, I'm check

Conflict between autovacuum and backup restoration

2019-10-17 Thread Ekaterina Amez
Good afternoon, We've finally made the migration+upgrade from the old server with v7.14 to the new server with v8.4 and, before we test and plan an upgrade to v9.6, I'm checking the logs to find out if there's any problem with this upgrade. We've been fixing things and only one problem remains in the log

Re: Can functions containing a CTE be PARALLEL SAFE?

2019-10-17 Thread Erwin Brandstetter
Makes sense, thanks for the confirmation. Maybe clarify in the manual? Regards Erwin On Thu, Oct 17, 2019 at 11:20 AM Tom Lane wrote: > Erwin Brandstetter writes: > >> The following operations are always parallel restricted. > >> - Scans of common table expressions (CTEs). > > > Now I am unsur

Re: PostgreSQL memory usage

2019-10-17 Thread Tom Lane
Alexander Pyhalov writes: > However, one table has 2250 partitions and I've seen several selects to this > table (based on the primary key of individual partitions) with a huge IN () > list (about 500 keys). I don't expect this to be efficient, but I'm unsure that > these queries caused such memory

RE: PostgreSQL memory usage

2019-10-17 Thread Alexander Pyhalov
No, there are no long text or bytea fields, just int, bigint, ctime, and limited varchar types. However, one table has 2250 partitions and I've seen several selects to this table (based on the primary key of individual partitions) with a huge IN () list (about 500 keys). I don't expect this to be ef

Re: A little confusion about JSON Path

2019-10-17 Thread Laurenz Albe
Thomas Kellerer wrote: > I don't understand why the following two JSON Path expressions aren't doing > the same thing in Postgres 12: > > with sample (data) as ( > values > ('{"k1": {"list":[1,2,3]}}'::jsonb) > ) > select data, > jsonb_path_exists(data, '$.k1
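The quoted query is cut off above; purely for orientation, here is a self-contained example of the kind of jsonpath expression involved (not the poster's exact pair of paths), showing a filter over the array elements:

    WITH sample (data) AS (
        VALUES ('{"k1": {"list": [1, 2, 3]}}'::jsonb)
    )
    SELECT data,
           jsonb_path_exists(data, '$.k1.list[*] ? (@ == 2)') AS has_two,   -- filter applied to each element
           jsonb_path_exists(data, '$.k1.missing')            AS has_key    -- false: the member does not exist
    FROM sample;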

Can you please tell us how set this prefetch attribute in following lines.

2019-10-17 Thread M Tarkeshwar Rao
Hi all, how do I fetch a certain number of tuples from a Postgres table? I do the same in Oracle using the following lines, by setting the prefetch attribute. For Oracle // Prepare query if( OCIStmtPrepare( myOciStatement, myOciError, (text *)aSqlStatement, // Get statement type OCIAttrGet( (voi
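The Oracle snippet is truncated here; on the PostgreSQL side, one common way to pull rows in fixed-size batches (a sketch with a placeholder table name, not necessarily what the replies above propose) is an explicit cursor:

    BEGIN;
    DECLARE batch_cur CURSOR FOR SELECT * FROM my_table;   -- placeholder table
    FETCH FORWARD 100 FROM batch_cur;                      -- first 100 rows
    FETCH FORWARD 100 FROM batch_cur;                      -- next 100 rows
    CLOSE batch_cur;
    COMMIT;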

Re: PostgreSQL memory usage

2019-10-17 Thread Laurenz Albe
Alexander Pyhalov wrote: > After looking at my DBMS server for some time I've understood that I don't > understand what was going on... > > The server has 48 GB RAM. shared_buffers is set to 12GB, work_mem to 32MB, and > pgbouncer > in transaction mode is used to pool connections (pool size 80) to Postg

Re: pgbackrest with PAF(corosync and pacmaker)

2019-10-17 Thread David Steele
On 10/17/19 9:48 AM, Ajay Pratap wrote: I am using Pacemaker and Corosync to set up a two-node high availability cluster for PostgreSQL 10. In this scenario I want to set up timely backups with pgbackrest. Using the _backup from standby_ feature I was able to take a backup from my secondary postg

Re: Has there been any discussion of custom dictionaries being defined in the database?

2019-10-17 Thread Karsten Hilbert
On Thu, Oct 17, 2019 at 11:52:39AM +0200, Tom Lane wrote: > Morris de Oryx writes: > > Given that Amazon is bragging this week about turning off Oracle, it seems > > like they could kick some resources towards contributing something to the > > Postgres project. With that in mind, is the idea of d

Re: Changing PK on replicated database

2019-10-17 Thread PegoraroF10
Correct, those records are on the replica too. I'm just talking about the best way to update those keys and all their dependent tables. If I change them first on the master they will not be replicated, because it will be an update and their PK will not be found on the replica, correct? If so, do I need to upda

Re: Has there been any discussion of custom dictionaries being defined in the database?

2019-10-17 Thread Morris de Oryx
Nope, no custom C installs. RDS is super convenient in many ways, but also limited. You can't, for example, run TimeScale, install RUM indexes (if those still work), or any novel plugins. And you can't do anything at all requiring a file reference. The backup features are outstanding. But, yeah, so

Re: Pgbackrest backup is too slow

2019-10-17 Thread Ajay Pratap
Thanks for your previous replies. It turns out my machine was having kernel issues, which made it slow. I set up another cluster on a different machine and now I am able to back up 42G in 6 minutes. On Fri, Oct 11, 2019 at 7:23 PM Brad Nicholson wrote: > Stephen Frost wrote on 2019/10/11 08:50:53 AM

pgbackrest with PAF(corosync and pacmaker)

2019-10-17 Thread Ajay Pratap
##Arch OS: CentOS 7, PostgreSQL 10, pgbackrest 2.18, PAF: two nodes (primary and secondary) with Corosync and Pacemaker ##Query I am using Pacemaker and Corosync to set up a two-node high availability cluster for PostgreSQL 10. In this scenario I want to set up timely backups with pgbackrest. Using the _b

Re: Has there been any discussion of custom dictionaries being defined in the database?

2019-10-17 Thread Tom Lane
Morris de Oryx writes: > Given that Amazon is bragging this week about turning off Oracle, it seems > like they could kick some resources towards contributing something to the > Postgres project. With that in mind, is the idea of defining dictionaries > within a table somehow meritless, or unexpec

Re: Has there been any discussion of custom dictionaries being defined in the database?

2019-10-17 Thread Morris de Oryx
Fair. Given that Amazon is bragging this week about turning off Oracle, it seems like they could kick some resources towards contributing something to the Postgres project. With that in mind, is the idea of defining dictionaries within a table somehow meritless, or unexpectedly difficult?

Re: Can functions containing a CTE be PARALLEL SAFE?

2019-10-17 Thread Tom Lane
Erwin Brandstetter writes: >> The following operations are always parallel restricted. >> - Scans of common table expressions (CTEs). > Now I am unsure whether I can use `PARALLEL SAFE` for functions containing > a CTE (while fulfilling all other criteria)? AFAIR, the reason for treating CTEs a
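For reference, the syntax at issue as a sketch: a hypothetical SQL function that contains a CTE and is declared PARALLEL SAFE (whether that label is actually justified in general is exactly what the thread is about, so treat it as illustrative only):

    CREATE FUNCTION list_squares(n int) RETURNS SETOF int
    LANGUAGE sql STABLE PARALLEL SAFE AS $$
        WITH nums AS (
            SELECT generate_series(1, n) AS i
        )
        SELECT i * i FROM nums
    $$;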

Re: Has there been any discussion of custom dictionaries being defined in the database?

2019-10-17 Thread Tom Lane
Morris de Oryx writes: > We're deployed on RDS, which does not give you any file system to access. > I'd love to be able to create a custom thesaurus dictionary for our > situation, which seems like it is impossible in a setup like ours. > Has there been any discussion of making dictionary config