Logical decoding on standby

2018-03-12 Thread Andreas Joseph Krogh
Does anybody know if $subject will make it into v11?
 
--
Andreas Joseph Krogh


Sv: Re: Logical decoding on standby

2018-03-12 Thread Andreas Joseph Krogh
On Tuesday 13 March 2018 at 01:08:03, Andreas Kretschmer <andr...@a-kretschmer.de> wrote:
On 13 March 2018 00:58:27 CET, Andreas Kretschmer wrote:
 >On 12 March 2018 21:18:22 CET, Andreas Joseph Krogh wrote:
 >>Does anybody know if $subject will make it into v11?
 >> 
 >>--
 >>Andreas Joseph Krogh
 >
 >Why do you think you need this?
 >
 >Regards, Andreas

 Let me explain my question. One of the key aspects of logical replication is
that you can define what to replicate. That wouldn't work in this way; that's
why I'm asking.

 Regards, Andreas
 
I have streaming replication of a cluster containing many databases to a
standby server. I need a reporting server which only needs a subset of some
(large) tables of one database, and I am planning to use the built-in logical
replication for that. It is my understanding that logical replication will
cause more WAL traffic, so I'm trying to offload WAL traffic from the primary. I
thought using logical replication from the standby would help with that, but
realized it's not supported in v10. Am I wrong in planning this way?
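 
In v10 the plan above would be plain built-in logical replication from the primary; a minimal sketch (table and connection names are hypothetical):

-- On the primary, publish only the needed tables:
CREATE PUBLICATION reporting_pub FOR TABLE big_table1, big_table2;

-- On the reporting server:
CREATE SUBSCRIPTION reporting_sub
  CONNECTION 'host=primary dbname=mydb user=replicator'
  PUBLICATION reporting_pub;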
 
-- Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com
www.visena.com


 


Sv: Re: inner join elimination

2018-06-07 Thread Andreas Joseph Krogh
On Thursday 7 June 2018 at 23:44:04, pinker <pin...@onet.eu> wrote:
As far as I know, PostgreSQL only does OUTER JOIN elimination; with inner joins
 it doesn't work.
 
See this thread:

https://www.postgresql.org/message-id/flat/VisenaEmail.2b.7dfa64bdab147c49.1600d1d9df0%40tc7-visena#VisenaEmail.2b.7dfa64bdab147c49.1600d1d9df0@tc7-visena
 
-- Andreas Joseph Krogh




Sv: Re: CTE optimization fence

2018-06-27 Thread Andreas Joseph Krogh
On Wednesday 27 June 2018 at 07:45:25, Thomas Kellerer <spam_ea...@gmx.net> wrote:
Tom Lane wrote on 27.06.2018 at 05:48:
 >> I see there was some discussion last year about removing the CTE
 >> optimization fence (e.g.
 >> http://www.postgresql-archive.org/CTE-inlining-td5958992.html) but can't
 >> find anything more recent. Does anyone know if this is still under
 >> consideration?
 >
 > but we have to settle on a way of controlling it.

 +1 from me.

 I am running more and more into situations where people consider this a bug 
rather than a feature.

 FWIW, I think a GUC that switches between the current (mostly unwanted, at 
least surprising)
 way and one where the CTE is optimized together with the main query would 
suit "most" people.

 For sake of compatibility this could default to the current behaviour
 
+1 from me. The default should be "no fence", for the sake of least surprise I
think. Documenting the change would be sufficient.
I hope this will be picked up in the coming V12-cycle.
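 
To illustrate the fence as it stands in v10/v11 (a minimal sketch; table name hypothetical):

WITH cte AS (SELECT * FROM some_table)
SELECT * FROM cte WHERE id = 42;   -- the CTE is materialized first; an index on some_table.id is not used

SELECT * FROM (SELECT * FROM some_table) AS sub
WHERE id = 42;                     -- the equivalent sub-query is inlined and can use the index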
 
-- Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com
www.visena.com


 


Sv: Re: Sv: Re: CTE optimization fence

2018-06-27 Thread Andreas Joseph Krogh
On Wednesday 27 June 2018 at 11:44:05, Adrien NAYRAT <adrien.nay...@anayrat.info> wrote:
On 06/27/2018 09:58 AM, Andreas Joseph Krogh wrote:
 >      >
 >      > but we have to settle on a way of controlling it.
 >
 >     +1 from me.
 >
 >     I am running more and more into situations where people consider
 >     this a bug rather than a feature.
 >
 >     FWIW, I think a GUC that switches between the current (mostly
 >     unwanted, at least surprising)
 >     way and one where the CTE is optimized together with the main query
 >     would suit "most" people.
 >
 >     For sake of compatibility this could default to the current behaviour
 >
 > +1 from me. The default should be "no fence" for sake of least surprise
 > I think. Documenting the change would be sufficient.
 > I hope this will be picked up in the coming V12-cycle.


 FYI this subject has been discussed in this thread:
 https://www.postgresql.org/message-id/5351711493487900%40web53g.yandex.ru

 Regards,
  
 
I know. I hate the INLINE proposal and hope the default behaviour will be like in
other DBs: inlined like a sub-query by default. A GUC for preserving the fence is
what I hope will happen.
 
-- Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com
www.visena.com


 


WAL-dir filling up with wal_level = logical

2018-08-03 Thread Andreas Joseph Krogh
Version: PG-10.4
 
I don't (yet) have any replication slots configured, and hence no standbys are
subscribed, but I have wal_level = logical configured to be able to add
subscribers later. I'm seeing that the WAL dir is filling up with WAL files (now
17GB and not declining); is this expected behaviour?
 
Thanks.
 
--
 Andreas Joseph Krogh



Sv: WAL-dir filling up with wal_level = logical

2018-08-03 Thread Andreas Joseph Krogh
On Friday 3 August 2018 at 12:12:33, Andreas Joseph Krogh <andr...@visena.com> wrote:
Version: PG-10.4
 
I don't (yet) have any replication slots configured, and hence no standbys are
subscribed, but I have wal_level = logical configured to be able to add
subscribers later. I'm seeing that the WAL dir is filling up with WAL files (now
17GB and not declining); is this expected behaviour?
 
Thanks.
 
Hm, a co-worker noticed that we have max_wal_size = 16GB; that might be the
reason :-)
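 
For anyone checking the same thing, these are the knobs worth looking at (a sketch; names as of v10):

SHOW max_wal_size;        -- soft limit on WAL between checkpoints
SHOW wal_keep_segments;   -- extra segments kept around for streaming standbys
SELECT slot_name, restart_lsn FROM pg_replication_slots;  -- slots holding WAL back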
 
--
 Andreas Joseph Krogh



Logical replication from standby

2018-08-16 Thread Andreas Joseph Krogh
Any plans to support $subject?
 
Offloading the primary by replicating from standby would be very useful.
 
-- Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com
www.visena.com




Sv: Re: Logical replication from standby

2018-08-16 Thread Andreas Joseph Krogh
On Thursday 16 August 2018 at 14:18:12, Andres Freund <and...@anarazel.de> wrote:
On 2018-08-16 13:35:06 +0200, Andreas Joseph Krogh wrote:
 > Any plans to support $subject?

 There's LOADS of discussion on this on the lists.
 
I couldn't find anything specifically regarding "from standby", other than 
previous posts by me, which didn't really result in anything.
 
--
 Andreas Joseph Krogh



Sv: Re: Logical replication from standby

2018-08-16 Thread Andreas Joseph Krogh
On Thursday 16 August 2018 at 14:50:47, Andres Freund <and...@anarazel.de> wrote:
Hi,

 On 2018-08-16 14:32:34 +0200, Andreas Joseph Krogh wrote:
 > On Thursday 16 August 2018 at 14:18:12, Andres Freund <and...@anarazel.de> wrote:
 > On 2018-08-16 13:35:06 +0200, Andreas Joseph Krogh wrote:
 >  > Any plans to support $subject?
 >
 >  There's LOADS of discussion on this on the lists.
 >  
 > I couldn't find anything specifically regarding "from standby", other than
 > previous posts by me, which didn't really result in anything.

 See e.g.
 
http://archives.postgresql.org/message-id/CAMsr%2BYEVmBJ%3DdyLw%3D%2BkTihmUnGy5_EW4Mig5T0maieg_Zu%3DXCg%40mail.gmail.com
 and also https://commitfest.postgresql.org/15/788/ etc.
 
I saw the commitfest item but sadly nothing has happened.
Thanks for pointing out the thread.
 
Craig, if you are picking this up, are you planning to work more on it?
 
--
 Andreas Joseph Krogh



Sv: Re: regex match and special characters

2018-08-16 Thread Andreas Joseph Krogh
On Thursday 16 August 2018 at 15:16:52, Adrian Klaver <adrian.kla...@aklaver.com> wrote:
On 08/16/2018 03:59 AM, Alex Kliukin wrote:
 > Hi,
 >
 > Here is a simple SQL statement that gives different results on PostgreSQL 
9.6 and PostgreSQL 10+. The space character at the end of the string is 
actually U+2006 SIX-PER-EM SPACE 
(http://www.fileformat.info/info/unicode/char/2006/index.htm)
 >
 > test=# select 'abcd ' ~ 'abcd\s';
 >  ?column?
 > ----------
 >  t
 > (1 row)
 >
 > test=# select version();
 >                                           version
 > ------------------------------------------------------------------------------------------------
 >  PostgreSQL 12devel on x86_64-pc-linux-gnu, compiled by gcc (Gentoo 6.4.0-r1 p1.3) 6.4.0, 64-bit
 > (1 row)
 >
 >
 > On another server (running on the same system on a different port)
 >
 > postgres=# select version();
 >                                           version
 > ------------------------------------------------------------------------------------------------
 >  PostgreSQL 9.6.9 on x86_64-pc-linux-gnu, compiled by gcc (Gentoo 6.4.0-r1 p1.3) 6.4.0, 64-bit
 > (1 row)
 >
 > postgres=# select 'abcd ' ~ 'abcd\s';
 >  ?column?
 > ----------
 >  f
 > (1 row)
 >
 > For both clusters, the client encoding is UTF8, the database encoding and 
collation is UTF8 and en_US.utf8 respectively, and the lc_ctype is en_US.utf8. 
I am accessing the databases running locally by ssh-ing first to the host.
 >
 > I observed similar issues with other Linux-based servers running Ubuntu, in 
all cases the regex resulted in true on PostgreSQL 10+ and false on earlier 
versions (down to 9.3). The query comes from a table check that suddenly 
stopped accepting rows valid in the older version during the migration. Making 
it  select 'abcd ' ~ E'abcd\\s' doesn't  modify the outcome, unsurprisingly.
 >
 > Is it reproducible for others here as well? Given that it is, Is there a 
way to make both versions behave the same?

 select version();
                                       version
 -------------------------------------------------------------------------------------
  PostgreSQL 10.5 on x86_64-pc-linux-gnu, compiled by gcc (SUSE Linux) 4.8.5, 64-bit

 lc_collate                          | en_US.UTF-8
 lc_ctype                            | en_US.UTF-8

 test=# select 'abcd'||chr(2006) ~ E'abcd\s';
   ?column?
 ----------
   f
 (1 row)

 In your example you are working on Postgres devel. Have you tried it on
 Postgres 10 and/or 11?
 
chr(2006) produces the wrong character, as 2006 is the hex value of the code
point (U+2006); chr() takes a decimal code point, so you have to use 8198:
 
andreak@[local]:5433 10.4 andreak=# select version();
 ┌─────────────────────────────────────────────────────────────────────────────────────────────────┐
 │                                              version                                             │
 ├─────────────────────────────────────────────────────────────────────────────────────────────────┤
 │ PostgreSQL 10.4 on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 7.3.0-16ubuntu3) 7.3.0, 64-bit   │
 └─────────────────────────────────────────────────────────────────────────────────────────────────┘
 (1 row)

andreak@[local]:5433 10.4 andreak=# select 'abcd'||chr(8198) ~ 'abcd\s';
 ┌──────────┐
 │ ?column? │
 ├──────────┤
 │ t        │
 └──────────┘
 (1 row)
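 
A quick way to convert the hex code point to the decimal argument chr() expects (a sketch):

select x'2006'::int;                   -- 8198, the decimal value of U+2006
select chr(x'2006'::int) = chr(8198);  -- t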
  
 
-- Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com
www.visena.com


 


Sv: Re: Sv: Re: regex match and special characters

2018-08-16 Thread Andreas Joseph Krogh
On Thursday 16 August 2018 at 16:32:40, Adrian Klaver <adrian.kla...@aklaver.com> wrote:
On 08/16/2018 07:04 AM, Andreas Joseph Krogh wrote:

 > chr(2006) produces the wrong character, as 2006 is the hex value of the
 > code point (U+2006); chr() takes a decimal code point, so you have to use 8198:
 >
 > andreak@[local]:5433 10.4 andreak=# select version();
 > ┌─────────────────────────────────────────────────────────────────────────────────────────────────┐
 > │                                              version                                             │
 > ├─────────────────────────────────────────────────────────────────────────────────────────────────┤
 > │ PostgreSQL 10.4 on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 7.3.0-16ubuntu3) 7.3.0, 64-bit   │
 > └─────────────────────────────────────────────────────────────────────────────────────────────────┘
 > (1 row)
 >
 > andreak@[local]:5433 10.4 andreak=# select 'abcd'||chr(8198) ~ 'abcd\s';
 > ┌──────────┐
 > │ ?column? │
 > ├──────────┤
 > │ t        │
 > └──────────┘
 > (1 row)

 Argh, read the wrong line. Thanks for the correction. Still:

 test=# select version();
                                       version
 -------------------------------------------------------------------------------------
  PostgreSQL 10.5 on x86_64-pc-linux-gnu, compiled by gcc (SUSE Linux) 4.8.5, 64-bit
 (1 row)


 test=# select 'abcd'||chr(8198) ~ E'abcd\s';
   ?column?
 ----------
   f
 (1 row)
 
When using E-syntax you need to double the backslash for escaping:
 
andreak@[local]:5433 10.4 andreak=# select 'abcd'||chr(8198) ~ E'abcd\\s';
 ┌──────────┐
 │ ?column? │
 ├──────────┤
 │ t        │
 └──────────┘
 (1 row)
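 
For reference, these spell the same pattern (a sketch; assumes standard_conforming_strings = on, the default since 9.1):

select 'abcd' || chr(8198) ~ 'abcd\s';    -- standard string: the backslash is kept as-is
select 'abcd' || chr(8198) ~ E'abcd\\s';  -- escape string: \\ collapses to a single \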
  
 
-- Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com
www.visena.com


 


How to get shorter SERVER_VERSION in psql-prompt?

2018-09-03 Thread Andreas Joseph Krogh
Hi all, I'm using the Ubuntu-packages and have this in my .psqlrc:
 
\set PROMPT1 '%[%033[35m%]%n@%m:%>%[%033[0m%] %:server_version: 
%[%033[32m%]%/%[%033[0m%]%R%# '
  
This results in this server_version:
10.5 (Ubuntu 10.5-1.pgdg18.04+1)
  
Is it possible to adjust this somehow so it outputs only "10.5"?
 
Thanks in advance.
 
-- Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com
www.visena.com




Sv: Re: How to get shorter SERVER_VERSION in psql-prompt?

2018-09-03 Thread Andreas Joseph Krogh
On Monday 3 September 2018 at 23:03:10, Alvaro Herrera <alvhe...@2ndquadrant.com> wrote:
On 2018-Sep-03, Andreas Joseph Krogh wrote:

 > Hi all, I'm using the Ubuntu-packages and have this in my .psqlrc:
 >  
 > \set PROMPT1 '%[%033[35m%]%n@%m:%>%[%033[0m%] %:server_version:
 > %[%033[32m%]%/%[%033[0m%]%R%# '
 >   
 > This results in this server_version:
 > 10.5 (Ubuntu 10.5-1.pgdg18.04+1)
 >   
 > Is it possible to adjust this somehow so it outputs only "10.5"?

 Well, where does that server_version come from?  Try adding this
 somewhere to .psqlrc:

 select setting as server_version from pg_settings where name = 
'server_version' \gset
 
select setting as server_version from pg_settings where name = 'server_version';
 ┌──────────────────────────────────┐
 │          server_version          │
 ├──────────────────────────────────┤
 │ 10.5 (Ubuntu 10.5-1.pgdg18.04+1) │
 └──────────────────────────────────┘
 (1 row)
  
I use packages from http://apt.postgresql.org/pub/repos/apt/
 
-- Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com
www.visena.com


 


Sv: Re: How to get shorter SERVER_VERSION in psql-prompt?

2018-09-03 Thread Andreas Joseph Krogh
On Monday 3 September 2018 at 23:34:48, Christoph Moench-Tegeder <c...@burggraben.net> wrote:
## Andreas Joseph Krogh (andr...@visena.com):

 > This results in this server_version:
 > 10.5 (Ubuntu 10.5-1.pgdg18.04+1)
 >   
 > Is it possible to adjust this somehow so it outputs only "10.5"?

 On Debian/Ubuntu, all version strings are somewhat extended.
 Luckily, with the power of SQL we're not completely helpless, so try
 this in your .psqlrc (somewhat simpler than your prompt, but you
 get the idea):

 select substring(current_setting('server_version') from '#"[^ ]+#"( +%)?' for 
'#') as short_server_ver\gset
 \set PROMPT1 '%/ %:short_server_ver: %R%# '

 Yes, that works across \c.
 
Nice, thanks!
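 
For reference, combining Christoph's snippet with the colored prompt from the original post might look like this (an untested sketch):

select substring(current_setting('server_version') from '#"[^ ]+#"( +%)?' for '#') as short_server_ver \gset
\set PROMPT1 '%[%033[35m%]%n@%m:%>%[%033[0m%] %:short_server_ver: %[%033[32m%]%/%[%033[0m%]%R%# '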
 
-- Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com
www.visena.com


 


Sv: Re: Sv: Re: How to get shorter SERVER_VERSION in psql-prompt?

2018-09-03 Thread Andreas Joseph Krogh
On Monday 3 September 2018 at 23:43:46, Rob Sargent <robjsarg...@gmail.com> wrote:


 On 09/03/2018 03:42 PM, Alvaro Herrera wrote:
 > On 2018-Sep-03, Andreas Joseph Krogh wrote:
 >
 >> select setting as server_version from pg_settings where name =
 >> 'server_version';
 >>  ┌──────────────────────────────────┐
 >>  │          server_version          │
 >>  ├──────────────────────────────────┤
 >>  │ 10.5 (Ubuntu 10.5-1.pgdg18.04+1) │
 >>  └──────────────────────────────────┘
 > Ugh.  (So this is coming from "configure --with-extra-version" stuff)
 >
 > I guess you could just split it out at the first whitespace ...
 >
 Does that also diddle the value of "server_version_num"?
 
No:

show server_version_num;
 ┌────────────────────┐
 │ server_version_num │
 ├────────────────────┤
 │             100005 │
 └────────────────────┘
 (1 row)
  
 
-- Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com
www.visena.com


 


Sv: Two Postgres master process are showing - one is on and off

2019-08-20 Thread Andreas Joseph Krogh
On Tuesday 20 August 2019 at 22:32:25, chiru r <chir...@gmail.com> wrote:
Hi All,

I have observed one of our PostgreSQL DB instances showing two postgres processes on the Linux server, as highlighted. The second postgres process is on and off.
We did not find any references in logs.

[...]

postgres 33438 1 0 12:41 ? 00:00:03 /u01/postgres/9.5/bin/postgres -D /u02/pgdata01/9.5/data

[...]

postgres 110181 33438 0 15:30 ? 00:00:00 /u01/postgres/9.5/bin/postgres -D /u02/pgdata01/9.5/data

Strange, the second is a child of the first...

--
 Andreas Joseph Krogh

Sv: Uninstall postgres

2019-09-05 Thread Andreas Joseph Krogh
On Thursday 5 September 2019 at 10:53:01, Sonam Sharma <sonams1...@gmail.com> wrote:
I have installed postgres with the source code option using configure --prefix and then make install. Can someone please help in uninstalling this. How to uninstall the postgres now. The installation was done as postgres user.

make uninstall

-- Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com
www.visena.com

Re: Uninstall postgres

2019-09-05 Thread Andreas Joseph Krogh
On Thursday 5 September 2019 at 11:01:25, Sonam Sharma <sonams1...@gmail.com> wrote:
It's saying gmake *** No rule to make Target uninstall.

On Thu, Sep 5, 2019, 2:27 PM Andreas Joseph Krogh <andr...@visena.com> wrote:
On Thursday 5 September 2019 at 10:53:01, Sonam Sharma <sonams1...@gmail.com> wrote:
I have installed postgres with the source code option using configure --prefix and then make install. Can someone please help in uninstalling this. How to uninstall the postgres now. The installation was done as postgres user.

make uninstall

Strange – works for me:

[andreak@spaceballs-one] ~/dev/postgresql (REL_12_STABLE)
 $ make uninstall
 make -C doc uninstall
 make[1]: Entering directory '/home/andreak/dev/postgresql/doc'
 make -C src uninstall
 make[2]: Entering directory '/home/andreak/dev/postgresql/doc/src'
 make -C sgml uninstall
 ...
 ...

-- Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com
www.visena.com

Sv: Primary Key Update issue ?

2019-09-06 Thread Andreas Joseph Krogh
On Friday 6 September 2019 at 11:06:04, Patrick FICHE <patrick.fi...@aqsacom.com> wrote:
Hello,

While doing some testing on a Postgresql database, I encountered a strange behavior which is very simple to reproduce.
I just wanted to know if this is expected behavior or if it should be considered as an issue.

The scenario to reproduce it is the following:

CREATE TABLE Test ( pKey integer, Val integer, CONSTRAINT PK_Test PRIMARY KEY( pKey ) );

INSERT INTO Test( pKey, Val ) VALUES ( 1, 1 );
INSERT INTO Test( pKey, Val ) VALUES ( 2, 2 );

UPDATE Test SET pKey = pKey + 1;

Here is the error that I get:

SQL Error [23505]: ERROR: duplicate key value violates unique constraint "pk_test"
 Detail: Key (pkey)=(2) already exists.

I was expecting pKey to be incremented for each row, which would still respect the unique constraint...

I'm currently using PostgreSQL 11.5 but have the same problem on a PostgreSQL 10.3 server.

Best Regards,
It works if you add "DEFERRABLE INITIALLY DEFERRED" to the PK:

CREATE TABLE Test ( pKey integer, Val integer, CONSTRAINT PK_Test PRIMARY KEY( pKey ) DEFERRABLE INITIALLY DEFERRED );

andreak@[local]:5432 11.5 test=# CREATE TABLE Test ( pKey integer, Val integer, CONSTRAINT PK_Test PRIMARY KEY( pKey ) DEFERRABLE INITIALLY DEFERRED );
 CREATE TABLE
andreak@[local]:5432 11.5 test=# INSERT INTO Test( pKey, Val ) VALUES ( 1, 1 );
 INSERT 0 1
andreak@[local]:5432 11.5 test=# INSERT INTO Test( pKey, Val ) VALUES ( 2, 2 );
 INSERT 0 1
andreak@[local]:5432 11.5 test=# UPDATE Test SET pKey = pKey + 1;
 UPDATE 2
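 
With the constraint declared DEFERRABLE, the uniqueness check can also be deferred per transaction instead of via INITIALLY DEFERRED; a minimal sketch:

BEGIN;
SET CONSTRAINTS pk_test DEFERRED;  -- only allowed for DEFERRABLE constraints
UPDATE Test SET pKey = pKey + 1;   -- uniqueness is now checked at COMMIT
COMMIT;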
-- Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com
www.visena.com

RE: Primary Key Update issue ?

2019-09-06 Thread Andreas Joseph Krogh
On Friday 6 September 2019 at 11:25:36, Patrick FICHE <patrick.fi...@aqsacom.com> wrote:
Hi Andreas,

Thanks a lot for your answer, which solves this case.
I was still a bit surprised, as this is linked to transaction management while I have a single statement here, until I saw the Compatibility remark in the documentation: "Also, PostgreSQL checks non-deferrable uniqueness constraints immediately, not at end of statement as the standard would suggest."

FWIW - PostgreSQL behaves like Oracle in this regard.

-- Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com
www.visena.com

Re: pg full text search very slow for Chinese characters

2019-09-10 Thread Andreas Joseph Krogh
On Tuesday 10 September 2019 at 18:21:45, Tom Lane <t...@sss.pgh.pa.us> wrote:
Jimmy Huang writes:
 > I tried pg_trgm and my own customized token parser https://github.com/huangjimmy/pg_cjk_parser

 pg_trgm is going to be fairly useless for indexing text that's mostly
 multibyte characters, since its unit of indexable data is just 3 bytes
 (not characters). I don't know of any comparable issue in the core
 tsvector logic, though. The numbers you're quoting do sound quite awful,
 but I share Cory's suspicion that it's something about your setup rather
 than an inherent Postgres issue.

 regards, tom lane

We experienced quite awful performance when we hosted the DB on virtual servers (~5 years ago), and it turned out we hit the write-cache limit (then 8GB), which resulted in ~1MB/s IO throughput. Running iozone might help tracing down IO problems.

--
 Andreas Joseph Krogh

Re: PostgreSQL License

2019-09-17 Thread Andreas Joseph Krogh
On Tuesday 17 September 2019 at 22:55:02, Peter J. Holzer <hjp-pg...@hjp.at> wrote:
On 2019-09-17 14:56:30 +0300, Ashkar Dev wrote:
 > but it is saying (without fee)
 > if I create a database with it to work with Web Application if want to sell it
 > so the buyer must have the PostgreSQL installed in his device to work offline
 > right?
 > "Permission to use, copy, modify, and distribute this software and its
 > documentation for any purpose, without fee, and without a written agreement is
 > hereby granted, provided that the above copyright notice and this paragraph and
 > the following two paragraphs appear in all copies."

 This means that you don't have to pay a fee or sign a written agreement
 to use, copy, modify, and distribute this software and its documentation
 for any purpose. It doesn't say that you can't charge a fee for
 distributing (although why anybody would pay you for something they can
 download themselves for free I don't know).

 hp

A rule of thumb is: you can do anything you want with it (the PG software inc. its source), except claim you wrote it, as long as you preserve the original license file(s).

-- Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com
www.visena.com

Re: PostgreSQL License

2019-09-17 Thread Andreas Joseph Krogh
On Wednesday 18 September 2019 at 01:07:41, Rob Sargent <robjsarg...@gmail.com> wrote:
On Sep 17, 2019, at 4:18 PM, Andreas Joseph Krogh <andr...@visena.com> wrote:
On Tuesday 17 September 2019 at 22:55:02, Peter J. Holzer <hjp-pg...@hjp.at> wrote:
On 2019-09-17 14:56:30 +0300, Ashkar Dev wrote:
 > but it is saying (without fee)
 > if I create a database with it to work with Web Application if want to sell it
 > so the buyer must have the PostgreSQL installed in his device to work offline
 > right?
 > "Permission to use, copy, modify, and distribute this software and its
 > documentation for any purpose, without fee, and without a written agreement is
 > hereby granted, provided that the above copyright notice and this paragraph and
 > the following two paragraphs appear in all copies."

 This means that you don't have to pay a fee or sign a written agreement
 to use, copy, modify, and distribute this software and its documentation
 for any purpose. It doesn't say that you can't charge a fee for
 distributing (although why anybody would pay you for something they can
 download themselves for free I don't know).

 hp

A rule of thumb is: you can do anything you want with it (the PG software inc. its source), except claim you wrote it, as long as you preserve the original license file(s).

I take it that the OP has an app/dataset on top of PG he/she wishes to market (and protect). Perfectly legit, no? Not clear if there is a desire to disable direct db access.

That seems perfectly legit. I'm not sure what "to work offline" means, but using PG for whatever commercial purposes is totally fine, given the license requirement above.

-- Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com
www.visena.com

Re: Installed PostgreSQL-11 in Ubuntu 18.04.02 Server Edition: No existing local cluster is suitable as a default target

2019-09-18 Thread Andreas Joseph Krogh
On Wednesday 18 September 2019 at 12:13:24, Marco Ippolito <ippolito.ma...@gmail.com> wrote:
Thanks Matthias. Followed these steps (indicated here: https://www.postgresql.org/docs/11/creating-cluster.html ):

root# chown postgres /usr/local/pgsql
root# su postgres
postgres$ initdb -D /usr/local/pgsql/data

postgres@pc:/home/marco$ /usr/lib/postgresql/11/bin/initdb -D /usr/local/pgsql/data
 The files belonging to this database system will be owned by user "postgres".
 This user must also own the server process.

 The database cluster will be initialized with locales
 COLLATE: en_GB.UTF-8
 CTYPE: en_GB.UTF-8
 MESSAGES: en_GB.UTF-8
 MONETARY: C.UTF-8
 NUMERIC: C.UTF-8
 TIME: C.UTF-8
 The default database encoding has accordingly been set to "UTF8".
 The default text search configuration will be set to "english".

 Data page checksums are disabled.

 creating directory /usr/local/pgsql/data ... ok
 creating subdirectories ... ok
 selecting default max_connections ... 100
 selecting default shared_buffers ... 128MB
 selecting default timezone ... Europe/Rome
 selecting dynamic shared memory implementation ... posix
 creating configuration files ... ok
 running bootstrap script ... ok
 performing post-bootstrap initialization ... ok
 syncing data to disk ... ok

 WARNING: enabling "trust" authentication for local connections
 You can change this by editing pg_hba.conf or using the option -A, or
 --auth-local and --auth-host, the next time you run initdb.

 Success. You can now start the database server using:

 /usr/lib/postgresql/11/bin/pg_ctl -D /usr/local/pgsql/data -l logfile start

But now permission denied:

postgres@pc:/home/marco$ sudo /usr/lib/postgresql/11/bin/pg_ctl -D /usr/local/pgsql/data -l logfile start
 [sudo] password for postgres:
 postgres is not in the sudoers file. This incident will be reported.
 postgres@pc:/home/marco$ /usr/lib/postgresql/11/bin/pg_ctl -D /usr/local/pgsql/data -l logfile start
 waiting for server to start..../bin/sh: 1: cannot create logfile: Permission denied
 stopped waiting
 pg_ctl: could not start server
 Examine the log output.
 postgres@pc:/home/marco$

Start pg_ctl as the postgres user, no need to sudo.

-- Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com
www.visena.com

Re: Installed PostgreSQL-11 in Ubuntu 18.04.02 Server Edition: No existing local cluster is suitable as a default target

2019-09-18 Thread Andreas Joseph Krogh
On Wednesday 18 September 2019 at 12:25:05, Marco Ippolito <ippolito.ma...@gmail.com> wrote:
Hi Andreas, if I understand correctly, this is what I've done afterwards:

postgres@pc:/home/marco$ /usr/lib/postgresql/11/bin/pg_ctl -D /usr/local/pgsql/data -l logfile start
 waiting for server to start..../bin/sh: 1: cannot create logfile: Permission denied
 stopped waiting
 pg_ctl: could not start server
 Examine the log output.
 postgres@pc:/home/marco$

What am I doing wrong?

You don't have permissions to create the logfile (named "logfile" in your command) in the CWD (/home/marco). Specify an absolute path to somewhere writable for user "postgres".
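 
Something like this should then work (a sketch; any directory writable by the "postgres" user will do, e.g. the data directory itself):

postgres@pc:~$ /usr/lib/postgresql/11/bin/pg_ctl -D /usr/local/pgsql/data -l /usr/local/pgsql/data/logfile start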
-- Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com
www.visena.com

lc_numeric and negative-prefix

2019-09-25 Thread Andreas Joseph Krogh
Hi. We're having this thread over at https://github.com/impossibl/pgjdbc-ng/issues/420

Can anybody shed some light on when the negative prefix is supposed to be respected by PG's formatting functions? In lc_numeric='nb_NO.UTF-8' the negative prefix is '−' (8722), not '-' (45), at least in the Java world, but it seems the JDBC driver must use lc_numeric='C' when parsing server output.

-- Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com
www.visena.com

Re: lc_numeric and negative-prefix

2019-09-25 Thread Andreas Joseph Krogh
On Thursday 26 September 2019 at 00:53:28, Tom Lane <t...@sss.pgh.pa.us> wrote:
Andreas Joseph Krogh writes:
 > Can anybody shed some light
 > on when the negative prefix is supposed to be respected by PG's
 > formatting functions? In lc_numeric='nb_NO.UTF-8' the negative prefix is '−' (8722),
 > not '-' (45), at least in the Java world, but it seems the JDBC driver must use
 > lc_numeric='C' when parsing server output.

 PG does not consider LC_NUMERIC at all when producing output from
 the standard numeric data types (and we aren't going to start).
 AFAIR the only functions that do pay attention to LC_NUMERIC are
 to_char() and friends.

 regards, tom lane

Thanks for clarifying.
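 
For completeness, to_char() is where LC_NUMERIC does show up, via the locale-aware D and G template patterns (a sketch; the output depends on which locales are installed on the server):

SET lc_numeric = 'nb_NO.UTF-8';
SELECT to_char(-1234.5, 'S9G999D9');  -- D and G use the locale's decimal point and group separator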
-- Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com
www.visena.com

Logical replication from standby

2019-09-26 Thread Andreas Joseph Krogh
Hi. Will the feature described here (Minimal logical decoding on standbys): 
https://www.postgresql.org/message-id/flat/20181212204154.nsxf3gzqv3gesl32%40alap3.anarazel.de
 
<https://www.postgresql.org/message-id/flat/20181212204154.nsxf3gzqv3gesl32%40alap3.anarazel.de>
make it possible to do logical replication from standby like I'm looking for in 
this thread:
https://www.postgresql.org/message-id/VisenaEmail.15.9f118cec79ac2589.1621cfd8405%40tc7-visena
 
<https://www.postgresql.org/message-id/VisenaEmail.15.9f118cec79ac2589.1621cfd8405%40tc7-visena>
 ? Thanks. -- Andreas Joseph Krogh CTO / Partner - Visena AS Mobile: +47 909 56 
963 andr...@visena.com <mailto:andr...@visena.com> www.visena.com 
<https://www.visena.com>  <https://www.visena.com> 

Segmentation fault with PG-12

2019-10-08 Thread Andreas Joseph Krogh
In our production environment we get sig11 every now and then after upgrading to PG-12:

2019-10-08 15:45:29.654 CEST [8829-76] LOG: server process (PID 20631) was terminated by signal 11: Segmentation fault
 2019-10-08 15:45:29.654 CEST [8829-77] DETAIL: Failed process was running: COMMIT
 2019-10-08 15:45:29.654 CEST [8829-78] LOG: terminating any other active server processes

Will running a debug-enabled build slow things noticeably down? Is there a way to make it dump a stack trace (or back-trace in C-land?) on sig11?

-- Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com
www.visena.com

Re: Segmentation fault with PG-12

2019-10-08 Thread Andreas Joseph Krogh
On Tuesday 8 October 2019 at 17:24:21, Tom Lane <t...@sss.pgh.pa.us> wrote:
Andreas Joseph Krogh writes:
 > Will running a debug-enabled build slow things noticeably down?

 gcc promises that the generated code is the same with or without debug.
 I think clang does too. With other compilers you may pay some penalty.

Nice, I'm using the ubuntu-packages, so I'll go ahead and install postgresql-12-dbgsym

 > Is there a way
 > to make it dump a stack-trace (or back-trace in C-land?) on sig11?

 You should be able to get a core file from which you can extract a
 stack trace (and other info) after the fact.

 https://wiki.postgresql.org/wiki/Generating_a_stack_trace_of_a_PostgreSQL_backend
I'll look into that, thanks.
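 
For the record, the procedure from that wiki page boils down to something like this (a sketch; the actual core file location depends on the system's core_pattern):

$ gdb /usr/lib/postgresql/12/bin/postgres /path/to/core
(gdb) bt full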
-- Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com
www.visena.com

Re: Segmentation fault with PG-12

2019-10-09 Thread Andreas Joseph Krogh
On Tuesday 8 October 2019 at 17:24:21, Tom Lane <t...@sss.pgh.pa.us> wrote:
Andreas Joseph Krogh writes:
 > Will running a debug-enabled build slow things noticeably down?

 gcc promises that the generated code is the same with or without debug.
 I think clang does too. With other compilers you may pay some penalty.

 > Is there a way
 > to make it dump a stack-trace (or back-trace in C-land?) on sig11?

 You should be able to get a core file from which you can extract a
 stack trace (and other info) after the fact.

 https://wiki.postgresql.org/wiki/Generating_a_stack_trace_of_a_PostgreSQL_backend
 regards, tom lane

Attached is output from "bt full". Is this helpful? Anything else I can do to help narrow down the problem? Thanks.

-- Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com
www.visena.com

#0  slot_deform_heap_tuple (natts=24, offp=0x5598ec873d90, tuple=<optimized out>, slot=0x5598ec873d48)
    at ./build/../src/backend/executor/execTuples.c:895
        isnull = 0x5598ec8733e8
        tup = <optimized out>
        bp = <optimized out>
        tupleDesc = <optimized out>
        values = 0x5598ec875de8
        off = <optimized out>
        slow = <optimized out>
        hasnulls = <optimized out>
        attnum = <optimized out>
        tp = <optimized out>
        tupleDesc = <optimized out>
        values = <optimized out>
        isnull = <optimized out>
        tup = <optimized out>
        hasnulls = <optimized out>
        attnum = <optimized out>
        tp = <optimized out>
        off = <optimized out>
        bp = <optimized out>
        slow = <optimized out>
        thisatt = <optimized out>
#1  tts_buffer_heap_getsomeattrs (slot=0x5598ec873d48, natts=24)
    at ./build/../src/backend/executor/execTuples.c:676
        bslot = 0x5598ec873d48
#2  0x5598e94534ac in slot_getsomeattrs_int (slot=slot@entry=0x5598ec873d48, attnum=24)
    at ./build/../src/backend/executor/execTuples.c:1877
        __errno_location = <optimized out>
#3  0x5598e94443f1 in slot_getsomeattrs (attnum=<optimized out>, slot=0x5598ec873d48)
    at ./build/../src/include/executor/tuptable.h:345
No locals.
#4  ExecInterpExpr (state=0x5598ec8776b8, econtext=0x5598ec876ea8, isnull=<optimized out>)
    at ./build/../src/backend/executor/execExprInterp.c:441
        op = <optimized out>
        resultslot = 0x0
        innerslot = <optimized out>
        outerslot = <optimized out>
        scanslot = 0x0
        dispatch_table = {0x5598e9443608, 0x5598e94443f8, 0x5598e94443d0, ..., 0x5598e94436e0, 0x5598e9443608}
          (ExecInterpExpr's opcode dispatch table; the per-entry symbol names were lost in archiving)
#5  0x5598e942326b in ExecEvalExprSwitchContext (isNull=0x7ffdf2aae7a7, econtext=<optimized out>, state=<optimized out>)
    at ./build/../src/include/executor/executor.h:307
        retDatum = <optimized out>
        oldContext = <optimized out>
#6  ExecQual (econtext=<optimized out>, state=<optimized out>)
    at ./build/../src/include/executor/executor.h:376
        ret = <optimized out>
        isnull = false
#7  TriggerEnabled (estate=estate@entry=0x5598eaeabdd0, trigger=trigger@entry=0x5598eaeac828, event=<optimized out>,
    modifiedCols=modifiedCols@entry=0x5598eb7d34b8, oldslot=0x5598ec876b88, newslot=0x5598ec873d48,
    relinfo=<optimized out>) at ./build/../src/backend/commands/trigger.c:3516
        predicate = <optimized out>
        econtext = <optimized out>
        oldContext = <optimized out>
        i = <optimized out>
        oldslot = 0x5598ec876b88
        modifiedCols = 0x5598eb7d34b8
        estate = 0x5598eaeabdd0

Re: Segmentation fault with PG-12

2019-10-09 Thread Andreas Joseph Krogh
On Wednesday 9 October 2019 at 16:16:37, Tom Lane <t...@sss.pgh.pa.us> wrote:
Andreas Joseph Krogh writes:
 > Attached is output from "bt full". Is this helpful?

 Well, it shows that the failure is occurring while trying to evaluate
 a variable in a trigger's WHEN clause during
 "UPDATE origo_email_delivery SET folder_id=$1, created=$2\nWHERE entity_id IN ($3)\nRETURNING entity_id"
 And I'd bet that the root cause is something to do with Andres' tuple slot
 work. But (at least to my eye) it's not apparent exactly what's wrong.

 Can you show us the table definition and associated trigger definitions
 for origo_email_delivery?

 This doesn't seem to correlate with your original report, btw,
 as that claimed the crash was during COMMIT.

 regards, tom lane

FWIW: It doesn't always happen when that UPDATE statement is issued, so it's not reproducible. We'll see what the next core dump gives us.
Is it OK if I send you the table/trigger definitions off-list?

-- Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com
www.visena.com

Re: Segmentation fault with PG-12

2019-10-10 Thread Andreas Joseph Krogh
On Thursday 10 October 2019 at 07:25:26, Andres Freund <and...@anarazel.de> wrote:
On 2019-10-09 10:16:37 -0400, Tom Lane wrote:
 > Andreas Joseph Krogh writes:
 > > Attached is output from "bt full". Is this helpful?
 >
 > Well, it shows that the failure is occurring while trying to evaluate
 > a variable in a trigger's WHEN clause during
 > "UPDATE origo_email_delivery SET folder_id=$1, created=$2\nWHERE entity_id IN ($3)\nRETURNING entity_id"
 > And I'd bet that the root cause is something to do with Andres' tuple slot
 > work. But (at least to my eye) it's not apparent exactly what's wrong.

 It looks like this could "just" be another report of #16036, which was
 already fixed in:

 commit d986d4e87f61c68f52c68ebc274960dc664b7b4e
 Author: Andres Freund
 Date: 2019-10-04 11:59:34 -0700

 Fix crash caused by EPQ happening with a before update trigger present.

(Tom: This mail is only viewable as text/html, so if you're reading the text/plain version it will seem "hashed")

Aha, that would be 60e97d63e5d19098e11fa32431a20eea820e2ae9 in REL_12_STABLE.
We'll build and run HEAD of REL_12_STABLE, and report back.

 > This doesn't seem to correlate with your original report, btw,
 > as that claimed the crash was during COMMIT.

 That however, would be confusing, unless there's some deferred trigger
 that causes another update, which then fires a before update trigger
 causing the problem.

 Greetings,

 Andres Freund

We have a deferred trigger which updates origo_email_delivery:
CREATE OR REPLACE FUNCTION origo_index_email_props_tf() RETURNS TRIGGER AS $$
DECLARE v_prop origo_email_message_property;
BEGIN
  v_prop := NEW;
  UPDATE origo_email_delivery
    SET is_seen = v_prop.is_seen, followup_id = v_prop.followup_id,
        is_replied = v_prop.is_replied, is_forwarded = v_prop.is_forwarded,
        is_draft = v_prop.is_draft, is_done = v_prop.is_done,
        is_flagged = v_prop.is_flagged, modseq = greatest(modseq, v_prop.modseq)
    WHERE message_id = v_prop.message_id AND owner_id = v_prop.owner_id;
  RETURN NEW;
END; $$ LANGUAGE plpgsql;

CREATE CONSTRAINT TRIGGER origo_index_email_props_t
  AFTER INSERT OR UPDATE ON origo_email_message_property
  DEFERRABLE INITIALLY DEFERRED FOR EACH ROW
  EXECUTE PROCEDURE origo_index_email_props_tf();

.. and then trigger the following UPDATE-trigger:

CREATE TRIGGER origo_email_delivery_update_t
  BEFORE UPDATE ON origo_email_delivery FOR EACH ROW
  WHEN (OLD.folder_id <> NEW.folder_id OR NEW.is_deleted <> OLD.is_deleted)
  EXECUTE PROCEDURE origo_email_delivery_update_tf();

Maybe that will trigger the bug. Thanks.

-- Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com
www.visena.com

Re: Segmentation fault with PG-12

2019-10-10 Thread Andreas Joseph Krogh
We had another crash today, and it appears to be the same:

#0  slot_deform_heap_tuple (natts=26, offp=0x5598eba0b968, tuple=<optimized out>, slot=0x5598eba0b920)
    at ./build/../src/backend/executor/execTuples.c:895

-- Andreas Joseph Krogh
Re: Segmentation fault with PG-12

2019-10-10 Thread Andreas Joseph Krogh

On Thursday 10 October 2019 at 21:32:38, Tom Lane <t...@sss.pgh.pa.us> wrote:
Andres Freund writes:
 > On 2019-10-09 10:16:37 -0400, Tom Lane wrote:
 >> Well, it shows that the failure is occurring while trying to evaluate
 >> a variable in a trigger's WHEN clause during
 >> "UPDATE origo_email_delivery SET folder_id=$1, created=$2\nWHERE entity_id 
IN ($3)\nRETURNING entity_id"
 >> And I'd bet that the root cause is something to do with Andres' tuple slot
 >> work. But (at least to my eye) it's not apparent exactly what's wrong.

 > It looks like this could "just" be another report of #16036, which was
 > already fixed in:
 > commit d986d4e87f61c68f52c68ebc274960dc664b7b4e
 > Author: Andres Freund 
 > Date: 2019-10-04 11:59:34 -0700
 > Fix crash caused by EPQ happening with a before update trigger present.

 Bingo. I can reproduce the crash (using concurrent updates of the same
 table row, in the schema Andreas sent off-list) on the predecessor of
 that commit, but on that commit it's fine. 


That's great! 


Andreas, that's a pretty simple patch if you're in a position to
 build from source ...

 regards, tom lane 


Yes, we've built a new .deb-package from 
f224c7c11ea7be2751e3342e11317070ffb5622d in REL_12_STABLE which we'll deploy 
tonight. 
Thanks! 


-- 
Andreas Joseph Krogh 


Re: Segmentation fault with PG-12

2019-10-12 Thread Andreas Joseph Krogh

On Thursday 10 October 2019 at 22:21:13, Andres Freund <and...@anarazel.de> wrote:
On 2019-10-10 15:32:38 -0400, Tom Lane wrote:
 > Andres Freund  writes:
 > > On 2019-10-09 10:16:37 -0400, Tom Lane wrote:
 > >> Well, it shows that the failure is occurring while trying to evaluate
 > >> a variable in a trigger's WHEN clause during
 > >> "UPDATE origo_email_delivery SET folder_id=$1, created=$2\nWHERE 
entity_id IN ($3)\nRETURNING entity_id"
 > >> And I'd bet that the root cause is something to do with Andres' tuple 
slot
 > >> work. But (at least to my eye) it's not apparent exactly what's wrong.
 >
 > > It looks like this could "just" be another report of #16036, which was
 > > already fixed in:
 > > commit d986d4e87f61c68f52c68ebc274960dc664b7b4e
 > > Author: Andres Freund 
 > > Date: 2019-10-04 11:59:34 -0700
 > > Fix crash caused by EPQ happening with a before update trigger present.
 >
 > Bingo. I can reproduce the crash (using concurrent updates of the same
 > table row, in the schema Andreas sent off-list) on the predecessor of
 > that commit, but on that commit it's fine.

 Cool, and thanks for checking. 


No crashes in production after deploying the fix. 


-- 
Andreas Joseph Krogh 


Sv: Conflict between autovacuum and backup restoration

2019-10-17 Thread Andreas Joseph Krogh

On Thursday 17 October 2019 at 15:48:09, Ekaterina Amez <ekaterina.a...@zunibal.com> wrote:
Good afternoon,

 We've finally made the migration+upgrade from old server with v7.14 to
 new server with v8.4 and, before test and plan an upgrade to v9.6, I'm
 checking logs to find out if there's any problem with this upgrade.
 We've been fixing things and in only remains one problem in the log that
 I don't understand and not sure how to fix, or if it has to be fixed.
 Original logs are not in english so I'm posting my translation.

 In the morning we're making a backup of -let's call it- the main
 database. To avoid intensive use of this main database, we have a second
 db that's a copy of the main one, used only to display data to some
 users that don't need it up to date and that shouldn't change anything.
 So at night we're restoring the morning backup from main database into
 this second one. The upgrade that I've mentioned has ended with both,
 main and second databases, being in the same server. The "problem"
 (because I'm not sure if it's really a problem) is that while the backup
 is restoring in the second database, it seems like autovacuum is
 launched and conflicts with this db restore. The log is this:

 [...] Several messages about checkpoints running too frequently [...]
 2019-10-16 23:01:30.904 CEST - [162851] - user@[local]:secondDB - LOG: 
 duration: 7446.139 ms sentence: COPY one_table (some_columns) FROM stdin;
 2019-10-16 23:01:37.457 CEST - [13750] LOG: checkpoints are running too
 frequently ( 9 seconds)
 2019-10-16 23:01:37.457 CEST - [13750] HINT: Consider changing
 «checkpoint_segments» configuration parameter.
 2019-10-16 23:01:58.663 CEST - [162851] - user@[local]:secondDB - LOG: 
 duration: 6492.426 ms sentence: CREATE INDEX another_table_index1 ON
 another_table USING btree (another_field1);
 2019-10-16 23:02:04.042 CEST - [162851] - user@[local]:secondDB - LOG: 
 duration: 5378.886 ms sentence: CREATE INDEX another_table_index2 ON
 another_table USING btree (another_field2);
 2019-10-16 23:02:11.742 CEST - [162851] - user@[local]:secondDB - LOG: 
 duration: 7699.246 ms sentence: CREATE INDEX another_table_index3 ON
 another_table USING btree (another_field3);
 2019-10-16 23:02:12.742 CEST - [162851] - user@[local]:secondDB - LOG: 
 sending cancel signal to blocking autovacuum with PID 162869
 2019-10-16 23:02:12.742 CEST - [162851] - user@[local]:secondDB -
 DETAIL: Process 162851 is waiting for ShareLock on relation
 3880125365 for database 3880125112.
 2019-10-16 23:02:12.743 CEST - [162851] - user@[local]:secondDB -
 SENTENCE: CREATE INDEX another_table_index4 ON another_table USING
 btree (another_field4);
 2019-10-16 23:02:12.743 CEST - [162869] ERROR: cancelling autovacuum task
 2019-10-16 23:02:12.743 CEST - [162869] CONTEXT: automatic analyze of
 «secondDB.public.another_table»
 2019-10-16 23:02:20.899 CEST - [162851] - user@[local]:secondDB - LOG: 
 duration: 9157.371 ms sentence: CREATE INDEX another_table_index4 ON
 another_table USING btree (another_field4);

 After upgrade main db to the new server, I've tuned following parameters
 in postgresql.conf (using pgtune recommendations)

 max_connections = 200
 shared_buffers = 8GB
 work_mem = 2621kB
 maintenance_work_mem = 2GB
 effective_io_concurrency = 2
 wal_buffers = 8MB
 checkpoint_segments = 32
 checkpoint_completion_target = 0.7
 effective_cache_size = 16GB
 log_min_duration_statement = 5000
 log_line_prefix = '%m - [%p] %q- %u@%h:%d - %a '
 standard_conforming_strings = on


 I've been looking for the problem with checkpoints and I've decided to
 let it be, because these messages only appear when we make the second db
 restore. The rest of the log is clean from checkpoint messages.

 But I don't understand why I'm getting those messages about autovacuum
 blocking db restore process. I guess that after one table is created
 with COPY sentence, as many rows have been inserted, autoanalyze process
 runs to gather statistics for the Execution Planner. But why is
 happening this block? Is autoanalyze running before the table gets fully
 loaded? Is this really a problem? If so, how can I handle it? This task
 is running at night, when nobody is using second database.


 Thank you for reading,

 Ekaterina 

It is normal to get these "cancelling autovacuum" messages when restoring a database; just ignore them.
If it bothers you, just turn autovacuum off by setting this in postgresql.conf:

autovacuum = off

and reload the config (SIGHUP).
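 
The reload can also be done from SQL instead of sending SIGHUP (a sketch):

SELECT pg_reload_conf();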



--
 Andreas Joseph Krogh

Sv: drop database

2019-10-17 Thread Andreas Joseph Krogh

On Thursday 17 October 2019 at 22:44:15, Julie Nishimura <juliez...@hotmail.com> wrote:
Guys, can anybody reply if they drop any postgresql database larger than 1 tb 
and how long did it take? 

Thanks 

About 280ms: 

andreak@[local]:5432 12.0 andreak=# select pg_size_pretty(pg_database_size('rsm'));
 ┌────────────────┐
 │ pg_size_pretty │
 ├────────────────┤
 │ 26 GB          │
 └────────────────┘
 (1 row)

andreak@[local]:5432 12.0 andreak=# \timing
 Timing is on.
andreak@[local]:5432 12.0 andreak=# drop DATABASE rsm;
 DROP DATABASE
 Time: 280,355 ms


-- 
Andreas Joseph Krogh 
CTO / Partner - Visena AS 
Mobile: +47 909 56 963 
andr...@visena.com
www.visena.com


Sv: drop database

2019-10-17 Thread Andreas Joseph Krogh

On Thursday 17 October 2019 at 22:44:15, Julie Nishimura <juliez...@hotmail.com> wrote:
Guys, can anybody reply if they drop any postgresql database larger than 1 tb 
and how long did it take? 

Sorry, I misread your question as 1GB (not TB)...


-- 
Andreas Joseph Krogh 
CTO / Partner - Visena AS 
Mobile: +47 909 56 963 
andr...@visena.com
www.visena.com


Sv: Postgres Point in time Recovery (PITR),

2019-10-18 Thread Andreas Joseph Krogh

On Friday 18 October 2019 at 07:59:21, Daulat Ram <daulat@exponential.com> wrote:

Hello All,

Can you please share some ideas and scenarios how we can do the PITR in case 
of disaster.

We use barman (https://www.pgbarman.org/) for continuous streaming backup, and I had to restore from it once; it went like this:

==8<=== 

$ barman recover --target-time "2018-12-06 12:20:00" --remote-ssh-command "ssh 
andreak@192.168.0.184 <mailto:andreak@192.168.0.184>" db01_11 20181130T190002 
"/home/andreak/barman-restore"
 Processing xlog segments from streaming for db01_11
 00010174002E
 00010174002F
 000101740030
 Starting remote restore for server db01_11 using backup 20181130T190002
 Destination directory: /home/andreak/barman-restore
 Doing PITR. Recovery target time: '2018-12-06 12:20:00+01:00'
 17445, dbname1, /storage/fast_ssd/11/tablespaces/dbname1
 29218, dbname2, /storage/fast_ssd/11/tablespaces/dbname2
 ... 
 29235503, dbnameX, /storage/fast_ssd/11/tablespaces/dbnameX
Copying the base backup.
 Copying required WAL segments.
 Generating recovery.conf
 Identify dangerous settings in destination directory.

 WARNING
 The following configuration files have not been saved during backup, hence 
they have not been restored.
 You need to manually restore them in order to start the recovered PostgreSQL 
instance:

 postgresql.conf
 pg_hba.conf
 pg_ident.conf

 Recovery completed (start time: 2018-12-06 13:14:53.220043, elapsed time: 4 
hours, 52 minutes, 47 seconds)

 Your PostgreSQL server has been successfully prepared for recovery! 
==8<=== 


-- 
Andreas Joseph Krogh 
CTO / Partner - Visena AS 
Mobile: +47 909 56 963 
andr...@visena.com
www.visena.com


Having more than one constraint trigger on a table

2019-10-22 Thread Andreas Joseph Krogh

Hi. 

I have the following schema (question at bottom): 
==
CREATE TABLE company(
  id SERIAL PRIMARY KEY,
  parent_id INTEGER REFERENCES company(id) DEFERRABLE INITIALLY DEFERRED,
  name VARCHAR NOT NULL, duns_number VARCHAR, fts_all tsvector, t_updated BOOLEAN);

CREATE OR REPLACE FUNCTION update_company_fts(p_company_id integer) RETURNS VOID AS $$
BEGIN
  UPDATE company comp
    SET fts_all = to_tsvector('simple', comp.name || ' ' || coalesce(comp.duns_number, ''))
    WHERE comp.id = p_company_id;
  RAISE NOTICE 'Running update of %', p_company_id;
END; $$ LANGUAGE plpgsql;

-- re-index all:
CREATE OR REPLACE FUNCTION index_company() RETURNS VOID AS $$
DECLARE v_company_id INTEGER;
BEGIN
  FOR v_company_id IN (SELECT id FROM company) LOOP
    PERFORM update_company_fts(v_company_id);
  END LOOP;
END; $$ LANGUAGE plpgsql;

CREATE OR REPLACE FUNCTION update_company_fts_tf() RETURNS TRIGGER AS $$
DECLARE v_company_id INTEGER;
BEGIN
  v_company_id := NEW.id;
  PERFORM update_company_fts(v_company_id);
  RETURN NULL;
END; $$ LANGUAGE plpgsql;

-- General cleanup functions for constraint triggers
CREATE OR REPLACE FUNCTION trigger_function_set_updated() RETURNS TRIGGER AS $$
BEGIN
  UPDATE company SET t_updated = TRUE WHERE id = NEW.id;
  RETURN NULL;
END; $$ LANGUAGE plpgsql;

CREATE OR REPLACE FUNCTION trigger_function_clear_updated() RETURNS TRIGGER AS $$
BEGIN
  UPDATE company SET t_updated = NULL WHERE id = NEW.id;
  RETURN NULL;
END; $$ LANGUAGE plpgsql;

CREATE CONSTRAINT TRIGGER trigger_1_update_fts
  AFTER INSERT OR UPDATE OF name, duns_number ON company
  DEFERRABLE INITIALLY DEFERRED FOR EACH ROW
  WHEN (NEW.t_updated IS NULL) EXECUTE PROCEDURE update_company_fts_tf();

CREATE CONSTRAINT TRIGGER trigger_2
  AFTER INSERT OR UPDATE OF name, duns_number, parent_id ON company
  -- NOT DEFERRED
  FOR EACH ROW
  WHEN (NEW.t_updated IS NULL) EXECUTE PROCEDURE trigger_function_set_updated();

CREATE CONSTRAINT TRIGGER trigger_3
  AFTER INSERT OR UPDATE OF t_updated ON company
  DEFERRABLE INITIALLY DEFERRED FOR EACH ROW
  WHEN (NEW.t_updated) EXECUTE PROCEDURE trigger_function_clear_updated();

CREATE OR REPLACE FUNCTION company_parent_no_cycle() RETURNS TRIGGER AS $$
BEGIN
  IF (WITH RECURSIVE tr (id, parent_id, all_ids, cycle) AS (
        SELECT id, parent_id, ARRAY[id], false FROM company tr WHERE id = NEW.id
        UNION ALL
        SELECT t.id, t.parent_id, all_ids || t.id, t.id = ANY (all_ids)
        FROM company t JOIN tr ON t.parent_id = tr.id AND NOT cycle)
      SELECT count(*) FROM tr WHERE cycle = true) > 0
  THEN
    RAISE EXCEPTION 'Cannot have cyclic parent relations for company'
      USING SCHEMA = TG_TABLE_SCHEMA, TABLE = TG_TABLE_NAME, CONSTRAINT = TG_NAME,
            ERRCODE = '23514' /* check_violation */, COLUMN = 'parent_id';
  END IF;
  RETURN NULL;
END; $$ LANGUAGE plpgsql;

CREATE CONSTRAINT TRIGGER trigger_1_check_nocycle
  AFTER INSERT OR UPDATE OF parent_id ON company
  DEFERRABLE INITIALLY DEFERRED FOR EACH ROW
  WHEN (NEW.t_updated IS NULL) EXECUTE PROCEDURE company_parent_no_cycle();
==

What I'm after is to have 2 "logical constraint-triggers" perform logic only 
once (each) on the "company"-table. 
To make constraint-triggers fire only once (in PostgreSQL) a common method is 
to have a schema with 3 triggers, and a "magic" t_updated column, and they must 
be named so they (the triggers, not the trigger-functions) are fired in lexical 
order (alphabetically). And it's important that the 2nd trigger (here 
"trigger_2") is NOT deferred. 

In my schema above I have 2 "logical chunks" which each perform some stuff 
and shall only do it once per row at commit-time. 
The first "main" trigger-function is update_company_fts_tf() and it updates a 
column (fts_all) of type tsvector. This is done in a trigger so that it may add 
stuff (customer-number etc.) from other tables as needed (which is not possible 
with PG-12's new STORED-columns). 
The second "main" trigger-function is company_parent_no_cycle() and assures 
there are no parent/child-cycles. 

Questions: 
1. I have "re-used" trigger_2 and trigger_3, and trigger_2's "...OR UPDATE 
OF"-list is the union of all columns updated (used) in the 2 main triggers, that 
is "name", "duns_number" and "parent_id". trigger_3 only checks t_updated. 
Is this correct usage, can I assume this will work correctly? 
2. If I need a 3rd "logical trigger", is it enough to add another trigger 
named accordingly, for instance "trigger_1_someotherstuff", and add its column 
to the "UPDATE OF"-list of "trigger_2" (if it uses a column not already listed 
there)? 
3. Is there some easier way to do this? 

Is it clear what I'm asking about? :-) 

Thanks. 


--
 Andreas Joseph Krogh 

Re: Having more than one constraint trigger on a table

2019-10-22 Thread Andreas Joseph Krogh

På tirsdag 22. oktober 2019 kl. 17:12:59, skrev Adrian Klaver <
adrian.kla...@aklaver.com <mailto:adrian.kla...@aklaver.com>>: 
[snip]
 No.
 When I sort the triggers I get:

 test=# create table trg_str(fld_1 varchar);
 CREATE TABLE
 test=# insert into trg_str values ('trigger_1_update_fts'),
 ('trigger_2'), ('trigger_3'), ('trigger_1_check_nocycle');
 INSERT 0 4
 test=# select * from trg_test order by fld_1 ;
 id | fld_1
 +---
 (0 rows)

 test=# select * from trg_str order by fld_1 ;
 fld_1
 -
 trigger_1_check_nocycle
 trigger_1_update_fts
 trigger_2
 trigger_3

 Is this how you want them to fire as it does not match what you say above?: 

(I know they were not declared in that order, but..) 
Yes, all "trigger_1_*" are the "actuall triggers triggering the logic", 
trigger_2 and trigger_3 are only there as part of the "make constraint-triggers 
fire only once"-mechanism, in which the function in the first trigger is the 
function performing the actual logic. 
So, being I want 2 "logical chunks" to happen I have two "trigger_1"-triggers 
(there is no established terminilogy for this AFAIK), each calling a function 
performing the logick which is to happen only once (per row). 


"The first "main" trigger-function is update_company_fts_tf() ... The
 second "main" trigger-function is company_parent_no_cycle()"

 It might be easier to understand if you sketch out a schematic version of
 what you are trying to achieve. 

The point is: I want two functions to be called 

- update_company_fts_tf() 
- company_parent_no_cycle() 

, each only once, as constraint-triggers on the same table. So they are called 
by the "level 1 triggers" which must fire first. 

Is it clearer now what I'm trying to achieve? 


--
 Andreas Joseph Krogh

Re: Having more than one constraint trigger on a table

2019-10-24 Thread Andreas Joseph Krogh


På torsdag 24. oktober 2019 kl. 16:59:42, skrev Adrian Klaver <
adrian.kla...@aklaver.com <mailto:adrian.kla...@aklaver.com>>: 
On 10/22/19 8:26 AM, Andreas Joseph Krogh wrote:
 > På tirsdag 22. oktober 2019 kl. 17:12:59, skrev Adrian Klaver
 > mailto:adrian.kla...@aklaver.com>>:
 >
 > [snip]
 > No.
 > When I sort the triggers I get:
 >
 > test=# create table trg_str(fld_1 varchar);
 > CREATE TABLE
 > test=# insert into trg_str values ('trigger_1_update_fts'),
 > ('trigger_2'), ('trigger_3'), ('trigger_1_check_nocycle');
 > INSERT 0 4
 > test=# select * from trg_test order by fld_1 ;
 > id | fld_1
 > +---
 > (0 rows)
 >
 > test=# select * from trg_str order by fld_1 ;
 > fld_1
 > -
 > trigger_1_check_nocycle
 > trigger_1_update_fts
 > trigger_2
 > trigger_3
 >
 > Is this how you want them to fire as it does not match what you say
 > above?:
 >
 > (I know they were not /declared/ in that order, but..)
 > Yes, all "trigger_1_*" are the "actual triggers triggering the logic",
 > trigger_2 and trigger_3 are only there as part of the "make
 > constraint-triggers fire only once"-mechanism, in which the function in
 > the first trigger is the function performing the actual logic.
 > So, since I want 2 "logical chunks" to happen I have two
 > "trigger_1"-triggers (there is no established terminology for this
 > AFAIK), each calling a function performing the logic which is to happen
 > only once (per row).
 >
 > "The first "main" trigger-function is update_company_fts_tf() ... The
 > second "main" trigger-function is company_parent_no_cycle()"
 >
 > It might be easier to understand if sketch out a schematic version of
 > what you are trying to achieve.
 >
 > The point is: I want two functions to be called
 > - update_company_fts_tf()
 > - company_parent_no_cycle()
 > , each only once, as constraint-triggers on the same table. So they are
 > called by the "level 1 triggers" which must fire first.

 To be clear the order they fire relative to each other is not important? 


Correct, these main functions may fire in any order. 



> Is it clearer now what I'm trying to achieve?

 Sort of, though I am still not entirely sure what the whole process is trying
 to achieve. What the mix of deferred and un-deferred triggers and
 'logical' and housekeeping functions are doing is not clear to me. That
 is why I suggested a schematic representation of the trigger flow would
 be helpful. Leave out the fine details and create a flow chart of what
 you want to happen. 


Normally, CONSTRAINT TRIGGERs will fire once for each UPDATE. That means, if 
you do an INSERT, then 2 UPDATEs, the trigger(s) will fire 3 times for each 
row. I'm trying to make these triggers fire only ONCE per row, and at COMMIT 
(being CONSTRAINT TRIGGERs). 

I'm using the trick mentioned here to achieve this: 

https://stackoverflow.com/questions/8937203/execute-deferred-trigger-only-once-per-row-in-postgresql#answer-8954058
 
<https://stackoverflow.com/questions/8937203/execute-deferred-trigger-only-once-per-row-in-postgresql#answer-8954058>


But I'm trying to have more than one CONSTRAINT TRIGGER on the same table, 
each one doing different things and reacting (triggering) on different 
columns, and I'm wondering if I can "re-use" the "cleanup-triggers" 2 and 3 as 
I mentioned, having trigger 2 firing on the union of all involved COLUMNS (name, 
duns_number, parent_id)? 

-- 
Andreas Joseph Krogh 
CTO / Partner - Visena AS 
Mobile: +47 909 56 963 
andr...@visena.com <mailto:andr...@visena.com> 
www.visena.com <https://www.visena.com> 
 <https://www.visena.com>

Re: Create a logical and physical replication

2019-11-05 Thread Andreas Joseph Krogh

På tirsdag 05. november 2019 kl. 12:15:20, skrev Deepak Pahuja . <
deepakpah...@hotmail.com <mailto:deepakpah...@hotmail.com>>: 
Yes it is possible. 


No it isn't. I think maybe this will address it for v13: 
https://www.postgresql.org/message-id/flat/20181212204154.nsxf3gzqv3gesl32%40alap3.anarazel.de
 
<https://www.postgresql.org/message-id/flat/20181212204154.nsxf3gzqv3gesl32%40alap3.anarazel.de>


-- 
Andreas Joseph Krogh 


Sv: DELETE Query Hang

2019-11-12 Thread Andreas Joseph Krogh

På tirsdag 12. november 2019 kl. 23:47:18, skrev DrakoRod <
drakofla...@hotmail.com <mailto:drakofla...@hotmail.com>>: 
Hi folks!

 I have a question: in a database there is a table with many files (bytea) stored
 (I support this database, I didn't design it), and we need to delete many rows
 (38000 rows approx), but when I execute the query:

 BEGIN;
 ALTER TABLE my_file_table DISABLE TRIGGER ALL;
 DELETE FROM my_file_table WHERE id_table <> 230;

 the query hangs... 50 minutes and it does not finish.

 Any suggestion? 

Check for locks and blocking statements: 
https://wiki.postgresql.org/wiki/Lock_Monitoring 
<https://wiki.postgresql.org/wiki/Lock_Monitoring> 

You can delete in chunks like this: 

do $_$
declare
    num_rows bigint;
begin
    loop
        delete from YourTable
         where id in (select id from YourTable where id < 500 limit 100);
        get diagnostics num_rows = row_count;
        raise notice 'deleted % rows', num_rows;
        exit when num_rows = 0;
    end loop;
end;
$_$;


--
 Andreas Joseph Krogh 


Sv: Why are clobs always "0"

2019-12-01 Thread Andreas Joseph Krogh

På søndag 01. desember 2019 kl. 18:31:35, skrev Arnie Morein <
arnie.mor...@mac.com <mailto:arnie.mor...@mac.com>>: 

I have tested the most recent driver in three different SQL IDEs, and now with 
an application I'm writing that uses JDBC metadata, the comment on a field 
definition also isn't available as a string value. 

The only thing I ever see regarding data type "text" field values are either a 
0 or a 1; neither of which applies. 

So why is this happening, even from the JDBC metadata results as well? 

Have you tried the NG-driver: https://github.com/impossibl/pgjdbc-ng 
<https://github.com/impossibl/pgjdbc-ng> 
We use it with Blobs/Clobs and it's working well. 

It would help us help you if you mention which IDEs you have tried, and 
provide configuration parameters, error messages etc. 


-- 
Andreas Joseph Krogh 
CTO / Partner - Visena AS 
Mobile: +47 909 56 963 
andr...@visena.com <mailto:andr...@visena.com> 
www.visena.com <https://www.visena.com> 
 <https://www.visena.com> 


Re: How to transfer databases form one server to other

2020-01-26 Thread Andreas Joseph Krogh


På mandag 27. januar 2020 kl. 03:26:59, skrev Ron mailto:ronljohnso...@gmail.com>>: [..]
 I ran uncompressed pg_dump on multiple TB+ sized databases from v8.4 servers 
across the LAN using 9.6 binaries on the remote server. It was quite fast. 
Threading was key. 

According to the manual: https://www.postgresql.org/docs/12/app-pgdump.html 
<https://www.postgresql.org/docs/12/app-pgdump.html> 
the "directory format" is the only format which supports parallel dumps, if 
I'm not reading it wrong. 

How did threading solve "between database" dump/restore for you? Did you dump 
to "directory format" first, then restore? If so, then that requires quite a 
bit of temp-space... 

-- 
Andreas Joseph Krogh 
CTO / Partner - Visena AS 
Mobile: +47 909 56 963 
andr...@visena.com <mailto:andr...@visena.com> 
www.visena.com <https://www.visena.com> 
 <https://www.visena.com>

Sv: Replacing Apache Solr with Postgre Full Text Search?

2020-03-26 Thread Andreas Joseph Krogh

På onsdag 25. mars 2020 kl. 13:36:38, skrev J2eeInside J2eeInside <
j2eeins...@gmail.com <mailto:j2eeins...@gmail.com>>: 
Hi all,

 I hope someone can help/suggest:
 I'm currently maintaining a project that uses Apache Solr/Lucene. To be 
honest, I would like to replace Solr with Postgres Full Text Search. However, 
there is a huge amount of documents involved - around 200GB. Wondering, can 
Postgres handle this efficiently?
 Does anyone have specific experience, and what should the infrastructure look 
like?

 P.S. Not to be confused, Solr works just fine, I just wanted to eliminate 
one component from the whole system (if Full Text Search can replace Solr at 
all) 

I see you've gotten some answers but wanted to chime in... 
We search in ~15 mill. emails and ~10 mill. documents (extracted text from 
Word/PDF etc. using Java-tools), and use PG and FTS (gin, not rum) for the 
exact same reasons as Evergreen (it seems). We have to mix FTS with 
domain-specific logic/filtering and that is based on relational data in the 
database. I don't see how we could have done that using an external 
search-engine. Maybe it's easy, I don't have any experience with it. 


-- 
Andreas Joseph Krogh 
CTO / Partner - Visena AS 
Mobile: +47 909 56 963 
andr...@visena.com <mailto:andr...@visena.com> 
www.visena.com <https://www.visena.com> 
 <https://www.visena.com> 


Sv: Triggers and Full Text Search *

2020-04-21 Thread Andreas Joseph Krogh

På tirsdag 21. april 2020 kl. 19:24:10, skrev Malik Rumi <
malik.a.r...@gmail.com <mailto:malik.a.r...@gmail.com>>: 


[...]

I am not (yet) posting the trigger code because this post is long already, and 
if your answers are 1) yes, 2) no and 3) triggers often work / fail like this, 
then there’s no point and we can wrap this up. But if not, I will happily post 
what I have. Thank you.


This is too much prose for the regular programmer, show us the code, and point 
out what doesn't work for you, then we can help:-) 


--
 Andreas Joseph Krogh

Sv: Practical usage of large objects.

2020-05-14 Thread Andreas Joseph Krogh

På onsdag 13. mai 2020 kl. 19:53:38, skrev Dmitry Igrishin mailto:dmit...@gmail.com>>: 
Hello all,

 As you know, PostgreSQL has a large objects facility [1]. I'm curious
 are there real systems which are use this feature? I'm asking because
 and I'm in doubt should the Pgfe driver [2] provide the convenient API
 for working with large objects or not.

 Thanks! 

Yea, we use LOs, because bytea using JDBC really doesn't stream (at least 
using the pgjdbc-ng driver). When retrieving bytea using JDBC it returns an 
InputStream, but it's backed by an in-memory byte[]. With LOs and java.sql.Blob 
(which the standard pgjdbc driver doesn't support, but pgjdbc-ng does) it 
actually uses streams and memory usage is kept to a minimum. 


--
 Andreas Joseph Krogh

Order by lower(column-alias) doesn't work...

2020-05-28 Thread Andreas Joseph Krogh

Hi. 

This works: 
select p.firstname, p.lastname, p.firstname || p.lastname as fullname from 
onp_crm_person p order by fullname;

But this doesn't: 
select p.firstname, p.lastname, p.firstname || p.lastname as fullname from 
onp_crm_person p order by lower(fullname);
ERROR: column "fullname" does not exist 
 LINE 1: ... as fullname from onp_crm_person p order by lower(fullname);

This is just an example-query; in my real query I have a more complex query 
generating an array of a custom-type which is then referred to via a 
column-alias, and then ORDER BY on a function transforming this array doesn't 
work: 

SELECT ... 
ARRAY(WITH RECURSIVE t AS (SELECT 
...
) SELECT ROW(t.entity_id, t.name)::BigIntVarChar from t order by level DESC) 
as folder_parent_array


ORDER BY bigintvarchar_to_text_value_flatten(folder_parent_array) ASC; 
column "folder_parent_array" does not exist 

What bigintvarchar_to_text_value_flatten() does is to take the "varchar"-part 
out of the BigintVarchar-type and "flatten" the array by that value so that it 
sorts nicely. 

 Any way round this? 


--
 Andreas Joseph Krogh 

Re: Order by lower(column-alias) doesn't work...

2020-05-28 Thread Andreas Joseph Krogh

På torsdag 28. mai 2020 kl. 14:50:54, skrev Geoff Winkless mailto:pgsqlad...@geoff.dj>>: 
On Thu, 28 May 2020 at 13:14, Andreas Joseph Krogh  wrote:
 > This works:
 > select p.firstname, p.lastname, p.firstname || p.lastname as fullname from 
onp_crm_person p order by fullname;
 >
 > But this doesn't:
 > select p.firstname, p.lastname, p.firstname || p.lastname as fullname from 
onp_crm_person p order by lower(fullname);
 > ERROR: column "fullname" does not exist
 > LINE 1: ... as fullname from onp_crm_person p order by lower(fullname);

 Wrap the original query in either a CTE or a temporary table.
 eg
 [..] 

Yea, I was hoping to avoid that, as the query is generated and rewriting it is 
a pain... 

Is there a way to define "sorting-rules" on custom-types so that I can have 
ORDER BY  and PG will pick my custom ordering? 


--
 Andreas Joseph Krogh

Re: Order by lower(column-alias) doesn't work...

2020-05-28 Thread Andreas Joseph Krogh

På torsdag 28. mai 2020 kl. 15:26:42, skrev Tom Lane mailto:t...@sss.pgh.pa.us>>: 
Andreas Joseph Krogh  writes:
 > Is there a way to define "sorting-rules" on custom-types so that I can have
 > ORDER BY  and PG will pick my custom 
ordering?

 You'd have to write your own type, which would be a lotta work :-(.

 A possible partial answer is to define the composite type as

 firstname citext, lastname citext, other-fields-here

 and then the regular composite-type comparison rule would give you
 approximately what you said you wanted ... but only approximately.

 regards, tom lane 


Hm, ok. I think the most non-intrusive way for me is to craft a sub-select 
producing the "varchar-string-flattened" value so I can order by that alias. 


--
 Andreas Joseph Krogh

Re: Oracle vs. PostgreSQL - a comment

2020-06-03 Thread Andreas Joseph Krogh

På onsdag 03. juni 2020 kl. 18:50:12, skrev Jeremy Schneider <
schnj...@amazon.com <mailto:schnj...@amazon.com>>: 
> On 6/2/20 1:30 PM, Stephen Frost wrote:
 >> No, nothing does as PG doesn't support it as we have one WAL stream for
 >> the entire cluster.

 On 6/2/20 11:38, Ron wrote:
 > Right. Making WAL files specific to a database should be high on the
 > list of priorities.

 Did Oracle change this? Last time I looked, I don't think Oracle
 supported local redo in their multitenant architecture either. 


Regardless of what Oracle does, I agree this would be a huge step in the right 
direction for pg-DBAs. 
I have absolutely no clue about how much work is required etc., but I think 
it's kind of strange that no companies have invested in making this happen. 


--
 Andreas Joseph Krogh 

Re: Oracle vs. PostgreSQL - a comment

2020-06-03 Thread Andreas Joseph Krogh

På onsdag 03. juni 2020 kl. 20:07:24, skrev Chris Travers <
chris.trav...@gmail.com <mailto:chris.trav...@gmail.com>>: 


[...] 

Regardless of what Oracle does, I agree this would be a huge step in the right 
direction for pg-DBAs. 
I have absolutely no clue about how much work is required etc., but I think 
it's kind of strange that no companies have invested in making this happen. 

I manage database clusters where the number of databases is a reason not to do 
logical replication based upgrades, where pg_upgrade is far preferred instead. 

If this were to be the case, I would be very concerned that a bunch of things 
would have to change: 
1. Shared catalogs would have txid problems unless you stay with global txids 
and then how do local wal streams work there? 
2. Possibility that suddenly streaming replication has the possibility of 
different databases having different amounts of lag 
3. Problems with io management on WAL on high throughput systems (I have 
systems where a db cluster generates 10-20TB of WAL per day) 

So I am not at all sure this would be a step in the right direction or worth 
the work. 

I agree these are all technical issues, but nevertheless "implementation 
details", which DBAs don't care about. What's important from a DBA's 
perspective is not whether WAL is cluster-wide or database-wide, but whether 
it's possible to manage backups/PITR/restores of individual databases in a more 
convenient manner, which other RDBMS-vendors seem to provide. 

I love PG, have been using it professionally since 6.5, and our company 
depends on it, but there are things other RDBMS-vendors do better... 


--
 Andreas Joseph Krogh 

Sv: autovacuum failing on pg_largeobject and disk usage of the pg_largeobject growing unchecked

2020-06-16 Thread Andreas Joseph Krogh

På tirsdag 16. juni 2020 kl. 17:59:37, skrev Jim Hurne mailto:jhu...@us.ibm.com>>: 
We have a cloud service that uses PostgreSQL to temporarily store binary
 content. We're using PostgreSQL's Large Objects to store the binary
 content. Each large object lives anywhere from a few hundred milliseconds
 to 5-10 minutes, after which it is deleted.
 [...] 

In my experience vacuumlo, https://www.postgresql.org/docs/12/vacuumlo.html 
<https://www.postgresql.org/docs/12/vacuumlo.html>, is needed to remove large 
objects, before vacuum can remove them from pg_largeobject. 
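
For example (a sketch; "mydb" is just a placeholder database name), first a 
dry-run with -n showing which orphaned large objects would be removed, then the 
real run:

$ vacuumlo -n -v mydb
$ vacuumlo -v mydb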


--
 Andreas Joseph Krogh 

Re: Postgresql HA Cluster

2020-06-29 Thread Andreas Joseph Krogh

På mandag 29. juni 2020 kl. 09:40:13, skrev Andreas Kretschmer <
andr...@a-kretschmer.de <mailto:andr...@a-kretschmer.de>>: 


 Am 29.06.20 um 09:33 schrieb Laurenz Albe:
 > That would not provide a multi-master solution, though. There are some
 > commercial solutions for that, but be warned that it would require 
non-trivial
 > changes to your application.

 not really with BDR3 ;-) 

Well, BDR, last time I checked, still doesn't support exclusion-constraints, 
so it's not a drop-in replacement. 


--
 Andreas Joseph Krogh 


Sv: PostgreSQL transaction aborted on SQL error

2020-08-04 Thread Andreas Joseph Krogh

På tirsdag 04. august 2020 kl. 10:44:36, skrev Urko Lekuona mailto:u...@arima.eu>>: 
Hello, 

First time writing here, I hope this is the right place to ask this kind of 
question. I've been working with PostgreSQL for a while now but I've just found 
out that PostgreSQL marks my transaction for ROLLBACK and even stops the 
execution of the transaction if an error occurs. 

I'm a Java developer and I'm using JDBC to connect to PostgreSQL. I've made a 
gist to showcase this behavior (
https://gist.github.com/UrkoLekuona/4be14e099aeec2e7d8ef616886b66fdf/ 
<https://gist.github.com/UrkoLekuona/4be14e099aeec2e7d8ef616886b66fdf/>). If 
you run it, you'd see that when the unique key constraint is violated, my 
transaction is stopped, i.e. the SELECT and DROP statements are not executed. 
The thrown exception is org.postgresql.util.PSQLException: ERROR: current 
transaction is aborted, commands ignored until end of transaction block 

I've purposely set AutoCommit to false, because in my real life use case this 
is not an option. The only workaround I've found for this exception is setting 
the connection propertyautosave to ALWAYS, (
https://jdbc.postgresql.org/documentation/head/connect.html 
<https://jdbc.postgresql.org/documentation/head/connect.html>). 

My question is: is this the correct way of solving this issue? I'd rather if 
there was a PostgreSQL flag to disable this behavior and make it work like 
other RDBMS do, where if a statement failed, the transaction could continue 
without explicitly marking a savepoint and rolling back. 

Thanks in advance for your help, it is appreciated. 

Urko 


The correct approach is to ROLLBACK the transaction in a "catch-block" instead 
of trying to execute further statements. The java.sql.Connection is "invalid" 
after an SQLException and should be rolled back. 
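
A small psql sketch of the server-side behaviour the catch-block has to deal 
with (the table "t" and its unique constraint are hypothetical):

BEGIN;
INSERT INTO t VALUES (1); -- suppose this violates a unique constraint
SELECT 1;                 -- ERROR: current transaction is aborted, commands ignored ...
ROLLBACK;                 -- after this the connection is usable again
SELECT 1;                 -- works again

The autosave=ALWAYS property works around this by issuing a SAVEPOINT before 
every statement and rolling back to it on error, which has a performance cost.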


--
 Andreas Joseph Krogh 

RUM and WAL-generation

2020-10-21 Thread Andreas Joseph Krogh

Hi all. 

The RUM-index is very nice, but indexes get very large and produce insane 
amounts of WAL. 
Due to the way PG works (IIUC), updating an int-column in a row produces lots 
of WAL because the whole row is duplicated, and if that row holds RUM-indexed 
columns it gets really bad... 
We have actually run out of WAL-space in production because of this. 

I see this TODO-entry in RUM: 


 * Improve GENERIC WAL to support shift (PostgreSQL core changes). 
What is the status on this? 


-- 
Andreas Joseph Krogh 
CTO / Partner - Visena AS 
Mobile: +47 909 56 963 
andr...@visena.com <mailto:andr...@visena.com> 
www.visena.com <https://www.visena.com> 
 <https://www.visena.com> 

Re: Hot backup in PostgreSQL

2020-10-22 Thread Andreas Joseph Krogh

På torsdag 22. oktober 2020 kl. 10:18:12, skrev hubert depesz lubaczewski <
dep...@depesz.com <mailto:dep...@depesz.com>>: 
On Thu, Oct 22, 2020 at 09:45:36AM +0200, W.P. wrote:
 > > There are many ways to do it. To be able to suggest proper solution we'd
 > > need to know:
 > > 1. what is the problem with pg_dump?
 > Time (I guess a bit, but copying files could be done using rsync, so much
 > faster).

 Is it *really* too slow for you? Please note that you can easily make it
 much faster by doing -Fd -j $( nproc ). 

I got curious and tried with this DB: 

andreak@[local]:5433 13.0 visena=# select pg_size_pretty(pg_database_size(current_database()));
 ┌────────────────┐
 │ pg_size_pretty │
 ├────────────────┤
 │ 47 GB          │
 └────────────────┘
 (1 row)


nproc=16


Regular pg_dump: 

$ time pg_dump -O -d visena > ~/data/visena/visena.dmp

 real 2m43,904s
 user 0m10,135s
 sys 0m24,260s 


Parallel pg_dump: 

$ time pg_dump -OFd -j $(nproc) -f ~/data/visena/pg_backup -d visena 

 real 3m43,726s
 user 12m36,620s
 sys 0m9,537s


pg_dump with pbzip2 

$ time pg_dump -O -d visena | pbzip2 -c > ~/data/visena/visena.dmp.bz2 

 real 6m58,741s
 user 92m4,833s
 sys 2m18,565s 

Here are the sizes of all: 

7,4G  pg_backup (directory with -Fd)
 32G  visena.dmp
5,8G  visena.dmp.bz2 

-- 

Andreas Joseph Krogh 
CTO / Partner - Visena AS 
Mobile: +47 909 56 963 
andr...@visena.com <mailto:andr...@visena.com> 
www.visena.com <https://www.visena.com> 
 <https://www.visena.com> 


How to specify that a trigger should fire when column is NOT in SET-clause?

2020-12-25 Thread Andreas Joseph Krogh

Hi. 

I need to set a value in a trigger if a column is explicitly NOT specified in 
UPDATE's SET-clause. 


Like for example having a "BEFORE UPDATE OF NOT":

create TRIGGER my_trigger
    BEFORE UPDATE OF NOT modified ON my_table
    FOR EACH ROW
    WHEN (OLD.val <> NEW.val)
    EXECUTE PROCEDURE do_stuff();


I want the trigger to be fired when the column "modified" is NOT specified, is 
it possible? 
Or - is it possible to check for this in the trigger-function? 

-- 
Andreas Joseph Krogh 
CTO / Partner - Visena AS 
Mobile: +47 909 56 963 
andr...@visena.com <mailto:andr...@visena.com> 
www.visena.com <https://www.visena.com> 
 <https://www.visena.com> 

Re: How to specify that a trigger should fire when column is NOT in SET-clause?

2020-12-25 Thread Andreas Joseph Krogh

På fredag 25. desember 2020 kl. 17:48:39, skrev Adrian Klaver <
adrian.kla...@aklaver.com <mailto:adrian.kla...@aklaver.com>>: 
On 12/25/20 5:52 AM, Andreas Joseph Krogh wrote:
 > Hi.
 > I need to set a value in a trigger if a column is explicitly NOT
 > specified in UPDATE's SET-clause.
 > Like for example having a "BEFORE UPDATE OF NOT"
 >
 > create TRIGGER my_trigger
 > BEFORE UPDATE OF NOT modified ON my_table FOR EACH ROW WHEN (OLD.val 
<>NEW.val)
 > EXECUTE PROCEDURE do_stuff();
 >
 > I want the trigger to be fired when the column "modified" is NOT
 > specified, is it possible?

 It will always be specified, it may or may not be changed. As example: 

True, but what I'm after is using the value from the "modified" column, if 
specified, else using CURRENT_TIMESTAMP. 

My use-case is this; 

I have this table: 

create table person (
    id serial primary key,
    username varchar not null unique,
    password varchar not null,
    credentials_last_updated timestamp NOT NULL default CURRENT_TIMESTAMP,
    created timestamp NOT NULL default CURRENT_TIMESTAMP,
    modified timestamp
);

Then this trigger to update "credentials_last_updated" whenever "password" is 
modified:

create or replace FUNCTION person_password_updated_tf() returns TRIGGER AS $$
BEGIN
    NEW.credentials_last_updated = NEW.modified; -- OR CURRENT_TIMESTAMP if "modified" isn't specified
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

create TRIGGER person_password_updated_t
    BEFORE UPDATE OF password ON person
    FOR EACH ROW
    WHEN (OLD.password <> NEW.password)
    EXECUTE PROCEDURE person_password_updated_tf();

So, I want to set "credentials_last_updated" to NEW.modified if "modified" is 
specified, else to CURRENT_TIMESTAMP. 
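
One possible sketch (an assumption on my side, not something from this thread): 
since a trigger only sees OLD/NEW and not the SET-clause itself, treat 
"modified unchanged" as "not specified" and fall back to CURRENT_TIMESTAMP. 
Note that an explicit SET modified = modified would be indistinguishable from 
leaving it out:

create or replace FUNCTION person_password_updated_tf() returns TRIGGER AS $$
BEGIN
    IF NEW.modified IS DISTINCT FROM OLD.modified THEN
        NEW.credentials_last_updated = NEW.modified;      -- "modified" was (apparently) specified
    ELSE
        NEW.credentials_last_updated = CURRENT_TIMESTAMP; -- assume it was not in the SET-clause
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;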



-- 
Andreas Joseph Krogh 
CTO / Partner - Visena AS 
Mobile: +47 909 56 963 
andr...@visena.com <mailto:andr...@visena.com> 
www.visena.com <https://www.visena.com> 
 <https://www.visena.com> 


RE: How to keep format of views source code as entered?

2021-01-08 Thread Andreas Joseph Krogh

På fredag 08. januar 2021 kl. 09:38:29, skrev Markhof, Ingolf <
ingolf.mark...@de.verizon.com <mailto:ingolf.mark...@de.verizon.com>>: 




Thanks for your comments and thoughts.



I am really surprised that PostgreSQL is unable to keep the source text of a 
view. Honestly, to me this looks like an implementation gap. Consider software 
development: you are writing code in C++, maybe on a UNIX host. And whenever you 
feed your source code into the compiler, it deletes it, keeping only the 
resulting executable. And you could not even store your source code on 
the UNIX system; instead, you'd be forced to do so in a separate system, like 
GitHub. Stupid, isn't it? Right, there are good reasons to store the source 
code on GitHub or alike anyhow, especially when working on larger projects and 
when collaborating with many people. But in case of a rather small project with a 
few people only, this might be overkill.

It shouldn't be rocket science to enable PostgreSQL to store the original 
source code as well. It's weird PostgreSQL is not doing it.

It isn't rocket-science, of course, but I'm pretty sure it is implemented like 
this on purpose. PG doesn't store the queries you feed it either, nor any other 
command; it stores the resulting structure. SQL-scripts containing DDL/DML 
should be versioned using an SCM like Git, not rely on the DB to store them. 




-- 
Andreas Joseph Krogh 


Max sane value for join_collapse_limit?

2022-06-03 Thread Andreas Joseph Krogh


Hi, I have set join_collapse_limit = 12 in production, but I'm thinking about 
raising it to 16.

On modern HW is there a “sane maximum” for this value?

I can easily spare 10ms for extra planning per query on our workload, is 16 
too high?



Thanks.






--
Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com <mailto:andr...@visena.com>
www.visena.com <https://www.visena.com>
 <https://www.visena.com>

Logical replication of large objects

2022-06-05 Thread Andreas Joseph Krogh


I started this thread 5 years ago: 
https://www.postgresql.org/message-id/flat/7c70d9bd-76fc-70fa-cfec-14f00a4a49c3%40matrix.gatewaynet.com#15cbf1c82be9341e551e60e287264380
 
<https://www.postgresql.org/message-id/flat/7c70d9bd-76fc-70fa-cfec-14f00a4a49c3%40matrix.gatewaynet.com#15cbf1c82be9341e551e60e287264380>



We'd be willing to help fund the development needed to support Large Object 
logical replication.

Anyone interested?






--
Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com <mailto:andr...@visena.com>
www.visena.com <https://www.visena.com>
 <https://www.visena.com>

Re: Logical replication of large objects

2022-06-09 Thread Andreas Joseph Krogh


På torsdag 09. juni 2022 kl. 20:24:56, skrev Joshua Drake 
mailto:j...@commandprompt.com>>:

Large objects are largely considered a deprecated feature.


Though I like the idea, was there any consensus on -hackers?
Nobody seems interested in it…






--
Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com <mailto:andr...@visena.com>
www.visena.com <https://www.visena.com>
 <https://www.visena.com>


Sv: How can I set up Postgres to use given amount of RAM?

2022-06-26 Thread Andreas Joseph Krogh


På søndag 26. juni 2022 kl. 20:40:01, skrev W.P. mailto:laure...@wp.pl>>:
Question in topic:

"How can I set up Postgres to use a given amount of RAM?"

I have a laptop with 8GB of RAM, and I can see Linux uses no more than 2-3GB.

So my question is how to FORCE PostgreSQL to use, let's say, 2-4GB of RAM for 
caching tables I run queries on?

As I can see disk activity when running queries.

W.P.
The closest thing I can think of is effective_cache_size: 
https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-EFFECTIVE-CACHE-SIZE
 
<https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-EFFECTIVE-CACHE-SIZE>
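
A postgresql.conf sketch (the value is an assumption for an 8GB machine); note 
that effective_cache_size is only a planner hint about how much the OS and 
PostgreSQL together will likely cache, it does not allocate memory by itself:

effective_cache_size = 4GB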








--
Andreas Joseph Krogh


Sv: How to store "blobs" efficiently for small and large sizes, with random access

2022-10-19 Thread Andreas Joseph Krogh
puts that limit per-project
 which will be more than enough. For the sum of all projects, maybe
not... I.e. with real client-case of 3K projects,
 that puts an average of only 10GB of lo's per-project (i.e. schema),
which could very well be problematic...











--
Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com <mailto:andr...@visena.com>
www.visena.com <https://www.visena.com>
 <https://www.visena.com>


Re: How to store "blobs" efficiently for small and large sizes, with random access

2022-10-19 Thread Andreas Joseph Krogh


På onsdag 19. oktober 2022 kl. 12:48:24, skrev Dominique Devienne <
ddevie...@gmail.com <mailto:ddevie...@gmail.com>>:

On Wed, Oct 19, 2022 at 12:17 PM Andreas Joseph Krogh mailto:andr...@visena.com>> wrote:



First advice, don't do it. We started off storing blobs in DB for “TX safety”

Not really an option, I'm afraid.


, but backup/restore quickly became too cumbersome so we ended up moving all 
blobs out and only store reference in DB.

This required us to make a “vacuum system” that cleans up the blob-storage 
regularly as ROLLBACK/crash can make it out of sync.

Note the fact the data is spread in many mostly independent schemas.
Might ease the backup/restore? I'm not much of a DBA though...


We chose storing as LO because with it, streaming large blobs (not using much 
memory) actually worked, with JDBC at least.


I'm in C++, with I believe efficient use of binary binds and results, and use 
of COPY as much as possible,
so as good as it gets I think (that I know of, with libpq), in terms of 
performance. Haven't looked at libpq's
new in v14 pipeline mode yet though.

Investigated Cursor vs Statement too, and it's a tradeoff between latency and 
throughput.
Not sure how good JDBC drivers for PostgreSQL are. I'm doing my best to go 
fast with libpq.

In any case, thanks for your input. But it's not really a question of "if". 
But of "how".

Putting thousands of large blobs in the file system is a no go. Assuming the 
clients
can even see the file system the server sees. This is a 2-tier system, there's 
no mid-tier
that would somehow magically handle proper security and lifetime management of 
these blobs.

Thanks, --DD
Ok, just something to think about; will your database grow beyond 10TB with 
blobs? If so, try to calculate how long it takes to restore and still comply 
with your SLA, and how long it would have taken to restore without the blobs.



PS: Our blobstore is not “the file system”, but SeaweedFS.






--
Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com <mailto:andr...@visena.com>
www.visena.com <https://www.visena.com>
 <https://www.visena.com>


Re: How to store "blobs" efficiently for small and large sizes, with random access

2022-10-19 Thread Andreas Joseph Krogh


På onsdag 19. oktober 2022 kl. 13:21:38, skrev Dominique Devienne <
ddevie...@gmail.com <mailto:ddevie...@gmail.com>>:
On Wed, Oct 19, 2022 at 1:00 PM Andreas Joseph Krogh  
wrote:
> Ok, just something to think about;

Thank you. I do appreciate the feedback.

> Will your database grow beyond 10TB with blobs?

The largest internal store I've seen (for the subset of data that goes
in the DB) is shy of 3TB.
But we are an ISV, not one of our clients, which have truly massive
scale for data.
And they don't share the exact scale of their proprietary data with me...

> If so try to calculate how long it takes to restore, and comply with SLA,
> and how long it would have taken to restore without the blobs.

Something I don't quite get is why somehow backup is no longer needed
if the large blobs are external?
i.e. are you saying backups are so much worse in PostgreSQL than
with the FS? I'm curious now.
I'm not saying you don't need backup (or redundancy) of other systems holding 
blobs, but moving them out of the RDBMS lets you restore the DB to a consistent 
state, and be able to serve clients, faster. In my experience it's quite unlikely 
that your (redundant) blob-store needs crash-recovery at the same time your DB 
does. The same goes for PITR, needed because of some logical error (like a 
client deleting some data they shouldn't have), which is much faster without 
blobs in the DB and doesn't affect the blob-store at all (if you have a smart 
insert/update/delete-policy there).



Also, managing the PostgreSQL server will be the client's own concern
mostly. We are not into Saas here.
As hinted above, the truly massive data is already not in the DB, used
by different systems, and processed
down to the GB sized inputs all the data put in the DB is generated
from. It's a scientific data heavy environment.
And one where security of the data is paramount, for contractual and
legal reasons. Files make that harder IMHO.

Anyways, this is straying from the main theme of this post I'm afraid.
Hopefully we can come back on the main one too. --DD

There's a reason “everybody” advises moving blobs out of the DB, I've learned.






--
Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com <mailto:andr...@visena.com>
www.visena.com <https://www.visena.com>
 <https://www.visena.com>


Re: How to store "blobs" efficiently for small and large sizes, with random access

2022-10-20 Thread Andreas Joseph Krogh


På torsdag 20. oktober 2022 kl. 10:32:44, skrev Dominique Devienne <
ddevie...@gmail.com <mailto:ddevie...@gmail.com>>:
On Wed, Oct 19, 2022 at 5:05 PM Laurenz Albe  wrote:
> On Wed, 2022-10-19 at 12:48 +0200, Dominique Devienne wrote:
> > On Wed, Oct 19, 2022 at 12:17 PM Andreas Joseph Krogh  
wrote:
> > > First advice, don't do it. We started off storing blobs in DB for “TX 
safety”
> > Not really an option, I'm afraid.
> You should reconsider. Ruling out that option now might get you into trouble
> later. Large Objects mean trouble.

Andreas, Ericson, Laurenz, thanks for the advice.
I'll be sure to discuss these concerns with the team.
[…]
But before I finish this thread for now, I'd like to add that I
consider unfortunate a state of affairs where
NOT putting the data in the DB is the mostly agreed upon advice. It
IMHO points to a weak point of
PostgreSQL, which does not invest in those use-cases with large data,
perhaps with more file-system
like techniques. Probably because most of the large users of
PostgreSQL are more on the "business"
side (numerous data, but on the smaller sizes) than the "scientific"
side, which (too often) uses files and
files-in-a-file formats like HDF5.
[…]


Note that my views were not PG-specific and apply to all 
applications/architectures involving an RDBMS.
From my point of view having all data in the RDBMS is (maybe) theoretically sound, 
but given that IO is not instant I consider it a design-flaw, for reasons 
I've already pointed out.





--
Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com <mailto:andr...@visena.com>
www.visena.com <https://www.visena.com>
 <https://www.visena.com>


Sv: PostgreSQL configuration in a VM

2023-02-17 Thread Andreas Joseph Krogh


På fredag 17. februar 2023 kl. 09:13:10, skrev Sebastien Flaesch <
sebastien.flae...@4js.com <mailto:sebastien.flae...@4js.com>>:
Hi!


I was wondering if there is any specific configuration setting that should be 
used with PostgreSQL when running in a VM...


Is there anything obvious that must be set to get the best performance with such 
a config?


Sorry for this general question...
In my experience the most important parameter when running in a VM is 
random_page_cost, and for that to be set to a sane value you need to know the 
characteristics of the disk available to your VM. In other words, disk IO is 
what you should be worried about as VMs are pretty good at scaling CPU-wise.
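
A postgresql.conf sketch (the values are assumptions, measure on your own VM); 
the default random_page_cost of 4.0 assumes spinning disks, while for 
SSD/NVMe-backed storage a value close to seq_page_cost is common advice:

random_page_cost = 1.1  # default is 4.0
seq_page_cost = 1.0     # the default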






--
Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com <mailto:andr...@visena.com>
www.visena.com <https://www.visena.com>
 <https://www.visena.com>


Delete values from JSON

2023-03-17 Thread Andreas Joseph Krogh


Hi, in PG-14, how do I delete the keys "dunsNumber": "NaN":


{ "sessionId": "ce6fc9d0-5923-4d71-9f7e-ae60f35c45d6", "details": [ { 
"keyInformation": { "dunsNumber": "NaN", "organizationType": "LIMITED_COMPANY" 
} }, { "keyInformation": { "dunsNumber": "123", "organizationType": 
"LIMITED_COMPANY" } } ], "nisse": 123 }


So that the result becomes:


{ "sessionId": "ce6fc9d0-5923-4d71-9f7e-ae60f35c45d6", "details": [ { 
"keyInformation": { "organizationType": "LIMITED_COMPANY" } }, { 
"keyInformation": { "dunsNumber": "123", "organizationType": "LIMITED_COMPANY" 
} } ], "nisse": 123 }


Thanks.






--
Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com <mailto:andr...@visena.com>
www.visena.com <https://www.visena.com>
 <https://www.visena.com>

Re: Delete values from JSON

2023-03-17 Thread Andreas Joseph Krogh


På fredag 17. mars 2023 kl. 11:56:22, skrev Romain MAZIÈRE <
romain.mazi...@sigmaz-consilium.fr <mailto:romain.mazi...@sigmaz-consilium.fr>>:
Hi,

If it is jsonb type, you can have a look at the documentation : 
https://www.postgresql.org/docs/14/functions-json.html 
<https://www.postgresql.org/docs/14/functions-json.html>

There are some examples :


jsonb - text → jsonb

Deletes a key (and its value) from a JSON object, or matching string value(s) 
from a JSON array.

'{"a": "b", "c": "d"}'::jsonb - 'a' → {"c": "d"}

'["a", "b", "c", "b"]'::jsonb - 'b' → ["a", "c"]


jsonb - text[] → jsonb

Deletes all matching keys or array elements from the left operand.

'{"a": "b", "c": "d"}'::jsonb - '{a,c}'::text[] → {}


jsonb - integer → jsonb

Deletes the array element with specified index (negative integers count from 
the end). Throws an error if JSON value is not an array.

'["a", "b"]'::jsonb - 1 → ["a"]


jsonb #- text[] → jsonb

Deletes the field or array element at the specified path, where path elements 
can be either field keys or array indexes.

'["a", {"b":1}]'::jsonb #- '{1,b}' → ["a", {}]

Regards

I have looked at the docs, but AFAIU they don't show how to conditionally 
delete a key based on its value, leaving the other keys in the JSONB (those not 
matching the value) alone.

I want to delete all keys at the (pseudo) path 
details.keyInformation[*].dunsNumber if the value is "NaN".






--
Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com <mailto:andr...@visena.com>
www.visena.com <https://www.visena.com>
 <https://www.visena.com>


Re: Delete values from JSON

2023-03-18 Thread Andreas Joseph Krogh


Excellent!
Thanks!

På lørdag 18. mars 2023 kl. 14:26:57, skrev Boris Zentner mailto:b...@2bz.de>>:







Am 17.03.2023 um 08:56 schrieb Andreas Joseph Krogh :





Hi, in PG-14, how do I delete the keys "dunsNumber": "NaN":

{ "sessionId": "ce6fc9d0-5923-4d71-9f7e-ae60f35c45d6", "details": [ { 
"keyInformation": { "dunsNumber": "NaN", "organizationType": "LIMITED_COMPANY" 
} }, { "keyInformation": { "dunsNumber": "123", "organizationType": 
"LIMITED_COMPANY" } } ], "nisse": 123 }


So that the result becomes:

{ "sessionId": "ce6fc9d0-5923-4d71-9f7e-ae60f35c45d6", "details": [ { 
"keyInformation": { "organizationType": "LIMITED_COMPANY" } }, { 
"keyInformation": { "dunsNumber": "123", "organizationType": "LIMITED_COMPANY" 
} } ], "nisse": 123 }


Thanks.



Hi Andreas, this works for me.



➤ 2023-03-18 14:23:51 CET bz@[local]:5432/test
=# WITH data(j) AS (
       VALUES (CAST('{ "sessionId": "ce6fc9d0-5923-4d71-9f7e-ae60f35c45d6", "details": [ { "keyInformation": { "dunsNumber": "NaN", "organizationType": "LIMITED_COMPANY" } }, { "keyInformation": { "dunsNumber": "123", "organizationType": "LIMITED_COMPANY" } } ], "nisse": 123 }' AS jsonb))
   )
   SELECT jsonb_pretty(
              jsonb_set(j, '{details}',
                        (SELECT jsonb_agg(CASE
                                              WHEN ((elem -> 'keyInformation') ->> 'dunsNumber') = 'NaN'
                                              THEN jsonb_set(elem, '{keyInformation}', (elem -> 'keyInformation') - 'dunsNumber')
                                              ELSE elem
                                          END)
                           FROM jsonb_array_elements(data.j -> 'details') AS elem))) AS nice_output
   FROM data;

                       nice_output
---------------------------------------------------------
 {
     "nisse": 123,
     "details": [
         {
             "keyInformation": {
                 "organizationType": "LIMITED_COMPANY"
             }
         },
         {
             "keyInformation": {
                 "dunsNumber": "123",
                 "organizationType": "LIMITED_COMPANY"
             }
         }
     ],
     "sessionId": "ce6fc9d0-5923-4d71-9f7e-ae60f35c45d6"
 }
(1 row)

Time: 0,731 ms

--
Boris











--
Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com <mailto:andr...@visena.com>
www.visena.com <https://www.visena.com>
 <https://www.visena.com>


Sv: Re: Oracle vs PG

2018-10-23 Thread Andreas Joseph Krogh
På tirsdag 23. oktober 2018 kl. 22:45:36, skrev Adrian Klaver <
adrian.kla...@aklaver.com <mailto:adrian.kla...@aklaver.com>>:
On 10/23/18 12:58 PM, Ravi Krishna wrote:
 > Well it is Aurora.
 >
 > 
https://www.cnbc.com/2018/10/23/amazon-move-off-oracle-caused-prime-day-outage-in-warehouse.html
 >

 Since the article was almost content-free I would not use it on either
 side of the argument. The only thing I pulled from it was Amazon changed
 databases and hit the learning curve. That will happen in either direction.
 
Is it so hard to accept commercial databases have advantages?
I find that not one bit surprising.
 
I've used PG since the 90's and it's no secret the "big guys" beat PG on certain 
workloads.
 
-- Andreas Joseph Krogh




Sv: Re: Sv: Re: Oracle vs PG

2018-10-23 Thread Andreas Joseph Krogh
På tirsdag 23. oktober 2018 kl. 23:36:29, skrev Adrian Klaver <
adrian.kla...@aklaver.com <mailto:adrian.kla...@aklaver.com>>:
On 10/23/18 2:34 PM, Andreas Joseph Krogh wrote:
 > På tirsdag 23. oktober 2018 kl. 22:45:36, skrev Adrian Klaver
 > mailto:adrian.kla...@aklaver.com>>:
 >
 >     On 10/23/18 12:58 PM, Ravi Krishna wrote:
 >      > Well it is Aurora.
 >      >
 >      >
 >    
 
https://www.cnbc.com/2018/10/23/amazon-move-off-oracle-caused-prime-day-outage-in-warehouse.html
 >      >
 >
 >     Since the article was almost content-free I would not use it on either
 >     side of the argument. The only thing I pulled from it was Amazon changed
 >     databases and hit the learning curve. That will happen in either
 >     direction.
 >
 > Is it so hard to accept commercial databases have advantages?

 That is entirely possible. My point is that the article does not contain
 enough information to make that determination. As with many media
 articles it was written to support the headline, not to actually shed
 light on the issue.
 
I think it provides enough.
It's of course entirely up to the reader to ignore, question, or not believe, 
the results of the published report(s).
 
--
 Andreas Joseph Krogh



Sv: Re: Syndicating PostgreSQL mailing list to Discourse

2018-11-21 Thread Andreas Joseph Krogh
På onsdag 21. november 2018 kl. 15:53:34, skrev legrand legrand <
legrand_legr...@hotmail.com <mailto:legrand_legr...@hotmail.com>>:
Hello,

 What you are proposing with https://rubytalk.org/ seems very interesting.

 It offers a quick view on mobile of "latests posts for all sites" in one
 click,
 and many other grouping/filtering options (that the PostgreSQL website lacks),
 for users that don't use a fat-client mailing-list system (like me).

 This seems even better than nabble www.postgresql-archive.org, that is
 mobile friendly, and even *even* better as you don't include Ads.

 As you can see Pg community members are very frightened by this option that
 would permit "seamlessly interact with the mailing list" (there are many
 demands
 to ask nabble to remove it,
 
https://www.postgresql-archive.org/postgresql-archive-org-td6035059i20.html#a6059185).

 Maybe you will have a better answer if you propose a pure Read-Only mailing
 list system
 - without any possibility to reply from your site,
 - promising NO Ads for ever (and explaining how you get the money for
 running costs),
 - ...

 Are there any other mobile users here, to vote for this solution (maybe
 adding other restrictions) ?

 Regards
 PAscal
 
 
Scala (scala-lang.org) moved from a mailing-list to Discourse a while ago and 
it's in my opinion a disaster. No matter what they tell you, it does _not_ 
work well with email-only. Replying, quoting and reading history is a mess imo.
 
 
--
 Andreas Joseph Krogh



Sv: Reg: Query for DB growth size

2018-11-28 Thread Andreas Joseph Krogh
På torsdag 29. november 2018 kl. 07:41:24, skrev R.R. PRAVEEN RAJA <
rrpraveenr...@live.com <mailto:rrpraveenr...@live.com>>:
Hi All,
 
Can I get the query or steps for how to check the database growth in Postgres? 
Thanks in advance.
 
select pg_size_pretty(pg_database_size(current_database()));
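
Note this only shows the current size; to see growth you have to sample it over 
time, e.g. with a sketch like this (the table name is just an example), run 
periodically from cron or similar:

create table db_size_history (
    measured_at timestamptz not null default now(),
    size_bytes  bigint not null
);

insert into db_size_history (size_bytes)
select pg_database_size(current_database());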
  
-- Andreas Joseph Krogh




sha512sum (program) gives different result than sha512 in PG11

2019-01-14 Thread Andreas Joseph Krogh
Hi.
 
Anyone can explain why these two don't give the same result?
 
1.
$ echo "A" | sha512sum
7a296fab5364b34ce3e0476d55bf291bd41aa085e5ecf2a96883e593aa1836fed22f7242af48d54af18f55c8d1def13ec9314c92a0ba63f7663500090565  -

2.
$ psql -A -t -c "select encode(sha512('A'), 'hex')"
21b4f4bd9e64ed355c3eb676a28ebedaf6d8f17bdc365995b319097153044080516bd083bfcce66121a3072646994c8430cc382b8dc543e84880183bf856cff5
  
Thanks!
 
-- Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com <mailto:andr...@visena.com>
www.visena.com <https://www.visena.com>
 <https://www.visena.com>




Sv: Re: sha512sum (program) gives different result than sha512 in PG11

2019-01-14 Thread Andreas Joseph Krogh
På mandag 14. januar 2019 kl. 16:18:30, skrev Thomas Markus <
t.mar...@proventis.net <mailto:t.mar...@proventis.net>>:
Hi,

 echo contains a trailing carriage return. Try

 echo -n "A" | sha512sum

 regards
 Thomas  
Ha ha, didn't think of that!
 
Thanks:-)
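
For the record, a sketch reproducing the sha512sum result on the PG side by 
including the newline echo appends:

select encode(sha512(convert_to(E'A\n', 'UTF8')), 'hex');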
 
-- Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com <mailto:andr...@visena.com>
www.visena.com <https://www.visena.com>
 <https://www.visena.com>
 




Using psql variables in DO-blocks

2019-01-15 Thread Andreas Joseph Krogh
Hi all.
 
I'm trying to use a psql variable in a DO-block, but it fails:
 
\set resource_group 'Ressurser'
\set quoted_resource_group '\'' :resource_group '\''

DO $$
begin
    if not exists(SELECT * FROM tbl_group WHERE groupname = :quoted_resource_group) then
        raise notice 'Group % not found, creating it.', :quoted_resource_group;
    end if;
end;
$$;
 
ERROR:  syntax error at or near ":" 
 LINE 3: ... exists(SELECT * FROM tbl_group WHERE groupname = :quoted_re...
                                                            ^
  
Any hints?
 
-- Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com <mailto:andr...@visena.com>
www.visena.com <https://www.visena.com>
 <https://www.visena.com>




Sv: Using psql variables in DO-blocks

2019-01-15 Thread Andreas Joseph Krogh
På tirsdag 15. januar 2019 kl. 16:51:09, skrev Andreas Joseph Krogh <
andr...@visena.com <mailto:andr...@visena.com>>:
Hi all.
 
I'm trying to use a psql variable in a DO-block, but it fails:
 
[snip] 
 
Seems I was a bit lazy, here's what works:
 
\set resource_group 'Ressurser'
\set quoted_resource_group '\'' :resource_group '\''
set myvars.quoted_resource_group to :quoted_resource_group;

DO $$
begin
    if not exists(SELECT * FROM onp_group WHERE groupname = current_setting('myvars.quoted_resource_group')) then
        raise notice 'Group % not found, creating it.', current_setting('myvars.quoted_resource_group');
    end if;
end;
$$;
 
-- Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com <mailto:andr...@visena.com>
www.visena.com <https://www.visena.com>
 <https://www.visena.com>


 


Sv: lost "left join"

2019-01-16 Thread Andreas Joseph Krogh
På onsdag 16. januar 2019 kl. 11:54:21, skrev Олег Самойлов mailto:spl...@ya.ru>>:
Hi, all.

 I got some mystic behaviour of PostgreSQL, perhaps this is a bug.
 
[snip]

 But things begin be strange if I add validation by time.

 => explain select * from node as parent left join link on 
parent.node_id=link.parent left join node as child on link.child=child.node_id 
where parent.node_id=1 and current_date <@ parent.valid and current_date <@ 
link.valid and current_date <@ child.valid;
                                         QUERY PLAN
 
--
  Nested Loop  (cost=4.50..32.35 rows=1 width=112)
    ->  Nested Loop  (cost=4.35..21.88 rows=1 width=76)
          ->  Index Scan using node_pkey on node parent  (cost=0.15..8.18 
rows=1 width=36)
                Index Cond: (node_id = 1)
                Filter: (CURRENT_DATE <@ valid)
          ->  Bitmap Heap Scan on link  (cost=4.20..13.70 rows=1 width=40)
                Recheck Cond: (parent = 1)
                Filter: (CURRENT_DATE <@ valid)
                ->  Bitmap Index Scan on link_pkey  (cost=0.00..4.20 rows=6 
width=0)
                      Index Cond: (parent = 1)
    ->  Index Scan using node_pkey on node child  (cost=0.15..8.18 rows=1 
width=36)
          Index Cond: (node_id = link.child)
          Filter: (CURRENT_DATE <@ valid)
 (13 rows)

 «Left Join»’s are lost. And in the result too:

 => select * from node as parent left join link on parent.node_id=link.parent 
left join node as child on link.child=child.node_id where parent.node_id=1 and 
current_date <@ parent.valid and current_date <@ link.valid and current_date <@ 
child.valid;
  node_id | valid | parent | child | valid | node_id | valid
 -+---++---+---+-+---
 (0 rows)
 
The moment you involve columns of "left joined" relations this way in the 
WHERE-clause, the join effectively becomes an inner join: rows where the 
right-hand side is NULL are filtered out by those conditions.
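
If the intent was to keep the outer joins, the usual fix (a sketch using the 
tables from this thread) is to move those conditions into the ON-clauses so 
they restrict only the join, not the result:

select *
  from node as parent
  left join link on parent.node_id = link.parent
                and current_date <@ link.valid
  left join node as child on link.child = child.node_id
                         and current_date <@ child.valid
 where parent.node_id = 1
   and current_date <@ parent.valid;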
 
-- Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com <mailto:andr...@visena.com>
www.visena.com <https://www.visena.com>
 <https://www.visena.com>


 


Sv: Re: Geographical multi-master replication

2019-01-25 Thread Andreas Joseph Krogh
På fredag 25. januar 2019 kl. 06:45:43, skrev Andreas Kretschmer <
andr...@a-kretschmer.de <mailto:andr...@a-kretschmer.de>>: 

 Am 25.01.19 um 06:10 schrieb Jeremy Finzel:
 >
 >     The problem is that the version for BDR 1.0.7, which has an
 >     implementation for postgres 9.4, will be on end of live at the end
 >     of this year. Unfortunately the paid solution is out of our
 >     budget, so we currently have two options: find an alternative or
 >     remove the multi-region implementation. We are currently looking
 >     for alternatives.
 >
 >
 > You are missing all of the alternatives here.  Why don't you consider
 > upgrading from postgres 9.4 and with it to a supported version of
 > BDR?  There is nothing better you can do to keep your infrastructure
 > up to date, performant, secure, and actually meet your multi-master
 > needs than to upgrade to a newer version of postgres which does have
 > BDR support.
 >
 > Even "stock" postgres 9.4 is set for end of life soon. Upgrade!

 ACK!

 Sure, you have to pay for a support contract, and this isn't for free,
 but you will get a first-class support for BDR. If you really needs a
 worldwide distributed multi-master solution you should be able to buy that.


 Regards, Andreas 

To my surprise I'm unable to find a downloadable BDR3. I thought it was an 
open-source extension to vanilla pg-11, isn't that the case anymore? 

-- 
Andreas Joseph Krogh 
CTO / Partner - Visena AS 
Mobile: +47 909 56 963 
andr...@visena.com <mailto:andr...@visena.com> 
www.visena.com <https://www.visena.com> 
 <https://www.visena.com> 

Sv: SQL queries not matching on certain fields

2019-04-03 Thread Andreas Joseph Krogh
På onsdag 03. april 2019 kl. 15:06:03, skrev Felix Ableitner mailto:m...@nutomic.com>>: 
Hello,

I'm having a very strange problem with the Postgres database for my website. 
Some SQL queries are not matching on certain fields. I am running these 
commands via the psql command.

Here is a query that works as expected:

# SELECT id, "preferredUsername" FROM actor WHERE "preferredUsername"='emma';
  id   | preferredUsername
-------+-------------------
 48952 | emma
 58672 | emma
(2 rows)

The following query should work as well, because the username exists. But in 
fact, it consistently returns nothing:

# SELECT id, "preferredUsername" FROM actor WHERE "preferredUsername"='mailab';
 id | preferredUsername
----+-------------------
(0 rows)

There are some workarounds which fix the WHERE statement; all of the following 
work as expected:

SELECT id, "preferredUsername" FROM actor WHERE trim("preferredUsername")='mailab';
SELECT id, "preferredUsername" FROM actor WHERE "preferredUsername" ILIKE 'mailab';
SELECT id, "preferredUsername" FROM actor WHERE md5("preferredUsername")=md5('mailab');


Now you might think that there is something wrong with the encoding, or that the 
field contains some extra whitespace. But SHOW SERVER_ENCODING and SHOW 
CLIENT_ENCODING both show UTF8. Length and md5 sum are also exactly as expected. 
And I checked the individual bytes with get_byte(); all of them are in the range 
97-122.

About the setup, this is Postgres 10 running in Docker, on an Ubuntu VPS (see 
below for all versions etc). I had this problem before on the same setup, so I 
did an export to text file with pg_dump, and imported into a completely new 
database with psql. That fixed the problem for a few days, but it came back 
soon after.

The problem only seems to affect one or two specific columns, and only a few 
specific rows in those columns. Most other rows work normally. Affected columns 
also randomly start working again after a few days, and other columns get 
affected. I haven't noticed any kind of pattern.

You can find the table definition here: https://gitlab.com/snippets/1840320 
<https://gitlab.com/snippets/1840320>

Version info:

Postgres Docker Image: postgres:10.7-alpine
 Docker version: 18.09.2
 OS: Ubuntu 18.04.2

Please tell me if you have any idea how to fix or debug this. I already asked 
multiple people, and no one has a clue what is going on.

Best,
 Felix Ableitner
Does disabling index-scan make a difference?

SET enable_indexscan TO off;

How about dumping the relevant data and reloading it into another similar (but 
smaller) table, can you reproduce it then?
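If an index turns out to be the culprit, a minimal sketch for narrowing it down 
(the index name actor_preferred_username_idx is a placeholder; look up the real 
one with \d actor):

-- Force a sequential scan; if the row shows up now, suspect the index:
SET enable_indexscan = off;
SET enable_bitmapscan = off;
SELECT id, "preferredUsername" FROM actor WHERE "preferredUsername" = 'mailab';
RESET enable_indexscan;
RESET enable_bitmapscan;

-- If the amcheck extension is available (contrib, PG 10+), verify the btree:
CREATE EXTENSION IF NOT EXISTS amcheck;
SELECT bt_index_check('actor_preferred_username_idx');

-- If corruption is confirmed, rebuild:
REINDEX TABLE actor;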
--
Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com <mailto:andr...@visena.com>
www.visena.com <https://www.visena.com>
 <https://www.visena.com>

Sv: Transactions

2019-04-09 Thread Andreas Joseph Krogh
On Tuesday 9 April 2019 at 11:26:29, Karl Martin Skoldebrand <ks0c77...@techmahindra.com <mailto:ks0c77...@techmahindra.com>> wrote:
Hi,



Is there a way to track “transactions” by default (i.e. without anyone having 
set up anything specific)? The problem I am facing is that users are claiming 
that settings are disappearing without them doing anything to affect them. It 
would be good to be able to see what PostgreSQL thinks is going on.

*Subscriber adds Severity/BU/Service by ticking the corresponding box in 
subscriber configuration in WEBAPP. This works for some time.

*Subscriber stops receiving selected [tickets].

*Upon checking settings the selected Severity/BU/Service has been unselected.
Not "without anyone having set up anything specific", but you can change the 
setting in postgresql.conf to: log_statement = 'all' and reload the settings. 
You can now see all SQL executed in the log and can debug what's going on. -- 
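A minimal sketch of doing the same from psql, assuming superuser access 
(ALTER SYSTEM writes to postgresql.auto.conf, so postgresql.conf stays untouched):

ALTER SYSTEM SET log_statement = 'all';
SELECT pg_reload_conf();

-- revert when done debugging:
ALTER SYSTEM RESET log_statement;
SELECT pg_reload_conf();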
--
Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com <mailto:andr...@visena.com>
www.visena.com <https://www.visena.com>
 <https://www.visena.com>

Sv: RE: Transactions

2019-04-09 Thread Andreas Joseph Krogh
On Tuesday 9 April 2019 at 11:56:28, Karl Martin Skoldebrand <ks0c77...@techmahindra.com <mailto:ks0c77...@techmahindra.com>> wrote:
How much impact on performance and disk space would this or 

set log_min_duration_statement=0

have?



I have no idea as to how common this is, or when it happens, so it would need 
to run until this is reported again (or for some reasonable time if it doesn’t happen).
Well, the answer here is of course "it depends"... If you have lots of 
activity the logs will fill up quite quickly, but you can easily test this in 
production and just turn off logging again by setting it to 'none' and 
reloading the settings (no need to restart). You can also log only 
modifications by setting:

log_statement = 'mod'

Also watch out for triggers modifying stuff.
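As for log_min_duration_statement: 0 logs every statement together with its 
duration; a sketch of a middle ground, if only slow statements matter:

ALTER SYSTEM SET log_min_duration_statement = '250ms';
SELECT pg_reload_conf();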
--
Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com <mailto:andr...@visena.com>
www.visena.com <https://www.visena.com>
 <https://www.visena.com>

Re: PostgreSQL and GUI management

2023-08-15 Thread Andreas Joseph Krogh


On Tuesday 15 August 2023 at 20:43:16, Rob Sargent <robjsarg...@gmail.com <mailto:robjsarg...@gmail.com>> wrote:
[…]
Adrian, 
Much as I love it, psql is not what I would call a management tool?

We have a multi-terabyte cluster in production which we manage using psql.

Locally I use IntelliJ IDEA for development (it has syntax highlighting, code 
completion, introspection etc.). IDEA has a PostgreSQL plugin which is only 
commercially available, and uses the same components as DataGrip, AFAIK.






--
Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com <mailto:andr...@visena.com>
www.visena.com <https://www.visena.com>
 <https://www.visena.com>


Re: PostgreSQL and GUI management

2023-08-15 Thread Andreas Joseph Krogh


On Tuesday 15 August 2023 at 22:52:48, Rob Sargent <robjsarg...@gmail.com <mailto:robjsarg...@gmail.com>> wrote:
[…]
All the nice buzzwordy things: dashboards, alarm bells, point-and-click drop 
downs for config values (with tool tip descriptions of what they might do), 
coffee dispenser. Things some feel they need in a management tool.

If you need these things, I'm sure there's a budget somewhere for investing in 
available commercial tools, some already mentioned in this thread.






--
Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com <mailto:andr...@visena.com>
www.visena.com <https://www.visena.com>
 <https://www.visena.com>


Re: PostgreSQL and local HDD

2023-08-16 Thread Andreas Joseph Krogh


On Wednesday 16 August 2023 at 05:40:40, Ron <ronljohnso...@gmail.com <mailto:ronljohnso...@gmail.com>> wrote:
On 8/15/23 02:23, Jason Long wrote:
[snip]
> Does PostgreSQL have an option to increase speed?

Like a Turbo button?
It actually has that, but you'll have to sacrifice some safety.
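For example, in postgresql.conf (a sketch; the trade-offs are the point, not the exact values):

synchronous_commit = off   # may lose the last few transactions on a crash, but no corruption
# fsync = off              # fast and dangerous: a crash can corrupt the whole cluster
# full_page_writes = off   # likewise unsafe on most storage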






--
Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com <mailto:andr...@visena.com>
www.visena.com <https://www.visena.com>
 <https://www.visena.com>


Sv: Base files compatibility between PG11 and PG15

2023-08-17 Thread Andreas Joseph Krogh


On Thursday 17 August 2023 at 11:32:47, Abraham, Danny <danny_abra...@bmc.com <mailto:danny_abra...@bmc.com>> wrote:
Hi,

I have a database on Aurora@RDS.
It used v11.9, and was then upgraded automatically by AWS to v15.2. AWS states 
that it relies on compatibility provided by the PG community.
We now experience problems with the data.
Has anyone met similar issues? Compatibility of data files?
The data files are nowhere near compatible between major versions, as stated 
clearly in all release notes:

https://www.postgresql.org/docs/release/12.0/ 
<https://www.postgresql.org/docs/release/12.0/>

https://www.postgresql.org/docs/release/13.0/ 
<https://www.postgresql.org/docs/release/13.0/>

https://www.postgresql.org/docs/release/14.0/ 
<https://www.postgresql.org/docs/release/14.0/>

https://www.postgresql.org/docs/release/15.0/ 
<https://www.postgresql.org/docs/release/15.0/>



They all state the same:

“A dump/restore using pg_dumpall or use of pg_upgrade or logical replication 
is required for those wishing to migrate data from any previous release.”
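For self-hosted clusters, the pg_upgrade route looks roughly like this (paths 
and versions are placeholders; on Aurora, AWS runs the equivalent for you):

pg_upgrade \
  --old-bindir /usr/lib/postgresql/11/bin \
  --new-bindir /usr/lib/postgresql/15/bin \
  --old-datadir /var/lib/postgresql/11/main \
  --new-datadir /var/lib/postgresql/15/main \
  --check   # dry-run first; drop --check to actually upgrade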






--
Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com <mailto:andr...@visena.com>
www.visena.com <https://www.visena.com>
 <https://www.visena.com>


How to eliminate extra "NOT EXISTS"-query here?

2023-11-25 Thread Andreas Joseph Krogh


Hi, I'm testing if some dataset contains an array of elements and want to 
return all rows “not containing the specified array”, including entries in the 
master table that are not referenced at all.



I have the following schema:

drop table if exists stuff;
drop table if exists test;
CREATE TABLE test(
id varchar primary key
);

create table stuff(
id serial primary key,
test_id varchar NOT NULL REFERENCES test(id),
v varchar not null,
unique (test_id, v)
);

INSERT INTO test(id) values ('a');
INSERT INTO test(id) values ('b');
INSERT INTO test(id) values ('c');
INSERT INTO test(id) values ('d');

INSERT INTO stuff(test_id, v)
values ('a', 'x')
;

INSERT INTO stuff(test_id, v)
values ('b', 'x')
 , ('b', 'y')
;

INSERT INTO stuff(test_id, v)
values ('c', 'x')
 , ('c', 'y')
 , ('c', 'z')
;

select * from test t
WHERE NOT ARRAY['x']::varchar[] <@ (select array_agg(s.v) from stuff s WHERE s.test_id = t.id)
;

select * from test t
WHERE NOT ARRAY['x', 'y']::varchar[] <@ (select array_agg(s.v) from stuff s WHERE s.test_id = t.id)
;

select * from test t
WHERE NOT ARRAY['x', 'y', 'z']::varchar[] <@ (select array_agg(s.v) from stuff s WHERE s.test_id = t.id)
;

select * from test t
WHERE NOT ARRAY['x', 'y', 'z', 't']::varchar[] <@ (select array_agg(s.v) from stuff s WHERE s.test_id = t.id)
;

-- This works, but I'd rather not do the extra EXISTS
select * from test t
WHERE (NOT ARRAY['x', 'y', 'z', 't']::varchar[] <@ (select array_agg(s.v) from stuff s WHERE s.test_id = t.id)
OR NOT EXISTS (
select * from stuff s where s.test_id = t.id
)
 )
;



So, I want to return all entries in test not having any of ARRAY ['x', 'y', 
'z', 't'] referenced in the table stuff, and I'd like to have test.id="d" 
returned as well, but in order to do that I need to execute the “or not 
exists”-query. Is it possible to avoid that?






--
Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com <mailto:andr...@visena.com>
www.visena.com <https://www.visena.com>
 <https://www.visena.com>

Re: How to eliminate extra "NOT EXISTS"-query here?

2023-11-25 Thread Andreas Joseph Krogh


On Saturday 25 November 2023 at 17:08:28, Tom Lane <t...@sss.pgh.pa.us <mailto:t...@sss.pgh.pa.us>> wrote:
Andreas Joseph Krogh  writes:
> -- This works, but I'd rather not do the extra EXISTS
> select * from test t
> WHERE (NOT ARRAY ['x', 'y', 'z', 't']::varchar[] <@ (select array_agg(s.v) from
> stuff s WHERE s.test_id = t.id)
> OR NOT EXISTS (
> select * from stuff s where s.test_id = t.id
> )
> )
> ;

> So, I want to return all entries in test not having any of ARRAY ['x', 'y', 
> 'z', 't'] referenced in the table stuff, and I'd like to have test.id="d" 
> returned as well, but in order to do that I need to execute the “or not 
> exists”-query. Is it possible to avoid that?

Probably not directly, but perhaps you could improve the performance of
this query by converting the sub-selects into a left join:

select * from test t
 left join
 (select s.test_id, array_agg(s.v) as arr from stuff s group by s.test_id) ss
 on ss.test_id = t.id
WHERE (NOT ARRAY ['x', 'y', 'z', 't']::varchar[] <@ ss.arr)
 OR ss.test_id IS NULL;

Another possibility is

...
WHERE (ARRAY ['x', 'y', 'z', 't']::varchar[] <@ ss.arr) IS NOT TRUE

but I don't think that's more readable really, and it will save little.

In either case, this would result in computing array_agg once for
each group of test_id values in "stuffs", while your original computes
a similar aggregate for each row in "test". So whether this is better
depends on the relative sizes of the tables, although my proposal
avoids random access to "stuffs" so it will have some advantage.

regards, tom lane
Excellent, thanks!
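For reference, the suggested rewrite as a runnable query against the schema above:

select t.*
from test t
left join (
  select s.test_id, array_agg(s.v) as arr
  from stuff s
  group by s.test_id
) ss on ss.test_id = t.id
WHERE NOT (ARRAY['x', 'y', 'z', 't']::varchar[] <@ ss.arr)
   OR ss.test_id IS NULL
;
-- returns 'a', 'b', 'c' and 'd'; with ARRAY['x'] it returns only 'd'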






--
Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com <mailto:andr...@visena.com>
www.visena.com <https://www.visena.com>
 <https://www.visena.com>


Daterange question

2024-01-19 Thread Andreas Joseph Krogh


I have order-lines with start/end dates like this:

start_date DATE NOT NULL,
end_date DATE,
drange daterange NOT NULL GENERATED ALWAYS AS (daterange(start_date, end_date, '[)')) STORED

and have an index on it using gist(drange).



I want to list all order-lines which do not have an end-date set in the past, 
but do want to show lines with start-dates in the future.



This seems to do what I want:
NOT (drange << daterange(CURRENT_DATE, NULL, '[)'))
But this doesn't use the index.



Any idea how to write a query so it uses the index on drange?



Thanks.






--
Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com <mailto:andr...@visena.com>
www.visena.com <https://www.visena.com>
 <https://www.visena.com>

Re: Daterange question

2024-01-19 Thread Andreas Joseph Krogh

create table order_line (
  id serial primary key,
  start_date DATE NOT NULL,
  end_date DATE,
  drange daterange NOT NULL GENERATED ALWAYS AS (daterange(start_date, end_date, '[)')) STORED
);

CREATE INDEX order_line_not_end_idx ON order_line using gist(drange);

INSERT INTO order_line(start_date, end_date) values('2023-01-01', null);
INSERT INTO order_line(start_date, end_date) values('2023-01-01', '2024-01-01');
INSERT INTO order_line(start_date, end_date) values('2024-01-01', null);
INSERT INTO order_line(start_date, end_date) values('2025-01-01', null);

set enable_seqscan to false;

explain analyse select * from order_line WHERE (drange << daterange(CURRENT_DATE, NULL, '[)'));

-- Uses index:

                                                             QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------
 Index Scan using order_line_not_end_idx on order_line  (cost=0.14..8.15 rows=1 width=44) (actual time=0.008..0.008 rows=1 loops=1)
   Index Cond: (drange << daterange(CURRENT_DATE, NULL::date, '[)'::text))
 Planning Time: 0.043 ms
 Execution Time: 0.013 ms

explain analyse select * from order_line WHERE NOT (drange << daterange(CURRENT_DATE, NULL, '[)'));

-- Does not use index:

                                                         QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------
 Seq Scan on order_line  (cost=10000000000.00..10000000001.07 rows=3 width=44) (actual time=0.007..0.008 rows=3 loops=1)
   Filter: (NOT (drange << daterange(CURRENT_DATE, NULL::date, '[)'::text)))
   Rows Removed by Filter: 1
 Planning Time: 0.077 ms
 Execution Time: 0.015 ms





--
Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com <mailto:andr...@visena.com>
www.visena.com <https://www.visena.com>
 <https://www.visena.com>


Re: Daterange question

2024-01-19 Thread Andreas Joseph Krogh


On Saturday 20 January 2024 at 06:35:07, Tom Lane <t...@sss.pgh.pa.us <mailto:t...@sss.pgh.pa.us>> wrote:
[…]
Well, we can definitively state that the NOT makes this unindexable.
You need a WHERE clause that looks like
indexed-column indexable-operator pseudo-constant
which this isn't, nor does << have a negator operator that could
allow the NOT to be simplified out.

Wouldn't
drange && daterange(CURRENT_DATE, NULL, '[)')
serve the purpose? That should be indexable.

regards, tom lane
Yes it will, thanks.
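Against the test table above, the &&-form is satisfied by the gist index; a 
quick sketch (the plan shape is the point, actual numbers will differ):

explain analyse select * from order_line WHERE drange && daterange(CURRENT_DATE, NULL, '[)');

-- Expected plan:
--  Index Scan using order_line_not_end_idx on order_line
--    Index Cond: (drange && daterange(CURRENT_DATE, NULL::date, '[)'::text))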






--
Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com <mailto:andr...@visena.com>
www.visena.com <https://www.visena.com>
 <https://www.visena.com>


Re: Improving pg_dump performance when handling large numbers of LOBs

2024-02-05 Thread Andreas Joseph Krogh


On Monday 5 February 2024 at 20:55:43, Wyatt Tellis <wyatt.tel...@gmail.com <mailto:wyatt.tel...@gmail.com>> wrote:

Yes, the LOBs themselves are tiny, but there are a lot of them (~150 million) 
which seem to be slowing down pg_dump. Note, we did not design/build this 
system and agree that use of LOBs for this purpose was not necessary.

Well, the data is there nonetheless; is it an option to convert it to bytea 
before migration?
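A sketch of such a conversion, assuming a hypothetical table docs with an oid 
column lob_oid pointing at the large objects (lo_get() returns a large object's 
contents as bytea):

ALTER TABLE docs ADD COLUMN data bytea;
UPDATE docs SET data = lo_get(lob_oid);
-- once verified, free the large objects:
-- SELECT lo_unlink(lob_oid) FROM docs;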






--
Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com <mailto:andr...@visena.com>
www.visena.com <https://www.visena.com>
 <https://www.visena.com>


Re: Improving pg_dump performance when handling large numbers of LOBs

2024-02-05 Thread Andreas Joseph Krogh


On Monday 5 February 2024 at 21:10:53, Wyatt Tellis <wyatt.tel...@gmail.com <mailto:wyatt.tel...@gmail.com>> wrote:

No, we don't have the ability to make schema changes, and the schema in the 
PG15 copy needs to match what's in the PG12 versions.

Well then, I guess it boils down to how badly the ones in charge want this 
migration…






--
Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com <mailto:andr...@visena.com>
www.visena.com <https://www.visena.com>
 <https://www.visena.com>

