Case Insensitive

2019-03-28 Thread Sridhar N Bamandlapally
Hi PG-General and Pgsql-Admin

Can we achieve CASE INSENSITIVE in PostgreSQL?

I mean, I need it to work the way below:

postgres=# select * from emp;
 eid | ename
-----+-------
   1 | aaa
   2 | AAA
(2 rows)



postgres=# select * from emp where ename='aaa';
 eid | ename
-----+-------
   1 | aaa
   2 | AAA
(2 rows)
-- the above result is just a hand-made example


Thanks
Sridhar


Re: Case Insensitive

2019-03-28 Thread Sameer Kumar
On Thu, 28 Mar, 2019, 4:20 PM Sridhar N Bamandlapally, <
sridhar@gmail.com> wrote:

> Hi PG-General and Pgsql-Admin
>
> Can we achieve CASE INSENSITIVE in PostgreSQL?
>

You are perhaps migrating from another RDBMS where this kind of behavior is
considered a feature.



> I mean, need below way
>
> postgres=# select * from emp;
>  eid | ename
> -----+-------
>    1 | aaa
>    2 | AAA
> (2 rows)
>
>
>
> postgres=# select * from emp where ename='aaa';
>  eid | ename
> -----+-------
>    1 | aaa
>    2 | AAA
> (2 rows)
> -- the above result is just a hand-made example
>
>
You can write a query with upper function:

select * from emp where upper(ename)=upper('aaa');

Or you can overload the "=" operator for text arguments.
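A minimal sketch of that second approach, assuming a plain text column. The function and operator names here are made up for illustration (overloading "=" itself for (text, text) would clash with the built-in operator, so a distinct operator name is used):

```sql
-- Case-insensitive equality wrapped in an IMMUTABLE function,
-- exposed through a custom operator.
CREATE FUNCTION texteq_ci(a text, b text) RETURNS boolean AS $$
  SELECT upper(a) = upper(b);
$$ LANGUAGE sql IMMUTABLE STRICT;

CREATE OPERATOR ==== (
  LEFTARG = text,
  RIGHTARG = text,
  PROCEDURE = texteq_ci,
  COMMUTATOR = ====
);

-- select * from emp where ename ==== 'aaa';
```

Being IMMUTABLE, the underlying function can also back an expression index.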



> Thanks
> Sridhar
>
>


Re: Case Insensitive

2019-03-28 Thread Ben Madin
Or you can just use `ilike`:

SELECT * FROM emp WHERE ename ilike 'aaa';

https://www.postgresql.org/docs/10/sql-select.html#SQL-WHERE

cheers

Ben

On Thu, 28 Mar 2019 at 16:24, Sameer Kumar  wrote:

>
>
> On Thu, 28 Mar, 2019, 4:20 PM Sridhar N Bamandlapally, <
> sridhar@gmail.com> wrote:
>
>> Hi PG-General and Pgsql-Admin
>>
>> Can we achieve CASE INSENSITIVE in PostgreSQL?
>>
>
> You are perhaps migrating from another RDBMS where this kind of behavior is
> considered a feature.
>
>
>
>> I mean, need below way
>>
>> postgres=# select * from emp;
>>  eid | ename
>> -----+-------
>>    1 | aaa
>>    2 | AAA
>> (2 rows)
>>
>>
>>
>> postgres=# select * from emp where ename='aaa';
>>  eid | ename
>> -----+-------
>>    1 | aaa
>>    2 | AAA
>> (2 rows)
>> -- the above result is just a hand-made example
>>
>>
> You can write a query with upper function:
>
> select * from emp where upper(ename)=upper('aaa');
>
> Or you can overload the "=" operator for text arguments.
>
>
>
>> Thanks
>> Sridhar
>>
>>

-- 


Dr Ben Madin

BVMS MVPHMgmt PhD MANZCVS GAICD
Managing Director
Mobile:
+61 448 887 220 <+61448887220>
E-mail:
b...@ausvet.com.au
Website:
www.ausvet.com.au
Skype: benmadin
Address:
5 Shuffrey Street
Fremantle, WA 6160
Australia


Re: Case Insensitive

2019-03-28 Thread Murtuza Zabuawala
On Thu, Mar 28, 2019 at 1:50 PM Sridhar N Bamandlapally <
sridhar@gmail.com> wrote:

> Hi PG-General and Pgsql-Admin
>
> Can we achieve CASE INSENSITIVE in PostgreSQL?
>

You can try
select * from emp where ename ILIKE 'aaa';


>
> I mean, need below way
>
> postgres=# select * from emp;
>  eid | ename
> -----+-------
>    1 | aaa
>    2 | AAA
> (2 rows)
>
>
>
> postgres=# select * from emp where ename='aaa';
>  eid | ename
> -----+-------
>    1 | aaa
>    2 | AAA
> (2 rows)
> -- the above result is just a hand-made example
>
>
> Thanks
> Sridhar
>
>


Re: Case Insensitive

2019-03-28 Thread Andreas Kretschmer




On 28.03.19 at 09:20, Sridhar N Bamandlapally wrote:

Hi PG-General and Pgsql-Admin

Can we achieve CASE INSENSITIVE in PostgreSQL?


test=# create extension citext;
CREATE EXTENSION
test=*# create table emp (eid int, ename citext);
CREATE TABLE
test=*# insert into emp values (1, 'aaa');
INSERT 0 1
test=*# insert into emp values (2, 'AAA');
INSERT 0 1
test=*# select * from emp where ename = 'aaa';
 eid | ename
-----+-------
   1 | aaa
   2 | AAA
(2 rows)


Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com





Re: Case Insensitive

2019-03-28 Thread Ron

On 3/28/19 3:23 AM, Sameer Kumar wrote:
[snip]

You can write a query with upper function:

select * from emp where upper(ename)=upper('aaa');


That's a guaranteed table scan.


--
Angular momentum makes the world go 'round.


Re: Case Insensitive

2019-03-28 Thread Steve Atkins



> On Mar 28, 2019, at 8:29 AM, Ron  wrote:
> 
> On 3/28/19 3:23 AM, Sameer Kumar wrote:
> [snip]
>> You can write a query with upper function:
>> 
>> select * from emp where upper(ename)=upper('aaa');
> 
> That's a guaranteed table scan.

Unless you have an index on upper(ename).

Cheers,
  Steve
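A sketch of that expression index (the index name is illustrative):

```sql
CREATE INDEX emp_ename_upper_idx ON emp (upper(ename));

-- The predicate matches the indexed expression, so the planner
-- can choose an index scan once the table is large enough:
SELECT * FROM emp WHERE upper(ename) = upper('aaa');
```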





Re: Case Insensitive

2019-03-28 Thread Sameer Kumar
On Thu, 28 Mar, 2019, 4:28 PM Andreas Kretschmer, 
wrote:

>
>
> On 28.03.19 at 09:20, Sridhar N Bamandlapally wrote:
> > Hi PG-General and Pgsql-Admin
> >
> > Can we achieve CASE INSENSITIVE in PostgreSQL?
>
> test=# create extension citext;
> CREATE EXTENSION
> test=*# create table emp (eid int, ename citext);
> CREATE TABLE
> test=*# insert into emp values (1, 'aaa');
> INSERT 0 1
> test=*# insert into emp values (2, 'AAA');
> INSERT 0 1
> test=*# select * from emp where ename = 'aaa';
>  eid | ename
> -----+-------
>    1 | aaa
>    2 | AAA
> (2 rows)
>


Ummm... Will it use an index (a BTree index)?


>
> Regards, Andreas
>
> --
> 2ndQuadrant - The PostgreSQL Support Company.
> www.2ndQuadrant.com
>
>
>
>


Re: Case Insensitive

2019-03-28 Thread Sameer Kumar
On Thu, 28 Mar, 2019, 4:33 PM Steve Atkins,  wrote:

>
>
> > On Mar 28, 2019, at 8:29 AM, Ron  wrote:
> >
> > On 3/28/19 3:23 AM, Sameer Kumar wrote:
> > [snip]
> >> You can write a query with upper function:
> >>
> >> select * from emp where upper(ename)=upper('aaa');
> >
> > That's a guaranteed table scan.
>
> Unless you have an index on upper(ename).
>

Yup, with this or with overloading "=", one can create an expression-based
index and benefit from it.


> Cheers,
>   Steve
>
>
>
>


Re: Case Insensitive

2019-03-28 Thread Sridhar N Bamandlapally
With ILIKE or the CITEXT extension, is there any impact on indexes, such
as the planner not picking an index?

ILIKE is a case-insensitive variant of the LIKE operator, not of =.

CITEXT seems to offer some possibilities.

Thanks
Sridhar
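For what it's worth, a plain btree index indeed cannot serve ILIKE, but a trigram index can. A sketch, assuming the pg_trgm extension is available (the index name is illustrative):

```sql
CREATE EXTENSION IF NOT EXISTS pg_trgm;
CREATE INDEX emp_ename_trgm_idx ON emp USING gin (ename gin_trgm_ops);

-- LIKE/ILIKE predicates can now be served by the trigram index:
SELECT * FROM emp WHERE ename ILIKE 'aaa';
```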


On Thu, Mar 28, 2019 at 2:07 PM Sameer Kumar 
wrote:

>
>
> On Thu, 28 Mar, 2019, 4:33 PM Steve Atkins,  wrote:
>
>>
>>
>> > On Mar 28, 2019, at 8:29 AM, Ron  wrote:
>> >
>> > On 3/28/19 3:23 AM, Sameer Kumar wrote:
>> > [snip]
>> >> You can write a query with upper function:
>> >>
>> >> select * from emp where upper(ename)=upper('aaa');
>> >
>> > That's a guaranteed table scan.
>>
>> Unless you have an index on upper(ename).
>>
>
> Yup, with this or overloading "=", one can create an expression based
> index and benefit from it.
>
>
>> Cheers,
>>   Steve
>>
>>
>>
>>


Re: Case Insensitive

2019-03-28 Thread Ron

On 3/28/19 3:33 AM, Steve Atkins wrote:



On Mar 28, 2019, at 8:29 AM, Ron  wrote:

On 3/28/19 3:23 AM, Sameer Kumar wrote:
[snip]

You can write a query with upper function:

select * from emp where upper(ename)=upper('aaa');

That's a guaranteed table scan.

Unless you have an index on upper(ename).


Are you sure?  I thought the left part had to be immutable for the query
optimizer to decide to use an index (and upper(ename) is mutable).


--
Angular momentum makes the world go 'round.




Re: Case Insensitive

2019-03-28 Thread Peter Eisentraut
On 2019-03-28 09:20, Sridhar N Bamandlapally wrote:
> Can we achieve CASE INSENSITIVE in PostgreSQL?

Use the citext extension.

In PostgreSQL 12, there will be support for case-insensitive collations.
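A sketch of what that looks like, assuming an ICU-enabled build of PostgreSQL 12 (the collation and table names are illustrative):

```sql
-- A nondeterministic ICU collation that compares case-insensitively:
CREATE COLLATION ci (provider = icu, locale = 'und-u-ks-level2', deterministic = false);

CREATE TABLE emp_ci (eid int, ename text COLLATE ci);
INSERT INTO emp_ci VALUES (1, 'aaa'), (2, 'AAA');

-- Plain equality is now case-insensitive for this column:
SELECT * FROM emp_ci WHERE ename = 'aaa';
```

Note that LIKE/ILIKE are not supported on columns with a nondeterministic collation.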

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: Case Insensitive

2019-03-28 Thread Steve Atkins



> On Mar 28, 2019, at 9:08 AM, Ron  wrote:
> 
> On 3/28/19 3:33 AM, Steve Atkins wrote:
>> 
>>> On Mar 28, 2019, at 8:29 AM, Ron  wrote:
>>> 
>>> On 3/28/19 3:23 AM, Sameer Kumar wrote:
>>> [snip]
 You can write a query with upper function:
 
 select * from emp where upper(ename)=upper('aaa');
>>> That's a guaranteed table scan.
>> Unless you have an index on upper(ename).
> 
> Are you sure?  I thought the left part had to be immutable for the query
> optimizer to decide to use an index (and upper(ename) is mutable).

Yeah. Case insensitive searches like this are pretty much the first example 
given for why you might want to use an expression index.

The expression in an expression index has to be immutable, but upper() is - it 
will always give the same output for a given input. (For values of "always" 
that probably depend on not performing major surgery on collations, but that 
falls into the "lie to the planner, get rotten results" category).

Check "\df+ upper"

Cheers,
  Steve





Re: Case Insensitive

2019-03-28 Thread Andy Anderson
Would this also select characters with diacritical marks? For example,

 eid | ename
-----+-------
   1 | aaa
   2 | AAA
   3 | áäâ
   4 | āåȧ

— Andy

> On Mar 28, 2019, at 4:26 AM, Ben Madin  wrote:
> 
> Or you can just use `ilike`:
> 
> SELECT * FROM emp WHERE ename ilike 'aaa';
> 
> https://www.postgresql.org/docs/10/sql-select.html#SQL-WHERE 
> 
> 
> cheers
> 
> Ben
> 
On Thu, 28 Mar 2019 at 16:24, Sameer Kumar wrote:
> 
> 
> On Thu, 28 Mar, 2019, 4:20 PM Sridhar N Bamandlapally wrote:
> Hi PG-General and Pgsql-Admin
> 
> Can we achieve CASE INSENSITIVE in PostgreSQL?
> 
> You are perhaps migrating from another RDBMS where this kind of behavior is
> considered a feature.
> 
> 
> 
> I mean, need below way
> 
> postgres=# select * from emp;
>  eid | ename
> -----+-------
>    1 | aaa
>    2 | AAA
> (2 rows)
> 
> 
> postgres=# select * from emp where ename='aaa';
>  eid | ename
> -----+-------
>    1 | aaa
>    2 | AAA
> (2 rows)
> -- the above result is just a hand-made example
> 
> 
> You can write a query with upper function:
> 
> select * from emp where upper(ename)=upper('aaa');
> 
> Or you can overload the "=" operator for text arguments.
> 
> 
> 
> Thanks
> Sridhar
> 
> 
> 





Re: Case Insensitive

2019-03-28 Thread Andreas Kretschmer




On 28.03.19 at 09:33, Sameer Kumar wrote:


test=*# select * from emp where ename = 'aaa';
 eid | ename
-----+-------
   1 | aaa
   2 | AAA
(2 rows)



Ummm... Will it use an index (a BTree index)?



test=# explain select * from emp where ename = 'aaa';
                     QUERY PLAN
-----------------------------------------------------
 Seq Scan on emp  (cost=0.00..25.88 rows=6 width=36)
   Filter: (ename = 'aaa'::citext)
(2 rows)

test=*# set enable_seqscan to off;
SET
test=*# explain select * from emp where ename = 'aaa';
                        QUERY PLAN
----------------------------------------------------------
 Seq Scan on emp  (cost=100.00..125.88 rows=6 width=36)
   Filter: (ename = 'aaa'::citext)
(2 rows)

test=*# create index emp_ename on emp(ename);
CREATE INDEX
test=*# explain select * from emp where ename = 'aaa';
                              QUERY PLAN
----------------------------------------------------------------------
 Index Scan using emp_ename on emp  (cost=0.13..8.14 rows=1 width=36)
   Index Cond: (ename = 'aaa'::citext)
(2 rows)

test=*#


Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com





Re: plctl extension issue postgresql 11.2

2019-03-28 Thread Prakash Ramakrishnan
Hi Adrian,

I tried the command you gave but it's not working. Please check:

[root@brurhenaaa001 postgres]# yum install postgresql11-tcl
Loaded plugins: enabled_repos_upload, langpacks, package_upload,
product-id, search-disabled-repos, subscription-manager
rhel-7-server-eus-optional-rpms
   | 2.0 kB
00:00:00
rhel-7-server-eus-satellite-tools-6.2-rpms
  | 2.1 kB  00:00:00
rhel-7-server-extras-rpms
   | 2.0 kB
00:00:00
rhel-7-server-rpms
  | 2.0 kB  00:00:00
rhel-7-server-satellite-tools-6.2-rpms
  | 2.1 kB  00:00:00
rhel-server-rhscl-7-eus-rpms
  | 2.0 kB  00:00:00
tnc_Extra_Packages_for_Enterprise_Linux_7_Extra_Packages_for_Enterprise_Linux_7
 | 2.1 kB  00:00:00
No package postgresql11-tcl available.
Error: Nothing to do
Uploading Enabled Repositories Report
Loaded plugins: langpacks, product-id, subscription-manager


On Wed, Mar 27, 2019 at 9:53 PM Adrian Klaver 
wrote:

> On 3/27/19 8:32 AM, Prakash Ramakrishnan wrote:
> > Hi Adrian,
> >
> > Sorry for the late reply please find the below details,
>
> Again I am not a RH user so take the following with that in mind.
>
> What if you do?:
>
> sudo yum install postgresql11-tcl
>
> >
> > [root@brurhenaaa001 postgres]# yum list installed | grep -i postgresql
> > postgresql.x86_64 9.2.24-1.el7_5
> >   @rhel-7-server-rpms
> > postgresql-devel.x86_64   9.2.24-1.el7_5
> >   @rhel-7-server-rpms
> > postgresql-jdbc.noarch9.2.1002-6.el7_5
> >   @rhel-7-server-rpms
> > postgresql-libs.x86_649.2.24-1.el7_5
> >   @rhel-7-server-rpms
> > postgresql-odbc.x86_6409.03.0100-2.el7
> >   @rhel-7-server-rpms
> > postgresql-pgpool-II.x86_64   3.4.6-1.el7
> >
> @tnc_Extra_Packages_for_Enterprise_Linux_7_Extra_Packages_for_Enterprise_Linux_7
> > postgresql-pgpool-II-devel.x86_64 3.4.6-1.el7
> >
> @tnc_Extra_Packages_for_Enterprise_Linux_7_Extra_Packages_for_Enterprise_Linux_7
> > postgresql-pgpool-II-extensions.x86_64
> > postgresql-plruby.x86_64  0.5.3-13.el7
> >
>  
> @tnc_Extra_Packages_for_Enterprise_Linux_7_Extra_Packages_for_Enterprise_Linux_7
> > postgresql-plruby-doc.x86_64  0.5.3-13.el7
> >
>  
> @tnc_Extra_Packages_for_Enterprise_Linux_7_Extra_Packages_for_Enterprise_Linux_7
> > postgresql-pltcl.x86_64   9.2.24-1.el7_5
> >   @rhel-7-server-rpms
> > postgresql-server.x86_64  9.2.24-1.el7_5
> >   @rhel-7-server-rpms
> > *postgresql11.x86_64   11.2-1PGDG.rhel7   @pgdg11*
> > *postgresql11-contrib.x86_64   11.2-1PGDG.rhel7   @pgdg11*
> > *postgresql11-devel.x86_64 11.2-1PGDG.rhel7   @pgdg11*
> > *postgresql11-libs.x86_64  11.2-1PGDG.rhel7   @pgdg11*
> > *postgresql11-llvmjit.x86_64   11.2-1PGDG.rhel7   @pgdg11*
> > *postgresql11-server.x86_6411.2-1PGDG.rhel7   @pgdg11*
> >
> >
> >
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


-- 


Regards,
Prakash.R
Postgresql DBA


Re: Case Insensitive

2019-03-28 Thread Tim Clarke
On 28/03/2019 11:10, Andy Anderson wrote:
Would this also select characters with diacritical marks? For example,

 eid | ename
-----+-------
   1 | aaa
   2 | AAA
   3 | áäâ
   4 | āåȧ

— Andy



For that you need https://www.postgresql.org/docs/10/unaccent.html
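A sketch combining unaccent with lower() for accent- and case-insensitive matching (note that unaccent() is not marked IMMUTABLE by default, so indexing this expression would need an extra immutable wrapper function):

```sql
CREATE EXTENSION IF NOT EXISTS unaccent;

-- Aims to match 'aaa', 'AAA', 'áäâ', ... (exact coverage depends
-- on the unaccent rules file in use):
SELECT * FROM emp WHERE lower(unaccent(ename)) = lower(unaccent('aaa'));
```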


Tim Clarke



Main: +44 (0)1376 503500 | Fax: +44 (0)1376 503550
Web: https://www.manifest.co.uk/



Minerva Analytics Ltd
9 Freebournes Court | Newland Street | Witham | Essex | CM8 2BL | England





Re: Case Insensitive

2019-03-28 Thread Brad Nicholson

Andreas Kretschmer  wrote on 03/28/2019 07:28:53
AM:


> >
> > Ummm... Will it use an index (a BTree index)?
> >
>
> test=# explain select * from emp where ename = 'aaa';
>                      QUERY PLAN
> -----------------------------------------------------
>   Seq Scan on emp  (cost=0.00..25.88 rows=6 width=36)
>     Filter: (ename = 'aaa'::citext)
> (2 rows)
>
> test=*# set enable_seqscan to off;
> SET
> test=*# explain select * from emp where ename = 'aaa';
>                         QUERY PLAN
> ----------------------------------------------------------
>   Seq Scan on emp  (cost=100.00..125.88 rows=6 width=36)
>     Filter: (ename = 'aaa'::citext)
> (2 rows)
>
> test=*# create index emp_ename on emp(ename);
> CREATE INDEX
> test=*# explain select * from emp where ename = 'aaa';
>                               QUERY PLAN
> ----------------------------------------------------------------------
>   Index Scan using emp_ename on emp  (cost=0.13..8.14 rows=1 width=36)
>     Index Cond: (ename = 'aaa'::citext)
> (2 rows)


I used citext heavily in a past project and was quite happy with it.  It
was never a source of performance issues.

Brad.


Re: stale WAL files?

2019-03-28 Thread Michael Paquier
On Tue, Mar 26, 2019 at 09:50:37AM -0600, Rob Sargent wrote:
> No, sorry I should have said that up front. We’re simple folk.

What is the WAL position (LSN) where Postgres is writing to and what
is the set of WAL segments in pg_wal (or pg_xlog if that's a server
older than 10)?  Please double-check the configuration value of
wal_keep_segments, and as mentioned upthread, could you make sure that
you have no replication slots active?  This can be done simply by
querying pg_replication_slots.  Please note as well that checkpoints
are server-wide, so there is no point to run them on all databases.
Only one command will be effective for all databases.
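The checks above can be run in one psql session; on a 10+ server, something like:

```sql
-- Current write position and the WAL segment file it maps to:
SELECT pg_current_wal_lsn(), pg_walfile_name(pg_current_wal_lsn());

-- WAL retention setting:
SHOW wal_keep_segments;

-- Any active slot holds back WAL from restart_lsn onward:
SELECT slot_name, active, restart_lsn FROM pg_replication_slots;
```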
--
Michael




Re: plctl extension issue postgresql 11.2

2019-03-28 Thread Adrian Klaver

On 3/28/19 3:20 AM, Prakash Ramakrishnan wrote:

Hi Adrian,

I tried the command you gave but it's not working. Please check:


First I posted the wrong package name previously. For the Tcl PL it 
should be:


postgresql11-pltcl

Still, postgresql11-tcl is a valid package in the PGDG repos (it is for a
Tcl client library). Yum is trying to find postgresql11-tcl in the RHEL
repo, not the PGDG repo. Yet packages have been installed from the PGDG
repo (from a previous post) in the past:


postgresql11.x86_64   11.2-1PGDG.rhel7   @pgdg11
postgresql11-contrib.x86_64   11.2-1PGDG.rhel7   @pgdg11
postgresql11-devel.x86_64 11.2-1PGDG.rhel7   @pgdg11
postgresql11-libs.x86_64  11.2-1PGDG.rhel7   @pgdg11
postgresql11-llvmjit.x86_64   11.2-1PGDG.rhel7   @pgdg11
postgresql11-server.x86_6411.2-1PGDG.rhel7   @pgdg11

You will need to sort out how the above packages were installed and
replicate that for the pltcl package. I cannot be of more help as I do
not use RH. Maybe someone else can chime in.




[root@brurhenaaa001 postgres]# yum install postgresql11-tcl
Loaded plugins: enabled_repos_upload, langpacks, package_upload, 
product-id, search-disabled-repos, subscription-manager
rhel-7-server-eus-optional-rpms  
                                                              | 2.0 kB  
00:00:00
rhel-7-server-eus-satellite-tools-6.2-rpms  
                                                               | 2.1 kB  
00:00:00
rhel-7-server-extras-rpms
                                                              | 2.0 kB  
00:00:00
rhel-7-server-rpms  
                                                               | 2.0 kB  
00:00:00
rhel-7-server-satellite-tools-6.2-rpms  
                                                               | 2.1 kB  
00:00:00
rhel-server-rhscl-7-eus-rpms
                                                               | 2.0 kB  
00:00:00
tnc_Extra_Packages_for_Enterprise_Linux_7_Extra_Packages_for_Enterprise_Linux_7  
                                                      | 2.1 kB  00:00:00

No package postgresql11-tcl available.
Error: Nothing to do
Uploading Enabled Repositories Report
Loaded plugins: langpacks, product-id, subscription-manager





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Key encryption and relational integrity

2019-03-28 Thread Moreno Andreo

  
  
On 27/03/2019 07:42, Tony Shelver wrote:

> Not in Europe, but have worked a bit with medical records systems in the
> USA, including sharing across providers.
>
> The primary key of the user should be system generated, and this is
> meaningless from a user identity standpoint.

It is, I was planning to use UUID.

> If you encrypt user name and other significant personal data on the user
> id record, you can query the id column all day long, and there would be
> no way to identify who the user is without the encryption key.

That's the first idea that came to my mind. Unfortunately, the customer
needs to do "real time" search in personal data, so for each key that's
pressed I need to query the whole table filtering the encrypted value (1st
layer of slowness) with LIKE (2nd layer of slowness) against the value the
user is typing. I ran a test on my i7-2660 with SSD and 16G RAM, and on a
2500-row table these queries last about 2 seconds per keypress. So it's
not the way to go, we have to find something different. Same thing with
clinical data: it would be a mess, because when I open the patient
clinical record I need to see all treatments, all drugs that have been
prescribed, all diseases and so on, and it would be not-so-fast if every
clinical data name (i.e. all drug names) were encrypted.

> The only other way to do it would be to store the encrypted key value in
> both user.id and medications.user_id. That would encrypt the data and
> maintain relational integrity.

Hmmm... if user.id and medications.user_id are the same, I can link user
with medication... and the GDPR rule does not apply. Or am I missing
something?

> For US medical record systems, there is a strict act (HIPAA) that
> specifies both privacy and security rules, with lists of what is regarded
> as sensitive or private information, what can be shared with various
> providers and outside organizations, and so on. As far as user name goes,
> that is almost never a decisive form of identification for any person in
> a system.

GDPR is almost the same concept, even if some rules may differ.

> While GDPR is relatively young and untested, surely someone in your
> organization (or outside) has a list of the relevant rules broken down to
> specific requirements.

You sure? :-)

> Also, securing the data internally on a database system MAY have very
> different requirements to making that data visible to applications or
> transmitting it to external parties.

Profiling, IMHO, has to be designed in the application; here I'm trying to
find a way so that nobody can, without using the application, match a
patient with their clinical records (i.e. someone breaking into the
server -- a data breach).

> Storing the user id in plain on the medications record and encrypted on
> the user primary key would seem meaningless, assuming some meaning could
> be derived from a system generated ID.

It is a system-generated ID; obviously the query is more logical and
quicker if I look from master into detail, so I SELECT something FROM
medications WHERE medications.user_id = encrypt(user.id). Encrypting the
(ex-)FK appears the best solution so far, but I'm afraid of the
consequences of losing RI.

> I would suggest you sit down with the application / legal guys generating
> the requirements to see what the real rules are. If they want everything
> encrypted, then automatic primary key generation in the DB has to be
> invalidated, and they should provide that from the application side.

Having everything encrypted would be a big performance hit. We are trying
to achieve the best performance with the right masking.

Thanks
Moreno.-


  






Re: Data entry / data editing tools (more end-user focus).

2019-03-28 Thread Adrian Klaver

On 3/27/19 11:49 PM, Tony Shelver wrote:

Please reply to the list also; more eyes on the problem. CCing the list.

My take on below is since you are feeding a Website why not use Web 
technologies for your data entry. My language of choice is Python. I 
have done something similar to this(on small scale) using the Django 
framework. For something lighter weight there is Flask. Then your client 
becomes a browser and you do not have to distribute forms around. You 
could integrate with the existing Web apps you are using e.g. SnipCart.



Actually I found a possibility.  LibreOffice Base on top of PG lets me 
paste photos into a Postgresql bytea field no problem.  MS Access should 
work well also, but I am not going to buy it, and running Ubuntu most of 
the time.

Possibly will distribute the Base forms to select users to enter data.
We are a startup company, so this is an affordable temporary fix, until 
the product I have been looking at matures, or we can roll our own.


We are building a company website, including an eStore, and have a few 
hundred products to load and maintain. Our product data currently isn't 
suitable for a sales catalog.
(Brands, categories, products, pricing and deal info, specials, images, 
product comparisons and so on).


Right now I input / maintain this via CSV files maintained through a 
spreadsheet  (LibreOffice Calc) which our site generator (Jekyll) uses 
to build out the static HTML product pages automatically.
This is really quick to enter basic data, but I have to manually
maintain image uploads, image names and so on in the
spreadsheet and through individual file uploads.
preferably 3 and up to 6 photos per product to maintain.  Call it a 1000 
images right now, and that will only go up.
Invalid text / characters in product descriptions and so on can break 
the CSV as well.


There are headless CMS solutions out on the market targeting this same 
area, but for various reasons the suitable ones are still maturing and 
shaking out in the marketplace, so I am not in a hurry to make a choice.


So the goal is to replace CSV with JSON file input.  This will also make 
my life easier for more complex structures such as multiple categories 
and specs per product.
I also want to migrate text that can change from the HTML pages into the 
database for easier changes by end users. For this the users could use
a WYSIWYG Markdown editor, and just cut and paste the Markdown into Base
forms when finished. This will be converted to HTML at build time by the
Jekyll static site generator or a script.


So the proposed solution:
1. Create the database in Postgresql.
2. Link Base or other tool to it and design input forms where necessary

3. Enter the data through Base into PG including images, MarkDown / HTML 
text, long descriptions and so on.
3a. If I don't get a suitable CMS going, I could spend some time 
developing a Vue/Quasar/Nuxt whatever front end to handle this, in 
several months time.


4. Pull the data from Postgres using Python (Psycopg2 will handle 
images). Or a node.js app once my JS skills improve.
4A: optionally use PostgREST, Postgraphile, Python Graphene or another
tool to create an externally accessible API, and then use a Python or
JavaScript module to pull the data out.


5. This program will then write the JSON product file to the website 
data source directory with image tags, and upload the files to the image 
store.  Also create product item HTML page templates or or modify HTML 
content where necessary.
6. At this stage the Jekyll static site generator will detect the new 
JSON data and any changed pages, and regenerate all changed pages, move 
images and so on.


7. Git commit will push the generated site to Github, and Git will then 
send everything to our CDN.


There is no traditional web server / app server / db server setup as you 
would find for most websites using, for example, Wordpress, Magento 
commerce or other tools.  Just the CDN.


Everything is static HTML and some javascript.  Because there is no 
backend system, database or anything else at run time,just when 
generating the site,  I am not concerned about performance except at 
site build time, which will not happen that often.  All the SEO data 
(w3schema / Google, OG / Facebook and Twitter cards) is automatically 
built into the templates and fleshed out by our build process, so it 
exists as searchable static content on our page.


Further down the road we will slowly migrate to a front-end javascript 
framework like Vue / Nuxt or React / Next, where our site will remain 
mostly static, with JS in the browser talking to back end hosted 
services.  We already interact directly from the browser with SnipCart 
for shopping card, order management and payment gateway services.


Not sure if that helps explain the problem space a bit better.



On Wed, 27 Mar 2019 at 16:15, Adrian Klaver wrote:


On 3/27/19 3:48 AM, Tony Shelver

Re: Key encryption and relational integrity

2019-03-28 Thread Adrian Klaver

On 3/28/19 7:29 AM, Moreno Andreo wrote:

Il 27/03/2019 07:42, Tony Shelver ha scritto:
Not in Europe, but have worked a bit with medical records systems in 
the USA, including sharing across providers.


The only other way to do it would be to store the encrypted key value 
in both user.id  and medications.user_id.  That would 
encrypt the data and maintain relational integrity.
Hmmm... if user.id and medications.user_id are the same, I can link user 
with medication... and GDPR rule does not apply. or am I missing 
something?


Yes the link means that someone could use the medications.user_id to 
fetch the rest of the user information from the user table. Unless you 
encrypted that information also, which I gather you do not want to do 
for performance reasons.
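One commonly suggested variant of the encrypted-link idea is a keyed pseudonym rather than reversible encryption. A rough sketch with the pgcrypto extension (the key value and names here are purely illustrative, and in practice the key must live outside the database):

```sql
CREATE EXTENSION IF NOT EXISTS pgcrypto;

-- Deterministic keyed digest: the same id and key always yield the
-- same value, so it can still serve as a join/lookup key, but the
-- original id cannot be recovered without the key.
SELECT encode(hmac('42', 'app-secret-key', 'sha256'), 'hex');
```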



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: plctl extension issue postgresql 11.2

2019-03-28 Thread Tom Lane
Adrian Klaver  writes:
> Still postgresql11-tcl is a valid package in the PGDG repos(it is for a 
> Tcl client library). It is trying to find postgresql11-tcl in the RHEL 
> repo not the PGDG repo. Yet packages have been installed from  the PGDG 
> repo(from previous post) in the past:

> postgresql11.x86_64   11.2-1PGDG.rhel7   @pgdg11
> postgresql11-contrib.x86_64   11.2-1PGDG.rhel7   @pgdg11
> postgresql11-devel.x86_64 11.2-1PGDG.rhel7   @pgdg11
> postgresql11-libs.x86_64  11.2-1PGDG.rhel7   @pgdg11
> postgresql11-llvmjit.x86_64   11.2-1PGDG.rhel7   @pgdg11
> postgresql11-server.x86_6411.2-1PGDG.rhel7   @pgdg11

> You will need to sort out how the above packages were installed and 
> replicate that for the pltcl package. I cannot be of more help as I do 
> not use RH. Maybe someone else can chime in.

Yeah.  The PGDG package repo is evidently not being searched by yum,
so it's either been removed or disabled in yum's repo configuration
list.  (It must have been there at one time, unless you used a
*really* nonstandard way of installing those RPMs.)

Look into /etc/yum.repos.d/ ... if you see a file for the PGDG repo,
fix it to be enabled, otherwise you need to download and install that
repo config file.

regards, tom lane




Re: Case Insensitive

2019-03-28 Thread Shreeyansh Dba
Hi Sridhar,

There are a few workarounds available, hope this will help you.

1) Use the citext extension
2) Use ILIKE instead of LIKE
3) Use Postgres lower() function
4) Add an index on lower(ename)
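Options 3) and 4) combine naturally; a minimal sketch (the index name is illustrative):

```sql
CREATE INDEX emp_ename_lower_idx ON emp (lower(ename));

SELECT * FROM emp WHERE lower(ename) = lower('aaa');
```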

Thanks & Regards,
*Shreeyansh DBA Team*
www.shreeyansh.com


On Thu, Mar 28, 2019 at 1:50 PM Sridhar N Bamandlapally <
sridhar@gmail.com> wrote:

> Hi PG-General and Pgsql-Admin
>
> Can we achieve CASE INSENSITIVE in PostgreSQL?
>
> I mean, need below way
>
> postgres=# select * from emp;
>  eid | ename
> -----+-------
>    1 | aaa
>    2 | AAA
> (2 rows)
>
>
>
> postgres=# select * from emp where ename='aaa';
>  eid | ename
> -----+-------
>    1 | aaa
>    2 | AAA
> (2 rows)
> -- the above result is just a hand-made example
>
>
> Thanks
> Sridhar
>
>


Re: stale WAL files?

2019-03-28 Thread Rob Sargent



On 3/28/19 7:30 AM, Michael Paquier wrote:

On Tue, Mar 26, 2019 at 09:50:37AM -0600, Rob Sargent wrote:

No, sorry I should have said that up front. We’re simple folk.

What is the WAL position (LSN)

postgres=# select * from pg_current_wal_flush_lsn();
 pg_current_wal_flush_lsn
--------------------------
 CEA/E57EAA8
(1 row)

postgres=# select * from pg_current_wal_insert_lsn();
 pg_current_wal_insert_lsn
---------------------------
 CEA/E57EAA8
(1 row)

postgres=# select * from pg_current_wal_lsn();
 pg_current_wal_lsn
--------------------
 CEA/E57EAA8
(1 row)


where Postgres is writing to and what
is the set of WAL segments in pg_wal (or pg_xlog if that's a server
older than 10)?


 select version();
                                                 version
---------------------------------------------------------------------------------------------------------
 PostgreSQL 10.7 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 
20150623 (Red Hat 4.8.5-36), 64-bit

(1 row)

This is pg10 so it's pg_wal.  ls -ltr


-rw-------. 1 postgres postgres 16777216 Mar 16 16:33 00010CEA00B1
-rw-------. 1 postgres postgres 16777216 Mar 16 16:33 00010CEA00B2

 ... 217 more on through to ...

-rw-------. 1 postgres postgres 16777216 Mar 16 17:01 00010CEA00E8
-rw-------. 1 postgres postgres 16777216 Mar 16 17:01 00010CEA00E9
-rw-------. 1 postgres postgres 16777216 Mar 28 09:46 00010CEA000E



Please double-check the configuration value of
wal_keep_segments,

#wal_keep_segments = 0  # in logfile segments, 16MB each

and as mentioned upthread, could you make sure that
you have no replication slots active?


This could be part of the problem?

#max_replication_slots = 10 # max number of replication slots

but

  select * from pg_replication_slots;
 slot_name | plugin | slot_type | datoid | database | temporary | active | active_pid | xmin | catalog_xmin | restart_lsn | confirmed_flush_lsn
-----------+--------+-----------+--------+----------+-----------+--------+------------+------+--------------+-------------+---------------------
(0 rows)



This can be done simply by
querying pg_replication_slots.  Please note as well that checkpoints
are server-wide, so there is no point to run them on all databases.
Only one command will be effective for all databases.

I suspected as much, but there weren't many dbs so I went all in.

--
Michael





Re: Key encryption and relational integrity

2019-03-28 Thread Moreno Andreo

Il 26/03/2019 18:08, Adrian Klaver ha scritto:

On 3/26/19 9:08 AM, Moreno Andreo wrote:

Il 26/03/2019 15:24, Adrian Klaver ha scritto:

On 3/26/19 7:19 AM, Moreno Andreo wrote:

Hello folks :-)

Is there any workaround to implement key encryption without 
breaking relational integrity?


This is going to need more information. 

OK, I'll try to be as clear as I can
For starters 'key' has separate meanings for encryption and RI. I 
could make some guesses about what you want, but to avoid false 
assumptions a simple example would be helpful.
In a master-detail relation, I need to encrypt either the master table's 
PK or the detail table's FK, in order to achieve pseudonymization, 
required by the GDPR in Europe when managing special categories of data

Imagine I have
Table users
id   surname    last name
1    John            Doe
2    Jane            Doe
3    Foo         Bar

Table medications
id    user_id    med
1 1                Medication
2 1                Ear check
...
...
medications.user_id is FK on users.id
we should achieve

Table medications
id    user_id        med
1    sgkighs98    Medication
2    sghighs98    Ear check

or the opposite (users.id encryption and medications.user_id kept plain)

At first glance, it IS breaking relational integrity, so is there a 
way to manage this encryption internally so RI is kept safe?


Not that I know of. RI is based on maintaining a link between parent 
and child. So by definition you would be able to get to the parent 
record via the child.

That's what I was afraid of :-(


A quick search on pseudonymisation found a boatload of interpretations 
of how to implement this:


"Pseudonymisation' means the processing of personal data in such a 
manner that the personal data can no longer be attributed to a 
specific data subject without the use of additional information, 
provided that such additional information is kept separately and is 
subject to technical and organisational measures to ensure that the 
personal data are not attributed to an identified or identifiable 
natural person."



To me it would seem something like:

Table medications
id    user_id    med
1    sgkighs98    Medication
2    sghighs98    Ear check



Table users
id    surname    last name
sgkighs98 John    Doe
jkopkl1   Jane    Doe
uepoti21  Foo Bar

Where there is no direct link between the two. 


Are you sure there isn't?... the key "sgkighs98" is present in both 
tables and I can join the tables on that field, so the pseudonymisation 
does not apply; it's just "separation" (that was OK with the previous 
privacy act, but not with the GDPR).


The problem is not on the application side... there you can do almost 
anything you want to do. The problem is that if someone breaks into the 
server (data breach) it is easy to join patients and their medications.


Instead permissions would prevent linking from medications to users 
even via a SELECT. One could also use pgcrypto:


https://www.postgresql.org/docs/10/pgcrypto.html

on the users table to further hide the personal info.
That's what I used to try to encrypt first name, last name, street 
address and some other fields (that would be the best solution because 
RI was not impacted at all), but the customer stated that they have to 
perform real-time search (like when you type in the Google search box), 
and the query that has to decrypt all names and return only the ones 
that begin with a certain set of characters is way too slow (tried on a 
good i7 configuration, that's about 2 seconds for each key pressed on a  
2500-row table). So I dropped this approach.
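
For reference, the approach described above corresponds roughly to this pgcrypto sketch (the `surname_enc` bytea column and the key handling are assumptions for illustration, not the actual schema):

```sql
CREATE EXTENSION IF NOT EXISTS pgcrypto;

-- Encrypt on write (the key would normally be supplied by the application):
UPDATE users SET surname_enc = pgp_sym_encrypt(surname, 'app-held-key');

-- A type-ahead prefix search has to decrypt every row before filtering,
-- which is why it degrades to one decryption per row per keystroke:
SELECT id
FROM users
WHERE pgp_sym_decrypt(surname_enc, 'app-held-key') ILIKE 'jo%';
```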


*NOTE* I am not a lawyer so any advice on my part as to meeting legal 
requirements are just me thinking out loud. I would suggest, if not 
already done, getting proper legal advice on what the section quoted 
above actually means.

Relax, I'm not here to ask and then sue anyone :-)






Re: Key encryption and relational integrity

2019-03-28 Thread Moreno Andreo

Il 28/03/2019 15:45, Adrian Klaver ha scritto:

On 3/28/19 7:29 AM, Moreno Andreo wrote:

Il 27/03/2019 07:42, Tony Shelver ha scritto:
Not in Europe, but have worked a bit with medical records systems in 
the USA, including sharing across providers.


The only other way to do it would be to store the encrypted key 
value in both user.id  and medications.user_id.  
That would encrypt the data and maintain relational integrity.
Hmmm... if user.id and medications.user_id are the same, I can link a 
user with a medication... and the GDPR rule does not apply. Or am I 
missing something?


Yes the link means that someone could use the medications.user_id to 
fetch the rest of the user information from the user table. Unless you 
encrypted that information also, which I gather you do not want to do 
for performance reasons.



Yes, but to be GDPR compliant I _have_ to remove that link. As you 
reported in an earlier email, they say that you can't link patient and 
medication if not with an external resource. In this case we are linking 
them without an external resource


Wandering around the web I also bumped into an article that suggested 
keeping the tables in 2 databases on 2 different servers with different 
credentials. Interesting, but still without RI.


I think I have to answer this question: "is it acceptable to have this 
kind of master-detail structure in a database without RI?" (The example 
is about 2 tables, obviously in the real world the master will have 50+ 
detail tables)


Thanks

Moreno.






Aw: Re: Key encryption and relational integrity

2019-03-28 Thread Karsten Hilbert
> Yes, but to be GDPR compliant I _have_ to remove that link. As you
> reported in an earlier email, they say that you can't link patient and
> medication if not with an external resource. In this case we are linking
> them without an external resource

I REALLY doubt that.

I believe you when you say "they say that ..." but I truly doubt that
GDPR intended to make data processing fully impractical.

(I work in the medical field)

Karsten




Aw: Re: Key encryption and relational integrity

2019-03-28 Thread Karsten Hilbert
> I believe you when you say "they say that ..." but I truly doubt that
> GDPR intended to make data processing fully impractical.
>
> (I work in the medical field)

In Germany, that is, which usually takes things to the
extreme, for better or worse.

Karsten




Timescale database is going towards emergency autovacuum

2019-03-28 Thread AI Rumman
We are running Timescale pg_prometheus with each service separated into
its own schema. So in one database we have:

> schema_a
>   view: metrics
>   tables: metrics_copy, metrics_labels, metrics_values
> schema_b
>   view: metrics
>   tables: metrics_copy, metrics_labels, metrics_values
> and so on


We have multiple databases with size:

> datname    size     age(datfrozenxid)
> -------------------------------------
> postgres   8973 kB  51018138
> testdb     9165 kB  51018138
> template1  7649 kB  51018138
> template0  7473 kB  51018138
> testdb1    7781 kB  51018138
> db1        6334 MB  51018138
> db2        74 MB    51018138
> db3        9645 kB  51018138
> db4        11 MB    51018138
> db5        759 MB   51018138


We have only 25 days of data and are also using a materialized view. I am
using the following query to check my emergency autovacuum threshold:

> select ROUND(100*(max(age(datfrozenxid))/(
> ( select setting AS value FROM pg_catalog.pg_settings   WHERE name =
> 'autovacuum_freeze_max_age' ))::float)
>   ) as percent_towards_wraparound
> from pg_database

Each day I can see it growing towards the limit; it is now at 26%.
How can we save our system from the wraparound issue?
Someone please give some suggestions.
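
As a sketch of the usual first step, one can find which relations are holding `datfrozenxid` back and freeze them proactively, per database (the table name at the end is taken from the schema layout described above):

```sql
-- Relations with the oldest relfrozenxid in the current database:
SELECT c.oid::regclass AS relation,
       age(c.relfrozenxid) AS xid_age
FROM pg_class c
WHERE c.relkind IN ('r', 'm', 't')      -- tables, matviews, TOAST
ORDER BY age(c.relfrozenxid) DESC
LIMIT 10;

-- Then freeze the worst offenders during a quiet window, e.g.:
VACUUM (FREEZE, VERBOSE) schema_a.metrics_values;
```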

Thanks.


Re: software or hardware RAID?

2019-03-28 Thread Perumal Raj
Hi All

Just would like to know conclusion here ,

What is best RAID method (Software Or Hardware) for Postgres DB and what
level ?

Thanks,
Raj




On Sat, Mar 23, 2019 at 3:12 PM Kenneth Marshall  wrote:

> On Sat, Mar 23, 2019 at 12:09:11PM +, Rory Campbell-Lange wrote:
> > On 17/03/19, Rory Campbell-Lange (r...@campbell-lange.net) wrote:
> > > We aren't sure whether to use software MDRaid or a MegaRAID card.
> > >
> > > We're buying some new Postgres servers with
> > >
> > > 2 x 240GB Intel SSD S4610 (RAID1 : system)
> > > 4 x 960GB Intel SSD S4610 (RAID10 : db)
> > >
> > > We'll be using Postgres 11 on Debian.
> > >
> > > The MegaRAID 9271-8i with flash cache protection is available from our
> > > provider. I think they may also have the 9361-8i which is 12Gb/s.
> > >
> > > Our current servers which use the LSI 9261 with SSDs and we don't see
> > > any IO significant load as we are in RAM most of the time and the RAID
> > > card seems to flatten out any IO spikes.
> > >
> > > We use MDRaid elsewhere but we've never used it for our databases
> > > before.
> >
> > Apologies for re-heating this email from last week. I could really do
> with the
> > advice.
> >
> > Has anyone got any general comments on whether software RAID or an LSI
> card
> > is preferable?
> >
> > We will be replicating load on an existing server, which has an LSI 9261
> card.
> > Below is some stats from sar showing a "heavy" period of load on vdisk
> sda
> >
> >   00:00:01  DEV  tps     rd_sec/s  wr_sec/s  avgrq-sz  avgqu-sz  await  svctm  %util
> >   14:15:01  sda  112.82  643.09    14986.24  138.53    2.09      18.50  0.25   2.86
> >   14:25:01  sda  108.52  270.17    15682.94  147.01    1.87      17.22  0.25   2.73
> >   14:35:01  sda  107.96  178.25    14868.52  139.37    1.70      15.73  0.23   2.53
> >   14:45:01  sda  150.97  748.94    16919.69  117.03    1.83      12.11  0.22   3.28
> >
> > Thanks for any advice.
> > Rory
>
> Hi Rory,
>
> The main reason, in my opinion, to use a HW RAID card is for the NVRAM
> battery backed cache to support writing to traditional spinning disks.
> Since your SSDs have power-loss support, you do not need that and the HW
> RAID controller. For database use, you would almost certainly be using
> RAID 10 and software RAID 10 is extremely performant. I am in the middle
> of setting up a new system with NVMe SSD drives and HW RAID would be a
> terrible bottle-neck and software RAID is really the only realistic
> option.
>
> Regards,
> Ken
>
>
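
For completeness, a software RAID 10 setup of the kind Ken describes comes down to a few mdadm commands (a hedged sketch: device names, filesystem, and mount point are illustrative, not a recommendation for specific hardware):

```shell
# Create a 4-disk RAID 10 array for the database volume:
mdadm --create /dev/md0 --level=10 --raid-devices=4 \
      /dev/sda /dev/sdb /dev/sdc /dev/sdd

# Filesystem and mount point for PGDATA:
mkfs.ext4 /dev/md0
mount /dev/md0 /var/lib/postgresql

# Persist the array definition across reboots:
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
```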


Re: software or hardware RAID?

2019-03-28 Thread Rory Campbell-Lange
On 28/03/19, Perumal Raj (peruci...@gmail.com) wrote:
> On Sat, Mar 23, 2019 at 3:12 PM Kenneth Marshall  wrote:
> > On Sat, Mar 23, 2019 at 12:09:11PM +, Rory Campbell-Lange wrote:
> > > On 17/03/19, Rory Campbell-Lange (r...@campbell-lange.net) wrote:

> > > > We aren't sure whether to use software MDRaid or a MegaRAID card.
...
> > > > We use MDRaid elsewhere but we've never used it for our databases
> > > > before.
> > >
> > > Apologies for re-heating this email from last week. I could really
> > > do with the advice.
> > >
> > > Has anyone got any general comments on whether software RAID or an
> > > LSI card is preferable?

...

> > The main reason, in my opinion, to use a HW RAID card is for the NVRAM
> > battery backed cache to support writing to traditional spinning disks.
> > Since your SSDs have power-loss support, you do not need that and the HW
> > RAID controller. For database use, you would almost certainly be using
> > RAID 10 and software RAID 10 is extremely performant. I am in the middle
> > of setting up a new system with NVMe SSD drives and HW RAID would be a
> > terrible bottle-neck and software RAID is really the only realistic
> > option.

> Just would like to know conclusion here ,
> 
> What is best RAID method (Software Or Hardware) for Postgres DB and what
> level ?

Hi Perumal

Ken's comments summarise the general tenor of the advice I've been
given.

Rory 




Re: Key encryption and relational integrity

2019-03-28 Thread Adrian Klaver

On 3/28/19 10:36 AM, Moreno Andreo wrote:

Il 26/03/2019 18:08, Adrian Klaver ha scritto:

On 3/26/19 9:08 AM, Moreno Andreo wrote:

Il 26/03/2019 15:24, Adrian Klaver ha scritto:

On 3/26/19 7:19 AM, Moreno Andreo wrote:

Hello folks :-)

Is there any workaround to implement key encryption without 
breaking relational integrity?


This is going to need more information. 

OK, I'll try to be as clear as I can
For starters 'key' has separate meanings for encryption and RI. I 
could make some guesses about what you want, but to avoid false 
assumptions a simple example would be helpful.
In a master-detail relation, I need to encrypt either the master table's 
PK or the detail table's FK, in order to achieve pseudonymization, 
required by the GDPR in Europe when managing special categories of data

Imagine I have
Table users
id   surname    last name
1    John            Doe
2    Jane            Doe
3    Foo         Bar

Table medications
id    user_id    med
1 1                Medication
2 1                Ear check
...
...
medications.user_id is FK on users.id
we should achieve

Table medications
id    user_id        med
1    sgkighs98    Medication
2    sghighs98    Ear check

or the opposite (users.id encryption and medications.user_id kept plain)

At first glance, it IS breaking relational integrity, so is there a 
way to manage this encryption internally so RI is kept safe?


Not that I know of. RI is based on maintaining a link between parent 
and child. So by definition you would be able to get to the parent 
record via the child.

That's what I was afraid of :-(


A quick search on pseudonymisation found a boatload of interpretations 
of how to implement this:


"Pseudonymisation' means the processing of personal data in such a 
manner that the personal data can no longer be attributed to a 
specific data subject without the use of additional information, 
provided that such additional information is kept separately and is 
subject to technical and organisational measures to ensure that the 
personal data are not attributed to an identified or identifiable 
natural person."



To me it would seem something like:

Table medications
id    user_id    med
1    sgkighs98    Medication
2    sghighs98    Ear check



Table users
id    surname    last name
sgkighs98 John    Doe
jkopkl1   Jane    Doe
uepoti21  Foo Bar

Where there is no direct link between the two. 


Are you sure there isn't?... the key "sgkighs98" is present in both 
tables and I can join the tables on that field, so the pseudonymisation 
does not apply; it's just "separation" (that was OK with the previous 
privacy act, but not with the GDPR).


Yes, but you can use permissions to make the user table unreachable by 
folks with insufficient permission.




The problem is not on the application side... there you can do almost 
anything you want to do. The problem is that if someone breaks into the 
server (data breach) it is easy to join patients and their medications.


That really depends on what level of user they break in as. That is a 
separate security issue. It also is the difference between 
pseudonymisation and anonymization, where the latter makes the data 
totally unrelated to an individual's personal information.




Instead permissions would prevent linking from medications to users 
even via a SELECT. One could also use pgcrypto:


https://www.postgresql.org/docs/10/pgcrypto.html

on the users table to further hide the personal info.
That's what I used to try to encrypt first name, last name, street 
address and some other fields (that would be the best solution because 
RI was not impacted at all), but the customer stated that they have to 
perform real-time search (like when you type in the Google search box), 
and the query that has to decrypt all names and return only the ones 
that begin with a certain set of characters is way too slow (tried on a 
good i7 configuration, that's about 2 seconds for each key pressed on a 
2500-row table). So I dropped this approach.


*NOTE* I am not a lawyer so any advice on my part as to meeting legal 
requirements are just me thinking out loud. I would suggest, if not 
already done, getting proper legal advice on what the section quoted 
above actually means.

Relax, I'm not here to ask and then sue anyone :-)


Hey, I live in the US; it's just best policy to make that clear :)








--
Adrian Klaver
adrian.kla...@aklaver.com




Re: plctl extension issue postgresql 11.2

2019-03-28 Thread Adrian Klaver

On 3/28/19 9:07 AM, Prakash Ramakrishnan wrote:

Hi Tom,

Thanks for your big help i did yum install plctl and extension below 
path but still am getting error please find the below details,


==> psql
psql (11.2)
Type "help" for help.

postgres=# \c "AIBE01PR"
You are now connected to database "AIBE01PR" as user "postgres".
*AIBE01PR=# create extension plctl;*
*ERROR:  could not open extension control file 
"/usr/pgsql-11/share/extension/plctl.control": No such file or directory*


Try:
create extension pltcl;


*AIBE01PR=#*
AIBE01PR=# \q
postgres@ brurhenaaa001   .enterprisenet.org:/home/postgres
==> sudo su -
Last login: Thu Mar 28 17:02:53 CET 2019 on pts/0
[root@ brurhenaaa001   ~]# yum list installed | grep -i postgresql
postgresql.x86_64                     9.2.24-1.el7_5
  @rhel-7-server-rpms
postgresql-devel.x86_64               9.2.24-1.el7_5
  @rhel-7-server-rpms
postgresql-jdbc.noarch                9.2.1002-6.el7_5  
  @rhel-7-server-rpms
postgresql-libs.x86_64                9.2.24-1.el7_5
  @rhel-7-server-rpms
postgresql-odbc.x86_64                09.03.0100-2.el7  
  @rhel-7-server-rpms
postgresql-pgpool-II.x86_64           3.4.6-1.el7
@tnc_Extra_Packages_for_Enterprise_Linux_7_Extra_Packages_for_Enterprise_Linux_7
postgresql-pgpool-II-devel.x86_64     3.4.6-1.el7
@tnc_Extra_Packages_for_Enterprise_Linux_7_Extra_Packages_for_Enterprise_Linux_7

postgresql-pgpool-II-extensions.x86_64
postgresql-plruby.x86_64              0.5.3-13.el7  
  @tnc_Extra_Packages_for_Enterprise_Linux_7_Extra_Packages_for_Enterprise_Linux_7
postgresql-plruby-doc.x86_64          0.5.3-13.el7  
  @tnc_Extra_Packages_for_Enterprise_Linux_7_Extra_Packages_for_Enterprise_Linux_7
postgresql-pltcl.x86_64               9.2.24-1.el7_5
  @rhel-7-server-rpms
postgresql-server.x86_64              9.2.24-1.el7_5
  @rhel-7-server-rpms

*postgresql11.x86_64                   11.2-2PGDG.rhel7       @pgdg11*
*postgresql11-contrib.x86_64           11.2-2PGDG.rhel7       @pgdg11*
*postgresql11-devel.x86_64             11.2-2PGDG.rhel7       @pgdg11*
*postgresql11-libs.x86_64              11.2-2PGDG.rhel7       @pgdg11*
*postgresql11-llvmjit.x86_64           11.2-2PGDG.rhel7       @pgdg11*
*postgresql11-pltcl.x86_64             11.2-2PGDG.rhel7       @pgdg11*
*postgresql11-server.x86_64            11.2-2PGDG.rhel7       @pgdg11*
*postgresql11-tcl.x86_64               2.4.0-2.rhel7.1        @pgdg11*
[root@brurhenaaa001 ~]# cd /usr/pgsql-11/share/extension
[root@ brurhenaaa001   extension]# ls
adminpack--1.0--1.1.sql             fuzzystrmatch--1.1.sql  
   oracle_fdw--1.0--1.1.sql                 pg_trgm--1.3--1.4.sql
adminpack--1.0.sql                  fuzzystrmatch.control
  oracle_fdw--1.1.sql                      pg_trgm--1.3.sql
adminpack--1.1--2.0.sql             fuzzystrmatch--unpackaged--1.0.sql  
   oracle_fdw.control                       pg_trgm.control
adminpack.control                   hstore--1.0--1.1.sql
   pageinspect--1.0--1.1.sql                pg_trgm--unpackaged--1.0.sql
amcheck--1.0--1.1.sql               hstore--1.1--1.2.sql
   pageinspect--1.1--1.2.sql                pg_visibility--1.0--1.1.sql
amcheck--1.0.sql                    hstore--1.2--1.3.sql
   pageinspect--1.2--1.3.sql                pg_visibility--1.1--1.2.sql
amcheck.control                     hstore--1.3--1.4.sql
   pageinspect--1.3--1.4.sql                pg_visibility--1.1.sql
autoinc--1.0.sql                    hstore--1.4--1.5.sql
   pageinspect--1.4--1.5.sql                pg_visibility.control
autoinc.control                     hstore--1.4.sql  
  pageinspect--1.5--1.6.sql                plpgsql--1.0.sql
autoinc--unpackaged--1.0.sql        hstore.control  
   pageinspect--1.5.sql                     plpgsql.control
bloom--1.0.sql                      hstore_plperl--1.0.sql  
   pageinspect--1.6--1.7.sql                plpgsql--unpackaged--1.0.sql
bloom.control                       hstore_plperl.control
  pageinspect.control *pltcl--1.0.sql*
btree_gin--1.0--1.1.sql             hstore_plperlu--1.0.sql  
  pageinspect--unpackaged--1.0.sql * pltcl.control*
btree_gin--1.0.sql                  hstore_plperlu.control  
   pg_buffercache--1.0--1.1.sql * pltclu--1.0.sql*



On Thu, Mar 28, 2019 at 9:09 PM Prakash Ramakrishnan 
> wrote:


Thanks, I'll check it out.

On Thu, Mar 28, 2019, 20:26 Tom Lane  wrote:

Adrian Klaver  writes:
 > Still postgresql11-tcl is a valid package in the PGDG
repos(it is for a
 > Tcl client library). It is trying to find postgresql11-tcl in
the RHEL
 > repo not the PGDG repo. Yet

Re: plctl extension issue postgresql 11.2

2019-03-28 Thread Prakash Ramakrishnan
Thanks, I'll check it out.

On Thu, Mar 28, 2019, 20:26 Tom Lane  wrote:

> Adrian Klaver  writes:
> > Still postgresql11-tcl is a valid package in the PGDG repos(it is for a
> > Tcl client library). It is trying to find postgresql11-tcl in the RHEL
> > repo not the PGDG repo. Yet packages have been installed from  the PGDG
> > repo(from previous post) in the past:
>
> > postgresql11.x86_64   11.2-1PGDG.rhel7   @pgdg11
> > postgresql11-contrib.x86_64   11.2-1PGDG.rhel7   @pgdg11
> > postgresql11-devel.x86_64 11.2-1PGDG.rhel7   @pgdg11
> > postgresql11-libs.x86_64  11.2-1PGDG.rhel7   @pgdg11
> > postgresql11-llvmjit.x86_64   11.2-1PGDG.rhel7   @pgdg11
> > postgresql11-server.x86_6411.2-1PGDG.rhel7   @pgdg11
>
> > You will need to sort out how the above packages where installed and
> > replicate that for the pltcl package. I cannot be of more help as I do
> > not use RH. Maybe someone else can chime in.
>
> Yeah.  The PGDG package repo is evidently not being searched by yum,
> so it's either been removed or disabled in yum's repo configuration
> list.  (It must have been there at one time, unless you used a
> *really* nonstandard way of installing those RPMs.)
>
> Look into /etc/yum.repos.d/ ... if you see a file for the PGDG repo,
> fix it to be enabled, otherwise you need to download and install that
> repo config file.
>
> regards, tom lane
>


Re: plctl extension issue postgresql 11.2

2019-03-28 Thread Prakash Ramakrishnan
Hi Tom,

Thanks for your big help i did yum install plctl and extension below path
but still am getting error please find the below details,

==> psql
psql (11.2)
Type "help" for help.

postgres=# \c "AIBE01PR"
You are now connected to database "AIBE01PR" as user "postgres".
*AIBE01PR=# create extension plctl;*
*ERROR:  could not open extension control file
"/usr/pgsql-11/share/extension/plctl.control": No such file or directory*
*AIBE01PR=#*
AIBE01PR=# \q
postgres@ brurhenaaa001   .enterprisenet.org:/home/postgres
==> sudo su -
Last login: Thu Mar 28 17:02:53 CET 2019 on pts/0
[root@ brurhenaaa001   ~]# yum list installed | grep -i postgresql
postgresql.x86_64 9.2.24-1.el7_5
 @rhel-7-server-rpms
postgresql-devel.x86_64   9.2.24-1.el7_5
 @rhel-7-server-rpms
postgresql-jdbc.noarch9.2.1002-6.el7_5
 @rhel-7-server-rpms
postgresql-libs.x86_649.2.24-1.el7_5
 @rhel-7-server-rpms
postgresql-odbc.x86_6409.03.0100-2.el7
 @rhel-7-server-rpms
postgresql-pgpool-II.x86_64   3.4.6-1.el7
@tnc_Extra_Packages_for_Enterprise_Linux_7_Extra_Packages_for_Enterprise_Linux_7
postgresql-pgpool-II-devel.x86_64 3.4.6-1.el7
@tnc_Extra_Packages_for_Enterprise_Linux_7_Extra_Packages_for_Enterprise_Linux_7
postgresql-pgpool-II-extensions.x86_64
postgresql-plruby.x86_64  0.5.3-13.el7
 
@tnc_Extra_Packages_for_Enterprise_Linux_7_Extra_Packages_for_Enterprise_Linux_7
postgresql-plruby-doc.x86_64  0.5.3-13.el7
 
@tnc_Extra_Packages_for_Enterprise_Linux_7_Extra_Packages_for_Enterprise_Linux_7
postgresql-pltcl.x86_64   9.2.24-1.el7_5
 @rhel-7-server-rpms
postgresql-server.x86_64  9.2.24-1.el7_5
 @rhel-7-server-rpms
*postgresql11.x86_64   11.2-2PGDG.rhel7   @pgdg11*
*postgresql11-contrib.x86_64   11.2-2PGDG.rhel7   @pgdg11*
*postgresql11-devel.x86_64 11.2-2PGDG.rhel7   @pgdg11*
*postgresql11-libs.x86_64  11.2-2PGDG.rhel7   @pgdg11*
*postgresql11-llvmjit.x86_64   11.2-2PGDG.rhel7   @pgdg11*
*postgresql11-pltcl.x86_64 11.2-2PGDG.rhel7   @pgdg11*
*postgresql11-server.x86_6411.2-2PGDG.rhel7   @pgdg11*
*postgresql11-tcl.x86_64   2.4.0-2.rhel7.1@pgdg11*
[root@brurhenaaa001 ~]# cd /usr/pgsql-11/share/extension
[root@ brurhenaaa001   extension]# ls
adminpack--1.0--1.1.sql fuzzystrmatch--1.1.sql
oracle_fdw--1.0--1.1.sql pg_trgm--1.3--1.4.sql
adminpack--1.0.sql  fuzzystrmatch.control
 oracle_fdw--1.1.sql  pg_trgm--1.3.sql
adminpack--1.1--2.0.sql fuzzystrmatch--unpackaged--1.0.sql
oracle_fdw.control   pg_trgm.control
adminpack.control   hstore--1.0--1.1.sql
pageinspect--1.0--1.1.sqlpg_trgm--unpackaged--1.0.sql
amcheck--1.0--1.1.sql   hstore--1.1--1.2.sql
pageinspect--1.1--1.2.sqlpg_visibility--1.0--1.1.sql
amcheck--1.0.sqlhstore--1.2--1.3.sql
pageinspect--1.2--1.3.sqlpg_visibility--1.1--1.2.sql
amcheck.control hstore--1.3--1.4.sql
pageinspect--1.3--1.4.sqlpg_visibility--1.1.sql
autoinc--1.0.sqlhstore--1.4--1.5.sql
pageinspect--1.4--1.5.sqlpg_visibility.control
autoinc.control hstore--1.4.sql
 pageinspect--1.5--1.6.sqlplpgsql--1.0.sql
autoinc--unpackaged--1.0.sqlhstore.control
pageinspect--1.5.sql plpgsql.control
bloom--1.0.sql  hstore_plperl--1.0.sql
pageinspect--1.6--1.7.sqlplpgsql--unpackaged--1.0.sql
bloom.control   hstore_plperl.control
 pageinspect.control  *pltcl--1.0.sql*
btree_gin--1.0--1.1.sql hstore_plperlu--1.0.sql
 pageinspect--unpackaged--1.0.sql   *  pltcl.control*
btree_gin--1.0.sql  hstore_plperlu.control
pg_buffercache--1.0--1.1.sql* pltclu--1.0.sql*


On Thu, Mar 28, 2019 at 9:09 PM Prakash Ramakrishnan <
prakash.ramakrishnan...@nielsen.com> wrote:

> Thanks, I'll check it out.
>
> On Thu, Mar 28, 2019, 20:26 Tom Lane  wrote:
>
>> Adrian Klaver  writes:
>> > Still postgresql11-tcl is a valid package in the PGDG repos(it is for a
>> > Tcl client library). It is trying to find postgresql11-tcl in the RHEL
>> > repo not the PGDG repo. Yet packages have been installed from  the PGDG
>> > repo(from previous post) in the past:
>>
>> > postgresql11.x86_64   11.2-1PGDG.rhel7   @pgdg11
>> > postgresql11-contrib.x86_64   11.2-1PGDG.rhel7   @pgdg11
>> > postgresql11-devel.x86_64 11.2-1PGDG.rhel7   @pgdg11
>> > postgresql11-libs.x86_64  11.2-1PGDG.rhel7   @pgdg11
>> > postgresql11-llvmjit.x86_64   11.2-1PGDG.rhel7   @pgdg11
>> > postgresql11-server.x86_6411.2-1PG

Re: plctl extension issue postgresql 11.2

2019-03-28 Thread Christopher Browne
On Thu, 28 Mar 2019 at 17:11, Prakash Ramakrishnan <
prakash.ramakrishnan...@nielsen.com> wrote:

> Hi Tom,
>
> Thanks for your big help i did yum install plctl and extension below path
> but still am getting error please find the below details,
>
> ==> psql
> psql (11.2)
> Type "help" for help.
>
> postgres=# \c "AIBE01PR"
> You are now connected to database "AIBE01PR" as user "postgres".
> *AIBE01PR=# create extension plctl;*
> *ERROR:  could not open extension control file
> "/usr/pgsql-11/share/extension/plctl.control": No such file or directory*
> *AIBE01PR=#*
>

If that is cut/and/pasted, then it is crystal clear that it won't work
because there's a typo.

There is no such computer language as "ctl".  And therefore...
There is no such PostgreSQL extension as "plctl"

Try "create extension pltcl;" instead.

It is an absolute certainty that "create extension plctl;" WILL NOT WORK,
regardless of what software you may have installed.  In contrast "create
extension pltcl;" may quite possibly work.

Many of us don't use your Linux distribution, so can only take wild guesses
as to what commands might install what RPM files that might be helpful.

You almost certainly did not "yum install plctl" with much success, though
conceivably "yum install pltcl" might have worked instead.
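
Assuming the PGDG repo is enabled again (per Tom's earlier advice), the end-to-end fix sketched in this thread would presumably be (package and database names taken from the output shown above):

```shell
# Install the Tcl procedural language package for PostgreSQL 11:
yum install -y postgresql11-pltcl

# Then create the extension -- note the spelling: pltcl, not plctl:
psql -d AIBE01PR -c 'CREATE EXTENSION pltcl;'
```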
-- 
When confronted by a difficult problem, solve it by reducing it to the
question, "How would the Lone Ranger handle this?"


Re: Key encryption and relational integrity

2019-03-28 Thread Peter J. Holzer
On 2019-03-28 18:36:40 +0100, Moreno Andreo wrote:
> Il 26/03/2019 18:08, Adrian Klaver ha scritto:
> > To me it would seem something like:
> > 
> > Table medications
> > id    user_id    med
> > 1    sgkighs98    Medication
> > 2    sghighs98    Ear check
> > 
> > 
> > 
> > Table users
> > id    surname    last name
> > sgkighs98 John    Doe
> > jkopkl1   Jane    Doe
> > uepoti21  Foo Bar
> > 
> > Where there is no direct link between the two.
> 
> Are you sure there isn't?... the key "sgkighs98" is present on both
> tables and I can join tables on that field, so the pseudonimysation
> does not apply,

Yes. It doesn't matter whether the key is 'sgkighs98' or 1438 or
692da0c1-cf2d-476d-8910-7f82c050f8fe. 

> it's just "separation" (that was OK with the last privacy act, but not
> with GDPR

I doubt that this is correct. The GDPR doesn't prescribe specific
technical means (there may be laws or standards in your country which
prescribe such means for medical data, but that's not the GDPR).


> The problem is not on the application side... there you can do almost
> anything you want to do. The prolem is that if someone breaks in the
> server (data breach) it is easy to join patients and their
> medications.

I sure hope that the doctors are able to join patients and their
medications. So at some level that has to be possible. If you assume a
break-in into the server, there will always be a level of penetration at
which the attacker will be able to access any data an authorized user
can access.

hp

-- 
   _  | Peter J. Holzer| we build much bigger, better disasters now
|_|_) || because we have much more sophisticated
| |   | h...@hjp.at | management tools.
__/   | http://www.hjp.at/ | -- Ross Anderson 




Re: Key encryption and relational integrity

2019-03-28 Thread Peter J. Holzer
On 2019-03-28 15:29:50 +0100, Moreno Andreo wrote:
> here I'm trying to find a way so nobody can, without the use of the
> application, match a patient with their clinical records (i.e. someone
> breaking into the server -- data breach)

I think it is very optimistic to assume that an intruder would get
access to the database but not the application.

hp

-- 
   _  | Peter J. Holzer| we build much bigger, better disasters now
|_|_) || because we have much more sophisticated
| |   | h...@hjp.at | management tools.
__/   | http://www.hjp.at/ | -- Ross Anderson 




Re: Subquery to select max(date) value

2019-03-28 Thread Rich Shepard

On Fri, 15 Feb 2019, Andrew Gierth wrote:


select p.person_id, p.lname, p.fname, p.direct_phone, o.org_name, sq.*
 from people as p
  join organizations as o on p.organization_id=o.id   -- OR WHATEVER
  cross join
lateral (select a.next_contact
   from activities as a
  where a.person_id=p.person_id --VERY IMPORTANT
and a.next_contact > '2018-12-31'
and a.next_contact <= 'today'
and a.next_contact is not null
  order by a.next_contact DESC
  limit 1) sq;


After working with this query I modified it slightly to return only the
next_contact date:

select p.person_id, p.lname, p.fname, p.direct_phone, p.active, o.org_name, sq.*
from people as p
 join organizations as o on p.org_id = o.org_id
 cross join
 lateral
 (select a.next_contact
 from activities as a
 where a.person_id = p.person_id and
 p.active='True' and
 a.next_contact is not null
 order by a.next_contact DESC
 limit 1) sq;

It works well, but the row order is not that of a.next_contact. In fact,
there seems to be no order in the returned set. The next_contact column is
in the lateral sub-query. Does this make a difference? I've no idea how to
modify the query so that returned rows are in decreasing next_contact order.

Best regards,

Rich





Re: Subquery to select max(date) value

2019-03-28 Thread Ken Tanzer
On Thu, Mar 28, 2019 at 3:59 PM Rich Shepard 
wrote:

>
> After working with this query I modified it slightly to return only the
> next_contact date:
>
> select p.person_id, p.lname, p.fname, p.direct_phone, p.active,
> o.org_name, sq.*
> from people as p
>   join organizations as o on p.org_id = o.org_id
>   cross join
>   lateral
>   (select a.next_contact
>   from activities as a
>   where a.person_id = p.person_id and
>   p.active='True' and
>   a.next_contact is not null
>   order by a.next_contact DESC
>   limit 1) sq;
>
> It works well, but the row order is not that of a.next_contact. In fact,
> there seems to be no order in the returned set. The next_contact column is
> in the lateral sub-query. Does this make a difference? I've no idea how to
> modify the query so that returned rows are in decreasing next_contact
> order.
>
> Best regards,
>
> Rich
>
You need the ORDER BY in the outer join.  (And I don't think the one
inside the lateral join is doing you any good).  Try:

...
  a.next_contact is not null
 limit 1) sq
order by sq.next_contact DESC;

Cheers,
Ken



-- 
AGENCY Software
A Free Software data system
By and for non-profits
http://agency-software.org/
https://demo.agency-software.org/client
ken.tan...@agency-software.org
(253) 245-3801

Subscribe to the mailing list to learn more about AGENCY or follow the
discussion.


Re: Subquery to select max(date) value

2019-03-28 Thread David G. Johnston
On Thu, Mar 28, 2019 at 3:59 PM Rich Shepard 
wrote:

> select p.person_id, p.lname, p.fname, p.direct_phone, p.active,
> o.org_name, sq.*
> from people as p
>   join organizations as o on p.org_id = o.org_id
>   cross join
>   lateral
>   (select a.next_contact
>   from activities as a
>   where a.person_id = p.person_id and
>   p.active='True' and
>   a.next_contact is not null
>   order by a.next_contact DESC
>   limit 1) sq;
>
> It works well, but the row order is not that of a.next_contact. In fact,
> there seems to be no order in the returned set.

The next_contact column is
> in the lateral sub-query. Does this make a difference?
>

Yes, if you join the result on an ordered subquery to anything you no
longer have a guaranteed order for the combined relation.

select ...
from ...
join ...
cross join lateral ...
-- now add an order by for the top-level query
order by

David J.
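An illustration of the point above, runnable anywhere: the ORDER BY inside the
subquery (together with LIMIT 1) only decides which activity row gets picked
per person, while the top-level ORDER BY decides the final row order. The
sketch below uses Python's sqlite3 module as a stand-in for PostgreSQL (SQLite
has no LATERAL, so a correlated scalar subquery plays the same role), and the
table contents are made-up example data, not data from the thread.

```python
import sqlite3

# Stand-in schema for the thread's people/activities tables (made-up data).
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE people (person_id INTEGER PRIMARY KEY, lname TEXT);
CREATE TABLE activities (person_id INTEGER, next_contact TEXT);
INSERT INTO people VALUES (1, 'Adams'), (2, 'Baker'), (3, 'Clark');
INSERT INTO activities VALUES
  (1, '2019-01-05'), (1, '2019-03-01'),
  (2, '2019-02-10'),
  (3, '2019-01-20'), (3, '2019-03-15');
""")

# The inner ORDER BY ... LIMIT 1 picks each person's latest next_contact;
# only the outer ORDER BY guarantees the order of the final result set.
rows = cur.execute("""
SELECT p.person_id, p.lname,
       (SELECT a.next_contact
          FROM activities AS a
         WHERE a.person_id = p.person_id
         ORDER BY a.next_contact DESC
         LIMIT 1) AS next_contact
  FROM people AS p
 ORDER BY next_contact DESC
""").fetchall()

print(rows)
# -> [(3, 'Clark', '2019-03-15'), (1, 'Adams', '2019-03-01'),
#     (2, 'Baker', '2019-02-10')]
```

Dropping the outer ORDER BY leaves the result order unspecified, which is
exactly the behaviour reported earlier in the thread.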


Re: Subquery to select max(date) value

2019-03-28 Thread David G. Johnston
On Thu, Mar 28, 2019 at 4:05 PM Ken Tanzer  wrote:

>
>  (And I don't think the one inside the lateral join is doing you any
>> good).  Try:
>>
>
> ...
>   a.next_contact is not null
>  limit 1) sq
> order by sq.next_contact DESC;
>
>
The fact that the subquery has a LIMIT 1 clause leads me to assume the ORDER
BY there is quite necessary.

David J.


Re: Subquery to select max(date) value

2019-03-28 Thread Ken Tanzer
On Thu, Mar 28, 2019 at 4:07 PM David G. Johnston <
david.g.johns...@gmail.com> wrote:

> On Thu, Mar 28, 2019 at 4:05 PM Ken Tanzer  wrote:
>
>>
>>  (And I don't think the one inside the lateral join is doing you any
>>> good).  Try:
>>>
>>
>> ...
>>   a.next_contact is not null
>>  limit 1) sq
>> order by sq.next_contact DESC;
>>
>>
> The fact that the subquery has a LIMIT 1 clause leads me to assume the
> ORDER BY there is quite necessary.
>
> David J.
>

Have to agree with you there.  Sorry for the brain fart!

Ken



Re: Subquery to select max(date) value

2019-03-28 Thread Rich Shepard

On Thu, 28 Mar 2019, Ken Tanzer wrote:


You need the ORDER BY in the outer join.


Ken,

I thought so. But, ...


(And I don't think the one inside the lateral join is doing you any good).
Try:
...
 a.next_contact is not null
limit 1) sq
   order by sq.next_contact DESC;


This re-orders the returned set, but still not in chronological order.

Thanks,

Rich




Re: Subquery to select max(date) value

2019-03-28 Thread Rich Shepard

On Thu, 28 Mar 2019, David G. Johnston wrote:


Yes, if you join the result on an ordered subquery to anything you no
longer have a guaranteed order for the combined relation.


David,

This makes sense to me.


select ...
from ...
join ...
cross join lateral ...
-- now add an order by for the top-level query
order by


Tried this and did not do it correctly. Should there be two 'order by', one
in the sub-query, the other in the top-level query? This does not return the
desired order:

select p.person_id, p.lname, p.fname, p.direct_phone, p.active, o.org_name, sq.*
from people as p
 join organizations as o on p.org_id = o.org_id
 cross join
 lateral
 (select a.next_contact
 from activities as a
 where a.person_id = p.person_id and
 p.active='True' and
 a.next_contact is not null
 order by a.next_contact DESC
 limit 1) sq
 order by sq.next_contact DESC;

Obviously, I'm still missing the implementation of your response.

Best regards,

Rich




Re: Subquery to select max(date) value

2019-03-28 Thread Ken Tanzer
On Thu, Mar 28, 2019 at 4:14 PM Rich Shepard 
wrote:

> On Thu, 28 Mar 2019, Ken Tanzer wrote:
>
> > You need the ORDER BY in the outer join.
>
> Ken,
>
> I thought so. But, ...
>
> > (And I don't think the one inside the lateral join is doing you any
> good).
> > Try:
> > ...
> >  a.next_contact is not null
> > limit 1) sq
> >order by sq.next_contact DESC;
>
> This re-orders the returned set, but still not in chronological order.
>

Really?  Is your next_contact field a date field and not a text field?
What order does it come out in?

(And I assume you saw David J's correction to the misinformation I offered.)

Cheers,
Ken


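A quick way to see why Ken's question about the column type matters: ORDER BY
on a text column is lexicographic, so dates stored as text only sort
chronologically if the format itself sorts that way (ISO-8601, 'YYYY-MM-DD',
does; 'MM/DD/YYYY' does not). A minimal sketch, with sqlite standing in for
PostgreSQL and hypothetical table/column names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE t (iso TEXT, us TEXT)")
# Same three dates, stored as ISO-8601 text and as US-style text.
cur.executemany("INSERT INTO t VALUES (?, ?)", [
    ("2019-03-15", "03/15/2019"),
    ("2019-01-20", "01/20/2019"),
    ("2018-12-31", "12/31/2018"),
])

iso_order = [r[0] for r in cur.execute("SELECT iso FROM t ORDER BY iso DESC")]
us_order = [r[0] for r in cur.execute("SELECT us FROM t ORDER BY us DESC")]

print(iso_order)  # chronological: newest first
print(us_order)   # lexicographic: '12/31/2018' sorts highest
```

With a real date (or timestamp) column type, as in PostgreSQL, the storage
format question disappears and ORDER BY is always chronological.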


Re: Subquery to select max(date) value

2019-03-28 Thread David G. Johnston
On Thu, Mar 28, 2019 at 4:21 PM Rich Shepard 
wrote:

> This does not return the
> desired order:
>

It returns something at least.  If you put the output you get into a
spreadsheet are you able to manually sort it the way you desire?

David J.


Re: Subquery to select max(date) value

2019-03-28 Thread Andrew Gierth
> "Rich" == Rich Shepard  writes:

 Rich> Tried this and did not do it correctly. Should there be two
 Rich> 'order by', one in the sub-query, the other in the top-level
 Rich> query?

Yes.

 Rich> This does not return the desired order:

 Rich> select p.person_id, p.lname, p.fname, p.direct_phone, p.active, 
o.org_name, sq.*
 Rich> from people as p
 Rich>  join organizations as o on p.org_id = o.org_id
 Rich>  cross join
 Rich>  lateral
 Rich>  (select a.next_contact
 Rich>  from activities as a
 Rich>  where a.person_id = p.person_id and
 Rich>  p.active='True' and
 Rich>  a.next_contact is not null
 Rich>  order by a.next_contact DESC
 Rich>  limit 1) sq
 Rich>  order by sq.next_contact DESC;

That query seems correct assuming you want the result in descending
order of next_contact. How did the actual result differ from your
expectation?

-- 
Andrew (irc:RhodiumToad)




Archival process of partition tables with filtering few rows from tables.

2019-03-28 Thread github kran
Hello Team,

We are using PostgreSQL version 9.6 and planning to archive our partition
tables, each containing about 300-500 million rows. We have around ~50
partition tables to be archived to a new cold-path PostgreSQL database,
version 10.6. We have a requirement to filter a few rows before exporting
the data from these tables, as we don't want to archive those rows.

 What is the better approach to export and restore these tables?

   - Does a COPY command with a filter query (using SELECT) work better?
   - pg_dump with these rows filtered out?
   - Can I export my indexes, check constraints, and other constraints?
   - Any other options?


Appreciate your replies.

Thanks.


Re: Archival process of partition tables with filtering few rows from tables.

2019-03-28 Thread Andreas Kretschmer
On 29 March 2019 05:13:31 CET, github kran  wrote:
>Hello Team,
>
>We are using PostgreSQL Version 9.6 and planning to archive our
>partition
>tables containing about 300 - 500 million rows . We have around ~ 50
>partition tables to be archived to a new
>cold path PostgreSQL database , version 10.6.

Consider version 11 instead; it has much better partitioning features.


 We have a requirement to
>filter a few rows before exporting this data from these tables, as we don't
>want to archive those rows.
>
> What is the better approach to export and restore these tables?
>
>   - Does a COPY command with a filter query (using SELECT) work better?
>   - pg_dump with these rows filtered out?
>   - Can I export my indexes, check constraints, and other constraints?
>   - Any other options?

Yeah, consider logical replication using pglogical from us, with row filtering.



Regards, Andreas


-- 
2ndQuadrant - The PostgreSQL Support Company
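On the COPY option raised in the question: in PostgreSQL, COPY accepts an
arbitrary query, so the row filter can be applied during the export itself,
e.g. COPY (SELECT ... FROM some_partition WHERE keep_row) TO
'/path/part.csv' WITH (FORMAT csv). The sketch below shows the same
filter-while-exporting idea in a runnable form, with sqlite and the csv
module standing in for a live PostgreSQL server; the table and column names
are hypothetical.

```python
import csv
import io
import sqlite3

# Hypothetical partition table: some rows are flagged to be excluded.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE events_2019_01 (id INTEGER, payload TEXT, skip INTEGER);
INSERT INTO events_2019_01 VALUES
  (1, 'keep-me', 0), (2, 'drop-me', 1), (3, 'keep-me-too', 0);
""")

# Only rows passing the filter are written out, mirroring a filtered COPY.
buf = io.StringIO()
writer = csv.writer(buf)
for row in cur.execute(
        "SELECT id, payload FROM events_2019_01 WHERE skip = 0"):
    writer.writerow(row)

print(buf.getvalue())  # two CSV lines: ids 1 and 3; row 2 is filtered out
```

Note that COPY moves data only; schema objects such as indexes and
constraints would come from pg_dump --schema-only on the source database and
be applied on the target before loading.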