ERROR: invalid byte sequence for encoding "UTF8": 0xfc

2018-03-27 Thread armand pirvu
Folks


From another system (non postgres) as data gets changed we collect in the form 
like below into a named pipe which is tailed into psql -U csidba -d repdb 
-hrephost

insert into foo (mydata) values ('{"mail_stop": "" , "city_id": "Garching bei 
M?nchen"}');

And got a failure 
ERROR:  invalid byte sequence for encoding "UTF8": 0xfc


insert into foo (mydata) values ('{"mail_stop": "" , "city_id": "Garching bei 
M?nchen"}');

The problem is that converting said system to UTF8 is, to put it nicely, a huge 
amount of effort, and we don't have the resources anyway at this point


This is not a COPY, just a named pipe tailed into psql

Is there any trick/way I can shove this data into Postgres? I really don't 
care whether it reads "Garching bei M?nchen" or "Garching bei Mfoonchen"



Thanks
— Armand





Re: Proposition for better performance

2018-03-27 Thread armand pirvu

> On Mar 27, 2018, at 4:25 PM, Paul Jungwirth wrote:
> 
> On 03/27/2018 11:42 AM, hmidi slim wrote:
>> This is the message that I got:
>> Successfully run. Total query runtime: 2 min.
>> 1500 rows affected.
> 
> Sorry, I don't think this is enough information to suggest anything.
> 
> -- 
> Paul  ~{:-)
> p...@illuminatedcomputing.com
> 

I took the case presented, ran it in a sandbox, and got:

UPDATE 1500

explain analyze SELECT product_id, start_date, end_date, during
FROM product_availabilities
WHERE during @> daterange('2018-02-01', '2018-04-01');

 Bitmap Heap Scan on product_availabilities  (cost=1156.67..46856.67 rows=75000 width=44) (actual time=5413.792..11367.379 rows=1500 loops=1)
   Recheck Cond: (during @> '[2018-02-01,2018-04-01)'::daterange)
   Heap Blocks: exact=110295
   ->  Bitmap Index Scan on idx_time  (cost=0.00..1137.92 rows=75000 width=0) (actual time=5325.844..5325.844 rows=1500 loops=1)
         Index Cond: (during @> '[2018-02-01,2018-04-01)'::daterange)
 Planning time: 0.145 ms
 Execution time: 14055.666 ms
(7 rows)

But, considering the update (I did not check for bloat or anything, but still):

vacuum full product_availabilities;
analyze product_availabilities;

The plan changes to:

 Seq Scan on product_availabilities  (cost=0.00..242647.91 rows=1533 width=26) (actual time=0.034..7207.697 rows=1500 loops=1)
   Filter: (during @> '[2018-02-01,2018-04-01)'::daterange)
 Planning time: 6.701 ms
 Execution time: 9238.285 ms

And the runtime does get into the two minutes:

time psql -U csidba -d armandp  /dev/null
real    2m39.767s
user    1m45.576s
sys     0m12.324s


Not sure if that confirms the OP's findings, but to me a first question would be 
whether it is expected that the execution time reported by explain analyze is 
not even close to the actual run time.

BTW, was the postgres version reported? I ran the presented case on 9.5.8
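One hedged way to separate server-side work from client-side overhead (a 
sketch, not from the thread): BUFFERS shows the block I/O behind the scan, and 
psql's \timing reports the full round trip for comparison.

\timing on
EXPLAIN (ANALYZE, BUFFERS)
SELECT product_id, start_date, end_date, during
FROM product_availabilities
WHERE during @> daterange('2018-02-01', '2018-04-01');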


— Armand









Re: ERROR: invalid byte sequence for encoding "UTF8": 0xfc

2018-03-27 Thread armand pirvu

> On Mar 27, 2018, at 3:47 PM, armand pirvu  wrote:
> 
> [...]

Sorry for double posting, but I found that if I change the client_encoding to 
WIN1252 I am able to move forward:

"city_id": "Garching bei München”



 


connection dropped from the backend server

2018-03-27 Thread armand pirvu
Hi all


I have a process of some data manipulation and ultimate transfer to a postgres 
database.
A DML statement gets concocted with the transformed data and pushed into a 
named pipe.
The named pipe is tailed -f in the background like this:

nohup $SHELL <<EOF &
tail -f /u1/sys_admin/dba/mypipe.fifo | psql -U csidba -d repdb -h rephost
EOF

All good, BUT every 10 min or so, although I see the tail and psql processes 
in the ps output, pg_stat_activity shows nothing from the host where I run the 
nohuped tail.

Any suggestions how to approach this/make it better/monitor?

Thanks
-- Armand

Re: connection dropped from the backend server

2018-03-27 Thread armand pirvu
As long as the connection stays up, yes, the data gets across fine.
In pg_stat_activity I see the IP address of the node where the tail -f piped 
into psql happens.



Sent from my iPhone

> On Mar 27, 2018, at 6:03 PM, Adrian Klaver  wrote:
> 
>> On 03/27/2018 03:36 PM, armand pirvu wrote:
>> Hi all
>> I have a process of some data manipulation and ultimate transfer to a 
>> postgres database
>> A DML statement gets concocted with the transformed data and pushed into a 
>> named pipe
>> The named pipe is tailed -f in the background like this
>> nohup $SHELL <<EOF &
>> tail -f /u1/sys_admin/dba/mypipe.fifo | psql -U csidba -d repdb -h rephost
>> EOF
>> All good BUT every 10 min or so, although I see the tail and psql processes 
>> in the ps output, pg_stat_activity shows really nothing from the host where 
>> I run the nohuped tail
> 
> Could it be that pg_stat_activity shows nothing because the DML has completed 
> when you look?
> 
> Does the data find its way into the database?
> 
>> Any suggestions how to approach this/make it better/monitor ?
>> Thanks
>> -- Armand
> 
> 
> -- 
> Adrian Klaver
> adrian.kla...@aklaver.com



Re: connection dropped from the backend server

2018-03-27 Thread armand pirvu

> On Mar 27, 2018, at 6:49 PM, Tom Lane  wrote:
> 
> Adrian Klaver  writes:
>> On 03/27/2018 04:07 PM, armand pirvu wrote:
>>> As long as the connection stays up yes data gets fine across
>>> In pg_stat_activity I see the node ip address where tail -f piped into psql 
>>> happens
> 
>> So what does the rest of that record show? In particular for:
> 
> I wonder how often data gets put into the pipe.  If it's "not very often",
> maybe the connection from psql to the server is timing out due to
> inactivity?  This would be the fault of a firewall or something in
> between.  You could probably fix it by enabling (more aggressive) TCP
> keepalive settings.
> 
>   regards, tom lane

Well, there is no flow pattern. The flow can be nonexistent for days, even 
weeks, and then it can get super busy.

The data flows as expected, well, until the connection gets dropped. Below is 
from pg_stat_activity (a test I just did):

birstdb=# select datname, pid, client_addr, client_port, backend_start, query_start, state from pg_stat_activity;
 datname |  pid  |  client_addr  | client_port |         backend_start         |          query_start          | state
---------+-------+---------------+-------------+-------------------------------+-------------------------------+--------
 birstdb | 10046 |               |          -1 | 2018-03-27 20:40:11.721804-05 | 2018-03-27 20:47:27.118434-05 | active
 birstdb | 10082 | 192.168.1.187 |       55859 | 2018-03-27 20:43:55.301216-05 | 2018-03-27 20:46:07.190064-05 | idle


ps -fu armandp
     UID   PID  PPID   C    STIME TTY         TIME CMD
 armandp  3264  3263   0 20:39:13 pts/2       0:00 tail -f /u1/sys_admin/dba/ingres2birst.fifo
 armandp  3265  3263   0 20:39:13 pts/2       0:00 psql -U csidba -d birstdb -h 172.16.10.93



Give it about 10 min at most and, bam, out it goes:

birstdb=# select datname, pid, client_addr, client_port, backend_start, query_start, state from pg_stat_activity;
 datname |  pid  | client_addr | client_port |         backend_start         |          query_start          | state
---------+-------+-------------+-------------+-------------------------------+-------------------------------+--------
 birstdb | 10208 |             |          -1 | 2018-03-27 20:51:25.835382-05 | 2018-03-27 21:08:47.164249-05 | active

Although the above two processes are still there.

I think the TCP keepalives might help, but I am also thinking of something 
like a check every minute: if things got into the pipe, dump them to Postgres. 
Something along these lines.

Any ideas/suggestions you might have to improve this? I am not saying it is 
perfect, far from it, but I kind of took the model/idea from the Nagios named 
pipe, and that one, too, runs at a defined interval of x seconds/minutes.



Thank you both
— Armand

Re: connection dropped from the backend server

2018-03-27 Thread armand pirvu


Sent from my iPhone

> On Mar 27, 2018, at 9:21 PM, armand pirvu  wrote:
> 
> [...]


Sorry for the double post, but as a possible solution: why not move the named 
pipe to the postgres host, and instead of dumping into said pipe locally, do 
it over ssh?

What do you think?


Thank you
-- Armand

authentication failure

2018-04-12 Thread armand pirvu
Hi there

I have a process in place which runs several queries from one host to another 
one


All of a sudden I started noticing authentication failures

Like below

.009 ms  statement: COPY NACDS.tf_show_code_response_person FROM STDIN with csv;","psql"
2018-04-12 00:10:48.765 CDT,"csidba","birstdb",7553,"172.16.20.4:40330",5aceea2e.1d81,1,"UPDATE",2018-04-12 00:10:06 CDT,24/0,0,LOG,0,"duration: 42590.993 ms  statement: UPDATE
csischema.tf_transaction_person
SET
is_deleted = 'TRUE',
birst_is_deleted = 'TRUE',
update_datetime = now()::timestamp(0)
WHERE
show_id = '984BIOWC18' AND
birst_is_deleted = 'FALSE' AND
person_transaction_id IN (
SELECT a.person_transaction_id
FROM csischema.tf_transaction_person a
 LEFT JOIN BIOWC.tf_transaction_person b
 ON a.person_transaction_id=b.person_transaction_id
WHERE a.show_id = '984BIOWC18' AND b.person_transaction_id IS NULL
)
;","psql"
2018-04-12 00:10:48.823 CDT,"csidba","birstdb",7755,"172.16.20.4:40455",5aceea58.1e4b,1,"authentication",2018-04-12 00:10:48 CDT,3/20320168,0,FATAL,28P01,"password authentication failed for user ""csidba""","Connection matched pg_hba.conf line 84: ""host all all 0.0.0.0/0 md5"""
2018-04-12 00:10:48.841 CDT,"csidba","birstdb",7756,"172.16.20.4:40456",5aceea58.1e4c,1,"authentication",2018-04-12 00:10:48 CDT,3/20320169,0,FATAL,28P01,"password authentication failed for user ""csidba""","Connection matched pg_hba.conf line 84: ""host all all 0.0.0.0/0 md5"""
2018-04-12 00:10:48.957 CDT,"csidba","birstdb",7759,"172.16.20.4:40459",5aceea58.1e4f,1,"authentication",2018-04-12 00:10:48 CDT,3/20320172,0,FATAL,28P01


pg_hba.conf

# "local" is for Unix domain socket connections only
#local   all all peer
local   all all md5
# IPv4 local connections:
#hostall all 127.0.0.1/32ident
host all all 0.0.0.0/0 md5
# IPv6 local connections:
hostall all ::1/128 ident
# Allow replication connections from localhost, by a user with the
# replication privilege.
#local   replication postgrespeer
#hostreplication postgres127.0.0.1/32ident
#hostreplication postgres::1/128 ident
local  replication  csidba  md5
host   replication  csidba  127.0.0.1/32  md5
host   replication  csidba  0.0.0.0/0 md5
local  replication  repuser  md5
host   replication  repuser  127.0.0.1/32  md5
host   replication  repuser  0.0.0.0/0 md5
local  all repuser  md5
host   all repuser  127.0.0.1/32  md5
host   all repuser  0.0.0.0/0 md5



Did I run into something similar to a race condition?


Any ideas ?



Many thanks

— Armand
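A few hedged first checks, not from the thread but relevant to the two 
theories that come up later (connection pressure and timeouts):

SHOW max_connections;
SHOW authentication_timeout;
SELECT count(*) AS open_sessions FROM pg_stat_activity;  -- headroom vs max_connections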






Re: authentication failure

2018-04-12 Thread armand pirvu

> On Apr 12, 2018, at 9:08 AM, Adrian Klaver  wrote:
> 
> On 04/12/2018 06:59 AM, armand pirvu wrote:
> 
> Please reply to list also.
> Ccing list.
>> Yes and worked fine until two days ago
>> I use .pgpass
> 
> So can you connect from wherever the process is run to the server manually? 
> Something like:
> 
> 
> psql -d birstab -U csidba -h some_server
> 
> 
>> I will also check and see if there are not too many hands in the cookie jar 
>> so to speak and things happen without being communicated
>>> On Apr 12, 2018, at 8:56 AM, Adrian Klaver wrote:
>>> 
>>> On 04/12/2018 06:51 AM, armand pirvu wrote:
>>>> [...]
>>> 
>>> Is the process using the correct password?
>>> 
>>>> Many thanks
>>>> — Armand
>>> 
>>> 
>>> -- 
>>> Adrian Klaver
>>> adrian.kla...@aklaver.com
> 
> 
> -- 
> Adrian Klaver
> adrian.kla...@aklaver.com

Yes I can, and that's what is really puzzling me:

[armandp@devweb2004 ~]$ /usr/pgsql-9.5/bin/psql -d birstdb -U csidba -h 
172.16.26.4
psql (9.5.12, server 9.5.8)
Type "help" for help.

birstdb=# 

It is almost like the authentication stops working for whatever reason.
I did check the .pgpass and all that, and nothing changed.

Is there a possibility that, say, 70 processes try to authenticate at the same 
time and postgres authentication gets a bit lost?


Thank you




Re: authentication failure

2018-04-12 Thread armand pirvu

> On Apr 12, 2018, at 9:12 AM, armand pirvu  wrote:
> 
> [...]

Sorry for the double posting but could it be from

#authentication_timeout = 1min        # 1s-600s

So if the server gets a bit overloaded, could this play a role?

Re: authentication failure

2018-04-12 Thread armand pirvu

> On Apr 12, 2018, at 9:28 AM, Adrian Klaver  wrote:
> 
> On 04/12/2018 07:15 AM, armand pirvu wrote:
> 
>>> 
>>> 
>> Sorry for the double posting but could it be from
>> #authentication_timeout = 1min        # 1s-600s
> 
> From you previous post:
> 
> "It is almost like the authnetication stops working for whatever reason"
> 
> So to be clear the initial connections in the process go through, but at some 
> point they start failing. Is that correct?
> 
> The timeout could be an issue. It would helpful to also see what 
> max_connections setting is.
> 
>> So if the server gets a bit overloaded this could play a role ?
> 
> 
> -- 
> Adrian Klaver
> adrian.kla...@aklaver.com


Yes, correct: initially they go through, but at some point they start failing.
max_connections is set to 200 on the postgres server, but I did not notice any 
message about being maxed out connection-wise.
As a workaround for this very specific set of processes, until things get back 
in line so to speak, do you think that from devweb2004 (where the processing 
is done) to devdb2004 (where the postgres database resides) I should just go 
from md5 to trust?








Re: authentication failure

2018-04-12 Thread armand pirvu

> On Apr 12, 2018, at 9:48 AM, Adrian Klaver  wrote:
> 
> On 04/12/2018 07:37 AM, armand pirvu wrote:
>>> On Apr 12, 2018, at 9:28 AM, Adrian Klaver  
>>> wrote:
>>> 
>>> On 04/12/2018 07:15 AM, armand pirvu wrote:
>>> 
>>>>> 
>>>>> 
>>>> Sorry for the double posting but could it be from
>>>> #authentication_timeout = 1min        # 1s-600s
>>> 
>>> From you previous post:
>>> 
>>> "It is almost like the authnetication stops working for whatever reason"
>>> 
>>> So to be clear the initial connections in the process go through, but at 
>>> some point they start failing. Is that correct?
>>> 
>>> The timeout could be an issue. It would helpful to also see what 
>>> max_connections setting is.
>>> 
>>>> So if the server gets a bit overloaded this could play a role ?
>>> 
>>> 
>>> -- 
>>> Adrian Klaver
>>> adrian.kla...@aklaver.com
>> Yes correct initially they go through but at some point they start failing
>> Max_connections is set to 200 on the postgres server, but I did not notice 
>> any message about being maxed out connections wise
>> As a workaround for this very specific set of processes, until things get 
>> back in line so to speak, do you think from the devweb2004 (where processing 
>> is done) to devdb2004 (where the postgres database resides), should I just 
>> go from md5 to trusted ?
> 
> I would be inclined to raise the authentication_timeout first before setting 
> the auth method to trust.
> 
> I would also set the below.:
> 
> https://www.postgresql.org/docs/10/static/runtime-config-logging.html#RUNTIME-CONFIG-LOGGING-WHAT
> 
> log_connections (boolean)
> Causes each attempted connection to the server to be logged, as well as 
> successful completion of client authentication. Only superusers can change 
> this parameter at session start, and it cannot be changed at all within a 
> session. The default is off.
> 
> log_disconnections (boolean)
> 
> That will give you a better idea of what is going on connection wise.
> 
> 
> 
> -- 
> Adrian Klaver
> adrian.kla...@aklaver.com


Will do so and report back 

Many thanks
Armand
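For reference, a minimal sketch of the logging change Adrian suggests (both 
parameters take effect for new connections after a reload, no restart needed):

ALTER SYSTEM SET log_connections = on;
ALTER SYSTEM SET log_disconnections = on;
SELECT pg_reload_conf();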




Re: authentication failure

2018-04-12 Thread armand pirvu

> On Apr 12, 2018, at 9:55 AM, Adrian Klaver  wrote:
> 
> On 04/12/2018 07:50 AM, armand pirvu wrote:
> 
>>> [...]
>> Will do so and report back
> 
> Also, in a previous post you mentioned:
> 
> "Yes and worked fine until two days ago"
> 
> Is the code under version control so you can see if anything changed two days 
> ago?
> 
> If not, any recollections of significant events from that time period?
> 
>> Many thanks
>> Armand
> 
> 
> -- 
> Adrian Klaver
> adrian.kla...@aklaver.com

No change in the code.
Only maybe the data volume, and potentially the number of what we call events 
for which that runs.
There was talk of adding some more functionality, but it never went through.
However, I will ask around to see if someone forgot to mention a change put in 
place overnight (it has happened before).

Thanks
Armand




Re: authentication failure

2018-04-15 Thread armand pirvu

> On Apr 12, 2018, at 10:07 AM, armand pirvu  wrote:
> 
> [...]


Sorry for the late reply; in and out, all sorts of things distracting me.
I raised the authentication_timeout and it seems okay for now.

Thank you for the help

Armand
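A sketch of the raise described above (the value is illustrative, the thread 
never states one; authentication_timeout takes effect on reload):

ALTER SYSTEM SET authentication_timeout = '2min';
SELECT pg_reload_conf();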





limit and query planner

2018-06-05 Thread armand pirvu
All

Please see below

explain analyze select * from sp_i2birst_reg_staging_test where evt_id = 
'ACSF17'   
and status=0 limit 10;
                                                               QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=0.00..1.29 rows=10 width=519) (actual time=0.110..0.439 rows=10 loops=1)
   ->  Seq Scan on sp_i2birst_reg_staging_test  (cost=0.00..548.40 rows=4239 width=519) (actual time=0.109..0.429 rows=10 loops=1)
         Filter: (((evt_id)::text = 'ACSF17'::text) AND (status = 0))
         Rows Removed by Filter: 115
 Planning time: 3.022 ms
 Execution time: 0.639 ms
(6 rows)



birstdb=# \d sp_i2birst_reg_staging_test
                       Table "csischema.sp_i2birst_reg_staging_test"
    Column     |            Type             |                       Modifiers
---------------+-----------------------------+--------------------------------------------------------
 action_id     | bigint                      | not null default nextval('i2birst_reg_staging_action_id_seq'::regclass)
 reg_uid       | integer                     | not null
 evt_uid       | integer                     | not null
 evt_id        | character varying(10)       | not null
 operation     | character varying(6)        | not null
 status        | smallint                    | not null
 category      | character varying(20)       | not null default ''::character varying
 add_date      | timestamp with time zone    | not null default now()
 mod_date      | timestamp with time zone    | not null default now()
 ingres_data   | jsonb                       |
 thread_number | bigint                      | not null default 0
 start_time    | timestamp without time zone |
 end_time      | timestamp without time zone |
Indexes:
    "sp_i2birst_reg_staging_test_pkey" PRIMARY KEY, btree (action_id)
    "sp_i2birst_reg_staging_test_idx" btree (status, evt_id, category)
Check constraints:
    "sp_i2birst_reg_staging_test_status_check" CHECK (status = ANY (ARRAY[0, 1, 2, 3]))

Even if I add an index on evt_id and status: same table scan.

But 

select count(*) from sp_i2birst_reg_staging_test;
 count
-------
  6860

select count(*) from sp_i2birst_reg_staging_test where evt_id = 'ACSF17' and status=0;
 count
-------
  4239

So I can see why the planner is choosing a table scan 

My question is: I suspect the LIMIT simply limits the fetching to the first 
n records retrieved and has no implications whatsoever for the planner, 
meaning the planner ignores it. Am I right or wrong?

Thanks
— Armand







Re: limit and query planner

2018-06-05 Thread armand pirvu

> On Jun 5, 2018, at 1:32 PM, Pavel Stehule  wrote:
> 
> 
> 
> 2018-06-05 20:24 GMT+02:00 armand pirvu:
> [...]
> 
> LIMIT is last clause and it is processed after aggregation. 
> 
> probably you would select count(*) from (select * from 
> sp_i2birst_reg_staging_test where evt_id = 'ACSF17'  LIMIT 10) s;
> 
> moreover, you do not have an index on the evt_id column alone - there is a 
> composite index, but the chance of it being used can be low
> 
> Regards
> 
> Pavel
> 


Thank you Pavel

I put the counts in to show that the number of records retrieved without 
LIMIT, relative to a plain select count(*), is far more than 5%, and for this 
very reason an index is deemed useless, i.e. the restriction is effectively 
nonexistent.

— Armand



Re: limit and query planner

2018-06-05 Thread armand pirvu


> On Jun 5, 2018, at 2:02 PM, Tom Lane  wrote:
> 
> armand pirvu  writes:
>> My question is: I suspect the limit simply limits the fethching to the first 
>> n-records retrieved and has no implications whatsoever on the planner, 
>> meaning the planner ignores it. Am I right or wrong ?
> 
> You're quite wrong.  The presence of a LIMIT causes the planner to prefer
> "fast start" plans, since it will then optimize on the basis of picking
> the lowest estimated cost to fetch the first N rows.  As an example,
> you're more likely to get an ordered indexscan than a seqscan-and-sort
> for small N, though there are many cases where seqscan-and-sort wins
> if the need is to fetch the whole table.
> 
>   regards, tom lane



Thank you Tom


So since

select count(*) from sp_i2birst_reg_staging_test;
 count
-------
  6860

and

select count(*) from sp_i2birst_reg_staging_test where evt_id = 'ACSF17' and status=0;
 count
-------
  4239

that means to me that I fetch almost the whole table, and then I fall into the 
case you described, where seqscan-and-sort wins over indexscan.

My statement was made because, in the case of an index, it gets used as long 
as the data returned falls below 10% (or so) of the total data in the table; 
and in the case of the original query, no matter how low I set N, still a seq 
scan. But I guess that is again the above seqscan-and-sort scenario (see below).

create index fooidx on sp_i2birst_reg_staging_test (evt_id, status);
vacuum analyze sp_i2birst_reg_staging_test;

explain analyze select * from sp_i2birst_reg_staging_test where evt_id = 
'ACSF17'   
and status=1;
                                                                 QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------------
 Index Scan using fooidx on sp_i2birst_reg_staging_test  (cost=0.28..202.91 rows=500 width=519) (actual time=0.097..0.527 rows=500 loops=1)
   Index Cond: (((evt_id)::text = 'ACSF17'::text) AND (status = 1))
 Planning time: 1.024 ms
 Execution time: 0.766 ms
this gets 500 rows out of 6860

explain analyze select * from sp_i2birst_reg_staging_test where evt_id = 
'ACSF17'   
and status=1 limit 10;

                                                                 QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=0.28..4.33 rows=10 width=519) (actual time=0.073..0.105 rows=10 loops=1)
   ->  Index Scan using fooidx on sp_i2birst_reg_staging_test  (cost=0.28..202.91 rows=500 width=519) (actual time=0.072..0.101 rows=10 loops=1)
         Index Cond: (((evt_id)::text = 'ACSF17'::text) AND (status = 1))
 Planning time: 0.280 ms
 Execution time: 0.173 ms

Back to the original 

explain analyze select * from sp_i2birst_reg_staging_test where evt_id = 
'ACSF17'   
and status=0 limit 1 ;

                                                              QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=0.00..0.13 rows=1 width=519) (actual time=0.021..0.021 rows=1 loops=1)
   ->  Seq Scan on sp_i2birst_reg_staging_test  (cost=0.00..548.40 rows=4239 width=519) (actual time=0.019..0.019 rows=1 loops=1)
         Filter: (((evt_id)::text = 'ACSF17'::text) AND (status = 0))
         Rows Removed by Filter: 1
 Planning time: 0.286 ms
 Execution time: 0.110 ms



— Armand


characters converted to ??? in postgres

2018-01-11 Thread armand pirvu

Hi all,


Got the following thing: ≠, ≤, and ≥ are stored in the database as question 
marks, according to one of my developers.


I have postgres installed on both MAC OS X and Centos 7

All locale on both point to UTF8

LANG=en_US.UTF-8
LC_CTYPE="en_US.UTF-8"
LC_NUMERIC="en_US.UTF-8"
LC_TIME="en_US.UTF-8"
LC_COLLATE="en_US.UTF-8"
LC_MONETARY="en_US.UTF-8"
LC_MESSAGES="en_US.UTF-8"
LC_PAPER="en_US.UTF-8"
LC_NAME="en_US.UTF-8"
LC_ADDRESS="en_US.UTF-8"
LC_TELEPHONE="en_US.UTF-8"
LC_MEASUREMENT="en_US.UTF-8"
LC_IDENTIFICATION="en_US.UTF-8"
LC_ALL=

insert into jt1 values ('≤') ;

I can run this either via copy/paste, which is case 1 (and does reproduce the 
developer issue), or from an SQL script, which is case 2.


On OS X:
- case 1 fails
testdb=# insert into jt1 values ('??') ;
ERROR:  invalid byte sequence for encoding "UTF8": 0xe2 0xa4 0x27
Note that at paste time  ≤ changed in ??
- case 2 is fine
- echo -n '≤' | hexdump -C
00000000  e2 89 a4                                          |...|
00000003

On Centos:
- Both cases are fine
- echo -n '≤' | hexdump -C
00000000  e2 89 a4                                          |...|
00000003

http://www.fileformat.info/info/unicode/char/2264/index.htm
UTF-8 (hex) 0xE2 0x89 0xA4 (e289a4)


So to me the representation is fine in all cases. Also, in all cases my 
encoding is UTF8.
I am trying to understand where, on OS X, the change occurs. What is causing 
the failure?
In the bigger picture, a developer complained about this failure, and I am 
fairly sure this is not a postgres issue, but I need to prove it.
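One hedged server-side check to back that up: inspect the bytes as stored, 
independent of terminal rendering (the column name "c" is hypothetical, the 
thread never names it).

SELECT c, encode(convert_to(c, 'UTF8'), 'hex') AS utf8_hex FROM jt1;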


Many thanks for help

-- Armand










Re: characters converted to ??? in postgres

2018-01-11 Thread armand pirvu
Hi Peter

The -n flag worked fine on OS X. I don't have this issue on Centos. As a side 
question, I wonder why postgres was built with libedit instead of libreadline; 
just curious.
Back to my developer issue: he is using what he calls a data object in Java. 
Apparently this is the place where this bad conversion happens; in other 
words, it passes the ?? characters to the backend.
Is there any similar trick I could use on the postgres JDBC driver?

Will report more once I find more from him


Many thanks
Armand


> On Jan 11, 2018, at 4:12 PM, Peter Eisentraut 
>  wrote:
> 
> On 1/11/18 16:34, armand pirvu wrote:
>> On OS X:
>> - case 1 fails
>> testdb=# insert into jt1 values ('??') ;
>> ERROR:  invalid byte sequence for encoding "UTF8": 0xe2 0xa4 0x27
>> Note that at paste time  ≤ changed in ??
> 
> This looks like something is wrong with your libedit library.  Try
> running psql with the -n option.  If that helps, then look into building
> psql with libreadline instead.  Because libedit is terrible.
> 
>> - case 2 is fine
>> - echo -n '≤' |hexdump -C
>>   e2 89 a4  |...|
>> 0003
> 
> -- 
> Peter Eisentraut  http://www.2ndQuadrant.com/
> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: characters converted to ??? in postgres

2018-01-12 Thread armand pirvu
Yes, it appears to be a separate issue, but OS X is the only way I could 
reproduce it, trying to see it with my own eyes so to speak.
I did a quick Java program though, and that confirmed to me (at least so far) 
that the whole issue resides in the so-called data object.

Thank you so much for help and insight

— Armand



> On Jan 12, 2018, at 8:10 AM, Peter Eisentraut 
>  wrote:
> 
> On 1/11/18 20:52, armand pirvu wrote:
>> The -n flag worked fine on OS X. I don’t have this issue on Centos. As a 
>> side question I wonder why was postgres built with libedit instead of 
>> libreadline , just curious.
> 
> You'll have to ask that of whoever built the binaries you are using.
> There are different sources.
> 
> libedit is part of the macOS operating system, whereas libreadline has
> to be obtained separately, so that's probably a reason.
> 
>> Back to my developer issue , he is using what he calls a data object in 
>> java. Apparently this is the place where this bad conversion happens, in 
>> other words it passes to the backend the ?? characters
>> Any similar trick I could use on the postgres jdbc driver ? 
> 
> That appears to be a completely separate issue.
> 
> -- 
> Peter Eisentraut  http://www.2ndQuadrant.com/
> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




cursors and function question

2018-02-13 Thread armand pirvu
Hi 

Is there any elegant way, not a two-step way, to output the cursor value at 
each step?


testtbl table has this content

 col1 | col2 | col3
------+------+------
 E1   | CAT1 |    0
 E1   | CAT2 |    0
 E1   | CAT3 |    0
 E4   | CAT1 |    0
 E5   | CAT1 |    0
 E6   | CAT1 |    0
 E7   | CAT1 |    0


This works
BEGIN WORK;
DECLARE fooc  CURSOR FOR SELECT * FROM testtbl;
FETCH ALL FROM fooc;
CLOSE fooc;
COMMIT WORK;

 col1 | col2 | col3
------+------+------
 E1   | CAT1 |    0
 E1   | CAT2 |    0
 E1   | CAT3 |    0
 E4   | CAT1 |    0
 E5   | CAT1 |    0
 E6   | CAT1 |    0
 E7   | CAT1 |    0


But 
CREATE OR REPLACE FUNCTION foofunc()
   RETURNS text AS $$
DECLARE 
 var2   RECORD;
 cur CURSOR FOR SELECT * from testtbl;
BEGIN
   OPEN cur;
LOOP
  FETCH cur INTO var2;
  return var2;
   END LOOP;
   CLOSE cur;
END; $$
LANGUAGE plpgsql;


select foofunc();
foofunc
---
 ("E1","CAT1  ",0)

But I am looking to get 

foofunc
---
 ("E1","CAT1  ",0)
 ("E1","CATs  ",0)
etc 



Many thanks
— Armand


Re: cursors and function question

2018-02-13 Thread armand pirvu

> On Feb 13, 2018, at 12:26 PM, David G. Johnston  
> wrote:
> 
> On Tuesday, February 13, 2018, armand pirvu wrote:
> 
> CREATE OR REPLACE FUNCTION foofunc()
>RETURNS text AS $$
> 
> select foofunc();
> foofunc
> ---
>  ("E1","CAT1  ",0)
> 
> But I am looking to get
> 
> foofunc
> ---
>  ("E1","CAT1  ",0)
>  ("E1","CATs  ",0)
> 
> 
> You need to specify SETOF
> 
> CREATE FUNCTION foofunc() RETURNS SETOF text AS
> 
> David J.

Thank you but


CREATE OR REPLACE FUNCTION foofunc()
   RETURNS setof text AS $$
DECLARE 
 var2   RECORD;
 cur  CURSOR FOR SELECT * from testtbl;
BEGIN
   OPEN cur;
LOOP
  FETCH cur INTO var2;
 return  var2;
   END LOOP;
   CLOSE cur;
END; $$
LANGUAGE plpgsql;

ERROR:  RETURN cannot have a parameter in function returning set
LINE 10:  return  var2;
HINT:  Use RETURN NEXT or RETURN QUERY.


so I employed RETURN NEXT:



CREATE OR REPLACE FUNCTION foofunc()
   RETURNS setof  text AS $$
DECLARE 
 var2   text;
 cur  CURSOR FOR SELECT col1 from testtbl;
BEGIN
   OPEN cur;
LOOP
  FETCH cur INTO var2;
 return next var2;
   END LOOP;
   CLOSE cur;
END; $$
LANGUAGE plpgsql;



and it just sits there

Any hints ?


Thank you
— Armand
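A guess at the hang, not confirmed in the thread: FETCH past the last row just 
sets FOUND to false and leaves var2 NULL, so the LOOP never terminates. A 
sketch with an explicit exit:

CREATE OR REPLACE FUNCTION foofunc()
   RETURNS SETOF text AS $$
DECLARE
 var2  text;
 cur   CURSOR FOR SELECT col1 FROM testtbl;
BEGIN
   OPEN cur;
   LOOP
      FETCH cur INTO var2;
      EXIT WHEN NOT FOUND;   -- stop when the cursor is exhausted
      RETURN NEXT var2;
   END LOOP;
   CLOSE cur;
END; $$
LANGUAGE plpgsql;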



Re: cursors and function question

2018-02-13 Thread armand pirvu

> On Feb 13, 2018, at 12:54 PM, Adrian Klaver  wrote:
> 
> On 02/13/2018 10:22 AM, armand pirvu wrote:
>> [...]
> 
> 
> CREATE OR REPLACE FUNCTION public.foofunc()
> RETURNS SETOF testtbl
> LANGUAGE sql
> AS $function$
>SELECT * FROM testtbl;
> $function$
> 
> 
> test=> select * from foofunc();
> col1 | col2 | col3
> --+--+--
> E1   | CAT1 |0
> E1   | CAT2 |0
> E1   | CAT3 |0
> E4   | CAT1 |0
> E5   | CAT1 |0
> E6   | CAT1 |0
> E7   | CAT1 |0
> (7 rows)
> 
> 
>> select foofunc();
>> foofunc
>> ---
>>  ("E1","CAT1  ",0)
>> But I am looking to get
>> foofunc
>> ---
>>  ("E1","CAT1  ",0)
>>  ("E1","CATs  ",0)
>> etc
>> Many thanks
>> — Armand
> 
> 
> -- 
> Adrian Klaver
> adrian.kla...@aklaver.com


Thanks Adrian

That one I figured out as well. The idea is that said table has some records 
which I need to loop over and process, using cursors, similar to:

DECLARE
cur CURSOR FOR SELECT *
FROM testtbl FOR UPDATE;
BEGIN
  FOR row IN cur LOOP
UPDATE testtbl
SET col3=1
WHERE CURRENT OF cur;
  END LOOP;
  return cur;
END

For a row update, the goal is to return the cursor value, be it before or 
after the update; hence my question and test.

I found some code which seems to do what I need but it involves two functions
  
CREATE or replace FUNCTION reffunc(refcursor) RETURNS refcursor AS $$
BEGIN
OPEN $1 FOR SELECT col FROM test;
RETURN $1;
END;
$$ LANGUAGE plpgsql;

BEGIN;
SELECT reffunc('funccursor');
FETCH ALL IN funccursor;
COMMIT;


And this is what beats me: can I put it all in one, and how?




Thanks
Armand
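One hedged way to collapse the two steps into one function, assuming the goal 
is simply "rows out of a function" rather than a live cursor handle (the 
function name is hypothetical, and col is cast to text since the thread never 
gives its type): RETURN QUERY streams the result set directly.

CREATE OR REPLACE FUNCTION reffunc_one()
   RETURNS SETOF text AS $$
BEGIN
   RETURN QUERY SELECT col::text FROM test;
END; $$
LANGUAGE plpgsql;

SELECT * FROM reffunc_one();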











Re: cursors and function question

2018-02-13 Thread armand pirvu

> On Feb 13, 2018, at 1:22 PM, Adrian Klaver  wrote:
> 
> On 02/13/2018 11:17 AM, armand pirvu wrote:
>> [...]
> 
> Not following, are you looking to do this in an UPDATE trigger or somewhere 
> else?
> 
> Another way to ask is why do you want to use a cursor?
> 
>> [...]
> 
> 
> -- 
> Adrian Klaver
> adrian.kla...@aklaver.com

Not a trigger, but the idea is that we will do some batch processing from said 
table, let's name it testtbl:

1 - we get the records using select for update, with a limit of 100 for example
2 - update each record using the cursor
3 - print the cursor content so that I have an idea of what was updated

I was thinking that if I can put a unique constraint on the table, I can 
generate a global table in the function, update the main table from the global 
table, and return a select from the global table.

I can see the developer's desire to use cursors to minimize some effort on his 
side.

Thanks 

Armand
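A hedged sketch of steps 1-3 above without an explicit cursor: UPDATE ... 
RETURNING performs the batch update and reports exactly the rows it touched in 
one statement. The "id" primary key is hypothetical (the thread only mentions 
wanting a unique constraint), and SKIP LOCKED needs 9.5 or later.

WITH batch AS (
    SELECT id
    FROM testtbl
    WHERE col3 = 0
    LIMIT 100
    FOR UPDATE SKIP LOCKED   -- claim up to 100 unprocessed rows
)
UPDATE testtbl t
SET col3 = 1
FROM batch b
WHERE t.id = b.id
RETURNING t.*;               -- shows what was updated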




Re: cursors and function question

2018-02-13 Thread armand pirvu

> On Feb 13, 2018, at 4:37 PM, David G. Johnston wrote:
> 
> On Tue, Feb 13, 2018 at 3:31 PM, Adrian Klaver wrote:
> 2) By global table do you mean a temporary table? If so not sure that is 
> going to work as I am pretty sure it will disappear after the function is run.
> 
> ​Temporary tables can survive until either session or transaction end - 
> neither of which occurs automatically when exiting a function.
> 
> https://www.postgresql.org/docs/10/static/sql-createtable.html 
> 
> 
> "Temporary tables are automatically dropped at the end of a session, or 
> optionally at the end of the current transaction (see ON COMMIT below)"
> 
> David J.
> 

Thank you both, Adrian and David, for the input and help

I owe you some nice red wine :)


Cheers
- Armand