[GENERAL] Setting search_path ignored

2017-10-02 Thread Guyren Howe
CREATE ROLE thing_accessor;

CREATE ROLE

CREATE SCHEMA thing_accessor;

CREATE SCHEMA

covermything=> ALTER ROLE thing_accessor SET search_path=thing_accessor;

ALTER ROLE

covermything=# SET ROLE thing_accessor;

SET

covermything=> SHOW search_path;

   search_path

-----------------

 "$user", public

(1 row)



This seems to contradict all the advice I can find about setting the schema
search path.


Re: [GENERAL] Setting search_path ignored

2017-10-02 Thread David G. Johnston
On Mon, Oct 2, 2017 at 7:00 AM, Guyren Howe  wrote:

> CREATE ROLE thing_accessor;
>
> CREATE ROLE
>
> CREATE SCHEMA thing_accessor;
>
> CREATE SCHEMA
>
> covermything=> ALTER ROLE thing_accessor SET search_path=thing_accessor;
>
> ALTER ROLE
>
> covermything=# SET ROLE thing_accessor;
>
> SET
>
> covermything=> SHOW search_path;
>
>search_path
>
> -----------------
>
>  "$user", public
>
> (1 row)
>
>
>
> This seems to contradict all the advice I can find about setting the
> schema search path.
>

Settings associated with roles only take effect at session start/login.
SET ROLE does not cause them to be read/executed.

https://www.postgresql.org/docs/9.6/static/config-setting.html

"Values set with ALTER DATABASE and ALTER ROLE are applied only when
starting a fresh database session."
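
A workaround within an existing session (a sketch, assuming the role and
schema above) is to set the path explicitly after switching roles, or simply
to reconnect as thing_accessor so the per-role setting is read at login:

SET ROLE thing_accessor;
SET search_path = thing_accessor;  -- applies immediately, this session only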

David J.


Re: [GENERAL] time series data

2017-10-02 Thread Melvin Davidson
On Sun, Oct 1, 2017 at 6:20 PM, Clifford Snow 
wrote:

> I have a stream that updates every minute with a trigger that updates
> another table with information from the stream. That way I'm constantly
> updated with no need to run a script to update before I want a report.
>
> Clifford
>
> On Sun, Oct 1, 2017 at 10:08 AM, Melvin Davidson 
> wrote:
>
>>
>>
>> On Sun, Oct 1, 2017 at 4:17 AM, Khalil Khamlichi <
>> khamlichi.kha...@gmail.com> wrote:
>>
>>> Hi everyone,
>>>
>>> I have a data stream of a call center application coming in  to postgres
>>> in this format :
>>>
>>> user_name, user_status, event_time
>>>
>>> 'user1', 'ready', '2017-01-01 10:00:00'
>>> 'user1', 'talking', '2017-01-01 10:02:00'
>>> 'user1', 'after_call', '2017-01-01 10:07:00'
>>> 'user1', 'ready', '2017-01-01 10:08:00'
>>> 'user1', 'talking', '2017-01-01 10:10:00'
>>> 'user1', 'after_call', '2017-01-01 10:15:00'
>>> 'user1', 'paused', '2017-01-01 10:20:00'
>>> ...
>>> ...
>>>
>>> so as you see each new insert of an "event" is in fact the start_time of
>>> that event and also the end_time of the previous one, so it should be used to
>>> calculate the duration of this previous one.
>>>
>>> What is the best way to get user_status statistics like total duration,
>>> frequency, avg ...etc, does anybody have experience with this sort of
>>> data streams ?
>>>
>>>
>>> Thanks in advance.
>>>
>>
>> Just a suggestion, but here is what I would do.
>> First, create your tables similar to the following
>>
>> CREATE TABLE status
>> (
>>  call_status varchar(10) NOT NULL,
>>  CONSTRAINT status_pk PRIMARY KEY (call_status)
>> );
>>
>> INSERT INTO status
>> (call_status)
>> VALUES
>> ('ready'),
>> ('talking'),
>> ('after_call');
>>
>> CREATE TABLE user_sessions
>> (
>>  username    name        NOT NULL,
>>  session_id  bigint      NOT NULL,
>>  call_status varchar(10) NOT NULL,
>>  call_time   timestamp   NOT NULL,
>>  CONSTRAINT user_sessions_pk PRIMARY KEY (username, session_id, call_status),
>>  CONSTRAINT user_sessions_fk_status FOREIGN KEY (call_status)
>> REFERENCES status(call_status)
>> );
>>
>> Next, you will need to generate a unique session_id for each
>> user, but only when call_status is 'ready'. So probably
>> a table of the form:
>>
>> CREATE TABLE current_session
>> (
>> username    name   NOT NULL,
>> session_id  serial NOT NULL,
>> CONSTRAINT current_session_pk PRIMARY KEY (username)
>> );
>>
>> Then all you need to do is:
>> 1. Update current_session and get the new session_id each time a user
>> connects (call_status = 'ready').
>>Probably best to use a BEFORE trigger to do this, but you will need to
>> code it yourself.
>>
>> 2. You can then do
>>
>> SELECT DISTINCT username,
>>        age ( (SELECT call_time FROM user_sessions
>>                WHERE username = 'actual_user_name'
>>                  AND session_id = actual_session_id
>>                  AND call_status = 'after_call'),
>>              (SELECT call_time FROM user_sessions
>>                WHERE username = 'actual_user_name'
>>                  AND session_id = actual_session_id
>>                  AND call_status = 'talking')
>>            ) AS duration
>>   FROM user_sessions
>>  WHERE username = 'actual_user_name'
>>    AND session_id = actual_session_id;
>>
>> You can use similar queries for avg and frequency.
>>
>> --
>> *Melvin Davidson*
>> I reserve the right to fantasize.  Whether or not you
>> wish to share my fantasy is entirely up to you.
>>
>
>
>
> --
> @osm_seattle
> osm_seattle.snowandsnow.us
> OpenStreetMap: Maps with a human touch
>

I thought about the table design for user_sessions and came up
with a better one:

CREATE TABLE user_sessions
(
 username        name        NOT NULL,
 session_id      bigserial,             -- generated automatically on insert
 call_status     varchar(10) NOT NULL DEFAULT 'ready',
 call_ready      timestamp   NOT NULL,
 call_talking    timestamp,
 call_after_call timestamp,
 call_duration   interval,
 CONSTRAINT user_sessions_pk PRIMARY KEY (username, session_id),
 CONSTRAINT user_sessions_fk_status FOREIGN KEY (call_status)
    REFERENCES status(call_status)
);

So in essence, when the call starts, just do:

INSERT INTO user_sessions
(username, call_ready)
VALUES
('actual_user_name', now() );

Then
SELECT max(session_id) AS current_session
  FROM user_sessions
 WHERE username = 'actual_user_name';
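
With session_id generated by the table (the bigserial above), a hedged
alternative is to capture it straight from the INSERT and skip the max()
lookup:

INSERT INTO user_sessions
(username, call_ready)
VALUES
('actual_user_name', now() )
RETURNING session_id;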

When talking starts:
UPDATE user_sessions
   SET call_status = 'talking',
       call_talking = now()
 WHERE username = 'actual_user_name'
   AND session_id = current_session;

When call ends:
UPDATE user_sessions
   SET call_status = 'after_call',
       call_after_call = now()
 WHERE username = 'actual_user_name'
   AND session_id = current_session;
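
Note that call_duration above is otherwise never populated; assuming it is
meant to hold the talk time, a hedged tweak is to compute it in the same
statement:

UPDATE user_sessions
   SET call_status = 'after_call',
       call_after_call = now(),
       call_duration = now() - call_talking
 WHERE username = 'actual_user_name'
   AND session_id = current_session;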

Now all you have to do to get call length is:

SELECT username,
       age ( call_after_call, call_talking ) AS duration
  FROM user_sessions
 WHERE username = 'actual_user_name'
   AND session_id = current_session;


Re: [GENERAL] Setting search_path ignored

2017-10-02 Thread Guyren Howe
I logged out and back in and did SET ROLE and got the same result.

On Oct 2, 2017, 10:06 -0400, David G. Johnston , 
wrote:
> On Mon, Oct 2, 2017 at 7:00 AM, Guyren Howe  wrote:
> > > CREATE ROLE thing_accessor;
> > > CREATE ROLE
> > > CREATE SCHEMA thing_accessor;
> > > CREATE SCHEMA
> > > covermything=> ALTER ROLE thing_accessor SET search_path=thing_accessor;
> > > ALTER ROLE
> > > covermything=# SET ROLE thing_accessor;
> > > SET
> > > covermything=> SHOW search_path;
> > >    search_path
> > > -----------------
> > >  "$user", public
> > > (1 row)
> > >
> > > This seems to contradict all the advice I can find about setting the 
> > > schema search path.
>
> Settings associated with roles only take effect at session start/login.
> SET ROLE does not cause them to be read/executed.
>
> https://www.postgresql.org/docs/9.6/static/config-setting.html
>
> "Values set with ALTER DATABASE and ALTER ROLE are applied only when starting 
> a fresh database session."
>
> David J.
>
>


Re: [GENERAL] Setting search_path ignored

2017-10-02 Thread David G. Johnston
On Mon, Oct 2, 2017 at 7:09 AM, Guyren Howe  wrote:

> I logged out and back in and did SET ROLE and got the same result.
>

Are you logging in as "thing_accessor" or some role that is a member of
"thing_accessor"?

David J.



Re: [GENERAL] Setting search_path ignored

2017-10-02 Thread Charles Clavadetscher
Hello

 

From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Guyren Howe
Sent: Monday, 2 October 2017 16:10
To: David G. Johnston 
Cc: PostgreSQL General 
Subject: Re: [GENERAL] Setting search_path ignored

 

I logged out and back in and did SET ROLE and got the same result.


On Oct 2, 2017, 10:06 -0400, David G. Johnston <david.g.johns...@gmail.com> wrote:



On Mon, Oct 2, 2017 at 7:00 AM, Guyren Howe <guy...@gmail.com> wrote:

CREATE ROLE thing_accessor;

CREATE ROLE

CREATE SCHEMA thing_accessor;

CREATE SCHEMA

covermything=> ALTER ROLE thing_accessor SET search_path=thing_accessor;

ALTER ROLE

covermything=# SET ROLE thing_accessor;

SET

covermything=> SHOW search_path;

   search_path

-----------------

 "$user", public

(1 row)

 

This seems to contradict all the advice I can find about setting the schema 
search path.

 

Settings associated with roles only take effect at session start/login.
SET ROLE does not cause them to be read/executed.

 

  
https://www.postgresql.org/docs/9.6/static/config-setting.html

 

"Values set with ALTER DATABASE and ALTER ROLE are applied only when starting a 
fresh database session."

 

David J.


 

search_path is not a privilege, but a property.

Properties are tied to the logged-in user. With SET ROLE you assume the role
only for its privileges, not its properties.
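
To confirm the per-role value was stored at all, a hedged sketch against the
system catalog (psql's \drds shows the same information):

SELECT r.rolname, s.setconfig
  FROM pg_db_role_setting s
  JOIN pg_roles r ON r.oid = s.setrole
 WHERE r.rolname = 'thing_accessor';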

 

Regards

Charles

 



[GENERAL] Checkpoint write time - anything unusual?

2017-10-02 Thread pinker
I've just run pgBadger on my pg logs and wonder if those checkpoint
statistics are something I should worry about or not?
The highest write time is about 47 minutes, but I'm not sure whether that's
the checkpoint_completion_target * checkpoint_timeout value or the real time
between sending the write command and getting a response.
If it's the first one, then I assume everything is ok because:
 checkpoint_timeout = 8min
 checkpoint_completion_target = 0.9

but I'm not sure which value it is...

Hour  Written buffers  Write time  Sync time  Total time
00    104,622          2,510.379s  0.558s     2,511.144s
01    23,549           2,218.747s  0.298s     2,219.199s
02    9,776            919.796s    0.244s     920.281s
03    1,075            107.07s     0.398s     107.667s
04    1,062            105.853s    0.396s     106.384s
05    16,216           1,609.462s  0.576s     1,610.245s
06    38,315           2,137.492s  0.246s     2,137.879s
07    51,504           2,571.057s  0.48s      2,571.7s
08    53,921           2,295.652s  2.071s     2,297.957s
09    18,716           1,116.161s  3.132s     1,119.715s
10    9,000            581.912s    3.866s     586.109s
11    12,795           879.337s    2.427s     882.124s
12    13,872           860.987s    1.127s     862.444s
13    14,883           805.011s    0.866s     806.119s
14    13,172           1,062.677s  1.228s     1,064.188s
15    9,375            620.979s    1.013s     622.632s
16    6,268            626.158s    1.059s     627.721s
17    12,387           613.113s    4.849s     618.534s
18    40,641           1,791.432s  1.571s     1,793.546s
19    38,316           1,946.747s  1.343s     1,948.442s
20    69,564           2,817.917s  1.268s     2,819.678s
21    17,333           1,736.004s  0.812s     1,737.25s
22    9,230            924.742s    0.576s     925.49s
23    118,037          2,150.212s  2.167s     2,152.673s
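
For a cross-check from the server itself, a hedged sketch (these counters are
cumulative since the last stats reset, with times in milliseconds):

SELECT checkpoints_timed,      -- checkpoints triggered by checkpoint_timeout
       checkpoints_req,        -- checkpoints requested/forced
       checkpoint_write_time,  -- total time spent writing checkpoint files
       checkpoint_sync_time    -- total time spent syncing them to disk
  FROM pg_stat_bgwriter;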





Re: [GENERAL] time series data

2017-10-02 Thread Scott Marlowe
On Sun, Oct 1, 2017 at 2:17 AM, Khalil Khamlichi
 wrote:
> Hi everyone,
>
> I have a data stream of a call center application coming in  to postgres in
> this format :
>
> user_name, user_status, event_time
>
> 'user1', 'ready', '2017-01-01 10:00:00'
> 'user1', 'talking', '2017-01-01 10:02:00'
> 'user1', 'after_call', '2017-01-01 10:07:00'
> 'user1', 'ready', '2017-01-01 10:08:00'
> 'user1', 'talking', '2017-01-01 10:10:00'
> 'user1', 'after_call', '2017-01-01 10:15:00'
> 'user1', 'paused', '2017-01-01 10:20:00'
> ...
> ...
>
> so as you see each new insert of an "event" is in fact the start_time of
> that event and also the end_time of the previous one, so it should be used to
> calculate the duration of this previous one.
>
> What is the best way to get user_status statistics like total duration,
> frequency, avg ...etc, does anybody have experience with this sort of
> data streams ?

Have you looked at temporal_tables extension? It seems custom made for
what you're trying to do.

http://clarkdave.net/2015/02/historical-records-with-postgresql-and-temporal-tables-and-sql-2011/




Re: [GENERAL] time series data

2017-10-02 Thread Khalil Khamlichi
Interesting proposition, I am reading the docs.

On Mon, Oct 2, 2017 at 6:08 PM, Scott Marlowe 
wrote:

> On Sun, Oct 1, 2017 at 2:17 AM, Khalil Khamlichi
>  wrote:
> > Hi everyone,
> >
> > I have a data stream of a call center application coming in to postgres
> > in this format :
> >
> > user_name, user_status, event_time
> >
> > 'user1', 'ready', '2017-01-01 10:00:00'
> > 'user1', 'talking', '2017-01-01 10:02:00'
> > 'user1', 'after_call', '2017-01-01 10:07:00'
> > 'user1', 'ready', '2017-01-01 10:08:00'
> > 'user1', 'talking', '2017-01-01 10:10:00'
> > 'user1', 'after_call', '2017-01-01 10:15:00'
> > 'user1', 'paused', '2017-01-01 10:20:00'
> > ...
> > ...
> >
> > so as you see each new insert of an "event" is in fact the start_time of
> > that event and also the end_time of the previous one, so it should be used to
> > calculate the duration of this previous one.
> >
> > What is the best way to get user_status statistics like total duration,
> > frequency, avg ...etc, does anybody have experience with this sort of
> > data streams ?
>
> Have you looked at temporal_tables extension? It seems custom made for
> what you're trying to do.
>
> http://clarkdave.net/2015/02/historical-records-with-postgresql-and-temporal-tables-and-sql-2011/
>


Re: [GENERAL] time series data

2017-10-02 Thread Khalil Khamlichi
Hi Melvin,

Thanks a lot for your help,

let me explain to you my problem.

we have records like this

ccdb1=# select user_name, agent_status, event_time from cc_events ;

 user_name | agent_status |     event_time
-----------+--------------+---------------------
 user1     | ready        | 2017-01-01 10:00:00
 user1     | talking      | 2017-01-01 10:02:00
 user1     | after_call   | 2017-01-01 10:08:00
 user1     | ready        | 2017-01-01 10:10:00
 user1     | talking      | 2017-01-01 10:12:00
 user1     | after_call   | 2017-01-01 10:15:00
 user1     | paused       | 2017-01-01 10:17:00
 user1     | ready        | 2017-01-01 10:25:00
 user1     | talking      | 2017-01-01 10:26:00
(9 rows)


so user1 was READY at 2017-01-01 10:00:00, then answered a call
at 2017-01-01 10:02:00, and so on ...
So user1 was ready for 2 minutes, then talking for 6 minutes,
then in after_call (doing after-call work) for 2 minutes, and
this is the kind of information we want to query.

My solution so far is to add one more field to my table: end_time.
So when an event comes in, before the insert I do:
update cc_events set end_time = current_timestamp where user_name =
'user_of_event' and end_time is null;

Then I insert the new event leaving end_time null, so that the next
event will update it, and so on.

It's working fine: I have the start and end times for each event, and it's
not too painful to query (sum(end - start) grouped by user_name,
agent_status), but it costs one more update on the table and is also
limited in what you can query.

I know this must be a common problem in every piece of software that deals
with events, so I suppose something is already built into postgres to deal
with it.
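
The closest built-in candidate is the lead() window function; a hedged
sketch over the cc_events rows above, which derives each event's end time
without the extra column or UPDATE:

SELECT user_name,
       agent_status,
       event_time AS start_time,
       lead(event_time) OVER w AS end_time,
       lead(event_time) OVER w - event_time AS duration
  FROM cc_events
WINDOW w AS (PARTITION BY user_name ORDER BY event_time);

The per-status statistics then fall out of a plain GROUP BY:

SELECT user_name, agent_status,
       sum(duration) AS total_duration,
       avg(duration) AS avg_duration,
       count(*)      AS frequency
  FROM (SELECT user_name, agent_status,
               lead(event_time) OVER (PARTITION BY user_name
                                      ORDER BY event_time) - event_time
                 AS duration
          FROM cc_events) t
 GROUP BY user_name, agent_status;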

I looked at your solution, it's very clever and we use something
similar but on another module where we manage live calls and route
them to available agents.

kkh


On Mon, Oct 2, 2017 at 4:06 PM, Melvin Davidson  wrote:
>
>
>
> On Sun, Oct 1, 2017 at 6:20 PM, Clifford Snow  wrote:
>>
>> I have a stream that updates every minute with a trigger that updates 
>> another table with information from the stream. That way I'm constantly 
>> updated with no need to run a script to update before I want a report.
>>
>> Clifford
>>
>> On Sun, Oct 1, 2017 at 10:08 AM, Melvin Davidson  
>> wrote:
>>>
>>>
>>>
>>> On Sun, Oct 1, 2017 at 4:17 AM, Khalil Khamlichi 
>>>  wrote:

 Hi everyone,

 I have a data stream of a call center application coming in  to postgres 
 in this format :

 user_name, user_status, event_time

 'user1', 'ready', '2017-01-01 10:00:00'
 'user1', 'talking', '2017-01-01 10:02:00'
 'user1', 'after_call', '2017-01-01 10:07:00'
 'user1', 'ready', '2017-01-01 10:08:00'
 'user1', 'talking', '2017-01-01 10:10:00'
 'user1', 'after_call', '2017-01-01 10:15:00'
 'user1', 'paused', '2017-01-01 10:20:00'
 ...
 ...

 so as you see each new insert of an "event" is in fact the start_time of 
 that event and also the end_time of the previous one, so it should be used to
 calculate the duration of this previous one.

 What is the best way to get user_status statistics like total duration, 
 frequency, avg ...etc, does anybody have experience with this sort of
 data streams ?


 Thanks in advance.
>>>
>>>
>>> Just a suggestion, but here is what I would do.
>>> First, create your tables similar to the following
>>>
>>> CREATE TABLE status
>>> (
>>>  call_status varchar(10) NOT NULL,
>>>  CONSTRAINT status_pk PRIMARY KEY (call_status)
>>> );
>>>
>>> INSERT INTO status
>>> (call_status)
>>> VALUES
>>> ('ready'),
>>> ('talking'),
>>> ('after_call');
>>>
>>> CREATE TABLE user_sessions
>>> (
>>>  username    name        NOT NULL,
>>>  session_id  bigint      NOT NULL,
>>>  call_status varchar(10) NOT NULL,
>>>  call_time   timestamp   NOT NULL,
>>>  CONSTRAINT user_sessions_pk PRIMARY KEY (username, session_id, call_status),
>>>  CONSTRAINT user_sessions_fk_status FOREIGN KEY (call_status)
>>> REFERENCES status(call_status)
>>> );
>>>
>>> Next, you will need to generate a unique session_id for each
>>> user, but only for when call_status is 'ready'. So probably
>>> a table of the form:
>>>
>>> CREATE TABLE current_session
>>> (
>>> username    name   NOT NULL,
>>> session_id  serial NOT NULL,
>>> CONSTRAINT current_session_pk PRIMARY KEY (username)
>>> );
>>>
>>> Then all you need to do is:
>>> 1. Update current_session and get the new session_id each time a user 
>>> connects (call_status = 'ready').
>>>Probably best to use a BEFORE trigger to do this, but you will need to 
>>> code it yourself.
>>>
>>> 2. You can then do
>>>
>>> SELECT username,
>>>    age ( (SELECT call_time FROM user_sessions WHERE call_status =
>>> 'after_call'),
>>>   ( SELECT call_time FROM user_sessions WHERE call_status =
>>> 'talking')
>>>

Re: [GENERAL] time series data

2017-10-02 Thread Nico Williams

You have these choices:

 - turn events into INSERTs and UPDATES on a table that represents a
   single call

   You might have an events VIEW with INSTEAD OF insert/update triggers
   so you can insert events as the interface for updating calls.

 - store the events and have a VIEW on the events table that gives you
   rows that summarize each call

 - both: store the events and the summaries of the calls

   You might have an events table with AFTER INSERT triggers to insert
   or update the corresponding rows in the calls table; a minimal sketch
   follows.
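
A minimal sketch of that last combination, with illustrative table, column,
and function names (none of these come from the thread):

CREATE TABLE cc_events (
    user_name  text        NOT NULL,
    status     text        NOT NULL,
    event_time timestamptz NOT NULL
);

CREATE TABLE cc_status_intervals (
    user_name text        NOT NULL,
    status    text        NOT NULL,
    started   timestamptz NOT NULL,
    ended     timestamptz            -- NULL while this is the open interval
);

CREATE FUNCTION summarize_event() RETURNS trigger AS $$
BEGIN
    -- close the user's currently open interval, then open a new one
    UPDATE cc_status_intervals
       SET ended = NEW.event_time
     WHERE user_name = NEW.user_name
       AND ended IS NULL;
    INSERT INTO cc_status_intervals (user_name, status, started)
    VALUES (NEW.user_name, NEW.status, NEW.event_time);
    RETURN NULL;  -- the return value of an AFTER trigger is ignored
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER cc_events_summarize
    AFTER INSERT ON cc_events
    FOR EACH ROW EXECUTE PROCEDURE summarize_event();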

Nico


Re: [GENERAL] pg_stat_tmp and pg_upgrade

2017-10-02 Thread Bruce Momjian
On Thu, Sep 28, 2017 at 08:02:44AM -0700, kbran...@pwhome.com wrote:
> With Pg10.0 coming, I have a question about pg_stat_tmp and upgrades that I'm 
> hoping I can get some advice on.
> 
> Historically, we've created a tmpfs "disk" and mounted it on 
> $PGDATA/pg_stat_tmp and then started Pg. For
> most situations, this works well. However, we have one situation where it 
> doesn't. Therefore, we've decided
> to put pg_stat_tmp outside of $PGDATA.
> 
> OTOH, it occurs to me that when we run pg_upgrade, we're going to have 2 Pg's 
> running at the same time. So
> my concern is that if both new and old have "stats_temp_directory = 
> '/db/pg_stat_tmp'" in their
> postgresql.conf, are there going to be conflicts and "bad things" happen? Or 
> is there some sort of
> versioning within that area that will keep stuff separate and we'll be OK?
> 
> I'm concerned because I don't see anything like versioning when I look in the 
> "dir", so if both of them try
> to create a "global.stat" then it seems like "bad things" could happen.
> 
> OTOH, it's only stats and only an upgrade, so would it be better to make sure 
> that the new version doesn't
> use that (that config isn't set so it uses the dir in PGDATA) and then change 
> it over once the upgrade is
> done or what?
> 
> I could see where the upgrade process wouldn't do anything with stats while 
> the 2 servers are running (but
> only after the upgrade is done) so this might not matter. I will also say 
> that I don't see anything like
> this mentioned in the 10.0 docs for pg_upgrade.
> 
> I don't think it matters, but we'll be going from 9.5.1 to 10.0.

pg_upgrade never runs the old and new servers at the same time.  I am
not sure if that would be OK, but I have never heard of problems related
to this.  I am afraid you will just need to test it.
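
One cautious setup (a sketch; pg_upgrade does not require it, and the paths
are illustrative) is to give each cluster's postgresql.conf its own
subdirectory so the two can never collide:

# old cluster (9.5)
stats_temp_directory = '/db/pg_stat_tmp/9.5'

# new cluster (10)
stats_temp_directory = '/db/pg_stat_tmp/10'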

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+  Ancient Roman grave inscription +

