selecting timestamp

2018-02-27 Thread chris


What is the best way of selecting current timestamp in UTC?

SELECT
CURRENT_TIMESTAMP   as ct1
,timezone('UTC',CURRENT_TIMESTAMP) as ct2
,timezone('utc',now()) as ct3
,CURRENT_TIMESTAMP at time zone 'UTC' as ct4
,NOW() at time zone 'utc' as ct5


Re: selecting timestamp

2018-02-28 Thread chris

Thank you everyone for the help.
Sorry there wasn't much detail given originally.

CURRENT_TIMESTAMP at time zone 'UTC' as ct4

Seems to be working well.

Thanks,
Chris

On 02/27/2018 01:26 PM, David G. Johnston wrote:
On Tue, Feb 27, 2018 at 1:16 PM, chris  wrote:



What is the best way of selecting current timestamp in UTC?


You haven't defined criteria upon which to judge - and the list below is not 
exhaustive (but sufficiently so)


SELECT
CURRENT_TIMESTAMP   as ct1


standard conforming, assumes server configured for UTC

,timezone('UTC',CURRENT_TIMESTAMP) as ct2
,timezone('utc',now()) as ct3


non-standard, personally non-obvious (the function looks like an 
implementation detail that should be avoided)


,CURRENT_TIMESTAMP at time zone 'UTC' as ct4


This -

standard conforming and doesn't require assumptions about the calling 
environment


,NOW() at time zone 'utc' as ct5


non-standard but frequently used; no semantic difference compared to 
the previous entry
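
For example, a minimal sketch of that standard-conforming form in use (the 
table and column names here are purely illustrative):

CREATE TABLE events (
    id         bigserial PRIMARY KEY,
    -- stored in UTC regardless of the session's TimeZone setting
    created_at timestamp NOT NULL DEFAULT (CURRENT_TIMESTAMP AT TIME ZONE 'UTC')
);

SELECT CURRENT_TIMESTAMP AT TIME ZONE 'UTC' AS utc_now;
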
David J.






plpgsql function-parsing

2018-03-06 Thread chris

Hi,

I am trying to create a function that gets passed a statement as a 
string. I then need to change the table name within the string by 
appending "_cdc" to it, and then execute the statement.


ex:

string passed could be GRANT all ON authors TO bob.

then

EXECUTE GRANT all ON authors_cdc TO bob


I'm looking into parsing the string, but I can't find a way to parse it in a 
way that'll work. Any ideas?
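
For what it's worth, here is a rough sketch of one approach, assuming the 
caller can also pass in the table name (deriving it reliably from the 
statement itself would need real SQL parsing); the function name and 
signature are only illustrations:

CREATE OR REPLACE FUNCTION exec_on_cdc(stmt text, tbl text)
RETURNS void
LANGUAGE plpgsql
AS $$
BEGIN
    -- \m and \M are word-boundary markers, so 'authors' will not also
    -- match 'authors_history'; the matched name gets '_cdc' appended
    EXECUTE regexp_replace(stmt, '\m' || tbl || '\M', tbl || '_cdc', 'g');
END;
$$;

-- e.g. SELECT exec_on_cdc('GRANT all ON authors TO bob', 'authors');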



Thanks,

Chris




JDBC connectivity issue

2018-03-08 Thread chris

Hi,

I have a JDBC connection attempting to connect to 2 postgres nodes in a hot 
standby configuration. Running postgres 9.4 on centos6.


What we are trying to accomplish is, in the event of a failover, to first 
try to connect to the master. If that fails, to then reach out to the 
standby.


I looked online and found the suggested way to do this, but it's not 
working.


This is the JDBC line we have:

jdbc.url=jdbc:postgresql://10.16.10.12:5432,10.16.10.13:5432/app_db

Here is the error we are getting:

2018-03-07 13:54:36, 994 ERROR:com.zaxxer.hikari.util.PropertyElf - 
failed to set properly port number on target class 
org.postgresql.ds.PGSimpleDataSource java.langNumberFormatException:for 
input string: "5432,10.16.10.13:5432"


It looks like the first IP address has disappeared or is not set right.

Thanks in advance for the help.






Re: JDBC connectivity issue

2018-03-08 Thread chris
Given that the syntax looks correct for the url, how would we go about 
debugging that it's not seeing the comma?



On 03/08/2018 02:27 PM, Adrian Klaver wrote:

On 03/08/2018 01:12 PM, chris wrote:

Hi,

I have a JDBC temping to connect to 2 postgres nodes in a hot standby 
configuration. Running postgres 9.4 on centos6.


What we are trying to accomplish is in an event of a failover, to 
first try to connect to the master. If that fails to then reach out 
to the standby.


I looked online and found the suggested way to do this, but it's not 
working.


This is the JDBC line we have:

/jdbc.url=jdbc:postgresql://10.16.10.12:5432,10.16.10.13:5432/app_db/

Here is the error we are getting:/
/

/2018-03-07 13:54:36, 994 ERROR:com.zaxxer.hikari.util.PropertyElf - 
failed to set properly port number on target class 
org.postgresql.ds.PGSimpleDataSource 
java.langNumberFormatException:for input string: 
"5432,10.16.10.13:5432"/


Lets try that again:

To me it looks like whatever code you are using is trying to use 
"5432,10.16.10.13:5432" as the port number for 10.16.10.12. In other 
words it is not seeing the ',' as a separator for the two IP's'.


Time to clean the glasses:)



It looks like the first IP address has disappeared or is not set right./
/

Thanks in advance for the help.


//










Re: JDBC connectivity issue

2018-03-13 Thread chris

I'm sorry that took a few days, but I am running:


Postgresql-9.4

and

jre7.jar

Thanks in advance.


On 03/08/2018 02:30 PM, chris wrote:
Given that the syntax looks correct for the url, how would we go about 
debugging that it's not seeing the comma?



On 03/08/2018 02:27 PM, Adrian Klaver wrote:

On 03/08/2018 01:12 PM, chris wrote:

Hi,

I have a JDBC temping to connect to 2 postgres nodes in a hot 
standby configuration. Running postgres 9.4 on centos6.


What we are trying to accomplish is in an event of a failover, to 
first try to connect to the master. If that fails to then reach out 
to the standby.


I looked online and found the suggested way to do this, but it's not 
working.


This is the JDBC line we have:

/jdbc.url=jdbc:postgresql://10.16.10.12:5432,10.16.10.13:5432/app_db/

Here is the error we are getting:/
/

/2018-03-07 13:54:36, 994 ERROR:com.zaxxer.hikari.util.PropertyElf - 
failed to set properly port number on target class 
org.postgresql.ds.PGSimpleDataSource 
java.langNumberFormatException:for input string: 
"5432,10.16.10.13:5432"/


Lets try that again:

To me it looks like whatever code you are using is trying to use 
"5432,10.16.10.13:5432" as the port number for 10.16.10.12. In other 
words it is not seeing the ',' as a separator for the two IP's'.


Time to clean the glasses:)



It looks like the first IP address has disappeared or is not set 
right./

/

Thanks in advance for the help.


//












Re: JDBC connectivity issue

2018-03-14 Thread chris


I believe it's Postgresql-9.4.1208.jre7.jar
On 03/13/2018 05:48 PM, Adrian Klaver wrote:

On 03/13/2018 04:46 PM, chris wrote:

I'm sorry that took a few days but I am running;


Postgresql-9.4

and

jre7.jar


What we are looking for is the JDBC driver you are using?



Thanks in advance.


On 03/08/2018 02:30 PM, chris wrote:
Given that the syntax looks correct for the url, how would we go 
about debugging that it's not seeing the comma?



On 03/08/2018 02:27 PM, Adrian Klaver wrote:

On 03/08/2018 01:12 PM, chris wrote:

Hi,

I have a JDBC temping to connect to 2 postgres nodes in a hot 
standby configuration. Running postgres 9.4 on centos6.


What we are trying to accomplish is in an event of a failover, to 
first try to connect to the master. If that fails to then reach 
out to the standby.


I looked online and found the suggested way to do this, but it's 
not working.


This is the JDBC line we have:

/jdbc.url=jdbc:postgresql://10.16.10.12:5432,10.16.10.13:5432/app_db/

Here is the error we are getting:/
/

/2018-03-07 13:54:36, 994 ERROR:com.zaxxer.hikari.util.PropertyElf 
- failed to set properly port number on target class 
org.postgresql.ds.PGSimpleDataSource 
java.langNumberFormatException:for input string: 
"5432,10.16.10.13:5432"/


Lets try that again:

To me it looks like whatever code you are using is trying to use 
"5432,10.16.10.13:5432" as the port number for 10.16.10.12. In 
other words it is not seeing the ',' as a separator for the two IP's'.


Time to clean the glasses:)



It looks like the first IP address has disappeared or is not set 
right./

/

Thanks in advance for the help.


//


















Re: JDBC connectivity issue

2018-03-14 Thread chris

Oh sorry for getting the wrong information.

How would I get the information on what driver is currently installed?


On 03/14/2018 09:44 AM, Adrian Klaver wrote:

On 03/14/2018 07:51 AM, chris wrote:


I believe its Postgresql-9.4.1208.jre7.jar


Pretty sure solving this is going to require knowing exactly what 
driver is in use. The failover syntax looks to be fairly recent, so 
being off by a little on the driver version can make a big difference.


Or you could do as this post suggests:

https://www.postgresql.org/message-id/CADK3HHJgdio_TZ-fpk4rguWaA-wWZFNZrjBft_T4jLBK_E_c8w%40mail.gmail.com 



That is, install driver version 42.2.1.
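
For reference, with a driver recent enough to support multi-host URLs, the 
failover connection string would look something like the line below. The 
targetServerType parameter is optional, and its exact name and values should 
be checked against the documentation of the driver version actually in use:

jdbc.url=jdbc:postgresql://10.16.10.12:5432,10.16.10.13:5432/app_db?targetServerType=master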


On 03/13/2018 05:48 PM, Adrian Klaver wrote:

On 03/13/2018 04:46 PM, chris wrote:

I'm sorry that took a few days but I am running;


Postgresql-9.4

and

jre7.jar


What we are looking for is the JDBC driver you are using?



Thanks in advance.


On 03/08/2018 02:30 PM, chris wrote:
Given that the syntax looks correct for the url, how would we go 
about debugging that it's not seeing the comma?



On 03/08/2018 02:27 PM, Adrian Klaver wrote:

On 03/08/2018 01:12 PM, chris wrote:

Hi,

I have a JDBC temping to connect to 2 postgres nodes in a hot 
standby configuration. Running postgres 9.4 on centos6.


What we are trying to accomplish is in an event of a failover, 
to first try to connect to the master. If that fails to then 
reach out to the standby.


I looked online and found the suggested way to do this, but it's 
not working.


This is the JDBC line we have:

/jdbc.url=jdbc:postgresql://10.16.10.12:5432,10.16.10.13:5432/app_db/ 



Here is the error we are getting:/
/

/2018-03-07 13:54:36, 994 
ERROR:com.zaxxer.hikari.util.PropertyElf - failed to set 
properly port number on target class 
org.postgresql.ds.PGSimpleDataSource 
java.langNumberFormatException:for input string: 
"5432,10.16.10.13:5432"/


Lets try that again:

To me it looks like whatever code you are using is trying to use 
"5432,10.16.10.13:5432" as the port number for 10.16.10.12. In 
other words it is not seeing the ',' as a separator for the two 
IP's'.


Time to clean the glasses:)



It looks like the first IP address has disappeared or is not set 
right./

/

Thanks in advance for the help.


//
























Re: JDBC connectivity issue

2018-03-21 Thread chris
I wasn't able to find what version we had installed, so we went ahead and 
reinstalled it.


we downloaded the current version JDBC 4.1 Driver 42.2.1.jre7


We are still having the same problem.

Thanks




On 03/14/2018 03:27 PM, Adrian Klaver wrote:

On 03/14/2018 01:47 PM, chris wrote:

Oh sorry for getting the wrong information.


At this point not sure whether it is wrong information or not.



How would I get the information on what driver is currently installed?


I am not a Java programmer, so I am not the best person to answer 
this. Still I would the think the place to start would be the 
connection code itself. Another way would be examining CLASSPATH:


https://jdbc.postgresql.org/documentation/head/classpath.html




On 03/14/2018 09:44 AM, Adrian Klaver wrote:

On 03/14/2018 07:51 AM, chris wrote:


I believe its Postgresql-9.4.1208.jre7.jar


Pretty sure solving this is going to require knowing exactly what 
driver is in use. The failover syntax looks to be fairly recent, so 
being off by a little on the driver version can make a big difference.










Re: JDBC connectivity issue

2018-03-21 Thread chris
I did the reinstall not to change versions, but so that I now know what 
version I am running.



On 03/21/2018 02:44 PM, Adrian Klaver wrote:

On 03/21/2018 01:16 PM, chris wrote:
I wasnt able to find what version we had installed so we went ahead 
and reinstalled it


Maybe I am missing something, but if you could not find the version 
you were using, how do you know installing a new driver actually 
changed the version you are using now?





we downloaded the current version JDBC 4.1 Driver 42.2.1.jre7


We are still having the same problem.

Thanks













Re: Do we need yet another IDE (SQL development assistant) for PostgreSQL?

2018-07-18 Thread Chris
Hi Dmitry,

I think this is a wonderful idea, but it will be tough. Let me share my 
experience:
— DBeaver:
It is multi-platform, so it is fairly generic with no PostgreSQL-specific 
functionality, and it has quite a few bugs (our company has patched some of 
them). DBeaver is probably the most widely used open source client for 
PostgreSQL now; I know many people who use it.
— pgAdmin 4:
I don't like web clients for databases; I have used it, and its simple 
monitoring windows are good.
For now I use JetBrains' DataGrip. It is also written in Java, but in my 
view it is better than DBeaver.

Best Wishes,
Chris


> On 17 Jul 2018, at 06:21, Tim Cross  wrote:
> 
> 
> Dmitry Igrishin  writes:
> 
>> Mon, 16 Jul 2018 at 1:14, Tim Cross :
>> 
>>> 
>>> Your idea to make it integrate with user's preferred editor is a good
>>> idea as editors are like opinions and certain anatomical parts -
>>> everyone has one! Finding an appropriate API to do this will be a
>>> challenge.
>>> 
>> I see two options here: the core of the tool acts as a long-lived server or
>> as a short-lived
>> console application which communicates with the editor's plugin via
>> stdin/stdout.
>> Btw, what text editor do you prefer? :-)
>> 
> 
> Most of the time, I use Emacs on either Linux or macOS. With the support
> it has for running a psql process, it works pretty well for most
> things. There are pretty reasonable packages for writing SQL and
> 'static' completion. Getting things setup can take a bit of effort, but
> once it is working, it tends to work pretty well.
> 
> The two areas where it lacks are dynamic completion i.e. completing on
> objects the user has created such as table names and column
> names/function names etc. and decent result formatting. 
> 
>>> 
>>> I seem to remember reading somewhere that Oracle was going to remove
>>> swing from the core java library. I've always been a little disappointed
>>> with Java UIs and found they don't give the cross-platform support that
>>> Java originally promised, plus OSX/macOS has not made Java as welcome as
>>> it use to be. If you do choose Java, it will need to work under openJDK
>>> as this is what most Linux users will have installed.
>>> 
>> For now, the possible options for the GUI part are Qt, wxWidgets or FLTK,
>> or even Electron.
> 
> I would look at either Qt or even Electron (I believe visual code is
> written using Electron, which is the other editor I use from time to
> time).
> 
> There was an Emacs project called Eclaim (I think) which interfaced with
> Eclipse services in order to provide dynamic completion when doing
> Java. That could be worth checking out for ideas to borrow.
> 
> Tim
> 
> -- 
> Tim Cross



Re:Using PostgreSQL for Machine Learning Data Pipelines

2019-10-17 Thread chris
Hi there,


There is a project named Apache MADlib that may help you.


http://madlib.apache.org


Thanks,
Chris
yuanzefuwa...@126.com


On 10/18/2019 02:04, Pankaj Jangid wrote:
Hi,

I am working on a machine-learning project. Because of the available
study material in the ML area, the team is inclined towards Apache
Kafka, Apache Spark for data-pipelines and analytics.

Our requirement is to store huge amounts of continuously increasing data
that cannot fit into a single machine. The algorithms require data in
batches so it is not necessary to keep full data ready for
consumption. Using Kafka, the data can be distributed and fetched in
varying batch sizes as and when required.

I am more comfortable with PostgreSQL. And wanted to know more about
case-studies where PostgreSQL is deployed for ML use. Any pointers
referring to study material will be helpful. Please share in this
thread.

--
Thanks & Regards,
Pankaj Jangid



pg_trgm vs. Solr ngram

2023-02-09 Thread Chris

Hello list

I'm pondering migrating an FTS application from Solr to Postgres, just 
because we use Postgres for everything else.


The application is basically fgrep with a web frontend. However the 
indexed documents are very computer network specific and contain a lot 
of hyphenated hostnames with dot-separated domains, as well as IPv4 and 
IPv6 addresses. In Solr I was using ngrams and customized the 
TokenizerFactories until more or less only whitespace was used as a separator, 
while [.:-_\d] remained part of the ngrams. This allows searching for 
".12.255/32" or "xzy-eth5.example.org" without any false positives.


It looks like a straight conversion of this method is not possible since 
the tokenization in pg_trgm is not configurable afaict. Is there some 
other good method to search for a random substring including all the 
punctuation using an index? Or a pg_trgm-style module that is more 
flexible like the Solr/Lucene variant?
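
For reference, the straightforward pg_trgm setup I would be starting from 
looks roughly like this (a minimal sketch; the docs table is hypothetical):

CREATE EXTENSION IF NOT EXISTS pg_trgm;

CREATE TABLE docs (
    id   bigserial PRIMARY KEY,
    body text NOT NULL
);

-- trigram GIN index; LIKE/ILIKE '%...%' queries can use it
CREATE INDEX docs_body_trgm_idx ON docs USING gin (body gin_trgm_ops);

-- the recheck against the actual column value prevents false positives,
-- but patterns consisting mostly of punctuation give the index little to
-- work with, since pg_trgm treats non-alphanumerics as separators
SELECT id FROM docs WHERE body LIKE '%xzy-eth5.example.org%';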


Or maybe hacking my own pg_trgm wouldn't be so hard and could be fun. Do 
I pretty much just need to change the emitted tokens, or will this lead 
to significant complications in the operators, indexes etc.?


thanks for any hints & cheers
Christian




Re:POSTGRES/MYSQL

2019-03-11 Thread chris
There are more blog posts online; here is one for your attention: 
https://www.mssqltips.com/sqlservertip/5745/compare-sql-server-mysql-and-postgresql-features/


Best Regards,
Chris
yuanzefuwa...@126.com


On 03/11/2019 19:31, Sonam Sharma wrote:
Hi All,


We are planning to migrate our database into any open source DB.
Can someone please help me in knowing which one will be better among POSTGRESQL 
and MYSQL.


In what terms postgres is better than MYSQL.


Regards,
Sonam

Re:Why in pgAdmin an active session is marked/highlighted in Red

2021-11-11 Thread chris
Which version? Active session should be green.


Regards,
Chris
On 11/11/2021 21:24, Shaozhong SHI wrote:
I never came across this before and wonder why?


Regards,


David

gathering ownership and grant permissions

2018-02-16 Thread chris

Hi,

I would like to know if there is a better way to grab the grant 
permissions  as well as the "owner to" of a table.


I can currently do this through a pg_dump with greps for "^grant" and 
"^alter", but then I need to do a word search of those lines looking for 
the specific answers, which gets much more involved.


I essentially need to know what grant command was run, and use that grant 
permission to set a variable for a script.


Ex: GRANT ALL ON TABLE testing TO bob; then set only the "all" to a 
variable.


And then the same for the ALTER ... OWNER TO bob.

This is on PostgreSQL 9.6.

Thank you,

Chris




Re: gathering ownership and grant permissions

2018-02-16 Thread chris

Thanks for the quick response.

That does not work for what I need because I only need the owner and 
permissions of one table; I need the grant to look like the output that 
pg_dump displays.


ex:

GRANT ALL ON TABLE testing_cdc TO bob;


--
-- PostgreSQL database dump complete
--

I need a way in which my script can isolate the owner's name and set it to 
a variable on its own. Same with grant.


Thanks


On 02/16/2018 01:05 PM, Melvin Davidson wrote:



On Fri, Feb 16, 2018 at 2:47 PM, chris  wrote:


HI,

I would like to know if there is a better way to grab the grant
permissions  as well as the "owner to" of a table.

I can currently do this through a pg_dumb with greps for "^grant"
and "^alter" but than I need to do a word search of those lines
looking for the specific answers which gets much more involved.

I essentially need to know what grant command was ran and use that
grant permission to set to a variable for a script.

Ex: GRANT ALL ON TABLE testing TO bob; then set only the "all" to
a variable.

And then same for the ALTER  OWNER TO bob.

    This is on postgresl 9.6.

Thank you,

Chris



>... is a better way to grab the grant permissions as well as the 
"owner to" of a table.

Chris, see if the query below will help. Note, you need to execute as 
a superuser.


SELECT n.nspname,
   c.relname,
   o.rolname AS owner,
   array_to_string(ARRAY[c.relacl], '|') as permits
  FROM pg_class c
    JOIN pg_namespace n ON (n.oid = c.relnamespace)
    JOIN pg_authid o ON (o.oid = c.relowner)
WHERE n.nspname not like 'pg_%'
     AND n.nspname not like 'inform_%'
 AND relkind = 'r'
ORDER BY 1;




--
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.




Re: gathering ownership and grant permissions

2018-02-16 Thread chris
I'm sorry; I realized that I only need to know which users have 
permissions on the table, which I can do through:


$ psql -t

SELECT grantee
FROM information_schema.role_table_grants
WHERE table_name='table_name'
GROUP BY grantee;
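
For the owner, a similar query should work the same way ('table_name' is 
just a placeholder):

SELECT tableowner
FROM pg_tables
WHERE tablename = 'table_name';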

thanks!



On 02/16/2018 01:13 PM, chris wrote:


Thanks for the quick response.

That does not work for what I need because I only need the owner and 
permissions of one table, I need the grant to look like the output 
that pg_dump displays.


ex:

GRANT ALL ON TABLE testing_cdc TO bob;


--
-- PostgreSQL database dump complete
--

I need a way which my script can isolate the owner's name and set it 
to a variable on its own. Same with grant.


Thanks


On 02/16/2018 01:05 PM, Melvin Davidson wrote:



On Fri, Feb 16, 2018 at 2:47 PM, chris <mailto:chr...@pgsqlrocket.com>> wrote:


HI,

I would like to know if there is a better way to grab the grant
permissions  as well as the "owner to" of a table.

I can currently do this through a pg_dumb with greps for "^grant"
and "^alter" but than I need to do a word search of those lines
looking for the specific answers which gets much more involved.

I essentially need to know what grant command was ran and use
that grant permission to set to a variable for a script.

Ex: GRANT ALL ON TABLE testing TO bob; then set only the "all" to
a variable.

And then same for the ALTER .... OWNER TO bob.

This is on postgresl 9.6.

Thank you,

Chris



*>... is a better way to grab the grant permissions  as well as the 
"owner to" of a table.


*
*Chris, see if the query below will help. Note, you need to execute 
as a superuser.


SELECT n.nspname,
   c.relname,
   o.rolname AS owner,
   array_to_string(ARRAY[c.relacl], '|') as permits
  FROM pg_class c
    JOIN pg_namespace n ON (n.oid = c.relnamespace)
    JOIN pg_authid o ON (o.oid = c.relowner)
WHERE n.nspname not like 'pg_%'
     AND n.nspname not like 'inform_%'
 AND relkind = 'r'
ORDER BY 1;*




--
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.






Re: Code of Conduct plan

2018-06-05 Thread Chris Travers
 highly frowned
> upon.


Again key point that a CoC committee needs to be international and used to
addressing these sorts of issues.


>
>
> Be very careful in attempting to codify 'correct' behaviour!
>>
>>
> Correct. I think one way to look at all of this is, "if you wouldn't say
> it to your boss or a client don't say it here". That too has problems but
> generally speaking I think it keeps the restrictions rational.
>
>
I will post a more specific set of thoughts here but in general I think the
presumption ought to be that people are trying to work together.
Misunderstanding can happen.  But let's try to act in a collegial and
generally respectful way around each other.

-- 
Best Regards,
Chris Travers
Database Administrator

Tel: +49 162 9037 210 | Skype: einhverfr | www.adjust.com
Saarbrücker Straße 37a, 10405 Berlin


Re: Code of Conduct plan

2018-06-05 Thread Chris Travers
On Sun, Jun 3, 2018 at 8:29 PM, Tom Lane  wrote:

> Two years ago, there was considerable discussion about creating a
> Code of Conduct for the Postgres community, as a result of which
> the core team announced a plan to create an exploration committee
> to draft a CoC [1].  That process has taken far longer than expected,
> but the committee has not been idle.  They worked through many comments
> and many drafts to produce a version that seems acceptable in the view
> of the core team.  This final(?) draft can be found at
>
> https://wiki.postgresql.org/wiki/Code_of_Conduct


Reading through this, it seems like a generally useful and fair set of
rules. I want to offer some comments, though, about some specific issues
here.

>
>
> We are now asking for a final round of community comments.
> Please send any public comments to the pgsql-general list (only).
> If you wish to make a private comment, you may send it to
> c...@postgresql.org.
>
> The initial membership of the CoC committee will be announced separately,
> but shortly.
>

One of the issues I see here is the issue of cross-cultural attacks, and a
certain American slant on where inappropriate behavior might begin when it
comes to disparaging remarks.  In my blog I covered one hypothetical about
an argument via email signatures over a culture war issue like same-sex
marriage for example where one side might put forth an American viewpoint
and someone else might condemn sexual ethics that permit accepting
homosexual contact using, say, Gandhi as an authority.

This is a serious issue.  It won't go away.  There will be, at some point,
Americans trying to push these sorts of issues via email signatures and the
like, and it will cause conflict.  The current code of conduct makes it
very clear that the second viewpoint is not welcome, but is very ambiguous
on the first viewpoint.  I.e. arguing that marriage shouldn't be a bond
that binds parents to their children but solely exists for the benefit of
the spouses could be a cultural attack and hence an attack on the national
backgrounds of many people in the community around the world but that isn't
clear.  My concern is that the current code of conduct will lead to these
disputes ensuring that the CoC community gets to decide who gets to feel
like they are not protected, and I think we all agree that's not what we
want.

For this reason I think the introduction should be left as is, but I would
suggest one of two modifications to the second section (Inclusivity):

1.  Either include culture as a part of the protected criteria to indicate
that this definitely is protected and that culture-war pushing will not be
tolerated any more than any other disturbance of the peace, or
2.  Note that trolling or divisive political behavior likely to threaten
the peace will be dealt with as a violation of the code of conduct, or
3.  Simply demand civility and leave a lot of the examples out.

On to the code of conduct committee:

This needs to be explicitly international and ideally people from very
different cultures.  This is the best protection against one small group
within one country deciding to push a political agenda via the Code of
Conduct.  I would recommend adding a note here that the committee will be
international and culturally diverse, and tasked with keeping the peace and
facilitating a productive and collegial environment.


> Unless there are substantial objections, or nontrivial changes as a result
> of this round of comments, we anticipate making the CoC official as of
> July 1 2018.
>
> regards, tom lane
>
> [1] https://www.postgresql.org/message-id/56a8516b.8000...@agliodbs.com
>
>


-- 
Best Wishes,
Chris Travers

Efficito:  Hosted Accounting and ERP.  Robust and Flexible.  No vendor
lock-in.
http://www.efficito.com/learn_more


Re: Code of Conduct plan

2018-06-05 Thread Chris Travers
On Tue, Jun 5, 2018 at 6:21 PM, James Keener  wrote:

> I also think that a CoC focuses on the wrong things. If someone is
> disruptive, they need to be told to leave, just like in every forum ever.
>
> We should focus on ensuring that the code and documentation is free from
> slurs and culturally specific idioms. We should hold gatekeepers
> accountable for making decisions based on technical merit and not the
> person who proposed an idea or submitted a patch.
>
> We can't control the behavior of the internet as a whole. We can control
> our codebase and our gatekeepers.
>


I think in our case those fears are overblown.

There is a very well-founded fear among a lot of people of ideological
litmus tests being imposed on economic commons.  The current impetus for a
code of conduct here followed one attempt at that on some other projects.
On my blog I have discussed these things.  One can find them there.  I
think a whole lot of us understand that at some point there will be an
attempt to use our code of conduct to that end.  This has been discussed
before and one of the key points is that not  having a code of conduct
doesn't really protect us because the MO in these cases has been "Look at
that extremely offensive viewpoint!  You should have a code of conduct we
can use to throw him out!"  So having a code of conduct doesn't hurt and it
may provide a bulwark against some of the larger efforts in this regard.
In essence often not having a code of conduct is an encouragement for
people to push a politically charged code of conduct.  Having a politically
neutral code of conduct at least suggests we have rejected the politically
charged ones.

We are an international and largely politically neutral project.  I doubt
that as a community we would have tolerated trying to harass, for example,
either side in the recent Irish referendum to stop using PostgreSQL if they
were, or that we would tolerate an effort to politically hijack the
community for culture war issues, or trying to drive people out for trying
to form viable third parties in the US political landscape.  An
international CoC committee is our best defense against an effort to co-opt
the community in the way you are worried about.

-- 
Best Wishes,
Chris Travers

Efficito:  Hosted Accounting and ERP.  Robust and Flexible.  No vendor
lock-in.
http://www.efficito.com/learn_more


Re: Code of Conduct plan

2018-06-05 Thread Chris Travers
On Tue, Jun 5, 2018 at 7:01 PM, James Keener  wrote:

> I'm sorry for the double post.
>
> > If you read the reporting guidelines, it is requested that someone
> filing a
> report provides as much evidence as possible, and that is a really
> important provision, both for the person reporting and for the committee
> to review and adjudicate fairly.
>
> What does fairly mean?
>
> Let's role play. I'll be a homophobic person.
>
> You've just submitted a proposal suggesting that we change master-master
> replication to be multi-partner replication. I've told you I don't like the
> wording because of it's implication of supporting homosexual marriage,
> which I believe to be a personal offense to me, my marriage, and my "deeply
> held religious beliefs". You tell me that's not your intent and that you do
> not plan to change your proposed wording. You continue to use the term in
> all correspondences on the list and I continually tell you that supporting
> gay marriage is offensive and that you need to not be so deeply offensive.
> I submit all our correspondences to the CoC committee and complain that
> you're purposely using language that is extremely offensive.
>
> What is a "fair" outcome? Should you be banned? Should you be forced to
> change the wording of your proposal that no one else has complained about
> and others support? What is a fair, just outcome?
>

I think the fundamental outcome is likely to be that people who cause
trouble are likely to get trouble.  This sort of case really doesn't worry
me.  I am sure whoever is stirring the pot will be asked at least to cease
doing so.

But let's look at all fairness in a more likely scenario where someone
involved in, say, Human Rights Campaign posts something arguing that
marriage is not a bond that binds parents to their children but something
that exists solely for the benefit of the spouses and a conservative from,
say, India, complains.  Do we ask the individual to change his or her
signature?

What happens if the signature proclaims that Tibet should be free and
Chinese folks on the list worry about ramifications for participating in
these cases?

But worse, what if by not taking sides, we say that this isn't big enough
for us to adjudicate and so the conservative from India puts up a quote on
his email signature citing Gandhi's view that accepting consent-based
morality to sexual contact leads to accepting homosexual contact, and this
leads to misery for everyone.  When challenged he points out it is just
social critique like the other signature.

Now what do we do?  Do we side with one or the other?  Or do we ban both or
refuse to get involved?  At that point there are no longer any good options
but I will state my preference would be to reiterate to both that we ought
to have a live-and-let-live culture and this applies to cultural
differences on concepts of gender and marriage.

This sort of thing will happen.  I have watched calls for pushing gay and
lesbian roles on television in the US lead to policies of censorship of
Western media in countries like Indonesia (where Glee among other shows are
now formally banned), and this is one issue which is incredibly divisive
throughout the world with a lot of people having very deep-seated feelings
on the issue, where one can expect small differences to lead to big
conflicts.  And I think we want to avoid wading into those conflicts.

>
> Jim
>
> --
> Sent from my Android device with K-9 Mail. Please excuse my brevity.
>



-- 
Best Wishes,
Chris Travers

Efficito:  Hosted Accounting and ERP.  Robust and Flexible.  No vendor
lock-in.
http://www.efficito.com/learn_more


Re: Code of Conduct plan

2018-06-05 Thread Chris Travers
On Tue, Jun 5, 2018 at 8:42 PM, James Keener  wrote:

> I accidentally didn't send this to the whole list.  I'll let Chris resend
> his response if he'd like.
>
> On Tue, Jun 5, 2018 at 1:58 PM, James Keener  wrote:
>
>> I think the fundamental outcome is likely to be that people who cause
>>> trouble are likely to get trouble.  This sort of case really doesn't worry
>>> me.  I am sure whoever is stirring the pot will be asked at least to cease
>>> doing so.
>>>
>>>
>> Are you implying that either of my RPCs are causing "trouble" for either
>> advancing a technical proposal, not wanting to change wording they feel is
>> clear and non-political, or for voicing their concerns that a proposal is
>> highly offensive?
>>
>
There's an old Icelandic mythic poem "Lokasenna" which describes what I
have seen happening very well.  If you come to the feast to pick fights,
fights is what one will get.

>
>> The whole point of the CoC is that people shouldn't feel like they're
>> causing "trouble" if they feel like they're being picked on or offended or
>> marginalized. That's specifically why people want them: they want to know,
>> or at least feel like, they'll be taken seriously if someone is
>> legitimately picking on them or marginalizing them.
>>
>> I complain a lot about the CoC, but I agree with Tom (I think it was) in
>> saying that there does need to be some written framework for how disputes
>> are handled by the organization. I just feel that CoC has, unfortunately,
>> become a politically charged term that often find themselves talking about
>> politically charged subjects instead of saying you should focus on
>> technical topics and not on the person when discussing a technical topic
>> and how a dispute will be handled if someone is misbehaving. I've seen them
>> used as weapons in real life and have watch disputes play out over the
>> internet, e.g. the famous push for opal to adop the Contributor Covenent by
>> someone not affiliated with the project and who (potentially/allegedly)
>> misunderstood a partial conversation they heard. (
>> https://github.com/opal/opal/issues/941).
>>
>> The question is: how can you (honestly) make people feel like we'll take
>> complaints seriously, while also not allowing for the politics that I've
>> seen surround recent incarnations of Codes of Conduct?
>>
>> Jim
>>
>
>
At the end I see signals in the current CoC that make me hopeful.  Phrases
like "common interest" occur.  There are some minor changes I think would
help avoid problems.  But they aren't big deals.  The big thing is I trust
our community not to exclude people based, for example, on political or
cultural perspectives and thats really important.

-- 
Best Wishes,
Chris Travers

Efficito:  Hosted Accounting and ERP.  Robust and Flexible.  No vendor
lock-in.
http://www.efficito.com/learn_more


Re: Code of Conduct plan

2018-06-05 Thread Chris Travers
On Wed, Jun 6, 2018 at 2:07 AM, Jan Claeys  wrote:

> On Tue, 2018-06-05 at 16:45 +0200, Chris Travers wrote:
> > If I may suggest:  The committee should be international as well
> > and include people from around the world.  The last thing we want is
> > for it to be dominated by people from one particular cultural
> > viewpoint.
>
> Being international/intercultural certainly has some value, but I think
> it's at least as useful to have people with different competencies and
> professional backgrounds.
>
> For example: having some people who have a background in something like
> psychology, sociology, education, law, human resources, marketing, etc.
> (in addition to the likely much easier to find developers, DBAs and IT
> managers) would be valuable too.
>

Besides what the others have said I don't think this would help.

The real fear here is the code of conduct being co-opted as a weapon of
world-wide culture war and that's what is driving a lot of the resistance
here.  This is particularly an American problem here and it causes  a lot
of resistance among people who were, until the second world war, subject to
some pretty serious problems by colonial powers.

Putting a bunch of American lawyers, psychologists, sociologists, marketers
etc on the board in the name of diversity would do way more harm than good.


>
>
> --
> Jan Claeys
>
>


-- 
Best Wishes,
Chris Travers

Efficito:  Hosted Accounting and ERP.  Robust and Flexible.  No vendor
lock-in.
http://www.efficito.com/learn_more


Re: Code of Conduct plan

2018-06-07 Thread Chris Travers
On Thu, Jun 7, 2018 at 2:14 AM, Jan Claeys  wrote:

> On Wed, 2018-06-06 at 07:27 +0200, Chris Travers wrote:
> > The real fear here is the code of conduct being co-opted as a weapon
> > of world-wide culture war and that's what is driving a lot of the
> > resistance here.  This is particularly an American problem here and
> > it causes  a lot of resistance among people who were, until the
> > second world war, subject to some pretty serious problems by colonial
> > powers.
>
> I don't see how this could happen any more than it already can, because
> as far as I can tell the goal is not to discuss complaints in public;
> the committee would handle cases in private.  And if committee members
> would try to abuse their power, I'm pretty sure they would be removed.
>

Right.  I think the fears are overblown but you do have to remember that we
started this whole public side of the process when there was a real effort
by some in around open source to push contributor codes of conducts that
were expressly political (the Contributor Covenant for example) and in the
wake of Opalgate.

I do not doubt that at some point we will face the same.  I don't doubt
that such efforts will be unsuccessful.  But I do think they will put the
project through some public controversy and grief and so we are best off to
try to minimize the attack surface.

>
> > Putting a bunch of American lawyers, psychologists, sociologists,
> > marketers etc on the board in the name of diversity would do way more
> > harm than good.
>
> I didn't say they have to be American, and I didn't say there has to be
> a bunch of them.  I just said it would be good if there were also
> people who aren't (just only) developers, DBAs or other very technical
> people.
>

Ok I get what your concern is now.  I am not sure the formal qualifications
matter but I would agree that the committee needs to be staffed with people
we trust to be good "people people" rather than good "tech people."

>
>
> --
> Jan Claeys
>
>


-- 
Best Wishes,
Chris Travers

Efficito:  Hosted Accounting and ERP.  Robust and Flexible.  No vendor
lock-in.
http://www.efficito.com/learn_more


Re: Code of Conduct plan

2018-06-07 Thread Chris Travers
On Fri, Jun 8, 2018 at 7:09 AM, Christophe Pettus  wrote:

>
> > On Jun 7, 2018, at 21:00, Gavin Flower 
> wrote:
> >
> >> I have to say that this seems like a red herring to me.
> > Not entirely.  American web sites tend to insist on weird date format,
> and insist on the archaic imperial units rather than the metric system that
> most people in the world use.
>
> Then you will be pleased to know that neither writing dates day-of-month
> first, nor using meters, will be Code of Conduct violations. :)
>

However, this is a symptom of a much larger and deeper set of issues.
Americans, particularly in the cities, tend to take deep offense to
political disagreements and this happens on both sides, because in the US,
politics is often extremely binary.

A closely related problem here is that in the US, this often constitutes a
sort of signaling as to whether someone is going to get a fair chance or
not.  This is very different from Europe, where political discrimination is
barred under the European Charter of Fundamental Rights.

>
> > For example try defining something simple, like what is a car!
> [...]
> >
> > Try defining success at university
>
> It is equally unlikely that the Code of Conduct committee will need to
> decide what a car is, or whether or not someone has succeeded at university.
>
> I'm not trying to be snide, but this does seem to be exactly what I was
> talking about: When asked for examples of cultural differences that might
> run afoul of the CoC, the examples don't seem to be either relevant (i.e.,
> they are not things the CoC committee will have to address), or are clearly
> contextual in a way that a human will have no trouble understanding.
>

I think they are likely to have to decide whether wearing a MAGA hat at a
conference is allowed (and now that I know the people who did this were
South Africans I personally feel bad about not conversing with them).

They might also have to decide whether statements like the following is
disparaging based on protected group characteristics:

"I am enough of a Marxist to see gender as a qualitative relationship to
biological reproduction, and this is something GLBT groups don't fit into
very well."

>
> > I've called a friend of mine a bastard, but he took it as a mark of
> respect in the context of our discussion.
>
> This is why we have human beings, rather than a regex, forming the Code of
> Conduct committee.  It's important to remember that the CoC committee is
> not going to be going around policing the community for potential
> violations; their job is to resolve actual situations between real people.
> It's not their job to define values; it's their job to resolve situations.
> In my experience in dealing with CoC issues, the situations (while often
> complex) are rarely of the form, "This word does not mean anything bad
> where I come from."
>

This is true, but the committee needs to be culturally diverse and 
advertised as such.  Otherwise I am concerned that it would signal to some
people that a certain sort of abuse would be permitted.

>
> --
> -- Christophe Pettus
>x...@thebuild.com
>
>
>


-- 
Best Wishes,
Chris Travers

Efficito:  Hosted Accounting and ERP.  Robust and Flexible.  No vendor
lock-in.
http://www.efficito.com/learn_more


Re: Code of Conduct plan

2018-06-07 Thread Chris Travers
On Fri, Jun 8, 2018 at 7:53 AM, Tom Lane  wrote:

> Christophe Pettus  writes:
> > 2. I don't think that there is a country where someone being driven out
> of a technical community by harassment is an acceptable local value.
>
> Yeah, this.  People that I've known and respected, and who did not seem
> at all thin-skinned, have left this community because of harassment.
> We need to try to stop that, not because of "political correctness",
> but to ensure that our community has a long-term future.
>
> It's not a simple thing, and I don't envy the CoC committee's task.
> For instance, I hope we can all agree that sexual harassment is
> unacceptable --- but I can imagine that what one person thought was
> friendly banter was harassment to the other, particularly if different
> cultures are involved.  The committee will likely have to sort out such
> situations and try to reconcile the two people without either starting a
> war or driving away either person.  They may not always succeed.  But not
> trying is not a better answer.


For what its worth, I am 100% in agreement with everything Tom just said
here.
-- 
Best Wishes,
Chris Travers

Efficito:  Hosted Accounting and ERP.  Robust and Flexible.  No vendor
lock-in.
http://www.efficito.com/learn_more


Re: Code of Conduct plan

2018-06-08 Thread Chris Travers
On Fri, Jun 8, 2018 at 11:18 AM, Justin Clift  wrote:

> On 2018-06-08 09:46, Simon Riggs wrote:
> 
>
>> Would it not be better to consider arbitration as the first step in
>> dispute resolution?
>>
>
> This bit sounds like it'd need to be on a case-by-case basis.
>
> It's pretty easy to imagine scenarios where arbitration wouldn't be
> appropriate.
>
> Whether or not they come about in the PG Community or not is a
> different matter.
>
> My point being that arbitration isn't necessarily automatically the
> right direction.
>
> I'd probably leave it up to the CoC team/people to figure it out. :)


+1

If it were me I would just say that CoC has an obligation to try in good
faith to resolve things in line with the common interest of an
international community and leave it at that.

>
>
> + Justin
>
>


-- 
Best Wishes,
Chris Travers

Efficito:  Hosted Accounting and ERP.  Robust and Flexible.  No vendor
lock-in.
http://www.efficito.com/learn_more


RE: Do we need yet another IDE (SQL development assistant) for PostgreSQL?

2018-07-16 Thread Chris Coutinho
VS Code is open source (MIT License). You may be thinking of Visual Studio, the 
closed source IDE - the two are not the same.

Kind regards,
REDstack BV

Chris Coutinho
Onderzoeker/Data analist

-Original Message-
From: Tim Clarke [mailto:tim.cla...@minerva-analytics.info] 
Sent: Monday, 16 July 2018 13:09 
To: pgsql-general@lists.postgresql.org
Cc: Dmitry Igrishin 
Subject: Re: Do we need yet another IDE (SQL development assistant) for 
PostgreSQL?

-1 for VSC not being open source

Tim Clarke


On 16/07/18 11:47, Dmitry Igrishin wrote:
>
>
> Mon, 16 Jul 2018 at 13:41, Tim Clarke :
>
> +1 for not re-inventing the wheel - building on Netbeans or the
> Eclipse
> project would save you heaps of time and effort and provide
> cross-platform out of the box. I use Eclipse all the time.
>
> I agree and don't want to waste my time for reinventing the wheel. And 
> I'm also considering Visual Studio Code as the base.




Replication protocol question for logical replication.

2018-07-23 Thread Chris Travers
While we are building a streaming logical backup solution for our data
warehouse environment, we made a surprising discovery: SELECT statements
can be executed on a replication connection. I cannot find any clear
documentation that says this is supported, but I think it ought to be,
since that way you can, for example, copy a table to stdout.
Is this expected?

Chriss-MBP:postgres christravers$ PGSSLMODE=prefer psql "host=localhost
port=5432 dbname=postgres replication=database" -c "IDENTIFY_SYSTEM"

  systemid   | timeline |  xlogpos   |  dbname

-+--++--

 6562045063708868894 |1 | 0/CC9F4920 | postgres

(1 row)


Chriss-MBP:postgres christravers$ PGSSLMODE=prefer psql "host=localhost
port=5432 dbname=postgres replication=database" -c "SELECT VERSION()"

version


---

 PostgreSQL 10.4 on x86_64-apple-darwin17.5.0, compiled by Apple LLVM
version 9.1.0 (clang-902.0.39.2), 64-bit

(1 row)


I am asking because if this is clearly intended to be supported we would
like to work with upstream library vendors to make sure this functionality
is included. My reading of the docs is that parameterized queries are not
possible but simple queries are.  Is this correct?

-- 
Best Wishes,
Chris Travers

Efficito:  Hosted Accounting and ERP.  Robust and Flexible.  No vendor
lock-in.
http://www.efficito.com/learn_more


Re: Postgresql

2018-08-19 Thread Chris Travers
On Sun, Aug 19, 2018 at 7:57 PM Sonam Sharma  wrote:

> Thank you! Can you please help me with any advantages/disadvantages.. my
> db size is less than 10gb. I am very new to this.
>

At 10GB you are unlikely to hit performance limitations that make you think
about how to use PostgreSQL more effectively.  You will have to avoid doing
the sorts of things you want to avoid doing on all databases.

DB2 and PostgreSQL are both very feature-rich databases but in somewhat
different directions.   For example, DB2 has more storage options than
PostgreSQL does.   But again at 10GB it is unlikely you will have to worry
about these.

Where PostgreSQL really shines is in general workload tuning and
programmability.  Especially programmability.  PostgreSQL is extremely
extensible and this means not only can you build your own extensions for
whatever you need to do (assuming some skill) but there are tons of
extensions you can use for free including great ones like PostGIS.

Generally you can expect at 10GB to have to avoid needless complexity like
repeated sequential scans. Indexing becomes a bit important.  By 100GB you
have to pay significant attention to index strategies and disk access.  I
am aware of single databases of up to 50TB in size used in production and
federated storage environments into the petabytes.

In general I can think of no reason not to use PostgreSQL, unless your
salary depends on paying license fees.

>
> On Sun, Aug 19, 2018, 11:25 PM Adrian Klaver 
> wrote:
>
>> On 08/19/2018 10:53 AM, Sonam Sharma wrote:
>> >
>> >
>> >
>> >
>> > I am planning to migrate my db from db2 to postgresql. Before that I
>> > wanted to know is postgresql better than db2? Is it completely free ?
>> > Any performance issues or any limitations?
>>
>> Yes, yes, depends
>>
>> --
>> Adrian Klaver
>> adrian.kla...@aklaver.com
>>
>

-- 
Best Wishes,
Chris Travers

Efficito:  Hosted Accounting and ERP.  Robust and Flexible.  No vendor
lock-in.
http://www.efficito.com/learn_more


Slow shutdowns sometimes on RDS Postgres

2018-09-13 Thread Chris Williams
Hi,

I'm using AWS RDS Postgres (9.6.6) and have run into very slow shutdowns
(10+ minutes) a few times when making database modifications (e.g. reboot,
changing instance size, etc.).  Other times, it shuts down quickly (1
minute or so).  I have not been able to figure out why sometimes it takes a
long time to shutdown.

When it happens, I see a bunch of lines in the postgres log like the
following over and over (almost every second or two) during this 10 minute
shutdown period:
2018-09-12 06:37:01 UTC:XXX.XXX.XXX.XXX(19712):my_user@my_db
:[16495]:FATAL:
2018-09-12 06:37:01 UTC:localhost(31368):rdsadmin@rdsadmin:[16488]:FATAL:
the database system is shutting down

Once I start seeing these messages, I start manually shutting down all of
our applications that are connected to the db.  I'm not sure if shutting
down the apps fixes it or if there's some timeout on the RDS side, but it
seems like once I start doing this, the database finally shuts down.

When it takes this long to shut down, it ends up causing a lot more
downtime than I would like.  I've tried asking AWS's support why it takes
so long to shutdown sometimes, but they basically just told me that's "how
it works" and that I should try to shut down all of my connections ahead of
time before making database modifications.

We just have a few ruby on rails applications connected to the database,
and don't really have any long running or heavy queries and the db is under
very light load, so I don't understand why it takes so long to shutdown.
We do have a sizeable number of connections though (about 600) and there
are two replicas connected to it.  I also tried
setting idle_in_transaction_session_timeout to 300 seconds to see if that
would help, but it made no difference.

I was wondering if anyone else had seen this behavior on their RDS Postgres
instances or had any suggestions on how I could shorten the shutdown time?
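
One stopgap I am considering is terminating the application backends from 
SQL just before kicking off the modification, instead of shutting each app 
down by hand (a rough sketch, assuming the connecting role has the privilege 
to terminate those backends; 'my_db' is a placeholder):

SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE datname = 'my_db'
  AND pid <> pg_backend_pid();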

Thanks,
Chris


Re: Slow shutdowns sometimes on RDS Postgres

2018-09-13 Thread Chris Williams
Yeah, I figured that.  Unfortunately, every time it happens, I open a
support ticket with them, but they always just tell me that this is normal
behavior for postgres.  Whether it's "normal" or not, I really would like
to get my db to shut down faster, and their suggestion of manually shutting
down all of my apps ahead of time is a real headache.  Given that I haven't
gotten anywhere with their support, I figured I'd try asking on the mailing
list.

Thanks,
Chris

On Thu, Sep 13, 2018 at 3:17 PM Joshua D. Drake 
wrote:

> On 09/13/2018 03:04 PM, Chris Williams wrote:
> > Hi,
> >
> > I'm using AWS RDS Postgres (9.6.6) and have run into very slow
> > shutdowns (10+ minutes) a few times when making database modifications
> > (e.g. reboot, changing instance size, etc.).  Other times, it shuts
> > down quickly (1 minute or so).  I have not been able to figure out why
> > sometimes it takes a long time to shutdown.
>
> This is probably something you would have to talk to Amazon about. AWS
> RDS Postgres is a fork of PostgreSQL and not 100% compatible from an
> administrative perspective.
>
> JD
>
>
>
> >
> > When it happens, I see a bunch of lines in the postgres log like the
> > following over and over (almost every second or two) during this 10
> > minute shutdown period:
> > 2018-09-12 06:37:01
> > UTC:XXX.XXX.XXX.XXX(19712):my_user@my_db:[16495]:FATAL:
> > 2018-09-12 06:37:01
> > UTC:localhost(31368):rdsadmin@rdsadmin:[16488]:FATAL: the database
> > system is shutting down
> >
> > Once I start seeing these messages, I start manually shutting down all
> > of our applications that are connected to the db.  I'm not sure if
> > shutting down the apps fixes it or if there's some timeout on the RDS
> > side, but it seems like once I start doing this, the database finally
> > shuts down.
> >
> > When it takes this long to shut down, it ends up causing a lot more
> > downtime than I would like.  I've tried asking AWS's support why it
> > takes so long to shutdown sometimes, but they basically just told me
> > that's "how it works" and that I should try to shut down all of my
> > connections ahead of time before making database modifications.
> >
> > We just have a few ruby on rails applications connected to the
> > database, and don't really have any long running or heavy queries and
> > the db is under very light load, so I don't understand why it takes so
> > long to shutdown.  We do have a sizeable number of connections though
> > (about 600) and there are two replicas connected to it.  I also tried
> > setting idle_in_transaction_session_timeout to 300 seconds to see if
> > that would help, but it made no difference.
> >
> > I was wondering if anyone else had seen this behavior on their RDS
> > Postgres instances or had any suggestions on how I could shorten the
> > shutdown time?
> >
> > Thanks,
> > Chris
> >
>
> --
> Command Prompt, Inc. || http://the.postgres.company/ || @cmdpromptinc
> ***  A fault and talent of mine is to tell it exactly how it is.  ***
> PostgreSQL centered full stack support, consulting and development.
> Advocate: @amplifypostgres || Learn: https://postgresconf.org
> * Unless otherwise stated, opinions are my own.   *
>
>


Re: Code of Conduct plan

2018-09-14 Thread Chris Travers
On Wed, Sep 12, 2018 at 10:53 PM Tom Lane  wrote:

> I wrote:
> > Stephen Frost  writes:
> >> We seem to be a bit past that timeline...  Do we have any update on when
> >> this will be moving forward?
> >> Or did I miss something?
>
> > Nope, you didn't.  Folks have been on holiday which made it hard to keep
> > forward progress going, particularly with respect to selecting the
> initial
> > committee members.  Now that Magnus is back on shore, I hope we can
> > wrap it up quickly --- say by the end of August.
>
> I apologize for the glacial slowness with which this has all been moving.
> The core team has now agreed to some revisions to the draft CoC based on
> the comments in this thread; see
>
> https://wiki.postgresql.org/wiki/Code_of_Conduct
>
> (That's the updated text, but you can use the diff tool on the page
> history tab to see the changes from the previous draft.)
>

I really have to object to this addition:
"This Code is meant to cover all interaction between community members,
whether or not it takes place within postgresql.org infrastructure, so long
as there is not another Code of Conduct that takes precedence (such as a
conference's Code of Conduct)."

That covers things like public twitter messages over live political
controversies which might not be personally directed.   At least if one is
going to go that route, one ought to *also* include a safe harbor for
non-personally-directed discussions of philosophy, social issues, and
politics.  Otherwise, I think this is asking for trouble.  See, for
example, what happened with Opalgate and how this could be seen to
encourage use of this to silence political controversies unrelated to
PostgreSQL.

>
> I think we are about ready to announce the initial membership of the
> CoC committee, as well, but that should be a separate post.
>
> regards, tom lane
>
>

-- 
Best Wishes,
Chris Travers

Efficito:  Hosted Accounting and ERP.  Robust and Flexible.  No vendor
lock-in.
http://www.efficito.com/learn_more


Re: Code of Conduct plan

2018-09-14 Thread Chris Travers
On Fri, Sep 14, 2018 at 10:31 AM Chris Travers 
wrote:

>
>
> On Wed, Sep 12, 2018 at 10:53 PM Tom Lane  wrote:
>
>> I wrote:
>> > Stephen Frost  writes:
>> >> We seem to be a bit past that timeline...  Do we have any update on
>> when
>> >> this will be moving forward?
>> >> Or did I miss something?
>>
>> > Nope, you didn't.  Folks have been on holiday which made it hard to keep
>> > forward progress going, particularly with respect to selecting the
>> initial
>> > committee members.  Now that Magnus is back on shore, I hope we can
>> > wrap it up quickly --- say by the end of August.
>>
>> I apologize for the glacial slowness with which this has all been moving.
>> The core team has now agreed to some revisions to the draft CoC based on
>> the comments in this thread; see
>>
>> https://wiki.postgresql.org/wiki/Code_of_Conduct
>>
>> (That's the updated text, but you can use the diff tool on the page
>> history tab to see the changes from the previous draft.)
>>
>
> I really have to object to this addition:
> "This Code is meant to cover all interaction between community members,
> whether or not it takes place within postgresql.org infrastructure, so
> long as there is not another Code of Conduct that takes precedence (such as
> a conference's Code of Conduct)."
>
> That covers things like public twitter messages over live political
> controversies which might not be personally directed.   At least if one is
> going to go that route, one ought to *also* include a safe harbor for
> non-personally-directed discussions of philosophy, social issues, and
> politics.  Otherwise, I think this is asking for trouble.  See, for
> example, what happened with Opalgate and how this could be seen to
> encourage use of this to silence political controversies unrelated to
> PostgreSQL.
>

Suggestion instead:

"Personally directed behavior is not automatically excluded from this code
of conduct merely because it does not happen on the postgresql.org
infrastructure.  In the case where a dispute of such a nature occurs
outside said infrastructure, if other parties are unable to act, this code
of conduct may be considered where it is, on the balance, in the interest
of the global community to do so."

This preserves the ability to act, without basically providing the same
invitation for problems.


>> I think we are about ready to announce the initial membership of the
>> CoC committee, as well, but that should be a separate post.
>>
>> regards, tom lane
>>
>>
>
> --
> Best Wishes,
> Chris Travers
>
> Efficito:  Hosted Accounting and ERP.  Robust and Flexible.  No vendor
> lock-in.
> http://www.efficito.com/learn_more
>


-- 
Best Wishes,
Chris Travers

Efficito:  Hosted Accounting and ERP.  Robust and Flexible.  No vendor
lock-in.
http://www.efficito.com/learn_more


Re: Code of Conduct plan

2018-09-14 Thread Chris Travers
On Fri, Sep 14, 2018 at 11:45 AM Ilya Kosmodemiansky 
wrote:

> On Fri, Sep 14, 2018 at 10:31 AM, Chris Travers 
> wrote:
> > I really have to object to this addition:
> > "This Code is meant to cover all interaction between community members,
> > whether or not it takes place within postgresql.org infrastructure, so
> long
> > as there is not another Code of Conduct that takes precedence (such as a
> > conference's Code of Conduct)."
> >
> > That covers things like public twitter messages over live political
> > controversies which might not be personally directed.   At least if one
> is
> > going to go that route, one ought to *also* include a safe harbor for
> > non-personally-directed discussions of philosophy, social issues, and
> > politics.  Otherwise, I think this is asking for trouble.  See, for
> example,
> > what happened with Opalgate and how this could be seen to encourage use
> of
> > this to silence political controversies unrelated to PostgreSQL.
>
> I think, this point has nothing to do with _correct_ discussions or
> public tweets.
>
> If one community member tweets publicly and in a way which abuses
> other community members, it is obvious CoC violation. It is hard to
> imagine healthy community if someone interacts with others  correctly
> on the list or at a conference because the CoC stops him doing things
> which he will do on private capacity to the same people when CoC
> doesnt apply.
>
> If someone reports CoC violation just because other community member's
> _correct_ public tweet or whatsoever  expressed different
> political/philosophical/religious views, this is a quite different
> story. I suppose CoC committee and/or Core team in this case should
> explain the reporter the purpose of CoC rather than automatically
> enforce it.
>

So first, I think what the clause is trying to do is address cases where
harassment targeting a particular community member takes place outside the
infrastructure and frankly ensuring that the code of conduct applies in
these cases is important and something I agree with.

However, let's look at problem cases:

"I am enough of a Marxist to see gender as a qualitative relationship to
biological reproduction and maybe economic production too."

I can totally imagine someone arguing that such a tweet might be abusive,
and certainly not "correct."

Or consider:

"The effort to push GLBT rights on family-business economies is nothing
more than an effort at corporate neocolonialism."

Which would make the problem more clear.  Whether or not a comment like
that occurring outside postgresql.org infrastructure would be considered
"correct" or "abusive" is ultimately a political decision and something
which, once that fight is picked, has no reasonable solution in an
international and cross-cultural product (where issues like sexuality,
economics, and how gender and individualism intersect will vary
dramatically across members around the world).  There are people who will
assume that both of the above statements are personally offensive and
attacks on the basis of gender identity even if they are critiques of
political agendas severable from that.  Worse, the sense of attack
themselves could be seen as attacks on culture or religions of other
participants.

Now neither of these comments would be tolerated as viewpoints expressed on
PostgreSQL.org email lists because they are off-topic, but once one expands
the code of conduct in this way they become fair game.  Given the way
culture war issues are shaping up particularly in the US, I think one has
to be very careful not to set an expectation that this applies to literally
everything that anyone does anywhere.

So maybe something more like:

"Conduct that occurs outside the postgresql.org infrastructure is not
automatically excluded from enforcement of this code of conduct.  In
particular if other parties are unable to act, and if it is, on balance, in
the interest of the global community to apply the code of conduct, then the
code of conduct shall apply."

>
> > --
> > Best Wishes,
> > Chris Travers
> >
> > Efficito:  Hosted Accounting and ERP.  Robust and Flexible.  No vendor
> > lock-in.
> > http://www.efficito.com/learn_more
>


-- 
Best Wishes,
Chris Travers

Efficito:  Hosted Accounting and ERP.  Robust and Flexible.  No vendor
lock-in.
http://www.efficito.com/learn_more


Re: Code of Conduct plan

2018-09-14 Thread Chris Travers
On Fri, Sep 14, 2018 at 4:14 PM Dave Page  wrote:

>
>
> On Fri, Sep 14, 2018 at 3:08 PM, Joshua D. Drake 
> wrote:
>
>> On 09/14/2018 01:31 AM, Chris Travers wrote:
>>
>>
>> I apologize for the glacial slowness with which this has all been moving.
>>> The core team has now agreed to some revisions to the draft CoC based on
>>> the comments in this thread; see
>>>
>>> https://wiki.postgresql.org/wiki/Code_of_Conduct
>>>
>>> (That's the updated text, but you can use the diff tool on the page
>>> history tab to see the changes from the previous draft.)
>>>
>>
>> I really have to object to this addition:
>> "This Code is meant to cover all interaction between community members,
>> whether or not it takes place within postgresql.org infrastructure, so
>> long as there is not another Code of Conduct that takes precedence (such as
>> a conference's Code of Conduct)."
>>
>> That covers things like public twitter messages over live political
>> controversies which might not be personally directed.   At least if one is
>> going to go that route, one ought to *also* include a safe harbor for
>> non-personally-directed discussions of philosophy, social issues, and
>> politics.  Otherwise, I think this is asking for trouble.  See, for
>> example, what happened with Opalgate and how this could be seen to
>> encourage use of this to silence political controversies unrelated to
>> PostgreSQL.
>>
>>
>> I think this is a complicated issue. On the one hand, postgresql.org has
>> no business telling people how to act outside of postgresql.org. Full
>> stop.
>>
>
> I'm going to regret jumping in here, but...
>
> I disagree. If a community member decides to join forums for other
> software and then strongly promotes PostgreSQL to the point that they
> become abusive or offensive to people making other software choices, then
> they are clearly bringing the project into disrepute and we should have
> every right to sanction them by preventing them participating in our
> project in whatever ways are deemed appropriate.
>

 Actually, the easier case here is not being abusive to MySQL users, as the
code of conduct really doesn't clearly cover that anyway.  The easier case
is where two people have a feud and one person carries on a harassment
campaign over various forms of social media.  The current problem is:

1.  The current code of conduct is not clear as to whether terms of
service/community standards of, say, Reddit, supersede or not, and
2.  The community has to act (even if it includes behavior at a
conference which has its own code of conduct)

So I think the addition is both over inclusive and under inclusive.   It is
over inclusive because it invites a certain group of (mostly American)
people to pick fights (not saying this is all Americans).  And it is under
inclusive because there are cases where the code of conduct *should* be
employed when behavior includes behavior at events which might have their
own codes of conduct.

On the other side, consider someone carrying on a low-grade harassment
campaign against another community member at a series of conferences where
each conference may not amount to a real actionable concern but where the
pattern as a whole might.  There's the under inclusive bit.

So I don't like this clause because I think it invites problems and doesn't
solve issues.
-- 
Best Wishes,
Chris Travers

Efficito:  Hosted Accounting and ERP.  Robust and Flexible.  No vendor
lock-in.
http://www.efficito.com/learn_more


Re: Code of Conduct plan

2018-09-14 Thread Chris Travers
On Fri, Sep 14, 2018 at 4:16 PM Tom Lane  wrote:

> [ Let's try to trim this discussion to just -general, please ]
>
> Robert Eckhardt  writes:
> > On Fri, Sep 14, 2018 at 9:41 AM, Adrian Klaver
> >  wrote:
> >> On 9/14/18 1:31 AM, Chris Travers wrote:
> >>> I really have to object to this addition:
> >>>> "This Code is meant to cover all interaction between community
> members,
> >>>> whether or not it takes place within postgresql.org <
> http://postgresql.org>
> >>>> infrastructure, so long as there is not another Code of Conduct that
> takes
> >>>> precedence (such as a conference's Code of Conduct)."
>
> >> I second that objection. It is not in PGDG's remit to cure the world,
> for
> >> whatever form of cure you ascribe to. This is especially true as
> 'community
> >> member' has no strict definition.
>
> > I understand the concern, however, if you look at how attacks happen
> > it is frequently through other sites. Specifically under/poorly
> > moderated sites. For specific examples, people who have issues with
> > people on Quora will frequently go after them on Facebook and Twitter.
>
> Actually, that addition was in response to concerns that the previous
> version didn't delimit the intended scope of the document *at all*.
> So I would say it's more restricted now than the previous version.
>
> I feel that most of the concerns being raised today are straw men.
> If the PG lists were a place for political discussion, there'd be
> valid points to worry about as to whether a CoC might be used to
> stifle free speech.  But every example that's been given has been
> not merely off-topic but wildly so, so I don't find the discussion
> to be very realistic.
>

If the code of conduct limited conduct that related to postgresql.org
infrastructure, I would agree.  This one explicitly includes all kinds of
interactions which are beyond that.

I assume "all interaction between members" could include having a few beers
at a pub, or being in an argument over the scope of human rights on
facebook, and I think there are people who will read it that way.

-- 
Best Wishes,
Chris Travers

Efficito:  Hosted Accounting and ERP.  Robust and Flexible.  No vendor
lock-in.
http://www.efficito.com/learn_more


Re: Code of Conduct plan

2018-09-14 Thread Chris Travers
On Fri, Sep 14, 2018 at 4:51 PM Dave Page  wrote:

>
>
> On Fri, Sep 14, 2018 at 3:43 PM, Joshua D. Drake 
> wrote:
>
>> On 09/14/2018 07:36 AM, Dave Page wrote:
>>
>>
>>
>> On Fri, Sep 14, 2018 at 3:21 PM, James Keener  wrote:
>>
>>>
>>> Now, you may say that (2) would be rejected by the committee, but I would
>>>>> counter that it's still a stain on me and something that will forever
>>>>> appear
>>>>> along side my name in search results and that the amount of time and
>>>>> stress it'd take me to defend myself would make my voluntarily leaving
>>>>> the community, which would be seen as an admission of guilt, my only
>>>>> option.
>>>>>
>>>>
>>>> If you had read the policy, you would know that wouldn't happen as
>>>> reports and details of reports are to be kept confidential.
>>>>
>>>
>>> That doesn't mean I won't be strung along and it doesn't mean that the
>>> attacker can't release those details. Remember, I'm worried
>>> about politically motivated attacks, and attacks meant to silence
>>> opposing viewpoints, not legitimate instances of harassment.
>>>
>>
>> Sure, but an attacker can do that now. Having the CoC doesn't change
>> anything there, though it does give us a framework to deal with it.
>>
>>
>>>
>>>
>>>>
>>>>
>>>>>
>>>>> People are shitheads. People are assholes. We're not agreeing to join
>>>>> some organization and sign an ethics clause when signing up for the
>>>>> mailing
>>>>> list.  The current moderators can already remove bad actors from the
>>>>> list.
>>>>> How they act outside of the list is non of this list's concern.
>>>>>
>>>>
>>>> The lists are just one of many different ways people in this community
>>>> interact.
>>>>
>>>
>>> So? We interact with people outside of specific groups all the time.
>>> Baring specific
>>> agreements to the contrary, why should any one group claim
>>> responsibility of my
>>> personal business?
>>>
>>
>> If that business is publicly bringing the project into disrepute, or
>> harassing other community members and they approach us about it, then it
>> becomes our business.
>>
>> If it's unrelated to PostgreSQL, then it's your personal business and not
>> something the project would get involved in.
>>
>>
>> O.k. so this isn't clear (at least to me) within the CoC. I want to make
>> sure I understand. You are saying that if a community member posts on
>> Twitter that they believe gays are going to hell, reporting that to the CoC
>> committee would result in a non-violation UNLESS they referenced postgresql
>> within the post?
>>
>
> Yes, I believe so. Isn't that what "To that end, we have established this Code
> of Conduct for community interaction and participation in the project’s
> work and the community at large." basically says?
>
> And in the end, a broad scope is required to some extent.

I want to be clear about where my concern and objection is:

1.  I think PostgreSQL, as an international project with people from many
different walks of life and different cultures needs to stay out of culture
war topics or assigning truth values to political viewpoints to the extent
absolutely possible.  We do this today and we must continue to do this.
2.  Compared to the rest of the world, people from my culture (the US) have
a tendency to take disagreements regarding political policies, social
theories, etc. personally and see abuse/attack where mere disagreement was
present.  People making complaints aren't necessarily acting in bad faith.
3.  If we don't set the expectation ahead of time that we remain
pluralistic in terms of political philosophy, culture, then it is way too
easy to end up in a situation where people are bringing up bad press for
failing to kick out people who disagree with them.

Like it or not there are precedents for this in the open source community,
such as the dismissal of Brendan Eich, and in an international project with
developers from all kinds of cultures with different views on deeply
divisive issues, such conflicts could hurt our community.

-- 
Best Wishes,
Chris Travers

Efficito:  Hosted Accounting and ERP.  Robust and Flexible.  No vendor
lock-in.
http://www.efficito.com/learn_more


Re: Code of Conduct plan

2018-09-14 Thread Chris Travers
On Fri, Sep 14, 2018 at 7:47 PM Peter Geoghegan  wrote:

> On Fri, Sep 14, 2018 at 10:31 AM, Dimitri Maziuk 
> wrote:
> > So let me get this straight: you want to have a "sanctioned" way to deny
> > people access to postgresql community support channel?
>
> Yes.
>
> > "Because
> > somebody who may or may not be the same person, allegedly said something
> > somewhere that some other tweet disagreed with on faceplant"?
> >
> > Great plan if you do for-pay postgresql support for the living.
>
> You can make your own conclusions about my motivations, just as I'll
> make my own conclusions about yours. I'm not going to engage with you
> on either, though.
>

With regard to the  concerns about authoritarianism, I have to defend the
Code of Conduct here.

It's not anything of the above.  The PostgreSQL project has a pretty good
track record of ensuring that people can participate across boundaries of
culture, ethnicity, political ideology (which is always informed by culture
and ethnicity), and the like.  On the whole I trust the committee to make
sound judgments.

The thing is, yes, it is scary that someone might be effectively denied
access to the commons based on false accusations, but it is also concerning
that people might be driven away from the commons by aggressive harassment (on
or off list) or the like.  The code of conduct is a welcome step in that
regard.  I think we should trust a long-standing community with a track
record of generally cultivating access to the commons to make decisions
which foster that.

This is basic governance.  Communities require arbitration and management
of the economic commons we build together and this is a part of that.  I am
pretty sure that's why the expansive wording was included.  And I support
the right of the committee to act even for off-list behavior when it is
appropriate to do so.  That part, I am not questioning.  I think that's
important.

So I think a lot of the hysteria misses the point.  We have good people.
We have a generally good track record of getting along.  We have a track
record of not being mean to each other because of differences in political,
social, religious, or other belief.  The committee as a custodian of this
community can't really take the hard sides on divisive issues that we might
expect in, say, an American corporation like Mozilla or Google.  I think
people who worry about this don't get the weight of responsibility that
will be placed on such individuals to support a breathtakingly diverse
international project and keep the peace, giving people room for civic
engagement even on divisive issues.

And frankly I am probably being paranoid here though I find paranoia is a
good thing when it comes to care of databases and computer systems.  But I
do worry about the interactions between the PostgreSQL community and the
larger world with things worded this way.



> --
> Peter Geoghegan
>
>

-- 
Best Wishes,
Chris Travers

Efficito:  Hosted Accounting and ERP.  Robust and Flexible.  No vendor
lock-in.
http://www.efficito.com/learn_more


Re: Slow shutdowns sometimes on RDS Postgres

2018-09-14 Thread Chris Williams
Hey Jeremy,

Thanks so much for your response. That's a great tip regarding enabling
enhanced monitoring.  Maybe it will give some insight into which particular
backends are causing the shutdown to hang.  One interesting data point when
this happened is that in cloudwatch, once the database started shutting
down, you can see the connection count drop from 600 to 4 immediately.
Then the graph shows a constant 4 connections for 10 minutes straight
before it finally shuts down and goes to 0.  I'm guessing the shutdown is
hanging because of one of these 4 connections.  Unfortunately, because the
database is shutting down, I'm not able to connect and get any info about
these connections, so enabling enhanced monitoring might give some more
clues.

My other question is, do you think shutting down my apps manually once I
noticed the shutdown was hanging had any effect on the total shutdown time?
It seems a bit coincidental that the database finally shut down after
exactly 10 minutes.  This makes me think that some internal timeout in
RDS's shutdown script was triggered and that shutting down my apps didn't
actually affect anything. I'd much rather just wait 10 minutes then
frantically try to shutdown all of my apps.  So I'd be curious to know if
you are able to look up what the timeout is and if you think the timeout is
what is actually causing it to finally shut down. While I'd much rather
have my database shutdown in a minute or two, at least there would be some
comfort in knowing that 10 minutes is the upper bound.

I'd love to be able to figure out how to reproduce it, but it doesn't
happen consistently unfortunately and I've only ever seen it on our main
production database so that makes things trickier.  We do need to resize
our database again in a few weeks.  If there are any special debug settings I
should enable before modifying it (besides enhanced monitoring), or if
support wants to observe it in the middle of the shutdown, let me know.
The last two times I've resized the database, the shutdown has hung like
this, so there's a good chance it will do it again.

Another idea I had was to remove all the security group rules on the db
right before applying the modification.  My thinking was that this
might help postgres terminate all the connections more quickly. That said,
it's all a shot in the dark, I guess, since we don't know the underlying cause.

Best,
Chris

On Fri, Sep 14, 2018 at 8:43 AM Jeremy Schneider 
wrote:

> Hi Chris - this is an interesting one that we do see from time to time;
> seems worth responding here as actually our best understanding right now
> is that this is something in community code, not AWS-specific.
>
>
> On 9/13/18 15:17, Joshua D. Drake wrote:
> > This is probably something you would have to talk to Amazon about. AWS
> > RDS Postgres is a fork of PostgreSQL and not 100% compatible from an
> > administrative perspective.
>
> Hey JD! FWIW, the code differences are pretty minimal and generally just
> what's required to have a managed service where people can still use the
> database as they normally would. The biggest difference is just getting
> used to operating without direct OS access, and working through
> automation/tooling instead. (And there's always EC2 for the many
> customers who want/need superuser but still don't want to maintain the
> hardware.)
>
>
> On 9/13/18 16:10, Adrian Klaver wrote:
> > The thing is, what you are doing ("(e.g. reboot, changing instance
> > size, etc.)") are instance operations not database operations. That
> > comes under AWS's purview.
>
> Correct, managing reboots and hardware reconfigurations would be the
> responsibility of AWS. However Chris' issue here is just that PostgreSQL
> itself took a long time to shut down. I'm not aware of anything
> RDS-specific with this.
>
>
> > On 09/13/2018 03:04 PM, Chris Williams wrote:
> >> I'm using AWS RDS Postgres (9.6.6) and have run into very slow
> >> shutdowns (10+ minutes) a few times when making database modifications
> >> (e.g. reboot, changing instance size, etc.).  Other times, it shuts
> >> down quickly (1 minute or so).  I have not been able to figure out why
> >> sometimes it takes a long time to shutdown.
>
> I don't know about this specific incident, but I do know that the RDS
> team has seen cases where a backend gets into a state (like a system
> call) where it's not checking signals and thus doesn't receive or
> process the postmaster's request to quit. We've seen these processes
> delay shutdowns and also block recovery on streaming replicas.
>
>
> >> Once I start seeing these messages, I start manually shutting down all
> >> of 

Re: Code of Conduct plan

2018-09-14 Thread Chris Travers
On Sat, Sep 15, 2018 at 4:47 AM James Keener  wrote:

>
>
> The preceding's pretty simple. An attacker goes after an individual,
>> presumably without provocation and/or asymetrically. The attacked
>> person is on this mailing list. IMHO this attacker must choose between
>> continuing his attacks, and belonging to the Postgres community.
>>
>> What's tougher is the person who attacks groups of people.
>>
>>
> The preceding's pretty simple. An "attacker" voices their political
> opinions
> or other unorthodoxy or unpopular stance, but in no way directs it at the
> postgres user base or on a postgres list. The "attacked"
> person is on this mailing list. IMHO this "attacker" must choose between
> continuing to voice their opinion, and belonging to the Postgres community.
>

The protection there is a culturally diverse code of conduct committee who
can then understand the relationship between politics and culture.  And
just to note, you can't solve problems of abuse by adopting mechanistically
applied rules.

Also a lot of the major commercial players have large teams in areas where
there is a legal right to not face discrimination on the basis of political
opinion.  So I don't see merely expressing an unpopular political opinion
as something the code of conduct committee could ever find actionable, nor
do I think political donations or membership in political or religious
organizations etc would be easy to make actionable.

But I understand the sense of insecurity.  Had I not spent time working in
Asia and Europe, my concerns would be far more along these lines.  As it
is, I don't think the code of conduct committee will allow themselves to be
used to cause continental splits in the community or to internationalize
the politics of the US.

I think the bigger issue is that our community *will* take flak and
possibly be harmed if there is an expectation set that picking fights in
this way over political opinions is accepted.  Because while I don't see
the current community taking action on the basis of political views, I do
see a problem more generally with how these fights get picked and would
prefer to see some softening of language to protect the community in that
case.  But again, I am probably being paranoid.

-- 
Best Wishes,
Chris Travers

Efficito:  Hosted Accounting and ERP.  Robust and Flexible.  No vendor
lock-in.
http://www.efficito.com/learn_more


Re: Code of Conduct plan

2018-09-16 Thread Chris Travers
ing that could be used to apply
pressure from outside to get rid of community members for activity that is
not related to PostgreSQL (in particular, unrelated political involvement,
opinions, and participation).

If you aren't open to rewriting even that one sentence, I hope maybe you
can leave that sentence off and assert that it is up to the Code of Conduct
community to develop the scope of application based on actual complaints
and circumstances.

Again for reference the only change I am objecting to is the addition of "This
Code is meant to cover all interaction between community members, whether
or not it takes place within postgresql.org infrastructure, so long as
there is not another Code of Conduct that takes precedence (such as a
conference's Code of Conduct)."  I don't think that sentence solves the
problems you are trying to solve, and I think it creates new ones.

However I have said my piece.  Unless there are replies that provide
something new for me to add, I won't continue arguing over that from here.

-- 
Best Wishes,
Chris Travers

Efficito:  Hosted Accounting and ERP.  Robust and Flexible.  No vendor
lock-in.
http://www.efficito.com/learn_more


Re: Logical locking beyond pg_advisory

2018-09-17 Thread Chris Travers
On Mon, Sep 17, 2018 at 5:09 PM Merlin Moncure  wrote:

> On Sun, Sep 16, 2018 at 3:53 PM marcelo  wrote:
> >
> > I need a mechanism of "logical locking" more ductile than the
> pg_advisory family.
> > I'm thinking of a table ("lock_table") that would be part of the
> database, with columns
> > * tablename varchar - name of the table "locked"
> > * rowid integer, - id of the row "locked"
> > * ownerid varchar, - identifier of the "user" who acquired the lock
> > * acquired timestamp - to be able to release "abandoned" locks after a
> certain time
> >
> > and a group of functions
> > 1) lock_table (tablename varchar, ownerid varchar) bool - get to lock
> over the entire table, setting rowid to zero
> > 2) unlock_table (tablename varchar, ownerid varchar) bool - unlock the
> table, if the owner is the recorded one
> > 3) locked_table (tablename varchar, ownerid varchar) bool - ask if the
> table is locked by some user other than the ownerid argument
> > 4) lock_row (tablename varchar, rowid integer, ownerid varchar) bool -
> similar to pg_try_advisory_lock
> > 5) unlock_row (tablename varchar, rowid integer, ownerid varchar) bool -
> similar to pg_advisory_unlock
> > 6) unlock_all (ownerid varchar) bool - unlock all locks owned by ownerid
> >
> > The timeout (default, maybe 15 minutes) is implicitly applied if the
> lock is taken by another user (there will be no notification).
> > Redundant locks are not queued, they simply return true, may be after an
> update of the acquired column.
> > Successful locks insert a new row, except the rare case of a timeout,
> which becomes an update (ownerid and acquired)
> > Unlock operations deletes the corresponding row
> >
> > My question is double
> > a) What is the opinion on the project?
> > b) What are the consequences of the large number of inserts and deletions
> > c) Performance. In fact, pg_advisory* implies a network roundtrip, but
> (I think) no table operations.
>
> Why can't you use the advisory lock functions?  The challenge with
> manually managed locks are they they are slow and you will lose the
> coordination the database provides you.  For example, if your
> application crashes you will have to clean up all held locks yourself.
> Building out that infrastructure will be difficult.
>

First, I think in an ideal world, you wouldn't handle this problem with
either approach but sometimes you have to.

I have done both approaches actually.  LedgerSMB uses its own lock table
because locks have to persist across multiple HTTP requests and we have
various automatic cleanup processes.

When I was working on  the queue management stuff at Novozymes we used
advisory locks extensively.

These two approaches have serious downsides:
1.  Lock tables are *slow* and require careful thinking through cleanup
scenarios.  In LedgerSMB we tied the locks to the application session with an
ON DELETE event that would unlock the rows (see the sketch after this list).
We estimated that for every 2 seconds the db spent doing useful work, it
spent 42 seconds managing the locks.  Additionally, the fact that such locks
only become visible to other sessions as their snapshots advance is a problem
here.

2.  In my talk "PostgreSQL at 10TB and Beyond" I talk about a problem we
had using advisory locks to manage rows that were being processed for
deletion.  Since the deletion worked by scanning for items at the head of an
index, under heavy load we could spend long enough checking dead rows that
the locks could go away while our snapshot failed to advance.  This would
result in duplicate processing.  So the fact that advisory locks don't
really follow snapshot semantics is a big problem here, since it means you
can have race conditions with advisory locks that can't happen with other
locking approaches.  I still love advisory locks, but they are not a good
tool for this.
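
For what it's worth, a minimal sketch of the lock-table pattern from point 1
(hypothetical table and column names, not the actual LedgerSMB schema) looks
roughly like this, with cleanup riding on the session row:

CREATE TABLE app_session (
    session_id bigserial PRIMARY KEY,
    last_seen  timestamptz NOT NULL DEFAULT now()
);

CREATE TABLE row_lock (
    tablename  text        NOT NULL,
    row_id     integer     NOT NULL,
    session_id bigint      NOT NULL REFERENCES app_session ON DELETE CASCADE,
    acquired   timestamptz NOT NULL DEFAULT now(),
    PRIMARY KEY (tablename, row_id)
);

-- expiring an abandoned session releases every lock it held
DELETE FROM app_session WHERE last_seen < now() - interval '15 minutes';

Every lock and unlock here is ordinary table churn plus a round trip, which is
why the overhead measured above was so painful.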

The real solution most of the time is actually to lock the rows by
selecting FOR UPDATE and possibly SKIP LOCKED.  The way update/delete row
locking in PostgreSQL works is usually good enough except in a few rare
edge cases.  Only in *very rare* cases do lock tables or advisory locks
make sense for actual row processing.
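
As an illustration of that approach, a sketch with hypothetical table and
column names (not code from the original post):

-- each worker claims a batch of unprocessed rows without blocking the others
WITH claimed AS (
    SELECT id
      FROM work_queue
     WHERE processed_at IS NULL
     ORDER BY id
     LIMIT 100
       FOR UPDATE SKIP LOCKED
)
UPDATE work_queue w
   SET processed_at = now()
  FROM claimed c
 WHERE w.id = c.id
RETURNING w.id;

The row locks taken here follow normal transaction semantics and vanish
automatically if the session dies, which is exactly what a hand-rolled lock
table or advisory locks struggle to guarantee.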

>
> merlin
>
>

-- 
Best Wishes,
Chris Travers

Efficito:  Hosted Accounting and ERP.  Robust and Flexible.  No vendor
lock-in.
http://www.efficito.com/learn_more


Re: Code of Conduct plan

2018-09-17 Thread Chris Travers
On Mon, Sep 17, 2018 at 5:28 PM Joshua D. Drake 
wrote:

> On 09/17/2018 08:11 AM, Dmitri Maziuk wrote:
>
> On Sun, 16 Sep 2018 12:52:34 +
> Martin Mueller  
>  wrote:
>
>
> ... The overreach is dubious on both practical and theoretical grounds. 
> "Stick to your knitting " or the KISS principle seem good advice in this 
> context.
>
> Moderated mailing lists ain't been broken all these years, therefore they 
> need fixing. Obviously.
>
>
> Folks,
>
> At this point it is important to accept that the CoC is happening. We
> aren't going to stop that. The goal now is to insure a CoC that is
> equitable for all community members and that has appropriate
> accountability. At hand it appears that major concern is the CoC trying to
> be authoritative outside of community channels. As well as wording that is
> a bit far reaching. Specifically I think people's main concern is these two
> sentences:
>
> "To that end, we have established this Code of Conduct for community
> interaction and participation in the project’s work and the community at
> large. This Code is meant to cover all interaction between community
> members, whether or not it takes place within postgresql.org
> infrastructure, so long as there is not another Code of Conduct that takes
> precedence (such as a conference's Code of Conduct)."
>

Exactly.  And actually the first sentence is not new.  The second one is a
real problem though.  I am going to try one last time at an additional
alternative.

" To that end, we have established this Code of Conduct for community
interaction and participation in the project’s work and the community at
large.   This code of conduct covers all interaction between community
members on the postgresql.org infrastructure.  Conduct outside the
postgresql.org infrastructure may call the Code of Conduct committee to act
as long as the interaction (or interaction pattern) is community-related,
other parties are unable to act, and the Code of Conduct committee
determines that it is in the best interest of the community to apply this
Code of Conduct."

This solves a number of important problems.

1.  It provides a backstop (as Tom Lane suggested was needed) against a
conference refusing to enforce its own code of conduct in a way the
community finds acceptable, whereas the current wording provides no
backstop as long as the conference has a code of conduct of its own.
2.  It provides a significant barrier to applying the code of conduct to,
say, political posts on, say, Twitter.
3.  It preserves the ability of the Code of Conduct Committee to act in the
case where someone takes a pattern of harassment off-list and
off-infrastructure.  And it avoids arguing whether Facebook's Community
Standards constitute "another Code of Conduct that takes precedence."

>
> If we can constructively provide feedback about those two sentences, great
> (or constructive feedback on other areas of the CoC). If we can't then this
> thread needs to stop. It has become unproductive.
>
> My feedback is that those two sentences provide an overarching authority
> that .Org does not have the right to enforce and that it is also largely
> redundant because we allow that the idea that if another CoC exists, then
> ours doesn't apply. Well every single major collaboration channel we would
> be concerned with (including something like Blogger) has its own CoC within
> its Terms of use. That effectively neuters the PostgreSQL CoC within places
> like Slack, Facebook, Twitter etc...
>

Fascinating that this would, on its face, not apply to a harassment
campaign carried out over twitter, but it would apply to a few comments
made over drinks at a bar.

>
> JD
>
> --
> Command Prompt, Inc. || http://the.postgres.company/ || @cmdpromptinc
> ***  A fault and talent of mine is to tell it exactly how it is.  ***
> PostgreSQL centered full stack support, consulting and development.
> Advocate: @amplifypostgres || Learn: https://postgresconf.org
> * Unless otherwise stated, opinions are my own.   *
>
>

-- 
Best Wishes,
Chris Travers

Efficito:  Hosted Accounting and ERP.  Robust and Flexible.  No vendor
lock-in.
http://www.efficito.com/learn_more


Re: Code of Conduct plan

2018-09-17 Thread Chris Travers
On Mon, Sep 17, 2018 at 6:08 PM Steve Litt 
wrote:

> On Mon, 17 Sep 2018 17:39:20 +0200
> Chris Travers  wrote:
>
>
> > Exactly.  And actually the first sentence is not new.  The second one
> > is a real problem though.  I am going to try one last time at an
> > additional alternative.
> >
> > " To that end, we have established this Code of Conduct for community
> > interaction and participation in the project’s work and the community
> > at large.   This code of conduct covers all interaction between
> > community members on the postgresql.org infrastructure.  Conduct
> > outside the postgresql.org infrastructure may call the Code of
> > Conduct committee to act as long as the interaction (or interaction
> > pattern) is community-related, other parties are unable to act, and
> > the Code of Conduct committee determines that it is in the best
> > interest of the community to apply this Code of Conduct."
>
> Chris,
>
> Would you be satisfied with the CoC if the current 2nd paragraph of the
> Introduction were replaced by the paragraph you wrote above?
>

Yes.  Or something like it.  It need not be exact.

I recognize a need to be able to extend enforcement to some areas of off-list
activity, for what it's worth.

>
>
> SteveT
>
> Steve Litt
> September 2018 featured book: Quit Joblessness: Start Your Own Business
> http://www.troubleshooters.com/startbiz
>
>

-- 
Best Wishes,
Chris Travers

Efficito:  Hosted Accounting and ERP.  Robust and Flexible.  No vendor
lock-in.
http://www.efficito.com/learn_more


Re: Logical locking beyond pg_advisory

2018-09-17 Thread Chris Travers
On Mon, Sep 17, 2018 at 6:04 PM marcelo  wrote:

>
>
> I'm using an ORM (Devart's) to access the database, so I cannot "select
> ... FOR UPDATE". The application paradigm is that a user has a list of
> records (after a query) and she could update or delete any of them as the
> business rules allow it. So, at least an advisory lock is a must.
> I'm convinced by now: I would stay with advisory locks... expecting no app
> crash could occur...
>

I would say to fix this in the ORM rather than reinvent in the application
what the database already gives you.



> Thank you all.
> Marcelo
>
>
>


-- 
Best Wishes,
Chris Travers

Efficito:  Hosted Accounting and ERP.  Robust and Flexible.  No vendor
lock-in.
http://www.efficito.com/learn_more


Re: Code of Conduct

2018-09-18 Thread Chris Travers
On Tue, Sep 18, 2018 at 4:35 PM Tomas Vondra 
wrote:

> On 09/18/2018 01:47 PM, James Keener wrote:
> >  > following a long consultation process
> >
> > It's not a consultation if any dissenting voice is simply ignored.
> > Don't sugar-coat or politicize it like this -- it was rammed down
> > everyone's throats. That is core's right, but don't act as everyone's
> > opinions and concerns were taken into consideration.
>
> I respectfully disagree.
>
> I'm not sure which dissenting voices you think were ignored, but from
> what I've observed in the various CoC threads the core team took the
> time to respond to all comments. That does not necessarily mean the
> resulting CoC makes everyone happy, but unfortunately that's not quite
> possible. And it does not mean it was not an honest consultation.
>
> IMO the core team did a good job in listening to comments, tweaking the
> wording and/or explaining the reasoning. Kudos to them.
>

I said I would set aside my objections after the last point at which I
mentioned them, but I did not feel that my particular objection and concern
with regard to one specific added sentence got much of a hearing.  This being
said, it is genuinely hard to sort through the noise and try to reach the
signal.  I think the resurgence of the debate about whether we need a code
of conduct made it very difficult to discuss specific objections to
specific wording.  So to be honest the breakdown was mutual.

>
> > There are a good number of folks who are concerned that this CoC is
> > overreaching and is ripe for abuse. Those concerns were always
> > simply, plainly, and purposely ignored.
> No, they were not. There were multiple long discussions about exactly
> these dangers, You may dislike the outcome, but it was not ignored.
>

Also those of us who had specific, actionable concerns were often drowned
out by the noise.  That's deeply unfortunate.

I think those of us who had specific concerns about one specific sentence
that was added were drowned out by those who seemed to be opposed to the
idea of a code of conduct generally.

I would have appreciated at least a reason why the concerns I had, namely
that the addition a) doesn't cover what it needs to cover and b) will attract
complaints that it shouldn't cover, were not considered valid.
But I can understand that given the noise-to-signal ratio of the discussion
made such discussion next to impossible.

Again I find that regrettable.

>
> >  > Please take time to read and understand the CoC, which is intended to
> > ensure that PostgreSQL remains an open and enjoyable project for anyone
> > to join and participate in.
> >
> > I sincerely hope so, and that it doesn't become a tool to enforce social
> > ideology like in other groups I've been part of. Especially since this
> > is the main place to come to get help for PostgreSQL and not a social
> club.
> >
>
> Ultimately, it's a matter of trust that the CoC committee and core team
> apply the CoC in a careful and cautious way. Based on my personal
> experience with most of the people involved in both groups I'm not
> worried about this part.
>

I would actually go further than you here.  The CoC committee *cannot*
apply the CoC in the way that the opponents fear.  The fact is, Europe has
anti-discrimination laws regarding social and political ideology (something
the US might want to consider as it would help avoid problems on this list
;-) ).  And different continents have different norms on these sorts of
things.  Pushing a social ideology via the code of conduct would, I
suspect, result in everything from legal action to large emerging markets
going elsewhere.  So I don't think it is a question of "trust us" but
rather that the community won't let that sort of abuse happen no matter who
is on the CoC committee.

>
>
> regards
>
> --
> Tomas Vondra  http://www.2ndQuadrant.com
> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
>
>

-- 
Best Wishes,
Chris Travers

Efficito:  Hosted Accounting and ERP.  Robust and Flexible.  No vendor
lock-in.
http://www.efficito.com/learn_more


Re: Code of Conduct

2018-09-18 Thread Chris Travers
On Tue, Sep 18, 2018 at 8:35 PM Tom Lane  wrote:

> Stephen Frost  writes:
> > I would ask that you, and anyone else who has a suggestion for how to
> > improve or revise the CoC, submit your ideas to the committee by
> > email'ing c...@postgresql.org.
> > As was discussed previously, the current CoC isn't written in stone and
> > it will be changed and amended as needed.
>
> The change process is spelled out explicitly in the CoC document.
>
> I believe though that the current plan is to wait awhile (circa 1 year)
> and get some experience with the current version before considering
> changes.
>

My $0.02:

If you are going to have a comment period, have a comment period and
actually deliberate over changes.

If you are going to just gather feedback and wait a year, use some sort of
issue system.

Otherwise, there is no reason to think that feedback gathered now will have
any impact at all in the next revision.

>
> regards, tom lane
>


-- 
Best Wishes,
Chris Travers

Efficito:  Hosted Accounting and ERP.  Robust and Flexible.  No vendor
lock-in.
http://www.efficito.com/learn_more


Re: Code of Conduct plan

2018-09-19 Thread Chris Travers
ve lawyers. More likely we waste a lot of hot air. Like
> this mail, probably.
>
> There are intangible but very real (IMO) costs to being a community that
> welcomes an unhealthy and hostile communication style, harassment and
> personal attacks in the guise of technical argument, bullying defended as
> making sure you have the right stuff to survive in a "meritocracy", etc.
> Thankfully we are generally not such a community. But try asking a few
> women you know in the Postgres community - if you can find any! - how their
> experience at conferences has been. Then ask if maybe there are still a few
> things we could work on changing.
>
> I've found it quite confronting dealing with some of the more heated
> exchanges on hackers from some of our most prominent team members. I've
> sent the occasional gentle note to ask someone to chill and pause before
> replying, too. And I've deserved to receive one a couple of times, though I
> never have, as I'm far from free from blame here.
>

But that happens to everyone.  Male, female, etc.  And yes, such notes are
good.

I think you are right to point to harassment though.  I have seen people in
this community resort to some really aggressive tactics with other members,
particularly off-list (and sometimes in person).  The interactions on the
postgresql.org infrastructure have always been good except in a few cases.
That is the one really important reason for enforcement against off-list
actions.  It is not (and can't be) about politics.  It has to be about
personally directed campaigns of harassment.

>
> People love to point to LKML as the way it "must" be done to succeed in
> software. Yet slowly that community has also come to recognise that verbal
> abuse under the cloak of technical discussion is harmful to quality
> discussion and drives out good people, harming the community long term.
> Sure, not everything has to be super-diplomatic, but there's no excuse for
> verbal bullying and wilful use of verbal aggression either. As widely
> publicised, even Linus has recently recognised aspects of this, despite
> being the poster child of proponents of abusive leadership for decades.
>
> We don't have a culture like that. So in practice, I don't imagine the CoC
> will see much use. The real problematic stuff that happens in this
> community happens in conference halls and occasionally by private mail,
> usually in the face of a power imbalance that makes the recipient/victim
> reluctant to speak out. I hope a formal CoC will give them some hope
> they'll be heard if they do take the personal risk to speak up. I've seen
> so much victim blaming in tech that I'm not convinced most people
> experiencing problems will be willing to speak out anyway, but hopefully
> they'll be more so with a private and receptive group to talk to.
>

I will say also that where I have seen the most problems I would not speak
out in detail because I don't feel like they rise to a level where the CoC
should be involved.


>
> Let me be clear here, I'm no fan of trial by rabid mob. That's part of why
> something like the CoC and a backing body is important. Otherwise people
> are often forced to silently endure, or go loudly public. The latter tends
> to result in a big messy explosion that hurts the community, those saying
> they're victim(s) and the alleged perpetrator(s), no matter what the facts
> and outcomes. It also encourages people to jump on one comment and run way
> too far with it, instead of looking at patterns and giving people chances
> to fix their behaviour.
>
> I don't want us to have this:
> https://techcrunch.com/2013/03/21/a-dongle-joke-that-spiraled-way-out-of-control/
> . Which is actually why I favour a CoC, one with a resolution process and
> encouragement toward some common sense. Every player in that story was an
> idiot, and while none deserved the abuse and harrassment that came their
> way, it's a shame it wan't handled by a complaint to a conference CoC group
> instead.
>
> I'd like the CoC to emphasise that while we don't want to restrain people
> from "calling out" egregious behaviour, going via the CoC team is often
> more likely to lead to constructive communication and positive change.
>

Agreed on this.

My objection to the additional wording is simply that a) I think it does
not tackle the problem it needs to tackle, and b) creates a claim which
covers a bunch of things that it really shouldn't.  It's a serious bug and
I still hope it gets fixed before it causes problems.

>
> --
>  Craig Ringer   http://www.2ndQuadrant.com/
>  PostgreSQL Development, 24x7 Support, Training & Services
>


-- 
Best Wishes,
Chris Travers

Efficito:  Hosted Accounting and ERP.  Robust and Flexible.  No vendor
lock-in.
http://www.efficito.com/learn_more


Re: Code of Conduct

2018-09-20 Thread Chris Travers
On Wed, Sep 19, 2018 at 11:31 PM Bruce Momjian  wrote:

> On Wed, Sep 19, 2018 at 11:24:29AM +1000, Julian Paul wrote:
> > It's overly long and convoluted.
> >
> > "inclusivity" Is a ideologue buzzword of particular individuals that
> offer
> > very little value apart from excessive policing of speech and behaviour
> > assumed to be a problem where none exist.
> >
> > "Personal attacks and negative comments on personal characteristics are
> > unacceptable, and will not be permitted. Examples of personal
> > characteristics include, but are not limited to age, race, national
> origin
> > or ancestry, religion, gender, or sexual orientation."
> >
> > So just leaving it at "Personal attacks" and ending it there won't do
> > obviously. I'm a big advocate of people sorting out there own personal
> > disputes in private but...
> >
> > "further personal attacks (public or *private*);"
> >
> > ...lets assume people don't have the maturity for that and make it all
> > public.
> >
> > "may be considered offensive by fellow members" - Purely subjective and
> > irrelevant to a piece of community software.
>
> You might notice that a bullet list was removed and those example items
> were added 18 months ago:
>
>
> https://wiki.postgresql.org/index.php?title=Code_of_Conduct&diff=31924&oldid=29402
>
> I realize that putting no examples has its attractions, but some felt
> that having examples would be helpful.  I am not a big fan of the
> "protected groups" concept because it is often exploited, which is why
> they are listed more as examples.
>

I suspect most of us could probably get behind the groups listed in the
antidiscrimination section of the European Charter of Fundamental Rights at
least as a compromise.

Quoting the  English version:

"Any discrimination based on any ground such as sex, race, colour, ethnic
or social origin, genetic features, language, religion or belief, political
or any other opinion, membership of a national minority, property, birth,
disability, age or sexual orientation shall be prohibited."

The inclusion of "political or any other opinion" is a nice addition and
prevents a lot of concern.

>
> --
>   Bruce Momjian  http://momjian.us
>   EnterpriseDB http://enterprisedb.com
>
> + As you are, so once was I.  As I am, so you will be. +
> +  Ancient Roman grave inscription +
>
>

-- 
Best Wishes,
Chris Travers

Efficito:  Hosted Accounting and ERP.  Robust and Flexible.  No vendor
lock-in.
http://www.efficito.com/learn_more


Re: How to investigate what postgres is spending time on

2018-09-23 Thread Chris Travers
On Sun, Sep 23, 2018 at 1:15 PM Kim Rose Carlsen  wrote:

> Hi
>
>
> I have some simple INSERT / UPDATE queries that take a long time the
> first time they are run in our test environment, but I'm not sure what
> postgres is doing and what I can do to help it. What's common is that the
> table contains many rows, on the order of about 20 million.
>
>
> Query:
>
> INSERT INTO communication.request_parameter (request_id,
> template_version_parameter_id, parameter_value)
>  VALUES (1222, 1211, 122) RETURNING request_parameter_id
>
> Row from pg_stat_statements:
> userid              | 16385
> dbid                | 16389
> queryid             | 2064198912
> query               | INSERT INTO communication.request_parameter (request_id, tem
> calls               | 98
> total_time          | 646.393451
> min_time            | 0.03
> max_time            | 638.712758
> mean_time           | 6.59585154081633
> stddev_time         | 64.1818799227704
> rows                | 98
> shared_blks_hit     | 2850
> shared_blks_read    | 24
> shared_blks_dirtied | 21
> shared_blks_written | 0
> local_blks_hit      | 0
> local_blks_read     | 0
> local_blks_dirtied  | 0
> local_blks_written  | 0
> temp_blks_read      | 0
> temp_blks_written   | 0
> blk_read_time       | 0
> blk_write_time      | 0
>
> Description of table:
> # \d communication.request_parameter
>                     Table "communication.request_parameter"
>              Column             |       Type        | Collation | Nullable |                                    Default
> --------------------------------+-------------------+-----------+----------+--------------------------------------------------------------------------------
>  request_parameter_id           | integer           |           | not null | nextval('communication.request_parameter_request_parameter_id_seq'::regclass)
>  request_id                     | integer           |           | not null |
>  template_version_parameter_id  | integer           |           | not null |
>  parameter_value                | character varying |           |          |
> Indexes:
> "request_parameter_pkey" PRIMARY KEY, btree (request_parameter_id)
> "request_parameter_parameter_value_idx" btree (parameter_value)
> "request_parameter_request_id_idx" btree (request_id)
> "request_parameter_template_version_parameter_id_idx" btree
> (template_version_parameter_id)
> Foreign-key constraints:
> "request_parameter_request_id_fkey" FOREIGN KEY (request_id)
> REFERENCES communication.request(request_id)
> "request_parameter_template_version_parameter_id_fkey" FOREIGN KEY
> (template_version_parameter_id) REFERENCES
> communication.template_version_parameter(template_version_parameter_id)
>
> This only happens in testing, and on a cold-booted database. The test
> database is constructed with pg_dump and restore on a fresh postgres
> installation.
>

Sounds like warming up the cache.  Still, in a test environment you may
want to add auto_explain to your list of preloads and perhaps set it to
dump EXPLAIN ANALYZE output when a query exceeds a certain threshold.  Note
that while dumping the query plans has very little overhead, timing the
individual plan nodes does hurt performance.
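
For example, a sketch of the settings involved (the threshold value is
arbitrary; the parameters are the stock auto_explain ones):

# postgresql.conf -- shared_preload_libraries needs a restart
shared_preload_libraries = 'auto_explain'
auto_explain.log_min_duration = '250ms'  # only log plans slower than this
auto_explain.log_analyze = on            # include actual rows and loops
auto_explain.log_buffers = on            # include buffer hit/read counts
auto_explain.log_timing = off            # skip per-node timing to reduce overhead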

>
>
> Best Regards
> Kim Carlsen
>
>

-- 
Best Wishes,
Chris Travers

Efficito:  Hosted Accounting and ERP.  Robust and Flexible.  No vendor
lock-in.
http://www.efficito.com/learn_more


Re: DB size difference after restore

2018-10-03 Thread Chris Travers
On Wed, Oct 3, 2018 at 2:59 PM Sonam Sharma  wrote:

>
>
> On Wed, Oct 3, 2018 at 6:21 PM Sonam Sharma  wrote:
>
>> Hello Ben,
>>
>> When we do \l+ , it is different than source, when we load backup from
>> target server.
>>
>Backup is taken using pg_dump  and its loaded as psql db name 
>

It's normal that there is a size difference.

Basically you have a database you dump which may have many versions of
visible rows or may have free space in the table, etc.

You take the most recent consistent backup of the visible data when you
take a dump.

You create a database with only that information in it.  So one generally
expects it to be smaller.  For a db of reasonable size and load, the
difference may be 2x or more.
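If you want to see where the difference sits, comparing the biggest relations on both sides is usually enough (a sketch, run in the database on each cluster):

SELECT relname,
       pg_size_pretty(pg_total_relation_size(oid)) AS total_size
  FROM pg_class
 WHERE relkind = 'r'
 ORDER BY pg_total_relation_size(oid) DESC
 LIMIT 20;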



>
>> Regards,
>>
>> Sonam
>>
>>
>>
>>
>>
>>
>> On Wed, Oct 3, 2018 at 6:17 PM Benjamin Scherrey <
>> scher...@proteus-tech.com> wrote:
>>
>>> If you're talking about space on drive then you can expect the new one
>>> to be smaller generally as it has been straight efficient writes rather
>>> than a bunch of updates and deletes which create "holes" in the physical
>>> file space.
>>>
>>> It helps if you are more detailed as to what you've observed if you want
>>> a more specific answer.
>>>
>>>   - - Ben Scherrey
>>>
>>> On Wed, Oct 3, 2018, 7:43 PM Sonam Sharma  wrote:
>>>
>>>> I have restored the database from backup dump but the size of source
>>>> and target databases are different. What can be the reason for this ?
>>>>
>>>> Regards,
>>>> Sonam
>>>>
>>>

-- 
Best Wishes,
Chris Travers

Efficito:  Hosted Accounting and ERP.  Robust and Flexible.  No vendor
lock-in.
http://www.efficito.com/learn_more


Re: kind of a bag of attributes in a DB . . .

2019-09-08 Thread Chris Travers
On Sat, Sep 7, 2019 at 5:17 PM Albretch Mueller  wrote:

> Say, you get lots of data and their corresponding metadata, which in
> some cases may be undefined or undeclared (left as an empty string).
> Think of youtube json files or the result of the "file" command.
>
> I need to be able to "instantly" search that metadata and I think DBs
> are best for such jobs and get some metrics out of it.
>
> I know this is not exactly a kosher way to deal with data which can't
> be represented in a nice tabular form, but I don't find the idea that
> half way off either.
>
> What is the pattern, anti-pattern or whatever relating to such design?
>
> Do you know of such implementations with such data?
>

We store our debug logs as JSONB with some indexing.  It works in some
limited cases, but you need to have a good sense of the index possibilities and
how the indexes actually work.
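To make "some indexing" concrete, a minimal sketch of that kind of setup (table and key names are invented for illustration):

CREATE TABLE media_metadata (
    id       bigserial PRIMARY KEY,
    metadata jsonb NOT NULL
);
CREATE INDEX media_metadata_gin ON media_metadata USING gin (metadata jsonb_path_ops);

-- containment queries of this shape can use the index
SELECT id FROM media_metadata WHERE metadata @> '{"codec": "vp9"}';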


> lbrtchx
>
>
>

-- 
Best Wishes,
Chris Travers

Efficito:  Hosted Accounting and ERP.  Robust and Flexible.  No vendor
lock-in.
http://www.efficito.com/learn_more


Re: kind of a bag of attributes in a DB . . .

2019-09-15 Thread Chris Travers
On Sat, Sep 14, 2019 at 5:11 PM Albretch Mueller  wrote:

>  just download a bunch of json info files from youtube data Feeds
>
>  Actually, does postgresql has a json Driver of import feature?
>

Sort of.  There are a bunch of features around JSON and JSONB data
types which could be useful.

>
>  the metadata contained in json files would require more than one
> small databases, but such an import feature should be trivial
>

It is not at all trivial for a bunch of reasons inherent to the JSON
specification.  How to handle duplicate keys, for example.

However writing an import for JSON objects into a particular database is
indeed trivial.
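As a minimal sketch of what "trivial" means here (table and file names are made up; this assumes newline-delimited JSON with no raw backslashes or tabs, since COPY's text format would interpret them):

CREATE TABLE yt_info (doc jsonb);

-- from psql:
\copy yt_info (doc) FROM 'videos.ndjson'

SELECT doc->>'title' FROM yt_info LIMIT 5;  -- 'title' is an assumed key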

>
>  C
>
>
>

-- 
Best Wishes,
Chris Travers

Efficito:  Hosted Accounting and ERP.  Robust and Flexible.  No vendor
lock-in.
http://www.efficito.com/learn_more


Lookup Primary Key of Foreign Server's Table

2019-10-22 Thread Chris Morris
I'm looking for a system query that will lookup the primary key column on a
fdw table. It's possible we need to declare that part of the foreign
table's schema in the local (is that the right term?) database?

Here's the foreign table - I don't see anything showing a primary key, so
my hunch is we need to declare it in the local schema?

=> \d sidecar_link.actions
                                        Foreign table "sidecar_link.actions"
     Column      |            Type             | Collation | Nullable |                     Default                      | FDW options
-----------------+-----------------------------+-----------+----------+--------------------------------------------------+-------------
 id              | bigint                      |           | not null | nextval('sidecar_link.actions_id_seq'::regclass) |
 user_session_id | bigint                      |           | not null |                                                  |
 user_id         | bigint                      |           | not null |                                                  |
 created_at      | timestamp without time zone |           | not null | now()                                            |
 occurred_at     | timestamp without time zone |           | not null | now()                                            |
 thing_id        | integer                     |           |          |                                                  |
 parent_thing_id | integer                     |           |          |                                                  |
 viewing_id      | integer                     |           |          |                                                  |
 origin          | origin                      |           |          | 'mysteryscience'::origin                         |
 scope           | text                        |           | not null |                                                  |
 name            | text                        |           | not null |                                                  |
 details         | text                        |           |          |                                                  |
 request_path    | text                        |           |          |                                                  |
Server: pg_mysterysci_sidecar
FDW options: (schema_name 'public', table_name 'actions')
Not really related question, but a curiosity: why does this table not show
in the list of foreign tables?






=> \det
 List of foreign tables
 Schema | Table | Server
--------+-------+--------
(0 rows)


Re: Lookup Primary Key of Foreign Server's Table

2019-10-23 Thread Chris Morris
The foreign table has a primary key. Ruby on Rails uses a system query to
lookup what the primary key on the table is, but it's querying the local
database, and not finding anything. In a schema dump of the local database,
I don't see a primary key defined, so I'm presuming I need to issue an ADD
CONSTRAINT command against the foreign table to reflect what is actually
true in the foreign table? Is that correct?


Re: Lookup Primary Key of Foreign Server's Table

2019-10-24 Thread Chris Morris
Yeah. Part of my confusion is due to Heroku providing a Data Links service
that handles a lot of the internal details around establishing a
foreign server connection, and not knowing exactly what to expect.

I experimented with IMPORT FOREIGN SCHEMA in a couple of test databases and
noticed that there was no declaration of primary key brought over to the
local database. So, this is something ActiveRecord simply won't be able to
handle right now, and I have to take care of that myself, which is easy
enough to do.

Thx for the feedback.

On Wed, Oct 23, 2019 at 11:12 AM David G. Johnston <
david.g.johns...@gmail.com> wrote:

> On Wed, Oct 23, 2019 at 8:47 AM Chris Morris 
> wrote:
>
>> The foreign table has a primary key. Ruby on Rails uses a system query to
>> lookup what the primary key on the table is, but it's querying the local
>> database, and not finding anything. In a schema dump of the local database,
>> I don't see a primary key defined, so I'm presuming I need to issue an ADD
>> CONSTRAINT command against the foreign table to reflect what is actually
>> true in the foreign table? Is that correct?
>>
>
> The documentation says doing what you suggest won't work...
>
> David J.
>
>


Where does postgres take its timezone information from?

2019-11-05 Thread Chris Withers

Hi All,

Brazil recently abolished daylight savings time, resulting in updates to 
system timezone information packages.
Does postgres use these? If so, does it need a reload or restart to see 
the updated zone info?


If not, how does postgres store/obtain its timezone zone information and 
how would this be updated?


cheers,

Chris




Re: Where does postgres take its timezone information from?

2019-11-05 Thread Chris Withers

On 05/11/2019 22:54, Adrian Klaver wrote:

On 11/5/19 2:46 PM, Chris Withers wrote:

Hi All,

Brazil recently abolished daylight savings time, resulting in updates 
to system timezone information packages.
Does postgres use these? If so, does it need a reload or restart to 
see the updated zone info?


If not, how does postgres store/obtain its timezone zone information 
and how would this be updated?


https://www.postgresql.org/about/news/1960/
PostgreSQL 11.5, 10.10, 9.6.15, 9.5.19, 9.4.24, and 12 Beta 3 Released!

"This update also contains tzdata release 2019b for DST law changes in 
Brazil, plus historical corrections for Hong Kong, Italy, and Palestine. 
This update also adds support for zic's new -b slim option to reduce the 
size of the installed zone files, though it is not currently being used 
by PostgreSQL."


Hmm. Is there any option to use the system timezone packages?

If not, why not?

Chris




Should I reinstall over current installation?

2020-02-03 Thread Chris Charley
I just got my Windows 10 computer in Nov 2019 and installed PostgreSQL Jan
this year.

I had a problem with Windows update and when I contacted Microsoft, they
installed a fresh operating system. After that, I was unable to run
postgres from pgAdmin 4 or from a perl program that accessed the database.

The message I got was:

could not connect to server: Connection refused (0x274D/10061) Is the
server running on host "localhost" (127.0.0.1) and accepting TCP/IP
connections on port 5432? at pg_01.pl line 8.

I  was considering 2 options:

1) Reinstall postgres over the current without removing the current
postgres first.
2) uninstall the current postgres and then install postgres again.

The tables I created in my database are not many and if I lose them doing a
reinstall, I could recreate my database and the tables. I was considering
option (1) as I was wondering if that choice would keep the database and
tables I created this January.

(The database and contained tables are from 'Practical SQL' by Anthony
DeBarros.)

A second question is: When I try to edit anything in pgAdmin, sometimes it
won't let me edit the code. I then refresh the database in the left pane
and then it allows me to edit the SQL in the query pane.

Thank you for any help you may provide.

Chris


Re: Should I reinstall over current installation?

2020-02-03 Thread Chris Charley
*What does the Windows system log show when you do this?*

I don't know where to find this log (for Windows 10)

On Mon, Feb 3, 2020 at 3:47 PM Adrian Klaver 
wrote:

> On 2/3/20 12:44 PM, Chris Charley wrote:
> Please reply to list also.
> Ccing list.
> > Adrian
> >
> > /Does Postgres still exist on the system?
> >
> > If it does have you started the service?/
> >
> > PostgreSQL-x64-12 is list as 'stopped' in the task manger services tab.
> > When I try to 'start', it doesn't change to 'running'.
>
> What does the Windows system log show when you do this?
>
>
> >
> >
> > On Mon, Feb 3, 2020 at 3:26 PM Adrian Klaver  > <mailto:adrian.kla...@aklaver.com>> wrote:
> >
> > On 2/3/20 12:20 PM, Chris Charley wrote:
> >  > I just got my Windows 10 computer in Nov 2019 and installed
> > PostgreSQL
> >  > Jan this year.
> >  >
> >  > I had a problem with Windows update and when I contacted
> > Microsoft, they
> >  > installed a fresh operating system. After that, I was unable to
> run
> >  > postgres from pgAdmin 4 or from a perl program that accessed the
> > database.
> >  >
> >  > The message I got was:
> >  >
> >  > could not connect to server: Connection refused
> > (0x274D/10061) Is
> >  > the server running on host "localhost" (127.0.0.1) and accepting
> > TCP/IP
> >  > connections on port 5432? at pg_01.pl <http://pg_01.pl>
> > <http://pg_01.pl> line 8.
> >
> > Does Postgres still exist on the system?
> >
> > If it does have you started the service?
> >
> >  > I  was considering 2 options:
> >  >
> >  > !) Reinstall postgres over the current without removing the
> current
> >  > postgres first.
> >  > 2) uninstall the current postgres and then install postgres again.
> >  >
> >  > The tables I created in my database are not many and if I lose
> them
> >  > doing a reinstall, I could recreate my database and the tables. I
> > was
> >  > considering option (1) as I was wondering if that choice would
> > keep the
> >  > database and tables I created this January.
> >  >
> >  > (The database and contained tables are from 'Practical SQL' by
> > Anthony
> >  > DeBarros.)
> >  >
> >  > A second question is: When I try to edit anything in pgAdmin,
> > sometimes
> >  > it won't let me edit the code. I then refresh the database in the
> > left
> >  > pane and then it allows me to edit the SQL in the query pane.
> >  >
> >  > Thank you for any help you may provide.
> >  >
> >  > Chris
> >
> >
> > --
> > Adrian Klaver
> > adrian.kla...@aklaver.com <mailto:adrian.kla...@aklaver.com>
> >
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


Re: Should I reinstall over current installation?

2020-02-03 Thread Chris Charley
Hi

The link you provided (
https://www.howtogeek.com/123646/htg-explains-what-the-windows-event-viewer-is-and-how-you-can-use-it/
)

Points me to Event Viewer but I'm kinda lost there trying to find anything
pertaining to postgreql.

On Mon, Feb 3, 2020 at 4:02 PM Adrian Klaver 
wrote:

> On 2/3/20 12:54 PM, Chris Charley wrote:
> > /What does the Windows system log show when you do this?/
> >
> > I don't know where to find this log (for Windows 10)
> >
>
> I'm not a Windows user, what I can do is point you at:
>
>
> https://www.howtogeek.com/123646/htg-explains-what-the-windows-event-viewer-is-and-how-you-can-use-it/
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


Re: Should I reinstall over current installation?

2020-02-03 Thread Chris Charley
I tried items you suggested (1-5), but could find no helpful info.

Thanks for your help and going the extra mile!

On Mon, Feb 3, 2020 at 4:23 PM Adrian Klaver 
wrote:

> On 2/3/20 1:11 PM, Chris Charley wrote:
> > Hi
> >
> > The link you provided
> > (
> https://www.howtogeek.com/123646/htg-explains-what-the-windows-event-viewer-is-and-how-you-can-use-it/
> )
> >
> > Points me to Event Viewer but I'm kinda lost there trying to find
> > anything pertaining to postgreql.
>
> I cranked up a Windows 7 instance, which should be close enough. What I
> found:
>
> 1) Open Event Viewer
>
> 2) Click on Windows Logs
>
> 3) Click on Application
>
> 4) The center section will show logs. The application name is under Source.
>
> 5) If you want to search, then Ctrl+f and enter Postgres
>
> >
> > On Mon, Feb 3, 2020 at 4:02 PM Adrian Klaver  > <mailto:adrian.kla...@aklaver.com>> wrote:
> >
> > On 2/3/20 12:54 PM, Chris Charley wrote:
> >  > /What does the Windows system log show when you do this?/
> >  >
> >  > I don't know where to find this log (for Windows 10)
> >  >
> >
> > I'm not a Windows user, what I can do is point you at:
> >
> >
> https://www.howtogeek.com/123646/htg-explains-what-the-windows-event-viewer-is-and-how-you-can-use-it/
> >
> >
> > --
> > Adrian Klaver
> > adrian.kla...@aklaver.com <mailto:adrian.kla...@aklaver.com>
> >
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


Re: Should I reinstall over current installation?

2020-02-05 Thread Chris Charley
Moreno, thank you for all your help.

Following your instructions, I was able to recover my databases. All is
good.

Chris


On Wed, Feb 5, 2020 at 6:45 AM Moreno Andreo 
wrote:

> Il 04/02/2020 21:18, Chris Charley ha scritto:
> > Hello Moreno
> >
> > Thanks for the reply!
> >
> > I ran Services and it reported postsql as Disabled.
>
> A disabled service will never run nor leave error messages anywhere
>
> Right click on the Postgresql service, select properties.
> In the next window, choose startup type (the combo under the executable
> path) and select Automatic, then click on the Start button below
> Click OK and close the window.
>
> Now something should happen. If everything goes well, You'll see
> "Started" and "Automatic" in the two columns next to the service
> description, and you can start your application
> If there is an error message, then it should log something. See my last
> mail to extract log informations and post them here
>
> > I am considering reinstall because I couldn't find any helpful info
> > from these services.
>
> You got no helpful info because service does not start, andd so does not
> log :-)
>
> > Thanks again for your help.
> >  I suspect when Windows was reinstalled, the firewall was reconfigured
> > and PostgreSQL wasn't allowed to access the port?
>
> ATM there's a service start issue to be resolved, if there's also
> something about the firewall, we'll think about it later.
>
> Moreno.-
>
>
>
>


sensible configuration of max_connections

2020-02-07 Thread Chris Withers

Hi All,

What's a sensible way to pick the number to use for max_connections?

I'm looking after a reasonable size multi-tenant cluster, where the 
master handles all the load and there's a slave in case of hardware 
failure in the master.
The machine is used to host what I suspect are mainly django 
applications, so lots of short web requests, not sure how much, if any, 
django's orm does connection pooling.


I arbitrarily picked 1000 for max_connections and haven't had any 
problems, but I'm onboarding another app that handles a few million requests 
per day on Monday and am thinking about potential problems related to the 
number of available connections.


What's "too much" for max_connections? What happens when you set it to 
high? What factors affect that number?


cheers,

Chris




Fwd: sensible configuration of max_connections

2020-02-07 Thread Chris Ellis
Hi Chris

On Fri, 7 Feb 2020, 08:36 Chris Withers,  wrote:

> Hi All,
>
> What's a sensible way to pick the number to use for max_connections?
>

Sensible in this context is somewhat variable.  Each connection in
PostgreSQL will be allocated a backend process.  These are not the lightest
weight of things.

Each connection takes up space in shared memory, as mentioned in the manual.


> I'm looking after a reasonable size multi-tenant cluster, where the
> master handles all the load and there's a slave in case of hardware
> failure in the master.
> The machine is used to host what I suspect are mainly django
> applications, so lots of short web requests, not sure how much, if any,
> django's orm does connection pooling.
>
> I arbitrarily picked 1000 for max_connections and haven't had any
> problems, but onboarding another app that handles a few million requests
> per day  on Monday and thinking about potential problems related to the
> number of available connections.
>
> What's "too much" for max_connections? What happens when you set it to
> high? What factors affect that number?
>

When sizing max_connections you need to trade off how many connections your
application will use at peak vs how much RAM and CPU you have.

Each connection is capable of allocating work_mem and has a stack etc.

As such you don't want max_connections to be able to run your system out of
RAM.

Given your situation I'd very seriously look at connection pooling using
PgBouncer or similar.  That way you can run with a far smaller
max_connections and still cope with applications configured with large
usually idle connection pools.
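As a rough input to that trade-off, sampling something like this at peak shows how many of the configured connections are actually working versus idling in app-side pools (a sketch):

SELECT state, count(*)
  FROM pg_stat_activity
 WHERE backend_type = 'client backend'
 GROUP BY state
 ORDER BY count(*) DESC;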


> cheers,
>
> Chris
>

Regards,
Chris Ellis

>


Re: Fwd: sensible configuration of max_connections

2020-02-07 Thread Chris Withers

On 07/02/2020 12:49, Chris Ellis wrote:

What's "too much" for max_connections? What happens when you set it to

high? What factors affect that number?


When sizing max_connections you need to trade off how many connections 
your application will use at peak vs how much RAM and CPU you have.


Each connection is capable of allocating work_mem and has a stack etc.

As such you don't want max_connections to be able to run your system 
out of RAM.
Sure, but that's where I'm trying to find out what's sensible. The box 
has 196GB memory, most of that in hugepages, 18 core Intel Skylake with 
HT on giving 36 cores and tonnes of SSD for storage. How would I turn 
that spec into a sensible number for max_connections? As that number 
grows, what contention points in postgres will start creaking (shared 
memory where the IPC happens?)


In case I forgot to say, this is PostgreSQL 11...

Chris

PS: definitely thinking of pg_bouncer, but still trying to figure out 
what to sensibly set for max_connections.


PG 12: Partitioning across a FDW?

2020-03-24 Thread Chris Morris
Is it even possible to use PG partitioning across a Foreign Server?


Re: PG 12: Partitioning across a FDW?

2020-03-25 Thread Chris Morris
>
> > Is it even possible to use PG partitioning across a Foreign Server?
> I am not certain what you mean, but you can have foreign tables as
> partitions
> of a partitioned table.  The partitions won't be processed in parallel
> though.


I have a large, growing table, that I'd like to start partitioning, but
also would like "older" partitions to be stored in a separate database,
connected via FDW. I haven't played around with partitioning at all yet, so
clearly I'm not sure how to ask the question :)

The hope is to still have one "seamless" table users can query, but to
spread the storage across different databases. I realize that may be asking
for too much.


Re: PG 12: Partitioning across a FDW?

2020-03-25 Thread Chris Morris
>
> Not yet..  There is ongoing work to make that happen though.


Glad to hear it. :) Thx.


Re: PG 12: Partitioning across a FDW?

2020-03-25 Thread Chris Morris
Right now my dbs are hosted by Heroku, so I doubt I have any control over
the dbs at that level.

Thanks for the idea though! :)

On Wed, Mar 25, 2020 at 12:04 PM Michael Lewis  wrote:

> Chris,
> Does it actually need to be a different server and database, or would it
> be possible to have another storage device added to your existing database
> and make use of tablespaces to accomplish pseudo-archive of older
> partitions? Just a thought.
>
>>


Log Unique Queries without Params?

2020-04-11 Thread Chris Morris
I have a local script I've written that will scan a log of PG queries to
extract out unique queries without any specific parameter data. For
example, if these 2 queries are actually run:

SELECT * FROM foo where bar = 1;
SELECT * FROM foo where bar = 2;

It will capture only:

SELECT * FROM foo where bar = :id;

Are there any existing tools that do this already for me? I'm considering
setting up a server that can have logs forwarded to it and only logging
unique queries like this, but I don't want to build anything that may
already exist out there.


Re: Log Unique Queries without Params?

2020-04-12 Thread Chris Morris
Thx!

On Sat, Apr 11, 2020 at 11:55 PM Julien Rouhaud  wrote:

> On Sun, Apr 12, 2020 at 6:51 AM Chris Morris 
> wrote:
> >
> > I have a local script I've written that will scan a log of PG queries to
> extract out unique queries without any specific parameter data. For
> example, if these 2 queries are actually run:
> >
> > SELECT * FROM foo where bar = 1;
> > SELECT * FROM foo where bar = 2;
> >
> > It will capture only:
> >
> > SELECT * FROM foo where bar = :id;
> >
> > Are there any existing tools that do this already for me? I'm
> considering setting up a server that can have logs forwarded to it and only
> logging unique queries like this, but I don't want to build anything that
> may already exist out there.
>
> pgbadger (http://pgbadger.darold.net/#about) will do that and much
> more.  Depending on what you want to achieve maybe pg_stat_statements
> (https://www.postgresql.org/docs/current/pgstatstatements.html) is
> also an alternative.
>
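For reference, the pg_stat_statements route already stores statements with the constants normalized away, which is essentially the de-duplication described above; a minimal sketch (assumes the extension is available on the plan/instance):

CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

SELECT calls, query
  FROM pg_stat_statements
 ORDER BY calls DESC
 LIMIT 20;  -- literals show up as $1, $2, ... placeholders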


possibilities for SQL optimization

2020-04-16 Thread Chris Stephens
d: (("pixelId" >= '10729604251648'::bigint) AND
("pixelId" <= '10729608445951'::bigint))
 ->  Bitmap Index Scan on "IDX_DiaSource_htmId20"  (cost=0.00..4.72
rows=15 width=0) (actual time=0.009..0.009 rows=16 loops=1)
   Index Cond: (("pixelId" >= '10729613164544'::bigint) AND
("pixelId" <= '10729613295615'::bigint))
 ->  Bitmap Index Scan on "IDX_DiaSource_htmId20"  (cost=0.00..4.67
rows=10 width=0) (actual time=0.002..0.002 rows=2 loops=1)
   Index Cond: (("pixelId" >= '10729613361152'::bigint) AND
("pixelId" <= '10729613426687'::bigint))
 ->  Bitmap Index Scan on "IDX_DiaSource_htmId20"  (cost=0.00..4.72
rows=15 width=0) (actual time=0.009..0.009 rows=2 loops=1)
   Index Cond: (("pixelId" >= '10729614737408'::bigint) AND
("pixelId" <= '10729614868479'::bigint))
 ->  Bitmap Index Scan on "IDX_DiaSource_htmId20"  (cost=0.00..4.72
rows=15 width=0) (actual time=0.003..0.003 rows=12 loops=1)
   Index Cond: (("pixelId" >= '10729614934016'::bigint) AND
("pixelId" <= '10729615065087'::bigint))
 ->  Bitmap Index Scan on "IDX_DiaSource_htmId20"  (cost=0.00..4.72
rows=15 width=0) (actual time=0.003..0.003 rows=11 loops=1)
   Index Cond: (("pixelId" >= '10729615130624'::bigint) AND
("pixelId" <= '10729615261695'::bigint))
 ->  Bitmap Index Scan on "IDX_DiaSource_htmId20"  (cost=0.00..4.67
rows=10 width=0) (actual time=0.003..0.003 rows=10 loops=1)
   Index Cond: (("pixelId" >= '10729615654912'::bigint) AND
("pixelId" <= '10729615720447'::bigint))
 ->  Bitmap Index Scan on "IDX_DiaSource_htmId20"  (cost=0.00..4.67
rows=10 width=0) (actual time=0.002..0.002 rows=2 loops=1)
   Index Cond: (("pixelId" >= '10729616048128'::bigint) AND
("pixelId" <= '10729616113663'::bigint))
 ->  Bitmap Index Scan on "IDX_DiaSource_htmId20"
 (cost=0.00..11.92 rows=335 width=0) (actual time=0.036..0.036 rows=254
loops=1)
   Index Cond: (("pixelId" >= '10729621028864'::bigint) AND
("pixelId" <= '10729625223167'::bigint))
 ->  Bitmap Index Scan on "IDX_DiaSource_htmId20"  (cost=0.00..4.67
rows=10 width=0) (actual time=0.009..0.009 rows=15 loops=1)
   Index Cond: (("pixelId" >= '10729625747456'::bigint) AND
("pixelId" <= '10729625812991'::bigint))
 ->  Bitmap Index Scan on "IDX_DiaSource_htmId20"  (cost=0.00..4.72
rows=15 width=0) (actual time=0.003..0.003 rows=17 loops=1)
   Index Cond: (("pixelId" >= '10729625878528'::bigint) AND
("pixelId" <= '10729626009599'::bigint))
 ->  Bitmap Index Scan on "IDX_DiaSource_htmId20"  (cost=0.00..4.98
rows=41 width=0) (actual time=0.004..0.004 rows=32 loops=1)
   Index Cond: (("pixelId" >= '10729626337280'::bigint) AND
("pixelId" <= '10729626796031'::bigint))
 ->  Bitmap Index Scan on "IDX_DiaSource_htmId20"  (cost=0.00..6.22
rows=165 width=0) (actual time=0.029..0.029 rows=144 loops=1)
   Index Cond: (("pixelId" >= '10729626861568'::bigint) AND
("pixelId" <= '10729628893183'::bigint))
 ->  Bitmap Index Scan on "IDX_DiaSource_htmId20"  

Re: possibilities for SQL optimization

2020-04-16 Thread Chris Stephens
disastrous :)

Planning Time: 7.569 ms
Execution Time: 316969.474 ms

On Thu, Apr 16, 2020 at 9:23 AM Pavel Stehule 
wrote:

>
>
> čt 16. 4. 2020 v 16:08 odesílatel Chris Stephens 
> napsal:
>
>> PG12
>> RHEL 8
>>
>> I suspect there's little I can do to get the following query to run
>> faster/more efficiently but thought I'd post to list and confirm.
>>
>> Caveat: I'm coming from an Oracle background and am extremely wet behind
>> ears w/ respect to postgresql (please be kind :)).
>>
>> Right now, we can't change the software generating the SQL though that is
>> very possible/likely in the future. For now, the query is what it is. I can
>> alter indexing, add table partitioning, or anything else that doesn't
>> change logical structure of table.
>>
>> This is a very wide table holding astronomical data which will be used to
>> track changes in the sky to generate alerts for astronomers to evaluate.
>> The query has a variable number of "pixelId BETWEEN" predicates. As data
>> volume in DiaSource increases, performance of query decreases. I need to
>> confirm this but I suspect the primary reason for the decreased performance
>> is increased number of "pixelId BETWEEN" predicates generated by
>> application. Predicate count is the only thing that changes. I don't think
>> performance would otherwise degrade given execution plan.
>>
>> [local] @z=# \dS+ "DiaSource"
>> Table
>> "public.DiaSource"
>>Column|Type | Collation |
>> Nullable |Default| Storage | Stats target | Description
>>
>> -+-+---+--+---+-+--+-
>>  diaSourceId | bigint  |   |
>> not null | '0'::bigint   | plain   |  |
>>  ccdVisitId  | bigint  |   |
>> not null | '0'::bigint   | plain   |  |
>>  diaObjectId | bigint  |   |
>>  | '0'::bigint   | plain   |  |
>>  ssObjectId  | bigint  |   |
>>  | '0'::bigint   | plain   |  |
>>  parentDiaSourceId   | bigint  |   |
>>  | '0'::bigint   | plain   |  |
>>  prv_procOrder   | integer |   |
>> not null | 0 | plain   |  |
>>  ssObjectReassocTime | timestamp without time zone |   |
>>  |   | plain   |  |
>>  midPointTai | double precision|   |
>> not null | '0'::double precision | plain   |  |
>>  ra  | double precision|   |
>> not null | '0'::double precision | plain   |  |
>>  raSigma | double precision|   |
>> not null | '0'::double precision | plain   |  |
>>  decl| double precision|   |
>> not null | '0'::double precision | plain   |  |
>>  declSigma   | double precision|   |
>> not null | '0'::double precision | plain   |  |
>>  ra_decl_Cov | double precision|   |
>> not null | '0'::double precision | plain   |  |
>>  x   | double precision|   |
>> not null | '0'::double precision | plain   |  |
>>  xSigma  | double precision|   |
>> not null | '0'::double precision | plain   |  |
>>  y   | double precision|   |
>> not null | '0'::double precision | plain   |  |
>>  ySigma  | double precision|   |
>> not null | '0'::double precision | plain   |  |
>>  x_y_Cov | double precision|   |
>> not null | '0'::double precision | plain   |  |
>>  apFlux  | double precision  

Re: possibilities for SQL optimization

2020-04-16 Thread Chris Stephens
On Thu, Apr 16, 2020 at 10:47 AM Michael Lewis  wrote:

> My other thought was to range partition by pixelID + brin index.
>

> I would expect brin index to be INSTEAD of partitioning. You didn't share
> buffer hits, which I expect were 100% on the subsequent explain analyze
> runs, but the index scan may still be faster if the planner knows it only
> needs to scan a few small indexes on one, or a few, partitions.
>

agreed but i wondered if partition elimination might be a faster way to
eliminate significant portions of the table up front and then possibly parallelize
the remaining partitioned brin index scans. not even sure it's worth trying
though. this is a data volume vs cache size and predicate count (w/ each
predicate requiring a very efficient but not instantaneous index lookup)
issue.
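a sketch of that idea (column list trimmed down, range boundaries invented):

CREATE TABLE "DiaSource_part" (
    "diaSourceId" bigint NOT NULL,
    "pixelId"     bigint NOT NULL,
    "midPointTai" double precision NOT NULL
) PARTITION BY RANGE ("pixelId");

CREATE TABLE "DiaSource_part_p0" PARTITION OF "DiaSource_part"
    FOR VALUES FROM (0) TO (5000000000000);
CREATE TABLE "DiaSource_part_p1" PARTITION OF "DiaSource_part"
    FOR VALUES FROM (5000000000000) TO (10000000000000);
-- ... more partitions as needed ...

-- created on the parent, so each partition gets its own small brin index;
-- brin only pays off if rows within a partition are physically correlated with "pixelId"
CREATE INDEX ON "DiaSource_part" USING brin ("pixelId");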


>
What sort of growth do you see on this table? Is future scalability a
> significant concern, or is the problem just that 40-300ms for this select
> is unacceptable?
>

that's not really clear to me at this point but data will grow linearly for
a year and then remain constant. i think current volume represents ~ 3
months of data but i'm not sure. it is the 40-300ms response time that is
the issue. this system has ridiculous time constraints and has other
processing separate from database queries to account for. query response
times must fit into those requirements but since other pieces are still
being looked at, specific requirements aren't available as far as i
understand. "as fast as possible" is what we have right now. :(


>
>
> https://www.postgresql.org/docs/current/runtime-config-resource.html#RUNTIME-CONFIG-RESOURCE-ASYNC-BEHAVIOR
> Have you tuned effective_io_concurrency? The documentation says "this
> setting only affects bitmap heap scans" and nearly all the time is there.
> If it is still set to 1 as default, then increasing to 200 or perhaps more
> might be prudent when on SSD or other memory backed storage. You don't even
> need to change the server config defaults for testing the impact-
>
> set effective_io_concurrency = 200;
> /* select query */
> reset effective_io_concurrency; /* if doing other things in the same
> session and wanting to revert to default behavior, else just disconnect */
>

I just tried that. results were same as without. thanks for the suggestion!


script libraries?

2020-04-30 Thread Chris Stephens
as another Oracle DBA trying to pick up Postgresql one thing i haven't come
across are script libraries such as there are for Oracle (
https://github.com/tanelpoder/tpt-oracle and
https://oracle-base.com/dba/scripts as examples).

Does anything like that exist for PG? Would be nice to see how people
navigate through PG on the command line.

Thanks!


surprisingly slow creation of gist index used in exclude constraint

2020-05-14 Thread Chris Withers

Hi,
I'm upgrading a database from 9.4 to 11.5 by dumping from the old
  cluster and loading into the new cluster.
The database is tiny: around 2.3G, but importing this table is
  proving problematic:
Column  |   Type|Modifiers
+---+--
 period | tsrange   | not null
 col1   | character varying | not null
 col2   | character varying | not null
 col3   | integer   | not null
 col4   | character varying | not null default ''::character varying
 id | integer   | not null default nextval('mkt_profile_id_seq'::regclass)
 deleted| boolean   | not null default false
 managed| boolean   | not null default false
 col5   | character varying |
Indexes:
"mkt_profile_pkey" PRIMARY KEY, btree (id)
"mkt_profile_period_col1_col4_col2_chan_excl" EXCLUDE USING gist (period WITH &&, col1 WITH =, col4 WITH =, col2 WITH =, col3 WITH =)
Check constraints:
"mkt_profile_period_check" CHECK (period <> 'empty'::tsrange)
Foreign-key constraints:
"mkt_profile_col1_fkey" FOREIGN KEY (col1) REFERENCES host(name)
It has 4.1 million rows in it and while importing the data only
  takes a couple of minutes, when I did a test load into the new
  cluster, building the mkt_profile_period_col1_col4_col2_chan_excl
  index for the exclude constraint took 15 hours.
I feel like asking what I'm doing wrong here? The new server is
  pretty decent hardware...
Concrete questions:
- what, if anything, am I getting badly wrong here?
- what can I do to speed up creation of this index?
- failing that, what can I do to import and then create the index
  in the background? 

As you can imagine, a 15hr outage for an upgrade has not met with
  large amounts of happiness from the people whose application it is
  ;-)
Chris

  



Re: surprisingly slow creation of gist index used in exclude constraint

2020-05-14 Thread Chris Withers

On 14/05/2020 21:16, k...@rice.edu wrote:


Hi Chris,

This sounds like a candidate for pg_logical replicating from the old to 
new system.


Can you point me to a good guide as to how to easily set this up for one 
database and would work between pg 9.4 and pg 11.5?


cheers,

Chris




Re: surprisingly slow creation of gist index used in exclude constraint

2020-05-15 Thread Chris Withers

On 14/05/2020 21:31, Tom Lane wrote:

Chris Withers  writes:

   It has 4.1 million rows in it and while importing the data only
   takes a couple of minutes, when I did a test load into the new
   cluster, building the mkt_profile_period_col1_col4_col2_chan_excl
   index for the exclude constraint took 15 hours.


Don't recall for sure, but I think GIST index build is sensitive
to the maintenance_work_mem setting; did you have that cranked up?


postgres=# show maintenance_work_mem;
 maintenance_work_mem
--
 2GB
(1 row)

Would it be worth turning that up more? The server has ~130GB memory.
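If restarting with a bigger global value is awkward, it can also be raised just in the session that rebuilds the index (the figure here is only an assumption given the 130GB box, not a recommendation):

SET maintenance_work_mem = '16GB';
-- then run the CREATE INDEX / ALTER TABLE ... ADD CONSTRAINT ... EXCLUDE from the dump in this session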


 - failing that, what can I do to import and then create the index
   in the background?


CREATE INDEX CONCURRENTLY, perhaps.


How would I bring this into play with respect to the dump and load 
cycle? Is there an option to pg_dump or something else I should use or 
is this a case of awk'ing the create index in the dump?


Chris




Audit Role Connections

2020-05-29 Thread Chris Morris
We're using Heroku's PG, and it comes with a default, almost super-user.
I'm wanting to restrict that account for only our production app servers,
and have anyone connecting for personal queries to go through an assigned
role by employee (that's all setup and working).

Other than polling pg_stat_activity (which isn't 100% accurate depending on
timing), is there a good way to audit connections? To detect which roles
are being used for connections?


Re: Audit Role Connections

2020-05-29 Thread Chris Morris
Ah, I do appear to have that enabled (inside Heroku's config), but I can't
find anything like that in the logs, so I've opened a ticket with them. Thx
a lot!

On Fri, May 29, 2020 at 2:25 PM Peter J. Holzer  wrote:

> On 2020-05-29 12:42:47 -0500, Chris Morris wrote:
> > We're using Heroku's PG,
> [...]
> > Other than polling pg_stat_activity (which isn't 100% accurate depending
> on
> > timing), is there a good way to audit connections? To detect which roles
> are
> > being used for connections?
>
> Do you have access to the log files?
>
> If you log_connections is on, you get messages like these:
>
>
> 2020-05-29 21:00:02 CEST [27995]: [2-1] user=w*,db=wds,pid=27995 LOG:
> connection authorized: user=w* database=wds
> 2020-05-29 21:00:18 CEST [27995]: [9-1] user=w*,db=wds,pid=27995 LOG:
> disconnection: session time: 0:00:15.979 user=w* database=wds
> host=[local]
> 2020-05-29 21:07:14 CEST [7481]: [2-1] user=u*,db=wds,pid=7481 LOG:
> connection authorized: user=u* database=wds
> 2020-05-29 21:07:14 CEST [7481]: [7-1] user=u*,db=wds,pid=7481 LOG:
> disconnection: session time: 0:00:00.016 user=u* database=wds
> host=[local]
> 2020-05-29 21:10:56 CEST [13918]: [2-1] user=m***,db=wds,pid=13918
> LOG:  connection authorized: user=m*** database=wds SSL enabled
> (protocol=TLSv1.2, cipher=ECDHE-RSA-AES256-GCM-SHA384, bits=256,
> compression=off)
> 2020-05-29 21:10:56 CEST [13918]: [11-1] user=m***,db=wds,pid=13918
> LOG:  disconnection: session time: 0:00:00.117 user=m*** database=wds
> host=143.130.**.** port=54037
>
> (user names and IP addresses censored for privacy reasons)
>
> hp
>
> --
>_  | Peter J. Holzer| Story must make more sense than reality.
> |_|_) ||
> | |   | h...@hjp.at |-- Charles Stross, "Creative writing
> __/   | http://www.hjp.at/ |   challenge!"
>


Re: Oracle vs. PostgreSQL - a comment

2020-06-03 Thread Chris Travers
On Wed, Jun 3, 2020 at 7:45 PM Andreas Joseph Krogh 
wrote:

> På onsdag 03. juni 2020 kl. 18:50:12, skrev Jeremy Schneider <
> schnj...@amazon.com>:
>
> > On 6/2/20 1:30 PM, Stephen Frost wrote:
> >> No, nothing does as PG doesn't support it as we have one WAL stream for
> >> the entire cluster.
>
> On 6/2/20 11:38, Ron wrote:
> > Right.  Making WAL files specific to a database should be high on the
> > list of priorities.
>
> Did Oracle change this?  Last time I looked, I don't think Oracle
> supported local redo in their multitenant architecture either.
>
>
>
> Regardless of what Oracle does, I agree this would be a huge step in the
> right direction for pg-DBAs.
> I have absolutely no clue about how much work is required etc., but I
> think it's kind of strange that no companies have invested in making this
> happen.
>

I manage database clusters where the number of databases is a reason not to
do logical replication based upgrades, where pg_upgrade is far preferred
instead.

If this were to be the case, I would be very concerned that a bunch of
things would have to change:
1.  Shared catalogs would have txid problems unless you stay with global
txids and then how do local wal streams work there?
2.  Possibility that suddenly streaming replication has the possibility of
different databases having different amounts of lag
3.  Problems with io management on WAL on high throughput systems (I have
systems where a db cluster generates 10-20TB of WAL per day)

So I am not at all sure this would be a step in the right direction or
worth the work.

>
> --
> Andreas Joseph Krogh
>


-- 
Best Wishes,
Chris Travers

Efficito:  Hosted Accounting and ERP.  Robust and Flexible.  No vendor
lock-in.
http://www.efficito.com/learn_more


troubleshooting postgresql ldap authentication

2020-06-08 Thread Chris Stephens
postgresql 12
centos 7

i am trying to configure ldap authentication.

i have the following pg_hba.conf entry (server and basedn are correct but
not shown):

hostsslall all 0.0.0.0/0  ldap
ldapserver="ldaps://xxx" ldapbasedn="yyy" ldaptls=1

when i attempt to authenticate, i see the following in pg logs:

2020-06-08 15:58:43 CDT [20235]: [1-1]
user=[unknown],db=[unknown],app=[unknown],client=zzz LOG:  connection
received: host=zzz port=12378
2020-06-08 15:58:43 CDT [20235]: [2-1]
user=uuu,db=postgres,app=[unknown],client=zzz LOG:  could not initialize
LDAP: Bad parameter to an ldap routine
2020-06-08 15:58:43 CDT [20235]: [3-1]
user=uuu,db=postgres,app=[unknown],client=zzz FATAL:  LDAP authentication
failed for user "uuu"
2020-06-08 15:58:43 CDT [20235]: [4-1]
user=uuu,db=postgres,app=[unknown],client=zzz DETAIL:  Connection matched
pg_hba.conf line 90: "hostsslall all 0.0.0.0/0
ldap ldapserver="ldaps://xxx" ldapbasedn="yyy" ldaptls=1"

does anyone know what might be causing "LDAP: Bad parameter to an ldap
routine"

ldapsearch works fine w/ the supplied ldapserver and base dn.


Re: troubleshooting postgresql ldap authentication

2020-06-09 Thread Chris Stephens
yes, shortly after i sent this out to the list, one of our security
administrators suggested ldapscheme.  I just tested and ldapurl works as
well.

the security admin explained it like this:

"since we are using port 636 I know that it needs the TLS connection in
place before LDAP commands. starttls does the opposite.  allows an LDAP
connection to "upgrade" to TLS.  so the previous errors were simply it
unable to connect to server."

i'm guessing information like that doesn't belong in postgresql
documentation but it would have been useful yesterday. :)

thanks for the response! i just recently made the switch to postgresql
after 20 years of mainly Oracle. during that time, the oracle-l mailing
list was invaluable as a learning tool and as a way to get help
when needed. it's great to know there's a similar mailing list in the
postgresql community!
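for anyone who hits the same error later, a working entry would presumably look something like this (same redactions as the original; option spellings per the pg_hba.conf docs):

hostssl    all    all    0.0.0.0/0    ldap    ldapserver="xxx" ldapscheme=ldaps ldapbasedn="yyy"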

On Mon, Jun 8, 2020 at 7:41 PM Thomas Munro  wrote:

> On Tue, Jun 9, 2020 at 9:05 AM Chris Stephens 
> wrote:
> > hostsslall all 0.0.0.0/0  ldap
> ldapserver="ldaps://xxx" ldapbasedn="yyy" ldaptls=1
>
> > does anyone know what might be causing "LDAP: Bad parameter to an ldap
> routine"
>
> You probably want ldapurl="ldaps://xxx" (note: ldapurl, not
> ldapserver).  Or you could use ldapscheme="ldaps" and
> ldapserver="xxx".
>


ansible modules for postgresql installation/config

2020-06-11 Thread Chris Stephens
it looks like there are a number of roles available through ansible galaxy
that support initial postgresql setup and configuration.

i'm wondering if there are any that are more popular/capable that i should
consider vs just picking a few and evaluating those.

does anyone have any recommendations?


Transaction control in SECURITY DEFINER procedures

2020-07-07 Thread Chris Sterritt
The documentation for CREATE PROCEDURE informs us "A SECURITY DEFINER 
procedure cannot execute transaction control statements (for example, 
COMMIT and ROLLBACK, depending on the language)."


Can anyone let me know why this is so and are there any plans to remove 
this restriction in future releases?
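For context, a minimal sketch of the behaviour in question (plpgsql assumed):

CREATE PROCEDURE demo_secdef() LANGUAGE plpgsql SECURITY DEFINER AS $$
BEGIN
    -- ... do some work ...
    COMMIT;   -- with SECURITY DEFINER this is rejected when the procedure runs
END;
$$;

CALL demo_secdef();   -- errors out; the same body without SECURITY DEFINER commits fine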


Regards, Chris Sterritt



Unexplained disk usage in AWS Aurora Postgres

2020-08-04 Thread Chris Borckholder
Hi!

We are experiencing a strange situation with an AWS Aurora postgres
instance.
The database steadily grows in size, which is expected and normal.
After enabling logical replication, the disk usage reported by AWS metrics
increases much faster then the database size (as seen by \l+ in psql). The
current state is that database size is ~290GB, while AWS reports >640GB
disk usage.
We reached out to AWS support of course, which is ultimately responsible.
Unfortunately they were not able to diagnose this until now.

I checked with the queries from wiki
https://wiki.postgresql.org/wiki/Disk_Usage , which essentially give the
same result.
I tried to check on wal segment file size, but we have no permission to
execute select pg_ls_waldir().
The replication slot is active and it also progresses
(pg_replication_slots.confirmed_flush_lsn increases and is close to
pg_current_wal_flush_lsn).

Can you imagine other things that I could check from within postgres with
limited permissions to diagnose this?
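One thing that is often still visible with limited permissions is how much WAL each replication slot is holding back (a sketch; whether Aurora exposes these functions faithfully is a separate question):

SELECT slot_name, slot_type, active,
       pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn)) AS retained_wal
  FROM pg_replication_slots;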

Best Regards
Chris


Re: Unexplained disk usage in AWS Aurora Postgres

2020-08-07 Thread Chris Borckholder
Thank you for your insight Seenu!

That is a good point, unfortunately we do not have access to the
server/file system as the database is a managed service.
Access to the file system from postgres like pg_ls_dir is also blocked.

Are you aware of another, creative way to infer the wal file size from
within postgres?

Best Regards
Chris

On Tue, Aug 4, 2020 at 11:39 AM Srinivasa T N  wrote:

> There may be lot of wal files or the size of log files in pg_log might be
> huge.  "du -sh *" of data directory holding the database might help.
>
> Regards,
> Seenu.
>
>
> On Tue, Aug 4, 2020 at 2:09 PM Chris Borckholder <
> chris.borckhol...@bitpanda.com> wrote:
>
>> Hi!
>>
>> We are experiencing a strange situation with an AWS Aurora postgres
>> instance.
>> The database steadily grows in size, which is expected and normal.
>> After enabling logical replication, the disk usage reported by AWS
>> metrics increases much faster then the database size (as seen by \l+ in
>> psql). The current state is that database size is ~290GB, while AWS reports
>> >640GB disk usage.
>> We reached out to AWS support of course, which is ultimately responsible.
>> Unfortunately they were not able to diagnose this until now.
>>
>> I checked with the queries from wiki
>> https://wiki.postgresql.org/wiki/Disk_Usage , which essentially give the
>> same result.
>> I tried to check on wal segment file size, but we have no permission to
>> execute select pg_ls_waldir().
>> The replication slot is active and it also progresses
>> (pg_replication_slots.confirmed_flush_lsn increases and is close to
>> pg_current_wal_flush_lsn).
>>
>> Can you imagine other things that I could check from within postgres with
>> limited permissions to diagnose this?
>>
>> Best Regards
>> Chris
>>
>>
>>


Re: Unexplained disk usage in AWS Aurora Postgres

2020-08-07 Thread Chris Borckholder
Thanks for your insight!

I cannot find any errors related to archiving in the logs that are
accessible to me.
It's definitely something that I will forward to the support team of the
managed database.

Best Regards
Chris

On Thu, Aug 6, 2020 at 3:18 AM Mohamed Wael Khobalatte <
mkhobala...@grubhub.com> wrote:

>
>
> On Tue, Aug 4, 2020 at 4:39 AM Chris Borckholder <
> chris.borckhol...@bitpanda.com> wrote:
>
>> Hi!
>>
>> We are experiencing a strange situation with an AWS Aurora postgres
>> instance.
>> The database steadily grows in size, which is expected and normal.
>> After enabling logical replication, the disk usage reported by AWS
>> metrics increases much faster then the database size (as seen by \l+ in
>> psql). The current state is that database size is ~290GB, while AWS reports
>> >640GB disk usage.
>> We reached out to AWS support of course, which is ultimately responsible.
>> Unfortunately they were not able to diagnose this until now.
>>
>> I checked with the queries from wiki
>> https://wiki.postgresql.org/wiki/Disk_Usage , which essentially give the
>> same result.
>> I tried to check on wal segment file size, but we have no permission to
>> execute select pg_ls_waldir().
>> The replication slot is active and it also progresses
>> (pg_replication_slots.confirmed_flush_lsn increases and is close to
>> pg_current_wal_flush_lsn).
>>
>> Can you imagine other things that I could check from within postgres with
>> limited permissions to diagnose this?
>>
>> Best Regards
>> Chris
>>
>>
> If you do archive wal files, maybe the archive_command is failing?
>


Re: Unexplained disk usage in AWS Aurora Postgres

2020-08-07 Thread Chris Borckholder
Thank you Adam and Christoph,

You are totally right, that AWS support is the one to help me with this
problem.
I am in contact with them for quite some time on this problem and as there
was no progress on resolving this,
I tried to find some insight or trick that I missed here. It's a long shot
(:

Best Regards
Chris

On Fri, Aug 7, 2020 at 4:22 PM Christoph Moench-Tegeder 
wrote:

> ## Chris Borckholder (chris.borckhol...@bitpanda.com):
>
> > We are experiencing a strange situation with an AWS Aurora postgres
> > instance.
>
> The main problem here is that "Amazon Aurora" is not PostgreSQL.
> If I understand Amazon's documentation, what you are using is
> officially named "Amazon Aurora with PostgreSQL Compatibility",
> and that sums is up quite nicely: Aurora is a database engine
> developed at Amazon - and it's inner workings are not publically
> documented.
> Whatever is using up that disk space - only AWS Support can know.
>
> Regards,
> Christoph
>
> --
> Spare Space
>


Re: Implement a new data type

2020-08-11 Thread Chris Travers
On Wed, Aug 12, 2020 at 2:01 AM raf  wrote:

> On Tue, Aug 11, 2020 at 06:38:39AM -0700, Miles Elam <
> miles.e...@productops.com> wrote:
>
> > Also of note: PostgreSQL already has a money type (
> > https://www.postgresql.org/docs/current/datatype-money.html)
> > But you shouldn't use it (
> > https://wiki.postgresql.org/wiki/Don%27t_Do_This#Don.27t_use_money).
> >
> > I only bring it up so that you can know to make your money type a
> slightly
> > different name to avoid a conflict. Money is deceptively hard to
> implement
> > correctly. I'd recommend reading the second link if you have not already
> to
> > avoid previously known issues.
>
> I use decimal(10,2) for whole cents, and decimal(12,6)
> for sub-cents. Single currency only. I didn't know
> there was a money type originally, but it wouldn't be
> usable for me anyway without the ability to specify the
> scale and precision.
>

It is worth noting that decimal is an alias for numeric in Postgres.   For
that reason you will have less confusion if you use numeric instead.

>
> I recommend considering passing values to the database
> as "decimal '1.23'" rather than bare numeric literals,
> just so there's no chance of the value being
> interpreted as a float at any stage by postgres. Maybe
> that's being too paranoid but that's a good idea when
> it comes to money. :-)
>

I don't think the type designation buys you anything unless it is part
of an arithmetic expression.  The single quotes do, and they cannot be omitted
here.

So I think there is a difference between ('1.23' + 1)::numeric and
'1.23'::numeric + 1 but there is also a difference between 1.23::numeric +
1 and '1.23'::numeric + 1

But there is no reason to add the cast when doing something like an insert
of a single value.

>
> Perhaps the incorporation of currency would make a new
> money type interesting. Currencies probably don't
> change as often as timezones but there would probably
> still be some ongoing need for updates.
>

The existing money type has another problem in that the currency it is
attached to is taken from the current locale.  So if you change your locale
settings you can change a value from, say, 100 IDR to 100 EUR at least for
display purposes.

I have some thoughts about how to do a multi-currency type but I am not
actually sure you get anything by tying the data together instead of having
it in separate columns.

>
> cheers,
> raf
>
>
>
>

-- 
Best Wishes,
Chris Travers

Efficito:  Hosted Accounting and ERP.  Robust and Flexible.  No vendor
lock-in.
http://www.efficito.com/learn_more


Re: export to parquet

2020-08-26 Thread Chris Travers
On Wed, Aug 26, 2020 at 9:00 PM Scott Ribe 
wrote:

> I have no Hadoop, no HDFS. Just looking for the easiest way to export some
> PG tables into Parquet format for testing--need to determine what kind of
> space reduction we can get before deciding whether to look into it more.
>
> Any suggestions on particular tools? (PG 12, Linux)
>
For simple exporting, the simplest thing is a single-node instance of
Spark.

You can read parquet files in Postgres using
https://github.com/adjust/parquet_fdw if you so desire but it does not
support writing as parquet files are basically immutable.


>
> --
> Scott Ribe
> scott_r...@elevated-dev.com
> https://www.linkedin.com/in/scottribe/
>
>
>
>
>
>

-- 
Best Wishes,
Chris Travers

Efficito:  Hosted Accounting and ERP.  Robust and Flexible.  No vendor
lock-in.
http://www.efficito.com/learn_more


Re: Check for duplicates before inserting new rows

2020-09-04 Thread Chris Sterritt


On 04/09/2020 15:46, Rich Shepard wrote:

On Fri, 4 Sep 2020, Olivier Gautherot wrote:


First of all, what version of PostgreSQL are you using?


Olivier,

12.2.

One way would be to add a UNIQUE constraint and perform for each row 
of the

source table a INSERT ... ON CONFLICT DO NOTHING (see
https://www.postgresql.org/docs/current/sql-insert.html#SQL-ON-CONFLICT)

If it is a 2-way merge, I would encapsulate the process in a function
(which will create a transaction to protect your process) and add a 
column
to trace the rows that have been merged. For this purpose, you can 
use the

ON CONFLICT DO UPDATE ...


Lots for me to ponder.

The new data has attributes stored in two database tables: location and
measurements. I'll need to think about how both could be handled in a 
single

function. For example, the new data contains a row with a location not
already in the existing location table. That location needs to be 
added to
the location table and its associated measurement attributes then 
added to

the existing measurements table.

Thanks for the suggestion,

Rich




Assuming some simple table structures (I've not included PK or FK 
definitions for simplicity):


create table station (station_id integer,station_data text);

create table measurement (measurement_id bigserial,station_id 
integer,ameasurement text);

-- Add some test stations
insert into station(station_id,station_data)
values (1,'station1'),
 (2,'station2'),
 (3,'station3') ;


*Query to identify new measurements:
select station_id, ameasurement
  from (VALUES (1,'1meas1'),   -- this represents your new test data set
               (1,'1meas2'),
               (2,'2meas1'),
               (3,'3meas1')) as m(station_id, ameasurement)
except select station_id, ameasurement from measurement;

The query above will give you a list of the new values which are not currently 
stored in table measurement.
Run it and we get all 4 rows returned.

Add a measurement row:
insert into measurement(station_id,ameasurement)
values (2,'2meas1');

Now if you repeat the check for new measurements with the same query as at *, 
you only get 3 rows.

Cheers, Chris Sterritt



Re: Check for duplicates before inserting new rows

2020-09-04 Thread Chris Sterritt


On 04/09/2020 14:21, Rich Shepard wrote:
This is a new issue for me: I've received data from a different source 
and

need to add non-duplicates to two tables in the database. Each row in the
new data has a station ID and associated measurements.

The existing database includes a table for station information and 
another

for measurements made there.

I want to learn which stations and which measurements in the new data set
are not included in the existing tables. And, I don't want to try 
inserting

the new data and have postgres tell me when it's found duplicates,
especially since there are two tables involved.

My research into how to do this has not found a solution so I ask for
pointers to resources that will teach me how to add these new data to 
both

tables.

Regards,

Rich




To insert data into both tables:


drop table if exists station;
drop table if exists measurement;

create table station (station_id serial,station_data text);

create table measurement (measurement_id bigserial,station_id 
integer,ameasurement text);

insert into station(station_data)
values ('station1'),
 ('station2'),
 ('station3') ;

with src_data as (
         select station_data, ameasurement
           from (VALUES ('station1','meas1'),
                        ('station2','meas2'),
                        ('station3','meas3'),
                        ('station4','meas4')) as m(station_data, ameasurement)),
     ins_station as (
         insert into station (station_data)
         select station_data from src_data
         except
         select station_data from station
         returning station_id, station_data)
insert into measurement (station_id, ameasurement)
select s.station_id, sd.ameasurement
  from src_data sd
  join (select station_id, station_data from ins_station
        union
        select station_id, station_data from station) s using (station_data)
except
select station_id, ameasurement from measurement;

select * from station;
select * from measurement;


Regards,
Chris Sterritt



Re: Need explanation on index size

2020-09-24 Thread Chris Sterritt


On 24/09/2020 15:08, Guillaume Lelarge wrote:

Hi,

Le jeu. 24 sept. 2020 à 15:55, Guillaume Luchet <mailto:g.luc...@bilendi.com>> a écrit :


Hi,

I’m facing of a comportement I don’t understand on indexes, here a
quick example to reproduce my problem


test=# select version();
         version

--
 PostgreSQL 12.4 (Debian 12.4-1.pgdg90+1) on x86_64-pc-linux-gnu,
compiled by gcc (Debian 6.3.0-18+deb9u1) 6.3.0 20170516, 64-bit
(1 row)

test=# create table plop (col_a int, col_b int, col_c int);
CREATE TABLE

test=# create unique index on plop (col_a);
CREATE INDEX

test=# create index on plop(col_b);
CREATE INDEX

test=# insert into plop (col_a, col_b) select generate_series(1,
1), generate_series(1, 1);
INSERT 0 1

test=# SELECT schemaname, tablename,
              pg_size_pretty(SIZE) AS size_pretty,
              pg_size_pretty(total_size) AS total_size_pretty
         FROM (SELECT *,
                      pg_relation_size(quote_ident(schemaname) || '.' || quote_ident(tablename)) AS SIZE,
                      pg_total_relation_size(quote_ident(schemaname) || '.' || quote_ident(tablename)) AS total_size
                 FROM pg_tables) AS TABLES where tablename = 'plop';
 schemaname | tablename | size_pretty | total_size_pretty
------------+-----------+-------------+-------------------
 public     | plop      | 360 kB      | 864 kB
(1 row)

test=# update plop set col_c = floor(random() * 10 + 1)::int;
UPDATE 10000

test=# SELECT schemaname, tablename,
              pg_size_pretty(SIZE) AS size_pretty,
              pg_size_pretty(total_size) AS total_size_pretty
         FROM (SELECT *,
                      pg_relation_size(quote_ident(schemaname) || '.' || quote_ident(tablename)) AS SIZE,
                      pg_total_relation_size(quote_ident(schemaname) || '.' || quote_ident(tablename)) AS total_size
                 FROM pg_tables) AS TABLES where tablename = 'plop';
 schemaname | tablename | size_pretty | total_size_pretty
------------+-----------+-------------+-------------------
 public     | plop      | 792 kB      | 2160 kB
(1 row)

test=# reindex table plop;
REINDEX

test=# SELECT schemaname, tablename,
              pg_size_pretty(SIZE) AS size_pretty,
              pg_size_pretty(total_size) AS total_size_pretty
         FROM (SELECT *,
                      pg_relation_size(quote_ident(schemaname) || '.' || quote_ident(tablename)) AS SIZE,
                      pg_total_relation_size(quote_ident(schemaname) || '.' || quote_ident(tablename)) AS total_size
                 FROM pg_tables) AS TABLES where tablename = 'plop';
 schemaname | tablename | size_pretty | total_size_pretty
------------+-----------+-------------+-------------------
 public     | plop      | 792 kB      | 1304 kB
(1 row)

I don’t understand why the indexes grow after an update that only touches a
non-indexed column. Is this something someone can explain?


Every tuple is now at a different location in the table (remember that an
update in PostgreSQL is more like a delete plus an insert). So even if
the value of the column doesn't change, its tuple location changes, so
the index needs to be updated to reflect that change.



--
Guillaume.



If you execute
  vacuum full plop;
you will see the size shrink back as the dead tuples will have been removed.

Chris
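
A small sketch of how the same effect can be watched from the statistics views
(not part of the original replies; it assumes the plop table from the example
above):

-- per-index size of the indexes on plop (run before and after the update)
select indexrelname,
       pg_size_pretty(pg_relation_size(indexrelid)) as index_size
  from pg_stat_user_indexes
 where relname = 'plop';

-- n_tup_hot_upd counts heap-only-tuple (HOT) updates, i.e. updates that did
-- not have to touch any index; n_dead_tup shows the dead row versions that
-- plain vacuum or vacuum full will reclaim
select n_tup_upd, n_tup_hot_upd, n_dead_tup
  from pg_stat_user_tables
 where relname = 'plop';

Because the freshly loaded pages are full, few (if any) of the updates can be
HOT, which is why both indexes grow even though col_c itself is not indexed.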



help flattening json

2020-09-27 Thread Chris Stephens
postgresql version: 12

I can accomplish this procedurally fairly easily but would like to do this
strictly in SQL.

jsondb=# select jsonb_typeof(jsonb_path_query(vdl_json,'$.tables[*]')) from
vdl_json2;
 jsonb_typeof
--
 object
 object
 object
 object
 object
 object
(6 rows)

jsondb=# select jsonb_pretty(jsonb_path_query(vdl_json,'$.tables[1]')) from
vdl_json2;
jsonb_pretty

 { +
 "name": "AMP_DATA",   +
 "quoted": true,   +
 "columns": [  +
 { +
 "field": "WELL",  +
 "selected": true, +
 "displayName": "Well" +
 },+
 { +
 "field": "WELL_POSITION", +
 "selected": true, +
 "displayName": "Well Position"+
 },+
 { +
 "field": "CYCLE_NUMBER",  +
 "selected": true, +
 "displayName": "Cycle Number" +
 },+
 { +
 "field": "TARGET",+
 "selected": true, +
 "displayName": "Target"   +
 },+
 { +
 "field": "RN",+
 "selected": true, +
 "displayName": "Rn"   +
 },+
 { +
 "field": "DRN",   +
 "selected": true, +
 "displayName": "dRn"  +
 },+
 { +
 "field": "SAMPLE",+
 "selected": true, +
 "displayName": "Sample"   +
 },+
 { +
 "field": "OMIT",  +
 "selected": true, +
 "displayName": "Omit" +
 } +
 ],+
 "labeled": false, +
 "options": {  +
 },+
 "displayName": "Amplification Data",  +
 "sortedColumns": [+
 ],+
 "analysisModule": "primary"   +
 }
(1 row)

I would like to end up with (name text, field text[]).

I can get the field array when I manually filter on name:

jsondb=# select jsonb_path_query_array(vdl_json,'$.tables ? (@.name ==
"RAW_DATA").columns.field') fields
  from vdl_json2;
  fields
---
 ["WELL", "WELL_POSITION", "CYCLE_NUMBER"]
(1 row)

I can get the text of names:

jsondb=# \e
   name
--
 "RESULT"
 "AMP_DATA"
 "MULTICOMPONENT"
 "RAW_DATA"
 "REPLICATE_GROUP_RESULT"
 "WELL_CALL"
(6 rows)

I think I should be able to do this with a lateral join but I can't seem to
get it right. Something like:

jsondb=# select tnames.tname, tfields_arr.* from
(select jsonb_path_query(vdl_json,'$.tables[*].name') as tname
  from vdl_json2) tnames,
lateral
(select jsonb_path_query_array(vdl_json,'$.tables ? (@.name ==
tnames.tname).columns.field') as tfields
  from vdl_json2) tfields_arr;
ERROR:  syntax error at or near " "
LINE 6:   from vdl_json2) tfields_arr;

Any json folks out there willing to help out?

I'd also like to get records of "(name text, field text, selected text,
displayName text)" but I haven't started on that one yet.

Any help is greatly appreciated!
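
One possible way to do this purely in SQL (an untested sketch, not a reply
from the thread; the only names assumed are the vdl_json2 table and vdl_json
column already used above): expand $.tables[*] once with a lateral
jsonb_path_query() and aggregate the column fields, instead of splicing
tnames.tname into the jsonpath string. A jsonpath literal cannot reference
SQL columns; if you want to parameterise the path, jsonb_path_query_array()
takes a separate vars argument.

-- (name text, field text[]): one row per table object, array order preserved
select t.tbl ->> 'name'                            as name,
       array_agg(c.col ->> 'field' order by c.ord) as field
  from vdl_json2 v
       cross join lateral jsonb_path_query(v.vdl_json, '$.tables[*]') as t(tbl)
       cross join lateral jsonb_array_elements(t.tbl -> 'columns')
                          with ordinality as c(col, ord)
 group by t.tbl;

-- flat records (name, field, selected, displayName), one row per column entry
select t.tbl ->> 'name'        as name,
       c.col ->> 'field'       as field,
       c.col ->> 'selected'    as selected,
       c.col ->> 'displayName' as "displayName"
  from vdl_json2 v
       cross join lateral jsonb_path_query(v.vdl_json, '$.tables[*]') as t(tbl)
       cross join lateral jsonb_array_elements(t.tbl -> 'columns')    as c(col);

-- the original per-name filter, with the name supplied as a jsonpath variable
-- rather than spliced into the path string
select jsonb_path_query_array(
           vdl_json,
           '$.tables ? (@.name == $tname).columns.field',
           jsonb_build_object('tname', 'RAW_DATA')) as fields
  from vdl_json2;

Note that the group by t.tbl relies on jsonb equality, so two table objects
collapse into one row only if they are completely identical.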


Re: PostgreSQL processes use large amount of private memory on Windows

2020-10-12 Thread Chris Sterritt



On 17/09/2020 15:06, Tom Lane wrote:

Øystein Kolsrud writes:

So my question is: When does a postgres process forked for a connection use
private memory instead of shared, and what can I do to avoid this?

The only significant long-term consumption of private memory is for
caches.  There are catalog caches, which can get large if the session
accesses a whole lot of database objects (e.g., thousands of different
tables).  Some of the PLs maintain caches with parsed versions of any
function that's been executed.  (An ex-employer of mine had a lot of
trouble in that regard, because they had hundreds of thousands of lines
worth of plpgsql functions.)  There isn't any user-accessible knob for
limiting the size of those caches.  If you have a problem of that sort,
about the only way to mitigate it is to use fewer backends so that the
total memory consumption stays in bounds, or redesign your application.
In some cases it might help to restart your sessions when they get too
big, but that seems like at best a band-aid.

regards, tom lane



Would executing DISCARD ALL release the PL cache?

Regards, Chris Sterritt




Re: Drop column constraint

2020-10-30 Thread Chris Sterritt



On 30/10/2020 15:30, Rich Shepard wrote:
A table has a unique constraint on a column that needs removing. Reading the
postgres-12.x docs for alter table it appears the correct syntax is:

alter table locations drop constraint unique;

but this is wrong.

Trying 'alter table locations alter column loc_nbr drop constraint unique;' also
failed.

What's the proper syntax to drop the unique constraint on a table column?

TIA,

Rich




You need

alter table locations drop constraint <constraint_name>;


Cheers, Chris Sterritt
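
If the constraint name is not known, it can be looked up first. A hedged
example (locations and loc_nbr come from the thread; the constraint name shown
is only the usual auto-generated form and may differ):

-- find the unique constraint(s) on the locations table
select conname
  from pg_constraint
 where conrelid = 'locations'::regclass
   and contype = 'u';

-- then drop it by name, e.g. if it carries the default generated name:
alter table locations drop constraint locations_loc_nbr_key;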




