Re: Shared memory error

2020-06-04 Thread Sonam Sharma
Can someone please help on this

On Wed, Jun 3, 2020, 11:27 PM Sonam Sharma  wrote:

> I have set dynamic_shared_memory_type to none; it was previously set to POSIX.
> For now we are not seeing the error... Can someone please explain the
> impact of setting it to none?
>
> On Wed, Jun 3, 2020, 8:28 PM Sonam Sharma  wrote:
>
>> Hi Team,
>>
>> I am getting the error below when any query runs from the JDBC side:
>>
>> Error : could not open memory segment "/PostgreSQL.1511679208" : No such
>> file or directory
>>
>> Yesterday I extended the /dev/shm filesystem and the query ran. But
>> today the issue is repeating again.
>>
>> Can someone please help
>>
>


Re: Shared memory error

2020-06-04 Thread Naresh g
Hi,

If dynamic_shared_memory_type is POSIX, then the dynamic shared memory
control segment (in your case PostgreSQL.1511679208) is stored in /dev/shm.

The file you mentioned is created at PostgreSQL startup and removed
automatically on a clean shutdown.
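
You can check what your server is set to with:

    SHOW dynamic_shared_memory_type;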

A complete case study describing your error can be found here:


https://postgreshelp.com/postgresql-dynamic-shared-memory-posix-vs-mmap/

Regards

On Thu, Jun 4, 2020 at 12:39 PM Sonam Sharma  wrote:

> Can someone please help on this
>
> On Wed, Jun 3, 2020, 11:27 PM Sonam Sharma  wrote:
>
>> I have set dynamic_shared_memory_type to none; it was previously set to POSIX.
>> For now we are not seeing the error... Can someone please explain the
>> impact of setting it to none?
>>
>> On Wed, Jun 3, 2020, 8:28 PM Sonam Sharma  wrote:
>>
>>> Hi Team,
>>>
>>> I am getting the error below when any query runs from the JDBC side:
>>>
>>> Error : could not open memory segment "/PostgreSQL.1511679208" : No such
>>> file or directory
>>>
>>> Yesterday I extended the /dev/shm filesystem and the query ran. But
>>> today the issue is repeating again.
>>>
>>> Can someone please help
>>>
>>


Re: Shared memory error

2020-06-04 Thread Thomas Munro
On Thu, Jun 4, 2020 at 6:18 AM Sonam Sharma  wrote:
>>> 1) Postgres version : 11.2

FYI This is missing over a year's worth of bugfixes.  That said, I
don't know of anything fixed that has this symptom.

>>> 4) Is this only with one query and if so what is it doing? : No, a few
>>> queries work and a few fail... mostly all are complex select queries.

It's a long shot, because I expect you're running the usual packages
under a user set up by the package to be a "system" user, but just in
case: there is one known way for POSIX shm to be unlinked while you're
still using it: systemd.  For example, this might happen if you have
cronjobs running as the same user.  Some more info:

https://www.postgresql.org/docs/11/kernel-resources.html#SYSTEMD-REMOVEIPC
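
(If systemd's RemoveIPC behaviour does turn out to be the culprit, the fix
described on that page is to set

    RemoveIPC=no

in /etc/systemd/logind.conf and restart systemd-logind.)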

Do you see any other errors around this one, in the PostgreSQL logs?




Re: select count(id) on RDS replica causing high CPU load on RDS master

2020-06-04 Thread Azul
Thanks Jeremy,

An upgrade to 10 is on the cards for this week or next, and then later on
towards v12.
hot_standby_feedback is enabled, yes.
The process list only shows high CPU usage for the postgres process
itself; all other ones are < 1.5%.
It's a bit baffling, as I don't have anything heavy running on the master
and all the heavy reads have been directed to the standby.
As soon as a large query hits the standby, the load goes up on the master,
but there's no correlation with IO, memory or anything else on the master;
it's just CPU load.

- azul

On Wed, 3 Jun 2020 at 14:59, Jeremy Schneider  wrote:

> On 6/3/20 04:04, Azul wrote:
> > so I have an RDS 9.5 master 16vcpu/64GB RAM/5000 IOPS which is RDS
> > replicating to a replica with the same specs.
>
> If you're not already making plans to upgrade this, many people would
> strongly urge you start now. The final release for 9.5 is less than a
> year away, after which you will no longer be able to get security
> updates or bug fixes.
>
> https://www.postgresql.org/support/versioning/
>
> Please consider moving in the direction of v12.
>
> Just for posterity in case someone googles this email thread in the
> future, an RDS "read replica" is a "hot standby" in PostgreSQL terminology.
>
> https://www.postgresql.org/docs/9.5/high-availability.html
>
>
> > I am running the above on the replica to avoid causing an extra load on
> > the master, that query takes a long time (let's ignore the fact that it
> > badly needs an analyse to finish), roughly an hour or so.
>
> Just curious, did you happen to enable the hot_standby_feedback
> parameter? Looks to me like it's off by default in 9.4.
>
>
> > Now what is baffling me is the CPU load on the master goes up steadily
> > all the way to 100% while this select count is running on the slave.
> > Worth mentioning that CPU on the slave increases by about 10% or so.
>
>
> Which PIDs/processes are using the CPU? (Vacuum? App connections running
> queries?) For people who manage PostgreSQL themselves, they'd use normal
> unix utilities like top, ps, etc. On RDS you want to enable "Enhanced
> Monitoring" and check the "process list". (In the web console, as of
> last time I checked: go to the database, choose the "Monitoring" tab,
> click the dropdown box at the top right that says "Monitoring" and
> choose "OS Process List".)
>
> If you see that it's user sessions, then you can connect with psql at
> the same time as the high activity and query the contents of
> pg_stat_activity to get a little more information about what the
> particular process is doing.
>
>
> --
> Jeremy Schneider
> Database Engineer
> Amazon Web Services
>


Calculate hardware requirements

2020-06-04 Thread Praveen Kumar K S
Hello All,

I'm looking for hardware configurations to set up 1 master and 2
hot-standby slaves using 9.6 in one DC. Also, I need to have DR with the
same setup with cross-site replication enabled. I went through a lot of
docs/blogs suggesting 4 cores and at least 4/8GB RAM. But I'm looking for
help on how exactly one can justify the hardware requirements, like a
formula? Please advise.

Regards,
PK


Re: Calculate hardware requirements

2020-06-04 Thread Fabio Pardi
Hi,

In my opinion your question is too generic to get an accurate answer.
Educating yourself by reading the Postgres docs or some good books would be
the best way to answer your own question. Then you can still post to the ML
about specific settings (the postgres-performance ML is the best place).

Much of the requirements depend on the expected load on the database and the
kind of usage, such as OLTP or DWH/BI. The database size is also important to
fit in the picture.
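
For example, you can check the size of the current database with:

    SELECT pg_size_pretty(pg_database_size(current_database()));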

As a rule of thumb, you want all your installations to be identical in terms of
hardware specs. The CPU should be able to serve your queries and your clients,
so you must have enough cores to serve the expected number of connections
without degrading performance.

About RAM, the more the better; if you have enough to fit your db (or the
part of your db you actually use) in RAM, you will probably avoid many
disk-performance problems.

Do not forget disks, RAID controllers, networking, SLA, SLO, HA, DR..

OT: I would use a newer Postgres than 9.6 if I were you, unless you have good
reasons to stick with 9.6.


regards,

fabio pardi




On 04/06/2020 11:36, Praveen Kumar K S wrote:
> Hello All,
>
> I'm looking for hardware configurations to set up 1 master and 2 hot-standby 
> slaves using 9.6 in one DC. Also, I need to have DR with the same setup with 
> cross-site replication enabled. I went through a lot of docs/blogs suggesting 
> 4 cores and at least 4/8GB RAM. But I'm looking for help on how exactly one 
> can justify the hardware requirements, like a formula? Please advise.
>
> Regards,
> PK



Re: Calculate hardware requirements

2020-06-04 Thread Praveen Kumar K S
Thank you for the detailed reply. I understand my question is generic, but I
just thought I might get some pointers on a good place to start. I will look
into the suggestions you made.

On Thu, Jun 4, 2020 at 4:02 PM Fabio Pardi  wrote:

> Hi,
>
> In my opinion your question is too generic to get an accurate answer.
> Educating yourself by reading the Postgres docs or some good books would be
> the best way to answer your own question. Then you can still post to the ML
> about specific settings (the postgres-performance ML is the best place).
>
> Much of the requirements depend on the expected load on the database and
> the kind of usage, such as OLTP or DWH/BI. The database size is also
> important to fit in the picture.
>
> As a rule of thumb, you want all your installations to be identical in terms
> of hardware specs. The CPU should be able to serve your queries and your
> clients, so you must have enough cores to serve the expected number of
> connections without degrading performance.
>
> About RAM, the more the better; if you have enough to fit your db (or
> the part of your db you actually use) in RAM, you will probably avoid many
> disk-performance problems.
>
> Do not forget disks, RAID controllers, networking, SLA, SLO, HA, DR..
>
> OT: I would use a newer Postgres than 9.6 if I were you, unless you have
> good reasons to stick with 9.6.
>
>
> regards,
>
> fabio pardi
>
>
>
>
> On 04/06/2020 11:36, Praveen Kumar K S wrote:
>
> Hello All,
>
> I'm looking for hardware configurations to set up 1 master and 2
> hot-standby slaves using 9.6 in one DC. Also, I need to have DR with the
> same setup with cross-site replication enabled. I went through a lot of
> docs/blogs suggesting 4 cores and at least 4/8GB RAM. But I'm looking for
> help on how exactly one can justify the hardware requirements, like a
> formula? Please advise.
>
> Regards,
> PK
>
>
>

-- 
Regards,
K S Praveen Kumar
M: +91-9986855625


Re: Shared memory error

2020-06-04 Thread Thomas Munro
>> Do you see any other errors around this one, in the PostgreSQL logs?
> No, this is the only error from the db and jdbc end... and queries are failing

If you need a workaround right now you could always set
max_parallel_workers_per_gather=0 so that it doesn't try to use
parallel query.  That could make some queries run more slowly.
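
For example, per session or in postgresql.conf:

    SET max_parallel_workers_per_gather = 0;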

When you say you extended the /dev/shm filesystem, what are the mount
options, or how did you configure it?  It's interesting that the
problem went away for a while when you did that.
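
(E.g. something like

    mount -o remount,size=8G /dev/shm

or a size= option in /etc/fstab?  The size here is only an example.)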




Re: Shared memory error

2020-06-04 Thread Sonam Sharma
The dynamic_shared_memory_type was set to POSIX, because of which it was
using the tmpfs /dev/shm. When the query was running I saw the file system
filling up. So I extended the file system and luckily the query worked that
time

On Thu, Jun 4, 2020, 5:12 PM Thomas Munro  wrote:

> >> Do you see any other errors around this one, in the PostgreSQL logs?
> > No, this is the only error from the db and jdbc end... and queries are
> failing
>
> If you need a workaround right now you could always set
> max_parallel_workers_per_gather=0 so that it doesn't try to use
> parallel query.  That could make some queries run more slowly.
>
> When you say you extended the /dev/shm filesystem, what are the mount
> options, or how did you configure it?  It's interesting that the
> problem went away for a while when you did that.
>


Re: Can we get SQL Server-like cross database queries

2020-06-04 Thread Matt Zagrabelny
On Thu, Jun 4, 2020 at 12:56 AM Laurenz Albe 
wrote:

> On Thu, 2020-06-04 at 16:41 +1200, Thomas Munro wrote:
> > There's no doubt it's useful, and it's also part of the SQL spec,
> > which says you can do catalog.schema.table.  I would guess that we
> > might get that as a byproduct of any project to make PostgreSQL
> > multithreaded.  That mountain moving operation will require us to get
> > rid of all the global state that currently ties a whole process to one
> > session and one database, and replace it with heap objects with names
> > like Session and Database that can be passed around between worker
> > threads.
>
> I am -1 on cross-database queries.
>
> I think it is a desirable feature to have databases isolated from
> each other, so you don't have to worry about a permission you forgot
> that allows somebody to access a different database.
>

Perhaps make it a new right that can be granted, and have it disabled by
default.

Superusers could have it by default.

ALTER USER foo WITH CROSSDB | NOCROSSDB

Obviously there is much more to flesh out than this.

-m


Re: When to use PARTITION BY HASH?

2020-06-04 Thread Jeff Janes
On Wed, Jun 3, 2020 at 7:55 AM Oleksandr Shulgin <
oleksandr.shul...@zalando.de> wrote:

> With hash partitioning you are not expected, in general, to end up with a
> small number of partitions being accessed more heavily than the rest.  So
> your indexes will also not fit into memory.
>
> I have the feeling that using a hash function to distribute rows simply
> contradicts the basic assumption of when you would think of partitioning
> your table at all: that is to make sure the most active part of the table
> and indexes is small enough to be cached in memory.
>

While hash partitioning doesn't appeal to me, I think this may be overly
pessimistic.  It would not be all that unusual for your customers to take
turns being highly active and less active.  Especially if you do occasional
bulk loads all with the same customer_id for any given load, for example.
So while you might not have a permanently hot partition, you could have
partitions which are hot in turn.  Of course you could get the same benefit
(and probably better) with list or range partitioning rather than hash, but
then you have to maintain those lists or ranges when you add new customers.

Cheers,

Jeff


Re: Fine grained permissions on User Mapping

2020-06-04 Thread Paul Bonaud
Hi again Tom,

I re-read your point 2. “You don't want to grant USAGE on the foreign
server to the localuser, either.” to find out this was exactly the solution
I was looking for. That is: it's fine to not let the basic user create the
foreign tables.

Wow, it was as easy as moving the foreign table creation up to a higher
"admin" level and giving only classical "select" grants to my local "basic"
user. That's it, it works! When the basic user tries to list the existing
user mappings of the database, the "FDW options" column is now empty, thus
not revealing the remote server's username and password.

Thank you very much !


On Wed, 3 Jun 2020 at 22:22, Adrian Klaver 
wrote:

> On 6/3/20 4:11 AM, Paul Bonaud wrote:
> > Hi Tom,
> >
> > Thank you very much for your answer.
> >
> > I was worried to get this kind of solution, i.e. “don't be so miserly as
> > not to create a separate one for each privilege level you need.”,
> > however in the case of a remote database *you have no control over* it
> > sounds pretty impossible to do.
> >
> > If I understand correctly, my initial question doesn't have a solution
> > within postgres, does this sound right?
>
> As it stands now I can't think of one. You might reach out to the
> postgres_fdw folks and see if they could get it to use a service file:
>
> https://www.postgresql.org/docs/12/libpq-pgservice.html
>
> Then the user mapping could use information the end user can't see
> unless they had permissions on the file system.
>
> >
> > Thanks again !
> > Paul
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


PostgreSQL 11 with SSL on Linux

2020-06-04 Thread Susan Joseph
 Can someone tell me if there is a yum version of PostgreSQL 11 that can be 
installed on Linux that has SSL enabled?  Currently the only way I have gotten 
SSL turned on in PostgreSQL is by doing the following commands:
   
   - tar xzvf /tmp/postgresql-11.2.tar.gz -C /data
   - mv /data/postgresql-11.2 /data/pgsql
   - cd /data/pgsql
   - ./configure --prefix=/data/pgsql --without-readline --without-zlib 
--with-openssl >> conf.log
   - make
   - make install
Thanks




Re: Can we get SQL Server-like cross database queries

2020-06-04 Thread David G. Johnston
On Wednesday, June 3, 2020, Laurenz Albe  wrote:

> On Thu, 2020-06-04 at 16:41 +1200, Thomas Munro wrote:
> > There's no doubt it's useful, and it's also part of the SQL spec,
> > which says you can do catalog.schema.table.  I would guess that we
> > might get that as a byproduct of any project to make PostgreSQL
> > multithreaded.  That mountain moving operation will require us to get
> > rid of all the global state that currently ties a whole process to one
> > session and one database, and replace it with heap objects with names
> > like Session and Database that can be passed around between worker
> > threads.
>
> I am -1 on cross-database queries.
>
> I think it is a desirable feature to have databases isolated from
> each other, so you don't have to worry about a permission you forgot
> that allows somebody to access a different database.
>
> I think this is particularly relevant since all databases share the
> same users.
>
> I understand that sometimes the opposite would be desirable, but
> foreign data wrappers have alleviated that pain.
>

I agree with the conclusion but not so much with the premise.  Even with
global users you still need to grant permissions to individual databases,
and it's debatable whether it's "more safe" to prevent a user from directly
accessing a database in the "catalog.schema" reference manner if they can
do so with a direct login.

I agree with the general premise that modularity and isolation are
generally positive qualities, especially as scale grows, and that expending
considerable resources strictly for the goal of adding this capability to
the system is not a direction that I would be in favor of.  Now, if the
prereqs for this feature also have other concrete benefits that are worth
working toward, and in the end the sum of those makes cross-database
queries a relatively simple matter, I would entertain putting in the last
10% of effort to become standard compliant.

David J.


Re: PostgreSQL 11 with SSL on Linux

2020-06-04 Thread Magnus Hagander
On Thu, Jun 4, 2020 at 5:01 PM Susan Joseph  wrote:

> Can someone tell me if there is a yum version of PostgreSQL 11 that can be
> installed on Linux that has SSL enabled?  Currently the only way I have
> gotten SSL turned on in PostgreSQL is by doing the following commands:
>
>
>- *tar xzvf /tmp/postgresql-11.2.tar.gz -C /data*
>- *mv /data/postgresql-11.2 /data/pgsql*
>- *cd /data/pgsql*
>- *./configure --prefix=/data/pgsql --without-readline --without-zlib
>--with-openssl >> conf.log*
>- *make*
>- *make install*
>
>
All the PostgreSQL versions available on yum from the postgresql.org site
have SSL enabled. Just install using the instructions from
https://www.postgresql.org/download/.

-- 
 Magnus Hagander
 Me: https://www.hagander.net/ 
 Work: https://www.redpill-linpro.com/ 


Index no longer being used, destroying and recreating it restores use.

2020-06-04 Thread Koen De Groote
Greetings,

The following is using version 11.2 of PostgreSQL.

I've got a table with about 30 million rows and a particular index that up
until recently was actively being used.

And then it stopped being used and the query that the index was made for,
is now doing sequential scans.

Deleting the index and creating it again, seems to fix the problem. The new
index, which is identical in composition, is being used and the query in
question no longer uses sequential scans.


It's the exact same query and the index is identical in composition. Yet
after a while the database stops using it. I'd like to find out why that is
and how to prevent it.


Also, I'm assuming this is the correct list for such a question?


In the past, I had asked this somewhere else, but then no longer had time
to spend on it: https://dba.stackexchange.com/questions/264237/

Some data I gathered then:

   1. Size of the index not being used is 101MB.
   2. Size of the index being used is 16MB.

The query takes the form of:

"select * from myTable where bool1 = true and bool2 = false and timestamp
<= ('timestampField'::timestamp without time zone) order by stringField
asc, id asc limit 100 offset 3;"

3 is an example value.
-

Here is the "explain analyze" for index used:
https://explain.depesz.com/s/H5X9y
-

Here is the "explain analyze" for index not used:
https://explain.depesz.com/s/n6bP

And I'm frankly stumped. An index growing from 16MB to 101MB isn't that big
of an increase, I would think? Is that the reason it's no longer being
used? Or is something else going on here?

The entire database, in which this table belongs, undergoes a "vacuum
analyze" every single night, which takes about 8 minutes. Do I perhaps need
to do something additional in terms of cleanup/maintenance?

I've tried altering statistics, to very large values even, but no changes
there either.
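
Concretely, I mean statements of this shape, followed by a fresh ANALYZE
(the column name is just an example):

    ALTER TABLE myTable ALTER COLUMN stringField SET STATISTICS 1000;
    ANALYZE myTable;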

Any help or suggestion would be appreciated.

Kind regards,
Koen De Groote


Re: Index no longer being used, destroying and recreating it restores use.

2020-06-04 Thread Adrian Klaver

On 6/4/20 8:37 AM, Koen De Groote wrote:

Greetings,

The following is using version 11.2 of PostgreSQL.

I've got a table with about 30 million rows and a particular index that 
up until recently was actively being used.


And then it stopped being used and the query that the index was made 
for, is now doing sequential scans.


Did you ANALYZE the table at that point?



Deleting the index and creating it again, seems to fix the problem. The 
new index, which is identical in composition, is being used and the 
query in question no longer uses sequential scans.



It's the exact same query and the index is identical in composition. Yet 
after a while the database stops using it. I'd like to find out why that 
is and how to prevent it.



Also, I'm assuming this is the correct list for such a question?


In the past, I had asked this somewhere else, but then no longer had 
time to spend on it: https://dba.stackexchange.com/questions/264237/


Some data I gathered then:

 1. Size of the index not being used is 101MB.
 2. Size of the index being used is 16MB.

The query takes the form of:

"select * from myTable where bool1 = true and bool2 = false and 
timestamp <= ('timestampField'::timestamp without time zone) order by 
stringField asc, id asc limit 100 offset 3;"


3 is an example value.
#

Here is the "explain analyze" for index used: 
https://explain.depesz.com/s/H5X9y


#

Here is the "explain analyze" for index not used: 
https://explain.depesz.com/s/n6bP



And I'm frankly stumped. An index growing from 16MB to 101MB isn't that 
big of an increase, I would think? Is that the reason it's no longer 
being used? Or is something else going on here?


The entire database, in which this table belongs, undergoes a "vacuum 
analyze" every single night, which takes about 8 minutes. Do I perhaps 
need to do something additional in terms of cleanup/maintenance?


I've tried altering statistics, to very large values even, but no 
changes there either.


Any help or suggestion would be appreciated.

Kind regards,
Koen De Groote




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Index no longer being used, destroying and recreating it restores use.

2020-06-04 Thread Tom Lane
Koen De Groote  writes:
> I've got a table with about 30 million rows and a particular index that up
> until recently was actively being used.
> And then it stopped being used and the query that the index was made for,
> is now doing sequential scans.
> Deleting the index and creating it again, seems to fix the problem. The new
> index, which is identical in composition, is being used and the query in
> question no longer uses sequential scans.

It's possible that the index had bloated to the point where the planner
thought it was cheaper to use a seqscan.  Did you make a note of the
cost estimates for the different plans?

regards, tom lane




Re: Index no longer being used, destroying and recreating it restores use.

2020-06-04 Thread Adrian Klaver

On 6/4/20 9:43 AM, Tom Lane wrote:

Koen De Groote  writes:

I've got a table with about 30 million rows and a particular index that up
until recently was actively being used.
And then it stopped being used and the query that the index was made for,
is now doing sequential scans.
Deleting the index and creating it again, seems to fix the problem. The new
index, which is identical in composition, is being used and the query in
question no longer uses sequential scans.


It's possible that the index had bloated to the point where the planner
thought it was cheaper to use a seqscan.  Did you make a note of the
cost estimates for the different plans?


I missed the part where the OP pointed to a SO question. In that 
question were links to explain.depesz.com output. So:


With index(https://explain.depesz.com/s/H5X9y):

Limit  (cost=5964059.790..5964071.460 rows=100 width=3141) (actual 
time=2534.648..2547.352 rows=100 loops=1)

Buffers: shared hit=30 read=27753
  ->  Gather Merge  (cost=5959707.820..6516383.180 rows=4771170 
width=3141) (actual time=2193.611..2542.835 rows=37400 loops=1)

  Workers Planned: 2
  Workers Launched: 2
  Buffers: shared hit=30 read=27753
->  Sort  (cost=5958707.790..5964671.760 rows=2385585 
width=3141) (actual time=1833.278..1838.731 rows=12528 loops=3)

Sort Key: mike_romeo, quebec
Sort Method: quicksort  Memory: 24449kB
Worker 0:  Sort Method: quicksort  Memory: 28634kB
Worker 1:  Sort Method: quicksort  Memory: 18065kB
Buffers: shared hit=30 read=27753
  ->  Parallel Bitmap Heap Scan on oscar mike_three 
(cost=4282.840..1923544.730 rows=2385585 width=3141) (actual 
time=150.297..659.047 rows=23586 loops=3)
  Recheck Cond: ((zulu <= 'echo'::timestamp without 
time zone) AND four AND (NOT bravo))

  Heap Blocks: exact=9759
  Buffers: shared read=27753
->  Bitmap Index Scan on foxtrot 
(cost=0.000..2851.490 rows=5725405 width=0) (actual 
time=245.459..245.459 rows=70759 loops=1)
Index Cond: (zulu <= 'echo'::timestamp 
without time zone)

Buffers: shared read=2028
Planning time: 437.187 ms
Execution time: 2549.633 ms

W/O index(https://explain.depesz.com/s/n6bP):

Limit  (cost=5975083.560..5975095.230 rows=100 width=3141) (actual 
time=159708.476..159718.368 rows=100 loops=1)

Buffers: shared hit=1082 read=1798654
  ->  Gather Merge  (cost=5970731.590..6527406.950 rows=4771170 
width=3141) (actual time=159309.819..159713.861 rows=37400 loops=1)

  Workers Planned: 2
  Workers Launched: 2
  Buffers: shared hit=1082 read=1798654
->  Sort  (cost=5969731.570..5975695.530 rows=2385585 
width=3141) (actual time=159214.623..159219.579 rows=12531 loops=3)

Sort Key: mike_romeo, quebec
Sort Method: quicksort  Memory: 24459kB
Worker 0:  Sort Method: quicksort  Memory: 22650kB
Worker 1:  Sort Method: quicksort  Memory: 24038kB
Buffers: shared hit=1082 read=1798654
  ->  Parallel Seq Scan on oscar mike_three 
(cost=0.000..1934568.500 rows=2385585 width=3141) (actual 
time=159.800..158018.961 rows=23586 loops=3)
  Filter: (four AND (NOT bravo) AND (zulu <= 
'echo'::timestamp without time zone))

  Rows Removed by Filter: 8610174
  Buffers: shared hit=1014 read=1798652
Planning time: 0.807 ms
Execution time: 159720.208 ms



regards, tom lane





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: PostgreSQL 11 with SSL on Linux

2020-06-04 Thread Susan Joseph
So the other issue I have is that I am running this on a server that is not 
connected to the Internet.  So I have downloaded the RPM file but I can't 
figure out how to install it without it trying to access files on the Internet. 
 Are there other libraries I need to include in my download for this to work on 
a non-networked server?
 
 
 
-Original Message-
From: Magnus Hagander 
To: Susan Joseph 
Cc: pgsql-general@lists.postgresql.org 
Sent: Thu, Jun 4, 2020 11:14 am
Subject: Re: PostgreSQL 11 with SSL on Linux



On Thu, Jun 4, 2020 at 5:01 PM Susan Joseph  wrote:

 Can someone tell me if there is a yum version of PostgreSQL 11 that can be 
installed on Linux that has SSL enabled?  Currently the only way I have gotten 
SSL turned on in PostgreSQL is by doing the following commands:
   
   - tar xzvf /tmp/postgresql-11.2.tar.gz -C /data
   - mv /data/postgresql-11.2 /data/pgsql
   - cd /data/pgsql
   - ./configure --prefix=/data/pgsql --without-readline --without-zlib 
--with-openssl >> conf.log
   - make
   - make install


All the PostgreSQL versions available on yum from the postgresql.org site have 
SSL enabled. Just install using the instructions from 
https://www.postgresql.org/download/.
-- 
 Magnus Hagander
 Me: https://www.hagander.net/
 Work: https://www.redpill-linpro.com/

Monitoring for long running transactions

2020-06-04 Thread Samuel Smith
We had a customer complaining of random data loss for the last 6 months 
or so. We eventually tracked it down to a combination of bad coding and 
a couple of bugs with the ORM. Basically, 'BEGIN' was being emitted by 
the web app when viewing a certain page and 'COMMIT' was never emitted 
after that. So once the app would get restarted, all data changes would 
be lost. Definitely worst case scenario.


So the question is, what is the best way to monitor for this scenario 
going forward? Are there any plugins or community recommended scripts 
already made?


Regards,






Re: Index no longer being used, destroying and recreating it restores use.

2020-06-04 Thread Tom Lane
Adrian Klaver  writes:
> On 6/4/20 9:43 AM, Tom Lane wrote:
>> It's possible that the index had bloated to the point where the planner
>> thought it was cheaper to use a seqscan.  Did you make a note of the
>> cost estimates for the different plans?

> I missed the part where the OP pointed to a SO question. In that 
> question were links to explain.depesz.com output. So:

Ah, I didn't bother to chase that link either.

So the cost estimates are only a fraction of a percent apart, making
it unsurprising for not-so-large changes in the index size to cause
a flip in the apparently-cheapest plan.  The real question then is
why the cost estimates aren't actually modeling the real execution
times very well; and I'd venture that that question boils down to
why is this rowcount estimate so far off:

>->  Parallel Seq Scan on oscar mike_three 
> (cost=0.000..1934568.500 rows=2385585 width=3141) (actual 
> time=159.800..158018.961 rows=23586 loops=3)
>Filter: (four AND (NOT bravo) AND (zulu <= 
> 'echo'::timestamp without time zone))
>Rows Removed by Filter: 8610174

We're not going to be able to answer that if the OP doesn't wish
to decloak his data a bit more ... but a reasonable guess is that
those filter conditions are correlated.  With late-model Postgres
you might be able to improve matters by creating extended statistics
for this table.
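
For example, using the anonymized names from the posted plan (substitute
your real table and column names):

    CREATE STATISTICS oscar_stats (dependencies) ON four, bravo, zulu FROM oscar;
    ANALYZE oscar;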

regards, tom lane




Re: PostgreSQL 11 with SSL on Linux

2020-06-04 Thread Tom Lane
Susan Joseph  writes:
> So the other issue I have is that I am running this on a server that is not 
> connected to the Internet.  So I have downloaded the RPM file but I can't 
> figure out how to install it without it trying to access files on the 
> Internet.  Are there other libraries I need to include in my download for 
> this to work on a non-networked server?

Are you using the right install tool?  "rpm -i" shouldn't result in any
nonlocal accesses.  Tools like yum or dnf will, because they're meant to
fetch from nonlocal repositories.

regards, tom lane




Re: PostgreSQL 11 with SSL on Linux

2020-06-04 Thread Susan Joseph
So when I run rpm -i it says that the package is installed, but I can't find a 
postgresql directory with all the files and executables.  So what am I missing?
 
 
 
 
-Original Message-
From: Tom Lane 
To: Susan Joseph 
Cc: mag...@hagander.net ; 
pgsql-general@lists.postgresql.org 
Sent: Thu, Jun 4, 2020 1:11 pm
Subject: Re: PostgreSQL 11 with SSL on Linux

Susan Joseph  writes:
> So the other issue I have is that I am running this on a server that is not 
> connected to the Internet.  So I have downloaded the RPM file but I can't 
> figure out how to install it without it trying to access files on the 
> Internet.  Are there other libraries I need to include in my download for 
> this to work on a non-networked server?

Are you using the right install tool?  "rpm -i" shouldn't result in any
nonlocal accesses.  Tools like yum or dnf will, because they're meant to
fetch from nonlocal repositories.

            regards, tom lane



Re: PostgreSQL 11 with SSL on Linux

2020-06-04 Thread Alan Hodgson
On Thu, 2020-06-04 at 17:32 +, Susan Joseph wrote:
> So when I run rpm -i it says that the package is installed, but I
> can't find a postgresql directory with all the files and executables. 
> So what am I missing?
> 
> 
>  
> 
> 
> 

The server stuff is in postgresql11-server if you're using the community
rpms.


Re: PostgreSQL 11 with SSL on Linux

2020-06-04 Thread Tom Lane
Susan Joseph  writes:
> So when I run rpm -i it says that the package is installed, but I can't find 
> a postgresql directory with all the files and executables.  So what am I 
> missing?

Looking in the right place, perhaps ;-).  If the package is installed
then "rpm -ql" will list all the files it installed.

There are a couple of different layouts that different packagers like to
use, so you might be accustomed to something different than what this
particular package used.

Related to that is that there are almost always sub-packages; it's
likely that in addition to an rpm named just postgresql-NNN you need
postgresql-server-NNN, and maybe some other bits.

regards, tom lane




Re: PostgreSQL 11 with SSL on Linux

2020-06-04 Thread Susan Joseph
 OK, when I went to PostgreSQL to get the rpm file to install I was given:
yum install 
https://download.postgresql.org/pub/repos/yum/reporpms/EL-7-x86_64/pgdg-redhat-repo-latest.noarch.rpm


I put the file  pgdg-redhat-repo-latest.noarch.rpm in my /tmp directory and ran 
rpm -i on the file.

If I run rpm -qa | grep postgres

I get postgresql-libs-9.2.24-4.el7_8.x86_64


When I run:   rpm -ql postgresql-libs-9.2.24-4.el7_8.x86_64  I just get a 
list of libraries. 



I am not an SA, I am more of an engineer, and I have been working with the 
PostgreSQL I installed by building it, so I'm not really used to RPM.  I have 
used YUM, but that was with a connected server.

How do I find all the packages that I need to have to do an install locally?

 
-Original Message-
From: Alan Hodgson 
To: pgsql-general@lists.postgresql.org
Sent: Thu, Jun 4, 2020 1:36 pm
Subject: Re: PostgreSQL 11 with SSL on Linux

On Thu, 2020-06-04 at 17:32 +, Susan Joseph wrote:
So when I run rpm -i it says that the package is installed, but I can't find a 
postgresql directory with all the files and executables.  So what am I missing?
 


The server stuff is in postgresql11-server if you're using the community rpms.


Re: PostgreSQL 11 with SSL on Linux

2020-06-04 Thread Tom Lane
Susan Joseph  writes:
>  OK, when I went to PostgreSQL to get the rpm file to install I was given:
> yum install 
> https://download.postgresql.org/pub/repos/yum/reporpms/EL-7-x86_64/pgdg-redhat-repo-latest.noarch.rpm
> I put the file  pgdg-redhat-repo-latest.noarch.rpm in my /tmp directory and 
> ran rpm -i on the file.

I think you've got a fundamental misunderstanding here.  That package is
not Postgres; it's just a config file that tells your dnf/yum installation
where it can download community Postgres RPMs from.

If you want to do this on a machine without an internet connection,
you need to get the actual Postgres RPMs from that repo and install
them manually with "rpm -i".  I don't recall which subdirectory on
download.postgresql.org has what you want, but if you can't find it
by browsing, a look into the installed yum config file should help.

> If I run rpm -qa | grep postgres
> I get postgresql-libs-9.2.24-4.el7_8.x86_64

This is a postgresql-libs package from Red Hat, not from the community.
There's nothing particularly wrong with Red Hat's packaging, except
that it tends to be well behind community releases because that's
what they think the RHEL distribution ought to be.  The main problem
from your perspective is the risk of confusion with the libraries that
community PG11 RPMs will bring in.  You might try a test deinstallation
of that RPM first, and see if it cascades to removing anything you can't
live without.

regards, tom lane




Re: PostgreSQL 11 with SSL on Linux

2020-06-04 Thread Mohammed Bhatti
On Thu, Jun 4, 2020 at 1:50 PM Susan Joseph  wrote:

> OK, when I went to PostgreSQL to get the rpm file to install I was given:
>
> yum install 
> https://download.postgresql.org/pub/repos/yum/reporpms/EL-7-x86_64/pgdg-redhat-repo-latest.noarch.rpm
>
> I put the file  pgdg-redhat-repo-latest.noarch.rpm in my /tmp directory and 
> ran rpm -i on the file.
>
> If I run rpm -qa | grep postgres
>
> I get postgresql-libs-9.2.24-4.el7_8.x86_64
>
>
> When I run:   rpm -ql postgresql-libs-9.2.24-4.el7_8.x86_64  I just get a 
> list of libraries.
>
>
> I am not an SA, I am more of an engineer, and I have been working with the 
> PostgreSQL I installed by building it, so I'm not really used to RPM.  I have 
> used YUM, but that was with a connected server.
>
> How do I find all the packages that I need to have to do an install locally?
>
>
> -Original Message-
> From: Alan Hodgson 
> To: pgsql-general@lists.postgresql.org
> Sent: Thu, Jun 4, 2020 1:36 pm
> Subject: Re: PostgreSQL 11 with SSL on Linux
>
> On Thu, 2020-06-04 at 17:32 +, Susan Joseph wrote:
>
> So when I run rpm -i it says that the package is installed, but I can't
> find a postgresql directory with all the files and executables.  So what am
> I missing?
>
>
> The server stuff is in postgresql11-server if you're using the community
> rpms.
>
> Hi,

I've had to do this in the past and I think you probably want to do this:
1. connect to a machine that has access to repos
2. go here: https://www.postgresql.org/download/linux/redhat/
3. where it says "Direct RPM Download", click on the "direct download" link
4. this takes you here: https://yum.postgresql.org/rpmchart.php
5. click on "11  RHEL/CentOS/Oracle Linux 7 - x86_64" or whatever is the
appropriate version you are looking for
6. this takes you here:
https://yum.postgresql.org/11/redhat/rhel-7-x86_64/repoview/
7. under "Available Groups", click on the " PostgreSQL Database Server 11
PGDG" link
8. which takes you here:
https://yum.postgresql.org/11/redhat/rhel-7-x86_64/repoview/postgresqldbserver11.group.html
9. here, you'll see four packages listed
10. click on each, which takes you to a page where the latest package is
available
11. repeat for all four, download them, and copy them to your offline
server, for example to /tmp
12. then try and do an rpm -ivh /tmp/, so for example rpm -ivh
postgresql11-server-11.8-1PGDG.rhel7.x86_64
13. I believe this should place the postgres stuff in /var/lib/pgsql
14. additional packages can be downloaded as needed

--
mohammed


Re: PostgreSQL 11 with SSL on Linux

2020-06-04 Thread Susan Joseph
OK thanks, I figured I had something messed up in my thought process.  I will 
work through these steps and let you know how it goes.
 Susan
 
 
-Original Message-
From: Mohammed Bhatti 
To: Susan Joseph 
Cc: ahodg...@lists.simkin.ca ; 
pgsql-general@lists.postgresql.org 
Sent: Thu, Jun 4, 2020 2:14 pm
Subject: Re: PostgreSQL 11 with SSL on Linux



On Thu, Jun 4, 2020 at 1:50 PM Susan Joseph  wrote:

 OK, when I went to PostgreSQL to get the rpm file to install I was given:
yum install 
https://download.postgresql.org/pub/repos/yum/reporpms/EL-7-x86_64/pgdg-redhat-repo-latest.noarch.rpm


I put the file  pgdg-redhat-repo-latest.noarch.rpm in my /tmp directory and ran 
rpm -i on the file.

If I run rpm -qa | grep postgres

I get postgresql-libs-9.2.24-4.el7_8.x86_64


When I run:   rpm -ql postgresql-libs-9.2.24-4.el7_8.x86_64  I just get a 
list of libraries. 



I am not an SA, I am more of an engineer, and I have been working with the 
PostgreSQL I installed by building it, so I'm not really used to RPM.  I have 
used YUM, but that was with a connected server.

How do I find all the packages that I need to have to do an install locally?

 
-Original Message-
From: Alan Hodgson 
To: pgsql-general@lists.postgresql.org
Sent: Thu, Jun 4, 2020 1:36 pm
Subject: Re: PostgreSQL 11 with SSL on Linux

On Thu, 2020-06-04 at 17:32 +, Susan Joseph wrote:
So when I run rpm -i it says that the package is installed, but I can't find a 
postgresql directory with all the files and executables.  So what am I missing?
 


The server stuff is in postgresql11-server if you're using the community rpms.

Hi,
I've had to do this in the past and I think you probably want to do this:
1. connect to a machine that has access to repos
2. go here: https://www.postgresql.org/download/linux/redhat/
3. where it says "Direct RPM Download", click on the "direct download" link
4. this takes you here: https://yum.postgresql.org/rpmchart.php
5. click on "11 RHEL/CentOS/Oracle Linux 7 - x86_64" or whatever is the 
appropriate version you are looking for
6. this takes you here: 
https://yum.postgresql.org/11/redhat/rhel-7-x86_64/repoview/
7. under "Available Groups", click on the "PostgreSQL Database Server 11 
PGDG" link
8. which takes you here: 
https://yum.postgresql.org/11/redhat/rhel-7-x86_64/repoview/postgresqldbserver11.group.html
9. here, you'll see four packages listed
10. click on each, which takes you to a page where the latest package is 
available
11. repeat for all four, download them, and copy them to your offline 
server, for example to /tmp
12. then try and do an rpm -ivh /tmp/, so for example rpm -ivh 
postgresql11-server-11.8-1PGDG.rhel7.x86_64
13. I believe this should place the postgres stuff in /var/lib/pgsql
14. additional packages can be downloaded as needed

--
mohammed


Re: Monitoring for long running transactions

2020-06-04 Thread Adrian Klaver

On 6/4/20 10:00 AM, Samuel Smith wrote:
We had a customer complaining of random data loss for the last 6 months 
or so. We eventually tracked it down to a combination of bad coding and 
a couple of bugs with the ORM. Basically, 'BEGIN' was being emitted by 
the web app when viewing a certain page and 'COMMIT' was never emitted 
after that. So once the app would get restarted, all data changes would 
be lost. Definitely worst case scenario.


So the question is, what is the best way to monitor for this scenario 
going forward? Are there any plugins or community recommended scripts 
already made?


https://www.postgresql.org/docs/12/monitoring-stats.html#PG-STAT-ACTIVITY-VIEW

"
state   text    Current overall state of this backend. Possible values are:

active: The backend is executing a query.

idle: The backend is waiting for a new client command.

idle in transaction: The backend is in a transaction, but is not 
currently executing a query.


idle in transaction (aborted): This state is similar to idle in 
transaction, except one of the statements in the transaction caused an 
error.


fastpath function call: The backend is executing a fast-path function.

disabled: This state is reported if track_activities is disabled in 
this backend.

"



Regards,







--
Adrian Klaver
adrian.kla...@aklaver.com




Move configuration files with pg_upgrade

2020-06-04 Thread Tanja Savic
Hello,

I did a PostgreSQL upgrade from v10 to v12 using pg_upgrade (on a Linux Ubuntu
server).
Of course I wanted to keep the settings in pg_hba.conf and postgresql.conf,
but after the upgrade there were new configuration files and I moved the
settings manually.

Is it somehow possible to migrate these old configuration files with
pg_upgrade, or must it be done manually?

Cheers,
Tanja



Re: Monitoring for long running transactions

2020-06-04 Thread Samuel Smith

On 6/4/20 2:29 PM, Adrian Klaver wrote:

On 6/4/20 10:00 AM, Samuel Smith wrote:
We had a customer complaining of random data loss for the last 6 
months or so. We eventually tracked it down to a combination of bad 
coding and a couple of bugs with the ORM. Basically, 'BEGIN' was being 
emitted by the web app when viewing a certain page and 'COMMIT' was 
never emitted after that. So once the app would get restarted, all 
data changes would be lost. Definitely worst case scenario.


So the question is, what is the best way to monitor for this scenario 
going forward? Are there any plugins or community recommended scripts 
already made?


https://www.postgresql.org/docs/12/monitoring-stats.html#PG-STAT-ACTIVITY-VIEW 



"
state text Current overall state of this backend. Possible 
values are:


     active: The backend is executing a query.

     idle: The backend is waiting for a new client command.

     idle in transaction: The backend is in a transaction, but is not 
currently executing a query.


     idle in transaction (aborted): This state is similar to idle in 
transaction, except one of the statements in the transaction caused an 
error.


     fastpath function call: The backend is executing a fast-path function.

     disabled: This state is reported if track_activities is disabled in 
this backend.

"



Regards,










Sorry, I should have clarified that I was aware of the pg_stat_activity 
table. That is how we found the problem in the first place. And yes I 
could just write a bash script and run it in cron. I just didn't know if 
there was a more "official" way to go about this since it is probably a 
common monitoring point and/or if something like this was already made.


Regards,




Re: Shared memory error

2020-06-04 Thread Thomas Munro
On Fri, Jun 5, 2020 at 1:00 AM Sonam Sharma  wrote:
> The dynamic_shared_memory_type was set to POSIX, because of which it was
> using the tmpfs /dev/shm. When the query was running I saw the file system
> filling up. So I extended the file system and luckily the query worked that
> time

Oh, hmm.  When you say "filling", maybe you hit the leaking shm
problem that was fixed in 11.7; that one would have left messages somewhere
in your log about lack of DSM slots.  I don't have an explanation for the
exact error you're seeing, though.  It's a bit tricky to speculate
on older releases with so many bug fixes missing.  Please let
us know if you still see the problem after your next scheduled
upgrade.




Re: Monitoring for long running transactions

2020-06-04 Thread Adrian Klaver

On 6/4/20 12:59 PM, Samuel Smith wrote:

On 6/4/20 2:29 PM, Adrian Klaver wrote:







Sorry, I should have clarified that I was aware of the pg_stat_activity 
table. That is how we found the problem in the first place. And yes I 
could just write a bash script and run it in cron. I just didn't know if 
there was a more "official" way to go about this since it is probably a 
common monitoring point and/or if something like this was already made.


I guess it comes down to how much info you want. A list of monitoring tools:

https://wiki.postgresql.org/wiki/Monitoring#pgcenter

Haven't used it but this one:

https://pgmetrics.io/

looks interesting. From here:

https://pgmetrics.io/docs/

Active backends: transaction running too long, idling in transaction, 
waiting for locks


Information on scripting above:

https://pgmetrics.io/docs/scripting.html




Regards,





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Monitoring for long running transactions

2020-06-04 Thread Christoph Moench-Tegeder
## Samuel Smith (pg...@net153.net):

> Sorry, I should have clarified that I was aware of the pg_stat_activity 
> table. That is how we found the problem in the first place. And yes I 
> could just write a bash script and run it in cron. I just didn't know if 
> there was a more "official" way to go about this since it is probably a 
> common monitoring point and/or if something like this was already made.

This works fine for everything nagios-like:
https://bucardo.org/check_postgres/check_postgres.pl.html#txn_time

Other solutions may be found via the PostgreSQL wiki - as you noted,
this is (should be) monitored, so almost everything which can monitor
PostgreSQL covers this. (which raises the obvious question - did you
monitor your server or did you just use the common "shouting user"
approach? ;))

Regards,
Christoph

-- 
Spare Space




Re: Monitoring for long running transactions

2020-06-04 Thread Thomas Kellerer

Samuel Smith wrote on 04.06.2020 at 21:59:

Sorry, I should have clarified that I was aware of the
pg_stat_activity table. That is how we found the problem in the first
place. And yes I could just write a bash script and run it in cron. I
just didn't know if there was a more "official" way to go about this
since it is probably a common monitoring point and/or if something
like this was already made.


What about setting idle_in_transaction_session_timeout to some reasonably high 
value (15 minutes?)

Then you would get an error in your application because the connection was 
terminated and you would learn early about the problem.
Even with monitoring enabled, you would probably still kill those sessions 
manually as the application most probably can't commit them properly any more.
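
E.g. something like:

    ALTER SYSTEM SET idle_in_transaction_session_timeout = '15min';
    SELECT pg_reload_conf();

(or set it in postgresql.conf, or per role with ALTER ROLE).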

Thomas






Re: Move configuration files with pg_upgrade

2020-06-04 Thread Peter J. Holzer
On 2020-06-04 19:52:19 +, Tanja Savic wrote:
> I did Postgresql upgrade from v10 to v12 using pg_upgrade (on Linux Ubuntu
> server). 

Did you use the packages from the PGDG server or from Ubuntu?

> Of course i wanted settings in pg_hba.conf and postgresql.conf to keep, but
> after the upgrade there were new configuration files and I moved it manually. 

I recently did an upgrade from 9.6 to 12 on a Debian system (using the
PGDG packages - I think they should be the same for Debian and Ubuntu)
and pg_upgrade migrated the existing configuration without problems.
So that's not a general problem - did you get any error messages or
warnings during the upgrade?

hp

-- 
   _  | Peter J. Holzer| Story must make more sense than reality.
|_|_) ||
| |   | h...@hjp.at |-- Charles Stross, "Creative writing
__/   | http://www.hjp.at/ |   challenge!"




Re: Move configuration files with pg_upgrade

2020-06-04 Thread Adrian Klaver

On 6/4/20 12:52 PM, Tanja Savic wrote:

Hello,

I did Postgresql upgrade from v10 to v12 using pg_upgrade (on Linux 
Ubuntu server).
Of course I wanted to keep the settings in pg_hba.conf and postgresql.conf, 
but after the upgrade there were new configuration files and I moved the 
settings manually.


That is documented:

https://www.postgresql.org/docs/12/pgupgrade.html

"
Restore pg_hba.conf

If you modified pg_hba.conf, restore its original settings. It might 
also be necessary to adjust other configuration files in the new cluster 
to match the old cluster, e.g. postgresql.conf.

"

The new version's conf files will probably have new settings available, 
so this is something that should be reviewed before moving over. One way 
to deal with this is include file(s):


https://www.postgresql.org/docs/12/config-setting.html#CONFIG-INCLUDES

Using them you create the settings you want to manage in the include 
file. Then copy that over and set the include 'some_file' in the new 
postgresql.conf and you are good.




Is it possible somehow to migrate these old
configuration files with pg_upgrade or it shall be done manually?

Cheers,
Tanja




--
Adrian Klaver
adrian.kla...@aklaver.com




Binary encoding of TIMESTAMP WITH TIME ZONE

2020-06-04 Thread Joe Abbate
I'm dealing with an issue where a query uses 'today'::date to select one 
of a number of rows depending on the day modulo the number of rows. The 
intent is that different information will be shown starting after 
midnight local time.  The query runs as expected in psql and using psycopg2.


However, when running the same query through the Rust adapter, the transition 
to a new row started showing up after midnight GMT.  I opened an issue 
on Github (https://github.com/sfackler/rust-postgres/issues/608 ) and 
the maintainer claimed the Rust adapter *had* to initialize timezone to 
UTC in order to properly convert "to and from time datatypes".  I 
pointed out that the timezone offset is available in psql and psycopg2, 
but then he replied the binary encoding of timestamptz does *not* 
include the timezone offset.


He pointed me to the function timestamptz_send() which per the comments 
"converts timestamptz to binary format".  I found that the TimestampTz 
used in the function is a typedef for an int64, but since I'm not 
familiar with the code, I can't tell if the timezone offset is embedded in 
there or not.


I'm hoping someone reading this can confirm (or deny) the above (or do I 
need to ask the -hackers list?).


Regards,

Joe




Re: Binary encoding of TIMESTAMP WITH TIME ZONE

2020-06-04 Thread Tom Lane
Joe Abbate  writes:
> However, when running the same query through the Rust adapter, the transition 
> to a new row started showing up after midnight GMT.  I opened an issue 
> on Github (https://github.com/sfackler/rust-postgres/issues/608 ) and 
> the maintainer claimed the Rust adapter *had* to initialize timezone to 
> UTC in order to properly convert "to and from time datatypes".  I 
> pointed out that the timezone offset is available in psql and psycopg2, 
> but then he replied the binary encoding of timestamptz does *not* 
> include the timezone offset.

Indeed it does not, just as the on-disk format for it does not.  The
representation is effectively always in UTC.  If you have some other
timezone setting selected, timestamptz_out rotates to that zone for
display purposes ... but the binary format doesn't.
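
You can see the display-only rotation for yourself (timestamps here are
made up for illustration):

    SET timezone = 'UTC';
    SELECT now();   -- e.g. 2020-06-04 20:00:00+00
    SET timezone = 'America/New_York';
    SELECT now();   -- same instant, shown as 2020-06-04 16:00:00-04

The value sent in binary format is identical in both cases.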

regards, tom lane




Possible improvement

2020-06-04 Thread Paul van der Linden
Hi,

Don't know if this already came up earlier but I have an idea for
improvement.

If I have a query like:

SELECT * FROM (
SELECT
  CASE
  WHEN field='value1' THEN 1
  WHEN field='value2' THEN 2
  END AS category
FROM table1
) AS foo
WHERE category=1

doesn't use the index on field, while technically it could do that.
Is it hard to implement drilling down the constant in the WHERE to within
the CASE?
This is especially convenient with views (inner SELECT) where the category
is some complex list of possibilities and you want to filter (outer SELECT)
on specific categories.
I know a different solution could be creating an index on that CASE, but
(especially if you're experimenting a bit) it can be quite cumbersome to
keep that in sync with the actual query.
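
For reference, by an index on the CASE I mean an expression index along
these lines:

    CREATE INDEX table1_category_idx ON table1 (
        (CASE WHEN field='value1' THEN 1
              WHEN field='value2' THEN 2 END)
    );

which only helps when the query repeats the exact same expression.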

Is this something that could be put on some wishlist? If so where are the
most looked at ones?

Paul

P.S. In replies please use reply to all...


Multitenent architecture

2020-06-04 Thread Vasu Madhineni
Hi All,

We are planning a POC on multitenant architecture in Postgres. Could you
please help us with the steps for a multitenant setup using a schema for each
application model?

Thank you so much all.

Regards,
Vasu


Re: PostgreSQL 11 with SSL on Linux

2020-06-04 Thread Paul Förster
Hi Susan,

you need to install the openssl rpm if you don't have that already. But it 
should be a standard package in all Linux distributions, so it should be in 
your repository.

Example from my host:
$ rpm -qa openssl
openssl-1.1.0i-lp151.1.1.noarch

Cheers,
Paul

> On 04. Jun, 2020, at 18:50, Susan Joseph  wrote:
> 
> So the other issue I have is that I am running this on a server that is not 
> connected to the Internet.  So I have downloaded the RPM file but I can't 
> figure out how to install it without it trying to access files on the 
> Internet.  Are there other libraries I need to include in my download for 
> this to work on a non-networked server?
> 
> 
> 
> -Original Message-
> From: Magnus Hagander 
> To: Susan Joseph 
> Cc: pgsql-general@lists.postgresql.org 
> Sent: Thu, Jun 4, 2020 11:14 am
> Subject: Re: PostgreSQL 11 with SSL on Linux
> 
> 
> 
> On Thu, Jun 4, 2020 at 5:01 PM Susan Joseph  wrote:
> Can someone tell me if there is a yum version of PostgreSQL 11 that can be 
> installed on Linux that has SSL enabled?  Currently the only way I have 
> gotten SSL turned on in PostgreSQL is by doing the following commands:
> 
>   • tar xzvf /tmp/postgresql-11.2.tar.gz -C /data
>   • mv /data/postgresql-11.2 /data/pgsql
>   • cd /data/pgsql
>   • ./configure --prefix=/data/pgsql --without-readline --without-zlib 
> --with-openssl >> conf.log
>   • make
>   • make install
> 
> All the PostgreSQL versions available on yum from the postgresql.org site 
> have SSL enabled. Just install using the instructions from 
> https://www.postgresql.org/download/.
> 
> -- 
>  Magnus Hagander
>  Me: https://www.hagander.net/
>  Work: https://www.redpill-linpro.com/





Re: Multitenent architecture

2020-06-04 Thread Paul Förster
Hi Vasu,

> On 04. Jun, 2020, at 17:52, Vasu Madhineni  wrote:
> We are planning a POC on multitenant architecture in Postgres. Could you 
> please help us with the steps for a multitenant setup using a schema for each 
> application model?

look up the docs for "create database" and "create schema":

https://www.postgresql.org/docs/current/sql-createdatabase.html
https://www.postgresql.org/docs/current/sql-createschema.html

PostgreSQL is, by definition, what Oracle calls "multitenant".

Cheers,
Paul





Re: Multitenent architecture

2020-06-04 Thread Laurenz Albe
On Thu, 2020-06-04 at 23:52 +0800, Vasu Madhineni wrote:
> We are planning a POC on multitenant architecture in Postgres. Could you
> please help us with the steps for a multitenant setup using a schema for
> each application model?

For a few tenants, you can keep identical tables in several schemas and
set "search_path" to select a tenant.

With many tenants, you are better off with one table that holds the
data for all clients.  You can use Row Level Security to have each
tenant see only his or her data, and it might be a good idea to
use list partitioning on the tenant ID.
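
A minimal sketch of both approaches (all names are placeholders):

    -- schema per tenant
    CREATE SCHEMA tenant_a;
    CREATE TABLE tenant_a.orders (id bigint PRIMARY KEY, amount numeric);
    SET search_path = tenant_a;

    -- one shared table with Row Level Security
    CREATE TABLE public.orders (tenant_id int NOT NULL, id bigint, amount numeric);
    ALTER TABLE public.orders ENABLE ROW LEVEL SECURITY;
    CREATE POLICY tenant_isolation ON public.orders
        USING (tenant_id = current_setting('app.tenant_id')::int);

    -- each tenant session then does, e.g.:
    SET app.tenant_id = '42';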

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com