Re: [GENERAL] EXPLAIN command just hangs...

2017-11-03 Thread Peter J. Holzer
On 2017-11-02 20:51:23 +, Rhhh Lin wrote:
[...]
> where timestamp BETWEEN 150667656 AND 150875022 
[...]
> *Also, as a sidenote - can someone please expand on why one (I was not 
> involved
> in the creation of this DB/schema definition) would choose to have the
> definition of the timestamp column as a bigint in this case?  

The numbers look like Java timestamps (Milliseconds since the epoch). So
probably the programs accessing the database are written in Java and the
programmer decided that it's simpler to do all timestamp computations in
Java than in SQL. Or maybe the programmer just felt more comfortable
with Java-like timestamps than with calendar dates. (I have done the
same (with Unix timestamps, i.e. seconds since the epoch). Although in
the cases where I've done it I had the additional excuse that the
database didn't support timestamps with timezones, which isn't the case
for PostgreSQL.)
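
(If it helps, converting such a value back to a timestamp is a one-liner in
SQL; the 13-digit value below is illustrative, not one of the truncated
values quoted above:)

    select to_timestamp(1509000000000 / 1000.0);
    -- => 2017-10-26 06:40:00+00 (displayed here at UTC; actual output
    -- follows the session's TimeZone setting)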

hp

-- 
   _  | Peter J. Holzer| we build much bigger, better disasters now
|_|_) || because we have much more sophisticated
| |   | h...@hjp.at | management tools.
__/   | http://www.hjp.at/ | -- Ross Anderson 




Re: [GENERAL] [HACKERS] SSL and Encryption

2017-11-03 Thread Daniele Varrazzo
On Fri, Nov 3, 2017 at 5:22 AM, John R Pierce  wrote:
> On 11/2/2017 10:12 PM, Jeff Janes wrote:
>>
>> https://wiki.postgresql.org/wiki/List_of_drivers
>>
>> What is 'python native'?  psycopg works as long as you update your libpq.
>
> I thought pythonistas preferred using a native driver that didn't use libpq

I'm pretty sure they don't. The industry standard uses libpq.

-- Daniele




[GENERAL] Postgresql.conf not found

2017-11-03 Thread Neto pr
Hello All

I was trying to install PostgreSQL following this tutorial,
http://powa.readthedocs.io/en/latest/quickstart.html, in order to use the
PoWA database tool.

I am using Debian 8 (Jessie).

I ran:
apt-get install postgresql-9.6 postgresql-client-9.6 postgresql-contrib-9.6
apt-get install postgresql-9.6-powa

So far so good: the DBMS came up and I can create tables, etc.

But I cannot find the postgresql.conf file. I tried searching with locate
postgresql.conf but found nothing, so I suspect it was not created. I need to
add some libraries to the shared_preload_libraries parameter.

Has anyone installed PostgreSQL this way who could tell me where the
postgresql.conf file might be, or suggest another way to configure the
shared_preload_libraries parameter?

Best Regards Neto


Re: [GENERAL] Postgresql.conf not found

2017-11-03 Thread Stefan Fercot
Hi,

You can find the file locations on
https://wiki.debian.org/PostgreSql#File_locations

You should find the configuration in /etc/postgresql/9.6/main.
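
As a side note, once you can connect, the parameter can also be set from SQL
without editing the file by hand (a sketch; the exact library list is an
assumption based on the PoWA tutorial, and a server restart is required
either way):

    ALTER SYSTEM SET shared_preload_libraries = 'pg_stat_statements, powa';
    -- writes the override to postgresql.auto.conf; restart the
    -- 9.6/main cluster afterwards for it to take effect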

Kind regards,


On 11/03/2017 12:51 PM, Neto pr wrote:
>
> Hello All
>
> I was trying to install PostgreSQL following this tutorial,
> http://powa.readthedocs.io/en/latest/quickstart.html, in order to use the
> PoWA database tool.
>
> I am using Debian 8 (Jessie).
>
> I ran:
> apt-get install postgresql-9.6 postgresql-client-9.6 postgresql-contrib-9.6
> apt-get install postgresql-9.6-powa
>
> So far so good: the DBMS came up and I can create tables, etc.
>
> But I cannot find the postgresql.conf file. I tried searching with locate
> postgresql.conf but found nothing, so I suspect it was not created. I need
> to add some libraries to the shared_preload_libraries parameter.
>
> Has anyone installed PostgreSQL this way who could tell me where the
> postgresql.conf file might be, or suggest another way to configure the
> shared_preload_libraries parameter?
>
> Best Regards Neto
>

-- 
Stefan FERCOT
http://dalibo.com - http://dalibo.org





Re: [GENERAL] Postgresql.conf not found

2017-11-03 Thread Andreas Kretschmer

Hi,


On 03.11.2017 12:51, Neto pr wrote:

But I cannot find the postgresql.conf file.



you can ask the database, inside psql:

test=# show config_file;
               config_file
------------------------------------------
 /etc/postgresql/10/main/postgresql.conf
(1 row)

test=*#
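
(As a side note, the same trick locates the other configuration files; a
quick sketch:)

    test=# select name, setting from pg_settings
           where name in ('config_file', 'hba_file', 'ident_file');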

Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com





[GENERAL] Hot Standby syntax error

2017-11-03 Thread nmmulla

Is the below syntax correct in the recovery.conf file: 

restore_command = 'copy "D:\\apps\postgres\pg_archivelog\\%f" "%p"' 
archive_cleanup_command = 'pg_archivecleanup
"D:\\apps\\postgres\\pg_archivelog\\" %r' 


And is this line correct in the postgresql.conf file:

archive_command = 'copy "%p" "D:\\apps\\postgres\\pg_archivelog\\%f"' 


I am archiving to the local server.



--
Sent from: http://www.postgresql-archive.org/PostgreSQL-general-f1843780.html




[GENERAL]

2017-11-03 Thread K. Brannen
Rhhh Lin  wrote:

> *Also, as a sidenote - can someone please expand on why one (I was not 
> involved in the creation of this DB/schema definition) would choose to have 
> the definition of the timestamp column as a bigint in this case?

Because the time value you need to hold exceeds 32 bits. :)

Based on your example, you're storing epoch in milliseconds, which exceeds 
2^32, so you have to use bigint. Check out the size of the int and bigint data 
types in the docs.
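
A quick psql check makes the overflow concrete (the value below is an
illustrative 13-digit millisecond timestamp, not one of the truncated values
quoted above):

    select 1509000000000::int;     -- ERROR:  integer out of range
    select 1509000000000::bigint;  -- fine; bigint goes up to 9223372036854775807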

HTH,
Kevin




Re: [GENERAL] [HACKERS] SSL and Encryption

2017-11-03 Thread Jeff Janes
On Thu, Nov 2, 2017 at 10:22 PM, John R Pierce  wrote:

> On 11/2/2017 10:12 PM, Jeff Janes wrote:
>
>> https://wiki.postgresql.org/wiki/List_of_drivers
>>
>> What is 'python native'?  psycopg works as long as you update your libpq.
>>
>
>
> I thought pythonistas preferred using a native driver that didn't use
> libpq ?



I'm not a pythonista, more of a python-when-I-must, but psycopg is the
dominant driver I've seen used and discussed.  I had to use a different
driver (zxJDBC, for jython) and basically all advice I found was for
psycopg and not applicable to zxJDBC, which most people apparently had
never even heard of.

Anyway, I would think psycopg would be classed as a native binding, as it
uses libpq which is 'native' to PostgreSQL.

If someone has greater knowledge here, it would be nice to update the wiki
pages with new info.

Talk of what is native or not in pythons now has me thinking of the
Everglades.

Cheers,

Jeff


Re: [GENERAL] Postgres 10 manual breaks links with anchors

2017-11-03 Thread Peter Eisentraut
On 10/16/17 03:19, Thomas Kellerer wrote:
> I don't know if this is intentional, but the Postgres 10 manual started to 
> use lowercase IDs as anchors in the manual.
> 
> So, if I have e.g.: the following URL open in my browser:
> 
>
> https://www.postgresql.org/docs/current/static/sql-createindex.html#sql-createindex-concurrently
> 
> I cannot simply switch to an older version by replacing "current" with e.g. 
> "9.5" because in the 9.5 manual the anchor was all uppercase, and the URL 
> would need to be: 
> 
>
> https://www.postgresql.org/docs/9.5/static/sql-createindex.html#SQL-CREATEINDEX-CONCURRENTLY

A fix for this has been committed.  Once 10.1 comes out (next week), the
old-style anchors will work again.

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [GENERAL] [HACKERS] SSL and Encryption

2017-11-03 Thread Daniele Varrazzo
On Fri, Nov 3, 2017 at 4:12 PM, Jeff Janes  wrote:
> On Thu, Nov 2, 2017 at 10:22 PM, John R Pierce  wrote:
>>
>> On 11/2/2017 10:12 PM, Jeff Janes wrote:
>>>
>>> https://wiki.postgresql.org/wiki/List_of_drivers
>>>
>>> What is 'python native'?  psycopg works as long as you update your libpq.
>>
>> I thought pythonistas preferred using a native driver that didn't use
>> libpq ?
>
> Anyway, I would think psycopg would be classed as a native binding, as it
> uses libpq which is 'native' to PostgreSQL.

"Native" in the context of a Python library refers to a library
written in pure Python, which can run with any implementation of the
Python language: in C (CPython, the canonical implementation), in Java
(JPython), in .NET (IronPython), in Python (PyPy)... As such psycopg2
is not a native library because it's written in C and designed only
for CPython.

Notice that "native" and "using the libpq" are not mutually exclusive:
using FFI it is possible to load dynamically the libpq and have pure
Python code calling into the libpq, without the need to build a Python
C extension. This is the case of psycopg2-ctypes and psycopg2cffi,
which are drop-in replacements for psycopg2 (I contributed to the
-ctypes variant at the time and made sure of its interface by dropping
in the entire psycopg2 test suite and hammering it until all tests
passed; -cffi forked from it). Both projects feature-wise seem stuck
to psycopg 2.5 era (circa 2012); psycopg2-ctypes seems unmaintained,
psycopg2cffi seems still maintained instead, with a release a few
months ago (unfortunately with a confusing version number). These
projects are inherently slower than the C psycopg2, which performs
more operations in C, but because PyPy has a JIT is roughly makes up
for the speed lost implementing certain inner loops in Python.

Another way to be Python-native is to do without libpq and parse
the client-server protocol in Python. Of course this misses new
features and encryption schemes, and requires reimplementing in
Python a lot of subtleties (env vars, service files...) that a
libpq-binding solution gets for free. The best-known example of such
a driver is pg8000, which is barely used and barely maintained (it
had a release recently after a long lull).

In terms of what the industry uses, ISTM almost everyone uses
psycopg2 with CPython, with psycopg2cffi used by PyPy users, who
apparently do fine without the features introduced in later
psycopg versions (replication, SQL composition...). Apart from
CPython, targeting PyPy is the only meaningful goal, as Jython and
IronPython are not much used anymore. pg8000 and other drivers see
very little use.

> If someone has greater knowledge here, it would be nice to update the wiki
> pages with new info.

I've added a link to psycopg2cffi. I don't think mentioning other
drivers is really useful for concrete and current use cases.


-- Daniele




[GENERAL] Adding 'serial' to existing column

2017-11-03 Thread Robert Lakes
I am new to Postgres and I am trying to build this SQL statement in my SQL
script:
ALTER TABLE listings_cdc ALTER COLUMN table_id SET DEFAULT
nextval('tab_id_seq');

I am trying to build the above-stated command as a dynamic SQL statement:
 EXECUTE 'ALTER TABLE listings_cdc ALTER COLUMN table_id SET DEFAULT
nextval('||quote_ident(tab_id_seq)||')';

I've received an error when running the above-stated command in the script:
 ERROR:  column "tab_id_seq" does not exist
LINE 1: ...OLUMN table_id SET DEFAULT nextval('||quote_ident(tab_id_seq...

I need help writing the above-stated command as a dynamic SQL
statement.

In addition, do you know of a tutorial or a book that I can purchase that
teaches how to build dynamic SQL statements?


Re: [GENERAL] Adding 'serial' to existing column

2017-11-03 Thread Tom Lane
Robert Lakes  writes:
> I am new to Postgres and I am trying to build this SQL statement in my SQL
> script:
> ALTER TABLE listings_cdc ALTER COLUMN table_id SET DEFAULT
> nextval('tab_id_seq');

That looks correct, but this not so much:

> I am trying to build the above-stated command as a dynamic SQL statement:
>  EXECUTE 'ALTER TABLE listings_cdc ALTER COLUMN table_id SET DEFAULT
> nextval('||quote_ident(tab_id_seq)||')';

> I've received an error when running the script this above-stated command:
>  ERROR:  column "tab_id_seq" does not exist
> LINE 1: ...OLUMN table_id SET DEFAULT nextval('||quote_ident(tab_id_seq...

You want quote_literal, not quote_ident, because you're trying to produce
a single-quoted literal.
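
For instance, a sketch using the names from the original message (inside a
PL/pgSQL block):

    EXECUTE 'ALTER TABLE listings_cdc ALTER COLUMN table_id SET DEFAULT nextval('
            || quote_literal('tab_id_seq') || ')';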

regards, tom lane




Re: [GENERAL] EXPLAIN command just hangs...

2017-11-03 Thread Rhhh Lin
Thanks for replying Tom.


Ah, I misunderstood the purpose of what you were looking for from the "ps" 
command. Most definitely, the postgres process for this session is eating up 
CPU cycles. That fact is what drew us to investigate the session activity in 
the first place. The CPUs (8 cores) on this host are currently pegged
because of a couple of sessions all experiencing issues with the same or
similar queries against this very table.


The actual index on the table is a composite index on (timestamp,
measurement_id), but it is also the primary key constraint, so I don't
particularly want to go messing with it (this is also a production environment).


My initial attempt to 'explain' the plan was to check if the query planner was 
utilising this composite index based on the predicates. If not, then I could 
reasonably add an explicit index on "timestamp", but right now I cannot
check and verify because it just hangs...


Regards,

Ruan



From: Tom Lane 
Sent: 02 November 2017 21:40
To: Rhhh Lin
Cc: Justin Pryzby; pgsql-general@postgresql.org
Subject: Re: [GENERAL] EXPLAIN  command just hangs...

Rhhh Lin  writes:
> The EXPLAIN  statement is currently 'f' for waiting and 'active' via 
> pg_stat_activity, so it is doing something. The ps command does not show me 
> anything more verbose.

ps would confirm for sure whether it was eating CPU time, whereas I do not
particularly trust pg_stat_activity to tell you that.

> The reason I am very suspect of the timestamp column makeup is that if I
> remove that predicate from the EXPLAIN command and the actual query,
> both complete within seconds without issue.

We've seen issues with the planner having trouble trying to determine the
extreme values of an indexed column, in cases where there are a lot of
uncommitted or recently-dead entries at the end of the index --- it does
a lot of work trying to verify the commit status of each entry in turn.
So I wonder if that might apply.

regards, tom lane


Re: [GENERAL] Adding 'serial' to existing column

2017-11-03 Thread rob stone


On Fri, 2017-11-03 at 16:47 -0400, Tom Lane wrote:
> Robert Lakes  writes:
> > I am new to Postgres and I am trying to build this SQL statement in
> > my SQL
> > script:
> > ALTER TABLE listings_cdc ALTER COLUMN table_id SET DEFAULT
> > nextval('tab_id_seq');
> 
> That looks correct, but this not so much:
> 
> > I am trying to build the above-stated command as a dynamic SQL
> > statement:
> >  EXECUTE 'ALTER TABLE listings_cdc ALTER COLUMN table_id SET
> > DEFAULT
> > nextval('||quote_ident(tab_id_seq)||')';
> > I've received an error when running the script this above-stated
> > command:
> >  ERROR:  column "tab_id_seq" does not exist
> > LINE 1: ...OLUMN table_id SET DEFAULT
> > nextval('||quote_ident(tab_id_seq...
> 
> You want quote_literal, not quote_ident, because you're trying to
> produce
> a single-quoted literal.
> 
>   regards, tom lane
> 
> 

You could also do:-

ALTER SEQUENCE tab_id_seq OWNED BY listings_cdc.table_id;

which would establish the one-to-one relationship between the table and
its sequence for that column.
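
For reference, here is a sketch of the full set of statements that declaring
the column as serial would have bundled together (names as used in this
thread; serial would also add NOT NULL):

    CREATE SEQUENCE tab_id_seq;
    ALTER TABLE listings_cdc ALTER COLUMN table_id SET DEFAULT nextval('tab_id_seq');
    ALTER SEQUENCE tab_id_seq OWNED BY listings_cdc.table_id;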

My 2 cents worth.

HTH,
Rob




Re: [GENERAL] EXPLAIN command just hangs...

2017-11-03 Thread Rhhh Lin
Hi Justin,


I checked for dead tuples against that particular table initially as I have 
seen performance problems before in a related 'busy' environment which needed 
its frequency of vacuuming to be increased. So I have a query to check for 
tables with dead tuples, and this table is not showing any.
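
(For what it's worth, a minimal sketch of such a check against the statistics
views; the table name is hypothetical:)

    select relname, n_live_tup, n_dead_tup, last_vacuum, last_autovacuum
    from pg_stat_user_tables
    where relname = 'my_table';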


I also came across the suggestion that bloat might be an issue on the database,
and how to identify and address it, and it does not appear to be evident here
either, so that's my thinking as to why these are not factors.


That said, if you have any pointers as to how I could verify that bloat or dead
tuples are not a factor, please let me know.


I have vacuumed. I have not reindexed as it is a prod environment and I see 
that... "REINDEX locks out writes but not reads of the index's parent table.", 
so I may have to arrange this to avoid any interruptions (Although currently, 
accessing this table seems completely problematic anyway!).


The table is 691MB and the composite index(PK) is 723 MB.


My thinking now is I may need to export this data out to a staging area whereby 
I can attempt to "play" with it without any repercussions...


Regards,

Ruan



From: Justin Pryzby 
Sent: 02 November 2017 21:49
To: Rhhh Lin
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] EXPLAIN  command just hangs...

On Thu, Nov 02, 2017 at 09:13:05PM +, Rhhh Lin wrote:
> Yes, it may be an issue with the index, but I'd like to have some evidence
> towards that before dropping and recreating (It does not appear that bloat is
> a problem here or dead tuples either).

Why do you say those aren't an issue?  Just curious.

Have you vacuumed or reindexed (or used pg_repack)?

How large are the table and index? \dt+ and \di+

> The reason I am very suspect of the timestamp column makeup is that if I
> remove that predicate from the EXPLAIN command and the actual query, both
> complete within seconds without issue. So I do know where the issue is (I
> just dont know what the issue is!).

It could be that you're hitting selfuncs.c:get_actual_variable_range() and the
extremes of the index point to many dead tuples (as I see Tom suggests).

You could strace the backend and see if it's reading (or writing??)
consecutively (hopefully with ample OS readahead) or randomly (without).

Justin


Re: [GENERAL] EXPLAIN command just hangs...

2017-11-03 Thread Justin Pryzby
On Fri, Nov 03, 2017 at 09:12:02PM +, Rhhh Lin wrote:
> I checked for dead tuples against that particular table initially as I have 
> seen performance problems before in a related 'busy' environment which needed 
> its frequency of vacuuming to be increased. So I have a query to check for 
> tables with dead tuples, and this table is not showing any.
>
> I also came across the suggestion that bloat might be an issue on the
> database, and how to identify and address it, and it does not appear to be
> evident here either, so that's my thinking as to why these are not factors.

It'd be helpful if you'd paste the commands+output as you run them: \dt+, \di+,
ps, vacuum, dead tuples, etc.

> I have vacuumed. I have not reindexed as it is a prod environment and I see 
> that... "REINDEX locks out writes but not reads of the index's parent 
> table.", so I may have to arrange this to avoid any interruptions (Although 
> currently, accessing this table seems completely problematic anyway!).

Perhaps you could look into pg_repack?  Note that by default it will kill
long-running transactions if it needs to in order to (briefly) obtain a
super-exclusive lock.

> The table is 691MB and the composite index(PK) is 723 MB.

It'd be useful to see the pg_stat_user_tables.* and pg_class.reltuples and
relpages for that table.  Also output from VACUUM VERBOSE or autovacuum logs,
if you have them (but note that vacuum does different work every time it's
re-run).
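
Something along these lines (the table name is hypothetical):

    select reltuples, relpages from pg_class where relname = 'my_table';
    select * from pg_stat_user_tables where relname = 'my_table';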

> My thinking now is I may need to export this data out to a staging area 
> whereby I can attempt to "play" with it without any repercussions...

I imagine that maybe this is related to the pattern of activity on that table
(specifically around the extreme values of its indexed columns).  So it'll be
hard to reproduce, and dumping and reloading the table (or just reindexing it
without reloading it at all) will probably temporarily improve or resolve the
issue.

You could try *adding* a new index on the timestamp column alone (CREATE INDEX
CONCURRENTLY).  Re-creating the index might conceivably be the solution in
the end, and it's what pg_repack does behind the scenes.
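
For example (a sketch with hypothetical names; the column is quoted because
timestamp is also a type name):

    CREATE INDEX CONCURRENTLY my_table_ts_idx ON my_table ("timestamp");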

Justin




Re: [GENERAL] gin index trouble

2017-11-03 Thread Rob Sargent



On 10/30/2017 10:56 AM, Peter Geoghegan wrote:

> On Mon, Oct 30, 2017 at 9:45 AM, Rob Sargent  wrote:
>> Peter, you beat me to the punch.  I was just about to say "Having read the
>> referenced message I thought I would add that we never delete from this
>> table."  In this particular case it was written to record by record, in a
>> previous execution and at the time of the error it was only being read.  (In
>> case you've been following, the failed execution would have added ~1M
>> "segments", each of which references an entry in the gin'd table "probandsets"
>> - but like a rookie I'm looking up each probandset (2^16) individually.
>> Re-working that NOW.)
>
> It's not surprising that only a SELECT statement could see this
> problem. I guess that it's possible that only page deletions used for
> the pending list are involved here.
>
> I'm not sure how reliably you can recreate the problem, but if it
> doesn't take too long then it would be worth seeing what effect
> turning off the FASTUPDATE storage parameter for the GIN index has.
> That could prevent the problem from recurring, and would support my
> theory about what's up here. (It wouldn't fix the corruption, though.)
>
> Of course, what I'd much prefer is a self-contained test case. But if
> you can't manage that, or if reproducing the issue takes hours, then
> this simpler experiment might be worthwhile.


My test database machine is:

   Not virtual
   Intel(R) Xeon(R) CPU E3-1241 v3 @ 3.50GHz (quad core, hyperthreaded)
   MemTotal:   16272548 kB
   default postgresql.conf from yum install postgresql-10*

I've loaded thrice the number of records (190K) into the problem table, 
but no sign yet of the problem. But unlike the production 
lookup-notfind-insert (anti)pattern, these were all loaded in a single 
transaction. I think the following query has to read the gin'd column of 
every record:


   select array_length(probands,1) as heads,
   count(*) as occurs
   from tld.probandset
   where probands @>
   '{65fe3b60-1c86-4b14-a85d-21abdf68f9e2,f0963403-3f3c-426d-a828-b5bfff914bb4}'
   group by array_length(probands,1)
   order by array_length(probands,1);

 heads | occurs
-------+--------
     2 |      1
     3 |     14
     4 |     91
     5 |    364
     6 |   1001
     7 |   2002
     8 |   3003
     9 |   3432
    10 |   3003
    11 |   2002
    12 |   1001
    13 |    364
    14 |     91
    15 |     14
    16 |      1
(15 rows)

   Time: 17.125 ms

Happy as a clam.

I'll try a run of the antipattern.  I have NOT diddled FASTUPDATE at all.





Re: [GENERAL] EXPLAIN command just hangs...

2017-11-03 Thread Rhhh Lin
Thanks Justin,


Yeah, I'd agree it would be much easier to show you direct command output but 
I'm prohibited due to the security and sensitive nature of my work environment.


A few observations I had earlier this evening.

I created a CTAS copy of the table in order to mirror the original in
terms of constraints, indexes, etc. I was able to query this copy table (using
my "problematic" query) in under 1 second (actually, even before I had applied
indexes). So I was also able to check the explain plan, and based on the index
makeup the access path should be using the appropriate index correctly. So I
was somewhat pleased that the issue was not a query optimisation difficulty as
such.
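
(A sketch of that kind of copy, with hypothetical names; note that CTAS
copies only the data, so constraints and indexes have to be re-added
separately:)

    CREATE TABLE my_table_copy AS TABLE my_table;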


I also dug deeper into vacuum, and discovered that even though a vacuum 
appeared to be completing, a verbose vacuum was actually erroring out at the
point of truncation at the end and not shrinking back the table to reclaim 
space. It could not obtain an exclusive lock to do this...


So I decided to intervene and terminate some active sessions (which were active
and not waiting), all of which were querying this table with a similar query, by
using pg_terminate_backend(pid). Some of these sessions had been in an active
state for a few days executing this query. However, the termination of the PIDs
[the command succeeded and returned 't'] has not closed down the sessions. They
are still present, still active, and still executing. Those sessions are for
whatever reason completely "hung", locking out exclusive access to the table
(from vacuum), and cannot be killed. I have not yet resorted to trying to kill
the process PIDs from the OS perspective. Instead, I will try to schedule an
outage over the next few days, restart the server, run a vacuum, and see where
I am at then.
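
(For reference, the terminations were along these lines; the filter is
hypothetical:)

    select pg_terminate_backend(pid)
    from pg_stat_activity
    where state = 'active'
      and query like '%my_table%'
      and pid <> pg_backend_pid();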


Regards,

Ruan



From: Justin Pryzby 
Sent: 03 November 2017 21:33
To: Rhhh Lin
Cc: Tom Lane; pgsql-general@postgresql.org
Subject: Re: [GENERAL] EXPLAIN  command just hangs...

On Fri, Nov 03, 2017 at 09:12:02PM +, Rhhh Lin wrote:
> I checked for dead tuples against that particular table initially as I have 
> seen performance problems before in a related 'busy' environment which needed 
> its frequency of vacuuming to be increased. So I have a query to check for 
> tables with dead tuples, and this table is not showing any.
>
> I also came across the suggestion that bloat might be an issue on the
> database, and how to identify and address it, and it does not appear to be
> evident here either, so that's my thinking as to why these are not factors.

It'd be helpful if you'd paste the commands+output as you run them: \dt+, \di+,
ps, vacuum, dead tuples, etc.

> I have vacuumed. I have not reindexed as it is a prod environment and I see 
> that... "REINDEX locks out writes but not reads of the index's parent 
> table.", so I may have to arrange this to avoid any interruptions (Although 
> currently, accessing this table seems completely problematic anyway!).

Perhaps you could look into pg_repack?  Note that by default it will kill
long-running transactions if it needs to in order to (briefly) obtain a
super-exclusive lock.

> The table is 691MB and the composite index(PK) is 723 MB.

It'd be useful to see the pg_stat_user_tables.* and pg_class.reltuples and
relpages for that table.  Also output from VACUUM VERBOSE or autovacuum logs,
if you have them (but note that vacuum does different work every time it's
re-run).

> My thinking now is I may need to export this data out to a staging area 
> whereby I can attempt to "play" with it without any repercussions...

I imagine that maybe this is related to the pattern of activity on that table
(specifically around the extreme values of its indexed columns). So it'll be
hard to reproduce, and dumping and reloading the table (or just reindexing it
without reloading it at all) will probably temporarily improve or resolve the
issue.

You could try *adding* a new index on the timestamp column alone (CREATE INDEX
CONCURRENTLY).  Re-creating the index might conceivably be the solution in
the end, and it's what pg_repack does behind the scenes.

Justin


Re: [GENERAL]

2017-11-03 Thread Rhhh Lin
Thanks for the explanation Kevin!

Regards,

Ruan



From: pgsql-general-ow...@postgresql.org  
on behalf of K. Brannen 
Sent: 03 November 2017 14:35
To: pgsql-general@postgresql.org
Subject: [GENERAL]

Rhhh Lin  wrote:

> *Also, as a sidenote - can someone please expand on why one (I was not 
> involved in the creation of this DB/schema definition) would choose to have 
> the definition of the timestamp column as a bigint in this case?

Because the time value you need to hold exceeds 32 bits. :)

Based on your example, you're storing epoch in milliseconds, which exceeds 
2^32, so you have to use bigint. Check out the size of the int and bigint data 
types in the docs.

HTH,
Kevin




Re: [GENERAL] EXPLAIN command just hangs...

2017-11-03 Thread Rhhh Lin
That helps, thanks Peter.

Regards,

Ruan




From: pgsql-general-ow...@postgresql.org  
on behalf of Peter J. Holzer 
Sent: 03 November 2017 08:49
To: pgsql-general@postgresql.org
Subject: Re: [GENERAL] EXPLAIN  command just hangs...

On 2017-11-02 20:51:23 +, Rhhh Lin wrote:
[...]
> where timestamp BETWEEN 150667656 AND 150875022
[...]
> *Also, as a sidenote - can someone please expand on why one (I was not 
> involved
> in the creation of this DB/schema definition) would choose to have the
> definition of the timestamp column as a bigint in this case?

The numbers look like Java timestamps (Milliseconds since the epoch). So
probably the programs accessing the database are written in Java and the
programmer decided that it's simpler to do all timestamp computations in
Java than in SQL. Or maybe the programmer just felt more comfortable
with Java-like timestamps than with calendar dates. (I have done the
same (with Unix timestamps, i.e. seconds since the epoch). Although in
the cases where I've done it I had the additional excuse that the
database didn't support timestamps with timezones, which isn't the case
for PostgreSQL.)

hp

--
   _  | Peter J. Holzer| we build much bigger, better disasters now
|_|_) || because we have much more sophisticated
| |   | h...@hjp.at | management tools.
__/   | http://www.hjp.at/ | -- Ross Anderson 