Re: [GENERAL] Auto vacuum not running -- Could not bind socket for statistics collector

2014-12-03 Thread Tim Schäfer
Hi,


> On December 2, 2014 at 4:51 PM Tom Lane  wrote:
> Tim Schäfer writes:
> > After starting the server with pg_ctl start, I get the following entries in
> > the
> > logs:
> 
> > 2014-12-02 15:27:36 CET LOG:  could not bind socket for statistics
> > collector: Cannot assign requested address
> > 2014-12-02 15:27:36 CET LOG:  disabling statistics collector for lack of
> > working socket
> 
> Yes, this will break autovacuum, because it won't have any way to find out
> what it should vacuum.  The cause probably is a DNS issue: "localhost"
> isn't resolving to anything sensible.  "dig localhost" on the command line
> might offer some insight.


thanks for your answer. Here is my full 'dig localhost' from the database
server:

~/data> dig localhost

; <<>> DiG 9.9.4-rpz2.13269.14-P2 <<>> localhost
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 10157
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 1, ADDITIONAL: 2

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;localhost. IN  A

;; ANSWER SECTION:
localhost.  604800  IN  A   127.0.0.1

;; AUTHORITY SECTION:
localhost.  604800  IN  NS  localhost.

;; ADDITIONAL SECTION:
localhost.  604800  IN  AAAA    ::1

;; Query time: 1 msec
;; SERVER: 192.168.185.11#53(192.168.185.11)
;; WHEN: Wed Dec 03 09:24:53 CET 2014
;; MSG SIZE  rcvd: 96


Looks fine to me. Or is there something wrong with it?

And are you sure pgsql is unhappy with localhost? It would be great if I
definitely knew the address it is trying to bind. Is there a way to tell?

Thanks again & greetings,

--
Tim




Re: [GENERAL] Is "WITH () UPDATE" Thread Safe ?

2014-12-03 Thread Albe Laurenz
David G Johnston wrote:
> Albe Laurenz *EXTERN* wrote
>> Paul GOERGLER wrote:
>>> I have a lot of tickets, i need to take a batch of tickets and process
>>> them.
>>> So the process is :
>>> SELECT ONLY 100 tickets
>>> PROCESS ticket
>>> MARK THEM AS « done »
>>>
>>> I’m selecting the tickets with :
>>>
>>> WITH t0 AS (
>>> SELECT t.id,
[...]
>>> FROM tickets AS t
[...]
>>> FOR UPDATE OF t
>>> )
>>> UPDATE tickets AS t1
[...]
>>> FROM t0
[...]
>>>
>>>
>>> I wonder if this query is thread safe, Can a ticket be updated between
>>> the SELECT part (t0) and the
>>> UPDATE part ?
>>> If this query is not « thread safe » how can i do this ?

>> There is no race condition in your query because you used SELECT ... FOR
>> UPDATE.
>>
>> That causes the rows found in the WITH clause to be locked against
>> concurrent modification.
>>
>> So you should be fine.

> I was under the impression that the presence of FOR UPDATE in this situation
> was unnecessary since the execution of the update occurs in the same
> statement as the select and thus the relevant data will be locked at
> execution.
> 
> The FOR UPDATE is for situations where other code needs to intervene between
> the select and a subsequent update.
> 
> The documentation is silent on this distinction, but...
> 
> Note that the use of a CTE in this example is a convenience and that the top
> level command is still UPDATE, not SELECT.
> 
> It may be worthwhile to update the UPDATE page's WITH commentary to note
> that (if correct) there is no need for a FOR UPDATE clause on the contained
> subquery (yes, that was quite a mouthful...)

Well, the SELECT and the UPDATE are part of the same statement, so they
run in the same transaction and with the same snapshot.

But it could happen that (for example) two of the above queries are running
concurrently.  Then both could find the same row in the SELECT and modify
it (because of the exclusive row lock, one of them will have to wait until
the other is done).  Then this ticket will be processed twice.

With FOR UPDATE, no two SELECT statements can find the same row, because
by the time the lock taken by the first SELECT is released, the status of the
ticket will have changed and the second SELECT will not find it.
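
For illustration, here is a minimal, self-contained sketch of that pattern
(table and column names are assumptions, not the original poster's schema):

-- Assumed schema: tickets(id, status), new tickets having status 'new'.
CREATE TABLE tickets (id serial PRIMARY KEY, status text NOT NULL DEFAULT 'new');

-- Claim and mark a batch of 100 tickets in one statement.  FOR UPDATE locks the
-- selected rows; a concurrent copy of this statement blocks on those locks,
-- re-checks the WHERE clause once they are released, and therefore cannot pick
-- up the same tickets again.
WITH t0 AS (
    SELECT t.id
    FROM tickets AS t
    WHERE t.status = 'new'
    ORDER BY t.id
    LIMIT 100
    FOR UPDATE OF t
)
UPDATE tickets AS t1
SET status = 'done'
FROM t0
WHERE t1.id = t0.id
RETURNING t1.id;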

Yours,
Laurenz Albe



[GENERAL] I did some testing of GIST/GIN vs BTree indexing…

2014-12-03 Thread Guyren Howe
Obviously, database benchmarking is basically a silly idea, because every 
workload, every hardware configuration, every schema is different, with very 
different consequences.

Still, I was left wondering when one should choose a BTree vs GIST or GIN 
(I didn’t even try to look at SP-GIST: any thoughts?).

The documentation on heuristics for when to choose which doesn’t say exactly 
but suggests that one should just choose a BTree.

But what made me curious about the alternatives is this: you have to use a 
compound BTree index left-to-right (you can use the first column, or the first 
and second, or first, second and third…), whereas GIST and GIN both support 
using arbitrary fields. That right there is mighty compelling for any kind of 
ad-hoc query scenario. So I thought I’d do some testing about insert and query 
speed.

I made a table with six integer columns but no indexes, and made copies with a 
regular BTree primary key and then an index on all the columns, in separate 
tables using either a BTree, a GIST or a GIN index. In the original table I 
generated 10 million rows of random values in the range 1..1 million.
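
For reference, here is a sketch of a comparable setup (a reconstruction under
stated assumptions, not the actual test script; GIST and GIN indexes on plain
integer columns need the btree_gist and btree_gin contrib extensions):

CREATE EXTENSION IF NOT EXISTS btree_gist;
CREATE EXTENSION IF NOT EXISTS btree_gin;

-- Six integer columns, 10 million rows of random values in 1..1,000,000.
CREATE TABLE t_base AS
SELECT (random() * 999999 + 1)::int AS c1,
       (random() * 999999 + 1)::int AS c2,
       (random() * 999999 + 1)::int AS c3,
       (random() * 999999 + 1)::int AS c4,
       (random() * 999999 + 1)::int AS c5,
       (random() * 999999 + 1)::int AS c6
FROM generate_series(1, 10000000);

-- One copy of the data per index type.
CREATE TABLE t_btree AS SELECT * FROM t_base;
CREATE INDEX ON t_btree USING btree (c1, c2, c3, c4, c5, c6);

CREATE TABLE t_gist AS SELECT * FROM t_base;
CREATE INDEX ON t_gist USING gist (c1, c2, c3, c4, c5, c6);

CREATE TABLE t_gin AS SELECT * FROM t_base;
CREATE INDEX ON t_gin USING gin (c1, c2, c3, c4, c5, c6);

-- Example probe on the first column only ("find 100 values"):
SELECT count(*) FROM t_gist WHERE c1 BETWEEN 500000 AND 500099;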

This was on 9.4RC1 on a recent-model iMac. I used an external 2.5” hard drive 
so the SSD wasn’t mucking things up. I restarted Postgres (but not the 
computer) between tests. I tested a different range of values, of the same 
size, when comparing searches, in order to mostly avoid having the MacOS disk 
cache helping things out. I also tried running multiple versions of the search 
tests on different ranges in different orders, just to see if the comparison of 
search times was roughly consistent, which it was.

The results were… surprising, and don’t seem to accord with what the 
documentation says.

I was interested in insert speed. The docs suggest that GIN would be 
appallingly slow, but don’t really talk about GIST vs BTree. When I configured 
reasonable caches, working mem and such, I got the following times for insert… 
select * (values in ms; listed top to bottom as BTree, GIST, GIN):

1,233,570
2,323,639
1,700,752

GIST was slower than GIN (which flat contradicts the documentation), but both 
are in shouting distance of BTree. When I tuned down the caches, to simulate a 
machine under load, I got:

20,252,299
3,583,346
10,495,424

So wow, BTree is much slower. This is basically a default setup, with tiny 
cache and working memory and whatnot.

Searching on the first column is just as interesting. With decent caches, find 
100 values (so about a thousand rows):

936
508
2,215

So this result is interesting. GIN is *slowest*, and in particular is slower 
than GIST by a good margin, whereas the docs suggest GIN should always be 
faster. GIST is significantly faster than BTree.

Now finding 1,000 values (so 10,000 rows, give or take):

4,148
1,302
3,078

GIST is fastest again, but now GIN is faster than BTree.

Again, but 10,000 values/100,000 rows:

34,767
8,104
8,995

So in a memory-rich environment, GIST appears to be the clear winner. When I 
turn down the memory values, though, I get a very different result.

100 values/1000 rows:

7,931
7,209
1,034

1000 values/10,000 rows:

81,675
88,191
1,598

I tried that one several times, because I found it hard to believe. The results 
were consistent. With 10,000 values/100,000 rows, the results are even starker.

Index sizes in pages:

258,578
309,636
581,795

GIN is certainly not the “three times” size suggested in the docs, but perhaps 
that just hasn’t been updated for the 9.4 improvements. Certainly, there isn’t 
sufficient difference here to make the BTree advantage compelling in most 
applications.

Given the futility of database benchmarking in general, I didn’t want to go any 
further with this. What I was interested in was whether it might be worth 
switching from BTree to GIST/GIN indexes with regular sorts of data. It appears 
to be the case that GIST and GIN are often better than BTree in general, and 
given their much greater flexibility in satisfying queries on different 
columns, it might even be the case that one should recommend a single GIST or 
GIN index on the frequently-searched columns of a table in most cases?

I would absolutely *love* to hear what this community has to say about this 
question: should one consider GIST or GIN indexes on regular old numeric/text 
columns? When, theoretically? When, in practice (ie does anyone have comparable 
benchmarks on 9.4?). Other thoughts?

[GENERAL] FW: getting error while running sql on mm_activealrm table

2014-12-03 Thread M Tarkeshwar Rao
Hi all,

We are getting the following error message on doing any action on the table, 
like SELECT or opening it from pgAdmin.

Please suggest.

ERROR:  missing chunk number 0 for toast value 54787 in pg_toast_2619
** Error **

ERROR: missing chunk number 0 for toast value 54787 in pg_toast_2619
SQL state: XX000


CREATE TABLE mm_activealarm
(
  alarm_id integer NOT NULL,
  source_address character varying(255) NOT NULL,
  alarm_instance_id integer NOT NULL,
  alarm_raise_time bigint,
  alarm_update_time bigint,
  alarm_cease_time bigint,
  alarm_count integer,
  alarm_severity integer NOT NULL,
  source_type character varying(40) NOT NULL,
  alarm_state integer NOT NULL,
  event_type integer,
  notification_id integer NOT NULL,
  probable_cause integer NOT NULL,
  specific_problem integer NOT NULL,
  alarm_additional_text character varying(10240),
  alarm_ack_time bigint,
  alarm_ack_user character varying(100) NOT NULL,
  alarm_ack_system character varying(100) NOT NULL,
  alarm_proposed_repair_action character varying(10240) NOT NULL,
  CONSTRAINT mm_activealarm_pk PRIMARY KEY (alarm_id, source_address)
  USING INDEX TABLESPACE mgrdata
)
WITH (
  OIDS=FALSE
)
TABLESPACE mgrdata;
ALTER TABLE ss_activealarm
  OWNER TO ss_super;

Regards
Tarkeshwar


[GENERAL] TRUNCATE and CREATE TABLE LIKE for foreign tables

2014-12-03 Thread sri harsha
Hi,

   Is it possible to implement TRUNCATE and CREATE TABLE LIKE for a foreign
table? Is there any way I can write a function to implement those queries in
my FDW?

Thanks,
Harsha


Re: [GENERAL] TRUNCATE and CREATE TABLE LIKE for foreign tables

2014-12-03 Thread Albe Laurenz
sri harsha wrote:
> Is it possible to implement TRUNCATE and CREATE TABLE LIKE for a foreign 
> table. Is there anyway i
> can write a function to implement those queries in my FDW ??

This is currently not possible; it would need an extension to the FDW API.

If you can come up with a good design, lobby for it on the -hackers mailing list.
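
In the meantime, a couple of hedged workarounds that stay within what the
current API allows (assuming a foreign table whose FDW supports writes, such
as postgres_fdw; the object names below are made up):

-- Instead of TRUNCATE: a plain DELETE, which goes through the FDW's normal
-- write path (much slower than a real TRUNCATE, and only possible if the FDW
-- implements deletes).
DELETE FROM my_foreign_table;

-- Instead of CREATE TABLE ... (LIKE ...): spell out the column list by hand,
-- since LIKE cannot be used when creating a foreign table.
CREATE FOREIGN TABLE my_foreign_copy (
    id   integer,
    name text
) SERVER my_server OPTIONS (table_name 'remote_copy');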

Yours,
Laurenz Albe



Re: [GENERAL] Postgres 9.2 PITR testing to before DROP DATABASE ends up removing file system files

2014-12-03 Thread Joshua Boyd
Having continued my research, the problem I encountered is the exact same
that's been recorded here:

https://www.marshut.net/kstxxk/pitr-failing-to-stop-before-drop-database.html


I took the backup by following the procedures as laid forth in the
continuous archiving document (
http://www.postgresql.org/docs/9.2/static/continuous-archiving.html)

I configured the archiving to gzip the wal files to pgdata/archive
Then I started the backup process by issuing "select
pg_start_backup('mylabel')"
next tarball'd the contents of pgdata, excluding the pgdata/archive dir and
the pgdata/pg_xlog dir (although preserving the directory structure)
after that I issued "select pg_stop_backup()"
then I added the contents of pgdata/archive and pgdata/pg_xlog to the
tarball above
then I gzipped the tarball.

The above is how I archived and backed up..

To restore the backup, I shut down the server, moved pgdata to
pgdata_backup, untarballed the backup tarball, removed all the files in the
new pgdata/pg_xlog dir, copied the files from pgdata_backup/archive and
pgdata_backup/pg_xlog into the new pgdata dir, set up the recovery.conf
file giving it a timestamp gathered from the pgdata_backup/pg_log/<> log
files..  I copied ALL the pg_xlog files ... not simply the "unarchived
ones". All of the unarchived ones should have been removed when I removed
the contents of the pg_xlog dir after restoring the tarball..

I think I answered all the questions - please let me know if I missed
some.  Based on the url I pasted at the top, though, it appears I'm not the
only one who's encountered this problem.


On Tue, Dec 2, 2014 at 3:39 PM, Adrian Klaver 
wrote:

> On 11/28/2014 02:29 PM, Joshua Boyd wrote:
>
>> I am testing out point in time recovery from a hot physical backup in a
>> disaster recovery situation - I turned on archiving of files, created a
>> hot physical backup,
>>
>
> How did you take the backup?
>
> Archiving how and to where?
>
> then (after letting it run for a few days) issued a
>
>> "DROP DATABASE".  The pg_log file shows the DROP DATABASE command was
>> issued at '2014-11-28 10:20:00.010 PST'.  I shut down the server, moved
>> the pgdata directory to pgdata_backup ... restored the files in the hot
>> physical backup I made, copied the wal archive files from pgdata_backup
>> to the (new) pgdata archive,
>>
>
> The above I do not understand.
> You were archiving the WALs in your pgdata directory?
>
> Restored the backup how?
>
>  cleared out the new pg_xlog dir and copied
>
>> the files from the old pg_xlog into the new..  Set up a recovery.conf
>>
>
> All the files or only the unarchived ones?
>
>  file as such:
>>
>> restore_command = 'gunzip -c /home/pg2dev/joshtest/pgdata/archive/%f.gz
>>  > %p'
>> recovery_target_time = '2014-11-28 10:20:00.010 PST'
>> recovery_target_inclusive = false
>>
>> then I started the server up.  the pg_log shows the following:
>>
>
>
>> And then I look in pgdata/base .. and sure enough, that directory is
>> missing.  I examine my hot physical backup file and that directory
>> exists within it.
>>
>> So  even though the recovery SAYS "recovery stopping before commit
>> of transaction 235078" ... it doesn't appear that it's 100% accurate.
>> It didn't commit the transaction, clearly, because the database is still
>> listed in the data dictionary ... however, the filesystem files are
>> gone.  Please - am I doing something wrong, or would this be considered
>> a bug?
>>
>> --
>> Joshua Boyd
>>
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>



-- 
Joshua Boyd


Fwd: [GENERAL] Problem with pg_dump and decimal mark

2014-12-03 Thread Eric Svenson
Hi Adrian,

thank you for your support. Here is the information you requested:


> I have already done that and found something strange:
>
> On the PC where the backup was done with pg_dump, all locale settings of
> Postgres were English/United States. (LC_COLLATE, LC_CTYPE, LC_MONETARY,
> LC_NUMERIC in postgresql.conf)
>

> OS and OS version?

Windows Server 2012R2

> Postgres version?

Version 9.2.1 (same version used on all machines)

> What was the pg_dump command used?

pg_dump -p 6789 -EUTF8 -f myFile.sql my_Database


> On the first PC on which I tried to load the backup file with psql, all
> locale settings if Postgres were  German_Germany. Everything is ok, the
> SQL file with '.' as decimal point was accepted without a problem
>

> OS and OS version?

Windows 7 Enterprise SP 1

> Postgres version?

Version 9.2.1 (same version used on all machines)

> How was it loaded via psql?

psql -p 6789 -U postgres my_Database < myFile.sql

> Was the psql on the same machine and from the same version of Postgres?

Same version, but on different machines.


> On the second PC (Virtual Machine) I had the SAME settings in
> postgresql.conf (German_Germany)
>

> OS and OS version?

Windows 7 Ultimate SP 1

> Postgres version?

Version 9.2.1 (same version used on all machines)

> How was it loaded via psql?

psql -p 6789 -U postgres my_Database < myFile.sql

> Did you use the psql on the VM or did you use the psql on the host?

psql on the VM

> Was the Postgres/psql on the host the same as the VM?

Same Version (9.2.1)

> What are you using for virtualization?

VM Ware Player 6.0.2

> What is host OS?

Windows 7 Enterprise SP 1 (see above)

Again, thank you for the support.

Regards,
Eric Svenson


Re: [GENERAL] Auto vacuum not running -- Could not bind socket for statistics collector

2014-12-03 Thread Adrian Klaver

On 12/03/2014 12:36 AM, Tim Schäfer wrote:

Hi,



On December 2, 2014 at 4:51 PM Tom Lane  wrote:
Tim Schäfer writes:

After starting the server with pg_ctl start, I get the following entries in
the
logs:



2014-12-02 15:27:36 CET LOG:  could not bind socket for statistics
collector: Cannot assign requested address
2014-12-02 15:27:36 CET LOG:  disabling statistics collector for lack of
working socket


Yes, this will break autovacuum, because it won't have any way to find out
what it should vacuum.  The cause probably is a DNS issue: "localhost"
isn't resolving to anything sensible.  "dig localhost" on the command line
might offer some insight.



thanks for your answer. Here is my full 'dig localhost' from the database
server:

~/data> dig localhost

; <<>> DiG 9.9.4-rpz2.13269.14-P2 <<>> localhost
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 10157
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 1, ADDITIONAL: 2

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;localhost. IN  A

;; ANSWER SECTION:
localhost.  604800  IN  A   127.0.0.1

;; AUTHORITY SECTION:
localhost.  604800  IN  NS  localhost.

;; ADDITIONAL SECTION:
localhost.  604800  IN  AAAA    ::1

;; Query time: 1 msec
;; SERVER: 192.168.185.11#53(192.168.185.11)
;; WHEN: Wed Dec 03 09:24:53 CET 2014
;; MSG SIZE  rcvd: 96


Looks fine to me. Or is there something wrong with it?

And are you sure pgsql is unhappy with localhost? It would be great if I
definitely knew the address it is trying to bind. Is there a way to tell?


Running the commands below after starting Postgres will show what it 
successfully bound to. Not a complete answer, but it might help narrow the 
possibilities.


netstat -a | grep postgres
netstat -a | grep <port_number>

where <port_number> is the one set in postgresql.conf.


Also, you say you are using pg_ctl to start the program.

What user are you doing that as?

And are there any other changes in the Connections portion of postgresql.conf?



Thanks again & greetings,

--
Tim





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] FW: getting error while running sql on mm_activealrm table

2014-12-03 Thread Adrian Klaver

On 12/03/2014 03:07 AM, M Tarkeshwar Rao wrote:

Hi all,

We are getting the following error message on doing any action on the table,
like SELECT or opening it from pgAdmin.

Please suggest.



http://dba.stackexchange.com/questions/31008/pg-dump-and-error-missing-chunk-number-0-for-toast-value

Though the question still remains, what got it into that state?

Any recent hardware/software issues?
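
For what it's worth, toast table pg_toast_2619 belongs to the pg_statistic
catalog (OID 2619), so a commonly suggested first step is the sketch below.
It is only a sketch: take a backup first, and it may not repair the
underlying damage.

-- Confirm which catalog owns the damaged toast table (should show pg_statistic):
SELECT 2619::regclass;

-- Rebuild the toast table's index, in case the index is the damaged part:
REINDEX TABLE pg_toast.pg_toast_2619;

-- If the error persists, drop the cached statistics rows for the affected
-- table (requires superuser) and let ANALYZE regenerate them:
DELETE FROM pg_statistic WHERE starelid = 'mm_activealarm'::regclass;
ANALYZE mm_activealarm;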



Regards

Tarkeshwar




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Fwd: [GENERAL] Problem with pg_dump and decimal mark

2014-12-03 Thread Adrian Klaver

On 12/03/2014 12:23 AM, Eric Svenson wrote:


Hi Adrian,

thank you for your support. Here is the information you requested:


I have already done that and found something strange:

On the PC where the backup was done with pg_dump, all locale settings of
Postgres were English/United States. (LC_COLLATE, LC_CTYPE, LC_MONETARY,
LC_NUMERIC in postgresql.conf)


 > OS and OS version?

Windows Server 2012R2

 > Postgres version?

Version 9.2.1 (same version used on all machines)

 > What was the pg_dump command used?

pg_dump -p 6789 -EUTF8 -f myFile.sql my_Database




 > How was it loaded via psql?

psql -p 6789 -U postgres my_Database < myFile.sql

 > Was the psql on the same machine and from the same version of Postgres?

Same version, but on different machines.


On the second PC (Virtual Machine) I had the SAME settings in
postgresql.conf (German_Germany)


 > OS and OS version?

Windows 7 Ultimate SP 1

 > Postgres version?

Version 9.2.1 (same version used on all machines)

 > How was it loaded via psql?

psql -p 6789 -U postgres my_Database < myFile.sql

 > Did you use the psql on the VM or did you use the psql on the host?

psql on the VM



So what happens if you load into the VM's Postgres using the psql from the host?



Again, thank you for the support.


As Tom said LC_NUMERIC is set to C in main.c:

/*
 * We keep these set to "C" always, except transiently in pg_locale.c; see
 * that file for explanations.
 */

The transiently part intrigued me so I went to pg_locale.c and there was 
a big:


* !!! NOW HEAR THIS !!!

with regards to resetting locales. Honestly the explanation is beyond my 
expertise. I offer it only as a starting point for those that can 
understand it.





Regards,
Eric Svenson



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Auto vacuum not running -- Could not bind socket for statistics collector

2014-12-03 Thread Tim Schäfer
Hi,

on the netstat question: it seems it bound the default Postgres port (5432)
successfully on both interfaces (lo and eth0):

postgres@ixion41:~/data> netstat -na | grep postg
unix 2 [ ACC ] STREAM LISTENING 21156728
/var/run/postgresql/.s.PGSQL.5432
postgres@ixion41:~/data> netstat -na | grep 5432
tcp 0 0 192.168.185.41:5432 0.0.0.0:* LISTEN
tcp 0 0 127.0.0.1:5432 0.0.0.0:* LISTEN
tcp 0 0 192.168.185.41:5432 192.168.185.42:40013 ESTABLISHED
tcp 0 0 192.168.185.41:5432 192.168.185.38:34332 ESTABLISHED
tcp 0 0 192.168.185.41:5432 192.168.185.42:40036 ESTABLISHED
tcp 0 0 192.168.185.41:5432 192.168.185.38:34285 ESTABLISHED
tcp 0 0 192.168.185.41:5432 192.168.185.35:49972 ESTABLISHED
tcp 0 0 192.168.185.41:5432 192.168.185.39:40494 ESTABLISHED
tcp 0 0 192.168.185.41:5432 192.168.185.42:40035 ESTABLISHED
tcp 0 0 192.168.185.41:5432 192.168.185.42:39976 ESTABLISHED
tcp 0 0 192.168.185.41:5432 192.168.185.42:40042 ESTABLISHED
tcp 0 0 192.168.185.41:5432 192.168.185.35:49975 ESTABLISHED
tcp 0 0 192.168.185.41:5432 192.168.185.42:40041 ESTABLISHED
tcp 0 0 192.168.185.41:5432 192.168.185.42:40023 ESTABLISHED
tcp 0 0 192.168.185.41:5432 192.168.185.34:50950 ESTABLISHED
tcp 0 0 192.168.185.41:5432 192.168.185.38:34339 ESTABLISHED
tcp 0 0 192.168.185.41:5432 192.168.185.42:39995 ESTABLISHED
tcp 0 0 192.168.185.41:5432 192.168.185.34:50922 ESTABLISHED
tcp 0 0 192.168.185.41:5432 192.168.185.42:40009 ESTABLISHED
tcp 0 0 192.168.185.41:5432 192.168.185.38:34343 ESTABLISHED
tcp 0 0 192.168.185.41:5432 192.168.185.42:40048 ESTABLISHED
tcp 0 0 192.168.185.41:5432 192.168.185.34:50962 ESTABLISHED
tcp 0 0 192.168.185.41:5432 192.168.185.38:34320 ESTABLISHED
tcp 0 0 192.168.185.41:5432 192.168.185.35:49966 ESTABLISHED
tcp 0 0 192.168.185.41:5432 192.168.185.36:58217 ESTABLISHED
tcp 0 0 192.168.185.41:5432 192.168.185.35:49910 ESTABLISHED
tcp 0 0 192.168.185.41:5432 192.168.185.39:40563 ESTABLISHED
tcp 0 0 192.168.185.41:5432 192.168.185.39:40566 ESTABLISHED
tcp 0 0 192.168.185.41:5432 192.168.185.35:49937 ESTABLISHED
tcp 0 0 192.168.185.41:5432 192.168.185.34:50891 ESTABLISHED
tcp 0 0 192.168.185.41:5432 192.168.185.39:40549 ESTABLISHED
tcp 0 0 192.168.185.41:5432 192.168.185.36:58215 ESTABLISHED
tcp 0 0 192.168.185.41:5432 192.168.185.40:59597 ESTABLISHED
tcp 0 0 192.168.185.41:5432 192.168.185.40:59590 ESTABLISHED
tcp 0 0 192.168.185.41:5432 192.168.185.38:34335 ESTABLISHED
tcp 0 0 192.168.185.41:5432 192.168.185.36:58212 ESTABLISHED
tcp 0 0 192.168.185.41:5432 192.168.185.35:49963 ESTABLISHED
tcp 0 0 192.168.185.41:5432 192.168.185.35:49950 ESTABLISHED
tcp 0 0 192.168.185.41:5432 192.168.185.39:40540 ESTABLISHED
tcp 0 0 192.168.185.41:5432 192.168.185.36:58167 ESTABLISHED
tcp 0 0 192.168.185.41:5432 192.168.185.36:58188 ESTABLISHED
tcp 0 0 192.168.185.41:5432 192.168.185.42:40032 ESTABLISHED
tcp 0 0 192.168.185.41:5432 192.168.185.42:40020 ESTABLISHED
tcp 0 0 192.168.185.41:5432 192.168.185.40:59587 ESTABLISHED
tcp 0 0 192.168.185.41:5432 192.168.185.40:59575 ESTABLISHED
tcp 0 0 192.168.185.41:5432 192.168.185.42:40046 ESTABLISHED
tcp 0 0 192.168.185.41:5432 192.168.185.40:59594 ESTABLISHED
tcp 0 0 192.168.185.41:5432 192.168.185.38:34345 ESTABLISHED
tcp 0 0 192.168.185.41:5432 192.168.185.42:40027 ESTABLISHED
tcp 0 0 192.168.185.41:5432 192.168.185.39:40559 ESTABLISHED
tcp 0 846 192.168.185.41:5432 192.168.185.36:58221 ESTABLISHED
tcp 0 0 192.168.185.41:5432 192.168.185.34:50947 ESTABLISHED
tcp 0 0 192.168.185.41:5432 192.168.185.42:39960 ESTABLISHED
tcp 0 0 192.168.185.41:5432 192.168.185.36:58196 ESTABLISHED
tcp 0 0 192.168.185.41:5432 192.168.185.34:50954 ESTABLISHED
tcp 0 0 192.168.185.41:5432 192.168.185.42:40024 ESTABLISHED
tcp 0 0 192.168.185.41:5432 192.168.185.35:49969 ESTABLISHED
tcp 0 0 192.168.185.41:5432 192.168.185.34:50961 ESTABLISHED
tcp 0 0 192.168.185.41:5432 192.168.185.39:40522 ESTABLISHED
tcp 0 0 192.168.185.41:5432 192.168.185.36:58191 ESTABLISHED
tcp 0 0 192.168.185.41:5432 192.168.185.38:34294 ESTABLISHED
tcp 0 0 192.168.185.41:5432 192.168.185.40:59584 ESTABLISHED
tcp 0 0 192.168.185.41:5432 192.168.185.40:59581 ESTABLISHED
tcp 0 0 192.168.185.41:5432 192.168.185.34:50958 ESTABLISHED
tcp 0 0 192.168.185.41:5432 192.168.185.40:59599 ESTABLISHED
tcp 0 0 192.168.185.41:5432 192.168.185.39:40557 ESTABLISHED
unix 2 [ ACC ] STREAM LISTENING 21156730 /tmp/.s.PGSQL.5432
unix 2 [ ACC ] STREAM LISTENING 21156728
/var/run/postgresql/.s.PGSQL.5432
unix 3 [ ] STREAM CONNECTED 21244990 /tmp/.s.PGSQL.5432

All the ESTABLISHED connections are the clients writing to it (slowly).

Also you say you are using pg_ctl to start the program.
What user are you doing that as?


As user 'postgres'. If I try it as root, it gives me the following message:

# pg_ctl restart
pg_ctl: cannot be run as root
Please log in (using, e.g., "su") as the (unprivileged) user that will
own the server process.

Also any other changes in the Connections portion of postgresql.conf.


The only change in that section besides 'liste

Re: [GENERAL] update several columns from function returning several values

2014-12-03 Thread Rémi Cura
Thanks,
waiting for the 9.5 then =)
Cheers,
Rémi-C

2014-12-02 18:23 GMT+01:00 Tom Lane :

> Rémi Cura writes:
> > If I define a function returning several values, I can't use it to update
> > multiple columns of a table at once.
> > ...
> > UPDATE test_update_m_values SET (gid_plus_1,gid_minus_1) =
> > (rc_test_update_m_values(gid)); --doesn't work
>
> > Somebody now if this is possible?
>
> Not currently.  In 9.5 it'll be possible to do
>
> UPDATE test_update_m_values SET (gid_plus_1,gid_minus_1) =
> (select * from rc_test_update_m_values(gid));
>
> but the syntax you were trying will never work, because it would be
> ambiguous with the case of assigning a composite value to a single
> composite column.
>
> regards, tom lane
>
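
To make the 9.5 form concrete, here is a small self-contained sketch (the
table contents and function body are assumed for illustration; only the names
come from the original question):

CREATE TABLE test_update_m_values (gid int PRIMARY KEY, gid_plus_1 int, gid_minus_1 int);
INSERT INTO test_update_m_values (gid) VALUES (1), (2), (3);

CREATE FUNCTION rc_test_update_m_values(g int, OUT plus_1 int, OUT minus_1 int)
AS $$ SELECT g + 1, g - 1 $$ LANGUAGE sql;

-- Works in PostgreSQL 9.5 and later:
UPDATE test_update_m_values
SET (gid_plus_1, gid_minus_1) = (SELECT * FROM rc_test_update_m_values(gid));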


[GENERAL] Mistake in documentation? ANALYZE on partitioned tables

2014-12-03 Thread pinker
Hi!
As I have read in the documentation: "If you are using manual VACUUM or
ANALYZE commands, don't forget that you need to run them on each partition
individually. A command like: ANALYZE measurement; will only process the
master table."

But when I run ANALYZE VERBOSE orders; (test table with 12 partitions) I
get:

INFO:  analyzing "public.orders"
INFO:  "orders": scanned 2747 of 2747 pages, containing 0 live rows and 0
dead rows; 0 rows in sample, 0 estimated total rows
INFO:  analyzing "public.orders" inheritance tree
INFO:  "orders": scanned 2750 of 2750 pages, containing 0 live rows and 0
dead rows; 0 rows in sample, 0 estimated total rows
INFO:  "orders_2014_01": scanned 418 of 418 pages, containing 0 live rows and
0 dead rows; 0 rows in sample, 0 estimated total rows
INFO:  "orders_2014_02": scanned 377 of 377 pages, containing 0 live rows and
0 dead rows; 0 rows in sample, 0 estimated total rows
INFO:  "orders_2014_03": scanned 418 of 418 pages, containing 0 live rows and
0 dead rows; 0 rows in sample, 0 estimated total rows
INFO:  "orders_2014_04": scanned 404 of 404 pages, containing 0 live rows and
0 dead rows; 0 rows in sample, 0 estimated total rows
INFO:  "orders_2014_05": scanned 418 of 418 pages, containing 0 live rows and
0 dead rows; 0 rows in sample, 0 estimated total rows
INFO:  "orders_2014_06": scanned 404 of 404 pages, containing 0 live rows and
0 dead rows; 0 rows in sample, 0 estimated total rows
INFO:  "orders_2014_07": scanned 317 of 317 pages, containing 0 live rows and
0 dead rows; 0 rows in sample, 0 estimated total rows


Query returned successfully in 353 ms. No rows returned.

All partitions were processed...
So I guess: mistake in documentation?







Re: [GENERAL] Auto vacuum not running -- Could not bind socket for statistics collector

2014-12-03 Thread Tom Lane
Tim Schäfer writes:
>> On December 2, 2014 at 4:51 PM Tom Lane  wrote:
>> Yes, this will break autovacuum, because it won't have any way to find out
>> what it should vacuum.  The cause probably is a DNS issue: "localhost"
>> isn't resolving to anything sensible.  "dig localhost" on the command line
>> might offer some insight.

> thanks for your answer. Here is my full 'dig localhost' from the database
> server:
> ...
> Looks fine to me. Or is there something wrong with it?

Hmph, looks fine to me too.

> And are you sure pgsql is unhappy with localhost? It would be great if I
> definitely knew the address it is trying to bind. Is there a way to tell?

As I mentioned, this has nothing to do with listen_addresses or the port
setting; the code in pgstat.c is hard-wired to bind to whatever
"localhost" resolves as.

One idea is to see if you can strace postmaster startup (or whatever your
preferred local equivalent of strace is, perhaps truss).  That will
produce a great deal of output but there should only be a few bind()
calls so it won't be too hard to find the section where this is happening.

Hmm ... actually, when I try it here (on a RHEL6 machine) the relevant
stretch of output is

open("/etc/hosts", O_RDONLY|O_CLOEXEC)  = 8
fcntl(8, F_GETFD)   = 0x1 (flags FD_CLOEXEC)
fstat(8, {st_mode=S_IFREG|0644, st_size=158, ...}) = 0
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 
0x7fa42206a000
read(8, "127.0.0.1   localhost localhost."..., 4096) = 158
read(8, "", 4096)   = 0
close(8)= 0
munmap(0x7fa42206a000, 4096)= 0
socket(PF_INET, SOCK_DGRAM, IPPROTO_IP) = 8
connect(8, {sa_family=AF_INET, sin_port=htons(0), 
sin_addr=inet_addr("127.0.0.1")}, 16) = 0
getsockname(8, {sa_family=AF_INET, sin_port=htons(37185), 
sin_addr=inet_addr("127.0.0.1")}, [16]) = 0
close(8)= 0
socket(PF_INET6, SOCK_DGRAM, IPPROTO_IP) = 8
connect(8, {sa_family=AF_INET6, sin6_port=htons(0), inet_pton(AF_INET6, "::1", 
&sin6_addr), sin6_flowinfo=0, sin6_scope_id=0}, 28) = 0
getsockname(8, {sa_family=AF_INET6, sin6_port=htons(39774), inet_pton(AF_INET6, 
"::1", &sin6_addr), sin6_flowinfo=0, sin6_scope_id=0}, [28]) = 0
close(8)= 0
socket(PF_INET6, SOCK_DGRAM, IPPROTO_IP) = 8
bind(8, {sa_family=AF_INET6, sin6_port=htons(0), inet_pton(AF_INET6, "::1", 
&sin6_addr), sin6_flowinfo=0, sin6_scope_id=0}, 28) = 0
getsockname(8, {sa_family=AF_INET6, sin6_port=htons(46928), inet_pton(AF_INET6, 
"::1", &sin6_addr), sin6_flowinfo=0, sin6_scope_id=0}, [28]) = 0
connect(8, {sa_family=AF_INET6, sin6_port=htons(46928), inet_pton(AF_INET6, 
"::1", &sin6_addr), sin6_flowinfo=0, sin6_scope_id=0}, 28) = 0
sendto(8, "\307", 1, 0, NULL, 0)= 1
select(9, [8], NULL, NULL, {0, 50}) = 1 (in [8], left {0, 44})
recvfrom(8, "\307", 1, 0, NULL, NULL)   = 1
fcntl(8, F_SETFL, O_RDONLY|O_NONBLOCK)  = 0

which suggests that getaddrinfo() preferentially looks in /etc/hosts
before contacting any DNS server.  So perhaps that "dig" call is not
telling you the real state of affairs, and what you need to do is
see if there's a bogus entry for localhost in /etc/hosts.

regards, tom lane




Re: [GENERAL] Merge rows based on Levenshtein distance

2014-12-03 Thread pinker
There is a nice extension in Postgres, fuzzystrmatch, which I have used
to calculate the distance. From the documentation:

SELECT levenshtein_less_equal('extensive', 'exhaustive',2);

You can then use it with your GROUP BY query.







Re: [GENERAL] Auto vacuum not running -- Could not bind socket for statistics collector

2014-12-03 Thread Tim Schäfer
Hi,

> On December 3, 2014 at 5:04 PM Tom Lane  wrote:
> ...
> which suggests that getaddrinfo() preferentially looks in /etc/hosts
> before contacting any DNS server.  So perhaps that "dig" call is not
> telling you the real state of affairs, and what you need to do is
> see if there's a bogus entry for localhost in /etc/hosts.

Thanks a lot, your last hint did the trick!

The /etc/hosts file looked like this on the DB server:

  ::1 localhost ipv6-localhost ipv6-loopback
  192.168.185.41  ixion41.molbi ixion41
  192.168.185.41  ixion41.molbi ixion41

I changed it to this:

  127.0.0.1   localhost
  ::1 ipv6-localhost ipv6-loopback
  192.168.185.41  ixion41.molbi ixion41
  192.168.185.41  ixion41.molbi ixion41


Then I restarted the db server. Now I find this in the logs:
2014-12-03 17:17:53 CET LOG:  database system was shut down at 2014-12-03
17:17:51 CET
2014-12-03 17:17:53 CET LOG:  database system is ready to accept connections
2014-12-03 17:17:53 CET LOG:  autovacuum launcher started


So it seems to work now. Thanks a lot for your time & efforts, Tom Lane!

--
Tim




Re: [GENERAL] Mistake in documentation? ANALYZE on partitioned tables

2014-12-03 Thread Tom Lane
pinker  writes:
> As I have read in the documentation: "If
> you are using manual VACUUM or ANALYZE commands, don't forget that you need
> to run them on each partition individually. A command like: ANALYZE
> measurement; will only process the master table."

> But when I run ANALYZE VERBOSE orders; (test table with 12 partitions) I
> get:
> All partitions were processed...
> So I guess: mistake in documentation?

No.  All the partitions were *scanned*, to generate whole-inheritance-tree
statistics for the parent table.  But this did not update stats associated
with the individual child tables.
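
If you do want per-partition statistics without running each ANALYZE by hand,
something like the following sketch works (assuming an inheritance-partitioned
parent named orders, as in the original post):

DO $$
DECLARE
    child regclass;
BEGIN
    -- The parent first, which also rebuilds the whole-inheritance-tree stats ...
    EXECUTE 'ANALYZE orders';
    -- ... then each child table individually, as the documentation recommends.
    FOR child IN
        SELECT inhrelid::regclass
        FROM pg_inherits
        WHERE inhparent = 'orders'::regclass
    LOOP
        EXECUTE format('ANALYZE %s', child);
    END LOOP;
END
$$;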

regards, tom lane




Re: [GENERAL] Merge rows based on Levenshtein distance

2014-12-03 Thread mongoose
David,

Thanks for the useful feedback. Since I am not an experienced developer, it
is too complicated for me to come up with the queries. Besides, I wonder
whether doing this processing in PostgreSQL is going to be efficient.







Re: [GENERAL] Merge rows based on Levenshtein distance

2014-12-03 Thread David G Johnston
I'll play with it when I get a chance, but you should at least try to code
something.

Dave


On Wed, Dec 3, 2014 at 11:08 AM, mongoose [via PostgreSQL] <
ml-node+s1045698n5829132...@n5.nabble.com> wrote:

> David,
>
> Thanks for the useful feedback. Since I am not an experienced developer it
> is too complicated for me to come up with the queries. Besides I wonder if
> this is going to be efficient to do this processing on PostgreSQL.
>





Re: [GENERAL] Merge rows based on Levenshtein distance

2014-12-03 Thread David G Johnston
On Wed, Dec 3, 2014 at 9:14 AM, pinker [via PostgreSQL] <
ml-node+s1045698n5829111...@n5.nabble.com> wrote:

> There is nice extension in postgres: fuzzystrmatch
>  I have
> used to calculate the distance. From documetation:
>
> SELECT levenshtein_less_equal('extensive', 'exhaustive',2);
>
> You can use it then with your group by query.
>
>
​Something like this - replace the substring(...) comparison with
levenshtein_less_equal(...) or whatever comparison you find applicable.

In the case below new groups are started whenever the first letter of the
value changes.

The first group would be NULL so I add a COALESCE() call to make it 0 -
subsequent groups start with 1 and increment properly.

WITH src (val) AS (
    VALUES ('A1'::varchar), ('A2'), ('B1'), ('B2'), ('B3'), ('C1'), ('D1')
)
, grp AS (
    SELECT val
         , CASE WHEN substring(val,1,1) <> substring(lag(val) OVER (ORDER BY val),1,1)
                THEN 1
                ELSE NULL
           END AS changed
         , ROW_NUMBER() OVER (ORDER BY val) AS val_idx
    FROM src
)
SELECT val, COALESCE(sum(changed) OVER (ORDER BY val_idx), 0) AS group_id
FROM grp;

David J.​
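
Spelling out that substitution, here is a sketch that starts a new group
whenever the Levenshtein distance to the previous value exceeds 2 (the
threshold is arbitrary, and the fuzzystrmatch extension must be installed):

CREATE EXTENSION IF NOT EXISTS fuzzystrmatch;

WITH src (val) AS (
    VALUES ('A1'::varchar), ('A2'), ('B1'), ('B2'), ('B3'), ('C1'), ('D1')
)
, grp AS (
    SELECT val
         , CASE WHEN levenshtein_less_equal(val,
                                            lag(val) OVER (ORDER BY val),
                                            2) > 2
                THEN 1
                ELSE NULL
           END AS changed
         , ROW_NUMBER() OVER (ORDER BY val) AS val_idx
    FROM src
)
SELECT val, COALESCE(sum(changed) OVER (ORDER BY val_idx), 0) AS group_id
FROM grp;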





Re: [GENERAL] Merge rows based on Levenshtein distance

2014-12-03 Thread Michael Nolan
I don't think you've defined your problem very clearly.

Suppose you have 1000 names in your database.  Are you planning to compare
each name to the other 999 names to see which is closest?  What if two
names are equally close to a third name but not to each other, how do you
decide which is better?
--
Mike Nolan


Re: [GENERAL] Merge rows based on Levenshtein distance

2014-12-03 Thread mongoose

Thanks for the help. I will give your code a try. Btw, I know how to solve
this in a different language, but unfortunately I am a real rookie with
databases.







Re: [GENERAL] Merge rows based on Levenshtein distance

2014-12-03 Thread mongoose
Hi Mike,

I was planning to do something like David suggested: I would sort the rows
based on name and then I would use a window (e.g. 100 rows) to compare each
individual name to the previous 100. All I want to do is create groups of
similar rows based on some criteria. 







[GENERAL] INSERT to partitioned table doesn't return row count

2014-12-03 Thread Kris Deugau
Is it possible to return the number of rows inserted to a partitioned
table set up as per the examples on
http://www.postgresql.org/docs/9.1/interactive/ddl-partitioning.html, in
the same way as if you inserted directly into a target table?

I can sort of see why it returns 0, because 0 rows are actually inserted
in the parent table, but I'd prefer not to have to make all the code
around this use case partition-aware;  most of it isn't mine.

I tried modifying the last RETURN line in the trigger function but
couldn't find anything that was even valid syntax.
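
For readers who haven't seen it, here is a minimal sketch of the
documentation-style redirect trigger in question (table names follow the
measurement example on the linked page, collapsed to a single child). The
RETURN NULL at the end is what suppresses the row in the parent table, and
with it the reported count:

CREATE OR REPLACE FUNCTION measurement_insert_trigger()
RETURNS trigger AS $$
BEGIN
    -- Redirect the row into the appropriate child table ...
    INSERT INTO measurement_y2014 VALUES (NEW.*);
    -- ... and tell the executor to skip the insert into the parent.  Because
    -- the parent-level insert is skipped, psql reports "INSERT 0 0".
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER insert_measurement_trigger
    BEFORE INSERT ON measurement
    FOR EACH ROW EXECUTE PROCEDURE measurement_insert_trigger();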

-kgd




Re: [GENERAL] Merge rows based on Levenshtein distance

2014-12-03 Thread Michael Nolan
Have you considered using a soundex function to sort names into similarity
groups?  In my experience it works fairly well with Western European names,
not quite as well with names from other parts of the world.  It also
doesn't deal well with many nicknames (Mike instead of Michael, etc.)

--
Mike Nolan


Re: [GENERAL] INSERT to partitioned table doesn't return row count

2014-12-03 Thread Adrian Klaver

On 12/03/2014 11:19 AM, Kris Deugau wrote:

Is it possible to return the number of rows inserted to a partitioned
table set up as per the examples on
http://www.postgresql.org/docs/9.1/interactive/ddl-partitioning.html, in
the same way as if you inserted directly into a target table?

I can sort of see why it returns 0, because 0 rows are actually inserted
in the parent table, but I'd prefer not to have to make all the code
around this use case partition-aware;  most of it isn't mine.


I have not used partitioned tables enough, but it would seem this is 
covered here:


http://www.postgresql.org/docs/9.3/interactive/sql-select.html

table_name

The name (optionally schema-qualified) of an existing table or 
view. If ONLY is specified before the table name, only that table is 
scanned. If ONLY is not specified, the table and all its descendant 
tables (if any) are scanned. < ***>Optionally, * can be specified after 
the table name to explicitly indicate that descendant tables are 
included.<***>




I tried modifying the last RETURN line in the trigger function but
couldn't find anything that was even valid syntax.

-kgd





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] INSERT to partitioned table doesn't return row count

2014-12-03 Thread David G Johnston
Adrian Klaver-4 wrote
> On 12/03/2014 11:19 AM, Kris Deugau wrote:
>> Is it possible to return the number of rows inserted to a partitioned
>> table set up as per the examples on
>> http://www.postgresql.org/docs/9.1/interactive/ddl-partitioning.html, in
>> the same way as if you inserted directly into a target table?
>>
>> I can sort of see why it returns 0, because 0 rows are actually inserted
>> in the parent table, but I'd prefer not to have to make all the code
>> around this use case partition-aware;  most of it isn't mine.
> 
> I have not used partitioned tables enough, but it would seem this is 
> covered here:
> 
> http://www.postgresql.org/docs/9.3/interactive/sql-select.html
> 
> table_name
> 
>  The name (optionally schema-qualified) of an existing table or 
> view. If ONLY is specified before the table name, only that table is 
> scanned. If ONLY is not specified, the table and all its descendant 
> tables (if any) are scanned. < ***>Optionally, * can be specified after 
> the table name to explicitly indicate that descendant tables are 
> included.<***>

I don't see how what you quoted has any relation to the problem posed by the
OP...

Going from recent memory this particular behavior complaint has now come up
three times in the past six months - the main complaint previously is that
given an insert trigger for the partition you have to copy, not move, the
insert to the child tables - leaving the parent table populated during the
insert and thus returning the count - and then delete the record from the
parent table.  That sequence, while solving the row number problem, then
causes vacuum to behave undesirably.
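
A sketch of that copy-then-delete variant, to make the trade-off concrete
(hypothetical names again; queries against the parent see the duplicate rows
until the cleanup DELETE runs, and the resulting churn is what upsets vacuum):

CREATE OR REPLACE FUNCTION measurement_insert_copy()
RETURNS trigger AS $$
BEGIN
    INSERT INTO measurement_y2014 VALUES (NEW.*);  -- copy into the child ...
    RETURN NEW;  -- ... but also let the row land in the parent, so the
                 -- statement reports the true row count.
END;
$$ LANGUAGE plpgsql;

-- Later, as a separate maintenance step, purge the duplicates from the parent only:
DELETE FROM ONLY measurement;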

David J.










Re: [GENERAL] segmentation fault postgres 9.3.5 core dump perlu related ?

2014-12-03 Thread Tom Lane
"Day, David"  writes:
> We are developing on and running Postgres 9.3.5 on FreeBSD 10.0-p12.
> We have been experiencing an intermittent postgres core dump which
> seems primarily to be associated with the 2 functions below.

> Given the onset of this problem, we suspect it has something to do with the
> addition of DNS lookup within our perlu function cc.get_sip_id(...).

So this bit is new?

> open my $fh, "/sbin/route get $host |";

I wonder if your version of Perl thinks this is sufficient license to go
multithreaded or something like that.  That could be problematic.  You
might try looking to see if a backend process that's successfully executed
this code now contains multiple threads.

It's difficult to offer much help on the basis of the info provided.
One comment is that the stack trace you show is completely nonsensical:
functions by those names do exist in PG, but the calling order shown
is impossible.  So it seems there's some problem in how you rebuilt
with debug symbols --- maybe the symbols being used don't match the
executable?  I'm not a FreeBSD user so I have no useful speculation
to offer about how such a mixup might occur on that platform.

regards, tom lane




Re: [GENERAL] INSERT to partitioned table doesn't return row count

2014-12-03 Thread Kris Deugau
David G Johnston wrote:
>> http://www.postgresql.org/docs/9.3/interactive/sql-select.html
>>
>> table_name
>>
>>  The name (optionally schema-qualified) of an existing table or 
>> view. If ONLY is specified before the table name, only that table is 
>> scanned. If ONLY is not specified, the table and all its descendant 
>> tables (if any) are scanned. < ***>Optionally, * can be specified after 
>> the table name to explicitly indicate that descendant tables are 
>> included.<***>
> 
> I don't see how what you quoted has any relation to the problem posed by the
> OP...

*nod*  SELECTs work just fine;  by default they'll pull data from all
necessary child tables, and return the correct result row count.

It's on INSERT, where if you have a trigger that diverts the actual
INSERT to a child table, you get:

INSERT 0 0

returned in psql, instead of

INSERT 0 1

for one row, or

INSERT 0 10000

for 10K rows, and similar results from eg Perl DBI.

> Going from recent memory this particular behavior complaint has now come up
> three times in the past six months - the main complaint previously is that
> given an insert trigger for the partition you have to copy, not move, the
> insert to the child tables - leaving the parent table populated during the
> insert and thus returning the count - and then delete the record from the
> parent table.  That sequence, while solving the row number problem, then
> causes vacuum to behave undesirably.

Eugh.  For the (mostly) one-off bulk-copy process I've been preparing I
have a couple of other workarounds (simplest being just inserting in the
child table directly), but if it comes down to it, it will be simpler to
put up with the relatively minor nuisance of staying unpartitioned
rather than (potentially) destabilizing someone else's code.  After all,
I've already written the code to archive old records from the
unpartitioned table anyway...  it just would have been nice to be able
to "pg_dump dbname -t table_2013" instead.

-kgd




Re: [GENERAL] INSERT to partitioned table doesn't return row count

2014-12-03 Thread David G Johnston
Kris Deugau wrote
> David G Johnston wrote:
>> Going from recent memory this particular behavior complaint has now come
>> up
>> three times in the past six months - the main complaint previously is
>> that
>> given an insert trigger for the partition you have to copy, not move, the
>> insert to the child tables - leaving the parent table populated during
>> the
>> insert and thus returning the count - and then delete the record from the
>> parent table.  That sequence, while solving the row number problem, then
>> causes vacuum to behave undesirably.
> 
> Eugh.  For the (mostly) one-off bulk-copy process I've been preparing I
> have a couple of other workarounds (simplest being just inserting in the
> child table directly), but if it comes down to it it will be simpler to
> put up with the relatively minor nuisance of staying unpartitioned
> rather than (potentially) destabilizing someone else's code.  After all,
> I've already written the code to archive old records from the
> unpartitioned table anyway...  it just would have been nice to be able
> to "pg_dump dbname -t table_2013" instead.

The specific thread I was thinking of is here:

http://postgresql.nabble.com/Autovacuum-on-partitioned-tables-in-version-9-1-td5826595.html

The links referenced there provide the basis for my thought that there might
be 3 recent examples...

David J.








Re: [GENERAL] Postgres 9.2 PITR testing to before DROP DATABASE ends up removing file system files

2014-12-03 Thread Adrian Klaver

On 12/02/2014 03:50 PM, Joshua Boyd wrote:

Having continued my research, the problem I encountered is the exact
same that's been recorded here:

https://www.marshut.net/kstxxk/pitr-failing-to-stop-before-drop-database.html



I took the backup by following the procedures as laid forth in the
continuous archiving document
(http://www.postgresql.org/docs/9.2/static/continuous-archiving.html)

I configured the archiving to gzip the wal files to pgdata/archive
Then I started the backup process by issuing "select
pg_start_backup('mylabel')"
next tarball'd the contents of pgdata, excluding the pgdata/archive dir
and the pgdata/pg_xlog dir (although preserving the directory structure)
after that I issued "select pg_stop_backup()"
then I added the contents of pgdata/archive and pgdata/pg_xlog to the
tarball above
then I gzipped the tarball.


The above is not necessary; you are going to empty the pg_xlog directory
later anyway, and you want the archive contents from the later date when
you do the restore below.




The above is how I archived and backed up..

To restore the backup, I shut down the server, moved pgdata to
pgdata_backup, untarballed the backup tarball, removed all the files in
the new pgdata/pg_xlog dir,


See, you empty the pg_xlog dir you saved.

 copied the files from pgdata_backup/archive

and pgdata_backup/pg_xlog into the new pgdata dir, set up the
recovery.conf file giving it a timestamp gathered from the
pgdata_backup/pg_log/<> log files..  I copied ALL the pg_xlog files ...
not simply the "unarchived ones".



All of the unarchived ones should have

been removed when I removed the contents of the pg_xlog dir after
restoring the tarball..


Well, those were the WALs from the time of the backup and should already
be archived, assuming sufficient time/activity between backup and recovery.




I think I answered all the questions - please let me know if I missed
some.  Based on the url I pasted at the top, though, it appears I'm not
the only one who's encountered this problem.


Yes, from the answers it would seem the solution is to push your
recovery_target_time further into the future. If I am following the
discussion, DROP DATABASE removes the files from the file system first
and then clears the system catalog. So by ending the recovery at the
moment of DROP DATABASE you are getting to the point where it has dropped
the files but not yet removed the catalog entry. Try letting the recovery
run a little longer to see if that works. Also, it seems there is a patch
in the works to correct this. :)









Joshua Boyd



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Postgres 9.2 PITR testing to before DROP DATABASE ends up removing file system files

2014-12-03 Thread Adrian Klaver

On 12/02/2014 03:50 PM, Joshua Boyd wrote:

Having continued my research, the problem I encountered is the exact
same that's been recorded here:

https://www.marshut.net/kstxxk/pitr-failing-to-stop-before-drop-database.html





I think I answered all the questions - please let me know if I missed
some.  Based on the url I pasted at the top, though, it appears I'm not
the only one who's encountered this problem.



Re-read the initial post and realized you wanted the state of the 
recovered cluster to include the database that was dropped. In that case 
I would say stop the recovery just before the DROP DATABASE.







--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Postgres 9.2 PITR testing to before DROP DATABASE ends up removing file system files

2014-12-03 Thread Adrian Klaver

On 12/03/2014 03:52 PM, Joshua Boyd wrote:

I tried that when I was testing .. if I stopped at the most recent
insert/update/delete previous to the drop database (with telling it to
include the change) it DIDN'T include the change (I assume because the
commit timestamp was slightly after the transaction timestamp) .. and if
I told it to stop a little later than that, it removed the files
(because it got to the drop database statement and stopped before the
commit, but still deleted the files). For some reason it ignored SELECT
statements - I assume those are not actually written into the wal files,
and that would be why.  But .. that's only a guess.  I'm not that
educated with regard to the inner workings of Postgres..  :)

In the meantime, until the patch is released, the method I have
wrangled to "get around the issue" is to restore twice ... the first
time using a timestamp, recording the xid that pg_log reports it stopped
before committing.  Then re-restore using that xid.  That works, keeps
the most recent insert/update/delete, and DOESN'T delete the files..
Kind of a pain, but it works.
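
For anyone trying the same workaround, a sketch of the second pass (the path
is a placeholder and 1234567 stands in for whatever xid the first run's log
said it was stopping before):

restore_command = 'cp /path/to/wal_archive/%f %p'
recovery_target_xid = '1234567'
recovery_target_inclusive = false    # stop just before that transaction's commit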


Yeah, DROP DATABASE is not transactional (or maybe semi-transactional) and
lives in a gray area.




Anyway .. thanks for the assistance.  :)





--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] High Availability Cluster

2014-12-03 Thread Sameer Kumar
On Thu, Nov 27, 2014 at 7:26 PM, Maila Fatticcioni  wrote:

> I was wondering if there is another way to set up a complete cluster
> that would manage in case of failure:
>
> * the relocation of the VIP (virtual ip address)
> * the relocation of the main instance of Postgresql
>

Consider trying pgpool. It has an HA mode (and I think the latest version
works using a heartbeat; the older one uses watchdog).

If you are up for a commercial product, you may want to consider using
something like EnterpriseDB's failover manager.


Best Regards,

*Sameer Kumar | Database Consultant*

*ASHNIK PTE. LTD.*

101 Cecil Street, #11-11 Tong Eng Building, Singapore 069533

M: *+65 8110 0350*  T: +65 6438 3504 | www.ashnik.com



Re: [GENERAL] Synchronous Replication Timeout

2014-12-03 Thread Sameer Kumar
On Fri, Nov 28, 2014 at 10:24 AM, Teresa Bradbury wrote:

> I have a replication setup with a master and a single synchronous slave.
> If the slave dies (or the network goes down) I would like any transaction
> on the master that requires writing to fail so I can roll it back. At the
> moment, when I commit it just hangs forever or (if I cancel it using ^C in
> psql or using kill) it commits locally and not on the synchronous slave.
>

I did that for a customer, and I am using a tool (pgpool) to change the
config file if the standby goes down. You can keep the additional
settings in synchronous_master.conf and add an include directive for it
in postgresql.conf.

You just need to write a shell script (or use something like pgpool) to
keep watch: if the slave goes down, change synchronous_master.conf to an
empty file and reload the postgres config (pg_ctl reload).
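
A minimal sketch of the idea, assuming postgresql.conf carries
include 'synchronous_master.conf' and that file normally holds just
synchronous_standby_names = 'standby1'. Hostnames and paths are
placeholders, and pg_isready needs a 9.3+ client (a psql "select 1"
works just as well on older ones):

#!/bin/sh
# crude watchdog: if the standby stops answering, drop to async and reload
if ! pg_isready -h standby-host -p 5432 -q
then
    : > /var/lib/pgsql/data/synchronous_master.conf    # empty the include file
    pg_ctl -D /var/lib/pgsql/data reload
fi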



> Neither of these options are ok in my use case. I have tried setting
> statement_timeout but it does not work. So my questions are:
>
>
>
> 1) Is it possible to rollback transactions that fail to commit after a
> certain amount of time waiting for the slave?
>

No, AFAIK it would have already been committed to the WAL files on the master.

It is just blocked until the slave confirms the same has been done at its end.
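
For what it is worth, this is roughly what that looks like from psql when the
standby is unreachable (the exact wording may vary between versions):

BEGIN;
INSERT INTO t VALUES (1);
COMMIT;    -- hangs here, waiting for the standby
-- pressing Ctrl-C then prints something along the lines of:
-- WARNING:  canceling wait for synchronous replication due to user request
-- DETAIL:  The transaction has already committed locally, but might not have
--          been replicated to the standby.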


>
> 2) If not, is there any intension of implementing such a feature in the
> near future?
>
>
>
> 3) Do any of the answers above change if we are dealing with two-phase
> commits instead? At the moment it hangs forever on ‘prepare transaction’,
> ‘commit prepared’ and ‘rollback prepared’ commands.
>
>


Best Regards,

*Sameer Kumar | Database Consultant*

*ASHNIK PTE. LTD.*

101 Cecil Street, #11-11 Tong Eng Building, Singapore 069533

M: *+65 8110 0350*  T: +65 6438 3504 | www.ashnik.com



Re: [GENERAL] recovering from "too many failures" wal error

2014-12-03 Thread Sameer Kumar
On Mon, Dec 1, 2014 at 10:59 PM, Andy Colson  wrote:

> We have a postgresql 9.2 cluster setup to do continuous wal archiving.
>> We were archiving to a mount point that went offline. As a result the db
>> could not archive the wal files, we ended up with many many errors in
>> the logs indicating the file could not be archived:
>>
>> WARNING:  transaction log file "00010FB10050" could not be
>> archived: too many failures
>>
>> So we caught the issue before the file system filled up, fixed the mount
>> point and I see wal files being added to the target wal archive
>> directory.  However the pg_xlog directory does not seem to be shrinking,
>> there are currently 27,546 files in the pg_xlog directory and that
>> number is not changed in some time (since we fixed the mount point.
>>
>> I assume the db will at some point remove the backed up files in the
>> pg_xlog dir, is this true? or do I need to intervene?
>>
>> Thanks in advance
>>
>>
>>
> From what I recall from this list, you should never play in pg_xlog.
> You'll probably do more damage than good.  PG should take care of itself.
>
+1. Don't ever do that.



> Are you still getting error messages?  Looks like it's been a few days, has
> it shrunk yet?


If you are still getting the error, try to force an archival with:

set synchronous_commit = off;   -- needed only if you are replicating to synchronous slaves

select pg_switch_xlog();        -- though not strictly necessary

select pg_start_backup('test');

select pg_stop_backup();



I use these commands to test if archival is working fine or to force
archival
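
A quick, read-only way to see how many segments are still waiting to be
archived (a sketch; $PGDATA stands for your data directory). Each pending
segment has a .ready marker in there, and once those are gone the following
checkpoints will trim pg_xlog on their own:

ls $PGDATA/pg_xlog/archive_status/ | grep -c '\.ready$'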

Best Regards,

*Sameer Kumar | Database Consultant*

*ASHNIK PTE. LTD.*

101 Cecil Street, #11-11 Tong Eng Building, Singapore 069533

M: *+65 8110 0350*  T: +65 6438 3504 | www.ashnik.com



Re: [GENERAL] High Availability Cluster

2014-12-03 Thread Tatsuo Ishii
> On Thu, Nov 27, 2014 at 7:26 PM, Maila Fatticcioni wrote:
> 
>> I was wondering if there is another way to set up a complete cluster
>> that would manage in case of failure:
>>
>> * the relocation of the VIP (virtual ip address)
>> * the relocation of the main instance of Postgresql
>>
> 
> ​Consider trying pgpool. It has an HA mode (and I think the latest version
> works using a heartbeat, the older one uses watchdog).

No. Just the implementation of the watchdog has changed (more
precisely, a new life check mode was added). Recent versions (since 3.3)
use UDP packets for the life check of pgpool, which is pretty much the
same idea as heartbeat.
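
For reference, the heartbeat-style life check is configured in pgpool.conf
along these lines (parameter names as in the 3.3 series; please check
pgpool.conf.sample for your version, and the hostname/port are placeholders):

use_watchdog = on
wd_lifecheck_method = 'heartbeat'
wd_heartbeat_port = 9694
heartbeat_destination0 = 'other-pgpool-host'
heartbeat_destination_port0 = 9694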

> If you are up for a commercial product, you may want to consider using
> something like EnterpriseDB's failover manager.

Commercial support for pgpool-II is available as well, by the way.

Best regards,
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese:http://www.sraoss.co.jp

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] FW: getting error while running sql on mm_activealrm table

2014-12-03 Thread M Tarkeshwar Rao
Hi Adrian,

Thanks, Adrian, for your quick reply. I am looking into why the db got into
this state.

> I have found overheating servers to be particularly prone to index corruption
> and if indexes can get corrupt one has to worry about data becoming corrupt
> too.

Is the above statement true?
What is the meaning of toasted values?
What is this pg_toast? Can you please share any link on this?

Note from the shared link
--

Given that what you did was a reindex, what probably happened was it used an 
index scan to try to locate the toasted values in the table and couldn't find 
one. This sounds like a corrupted index. Vacuum analyse does alter the table 
but reindex does not and the changes are very minor.

The way to think about this is that TOASTed attributes are actually broken into 
chunks of about 4k in size and these are stored in rows. They are looked up and 
sorted/reconnected with the main row at query time. It sounds like an index 
used here was corrupted and so the reindex solved the problem.

I have found corrupted indexes are usually a sign that something is not well 
with the server. It is good to check and make sure memory, CPU's and hard 
drives are all happy and not reporting problems. I have found overheating 
servers to be particularly prone to index corruption and if indexes can get 
corrupt one has to worry about data becoming corrupt too.
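
For reference, a quick way to see which pg_toast relation backs a table and to
rebuild its index (a sketch only; mm_activealrm stands in for the affected
table and pg_toast_12345 for whatever name the query returns):

-- find the TOAST table behind the main table
SELECT reltoastrelid::regclass FROM pg_class WHERE relname = 'mm_activealrm';

-- then rebuild its index, substituting the name returned above
REINDEX TABLE pg_toast.pg_toast_12345;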


Regards
Tarkeshwar

-Original Message-
From: Adrian Klaver [mailto:adrian.kla...@aklaver.com] 
Sent: 03 December 2014 20:22
To: M Tarkeshwar Rao; pgsql-general@postgresql.org
Subject: Re: [GENERAL] FW: getting error while running sql on mm_activealrm 
table

On 12/03/2014 03:07 AM, M Tarkeshwar Rao wrote:
> Hi all,
>
> We are getting the following error message on doing any action on the
> table like (select or open from pgAdmin).
>
> Please suggest.
>

http://dba.stackexchange.com/questions/31008/pg-dump-and-error-missing-chunk-number-0-for-toast-value

Though the question still remains, what got it into that state?

Any recent hardware/software issues?

>
> Regards
>
> Tarkeshwar
>


--
Adrian Klaver
adrian.kla...@aklaver.com


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] FW: getting error while running sql on mm_activealrm table

2014-12-03 Thread John R Pierce

On 12/3/2014 9:30 PM, M Tarkeshwar Rao wrote:

I have found corrupted indexes are usually a sign that something is not well 
with the server. It is good to check and make sure memory, CPU's and hard 
drives are all happy and not reporting problems. I have found overheating 
servers to be particularly prone to index corruption and if indexes can get 
corrupt one has to worry about data becoming corrupt too.


Do your servers not have ECC memory?   The number one source of data 
corruption is cached data getting transient one-bit errors, which occur 
at a more common rate than you might expect if you have many gigabytes 
of RAM...   ECC will fix and report these single-bit errors.





--
john r pierce  37N 122W
somewhere on the middle of the left coast



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] High Availability Cluster

2014-12-03 Thread Sameer Kumar
On Thu, Dec 4, 2014 at 12:32 PM, Tatsuo Ishii  wrote:

> No. Just the implementation of watchdog has been changed (more
> precisely new life check mode added). In recent versions (since 3.3)
> it uses UDP packet for life check of pgpool, which is pretty much
> similar to heartbeat.
>
>
Thanks for adding the details.


> > If you are up for a commercial product, you may want to consider using
> > something like EnterpriseDB's failover manager.
>
> Commercial support for pgpool-II is provided by the way.
>
Noted :)


Best Regards,

*Sameer Kumar | Database Consultant*

*ASHNIK PTE. LTD.*

101 Cecil Street, #11-11 Tong Eng Building, Singapore 069533

M: *+65 8110 0350*  T: +65 6438 3504 | www.ashnik.com
