Re: [GENERAL] Question about shared_buffer cache behavior

2016-03-19 Thread Andreas Kretschmer


> Paul Jones  hat am 18. März 2016 um 21:24 geschrieben:
> 
> 
> In Postgres 9.5.1 with a shared_buffer cache of 7Gb, a SELECT from
> a single table that uses an index appears to read the table into the
> shared_buffer cache.  Then, as many times as the exact same SELECT is
> repeated in the same session, it runs blazingly fast and doesn't even
> touch the disk.  All good.
> 
> Now, in the *same* session, if a different SELECT from the *same* table,
> using the *same* index is run, it appears to read the entire table from
> disk again.
> 
> Why is this?  Is there something about the query that qualifies the
> contents of the share_buffer cache?  Would this act differently for
> different kinds of indexes?

The first query reads from the heap only the tuples that match the
where-condition.
The second query, with a different where-condition, reads different rows than
the first query.

Keep in mind: an index scan reads the index and pulls from the heap only the
rows that match the condition, no more.
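You can watch this from psql with EXPLAIN (ANALYZE, BUFFERS) - a minimal
sketch, with table and column names made up for illustration:

-- first run: most pages are counted as "read" (fetched from disk / OS cache);
-- repeat runs of the same query count them as "shared hit" (already in shared_buffers)
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM big_table WHERE indexed_col = 42;
--   Buffers: shared hit=... read=...

A query with a different where-condition touches different index and heap
pages, so its first run has to read those from disk again.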

Regards
-- 
Andreas Kretschmer
http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [GENERAL] log temp files are created twice in PL/pgSQL function

2016-03-19 Thread Dang Minh Huong
Hi,

Thanks for your prompt response.
Appreciate your help.

Thanks and best regards,
bocap

> Hi
> 
> 2016-03-16 15:58 GMT+01:00 Dang Minh Huong :
>> Hi,
>> 
>> Why are temp-file log entries created twice when a query is executed in a PL/pgSQL 
>> function?
>> Would you please explain it to me?
> 
> PL/pgSQL materializes results internally. 
> 
> Usually PostgreSQL operations are executed row by row. But some SRF functions, 
> like PL/pgSQL functions, don't support this mode and instead return a tuplestore - 
> a materialized result.
> 
> Using this technique is convenient, but it carries some performance risks. 
> Unfortunately, you cannot change this behaviour. Not in PL/pgSQL.
> 
> You could write a C function with the same functionality but with a row-by-row 
> returning mode. It is not possible in PL/pgSQL.
> 
> On the other hand, you can try to increase work_mem (if your server has enough 
> RAM). Materialization spills to a temporary file when the available memory 
> (controlled by work_mem) is too small.
> 
> You can try
> 
> SET work_mem to '20MB';
> SELECT test_tempfiles();
> 
> Regards
> 
> Pavel
>  
>> 
>> As the test results below show, temp-file log entries are created twice when a 
>> SELECT statement is put into a PL/pgSQL function. It leads to a little performance 
>> degradation.
>> Is there any way to define the PL/pgSQL function to avoid this issue?
>> # I am using PostgreSQL 9.3.9
>> 
>> my test results
>> -
>> [postgres@test]$ psql -c "select test_tempfiles();" > /dev/null
>> LOG:  temporary file: path "base/pgsql_tmp/pgsql_tmp2223.0", size 3244032
>> CONTEXT: PL/pgSQL function test_cursor() line 3 at RETURN QUERY
>> LOG: temporary file: path "base/pgsql_tmp/pgsql_tmp23223.1", size 2828062
>> LOG: duration: 421.426 ms statement: select test_tempfiles();
>> 
>> [postgres@test]$ psql -c "select name from testtbl order by id" > /dev/null
>> LOG: temporary file: path "base/pgsql_tmp/pgsql_tmp25586.0", size 2850816
>> LOG: duration: 389.054 ms statement: select random from testtbl order by 
>> random
>> -
>> 
>> test_tempfiles() function is defined as below
>> -
>> CREATE OR REPLACE FUNCTION public.test_tempfiles()
>> RETURNS TABLE(name text)
>> LANGUAGE plpgsql
>> AS
>> $function$
>> begin
>> return query execute 'select name from testtbl order by id';
>> end;
>> $function$
>> -
>> 
>> Thanks and best regards,
>> bocap
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> 
> 


[GENERAL] UPSERT and HOT-update

2016-03-19 Thread CHENG Yuk-Pong, Daniel
Hi List,

I am doing an INSERT...ON CONFLICT...UPDATE on a table. The query is
mostly an UPDATE and does not change any columns most of the time, like
so:

  CREATE INDEX ON book(title);
  INSERT INTO book (isbn, title, author, lastupdate) VALUES ($1,$2,$3, now())
ON CONFLICT (isbn) DO UPDATE set title=excluded.title,
author=excluded.author, lastupdate=excluded.lastupdate;

PostgreSQL seems to consider the title as changed and refuses to do a
HOT update. It works if I remove the `title=...` part.

Are there any tricks to make it smarter? The title doesn't change most
of the time, after all.
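One idea I had (just a sketch, not tested): add a WHERE clause to the DO
UPDATE so the row is not rewritten at all when nothing changed - at the cost
that lastupdate is then not refreshed for unchanged rows:

  INSERT INTO book (isbn, title, author, lastupdate) VALUES ($1,$2,$3, now())
ON CONFLICT (isbn) DO UPDATE set title=excluded.title,
author=excluded.author, lastupdate=excluded.lastupdate
WHERE (book.title, book.author) IS DISTINCT FROM (excluded.title, excluded.author);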

Regards,
Daniel




[GENERAL] grant select on pg_stat_activity

2016-03-19 Thread avi Singh
Guys
What's the best way to grant SELECT on pg_stat_activity so that a
non-superuser can access this view?

Thanks
Avi


Re: [HACKERS] [GENERAL] Request - repeat value of \pset title during \watch interations

2016-03-19 Thread David Steele
On 3/17/16 5:07 PM, David G. Johnston wrote:

> Figured out it had to be added to 2016-09...done

Hmm ... this patch is currently marked "needs review" in CF 2016-03.  Am
I missing something, should this have been closed?

-- 
-David
da...@pgmasters.net




Re: [GENERAL] Partition

2016-03-19 Thread Adrian Klaver

On 03/18/2016 02:55 AM, Leonardo M. Ramé wrote:

Hi, I have read and re-read the Partitioning chapter
(http://www.postgresql.org/docs/9.1/static/ddl-partitioning.html), but I
still don't see how to implement this use case:

One table storing current data, let's call it the "master table", then
one or more partitions with old data.

For example, the master table is this:

 create table log(
   idlog serial not null,
   date_time timestamp not null default now(),
   log varchar(255),
   primary key(idlog)
 );

The documentation says the master table should be empty, then a trigger
must evaluate a condition, the date_time field for example, and insert
the data into the corresponding table. This is a *strange* requirement,
because in the log example, new content is created without knowing its
date and time in advance. For example:

 insert into log(log) values('log this please.');

The date_time column will set the now() value.


True but you can catch that value in the BEFORE trigger as 
NEW.date_time. A default is still just a value being entered into the field.




Now, by following the example, to create a child table I'll do

 create table log_old( ) inherits (log);

This raises the first question: how can I define a *dynamic* check
for "older than X days"? Is this possible?

An idea (didn't test):

 check (date_time::date < now()::date - '30 day'::interval)


Where are you putting this CHECK?

FYI, should not need the casts to date. Interval will work with datetimes.



Then, the trigger, after each insert should *move* old data to log_old.

The only problem I see here is that the master table isn't empty, but
contains current data. The question is, will it work as expected? I
mean, when I do "select * from log" will I get a *union* of new and old
data?


If you use a BEFORE trigger on the master table and redirect the INSERT 
to a partition and RETURN NULL from said trigger, then the INSERT will 
not happen on the master.
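A minimal sketch of that (the 30-day cutoff and the partition name are just
illustrative, untested):

CREATE OR REPLACE FUNCTION log_redirect() RETURNS trigger AS $$
BEGIN
  IF NEW.date_time < now() - interval '30 days' THEN
    INSERT INTO log_old VALUES (NEW.*);  -- old rows go to the partition
    RETURN NULL;                         -- and the insert on the master is suppressed
  END IF;
  RETURN NEW;                            -- recent rows stay in the master
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER log_redirect_trg
  BEFORE INSERT ON log
  FOR EACH ROW EXECUTE PROCEDURE log_redirect();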




Regards,



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Deleting schema - saving up space - PostgreSQL 9.2

2016-03-19 Thread David G. Johnston
On Wed, Mar 16, 2016 at 1:59 PM, drum.lu...@gmail.com 
wrote:

>
> 1 - The problem here is that a VACUUM FULL will lock the whole DB for writes,
> am I right? My DB is 1.7 TB, so it will take a while and the System can't
> be offline
>
>1. Migrate the files to the NFS server
>2. Delete the schema from the MASTER DB
>3. Put the slaves into read-only servers
>4. Run Vacuum FULL into the MASTER DB
>5. Once the vacuum is done, do a DUMP from the MASTER to the slaves
>(excluding the GORFS schema of course)
>
>
If you are removing the entire object there should be no cause to VACUUM
FULL.  A vacuum full reclaims unused space *within a given relation.*

Both DROP TABLE and TRUNCATE have the effect of (near) immediately
freeing up the disk space used by the named table and returning it to the
operating system.

You want to use VACUUM FULL tablename; if you remove a significant chunk
of a table using DELETE or UPDATE and want to reclaim the space that was
occupied by the older versions of the rows within "tablename".

VACUUM FULL; simply does this for all tables - I'm not sure when locks are
taken and removed.  Likely only the actively worked-on tables are locked -
but the I/O hit is global, so targeted locking only buys you so much.

David J.


Re: [GENERAL] Deleting schema - saving up space - PostgreSQL 9.2

2016-03-19 Thread David G. Johnston
On Wed, Mar 16, 2016 at 2:27 PM, drum.lu...@gmail.com 
wrote:

>
>
> On 17 March 2016 at 10:21, David G. Johnston 
> wrote:
>
>> On Wed, Mar 16, 2016 at 1:59 PM, drum.lu...@gmail.com <
>> drum.lu...@gmail.com> wrote:
>>
>>>
>>> 1 - The problem here is that a VACUUM FULL will lock the whole DB for
>>> writes, am I right? My DB is 1.7 TB, so it will take a while and the System
>>> can't be offline
>>>
>>>1. Migrate the files to the NFS server
>>>2. Delete the schema from the MASTER DB
>>>3. Put the slaves into read-only servers
>>>4. Run Vacuum FULL into the MASTER DB
>>>5. Once the vacuum is done, do a DUMP from the MASTER to the slaves
>>>(excluding the GORFS schema of course)
>>>
>>>
>> If you are removing the entire object there should be no cause to VACUUM
>> FULL.  A vacuum full reclaims unused space *within a given relation.*
>>
>> Both DROP TABLE and TRUNCATE have the effect of (near) immediately
>> freeing up the disk space used by the named table and returning it to the
>> operating system.
>>
>> You want to use VACUUM FULL tablename; if you remove a significant chunk
>> of a table using DELETE or UPDATE and want to reclaim the space that was
>> occupied by the older versions of the rows within "tablename".
>>
>> VACUUM FULL; simply does this for all tables - I'm not sure when locks
>> are taken and removed.  Likely only the actively worked-on tables are
>> locked - but the I/O hit is global, so targeted locking only buys you so
>> much.
>>
>> David J.
>>
>>
>>
>
> I see..
>
> so in your opinion a DROP SCHEMA and maybe a VACUUM (not full) would be
> enough?
>
>
I don't deal with Hot Standbys in my day-to-day, but if you DROP SCHEMA
all of the space consumed by indexes and tables in that schema will be
freed.  The vacuum might make a small difference in performance on the
system catalogs (pg_class, stats, etc.) that were updated, but with respect
to the dropped schema there won't be anything present there for vacuum to
touch.

Create and populate a dummy table in a test setup, measure the HD space
taken in PGDATA, then drop it and measure again to see it in action.
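A quick sketch of that experiment from psql (the table name is made up):

CREATE TABLE scratch AS
  SELECT g AS id, md5(g::text) AS payload
  FROM generate_series(1, 1000000) g;

SELECT pg_size_pretty(pg_total_relation_size('scratch'));  -- space in use

DROP TABLE scratch;
-- the files are released immediately; re-check PGDATA with du -sh to confirm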

I've only done this using "TRUNCATE" - I've got a system with space
constraints and the same kind of "file data" table and freed up around 20GB
with a single fast truncate (though ensuring FKs wouldn't be a problem was
fun...).

David J.


Re: [GENERAL] Insert data in two columns same table

2016-03-19 Thread drum.lu...@gmail.com
On 18 March 2016 at 03:23, Adrian Klaver  wrote:

> On 03/16/2016 07:07 PM, drum.lu...@gmail.com wrote:
>
>>
>>
>>
>>
>> I see a lot of other problems: you have 3 independent tables. Your 2
>> queries (selects) return 2 independent results; you can't use those for
>> an insert into the 3rd table. And I think you are looking for an update,
>> not an insert. So you have to define how your tables are linked together
>> (join).
>>
>> Can you explain how these tables are linked together?
>>
>>
>>
>>
>> Hi Andreas!
>>
>> Well...
>>
>> There are two tables that I need to get data from(dm.billables /
>> public.ja_mobiusers), and a third table (dm.billables_links) that I need
>> to insert data from those two tables.
>>
>> The table dm.billables has four (important) columns:
>>
>> *billable_id / customer_id / role_id / mobiuser_id*
>>
>> I wanna add data there. The data is not there yet, so it's not an UPDATE.
>>
>> *1 -* select the billable_id: (SELECT1)
>> SELECT billable_id FROM junk.wm_260_billables2 WHERE info ilike '%Alisha%'
>>
>> *2 -* select the mobiuser_id: (SELECT2)
>> SELECT id FROM public.ja_mobiusers WHERE name_first LIKE 'Alisha%' AND
>> name_last LIKE 'Dadryl%'
>>
>> *3 -* Insert those two data into the dm.billables_links table (EXAMPLE):
>> INSERT INTO dm.billables_links (billable_id, mobiuser_id) VALUES
>> (SELECT1, SELECT2);
>>
>>
>> CREATE TABLE
>> *billables*
>>  (
>>  billable_id BIGINT DEFAULT
>> "nextval"('"dm"."billables_billable_id_seq"'::"regclass") NOT
>>  NULL,
>>  account_id BIGINT NOT NULL,
>>  code CHARACTER VARYING(64) NOT NULL,
>>  info "TEXT",
>>  CONSTRAINT pk_billables PRIMARY KEY (billable_id),
>>  CONSTRAINT uc_billable_code_unique_per_account UNIQUE
>> ("account_id", "code"),
>>  );
>> CREATE TABLE
>> *billables_links*
>>  (
>>  billable_link_id BIGINT DEFAULT
>> "nextval"('"dm"."billables_links_billable_link_id_seq"'::
>>  "regclass") NOT NULL,
>>  billable_id BIGINT NOT NULL,
>>  customer_id BIGINT,
>>  role_id BIGINT,
>>  mobiuser_id BIGINT,
>>  CONSTRAINT pk_billables_links PRIMARY KEY
>> (billable_link_id),
>>  CONSTRAINT fk_billable_must_exist FOREIGN KEY
>> (billable_id) REFERENCES billables
>>  (billable_id),
>>  CONSTRAINT cc_one_and_only_one_target CHECK
>> ((("customer_id" IS NOT NULL)::INTEGER +
>>  ("role_id" IS NOT NULL)::INTEGER + ("mobiuser_id" IS
>> NOT NULL)::INTEGER) = 1)
>>
>
> Would it not be easier if instead of customer_id, role_id, mobiuser_id you
> had id_type('customer', 'role', 'mobi') and user_id(id). Then you could
> eliminate the CHECK, which as far as I can see is just restricting entry to
> one user id anyway.
>
>  );
>> CREATE TABLE
>> *ja_mobiusers*
>>  (
>>  id BIGINT DEFAULT
>> "nextval"('"ja_mobiusers_id_seq"'::"regclass") NOT NULL,
>>  clientid BIGINT DEFAULT 0,
>> [...]
>>  PRIMARY KEY (id),
>>  CONSTRAINT fk_account_must_exist FOREIGN KEY
>> (clientid) REFERENCES ja_clients (id),
>>  );
>>
>>
>>
>
>
>

I just did it using:

> INSERT INTO dm.billables_links (billable_id, mobiuser_id)
> SELECT billable_id
> , (SELECT id
>FROM   public.ja_mobiusers
>WHERE  name_first LIKE 'Anthon%'
>ANDname_last LIKE 'Swile%') AS foo  -- alias irrelevant
> FROM   dm.billables
> WHERE  info ILIKE '%Anthon%' AND info ILIKE '%Swile%' AND account_id =
> 32152 ;


Re: [GENERAL] which db caused postgres to stop serving due to wraparound prevention?

2016-03-19 Thread Steve Kehlet
Maybe my custom settings are relevant. Here they are in a gist:

https://gist.github.com/skehlet/08aeed3d06f1c35bc780

On Thu, Mar 17, 2016 at 11:47 AM Steve Kehlet 
wrote:

> Sorry, seems like such a noob problem, but I'm stumped. This is postgres
> 9.4.5. I'll post my custom settings if desired but I don't think they're
> needed.
>
> We recently had an issue where the autovacuumer wasn't starting because
> postgres couldn't resolve the hostname 'localhost' (we had bad perms on
> /etc/hosts). We're still working on getting that fixed on all affected
> boxes.
>
> In the meantime: today, one particular database unexpectedly stopped
> serving with this error:
>
> 2016-03-17 12:31:52 EDT [5395]: [787-1] ERROR:  database is not accepting
> commands to avoid wraparound data loss in database with OID 0
> 2016-03-17 12:31:52 EDT [5395]: [788-1] HINT:  Stop the postmaster and
> vacuum that database in single-user mode.
> You might also need to commit or roll back old prepared
> transactions.
>
> What has me confused is I ran the following command to keep an eye on
> this, and it seemed fine, the max(age(datfrozenxid)) was only about 330
> million:
>
> postgres=# select datname,age(datfrozenxid) from pg_database;
>   datname  |    age
> -----------+-----------
>  mydb      | 330688846
>  postgres  | 215500760
>  template1 | 198965879
>  template0 | 146483694
>  mydb2     | 175585538
> (5 rows)
>
> We shutdown postgres, started it in single user mode, and VACUUMed each
> database. Then postgres started up fine, and the crisis is averted, for now.
>
> However my understanding must be wrong: I thought we could just look
> around for max(age(datfrozenxid)), make sure it's "low" (<2 billion), and
> be sure that this wouldn't happen. What am I misunderstanding?
>
> And then, I don't know which db has OID 0?
>
> postgres=# SELECT oid,datname from pg_database;
>     oid     |  datname
> ------------+-----------
>       16422 | mydb
>       12921 | postgres
>           1 | template1
>       12916 | template0
>  1575433129 | mydb2
> (5 rows)
>
> Thank you for your help!
>
>
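
For reference, both the prepared-transaction hint and the per-table freeze age
can be checked from SQL - a minimal diagnostic sketch:

-- forgotten prepared transactions hold back the xid horizon
SELECT gid, prepared, owner, database
FROM pg_prepared_xacts
ORDER BY prepared;

-- oldest tables by frozen-xid age in the current database
SELECT c.oid::regclass AS table_name, age(c.relfrozenxid) AS xid_age
FROM pg_class c
WHERE c.relkind = 'r'
ORDER BY age(c.relfrozenxid) DESC
LIMIT 10;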


Re: [GENERAL] How to Qualifying or quantify risk of loss in asynchronous replication

2016-03-19 Thread Adrian Klaver

On 03/16/2016 02:40 PM, otheus uibk wrote:

On Wednesday, March 16, 2016, Thomas Munro




Somehow, the documentation misleads (me) to believe the async
replication algorithm at least guarantees WAL records are *sent* before
responding "committed" to the client. I now know this is not the case.
*grumble*.

How can I help make the documentation clearer on this point?


I thought it was already clear:

http://www.postgresql.org/docs/9.4/interactive/warm-standby.html
"It should be noted that log shipping is asynchronous, i.e., the WAL 
records are shipped after transaction commit. As a result, there is a 
window for data loss should the primary server suffer a catastrophic 
failure; transactions not yet shipped will be lost. ..."


http://www.postgresql.org/docs/9.4/interactive/warm-standby.html#STREAMING-REPLICATION

"Streaming replication is asynchronous by default (see Section 25.2.8), 
in which case there is a small delay between committing a transaction in 
the primary and the changes becoming visible in the standby. This delay 
is however much smaller than with file-based log shipping, typically 
under one second assuming the standby is powerful enough to keep up with 
the load. ..."


http://www.postgresql.org/docs/9.4/interactive/warm-standby.html#SYNCHRONOUS-REPLICATION

"PostgreSQL streaming replication is asynchronous by default. If the 
primary server crashes then some transactions that were committed may 
not have been replicated to the standby server, causing data loss. The 
amount of data loss is proportional to the replication delay at the time 
of failover.


Synchronous replication offers the ability to confirm that all changes 
made by a transaction have been transferred to one synchronous standby 
server. This extends the standard level of durability offered by a 
transaction commit. This level of protection is referred to as 2-safe 
replication in computer science theory. "
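
For completeness, the guarantee the poster expected is what synchronous
replication provides; a minimal primary-side configuration sketch (the standby
name is a placeholder and must match the standby's application_name):

# postgresql.conf on the primary
synchronous_standby_names = 'standby1'
synchronous_commit = on   # commit waits until the standby confirms receipt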




--
Otheus
otheus.u...@gmail.com 
otheus.shell...@uibk.ac.at 




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Error: insufficient data in the message

2016-03-19 Thread Tom Lane
Michael Paquier  writes:
> On Fri, Mar 18, 2016 at 9:00 AM, Tom Lane  wrote:
>> Hmm ... I can't find the string "insufficient data in the message"
>> anywhere in the Postgres sources.  And not "pgsql_pexec" either.

>> 2016-03-16 17:35:07 BRT ERRO:  dados insuficientes na mensagem

> This is an error message in Portuguese, and it refers to an existing message:

> #: libpq/pqformat.c:556 libpq/pqformat.c:574 libpq/pqformat.c:595
> #: utils/adt/arrayfuncs.c:1444 utils/adt/rowtypes.c:556
> #, c-format
> msgid "insufficient data left in message"

Hm.  If that's the correct identification, all those message sites
are complaining that the client sent invalidly-formatted data.
So it still looks like a client-side issue.

regards, tom lane




Re: [GENERAL] Schema Size - PostgreSQL 9.2

2016-03-19 Thread Karsten Hilbert
On Fri, Mar 18, 2016 at 09:38:30AM +1300, drum.lu...@gmail.com wrote:

> Can you please provide me with a query that tells me how much space a schema
> in my DB uses?

There's been a discussion on that recently (like last month)
which can be found in the archive.

Karsten
-- 
GPG key ID E4071346 @ eu.pool.sks-keyservers.net
E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346




[GENERAL] Error: insufficient data in the message

2016-03-19 Thread Ranier VF
Hi, I ran into an error with PostgreSQL 9.5.0.

Prepared statement:
INSERT INTO tbCerts (Company_UID, User_UID, Cert_Blob_Size, Cert_Format,
After_Date, Before_Date, Cert_Blob, Password1, Key_Name, Cert_Blob_Type,
Cert_Use, Status) VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11, $12);

Bind values:
param[0]=1242004 /* htonl */
param[1]=1242028 /* htonl */
param[2]=135967325878940612 /* htonl */
param[3]=1242076 /* htonl */
param[4]='26/05/2015'
param[5]='25/05/2016'
param[6]=blob size=9529 /* blob which may contain 0x0 data, obviously */
param[7]=3B0429150961134E
param[8]=C:\documentos\RC SOFTWARE\clientes\asbosch\bef.pfx
param[9]=pfx
param[10]=S
param[11]=A

Error:  insufficient data in the message
pgsql_pexec error: PQresultStatus=7

Log:
2016-03-16 17:35:07 BRT ERRO:  dados insuficientes na mensagem
2016-03-16 17:35:07 BRT COMANDO:  INSERT INTO tbCerts (Company_UID, User_UID,
Cert_Blob_Size, Cert_Format, After_Date, Before_Date, Cert_Blob, Password1,
Key_Name, Cert_Blob_Type, Cert_Use, Status) VALUES ($1, $2, $3, $4, $5, $6, $7,
$8, $9, $10, $11, $12);

PostgreSQL 9.5.0 32 bits; client 32 bits (libpq.dll).

This is very frustrating. What is wrong? How can I debug this?

Best,
Ranier Vilela

Re: [GENERAL] Drop only temporary table

2016-03-19 Thread Melvin Davidson
On Fri, Mar 18, 2016 at 9:31 AM, Aleksander Alekseev <
a.aleks...@postgrespro.ru> wrote:

> You can use schema name as a prefix:
>
> postgres=# \d+
>              List of relations
>   Schema   | Name | Type  |  Owner   |    Size    | Description
> -----------+------+-------+----------+------------+-------------
>  pg_temp_1 | t    | table | postgres | 8192 bytes | 
> (1 row)
>
> postgres=# drop table pg_temp_1.t;
> DROP TABLE
>
> But generally speaking I suggest you generate random names for
> temporary tables.
>
> On Fri, 18 Mar 2016 13:47:06 +0100
> Durumdara  wrote:
>
> > Dear PG Masters!
> >
> > As I experienced I can create normal and temp table with same name.
> >
> > create table x (kod integer);
> >
> > CREATE TEMPORARY TABLE x (kod integer);
> >
> > select tablename from pg_tables where schemaname='public'
> >  union all
> > select c.relname from pg_class c
> > join pg_namespace n on n.oid=c.relnamespace
> > where
> > n.nspname like 'pg_temp%'
> > and c.relkind = 'r'
> > and pg_table_is_visible(c.oid);
> >
> > ---
> >
> > I can see two x tables.
> >
> > As far as I can see, the DROP TABLE statement doesn't have a "temporary"
> > suboption to determine which one to eliminate - the real or the temporary one.
> >
> > The same question arises with other DDL/DML statements - which is the
> > destination table - the real one or the temporary one?
> >
> > "insert into x(kod) values(1)"
> >
> > So what do you think about this problem?
> >
> > I want to avoid removing any real table on resource closing (=
> > dropping of the temporary table).
> > How do I force "drop only temporary"? A prefix, an option, etc.?
> >
> > Thanks for your help!
> >
> > dd
>
>
>
> --
> Best regards,
> Aleksander Alekseev
> http://eax.me/
>
>
>

It is not wise to create temp tables with the same name as actual tables.
It is always a good idea to prefix temp tables with something like 'tmp_'
or 't_'.

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


[GENERAL] log temp files are created twice in PL/pgSQL function

2016-03-19 Thread Dang Minh Huong
Hi,

Why are temp-file log entries created twice when a query is executed in a PL/pgSQL 
function?
Would you please explain it to me?

As the test results below show, temp-file log entries are created twice when a SELECT 
statement is put into a PL/pgSQL function. It leads to a little performance degradation.
Is there any way to define the PL/pgSQL function to avoid this issue?
# I am using PostgreSQL 9.3.9

my test results
-
[postgres@test]$ psql -c "select test_tempfiles();" > /dev/null 
LOG:  temporary file: path "base/pgsql_tmp/pgsql_tmp2223.0", size 3244032 
CONTEXT: PL/pgSQL function test_cursor() line 3 at RETURN QUERY 
LOG: temporary file: path "base/pgsql_tmp/pgsql_tmp23223.1", size 2828062 
LOG: duration: 421.426 ms statement: select test_tempfiles();

[postgres@test]$ psql -c "select name from testtbl order by id" > /dev/null 
LOG: temporary file: path "base/pgsql_tmp/pgsql_tmp25586.0", size 2850816 
LOG: duration: 389.054 ms statement: select random from testtbl order by random
-

test_tempfiles() function is defined as below
-
CREATE OR REPLACE FUNCTION public.test_tempfiles() 
RETURNS TABLE(name text) 
LANGUAGE plpgsql 
AS 
$function$ 
begin 
return query execute 'select name from testtbl order by id'; 
end; 
$function$
-

Thanks and best regards,
bocap












Re: [GENERAL] Deleting schema - saving up space - PostgreSQL 9.2

2016-03-19 Thread drum.lu...@gmail.com
On 17 March 2016 at 10:21, David G. Johnston 
wrote:

> On Wed, Mar 16, 2016 at 1:59 PM, drum.lu...@gmail.com <
> drum.lu...@gmail.com> wrote:
>
>>
>> 1 - The problem here is that a VACUUM FULL will lock the whole DB for writes,
>> am I right? My DB is 1.7 TB, so it will take a while and the System can't
>> be offline
>>
>>1. Migrate the files to the NFS server
>>2. Delete the schema from the MASTER DB
>>3. Put the slaves into read-only servers
>>4. Run Vacuum FULL into the MASTER DB
>>5. Once the vacuum is done, do a DUMP from the MASTER to the slaves
>>(excluding the GORFS schema of course)
>>
>>
> If you are removing the entire object there should be no cause to VACUUM
> FULL.  A vacuum full reclaims unused space *within a given relation.*
>
> Both DROP TABLE and TRUNCATE have the effect of (near) immediately
> freeing up the disk space used by the named table and returning it to the
> operating system.
>
> You want to use VACUUM FULL tablename; if you remove a significant chunk
> of a table using DELETE or UPDATE and want to reclaim the space that was
> occupied by the older versions of the rows within "tablename".
>
> VACUUM FULL; simply does this for all tables - I'm not sure when locks are
> taken and removed.  Likely only the actively worked-on tables are locked -
> but the I/O hit is global, so targeted locking only buys you so much.
>
> David J.
>
>
>

I see..

so in your opinion a DROP SCHEMA and maybe a VACUUM (not full) would be
enough?


Re: [GENERAL] Confusing deadlock report

2016-03-19 Thread Tom Lane
Thomas Kellerer  writes:
> 2016-03-12 13:51:29.305 CET [23912]: [2-1] user=arthur,db=prod,app=[unknown] 
> DETAIL: Process 23912 waits for ShareLock on transaction; blocked by process 
> 24342. 
> Process 24342 waits for ShareLock on transaction 39632974; blocked by 
> process 23912. 
> Process 23912: UPDATE alpha SET some_flag = $1 WHERE (id = $2) 
> Process 24342: INSERT INTO bravo (..., alpha_id) VALUES ($1, $2, $3, 
> $4, $5, $6, $7, $8, $9, $10)

> Can the foreign key between bravo and alpha play a role here?

Absolutely.  The insert will need a sharelock on whatever alpha row the
new bravo row references.  Perhaps the newly-inserted row references some
row that 23912 previously updated (in the same transaction), while the
alpha row that 23912 is currently trying to update was previously share-locked
by 24342 as a side effect of some previous insert?

regards, tom lane




Re: [GENERAL] grant select on pg_stat_activity

2016-03-19 Thread Adrian Klaver

On 03/18/2016 01:09 PM, avi Singh wrote:

Guys
 What's the best way to grant SELECT on pg_stat_activity so that
a non-superuser can access this view?


They should be able to, see below. If that is not your case, then more 
information is needed.


guest@test=> select current_user;
 current_user 
--------------
 guest
(1 row)

guest@test=> \du guest
           List of roles
 Role name | Attributes | Member of 
-----------+------------+-----------
 guest     |            | {}


guest@test=> select * from pg_stat_activity;
-[ RECORD 1 ]+
datid| 16385
datname  | test
pid  | 2622
usesysid | 1289138
usename  | guest
application_name | psql
client_addr  |
client_hostname  |
client_port  | -1
backend_start| 2016-03-18 14:41:43.906754-07
xact_start   | 2016-03-18 14:44:22.550742-07
query_start  | 2016-03-18 14:44:22.550742-07
state_change | 2016-03-18 14:44:22.550746-07
waiting  | f
state| active
backend_xid  |
backend_xmin | 58635
query| select * from pg_stat_activity;
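
If the goal is for a non-superuser to also see other sessions' query text
(which pg_stat_activity hides from non-superusers), one common workaround is a
SECURITY DEFINER wrapper owned by a superuser - a minimal sketch (function and
role names are made up):

CREATE OR REPLACE FUNCTION public.all_activity()
RETURNS SETOF pg_stat_activity
LANGUAGE sql
SECURITY DEFINER
AS $$ SELECT * FROM pg_stat_activity; $$;

REVOKE ALL ON FUNCTION public.all_activity() FROM PUBLIC;
GRANT EXECUTE ON FUNCTION public.all_activity() TO some_monitoring_role;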



Thanks
Avi



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] vacuum - reclaiming disk space.

2016-03-19 Thread Robert McAlpine
Just to throw some extreme ideas out there, you could stand up a postgres
on some other server, pg_dump your current database and use that dump to
build up your second postgres. Use that new postgres when your system goes
live again after downtime. Restoring from a dump means your database would
not take up as much space since I assume your issue is that all that space
was allocated to postgres for the purposes of your large number of table
updates.
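
A rough command-line sketch of that approach (host names, database name and
job count are placeholders):

pg_dump -Fc -h oldhost -f mydb.dump mydb
createdb -h newhost mydb
pg_restore -h newhost -d mydb -j 4 mydb.dump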

On Thu, Mar 17, 2016 at 11:34 AM, Melvin Davidson 
wrote:

>
>
> On Thu, Mar 17, 2016 at 10:57 AM, bricklen  wrote:
>
>> On Thu, Mar 17, 2016 at 7:27 AM, Mike Blackwell 
>> wrote:
>>
>>> I have a large table with numerous indexes which has approximately
>>> doubled in size after adding a column - every row was rewritten and 50% of
>>> the tuples are dead.  I'd like to reclaim this space, but VACUUM FULL
>>> cannot seem to finish within the scheduled downtime.
>>>
>>> Any suggestions for reclaiming the space without excessive downtime?
>>>
>>
>> pg_repack is a good tool for removing bloat.
>> https://github.com/reorg/pg_repack
>>
>>
> "I have a large table with numerous indexes :
> My first thought is, "DEFINE NUMEROUS". How many indexes do you actually
> have? How many of those indexes are actually used? In addition to VACUUMing
> the table, it also needs to go through every index you have.
> So find out if you have any unneeded indexes with:
>
> SELECT n.nspname as schema,
>i.relname as table,
>i.indexrelname as index,
>i.idx_scan,
>i.idx_tup_read,
>i.idx_tup_fetch,
>pg_size_pretty(pg_relation_size(quote_ident(n.nspname) || '.' ||
> quote_ident(i.relname))) AS table_size,
>pg_size_pretty(pg_relation_size(quote_ident(n.nspname) || '.' ||
> quote_ident(i.indexrelname))) AS index_size,
>pg_get_indexdef(idx.indexrelid) as idx_definition
>   FROM pg_stat_all_indexes i
>   JOIN pg_class c ON (c.oid = i.relid)
>   JOIN pg_namespace n ON (n.oid = c.relnamespace)
>   JOIN pg_index idx ON (idx.indexrelid =  i.indexrelid )
>  WHERE i.idx_scan = 0
>AND NOT idx.indisprimary
>AND NOT idx.indisunique
>  ORDER BY 1, 2, 3;
>
> Then drop any index that shows up!
>
> --
> *Melvin Davidson*
> I reserve the right to fantasize.  Whether or not you
> wish to share my fantasy is entirely up to you.
>



-- 
Robert McAlpine
DevOps Engineer
Perfecta Federal 
6506 Loisdale Road
Springfield, VA 22150
O: 202.888.4949 ext 1005
C: 757.620.3503
r...@pfcta.com


Re: [GENERAL] : Getting error while starting the server

2016-03-19 Thread bhartirawatbr

*Error*: psycopg2.OperationalError: could not translate host name
"localhost" to address: Name or service not known

*Solution*: Sometimes this error occurs simply because your LAN cable is
not connected to your system.

To resolve the problem, connect your LAN cable to the system and then run the
command again.

You will find that it works fine.



--
View this message in context: 
http://postgresql.nabble.com/Getting-error-while-starting-the-server-tp5838994p5893684.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.




Re: [GENERAL] Confusing deadlock report

2016-03-19 Thread Thomas Kellerer
Tom Lane schrieb am 16.03.2016 um 14:45:
>> 2016-03-12 13:51:29.305 CET [23912]: [2-1] user=arthur,db=prod,app=[unknown] 
>> DETAIL: Process 23912 waits for ShareLock on transaction; blocked by process 
>> 24342. 
>> Process 24342 waits for ShareLock on transaction 39632974; blocked 
>> by process 23912. 
>> Process 23912: UPDATE alpha SET some_flag = $1 WHERE (id = $2) 
>> Process 24342: INSERT INTO bravo (..., alpha_id) VALUES ($1, $2, $3, 
>> $4, $5, $6, $7, $8, $9, $10)
> 
>> Can the foreign key between bravo and alpha play a role here?
> 
> Absolutely.  The insert will need a sharelock on whatever alpha row the
> new bravo row references.  Perhaps the newly-inserted row references some
> row that 23912 previously updated (in the same transaction), while the
> alpha row that 23912 is currently trying to update was previously share-locked
> by 24342 as a side effect of some previous insert?

Hmm, I tried a very simple setup like this:

  create table master (id integer primary key, data text);
  create table child (id integer primary key, master_id integer not null 
references master on update set null);

  insert into master (id, data) 
   values 
  (1,'one'),
  (2,'two'),
  (3,'three');

then in one transaction I do: 

  update master 
set data = 'bar'
  where id = 1;

and in a second transaction I run: 

  insert into child 
(id, master_id) 
  values 
(1, 1);

But the second transaction does not wait for the UPDATE to finish. 
So I guess it must be a bit more complicated than that.

Thomas





Re: [GENERAL] Insert data in two columns same table

2016-03-19 Thread drum.lu...@gmail.com
>
>
>
> I see a lot of other problems: you have 3 independent tables. Your 2 queries
> (selects) return 2 independent results; you can't use those for an insert into
> the 3rd table. And I think you are looking for an update, not an insert. So you
> have to define how your tables are linked together (join).
>
> Can you explain how these tables are linked together?
>
>
>
>
Hi Andreas!

Well...

There are two tables that I need to get data from(dm.billables /
public.ja_mobiusers), and a third table (dm.billables_links) that I need to
insert data from those two tables.

The table dm.billables has four (important) columns:

*billable_id / customer_id / role_id / mobiuser_id*

I wanna add data there. The data is not there yet, so it's not an UPDATE.

*1 -* select the billable_id: (SELECT1)
SELECT billable_id FROM junk.wm_260_billables2 WHERE info ilike '%Alisha%'

*2 -* select the mobiuser_id: (SELECT2)
SELECT id FROM public.ja_mobiusers WHERE name_first LIKE 'Alisha%' AND
name_last LIKE 'Dadryl%'

*3 -* Insert those two data into the dm.billables_links table (EXAMPLE):
INSERT INTO dm.billables_links (billable_id, mobiuser_id) VALUES (SELECT1,
SELECT2);


CREATE TABLE
> *billables*
> (
> billable_id BIGINT DEFAULT
> "nextval"('"dm"."billables_billable_id_seq"'::"regclass") NOT
> NULL,
> account_id BIGINT NOT NULL,
> code CHARACTER VARYING(64) NOT NULL,
> info "TEXT",
> CONSTRAINT pk_billables PRIMARY KEY (billable_id),
> CONSTRAINT uc_billable_code_unique_per_account UNIQUE
> ("account_id", "code"),
> );
> CREATE TABLE
> *billables_links*
> (
> billable_link_id BIGINT DEFAULT
> "nextval"('"dm"."billables_links_billable_link_id_seq"'::
> "regclass") NOT NULL,
> billable_id BIGINT NOT NULL,
> customer_id BIGINT,
> role_id BIGINT,
> mobiuser_id BIGINT,
> CONSTRAINT pk_billables_links PRIMARY KEY (billable_link_id),
> CONSTRAINT fk_billable_must_exist FOREIGN KEY (billable_id)
> REFERENCES billables
> (billable_id),
> CONSTRAINT cc_one_and_only_one_target CHECK ((("customer_id"
> IS NOT NULL)::INTEGER +
> ("role_id" IS NOT NULL)::INTEGER + ("mobiuser_id" IS NOT
> NULL)::INTEGER) = 1)
> );
> CREATE TABLE
> *ja_mobiusers*
> (
> id BIGINT DEFAULT
> "nextval"('"ja_mobiusers_id_seq"'::"regclass") NOT NULL,
> clientid BIGINT DEFAULT 0,
> [...]
> PRIMARY KEY (id),
> CONSTRAINT fk_account_must_exist FOREIGN KEY (clientid)
> REFERENCES ja_clients (id),
> );


Re: [GENERAL] vacuum - reclaiming disk space.

2016-03-19 Thread bricklen
On Thu, Mar 17, 2016 at 7:27 AM, Mike Blackwell 
wrote:

> I have a large table with numerous indexes which has approximately doubled
> in size after adding a column - every row was rewritten and 50% of the
> tuples are dead.  I'd like to reclaim this space, but VACUUM FULL cannot
> seem to finish within the scheduled downtime.
>
> Any suggestions for reclaiming the space without excessive downtime?
>

pg_repack is a good tool for removing bloat.
https://github.com/reorg/pg_repack
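
A rough usage sketch (database and table names are placeholders; the
extension must exist in the target database first):

-- once, inside the target database:
CREATE EXTENSION pg_repack;

$ pg_repack --dbname=mydb --table=big_table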


Re: [GENERAL] Schema Size - PostgreSQL 9.2

2016-03-19 Thread Melvin Davidson
On Thu, Mar 17, 2016 at 4:45 PM, Karsten Hilbert 
wrote:

> On Fri, Mar 18, 2016 at 09:38:30AM +1300, drum.lu...@gmail.com wrote:
>
> > Can you please provide me with a query that tells me how much space a
> > schema in my DB uses?
>
> There's been a discussion on that recently (like last month)
> which can be found in the archive.
>
> Karsten
> --
> GPG key ID E4071346 @ eu.pool.sks-keyservers.net
> E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346
>
>
>


This should give you what you need:

SELECT n.nspname as schema,
   pg_size_pretty(sum(pg_total_relation_size(quote_ident(n.nspname)||
'.' || quote_ident(c.relname)))) as size,
   sum(pg_total_relation_size(quote_ident(n.nspname)|| '.' ||
quote_ident(c.relname))) as size_bytes
  FROM pg_class c
  JOIN pg_namespace n ON (n.oid = c.relnamespace)
  JOIN pg_authid a ON ( a.oid = c.relowner )
  WHERE relname NOT LIKE 'pg_%'
AND relname NOT LIKE 'information%'
AND relname NOT LIKE 'sql_%'
AND relkind IN ('r')
GROUP BY 1
ORDER BY 3 DESC, 1, 2;


-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Confusing deadlock report

2016-03-19 Thread Thomas Kellerer
Albe Laurenz schrieb am 16.03.2016 um 13:20:
>> The error as reported in the Postgres log file is this:
>>
>> 2016-03-12 13:51:29.305 CET [23912]: [1-1] user=arthur,db=prod,app=[unknown] 
>> ERROR: deadlock detected
>> 2016-03-12 13:51:29.305 CET [23912]: [2-1] user=arthur,db=prod,app=[unknown] 
>> DETAIL: Process 23912
>> waits for ShareLock on transaction; blocked by process 24342.
>> Process 24342 waits for ShareLock on transaction 39632974; blocked 
>> by process 23912.
>> Process 23912: UPDATE alpha SET some_flag = $1 WHERE (id = $2)
>> Process 24342: INSERT INTO bravo (..., alpha_id) VALUES ($1, $2, $3, 
>> $4, $5, $6, $7, $8, $9,
>> $10)
>>
>> Can the foreign key between bravo and alpha play a role here? With some 
>> simple test setups I could not
>> get the insert to wait even if it was referencing the row that the other 
>> process has updated.
>>
>> This happened on 9.3.10 running on Debian
> 
> The probable culprit is a foreign key between these tables.
> 
> What foreign keys are defined?

The FK in question is:

   alter table bravo add foreign key (alpha_id) references alpha (id);

But by simply creating two tables (with a foreign key) and doing an update in 
one transaction and the insert in another, I do not get any locks or waiting 
transactions.
(And to be honest: I would have been pretty disappointed if I had)

Thomas





[GENERAL] Database not starting in init.d

2016-03-19 Thread ray madigans.org


I am attempting to setup a small postgresql database on a beaglebone black.  The
application is slow so I am storing the database on a 64GB microsd card.  I
formatted the drive, mounted it all correctly.

I logged in as postgres:
    sudo su postgres

# create the database cluster
    ./initdb /data   # the mountpoint is /data
# It all seemed to work correctly.  I can
    ./postgres -D /data
and the cluster seems to start correctly.

I want the database to start in init.d so I edited the three lines in
/etc/postgresql/9.1/postgresql.conf and pointed the three entries to /data.
When I start the machine the database fails with the following in the log file:

    autovacuum launcher started
    database system is ready
    incomplete startup packet
    received smart shutdown request
    autovacuum launcher shutting down
    database system is shutdown

I have obviously missed something, any pointer to where to find what I missed
would be appreciated.



Re: [GENERAL] Partition

2016-03-19 Thread Melvin Davidson
On Fri, Mar 18, 2016 at 7:08 AM, Sándor Daku  wrote:

>
> On 18 March 2016 at 10:55, Leonardo M. Ramé  wrote:
>
>> Hi, I have read and re-read the Partitioning chapter (
>> http://www.postgresql.org/docs/9.1/static/ddl-partitioning.html), but I
>> still don't see how to implement this use case:
>>
>> One table storing current data, let's call it the "master table", then
>> one or more partitions with old data.
>>
>> For example, the master table is this:
>>
>> create table log(
>>   idlog serial not null,
>>   date_time timestamp not null default now(),
>>   log varchar(255),
>>   primary key(idlog)
>> );
>>
>> The documentation says the master table should be empty, then a trigger
>> must evaluate a condition, the date_time field for example, and insert the
>> data into the corresponding table. This is a *strange* requirement, because in
>> the log example, new content is created without knowing its date and time in
>> advance. For example:
>>
>> insert into log(log) values('log this please.');
>>
>> The date_time column will set the now() value.
>>
>> Now, by following the example, to create a child table I'll do
>>
>> create table log_old( ) inherits (log);
>>
>> This raises the first question: how can I define a *dynamic* check
>> for "older than X days"? Is this possible?
>>
>> An idea (didn't test):
>>
>> check (date_time::date < now()::date - '30 day'::interval)
>>
>> Then, the trigger, after each insert should *move* old data to log_old.
>>
>> The only problem I see here is that the master table isn't empty, but contains
>> current data. The question is, will it work as expected? I mean, when I do
>> "select * from log" will I get a *union* of new and old data?
>>
>>
> I'm quite (but not completely) sure the dynamic constraint won't work.
> Also the log data - I guess - will be current, so nothing goes to the _old
> table, unless you keep nudging the records and use an update trigger to
> move the data around.
> Oh, and you should keep the parent table empty.
> The correct way would be to define fixed date ranges for the child tables
> and keep adding new ones as time advances. (And dropping old ones if you
> want.)
> log ->parent
> log_201603 -> child of log, check date_time>'2016.03.01' and
> date_time<='2016.04.01'
> log_201604 -> child of log, check date_time>'2016.04.01' and
> date_time<='2016.05.01'
>
> Or take a look at the pg_partman extension, which promises to do the
> legwork for you.
>
> Regards,
> Sándor.
>
>
>
>


It would really be helpful for future reference if you provided the
PostgreSQL version and O/S, but nevertheless:

The best way to do it is to make a new version of the old master
and base the child/partitions off of that.
eg:

CREATE TABLE log_new(
  idlog serial not null,
  date_time timestamp not null default now(),
  log varchar(255),
  primary key(idlog)
);

CREATE TABLE log1()
  INHERITS (log_new);
ALTER TABLE log1
  ADD CONSTRAINT log1_pk PRIMARY KEY (idlog);
CREATE TABLE log2()
  INHERITS (log_new);
ALTER TABLE log2
  ADD CONSTRAINT log2_pk PRIMARY KEY (idlog);
CREATE TABLE log3()
  INHERITS (log_new);
ALTER TABLE log3
  ADD CONSTRAINT log3_pk PRIMARY KEY (idlog);

  CREATE OR REPLACE FUNCTION log_insert_fn()
   RETURNS TRIGGER AS
   $$
   BEGIN
IF NEW.date_time < '2015-01-01' THEN
  INSERT INTO log1(idlog, date_time, log)
VALUES
   ( NEW.idlog, NEW.date_time, NEW.log );
ELSEIF NEW.date_time >= '2015-01-01' AND NEW.date_time <=
'2015-12-31' THEN
  INSERT INTO log2(idlog, date_time, log)
VALUES
   ( NEW.idlog, NEW.date_time, NEW.log );
ELSE
  INSERT INTO log3(idlog, date_time, log)
VALUES
   ( NEW.idlog, NEW.date_time, NEW.log );
END IF;
   RETURN NEW;
   END
   $$
   LANGUAGE plpgsql;

CREATE TRIGGER log_insert
  BEFORE INSERT ON log_new
  FOR EACH ROW
  EXECUTE PROCEDURE log_insert_fn();


Then, at a convenient time, split the data:
INSERT INTO log_new SELECT * FROM log;

and finally
ALTER TABLE log RENAME TO log_old;
ALTER TABLE log_new RENAME TO log;

You can then either keep or drop log_old.

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Error: insufficient data in the message

2016-03-19 Thread Michael Paquier
On Fri, Mar 18, 2016 at 9:00 AM, Tom Lane  wrote:
> Ranier VF  writes:
>> Hi, I ran into an error with PostgreSQL 9.5.0
>> Error:  insufficient data in the message
>> pgsql_pexec error: PQresultStatus=7
>
> Hmm ... I can't find the string "insufficient data in the message"
> anywhere in the Postgres sources.  And not "pgsql_pexec" either.
> So this must be coming from some client-side code you're using (not
> libpq).  It's unlikely we can help you much here; you need to chat with
> the author of the client-side library that's emitting that error.

> 2016-03-16 17:35:07 BRT ERRO:  dados insuficientes na mensagem

This is an error message in Portuguese, and it refers to an existing message:

#: libpq/pqformat.c:556 libpq/pqformat.c:574 libpq/pqformat.c:595
#: utils/adt/arrayfuncs.c:1444 utils/adt/rowtypes.c:556
#, c-format
msgid "insufficient data left in message"
msgstr "dados insuficientes na mensagem"

Perhaps the driver you are using is indeed linked with libpq and the
message translated in Portuguese got translated again? Anyway, we are
going to need more details regarding the way you ran this query, and
what version of libpq is used on the client side, which may not be
from 9.5.0. Compatibility is ensured by protocol version 3, so that
should work anyway.
-- 
Michael




Re: [HACKERS] [GENERAL] Request - repeat value of \pset title during \watch interations

2016-03-19 Thread David Steele
On 3/17/16 7:00 PM, Tom Lane wrote:
> David Steele  writes:
>> On 3/17/16 5:07 PM, David G. Johnston wrote:
>>> Figured out it had to be added to 2016-09...done
> 
>> Hmm ... this patch is currently marked "needs review" in CF 2016-03.  Am
>> I missing something, should this have been closed?
> 
> The message I saw was post-1-March.  If it was in fact submitted in
> time for 2016-03, then we owe it a review.

I meant to add the CF record and forgot:

https://commitfest.postgresql.org/9/480

It was added 2016-01-13 by Michael Paquier.

-- 
-David
da...@pgmasters.net




Re: [GENERAL] Error: insufficient data in the message

2016-03-19 Thread Adrian Klaver

On 03/18/2016 08:49 AM, Ranier VF wrote:

Ccing list for real this time, for reasons stated below.


Hi,

 > Per previous posts, the error is not coming from Postgres, but from your
 > client software. Some searching indicates pgsql_pexec is part of the
 > Asterisk PBX software. Is that the case?
No. The client code is our own.
Yes, the error is coming from the client code, not from Postgres.

libpq.dll (PostgreSQL 9.5.0), 32 bits, expects a BIGINT param.
Our own client code cannot format the bits correctly.

Would it be possible to get example code in C that formats a BIGINT
param with the MSVC 32-bit compiler?



I am not a C programmer and I do almost 0 development on Windows, so I 
have no answers for the above. Including the list as I am pretty sure 
there are folks on it who have the answer.




Best regards,

Ranier Vilela

 > Subject: Re: [GENERAL] Error: insufficient data in the message
 > To: ranier_...@hotmail.com
 > From: adrian.kla...@aklaver.com
 > Date: Fri, 18 Mar 2016 07:50:14 -0700
 >
 > On 03/18/2016 07:29 AM, Ranier VF wrote:
 >
 > Ccing list
 > > Hi, Thank your for response.
 > >
 > > After a hard time, I found this bug.
 > > I see what the problem is.
 > >
 > > length datatypes, in param[2]
 > > field is BIGINT (postgresql)
 > > param (num_long var) is unsigned long long (32bits)
 > >
 > > params[i].data.num_ulong = htonl(params[i].data.num_ulong);
 > > prep->bindValues[i] = (const uchar *)
 > > ¶ms[i].data.num_ulong;
 > > prep->bindLengths[i] = sizeof(ulong);
 > > prep->bindFormats[i] = 1;
 > >
 > > This fails miserably with Windows 32 bits (XP, Win7).
 > >
 > > If I change it to:
 > > prep->bindLengths[i] = 8;
 > >
 > > The PostgreSQL error goes away, but the wrong value is inserted.
 >
 > Per previous posts, the error is not coming from Postgres, but from your
 > client software. Some searching indicates pgsql_pexec is part of the
 > Asterisk PBX software. Is that the case?
 >
 > >
 > > For while, BIGINT is avoided for us.
 > >
 > > Best regards,
 > >
 > > Ranier Vilela
 >
 >
 > --
 > Adrian Klaver
 > adrian.kla...@aklaver.com



--
Adrian Klaver
adrian.kla...@aklaver.com
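
For the C question above: a bigint parameter sent in binary format must be
exactly 8 bytes in network byte order; htonl() only swaps 32 bits and
sizeof(ulong) is 4 on Win32, which would explain both symptoms. A minimal
sketch (the byte-swap helper is hand-rolled because 32-bit MSVC may not
provide htonll; table and column names follow the earlier messages):

#include <stdint.h>
#include <libpq-fe.h>

/* store a host-order 64-bit value as 8 big-endian bytes */
static void pack_int64_be(int64_t v, unsigned char out[8])
{
    int i;
    for (i = 7; i >= 0; i--)
    {
        out[i] = (unsigned char) (v & 0xFF);
        v >>= 8;
    }
}

/* bind one BIGINT parameter in binary format */
static PGresult *insert_company(PGconn *conn, int64_t company_uid)
{
    Oid           types[1]   = { 20 };   /* 20 = INT8OID (bigint) */
    unsigned char buf[8];
    const char   *values[1];
    int           lengths[1];
    int           formats[1];

    pack_int64_be(company_uid, buf);
    values[0]  = (const char *) buf;
    lengths[0] = 8;   /* a bigint is always 8 bytes on the wire */
    formats[0] = 1;   /* binary */

    return PQexecParams(conn,
                        "INSERT INTO tbCerts (Company_UID) VALUES ($1)",
                        1, types, values, lengths, formats, 0);
}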




Re: [GENERAL] spurious /dev/shm related errors on insert

2016-03-19 Thread Michael Charnoky
I should have noted: the application is using PostgreSQL 9.5.0, running on
Ubuntu 14.04

Mike

On Fri, Mar 18, 2016 at 10:41 AM, Michael Charnoky  wrote:

> Hi, I'm seeing random errors from an application that is performing DB
> inserts. The error happens spuriously and looks like this from the
> application side:
>
> could not open file "/dev/shm/postgres_apptable_47861701461760" for
> reading: No such file or directory
>
> The PostgreSQL logs show:
>
> 2016-03-18 07:25:01 UTC ERROR:  could not open file
> "/dev/shm/postgres_apptable_47861701461760" for reading: No such file or
> directory
> 2016-03-18 07:25:01 UTC STATEMENT:  COPY urltable FROM
> '/dev/shm/postgres_apptable_47861701461760' USING DELIMITERS '#' WITH NULL
> AS '\null';
>
> Any clues? I couldn't find any similar issues reported by other users.
> Thanks!
>
> Mike
>
>


Re: [GENERAL] BDR

2016-03-19 Thread Craig Ringer
On 15 March 2016 at 05:17, Dustin Kempter 
wrote:

> However my instances are not on the same server and I attempted to simply
> add a host=(the ip) but that failed. Please help
>

Review the logs on both hosts to see any errors during setup.

Note that you will need to drop and re-create the database if you need to
attempt setup again.
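
For reference, the node-join functions take full connection strings, so
multi-host setups are the normal case; a rough sketch based on the BDR
function API (host names, database name and node names are placeholders - see
the BDR documentation for the complete procedure):

-- on the first node
SELECT bdr.bdr_group_create(
    local_node_name   := 'node1',
    node_external_dsn := 'host=host1 port=5432 dbname=bdrdemo');

-- on the joining node
SELECT bdr.bdr_group_join(
    local_node_name   := 'node2',
    node_external_dsn := 'host=host2 port=5432 dbname=bdrdemo',
    join_using_dsn    := 'host=host1 port=5432 dbname=bdrdemo');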

-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


[GENERAL] Confused by the behavior of pg_basebackup with replication slot

2016-03-19 Thread Yi, Yi
Hello, 

I had an issue with the behavior of the pg_basebackup command. I was previously 
convinced that pg_basebackup always made a binary copy of the database cluster 
files of the postgres master. However, I recently noticed that pg_basebackup did 
not copy the replication slot object of the master, whereas a copy-command-based 
backup does copy the replication slot object. Is this difference designed on 
purpose?

Considering the difference mentioned above, I'm wondering whether there is 
anything else that pg_basebackup would NOT copy from the master.
In other words, what are the no-copy rules of pg_basebackup? 

Any help will be greatly appreciated. Thanks.

Best Regards.


P.S.The details of my test is shown as follows.


1.  Edit the postgresql.conf of the master to add replication slot

## ~(Skipping the edit operation of postgresql.conf)~
wing@master:~$ cat /mnt/data/pgdata/master/postgresql.conf | grep 
"max_replication"
max_replication_slots = 2 # max number of replication slots


2. Create one replication slot on master.

testdb=# select * from pg_create_physical_replication_slot('slot1');
 slotname | xlog_position 
----------+---------------
 slot1    | 
(1 row)

testdb=# SELECT * FROM pg_replication_slots;
 slot_name | plugin | slot_type | datoid | database | active | active_pid | xmin | catalog_xmin | restart_lsn 
-----------+--------+-----------+--------+----------+--------+------------+------+--------------+-------------
 slot1     |        | physical  |        |          | f      |            |      |              | 7/AD000220
(1 row)


3. Execute the pg_basebackup to make the base backup on the slave machine as 
slave1

## ~(Skipping the preparation operation of the pg_basebackup )~
wing@ubslave:~$ pg_basebackup -h {Master IP} -p 5432 -U wing -F p -P -x -R -D 
/home/wing/pgdata/slave1  -l baseup_slave1.log


4. Confirm the replication slot settings of slave1

## ~(Skipping the operation for starting the slave1 cluster )~
## The 'max_replication_slots' settings of postgresql.conf equals with the one 
on master. 
wing@ubslave:~$ cat /home/wing/pgdata/slave1/postgresql.conf | grep 
"max_replication"
max_replication_slots = 2 # max number of replication slots

## However, there is NO RECORD in “pg_replication_slots” of slave1.
testdb=# SELECT * FROM pg_replication_slots;
 slot_name | plugin | slot_type | datoid | database | active | active_pid | xmin | catalog_xmin | restart_lsn 
-----------+--------+-----------+--------+----------+--------+------------+------+--------------+-------------
(0 rows)


5. Use the “pg_start_backup -> copy -> pg_stop_backup” method to make the base 
backup on the slave machine as slave2

## 5.1 execute pg_start_backup() on master
## 5.2 scp the data files from master to slave2.
## 5.3 execute pg_stop_backup() on master


6. Confirm the replication slot settings of slave2

## ~(Skipping the operation for starting the slave2 cluster )~
## The 'max_replication_slots' settings of postgresql.conf equals with the one 
on master. 
wing@ubslave:~$ cat /home/wing/pgdata/slave2/postgresql.conf | grep 
"max_replication"
max_replication_slots = 2 # max number of replication slots

## Differing from slave1, slave2 had a record in “pg_replication_slots” which 
is similar to the one of master.
testdb=# SELECT * FROM pg_replication_slots;
 slot_name | plugin | slot_type | datoid | database | active | active_pid | xmin | catalog_xmin | restart_lsn 
-----------+--------+-----------+--------+----------+--------+------------+------+--------------+-------------
 slot1     |        | physical  |        |          | f      |            |      |              | 7/AB00C540
(1 row)


Both master and slave are running PostgreSQL 9.5.0 on Ubuntu Server 14.04(64 
bit)

-- 
Best Regards
---
FNST) Yi Yi
---








Re: [HACKERS] [GENERAL] Request - repeat value of \pset title during \watch interations

2016-03-19 Thread Robert Haas
On Thu, Mar 10, 2016 at 1:40 AM, David G. Johnston
 wrote:
> Adding -hackers for consideration in the Commitfest.

I don't much like how this patch uses the arbitrary constant 50 in no
fewer than 5 locations.

Also, it seems like we could arrange for head_title to be "" rather
than NULL when myopt.title is NULL.  Then instead of this:

+if (head_title)
+snprintf(title, strlen(myopt.title) + 50,
+ _("Watch every %lds\t%s\n%s"),
+ sleep, asctime(localtime(&timer)), head_title);
+else
+snprintf(title, 50, _("Watch every %lds\t%s"),
+ sleep, asctime(localtime(&timer)));

...we could just use the first branch of that if all the time.

 if (res == -1)
+{
+pg_free(title);
+pg_free(head_title);
 return false;
+}

Instead of repeating the cleanup code, how about making this break;
then, change the return statement at the bottom of the function to
return (res != -1).

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Confused by the behavior of pg_basebackup with replication slot

2016-03-19 Thread Julien Rouhaud
Hello,

On 19/03/2016 15:41, Yi, Yi wrote:
> Hello, 
> 
> I had an issue with the behavior of pg_basebackup command. I was convinced 
> previously that pg_basebackup command always made the binary copy of the 
> database cluster files of the postgres master. However, I recently noticed 
> that pg_basebackup did not copy the replication slot object of the 
> master, in comparison with the fact that the copy-command-based-backup did 
> copy the replication slot object. Is this difference designed on purpose ?
> 

Yes.

> Considering the difference mentioned above, I'm wondering whether there is 
> anything else that pg_basebackup would NOT copy from the master?
> In other words, what are the no-copy rules of pg_basebackup? 
> 

The full list is documented here:
http://www.postgresql.org/docs/current/static/protocol-replication.html
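
Note that the contents of pg_replslot are among the things skipped, which is
why slave1 had no slot while the raw file copy for slave2 kept it.  If a copy
made with pg_basebackup is later promoted and needs the slot, it can simply be
recreated by hand (the slot name here is just the one from your example):

    SELECT * FROM pg_create_physical_replication_slot('slot1');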

Regards.

> Any help will be greatly appreciated. Thanks.
> 
> Best Regards.
> 


-- 
Julien Rouhaud
http://dalibo.com - http://dalibo.org


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Confused by the behavior of pg_basebackup with replication slot

2016-03-19 Thread Julien Rouhaud
On 19/03/2016 15:58, Julien Rouhaud wrote:
> Hello,
> 
> On 19/03/2016 15:41, Yi, Yi wrote:
>> Hello, 
>>
>> I had an issue with the behavior of pg_basebackup command. I was convinced 
>> previously that pg_basebackup command always made the binary copy of the 
>> database cluster files of the postgres master. However, I recently noticed 
>> that pg_basebackup did not copy the replication slot object of the 
>> master, in comparison with the fact that the copy-command-based-backup did 
>> copy the replication slot object. Is this difference designed on purpose ?
>>
> 
> Yes.
> 
>> Considering the difference mentioned above, I'm wondering whether there is 
>> anything else that pg_basebackup would NOT copy from the master?
>> In other words, what are the no-copy rules of pg_basebackup? 
>>
> 
> The full list is documented here:
> http://www.postgresql.org/docs/current/static/protocol-replication.html
> 

Sorry I sent the mail a little too fast. The list is at the end of the page.

> Regards.
> 
>> Any help will be greatly appreciated. Thanks.
>>
>> Best Regards.
>>
> 
> 


-- 
Julien Rouhaud
http://dalibo.com - http://dalibo.org


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Database not starting in init.d

2016-03-19 Thread Adrian Klaver

On 03/17/2016 08:20 AM, ray madigans.org wrote:

I am attempting to setup a small postgresql database on a beaglebone
black.  The application is slow so I am storing the database on a 64GB
microsd card.  I formatted the drive, mounted it all correctly.

I logged in as postgres

sudo su postgres

# create the database cluster

./initdb /data   # the mountpoint is /data

# It all seemed to work correctly.  I can

./postgres -D /data and the cluster seems to start correctly.

I want the database to start in init.d so I edited the three lines in
/etc/postgresql/9.1/postgresql.conf and pointed the three entries to
/data.  When I start the machine the database fails with the following
in the log file




autovacuum launcher started

database system is ready


The server has started and is ready to go.



incomplete startup packet


Looks like something is trying to connect and failing.

Would that be the application you mention above?



received smart shutdown request


Something sent a SIGTERM.

What OS are you using on the board?

Are you hitting the OOM killer?

Might want to look in the system logs.



autovacuum launcher shutting down

database system is shutdown

I have obviously missed something, any pointer to where to find what I
missed would be appreciated.




--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] spurious /dev/shm related errors on insert

2016-03-19 Thread Michael Charnoky
Hi, I'm seeing random errors from an application that is performing DB
inserts. The error happens spuriously and looks like this from the
application side:

could not open file "/dev/shm/postgres_apptable_47861701461760" for
reading: No such file or directory

The PostgreSQL logs show:

2016-03-18 07:25:01 UTC ERROR:  could not open file
"/dev/shm/postgres_apptable_47861701461760" for reading: No such file or
directory
2016-03-18 07:25:01 UTC STATEMENT:  COPY urltable FROM
'/dev/shm/postgres_apptable_47861701461760' USING DELIMITERS '#' WITH NULL
AS '\null';

Any clues? I couldn't find any similar issues reported by other users.
Thanks!

Mike


Re: [GENERAL] Crypt change in 9.4.5

2016-03-19 Thread Jan de Visser
On Friday, March 18, 2016 1:18:01 PM EDT ando...@aule.net wrote:
> Hi,
> 
> After upgrading to PostgreSQL 9.4.6, our test system gave error messages
> like:
> 
> ERROR: invalid salt
> 
> The cause of these errors is statements like:
> 
> WHERE password = crypt('secret', 'secret')
> 
> After reverting to Postgres 9.4.4 the test system worked properly again.
> 
> This might be related to a security fix in 9.4.5:
> 
> ---
> Fix contrib/pgcrypto to detect and report too-short crypt() salts (Josh
> Kupershmidt)
> Certain invalid salt arguments crashed the server or disclosed a few bytes
> of server memory. We have not ruled out the viability of attacks that
> arrange for presence of confidential information in the disclosed bytes, but
> they seem unlikely. (CVE-2015-5288)
> ---
> 
> The "crypt" call is hardcoded in legacy code that hasn't been recompiled in
> years. Are there ways to keep the old code running against a newer Postgres
> version?

You could get the source of 9.4.6 from git, back out the commit for that fix, 
and compile.
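
Alternatively, if recompiling the legacy code ever becomes an option, the
usual pgcrypto pattern sidesteps the problem entirely: generate a real salt
when storing the password and reuse the stored hash as the salt when checking.
Table and column names below are only illustrative:

-- store
UPDATE accounts SET password = crypt('secret', gen_salt('bf')) WHERE user_id = 1;
-- check
SELECT 1 FROM accounts WHERE user_id = 1 AND password = crypt('secret', password);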





-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] spurious /dev/shm related errors on insert

2016-03-19 Thread Tom Lane
Michael Charnoky  writes:
> Hi, I'm seeing random errors from an application that is performing DB
> inserts. The error happens spuriously and looks like this from the
> application side:

> could not open file "/dev/shm/postgres_apptable_47861701461760" for
> reading: No such file or directory

Offhand I do not believe that any part of the core PG code would attempt
to access such a file.  Maybe you've got some extensions in there that
would do so?

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Deleting schema - saving up space - PostgreSQL 9.2

2016-03-19 Thread drum.lu...@gmail.com
>
>>
>>
> ​I don't deal with Hot Standby's in my day-to-day but if you DROP SCHEMA
> all of the space consumed by indexes and tables in that schema will be
> freed.  The vacuum might make a small difference in performance on the
> system catalogs (pg_class, stats, etc)  that were updated but with respect
> to the dropped schema there won't be anything present there for vacuum to
> touch.
>
> Create and populate a dummy table in a test setup, measure the HD space
> taken in PGDATA, then drop it and measure again to see it in action.
>
> I've only done this using "TRUNCATE" - I've got a system with space
> constraints a the same kind of "file data" table and freed up around 20GB
> with a single fast truncate (though ensuring FKs wouldn't be a problem was
> fun...).
>
> David J.
>
>
Have made some tests just by doing:

1 - Creating a master server
2 - Creating a slave server
3 - Creating a table on public schema and creating a gorfs schema with
another table
4 - inserting some random data into gorfs.test (*insert into gorfs.test
(descr) values (unnest(array(select md5(random()::text) from
generate_series(1, 3000))));*)
5 - The data has been replicated into the slave - *PASS*
6 - Dropping the schema on the master server - *PASS* (Had to use the
CASCADE option)
7 - The schema has gone on the slave server as well
8 - Checked the free space - *PASS* (*I had more free space after deleting
the schema*)

So it seems that just doing a DROP SCHEMA will be enough :)
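
(One simple way to do the free-space check in step 8, for reference:

    SELECT pg_size_pretty(pg_database_size(current_database()));

or just compare the size of the data directory on disk.)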


Re: [GENERAL] Error: insufficient data in the message

2016-03-19 Thread Adrian Klaver

On 03/16/2016 01:49 PM, Ranier VF wrote:

Hi, I ran into an error with PostgreSQL 9.5.0

Prepared Statment:
INSERT INTO tbCerts (Company_UID, User_UID, Cert_Blob_Size, Cert_Format,
After_Date, Before_Date,
Cert_Blob, Password1, Key_Name, Cert_Blob_Type, Cert_Use, Status)
VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11, $12);

Binds values:
param[0]=1242004 /* htonl */
param[1]=1242028 /* htonl */
param[2]=135967325878940612 /* htonl */
param[3]=1242076 /* htonl */
param[4]='26/05/2015'
param[5]='25/05/2016'
param[6]=blob size=9529 /* blob which may contain 0x0 data, obviously. */
param[7]=3B0429150961134E
param[8]=C:\documentos\RC SOFTWARE\clientes\asbosch\bef.pfx
param[9]=pfx
param[10]=S
param[11]=A

Error:  insufficient data in the message
pgsql_pexec error: PQresultStatus=7

Log:
2016-03-16 17:35:07 BRT ERRO:  dados insuficientes na mensagem
2016-03-16 17:35:07 BRT COMANDO:  INSERT INTO tbCerts (Company_UID,
User_UID, Cert_Blob_Size, Cert_Format, After_Date, Before_Date,
Cert_Blob, Password1, Key_Name, Cert_Blob_Type, Cert_Use, Status) VALUES
($1, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11, $12);

PostgreSQL 9.5.0 32bits
Client 32 bits (libpq.dll)

This is very frustrating, what is wrong?


What client are you using to run the query?

Are you sure you do not have an off-by-one error? Your bind values are 
0-indexed, while your parameters are not.


Could it be they are not matching up?

Hard to tell without seeing the complete code.


How can I debug this?

Best,

Ranier Vilela




--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Confusing deadlock report

2016-03-19 Thread Albe Laurenz
Thomas Kellerer wrote:
>>> The error as reported in the Postgres log file is this:
>>>
>>> 2016-03-12 13:51:29.305 CET [23912]: [1-1] 
>>> user=arthur,db=prod,app=[unknown] ERROR: deadlock detected
>>> 2016-03-12 13:51:29.305 CET [23912]: [2-1] 
>>> user=arthur,db=prod,app=[unknown] DETAIL: Process 23912
>>> waits for ShareLock on transaction; blocked by process 24342.
>>> Process 24342 waits for ShareLock on transaction 39632974; blocked 
>>> by process 23912.
>>> Process 23912: UPDATE alpha SET some_flag = $1 WHERE (id = $2)
>>> Process 24342: INSERT INTO bravo (..., alpha_id) VALUES ($1, $2, 
>>> $3, $4, $5, $6, $7, $8, $9,
>>> $10)
>>>
>>> Can the foreign key between bravo and alpha play a role here? With some 
>>> simple test setups I could not
>>> get the insert to wait even if it was referencing the row that the other 
>>> process has updated.
>>>
>>> This happened on 9.3.10 running on Debian

>> The probable culprit is a foreign key between these tables.
>>
>> What foreign keys are defined?

> The FK in question is:
> 
>alter table bravo foreign key (alpha_id) references alpha (id);
> 
> But by simply creating two tables (with a foreign key) and doing an update in 
> one transaction and the
> insert in another, I do not get any locks or waiting transactions.
> (And to be honest: I would have been pretty disappointed if I had)

Hm, true; I cannot get a lock with these two statements.

Can you determine what statements were executed in these transactions before 
the deadlock?
It was probably one of these that took the conflicting lock.

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] BDR

2016-03-19 Thread John R Pierce

On 3/14/2016 2:43 PM, Roland van Laar wrote:
However my instances are not on the same server and I attempted to 
simply add a host=(the ip) but that failed.

There are a couple of other factors:
- is postgres running on an external available ip?
- is there a replication user with a password? 


3: are the servers configured to allow network connections from each 
other?  appropriate authentication settings in pg_hba.conf on both sides?
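
for example, something along these lines on each node (the user and network
are just placeholders for whatever you actually use; depending on the setup
you may need both the database and the replication entry):

# pg_hba.conf
host    all            bdr_user    192.168.1.0/24    md5
host    replication    bdr_user    192.168.1.0/24    md5

plus listen_addresses set appropriately in postgresql.conf.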




--
john r pierce, recycling bits in santa cruz



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] vacuum - reclaiming disk space.

2016-03-19 Thread Mike Blackwell
I have a large table with numerous indexes which has approximately doubled
in size after adding a column - every row was rewritten and 50% of the
tuples are dead.  I'd like to reclaim this space, but VACUUM FULL cannot
seem to finish within the scheduled downtime.

Any suggestions for reclaiming the space without excessive downtime?


* *


Re: [GENERAL] How to Qualifying or quantify risk of loss in asynchronous replication

2016-03-19 Thread otheus uibk
On Wednesday, March 16, 2016, Thomas Munro 
wrote:
> In asynchronous replication, the primary writes to the WAL and flushes
the disk.  Then, for any standbys that happen to be connected, a WAL sender
 process trundles along behind feeding new WAL down the socket as soon as
it can, but it can be running arbitrarily far behind or not running at all
(the network could be down or saturated, the standby could be temporarily down
or up but not reading the stream fast enough, etc etc).

Thanks for your help on finding the code. To be more precise, in the 9.1.8
code, I see this:

 1. [backend] WAL is flushed to disk
 2. [backend] WAL-senders are sent  SIGUSR1 to wake up
 3. [backend] wait for responses from other SyncRep-Receiver, effectively
skipped if none
 [wal-sender] wakes up
 4. [backend] end-of-xact cycle
 [wal-sender] reads WAL (XLogRead) up to MAX_SEND_SIZE (or less) bytes
 5. [backend] ? is there an ACK sent to the client?
 [wal-sender] sends chunk to WAL-receiver using the
pq_putmessage_noblock call
 6. [wal-sender] repeats reading-sending loop

So if the WAL record is bigger than whatever MAX_SEND_SIZE is (in my
source, I see 8k * 16 = 128 kB, so 1 Mb (roughly)), the WAL may end up
sleeping (between iterations of 5 and 6).

On Wed, Mar 16, 2016 at 10:21 AM, otheus uibk  wrote:

> Section 25.2.5. "The standby connects to the primary, which streams WAL
> records to the standby as they're generated, without waiting for the WAL
> file to be filled."

 Section 25.2.6 "If the primary server crashes then some transactions that
> were committed may not have been replicated to the standby server, causing
> data loss. The amount of data loss is proportional to the replication delay
> at the time of failover."


Both these statements, then, from the documentation perspective, are
incorrect, at least to a pedant. For 25.2.5, the primary streams WAL
records to the standby after they've been flushed to disk, but without
waiting for the file to be filled. For 25.2.6 it's not clear: it is possible
that some transactions were *written* to the local WAL and reported as
committed but not yet *sent* to the standby server.

Somehow, the documentation misleads (me) to believe the async replication
algorithm at least guarantees WAL records are *sent* before responding
"committed" to the client. I now know this is not the case. *grumble*.

How can I help make the documentation clearer on this point?

-- 
Otheus
otheus.u...@gmail.com
otheus.shell...@uibk.ac.at


Re: [GENERAL] Insert data in two columns same table

2016-03-19 Thread David G. Johnston
On Wed, Mar 16, 2016 at 6:49 PM, Andreas Kretschmer  wrote:

>
>
> > "drum.lu...@gmail.com"  hat am 17. März 2016 um
> 02:34
> > geschrieben:
> >
> >
> > I'm trying to insert data from TABLE A to TABLE B.
> >
> > 1 - Select billable_id from dm.billable
> > 2 - Select mobiuser_id from ja_mobiusers
> > 3 - Insert the billable_id and the mobiuser_id to the dm.billables_links
> > table.
> >
> >
> > *FYI -* It has to be in the same transaction because the mobiuser_id must
> > go to the selected billable_id on the first select.
> >
> > Well... Would be something like:
> >
> > > INSERT INTO dm.billables_links (billable_id) VALUES ((SELECT
> billable_id
> > > FROM junk.wm_260_billables2 WHERE info ilike '%Alisha%')),
> > > INSERT INTO dm.billables_links (mobiuser_id) VALUES ((SELECT id FROM
> > > public.ja_mobiusers WHERE name_first LIKE 'Alisha%' AND name_last LIKE
> > > 'Dson%'))
> >
> >
> >
> > The problem is that I need to do that at the same time, because of a
> > constraint:
> >
> > ALTER TABLE dm.billables_links
> >   ADD CONSTRAINT cc_one_and_only_one_target CHECK ((("customer_id" IS
> > NOT NULL)::integer + ("role_id" IS NOT NULL)::integer + ("mobiuser_id"
> > IS NOT NULL)::integer) = 1);
> >
> > I'm having trouble by creating that SQL... can anyone help please?
>
>
> I see a lot of other problems: you have 3 independent tables. Your 2 queries
> (selects) return 2 independent results; you can't use that for insert into
> the
> 3rd table. And i think, you are looking for an update, not insert. So you
> have
> to define how your tables are linked together (join).
>
> Can you explain how these tables are linked together?
>
>
If we assume both queries will only ever return, at most, one row:

INSERT INTO billables_links (customer_id, mobiuser_id, role_id)
SELECT customer_id, mobiuser_id, null AS role_id
FROM (SELECT customer_id FROM customer WHERE [...]) cust
FULL JOIN (SELECT mobiuser_id FROM mobiuser WHERE [...]) mobi ON (true)
 -- basically a CROSS JOIN, but allows for one of the sides to be omitted

David J.


[GENERAL] Partition

2016-03-19 Thread Leonardo M . Ramé
Hi, I have read and re-read the Partitioning chapter 
(http://www.postgresql.org/docs/9.1/static/ddl-partitioning.html), but I 
still don't see how to implement this use case:


One table storing current data, let's call it the "master table", then 
one or more partitions with old data.


For example, the master table is this:

create table log(
  idlog serial not null,
  date_time timestamp not null default now(),
  log varchar(255),
  primary key(idlog)
);

The documentation says the master table should be empty, then a trigger 
must evaluate a condition, the date_time field for example, and insert 
the data in the corresponding table. This is a *rare* condition, because 
in the log example, new content is created without knowing its date and 
time in advance. For example:


insert into log(log) values('log this please.');

The date_time column will set the now() value.

Now, by following the example, to create a child table I'll do

create table log_old( ) inherits (log);

This raises the first question: how can I define a *dynamic* check
for rows older than X days? Is this possible?

An idea (didn't test):

check (date_time::date < now()::date - '30 day'::interval)

Then, the trigger, after each insert should *move* old data to log_old.
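
For concreteness, the trigger I have in mind would be roughly this (untested):

-- move rows older than 30 days from the parent into log_old
CREATE OR REPLACE FUNCTION log_move_old() RETURNS trigger AS $$
BEGIN
  WITH moved AS (
    DELETE FROM ONLY log
     WHERE date_time < now() - interval '30 days'
     RETURNING *
  )
  INSERT INTO log_old SELECT * FROM moved;
  RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER log_move_old_trg
  AFTER INSERT ON log
  FOR EACH STATEMENT
  EXECUTE PROCEDURE log_move_old();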

The only problem I see here is that the master table isn't empty, but 
contains current data. The question is, will it work as expected? I 
mean, when I do "select * from log", will I get a *union* of new and old data?


Regards,
--
Leonardo M. Ramé
Medical IT - Griensu S.A.
Av. Colón 636 - Piso 8 Of. A
X5000EPT -- Córdoba
Tel.: +54(351)4246924 +54(351)4247788 +54(351)4247979 int. 19
Cel.: +54 9 (011) 40871877


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] vacuum - reclaiming disk space.

2016-03-19 Thread Melvin Davidson
On Thu, Mar 17, 2016 at 10:57 AM, bricklen  wrote:

> On Thu, Mar 17, 2016 at 7:27 AM, Mike Blackwell 
> wrote:
>
>> I have a large table with numerous indexes which has approximately
>> doubled in size after adding a column - every row was rewritten and 50% of
>> the tuples are dead.  I'd like to reclaim this space, but VACUUM FULL
>> cannot seem to finish within the scheduled downtime.
>>
>> Any suggestions for reclaiming the space without excessive downtime?
>>
>
> pg_repack is a good tool for removing bloat.
> https://github.com/reorg/pg_repack
>
>
"I have a large table with numerous indexes :
My first thought is, "DEFINE NUMEROUS". How many indexes do you actually
have? How many of those indexes are actually used? In addition to VACUUMing
the table, it also needs to go through every index you have.
So find out if you have any unneeded indexes with:

SELECT n.nspname as schema,
   i.relname as table,
   i.indexrelname as index,
   i.idx_scan,
   i.idx_tup_read,
   i.idx_tup_fetch,
   pg_size_pretty(pg_relation_size(quote_ident(n.nspname) || '.' ||
quote_ident(i.relname))) AS table_size,
   pg_size_pretty(pg_relation_size(quote_ident(n.nspname) || '.' ||
quote_ident(i.indexrelname))) AS index_size,
   pg_get_indexdef(idx.indexrelid) as idx_definition
  FROM pg_stat_all_indexes i
  JOIN pg_class c ON (c.oid = i.relid)
  JOIN pg_namespace n ON (n.oid = c.relnamespace)
  JOIN pg_index idx ON (idx.indexrelid =  i.indexrelid )
 WHERE i.idx_scan = 0
   AND NOT idx.indisprimary
   AND NOT idx.indisunique
 ORDER BY 1, 2, 3;

Then drop any index that shows up!
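
For example (names are illustrative; CONCURRENTLY, available since 9.2,
avoids blocking writers while the index goes away):

DROP INDEX CONCURRENTLY myschema.unused_idx;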

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] log temp files are created twice in PL/pgSQL function

2016-03-19 Thread Pavel Stehule
Hi

2016-03-16 15:58 GMT+01:00 Dang Minh Huong :

> Hi,
>
> Why are temp file log entries created twice when a query is executed in
> a PL/pgSQL function?
> Would you please explain it to me?
>

PL/pgSQL materializes the result internally.

Usually PostgreSQL operations are executed row by row. But some SRF
functions, like PL/pgSQL functions, don't support this mode and instead
return a tuplestore - a materialized result.

Using this technique is convenient, but it comes with some performance
risks. Unfortunately, you cannot change this behavior - not in PL/pgSQL.

You can write a C function with the same functionality that returns its
result row by row. That is not possible in PL/pgSQL.

On the other hand, you can try to increase work_mem (if your server has
enough RAM). The materialized result only spills to temporary files when
the available memory (controlled by work_mem) is too small.

You can try

SET work_mem to '20MB';
SELECT test_tempfiles();

Regards

Pavel


>
> As the test results below show, temp file log entries are created twice
> when a SELECT statement is put into a PL/pgSQL function. It leads to a
> little performance degradation.
> Is there any way to define the PL/pgSQL function to avoid this issue?
> # I am using PostgreSQL 9.3.9
>
> my test results
> -
> [postgres@test]$ psql -c "select test_tempfiles();" > /dev/null
> LOG:  temporary file: path "base/pgsql_tmp/pgsql_tmp2223.0", size 3244032
> CONTEXT: PL/pgSQL function test_cursor() line 3 at RETURN QUERY
> LOG: temporary file: path "base/pgsql_tmp/pgsql_tmp23223.1", size 2828062
> LOG: duration: 421.426 ms statement: select test_tempfiles();
>
> [postgres@test]$ psql -c "select name from testtbl order by id" >
> /dev/null
> LOG: temporary file: path "base/pgsql_tmp/pgsql_tmp25586.0", size 2850816
> LOG: duration: 389.054 ms statement: select random from testtbl order by
> random
> -
>
> test_tempfiles() function is defined as below
> -
> CREATE OR REPLACE FUNCTION public.test_tempfiles()
> RETURNS TABLE(name text)
> LANGUAGE plpgsql
> AS
> $function$
> begin
> return query execute "select name from testtbl order by id ";
> end;
> $function$
> -
>
> Thanks and best regards,
> bocap
>
>
>
>
>
>
>
>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


Re: [GENERAL] Insert data in two columns same table

2016-03-19 Thread Andreas Kretschmer

> >
> Hi Andreas!
> 
> Well...
> 
> There are two tables that I need to get data from(dm.billables /
> public.ja_mobiusers), and a third table (dm.billables_links) that I need to
> insert data from those two tables.

Let's start from here. You have 2 tables:

test=*# select * from source1;
 i
---
 1
 2
 3
(3 rows)

test=*# select * from source2;
 i
---
 1
 2
 3
(3 rows)


You can combine this 2 tables via cross join:

test=*# select * from source1 cross join (select * from source2) x;
 i | i
---+---
 1 | 1
 1 | 2
 1 | 3
 2 | 1
 2 | 2
 2 | 3
 3 | 1
 3 | 2
 3 | 3
(9 rows)


as you can see there are 9 different combinations. You can insert all the
different combinations into a destination table:


test=*# create table destination (s1 int, s2 int);
CREATE TABLE
test=*# insert into destination select * from source1 cross join (select * from
source2) x;
INSERT 0 9
test=*# select * from destination ;
 s1 | s2
----+----
  1 |  1
  1 |  2
  1 |  3
  2 |  1
  2 |  2
  2 |  3
  3 |  1
  3 |  2
  3 |  3
(9 rows)


That's all, right? Keep in mind: you get N * M different combinations from the 2
tables.









-- 
Andreas Kretschmer
http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Insert data in two columns same table

2016-03-19 Thread Andreas Kretschmer

> 
> 
> assuming those two queries 1 and 2 return multiple rows, which rows of 
> junk.wm_260_billables2  match up with what rows of public.ja_mobiusers  ?
> 
> your schema is very poorly defined.  I think you need to take a class in 
> relational database design and usage, or read a good book on it at least..
> 
> 
> the *CORRECT* SOLUTION WOULD BE MORE LIKE


yepp, full ack.


-- 
Andreas Kretschmer
http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg_dump crashing

2016-03-19 Thread Adrian Klaver

On 03/15/2016 08:10 AM, Matthias Schmitt wrote:

Hello,

For two weeks I have been trying to get PostgreSQL 9.5.1 to run on Debian 8.3. 
Everything is fine except the daily backup. When calling pg_dump as part of a 
cron job pg_dump crashes:

2016-03-15 01:00:02 CETFATAL:  semctl(23232524, 3, SETVAL, 0) failed: Invalid 
argument
2016-03-15 01:00:02 CETLOG:  server process (PID 22279) exited with exit code 1
2016-03-15 01:00:02 CETLOG:  terminating any other active server processes
2016-03-15 01:00:02 CETWARNING:  terminating connection because of crash of 
another server process
2016-03-15 01:00:02 CETDETAIL:  The postmaster has commanded this server 
process to roll back the current transaction and exit, because another server 
process exited abnormally and possibly corrupted shared memory.
2016-03-15 01:00:02 CETHINT:  In a moment you should be able to reconnect to 
the database and repeat your command.
2016-03-15 01:00:02 CETLOG:  all server processes terminated; reinitializing
2016-03-15 01:00:02 CETLOG:  could not remove shared memory segment 
"/PostgreSQL.1804289383": No such file or directory
2016-03-15 01:00:02 CETLOG:  semctl(22839296, 0, IPC_RMID, ...) failed: Invalid 
argument
2016-03-15 01:00:02 CETLOG:  semctl(22872065, 0, IPC_RMID, ...) failed: Invalid 
argument
2016-03-15 01:00:02 CETLOG:  semctl(22904834, 0, IPC_RMID, ...) failed: Invalid 
argument
…

I am calling pg_dump in my cronjob like this:
su - mmppostgres -c "/Users/…/bin/pg_dump -p 5433 mydatabase_1_0_0 > 
/my_backup_path/mydatabase_1_0_0.dump"

After the crash the database runs in recovery mode. A restart of the database 
brings everything back to normal.
This crash is always reproducible and occurs every night during backup. When 
calling the same command via the command line everything runs fine. In the 
system log I can see:



Is the command you run via the command line exactly the same, including 
the su -?


What user are you running the cronjob as?

How do you supply the password for the mmppostgres user?


Any ideas where to continue my search?

Best regards

Matthias Schmitt

magic moving pixel s.a.
23, Avenue Grande-Duchesse Charlotte
L-3441 Dudelange
Luxembourg
Phone: +352 54 75 75
http://www.mmp.lu









--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Insert data in two columns same table

2016-03-19 Thread Andreas Kretschmer


> "drum.lu...@gmail.com"  hat am 17. März 2016 um 02:34
> geschrieben:
> 
> 
> I'm trying to insert data from TABLE A to TABLE B.
> 
> 1 - Select billable_id from dm.billable
> 2 - Select mobiuser_id from ja_mobiusers
> 3 - Insert the billable_id and the mobiuser_id to the dm.billables_links
> table.
> 
> 
> *FYI -* It has to be in the same transaction because the mobiuser_id must
> go to the selected billable_id on the first select.
> 
> Well... Would be something like:
> 
> > INSERT INTO dm.billables_links (billable_id) VALUES ((SELECT billable_id
> > FROM junk.wm_260_billables2 WHERE info ilike '%Alisha%')),
> > INSERT INTO dm.billables_links (mobiuser_id) VALUES ((SELECT id FROM
> > public.ja_mobiusers WHERE name_first LIKE 'Alisha%' AND name_last LIKE
> > 'Dson%'))
> 
> 
> 
> The problem is that I need to do that at the same time, because of a
> constraint:
> 
> ALTER TABLE dm.billables_links
>   ADD CONSTRAINT cc_one_and_only_one_target CHECK ((("customer_id" IS
> NOT NULL)::integer + ("role_id" IS NOT NULL)::integer + ("mobiuser_id"
> IS NOT NULL)::integer) = 1);
> 
> I'm having trouble by creating that SQL... can anyone help please?


I see a lot of other problems: you have 3 independent tables. Your 2 queries
(selects) return 2 independent results; you can't use that for insert into the
3rd table. And I think you are looking for an update, not an insert. So you have
to define how your tables are linked together (join).

Can you explain how these tables are linked together?



-- 
Andreas Kretschmer
http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] MongoDB 3.2 beating Postgres 9.5.1?

2016-03-19 Thread pbj
 

On Tuesday, March 15, 2016 7:39 PM, "p...@cmicdo.com"  
wrote:
 
 > Your results are close enough to mine, I think, to prove the point.
 > And, I agree that the EDB benchmark is not necessarily reflective of a
 > real-world scenario.
 > 
 > However, the cache I'm referring to is PG's shared_buffer cache.
 > You can see the first run of the select causing a lot of disk reads.
 > The second identical run, reads purely from shared_buffers.
 > 
 > What I don't understand is, why does a slightly different select from
 > the *same* table during the same session cause shared_buffers to be
 > blown out and re-read??
 > 
 > I will see if I can try YCSB next week (I'm in workshops all week...)
 > 
 > Thanks!

I was able to try YCSB today on both PG 9.5.1 and Mongo 3.2.  At first, PG
was running 4 times slower than Mongo.  Then I remembered about unlogged
tables (which I think is how Mongo operates all the time), and remade
the PG table as UNLOGGED.  In a 50/50 read/update test over 1M records,
PG ran in 0.62 of the time of Mongo.
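
(For anyone reproducing this: on 9.5 an existing table can be switched with
something like the following -- "usertable" is just YCSB's default table name:

ALTER TABLE usertable SET UNLOGGED;

or the table can be created UNLOGGED up front.)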

PG Load:

[OVERALL], RunTime(ms), 104507.0
[OVERALL], Throughput(ops/sec), 9568.737022400413
[CLEANUP], Operations, 1.0
[CLEANUP], AverageLatency(us), 293.0
[CLEANUP], MinLatency(us), 293.0
[CLEANUP], MaxLatency(us), 293.0
[CLEANUP], 95thPercentileLatency(us), 293.0
[CLEANUP], 99thPercentileLatency(us), 293.0
[INSERT], Operations, 100.0
[INSERT], AverageLatency(us), 101.329235
[INSERT], MinLatency(us), 88.0
[INSERT], MaxLatency(us), 252543.0
[INSERT], 95thPercentileLatency(us), 121.0
[INSERT], 99thPercentileLatency(us), 141.0
[INSERT], Return=OK, 100
 
PG Run:
---
[OVERALL], RunTime(ms), 92763.0
[OVERALL], Throughput(ops/sec), 10780.16019318047
[READ], Operations, 499922.0
[READ], AverageLatency(us), 79.1722428698877
[READ], MinLatency(us), 69.0
[READ], MaxLatency(us), 19935.0
[READ], 95thPercentileLatency(us), 94.0
[READ], 99thPercentileLatency(us), 112.0
[READ], Return=OK, 499922
[CLEANUP], Operations, 1.0
[CLEANUP], AverageLatency(us), 222.0
[CLEANUP], MinLatency(us), 222.0
[CLEANUP], MaxLatency(us), 222.0
[CLEANUP], 95thPercentileLatency(us), 222.0
[CLEANUP], 99thPercentileLatency(us), 222.0
[UPDATE], Operations, 500078.0
[UPDATE], AverageLatency(us), 98.96430156895525
[UPDATE], MinLatency(us), 83.0
[UPDATE], MaxLatency(us), 26655.0
[UPDATE], 95thPercentileLatency(us), 127.0
[UPDATE], 99thPercentileLatency(us), 158.0
[UPDATE], Return=OK, 500078
 
Mongo Load:
---
[OVERALL], RunTime(ms), 133308.0
[OVERALL], Throughput(ops/sec), 7501.425270801452
[CLEANUP], Operations, 1.0
[CLEANUP], AverageLatency(us), 1822.0
[CLEANUP], MinLatency(us), 1822.0
[CLEANUP], MaxLatency(us), 1822.0
[CLEANUP], 95thPercentileLatency(us), 1822.0
[CLEANUP], 99thPercentileLatency(us), 1822.0
[INSERT], Operations, 100.0
[INSERT], AverageLatency(us), 130.830678
[INSERT], MinLatency(us), 90.0
[INSERT], MaxLatency(us), 7147519.0
[INSERT], 95thPercentileLatency(us), 159.0
[INSERT], 99thPercentileLatency(us), 226.0
[INSERT], Return=OK, 100
 
Mongo Run:
-
[OVERALL], RunTime(ms), 149150.0
[OVERALL], Throughput(ops/sec), 6704.65973851827
[READ], Operations, 500837.0
[READ], AverageLatency(us), 98.13153980237084
[READ], MinLatency(us), 69.0
[READ], MaxLatency(us), 28271.0
[READ], 95thPercentileLatency(us), 166.0
[READ], 99thPercentileLatency(us), 186.0
[READ], Return=OK, 500837
[CLEANUP], Operations, 1.0
[CLEANUP], AverageLatency(us), 2387.0
[CLEANUP], MinLatency(us), 2386.0
[CLEANUP], MaxLatency(us), 2387.0
[CLEANUP], 95thPercentileLatency(us), 2387.0
[CLEANUP], 99thPercentileLatency(us), 2387.0
[UPDATE], Operations, 499163.0
[UPDATE], AverageLatency(us), 195.21505600375028
[UPDATE], MinLatency(us), 118.0
[UPDATE], MaxLatency(us), 4513791.0
[UPDATE], 95thPercentileLatency(us), 211.0
[UPDATE], 99thPercentileLatency(us), 252.0
[UPDATE], Return=OK, 499163


 > 
 > 
 > On Monday, March 14, 2016 3:34 AM, Dmitry Dolgov <9erthali...@gmail.com> 
wrote:
 > 
 > 
 > Hi, Paul
 > 
 > I agree with Oleg, EDB benchmarks are strange sometimes. I did the same 
benchmarks several months ago. I never noticed the cache influence back then, 
so I tried to reproduce your situation now (on 5*10^6 records, though). I 
started to play with the OS cache (using `echo 3 > /proc/sys/vm/drop_caches`), and I 
see a difference in execution time for two subsequent queries, but the `explain` info 
is almost identical, e.g. `shared hit & read`:
 > 
 > 


  

Re: [GENERAL] Confusing deadlock report

2016-03-19 Thread Rakesh Kumar
Is there a possibility that there is no index on the FK column
bravo.alpha_id?
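
If it is indeed missing, it can be added with something like (the index name
is just illustrative):

CREATE INDEX bravo_alpha_id_idx ON bravo (alpha_id);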

On Wed, Mar 16, 2016 at 11:09 AM, Albe Laurenz 
wrote:

> Thomas Kellerer wrote:
> >> Can you determine what statements were executed in these transactions
> before the deadlock?
> >> It was probably one of these that took the conflicting lock.
> >
> > Unfortunately not. Statement logging is not enabled on that server
> (space-constrained).
> >
> > And while we know the statements that can possibly be executed by these
> parts of the application,
> > several on them depend on the actual data, so it's hard to tell which
> path the two transactions
> > actually used.
>
> But that's where the solution to your problem must be...
>
> Look at all statements that modify "alpha" and could be in the same
> transaction
> with the INSERT to "bravo".
>
> Yours,
> Laurenz Albe
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


Re: [GENERAL] How to Qualifying or quantify risk of loss in asynchronous replication

2016-03-19 Thread Andrew Sullivan
On Wed, Mar 16, 2016 at 10:40:03PM +0100, otheus uibk wrote:
> Somehow, the documentation misleads (me) to believe the async replication
> algorithm at least guarantees WAL records are *sent* before responding
> "committed" to the client. I now know this is not the case. *grumble*.
> 
> How can I help make the documentation clearer on this point?

Well, I never had the understanding you apparently do, but you're
right that it's important to be clear.  If there were an additional
sentence, "Note that, in any available async option, the client can
receive a message that data is committed before any replication of the
data has commenced," would that help?

Best regards,

A

-- 
Andrew Sullivan
a...@crankycanuck.ca


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] UPSERT and HOT-update

2016-03-19 Thread Stephen Frost
Daniel,

* CHENG Yuk-Pong, Daniel  (j16s...@gmail.com) wrote:
> I am doing a INSERT...ON CONFLICT...UPDATE.. on a table. The query is
> mostly-UPDATE and does not change any columns most of the time, like
> so:
> 
>   CREATE INDEX ON book(title);
>   INSERT INTO book (isbn, title, author, lastupdate) VALUES ($1,$2,$3, now())
> ON CONFLICT (isbn) DO UPDATE set title=excluded.title,
> author=excluded.author, lastupdate=excluded.lastupdate;
> 
> PostgreSQL seems to consider the title as changed and refused to do a
> HOT-update. It works if I remove the `title=...` part.
> 
> Are there any tricks to make it smarter? The title don't change most
> of the time after all.

If it's really that infrequent for the title to change, you could do
something like:

insert into book (isbn, title, author, lastupdate) values ('$1','$2',
'$3', now()) on conflict (isbn) do update set author=excluded.author,
lastupdate = excluded.lastupdate where book.title = excluded.title;

and if that doesn't change any rows then the title did change and you
need to run the command you have above.

What might be kind of neat would be to have multiple UPDATE clauses
allowed for the INSERT .. ON CONFLICT DO UPDATE and then you could have
different WHERE clauses and do it all in one command.

Another interesting idea would be a different kind of 'UPDATE SET'
operation (maybe '*=' or something?) which says "only change this if the
value actually changed."  There's clearly a lot of cases where that
use-case is desired.

Thanks!

Stephen




Re: [GENERAL] Question about shared_buffer cache behavior

2016-03-19 Thread pbj
On Friday, March 18, 2016 4:54 PM, Andreas Kretschmer 
wrote:
 >
 >
 >> Paul Jones  hat am 18. März 2016 um 21:24 geschrieben:
 >>
 >>  
 >> In Postgres 9.5.1 with a shared_buffer cache of 7Gb, a SELECT from 

 >
 > the first query reads only the tuple from heap that are matched the
 > where-condition.
 > The 2nd query with an other where-condition reads other rows than the first
 > query.
 >    
 > Keep in mind: a index search reads the index and pulls the rows that matched
the   
 > condition from the heap, no more.
  
Ok, thanks!  I understand now!
   
 > 
 > Regards
 > --
 > Andreas Kretschmer
 > http://www.2ndQuadrant.com/
 > PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: [GENERAL] Drop only temporary table

2016-03-19 Thread David G. Johnston
On Friday, March 18, 2016, Durumdara  wrote:

>
> I want to avoid to remove any real table on resource closing (= dropping
> of temporary table).
> How to I force "drop only temporary"? Prefix, option, etc.
>
>
If you have to explicitly drop a temporary table you are likely doing
something wrong.  When you create the table you tell it when to go away and
at that time it will - no need for a drop statement.
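
For example (purely illustrative):

CREATE TEMPORARY TABLE scratch (id int) ON COMMIT DROP;

ON COMMIT DELETE ROWS or ON COMMIT PRESERVE ROWS work too; in every case the
table disappears on its own no later than the end of the session.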

David J.


[GENERAL] Crypt change in 9.4.5

2016-03-19 Thread andomar
Hi,

After upgrading to PostgreSQL 9.4.6, our test system gave error messages
like:

ERROR: invalid salt

The cause of these errors is statements like:

WHERE password = crypt('secret', 'secret')

After reverting to Postgres 9.4.4 the test system worked properly again.

This might be related to a security fix in 9.4.5:

---
Fix contrib/pgcrypto to detect and report too-short crypt() salts (Josh
Kupershmidt)
Certain invalid salt arguments crashed the server or disclosed a few bytes
of server memory. We have not ruled out the viability of attacks that
arrange for presence of confidential information in the disclosed bytes, but
they seem unlikely. (CVE-2015-5288)
---

The "crypt" call is hardcoded in legacy code that hasn't been recompiled in
years. Are there ways to keep the old code running against a newer Postgres
version?

Kind regards,
Andomar



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] ON CONFLICT DO NOTHING RETURNING

2016-03-19 Thread Jeff Janes
On Mon, Mar 14, 2016 at 1:20 PM, Peter Geoghegan  wrote:
> On Mon, Mar 14, 2016 at 12:28 PM, Peter Devoy  wrote:
>> Is there a reason DO NOTHING was not developed for use with RETURNING?
>
> I don't know what you mean. It should work fine with RETURNING.


He wants to retrieve a value from the conflicting row.  Now getting
the value that caused the conflict should be easy, because you
provided it in the first place.   But he wants a value from a
different column of the conflicting row than the column(s) on which
there is conflict.  DO NOTHING RETURNING returns no rows.  Which is
reasonable, because nothing was inserted.  But it isn't what he wants.

I think the dummy update is his best bet, but it does seem like there
should be a better way.  Maybe ON CONFLICT DO SELECT where the select
operates over the target row.
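
Something like this (made-up table and column names) returns the existing
row whether or not the insert actually happened, at the cost of a dead tuple
on every conflict:

INSERT INTO t (k, v) VALUES (1, 'x')
ON CONFLICT (k) DO UPDATE SET k = excluded.k  -- no-op, just so RETURNING fires
RETURNING id;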

Cheers,

Jeff


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] vacuum - reclaiming disk space.

2016-03-19 Thread Chris Travers
Autovacuum will eventually free your extra pages regarding index bloat but
it takes multiple runs.

You could also use reindex instead of vacuum full since you are only
interested in the index.

For the table there may be other options but they depend on your pattern of
writes.
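
For example (table name is a placeholder; REINDEX still takes locks, but it is
usually far less work than a VACUUM FULL of the whole table):

REINDEX TABLE my_big_table;

or REINDEX INDEX some_index to rebuild them one at a time.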

On Thu, Mar 17, 2016 at 7:27 AM, Mike Blackwell 
wrote:

> I have a large table with numerous indexes which has approximately doubled
> in size after adding a column - every row was rewritten and 50% of the
> tuples are dead.  I'd like to reclaim this space, but VACUUM FULL cannot
> seem to finish within the scheduled downtime.
>
> Any suggestions for reclaiming the space without excessive downtime?
>
>
> * *
>



-- 
Best Wishes,
Chris Travers

Efficito:  Hosted Accounting and ERP.  Robust and Flexible.  No vendor
lock-in.
http://www.efficito.com/learn_more


Re: [GENERAL] Request - repeat value of \pset title during \watch iterations

2016-03-19 Thread David G. Johnston
Figured out it had to be added to 2016-09...done

On Wed, Mar 9, 2016 at 11:40 PM, David G. Johnston <
david.g.johns...@gmail.com> wrote:

> Adding -hackers for consideration in the Commitfest.
>
> Thanks!
>
> David J.
>
> >>>Original request by me
>
>
> http://www.postgresql.org/message-id/CAKFQuwZqjz-je3Z=8jdodym3jm-n2ul4cuqy5vh8n75e5v1...@mail.gmail.com
>
> When executing a query using \watch in psql the first execution of the
> query includes "Title is [...]" when \pset title is in use.  Subsequent
> executions do not.  Once that first display goes off-screen the information
> in the title is no longer readily accessible.  If using \watch for a
> long-running monitoring query it can be helpful to incorporate some context
> information into the title.
>
> -- Forwarded message --
> From: Michael Paquier 
> Date: Thu, Jan 28, 2016 at 6:01 AM
> Subject: Re: [GENERAL] Request - repeat value of \pset title during \watch
> interations
> To: "David G. Johnston" 
> Cc: Tom Lane , "pgsql-general@postgresql.org" <
> pgsql-general@postgresql.org>
>
>
> On Thu, Jan 28, 2016 at 1:54 PM, David G. Johnston
>  wrote:
> > On Wed, Jan 27, 2016 at 9:13 PM, Michael Paquier <
> michael.paqu...@gmail.com>
> > wrote:
> >>
> >> On Thu, Jan 28, 2016 at 9:34 AM, David G. Johnston
> >>  wrote:
> >> > So how about:
> >> >
> >> > + snprintf(title, strlen(myopt.title) + 50,
> >> > + _("Watch every %lds\t%s\t%s"),
> >> > +  sleep, head_title, asctime(localtime(&timer)));
> >>
> >> I would just keep the timestamp and the title separated so what do you
> >> think about that instead?
> >> Watch every Xs   $timestamp
> >> $head_title
> >
> >
> > That works.  I like having the title immediately above the table.
> >
> > The other option that came to mind would be to place the time information
> > after the table display while leaving the title before it.  On an output
> > that requires more vertical space than is available in the terminal one
> > would no longer have to scroll up to confirm last execution time.  If
> doing
> > this I'd probably get rid of any logic that attempts to center the time
> > information on the table and simply leave it left-aligned.
>
> ​And the example:
> ​
> OK, attached is an updated patch. How does that look?
>
>    Watch every 5s    Fri Jan 29 13:06:31 2016
>
> This is a medium length title
> repeat
>
> 
> --
>  
> 
> (1 row)
> ​
>
>
>


Re: [GENERAL] Error: insufficient data in the message

2016-03-19 Thread Tom Lane
Ranier VF  writes:
> Hi, I ran into an error with PostgreSQL 9.5.0
> Error:  insufficient data in the message
> pgsql_pexec error: PQresultStatus=7

Hmm ... I can't find the string "insufficient data in the message"
anywhere in the Postgres sources.  And not "pgsql_pexec" either.
So this must be coming from some client-side code you're using (not
libpq).  It's unlikely we can help you much here; you need to chat with
the author of the client-side library that's emitting that error.

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [HACKERS] [GENERAL] Request - repeat value of \pset title during \watch iterations

2016-03-19 Thread Michael Paquier
On Fri, Mar 18, 2016 at 8:16 AM, Tom Lane  wrote:
> David Steele  writes:
>> On 3/17/16 7:00 PM, Tom Lane wrote:
>>> The message I saw was post-1-March.  If it was in fact submitted in
>>> time for 2016-03, then we owe it a review.
>
>> I meant to add the CF record and forgot:
>> https://commitfest.postgresql.org/9/480
>> It was added 2016-01-13 by Michael Paquier.
>
> OK, so it looks like David's 10-Mar patch was actually just a repost of
> Michael's 28-Jan patch, which was already in the queue to be reviewed in
> 2016-03 (and hasn't yet been).  So the addition to 2016-09 was simply
> erroneous and should be deleted.

My mistake I guess. I should have mentioned as well on this thread
that I registered it.
-- 
Michael


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] log temp files are created twice in PL/pgSQL function

2016-03-19 Thread Adrian Klaver

On 03/16/2016 07:58 AM, Dang Minh Huong wrote:

Hi,

Why are temp file log entries created twice when a query is executed in a PL/pgSQL 
function?
Would you please explain it to me?

As the test results below show, temp file log entries are created twice when a SELECT 
statement is put into a PL/pgSQL function. It leads to a little performance degradation.
Is there any way to define the PL/pgSQL function to avoid this issue?
# I am using PostgreSQL 9.3.9

my test results
-
[postgres@test]$ psql -c "select test_tempfiles();" > /dev/null
LOG:  temporary file: path "base/pgsql_tmp/pgsql_tmp2223.0", size 3244032
CONTEXT: PL/pgSQL function test_cursor() line 3 at RETURN QUERY
LOG: temporary file: path "base/pgsql_tmp/pgsql_tmp23223.1", size 2828062
LOG: duration: 421.426 ms statement: select test_tempfiles();

[postgres@test]$ psql -c "select name from testtbl order by id" > /dev/null
LOG: temporary file: path "base/pgsql_tmp/pgsql_tmp25586.0", size 2850816
LOG: duration: 389.054 ms statement: select random from testtbl order by random
-

test_tempfiles() function is defined as below


Are you sure? The query is double quoted, which returns an error when run 
on my machine.


What do you have logging set to?


-
CREATE OR REPLACE FUNCTION public.test_tempfiles()
RETURNS TABLE(name text)
LANGUAGE plpgsql
AS
$function$
begin
return query execute "select name from testtbl order by id ";
end;
$function$
-

> Thanks and best regards,
bocap













--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Insert data in two columns same table

2016-03-19 Thread Vick Khera
On Wed, Mar 16, 2016 at 9:34 PM, drum.lu...@gmail.com 
wrote:

> The problem is that I need to do that at the same time, because of a
> constraint:
>
>
Mark your constraint as deferrable, and then defer the constraints within
your transaction.