Re: [GENERAL] ANALYZE command question

2015-03-03 Thread Albe Laurenz
Igor Stassiy wrote:
> Will calling "ANALYZE table;" twice in a row actually run the command twice? 
> Or is there some sort of
> check so that, if the table has not changed since the time of the first call, the 
> second command will not
> actually be run?

The statistics will be calculated each time you run ANALYZE.

Yours,
Laurenz Albe



Re: [GENERAL] Weight BLOB objects in postgreSQL? How?

2015-03-09 Thread Albe Laurenz
María Dovale wrote:
> Thanks. I mean: how heavy is it, in kB?

You can use lo_lseek64 to find out how big a large object is.
To find out the size of large object 24858, you can

SELECT lo_lseek64(lo_open(24858, 262144), 0, 2);

The last "2" here is SEEK_END from /usr/include/unistd.h, so it may be
that this value is different for different operating systems.
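
If you prefer not to open the object, summing its chunks in pg_largeobject
should give the same number (a sketch; direct access to pg_largeobject
normally requires superuser privileges):

SELECT sum(octet_length(data)) FROM pg_largeobject WHERE loid = 24858;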

Yours,
Laurenz Albe



Re: [GENERAL] Update using non-existent fields does not throw an error

2015-03-16 Thread Albe Laurenz
Rob Richardson wrote:
> An update query is apparently succeeding, even though the query refers to 
> fields that do not exist.
> Here’s the query:
> 
> update inventory set
> x_coordinate = (select x_coordinate from bases where base = '101'),
> y_coordinate = (select y_coordinate from bases where base = '101')
> where charge = 100
> 
> -- select x_coordinate, y_coordinate from bases where base = '101'
> 
> When I run the update query, it tells me that the query succeeded and that 
> four records were updated,
> which is what I expect.  But when I looked at the inventory table, I found 
> that the four records were
> unchanged.  So, I tried to check the values of the base coordinates by 
> running the select statement
> shown above.  That statement threw an error complaining that x_coordinate and 
> y_coordinate did not
> exist.  This is correct; I should have been querying a view that includes 
> those fields.  But why
> didn’t the update statement throw an error?

That's an old one.

Since there is no "x_coordinate" in "bases", the column will refer to 
"x_coordinate" from
the outer query.  So you set "x_coordinate" and "y_coordinate" to their old 
values.

You can avoid problems like that by using column names that are qualified with 
the table name.
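
For illustration, a table-qualified version of the same statement (a sketch,
assuming the table definitions from the original post) would have raised the
error immediately:

UPDATE inventory SET
   x_coordinate = (SELECT bases.x_coordinate FROM bases WHERE bases.base = '101'),
   y_coordinate = (SELECT bases.y_coordinate FROM bases WHERE bases.base = '101')
WHERE charge = 100;
-- ERROR:  column bases.x_coordinate does not exist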

Yours,
Laurenz Albe



Re: [GENERAL] Reg: PL/pgSQL commit and rollback

2015-03-17 Thread Albe Laurenz
Medhavi Mahansaria wrote:
> I am porting a procedure running in Oracle to a PL/pgSQL function.
> 
> I need to use commit and rollback in my function.
> 
> I have read that usage of commit and rollback is not possible in PL/pgSQL, 
> however savepoints can be
> used.
> 
> Even when I use savepoints and roll back to a savepoint in the exception block, 
> I am getting the
> following error:
> 
> ERROR:  cannot begin/end transactions in PL/pgSQL
> HINT:  Use a BEGIN block with an EXCEPTION clause instead.

Instead of explicitly using ROLLBACK, you have to write code like this:

BEGIN
   INSERT ... -- may cause an error
EXCEPTION
   WHEN OTHERS
   THEN ...
END

If you get into the exception block, PL/pgSQL will implicitly
have rolled back everything that happened between BEGIN and EXCEPTION.
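
A complete example might look like this (a minimal sketch; the table and the
error condition are made up):

CREATE FUNCTION insert_with_recovery() RETURNS void
LANGUAGE plpgsql AS
$$BEGIN
   INSERT INTO mytable (id) VALUES (1);  -- may cause an error
EXCEPTION
   WHEN unique_violation THEN
      RAISE NOTICE 'INSERT was rolled back, continuing';
END$$;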

Yours,
Laurenz Albe



Re: [GENERAL] COPY command file name encoding issue (UTF8/WIN1252)

2015-03-23 Thread Albe Laurenz
Pujol Mathieu wrote:
> I have a problem using COPY command with a file name containing non
> ASCII characters.
> I use Postgres 9.3.5 x64 on a Windows 7.
> OS local encoding is WIN1252.
> My database is encoded in UTF8.
> I initiate client connection with libpq, connection encoding is set to UTF8.
> I build properly my file path taking care of encoding.
> 
> When I run COPY "test" TO 'C:/tmp/é.bin' WITH BINARY
> it creates a file whose name is the UTF8 encoding of "é.bin" interpreted in the local 8-bit encoding.
> It can be reproduced with convert_from(convert_to('é','UTF8'),'WIN1252').
> é in UTF8 is "\303\251"
> é in WIN1252 is "\351"
> 
> This command works on a database encoded in WIN1252 (same as the OS).
> So it seems that the COPY command does not take care of file name encoding.
> Is it a bug ? a limitation ?
> Thanks for your help

I didn't look at the code, but I'd say that the database encoding is
used for the file name, which is why it works when database encoding
and OS locale are the same.

I guess that it would be possible for PostgreSQL to figure out which
OS locale the postmaster is running under and to convert file names accordingly,
but that is probably not trivial, since it is OS dependent.

Yours,
Laurenz Albe



Re: [GENERAL] schema or database

2015-04-13 Thread Albe Laurenz
Michael Cheung wrote:
> I have many similar databases that store data for individual customers.
> The structure of the databases is almost the same.
> As I use the same application to manage all these data, I can only use
> one database user to connect to these databases.
> And I have no need to query tables of different customers together.
> 
> I wonder which I should use: different schemas or different databases to store 
> the data?
> 
> I'd like to know the advantages and disadvantages of using schemas or databases.

In addition to what others have said:

If you use multiple schemas within one database, the danger is greater that
data are written to or read from the wrong schema if your application has a bug
and does not make sure to always set search_path or qualify every access with a
schema name.
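
For illustration, a sketch with made-up schema names:

SET search_path = customer_a;            -- unqualified names now resolve to customer_a
SELECT count(*) FROM orders;             -- reads customer_a.orders
SELECT count(*) FROM customer_b.orders;  -- explicit schema qualification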

With multiple databases you are guaranteed not to access data from a different
database.

The main downside that I see to multiple databases is the overhead: each of
the databases will have its own pg_catalog tables.

Yours,
Laurenz Albe



Re: [GENERAL] fillfactor and cluster table vs ZFS copy-on-write

2015-04-17 Thread Albe Laurenz
Geoff Speicher wrote:
> On Thu, Apr 16, 2015 at 4:56 PM, Qingqing Zhou  
> wrote:
>> On Thu, Apr 16, 2015 at 5:09 AM, Geoff Speicher  
>> wrote:
>>> ZFS implements copy-on-write, so when PostgreSQL modifies a block on disk,
>>> the filesystem writes a new block rather than updating the existing block.

>>> Therefore one might posit that PostgreSQL should be configured to use 100%
>>> fillfactor and avoid clustering on ZFS. Can anyone comment on this?

>> Even with COW, I can see fillfactor < 100% still have its virtues. For
>> example, HOT update can avoid adding an extra index item on the index
>> page if it finds the new item can be inserted in the same heap page.

> That's true, the new physical location on disk is transparent to the DBMS so 
> it has no more or less
> housekeeping with or without COW, but the housekeeping still has to be done 
> somewhere, so it helps to
> understand which is more efficient. I'll see if I can produce some empirical 
> data unless anyone thinks
> it's a waste of time.

I am quite certain that fillfactor < 100% will be a win even then (for the 
right load).
Updating one (heap) block should always be cheaper than updating one heap block
as well as (at least) one index block per index involved.

I doubt that you would be able to measure any performance improvement -
after all, you have to update at least one heap block, no matter what the
fillfactor setting is.
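
For reference, the fill factor is a per-table storage parameter; something like
this sets it (90 is an arbitrary value):

ALTER TABLE mytable SET (fillfactor = 90);
-- affects only newly written pages; a VACUUM FULL or CLUSTER rewrites existing ones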

Yours,
Laurenz Albe



Re: [GENERAL] clearing of the transactions shown in pg_locks

2015-04-29 Thread Albe Laurenz
Mitu Verma wrote:
> I have killed the script, but the query is still showing in pg_stat and 
> pg_locks.
> 
> Please help me with how to clear pg_locks of the locks the transaction is 
> already holding - is there
> something in PostgreSQL which can clear pg_stat and pg_locks?

[...]

> fm_db_Server3=# SELECT * FROM pg_stat_activity;

[...]

> 16384 | fm_db_Server3 |   28223 |16391 | mmsuper | psql | 
>  |
> |  -1 | 2015-04-22 11:39:12.384336+02 | 2015-04-22 11:39:33.36916+02  
> | 2015-04-22
> 11:39:33.36916+02  | t   | delete from audittraillogentry where intime <= 
> to_timestamp('2015-01-14
> 23:59:59.999', '/MM/DD-HH24:MI:SS.FF3') OR outtime  <= 
> to_timestamp('2015-01-14 23:59:59.9
> 99', '/MM/DD-HH24:MI:SS.FF3');

[...]

> 16384 | fm_db_Server3 |3722 |16391 | mmsuper |  | 
> 172.23.84.19 | s3bgwa31
> |   53620 | 2015-04-14 13:36:29.193159+02 | 2015-04-14 13:36:29.204018+02 
> | 2015-04-14
> 13:36:29.204018+02 | f   | delete from audittraillogentry where intime <= 
> to_timestamp('2015-01-14
> 23:59:59.999', '/MM/DD-HH24:MI:SS.FF3') OR outtime  <= 
> to_timestamp('2015-01-14 23:59:59.9
> 99', '/MM/DD-HH24:MI:SS.FF3')

You killed the script, but you did not cancel the queries.

Maybe you should change the script so that it cancels the queries when it is 
killed.

To get rid of the database sessions, do the following as superuser (because you 
are on 9.1):

SELECT pg_terminate_backend(28223);
SELECT pg_terminate_backend(3722);
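
If there are more sessions running that DELETE, something like this should
terminate them all at once (a sketch using the 9.1 column names "procpid"
and "current_query"):

SELECT pg_terminate_backend(procpid)
FROM pg_stat_activity
WHERE current_query LIKE 'delete from audittraillogentry%';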

Yours,
Laurenz Albe





Re: [GENERAL] Success story full text search

2015-05-02 Thread Albe Laurenz
Frank Langel wrote:
> Does someone have a success story of using Postgres Full Search Capability
> with significant data, lets say > 50-100 GB  ? 

How about http://www.postgresql.org/search/?m=1

Yours,
Laurenz Albe









Re: [GENERAL] Block Corruption issue..

2015-05-08 Thread Albe Laurenz
Sachin Srivastava wrote:
> Could you help us solve the below error, which we are getting while taking 
> pg_dump?
> 
> pg_dump: SQL command failed
> 
> pg_dump: Error message from server: ERROR:  invalid page header in block 
> 14521215 of relation
> pg_tblspc/18140340/PG_9.1_201105231/18140346/18140757
> 
> pg_dump: The command was: COPY evidence.partline (feature_id, version, 
> base_type_id, domain_class_id,
> domain_subclass_id, status_id, deleted, modification_time, has_children, 
> is_evidence, data_immediate,
> data_lazy, parent_line_id, start_point_id, end_point_id, start_offset, 
> end_offset, geometry) TO
> stdout;

There is a corrupt block in this table.

This is a good starting point:
https://wiki.postgresql.org/wiki/Corruption

If you have a backup, use it!

One simple option is to set "zero_damaged_pages" and then pg_dump
(http://www.postgresql.org/docs/9.4/static/runtime-config-developer.html#GUC-ZERO-DAMAGED-PAGES),
but don't forget that this *will* lose some data.
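
One way to limit that setting to the dump session is the PGOPTIONS environment
variable (a sketch; the connection must be made as a superuser, and the
database name is made up):

PGOPTIONS='-c zero_damaged_pages=on' pg_dump mydb > dump.sql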

Yours,
Laurenz Albe



[GENERAL] Re: moving to PostgreSQL from MS-SQL and from Oracle, looking for feature comparison information

2015-05-09 Thread Albe Laurenz
Yuri Budilov wrote:
> My employer is evaluating PostgreSQL as a possible replacement for Oracle 11g 
> R2 and
> MS-SQL 2008 R2 for some systems.
> I am completely new to PostgreSQL but experienced in MS-SQL and also in 
> Oracle 11g R2.
> We need to establish what PostgreSQL is good at and not so good at - so we 
> can decide where to use it.
> The OS will be 64-bit Linux (probably Red Hat Linux or Oracle Linux).
>
> Are there any documents comparing these products in terms of features?
> That would save me asking a lot of questions and save me weeks or months of 
> reading of PostgreSQL manuals.

Some general remarks, since Maxim already answered your specific questions:

I am not aware of a document that compares the features, but I would like to
add a word of caution in this respect: if you come from another system, you are likely to
start searching for features that are similar to what you know from there, and
end up being disappointed if you cannot find them.
It is often not helpful to look for a one-to-one feature comparison, as the same
problems are often solved in quite different ways on different systems.

One example: coming from Oracle, you might be appalled by PostgreSQL's
lack of synonyms. However, if you think the PostgreSQL way, you would view
synonyms as a band-aid for Oracle's lack of a schema search path.

Another familiar eyebrow-raiser is PostgreSQL's lack of stored procedures -
but it seems like that hasn't been a problem in practical applications; a
function returning "void" usually does the trick.

Since you come from proprietary databases, I would divide the pros and cons
in two categories:

a) Features of open source software in general:
- You are empowered to analyze and fix your problems yourself, or pay
  people of your choosing to do it.
- Development, design choices and decision processes happen "in the open"
  and can be followed and influenced by everybody.
- You have to do more work to integrate the software with other parts of
  your landscape (backup software, high availability, client software, ...).

b) PostgreSQL features:
- Excellent documentation.
- Excellent extensibility (functions in several programming languages,
  packaged extensions (PGXN), user-defined types, ...).
- High code quality.
- Not owned by a single company. Hence, cannot be bought, and there is
  little danger for half-baked solutions to be shipped because of customer
  or marketing pressure.
- Unusually strict about correct encoding and implicit type casting.
- Excellent support on the mailing lists (you can talk to the developers, and
  bugs are often fixed in a matter of days).

Since you mentioned the documentation, I'd encourage you to spend the time
to read through the manual.  You will find it worth reading, particularly if
you plan to use PostgreSQL at your company.

Yours,
Laurenz Albe



Re: [GENERAL] moving to PostgreSQL from MS-SQL and from Oracle, looking for feature comparison information

2015-05-09 Thread Albe Laurenz
Maxim Boguk wrote:
>> database and transaction log backup compression? not available?

> Transaction log backup compression not available (however could be easily 
> archived via external utilities like bzip2).

Well, in PostgreSQL you back up transaction logs by setting "archive_command",
which is an operating system command that you write.
You just put a "gzip -1" in there and your WAL archive will be compressed.
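
For example, something like this in postgresql.conf (a sketch; the archive
directory is made up, and a production command should refuse to overwrite an
existing archive file):

archive_command = 'gzip -1 < %p > /archive/%f.gz'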

>> - recovery from hardware or software corruption -
>>
>> suppose I am running a mission critical database (which is also relatively 
>> large, say > 1TB)
>> and I encounter a corruption of some sort (say, due to hardware or software 
>> bug)
>> on individual database pages or a number of pages in a database
>>
>> How do I recover quickly and without losing any transactions? MS-SQL and 
>> Oracle can restore
>> individual pages (or sets of pages) or restore individual database files and 
>> then allow me
>> to roll forward transaction log to bring back every last transaction. It can
>> be done on-line or off-line. How do I achieve the same in PostgreSQL 9.4?
>> One solution I see may be via complete synchronous replication of the 
>> database to another server.
>> I am not sure what happens to the corrupt page(s) - do they get transmitted 
>> corrupt
>> to the mirror server so I end up with same corruption on both databases or 
>> is there some protection against this?

> It depends on where the corruption happens: if pages become corrupted due to some
> problems with the physical storage (filesystem), in that case the replica data 
> should be ok.

I would not count on that.
I have had a case where a table file got corrupted due to hardware problems.
Pages that contained data were suddenly zeroed.
PostgreSQL recognizes such a block as empty, so the user got no error, but
data were suddenly missing. What does a user do in such a case? He/she grumbles
and enters the data again. This insert will be replicated to the standby (which 
was
fine up to then) and will cause data corruption there (duplicate primary keys).

PostgreSQL replicates the physical block, so data corruption that does not
trigger an error will be replicated.
You should enable checksums to minimize that risk.
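
For reference, data checksums must be chosen when the cluster is created
(available from 9.3 on):

initdb --data-checksums -D /path/to/datadir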

If worst comes to worst, you'll just have to recover, although I'd say that in 
most cases
a standby database will help you survive a hardware failure.

Yours,
Laurenz Albe


Re: [GENERAL] moving to PostgreSQL from MS-SQL and from Oracle, looking for feature comparison information

2015-05-10 Thread Albe Laurenz
Scott Marlowe wrote:
> On Sat, May 9, 2015 at 11:20 PM, Albe Laurenz  wrote:
>> Maxim Boguk wrote:
>>> It depends on where the corruption happens: if pages become corrupted due to some
>>> problems with the physical storage (filesystem), in that case the replica data 
>>> should be ok.

>> I would not count on that.
>> I have had a case where a table file got corrupted due to hardware problems.
>> Pages that contained data were suddenly zeroed.
>> PostgreSQL recognizes such a block as empty, so the user got no error, but
>> data were suddenly missing. What does a user do in such a case? He/she 
>> grumbles
>> and enters the data again. This insert will be replicated to the standby 
>> (which was
>> fine up to then) and will cause data corruption there (duplicate primary 
>> keys).

> You had zero corrupted pages turned on. PostgreSQL by default does NOT
> DO THIS. That setting is for recovering a corrupted database not for
> everyday use!

No, I didn't.

It was not PostgreSQL that zeroed the page, but the hardware or operating 
system.
The problem was a flaky fibre channel cable that intermittently was connected 
and disconnected.
That corrupted the file system, and I guess it must have been file system 
recovery
that zeroed the pages.  I'm not 100% certain; at any rate, the symptoms were 
silently missing data.

Yours,
Laurenz Albe



Re: [GENERAL] noobie join question

2015-05-11 Thread Albe Laurenz
Steve Clark wrote:
> I am having trouble trying to figure out
> how to get the result listed at the bottom.

That's a bit tough, since you don't describe the desired result.

> I have 3 tables units, types of units which has a description of the units,
> and a table that list associations of the units. I can't figure out
> how to do the proper joins. Any pointers would be appreciated.
> 
> create table types (
> id integer,
> descr varchar(30)
> );
> 
> COPY types (id, descr) FROM stdin;
> 1	descr 1
> 2	descr 2
> 3	descr 3
> 4	descr 4
> \.
> 
> create table units (
> uid integer,
> udevice varchar(30),
> utype integer
> );
> 
> COPY units (uid, udevice, utype) FROM stdin;
> 1	a	1
> 2	b	1
> 3	c	4
> 4	d	3
> \.
> 
> 
> create table assoc (
> aid integer,
> src_id integer,
> dest_id integer
> );
>
> COPY assoc (aid, src_id, dest_id) FROM stdin;
> 1	1	2
> 2	1	3
> 3	3	4
> 4	4	2
> \.

These tables should have foreign key constraints to each other,
so that we can understand how they are related and to make sure that
no impossible values are inserted.
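
For example (a sketch):

ALTER TABLE types ADD PRIMARY KEY (id);
ALTER TABLE units ADD PRIMARY KEY (uid),
   ADD FOREIGN KEY (utype) REFERENCES types (id);
ALTER TABLE assoc ADD PRIMARY KEY (aid),
   ADD FOREIGN KEY (src_id) REFERENCES units (uid),
   ADD FOREIGN KEY (dest_id) REFERENCES units (uid);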

> desired result
> a | descr 1 | b | descr 1
> a | descr 1 | c | descr 4
> c | descr 4 | d | descr 3
> d | descr 3 | b | descr 1

If my guesses are correct, the query would be

SELECT u1.udevice, t1.descr, u2.udevice, t2.descr
FROM assoc a JOIN
 units u1 ON (a.src_id = u1.uid) JOIN
 types t1 ON (u1.utype = t1.id) JOIN
 units u2 ON (a.dest_id = u2.uid) JOIN
 types t2 ON (u2.utype = t2.id);

I did not test this.

Yours,
Laurenz Albe



Re: [GENERAL]

2015-05-15 Thread Albe Laurenz
Sachin Srivastava wrote:
> How can I speed up my daily pg_dump backup? Can I use the parallel option (which was 
> introduced in Postgres 9.3)
> with Postgres 9.1? Is there any way I can use this for a 9.1 database?

You cannot do that.

Switch to a file-system-level backup; that is much faster.
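
For example, pg_basebackup can take such a backup even on 9.1 (a sketch;
assumes a replication connection is allowed in pg_hba.conf):

pg_basebackup -D /backup/base -Ft -z -x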

Yours,
Laurenz Albe



Re: [GENERAL] Documentation bug?

2015-05-19 Thread Albe Laurenz
Thomas Kellerer wrote:
> I just noticed that you can do something like this (using 9.4.1):
> 
>select array[1,2,3] - 3
> 
> which is doing the same thing as:
> 
>select array_remove(array[1,2,3],3)

I can't reproduce this on my PostgreSQL 9.4.1:

test=> select array[1,2,3] - 3;
ERROR:  operator does not exist: integer[] - integer
LINE 1: select array[1,2,3] - 3;
                            ^
HINT:  No operator matches the given name and argument type(s). You might need 
to add explicit type casts.

Did you add casts or operators?
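
For what it's worth, the contrib extension "intarray" defines exactly such an
operator, so a database with it installed would behave as you describe:

CREATE EXTENSION intarray;
SELECT array[1,2,3] - 3;   -- yields {1,2}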

Yours,
Laurenz Albe



Re: [GENERAL] Memory Utilization Issue

2015-05-20 Thread Albe Laurenz
Sachin Srivastava wrote:
> My server memory utilization always remains >99%. I have 4 DB servers, and the 
> RAM of the servers is (32
> GB, 64 GB, 64 GB and 132 GB). On every server we always see memory 
> utilization > 99%.
> Kindly suggest why this problem occurs and which parameter will resolve this 
> problem.
> 
> I am using PostgreSQL 9.1.2 and PostGIS 1.5 on every Red Hat Linux 
> server. Kindly find the
> "top" result, "ulimit -a" result, the ("kernel.shmmax", "kernel.shmall", 
> "kernel.sem") values and
> the pg_settings result of one server below. If you require any other 
> information, let me know.

That sounds just fine.
Linux uses memory for the file system cache.  That memory is shown as "in use", 
but
it is available for processes if they need it.

It doesn't look like your machine is swapping.

Do you experience problems?

Yours,
Laurenz Albe



Re: [GENERAL] Optimizing a read-only database

2015-05-20 Thread Albe Laurenz
Sameer Thakur wrote:
> You could disable fsync as write reliability is not relevant

That is bad advice.
If there are no writes, fsync won't hurt anyway.
Never disable fsync for anything but test systems.

Yours,
Laurenz Albe



Re: [GENERAL] Can we simulate Oracle Flashback with pg_export_snapshot()?

2015-05-22 Thread Albe Laurenz
William Dunn wrote:
> Just had an idea and could use some feedback. If we start a transaction, 
> leave it idle, and use
> pg_export_snapshot() to get its snapshot_id MVCC will hold all the tuples as 
> of that transaction's
> start and any other transaction can see the state of the database as of that 
> time using SET
> TRANSACTION SNAPSHOT snapshot_id?
> 
> http://www.postgresql.org/docs/devel/static/functions-admin.html#FUNCTIONS-SNAPSHOT-SYNCHRONIZATION
> 
> 
> I'm thinking of setting up automation to ~every half hour open a transaction 
> as such, close any that
> have been open over an hour, and store the snapshot_id. However, I don't have 
> a busy system that I can
> test it on.
> 
> Of course this would cause some extra bloat because those tuples cannot get 
> autovacuumed until the
> transaction closes but that is also the case in Oracle. Is there anything 
> else I am missing or a
> reason that this would not be possible?

Oracle does not have that issue because modifications cannot bloat the table 
(the bloat is in
what is called the "UNDO tablespace").

What you suggest would allow you to look at the data as they were at specific 
times (of the snapshots).
But the price on a busy system where data are modified would be considerable; 
your tables might
become quite bloated, and you could not use HOT updates.

If you want to look into the past, wouldn't it be much more useful to have a 
standby server
that is lagging behind?  There is an option for that (recovery_min_apply_delay) 
from
PostgreSQL 9.4 on.
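
A sketch of the relevant recovery.conf lines on such a standby (the connection
string is made up):

standby_mode = 'on'
primary_conninfo = 'host=primary.example.com user=replicator'
recovery_min_apply_delay = '30min'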

Yours,
Laurenz Albe



Re: [GENERAL] Different result depending on order of joins

2015-05-22 Thread Albe Laurenz
Nicklas Avén wrote:
> I was a little surprised by this behavior.
> Is this what is supposed to happen?
> 
> This query returns what I want:
> 
> with
> a as (select generate_series(1,3) a_val)
> ,b as (select generate_series(1,2) b_val)
> ,c as (select generate_series(1,1) c_val)
> select * from a
> inner join c on a.a_val=c.c_val
> full join b on a.a_val=b.b_val
> ;
> 
> I get all values from b since it only has a full join and nothing else.
> 
> But if I change the order in the joining like this:
> 
> with
> a as (select generate_series(1,3) a_val)
> ,b as (select generate_series(1,2) b_val)
> , c as (select generate_series(1,1) c_val)
> select * from a
> full join b on a.a_val=b.b_val
> inner join c on a.a_val=c.c_val
> ;
> 
> also b is limited to only return value 1.
> 
> I thought that the join was defined by "on a.a_val=c.c_val"
> and that the relation between b and the rest wasn't affected by that last 
> inner join.
> 
> I use PostgreSQL 9.3.6
> 
> Is this the expected behavior?

Yes.

In
http://www.postgresql.org/docs/current/static/queries-table-expressions.html#QUERIES-JOIN
you can read:

  "In the absence of parentheses, JOIN clauses nest left-to-right."

So the first query will first produce

 a_val | c_val
-------+-------
     1 |     1

and the FULL JOIN will add a row for b_val=2 with NULL a_val.

The second query will first produce

 a_val | b_val
-------+-------
     1 |     1
     2 |     2
     3 |

and since only the first row matches a_val=1, you'll get only that row in 
the result.
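
If you want the first behavior regardless of the order in which you write the
joins, parentheses make the nesting explicit; with the same CTEs as above:

select * from (a inner join c on a.a_val = c.c_val)
full join b on a.a_val = b.b_val;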

Yours,
Laurenz Albe



Re: [GENERAL] [tsvector] to_tsvector called multiple times

2015-05-26 Thread Albe Laurenz
Sven R. Kunze wrote:
> the following stemming results made me curious:
> 
> select to_tsvector('german', 'systeme'); > 'system':1
> select to_tsvector('german', 'systemes'); > 'system':1
> select to_tsvector('german', 'systems'); > 'system':1
> select to_tsvector('german', 'systemen'); > 'system':1
> select to_tsvector('german', 'system'); >  'syst':1
> 
> 
> First of all, this seems to be a bug in the German stemmer. Where can I
> fix it?

As far as I understand, the stemmer is not perfect; it is just a "best
effort" at German stemming.  It does not have a dictionary of valid German
words, but uses an algorithm based only on the letters that occur.

This web page describes the algorithm:
http://snowball.tartarus.org/algorithms/german/stemmer.html
I guess that the Snowball folks (and PostgreSQL) would be interested
if you could come up with a better algorithm.

In this specific case, the stemmer goes wrong because "System" is a
foreign word whose ending is atypical for German.  The algorithm cannot
distinguish between "System" and, say, "lautem" or "bestem".

> Second, and more importantly, as I understand it, the stemmed version of
> a word should be considered normalized. That is, all other versions of
> that stem should be mapped to it as well. The interesting problem here
> is that PostgreSQL maps the stem itself ('system') to a completely
> different stem ('syst').
> 
> Should a stem not remain stable even when to_tsvector is called on it
> multiple times?

That's a possible position, but consider that a stem is not necessarily
a valid German word.  If you treat it as a German word (by stemming it),
the results might not be what you desire.

For example:

test=> select to_tsvector('german', 'linsen');
 to_tsvector
-------------
 'lins':1
(1 row)

test=> select to_tsvector('german', 'lins');
 to_tsvector
-------------
 'lin':1
(1 row)

I guess that your real problem here is that a search for "system"
will not find "systeme", which is indeed unfortunate.
But until somebody can come up with a better stemming algorithm, cases
like that can always occur.

Yours,
Laurenz Albe



Re: [GENERAL] [tsvector] to_tsvector called multiple times

2015-05-26 Thread Albe Laurenz
Sven R. Kunze wrote:
> However, are you sure, I am using snowball? Maybe, I am reading the
> documenation wrong:

test=> SELECT * FROM ts_debug('german', 'system');
   alias   |   description   | token  | dictionaries  | dictionary  | lexemes
-----------+-----------------+--------+---------------+-------------+---------
 asciiword | Word, all ASCII | system | {german_stem} | german_stem | {syst}
(1 row)

test=> \dFd german_stem
            List of text search dictionaries
   Schema   |    Name     |             Description
------------+-------------+--------------------------------------
 pg_catalog | german_stem | snowball stemmer for german language
(1 row)

> http://www.postgresql.org/docs/9.3/static/textsearch-dictionaries.html
> but it seems as it depends on which packages (ispell, hunspell, myspell,
> snowball + corresponding languages) my system has installed.
> 
> Is there an easy way to determine which of these packages PostgreSQL
> uses AND what for?

If you use a standard PostgreSQL distribution, you will have no ispell
dictionary (as the documentation you quote says).
You can always list all dictionaries with "\dFd" in psql.

> Sure. That might be the problem. It occurs to me that stems (if detected
> as such) should be left alone.
> In case a stem is real German word, it should be stemmed to itself anyway
> If not, it might help not to stem in order to avoid errors.

Yes, but that would mean that you have a way to determine from a string
whether it is a word or a stem or both, and the software does not do that.

Yours,
Laurenz Albe



Re: [GENERAL] [tsvector] to_tsvector called multiple times

2015-05-26 Thread Albe Laurenz
Sven R. Kunze wrote:
> Maybe, I have difficulties to understand the relationship/dependencies
> between all these 'maybe' available dictionary/parser/stemmer packages.
> 
> What happens if I install all packages for a single language? (hunspell,
> myspell, ispell, snowball)
> 
> Are they complementary? Do they replace each other?

They are all dictionaries.

The "text search configuration" determines which dictionaries get
applied to which kinds of words in which order.

So if you introduce a new dictionary, you either have to modify
an existing configuration or define a new one that uses it.
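
For example, a sketch that assumes an ispell-style dictionary "german_ispell"
has already been created:

CREATE TEXT SEARCH CONFIGURATION my_german (COPY = german);
ALTER TEXT SEARCH CONFIGURATION my_german
   ALTER MAPPING FOR asciiword, word
   WITH german_ispell, german_stem;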

Yours,
Laurenz Albe



Re: [GENERAL] [tsvector] to_tsvector called multiple times

2015-05-26 Thread Albe Laurenz
Sven R. Kunze wrote:
> I think I understand now.
> 
> Thus, the issue at hand could (maybe) be solved by passing words first
> to one of those more elaborate dictionaries (myspell, hunspell or
> ispell) and if still necessary then to snowball.
> 
> Did I get this right?

I have never experimented with ispell dictionaries, so I don't know
if they replace a snowball dictionary or are used in addition to it.

Yours,
Laurenz Albe



Re: [GENERAL] Can we simulate Oracle Flashback with pg_export_snapshot()?

2015-05-27 Thread Albe Laurenz
William Dunn wrote:
> In terms of benefit over a lagging replica Flashback has the benefit of being 
> transparent to the user
> (the user can query over the same database connection, etc), it does not 
> incur the full cost of having
> a replica...

Yes, Flashback (in all its forms) is something that PostgreSQL doesn't offer.

Yours,
Laurenz Albe



[GENERAL] Re: duplicate key value violates unique constraint "pg_class_relname_nsp_index"

2015-05-27 Thread Albe Laurenz
Mitu Verma wrote:
> The following error is continuously seen with the PostgreSQL database which we 
> are using at a customer site.
> 
> Current Errors observed: 
> ./fm_db_VoiceReprocessing1/data/pg_log/postgresql-04.log:2015-04-04 01:00:16
> CESTERROR: duplicate key value violates unique constraint 
> "pg_class_relname_nsp_index"
> 
> Any pointers on why these errors are coming? What is the meaning of duplicate 
> key value violates
> unique constraint "pg_class_relname_nsp_index"
> 
> If it is due to some index corruption or duplicate index? Please help.

This is perfectly harmless from a data integrity point of view; it is just an 
error
message that the user gets when he or she tries to insert a row whose primary 
key
already exists in the table:

test=> CREATE TABLE test (id integer PRIMARY KEY, val text);
CREATE TABLE
test=> INSERT INTO test VALUES (1, 'one');
INSERT 0 1
test=> INSERT INTO test VALUES (2, 'one');
INSERT 0 1
test=> INSERT INTO test VALUES (1, 'two');
ERROR:  duplicate key value violates unique constraint "test_pkey"
DETAIL:  Key (id)=(1) already exists.

If you don't want user errors to be logged in the server log, you can
increase "log_min_messages" to "log".

Yours,
Laurenz Albe



Re: [GENERAL] Re: duplicate key value violates unique constraint "pg_class_relname_nsp_index"

2015-05-27 Thread Albe Laurenz
> From: Pete Hollobon [mailto:postg...@hollobon.com]
> On 27 May 2015 at 09:57, Albe Laurenz  wrote:
>> Mitu Verma wrote:
>>> Following error is continuously seen with the postgreSQL database which we 
>>> are using at customer site.
>>>
>>> Current Errors observed: 
>>> ./fm_db_VoiceReprocessing1/data/pg_log/postgresql-04.log:2015-04-04 01:00:16
>>> CESTERROR: duplicate key value violates unique constraint 
>>> "pg_class_relname_nsp_index"
>>>
>>> Any pointers on why these errors are coming? What is the meaning of 
>>> duplicate key value violates
>>> unique constraint "pg_class_relname_nsp_index"
>>>
>>> If it is due to some index corruption or duplicate index? Please help.

>> This is perfectly harmless from a data integrity point of view, it is just 
>> an error
>> message that the user gets when he or she tries to insert a row whose 
>> primary key
>> already exists in the table:

> But the constraint here is pg_class_relname_nsp_index - the unique index on 
> pg_class (relname,
> relnamespace). You don't get that error if you try to create a table that 
> already exists. How could
> you end up with that error (short of attempting to directly insert rows in 
> pg_class)?

Oops, I didn't see that it was a system index.  My mistake.

Mitu Verma wrote:
> I have tried to look at the Postgres code (although I am new to this!) and found this.
> 
> declare unique index pg_class_relname_nsp_index 2663 on pg_class using 
> btree(relname name_ops,
> relnamespace oid_ops)
> 
> Not sure why ‘duplicate key value violates unique constraint 
> "pg_class_relname_nsp_index"’ message is
> coming in the logs ?
> 
> Any leads?

Yes, that could indeed be data corruption, unless somebody messes around with
the system catalogs (can you exclude that?).

I would shut down the cluster right away and take a physical backup of the files
before doing more.

Have there been any crashes lately?

Do you have any idea what action triggers the error message?

If you try "REINDEX TABLE pg_class" as superuser, does the problem go away?

Yours,
Laurenz Albe




Re: [GENERAL] date type changing to timestamp without time zone in postgres 9.4

2015-06-01 Thread Albe Laurenz
Adrian Klaver wrote:
> On 05/30/2015 10:05 PM, Rishi Gokhale wrote:
>> When I create a table with a column whose type is date the type gets
>> forced to timestamp without timezone after it gets created
>>
>> ops=# CREATE TABLE test (
>> ops(# namevarchar(40) NOT NULL,
>> ops(# start date NOT NULL
>> ops(# );
>> CREATE TABLE
>>
>> ops=# \d test;
>> Table "public.test"
>>   Column |Type | Modifiers
>>  +-+---
>>   name   | character varying(40)   | not null
>>   start  | timestamp without time zone | not null

>> The table creation is just a test, my original issue is while restoring
>> a backup (pg_dump/pg_restore) from another server also 9.4, where the
>> date types on numerous columns get forced to change to timestamp without
>> timezone.

> Not seeing that here:

A wild guess, since "date" in Oracle is effectively a timestamp:
Are you using EDB's Postgres Plus?

Yours,
Laurenz Albe



Re: [GENERAL] date type changing to timestamp without time zone in postgres 9.4

2015-06-01 Thread Albe Laurenz
Rishi Gokhale wrote:
> Thanks very much for your quick responses. I am indeed using EDB's postgres 
> plus.
> 
> It looks like it has a function that's forcing the date type to change to a 
> timestamp. I actually
> deleted that function, but it still didn't help.

You shouldn't delete any functions.

But since EDB's PostgreSQL fork behaves differently in this respect,
you'd be better off asking them for help.
This mailing list only deals with standard PostgreSQL.

Yours,
Laurenz Albe



Re: [GENERAL] TRIGGER TRUNCATE -- CASCADE or RESTRICT

2015-06-02 Thread Albe Laurenz
Andreas Ulbrich wrote:
> I'm writing a handler for a TRUNCATE trigger. Is it possible to find out
> whether the TRUNCATE TABLE is called with CASCADE?

I don't think there is.

But you can find out the table where the trigger is defined and examine
if any foreign key constraints are referring to it.

If yes, then the trigger was called with CASCADE.
If no, it might have been called either way, but the effect would be the same.
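
Inside a PL/pgSQL trigger function, that check could look like this (a sketch):

IF EXISTS (SELECT 1 FROM pg_constraint
           WHERE contype = 'f' AND confrelid = TG_RELID)
THEN
   RAISE NOTICE 'TRUNCATE must have been called with CASCADE';
END IF;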

Yours,
Laurenz Albe



Re: [GENERAL] pg_start_backup does not actually allow for consistent, file-level backup

2015-06-08 Thread Albe Laurenz
otheus uibk wrote:
> In the manual and on this mailing list, the claim is made that consistent, 
> file-level backups may be made
> by bracketing the file-copy operation with the postgresql pg_start_backup and 
> pg_stop_backup
> operations.  Many people including myself have found that in some 
> circumstances, using "tar" to copy
> these files will result in an error if one of the data files changes during 
> the tar operation. The
> responses to those queries on this mailing list are unsatisfactory 
> ("everything is fine, trust us").

Everything is fine, trust us.

>> bash-3.00# tar -cf 16aprilstandby.tar /db-data/
>> tar: Removing leading `/' from member names
>> tar: /db-data/base/24643/151667: file changed as we read it
>> tar: /db-data/base/24643/151764.2: file changed as we read it
>> tar: /db-data/base/24643/151766: file changed as we read it
>> tar: /db-data/base/24643/151768: file changed as we read it
>> tar: /db-data/base/66412/151969: file changed as we read it

> The above scenario is exactly what I saw, albeit with less frequency and 
> severity.

> I decided to test this claim that these messages are "perfectly harmless" and 
> "can be ignored":
[...]
> As you can see below, there were non-zero changes made to these files.
[...]
> Such changes occurred EVEN WHEN TAR DID NOT WARN of changed files. Further, 
> when step 3 involved an
> actual backup, involving minutes, not milliseconds, dozens of differences to 
> files in data/base/...
> are reported. To be clear, I excluded from consideration all files in 
> pg_xlog, pg_clog, pg_subtrans,
> pg_stat_tmp.
> 
> If these files are changing during the pg_start_backup() and pg_stop_backup, 
> then exactly what is
> their purpose? Might they be changing during the tar, as tar thinks? How may 
> an operator be assured
> the snapshot is consistent (unless one stops the databases)?  Will the redo 
> logs restore the files to
> a consistent state, no matter when these files are changed? I find it hard to 
> believe that would be
> the case.

The files are indeed changing while they are backed up.

The tar archive is not a consistent backup.

Redo using the Write Ahead Log will indeed restore the files to a consistent 
state,
astonishing as that may be.

Yours,
Laurenz Albe



Re: [GENERAL] pg_start_backup does not actually allow for consistent, file-level backup

2015-06-08 Thread Albe Laurenz
otheus uibk wrote:
> Just to nit-pick, I see nowhere in either version of the manual the 
> indication that it is normal for
> postgresql to continue to update files in its data catalog between 
> pg_start_backup and pg_stop_backup.
> The closest I see comes in this paragraph:
> 
> ** Some file system backup tools emit warnings or errors if the files they 
> are trying to copy change
> while the copy proceeds. When taking a base backup of an active database, 
> this situation is normal and
> not an error.
> 
> Does "this situation" refer to the tools emitting warnings or to the fact 
> that postgresql is updating
> the files? It might be the case, for instance, that timestamps are updated 
> but not the contents of the
> files (this is what I had assumed prior to today).

The manual does not contain all the details of how backup and recovery work 
internally;
you'd have to read the source code for that.

It is normal for the files to change while backup is in progress (in fact, the 
database
continues working normally, but more information is written to the Write Ahead 
Log).
It is also normal for backup tools to complain if the file they copy changes 
while they
read it.

Yours,
Laurenz Albe



Re: [GENERAL] 9.3.9 ?

2015-06-12 Thread Albe Laurenz
Birta Levente wrote:
> 9.3.9 and 9.4.4 have appeared in the PostgreSQL yum repo, but there is nothing
> on postgresql.org about the new versions. Where can I see the changelog?

AFAIK, it is being packaged and will be announced soon.

Yours,
Laurenz Albe



Re: [GENERAL] Compression function

2015-06-16 Thread Albe Laurenz
Leonardo M. Ramé wrote:
> Hi, does anyone know if there's a compression function that would let me
> store TEXT or bytea fields in gzipped/deflate format?
> 
> Please correct me if I'm wrong, but I also wonder if this function is
> really needed since I've read large objects are stored with TOAST, hence
> compression is already there.

Right, TOAST does that.

See http://www.postgresql.org/docs/9.4/static/storage-toast.html
You can control the storage attributes per column.
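
For example (a sketch; table and column are made up - EXTENDED, the default
for text and bytea, allows both compression and out-of-line storage):

ALTER TABLE docs ALTER COLUMN body SET STORAGE EXTENDED;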

Yours,
Laurenz Albe



Re: [GENERAL] pg_dump 8.4.9 failing after upgrade to openssl-1.0.1e-30.el6_6.11.x86_64 on redhat linux

2015-06-17 Thread Albe Laurenz
Douglas Stetner wrote:
> Looking for confirmation there is an issue with pg_dump failing after upgrade 
> to openssl-1.0.1e-
> 30.el6_6.11.x86_64 on redhat linux.
> 
> -bash-4.1$ pg_dump -V
> pg_dump (PostgreSQL) 8.4.9
> 
> -bash-4.1$ pg_dump -h localhost -C Hogwarts -a -t mafs -f zz
> pg_dump: Dumping the contents of table "mafs" failed: PQgetCopyData() failed.
> pg_dump: Error message from server: SSL error: unexpected message
> pg_dump: The command was: COPY public.mafs (hugo_symbol,
> ...
> ...
> analysis_id) TO stdout;
> 
> 
> This is only happening on 2 tables in this database.  The same database can 
> be backed up with
> pgAdmin3.app remotely from a Mac
> 
> As stated I am fairly sure the cause was the upgrade of openssl as it started 
> to fail the next day:
> Jun 16 05:18:25 qcmg-database1 yum[2965]: Updated: 
> openssl-1.0.1e-30.el6_6.11.x86_64

This looks like a renegotiation problem.

Could you run the following two statements in a psql session:
   SET ssl_renegotiation_limit='3kB';
   SELECT repeat('0123456789', 900);
and see if it triggers the problem?

If you set "ssl_renegotiation_limit" to 0, either on the server on using the 
PGOPTIONS
environment variable on the client side, does the problem go away?
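
That is, reusing the pg_dump invocation from above:

PGOPTIONS='-c ssl_renegotiation_limit=0' pg_dump -h localhost -C Hogwarts -a -t mafs -f zz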

Since you upgraded OpenSSL on one end (you didn't tell us on which), maybe the
problem is the age of the OpenSSL library on the other side ---
if one side follows RFC 5746 and the other doesn't (because it is pre-2010)
you can end up with problems like that.

If my guess is correct, the solution would be to upgrade OpenSSL on the other 
end too.

And while you are at it, upgrade to PostgreSQL 9.4 if your data are important 
to you.

Yours,
Laurenz Albe



Re: [GENERAL] valgrind

2015-06-19 Thread Albe Laurenz
Peter Kroon wrote:
> ==3814== Memcheck, a memory error detector
> ==3814== Copyright (C) 2002-2013, and GNU GPL'd, by Julian Seward et al.
> ==3814== Using Valgrind-3.10.1 and LibVEX; rerun with -h for copyright info
> ==3814== Command: ./pgsql_check
> ==3814==
> ==3814==
> ==3814== HEAP SUMMARY:
> ==3814== in use at exit: 47,288 bytes in 2,864 blocks
> ==3814==   total heap usage: 5,671 allocs, 2,807 frees, 331,460 bytes 
> allocated
> ==3814==
> ==3814== LEAK SUMMARY:
> ==3814==definitely lost: 0 bytes in 0 blocks
> ==3814==indirectly lost: 0 bytes in 0 blocks
> ==3814==  possibly lost: 0 bytes in 0 blocks
> ==3814==still reachable: 47,288 bytes in 2,864 blocks
> ==3814== suppressed: 0 bytes in 0 blocks
> ==3814== Rerun with --leak-check=full to see details of leaked memory
> ==3814==
> ==3814== For counts of detected and suppressed errors, rerun with: -v
> ==3814== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 0 from 0)
> 
> There is still reachable data. Is this a bug or have I forgotten to free 
> something?

I'd recommend that you do as valgrind suggests and use --leak-check=full to see
where it thinks the problems are.
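
For example:

valgrind --leak-check=full --show-leak-kinds=all ./pgsql_check

Note that "still reachable" blocks are often one-time allocations made by
libraries and intentionally not freed at exit; they are not necessarily leaks.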

Yours,
Laurenz Albe



Re: [GENERAL] How to craft a query that uses memory?

2015-06-19 Thread Albe Laurenz
Holger Friedrich wrote:
> So how do I craft a query that actually does use lots of memory?

You increase the parameter "work_mem".

You can do that globally in postgresql.conf or with SET for one session
or with SET LOCAL for one transaction.
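
For example (a sketch; the value and the query are made up):

SET work_mem = '1GB';                 -- for the rest of the session

BEGIN;
SET LOCAL work_mem = '1GB';           -- for this transaction only
SELECT g FROM generate_series(1, 5000000) AS g ORDER BY random();
COMMIT;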

Yours,
Laurenz Albe



Re: [GENERAL] INSERT a number in a column based on other columns OLD INSERTs

2015-06-22 Thread Albe Laurenz
Adrian Klaver wrote:
> On 06/20/2015 12:41 PM, Charles Clavadetscher wrote:
>> I just made a short test with the code provided. As Bill mentioned the
>> moment when the trigger is fired is essential.
>> I made a test with both before (worked) and after (did not work because
>> the row was already inserted and the returned new row is ignored).
>>
>> The assignment (= or :=) does not seem to play a role, but the correct
>> version is as mentioned :=
> 
> Yea, I can't seem to remember this part of the docs:
> 
> " Equal (=) can be used instead of PL/SQL-compliant :=."

This was discussed on -hackers a while ago:
http://www.postgresql.org/message-id/flat/52ef20b2e3209443bc37736d00c3c1380876b...@exadv1.host.magwien.gv.at#52ef20b2e3209443bc37736d00c3c1380876b...@exadv1.host.magwien.gv.at

It's a rather long and rambling thread, but what I got from it
was that "=" for assignments is something that just works by accident,
is discouraged and left alive only to avoid breaking code that uses it.
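
In other words, prefer the first of these two forms:

v_count := v_count + 1;   -- documented PL/pgSQL assignment
v_count = v_count + 1;    -- works, but only by accident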

Yours,
Laurenz Albe



Re: [GENERAL] Postgresql 9.2 has standby server lost data?

2015-06-22 Thread Albe Laurenz
Paula Price wrote:
> I have Postgresql 9.2.10 streaming replication set up with log shipping in
> case the replication falls behind.  I discovered that the log-shipping had
> been disabled at some point in time.  I enabled the log shipping again.
> 
> If at some point in time the streaming replication fell behind and the
> standby server was not able to retrieve the necessary WAL file(s) from the
> primary, would the standby server continue to function normally?  Do I need
> to rebuild the standby server?  I have restarted the standby server and it
> is up and running with no issues.  I need to know if the
> data integrity has been compromised.
> 
> I have run this query to determine the lag time for the standby(in case
> this tells me anything):
> "SELECT now(), now() - pg_last_xact_replay_timestamp()  AS time_lag;
> RESULT:
> "2015-06-19 00:40:48.83701+00";"00:00:01.078616"

You were lucky and replication did not fall behind.

If it had, and replication had tried to resort to WAL archives,
replication would have got stuck there.

There is no way that recovery can omit a portion of WAL during replay.

Yours,
Laurenz Albe



Re: [GENERAL] Re: pg_dump 8.4.9 failing after upgrade to openssl-1.0.1e-30.el6_6.11.x86_64 on redhat linux

2015-06-22 Thread Albe Laurenz
Piotr Gackiewicz wrote:
> Tom Lane  wrote:
>> Douglas Stetner  writes:
>>> Looking for confirmation there is an issue with pg_dump failing after
>>> upgrade to openssl-1.0.1e-30.el6_6.11.x86_64 on redhat linux.
>>
>> Quick thought --- did you restart the Postgres service after upgrading
>> openssl?  If not, your server is still using the old library version,
>> while pg_dump would be running the new version on the client side.
>> I don't know exactly what was done to openssl in the last round of
>> revisions, but maybe there is some sort of version compatibility issue.
>>
>> Also, you really ought to be running something newer than PG 8.4.9.

> I have the same problem with fresh postgresql 9.2.13.
> Started after upgrade to openssl-1.0.1e-30.el6_6.11.x86_64
> 
> Since then pg_dump aborts after dumping circa 2GB:
> 
> pg_dump: [archiver (db)] query failed: SSL error: unexpected message
> pg_dump: [archiver (db)] query was: FETCH 100 FROM _pg_dump_cursor
> 
> openssl-1.0.1e-30.el6_6.11.x86_64 on both ends (connecting via localhost)
> 
> pg_dump via unix socket, without "-h localhost" - there is no problem.
> 
> Fetching 2.5 GB of such text dump via https (apache + mod_ssl +
> openssl-1.0.1e-30.el6_6.11.x86_64) => wget +
> openssl-1.0.1e-30.el6_6.11.x86_64  - there is no problem
> 
> Looks like postgresql+ssl issue.
> 
> postgres=#  select name,setting,unit from pg_settings where name ~ 'ssl' ;
>            name           |              setting              | unit
> --------------------------+-----------------------------------+------
>  ssl                      | on                                |
>  ssl_ca_file              |                                   |
>  ssl_cert_file            | server.crt                        |
>  ssl_ciphers              | ALL:!ADH:!LOW:!EXP:!MD5:@STRENGTH |
>  ssl_crl_file             |                                   |
>  ssl_key_file             | server.key                        |
>  ssl_renegotiation_limit  | 524288                            | kB
> 
> 
> Any thoughts?

Maybe it has something to do with this OpenSSL bug:
http://rt.openssl.org/Ticket/Display.html?id=3712&user=guest&pass=guest

Basically, OpenSSL fails to handle application data messages during 
renegotiation.

I have only encountered that when using other SSL libraries together with
OpenSSL, but maybe it can also happen with only OpenSSL.

Just to make sure:
Do you have the same version of OpenSSL on both PostgreSQL client and server?

Yours,
Laurenz Albe



Re: [GENERAL]

2015-06-24 Thread Albe Laurenz
Bruno Hass de Andrade wrote:
> Hi. My company has servers that run Postgres for storing some logs and 
> serving Django web interfaces
> for management of the server itself. In the last few days some servers stopped 
> serving the web interface,
> and syslog shows this error:
> 
>   Jun 23 04:40:19 django-1 postgres[8790]: [3-1] FATAL:  remaining 
> connection slots are reserved
> for non-replication superuser connections
> 
> So I started looking for the cause:

[...]

> 50 connections in idle status, but looking further:
> 
> 
>   $ cat /proc/4521/stack
>   [] unix_stream_recvmsg+0x2b9/0x633
>   [] __sock_recvmsg_nosec+0x29/0x2b
>   [] sock_recvmsg+0x65/0x88
>   [] SYSC_recvfrom+0xda/0x134
>   [] SyS_recvfrom+0x9/0xb
>   [] system_call_fastpath+0x16/0x1b
>   [] 0x
> 
> 
> All connections have stucked in this stack.

That just means that they are idle and waiting to receive a message from the 
client.

[...]

>   # postgresql.conf #
[...]
>   max_connections = 50

[...]

> I really don't know what is happening, why postgres hang and didn't close the 
> connections. This django
> web interface is used only for management and viewing logs, most server have 
> two users only.

The problem is clear from the PostgreSQL side:
You have reached the connection limit and so receive errors when you start new 
connections.

You'll have to investigate why your application server starts so many 
connections
or why it fails to close idle connections after some time.
Is there a connection pool? How is it configured?
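
A quick way to see what those connections are doing (a sketch; the "state"
column exists from 9.2 on):

SELECT state, count(*)
FROM pg_stat_activity
GROUP BY state
ORDER BY count(*) DESC;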

Yours,
Laurenz Albe



Re: [GENERAL] Counting the occurences of a substring within a very large text

2015-06-24 Thread Albe Laurenz
Marc Mamin wrote:
> I'd like to count the number of line breaks within a string,
> but I get a memory allocation error when using regexp_matches or 
> regexp_split_to_table.
> 
> Any idea for an alternative approach to this problem?
> 
> select count(*)-1 from
> (  select regexp_split_to_table(full_message,'(\n)', 'g')
>from mytable
>where id =-2146999703
> )foo;
> 
> ERROR:  invalid memory alloc request size 1447215584

Does either of these two queries work:

SELECT length(regexp_replace(full_message, '[^\n]', '', 'g'))
FROM mytable
WHERE id = -2146999703;

or

SELECT length(full_message) - length(replace(full_message, E'\n', ''))
FROM mytable
WHERE id = -2146999703;

Yours,
Laurenz Albe



Re: [GENERAL] Counting the occurences of a substring within a very large text

2015-06-24 Thread Albe Laurenz
Marc Mamin wrote:
>>> I'd like to count the number of linebreaks within a string, but I get a
>>> memory allocation error when using regexp_matches or regexp_split_to_table.

>> Does any of these two work:
[...]

> no, they both yield the same error.
>
> a new string function for this would be nice, as it could certainly be
> implemented in a more efficient way...

This is a rather special case.
But it should be easy to write your own C function that does this efficiently.
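
Until then, a workaround in plain SQL is to count the newlines in slices, so
that no single function call has to allocate a huge intermediate string.
An untested sketch (LATERAL requires 9.3 or later):

SELECT sum(length(c) - length(replace(c, E'\n', ''))) AS newlines
FROM mytable,
     LATERAL generate_series(1, length(full_message), 10000000) AS i,
     LATERAL substr(full_message, i, 10000000) AS c
WHERE id = -2146999703;

Since a newline is a single character, a slice boundary cannot split one,
so the sum is exact.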

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] could not fork new process for connection: Resource temporarily unavailable

2015-07-15 Thread Albe Laurenz
Jimit Amin wrote:
> I have heavy transaction load production database 9.3 PPAS .Today Database is 
> not able to give new
> process. I checked pg_stat_activity , there are so many transaction in 
> waiting stage because of one
> procedure and lock on one table (Code inside procedure)

[...]

>   could not fork new process for connection: Resource temporarily unavailable
>   could not fork new process for connection: Resource temporarily unavailable
>   could not fork autovacuum worker process: Resource temporarily unavailable
>   could not fork new process for connection: Resource temporarily unavailable
>   could not fork new process for connection: Resource temporarily unavailable

Your machine is running out of system resources needed to create new processes.

This might help:
http://stackoverflow.com/questions/12079087/fork-retry-resource-temporarily-unavailable

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Creating a user for pg_start_backup

2015-07-21 Thread Albe Laurenz
Andrew Beverley wrote:
> I'm setting up hot backups on my database server. As such, I'd like to set up 
> a
> Postgres user that has access to only pg_start_backup and pg_stop_backup.
> 
> I'm unable to work out how to do this with the various GRANT options. Can 
> someone
> point me in the right direction please? Or is there a better way to achieve 
> this,
> rather than having a dedicated user?

If you want to be as restrictive as possible, you could create functions
owned by a superuser with SECURITY DEFINER that do only these things
and give execution rights only to a user that has no other privileges.
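
A minimal sketch (the function and role names are made up):

CREATE FUNCTION backup_start(label text) RETURNS text
   LANGUAGE sql SECURITY DEFINER SET search_path = pg_catalog
   AS $$SELECT pg_start_backup($1)::text$$;

CREATE FUNCTION backup_stop() RETURNS text
   LANGUAGE sql SECURITY DEFINER SET search_path = pg_catalog
   AS $$SELECT pg_stop_backup()::text$$;

-- EXECUTE is granted to PUBLIC by default, so revoke that first
REVOKE EXECUTE ON FUNCTION backup_start(text), backup_stop() FROM PUBLIC;
GRANT EXECUTE ON FUNCTION backup_start(text), backup_stop() TO backup_user;

The functions have to be created by a superuser, and pinning search_path is
standard hygiene for SECURITY DEFINER functions.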

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Drop down in connect time between 9.3.6 and 9.3.9 ?

2015-07-22 Thread Albe Laurenz
Marc Mamin wrote:
> We've just upgraded some productive servers from 9.3.6 to 9.3.9, and it seems 
> that the time to get a
> connection (or possibly to disconnect) has become much slower.
> These tests are consistent when run against different servers.
> 
> tests with 9.4.1 show fast times, but we don't have a later version to test 
> yet.
> 
> Is this a known issue ?
> 
>   9.3.6:
>   time { i=0; while [ $i -lt 1000 ]; do let i++; psql -c 'select 1' ; 
> done; } >/dev/null
> 
>   real0m4.994s
>   user0m0.088s
>   sys 0m0.272s
> 
>   9.3.9:
>   time { i=0; while [ $i -lt 1000 ]; do let i++; psql -c 'select 1' ; 
> done; } >/dev/null
> 
>   real0m11.081s
>   user0m0.140s
>   sys 0m0.208s

I guess you are hitting this bug:
http://www.postgresql.org/message-id/15290.1435103...@sss.pgh.pa.us

Fixed in this commit:
http://www.postgresql.org/message-id/e1z8c3z-0004ko...@gemulon.postgresql.org

You could apply the patch on your system if necessary.

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] postgresql-ctl systemd failed: permission denied

2015-07-28 Thread Albe Laurenz
arnaud gaboury wrote:
> On Fedora 22
> 
> % pg_ctl -V
> pg_ctl (PostgreSQL) 9.4.4
> 
> 
> % systemctl status postgresql.service -l
> ● postgresql.service - PostgreSQL database server
>Loaded: loaded (/etc/systemd/system/postgresql.service; enabled;
> vendor preset: disabled)
>Active: failed (Result: exit-code) since Tue 2015-07-28 11:50:01
> CEST; 19min ago
> 
> Jul 28 11:49:56 poppy systemd[1]: Starting PostgreSQL database server...
> Jul 28 11:49:57 poppy postgresql-ctl[307]: FATAL:  42501: could not
> open log file "/storage/log/postgresql/postgresql-Tue.log": Permission
> denied
> Jul 28 11:49:57 poppy postgresql-ctl[307]: LOCATION:  logfile_open,
> syslogger.c:1160
> Jul 28 11:50:01 poppy postgresql-ctl[307]: pg_ctl: could not start server
> 
> 
> % ls -al /storage/log
> drwxr-xr-x 1 postgres postgres0 Jul 28 11:29 postgresql/
> 
> 
> I have found a few entries on the web but with no clear fix. I even
> saw this was a bug.
> 
> Can anyone tell me more and how to fix if it is possible?

What file system is that? ext4?

What do you get for:
  ls -l /storage/log/postgresql/postgresql-Tue.log
Perhaps the file exists and you don't have permissions to open it.

If not, does the following succeed as user "postgres":
  touch /storage/log/postgresql/postgresql-Tue.log
Do you get the same error message?

What do you get for:
  getfacl -p /storage/log/postgresql
  getfattr -d /storage/log/postgresql
Maybe some weird permissions or attributes are set.

Another idea: the file system could be mounted read-only.

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] conn = PQconnectdb(conninfo);

2015-07-31 Thread Albe Laurenz
Peter Kroon wrote:
> I've found perhaps a bug.
> I've narrowed down my code and the problem is indeed at: conn = 
> PQconnectdb(conninfo);
> 
> My connection string: host=192.168.178.12 dbname=DATABASE user=foo 
> password=bar
> 
> When I remove key/value host=xxx then everything is OK. Valgrind mentions: no 
> leaks are possible.
> 
> When key/value host=xxx is added, not everything is freed and there are tons 
> of bytes still reachable.
> 
> 
> ==9195==
> ==9195== HEAP SUMMARY:
> ==9195== in use at exit: 450,080 bytes in 2,829 blocks
> ==9195==   total heap usage: 9,476 allocs, 6,647 frees, 7,810,733 bytes 
> allocated
> ==9195==
> ==9195== LEAK SUMMARY:
> ==9195==definitely lost: 0 bytes in 0 blocks
> ==9195==indirectly lost: 0 bytes in 0 blocks
> ==9195==  possibly lost: 0 bytes in 0 blocks
> ==9195==still reachable: 450,080 bytes in 2,829 blocks
> ==9195== suppressed: 0 bytes in 0 blocks
> ==9195== Rerun with --leak-check=full to see details of leaked memory
> ==9195==
> ==9195== For counts of detected and suppressed errors, rerun with: -v
> ==9195== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 14 from 6)
> 
> 
> 
> The network address is the machine's address where I was testing on. So I 
> could also have used
> localhost or 127.0.0.1 but this gave me the same result when using the 
> network address.
> 
> 
> Played with hostaddr as well and gave me the same result.
> 
> 
> http://www.postgresql.org/docs/9.4/static/libpq-connect.html#LIBPQ-PQCONNECTDB
> http://www.postgresql.org/docs/9.4/static/libpq-connect.html#LIBPQ-PARAMKEYWORDS
> "PostgreSQL 9.4.4 on x86_64-unknown-linux-gnu, compiled by gcc (Debian 
> 4.7.2-5) 4.7.2, 64-bit"
> 
> 
> The attachment is the program I've used for testing.

You should run valgrind with --leak-check=full to see details.

I tried your program and I get "still reachable" only when SSL is enabled; all 
the memory is in
OpenSSL.  Do you use SSL?

Without SSL (sslmode=disable) I get no "still reachable" memory.

I don't know if reachable memory is a problem; I'd suspect not.

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] scaling postgres - can child tables be in a different tablespace?

2015-08-04 Thread Albe Laurenz
Chris Withers wrote:
> This raises an interesting question: can a child table be in a different
> tablespace to its parent and other children of that parent?

Yes.

Inheritance is a logical concept and is independent of physical placement.

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] conn = PQconnectdb(conninfo);

2015-08-10 Thread Albe Laurenz
Peter Kroon wrote:
>>> I've found perhaps a bug.
>>> I've narrowed down my code and the problem is indeed at: conn = 
>>> PQconnectdb(conninfo);
>>>
>>> My connection string: host=192.168.178.12 dbname=DATABASE user=foo 
>>> password=bar
>>>
>>> When I remove key/value host=xxx then everything is OK. Valgrind mentions: 
>>> no leaks are possible.
>>>
>>> When key/value host=xxx is added, not everything is freed and there are 
>>> tons of bytes still reachable.
>>>
>>>
>>> ==9195==
>>> ==9195== HEAP SUMMARY:
>>> ==9195== in use at exit: 450,080 bytes in 2,829 blocks
>>> ==9195==   total heap usage: 9,476 allocs, 6,647 frees, 7,810,733 bytes 
>>> allocated
>>> ==9195==
>>> ==9195== LEAK SUMMARY:
>>> ==9195==definitely lost: 0 bytes in 0 blocks
>>> ==9195==indirectly lost: 0 bytes in 0 blocks
>>> ==9195==  possibly lost: 0 bytes in 0 blocks
>>> ==9195==still reachable: 450,080 bytes in 2,829 blocks
>>> ==9195== suppressed: 0 bytes in 0 blocks
>>> ==9195== Rerun with --leak-check=full to see details of leaked memory
>>> ==9195==
>>> ==9195== For counts of detected and suppressed errors, rerun with: -v
>>> ==9195== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 14 from 6)

>> I tried your program and I get "still reachable" only when SSL is enabled; 
>> all the memory is in
>> OpenSSL.  Dou you use SSL?
>> 
>> Without SSL (sslmode=disable) I get no "still reachable" memory.
>> 
>> I don't know of reachable memory is a problem, I'd suspect not.

> I'm not using ssl at the moment.
> 
> The thing is when the reachable leak is there is just grows. It is not 
> constant. And at a given point
> the program will break.
> 
> I've ran the program with: valgrind --leak-check=full --show-reachable=yes 
> --log-
> file="_pg_test_debug.log" ./_pg_test_debug
> 
> View logfile here: http://pastebin.com/7rjBRbkD
> SSL is mentioned in combination with pg objects
> 
> Your suggestion: sslmode=disable seems to have fix my issue..

I looked into this some more, and this is expected behaviour.

If you do not use sslmode=disable or sslmode=allow, PostgreSQL will first try to
establish an SSL connection.  This requires that the SSL library be initialized
(a call to OPENSSL_config()).

This is done only once and will allocate some memory that will never be 
deallocated.
That should not be a problem, and the memory leak should not increase if more
than one connection is opened.
Since your code starts several threads, I believe that there is a slim chance
that
due to race conditions, the memory is allocated more than once.

If you want more control over that, you can explicitly initialize and destroy
this memory, see 
http://www.postgresql.org/docs/9.4/static/libpq-ssl.html#LIBPQ-SSL-INITIALIZE

Sample code (untested) would look like that:

#include <openssl/conf.h>

[...]

/* initialize the OpenSSL library; NULL means the default configuration */
OPENSSL_config(NULL);

/* tell PostgreSQL about it */
PQinitOpenSSL(0, 1);

[start threads, open database connections, do some database work, close 
connections]

/* free SSL memory */
CONF_modules_free();

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] SELECT clause without parameters

2015-08-17 Thread Albe Laurenz
pinker wrote:
> I would like to ask what's the reason of change SELECT behaviour.
> In distributions below 9.4 SELECT without any parameters caused a syntax
> error and now gives empty set. Was it made for some bigger aim ? :)
> 
> for instance 8.4:
> postgres=# select version();
>version
> -
>  PostgreSQL 8.4.17 on i486-pc-linux-gnu, compiled by GCC gcc-4.4.real
> (Debian 4.4.5-8) 4.4.5, 32-bit
> (1 wiersz)
> 
> postgres=# select
> postgres-# ;
> ERROR:  syntax error at or near ";"
> LINIA 2: ;
> 
> 
> and 9.4:
> psql (9.4.4)
> Type "help" for help.
> 
> postgres=# select
> postgres-# ;
> --
> (1 row)

That must be this change:
http://www.postgresql.org/message-id/e1vs0qu-0004bc...@gemulon.postgresql.org

The explanation is in the commit message.

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to EXPLAIN a trigger function

2015-08-17 Thread Albe Laurenz
Zdenek Belehrádek wrote:
> We
> would like to know if there is simple way to EXPLAIN the trigger function, so
> we could optimize it.

I believe that the auto_explain module could help you:
http://www.postgresql.org/docs/current/static/auto-explain.html

If you turn on "auto_explain.log_nested_statements" and set
"auto_explain.log_min_duration" low enough, the execution plans of
all SQL statements called from a function will be logged.
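
For example, for the current session (a sketch; the module can also be loaded
for all sessions with shared_preload_libraries in postgresql.conf):

LOAD 'auto_explain';
SET auto_explain.log_min_duration = 0;        -- log plans of all statements
SET auto_explain.log_nested_statements = on;  -- include statements in functions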

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Dangers of mislabelled immutable functions

2015-08-21 Thread Albe Laurenz
Jeff Janes wrote:
> I want to index the textual representations of a table's rows.
> 
> You can cast a row to text by using the name of the table where you would 
> usually use the name of a
> column, like "table_name::text".  But this is not immutable and so can't be 
> used in an expression
> index.
> 
> I can easily make a pl/perl function which does the same thing as the text 
> cast, and label it as
> immutable and use it in an index.  But are the dangers of doing this?
> 
> I know that configuration changes to the date and time formats could change 
> the way that the data is
> shown in ways that would cause the rows inserted under a different 
> configuration to be missed. That
> doesn't bother me.

That's exactly the danger: your index will become corrupted if the function 
turns out not to
be immutable after all.  So if you know that such configuration changes will 
not happen,
you can safely do that.
But how do you keep people from running "SET DateStyle=..."?
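
To illustrate the danger (with a hypothetical table "t" that has a date
column "d"):

SET DateStyle = 'ISO, YMD';
SELECT d::text FROM t;   -- yields e.g. '2015-08-21'
SET DateStyle = 'German, DMY';
SELECT d::text FROM t;   -- the same row now yields '21.08.2015'

An index built on a mislabelled "immutable" function wrapping such a cast no
longer matches what the function returns after the setting has changed.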

> I know that if I drop a table column, the contents of the column will still 
> be in the index.  That
> doesn't bother me either, as all the indexes I plan to use are lossy and so 
> will do a recheck anyway.

I don't understand.
If you drop a column, all indexes that depend on that column will be dropped as 
well.

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Postgresql C extension and SIGSEGV

2015-09-04 Thread Albe Laurenz
Etienne Champetier wrote:
> We are planning to add a C extension 
> (https://github.com/petropavel13/pg_rrule) to our shared
> postgresql cluster, and wondering what are the risk? (looking for the worst 
> case scenario here)
> 
> If there is a SIGSEGV, SIGBUS, SIGABRT ..., is the whole server stopping, or 
> just the request?

All client connections will be terminated and the server will initiate
recovery from the latest checkpoint.  Until that is done, no client
can connect to the database.

That is something you normally don't want to have in a production database.

> Knowing that the extension is only used in select statement, is there a risk 
> of (on disk) data
> corruption?

Even when run from a SELECT, a C function can do anything it wants with the 
server.

> Is the risk limited to the current database? (the extension will only be used 
> by 1 application with 1
> database, and we prefer not to impact other applications/databases)

The C function can happily start removing arbitrary files owned by
the PostgreSQL user if it chooses to, so no.

> Are there any techniques to limit/mitigate these risks? 
> (configuration/compile flags/...)

You should only use C functions that you trust.

Code review of the extension and good testing are your best protection.

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Buffers: shared hit/read to shared_buffers dependence

2015-09-05 Thread Albe Laurenz
Pavel Suderevsky wrote:
> When I have been passing through "Understanding explain" manual 
> (http://www.dalibo.org/_media/understanding_explain.pdf)
> I've faced some strange situation when table with size of 65MB completely 
> placed in cache with shared_buffers=320MB and it doesn't with shared_buffers 
> <= 256MB.
> Actually behaviour of caching in my case is the same with either 256MB or 
> 32MB. Im my mind shared_buffers
> with size of 256MB should be enough for caching table with size of 65MB, but 
> it isn't. Could you please explain such behaviour?
>
> Steps:
> 
> understanding_explain=# select pg_size_pretty(pg_relation_size('foo'));
>  pg_size_pretty 
> 
>  65 MB
> (1 row)

> postgres=# show shared_buffers ;
>  shared_buffers 
> 
>  320MB
> (1 row)
>

> understanding_explain=# EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM foo;
>QUERY PLAN 
>   
> 
>  Seq Scan on foo  (cost=0.00..17500.60 rows=100 width=37) (actual 
> time=0.786..143.686 rows=100 loops=1)
>Buffers: shared read=8334

> understanding_explain=# EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM foo;
>   QUERY PLAN  
>  
> ---
>  Seq Scan on foo  (cost=0.00..17500.60 rows=100 width=37) (actual 
> time=0.009..83.546 rows=100 loops=1)
>Buffers: shared hit=8334

> understanding_explain=# show shared_buffers;
>  shared_buffers 
> 
>  256MB
> (1 row)
>
> understanding_explain=# EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM foo;  
>QUERY PLAN 
>   
> 
>  Seq Scan on foo  (cost=0.00..17500.60 rows=100 width=37) (actual 
> time=0.772..126.242 rows=100 loops=1)
>Buffers: shared read=8334

> understanding_explain=# EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM foo;
>   QUERY PLAN  
>  
> ---
>  Seq Scan on foo  (cost=0.00..17500.60 rows=100 width=37) (actual 
> time=0.029..91.686 rows=100 loops=1)
>Buffers: shared hit=32 read=8302

> With every new query execution 32 hits adding to shared hit value.

This must be due to this commit:
http://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=d526575f893c1a4e05ebd307e80203536b213a6d

See also src/backend/storage/buffer/README, chapter
"Buffer Ring Replacement Strategy" and the functions initcan() and 
GetAccessStrategy()
in the source.

Basically, if in a sequential table scan shared_buffers is less than four times 
the estimated table size,
PostgreSQL will allocate a "ring buffer" of size 256 KB to cache the table 
data, so that a large sequential scan
does not "blow out" significant parts of the shared cache.
The rationale is that data from a sequential scan will probably not be needed 
again right away, while
other data in the cache might be hot.

That's what you see in your second example: 32 buffers equal 256 KB, and the
ring buffer is chosen from free buffer pages, so the amount of table data
cached increases by 32 buffers every time.
In your first example, four times the table size (260 MB) still fits into
shared_buffers = 320MB, so no ring buffer is used and the whole table is
cached; with shared_buffers = 256MB it does not fit, so the ring buffer is used.

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Online backup of PostgreSQL data.

2015-09-17 Thread Albe Laurenz
John R Pierce wrote:
> to copy the data directory and have it be useful you need to bracket the copy 
> with calls to
> pg_start_backup() and pg_stop_backup()  this ensures the data files are 
> coherent.   this is in
> fact what pg_basebackup does for you

I apologize for my fussiness, but this is a misconception I encounter so 
frequently
that I have to speak up.

The file system copy does not become consistent (isn't that what you meant?) if 
it
is surrounded by pg_start_backup() and pg_stop_backup().  What happens is that
a) a backup.label file is created that tells recovery where to start
b) more WAL is generated so that all changes can be replayed safely.

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Shouldn't "WHEN (OLD.* IS DISTINCT FROM NEW.*)" clause be independent from data type?

2015-09-17 Thread Albe Laurenz
pinker wrote:
> I've tried to write audit trigger which fires only when data changed, so I 
> used "WHEN (OLD.* IS
> DISTINCT FROM NEW.*)" clause as described in the documentation. Should
> this clause be independent
> from data type? because an error occurs when I'm trying to modify row with 
> point data type: ERROR:
> could not identify an equality operator for type point

I guess it is dependent on data type as it requires an equality operator,
and type "point" doesn't have one.

You'd have to hand-roll a comparison in this case, probably using the
"same as" operator "~=".

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Delete trigger

2015-09-18 Thread Albe Laurenz
Leif Jensen wrote:
>If I do "DELETE FROM devicegroup WHERE group=1" I do not want to delete 
> anything. I only want to
> delete if I do "DELETE FROM devicegroup WHERE groupid=x AND ctrlid=y AND 
> userid=z". I don't wanna let
> anyone delete more than 1 row at a time.

I can't think of a way to do that with a trigger.

I'd write a
  FUNCTION delete_devicegroup(groupid integer, ctrlid integer, userid integer)
RETURNS void CALLED ON NULL INPUT VOLATILE SECURITY DEFINER
that enables the user to delete a row and checks that all arguments
are NOT NULL.  The user doesn't get privileges to DELETE from the table 
directly.
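
An untested sketch (column names taken from your DELETE statement, the role
name is made up):

CREATE FUNCTION delete_devicegroup(groupid integer, ctrlid integer, userid integer)
   RETURNS void CALLED ON NULL INPUT VOLATILE SECURITY DEFINER
   LANGUAGE plpgsql AS
$$BEGIN
   IF groupid IS NULL OR ctrlid IS NULL OR userid IS NULL THEN
      RAISE EXCEPTION 'all three arguments must be NOT NULL';
   END IF;
   -- qualified names avoid ambiguity between parameters and columns
   DELETE FROM devicegroup d
      WHERE d.groupid = delete_devicegroup.groupid
        AND d.ctrlid = delete_devicegroup.ctrlid
        AND d.userid = delete_devicegroup.userid;
END;$$;

REVOKE EXECUTE ON FUNCTION delete_devicegroup(integer, integer, integer) FROM PUBLIC;
GRANT EXECUTE ON FUNCTION delete_devicegroup(integer, integer, integer) TO some_user;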

Yours,
Laurenz Albe


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] to pg

2015-09-25 Thread Albe Laurenz
Ramesh T wrote:
> CREATE UNIQUE INDEX idx_load_pick ON  pick (case picked when picked='y' then 
> load_id else null end );
> 
> how can I convert this CASE expression to PostgreSQL? The above is Oracle.

CREATE TABLE pick (picked char(1), load_id integer);

CREATE FUNCTION picked_loadid(character, integer) RETURNS integer
   IMMUTABLE STRICT LANGUAGE sql AS
   $$SELECT CASE WHEN $1 = 'y' THEN $2 ELSE NULL END$$;

CREATE UNIQUE INDEX idx_load_pick ON pick (picked_loadid(picked, load_id));

*but*

It will only work with queries like:

SELECT * FROM pick WHERE picked_loadid(picked, load_id) IS NOT NULL;
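
Alternatively, a partial unique index (a sketch) expresses the Oracle trick
more directly; since NULLs never conflict in a unique index, both variants
enforce uniqueness only for the picked rows:

CREATE UNIQUE INDEX pick_loadid_unique ON pick (load_id) WHERE picked = 'y';

It can be used by queries that contain the condition picked = 'y'.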

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Selecting pairs of numbers

2015-10-06 Thread Albe Laurenz
Charles Clavadetscher wrote:
>> aklaver@test=> create table pr_test(x int, y int);
>>
>> aklaver@test=> select * from pr_test  where (x, y) between (1, 3) and
>> (3,2) order by x,y;
>>   x | y
>> ---+---
>>   1 | 3
>>   1 | 4
>>   2 | 1
>>   2 | 2
>>   2 | 3
>>   2 | 4
>>   3 | 1
>>   3 | 2
> 
> +1, nice.

And the really cool thing about it is that it will work well
with a combined index on (x, y).
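
For example (a sketch):

CREATE INDEX ON pr_test (x, y);

EXPLAIN SELECT * FROM pr_test
WHERE (x, y) BETWEEN (1, 3) AND (3, 2)
ORDER BY x, y;

With enough rows in the table, the plan should show an index scan that also
satisfies the ORDER BY for free.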

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Version management for extensions

2015-10-09 Thread Albe Laurenz
Jeff Janes wrote:
> I am facing a scenario where I have different version of an extension, say 
> 1.0 and 2.0, which have
> some different functionality between them (so not merely a bug fix), so 
> people might want to continue
> to use 1.0.
> 
> But changes to the PostgreSQL software between major versions requires 
> changes to the extension's
> source code.
> 
> So I  basically have 4 versions to carry:
> 
> 1.0_for_9.4_or_before
> 2.0_for_9.4_or_before
> 1.0_for_9.5
> 2.0_for_9.5
> 
> 
> Is there some easy way to handle this?  Are there examples of existing 
> modules which have a similar
> situation (and which handle it well) on PGXN or pgfoundry or other public 
> repositories?

I don't think that there is an easy solution.

Could some #ifdefs make the same code work for 9.4 and 9.5?

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Drop or alter column under load give ERROR #42804 structure of query does not match function result type:

2015-10-09 Thread Albe Laurenz
Adrian Klaver wrote:
> On 10/08/2015 11:32 PM, Victor Blomqvist wrote:
>> I have a heavily used PostgreSQL 9.3.5 database on CentOS 6. Sometimes I
>> need to add/remove columns, preferably without any service
>> interruptions, but I get temporary errors.
>>
>> I follow the safe operations list from
>> https://www.braintreepayments.com/blog/safe-operations-for-high-volume-postgresql
>> but many operations cause troubles anyway when the more busy tables are
>> updated.
>>
>> Typically I have user defined functions for all operations, and my table
>> and functions follow this pattern:
>>
>> CREATE TABLE users (
>>id integer PRIMARY KEY,
>>name varchar NOT NULL,
>>to_be_removed integer NOT NULL
>> );
>>
>> CREATE FUNCTION select_users(id_ integer) RETURNS SETOF users AS
>> $$
>> BEGIN
>>RETURN QUERY SELECT * FROM users WHERE id = id_;
>> END;
>> $$ LANGUAGE plpgsql;
>>
>> Then the actual queries are run by our application as
>>
>> SELECT id, name FROM select_users(18);
>>
>> As you can see the column to_be_removed is not selected. Then to remove
>> the column I use:
>>
>> ALTER TABLE users DROP COLUMN to_be_removed;
>>
>> However, while the system is under load sometimes (more frequently and
>> persistent the more load the system is experiencing) I get errors like
>> these:
>>
>>  ERROR #42804 structure of query does not match function result
>> type: Number of returned columns (2) does not match expected column
>> count (3).
>>
>> The same error can happen when columns are added. Can this be avoided
>> somehow, or do I need to take the system offline during these kind of
>> changes?
> 
> For the reason why this is happening see:
> 
> http://www.postgresql.org/docs/9.4/interactive/plpgsql-implementation.html#PLPGSQL-PLAN-CACHING

Yes, but the ALTER TABLE causes the plan to be recreated the next time.

There must be a race condition that causes other sessions to continue using
the old plan for a little while.  Don't know if that's as designed.

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Drop or alter column under load give ERROR #42804 structure of query does not match function result type:

2015-10-09 Thread Albe Laurenz
Adrian Klaver wrote:
>>> For the reason why this is happening see:
>>>
>>> http://www.postgresql.org/docs/9.4/interactive/plpgsql-implementation.html#PLPGSQL-PLAN-CACHING
>>
>> Yes, but the ALTER TABLE causes the plan to be recreated the next time.
> 
> But does it? From the link above:
> 
> "Because PL/pgSQL saves prepared statements and sometimes execution
> plans in this way, SQL commands that appear directly in a PL/pgSQL
> function must refer to the same tables and columns on every execution;
> that is, you cannot use a parameter as the name of a table or column in
> an SQL command. To get around this restriction, you can construct
> dynamic commands using the PL/pgSQL EXECUTE statement — at the price of
> performing new parse analysis and constructing a new execution plan on
> every execution."
> 
> I see '*' as a parameter. Or to put it another way '*' is not referring
> to the same thing on each execution when you change the table definition
> under the function.  Now if I can only get the brain to wake up I could
> find the post where Tom Lane explained this more coherently then I can:)

Session 1:

test=> CREATE TABLE users (id integer PRIMARY KEY, name varchar NOT NULL, 
to_be_removed integer NOT NULL);
CREATE TABLE
test=> CREATE FUNCTION select_users(id_ integer) RETURNS SETOF users AS
   $$BEGIN RETURN QUERY SELECT * FROM users WHERE id = id_; END;$$ LANGUAGE 
plpgsql;
CREATE FUNCTION

Session 2:

test=> SELECT id, name FROM select_users(18);
 id | name
+--
(0 rows)

Ok, now the plan is cached.

Now in Session 1:

test=> ALTER TABLE users DROP COLUMN to_be_removed;
ALTER TABLE

Session2:

test=> SELECT id, name FROM select_users(18);
 id | name
+--
(0 rows)

No error.  This is 9.4.4.

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Drop or alter column under load give ERROR #42804 structure of query does not match function result type:

2015-10-12 Thread Albe Laurenz
Victor Blomqvist wrote:
[race condition causes errors due to stale plans immediately after ALTER TABLE 
DROP]
> Note that these errors most of the time only happens very briefly at the same 
> time as the ALTER is
> run. When I did some experiments today the server in total had around 3k 
> req/s with maybe 0.1% of them
> touching the table being updated, and the error then happens maybe 1-10% of 
> the times I try this
> operation. If I do the operation on a table with more load the error will 
> happen more frequently.

As far as I gleaned from reading the source, plan cache invalidation happens by 
signals
sent to the other backends, so I can see why there can be small delays.
I wonder if there is any good way to improve this.

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pgpool ssl handshake failure

2015-10-16 Thread Albe Laurenz
AI Rumman wrote:
> I am using pgpool-II version 3.4.3 (tataraboshi).
> Where my database is Postgresql 8.4.
> 
> I am trying to configure ssl mode from client and between pgpool and database 
> it is non-ssl.
> I configured as document and now I am getting this in my log:
> 
>   2015-10-13 22:17:58: pid 1857: LOG:  new connection received
>   2015-10-13 22:17:58: pid 1857: DETAIL:  connecting host=10.0.0.5 
> port=65326
>   2015-10-13 22:17:58: pid 1857: LOG:  pool_ssl: "SSL_read": "ssl 
> handshake failure"
>   2015-10-13 22:17:58: pid 1857: ERROR:  unable to read data from frontend
>   2015-10-13 22:17:58: pid 1857: DETAIL:  socket read failed with an 
> error "Success"
> 
> Please let me know what wrong I am doing.

Does the problem occur immediately when you try to connect or only after some 
time?

If the latter, it may well be a renegotiation issue.
If the former, I can't guess.
What are the OpenSSL versions involved on both ends?

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] interpreting type oid in C code

2015-10-20 Thread Albe Laurenz
Ken Been wrote:
> I'm working on a foreign data wrapper and I want to switch based on the 
> column type.
> Specifically, if the column type in the external table is the same as in the 
> (locally defined) foreign
> table then I can get some speedup for some types.
> 
> Through the ForeignScanState object I can get TupleDesc and AttInMetadata 
> opjects, and
> through there I can get the Oid of the column type, but now I'm stumped.  How 
> can I programmatically
> check whether Oid X refers to type int4, or whatever?

> Thanks, but I actually wanted to do it from C code.  But anyway I think I 
> found the answer: use the
> symbolic constants in catalog/pg_type.h, such as INT4OID.

That's fine, there is no problem with doing that.
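
If you want to cross-check the mapping from SQL:

SELECT 'integer'::regtype::oid;   -- 23, the value of INT4OID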

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg_archivecleanup not deleting anything?

2015-11-02 Thread Albe Laurenz
Paul Jungwirth wrote:
> I'm running Postgres 9.3 in a warm standby configuration, and the slave
> has this setting in recovery.conf:
> 
> archive_cleanup_command = '/usr/lib/postgresql/9.3/bin/pg_archivecleanup
> /secure/pgsql/archive/ %r'
> 
> But I noticed that the archive directory had files going back to
> February 2014:
> 
> $ ls -ltr archive | head
> total 9126292
> -rw--- 1 postgres postgres  300 Feb 15  2014
> 00010002.0028.backup
> -rw--- 1 postgres postgres  300 Feb 15  2014
> 00010003.0028.backup
> -rw--- 1 postgres postgres  300 Feb 15  2014
> 00010004.0028.backup
> -rw--- 1 postgres postgres  300 Feb 15  2014
> 00010006.0028.backup
> -rw--- 1 postgres postgres  300 Feb 15  2014
> 00010007.0028.backup
> -rw--- 1 postgres postgres  300 Feb 15  2014
> 0001000B.0028.backup
> -rw--- 1 postgres postgres  300 Feb 15  2014
> 0001000C.0028.backup
> -rw--- 1 postgres postgres  300 Feb 15  2014
> 0001000E.0028.backup
> -rw--- 1 postgres postgres  300 Feb 15  2014
> 0001000F.0028.backup
> 
> And even if I run the command by hand, nothing is actually deleted:
> 
> $ pg_archivecleanup -d /secure/pgsql/archive/
> 0001000F.0028.backup
> pg_archivecleanup: keep WAL file
> "/secure/pgsql/archive//0001000F" and later
> 
> Despite the message, the same files remain.
> 
> Does anyone have any idea why pg_archivecleanup isn't deleting anything?

It seems like pg_archivecleanup does not delete *.backup files.

These files are rather small and are only created during a backup.
They contain information about the backup, so I guess it was decided that
they are important enough to be kept.

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg_archivecleanup not deleting anything?

2015-11-03 Thread Albe Laurenz
Michael Paquier wrote:
>> So, as Albe posted pg_archivecleanup is only cleaning up the WAL files, not
>> the auxiliary files. The WAL files would be the ones with no extension and a
>> size of 16 MB(unless someone changed the compile settings).
> 
> The docs mention that "all WAL files" preceding a given point are
> removed, personally I understand that as "all 16MB-size segments shall
> die", hence excluding backup and history files from the stack. But one
> may understand that "WAL files" means everything in pg_xlog, so as
> something that includes backup and history files. Perhaps we would
> gain in clarity by saying "WAL file segments, including .partial
> segments" in the docs, and not just "WAL files". Thoughts?

It might help to add:
"Timeline history files and backup history files are not deleted."

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg_archivecleanup not deleting anything?

2015-11-04 Thread Albe Laurenz
Michael Paquier wrote:
>>> The docs mention that "all WAL files" preceding a given point are
>>> removed, personally I understand that as "all 16MB-size segments shall
>>> die", hence excluding backup and history files from the stack. But one
>>> may understand that "WAL files" means everything in pg_xlog, so as
>>> something that includes backup and history files. Perhaps we would
>>> gain in clarity by saying "WAL file segments, including .partial
>>> segments" in the docs, and not just "WAL files". Thoughts?
>>
>> It might help to add:
>> "Timeline history files and backup history files are not deleted."
> 
> Something among those lines?

> +  
> +   WAL file segments and WAL file segments with .partial
> +   are deleted, while timeline history files and backup history files are 
> not.
> +  

"WAL file segments with .partial" sounds strange.
What about "WAL file segments (including partial ones) are deleted, while ..."?

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg_archivecleanup not deleting anything?

2015-11-06 Thread Albe Laurenz
Michael Paquier wrote:
>>> Something among those lines?
>>
>>> +  
>>> +   WAL file segments and WAL file segments with .partial
>>> +   are deleted, while timeline history files and backup history files are 
>>> not.
>>> +  
>>
>> "WAL file segments with .partial" sounds strange.
>> What about "WAL file segments (including partial ones) are deleted, while 
>> ..."?
> 
> Does that look better?

That looks fine to me, thanks!

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] xa compatibility

2015-11-10 Thread Albe Laurenz
Xaver Thum wrote:
> is there an option (provided by Postgres) accessing a Postgres DB via  the 
> standard XA interface ?
> 
> I don't mean the usage of JDBC's class PGXADataSource,
> but the usual XA methods xa_open, xa_prepare, xa_commit, ... of the XA 
> standard.

I am not sure why there is no implementation of the XA API for PostgreSQL,
I have wondered about that myself.

Maybe it is because PostgreSQL's XA support does not offer all the functionality
that the standard provides, maybe it is just because nobody wrote the code for 
it.

Yours,
Laurenz Albe


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] can postgres run well on NFS mounted partitions?

2015-11-10 Thread Albe Laurenz
anj patnaik wrote:
> Can anyone advise if there are problems running postgres over NFS mounted 
> partitions?
> 
> I do need reliability and high speed.

I have got the advice not to use NFS from a number of people who should know,
but there are also knowledgable people who use PostgreSQL with NFS.

You need hard foreground mounts, and you need an NFS server that is
guaranteed not to lose data that the client has synced to disk.

You should probably only consider storage systems that directly
support NFS, and you should run performance and reliability tests.

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] can postgres run well on NFS mounted partitions?

2015-11-11 Thread Albe Laurenz
Scott Mead wrote:
> Don't do it.  Period.   I've used 4 big-vendor appliances with NFS as well as 
> my own server.  With
> maybe 3 exceptions, most of the 'total-data-loss' scenarios I've dealt with 
> regarding transactional
> data was due to NFS.

Can you share more details?
What happened and what had caused the problem?

While researching the same topic, I came across two scenarios where people
had encountered problems:

1) A background mounted NFS failed to mount in time.
   A smart startup script triggered initdb on the empty mount point,
   then while PostgreSQL was running, the mount completed
   and data corruption ensued.
2) The NFS server implementation didn't actually sync the data
   (or keep it in a battery powered buffer) when it said it had
   synced them.

It would be good to know of other pitfalls; I (and no doubt not only I)
keep getting asked why we shouldn't run PostgreSQL on NFS when Oracle
has no problem with it (and don't tell me that Oracle does not care
about data corruption).

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] attempting to install tds_fw-master on redhat

2015-11-11 Thread Albe Laurenz
Mammarelli, Joanne T wrote:
> The following postgresql rpm is installed ..
> 
> rpm -qil postgresql-server-9.2.13-1.el7_1.x86_64
> 
> on redhat 7

> [root@scsblnx-994457 tds_fdw-master]# make USE_PXGS=1 install
> 
> Makefile:53: /usr/lib64/pgsql/pgxs/src/makefiles/pgxs.mk: No such file or directory
> make: *** No rule to make target `/usr/lib64/pgsql/pgxs/src/makefiles/pgxs.mk'.

Install the postgresql-devel package.

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] can postgres run well on NFS mounted partitions?

2015-11-11 Thread Albe Laurenz
John R Pierce wrote:
> On 11/11/2015 12:37 AM, Albe Laurenz wrote:
> > It would be good to know of other pitfalls; I (and no doubt not only I)
> > keep getting asked why we shouldn't run PostgreSQL on NFS when Oracle
> > has no problem with it (and don't tell me that Oracle does not care
> > about data corruption).
> 
> Last I looked, Oracle had a very specific set of recommendations for
> qualified NFS storage.Like, NetApp Filers, with everything setup
> just so.

I had heard that before, so I looked and could not find any information
which NFS devices are supported. All you read is:
(http://docs.oracle.com/cd/E11882_01/install.112/e47689/app_nas.htm#LADBI1366)

  Before using the NAS device for the installation, verify that it is certified.
  For certification information refer to note 359515.1 on the My Oracle Support 
website

I did, and all that contains is a list of mount options you have to use, e.g.
for x86-64 Linux
rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,actimeo=0,vers=3,timeo=600
(the "bg" weirds me out)

So I opened a service request about a year ago and asked them *what* NFS
storage is supported.

The answer (in unquotable English) was that it didn't matter what the
NFS server was, as long as the NFS client is on a UNIX/Linux certified for 
Oracle
database and you use the right mount options.
At least that's what I understood ...

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] attempting to install tds_fw-master on redhat

2015-11-11 Thread Albe Laurenz
Mammarelli, Joanne T wrote:
> Installed postgresql-devel .. getting closer ..
> 
> Installed Packages
> postgresql.x86_64 9.2.13-1.el7_1   
> @rhel7-x86_64-2015-07
> postgresql-devel.x86_64   9.2.13-1.el7_1   
> @rhel-x86_64-server-7
> postgresql-libs.x86_649.2.13-1.el7_1   
> @rhel7-x86_64-2015-07
> postgresql-server.x86_64  9.2.13-1.el7_1   
> @rhel-x86_64-server-7
> 
> Installed Packages
> freetds.x86_640.91-12.git0a42888.el7  
> @rhel-x86_64-server-7-epel
> freetds-devel.x86_64  0.91-12.git0a42888.el7  
> @rhel-x86_64-server-7-epel
> 
> 
> [root@scsblnx-994457 tds_fdw-master]# make USE_PGXS=1
> gcc -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions 
> -fstack-protector-sc
> src/tds_fdw.c:30:33: fatal error: access/htup_details.h: No such file or directory
>  #include "access/htup_details.h"
>  ^
> compilation terminated.
> make: *** [src/tds_fdw.o] Error 1

Looks like you are using a PostgreSQL version older than 9.3.

Did you check the README for the supported PostgreSQL versions?

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Does PostgreSQL ever create indexes on its own?

2015-11-13 Thread Albe Laurenz
Doiron, Daniel wrote:
> I’m troubleshooting a schema and found this:
> 
> Indexes:
> "pk_patient_diagnoses" PRIMARY KEY, btree (id)
> "index_4341548" UNIQUE, btree (id)
> "idx_patient_diagnoses_deleted" btree (deleted)
> "idx_patient_diagnoses_diagnosis_type_id" btree (diagnosis_type_id)
> "idx_patient_diagnoses_icd10" btree (icd10)
> "idx_patient_diagnoses_icd9" btree (diagnosis_code)
> "idx_patient_diagnoses_is_unknown" btree (is_unknown)
> "idx_patient_diagnoses_modified" btree (modified)
> "idx_patient_diagnoses_patient_id" btree (patient_id)
> "idx_patient_diagnoses_uuid" btree (uuid)
> "index_325532921" btree (modified)
> "index_4345603" btree (deleted)
> "index_4349516" btree (diagnosis_type_id)
> "index_4353417" btree (icd10)
> "index_4384754" btree (diagnosis_code)
> "index_4418849" btree (is_unknown)
> "index_4424101" btree (patient_id)
> "index_4428458" btree (uuid)
> 
> My questions is whether these “index_*” indexes could have been created by 
> postgresql or whether I
> have an errant developer using some kinda third-party tool?

These indexes were *not* created by PostgreSQL.
We are not Oracle.

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] can postgres run well on NFS mounted partitions?

2015-11-13 Thread Albe Laurenz
John McKown wrote:
> All of the above make we curious about using NFS for the data files, but 
> having the WAL files on a
> local, perhaps SSD, device.​ I am not knowledgeable about WAL. Of course, I 
> don't know why the OP wants
> to put the database files on an NFS.

If the data file storage does not keep the promise that synced files are
actually on disk, you'd get in trouble at checkpoint time.

So if you don't trust NFS, that wouldn't be an option.

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Does PostgreSQL ever create indexes on its own?

2015-11-13 Thread Albe Laurenz
Jeremy Harris wrote:
> On 13/11/15 10:49, Thomas Kellerer wrote:
>>> These indexes were *not* created by PostgreSQL.
>>> We are not Oracle.
>>
>> Well, Oracle does not create indexes on its own either - it has the same 
>> strategy as Postgres:
>> Indexes are only created automatically for primary keys and unique 
>> constraints.

I know - but I couldn't help commenting on the strange names
it chooses for these, like "SYS43243247".
Sorry for being unclear.

> Given that indices are an implementation wart on the side of the
> relational model, it'd be nice if RDBMS' did create them for one.

That cannot be done without knowing what the queries are going to be.

However, I recently learned that MySQL automatically creates indexes
on columns with a foreign key, and you cannot even drop those.

Maybe that would be a good thing, guessing from the number of cases
where people suffer from the lack of such indexes, but on the other
hand it feels like too much DWIM (there are cases where you do not
need such an index).

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Can SET_VARSIZE cause a memory leak?

2016-06-08 Thread Albe Laurenz
Николай Бабаджанян wrote:
> I didn't find an easy way to convert ucs-2 bytea to utf-8, so I decided to 
> write a C-function. Since
> ucs-2 is has fixed symbol size of 2 bytes the output bytea size may differ.
> 
> I do the following:
> 
>   bytea   *result= (bytea *)  palloc0(VARSIZE(in_bytea)); // 
> allocating memory for the result
>   SET_VARSIZE(result, VARSIZE_ANY(in_bytea));
> 
>   ... // some calculations resulting in `result` having some trailing 0-s 
> (since palloc0 was
> used). We don't need those, so:
> 
>   SET_VARSIZE(result, new_varsize_result+VARHDRSZ); // new_varsize_result 
> was calculated during
> the convertion
> 
>   PG_RETURN_BYTEA_P(result);
> 
> The question is am I leaking memory by doing this, and if I am, should I use 
> pfree() manually on each
> address that is left trailing, or is there some other way to make this work?

This is safe, and the memory will be freed at the end of the transaction.

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Changelog version from 8.1.2 to 9.3.6

2016-06-15 Thread Albe Laurenz
Yogesh Sharma wrote:
> I have doubt regarding release notes of all versions.
> As per release notes, below change logs are mentioned in all versions.
> 
> "(8.3.8,8.4.1,8.2.14) Make LOAD of an already-loaded loadable module into a 
> no-op (Tom Lane)"
> 1. What is meaning of above lines?
> 2. This changes are implemented only in above versions or is it propagated on 
> every above newer
> versions after that?
> 3. Why same line is mentioned in the release notes of above 
> 8.3.8,8.4.1,8.2.14 versions?

Ad 1:
As of the cited versions, nothing will be done if you use LOAD to load a module
that is already loaded.

Ad 2:
The change was a bugfix that was applied to the development version (so it is 
included
in 9.0 and later) and was backpatched to 8.2, 8.3 and 8.4.

If you want to know details about a change like this, you'll have to dig into 
the
git logs and possibly the code.
The commit was 602a9ef5a7c60151e10293ae3c4bb3fbb0132d03 in master,
57710f39cc55cba1e98c718300a811aadacec7c5 in 8.4,
5927d9f642c4cf6233e5fedd3468087995c00523 in 8.3 and
22f77b0f9db1220789b262cda6dccad49d031643 in 8.2.

Ad 3:
Because the same fix was applied to these three versions.

It is safe to assume that a problem that was fixed in 8.4 will also be
fixed in 9.0 and above, even though the 9.0 release notes don't explicitly
mention it - they mostly contain new features.


When reading up on the changes between 8.1.2 and 9.3.6 (a daunting task)
you can usually limit yourself to reading 8.2, 8.3, 8.4, 9.0, 9.1, 9.2
and 9.3.  This should cover all behaviour changes that are likely to
affect you.
The minor releases should not change behaviour other than fixing bugs.

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Hot disable WAL archiving

2016-06-17 Thread Albe Laurenz
Job wrote:
> is there a way in Postgresql-9.5 to temporarily disable WAL archiving to
> speed up pg_bulkload without
> restarting the database engine?

You can set 'archive_command=/bin/true' and reload, then no WAL
archives will be written.
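
On 9.5 you don't even have to edit postgresql.conf, for example (a sketch):

ALTER SYSTEM SET archive_command = '/bin/true';
SELECT pg_reload_conf();

-- ... run pg_bulkload ...

ALTER SYSTEM RESET archive_command;
SELECT pg_reload_conf();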

Make sure to perform a base backup as soon as your bulk load
is finished.

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Protect a table against concurrent data changes while allowing to vacuum it

2016-06-22 Thread Albe Laurenz
Vlad Arkhipov wrote:
> I have a constraint that requires a table to be locked before checking
> it (i.e. no more than 2 records with the same value in the same column).
> If I lock the table in the SHARE ROW EXCLUSIVE mode, any vacuuming (or
> autovacuuming) process prevents me from checking the constraint. What
> are possible solutions?

Can you describe your check in more detail?
Why don't you use simple unique constraints?

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Protect a table against concurrent data changes while allowing to vacuum it

2016-06-22 Thread Albe Laurenz
Sameer Kumar wrote:
> On Wed, Jun 22, 2016 at 6:08 PM Vlad Arkhipov  wrote:
>> I am running PostgreSQL 9.5.
>> 
>> CREATE TABLE t (id BIGINT NOT NULL PRIMARY KEY, name TEXT);
>> 
>> The constraint that the data must satisfy is `there is no more than 3 
>> records with the same name`.
>> 
>> I am not in control of queries that modify the table, so advisory locks can 
>> hardly be of help to me.
> 
> 
> Define a function which does a count of the rows; if the count is 3 it returns
> false, if the count is less it
> returns true.
> 
> Use check constraint with this function. I have not tried this so not sure if 
> you can use function
> with SELECT on same table in CHECK constraint. So test it out first.
> 
> If this works, any insert trying to get the 4th record in table would fail.

You cannot use subqueries in a check constraint:

ALTER TABLE t
   ADD CONSTRAINT name_count
  CHECK ((SELECT count(*) FROM t t1 WHERE t1.name = name) <= 3);
ERROR:  cannot use subquery in check constraint

> A last resort could be using triggers. But either of these approaches will 
> cause issues if you have
> high concurrency.

Yes, triggers is the way to go:

CREATE OR REPLACE FUNCTION check_t() RETURNS trigger LANGUAGE plpgsql AS
$$BEGIN
   IF (SELECT count(*) FROM t WHERE name = NEW.name) > 3 THEN
  RAISE EXCEPTION 'More than three values!';
   END IF;
   RETURN NEW;
END;$$;

CREATE TRIGGER check_t AFTER INSERT OR UPDATE ON t FOR EACH ROW
   EXECUTE PROCEDURE check_t();

But be warned that this will only work if all transactions involved use
the isolation level SERIALIZABLE.

Otherwise two concurrent INSERTs would not see each other's entry, and the
triggers would not raise an error even if there are more than three entries
after COMMIT.

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Slow SQL?

2016-07-12 Thread Albe Laurenz
haman...@t-online.de wrote:
> Bjørn T Johansen wrote:
>> I am trying to move a small system from Oracle to PostgreSQL and I have come 
>> upon a sql that runs
>> really slow compared to on the Oracle database and I am not able to 
>> interpret why this is slow.

> I have experienced that some subqueries can be quite slow, and would suspect 
> the NOT IN
> clause. I occasionally rewrite
> NOT IN (select key from unwanted_candidates)
> as
> IN (select key from possible_candidates except select key from 
> unwanted_candidates)

I would try to rewrite these clauses to NOT EXISTS, for example

   a.avlsnr Not In (Select avlsnr From dyr_pause_mot)

could be

   NOT EXISTS (SELECT NULL FROM dyr_pause_mot WHERE avlsnr = a.avlsnr)

This can be executed as anti-join and is often more efficient.

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Slow SQL?

2016-07-12 Thread Albe Laurenz
Bjørn T Johansen wrote:
> Thx for your suggestions. Tried to use NOT EXISTS and the query was about 
> half a second quicker so not
> much difference...
> But when I try to run the 3 queries separately, then they are very quick, 2 
> barely measurable and the
> third takes about 1,5 seconds. The union query
> takes a little over 9 seconds, so I guess the union part is the bottleneck?

Looking at EXPLAIN (ANALYZE) output should tell you.

If yes, is UNION ALL an option for you?
That should be cheaper: UNION has to remove duplicate rows (usually by sorting
or hashing the complete result), while UNION ALL simply appends the results.

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] jdbc 9.4-1208 driver for PostgreSQL 9.5?

2016-07-13 Thread Albe Laurenz
Joek Hondius wrote:
> (I hope i am on the right list)

pgsql-jdbc would have been the perfect list.

> jdbc.postgresql.org lists version 9.4 build 1208 as the lastest.
> Is this the correct version to use with PostgreSQL 9.5 (or even 9.6-beta)?
> I cannot find info on this elsewhere.

Yes, you should just use the latest driver.
Don't be worried if they don't have the same version number.

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Duplicate data despite unique constraint

2016-09-02 Thread Albe Laurenz
Jonas Tehler wrote:
> We’re running Postgresql 9.4.5 on Amazon RDS. One of our tables looks 
> something like this:
> 
> 
> CREATE TABLE users
> (
>   ...
>   email character varying(128) NOT NULL,
>   ...
>   CONSTRAINT users_email_key UNIQUE (email)
> )
> 
> Despite this we have rows with very similar email values. I discovered the 
> problem when I tried to add
> a column and got the following error:
> 
> ActiveRecord::RecordNotUnique: PG::UniqueViolation: ERROR:  could not create 
> unique index
> "users_email_key"
> DETAIL:  Key (email)=(x...@yyy.com) is duplicated.
> : ALTER TABLE "users" ADD COLUMN "client_branch" character varying(255) 
> DEFAULT 'beta'
> 
> 
> Now look at this:
> 
> => select email from users where email = 'x...@yyy.com';
>email
> ---
>  x...@yyy.com
> (1 row)
> 
> 
> => select email from users where email LIKE 'x...@yyy.com';
>email
> ---
>  x...@yyy.com
>  x...@yyy.com
> (2 rows)
> 
> 
> I have tried to compare the binary data in various ways, email::bytea, 
> md5(email),
> encode(email::bytea, 'hex'), char_length(email) and it all looks the same for 
> both rows.
> 
> Any suggestions how I can discover the difference between the values and how 
> they could have been
> added without triggering the constraint? I know that the values were added 
> after the constraint was
> added.
> 
> The data was added from a Ruby on Rails app that also has unique constraints 
> on the email field and
> validation on the email format.

That looks very much like data corruption.

I guess there is an index on "users" that is used for one query but not the 
other.
Can you verify with EXPLAIN?

Assuming that it is a 'text_ops' or 'varchar_ops' index, I'd say it gets used 
for the first
query, but not for the second.  That would mean that there is an extra entry in 
the table that
is not in the index.

Did you have any crashes, standby promotion, restore with PITR or other unusual 
occurrences recently?

Make sure you have a physical backup; there may be other things corrupted.

Here is a possible way to proceed:

Once you have made sure that you have a physical backup, try to add the "ctid" 
column to both queries.

Then delete the extra row from the second query with "DELETE FROM users WHERE 
ctid = ...".
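
A sketch of how that could look (the ctid values shown are made up):

   SELECT ctid, email FROM users WHERE email LIKE 'x...@yyy.com';
   -- suppose this shows (0,1) for the indexed row and (0,2) for the extra one:
   DELETE FROM users WHERE ctid = '(0,2)';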

Then, to make sure there is no other corruption lurking, make a logical backup
with pg_dumpall, create a new database cluster with "initdb", and restore the
data there.

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Unable to create oracle_fdw (foreign data wrapper) extension

2016-09-16 Thread Albe Laurenz
Arun Rangarajan wrote:
> But when I try to create the extension, I get the following error:
>
> postgres=# create extension oracle_fdw;
> server closed the connection unexpectedly
> This probably means the server terminated abnormally
> before or while processing the request.
> The connection to the server was lost. Attempting reset: Failed.

> t=2016-09-15 16:16:08 PDT d= h= p=25327 a=LOG:  0: server process (PID 
> 20397) was terminated by signal 11: Segmentation fault

Well, as I told you, get a stack trace with debugging symbols.

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Database fixed size

2016-09-28 Thread Albe Laurenz
Adir Shaban wrote:
> Is there any way to limit a database size?
> For example, I need to create a database for user X and I don't want it to 
> use more than 5 GB.

You can create a tablespace on a device with limited size.
Then you can create the database on that tablespace.
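
A sketch, assuming a file system of the desired size is mounted at /mnt/user_x
(the path and names are made up; the directory must exist and belong to the
operating system user that runs PostgreSQL):

   CREATE TABLESPACE ts_user_x LOCATION '/mnt/user_x/pgdata';
   CREATE DATABASE db_user_x TABLESPACE ts_user_x;

Once the file system is full, further writes in that database will fail with
an error.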

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Restricted access on DataBases

2016-10-04 Thread Albe Laurenz
Durumdara wrote:
[...]
> --- login with postgres:
[...]
>   ALTER DEFAULT PRIVILEGES
>   GRANT INSERT, SELECT, UPDATE, DELETE, TRUNCATE, REFERENCES, TRIGGER 
> ON TABLES
>   TO u_tr_db;
>
>  login with u_tr_main:
>
>   create table t_canyouseeme_1 (k int);
>
>  login with u_tr_db:
> 
>   select * from t_canyouseeme_1;
> 
>   ERROR: permission denied for relation t_canyouseeme_1
>   SQL state: 42501
> 
>  As you see before, u_tr_db got all default privileges on future tables, so I 
> don't understand why he
> doesn't get access to "t_canyouseeme_1".

You should have written

   ALTER DEFAULT PRIVILEGES FOR ROLE u_tr_main ...

The way you did it, you effectively wrote "FOR ROLE postgres" because
you were connected as that user.

That means that all future tables created *by postgres* will have
privileges for user "u_tr_db" added.  But you want tables created
*by u_tr_main* to get the privileges.
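
With the privilege list from your original statement, the corrected version
would look like this:

   ALTER DEFAULT PRIVILEGES FOR ROLE u_tr_main
   GRANT INSERT, SELECT, UPDATE, DELETE, TRUNCATE, REFERENCES, TRIGGER ON TABLES
   TO u_tr_db;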

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] SERIALIZABLE and INSERTs with multiple VALUES

2016-10-12 Thread Albe Laurenz
Kevin Grittner wrote:
> 
> On Tue, Oct 11, 2016 at 2:29 PM, Jason Dusek  wrote:
> 
>> I notice the following oddity:
> 
>>  =# CREATE TABLE with_pk (i integer PRIMARY KEY);
>> CREATE TABLE
> 
>>  =# BEGIN;
>> BEGIN
>>  =# INSERT INTO with_pk VALUES (2), (2) ON CONFLICT DO NOTHING;
>> ERROR:  could not serialize access due to concurrent update
>>  =# END;
>> ROLLBACK
> 
> I don't see that on development HEAD.  What version are you
> running?  What is your setting for default_transaction_isolation?

The subject says SERIALIZABLE, and I can see it on my 9.5.4 database:

test=> CREATE TABLE with_pk (i integer PRIMARY KEY);
CREATE TABLE
test=> START TRANSACTION ISOLATION LEVEL SERIALIZABLE;
START TRANSACTION
test=> INSERT INTO with_pk VALUES (2), (2) ON CONFLICT DO NOTHING;
ERROR:  could not serialize access due to concurrent update

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] journaled FS and and WAL

2016-10-14 Thread Albe Laurenz
t.dalpo...@gmail.com wrote:
>   two questions related to the WAL.
> 
> 1) I read in the doc that journaled FS is not important as WAL is
> journaling itself.  But who guarantees that the WAL is written correctly?
> I know that it's sequential and a partial update of WAL can be discarded
> after a restart. But am I sure that without a journaled FS, if there is
> a crash during the WAL update, nothing already updated in the WAL before
> my commit can get corrupted?

At commit time, the WAL is "synchronized": PostgreSQL instructs the operating
system to write the data to the physical medium (not just a memory cache)
and to return success only if that write succeeded.

After a successful commit, the WAL file and its metadata are on disk.
Moreover, the file metadata won't change (except for the write and access
timestamps) because WAL files are created with their full size and never
extended, so no WAL file should ever get "lost" because of partial metadata
writes.

> 2) Let's suppose that I have one database, one table of 10 rows,
> each 256 bytes. Now, in a single SQL commit, I update row 10, row 3
> and row 8. How much should I expect the WAL to increase by? (supposing
> no WAL segments will be deleted). I could guess 8192x3, but I'm not sure.

It will be that much immediately after a checkpoint, because the first change
to a data block after a checkpoint writes a full-page image to WAL (see the
parameter "full_page_writes").  For subsequent writes to the same disk block,
only the actually changed parts of the data block will be written to WAL.
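
If you want to measure rather than guess, you can note the WAL position before
and after the transaction (function names as in the 9.x series; the positions
shown are made up):

   SELECT pg_current_xlog_location();   -- e.g. 0/16B6C50
   -- run the updating transaction, then:
   SELECT pg_current_xlog_location();   -- e.g. 0/16B8D70
   -- the difference in bytes:
   SELECT pg_xlog_location_diff('0/16B8D70', '0/16B6C50');

Do this on an otherwise idle system, since any concurrent activity also
writes WAL.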

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Collations and codepages

2016-10-18 Thread Albe Laurenz
Raimo Jormakka wrote:
> In Windows 7, and using PostgreSQL 9.4.5, the collation gets set to 
> "English_United States.1252" when
> I select the "English, United States" locale in the installer. In Linux, the 
> collation is set to
> "en_US.UTF-8". The encoding is set to UTF-8 in both instances.
>
> Will these two instances behave identically in terms of collation logic? And 
> if not, is there
> something I can do about it? In general, what's the impact of the codepage 
> part of a collation to
> begin with?

The two collations will probably not behave identically, since PostgreSQL uses
the operating system collations instead of having its own, and odds are that
Microsoft's collations and glibc's are slightly different.

I don't know if the impact will be large; maybe run a couple of tests to see if 
the
ordering is similar enough for your purposes.
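
A simple smoke test is to run the same ORDER BY on both systems and compare
the output (the sample strings are arbitrary; case and space handling is where
collations usually differ):

   SELECT s FROM (VALUES ('ab'), ('a b'), ('AB'), ('Ab')) AS t(s) ORDER BY s;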

I don't think that the actual encoding (UTF-8 or Windows-1252) has any impact
on the ordering, but I am not certain.

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] journaled FS and and WAL

2016-10-19 Thread Albe Laurenz
t.dalpo...@gmail.com wrote:
> I don't care about performance, but I absolutely care about reliability,
> so I was wondering about the safest Linux FS and PostgreSQL settings I
> can use.

Sure, use journaling then.
I do it all the time.

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Sequences / Replication

2016-10-21 Thread Albe Laurenz
Jonathan Eastgate wrote:
> We're seeing some odd behaviour from a PostgreSQL group - one running as 
> primary and the other as a
> hot slave using streaming replication.
> 
> When a failover event occurs and we switch to the hot slave as primary,
> sequences in tables jump by 33 - so where the last number allocated in the
> sequence was 100 prior to failover, the next entry after failover gets the
> number 133.

That is working as expected.

When nextval() is called, sequence values can get lost: each call reserves at
least one number (more if you use the CACHE clause of CREATE SEQUENCE), and if
the transaction is aborted, the reserved numbers are not reused.

Moreover, for performance reasons PostgreSQL writes a sequence change to WAL
only once every 32 values, and what it logs is the current value plus 32.
After a crash or a failover to a standby, the sequence is restored from WAL,
that is, from the pre-logged value.  In your case the last logged value was
100 + 32 = 132, so the first nextval() after the failover returns 133.

Sequences are guaranteed to produce unique values, but not gap-free ones.

This should not be a problem.

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] postgres_fdw : disable extended queries

2016-10-24 Thread Albe Laurenz
Nicolas Paris wrote:
> I have a 9.6 pg instance, and I am trying to link a foreign postgresql 
> database that does not accept
> extended queries. (only simple queries 
> https://www.postgresql.org/docs/current/static/protocol.html )
> 
> When I run a query against the foreign pg instance through postgres_fdw, it 
> looks like it sends a
> transaction containing
> 
> DECLARE c1 CURSOR FOR
> SELECT customer_id FROM foodmart.customer
> 
> -> is there a way to run a simple query with postgres_fdw such:
> 
> SELECT customer_id FROM foodmart.customer

No, it is part of the design that cursors are used, so that rows can be
fetched in batches and concurrent DML statements can be run.

You might consider using dblink.
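
dblink executes its query as a plain simple query, so a sketch could look like
this (the connection string and column type are made up):

   CREATE EXTENSION IF NOT EXISTS dblink;

   SELECT *
   FROM dblink('host=remotehost dbname=foodmart',
               'SELECT customer_id FROM foodmart.customer')
        AS t(customer_id integer);

Note that this fetches the whole result set at once, so it is only suitable
if the result fits into memory.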

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Re: What is the best thing to do with PUBLIC schema in Postgresql database

2016-11-07 Thread Albe Laurenz
Patricia Hu wrote:
> Since it could potentially be a security loophole, the actions taken so far 
> to address it fall into
> these two categories:
> 
> 1. Drop the PUBLIC schema altogether.  One concern is that some system
> objects previously exposed through the PUBLIC schema, e.g.
> pg_stat_statements, will then need explicit grants to be accessible to
> users.
> 2. Keep the PUBLIC schema, but revoke all privileges on it from the public
> role, then grant as the necessity comes up.
> 
> Any feedback and lessons from those who have implemented this?

I'd prefer the second approach as it is less invasive and prevents
undesirable objects in schema "public" just as well.
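
A sketch of the second approach (the role name is made up):

   REVOKE ALL ON SCHEMA public FROM PUBLIC;
   GRANT USAGE ON SCHEMA public TO app_user;
   -- grant CREATE or privileges on individual objects as the need arises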

> Confidentiality Notice::  This email, including attachments, may include 
> non-public, proprietary,
> confidential or legally privileged information.  If you are not an intended 
> recipient or an authorized
> agent of an intended recipient, you are hereby notified that any 
> dissemination, distribution or
> copying of the information contained in or transmitted with this e-mail is 
> unauthorized and strictly
> prohibited.

You are hereby notified that any dissemination, distribution or copying of the
information contained in or transmitted with your e-mail is hunky-dory.

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Database Recovery from Corrupted Dump or Raw database table file.

2016-11-07 Thread Albe Laurenz
Howard News wrote:
> I have a raid catastrophe which has effectively blitzed a cluster data
> directory.  I have several pg_dump backups but these will not restore
> cleanly. I assume the disk has been failing for some time and the
> backups are of the corrupted database.
> 
> Using a selective pg_restore on the dumps, I have restored 2/3rds of the
> data but some tables I cannot recover directly, so I would like to see
> if it is possible to examine the dumps (they are in compressed format)
> to see if there are any rows which are recoverable. I do not know how or
> if it is even possible to pick out rows from an individual table, when
> in all likelihood the file itself is corrupted.
> 
> I also have some parts of the data directory, so the tables may be
> accessible if I know the ID of the corrupt datatables. Is the ID listed
> in the pg_dump --list file?  And can the data be extracted from the raw
> data files without running a cluster. I am unsure if there is enough
> data to actually start a cluster.

A backup created with pg_dump consists of SQL statements to recreate the
objects.

You can extract the SQL statements as text with

  pg_restore -f sqlfile backupfilename

That should help you with restoring the data.

What exactly do you mean by "do not restore cleanly"?
Do you get error messages or is the content not ok?

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Surviving connections after internet problem

2016-11-07 Thread Albe Laurenz
Durumdara wrote:
> Linux server, 9.4 PG, Windows clients far-far away.
> 
> They called us saying that they had an "internet reset" at 13:00, but many 
> client locks are still alive
> now (14:00).
> I checked the server status and saw 16 connections.
> 
> For the Windows PG server I read about keepalive parameters which control 
> and redefine the default TCP
> keepalive values.
> 
> As I read it could be two hours in Windows, and it is a system default, so we 
> can change it for all
> applications.
> 
> I don't know what happens with Linux server and Windows clients.
> 
> Does the Linux version of PostgreSQL also use the 2-hour keepalive default, 
> or is it caused by the Windows clients,
> or by an active network device?
> 
> Or how could they survive this internet connection reset? :-o
> 
> Should we limit these parameters on the clients after the connection is 
> established?

Don't worry about the clients; just see to it that the backends go away on the
server.

You can use pg_terminate_backend to kill a database session.

Setting the keepalive options in postgresql.conf can make PostgreSQL
discover dead connections more quickly.
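
A sketch (the PID and the timing values are made up):

   -- see which backends are still around:
   SELECT pid, client_addr, state FROM pg_stat_activity;

   -- get rid of a dead session:
   SELECT pg_terminate_backend(12345);

   -- probe idle connections sooner; takes effect for new connections
   -- after a configuration reload:
   ALTER SYSTEM SET tcp_keepalives_idle = 60;
   ALTER SYSTEM SET tcp_keepalives_interval = 10;
   ALTER SYSTEM SET tcp_keepalives_count = 5;
   SELECT pg_reload_conf();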

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

