Re: [GENERAL] Overwrite pg_catalog?

2016-12-06 Thread Juliano
Hi everyone,

Ok, thanks.
Just to explain: I have some custom objects in my pg_catalog and I would like
to restore them in a new test database, and I think that this is the best way,
isn't it?


Regards,
Juliano




 Original Message 
Subject: Re: [GENERAL] Overwrite pg_catalog?
Local Time: December 2, 2016 2:18 AM
UTC Time: December 2, 2016 2:18 AM
From: michael.paqu...@gmail.com
To: Francisco Olarte 
Juliano , pgsql-general@postgresql.org 


On Thu, Dec 01, 2016 at 01:07:09PM +0100, Francisco Olarte wrote:
> Juliano:
>
> On Thu, Dec 1, 2016 at 12:16 PM, Juliano  wrote:
> > I tried to restore pg_catalog to my new database, but the existing
> > pg_catalog can't be overwritten or dropped, and postgres auto creates
> > pg_catalog when I create a new DB.
>
> This is because, in general, pg_catalog is maintained by DML
> statements; restoring it is not going to do what you think.

s/DML/DDL/
--
Michael

Re: [GENERAL] Streaming Replication delay getting bigger

2016-12-06 Thread Achilleas Mantzios

On 06/12/2016 04:20, Patrick B wrote:

Hi guys,

I've got some database servers in the USA (our own data center) and also at AWS Japan.

*USA:*
master01
slave01 (Streaming Replication from master01 + wal_files)
slave02 (Streaming Replication from master01 + wal_files)

*Japan: (Cascading replication)*
slave03 (Streaming Replication from slave02 + wal_files)
slave04 (Streaming Replication from slave02)

*Running this query on slave02:*

select now() - pg_last_xact_replay_timestamp() AS replication_delay;
 replication_delay
-------------------
 00:00:00.802012
(1 row)

*Same query on slave03 and slave04:*

select now() - pg_last_xact_replay_timestamp() AS replication_delay;
 replication_delay
-------------------
 00:56:53.639516
(1 row)


*slave02:*

SELECT client_hostname, client_addr, 
pg_xlog_location_diff(pg_stat_replication.sent_location, 
pg_stat_replication.replay_location) AS byte_lag FROM pg_stat_replication;
 client_hostname | client_addr | byte_lag
-----------------+-------------+----------
                 | slave03     |  2097400
                 | slave04     |  3803888
(2 rows)


Why is the delay that big? Is it a networking issue? I tried to find out
what the cause is, but couldn't find anything.

SCP and FTP transfers (big files) between those servers are really fast, over 1.0MB/s.


Are you sure the upstream does not produce WAL activity at a higher rate than
0.5MB/s?
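
One rough way to check that on the master (a sketch for 9.2: run it twice,
N seconds apart, and divide the difference by N to get WAL bytes per second):

select pg_xlog_location_diff(pg_current_xlog_insert_location(), '0/0');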


I'm using PostgreSQL 9.2.14

Thanks!
Patrick.



--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



Re: [GENERAL] Install pgAudit extension

2016-12-06 Thread Artur Zakirov

On 06.12.2016 08:44, Dylan Luong wrote:

Hi



I need some advice on installing the pgAudit extension, as I am new to
PostgreSQL extensions.



You can use PGXS. With PGXS you don't need the PostgreSQL source code, only
the pgaudit source code, but you do need an installed PostgreSQL. Here is an
example:


1 - unzip pgaudit to a pgaudit directory
2 - go to the pgaudit directory:

cd pgaudit

3 - compile and install:

make USE_PGXS=1
make USE_PGXS=1 install

It is assumed that pg_config is in your $PATH. If it is not,
you can run:


make USE_PGXS=1 PG_CONFIG=/path/to/pg_config
make USE_PGXS=1 PG_CONFIG=/path/to/pg_config install
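
After installing, pgaudit must also be loaded at server start and created
in the database; the usual steps are:

shared_preload_libraries = 'pgaudit'    # in postgresql.conf, then restart

CREATE EXTENSION pgaudit;               -- in the target database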

--
Artur Zakirov
Postgres Professional: http://www.postgrespro.com
Russian Postgres Company


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Overwrite pg_catalog?

2016-12-06 Thread Michael Paquier
On Tue, Dec 6, 2016 at 5:15 PM, Juliano  wrote:
> Hi everyone,
>
> Ok, thanks.
> Just to explain: I have some custom objects on my pg_catalog and I would
> like to restore it in a new test database and I think that this is the best
> way, isn't it?

Yes, pg_dump and pg_dumpall are the tools to use to restore a database
or multiple databases from a past state, and they provide options
controlling what you want to dump. So go for it.
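
For example, with placeholder database names, a typical dump/restore pair
looks like:

pg_dump -Fc sourcedb -f sourcedb.dump
pg_restore --create -d postgres sourcedb.dump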
-- 
Michael


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Migrating data from DB2 zOS to PostgreSQL

2016-12-06 Thread Swapnil Vaze
Hello Julien,

We created DDLs from DB2 zOS system and tried to run script in below format:

 ./db2topg.pl -f sql1.txt -o testdir

It throws below error:

I don't understand  at ./db2topg.pl line 880,  line 24.

For testing we used file with below content:

cat sql1.txt
-- This CLP file was created using DB2LOOK Version "10.5"
-- Timestamp: Tue Dec  6 04:14:28 CST 2016
-- Database Name: DB239
-- Database Manager Version: DB2 Version 11.1.0
-- Database Codepage: 1208



-- DDL Statements for Table "A90DVDT"."DLR_FAM_MRKTSHR_FY_END"


CREATE TABLE "A90DVDT"."DLR_FAM_MRKTSHR_FY_END"
(
 "DEALER_ID"  CHAR(6)  NOT NULL ,
 "MKTSHR_MONTH"  DATE NOT NULL ,
 "L12_DP_DLR_IN_AOR"  DECIMAL(15,6) ,
 "L12_DP_DLR_OUT_AOR"  DECIMAL(15,6) ,
 "L12_DP_DLR_SHARE"  DECIMAL(8,5) ,
 "L12_SA_DLR_SHARE"  DECIMAL(8,5) ,
 "L12_CA_DLR_SHARE"  DECIMAL(8,5) ,
 "L12_U90_DLR_IN_AOR"  DECIMAL(15,6) ,
 "L12_U90_DLR_OUT_AOR"  DECIMAL(15,6) ,
 "L12_U90_DLR_SHARE"  DECIMAL(8,5)
);

Thanks,
Swapnil


On Tue, Dec 6, 2016 at 12:23 PM, Julien Rouhaud 
wrote:

> On Tue, Dec 06, 2016 at 11:06:12AM +0530, Swapnil Vaze wrote:
> > Hello,
> >
>
> Hello
>
> > We need some help on how we can migrate data from DB2 zOS database to
> > postgres database.
> >
> > Are there any utilities present? Any thoughts how we should approach?
>
> You can use this utility: https://github.com/dalibo/db2topg
>
> The README should provide all needed informations.
>
> --
> Julien Rouhaud
> http://dalibo.com - http://dalibo.org
>



-- 
Thanks & Regards,
Swapnil Vaze


Re: [GENERAL] Migrating data from DB2 zOS to PostgreSQL

2016-12-06 Thread Sameer Kumar
On Tue, 6 Dec 2016, 9:27 p.m. Swapnil Vaze,  wrote:

>
> Hello Julien,
>
> We created DDLs from DB2 zOS system and tried to run script in below
> format:
>
>  ./db2topg.pl -f sql1.txt -o testdir
>
> It throws below error:
>
> I don't understand  > at ./db2topg.pl line 880,  line 24.
>
> For testing we used file with below content:
>
> cat sql1.txt
> -- This CLP file was created using DB2LOOK Version "10.5"
> -- Timestamp: Tue Dec  6 04:14:28 CST 2016
> -- Database Name: DB239
> -- Database Manager Version: DB2 Version 11.1.0
> -- Database Codepage: 1208
>
>
> 
> -- DDL Statements for Table "A90DVDT"."DLR_FAM_MRKTSHR_FY_END"
> 
>
> CREATE TABLE "A90DVDT"."DLR_FAM_MRKTSHR_FY_END"
> (
>  "DEALER_ID"  CHAR(6)  NOT NULL ,
>  "MKTSHR_MONTH"  DATE NOT NULL ,
>  "L12_DP_DLR_IN_AOR"  DECIMAL(15,6) ,
>  "L12_DP_DLR_OUT_AOR"  DECIMAL(15,6) ,
>  "L12_DP_DLR_SHARE"  DECIMAL(8,5) ,
>  "L12_SA_DLR_SHARE"  DECIMAL(8,5) ,
>  "L12_CA_DLR_SHARE"  DECIMAL(8,5) ,
>  "L12_U90_DLR_IN_AOR"  DECIMAL(15,6) ,
>  "L12_U90_DLR_OUT_AOR"  DECIMAL(15,6) ,
>  "L12_U90_DLR_SHARE"  DECIMAL(8,5)
> );
>

I would strongly recommend removing the quotes around the table name and
column names. Otherwise you may have challenges porting the application to
PG, as PG by default folds unquoted object names to lower case unless you
put quotes around them.

So after migration a statement like

Select * from DLR_FAM_MRKTSHR_FY_END;

would fail.
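
A minimal illustration of the case-folding behavior:

CREATE TABLE "DLR_FAM_MRKTSHR_FY_END" (dealer_id char(6));
SELECT * FROM DLR_FAM_MRKTSHR_FY_END;
-- ERROR:  relation "dlr_fam_mrktshr_fy_end" does not exist
SELECT * FROM "DLR_FAM_MRKTSHR_FY_END";  -- works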

Also, about the error: see what is before this CREATE TABLE statement. Make
sure there is a semicolon terminating the statement that precedes the CREATE
TABLE.

How are you running it? psql?


> Thanks,
> Swapnil
>
>
> On Tue, Dec 6, 2016 at 12:23 PM, Julien Rouhaud  > wrote:
>
> On Tue, Dec 06, 2016 at 11:06:12AM +0530, Swapnil Vaze wrote:
> > Hello,
> >
>
> Hello
>
> > We need some help on how we can migrate data from DB2 zOS database to
> > postgres database.
> >
> > Are there any utilities present? Any thoughts how we should approach?
>
> You can use this utility: https://github.com/dalibo/db2topg
>
> The README should provide all needed informations.
>
> --
> Julien Rouhaud
> http://dalibo.com - http://dalibo.org
>
>
>
>
> --
> Thanks & Regards,
> Swapnil Vaze
>
-- 

-- 

Best Regards,

*Sameer Kumar | DB Solution Architect*

*ASHNIK PTE. LTD.*

101 Cecil Street, #11-11 Tong Eng Building, Singapore 069533

T: +65 6438 3504 | www.ashnik.com

Skype: sameer.ashnik |   T: +65 8110 0350



Re: [GENERAL] INSERT - UPDATE throughput oscillating and SSD activity after stopping the client

2016-12-06 Thread Tom DalPozzo
Hi,
about the SSD light:
I guessed it was WAL -> actual db files data traffic. That explains why the
light stops blinking after shutting down the server (I did it via the kill
command). But if so, I expected the light to restart blinking after
restarting the server (in order to continue the WAL->db activity).
Regards





2016-12-05 20:02 GMT+01:00 Jeff Janes :

> On Fri, Dec 2, 2016 at 9:40 AM, Tom DalPozzo  wrote:
>
>> Hi,
>> I've two tables, t1 and t2, both with one bigint id indexed field and one
>> 256 char data field; t1 has always got 1 row, while t2 is increasing as
>> explained in the following.
>>
>> My libpq client continuously updates one row in t1 (every time targeting
>> a different row) and inserts a new row in t2. All this in blocks of 1000
>> update-insert per commit, in order to get better performance.
>> wal_sync_method is fsync, fsync is on; my conf file is attached.
>> I've a 3.8GHz laptop with an EVO SSD.
>>
>> Performance is  measured every two executed blocks and related to these
>> blocks.
>>
>> Over the first few minutes performance is around 10K rows/s, then over the
>> next few minutes it slowly drops to 4K rows/s, then it slowly climbs back
>> up, and so on, like a wave.
>> I don't understand this behaviour. Is it normal? What does it depend on?
>>
>
> Yes, that is normal.  It is also very complicated.  It depends on pretty
> much everything: PostgreSQL, kernel, filesystem, IO controller, firmware,
> hardware, other things going on on the computer simultaneously, etc.
>
>
>>
>> Also, when I stop the client I see the SSD light still heavily working.
>>
>
> This is normal.  It writes out critical data to a WAL log first, and then
> leisurely writes out the changes to the actual data files later.  In the
> case of a crash, the WAL will be used to replay the data file changes which
> may or may not have made it to disk.
>
>> It would last quite a while unless I stop the postgresql server, in this
>> case it suddenly stops.
>>
>
> Do you stop postgresql with fast or immediate shutdown?
>
>
>> If I restart the server it remains off.
>> I'm wondering if it's normal. I'd like to be sure that my data are safe
>> once commited.
>>
>
> If your kernel/fs/SSD doesn't lie about syncing the data, then your data
> is safe once committed. (It is possible there are bugs in PostgreSQL, of
> course, but nothing you report indicates you have found one).
>
> If you really want to be sure that the full stack, from PostgreSQL down to
> the hardware on the SSD, is crash safe, the only real way is to do some
> "pull the plug" tests.
>
> Cheers,
>
> Jeff
>


Re: [GENERAL] Migrating data from DB2 zOS to PostgreSQL

2016-12-06 Thread Marc Cousin
Please file these as issues on GitHub (I'm the author). It will be much
easier to solve your problems.

Regards

On 06/12/2016 11:42, Sameer Kumar wrote:
> 
> 
> On Tue, 6 Dec 2016, 9:27 p.m. Swapnil Vaze,  > wrote:
> 
> 
> Hello Julien,
> 
> We created DDLs from DB2 zOS system and tried to run script in below
> format:
> 
>  ./db2topg.pl  -f sql1.txt -o testdir
> 
> It throws below error:
> 
> I don't understand  > at ./db2topg.pl  line 880,  line 24.
> 
> For testing we used file with below content:
> 
> cat sql1.txt
> -- This CLP file was created using DB2LOOK Version "10.5"
> -- Timestamp: Tue Dec  6 04:14:28 CST 2016
> -- Database Name: DB239
> -- Database Manager Version: DB2 Version 11.1.0
> -- Database Codepage: 1208
> 
> 
> 
> -- DDL Statements for Table "A90DVDT"."DLR_FAM_MRKTSHR_FY_END"
> 
> 
> CREATE TABLE "A90DVDT"."DLR_FAM_MRKTSHR_FY_END"
> (
>  "DEALER_ID"  CHAR(6)  NOT NULL ,
>  "MKTSHR_MONTH"  DATE NOT NULL ,
>  "L12_DP_DLR_IN_AOR"  DECIMAL(15,6) ,
>  "L12_DP_DLR_OUT_AOR"  DECIMAL(15,6) ,
>  "L12_DP_DLR_SHARE"  DECIMAL(8,5) ,
>  "L12_SA_DLR_SHARE"  DECIMAL(8,5) ,
>  "L12_CA_DLR_SHARE"  DECIMAL(8,5) ,
>  "L12_U90_DLR_IN_AOR"  DECIMAL(15,6) ,
>  "L12_U90_DLR_OUT_AOR"  DECIMAL(15,6) ,
>  "L12_U90_DLR_SHARE"  DECIMAL(8,5)
> );
> 
> 
> I will strongly recommend removing the quotes around table name and
> column names. Else you may have challenges porting application to PG. As
> PG by default refers to object names in small case unless you out quotes
> around them.
> 
> So after migration a statement like 
> 
> Select * from DLR_FAM_MRKTSHR_FY_END; 
> 
> would fail.
> 
> Also about the error, see what is before this create table statement.
> Make sure there is a semi colon to terminate the statement before the
> CREATE table.
> 
> How are you running it? psql?
> 
> 
> Thanks,
> Swapnil
> 
> 
> On Tue, Dec 6, 2016 at 12:23 PM, Julien Rouhaud
> mailto:julien.rouh...@dalibo.com>> wrote:
> 
> On Tue, Dec 06, 2016 at 11:06:12AM +0530, Swapnil Vaze wrote:
> > Hello,
> >
> 
> Hello
> 
> > We need some help on how we can migrate data from DB2 zOS database 
> to
> > postgres database.
> >
> > Are there any utilities present? Any thoughts how we should 
> approach?
> 
> You can use this utility: https://github.com/dalibo/db2topg
> 
> The README should provide all needed informations.
> 
> --
> Julien Rouhaud
> http://dalibo.com - http://dalibo.org
> 
> 
> 
> 
> -- 
> Thanks & Regards,
> Swapnil Vaze
> 
> -- 
> 
> -- 
> 
> Best Regards,
> 
> *Sameer Kumar | DB Solution Architect*
> 
> *ASHNIK PTE. LTD.*
> 
> 101 Cecil Street, #11-11 Tong Eng Building, Singapore 069533
> 
> T: +65 6438 3504 | www.ashnik.com 
> 
> Skype: sameer.ashnik |   T: +65 8110 0350
> 
> 
> www.ashnik.com
> 



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Index size

2016-12-06 Thread
Samuel Williams  wrote:
> So, uh, my main question was, does MySQL add null values to an index, and is 
> this different from Postgres...

Samuel,

A quick Google search says that MySQL does index NULLs. Ask a MySQL group to
get a more definitive answer.

More relevant to your original question, I'll go out on a limb as I struggle to 
recall a fuzzy memory.

The difference between MySQL and PostgreSQL is fundamental architecture, so yes
the index creation will be very different, as others have said. IIRC (and I may
not be), MySQL stores where a row is on the disk via the PK index. That means
that secondary indexes point to the proper row in the PK index, which does mean
that when you use a secondary index to get data there is a double lookup. They
claim that's faster for updates and other stuff because a change to a row only
requires one index to be changed.

PostgreSQL stores the direct disk location in each index, which slows down
updates a little, but makes selects faster (and I find this really amusing
because so many people say MySQL is great because it's fast at reads, yet
architecturally PG is faster). If I'm wrong here, I'm sure I'll be corrected. :)

So you can see that MySQL indexes should be smaller than PG indexes because of
what they carry. Personally, I think the diff is small enough that I'm not going
to worry about it, but math is such that some numbers are smaller than others. :)
So that should explain what you're seeing.

My opinion is that you shouldn't worry about the index size. Which DB does what
you want the best? That obviously depends on what your needs are, but after
using both MySQL and PG, I'll take PG whenever possible, thank you. MySQL has
gotten better over the last 5-8 years, but there are still many pits of
quicksand ready to swallow you up there that PG doesn't have. If you know where
those pits are and/or don't travel into that part of the jungle, great for you;
personally, I prefer to get the job done without having to watch where I'm
walking. ;)

HTH,
Kevin


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Select works only when connected from login postgres

2016-12-06 Thread Joseph Brenner
Looking back on the order of events, I think it went like this:

Back in around May, I was annoyed at pager behavior and wanted to get
rid of it -- I tried a blank PAGER setting in my .bashrc (and forgot
about it).

I also noticed the psql option "\pset pager off".  For no particular
reason, it didn't occur to me to just put that in a .psqlrc file, and
instead I had it in a command-line alias:

  alias psql="psql --pset='pager=off'"

Then more recently I started experimenting with a new build of 9.6.1:

The behavior at that stage looked like a difference between 9.4 (run
with a bare "psql") and 9.6 (run using a path to the new build).

Then later, as I was experimenting with multiple postgres
installations, I got more careful about using paths to make sure I
knew which one I was running.  That cut the alias out of the picture,
and the .bashrc setting took over:

The behavior at that stage looked like a difference between user login
'doom' and 'postgres'.

I kept chasing after obscure file permissions settings or postgres
internal permissions (maybe there was something about the new row
level security features?), and so on.  The folks here kept assuming it
had to be some kind of version skew cross-talk between the different
installations.

Something I'd thought of vaguely (and wish I'd tried) was just
creating a new user and trying to reproduce the behavior in a clean
account set-up (there are always options to suppress rc files for
testing, but suppressing .bashrc isn't that simple).

The next thing on the list for me was to remove everything except the
new 9.6.1 pgdb binary install: by itself that wouldn't have gotten any
closer to isolating the problem, but it would've eliminated some
distractions.

Oddly enough, if I'd been reading up-to-date pg docs, I might not have
tried the .psqlrc setup that pointed the finger at the pager
situation: that was just something I was doing on the side, and I
didn't think it'd have any effect on a --command invocation, so I
didn't worry about it.

So yeah, some better messaging when PAGER is mangled wouldn't hurt, if
that's possible.  Falling back to "pager off" would make sense to me.

On Mon, Dec 5, 2016 at 9:28 PM, Joseph Brenner  wrote:
> Well yeah, trying to run a PAGER that's not there might throw an error.
> Or you know, nothing in PAGER might imply "pager off".
>
>> I find it a bit odd that all of your queries were using the pager...did I 
>> miss where you reported that setting?
>
> I didn't report it because I wasn't looking in that direction.   A
> PAGER set to blank for login 'doom' and no PAGER setting for login
> 'postgres' explains much of what I was seeing, I think: selects run as
>  'doom' tended to be blank (unless I had a pset no pager somewhere),
> selects run as 'postgres' always worked.
>
>
>
>
>
>
> On Mon, Dec 5, 2016 at 9:03 PM, David G. Johnston
>  wrote:
>> On Mon, Dec 5, 2016 at 9:53 PM, Joseph Brenner  wrote:
>>>
>>> And I guess I did that intentionally, my .bashrc has
>>>
>>>   # I use emacs shells, I got a "pager" already:
>>>   export PAGER=''
>>>
>>
>> PAGER= psql --pset=pager=always -c 'select 1;'
>> 
>>
>> Remove PAGER= and I'm good.
>>
>> I guess that psql could be a bit more helpful by reporting something to
>> stderr if the value of PAGER is not an executable (platform dependent...)
>>
>> I find it a bit odd that all of your queries were using the pager...did I
>> miss where you reported that setting?
>>
>> David J.
>>


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] INSERT - UPDATE throughput oscillating and SSD activity after stopping the client

2016-12-06 Thread Jeff Janes
On Tue, Dec 6, 2016 at 2:44 AM, Tom DalPozzo  wrote:

> Hi,
> about SSD light:
>


> I guessed it was WAL -> actual db files data traffic. It explains why the
> light stops blinking after shutting down the server (I did it via kill
> command) .
>

Do you kill with -15 (the default) or -9?  And which process, the postgres
master itself or just some random child?


> But if so, I expected the light to restart blinking after restarting
> the server (in order to continue WAL->db activity).
>

The normal checkpoint is paced.  So trickling out data slowly will keep the
light on, but not actually stress the system.

When you shutdown the system, it does a fast checkpoint.  This gets the
data written out as quickly as possible (since you are shutting down, it
doesn't worry about interfering with performance for other users, as there
are none), so once it is done you don't see the light anymore.  If you do a
clean shutdown (kill -15 of the postgres master) this fast checkpoint
happens upon shutdown.  If you do an abort (kill -9) then the fast
checkpoint happens upon start-up, once recovery is finished but before the
database is opened for regular use.
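
Incidentally, you can trigger the same non-paced behavior by hand; a manual
checkpoint is always an immediate one:

CHECKPOINT;   -- flushes dirty data to the data files as fast as possible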

 Cheers,

Jeff


Re: [GENERAL] Extensions and privileges in public schema

2016-12-06 Thread Lee Hachadoorian
On Sun, Dec 4, 2016 at 4:24 PM, Paul Ramsey 
wrote:

> When you create the student user, remove their create privs in public.
> Then create a scratch schema and grant them privs there.
> Finally, alter the student user so that the scratch schema appears FIRST
> in their search path. This will cause unqualified CREATE statements to
> create in the scratch schema.
> For full separation, give each student their own login and set the search
> path to
>
> "$user", public
>
> That way each student gets their own private scratch area, and it is used
> by default for their creates.
>
> P
>
>

Paul,

I've been avoiding giving each student an individual login role, but it
might be worth it to consider for a future term.

I've followed your (and Charles') advice to:

REVOKE CREATE ON SCHEMA public FROM public;
ALTER ROLE gus_faculty
  SET search_path = scratch,public,tiger;

It also occurred to me that I don't want anyone changing data in
spatial_ref_sys. I think I should revoke everything *except* SELECT and
REFERENCES, and make this the default for new objects created in public
schema:

ALTER DEFAULT PRIVILEGES IN SCHEMA scratch
REVOKE INSERT, UPDATE, DELETE, TRUNCATE, TRIGGER ON TABLES
FROM public;
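
For the existing spatial_ref_sys table itself, I assume I also need a direct
REVOKE/GRANT, since ALTER DEFAULT PRIVILEGES only affects tables created
later -- something like:

REVOKE ALL ON public.spatial_ref_sys FROM public;
GRANT SELECT, REFERENCES ON public.spatial_ref_sys TO public;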

Please let me know if this is inadvisable or violates accepted practice.

Best,
--Lee


Re: [GENERAL] INSERT - UPDATE throughput oscillating and SSD activity after stopping the client

2016-12-06 Thread Tom DalPozzo
Hi,
I did: pkill -x postgres
so it should send SIGTERM.
Regards
Pupillo

​​


[GENERAL] Re: [GENERAL] Does PostgreSQL support BIM(Building Information Modeling) storage?

2016-12-06 Thread Rader, David
You should ask on the PostGIS list - they will be closer to this:

https://lists.osgeo.org/mailman/listinfo/postgis-users


--
David Rader
dav...@openscg.com

On Mon, Dec 5, 2016 at 9:05 PM, sunpeng  wrote:

> Does PostgreSQL support BIM (Building Information Modeling) storage? Or how?
> I could only find a little information:
> http://repository.tudelft.nl/islandora/object/uuid:dcb7fc18-208c-4e3b-bc2a-d24a2346d44b?collection=research
> Are there any other articles/books/software recommended?
> Thanks!
> peng
>


Re: [GENERAL] Select works only when connected from login postgres

2016-12-06 Thread Tom Lane
Joseph Brenner  writes:
> Looking back on the order of events, I think it went like this:
> [ careful postmortem ]

Thanks for following up!

> So yeah, some better messaging when PAGER is mangled wouldn't hurt, if
> that's possible.  Falling back to "pager off" would make sense to me.

Agreed.  One thing that would be very simple is to treat an empty PAGER
value the same as "unset".  Detecting whether a nonempty value is behaving
sanely seems a great deal harder; depending on what pager you're using
and how you stop it, nonzero exit codes from the called process might
be normal.  I think it might be practical to issue a warning if we get
an exit code of 126 or 127, though.  We have a comment in archive-command
invocation:

 * Per the Single Unix Spec, shells report exit status > 128 when a called
 * command died on a signal.  Also, 126 and 127 are used to report
 * problems such as an unfindable command; treat those as fatal errors
 * too.

The relevant text in POSIX is

If a command is not found, the exit status shall be 127. If the
command name is found, but it is not an executable utility, the
exit status shall be 126. Applications that invoke utilities
without using the shell should use these exit status values to
report similar errors.

I don't believe we want to complain about exit on a signal, because
SIGTERM or SIGINT is a typical exit in some pager setups.  But these
two codes strongly suggest something broken about your PAGER value.

So I propose
(1) ignore PAGER if it's an empty string
(2) if pclose returns exit code 126 or 127, report that the PAGER
command didn't work.  I'm not sure how complex that is, because IIRC
the pclose is at some remove from the popen call, but if it's not
unreasonably hairy we should do it.

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] storing C binary array in bytea via libpq

2016-12-06 Thread Tom DalPozzo
Hi,
I've a table in which a field is BYTEA, as I need to store around 200 raw
bytes in this field.
I need to perform many, many INSERTs starting from a common C array and, in
order to get good performance, I want to do many of them in a single
BEGIN/COMMIT block.
What is the best choice from libpq?
PQexec needs to have the bytes encoded into a string. This expansion lowers
the performance (I tried with the \x format, not with the other option yet).
With PQexecParams, can I send a raw array of bytes as a bytea parameter?
And also, as PQexecParams can't accept multiple commands, if I enclose many
PQexecParams calls between a PQexec("BEGIN;")  and PQexec("COMMIT") would
it work as I wish?
Other options?
Regards
Pupillo


Re: [GENERAL] storing C binary array in bytea via libpq

2016-12-06 Thread Tom Lane
Tom DalPozzo  writes:
> I've a table in which a field is BYTEA, as I need to store around 200 raw
> bytes in this field.
> I need to perform many many INSERT  starting from a common C array  and, in
> order to get good performance, I want to do many of them in a single BEGIN
> COMMIT block.
> What is the best choice from libpq?
> PQexec needs to have the bytes encoded into a string. This expansion lowers
>  the performance (I tried with \x format, not with the other option yet).
> With PQexecParams, can I send a raw array of bytes as a bytea parameter?

Sure.  Specify binary format for that parameter.
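
A minimal sketch (table and function names are placeholders, assuming an
open connection "conn" and a table t2 with a bytea column "data"; error
handling trimmed):

    #include <stddef.h>
    #include <libpq-fe.h>

    /* Insert one row, passing a raw C byte array as a binary-format
     * bytea parameter -- no hex/escape encoding of the data needed. */
    static int
    insert_blob(PGconn *conn, const unsigned char *buf, size_t len)
    {
        const char *values[1]  = { (const char *) buf };
        int         lengths[1] = { (int) len };
        int         formats[1] = { 1 };   /* 1 = binary format */

        PGresult *res = PQexecParams(conn,
                                     "INSERT INTO t2 (data) VALUES ($1)",
                                     1,     /* one parameter */
                                     NULL,  /* infer type from target column */
                                     values, lengths, formats,
                                     0);    /* text-format results */
        int ok = (PQresultStatus(res) == PGRES_COMMAND_OK);
        PQclear(res);
        return ok;
    }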

> And also, as PQexecParams can't accept multi commands, if I enclose many
> PQexecParams calls between a PQexec("BEGIN;")  and PQexec("COMMIT") would
> it work as I wish?

Well, it'll be faster than committing them separately, but have you
considered bundling this up into a single INSERT with multiple VALUES
rows?  It'd be a bit tedious to manage by hand, but if the command is
being constructed by a program anyway, it shouldn't be much harder
than separate INSERTs.  Much of the time in this is going to go into
parsing and network round-trip overhead, so one statement is going
to handily beat N statements whether they're in a transaction block
or not.
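
That is, something along these lines (a sketch; with PQexecParams you would
pass nParams = 3 and parallel values/lengths/formats arrays):

INSERT INTO t2 (data) VALUES ($1), ($2), ($3);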

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] PDF files: to store in database or not

2016-12-06 Thread Rich Shepard

  With no experience of storing binary data in a bytea column I don't know
when its use is appropriate. I suspect that for an application I'm
developing it would be better to store row-related documents outside the
database, and I want to learn whether that is the appropriate approach.

  Consider an application that manages a fleet of vehicles. There's a
Vehicles table with information on each one (perhaps make, model, VIN, year
of purchase) and a Services table. There are many PDF documents associated
with each row in the tables: purchase contract, insurance form, service and
maintenance records, etc.

  My thinking is to not store these documents in the database, but to store
them in subdirectories outside the database.

  Your thoughts?

Rich



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PDF files: to store in database or not

2016-12-06 Thread Joshua D. Drake

On 12/06/2016 10:30 AM, Rich Shepard wrote:


  My thinking is to not store these documents in the database, but to store
them in subdirectories outside the database.

  Your thoughts?


Due to the widely variable size of a PDF document, I would say no. I 
would store the metadata and file location.


Sincerely,

JD




Rich






--
Command Prompt, Inc.  http://the.postgres.company/
+1-503-667-4564
PostgreSQL Centered full stack support, consulting and development.
Everyone appreciates your honesty, until you are honest with them.
Unless otherwise stated, opinions are my own.


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] storing C binary array in bytea via libpq

2016-12-06 Thread Tom Lane
[ please keep the list cc'd ]

Tom DalPozzo  writes:
> To be honest, I didn't know or I forgot about multiple VALUES in one
> command! Thanks for reminding!
> As for the PQexecParams, should I specify something in  const Oid
> *paramTypes parameter? Or just something like $1::bytea?

You can do it either way.  Hard-wiring the type OID will be a bit faster
than making the server parse a ton of repetitive cast constructs, but on
the other hand it means you have a magic number in your program.  It's
unlikely the OID assigned to bytea would ever change, but the technique
doesn't scale well to user-defined types.  Take your choice.
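
Concretely, the two variants look something like this (17 being bytea's OID
in pg_type):

    Oid paramTypes[1] = { 17 };   /* BYTEAOID, hard-wired */
    PQexecParams(conn, "INSERT INTO t2 (data) VALUES ($1)",
                 1, paramTypes, values, lengths, formats, 0);

versus passing NULL for paramTypes and writing "VALUES ($1::bytea)" in the
command text.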

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PDF files: to store in database or not

2016-12-06 Thread Rich Shepard

On Tue, 6 Dec 2016, Joshua D. Drake wrote:


Due to the widely variable size of a PDF document, I would say no. I would
store the metadata and file location.


Joshua,

  I read your answer as "don't store them in the database, but store the
location in a column."

Thanks for confirming,

Rich


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Migrating data from DB2 zOS to PostgreSQL

2016-12-06 Thread bend
We have recently done something similar - converting Informix zLinux to
Postgres x86_64.

Short answer - we used unloads to put out CSV files, wrote some Python
programs to clean up the data, then imported them into our Postgres
instance ...

Ben Duncan - Business Network Solutions, Inc.
336 Elton Road, Jackson MS, 39212
"Never attribute to malice, that which can be adequately explained by
stupidity" - Hanlon's Razor


 Original Message 
Subject: [GENERAL] Migrating data from DB2 zOS to PostgreSQL
From: Swapnil Vaze 
Date: Mon, December 05, 2016 11:36 pm
To: pgsql-general@postgresql.org

Hello,

We need some help on how we can migrate data from a DB2 zOS database to a
postgres database.

Are there any utilities present? Any thoughts how we should approach?

--
Thanks & Regards,
Swapnil Vaze





Re: [GENERAL] Select works only when connected from login postgres

2016-12-06 Thread Joseph Brenner
> Agreed.  One thing that would be very simple is to treat an empty PAGER
> value the same as "unset".

Sounds excellent.

> Detecting whether a nonempty value is behaving
> sanely seems a great deal harder ...

I was thinking a check for existence and executability, but I guess
that's covered already...  if you use a random string as PAGER you get
a sh error:

  export PAGER="nadatech"
  /usr/lib/postgresql/9.6/bin/psql --no-psqlrc --dbname=doom
--username=doom -p 5434 --host=/var/run/postgresql --command="SELECT
'hello' AS world;"

  sh: 1: nadatech: not found

So the empty PAGER value case is the only one that doesn't seem
covered already.  (I'm talented about finding these things...)






On Tue, Dec 6, 2016 at 9:51 AM, Tom Lane  wrote:
> Joseph Brenner  writes:
>> Looking back on the order of events, I think it went like this:
>> [ careful postmortem ]
>
> Thanks for following up!
>
>> So yeah, some better messaging when PAGER is mangled wouldn't hurt, if
>> that's possible.  Falling back to "pager off" would make sense to me.
>
> Agreed.  One thing that would be very simple is to treat an empty PAGER
> value the same as "unset".  Detecting whether a nonempty value is behaving
> sanely seems a great deal harder; depending on what pager you're using
> and how you stop it, nonzero exit codes from the called process might
> be normal.  I think it might be practical to issue a warning if we get
> an exit code of 126 or 127, though.  We have a comment in archive-command
> invocation:
>
>  * Per the Single Unix Spec, shells report exit status > 128 when a called
>  * command died on a signal.  Also, 126 and 127 are used to report
>  * problems such as an unfindable command; treat those as fatal errors
>  * too.
>
> The relevant text in POSIX is
>
> If a command is not found, the exit status shall be 127. If the
> command name is found, but it is not an executable utility, the
> exit status shall be 126. Applications that invoke utilities
> without using the shell should use these exit status values to
> report similar errors.
>
> I don't believe we want to complain about exit on a signal, because
> SIGTERM or SIGINT is a typical exit in some pager setups.  But these
> two codes strongly suggest something broken about your PAGER value.
>
> So I propose
> (1) ignore PAGER if it's an empty string
> (2) if pclose returns exit code 126 or 127, report that the PAGER
> command didn't work.  I'm not sure how complex that is, because IIRC
> the pclose is at some remove from the popen call, but if it's not
> unreasonably hairy we should do it.
>
> regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PDF files: to store in database or not

2016-12-06 Thread Moreno Andreo

... but what if the database is replicated?

Thanks
Moreno.

Il 06/12/2016 19:50, Rich Shepard ha scritto:

On Tue, 6 Dec 2016, Joshua D. Drake wrote:

Due to the widely variable size of a PDF document, I would say no. I would
store the metadata and file location.


Joshua,

  I read your answer as "don't store them in the database, but store the
location in a column."

Thanks for confirming,

Rich







--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PDF files: to store in database or not

2016-12-06 Thread Eric Schwarzenbach

On 12/06/2016 01:30 PM, Rich Shepard wrote:
  With no experience of storing binary data in a bytea column I don't know
when its use is appropriate. I suspect that for an application I'm
developing it would be better to store row-related documents outside the
database, and want to learn if that is the appropriate approach.

  Consider an application that manages a fleet of vehicles. There's a
Vehicles table with information on each one (perhaps make, model, VIN, year
of purchase) and a Services table. There are many PDF documents associated
with each row in the tables: purchase contract, insurance form, service and
maintenance records, etc.

  My thinking is to not store these documents in the database, but to store
them in subdirectories outside the database.

  Your thoughts?

Rich
I'd also be interested in answers to this that give specific pros and
cons, and not in terms of "it's better to do this than that." What's
"better" depends on how much you value the various pros and cons.


One of the pros of keeping them in the database is the ease of protecting
adds and updates to the files and their related data with a transaction,
and being able to have a system where it is pretty much impossible for
the documents to ever be out of sync with the related data.


I maintain some systems that do keep the documents outside of the 
database, and the application code maintains the transactional integrity 
of the files and data, and for the most part we don't have integrity 
problems. In the worst of an add or update operation being interrupted 
by a system crash or unexpected error, we have a new document saved but 
the data about this document has not been written to the database and it 
is as if that operation never happened. The file may really be there but 
the system does not "know about it." This works even for updates because 
our system versions documents and the old version is not written over, 
there is simply a new version that the system never "knows" about. 
Without versioning this would be more of a problem, and you would 
probably need to protect yourself with code that does something like 
temporarily keeping the last version of a file during an update and 
switching over the metadata to reference the new document only at the 
very last operation in the transaction.


We also have the potential of the database not matching the file store 
when a system is migrated or "cloned." We are very careful about this, 
but we've at least once had a case where a client's IT department screwed it 
up, and got a mismatched system to which they started writing new data. 
Luckily this was a test or staging system and no production data was lost.


I've often wondered if we'd have been better off storing the files in 
the database. This design decision was made some years ago, and our 
concerns around this had to do with performance, but I don't know that 
we had any real data that this should have been a concern, and I suspect 
you could ameliorate if not eliminate this as an issue by careful 
design. I'd love to hear this idea confirmed or debunked by someone who 
has more expertise (and ideally, done actual testing).


Cheers,

Eric


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PDF files: to store in database or not

2016-12-06 Thread Eric Schwarzenbach

On 12/06/2016 01:34 PM, Joshua D. Drake wrote:

On 12/06/2016 10:30 AM, Rich Shepard wrote:

  My thinking is to not store these documents in the database, but to store
them in subdirectories outside the database.

  Your thoughts?


Due to the widely variable size of a PDF document, I would say no. I 
would store the metadata and file location.



Can you elaborate on this? Why is the variable size an issue? Are you 
assuming the files go into the same table as the rest of the data? (They 
certainly don't have to, and I would assume that not to be the smartest 
design.)



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PDF files: to store in database or not

2016-12-06 Thread David Wall

On 12/6/16 11:12 AM, Eric Schwarzenbach wrote:

On 12/06/2016 01:34 PM, Joshua D. Drake wrote:

On 12/06/2016 10:30 AM, Rich Shepard wrote:

  My thinking is to not store these documents in the database, but to store
them in subdirectories outside the database.

  Your thoughts?


Due to the widely variable size of a PDF document, I would say no. I 
would store the metadata and file location.



Can you elaborate on this? Why is the variable size an issue? Are you 
assuming the files go into the same table as the rest of the data? 
(They certainly don't have to, and I would assume that not to be the 
smartest design.)



The advantage of storing in the database is that a DB backup will have
everything, instead of a DB backup plus a file system backup. Using a
BLOB, you can certainly keep track of variable-length PDFs.


Also, if in the database, it can be part of a transaction so you will 
not have any issues keeping the DB and filesystem in sync.


David


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PDF files: to store in database or not

2016-12-06 Thread John R Pierce

On 12/6/2016 11:21 AM, David Wall wrote:


The advantages of storing in the database is that a DB backup will 
have everything, instead of a DB backup and a file system backup. 
Using a BLOB, you can certainly keep track of variable length PDFs. 


and one of the disadvantages of storing in the database is that those db
backups become much larger if there's a lot of this file data.



--
john r pierce, recycling bits in santa cruz



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PDF files: to store in database or not

2016-12-06 Thread Joshua D. Drake

On 12/06/2016 11:12 AM, Eric Schwarzenbach wrote:

On 12/06/2016 01:34 PM, Joshua D. Drake wrote:

On 12/06/2016 10:30 AM, Rich Shepard wrote:


  My thinking is to not store these documents in the database, but to
store
them in subdirectories outside the database.

  Your thoughts?


Due to the widely variable size of a PDF document, I would say no. I
would store the metadata and file location.



Can you elaborate on this? Why is the variable size an issue?


Because it will use at least that size in memory to deliver the document
to you. Consider a 100MB PDF (not at all uncommon); now imagine 40
connections requesting that PDF.



Sincerely,

JD


--
Command Prompt, Inc.  http://the.postgres.company/
+1-503-667-4564
PostgreSQL Centered full stack support, consulting and development.
Everyone appreciates your honesty, until you are honest with them.
Unless otherwise stated, opinions are my own.


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PDF files: to store in database or not

2016-12-06 Thread Joshua D. Drake

On 12/06/2016 11:09 AM, Moreno Andreo wrote:

... but what if database is replicated?


Use a network mounted filesystem (or replicated filesystem).

JD



--
Command Prompt, Inc.  http://the.postgres.company/
+1-503-667-4564
PostgreSQL Centered full stack support, consulting and development.
Everyone appreciates your honesty, until you are honest with them.
Unless otherwise stated, opinions are my own.


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PDF files: to store in database or not

2016-12-06 Thread Joshua D. Drake

On 12/06/2016 10:50 AM, Rich Shepard wrote:

On Tue, 6 Dec 2016, Joshua D. Drake wrote:


Due to the widely variable size of a PDF document, I would say no. I
would
store the metadata and file location.


Joshua,

  I read your answer as "don't store them in the database, but store the
location in a column."


Correct.



Thanks for confirming,

Rich





--
Command Prompt, Inc.  http://the.postgres.company/
+1-503-667-4564
PostgreSQL Centered full stack support, consulting and development.
Everyone appreciates your honesty, until you are honest with them.
Unless otherwise stated, opinions are my own.


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PDF files: to store in database or not

2016-12-06 Thread Joshua D. Drake

On 12/06/2016 11:21 AM, David Wall wrote:


The advantages of storing in the database is that a DB backup will have
everything, instead of a DB backup and a file system backup. Using a
BLOB, you can certainly keep track of variable length PDFs.


This is true but also not necessarily an advantage. Your backups will be 
larger, unless you are using logical backups and omitting certain tables.


Your resource utilization will be higher (memory, CPU etc...) to pull 
and decode the binary.


JD




--
Command Prompt, Inc.  http://the.postgres.company/
+1-503-667-4564
PostgreSQL Centered full stack support, consulting and development.
Everyone appreciates your honesty, until you are honest with them.
Unless otherwise stated, opinions are my own.


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PDF files: to store in database or not

2016-12-06 Thread Rich Shepard

On Tue, 6 Dec 2016, David Wall wrote:


The advantages of storing in the database is that a DB backup will have
everything, instead of a DB backup and a file system backup. Using a BLOB,
you can certainly keep track of variable length PDFs.


David,

  I did not realize that a BLOB is not the same as a bytea (page 217 of the
9.6 PDF manual), and I cannot find BLOB as a postgres data type. Please
point me in the right direction to learn how to store PDFs as BLOBs.

Thanks,

Rich


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Select works only when connected from login postgres

2016-12-06 Thread Tom Lane
Joseph Brenner  writes:
>> Agreed.  One thing that would be very simple is to treat an empty PAGER
>> value the same as "unset".

> Sounds excellent.

Actually, after thinking about it a bit longer, should PAGER-set-but-
empty be treated as equivalent to "pager off", rather than as a request
to use the default pager?  I could see arguments either way for that.

>> Detecting whether a nonempty value is behaving
>> sanely seems a great deal harder ...

> I was thinking a check for existence and executability, but I guess
> that's covered already...  if you use a random string as PAGER you get
> a sh error:

>   export PAGER="nadatech"
>   /usr/lib/postgresql/9.6/bin/psql --no-psqlrc --dbname=doom
> --username=doom -p 5434 --host=/var/run/postgresql --command="SELECT
> 'hello' AS world;"

>   sh: 1: nadatech: not found

Hm, so you do; so my thought that this needs explicit code on our end
seems wrong.  [ experiments... ]  It seems like the specific case of
PAGER being empty or all-white-space causes the shell to think that
it's executing an empty line and just do nothing (in particular, not
print any error).  pclose then returns EPIPE, at least on my platform,
which we could report but it doesn't seem like a very useful report.
Any other case seems to provoke a shell complaint that's probably
sufficient for diagnosis.

So what I'm thinking now is that if PAGER is empty or all white space
then we should not try to use it as a shell command; we can either
treat the case as "pager off" or as "use default pager".  Everything
else we can leave to the invoked shell to complain about.

Comments?

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PDF files: to store in database or not

2016-12-06 Thread Eric Schwarzenbach

On 12/06/2016 02:40 PM, Joshua D. Drake wrote:

On 12/06/2016 11:12 AM, Eric Schwarzenbach wrote:

On 12/06/2016 01:34 PM, Joshua D. Drake wrote:

On 12/06/2016 10:30 AM, Rich Shepard wrote:


  My thinking is to not store these documents in the database, but to
store
them in subdirectories outside the database.

  Your thoughts?


Due to the widely variable size of a PDF document, I would say no. I
would store the metadata and file location.



Can you elaborate on this? Why is the variable size an issue?


Because it will use at least that size in memory to deliver the 
document to you. Consider a 100MB PDF (not at all uncommon), now 
imagine 40 connections requesting that PDF.


Are you sure the whole thing necessarily gets pulled into memory? JDBC 
and ODBC support streaming on their BLOB interfaces and isn't the whole 
point of this that an application can stream large files a chunk at a 
time, the same way it would from the file system? Of course if the db 
engine always pulls the whole thing into memory to work with it 
regardless of the API, that's another thing, but that wouldn't seem like 
a very good design, and I have more faith in the PostgreSQL developers 
than that...but I'd certainly like to know for sure.


Cheers,

Eric




--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PDF files: to store in database or not

2016-12-06 Thread John R Pierce

On 12/6/2016 12:10 PM, Rich Shepard wrote:
  I did not realize that a BLOB is not the same as a bytea (page 217 
of the

9.6 PDF manual), and I cannot find BLOB as a postgres data type. Please
point me in the right direction to learn how to store PDFs as BLOBs.


Indeed, BYTEA is postgres's type for storing arbitrary binary objects
that are called BLOBs in certain other databases.



--
john r pierce, recycling bits in santa cruz



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Select works only when connected from login postgres

2016-12-06 Thread David G. Johnston
On Tue, Dec 6, 2016 at 1:13 PM, Tom Lane  wrote:

> So what I'm thinking now is that if PAGER is empty or all white space
> then we should not try to use it as a shell command; we can either
> treat the case as "pager off" or as "use default pager".  Everything
> else we can leave to the invoked shell to complain about.
>

​My pick would be to leave \pset pager to control the on/off nature of the
pager and treat both an unset and an empty string PAGER identically and
solely for the purpose of choosing which pager to use should one be
required.

David J.


Re: [GENERAL] PDF files: to store in database or not

2016-12-06 Thread Tom Lane
John R Pierce  writes:
> On 12/6/2016 12:10 PM, Rich Shepard wrote:
>> I did not realize that a BLOB is not the same as a bytea (page 217 
>> of the
>> 9.6 PDF manual), and I cannot find BLOB as a postgres data type. Please
>> point me in the right direction to learn how to store PDFs as BLOBs.

> indeed BYTEA is postgres's type for storing arbitrary binary objects 
> that are called BLOB in certain other databases.

Well, there are also "large objects", which aren't really a data type at
all.  If you're storing stuff large enough that you need to write/read
it in chunks rather than all at once, the large-object APIs are what
you want.
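
For reference, a minimal sketch of the client-side calls (names are
placeholders; large-object operations must run inside a transaction, and
error handling is trimmed):

    #include <libpq-fe.h>
    #include <libpq/libpq-fs.h>   /* INV_READ / INV_WRITE */

    /* Store a buffer as a new large object and return its OID, which
     * you would then keep in an oid column of your own table. */
    Oid
    store_doc(PGconn *conn, const char *buf, size_t len)
    {
        PQclear(PQexec(conn, "BEGIN"));
        Oid oid = lo_creat(conn, INV_READ | INV_WRITE);
        int fd  = lo_open(conn, oid, INV_WRITE);
        lo_write(conn, fd, buf, len);    /* loop in chunks for big files */
        lo_close(conn, fd);
        PQclear(PQexec(conn, "COMMIT"));
        return oid;
    }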

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PDF files: to store in database or not

2016-12-06 Thread Rich Shepard

On Tue, 6 Dec 2016, John R Pierce wrote:


indeed BYTEA is postgres's type for storing arbitrary binary objects that
are called BLOB in certain other databases.


John,

  I thought so.

Thanks,

Rich


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PDF files: to store in database or not

2016-12-06 Thread David Wall

On 12/6/16 12:33 PM, Tom Lane wrote:

John R Pierce  writes:

On 12/6/2016 12:10 PM, Rich Shepard wrote:

I did not realize that a BLOB is not the same as a bytea (page 217 of the
9.6 PDF manual), and I cannot find BLOB as a postgres data type. Please
point me in the right direction to learn how to store PDFs as BLOBs.

indeed BYTEA is postgres's type for storing arbitrary binary objects
that are called BLOB in certain other databases.

Well, there are also "large objects", which aren't really a data type at
all.  If you're storing stuff large enough that you need to write/read
it in chunks rather than all at once, the large-object APIs are what
you want.

regards, tom lane



Yeah, we've not used much BYTEA, but use PG's large objects.  It also 
has a streaming API and you don't have to encode/decode every byte going 
in and out of the DB.


In a table, you just define the "blob_data" column as an OID.  Since we 
use Java/JDBC, this is handled by ResultSet.getBlob() for a 
java.sql.Blob object.


Some complain about DB backups being bigger if the PDFs are inside,
which is true, but that presumes you don't care about the filesystem
PDFs being backed up separately (and there is no way to ensure a
reliable DB backup and PDF filesystem backup if the system is active
when doing the backups).  You can certainly put the files in a
filesystem and point to them, but you'll likely need some access control
or people will be able to download any/all PDFs in a given folder.  In
the DB, you surely will have access control, as I presume you don't
allow browser access to the DB.


Either way, you may want to see if your PDFs compress well or not as 
that may save some storage space at the cost of compress/decompress on 
accesses.


David


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Select works only when connected from login postgres

2016-12-06 Thread Joseph Brenner
Well, my take would be that if you've taken the trouble to set an
empty string as the PAGER that means something, and it probably means
you don't want any pager to be used.

But then, I would say that.


On Tue, Dec 6, 2016 at 12:13 PM, Tom Lane  wrote:
> Joseph Brenner  writes:
>>> Agreed.  One thing that would be very simple is to treat an empty PAGER
>>> value the same as "unset".
>
>> Sounds excellent.
>
> Actually, after thinking about it a bit longer, should PAGER-set-but-
> empty be treated as equivalent to "pager off", rather than as a request
> to use the default pager?  I could see arguments either way for that.
>
>>> Detecting whether a nonempty value is behaving
>>> sanely seems a great deal harder ...
>
>> I was thinking a check for existence and executability, but I guess
>> that's covered already...  if you use a random string as PAGER you get
>> a sh error:
>
>>   export PAGER="nadatech"
>>   /usr/lib/postgresql/9.6/bin/psql --no-psqlrc --dbname=doom
>> --username=doom -p 5434 --host=/var/run/postgresql --command="SELECT
>> 'hello' AS world;"
>
>>   sh: 1: nadatech: not found
>
> Hm, so you do; so my thought that this needs explicit code on our end
> seems wrong.  [ experiments... ]  It seems like the specific case of
> PAGER being empty or all-white-space causes the shell to think that
> it's executing an empty line and just do nothing (in particular, not
> print any error).  pclose then returns EPIPE, at least on my platform,
> which we could report but it doesn't seem like a very useful report.
> Any other case seems to provoke a shell complaint that's probably
> sufficient for diagnosis.
>
> So what I'm thinking now is that if PAGER is empty or all white space
> then we should not try to use it as a shell command; we can either
> treat the case as "pager off" or as "use default pager".  Everything
> else we can leave to the invoked shell to complain about.
>
> Comments?
>
> regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Select works only when connected from login postgres

2016-12-06 Thread Joseph Brenner
But on the other hand, if you've got a blank PAGER envar and a "\pset
pager something", the pset should win (just as it does now).


On Tue, Dec 6, 2016 at 1:53 PM, Joseph Brenner  wrote:
> Well, my take would be that if you've taken the trouble to set an
> empty string as the PAGER that means something, and it probably means
> you don't want any pager to be used.
>
> But then, I would say that.
>
>
> On Tue, Dec 6, 2016 at 12:13 PM, Tom Lane  wrote:
>> Joseph Brenner  writes:
 Agreed.  One thing that would be very simple is to treat an empty PAGER
 value the same as "unset".
>>
>>> Sounds excellent.
>>
>> Actually, after thinking about it a bit longer, should PAGER-set-but-
>> empty be treated as equivalent to "pager off", rather than as a request
>> to use the default pager?  I could see arguments either way for that.
>>
 Detecting whether a nonempty value is behaving
 sanely seems a great deal harder ...
>>
>>> I was thinking a check for existence and executability, but I guess
>>> that's covered already...  if you use a random string as PAGER you get
>>> a sh error:
>>
>>>   export PAGER="nadatech"
>>>   /usr/lib/postgresql/9.6/bin/psql --no-psqlrc --dbname=doom
>>> --username=doom -p 5434 --host=/var/run/postgresql --command="SELECT
>>> 'hello' AS world;"
>>
>>>   sh: 1: nadatech: not found
>>
>> Hm, so you do; so my thought that this needs explicit code on our end
>> seems wrong.  [ experiments... ]  It seems like the specific case of
>> PAGER being empty or all-white-space causes the shell to think that
>> it's executing an empty line and just do nothing (in particular, not
>> print any error).  pclose then returns EPIPE, at least on my platform,
>> which we could report but it doesn't seem like a very useful report.
>> Any other case seems to provoke a shell complaint that's probably
>> sufficient for diagnosis.
>>
>> So what I'm thinking now is that if PAGER is empty or all white space
>> then we should not try to use it as a shell command; we can either
>> treat the case as "pager off" or as "use default pager".  Everything
>> else we can leave to the invoked shell to complain about.
>>
>> Comments?
>>
>> regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PDF files: to store in database or not

2016-12-06 Thread John R Pierce

On 12/6/2016 1:02 PM, David Wall wrote:
You can certainly put the files in a filesystem and point to them, but 
you'll likely need some access control or people will be able to 
download any/all PDFs in a given folder. In the DB, you surely have 
access control, as I presume you don't allow browser access to the DB.


That's easily remedied by using storage accessed via https or whatever 
with directory listing disabled, and hashed names.  Or give only your 
app server direct access to the file system, whether that's via NFS or 
whatever.
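
For the hashed names, a sketch along these lines would do (the paths
and file names are assumptions, just for illustration):

    # derive an unguessable storage name from the file's own contents
    name=$(sha256sum upload.pdf | awk '{print $1}')
    mv upload.pdf "/srv/files/${name}.pdf"

With directory listings off, nobody can enumerate the files, and the
name itself leaks nothing.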




Either way, you may want to see if your PDFs compress well, as that 
may save some storage space at the cost of compressing/decompressing 
on each access.



Pretty sure almost all PDFs are already compressed internally, and 
won't compress much further.


--
john r pierce, recycling bits in santa cruz



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] FATAL: semctl(999999, 6, SETVAL, 0) failed: Invalid argument

2016-12-06 Thread DrakoRod
Hi everybody

I have a problem with Postgres 9.5 and semaphores, mainly on the
standby node, but it has been happening on the master node too.

The setup is RedHat 7.2 with NAS storage; the PostgreSQL data
directory is on the NAS. I had read that PostgreSQL has no problem
with this type of storage, but now I'm hitting this error and I'm not
sure whether the storage is the cause. The error is the following: 

2016-12-05 03:02:31 CST FATAL:  connection to client lost
2016-12-05 03:04:10 CST FATAL:  semctl(5832710, 6, SETVAL, 0) failed:
Invalid argument
2016-12-05 03:04:10 CST LOG:  server process (PID 4852) exited with exit
code 1
2016-12-05 03:04:10 CST LOG:  terminating any other active server processes
2016-12-05 03:04:10 CST WARNING:  terminating connection because of crash of
another server process
2016-12-05 03:04:10 CST DETAIL:  The postmaster has commanded this server
process to roll back the current transaction and exit, because another
server process exited abnormally and possibly corrupted shared memory.
2016-12-05 03:04:10 CST HINT:  In a moment you should be able to reconnect
to the database and repeat your command.
2016-12-05 03:04:10 CST LOG:  archiver process (PID 15360) exited with exit
code 1
2016-12-05 03:04:10 CST WARNING:  terminating connection because of crash of
another server process
2016-12-05 03:04:10 CST DETAIL:  The postmaster has commanded this server
process to roll back the current transaction and exit, because another
server process exited abnormally and possibly corrupted shared memory.
2016-12-05 03:04:10 CST HINT:  In a moment you should be able to reconnect
to the database and repeat your command.
2016-12-05 03:04:10 CST WARNING:  terminating connection because of crash of
another server process
2016-12-05 03:04:10 CST DETAIL:  The postmaster has commanded this server
process to roll back the current transaction and exit, because another
server process exited abnormally and possibly corrupted shared memory.
2016-12-05 03:04:10 CST HINT:  In a moment you should be able to reconnect
to the database and repeat your command.
2016-12-05 03:04:11 CST LOG:  all server processes terminated;
reinitializing
2016-12-05 03:04:11 CST LOG:  could not remove shared memory segment
"/PostgreSQL.1804289383": No such file or directory
2016-12-05 03:04:11 CST LOG:  semctl(5636096, 0, IPC_RMID, ...) failed:
Invalid argument
2016-12-05 03:04:11 CST LOG:  semctl(5668865, 0, IPC_RMID, ...) failed:
Invalid argument
2016-12-05 03:04:11 CST LOG:  semctl(5701634, 0, IPC_RMID, ...) failed:
Invalid argument
2016-12-05 03:04:11 CST LOG:  semctl(5734403, 0, IPC_RMID, ...) failed:
Invalid argument
2016-12-05 03:04:11 CST LOG:  semctl(5767172, 0, IPC_RMID, ...) failed:
Invalid argument
2016-12-05 03:04:11 CST LOG:  semctl(5799941, 0, IPC_RMID, ...) failed:
Invalid argument
2016-12-05 03:04:11 CST LOG:  semctl(5832710, 0, IPC_RMID, ...) failed:
Invalid argument
2016-12-05 03:04:11 CST LOG:  semctl(5865479, 0, IPC_RMID, ...) failed:
Invalid argument
2016-12-05 03:04:11 CST LOG:  database system was interrupted; last known up
at 2016-12-05 03:03:26 CST
2016-12-05 03:04:12 CST LOG:  database system was not properly shut down;
automatic recovery in progress
2016-12-05 03:04:12 CST LOG:  redo starts at 6/984DDFD0
2016-12-05 03:04:12 CST LOG:  invalid record length at 6/984DE100
2016-12-05 03:04:12 CST LOG:  redo done at 6/984DE0C8
2016-12-05 03:04:12 CST LOG:  MultiXact member wraparound protections are
now enabled
2016-12-05 03:04:12 CST LOG:  database system is ready to accept connections

I read these web pages and forum threads: 

http://serverfault.com/questions/723636/connecting-to-postgresql-fails-with-semctl-invalid-argument
http://postgresql.nabble.com/GENERAL-postgreslog-semctl-7438339-4-SETVAL-0-failed-td1860547.html
https://groups.google.com/forum/#!topic/pgbarman/Hx4U9F24vnw

I also found this; maybe it's a bug or something? 

http://stackoverflow.com/questions/36077235/all-auxiliary-procs-are-in-use-at-streaming-replication-server-in-postgresql-9-5

I have calculated and modified the kernel resources using the
information in this link:
https://www.postgresql.org/docs/9.5/static/kernel-resources.html
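
For reference, this is the kind of change I made (the values below are
only illustrative, not my exact settings):

    # show the current limits: SEMMSL SEMMNS SEMOPM SEMMNI
    sysctl kernel.sem
    # raise them, for example:
    sysctl -w kernel.sem="250 32000 100 128"
    # (and persist the setting under /etc/sysctl.d/ to survive a reboot)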

But that did not fix the problem. I have the same setup, but with SAN
storage, in the production environment, so I'm really worried that
this can't be fixed.

It is a virtual machine on VMware. I have read many posts that talk
about FreeBSD jails, but I do not understand why FreeBSD jails keep
coming up. What do they have to do with this?

Any suggestion?

Best Regards.






-
Give me a little faith, that will be enough for me.
Rozvo Ware Solutions 
--
View this message in context: 
http://postgresql.nabble.com/FATAL-semctl-99-6-SETVAL-0-failed-Invalid-argument-tp5933635.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Select works only when connected from login postgres

2016-12-06 Thread Tom Lane
Joseph Brenner  writes:
> Well, my take would be that if you've taken the trouble to set an
> empty string as the PAGER that means something, and it probably means
> you don't want any pager to be used.

Yeah, on reflection that argument seems pretty persuasive.  So I propose
the attached patch.

BTW, I realized while testing this that there's still one gap in our
understanding of what went wrong for you: cases like "SELECT 'hello'"
should not have tried to use the pager, because that would've produced
less than a screenful of data.  But that's irrelevant here, because it
can easily be shown that psql doesn't behave nicely if PAGER is set to
empty when it does try to use the pager.
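
For instance (any query producing more than a screenful will do):

    # with PAGER set but empty, output that should have been paged is
    # silently swallowed instead of being printed
    PAGER= psql -c "SELECT generate_series(1, 200);"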

regards, tom lane

diff --git a/doc/src/sgml/ref/psql-ref.sgml b/doc/src/sgml/ref/psql-ref.sgml
index 261652a..9915731 100644
*** a/doc/src/sgml/ref/psql-ref.sgml
--- b/doc/src/sgml/ref/psql-ref.sgml
*** $endif
*** 3801,3808 
If the query results do not fit on the screen, they are piped
through this command.  Typical values are
more or less.  The default
!   is platform-dependent.  The use of the pager can be disabled by
!   using the \pset command.
   
  
 
--- 3801,3809 
If the query results do not fit on the screen, they are piped
through this command.  Typical values are
more or less.  The default
!   is platform-dependent.  Use of the pager can be disabled by setting
!   PAGER to empty, or by using pager-related options of
!   the \pset command.
   
  
 
diff --git a/src/fe_utils/print.c b/src/fe_utils/print.c
index 1ec74f1..5c5d285 100644
*** a/src/fe_utils/print.c
--- b/src/fe_utils/print.c
*** PageOutput(int lines, const printTableOp
*** 2874,2879 
--- 2874,2885 
  			pagerprog = getenv("PAGER");
  			if (!pagerprog)
  pagerprog = DEFAULT_PAGER;
+ 			else
+ 			{
+ /* if PAGER is empty or all-white-space, don't use pager */
+ if (strspn(pagerprog, " \t\r\n") == strlen(pagerprog))
+ 	return stdout;
+ 			}
  			disable_sigpipe_trap();
  			pagerpipe = popen(pagerprog, "w");
  			if (pagerpipe)
diff --git a/src/interfaces/libpq/fe-print.c b/src/interfaces/libpq/fe-print.c
index c33dc42..e596a51 100644
*** a/src/interfaces/libpq/fe-print.c
--- b/src/interfaces/libpq/fe-print.c
*** PQprint(FILE *fout, const PGresult *res,
*** 166,173 
  			screen_size.ws_col = 80;
  #endif
  			pagerenv = getenv("PAGER");
  			if (pagerenv != NULL &&
! pagerenv[0] != '\0' &&
  !po->html3 &&
  ((po->expanded &&
    nTups * (nFields + 1) >= screen_size.ws_row) ||
--- 166,174 
  			screen_size.ws_col = 80;
  #endif
  			pagerenv = getenv("PAGER");
+ 			/* if PAGER is unset, empty or all-white-space, don't use pager */
  			if (pagerenv != NULL &&
! strspn(pagerenv, " \t\r\n") != strlen(pagerenv) &&
  !po->html3 &&
  ((po->expanded &&
    nTups * (nFields + 1) >= screen_size.ws_row) ||

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] FATAL: semctl(999999, 6, SETVAL, 0) failed: Invalid argument

2016-12-06 Thread Tom Lane
DrakoRod  writes:
> I have a problem with Postgres 9.5 and semaphores, mainly on the
> standby node, but it has been happening on the master node too.

> The setup is RedHat 7.2 with NAS storage; the PostgreSQL data
> directory is on the NAS. I had read that PostgreSQL has no problem
> with this type of storage, but now I'm hitting this error and I'm not
> sure whether the storage is the cause. The error is the following: 

> 2016-12-05 03:02:31 CST FATAL:  connection to client lost
> 2016-12-05 03:04:10 CST FATAL:  semctl(5832710, 6, SETVAL, 0) failed:
> Invalid argument

I doubt this has anything much to do with NAS storage.  What it sounds
like to me is systemd's "helpful" idea of removing SysV semaphores when
it thinks you've logged out.  There are at least two ways to stop it
from doing that; see
https://wiki.postgresql.org/wiki/Systemd
or the longer discussion at
https://www.postgresql.org/message-id/flat/57828C31.5060409%40gmail.com
or a couple of other discussions you can find by searching the PG mail
archives for "systemd semaphores".
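
If you just want the quick fix (assuming a stock RHEL 7 systemd
layout), the wiki's advice boils down to:

    # /etc/systemd/logind.conf
    RemoveIPC=no

    # then restart logind (or reboot) so it takes effect
    systemctl restart systemd-logind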

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] FATAL: semctl(999999, 6, SETVAL, 0) failed: Invalid argument

2016-12-06 Thread DrakoRod
Hi Tom!

Thanks for your quick response. In fact, it was not a problem with the
NAS storage; it was the systemd configuration. I had noticed that the
semaphores were being removed every 20-30 minutes. 

After applying that configuration (from the links you shared), I
monitored for an hour and everything is working well.

https://wiki.postgresql.org/wiki/Systemd
https://www.postgresql.org/message-id/cak7teys9-o4bterbs3xuk2bffnnd55u2sm9j5r2fi7v6bhj...@mail.gmail.com

Thanks! 



-
Give me a little faith, that will be enough for me.
Rozvo Ware Solutions 
--
View this message in context: 
http://postgresql.nabble.com/FATAL-semctl-99-6-SETVAL-0-failed-Invalid-argument-tp5933635p5933649.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general