Re: [GENERAL] Recording exceptions within function (autonomous transactions?)

2015-10-06 Thread Scott Mead

> On Oct 6, 2015, at 05:38, Steve Pritchard  wrote:
> 
> I am porting several stored procedures from Oracle to Postgres. In the Oracle 
> code, if an exception is thrown within a stored procedure, the exception is 
> caught and details are written to a database table using an autonomous 
> transaction (as the main transaction is rolled back).
> 
> As far as I can see from the documentation, Postgres doesn't support 
> autonomous transaction (although there is talk about it at 
> https://wiki.postgresql.org/wiki/Autonomous_subtransactions - is this 
> something that is being discussed for a future release?).
> 
> The Postgres functions that I'm writing are batch processes that will be 
> invoked via a scheduler (either cron or pgAgent).
> 
> Ideally I'd like to record the exceptions in a database table. If this isn't 
> possible then recording in a log file would be acceptable, but I'd like to 
> keep this separate from the main postgres log.
> 
> Alternatives that I've come up with (none of them very satisfactory):
> use 'raise' to record in postgres log
> put the error recording in the client code (as invoked by scheduler) - use 
> BEGIN TRANSACTION to start a new transaction
> use COPY to output to a file
> Can anyone suggest something that would meet my requirements above?
It's hacky, and I haven't tried it in a few years.  Set up a foreign table that 
resides in the same database. When you write to the foreign table, it will be 
using a 'loopback' connection, and that transaction will be able to commit 
because it is a separate connection. 

  To be fair, I haven't actually done this since the days of dblink; I 
*believe* it should work with an FDW, though. 
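
  For reference, a minimal sketch of the dblink variant (the error_log table,
connection string, and function name are illustrative assumptions):

-- Assumes the dblink extension is installed and an
-- error_log(logged_at, context, message) table already exists.
CREATE OR REPLACE FUNCTION log_error(p_context text, p_message text)
RETURNS void AS $$
BEGIN
  -- dblink_exec opens a separate 'loopback' connection, and the INSERT
  -- commits there on its own, so it survives a rollback of the caller.
  PERFORM dblink_exec(
    'host=localhost dbname=mydb user=app_user',
    format('INSERT INTO error_log (logged_at, context, message)
            VALUES (now(), %L, %L)', p_context, p_message));
END;
$$ LANGUAGE plpgsql;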

--
Scott Mead
Sr. Architect
OpenSCG
http://openscg.com

> Steve Pritchard
> British Trust for Ornithology, UK


Re: [GENERAL] How to reduce pg_dump backup time

2015-10-06 Thread Scott Mead

> On Oct 6, 2015, at 00:52, Sachin Srivastava  wrote:
> 
> Dear Team,
> 
>  
> 
> I am using PostgreSQL 9.1 on Red Hat Linux in my production environment. My 
> database size is 680 GB and it takes 7 hours to complete the pg_dump backup.
> 
>  
> 
> I want my pg_dump backup to be faster and take less time.
> 
>  
> 
> In PostgreSQL 9.3 the “-j njobs” option is available for faster pg_dump 
> backups.
> 
>  
> 
> Is there any way to use the “-j njobs” option in PostgreSQL 9.1 so that my 
> backup completes in less time, or any other way in PostgreSQL 9.1 to make the 
> backup faster? Kindly suggest.
> 
>  
> 
If you're using compression, disable it. It'll require more space, but it will be 
faster. Honestly, you should upgrade to a newer version; 9.1 will be EOL in slightly 
less than a year. 

  For the upgrade, you can use a newer version of pg_dump. I'm not sure if the 
parallel option is available against a 9.1 server (but it's worth a try). 
I wouldn't try restoring that dump to a 9.1 server; just use it to upgrade to 
9.4. 
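
  For example (a hedged sketch; paths are illustrative, and whether a parallel
dump works against a 9.1 server is exactly the thing to test first):

# Parallel dumps need the directory format (-Fd); use the 9.4 client binaries.
# Against a pre-9.2 server, pg_dump may also need --no-synchronized-snapshots.
/usr/pgsql-9.4/bin/pg_dump -h oldhost -Fd -j 4 -f /backups/mydb.dir mydb

# Restore into the new 9.4 cluster, also in parallel:
/usr/pgsql-9.4/bin/pg_restore -h newhost -d mydb -j 4 /backups/mydb.dir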


>  
> 
> [root]# lsb_release -a
> 
>  
> 
> LSB Version:
> :base-4.0-amd64:base-4.0-noarch:core-4.0-amd64:core-4.0-noarch:graphics-4.0-amd64:graphics-4.0-noarch:printing-4.0-amd64:printing-4.0-noarch
> 
> Distributor ID: RedHatEnterpriseServer
> 
> Description:Red Hat Enterprise Linux Server release 6.7 (Santiago)
> 
> Release:6.7
> 
> Codename:   Santiago
> 
>  
> 
> [root@CPPMOMA_DB01 even]# psql
> 
> Password:
> 
> psql.bin (9.1.2)
> 
> Type "help" for help.
> 
>  
> 
> postgres=# select version();
> 
> version
> 
> ---
> 
>  PostgreSQL 9.1.2 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.1.2 
> 20080704 (Red Hat 4.1.2-46), 64-bit
> 
> (1 row)
> 
>  
> 
> postgres=#
> 
>  
> 
>  
> 
>  
> 
>  
> 
> Regards,
> 
> Sachin Srivastava


Re: [GENERAL] How to reduce pg_dump backup time

2015-10-06 Thread Scott Mead
On Tue, Oct 6, 2015 at 8:54 AM, Adrian Klaver 
wrote:

> On 10/05/2015 09:52 PM, Sachin Srivastava wrote:
>
>> Dear Team,
>>
>> I am using PostgreSQL 9.1 on Red Hat Linux in my production environment.
>> My database size is 680 GB and it takes 7 hours to complete the pg_dump
>> backup.
>>
>> I want my pg_dump backup to be faster and take less time.
>>
>> In PostgreSQL 9.3 the “-j njobs” option is available for faster
>> pg_dump backups.
>>
>> Is there any way to use the “-j njobs” option in
>> PostgreSQL 9.1 so that my backup completes in less time or any
>> other way in PostgreSQL 9.1 to make the backup faster? Kindly suggest.
>>
>
> In addition to Scott's suggestions have you looked at replication?:
>
> http://www.postgresql.org/docs/9.1/interactive/high-availability.html
>
> Basically, continuous backup.
>

+1 - It doesn't actually lower the base backup time, but there's little
overhead for archiving (other than storage space), and restoration is just
starting an instance and replaying a few logs.  *Much* faster.  You can use
a tool like WAL-E or barman to build out a nice strategy.

   At the end of the day, you'll still want to do a periodic logical
backup (to make sure your physical backups are sane), but using physical
backups will open you up to many more options.
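
   A minimal sketch of the moving parts (paths are illustrative; WAL-E and
barman wrap all of this up for you):

# postgresql.conf on the primary (9.1-era settings):
wal_level = archive
archive_mode = on
archive_command = 'test ! -f /archive/%f && cp %p /archive/%f'

# Take the physical base backup:
pg_basebackup -h dbhost -U postgres -D /backups/base

   Restoring is then a matter of copying the base backup into place, pointing
restore_command at the archive, and letting the server replay the WAL.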

--
Scott Mead
Sr. Architect
*OpenSCG*
http://www.openscg.com <http://openscg.com/>



>> [root]# lsb_release -a
>>
>>
>>
>> LSB
>>
>> Version::base-4.0-amd64:base-4.0-noarch:core-4.0-amd64:core-4.0-noarch:graphics-4.0-amd64:graphics-4.0-noarch:printing-4.0-amd64:printing-4.0-noarch
>>
>> Distributor ID: RedHatEnterpriseServer
>>
>> Description:Red Hat Enterprise Linux Server release 6.7 (Santiago)
>>
>> Release:6.7
>>
>> Codename:Santiago
>>
>> [root@CPPMOMA_DB01 even]# psql
>>
>> Password:
>>
>> psql.bin (9.1.2)
>>
>> Type "help" for help.
>>
>> postgres=# select version();
>>
>> version
>>
>>
>> ---
>>
>> PostgreSQL 9.1.2 on x86_64-unknown-linux-gnu, compiled by gcc (GCC)
>> 4.1.2 20080704 (Red Hat 4.1.2-46), 64-bit
>>
>> (1 row)
>>
>> postgres=#
>>
>> Regards,
>>
>> Sachin Srivastava
>>
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


Re: [GENERAL] backup.old

2015-10-07 Thread Scott Mead
On Wed, Oct 7, 2015 at 15:38, David G. Johnston wrote:
On Wed, Oct 7, 2015 at 3:29 PM, Steve Pribyl <steve.pri...@akunacapital.com> wrote:

Thank you very much. I read someplace that if you run pg_start_backup twice,
backup.old will be created, but there was not much beyond that and now I can't
seem to find the reference.


backup_label gets deleted on pg_stop_backup() on the *master*.
backup_label will still be in the *backup* itself, however (or, more succinctly,
on a slave server). When you start the backup / slave, it will process
backup_label so that it can start recovery. Once we don't need it anymore, the
file is renamed to backup_label.old. Typically, when you see a backup_label.old
on a writable master, it was either:
* a backup that was restored and put into service
* a slave server that was promoted
A pg_controldata will probably show a timeline != 1.
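
For example (a sketch; the data directory path is illustrative):

$ pg_controldata /var/lib/pgsql/9.4/data | grep -i timeline
# a restored backup or a promoted slave will typically report a
# "Latest checkpoint's TimeLineID" greater than 1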




Scanning the docs and logic tells me that attempting to do pg_start_backup twice
in a row should result in the second attempt giving an error... but I could be
misinformed.
The file pg_start_backup creates is named "backup_label", and so I'd also expect
any attempt to add an old suffix would keep the same base name...
David J.

Re: [GENERAL] Creating Report for PieChart

2015-10-13 Thread Scott Mead


> On Oct 13, 2015, at 19:56, Alex Magnum  wrote:
> 
> Hello,
> I need to process some statistics for a pie chart (json) where I only want to 
> show a max of 8 slices. If I have more data points, like in the table below, I 
> need to combine the rest into a slice called Others. If there are 8 or fewer, 
> I use them as is.
> 
> I am currently doing this with a plperl function, which works well, but I was 
> just wondering out of curiosity if it could be done within an SQL query.
> 
> Anyone having done something similar who could point me in the right 
> direction? 
> 
> 
> SELECT count(*),country_name FROM stats_archive WHERE id=400 GROUP BY 
> country_name ORDER BY COUNT DESC;
>  count |   country_name
> ---+---
>302 | Malaysia
> 65 | Singapore
> 57 | Thailand
> 26 | Indonesia
> 15 | France
> 14 | United States
> 14 | India
> 13 | Philippines
> 12 | Vietnam
> 10 | Republic of Korea
> 10 | Canada
>  7 | Australia
>  6 | Brazil
>  6 | Czech Republic
>  5 | Switzerland
>  4 | Saudi Arabia
>  3 | Ireland
>  3 | Japan
>  3 | Sweden
>  3 | South Africa
>  3 | Belarus
>  3 | Colombia
>  3 | United Kingdom
>  1 | Peru
> 
> 
>country_name  | count | perc  
> -+---+---
>  Malaysia|   302 |  51.4 
>  Singapore   |65 |  11.0 
>  Thailand|57 |   9.7 
>  Indonesia   |26 |   4.4 
>  France  |15 |   2.6 
>  United States   |14 |   2.4 
>  India   |14 |   2.4 
>  Others  |95 |  16.1 
>  Total   |   588 |   100 
> 
> Thanks a lot for any suggestions
I would use rank to get a rank number for each record. 


> Alex


Re: [GENERAL] Creating Report for PieChart

2015-10-13 Thread Scott Mead
On Tue, Oct 13, 2015 at 9:14 PM, Scott Mead  wrote:

>
>
> On Oct 13, 2015, at 19:56, Alex Magnum  wrote:
>
> Hello,
> I need to process some statistics for a pie chart (json) where I only want
> to show a max of 8 slices. If I have more data points, like in the table below,
> I need to combine the rest into a slice called Others. If there are 8 or fewer,
> I use them as is.
>
> I am currently doing this with a plperl function, which works well, but I was
> just wondering out of curiosity if it could be done within an SQL query.
>
> Anyone having done something similar who could point me in the right
> direction?
>
>
> SELECT count(*),country_name FROM stats_archive WHERE id=400 GROUP BY
> country_name ORDER BY COUNT DESC;
>  count |   country_name
> ---+---
>302 | Malaysia
> 65 | Singapore
> 57 | Thailand
> 26 | Indonesia
> 15 | France
> 14 | United States
> 14 | India
> 13 | Philippines
> 12 | Vietnam
> 10 | Republic of Korea
> 10 | Canada
>  7 | Australia
>  6 | Brazil
>  6 | Czech Republic
>  5 | Switzerland
>  4 | Saudi Arabia
>  3 | Ireland
>  3 | Japan
>  3 | Sweden
>  3 | South Africa
>  3 | Belarus
>  3 | Colombia
>  3 | United Kingdom
>  1 | Peru
>
>
>country_name  | count | perc
> -+---+---
>  Malaysia|   302 |  51.4
>  Singapore   |65 |  11.0
>  Thailand|57 |   9.7
>  Indonesia   |26 |   4.4
>  France  |15 |   2.6
>  United States   |14 |   2.4
>  India   |14 |   2.4
>  Others  |95 |  16.1
>  Total   |   588 |   100
>
> Thanks a lot for any suggestions
>
> I would use rank to get a rank number for each record.
>

  Sorry, sent the last one from my phone; here's an example:



Use 'rank' to generate the rank order of the entry.

postgres=# select country, count(1) num_entries,
rank() over (order by count(1) DESC)
from test GROUP by country ORDER BY num_entries DESC;
 country | num_entries | rank
-+-+--
 US  |  20 |1
 CA  |  15 |2
 SP  |   8 |3
 IT  |   7 |4
(4 rows)

There's probably an easier way to do this without a sub-select, but it
works.

postgres=# SELECT country, num_entries, rank
   FROM (select country, count(1) num_entries,
   rank() over (order by count(1) DESC)
   FROM test GROUP by country
) foo WHERE rank < 4;

 country | num_entries | rank
-+-+--
 US  |  20 |1
 CA  |  15 |2
 SP  |   8 |3
(3 rows)

postgres=#
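
To fold everything past the top slices into an 'Others' row, one possibility
(a sketch against the OP's stats_archive table; the 8-slice limit comes from
the original question):

WITH ranked AS (
  SELECT country_name,
         count(*) AS cnt,
         rank() OVER (ORDER BY count(*) DESC) AS rnk
    FROM stats_archive
   WHERE id = 400
   GROUP BY country_name
)
SELECT CASE WHEN rnk <= 7 THEN country_name ELSE 'Others' END AS country_name,
       sum(cnt) AS count,
       round(100.0 * sum(cnt) / sum(sum(cnt)) OVER (), 1) AS perc
  FROM ranked
 GROUP BY 1
 ORDER BY min(rnk);

The window sum over the grouped rows yields the percentage column in the same
pass, and 'Others' sorts last because its minimum rank number is the largest.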

--
Scott Mead
OpenSCG
www.openscg.com

>
>
> Alex
>
>


Re: [GENERAL] question

2015-10-15 Thread Scott Mead
On Thu, Oct 15, 2015 at 3:55 PM, Guillaume Lelarge 
wrote:

> 2015-10-15 20:40 GMT+02:00 anj patnaik :
>
>> It's a Linux machine with 8 CPUs. I don't have the other details.
>>
>> I get archive member too large for tar format.
>>
>> Is there a recommended command/option when dealing with very large
>> tables, e.g. 150K rows where half of the rows have 22MB of data being
>> inserted?
>>
>>
> Don't use the tar format? I never understood the interest in that one. You
> should use the custom format instead.
>

+ 1

 Use -F c
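
 For example (a hedged sketch; file names are illustrative):

pg_dump -F c -f /tmp/recorder.dump -t RECORDER postgres
# Custom-format archives are compressed and have no tar member-size limit;
# restore them with pg_restore:
pg_restore -d postgres /tmp/recorder.dump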


--
Scott Mead
Sr. Architect
*OpenSCG*
PostgreSQL, Java & Linux Experts


http://openscg.com


>
>
>> -bash-4.1$ ./pg_dump -t RECORDER postgres --format=t -w  > /tmp/dump
>> pg_dump: [archiver (db)] connection to database "postgres" failed:
>> fe_sendauth: no password supplied
>> -bash-4.1$ ./pg_dump -t RECORDER postgres --format=t   > /tmp/dump
>> Password:
>> pg_dump: [tar archiver] archive member too large for tar format
>> -bash-4.1$ pg_dumpall | gzip > \tmp\db.out-`date +\%Y\%m\%d\%H`.gz
>> -bash: pg_dumpall: command not found
>> -bash: tmpdb.out-2015101510.gz: Permission denied
>> -bash-4.1$ ./pg_dumpall | gzip > \tmp\db.out-`date +\%Y\%m\%d\%H`.gz
>>
>>
>> Thank you so much for replying and accepting my post to this NG.
>>
>> On Thu, Oct 15, 2015 at 11:17 AM, Melvin Davidson 
>> wrote:
>>
>>> In addition to clarifying exactly what you mean by "a long time" to pg_dump 77k
>>> rows of your table,
>>>
>>> What is your O/S and how much memory is on your system?
>>> How many CPU's are in your system?
>>> Also, what is your hard disk configuration?
>>> What other applications are running simultaneously with pg_dump?
>>> What is the value of shared_memory & maintenance_work_mem in
>>> postgresql.conf?
>>>
>>> On Thu, Oct 15, 2015 at 11:04 AM, Adrian Klaver <
>>> adrian.kla...@aklaver.com> wrote:
>>>
>>>> On 10/14/2015 06:39 PM, anj patnaik wrote:
>>>>
>>>>> Hello,
>>>>>
>>>>> I recently downloaded postgres 9.4 and I have a client application that
>>>>> runs in Tcl that inserts to the db and fetches records.
>>>>>
>>>>> For the majority of the time, the app will connect to the server to do
>>>>> insert/fetch.
>>>>>
>>>>> For occasional use, we want to remove the requirement to have a server
>>>>> db and just have the application retrieve data from a local file.
>>>>>
>>>>> I know I can use pg_dump to export the tables. The questions are:
>>>>>
>>>>> 1) is there an in-memory db instance or file based I can create that is
>>>>> loaded with the dump file? This way the app code doesn't have to
>>>>> change.
>>>>>
>>>>
>>>> No.
>>>>
>>>>
>>>>> 2) does pg support embedded db?
>>>>>
>>>>
>>>> No.
>>>>
>>>> 3) Or is my best option to convert the dump to sqlite and the import the
>>>>> sqlite and have the app read that embedded db.
>>>>>
>>>>
>>>> Sqlite tends to follow Postgres conventions, so you might be able to
>>>> use the pg_dump output directly if you use --inserts or --column-inserts:
>>>>
>>>> http://www.postgresql.org/docs/9.4/interactive/app-pgdump.html
>>>>
>>>>
>>>>> Finally, I am noticing pg_dump takes a lot of time to create a dump of
>>>>> my table. right now, the table  has 77K rows. Are there any ways to
>>>>> create automated batch files to create dumps overnight and do so
>>>>> quickly?
>>>>>
>>>>
>>>> Define long time.
>>>>
>>>> What is the pg_dump command you are using?
>>>>
>>>> Sure use a cron job.
>>>>
>>>>
>>>>> Thanks for your inputs!
>>>>>
>>>>
>>>>
>>>> --
>>>> Adrian Klaver
>>>> adrian.kla...@aklaver.com
>>>>
>>>>
>>>> --
>>>> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
>>>> To make changes to your subscription:
>>>> http://www.postgresql.org/mailpref/pgsql-general
>>>>
>>>
>>>
>>>
>>> --
>>> *Melvin Davidson*
>>> I reserve the right to fantasize.  Whether or not you
>>> wish to share my fantasy is entirely up to you.
>>>
>>
>>
>
>
> --
> Guillaume.
>   http://blog.guillaume.lelarge.info
>   http://www.dalibo.com
>


Re: [GENERAL] ID column naming convention

2015-10-17 Thread Scott Mead


> On Oct 13, 2015, at 18:27, droberts  wrote:
> 
> Gavin Flower-2 wrote
>>> On 14/10/15 06:36, droberts wrote:
>>> Hi, is there a problem with naming IDs differently when used as a FK vs. the
>>> table ID?
>>> For example
>>> 
>>> 
>>> mydimtable ()
>>>  ID
>>>  name
>>>  description
>>> 
>>> 
>>> myfacttable ()
>>>   my_dim_id   # FK to ID above
>>>   total_sales
>>> 
>>> 
>>> I 'think' if I don't enforce foreign key constraints, then this practice
>>> prevents tools from being able to generate ERD diagrams right?
>>> 
>>> 
>>> 
>>> --
>>> View this message in context:
>>> http://postgresql.nabble.com/ID-column-naming-convention-tp5869844.html
>>> Sent from the PostgreSQL - general mailing list archive at Nabble.com.
>> My practice is to name the PRIMARY KEY as id, and foreign keys with the
>> original table name plus the suffix _id.
>>
>> Leaving the table name off the primary key name, and just using id,
>> makes it more obvious that it is a primary key (plus it seems redundant
>> to prefix the primary key name with its own table name!).
>> 
>> CREATE TABLE house
>> (
>> id  int PRIMARY KEY,
>> address text
>> );
>> 
>> CREATE TABLE room
>> (
>> id   int PRIMARY KEY,
>> house_id int REFERENCES house(id),
>> name text
>> );
>> 
>> 
>> There are exceptions like:
>> 
>> CREATE TABLE human
>> (
>> idint PRIMARY KEY,
>> mother_id int REFERENCES human (id),
>> father_id int REFERENCES human (id),
>> name  text
>> );
>> 
>> Cheers,
>> Gavin
>> 
>> 
>> -- 
>> Sent via pgsql-general mailing list (
> 
>> pgsql-general@
> 
>> )
>> To make changes to your subscription:
>> http://www.postgresql.org/mailpref/pgsql-general
> 
> Thanks.   My only question is how do you create a schema diagram (ERD) then? 
> The tool won't know what the relationships are unless maybe you put foreign
> key constraints on.  
That's how most tools work, usually by calling the driver API (JDBC 
DatabaseMetaData, etc.), which in turn looks at the information_schema. If you 
don't set up real referential integrity, any tool that relies on names is just 
guessing.

   I think DbVisualizer will 'infer' relationships based on column names.  I KNOW 
that schemaspy has this option, but they explicitly note it is a GUESS. 

   Use foreign keys. 
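
   With the tables from the original post, that's just (a sketch; the
constraint name is illustrative):

ALTER TABLE myfacttable
  ADD CONSTRAINT myfacttable_my_dim_id_fkey
  FOREIGN KEY (my_dim_id) REFERENCES mydimtable (id);

Once the constraint exists, ERD tools can read the relationship out of the
catalogs / information_schema instead of guessing from column names.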



> BTW does anyone recommend a tool to to that?  I've been
> playing with DbVisualizer.
> 
> 
> 
> --
> View this message in context: 
> http://postgresql.nabble.com/ID-column-naming-convention-tp5869844p5869881.html
> Sent from the PostgreSQL - general mailing list archive at Nabble.com.
> 
> 
> -- 
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] [BUGS] postgresql table data control

2015-10-19 Thread Scott Mead

> On Oct 19, 2015, at 04:29, Shulgin, Oleksandr  
> wrote:
> 
>> On Sat, Oct 17, 2015 at 1:26 AM, 許耀彰  wrote:
>> Dear Support Team,
>> How can we track how much each table's data grows day by day? That is, how do 
>> we find out how much data was produced today, and which data? Thank you. 
> 
> 
> [moving from bugs@]
> 
> Please refer to these SQL-level functions:
> 
> pg_relation_size()
> pg_total_relation_size()
> pg_size_pretty()

If you are looking to track this over time, you'll need a tool that captures 
this in snapshots and reports over them. 

  I've written a script that I use frequently as part of my OpenWatch tool 
(https://bitbucket.org/scott_mead/openwatch/src/d024185b1b5585f7b4c8a5ad6b926eafed2e249e/bindings/java/sql/OpenWatch-Snapshots.sql?at=master&fileviewer=file-view-default).
  Run this script in the database, then run snapshots periodically:

  Select snapshots.save_snap();
 

  To see a size report between snapshots, list the available snapshots:
  
select * from list_snaps();

  To get the size report, get the snap_id numbers you want to report between 
and run:

  select * from snapshots.report_tables(<start_snap_id>, <end_snap_id>);

  If you download the full OpenWatch tool 
(https://bitbucket.org/scott_mead/openwatch/downloads) and run it from the 
shell script, you can add CPU, disk I/O, and memory to the report as well. 
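
  If you just want the bare mechanics without the tool, a minimal
do-it-yourself sketch looks like this (the table name and the two timestamps
are illustrative):

CREATE TABLE relsize_snapshot (
    taken_at   timestamptz DEFAULT now(),
    relname    text,
    total_size bigint
);

-- Run periodically (cron, pgAgent, ...); now() is constant within a
-- statement, so every row of one run shares the same taken_at.
INSERT INTO relsize_snapshot (relname, total_size)
SELECT c.oid::regclass::text, pg_total_relation_size(c.oid)
  FROM pg_class c
 WHERE c.relkind = 'r';

-- Growth between two snapshot times:
SELECT b.relname,
       pg_size_pretty(b.total_size - a.total_size) AS growth
  FROM relsize_snapshot a
  JOIN relsize_snapshot b USING (relname)
 WHERE a.taken_at = '2015-10-18 00:00'
   AND b.taken_at = '2015-10-19 00:00'
 ORDER BY b.total_size - a.total_size DESC;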



> 
> http://www.postgresql.org/docs/current/static/functions-admin.html
> 


Re: [GENERAL] trouble downloading postgres 9.4 for RHEL 6.x

2015-10-21 Thread Scott Mead

> On Oct 21, 2015, at 12:28, Adrian Klaver  wrote:
> 
>> On 10/21/2015 09:14 AM, anj patnaik wrote:
>> Ok, i am trying to determine why I am getting errors. Is it possible
>> that my browser is corrupting the transfer?
> 
> Maybe, though I used FireFox to download also. I would go to wherever the 
> file has been downloaded on your computer and delete it and try the download 
> again. The file I got was:
> 
> 37548416 Oct 21 09:20 postgresql-9.5.0-beta1-linux-x64.run
> 
> CCing list
>> 
>> I am seeing: Installer payload initialization failed. This is likely due
>> to an incomplete or corrupt downloaded file.
>> 
>> I am a Linux newbie. Is there a way to test if the transfers are getting
>> corrupted?
>> 
>> after download, did you just change permissions and ran sudo
>> ./postgres.run file?
> 
> Yes, so:
> 
> aklaver@killi:~/Downloads> chmod 755 postgresql-9.5.0-beta1-linux-x64.run
> 
> aklaver@killi:~/Downloads> sudo ./postgresql-9.5.0-beta1-linux-x64.run
> 
>> 
>> The reason for using installer is that it goes through all the steps and
>> instantiates a sid.
> 
> What is a sid?
> 

Postgres isn't like Oracle. Use yum to install, then run 

/etc/init.d/postgres9.5 initdb

Then, run:
/etc/init.d/postgres9.5 start


  That's it.  initdb creates the instance; start starts it.  No dbca or other 
such garbage is needed. Welcome to the easy life :)
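
  A hedged sketch of those steps on RHEL 6 (package and service names vary by
version and packaging; the PGDG RHEL packages name the service postgresql-9.5):

# After installing the PGDG repository RPM from the page linked below:
yum install postgresql95-server postgresql95

# Create the cluster and start it:
service postgresql-9.5 initdb
service postgresql-9.5 start
chkconfig postgresql-9.5 on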



>> 
>> The purpose of this installation is for me to have a database to rest
>> the restoration of pg_dump.
> 
> You can also get that with a Yum install, see below for more:
> 
> http://www.postgresql.org/download/linux/redhat/
>> 
>> Thank you.
>> 
>> On Wed, Oct 21, 2015 at 12:07 PM, Adrian Klaver
>> mailto:adrian.kla...@aklaver.com>> wrote:
>> 
>>On 10/21/2015 08:57 AM, anj patnaik wrote:
>> 
>>I used the same link:
>>http://www.enterprisedb.com/products-services-training/pgdownload
>> 
>>I chose /*Version 9.5.0 Beta 1*/ Linux x86-64
>> 
>> 
>>I downloaded(postgresql-9.5.0-beta1-linux-x64.run) and ran it. I did
>>not actually complete the install as I already have Postgres
>>installed  on this machine. This is on openSUSE 13.2, not that it
>>should matter.
>> 
>> 
>>Then, I tried 9.4 for Linux x86-64
>> 
>>Has anyone downloaded/installed within last 2 days? my OS is
>>RHEL 6.5
>> 
>>I am using the Linux machine's firefox browser to download.
>> 
>>is there a way to use yum to get the same installer program that
>>goes
>>through all the steps?
>> 
>> 
>>No, Yum will not use the installer program, it will use the RH
>>native packaging.
>> 
>>Is there a particular reason you want the installer?
>> 
>>Something specific you want to install?
>> 
>> 
>>Please advise. thanks
>> 
>>On Wed, Oct 21, 2015 at 9:08 AM, Adrian Klaver
>>mailto:adrian.kla...@aklaver.com>
>>>>> wrote:
>> 
>> On 10/20/2015 05:48 PM, anj patnaik wrote:
>> 
>> Several weeks ago, I successfully
>> downloaded postgresql-9.4.4-3-linux-x64.run on a Linux
>>server.
>> 
>> 
>> Where did you download from?
>> 
>> 
>> Today, I attempted to download from the site. I notice
>>9.5 is
>> there, but
>> getting lots of errors:
>> 
>> 
>> Looks like you are trying to run a *.exe(Windows) file on a
>>Linux
>> machine, that is the source of the errors.
>> 
>> 
>> 1) when downloading and running latest
>> 
>> rchive:
>>/home/apatnaik/Downloads/postgresql-9.4.5-1-windows-x64.exe
>> 
>>[/home/apatnaik/Downloads/postgresql-9.4.5-1-windows-x64.exe]
>> End-of-central-directory signature not found.
>>Either this
>> file is not
>> a zipfile, or it constitutes one disk of a multi-part
>> archive.  In the
>> latter case the central directory and zipfile
>>comment will
>> be found on
>> the last disk(s) of this archive.
>> zipinfo:  cannot find zipfile directory in one of
>> 
>>/home/apatnaik/Downloads/postgresql-9.4.5-1-windows-x64.exe or
>> 
>> 
>>/home/apatnaik/Downloads/postgresql-9.4.5-1-windows-x64.exe.zip, and
>> cannot find
>> 
>>/home/apatnaik/Downloads/postgresql-9.4.5-1-windows-x64.exe.ZIP,
>> period.
>> 
>> 2) Then I tried to scp from the other machine and changed
>> permissions to
>> 777 and used sudo to execute the file but get this:
>> 
>> bash-4.1$ chmod 777 postgresql-9.4.4-3-linux-x64.run
>> bash-4.1$ sudo ./postgresql-9.4.4-3-linux-x64.run
>> Installer payload initialization fail

Re: [GENERAL] trouble downloading postgres 9.4 for RHEL 6.x

2015-10-21 Thread Scott Mead


> On Oct 21, 2015, at 14:58, anj patnaik  wrote:
> 
> With the graphical installer, I had a way to create a user. Does it create 
> postgres user by default?

Yeah, it creates both the OS user and the database super-user. 


> 
> Let me know. Thx
> 
>> On Wed, Oct 21, 2015 at 1:43 PM, Scott Mead  wrote:
>> 
>> > On Oct 21, 2015, at 12:28, Adrian Klaver  wrote:
>> >
>> >> On 10/21/2015 09:14 AM, anj patnaik wrote:
>> >> Ok, i am trying to determine why I am getting errors. Is it possible
>> >> that my browser is corrupting the transfer?
>> >
>> > Maybe, though I used FireFox to download also. I would go to wherever the 
>> > file has been downloaded on your computer and delete it and try the 
>> > download again. The file I got was:
>> >
>> > 37548416 Oct 21 09:20 postgresql-9.5.0-beta1-linux-x64.run
>> >
>> > CCing list
>> >>
>> >> I am seeing: Installer payload initialization failed. This is likely due
>> >> to an incomplete or corrupt downloaded file.
>> >>
>> >> I am a Linux newbie. Is there a way to test if the transfers are getting
>> >> corrupted?
>> >>
>> >> after download, did you just change permissions and ran sudo
>> >> ./postgres.run file?
>> >
>> > Yes, so:
>> >
>> > aklaver@killi:~/Downloads> chmod 755 postgresql-9.5.0-beta1-linux-x64.run
>> >
>> > aklaver@killi:~/Downloads> sudo ./postgresql-9.5.0-beta1-linux-x64.run
>> >
>> >>
>> >> The reason for using installer is that it goes through all the steps and
>> >> instantiates a sid.
>> >
>> > What is a sid?
>> >
>> 
>> Postgres isn't like Oracle. Use yum to install, then run
>> 
>> /etc/init.d/postgres9.5 initdb
>> 
>> Then, run:
>> /etc/init.d/postgres9.5 start
>> 
>> 
>>   That's it.  Initdb creates the instance, start - starts it.  No dbca or 
>> other such garbage is needed. Welcome to the easy life :)
>> 
>> 
>> 
>> >>
>> >> The purpose of this installation is for me to have a database to rest
>> >> the restoration of pg_dump.
>> >
>> > You can also get that with a Yum install, see below for more:
>> >
>> > http://www.postgresql.org/download/linux/redhat/
>> >>
>> >> Thank you.
>> >>
>> >> On Wed, Oct 21, 2015 at 12:07 PM, Adrian Klaver
>> >> mailto:adrian.kla...@aklaver.com>> wrote:
>> >>
>> >>On 10/21/2015 08:57 AM, anj patnaik wrote:
>> >>
>> >>I used the same link:
>> >>http://www.enterprisedb.com/products-services-training/pgdownload
>> >>
>> >>I chose /*Version 9.5.0 Beta 1*/ Linux x86-64
>> >>
>> >>
>> >>I downloaded(postgresql-9.5.0-beta1-linux-x64.run) and ran it. I did
>> >>not actually complete the install as I already have Postgres
>> >>installed  on this machine. This is on openSUSE 13.2, not that it
>> >>should matter.
>> >>
>> >>
>> >>Then, I tried 9.4 for Linux x86-64
>> >>
>> >>Has anyone downloaded/installed within last 2 days? my OS is
>> >>RHEL 6.5
>> >>
>> >>I am using the Linux machine's firefox browser to download.
>> >>
>> >>is there a way to use yum to get the same installer program that
>> >>goes
>> >>through all the steps?
>> >>
>> >>
>> >>No, Yum will not use the installer program, it will use the RH
>> >>native packaging.
>> >>
>> >>Is there a particular reason you want the installer?
>> >>
>> >>Something specific you want to install?
>> >>
>> >>
>> >>Please advise. thanks
>> >>
>> >>On Wed, Oct 21, 2015 at 9:08 AM, Adrian Klaver
>> >>mailto:adrian.kla...@aklaver.com>
>> >><mailto:adrian.kla...@aklaver.com
>> >><mailto:adrian.kla...@aklaver.com>>> wrote:
>> >>
>> >> On 10/20/2015 05:48 PM, anj patnaik wrote:
>> >>
>> >> Several weeks ago, I successfully
>> >> downloaded postgresql-9.4.4-3-linux-x64.run on a Linux
>>

Re: [GENERAL] Red Hat Policies Regarding PostgreSQL

2015-10-28 Thread Scott Mead
On Wed, Oct 28, 2015 at 7:10 AM, Mark Morgan Lloyd <
markmll.pgsql-gene...@telemetry.co.uk> wrote:

> Tom Lane wrote:
>
>> Michael Convey  writes:
>>
>>> Due to policies for Red Hat family distributions, the PostgreSQL
>>> installation will not be enabled for automatic start or have the database
>>> initialized automatically.
>>>
>>
>> To which policies are they referring? Licensing, security, or other?​
>>>
>>
>> Packaging policy: daemons shall not run merely by virtue of having been
>> installed.  Otherwise, if you install a boatload of software without
>> checking each package, you'd have a boatload of probably-unwanted and
>> possibly-incorrectly-configured daemons running.  Which is a performance
>> problem and likely a security hazard too.
>>
>> It's a good policy IMO (though I used to work there so no doubt I've just
>> drunk too much Red Hat koolaid).
>>
>
> Seems reasonable. In fact somewhat better than current KDE as in e.g.
> Debian "Jessie", which embeds a copy of MySQL whether the user wants to
> use it or not.
>
>
Ubuntu 15 Desktop enables mysqld by default.  In fact, it doesn't seem to
hook up with systemd in any obvious way to disable autostart.

--Scott



> --
> Mark Morgan Lloyd
> markMLl .AT. telemetry.co .DOT. uk
>
> [Opinions above are the author's, not those of his employers or colleagues]
>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


Re: [GENERAL] can postgres run well on NFS mounted partitions?

2015-11-10 Thread Scott Mead
On Tue, Nov 10, 2015 at 6:26 PM, anj patnaik  wrote:

> Thanks for the feedback. I have setup a second Linux VM (running RHEL
> 5.11) and Postgres 9.4. I ran some insertions today from a client running
> on Windows. The client does a loop of 30 updates.
>
> I am seeing about 10-20% increase in latency in the case where DB is on
> NFS (over TCP) compared to directly on disk.
>
> The other machine I am using to compare is running RHEL 6.5 and Postgres
> 9.4.
>
> Are there any specific tests that are recommended to test that postgres
> over NFS works well?
>
> I am planning on doing a few large data inserts and fetches.
>
> With the little testing, the DB over NFS appears fine.
>
>
Don't do it.  Period.   I've used 4 big-vendor appliances with NFS as well
as my own server.  With maybe 3 exceptions, most of the 'total-data-loss'
scenarios I've dealt with regarding transactional data were due to NFS.

--Scott


>
> Thanks for any feedback.
>
> On Tue, Nov 10, 2015 at 4:29 AM, Albe Laurenz 
> wrote:
>
>> anj patnaik wrote:
>> > Can anyone advise if there are problems running postgres over NFS
>> mounted partitions?
>> >
>> > I do need reliability and high speed.
>>
>> I have got the advice not to use NFS from a number of people who should
>> know,
>> but there are also knowledgable people who use PostgreSQL with NFS.
>>
>> You need hard foreground mounts, and you need an NFS server that is
>> guaranteed not to lose data that the client has synced to disk.
>>
>> You should probably only consider storage systems that directly
>> support NFS, and you should run performance and reliability tests.
>>
>> Yours,
>> Laurenz Albe
>>
>
>


Re: [GENERAL] Best tool to pull from mssql

2015-11-11 Thread Scott Mead
On Wed, Nov 11, 2015 at 9:37 AM, taspotts  wrote:

> Take a look at  Pentaho Kettle
>   .  They have a
> free community edition.
>
>
+10


> I use it frequently to migrate data from MSSQL to Postgres.  It has a nice
> GUI for setting everything up and the transforms/jobs can be scheduled.
>
>
>
>
> --
> View this message in context:
> http://postgresql.nabble.com/Best-tool-to-pull-from-mssql-tp5873415p5873509.html
> Sent from the PostgreSQL - general mailing list archive at Nabble.com.
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


Re: [GENERAL] Checkpoint Err on Startup of Rsynced System

2016-05-31 Thread Scott Mead
On Tue, May 31, 2016 at 1:13 PM, Jim Longwill  wrote:

> I am trying to setup a 2nd, identical, db server (M2) for development and
> I've run into a problem with starting up the 2nd Postgres installation.
>
> Here's what I've done:
>   1) did a 'clone' of 1st (production) machine M1 (so both machines on
> Cent OS 7.2)
>   2) setup an rsync operation, did a complete 'rsync' from M1 to M2
>   3) did a final 'CHECKPOINT' command on M1 postgres
>   4) shutdown postgres on M1 with 'pg_ctl stop'
>   5) did final 'rsync' operation  (then restarted postgres on M1 with
> 'pg_ctl start')
>   6) tried to startup postgres on M2
>
> It won't start, & in the log file gives the error message:
> ...
> < 2016-05-31 09:02:52.337 PDT >LOG:  invalid primary checkpoint record
> < 2016-05-31 09:02:52.337 PDT >LOG:  invalid secondary checkpoint record
> < 2016-05-31 09:02:52.337 PDT >PANIC:  could not locate a valid checkpoint
> record
> < 2016-05-31 09:02:53.184 PDT >LOG:  startup process (PID 26680) was
> terminated by signal 6: Aborted
> < 2016-05-31 09:02:53.184 PDT >LOG:  aborting startup due to startup
> process failure
>
> I've tried several times to do this but always get this result.  So, do I
> need to do a new 'initdb..' operation on machine M2 + restore from M1
> backups?  Or is there another way to fix this?
>

You should have stopped M1 prior to taking the backup.  If you can't do
that, it can be done online via:

   1. Set up archiving
   2. select pg_start_backup('some label');
   3. <take the file-level copy, e.g. rsync the data directory>
   4. select pg_stop_backup();

  Without archiving and the pg_[start|stop]_backup, you're not guaranteed
anything.  You could use an atomic snapshot (LVM, storage, etc...), but
it's got to be a true snapshot.  Without that, you need archiving + start /
stop backup.

Last section of:
https://wiki.postgresql.org/wiki/Simple_Configuration_Recommendation#Physical_Database_Backups
will take you to:
https://www.postgresql.org/docs/current/static/continuous-archiving.html
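
A hedged sketch of steps 2-4 as they'd run from a shell (hosts and paths are
illustrative; step 1's archive_command must already be working):

psql -h M1 -U postgres -c "SELECT pg_start_backup('clone for M2');"
rsync -a --delete --exclude postmaster.pid M1:/var/lib/pgsql/data/ /var/lib/pgsql/data/
psql -h M1 -U postgres -c "SELECT pg_stop_backup();"

M2 then needs the archived WAL (via restore_command in recovery.conf, or by
copying the files into pg_xlog) to reach a consistent checkpoint at startup.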

--Scott


--o--o--o--o--o--o--o--o--o--o--o--o--
> Jim Longwill
> PSMFC Regional Mark Processing Center
> jlongw...@psmfc.org
> --o--o--o--o--o--o--o--o--o--o--o--o--
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>



-- 
--
Scott Mead
Sr. Architect
*OpenSCG <http://openscg.com>*
http://openscg.com


Re: [GENERAL] Replication

2016-06-02 Thread Scott Mead
On Thu, Jun 2, 2016 at 10:16 AM, Melvin Davidson 
wrote:

> It's been a few years since I worked with slony, and you did not state
> which version of slony or PostgreSQL you are working with, nor did you
> indicate the O/S.
>

I think the OP had pointed to using streaming replication.


> That being said, you should be able to formulate a query with a join
> between sl_path & sl_node that gives you the information you need.
>
> On Thu, Jun 2, 2016 at 9:32 AM, Bertrand Paquet <
> bertrand.paq...@doctolib.fr> wrote:
>
>> Hi,
>>
>> On a hot standby streaming server, is there any way to know, in SQL, the
>> IP of the current master?
>> The solution I have is to read the recovery.conf file to find
>> primary_conninfo, but it can be wrong.
>>
>>
I've run into this as well.  The only way is recovery.conf.
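
If you're a superuser, one way to see it from SQL is to read the file directly
(a sketch; it assumes recovery.conf lives in the data directory, and it still
only shows what primary_conninfo claims):

postgres=# SELECT pg_read_file('recovery.conf');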

--Scott



> Regards,
>>
>> Bertrand
>>
>
>
>
> --
> *Melvin Davidson*
> I reserve the right to fantasize.  Whether or not you
> wish to share my fantasy is entirely up to you.
>



-- 
--
Scott Mead
Sr. Architect
*OpenSCG <http://openscg.com>*
http://openscg.com


Re: R: [GENERAL] Vacuum full: alternatives?

2016-06-20 Thread Scott Mead
On Mon, Jun 20, 2016 at 6:13 AM, Andreas Kretschmer  wrote:

>
>
> Am 20.06.2016 um 11:43 schrieb Job:
>
>> Hi Andreas,
>>
>> I would suggest running only autovacuum; with time you will see the table
>>> stop growing. There is no need for VACUUM FULL.
>>>
>> So new records, when pg_bulkload-ed, will reuse the "marked-free"
>> space?
>>
>
>
> exactly, that's the task for vacuum
>
>
I believe that free space is only available to UPDATE, not INSERT.



>
>
> Andreas
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>



-- 
--
Scott Mead
Sr. Architect
*OpenSCG <http://openscg.com>*
http://openscg.com


Re: [GENERAL] dblink authentication failed

2016-06-27 Thread Scott Mead


> On Jun 27, 2016, at 03:38, Kiss Attila  wrote:
> 
> Hello,
> I have some problem with using dblink_connect_u() in my function. 
> When I provide the password in the connection string, there is no problem, 
> but when I don’t, it says:
> ERROR:  could not establish connection
> DETAIL:  fe_sendauth: no password supplied
> However the .pgpass file in the path /var/lib/postgres/.pgpass does contain 
> the proper password.
> I have tried to connect to the remote server with psql from bash with the 
> following command:
> # psql -h 192.168.1.40 -U myuser -w remote_db_name
> it works fine.
> My .pgpass file does have the line:
> 192.168.1.40:5432:*:myuser:mypassword
> It seems the psql finds the proper password in the .pgpass file but dblink 
> doesn’t.

When dblink makes the connection, the client is actually the database server 
itself. This means that your .pgpass file needs to be set up on the database 
server, for the OS user running the postgres process. 
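
In other words (a sketch; the postgres user's home directory varies by distro;
on Debian/Ubuntu it is typically /var/lib/postgresql):

# On the database server (the dblink "client"), as root or the postgres OS user:
echo '192.168.1.40:5432:*:myuser:mypassword' >> ~postgres/.pgpass
chown postgres:postgres ~postgres/.pgpass
chmod 600 ~postgres/.pgpass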

> The local and the remote server are running on Ubuntu server 14.04, both of 
> them are PostgreSQL 9.4.
> The .pgpass file properties look this:
> # ls -la ./.pgpass
> -rw--- 1 postgres postgres 193 jún   24 13:54 ./.pgpass
> 
> Any ideas are welcome.
> Regards,
> Attila
> 


Re: [GENERAL] pasting a lot of commands to psql

2016-07-07 Thread Scott Mead
On Thu, Jul 7, 2016 at 1:30 PM, Dmitry Shalashov  wrote:

> Hi Sameer,
>
> I am trying to copy-paste (and execute) random snippets of SQL to psql
> console.
>
> There is another ways to do it, which do not involve copy-paste, but I am
> wondering why is copy-paste won't work. What exactly is happening there...
>
>
> Dmitry Shalashov, surfingbird.ru & relap.io
>
> 2016-07-07 20:26 GMT+03:00 Sameer Kumar :
>
>>
>>
>> On Fri, 8 Jul 2016, 1:23 a.m. Dmitry Shalashov, 
>> wrote:
>>
>>> Hi everyone.
>>>
>>> Let say that I have some sql file with like hundred of simple statements
>>> in it. I `cat` it, copy it to buffer, go to my beloved psql and insert it
>>> there.
>>> But somewhere after first few lines it screws over:
>>>
>>> b2b=> BEGIN;
>>> b2b=> INSERT INTO oko_topsites VALUES('russian_federation','ati.su
>>> ',0,NULL,5);
>>> INSERT INTO oko_topsites VALUES('russian_federation'b2b=> INSERT INTO
>>> oko_topsites VALUES('russian_federation','atn.ua',0,NULL,5);
>>> b2b=> INSERT INTO oko_topsites VALUES('russian_federation','audit-it.ru
>>> ',0,NULL,5);
>>> b2b=> INSERT INTO oko_topsites VALUES('russian_federation','aup.ru
>>> ',0,NULL,5);
>>> '
>>> b2b=> INSERT INTO oko_topsites VALUES('russian_federation','
>>> autocentre.ua',0,NULL,5);
>>>
>>> b2b=> INSERT INTO oko_topsites VALUES('russian_federation','autodoc.ru
>>> ',0,NULL,5);
>>>
>>> b2b=> INSERT INTO oko_topsites VALUES('russian_federation','calend.ru'
>>>
>>> b2b(> INSERT INTO oko_topsites VALUES('russian_federation','
>>> calorizator.ru',0,NULL,5)
>>>
>>> Unclosed quotes, unclosed parenthesis - anyway it wont work.
>>>
>>> How to safely insert big number of statements to psql at once?
>>> I am aware about "execute this file" \i option of psql, that is not the
>>> answer I am looking for, thanks :-)
>>>
>>
My personal favorite for this exact thing is to use '\e'.

  When you are in psql, \e (on *nix) will open a temp file in
whatever your $EDITOR variable is set to (I use vim).  Paste your data, then
save and close the file.  It will put you back into psql and execute the
command for you.

--Scott


>
>> What are you exactly aiming to do?
>>
>> Have you tried -
>> psql  < myfile
>>
>>
>>
>>> Dmitry Shalashov, surfingbird.ru & relap.io
>>>
>> --
>> --
>> Best Regards
>> Sameer Kumar | DB Solution Architect
>> *ASHNIK PTE. LTD.*
>>
>> 101 Cecil Street, #11-11 Tong Eng Building, Singapore 069 533
>>
>> T: +65 6438 3504 | M: +65 8110 0350 | www.ashnik.com
>>
>
>


-- 
--
Scott Mead
Sr. Architect
*OpenSCG <http://openscg.com>*
http://openscg.com


Re: [GENERAL] pasting a lot of commands to psql

2016-07-07 Thread Scott Mead
On Thu, Jul 7, 2016 at 1:39 PM, Scott Mead  wrote:

>
>
> On Thu, Jul 7, 2016 at 1:30 PM, Dmitry Shalashov 
> wrote:
>
>> Hi Sameer,
>>
>> I am trying to copy-paste (and execute) random snippets of SQL to psql
>> console.
>>
>> There is another ways to do it, which do not involve copy-paste, but I am
>> wondering why is copy-paste won't work. What exactly is happening there...
>>
>>
>> Dmitry Shalashov, surfingbird.ru & relap.io
>>
>> 2016-07-07 20:26 GMT+03:00 Sameer Kumar :
>>
>>>
>>>
>>> On Fri, 8 Jul 2016, 1:23 a.m. Dmitry Shalashov, 
>>> wrote:
>>>
>>>> Hi everyone.
>>>>
>>>> Let say that I have some sql file with like hundred of simple
>>>> statements in it. I `cat` it, copy it to buffer, go to my beloved psql and
>>>> insert it there.
>>>> But somewhere after first few lines it screws over:
>>>>
>>>> b2b=> BEGIN;
>>>> b2b=> INSERT INTO oko_topsites VALUES('russian_federation','ati.su
>>>> ',0,NULL,5);
>>>> INSERT INTO oko_topsites VALUES('russian_federation'b2b=> INSERT INTO
>>>> oko_topsites VALUES('russian_federation','atn.ua',0,NULL,5);
>>>> b2b=> INSERT INTO oko_topsites VALUES('russian_federation','audit-it.ru
>>>> ',0,NULL,5);
>>>> b2b=> INSERT INTO oko_topsites VALUES('russian_federation','aup.ru
>>>> ',0,NULL,5);
>>>> '
>>>> b2b=> INSERT INTO oko_topsites VALUES('russian_federation','
>>>> autocentre.ua',0,NULL,5);
>>>>
>>>> b2b=> INSERT INTO oko_topsites VALUES('russian_federation','autodoc.ru
>>>> ',0,NULL,5);
>>>>
>>>> b2b=> INSERT INTO oko_topsites VALUES('russian_federation','calend.ru'
>>>>
>>>> b2b(> INSERT INTO oko_topsites VALUES('russian_federation','
>>>> calorizator.ru',0,NULL,5)
>>>>
>>>> Unclosed quotes, unclosed parenthesis - anyway it wont work.
>>>>
>>>> How to safely insert big number of statements to psql at once?
>>>> I am aware about "execute this file" \i option of psql, that is not the
>>>> answer I am looking for, thanks :-)
>>>>
>>>
> My personal favorite for this exact thing is to use '\e'
>
>   When you are in psql, if you \e (on *nix) it will open a temp file in
> whatever your $EDITOR variable is set to (I use vim).
>

On Windows, it actually pops up notepad.exe.  Save and close; same behavior.

--Scott


> Paste your data, then save-close the file.  It will put you back into psql
> and execute the command for you.
>
> --Scott
>
>
>>
>>> What are you exactly aiming to do?
>>>
>>> Have you tried -
>>> psql  < myfile
>>>
>>>
>>>
>>>> Dmitry Shalashov, surfingbird.ru & relap.io
>>>>
>>> --
>>> --
>>> Best Regards
>>> Sameer Kumar | DB Solution Architect
>>> *ASHNIK PTE. LTD.*
>>>
>>> 101 Cecil Street, #11-11 Tong Eng Building, Singapore 069 533
>>>
>>> T: +65 6438 3504 | M: +65 8110 0350 | www.ashnik.com
>>>
>>
>>
>
>
> --
> --
> Scott Mead
> Sr. Architect
> *OpenSCG <http://openscg.com>*
> http://openscg.com
>



-- 
--
Scott Mead
Sr. Architect
*OpenSCG <http://openscg.com>*
http://openscg.com


Re: [GENERAL] Uber migrated from Postgres to MySQL

2016-07-27 Thread Scott Mead
On Wed, Jul 27, 2016 at 3:34 AM, Achilleas Mantzios <
ach...@matrix.gatewaynet.com> wrote:

> On 27/07/2016 10:15, Condor wrote:
>
>> On 26-07-2016 21:04, Dorian Hoxha wrote:
>>
>>> Many comments: https://news.ycombinator.com/item?id=12166585
>>>
>>> https://www.reddit.com/r/programming/comments/4uph84/why_uber_engineering_switched_from_postgres_to/
>>>
>>> On Tue, Jul 26, 2016 at 7:39 PM, Guyren Howe  wrote:
>>>
>>> Honestly, I've never heard of anyone doing that. But it sounds like
>>>> they had good reasons.
>>>>
>>>> https://eng.uber.com/mysql-migration/
>>>>
>>>> Thoughts?
>>>>
>>>> --
>>>> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
>>>> To make changes to your subscription:
>>>> http://www.postgresql.org/mailpref/pgsql-general
>>>>
>>>
>>
>> They are right about upgrades.
>> It's hard to shut down a 1 TB database and wait a couple of days for
>> pg_upgrade to finish while the database is offline.
>> In some distros, after upgrading the PG version you don't have the old binaries
>> and libraries, so you need to do a full dump and restore, which takes time and disk space.
>>
>
> Our last 1TB upgrade from 9.0 -> 9.3 went like a charm in something like
> seconds (with the -k option).
> However, be warned that the planning and testing took one full week.
>

That being said, it doesn't really provide a back-out plan.  The beauty of
replication is that you can halt the upgrade at any point if need be and
cut your (hopefully small) losses. If you use -k, you are all in.  Sure,
you could set up a new standby, stop traffic, upgrade whichever node you'd
like (using -k) and still have the other ready in the event of total
catastrophe.  More often than not, I see DBAs and sysads lead the
conversation with "well, postgres can't replicate from one version to
another, so instead ..." followed by a fast-glazing of management's eyes
and a desire to buy a 'commercial database'.

All in all, Evan's blog seemed to start out decently technical, it quickly
took a turn with half-truths, outdated information and, in some cases,
downright fud:

 "The bug we ran into only affected certain releases of Postgres 9.2 and
has been fixed for a long time now. However, we still find it worrisome
that this class of bug can happen at all. A new version of Postgres could
be released at any time that has a bug of this nature, and because of the
way replication works, this issue has the potential to spread into all of
the databases in a replication hierarchy."


ISTM that they needed a tire swing
<http://i0.wp.com/blogs.perficient.com/perficientdigital/files/2011/07/treecomicbig.jpg>
and were using a dump truck.  Hopefully they vectored somewhere in the
middle and got themselves a nice sandbox.

--Scott


>
>
>>
>> Regards,
>> Hristo S.
>>
>>
>>
>>
>>
>>
>
> --
> Achilleas Mantzios
> IT DEV Lead
> IT DEPT
> Dynacom Tankers Mgmt
>
>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>



-- 
--
Scott Mead
Sr. Architect
*OpenSCG <http://openscg.com>*
http://openscg.com


Re: [GENERAL] Unable to create oracle_fdw (foreign data wrapper) extension

2016-09-15 Thread Scott Mead
On Thu, Sep 15, 2016 at 3:25 PM, Arun Rangarajan 
wrote:

> I am following the instructions here:
> http://blog.dbi-services.com/connecting-your-postgresql-
> instance-to-an-oracle-database/
> to install Oracle foreign data wrapper, oracle_fdw, on a PostgreSQL server.
>
> ---
> Oracle version: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0
> - 64bit Production, running on Red Hat Linux 7.2
>
> PostgreSQL version: PostgreSQL 9.4.4 on x86_64-unknown-linux-gnu, compiled
> by gcc (Debian 4.7.2-5) 4.7.2, 64-bit, running on Debian 7 (wheezy).
> ---
>
> I was able to install sqlplus and connect from PostgreSQL server to Oracle
> server using sqlplus successfully, so connectivity is not a problem.
>
> But when I try to create the extension, I get the following error:
>
>
---
> postgres=# create extension oracle_fdw;
> server closed the connection unexpectedly
> This probably means the server terminated abnormally
> before or while processing the request.
> The connection to the server was lost. Attempting reset: Failed.
>

Hmm, odd that it's causing a crash.

  Which version of the oracle_fdw and which version of the oracle libraries
are you linked to?  Make sure to check 'ldd oracle_fdw.so'

--Scott



> ---
>
> Then I took clues from https://github.com/dalibo/pg_qualstats/issues/1
> and added oracle_fdw to shared_preload_libraries in postgresql.conf like
> this:
>
> shared_preload_libraries = 'oracle_fdw'
>
> but now I can't restart Postgres:
>
> ---
> # service postgresql restart
> [] Restarting PostgreSQL 9.4 database server: main[] The
> PostgreSQL server failed to start. Please check the log output:
> t=2016-09-15 11:05:42 PDT d= h= p=23300 a=FATAL: XX000: invalid cache
> ID[FAILt=2016-09-15 11:05:42 PDT d= h= p=23300 a=LOCATION:
> SearchSysCacheList, syscache.c:1219 ... failed!
>  failed!
> ---
>
> Looking into /var/log/postgresql/postgresql-9.4-main.log I only see these
> two lines:
>
> t=2016-09-15 11:05:42 PDT d= h= p=23300 a=FATAL:  XX000: invalid cache ID:
> 41
> t=2016-09-15 11:05:42 PDT d= h= p=23300 a=LOCATION:  SearchSysCacheList,
> syscache.c:1219
>
> Removing oracle_fdw from shared_preload_libraries allows postgres to be
> restarted, so this is the one causing restart to fail.
>
> How to fix this and get the foreign data wrapper working?
>
> Thank you.
>
> ​
>



-- 
--
Scott Mead
Sr. Architect
*OpenSCG <http://openscg.com>*
http://openscg.com


Re: [GENERAL] Unable to create oracle_fdw (foreign data wrapper) extension

2016-09-15 Thread Scott Mead
On Thu, Sep 15, 2016 at 4:57 PM, Arun Rangarajan 
wrote:

> Thanks, Scott.
>
> oracle_fdw version 1.5.0 from http://pgxn.org/dist/oracle_fdw/
>
> Oracle client version: instantclient 12.1
>

I've had problems using anything > instant client 10.  Give it a shot.

--Scott



>
> /usr/lib/postgresql/9.4/lib# ldd oracle_fdw.so
> linux-vdso.so.1 =>  (0x7fff50744000)
> libclntsh.so.12.1 => /usr/lib/oracle/12.1/client64/lib/libclntsh.so.12.1
> (0x7f44769f1000)
> libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x7f447000)
> libnnz12.so => /usr/lib/oracle/12.1/client64/lib/libnnz12.so
> (0x7f4475f4f000)
> libons.so => /usr/lib/oracle/12.1/client64/lib/libons.so
> (0x7f4475d0b000)
> libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x7f4475b07000)
> libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x7f4475884000)
> libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0
> (0x7f4475668000)
> libnsl.so.1 => /lib/x86_64-linux-gnu/libnsl.so.1 (0x7f447545)
> librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x7f4475247000)
> libaio.so.1 => /lib/x86_64-linux-gnu/libaio.so.1 (0x7f4475045000)
> libclntshcore.so.12.1 => 
> /usr/lib/oracle/12.1/client64/lib/libclntshcore.so.12.1
> (0x7f4474af5000)
> /lib64/ld-linux-x86-64.so.2 (0x7f447990c000)
>
>
>
> On Thu, Sep 15, 2016 at 1:10 PM, Scott Mead  wrote:
>
>>
>>
>> On Thu, Sep 15, 2016 at 3:25 PM, Arun Rangarajan <
>> arunrangara...@gmail.com> wrote:
>>
>>> I am following the instructions here:
>>> http://blog.dbi-services.com/connecting-your-postgresql-inst
>>> ance-to-an-oracle-database/
>>> to install Oracle foreign data wrapper, oracle_fdw, on a PostgreSQL
>>> server.
>>>
>>> ---
>>> Oracle version: Oracle Database 12c Enterprise Edition Release
>>> 12.1.0.2.0 - 64bit Production, running on Red Hat Linux 7.2
>>>
>>> PostgreSQL version: PostgreSQL 9.4.4 on x86_64-unknown-linux-gnu,
>>> compiled by gcc (Debian 4.7.2-5) 4.7.2, 64-bit, running on Debian 7
>>> (wheezy).
>>> ---
>>>
>>> I was able to install sqlplus and connect from PostgreSQL server to
>>> Oracle server using sqlplus successfully, so connectivity is not a problem.
>>>
>>> But when I try to create the extension, I get the following error:
>>>
>>>
>> ---
>>> postgres=# create extension oracle_fdw;
>>> server closed the connection unexpectedly
>>> This probably means the server terminated abnormally
>>> before or while processing the request.
>>> The connection to the server was lost. Attempting reset: Failed.
>>>
>>
>> Hmm, odd that it's causing a crash.
>>
>>   Which version of the oracle_fdw and which version of the oracle
>> libraries are you linked to?  Make sure to check 'ldd oracle_fdw.so'
>>
>> --Scott
>>
>>
>>
>>> ---
>>>
>>> Then I took clues from https://github.com/dalibo/pg_qualstats/issues/1
>>> and added oracle_fdw to shared_preload_libraries in postgresql.conf like
>>> this:
>>>
>>> shared_preload_libraries = 'oracle_fdw'
>>>
>>> but now I can't restart Postgres:
>>>
>>> ---
>>> # service postgresql restart
>>> [] Restarting PostgreSQL 9.4 database server: main[] The
>>> PostgreSQL server failed to start. Please check the log output:
>>> t=2016-09-15 11:05:42 PDT d= h= p=23300 a=FATAL: XX000: invalid cache
>>> ID[FAILt=2016-09-15 11:05:42 PDT d= h= p=23300 a=LOCATION:
>>> SearchSysCacheList, syscache.c:1219 ... failed!
>>>  failed!
>>> ---
>>>
>>> Looking into /var/log/postgresql/postgresql-9.4-main.log I only see
>>> these two lines:
>>>
>>> t=2016-09-15 11:05:42 PDT d= h= p=23300 a=FATAL:  XX000: invalid cache
>>> ID: 41
>>> t=2016-09-15 11:05:42 PDT d= h= p=23300 a=LOCATION:  SearchSysCacheList,
>>> syscache.c:1219
>>>
>>> Removing oracle_fdw from shared_preload_libraries allows postgres to be
>>> restarted, so this is the one causing restart to fail.
>>>
>>> How to fix this and get the foreign data wrapper working?
>>>
>>> Thank you.
>>>
>>> ​
>>>
>>
>>
>>
>> --
>> --
>> Scott Mead
>> Sr. Architect
>> *OpenSCG <http://openscg.com>*
>> http://openscg.com
>>
>
>


-- 
--
Scott Mead
Sr. Architect
*OpenSCG <http://openscg.com>*
http://openscg.com


Re: [GENERAL] Graphical entity relation model

2016-09-28 Thread Scott Mead
On Wed, Sep 28, 2016 at 4:17 PM, jotpe  wrote:

> Does anybody know a Software for generating graphical entity relation
> models from existing postgresql databases?
>
>
My personal favorites (in order):
schemaspy.sf.net
https://www.dbvis.com/
http://www.sqlpower.ca/page/architect


> Best regards Johannes




-- 
--
Scott Mead
Sr. Architect
*OpenSCG <http://openscg.com>*
http://openscg.com


Re: [SPAM] [GENERAL] psql 9.3 automatic recovery in progress

2016-10-10 Thread Scott Mead
On Mon, Oct 10, 2016 at 6:15 PM, Tom Lane  wrote:

> Adrian Klaver  writes:
> > On 10/10/2016 12:18 PM, Periko Support wrote:
> >> I was on vacation, but the issue have the same behavior:
>
> > Actually no. Before you had:
>
> > 2016-09-12 09:00:01 PDT LOG:  server process (PID 23958) was
> > terminated by signal 9: Killed
>
> > Now you have:
>
> > 2016-10-10 07:50:09 PDT WARNING:  terminating connection because of
> > crash of another server process
>
> Most likely it *is* the same thing but the OP trimmed the second log
> excerpt too much.  The "crash of another server process" complaints
> suggest strongly that there was already another problem and this
> is just part of the postmaster's kill-all-children-and-restart
> recovery procedure.
>
> Now, if there really is nothing before this in the log, another possible
> theory is that something decided to send the child processes a SIGQUIT
> signal, which would cause them to believe that the postmaster had told
> them to commit hara-kiri.  I only bring this up because we were already
> shown a script sending random SIGKILLs ... so random SIGQUITs wouldn't be
> too hard to credit either.  But the subsequent log entries don't quite
> square with that idea; if the postmaster weren't already expecting the
> children to die, it would have reacted differently.
>


The better solution is to do this in one query, and more safely:

select pid, usename, datname, pg_terminate_backend(pid)
   FROM pg_stat_activity
  WHERE usename = 'openerp'
   AND  now() - query_start > '15 minutes'::interval;

   This will use the built-in pg_terminate_backend() for you in one shot.

 --Scott




>
> regards, tom lane
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>



-- 
--
Scott Mead
Sr. Architect
*OpenSCG <http://openscg.com>*
http://openscg.com


Re: [GENERAL] Postgresql using lxd faild to find address

2016-10-11 Thread Scott Mead
On Mon, Oct 10, 2016 at 11:25 AM, Scottix  wrote:

> Hi I am using postgresql 9.5 in lxd container on Ubuntu. Using the stock
> 64bit 16.04 ubuntu image. Postgres is working just fine within the
> container, but when I try to assign the ip address associated to the
> container it fails to recognize it at boot. When I restart postgresql it
> recognizes it just fine. So it seems like it is not getting the ip address,
> maybe it is a little bit slower. Any way to delay the boot time?
>
> Steps to reproduce:
> 1. Setup lxd container
> 2. Install postgresql-9.5
> 3. Configure ip address in the config
> 4. restart container
>
> Symptoms:
> 1. postgresql running fine
> 2. No ip address assigned to postgresql
>
> Logs:
> 2016-10-10 14:40:33 UTC [143-1] LOG:  could not bind IPv4 socket: Cannot
> assign requested address
> 2016-10-10 14:40:33 UTC [143-2] HINT:  Is another postmaster already
> running on port 5432? If not, wait a few seconds and retry.
> 2016-10-10 14:40:33 UTC [143-3] WARNING:  could not create listen socket
> for "10.0.3.51"
>

Set your listen_addresses = '*'
  This means any and all available IP addresses.  If you hardcode an IP in
the config and that address isn't up yet when the server starts (as can
happen at container boot), you'll get exactly this type of error.  With '*',
no matter what interfaces you have or when they come up, you'll get a bind.
The downside is that it will listen on ALL network interfaces on the box.
Sometimes you explicitly don't want a certain interface to be listening; you
can fix that with a firewall or pg_hba.conf, however.
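  A minimal sketch of that combination -- the subnet below is just a
placeholder for your container network:

    # postgresql.conf: bind whatever interfaces exist when the server starts
    listen_addresses = '*'

    # pg_hba.conf: then restrict who may actually connect
    host    all    all    10.0.3.0/24    md5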

--Scott



> 2016-10-10 14:40:33 UTC [144-1] LOG:  database system was shut down at
> 2016-10-07 23:05:34 UTC
> 2016-10-10 14:40:33 UTC [144-2] LOG:  MultiXact member wraparound
> protections are now enabled
> 2016-10-10 14:40:33 UTC [143-4] LOG:  database system is ready to accept
> connections
> 2016-10-10 14:40:33 UTC [148-1] LOG:  autovacuum launcher started
>
> #systemctl restart postgresql
>
> 2016-10-10 15:17:33 UTC [2353-1] LOG:  database system was shut down at
> 2016-10-10 15:17:32 UTC
> 2016-10-10 15:17:33 UTC [2353-2] LOG:  MultiXact member wraparound
> protections are now enabled
> 2016-10-10 15:17:33 UTC [2352-1] LOG:  database system is ready to accept
> connections
> 2016-10-10 15:17:33 UTC [2357-1] LOG:  autovacuum launcher started
>
> Thanks
>
>
>
>


-- 
--
Scott Mead
Sr. Architect
*OpenSCG <http://openscg.com>*
http://openscg.com


Re: [GENERAL] ANN: Upscene releases Database Workbench 5.2.4

2016-10-12 Thread Scott Mead
On Wed, Oct 12, 2016 at 2:51 AM, Martijn Tonies (Upscene Productions) <
m.ton...@upscene.com> wrote:

> Upscene Productions is proud to announce the availability of
>>>> the next version of the popular multi-DBMS development tool:
>>>> " Database Workbench 5.2.4 "
>>>> The 5.2 release includes support for PostgreSQL and adds several other
>>>> features and bugfixes.
>>>> Database Workbench 5 comes in multiple editions with different pricing
>>>> models, there's always a version that suits you!
>>>>
>>>
>>>
>>> I'm not sure announcements of commercial software updates belong on the
>>> pgsql-general email list.   if every vendor of  apackage with postgres
>>> support posted announcements of each incremental update, we'd be buried
>>> in
>>> spam.
>>>
>>
>> Yeah.. If it applies somewhere that would be pgsql-announce.
>>
>
> For the record, it was my understanding pgsql-announce was by PostgreSQL
> only.
>

Nope, pgsql-announce is for exactly these types of things.  It is a
moderated list, but that's the appropriate venue.

--Scott



>
>
>
> With regards,
>
> Martijn Tonies
> Upscene Productions
> http://www.upscene.com
>
> Database Workbench - developer tool for Oracle, MS SQL Server, PostgreSQL,
> SQL Anywhere, MySQL, InterBase, NexusDB and Firebird.
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>



-- 
--
Scott Mead
Sr. Architect
*OpenSCG <http://openscg.com>*
http://openscg.com


Re: [GENERAL] Remove Standby (SLAVE) from Primary (MASTER) - Postgres9.1

2016-10-27 Thread Scott Mead


On 10/27/16 10:12 AM, Joanna Xu wrote:
> We need to keep the data as it is so remving data fold is not feasible.
> If I run "pg_ctl promote" on the slave to make it fail over, this will break 
> the replication and then clean up the configuration related to replication.  
> Any feedback?

Just to [hopefully] clear up some of the confusion..

Somewhere I saw you mention 9.1.  Since you're using 9.1, you won't
have any replication slots in use.  If you were using replication slots,
the master server could start to queue up WAL files waiting for that
node to come back, which could cause you to run out of disk space.
In that case you would need to (after promotion) connect to the master
and drop the slot:
- SELECT * FROM pg_replication_slots;
- SELECT pg_drop_replication_slot(slot_name);

https://www.postgresql.org/docs/9.4/static/functions-admin.html#FUNCTIONS-REPLICATION
https://www.postgresql.org/docs/9.4/static/catalog-pg-replication-slots.html

Again, since you're on 9.1, this doesn't matter, BUT, it's important to
keep in mind for the future.

If you are using pure streaming replication, there is nothing that needs
'clean-up' on the master.  Doing a promote will indeed do what you want
to accomplish (NB: See below if you are also archiving your WAL).

That being said, many setups of streaming will also include WAL
(pg_xlog) archiving (check the archive_mode and archive_command in your
postgresql.conf on the master).  If you have configured archiving,
chances are, you will have some sort of cleanup job on the slave that
deals with old archived logs.  You would either

A. Want to stop archiving (for the node you are promoting)
B. Ensure that cleanup is still taking place

  These steps are not done for you when you promote a slave, it's
something that you need to deal with yourself.
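A quick way to check whether archiving is in play at all (run this on the
master -- just a psql sketch):

    SHOW archive_mode;
    SHOW archive_command;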

There is one other thing to keep in mind.  If you are promoting a
database, it's going to become read-write.  I would make sure to either
erect a firewall around the promoted slave or edit your pg_hba.conf file
to reject traffic from your applications.  If, for some reason your
application were to connect, you could end up with a split-brain
scenario which would mean that some of your data is in the old master
and some is in the newly promoted slave.  This can be very, very
dangerous.  As long as you exercise care, you will be fine, it's just an
important point that I've seen haunt people in the past.
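As a sketch of the pg_hba.conf approach -- the user name and subnet are
placeholders for your application's credentials and network -- add something
like this on the newly promoted node, above any rule that would otherwise
allow those connections (pg_hba.conf is matched top-down):

    host    all    app_user    192.168.1.0/24    reject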
> Thanks,
> Joanna
>
> -Original Message-
> From: Adrian Klaver [mailto:adrian.kla...@aklaver.com] 
> Sent: Wednesday, October 26, 2016 7:50 PM
> To: Michael Paquier ; Joanna Xu 
> 
> Cc: pgsql-general@postgresql.org
> Subject: Re: [GENERAL] Remove Standby (SLAVE) from Primary (MASTER) - 
> Postgres9.1
>
> On 10/26/2016 04:43 PM, Michael Paquier wrote:
>> On Wed, Oct 26, 2016 at 11:18 PM, Joanna Xu  wrote:
>>> I’ve been looking for a procedure (step by step) on how to remove 
>>> Standby
>>> (SLAVE) from Primary (MASTER) for Postgres9.1 in google and the 
>>> archived postings, but no luck.
>> Standby and master servers are two separate Postgres instances. So if 
>> you would like to remove a standby from a cluster. You just need to 
>> basically stop it, then remove its data folder. And you are done.
>> There is no complicated science here.
> Not sure that is what OP wants. From the original post:
>
> " ...
> 2.   After the standby is removed from the primary, both of nodes 
> are in standalone configuration. "
>
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
> This message and the information contained herein is proprietary and 
> confidential and subject to the Amdocs policy statement,
> you may review at http://www.amdocs.com/email_disclaimer.asp
>

-- 
Scott Mead
Sr. Architect
OpenSCG




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] deadlock error - version 8.4 on CentOS 6

2016-10-28 Thread Scott Mead


On 10/28/16 9:27 AM, Steve Clark wrote:
> On 10/28/2016 09:15 AM, Adrian Klaver wrote:
>> On 10/28/2016 05:28 AM, Steve Clark wrote:
>>> Hello List,
>>>
>>> I am occasionally seeing the following error:
>>> ALERT  3 sqlcode=-400 errmsg=deadlock detected on line 3351
>> So what exactly is it doing at line 3351?
>>
>>> from an application written using ecpg when trying an update to the table.
>>> Can autovacuum be causing this,
>>> since no one else is updating this database table.
>> Is there more then one instance of the application running?
>>
>>> Thanks,
>>>
>>>
> No. But I examined the pg_log/log_file and saw an error indicating it
> was autovacuum:
>
>
> 2016-10-27 09:47:02 EDT:srm2api:12968:LOG:  sending cancel to blocking
> autovacuum PID 12874
> 2016-10-27 09:47:02 EDT:srm2api:12968:DETAIL:  Process 12968 waits for
> ExclusiveLock on relation 955454549 of database 955447411.
> 2016-10-27 09:47:02 EDT:srm2api:12968:STATEMENT:  lock table
> t_unit_status_log in exclusive mode
> 2016-10-27 09:47:02 EDT::12874:ERROR:  canceling autovacuum task
> 2016-10-27 09:47:02 EDT::12874:CONTEXT:  automatic vacuum of table
> "srm2.public.t_unit_status_log"
> 2016-10-27 09:47:02 EDT:srm2api:9189:ERROR:  deadlock detected at
> character 8
> 2016-10-27 09:47:02 EDT:srm2api:9189:DETAIL:  Process 9189 waits for
> RowExclusiveLock on relation 955454549 of database 955447411; blocked
> by process 12968.
> Process 12968 waits for ExclusiveLock on relation 955454518 of
> database 955447411; blocked by process 9189.
> Process 9189: update t_unit_status_log set status_date = now (
> ) , unit_active = 'y' , last_updated_date = now ( ) , last_updated_by
> = current_user , devices_down = $1  where unit_serial_no = $2
> Process 12968: lock table t_unit in exclusive mode
>
> This is at the same time and same table that my application reported
> the error on.
>
> So I feel pretty confident this is the issue. I guess I should retry
> the update in my application.
>
> Thanks,

The problem is that you're doing:
'LOCK TABLE t_unit_status_log'

  If you were executing normal updates, autovacuum would be fine. 
Remove the exclusive lock from your code and you'll be fine.

--Scott


>
>
>
> -- 
> Stephen Clark
>

-- 
Scott Mead
Sr. Architect
OpenSCG



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Methods to quiesce PostgreSQL DB without configuring log archival

2016-11-23 Thread Scott Mead

> On Nov 22, 2016, at 01:23, MEERA  wrote:
> 
> Hi all,
> 
> If archive_mode is not configured, and i use snapshot solution for backup of 
> the server, how can i ensure data consistency? Is there a way to quiesce all 
> the connections to DB?

If your snapshot solution is atomic, then you are *probably* okay. I would do a 
few restores to test, but atomic snapshots give you a point in time and should 
be consistent. 

Personally, I like archiving because it removes all of the maybes, but, that's 
me :)

> 
> 
> Thanks,
> Meera


Re: [GENERAL] Disabling inheritance with query.

2016-12-22 Thread Scott Mead
On Thu, Dec 22, 2016 at 9:51 AM, Edmundo Robles 
wrote:

> I want to do that because,  I have  a  partitioned table  (big_table)  and
> others  (t1,t2,t3,t4)  have  foreign keys  reference to big_table  and i
> had many trobules at insert data, reading the doc:
> "A serious limitation of the inheritance feature is that indexes
> (including unique constraints) and foreign key constraints only apply to
> single tables, not to their inheritance children. "
>
> On Wed, Dec 21, 2016 at 4:58 PM, Tom Lane  wrote:
>
>> Edmundo Robles  writes:
>> > i need  disable  inheritance  from many tables in a query like
>> > "delete from pg_inherits where inhparent=20473"  instead alter table ...
>> > but  is safe?   which is the risk for  database if  i  delete it?
>>
>
If you need to do it from many tables, you could write a script to generate
the ALTER TABLE statements

select 'ALTER TABLE ' || schemaname ||'.' || psut.relname || ' NO INHERIT '
|| pc.relname ||';'
  from pg_stat_user_tables psut, pg_class pc, pg_inherits pi
 where pi.inhrelid = psut.relid
   AND pi.inhparent = pc.oid
   AND pi.inhparent = 20473;

I wouldn't manually hit the catalogs, but, this will write all of the ALTER
TABLE statements that you need.


>
>> This seems really dangerous.  You're certainly missing the pg_depend
>> linkages, not to mention attribute inheritance counts in pg_attribute,
>> and there may be other things I'm not remembering offhand.
>>
>> Why can't you use the normal ALTER TABLE approach?
>>
>> regards, tom lane
>>
>
>


-- 
--
Scott Mead
Sr. Architect
*OpenSCG <http://openscg.com>*
http://openscg.com


Re: [GENERAL] Er Data Modeller for PostgreSQL

2016-12-29 Thread Scott Mead
On Thu, Dec 29, 2016 at 8:59 AM, Rich Shepard 
wrote:

> On Thu, 29 Dec 2016, Nicolas Paris wrote:
>
> Hi I'd like to tell about Sql Power Architect
>>
>
> Nicholas,
>
>   SPA was going to be my next re-examination after dbeaver. Since the
> latter
> easily accomplished what I needed I stopped there.
>

I've actually used SQL Power Architect before as well.  I like Schema Spy
because it's a quick, read-only way to give me a fast report.  SQL Power
Architect is nice because it's graphical and you can interact with the
DB.  I've also used DBVisualizer for this (I actually love DBVisualizer),
but that tool is geared toward interacting with the database and has
visualization as an extra.  I'll have to try dbeaver; I've heard of it,
but haven't used it.

  Schema Spy is great, but I always have to come back and find the exact
command needed to kick it off; its CLI is a bit over-complicated.
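For reference, the invocation is roughly this (a sketch from the 5.x era --
the jar name, JDBC driver path, and connection details are all placeholders,
and flag names may differ slightly between versions):

    java -jar schemaSpy_5.0.0.jar -t pgsql -host localhost -port 5432 \
         -db mydb -s public -u scott -p secret \
         -o /tmp/schemaspy-out -dp /path/to/postgresql-jdbc.jar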

--Scott



>
> Thanks for the reminder,
>
>
> Rich
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>



-- 
--
Scott Mead
Sr. Architect
*OpenSCG <http://openscg.com>*
http://openscg.com


Re: [GENERAL] Re: Building PL/Perl procedural language without --with-perl configure flag

2017-01-15 Thread Scott Mead
On Sun, Jan 15, 2017 at 6:35 AM, postgres user 
wrote:

> The already installed Postgres edition was built using the same
> installation procedure as mentioned in the docs, but without the use of
> --with-perl flag. the point I ask the question is because I want to install
> PL/Perl as a separate extension as one does with PostGIS and not along with
> Postgres install. Is there a way out to solve that problem of building the
> PL/Perl language by somehow creating a custom Makefile as we have for
> contrib extensions or PostGIs etc... and then giving it the path of
> pg_config hence leading to an installation?
>


The common way of doing this is the following:

1. Download the same source that you used to build your existing postgres
2. In your existing postgres 'bin' directory, run pg_config
   -- This will show you the full string passed to configure and your
CFLAGS, LDFLAGS, etc...
3. Re-run configure using the same environment as specified by pg_config,
just add --with-perl
4. Build the server
5. Install the binaries

Note: After building, you could always run a diff between the existing
installation and a new installation and only install the differences.
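A rough sketch of steps 2-5 (the paths and flags below are examples -- use
exactly what your own pg_config reports):

    $ /opt/pg/9.4/bin/pg_config --configure
    '--prefix=/opt/pg/9.4' '--with-openssl'        # example output; yours will differ

    $ cd postgresql-9.4.x                          # same source version as the server
    $ ./configure --prefix=/opt/pg/9.4 --with-openssl --with-perl
    $ make
    $ make -C src/pl/plperl install                # or a full "make install" plus a diff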


-- 
--
Scott Mead
Sr. Architect
*OpenSCG <http://openscg.com>*
http://openscg.com


Re: [GENERAL] Selecting newly added column returns empty but only when selecting with other columns in table

2015-11-26 Thread Scott Mead


> On Nov 26, 2015, at 21:29, mrtruji  wrote:
> 
> Sure thing. Below are the results from your query along with the version and 
> table info. Not sure about the index. I queried the table quite a bit before 
> adding the new column and didn't have any issues. 
> 
> Here is the result from your query: 
> 
>  nspname | relname | indexrelname | type | ?column?
> -+-+--+--+--
> (0 rows)
> 
> Version: 
> PostgreSQL 9.4.5 on x86_64-unknown-linux-gnu, compiled by gcc (Ubuntu 
> 4.8.2-19ubuntu1) 4.8.2, 64-bit
> 
>   
> Table "public.data"
> Column| Type | Modifiers
> --+--+---
>  id   | text |
>  name | text |
>  gender   | text |
>  age  | text |
>  street   | text |
>  city | text |
>  state| text |
>  zip  | text |
>  longitude| double precision |
>  latitude | double precision |
>  geom | geometry(Point,4326) |
>  features_bin | bytea|
> Indexes:
> "ix_data_id" btree (id)
> 
> 
> 
> 
>> On Thu, Nov 26, 2015 at 6:19 PM, Melvin Davidson  
>> wrote:
>> OK, thanks for clarifying, but just for sanity sake, it would REALLY be nice 
>> if you would advise us of the exact version of PostgreSQL and the O/S you 
>> are working with.
>> 
>> A copy of the table structure would also be helpful.
>> 
>> Just one more thing, is it possible you have an index on that table that 
>> might be corrupted?
>> 
>> What does the following query return?
>> 
>> SELECT n.nspname, 
>>i.relname, 
>>i.indexrelname, 
>>CASE WHEN idx.indisprimary 
>> THEN 'pkey' 
>> WHEN idx.indisunique 
>> THEN 'uidx' 
>> ELSE 'idx' 
>> END AS type, 
>> 'INVALID' 
>>   FROM pg_stat_all_indexes i 
>>   JOIN pg_class c ON (c.oid = i.relid) 
>>   JOIN pg_namespace n ON (n.oid = c.relnamespace) 
>>   JOIN pg_index idx   ON (idx.indexrelid =  i.indexrelid ) 
>>  WHERE idx.indisvalid = FALSE  
>>AND i.relname = 'data'
>>  ORDER BY 1, 2,3;
>> 
>>> On Thu, Nov 26, 2015 at 9:10 PM, mrtruji  wrote:
>>> Hi, 
>>> 
>>> Thanks for the reply. The limit is just to simplify results for the 
>>> examples. The same behavior occurs when each of the three queries are not 
>>> limited. Whenever I try to filter by the original columns and select the 
>>> new column the resultant values for the new column are empty. Conversely, 
>>> whenever I select the new column along with original columns without any 
>>> filtering the resultant values for the original columns return empty. It's 
>>> as if the added column is disconnected to the table in some way causing 
>>> problems with queries that combine original columns and the new one.
>>> 
>>> I created and filled in the new column externally using psycopg2 in Python 
>>> so I'm not sure if that could be the source of the problem...

Did you just fire SQL ALTER statements? 


Have you tried creating a new table with the bytea column and loading it with 
the same data?  Just to help narrow things down?

  Also, can you reproduce it?


>>> 
 On Thu, Nov 26, 2015 at 5:39 PM, Melvin Davidson  
 wrote:
 Is it possible you have more than one row where state = 'CA'? Putting a 
 LIMIT 1 would then restrict to only 1 row.
 Have you tried with no limit? IE: SELECT new_col FROM data;
 
 
> On Thu, Nov 26, 2015 at 7:13 PM, mrtruji  wrote:
> Just added a new bytea type column to an existing psql table and 
> populated the column entirely with row data. Running into some strange 
> query results:
> 
> When I select the newly added column by itself I get all the data as 
> expected:
> 
> SELECT new_col FROM data LIMIT 1;
> Result: \x8481e7dec3650040b
> When I try to filter with 'where' on another column in the table, I get 
> the values from the other columns as expected but empty from my 
> new_column:
> 
> SELECT id, state, new_col FROM data WHERE state='CA' limit 1;
> Result: 123456_1; CA; EMPTY ROW
> The reverse is also true. If I select my new column in combination with 
> other columns with no 'where' I get the correct value from my new column 
> but empty for the other columns:
> 
> SELECT id, state, new_col FROM data limit 1;
> Result: EMPTY ROW; EMPTY ROW; \x8481e7dec3650040b
> Thanks to anyone with advice!
> 
 
 
 
 -- 
 Melvin Davidson
 I reserve the right to fantasize.  Whether or not you 
 wish to share my fantasy is entirely up to you. 
>> 
>> 
>> 
>> -- 
>> Melvin Davidson
>> I reserve the right to fantasize.  Whether or not you 
>> wish to share my fantasy is entirely up to you. 
> 


Re: [GENERAL] 2 questions

2015-12-01 Thread Scott Mead


> On Nov 30, 2015, at 12:54, anj patnaik  wrote:
> 
> 1) directory listing:
> 
> /opt/nfsDir/postgres/9.4/
> /bin
> /data
> /etc
>/pgAdmin3
>   
> 

The data directory will cause you many problems.  You will need one data 
directory that is accessed by one AND ONLY one host for each node connected. 
You can't run an instance on multiple machines pointing to the same 'data' 
directory simultaneously.  Data directories cannot be shared by multiple 
instances simultaneously; that's an active/active shared-disk cluster, and most 
databases either don't support it or require massive overhead (network/licensing, 
e.g. Oracle RAC) to do it. 

  You *can* re-use the other directories, but it can be fraught with issues, and 
you need to think carefully through upgrades, etc.

> 2) The way I am using PG now is that I have specified the directory above as 
> the location to install it (from graphical installer). 
> 
> Now, it appears that postgres places files in other directories besides the 
> one specified in the installer. For instance, there are scripts in 
> /etc/init.d to start the service. 
> 
> So in answering my own question: it appears PG places files in other dirs so 
> when I am given a new VM/different physical server with the same NFS mount I 
> would either need to copy these files over or better yet un-install the 
> current PG and re-install from scratch.
> 
> Thanks,
> ap
> 
>> On Fri, Nov 27, 2015 at 8:30 PM, Adrian Klaver  
>> wrote:
>>> On 11/27/2015 01:17 PM, anj patnaik wrote:
>>> Hello,
>>> Yes, postgres is currently installed on a nfs mounted file system. So
>>> when graphical installer runs, there is a form which asks location for
>>> installation. I specified this path /opt/nfsDir/Postgres where nfsDir is
>>> a nfs mount. So currently this is where PG 9.4 lives.
>> 
>> What is under /opt/nfsDir/Postgres?
>> 
>>> 
>>> My question is when I am given a brand new VM on a different physical
>>> server, can I mount that same NFS FS and use the Postgres or do I need
>>> to re-install PG on new VM?
>> 
>> How are you using Postgres now? Please be specific, more detail is better 
>> then less at this point.
>> 
>>> 
>>> I am not sure if PG writes to any other directories besides the one
>>> where it is installed.
>> 
>> 
>>> 
>>> On the issue of logging, I see a lot of log statements because client
>>> apps do upserts and since I use Tcl I don't have a SQL proc, but rather
>>> I let it exception and then do an update on the row.
>> 
>> You can Tcl in the database:
>> 
>> http://www.postgresql.org/docs/9.4/interactive/pltcl.html
>> 
>> That will not change things if you let the database throw an exception there 
>> also.
>> 
>> 
>>> 
>>> So, you can limit the size of an individual log, but there is no way to
>>> tell PG to keep the log file short?
>>> 
>>> if i choose FATAL, I'd lose some log, right?
>>> 
>>> Thank you!
>>> ap
>> 
>> -- 
>> Adrian Klaver
>> adrian.kla...@aklaver.com
> 


Re: [GENERAL] Loggingt psql meta-commands

2015-12-10 Thread Scott Mead
On Thu, Dec 10, 2015 at 2:50 PM, oleg yusim  wrote:

> Thanks John, I realized that and confirmed in my logs. What I'm trying to
> determine now, can I only log some SELECT statements, or I should log all
> of them or none of them.
>

You can configure this to multiple levels:

 Global, per-user, per-database

ALTER USER postgres SET log_min_duration_statement=0;
ALTER DATABASE xyz SET log_min_duration_statement=0;

  That being said, you would want to make sure that the user issuing the
largest volume of queries is not set with this; otherwise, you could
potentially flood your logs with every single query issued, which has a
tendency to cause performance problems.  The other item of note is that,
once logged in, the user could change that value with a similar ALTER
statement.


--Scott
PostgreSQL database experts
http://www.openscg.com

>
> Oleg
>
> On Thu, Dec 10, 2015 at 1:40 PM, John R Pierce 
> wrote:
>
>> On 12/10/2015 9:58 AM, oleg yusim wrote:
>>
>>> I'm new to PostgreSQL, working on it from the point of view of Cyber
>>> Security assessment. In regards to the here is my question:
>>>
>>> Is it a way to enable logging for psql prompt meta-commands, such as
>>> \du, \dp, \z, etc?
>>>
>>
>> what the other two gentlemen are trying to say is the metacommands are
>> shortcuts for more complex SQL queries of the pg_catalog schema, so to log
>> them, you would need to log all queries and filter for accesses to the
>> pg_catalog
>>
>>
>>
>> --
>> john r pierce, recycling bits in santa cruz
>>
>>
>>
>>
>> --
>> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
>> To make changes to your subscription:
>> http://www.postgresql.org/mailpref/pgsql-general
>>
>
>


Re: [GENERAL] Does PostgreSQL support to write glusterfs by Libgfapi

2015-12-16 Thread Scott Mead
On Wed, Dec 16, 2015 at 1:26 AM, zh1029  wrote:

> Hi,
>   It seems low performance PostgreSQL(9.3.6) while writing data to
> glusterFS
> distributed file system. libgfapi is provide since GlusterFS version 3.4 to
> avoid kernel visits/data copy which can improve its performance. But I
> didn't find out any instruction from the PostgreSQL web page. Do you know
> if
> PostgreSQL later release supports using libgfapi to optimized write data to
> ClusterFS file system.
>
The real question is, why are you using GlusterFS?  It's important to be
careful since PostgreSQL provides no mechanism to allow for shared-disk
clustering (active/active).  If you are planning on using active/passive,
you must plan carefully so as not to create a split-brain scenario.


--
Scott Mead
Sr. Architect
*OpenSCG*
PostgreSQL, Java & Linux Experts
http://openscg.com <http://openscg.com>



>
>
> --
> View this message in context:
> http://postgresql.nabble.com/Does-PostgreSQL-support-to-write-glusterfs-by-Libgfapi-tp5877793.html
> Sent from the PostgreSQL - general mailing list archive at Nabble.com.
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


Re: [GENERAL] Code of Conduct: Is it time?

2016-01-10 Thread Scott Mead
On Sun, Jan 10, 2016 at 5:09 PM, Adrian Klaver 
wrote:

> On 01/10/2016 02:05 PM, Regina Obe wrote:
>
>> Gavin,
>>
>>> I once went out of my way to help someone with Mac.  They were so Mac
>>>
>> centric they did not realize that they were not giving us the right
>> information to help them, but this was not obvious until later in the
>> thread.  I made some comment about Linux - next moment they were accusing
>>
>>> everyone of not helping them properly because they were using a Mac, as
>>>
>> though we had been deliberately discriminating against them!
>>
>> I hear ya. I do the same thing setting up a Linux VM to try to help
>> someone
>> on Ubuntu or CentOS.
>> My main point was if you don't have anything helpful to say, don't say
>> anything at all.
>>
>> I recall someone posting something earlier about on the lists we should
>> have
>> a section like:
>>
>> HELP US HELP YOU
>>
>> That details the information anyone having a problem should provide to
>> make
>> it easy for others to help them.
>> Can't find that item on mailing list.
>>
>
> https://wiki.postgresql.org/wiki/Guide_to_reporting_problems



Maybe I'm out of sync with everyone else, but I think of list and IRC
guidelines as distinctly separate from a code of conduct.  I see a code of
conduct as a legal document that allows the community to protect itself
(and its individuals) from illegal and possibly predatory
behavior.  Guidelines for posting: "don't top post, don't paste 500 lines
into IRC, etc..." are things that could get the community to *ignore* you,
but not necessarily cause them to participate in a legal showdown directly
or as a 'third-party'.

   ISTM that if we develop a code of conduct, it would need to be designed
to insulate the community and individuals within it from becoming targets
of legal action.  "Mike said I was bad at postgres, it hurt my consulting
and I want to sue Joe for replying-all and upping the hit-count on
google... "

--Scott

>
>
>>
>> Thanks,
>> Regina
>>
>>
>>
>>
>>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


Re: [GENERAL] How can I use a dynamic column name after NEW. or OLD. in trigger procedure?

2016-01-19 Thread Scott Mead
On Tue, Jan 19, 2016 at 5:05 PM, Peter Devoy  wrote:

> As part of the extension I am writing I am trying to create a trigger
> procedure in which the value of the primary key of the NEW or OLD row
> is used. The trigger will be fired by arbitrary tables so the column
> name must be dynamic.  Something like:
>
> pk_column := 'foo_id'; --example assignment only
>
> EXECUTE 'INSERT INTO  bar (baz) VALUES ($1)'
> USING NEW.quote_literal(pk_column);
>
> Out of desperation I have pretty much brute forced many weird
> combinations of quote_literal, quote_ident, ::regclass, || and USING.
> Unfortunately, I have not been able to get anything to work so any
> help would be very much appreciated.
>
> Thanks for reading
>
>
...
-- Dump into proper partition
sql := 'INSERT INTO ' || v_ets_destination || ' VALUES ( ($1).*)';

    -- DEBUG
--RAISE NOTICE 'SQL: %',sql;

BEGIN
 EXECUTE sql USING NEW;
...
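That fragment routes the whole row.  For the original question -- pulling a
dynamically named column out of NEW -- one approach that works is to pass the
row through EXECUTE ... USING and pick the field with format()/%I.  A minimal
sketch (the table, column, and function names are invented for illustration):

CREATE OR REPLACE FUNCTION audit_pk() RETURNS trigger AS $$
DECLARE
    pk_column text := 'foo_id';   -- however you determine it, e.g. TG_ARGV[0]
    pk_value  text;
BEGIN
    -- read NEW.<pk_column> without knowing the column name at write time
    EXECUTE format('SELECT ($1).%I::text', pk_column) INTO pk_value USING NEW;
    EXECUTE 'INSERT INTO bar (baz) VALUES ($1)' USING pk_value;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;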

--
Scott Mead
Sr. Architect
OpenSCG
PostgreSQL, Java & Linux Experts

Desk   : (732) 339 3419 ext 116
Bridge: (585) 484-8032

http://openscg.com




> Peter Devoy
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


Re: [GENERAL] How to stop autovacuum for daily partition old tables

2016-01-20 Thread Scott Mead


> On Jan 20, 2016, at 19:54, AI Rumman  wrote:
> 
> But, will it not create transaction wraparound for those table?
> 
> Thanks.
> 
>> On Wed, Jan 20, 2016 at 4:44 PM, Melvin Davidson  
>> wrote:
>> 
>> ALTER TABLE your_schema.your_table SET (autovacuum_enabled = false, 
>> toast.autovacuum_enabled = false);
>> 
>>> On Wed, Jan 20, 2016 at 6:22 PM, AI Rumman  wrote:
>>> Hi,
>>> 
>>> I have a table with daily partition schema on Postgresql 9.1 where we are 
>>> keeping 2 years of data.
>>> Often I experience that autovacuum process is busy with old tables where 
>>> there is no change. How can I stop it?
>>> Please advice.
>>> 
I typically run a VACUUM FREEZE on old partitions that don't get any changes, as 
part of a maintenance script.  If the tables actually get no changes, autovacuum 
should ignore them unless wraparound becomes an issue at autovacuum_freeze_max_age... 
which it shouldn't, if you VACUUM FREEZE them and there are no changes.
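A sketch of the sort of statement such a script can generate -- the partition
naming pattern here (mytable_YYYYMMDD) is just an assumption, adjust the regex
to your own scheme:

SELECT 'VACUUM FREEZE ' || schemaname || '.' || tablename || ';'
  FROM pg_tables
 WHERE tablename ~ '^mytable_\d{8}$'
   AND substring(tablename from '\d{8}$')
       < to_char(now() - interval '2 months', 'YYYYMMDD');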


>>> Thanks.
>> 
>> 
>> 
>> -- 
>> Melvin Davidson
>> I reserve the right to fantasize.  Whether or not you 
>> wish to share my fantasy is entirely up to you. 
> 


Re: [GENERAL] Catalog bloat (again)

2016-01-27 Thread Scott Mead


--
Scott Mead via mobile
IPhone : +1-607-765-1395
Skype  : scottm.openscg
Gtalk: sco...@openscg.com

> On Jan 27, 2016, at 22:11, Joshua D. Drake  wrote:
> 
>> On 01/27/2016 03:37 PM, Ivan Voras wrote:
>> 
>> 
>> On 28 January 2016 at 00:13, Bill Moran > <mailto:wmo...@potentialtech.com>> wrote:
>> 
>>On Wed, 27 Jan 2016 23:54:37 +0100
>>Ivan Voras mailto:ivo...@gmail.com>> wrote:
>> 
>>> So, question #1: WTF? How could this happen, on a regularly vacuumed
>>> system? Shouldn't the space be reused, at least after a VACUUM? The 
>> issue
>>> here is not the absolute existence of the bloat space, it's that it's
>> > constantly growing for *system* tables.
>> 
>>With a lot of activity, once a day probably isn't regular enough.
>> 
>> 
>> I sort of see what you are saying. I'm curious, though, what goes wrong
>> with the following list of expectations:
>> 
>> 1. Day-to-day load is approximately the same
>> 2. So, at the end of the first day there will be some amount of bloat
>> 3. Vacuum will mark that space re-usable
>> 4. Within the next day, this space will actually be re-used
>> 5. ... so the bloat won't grow.
>> 
>> Basically, I'm wondering why is it growing after vacuums, not why it
>> exists in the first place?
> 
> If something is causing the autovacuum to be aborted you can have this 
> problem.
Or long-running transactions / idle-in-transaction sessions / prepared xacts
can keep vacuum from reclaiming that space.

  Have you considered slowing down on temp tables?  Typically, when bleeding, 
it's good to find the wound and stitch it up instead of just getting more 
towels.
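A couple of quick checks for the usual suspects (the column names here are the
9.2+ ones; adjust for older servers):

SELECT pid, state, now() - xact_start AS xact_age, left(query, 60) AS query
  FROM pg_stat_activity
 WHERE xact_start IS NOT NULL
 ORDER BY xact_start
 LIMIT 10;

SELECT * FROM pg_prepared_xacts;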


> 
> JD
> 
> 
> -- 
> Command Prompt, Inc.  http://the.postgres.company/
> +1-503-667-4564
> PostgreSQL Centered full stack support, consulting and development.
> 
> 
> -- 
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Schema Size

2016-03-01 Thread Scott Mead
On Tue, Mar 1, 2016 at 6:07 PM, David G. Johnston <
david.g.johns...@gmail.com> wrote:

> You should read the definitions for the functions you are using to
> retrieve the sizes.
>
> ​http://www.postgresql.org/docs/current/static/functions-admin.html​
>
> +1, you've gotta be careful with each of these, they all tend to hide
different, yet critical components of size that you may be having trouble
resolving.

 The other thing to consider is that this isn't including any on-disk space
required for your change traffic in the WAL.  Your $PGDATA will always be
larger than the sum of all your databases sizes...


> On Tue, Mar 1, 2016 at 3:48 PM, drum.lu...@gmail.com  > wrote:
>
>> Hi there
>>
>> Wanna see how size a schema is in my PostgreSQL 9.2
>>
>> Got two queries - they return different values... can u please check?
>>
>> cheers;
>>
>> Query 1:
>> SELECT schema_name,
>> pg_size_pretty(sum(table_size)::bigint) as "disk space",
>> (sum(table_size) / pg_database_size(current_database())) * 100
>> as "percent"
>> FROM (
>>  SELECT pg_catalog.pg_namespace.nspname as schema_name,
>>  pg_relation_size(pg_catalog.pg_class.oid) as table_size
>>  FROM   pg_catalog.pg_class
>>  JOIN pg_catalog.pg_namespace
>>  ON relnamespace = pg_catalog.pg_namespace.oid
>> ) t
>> GROUP BY schema_name
>> ORDER BY schema_name
>>
>>
> ​​pg_relation_size: "Disk space used by the specified fork ('main', 'fsm',
> 'vm', or 'init') of the specified table or index"
>
> The 'init' fork is (I think) non-zero but extremely small.
> TOAST for a given relation is considered its own table
>
>
>> Query 2:
>> select schemaname,
>> pg_size_pretty(sum(pg_table_size(schemaname||'.'||relname))::bigint) as s
>> from pg_stat_user_tables
>> group by schemaname
>>
>
> pg_table_size: "Disk space used by the specified table, excluding indexes
> (but including TOAST, free space map, and visibility map)"
>

Personally, I'm a huge fan of 'pg_total_relation_size' which is all of
pg_table_size + indexes.  It really depends on specifically what you're
trying to count.  If you're looking for the total disk space required by
your tables in a schema, I always [personally] want to include indexes in
this count to make sure I understand the total impact on disk of accessing
my relations.
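A version of the per-schema query using pg_total_relation_size (so indexes and
TOAST are counted) would look roughly like this:

SELECT n.nspname AS schema_name,
       pg_size_pretty(sum(pg_total_relation_size(c.oid))::bigint) AS total_size
  FROM pg_class c
  JOIN pg_namespace n ON n.oid = c.relnamespace
 WHERE c.relkind = 'r'
 GROUP BY n.nspname
 ORDER BY sum(pg_total_relation_size(c.oid)) DESC;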


>
> David J.​
>
>


Re: [GENERAL] Postgresql 9.3.4 file system compatibility

2016-04-08 Thread Scott Mead
On Fri, Apr 8, 2016 at 9:16 AM, Marllius  wrote:

> thank you, but i need a link in official postgresql documentation
>

I'm not sure that link exists; the general rule is: if it's POSIX,
it'll work.  You'll find that most PostgreSQL-ers have strong opinions and
preferences in regards to filesystems.  Personally, I know that XFS will
work; it's not *my* preference, but to each their own.


>
> OCFS2 = oracle cluster file system 2
>
>
2016-04-08 10:00 GMT-03:00 Bob Lunney :
>
>> XFS absolutely does.  Its well supported on Redhat and CentOS 6.x and
>> 7.x.  Highly recommended.
>>
>> Don’t know about OCFS2.
>>
>> Bob Lunney
>> Lead Data Architect
>> MeetMe, Inc.
>>
>> > On Apr 8, 2016, at 8:56 AM, Marllius  wrote:
>> >
>> > Hi guys!
>> >
>> > The OCFS2 and XFS have compatibility with postgresql 9.3.4?
>> >
>>
>
I did some experimentation with ocfs2 back about 7 or 8 years ago
(admittedly, a Big-Bang away, so keep that in mind when reading my
comments).  At the time, OCFS2 was *mostly* POSIX compatible and would
indeed work with Postgres.  What we found (again, at the time) is that
OCFS2 started to have performance problems and eventually a race condition
when using a large number of [relatively] small files.  I believe the DB I
was working on had 10's of databases, each with 1,000+ tables in it, so,
lots of files.  It was really designed for use with Oracle (small number of
large files) and was passed over in favor of ASM.

If it were me, I'd stay away from OCFS2 for anything except Oracle (and in
that case, I'd use ASM).


> > I was looking the documentation but i not found it.
>> >
>> >
>> >
>>
>>
>
>
> --
> Atenciosamente,
>
> Márllius de Carvalho Ribeiro
>


Re: [GENERAL] index build faster on 8G laptop than 30G server

2016-04-19 Thread Scott Mead
On Tue, Apr 19, 2016 at 5:28 PM, Bill Ross  wrote:

> I've been running an index build for almost an hour on my 30G server that
> takes ~ 20 mins on my puny old macbook.
>
> It seems like I've tuned all I can.. what am I missing?
>
>
Concurrent traffic on the server?  Locks / conflicts with running traffic?

From a parameter perspective, look at maintenance_work_mem.
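For example, just for the session doing the build (the 2GB figure is only an
example -- size it to the RAM you can actually spare, and the index/table
names are placeholders):

SET maintenance_work_mem = '2GB';
CREATE INDEX idx_records_foo ON records (foo);
RESET maintenance_work_mem;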

--Scott



> Thanks,
> Bill
>
> Records to index: 33305041
>
> --- Server:
>
>  PostgreSQL 9.2.15 on x86_64-redhat-linux-gnu, compiled by gcc (GCC) 4.8.3
> 20140
> 911 (Red Hat 4.8.3-9), 64-bit
>
> shared_buffers = 8GB# min 128kB
> temp_buffers = 2GB# min 800kB
> work_mem = 8GB# min 64kB
> checkpoint_segments = 256# in logfile segments, min 1, 16MB each
> seq_page_cost = 1.0# measured on an arbitrary scale
> random_page_cost = 1.0# same scale as above
> effective_cache_size = 20GB
>
>   PID   USERPR  NI  VIRTRES  SHR S %CPU %MEMTIME+ COMMAND
>  4069 ec2-user  20   0 8596m 1.7g 1.7g R 99.8  5.6  67:48.36 postgres
>
> Macbook:
>  PostgreSQL 9.4.4 on x86_64-apple-darwin14.3.0, compiled by Apple LLVM
> version 6.1.0 (clang-602.0.53) (based on LLVM 3.6.0svn), 64-bit
>
> shared_buffers = 2048MB# min 128kB
> temp_buffers = 32MB# min 800kB
> work_mem = 8MB# min 64kB
> dynamic_shared_memory_type = posix# the default is the first option
> checkpoint_segments = 32# in logfile segments, min 1, 16MB each
>
> PIDCOMMAND  %CPU TIME #TH   #WQ  #PORTS MEMPURG CMPRS  PGRP
> 52883  postgres 91.0 02:16:14 1/1   07  1427M+ 0B 622M-  52883
>
>
>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


Re: [GENERAL] Using both ident and password in pg_hba.conf

2016-05-09 Thread Scott Mead
On Mon, May 9, 2016 at 5:42 PM, D'Arcy J.M. Cain  wrote:

> On Mon, 09 May 2016 17:12:22 -0400
> Tom Lane  wrote:
> > If the same user id + database combinations might be valid in both
> > cases (from both PHP and manual connections) I think your only other
> > option for distinguishing which auth method to use is to make them
> > come in on different addresses.  Can you set up a secondary IP
> > interface that only the PHP server uses, for example?
>
> I did think of that but how do I define that in pg_hba?  The host field
> only specifies the remote IP, not the local one.
>
> > There's no provision for saying "try this auth method, but if it
> > fails, try subsequent hba lines".  It might be interesting to have
> > that, particularly for methods like ident that don't involve any
> > client interaction.  (Otherwise, you're assuming that the client can
> > cope with multiple challenges, which seems like a large assumption.)
> > I don't have much of a feeling for how hard it would be to do in the
> > server.
>
> I had an idea that that wouldn't be so easy else we would have had it
> by now.  However, I am not sure that that is what is needed.  I was
> thinking of something like this:
>
> host    all   joe@nobody    192.168.151.75/32   password
> host    all   all           192.168.151.75/32   ident
>
> The "all@nobody" field is meant to specify that the remote user is
> nobody but that they are connecting as user joe.  You would be able to
> use "all" as well.  You don't even need to do an ident check unless the
> auth method is "trust" which would be silly anyway.  In fact "password"
> is the only method that even makes any sense at all.
>


So, at a high-level, you want:

- Users deploying php scripts in apache to require a password ( btw -- use
md5, not password)
- Users running php scripts from their shell accounts to connect with no
password to the database

  Is that correct?

  Why not just require that everyone use a password (again: *md5*) to connect?  It
would be significantly more secure.  Is there a requirement that shell
account users be able to connect without providing a password?

  (NB:
http://www.postgresql.org/docs/9.4/static/auth-methods.html#AUTH-PASSWORD
 password will send the password in cleartext, md5 will tell libpq to hash
the password for you.  No client-level change).
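In that case the pg_hba.conf gets a lot simpler -- a sketch, with the address
range as a placeholder for your network:

    host    all    all    192.168.151.0/24    md5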



> --
> D'Arcy J.M. Cain  |  Democracy is three wolves
> http://www.druid.net/darcy/|  and a sheep voting on
> +1 416 788 2246 (DoD#0082)(eNTP)   |  what's for dinner.
> IM: da...@vex.net, VoIP: sip:da...@druid.net
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


Re: [GENERAL] postgresql embedded mode

2016-05-23 Thread Scott Mead
On Mon, May 23, 2016 at 9:51 AM, Adrian Klaver 
wrote:

> On 05/23/2016 03:54 AM, aluka raju wrote:
>
>>
>> Hello,
>>
>> As given in the FAQ's that postgresql cannot be
>> embedded
>> https://wiki.postgresql.org/wiki/FAQ#Can_PostgreSQL_be_embedded.3F .
>>
>> Is their any possibility to make it embedded. Till now postgresql has
>> not done this embedded mode and i want to work on how it can be embedded
>> and contribute. please help me how to start or suggest the idea-.
>>
>
> All of this might be easier if you where to explain your design goals for
> whatever you are creating. Right now we have a series of disconnected
> references to parts of whatever it is.
>
+1
One individual's idea of embedded is not the same as another's




--
Scott Mead
Sr. Architect
*OpenSCG <http://openscg.com/>*
http://openscg.com

>
>> Thanks,
>> aluka
>>
>>
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


Re: [GENERAL] mild modification to pg_dump

2017-11-17 Thread Scott Mead
On Fri, Nov 17, 2017 at 7:51 AM, marcelo  wrote:

> I would need to do a mild change to pg_dump, working against a 9.4 server
> on linux.
> Which source tree do I need? Have gcc 4.9.2 in my Lubuntu installation.
> TIA


What exactly do you need to change?  Most likely, there is a quick and easy
fix for whatever you're doing without modifying pg_dump itself.

That being said, if you really want to modify the source, download the
source tarball: https://www.postgresql.org/ftp/source/


>
>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>



-- 
--
Scott Mead
Sr. Architect
*OpenSCG <http://openscg.com>*
http://openscg.com


Re: [GENERAL] mild modification to pg_dump

2017-11-17 Thread Scott Mead
On Fri, Nov 17, 2017 at 4:06 PM, marcelo  wrote:

> I need to "emulate" the pg_dump code because the password prompt. Years
> ago I write a program (for the QnX environment) that catched some prompt
> and emulates the standard input. I don't like to do that again.


pg_dump can use an environment variable "PGPASSWORD" upon execution
(actually, all libpq programs can).  You could have a wrapper that sets the
environment variable and then executes pg_dump; this would get you around
that prompt.  Similarly, you could use the .pgpass file.

https://www.postgresql.org/docs/9.5/static/libpq-envars.html
https://www.postgresql.org/docs/9.5/static/libpq-pgpass.html
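A minimal sketch of both options (host, user, database, and password are
obviously placeholders):

    # wrapper: the password comes from the environment, so no prompt
    PGPASSWORD='s3cret' pg_dump -h dbhost -U backup_user mydb > mydb.sql

    # or one line in ~/.pgpass (chmod 600), format host:port:database:user:password
    dbhost:5432:mydb:backup_user:s3cret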

--Scott



>
>
> On 17/11/17 17:23, John R Pierce wrote:
>
>> On 11/17/2017 12:19 PM, marcelo wrote:
>>
>>> Sorry, I was not exact.
>>> I don't need nor like to change pg_dump. Rather, based on pg_dump code,
>>> I need to develop a daemon which can receive a TCP message (from a
>>> privileged app) containing some elements: the database to dump, the user
>>> under which do that, and his password. (My apps are using that same data,
>>> of course, encripted to the common users).
>>>
>>
>>
>> I would just fork pg_dump to do the actual dump rather than try and
>> incorporate its source code into your app.
>>
>>
>>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>



-- 
--
Scott Mead
Sr. Architect
*OpenSCG <http://openscg.com>*
http://openscg.com


Re: [GENERAL] Survey on backing up unlogged tables: help us with PostgreSQL development!

2010-11-16 Thread Scott Mead
On Tue, Nov 16, 2010 at 7:21 PM, Glen Parker  wrote:

> On 11/16/2010 03:24 PM, Karsten Hilbert wrote:
>
>> PostgreSQL 9.1 is likely to have, as a feature, the ability to create
>>> tables which are "unlogged", meaning that they are not added to the
>>> transaction log, and will be truncated (emptied) on database restart.
>>> Such tables are intended for highly volatile, but not very valuable,
>>> data, such as session statues, application logs, etc.
>>>
>>
> I have been following loosely this discussion on HACKERS, but seem to have
> missed the part about truncating such tables on server restart.
>
> I have an immediate use for unlogged tables (application logs), but having
> them truncate after even a clean server restart would be a show stopper.  I
> keep log data for 2 months, and never back it up.  Having it disappear after
> a system melt down is acceptable, but not after a clean restart.  That would
> be utterly ridiculous!
>

+1  -- Is there a technical reason to do a TRUNCATE on restart?  I'd feel
better if I could just have unlogged tables that survive unless something
like a power-outage etc...  I'm in the exact same boat here, lots of big
logging tables that need to survive reboot, but are frustrating when it
comes to WAL generation.


>
>
> As to the topic of the thread, I think pg_dump needs to dump unlogged
> tables by default.
>
-1 I disagree.  I'm fine with having the loaded weapon pointed at my foot.

--Scott


>
> -Glen
>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


Re: [GENERAL] port warded (iptables) postgres

2010-11-16 Thread Scott Mead
Make sure that listen_addresses is set to '*' (or at least includes the
address the forwarded traffic arrives on).

By default, the server only listens on localhost (plus the unix socket), so a
redirected public port will never be bound.


--Scott


On Tue, Nov 16, 2010 at 6:41 PM, zhong ming wu  wrote:

> Hello
>
> I have successfully used iptables to direct ports for other services.
> For example I run apache on 8443 on 127.0.0.1 and
> use iptable to direct traffic to public ip on 443.  Trying the same
> with postgresql does not seem to work properly.
>
> I was wondering whether anyone has successfully used iptables +
> postgres this way
>
> Thanks in advance
>
> Mr. Wu
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


Re: [GENERAL] Survey on backing up unlogged tables: help us with PostgreSQL development!

2010-11-16 Thread Scott Mead
On Tue, Nov 16, 2010 at 8:15 PM, Tom Lane  wrote:

> Scott Mead  writes:
> > +1  -- Is there a technical reason to do a TRUNCATE on restart?  I'd feel
> > better if I could just have unlogged tables that survive unless something
> > like a power-outage etc...  I'm in the exact same boat here, lots of big
> > logging tables that need to survive reboot, but are frustrating when it
> > comes to WAL generation.
>
> Keep in mind that these tables are *not* going to survive any type of
> backend crash.


  Not surviving a crash is fine.  IMHO, if we'd lose data in myisam files,
I'm happy to lose them on pg nologging tables.  I just want it to survive a
stop / start operation.  The benefits (think of multi-host syslog
consolidation with FTS  ) on these tables FAR outweigh the
off-chance that a crash will cause me some heartache.


> Maybe my perceptions are colored because I deal with
> Postgres bugs all the time, but I think of backend crashes as pretty
> common, certainly much more common than an OS-level crash.  I'm afraid
> you may be expecting unlogged tables to be significantly more robust
> than they really will be.
>


Bugs?  What bugs :)

  Honestly, I've only had a couple of *Prod* crashes (knocks on wood), but
the need to restart occurs every now and then.

--Scott





>
>regards, tom lane
>


Re: [GENERAL] Survey on backing up unlogged tables: help us with PostgreSQL development!

2010-11-17 Thread Scott Mead
On Tue, Nov 16, 2010 at 10:25 PM, Tom Lane  wrote:

> Man, the number of misunderstandings in this thread is staggering.
> Let me try to explain what the proposed feature will and will not do.
>
> 1. The system catalog entries for all tables will be wal-logged.
> So schema (DDL) will survive a crash.  There wouldn't be any way
> to make it not do that, because we can't wal-log only some updates
> to a particular table, and that includes the catalogs in particular.
>
> Gotcha


> 2. What's proposed as the new feature is that specific non-system
> tables can be marked as unlogged, meaning that WAL entries won't
> be made for changes in those tables' contents (nor their indexes'
> contents).  So we can't guarantee that the contents of such tables
> will be correct or consistent after a crash.  The proposed feature
> deals with this by forcibly truncating all such tables after a crash,
> thus ensuring that they're consistent though not populated.  So the
> possible use-cases for such tables are limited to where (a) you can
> repopulate the tables on demand, or (b) you don't really care about
> losing data on a crash.
>

I would rather be allowed to decide that for myself.


>
> 3. There's a lot of wishful thinking here about what constitutes a
> crash.  A backend crash *is* a crash, even if the postmaster keeps
> going.  Data that had been in shared buffers doesn't get written out
> in such a scenario (and if we tried, it might be corrupt anyway).  So
> unlogged tables would be corrupt and in need of truncation after such an
> event.  Obviously, the same goes for an OS-level crash or power failure.
>

Right, just let *me* decide, that's all.


>
> 4. The last bit of discussion on -hackers concerned what to do in
> the case where the server got shut down cleanly.  If it was shut
> down cleanly, then any data for unlogged tables would have been
> written out from shared buffers ... but did the data make it to disk?
> There's no easy way to know that.  In the event of an OS crash or
> power failure shortly after server shutdown, it's possible that
> the unlogged tables would be corrupt.  So Robert's initial proposal
> includes truncating unlogged tables at any database startup, even
> if the previous shutdown was clean.  Some (including me) are arguing
> that that is unnecessarily strict; but you do have to realize that
> you're taking some risk with data validity if it doesn't do that.
>

It is too strict, it makes the feature barely more usable than a temp table.
As a DBA, I realize the implication of the feature:
   *) b0rked indexes
   *) b0rked data
   *) Not knowing what's good and what's bad
   *) Bad reports
   *) Bad BI

etc..., etc... etc...

   Still, I'd rather be allowed to make the decision here.  I think that
having the database try to enforce integrity on something i've marked as
'corruptable' (via the 'unlogged' flag) will be a constant fight between me
and the system.  In the end, I'd just not use the feature.


> The bottom line here is that you really can only use the feature
> for data that you're willing to accept losing on no notice.
> Allowing the data to persist across clean shutdowns would probably
> improve usability a bit, but it's not changing that fundamental fact.
>

Agreed, and that's fine.  IMHO, it improves the usability 10 fold.  Having
it truncated on server restart is useful for only a fraction of the
use-cases for this feature.

--Scott


>
>regards, tom lane
>


Re: [GENERAL] Survey on backing up unlogged tables: help us with PostgreSQL development!

2010-11-17 Thread Scott Mead
On Wed, Nov 17, 2010 at 10:38 AM, Scott Mead  wrote:

> On Tue, Nov 16, 2010 at 10:25 PM, Tom Lane  wrote:
>
>> Man, the number of misunderstandings in this thread is staggering.
>> Let me try to explain what the proposed feature will and will not do.
>>
>> 1. The system catalog entries for all tables will be wal-logged.
>> So schema (DDL) will survive a crash.  There wouldn't be any way
>> to make it not do that, because we can't wal-log only some updates
>> to a particular table, and that includes the catalogs in particular.
>>
>> Gotcha
>
>
>> 2. What's proposed as the new feature is that specific non-system
>> tables can be marked as unlogged, meaning that WAL entries won't
>> be made for changes in those tables' contents (nor their indexes'
>> contents).  So we can't guarantee that the contents of such tables
>> will be correct or consistent after a crash.  The proposed feature
>> deals with this by forcibly truncating all such tables after a crash,
>> thus ensuring that they're consistent though not populated.  So the
>> possible use-cases for such tables are limited to where (a) you can
>> repopulate the tables on demand, or (b) you don't really care about
>> losing data on a crash.
>>
>
> I would rather be allowed to decide that for myself.
>
>
>>
>> 3. There's a lot of wishful thinking here about what constitutes a
>> crash.  A backend crash *is* a crash, even if the postmaster keeps
>> going.  Data that had been in shared buffers doesn't get written out
>> in such a scenario (and if we tried, it might be corrupt anyway).  So
>> unlogged tables would be corrupt and in need of truncation after such an
>> event.  Obviously, the same goes for an OS-level crash or power failure.
>>
>
> Right, just let *me* decide, that's all.
>
>
>>
>> 4. The last bit of discussion on -hackers concerned what to do in
>> the case where the server got shut down cleanly.  If it was shut
>> down cleanly, then any data for unlogged tables would have been
>> written out from shared buffers ... but did the data make it to disk?
>> There's no easy way to know that.  In the event of an OS crash or
>> power failure shortly after server shutdown, it's possible that
>> the unlogged tables would be corrupt.  So Robert's initial proposal
>> includes truncating unlogged tables at any database startup, even
>> if the previous shutdown was clean.  Some (including me) are arguing
>> that that is unnecessarily strict; but you do have to realize that
>> you're taking some risk with data validity if it doesn't do that.
>>
>
> It is too strict, it makes the feature barely more usable than a temp
> table.
> As a DBA, I realize the implication of the feature:
>*) b0rked indexes
>*) b0rked data
>*) Not knowing what's good and what's bad
>*) Bad reports
>*) Bad Bi
>
> etc..., etc... etc...
>
>Still, I'd rather be allowed to make the decision here.  I think that
> having the database try to enforce integrity on something i've marked as
> 'corruptable' (via the 'unlogged' flag) will be a constant fight between me
> and the system.  In the end, I'd just not use the feature.
>
>
>> The bottom line here is that you really can only use the feature
>> for data that you're willing to accept losing on no notice.
>> Allowing the data to persist across clean shutdowns would probably
>> improve usability a bit, but it's not changing that fundamental fact.
>>
>
> Agreed, and that's fine.  IMHO, it improves the usability 10 fold.  Having
> it truncated on server restart is useful for only a fraction of the
> use-cases for this feature.
>


Now that I've just sent that last piece, what about a 'truncate on restart'
option that is defaulted to on?  That way, the community feels good knowing
that we're trying to protect people from themselves, but like the 'fsync'
feature, I can load the gun and pull the trigger if I really want to.  I'd
like to see that so even if there is a server crash, it doesn't truncate.
That way, I can rename the garbage table if I want, create a new one for all
new data and then be allowed to glean what I can from the last one.

--Scott


>
> --Scott
>
>
>>
>>regards, tom lane
>>
>
>


Re: [GENERAL] Survey on backing up unlogged tables: help us with PostgreSQL development!

2010-11-17 Thread Scott Mead
On Wed, Nov 17, 2010 at 12:49 PM, Josh Berkus  wrote:

>
>  As was already mentioned, application logs. Unlogged tables would be
>> perfect for that, provided they don't go *poof* every now and then for
>> no good reason. Nobody's going to be too heart broken if a handful of
>> log records go missing, or get garbled, after a server crash or power
>> outage. Delete 'em all after every restart though, and that's a problem.
>>
>
> That's a nice thought, but it's not how data corruption works in the event
> of a crash.  If a table is corrupted, *we don't know* how it's corrupted,
> and it's not just "the last few records" which are corrupted.  So for
> unlogged tables, there is never going to be any other option for crashes
> than to truncate them.
>
> Robert Haas did discuss the ability to synch unlogged tables on a planned
> shutdown, though.   However, that's liable to wait until 9.2, given the
> multiple steps required to make it work.
>
> Note that you would have the option of periodically synching an unlogged
> table to pgdump or to a logged table, via script, if you cared about
> retaining the data.  That would probably give you the behavior you want,
> above.
>
>
In an airplane, a pilot can kill the engine mid-flight if [s]he wants to.
They can deploy the flaps /slats at cruise speed / altitude, and if they're
so minded, they can land with a full tank of gas.  Now, none of these things
are particularly wise, but that's why the pilots are given *slightly* more
learning than your average bus driver.

  If you want to have a widely usable 'unlogged' table feature, I highly
recommend that 'truncate on server crash/restart' be an option that is
defaulted to true.  That way, I can go in and push the buttons I want and
give corrupted data to whomever, whenever I like.  (Land with a full tank of
Jet-A).

Whatever the decision is about backup, doesn't really matter IMO, but I
honestly think that the benefit of an unlogged table is there for both
session data (I run my session db's in fsync mode anyway and re-initdb them
on boot) AND for logging data where I can't take WAL anymore, but would like
to be able to have them in the same cluster as other stuff.  If they just
disappear then this feature won't be useful [to me] and I'll have to either
wait for the patch or give up on it and do a flat-file / lucene project just
to deal with it (I really don't want to do that :-).

--Scott



>
> --
>  -- Josh Berkus
> PostgreSQL Experts Inc.
> http://www.pgexperts.com
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


Re: [GENERAL] where is pg_stat_activity (and others) in the documentation?

2010-11-20 Thread Scott Mead
On Mon, Nov 15, 2010 at 10:03 AM, Vick Khera  wrote:

> On Mon, Nov 15, 2010 at 5:15 AM, Willy-Bas Loos 
> wrote:
> > I was looking for what exactly "waiting" means in pg_stat_activity.
>


You can find out exactly what you're waiting for by correlating this to the
pg_locks table.

  Grab the 'procpid' of your waiting query and run:

  select * from pg_locks where pid=<procpid> and granted='f';

  Notice, in the pg_locks table, the logic works slightly different... if
you're 'waiting' (pg_stat_activity), then you haven't been 'granted'
(pg_locks).
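
For example, something along these lines (a sketch using the pre-9.2 column
names such as procpid/waiting/current_query that this thread assumes) ties
the waiting backend to the lock it is blocked on:

   SELECT a.procpid, a.usename, a.current_query,
          l.locktype, l.mode, l.relation::regclass
     FROM pg_stat_activity a
     JOIN pg_locks l ON l.pid = a.procpid
    WHERE a.waiting
      AND NOT l.granted;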


As far as documentation:
http://www.postgresql.org/docs/current/static/monitoring-stats.html

--Scott


>
> waiting for a lock.
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


Re: [GENERAL] monitoring warm standby lag in 8.4?

2010-12-10 Thread Scott Mead
Yeah, my website is busted.  I'll fix it for you.


On Thu, Dec 9, 2010 at 2:30 PM, Josh Kupershmidt  wrote:
> Hi all,
>
> I'm wondering if there's an accepted way to monitor a warm standby
> machine's lag in 8.4. The wiki[1] has a link[2] to a script which
> parses the output of pg_controldata, looking for a line like this:
>
>  Time of latest checkpoint:            Thu 09 Dec 2010 01:35:46 PM EST
>
> But I'm not sure whether this timestamp is to be trusted as an
> indicator of how far behind the standby is in its recovery -- this
> timestamp just tells us when the standby last performed a checkpoint,
> regardless of how far behind in the WAL stream it is, right?
>
> I haven't come across any other monitoring suggestions for warm
> standby on 8.4. I've seen suggestions for hot standby slaves to use:
>  SELECT pg_last_xlog_receive_location();
> but this won't work on an 8.4 warm standby of course. I've searched
> around and haven't found[3] any other tips on how to monitor my
> standby.
>
> The manual mentions[4] using pg_xlogfile_name_offset() in the context
> of implementing record-based log shipping. Would this be useful for
> monitoring standby lag? Any other ideas?
>
> Thanks,
> Josh
>
>
> --
> [1] http://wiki.postgresql.org/wiki/Warm_Standby
> [2] http://www.kennygorman.com/wordpress/?p=249
> [3] I was hoping this page would have some relevant info:
> http://www.scottrmead.com/blogs/scott/warm-standby-monitoring , but
> it's down now :(
> [4] 
> http://www.postgresql.org/docs/8.4/static/warm-standby.html#WARM-STANDBY-RECORD
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Tablespace Issue

2011-02-17 Thread Scott Mead
On Thu, Feb 17, 2011 at 4:08 AM, Adarsh Sharma wrote:

> Dear all,
>
> Today I ran into a serious issue while creating a table in a database.
>
> pdc_uima=# create table clause2_naxal as select * from clause2 c
> ,page_naxal_v3 p where c.source_id = p.crawled_page_id;
> ERROR:  could not extend relation pg_tblspc/17208/17216/23098672: No space
> left on device
> HINT:  Check free disk space.
> pdc_uima=# drop table clause2_naxal;
> ERROR:  table "clause2_naxal" does not exist
>
> I thought that the hard disk in which the tablespace for the pdc_uima
> database is created had become full.
>
> So, I attach a new hard disk(300 Gb ) and create a new tablespace  in it .
>
> And after, issue the below command :
>
> ALTER DATABASE pdc_uima SET TABLESPACE ts_pdc_uima1;
>
> It takes so much time as database size is near about 40 GB.
>
> It seems it moves the whole database to the new place. I think this is not
> good for the future. Is there any way to have new data in the new
> tablespace and leave the old data as it is.
>

There is a 'TABLESPACE' option to the CREATE TABLE statement.
   http://www.postgresql.org/docs/current/static/sql-createtable.html
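
For example, something like this should work (a sketch using the table and
tablespace names from this thread):

   -- put only the new table into the new tablespace
   CREATE TABLE clause2_naxal TABLESPACE ts_pdc_uima1 AS
     SELECT * FROM clause2 c, page_naxal_v3 p
      WHERE c.source_id = p.crawled_page_id;

   -- or move a single existing table instead of the whole database
   ALTER TABLE clause2 SET TABLESPACE ts_pdc_uima1;

You can also make the new tablespace the default for future objects only,
leaving everything that already exists where it is:

   ALTER DATABASE pdc_uima SET default_tablespace = 'ts_pdc_uima1';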


--Scott


>
> Also, Can I delete the old tablespace now .
>
>
> Thanks & Best Regards,
>
> Adarsh Sharma
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


Re: [GENERAL] Per-session memory footprint (9.0/windows)

2011-03-02 Thread Scott Mead
On Wed, Mar 2, 2011 at 11:07 AM, Hannes Erven  wrote:

> Folks,
>
>
> I run a PG (currently 8.4, but will shortly migrate to 9.0) database on
> Windows Server 2003 that supports a desktop application which opens a
> few long-running sessions per user. This is due to the Hibernate
> persistence layer and the "one session per view" pattern that is
> recommended for such applications.
> These sessions usually load a pile of data once to display to the user,
> and then occasionally query updates of this data or even fetch single
> rows over a long time (like a few hours).
>
> It seems that each of the server postmaster.exe processes takes up
> approx. 5 MB of server memory (the "virtual memory size" column in task
> manager), and I guess this truly is the private memory these processes
> require. This number is roughly the same for 8.4 and 9.0 .
>
>
Task manager is misleading as multiple processes are sharing memory.  You
need process explorer
http://technet.microsoft.com/en-us/sysinternals/bb896653 (or something like
it) to see real memory consumption per backend.  Adding up the columns in
task manager is wrong and most definitely scary if you believe it :-)

--Scott



>
> As there are many, many such server processes running, is there anything
> I can do to reduce/optimize the per-session memory footprint?
>
> I'm aware of the sort_mem etc. parameters
> (http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server ) but
> these seem to only apply to the execution of queries, not to sessions
> that mainly "sit around waiting", right?
>
>
> Thank you for any hints!
>
>-hannes
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


Re: [GENERAL] .pgpass and root: a problem

2013-02-05 Thread Scott Mead
On Tue, Feb 5, 2013 at 12:15 PM, Shaun Thomas wrote:

> Hey folks,
>
> We're wanting to implement a more secure password policy, and so have
> considered switching to LDAP/Active Directory for passwords. Normally, this
> would be fine, but for two things:
>
> 1. Tons of our devs use .pgpass files to connect everywhere.
> 2. Several devs have root access to various environments.
>

I would love to see pgpass storing encrypted stuff here, that'd be great...
in the meantime...

 Is there any way that you could move your 'root-fellas' to a 'sudo' model
so that they can have *most* of what they need, without allowing identity
switches ?  I was trying to come up with something clever, but if they're
root, they're root.
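
As a rough sketch (the group name and commands here are made up; grant only
what your ops actually need), a sudoers entry can hand out specific commands
without a root shell:

   # /etc/sudoers.d/db-devs  (hypothetical)
   %dbdevs ALL=(root) NOPASSWD: /sbin/service postgresql restart, /sbin/service postgresql reload

The point being: no 'su -' and no arbitrary shell as root, so no casual
reading of other people's .pgpass files.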

--Scott Mead
sco...@openscg.com
http://www.openscg.com



>
> So, by switching from database-stored passwords to LDAP, we open a
> security problem that currently only affects the database, to developers'
> personal LDAP password, which is the key to every service and machine they
> use in the company.
>
> Unfortunately I can't see any way around this at all. Ident won't really
> work on remote systems, .pgpass isn't encrypted, and you can't use
> encrypted/hashed password entries either.
>
> I agree that we should probably have our root access much more locked down
> than it is, but it's still a valid problem. I don't think I'd even want a
> restricted set of root users able to see my LDAP password in plain text.
>
> Has anyone put thought into combining LDAP and .pgpass, or has it simply
> been abandoned every time the issue has presented itself?
>
> Thanks in advance!
>
> --
> Shaun Thomas
> OptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604
> 312-676-8870
> stho...@optionshouse.com
>
> __**
>
> See 
> http://www.peak6.com/email_**disclaimer/<http://www.peak6.com/email_disclaimer/>for
>  terms and conditions related to this email
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/**mailpref/pgsql-general<http://www.postgresql.org/mailpref/pgsql-general>
>


Re: [GENERAL] PG V9 on NFS

2013-02-11 Thread Scott Mead
On Mon, Feb 11, 2013 at 5:22 PM, Gauthier, Dave wrote:

>  Can PG V9.1* support a DB that's on an NFS disk?
>
> I googled around, but nothing popped out.
>
> Also, would you happen to know the answer to this for MySQL v5.5*? 
>
> Thanks in Advance.
>


I've done this before, and I really wish I hadn't.  NFS has gotten speedier
and more reliable, but the tendency to lock up an entire system and crash
unexpectedly is still there.  The other thing with NFS is that it is a lot
easier for people to try and run multi-headed clusters (which isn't
supported in postgres and will cause corruption if you do it).  If network
based storage is really a need, take a look at iSCSI or ATA over ethernet.
 I've had great luck there.


--Scott


Re: [GENERAL] Re: Causeless CPU load waves in backend, on windows, 9.5.5 (EDB binary).

2017-02-13 Thread Scott Mead
On Mon, Feb 13, 2017 at 5:10 PM, Thomas Kellerer  wrote:

> Nikolai Zhubr schrieb am 13.02.2017 um 23:03:
>
>> Maybe I should have been more specific.
>> What I need is debugging/profiling pure communication side of server
>> operation, implying huge lots of requests and replies going over the
>> wire to and from the server within some continued (valid) session,
>> but so that the server is not actually doing anything above that (no
>> sql, no locking, no synchronizing, zero usefull activity, just
>> pumping network I/O)
>>
>
My personal favorite is 'select 1'

   I know you're not looking for SQL, but, you gotta start somewhere...
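
If you want to drive that over the wire in volume, a quick sketch with
pgbench (host/user/db names below are placeholders):

   echo "SELECT 1;" > select1.sql
   pgbench -n -f select1.sql -c 8 -j 4 -T 60 -h dbhost -U dbuser dbname

That keeps the server doing essentially nothing per request, so most of what
you measure is connection and protocol overhead.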


>
>>
> If you are willing to drop the "no sql" requirement you could use
> something like
>
> select rpad('*', 1, '*');
>
> this will send a lot of data over the wire, the SQL overhead should be
> fairly small.
>
> You can send more data if you combine that with e.g. generate_series()
>
>
>
>
>
>
>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>



-- 
--
Scott Mead
Sr. Architect
*OpenSCG <http://openscg.com>*
http://openscg.com


Re: [GENERAL] Recover PostgreSQL database folder data

2017-04-21 Thread Scott Mead
On Fri, Apr 21, 2017 at 12:40 PM, Edson Lidorio 
wrote:

> Hi,
> There was a disaster in my development note. I was able to recover the
> data folder. PostgreSQL 9.6.2, was installed in Centos 7.
>
> Here are the procedures I'm trying to initialize Postgresql for me to do a
> backup.
>
> 1- I installed PostgreSQL 9.6.2 on a VM with Centos 7.
> 2- I stopped the PostgreSQL service: sudo systemctl stop postgresql-9.6
> 3- I renamed the /var/lib/pgsql/9.6/data folder to data_old and
> copied in the old data folder
> 4- I gave permission in the folder date:
>  sudo chown postgres: postgres /var/lib/pgsql/9.6/data;
>

Your error is that you didn't 'chown *-R* postgres:postgres
/var/lib/pgsql/9.6/data'

--Scott


>  sudo chmod 700 /var/lib/pgsql/9.6/data
> 5 - I tried to start the service: sudo systemctl start postgresql-9.6
> It is generating the following errors:
>
> Abr 21 01:25:35 localhost.localdomain systemd[1]: Starting PostgreSQL 9.6
> database server...
> Abr 21 01:25:36 localhost.localdomain postgresql96-check-db-dir[19996]:
> cat: /var/lib/pgsql/9.6/data//PG_VER…ada
> Abr 21 01:25:36 localhost.localdomain postgresql96-check-db-dir[19996]:
> cat: /var/lib/pgsql/9.6/data//PG_VER…ada
> Abr 21 01:25:36 localhost.localdomain systemd[1]: postgresql-9.6.service:
> control process exited, code=ex...us=1
> Abr 21 01:25:36 localhost.localdomain systemd[1]: Failed to start
> PostgreSQL 9.6 database server.
> Abr 21 01:25:36 localhost.localdomain systemd[1]: Unit
> postgresql-9.6.service entered failed state.
> Abr 21 01:25:36 localhost.localdomain systemd[1]: postgresql-9.6.service
> failed.
> Hint: Some lines were ellipsized, use -l to show in full.
>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>



-- 
--
Scott Mead
Sr. Architect
*OpenSCG <http://openscg.com>*
http://openscg.com


Re: [GENERAL] GENERAL : [Can somebody explain about Base directory]

2017-05-01 Thread Scott Mead
On Mon, May 1, 2017 at 11:41 AM, VENKTESH GUTTEDAR <
venkteshgutte...@gmail.com> wrote:

> Hello All,
>
> Base directory is consuming too much disk space, leading to no space on
> server and stopping the application
>
> Somebody please explain why it takes so much memory and is it safe to
> delete those files.?
>
>
The base directory is where your actual data (tables, indexes) is stored.
It is NOT safe to delete from that directory manually.  It would be better
to log in to the database and see if you can either drop tables or indexes.
If you do a 'DROP TABLE <tablename>;' in the database, it will delete the
underlying files from the base directory.  DO NOT DELETE FROM THE BASE
DIRECTORY manually unless you know what you are doing.

   It is possible that you have bloat causing your space issues, but that's
a harder thing to solve (
https://www.openscg.com/2016/11/postgresql-bloat-estimates/)
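
To see what is actually eating the space under base/, something like this
(a rough sketch) usually points at the culprits:

   SELECT n.nspname, c.relname,
          pg_size_pretty(pg_total_relation_size(c.oid)) AS total_size
     FROM pg_class c
     JOIN pg_namespace n ON n.oid = c.relnamespace
    WHERE c.relkind IN ('r', 'i', 't')
    ORDER BY pg_total_relation_size(c.oid) DESC
    LIMIT 20;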


> --
> Regards :
> Venktesh Guttedar.
>
>


-- 
--
Scott Mead
Sr. Architect
*OpenSCG <http://openscg.com>*
http://openscg.com


Re: [GENERAL] logging of application level user in audit trigger

2017-05-09 Thread Scott Mead
On Tue, May 9, 2017 at 2:50 PM, Rajesh Mallah 
wrote:

> Hi ,
>
> I am referring to   audit trigger as described in
>
> https://wiki.postgresql.org/wiki/Audit_trigger_91plus OR
> https://wiki.postgresql.org/wiki/Audit_trigger
>
> Although there are documented limitations for these systems , but
> I would like to mention and seek suggestion on a limitation that I feel is
> biggest .
>
>
> It is very a common design pattern in  web-applications that the same
> database
> user is shared for making database changes by different "logged in users"
> of the
> web application.
>
> I feel the core of audit is all about "who"  , "when" and "what" .
>
> In the current audit trigger the "who" is essentially the ROLE which is
> the actor of
> the trigger , but in most scenarios the user associated with the
> web-application session
> is the one that is seeked.
>
> In one of my past projects I passed the web-user to the trigger by setting
> a postgres
> custom variable during the database connection and reading it inside the
> trigger
> and storing it in the audit log table.
>

This is a good method, and one of the best for just straight auditing.  The
other trick I've seen is to use the 'application_name' field.  Developers
would issue:

SET application_name = 'app_user:app_name';

This can be read from pg_stat_activity.application_name.  I believe you'd
be able to read that in a procedure with 'show application_name'; and, you
can see it live in pg_stat_activity as well.

select application_name, count(*)
  FROM pg_stat_activity
 GROUP by application_name;

  You'd be able to see each user/app pair and the number of sessions each
had open to the DB at a given time.
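
Inside a trigger you can pick the value up with current_setting().  A
minimal sketch (the audit columns and table name here are made up):

   CREATE OR REPLACE FUNCTION audit_stamp() RETURNS trigger AS $$
   BEGIN
       -- record the app-level user the client put into application_name
       NEW.audited_by := current_setting('application_name');
       NEW.audited_at := now();
       RETURN NEW;
   END;
   $$ LANGUAGE plpgsql;

   CREATE TRIGGER trg_audit_stamp
       BEFORE INSERT OR UPDATE ON my_audited_table
       FOR EACH ROW EXECUTE PROCEDURE audit_stamp();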

--Scott



>
> I am curious how others deal with the same issue , is there better or more
> inbuilt solutions
> to store the application level user in the audit trail records.
>
> Regds
> mallah.
>
> ( https://www.redgrape.tech )
>
>
>
>
>
>
>
>
>
>



-- 
--
Scott Mead
Sr. Architect
*OpenSCG <http://openscg.com>*
http://openscg.com


Re: [GENERAL] Unable to install EASM postgre due to error 8023

2017-06-05 Thread Scott Mead
On Mon, Jun 5, 2017 at 6:14 AM, Garry Sim  wrote:

> Hi all,
>
>
>
> I did a search but unable to find anything in regards to this error. I am
> installing “01 SM_81SP1_Win_PostgreSQL_8323_setup.exe” but upon
> installing towards the ending, I am encountering this message. “Modifying
> the database files failed with an exit code of 8023”. I am currently
> installing at VM with OS: Windows Server 2012 R2
>
>
>

Is that postgres version 8.3.23?  That went EOL in Feb 2013

https://www.postgresql.org/support/versioning/

--Scott


>
>
> *Regards, *
>
>
>
> *Garry Sim*
>
> Professional Service Consultant
>
>
>
> *NETRUST PTE LTD*
> 70 Bendemeer Road #05-03 Luzerne Singapore 339940
>
> DID: +65 6212 1393 <+65%206212%201393> | Fax +65 6212 1366
> <+65%206212%201366>  |Website http://www.netrust.net
>
>
>



-- 
--
Scott Mead
Sr. Architect
*OpenSCG <http://openscg.com>*
http://openscg.com


Re: [GENERAL] Unable to install EASM postgre due to error 8023

2017-06-05 Thread Scott Mead
On Mon, Jun 5, 2017 at 9:59 PM, Garry Sim  wrote:

> Hi Scott,
>
>
>
> Is there a difference between postgre and Entrust Authority Security
> Manager Postgresql Database? But even if end of support, anyway of letting
> me have a better understanding of the error code ?
>

Ah, you're using the bundle that comes with EASM.  I would reach out to
their support for that error code.  It's not an error code that we in the
community would recognize, it's most likely well documented by their
support team.

--Scott




>
>
> *Regards, *
>
>
>
> *Garry Sim*
>
> Professional Service Consultant
>
>
>
> *NETRUST PTE LTD*
> 70 Bendemeer Road #05-03 Luzerne Singapore 339940
>
> DID: +65 6212 1393 <+65%206212%201393> | Fax +65 6212 1366
> <+65%206212%201366>  |Website http://www.netrust.net
>
>
>
>
>
>
>
> *From:* Scott Mead [mailto:sco...@openscg.com]
> *Sent:* Tuesday, June 6, 2017 9:55 AM
> *To:* Garry Sim 
> *Cc:* pgsql-general 
> *Subject:* Re: [GENERAL] Unable to install EASM postgre due to error 8023
>
>
>
>
>
>
>
> On Mon, Jun 5, 2017 at 6:14 AM, Garry Sim  wrote:
>
> Hi all,
>
>
>
> I did a search but unable to find anything in regards to this error. I am
> installing “01 SM_81SP1_Win_PostgreSQL_8323_setup.exe” but upon
> installing towards the ending, I am encountering this message. “Modifying
> the database files failed with an exit code of 8023”. I am currently
> installing at VM with OS: Windows Server 2012 R2
>
>
>
>
>
> Is that postgres version 8.3.23?  That went EOL in Feb 2013
>
>
>
> https://www.postgresql.org/support/versioning/
>
>
>
> --Scott
>
>
>
>
>
> *Regards, *
>
>
>
> *Garry Sim*
>
> Professional Service Consultant
>
>
>
>
> *NETRUST PTE LTD*70 Bendemeer Road #05-03 Luzerne Singapore 339940
>
> DID: +65 6212 1393 <+65%206212%201393> | Fax +65 6212 1366
> <+65%206212%201366>  |Website http://www.netrust.net
>
>
>
>
>
>
>
> --
>
> --
> Scott Mead
> Sr. Architect
> *OpenSCG <http://openscg.com>*
>
> http://openscg.com
>



-- 
--
Scott Mead
Sr. Architect
*OpenSCG <http://openscg.com>*
http://openscg.com


Re: [GENERAL] Strange case of database bloat

2017-07-05 Thread Scott Mead
On Wed, Jul 5, 2017 at 7:28 AM, Chris Travers 
wrote:

>
>
> On Wed, Jul 5, 2017 at 1:00 PM, PT  wrote:
>
>>
>> 2x the working size for a frequently updated table isn't terrible bloat.
>> Or are
>> you saying it grows 2x every 24 hours and keeps growing? The real
>> question is
>> how often the table is being vacuumed. How long have you let the
>> experiment run
>> for? Does the table find an equilibrium size where it stops growing? Have
>> you
>> turned on logging for autovacuum to see how often it actually runs on this
>> table?
>>
>
> If it were only twice it would not bother me.  The fact that it is twice
> after 24 hrs, 3x after 48 hrs and 4x after 72 hrs is alarming.
>
>>
>> No unremovable rows does not indicate that autovaccum is keeping up. It
>> just
>> indicates that you don't have a problem with uncommitted transactions
>> holding
>> rows for long periods of time.
>>
>
> Right.  I should have specified that I also have not seen auto vacuum in
> pg_stat_activity with an unusual duration.
>

What about anything WHERE state = 'idle in transaction'?
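
Something like this (a rough sketch) will show how long those sessions have
been sitting:

   SELECT pid, usename, now() - xact_start AS xact_age, query
     FROM pg_stat_activity
    WHERE state = 'idle in transaction'
    ORDER BY xact_age DESC;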



>
>> Have you looked at tuning the autovacuum parameters for this table? More
>> frequent
>> vacuums should keep things more under control. However, if the write load
>> is
>> heavy, you'll probably want to lower autovacuum_vacuum_cost_delay.
>> Personally,
>> I feel like the default value for this should be 0, but there are likely
>> those
>> that would debate that. In any event, if that setting is too high it can
>> cause
>> autovacuum to take so long that it can't keep up. In theory, setting it
>> too low
>> can cause autovaccum to have a negative performance impact, but I've
>> never seen
>> that happen on modern hardware.
>>
>
> Most of the writes are periodic (hourly?) batch updates which are fairly
> big.
>

I've had similar issues when each update makes a row larger than any of the
available slots.  I had a workload (admittedly on an older version of
postgres) where we were updating every row a few times a day.  Each time,
the row (a bytea field) would grow about 0.5 - 5.0%.  This would prevent us
from using freespace (it was all too small).  The only way around this was :

1. Run manual table rebuilds (this was before pg_repack / reorg).  Use
pg_repack now
2. Fix the app

  Essentially, I would do targeted, aggressive vacuuming and then, once a
month (or once I hit a bloat threshold) do a repack (again, it was my
custom process back then).  This was the bandage until I could get the app
fixed to stop churning so badly.
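
For reference, a minimal pg_repack run looks something like this (table and
database names are placeholders; the pg_repack extension has to exist in the
target database first):

   CREATE EXTENSION pg_repack;

then, from the shell:

   pg_repack --table=public.big_table mydb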


>
>> But that's all speculation until you know how frequently autovacuum runs
>> on
>> that table and how long it takes to do its work.
>>
>
> Given the other time I have seen similar behaviour, the question in my
> mind is why free pages near the beginning of the table don't seem to be
> re-used.
>
> I would like to try to verify that however, if you have any ideas.
>
>>
>> --
>> PT 
>>
>
>
>
> --
> Best Wishes,
> Chris Travers
>
> Efficito:  Hosted Accounting and ERP.  Robust and Flexible.  No vendor
> lock-in.
> http://www.efficito.com/learn_more
>



-- 
--
Scott Mead
Sr. Architect
*OpenSCG <http://openscg.com>*
http://openscg.com


Re: [GENERAL] Logging at schema level

2017-07-21 Thread Scott Mead
On Fri, Jul 21, 2017 at 2:11 AM, Nikhil  wrote:

> Schema = tenant. So basically tenant level logging.
>

If each tenant uses a separate database user, you could filter by that user.
You can't split the log files per user, but you could use a tool like
pgBadger to parse reports for each individual user (tenant) in the system
and present the logs that way:

From: https://github.com/dalibo/pgbadger

   -u | --dbuser username : only report on entries for the given user.
 ....
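
So, assuming one database user per tenant (the log path below is just a
placeholder), something like:

   pgbadger -u tenant_a -o tenant_a_report.html /var/log/postgresql/postgresql-*.log

run once per tenant gives each one its own report.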

--
Scott Mead
Sr. Architect
*OpenSCG <http://openscg.com>*
http://openscg.com

>
> On 21-Jul-2017 11:21 AM, "Andreas Kretschmer" 
> wrote:
>
>> On 21 July 2017 07:10:42 GMT+02:00, Nikhil 
>> wrote:
>> >Hello,
>> >
>> >I am using postgresql schema feature for multi-tenancy. can we get
>> >postgresql logs at schema level. Currently it is for the whole database
>> >server (pg_log)
>> >
>>
>> What do you want to achieve? Logging of data-changes per tenant?
>>
>> Regards, Andreas.
>>
>>
>> --
>> 2ndQuadrant - The PostgreSQL Support Company
>>
>


Re: [GENERAL] How to get transaction started always in WRITE mode.

2017-07-25 Thread Scott Mead
On Tue, Jul 25, 2017 at 5:32 AM, Adam Šlachta 
wrote:

> Hello,
>
>
>
> In short: Is there any way how to setup PostgreSql 9.6 to always start a
> transaction in WRITE mode?
>
>
>
> Our related configuration:
>
> "default_transaction_isolation" --> "read committed"
>
> "default_transaction_read_only" --> "off"
>

Log in to the database with psql as the same user that your java app
connects with and try:

show default_transaction_read_only;

This can be set per-user, it's possible you're getting tripped up there.
Also, what happens if you run:

select pg_is_in_recovery();


This can happen if you connect to a postgres slave instead of a master.
Make sure you're always connecting to a master node for executing writes.
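
If it turns out there is a per-role override, a sketch of clearing it (the
role name is a placeholder):

   ALTER ROLE app_user SET default_transaction_read_only = off;
   -- or drop the per-role override entirely:
   ALTER ROLE app_user RESET default_transaction_read_only;

(In psql, \drds will show any per-role / per-database settings.)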


>
>
>
> Longer description (for those who are interested, since it is not only
> PostgreSQL related):
>
>
>
> We are facing problems with "cannot execute  in a
> read-only transaction" exception (org.postgresql.util.PSQLException).
>
> It is very likely the problem is caused by our code, however at the moment
> the fastest solution before we solve the things properly would be to setup
> WRITE mode for all started transactions on a database-setup-level.
>
>
>
> SW we use:
>
> -> Java 8
>
> -> Hibernate 5.1.2
>
> -> spring-data-jpa 1.10.4.RELEASE
>
> -> spring-beans, spring-core, other spring stuff of version 4.2.8.RELEASE
>
>
>
> Related configuration (I don't want to spam here with long list of
> configuration files so I pick-up what I consider important):
>
> Hibernate -> first & second level cache switched OFF
>
> SessionFactory -> org.springframework.orm.hibernate5.
> LocalSessionFactoryBean
>
> transactionManager -> org.springframework.orm.jpa.JpaTransactionManager
>
> Spring @Transactional(read-only) hint -> where we could we set it to
> "false"
>
> Our typical @Repository extends 
> org.springframework.data.jpa.repository.JpaRepository,
> which uses implementation from org.springframework.data.jpa.
> repository.support.SimpleJpaRepository.
>

Is it possible that your code / connection layer is setting
default_transaction_read_only to TRUE when the app connects?





>
>
> Thank you very much for any hints.
>
> Adam Slachta
>
>
>
> ---
> This e-mail message including any attachments is for the sole use of the
> intended recipient(s) and may contain privileged or confidential
> information. Any unauthorized review, use, disclosure or distribution is
> prohibited. If you are not the intended recipient, please immediately
> contact the sender by reply e-mail and delete the original message and
> destroy all copies thereof.
>
> Tato zpráva včetně veškerých příloh je důvěrná a mohou ji využít pouze
> osoby, jimž je adresována. Nejste-li adresátem zprávy, obsah i s přílohami
> a kopiemi bezodkladně odstraňte ze svého systému a dále ji nijak
> nevyužívejte. Upozorňujeme Vás, že využívání zpráv, které Vám nejsou
> určeny, je zakázáno, včetně jejich přímého či nepřímého zveřejňování,
> kopírování, tištění, rozšiřování anebo jakéhokoli právního jednání
> učiněného při spoléhání se na jejich obsah. Pokud jste zprávu obdrželi
> omylem, postupujte stejně a neprodleně informujte odesílatele.
>
> Der Inhalt dieser E-Mail ist vertraulich und ausschließlich für den
> bezeichneten Adressaten bestimmt. Wenn Sie nicht der vorgesehene Adressat
> dieser E-Mail oder dessen Vertreter sein sollten, so beachten Sie bitte,
> dass jede Form der Kenntnisnahme, Veröffentlichung, Vervielfältigung oder
> Weitergabe des Inhalts dieser E-Mail unzulässig ist. Wir bitten Sie, sich
> in diesem Fall mit dem Absender der E-Mail in Verbindung zu setzen.
>
>


-- 
--
Scott Mead
Sr. Architect
*OpenSCG <http://openscg.com>*
http://openscg.com


Re: [GENERAL] archive_command fails but works outside of Postgres

2017-08-19 Thread Scott Mead


> On Aug 19, 2017, at 04:05, twoflower  wrote:
> 
> Alvaro Herrera-9 wrote
> I saw one installation with "gsutil cp" in archive_command recently. It had 
> the CLOUDSDK_PYTHON environment variable set in the archive_command itself. 
> Maybe that's a problem.
> After all, this was the solution:
> archive_command = 'CLOUDSDK_PYTHON=/usr/bin/python gsutil cp 
> /storage/postgresql/9.6/main/pg_xlog/%p gs://my_bucket/'
> as also hinted in https://github.com/GoogleCloudPlatform/gsutil/issues/402 
> 
> I still don't understand why the environments differ (the context of 
> archive_command vs. "su postgres -" and executing it there) but I am happy 
> it's working now. Thank you! 

If postgres is running under systemd, you'll have a wildly different 
environment than if you just su to postgres. 
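
For example (a sketch; the unit name depends on your packaging), a drop-in
created with 'systemctl edit postgresql-9.6' that carries the variable would
look like:

   [Service]
   Environment=CLOUDSDK_PYTHON=/usr/bin/python

followed by a restart of the service.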

--Scott


> View this message in context: Re: archive_command fails but works outside of 
> Postgres
> Sent from the PostgreSQL - general mailing list archive at Nabble.com.


Re: [GENERAL] Veritas cluster management

2017-08-30 Thread Scott Mead
On Wed, Aug 30, 2017 at 9:43 AM, Ron Johnson  wrote:

> Hi,
>
> For any of you with those failover clusters, do you know if "pg_ctl
> reload" works (for compatible config file changes), or must we bounce the
> database using "hares -offline" then "hares -online"?
>

pg_ctl reload does work in this case. HOWEVER, if you do something that
could cause trouble to the cluster (e.g. a pg_hba.conf change that breaks
connectivity), this could cause veritas to try to fail over.  It's
recommended that you test your changes to avoid these scenarios.

  Technically however, pg_ctl reload works just fine, just don't break
anything :)

--Scott



>
> Thanks
>
> --
> World Peace Through Nuclear Pacification
>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>



-- 
--
Scott Mead
Sr. Architect
*OpenSCG <http://openscg.com>*
http://openscg.com


Re: [GENERAL] WAL Archive command.

2017-09-27 Thread Scott Mead
On Wed, Sep 27, 2017 at 2:55 PM, Jerry Sievers 
wrote:

> John Britto  writes:
>
> > Hello,
> >
> > I have a streaming replication setup along with WAL archive.
> >
> > archive_command = 'test ! -f /var/pg_archive/%f && cp %p <archive location>/%f && scp %p postgres@192.168.0.123:/%f'
> >
> > When the SCP command fails, the master repeatedly tries to send the
> > archived WAL to standby. But during this time, the pg_xlog directly
> > grew with newer WAL files.
> >
> > The streaming replication hasn't had the problem because on my check,
> > the WAL write location on the primary was same with the last WAL
> > location received/replayed in standby.
> >
> > Since the pg_xlog in the master had few newer WAL files, the master
> > archive is lagging to pick the current pg_xlog WAL file.  When a new
> > WAL occur in the pg_xlog, Master picks the old WAL file to send to
> > the standby.
>
> Yup Pg is going to handle the unshipped WALs one at a time and it will
> do them in order, oldest (lowest file name) first.
>
> > How should I force the PostgreSQL to batch copy the lagging WAL files
> > to pg_archive and then send to standby.  Can I do this manually using
> > rsync?  I wonder how PostgreSQL knows the changes because it
> > maintains info in archive_status with extension as .ready and .done.
>
> I suggest you fix your basic archiving routine to complete and exit with
> success to postgres.
>

+1

scp %p host:/archive/%f
if [ "$?" -ne 0 ]
then
    echo "`date`:FAILED:%p" >> /var/log/failed_xlog.log
    cp %p /localarchive/%f
    exit 0
fi

  Essentially, always make sure that you are returning a 0 to postgres.  If
there is a failure, either log it or handle it separately.  This code
snippet is *NOT COMPLETE, *there's a lot more to do in order to make it
production ready and recoverable.  The biggest issue I've had with scp is
that you have to set and enforce a timeout and trap the timeout.  Note, the
above only works until your local machine (or the /localarchive partition)
runs out of space.  It's *really* important that you have ultra solid
logging and alerting around this.
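
As a sketch of the timeout part, in the same %p/%f style as the snippet
above (the timeout values are arbitrary):

   timeout 60 scp -o ConnectTimeout=10 -o BatchMode=yes %p host:/archive/%f
   if [ "$?" -ne 0 ]
   then
       echo "`date`:FAILED:%p" >> /var/log/failed_xlog.log
       cp %p /localarchive/%f
   fi
   exit 0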




> And as for archive command scripts in general, simpler is better.
>
> If you want to manually ship them in bulk, you may do so but then will
> need to remove the corresponding archive_status/$foo.ready file so that
> postgres won't keep trying to ship the same one.
>

I'm a huge fan of this strategy, especially if you're sending to a remote
datacenter.

archive_command.sh:

cp %p /localarchive/%f
if [ "$?" -ne 0 ]
then
    echo "`date`:FAILED:%p" >> /var/log/failed_xlog.log
    exit 0
fi

send_archive_to_remote.sh
rsync -avzP /localarchive/* host:/archive/


Of course, now you have to deal with files that are removed from the slave
and making sure they get removed from the master appropriately, but, this
is fairly straightforward.

--Scott




> HTH
>
> > Please assist.
> >
> > Thanks,
> >
> > John Britto
> >
> >
> >
>
> --
> Jerry Sievers
> Postgres DBA/Development Consulting
> e: postgres.consult...@comcast.net
> p: 312.241.7800
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>



-- 
--
Scott Mead
Sr. Architect
*OpenSCG <http://openscg.com>*
http://openscg.com


Re: [GENERAL] Rsync to a recovering streaming replica?

2017-09-27 Thread Scott Mead
On Wed, Sep 27, 2017 at 1:59 PM, Igor Polishchuk  wrote:

> Sorry, here are the missing details, if it helps:
> Postgres 9.6.5 on CentOS 7.2.1511
>
> > On Sep 27, 2017, at 10:56, Igor Polishchuk  wrote:
> >
> > Hello,
> > I have a multi-terabyte streaming replica on a busy database. When I set
> it up, repetitive rsyncs take at least 6 hours each.
> > So, when I start the replica, it begins streaming, but it is many hours
> behind right from the start. It is working for hours, and cannot reach a
> consistent state
> > so the database is not getting opened for queries. I have plenty of WAL
> files available in the master’s pg_xlog, so the replica never uses archived
> logs.
> > A question:
> > Should I be able to run one more rsync from the master to my replica
> while it is streaming?
> > The idea is to overcome the throughput limit imposed by a single
> recovery process on the replica and allow to catch up quicker.
> > I remember doing it many years ago on Pg 8.4, and also heard from other
> people doing it. In all cases, it seemed to work.
> > I’m just not sure if there is no high risk of introducing some hidden
> data corruption, which I may not notice for a while on such a huge database.
> > Any educated opinions on the subject here?
>

It really comes down to the amount of I/O (network and disk) your system
can handle while under load.  I've used 2 methods to do this in the past:

- http://moo.nac.uci.edu/~hjm/parsync/

  parsync (parallel rsync) is nice; it does all the hard work of
parallelizing rsync for you.  It's just a pain to get all the prereqs installed.


- rsync --itemize-changes
  Essentially, use this to get a list of files, manually split them out and
fire up a number of rsyncs.  parsync does this for you, but, if you can't
get it going for any reason, this works.


The real trick: after you do your parallel rsync, make sure that you run
one final rsync to sync up any missed items.
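
A rough sketch of the manual split-and-run approach (paths and the replica
host name are placeholders):

   cd /var/lib/pgsql/9.6/data
   find . -type f > /tmp/filelist
   split -n l/8 /tmp/filelist /tmp/chunk.
   for f in /tmp/chunk.*; do
       rsync -a --files-from="$f" . replica:/var/lib/pgsql/9.6/data/ &
   done
   wait
   # the final catch-up pass
   rsync -a . replica:/var/lib/pgsql/9.6/data/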

Remember, it's all about I/O.  The more parallel threads you use, the
harder you'll beat up the disks / network on the master, which could impact
production.

Good luck

--Scott







> >
> > Thank you
> > Igor Polishchuk
>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>



-- 
--
Scott Mead
Sr. Architect
*OpenSCG <http://openscg.com>*
http://openscg.com


Re: [GENERAL] Rsync to a recovering streaming replica?

2017-09-27 Thread Scott Mead
On Wed, Sep 27, 2017 at 4:08 PM, Igor Polishchuk  wrote:

> Scott,
> Thank you for your insight. I do have some extra disk and network
> throughput to spare. However, my question is ‘Can I run rsync while
> streaming is running?’
>

Ahh, I see.  Sorry

You need to stop the slave, put the master into backup mode, do the
parallel rsync over the existing slave's data directory (for differential).

Then, pg_stop_backup(), start the slave.
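
Roughly (a sketch only; paths and host names are placeholders, 9.6 syntax):

   # on the slave
   pg_ctl -D /var/lib/pgsql/9.6/data stop -m fast

   # on the master
   psql -c "SELECT pg_start_backup('resync', true);"
   rsync -a --delete \
       --exclude=pg_xlog --exclude=postmaster.pid --exclude=recovery.conf \
       /var/lib/pgsql/9.6/data/ slave:/var/lib/pgsql/9.6/data/
   psql -c "SELECT pg_stop_backup();"

   # on the slave: make sure recovery.conf is still in place, then
   pg_ctl -D /var/lib/pgsql/9.6/data start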

--Scott



> A streaming replica is a physical copy of a master, so why not. My concern
> is a possible silent introduction of some block corruptions, that would not
> be fixed by a block copy in wal files. I think such corruptions should not
> happen, and I saw a few instances where running rsync seemed to work.
> I’m curious if somebody is aware about a situation where a corruption is
> likely to happen.
>



>
> Igor
>
> On Sep 27, 2017, at 12:48, Scott Mead  wrote:
>
>
>
> On Wed, Sep 27, 2017 at 1:59 PM, Igor Polishchuk 
> wrote:
>
>> Sorry, here are the missing details, if it helps:
>> Postgres 9.6.5 on CentOS 7.2.1511
>>
>> > On Sep 27, 2017, at 10:56, Igor Polishchuk  wrote:
>> >
>> > Hello,
>> > I have a multi-terabyte streaming replica on a busy database. When I
>> set it up, repetitive rsyncs take at least 6 hours each.
>> > So, when I start the replica, it begins streaming, but it is many hours
>> behind right from the start. It is working for hours, and cannot reach a
>> consistent state
>> > so the database is not getting opened for queries. I have plenty of WAL
>> files available in the master’s pg_xlog, so the replica never uses archived
>> logs.
>> > A question:
>> > Should I be able to run one more rsync from the master to my replica
>> while it is streaming?
>> > The idea is to overcome the throughput limit imposed by a single
>> recovery process on the replica and allow to catch up quicker.
>> > I remember doing it many years ago on Pg 8.4, and also heard from other
>> people doing it. In all cases, it seemed to work.
>> > I’m just not sure if there is no high risk of introducing some hidden
>> data corruption, which I may not notice for a while on such a huge database.
>> > Any educated opinions on the subject here?
>>
>
> It really comes down to the amount of I/O (network and disk) your system
> can handle while under load.  I've used 2 methods to do this in the past:
>
> - http://moo.nac.uci.edu/~hjm/parsync/
>
>   parsync (parallel rsync)is nice, it does all the hard work for you of
> parellizing rsync.  It's just a pain to get all the prereqs installed.
>
>
> - rsync --itemize-changes
>   Essentially, use this to get a list of files, manually split them out
> and fire up a number of rsyncs.  parsync does this for you, but, if you
> can't get it going for any reason, this works.
>
>
> The real trick, after you do your parallel rsync, make sure that you run
> one final rsync to sync-up any missed items.
>
> Remember, it's all about I/O.  The more parallel threads you use, the
> harder you'll beat up the disks / network on the master, which could impact
> production.
>
> Good luck
>
> --Scott
>
>
>
>
>
>
>
>> >
>> > Thank you
>> > Igor Polishchuk
>>
>>
>>
>> --
>> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
>> To make changes to your subscription:
>> http://www.postgresql.org/mailpref/pgsql-general
>>
>
>
>
> --
> --
> Scott Mead
> Sr. Architect
> *OpenSCG <http://openscg.com/>*
> http://openscg.com
>
>
>


-- 
--
Scott Mead
Sr. Architect
*OpenSCG <http://openscg.com>*
http://openscg.com


Re: [GENERAL] Using cp to back up a database?

2017-10-09 Thread Scott Mead
On Mon, Oct 9, 2017 at 1:19 PM, Ron Johnson  wrote:

> Maybe my original question wasn't clear, so I'll try again: is it safe to
> do a physical backup using cp (as opposed to rsync)?
>

Yes -- however, *you must configure WAL archiving* first.  If not, no backup
tool (cp, rsync, etc.) will provide a good backup.

Oh, and BTW -- The obligatory: You are on an ancient, EOL version of PG.
Upgrade.

Make sure that these are set:
- archive_mode
- archive_command


Then, when you restore the backup, you need to create a recovery.conf
and configure
- restore_command

https://www.postgresql.org/docs/8.4/static/continuous-archiving.html
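
A minimal sketch of the pieces (the archive path is a placeholder):

   # postgresql.conf on the server being backed up
   archive_mode = on
   archive_command = 'test ! -f /var/pg_archive/%f && cp %p /var/pg_archive/%f'

   # recovery.conf placed in the restored data directory
   restore_command = 'cp /var/pg_archive/%f %p'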






>
>
>
> On 10/09/2017 11:49 AM, Darren Douglas wrote:
>
> Ron:
>
> Here is an explanation that may help a bit.
>
> Your script is executing a PHYSICAL backup. A physical backup is simply a
> full copy of the cluster (instance) data directory ($PGDATA). A physical
> backup is your best option when you need to backup the cluster data as well
> as all configuration for the cluster. Essentially, if you had to rebuild
> the entire computer hosting the cluster, you could just reinstall the same
> version of postgres, copy in the backup data directory, and the cluster
> would run exactly as it did before with the same data. A physical backup is
> also necessary when the databases get very large.
>
> In the backup script you posted, the 'pg_start_backup' and
> 'pg_stop_backup' commands fulfill two purposes. The first is to create a
> label for the point in time the backup was started - this is done by
> pg_start_backup. The second is to ensure that all WAL segments that have
> been written since the backup began have been safely archived. That is done
> by pg_stop_backup. This approach is necessary to accomplish an online
> physical backup.
>
> As others have mentioned pg_dump is a LOGICAL backup tool similar to any
> SQL dump you've done with another DBMS. The pg_dump command will do a SQL
> dump to recreate everything within a single database. So, if you have
> multiple databases in your cluster, its not the best option. pg_dumpall is
> the logical backup tool that will do a logical dump of all globals (schema
> + roles) along with all databases in the cluster. Because the
> pg_dump/pg_dumpall commands are not executing a physical backup, the
> pg_start_backup and pg_stop_backup commands do not apply.
>
> As for times when you would elect to do a logical backup, as others have
> mentioned, this is the only valid option when you are restoring to a
> different version of Postgres. It is also a good option to do a backup of a
> single small database or several small databases. And, if for any reason
> the backup needs to be human-readable, this is the approach of choice as
> well.
>
> Darren
>
>
>
> The first
>
> On Mon, Oct 9, 2017 at 6:40 AM, Ron Johnson  wrote:
>
>> Hi,
>>
>> v8.4.20
>>
>> This is what the current backup script uses:
>>
>> /usr/bin/psql -U postgres -c "SELECT pg_start_backup('Incrementalba
>> ckup',true);"
>> cp -r /var/lib/pgsql/data/* $dumpdir/data/
>> /usr/bin/psql -U postgres template1 -c "SELECT pg_stop_backup();"
>>
>>
>> Should it use rsync or pg_dump instead?
>>
>> Thanks
>>
>> --
>> World Peace Through Nuclear Pacification
>>
>>
>
>
> --
> Darren Douglas
> Synse Solutions
> dar...@synsesolutions.com
> 520-661-5885 <(520)%20661-5885>
>
>
>
> --
> World Peace Through Nuclear Pacification
>
>


-- 
--
Scott Mead
Sr. Architect
*OpenSCG <http://openscg.com>*
http://openscg.com


Re: [GENERAL] Using Variables in Queries

2017-10-19 Thread Scott Mead
On Thu, Oct 19, 2017 at 11:21 AM, Igal @ Lucee.org  wrote:

> Hello,
>
> In other database servers, which I'm finally dropping in favor of
> Postgres, I can do the following (mind you that this is for illustration
> only, I do not actually write queries like that):
>
> DECLARE @query varchar(64) = 'red widget';
>
> SELECT *
> FROM products
> WHERE col1 LIKE @query
>OR col2 LIKE @query
>OR col3 LIKE @query
>OR col4 LIKE @query
>OR col5 LIKE @query
>
> The point is, though, that I can change the @query variable in one place
> which is very convenient.
>
> Is it still true (the posts I see on this subject are quite old) that I
> can not do so in Postgres outside of a stored procedure/function?
>

You can do this if you're using psql.  This doesn't work if you're using
something like JDBC, etc..., but from psql it works great.

https://www.postgresql.org/docs/9.5/static/app-psql.html#APP-PSQL-VARIABLES
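
A quick sketch of the same idea in psql (column/table names from your
example):

   \set query 'red widget'

   SELECT *
     FROM products
    WHERE col1 LIKE :'query'
       OR col2 LIKE :'query'
       OR col3 LIKE :'query';

The :'query' form interpolates the variable as a properly quoted literal.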

--Scott


> And if so, what's the reason of not adding this feature?  Seems very
> useful to me.
>
> Thanks,
>
>
> Igal Sapir
> Lucee Core Developer
> Lucee.org <http://lucee.org/>
>



-- 
--
Scott Mead
Sr. Architect
*OpenSCG <http://openscg.com>*
http://openscg.com


Re: [GENERAL] Postgres 9.6 fails to start on VMWare

2017-10-23 Thread Scott Mead
On Mon, Oct 23, 2017 at 11:26 AM, Martin Moore 
wrote:

> It was running – not sure how dd handles this. Maybe badly… ☺
>

it doesn't handle it at all.  This would be the cause of your issue.

--Scott



>
>
> *From: *Michael Nolan 
> *Date: *Monday, 23 October 2017 at 15:52
> *To: *Martin Moore 
> *Cc: *rob stone , "pgsql-general@postgresql.org" <
> pgsql-general@postgresql.org>
> *Subject: *Re: [GENERAL] Postgres 9.6 fails to start on VMWare
>
>
>
>
>
>
>
> On Mon, Oct 23, 2017 at 3:14 AM, Martin Moore 
> wrote:
>
> Same server. I tried a few times.
>
> I didn’t move the db separately, but did a ‘dd’ to copy the disk to an
> imagefile which was converted and loaded into VMWare.
>
> I ‘believed’ that this should keep the low level disk structure the same,
> but if this has corrupted the files I can drop, dump and restore, in which
> case how do I ‘drop’ the DB without postgres running?
>
> Ta,
>
> Martin.
>
>
>
> Was the server you were backing up shut down or in backup mode when you
> did the 'dd' copy?
>
> --
>
> Mike Nolan
>



-- 
--
Scott Mead
Sr. Architect
*OpenSCG <http://openscg.com>*
http://openscg.com


Re: [GENERAL] Adding the host name to the PgSQL shell

2009-06-08 Thread Scott Mead
On Mon, Jun 8, 2009 at 12:44 PM, Madison Kelly  wrote:

> Hi all,
>
>  I work on a development and production server, and I am always
> double-checking myself to make sure I am doing something on the right
> server.
>
>  Is there a way, like in terminal shells, to change the PgSQL shell's
> prompt from 'db=>' to something like 'h...@db=>'? I'm on PgSQL 8.1 (server
> on Debian) and 8.3 (devel on Ubuntu), in case it matters.


You certainly can do this, very similar to PS1 on linux with bash:

http://www.postgresql.org/docs/8.1/static/app-psql.html#APP-PSQL-PROMPTING
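
For example, something like this shows host, port, user and database (pick
whichever escapes you like):

   \set PROMPT1 '%M:%> %n@%/%R%# '

which gives a prompt along the lines of  db1.example.com:5432 madison@mydb=>
so it's obvious which server you're on.  Put it in ~/.psqlrc to make it stick.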

--Scott

>
>
> Thanks all!
>
> Madi
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


Re: [GENERAL] Adding the host name to the PgSQL shell

2009-06-08 Thread Scott Mead
On Mon, Jun 8, 2009 at 1:30 PM, Madison Kelly  wrote:
>
>
> That works like a charm, thank you!


  No problem :)

>
>
> Next question though;
>
> How can I get it to save my custom prompt across sessions/server restarts?
> It there something equivalent to '.bash_profile'?
>

  ~/.psqlrc

--Scott


Re: [GENERAL] running pg_dump from python

2009-06-14 Thread Scott Mead
On Sun, Jun 14, 2009 at 4:06 PM, Garry Saddington <
ga...@schoolteachers.co.uk> wrote:

> I have the following python file that I am running as an external method in
> Zope.
>
> def backup():
>   import  os
>   os.popen("c:/scholarpack/postgres/bin/pg_dump scholarpack  >
> c:/scholarpack/ancillary/scholarpack.sql")


Have you tried running that command on the command line by itself (as
the same user that runs the python script)?  If that gives you the same result,
then you know for sure that it's a function of the pg_dump options and not
the python script.

Are you looking for the full SQL of the scholarpack database?  What user
is this running as?  Remember, in your case, pg_dump is going to try to
connect as the OS username running your script.  you may want to include the
username option to pg_dump:

 pg_dump -U  scholarpack

 Try running that on the commandline first, by itself, as the same user
that runs the python script.  If it works, then you know for sure that any
problems from here on out are just a function of the python script and not
pg_dump itself.

--Scott


Re: [GENERAL] nagios -- number of postgres connections

2009-06-16 Thread Scott Mead
On Tue, Jun 16, 2009 at 11:32 PM, John R Pierce  wrote:

> Whit Armstrong wrote:
>
>> anyone know a way to get nagios to monitor the number of postgres
>> connections?
>>
>>
>
  You could also log in to the database and run:

   select count(1) from pg_stat_activity;

--Scott


Re: [GENERAL] Controlling proliferation of postgres.exe processes

2009-06-23 Thread Scott Mead
2009/6/23 Radcon Entec 

> Greetings!
>
> At the current moment, our customer's computer has 22 instances of
> postgres.exe running.  When a colleague checked a few minutes ago, there
> were 29.  Our contract specifies that we cannot consume more than 40% of the
> computer's memory, and we're over that level.  When does an instance of
> postgres.exe get created, and how can we make sure we create only the
> minimum number necessary?
>
> Thanks very much!
>
> RobR
>

  Every single time someone connects to the database, one of those things
will start.  So if you have a new user in the app, and your app is 1 db
connection per user, then you will have a postgres.exe for each one of them
on top of 3 or 4 that the system is using.

  As far as memory, the way the task manager shows memory is slightly
misleading.  Each process is not actually taking up the full amount shown.
 The reason for this is that postgres is sharing a large memory area between
multiple postgres.exe's, but Windows (and Linux, Solaris, etc. for that
matter) all think that each postgres.exe has taken that much memory when
in reality there is only one shared memory segment that is that large. In
other words, you probably are well below your SLA.

   There are some apps that can help you decipher exactly how much memory you
are really using; this has been talked about before:
http://archives.postgresql.org/pgsql-general/2009-06/msg00729.php

Good luck

--Scott


Re: [GENERAL] Replication

2009-06-23 Thread Scott Mead
On Tue, Jun 23, 2009 at 10:07 PM, Craig Ringer
wrote:

> Thomas Kellerer wrote:
> > Mike Christensen wrote on 23.06.2009 19:37:
> >> Does anyone actually have that (any node can go down and the others
> still
> >> replicate amongst themselves?)
> >
> > I think this is what Oracle promises with their RAC technology.
>
> Isn't RAC a shared-storage cluster?


 Shared-

   Storage
   Memory
   Listener

   Storage : Obvious
   Memory: Cache fusion maintains buffer consistency across the network
   Listener: Load balancing and failover of connections are handled
transparently by TNS (transparent network substrate).

--Scott

>
>
> --
> Craig Ringer
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


Re: [GENERAL] [BUGS] Integrity check

2009-06-24 Thread Scott Mead
2009/6/23 David Fetter 

> On Tue, Jun 23, 2009 at 03:38:35PM +0800, Prasad, Venkat wrote:
> > Hello,
> >
> > Please can you assist on following questions.
>
> This is an issue for pgsql-general, where I'm redirecting this.
> >
> > * do you any tool to check postgreSQL database integrity check?
>
> No more than Oracle does.  We get it right in the first place.  The
> existence of "integrity checking" tools means the DBMS is done with
> high incompetence.


   It depends on what you mean by integrity.  If you mean "Does the database
have the data I THINK it's supposed to have".. that's all up to you when you
design your schema (FK's, Unique, etc..) and your app developers:

'Where thou wouldst type foo, someone else would surely type
supercalafragilisticexpialadocious'
(paraphrased)


   If you're asking "Does the database have the ability to verify that
whatever is in a block is what was put into that block", then Oracle has
block check-summing (I'm not sure if this got into PG 8.4 or not...) .  I'm
not sure if there is a checksum on the datafiles themselves, but I would
guess in the affirmative when talking Oracle.


>
> > * how do we confirm that dump file is proper data?
>

  This is talking about "Does the database have what I think it
should?" and can't really be done unless you had some auditing system
setup to compare it to.

--Scott


Re: [GENERAL] masking the code

2009-06-29 Thread Scott Mead
On Fri, Jun 26, 2009 at 3:37 PM,  wrote:

>
>
> I completely agree w/ HArald. Its not something we'd want to see in an open
> source product. That said, I saw yesterday that the latest version of
> EnterpriseDB has this feature.  So if you want to protect your own IP, then
> you've got to purchase someone else's.
>

   Release 2 of our proprietary Advanced Server 8.3 does include the
'edbwrap' functionality.  It was included based on demand from ISV's who are
used to wrapping their code when distributing an app.

   It is important to note (as many people have already pointed out) that
both EnterpriseDB and Oracle's wrap functionality is declared as a 100%
guarantee that nobody can read your code.  As with many different types of
security (i.e. the 3 foot high fence) this is really just a deterrent to
most people who either aren't capable of reverse engineering or are just not
interested in the first place.

http://www.enterprisedb.com/docs/en/8.3R2/oracompat/EnterpriseDB_OraCompat_8.3-211.htm#P15495_739546
http://www.databasejournal.com/features/oracle/article.php/3382331/Oracles-Wrap-Utility.htm
http://download-west.oracle.com/docs/cd/B10501_01/appdev.920/a96624/c_wrap.htm



--Scott


Re: [GENERAL] masking the code

2009-06-29 Thread Scott Mead
On Mon, Jun 29, 2009 at 9:35 AM, Jonah H. Harris wrote:

> On Mon, Jun 29, 2009 at 9:31 AM, Scott Mead 
> wrote:
>
>>
>>It is important to note (as many people have already pointed out) that
>> both EnterpriseDB and Oracle's wrap functionality is declared as a 100%
>> guarantee that nobody can read your code.  As with many different types of
>> security (i.e. the 3 foot high fence) this is really just a deterrent to
>> most people who either aren't capable of reverse engineering or are just not
>> interested in the first place.
>>
>
> s/is declared/is NOT declared/g


  Yes!  Jeez, this cold is getting to me.  Thanks Jonah... :

  It is important to note (as many people have already pointed out) that
both EnterpriseDB and Oracle's wrap functionality is NOT declared as a 100%
guarantee that nobody can read your code.

--Scott

>
>
> :)
>
> --
> Jonah H. Harris, Senior DBA
> myYearbook.com
>
>


Re: [GENERAL] pasting into psql garbles text

2009-06-29 Thread Scott Mead
On Mon, Jun 29, 2009 at 10:49 AM, Cédric Villemain <
cedric.villem...@dalibo.com> wrote:

> Le samedi 27 juin 2009, Merlin Moncure a écrit :
>
> >
> > Interestingly, the one platform that tends not to suffer from this is
> > windows so I'm guessing this is a readline problem.  Has anybody else
> > noticed this? Is there a workaround?
>

  Whenever I have a huge chunk of text to paste into psql, I'll drop to an
editor with \e, then paste it, then close the editor (I have $EDITOR=vim).
 Just a thought.

--Scott


Re: [GENERAL] permissions / ACLs made easier?

2009-06-29 Thread Scott Mead
On Mon, Jun 29, 2009 at 1:01 PM, Jeff Davis  wrote:

>
>
> CREATE USER read_only_user
>  GRANT (SELECT ON TABLE, USAGE ON SCHEMA) FROM admin_user;
>
> "read_only_user" would automatically have SELECT privileges on any table
> that admin_user has SELECT privileges on, and automatically have USAGE
> privileges on any schema that admin_user has privileges on.


So, you're proposing the ability to inherit privileges from another user /
role?  That could be useful, but typically, when I have lots of roles
hanging around, their privileges are mutually exclusive to the point where
this won't help.


>
> The benefits are:
>  * you can create a new role after the fact, and you don't have to
>   issue GRANT statements for every object in the database


  Interesting for sure, but now in your example, I have to write a
separate grant for the three or four tables in the schema that shouldn't be
read by read_only_user (i.e. SSNs or other top-secret stuff that
read_only_user shouldn't see).  And if I'm a novice, I could easily get
confused and give the world the ability to see what it really should not be
seeing, just because I took the short route.  Personally, I'd prefer being
forced to write individual grants, just to be sure I know what has
privileges on what.
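
  For example (a minimal sketch with made-up schema, table, and role names),
the explicit approach is just:

   -- grant exactly what read_only_user may see...
   GRANT USAGE ON SCHEMA app TO read_only_user;
   GRANT SELECT ON app.orders, app.products TO read_only_user;
   -- ...and simply never grant the sensitive table
   -- (and make sure PUBLIC has nothing on it either)
   REVOKE ALL ON app.employee_ssn FROM PUBLIC;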


>  * you can create new objects without needing to issue appropriate
>   GRANT statements for each user


  One of the things I've always appreciated about pg is that you have to
be explicit about your permissions.  However, making things slightly easier
isn't necessarily a bad thing.


>  * you can easily see the permissions/ACLs you have set up without
>   inspecting each object


Maybe I'm missing this part of what you're proposing.  Honestly, losing
object-level security is more of a concern for me than being forced to write
a pile of scripts.  Maybe having a tool (like pgAdmin or pg_dump, something
like pg_dump --privs_by_role <role>) generate a SQL script for the grants
that a role has would be more appropriate than a core change.
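
  Something close to that view of the world is already possible with a
simple query (a sketch; the role name here is hypothetical):

   -- list the table-level privileges a given role currently holds
   SELECT table_schema, table_name, privilege_type
     FROM information_schema.role_table_grants
    WHERE grantee = 'read_only_user'
    ORDER BY table_schema, table_name;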


>
>
> This scheme only helps when you have broad roles, like the
> admin/normal/read-only I listed above, and you don't complicate things
> with lots of exceptions. It's flexible enough that you can use it in
> interesting ways with groups and individual GRANT statements, but by
> that time the simplicity of this feature is most likely lost.


  Agreed.


>
>
> With that in mind, who out there would really use this feature?
>
> 1. If you aren't using separate roles now, would you be more likely to
> do so with a feature like this?


   Not likely.  The people I've worked with in the past tend to already be
role or non-role shops based on dev practices, DBA experience, etc...  I
think this would just be another feature that would get a 'huh, neat' type
of response.


>
>
> 2. If you are using multiple roles currently, would this feature
> simplify the management of those roles and their privileges?


   Not really.  As above, I think that most [well-designed] RBAC solutions
have enough mutual exclusivity that permissions inheritance at user-creation
time could complicate things.

--Scott


Re: [GENERAL] Postgres online backup and restore

2009-06-30 Thread Scott Mead
On Thu, Jun 25, 2009 at 9:40 AM, Chris Barnes  wrote:

>  Sorry if posting twice, wasn’t part of general when sent and didn’t see
> it received by group.
>
>   I started an online backup of postgres, tar’d my data folder, copy to usb
> drive in production
> and restored it into my RC environment. Have I missed something important?
>
> Online Backup
>
> psql
>
> checkpoint;
>
> SELECT pg_switch_xlog();
>
> SELECT pg_start_backup('postres_full_backup_June222009');
>
> tar -czvf pgprd01_June22_2009_production.dmp.tar.gz data/
>
> SELECT pg_stop_backup();
>
> Restore
>
> tar -xzvf pgprd01_June22_2009_production.dmp.tar.gz
>
>
>
> When starting the database I receive many errors that look like the backup
> was corrupted.
>

You missed the all-important step of also taking your archived log files to
the remote environment and creating the recovery.conf file:

 http://www.postgresql.org/docs/8.3/static/continuous-archiving.html
Note section 24.3.3.1
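
Roughly, that means archiving WAL on the server you're backing up and
pointing the restored copy back at that archive.  A minimal sketch (the
archive path is made up; see the docs above for the details):

   # postgresql.conf on the server being backed up (8.3):
   archive_mode = on
   archive_command = 'cp %p /mnt/backup/wal_archive/%f'

   # recovery.conf in the restored data directory:
   restore_command = 'cp /mnt/backup/wal_archive/%f %p'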

--Scott


Re: [GENERAL] Python client + select = locked resources???

2009-07-01 Thread Scott Mead
On Wed, Jul 1, 2009 at 12:21 PM, johnf  wrote:

> On Monday 29 June 2009 09:26:24 am Craig Ringer wrote:
> > Try connecting to the database with psql and running
> >   "select * from pg_stat_activity"
> > while the web app is running. You should see only "IDLE" or working
> > connections, never idle in transaction. If you have anything idle in a
> > transaction for more than a few moments you WILL have problems, because
> > if those transactions have SELECTed from the table you're trying to
> > alter they'll hold share locks that will prevent ALTER TABLE from
> > grabbing an exclusive lock on the table.
>
> I used psql to run the query "select * from pg_stat_activity;" and it shows
> three (3) lines: the select statement, <IDLE>, and the last one is <IDLE> in
> transaction.  No one else is using the database because it's on my machine.
> So can you explain why I have an "<IDLE> in transaction" listed, and does it
> mean I can't alter the table from some other program like pgAdmin3???


   The <IDLE> in transaction session that you see could be causing blocks.
 That's probably coming from the application that you're using.  You need to
disconnect that session and make sure that your code is either using
autocommit or explicitly ending its transactions.

The <IDLE> sessions are the other connections to the database (pgAdmin,
psql, etc...)
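
A quick way to spot the offender (a sketch; these are the pre-9.2 column
names, i.e. procpid and current_query):

   SELECT procpid, usename, xact_start, current_query
     FROM pg_stat_activity
    WHERE current_query = '<IDLE> in transaction';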



>
> Also "commit" or "rollback" gives a warning "there is no transaction in
> progress".  So what gives?


If you log in with psql, you have your own session.  You cannot commit or
roll back another session from yours.  psql runs in autocommit mode; if you
want a COMMIT or ROLLBACK to do anything, you start by typing:

   begin;
   -- ... your statements ...
   commit;   or   rollback;

>
>
>
> --
> John Fabiani
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


Re: [GENERAL] Trying to find a low-cost program for Data migration and ETL

2009-07-07 Thread Scott Mead
On Tue, Jul 7, 2009 at 11:48 AM, Rstat  wrote:

>
>
> Hi, Im building a database for my company. We are a rather small size book
> company with a lot of references and still growing.
>
> We have a Mysql database here and are trying to find some good tools to use
> it at its best. Basically we are just starting up the database after
> dealing
> with Excel: we had a size problem… So im trying to find a program that will
> allow us to do two different things: the migration of our data from the old
> system to the new one and a specialized software to perform ETL (Extract,
> transform and load) on our database.
>
> About the price of the tools, if we were one year ago, the accounting
> department would have been pretty relaxed about this. But today, we have
> some budget restrictions and therefore need a low cost tool. So could you
> give me some advice on a good data migration and etl tool for a low cost?
>
> Thanks for your help.


  You may have some luck by viewing a similar thread on another mailing list:

http://www.theserverside.net/discussions/thread.tss?thread_id=54755


-- Scott


Re: [GENERAL] Trying to find a low-cost program for Data migration and ETL

2009-07-07 Thread Scott Mead
On Tue, Jul 7, 2009 at 1:26 PM, Scott Mead wrote:

>
>
>
>   You may have some luck by viewing a similar thread on another mailing list:
>
> http://www.theserverside.net/discussions/thread.tss?thread_id=54755
>


  That being said, I would highly recommend using:

http://www.sql-workbench.net/

  as a tool to help with migrations.  Also, a quick Google search for "ETL
MySQL" or "ETL Postgres" turns up many different companies (both software
and consulting) that can quickly and easily build you a solution.

--Scott


Re: [GENERAL] Start With... Connect By?

2009-07-13 Thread Scott Mead
On Mon, Jul 13, 2009 at 2:05 AM, Philippe Lang  wrote:

> Hi,
>
> Now that Postgresql 8.4 has a "WITH RECURSIVE" operator (thanks! :)), is
> there a chance we can see one day "START WITH... CONNECT BY" in
> Postgresql, or is that something 100% oracle-specific?


There is a commercial / proprietary version of the pg database called
'EnterpriseDB Advanced Server' that supports this.
www.enterprisedb.com

--Scott


Re: [GENERAL] pg_dump of a big table

2009-07-14 Thread Scott Mead
On Mon, Jul 13, 2009 at 3:29 PM, Sam Mason  wrote:

> On Mon, Jul 13, 2009 at 06:57:43PM +, Nelson Correia wrote:
> > Running pg_dump from another machine needs much space on the DB
> > host? Or it just outputs the data as it goes?
>
> pg_dump should use very little space on the server, it just streams it
> out to where ever you tell it.  You could run pg_dump on another host,
> or do something like:
>
>  pg_dump mydb | gzip | ssh otherbox "cat > out.sql.gz"


   Yes, from the other machine, run:

  [u...@notmyhost ]$ pg_dump -h myhost -U mydbuser mydb > out.sql

--Scott


Re: [GENERAL] [Q] single image Table across multiple PG servers

2009-07-14 Thread Scott Mead
On Tue, Jul 14, 2009 at 1:16 AM, V S P  wrote:

> Hello
>
> I am researching how to store the data for easy 'user-driven'
> reporting (where I do not need to develop application for
> every user request).
>
> The data will typically be number ranges and text strings with
> unique Id for each row
>
> I hope there will be a lot of data :-).
>
> So in that anticipation I am looking for a way
> to allow
> SQL/ODBC access to the data
>
> but in a way that each table resides on more than one
> PG server


Take a look at open-source GridSQL: http://sourceforge.net/projects/gridsql/

   It'll do what you want and provide parallel query across your nodes.

--Scott


[GENERAL] Log timings on Windows 64

2009-07-20 Thread Scott Mead
Hey all,
   I'm looking at Windows, and a couple of quick Google searches didn't give
me anything.

If I set log_min_duration_statement < 16 ms, I get one of two values for my
pgbench runs (pretty quick statements): 0, or 16 ms (sometimes 15.999).  If I
benchmark some other way (via my app), things look the same.  If I set my
log_line_prefix to %m instead of %t, the time increments in 16 ms chunks.
I'm guessing this is an OS limitation on Windows.  Does anybody else have
experience here (or is anyone aware of a previous bug)?
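
For reference, the settings I'm playing with look like this (the values here
are just examples):

   log_min_duration_statement = 5    # ms; anything taking >= 5 ms gets logged
   log_line_prefix = '%m '           # %m = timestamp w/ milliseconds, %t = seconds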

Thanks

--Scott

