Re: Information on savepoint requirement within transactions

2018-02-01 Thread Robert Zenz
On 31.01.2018 19:58, David G. Johnston wrote:
> ​Now that I've skimmed the tutorial again I think pointing the reader of
> the SQL Commands there to learn how it works in practice is better than
> trying to explain it in BEGIN and/or SAVEPOINT.

That seems like a good idea, yeah.

> I decided to add a title to the part of SAVEPOINTS and introduce the term
> "Sub-Transaction" there though I'm not married to it - re-wording it using
> only "savepoint" is something that should be tried still.

Technically, it *is* a sub-transaction; savepoints are just the means to achieve it.
I think of the sub-transaction as the concept and the savepoint as the implementation.
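
For illustration, a minimal sketch of what I mean (table and values are made up):

    BEGIN;
    INSERT INTO accounts VALUES (1, 100);
    SAVEPOINT sp1;                      -- the sub-transaction starts here
    UPDATE accounts SET balance = 0 WHERE id = 1;
    ROLLBACK TO SAVEPOINT sp1;          -- undoes only the work done since sp1
    COMMIT;                             -- the INSERT before sp1 still commits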

> A title and a paragraph or two on aborted transaction behavior probably
> should be added as well.

I'd like that. I might be able to type something up, though I'm currently a
little bit short on time, so don't wait for me please.

Just so you're aware, you have two typos in there: "61: tranasctions" and "106:
implment". Also I'd like to take the opportunity to agree with Laurenz here:
"pseudo" seems to be misplaced, they *are* sub-transactions.

Master-Slave error: the database system is starting up

2018-02-01 Thread Сергей Злобин
Hello,

I was advised to write a letter to you

Windows Server 2012
Postgres 9.6
I set up replication from the master server. On the slave server the postgres
service is running, but the slave server's log files keep repeating the message
"FATAL:  the database system is starting up".

When I try to run the following command from the command line

psql.exe -U postgres
it reports the following error:

psql: ... Connection refused (0x274D/10061) ...
My postgresql.conf configuration on the slave server:

listen_addresses = 'slave ip address'
hot_standby = on
wal_receiver_status_interval = 0
hot_standby_feedback = on
My recovery.conf configuration on the slave server:

standby_mode = 'on'
primary_conninfo = 'host=ip master server port=5432 user=postgres'
primary_slot_name = 'slot_1'
trigger_file = 'D:\Postgres Data\Start_master'
Could you please help me fix this issue? I don't understand what the
problem could be. I have shut down and restarted the server, but the problem
is not resolved.

Thank you very much

http://www.postgresql-archive.org/Master-Slave-error-the-database-system-is-starting-up-td6004044.html


Re: Master-Slave error: the database system is starting up

2018-02-01 Thread Andreas Kretschmer



On 01.02.2018 at 15:43, Сергей Злобин wrote:

Hello,

I was advised to write a letter to you

Windows Server 2012
Postgres 9.6
I set up replication from the master server. On the slave server the postgres
service is running, but the slave server's log files keep repeating the message
"FATAL:  the database system is starting up".

When I try to run the following command from the command line

psql.exe -U postgres
it reports the following error:

psql: ... Connection refused (0x274D/10061) ...
My postgresql.conf configuration on the slave server:

listen_addresses = 'slave ip address'
hot_standby = on
wal_receiver_status_interval = 0
hot_standby_feedback = on
My recovery.conf configuration on the slave server:

standby_mode = 'on'
primary_conninfo = 'host=ip master server port=5432 user=postgres'
primary_slot_name = 'slot_1'
trigger_file = 'D:\Postgres Data\Start_master'
Could you please help me fix this issue? I don't understand what the
problem could be. I have shut down and restarted the server, but the problem
is not resolved.

Thank you very much

http://www.postgresql-archive.org/Master-Slave-error-the-database-system-is-starting-up-td6004044.html


Please execute "select * from pg_stat_replication;" and "select * from
pg_replication_slots;" on the master and show us the output.
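
For reference, a sketch of the columns worth looking at (assuming the 9.6 column names):

    -- on the master
    SELECT pid, application_name, client_addr, state,
           sent_location, replay_location
      FROM pg_stat_replication;

    SELECT slot_name, slot_type, active, restart_lsn
      FROM pg_replication_slots;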

Please also check the logs on the standby.

Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com




Re: PG Sharding

2018-02-01 Thread Dan Wierenga
On Wed, Jan 31, 2018 at 7:48 PM, Steven Lembark  wrote:

> On Mon, 29 Jan 2018 15:34:18 +0100
> Matej  wrote:
>
> > Hi Everyone.
> >
> > We are looking at a rather large fin-tech installation. But as
> > scalability requirements are high we look at sharding of-course.
> >
> > I have looked at many sources for Postgresql sharding, but we are a
> > little confused as to shared with schema or databases or both.
>
> Suggest looking at the Xreme Data product. It is a parallel,
> shared-nothing implementation of PG that should solve your
> needs rather nicely.
>
> You can see a description of their product at
> https://xtremedata.com/
>
> Happy scaling :-)
>
>
Having been a production DBA for both the DBX (XtremeData) and the
Greenplum MPP database platforms, I'd say Greenplum is far superior to DBX.
Issues with the GP master node being a single point of failure are solved
by a secondary master node and automatic failover technology, e.g.
keepalived.

But, it sounds like the OP is not really looking for the kind of scale that
an MPP solution provides, but rather the kind of scale that is typically
solved by a service-orchestration suite.  I don't think that "a rather
large fin-tech installation" with "high scalability requirements" is really
enough detail to give a recommendation on orchestration software.

-dan


What is your psql tip?

2018-02-01 Thread Joshua D. Drake

Folks,

So today I was digging into the backend of PostgresConf.Org which runs 
on a highly modified version of OSEM (which is an awesome conference 
platform). I was very quickly losing my mind because the content in the 
public schema did not match what I was seeing on the website. I checked 
connection strings, looked for other databases in the cluster and I even 
checked to see if there was another version of PostgreSQL running. For 
the life of me, I couldn't figure out where the heck the data went. So 
here is my psql tip:


Remember \dn 

My problem was that the latest version of the app actually runs within a
schema titled postgresconf rather than public. What is worse, of course, is
that I remember requesting that change. That said, that is my psql tip for
the day: remember to check things like \dn if you don't recall exactly
where you may have put something!
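
Something along these lines (the schema name here is from my case, yours will differ):

    -- list the schemas in the current database
    \dn

    -- then point the search path at the schema the app actually uses
    SET search_path TO postgresconf, public;
    \dt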


Jolly Good,

JD

--
Command Prompt, Inc. || http://the.postgres.company/ || @cmdpromptinc

PostgreSQL centered full stack support, consulting and development.
Advocate: @amplifypostgres || Learn: https://postgresconf.org
* Unless otherwise stated, opinions are my own.   *




Searching Postgresql Database with psycopg2

2018-02-01 Thread brandon wallace
I have created a function to search a Postgresql database but it only pulls up the first row of the database no matter what value I search for. Why would it only pull up the first row of the database? I am using Psycopg2 2.6 and Python 3.5.

All my other functions to update, add, delete, view all rows are working fine.
I am able to run queries on the database and only pull up the rows I need using
the psql command line, no problem.

Here is my function.

from tkinter import *
import psycopg2

 

def search(title=None, isbn=None, year=0):
    '''Search the database rows'''
    conn = psycopg2.connect("dbname='books'")
    cur = conn.cursor()
    cur.execute("SELECT books.title, books.isbn, books.year FROM books WHERE books.title = title OR books.isbn = books.isbn OR books.year = year")
    rows = cur.fetchone()
    conn.close()
    return rows

Re: Searching Postgresql Database with psycopg2

2018-02-01 Thread Rob Sargent
> On Feb 1, 2018, at 1:50 PM, brandon wallace  wrote:
> 
> I have created a function to search a Postgresql database but it only pulls 
> up the first row of the database no matter what value I search for. Why would 
> it only pull up the first row of the database? I am using Psycopg2 2.6 and 
> Python 3.5.
> All my other functions to update, add, delete, view all rows are working 
> fine. I am able to run queries on the database and only pull up the rows I 
> need using psql command line no problem.
>  
>  
> Here is my function.
>  
> from tkinter import *
> import psycopg2
>  
> def search(title=None, isbn=None, year=0):
> '''Search the database rows'''
> conn = psycopg2.connect("dbname='books'")
> cur = conn.cursor()
> cur.execute("SELECT books.title, books.isbn, books.year FROM books WHERE 
> books.title = title OR books.isbn = books.isbn OR books.year = year")
> rows = cur.fetchone()

The “fetchone()” probably has a lot to do with it.

>  conn.close()
> return rows
>  
>  
>  



Re: Searching Postgresql Database with psycopg2

2018-02-01 Thread David Kohn
Because you're only getting the first row of the query result, even though
the cursor that contains the result (should be server side in this case)
does have more rows.
On Thu, Feb 1, 2018 at 3:50 PM brandon wallace  wrote:

> I have created a function to search a Postgresql database but it only
> pulls up the first row of the database no matter what value I search for.
> Why would it only pull up the first row of the database? I am using
> Psycopg2 2.6 and Python 3.5.
> ...
> def search(title=None, isbn=None, year=0):
> '''Search the database rows'''
> conn = psycopg2.connect("dbname='books'")
> cur = conn.cursor()
> cur.execute("SELECT books.title, books.isbn, books.year FROM books
> WHERE books.title = title OR books.isbn = books.isbn OR books.year = year")
> rows = cur.fetchone()
>
There's your problem. You'll want to use fetchall() if you want to get all
the rows; see
http://initd.org/psycopg/docs/cursor.html#cursor.fetchone

> conn.close()
> return rows
>
best,
D
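
For completeness, a rough sketch of the function using fetchall() and bound
parameters (untested; the parameter handling shown is just one way to do it):

    import psycopg2

    def search(title=None, isbn=None, year=0):
        '''Search the database rows'''
        conn = psycopg2.connect("dbname='books'")
        cur = conn.cursor()
        # Bind the Python values as query parameters instead of comparing
        # the columns to themselves, and fetch every matching row.
        cur.execute(
            "SELECT title, isbn, year FROM books"
            " WHERE title = %s OR isbn = %s OR year = %s",
            (title, isbn, year))
        rows = cur.fetchall()
        conn.close()
        return rows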


Rolls

2018-02-01 Thread Andrew Bartley
Hi all,

I am trying to work out a way to create a roll/user that can only execute
one particular function and nothing else.  The particular function has been
created with "SECURITY DEFINER".

Regards

Andrew Bartley


Re: Rolls

2018-02-01 Thread Rob Sargent



On 02/01/2018 02:22 PM, Andrew Bartley wrote:

Hi all,

I am trying to work out a way to create a roll/user that can only 
execute one particular function and nothing else. The particular 
function has been created with "SECURITY DEFINER".


Regards

Andrew Bartley



And I thought you were offering hot cinnamon buns or some such.



Re: Rolls

2018-02-01 Thread Andrew Bartley
Sorry Roles


On Fri, 2 Feb 2018 at 08:29 Rob Sargent  wrote:

>
>
> On 02/01/2018 02:22 PM, Andrew Bartley wrote:
> > Hi all,
> >
> > I am trying to work out a way to create a roll/user that can only
> > execute one particular function and nothing else. The particular
> > function has been created with "SECURITY DEFINER".
> >
> > Regards
> >
> > Andrew Bartley
> >
> >
> And I thought you were offering hot cinnamon buns or some such.
>
>


Re: Rolls

2018-02-01 Thread David G. Johnston
On Thursday, February 1, 2018, Andrew Bartley  wrote:

> Hi all,
>
> I am trying to work out a way to create a roll/user that can only execute
> one particular function and nothing else.  The particular function has been
> created with "SECURITY DEFINER".
>

Never tried it but "REVOKE PUBLIC FROM role" then "GRANT ... TO role" would
ideally work.

Not simple since every role is a member of PUBLIC from which they all
inherit useful defaults.  You can remove those defaults and the already
granted privileges from PUBLIC and then add them back to some super-role
group that everyone but this user belongs to.  Then add only the one grant
you desire to this user.
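
A rough sketch of that approach (role, database, schema and function names are
hypothetical, and the EXECUTE grant needs the argument types):

    -- strip the defaults
    REVOKE ALL ON DATABASE mydb FROM PUBLIC;
    REVOKE ALL ON SCHEMA public FROM PUBLIC;
    REVOKE EXECUTE ON ALL FUNCTIONS IN SCHEMA public FROM PUBLIC;

    -- grant back only what the restricted role needs
    CREATE ROLE restricted_user LOGIN;
    GRANT CONNECT ON DATABASE mydb TO restricted_user;
    GRANT USAGE ON SCHEMA public TO restricted_user;
    GRANT EXECUTE ON FUNCTION the_one_function(integer) TO restricted_user;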

David J.


Re: Recreating functions after starting the database server.

2018-02-01 Thread raf
Konrad Witaszczyk wrote:

> Hi,
> 
> I'm considering changing my database schema upgrade routines to recreate
> functions and all objects depending on them (e.g. triggers, views) just after
> the database server is started. It would make the routines much simpler since
> they wouldn't have to depend on the history of the schema.
> 
> Does anyone have any experience with recreating all functions and triggers to
> upgrade a database schema assuming that during the upgrade there are no client
> connections to the database?
> 
> Does anyone see any problems with such an approach in terms of consistency and
> performance? I'm not familiar with PostgreSQL internals and I'm not sure how it
> would affect the planner when using various function volatility categories.
> 
> Konrad

Hi,

I think the usual way to do this is to have a table containing a
schema version number that can be consulted to work out where
the schema is up to so as to know which schema migration scripts
still need to be run to bring the database up to date. This is
useful for upgrading a system on multiple database servers. All
the cool kids would be doing this. However, I think this is more
common with ORMs where all the code lives outside the database.
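
A minimal sketch of that kind of bookkeeping (names are purely illustrative):

    CREATE TABLE schema_version (
        version    integer PRIMARY KEY,
        applied_at timestamptz NOT NULL DEFAULT now()
    );

    -- each migration script records itself, e.g.:
    INSERT INTO schema_version (version) VALUES (42);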

I often have to load old database backups that need to be
brought up to date so that the current software will work with
it. I have date/time stamps in the filename of the database
backup and my script to load the backup consults a directory
full of all schema migration scripts that are also date/time
stamped. It then applies the schema migration scripts that are
dated after the time that the backup was made.

After applying all the schema migration scripts, it runs another
script to audit the state of the database functions with respect
to the "source". It then reloads any functions that differ from
the current source. I don't use views or triggers but the same
idea would apply.

My main advice is to not call any functions (or rely on any
views or triggers) inside schema migration scripts because you
never know if they'll exist or not when they are needed.

Also, if any update scripts massage data rather than just change
the schema, don't assume that the data exists just because it
exists when you wrote the migration script. The script might run
on a very old backup taken before the data existed. Or at least
write migrations that avoid producing errors if the data isn't
present. Minimise the output so that errors don't go unnoticed.

I'd also recommend writing a tool that compares the functions,
triggers and views in the database against the code for them on
disk somewhere and reload the ones that differ from source. Once
you have such an audit tool, it's very easy to see what needs to
be reloaded and the tool can do that too.
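
A sketch of the sort of query such a tool might start from (just one way to
pull the current definitions out of the database for comparison against the
source on disk):

    SELECT n.nspname, p.proname, pg_get_functiondef(p.oid) AS definition
      FROM pg_proc p
      JOIN pg_namespace n ON n.oid = p.pronamespace
     WHERE n.nspname NOT IN ('pg_catalog', 'information_schema')
       AND NOT p.proisagg;  -- pre-v11 catalogs; aggregates have no plain definition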

A very important part of such an audit tool is that it
(probably?) needs to perform a topological sort on the
functions, triggers and views to load them in a sensible order.
Inspect the source for them all looking for references between
them and do a topological sort so you can load things that are
depended on before loading the things that depend on them. This
might not actually be necessary but I think it is (or I wouldn't
have bothered doing it unless it's cargo cult behaviour on my
part from Sybase days). :-)

I use the function auditing tool to load new or changed
functions on multiple servers all the time but I only apply old
schema migration scripts when loading backups.

One last thing, if you need to keep very old backups
indefinitely for legal reasons, make a point of reloading them
every year to apply the last year's worth of schema changes and
make a backup of that updated version. It makes it less likely
that bit rot will ruin your day in the distant future. But also
keep the original backup because sometimes you need the backup
as it was without any changes made after the fact. The ancient
version won't work with current application code but that can be
the problem of whoever has demanded the data in its original
form.

I've never encountered any problems with consistency and
performance issues in doing something like this but I only have
small databases so that doesn't mean there won't be any.

Good luck!

cheers,
raf




Re: Issue with WAL logs temporary not replaying

2018-02-01 Thread Tomas Vondra


On 02/01/2018 08:17 AM, Eugene Pirogov wrote:
> Tomas,
> 
> Indeed, I haven't put enough detail in the original post. That is a fair
> point, thank you. Let me follow up on each of your comments.
> 
> unfortunately your message is missing some pretty important
> information, like for example which OS you're running, what
> PostgreSQL version are you using (and if you built it on your own or
> installed from some package repository).
> 
> 
> We are using Docker container based on Alpine 3.5. It has a custom-built
> PostgreSQL 9.6.3 installation. The customization is negligible, e.g. we
> did not alter configure and make process in any way. We only included
> pglogical extension. Here are the very sources for our image
> – 
> https://github.com/edenlabllc/alpine-postgre/blob/master/pglogical/Dockerfile
> 
> . The Dockerfile is based on the "official" postgres image 9.6.3 from
> docker library,
> e.g. 
> https://github.com/docker-library/postgres/tree/a554d043a3b77937120d325b30fefaad2e3be12d/9.6/alpine
> 
> .
> 
> Also, you mention replicas. What type of replication are you using?
> The built-in streaming replication, or something else?
> 
> 
> We use logical replication. We are interested in replicating selected
> columns from tables across several databases in our kubernetes cluster.
> For this we are using pglogical REL2.0.1, built with patch that allows
> it to compile under Alpine. Here's the
> patch: 
> https://github.com/alpinelinux/aports/commit/f4f856564dfb2589c1800bd51322f4eadfc67cf1
> 
> 
> > Firstly, I very much doubt using pg_resetxlog is a good idea, 
> especially when it's rather unclear what was the actual root cause.
> 
> 
> Noted. In hindsight I think our best course of action would have been to
> make a copy of PG_DATA table.

Yeah, that would be helpful, but it's too late to do that.

> 
> Secondly, I don't quite understand what you mean by "recovery" here.
> The regular recovery process (say, after a crash) makes it
> impossible to connect to the database, so I'm not sure what you mean
> by running vacuum analyze (not to mention how it could have improved
> the recovery speed).
> 
> 
> By recovery I mean a sort of self-healing of a running database. Let me
> make a little step back and explain this.
> 
> After several restart attempts, the DB would still not get up. Instead
> it was complaining about inability to "locate a valid checkpoint
> record". After we ran pg_resetxlog and tried to start the database
> again, the "locate a valid checkpoint record" error was gone, and we
> ended up with a 100% working database.
> 

OK. So which pg_resetxlog command did you use, exactly? There are
various options.

> From this point onwards we began issuing SELECT statements, once in
> several minutes, to see if the database sees the missing records. The
> records indeed began to slowly appear in our tables. I _think_ this was
> happening because the database began reading them from WAL, in the
> background. Again – at this point the database was (or seemed to be)
> fully operational. It was just catching up on data (this is my impression).
> 

That is not how WAL works. It's only read when the database is starting
after an unclean shutdown/crash. But it's not possible to connect to a
database in this recovery mode. And after the database gets into a
consistent state, it does not read WAL anymore.

Moreover, pg_resetxlog actually removes all the WAL segments, so there's
nothing to read ...

So the records were appearing for some other reason. Either they were being
inserted by some other process, or they were becoming visible for some other
reason (e.g. because their XIDs were initially from the "future" and the
database eventually generated them again).

> It seems as if pg_resetxlog removed some WAL segment (hence the
> mentioned "PANIC:  could not locate a valid checkpoint record" log),
> which initially caused the database to stuck and not read further WAL
> segments.
> 
> Well, considering you did pg_resetxlog ... I'm surprised you only
> lost two records, but it's a question what state are the indexes in etc.
> 
> 
> I never considered indexes to play part in the issue. Noted on that,
> thank you.
> 

Well, the pg_resetxlog docs actually say this:

After running this command, it should be possible to start the
server, but bear in mind that the database might contain
inconsistent data due to partially-committed transactions. You
should immediately dump your data, run initdb, and reload. After
reload, check for inconsistencies and repair as needed.

and I strongly recommend doing exactly that - dump/restore. The sooner
the better.
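
Roughly like this (paths and the service handling are only placeholders):

    pg_dumpall -U postgres > full_dump.sql   # dump everything while it still starts
    # stop the server and move the old data directory aside, then:
    initdb -D /path/to/new/data
    # start the server on the new data directory and reload:
    psql -U postgres -f full_dump.sql postgres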

> Man, that's an extremely noi

Re: pgaudit?

2018-02-01 Thread Devrim Gündüz

Hi,

On Fri, 2018-01-19 at 12:53 -0800, Adrian Klaver wrote:
> I cannot find it either, so I have to believe this is an oversight. You 
> might want to contact the Yum team directly:
> 
> https://yum.postgresql.org/contact.php

Just to follow this up: I pushed pgaudit12_10 (pgaudit 1.2 for PostgreSQL 10) 2
weeks ago.

Regards,
-- 
Devrim Gündüz
EnterpriseDB: https://www.enterprisedb.com
PostgreSQL Consultant, Red Hat Certified Engineer
Twitter: @DevrimGunduz , @DevrimGunduzTR



Re: pgaudit?

2018-02-01 Thread Simon Riggs
On 1 February 2018 at 23:46, Devrim Gündüz  wrote:
>
> Hi,
>
> On Fri, 2018-01-19 at 12:53 -0800, Adrian Klaver wrote:
>> I cannot find it either, so I have to believe this is an oversight. You
>> might want to contact the Yum team directly:
>>
>> https://yum.postgresql.org/contact.php
>
> Just to follow this up: I pushed pgaudit12_10 (pgaudit 1.2 for PostgreSQL 10) 
> 2
> weeks ago.

Very good, thank you Devrim.

-- 
Simon Riggs    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Create schema within a specific database from a script file

2018-02-01 Thread Abhra Kar
Hi,

   I have to write a script for psql in an xyz.sh file which should create a
schema in a specific database. At the psql prompt I achieve it like this --


postgres=# \c ABC

ABC=# create schema authorization myschema



 In xyz.sh I executed the following script ---

su -c "psql -c \"\c ABC  \"" postgres

su -c "psql -c \"create schema authorization myschema\"" postgres



In the terminal I got the message “connected to ABC database”. But the schema
was created within the postgres database, not within the ABC database.

What should be the proper script?



Thanks and Regards,

Abhra


Re: Create schema within a specific database from a script file

2018-02-01 Thread Melvin Davidson
On Thu, Feb 1, 2018 at 9:09 PM, Abhra Kar  wrote:

> Hi,
>
>I have to write script for psql in a xyz.sh file which should create a
> schema in a specific data.In psql prompt I achieve it like --
>
>
> postgres=# \c ABC
>
> ABC=# create schema authorization myschema
>
>
>
>  In xyz.sh I executed the following script ---
>
> su -c "psql -c \"\c ABC  \"" postgres
>
> su -c "psql -c \"create schema authorization myschema\"" postgres
>
>
>
> In the terminal got message “connected to ABC database”. But schema
> created with in postgres database not with in ABC database.
>
> What should be the proper script ?
>
>
>
> Thanks and Regards,
>
> Abhra
>
>

> But schema created within the postgres database, not within the ABC database.

Just specify the database in the command line:

EG: su -c "psql -d ABC"
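
Or combine it with the command you want to run, e.g. (one possible form):

    su -c "psql -d ABC -c \"create schema authorization myschema\"" postgres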


-- 
Melvin Davidson
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: Create schema within a specific database from a script file

2018-02-01 Thread Abhra Kar
sudo -u postgres createdb ABC

su -c "psql -d ABC"

if [ $3 == 'x' ]
then

su -c "psql -c \"grant all privileges on database ABC to ...\"" postgres

su -c "psql -c \"create schema authorization ...\"" postgres

else

su -c "psql -c \"grant all privileges on database ABC to ...\"" postgres

su -c "psql -c \"create schema authorization ...\"" postgres

fi


Hi Melvin,
  I executed the above script but the schema was created
within the postgres db, not within the ABC db. And the terminal didn't show the
message "connected to ABC database".  What am I doing wrong here?

Regards,
Abhra

On Fri, Feb 2, 2018 at 7:48 AM, Melvin Davidson 
wrote:

>
>
> On Thu, Feb 1, 2018 at 9:09 PM, Abhra Kar  wrote:
>
>> Hi,
>>
>>I have to write script for psql in a xyz.sh file which should create a
>> schema in a specific data.In psql prompt I achieve it like --
>>
>>
>> postgres=# \c ABC
>>
>> ABC=# create schema authorization myschema
>>
>>
>>
>>  In xyz.sh I executed the following script ---
>>
>> su -c "psql -c \"\c ABC  \"" postgres
>>
>> su -c "psql -c \"create schema authorization myschema\"" postgres
>>
>>
>>
>> In the terminal got message “connected to ABC database”. But schema
>> created with in postgres database not with in ABC database.
>>
>> What should be the proper script ?
>>
>>
>>
>> Thanks and Regards,
>>
>> Abhra
>>
>>
>
> > But schema created within the postgres database, not within the ABC database.
>
> Just specify the database in the command line:
>
> EG: su -c "psql -d ABC"
>
>
> --
> Melvin Davidson
> I reserve the right to fantasize.  Whether or not you
> wish to share my fantasy is entirely up to you.
>


Re: Create schema within a specific database from a script file

2018-02-01 Thread Melvin Davidson
On Thu, Feb 1, 2018 at 9:42 PM, Abhra Kar  wrote:

> sudo -u postgres createdb ABC
>
> su -c "psql -d ABC"
>
> if [ $3 == 'x' ]
> then
>
> su -c "psql -c \"grant all privileges on database ABC to ...\"" postgres
>
> su -c "psql -c \"create schema authorization ...\"" postgres
>
> else
>
> su -c "psql -c \"grant all privileges on database ABC to ...\"" postgres
>
> su -c "psql -c \"create schema authorization ...\"" postgres
>
> fi
>
>
> Hi Melvin,
>   I executed the above script but the schema was created
> within the postgres db, not within the ABC db. And the terminal didn't show the
> message "connected to ABC database".  What am I doing wrong here?
>
> Regards,
> Abhra
>
> On Fri, Feb 2, 2018 at 7:48 AM, Melvin Davidson 
> wrote:
>
>>
>>
>> On Thu, Feb 1, 2018 at 9:09 PM, Abhra Kar  wrote:
>>
>>> Hi,
>>>
>>>I have to write script for psql in a xyz.sh file which should create
>>> a schema in a specific data.In psql prompt I achieve it like --
>>>
>>>
>>> postgres=# \c ABC
>>>
>>> ABC=# create schema authorization myschema
>>>
>>>
>>>
>>>  In xyz.sh I executed the following script ---
>>>
>>> su -c "psql -c \"\c ABC  \"" postgres
>>>
>>> su -c "psql -c \"create schema authorization myschema\"" postgres
>>>
>>>
>>>
>>> In the terminal got message “connected to ABC database”. But schema
>>> created with in postgres database not with in ABC database.
>>>
>>> What should be the proper script ?
>>>
>>>
>>>
>>> Thanks and Regards,
>>>
>>> Abhra
>>>
>>>
>>
>> > But schema created within the postgres database, not within the ABC database.
>>
>> Just specify the database in the command line:
>>
>> EG: su -c "psql -d ABC"
>>
>>
>> --
>> Melvin Davidson
>> I reserve the right to fantasize.  Whether or not you
>> wish to share my fantasy is entirely up to you.
>>
>
>


> I executed the above script but the schema was created within the postgres db,
not within the ABC db ...

1. Please do not top post. The convention is to bottom post on this list.

2. Your code is not a proper bash script. I suggest you google how to write
bash scripts, as this list is for PostgreSQL help, not bash.

3. You are just submitting MULTIPLE psql commands. You need to start the psql
session once, then submit all additional commands inside that psql session.

EG: psql -d somedb
    grant all privileges on database ABC to ...;
    create schema authorization ...;
    \q
-- 
Melvin Davidson
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Create schema within a specific database from a script file

2018-02-01 Thread David G. Johnston
On Thursday, February 1, 2018, Abhra Kar  wrote:
>
>  In xyz.sh I executed the following script ---
>
> *su -c "psql -c \"\c ABC  \"" postgres*
>
> *su -c "psql -c \"create schema authorization myschema\"" postgres*
>
>
>
> In the terminal got message “connected to ABC database”. But schema
> created with in postgres database not with in ABC database.
>
>
Right.  psql connected to the postgres database and executed \c ABC, which
caused the session to connect to ABC.  Then, seeing no other commands, that
psql session disconnected.  Then a brand new session was opened to the
postgres database, and the schema was created in that new session.

Every psql command in a script is independent of all the others; no state is
maintained between them.

For psql scripts in bash I find:

psql -d ABC << SQL
create schema authorization myschema;
SQL

To be the better form than trying to use -c.  Or put the psql code in its
own file and use -f (really preferred though more files and navigation to
deal with).
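
e.g. (the file name is just an example):

    psql -d ABC -f create_schema.sql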

And you should probably try to avoid using "su" as a crutch to make things
work.  Invoking commands as root is dangerous.

David J.