Re: Connection to PostgreSQL from Excel

2018-08-27 Thread Łukasz Jarych
Hi,

I think it is not possible to avoid installing the driver.

Mon, 27 Aug 2018 at 09:55, Paul Suworow wrote:

> Hi team,
>
> I am working on an Excel tool that should allow users to update tables and
> charts from a PostgreSQL database. My workbook currently connects to the
> database using the following VBA code:
>
> Set objConnection = CreateObject("ADODB.Connection")
>
> ConnString = "DRIVER={PostgreSQL
> Unicode};DATABASE=FMS;SERVER=[x];PORT=5432;Uid=[x];Pwd=[x]"
>
>  objConnection.Open ConnString
>
> All works fine, but the code requires the PostgreSQL driver to be installed on
> the user's machine.
>
> Is there an alternative way to connect to a PostgreSQL database without
> pre-creating a DSN or installing any drivers in advance?
>
> Thank you for your help!
>
> Kind regards,
> Paul
>
>
>


pg_basebackup + SSL error: bad length

2018-08-27 Thread Csaba Ragasits
Hello,

I would like to set up streaming replication. When I try the
basebackup from node1, I get the following errors:

pg_basebackup -P -v -D ./main -h node1 -p 5432  -U repmgr

pg_basebackup: initiating base backup, waiting for checkpoint to complete
pg_basebackup: checkpoint completed
pg_basebackup: write-ahead log start point: 2/1328 on timeline 1
pg_basebackup: directory "/srv/pgdata/tbs_int302" exists but is not empty
pg_basebackup: removing data directory "./main"

In the node1 postgresql.log:

LOG:  could not send data to client: Connection reset by peer
ERROR:  base backup could not send data, aborting backup
LOG:  SSL error: bad length
LOG:  SSL error: bad length
FATAL:  connection to client lost

When I turn off SSL in postgresql.conf, I get the following error
message when I try to use the pg_basebackup command:
LOG:  could not send data to client: Broken pipe
FATAL:  connection to client lost

Do you have any information or suggestions on how I can solve these errors?

My environment:
- Updated Debian GNU/Linux 9
- PostgreSQL 10.5 (Debian 10.5-1.pgdg90+1) on x86_64-pc-linux-gnu, compiled
by gcc (Debian 6.3.0-18+deb9u1) 6.3.0 20170516, 64-bit

Thx,
Csaba


Connection to PostgreSQL from Excel

2018-08-27 Thread Paul Suworow
Hi team,

I am working on an Excel tool that should allow users to update tables and 
charts from a PostgreSQL database. My workbook currently connects to the 
database using the following VBA code:

Set objConnection = CreateObject("ADODB.Connection")

    ConnString = "DRIVER={PostgreSQL 
Unicode};DATABASE=FMS;SERVER=[x];PORT=5432;Uid=[x];Pwd=[x]"

 objConnection.Open ConnString

All works fine, but the code requires the PostgreSQL driver to be installed on
the user's machine.

Is there an alternative way to connect to a PostgreSQL database without
pre-creating a DSN or installing any drivers in advance?

Thank you for your help!

Kind regards,
Paul




archive items not in correct section order

2018-08-27 Thread Tim Clarke
Last night our pg_dump backup threw this error for the first time.
Repeated runs this morning consistently throw it too. I can see the error
in the Postgres source at
https://doxygen.postgresql.org/pg__backup__archiver_8c_source.html but I
can't find any specifics about the cause or cure.

Environment: Centos 6.10 kernel 2.6.32-754.2.1.el6.i686

postgresql10.i686           10.5-1PGDG.rhel6    @pgdg10
postgresql10-contrib.i686   10.5-1PGDG.rhel6    @pgdg10
postgresql10-devel.i686     10.5-1PGDG.rhel6    @pgdg10
postgresql10-libs.i686      10.5-1PGDG.rhel6    @pgdg10
postgresql10-odbc.i686      10.03.-1PGDG.rhel6  @pgdg10
postgresql10-server.i686    10.5-1PGDG.rhel6    @pgdg10
-- 
Tim Clarke
IT Director
Direct: +44 (0)1376 504510 | Mobile: +44 (0)7887 563420

Main: +44 (0)1376 503500 | Fax: +44 (0)1376 503550
Web: https://www.manifest.co.uk/

Minerva Analytics Ltd
9 Freebournes Court | Newland Street | Witham | Essex | CM8 2BL | England





Re: Returning Vector of Pairs with a PostgreSQL C Extension Function

2018-08-27 Thread TalGloz
Hello,

I'm having a small problem. The code works, but the array cells don't hold
the whole length of the *localT1* and *localT2* texts. I think it has
something to do with the *int16 typlen* in my set parameters:

ArrayType *array;
Datum elements[2];
int16 typlen;
bool typbyval;
char typalign;

elements[0] = CStringGetDatum(localT1.c_str());
elements[1] = CStringGetDatum(localT2.c_str());

get_typlenbyvalalign(TEXTOID, &typlen, &typbyval, &typalign);
array = construct_array(elements, 2, TEXTOID, typlen, typbyval, typalign);

I can't change the int16 to int32 or int64 because the
get_typlenbyvalalign() function takes an int16. Is there a way to change my
settings so that each input (in this example the *localT1* and *localT2*)
fits fully in each cell?

Best regards,
Tal



--
Sent from: http://www.postgresql-archive.org/PostgreSQL-general-f1843780.html



Re: Returning Vector of Pairs with a PostgreSQL C Extension Function

2018-08-27 Thread TalGloz
It looks like changing the

elements[0] = CStringGetDatum(localT1.c_str()); 
elements[1] = CStringGetDatum(localT2.c_str()); 

to:

elements[0] = PointerGetDatum(cstring_to_text(localT1.c_str()));
elements[1] = PointerGetDatum(cstring_to_text(localT2.c_str()));

Solved the problem. If anyone thinks that there is a better way please tell
me.

Best regards,
Tal



--
Sent from: http://www.postgresql-archive.org/PostgreSQL-general-f1843780.html



Re: Returning Vector of Pairs with a PostgreSQL C Extension Function

2018-08-27 Thread Pavel Stehule
2018-08-27 14:40 GMT+02:00 TalGloz :

> It looks like changing the
>
> elements[0] = CStringGetDatum(localT1.c_str());
> elements[1] = CStringGetDatum(localT2.c_str());
>
> to:
>
> elements[0] = PointerGetDatum(cstring_to_text(localT1.c_str()));
> elements[1] = PointerGetDatum(cstring_to_text(localT2.c_str()));
>
> Solved the problem. If anyone thinks that there is a better way please tell
> me.
>

There is no better or worse way - you just have to use the correct way, and
that is the one.

There are already prepared macros

#define CStringGetTextDatum(s) PointerGetDatum(cstring_to_text(s))
#define TextDatumGetCString(d) text_to_cstring((text *) DatumGetPointer(d))

You can use them.

Regards

Pavel


> Best regards,
> Tal
>
>
>
> --
> Sent from: http://www.postgresql-archive.org/PostgreSQL-general-
> f1843780.html
>
>


Re: Returning Vector of Pairs with a PostgreSQL C Extension Function

2018-08-27 Thread TalGloz
Pavel Stehule wrote
> There are already prepared macros
> 
> #define CStringGetTextDatum(s) PointerGetDatum(cstring_to_text(s))
> #define TextDatumGetCString(d) text_to_cstring((text *)
> DatumGetPointer(d))

Thanks for pointing out the macros, for some reason I've missed them.

Best regards,
Tal



--
Sent from: http://www.postgresql-archive.org/PostgreSQL-general-f1843780.html



Re: unorthodox use of PG for a customer

2018-08-27 Thread John McKown
On Fri, Aug 24, 2018 at 1:19 PM David Gauthier 
wrote:

> Hi Everyone:
>
> I'm going to throw this internal customer request out for ideas, even
> though I think it's a bit crazy.  I'm on the brink of telling him it's
> impractical and/or inadvisable.  But maybe someone has a solution.
>

Reading below, I think you're right about it being inadvisable. It seems to
me that your user, like most, just has "needs" and doesn't really want to
worry about "details" or much of anything else. We have programmers where I
would with this attitude: We list what we want -- you supply it and
maintain it. Because we only code (we don't even design much because we're
AGILE! ), we don't worry about those little details (availability,
reliability, security -- not our concern!). One thing missing from your
post is the OS involved. Linux? Windows? Other? Multiple different ones?
E.g. some users are Linux while others are Windows. {ouch}.



>
> He's writing a script/program that runs on a workstation and needs to
> write data to a DB.  This process also sends work to a batch system on a
> server farm external to the workstation that will create multiple, parallel
> jobs/processes that also have to write to the DB as well. The workstation
> may have many of these jobs running at the same time.  And there are 58
> workstation which all have/use locally mounted disks for this work.
>

First question: you say DB. Do you need an SQL-based database, or do you
just need a shared data store which is easy to use in a script? Have you
considered any NoSQL-type data stores such as CouchDB or MongoDB?

Second question: which scripting language and which programming language?
Examples might be: Windows PowerShell, Linux bash, Windows bash, Python,
Perl, surely not some AWK variant, R, Google Go, Rust, C/C++, and so on.



>
> At first blush, this is easy.  Just create a DB on a server and have all
> those clients work with it.  But he's also adamant about having the DB on
> the same server(s) that ran the script AND on the locally mounted disk.  He
> said he doesn't want the overhead, dependencies and worries of anything
> like an external DB with a DBA, etc... . He also wants this to be fast.
>

So, if I understand, he wants a single shared data store for 58 clients. He
also wants the disk holding the data to be "locally mounted" to every
workstation. Depending on what "locally mounted" means to the user, the only
way that I know of to do something like this is to have the actual disk /
filesystem directly attached to a single server. All the workstations would
need to use a communication protocol (IP?) to communicate their I/O to the
"shared data" to this server to do the physical I/O. So we're talking some
protocol like NFS or CIFS (Windows "share"). The easiest way that I know to
do this sharing is to have a NAS box, such as NetApp, which really is a
server+disk "in a box" and which implements these protocols in an easy to
manage manner.

Or can each workstation have a separate, unrelated data store for its
processing. The thought then would be some way to have the application
write locally into its data store. Something else, perhaps "in line", would
replicate the data to a central store on a server. That server would then
distribute the changes back out to all 58 workstations and ??? servers so
that each will have, eventually, an identical copy of the current data; but
the stores might have inconsistencies until they synchronise.



>
> My first thought was SQLite.  Apparently, they now have some sort of
> multiple, concurrent write ability.  But there's no way those batch jobs on
> remote machines are going to be able to get at the locally mounted disk on
> the workstation. So I dismissed that idea.  Then I thought about having 58
> PG installs, one per workstation, each serving all the jobs pertaining to
> that workstation.  That could work.  But 58 DB instances ?  If he didn't
> like the ideal of one DBA, 58 can't be good.  Still, the DB would be on the
> workstation which seems to be what he wants.
>
> I can't think of anything better.  Does anyone have any ideas?
>

Yes, but the civil authorities would prosecute you if they found the body.



>
> Thanks in Advance !
>
>

-- 
Between infinite and short there is a big difference. -- G.H. Gonnet

Maranatha! <><
John McKown


Re: archive items not in correct section order

2018-08-27 Thread Adrian Klaver

On 08/27/2018 03:59 AM, Tim Clarke wrote:
Last night for the first time our pg_dump backup threw this error for 
the first time. Repeated runs this morning consistently throw it too. I 
can see the error in the Postgres source 
https://doxygen.postgresql.org/pg__backup__archiver_8c_source.html but I 
can't find any specifics about the cause or cure?


What is the full pg_dump command you are using?




Environment: Centos 6.10 kernel 2.6.32-754.2.1.el6.i686

postgresql10.i686 10.5-1PGDG.rhel6 @pgdg10
postgresql10-contrib.i686 10.5-1PGDG.rhel6 @pgdg10
postgresql10-devel.i686 10.5-1PGDG.rhel6 @pgdg10
postgresql10-libs.i686 10.5-1PGDG.rhel6 @pgdg10
postgresql10-odbc.i686 10.03.-1PGDG.rhel6 @pgdg10
postgresql10-server.i686 10.5-1PGDG.rhel6 @pgdg10
--
Tim Clarke
IT Director
Direct: +44 (0)1376 504510 | Mobile: +44 (0)7887 563420




--
Adrian Klaver
adrian.kla...@aklaver.com



Re: pg_basebackup + SSL error: bad length

2018-08-27 Thread Adrian Klaver

On 08/27/2018 02:00 AM, Csaba Ragasits wrote:

Hello,

I would like to create a streaming replication, when I'm trying the 
basebackup from the node1, I've the following errors:


pg_basebackup -P -v -D ./main -h node1 -p 5432  -U repmgr

pg_basebackup: initiating base backup, waiting for checkpoint to complete
pg_basebackup: checkpoint completed
pg_basebackup: write-ahead log start point: 2/1328 on timeline 1
pg_basebackup: directory "/srv/pgdata/tbs_int302" exists but is not empty
pg_basebackup: removing data directory "./main"

In the node1 postgresql.log:

LOG:  could not send data to client: Connection reset by peer
ERROR:  base backup could not send data, aborting backup
LOG:  SSL error: bad length
LOG:  SSL error: bad length
FATAL:  connection to client lost

When I set off the SSL in the postgresql.conf, i've the folloving error 
message when I try use the pg_basebackup command:

LOG:  could not send data to client: Broken pipe
FATAL:  connection to client lost

Have you got any information, suggestion how can I solve this errors?


Can you connect to node1 using psql with and without SSL?

Where is node1 relative to the machine you are running pg_basebackup from?

Just to be clear both Postgres instances are running 10.5, correct?



My area:
- Updated Debian GNU/Linux 9
- PostgreSQL 10.5 (Debian 10.5-1.pgdg90+1) on x86_64-pc-linux-gnu, 
compiled by gcc (Debian 6.3.0-18+deb9u1) 6.3.0 20170516, 64-bit


Thx,
Csaba



--
Adrian Klaver
adrian.kla...@aklaver.com



Re: pg_sample

2018-08-27 Thread Adrian Klaver

On 08/26/2018 11:03 PM, Naveen Dabas wrote:

Server encoding is UTF8
Client encoding is UTF8
Creating sample schema sampledb1
DBD::Pg::db do failed: ERROR:  cannot execute CREATE SCHEMA in a 
read-only transaction at ./pg_sample line 296.
main::__ANON__('DBD::Pg::db do failed: ERROR:  cannot execute CREATE 
SCHEMA i...', 'DBI::db=HASH(0xab2388)', undef) called at ./pg_sample 
line 478

Done.


can you help me in this


Yes, by suggesting you try something other than pg_sample. The last
commit was 4 years ago and it does not seem to be up to date enough.


What sort of sampling do you want to do?


thanks

On Mon, Aug 27, 2018 at 11:16 AM, Naveen Dabas > wrote:


Thanks for the help.
But now I'm getting a different error as I run this command
./pg_sample -a -f file.sql --limit="user = 100" --schema=dbname -h
hostname -U user -W password

I am not getting output and one file opens; in that file there is a
guide for pg_sample

/tmp/9REOT5C

NAME
pg_sample - extract a small, sample dataset from a larger PostgreSQL
database while maintaining referential integrity.
=head1 SYNOPSIS
pg_sample [ option... ] [ dbname ]
=head1 DESCRIPTION
pg_sample is a utility for exporting a small, sample dataset from a
larger PostgreSQL database. The output and command-line options closely
resemble the pg_dump backup utility (although only the plain-text format
is supported).
The sample database produced includes all tables from the original,
maintains referential integrity, and supports circular dependencies.
To build an actual instance of the sample database, the output of
this script
can be piped to the psql utility. For example, assuming we have an
existing
PostgreSQL database named "mydb", a sample database could be
constructed with:
   createdb sampledb
   pg_sample mydb | psql sampledb
and so on ...

On Mon, Aug 27, 2018 at 1:27 AM, Paul Carlucci
mailto:paul.carlu...@gmail.com>> wrote:

sudo yum install perl-CPAN

Also do a "yum search perl-" and you should find most, if not
all what you need natively packaged for your particular Linux
distro.  You're better off just sticking with the pre-packaged
perl modules unless you specifically need something special.

You'll find more modules if you also enable the EPEL yum repo by
setting enabled=1 in the first section of
/etc/yum.repos.d/epel.conf and rerunning that yum search command.

On Sun, Aug 26, 2018, 2:20 PM Naveen Dabas mailto:nav...@paymonk.com>> wrote:

Sir i'm getting error in both commands

[root@ip-88-8-8-17 ~]#  perl -MCPAN -e 'install Bundle::DBI'
Can't locate CPAN.pm in @INC (@INC contains:
/usr/local/lib64/perl5 /usr/local/share/perl5
/usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl
/usr/lib64/perl5 /usr/share/perl5 .).
BEGIN failed--compilation aborted.
[root@ip-88-8-8-17 ~]# perl -MCPAN -e 'install DBD::Pg'
Can't locate CPAN.pm in @INC (@INC contains:
/usr/local/lib64/perl5 /usr/local/share/perl5
/usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl
/usr/lib64/perl5 /usr/share/perl5 .).
BEGIN failed--compilation aborted.



operating system =CentOS Linux release 7.5.1804 (Core)

thanks


On Fri, Aug 24, 2018 at 6:16 PM, Abhinav Mehta
mailto:abhi...@metarain.com>> wrote:

Solution, execute this on your linux terminal -

$ perl -MCPAN -e 'install Bundle::DBI'
$ perl -MCPAN -e 'install DBD::Pg'

 > On 24-Aug-2018, at 6:13 PM, Ravi Krishna
mailto:sravikris...@aol.com>> wrote:
 >
 >>
 >> sir have taken pg_sample
 >> Now i want to run pg_sample with credential but i'm
getting this error
 >>
 >> Can't locate DBI.pm in @INC (@INC contains:
/usr/local/lib64/perl5 /usr/local/share/perl5
/usr/lib64/perl5/vendor_perl
/usr/share/perl5/vendor_perl /usr/lib64/perl5
/usr/share/perl5 .) at ./pg_sample line 192.
 >> BEGIN failed--compilation aborted at ./pg_sample
line 192.
 >
 > As is clear from the message, you need to install
Perl DBI/DBD first.
 >
 >




-- 
--

With Regards
Naveen Dabas
Ph. 9017298370



Re: unorthodox use of PG for a customer

2018-08-27 Thread Olivier Gautherot
Hi Dave, lots of nice inputs but let's try to summarize your user's needs.

On Fri, Aug 24, 2018 at 3:18 PM, David Gauthier 
wrote:

> [...]
>
> He's writing a script/program that runs on a workstation and needs to
> write data to a DB.  This process also sends work to a batch system on a
> server farm external to the workstation that will create multiple, parallel
> jobs/processes that also have to write to the DB as well. The workstation
> may have many of these jobs running at the same time.  And there are 58
> workstation which all have/use locally mounted disks for this work.
>

Do the workstations work independently or do they need to share
information? If they share information, does it need to be ACID or is
"eventually consistent" enough? Also, what is the size of the database? A
local database is viable as long as it does not overload the workstation,
but if you have many instances of the job per workstation and 58 of them, I
would rely on an optimized central DB.

Are the jobs mainly INSERT or a mix of INSERT and SELECT/UPDATE?

Things to consider: amount of RAM required, CPU, disk I/O... The cost
factor should be considered.

At first blush, this is easy.  Just create a DB on a server and have all
> those clients work with it.  But he's also adamant about having the DB on
> the same server(s) that ran the script AND on the locally mounted disk.  He
> said he doesn't want the overhead, dependencies and worries of anything
> like an external DB with a DBA, etc... . He also wants this to be fast.
>

If he insists on having a local copy of the database, suggest PG10 with
logical replication - ideally the bidirectional replication. Once he
realizes that every INSERT causes 57 replications and this will most likely
kill the network, he may reconsider the requirement... Queries to a central
DB are likely to cause less stress and will be easier to handle.
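
For reference, the bare bones of a PG10 logical replication setup look
roughly like this. This is only a sketch; the table, publication, subscription
and connection names are invented, and wal_level must be set to logical on
the publisher:

-- On a workstation acting as publisher:
CREATE PUBLICATION jobs_pub FOR TABLE job_results;

-- On the central server acting as subscriber:
CREATE SUBSCRIPTION jobs_sub
    CONNECTION 'host=workstation01 dbname=jobs user=repluser'
    PUBLICATION jobs_pub;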

Now, if speed is critical, you may also consider an in-memory DB like
Redis. There are schemes running in multi-master.

My first thought was SQLite.  Apparently, they now have some sort of
> multiple, concurrent write ability.  But there's no way those batch jobs on
> remote machines are going to be able to get at the locally mounted disk on
> the workstation. So I dismissed that idea.  Then I thought about having 58
> PG installs, one per workstation, each serving all the jobs pertaining to
> that workstation.  That could work.  But 58 DB instances ?  If he didn't
> like the ideal of one DBA, 58 can't be good.  Still, the DB would be on the
> workstation which seems to be what he wants.
>

A distributed database is likely to cause failures at some point if not
handled properly - and it sounds like there won't be any DBA at all. I
would be extremely cautious in this case.


Re: archive items not in correct section order

2018-08-27 Thread Tom Lane
Tim Clarke  writes:
> Last night for the first time our pg_dump backup threw this error for
> the first time. Repeated runs this morning consistently throw it too.

That's not supposed to happen.  Can you create a test case, by any chance?
Presumably, it's triggered by some database schema change you made since
the last successful dump.

regards, tom lane



Re: pg_basebackup + SSL error: bad length

2018-08-27 Thread Csaba Ragasits
2018-08-27 16:08 GMT+02:00 Adrian Klaver :

> Can you connect to node1 using psql with and without SSL?
>
> Where is node1 relative to the machine you are running pg_basebackup from?
>
> Just to be clear both Postgres instances are running 10.5, correct?
>
>>
>>
Every instance is the same version (10.5), and all servers are in the
common VMware subnet. The psql connection works fine with or without
sslmode.

When I try to run it on localhost (node1), I get the same errors:
pg_basebackup -P -v -D /tmp/main -h localhost -p 5432  -U repmgr

Thx,
Csaba


Re: unorthodox use of PG for a customer

2018-08-27 Thread Dimitri Maziuk

On 8/27/2018 8:20 AM, John McKown wrote:
On Fri, Aug 24, 2018 at 1:19 PM David Gauthier > wrote:



I can't think of anything better.  Does anyone have any ideas?

Yes, but the civil authorities would prosecute you if they found the body.


And that's why we have raised floors and locked doors in the server rooms.

Dima




Re: pg_sample

2018-08-27 Thread Naveen Dabas
Actually I want limited data from the Postgres database, but I don't want
to use .csv.



thanks

On Mon, Aug 27, 2018 at 7:42 PM, Adrian Klaver 
wrote:

> On 08/26/2018 11:03 PM, Naveen Dabas wrote:
>
>> Server encoding is UTF8
>> Client encoding is UTF8
>> Creating sample schema sampledb1
>> DBD::Pg::db do failed: ERROR:  cannot execute CREATE SCHEMA in a
>> read-only transaction at ./pg_sample line 296.
>> main::__ANON__('DBD::Pg::db do failed: ERROR:  cannot execute CREATE
>> SCHEMA i...', 'DBI::db=HASH(0xab2388)', undef) called at ./pg_sample line
>> 478
>> Done.
>>
>>
>> can you help me in this
>>
>
> Yes, by suggesting you try something other then pg_sample. The last commit
> was 4 years ago and it does not seem to be up to date enough.
>
> What sort of sampling do you want to do?
>
> thanks
>>
>>
>> On Mon, Aug 27, 2018 at 11:16 AM, Naveen Dabas > > wrote:
>>
>> Thanks for the help.
>> But now i'm getting some different error as i rum this command
>> ./pg_sample -a -f file.sql --limit="user = 100" --schema=dbname -h
>> hostname -U user -W password
>>
>> I am not getting output and one file opens , in that file there id
>> guide for pg_sample
>> 
>> /tmp/9REOT5C
>>
>> NAME
>> pg_sample - extract a small, sample dataset from a larger PostgreSQL
>> database while maintaining referential integrity.
>> =head1 SYNOPSIS
>> pg_sample [ option... ] [ dbname ]
>> =head1 DESCRIPTION
>> pg_sample is a utility for exporting a small, sample dataset from a
>> larger PostgreSQL database. The output and command-line options
>> closely
>> resemble the pg_dump backup utility (although only the plain-text
>> format
>> is supported).
>> The sample database produced includes all tables from the original,
>> maintains referential integrity, and supports circular dependencies.
>> To build an actual instance of the sample database, the output of
>> this script
>> can be piped to the psql utility. For example, assuming we have an
>> existing
>> PostgreSQL database named "mydb", a sample database could be
>> constructed with:
>>createdb sampledb
>>pg_sample mydb | psql sampledb
>> and so on ...
>>
>> On Mon, Aug 27, 2018 at 1:27 AM, Paul Carlucci
>> mailto:paul.carlu...@gmail.com>> wrote:
>>
>> sudo yum install perl-CPAN
>>
>> Also do a "yum search perl-" and you should find most, if not
>> all what you need natively packaged for your particular Linux
>> distro.  You're better off just sticking with the pre-packaged
>> perl modules unless you specifically need something special.
>>
>> You'll find more modules if you also enable the EPEL yum repo by
>> setting enabled=1 in the first section of
>> /etc/yum.repos.d/epel.conf and rerunning that yum search command.
>>
>> On Sun, Aug 26, 2018, 2:20 PM Naveen Dabas > > wrote:
>>
>> Sir i'm getting error in both commands
>>
>> [root@ip-88-8-8-17 ~]#  perl -MCPAN -e 'install Bundle::DBI'
>> Can't locate CPAN.pm in @INC (@INC contains:
>> /usr/local/lib64/perl5 /usr/local/share/perl5
>> /usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl
>> /usr/lib64/perl5 /usr/share/perl5 .).
>> BEGIN failed--compilation aborted.
>> [root@ip-88-8-8-17 ~]# perl -MCPAN -e 'install DBD::Pg'
>> Can't locate CPAN.pm in @INC (@INC contains:
>> /usr/local/lib64/perl5 /usr/local/share/perl5
>> /usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl
>> /usr/lib64/perl5 /usr/share/perl5 .).
>> BEGIN failed--compilation aborted.
>>
>>
>>
>> operating system =CentOS Linux release 7.5.1804 (Core)
>>
>> thanks
>>
>>
>> On Fri, Aug 24, 2018 at 6:16 PM, Abhinav Mehta
>> mailto:abhi...@metarain.com>> wrote:
>>
>> Solution, execute this on your linux terminal -
>>
>> $ perl -MCPAN -e 'install Bundle::DBI'
>> $ perl -MCPAN -e 'install DBD::Pg'
>>
>>  > On 24-Aug-2018, at 6:13 PM, Ravi Krishna
>> mailto:sravikris...@aol.com>>
>> wrote:
>>  >
>>  >>
>>  >> sir have taken pg_sample
>>  >> Now i want to run pg_sample with credential but i'm
>> getting this error
>>  >>
>>  >> Can't locate DBI.pm in @INC (@INC contains:
>> /usr/local/lib64/perl5 /usr/local/share/perl5
>> /usr/lib64/perl5/vendor_perl
>> /usr/share/perl5/vendor_perl /usr/lib64/perl5
>> /usr/share/perl5 .) at ./pg_sample line 192.
>>  >> BEGIN failed--compilation aborted at ./pg_sample
>> line 192.
>>

Erroneous behavior of primary key

2018-08-27 Thread Daniel J Peacock
Good afternoon, all.
I've got an odd situation with a table that has a varchar(255) as the
primary key that is getting key values from an Elasticsearch engine.  What
I'm finding is that even though there is a primary key on the table, I am
getting duplicated keys in the data and the constraint is not blocking
these.  When I do a "select <key field>, count(*) from <table> group by
<key field> having count(*) > 1" I get no results.  Yet, when I search the
table for a value that is like a key I know to be duplicated, I get
multiple results.  When I select from the table where the field is equal to the
duplicated field I get one result.  I verified that they are distinct rows
with ctid.  I also created a clone of the table with CTAS and then tried to
create a unique index on the id varchar field but that failed with
"duplicate keys found".  I'm stumped as to what could be the problem.
The only thing that I can think of is that the primary key is somehow
corrupt.  I've noticed this behavior on other tables on this database.
What could be causing this sort of problem?

Thanks in advance for any insights.

Dan Peacock
Auto-wares, Inc.


Re: Erroneous behavior of primary key

2018-08-27 Thread Rob Sargent



> On Aug 27, 2018, at 1:50 PM, Daniel J Peacock  wrote:
> 
> Good afternoon, all.
> I've got an odd situation with a table that has a varchar(255) as the primary 
> key that is getting key values from an Elasticsearch engine.  What I'm 
> finding is that even though there is a primary key on the table, I am getting 
> duplicated keys in the data and the constraint is not blocking these.  When I 
> do a "select ,count(*) from  group by  having 
> count(*) > 1" I get no results.  Yet, when I search the table for a value 
> that is like a key I know to be duplicated, I get multiple results.  When I 
> select from the table where field is equal to the duplicated field I get one 
> result.  I verified that they are distinct row with ctid.  I also created a 
> clone of the table with CTAS and then tried to create a unique index on the 
> id varchar field but that failed with "duplicate keys found".  I'm stumped as 
> to what could be the problem.
> The only thing that I can think of is that the primary key is somehow 
> corrupt.  I've noticed this behavior on other tables on this database.  
> What could be causing this sort of problem?
> 
> Thanks in advance for any insights.
> 
> Dan Peacock
> Auto-wares, Inc.
Check for trailing white space
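
A quick, untested sketch of such a check; "my_table" and "id" here are just
placeholders for the real table and key column:

SELECT id, length(id), ctid
FROM my_table
WHERE id <> btrim(id, E' \t\r\n');  -- leading/trailing whitespace characters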


Re: Will there ever be support for Row Level Security on Materialized Views?

2018-08-27 Thread Ken Tanzer
On Tue, Aug 14, 2018 at 5:24 PM Ken Tanzer  wrote:

> Hi.  My question is similar to one that was asked but apparently never
> answered a couple of years ago on this list. (
> https://www.postgresql.org/message-id/20160112023419.GA30965%40moraine.isi.edu
> )
>
> Basically, I'm wondering whether materialized views are likely to ever
> support row-level security.
>
>
Hi.  Just wanted to follow up on my own email and see if anyone could
answer this.

Cheers,
Ken


-- 
AGENCY Software
A Free Software data system
By and for non-profits
*http://agency-software.org/ *
*https://demo.agency-software.org/client
*
ken.tan...@agency-software.org
(253) 245-3801

Subscribe to the mailing list
 to
learn more about AGENCY or
follow the discussion.


Re: Will there ever be support for Row Level Security on Materialized Views?

2018-08-27 Thread David Rowley
On 28 August 2018 at 09:58, Ken Tanzer  wrote:
> On Tue, Aug 14, 2018 at 5:24 PM Ken Tanzer  wrote:
>> Basically, I'm wondering whether materialized views are likely to ever 
>> support row-level security.
>
> Hi.  Just wanted to follow up on my own email and see if anyone could answer 
> this.

You might think there's some master project-wide list of things that
are to be implemented that we all work from, but there's not. There is a
todo list [1] that might serve as some sort of guide for a new person
who wishes to contribute something, who's looking for inspiration as
to what to contribute, but I don't think that list ever goes much
beyond that.

New features normally appear for one of two reasons:

1) A developer gets inspired to make something happen; or
2) Someone pays a developer to make something happen.

So I guess in this case either 1 or 2 has not happened, or they have
but the correct people have either not seen your email or they just
simply don't want to answer.

If you're desperate for the feature, I'd recommend looking into the
possibility of going with 1 or 2.

[1] https://wiki.postgresql.org/wiki/Todo



Re: Will there ever be support for Row Level Security on Materialized Views?

2018-08-27 Thread Ken Tanzer
Hi David, and thanks for taking the time to respond.

On Mon, Aug 27, 2018 at 3:29 PM David Rowley 
wrote:

>
> You might think there's some master project-wide list of things that
> are to implement that we all work from, but there's not.
>
>
I suppose I might have thought that, but I didn't.  That's partly why I
asked on this list.



> New features normally appear for one of two reasons:
>
> 1) A developer gets inspired to make something happen; or
> 2) Someone pays a developer to make something happen.
>
>
OK.

So I guess in this case either 1 or 2 has not happened, or they have
> but the correct people have either not seen your email or they just
> simply don't want to answer.
>
>
Yes, that's also partly why I re-sent it.  Generally it seems most questions sent
to this list get some kind of response.  So perhaps my original question
was not seen by "the correct people."

Or perhaps they "simply don't want to answer." Well I wondered about that
too, since the last person who asked about this didn't get a response
either.  Is it a particularly stupid question?  Is there some history that
makes it a touchy subject?  Or is it just the Bermuda Triangle of this
mailing list? :)



> If you're desperate for the feature, I'd recommend looking into the
> possibility of going with 1 or 2.
>
>
I'm not really desperate, and unlikely to do either 1 or 2.  But it still
would be helpful to have a sense of what the future _might_ look like in
this area. This seems like a relatively common type of inquiry on this
list.  Maybe there are reasons this will never get implemented.  (e.g.,
it's a Bad Idea, or it's so hard to implement that your head would explode
if we explained it to you, or it would interfere with x, y or z).  Maybe
there's no conceptual or technical obstacles, and it really does just come
down to someone doing it.  Maybe something else entirely.

People on this list generally seem pretty generous in spirit and sharing of
their knowledge, insights and opinions.  It seemed a pretty reasonable and
typical question to ask.  I guess if there's no answer to be had, then so
be it!

Cheers,
Ken




-- 
AGENCY Software
A Free Software data system
By and for non-profits
*http://agency-software.org/ *
*https://demo.agency-software.org/client
*
ken.tan...@agency-software.org
(253) 245-3801

Subscribe to the mailing list
 to
learn more about AGENCY or
follow the discussion.


Re: Erroneous behavior of primary key

2018-08-27 Thread David Rowley
On 28 August 2018 at 07:50, Daniel J Peacock  wrote:
> I've got an odd situation with a table that has a varchar(255) as the
> primary key that is getting key values from an Elasticsearch engine.  What
> I'm finding is that even though there is a primary key on the table, I am
> getting duplicated keys in the data and the constraint is not blocking
> these.

How do you know they're duplicated?

>  When I do a "select ,count(*) from  group by
>  having count(*) > 1" I get no results.  Yet, when I search the
> table for a value that is like a key I know to be duplicated, I get multiple
> results.  When I select from the table where field is equal to the
> duplicated field I get one result.  I verified that they are distinct row
> with ctid.  I also created a clone of the table with CTAS and then tried to
> create a unique index on the id varchar field but that failed with
> "duplicate keys found".  I'm stumped as to what could be the problem.
> The only thing that I can think of is that the primary key is somehow
> corrupt.  I've noticed this behavior on other tables on this database.
> What could be causing this sort of problem?

If the index is corrupt then you might find that:

set enable_indexscan = 0;
set enable_indexonlyscan = 0;
select <key field>, count(*) from <table> group by <key field> having count(*) > 1;

would return some rows.  You should also verify the above query does
use a Seq Scan by performing an EXPLAIN on the query.

There are issues that have been fixed in previous releases which could
have caused an index to get corrupted in this way, so it's quite
useful to know which version of PostgreSQL you're running here and if
you've paid attention to the release notes when you've previously
upgraded. For example, see [1].

[1] https://www.postgresql.org/docs/9.6/static/release-9-6-2.html
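
If the index does turn out to be corrupt, then once you're on a version with
the relevant fixes it can normally just be rebuilt; a sketch, with a made-up
index and table name:

REINDEX INDEX my_table_pkey;
-- or rebuild every index on the table:
REINDEX TABLE my_table;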

-- 
 David Rowley   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services



Re: Will there ever be support for Row Level Security on Materialized Views?

2018-08-27 Thread David G. Johnston
On Mon, Aug 27, 2018 at 4:06 PM, Ken Tanzer  wrote:

> People on this list generally seem pretty generous in spirit and sharing
> of their knowledge, insights and opinions.  It seemed a pretty reasonable
> and typical question to ask.  I guess if there's no answer to be had, then
> so be it!
>

You asked the question in a manner where the obvious reactions are:  Reply
Affirmative: "Yes, I am planning/working on that right now" or Do Nothing:
"I'm not going to respond because while I'm not aware of any work in this
area I have no clue what others might or might not be planning/doing in
this area before the heat death of the universe (a reasonably close
approximation of "ever")".

Since most things are theoretically possible, a flat-out "no, that will
never happen" is unlikely.

David J.


Re: Will there ever be support for Row Level Security on Materialized Views?

2018-08-27 Thread David Rowley
On 28 August 2018 at 11:06, Ken Tanzer  wrote:
> Or perhaps they "simply don't want to answer." Well I wondered about that 
> too, since the last person who asked about this didn't get a response either. 
>  Is it a particularly stupid question?  Is there some history that makes it a 
> touchy subject?  Or is it just the Bermuda Triangle of this mailing list? :)

I can't imagine why it would be a touchy subject.  Generally, if
you're working on a feature it's best not to keep it a secret, since if
someone else does the same, you end up with redundant work being
done.

If I had to guess what's going on here then I'd say that nobody has
been sufficiently motivated to work on this yet. If that's the case,
everyone who reads your email is not the person working on this
feature, so can't answer your question.   I just answered to suggest
the reasons why you might not be getting an answer.


-- 
 David Rowley   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services



Re: Will there ever be support for Row Level Security on Materialized Views?

2018-08-27 Thread Ken Tanzer
On Mon, Aug 27, 2018 at 4:19 PM David G. Johnston <
david.g.johns...@gmail.com> wrote:

> On Mon, Aug 27, 2018 at 4:06 PM, Ken Tanzer  wrote:
>
>> People on this list generally seem pretty generous in spirit and sharing
>> of their knowledge, insights and opinions.  It seemed a pretty reasonable
>> and typical question to ask.  I guess if there's no answer to be had, then
>> so be it!
>>
>
> You asked the question in a manner where the obvious reactions are:  Reply
> Affirmative: "Yes, I am planning/working on that right now" or Do Nothing:
> "I'm not going to respond because while I'm not aware of any work in this
> area I have no clue what others might or might not be planning/doing in
> this area before the heat death of the universe (a reasonably close
> approximation of "ever")".
>
> Since most things are theoretically possible a flat out - "no that will
> never happen is unlikely".
>
>
Thanks David J.  So I guess I could have asked:


   - Is there any reason there is no RLS / MV support now?
   - Are there any fundamental reasons why PG couldn't support RLS
   for MVs?
   - Is it just a matter of nobody has done it yet?
   - Is there other development work that would need to happen before this
   could be implemented?
   - In the scheme of things, is it a lot of work or not so much?
   - Has there been any discussion of implementing this feature, and if so
   is there a link to it?



-- 
AGENCY Software
A Free Software data system
By and for non-profits
*http://agency-software.org/ *
*https://demo.agency-software.org/client
*
ken.tan...@agency-software.org
(253) 245-3801

Subscribe to the mailing list
 to
learn more about AGENCY or
follow the discussion.


Re: Will there ever be support for Row Level Security on Materialized Views?

2018-08-27 Thread Ken Tanzer
On Mon, Aug 27, 2018 at 4:23 PM David Rowley 
wrote:

>
> If I had to guess what's going on here then I'd say that nobody has
> been sufficiently motivated to work on this yet. If that's the case,
> everyone who reads your email is not the person working on this
> feature, so can't answer your question.   I just answered to suggest
> the reasons why you might not be getting an answer.
>
>
Thanks, that makes sense, and I appreciate that.





> --
>  David Rowley   http://www.2ndQuadrant.com/
>  PostgreSQL Development, 24x7 Support, Training & Services
>


-- 
AGENCY Software
A Free Software data system
By and for non-profits
*http://agency-software.org/ *
*https://demo.agency-software.org/client
*
ken.tan...@agency-software.org
(253) 245-3801

Subscribe to the mailing list
 to
learn more about AGENCY or
follow the discussion.


Re: pg_basebackup + SSL error: bad length

2018-08-27 Thread Adrian Klaver

On 08/27/2018 07:38 AM, Csaba Ragasits wrote:



2018-08-27 16:08 GMT+02:00 Adrian Klaver >:


Can you connect to node1 using psql with and without SSL?

Where is node1 relative to the machine you are running pg_basebackup
from?

Just to be clear both Postgres instances are running 10.5, correct?



Every instances are same (10.5) version, and every servers are in the 
common VMWARE subnet. The psql connection is working fine with sslmode 
or without sslmode.


When I try run it on the localhost (node1), I've same errors too:
pg_basebackup -P -v -D /tmp/main -h localhost -p 5432  -U repmgr


Hmm.

Is there more than one copy of pg_basebackup on the machines?

Maybe from the Debian repo and the PGDG repo.



Thx,
Csaba




--
Adrian Klaver
adrian.kla...@aklaver.com



Re: pg_sample

2018-08-27 Thread Adrian Klaver

On 08/27/2018 08:33 AM, Naveen Dabas wrote:
actually i want limited data from the postgres database. But i don't 
want to use .csv


Does the sample need to include parent/child relationships?

In Postgres 9.5+ there is TABLESAMPLE:

https://www.postgresql.org/docs/10/static/sql-select.html

So you can do something like:

select * from plant1 TABLESAMPLE system (25);

on a single table.

I could see doing the above in a program and INSERTing the output to 
another database.
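
For example, a rough sketch using dblink, run from the target database; the
connection string, table and column definitions below are invented:

CREATE EXTENSION IF NOT EXISTS dblink;

CREATE TABLE plant1_sample AS
SELECT *
FROM dblink('dbname=sourcedb',
            'SELECT id, name FROM plant1 TABLESAMPLE SYSTEM (25)')
     AS t(id integer, name text);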






thanks

On Mon, Aug 27, 2018 at 7:42 PM, Adrian Klaver 
mailto:adrian.kla...@aklaver.com>> wrote:


On 08/26/2018 11:03 PM, Naveen Dabas wrote:

Server encoding is UTF8
Client encoding is UTF8
Creating sample schema sampledb1
DBD::Pg::db do failed: ERROR:  cannot execute CREATE SCHEMA in a
read-only transaction at ./pg_sample line 296.
main::__ANON__('DBD::Pg::db do failed: ERROR:  cannot execute
CREATE SCHEMA i...', 'DBI::db=HASH(0xab2388)', undef) called at
./pg_sample line 478
Done.


can you help me in this


Yes, by suggesting you try something other then pg_sample. The last
commit was 4 years ago and it does not seem to be up to date enough.

What sort of sampling do you want to do?

thanks


On Mon, Aug 27, 2018 at 11:16 AM, Naveen Dabas
mailto:nav...@paymonk.com>
>> wrote:

     Thanks for the help.
     But now i'm getting some different error as i rum this command
     ./pg_sample -a -f file.sql --limit="user = 100"
--schema=dbname -h
     hostname -U user -W password

     I am not getting output and one file opens , in that file
there id
     guide for pg_sample
     
     /tmp/9REOT5C

     NAME
     pg_sample - extract a small, sample dataset from a larger
PostgreSQL
     database while maintaining referential integrity.
     =head1 SYNOPSIS
     pg_sample [ option... ] [ dbname ]
     =head1 DESCRIPTION
     pg_sample is a utility for exporting a small, sample
dataset from a
     larger PostgreSQL database. The output and command-line
options closely
     resemble the pg_dump backup utility (although only the
plain-text format
     is supported).
     The sample database produced includes all tables from the
original,
     maintains referential integrity, and supports circular
dependencies.
     To build an actual instance of the sample database, the
output of
     this script
     can be piped to the psql utility. For example, assuming we
have an
     existing
     PostgreSQL database named "mydb", a sample database could be
     constructed with:
        createdb sampledb
        pg_sample mydb | psql sampledb
     and so on ...

     On Mon, Aug 27, 2018 at 1:27 AM, Paul Carlucci
     mailto:paul.carlu...@gmail.com>
>> wrote:

         sudo yum install perl-CPAN

         Also do a "yum search perl-" and you should find most,
if not
         all what you need natively packaged for your particular
Linux
         distro.  You're better off just sticking with the
pre-packaged
         perl modules unless you specifically need something
special.

         You'll find more modules if you also enable the EPEL
yum repo by
         setting enabled=1 in the first section of
         /etc/yum.repos.d/epel.conf and rerunning that yum
search command.

         On Sun, Aug 26, 2018, 2:20 PM Naveen Dabas
mailto:nav...@paymonk.com>
         >> wrote:

             Sir i'm getting error in both commands

             [root@ip-88-8-8-17 ~]#  perl -MCPAN -e 'install
Bundle::DBI'
             Can't locate CPAN.pm in @INC (@INC contains:
             /usr/local/lib64/perl5 /usr/local/share/perl5
             /usr/lib64/perl5/vendor_perl
/usr/share/perl5/vendor_perl
             /usr/lib64/perl5 /usr/share/perl5 .).
             BEGIN failed--compilation aborted.
             [root@ip-88-8-8-17 ~]# perl -MCPAN -e 'install DBD::Pg'
             Can't locate CPAN.pm in @INC (@INC contains:
             /usr/local/lib64/perl5 /usr/local/share/perl5
             /usr/lib64/perl5/vendor_perl
/usr/share/perl5/vendor_perl
             /usr/lib64/perl5 /usr/share/perl5 .).
             BEGIN failed--compilation aborted.



   

First steps to being a contributor

2018-08-27 Thread Daniel Wood
Having quit Amazon, where I was doing Postgres development, I've started 
looking at various things I might work on for fun.  One thought is to start 
with something easy like the scalability of GetSnapshotData().  :-)


I recently found it interesting to examine performance while running near 1 
million pgbench selects per sec on a 48 core/96 HT Skylake box.  I noticed that 
additional sessions trying to connect were timing out when they got stuck in 
ProcArrayAdd trying to get the ProcArrayLock in EXCLUSIVE mode.  FYI, scale 
1 with 2048 clients.


The question is whether it is possible that the problem with GetSnapshotData() 
has reached a critical point, with respect to snapshot scaling, on the newest 
high end systems.


I didn't have time to complete my analysis as I lost access to the hardware on 
my last day.  It shouldn't cost me much more than about $6 per hour to do 
experiments on a 48 core system.

What I'd like is a short cut to any of the current discussions of various ideas 
to improve snapshot scaling.  I have some of my own ideas but want to review 
things before posting them.

Re: Will there ever be support for Row Level Security on Materialized Views?

2018-08-27 Thread Alvaro Herrera
On 2018-Aug-27, Ken Tanzer wrote:

>- Is there any reason there is no RLS / MV support now?

Nobody worked on it yet.

>- Are there any reason fundamental reasons why PG couldn't support RLS
>for MVs?

Doesn't look like it.

>- Is it just a matter of nobody has done it yet?

Yeah.

>- Is there other development work that would need to happen before this
>could be implemented?

Doesn't look like it.

>- In the scheme of things, is it a lot of work or not so much?

Probably not much.

>- Has there been any discussion of implementing this feature, and if so
>is there a link to it?

Not that I remember.

-- 
Álvaro Herrera    https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Receiving an error while taking the backup using "pg_basebackup" utility.

2018-08-27 Thread Raghavendra Rao J S V
Hi All,

We are using below command to take the backup of the database.

$PGHOME/bin/pg_basebackup -p 5433 -U postgres -P -v -x --format=tar --gzip
--compress=1 --pgdata=- -D /opt/rao

While taking the backup we have received below error.

transaction log start point: 285/8F80

pg_basebackup: could not get transaction log end position from server:
FATAL:  requested WAL segment 00010285008F has already been
removed

Please guide me on why this happens and how to handle this error. If you want
me to change any of the options in my pg_basebackup command, let me know.

Please also clarify what --pgdata=- -D means in my pg_basebackup
command above.
-- 
Regards,
Raghavendra Rao J S V


How to search for a particular line/text of code in all Postgres database objects

2018-08-27 Thread Raghavendra Rao J S V
Hi All,

How do I search for a particular line/text of code in all Postgres database
objects, like functions, triggers, views, etc.?

Is there any data dictionary table in Postgres?

E.g. my requirement is: I want to find references to the employee table in
any function, trigger, view, etc.

Kindly help me.
-- 
Regards,
Raghavendra Rao J S V
Mobile- 8861161425


Re: Receiving an error while taking the backup using "pg_basebackup" utility.

2018-08-27 Thread Michael Paquier
On Tue, Aug 28, 2018 at 07:19:12AM +0530, Raghavendra Rao J S V wrote:
> pg_basebackup: could not get transaction log end position from server:
> FATAL:  requested WAL segment 00010285008F has already been
> removed
> 
> Please guide me why and how to handle this error. Do you want me to change
> any of the option in my pg_basebackup command let me know.

This means that while the backup was being taken, a checkpoint came in and
recycled past segments.  It is a matter of bad luck; depending on the load
you may see such failures.  One way to correct the problem would be to
take a base backup without WAL segments included and rely on a WAL archive
when restoring that base backup.  A second is to use a physical
replication slot, which guarantees the presence of the wanted segments.
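
A rough sketch of the slot approach (the slot name here is invented): create
the slot once on the server being backed up, then point pg_basebackup at it
with its --slot option.

-- Run once on the server being backed up:
SELECT pg_create_physical_replication_slot('basebackup_slot');

-- Inspect it, and drop it once it is no longer needed:
SELECT slot_name, active FROM pg_replication_slots;
SELECT pg_drop_replication_slot('basebackup_slot');
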
--
Michael




Re: pg_basebackup + SSL error: bad length

2018-08-27 Thread Michael Paquier
On Mon, Aug 27, 2018 at 04:40:34PM -0700, Adrian Klaver wrote:
> Is there more than one copy of pg_basebackup on the machines?

Or this user has created a tablespace directly in the main data folder,
which can cause pg_basebackup to fail because of recursion issues.  It
is no wonder that a WARNING is issued during CREATE TABLESPACE if you
do that.
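
A quick way to check that (just a sketch) is to compare the tablespace
locations with the data directory reported by the server:

SELECT spcname, pg_tablespace_location(oid) AS location
FROM pg_tablespace;

SHOW data_directory;
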
--
Michael




Re: How to search for a particular line/text of code in all Postgres database objects

2018-08-27 Thread raf
Raghavendra Rao J S V wrote:

> Hi All,
> 
> How to search particular line/text code in all Postgres all database
> object's like functions,triggers,views etc ?
> 
> Is there any data dictionary table in Postgres?
> 
> Eg:- My requirement is , I want to found data related to employee table in
> any function, trigger,view etc.
> 
> Kindly help me.
> -- 
> Regards,
> Raghavendra Rao J S V
> Mobile- 8861161425

This may not be helpful but all function, trigger and view
source code (and table definitions) should exist outside the
database in a git repository or similar and that can be
searched. Otherwise, you probably can't audit whether or
not the code in the database is up to date because you have
nothing to compare it to.

I only use functions and the code for them resides in pg_proc.
My tool for auditing functions contains this query which might
be a starting point for you:

select
p.proname, -- name
p.proretset, -- returns setof?
p.proisstrict, -- strict 't' or 'f'
p.provolatile, -- volatile or stable 'v' or 's'
p.prosecdef, -- security definer 't' or 'f'
p.pronargs, -- number of in arguments
p.prorettype, -- return type
p.proargtypes, -- space-separated list of in arg types
p.proallargtypes, -- array of in/out arg types (iff there are out args)
p.proargmodes, -- array of in/out arg modes like {i,o,o} (iff there are out args)
p.proargnames, -- array of in/out arg names like {id,code,name}
p.prosrc, -- source code
cast(cast(p.oid as regprocedure) as text) -- nice signature
from
pg_user u,
pg_proc p
where
u.usename = current_user and
p.proowner = u.usesysid and
p.proname like 'aps%' -- 'aps' is the prefix for our functions
order by
p.proname

Or just: select proname, prosrc from pg_proc where proname like ...
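
Something similar (untested) should work for views, triggers and functions
as well; 'employee' here is just the search string from your question:

select schemaname, viewname
from pg_views
where definition ilike '%employee%';

select tgname, pg_get_triggerdef(oid)
from pg_trigger
where not tgisinternal
  and pg_get_triggerdef(oid) ilike '%employee%';

select proname
from pg_proc
where prosrc ilike '%employee%';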

cheers,
raf




Re: Receiving an error while taking the backup using "pg_basebackup" utility.

2018-08-27 Thread Adrian Klaver

On 08/27/2018 06:49 PM, Raghavendra Rao J S V wrote:

Hi All,

We are using below command to take the backup of the database.

*$PGHOME/bin/pg_basebackup -p 5433 -U postgres -P -v -x --format=tar 
--gzip --compress=1 --pgdata=- -D /opt/rao *


While taking the backup we have received below error.

transaction log start point: 285/8F80


pg_basebackup: could not get transaction log end position from server: 
FATAL:  requested WAL segment 00010285008F has already been 
removed


Please guide me why and how to handle this error. Do you want me to 
change any of the option in my pg_basebackup command let me know.


-X stream

See link below.



Please clarify me what it means *--pgdata=--D* in my above 
*pg_basebackup *command.


https://www.postgresql.org/docs/10/static/app-pgbasebackup.html



--
Regards,
Raghavendra Rao J S V




--
Adrian Klaver
adrian.kla...@aklaver.com



Which background task looks for pg_xlog in 10?

2018-08-27 Thread Johann Spies
I see this in /var/log/postgresql/postgresql-10-main.log:

postgres postgres@template1 ERROR:  could not open directory
"pg_xlog": No such file or directory

postgres postgres@template1 STATEMENT:  SELECT count(*) AS segments
FROM pg_ls_dir('pg_xlog') t(fn) \
WHERE fn ~ '^[0-9A-Z]{24}$'

Where does this come from?
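
(The query still refers to pg_xlog, which was renamed to pg_wal in 10, so
whatever issues it is presumably using a pre-10 check; a rough 10-compatible
equivalent would be something like the following.)

SELECT count(*) AS segments
FROM pg_ls_waldir()
WHERE name ~ '^[0-9A-Z]{24}$';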

Regards
Johann
--
Because experiencing your loyal love is better than life itself,
my lips will praise you.  (Psalm 63:3)