Re: tsvector field length limitation

2019-02-11 Thread AJG
Hi Jonathan,

Check out this potential fix/extension

https://github.com/postgrespro/tsvector2








bdr replication breaks down

2019-02-11 Thread Daniel Fink (PDF)
Hi all,



I have a BDR replication setup that worked fine until recently.

It seems that one command cannot be replicated, and it breaks the whole
replication process on every host it is sent to.

I have appended the part of the log file that gets repeated over and over.

Can I somehow skip this command and see whether the next one works?



Best Regards,

Daniel Fink



< 2019-02-08 00:00:01.658 CET >ERROR:  schema "pg_temp_30" does not exist

< 2019-02-08 00:00:01.658 CET >CONTEXT:  apply QUEUED_DROP in commit
7D/2E3143B0, xid 106791715 commited at 2019-01-28 10:12:34.348577+01
(action #2) from node (6208877715678412212,1,2942745)

< 2019-02-08 00:00:01.661 CET >LOG:  worker process: bdr
(6449651545875285115,1,16388,)->bdr (6208877715678412212,1, (PID 27640)
exited with exit code 1

< 2019-02-08 00:00:06.666 CET >LOG:  starting background worker process
"bdr (6449651545875285115,1,16388,)->bdr (6208877715678412212,1,"

< 2019-02-08 00:00:07.641 CET >ERROR:  schema "pg_temp_30" does not exist

< 2019-02-08 00:00:07.641 CET >CONTEXT:  apply QUEUED_DROP in commit
7D/2E3143B0, xid 106791715 commited at 2019-01-28 10:12:34.348577+01
(action #2) from node (6208877715678412212,1,2942745)

< 2019-02-08 00:00:07.644 CET >LOG:  worker process: bdr
(6449651545875285115,1,16388,)->bdr (6208877715678412212,1, (PID 27645)
exited with exit code 1

< 2019-02-08 00:00:12.649 CET >LOG:  starting background worker process
"bdr (6449651545875285115,1,16388,)->bdr (6208877715678412212,1,"

< 2019-02-08 00:00:13.633 CET >ERROR:  schema "pg_temp_30" does not exist

< 2019-02-08 00:00:13.633 CET >CONTEXT:  apply QUEUED_DROP in commit
7D/2E3143B0, xid 106791715 commited at 2019-01-28 10:12:34.348577+01
(action #2) from node (6208877715678412212,1,2942745)




DANIEL FINK

Senior Software Engineer

tel (+49) 89.767062.20
fax (+49) 89.767062.11
email daniel.f...@pdf.com

PDF Solutions GmbH
(a PDF Solutions Company)
Managing Director: Kimon Michaels
Schwanthalerstr. 10
D-80336 München, Germany

München HRB 87307
DE 128214899

www.pdf.com



Re: Promote replica before being able to accept connections

2019-02-11 Thread Laurenz Albe
Martín Fernández wrote:
> We have a 9.2 pg cluster and we are in the process of rebuilding a master 
> database in our staging environment.
> In order to achieve the latter goal, we are restoring our staging database 
> using pg_basebackup against one
> of our production replicas. 
> 
> pg_basebackup has completed and the staging database is configured in
> hot_standby mode, recovering from
> the WAL archive. The database has not reached the point where it can accept
> connections yet; it is still
> ingesting WAL files.
>
> What we are not sure about is: can we attempt promoting the database as a
> master instance before it can
> accept connections? We don't care if the staging database is a few days
> behind the production environment,
> that is something acceptable in our use case; we just want to promote it as
> soon as we can.

The earliest time to which you can recover is the end of the backup.

Until recovery has reached that point, the database is not consistent
(that's the "consistent recovery state" the logs are talking about).
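
For reference, the messages to watch for in the server log look like this
(the LSN is just an example):

   LOG:  consistent recovery state reached at 0/3000100
   LOG:  database system is ready to accept read only connections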

You cannot connect to the recovering database or promote it before you
reach that point, so you have to wait until you can connect to the database
before you can promote it.

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com




Postgresql Duplicate DB

2019-02-11 Thread Sathish Kumar
Hi All,

I would like to duplicate our existing db on the same server. What would be
the fastest way to achieve it?

DB size is around 300 GB.


RE: Postgresql Duplicate DB

2019-02-11 Thread Alvaro Aguayo Garcia-Rada
Hi. Not sure if it's the fastest, but the first that comes to mind is using 
pg_dump and psql. First you create your new database, then you run 
this (replacing names as needed):

pg_dump OLDDB | psql NEWDB
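
A minimal end-to-end sketch of the same idea (database names are
placeholders):

createdb NEWDB
pg_dump OLDDB | psql NEWDB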

Saludos,

Alvaro Aguayo
Jefe de Operaciones
Open Comb Systems E.I.R.L.

Oficina: (+51-1) 3377813 | Cel.: (+51) 995540103 | RPC: (+51) 954183248
Website: www.ocs.pe

Sent from my Sony Xperia™ smartphone

Sathish Kumar wrote:



Hi All,

I would like to duplicate our existing db on the same server. What would be
the fastest way to achieve it?

DB size is around 300 GB.


Re: Postgresql Duplicate DB

2019-02-11 Thread Laurenz Albe
Sathish Kumar wrote:
> I would like to duplicate our existing db on the same server. What would be 
> the fastest way to achieve it?

If it is in the same database cluster, you can use

   CREATE DATABASE newdb TEMPLATE olddb;

Make sure nobody is connected to "olddb" when you do that.
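
You can check for lingering connections first with something like this
(a sketch; "olddb" is a placeholder):

   SELECT pid, usename, application_name
   FROM pg_stat_activity
   WHERE datname = 'olddb';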

If you want to clone a whole database cluster, you can run

   pg_basebackup -D /new/cluster/directory --wal-method=stream

and recover the new cluster with "restore_command = 'true'".
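
A minimal sketch of what that looks like in practice (pre-v12
recovery.conf layout; the path and port are assumptions):

   # /new/cluster/directory/recovery.conf
   restore_command = 'true'

   pg_ctl -D /new/cluster/directory -o "-p 5433" start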

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com




Re: bdr replication breaks down

2019-02-11 Thread Alvaro Aguayo Garcia-Rada
What's the command you are running to trigger such behaviour?

Regards,

Alvaro Aguayo
Jefe de Operaciones
Open Comb Systems E.I.R.L.

(+51-1) 337-7813 Anexo 4002
www.ocs.pe

- Original Message -
From: "Daniel Fink (PDF)" 
To: "pgsql-general" 
Sent: Monday, 11 February, 2019 05:18:30
Subject: bdr replication breaks down

Hi all,



I have a BDR replication setup that worked fine until recently.

It seems that one command cannot be replicated, and it breaks the whole
replication process on every host it is sent to.

I have appended the part of the log file that gets repeated over and over.

Can I somehow skip this command and see whether the next one works?



Best Regards,

Daniel Fink






Implementing pgaudit extension on Microsoft Windows

2019-02-11 Thread Niels Jespersen
Hello all

Recent Oracle convert here.

Running PostgreSQL 11 on Windows (10 and 2016).

Please advise if I should direct this to another mailing list or forum.

I want to implement detailed object auditing on PostgreSQL. This is what the 
PGAudit extension does. However, most advice on this is directed at Linux and 
friends, which is fair enough. From Googling around, it seems it should be 
possible to compile on Windows, though.

And it is. I have compiled pgaudit.dll from the source out of pgaudit version 
1.3.

And this:

Copied pgaudit.dll to C:\Program Files\PostgreSQL\11\lib
Copied  pgaudit.control and pgaudit--1.3.sql to C:\Program 
Files\PostgreSQL\11\share\extension
Set shared_preload_libraries = 'pgaudit'
Restart PostgreSQL
Run "CREATE EXTENSION pgaudit;"
alter system set pgaudit.log = 'all';
select pg_reload_conf();
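
From the pgaudit documentation I expect entries along these lines in the
server log (roughly; the exact fields vary by version):

AUDIT: SESSION,1,1,DDL,CREATE TABLE,TABLE,public.account,CREATE TABLE account (id int);,<not logged>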

Nothing from pgaudit enters the log file. Creating tables (relations, sorry) 
and selecting from tables puts nothing in the log.

I can see that alter system set pgaudit.logx = 'all';  complains about 
"unrecognized configuration parameter", so something is known about pgaudit.

I can see that postgres.exe has locked pgaudit.dll, so PostgreSQL knows it is 
there, but nothing in the log-file.

I need some advice regarding the next steps in my troubleshooting.

Regards

Niels Jespersen





RE: bdr replication breaks down

2019-02-11 Thread Daniel Fink (PDF)
Hi again,



So I checked the xlogs and found the following lines.

It seems like it failed at committing a long transaction…

I am still not sure what the pg_temp_30 schema is used for.

But I was able to “fix” it with

select bdr.skip_changes_upto('6208877715678412212', 1, 2942745,
'7D/2E314530');



I guess the listed transaction is now missing in some of the nodes.
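
For reference, the listing below looks like pg_xlogdump output; a command
along these lines (the pg_xlog path is an assumption) produces it:

pg_xlogdump -p $PGDATA/pg_xlog --start 7D/2E313238 --end 7D/2E314530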



rmgr: Standby len (rec/tot): 28/60, tx:  0, lsn:
7D/2E313238, prev 7D/2E313208, bkp: , desc: running xacts: nextXid
106791716 latestCompletedXid 106791714 oldestRunningXid 106791715; 1 xacts:
106791715

rmgr: Heap2   len (rec/tot): 34/66, tx:  106791715, lsn:
7D/2E313278, prev 7D/2E313238, bkp: , desc: new_cid: rel
1663/2942745/12834; tid 14/1; cmin: 4294967295, cmax: 0, combo: 4294967295

rmgr: Heap    len (rec/tot): 26/58, tx:  106791715, lsn:
7D/2E3132C0, prev 7D/2E313278, bkp: , desc: delete: rel
1663/2942745/12834; tid 14/1 KEYS_UPDATED

rmgr: Heap2   len (rec/tot): 34/66, tx:  106791715, lsn:
7D/2E313300, prev 7D/2E3132C0, bkp: , desc: new_cid: rel
1663/2942745/12877; tid 65/82; cmin: 4294967295, cmax: 0, combo: 4294967295

rmgr: Heap    len (rec/tot): 26/58, tx:  106791715, lsn:
7D/2E313348, prev 7D/2E313300, bkp: , desc: delete: rel
1663/2942745/12877; tid 65/82 KEYS_UPDATED

rmgr: Heap2   len (rec/tot): 34/66, tx:  106791715, lsn:
7D/2E313388, prev 7D/2E313348, bkp: , desc: new_cid: rel
1663/2942745/12877; tid 65/81; cmin: 4294967295, cmax: 0, combo: 4294967295

rmgr: Heap    len (rec/tot): 26/58, tx:  106791715, lsn:
7D/2E3133D0, prev 7D/2E313388, bkp: , desc: delete: rel
1663/2942745/12877; tid 65/81 KEYS_UPDATED

rmgr: Heap2   len (rec/tot): 34/66, tx:  106791715, lsn:
7D/2E313410, prev 7D/2E3133D0, bkp: , desc: new_cid: rel
1663/2942745/12877; tid 65/80; cmin: 4294967295, cmax: 0, combo: 4294967295

rmgr: Heap    len (rec/tot): 26/58, tx:  106791715, lsn:
7D/2E313458, prev 7D/2E313410, bkp: , desc: delete: rel
1663/2942745/12877; tid 65/80 KEYS_UPDATED

rmgr: Heap2   len (rec/tot): 34/66, tx:  106791715, lsn:
7D/2E313498, prev 7D/2E313458, bkp: , desc: new_cid: rel
1663/2942745/12877; tid 65/79; cmin: 4294967295, cmax: 0, combo: 4294967295

rmgr: Heap    len (rec/tot): 26/58, tx:  106791715, lsn:
7D/2E3134E0, prev 7D/2E313498, bkp: , desc: delete: rel
1663/2942745/12877; tid 65/79 KEYS_UPDATED

rmgr: Heap2   len (rec/tot): 34/66, tx:  106791715, lsn:
7D/2E313520, prev 7D/2E3134E0, bkp: , desc: new_cid: rel
1663/2942745/12877; tid 65/78; cmin: 4294967295, cmax: 0, combo: 4294967295

rmgr: Heap    len (rec/tot): 26/58, tx:  106791715, lsn:
7D/2E313568, prev 7D/2E313520, bkp: , desc: delete: rel
1663/2942745/12877; tid 65/78 KEYS_UPDATED

rmgr: Heap2   len (rec/tot): 34/66, tx:  106791715, lsn:
7D/2E3135A8, prev 7D/2E313568, bkp: , desc: new_cid: rel
1663/2942745/12877; tid 65/77; cmin: 4294967295, cmax: 0, combo: 4294967295

rmgr: Heap    len (rec/tot): 26/58, tx:  106791715, lsn:
7D/2E3135F0, prev 7D/2E3135A8, bkp: , desc: delete: rel
1663/2942745/12877; tid 65/77 KEYS_UPDATED

rmgr: Heap2   len (rec/tot): 34/66, tx:  106791715, lsn:
7D/2E313630, prev 7D/2E3135F0, bkp: , desc: new_cid: rel
1663/2942745/12877; tid 65/76; cmin: 4294967295, cmax: 0, combo: 4294967295

rmgr: Heap    len (rec/tot): 26/58, tx:  106791715, lsn:
7D/2E313678, prev 7D/2E313630, bkp: , desc: delete: rel
1663/2942745/12877; tid 65/76 KEYS_UPDATED

rmgr: Heap2   len (rec/tot): 34/66, tx:  106791715, lsn:
7D/2E3136B8, prev 7D/2E313678, bkp: , desc: new_cid: rel
1663/2942745/12877; tid 65/75; cmin: 4294967295, cmax: 0, combo: 4294967295

rmgr: Heap    len (rec/tot): 26/58, tx:  106791715, lsn:
7D/2E313700, prev 7D/2E3136B8, bkp: , desc: delete: rel
1663/2942745/12877; tid 65/75 KEYS_UPDATED

rmgr: Heap2   len (rec/tot): 34/66, tx:  106791715, lsn:
7D/2E313740, prev 7D/2E313700, bkp: , desc: new_cid: rel
1663/2942745/12877; tid 65/74; cmin: 4294967295, cmax: 0, combo: 4294967295

rmgr: Heap    len (rec/tot): 26/58, tx:  106791715, lsn:
7D/2E313788, prev 7D/2E313740, bkp: , desc: delete: rel
1663/2942745/12877; tid 65/74 KEYS_UPDATED

rmgr: Heap2   len (rec/tot): 34/66, tx:  106791715, lsn:
7D/2E3137C8, prev 7D/2E313788, bkp: , desc: new_cid: rel
1663/2942745/12747; tid 12/10; cmin: 4294967295, cmax: 1, combo: 4294967295

rmgr: Heap    len (rec/tot): 26/58, tx:  106791715, lsn:
7D/2E313810, prev 7D/2E3137C8, bkp: , desc: delete: rel
1663/2942745/12747; tid 12/10 KEYS_UPDATED

rmgr: Heap2   len (rec/tot): 34/66, tx:  106791715, lsn:
7D/2E313850, prev 7D/2E313810, bkp: , desc: new_cid: rel
1663/2942745/12877; tid 65/72; cmin: 4294967295, cmax: 1, combo: 4294967295

rmgr: Heap

Re: Implementing pgaudit extension on Microsoft Windows

2019-02-11 Thread Pavel Stehule
On Mon 11. 2. 2019 at 14:23, Niels Jespersen wrote:

> Hello all
>
>
>
> Recent Oracle convert here.
>
>
>
> Running PostgreSQL 11 on Windows (10 and 2016).
>
>
>
> Please advise, if I should direct this to another mailinglist or forum.
>
>
>
> I want to implement detailed object auditing on PostgreSQL. This is what
> PGAudit extension does. However most advice on this is directed at Linux
> and friends, which is fair enough. From Googleing around, it seems it
> should be possible to compile on Windows, though.
>
>
>
> And it is. I have compiled pgaudit.dll from the source out of pgaudit
> version 1.3.
>
>
>
> And this:
>
>
>
> Copied pgaudit.dll to C:\Program Files\PostgreSQL\11\lib
>
> Copied  pgaudit.control and pgaudit--1.3.sql to C:\Program
> Files\PostgreSQL\11\share\extension
>
> Set shared_preload_libraries = 'pgaudit'
>
> Restart PostgreSQL
>
> Run “CREATE EXTENSION pgaudit;”
>
> alter system set pgaudit.log = 'all';
>
> select pg_reload_conf();
>
>
>
> Nothing enters the log-file from pgaudit. Creating tables (relations,
> sorry). Selecting from tables, nothing in the log.
>
>
>
> I can see that alter system set pgaudit.logx = 'all';  complains about
> “unrecognized configuration parameter”, so something is known about
> pgaudit.
>
>
>
> I can see that postgres.exe has locked pgaudit.dll, so postgreSQL knows it
> is there, but nothing in the log-file.
>
>
>
> I need some advice regarding the next steps in my troubleshooting.
>

you should probably use

alter system set "pgaudit.log" = 'all';

Regards

Pavel


>
> Regards
>
> Niels Jespersen


Re: Implementing pgaudit extension on Microsoft Windows

2019-02-11 Thread Tom Lane
Niels Jespersen  writes:
> Copied pgaudit.dll to C:\Program Files\PostgreSQL\11\lib
> Copied  pgaudit.control and pgaudit--1.3.sql to C:\Program 
> Files\PostgreSQL\11\share\extension
> Set shared_preload_libraries = 'pgaudit'
> Restart PostgreSQL
> Run "CREATE EXTENSION pgaudit;"
> alter system set pgaudit.log = 'all';
> select pg_reload_conf();

> Nothing enters the log-file from pgaudit. Creating tables (relations, sorry). 
> Selecting from tables, nothing in the log.

Hm, what you describe above looks right.

> I can see that alter system set pgaudit.logx = 'all';  complains about 
> "unrecognized configuration parameter", so something is known about pgaudit.

No, that means the postmaster *doesn't* have pgaudit loaded.

The fact that CREATE EXTENSION worked implies that you've got the library
correctly built, so I think this must boil down to the
"shared_preload_libraries" setting not having taken.  You could
cross-check that by seeing what "SHOW shared_preload_libraries"
prints.

I wonder if you forgot to remove the comment marker (#) on the
shared_preload_libraries line?

regards, tom lane



Re: Implementing pgaudit extension on Microsoft Windows

2019-02-11 Thread Pavel Stehule
On Mon 11. 2. 2019 at 15:37, Pavel Stehule wrote:

>
> you should probably use
>
> alter system set "pgaudit.log" = 'all';
>

but alter system set pgaudit.log = 'all'; should work if pgaudit is
loaded.




>
> Regards
>
> Pavel


Re: Promote replica before being able to accept connections

2019-02-11 Thread Martín Fernández
Laurenz,

That makes sense! 

Thanks for the quick reply

Best,

Martín

On Mon, Feb 11th, 2019 at 7:55 AM, Laurenz Albe  
wrote:

> 
> 
> 
> Martín Fernández wrote:
> > We have a 9.2 pg cluster and we are in the process of rebuilding a
> master database in our staging environment.
> > In order to achieve the latter goal, we are restoring our staging
> database using pg_basebackup against one
> > of our production replicas.
> >
> > pg_basebackup has completed and the staging database is configured in
> hot_standby mode, recovering from
> > the WAL archive. The database has not reached the point where it can
> accept connections yet; it is still
> > ingesting WAL files.
> >
> > What we are not sure about is: can we attempt promoting the database as
> a master instance before it can
> > accept connections? We don't care if the staging database is a few days
> behind the production environment,
> > that is something acceptable in our use case; we just want to promote it
> as soon as we can.
> 
> The earliest time to which you can recover is the end of the backup.
> 
> Until recovery has reached that point, the database is not consistent
> (that's the "consistent recovery state" the logs are talking about).
> 
> You cannot connect to the recovering database or promote it before you
> reach that point, so you have to wait until you can connect to the
> database
> before you can promote it.
> 
> Yours,
> Laurenz Albe
> --
> Cybertec | https://www.cybertec-postgresql.com

Copy entire schema A to a different schema B

2019-02-11 Thread Tiffany Thang
Hi,
To copy the source schema A to target schema B in the same database in
PG10.3, I use psql to dump schema A and manually remove anything specific
to the schema in the text dump file before importing into schema B. How do
I achieve the same exporting from schema A and importing into schema B
using pg_dump with the -Fc option? Since the dump file generated is binary,
I could not make modifications to the file. Is the procedure the same in
version 11?

Thanks.

Tiff


Re: Copy entire schema A to a different schema B

2019-02-11 Thread Adrian Klaver

On 2/11/19 8:00 AM, Tiffany Thang wrote:

Hi,
To copy the source schema A to target schema B in the same database in 
PG10.3, I use psql to dump schema A and manually remove anything 
specific to the schema in the text dump file before importing into 
schema B. How do I achieve the same exporting from schema A and 
importing into schema B using pg_dump with the -Fc option? Since the 
dump file generated is binary, I could not make modifications to the 
file. Is the procedure the same in version 11?

Thanks.

Tiff


Use the -f option to pg_restore:

https://www.postgresql.org/docs/10/app-pgrestore.html

So something like:

pg_restore -f text_file.sql the_binary_file.out

This will get you back to a text file you can manipulate.



--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Copy entire schema A to a different schema B

2019-02-11 Thread Ron

On 2/11/19 10:00 AM, Tiffany Thang wrote:

Hi,
To copy the source schema A to target schema B in the same database in 
PG10.3, I use psql to dump schema A and manually remove anything specific 
to the schema in the text dump file before importing into schema B. How do 
I achieve the same exporting from Schema A and importing into schema B 
using pg_dump with the -Fc option? Since the dump file generated is 
binary, I could not make modifications to the file. Is the procedure the 
same in version 11?


Why do you need to use "--format=custom" instead of "--format=plain"?

For example:
$ pg_dump --format=plain --schema-only --schema=A


--
Angular momentum makes the world go 'round.



Re: Copy entire schema A to a different schema B

2019-02-11 Thread Tiffany Thang
Thanks Adrian and Ron. Sorry, I was not clear. What I'm trying to achieve
is to dump the schema quickly and be able to restore a single object or a
subset of objects from the dump. As far as I understand, the only way of
achieving that is to use the custom format and the -j option. Is that
correct? Are there any other alternatives?

Thanks.

Tiff

On Mon, Feb 11, 2019 at 11:10 AM Ron  wrote:

> On 2/11/19 10:00 AM, Tiffany Thang wrote:
> > Hi,
> > To copy the source schema A to target schema B in the same database in
> > PG10.3, I use psql to dump schema A and manually removes anything
> specific
> > to the schema in the text dump file before importing into schema B. How
> do
> > I achieve the same exporting from Schema A and importing into schema B
> > using pg_dump with the -Fc option? Since the dump file generated is
> > binary, I could not make modifications to the file. Is the procedure the
> > same in version 11?
>
> Why do you need to use "--format=custom" instead of "--format=plain"?
>
> For example:
> $ pg_dump --format=plain --schema-only --schema=A
>
>
> --
> Angular momentum makes the world go 'round.
>
>


Re: Copy entire schema A to a different schema B

2019-02-11 Thread Adrian Klaver

On 2/11/19 8:30 AM, Tiffany Thang wrote:
Thanks Adrian and Ron. Sorry, I was not clear. What I'm trying to 
achieve is to dump the schema quickly and be able to restore a single 
object or a subset of objects from the dump. As far as I understand, the only way 
of achieving that is to use the custom format and the -j option. Is that 
correct? Are there any other alternatives?


If you want to use -j then you need to use the -Fd output:

https://www.postgresql.org/docs/10/app-pgdump.html

"-j njobs
--jobs=njobs

Run the dump in parallel by dumping njobs tables simultaneously. 
This option reduces the time of the dump but it also increases the load 
on the database server. You can only use this option with the directory 
output format because this is the only output format where multiple 
processes can write their data at the same time."


If you need to grab just a subset of the schema then there are options 
to do that depending on the object. From above link as examples:


"-n schema
--schema=schema

Dump only schemas matching schema; this selects both the schema 
itself, and all its contained objects. ..."



"-t table
--table=table

Dump only tables with names matching table.  .."
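
Putting it together, a sketch of the parallel dump plus a selective
restore (names and job counts are placeholders):

pg_dump -Fd -j 4 -n schema_a -f schema_a.dump mydb
pg_restore -j 4 -t some_table -d mydb schema_a.dump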




Thanks.

Tiff

On Mon, Feb 11, 2019 at 11:10 AM Ron > wrote:


On 2/11/19 10:00 AM, Tiffany Thang wrote:
 > Hi,
 > To copy the source schema A to target schema B in the same
database in
 > PG10.3, I use psql to dump schema A and manually removes anything
specific
 > to the schema in the text dump file before importing into schema
B. How do
 > I achieve the same exporting from Schema A and importing into
schema B
 > using pg_dump with the -Fc option? Since the dump file generated is
 > binary, I could not make modifications to the file. Is the
procedure the
 > same in version 11?

Why do you need to use "--format=custom" instead of "--format=plain"?

For example:
$ pg_dump --format=plain --schema-only --schema=A


-- 
Angular momentum makes the world go 'round.





--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Odd messages on reloading DB table

2019-02-11 Thread David G. Johnston
On Mon, Feb 11, 2019 at 12:29 PM Steve Wampler  wrote:
> Thanks - but I thought the search_path update was a PG 10 change and so 
> shouldn't reflect on 9.5.15 behavior.  Did it
> get back-ported?

Yes, it was deemed a security vulnerability and thus back-patched.
Release notes will indicate when that happened.

> In any event I'm surprised that pg_dump for 9.5.15 can produce a dump that 
> can't be restored by either pg_restore
> (when -Fc is used on both ends) or with psql (without -Fc used on pg_dump).  
> I would have expected some message
> from pg_dump if it ran into issues preventing this.

pg_dump doesn't look into function bodies to make determinations - to
it those are just strings.  There are a number of susceptible areas that
cannot be reliably detected during backup.  That is why it is
imperative that backups are restored before being considered valid (at
the least those backups that occur subsequent to a schema change).

David J.



Re: Odd messages on reloading DB table

2019-02-11 Thread Steve Wampler

On 2/7/19 3:24 PM, David G. Johnston wrote:

On Thursday, February 7, 2019, Steve Wampler mailto:swamp...@nso.edu>> wrote:

    (1) the table already exists and then immediately doesn't exist?
    (2) report ERROR on UPDATE when there are no UPDATES in the input file



Most likely the first attempt was schema qualified and so found the existing targets table while the second attempt was 
not schema qualified and targets is not in the search path.


One guess I have is that triggers are involved here and those triggers need to be more resilient in the face of the recent 
search_path security update.


Thanks - but I thought the search_path update was a PG 10 change and so 
shouldn't reflect on 9.5.15 behavior.  Did it
get back-ported?

In any event I'm surprised that pg_dump for 9.5.15 can produce a dump that 
can't be restored by either pg_restore
(when -Fc is used on both ends) or with psql (without -Fc used on pg_dump).  I 
would have expected some message
from pg_dump if it ran into issues preventing this.


--
Steve Wampler -- swamp...@nso.edu
The gods that smiled on your birth are now laughing out loud.



Aurora Postgresql RDS DB Latency

2019-02-11 Thread github kran
Hi Postgres Team,
We are using AWS Aurora RDS Postgres DB 9.6.8 Engine. We recently deleted
few million rows from the database and ran into a issue in one of our dev
account where the
DB was not normal after this deletion. We did re index, vacuuming entire
database but we couldnt bring it to the same state as earlier. So next
steps we deleted the database and
recreated the database by copying the snapshot from a production instance.
Further did vacumming, re-index on the database.

After this now the dev database seems to be in a better state than earlier
but we are seeing few of our DB calls are taking more than 1 minute when we
are fetching data and we observed
this is because the query plan was executing a hash join as part of the
query whereas a similar query on prod instance is not doing any hash join
and is returning faster.

Also we did not want to experiment by modifing the DB settings by doing
enable_hash_join to off or random_page_count to 1 as we dont have these
settings in Prod instance.

Note:
The partition table sizes we have here is between 40 GB to 75 GB and this
is our normal size range, we have a new partition table for every 7 days.

Appreciate your ideas on what we could be missing and what we can correct
here to reduce the query latency.

Thanks
githubKran


Re: Aurora Postgresql RDS DB Latency

2019-02-11 Thread Michael Lewis
Are the default statistics targets the same on both prod and dev? Have you
analyzed all tables being used in this query to ensure the stats are up to
date? If the optimizer is choosing a different plan, then the stats must be
different IMO.
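
A quick way to compare both instances (the table name is a placeholder):

   SHOW default_statistics_target;
   ANALYZE your_table;
   SELECT attname, n_distinct, null_frac
   FROM pg_stats
   WHERE tablename = 'your_table';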


*Michael Lewis |  Software Engineer*
*Entrata*


On Mon, Feb 11, 2019 at 2:15 PM github kran  wrote:

> Hi Postgres Team,
> We are using AWS Aurora RDS Postgres DB 9.6.8 Engine. We recently deleted
> few million rows from the database and ran into a issue in one of our dev
> account where the
> DB was not normal after this deletion. We did re index, vacuuming entire
> database but we couldnt bring it to the same state as earlier. So next
> steps we deleted the database and
> recreated the database by copying the snapshot from a production instance.
> Further did vacumming, re-index on the database.
>
> After this now the dev database seems to be in a better state than earlier
> but we are seeing few of our DB calls are taking more than 1 minute when we
> are fetching data and we observed
> this is because the query plan was executing a hash join as part of the
> query whereas a similar query on prod instance is not doing any hash join
> and is returning faster.
>
> Also we did not want to experiment by modifing the DB settings by doing
> enable_hash_join to off or random_page_count to 1 as we dont have these
> settings in Prod instance.
>
> Note:
> The partition table sizes we have here is between 40 GB to 75 GB and this
> is our normal size range, we have a new partition table for every 7 days.
>
> Appreciate your ideas on what we could be missing and what we can correct
> here to reduce the query latency.
>
> Thanks
> githubKran
>


Blank, nullable date column rejected by psql

2019-02-11 Thread Rich Shepard

Running postgresql-10.5 on Slackware-14.2.

A table has a column defined as

    Column    | Type | Collation | Nullable | Default
 next_contact | date |           |          |

In a .sql file to insert rows in this table psql has a problem when there's
no value for the next_contact column:

$ psql -f activities.sql -d bustrac 
psql:activities.sql:6: ERROR:  invalid input syntax for type date: ""

LINE 2: ...ise. Asked him to call.',''),

Explicitly replacing the blank field ('') with null is accepted. Why is
this?

Now that I know to replace missing dates with null I'll do so, but I'm
curious why this is needed.

Thanks in advance,

Rich




Re: Blank, nullable date column rejected by psql

2019-02-11 Thread Ron

On 2/11/19 4:44 PM, Rich Shepard wrote:

Running postgresql-10.5 on Slackware-14.2.

A table has a column defined as

Column   | Type  | Collation | Nullable | Default
next_contact | date  |   |  |

In a .sql file to insert rows in this table psql has a problem when there's
no value for the next_contact column:

$ psql -f activities.sql -d bustrac psql:activities.sql:6: ERROR: invalid 
input syntax for type date: ""

LINE 2: ...ise. Asked him to call.',''),

Explicitly replacing the blank field ('') with null is accepted. Why is
this?

Now I know to replace no dates with null I'll do so but I'm curious why this
is needed.


NULL is nothing.  Blank isn't nothing; blank is a zero-length string.  Thus, 
you need to tell Pg "nothing", not "blank string".


(Oracle is really bad about that.)

--
Angular momentum makes the world go 'round.



Re: Blank, nullable date column rejected by psql

2019-02-11 Thread Adrian Klaver

On 2/11/19 2:44 PM, Rich Shepard wrote:

Running postgresql-10.5 on Slackware-14.2.

A table has a column defined as

Column   | Type  | Collation | Nullable | Default
next_contact | date  |   |  |

In a .sql file to insert rows in this table psql has a problem when there's
no value for the next_contact column:

$ psql -f activities.sql -d bustrac psql:activities.sql:6: ERROR:  
invalid input syntax for type date: ""

LINE 2: ...ise. Asked him to call.',''),

Explicitly replacing the blank field ('') with null is accepted. Why is
this?

Now I know to replace no dates with null I'll do so but I'm curious why this
is needed.


Because:

invalid input syntax for type date: ""

means you are trying to enter an empty string("") and that:

a) Is not NULL
b) Is not a valid date string.
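
A quick demonstration in psql (a sketch):

   SELECT ''::date;    -- fails: invalid input syntax for type date: ""
   SELECT NULL::date;  -- fine: returns NULL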



Thanks in advance,

Rich






--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Blank, nullable date column rejected by psql

2019-02-11 Thread Tom Lane
Rich Shepard  writes:
> In a .sql file to insert rows in this table psql has a problem when there's
> no value for the next_contact column:

> $ psql -f activities.sql -d bustrac 
> psql:activities.sql:6: ERROR:  invalid input syntax for type date: ""
> LINE 2: ...ise. Asked him to call.',''),

> Explicitly replacing the blank field ('') with null is accepted. Why is
> this?

An empty string is not a null.

(Oracle has done untold damage to the field by failing to make this
distinction ... but in theory, and in the SQL standard, and in Postgres,
they're not at all the same thing.)

regards, tom lane



Re: Blank, nullable date column rejected by psql

2019-02-11 Thread David G. Johnston
On Mon, Feb 11, 2019 at 3:44 PM Rich Shepard  wrote:
> Now I know to replace no dates with null I'll do so but I'm curious why this
> is needed.

Same reason you needed it about a month ago when you were dealing with
a check constraint question with the same error message.

David J.



Re: Blank, nullable date column rejected by psql

2019-02-11 Thread Rich Shepard

On Mon, 11 Feb 2019, Tom Lane wrote:


An empty string is not a null.


Tom,

I understand this yet thought that empty strings and numeric fields were
accepted. Guess I need to review this.

Thanks,

Rich




Re: Blank, nullable date column rejected by psql

2019-02-11 Thread Rich Shepard

On Mon, 11 Feb 2019, Ron wrote:


NULL is nothing. Blank isn't nothing; blank is a zero-length string. 
Thus, you need to tell Pg "nothing", not "blank string".


Ron,

All of you who responded drove home my need to explicitly enter null when
there are no data for a column.

Thanks,

Rich



Re: Blank, nullable date column rejected by psql

2019-02-11 Thread Ron

On 2/11/19 5:30 PM, Rich Shepard wrote:

On Mon, 11 Feb 2019, Tom Lane wrote:


An empty string is not a null.


Tom,

I understand this yet thought that empty strings and numeric fields were
accepted. Guess I need to review this.


You've got ADOS (All Databases are Oracle Syndrome).

--
Angular momentum makes the world go 'round.



Re: Blank, nullable date column rejected by psql

2019-02-11 Thread Rich Shepard

On Mon, 11 Feb 2019, Rich Shepard wrote:


All of you who responded drove home my need to explicitly enter null when
there are no data for a column.


Correction: when there are no date data for a column.

Rich



Re: Blank, nullable date column rejected by psql

2019-02-11 Thread Rich Shepard

On Mon, 11 Feb 2019, Ron wrote:


You've got ADOS (All Databases are Oracle Syndrome).


Interesting, as I've never bought, used, or seen anything from Oracle. Guess
it's transmitted by errant bits.

Regards,

Rich



Re: Blank, nullable date column rejected by psql

2019-02-11 Thread Ken Tanzer
>
>
> Ron,
>
> All of you who responded drove home my need to explicitly enter null when
> there are no data for a column.
>
> Thanks,
>
> Rich
>
>
Just in case you miss this little nuance, you don't necessarily _have_ to
specify a NULL for that column, depending how you're doing your inserts.
You haven't shown us your table or what INSERT you're using, but all of
these examples will work, and don't specify an explicit NULL:

CREATE TEMP TABLE foo (a INTEGER NOT NULL, b INTEGER NOT NULL, c DATE);
CREATE TABLE

INSERT INTO foo VALUES (1,2);
INSERT 0 1
INSERT INTO foo (a,b) VALUES (1,2);
INSERT 0 1

INSERT INTO foo (a,b) SELECT  1,2;
INSERT 0 1

Cheers,
Ken





-- 
AGENCY Software
A Free Software data system
By and for non-profits
*http://agency-software.org/ *
*https://demo.agency-software.org/client
*
ken.tan...@agency-software.org
(253) 245-3801

Subscribe to the mailing list
 to
learn more about AGENCY or
follow the discussion.


Re: Aurora Postgresql RDS DB Latency

2019-02-11 Thread github kran
On Mon, Feb 11, 2019 at 3:29 PM Michael Lewis  wrote:

> Are default statistics target the same on both prod and AWS? Have you
> analyzed all tables being used in this query to ensure stats are up proper?
> If the optimizer is choosing a different plan, then the stats must be
> different IMO.
>
>
> *Michael Lewis |  Software Engineer*
> *Entrata*
>


Thanks for your reply. I have verified a few of the tables and their default
statistics targets and they seem to be the same, but is there anything in
particular you want me to look at to differentiate the prod and non-prod
databases? (Also the DB instance size is the same, but there is a little
more data in the non-prod Aurora RDS instance compared to the prod instance.)

Query used:

select * from pg_stats where tablename = 'tableName'
>
>
> On Mon, Feb 11, 2019 at 2:15 PM github kran  wrote:
>
>> Hi Postgres Team,
>> We are using AWS Aurora RDS Postgres DB 9.6.8 Engine. We recently deleted
>> few million rows from the database and ran into a issue in one of our dev
>> account where the
>> DB was not normal after this deletion. We did re index, vacuuming entire
>> database but we couldnt bring it to the same state as earlier. So next
>> steps we deleted the database and
>> recreated the database by copying the snapshot from a production
>> instance. Further did vacumming, re-index on the database.
>>
>> After this now the dev database seems to be in a better state than
>> earlier but we are seeing few of our DB calls are taking more than 1 minute
>> when we are fetching data and we observed
>> this is because the query plan was executing a hash join as part of the
>> query whereas a similar query on prod instance is not doing any hash join
>> and is returning faster.
>>
>> Also we did not want to experiment by modifing the DB settings by doing
>> enable_hash_join to off or random_page_count to 1 as we dont have these
>> settings in Prod instance.
>>
>> Note:
>> The partition table sizes we have here is between 40 GB to 75 GB and this
>> is our normal size range, we have a new partition table for every 7 days.
>>
>> Appreciate your ideas on what we could be missing and what we can correct
>> here to reduce the query latency.
>>
>> Thanks
>> githubKran
>>
>


Re: Blank, nullable date column rejected by psql

2019-02-11 Thread Rich Shepard

On Mon, 11 Feb 2019, Ken Tanzer wrote:


Just in case you miss this little nuance, you don't necessarily _have_ to
specify a NULL for that column, depending how you're doing your inserts.
You haven't show us your table or what INSERT you're using, but all of
these examples will work, and don't specify an explicit NULL:


Ken,

Well, you've succeeded in confusing me. :-)

This is the table's schema:

# \d activities
                              Table "public.activities"
    Column    |         Type          | Collation | Nullable |         Default
--------------+-----------------------+-----------+----------+--------------------------
 person_id    | integer               |           | not null |
 act_date     | date                  |           | not null | CURRENT_DATE
 act_type     | character varying(12) |           | not null | '??'::character varying
 notes        | text                  |           | not null | '??'::text
 next_contact | date                  |           |          |
Indexes:
    "activities_pkey" PRIMARY KEY, btree (person_id, act_date, act_type)
Foreign-key constraints:
    "activities_act_type_fkey" FOREIGN KEY (act_type) REFERENCES activitytypes(act_name) ON UPDATE CASCADE ON DELETE RESTRICT
    "activities_person_id_fkey" FOREIGN KEY (person_id) REFERENCES people(person_id) ON UPDATE CASCADE ON DELETE RESTRICT

And this is the framework for adding rows:

insert into Activities (person_id,act_date,act_type,notes,next_contact) values
(

I add values for each column, but if there's no scheduled next_contact date
I leave that off. To me, this looks like your second example (with two
columns of values and no date) and I don't see the difference.

Regards,

Rich




Re: Blank, nullable date column rejected by psql

2019-02-11 Thread Ken Tanzer
>
>
> Ken,
>
> Well, you've succeeded in confusing me. :-)
>
> And this is the framwork for adding rows:
>
> insert into Activities (person_id,act_date,act_type,notes,next_contact)
> values
> (
>
> I add values for each column, but if there's no scheduled next_contact date
> I left that off. To me, this looks like your second example (with two
> columns of values and no date) and I don't see the differences.
>
Assuming you mean this example:

INSERT INTO foo (a,b) VALUES (1,2);

The difference is I didn't specify field c in the list of columns, so it
gets inserted with its default value.  That would be the same as:

insert into Activities (person_id,act_date,act_type,notes) values...

Now that will work if you're doing a separate INSERT for each row.  If
you're doing multiple VALUES in one INSERT, and some have a next contact
date and some don't, then I think you're going to need to explicitly spell
out your NULLs.
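
For example, against your table (the values are made up; act_type must
exist in your activitytypes table):

INSERT INTO Activities (person_id,act_date,act_type,notes,next_contact) VALUES
(1,'2019-02-11','phone','Left a message',NULL),
(2,'2019-02-11','email','Sent proposal','2019-03-01');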

Cheers,
Ken

-- 
AGENCY Software
A Free Software data system
By and for non-profits
*http://agency-software.org/ *
*https://demo.agency-software.org/client
*
ken.tan...@agency-software.org
(253) 245-3801

Subscribe to the mailing list
 to
learn more about AGENCY or
follow the discussion.


Re: Blank, nullable date column rejected by psql

2019-02-11 Thread Ron

On 2/11/19 5:44 PM, Rich Shepard wrote:

On Mon, 11 Feb 2019, Ron wrote:


You've got ADOS (All Databases are Oracle Syndrome).


Interesting as I've never bought, used, or seen anything from Oracle. Guess
it's transmitted by errent bits.


It's easily transmitted via toilet seats.

--
Angular momentum makes the world go 'round.



Re: Promote replica before being able to accept connections

2019-02-11 Thread Michael Paquier
On Mon, Feb 11, 2019 at 06:59:27AM -0800, Martín Fernández wrote:
> That makes sense!
> 
> Thanks for the quick reply

Note that PostgreSQL 9.4 has introduced a new parameter in
recovery.conf that allows recovery to finish exactly when a consistent
state has been reached:
recovery_target = 'immediate'

So that's basically what you are looking for.  Now you are on 9.2, and
new features are not back-ported.
--
Michael




Re: Server goes to Recovery Mode when run a SQL

2019-02-11 Thread Michael Paquier
On Sun, Feb 10, 2019 at 03:15:38PM +1100, rob stone wrote:
> Down around line 87 onwards there are generate_series without any
> parameters, and further dubious usage of EPOCH, as well as DOW.
> 
> Not having the table definitions is obviously clouding the analysis.

That seems like the root issue for now.

> If there is a bug in the parser, then one of the experts will have to
> opine about that.

Could you extract the SQL query and the schema which are needed to
make the problem reproducible?  That would help a bit more than having
to guess based on the sparse information on this thread.
--
Michael




SV: Implementing pgaudit extension on Microsoft Windows

2019-02-11 Thread Niels Jespersen
Thanks Tom

alter system set pgaudit.log = 'all'; -- Works. After select pg_reload_conf() it results in:
2019-02-12 08:51:49.109 CET [13560] LOG:  parameter "pgaudit.log" changed to "all"

alter system set pgaudit.logx = 'all'; -- Notice the spelling error "logx". Fails immediately with:
2019-02-12 08:53:04.412 CET [12856] ERROR:  unrecognized configuration parameter "pgaudit.logx"
2019-02-12 08:53:04.412 CET [12856] STATEMENT:  alter system set pgaudit.logx = 'all';

pgaudit is loaded, as shown by: 

localhost postgres@postgres#show shared_preload_libraries;
 shared_preload_libraries
--
 pgaudit
(1 row)

localhost postgres@postgres#

pgaudit.dll is locked by postgres.exe if I try to remove it from the lib 
folder. 

All looks normal, except no logs from auditing. 

Regards Niels


-----Original Message-----
From: Tom Lane 
Sent: 11 February 2019 15:44
To: Niels Jespersen 
Cc: 'pgsql-general@lists.postgresql.org' 
Subject: Re: Implementing pgaudit extension on Microsoft Windows

Niels Jespersen  writes:
> Copied pgaudit.dll to C:\Program Files\PostgreSQL\11\lib Copied  
> pgaudit.control and pgaudit--1.3.sql to C:\Program 
> Files\PostgreSQL\11\share\extension
> Set shared_preload_libraries = 'pgaudit'
> Restart PostgreSQL
> Run "CREATE EXTENSION pgaudit;"
> alter system set pgaudit.log = 'all';
> select pg_reload_conf();

> Nothing enters the log-file from pgaudit. Creating tables (relations, sorry). 
> Selecting from tables, nothing in the log.

Hm, what you describe above looks right.

> I can see that alter system set pgaudit.logx = 'all';  complains about 
> "unrecognized configuration parameter", so something is known about pgaudit.

No, that means the postmaster *doesn't* have pgaudit loaded.

The fact that CREATE EXTENSION worked implies that you've got the library 
correctly built, so I think this must boil down to the 
"shared_preload_libraries" setting not having taken.  You could cross-check 
that by seeing what "SHOW shared_preload_libraries"
prints.

I wonder if you forgot to remove the comment marker (#) on the 
shared_preload_libraries line?

regards, tom lane