Postgres C-API: How to get the Oid for a custom type defined in a schema outside of the current search path

2020-12-03 Thread Michael Krüger
Dear all,

I need to find out the Oid of a custom type from C. The type is defined in
a schema outside of the standard search path, i.e. not in public.
I know that I can call TypenameGetTypid to look up an Oid, but this does
not work for me when the type is not in the public namespace.
How can I look up a non-public Oid? Do I have to change the search path? If
yes, how do I do it? As mentioned, I need to do this with the C interface.

Regards,
Michael

Email:  michael@kruegers.email


Re: Postgres C-API: How to get the Oid for a custom type defined in a schema outside of the current search path

2020-12-03 Thread Pavel Stehule
Hi

čt 3. 12. 2020 v 10:32 odesílatel Michael Krüger 
napsal:

> Dear all,
>
> I need to find out the Oid of a custom type in C programming language.
> This type is defined in a schema, outside of the standard search path, e.g.
> public.
> I know that I can call TypenameGetTypid to find out an Oid. But this is
> not working for me if the type is not in the public namespace.
> How can I query a non-public Oid? Do I have to change the search path? If
> yes how do I do it? As mentioned I need to do this with the C interface.
>

Looking at the source code:

Oid
TypenameGetTypidExtended(const char *typname, bool temp_ok)
{
    Oid         typid;
    ListCell   *l;

    recomputeNamespacePath();

    foreach(l, activeSearchPath)
    {
        Oid         namespaceId = lfirst_oid(l);

        if (!temp_ok && namespaceId == myTempNamespace)
            continue;           /* do not look in temp namespace */

        typid = GetSysCacheOid2(TYPENAMENSP, Anum_pg_type_oid,
                                PointerGetDatum(typname),
                                ObjectIdGetDatum(namespaceId));
        if (OidIsValid(typid))
            return typid;
    }

    /* Not found in path */
    return InvalidOid;
}

You can get the value from the syscache by calling GetSysCacheOid2. It can
look like this:

    char       *schemaname;
    char       *typname;

    namespaceId = LookupExplicitNamespace(schemaname, missing_ok);
    if (OidIsValid(namespaceId))
        typoid = GetSysCacheOid2(TYPENAMENSP, Anum_pg_type_oid,
                                 PointerGetDatum(typname),
                                 ObjectIdGetDatum(namespaceId));
    else
        typoid = InvalidOid;
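[Editor's note: put together, the pieces above could form a small helper
along the following lines. This is only a sketch for a server-side
extension — the function name get_type_oid is invented here, and error
handling is minimal. It uses missing_ok = true so that a nonexistent
schema yields InvalidOid instead of an error:]

```c
#include "postgres.h"
#include "catalog/namespace.h"
#include "catalog/pg_type.h"
#include "utils/syscache.h"

/*
 * Sketch: resolve "schemaname.typname" to a type OID without relying on
 * the current search_path.  Returns InvalidOid if either the schema or
 * the type does not exist.
 */
static Oid
get_type_oid(const char *schemaname, const char *typname)
{
    Oid         namespaceId;

    /* true = missing_ok: return InvalidOid rather than erroring out */
    namespaceId = LookupExplicitNamespace(schemaname, true);
    if (!OidIsValid(namespaceId))
        return InvalidOid;

    return GetSysCacheOid2(TYPENAMENSP, Anum_pg_type_oid,
                           PointerGetDatum(typname),
                           ObjectIdGetDatum(namespaceId));
}
```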


Regards

Pavel





> Regards,
> Michael
>
> Email: michael@kruegers.email
>


Re: Postgres C-API: How to get the Oid for a custom type defined in a schema outside of the current search path

2020-12-03 Thread Michael Krüger
Hi,

> Am 03.12.2020 um 11:01 schrieb Pavel Stehule :
> 
> [...]

Thank you so much. That worked.

Regards,
Michael

psql > split > queries & output

2020-12-03 Thread Wim Bertels
Hello,

psql shows (the queries and) the output;
there are, for example, the -L option and the \set ECHO queries setting

is there a way to run psql so that:
* inside psql there are no surprises :) (ignore this line:)
* one pipe (or..) to a file which only contains the statements
(queries) history (and not the output)
* another pipe (or..) to a file which only contains the output (with or
without the statements) history

why? for example, 
to have easy access in a psql   demonstration to past commands
(history),
without the output cluttering or overwhelming everything, side by side
on the screen. (optionally with a third pane for the output)

there is also the \s command in psql,
but i guess the history is only written to file when psql is closed,
and is not accessible during the session (only in memory?),
otherwise this would also be an option
(not ctrl+r)

suggestions or alternatives while using the commandline?

-- 
mvg,
Wim 
--
https://coronalert.be


--
Always do right.  This will gratify some people and astonish the rest.
-- Mark Twain





postgres-10 with FIPS

2020-12-03 Thread Aravindhan Krishnan
Hi Community,

In our org we are using postgres-10, installed on top of Ubuntu (currently
20.04.1) by adding the source to apt and downloading postgres. We want to
achieve FIPS compliance, for which we need to use FIPS-compliant packages
of all the libraries/packages we depend on.

Since postgres is linked against openssl, we want to make sure we build
postgres against the FIPS-compliant openssl libraries. Does postgres
provide a FIPS debian package that can be used? If not, it would be of
great help to have instructions for building the postgres debian package
linked against the FIPS-compliant openssl libraries.

Regards,
Aravindhan Krishnan...


Re: psql > split > queries & output

2020-12-03 Thread Pavel Stehule
čt 3. 12. 2020 v 12:39 odesílatel Wim Bertels  napsal:

> Hello,
>
> psql shows (the queries and) the output,
> there are for example the -L option and echo_queries option
>
> is there a way to run psql so that:
> * inside psql there are no surprises :) (ignore this line:)
> * one pipe (or..) to a file which only contains the statements
> (queries) history (and not the output)
> * another pipe (or..) to a file which only contains the output (with or
> without the statements) history
>
> why? for example,
> to have easy access in a psql   demonstration to past commands
> (history),
> without the output cluttering or overwhelming everything, side by side
> on the screen. (optionally with a third pane for the output)
>
> there is also the /s option in psql,
> but i guess the history is only written to file when psql is closed,
> and is not accessible during the session (only in memory?),
> otherwise this would also be an option
> (not ctrl+r)
>
> suggestions or alternatives while using the commandline?
>

Unfortunately, that is not possible: both the queries and the results are
sent to stdout. A hack of your own could be fairly easy to write, but
without one it is not possible.

Regards

Pavel


> --
> mvg,
> Wim
> --
> https://coronalert.be
>
>
> --
> Always do right.  This will gratify some people and astonish the rest.
> -- Mark Twain
>
>
>
>


Re: Postgres C-API: How to get the Oid for a custom type defined in a schema outside of the current search path

2020-12-03 Thread Pavel Stehule
čt 3. 12. 2020 v 11:44 odesílatel Michael Krüger 
napsal:

> Hi,
>
> [...]
> Thank you so much. That worked.
>

With pleasure.

When you write C extensions for Postgres, the PostgreSQL source code itself
is the best source of inspiration.

Regards

Pavel




> Regards,
> Michael
>


Re: psql > split > queries & output

2020-12-03 Thread Wim Bertels
Daniel Verite schreef op do 03-12-2020 om 15:18 [+0100]:
>   Wim Bertels wrote:
> 
> > * one pipe (or..) to a file which only contains the statements
> > (queries) history (and not the output)
> > * another pipe (or..) to a file which only contains the output
> > (with or
> > without the statements) history
> 
> Consider that script:
> 
> $ cat f.sql
> \set ECHO queries
> \o output.txt
> select 1;
> select 2;
> select 3;
> 
> If it's run with
> 
> $ psql -f f.sql >queries.txt
> 
> then it produces the queries in queries.txt and the output in
> output.txt


thank you,
I use the same approach with an input file,
but interactively this doesn't seem to work,

i.e.
psql > queries.txt (does not work (stdout))
(psql 2> queries.txt does work (stderr))
(psql &> queries.txt does not work (both))

--

psql -f - > out

does work interactively,
but it is no fun :) (no completion, and the separation does not
seem to work)
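[Editor's note: for reference, the non-interactive approach suggested by
Daniel can be reproduced with a few lines of shell. The script contents
are just sample queries; actually running psql requires a reachable
server, so that step is left commented out:]

```shell
# Write the script: ECHO queries echoes each statement to stdout,
# while \o redirects query results to output.txt.
cat > f.sql <<'EOF'
\set ECHO queries
\o output.txt
select 1;
select 2;
EOF

# Then (assuming psql is on PATH and a database is reachable):
#   psql -f f.sql > queries.txt
# queries.txt gets the statements; output.txt gets the results.
```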

> 
> 
> Best regards,
-- 
mvg,
Wim 
--
https://coronalert.be

--
If you laid all of our laws end to end, there would be no end.
-- Mark Twain





Re: Alter the column data type of the large data volume table.

2020-12-03 Thread Michael Lewis
On Wed, Dec 2, 2020 at 11:53 PM charles meng  wrote:

> Hi all,
>
> I have a table with 1.6 billion records. The data type of the primary key
> column is incorrectly used as integer. I need to replace the type of the
> column with bigint. Is there any ideas for this?
>
> Solutions that have been tried:
> Adding temporary columns was too time-consuming, so I gave up.
> Using a temporary table, there is no good way to migrate the original
> table data to the temporary table
>
> Thanks in advance.
>

You can add a new column with NO default value (and null as the default)
and have it be very fast. Then you can gradually update rows in batches (if
on PG11+, perhaps use a DO script with a loop that commits after every X
rows) to set the new column equal to the primary key. Lastly, in one
transaction, update any remaining rows where the bigint column is null,
change which column is the primary key, and drop the old one. This should
keep each transaction reasonably sized so it does not hold up other
processes.
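[Editor's note: a sketch of that batch loop. Table and column names here
are invented for illustration, the batch size would need tuning, and the
final switch-over is left as comments since the details (NOT NULL, foreign
keys, etc.) depend on the schema:]

```sql
-- Assumed schema: big_table(id integer PRIMARY KEY, ...).
ALTER TABLE big_table ADD COLUMN id_big bigint;  -- no default: fast, no rewrite

DO $$
DECLARE
    rows_done bigint;
BEGIN
    LOOP
        UPDATE big_table
           SET id_big = id
         WHERE id IN (SELECT id
                        FROM big_table
                       WHERE id_big IS NULL
                       LIMIT 10000);
        GET DIAGNOSTICS rows_done = ROW_COUNT;
        EXIT WHEN rows_done = 0;
        COMMIT;  -- PG11+: DO blocks at top level may commit per batch
    END LOOP;
END $$;

-- Final switch-over, in one transaction (sketch only):
-- BEGIN;
-- UPDATE big_table SET id_big = id WHERE id_big IS NULL;
-- ALTER TABLE big_table ALTER COLUMN id_big SET NOT NULL;
-- ALTER TABLE big_table DROP CONSTRAINT big_table_pkey;
-- ALTER TABLE big_table ADD PRIMARY KEY (id_big);
-- ALTER TABLE big_table DROP COLUMN id;
-- COMMIT;
```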


Re: Alter the column data type of the large data volume table.

2020-12-03 Thread Rich Shepard

On Thu, 3 Dec 2020, Michael Lewis wrote:


> [...]


Tell me, please, why

ALTER TABLE  ALTER COLUMN  SET DATA TYPE BIGINT

will not do the job?

I've found some varchar columns in a couple of tables too small and used the
above to increase their size. Worked perfectly.

Regards,

Rich




Re: Alter the column data type of the large data volume table.

2020-12-03 Thread Michael Lewis
On Thu, Dec 3, 2020 at 10:18 AM Rich Shepard 
wrote:

> [...]
>
> Tell me, please, why
>
> ALTER TABLE  ALTER COLUMN  SET DATA TYPE BIGINT
>
> will not do the job?
>
> I've found some varchar columns in a couple of tables too small and used
> the
> above to increase their size. Worked perfectly.
>
> Regards,
>
> Rich
>

AFAIK, it will require an ACCESS EXCLUSIVE lock for the entire time it
takes to rewrite the 1.6 billion rows and update all indexes. That sort of
lockout time doesn't seem workable in many production systems.


Re: Alter the column data type of the large data volume table.

2020-12-03 Thread Rich Shepard

On Thu, 3 Dec 2020, Michael Lewis wrote:


> Afaik, it will require an access exclusive lock for the entire time it
> takes to re-write the 1.6 billion rows and update all indexes. That sort
> of lock out time doesn't seem workable in many production systems.


Michael,

Okay. I hadn't thought of that.

Stay well,

Rich




Re: Alter the column data type of the large data volume table.

2020-12-03 Thread Ron

On 12/3/20 11:26 AM, Michael Lewis wrote:
> Afaik, it will require an access exclusive lock for the entire time it
> takes to re-write the 1.6 billion rows and update all indexes. That sort
> of lock out time doesn't seem workable in many production systems.


Yet another argument for partitioning!

1. Split all the partitions from the main table,
2. drop the PK,
3. do all the ALTER statements in parallel,
4. recreate the PK indices, then
5. join them back to the main table.

Not instant, but faster than updating 1.6 billion rows in one single giant
statement.

(Of course, that doesn't help the OP with his current problem.)

--
Angular momentum makes the world go 'round.


Re: Alter the column data type of the large data volume table.

2020-12-03 Thread Michael Lewis
On Thu, Dec 3, 2020 at 10:18 AM Rich Shepard 
wrote:

> Tell me, please, why
>
> ALTER TABLE  ALTER COLUMN  SET DATA TYPE BIGINT
>
> will not do the job?
>
> I've found some varchar columns in a couple of tables too small and used
> the
> above to increase their size. Worked perfectly.
>


Something else noteworthy is that with varchar there is no rewrite of the
table: you are just removing or loosening the length restriction on a
variable-width column type. I believe you could change all columns from
VARCHAR(n) to TEXT or VARCHAR(n+X) and have it take almost no time at all,
since you are only touching the catalog tables (pretty sure about that, at
least). With a fixed-width column change like int4 to int8, all the rows
actually need to be rewritten.


Re: Alter the column data type of the large data volume table.

2020-12-03 Thread Ron

On 12/3/20 11:53 AM, Michael Lewis wrote:



> Something else noteworthy is that with varchar, there is no rewrite of
> the table. [...] With a fixed width column like int4 to int8, all the
> rows need to be actually re-written.


And in this case it's the PK, so it is indexed and thus even slower. Lots
slower.

--
Angular momentum makes the world go 'round.


Re: postgres-10 with FIPS

2020-12-03 Thread Michael Paquier
On Thu, Dec 03, 2020 at 05:57:04PM +0530, Aravindhan Krishnan wrote:
> Since postgres is linked against openssl we wanted to make sure we build
> postgres against the FIPS compliant openssl libraries. Does postgres
> provide a FIPS debian package that can be used. If not it would be of great
> help to help with the instructions to build the debian of postgres linked
> against the FIPS compliant openssl libraries.

There is no need for Postgres to do anything specific with FIPS at
runtime, as long as the OS takes care of enabling FIPS and OpenSSL is able
to recognize that.  So normally you could just use a version of Postgres
compiled against OpenSSL 1.0.2 and replace the OpenSSL libraries with a
version compiled with FIPS enabled, as the OpenSSL APIs used by Postgres
are exactly the same in the non-FIPS and FIPS cases.
--
Michael




Re: Postgres C-API: How to get the Oid for a custom type defined in a schema outside of the current search path

2020-12-03 Thread Michael Paquier
On Thu, Dec 03, 2020 at 01:45:05PM +0100, Pavel Stehule wrote:
> When you write C extensions for Postgres, then PostgreSQL source code is
> the best source of inspiration.

One common source of inspiration for such cases is regproc.c.  For a
type, you can for example look at what to_regtype() uses for a
conversion from a name string to an OID, aka parseTypeString().
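[Editor's note: the same machinery is also exposed at the SQL level, which
can be handy for cross-checking a C implementation. The schema and type
names below are placeholders:]

```sql
-- Returns the type's OID, or NULL if the type does not exist:
SELECT to_regtype('myschema.mytype')::oid;
```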
--
Michael




Re: Set COLLATE on a session level

2020-12-03 Thread Dirk Mika
> > > Or views in schemas per locale. Selecting the search path
> > > per locale pulls in the right view.
> >
> > And one view per locale would mean that I would have to
> > create a whole schema including all views for each locale I
> > want to support. I would have to roll out a new version of
> > the data model, just for an additional locale.
>
> Or a "smart" view. Set a session variable before running the
> query and have the (one) view return the locale'd data based
> on the session variable ...
>
>set session "mika.current_locale" = 'locale@2_use';
>
> and use
>
>select current_setting('mika.current_locale')
>
> as needed inside the view definition

I'll take a look at that. Thank you very much for the idea.

>
> > This all seems unnecessarily complicated to me.
>
> No one said it is going to be particularly convenient...  You
> asked for possible, I guess. :-)

My sentence was more along the lines that I cannot imagine I am the first
to have this problem, and that there should therefore be an easier
solution. But apparently the topic is not so relevant in general. 😉

BR
Dirk
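[Editor's note: the "smart view" idea discussed above might be sketched as
follows. The table and column names are invented; the second argument to
current_setting() makes it return NULL instead of raising an error when
the variable is unset:]

```sql
-- Hypothetical schema: messages(id, ...) and
-- message_translations(message_id, locale, text).
CREATE VIEW localized_messages AS
SELECT m.id, t.text
  FROM messages m
  JOIN message_translations t
    ON t.message_id = m.id
   AND t.locale = current_setting('mika.current_locale', true);

-- Usage: pick the locale per session, then query the one view.
SET SESSION "mika.current_locale" = 'de_DE';
SELECT * FROM localized_messages;
```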


--
Dirk Mika
Software Developer

mika:timing GmbH
Strundepark - Kürtener Str. 11b
51465 Bergisch Gladbach
Germany

fon +49 2202 2401-1197
dirk.m...@mikatiming.de
www.mikatiming.de

AG Köln HRB 47509 * WEEE-Reg.-Nr. DE 90029884
Geschäftsführer: Harald Mika, Jörg Mika