Partitioning an existing table - pg10.6

2019-07-03 Thread Ayub M
Hello, I am using Postgres 10.6 and have a huge table with ~30m records which
needs to be partitioned. What's the best strategy for doing this with minimal
downtime?

1. Rename the existing table, create a new partitioned table with the original
name, and keep moving data from the old (renamed) table into the new one. This
will incur downtime for the apps.
2. Create the partitioned table under a new name and move the data over from the
current table (oldest to newest); when moving the current month's partition,
shut down the app and move the last set of data. This avoids most of the
downtime, but any rows that change after they have already been copied over
might be lost.

Any other approaches/alternatives?
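
For illustration, option 1 might look roughly like this (a minimal sketch only;
the table and column names are invented, and the partition key is assumed to be
a timestamp column):

    -- Hypothetical sketch of option 1 (all names are made up for illustration).
    BEGIN;
    ALTER TABLE events RENAME TO events_old;

    CREATE TABLE events (
        id         bigint      NOT NULL,
        created_at timestamptz NOT NULL,
        payload    text
    ) PARTITION BY RANGE (created_at);

    CREATE TABLE events_2019_06 PARTITION OF events
        FOR VALUES FROM ('2019-06-01') TO ('2019-07-01');
    CREATE TABLE events_2019_07 PARTITION OF events
        FOR VALUES FROM ('2019-07-01') TO ('2019-08-01');
    COMMIT;

    -- Backfill while the apps are stopped (or in batches, accepting the
    -- consistency caveats described in option 2):
    INSERT INTO events SELECT * FROM events_old;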

-- 
Regards,
Ayub


Re: multiple nodes in FDW create server statement

2019-07-03 Thread Laurenz Albe
Vijaykumar Jain wrote:
> We are glad that we have this feature that allows us to load-balance reads.
> That has helped us a lot.
> https://paquier.xyz/postgresql-2/postgres-10-multi-host-connstr/
> 
> I would like to know if it is possible to request a similar enhancement for
> FDWs too?
> https://www.postgresql.org/docs/11/sql-createserver.html
> 
> Unless I am missing something obvious, we want to use this option when one
> of the read instances is down while there is an FDW query for reads.
> The second instance is only a fallback, not round robin, and we have
> two foreign servers, one for writes and one for reads.

The documentation of postgres_fdw says:

  A foreign server using the postgres_fdw foreign data wrapper can have the same
  options that libpq accepts in connection strings, as described in Section 34.1.2,
  except that these options are not allowed:

user and password (specify these in a user mapping, instead)

client_encoding (this is automatically set from the local server encoding)

fallback_application_name (always set to postgres_fdw)

So there is nothing that keeps you from using multiple host names or ports,
and you can also use "target_session_attrs".
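
For example, something along these lines should be accepted (a minimal sketch;
the host names, port list, database, and credentials are placeholders):

    -- Hypothetical foreign server with a primary read host and a fallback host.
    CREATE SERVER readers
        FOREIGN DATA WRAPPER postgres_fdw
        OPTIONS (host 'replica1.example.com,replica2.example.com',
                 port '5432,5432',
                 dbname 'appdb',
                 target_session_attrs 'any');

    CREATE USER MAPPING FOR CURRENT_USER
        SERVER readers
        OPTIONS (user 'app_reader', password 'secret');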

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com





Re: [External] Re: multiple nodes in FDW create server statement

2019-07-03 Thread Vijaykumar Jain
awesomeness.
Thanks Laurenz.

Regards,
Vijay


On Wed, Jul 3, 2019 at 12:48 PM Laurenz Albe 
wrote:

> Vijaykumar Jain wrote:
> > We are glad that we have this feature that allows us to load balance
> reads.
> > that has helped us a lot.
> > https://paquier.xyz/postgresql-2/postgres-10-multi-host-connstr/
> >
> > I would like to know if it is possible to request a similar enhancement
> to FDWs too?
> > https://www.postgresql.org/docs/11/sql-createserver.html
> >
> > unless i am missing something obvious, we wanted to use this option when
> one
> > of the read instances are down when there is a FDW query for reads.
> > the second instance is only a fallback, not round robin. and we have
> > 2 foreign servers, one for write and one for reads.
>
> The documentation of postgres_fdw says:
>
>   A foreign server using the postgres_fdw foreign data wrapper can have
> the same
>   options that libpq accepts in connection strings, as described in
> Section 34.1.2,
>   except that these options are not allowed:
>
> user and password (specify these in a user mapping, instead)
>
> client_encoding (this is automatically set from the local server
> encoding)
>
> fallback_application_name (always set to postgres_fdw)
>
> So there is nothing that keeps you from using multiple host names or ports,
> and you can also use "target_session_attrs".
>
> Yours,
> Laurenz Albe
> --
> Cybertec | https://www.cybertec-postgresql.com
>
>


refresh materialized view concurrently alternatives

2019-07-03 Thread Zsolt Ero
Hi,

I'm using refresh materialized view concurrently at the moment. I have
a few problems with it:
1. It requires adding a unique index, even if it's never actually
used (see the sketch after this list). This just wastes space and hurts cache
utilization.

2. It locks the table so that two refresh commands cannot be run at
the same time.

3. It's slower than without concurrently.
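
(Regarding point 1, the unique index is what REFRESH MATERIALIZED VIEW
CONCURRENTLY uses to match old and new rows; a minimal sketch, with a made-up
key column:)

    -- CONCURRENTLY refuses to run unless the materialized view has at least
    -- one unique index on plain columns (no expressions, no WHERE clause).
    CREATE UNIQUE INDEX my_mat_view_id_idx ON my_mat_view (id);
    REFRESH MATERIALIZED VIEW CONCURRENTLY my_mat_view;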

My idea is the following approach:

DROP MATERIALIZED VIEW IF EXISTS tmp.my_mat_view;

CREATE MATERIALIZED VIEW tmp.my_mat_view AS
SELECT ...

BEGIN;
DROP MATERIALIZED VIEW IF EXISTS my_mat_view;
ALTER MATERIALIZED VIEW tmp.my_mat_view SET SCHEMA public;
COMMIT;

Would this approach work? From my testing this approach doesn't result
in any kind of locking, it's very fast, and it avoids the unique-index
requirement.

Are there any problems with this? In what situations would refresh mat
view or refresh mat view concurrently have advantages over this?

Probably it's important to note how my DB works: inserts are pretty
much 100% controlled. They happen once per hour, after which all views
are refreshed.

Zsolt




Re: Partitioning an existing table - pg10.6

2019-07-03 Thread Achilleas Mantzios

On 3/7/19 10:01 a.m., Ayub M wrote:

Hello, I am using Postgres 10.6 and have a huge table with ~30m records which
needs to be partitioned. What's the best strategy for doing this with minimal downtime?

1. Rename the existing table, create a new partitioned table with the original name,
and keep moving data from the old (renamed) table into the new one. This will incur
downtime for the apps.
2. Create the partitioned table under a new name and move the data over from the current table (oldest to newest); when moving the current month's partition, shut down the app and move the last set of data.
This avoids most of the downtime, but any rows that change after they have already been copied over might be lost.


Regarding 2) you could set up a trigger on the original table which would also
replicate INSERTs, DELETEs and UPDATEs to the new table.
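
A minimal sketch of such a trigger, assuming the new partitioned table is called
my_table_part and there is a primary key column id (all names invented for
illustration):

    CREATE OR REPLACE FUNCTION mirror_to_part() RETURNS trigger
    LANGUAGE plpgsql AS $$
    BEGIN
        IF TG_OP = 'INSERT' THEN
            INSERT INTO my_table_part SELECT (NEW).*;
        ELSIF TG_OP = 'UPDATE' THEN
            DELETE FROM my_table_part WHERE id = OLD.id;
            INSERT INTO my_table_part SELECT (NEW).*;
        ELSE  -- DELETE
            DELETE FROM my_table_part WHERE id = OLD.id;
        END IF;
        RETURN NULL;  -- AFTER row trigger: the return value is ignored
    END;
    $$;

    -- PostgreSQL 10 still spells it EXECUTE PROCEDURE.
    CREATE TRIGGER mirror_changes
        AFTER INSERT OR UPDATE OR DELETE ON my_table
        FOR EACH ROW EXECUTE PROCEDURE mirror_to_part();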



Any other approaches/alternatives?

--
Regards,
Ayub



--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



Re: Partitioning an existing table - pg10.6

2019-07-03 Thread legrand legrand
Hello,

I didn't test it myself, but maybe using logical replication could help ...
See https://www.postgresql.org/docs/10/logical-replication.html

Operations:
- create the partitioned table
- feed it using replication (see the sketch below)
- when in sync: stop the app, stop replication, rename the tables
- maybe you can replicate from the partitioned table back to the non-partitioned one as a rollback plan
- ...
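
A rough sketch of the commands involved (object names and the connection string
are invented); note, as one more thing to test, that in PostgreSQL 10 the
subscription target must be a regular table, not a partition root:

    -- On the publishing (source) side:
    CREATE PUBLICATION bigtable_pub FOR TABLE public.bigtable;

    -- On the subscribing (destination) side, after creating a compatible
    -- target table with the same schema-qualified name:
    CREATE SUBSCRIPTION bigtable_sub
        CONNECTION 'host=source-host dbname=appdb user=replicator'
        PUBLICATION bigtable_pub;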

To be tested carefully.
Regards
PAscal




--
Sent from: http://www.postgresql-archive.org/PostgreSQL-general-f1843780.html




Too short field

2019-07-03 Thread Karl Martin Skoldebrand
Hi,

We solved the problem from yesterday where I was looking at sequences. It
eventually turned out that the sequence was irrelevant (at least in the
PostgreSQL sense) to the problem.
Now, we have a bug in another application that prevents an automatic tool from
entering certain users in the database. The organisational field is varchar(60)
while the actual Organisation "abbreviation" may be as long as 70 characters
(don't ask why).
What happens to the data if I simply redefine the table field as varchar(80)
(or something, at least 70+)? Does "everything" break on the database side, or
can I just go on running the app as is?
Do we need to restart databases or do something else that requires an
interruption of service?

Best regards,
Martin S






Re: Too short field

2019-07-03 Thread Thomas Kellerer
Karl Martin Skoldebrand schrieb am 03.07.2019 um 13:30:
> Now, we have a bug in another application that prevents an automatic
> tool to enter certain users in the database. The organisational field
> is varchar(60) while the actual Organisation “abbreviation” may be as
> long as 70 characters (don’t ask why).
> 
> What happens to data if I simple redefine the table field as
> varchar(80) (or something, at least 70+). Does “everything” break
> database side or can I just go on running the app as is.

Nothing will break on the database side. 
 
> Do we need to restart databases or something else that requires an
> interrupted service?

No, you just run 

   ALTER TABLE the_table ALTER COLUMN organisational TYPE varchar(80);

It requires an exclusive lock on the table, but the actual operation
will finish in a few milliseconds, because the table is not physically
changed (the limit is only increased).

Getting the exclusive lock might take a while if the table is actively
used, so the ALTER statement itself might look as if it is taking a while,
but once the lock has been obtained the change is very fast.

There is no real need to stop anything if you can live with the few
(milli)seconds during which the ALTER statement obtains (and releases)
the lock.
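
If the table is very busy, one option (a sketch, reusing the placeholder names
from above) is to bound the wait and retry:

    -- Fail fast instead of queueing indefinitely behind long transactions;
    -- simply retry later if the lock could not be obtained in time.
    SET lock_timeout = '5s';
    ALTER TABLE the_table ALTER COLUMN organisational TYPE varchar(80);
    RESET lock_timeout;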







RE: Too short field

2019-07-03 Thread Karl Martin Skoldebrand
Thanks.
That is what I thought - good to have a confirmation.

Best regards,

Martin S 







Re: Too short field

2019-07-03 Thread Karsten Hilbert
On Wed, Jul 03, 2019 at 01:56:03PM +0200, Thomas Kellerer wrote:

> Karl Martin Skoldebrand schrieb am 03.07.2019 um 13:30:
> > Now, we have a bug in another application that prevents an automatic
> > tool to enter certain users in the database. The organisational field
> > is varchar(60) while the actual Organisation “abbreviation” may be as
> > long as 70 characters (don’t ask why).
> >
> > What happens to data if I simple redefine the table field as
> > varchar(80) (or something, at least 70+). Does “everything” break
> > database side or can I just go on running the app as is.
>
> Nothing will break on the database size.

We don't know. There may be functions in triggers, or functions meant
for direct use, which rely on the assumption that the value is at most
60 characters.

PostgreSQL itself won't care, that much holds.
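
A crude way to look for such code (only a sketch; it merely greps function
bodies for the column name):

    -- Find functions whose source text mentions the column in question.
    SELECT p.proname
    FROM   pg_proc p
    WHERE  p.prosrc ILIKE '%organisational%';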

Karsten
--
GPG  40BE 5B0E C98E 1713 AFA6  5BC0 3BEA AC80 7D4F C89B




Re: Too short field

2019-07-03 Thread Karsten Hilbert
On Wed, Jul 03, 2019 at 02:10:55PM +0200, Karsten Hilbert wrote:

> > Karl Martin Skoldebrand schrieb am 03.07.2019 um 13:30:
> > > Now, we have a bug in another application that prevents an automatic
> > > tool to enter certain users in the database. The organisational field
> > > is varchar(60) while the actual Organisation “abbreviation” may be as
> > > long as 70 characters (don’t ask why).
> > >
> > > What happens to data if I simple redefine the table field as
> > > varchar(80) (or something, at least 70+). Does “everything” break
> > > database side or can I just go on running the app as is.
> >
> > Nothing will break on the database size.
>
> We don't know.

Sorry, I didn't read properly:

>>> What happens to >>data<< if I simple redefine the table field

Nothing should happen to the data.

But:

> There may be functions in triggers or for direct use which
> rely on the assumption that it be 60 chars max.

:)

Karsten
--
GPG  40BE 5B0E C98E 1713 AFA6  5BC0 3BEA AC80 7D4F C89B




RE: Too short field

2019-07-03 Thread Karl Martin Skoldebrand
Valid point. 
I've added that to the report.

Best regards,

Martin Skjoldebrand 







Error updating column of type text as boolean type

2019-07-03 Thread Marllius
Hi guys, I was looking for data type bugs in PostgreSQL 10.6 but did not find
anything.

I am trying to update a column of type text, but I get an error about the
boolean type. In other tables the behavior of text columns is normal.

Has anyone had this problem before?

accounting@accounting=> \dS+ stock_asset_document
                                                  Table "public.stock_asset_document"
     Column     |           Type           | Collation | Nullable |                    Default                    | Storage  | Stats target | Description
----------------+--------------------------+-----------+----------+-----------------------------------------------+----------+--------------+-------------
 id             | bigint                   |           | not null | nextval('seq_stock_asset_document'::regclass) | plain    |              |
 tenant_id      | bigint                   |           | not null |                                               | plain    |              |
 registry_id    | uuid                     |           | not null |                                               | plain    |              |
 revision_id    | uuid                     |           | not null |                                               | plain    |              |
 negotiation_id | bigint                   |           | not null |                                               | plain    |              |
 competence_at  | date                     |           | not null |                                               | plain    |              |
 is_deleted     | boolean                  |           | not null |                                               | plain    |              |
 created_at     | timestamp with time zone |           | not null | now()                                         | plain    |              |
 updated_at     | timestamp with time zone |           |          |                                               | plain    |              |
 number         | bigint                   |           |          |                                               | plain    |              |
 serial         | text                     |           |          |                                               | extended |              |
Indexes:
"pk_stock_asset_document" PRIMARY KEY, btree (id)
"uk_stock_asset_document_registry_revision" UNIQUE CONSTRAINT, btree
(registry_id, revision_id)
"ix_stock_asset_document_tenant_deleted" btree (tenant_id, is_deleted)
"ix_stock_asset_document_tenant_registry_revision_deleted" btree
(tenant_id, registry_id, revision_id, is_deleted)
Referenced by:
TABLE "stock_asset" CONSTRAINT
"fk_stock_asset_entry_stock_asset_document" FOREIGN KEY (entry_document_id)
REFERENCES stock_asset_document(id) ON UPDATE RESTRICT ON DELETE RESTRICT
TABLE "stock_asset" CONSTRAINT
"fk_stock_asset_output_stock_asset_document" FOREIGN KEY
(output_document_id) REFERENCES stock_asset_document(id) ON UPDATE RESTRICT
ON DELETE RESTRICT

accounting@accounting=> UPDATE stock_asset_document SET serial = '3' AND
number = 36245 WHERE negotiation_id = 15948333;
ERROR:  22P02: invalid input syntax for type boolean: "3"
LINE 1: UPDATE stock_asset_document SET serial = '3' AND number = 36...
 ^
LOCATION:  boolin, bool.c:154
Time: 16.427 ms
accounting@accounting=> SELECT version();
                                                 version
-----------------------------------------------------------------------------------------------------------
 PostgreSQL 10.6 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-11), 64-bit
(1 row)

Time: 15.989 ms

Best regards,

Márllius de Carvalho Ribeiro
[image: EDB Certified Associate - PostgreSQL 10]



Re: Error updating column of type text as boolean type

2019-07-03 Thread John McKown
On Wed, Jul 3, 2019 at 10:09 AM Marllius  wrote:

> Hi guys, I was looking datatype bugs for postgresql 10.6 but i not found
> anything.
>
> I am trying to update the column of type text column but i get an error
> for boolean type. In other tables the behavior of the text type column is
> normal.
>
> Has anyone had this problem before?
>
> accounting@accounting=> UPDATE stock_asset_document SET serial = '3' AND
> number = 36245 WHERE negotiation_id = 15948333;
>

Don't use AND. Use a comma:

UPDATE stock_asset_document SET serial = '3', number = 36245 WHERE
negotiation_id = 15948333;

ref: https://www.postgresql.org/docs/11/sql-update.html




> ERROR:  22P02: invalid input syntax for type boolean: "3"
> LINE 1: UPDATE stock_asset_document SET serial = '3' AND number = 36...
>  ^
> LOCATION:  boolin, bool.c:154
> Time: 16.427 ms
> accounting@accounting=> SELECT version();
>  version
>
>
> -
>  PostgreSQL 10.6 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5
> 20150623 (Red Hat 4.8.5-11), 64-bit
> (1 row)
>
> Time: 15.989 ms
>
> Atenciosamente,
>
> Márllius de Carvalho Ribeiro
> [image: EDB Certified Associate - PostgreSQL 10]
> 
>
>

-- 
Money is the root of all evil.
Evil is the root of all money.
With that in mind, money is made by the government ...


Maranatha! <><
John McKown


Re: Error updating column of type text as boolean type

2019-07-03 Thread Marllius
Thank you, McKown.

On Wed, 3 Jul 2019 at 12:15, John McKown wrote:

> Don't use AND. Use a comma:
>
> UPDATE stock_asset_document SET serial = '3', number = 36245 WHERE
> negotiation_id = 15948333;
>
> ref: https://www.postgresql.org/docs/11/sql-update.html


Re: postgres 11 issue?

2019-07-03 Thread Steve Rogerson

> It seems a bug to me. Can you share an anonymized/simplified definition
> of that table that reproduces the problem?
>
OK, see the attached SQL setup. I guess you need to do a createdb first.

So:

testdb=# \i db.sql
DROP TRIGGER
DROP TABLE
...

testdb=# select * from user_passwords ;
 name | timestamp | password
--+---+--
(0 rows)

testdb=# insert into users (name, password) values  ('fred', 'sdfsdf');
INSERT 0 1

testdb=# select * from user_passwords ;
 name |  timestamp  | password
--+-+--
 fred | ("2019-07-03 16:37:07.124207",01:00:00,BST) | sdfsdf
(1 row)


testdb=# \q
steve@work-hp pg_prob$ psql testdb
psql (11.3)
Type "help" for help.

testdb=# select * from user_passwords ;
ERROR:  record type has not been registered

Same thing ("record type has not been registered") in a different session that
previously showed user_passwords to be empty in the "normal" way, before the
insert.


It's taken me a while to get this to go wrong, and I don't think that
all the complexity is needed, but this at least is consistent.


Steve






db.sql
Description: db.sql


Re: postgres 11 issue?

2019-07-03 Thread Adrian Klaver

On 7/3/19 9:42 AM, Steve Rogerson wrote:



It seems a bug to me. Can you share an anonymized/simplified definition
of that table that reproduces the problem?


OK, see the attached SQL setup. I guess you need to do a createdb first.

So:

testdb=# \i db.sql
DROP TRIGGER
DROP TABLE
...

testdb=# select * from user_passwords ;
  name | timestamp | password
--+---+--
(0 rows)

testdb=# insert into users (name, password) values  ('fred', 'sdfsdf');
INSERT 0 1

testdb=# select * from user_passwords ;
  name |  timestamp  | password
--+-+--
  fred | ("2019-07-03 16:37:07.124207",01:00:00,BST) | sdfsdf
(1 row)


testdb=# \q
steve@work-hp pg_prob$ psql testdb
psql (11.3)
Type "help" for help.

testdb=# select * from user_passwords ;
ERROR:  record type has not been registered

Same thing ("record type has not been registered") in a different session that
previously showed user_passwords to be empty in the "normal" way, before the
insert.


It's taken me a while to get this to go wrong, and I don't think that
all the complexity is needed, but this at least is consistent.


Hmm:

test=> select * from user_passwords ;
ERROR:  record type has not been registered

test=> select my_from_local(now());
my_from_local
--
 ("2019-07-03 19:29:34.587891",-07:00:00,PDT)
(1 row)

test=> select * from user_passwords ;
 name |  timestamp   | password
--+--+--
 fred | ("2019-07-03 19:28:07.082895",-07:00:00,PDT) | sdfsdf
(1 row)

Now to figure out why?




Steve







--
Adrian Klaver
adrian.kla...@aklaver.com




Re: postgres 11 issue?

2019-07-03 Thread Tom Lane
Steve Rogerson  writes:
>> It seems a bug to me. Can you share an anonymized/simplified definition
>> of that table that reproduces the problem?

> Ok See attached sql set up. I guess you need to to a createdb first.

It looks like what's happening is that the result of my_from_local()
is being stored into the table as an anonymous record value rather
than a value of type my_timestamp.  (The originating session can figure
out what the anonymous record type is, but no other session can.)
That should be fixed, but in the meantime you could dodge the problem by
declaring my_from_local()'s v_result variable as my_timestamp not record.
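
A hypothetical reconstruction of what that change could look like (the real
definitions live in the attached db.sql, which is not shown here; the composite
type's fields are guessed from the output above):

    -- Guessed composite type, based on the value shown in the psql output.
    CREATE TYPE my_timestamp AS (ts timestamp, utc_offset interval, tz_abbrev text);

    CREATE OR REPLACE FUNCTION my_from_local(t timestamptz)
    RETURNS my_timestamp
    LANGUAGE plpgsql AS $$
    DECLARE
        v_result my_timestamp;  -- was "record"; that is what produced the
                                -- anonymous record value other sessions
                                -- could not decode
    BEGIN
        SELECT t::timestamp,
               make_interval(secs => extract(timezone FROM t)),
               to_char(t, 'TZ')
          INTO v_result;
        RETURN v_result;
    END;
    $$;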

regards, tom lane




Allocating shared memory in Postgres

2019-07-03 Thread Souvik Bhattacherjee
Hi,

I need to allocate shared memory in Postgres 11.0 outside the
initialization phase. In order to achieve this I have done the following:

- increased the amount of shared memory by increasing the value of size in
CreateSharedMemoryAndSemaphores(int port) in ipci.c. I have made sure that
the amount of memory I need, say m, satisfies m << M, where M is the amount
of additional shared memory that I have allocated.

- during a particular query, where I need to allocate shared memory (which
is a function of the sizes of the tables in the query), I invoke
ShmemInitStruct() in shmem.c

This seems to work in my case, although I haven't tested it extensively.

My concern here is that when I go through the NOTES in shmem.c, I find the
following line:

Fixed-size structures contain things like global variables for a module and
should never be allocated after the shared memory initialization phase.

I'm allocating a shared array data structure through ShmemInitStruct and
I'm not sure if the lines above apply to my case, since I'm doing the
allocation during a query.

Any help/clarifications in this regard would be appreciated.

Best,
-SB


Re: Allocating shared memory in Postgres

2019-07-03 Thread Tom Lane
Souvik Bhattacherjee  writes:
> I need to allocate shared memory in Postgres 11.0 outside the
> initialization phase. In order to achieve this I have done the following:
> - during a particular query, where I need to allocate shared memory (which
> is a function of the sizes of the tables in the query), I invoke
> ShmemInitStruct() in shmem.c

This seems like a pretty horrid idea.  For starters, what happens if two
backends do this concurrently?  Even with only one backend, if you do a
query that requires X space, and then you do another query that requires
X+1 space, what's going to happen?

Recent PG releases have a "DSM" mechanism for short-lived (query lifespan,
typically) shared memory that's separate from the core shmem pool.  That
might suit your needs better.   The system design is really not friendly
to demanding more core shmem after postmaster start.

regards, tom lane




Re: postgres 11 issue?

2019-07-03 Thread Tom Lane
I wrote:
> It looks like what's happening is that the result of my_from_local()
> is being stored into the table as an anonymous record value rather
> than a value of type my_timestamp.  (The originating session can figure
> out what the anonymous record type is, but no other session can.)

I pushed a fix for this:

https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=24c5c711f497c995ad7b560aedd41b4d0f0f77f4

Thanks for the report!

regards, tom lane




Re: Too short field

2019-07-03 Thread Gavin Flower

On 03/07/2019 23:30, Karl Martin Skoldebrand wrote:


Hi,

We solved the problem from yesterday where I was looking at sequences.
It eventually turned out that the sequence was irrelevant (at least in
the PostgreSQL sense) to the problem.


Now, we have a bug in another application that prevents an automatic
tool from entering certain users in the database. The organisational field
is varchar(60) while the actual Organisation “abbreviation” may be as
long as 70 characters (don’t ask why).


What happens to the data if I simply redefine the table field as
varchar(80) (or something, at least 70+)? Does “everything” break
on the database side, or can I just go on running the app as is?


Do we need to restart databases or do something else that requires an
interruption of service?


Best regards,

Martin S







Is there any reason to put a limit on the number of characters in the
field in the database?


If not, consider using the 'text' type.


Cheers,
Gavin





Re: Allocating shared memory in Postgres

2019-07-03 Thread Souvik Bhattacherjee
> For starters, what happens if two backends do this concurrently?

I'm assuming here that a single backend process handles all queries from
a given client. In the case of parallel queries, the master process will be
responsible for allocating the shared memory, not the workers. Please
let me know if this is not what you meant by two backends.

> Even with only one backend, if you do a query that requires X space, and
> then you do another query that requires X+1 space, what's going to happen?

In my application, every query that allocates shared memory is wrapped in
a separate txn, and the life of the shared memory is the duration of
the query. The only purpose of allocating shared memory here is to make the
memory segment visible to the worker processes for that particular query.
No other txn/query actually accesses this shared memory. Also, when I
allocate shared memory, the txn id is used as a key into the ShmemIndex to
differentiate between two concurrent shared memory allocation requests.

> Recent PG releases have a "DSM" mechanism for short-lived (query lifespan,
> typically) shared memory that's separate from the core shmem pool.  That
> might suit your needs better.  The system design is really not friendly
> to demanding more core shmem after postmaster start.

Yes, I understand that the "DSM" mechanism exists. But I wanted to know
whether the approach I had outlined will work, even though it has certain
drawbacks, such as
1. overestimating the initial shared memory size that needs to be allocated,
2. not being able to free the shared memory after use.
Best,
-SB

On Wed, Jul 3, 2019 at 4:17 PM Tom Lane  wrote:

> Souvik Bhattacherjee  writes:
> > I need to allocate shared memory in Postgres 11.0 outside the
> > initialization phase. In order to achieve this I have done the following:
> > - during a particular query, where I need to allocate shared memory
> (which
> > is a function of the sizes of the tables in the query), I invoke
> > ShmemInitStruct() in shmem.c
>
> This seems like a pretty horrid idea.  For starters, what happens if two
> backends do this concurrently?  Even with only one backend, if you do a
> query that requires X space, and then you do another query that requires
> X+1 space, what's going to happen?
>
> Recent PG releases have a "DSM" mechanism for short-lived (query lifespan,
> typically) shared memory that's separate from the core shmem pool.  That
> might suit your needs better.   The system design is really not friendly
> to demanding more core shmem after postmaster start.
>
> regards, tom lane
>