Re: pg_restore remap schema

2022-08-08 Thread Marcos Pegoraro
>
> People have asked for such a thing before, but it'd be quite difficult
> to do reliably --- particularly inside function bodies, which aren't
> normally parsed at all during a dump/restore.  If you're willing to
> accept a 95% solution, running the pg_restore output through "sed"
> would likely work fairly well.  I'd personally want to diff the
> before-and-after scripts before applying though :-)
>

Another 95% solution, if both schemas are on the same server: search the web
for "clone schema" functions; you'll find several. They create tables,
functions, sequences, views, ...


toast useless

2022-09-13 Thread Marcos Pegoraro
create table test(x integer, y integer, w varchar(50), z varchar(80));
When a table has well-defined sizes all its data is stored on the heap, so
it doesn't need to be toasted, and no toast table is created. Fine.

But then I want to store the last modification of each record, so I do ...
alter table test add audit_last_record jsonb;
And a trigger to store old.* in that field.

So I'm sure that jsonb will fit on the heap too, because its size is just the
other fields converted to jsonb, never bigger than that.

But as soon as I've created that field, a toast table is created too. Even
if I set storage MAIN on that field, reltoastrelid in pg_class is still set.

alter table test alter audit_last_record SET STORAGE MAIN
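
A quick way to confirm the toast relation is still attached (a minimal sketch):

select relname, reltoastrelid::regclass as toast_table
from pg_class
where relname = 'test';
-- reltoastrelid still points at a pg_toast relation even with STORAGE MAIN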

So, is there a way to eliminate these useless toast tables?

thanks
Marcos


Re: toast useless

2022-09-13 Thread Marcos Pegoraro
>
> What problem do they cause you?
>

They don't cause any problem; I was just trying to get my database as clean
as possible.
I have some thousands of these 8 kB toast tables, so I was trying to
eliminate them.
But if there is no way, ok.
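
For anyone wanting to inventory them anyway, a sketch that lists tables whose
toast table is still empty:

select c.oid::regclass as table_name,
       t.oid::regclass as toast_table
from pg_class c
join pg_class t on t.oid = c.reltoastrelid  -- only tables that have a toast relation
where c.relkind = 'r'
  and pg_relation_size(t.oid) = 0;          -- toast heap never used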


get user info on log

2022-09-13 Thread Marcos Pegoraro
When using set role or set session authorization and an error occurs, the log
shows the user name that logged in, not the one I've changed to.

Is there a way to get session_user and current_user in the log?
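
For what it's worth, log_line_prefix's %u records the name the session
authenticated as, so it does not follow set role; a workaround sketch is to
embed both identities in errors your own code raises:

do $$
begin
  -- both identities end up in the logged error message
  raise exception 'failing as session_user=%, current_user=%', session_user, current_user;
end $$;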

Thanks
Marcos


massive update on gin index

2022-09-14 Thread Marcos Pegoraro
In a table with people's info I have 3 phone numbers: mobile, work and home.
But then some people have 2 mobiles, some have 2 work numbers, so I decided
to test it as an array of JSON. I know I could have another table for that,
but I was just testing.

So my original table had
Mobile, Work and Home, and all of them are btree indexed.

Then I added a jsonb field and updated it with those 3 phone numbers in it:
[{"phone": 2236279878, "type": 1}, {"phone": 22998432631, "type": 2}]
[{"phone": 22996783278, "type": 2}]
create index idxPhones on People using gin(Phones)

If I select using the old or the new fields, both use an index and Execution
Time is similar:
explain analyze select * from People where Phones @>
'[{"phone": 2236279878}]';
explain analyze select * from People where Mobile = 2236279878 or Work
= 2236279878 or Home = 2236279878;

But then I repeated that update (the one that stores those 3 phones in json)
2 or 3 times, and my gin index became slow, very very slow. Why?

select using btree on 3 phone numbers - Execution Time: 0.164 ms
select using gin on json on first update - Execution Time: 0.220 ms
select using gin on json next to 2 or 3 updates - Execution Time: 11.220 ms

And that execution time will come back to 0.220 ms only if I recreate the
index.

Then I found gin_pending_list_limit and fastupdate, which I think are used
to tune GIN index updates, but I didn't find any examples of either.
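
For reference, both are set per index; a minimal sketch of the syntax:

-- disable the pending list entirely, trading insert speed for stable search speed:
create index idxPhones on People using gin (Phones) with (fastupdate = off);
-- or keep fastupdate but shrink the pending list (value in kB):
alter index idxPhones set (gin_pending_list_limit = 512);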

What am I missing? Does that gin index need some more options or attributes?
I know that in day-to-day use I'll never run that massive update twice, but
just to understand: when will this index be updated?

Thanks
Marcos


Re: massive update on gin index

2022-09-14 Thread Marcos Pegoraro
>
> Did you try a simple array of phone numbers?  If you really care about
> mobile,work,home prepend the number with one of HMW. Easily stripped off
> as necessary.  I've had decent performance with arrays in the past.
>

I know I have other options, possibly better ones, but I was trying to
understand what happens with gin indexes, that's all.


Re: massive update on gin index

2022-09-14 Thread Marcos Pegoraro
On Wed, Sep 14, 2022 at 16:55, Tom Lane wrote:

> GIN does have a "pending list" of insertions not yet pushed into the main
> index structure, and search performance will suffer if that gets too
> bloated.  I don't recall much about how to control that, but I think
> vacuuming the table will serve to empty the pending list.  Also see
>
>
> https://www.postgresql.org/docs/current/gin-implementation.html#GIN-FAST-UPDATE
>
> regards, tom lane
>

Correct: if I want to use that index immediately with the same performance,
I have to call

select pg_catalog.gin_clean_pending_list('idxphones');

or wait for the next autovacuum.

thanks
Marcos


Re: get user info on log

2022-09-16 Thread Marcos Pegoraro
On Thu, Sep 15, 2022 at 12:59, Adrian Klaver <adrian.kla...@aklaver.com>
wrote:

> test(5432)=# set role maura;
> ERROR:  role "maura" does not exist
> test(5432)=# SET SESSION AUTHORIZATION 'maura';
> ERROR:  role "maura" does not exist
>
No, I was asking about an error occurring later, not on the set session
authorization command itself.

set role maura;
--user maura exists and the set was done correctly
--now I'm working as maura
select 1/0; -- I would like to see this exception logged as maura

thanks
Marcos


Re: Finding free time period on non-continous tstzrange field values

2022-11-30 Thread Marcos Pegoraro
>
> Given the following table, how do I find free time period.
>

https://www.crunchydata.com/blog/better-range-types-in-postgres-14-turning-100-lines-of-sql-into-3


Re: toast useless

2023-01-31 Thread Marcos Pegoraro
Simon Riggs wrote:

> On Tue, 13 Sept 2022 at 13:29, Marcos Pegoraro  wrote:
> >>
> >> What problem do they cause you?
> >
> >
> > They don't cause any problem, I was just trying to get my database as
> clean as possible.
> > I have some thousands of these toast tables with 8k bytes, so I was
> trying to eliminate them
> > But if there is no way, ok
>
> They might be optimized away one day, but for now, they are essential.
>

When version 16 comes out it'll be possible to define a field's STORAGE at
CREATE TABLE time [1].
I have some thousands of toast files which are completely useless in my
database, so how do I remove them?

pg_upgrade? Will those unneeded toast files be dropped automatically?
If not, if I do "alter table x alter column y set storage main", will the
toast file still exist?
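
For reference, the syntax [1] adds in 16 looks like this (a sketch):

create table test (
  x integer,
  audit_last_record jsonb storage main  -- STORAGE now allowed at CREATE TABLE time
);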

[1]
https://www.postgresql.org/message-id/flat/de83407a-ae3d-a8e1-a788-920eb334f...@sigaev.ru


Understanding years part of Interval

2023-02-06 Thread Marcos Pegoraro
I was just playing with some random timestamps for a week, for a month, for
a year ...

select distinct current_date+((random()::numeric)||'month')::interval from
generate_series(1,100) order by 1;
It's run with distinct because if you change that 'month' to 'year' it
returns only 12 rows instead of 100. So why does the years part of an
interval work differently from any other part?

select '1.01 week'::interval; --> 0 years 0 mons 7 days 1 hours 40 mins
48.00 secs
select '1.01 month'::interval; --> 0 years 1 mons 0 days 7 hours 12 mins
0.00 secs
select '1.01 year'::interval; --> 1 years 0 mons 0 days 0 hours 0 mins 0.00
secs

thanks
Marcos


Re: How to pass table column values to function

2023-02-11 Thread Marcos Pegoraro
On Sat, Feb 11, 2023 at 07:10, Andrus wrote:

> Hi!
>
> Table source contains integer column. Its values should be passed to
> function for selecting data from other table.
>
> I tried
>
> CREATE OR REPLACE FUNCTION
> public.TestAddAssetTransactions(dokumnrs int[])
>  RETURNS int AS
> $BODY$
>
> with i1 as (
> INSERT INTO bilkaib (dokumnr)
> select dokumnr  from dok where dokumnr in (select * from
> unnest(dokumnrs))
> returning *
> )
>
> select count(*) from i1;
> $BODY$ language sql;
>
> create temp table bilkaib (dokumnr int ) on commit drop;
> create temp table dok (dokumnr serial primary key ) on commit drop;
> create temp table source (dokumnr int ) on commit drop;
> insert into source values (1),(2);
>
> select TestAddAssetTransactions( (select ARRAY[dokumnr] from
> source)::int[] )
>
>
> but got error
>
> > ERROR: more than one row returned by a subquery used as an expression
>
Probably you want an array_agg and not an array:
select TestAddAssetTransactions( (select array_agg(dokumnr) from source) );


Re: Move all elements toward another schema?

2023-02-28 Thread Marcos Pegoraro
>
> As far as i know, this piece of code would move the data. But how to also
> move indexes, constraints, primary key?
>
Indexes, constraints and the primary key move along with the table
automatically:

create schema if not exists a;
create schema if not exists b;
create table a.a(id integer not null constraint pk_a primary key, name text);
create index idx_a_name on a.a(name);
alter table a.a set schema b;

select relnamespace::regnamespace, relname from pg_class where relname in
('a','pk_a','idx_a_name');
 relnamespace |  relname
--+
 b| a
 b| idx_a_name
 b| pk_a
(3 rows)


CTE, lateral or jsonb_object_agg ?

2023-05-20 Thread Marcos Pegoraro
I have a table like pg_settings, where records have a name and a value.
This select is really fast, just 0.1 or 0.2 ms, but it runs millions of
times a day, so ...

Then all the time I have to select up to 10 of these records, but the result
has to be a single row. That can be done with ...

--Using CTE
with
BancoPadrao as (select varvalue from sys_var where name = $$/Geral/BancoPadrao$$),
BancoMatricula as (select varvalue from sys_var where name = $$/Geral/BancoMatricula$$),
BancoParcela as (select varvalue from sys_var where name = $$/Geral/BancoParcela$$),
BancoMaterial as (select varvalue from sys_var where name = $$/Geral/BancoMaterial$$)
select (select * from BancoPadrao) BancoPadrao,
       (select * from BancoMatricula) BancoMatricula,
       (select * from BancoParcela) BancoParcela,
       (select * from BancoMaterial) BancoMaterial;

--Using LATERAL
select * from (select varvalue from sys_var where name = $$/Geral/BancoPadrao$$) BP(BancoPadrao)
cross join lateral (select varvalue from sys_var where name = $$/Geral/BancoMatricula$$) BM(BancoMatricula)
cross join lateral (select varvalue from sys_var where name = $$/Geral/BancoParcela$$) BPP(BancoParcela)
cross join lateral (select varvalue from sys_var where name = $$/Geral/BancoMaterial$$) BMM(BancoMaterial);

--Using JSONB_OBJECT_AGG
select (VarValue->>'BancoPadrao') BancoPadrao,
       (VarValue->>'BancoMatricula') BancoMatricula,
       (VarValue->>'BancoParcela') BancoParcela,
       (VarValue->>'BancoMaterial') BancoMaterial
from (select jsonb_object_agg(split_part(name,'/',3), varvalue) VarValue
      from sys_Var
      where Name = any('{/Geral/BancoPadrao,/Geral/BancoMatricula,/Geral/BancoParcela,/Geral/BancoMaterial}'::text[])) x;

The first two options have to find each record independently, so they hit
the heap or index multiple times, and their planning time is bigger than the
last one's. The last one, though, has to aggregate and later extract values
from that aggregate.

Planning time for the first two options is 2 or 3 times that of the last
one, but execution time is similar for all of them.

Planning Time: 0.138 ms, Execution Time: 0.058 ms - First
Planning Time: 0.165 ms, Execution Time: 0.034 ms - Second
Planning Time: 0.073 ms, Execution Time: 0.040 ms - Third

My question is: how can I measure how much memory was used? The first two
options did not have to compute anything, they just found the value and
fetched it, while the last one had to build the aggregate in memory, right?
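
EXPLAIN with the BUFFERS option at least shows the block-level traffic of
each plan (a sketch on the aggregate variant):

explain (analyze, buffers)
select jsonb_object_agg(split_part(name, '/', 3), varvalue)
from sys_var
where name = any('{/Geral/BancoPadrao,/Geral/BancoMatricula}'::text[]);
-- shared hit/read counts show how many heap and index pages each plan touched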

regards,
Marcos


Re: DB migration : Sybase to Postgres

2023-05-26 Thread Marcos Pegoraro
On Thu, May 25, 2023 at 08:30, Sengottaiyan T wrote:

> Is there an option to set novalidate constraints in postgres? In my source
> Sybase DB, table structures are not defined properly (only primary keys
> exist and no foreign key) - I'm making necessary changes on target Postgres
> DB (created tables, identifying relationship between table columns with
> respective team). After creating proper structure on target, the next step
> is to load data - I'm sure there will be a lot of errors during initial
> data load (no parent record found). How to handle it?
>
Other options: create all your foreign keys before importing the data, then
do one of the following.

This disables table triggers (including foreign key checks) for all users:
ALTER TABLE T1 DISABLE TRIGGER ALL;
ALTER TABLE T2 DISABLE TRIGGER ALL;
ALTER TABLE T3 DISABLE TRIGGER ALL;
--Import all your data
ALTER TABLE T1 ENABLE TRIGGER ALL;
ALTER TABLE T2 ENABLE TRIGGER ALL;
ALTER TABLE T3 ENABLE TRIGGER ALL;

or

This disables table triggers for the current session only:
SET SESSION_REPLICATION_ROLE = REPLICA;
--Import all your data
SET SESSION_REPLICATION_ROLE = ORIGIN;

Obviously, if your data doesn't have foreign key values that correctly match
their parents, you'll never be able to do a dump/restore properly.
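
There is also something closer to Sybase/Oracle's NOVALIDATE: load the data
first, then add the foreign keys as NOT VALID, which skips checking existing
rows; a sketch with hypothetical table names:

ALTER TABLE child ADD CONSTRAINT fk_child_parent
  FOREIGN KEY (parent_id) REFERENCES parent NOT VALID;  -- existing rows are not checked
-- later, after the data is cleaned up:
ALTER TABLE child VALIDATE CONSTRAINT fk_child_parent;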

Marcos


vacuum to prevent wraparound

2023-06-06 Thread Marcos Pegoraro
I have a replica server running Postgres 14.4.
Replication is done using publication/subscription, and I have triggers for
auditing, so every record that comes from production is audited.
Some months ago I changed the way I audit and replaced that audit table with
a new one, but didn't remove the old table. So I have both the new (AUDIT)
and the old (SYS_AUDIT) tables.
Then last night I got the message below saying that, to prevent wraparound,
postgres would do an aggressive vacuum on SYS_AUDIT, and it took several
hours to complete, on a table that hasn't been used for 6 or 8 months. Why?

I know it would be good to vacuum that table, it has lots of dead tuples,
but the table is not used anymore, so why vacuum it?
Do I have to drop that huge, unused table immediately because some day it
could force the server to stop to prevent a wraparound?
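
For anyone watching for the next candidate, a sketch that lists the tables
closest to the wraparound threshold:

select oid::regclass as table_name,
       age(relfrozenxid) as xid_age  -- compared against autovacuum_freeze_max_age
from pg_class
where relkind = 'r'
order by xid_age desc
limit 10;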

automatic aggressive vacuum to prevent wraparound of table
"db.sys_tables.sys_audit": index scans: 1
pages: 0 removed, 2990943 remain, 0 skipped due to pins, 0 skipped frozen
tuples: 12125175 removed, 225002719 remain, 0 are dead but not yet
removable, oldest xmin: 623406932
index scan needed: 642835 pages from table (21.49% of total) had 19945151
dead item identifiers removed
index "pksysaudit": pages: 682245 in total, 112 newly deleted, 112
currently deleted, 112 reusable
index "idxsys_audit_audit_date_time": pages: 889005 in total, 2950 newly
deleted, 2950 currently deleted, 2708 reusable
index "idxsysaudittablenameprimarykey": pages: 1045682 in total, 8 newly
deleted, 8 currently deleted, 8 reusable
I/O timings: read: 9426032.316 ms, write: 95896.564 ms
avg read rate: 1.381 MB/s, avg write rate: 0.937 MB/s
buffer usage: 3021218 hits, 6252689 misses, 4243736 dirtied
WAL usage: 7402733 records, 4242832 full page images, 8868995633 bytes
system usage: CPU: user: 999.31 s, system: 84.52 s, elapsed: 35378.45 s

regards
Marcos


Re: Active Active PostgreSQL Solution

2023-06-09 Thread Marcos Pegoraro
Postgres 16 will have an origin option on logical replication subscriptions,
so you can have an active-active setup.
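
A sketch of what that enables, with hypothetical names: each node publishes
its tables and subscribes to the other, skipping rows that were themselves
replicated, which avoids the loop:

-- on node 1, subscribing to node 2 (and symmetrically on node 2):
CREATE SUBSCRIPTION sub_from_node2
  CONNECTION 'host=node2 dbname=mydb'
  PUBLICATION pub_all
  WITH (origin = none,     -- only changes that originated locally on node 2
        copy_data = off);  -- skip the initial copy in a bidirectional setup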

regards,
Marcos

On Fri, Jun 9, 2023 at 07:41, Mohsin Kazmi wrote:

> Hello Everyone,
>
> I have been working on PostgreSQL databases for the last three years and I
> have also migrated databases from Oracle to PostgreSQL. I configured
> PostgreSQL for logical replication as well.
>
> Now in order to deploy PostgreSQL in our production servers, I need to
> configure it in Active Active mode. Can anyone help me to do so?
>
> Thanks in advance.
> --
> Best Regards
> Mohsin
>


Re: Active Active PostgreSQL Solution

2023-06-09 Thread Marcos Pegoraro
On Fri, Jun 9, 2023 at 22:22, Bruce Momjian wrote:

> Uh, I have no idea what that means and I didn't think we had active-active
> in version 16.


https://postgrespro.com/blog/pgsql/5969859#commit_36628396

Regards
Marcos


Re: Trigger Function question

2023-07-10 Thread Marcos Pegoraro
>
> Is there a way to get new.* into a jsonb column?
>

select json_object_agg(js.key, js.value) from
json_each_text(row_to_json(new.*)) js
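
If the target column is jsonb, a simpler sketch (assuming a hypothetical
audit column, removed from its own snapshot):

-- inside a plpgsql trigger function:
new.audit_last_record := to_jsonb(new) - 'audit_last_record';
return new;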

Marcos


Re: Read only user permission

2023-08-23 Thread Marcos Pegoraro
Have you tried grant pg_read_all_data to readonly?
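
If pg_read_all_data is too broad, the likely cause below is that ALTER
DEFAULT PRIVILEGES only covers objects created by the role that runs it; a
sketch using FOR ROLE so it covers tables created by readwrite:

ALTER DEFAULT PRIVILEGES FOR ROLE readwrite IN SCHEMA public
  GRANT SELECT ON TABLES TO readonly;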

regards
Marcos

On Wed, Aug 23, 2023 at 14:30, Hellen Jiang <hji...@federatedwireless.com>
wrote:

> Hi,
>
>
>
> I have created readonly and readwrite roles with the following grants;
> however, the readonly user does not have access to the *new* tables created
> by the readwrite user. (The readonly user has access to *new* tables
> created by admin.)
>
> Any idea how I can grant access to readonly to make sure it has
> read access to *NEW* tables created by the readwrite user?
>
>
>
> -- Read-only role
>
> GRANT CONNECT ON DATABASE mydatabase TO readonly;
>
> GRANT USAGE ON SCHEMA public TO dbreadonly;
>
> GRANT SELECT ON ALL TABLES IN SCHEMA public TO dbreadonly;
>
> GRANT SELECT ON ALL SEQUENCES IN SCHEMA public TO dbreadonly;
>
> ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO
> dbreadonly;
>
>
>
> -- Read/write role
>
> GRANT CONNECT ON DATABASE mydatabase TO readwrite;
>
> GRANT USAGE, CREATE ON SCHEMA public TO readwrite;
>
> GRANT SELECT, INSERT, UPDATE, DELETE, TRUNCATE ON ALL TABLES IN SCHEMA
> public TO readwrite;
>
> GRANT USAGE ON ALL SEQUENCES IN SCHEMA public TO readwrite;
>
> ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT, INSERT, UPDATE,
> DELETE,TRUNCATE ON TABLES TO readwrite;
>
> ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT ALL ON SEQUENCES TO
> readwrite;
>
>
>
>
>


Check column result size in functions

2023-12-12 Thread Marcos Pegoraro
My functions should either return varchar(5) or raise an exception, but
sometimes they return unconstrained CHARACTER VARYING. Why?

create or replace function f_sql() returns varchar(5) language sql as
$$select '0123456789'$$;
--works, but shouldn't
select * from f_sql();

create domain T5 as varchar(5);
create or replace function f_sql_domain() returns T5 language sql as
$$select '0123456789'$$;
--exception because of domain
--ERROR: value too long for type character varying(5)
select * from f_sql_domain();

create or replace function f_sql_table() returns table (V5 varchar(5))
language sql as
$$select '0123456789'$$;
--works, but shouldn't
select * from f_sql_table();

create or replace function f_plpgsql() returns varchar(5) language plpgsql
as
$$begin return '0123456789';end;$$;
--works, but shouldn't
select * from f_plpgsql();

create or replace function f_plpgsql_table() returns table(V5 varchar(5))
language plpgsql as
$$begin return query select '0123456789';end;$$;
--Shouldn't this exception be the response for all the other calls?
--ERROR: structure of query does not match function result type
select * from f_plpgsql_table();

I've created some functions using LANGUAGE SQL, but strangely the varchar
results did not respect the size I declared.
To be sure the result column size is enforced, do I have to use a DOMAIN?
Because even in PLPGSQL, column sizes are checked only with RETURNS TABLE.

thanks for any explanation


Re: How to do faster DML

2024-02-04 Thread Marcos Pegoraro
>
> insert into mytable2 select * from mytable1 order by ctid limit 10_000_000
> offset 20_000_000;
>

You can use min/max values grouping by 10_000_000 records, so you don't need
that offset; then generate the commands and run them.

select format('insert into mytable2 select * from mytable1 where i between %s and %s;', min(i), max(i))
from mytable1 group by i/10_000_000;
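
In psql the generated statements can be fed straight back for execution with
\gexec (a sketch; the query must not end with a semicolon before the
metacommand):

select format('insert into mytable2 select * from mytable1 where i between %s and %s;', min(i), max(i))
from mytable1
group by i/10_000_000
\gexec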


Documentation diff

2024-02-25 Thread Marcos Pegoraro
It's not easy to see what exactly changed between one version and another on
a specific page of the documentation. As an example I picked this page, but
you'll see this problem anywhere in the docs.

https://www.postgresql.org/docs/16/functions-json.html
and
https://www.postgresql.org/docs/devel/functions-json.html

There are lots of new functions and better explanations of old features.
Then my question is, how to know all pages which were changed and what
exactly was changed on those pages ?

Regards
Marcos


Re: Documentation diff

2024-02-25 Thread Marcos Pegoraro
Daniel Gustafsson wrote:

> Which problem are you trying to solve?  You should be reading the version of
> the docs which corresponds to the version you are running.  If you are trying
> to figure out an upgrade then the release notes are probably a better
> starting point.

We have doubts all the time about which version a function exists in, or
when it was changed.
Does IS JSON belong to 15 or 16? Was MERGE done in 14 or 15? Those, for
example, are completely new, so the feature will not run on previous
versions.
But sometimes a function is changed a bit, or a new, easier function is
added, like [1].
Before all those new JSONPATH operators and methods were added I could do
all those operations and return anything I wanted, but now they are a lot
easier.

But if I don't read that page carefully, word by word, I will not see that
the function exists. And this needs to be done on every page.
What changed in CREATE TABLE for version 16? Hmm, cool, STORAGE can be
defined at CREATE TABLE time and not only via ALTER TABLE anymore. But to
know that I had to read the page carefully, do you understand me?

So I was thinking of a way to get the differences between two versions, for
all doc pages.
Something like we already have on [2]: it makes explicit that this feature
was introduced, this was changed, this no longer exists.

[1] -
https://www.postgresql.org/docs/devel/functions-json.html#FUNCTIONS-SQLJSON-PATH-OPERATORS
[2] - https://www.postgresql.org/about/featurematrix/

regards
Marcos


Re: Documentation diff

2024-02-28 Thread Marcos Pegoraro
jian he wrote:

> but I guess the OP wants a side by side rendered html comparison,
> like when you buy an iPhone you can compare it like this:
> https://www.apple.com/iphone/compare/

No, I want to show what changed between two versions: the page you are
viewing and another version you selected, but both on the same page, not
side by side.

Every page has in its title "Supported Versions: 16 / 15 / 14 / 13 / 12".
I would like to see, to the right of that, some radio buttons for "Comparing
Version: rb16 / rb15 / rb14 / rb13 / rb12". Since rb16, rb15, ... are radio
buttons, I can compare the current page with one previous version, only one.
So, if you are viewing CREATE TABLE for version 16 and comparing with
version 14, the STORAGE ... part would be painted with a green background,
because it was added in version 16.

If you change this part of that page

[ STORAGE { PLAIN | EXTERNAL | EXTENDED | MAIN | DEFAULT } ]

so that it is wrapped in something like

<span added-in="16">[ STORAGE { PLAIN | EXTERNAL | EXTENDED | MAIN | DEFAULT } ]</span>

then a simple JS would show that text with a green background only if the
comparing version you've selected is 15 or less.

This page has another change, but only if you compare with version 11 or
before, because WITH OIDS was removed in version 12.
So, this text

[ WITH ( *storage_parameter* [= *value*] [, ... ] ) | WITH OIDS | WITHOUT OIDS ]

could be wrapped in something like

<span removed-in="12">[ WITH ( *storage_parameter* [= *value*] [, ... ] ) | WITH OIDS | WITHOUT OIDS ]</span>

This time that part would show something in red, just to indicate that this
part was removed, maybe.

I know this means rewriting all the HTML pages to mark which text was added
or changed, but I think it's possible to find a tool to do that.
And knowing exactly what was changed on every page of the docs would be much
better.

regards
Marcos


Re: CLUSTER vs. VACUUM FULL

2024-04-22 Thread Marcos Pegoraro
On Sun, Apr 21, 2024 at 22:35, David Rowley wrote:

>
> Both VACUUM FULL and CLUSTER go through a very similar code path. Both
> use cluster_rel().  VACUUM FULL just won't make use of an existing
> index to provide presorted input or perform a sort, whereas CLUSTER
> will attempt to choose the cheapest out of these two to get sorted
> results.


But wouldn't it be good if VACUUM FULL used the index defined by CLUSTER,
when it exists? Maybe as an additional option for VACUUM FULL?
Because if I periodically reorganize all tables, I have to run CLUSTER once
(which processes the clustered tables) and VACUUM FULL on every table that
is not clustered, because running VACUUM FULL on the entire database just
ignores the cluster index defined for each table. So, INDISCLUSTERED is used
when running CLUSTER but ignored when running VACUUM FULL.
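
For reference, the clustering index that VACUUM FULL ignores can be listed
like this (a sketch):

select indrelid::regclass as table_name,
       indexrelid::regclass as index_name
from pg_index
where indisclustered;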

regards
Marcos


Re: CLUSTER vs. VACUUM FULL

2024-04-22 Thread Marcos Pegoraro
On Mon, Apr 22, 2024 at 11:25, Tom Lane wrote:

> No ... what would be the difference then


Well, I think that if a cluster index was defined on a table at some point,
it should be respected by subsequent commands, including VACUUM FULL.
If I wanted to go back to the PK or any other index, I would use CLUSTER ...
USING pk_index.

regards
Marcos


Re: query multiple schemas

2024-04-23 Thread Marcos Pegoraro
On Sun, Apr 21, 2024 at 17:12, Norbert Sándor <sandor.norb...@erinors.com>
wrote:

> Hello,
>
> I have a database with multiple tenants with a separate schema for each
> tenant.
> The structure of each schema is identical, the tenant ID is the name of
> the schema.
>
> What I would like to achieve is to be able to query tables in all schemas
> at once with the tenant IDs added to the result set.
>
I have a similar structure and do my multi-tenant queries this way.
The only problem I see is that we have to spell out every result column
type, because the function returns record, but it runs fine.

create function sql_per_tenant(sql text, tenants text[]) returns setof record
language plpgsql as $function$
declare
  Result record;
  schemas text;
begin
  for schemas in select unnest(tenants) loop
    execute format('set local search_path to %s, public;', schemas);
    for Result in execute format('select $$%1$s$$ tenant, * from (%2$s) x', schemas, sql) loop
      return next Result;
    end loop;
  end loop;
end;
$function$;

select * from sql_per_tenant(
  'select Order_ID, sum(Value) from Orders inner join Items using(Order_ID)
   where Due_Date = Current_Date',
  '{cus_001,cus_035,cus_175}')
as (SchemaName text, Order_ID integer, sum_of_items Numeric);

regards
Marcos


Length returns NULL ?

2024-06-05 Thread Marcos Pegoraro
There are several ...length functions, but only array_length returns NULL
on an empty array. Why?

select array_length('{}'::text[],1), -->NULL
   jsonb_array_length('[]'), -->0
   bit_length(''), -->0
   octet_length(''), -->0
   length(''), -->0
   char_length(''), -->0
   length(B''); -->0

I know it is documented, but the question is: why does it work differently?

array_length ( anyarray, integer ) → integer
Returns the length of the requested array dimension. (Produces NULL instead
of 0 for empty or missing array dimensions.)

array_length(array[1,2,3], 1) → 3
array_length(array[]::int[], 1) → NULL
array_length(array['text'], 2) → NULL
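
For what it's worth, cardinality() already behaves like the other length
functions here (a sketch):

select cardinality('{}'::text[]);  --> 0, unlike array_length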


Re: Determining if a table really changed in a trigger

2021-10-26 Thread Marcos Pegoraro
>
>
> PERFORM * FROM ((TABLE old_table EXCEPT TABLE new_table) UNION ALL
> (TABLE new_table EXCEPT TABLE old_table)) AS differences LIMIT 1;
> IF FOUND THEN
>   ... changed ...
> END IF;
>
Maybe convert new and old records to json text:

PERFORM * FROM (
  select ID, row_to_json(O.*)::text Old_Values, row_to_json(N.*)::text New_Values
  from old_table O full outer join new_table N using (ID)
  where row_to_json(O.*)::text is distinct from row_to_json(N.*)::text
) as differences LIMIT 1;


Re: Determining if a table really changed in a trigger

2021-10-26 Thread Marcos Pegoraro
>
>
> Don’t use this approach with JSON (as opposed to JSONB) type fields
> though, a single extra space in the JSON structure would already lead to a
> difference, as would other formatting differences.
>
I don't think two equal values converted to json will be different in any
way. If row_to_json of both is different, I suppose the records really are
different, no?


Re: Determining if a table really changed in a trigger

2021-10-27 Thread Marcos Pegoraro
>
> Oh, very interesting. I thought that this is not possible because WHEN
> condition on triggers does not have NEW and OLD. But this is a very
> cool way to combine rules with triggers, where a rule can still
> operate by row.
>
That is not true:

create table test(i integer);
create function test_old_new() returns trigger language plpgsql as $$
begin
   raise notice '% - %', old.i, new.i;
   return new;
end;$$;
CREATE TRIGGER testvalue BEFORE UPDATE OF i ON test FOR EACH ROW WHEN
(((new.i)::integer = 5::integer)) EXECUTE PROCEDURE test_old_new();
> insert into test values(4);
1 row affected in 52 ms
> update test set i = 6;
1 row affected in 93 ms
> update test set i = 5;
6 - 5   <- the raise notice in test_old_new fired only when new.i = 5
1 row affected in 48 ms



Re: Logical replication issue after Postgresql upgrade from 13 to 14

2021-12-01 Thread Marcos Pegoraro
> I have an issue with logical replication after Postgresql upgrade from
> 13 to 14 (upgraded subscriber only using pg_upgrade_cluster -m link 13
> main). After upgrade all subscriptions were disabled so I have enabled
> them and replication workers successfully started.
> pg_stat_subscription contains list of all subscriptions. All times in
> this table is near current time (replication workers receives data
> from servers). But no changes in destination table since cluster
> upgrade (on publishers tables are changed). What I'm doing wrong and
> how to fix issue?
>

Amit, wouldn't it be better to document all the steps needed to use
pg_upgrade with logical replication?
Sergey is showing a different problem from mine.