Link error on Release mode

2024-11-27 Thread Igor Korot
Hi, ALL.
Today for the first time I tried to build my software in
Release mode.

This is the command MSVC generates:

/OUT:"..\dbhandler\vc_mswudll\postgres_dll.dll" /MANIFEST /NXCOMPAT
/PDB:"vc_mswudll\my_dll.pdb" /DYNAMICBASE "dbinterface.lib"
"libpq.lib" "kernel32.lib" "user32.lib" "gdi32.lib" "comdlg32.lib"
"winspool.lib" "winmm.lib" "shell32.lib" "shlwapi.lib" "comctl32.lib"
"ole32.lib" "oleaut32.lib" "uuid.lib" "rpcrt4.lib" "advapi32.lib"
"version.lib" "wsock32.lib" "WS2_32.lib" "wininet.lib" "Secur32.lib"
"odbc32.lib" "odbccp32.lib" /IMPLIB:"vc_mswudll\my_dll.lib" /DEBUG
/DLL /MACHINE:X86 /OPT:REF /SAFESEH /INCREMENTAL:NO
/PGD:"vc_mswudll\postgres.pgd" /MANIFESTUAC:"level='asInvoker'
uiAccess='false'"
/ManifestFile:"vc_mswudll\my_dll\postgres.dll.intermediate.manifest"
/OPT:ICF /ERRORREPORT:PROMPT /NOLOGO /LIBPATH:"..\dbhandler\Release"
/LIBPATH:"..\libpg\src\interfaces\libpq\Release" /LIBPATH:"C:\Program
Files (x86)\Visual Leak Detector\lib\Win32" /TLBID:1

And this is what I got:

1>libpq.lib(dirmod.obj) : error LNK2001: unresolved external symbol __imp___iob_func
1>libpq.lib(win32error.obj) : error LNK2001: unresolved external symbol __imp___iob_func
1>libpq.lib(fe-connect.obj) : error LNK2001: unresolved external symbol __imp___iob_func
1>libpq.lib(snprintf.obj) : error LNK2001: unresolved external symbol __imp___iob_func
1>libpq.lib(encnames.obj) : error LNK2001: unresolved external symbol __imp___iob_func
1>libpq.lib(chklocale.obj) : error LNK2001: unresolved external symbol __imp___iob_func
1>libpq.lib(fe-exec.obj) : error LNK2019: unresolved external symbol __imp__sscanf referenced in function _pqSaveParameterStatus
1>libpq.lib(snprintf.obj) : error LNK2019: unresolved external symbol __imp__sprintf referenced in function _fmtptr
1>..\dbhandler\vc_mswudll\postgres_dll.dll : fatal error LNK1120: 3 unresolved externals
1>2 Warning(s)
1>9 Error(s)
1>2 Warning(s)
1>9 Error(s)

What am I missing?

Thank you
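
For what it's worth, unresolved __imp__* stdio symbols like these usually mean libpq.lib was built with a different MSVC toolset/C runtime than the DLL now being linked; the usual remedy is rebuilding libpq with the same compiler and runtime settings. As an illustrative sketch only (not a confirmed fix for this build): on VS2015 and later, appending legacy_stdio_definitions.lib to the library list supplies the legacy sprintf/sscanf definitions, though the __iob_func symbol generally still requires the rebuild:

rem illustrative only: append to the library list in the link command above
... "odbc32.lib" "odbccp32.lib" "legacy_stdio_definitions.lib" /IMPLIB:"vc_mswudll\my_dll.lib" ...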




uninstalling postgresql 13 on ubuntu 22.04?

2024-11-27 Thread Richard Welty
i installed pg13 from the postgres repos but am not really using it
in that form; i'm using docker images instead. i now find i need
to uninstall it to facilitate upgrading my desktop to ubuntu 24

is apt remove an effective way to uninstall or is there a better method?

thanks,
   richard

Re: uninstalling postgresql 13 on ubuntu 22.04?

2024-11-27 Thread Ray O'Donnell

On 27/11/2024 21:54, Richard Welty wrote:

i installed pg13 from the postgres repos but am not really using it
in that form; i'm using docker images instead. i now find i need
to uninstall it to facilitate upgrading my desktop to ubuntu 24

is apt remove an effective way to uninstall or is there a better method?



If you used "apt install" to install it in the first place, then yes, 
"apt remove" is the only way to remove it cleanly.


Note that this won't delete the data directory - you'll need to use the 
Debian-specific script pg_dropcluster to do this. Also, I don't think it 
removes the config files in /etc/postgresql, though "apt purge" should 
do for those.


Ray.


--
Raymond O'Donnell // Galway // Ireland
r...@rodonnell.ie
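
For illustration, the sequence described above might look like this; a minimal sketch, assuming the pgdg packages and the default cluster name "main":

# stop the server and delete the version-13 data directory
sudo pg_dropcluster --stop 13 main
# remove the packages; purge also drops the config under /etc/postgresql
sudo apt purge postgresql-13 postgresql-client-13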


Re: uninstalling postgresql 13 on ubuntu 22.04?

2024-11-27 Thread Adrian Klaver

On 11/27/24 13:54, Richard Welty wrote:

i installed pg13 from the postgres repos but am not really using it
in that form; i'm using docker images instead. i now find i need
to uninstall it to facilitate upgrading my desktop to ubuntu 24


Not really. Per here:

https://www.postgresql.org/download/linux/ubuntu/

Ubuntu 24 is supported. You would just have to change the repo to:

... noble-pgdg ...



is apt remove an effective way to uninstall or is there a better method?


Otherwise follow Raymond O'Donnell's suggestions to uninstall/purge.



thanks,
    richard



--
Adrian Klaver
adrian.kla...@aklaver.com
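
For illustration, assuming the standard pgdg layout, the entry in /etc/apt/sources.list.d/pgdg.list would change to something like:

deb https://apt.postgresql.org/pub/repos/apt noble-pgdg main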





Re: Customize psql prompt to show current_role

2024-11-27 Thread Raphael Salguero Aragón
On 23.09.2024 at 17:37, Dominique Devienne wrote:
> 
> On Mon, Sep 23, 2024 at 5:16 PM Adrian Klaver  
> wrote:
>> On 9/23/24 08:07, Dominique Devienne wrote:
>>> I often resort to \conninfo, but it's less automatic and
>>> harder to visually parse (IMHO) compared to a custom ad-hoc prompt.
> 
>> For me that shows the user that connected (session_user), not the current_user.
> 
> yeah, sure, I realized that just after sending my last post...
> I resort to using both \conninfo *and* `select current_role`.
> I used \conninfo to tell me which server I'm on (since not
> on the default prompt, before I played with it today), which
> session_user I'm connected with (ddevienne or dd_login
> in my recent example), and finally `select current_role`
> which matters in my use-cases (session_user or dd_admin or dd_user).
> 
> I'd like not to have to type those commands/queries all the time,
> and have them in the prompt at all times. I get that it's not for everyone,
> nor even necessary all the time, but I sincerely think it is definitely 
> useful.
> 
> I'd be glad to use Pavel's proposed %N. —DD
> 


Hi all,

I just wanted to add that we are also supporting a larger customer with exactly 
the same request.
Perhaps this increases the value of the adjustment a little more :)

Best regards!
 Raphael Salguero



Re: Customize psql prompt to show current_role

2024-11-27 Thread Dominique Devienne
On Wed, Nov 27, 2024 at 3:14 PM Raphael Salguero Aragón
 wrote:
> On 23.09.2024 at 17:37, Dominique Devienne wrote:
> > I'd be glad to use Pavel's proposed %N. —DD
> I just wanted to add that we are also supporting a larger customer with 
> exactly the same request.
> Perhaps this increases the value of the adjustment a little more :)

Hi. Thanks for sharing Raphael. That's a 3rd notch Tom! :)
So is it the proverbial "third time's a charm", and will the community
revise its initial rejection?
I keep making that point, but the fact that this is entirely opt-in
mitigates the "expensive" argument IMHO.
FWIW. Thank you for considering this RFC again, especially since
there's a patch for it. Respectfully, --DD
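
For readers who want most of this today: psql's existing prompt escapes already cover the server and session user; only the current role needs the proposed %N. A minimal ~/.psqlrc sketch using only escapes psql already supports:

-- %M = host, %> = port, %n = session_user, %/ = database, %# = # when superuser
\set PROMPT1 '%M:%> %n@%/%R%# '
-- Pavel's proposed %N would additionally expose current_role here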




Re: PostgreSQL 15.10 update corrective action for ATTACH PARTITION/DETACH PARTITION

2024-11-27 Thread Tom Lane
Paul Foerster  writes:
> On 26 Nov 2024, at 22:25, Tom Lane  wrote:
>> I'm suspicious that our repair recipe might not have accounted
>> for self-reference FKs fully, but that's just a gut feeling at
>> this point.

> Of course, it contains no secret data. Please find the full log below. 
> According to the add constraint statement, it is a self reference.
> Thanks for looking into it.

Okay, so I was able to reproduce this from scratch on HEAD:

regression=# create table p_ci_pipelines(partition_id int, id int, primary key(partition_id, id), auto_canceled_by_partition_id int, auto_canceled_by_id int) partition by LIST (partition_id);
CREATE TABLE
regression=# create table ci_pipelines partition of p_ci_pipelines FOR VALUES IN ('100', '101', '102');
CREATE TABLE
regression=# ALTER TABLE p_ci_pipelines ADD CONSTRAINT fk_262d4c2d19_p FOREIGN KEY (auto_canceled_by_partition_id, auto_canceled_by_id) REFERENCES p_ci_pipelines(partition_id, id) ON UPDATE CASCADE ON DELETE SET NULL;
ALTER TABLE
regression=# SELECT conrelid::pg_catalog.regclass AS "constrained table",
   conname AS constraint,
   confrelid::pg_catalog.regclass AS "references",
   pg_catalog.format('ALTER TABLE %s DROP CONSTRAINT %I;',
 conrelid::pg_catalog.regclass, conname) AS "drop",
   pg_catalog.format('ALTER TABLE %s ADD CONSTRAINT %I %s;',
 conrelid::pg_catalog.regclass, conname,
 pg_catalog.pg_get_constraintdef(oid)) AS "add"
FROM pg_catalog.pg_constraint c
WHERE contype = 'f' AND conparentid = 0 AND
   (SELECT count(*) FROM pg_catalog.pg_constraint c2
WHERE c2.conparentid = c.oid) <>
   (SELECT count(*) FROM pg_catalog.pg_inherits i
WHERE (i.inhparent = c.conrelid OR i.inhparent = c.confrelid) AND
  EXISTS (SELECT 1 FROM pg_catalog.pg_partitioned_table
  WHERE partrelid = i.inhparent));
constrained table | p_ci_pipelines
constraint        | fk_262d4c2d19_p
references        | p_ci_pipelines
drop              | ALTER TABLE p_ci_pipelines DROP CONSTRAINT fk_262d4c2d19_p;
add               | ALTER TABLE p_ci_pipelines ADD CONSTRAINT fk_262d4c2d19_p FOREIGN KEY (auto_canceled_by_partition_id, auto_canceled_by_id) REFERENCES p_ci_pipelines(partition_id, id) ON UPDATE CASCADE ON DELETE SET NULL;
(1 row)

I doubt that there's anything actually wrong with the catalog state at
this point (perhaps Alvaro would confirm that).  That leads to the
conclusion that what's wrong is the release notes' query for fingering
broken constraints, and it needs some additional test to avoid
complaining about (I suspect) self-reference cases.
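
Purely as an illustration of such an additional test (not a vetted fix), the crudest option would be to skip self-referencing constraints in the outer WHERE clause of the query above, accepting that those then go unchecked:

AND c.conrelid <> c.confrelid  -- illustrative only: skip self-referencing FKs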

regards, tom lane




Re: PostgreSQL 15.10 update corrective action for ATTACH PARTITION/DETACH PARTITION

2024-11-27 Thread Alvaro Herrera
On 2024-Nov-27, Tom Lane wrote:

> I doubt that there's anything actually wrong with the catalog state at
> this point (perhaps Alvaro would confirm that).  That leads to the
> conclusion that what's wrong is the release notes' query for fingering
> broken constraints, and it needs some additional test to avoid
> complaining about (I suspect) self-reference cases.

Ugh, I hadn't noticed this report, thanks for CCing me.  I'll have a
look at this tomorrow.  You're right that I didn't think of checking
self-referencing FKs with the query.

-- 
Álvaro Herrera         Breisgau, Deutschland  —  https://www.EnterpriseDB.com/
 It does it in a really, really complicated way
 why does it need to be complicated?
 Because it's MakeMaker.




Re: uninstalling postgresql 13 on ubuntu 22.04?

2024-11-27 Thread Rob Sargent



> On Nov 27, 2024, at 3:51 PM, Adrian Klaver  wrote:
> 
> On 11/27/24 13:54, Richard Welty wrote:
>> i installed pg13 from the postgres repos but am not really using it
>> in that form; i'm using docker images instead. i now find i need
>> to uninstall it to facilitate upgrading my desktop to ubuntu 24
> 
> Not really. Per here:
> 
> https://www.postgresql.org/download/linux/ubuntu/
> 
> Ubuntu 24 is supported. You would just have to change the repo to:
> 
> ... noble-pgdg ...
> 
>> is apt remove an effective way to uninstall or is there a better method?
> 
> Otherwise follow Raymond O'Donnell's suggestions to uninstall/purge.
> 
>> thanks,
>>richard
> 
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
> 
> 

Or rethink that docker choice...





Re: Validating check constraints without a table scan?

2024-11-27 Thread Philip Couling
It looks like updating pg_constraint isn't an option on AWS RDS, since AWS
doesn't grant superuser access.
Thanks a lot for the suggestion anyway.
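
For anyone retracing the NULL caveat in the quoted advice below: a CHECK constraint passes rows where its expression evaluates to NULL, so a verification query must count only rows where the expression is actually false. A minimal sketch, using a hypothetical CHECK (x > 0):

-- rows that truly violate CHECK (x > 0); rows with NULL x satisfy the constraint
SELECT count(*) FROM my.table WHERE (x > 0) IS FALSE;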

On Fri, 15 Nov 2024 at 15:28, Torsten Förtsch 
wrote:

> On Fri, Nov 15, 2024 at 9:38 AM Philip Couling  wrote:
>
>> Is there a solid reason why adding a check constraint does not use
>> existing indexes for validation.
>>
>> We are currently looking at partitioning a multi TB table leaving all
>> existing data in place and simply attaching it as a partition to a new
>> table. To prevent locking, we are trying to add an INVALID check constraint
>> first and then validate it.
>>
>> I can trivially prove the invalid constraint is valid with a simple
>> SELECT which will use an existing index and return instantaneously. But
>> AFAIK there's no way to mark a constraint as valid without scanning all the
>> rows.
>>
>
> Most likely your query is not exactly the same as the check constraint.
> Think about NULL and similar.
>
>
>> This operation is really problematic on a production database with heavy
>> IO load.
>>
>> Is there a solid reason why validating check constraints cannot use
>> existing indexes? If I can prove the constraint is valid so trivially with
>> a SELECT, then why can Postgres not do the same (or similar)?
>>
>
> Here is what has worked for me many times:
>
> 1. create the check constraint as NOT VALID. From now on no new or updated
> row is allowed to violate it.
> 2. check if the constraint holds with a query on a binary replica. Make
> sure the query starts only when the constraint is visible on the replica.
> 3. on the master: UPDATE pg_constraint SET convalidated=true WHERE
> conname='your_constraint_name' AND conrelid='schema.table'::regclass
>
> How you perform step 2 is irrelevant. Checking it on a replica would
> simply avoid the load on the master. You just need to make sure there is no
> conflicting data in the table.
>
> WARNING, you need to be damn sure of your data if you do that. But if you
> are, it works.
>
> Here is the procedure how I solved the same problem for some of our
> multi-TB tables (PG14):
>
> The table has a column called transaction_time. We wanted to partition by
> that column. For some historical reason the column did not have a NOT NULL
> constraint. However, there was no way our processes could insert NULL in
> that column and there was no row with NULL in that field. So, first was to
> add the NOT NULL constraint:
>
> BEGIN;
>
> ALTER TABLE my.table
> ADD CONSTRAINT transaction_time_not_null CHECK(transaction_time IS NOT NULL) NOT VALID;
>
> UPDATE pg_constraint
>SET convalidated=true
>  WHERE conname = 'transaction_time_not_null'
>AND conrelid = 'my.table'::REGCLASS
> RETURNING conname, conrelid::REGCLASS, convalidated;
>
> COMMIT;
>
> Now for cosmetic purposes we first turn the check constraint above into a
> normal NOT NULL constraint:
>
> BEGIN;
>
> SET LOCAL client_min_messages = 'debug4';
> -- expecting this message
> -- DEBUG:  existing constraints on column "table.transaction_time" are sufficient to prove that it does not contain nulls
> ALTER TABLE my.table
>   ALTER COLUMN transaction_time SET NOT NULL;
> RESET client_min_messages;
>
> ALTER TABLE my.table
>   DROP CONSTRAINT transaction_time_not_null;
>
> COMMIT;
>
> If you set client_min_messages to something like debug4, then the database
> tells you if it wants to scan the table or if existing constraints are
> sufficient to prove the condition.
>
> transaction_time in our case is never in the future. Also database
> transactions are short-lived. Nothing exceeds 30 seconds. So, I can trust
> the effect of any action performed more than 30 seconds ago in the database
> is visible.
>
> So, I set the time after which new rows go to the new partition at least
> 10 minutes from now at the next hour boundary. 30 seconds would be good
> enough. I chose 10 minutes just for extra safety.
>
> SELECT date_trunc('hour', now()+'1h 10min'::interval)::timestamp AS switch_time\gset
>
> Next comes the actual change:
>
> BEGIN;
>
> -- rename the existing table
> ALTER TABLE my.table RENAME TO table_old;
>
> -- drop triggers. We will recreate them later.
> DROP TRIGGER ... ON my.table_old;
> DROP TRIGGER ...;
>
> -- create partitioned table
> CREATE TABLE my.table (
> LIKE my.table_old
> INCLUDING DEFAULTS
> INCLUDING CONSTRAINTS
> )
> PARTITION BY RANGE (transaction_time);
>
> -- recreate triggers
> CREATE TRIGGER ...
> BEFORE DELETE ON my.table
>FOR EACH ROW EXECUTE FUNCTION my.trigger_fun();
> CREATE TRIGGER ...;
>
> -- create the partition for future rows
> CREATE TABLE my.table_current PARTITION OF my.table
>   FOR VALUES FROM (:'switch_time') TO ('infinity');
>
> -- and some indexes and FK
> ALTER TABLE my.table_current ADD PRIMARY KEY (id);
> CREATE INDEX ON my.table_current (...);
> ALTER TABLE my.table_current
>   ADD CONSTRAINT fk_name
>   FOREIGN KEY (...)
>   REFERENCES ...(...) ON UPDATE ... ON DEL