psql removes dashed comments

2022-04-07 Thread Boris Zentner
Hi,

I was wondering why psql loses dashed comments and what can be done about this 
misbehaviour.

I often get some SQL, paste it into psql, run it, and edit it via \e. It takes a few 
iterations until everything works.
But psql removes the comments, which creates a lot of extra work to restore the 
comments along with the changes to the query.

Here is an example:

# start psql, paste something, run it, edit with \e, rerun, and so on. In the end 
all dashed comments are removed.


psql -Xe postgres
psql (14.1)
Type "help" for help.

postgres=# select 1, -- one
 2, /* two */
 3 -- three
;
select 1,
 2, /* two */
 3
;
 ?column? | ?column? | ?column?
----------+----------+----------
        1 |        2 |        3
(1 row)

postgres=# \e
select 1,
 2, /* two */
 3
;
 ?column? | ?column? | ?column?
----------+----------+----------
        1 |        2 |        3
(1 row)

--
Boris






Re: psql removes dashed comments

2022-04-07 Thread Boris Zentner
> I was wondering why psql loses dashed comments and what can be done about 
> this misbehaviour.
> 
> postgres=# \e
> 
> 
> Because \e treats the query buffer as a single line of code and dashed 
> comments cannot be used, just like meta-commands cannot be used.
> 
> Including a filename should establish the behavior you desire.
> 
Thanks David! \e filename does indeed help a bit, but the file must exist and I 
have to type and invent the filename before I start.
My workaround was \! vim xyz.sql, but it is not satisfying.

Without thinking too deeply about this, maybe a new meta-command could help?
Something like \eee that fills an initially empty temporary file from the query 
buffer on the first call and reuses it over the lifetime of the psql session.
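In the meantime, the existing \e filename route can be scripted around. A minimal sketch of that workaround (the scratch path and seed query are examples only):

```shell
# Pre-create a scratch file, since \e needs an existing file to edit,
# and seed it with the pasted query.
SCRATCH=/tmp/scratch.sql
printf 'select 1, -- one\n' > "$SCRATCH"
cat "$SCRATCH"
# Then, inside psql, run:  \e /tmp/scratch.sql
# psql re-reads the file on every \e, so dashed comments survive
# between edit/run iterations.
```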



Re: Delete values from JSON

2023-03-18 Thread Boris Zentner


> On 17.03.2023 at 08:56, Andreas Joseph Krogh wrote:
> 
> Hi, in PG-14, how do I delete the keys "dunsNumber": "NaN":
>  
> {
>   "sessionId": "ce6fc9d0-5923-4d71-9f7e-ae60f35c45d6",
>   "details": [
> {
>   "keyInformation": {
> "dunsNumber": "NaN",
> "organizationType": "LIMITED_COMPANY"
>   }
> },
> {
>   "keyInformation": {
> "dunsNumber": "123",
> "organizationType": "LIMITED_COMPANY"
>   }
> }
>   ],
>   "nisse": 123
> }
>  
> So that the result becomes:
>  
> {
>   "sessionId": "ce6fc9d0-5923-4d71-9f7e-ae60f35c45d6",
>   "details": [
> {
>   "keyInformation": {
> "organizationType": "LIMITED_COMPANY"
>   }
> },
> {
>   "keyInformation": {
> "dunsNumber": "123",
> "organizationType": "LIMITED_COMPANY"
>   }
> }
>   ],
>   "nisse": 123
> }
>  
> Thanks.

Hi Andreas, this works for me.

➤ 2023-03-18 14:23:51 CET bz@[local]:5432/test
=# WITH data(j) AS (
       VALUES (CAST('{ "sessionId": "ce6fc9d0-5923-4d71-9f7e-ae60f35c45d6", "details": [ { "keyInformation": { "dunsNumber": "NaN", "organizationType": "LIMITED_COMPANY" } }, { "keyInformation": { "dunsNumber": "123", "organizationType": "LIMITED_COMPANY" } } ], "nisse": 123 }' AS jsonb))
   )
   SELECT jsonb_pretty(jsonb_set(j
        , '{details}'
        , (SELECT jsonb_agg(CASE
                              WHEN ((elem -> 'keyInformation') ->> 'dunsNumber') = 'NaN'
                                THEN jsonb_set(elem
                                     , '{keyInformation}'
                                     , (elem -> 'keyInformation') - 'dunsNumber')
                              ELSE elem
                            END)
             FROM jsonb_array_elements(data.j -> 'details') AS elem))) AS nice_output
   FROM data
   ;
nice_output
{
    "nisse": 123,
    "details": [
        {
            "keyInformation": {
                "organizationType": "LIMITED_COMPANY"
            }
        },
        {
            "keyInformation": {
                "dunsNumber": "123",
                "organizationType": "LIMITED_COMPANY"
            }
        }
    ],
    "sessionId": "ce6fc9d0-5923-4d71-9f7e-ae60f35c45d6"
}
(1 row)
Time: 0,731 ms

--
Boris




Re: cast bytea to macaddr

2023-09-28 Thread Boris Zentner

Hi Matt,

> 
> 1. What do the double parenthesis '(( Values  ))' mean?

Nothing; it was easier to keep typing on the phone and close one more. A single 
pair of parentheses is enough.

> 
> 2. What is the 'x(field)' in the column alias?

x(field) just makes the example more readable: it names the example table 
x with the column field, to illustrate the main part of the example, 
encode(yourfield, 'hex')::macaddr.
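Putting the pieces together, a hedged sketch of the full cast (the sample bytea value is made up; it assumes the column holds exactly 6 raw bytes):

```sql
-- macaddr's input parser accepts the bare 12-digit hex form,
-- so encoding the bytea as hex is enough for the cast
SELECT encode(x.field, 'hex')::macaddr AS mac
FROM (VALUES ('\x08002b010203'::bytea)) AS x(field);
-- mac comes back as 08:00:2b:01:02:03
```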




Re: pg_repack and locks

2025-01-19 Thread Boris Zentner
Hi there,

I think you can use -T 3600 -D (--wait-timeout plus --no-kill-backend) if you don't want to kill the backends.
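Spelled out, that could look like the following (the table and database names are invented for illustration):

```shell
# wait up to 3600 s for conflicting locks; with --no-kill-backend (-D)
# pg_repack gives up after the timeout instead of cancelling or
# terminating the blocking backends
pg_repack --wait-timeout=3600 --no-kill-backend --table=mytable mydb
```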
--
Boris


> On 19.01.2025 at 13:52, nicolas75...@yahoo.fr wrote:
> 
> Hi
> 
> Thanks for the help, but this will not help; killing other processes is not safe.
> The right approach would be for the pg_repack tool to include a timeout so that, 
> after the delay expires, it stops waiting and discards the repack action.
> 
> But thanks again for your proposition.
> Regards,
> Nicolas
> 
> -----Original Message-----
> From: dep...@depesz.com 
> Sent: Monday, 13 January 2025 16:42
> To: nicolas 
> Cc: pgsql-general@lists.postgresql.org
> Subject: Re: pg_repack and locks
> 
>> On Mon, Jan 13, 2025 at 12:40:06PM +, nicolas wrote:
>> Hello everyone,
>> 
>> We are using postgresql v12 and added the pg_repack package
>> 
>> Since I cannot stop other processes, I use "--no-kill-backend" and
>> pg_repack will wait indefinitely until it gets the lock
>> 
>> I sometimes get a locking problem:
>> 
>> sometimes I get this message indefinitely: “NOTICE: Waiting for 1 
>> transactions to finish. First PID: ”
>> 
>> This is a real problem because the database is used all the time.
>> If I kill the process, a trigger on the source table will still exist, and 
>> temporary tables and types will still exist in the repack schema. The tables are 
>> not empty if data has been modified in the source table during the repack.
>> 
>> If I drop the repack tables, I will lose all data modifications
>> done on the source table. How can I properly clean up the database?
> 
> Allow it to kill offending backends after some time? For example -T 7200?
> 
> Best regards,
> 
> depesz
> 