[no subject]

2022-06-16 Thread Rama Krishnan
Hi ,

What type of temp database should I use to maintain logs or export logs?

Thanks
Rama-krishnan


Re:

2022-06-16 Thread Adrian Klaver

On 6/16/22 07:08, Rama Krishnan wrote:

Hi ,

What type of temp database should I use to maintain logs or export logs?


There is no temp database. You will need to be more specific about what 
you want to achieve.




Thanks
Rama-krishnan



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: ERROR: failed to find conversion function from key_vals_nn to record[]

2022-06-16 Thread Tom Lane
"David G. Johnston"  writes:
> The fact that a domain over an array isn’t being seen as an array here
> seems like a bug.

Hmm.  The attached quick-hack patch seems to make this better, but
I'm not sure whether there are any cases it makes worse.

regards, tom lane

diff --git a/src/backend/parser/parse_oper.c b/src/backend/parser/parse_oper.c
index cf64afbd85..f36b23092d 100644
--- a/src/backend/parser/parse_oper.c
+++ b/src/backend/parser/parse_oper.c
@@ -867,9 +867,11 @@ make_scalar_array_op(ParseState *pstate, List *opname,
 	 * Now switch back to the array type on the right, arranging for any
 	 * needed cast to be applied.  Beware of polymorphic operators here;
 	 * enforce_generic_type_consistency may or may not have replaced a
-	 * polymorphic type with a real one.
+	 * polymorphic type with a real one.  RECORD acts like a polymorphic type
+	 * for this purpose, too.
 	 */
-	if (IsPolymorphicType(declared_arg_types[1]))
+	if (IsPolymorphicType(declared_arg_types[1]) ||
+		declared_arg_types[1] == RECORDOID)
 	{
 		/* assume the actual array type is OK */
 		res_atypeId = atypeId;


Operators on ranges with a domain subtype do not implicitly cast operands of the domain's base type

2022-06-16 Thread Daniel Popowich
I'm using PG 14 and have an application using a custom range with a custom
domain subtype. My problem: PG does not do an implicit cast from the
domain's base type when used with range operators. Below is a script that
will demonstrate the problem (and below that, the output of running it with
psql).

What I'm looking for: the magic to add to my schema so I do not have to add
explicit casts throughout my application code when using the base type of a
domain as an operand to a range operator using a subtype of the domain.  How
do we get implicit casts?

Thanks!

Daniel

Here's my script.  Note it creates a schema to isolate what it generates.
Output of running it follows.

--
\set ECHO all
\set VERBOSITY verbose

select version();
create schema _range_domain_cast;

set search_path to _range_domain_cast,public;

-- domain with underlying type of integer (what constraints we might
-- place on the integer values are not germane to the issue so they're
-- left out).
create domain zzzint integer;

-- a range on our domain
create type zzzrange as range (subtype = zzzint);

-- similar range, but on an integer
create type myintrange as range (subtype = integer);

-- these work
select myintrange(10, 20) @> 15;         -- subtype is integer and this just works
select zzzrange(10, 20) @> 15::zzzint;   -- subtype is zzzint and this works with the explicit cast

-- as does using integer where zzzint is expected
create table foo (
   x zzzint
);
insert into foo select * from generate_series(1,3);
select * from foo;

-- But this fails! - without the explicit cast, PG doesn't do the implicit cast
-- even though integer is the underlying type of the domain
select zzzrange(10, 20) @> 15;
--


Here is the output when running it:


\set VERBOSITY verbose
select version();
                                                   version
-------------------------------------------------------------------------------------------------------------
 PostgreSQL 14.2 on x86_64-pc-linux-musl, compiled by gcc (Alpine 10.3.1_git20211027) 10.3.1 20211027, 64-bit
(1 row)

create schema _range_domain_cast;
CREATE SCHEMA
set search_path to _range_domain_cast,public;
SET
-- domain with underlying type of integer (what constraints we might
-- place on the integer values are not germane to the issue so they're
-- left out).
create domain zzzint integer;
CREATE DOMAIN
-- a range on our domain
create type zzzrange as range (subtype = zzzint);
CREATE TYPE
-- similar range, but on an integer
create type myintrange as range (subtype = integer);
CREATE TYPE
-- these work
select myintrange(10, 20) @> 15;         -- subtype is integer and this just works
 ?column?
----------
 t
(1 row)

select zzzrange(10, 20) @> 15::zzzint;   -- subtype is zzzint and this works with the explicit cast
 ?column?
----------
 t
(1 row)

-- as does using integer where zzzint is expected
create table foo (
   x zzzint
);
CREATE TABLE
insert into foo select * from generate_series(1,3);
INSERT 0 3
select * from foo;
 x
---
 1
 2
 3
(3 rows)

-- But this fails! - without the explicit cast, PG doesn't do the implicit cast
-- even though integer is the underlying type of the domain
select zzzrange(10, 20) @> 15;
ERROR:  42883: operator does not exist: zzzrange @> integer
LINE 1: select zzzrange(10, 20) @> 15;
                                ^
HINT:  No operator matches the given name and argument types. You might need
to add explicit type casts.
LOCATION:  op_error, parse_oper.c:647
--


Re: Operators on ranges with a domain subtype do not implicitly cast operands of the domain's base type

2022-06-16 Thread Tom Lane
Daniel Popowich  writes:
> -- domain with underlying type of integer (what constraints we might
> -- place on the integer values are not germane to the issue so they're
> -- left out).
> create domain zzzint integer;

> -- a range on our domain
> create type zzzrange as range (subtype = zzzint);

Why is this a good idea?

ISTM the subtype of a range type shouldn't really be a domain.
The range type depends very fundamentally on the ordering properties
of the subtype, so trying to put some abstraction in there seems a
bit misguided.  Moreover, there are a whole bunch of weird semantics
issues that arise if the domain tries to restrict the set of allowed
values.  For instance, if the domain disallows "3" (maybe it allows
only even integers) then what does a range (2,10) really mean?
Should we be expected to figure out that it's effectively [4,10)?
What pitfalls does that create for, say, multirange operators?

You could usefully make a domain over the range type and put some
restrictions at that level, perhaps.

regards, tom lane
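
A minimal sketch of the alternative Tom describes, with hypothetical names (untested here): keep the range subtype as a plain integer, and put the restrictions in a domain over the range type instead.

```sql
-- Sketch: range over the base type, domain over the range.
-- "intrange"/"evenrange" and the even-bounds rule are illustrative.
create type intrange as range (subtype = integer);

create domain evenrange as intrange
  check (lower(value) % 2 = 0 and upper(value) % 2 = 0);

-- Operators are assumed to resolve against the base range type,
-- so no per-value casting is needed on the scalar side.
select intrange(10, 20) @> 15;
```

This keeps the ordering semantics of the range type tied directly to integer, while the domain's check constraint applies only at the whole-range level.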




Re: Operators on ranges with a domain subtype do not implicitly cast operands of the domain's base type

2022-06-16 Thread Bryn Llewellyn
> dpopow...@artandlogic.com  wrote:
> 
> I'm using PG 14 and have an application using a custom range with a custom 
> domain subtype. My problem: PG does not do an implicit cast from the domain's 
> base type when used with range operators.

I hit what looks to be the same issue. I reported in here:

https://www.postgresql.org/message-id/37d76918-6fd7-4598-a409-a7091687d...@yugabyte.com
 


David Johnston said that he thought that it was a bug.

My workaround is to typecast the two operands explicitly back to their base 
types.

I don't think that there's any magic to fix it declaratively. My guess is that 
you might work around it with a user-defined operator for the domains in 
question that hides the typecasts in its implementation function. (This has 
worked for me in other cases for other reasons. But I didn't try that in my 
testcase.)
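
Using the zzzrange/zzzint names from the earlier script, that guessed workaround might look like this (a sketch only; as noted above, it was not tried in the testcase):

```sql
-- Hypothetical workaround: a wrapper function that hides the cast,
-- exposed as a "@>" operator taking a plain integer on the right.
create function zzzrange_contains(r zzzrange, v integer)
returns boolean
language sql immutable
as $$ select r @> v::zzzint $$;

create operator @> (
  leftarg  = zzzrange,
  rightarg = integer,
  function = zzzrange_contains
);

select zzzrange(10, 20) @> 15;  -- now resolves without an explicit cast
```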

Re:

2022-06-16 Thread Abdul Qoyyuum
Best if you just maintain the logs in a log file instead of in a database
table.

On Thu, Jun 16, 2022 at 10:08 PM Rama Krishnan  wrote:

> Hi ,
>
> What type of temp database should I use to maintain logs or export logs?
>
> Thanks
> Rama-krishnan
>


-- 
Abdul Qoyyuum Bin Haji Abdul Kadir
HP No: +673 720 8043


Any way to understand state after data corruption failures during startup.

2022-06-16 Thread Harinath Kanchu
Hello,

If Postgres fails when starting as a standby due to WAL file corruption, can 
we get the status through an exit code or by parsing the pg_control file?

I can see the “XX001” code is emitted in the logs, but parsing logs doesn't seem 
to be a good option. Please suggest.

Thanks.





Re: ERROR: failed to find conversion function from key_vals_nn to record[]

2022-06-16 Thread Bryn Llewellyn
> david.g.johns...@gmail.com wrote:
> 
>> b...@yugabyte.com wrote:
>> 
>> This is what the doc promises. But how can you see it as anything but a bug? 
>> The subquery evaluates to "null", and only then is the attempt made to 
>> create a new row which self-evidently violates the domain's constraint. How 
>> is it any different from this:
> 
> Because NULL is doing double-duty. The absence of any possible value is 
> represented by null, as is knowing that a value exists but not knowing what 
> that value is. The most obvious example of this problem is...
> 
> ...The resultant value of "b.lbl" is null in both queries, but not for the 
> same reason (left join missing semantics pre-empt null type value semantics).
> 
> So, yes, queries can produce NULL even for domains that are defined not null.
> 
> If you want protection from null values in your database tables you must 
> define your columns to not accept nulls. It becomes a habit and can be 
> readily checked for in the catalogs (I suggest adding a column comment for 
> why columns defined as null are allowed to be null, then you can query the 
> column contents to exclude those columns where null was intended - or go with 
> your idea and just report every column as non-conforming. COMMENT ON 
> table.column IS '@NULLABLE - optional information the customer might not 
> provide').

First off, thank you very much for the tip to use "comment on" together with 
catalog queries to police a practice policy. Meanwhile, I've noted the 
unsurprising fact that you can write a "not null" constraint in the "create 
table" statement that defines a column's data type using a domain that, too, 
has such a constraint.
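
The catalog query behind that tip could look something like this (a sketch; the '@NULLABLE' comment marker and the schema filter are assumptions from the quoted suggestion):

```sql
-- Report user-table columns that allow NULL but carry no '@NULLABLE'
-- justification in their comment.
select a.attrelid::regclass as table_name,
       a.attname            as column_name
from   pg_attribute a
join   pg_class c     on c.oid = a.attrelid
join   pg_namespace n on n.oid = c.relnamespace
where  c.relkind = 'r'
and    n.nspname not in ('pg_catalog', 'information_schema')
and    a.attnum > 0
and    not a.attisdropped
and    not a.attnotnull
and    coalesce(col_description(a.attrelid, a.attnum), '')
         not like '@NULLABLE%';
```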

Maybe a slightly more lightweight practice could be to use domains like 
"text_nn" and "text_uc" (for unconstrained text) and then in "create table" 
statements and similar add the explicit constraint for the "_nn" case. Using 
"text_uc" and similar would support searching the catalog, too. And both 
approaches are vulnerable to ordinary human error that would be very hard to 
detect. Typing an explanation in any kind of comment, including external prose 
doc, always has this character.

Back to NULLs...

Your code examples ran without error and produced the results that you 
described. I do understand the fact that, on its face,  the NULLs in the two 
cases arise for different reasons. But this (still) seems to me to be a 
distinction without a difference. It rather reminds me of my earlier discussion 
with you (all) about the distinction (in the world of JSON using "jsonb") 
between the presence of an object key "k" with the value "JSON null" and the 
absence of key "k".

The semantic proposition behind the "outer join", as it seems to me, is 
inextricably bound up with the notion that, in the resulting rows, one table 
might not have a partner row with the other. (It doesn't matter here which 
table lacks the partner or if you decide to spell your query so that "right" is 
the appropriate choice or "left" is—as long as you spell the whole thing 
correctly to express your intention.) And the "outer join" semantics bring the 
notion that you simply have no information about the facts that, were it 
present, the missing row might have given you. Whoever it was on the Committee 
back in the day decided in concert to represent this "no information" outcome, 
in the relation that results from an "outer join", as an "emergent" SQL NULL.

I've talked endlessly about NULL, over the years and face-to-face, with 
colleagues whose reasoning ability and lucidity I hugely respect. They are 
unwavering in how they explain NULL. It says simply: "I have absolutely no 
information about the value that I sought to interrogate." And these experts 
argue that there are no flavors of the bare fact of having no information. They 
argue, too, that to say "the value of this variable (or row-column 
intersection) is NULL" is an oxymoron because the absence of information is not 
a value—in the purist sense.

I copied a reasonably realistic example, below, where the "outer join" is 
between two physical tables, "genres" and "books". They have properly defined 
PKs. And there's an FK that points in the direction that the table names 
suggest. Here's the requirement:

«
Each book may be labelled by exactly one (known) genre.
Each genre may label one or several books.
»

It's the canonical list-of-values use case. It's perfectly OK to have a book 
for whose genre there is simply (as yet) no information available. And it's 
also perfectly OK to have a genre that, as yet, labels no book.

The PK columns inevitably end up as "not null". And the FK column is 
deliberately nullable.

I avoided domains altogether, using explicit "not null" constraints, or not, as 
appropriate to the requirements set out above.

Notwithstanding the proper PK and FK declarations, I cannot stop the "genres" 
table having a row that has no "books" partner. And this is exactly what I want.
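
The schema being described (the original message is truncated below; this reconstruction follows the "genres" fragment quoted later in the thread, and the "books" column names are assumptions):

```sql
-- PK columns end up "not null"; the FK column is deliberately nullable.
create table genres(
  gk  int   primary key,
  gv  text  not null
);

create table books(
  bk  int   primary key,
  bv  text  not null,
  gk  int   references genres(gk)  -- nullable: genre may be as yet unknown
);
```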

My example crea

Re: ERROR: failed to find conversion function from key_vals_nn to record[]

2022-06-16 Thread David G. Johnston
On Thu, Jun 16, 2022 at 8:28 PM Bryn Llewellyn  wrote:

>
> *Back to NULLs...*
>
> Your code examples ran without error and produced the results that you
> described. I do understand the fact that, on its face,  the NULLs in the
> two cases arise for different reasons. But this (still) seems to me to be a
> distinction without a difference. It rather reminds me of my earlier
> discussion with you (all) about the distinction (in the world of JSON using
> "jsonb") between the presence of an object key "k" with the value "JSON
> null" and the absence of key "k".
>

Hadn't thought of that but indeed I suspect SQL, whether conscious or not,
has influenced the design of dealing with JSON in an SQL database to this
extent.


> The semantic proposition behind the "outer join", as it seems to me, is
> inextricably bound up with the notion that, in the resulting rows, one
> table might not have a partner row with the other. (It doesn't matter here
> which table lacks the partner or if you decide to spell your query so that
> "right" is the appropriate choice or "left" is—as long as you spell the
> whole thing correctly to express your intention.) And the "outer join"
> semantics bring the notion that you simply have no information about the
> facts that, were it present, the missing row might have given you. Whoever
> it was on the Committee back in the day, decided in concert to represent
> this "no information" outcome, in the relation that results for an "outer
> join", as an "emergent" SQL NULL.
>

> I've talked endlessly about NULL, over the years and face-to-face, with
> colleagues whose reasoning ability and lucidity I hugely respect. They are
> unwavering in how they explain NULL. It says simply: "I have absolutely no
> information about the value that I sought to interrogate." And these
> experts argue that there are no flavors of the bare fact of having no
> information. They argue, too, that to say "the value of this variable (or
> row-column intersection) is NULL" is an oxymoron because the absence of
> information is not a value—in the purist sense.
>

At a high level I would agree.  But those NULLs are introduced by two
different processes, and sometimes that fact helps to explain reality.

It's like: in what city does John reside, and in what country is that
location? If I don't know the first I won't know the second, even though I
do know what country every city in my database is located in (country
is not null; it's just null in reference to this question about John).
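
That city/country example can be sketched like this (tables and data are illustrative, not from the thread):

```sql
-- Two different origins for NULL: a stored NULL ("John's city is
-- unknown") versus an outer-join NULL ("no matching row"), even though
-- cities.country itself is declared NOT NULL.
create table cities(city text primary key, country text not null);
create table people(name text primary key, city text references cities);

insert into cities values ('Paris', 'France');
insert into people values ('John', null);  -- city unknown

select p.name, p.city, c.country
from people p
left join cities c on c.city = p.city;
-- country comes back NULL for John only because the join found no row.
```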

IOW, it is quite possible for a design to have had different tokens for the
two cases, which means they differ in some regard.  That it doesn't work
that way is because for nearly all cases the difference is immaterial, and
so separate tokens would be more annoying than helpful.  The model is
intentionally papering over reality (i.e., is wrong in some sense) in the
interest of being more useful.



> (A table without a primary key violates the rule of proper practice. This
> is why I found David's terse example too unrealistic to illustrate the
> issue at hand here.)
>

vala is the PK, valb is the FK.  Table B's PK wasn't relevant.


> *Can anybody show me an implementation of a realistic use case that
> follows proper practice — like "every table must have a primary key", "a foreign
> key must refer to a primary key", and "joins may be made only "on" columns
> one of which has a PK constraint and the other of which has a FK
> constraint" — where using a not nullable data type brings a problem that
> wouldn't occur if the column were defined with a nullable data type and an
> explicit "not null" constraint?*
>


Nothing obvious comes to mind.  But frankly, proper practice includes
trying to write idiomatic code for the language you are using so others
familiar with the language can learn your code more easily.  You are
violating this to an extreme degree.  I do not think it to be a good
trade-off.  SQL writers are practical people and the idioms largely avoid
any downsides that arise from SQL not being some paragon of language
design.

>
> *-- "\d genres" shows "gk" with a "not null" constraint, whether I write
> it*
> *-- or not. And convention seems to say "don't clutter your code by writing
> it".*
> *create table genres(*
> *  gk  int   primary key,*
> *  gv  text  not null*
> *  );*
>

"Primary Key" is defined to be the application of both UNIQUE and NOT NULL
constraints.  It's not unlike saying "serial" to mean "integer with an
associated sequence and default".  But let's not go there, please?

David J.


Re: Any way to understand state after data corruption failures during startup.

2022-06-16 Thread Kyotaro Horiguchi
At Thu, 16 Jun 2022 19:46:25 -0700, Harinath Kanchu  wrote in:
> 
> Hello,
> 
> If postgres fail when it is starting as standby due to WAL file corruption, 
> can we get the status through any exit code or by parsing pg_control file ?
> 
> I can see the  “XX001” code is emitted in logs but parsing logs doesn’t seem 
> to be a good option. Please suggest.

I'm not sure exactly what you want to achieve here, but there is
negative evidence for the reliability of detecting a WAL-corruption
failure without searching the server logs.

[64923:startup] [XX000] 2022-06-17 13:30:52.493 JST PANIC:  incorrect resource 
manager data checksum in record at 0/360

At least in this case, there's no trace of a server stop due to WAL
corruption left behind other than in the server log.
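
One place to look without parsing logs is pg_controldata, though, consistent with the PANIC above, it records nothing specific to the WAL corruption (a sketch; the $PGDATA path is an assumption):

```shell
# Inspect the cluster state recorded in pg_control after a failed start.
# "Database cluster state" distinguishes e.g. "in archive recovery" from
# "shut down", but carries no error code for a WAL-corruption PANIC.
pg_controldata "$PGDATA" | grep -E '^(Database cluster state|Latest checkpoint location)'
```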

regards.

-- 
Kyotaro Horiguchi
NTT Open Source Software Center




Re: ERROR: failed to find conversion function from key_vals_nn to record[]

2022-06-16 Thread Bryn Llewellyn
> david.g.johns...@gmail.com wrote:
> 
>> b...@yugabyte.com wrote:
>> 
>> Can anybody show me an implementation of a realistic use case that follows 
> proper practice — like "every table must have a primary key", "a foreign key must 
>> refer to a primary key", and "joins may be made only "on" columns one of 
>> which has a PK constraint and the other of which has a FK constraint" — 
>> where using a not nullable data type brings a problem that wouldn't occur if 
>> the column were defined with a nullable data type and an explicit "not null" 
>> constraint?
> 
> Nothing obvious comes to mind. But frankly:
> 
> proper practice includes trying to write idiomatic code for the language you 
> are using so others familiar with the language can learn your code more 
> easily. You are violating this to an extreme degree.
> 
> I do not think it to be a good trade-off. SQL writers are practical people 
> and the idioms largely avoid any downsides that arise from SQL not being 
> some paragon of language design.
> 
> -- "\d genres" shows "gk" with a "not null" constraint, whether I write it
> -- or not. And convention seems to say "don't clutter your code by writing it".
> create table genres(
>   gk  int   primary key,
>   gv  text  not null
>   );
> 
> "Primary Key" is defined to be the application of both UNIQUE and NOT NULL 
> constraints...

Yes, I know what "primary key" implies. I meant only to emphasize that the 
source column for what the "outer join" projects has a not null constraint, 
that it doesn't apply to the projection of that column, that this is perfectly 
understandable, and that this isn't a problem. Never mind.

What part of the code that I showed (the "genres" and "books" use case) 
violated, to an extreme degree, what you would have wanted me to write—and in 
what way?