Re: [External] Re: wal_level logical for streaming replication

2019-08-29 Thread Vijaykumar Jain
Thanks Laurenz,


Regards,
Vijay


On Thu, Aug 29, 2019 at 2:07 AM Laurenz Albe  wrote:
>
> On Wed, 2019-08-28 at 21:44 +0530, Vijaykumar Jain wrote:
> > If I change wal_level back to replica, will it corrupt wal? coz it
> > will then be having diff information ( r format of data ?)
>
> That's why you have to restart the server when you change that
> parameter.  This way, there will be a checkpoint marking the change.
>
> > What is the base reason as to why ddls are not sent via logical
> > replication but the hot standby has ddl changes reflected absolutely
> > fine ?
>
> Physical streaming replication just replicates the database files,
> so it also replicates DDL statements, which are just changes to the
> catalog tables.
>
> Basically, anything that can be recovered can be replicated.
>
> Logical replication has to perform "logical decoding", that is,
> it has to translate the (physical) WAL information into logical
> information (what row was modified in which table).
>
> So this is much more complicated.  It should be possible in theory,
> but nobody has got around to solving the difficulties involved yet.
>
> > and there is one large limitation on large object support for logical
> > replication?
> > Where can I see the limitation on size or is it just certain data
> > types ?
>
> This has nothing to do with the size; I guess the answer is the same as
> above.  One problem that I can see immediately is that primary and
> standby don't share the same OIDs, yet every large object is identified
> by its OID.  So I think this is a fundamental problem that cannot be
> solved.
>
> Yours,
> Laurenz Albe
> --
> Cybertec | https://www.cybertec-postgresql.com
>
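For reference, a minimal sketch of the change-and-restart sequence described
above (assumes superuser access; the pg_ctl invocation and data directory path
are illustrative):

  -- requires a server restart so that a checkpoint marks the change
  ALTER SYSTEM SET wal_level = replica;   -- or 'logical'
  -- restart the server, e.g.: pg_ctl -D /path/to/datadir restart
  SHOW wal_level;                         -- verify after the restart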




RE: Question about password character in ECPG's connection string

2019-08-29 Thread Egashira, Yusuke
> > Perhaps it helps to URL-encode the & in the password as %26?
> 
> The OP already did without success.

Yes, I had already tried URL-encoding the '&', without success.

> Could it be needed to escape the & with the backslash or single ticks?

Thanks. I tested with "pass\&word" and "'pass&word'".
However, they also failed...

The ways I tested are as follows.
- [ECPG] Connection option with plain string :
  EXEC SQL CONNECT "tcp:postgresql://localhost?user=myuser&password=pass&word"
-> Failed.
- [ECPG] Connection option with percent-encoded string :
  EXEC SQL CONNECT "tcp:postgresql://localhost?user=myuser&password=pass%26word"
-> Failed.
- [ECPG] Connection option with backslash escaped string :
  EXEC SQL CONNECT "tcp:postgresql://localhost?user=myuser&password=pass\&word"
-> Failed. (and gcc claims warning)
- [ECPG] Connection option with single-quoted string :
  EXEC SQL CONNECT "tcp:postgresql://localhost?user=myuser&password='pass&word'"
-> Failed.
- [ECPG] USING or IDENTIFIED BY phrase :
  EXEC SQL CONNECT "tcp:postgresql://localhost" USER "myuser" USING "pass&word"
  EXEC SQL CONNECT "tcp:postgresql://localhost" USER "myuser" IDENTIFIED BY 
"pass&word"
-> Success.
- [psql] Connection option with plain string :
  psql "postgresql://localhost?user=myuser&password=pass&word"
-> Failed.
- [psql] Connection option with percent-encoded string :
  psql "postgresql://localhost?user=myuser&password=pass%26word"
-> Success.

According to my tests, ECPG's connection_option does not seem to accept the '&'
character in a password at all.
ECPG CONNECT's connection_option apparently has some restricted characters.
I will use the "IDENTIFIED BY" phrase to connect to the database in my ECPG
application.
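For reference, a minimal sketch of the working style with host variables, so the
password is never parsed as part of the connection string (identifier names are
illustrative):

  EXEC SQL BEGIN DECLARE SECTION;
  char *target = "tcp:postgresql://localhost";
  char *user   = "myuser";
  char *passwd = "pass&word";   /* the '&' is not interpreted here */
  EXEC SQL END DECLARE SECTION;

  EXEC SQL CONNECT TO :target USER :user USING :passwd;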

Database role passwords often contain '&' in our environment.
I hope this limitation will be documented, because it causes confusion.

Regards.
--
Yusuke, Egashira



Re: Question about password character in ECPG's connection string

2019-08-29 Thread Luca Ferrari
On Thu, Aug 29, 2019 at 1:08 PM Egashira, Yusuke
 wrote:
> According to my tests, ECPG's connection_option does not seem to accept the '&'
> character in a password at all.
> ECPG CONNECT's connection_option apparently has some restricted characters.


As far as I understand from

unwanted characters are '&' and '='. Even though the connection string
looks like a URL, it is handled as plain text without any sort of
escaping.



> I hope this limitation will be documented, because it causes confusion.

If this is confirmed, I agree this should be documented.

Luca




Security patch older releases

2019-08-29 Thread Erika Knihti-Van Driessche
Hi,

I think I've seen this question pass by earlier, but can't immediately find
it again..

So, I have postgres 9.6.9 and the customer wants it updated to 9.6.11,
because that is their current testing "sandbox" version, which was not
installed by me and is also on another linux distro.

Now, the newest security update is 9.6.15.. If I install from that RHEL
repository, I will automatically get that version, and won't be able to
choose the exact minor release..

So my question is, is there a way to get the 9.6.11 update?

Thanks in advance!
Erika


implicit transaction changes trigger behaviour

2019-08-29 Thread Willy-Bas Loos
Hi,

I currently have a fairly complex use case to solve, and one thing I tried
was a deferred constraint trigger. I'm not sure if this solution is the way
to go, but anyway: as I was testing my code, I noticed that the trigger
behaves differently depending on whether or not I explicitly use BEGIN and
COMMIT, even though there is only 1 query in the transaction.
I am wondering if this is a bug in postgresql?
I'm using postgresql 10.10 on Debian.

Here's an example that reproduces the behaviour:

/*
https://www.postgresql.org/docs/10/sql-createtrigger.html
Constraint triggers must be AFTER ROW triggers on plain tables (not foreign
tables). They can be fired either at the end of the statement causing the
triggering event, or at the end of the containing transaction; in the
latter case they are said to be deferred. A pending deferred-trigger firing
can also be forced to happen immediately by using SET CONSTRAINTS.
Constraint triggers are expected to raise an exception when the constraints
they implement are violated.
*/

create table a(a_id serial primary key);
create table b(b_id serial primary key, a_id integer not null, type integer not null);

create or replace function has_1b_type1() returns trigger as $$
declare
    n_b_type1 integer; -- the number of records in table b with type 1 that correspond to OLD.id
begin
    select count(*) into n_b_type1
    from b
    join a on b.a_id = a.a_id
    where b.type = 1;
    if n_b_type1 != 1 then
        raise exception 'Each record of a must have exactly 1 corresponding records in b of type 1. But after this delete the a-record with id % would have % b-records of type 1, so the operation has been cancelled.', OLD.a_id, n_b_type1;
    else
        -- The return value is ignored for row-level triggers fired after an operation, and so they can return NULL.
        return null;
    end if;
end
$$ language plpgsql stable;

create constraint trigger tr_has_1b_type1_del
    after delete on b
    deferrable initially deferred for each row
    execute procedure has_1b_type1();

begin;
insert into a (a_id)
values(nextval('a_a_id_seq'));
insert into b(a_id, type)
values(currval('a_a_id_seq'), 1);
--also some other data, just to illustrate
insert into b(a_id, type)
values(currval('a_a_id_seq'), 2);
insert into b(a_id, type)
values(nextval('a_a_id_seq'), 3);
commit;

begin;
delete from b;
commit;
--ERROR:  Each record of a must have exactly 1 corresponding records in b
--of type 1. But after this delete the a-record with id 1 would have 0
--b-records of type 1, so the operation has been cancelled.

delete from b;
--DELETE 3
--Query returned successfully in 91 msec.


-- 
Willy-Bas Loos


Re: Security patch older releases

2019-08-29 Thread Magnus Hagander
On Thu, Aug 29, 2019 at 2:05 PM Erika Knihti-Van Driessche <
erika.kni...@gmail.com> wrote:

> Hi,
>
> I think I've seen this question pass by earlier, but don't immediately
> find it back..
>
> So, I have postgres 9.6.9 and customer wants it to be updated to 9.6.11,
> because that is their current testing "sandbox" version, which was not
> installed by me and is also another linux distro.
>
> Now, the newest security update is 9.6.15.. If I download that RHEL
> repository, I will automatically get that version, and won't be able to
> choose the exact patch..
>
> So my question is, is there a way to get the 9.6.11 update?
>
>
PostgreSQL does not release individual security patches. The way to get the
security patch is to install the latest minor version, see
https://www.postgresql.org/support/security/.

If you want to get an individual security patch you will have to cherry-pick
it from git and build your own server from source. But per the above
link, it is really recommended that you don't do that. Instead, do it the
way it's intended, which means installing the latest minor release.

Why would you not want the other security patches, or other important
bugfixes?

-- 
 Magnus Hagander
 Me: https://www.hagander.net/ 
 Work: https://www.redpill-linpro.com/ 


Re: Security patch older releases

2019-08-29 Thread Luca Ferrari
On Thu, Aug 29, 2019 at 2:05 PM Erika Knihti-Van Driessche
 wrote:
> So, I have postgres 9.6.9 and customer wants it to be updated to 9.6.11, 
> because that is their current testing "sandbox" version, which was not 
> installed by me and is also another linux distro.

Binary packages are always at the latest minor version, so you have to
either get your client to test against the latest version, or install it
on your own by downloading the tarball or using tools like pgenv and
the like.

Luca




Re: Security patch older releases

2019-08-29 Thread Erika Knihti-Van Driessche
Hi,

I was a bit afraid of this.. And of course I want all the latest security
improvements, it's not about that.. it was about the other release's
sandbox. Maybe they can just patch their sandbox first, issue solved ;-)

Thanks for all the replies & have a nice end of week!
Erika

On Thu, 29 Aug 2019 at 14:25, Luca Ferrari  wrote:

> On Thu, Aug 29, 2019 at 2:05 PM Erika Knihti-Van Driessche
>  wrote:
> > So, I have postgres 9.6.9 and customer wants it to be updated to 9.6.11,
> because that is their current testing "sandbox" version, which was not
> installed by me and is also another linux distro.
>
> Binary packages are always at the latest minor version, so you have to
> either get your client to test against the latest version, or install it
> on your own by downloading the tarball or using tools like pgenv and
> the like.
>
> Luca
>


RE: Rename a column if not already renamed.?

2019-08-29 Thread Day, David
Hi,

Finally resolved this.  Bottom line: some stupidity / bad analysis on my part.
The scenario was: changes were ported from trunk back to a branch, and that
branch was then rolled back into trunk.

Altering the rename_column function to check that the old and new name did not
both exist was necessary for the merge process to complete.

I ended up with an additional patch in trunk, relevant only to an upgraded
system, to DROP ... IF EXISTS the old column name that had been re-added by a
trunk patch when the branch rolled forward.

Obviously nothing to do with the 9.6 -> 11.3 postgres upgrade.


Again thanks to all for assistance


Dave



-Original Message-
From: Day, David 
Sent: Wednesday, August 21, 2019 2:58 PM
To: 'Adrian Klaver' ; Tom Lane 
Cc: Luca Ferrari ; pgsql-gene...@postgresql.org
Subject: RE: Rename a column if not already renamed.?

Restoring the 9.6 dump into the 11.3 instance? -> Yes.

For the upgrade scenario, I confirmed that both the old column name and the new
name are in the pg_attribute table at the time the patch attempts the rename.
Why both is a big question.
However, it is easy enough to re-write the column rename function to deal with
the possibility of both existing simultaneously.  I will include the redefined
function in the merge patch and see how it goes.

I'll update the thread after some further exploration.

Thanks all for your assistance.


-Original Message-
From: Adrian Klaver [mailto:adrian.kla...@aklaver.com] 
Sent: Wednesday, August 21, 2019 11:47 AM
To: Day, David ; Tom Lane 
Cc: Luca Ferrari ; pgsql-gene...@postgresql.org
Subject: Re: Rename a column if not already renamed.?

On 8/21/19 7:52 AM, Day, David wrote:
> I agree the function could be improved to deal with both old and new name 
> existing simultaneously.
> That is almost certainly the root  cause, and one that I would confirm if the 
> tester and site were currently available to me.
> 
> Our work flow  for this scenario is something like:
> 
> 1.  9.6 pg_dump takes a snapshot of our  9.6  database.
> 2.  Postgres is upgraded/freshly installed to  11.3..
> 3.  The 9.6 database is restored using the version 11 pg_restore tool.

In 3) you are restoring to the new 11.3 instance, correct?

> 
> 4. Once our application process starts up, it sees there is a patch available
> in its old branch that is one greater than its restored 9.6 content.
> That happens to be a merge patch which resets the expectations.
> It attempts to apply all patches in the new branch since the point of
> divergence and runs into my current issue.
>
> It occurs to me I could simply put an exception handler in the rename column
> function and I would likely proceed merrily along.
> But curiosity is killing me and the cat. What is causing the old name to
> persist in the pg_attribute table after the rename?

If you are indeed working on the new instance, pg_attribute would have no
'memory' of the dropped column.  It would seem to me to come down to what is
passed into sys.rename_column() as old_name_, new_name.

> 
> Would a stale function referencing the old column name be a contributor?
> 
> 
> Regards
> 
> 
> Dave Day
> 
> 
> 
> 
> -Original Message-
> From: Tom Lane [mailto:t...@sss.pgh.pa.us]
> Sent: Tuesday, August 20, 2019 4:57 PM
> To: Day, David 
> Cc: Luca Ferrari ; pgsql-gene...@postgresql.org
> Subject: Re: Rename a column if not already renamed.?
> 
> "Day, David"  writes:
>> The error is something like column already exists and
> 
> Not sure about the workflow this function is used within, but maybe you need 
> to consider what to do when both the old and new column names exist.
> Because that sure sounds like what is happening.
> 
>   regards, tom lane
> 
> 
> 
> 
> 


--
Adrian Klaver
adrian.kla...@aklaver.com


Re: implicit transaction changes trigger behaviour

2019-08-29 Thread Luca Ferrari
On Thu, Aug 29, 2019 at 2:16 PM Willy-Bas Loos  wrote:
> delete from b;
> --DELETE 3

Here the trigger is fired 3 times (once for each row), and on every single
firing it finds a row in 'a'; that is, your variable n_b_type1 is always
1, which causes the trigger (fired on each row) not to abort. If you
first delete the row that makes the trigger fail, you will not be able
to make the deletion happen even outside an explicit transaction:

testdb=# delete from b where type = 1;
DELETE
testdb=# delete from b;
ERROR:  Each record of a must have exactly 1 corresponding records in
b of type 1. But after this delete the a-record with id 5 would have 0
b-records of type 1, so the operation has been cancelled.


So it seems to me a problem within the trigger: when executing outside
an explicit transaction your row is deleted last, and this lets the
deletion "iterate" and remove all the rows. Within the transaction,
when the trigger fires, no rows are left, so it fails. Either this is
what you intended, or your query within the trigger is wrong.

Luca




Re: implicit transaction changes trigger behaviour

2019-08-29 Thread Tom Lane
Willy-Bas Loos  writes:
> I currently have a fairly complex use case to solve and one thing i tried
> was a deferred constraint trigger. I'm not sure if this solution is the way
> to go, but anyway: As i was testing my code, i noticed that the trigger
> behaves differently depending on whether or not i explicitly use BEGIN and
> COMMIT, even though there is only 1 query in the transaction.
> I am wondering if this is a bug in postgresql?

I think the issue is that you marked the trigger as STABLE.  That causes
it to use the calling query's snapshot so it doesn't see the updates,
if it's fired during the delete query and not during the subsequent
COMMIT.  If I remove the STABLE label then it works as you expect.

This is probably under-documented but I'm not sure that it should be
considered a bug.

The trigger seems a bit broken besides that, in that the comments claim it
has something to do with the OLD row's id field(s) but the query is not in
fact taking that into account.
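A sketch of both fixes together (drop STABLE so the function sees the triggering
statement's changes, and count only the b-rows belonging to OLD.a_id); this is
an illustration against the tables shown earlier in the thread, not the original
author's code:

  create or replace function has_1b_type1() returns trigger as $$
  declare
      n_b_type1 integer;
  begin
      select count(*) into n_b_type1
      from b
      where b.type = 1
        and b.a_id = OLD.a_id;
      if n_b_type1 != 1 then
          raise exception 'a-record % would have % b-records of type 1', OLD.a_id, n_b_type1;
      end if;
      return null;  -- return value is ignored for AFTER row triggers
  end
  $$ language plpgsql;  -- VOLATILE by default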

regards, tom lane




Selecting rows having substring in a column

2019-08-29 Thread Rich Shepard

Using postgres-11.1 here. My SQL knowledge needs expanding and my web
searches have not found a satisfactory answer. I hope to learn the correct
approach here.

A table (Fishes) has an attribute column stream_trib with values such as
Small Creek trib to Winding River
Roaring River trib to Winding River
and I want to find all rows containing Winding River in that column.

The postgres substring function takes as arguments the substring, starting
position, and length. In my table the starting position varies although the
length remains constant.

I need to learn how to construct a SELECT statement that returns the set of
rows containing the substring 'Winding River'. A pointer to references would
be great; so would a detailed lesson in handling this and similar queries.

Regards,

Rich




Re: Selecting rows having substring in a column

2019-08-29 Thread Adrian Klaver

On 8/29/19 7:13 AM, Rich Shepard wrote:

Using postgres-11.1 here. My SQL knowledge needs expanding and my web
searches have not found a satisfactory answer. I hope to learn the correct
approach here.

A table (Fishes) has an attribute column stream_trib with values such as
 Small Creek trib to Winding River
 Roaring River trib to Winding River
and I want to find all rows containing Winding River in that column.

The postgres substring function takes as arguments the substring, starting
position, and length. In my table the starting position varies although the
length remains constant.

I need to learn how to construct a SELECT statement that returns the set of
rows containing the substring 'Winding River'. A pointer to references 
would

be great; so would a detailed lesson in handling this and similar queries.


https://www.postgresql.org/docs/11/functions-matching.html

create table like_test(fld_1 varchar);

insert into like_test values ('Small Creek trib to Winding River');
insert into like_test values ('Roaring River trib to Winding River');
insert into like_test values ('Roaring River');

test=# select * from like_test where  fld_1 ilike '%Winding River%';
                fld_1
-------------------------------------
 Small Creek trib to Winding River
 Roaring River trib to Winding River
(2 rows)

Using ilike for a case-insensitive search.

If you want more in depth search:

https://www.postgresql.org/docs/11/functions-textsearch.html
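For example, a sketch of the full text search style against the same sample table:

  -- matches on lexemes rather than raw substrings
  select * from like_test
  where to_tsvector('english', fld_1) @@ plainto_tsquery('english', 'Winding River');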


 > Regards,

Rich






--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Selecting rows having substring in a column

2019-08-29 Thread Gary Cowell
Look at the 'LIKE' function

select * from Fishes where stream_trib like '%Winding River%';

You need to 'bookend' your string with '%' for 'LIKE' function, if the
string could match anywhere. If it could only be at the end, you could
use '%Winding River'

If case is an issue, wrap it with upper (or lower)

 where upper(stream_trib) like '%WINDING RIVER%';

You could also use a regular expression, or 'SIMILAR', but 'LIKE' is
often the simplest.

More information here:

https://www.postgresql.org/docs/11/functions-matching.html

On Thu, 29 Aug 2019 at 15:13, Rich Shepard  wrote:
>
> Using postgres-11.1 here. My SQL knowledge needs expanding and my web
> searches have not found a satisfactory answer. I hope to learn the correct
> approach here.
>
> A table (Fishes) has an attribute column stream_trib with values such as
> Small Creek trib to Winding River
> Roaring River trib to Winding River
> and I want to find all rows containing Winding River in that column.
>
> The postgres substring function takes as arguments the substring, starting
> position, and length. In my table the starting position varies although the
> length remains constant.
>
> I need to learn how to construct a SELECT statement that returns the set of
> rows containing the substring 'Winding River'. A pointer to references would
> be great; so would a detailed lesson in handling this and similar queries.
>
> Regards,
>
> Rich
>
>




literal vs dynamic partition constraint in plan execution

2019-08-29 Thread Luca Ferrari
Ok, the title is a bit of a teaser; however, I've got a partitioned table
and one "leaf" has a set of checks against a timestamp field to ensure
that tuples within that table belong to the right year and month:

testdb=# \d respi.y2019m08
...
Partition of: respi.y2019 FOR VALUES IN ('8')
Check constraints:
"y2019_mis_ora_check" CHECK (date_part('year'::text, mis_ora) =
2019::double precision)
"y2019_mis_ora_check1" CHECK (mis_ora >= make_timestamp(2019, 1,
1, 0, 0, 0::double precision) AND mis_ora <= make_timestamp(2019, 12,
31, 23, 59, 59::double precision))
"y2019m08_mis_ora_check" CHECK (date_part('month'::text, mis_ora)
= 8::double precision)
"y2019m08_mis_ora_check1" CHECK (date_part('year'::text, mis_ora)
= 2019::double precision)
"y2019m08_mis_ora_check2" CHECK (mis_ora >= make_timestamp(2019,
8, 1, 0, 0, 0::double precision) AND mis_ora <= make_timestamp(2019,
8, 31, 23, 59, 59::double precision))
"y2019m08_mis_ora_check3" CHECK (mis_ora >= make_timestamp(2019,
8, 1, 0, 0, 0::double precision) AND mis_ora <= make_timestamp(2019,
8, 31, 23, 59, 59::double precision))

So y2019m08 accepts only tuples where 'mis_ora' has a timestamp that
falls in the eighth month of the year.
Now if I look at the plan for this query, everything works as expected
(I disabled parallel scans to see the plan better):

testdb=# explain select * from respi.root where ts >= '2019-08-28
23:35:00.007245' and mis_ora >= '2019-08-29 16:28:48.711482'   order
by ts;
--
 Sort  (cost=353986.27..353991.59 rows=2129 width=40)
   Sort Key: y2019m08.ts
   ->  Append  (cost=0.00..353868.58 rows=2129 width=40)
 ->  Seq Scan on y2019m08  (cost=0.00..353409.93 rows=1 width=40)
   Filter: ((ts >= '2019-08-28 23:35:00.007245'::timestamp
without time zone) AND (mis_ora >= '2019-08-29
16:28:48.711482'::timestamp without time zone))
 ->  Seq Scan on y2019m09  (cost=0.00..28.00 rows=133 width=40)
   Filter: ((ts >= '2019-08-28 23:35:00.007245'::timestamp
without time zone) AND (mis_ora >= '2019-08-29
16:28:48.711482'::timestamp without time zone))
 ->  Seq Scan on y2019m10  (cost=0.00..28.00 rows=133 width=40)
   Filter: ((ts >= '2019-08-28 23:35:00.007245'::timestamp
without time zone) AND (mis_ora >= '2019-08-29
16:28:48.711482'::timestamp without time zone))
 ->  Seq Scan on y2019m11  (cost=0.00..28.00 rows=133 width=40)
   Filter: ((ts >= '2019-08-28 23:35:00.007245'::timestamp
without time zone) AND (mis_ora >= '2019-08-29
16:28:48.711482'::timestamp without time zone))
 ->  Seq Scan on y2019m12  (cost=0.00..28.00 rows=133 width=40)
   Filter: ((ts >= '2019-08-28 23:35:00.007245'::timestamp
without time zone) AND (mis_ora >= '2019-08-29
16:28:48.711482'::timestamp without time zone))


The "as I expected" means that the system starts scanning from
y2019m08 and following (in time) tables, and does not scan previous
time tables.
This works if the mis_ora is compared against a literal timestamp, but
if I simply change it with a dynamic timestamp:

testdb=# explain select * from respi.root where ts >= '2019-08-28
23:35:00.007245' and   mis_ora >= current_timestamp   order by ts;
   QUERY PLAN

 Sort  (cost=4654860.37..4654865.25 rows=1952 width=36)
   Sort Key: r.ts
   ->  Nested Loop  (cost=0.00..4654753.69 rows=1952 width=36)
 Join Filter: (r.sen_id = s.sen_id)
 ->  Append  (cost=0.00..4638927.56 rows=3204 width=32)
   ->  Seq Scan on y2018m01 r  (cost=0.00..31.00 rows=133 width=32)
 Filter: ((ts >= '2019-08-28
23:35:00.007245'::timestamp without time zone) AND (mis_ora >=
CURRENT_TIMESTAMP))
   ->  Seq Scan on y2018m02 r_1  (cost=0.00..31.00
rows=133 width=32)
 Filter: ((ts >= '2019-08-28
23:35:00.007245'::timestamp without time zone) AND (mis_ora >=
CURRENT_TIMESTAMP))
   ->  Seq Scan on y2018m03 r_2  (cost=0.00..31.00
rows=133 width=32)
 Filter: ((ts >= '2019-08-28
23:35:00.007245'::timestamp without time zone) AND (mis_ora >=
CURRENT_TIMESTAMP))
   ->  Seq Scan on y2018m04 r_3  (cost=0.00..31.00
rows=133 width=32)
 Filter: ((ts >= '2019-08-28
23:35:00.007245'::timestamp without time zone) AND (mis_ora >=
CURRENT_TIMESTAMP))

the tables for the past year are also scanned. Moreover, the planner
thinks I will get 133 rows out of, for instance, y2018m01, which is
impossible.
So, have I defined the constraints on each table in the wrong way?


testdb=# select version();
 version
--

Re: Selecting rows having substring in a column [RESOLVED]

2019-08-29 Thread Rich Shepard

On Thu, 29 Aug 2019, Adrian Klaver wrote:


test=# select * from like_test where  fld_1 ilike '%Winding River%';
   fld_1


Adrian,

Aha! I thought of 'like' but forgot about ilike. That's exactly what I need.

Thanks very much,

Rich




Re: Selecting rows having substring in a column

2019-08-29 Thread Rich Shepard

On Thu, 29 Aug 2019, Gary Cowell wrote:


Look at the 'LIKE' function


Gary,

Yes, I thought of like but didn't think to look for it in the postgres
manual.

Thank you very much,

Rich




Re: Selecting rows having substring in a column

2019-08-29 Thread Rob Sargent



On 8/29/19 8:47 AM, Rich Shepard wrote:

On Thu, 29 Aug 2019, Gary Cowell wrote:


Look at the 'LIKE' function


Gary,

Yes, I thought of like but didn't think to look for it in the postgres
manual.

Thank you very much,

Rich


I've given up on the "likes" in favour of the ~ (tilde) and ~* (tilde-asterisk)
operators.  Way cool, powerful. Not standard SQL though (IIRC).
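For example, against Adrian's like_test table from earlier in the thread (a quick sketch):

  select * from like_test where fld_1 ~* 'winding river';   -- case-insensitive regex match
  select * from like_test where fld_1 ~  'Winding River$';  -- case-sensitive, anchored at the end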





Re: Selecting rows having substring in a column [RESOLVED]

2019-08-29 Thread Rich Shepard

On Thu, 29 Aug 2019, Rich Shepard wrote:


Aha! I thought of 'like' but forgot about ilike. That's exactly what I
need.


'thought' is the wrong word. I should have written that I once knew of like
and had forgotten it.

Rich





Query using 'LIKE' returns empty set

2019-08-29 Thread Rich Shepard

Next problem is one I've not encountered before.

The .sql file used to import data to the fish_counts table has rows such as
this one:

('1237796458250','0','17174','Buchanan Creek','Buchanan Creek trib to North
Fork Nehalem River','0-3.25','161980','Unknown','Jack or subadult','Peak
live & dead fish','Spawner Counts','ODFW','2012-01-06','1950-1974',25,
'1950-10-01','1951-01-31','Ground','Actual Physical Counts',0), 
[Lines wrapped in the message only.]


When I submit this query I get no rows returned:

select * from fish_counts where stream_tribs ilike 'Nehalem';
 count_id | loc_id | downstream | upstream | stream_name | stream_tribs | river_miles |
 itis_tsn | production | life_stage | count_type | data_category | compiled_by |
 updated | years | nbr_observations | begin_date | end_date | sample_method |
 calc_method | count_value
------------------------------------------------------------------------------------------
(0 rows)

What is equally puzzling is when I search the input file using
grep -c -e "Nehalem" fish_counts
0 is returned, the same as the postgres query.

I want to understand what I've done incorrectly.

TIA,

Rich




Re: How to log 'user time' in postgres logs

2019-08-29 Thread francis picabia
On Wed, Aug 28, 2019 at 4:51 PM Tom Lane  wrote:

> francis picabia  writes:
> > The server was running Moodle.  The slow load time was noticed when
> loading
> > a quiz containing
> > multiple images.  All Apache log results showed a 6 seconds or a multiple
> > of 6 for how long
> > it took to retrieve each image.
>
> > Interestingly, if I did a wget, on the server, to the image link (which
> was
> > processed through a pluginfile.php URL)
> > even the HTML page returned of "please login first" took consistently 6
> > seconds.  Never 2, 3, 4, 5 or 7, 8, 9...
> > So whatever was wrong, there was a 6 second penalty for this.
>
> Hmm ... some weird DNS behavior, perhaps?  That is one way to explain
> a pattern like this.  How long does it take to "dig" or "nslookup"
> your server name?
>
> regards, tom lane
>

Nope, it isn't DNS related.  The dev server uses the same DNS service and
was very fast doing the same actions.  I also run the DNS server and would
be aware of any complaints about the performance.  I suspected the firewall
for a while, but there was nothing there related to it.

It all cleared up minutes after the vacuum -a -v -z was run; the slowness
had been reported several days before this.  I myself saw the quiz
consistently slow for the 2.5 days I fought the problem.  We're certain the
vacuum made the difference in performance.

If there is no way to log the "user time", I guess we'd need to log
something like the connection and disconnection times to get another view
of the elapsed time.


Re: Query using 'LIKE' returns empty set

2019-08-29 Thread Rob Sargent



On 8/29/19 10:39 AM, Rich Shepard wrote:

Next problem is one I've not before encountered.

The .sql file used to import data to the fish_counts table has rows 
such as

this one:

('1237796458250','0','17174','Buchanan Creek','Buchanan Creek trib to 
North

Fork Nehalem River','0-3.25','161980','Unknown','Jack or subadult','Peak
live & dead fish','Spawner Counts','ODFW','2012-01-06','1950-1974',25,
'1950-10-01','1951-01-31','Ground','Actual Physical Counts',0), [Lines 
wrapped in the message only.]


When I submit this query I get no rows returned:

select * from fish_counts where stream_tribs ilike 'Nehalem';
 count_id | loc_id | downstream | upstream | stream_name | stream_tribs | river_miles |
 itis_tsn | production | life_stage | count_type | data_category | compiled_by |
 updated | years | nbr_observations | begin_date | end_date | sample_method |
 calc_method | count_value
------------------------------------------------------------------------------------------
(0 rows)

What is equally puzzling is when I search the input file using
grep -c -e "Nehalem" fish_counts
0 is returned, the same as the postgres query.

I want to understand what I've done incorrectly.

TIA,

Rich



Are you sure that particular file has the search string?

grep -i nehalem fish_counts





Re: Query using 'LIKE' returns empty set

2019-08-29 Thread Michael Lewis
You need the wildcard character at front and back.

select * from fish_counts where stream_tribs ilike '%Nehalem%';


Re: Query using 'LIKE' returns empty set [FIXED]

2019-08-29 Thread Rich Shepard

On Thu, 29 Aug 2019, Rob Sargent wrote:


Are you sure that particular file has the search string?


Rob,

I'm suitably embarrassed: that's the wrong file name. I must be seriously
under-caffeinated. The proper file is fish_data.sql, so grep and postgres
return 1409 instances.

My apologies to all,

Rich




Re: How to log 'user time' in postgres logs

2019-08-29 Thread Adrian Klaver

On 8/28/19 11:36 AM, francis picabia wrote:


The server was running Moodle.  The slow load time was noticed when 
loading a quiz containing
multiple images.  All Apache log results showed a 6 seconds or a 
multiple of 6 for how long

it took to retrieve each image.

Interestingly, if I did a wget, on the server, to the image link (which 
was processed through a pluginfile.php URL)
even the HTML page returned of "please login first" took consistently 6 
seconds.  Never 2, 3, 4, 5 or 7, 8, 9...
So whatever was wrong, there was a 6 second penalty for this.  We found 
the problem


Caching software in the mix?

More below.

outside of the quizzes as well, so it wasn't a single part of the PHP 
code.  In addition, our development
server with a clone of data, database, apps, etc., was fast at doing any 
of the tests.
Of course a database brought in with a dump will not include any of the 
cruft
DB structures, so the dev server was free of the problem (and it had 
fewer cores, memory, etc).


I was not suspecting PostgreSQL as the culprit, because all query times 
in the log showed
roughly 1 ms response.  I could tail -f the log file while loading a 
quiz to watch the times

reported.

Last night the vacuum was run (on a database of about 40GB if dumped), 
and since
then the quizzes and everything run as fast as would be expected.  It 
had run

for about 1.5 months without vacuum.

Apache, PHP and Postgres all on the same server.  Memory and CPUs not 
taxed, load kept level

while the problem quizzes were being tested.

Given this experience, I'd like something that reflected the true times 
Postgres was spending

on any work it was doing.


The log_duration setting will show that.
If you want to verify, open psql, set \timing on, and run your statements.
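For example (a sketch; the table name and timing value are illustrative):

  -- in psql
  \timing on
  select count(*) from some_table;
  Time: 1.234 ms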




The other possibility was it just didn't log the slower times, but I 
have seen larger numbers
in the 10,000 ms range in the night when some backups and housekeeping 
happens.


All  log related settings:
checkpoint_segments = 12
logging_collector = on
log_destination = 'stderr'
log_directory = 'pg_log'
log_filename = 'postgresql-%a.log'
log_truncate_on_rotation = on
log_rotation_age = 1440
log_rotation_size = 0
log_min_messages = info
log_min_error_statement = debug1
log_duration = on
log_line_prefix = '<%t>'

I know it does sound strange, but this is what we battled with for 2.5 
days until the light
came on that the vacuum had been set to off on the target system during 
server migration.






--
Adrian Klaver
adrian.kla...@aklaver.com




Re: How to log 'user time' in postgres logs

2019-08-29 Thread Adrian Klaver

On 8/28/19 11:36 AM, francis picabia wrote:






The other possibility was it just didn't log the slower times, but I 
have seen larger numbers
in the 10,000 ms range in the night when some backups and housekeeping 
happens.


All  log related settings:
checkpoint_segments = 12
logging_collector = on
log_destination = 'stderr'
log_directory = 'pg_log'
log_filename = 'postgresql-%a.log'
log_truncate_on_rotation = on
log_rotation_age = 1440
log_rotation_size = 0
log_min_messages = info
log_min_error_statement = debug1
log_duration = on
log_line_prefix = '<%t>'


You might also want to log:

log_connections = on

log_disconnections = on

And add to log_line_prefix some mix of:

%u  User name

%p  Process ID

%s  Process start time stamp
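Put together, the additions might look like this in postgresql.conf (the exact
prefix format is just one possible choice):

  log_connections = on
  log_disconnections = on
  log_line_prefix = '<%t> user=%u pid=%p start=%s '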




I know it does sound strange, but this is what we battled with for 2.5 
days until the light
came on that the vacuum had been set to off on the target system during 
server migration.







--
Adrian Klaver
adrian.kla...@aklaver.com




Exclusion constraints on overlapping text arrays?

2019-08-29 Thread Ken Tanzer
Hi.  Using 9.6.14, I was setting up a table with this:

EXCLUDE using gist (grant_number_codes with &&)

Where grant_number_codes is a varchar[].  I get this error:

ERROR:  data type character varying[] has no default operator class for
access method "gist"
HINT:  You must specify an operator class for the index or define a default
operator class for the data type.
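For what it's worth, a minimal self-contained sketch that reproduces the error
(the table name is hypothetical):

  create table grants (grant_number_codes varchar[]);
  alter table grants
      add exclude using gist (grant_number_codes with &&);
  -- ERROR:  data type character varying[] has no default operator class
  --         for access method "gist"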

I did some Googling, and it looks like you could do this for integer arrays
with the intarray extension.  I didn't see anything I'd recognize as an
equivalent for varchar (or text) arrays.  Is there any way to do this now?
And if not, is there much prospect of this being implemented at some point?

I found a couple of old threads about this.  Not sure if they are still
relevant, but listed below.

Also, on a side note, I tried using grant_number_codes::text[] with &&, but
that got a syntax error.  Does that mean casting isn't allowed at all in
these constraints?

Thanks in advance,
Ken

1) This 2014 thread asked about this:

*array exclusion constraints*
https://www.postgresql.org/message-id/flat/20141113183843.E8AC620362%40smtp.hushmail.com

and pointed toward this 2013 discussion:

*Todo item: Support amgettuple() in GIN*
https://www.postgresql.org/message-id/flat/5297DC17.7000608%40proxel.se



-- 
AGENCY Software
A Free Software data system
By and for non-profits
http://agency-software.org/
https://demo.agency-software.org/client
ken.tan...@agency-software.org
(253) 245-3801

Subscribe to the mailing list to learn more about AGENCY or follow the
discussion.


Re: literal vs dynamic partition constraint in plan execution

2019-08-29 Thread Luca Ferrari
On Thu, Aug 29, 2019 at 4:45 PM Luca Ferrari  wrote:
>
> Ok, the title is a little buzz, however I've got a partitioned table
> and one "leaf" has a set of checks against a timestamp field to ensure
> that tuples within such table belongs to the year and month:

Of course, all the siblings have similar constraints. My partitioning
starts at a table named "root", which has a level for the year, and each
year has subpartitions for the months:
- root
  - y2018
    - y2018m01, y2018m02, ...
  - y2019
    - y2019m01, y2019m02, ...

All partitions have been created equally, and constraints seem fine to me:

testdb=# \d respi.y2018m01
...
Partition of: respi.y2018 FOR VALUES IN ('1')
Check constraints:
"y2018_mis_ora_check" CHECK (date_part('year'::text, mis_ora) =
2018::double precision)
"y2018_mis_ora_check1" CHECK (mis_ora >= make_timestamp(2018, 1,
1, 0, 0, 0::double precision) AND mis_ora <= make_timestamp(2018, 12,
31, 23, 59, 59::double precision))
"y2018m01_mis_ora_check" CHECK (date_part('year'::text, mis_ora) =
2018::double precision)
"y2018m01_mis_ora_check1" CHECK (mis_ora >= make_timestamp(2018,
1, 1, 0, 0, 0::double precision) AND mis_ora <= make_timestamp(2018,
1, 31, 23, 59, 59::double precision))



testdb=># \d+ respi.y2018
...
Partition of: respi.root FOR VALUES IN ('2018')
Partition constraint: ((date_part('year'::text, mis_ora) IS NOT NULL)
AND (date_part('year'::text, mis_ora) = '2018'::double precision))
Partition key: LIST (date_part('month'::text, mis_ora))
Check constraints:
"y2018_mis_ora_check" CHECK (date_part('year'::text, mis_ora) =
2018::double precision)
"y2018_mis_ora_check1" CHECK (mis_ora >= make_timestamp(2018, 1,
1, 0, 0, 0::double precision) AND mis_ora <= make_timestamp(2018, 12,
31, 23, 59, 59::double precision))
Partitions: respi.y2018m01 FOR VALUES IN ('1'),
respi.y2018m02 FOR VALUES IN ('2'),
respi.y2018m03 FOR VALUES IN ('3'),
respi.y2018m04 FOR VALUES IN ('4'),
respi.y2018m05 FOR VALUES IN ('5'),
respi.y2018m06 FOR VALUES IN ('6'),
respi.y2018m07 FOR VALUES IN ('7'),
respi.y2018m08 FOR VALUES IN ('8'),
respi.y2018m09 FOR VALUES IN ('9'),
...

With the above constraints, the whole branch starting at y2018 should be
excluded when selecting with
mis_ora >= CURRENT_TIMESTAMP
(the date of the server is right, of course).
Why is it scanned instead (as reported by the execution plan in the
previous email)?

Thanks,
Luca