Need help on query optimization

2021-03-22 Thread Shubham Mittal
Hi Team,

I am trying to execute the query below and am getting the following explain
plan in Postgres.

Please provide any input on how to optimize it.

Here B is an ltree column and E is a jsonb column.

EXPLAIN ANALYZE SELECT * FROM A
WHERE ( B <@ 'INDIA' ) AND C = 'D'
AND CAST ( E->'F'->'G'->>'H' AS DATE ) >= '2021-02-01'
AND CAST ( E->'F'->'G'->>'H' AS DATE ) <= '2021-02-24'
ORDER BY E -> 'F' ->> 'J' ASC, created_date DESC
OFFSET 0 ROWS FETCH NEXT 200 ROWS ONLY



"Limit  (cost=22009.81..22010.08 rows=105 width=3853) (actual
time=2295.654..2295.688 rows=200 loops=1)"

"  ->  Sort  (cost=22009.81..22010.08 rows=105 width=3853) (actual
time=2295.651..2295.671 rows=200 loops=1)"

"Sort Key: (((E -> 'F'::text) ->> 'J'::text)), created_date DESC"

"Sort Method: top-N heapsort  Memory: 355kB"

"->  Index Scan using task_opp_tlmd_iscmp_idx on task
(cost=0.56..22006.29 rows=105 width=3853) (actual time=3.788..2277.503
rows=10982 loops=1)"

"  Index Cond: (C = 'D'::ltree)"

"  Filter: ((B <@ 'INDIA'::ltree) AND (((((E -> 'F'::text) ->
'G'::text) ->> 'H'::text))::date >= '2021-02-01'::date) AND (((((E ->
'F'::text) -> 'G'::text) ->> 'H'::text))::date <= '2021-02-24'::date))"

"  Rows Removed by Filter: 14738"

"Planning Time: 0.418 ms"

"Execution Time: 2295.981 ms"


Thanks & Regards,

Shubham


Re: SV: Log files polluted with permission denied error messages after every 10 seconds

2021-03-22 Thread Andrus

Hi!

Errors in the pg_wal directory seem not to occur in the patched version. Errors
in pg_stat_tmp still occur. Yesterday's log introduces a new error message:


using stale statistics instead of current ones because stats collector 
is not responding


2021-03-21 03:02:23 EET stats collector LOG:  could not rename 
temporary statistics file "pg_stat_tmp/global.tmp" to 
"pg_stat_tmp/global.stat": Permission denied
2021-03-21 13:31:00 EET stats collector LOG:  could not rename 
temporary statistics file "pg_stat_tmp/global.tmp" to 
"pg_stat_tmp/global.stat": Permission denied
2021-03-21 15:15:52 EET stats collector LOG:  could not rename 
temporary statistics file "pg_stat_tmp/global.tmp" to 
"pg_stat_tmp/global.stat": Permission denied
2021-03-21 23:51:20 EET stats collector LOG:  could not rename 
temporary statistics file "pg_stat_tmp/global.tmp" to 
"pg_stat_tmp/global.stat": Permission denied
2021-03-21 23:51:25 EET autovacuum worker LOG:  using stale 
statistics instead of current ones because stats collector is not responding


Andrus.




Re: SV: Log files polluted with permission denied error messages after every 10 seconds

2021-03-22 Thread Michael Paquier
On Mon, Mar 22, 2021 at 09:25:26AM +0200, Andrus wrote:
> Errors in pg_wal directory seems not to occur in patched version. Errors in
> pg_stat_tmp still occur. Yesterdays log introduces new error message
> 
> using stale statistics instead of current ones because stats collector is
> not responding
> 
> 2021-03-21 23:51:25 EET autovacuum worker LOG:  using stale statistics
> instead of current ones because stats collector is not responding

The renaming of stats files involves just pgrename(), which is a
completely separate code path from the one used for WAL segments.  This
requires a separate investigation.
--
Michael




Re: questions about wraparound

2021-03-22 Thread Luca Ferrari
On Sat, Mar 20, 2021 at 12:07 PM Luca Ferrari  wrote:
> I suspect freezing is doing it "totally" for a datistemplate
> database, even if I don't understand why.

I can confirm that freezing a template database is done by means of
setting its age to zero. I've set the datistemplate flag for testdb and
reissued a wraparound (I'm torturing my PostgreSQL!), and after a
vacuum its age went to zero.
I'm not able to find this behavior in the documentation, however, and
still don't understand why a template database should behave
differently (at least, I can only imagine it reduces the future
workload of vacuuming a template database).
Here, again, is what I did in single-user mode:

backend> select datname, age( datfrozenxid), current_setting(
'vacuum_freeze_min_age' ) from pg_database;
 1: datname (typeid = 19, len = 64, typmod = -1, byval = f)
 2: age (typeid = 23, len = 4, typmod = -1, byval = t)
 3: current_setting (typeid = 25, len = -1, typmod = -1, byval = f)

 1: datname = "postgres"(typeid = 19, len = 64, typmod
= -1, byval = f)
 2: age = "2138438218"  (typeid = 23, len = 4, typmod = -1, byval = t)
 3: current_setting = "5000"(typeid = 25, len =
-1, typmod = -1, byval = f)

 1: datname = "backupdb"(typeid = 19, len = 64, typmod
= -1, byval = f)
 2: age = "2138438218"  (typeid = 23, len = 4, typmod = -1, byval = t)
 3: current_setting = "5000"(typeid = 25, len =
-1, typmod = -1, byval = f)

 1: datname = "template1"   (typeid = 19, len = 64, typmod
= -1, byval = f)
 2: age = "2138438218"  (typeid = 23, len = 4, typmod = -1, byval = t)
 3: current_setting = "5000"(typeid = 25, len =
-1, typmod = -1, byval = f)

 1: datname = "template0"   (typeid = 19, len = 64, typmod
= -1, byval = f)
 2: age = "2138438218"  (typeid = 23, len = 4, typmod = -1, byval = t)
 3: current_setting = "5000"(typeid = 25, len =
-1, typmod = -1, byval = f)

 1: datname = "testdb"  (typeid = 19, len = 64, typmod = -1, byval = f)
 2: age = "2138438218"  (typeid = 23, len = 4, typmod = -1, byval = t)
 3: current_setting = "5000"(typeid = 25, len =
-1, typmod = -1, byval = f)

 1: datname = "pgbench" (typeid = 19, len = 64, typmod = -1, byval = f)
 2: age = "2138438218"  (typeid = 23, len = 4, typmod = -1, byval = t)
 3: current_setting = "5000"(typeid = 25, len =
-1, typmod = -1, byval = f)

backend> set vacuum_freeze_min_age to 1234
backend> vacuum
 WARNING:  database "backupdb" must be vacuumed within 9045429 transactions
 HINT:  To avoid a database shutdown, execute a database-wide VACUUM
in that database.
You might also need to commit or roll back old prepared
transactions, or drop stale replication slots.
backend>


and here is the situation after a restart:

testdb=> select datname, age( datfrozenxid ) from pg_database;
  datname  |   age
---+--
 postgres  | 1234
 backupdb  | 5000
 template1 |0
 template0 |0
 testdb|0
 pgbench   | 5000




Re: PITR for an only object in postgres

2021-03-22 Thread Kyotaro Horiguchi
At Thu, 18 Mar 2021 17:12:49 -0400, aslı cokay  wrote in 
> Hi all,
> 
> Is there any way to get for example a table version of 5 minutes ago, or we
> should PITR to 5 minutes ago, dump the table and restore with this dump ?

If you want to revert the recent changes on the table, I think there's
no way other than time-travelling using PITR.  Of course, you can't do
that for only a part of the database.

If you often want to access a 5-minute-old image of the database, you
could configure a delayed standby using recovery_min_apply_delay.
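A minimal configuration sketch of that idea (the delay value is just an example):

```
# postgresql.conf on the standby (recovery.conf before PostgreSQL 12)
recovery_min_apply_delay = '5min'   # keep the standby 5 minutes behind the primary
```

Queries against the standby then see the database roughly as it was 5 minutes ago on the primary.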

I'm not sure whether PostgreSQL has the same feature for logical
replication, but I suspect it does not.

regards.

-- 
Kyotaro Horiguchi
NTT Open Source Software Center




Re: Need help on query optimization

2021-03-22 Thread Saurabh Agrawal
>
>
> Here B is a ltree column, E is a jsonb column.
>


It may also help to mention the indexes and their types, e.g. does column B
have a GiST index?
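For what it's worth, a sketch of the kind of indexes that could help here. All names are invented, and this assumes the 'H' value is stored as an ISO date string:

```sql
-- ltree's <@ operator can be supported by a GiST index:
CREATE INDEX a_b_gist_idx ON A USING GIST (B);

-- The text-to-date cast is only STABLE (it depends on DateStyle), so an
-- expression index needs an IMMUTABLE wrapper function; declaring it
-- IMMUTABLE is only safe on the assumption the stored format never varies.
CREATE FUNCTION a_h_date(e jsonb) RETURNS date
    LANGUAGE sql IMMUTABLE AS
$$ SELECT to_date(e->'F'->'G'->>'H', 'YYYY-MM-DD') $$;

CREATE INDEX a_e_h_date_idx ON A (a_h_date(E));
```

The query would then need to filter on a_h_date(E) rather than the inline CAST for the planner to consider the expression index.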




>
> EXPLAIN ANALYZE SELECT * FROM A
>
> where ( B <@ 'INDIA' ) AND C = 'D'
>
> AND CAST ( E->'F'->'G'->>'H' AS DATE ) >= '2021-02-01'
>
> AND CAST ( E->'F'->'G'->>'H' AS DATE ) <= '2021-02-24'
>
> ORDER BY E -> 'F' ->> 'J' ASC,created_date DESC
>
> OFFSET 0 ROWS FETCH NEXT 200 ROWS ONLY
>
>
>
> "Limit  (cost=22009.81..22010.08 rows=105 width=3853) (actual
> time=2295.654..2295.688 rows=200 loops=1)"
>
> "  ->  Sort  (cost=22009.81..22010.08 rows=105 width=3853) (actual
> time=2295.651..2295.671 rows=200 loops=1)"
>
> "Sort Key: (((E -> 'F'::text) ->> 'J'::text)), created_date DESC"
>
> "Sort Method: top-N heapsort  Memory: 355kB"
>
> "->  Index Scan using task_opp_tlmd_iscmp_idx on task
> (cost=0.56..22006.29 rows=105 width=3853) (actual time=3.788..2277.503
> rows=10982 loops=1)"
>
> "  Index Cond: (C = 'D'::ltree)"
>
> "  Filter: ((B <@ 'INDIA'::ltree) AND (((((E -> 'F'::text) ->
> 'G'::text) ->> 'H'::text))::date >= '2021-02-01'::date) AND (((((E ->
> 'F'::text) -> 'G'::text) ->> 'H'::text))::date <= '2021-02-24'::date))"
>
> "  Rows Removed by Filter: 14738"
>
> "Planning Time: 0.418 ms"
>
> "Execution Time: 2295.981 ms"
>
>
> Thanks & Regards,
>
> Shubham
>


Re: More than one UNIQUE key when matching items..

2021-03-22 Thread Ron Clarke
Thank you; simple and effective. I got sucked into trying to use a named
constraint.

Best regards
Ron

On Sat, 20 Mar 2021 at 20:00, Laurenz Albe  wrote:

> On Sat, 2021-03-20 at 15:51 +, Ron Clarke wrote:
> > In SQL Server this is easy, we insert the records into a temporary table
> with separate Unique
> >  indexes on the id for set a and the ids for set b and put the
> 'ignore_dup_key' on which tells
> >  SQL Server to ignore duplicate rows and carry on.
> >
> > The nearest to this at first appears to be the ON CONFLICT ON CONSTRAINT
> IGNORE in Postgres.
> >  But this only works with a single constraint, at a time i.e. we can't
> set the ON CONFLICT ON
> >  CONSTRAINT IGNORE to work with multiple UNIQUE indexes.
>
> You can try this:
>
> INSERT ... ON CONFLICT DO NOTHING;
>
> Yours,
> Laurenz Albe
> --
> Cybertec | https://www.cybertec-postgresql.com
>
>


Re: More than one UNIQUE key when matching items..

2021-03-22 Thread Ron Clarke
Hey, thanks for working out a solution to this deceptive problem. One of
those you expect to be simple, but then all of a sudden it isn't.

Best regards
Ron





On Sat, 20 Mar 2021 at 19:01, Allan Kamau  wrote:

>
>
>
>
>
>
> On Sat, Mar 20, 2021 at 6:52 PM Ron Clarke  wrote:
>
>> /*
>> I'm trying to port a system from SQL server, and at the same time better
>> learn postgreSQL.
>>
>> I've come across a problem that is easily solved in that world, but I am
>> struggling to find an approach in postgres that works.
>>
>> We have 2 sets of events A and B (sets), they have a shared number
>> (ncode), both have unique Id's
>>
>> We want to link items of set A to those of set B, but each item of each
>> set can only be linked once. That is, we do not want to link all set 'A'
>> items to all set 'B' items with the same code.
>>
>> In SQL Server this is easy, we insert the records into a temporary table
>> with separate Unique indexes on the id for set a and the ids for set b and
>> put the 'ignore_dup_key' on which tells SQL Server to ignore duplicate rows
>> and carry on.
>>
>> The nearest to this at first appears to be the ON CONFLICT ON CONSTRAINT
>> IGNORE in Postgres. But this only works with a single constraint, at a time
>> i.e. we can't set the ON CONFLICT ON CONSTRAINT IGNORE to work with
>> multiple UNIQUE indexes.
>>
>> To show the problem:
>>
>> I'm using PostgreSQL version 11.
>>
>> */
>> -- source data
>> WITH sd AS (
>> SELECT iid, s, ncode FROM (
>> VALUES (1, 'A', 10),
>>(2, 'A', 30),
>>(3, 'A', 10),
>>(4, 'B', 10),
>>(5, 'B', 20),
>>(6, 'B', 10)
>> )
>> AS tx (iid, s, ncode))
>> SELECT iid, s, ncode FROM sd
>>
>> /* The target result would be :
>>
>>   id:1, A, 10 this matches id:4, B, 10
>>   id:3, A, 10 this matches id:6, B, 10
>> */
>>
>> --  Example to get the *wrong *answer, i.e. both sets of links
>>
>> WITH
>> sd (i, s, n ) AS (
>> SELECT iid, s, ncode FROM (
>> VALUES (1, 'A', 10),
>>   (2, 'A', 30),
>>   (3, 'A', 10),
>>   (4, 'B', 10),
>>   (5, 'B', 20),
>>   (6, 'B', 10)
>> )
>> AS tx (iid, s, ncode))
>> ,
>> x AS ( SELECT
>>
>> ax.i as ia,
>> ax.s as sa,
>> ax.n as na,
>> bx.i as ib,
>> bx.s as sb,
>> bx.n as nb,
>> ROW_NUMBER () OVER (
>>
>> PARTITION BY bx.i
>>
>> ORDER BY
>>
>> ax.i ) as rx
>>
>> FROM sd AS ax
>> INNER JOIN sd AS bx ON ax.n = bx.n and ax.i != bx.i and bx.s = 'B'
>> WHERE ax.s = 'A'
>> )
>> SELECT ia,ib, na, rx FROM x
>> ;
>>
>>
>> /*   I've tried using a recursive CTE where I'm trying to exclude results
>> from the result set that have already been identified, but I can't get an
>> allowed syntax.
>>   Doesn't seem to allow joins to the recursive term to exclude results.
>> */
>>
>>
>> /*   I've tried Unique and Exclusion constraints on temporary table, e.g
>> */
>> -- similar Example to get the wrong answer, i.e. both sets of links
>>
>> DROP TABLE IF EXISTS links ;
>>
>> CREATE TEMPORARY TABLE links
>> (mid serial ,
>> ia int ,
>> -- ia int UNIQUE,
>> ib int ,
>> -- ib int UNIQUE,
>> EXCLUDE USING gist (ia WITH =, ib WITH =)
>>
>>   ) ;
>>
>> WITH
>> sd (i, s, n ) AS (
>> SELECT iid, side, ncode FROM (
>> VALUES (1, 'A', 10),
>>   (2, 'A', 30),
>>   (3, 'A', 10),
>>   (4, 'B', 10),
>>   (5, 'B', 20),
>>   (6, 'B', 10)
>> )
>> AS tx (iid, side, ncode))
>> ,
>> x AS (
>> SELECT
>> ax.i as ia,
>> ax.s as sa,
>> ax.n as na,
>> bx.i as ib,
>> bx.s as sb,
>> bx.n as nb,
>> ROW_NUMBER () OVER (
>> PARTITION BY bx.i
>> ORDER BY
>> ax.i
>> ) as rx
>> FROM sd AS ax
>> INNER JOIN sd AS bx ON ax.n = bx.n and ax.i != bx.i and bx.s = 'B'
>> WHERE ax.s = 'A'
>> )
>> -- SELECT * FROM x
>> INSERT INTO links(ia,ib)
>> SELECT ia, ib FROM x
>> ON CONFLICT ON CONSTRAINT links_ia_ib_excl DO NOTHING;
>>
>>   --
>> SELECT * from links;
>>
>> /*   I've also tried and failed to use array(ia,ib) within or as computed
>> column of an Exclusion constraint of && s on temporary table, e.g
>>   but can't find any syntax that doesn't result in an error
>>   */
>>
>>
>> DROP TABLE IF EXISTS links ;
>>
>> CREATE TEMPORARY TABLE links
>> (mid serial ,
>> ia int ,
>> -- ia int UNIQUE,
>> ib int ,
>> -- ib int UNIQUE,
>> ix int[],
>>   EXCLUDE USING gist (ix WITH &&)
>>   ) ;
>>
>> -- This gives me:
>> -- ERROR:  data type integer[] has no default operator class for access
>> method "gist"
>>
>> -- I have the btree_gist extension installed
>>
>>
>> /*
>>
>> I appreciate I could create a cursor from a list of proposed links and
>> step through each one, checking if the id value has been "used up"
>> but I am trying to keep this as a set based operation to give me the
>> results in one statement.
>>
>> There are some similar questions w.r.t. duplicate detection, but these
>> again seem to be solved by evaluating each proposed record individually.
>> If that's just what I have to do then so be it. There is probably a
>> simple 'postgreSQL' freindly approach I'm still yet to discov

design partioning scheme for selecting from latest partition

2021-03-22 Thread Niels Jespersen
Hello

I have a table partitioned like this

drop table if exists s cascade;
create table s
(
version int not null,
a   int,
b   int
) partition by list (version);

-- Add tens of partitions
-- Load millions of rows in each partition
-- Then I want to be able to do this without scanning all partitions for the
highest version number.

select s.* from s where s.version = (select max(version) from s);

I could add an index on the version column. But the only use would be for the
newest partition, so that seems a bit like overkill, indexing hundreds of
millions of rows.

Is there a cheaper way to do this?

For now I have created a materialized view based on the select above, thus
scanning for the max partition only once.
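For illustration, that materialized view could look roughly like this (the view name is made up):

```sql
CREATE MATERIALIZED VIEW s_latest AS
SELECT s.* FROM s WHERE s.version = (SELECT max(version) FROM s);

-- Re-run after each load of a new partition:
REFRESH MATERIALIZED VIEW s_latest;
```

The expensive scan for max(version) then happens once per refresh instead of once per query.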

Niels Jespersen




gdal32-libs-3-2-2-13.rhel bad dependency

2021-03-22 Thread Edward Donahue III

It requires proj72 >= 8.0.0; the dependency needs to be proj80.

--
Thank you,
Ed Donahue III
e...@eddonahue.com





Re: design partioning scheme for selecting from latest partition

2021-03-22 Thread Francisco Olarte
Niels:

On Mon, Mar 22, 2021 at 3:40 PM Niels Jespersen  wrote:
...
> version int not null,
...
> ) partition by list (version);
>
> -- Add tens of partitions
> -- Load millions of rows in each partition
> -- Then I want to be able to do this without scanning all partitions for the
> highest version number.
>
> select s.* from s where s.version = (select max(version) from s);

> I could add an index on the version column. But the only use would be to the 
> newest partition, so that seems a bit like overkill, indexing 100 of 
> milliomns of rows.

Without an index, or some caching, you would need to scan partitions.
Potentially several, because you could have a partition for (0,3,6),
another for (1,2,8), and another for 4. Are you sure list partitioning
is better, given your message hints that you make the versions
increasing? I mean, even if you do not use all versions, having ranges
means you can query the schema, order the partitions in descending
order of the range, and query each one; it must be either empty or
contain the maximum. You could do this if your lists are increasing too
(i.e., you have 1,2,3 and 10,12,14 and 100,200, but you haven't said
that, so we cannot assume it). With free lists, like the ones I've put
above, you may scan (1,2,8) expecting an 8 only to find the max is a 2,
then (0,3,6) expecting a 6 only to find the max is a 3, and then scan 4
and find the max there; without indexes or insider knowledge, every
scan will have to be a full scan.

Even if you can do something like that, without an index you will need
a full scan, or do some trigger magic and keep a cache (just keep
version, count(*) in a table and maintain it). If your partitions are
ordered, you can always keep the last one indexed, or if you know
versions do not decrease, you may keep things cached. This seems to be
the kind of problem where the generic solution is hard but a little
insider knowledge can accelerate it a lot.
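A sketch of that trigger-maintained cache; all names are invented, and this assumes PostgreSQL 11 or later (for row triggers on partitioned tables and EXECUTE FUNCTION):

```sql
-- Small side table: one row per version with a row count
CREATE TABLE s_version_counts (version int PRIMARY KEY, n bigint NOT NULL);

-- Maintain the cache on every insert into s
CREATE FUNCTION s_count_row() RETURNS trigger LANGUAGE plpgsql AS $$
BEGIN
    INSERT INTO s_version_counts AS c (version, n) VALUES (NEW.version, 1)
    ON CONFLICT (version) DO UPDATE SET n = c.n + 1;
    RETURN NEW;
END $$;

CREATE TRIGGER s_count_trg AFTER INSERT ON s
    FOR EACH ROW EXECUTE FUNCTION s_count_row();

-- max(version) becomes a cheap lookup against the tiny cache table:
SELECT s.* FROM s
WHERE s.version = (SELECT max(version) FROM s_version_counts);
```

Note that a per-row trigger adds noticeable overhead to bulk loads; updating the cache with a single statement after each load may be the better trade-off.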

Regards.
   Francisco Olarte.




CSV From Oracle with timestamp column getting errors

2021-03-22 Thread Saha, Sushanta K
\COPY table1 FROM '/tmp/Oracle_2020_06.csv' DELIMITER ',' CSV HEADER;
ERROR:  invalid input syntax for type timestamp: "01-JUN-20 06.04.20.634000
AM"
CONTEXT:  COPY table1, line 2, column last_update_timestamp: "01-JUN-20
06.04.20.634000 AM"

Appreciate any help with this psql command.

Thanks & Regards
 Sushanta


Re: CSV From Oracle with timestamp column getting errors

2021-03-22 Thread Saha, Sushanta K
The table column definition:

  Column   |  Type  |
---++
 last_update_timestamp | timestamp(6) without time zone |

Thanks & Regards
 Sushanta


On Mon, Mar 22, 2021 at 4:37 PM Saha, Sushanta K <
sushanta.s...@verizonwireless.com> wrote:

> \COPY table1 FROM '/tmp/Oracle_2020_06.csv' DELIMITER ',' CSV HEADER;
> ERROR:  invalid input syntax for type timestamp: "01-JUN-20
> 06.04.20.634000 AM"
> CONTEXT:  COPY table1, line 2, column last_update_timestamp: "01-JUN-20
> 06.04.20.634000 AM"
>
> Appreciate any help with this psql command.
>
> Thanks & Regards
>  Sushanta
>
>

-- 

*Sushanta Saha|*MTS IV-Cslt-Sys Engrg|WebIaaS_DB Group|HQ -
* VerizonWireless O 770.797.1260  C 770.714.6555 Iaas Support Line
949-286-8810*


Re: CSV From Oracle with timestamp column getting errors

2021-03-22 Thread Victor Yegorov
On Mon, 22 Mar 2021 at 21:38, Saha, Sushanta K <
sushanta.s...@verizonwireless.com>:

> \COPY table1 FROM '/tmp/Oracle_2020_06.csv' DELIMITER ',' CSV HEADER;
> ERROR:  invalid input syntax for type timestamp: "01-JUN-20
> 06.04.20.634000 AM"
> CONTEXT:  COPY table1, line 2, column last_update_timestamp: "01-JUN-20
> 06.04.20.634000 AM"
>
> Appreciate any help with this psql command.
>

I would recommend issuing one of these on the Oracle side *before* taking
the CSV snapshot.
export NLS_DATE_FORMAT="YYYY-MM-DD HH24:MI:SS"
ALTER SESSION SET nls_date_format='YYYY-MM-DD HH24:MI:SS';

Otherwise, you have to load this CSV file into a table that has `text` type
for the column and do post-processing,
something like:

INSERT INTO permanent_tab
SELECT *, to_timestamp(col, 'DD-MON-YY HH12.MI.SS.US AM') FROM
temp_table;
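Spelled out for a single column, the staging approach might look like this; the table and column names are assumptions (the real table will have more columns):

```sql
-- Staging table holding the awkward timestamp as plain text
CREATE TABLE table1_staging (last_update_timestamp text);

\copy table1_staging FROM '/tmp/Oracle_2020_06.csv' DELIMITER ',' CSV HEADER

-- Convert while moving the data into the real table; 'US' consumes the
-- six fractional-second digits in "06.04.20.634000". to_timestamp()
-- returns timestamptz, so the session TimeZone applies when it is cast
-- to the table's timestamp-without-time-zone column.
INSERT INTO table1 (last_update_timestamp)
SELECT to_timestamp(last_update_timestamp, 'DD-MON-YY HH12.MI.SS.US AM')
FROM table1_staging;
```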

Hope this helps.

-- 
Victor Yegorov


Re: CSV From Oracle with timestamp column getting errors

2021-03-22 Thread Tim Cross


"Saha, Sushanta K"  writes:

> \COPY table1 FROM '/tmp/Oracle_2020_06.csv' DELIMITER ',' CSV HEADER;
> ERROR:  invalid input syntax for type timestamp: "01-JUN-20 06.04.20.634000 
> AM"
> CONTEXT:  COPY table1, line 2, column last_update_timestamp: "01-JUN-20 
> 06.04.20.634000 AM"
>

The problem is that psql doesn't understand/recognise the timestamp format
being used in the CSV dump from Oracle.

Modify the SQL used to extract the data from Oracle so that it formats
the timestamp as a string which psql can parse into a timestamp type -
for example ISO or any of the psql timestamp formats (see psql manual). 

-- 
Tim Cross




Re: [E] Re: CSV From Oracle with timestamp column getting errors

2021-03-22 Thread Saha, Sushanta K
Awesome. Thanks Victor!

 Sushanta


On Mon, Mar 22, 2021 at 4:47 PM Victor Yegorov  wrote:

> On Mon, 22 Mar 2021 at 21:38, Saha, Sushanta K <
> sushanta.s...@verizonwireless.com>:
>
>> \COPY table1 FROM '/tmp/Oracle_2020_06.csv' DELIMITER ',' CSV HEADER;
>> ERROR:  invalid input syntax for type timestamp: "01-JUN-20
>> 06.04.20.634000 AM"
>> CONTEXT:  COPY table1, line 2, column last_update_timestamp: "01-JUN-20
>> 06.04.20.634000 AM"
>>
>> Appreciate any help with this psql command.
>>
>
> I would recommend issuing one of these on the Oracle side *before* taking
> the CSV snapshot.
> export NLS_DATE_FORMAT="YYYY-MM-DD HH24:MI:SS"
> ALTER SESSION SET nls_date_format='YYYY-MM-DD HH24:MI:SS';
>
> Otherwise, you have to load this CSV file in a table, that has `text` type
> for the column and do a post-processing,
> smth like:
>
> INSERT INTO permanent_tab
> SELECT *, to_timestamp(col, 'DD-MON-YY HH12.MI.SS.US AM') FROM
> temp_table;
>
> Hope this helps.
>
> --
> Victor Yegorov
>


-- 

*Sushanta Saha|*MTS IV-Cslt-Sys Engrg|WebIaaS_DB Group|HQ -
* VerizonWireless O 770.797.1260  C 770.714.6555 Iaas Support Line
949-286-8810*


Re: Binary encoding of timetz type

2021-03-22 Thread Merlin Moncure
On Tue, Mar 16, 2021 at 1:31 PM Ruslan  wrote:
>
> Hi folks,
>
> Apologies if it's the wrong place to ask. I have a question for postgres 
> developers.
>
> I wrote parser for COPY binary encoding. Everything is great but one thing 
> worries me. It seems like the time offset field has reversed digit sign. I'm 
> referring to this function
>
> https://github.com/postgres/postgres/blob/REL_13_STABLE/src/backend/utils/adt/date.c#L2150-L2159
>
> COPY response for value '13:37:00-0300'::timetz gives me binary 
> 0x000b69d1d7002a30. And value '13:37:00+0300'::timetz is encoded as 
> 0x000b69d1d700d5d0.
>
> 0xd5d0 is -10800 for signed integer and 0x2a30 is 10800.
>
> I reverse the sign in my code but feeling that I'm missing something is 
> nagging me. Please advise.

Check out libpqtypes -- it has client side send/receive functions for
binary format you can crib from.

https://github.com/pgagarinov/libpqtypes/blob/master/source/src/datetime.c

merlin




Re: Binary encoding of timetz type

2021-03-22 Thread Kenneth Marshall
On Mon, Mar 22, 2021 at 08:11:55PM -0500, Merlin Moncure wrote:
> Check out libpqtypes -- it has client side send/receive functions for
> binary format you can crib from.
> 
> https://github.com/pgagarinov/libpqtypes/blob/master/source/src/datetime.c
> 
> merlin

Hi,

I just wanted to +1 the libpqtypes. An excellent piece of work.

Regards,
Ken




SV: design partioning scheme for selecting from latest partition

2021-03-22 Thread Niels Jespersen



>From: Francisco Olarte 
>Sent: 22 March 2021 20:04
>To: Niels Jespersen 
>Cc: pgsql-general@lists.postgresql.org
>Subject: Re: design partioning scheme for selecting from latest partition
>
>Niels:
>
>On Mon, Mar 22, 2021 at 3:40 PM Niels Jespersen  wrote:
>...
>> -- Then I want to be able to do this without scanning all partitions for the
>> highest version number.
>>
>> select s.* from s where s.version = (select max(version) from s);
>
>> I could add an index on the version column. But the only use would be for the
>> newest partition, so that seems a bit like overkill, indexing hundreds of
>> millions of rows.
>
>Without an index, or some caching, you would need to scan partitions.
>...
>Even if you can do something like that, without an index you will need a full 
>scan, or do some trigger magic and keep a cache ( just keep version, count(*) 
>on a table and maintain it ). If your partitions are ordered, you can always 
>keep the last one indexed, or if you know versions do not decrease, you may 
>keep things cached. This seems to be the kind of problem where the generic 
>solution is hard but a little insider knowledge can accelerate it a lot.
>
>Regards.
>   Francisco Olarte.

Thank you Francisco

I think I will revisit the whole design. Better do it right. 

Niels