Need optimization in query

2022-06-10 Thread Shubham Mittal
Hi Team.

*I have a use case to get the result as follows:*

1. asin_ymm is never null.
2. If there is more than one entry for an asin_ymm with both null and
non-null SubmodelID, I should return only the rows with a non-null
SubmodelID; otherwise, if no SubmodelID is present for an asin_ymm,
return that row with a null SubmodelID.
3. Also, if SubmodelID is null, fitment_key is assumed to always be null
in the table.
4. Using that result set: if there is more than one entry for an
(asin_ymm, SubmodelID) pair with both null and non-null fitment_key, I
should return only the rows with a non-null fitment_key; otherwise, if no
fitment_key is present for an (asin_ymm, SubmodelID) pair, return that
row with a null fitment_key.
5. Using that result set, I need to return the rows having the maximum
number of values in fitment_key (e.g. out of ('A', 'D', '1--3-4') and
('A', 'D', '1-2-3-4-7'), I should return the row ('A', 'D', '1-2-3-4-7')).

create table fitment_records(asin_ymm varchar(50), SubmodelID varchar(50), fitment_key varchar(50));

insert into fitment_records(asin_ymm, SubmodelID, fitment_key) values('A', null, null);
insert into fitment_records(asin_ymm, SubmodelID, fitment_key) values('B', null, null);
insert into fitment_records(asin_ymm, SubmodelID, fitment_key) values('C', null, null);
insert into fitment_records(asin_ymm, SubmodelID, fitment_key) values('A', 'D', '1--3-4');
insert into fitment_records(asin_ymm, SubmodelID, fitment_key) values('B', 'E', '2-3-4-5');
insert into fitment_records(asin_ymm, SubmodelID, fitment_key) values('B', 'E', null);
insert into fitment_records(asin_ymm, SubmodelID, fitment_key) values('A', 'F', '2-3');
insert into fitment_records(asin_ymm, SubmodelID, fitment_key) values('A', 'E', null);
insert into fitment_records(asin_ymm, SubmodelID, fitment_key) values('A', 'D', null);
insert into fitment_records(asin_ymm, SubmodelID, fitment_key) values('A', 'D', '1-2-3-4-7');

Output should be:

 asin_ymm | submodelid | fitment_key
----------+------------+-------------
 A        | D          | 1-2-3-4-7
 A        | E          | null
 A        | F          | 2-3
 B        | E          | 2-3-4-5
 C        | null       | null

Currently I have written the queries below for this use case. Can we
optimise them further, considering the data runs into millions of rows?

create temporary view tv1 as (SELECT *
FROM fitment_records fr_1
WHERE fitment_key IS NOT NULL OR
(fitment_key IS NULL AND
NOT EXISTS (SELECT 1 FROM fitment_records fr_2
  WHERE fr_2.asin_ymm = fr_1.asin_ymm AND
fr_2.SubmodelID = fr_1.SubmodelID and
fr_2.fitment_key IS NOT NULL)));

create temporary view tv2 as (select *
FROM tv1 fr_1
WHERE SubmodelID IS NOT NULL OR
(SubmodelID IS NULL AND
NOT EXISTS (SELECT 1 FROM fitment_records fr_2
  WHERE fr_2.asin_ymm = fr_1.asin_ymm AND
fr_2.SubmodelID IS NOT NULL) ));

create temporary view fitment_records_with_fitment_key_size as (
select asin_ymm, SubmodelID, fitment_key,
Array_Length(string_to_array(fitment_key, '-'),1) as fitment_key_size
from tv2
where SubmodelID is not null
and fitment_key is not null
group by asin_ymm, SubmodelID, fitment_key
);

create temporary view fitment_records_with_fitment_key_max_size as (
select asin_ymm, SubmodelID, max(fitment_key_size) as max_fitment_key_size
from fitment_records_with_fitment_key_size
group by asin_ymm, SubmodelID
);

select * from tv2
except
select f2.*
from fitment_records_with_fitment_key_size frws,
fitment_records_with_fitment_key_max_size frwms,
tv2 f2
where frws.asin_ymm = frwms.asin_ymm
AND frws.SubmodelID = frwms.SubmodelID
AND frws.fitment_key_size < frwms.max_fitment_key_size
AND frws.SubmodelID = f2.SubmodelID
AND frws.asin_ymm = f2.asin_ymm
AND frws.fitment_key = f2.fitment_key;
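
For comparison, here is a single-pass alternative I sketched (assuming PostgreSQL; bool_or() as a window function flags whether an asin_ymm has any non-null SubmodelID, and DISTINCT ON with NULLS LAST picks the longest non-null fitment_key per group, falling back to the null row):

select distinct on (asin_ymm, SubmodelID)
       asin_ymm, SubmodelID, fitment_key
from (
    select fr.*,
           -- true when any row for this asin_ymm carries a SubmodelID
           bool_or(SubmodelID is not null) over (partition by asin_ymm) as has_submodel
    from fitment_records fr
) s
where SubmodelID is not null or not has_submodel
order by asin_ymm, SubmodelID,
         -- longest key first; rows with null fitment_key sort last
         array_length(string_to_array(fitment_key, '-'), 1) desc nulls last;

This reads the table roughly once instead of four times; note that ties on key length keep only one arbitrary row here, unlike the EXCEPT version. Either way, an index on (asin_ymm, SubmodelID) should help on millions of rows.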

Thanks & Regards


Re: Need optimization in query

2022-06-13 Thread Shubham Mittal
Hi Team,

Has anybody had a chance to look at this? Any suggestions for
optimisation?

Thanks


Need help on query optimization

2021-03-22 Thread Shubham Mittal
Hi Team,

I am trying to execute the query below and I get the following explain
plan in Postgres. Please provide any input on how to optimize it.

Here B is an ltree column and E is a jsonb column.

EXPLAIN ANALYZE SELECT * FROM A
WHERE (B <@ 'INDIA') AND C = 'D'
  AND CAST(E->'F'->'G'->>'H' AS DATE) >= '2021-02-01'
  AND CAST(E->'F'->'G'->>'H' AS DATE) <= '2021-02-24'
ORDER BY E->'F'->>'J' ASC, created_date DESC
OFFSET 0 ROWS FETCH NEXT 200 ROWS ONLY;



"Limit  (cost=22009.81..22010.08 rows=105 width=3853) (actual
time=2295.654..2295.688 rows=200 loops=1)"

"  ->  Sort  (cost=22009.81..22010.08 rows=105 width=3853) (actual
time=2295.651..2295.671 rows=200 loops=1)"

"Sort Key: (((E -> 'F'::text) ->> 'J'::text)), created_date DESC"

"Sort Method: top-N heapsort  Memory: 355kB"

"->  Index Scan using task_opp_tlmd_iscmp_idx on task
(cost=0.56..22006.29 rows=105 width=3853) (actual time=3.788..2277.503
rows=10982 loops=1)"

"  Index Cond: (C = 'D'::ltree)"

"  Filter: ((B <@ 'INDIA'::ltree) AND (E -> 'F'::text) ->
'G'::text) ->> 'H'::text))::date >= '2021-02-01'::date) AND (E ->
'F'::text) -> 'G'::text) ->> 'H::text))::date <= '2021-02-24'::date))"

"  Rows Removed by Filter: 14738"

"Planning Time: 0.418 ms"

"Execution Time: 2295.981 ms"


Thanks & Regards,

Shubham


How to generate file from postgres data

2021-06-12 Thread Shubham Mittal
Hi Team,

I want to execute some queries on Postgres and generate reports
containing the result set on the client side, in the most efficient way.
The data can be up to 1 GB.

Could you please help?
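
A minimal sketch of one common approach, assuming psql is available on the client: \copy runs the query server-side but streams rows straight into a client-side file, so a ~1 GB result set never has to fit in application memory (table, columns, and path below are placeholders):

\copy (select id, created_date from some_table order by id) to 'report.csv' csv header

COPY ... TO STDOUT over a driver's copy API (e.g. JDBC's CopyManager) is the same idea when psql is not an option.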

Thanks and regards,
Shubham


Use case stuck due to Partitioning

2021-06-28 Thread Shubham Mittal
Hi All,

Please find my use case below and suggest the appropriate solution:

We have applied declarative partitioning to our table A based on some key,
and further range-subpartitioned each partition into 90-day ranges of
created_date (a sketch of the layout is below).
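
A minimal DDL sketch of that layout, with hypothetical names and one 90-day range picked for illustration:

create table a (
    b             text not null,
    created_date  timestamp not null,
    modified_date timestamp
    -- ... other columns
) partition by list (b);

create table a_key1 partition of a
    for values in ('key1')
    partition by range (created_date);

-- one 90-day subpartition per key (upper bound exclusive)
create table a_key1_2021q2 partition of a_key1
    for values from ('2021-04-01') to ('2021-06-30');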

In the search query that displays the last 90 days of data, I supply both
column B and created_date, which lets the planner pick the required
partitions.

But how can I display the last 90 days of data such that the record
modified most recently is shown at the top, even though its created_date
might not lie in the last-90-days range?

Basically, I need to partition my huge table both on some key and on a
date, to split the data into smaller datasets for faster queries. The UI
needs to display the latest-modified records first, but created_date also
has to be given in the query to pick the right partitions.

Can we partition differently to achieve this?
Should I find the max modified date and use it to derive the created_date
range?

Thanks & Regards


Re: Use case stuck due to Partitioning

2021-06-28 Thread Shubham Mittal
Hi Michael,

Yes I have a btree index on the *modified_date* column currently.

SELECT *
FROM partitioned_table
WHERE A = 'Value'
  AND created_date >= '2021-03-01 08:16:13.589'
  AND created_date <= '2021-04-02 08:16:13.589'
ORDER BY viewpriority DESC
OFFSET 0 ROWS FETCH NEXT 200 ROWS ONLY;

Here viewpriority is basically a long value containing created_date in
milliseconds.

The issue: I want to somehow include modified_date in this query as well,
so the records come back sorted by modified_date. But that sorting would
happen only within the specified created_date range, and I also want the
latest-modified records whose created_date might not lie in this range.
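
The shape I keep coming back to is something like this sketch (the widened created_date lower bound is a guess at how far back a recently-modified record can live, and it still lets the planner prune partitions):

SELECT *
FROM partitioned_table
WHERE A = 'Value'
  AND created_date >= '2021-01-01 00:00:00'   -- wide enough to cover old
                                              -- rows modified recently
  AND modified_date >= '2021-03-01 08:16:13.589'
ORDER BY modified_date DESC
OFFSET 0 ROWS FETCH NEXT 200 ROWS ONLY;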

Thanks



On Tue, Jun 29, 2021 at 1:22 AM Michael Lewis  wrote:

> Do you have an index on the "updated_at" field and found that the query is
> too slow? Do you have an example query?
>


Query takes around 15 to 20 min over 20Lakh rows

2021-09-02 Thread Shubham Mittal
Hi,

*Please help in optimizing this query. I need to generate reports daily
using it, and it takes almost 15 to 20 minutes to execute due to the
joins.*
*Here common_details is a jsonb column.*

SELECT T.order_id,
       T.common_details -> 'commonDetails' -> 'bundle' ->> 'srType'           AS product,
       T.common_details -> 'commonDetails' -> 'bundle' ->> 'mobileNumber'     AS msisdn,
       T.common_details -> 'commonDetails' -> 'bundle' ->> 'alternateNumber'  AS alternate_number,
       T.common_details -> 'commonDetails' -> 'bundle' ->> 'circle'           AS parent_circle,
       T.circle,
       T.common_details -> 'commonDetails' -> 'bundle' ->> 'srNumber'         AS complaint_number,
       T.common_details -> 'commonDetails' -> 'bundle' ->> 'caseType'         AS complaint_type,
       T.common_details -> 'commonDetails' -> 'bundle' ->> 'status'           AS status,
       T.status                                                               AS task_status,
       T.common_details -> 'commonDetails' -> 'bundle' ->> 'subType'          AS subtype,
       T.common_details -> 'commonDetails' -> 'bundle' ->> 'subSubType'       AS subsubtype,
       T.common_details -> 'commonDetails' -> 'bundle' ->> 'source'           AS source,
       T.common_details -> 'commonDetails' -> 'bundle' ->> 'custType'         AS customer_type,
       T.common_details -> 'commonDetails' -> 'bundle' ->> 'custClass'        AS customer_class,
       T.common_details -> 'commonDetails' -> 'bundle' ->> 'custValue'        AS customer_value,
       T.common_details -> 'commonDetails' -> 'bundle' ->> 'accountNumber'    AS account_number,
       To_char((T.common_details -> 'commonDetails' -> 'bundle' ->> 'slaDt')::timestamp,
               'DD/MM/YYYY HH24:MI:SS')                                       AS sladt,
       To_char((T.common_details -> 'commonDetails' -> 'bundle' ->> 'srDt')::timestamp,
               'DD/MM/YYYY HH24:MI:SS')                                       AS sr_date,
       CASE Lower(T.status)
         WHEN 'reopen' THEN NULL
         ELSE To_char(totalTimeJoin.modified_date, 'DD/MM/YYYY HH24:MI:SS')
       END                                                                    AS resolutiondatetime,
       To_char(reopenJoin.modified_date, 'DD/MM/YYYY HH24:MI:SS')             AS reopen_date,
       T.dynamic_data ->> 'resolution_code'                                   AS rc,
       T.dynamic_data ->> 'fault_found_code'                                  AS ffc,
       T.common_details -> 'commonDetails' -> 'bundle' ->> 'servingCellId'    AS serving_cell_id,
       Coalesce(T.common_details -> 'commonDetails' -> 'bundle' ->> 'servingSiteId',
                T.common_details -> 'commonDetails' -> 'bundle' ->> 'producthandsetType')
                                                                              AS servingsiteid,
       T.common_details -> 'commonDetails' -> 'bundle' ->> 'customerLat'      AS customer_lat,
       T.common_details -> 'commonDetails' -> 'bundle' ->> 'customerLng'      AS customer_long,
       T.common_details -> 'commonDetails' -> 'bundle' ->> 'thanksCustomer'   AS thanks_flag,
       T.common_details -> 'commonDetails' -> 'bundle' ->> 'custValue'        AS black_flag,
       T.common_details -> 'commonDetails' -> 'bundle' ->> 'caseType'         AS sr_ftr,
       T.dynamic_data ->> 'dsl_connection'                                    AS dsl,
       T.common_details -> 'commonDetails' -> 'bundle' ->> 'appInstalled'     AS app_installed,
       T.common_details -> 'commonDetails' -> 'bundle' ->> 'voiceMOU'         AS voice_mou,
       T.common_details -> 'commonDetails' -> 'bundle' ->> 'dataConsumed'     AS data_mou,
       ( T.common_details -> 'commonD

Re: Query takes around 15 to 20 min over 20Lakh rows

2021-09-06 Thread Shubham Mittal
20 lakh (about 2 million) is the current number of rows in the task table
on which the query is executed.

On Mon, Sep 6, 2021, 11:44 PM Matthias Apitz  wrote:

>
> What does the term 'over 20Lakh rows' mean? Thanks
>
> matthias
> --
> Matthias Apitz, ✉ g...@unixarea.de, http://www.unixarea.de/
> +49-176-38902045
> Public GnuPG key: http://www.unixarea.de/key.pub
> August 13, 1961: Better a wall than a war. And, while the GDR was still
> existing,
> no German troups and bombs have been killed in Yugoslavia, Afghanistan,
> Afrika...
>


Re: Query takes around 15 to 20 min over 20Lakh rows

2021-09-13 Thread Shubham Mittal
Hi Tom/David

Could you please help me get started on optimising this query?

Thanks & Regards
Shubham Mittal



On Tue, Sep 7, 2021, 8:57 PM Michael Lewis  wrote:

> Have you ever used this site to visualize the explain plan and spot bad
> estimates and slow nodes? https://explain.depesz.com/s/WE1R
>
> This stands out to me-
>
> Subquery Scan on npiactionjoin  (cost=10,165,289.40..10,167,192.01
> rows=293 width=16) (actual time=118,413.432..118,806.684 rows=446,782
> loops=1)
>   Filter: (npiactionjoin.rn = 1)
>
> It seems that estimate is pretty far off and this node and the final node
> above this are the biggest slowdowns. If you filtered down to the record
> you want from task_history BEFORE the join, then maybe you would have
> quicker results. I might try a materialized CTE or even an analyzed temp
> table if that option is available to you, so the planner makes informed
> decisions.
>
> By the way, the order by on that row_number seems like you are getting the
> OLDEST activity related to the task which could maybe be cached rather than
> re-calculated daily as this query runs.
>
>
> *Michael Lewis  |  Database Engineer*
> *Entrata*
>
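
For reference, a minimal sketch of the materialized-CTE shape suggested above (PostgreSQL 12+ syntax; task_history, task_id, and the join are assumptions, since the full query was truncated):

WITH first_action AS MATERIALIZED (
    SELECT task_id, modified_date,
           row_number() OVER (PARTITION BY task_id
                              ORDER BY modified_date) AS rn
    FROM task_history
)
SELECT t.order_id, fa.modified_date
FROM task t
JOIN first_action fa ON fa.task_id = t.task_id AND fa.rn = 1;

MATERIALIZED prevents the CTE from being inlined, so the per-task filtering happens before the join rather than inside it.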


Query time related to limit clause

2021-10-04 Thread Shubham Mittal
Hi Team,

*I have shared the execution times of two queries below:*

*I need to find only the first row matching the criteria, but LIMIT 1
takes more time than LIMIT 15 or more. Can anyone explain why, and how I
can achieve the same result in less time?*

explain analyze SELECT * from abc
where organisation_process_path = cast('org' as ltree)
  and abc_type = 'secondary'
  and common_details->'commonDetails'->'nchBundle'->>'subGridName' = '905811-22_MISCN_data'
  and status <> 'CLOSED'
  and sub_product_type = 'Prepaid'
  and created_date >= cast('2021-03-23 00:00:00.000' AS TIMESTAMP)
  and created_date <= cast('2021-09-23 00:00:00.000' AS TIMESTAMP)
order by created_date asc *LIMIT 1*

"Limit  (cost=1.31..941.32 rows=1 width=6947) (actual
time=5117.039..5117.042 rows=1 loops=1)"
"  ->  Merge Append  (cost=1.31..4476296.09 rows=4762 width=6947) (actual
time=5117.036..5117.038 rows=1 loops=1)"
"Sort Key: abc_serv_nch_q1_2021.created_date"
"->  Index Scan using abc_serv_nch_q1_2021_created_date_idx on
abc_serv_nch_q1_2021  (cost=0.43..378412.39 rows=1005 width=7025) (actual
time=742.277..742.277 rows=0 loops=1)"
"  Index Cond: ((created_date >= '2021-03-23
00:00:00'::timestamp without time zone) AND (created_date <= '2021-09-23
00:00:00'::timestamp without time zone))"
"  Filter: (((status)::text <> 'CLOSED'::text) AND
(organisation_process_path = 'org'::ltree) AND ((abc_type)::text =
'secondary'::text) AND ((sub_product_type)::text = 'Prepaid'::text) AND
common_details -> 'commonDetails'::text) -> 'nchBundle'::text) ->>
'subGridName'::text) = '905811-22_MISCN_data'::text))"
"  Rows Removed by Filter: 558116"
"->  Index Scan using abc_serv_nch_q2_2021_created_date_idx on
abc_serv_nch_q2_2021  (cost=0.43..2674454.09 rows=3756 width=6928) (actual
time=2074.950..2074.950 rows=1 loops=1)"
"  Index Cond: ((created_date >= '2021-03-23
00:00:00'::timestamp without time zone) AND (created_date <= '2021-09-23
00:00:00'::timestamp without time zone))"
"  Filter: (((status)::text <> 'CLOSED'::text) AND
(organisation_process_path = 'org'::ltree) AND ((abc_type)::text =
'secondary'::text) AND ((sub_product_type)::text = 'Prepaid'::text) AND
common_details -> 'commonDetails'::text) -> 'nchBundle'::text) ->>
'subGridName'::text) = '905811-22_MISCN_data'::text))"
"  Rows Removed by Filter: 1743539"
"->  Index Scan using abc_serv_nch_q3_2021_created_date_idx on
abc_serv_nch_q3_2021  (cost=0.43..1423368.04 rows=1 width=6548) (actual
time=2299.805..2299.805 rows=0 loops=1)"
"  Index Cond: ((created_date >= '2021-03-23
00:00:00'::timestamp without time zone) AND (created_date <= '2021-09-23
00:00:00'::timestamp without time zone))"
"  Filter: (((status)::text <> 'CLOSED'::text) AND
(organisation_process_path = 'org'::ltree) AND ((abc_type)::text =
'secondary'::text) AND ((sub_product_type)::text = 'Prepaid'::text) AND
common_details -> 'commonDetails'::text) -> 'nchBundle'::text) ->>
'subGridName'::text) = '905811-22_MISCN_data'::text))"
"  Rows Removed by Filter: 1320434"



*"Planning Time: 18.563 ms""Execution Time: 5117.157 ms"*

WHEN LIMIT IS GIVEN MORE THAN EQUAL TO
15*

explain analyze SELECT * from abc
where organisation_process_path = cast('org' as ltree)
  and abc_type = 'secondary'
  and common_details->'commonDetails'->'nchBundle'->>'subGridName' = '905811-22_MISCN_data'
  and status <> 'CLOSED'
  and sub_product_type = 'Prepaid'
  and created_date >= cast('2021-03-23 00:00:00.000' AS TIMESTAMP)
  and created_date <= cast('2021-09-23 00:00:00.000' AS TIMESTAMP)
order by created_date asc *LIMIT 15*

   "Limit  (cost=12708.06..12708.09 rows=15 width=6947) (actual
time=0.428..0.431 rows=15 loops=1)"
"  ->  Sort  (cost=12708.06..12719.96 rows=4762 width=6947) (actual
time=0.426..0.428 rows=15 loops=1)"
"Sort Key: abc_serv_nch_q1_2021.created_date"
"Sort Method: top-N heapsort  Memory: 40kB"
"->  Append  (cost=7201.82..12591.22 rows=4762 width=6947) (actual
time=0.081..0.366 rows=299 loops=1)"
"  ->  Bitmap Heap Scan on abc_serv_nch_q1_2021
 (cost=7201.82..8338.60 rows=1005 width=7025) (actual time=0.038..0.038
rows=0 loops=1)"
"Recheck Cond: (common_details ->
'commonDetails'::text) -> 'nchBundle'::text) ->> 'subGridName'::text) =
'905811-22_MISCN_data'::text) AND ((sub_product_type)::text =
'Prepaid'::text) AND ((abc_type)::text = 'secondary'::text) AND
((status)::text <> 'CLOSED'::text) AND (created_date >= '2021-03-23
00:00:00'::timestamp without time zone) AND (created_date <= '2021-09-23
00:00:00'::timestamp without time zone))"
"Filter: (organisation_process_path = 'org'::ltree)"
"->  BitmapAnd  (cost=7201.82..7201.82 rows=1005
width=0)
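
One workaround sketch for this kind of row-count mis-estimate (AS MATERIALIZED needs PostgreSQL 12+): materialize the filtered rows first, so the planner cannot switch to walking the created_date indexes just because LIMIT 1 looks cheap:

explain analyze
with hits as materialized (
    SELECT * from abc
    where organisation_process_path = cast('org' as ltree)
      and abc_type = 'secondary'
      and common_details->'commonDetails'->'nchBundle'->>'subGridName' = '905811-22_MISCN_data'
      and status <> 'CLOSED'
      and sub_product_type = 'Prepaid'
      and created_date >= cast('2021-03-23 00:00:00.000' AS TIMESTAMP)
      and created_date <= cast('2021-09-23 00:00:00.000' AS TIMESTAMP)
)
select * from hits order by created_date asc limit 1;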

Query Timeout not working using jpa

2021-11-09 Thread Shubham Mittal
Hi Team,

Can anyone help with setting a query timeout in Postgres 12 using Spring
Data JPA?

Currently I am using the code below:

query.setHint("javax.persistence.query.timeout", 1);

Thanks & Regards,
Shubham


PGBouncer logs explanation required

2021-12-19 Thread Shubham Mittal
Hi Team,

I get the logs below in PgBouncer when I configure a query timeout in the
Spring Boot application as follows:

SessionImplementor session = entityManager.unwrap(SessionImplementor.class);
Connection connection = session.connection();
connection.createStatement().execute("SET LOCAL statement_timeout TO "
+ queryTimeOut);

2021-11-25 14:45:45.244 IST [18307] LOG C-0x6ae270: sit/postgres@abcd login attempt: db=sit user=postgres tls=no
2021-11-25 14:45:45.299 IST [18307] LOG S-0x6b4f48: sit/postgres@abcd new connection to server (from abcd)
2021-11-25 14:46:17.843 IST [18307] LOG S-0x6b4d10: sit/postgres@abcd closing because: query timeout (age=185s)
2021-11-25 14:46:17.843 IST [18307] LOG C-0x6ae038: sit/postgres@abcd closing because: query timeout (age=185s)
2021-11-25 14:46:17.843 IST [18307] WARNING C-0x6ae038: sit/postgres@abcd pooler error: query timeout
2021-11-25 14:46:25.763 IST [18307] LOG stats: 0 xacts/s, 0 queries/s, in 30 B/s, out 141 B/s, xact 3660 us, query 4362 us, wait 152 us

Does this log mean that the connection was closed and returned to the
pool, so it can be reused again? Or something else? Please help.
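
For what it's worth, as I understand it the "closing because: query timeout" lines come from PgBouncer's own query_timeout setting, not from statement_timeout in Postgres, and the server connection is dropped rather than returned to the pool. A minimal config sketch, assuming the setting is present:

; pgbouncer.ini (value in seconds)
query_timeout = 120   ; queries running longer than this are killed

Also note that SET LOCAL outside an explicit transaction has no effect, so the statement_timeout set in the Java code above may never actually apply.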