Hi,
Could someone verify the attached query to check its performance and
suggest some steps to improve it? This query is created as a view. The
view is used to get the aggregates of orders based on their current status.
Thanks
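The actual query is in the attachment, but purely as a hypothetical sketch, an
aggregate-by-status view of the shape described above might look like this; the
table and column names here are made up:

CREATE OR REPLACE VIEW order_status_summary AS   -- hypothetical sketch only
SELECT o.order_status,
       count(*)                      AS order_count,
       round(sum(o.order_amount), 2) AS total_amount
FROM orders o
GROUP BY o.order_status;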
HashAggregate (cost=334063.59..334064.17 rows=58 width=213) (actual
FROM choice AS c, menu_item_choice AS mc, menu_item AS mi
WHERE c.choice_id = mc.choice_id AND mc.menu_item_id =
mi.menu_item_id AND mc.menu_item_id = i_menu_item_id AND mi.active = 'Y';
END;
$BODY$;
On Fri, Jun 25, 2021 at 7:09 PM Ayub Khan wrote:
>
> I am using postgresql 12 and using cursors in a stored procedure,
I set cursor_tuple_fraction to 1 and now I am seeing high CPU for FETCH
ALL.
The number of rows returned is less than 200. Why is the high CPU being
shown for FETCH ALL?
-Ayub
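For reference, cursor_tuple_fraction tells the planner what fraction of a
cursor's rows will actually be fetched, so 1.0 makes cursor plans optimize for
reading the whole result set, like a plain query. A quick way to inspect and
change it per session:

SHOW cursor_tuple_fraction;        -- default is 0.1
SET cursor_tuple_fraction = 1.0;   -- plan cursors as if all rows will be fetched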
On Fri, 25 Jun 2021, 19:09 Ayub Khan, wrote:
>
> I am using postgresql 12 and using cursors in a stored procedure,
not
create a high cpu on the database.
--Ayub
On Fri, Jun 25, 2021 at 7:09 PM Ayub Khan wrote:
>
> I am using postgresql 12 and using cursors in a stored procedure,
> executing procedure which has cursor is slowing down the call. However if I
> do not use the cursor and just execute the queries using JDBC (Java client) it's fast.
I am using postgresql 12 and using cursors in a stored procedure;
executing the procedure which has the cursor is slowing down the call. However,
if I do not use the cursor and just execute the queries using JDBC (Java client)
it's fast.
Is there any setting which needs to be modified to improve the performance?
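One pattern that sometimes avoids the cursor overhead, sketched here only as an
illustration and not as your actual procedure, is returning the rows directly so
the JDBC driver can execute the call like any other query. This reuses the table
names that appear elsewhere in this thread; the function name and column types
are made up:

CREATE OR REPLACE FUNCTION get_menu_item_choices(i_menu_item_id bigint)
RETURNS TABLE (menu_item_id bigint, choice_id bigint)
LANGUAGE sql STABLE
AS $BODY$
    SELECT mi.menu_item_id, mc.choice_id
    FROM menu_item mi
    JOIN menu_item_choice mc ON mc.menu_item_id = mi.menu_item_id
    JOIN choice c ON c.choice_id = mc.choice_id
    WHERE mi.menu_item_id = i_menu_item_id
      AND mi.active = 'Y';
$BODY$;

The client can then run it as a plain SELECT * FROM get_menu_item_choices(?)
without any cursor handling.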
tegory_id =
i_category_id OR i_category_id IS NULL)
ORDER BY combo_id;
END;
$BODY$;
1. open connection
2. set auto commit to false
3. create callable statement
4. execute the call
5. get the results
6. set autocommit to true
7. close the resultset, callable statement and connection
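At the SQL level the sequence above corresponds roughly to the following
(procedure and cursor names are made up); the explicit transaction is what
autocommit=false provides, and it is needed because a refcursor only lives until
the end of the transaction:

BEGIN;                                   -- autocommit=false opens the transaction
SELECT get_order_summary('order_cur');   -- hypothetical function that opens and returns a refcursor
FETCH ALL IN "order_cur";                -- read the result set through the cursor
COMMIT;                                  -- transaction ends and the cursor is closed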
tps = 18001.935315 (including connections establishing)
tps = 18002.205940 (excluding connections establishing)
On Wed, Jun 9, 2021 at 5:47 PM Ayub Khan wrote:
> attached is the screenshot of RDS performance insights for AWS and it
> shows high waiting client writes. The api performance is slow.
RDS, installed postgresql on a
VM similar to the one where oracle is installed, and tested it. Now, even
when both the client and postgresql VMs have the same MTU settings, I can
still see ClientWrite waits in pg_stat_activity.
-Ayub
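In case it helps, those waits can also be watched directly from
pg_stat_activity, for example:

-- count active sessions per wait event; ClientWrite means the server is
-- waiting to send result data to the client
SELECT wait_event_type, wait_event, count(*)
FROM pg_stat_activity
WHERE state = 'active'
GROUP BY wait_event_type, wait_event
ORDER BY count(*) DESC;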
On Wed, Jun 9, 2021 at 5:47 PM Ayub Khan wrote:
> attached
duration: 600 s
number of transactions actually processed: 12007
latency average = 2480.042 ms
latency stddev = 242.243 ms
tps = 19.955602 (including connections establishing)
tps = 19.955890 (excluding connections establishing)
On Wed, Jun 9, 2021 at 5:47 PM Ayub Khan wrote:
> attached is the screenshot of RDS performance insights for AWS and it shows high waiting client writes.
Ranier, Vijay,
Sure, will try and check out pgbench and the MTU.
--Ayub
On Wed, 9 Jun 2021, 17:47 Ayub Khan, wrote:
> attached is the screenshot of RDS performance insights for AWS and it
> shows high waiting client writes. The api performance is slow. I read that
> this might be due to IO
Ranier,
This issue is only with the queries which are slow; if it were an MTU issue
then it should affect all the APIs.
I tried on an Aurora db and I see the same plan and also the same slowness.
On Wed, 9 Jun 2021, 17:47 Ayub Khan, wrote:
> attached is the screenshot of RDS performance insights for AWS and it
Jeff,
Both the Tomcat VM and the RDS VMs have 25 Gbps networking.
The PostgreSQL DB instance class is db.r6g.16xlarge.
The Tomcat VM is c5.9xlarge.
--Ayub
On Wed, 9 Jun 2021, 17:47 Ayub Khan, wrote:
> attached is the screenshot of RDS performance insights for AWS and it
> shows high waiting client writes. The api performance is slow.
Ranier,
Both the production and test VMs are running Ubuntu.
The below command, when executed from the client VM, shows that it is using
PMTU 9001.
# tracepath dns-name-of-rds
1?: [LOCALHOST] pmtu 9001
On Wed, Jun 9, 2021 at 5:47 PM Ayub Khan wrote:
> attached is the screenshot of RDS performance insights for AWS and it
.compute.internal (xxx.xx.xx.x) 0.187ms pmtu 1500
Should the LOCALHOST PMTU be updated to 1500?
On Wed, Jun 9, 2021 at 5:47 PM Ayub Khan wrote:
> attached is the screenshot of RDS performance insights for AWS and it
> shows high waiting client writes. The api performance is slow. I read that this might be due to IOPS on RDS.
} finally {
    // close JDBC resources in reverse order of creation
    if (rs != null)
        rs.close();
    if (rs1 != null)
        rs1.close();
    if (ps != null)
        ps.close();
    if (callableStatement != null)
        callableStatement.close();
    if (con != null)
        con.close();
}
return menuMobileListCombo;
}
On Wed, Jun 9, 2021 at 5:47 PM Ayub Khan wrote:
> attached is the screenshot of RDS performance insights for AWS and it
.001 rows=1
loops=89)
Index Cond: (size_id = c.size_id)
-> Index Scan using "restaurant_idx$$_274b003d" on restaurant
f (cost=0.29..2.30 rows=1 width=12) (actual time=0.001..0.001 rows=1
loops=89)
Index Cond: (restaurant_id = 1528)
-> S
s eu-central-1a
On Wed, Jun 9, 2021 at 5:47 PM Ayub Khan wrote:
> attached is the screenshot of RDS performance insights for AWS and it
> shows high waiting client writes. The api performance is slow. I read that
> this might be due to IOPS on RDS. However we have 80k IOPS on t
I did profiling of the application and it seems most of the CPU consumption
is for executing the stored procedure. Attached is the screenshot of the
profile
--Ayub
On Wed, Jun 9, 2021 at 5:47 PM Ayub Khan wrote:
> attached is the screenshot of RDS performance insights for AWS and it
>
Below is the test setup:
JMeter --> (load balanced Tomcat on EC2 instances) --> RDS read replicas
All of these are running on different EC2 instances in the AWS cloud, in the
same region.
On Tue, 8 Jun 2021, 19:03 Ayub Khan, wrote:
>
> I checked all the indexes are defined on the tables however the query seems slow, below is the plan.
@Magnus
There is an EC2 tomcat server which communicates with postgresql. This is a
replica of our production setup, except that in this case the test database
is postgres RDS and our production is running oracle on an EC2 instance.
On Wed, Jun 9, 2021 at 5:47 PM Ayub Khan wrote:
> attached is
END) AND (av.day_of_week LIKE CONCAT_WS('', '%', l_current_day, '%')
OR av.day_of_week LIKE '%0%') AND is_deleted = 0;
END IF;
END;
BEGIN
IF item_available_count > 0 THEN
R
In AWS RDS performance insights the ClientWrite waits are high, and the API
which returns data to the mobile side is slow during the load test.
On Tue, 8 Jun 2021, 19:03 Ayub Khan, wrote:
>
> I checked all the indexes are defined on the tables however the query
> seems slow, below is the plan
I checked that all the indexes are defined on the tables; however, the query
seems slow. Below is the plan. Can anyone give any pointers on what to verify?
SELECT a.menu_item_id, a.menu_item_name, a.menu_item_category_id,
b.menu_item_category_desc, c.menu_item_variant_id,
c.menu_item_variant_type_id, c.price,
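Since the query and plan above are truncated here, one generic pointer:
pg_stat_user_indexes shows whether the indexes on those tables are ever used
(the table names below are guessed from the column aliases):

SELECT relname, indexrelname, idx_scan, idx_tup_read
FROM pg_stat_user_indexes
WHERE relname IN ('menu_item', 'menu_item_category', 'menu_item_variant')
ORDER BY idx_scan;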
Julien,
Thank you for the pointer. I will change the data type and verify the query
again.
-Ayub
On Mon, Jun 7, 2021 at 7:51 AM Ayub Khan wrote:
>
> Other than Dexter, Is there an auto tune or query performance indicator
> for postgres ?
> Also which are the most commonly used monitoring (slow query, cpu, index creation for missing indexes) tools being used for postgres ?
9178.73
rows=3032573 width=21) (actual time=0.013..575.630 rows=3032702 loops=1)
Planning Time: 2.222 ms
Execution Time: 3009.387 ms
On Mon, Jun 7, 2021 at 8:00 AM Christophe Pettus wrote:
>
>
> > On Jun 6, 2021, at 21:51, Ayub Khan wrote:
> > Other than Dexter, Is there an auto tune or query performance indicator for postgres ?
Other than Dexter, is there an auto-tune or query performance indicator for
postgres?
Also, which are the most commonly used monitoring tools (slow queries, CPU,
index creation for missing indexes) for postgres?
--Ayub
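For the slow-query part specifically, the built-in options are
pg_stat_statements together with log_min_duration_statement (on RDS both are
configured through the parameter group rather than postgresql.conf). A minimal
sketch:

-- requires shared_preload_libraries = 'pg_stat_statements' and a restart
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;
ALTER SYSTEM SET log_min_duration_statement = '500ms';  -- log statements slower than 500 ms (self-managed; use the parameter group on RDS)
SELECT query, calls, mean_time, rows
FROM pg_stat_statements
ORDER BY mean_time DESC
LIMIT 10;                                               -- column names as of PostgreSQL 12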
ms
> (9 rows)
>
>
> but from Ayub's plan, the number of rows fetched are a lot, but is also
> removing rows post index scan.
> if that can be improved with a btree index that does not filter unwanted
> rows, the run may be faster ?
> but i guess if there are 156k rows
I am using postgres 12 on AWS RDS.
Could someone clarify why the LEFT JOIN order_offer_map oom USING
(order_id) in the below query is using a sequential scan instead of
the index on order_id which is defined on the order_offer_map table.
explain ANALYZE
WITH business AS( SELECT * FROM get_business
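The query got cut off above, but a quick diagnostic when the planner picks a
seq scan on order_offer_map is to check whether it will use the index at all
once seq scans are penalized (session-only setting; the order_id value below is
made up):

SET enable_seqscan = off;              -- diagnostic only, do not leave enabled
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM order_offer_map WHERE order_id = 12345;
RESET enable_seqscan;

If most orders in the outer query have a matching order_offer_map row, a single
sequential scan feeding a hash join can legitimately be cheaper than repeated
index lookups.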
274b003d" on restaurant
r (cost=0.29..233.42 rows=8609 width=8) (actual time=0.007..0.634
rows=8609 loops=1)
Heap Fetches: 0
Planning Time: 1.352 ms
Execution Time: 1.571 ms
On Fri, Jun 4, 2021 at 11:41 AM Pavel Stehule
wrote:
> Hi
>
>
> On Fri, Jun 4, 2021 at 10:32,
The BRIN index is only on the date_time column; I even tried with a btree
index, with no performance gains.
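For completeness, the two variants being compared would look roughly like this
(the index names and the table behind the ro alias are illustrative):

CREATE INDEX rest_order_date_time_brin  ON rest_order USING brin (date_time);
CREATE INDEX rest_order_date_time_btree ON rest_order (date_time);

A btree on date_time only helps if the date range is selective, and a BRIN
index only helps if the table is physically ordered by date_time.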
On Fri, Jun 4, 2021 at 11:23 AM Pavel Stehule
wrote:
>
>
> On Fri, Jun 4, 2021 at 10:07, Ayub Khan wrote:
>
>>
>> below query is slow even with no data
>>
>>
below query is slow even with no data
explain ANALYZE
WITH business AS( SELECT * FROM get_businessday_utc_f() start_date)
SELECT ro.order_id,
ro.date_time,
round(ro.order_amount, 2) AS order_amount,
b.branch_id,
b.branch_name,
st_x(b.location) AS from_x,
st_y(b.location