Hi Muhammad Usman Khan,

I have already set the required values for those params. The issue here
turned out to be with the triggers, and I have resolved it.
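
In case it helps others: since createdat and modified already have
now() defaults, the per-row trigger work is redundant on the insert
path. A sketch of the kind of change involved (the statements below are
my illustration, not a verbatim copy of what was run):

    -- createdat is already covered by its DEFAULT now(), so the
    -- BEFORE INSERT trigger only adds a per-row function call
    DROP TRIGGER IF EXISTS i_dictionary_createdat ON dictionary;

    -- keep the modified trigger for UPDATEs only; inserts fall back
    -- to the column DEFAULT now()
    DROP TRIGGER IF EXISTS i_dictionary_modified ON dictionary;
    CREATE TRIGGER i_dictionary_modified
        BEFORE UPDATE ON dictionary
        FOR EACH ROW EXECUTE FUNCTION update_modified_col();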

Regards
Durga Mahesh

On Thu, Sep 12, 2024 at 10:05 AM Muhammad Usman Khan <usma...@bitnine.net>
wrote:

> Hi,
> You can use the following approaches for optimization:
>
>    - Instead of inserting one row at a time, perform bulk inserts, which
>    reduce the per-statement and per-transaction overhead
>    - Partitioning can improve write performance by splitting the data
>    into smaller, more manageable chunks
>    - Tune the Postgres configuration, for example:
>    work_mem = '16MB'
>    shared_buffers = '8GB'
>    effective_cache_size = '24GB'
>
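> A sketch of the first two points (the partition key here is an
> assumption for illustration; pick whichever column the workload
> actually filters or groups on):
>
>    -- bulk insert: many rows per statement, one round trip
>    INSERT INTO dictionary (lang, tid, sportid, brandid,
>                            translatedtext, objecttype, basetid)
>    VALUES
>        ('en', 't1', 's1', 'b1', 'Hello', 'ui', 'base1'),
>        ('de', 't1', 's1', 'b1', 'Hallo', 'ui', 'base1')
>    ON CONFLICT DO NOTHING;
>
>    -- partitioning sketch: a list-partitioned table keyed on lang
>    CREATE TABLE dictionary_new (LIKE dictionary INCLUDING DEFAULTS)
>        PARTITION BY LIST (lang);
>    CREATE TABLE dictionary_new_en
>        PARTITION OF dictionary_new FOR VALUES IN ('en');
>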
>
> On Wed, 11 Sept 2024 at 13:50, Durgamahesh Manne <
> maheshpostgr...@gmail.com> wrote:
>
>> Hi
>>     insert into
>> dictionary(lang,tid,sportid,brandid,translatedtext,objecttype,basetid)
>> values ($1,$2,$3,$4,$5,$6,$7) on conflict do nothing
>>  8 vCPUs and 32 GB RAM
>>    Calls per second: 1600, at 42% CPU utilization
>>    Max: 33.62 ms per call
>>    Avg: 0.17 ms per call
>>                        Table "dictionary.dictionary"
>>      Column     |           Type           | Nullable | Default  | Storage
>> ----------------+--------------------------+----------+----------+----------
>>  lang           | text                     | not null |          | extended
>>  tid            | text                     | not null |          | extended
>>  basetid        | text                     | not null |          | extended
>>  sportid        | text                     |          |          | extended
>>  brandid        | text                     | not null |          | extended
>>  translatedtext | text                     |          |          | extended
>>  objecttype     | text                     |          |          | extended
>>  createdat      | timestamp with time zone | not null | now()    | plain
>>  modified       | timestamp with time zone | not null | now()    | plain
>>  modifiedby     | text                     | not null | ''::text | extended
>>  version        | integer                  | not null | 0        | plain
>> Indexes:
>>     "pk_dictionary" PRIMARY KEY, btree (lang, tid)
>>     "idx_dictionary_basetid" btree (basetid)
>>     "idx_dictionary_brandid" btree (brandid)
>>     "idx_dictionary_objecttype" btree (objecttype)
>>     "idx_dictionary_sportid" btree (sportid)
>> Triggers:
>>     i_dictionary_createdat BEFORE INSERT ON dictionary
>>         FOR EACH ROW EXECUTE FUNCTION update_createdat_col()
>>     i_dictionary_modified BEFORE INSERT OR UPDATE ON dictionary
>>         FOR EACH ROW EXECUTE FUNCTION update_modified_col()
>> Access method: heap
>> How can we improve this query's performance without using more CPU?
>>
>> Regards,
>> Durga Mahesh
>>
>
