On Wed, Sep 4, 2024 at 9:10 AM yudhi s <learnerdatabas...@gmail.com> wrote:
>
> On Wed, Sep 4, 2024 at 6:29 PM Muhammad Usman Khan <usma...@bitnine.net>
> wrote:
>
>> Hi,
>>
>> In your scenario, triggers can add some overhead since they require
>> extra processing after each update operation. Considering the size of
>> your table and the high transaction volume, you need to observe whether
>> this significantly affects performance.
>>
>> On Wed, 4 Sept 2024 at 17:50, yudhi s <learnerdatabas...@gmail.com>
>> wrote:
>>
>>> Hello,
>>> In our Postgres database, all the tables have audit columns like
>>> created_by_user, created_timestamp, updated_by_user, updated_timestamp.
>>> These fields are supposed to be populated at the time the insert/update
>>> operation happens in the database, not at the application level. We are
>>> planning to populate created_by_user and created_timestamp by setting
>>> defaults of current_user and current_timestamp on those columns, but
>>> nothing like that is available for when a row is updated, so the only
>>> option there seems to be a trigger.
>>>
>>> So I wanted to check with the experts here: considering the table will
>>> be DML-heavy (300M+ transactions inserted daily), is it okay to have a
>>> trigger on this table to populate all the audit columns, or should we
>>> keep the defaults for created_by_user and created_timestamp and use a
>>> trigger only for the two update-related audit columns? Basically, I
>>> wanted to see whether a default value does the same thing as a trigger,
>>> or does it do it more optimally than a trigger?
>>>
>>> Regards
>>> Yudhi
>>
> Thank you so much. So do you mean to say that we should add default
> values for create_timestamp and create_user_id as current_timestamp and
> current_user,

That's the simplest way. But the application can overwrite those fields.
(A sketch of this defaults-plus-update-trigger setup is at the bottom of
this mail.)

> but for update_user_id and update_timestamp, we can ask the application
> to update the values manually, whenever they are executing the update
> statement on the rows?

How strict are the audit requirements? If they're really strict, you might
need INSERT and UPDATE triggers that call SECURITY DEFINER functions which
write into a separate table not accessible by the application (also
sketched below). That table would have the application table's PK,
created_by_user, created_timestamp, updated_by_user and updated_timestamp.

Would that table have a LOT of records? Sure. Would it add overhead? Sure.
But the consequently beefier hardware requirements and the care needed in
designing the physical schema (for example, audit tables in a separate
tablespace and pg_wal/ on separate disk controllers, or a 10Gb SAN) are
the price you pay for strict audit requirements.

Of course, if the audit requirements are minimal, then sure, "default
values and the application" are Good Enough.
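
A minimal sketch of the simple approach discussed above (defaults for the
insert-side columns, a BEFORE UPDATE trigger for the update-side columns).
The table name app_table, its non-audit columns, and the trigger/function
names are made up for illustration; only the audit column names come from
the thread:

-- Hypothetical application table; defaults cover the insert-side audit
-- columns, so the application does not have to supply them.
CREATE TABLE app_table (
    id                bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    payload           text,
    created_by_user   text        NOT NULL DEFAULT current_user,
    created_timestamp timestamptz NOT NULL DEFAULT current_timestamp,
    updated_by_user   text,
    updated_timestamp timestamptz
);

-- BEFORE UPDATE trigger fills the update-side audit columns, so the
-- application cannot forget them (though it could still overwrite them).
CREATE OR REPLACE FUNCTION set_update_audit_cols()
RETURNS trigger
LANGUAGE plpgsql
AS $$
BEGIN
    NEW.updated_by_user   := current_user;
    NEW.updated_timestamp := current_timestamp;
    RETURN NEW;
END;
$$;

CREATE TRIGGER trg_app_table_update_audit
BEFORE UPDATE ON app_table
FOR EACH ROW
EXECUTE FUNCTION set_update_audit_cols();

Note that a plain DEFAULT costs essentially nothing extra per row, while a
row-level trigger adds a PL/pgSQL call per updated row; on a table taking
300M+ DML operations a day that per-row cost is worth benchmarking.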
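And a rough sketch of the stricter variant: a separate audit table the
application role cannot touch, populated by a SECURITY DEFINER trigger
function. It reuses the hypothetical app_table from the previous sketch;
the audit schema, table, function, and trigger names are assumptions, and
this version appends one audit row per INSERT/UPDATE rather than keeping a
single row of all four columns per PK:

CREATE SCHEMA audit;

-- Audit table; make sure the application role has no grant on it.
CREATE TABLE audit.app_table_audit (
    app_table_id      bigint      NOT NULL,  -- the application table's PK
    created_by_user   text,
    created_timestamp timestamptz,
    updated_by_user   text,
    updated_timestamp timestamptz
);
REVOKE ALL ON audit.app_table_audit FROM PUBLIC;

-- SECURITY DEFINER: the function runs with its owner's privileges, so the
-- application role can fire the trigger without any direct grant on the
-- audit table. SET search_path pins object resolution for safety.
CREATE OR REPLACE FUNCTION audit.record_app_table_audit()
RETURNS trigger
LANGUAGE plpgsql
SECURITY DEFINER
SET search_path = audit, pg_temp
AS $$
BEGIN
    IF TG_OP = 'INSERT' THEN
        INSERT INTO audit.app_table_audit
            (app_table_id, created_by_user, created_timestamp)
        VALUES (NEW.id, current_user, current_timestamp);
    ELSE  -- UPDATE
        INSERT INTO audit.app_table_audit
            (app_table_id, updated_by_user, updated_timestamp)
        VALUES (NEW.id, current_user, current_timestamp);
    END IF;
    RETURN NEW;
END;
$$;

CREATE TRIGGER trg_app_table_audit
AFTER INSERT OR UPDATE ON app_table
FOR EACH ROW
EXECUTE FUNCTION audit.record_app_table_audit();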