On 12/16/25 23:40, Matthias Leisi wrote:
An application (which we can’t change) is accessing some Postgres table, and we 
would like to record when the rows in that table were last read (meaning: 
appeared in a SELECT result). The ultimate goal is to be able to „age out“ rows 
that have not been accessed within a certain period of time.

Why?

Given the small size of the table, what is the expected gain?

Also, is it assured that reading a row equates to the importance of a row?

I would expect any solution to impose more overhead than simply leaving the rows alone.


The table contains some ten thousand rows, five columns, and we already record 
created / last updated using triggers. Almost all accesses will result in zero, 
one or very few records returned. Given the modest size of the table, 
performance considerations are not top priority.
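For reference, "created / last updated via triggers" is presumably something along 
these lines (table and column names below are assumptions, not your real schema):

    CREATE FUNCTION set_updated_at() RETURNS trigger
    LANGUAGE plpgsql AS $$
    BEGIN
        -- Stamp the row on every UPDATE.
        NEW.updated_at := now();
        RETURN NEW;
    END;
    $$;

    CREATE TRIGGER items_set_updated_at
        BEFORE UPDATE ON items
        FOR EACH ROW
        EXECUTE FUNCTION set_updated_at();

The catch is that triggers only fire on INSERT/UPDATE/DELETE; Postgres has no 
trigger on SELECT, so recording reads needs a different mechanism.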

If we had full control over the application, we could e.g. use a function to 
select the records and then update some „last read“ column. But since we don’t 
control the application, that’s not an option. On the other hand, we have full 
control over the database, so we could put some other „object“ in lieu of the 
direct table.
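
If you do go that route, something along these lines might work: rename the real 
table, add a last_read column, and put a view with the original name in front of 
it. All object, column and function names below (items, items_data, id, col_a 
through col_d, touch_read) are made up for illustration:

    ALTER TABLE items RENAME TO items_data;
    ALTER TABLE items_data ADD COLUMN last_read timestamptz;

    -- Stamps a row as read; returns true so it can sit in a WHERE clause.
    -- The high COST nudges the planner to evaluate it after cheaper quals.
    CREATE FUNCTION touch_read(p_id integer) RETURNS boolean
    LANGUAGE plpgsql VOLATILE COST 10000 AS $$
    BEGIN
        UPDATE items_data SET last_read = now() WHERE id = p_id;
        RETURN true;
    END;
    $$;

    -- A view under the original name, so the application needs no change.
    CREATE VIEW items AS
    SELECT id, col_a, col_b, col_c, col_d
      FROM items_data
     WHERE touch_read(id);

Be aware that every read now also writes (extra WAL, row churn, and it will fail 
against a read-only standby), which is exactly the overhead mentioned above, and 
that depending on the plan touch_read() can fire for rows the application's own 
WHERE clause later discards, so last_read should be treated as approximate.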

Any other ways this could be achieved?

Thanks,
Matthias

--
Adrian Klaver
[email protected]

