On Mon, Jun 13, 2022 at 9:23 AM Aleksander Alekseev
<aleksan...@timescale.com> wrote:
> Should it necessarily be a fixed list? Why not support pluggable algorithms?
>
> An extension implementing a checksum algorithm is going to need:
>
> - several hooks: check_page_after_reading, calc_checksum_before_writing
> - register_checksum()/deregister_checksum()
> - an API to save the checksums to a separate fork
>
> By knowing the block number and the hash size the extension knows
> exactly where to look for the checksum in the fork.
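For concreteness, here is a rough sketch of the interface I understand
you to be proposing. Every name, type, and signature below is
hypothetical; nothing like this exists in core today:

/* All names here are hypothetical stand-ins, not core APIs. */
#include <stdint.h>
#include <stddef.h>
#include <stdbool.h>

typedef uint32_t BlockNumber;   /* stand-in for PostgreSQL's BlockNumber */

/* Hook called after a page is read; returns true if the stored
 * checksum matches the page contents. */
typedef bool (*check_page_after_reading_hook)(const char *page,
                                              BlockNumber blkno,
                                              const uint8_t *stored_checksum);

/* Hook called before a page is written; fills in the checksum. */
typedef void (*calc_checksum_before_writing_hook)(const char *page,
                                                  BlockNumber blkno,
                                                  uint8_t *checksum_out);

/* An extension registers its implementation, declaring its fixed
 * hash size up front. */
typedef struct ChecksumAlgorithm
{
    const char *name;
    size_t      hash_size;      /* bytes of checksum per block */
    check_page_after_reading_hook     check_page;
    calc_checksum_before_writing_hook calc_checksum;
} ChecksumAlgorithm;

extern void register_checksum(const ChecksumAlgorithm *algo);
extern void deregister_checksum(const char *name);

/* Because every checksum has the same fixed size, the checksum for
 * block N lives at byte offset N * hash_size in the checksum fork. */
static inline size_t
checksum_fork_offset(BlockNumber blkno, size_t hash_size)
{
    return (size_t) blkno * hash_size;
}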
I don't think that a separate fork is a good option, for reasons that I
articulated previously: I think it will be significantly more complex to
implement, and it adds extra I/O.

I am not completely opposed to the idea of making the algorithm
pluggable, but I'm not very excited about it either. Making the
algorithm pluggable probably wouldn't be super-hard, but allowing a
checksum of arbitrary size, rather than one of a short list of fixed
sizes, might complicate efforts to ensure this doesn't degrade
performance. And I'm not sure what the benefit is, either.

This isn't like archive modules or custom backup targets, where the
feature proposes to interact with things outside the server, we don't
know what's happening on the other side, and we therefore need to offer
an interface that can accommodate whatever the user wants to do. Nor is
it like a custom background worker or a custom data type, which lives
fully inside the database but whose desired behavior could be anything.
It's not even like column compression, where I think that the same small
set of strategies is probably fine for everybody but some people think
that customizing the behavior by data type would be a good idea. All
this is doing is taking a fixed-size block of data and checksumming it.
I don't see that as something where there's a lot of interesting things
to experiment with from an extension's point of view.

-- 
Robert Haas
EDB: http://www.enterprisedb.com