On Wed, Nov 13, 2019 at 2:51 PM Peter Geoghegan <p...@bowt.ie> wrote:
> "Deduplication" never means that you get rid of duplicates. According
> to Wikipedia's deduplication article: "Whereas compression algorithms
> identify redundant data inside individual files and encodes this
> redundant data more efficiently, the intent of deduplication is to
> inspect large volumes of data and identify large sections – such as
> entire files or large sections of files – that are identical, and
> replace them with a shared copy".
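For concreteness, here is a toy model (my own illustration, not the
patch's actual nbtree code) of the transformation being discussed: a
run of index tuples with equal keys is collapsed into a single key
plus a list of heap TIDs.

    /*
     * Toy sketch of collapsing duplicate index keys into "posting
     * lists".  Input is a key-sorted array of (key, tid) pairs; the
     * tid field is just an int standing in for a real heap TID.
     */
    #include <stdio.h>

    typedef struct
    {
        int key;    /* indexed value */
        int tid;    /* stand-in for a heap TID */
    } IndexTuple;

    int
    main(void)
    {
        IndexTuple tuples[] = {
            {1, 100}, {1, 101}, {1, 102},   /* three duplicates of key 1 */
            {2, 103},
            {3, 104}, {3, 105}
        };
        int ntuples = sizeof(tuples) / sizeof(tuples[0]);

        /* Emit one key plus a TID list per distinct key value. */
        for (int i = 0; i < ntuples;)
        {
            int j = i;

            printf("key %d -> tids [", tuples[i].key);
            while (j < ntuples && tuples[j].key == tuples[i].key)
            {
                printf("%s%d", (j > i) ? ", " : "", tuples[j].tid);
                j++;
            }
            printf("]\n");
            i = j;
        }
        return 0;
    }

Whether storing equal values once like this is better described as
deduplication or as compression is exactly the question here.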
Hmm. Well, maybe I'm just behind the times. But that same Wikipedia
article also says that deduplication works on large chunks "such as
entire files or large sections of files", thus differentiating it from
compression algorithms, which work at the byte level. So it seems to
me that what you are doing still sounds more like ad-hoc compression.

> Can you suggest an alternative?

My instinct is to pick a name that involves compression and add enough
other words to make it clear, e.g. "duplicate value compression", or
something of that sort.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company