On Thu, Mar 26, 2020 at 4:44 PM Stephen Frost <sfr...@snowman.net> wrote:
> Is it actually possible, today, in PG, to have a 4GB WAL record?
> Judging this based on the WAL record size doesn't seem quite right.
I'm not sure. I mean, most records are quite small, but I think if you
set REPLICA IDENTITY FULL on a table with a bunch of very wide columns
(and also wal_level=logical) it can get really big. I haven't tested to
figure out just how big it can get. (If I have a table with lots of
almost-1GB blobs in it, does it work without logical replication and
fail with logical replication? I don't know, but I doubt a WAL record
>4GB is possible, because it seems unlikely that the code has a way to
cope with that struct field overflowing.)

> Again, I'm not against having a checksum algorithm as an option. I'm
> not saying that it must be SHA512 as the default.

I think that what we have seen so far is that all of the SHA-n
algorithms that PostgreSQL supports are about equally slow, so from a
performance point of view it doesn't really matter which one you pick.
If you're not saying it has to be SHA-512 but you do want it to be
SHA-256, I don't think that really fixes anything. Using CRC-32C does
fix the performance issue, but I don't think you like that, either. We
could default to having no checksums at all, or even no manifest at
all, but I didn't get the impression that David, at least, wanted to go
that way, and I don't like it either. It's not the world's best
feature, but I think it's good enough to justify enabling it by
default. So I'm not sure we have any options here that will satisfy
you.

> > > I don't agree with limiting our view to only those algorithms that
> > > we've already got implemented in PG.
> >
> > I mean, opening that giant can of worms ~2 weeks before feature
> > freeze is not very nice. This patch has been around for months, and
> > the algorithms were openly discussed a long time ago.
>
> Yes, they were discussed before, and these issues were brought up
> before, and there was specifically concern brought up about exactly
> the same issues that I'm repeating here.
> Those concerns seem to have been largely ignored, apparently with
> "we don't have that in PG today" as at least one of the
> considerations, even though we used to.

I might have missed something, but I don't remember any suggestion of
CRC-64 or other algorithms for which PG does not currently have support
prior to this week. The only thing I remember having been suggested
previously was SHA, and I responded to that by adding support for SHA,
not by ignoring the suggestion. If there was another suggestion made
earlier, I must have missed it.

> I also had hoped that David's concerns that were raised before had
> been heeded, as I knew he was involved in the discussion previously,
> but that turns out not to have been the case.

Well, I mean, I am trying pretty hard here, but I realize that I'm not
succeeding. I don't know which specific suggestion you're talking about
here. I understand that there is a concern about a 32-bit CRC somehow
not being valid for more than 512MB, but based on my research, I
believe that to be incorrect. I've explained the reasons why I believe
it to be incorrect several times now, but I feel like we're just going
around in circles. If my explanation of why it's incorrect is itself
incorrect, tell me why, but let's not just keep saying the things we've
both already said.

> Yes, that looks fine. Feels slightly redundant to include the "as
> described above ..." bit, and I think that could be dropped, but up to
> you.

Done in the version I posted a bit ago.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
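The performance point in the thread above ("all of the SHA-n algorithms
are about equally slow" while "CRC-32C does fix the performance issue")
can be illustrated with a quick timing sketch. This is not PostgreSQL's
code: it uses Python's stdlib hashlib and zlib as stand-ins, and
zlib.crc32 is plain CRC-32 rather than the CRC-32C variant PostgreSQL
actually uses; absolute numbers depend entirely on the machine.

```python
import hashlib
import time
import zlib

# 16 MiB of dummy "backup file" data, standing in for real relation files.
data = b"\x5a" * (16 * 1024 * 1024)

def bench(name, fn):
    """Time one pass over the data and report approximate throughput."""
    start = time.perf_counter()
    fn(data)
    elapsed = time.perf_counter() - start
    print(f"{name}: {len(data) / elapsed / 1e6:.0f} MB/s")

# CRC-32 (note: not CRC-32C, but in the same family of cheap checksums).
bench("crc32", lambda d: zlib.crc32(d))
# The SHA-n family tends to land within the same ballpark as each other,
# well below the CRC throughput.
bench("sha256", lambda d: hashlib.sha256(d).digest())
bench("sha512", lambda d: hashlib.sha512(d).digest())
```

On typical hardware the CRC line comes out far faster than either SHA
line, which is the shape of the trade-off being argued about: picking
SHA-256 over SHA-512 changes little, while picking a CRC changes the
performance picture but weakens the cryptographic properties.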