On Mon, Jun 29, 2015 at 5:41 AM, Amit Kapila <amit.kapil...@gmail.com> wrote:
>
> On Sun, Jun 28, 2015 at 9:05 PM, Tom Lane <t...@sss.pgh.pa.us> wrote:
> >
> > Amit Kapila <amit.kapil...@gmail.com> writes:
> > > On Sat, Jun 27, 2015 at 7:40 PM, Tom Lane <t...@sss.pgh.pa.us> wrote:
> > >> I don't like this too much because it will fail badly if the caller
> > >> is wrong about the maximum possible page number for the table, which
> > >> seems not exactly far-fetched.  (For instance, remember those kernel bugs
> > >> we've seen that cause lseek to lie about the EOF position?)
> >
> > > Considering we already have exclusive lock while doing this operation
> > > and nobody else can perform write on this file, won't closing and
> > > opening it again would avoid such problems.
> >
> > On what grounds do you base that touching faith?
>
> Sorry, but I don't get what problem do you see in this touching?
>
Thinking about it again, I believe your concern is that if we close and reopen the file, it could break a flush operation happening in parallel via checkpoint.  Still, I am not clear whether we want to assume that we can't rely on lseek for the file size when there can be parallel write activity on the file (even if that write doesn't increase the file's size).  If so, then we have the options below:

a. Add some protection mechanism for file access (ignore the error when the file is not present or is accessed during flush) and clean the buffers containing invalid objects, as is being discussed up-thread.

b. Use some other API, like stat, to get the size of the file; a rough sketch of this is at the end of this mail.

Do you prefer either of these, or if you have a better idea, please share it.

With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com
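
For (b), a minimal standalone sketch of the idea (illustration only, not
the actual md.c code; blocks_via_fstat and the main() driver are
hypothetical names, and BLCKSZ is hardcoded to the default 8192 here):

    #include <stdio.h>
    #include <fcntl.h>
    #include <sys/stat.h>
    #include <unistd.h>

    #define BLCKSZ 8192             /* default PostgreSQL block size */

    /*
     * Return the number of BLCKSZ-size blocks in the file, computed
     * from fstat(2) rather than lseek(fd, 0, SEEK_END).  Returns -1
     * on error, with errno set by fstat.
     */
    static long
    blocks_via_fstat(int fd)
    {
        struct stat st;

        if (fstat(fd, &st) < 0)
            return -1;

        return (long) (st.st_size / BLCKSZ);
    }

    int
    main(int argc, char **argv)
    {
        int         fd;
        long        nblocks;

        if (argc != 2)
        {
            fprintf(stderr, "usage: %s <segment-file>\n", argv[0]);
            return 1;
        }

        fd = open(argv[1], O_RDONLY);
        if (fd < 0)
        {
            perror("open");
            return 1;
        }

        nblocks = blocks_via_fstat(fd);
        if (nblocks < 0)
            perror("fstat");
        else
            printf("%ld blocks\n", nblocks);

        close(fd);
        return 0;
    }

Whether fstat is actually any more trustworthy than lseek under the
kernel bugs mentioned up-thread is exactly the open question.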