> 2019-02-14 Cumulative Update Release
> ====================================
> 
> The PostgreSQL Global Development Group has released an update to all 
> supported versions of our database system, including 11.2, 10.7, 9.6.12, 
> 9.5.16, and 9.4.21. This release changes the behavior in how PostgreSQL 
> interfaces with `fsync()` and includes fixes for partitioning and over 70 
> other bugs that were reported over the past three months.
> 
> Users should plan to apply this update at the next scheduled downtime.
> 
> Highlight: Change in behavior with `fsync()`
> ------------------------------------------
> 
> When available in an operating system and enabled in the configuration file 
> (which it is by default), PostgreSQL uses the kernel function `fsync()` to 
> help ensure that data is written to disk. On some operating systems that 
> provide `fsync()`, when the kernel is unable to write out the data, it 
> returns a failure and also discards the unwritten data from its data 
> buffers.
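For context, the usual write-and-sync pattern looks like the sketch below: every step's return value must be checked, because on the affected kernels an `fsync()` error cannot safely be retried. This is a generic illustration, not PostgreSQL's actual I/O code; the path and function name are made up for the example.

```c
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Write a buffer to a file and fsync() it, checking every step.
 * Returns 0 on success, -1 on any failure. Hypothetical helper for
 * illustration only -- not PostgreSQL's real code. */
int sync_write(const char *path, const void *buf, size_t len)
{
    int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0)
        return -1;

    if (write(fd, buf, len) != (ssize_t) len) {
        close(fd);
        return -1;
    }

    /* fsync() asks the kernel to flush this file's dirty pages to disk.
     * On some kernels a failure here also drops those dirty pages, so
     * the error must be handled now -- it cannot safely be retried. */
    if (fsync(fd) != 0) {
        close(fd);
        return -1;
    }

    return close(fd);
}
```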
> 
> This flushing operation has an unfortunate side-effect for PostgreSQL: if 
> PostgreSQL calls `fsync()` again to retry writing the data to disk, 
> `fsync()` will report that it succeeded, but the data that PostgreSQL 
> believed to be saved will not actually have been written to disk. This 
> presents a possible data corruption scenario.
> 
> This update modifies how PostgreSQL handles an `fsync()` failure: PostgreSQL 
> will no longer retry calling `fsync()` but instead will panic. In this case, 
> PostgreSQL can then replay the data from the write-ahead log (WAL) to help 
> ensure the data is written. While this may appear to be a suboptimal 
> solution, there are presently few alternatives and, based on reports, the 
> problem case occurs extremely rarely.
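The panic-and-replay approach described above amounts to treating an `fsync()` failure as unrecoverable in-process. A minimal sketch of that pattern follows; PostgreSQL's real implementation goes through its own error-reporting machinery rather than a bare `abort()`, so this is an assumption-laden illustration only.

```c
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* Flush a file descriptor to disk, or crash the process.
 * Crashing forces a restart, after which WAL replay rewrites the data,
 * avoiding the trap where a *second* fsync() reports success even though
 * the kernel already discarded the dirty pages. Sketch only -- not
 * PostgreSQL's actual code path. */
static void fsync_or_panic(int fd)
{
    if (fsync(fd) != 0) {
        perror("PANIC: fsync failed");
        abort();            /* crash and recover from WAL; never retry */
    }
}
```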

Shouldn't we mention that previous behavior (retrying fsync) can be
chosen by a new GUC parameter?
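For reference, the parameter in question appears to be `data_sync_retry` (a boolean, `off` by default); assuming that name, the pre-update retry behavior could be restored with a setting like:

```
# postgresql.conf -- assumes the new GUC is named data_sync_retry
data_sync_retry = on    # default is off (panic on fsync() failure)
```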

Best regards,
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese: http://www.sraoss.co.jp
