Pierre Frédéric Caillaud wrote:
Does postgres write something to the logfile whenever a fsync()
takes a suspiciously long amount of time?
Not specifically. If you're logging statements that take a while, you
can see this indirectly, as commits that just take much longer than usual.
Now, with ext4 moving to full barrier/fsync support, we could get to the
point where WAL in the main data FS can mimic the state where WAL is
separate, namely that WAL writes can "jump the queue" and be written
without waiting for the data pages to be flushed down to disk, but also
that you'll g…
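The slow-fsync symptom being asked about is easy to probe for directly. A minimal sketch (not from the thread; the `timed_fsync` helper name is mine) that times a single fsync() on a scratch file:

```python
import os
import tempfile
import time

def timed_fsync(fd):
    """Return the wall-clock seconds a single fsync() of fd takes."""
    start = time.monotonic()
    os.fsync(fd)               # block until the kernel reports the flush done
    return time.monotonic() - start

# Write a little data, then measure how long the flush takes.
with tempfile.NamedTemporaryFile() as f:
    f.write(b"x" * 8192)
    f.flush()                  # push Python's userspace buffer down to the OS
    elapsed = timed_fsync(f.fileno())
    print(f"fsync took {elapsed * 1000:.3f} ms")
```

On a box suffering the multi-second stalls discussed in this thread, that number spikes accordingly; logging it periodically gives the "suspiciously long fsync" signal postgres itself doesn't emit.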
>Aidan Van Dyk wrote:
> But, I think that's one of the reasons people usually recommend
> putting WAL separate. Even if it's just another partition on the
> same (set of) disk(s), you get the benefit of not having to wait
> for all the dirty ext3 pages from your whole database FS to be
> flushed
* Greg Smith [100121 09:49]:
> Aidan Van Dyk wrote:
>> Sure, if your WAL is on the same FS as your data, you're going to get
>> hit, and *especially* on ext3...
>>
>> But, I think that's one of the reasons people usually recommend putting
>> WAL separate.
>
> Separate disks can actually concentrate the problem…
Aidan Van Dyk wrote:
Sure, if your WAL is on the same FS as your data, you're going to get
hit, and *especially* on ext3...
But, I think that's one of the reasons people usually recommend putting
WAL separate.
Separate disks can actually concentrate the problem. The writes to the
data disk b…
* Greg Smith:
> Note the comment from the first article saying "those delays can be 30
> seconds or more". On multiple occasions, I've measured systems with
> dozens of disks in a high-performance RAID1+0 with battery-backed
> controller that could grind to a halt for 10, 20, or more seconds in…
* Greg Smith [100121 00:58]:
> Greg Stark wrote:
>>
>> That doesn't sound right. The kernel having 10% of memory dirty
>> doesn't mean there's a queue you have to jump at all. You don't get
>> into any queue until the kernel initiates write-out which will be
>> based on the usage counters -- basically a lru…
Both of those refer to the *drive* cache.
greg
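The "10% of memory dirty" threshold being argued about here is visible from userspace. A Linux-only sketch (the helper names are mine, not from the thread) that reads the kernel's current dirty-page count and the vm.dirty_ratio limit it is compared against:

```python
def dirty_kib():
    """Return the 'Dirty:' figure from /proc/meminfo, in KiB."""
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith("Dirty:"):
                return int(line.split()[1])
    raise RuntimeError("no Dirty: line in /proc/meminfo")

def dirty_ratio_percent():
    """Return vm.dirty_ratio: the percentage of memory that may be dirty
    before writing processes are throttled into synchronous write-out."""
    with open("/proc/sys/vm/dirty_ratio") as f:
        return int(f.read())

print(f"dirty: {dirty_kib()} KiB, vm.dirty_ratio: {dirty_ratio_percent()}%")
```

Watching the Dirty: figure climb toward the ratio while a checkpoint runs makes the flush-storm behavior described in this thread concrete.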
On 21 Jan 2010 05:58, "Greg Smith" wrote:
> Greg Stark wrote:
>> That doesn't sound right. The kernel having 10% of memory dirty
>> doesn't mean...
> The safest way ext3 knows how to initiate a write-out on something that
> must go (because it's gotten a…
Greg Stark wrote:
That doesn't sound right. The kernel having 10% of memory dirty doesn't mean
there's a queue you have to jump at all. You don't get into any queue until
the kernel initiates write-out, which will be based on the usage counters --
basically a lru. fsync and cousins like sync_file_range and
posix_fadvise…
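Of the "cousins" named here, posix_fadvise is reachable from Python's stdlib (sync_file_range is not, short of ctypes). A Linux-oriented sketch, with a helper name of my own, showing one way to force write-out of a file and then hint the kernel that its cached pages can go:

```python
import os
import tempfile

def flush_and_drop(fd):
    """fsync the file, then hint that its cached pages can be evicted."""
    os.fsync(fd)                       # initiate and wait for write-out
    # Linux: offset 0, length 0 means "the whole file".
    os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_DONTNEED)

with tempfile.NamedTemporaryFile() as f:
    f.write(b"data" * 1024)
    f.flush()
    flush_and_drop(f.fileno())
    print("write-out done, cache-drop hint sent")
```

This is a sketch of the mechanism Stark is pointing at, not anything PostgreSQL itself does; POSIX_FADV_DONTNEED is advisory only, and its effect varies by kernel version.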
Jeff Davis wrote:
On one side, we might finally be
able to use regular drives with their caches turned on safely, taking
advantage of the cache for other writes while doing the right thing with
the database writes.
That could be good news. What's your opinion on the practical
performance…
On Fri, 2010-01-15 at 22:05 -0500, Greg Smith wrote:
> A few months ago the worst of the bugs in the ext4 fsync code started
> clearing up, with
> http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commit;h=5f3481e9a80c240f169b36ea886e2325b9aeb745
>
> as a particularly painful o…