Hi, all,
Probably a bit of a complex question:
Do today's practical filesystems, e.g., extX, btrfs, preserve metadata
operation order through a crash/power failure?
What I know is that modern filesystems ensure metadata consistency
after a crash/power failure. Journaling filesystems like extX do that by
wri
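For concreteness, applications that need a particular metadata ordering to
survive a crash typically do not rely on the filesystem to preserve it; they
force the ordering themselves with fsync(). Below is a minimal sketch of the
usual write-new-file-then-rename pattern; update_file(), the path names and
the error handling are illustrative assumptions, not code from this thread.

/* Sketch: replace the contents of "path" so that, after a crash, a reader
 * sees either the old or the new contents, without assuming anything about
 * how the filesystem orders its metadata updates internally. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stddef.h>
#include <stdio.h>
#include <unistd.h>

static int update_file(const char *dir, const char *path, const char *tmp,
                       const char *buf, size_t len)
{
        int fd = open(tmp, O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0)
                return -1;
        if (write(fd, buf, len) != (ssize_t)len || fsync(fd) < 0) {
                close(fd);
                return -1;                 /* new data is not durable yet */
        }
        close(fd);
        if (rename(tmp, path) < 0)         /* atomically swap in the new file */
                return -1;
        int dfd = open(dir, O_RDONLY | O_DIRECTORY);
        if (dfd < 0)
                return -1;
        int ret = fsync(dfd);              /* make the rename itself durable */
        close(dfd);
        return ret;
}

/* e.g. update_file(".", "data", "data.tmp", "new contents\n", 13); */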
On Wed, Sep 5, 2018 at 4:09 PM 焦晓冬 wrote:
>
> On Tue, Sep 4, 2018 at 11:44 PM Jeff Layton wrote:
> >
> > On Tue, 2018-09-04 at 22:56 +0800, 焦晓冬 wrote:
> > > On Tue, Sep 4, 2018 at 7:09 PM Jeff Layton wrote:
> > > >
> > > > On Tue, 2018-09-04
On Wed, Sep 5, 2018 at 4:04 PM Rogier Wolff wrote:
>
> On Wed, Sep 05, 2018 at 09:39:58AM +0200, Martin Steigerwald wrote:
> > Rogier Wolff - 05.09.18, 09:08:
> > > So when a mail queuer puts mail in the mailq files and the mail processor
> > > can get them out of there intact, nobody is going to notice whether it
> > > could actually be read from the disk
> >
> > Well, the absolutist position on posix compliance here would be that a
> > crash is still preferable to returning the wrong data. And for the
> > cases 焦晓冬 gives, that sounds right? Maybe it's the wrong balance in
> > general, I do
On Tue, Sep 4, 2018 at 11:44 PM Jeff Layton wrote:
>
> On Tue, 2018-09-04 at 22:56 +0800, 焦晓冬 wrote:
> > On Tue, Sep 4, 2018 at 7:09 PM Jeff Layton wrote:
> > >
> > > On Tue, 2018-09-04 at 16:58 +0800, Trol wrote:
> > > > On Tue, Sep 4, 20
On Tue, Sep 4, 2018 at 7:09 PM Jeff Layton wrote:
>
> On Tue, 2018-09-04 at 16:58 +0800, Trol wrote:
> > On Tue, Sep 4, 2018 at 3:53 PM Rogier Wolff wrote:
> >
> > ...
> > > >
> > > > Jlayton's patch is a simple but wonderful idea towards correct error
> > > > reporting. It seems one crucial thing
On Tue, Sep 4, 2018 at 5:29 PM Rogier Wolff wrote:
>
> On Tue, Sep 04, 2018 at 04:58:59PM +0800, 焦晓冬 wrote:
>
> > As for a suggestion, maybe the error flag of the inode/mapping, or the
> > entire inode, should not be evicted if there was an error. That hopefully won
On Tue, Sep 4, 2018 at 3:53 PM Rogier Wolff wrote:
...
> >
> > Jlayton's patch is a simple but wonderful idea towards correct error
> > reporting. It seems one crucial thing is still to be fixed. Does
> > anyone have any ideas?
> >
> > The crucial thing may be that a read() after a successful
Hi,
After reading several writeback error handling articles on LWN, I
have begun to worry about writeback error handling.
Jlayton's patch is a simple but wonderful idea towards correct error
reporting. It seems one crucial thing is still to be fixed. Does
anyone have any ideas?
The crucial thing may be that a read() after a successful
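For concreteness, the scenario in question looks roughly like the sketch
below (illustrative only; the file name is made up, and an I/O error would
have to be injected, e.g. via a failing or removed device, to actually
observe it): write() succeeds into the page cache, background writeback
later fails, yet a following read() can still be served from the cached
page, so only fsync() reports the error.

/* Sketch of the pattern under discussion: write() succeeds, the kernel's
 * later writeback fails, yet a subsequent read() of the same offset can
 * still return the cached data.  Only fsync() reports the failure. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
        char buf[5] = { 0 };
        int fd = open("/tmp/wb-demo", O_RDWR | O_CREAT | O_TRUNC, 0644);

        if (fd < 0)
                return 1;
        if (write(fd, "data", 4) != 4)     /* lands in the page cache */
                perror("write");
        if (fsync(fd) < 0)                 /* this is where -EIO shows up */
                perror("fsync");
        if (pread(fd, buf, 4, 0) < 0)      /* may still read back cached "data" */
                perror("pread");
        else
                printf("read back: %s\n", buf);
        close(fd);
        return 0;
}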
+ CC Boqun
in case you are interested in this topic
Best Regards,
Trol
> Sorry, this is a resend because the previous one was messed
> up by my editor and hard to read.
>
> void finish_wait(struct wait_queue_head *wq_head,
> struct wait_queue_entry *wq_entry)
> {
>
On Mon, Mar 12, 2018 at 9:24 PM, Andrea Parri wrote:
> Hi Trol,
>
> [...]
>
>
>> But this is just one special case of what acquire-release chains promise us.
>>
>> A=B=0 initially
>>
>> CPU0          CPU1          CPU2          CPU3
>> write A=1
>>               rea
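Since the quoted chain example is cut off, here is a generic three-CPU
release-acquire chain in the kernel's litmus-test format (a sketch patterned
after the ISA2 tests under tools/memory-model, not the CPU0..CPU3 example
from the mail): once P1's release has been observed by P2's acquire, P0's
plain write to A is visible to P2 as well, so the listed outcome is
forbidden.

C releaseacquirechain

(*
 * Forbidden outcome: P1 sees X==1 and P2 sees Y==1, yet P2 reads A==0.
 *)

{}

P0(int *A, int *X)
{
	WRITE_ONCE(*A, 1);
	smp_store_release(X, 1);
}

P1(int *X, int *Y)
{
	int r0;

	r0 = smp_load_acquire(X);
	smp_store_release(Y, 1);
}

P2(int *A, int *Y)
{
	int r0;
	int r1;

	r0 = smp_load_acquire(Y);
	r1 = READ_ONCE(*A);
}

exists (1:r0=1 /\ 2:r0=1 /\ 2:r1=0)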
On Mon, Mar 12, 2018 at 4:56 PM, Peter Zijlstra wrote:
> On Mon, Mar 12, 2018 at 04:56:00PM +0800, Boqun Feng wrote:
>> So I think the purpose of smp_mb__after_spinlock() is to provide RCsc
>> locks; it's just that the comments before it may be misleading. We want
>> RCsc locks in schedule code becau
>> Peter pointed out in this patch https://patchwork.kernel.org/patch/9771921/
>> that the spinlock used in __schedule() should be RCsc to ensure
>> visibility of writes prior to __schedule() when the task is to be migrated to
>> another CPU.
>>
>> And this is emphasized in the comment of the ne
> Sorry, this is a resend because the previous one was messed
> up by my editor and hard to read.
>
> void finish_wait(
>         struct wait_queue_head *wq_head,
>         struct wait_queue_entry *wq_entry)
> {
>
>         if (!list_empty_careful(&wq_entry->entry)) {
>                 spin_lock_irqsave(&wq
Sorry, this is a resend because the previous one was messed
up by my editor and hard to read.

void finish_wait(
        struct wait_queue_head *wq_head,
        struct wait_queue_entry *wq_entry)
{
        if (!list_empty_careful(&wq_entry->entry)) {
                spin_lock_irqsave(&wq_head->lock, flags)
void finish_wait(struct wait_queue_head *wq_head,
                 struct wait_queue_entry *wq_entry)
{
        if (!list_empty_careful(&wq_entry->entry)) {
                spin_lock_irqsave(&wq_head->lock, flags);
                list_del_init(&wq_entry->entry);
                spin_unlock_irqrestore(&wq_head->lock, flags);
        }
}
Peter pointed out in this patch https://patchwork.kernel.org/patch/9771921/
that the spinlock used in __schedule() should be RCsc to ensure
visibility of writes prior to __schedule() when the task is to be migrated to
another CPU.
And this is emphasized in the comment of the newly introduced
smp_mb__after_spinlock().
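The guarantee being referred to can be written down as a litmus test (a
sketch in the tools/memory-model format, not taken from the patch or this
thread): spin_lock() alone is only an ACQUIRE, so the earlier store would
not be ordered against the later load inside the critical section;
smp_mb__after_spinlock() upgrades the acquisition to a full barrier, which
makes the store-buffering outcome below forbidden.

C SB+lockmbafterspinlock+mb

(*
 * Forbidden outcome: both CPUs miss each other's store.  Without
 * smp_mb__after_spinlock() on P0 it would be allowed, because the
 * ACQUIRE of spin_lock() alone does not order the earlier store to X
 * against the later load of Y.
 *)

{}

P0(int *X, int *Y, spinlock_t *s)
{
	int r0;

	WRITE_ONCE(*X, 1);
	spin_lock(s);
	smp_mb__after_spinlock();
	r0 = READ_ONCE(*Y);
	spin_unlock(s);
}

P1(int *X, int *Y)
{
	int r1;

	WRITE_ONCE(*Y, 1);
	smp_mb();
	r1 = READ_ONCE(*X);
}

exists (0:r0=0 /\ 1:r1=0)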