On Fri, Sep 08, 2023 at 12:20:14PM +0200, Jan Kara wrote:
> Well, currently you click some "Eject / safely remove / whatever" button
> and then you get a "wait" dialog until everything is done, after which
> you're told the stick is safe to remove. What I imagine is that the "wait"
> dialog needs to be there while there are any (or exclusive at minimum) op…
> I'd say there are several options and we should aim towards the variant
> which is most usable by normal users.
None of the options is sufficiently satisfying to risk intricate
behavioral changes with unknown consequences for existing workloads as
far as I'm concerned.
--
dm-devel mailing list
> So can you please elaborate on which new risks we are going to introduce by
> fixing this resource hole?
I'm not quite sure why you need a personal summary of the various
reasons different people brought together in the thread.
> I think we've got too deep down into "how to fix things" but I'm not 100%
We did.
> sure what the "bug" actually is. In the initial posting Mikulas writes "the
> kernel writes to the filesystem after unmount successfully returned" - is
> that really such a big issue? Anybody else can open the d…
On Thu 2023-09-07 11:44:57, Jan Kara wrote:
Hi!
> What I wanted to suggest is that we should provide means how to make sure
> block device is not being modified and educate admins and tool authors
> about them. Because just doing "umount /dev/sda1" and thinking this means
> that /dev/sda1 is unused now simply is not enough in today's world.
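[Editor's note: one concrete way a tool could perform such a check — a minimal sketch, not from the thread, and Linux-specific, since O_EXCL without O_CREAT is only honored for block devices there — is to attempt an exclusive open of the device node:]

```python
import errno
import os

def device_busy(path):
    """Return True if an exclusive open of the block device fails with
    EBUSY, i.e. something (a mount, device-mapper, ...) still claims it.
    On Linux >= 2.6, O_EXCL without O_CREAT requests an exclusive claim
    on a block device; for regular files it is simply ignored."""
    try:
        fd = os.open(path, os.O_RDONLY | os.O_EXCL)
    except OSError as e:
        if e.errno == errno.EBUSY:
            return True   # somebody else holds the device exclusively
        raise             # ENOENT, EACCES, ... are not "busy"
    os.close(fd)
    return False
```

[So after "umount /dev/sda1", a tool could poll device_busy("/dev/sda1") until it returns False before declaring the device free.]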
On Wed 06-09-23 18:52:39, Mikulas Patocka wrote:
> On Wed, 6 Sep 2023, Christian Brauner wrote:
> > On Wed, Sep 06, 2023 at 06:01:06PM +0200, Mikulas Patocka wrote:
> > > > > BTW. what do you think that unmount of a frozen filesystem should
> > > > > properly
> > > > > do? Fail with -EBUSY? Or, u…
On Wed, Sep 06, 2023 at 03:26:21PM +0200, Mikulas Patocka wrote:
> lvm may suspend any logical volume anytime. If lvm suspend races with
> unmount, it may be possible that the kernel writes to the filesystem after
> unmount successfully returned. The problem can be demonstrated with this
> script:
> Currently, if we freeze a filesystem with "fsfreeze" and unmount it, the
> mount point is removed, but the filesystem stays active and it is leaked.
> You can't unfreeze it with "fsfreeze --unfreeze" because the mount point
> is gone. (The only way to recover it is "echo j>/proc/sysrq-trig…
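[Editor's note: for context, "fsfreeze" is a thin wrapper around the kernel's FIFREEZE/FITHAW ioctls. The following illustrative sketch (not from the thread) derives the request numbers the same way linux/fs.h does with _IOWR('X', 119, int) and _IOWR('X', 120, int):]

```python
import fcntl
import struct

def _IOWR(type_chr, nr, size):
    # Linux ioctl number layout: dir(2 bits) | size(14) | type(8) | nr(8)
    _IOC_WRITE, _IOC_READ = 1, 2
    return ((_IOC_READ | _IOC_WRITE) << 30) | (size << 16) | (ord(type_chr) << 8) | nr

FIFREEZE = _IOWR('X', 119, struct.calcsize('i'))  # 0xC0045877
FITHAW   = _IOWR('X', 120, struct.calcsize('i'))  # 0xC0045878

def freeze_fs(dirfd):
    """Freeze the filesystem containing dirfd (needs CAP_SYS_ADMIN).
    Illustration only -- freezing and then unmounting hits the leak
    described above."""
    fcntl.ioctl(dirfd, FIFREEZE, 0)

def thaw_fs(dirfd):
    fcntl.ioctl(dirfd, FITHAW, 0)
```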
On Wed, Sep 06, 2023 at 06:01:06PM +0200, Mikulas Patocka wrote:
> Perhaps we could distinguish between FIFREEZE-initiated freezes and
> device-mapper initiated freezes as well. And we could change the logic to
> return -EBUSY if the freeze was initiated by FIFREEZE and to wait for
> unfreeze i…
On Wed, Sep 06, 2023 at 05:03:34PM +0200, Mikulas Patocka wrote:
> > IOW, you'd also hang on any umount of a bind-mount. IOW, every
> > single container making use of this filesystem via bind-mounts would
> > hang on umount and shutdown.
>
> bind-mount doesn't modify "s->s_writers.frozen", so the patch…
On Wed, 6 Sep 2023, Christian Brauner wrote:
> > What happens:
> > * dmsetup suspend calls freeze_bdev, that goes to freeze_super and it
> > increments sb->s_active
> > * then we unmount the filesystem, we go to cleanup_mnt, cleanup_mnt calls
> > deactivate_super, deactivate_super sees that…
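[Editor's note: the reference-count dance described above can be modelled as a toy — plain Python, not kernel code; the names merely mirror the kernel's s_active/deactivate_super for readability:]

```python
# Toy model of why a frozen superblock outlives umount: freeze_super
# takes an s_active reference, so the drop done by deactivate_super
# during umount never reaches zero and teardown never runs.
class Super:
    def __init__(self):
        self.s_active = 1    # reference held by the mount
        self.alive = True

    def freeze(self):
        self.s_active += 1   # freeze_bdev/freeze_super grabs a reference

    def deactivate(self):
        self.s_active -= 1   # umount -> cleanup_mnt -> deactivate_super
        if self.s_active == 0:
            self.alive = False  # normal teardown path

sb = Super()
sb.freeze()       # dmsetup suspend / fsfreeze
sb.deactivate()   # umount returns "successfully"
print(sb.alive)   # True: the filesystem stays active, i.e. it is leaked
```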