On (02/03/15 12:02), Minchan Kim wrote:
> On Tue, Feb 03, 2015 at 10:54:33AM +0900, Sergey Senozhatsky wrote:
> > On (02/02/15 16:06), Sergey Senozhatsky wrote:
> > > So, guys, how about doing it differently, in fewer lines of code,
> > > hopefully. Don't move reset_store()'s work to zram_reset_device().
On Tue, Feb 03, 2015 at 10:54:33AM +0900, Sergey Senozhatsky wrote:
> On (02/02/15 16:06), Sergey Senozhatsky wrote:
> > So, guys, how about doing it differently, in fewer lines of code,
> > hopefully. Don't move reset_store()'s work to zram_reset_device().
> > Instead, move
> >
> > set_capacity(zram->disk, 0);
On (02/02/15 16:06), Sergey Senozhatsky wrote:
> So, guys, how about doing it differently, in fewer lines of code,
> hopefully. Don't move reset_store()'s work to zram_reset_device().
> Instead, move
>
> set_capacity(zram->disk, 0);
> revalidate_disk(zram->disk);
>
> out from zram_reset_device()
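For reference, a rough sketch of what reset_store() could look like after that move (an illustration against the zram driver of that era, assuming zram_reset_device() drops its reset_capacity argument; value parsing and some error handling are omitted, so this is not the actual patch):

static ssize_t reset_store(struct device *dev,
		struct device_attribute *attr, const char *buf, size_t len)
{
	struct zram *zram = dev_to_zram(dev);
	struct block_device *bdev = bdget_disk(zram->disk, 0);

	if (!bdev)
		return -ENOMEM;

	/* Do not reset an active device! */
	if (bdev->bd_holders) {
		bdput(bdev);
		return -EBUSY;
	}

	/* Make sure all pending I/O reaches zram before tearing it down */
	fsync_bdev(bdev);
	zram_reset_device(zram);
	bdput(bdev);

	/* the capacity/revalidate dance now lives in the caller */
	set_capacity(zram->disk, 0);
	revalidate_disk(zram->disk);

	return len;
}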
On (02/02/15 15:18), Minchan Kim wrote:
> > a quick idea:
> > can we additionally move all bd flush and put work after
> > zram_reset_device(zram, true)
> > and, perhaps, replace ->bd_holders with something else?
> >
> > zram_reset_device() will not return while we have active IOs, pending IOs
>
On Mon, Feb 02, 2015 at 02:59:23PM +0900, Sergey Senozhatsky wrote:
> On (02/02/15 12:41), Minchan Kim wrote:
> > > If we use zram as a block device itself (not a fs or swap) and open the
> > > block device as !FMODE_EXCL, bd_holders will be void.
> > >
> > > Another topic: As I haven't looked at fs/
On (02/02/15 12:41), Minchan Kim wrote:
> > If we use zram as a block device itself (not a fs or swap) and open the
> > block device as !FMODE_EXCL, bd_holders will be void.
> >
> > Another topic: As I haven't looked at fs/block_dev.c enough, bd_holders in zram
> > would be a mess. I guess we need to study hot
On (02/02/15 14:18), Minchan Kim wrote:
> Everything is fixed. Ready to send a patch.
> But before sending, I hope we fix the umount race issue first.
>
Thanks a lot, Minchan!
OK, I surely don't mind fixing the umount race first; I just didn't want
to interrupt you in the middle of the locking rework. t
On Mon, Feb 02, 2015 at 02:09:12PM +0900, Sergey Senozhatsky wrote:
>
> the patch mostly looks good, except for one place:
>
> On (02/02/15 13:28), Minchan Kim wrote:
> > @@ -783,6 +812,8 @@ static ssize_t disksize_store(struct device *dev,
> > goto out_destroy_comp;
> > }
> >
>
On Mon, Feb 02, 2015 at 01:28:47PM +0900, Minchan Kim wrote:
> On Mon, Feb 02, 2015 at 01:01:24PM +0900, Sergey Senozhatsky wrote:
> > On (02/02/15 11:44), Minchan Kim wrote:
> > > > sure, I did think about this. and I actually didn't find any reason not
> > > > to use ->refcount there. if user wants
the patch mostly looks good, except for one place:
On (02/02/15 13:28), Minchan Kim wrote:
> @@ -783,6 +812,8 @@ static ssize_t disksize_store(struct device *dev,
> goto out_destroy_comp;
> }
>
> + init_waitqueue_head(&zram->io_done);
> + zram_meta_get(zram);
it was
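To make the hunk above easier to follow, here is a rough sketch of the refcount/io_done machinery it pairs with (the helper names and the exact put/wake details are assumptions for illustration, not the actual patch; zram_wait_io_done is a hypothetical helper standing in for the reset-side wait):

/* one reference is taken by disksize_store(), one per in-flight request */
static inline void zram_meta_get(struct zram *zram)
{
	atomic_inc(&zram->refcount);
}

static inline void zram_meta_put(struct zram *zram)
{
	/* the last put wakes a reset sleeping on io_done */
	if (atomic_dec_and_test(&zram->refcount))
		wake_up(&zram->io_done);
}

/* reset side (hypothetical helper): drop the initial ref, wait out I/O */
static void zram_wait_io_done(struct zram *zram)
{
	zram_meta_put(zram);
	wait_event(zram->io_done, atomic_read(&zram->refcount) == 0);
}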
On Mon, Feb 02, 2015 at 01:01:24PM +0900, Sergey Senozhatsky wrote:
> On (02/02/15 11:44), Minchan Kim wrote:
> > > sure, I did think about this. and I actually didn't find any reason not
> > > to use ->refcount there. if user wants to reset the device, he first
> > > should umount it to make bdev-
On (02/02/15 11:44), Minchan Kim wrote:
> > sure, I did think about this. and I actually didn't find any reason not
> > to use ->refcount there. if user wants to reset the device, he first
> > should umount it to make bdev->bd_holders check happy. and that's where
> > IOs will fail. so it make
On (02/02/15 11:45), Minchan Kim wrote:
> On Mon, Feb 02, 2015 at 10:59:40AM +0900, Sergey Senozhatsky wrote:
> > On (02/02/15 10:43), Minchan Kim wrote:
> > > > static inline int init_done(struct zram *zram)
> > > > {
> > > > - return zram->meta != NULL;
> > > > + return atomic_read(&zram->refcount);
Separate another issue from my patch.
On Mon, Feb 02, 2015 at 11:44:06AM +0900, Minchan Kim wrote:
> On Mon, Feb 02, 2015 at 10:48:00AM +0900, Sergey Senozhatsky wrote:
> > Hello Minchan,
> >
> > On (02/02/15 10:30), Minchan Kim wrote:
> > > > > static inline int init_done(struct zram *zram)
> >
On Mon, Feb 02, 2015 at 10:59:40AM +0900, Sergey Senozhatsky wrote:
> On (02/02/15 10:43), Minchan Kim wrote:
> > > static inline int init_done(struct zram *zram)
> > > {
> > > - return zram->meta != NULL;
> > > + return atomic_read(&zram->refcount);
> >
> > As I said in the previous mail, it could mak
On Mon, Feb 02, 2015 at 10:48:00AM +0900, Sergey Senozhatsky wrote:
> Hello Minchan,
>
> On (02/02/15 10:30), Minchan Kim wrote:
> > > > static inline int init_done(struct zram *zram)
> > > > {
> > > > - return zram->meta != NULL;
> > > > + return zram->disksize != 0;
> > >
> > > we
On (02/02/15 10:43), Minchan Kim wrote:
> > static inline int init_done(struct zram *zram)
> > {
> > - return zram->meta != NULL;
> > + return atomic_read(&zram->refcount);
>
> As I said in the previous mail, it could cause a livelock, so I want to use disksize
> here to prevent further I/O handling
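Roughly, the change being argued for here, expressed as a diff (a sketch of the idea, not the actual patch):

-static inline bool init_done(struct zram *zram)
-{
-	return atomic_read(&zram->refcount) != 0;
-}
+static inline bool init_done(struct zram *zram)
+{
+	/*
+	 * reset clears ->disksize before draining in-flight I/O, so new
+	 * requests stop being admitted and the refcount can really reach 0
+	 */
+	return zram->disksize != 0;
+}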
Hello Minchan,
On (02/02/15 10:30), Minchan Kim wrote:
> > > static inline int init_done(struct zram *zram)
> > > {
> > > - return zram->meta != NULL;
> > > + return zram->disksize != 0;
> >
> > we don't set ->disksize to 0 when we create the device. and I think
> > it's better to use refcount here, b
On Mon, Feb 02, 2015 at 12:04:16AM +0900, Sergey Senozhatsky wrote:
> Sorry, guys! my bad. the patch I sent to you is incomplete. please review
> this one.
>
> On (02/01/15 23:50), Sergey Senozhatsky wrote:
> > + zram_meta_free(meta, disksize);
> >
> that was my in-mail last-second stupid modification
Hey Sergey,
On Sun, Feb 01, 2015 at 11:50:36PM +0900, Sergey Senozhatsky wrote:
> Hello Minchan,
>
> the idea looks good and this is something I was trying to do, except
> that I used kref.
>
> some review nitpicks are below. I also posted a modified version of your
> patch, so that will save some time.
Sorry, guys! my bad. the patch I sent to you is incomplete. please review
this one.
On (02/01/15 23:50), Sergey Senozhatsky wrote:
> + zram_meta_free(meta, disksize);
>
that was my in-mail last-second stupid modification and I didn't compile-test
it. I hit the send button and then realized that
Hello Minchan,
the idea looks good and this is something I was trying to do, except
that I used kref.
some review nitpicks are below. I also posted a modified version of your
patch, so that will save some time.
> static inline int init_done(struct zram *zram)
> {
> - return zram->meta != NULL;
Hello, Sergey
2015-01-31 19:07 GMT+08:00 Sergey Senozhatsky :
> On (01/31/15 16:50), Ganesh Mahendran wrote:
>> >> > after umount we still have an initialized device. so, *theoretically*, we
>> >> > can see something like
>> >> >
>> >> > CPU0                          CPU1
>> >> > umount
>> >> > reset_store
Hello Minchan,
excellent analysis!
On (01/30/15 23:41), Minchan Kim wrote:
> Yes, __srcu_read_lock is a little bit heavier, but the difference in instruction
> count is not big enough to make a 10% difference. A culprit could be
> __cond_resched, but I don't think so either, because our test was CPU intensive
On (01/31/15 16:50), Ganesh Mahendran wrote:
> >> > after umount we still have an initialized device. so, *theoretically*, we
> >> > can see something like
> >> >
> >> > CPU0                          CPU1
> >> > umount
> >> > reset_store
> >> > bdev->bd_holders == 0         mount
> >> > .
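For illustration, one way to close that window would be to make the bd_holders check and the reset atomic with respect to mount, by holding bdev->bd_mutex across both in reset_store() (a sketch of the idea only; whether this is the right lock, and where exactly to take it, is what is being discussed):

	mutex_lock(&bdev->bd_mutex);

	/* do not reset an active device, and do not race with a concurrent mount */
	if (bdev->bd_holders) {
		mutex_unlock(&bdev->bd_mutex);
		bdput(bdev);
		return -EBUSY;
	}

	fsync_bdev(bdev);
	zram_reset_device(zram);
	mutex_unlock(&bdev->bd_mutex);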
2015-01-30 16:08 GMT+08:00 Sergey Senozhatsky :
> On (01/30/15 15:52), Ganesh Mahendran wrote:
>> >> When an I/O operation is running, that means /dev/zram0 is
>> >> mounted or swapped on. Then the device could not be reset by
>> >> the code below:
>> >>
>> >> /* Do not reset an active device! */
>
Hello, Sergey
On Thu, Jan 29, 2015 at 04:08:35PM +0900, Sergey Senozhatsky wrote:
> On (01/29/15 15:35), Minchan Kim wrote:
> >
> > As you said, the data was not stable.
> >
> yes. fread test was always slower, and the rest was mostly slower.
>
>
> > Anyway, when I read the down_read implementation,
On (01/30/15 15:52), Ganesh Mahendran wrote:
> >> When an I/O operation is running, that means /dev/zram0 is
> >> mounted or swapped on. Then the device could not be reset by
> >> the code below:
> >>
> >> /* Do not reset an active device! */
> >> if (bdev->bd_holders) {
> >> ret = -EBUSY;
Hello Sergey
2015-01-29 23:12 GMT+08:00 Sergey Senozhatsky :
> On (01/29/15 21:48), Ganesh Mahendran wrote:
>> > An admin could reset zram while an I/O operation is going on, so we have
>> > used zram->init_lock as a read-side lock in the I/O path to prevent
>> > sudden zram meta freeing.
>>
>> When an I/O operation
On (01/29/15 15:35), Minchan Kim wrote:
>
> I tested it with multiple dd processes.
>
Hello,
fio test (http://manpages.ubuntu.com/manpages/natty/man1/fio.1.html) configs
are attached,
so we can run fio on different h/w (works against zram with fs mounted at /mnt/)
for i in ./test-fio-
On (01/29/15 21:48), Ganesh Mahendran wrote:
> > An admin could reset zram while an I/O operation is going on, so we have
> > used zram->init_lock as a read-side lock in the I/O path to prevent
> > sudden zram meta freeing.
>
> When an I/O operation is running, that means /dev/zram0 is
> mounted or swapped on. T
Hello, Minchan:
2015-01-28 16:15 GMT+08:00 Minchan Kim :
> An admin could reset zram while an I/O operation is going on, so we have
> used zram->init_lock as a read-side lock in the I/O path to prevent
> sudden zram meta freeing.
When an I/O operation is running, that means /dev/zram0 is
mounted or swapped on.
On (01/29/15 15:35), Minchan Kim wrote:
>
> As you said, the data was not stable.
>
yes. fread test was always slower, and the rest was mostly slower.
> Anyway, when I read the down_read implementation, it's one atomic instruction.
> Hmm, it seems to be better than srcu_read_lock, which does more thing
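For readers following the comparison, the SRCU variant being benchmarked against down_read() would look roughly like this in the I/O path (the per-device srcu field name and the teardown helper are assumptions; this is a sketch of the approach, not Minchan's actual patch, and it assumes the init state has already been cleared before teardown):

#include <linux/srcu.h>

static void zram_make_request(struct request_queue *queue, struct bio *bio)
{
	struct zram *zram = queue->queuedata;
	int idx;

	/* read side: srcu_read_lock() instead of down_read(&zram->init_lock) */
	idx = srcu_read_lock(&zram->srcu);
	if (!init_done(zram)) {
		srcu_read_unlock(&zram->srcu, idx);
		bio_io_error(bio);
		return;
	}
	__zram_make_request(zram, bio);
	srcu_read_unlock(&zram->srcu, idx);
}

/* reset side: wait for every read-side critical section, then free meta */
static void zram_meta_teardown(struct zram *zram)
{
	synchronize_srcu(&zram->srcu);
	zram_meta_free(zram->meta, zram->disksize);
	zram->meta = NULL;
}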
On Thu, Jan 29, 2015 at 03:06:04PM +0900, Sergey Senozhatsky wrote:
> On (01/29/15 14:28), Minchan Kim wrote:
> > > I'm still concerned about the performance numbers that I see on my x86_64:
> > > it's not always, but mostly, slower. I'll give it another try (disable
> > > lockdep, etc.), but if we lose
On (01/29/15 14:28), Minchan Kim wrote:
> > I'm still concerned about the performance numbers that I see on my x86_64:
> > it's not always, but mostly, slower. I'll give it another try (disable
> > lockdep, etc.), but if we lose 10% on average then, sorry, I'm not so
> > positive about the srcu change and w
On Thu, Jan 29, 2015 at 11:22:41AM +0900, Sergey Senozhatsky wrote:
> On (01/29/15 11:01), Minchan Kim wrote:
> > On Thu, Jan 29, 2015 at 10:57:38AM +0900, Sergey Senozhatsky wrote:
> > > On Thu, Jan 29, 2015 at 8:33 AM, Minchan Kim wrote:
> > > > On Wed, Jan 28, 2015 at 11:56:51PM +0900, Sergey S
On (01/29/15 11:01), Minchan Kim wrote:
> On Thu, Jan 29, 2015 at 10:57:38AM +0900, Sergey Senozhatsky wrote:
> > On Thu, Jan 29, 2015 at 8:33 AM, Minchan Kim wrote:
> > > On Wed, Jan 28, 2015 at 11:56:51PM +0900, Sergey Senozhatsky wrote:
> > > > I don't like the re-introduced ->init_done.
> > > > an
On Thu, Jan 29, 2015 at 10:57:38AM +0900, Sergey Senozhatsky wrote:
> On Thu, Jan 29, 2015 at 8:33 AM, Minchan Kim wrote:
>
> > On Wed, Jan 28, 2015 at 11:56:51PM +0900, Sergey Senozhatsky wrote:
> > > I don't like the re-introduced ->init_done.
> > > another idea... how about using `zram->disksize =
On Wed, Jan 28, 2015 at 11:56:51PM +0900, Sergey Senozhatsky wrote:
> On (01/28/15 17:15), Minchan Kim wrote:
> > An admin could reset zram while an I/O operation is going on, so we have
> > used zram->init_lock as a read-side lock in the I/O path to prevent
> > sudden zram meta freeing.
> >
> > However, the ini
On (01/28/15 23:56), Sergey Senozhatsky wrote:
> > -static inline int init_done(struct zram *zram)
> > +static inline bool init_done(struct zram *zram)
> > {
> > - return zram->meta != NULL;
> > + /*
> > + * init_done can be used without holding zram->init_lock in
> > + * read/write hand
On (01/28/15 17:15), Minchan Kim wrote:
> An admin could reset zram while an I/O operation is going on, so we have
> used zram->init_lock as a read-side lock in the I/O path to prevent
> sudden zram meta freeing.
>
> However, the init_lock is really troublesome.
> We can't call zram_meta_alloc under init_lock