Re: [PATCH v5 1/2] memory_hotplug: Free pages as higher order
On Thu, Oct 11, 2018 at 6:05 PM Vlastimil Babka wrote:
>
> On 10/10/18 6:56 PM, Arun KS wrote:
> > On 2018-10-10 21:00, Vlastimil Babka wrote:
> >> On 10/5/18 10:10 AM, Arun KS wrote:
> >>> When freeing pages are done with higher order, time spent on
> >>> coalescing pages by buddy allocator can be reduced. With
> >>> section size of 256MB, hot add latency of a single section
> >>> shows improvement from 50-60 ms to less than 1 ms, hence
> >>> improving the hot add latency by 60 times. Modify external
> >>> providers of online callback to align with the change.
> >>>
> >>> Signed-off-by: Arun KS
> >>
> >> [...]
> >>
> >>> @@ -655,26 +655,44 @@ void __online_page_free(struct page *page)
> >>>  }
> >>>  EXPORT_SYMBOL_GPL(__online_page_free);
> >>>
> >>> -static void generic_online_page(struct page *page)
> >>> +static int generic_online_page(struct page *page, unsigned int order)
> >>>  {
> >>> -    __online_page_set_limits(page);
> >>
> >> This is now not called anymore, although the xen/hv variants still do
> >> it. The function seems empty these days, maybe remove it as a followup
> >> cleanup?
> >>
> >>> -    __online_page_increment_counters(page);
> >>> -    __online_page_free(page);
> >>> +    __free_pages_core(page, order);
> >>> +    totalram_pages += (1UL << order);
> >>> +#ifdef CONFIG_HIGHMEM
> >>> +    if (PageHighMem(page))
> >>> +        totalhigh_pages += (1UL << order);
> >>> +#endif
> >>
> >> __online_page_increment_counters() would have used
> >> adjust_managed_page_count() which would do the changes under
> >> managed_page_count_lock. Are we safe without the lock? If yes, there
> >> should perhaps be a comment explaining why.
> >
> > Looks unsafe without managed_page_count_lock. I think it is better to
> > have a similar implementation of free_boot_core() in memory_hotplug.c,
> > like we had in version 1 of the patch, and to use
> > adjust_managed_page_count() instead of
> > page_zone(page)->managed_pages += nr_pages;
> >
> > https://lore.kernel.org/patchwork/patch/989445/
>
> Looks like deferred_free_range() has the same problem calling
> __free_pages_core() to adjust zone->managed_pages. I expect
> __free_pages_bootmem() is OK because at that point the system is still
> single-threaded?
> Could be solved by moving that out of __free_pages_core().

Seems deferred_free_range() is protected by
pgdat_resize_lock()/pgdat_resize_unlock(), which protects the pgdat's
zones, if I am right.

> But do we care about readers potentially seeing a store tear? If yes
> then maybe these counters should be converted to atomics...
>
> > -static void generic_online_page(struct page *page)
> > +static int generic_online_page(struct page *page, unsigned int order)
> >  {
> > -    __online_page_set_limits(page);
> > -    __online_page_increment_counters(page);
> > -    __online_page_free(page);
> > +    unsigned long nr_pages = 1 << order;
> > +    unsigned long loop;
> > +    struct page *p = page;
> > +
> > +    for (loop = 0; loop < nr_pages; loop++, p++) {
> > +        __ClearPageReserved(p);
> > +        set_page_count(p, 0);
> > +    }
> > +
> > +    adjust_managed_page_count(page, nr_pages);
> > +    set_page_refcounted(page);
> > +    __free_pages(page, order);
> > +
> > +    return 0;
> > +}
> >
> > Regards,
> > Arun
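For reference, the locked counter update being discussed here,
adjust_managed_page_count(), looked roughly like this in mm/page_alloc.c of
that era (a sketch from memory rather than a verbatim quote) — every update
happens under managed_page_count_lock, which is exactly what the open-coded
"totalram_pages += (1UL << order)" in the patch bypasses:

/* Sketch of the ~4.19-era helper; treat as illustrative, not authoritative. */
void adjust_managed_page_count(struct page *page, long count)
{
    spin_lock(&managed_page_count_lock);
    page_zone(page)->managed_pages += count;
    totalram_pages += count;
#ifdef CONFIG_HIGHMEM
    if (PageHighMem(page))
        totalhigh_pages += count;
#endif
    spin_unlock(&managed_page_count_lock);
}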
Re: [PATCH RFCv2 0/4] mm/memory_hotplug: Introduce memory block types
On Fri, Nov 30, 2018 at 06:59:18PM +0100, David Hildenbrand wrote:
>This is the second approach, introducing more meaningful memory block
>types and not changing online behavior in the kernel. It is based on
>latest linux-next.
>
>As we found out during discussion, user space should always handle onlining
>of memory, in any case. However, in order to make smart decisions in user
>space about if and how to online memory, we have to export more information
>about memory blocks. This way, we can formulate rules in user space.
>
>One such piece of information is the type of memory block we are talking
>about. This helps to answer some questions like:
>- Does this memory block belong to a DIMM?
>- Can this DIMM theoretically ever be unplugged again?
>- Was this memory added by a balloon driver that will rely on balloon
>  inflation to remove chunks of that memory again? Which zone is advised?
>- Is this special standby memory on s390x that is usually not automatically
>  onlined?
>
>In short, it helps to answer, to some extent (excluding zone imbalances):
>- Should I online this memory block?
>- To which zone should I online this memory block?
>... of course special use cases will result in different answers. But that's
>why user space has control of onlining memory.
>
>More details can be found in Patch 1 and Patch 3.
>Tested on x86 with hotplugged DIMMs. Cross-compiled for PPC and s390x.
>
>
>Example:
>$ udevadm info -q all -a /sys/devices/system/memory/memory0
>  KERNEL=="memory0"
>  SUBSYSTEM=="memory"
>  DRIVER==""
>  ATTR{online}=="1"
>  ATTR{phys_device}=="0"
>  ATTR{phys_index}==""
>  ATTR{removable}=="0"
>  ATTR{state}=="online"
>  ATTR{type}=="boot"
>  ATTR{valid_zones}=="none"
>$ udevadm info -q all -a /sys/devices/system/memory/memory90
>  KERNEL=="memory90"
>  SUBSYSTEM=="memory"
>  DRIVER==""
>  ATTR{online}=="1"
>  ATTR{phys_device}=="0"
>  ATTR{phys_index}=="005a"
>  ATTR{removable}=="1"
>  ATTR{state}=="online"
>  ATTR{type}=="dimm"
>  ATTR{valid_zones}=="Normal"
>
>
>RFC -> RFCv2:
>- Now also taking care of PPC (somehow missed it :/ )
>- Split the series up to some degree (some ideas on how to split up patch 3
>  would be very welcome)
>- Introduce more memory block types. Turns out abstracting too much was
>  rather confusing and not helpful. Properly document them.
>
>Notes:
>- I wanted to convert the enum of types into a named enum but this
>  provoked all kinds of different errors. For now, I am doing it just like
>  the other types (e.g. online_type) we are using in that context.
>- The "removable" property should never have been named like that. It
>  should have been "offlinable". Can we still rename that? E.g. boot memory
>  is sometimes marked as removable ...

This makes sense to me. "Remove" usually describes the physical hotplug
phase, if I am correct.

--
Wei Yang
Help you, Help me
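To make the intended user-space flow concrete, a hypothetical onlining
helper could key its policy off the new "type" attribute along these lines.
A minimal sketch: the sysfs path and the "dimm"/"boot" strings follow the
udevadm example above, while the policy itself is made up for illustration:

#include <stdio.h>
#include <string.h>

/* Hypothetical user-space policy: read the proposed "type" attribute and
 * decide whether/how to online a memory block. The rules below are
 * examples only, not part of the patch set. */
static int handle_block(int block)
{
    char path[96], type[32];
    FILE *f;

    snprintf(path, sizeof(path),
             "/sys/devices/system/memory/memory%d/type", block);
    f = fopen(path, "r");
    if (!f)
        return -1;
    if (!fgets(type, sizeof(type), f)) {
        fclose(f);
        return -1;
    }
    fclose(f);

    if (!strncmp(type, "dimm", 4)) {
        /* hot-plugged DIMM: e.g. write "online_movable" to .../state */
    } else if (!strncmp(type, "boot", 4)) {
        /* boot memory: already onlined by the kernel, nothing to do */
    }
    return 0;
}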
Re: [PATCH RFCv2 1/4] mm/memory_hotplug: Introduce memory block types
[...]
>+	if (type == MEMORY_BLOCK_NONE)
>+		return -EINVAL;

No one will pass in this value. Can we omit this check for now?

>+
>	mem = kzalloc(sizeof(*mem), GFP_KERNEL);
>	if (!mem)
>		return -ENOMEM;
>@@ -675,6 +704,7 @@ static int init_memory_block(struct memory_block **memory,
>	mem->state = state;
>	start_pfn = section_nr_to_pfn(mem->start_section_nr);
>	mem->phys_device = arch_get_memory_phys_device(start_pfn);
>+	mem->type = type;
>
>	ret = register_memory(mem);
>
>@@ -699,7 +729,8 @@ static int add_memory_block(int base_section_nr)
>
>	if (section_count == 0)
>		return 0;
>-	ret = init_memory_block(&mem, __nr_to_section(section_nr), MEM_ONLINE);
>+	ret = init_memory_block(&mem, __nr_to_section(section_nr), MEM_ONLINE,
>+				MEMORY_BLOCK_BOOT);
>	if (ret)
>		return ret;
>	mem->section_count = section_count;
>@@ -722,7 +753,8 @@ int hotplug_memory_register(int nid, struct mem_section *section)
>		mem->section_count++;
>		put_device(&mem->dev);
>	} else {
>-		ret = init_memory_block(&mem, section, MEM_OFFLINE);
>+		ret = init_memory_block(&mem, section, MEM_OFFLINE,
>+					MEMORY_BLOCK_UNSPECIFIED);
>		if (ret)
>			goto out;
>		mem->section_count++;
>diff --git a/include/linux/memory.h b/include/linux/memory.h
>index d75ec88ca09d..06268e96e0da 100644
>--- a/include/linux/memory.h
>+++ b/include/linux/memory.h
>@@ -34,12 +34,39 @@ struct memory_block {
>	int (*phys_callback)(struct memory_block *);
>	struct device dev;
>	int nid;		/* NID for this memory block */
>+	int type;		/* type of this memory block */
> };
>
> int arch_get_memory_phys_device(unsigned long start_pfn);
> unsigned long memory_block_size_bytes(void);
> int set_memory_block_size_order(unsigned int order);
>
>+/*
>+ * Memory block types allow user space to formulate rules if and how to
>+ * online memory blocks. The types are exposed to user space as text
>+ * strings in sysfs.
>+ *
>+ * MEMORY_BLOCK_NONE:
>+ *   No memory block is to be created (e.g. device memory). Not exposed to
>+ *   user space.
>+ *
>+ * MEMORY_BLOCK_UNSPECIFIED:
>+ *   The type of memory block was not further specified when adding the
>+ *   memory block.
>+ *
>+ * MEMORY_BLOCK_BOOT:
>+ *   This memory block was added during boot by the basic system. No
>+ *   specific device driver takes care of this memory block. This memory
>+ *   block type is onlined automatically by the kernel during boot and might
>+ *   later be managed by a different device driver, in which case the type
>+ *   might change.
>+ */
>+enum {
>+	MEMORY_BLOCK_NONE = 0,
>+	MEMORY_BLOCK_UNSPECIFIED,
>+	MEMORY_BLOCK_BOOT,
>+};
>+
> /* These states are exposed to userspace as text strings in sysfs */
> #define MEM_ONLINE		(1<<0) /* exposed to userspace */
> #define MEM_GOING_OFFLINE	(1<<1) /* exposed to userspace */
>--
>2.17.2

--
Wei Yang
Help you, Help me
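The sysfs plumbing for the new field is not visible in this excerpt;
presumably the series maps the enum to the strings user space sees with a
show function along these lines (a sketch in the style of drivers/base/memory.c,
not the actual hunk — the string names are taken from the cover letter):

static ssize_t show_mem_type(struct device *dev,
                             struct device_attribute *attr, char *buf)
{
    struct memory_block *mem = to_memory_block(dev);

    switch (mem->type) {
    case MEMORY_BLOCK_UNSPECIFIED:
        return sprintf(buf, "unspecified\n");
    case MEMORY_BLOCK_BOOT:
        return sprintf(buf, "boot\n");
    default:
        return sprintf(buf, "unknown\n");   /* should not happen */
    }
}
static DEVICE_ATTR(type, 0444, show_mem_type, NULL);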
Re: [PATCH RFCv2 2/4] mm/memory_hotplug: Replace "bool want_memblock" by "int type"
On Fri, Nov 30, 2018 at 06:59:20PM +0100, David Hildenbrand wrote:
>Let's pass a memory block type instead. Pass "MEMORY_BLOCK_NONE" for device
>memory and for now "MEMORY_BLOCK_UNSPECIFIED" for anything else. No
>functional change.

I would suggest putting more words into this, e.g.:

"
Function arch_add_memory()'s last parameter *want_memblock* is used to
determine whether it is necessary to create a corresponding memory block
device. After introducing memory block types, this patch replaces the bool
*want_memblock* with a memory block type, with the following rules for now:

  * Pass "MEMORY_BLOCK_NONE" for device memory
  * Pass "MEMORY_BLOCK_UNSPECIFIED" for anything else

Since this parameter is passed all the way down to __add_section(), all of
its descendants are affected. The affected call chain is:

  arch_add_memory()
    add_pages()
      __add_pages()
        __add_section()
"

>
>Cc: Tony Luck
>Cc: Fenghua Yu
>Cc: Benjamin Herrenschmidt
>Cc: Paul Mackerras
>Cc: Michael Ellerman
>Cc: Martin Schwidefsky
>Cc: Heiko Carstens
>Cc: Yoshinori Sato
>Cc: Rich Felker
>Cc: Dave Hansen
>Cc: Andy Lutomirski
>Cc: Peter Zijlstra
>Cc: Thomas Gleixner
>Cc: Ingo Molnar
>Cc: Borislav Petkov
>Cc: "H. Peter Anvin"
>Cc: x...@kernel.org
>Cc: Greg Kroah-Hartman
>Cc: "Rafael J. Wysocki"
>Cc: Andrew Morton
>Cc: Mike Rapoport
>Cc: Michal Hocko
>Cc: Dan Williams
>Cc: "Kirill A. Shutemov"
>Cc: Oscar Salvador
>Cc: Nicholas Piggin
>Cc: Stephen Rothwell
>Cc: Christophe Leroy
>Cc: "Jonathan Neuschäfer"
>Cc: Mauricio Faria de Oliveira
>Cc: Vasily Gorbik
>Cc: Arun KS
>Cc: Rob Herring
>Cc: Pavel Tatashin
>Cc: "mike.tra...@hpe.com"
>Cc: Joonsoo Kim
>Cc: Wei Yang
>Cc: Logan Gunthorpe
>Cc: "Jérôme Glisse"
>Cc: "Jan H. Schönherr"
>Cc: Dave Jiang
>Cc: Matthew Wilcox
>Cc: Mathieu Malaterre
>Signed-off-by: David Hildenbrand
>---
> arch/ia64/mm/init.c            |  4 ++--
> arch/powerpc/mm/mem.c          |  4 ++--
> arch/s390/mm/init.c            |  4 ++--
> arch/sh/mm/init.c              |  4 ++--
> arch/x86/mm/init_32.c          |  4 ++--
> arch/x86/mm/init_64.c          |  8 ++++----
> drivers/base/memory.c          | 11 +++++++----
> include/linux/memory.h         |  2 +-
> include/linux/memory_hotplug.h | 12 ++++++------
> kernel/memremap.c              |  6 ++++--
> mm/memory_hotplug.c            | 16 ++++++++--------
> 11 files changed, 40 insertions(+), 35 deletions(-)
>
>diff --git a/arch/ia64/mm/init.c b/arch/ia64/mm/init.c
>index 904fe55e10fc..408635d2902f 100644
>--- a/arch/ia64/mm/init.c
>+++ b/arch/ia64/mm/init.c
>@@ -646,13 +646,13 @@ mem_init (void)
>
> #ifdef CONFIG_MEMORY_HOTPLUG
> int arch_add_memory(int nid, u64 start, u64 size, struct vmem_altmap *altmap,
>-		bool want_memblock)
>+		int type)
> {
>	unsigned long start_pfn = start >> PAGE_SHIFT;
>	unsigned long nr_pages = size >> PAGE_SHIFT;
>	int ret;
>
>-	ret = __add_pages(nid, start_pfn, nr_pages, altmap, want_memblock);
>+	ret = __add_pages(nid, start_pfn, nr_pages, altmap, type);
>	if (ret)
>		printk("%s: Problem encountered in __add_pages() as ret=%d\n",
>		       __func__, ret);
>diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c
>index b3c9ee5c4f78..e394637da270 100644
>--- a/arch/powerpc/mm/mem.c
>+++ b/arch/powerpc/mm/mem.c
>@@ -118,7 +118,7 @@ int __weak remove_section_mapping(unsigned long start, unsigned long end)
> }
>
> int __meminit arch_add_memory(int nid, u64 start, u64 size, struct vmem_altmap *altmap,
>-		bool want_memblock)
>+		int type)
> {
>	unsigned long start_pfn = start >> PAGE_SHIFT;
>	unsigned long nr_pages = size >> PAGE_SHIFT;
>@@ -135,7 +135,7 @@ int __meminit arch_add_memory(int nid, u64 start, u64 size, struct vmem_altmap *
>	}
>	flush_inval_dcache_range(start, start + size);
>
>-	return __add_pages(nid, start_pfn, nr_pages, altmap, want_memblock);
>+	return __add_pages(nid, start_pfn, nr_pages, altmap, type);
> }
>
> #ifdef CONFIG_MEMORY_HOTREMOVE
>diff --git a/arch/s390/mm/init.c b/arch/s390/mm/init.c
>index 3e82f66d5c61..ba2c56328e6d 100644
>--- a/arch/s390/mm/init.c
>+++ b/arch/s390/mm/init.c
>@@ -225,7 +225,7 @@ device_initcall(s390_cma_mem_init);
> #endif /* CONFIG_CMA */
>
> int arch_add_memory(int nid, u64 start, u64 size, struct vmem_altmap *altmap,
>-		bool want_memblock)
>+		int type)
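For orientation, the place where the replacement actually matters is at the
bottom of the call chain named above. A simplified sketch of how
__add_section() might consume the type — inferred from the description, not
the literal hunk; in particular the extra parameter to
hotplug_memory_register() is an assumption:

static int __meminit __add_section(int nid, unsigned long phys_start_pfn,
                                   struct vmem_altmap *altmap, int type)
{
    int ret;

    ret = sparse_add_one_section(NODE_DATA(nid), phys_start_pfn, altmap);
    if (ret < 0)
        return ret;

    /* Previously: if (!want_memblock) return 0; */
    if (type == MEMORY_BLOCK_NONE)
        return 0;   /* e.g. ZONE_DEVICE memory gets no block device */

    return hotplug_memory_register(nid, __pfn_to_section(phys_start_pfn),
                                   type);
}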
Re: [PATCH RFCv2 1/4] mm/memory_hotplug: Introduce memory block types
[...]
>>>+	if (type == MEMORY_BLOCK_NONE)
>>>+		return -EINVAL;
>>
>> No one will pass in this value. Can we omit this check for now?
>
>I could move it to patch nr 2 I guess, but as I introduce
>MEMORY_BLOCK_NONE here it made sense to keep it in here.

Yes, this makes sense to me now.

>(and I think at least for now it makes sense to not squash patches 1 and
>2, to make it easier to discuss the new user interface/concept introduced
>in this patch).
>
>Thanks!
>
>--
>
>Thanks,
>
>David / dhildenb

--
Wei Yang
Help you, Help me
Re: [PATCH v3 1/3] resource: Use list_head to link resource sibling
On Tue, Apr 10, 2018 at 09:44:16PM +0800, Baoquan He wrote:
>Hi Rob,
>
>Thanks a lot for looking into this and for involving Nico in this thread!
>
>On 04/09/18 at 09:49am, Rob Herring wrote:
>> +Nico, who has been working on tinification of the kernel.
>>
>> On Mon, Apr 9, 2018 at 4:08 AM, Baoquan He wrote:
>> > The struct resource uses a singly linked list to link siblings. It's not
>> > easy to do reverse iteration on the sibling list. So replace it with
>> > list_head.
>>
>> Why is reverse iteration needed?
>
>This is the explanation I made when Andrew helped to review the v1 post:
>https://lkml.org/lkml/2018/3/23/78
>
>We have been using the kexec-tools utility to search for available
>System RAM space for loading kernel/initrd/purgatory, from top to down.
>That is done in user space by searching /proc/iomem. The later added
>kexec_file interface does the searching in the kernel instead, but it
>only searches System RAM regions bottom-up, then takes an area in the
>found RAM region from top to down. We need to unify these two interfaces
>on behaviour since they are the same in essence from the user's point of
>view, though the implementation differs. As you know, the singly linked
>list used for linking resource siblings makes searching from top to down
>very awkward.
>
>Below is the v1 post. We make a temporary array to copy iomem_resource's
>first level of children, then iterate the array in reverse. Andrew
>suggested I try list_head after reviewing. In fact we can optimize that
>patch to only copy resource pointers into the array, but the way is
>still ugly:
>https://lkml.org/lkml/2018/3/21/952
>
>Then Wei pasted a patch he had made, as below. He didn't mention whether
>he also has a requirement for reverse iteration of resources. That is an
>O(n*n) way; personally, it's hard to say whether it's better than the v1
>post:
>https://lkml.org/lkml/2018/3/24/157

I don't have a requirement for reverse iteration of the resource
structure. My approach is almost the same as the current
walk_system_ram_res(). Since each resource keeps a pointer to its parent,
we can get the previous resource by searching res->parent->child.

The complexity of a whole iteration is O(N * W / 2), where N is the
number of resources in the tree and W is the average number of siblings
of each resource. And this approach doesn't need to change the current
structure.

>
>That's why I would like to have a try of the list_head linking.
>

--
Wei Yang
Help you, Help me
Re: [PATCH v3 1/3] resource: Use list_head to link sibling resource
On Thu, Apr 19, 2018 at 08:18:46AM +0800, Baoquan He wrote:
>The struct resource uses a singly linked list to link siblings. It's not
>easy to do reverse iteration on the sibling list. So replace it with
>list_head.
>

Hi, Baoquan

Besides changing the data structure, I have another proposal for doing
the reverse iteration, which would not affect other users if you just
want a reverse iteration.

BTW, I don't think Andrew suggested using a linked list directly. What he
wants is a better solution than your first proposal in
https://patchwork.kernel.org/patch/10300819/.

Below is my proposal for resource reverse iteration without changing the
current design.

From 5d7145d44fe48b98572a03884fa3a3aa82e3cef9 Mon Sep 17 00:00:00 2001
From: Wei Yang
Date: Sat, 24 Mar 2018 23:25:46 +0800
Subject: [PATCH] kernel/resource: add walk_system_ram_res_rev()

As discussed on https://patchwork.kernel.org/patch/10300819/, this patch
comes up with a variant implementation of walk_system_ram_res_rev(),
which uses iteration instead of allocating an array to store those
resources.

Signed-off-by: Wei Yang
---
 include/linux/ioport.h |   3 ++
 kernel/resource.c      | 113 ++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 116 insertions(+)

diff --git a/include/linux/ioport.h b/include/linux/ioport.h
index da0ebaec25f0..473f1d9cb97e 100644
--- a/include/linux/ioport.h
+++ b/include/linux/ioport.h
@@ -277,6 +277,9 @@ extern int
 walk_system_ram_res(u64 start, u64 end, void *arg,
 		    int (*func)(struct resource *, void *));
 extern int
+walk_system_ram_res_rev(u64 start, u64 end, void *arg,
+			int (*func)(struct resource *, void *));
+extern int
 walk_iomem_res_desc(unsigned long desc, unsigned long flags, u64 start, u64 end,
 		    void *arg, int (*func)(struct resource *, void *));
diff --git a/kernel/resource.c b/kernel/resource.c
index 769109f20fb7..d4ec5fbc6875 100644
--- a/kernel/resource.c
+++ b/kernel/resource.c
@@ -73,6 +73,38 @@ static struct resource *next_resource(struct resource *p, bool sibling_only)
 	return p->sibling;
 }
 
+static struct resource *prev_resource(struct resource *p, bool sibling_only)
+{
+	struct resource *prev;
+
+	if (!iomem_resource.child)
+		return NULL;
+
+	if (!p) {
+		/* Start from the last first-level child. */
+		prev = iomem_resource.child;
+		while (prev->sibling)
+			prev = prev->sibling;
+	} else {
+		if (p->parent->child == p)
+			return p->parent;
+
+		for (prev = p->parent->child; prev->sibling != p;
+		     prev = prev->sibling)
+			;
+	}
+
+	/* Caller wants to traverse through siblings only */
+	if (sibling_only)
+		return prev;
+
+	/* Otherwise descend to the deepest, right-most descendant. */
+	while (prev->child) {
+		prev = prev->child;
+
+		while (prev->sibling)
+			prev = prev->sibling;
+	}
+	return prev;
+}
+
 static void *r_next(struct seq_file *m, void *v, loff_t *pos)
 {
 	struct resource *p = v;
@@ -401,6 +433,47 @@ static int find_next_iomem_res(struct resource *res, unsigned long desc,
 	return 0;
 }
 
+/*
+ * Finds the highest iomem resource existing within [res->start, res->end).
+ * The caller must specify res->start, res->end, res->flags, and optionally
+ * desc. If found, returns 0 and res is overwritten; if not found, returns
+ * -1. This function walks the whole tree and not just first-level children
+ * unless first_level_children_only is true.
+ */
+static int find_prev_iomem_res(struct resource *res, unsigned long desc,
+			       bool first_level_children_only)
+{
+	struct resource *p;
+
+	BUG_ON(!res);
+	BUG_ON(res->start >= res->end);
+
+	read_lock(&resource_lock);
+
+	for (p = prev_resource(NULL, first_level_children_only); p;
+	     p = prev_resource(p, first_level_children_only)) {
+		if ((p->flags & res->flags) != res->flags)
+			continue;
+		if ((desc != IORES_DESC_NONE) && (desc != p->desc))
+			continue;
+		if (p->end < res->start || p->child == iomem_resource.child) {
+			p = NULL;
+			break;
+		}
+		if ((p->end >= res->start) && (p->start < res->end))
+			break;
+	}
+
+	read_unlock(&resource_lock);
+	if (!p)
+		return -1;
+	/* copy data */
+	resource_clip(res, p->start, p->end);
+	res->flags = p->flags;
+	res->desc = p->desc;
+	return 0;
+}
+
 static int __walk_iomem_res_desc(struct resource *res, unsigned long desc,
 				 bool first_level_children_only,
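As a usage illustration, the top-down kexec search discussed earlier in the
thread could then be written roughly like this with the proposed helper
(hypothetical callback; the field names follow struct kexec_buf, but this
is not part of the patch):

/* Hypothetical caller: find the highest System RAM area big enough for
 * a kexec image by walking RAM resources from top to down. */
static int locate_hole_top_down(struct resource *res, void *arg)
{
    struct kexec_buf *kbuf = arg;

    if (res->end - res->start + 1 < kbuf->memsz)
        return 0;   /* region too small, keep walking downwards */

    kbuf->mem = ALIGN_DOWN(res->end - kbuf->memsz + 1, kbuf->buf_align);
    return 1;       /* non-zero return stops the walk */
}

/* ... in the caller:
 * ret = walk_system_ram_res_rev(kbuf->buf_min, kbuf->buf_max,
 *                               kbuf, locate_hole_top_down);
 */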
Re: [PATCH v3 1/3] resource: Use list_head to link sibling resource
On Mon, May 07, 2018 at 09:14:29AM +0800, Baoquan He wrote:
>Hi Wei Yang,
>
>On 04/26/18 at 09:18am, Wei Yang wrote:
>> On Thu, Apr 19, 2018 at 08:18:46AM +0800, Baoquan He wrote:
>> >The struct resource uses a singly linked list to link siblings. It's not
>> >easy to do reverse iteration on the sibling list. So replace it with
>> >list_head.
>> >
>>
>> Hi, Baoquan
>>
>> Besides changing the data structure, I have another proposal for doing
>> the reverse iteration, which would not affect other users if you just
>> want a reverse iteration.
>>
>> BTW, I don't think Andrew suggested using a linked list directly. What
>> he wants is a better solution than your first proposal in
>> https://patchwork.kernel.org/patch/10300819/.
>>
>> Below is my proposal for resource reverse iteration without changing
>> the current design.
>
>I got your mail and read it, then was interrupted by other things and
>forgot to reply, sorry.
>
>I am fine with your code change. As I said before, I have tried to change
>the code per reviewers' comments; let the reviewers decide which way is
>better. Please feel free to post formal patches and join the discussion
>about this issue.

Yep. While I don't have a real requirement to add the reverse version,
what is the proper way to send a patch? Is a patch reply to this thread
OK?

>
>Thanks
>Baoquan
>
Re: [PATCH v3 1/3] resource: Use list_head to link sibling resource
On Tue, May 08, 2018 at 08:11:23PM +0800, Baoquan He wrote:
>On 05/08/18 at 08:48pm, Wei Yang wrote:
>> On Mon, May 07, 2018 at 09:14:29AM +0800, Baoquan He wrote:
>> >Hi Wei Yang,
>> >
>> >On 04/26/18 at 09:18am, Wei Yang wrote:
>> >> On Thu, Apr 19, 2018 at 08:18:46AM +0800, Baoquan He wrote:
>> >> >The struct resource uses a singly linked list to link siblings. It's
>> >> >not easy to do reverse iteration on the sibling list. So replace it
>> >> >with list_head.
>> >> >
>> >>
>> >> Hi, Baoquan
>> >>
>> >> Besides changing the data structure, I have another proposal for doing
>> >> the reverse iteration, which would not affect other users if you just
>> >> want a reverse iteration.
>> >>
>> >> BTW, I don't think Andrew suggested using a linked list directly. What
>> >> he wants is a better solution than your first proposal in
>> >> https://patchwork.kernel.org/patch/10300819/.
>> >>
>> >> Below is my proposal for resource reverse iteration without changing
>> >> the current design.
>> >
>> >I got your mail and read it, then was interrupted by other things and
>> >forgot to reply, sorry.
>> >
>> >I am fine with your code change. As I said before, I have tried to change
>> >the code per reviewers' comments; let the reviewers decide which way is
>> >better. Please feel free to post formal patches and join the discussion
>> >about this issue.
>>
>> Yep. While I don't have a real requirement to add the reverse version,
>> what is the proper way to send a patch? Is a patch reply to this thread
>> OK?
>
>I am not sure either. Since my patches are still under review and you
>have pasted your patch, it depends on the maintainers, mainly Andrew and
>the other reviewers who have concerns.

Ok, thanks.

--
Wei Yang
Help you, Help me