RE: [PATCH 1/1] x86: sanitize argument of clearcpuid command-line option

2017-01-18 Thread Odzioba, Lukasz
On Tuesday, January 17, 2017 8:18 AM, Ingo Molnar wrote: > Mind sending this as a proper patch, with akpm Cc:-ed? Not at all, I'll put it together this week. Thanks, Lukas

RE: [PATCH 1/1] x86: sanitize argument of clearcpuid command-line option

2017-01-16 Thread Odzioba, Lukasz
On Thursday, January 5, 2017 8:56 AM, Ingo Molnar wrote: > > Good one, queued it up. Hi Ingo, thanks for picking up the patch. > When we don't accept the value we should at least inform the user (via a printk that includes the 'clearcpuid' token in its message) that we totally ignored wh
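
For illustration, a minimal sketch of the sanitization being discussed: parse the bit number, reject anything outside the cpufeature bitmap, and warn instead of silently dropping the value, as Ingo asked. The shape follows the x86 setup_disablecpuid()/__setup() handler as I recall it; treat it as a sketch, not the merged patch.

#include <linux/init.h>
#include <linux/kernel.h>
#include <asm/cpufeature.h>

/*
 * Illustrative sketch only: accept "clearcpuid=<bit>" when the bit
 * index fits the cpufeature bitmap, and complain otherwise so the
 * user learns the value was ignored.
 */
static int __init setup_disablecpuid(char *arg)
{
	int bit;

	if (get_option(&arg, &bit) && bit >= 0 && bit < NCAPINTS * 32)
		setup_clear_cpu_cap(bit);
	else
		pr_err("clearcpuid: invalid argument, value ignored\n");

	return 1;
}
__setup("clearcpuid=", setup_disablecpuid);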

RE: [PATCH] perf/x86: fix event counter update issue

2016-12-05 Thread Odzioba, Lukasz
On Monday, December 5, 2016 11:25 AM, Peter Zijlstra wrote: > I'll certainly try. I've queued it as per the below. Great, thank you! Lukas

RE: [PATCH] perf/x86: fix event counter update issue

2016-12-02 Thread Odzioba, Lukasz
On Tuesday, November 29, 2016 9:33 PM, Liang, Kan wrote: > Yes, the patch as below fixes the issue on my SLM. It works for me as well. Can we still have it in 4.9? Thanks, Lukas

RE: [PATCH] perf/x86: fix event counter update issue

2016-11-29 Thread Odzioba, Lukasz
On Tuesday, November 29, 2016 6:20 PM, Stephane Eranian wrote: >> On Tue, Nov 29, 2016 at 1:25 AM, Peter Zijlstra wrote: >> How can this happen? IIRC the thing increments, we program a negative value, and when it passes 0 we generate a PMI. > Yeah, that's the part I don't quite unders
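
For context on Peter's description (program a negative value, take a PMI when the counter crosses zero), the usual update path sign-extends the raw counter to its hardware width and accumulates the delta since the previous read. A simplified sketch of that baseline logic, from memory of x86_perf_event_update() of that era; it is not the fix being discussed.

#include <linux/types.h>

/*
 * Sketch: read two samples of the free-running hardware counter,
 * sign-extend both to the counter width (cntval_bits), and add the
 * difference to the 64-bit software count. The thread is about this
 * path going wrong in certain cases; this shows only the baseline.
 */
static u64 update_event_count(u64 prev_raw, u64 new_raw, int cntval_bits,
			      u64 count)
{
	int shift = 64 - cntval_bits;
	s64 delta;

	delta = (new_raw << shift) - (prev_raw << shift);
	delta >>= shift;	/* arithmetic shift: sign-extend */

	return count + delta;
}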

RE: [PATCH 1/1] x86/perf/intel/cstate: add C-state residency events for Knights Landing

2016-10-19 Thread Odzioba, Lukasz
On Tuesday, October 4, 2016 6:26 PM, Odzioba, Lukasz wrote: > Although KNL does support C1, C6, PC2, PC3 and PC6 states, the patch only supports C6, PC2, PC3 and PC6, because there is no counter for C1. > The C6 residency counter MSR on KNL has a different address than on other platforms, which is hand
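
A sketch of what the different MSR address implies for the driver: repoint the core C6 residency event at a KNL-specific MSR during init. The struct and hook below are invented for illustration; only the MSR names (MSR_CORE_C6_RESIDENCY and the KNL-specific MSR_KNL_CORE_C6_RESIDENCY this patch introduces) follow msr-index.h as I recall it.

#include <linux/init.h>
#include <asm/msr-index.h>

/* Illustrative only: the real driver wires this up via its model tables. */
struct cstate_event {
	const char *name;
	u64 msr;
};

static struct cstate_event core_c6 = {
	.name = "c6-residency",
	.msr  = MSR_CORE_C6_RESIDENCY,	/* address used by other platforms */
};

static void __init knl_cstate_quirk(void)
{
	/* KNL exposes core C6 residency at a different MSR address */
	core_c6.msr = MSR_KNL_CORE_C6_RESIDENCY;
}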

RE: [PATCH 1/1] EDAC, sb_edac: Fix channel reporting on Knights Landing

2016-07-28 Thread Odzioba, Lukasz
On Saturday, July 23, 2016 1:45 AM, Lukasz Odzioba wrote: > On the Intel Xeon Phi Knights Landing processor family the memory controller channels have an atypical arrangement - MC0 is mapped to CH3,4,5 and MC1 is mapped to CH0,1,2. This causes the EDAC driver to report the channel name incorrectly.
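
The described mapping can be expressed as a one-line remap when composing the channel name; a sketch assuming three channels per memory controller, illustrating the arithmetic rather than the sb_edac change itself.

/*
 * Illustrative helper: convert (memory controller, channel-within-MC)
 * into the globally numbered channel on KNL, given that MC0 owns
 * CH3..5 and MC1 owns CH0..2 (three channels per MC assumed).
 */
static inline int knl_channel_remap(int mc, int chan)
{
	return mc ? chan : chan + 3;
}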

RE: Revert c5ad33184354260be6d05de57e46a5498692f6d6 "mm/swap.c: flush lru pvecs on compound page arrival" from stable tree? Was:[osstest-ad...@xenproject.org: [Xen-devel] [linux-4.1 bisection] complet

2016-07-18 Thread Odzioba, Lukasz
On Monday, July 18, 2016 5:31 PM, Konrad Rzeszutek Wilk wrote: > We found that your patch in the automated Xen test-case ends up OOMing the box when trying to install guests. This worked prior to your patch. > See serial log: http://logs.test-lab.xenproject.org/osstest/logs/97597/test-amd64

RE: [PATCH 1/1] perf/x86/intel: Add extended event constraints for Knights Landing

2016-06-24 Thread Odzioba, Lukasz
On Tuesday, June 21, 2016 11:38 AM, Peter Zijlstra wrote: > Yes, that is the intent, but how is this achieved? I'm not sure I see how the patch ensures this. If you are confused, then it is likely that I did something wrong here. Let me explain myself. We already have a mechanism to create stati

RE: [PATCH 1/1] perf/x86/intel: Add extended event constraints for Knights Landing

2016-06-20 Thread Odzioba, Lukasz
On 08.06.2016 Peter Zijlstra wrote: > How does this work in the light of intel_alt_er()? Hi Peter, if the constrained bit is valid on only one of the OCR MSRs (as in the case of KNL), then the OCR valid mask forbids moving it to the other MSR in intel_alt_er(). If the constrained bit is valid on bo
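
To illustrate the argument: intel_alt_er() only retargets an offcore-response event to the alternate OCR MSR when the whole requested config fits that MSR's valid mask, so a bit valid on just one of them pins the event. A simplified sketch from memory of the era's code (the real function also gates on a PMU feature flag); treat it as illustrative.

/*
 * Simplified sketch: an OCR event may be moved between
 * MSR_OFFCORE_RSP_0 and MSR_OFFCORE_RSP_1 to ease scheduling, but
 * only if every requested bit is also valid on the alternate
 * register. A bit valid on a single OCR MSR therefore pins the
 * event to that MSR.
 */
static int intel_alt_er(int idx, u64 config)
{
	int alt_idx;

	if (idx == EXTRA_REG_RSP_0)
		alt_idx = EXTRA_REG_RSP_1;
	else if (idx == EXTRA_REG_RSP_1)
		alt_idx = EXTRA_REG_RSP_0;
	else
		return idx;

	if (config & ~x86_pmu.extra_regs[alt_idx].valid_mask)
		return idx;	/* some bit is not valid there: do not move */

	return alt_idx;
}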

RE: [PATCH 1/1] mm/swap.c: flush lru_add pvecs on compound page arrival

2016-06-16 Thread Odzioba, Lukasz
On Thu 16-06-16 08:19 PM, Michal Hocko wrote: > > On Thu 16-06-16 18:08:57, Odzioba, Lukasz wrote: > I am not able to find clear reasons why we shouldn't do it for the rest. > Ok so what do we do now? I'll send v2 with proposed changes. > Then do we still want to have s

RE: [PATCH 1/1] mm/swap.c: flush lru_add pvecs on compound page arrival

2016-06-16 Thread Odzioba, Lukasz
On Thu 09-06-16 02:22 PM, Michal Hocko wrote: > I agree it would be better to do the same for the others as well, even if this is not an immediate problem for those. I am not able to find clear reasons why we shouldn't do it for the rest. OK, so what do we do now? I'll send v2 with the proposed changes.

RE: [PATCH 1/1] mm/swap.c: flush lru_add pvecs on compound page arrival

2016-06-13 Thread Odzioba, Lukasz
On 09-06-16 17:42:00, Dave Hansen wrote: > Does your workload put large pages in and out of those pvecs, though? > If your system doesn't have any activity, then all we've shown is that they're not a problem when not in use. But what about when we use them? It doesn't. To use them extensively I

RE: [PATCH 1/1] mm/swap.c: flush lru_add pvecs on compound page arrival

2016-06-09 Thread Odzioba, Lukasz
On 08-06-16 17:31:00, Dave Hansen wrote: > Do we have any statistics that tell us how many pages are sitting in the lru pvecs? > Although this helps the problem overall, don't we still have a problem with memory being held in such an opaque place? From what I observed the problem is mainly with l
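
On the statistics question, the per-CPU pagevec of that era can at least be summed for debugging; a hypothetical helper, assuming it sits in mm/swap.c next to the static lru_add_pvec definition of 4.x kernels (the function name is made up).

#include <linux/pagevec.h>
#include <linux/percpu.h>
#include <linux/cpumask.h>

/*
 * Hypothetical debugging helper: count the entries currently parked
 * in each CPU's lru_add pagevec. A pagevec holds at most
 * PAGEVEC_SIZE page pointers, but each entry may be a compound page,
 * which is why a handful of entries can hide gigabytes on
 * huge-page workloads.
 */
static unsigned long lru_add_pvec_entries(void)
{
	unsigned long entries = 0;
	int cpu;

	for_each_online_cpu(cpu)
		entries += pagevec_count(&per_cpu(lru_add_pvec, cpu));

	return entries;
}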

RE: [PATCH 1/1] mm/swap.c: flush lru_add pvecs on compound page arrival

2016-06-09 Thread Odzioba, Lukasz
On Wed 08-07-16 17:04:00, Michal Hocko wrote: > I do not see how a SIGTERM would make any difference. But see below. This is how we encountered this problem initially: by hitting ctrl-c while running a parallel memory-intensive workload, which ended up not calling munmap on the allocated memory. > Is th

RE: mm: pages are not freed from lru_add_pvecs after process termination

2016-06-08 Thread Odzioba, Lukasz
On Tue 07-06-16 13:20:00, Michal Hocko wrote: > I guess you want something like posix_memalign or start faulting in from an aligned address to guarantee you will fault in 2MB pages. Good catch. > Besides that I am really suspicious that this will be measurable at all. > I would just go and spin a
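
A minimal userspace sketch of the posix_memalign suggestion: allocate 2MB-aligned anonymous memory and touch it so THP (if enabled system-wide) can back the faults with huge pages. Purely illustrative.

#include <stdlib.h>
#include <string.h>
#include <stdio.h>

#define HPAGE_SIZE (2UL * 1024 * 1024)

int main(void)
{
	size_t len = 64 * HPAGE_SIZE;	/* 128 MB of 2MB-aligned memory */
	void *buf;
	int ret;

	/* 2MB alignment lets THP faults back the range with huge pages
	 * from the very first touch. */
	ret = posix_memalign(&buf, HPAGE_SIZE, len);
	if (ret) {
		fprintf(stderr, "posix_memalign: %s\n", strerror(ret));
		return 1;
	}

	memset(buf, 0, len);	/* write every page to fault it in */
	return 0;
}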

RE: mm: pages are not freed from lru_add_pvecs after process termination

2016-06-07 Thread Odzioba, Lukasz
On Wed 05-11-16 09:53:00, Michal Hocko wrote: > Yes, I think this makes sense. The only case where it would be suboptimal is when the pagevec was already full and then we just created a single-page pvec to drain it. This can be handled better, though, by: > diff --git a/mm/swap.c b/mm/swap.c > i

RE: mm: pages are not freed from lru_add_pvecs after process termination

2016-05-13 Thread Odzioba, Lukasz
On Wed 05-11-16 09:53:00, Michal Hocko wrote: > Yes, I think this makes sense. The only case where it would be suboptimal is when the pagevec was already full and then we just created a single-page pvec to drain it. This can be handled better, though, by: > diff --git a/mm/swap.c b/mm/swap.c >

RE: mm: pages are not freed from lru_add_pvecs after process termination

2016-05-06 Thread Odzioba, Lukasz
On Thu 05-05-16 09:21:00, Michal Hocko wrote: > Or maybe the async nature of flushing turns out to be just impractical and unreliable and we will end up skipping THP (or all compound pages) for the pcp LRU add cache. Let's see... What if we simply skip the lru_add pvecs for compound pages? That way w
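
For reference, the direction this eventually took (see the "flush lru_add pvecs on compound page arrival" subject above) was to flush the pagevec as soon as a compound page is added rather than to bypass it entirely; roughly as below, from memory, so treat this as a sketch rather than a verbatim quote of the merged patch.

#include <linux/mm.h>
#include <linux/pagevec.h>
#include <linux/percpu.h>

/*
 * Sketch of mm/swap.c::__lru_cache_add() with the compound-page
 * flush: a THP added to the per-CPU pagevec forces an immediate
 * drain, so a few cached entries can no longer pin gigabytes.
 */
static void __lru_cache_add(struct page *page)
{
	struct pagevec *pvec = &get_cpu_var(lru_add_pvec);

	get_page(page);
	if (!pagevec_add(pvec, page) || PageCompound(page))
		__pagevec_lru_add(pvec);
	put_cpu_var(lru_add_pvec);
}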

RE: mm: pages are not freed from lru_add_pvecs after process termination

2016-05-05 Thread Odzioba, Lukasz
On Thu 05-05-16 09:21:00, Michal Hocko wrote: > OK, it wasn't that tricky after all. Maybe I have missed something but the following should work. Or maybe the async nature of flushing turns out to be just impractical and unreliable and we will end up skipping THP (or all compound pages) for p

RE: mm: pages are not freed from lru_add_pvecs after process termination

2016-05-04 Thread Odzioba, Lukasz
On Thu 02-05-16 03:00:00, Michal Hocko wrote: > So I have given this a try (not tested yet) and it doesn't look terribly complicated. It is hijacking vmstat for a purpose it wasn't originally intended for, but creating a dedicated kernel thread/WQ sounds like an overkill to me. Does this hel
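
As a purely hypothetical illustration of the "hijack vmstat" idea (Michal's actual diff is not reproduced here), the existing per-CPU vmstat worker could also drain the local LRU pagevecs while it runs.

#include <linux/swap.h>		/* lru_add_drain() */
#include <linux/workqueue.h>

/*
 * Hypothetical shape only: let the already-existing deferrable
 * per-CPU vmstat work also drain the local LRU pagevecs, so no
 * dedicated kernel thread or workqueue is needed. Requeue handling
 * is omitted.
 */
static void vmstat_update(struct work_struct *w)
{
	refresh_cpu_vm_stats(true);	/* existing vmstat bookkeeping */
	lru_add_drain();		/* hypothetical addition: drain this CPU's pvecs */
}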

mm: pages are not freed from lru_add_pvecs after process termination

2016-04-27 Thread Odzioba, Lukasz
Hi, I encountered a problem which I'd like to discuss here (tested on 3.10 and 4.5). While running some workloads we noticed that in the case of an "improper" application exit (like SIGTERM) quite a bit (a few GBs) of memory is not reclaimed after process termination. Executing echo 1 > /proc/sys/vm

RE: [PATCH 1/1] Bumps limit of maximum core ID from 32 to 128.

2015-10-15 Thread Odzioba, Lukasz
On Wednesday, October 14, 2015 at 4:04 PM, Guenter Roeck wrote: > That is just in the comment. The actual limit is still 128. OK, sure. Thank you for your help in driving this change upstream. Thanks, Lukas

RE: [PATCH 1/1] Bumps limit of maximum core ID from 32 to 128.

2015-10-14 Thread Odzioba, Lukasz
On Wednesday, October 14, 2015 at 3:17 AM, Guenter Roeck wrote: > Applied, after fixing up the subject and listing the current required limit of 72 cores for Xeon Phi (per published information). Guenter, sorry for the inconvenience, I forgot that core enumeration on KNL is not contiguous, so some c

RE: [PATCH 1/1] Bumps limit of maximum core ID from 32 to 128.

2015-10-13 Thread Odzioba, Lukasz
On Wednesday, October 14, 2015 at 12:26 AM, Guenter Roeck wrote: > Pardon my ignorance ... those are Xeon Phi processors, and support up to 244 threads (for Knights Corner). The programming datasheet isn't easily available, so I have to guess a bit. Following the processor numbering scheme of

RE: [PATCH 1/1] Bumps limit of maximum core ID from 32 to 128.

2015-10-13 Thread Odzioba, Lukasz
On Tuesday, October 12, 2015 at 10:32 PM, Guenter Roeck wrote: > Why 128 instead of a more reasonable 64? What is the required minimum for Xeon Phi? It would be fine today, but it will not be enough in 2016, and we would like to give GNU/Linux distributions some time to propagate this patch. Fo

RE: [PATCH 1/1] hwmon: coretemp: use dynamically allocated array to store per core data

2015-09-18 Thread Odzioba, Lukasz
On 09/11/2015 04:28 AM, Guenter Roeck wrote: > You can return NULL but never check for this condition in the calling code. > The only time you check in the calling code is when you want to know if pdata->core_data[index] is NULL, which is distinctly different. > As such, this check does not reall

RE: [PATCH] hwmon: coretemp: use list instead of fixed size array for temp data

2015-07-17 Thread Odzioba, Lukasz
On Friday, July 17, 2015 8:02 PM, Guenter Roeck wrote: > Please explain why krealloc() won't work, why using krealloc() would result in a larger memory footprint than using lists, and why disabling CPUs would require any action in the first place. It will work, but it can use more memory for c

RE: [PATCH] hwmon: coretemp: use list instead of fixed size array for temp data

2015-07-17 Thread Odzioba, Lukasz
On Friday, July 17, 2015 6:55 PM, Guenter Roeck wrote: > You don't really explain why your approach would be better than allocating an array of pointers to struct temp_data and increasing its size using krealloc if needed. Let's consider two cas

RE: [PATCH] hwmon: coretemp: use list instead of fixed size array for temp data

2015-07-16 Thread Odzioba, Lukasz
On Wednesday, July 15, 2015 11:08 PM, Jean Delvare wrote: > I see the benefit of removing the arbitrary limit, but why use a list instead of a dynamically allocated array? This is turning an O(1) algorithm into an O(n) algorithm. I know n isn't too large in this case but I still consider it bad
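
To make the trade-off concrete, a sketch of the two shapes being compared for per-core temperature data: a krealloc()-grown pointer array gives O(1) lookup by core index at the cost of occasional resizing, while a list avoids resizing but turns every lookup into an O(n) walk. struct temp_data and core_data follow the coretemp discussion, but the code itself is illustrative only.

#include <linux/slab.h>
#include <linux/list.h>
#include <linux/string.h>

struct temp_data {
	int core_id;
	struct list_head list;		/* used only by the list variant */
};

/* Array variant: O(1) lookup by index, occasional krealloc() to grow. */
static struct temp_data **grow_core_array(struct temp_data **core_data,
					  unsigned int old_n, unsigned int new_n)
{
	struct temp_data **tmp;

	tmp = krealloc(core_data, new_n * sizeof(*tmp), GFP_KERNEL);
	if (!tmp)
		return NULL;		/* old array is still valid */
	memset(tmp + old_n, 0, (new_n - old_n) * sizeof(*tmp));
	return tmp;
}

/* List variant: no resizing, but every lookup scans the nodes, O(n). */
static struct temp_data *find_core(struct list_head *cores, int core_id)
{
	struct temp_data *t;

	list_for_each_entry(t, cores, list)
		if (t->core_id == core_id)
			return t;
	return NULL;
}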