On Tuesday, January 17, 2017 8:18 AM, Ingo Molnar wrote:
> Mind sending this as a proper patch, with akpm Cc:-ed?
Not at all, I'll put it together this week.
Thanks,
Lukas
On Thursday, January 5, 2017 8:56 AM, Ingo Molnar wrote:
>
> Good one, queued it up.
Hi Ingo, thanks for picking up the patch.
> When we don't accept the value we should at least inform the user (via a printk
> that includes the 'clearcpuid' token in its message) that we totally ignored wh
On Monday, December 5, 2016 11:25 AM, Peter Zijlstra wrote:
> I'll certainly try. I've queued it as per the below.
Great, thank you!
Lukas
On Tuesday, November 29, 2016 9:33 PM, Liang, Kan wrote:
> Yes, the patch as below fixes the issue on my SLM.
It works for me as well.
Can we still have it in 4.9?
Thanks,
Lukas
On Tuesday, November 29, 2016 6:20 PM Stephane Eranian wrote:
> On Tue, Nov 29, 2016 at 1:25 AM, Peter Zijlstra wrote:
>> How can this happen? IIRC the thing increments, we program a negative
>> value, and when it passes 0 we generate a PMI.
>>
> Yeah, that's the part I don't quite unders
On Tuesday, October 4, 2016 6:26 PM Odzioba, Lukasz wrote:
> Although KNL does support C1, C6, PC2, PC3 and PC6 states, the patch only
> supports C6, PC2, PC3 and PC6 because there is no counter for C1.
> The C6 residency counter MSR on KNL has a different address than on other
> platforms, which is hand
On Saturday, July 23, 2016 1:45 AM, Lukasz Odzioba wrote:
> On the Intel Xeon Phi Knights Landing processor family the channels of the
> memory controller have an atypical arrangement: MC0 is mapped to CH3,4,5
> and MC1 is mapped to CH0,1,2. This causes the EDAC driver to report the
> channel name incorrectly.
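(For illustration only, a hedged sketch of the remapping described above; the
helper name is hypothetical, not the actual sb_edac code:)

/*
 * KNL channel swap as described in the commit message: MC0's local
 * channels 0-2 show up as CH3-5 on the platform, while MC1's local
 * channels 0-2 show up as CH0-2.
 */
static int knl_report_channel(int mc, int chan)
{
	return mc == 0 ? chan + 3 : chan;
}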
On Monday, July 18, 2016 5:31 PM, Konrad Rzeszutek Wilk wrote:
> We found that your patch in the automated Xen test-case ends up
> OOMing the box when trying to install guests. This worked prior
> to your patch.
>
> See serial log:
> http://logs.test-lab.xenproject.org/osstest/logs/97597/test-amd64
On Tuesday, June 21, 2016 11:38 AM, Peter Zijlstra wrote:
> Yes, that is the intent, but how is this achieved? I'm not sure I see
> how the patch ensures this.
If you are confused, then it is likely that I did something wrong here.
Let me explain myself.
We already have a mechanism to create stati
On 08.06.2016 Peter Zijlstra wrote:
> How does this work in the light of intel_alt_er() ?
Hi Peter,
If the constrained bit is valid on only one of the OCR MSRs (as in the case of
KNL), then the OCR valid mask will forbid altering it via the other MSR in
intel_alt_er.
If the constrained bit is valid on bo
On Thu 16-06-16 08:19 PM, Michal Hocko wrote:
>
> On Thu 16-06-16 18:08:57, Odzioba, Lukasz wrote:
> I am not able to find clear reasons why we shouldn't do it for the rest.
> Ok so what do we do now? I'll send v2 with proposed changes.
> Then do we still want to have s
On Thu 09-06-16 02:22 PM, Michal Hocko wrote:
> I agree it would be better to do the same for others as well. Even if
> this is not an immediate problem for those.
I am not able to find clear reasons why we shouldn't do it for the rest.
Ok so what do we do now? I'll send v2 with proposed changes.
On 09-06-16 17:42:00, Dave Hansen wrote:
> Does your workload put large pages in and out of those pvecs, though?
> If your system doesn't have any activity, then all we've shown is that
> they're not a problem when not in use. But what about when we use them?
It doesn't. To use them extensively I
On 08-06-16 17:31:00, Dave Hansen wrote:
> Do we have any statistics that tell us how many pages are sitting in the
> lru pvecs? Although this helps the problem overall, don't we still have
> a problem with memory being held in such an opaque place?
From what I observed, the problem is mainly with l
On Wed 08-07-16 17:04:00, Michal Hocko wrote:
> I do not see how a SIGTERM would make any difference. But see below.
This is how we encountered this problem initially: by hitting Ctrl-C while
running a parallel memory-intensive workload, which ended up
not calling munmap on the allocated memory.
> Is th
On Tue 07-06-16 13:20:00, Michal Hocko wrote:
> I guess you want something like posix_memalign or start faulting in from
> an aligned address to guarantee you will fault 2MB pages.
Good catch.
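(For illustration, a minimal userspace sketch of the aligned-allocation idea;
assumes THP is enabled, and the sizes are arbitrary:)

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define HPAGE_SIZE (2UL * 1024 * 1024)

int main(void)
{
	size_t len = 64 * HPAGE_SIZE;	/* arbitrary: 128 MB */
	void *buf;

	/* 2MB-aligned allocation so faults can be backed by 2MB (THP) pages */
	if (posix_memalign(&buf, HPAGE_SIZE, len) != 0) {
		perror("posix_memalign");
		return 1;
	}
	memset(buf, 0, len);		/* fault the whole range in */
	printf("allocated and touched %zu bytes at %p\n", len, buf);
	free(buf);
	return 0;
}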
> Besides that I am really suspicious that this will be measurable at all.
> I would just go and spin a
On Wed 05-11-16 09:53:00, Michal Hocko wrote:
> Yes I think this makes sense. The only case where it would be suboptimal
> is when the pagevec was already full and then we just created a single
> page pvec to drain it. This can be handled better though by:
>
> diff --git a/mm/swap.c b/mm/swap.c
> i
On Thu 05-05-16 09:21:00, Michal Hocko wrote:
> Or maybe the async nature of flushing turns
> out to be just impractical and unreliable and we will end up skipping
> THP (or all compound pages) for pcp LRU add cache. Let's see...
What if we simply skip lru_add pvecs for compound pages?
That way w
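(For illustration, a hedged sketch of one way to get that effect: drain the
per-CPU lru_add pagevec as soon as a compound page is added, so THPs cannot
linger there. This is illustrative and not necessarily the exact patch that
was eventually submitted.)

#include <linux/mm.h>
#include <linux/pagevec.h>
#include <linux/percpu.h>
#include <linux/swap.h>

/* lru_add_pvec here refers to the existing per-CPU pagevec in mm/swap.c */
static void __lru_cache_add(struct page *page)
{
	struct pagevec *pvec = &get_cpu_var(lru_add_pvec);

	get_page(page);
	/* drain immediately when the pvec is full or the page is compound */
	if (!pagevec_add(pvec, page) || PageCompound(page))
		__pagevec_lru_add(pvec);
	put_cpu_var(lru_add_pvec);
}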
On Thu 05-05-16 09:21:00, Michal Hocko wrote:
> OK, it wasn't that tricky afterall. Maybe I have missed something but
> the following should work. Or maybe the async nature of flushing turns
> out to be just impractical and unreliable and we will end up skipping
> THP (or all compound pages) for p
On Thu 02-05-16 03:00:00, Michal Hocko wrote:
> So I have given this a try (not tested yet) and it doesn't look terribly
> complicated. It is hijacking vmstat for a purpose it wasn't intended for
> originally, but creating dedicated kernel threads/WQ sounds like an
> overkill to me. Does this hel
Hi,
I encountered a problem which I'd like to discuss here (tested on 3.10 and 4.5).
While running some workloads we noticed that in the case of an "improper"
application exit (like SIGTERM) quite a bit (a few GBs) of memory is not being
reclaimed after process termination.
Executing echo 1 > /proc/sys/vm
On Wednesday, October 14, 2015 at 4:04 PM, Guenter Roeck wrote:
> That is just in the comment. The actual limit is still 128.
Ok, sure. Thank you for your help in driving this change upstream.
Thanks,
Lukas
On Wednesday, October 14, 2015 at 3:17 AM, Guenter Roeck wrote:
> Applied, after fixing up the subject and listing the current required limit
> of 72 cores for Xeon Phi (per published information).
Guenter, sorry for the inconvenience. I forgot that core enumeration on KNL
is not continuous, so some c
On Wednesday, October 14, 2015 at 12:26 AM, Guenter Roeck wrote:
> Pardon my ignorance ... those are Xeon Phi processors, and support up to
> 244 threads (for Knights Corner). Programming datasheet isn't easily available,
> so I have to guess a bit. Following the processor numbering scheme of
>
On Tuesday, October 12, 2015 at 10:32 PM, Guenter Roeck wrote:
> Why 128 instead of a more reasonable 64 ? What is the required minimum
> for Xeon Phi ?
It would be fine today, but it will not be enough in 2016, and we would like to
give GNU/Linux distributions some time to propagate this patch.
Fo
On 09/11/2015 04:28 AM, Guenter Roeck wrote:
> You can return NULL but never check for this condition in the calling code.
> The only time you check in the calling code is when you want to know
> if pdata->core_data[index] is NULL, which is distinctly different.
> As such, this check does not reall
On Friday, July 17, 2015 8:02 PM Guenter Roeck wrote:
> Please explain why krealloc() won't work, why using krealloc() would
> result in a larger memory footprint than using lists, and why disabling
> CPUs would require any action in the first place.
It will work, but it can use more memory for c
On Friday, July 17, 2015 6:55 PM Guenter Roeck wrote:
> You don't really explain why your approach would be better than
> allocating an array of pointers to struct temp_data and increasing
> its size using krealloc if needed.
Let's consider two cas
On Wednesday, July 15, 2015 11:08 PM Jean Delvare wrote:
> I see the benefit of removing the arbitrary limit, but why use a list
> instead of a dynamically allocated array? This is turning a O(1)
> algorithm into a O(n) algorithm. I know n isn't too large in this case
> but I still consider it bad
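(For reference, a hedged sketch of the array-of-pointers alternative that
Guenter describes above, grown with krealloc() so indexed lookup stays O(1);
the struct layout and helper name are hypothetical, not the actual coretemp
code.)

#include <linux/errno.h>
#include <linux/slab.h>
#include <linux/string.h>

struct temp_data;			/* per-core sensor data, opaque here */

struct core_array {
	struct temp_data **slots;	/* NULL for unused indices */
	unsigned int nr_slots;
};

/* Make sure 'index' is addressable, doubling the array as needed. */
static int core_array_ensure(struct core_array *arr, unsigned int index)
{
	unsigned int want = arr->nr_slots ? arr->nr_slots : 16;
	struct temp_data **slots;

	if (index < arr->nr_slots)
		return 0;
	while (want <= index)
		want *= 2;

	slots = krealloc(arr->slots, want * sizeof(*slots), GFP_KERNEL);
	if (!slots)
		return -ENOMEM;

	/* krealloc() does not zero the new tail; clear it explicitly */
	memset(slots + arr->nr_slots, 0,
	       (want - arr->nr_slots) * sizeof(*slots));

	arr->slots = slots;
	arr->nr_slots = want;
	return 0;
}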