>>> --- a/xen/common/page_alloc.c
>>> +++ b/xen/common/page_alloc.c
>>> @@ -694,6 +694,31 @@ static void page_list_add_scrub(struct page_info *pg, unsigned int node,
>>> page_list_add(pg, &heap(node, zone, order));
>>> }
>>>
>>> +#define SCRUB_BYTE_PATTERN 0xc2c2c2c2c2c2c2c2ULL
On Fri, May 05, 2017 at 09:05:54AM -0600, Jan Beulich wrote:
>>> On 14.04.17 at 17:37, wrote:
> --- a/xen/Kconfig.debug
> +++ b/xen/Kconfig.debug
> @@ -114,6 +114,13 @@ config DEVICE_TREE_DEBUG
> logged in the Xen ring buffer.
> If unsure, say N here.
>
> +config SCRUB_DEBUG
> +	bool "Page scrubbing test"
> +	default DEBUG
> +	---help---
Add a debug Kconfig option that makes the page allocator verify
that pages that were supposed to be scrubbed are, in fact, clean.
Signed-off-by: Boris Ostrovsky
---
 xen/Kconfig.debug       |  7 ++
 xen/common/page_alloc.c | 49 ++-
 2 files changed