On 07/09/2015 12:47 AM, Zhao Qiang wrote:
Byte alignment is required to manage some special RAM,
so add a gen_pool_alloc_align() function to genalloc.
Rename gen_pool_alloc() to gen_pool_alloc_align() with an align parameter,
then provide gen_pool_alloc() as a wrapper that calls gen_pool_alloc_align()
with align = 1 byte.
Signed-off-by: Zhao Qiang
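A minimal sketch of the wrapper relationship described above, assuming the signatures given in the commit message (illustrative only, not the final upstream API):

unsigned long gen_pool_alloc(struct gen_pool *pool, size_t size)
{
	/* Plain allocations keep their old behaviour: 1-byte alignment. */
	return gen_pool_alloc_align(pool, size, 1);
}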
On 07/09/2015 03:17 PM, Scott Wood wrote:
On Thu, 2015-07-09 at 14:51 -0700, Laura Abbott wrote:
On 07/09/2015 12:47 AM, Zhao Qiang wrote:
Byte alignment is required to manage some special RAM,
so add a gen_pool_alloc_align() function to genalloc.
Rename gen_pool_alloc() to gen_pool_alloc_align() with an
On 07/12/2015 07:22 PM, Zhao Qiang wrote:
-----Original Message-----
From: Laura Abbott [mailto:labb...@redhat.com]
Sent: Friday, July 10, 2015 5:51 AM
To: Zhao Qiang-B45475; lau...@codeaurora.org
Cc: linux-ker...@vger.kernel.org; linuxppc-dev@lists.ozlabs.org;
a...@linux-foundation.org; o
Cc: Fenghua Yu
Cc: Benjamin Herrenschmidt
Cc: Paul Mackerras
Signed-off-by: Laura Abbott
---
arch/arm/Kconfig | 1 +
arch/arm/include/asm/Kbuild| 1 +
arch/arm/include/asm/scatterlist.h | 12
arch/arm64/Kconfig | 1 +
arch/ia64
With the Acked-bys, can we take this through your tree
as suggested by Will?
Thanks,
Laura
[1] http://lists.infradead.org/pipermail/linux-arm-kernel/2014-March/240435.html
Laura Abbott (2):
lib/scatterlist: Make ARCH_HAS_SG_CHAIN an actual Kconfig
Cleanup useless architecture versions of scatterlist.h
Cc: Fenghua Yu
Cc: Benjamin Herrenschmidt
Cc: Paul Mackerras
Cc: Ingo Molnar
Cc: "H. Peter Anvin"
Cc: "James E.J. Bottomley"
Cc: Fenghua Yu
Cc: Tony Luck
Cc: Benjamin Herrenschmidt
Cc: Paul Mackerras
Cc: Martin Schwidefsky
Cc: Heiko Carstens
Cc: Andrew Morton
Signed-
On 08/24/2015 02:31 AM, Zhao Qiang wrote:
Byte alignment is required to manage some special RAM,
so add gen_pool_first_fit_align() to genalloc;
also add gen_pool_alloc_data() to pass data to
gen_pool_first_fit_align() (and modify gen_pool_alloc() to be a wrapper).
Signed-off-by: Zhao Qiang
---
Changes for
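A hypothetical usage sketch for the interface described above. The structure layout and the (pool, size, data) signature of gen_pool_alloc_data() are assumptions based on the commit message, not the exact patch contents:

struct genpool_data_align {
	int align;	/* requested alignment, in bytes */
};

static unsigned long alloc_aligned(struct gen_pool *pool, size_t size)
{
	struct genpool_data_align data = { .align = 64 };

	/* The data pointer is handed through to gen_pool_first_fit_align(). */
	return gen_pool_alloc_data(pool, size, &data);
}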
On 08/24/2015 02:31 AM, Zhao Qiang wrote:
diff --git a/drivers/soc/fsl/qe/qe_common.c b/drivers/soc/fsl/qe/qe_common.c
new file mode 100644
index 0000000..7f1762c
--- /dev/null
+++ b/drivers/soc/fsl/qe/qe_common.c
@@ -0,0 +1,193 @@
+/*
+ * Common QE code
+ *
+ * Author: Scott Wood
+ *
+ * Copyr
On 08/24/2015 07:40 PM, Zhao Qiang wrote:
On 08/25/2015 07:11 AM, Laura Abbott wrote:
-----Original Message-----
From: Laura Abbott [mailto:labb...@redhat.com]
Sent: Tuesday, August 25, 2015 7:11 AM
To: Zhao Qiang-B45475; Wood Scott-B07421
Cc: linux-ker...@vger.kernel.org; linuxppc-dev
On 08/24/2015 08:03 PM, Zhao Qiang wrote:
-----Original Message-----
From: Laura Abbott [mailto:labb...@redhat.com]
Sent: Tuesday, August 25, 2015 7:32 AM
To: Zhao Qiang-B45475; Wood Scott-B07421
Cc: linux-ker...@vger.kernel.org; linuxppc-dev@lists.ozlabs.org;
lau...@codeaurora.org; Xie Xiaobo
: Laura Abbott
On 09/28/2015 07:09 PM, Zhao Qiang wrote:
Add a new algorithm to genalloc that reserves a specific region of
memory matching the size requirement (no alignment constraint).
Signed-off-by: Zhao Qiang
Reviewed-by: Laura Abbott
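An illustrative sketch of reserving a fixed region with such an algorithm. The names mirror what genalloc provides today (gen_pool_alloc_algo(), gen_pool_fixed_alloc(), struct genpool_data_fixed); treat the exact signatures as assumptions rather than the contents of this patch:

struct genpool_data_fixed {
	unsigned long offset;	/* byte offset into the pool to reserve */
};

static unsigned long reserve_region(struct gen_pool *pool, size_t size)
{
	struct genpool_data_fixed data = { .offset = 0x1000 };

	/* Ask for 'size' bytes starting exactly at 'offset' within the pool. */
	return gen_pool_alloc_algo(pool, size, gen_pool_fixed_alloc, &data);
}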
Hi,
We received a report (https://bugzilla.redhat.com/show_bug.cgi?id=1267395) of
bad assembly
when compiling on powerpc with little endian
[labbott@labbott-redhat-machine linux_upstream]$ make ARCH=powerpc
CROSS_COMPILE=powerpc64-linux-gnu-
CHK include/config/kernel.release
CHK in
On 10/02/2015 03:00 PM, Segher Boessenkool wrote:
On Sat, Oct 03, 2015 at 12:37:35AM +0300, Denis Kirjanov wrote:
-0: tlbie r4; \
+0: tlbie r4, 0; \
This isn't correct. With POWER7 and later (which this compile
is, since it's on
On 10/03/2015 05:00 PM, Segher Boessenkool wrote:
On Fri, Oct 02, 2015 at 09:24:46PM -0500, Peter Bergner wrote:
Ok, than we can just zero out r5 for example and use it in tlbie as RS,
right?
That won't assemble _unless_ your assembler is in POWER7 mode. It also
won't do the right thing at ru
On 10/05/2015 08:35 PM, Michael Ellerman wrote:
On Fri, 2015-10-02 at 08:43 -0700, Laura Abbott wrote:
Hi,
We received a report (https://bugzilla.redhat.com/show_bug.cgi?id=1267395) of
bad assembly
when compiling on powerpc with little endian
...
After some discussion with the binutils
Okay, I'd like acks for this so it can go
through the kbuild tree.
Thanks,
Laura
Laura Abbott (4):
kbuild: Add build salt to the kernel and modules
x86: Add build salt to the vDSO
powerpc: Add build salt to the vDSO
arm64: Add build salt to the vDSO
arch/arm64/kernel/vdso/note.S
is to insert
a section with some data.
Add an ELF note to both the kernel and module which contains some data based
off of a config option.
Signed-off-by: Masahiro Yamada
Signed-off-by: Laura Abbott
---
v5: I used S-o-b here since the majority of the code was written
already. Please feel fr
The vDSO needs to have a unique build id in a similar manner
to the kernel and modules. Use the build salt macro.
Signed-off-by: Laura Abbott
---
v5: Switched to using the single line BUILD_SALT macro
---
arch/x86/entry/vdso/vdso-note.S | 3 +++
arch/x86/entry/vdso/vdso32/note.S | 3 +++
2
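Per the description, the per-binary change is expected to be little more than this (sketch; the existing contents of the note file are omitted):

/* arch/x86/entry/vdso/vdso-note.S (sketch of the addition) */
#include <linux/build-salt.h>

BUILD_SALT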
The vDSO needs to have a unique build id in a similar manner
to the kernel and modules. Use the build salt macro.
Signed-off-by: Laura Abbott
---
v5: New approach with the BUILD_SALT macro
---
arch/powerpc/kernel/vdso32/note.S | 3 +++
1 file changed, 3 insertions(+)
diff --git a/arch
The vDSO needs to have a unique build id in a similar manner
to the kernel and modules. Use the build salt macro.
Signed-off-by: Laura Abbott
---
v5: I was previously focused on x86 only but since powerpc gave a patch,
I figured I would do arm64 since the changes were also fairly simple
On 07/03/2018 08:55 PM, Masahiro Yamada wrote:
Hi.
2018-07-04 8:34 GMT+09:00 Laura Abbott :
The vDSO needs to have a unique build id in a similar manner
to the kernel and modules. Use the build salt macro.
Signed-off-by: Laura Abbott
---
v5: I was previously focused on x86 only but since
On 07/05/2018 08:58 AM, Andy Lutomirski wrote:
On Tue, Jul 3, 2018 at 4:34 PM, Laura Abbott wrote:
The vDSO needs to have a unique build id in a similar manner
to the kernel and modules. Use the build salt macro.
Looks good to me. I have no idea whose tree these would go through.
I was
On 07/03/2018 08:59 PM, Masahiro Yamada wrote:
Hi.
Thanks for the update.
2018-07-04 8:34 GMT+09:00 Laura Abbott :
The build id generated from --build-id can be generated in several different
ways, with the default being the sha1 on the output of the linked file. For
distributions, it can
Hi,
This is v6 of the series to allow unique build ids. v6 is mostly minor
fixups and Acks for this to go through the kbuild tree.
Thanks,
Laura
Laura Abbott (4):
kbuild: Add build salt to the kernel and modules
x86: Add build salt to the vDSO
powerpc: Add build salt to the vDSO
arm64
-by: Masahiro Yamada
Signed-off-by: Laura Abbott
---
v6: Added more detail to the commit text about why exactly this feature
is useful. Default string now ""
---
include/linux/build-salt.h | 20
init/Kconfig | 9 +
init/version.c |
The vDSO needs to have a unique build id in a similar manner
to the kernel and modules. Use the build salt macro.
Acked-by: Andy Lutomirski
Signed-off-by: Laura Abbott
---
v6: Ack from Andy
---
arch/x86/entry/vdso/vdso-note.S | 3 +++
arch/x86/entry/vdso/vdso32/note.S | 3 +++
2 files
The vDSO needs to have a unique build id in a similar manner
to the kernel and modules. Use the build salt macro.
Signed-off-by: Laura Abbott
---
v6: Remove semi-colon
---
arch/powerpc/kernel/vdso32/note.S | 3 +++
1 file changed, 3 insertions(+)
diff --git a/arch/powerpc/kernel/vdso32/note.S
The vDSO needs to have a unique build id in a similar manner
to the kernel and modules. Use the build salt macro.
Acked-by: Will Deacon
Signed-off-by: Laura Abbott
---
v6: Remove the semi-colon, Ack from Will
---
arch/arm64/kernel/vdso/note.S | 3 +++
1 file changed, 3 insertions(+)
diff
s can pass __GFP_ZERO to get a zeroed buffer,
which has already been an issue: see commit dd65a941f6ba ("arm64:
dma-mapping: clear buffers allocated with FORCE_CONTIGUOUS flag").
For Ion,
Acked-by: Laura Abbott
Signed-off-by: Marek Szyprowski
---
arch/powerpc/kvm/book3s_hv_buil
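For illustration only (not part of the patch excerpted above): a caller asking the generic DMA API for an explicitly zeroed buffer:

#include <linux/dma-mapping.h>

static void *alloc_zeroed_dma(struct device *dev, size_t size,
			      dma_addr_t *dma)
{
	/* __GFP_ZERO requests a zero-filled buffer, as discussed above. */
	return dma_alloc_attrs(dev, size, dma, GFP_KERNEL | __GFP_ZERO, 0);
}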
On 4/22/19 8:49 PM, Masahiro Yamada wrote:
This prepares to move CONFIG_OPTIMIZE_INLINING from x86 to a common
place. We need to eliminate potential issues beforehand.
If it is enabled for s390, the following error is reported:
In file included from arch/s390/crypto/des_s390.c:19:
./arch/s390/i
On 05/16/2017 07:32 AM, Kees Cook wrote:
> On Tue, May 16, 2017 at 4:09 AM, Michael Ellerman wrote:
>> [Cc'ing the relevant folks]
>>
>> Breno Leitao writes:
>>> Hello,
>>>
>>> Kernel 4.12-rc1 is showing a bug when I try it on a POWER8 virtual
>>> machine. Just SSHing into the machine causes t
On Sat, Jul 9, 2016 at 1:25 AM, Ard Biesheuvel
wrote:
> On 9 July 2016 at 04:22, Laura Abbott wrote:
> > On 07/06/2016 03:25 PM, Kees Cook wrote:
> >>
> >> Hi,
> >>
> >> This is a start of the mainline port of PAX_USERCOPY[1]. After I started
On 07/15/2016 02:44 PM, Kees Cook wrote:
This is the start of porting PAX_USERCOPY into the mainline kernel. This
is the first set of features, controlled by CONFIG_HARDENED_USERCOPY. The
work is based on code by PaX Team and Brad Spengler, and an earlier port
from Casey Schaufler. Additional non
Code such as hardened user copy[1] needs a way to tell if a
page is CMA or not. Add is_migrate_cma_page in a similar way
to is_migrate_isolate_page.
[1]http://article.gmane.org/gmane.linux.kernel.mm/155238
Signed-off-by: Laura Abbott
---
Here's an explicit patch, slightly different than w
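A sketch of what such a helper might look like, assuming the same pageblock-migratetype pattern as is_migrate_isolate_page(); the details are illustrative, not the patch itself:

#ifdef CONFIG_CMA
/* True if the page sits in a pageblock owned by the CMA allocator. */
# define is_migrate_cma_page(_page) \
	(get_pageblock_migratetype(_page) == MIGRATE_CMA)
#else
# define is_migrate_cma_page(_page) false
#endif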
On 07/20/2016 03:24 AM, Balbir Singh wrote:
On Tue, 2016-07-19 at 11:48 -0700, Kees Cook wrote:
On Mon, Jul 18, 2016 at 6:06 PM, Laura Abbott wrote:
On 07/15/2016 02:44 PM, Kees Cook wrote:
This doesn't work when copying CMA allocated memory since CMA purposely
allocates larger than a
On 07/20/2016 01:26 PM, Kees Cook wrote:
Hi,
[This is now in my kspp -next tree, though I'd really love to add some
additional explicit Tested-bys, Reviewed-bys, or Acked-bys. If you've
looked through any part of this or have done any testing, please consider
sending an email with your "*-by:" l
 	object_size = slab_ksize(s);
+	if (ptr < page_address(page))
+		return s->name;
+
 	/* Find offset within object. */
 	offset = (ptr - page_address(page)) % s->size;
With that, you can add
Reviewed-by: Laura Abbott
static siz
On 07/25/2016 02:42 PM, Rik van Riel wrote:
On Mon, 2016-07-25 at 12:16 -0700, Laura Abbott wrote:
On 07/20/2016 01:27 PM, Kees Cook wrote:
Under CONFIG_HARDENED_USERCOPY, this adds object size checking to the
SLUB allocator to catch any copies that may span objects. Includes a
redzone
On 07/25/2016 01:45 PM, Kees Cook wrote:
On Mon, Jul 25, 2016 at 12:16 PM, Laura Abbott wrote:
On 07/20/2016 01:27 PM, Kees Cook wrote:
Under CONFIG_HARDENED_USERCOPY, this adds object size checking to the
SLUB allocator to catch any copies that may span objects. Includes a
redzone handling