Re: [PULL] Docs for 4.13
On Tue, Jul 04, 2017 at 01:24:13PM -0600, Jonathan Corbet wrote:
> On Mon, 3 Jul 2017 21:32:33 -0700 Linus Torvalds wrote:
>
> > Eg things like
> >
> >    Error: Cannot open file ./kernel/rcu/srcu.c
> >    Error: Cannot open file ./kernel/rcu/srcu.c
> >
> > happen simply because that file no longer exists, and the docs never
> > got updated.
> >
> > So my merge didn't even try to fix those kinds of things at all. I
> > literally just looked at the conflicts and moved those over to the rst
> > files, and that was it. There's a lot of other changes that never
> > cause conflicts for the simple reason that those changes never caused
> > documentation changes to begin with.
> >
> > Now, this is obviously not new, but it does strike me that if checking
> > for these kinds of things was easier and part of "make allmodconfig",
> > then we might have less of it happen.
>
> I see Markus already tossed out a patch using the sphinx "dummy mode".
> It might be possible to create a dead-simple linter for this kind of
> thing that would be quite a bit faster, but I wonder how much we really
> need it.  Problems like this pop up with great regularity, but they
> tend to be caught and fixed fairly quickly.  Meanwhile, the world
> stubbornly refuses to end if the docs build tosses out a few (more)
> errors for a few days.  I don't think we have to slow down everybody's
> build for this.
>
> (Getting something into the build-and-boot testers might not be a bad
> idea, though).

0day runs make htmldocs on everything it can get its hands on. That was
about the first thing I made sure happens when we started this docs
endeavor :-)  I guess 0day just isn't all that good at making sure people
handle docs issues in cross-tree conflicts, but hopefully that won't
happen much anymore now that docbook is gone.

The other problem is that the current htmldocs build is anything but
clean (lots of warnings about kernel-doc mismatching the function
prototype, plus lots of others), and we don't yet have anyone like Arnd
trying to stem the tide ...

Wrt building: the big gain with sphinx is incremental builds: you can
finally edit a few comments/text, rebuild, and a) not have to wait more
than a few seconds, and b) be sure it did rebuild everything that had to
be rebuilt. Makes things much nicer for developers, not so much for
maintainers unfortunately; not sure how much faster we could make that.

-Daniel
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
Re: [PATCH RFC] scripts/sphinx-pre-install: add a script to check Sphinx install
> On 15.07.2017, at 04:21, Mauro Carvalho Chehab wrote:
>
> On Fri, 14 Jul 2017 19:35:59 +0200 Markus Heiser wrote:
>
>>> On 14.07.2017, at 18:49, Mauro Carvalho Chehab wrote:
>>>
>>> Solving Sphinx dependencies can be painful. Add a script to
>>> check if everything is ok.
>>
>> just my 5 cents:
>>
>> What we need is a "requirements.txt" file to define a
>> **reference environment**. E.g. to pin Sphinx 1.4.9 in
>> such a reference environment::
>>
>>     Sphinx==1.4.9
>>     sphinx_rtd_theme
>>
>> The rest is similar to what you wrote in doc-guide/sphinx.rst ...
>>
>> The reference environment can be built with virtualenv & pip::
>>
>>     $ virtualenv --python=python3 docenv
>>     $ source ./docenv/bin/activate
>>     (docenv) $ pip install -r requirements.txt
>>
>> From then on we can start our build as usual. If not already done,
>> first activate the environment::
>>
>>     $ . ./docenv/bin/activate
>>     (docenv) $ make htmldocs
>>
>> This (requirements.txt) is the way python packaging goes.
>
> The above assumes that the user wants to use virtualenv and
> has python3, virtualenv3 and pip3 already installed.
>
> I agree that a virtual environment works better than using
> distro-specific packaging, as the Sphinx toolchain is really
> fragile. But we should give the developer an option to
> use whatever he wants.

The developer is free to choose whatever he likes. But we are talking
about what "best practice" is.

I tested sphinx-pre-install and it works fine for me, that's not the
point. The point is: what do we recommend? E.g. for me it advises me
to run:

    sudo apt-get install python3-sphinx python3-sphinx-rtd-theme

We should not assume that the developer (better: the build user) owns
the privilege to install fine-grained OS packages. There is an admin
part and a user part.

The admin (sudoer) installs binaries into the OS:

* gcc
* make
* python3, virtualenv
* TeX Live
* ImageMagick
* perl
* graphviz

The user (developer) installs additional requirements anywhere under
$HOME:

* Python: best practice here is virtualenv and pip .. and the *recipe*
  to get a *reference environment* is the requirements.txt
* LaTeX: what is best practice here? .. I know there is some package
  management with the "TeX Live Manager" (tlmgr), but I have no
  experience with it. Is there a way to define a *requirements* file
  and to install such requirements anywhere under $HOME?

My English is not that eloquent, but I hope it's clear what I mean.
Having a script is nice, but first let's explore what best practice is.

-- Markus --
[RFC v4] scripts/sphinx-pre-install: add a script to check Sphinx install
Solving Sphinx dependencies can be painful. Add a script to check if everything is ok. Signed-off-by: Mauro Carvalho Chehab --- v4: changed default to use virtualenv for Sphinx and add switches to change its behavior to disable PDF and virtualenv. v3: check for DeJavu fonts on Ubuntu and add "sudo" to Fedora instructions v2: add support for Fedora 26 scripts/sphinx-pre-install | 378 + 1 file changed, 378 insertions(+) create mode 100755 scripts/sphinx-pre-install diff --git a/scripts/sphinx-pre-install b/scripts/sphinx-pre-install new file mode 100755 index ..b0ea74d2f745 --- /dev/null +++ b/scripts/sphinx-pre-install @@ -0,0 +1,378 @@ +#!/usr/bin/perl +use strict; + + + +# +# Static vars +# + +my @missing; +my @opt_missing; +my $system_release; +my $need = 0; +my $optional = 0; +my $need_symlink = 0; +my $need_sphinx = 0; +my $install = ""; + +# +# Command line arguments +# + +my $pdf = 1; +my $virtualenv = 1; + +# +# Subroutines that checks if a feature exists +# + +sub catcheck($) +{ + my $res = ""; + $res = qx(cat $_[0]) if (-r $_[0]); + return $res; +} + +sub check_missing(%) +{ + my %map = %{$_[0]}; + + foreach my $prog (@missing) { + print "ERROR: please install \"$prog\", otherwise, build won't work.\n"; + if (defined($map{$prog})) { + $install .= " " . $map{$prog}; + } else { + $install .= " " . $prog; + } + } + foreach my $prog (@opt_missing) { + print "Warning: better to also install \"$prog\".\n"; + if (defined($map{$prog})) { + $install .= " " . $map{$prog}; + } else { + $install .= " " . $prog; + } + } + + $install =~ s/^\s//; +} + +sub add_package($$) +{ + my $package = shift; + my $is_optional = shift; + + if ($is_optional) { + push @opt_missing, $package; + + $optional++; + } else { + push @missing, $package; + + $need++; + } +} + +sub check_missing_file($$$) +{ + my $file = shift; + my $package = shift; + my $is_optional = shift; + + return if(-e $file); + + add_package($package, $is_optional); +} + +sub findprog($) +{ + foreach(split(/:/, $ENV{PATH})) { + return "$_/$_[0]" if(-x "$_/$_[0]"); + } +} + +sub check_program($$) +{ + my $prog = shift; + my $is_optional = shift; + + return if findprog($prog); + + add_package($prog, $is_optional); +} + +sub check_perl_module($$) +{ + my $prog = shift; + my $is_optional = shift; + + my $err = system("perl -M$prog -e 1 2>/dev/null /dev/null"); + return if ($err == 0); + + add_package($prog, $is_optional); +} + +sub check_python_module($$) +{ + my $prog = shift; + my $is_optional = shift; + + my $err = system("python3 -c 'import $prog' 2>/dev/null /dev/null"); + return if ($err == 0); + my $err = system("python -c 'import $prog' 2>/dev/null /dev/null"); + return if ($err == 0); + + add_package($prog, $is_optional); +} + +sub check_sphinx() +{ + return if findprog("sphinx-build"); + + if (findprog("sphinx-build-3")) { + $need_symlink = 1; + return; + } + + if ($virtualenv) { + check_program("virtualenv", 0); + check_program("pip", 0); + $need_sphinx = 1; + } else { + add_package("python-sphinx", 0); + } +} + +sub which($) +{ + my $file = shift; + my @path = split ":", $ENV{PATH}; + + foreach my $dir(@path) { + my $name = $dir.'/'.$file; + return $name if (-x $name ); + } + return undef; +} + +# +# Subroutines that check distro-specific hints +# + +sub give_debian_hints() +{ + my %map = ( + "python-sphinx" => "python3-sphinx", + "sphinx_rtd_theme" => "python3-sphinx-rtd-theme", + "virtualenv"=> "python3-virtualenv", + "pip" => "python3-pip", + "dot" => "graphviz", + "convert" => "imagemagick", + "Pod::Usage"=> "perl-modules", + 
"xelatex" => "texlive-xetex", + ); + + if ($pdf) { + check_missing_file("/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf", + "fonts-dejavu", 1); + } + + check_missing(\%map); + + return if (!$need && !$optional); + printf("You should run:\n\n\tsudo apt-get install $install\n"); +} + +sub give_redhat_hints() +{ + my %map = ( + "python-sphinx" => "python3-sphinx", + "sphinx_rtd_
[RFC v5] scripts/sphinx-pre-install: add a script to check Sphinx install
Solving Sphinx dependencies can be painful. Add a script to check if everything is ok. Signed-off-by: Mauro Carvalho Chehab --- v5: minor fix to return error if virtualenv and Sphinx is not installed v4: changed default to use virtualenv for Sphinx and add switches to change its behavior to disable PDF and virtualenv. v3: check for DeJavu fonts on Ubuntu and add "sudo" to Fedora instructions v2: add support for Fedora 26 scripts/sphinx-pre-install | 379 + 1 file changed, 379 insertions(+) create mode 100755 scripts/sphinx-pre-install diff --git a/scripts/sphinx-pre-install b/scripts/sphinx-pre-install new file mode 100755 index ..b842f76c26e9 --- /dev/null +++ b/scripts/sphinx-pre-install @@ -0,0 +1,379 @@ +#!/usr/bin/perl +use strict; + + + +# +# Static vars +# + +my @missing; +my @opt_missing; +my $system_release; +my $need = 0; +my $optional = 0; +my $need_symlink = 0; +my $need_sphinx = 0; +my $install = ""; + +# +# Command line arguments +# + +my $pdf = 1; +my $virtualenv = 1; + +# +# Subroutines that checks if a feature exists +# + +sub catcheck($) +{ + my $res = ""; + $res = qx(cat $_[0]) if (-r $_[0]); + return $res; +} + +sub check_missing(%) +{ + my %map = %{$_[0]}; + + foreach my $prog (@missing) { + print "ERROR: please install \"$prog\", otherwise, build won't work.\n"; + if (defined($map{$prog})) { + $install .= " " . $map{$prog}; + } else { + $install .= " " . $prog; + } + } + foreach my $prog (@opt_missing) { + print "Warning: better to also install \"$prog\".\n"; + if (defined($map{$prog})) { + $install .= " " . $map{$prog}; + } else { + $install .= " " . $prog; + } + } + + $install =~ s/^\s//; +} + +sub add_package($$) +{ + my $package = shift; + my $is_optional = shift; + + if ($is_optional) { + push @opt_missing, $package; + + $optional++; + } else { + push @missing, $package; + + $need++; + } +} + +sub check_missing_file($$$) +{ + my $file = shift; + my $package = shift; + my $is_optional = shift; + + return if(-e $file); + + add_package($package, $is_optional); +} + +sub findprog($) +{ + foreach(split(/:/, $ENV{PATH})) { + return "$_/$_[0]" if(-x "$_/$_[0]"); + } +} + +sub check_program($$) +{ + my $prog = shift; + my $is_optional = shift; + + return if findprog($prog); + + add_package($prog, $is_optional); +} + +sub check_perl_module($$) +{ + my $prog = shift; + my $is_optional = shift; + + my $err = system("perl -M$prog -e 1 2>/dev/null /dev/null"); + return if ($err == 0); + + add_package($prog, $is_optional); +} + +sub check_python_module($$) +{ + my $prog = shift; + my $is_optional = shift; + + my $err = system("python3 -c 'import $prog' 2>/dev/null /dev/null"); + return if ($err == 0); + my $err = system("python -c 'import $prog' 2>/dev/null /dev/null"); + return if ($err == 0); + + add_package($prog, $is_optional); +} + +sub check_sphinx() +{ + return if findprog("sphinx-build"); + + if (findprog("sphinx-build-3")) { + $need_symlink = 1; + return; + } + + if ($virtualenv) { + check_program("virtualenv", 0); + check_program("pip", 0); + $need_sphinx = 1; + } else { + add_package("python-sphinx", 0); + } +} + +sub which($) +{ + my $file = shift; + my @path = split ":", $ENV{PATH}; + + foreach my $dir(@path) { + my $name = $dir.'/'.$file; + return $name if (-x $name ); + } + return undef; +} + +# +# Subroutines that check distro-specific hints +# + +sub give_debian_hints() +{ + my %map = ( + "python-sphinx" => "python3-sphinx", + "sphinx_rtd_theme" => "python3-sphinx-rtd-theme", + "virtualenv"=> "python3-virtualenv", + "pip" => "python3-pip", + "dot" => 
"graphviz", + "convert" => "imagemagick", + "Pod::Usage"=> "perl-modules", + "xelatex" => "texlive-xetex", + ); + + if ($pdf) { + check_missing_file("/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf", + "fonts-dejavu", 1); + } + + check_missing(\%map); + + return if (!$need && !$optional); + printf("You should run:\n\n\tsudo apt-get install $install\n"); +} + +sub give_redhat_hints() +{ + my %map = ( + "p
Re: [PATCH RFC] scripts/sphinx-pre-install: add a script to check Sphinx install
Em Sat, 15 Jul 2017 11:51:45 +0200 Markus Heiser escreveu: > > Am 15.07.2017 um 04:21 schrieb Mauro Carvalho Chehab > > : > > > > Em Fri, 14 Jul 2017 19:35:59 +0200 > > Markus Heiser escreveu: > > > >>> Am 14.07.2017 um 18:49 schrieb Mauro Carvalho Chehab > >>> : > >>> > >>> Solving Sphinx dependencies can be painful. Add a script to > >>> check if everything is ok. > >> > >> just my 5cent: > >> > >> What we need is a "requirements.txt" file to define a > >> **reference environment**. E.g. to stick Sphinx 1.4.9 in > >> such a reference environment:: > >> > >> --- > >> Sphinx==1.4.9 > >> sphinx_rtd_theme > >> - > >> > >> The rest is similarly to what you wrote in doc-guide/sphinx.rst ... > >> > >> The ref-environment can be build with virtualenv & pip:: > >> > >> $ virtualenv --python=python3 docenv > >> (doc-env) $ source ./docenv/bin/activate > >> (doc-env) $ pip install -r requirements.txt > >> > >> From now we can start our build as usual. If not already done, > >> first activate the environment:: > >> > >> $ . ./docenv/bin/activate > >> (doc-env) $ make htmldocs > >> > >> This (requirements.txt) is the way python packaging goes. > > > > > > The above assumes that the user wants to use virtenv and > > have python3, virtualenv3 and pip3 already installed. > > > > I agree that a virtual environment works better than using > > distro-specific packaging, as Sphinx toolchain is really > > fragile. But we should give an option for the developer to > > use whatever he wants. > > The developer is free to choose the way he like. But we are talking > about what is "best practice". As I said, the idea is to let the user to decide what it wants. I focused on the packaging approach first because such logic is required for other packages. Now that it is working, just sent a version 5 that will use virtualenv for Sphinx by default. With such change, it will now do the right thing: Forcing to use distro-packages: $ ./scripts/sphinx-pre-install --no-virtualenv Checking if the needed tools for Fedora release 26 (Twenty Six) are available ERROR: please install "python-sphinx", otherwise, build won't work. Warning: better to also install "sphinx_rtd_theme". Warning: better to also install "python-sphinx-latex". You should run: sudo dnf install -y python3-sphinx python3-sphinx_rtd_theme python-sphinx-latex Can't build as 1 mandatory dependency is missing at ./scripts/sphinx-pre-install line 335. Default: $ ./scripts/sphinx-pre-install Checking if the needed tools for Fedora release 26 (Twenty Six) are available Warning: better to also install "python-sphinx-latex". You should run: sudo dnf install -y python-sphinx-latex virtualenv sphinx_1.4 . sphinx_1.4/bin/activate pip install 'docutils==0.12' pip install 'Sphinx==1.4.9' pip install sphinx_rtd_theme Can't build as 1 mandatory dependency is missing at ./scripts/sphinx-pre-install line 335. There is one problem there on Fedora that I just noticed: "python-sphinx-latex" actually installs python2-sphinx. Fixing it is trivial, but will require some time to adjust, as the script will need to manually check for the packages that are actually required on Fedora. Yet, before spending more time on such script, I'd like to have more feedback if: - is this approach acceptable? - should it have an optional argument that will make the script to run the needed commands; - should it be integrated at the Documentation/Makefile? - what's the best name/location for such script? I guess it could also use kpsewhich to check if the needed texlive packages are installed. 
However, the problem with such approach is that texlive-kpathsea-bin package should be installed first, in order to provide such command. So, installing PDF and math dependencies would require two steps. > I tested sphinx-pre-install and it works fine for me, thats not the > point. The point is: what do we recommend? E.g. for me it advices me > to run: > > sudo apt-get install python3-sphinx python3-sphinx-rtd-theme > > We should not assume that the developer (better: the build-user) owns the > privilege to install fine grained OS packages. There is a admin-part and > a user-part: That's not relevant. Typically, anyone that is building a Kernel has admin privileges, otherwise it can't actually test the Kernel that was built. Ok, there are exceptions to that, but, on such case, the user should be able to request the admin to install whatever packages are needed to build the Kernel. Thanks, Mauro -- To unsubscribe from this list: send the line "unsubscribe linux-doc" in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
Re: [PATCH RFC] scripts/sphinx-pre-install: add a script to check Sphinx install
On Sat, 15 Jul 2017 09:49:40 -0300 Mauro Carvalho Chehab wrote:

> I guess it could also use kpsewhich to check if the needed
> texlive packages are installed. However, the problem with such
> an approach is that the texlive-kpathsea-bin package should be
> installed first, in order to provide such a command.
>
> So, installing PDF and math dependencies would require two steps.

Hmm... answering myself: if texlive-kpathsea-bin is not installed, that
probably means that texlive is not installed. So, the script can just
give the command to install all texlive packages that are needed for a
given distribution.

Thanks,
Mauro
[RFC v6] scripts/sphinx-pre-install: add a script to check Sphinx install
Solving Sphinx dependencies can be painful. Add a script to check if everything is ok. Signed-off-by: Mauro Carvalho Chehab --- v6: add logic to detect texlive packages v5: minor fix to return error if virtualenv and Sphinx is not installed v4: changed default to use virtualenv for Sphinx and add switches to change its behavior to disable PDF and virtualenv. v3: check for DeJavu fonts on Ubuntu and add "sudo" to Fedora instructions v2: add support for Fedora 26 scripts/sphinx-pre-install | 438 + 1 file changed, 438 insertions(+) create mode 100755 scripts/sphinx-pre-install diff --git a/scripts/sphinx-pre-install b/scripts/sphinx-pre-install new file mode 100755 index ..d7c242c27f43 --- /dev/null +++ b/scripts/sphinx-pre-install @@ -0,0 +1,438 @@ +#!/usr/bin/perl +use strict; + + + +# +# Static vars +# + +my @missing; +my @opt_missing; +my $system_release; +my $need = 0; +my $optional = 0; +my $need_symlink = 0; +my $need_sphinx = 0; +my $install = ""; + +# +# Command line arguments +# + +my $pdf = 1; +my $virtualenv = 1; + +# +# List of required texlive packages +# + +my %texlive = ( + 'adjustbox.sty' => 'texlive-adjustbox', + 'amsfonts.sty' => 'texlive-amsfonts', + 'amsmath.sty'=> 'texlive-amsmath', + 'amssymb.sty'=> 'texlive-amsfonts', + 'amsthm.sty' => 'texlive-amscls', + 'anyfontsize.sty'=> 'texlive-anyfontsize', + 'atbegshi.sty' => 'texlive-oberdiek', + 'bm.sty' => 'texlive-tools', + 'capt-of.sty'=> 'texlive-capt-of', + 'cmap.sty' => 'texlive-cmap', + 'ecrm1000.tfm' => 'texlive-ec', + 'eqparbox.sty' => 'texlive-eqparbox', + 'eu1enc.def' => 'texlive-euenc', + 'fncychap.sty' => 'texlive-fncychap', + 'footnote.sty' => 'texlive-mdwtools', + 'framed.sty' => 'texlive-framed', + 'luatex85.sty' => 'texlive-luatex85', + 'multirow.sty' => 'texlive-multirow', + 'needspace.sty' => 'texlive-needspace', + 'palatino.sty' => 'texlive-psnfss', + 'parskip.sty'=> 'texlive-parskip', + 'polyglossia.sty'=> 'texlive-polyglossia', + 'tabulary.sty' => 'texlive-tabulary', + 'threeparttable.sty' => 'texlive-threeparttable', + 'titlesec.sty' => 'texlive-titlesec', + 'ucs.sty'=> 'texlive-ucs', + 'upquote.sty'=> 'texlive-upquote', + 'wrapfig.sty'=> 'texlive-wrapfig', +); + +# +# Subroutines that checks if a feature exists +# + +sub catcheck($) +{ + my $res = ""; + $res = qx(cat $_[0]) if (-r $_[0]); + return $res; +} + +sub check_missing(%) +{ + my %map = %{$_[0]}; + + foreach my $prog (@missing) { + print "ERROR: please install \"$prog\", otherwise, build won't work.\n"; + if (defined($map{$prog})) { + $install .= " " . $map{$prog}; + } else { + $install .= " " . $prog; + } + } + foreach my $prog (@opt_missing) { + print "Warning: better to also install \"$prog\".\n"; + if (defined($map{$prog})) { + $install .= " " . $map{$prog}; + } else { + $install .= " " . 
$prog; + } + } + + $install =~ s/^\s//; +} + +sub add_package($$) +{ + my $package = shift; + my $is_optional = shift; + + if ($is_optional) { + push @opt_missing, $package; + + $optional++; + } else { + push @missing, $package; + + $need++; + } +} + +sub check_missing_file($$$) +{ + my $file = shift; + my $package = shift; + my $is_optional = shift; + + return if(-e $file); + + add_package($package, $is_optional); +} + +sub findprog($) +{ + foreach(split(/:/, $ENV{PATH})) { + return "$_/$_[0]" if(-x "$_/$_[0]"); + } +} + +sub check_program($$) +{ + my $prog = shift; + my $is_optional = shift; + + return if findprog($prog); + + add_package($prog, $is_optional); +} + +sub check_perl_module($$) +{ + my $prog = shift; + my $is_optional = shift; + + my $err = system("perl -M$prog -e 1 2>/dev/null /dev/null"); + return if ($err == 0); + + add_package($prog, $is_optional); +} + +sub check_python_module($$) +{ + my $prog = shift; + my $is_optional = shift; + + my $err = system("python3 -c 'import $prog' 2>/dev/null /dev/null"); + return if ($err == 0); + my $err = system("python -c 'import $prog' 2>/dev/null /dev/null"); + return if ($err == 0); + + add_package($prog, $is_optional); +} + +sub check_rpm_missing($$) +{ + my @pkgs = @{$_[0]}; + my $is_optional = $_[1]; + + fore
[PULL] Documentation format standardization
Here's a request to consider as an end-of-window pull. Mauro has gone through and fixed up a lot of top-level documentation files to make them conform to the RST format, but without moving or renaming them in any way. This will help when we incorporate the ones we want to keep into the Sphinx doctree, but the real purpose is to bring a bit of uniformity to our documentation and let the top-level docs serve as examples for those writing new ones. It touches a lot of files, but it's all contained within Documentation/. I've held it until now to minimize the chances of conflicts with other trees changing the docs. Thanks, jon The following changes since commit b86faee6d111294fa95a2e89b5f771b2da3c9782: Merge tag 'nfs-for-4.13-1' of git://git.linux-nfs.org/projects/anna/linux-nfs (2017-07-13 14:35:37 -0700) are available in the git repository at: git://git.lwn.net/linux.git tags/standardize-docs for you to fetch changes up to 43e5f7e1fa66531777c49791014c3124ea9208d8: docs: kprobes.txt: Fix whitespacing (2017-07-14 13:58:14 -0600) This series converts a number of top-level documents to the RST format without incorporating them into the Sphinx tree. The hope is to bring some uniformity to kernel documentation and, perhaps more importantly, have our existing docs serve as an example of the desired formatting for those that will be added later. Mauro Carvalho Chehab (84): bcache.txt: standardize document format bt8xxgpio.txt: standardize document format btmrvl.txt: standardize document format bus-virt-phys-mapping.txt: standardize document format cachetlb.txt: standardize document format circular-buffers.txt: standardize document format clk.txt: standardize document format cpu-load: standardize document format cputopology.txt: standardize document format crc32.txt: standardize document format dcdbas.txt: standardize document format digsig.txt: standardize document format DMA-API.txt: standardize document format DMA-API-HOWTO.txt: standardize document format DMA-attributes.txt: standardize document format DMA-ISA-LPC.txt: standardize document format debugging-via-ohci1394.txt: standardize document format efi-stub.txt: standardize document format eisa.txt: standardize document format flexible-arrays.txt: standardize document format futex-requeue-pi.txt: standardize document format gcc-plugins.txt: standardize document format highuid.txt: standardize document format hw_random.txt: standardize document format hwspinlock.txt: standardize document format intel_txt.txt: standardize document format Intel-IOMMU.txt: standardize document format io-mapping.txt: standardize document format io_ordering.txt: standardize document format iostats.txt: standardize document format iostats.txt: update it to cover recent Kernels IPMI.txt: standardize document format IRQ-affinity.txt: standardize document format IRQ-domain.txt: standardize document format irqflags-tracing.txt: standardize document format IRQ.txt: add a markup for its title isapnp.txt: promote title level isa.txt: standardize document format kernel-per-CPU-kthreads.txt: standardize document format kobject.txt: standardize document format kprobes.txt: standardize document format kref.txt: standardize document format ldm.txt: standardize document format lockup-watchdogs.txt: standardize document format lzo.txt: standardize document format mailbox.txt: standardize document format memory-hotplug.txt: standardize document format men-chameleon-bus.txt: standardize document format nommu-mmap.txt: standardize document format nommu-mmap.txt: don't use all upper 
case on titles ntb.txt: standardize document format numastat.txt: standardize document format padata.txt: standardize document format parport-lowlevel.txt: standardize document format percpu-rw-semaphore.txt: standardize document format phy.txt: standardize document format pi-futex.txt: standardize document format pnp.txt: standardize document format preempt-locking.txt: standardize document format printk-formats.txt: standardize document format rbtree.txt: standardize document format remoteproc.txt: standardize document format rfkill.txt: standardize document format robust-futex-ABI.txt: standardize document format robust-futexes.txt: standardize document format rpmsg.txt: standardize document format SAK.txt: standardize document format sgi-ioc4.txt: standardize document format siphash.txt: standardize document format SM501.txt: standardize document format
Re: [PATCH RFC] scripts/sphinx-pre-install: add a script to check Sphinx install
On Sat, 15 Jul 2017 09:55:09 -0300 Mauro Carvalho Chehab wrote:

> On Sat, 15 Jul 2017 09:49:40 -0300 Mauro Carvalho Chehab wrote:
>
> > I guess it could also use kpsewhich to check if the needed
> > texlive packages are installed. However, the problem with such
> > an approach is that the texlive-kpathsea-bin package should be
> > installed first, in order to provide such a command.
> >
> > So, installing PDF and math dependencies would require two steps.
>
> Hmm... answering myself: if texlive-kpathsea-bin is not installed,
> that probably means that texlive is not installed. So, the script
> can just give the command to install all texlive packages that are
> needed for a given distribution.

While getting the required stuff for OpenSuse Tumbleweed, I noticed a
problem with the current toolchain that affects both html and PDF:

    /usr/bin/dot -Tpdf /home/mchehab/docs/Documentation/doc-guide/hello.dot
    Format: "pdf" not recognized. Use one of: canon cmap cmapx cmapx_np dot
    eps fig gv imap imap_np ismap pic plain plain-ext pov ps ps2 svg svgz
    tk vml vmlz xdot xdot1.2 xdot1.4

    $ ls -lctra Documentation/output/doc-guide/latex/hello.pdf
    -rw-rw-r-- 1 mchehab mchehab 0 jul 15 16:20 Documentation/output/doc-guide/latex/hello.pdf

Btw, the same error also happens with Fedora 26.

So, I guess kfigure should test if pdf is supported, and otherwise
convert to some other format (like ps or svg). E.g. calling it with
something like:

    $ dot -T help
    Format: "help" not recognized. Use one of: canon cmap cmapx cmapx_np dot
    eps fig gv imap imap_np ismap pic plain plain-ext pov ps ps2 svg svgz
    tk vml vmlz xdot xdot1.2 xdot1.4

and then parsing its output, before assuming that the requested format
is available.

Thanks,
Mauro
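Whatever form the final fix takes (kfigure is a Sphinx extension, so it
would presumably be Python, and sphinx-pre-install is Perl), the
detection step described above boils down to running dot with a bogus
format and scanning the "Use one of:" list it prints. A rough sketch of
that logic, in C purely for illustration; dot_supports_format() is a
made-up helper name, not an existing API:

/*
 * Sketch: ask graphviz for an unknown format so it prints the list of
 * supported ones, then look for the format we actually want.
 */
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

static bool dot_supports_format(const char *fmt)
{
	char line[4096];
	bool found = false;
	/* the "Use one of: ..." list is printed on stderr */
	FILE *p = popen("dot -Thelp 2>&1", "r");

	if (!p)
		return false;

	while (!found && fgets(line, sizeof(line), p)) {
		char *list = strstr(line, "Use one of:");
		char *tok, *save;

		if (!list)
			continue;

		for (tok = strtok_r(list + strlen("Use one of:"), " \t\n", &save);
		     tok; tok = strtok_r(NULL, " \t\n", &save)) {
			if (strcmp(tok, fmt) == 0) {
				found = true;
				break;
			}
		}
	}
	pclose(p);
	return found;
}

int main(void)
{
	/* fall back to ps or svg when pdf output is not compiled in */
	printf("dot %s render PDF directly\n",
	       dot_supports_format("pdf") ? "can" : "cannot");
	return 0;
}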
[PATCH RFC v7] scripts/sphinx-pre-install: add a script to check Sphinx install
Solving Sphinx dependencies can be painful. Add a script to check if everything is ok. Tested on: - Fedora 25 and 26; - Ubuntu 17.04; - OpenSuse Tumbleweed; - Arch Linux. Signed-off-by: Mauro Carvalho Chehab --- v7: added support for OpenSuse and Arch Linux. Re-tested on Ubuntu and Fedora 25. v6: add logic to detect texlive packages v5: minor fix to return error if virtualenv and Sphinx is not installed v4: changed default to use virtualenv for Sphinx and add switches to change its behavior to disable PDF and virtualenv. v3: check for DeJavu fonts on Ub scripts/sphinx-pre-install | 485 + 1 file changed, 485 insertions(+) create mode 100755 scripts/sphinx-pre-install diff --git a/scripts/sphinx-pre-install b/scripts/sphinx-pre-install new file mode 100755 index ..0abf6175da40 --- /dev/null +++ b/scripts/sphinx-pre-install @@ -0,0 +1,485 @@ +#!/usr/bin/perl +use strict; + + + +# +# Static vars +# + +my @missing; +my @opt_missing; +my $system_release; +my $need = 0; +my $optional = 0; +my $need_symlink = 0; +my $need_sphinx = 0; +my $install = ""; + +# +# Command line arguments +# + +my $pdf = 1; +my $virtualenv = 1; + +# +# List of required texlive packages +# + +my %texlive = ( + 'adjustbox.sty' => 'texlive-adjustbox', + 'amsfonts.sty' => 'texlive-amsfonts', + 'amsmath.sty'=> 'texlive-amsmath', + 'amssymb.sty'=> 'texlive-amsfonts', + 'amsthm.sty' => 'texlive-amscls', + 'anyfontsize.sty'=> 'texlive-anyfontsize', + 'atbegshi.sty' => 'texlive-oberdiek', + 'bm.sty' => 'texlive-tools', + 'capt-of.sty'=> 'texlive-capt-of', + 'cmap.sty' => 'texlive-cmap', + 'ecrm1000.tfm' => 'texlive-ec', + 'eqparbox.sty' => 'texlive-eqparbox', + 'eu1enc.def' => 'texlive-euenc', + 'fancybox.sty' => 'texlive-fancybox', + 'fancyvrb.sty' => 'texlive-fancyvrb', + 'float.sty' => 'texlive-float', + 'fncychap.sty' => 'texlive-fncychap', + 'footnote.sty' => 'texlive-mdwtools', + 'framed.sty' => 'texlive-framed', + 'luatex85.sty' => 'texlive-luatex85', + 'multirow.sty' => 'texlive-multirow', + 'needspace.sty' => 'texlive-needspace', + 'palatino.sty' => 'texlive-psnfss', + 'parskip.sty'=> 'texlive-parskip', + 'polyglossia.sty'=> 'texlive-polyglossia', + 'tabulary.sty' => 'texlive-tabulary', + 'threeparttable.sty' => 'texlive-threeparttable', + 'titlesec.sty' => 'texlive-titlesec', + 'ucs.sty'=> 'texlive-ucs', + 'upquote.sty'=> 'texlive-upquote', + 'wrapfig.sty'=> 'texlive-wrapfig', +); + +# +# Subroutines that checks if a feature exists +# + +sub catcheck($) +{ + my $res = ""; + $res = qx(cat $_[0]) if (-r $_[0]); + return $res; +} + +sub check_missing(%) +{ + my %map = %{$_[0]}; + + foreach my $prog (@missing) { + print "ERROR: please install \"$prog\", otherwise, build won't work.\n"; + if (defined($map{$prog})) { + $install .= " " . $map{$prog}; + } else { + $install .= " " . $prog; + } + } + foreach my $prog (@opt_missing) { + print "Warning: better to also install \"$prog\".\n"; + if (defined($map{$prog})) { + $install .= " " . $map{$prog}; + } else { + $install .= " " . 
$prog; + } + } + + $install =~ s/^\s//; +} + +sub add_package($$) +{ + my $package = shift; + my $is_optional = shift; + + if ($is_optional) { + push @opt_missing, $package; + + $optional++; + } else { + push @missing, $package; + + $need++; + } +} + +sub check_missing_file($$$) +{ + my $file = shift; + my $package = shift; + my $is_optional = shift; + + return if(-e $file); + + add_package($package, $is_optional); +} + +sub findprog($) +{ + foreach(split(/:/, $ENV{PATH})) { + return "$_/$_[0]" if(-x "$_/$_[0]"); + } +} + +sub check_program($$) +{ + my $prog = shift; + my $is_optional = shift; + + return if findprog($prog); + + add_package($prog, $is_optional); +} + +sub check_perl_module($$) +{ + my $prog = shift; + my $is_optional = shift; + + my $err = system("perl -M$prog -e 1 2>/dev/null /dev/null"); + return if ($err == 0); + + add_package($prog, $is_optional); +} + +sub check_python_module($$) +{ + my $prog = shift; + my $is_optional = shift; + + my $err = system("python3 -c 'import $prog' 2>/dev/null /dev/null"); + r
[RFC v6 00/62] powerpc: Memory Protection Keys
Memory protection keys enable applications to protect their address
space from inadvertent access or corruption by themselves.

The overall idea:

- A process allocates a key and associates it with an address range
  within its address space. The process can then dynamically set
  read/write permissions on the key without involving the kernel. Any
  code that violates the permissions of the address space, as defined
  by its associated key, will receive a segmentation fault.

This patch series enables the feature on the PPC64 HPTE platform.
ISA3.0 section 5.7.13 describes the detailed specifications.

Highlevel view of the design:
-----------------------------
When an application associates a key with an address range, program
the key in the Linux PTE. When the MMU detects a page fault, allocate
a hash page and program the key into the HPTE. And finally, when the
MMU detects a key violation due to invalid application access, invoke
the registered signal handler and provide the violated key number as
well as the state of the key register (AMR) at the time it faulted.

Testing:
--------
This patch series has passed all the protection key tests available in
the selftests directory. The tests are updated to work on both x86 and
powerpc.

Outstanding issues:
-------------------
How will the application know if pkeys are enabled, and if so, how many
pkeys are available?
Is PKEY_DISABLE_EXECUTE supported? - Ben.

History:
--------
version v6:
	(1) selftest changes are broken down into 20 incremental patches.
	(2) a separate key allocation mask that includes
	    PKEY_DISABLE_EXECUTE is added for powerpc
	(3) pkey feature is enabled for the 64K HPT case only. RPT and
	    4k HPT are disabled.
	(4) documentation is updated to better capture the semantics.
	(5) introduced arch_pkeys_enabled() to find if an arch enables
	    pkeys. Correspondingly, changed the logic that displays the
	    key value in smaps.
	(6) code rearranged in many places based on comments from
	    Dave Hansen, Balbir, Anshuman.
	(7) fixed one bug where a bogus key could be associated
	    successfully in pkey_mprotect().

version v5:
	(1) reverted back to the old design -- store the key in the pte,
	    instead of bypassing it. The v4 design slowed down the hash
	    page path.
	(2) detects key violations when the kernel is told to access
	    user pages.
	(3) further refined the patches into smaller consumable units
	(4) page fault handlers capture the faulting key from the pte
	    instead of the vma. This closes a race between the key
	    update in the vma and a key fault caused by the key
	    programmed in the pte.
	(5) a key created with access-denied should also set it up to
	    deny write. Fixed it.
	(6) protection-key number is displayed in smaps the x86 way.

version v4:
	(1) patches no longer depend on the pte bits to program the
	    hpte -- comment by Balbir
	(2) documentation updates
	(3) fixed a bug in the selftest.
	(4) unlike x86, powerpc lets the signal handler change key
	    permission bits; the change will persist across signal
	    handler boundaries. Earlier we allowed the signal handler
	    to modify a field in the siginfo structure which would then
	    be used by the kernel to program the key protection
	    register (AMR) -- resolves an issue raised by Ben: "Calls
	    to sys_swapcontext with a made-up context will end up with
	    a crap AMR if done by code who didn't know about that
	    register".
	(5) these changes enable protection keys on 4k-page kernels as
	    well.

version v3:
	(1) split the patches into smaller consumable patches.
	(2) added the ability to disable execute permission on a key at
	    creation.
	(3) rename calc_pte_to_hpte_pkey_bits() to
	    pte_to_hpte_pkey_bits() -- suggested by Anshuman
	(4) some code optimization and clarity in do_page_fault()
	(5) a bug fix while invalidating a hpte slot in
	    __hash_page_4K() -- noticed by Aneesh

version v2:
	(1) documentation and selfte
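For readers new to the feature, here is a minimal, hypothetical
userspace sketch of the model the cover letter describes (allocate a
key, attach it to a range, let the hardware enforce it). It is not part
of the series; it assumes <sys/syscall.h> already defines
SYS_pkey_alloc, SYS_pkey_mprotect and SYS_pkey_free -- the selftests in
this series define the numbers themselves where the headers are too old.

/*
 * Sketch only: allocate a write-denying key, attach it to a mapping,
 * then show that reads still work while writes fault.
 */
#include <stdio.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/syscall.h>

#ifndef PKEY_DISABLE_ACCESS
#define PKEY_DISABLE_ACCESS	0x1
#define PKEY_DISABLE_WRITE	0x2
#endif

int main(void)
{
	size_t len = sysconf(_SC_PAGESIZE);
	char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
			 MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
	int pkey;

	if (buf == MAP_FAILED)
		return 1;

	/* allocate a key whose initial rights deny writes */
	pkey = syscall(SYS_pkey_alloc, 0, PKEY_DISABLE_WRITE);
	if (pkey < 0) {
		perror("pkey_alloc");	/* kernel or CPU without pkeys */
		return 1;
	}

	/* attach the key to the mapping; the PTEs now carry it */
	if (syscall(SYS_pkey_mprotect, buf, len,
		    PROT_READ | PROT_WRITE, pkey) < 0) {
		perror("pkey_mprotect");
		return 1;
	}

	printf("read still works: %d\n", buf[0]);

	buf[0] = 1;	/* write faults: SIGSEGV with si_code SEGV_PKUERR */

	syscall(SYS_pkey_free, pkey);	/* not reached after the fault */
	return 0;
}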
[RFC v6 02/62] powerpc: Free up four 64K PTE bits in 64K backed HPTE pages
Rearrange 64K PTE bits to free up bits 3, 4, 5 and 6 in the 64K backed HPTE pages. This along with the earlier patch will entirely free up the four bits from 64K PTE. The bit numbers are big-endian as defined in the ISA3.0 This patch does the following change to 64K PTE backed by 64K HPTE. H_PAGE_F_SECOND (S) which occupied bit 4 moves to the second part of the pte to bit 60. H_PAGE_F_GIX (G,I,X) which occupied bit 5, 6 and 7 also moves to the second part of the pte to bit 61, 62, 63, 64 respectively since bit 7 is now freed up, we move H_PAGE_BUSY (B) from bit 9 to bit 7. The second part of the PTE will hold (H_PAGE_F_SECOND|H_PAGE_F_GIX) at bit 60,61,62,63. NOTE: None of the bits in the secondary PTE were not used by 64k-HPTE backed PTE. Before the patch, the 64K HPTE backed 64k PTE format was as follows 0 1 2 3 4 5 6 7 8 9 10...63 : : : : : : : : : : :: v v v v v v v v v v vv ,-,-,-,-,--,--,--,--,-,-,-,-,-,--,-,-,-, |x|x|x| |S |G |I |X |x|B| |x|x||x|x|x|x| <- primary pte '_'_'_'_'__'__'__'__'_'_'_'_'_''_'_'_'_' | | | | | | | | | | | | |..| | | | | <- secondary pte '_'_'_'_'__'__'__'__'_'_'_'_'__'_'_'_'_' After the patch, the 64k HPTE backed 64k PTE format is as follows 0 1 2 3 4 5 6 7 8 9 10...63 : : : : : : : : : : :: v v v v v v v v v v vv ,-,-,-,-,--,--,--,--,-,-,-,-,-,--,-,-,-, |x|x|x| | | | |B |x| | |x|x||.|.|.|.| <- primary pte '_'_'_'_'__'__'__'__'_'_'_'_'_''_'_'_'_' | | | | | | | | | | | | |..|S|G|I|X| <- secondary pte '_'_'_'_'__'__'__'__'_'_'_'_'__'_'_'_'_' The above PTE changes is applicable to hugetlbpages aswell. The patch does the following code changes: a) moves the H_PAGE_F_SECOND and H_PAGE_F_GIX to 4k PTE header since it is no more needed b the 64k PTEs. b) abstracts out __real_pte() and __rpte_to_hidx() so the caller need not know the bit location of the slot. c) moves the slot bits the secondary pte. 
Signed-off-by: Ram Pai --- arch/powerpc/include/asm/book3s/64/hash-4k.h |3 ++ arch/powerpc/include/asm/book3s/64/hash-64k.h | 29 ++- arch/powerpc/include/asm/book3s/64/hash.h |3 -- arch/powerpc/mm/hash64_64k.c | 30 ++-- arch/powerpc/mm/hugetlbpage-hash64.c | 22 ++ 5 files changed, 55 insertions(+), 32 deletions(-) diff --git a/arch/powerpc/include/asm/book3s/64/hash-4k.h b/arch/powerpc/include/asm/book3s/64/hash-4k.h index f959c00..d2cf949 100644 --- a/arch/powerpc/include/asm/book3s/64/hash-4k.h +++ b/arch/powerpc/include/asm/book3s/64/hash-4k.h @@ -16,6 +16,9 @@ #define H_PUD_TABLE_SIZE (sizeof(pud_t) << H_PUD_INDEX_SIZE) #define H_PGD_TABLE_SIZE (sizeof(pgd_t) << H_PGD_INDEX_SIZE) +#define H_PAGE_F_GIX_SHIFT 56 +#define H_PAGE_F_SECOND_RPAGE_RSV2 /* HPTE is in 2ndary HPTEG */ +#define H_PAGE_F_GIX (_RPAGE_RSV3 | _RPAGE_RSV4 | _RPAGE_RPN44) #define H_PAGE_BUSY_RPAGE_RSV1 /* software: PTE & hash are busy */ /* PTE flags to conserve for HPTE identification */ diff --git a/arch/powerpc/include/asm/book3s/64/hash-64k.h b/arch/powerpc/include/asm/book3s/64/hash-64k.h index 62e580c..c281f18 100644 --- a/arch/powerpc/include/asm/book3s/64/hash-64k.h +++ b/arch/powerpc/include/asm/book3s/64/hash-64k.h @@ -12,7 +12,7 @@ */ #define H_PAGE_COMBO _RPAGE_RPN0 /* this is a combo 4k page */ #define H_PAGE_4K_PFN _RPAGE_RPN1 /* PFN is for a single 4k page */ -#define H_PAGE_BUSY_RPAGE_RPN42 /* software: PTE & hash are busy */ +#define H_PAGE_BUSY_RPAGE_RPN44 /* software: PTE & hash are busy */ /* * We need to differentiate between explicit huge page and THP huge @@ -21,8 +21,7 @@ #define H_PAGE_THP_HUGE H_PAGE_4K_PFN /* PTE flags to conserve for HPTE identification */ -#define _PAGE_HPTEFLAGS (H_PAGE_BUSY | H_PAGE_F_SECOND | \ -H_PAGE_F_GIX | H_PAGE_HASHPTE | H_PAGE_COMBO) +#define _PAGE_HPTEFLAGS (H_PAGE_BUSY | H_PAGE_HASHPTE | H_PAGE_COMBO) /* * we support 16 fragments per PTE page of 64K size. */ @@ -50,24 +49,22 @@ static inline real_pte_t __real_pte(pte_t pte, pte_t *ptep) unsigned long *hidxp; rpte.pte = pte; - rpte.hidx = 0; - if (pte_val(pte) & H_PAGE_COMBO) { - /* -* Make sure we order the hidx load against the H_PAGE_COMBO -* check. The store side ordering is done in __hash_page_4K -*/ - smp_rmb(); - hidxp = (unsigned long *)(ptep + PTRS_PER_PTE); - rpte.hidx = *hidxp; - } + /* +* Ensur
[RFC v6 11/62] powerpc: initial pkey plumbing
basic setup to initialize the pkey system. Only 64K kernel in HPT mode, enables the pkey system. Signed-off-by: Ram Pai --- arch/powerpc/Kconfig | 16 ++ arch/powerpc/include/asm/mmu_context.h |5 +++ arch/powerpc/include/asm/pkeys.h | 51 arch/powerpc/kernel/setup_64.c |4 ++ arch/powerpc/mm/Makefile |1 + arch/powerpc/mm/hash_utils_64.c|1 + arch/powerpc/mm/pkeys.c| 18 +++ 7 files changed, 96 insertions(+), 0 deletions(-) create mode 100644 arch/powerpc/include/asm/pkeys.h create mode 100644 arch/powerpc/mm/pkeys.c diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig index bf4391d..5c60fd6 100644 --- a/arch/powerpc/Kconfig +++ b/arch/powerpc/Kconfig @@ -855,6 +855,22 @@ config SECCOMP If unsure, say Y. Only embedded should say N here. +config PPC64_MEMORY_PROTECTION_KEYS + prompt "PowerPC Memory Protection Keys" + def_bool y + # Note: only available in 64-bit mode + depends on PPC64 && PPC_64K_PAGES + select ARCH_USES_HIGH_VMA_FLAGS + select ARCH_HAS_PKEYS + ---help--- + Memory Protection Keys provides a mechanism for enforcing + page-based protections, but without requiring modification of the + page tables when an application changes protection domains. + + For details, see Documentation/vm/protection-keys.txt + + If unsure, say y. + endmenu config ISA_DMA_API diff --git a/arch/powerpc/include/asm/mmu_context.h b/arch/powerpc/include/asm/mmu_context.h index da7e943..4b93547 100644 --- a/arch/powerpc/include/asm/mmu_context.h +++ b/arch/powerpc/include/asm/mmu_context.h @@ -181,5 +181,10 @@ static inline bool arch_vma_access_permitted(struct vm_area_struct *vma, /* by default, allow everything */ return true; } + +#ifndef CONFIG_PPC64_MEMORY_PROTECTION_KEYS +#define pkey_initialize() +#endif /* CONFIG_PPC64_MEMORY_PROTECTION_KEYS */ + #endif /* __KERNEL__ */ #endif /* __ASM_POWERPC_MMU_CONTEXT_H */ diff --git a/arch/powerpc/include/asm/pkeys.h b/arch/powerpc/include/asm/pkeys.h new file mode 100644 index 000..203d7de --- /dev/null +++ b/arch/powerpc/include/asm/pkeys.h @@ -0,0 +1,51 @@ +#ifndef _ASM_PPC64_PKEYS_H +#define _ASM_PPC64_PKEYS_H + +extern bool pkey_inited; +#define ARCH_VM_PKEY_FLAGS 0 + +static inline bool mm_pkey_is_allocated(struct mm_struct *mm, int pkey) +{ + return (pkey == 0); +} + +static inline int mm_pkey_alloc(struct mm_struct *mm) +{ + return -1; +} + +static inline int mm_pkey_free(struct mm_struct *mm, int pkey) +{ + return -EINVAL; +} + +/* + * Try to dedicate one of the protection keys to be used as an + * execute-only protection key. 
+ */ +static inline int execute_only_pkey(struct mm_struct *mm) +{ + return 0; +} + +static inline int arch_override_mprotect_pkey(struct vm_area_struct *vma, + int prot, int pkey) +{ + return 0; +} + +static inline int arch_set_user_pkey_access(struct task_struct *tsk, int pkey, + unsigned long init_val) +{ + return 0; +} + +static inline void pkey_initialize(void) +{ +#ifdef CONFIG_PPC_64K_PAGES + pkey_inited = !radix_enabled(); +#else + pkey_inited = false; +#endif +} +#endif /*_ASM_PPC64_PKEYS_H */ diff --git a/arch/powerpc/kernel/setup_64.c b/arch/powerpc/kernel/setup_64.c index 4640f6d..50accab 100644 --- a/arch/powerpc/kernel/setup_64.c +++ b/arch/powerpc/kernel/setup_64.c @@ -37,6 +37,7 @@ #include #include #include +#include #include #include @@ -316,6 +317,9 @@ void __init early_setup(unsigned long dt_ptr) /* Initialize the hash table or TLB handling */ early_init_mmu(); + /* initialize the key subsystem */ + pkey_initialize(); + /* * At this point, we can let interrupts switch to virtual mode * (the MMU has been setup), so adjust the MSR in the PACA to diff --git a/arch/powerpc/mm/Makefile b/arch/powerpc/mm/Makefile index 7414034..8cc2ff1 100644 --- a/arch/powerpc/mm/Makefile +++ b/arch/powerpc/mm/Makefile @@ -45,3 +45,4 @@ obj-$(CONFIG_PPC_COPRO_BASE) += copro_fault.o obj-$(CONFIG_SPAPR_TCE_IOMMU) += mmu_context_iommu.o obj-$(CONFIG_PPC_PTDUMP) += dump_linuxpagetables.o obj-$(CONFIG_PPC_HTDUMP) += dump_hashpagetable.o +obj-$(CONFIG_PPC64_MEMORY_PROTECTION_KEYS) += pkeys.o diff --git a/arch/powerpc/mm/hash_utils_64.c b/arch/powerpc/mm/hash_utils_64.c index d863696..f88423b 100644 --- a/arch/powerpc/mm/hash_utils_64.c +++ b/arch/powerpc/mm/hash_utils_64.c @@ -35,6 +35,7 @@ #include #include #include +#include #include #include diff --git a/arch/powerpc/mm/pkeys.c b/arch/powerpc/mm/pkeys.c new file mode 100644 index 000..c3acee1 --- /dev/null +++ b/arch/powerpc/mm/pkeys.c @@ -0,0 +1,18 @@ +/* + * PowerPC Memory Protection Keys management + * Copyright (c) 2015, Intel Corporation. + * C
[RFC v6 31/62] powerpc: Handle exceptions caused by pkey violation
Handle Data and Instruction exceptions caused by memory protection-key. The CPU will detect the key fault if the HPTE is already programmed with the key. However if the HPTE is not hashed, a key fault will not be detected by the hardware. The software will detect pkey violation in such a case. Signed-off-by: Ram Pai --- arch/powerpc/include/asm/reg.h |3 ++- arch/powerpc/mm/fault.c| 21 + 2 files changed, 23 insertions(+), 1 deletions(-) diff --git a/arch/powerpc/include/asm/reg.h b/arch/powerpc/include/asm/reg.h index ee04bc0..b7cbc8c 100644 --- a/arch/powerpc/include/asm/reg.h +++ b/arch/powerpc/include/asm/reg.h @@ -286,7 +286,8 @@ #define DSISR_SET_RC 0x0004 /* Failed setting of R/C bits */ #define DSISR_PGDIRFAULT 0x0002 /* Fault on page directory */ #define DSISR_PAGE_FAULT_MASK (DSISR_BIT32 | DSISR_PAGEATTR_CONFLT | \ - DSISR_BADACCESS | DSISR_DABRMATCH | DSISR_BIT43) + DSISR_BADACCESS | DSISR_KEYFAULT | \ + DSISR_DABRMATCH | DSISR_BIT43) #define SPRN_TBRL 0x10C /* Time Base Read Lower Register (user, R/O) */ #define SPRN_TBRU 0x10D /* Time Base Read Upper Register (user, R/O) */ #define SPRN_CIR 0x11B /* Chip Information Register (hyper, R/0) */ diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c index 3a7d580..ea74fe2 100644 --- a/arch/powerpc/mm/fault.c +++ b/arch/powerpc/mm/fault.c @@ -261,6 +261,13 @@ int do_page_fault(struct pt_regs *regs, unsigned long address, } #endif +#ifdef CONFIG_PPC64_MEMORY_PROTECTION_KEYS + if (error_code & DSISR_KEYFAULT) { + code = SEGV_PKUERR; + goto bad_area_nosemaphore; + } +#endif /* CONFIG_PPC64_MEMORY_PROTECTION_KEYS */ + /* We restore the interrupt state now */ if (!arch_irq_disabled_regs(regs)) local_irq_enable(); @@ -441,6 +448,20 @@ int do_page_fault(struct pt_regs *regs, unsigned long address, WARN_ON_ONCE(error_code & DSISR_PROTFAULT); #endif /* CONFIG_PPC_STD_MMU */ +#ifdef CONFIG_PPC64_MEMORY_PROTECTION_KEYS + if (!arch_vma_access_permitted(vma, flags & FAULT_FLAG_WRITE, + is_exec, 0)) { + code = SEGV_PKUERR; + goto bad_area; + } +#endif /* CONFIG_PPC64_MEMORY_PROTECTION_KEYS */ + + + /* handle_mm_fault() needs to know if its a instruction access +* fault. +*/ + if (is_exec) + flags |= FAULT_FLAG_INSTRUCTION; /* * If for any reason at all we couldn't handle the fault, * make sure we exit gracefully rather than endlessly redo -- 1.7.1 -- To unsubscribe from this list: send the line "unsubscribe linux-doc" in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
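On the userspace side, the effect of this patch is a SIGSEGV whose
si_code is SEGV_PKUERR. A small, hedged sketch of a handler that tells a
key fault apart from an ordinary one follows; SEGV_PKUERR may be missing
from older libc headers, hence the fallback define (value taken from the
kernel UAPI), and the offset-based si_pkey read that the selftests do is
omitted here.

#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#ifndef SEGV_PKUERR
#define SEGV_PKUERR	4	/* from the kernel UAPI */
#endif

static void segv_handler(int sig, siginfo_t *si, void *ctx)
{
	/* fprintf is not async-signal-safe; fine for a demo only */
	if (si->si_code == SEGV_PKUERR)
		fprintf(stderr, "pkey violation at %p\n", si->si_addr);
	else
		fprintf(stderr, "plain SEGV at %p\n", si->si_addr);
	/*
	 * A real handler would grant access again (AMR on powerpc,
	 * PKRU on x86) or fix the mapping; just exit here.
	 */
	_exit(1);
}

int main(void)
{
	struct sigaction sa;

	memset(&sa, 0, sizeof(sa));
	sa.sa_sigaction = segv_handler;
	sa.sa_flags = SA_SIGINFO;
	sigaction(SIGSEGV, &sa, NULL);

	/* ... touch memory protected by a key to trigger the handler ... */
	return 0;
}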
[RFC v6 22/62] powerpc: ability to associate pkey to a vma
arch-independent code expects the arch to map a pkey into the vma's protection bit setting. The patch provides that ability. Signed-off-by: Ram Pai --- arch/powerpc/include/asm/mman.h |8 +++- arch/powerpc/include/asm/pkeys.h | 18 +++--- 2 files changed, 22 insertions(+), 4 deletions(-) diff --git a/arch/powerpc/include/asm/mman.h b/arch/powerpc/include/asm/mman.h index 30922f6..067eec2 100644 --- a/arch/powerpc/include/asm/mman.h +++ b/arch/powerpc/include/asm/mman.h @@ -13,6 +13,7 @@ #include #include +#include #include /* @@ -22,7 +23,12 @@ static inline unsigned long arch_calc_vm_prot_bits(unsigned long prot, unsigned long pkey) { - return (prot & PROT_SAO) ? VM_SAO : 0; +#ifdef CONFIG_PPC64_MEMORY_PROTECTION_KEYS + return (((prot & PROT_SAO) ? VM_SAO : 0) | + pkey_to_vmflag_bits(pkey)); +#else + return ((prot & PROT_SAO) ? VM_SAO : 0); +#endif } #define arch_calc_vm_prot_bits(prot, pkey) arch_calc_vm_prot_bits(prot, pkey) diff --git a/arch/powerpc/include/asm/pkeys.h b/arch/powerpc/include/asm/pkeys.h index 1864148..c92b049 100644 --- a/arch/powerpc/include/asm/pkeys.h +++ b/arch/powerpc/include/asm/pkeys.h @@ -14,14 +14,26 @@ PKEY_DISABLE_WRITE |\ PKEY_DISABLE_EXECUTE) +#define ARCH_VM_PKEY_FLAGS (VM_PKEY_BIT0 | VM_PKEY_BIT1 | VM_PKEY_BIT2 | \ + VM_PKEY_BIT3 | VM_PKEY_BIT4) + +static inline u64 pkey_to_vmflag_bits(u16 pkey) +{ + if (!pkey_inited) + return 0x0UL; + + return (((pkey & 0x1UL) ? VM_PKEY_BIT0 : 0x0UL) | + ((pkey & 0x2UL) ? VM_PKEY_BIT1 : 0x0UL) | + ((pkey & 0x4UL) ? VM_PKEY_BIT2 : 0x0UL) | + ((pkey & 0x8UL) ? VM_PKEY_BIT3 : 0x0UL) | + ((pkey & 0x10UL) ? VM_PKEY_BIT4 : 0x0UL)); +} + #define arch_max_pkey() 32 #define AMR_RD_BIT 0x1UL #define AMR_WR_BIT 0x2UL #define IAMR_EX_BIT 0x1UL #define AMR_BITS_PER_PKEY 2 -#define ARCH_VM_PKEY_FLAGS (VM_PKEY_BIT0 | VM_PKEY_BIT1 | VM_PKEY_BIT2 | \ - VM_PKEY_BIT3 | VM_PKEY_BIT4) -#define AMR_BITS_PER_PKEY 2 /* * Bits are in BE format. * NOTE: key 31, 1, 0 are not used. -- 1.7.1 -- To unsubscribe from this list: send the line "unsubscribe linux-doc" in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
[RFC v6 23/62] powerpc: implementation for arch_override_mprotect_pkey()
arch independent code calls arch_override_mprotect_pkey() to return a pkey that best matches the requested protection. This patch provides the implementation. Signed-off-by: Ram Pai --- arch/powerpc/include/asm/mmu_context.h |5 +++ arch/powerpc/include/asm/pkeys.h | 14 - arch/powerpc/mm/pkeys.c| 47 3 files changed, 64 insertions(+), 2 deletions(-) diff --git a/arch/powerpc/include/asm/mmu_context.h b/arch/powerpc/include/asm/mmu_context.h index 4705dab..7232484 100644 --- a/arch/powerpc/include/asm/mmu_context.h +++ b/arch/powerpc/include/asm/mmu_context.h @@ -185,6 +185,11 @@ static inline bool arch_vma_access_permitted(struct vm_area_struct *vma, #ifndef CONFIG_PPC64_MEMORY_PROTECTION_KEYS #define pkey_initialize() #define pkey_mm_init(mm) + +static inline int vma_pkey(struct vm_area_struct *vma) +{ + return 0; +} #endif /* CONFIG_PPC64_MEMORY_PROTECTION_KEYS */ #endif /* __KERNEL__ */ diff --git a/arch/powerpc/include/asm/pkeys.h b/arch/powerpc/include/asm/pkeys.h index c92b049..94013af 100644 --- a/arch/powerpc/include/asm/pkeys.h +++ b/arch/powerpc/include/asm/pkeys.h @@ -29,6 +29,13 @@ static inline u64 pkey_to_vmflag_bits(u16 pkey) ((pkey & 0x10UL) ? VM_PKEY_BIT4 : 0x0UL)); } +static inline int vma_pkey(struct vm_area_struct *vma) +{ + if (!pkey_inited) + return 0; + return (vma->vm_flags & ARCH_VM_PKEY_FLAGS) >> VM_PKEY_SHIFT; +} + #define arch_max_pkey() 32 #define AMR_RD_BIT 0x1UL #define AMR_WR_BIT 0x2UL @@ -138,11 +145,14 @@ static inline int execute_only_pkey(struct mm_struct *mm) return __execute_only_pkey(mm); } - +extern int __arch_override_mprotect_pkey(struct vm_area_struct *vma, + int prot, int pkey); static inline int arch_override_mprotect_pkey(struct vm_area_struct *vma, int prot, int pkey) { - return 0; + if (!pkey_inited) + return 0; + return __arch_override_mprotect_pkey(vma, prot, pkey); } extern int __arch_set_user_pkey_access(struct task_struct *tsk, int pkey, diff --git a/arch/powerpc/mm/pkeys.c b/arch/powerpc/mm/pkeys.c index 34e8557..403f5ae 100644 --- a/arch/powerpc/mm/pkeys.c +++ b/arch/powerpc/mm/pkeys.c @@ -154,3 +154,50 @@ int __execute_only_pkey(struct mm_struct *mm) mm->context.execute_only_pkey = execute_only_pkey; return execute_only_pkey; } + +static inline bool vma_is_pkey_exec_only(struct vm_area_struct *vma) +{ + /* Do this check first since the vm_flags should be hot */ + if ((vma->vm_flags & (VM_READ | VM_WRITE | VM_EXEC)) != VM_EXEC) + return false; + + return (vma_pkey(vma) == vma->vm_mm->context.execute_only_pkey); +} + +/* + * This should only be called for *plain* mprotect calls. + */ +int __arch_override_mprotect_pkey(struct vm_area_struct *vma, int prot, + int pkey) +{ + /* +* Is this an mprotect_pkey() call? If so, never +* override the value that came from the user. +*/ + if (pkey != -1) + return pkey; + + /* +* If the currently associated pkey is execute-only, +* but the requested protection requires read or write, +* move it back to the default pkey. +*/ + if (vma_is_pkey_exec_only(vma) && + (prot & (PROT_READ|PROT_WRITE))) + return 0; + + /* +* the requested protection is execute-only. Hence +* lets use a execute-only pkey. +*/ + if (prot == PROT_EXEC) { + pkey = execute_only_pkey(vma->vm_mm); + if (pkey > 0) + return pkey; + } + + /* +* nothing to override. +*/ + return vma_pkey(vma); +} -- 1.7.1 -- To unsubscribe from this list: send the line "unsubscribe linux-doc" in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
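What the override buys userspace, assuming a kernel with this series and
an execute-only key available, is that a plain mprotect(PROT_EXEC) --
with no pkey syscalls involved at all -- gets the execute-only key
attached behind the scenes. A purely illustrative sketch:

/*
 * Sketch only: after a plain mprotect(PROT_EXEC), data reads are
 * expected to fault with SEGV_PKUERR because of the execute-only pkey
 * the kernel silently picked.
 */
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	size_t len = sysconf(_SC_PAGESIZE);
	volatile char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
				MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);

	if (p == MAP_FAILED)
		return 1;

	/* plain mprotect: the kernel picks the execute-only pkey */
	if (mprotect((void *)p, len, PROT_EXEC) < 0)
		return 1;

	/*
	 * On hardware where PROT_EXEC historically implied readability,
	 * this read would have succeeded; with the execute-only pkey it
	 * is expected to die with SIGSEGV/SEGV_PKUERR.
	 */
	return p[0];
}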
[RFC v6 37/62] x86: implementation for arch_pkeys_enabled()
arch_pkeys_enabled() returns true if the cpu supports X86_FEATURE_OSPKE. Signed-off-by: Ram Pai --- arch/x86/include/asm/pkeys.h |1 + arch/x86/kernel/fpu/xstate.c |5 + 2 files changed, 6 insertions(+), 0 deletions(-) diff --git a/arch/x86/include/asm/pkeys.h b/arch/x86/include/asm/pkeys.h index b3b09b9..fa82799 100644 --- a/arch/x86/include/asm/pkeys.h +++ b/arch/x86/include/asm/pkeys.h @@ -5,6 +5,7 @@ extern int arch_set_user_pkey_access(struct task_struct *tsk, int pkey, unsigned long init_val); +extern bool arch_pkeys_enabled(void); /* * Try to dedicate one of the protection keys to be used as an diff --git a/arch/x86/kernel/fpu/xstate.c b/arch/x86/kernel/fpu/xstate.c index c24ac1e..df594b8 100644 --- a/arch/x86/kernel/fpu/xstate.c +++ b/arch/x86/kernel/fpu/xstate.c @@ -918,6 +918,11 @@ int arch_set_user_pkey_access(struct task_struct *tsk, int pkey, return 0; } + +bool arch_pkeys_enabled(void) +{ + return boot_cpu_has(X86_FEATURE_OSPKE); +} #endif /* ! CONFIG_ARCH_HAS_PKEYS */ /* -- 1.7.1 -- To unsubscribe from this list: send the line "unsubscribe linux-doc" in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
[RFC v6 36/62] mm: introduce arch_pkeys_enabled()
Only the architecture knows if it supports protection keys. Hence
introducing arch_pkeys_enabled(). This function is needed by arch
neutral code. One use case is -- to determine if the protection key
needs to be displayed in smaps.

Signed-off-by: Ram Pai
---
 include/linux/pkeys.h |    5 +++++
 1 files changed, 5 insertions(+), 0 deletions(-)

diff --git a/include/linux/pkeys.h b/include/linux/pkeys.h
index a1bacf1..d120810 100644
--- a/include/linux/pkeys.h
+++ b/include/linux/pkeys.h
@@ -34,6 +34,11 @@ static inline int arch_set_user_pkey_access(struct task_struct *tsk, int pkey,
 	return 0;
 }
 
+static inline bool arch_pkeys_enabled(void)
+{
+	return false;
+}
+
 static inline void copy_init_pkru_to_fpregs(void)
 {
 }
--
1.7.1
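For context, the kind of arch-neutral caller this helper enables looks
roughly like the sketch below. It is not a hunk from this series, and it
assumes vma_pkey() is visible to generic code the way the per-arch
headers in this series (and on x86) provide it.

/* sketch only: gate the smaps field on runtime pkey support */
#include <linux/mm.h>
#include <linux/pkeys.h>
#include <linux/seq_file.h>

static void show_vma_protection_key(struct seq_file *m,
				    struct vm_area_struct *vma)
{
	/* only emit the field when the CPU/arch actually has pkeys */
	if (arch_pkeys_enabled())
		seq_printf(m, "ProtectionKey:  %8u\n", vma_pkey(vma));
}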
[RFC v6 53/62] selftest/vm: powerpc implementation for generic abstractions
Introduce powerpc implementation for the various abstactions. Signed-off-by: Ram Pai --- tools/testing/selftests/vm/pkey-helpers.h| 97 +++-- tools/testing/selftests/vm/protection_keys.c | 33 + 2 files changed, 107 insertions(+), 23 deletions(-) diff --git a/tools/testing/selftests/vm/pkey-helpers.h b/tools/testing/selftests/vm/pkey-helpers.h index 5211019..59172cc 100644 --- a/tools/testing/selftests/vm/pkey-helpers.h +++ b/tools/testing/selftests/vm/pkey-helpers.h @@ -17,27 +17,58 @@ #define u16 uint16_t #define u32 uint32_t #define u64 uint64_t -#define pkey_reg_t u32 -#ifdef __i386__ +#ifdef __i386__ /* arch */ + #define SYS_mprotect_key 380 #define SYS_pkey_alloc 381 #define SYS_pkey_free 382 #define REG_IP_IDX REG_EIP #define si_pkey_offset 0x14 -#else + +#define NR_PKEYS 16 +#define NR_RESERVED_PKEYS 1 +#define PKEY_BITS_PER_PKEY 2 +#define PKEY_DISABLE_ACCESS0x1 +#define PKEY_DISABLE_WRITE 0x2 +#define HPAGE_SIZE (1UL<<21) +#define pkey_reg_t u32 + +#elif __powerpc64__ /* arch */ + +#define SYS_mprotect_key 386 +#define SYS_pkey_alloc 384 +#define SYS_pkey_free 385 +#define si_pkey_offset 0x20 +#define REG_IP_IDX PT_NIP +#define REG_TRAPNO PT_TRAP +#define REG_AMR45 +#define gregs gp_regs +#define fpregs fp_regs + +#define NR_PKEYS 32 +#define NR_RESERVED_PKEYS 3 +#define PKEY_BITS_PER_PKEY 2 +#define PKEY_DISABLE_ACCESS0x3 /* disable read and write */ +#define PKEY_DISABLE_WRITE 0x2 +#define HPAGE_SIZE (1UL<<24) +#define pkey_reg_t u64 + +#else /* arch */ + #define SYS_mprotect_key 329 #define SYS_pkey_alloc 330 #define SYS_pkey_free 331 #define REG_IP_IDX REG_RIP #define si_pkey_offset 0x20 -#endif - #define NR_PKEYS 16 #define PKEY_BITS_PER_PKEY 2 #define PKEY_DISABLE_ACCESS0x1 #define PKEY_DISABLE_WRITE 0x2 #define HPAGE_SIZE (1UL<<21) + NOT SUPPORTED + +#endif /* arch */ #ifndef DEBUG_LEVEL #define DEBUG_LEVEL 0 @@ -46,7 +77,11 @@ static inline u32 pkey_to_shift(int pkey) { +#ifdef __i386__ /* arch */ return pkey * PKEY_BITS_PER_PKEY; +#elif __powerpc64__ /* arch */ + return (NR_PKEYS - pkey - 1) * PKEY_BITS_PER_PKEY; +#endif /* arch */ } static inline pkey_reg_t reset_bits(int pkey, pkey_reg_t bits) @@ -107,6 +142,7 @@ static inline void sigsafe_printf(const char *format, ...) 
extern pkey_reg_t shadow_pkey_reg; static inline pkey_reg_t __rdpkey_reg(void) { +#ifdef __i386__ /* arch */ unsigned int eax, edx; unsigned int ecx = 0; pkey_reg_t pkey_reg; @@ -114,7 +150,13 @@ static inline pkey_reg_t __rdpkey_reg(void) asm volatile(".byte 0x0f,0x01,0xee\n\t" : "=a" (eax), "=d" (edx) : "c" (ecx)); - pkey_reg = eax; +#elif __powerpc64__ /* arch */ + pkey_reg_t eax; + pkey_reg_t pkey_reg; + + asm volatile("mfspr %0, 0xd" : "=r" ((pkey_reg_t)(eax))); +#endif /* arch */ + pkey_reg = (pkey_reg_t)eax; return pkey_reg; } @@ -134,6 +176,7 @@ static inline pkey_reg_t _rdpkey_reg(int line) static inline void __wrpkey_reg(pkey_reg_t pkey_reg) { pkey_reg_t eax = pkey_reg; +#ifdef __i386__ /* arch */ pkey_reg_t ecx = 0; pkey_reg_t edx = 0; @@ -142,6 +185,14 @@ static inline void __wrpkey_reg(pkey_reg_t pkey_reg) asm volatile(".byte 0x0f,0x01,0xef\n\t" : : "a" (eax), "c" (ecx), "d" (edx)); assert(pkey_reg == __rdpkey_reg()); + +#else /* arch */ + dprintf4("%s() changing %llx to %llx\n", +__func__, __rdpkey_reg(), pkey_reg); + asm volatile("mtspr 0xd, %0" : : "r" ((unsigned long)(eax)) : "memory"); +#endif /* arch */ + dprintf4("%s() pkey register after changing %016lx to %016lx\n", +__func__, __rdpkey_reg(), pkey_reg); } static inline void wrpkey_reg(pkey_reg_t pkey_reg) @@ -188,6 +239,8 @@ static inline void __pkey_write_allow(int pkey, int do_allow_write) dprintf4("pkey_reg now: %08x\n", rdpkey_reg()); } +#ifdef __i386__ /* arch */ + #define PAGE_SIZE 4096 #define MB (1<<20) @@ -270,8 +323,18 @@ static inline void __page_o_noops(void) /* 8-bytes of instruction * 512 bytes = 1 page */ asm(".rept 512 ; nopl 0x7eee(%eax) ; .endr"); } +#elif __powerpc64__ /* arch */ -#endif /* _PKEYS_HELPER_H */ +#define PAGE_SIZE (0x1UL << 16) +static inline int cpu_has_pku(void) +{ + return 1; +} + +/* 8-bytes of instruction * 16384bytes = 1 page */ +#define __page_o_noops() asm(".rept 16384 ; nop; .endr") + +#endif /* arch */ #define ARRAY_SIZE(x) (sizeof(x) / sizeof(*(x))) #define ALIGN_UP(x, align_to) (((x) + ((align_to)-1)) & ~((align_to)-1)) @@ -303,11 +366,29 @@ static inline void __page_o_noops(void) static inline int open_hugepage_file(int flag) { - return open("/sys/kernel/mm/hugepages/hugepages-2
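The main difference these helpers hide is the pkey-to-bit mapping: PKRU numbers keys from the low-order bits up, while the AMR numbers them from the high-order end down. A worked sketch of pkey_to_shift() under the constants defined above (2 bits per key, 16 keys on x86, 32 on powerpc):

static inline unsigned int shift_x86(int pkey)
{
	return pkey * 2;		/* pkey 1 -> bits 2-3 of PKRU  */
}

static inline unsigned int shift_ppc64(int pkey)
{
	return (32 - pkey - 1) * 2;	/* pkey 1 -> bits 60-61 of AMR */
}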
[RFC v6 54/62] selftest/vm: fix an assertion in test_pkey_alloc_exhaust()
The maximum number of keys that can be allocated has to take into consideration that some keys are reserved by the architecture of specific purpose and cannot be allocated. Fix the assertion in test_pkey_alloc_exhaust() Signed-off-by: Ram Pai --- tools/testing/selftests/vm/protection_keys.c | 10 +- 1 files changed, 5 insertions(+), 5 deletions(-) diff --git a/tools/testing/selftests/vm/protection_keys.c b/tools/testing/selftests/vm/protection_keys.c index 1a28c88..37645a5 100644 --- a/tools/testing/selftests/vm/protection_keys.c +++ b/tools/testing/selftests/vm/protection_keys.c @@ -1159,12 +1159,12 @@ void test_pkey_alloc_exhaust(int *ptr, u16 pkey) pkey_assert(i < NR_PKEYS*2); /* -* There are 16 pkeys supported in hardware. One is taken -* up for the default (0) and another can be taken up by -* an execute-only mapping. Ensure that we can allocate -* at least 14 (16-2). +* There are NR_PKEYS pkeys supported in hardware. NR_RESERVED_KEYS +* are reserved. One can be taken up by an execute-only mapping. +* Ensure that we can allocate at least the remaining. */ - pkey_assert(i >= NR_PKEYS-2); + pkey_assert(i >= (NR_PKEYS-NR_RESERVED_PKEYS-1)); + for (i = 0; i < nr_allocated_pkeys; i++) { err = sys_pkey_free(allocated_pkeys[i]); -- 1.7.1 -- To unsubscribe from this list: send the line "unsubscribe linux-doc" in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
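Plugging in the constants introduced earlier in this series, the new lower bound evaluates as sketched below (one further key may be consumed by an execute-only mapping):

/*
 *   x86:     NR_PKEYS - NR_RESERVED_PKEYS - 1 = 16 - 1 - 1 = 14
 *   powerpc: NR_PKEYS - NR_RESERVED_PKEYS - 1 = 32 - 3 - 1 = 28
 */
pkey_assert(i >= (NR_PKEYS - NR_RESERVED_PKEYS - 1));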
[RFC v6 61/62] Documentation/x86: Move protection key documentation to arch-neutral directory
Since PowerPC and Intel both support memory protection keys, moving the documenation to arch-neutral directory. Signed-off-by: Ram Pai --- Documentation/vm/protection-keys.txt | 85 + Documentation/x86/protection-keys.txt | 85 - 2 files changed, 85 insertions(+), 85 deletions(-) create mode 100644 Documentation/vm/protection-keys.txt delete mode 100644 Documentation/x86/protection-keys.txt diff --git a/Documentation/vm/protection-keys.txt b/Documentation/vm/protection-keys.txt new file mode 100644 index 000..b643045 --- /dev/null +++ b/Documentation/vm/protection-keys.txt @@ -0,0 +1,85 @@ +Memory Protection Keys for Userspace (PKU aka PKEYs) is a CPU feature +which will be found on future Intel CPUs. + +Memory Protection Keys provides a mechanism for enforcing page-based +protections, but without requiring modification of the page tables +when an application changes protection domains. It works by +dedicating 4 previously ignored bits in each page table entry to a +"protection key", giving 16 possible keys. + +There is also a new user-accessible register (PKRU) with two separate +bits (Access Disable and Write Disable) for each key. Being a CPU +register, PKRU is inherently thread-local, potentially giving each +thread a different set of protections from every other thread. + +There are two new instructions (RDPKRU/WRPKRU) for reading and writing +to the new register. The feature is only available in 64-bit mode, +even though there is theoretically space in the PAE PTEs. These +permissions are enforced on data access only and have no effect on +instruction fetches. + +=== Syscalls === + +There are 3 system calls which directly interact with pkeys: + + int pkey_alloc(unsigned long flags, unsigned long init_access_rights) + int pkey_free(int pkey); + int pkey_mprotect(unsigned long start, size_t len, + unsigned long prot, int pkey); + +Before a pkey can be used, it must first be allocated with +pkey_alloc(). An application calls the WRPKRU instruction +directly in order to change access permissions to memory covered +with a key. In this example WRPKRU is wrapped by a C function +called pkey_set(). + + int real_prot = PROT_READ|PROT_WRITE; + pkey = pkey_alloc(0, PKEY_DENY_WRITE); + ptr = mmap(NULL, PAGE_SIZE, PROT_NONE, MAP_ANONYMOUS|MAP_PRIVATE, -1, 0); + ret = pkey_mprotect(ptr, PAGE_SIZE, real_prot, pkey); + ... application runs here + +Now, if the application needs to update the data at 'ptr', it can +gain access, do the update, then remove its write access: + + pkey_set(pkey, 0); // clear PKEY_DENY_WRITE + *ptr = foo; // assign something + pkey_set(pkey, PKEY_DENY_WRITE); // set PKEY_DENY_WRITE again + +Now when it frees the memory, it will also free the pkey since it +is no longer in use: + + munmap(ptr, PAGE_SIZE); + pkey_free(pkey); + +(Note: pkey_set() is a wrapper for the RDPKRU and WRPKRU instructions. + An example implementation can be found in + tools/testing/selftests/x86/protection_keys.c) + +=== Behavior === + +The kernel attempts to make protection keys consistent with the +behavior of a plain mprotect(). 
For instance if you do this: + + mprotect(ptr, size, PROT_NONE); + something(ptr); + +you can expect the same effects with protection keys when doing this: + + pkey = pkey_alloc(0, PKEY_DISABLE_WRITE | PKEY_DISABLE_READ); + pkey_mprotect(ptr, size, PROT_READ|PROT_WRITE, pkey); + something(ptr); + +That should be true whether something() is a direct access to 'ptr' +like: + + *ptr = foo; + +or when the kernel does the access on the application's behalf like +with a read(): + + read(fd, ptr, 1); + +The kernel will send a SIGSEGV in both cases, but si_code will be set +to SEGV_PKERR when violating protection keys versus SEGV_ACCERR when +the plain mprotect() permissions are violated. diff --git a/Documentation/x86/protection-keys.txt b/Documentation/x86/protection-keys.txt deleted file mode 100644 index b643045..000 --- a/Documentation/x86/protection-keys.txt +++ /dev/null @@ -1,85 +0,0 @@ -Memory Protection Keys for Userspace (PKU aka PKEYs) is a CPU feature -which will be found on future Intel CPUs. - -Memory Protection Keys provides a mechanism for enforcing page-based -protections, but without requiring modification of the page tables -when an application changes protection domains. It works by -dedicating 4 previously ignored bits in each page table entry to a -"protection key", giving 16 possible keys. - -There is also a new user-accessible register (PKRU) with two separate -bits (Access Disable and Write Disable) for each key. Being a CPU -register, PKRU is inherently thread-local, potentially giving each -thread a different set of protections from every oth
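The text above points at the selftest for a pkey_set() implementation. A minimal x86-only sketch, modeled on those helpers; the helper names and the absence of error handling are illustrative only:

static inline unsigned int rdpkru(void)
{
	unsigned int eax, edx, ecx = 0;

	asm volatile(".byte 0x0f,0x01,0xee\n\t"	/* RDPKRU */
		     : "=a" (eax), "=d" (edx) : "c" (ecx));
	return eax;
}

static inline void wrpkru(unsigned int pkru)
{
	asm volatile(".byte 0x0f,0x01,0xef\n\t"	/* WRPKRU */
		     : : "a" (pkru), "c" (0), "d" (0));
}

static void pkey_set(int pkey, unsigned int rights)
{
	unsigned int pkru = rdpkru();

	pkru &= ~(0x3 << (pkey * 2));	/* clear both bits for this key      */
	pkru |= rights << (pkey * 2);	/* e.g. PKEY_DENY_WRITE, or 0        */
	wrpkru(pkru);
}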
[RFC v6 62/62] Documentation/vm: PowerPC specific updates to memory protection keys
Add documentation updates that capture PowerPC specific changes. Signed-off-by: Ram Pai --- Documentation/vm/protection-keys.txt | 90 - 1 files changed, 65 insertions(+), 25 deletions(-) diff --git a/Documentation/vm/protection-keys.txt b/Documentation/vm/protection-keys.txt index b643045..9330105 100644 --- a/Documentation/vm/protection-keys.txt +++ b/Documentation/vm/protection-keys.txt @@ -1,22 +1,45 @@ -Memory Protection Keys for Userspace (PKU aka PKEYs) is a CPU feature -which will be found on future Intel CPUs. - -Memory Protection Keys provides a mechanism for enforcing page-based -protections, but without requiring modification of the page tables -when an application changes protection domains. It works by -dedicating 4 previously ignored bits in each page table entry to a -"protection key", giving 16 possible keys. - -There is also a new user-accessible register (PKRU) with two separate -bits (Access Disable and Write Disable) for each key. Being a CPU -register, PKRU is inherently thread-local, potentially giving each -thread a different set of protections from every other thread. - -There are two new instructions (RDPKRU/WRPKRU) for reading and writing -to the new register. The feature is only available in 64-bit mode, -even though there is theoretically space in the PAE PTEs. These -permissions are enforced on data access only and have no effect on -instruction fetches. +Memory Protection Keys for Userspace (PKU aka PKEYs) is a CPU feature found on +future Intel CPUs and on PowerPC 7 and higher CPUs. + +Memory Protection Keys provide a mechanism for enforcing page-based +protections, but without requiring modification of the page tables when an +application changes protection domains. + +It works by dedicating bits in each page table entry to a "protection key". +There is also a user-accessible register with two separate bits for each +key. Being a CPU register, the user-accessible register is inherently +thread-local, potentially giving each thread a different set of protections +from every other thread. + +On Intel: + + Four previously bits are used the page table entry giving 16 possible keys. + + The user accessible register(PKRU) has a bit each per key to disable + access and to disable write. + + The feature is only available in 64-bit mode, even though there is + theoretically space in the PAE PTEs. These permissions are enforced on + data access only and have no effect on instruction fetches. + +On PowerPC: + + Five bits in the page table entry are used giving 32 possible keys. + This support is currently for Hash Page Table mode only. + + The user accessible register(AMR) has a bit each per key to disable + read and write. Access disable can be achieved by disabling + read and write. + + 'mtspr 0xd, mem' reads the AMR register + 'mfspr mem, 0xd' writes into the AMR register. + + Execution can be disabled by allocating a key with execute-disabled + permission. The execute-permissions on the key; however, cannot be + changed through a user accessible register. The CPU will not allow + execution of instruction in pages that are associated with + execute-disabled key. + === Syscalls === @@ -28,9 +51,9 @@ There are 3 system calls which directly interact with pkeys: unsigned long prot, int pkey); Before a pkey can be used, it must first be allocated with -pkey_alloc(). An application calls the WRPKRU instruction +pkey_alloc(). An application calls the WRPKRU/AMR instruction directly in order to change access permissions to memory covered -with a key. 
In this example WRPKRU is wrapped by a C function +with a key. In this example WRPKRU/AMR is wrapped by a C function called pkey_set(). int real_prot = PROT_READ|PROT_WRITE; @@ -52,11 +75,11 @@ is no longer in use: munmap(ptr, PAGE_SIZE); pkey_free(pkey); -(Note: pkey_set() is a wrapper for the RDPKRU and WRPKRU instructions. +(Note: pkey_set() is a wrapper for the RDPKRU,WRPKRU or AMR instructions. An example implementation can be found in - tools/testing/selftests/x86/protection_keys.c) + tools/testing/selftests/vm/protection_keys.c) -=== Behavior === +=== Behavior = The kernel attempts to make protection keys consistent with the behavior of a plain mprotect(). For instance if you do this: @@ -66,7 +89,7 @@ behavior of a plain mprotect(). For instance if you do this: you can expect the same effects with protection keys when doing this: - pkey = pkey_alloc(0, PKEY_DISABLE_WRITE | PKEY_DISABLE_READ); + pkey = pkey_alloc(0, PKEY_DISABLE_ACCESS); pkey_mprotect(ptr, size, PROT_READ|PROT_WRITE, pkey); something(ptr); @@ -83,3 +106,20
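On powerpc the user-accessible register is the AMR, SPR 0xd; the selftest helpers in this series read it with mfspr and write it with mtspr. A minimal userspace sketch, with read_amr()/write_amr() as illustrative names:

typedef unsigned long pkey_reg_t;

static inline pkey_reg_t read_amr(void)
{
	pkey_reg_t amr;

	asm volatile("mfspr %0, 0xd" : "=r" (amr));	/* read AMR  */
	return amr;
}

static inline void write_amr(pkey_reg_t amr)
{
	asm volatile("mtspr 0xd, %0" : : "r" (amr) : "memory");	/* write AMR */
}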
[RFC v6 60/62] selftest/vm: sub-page allocator
introduce a new allocator that allocates 4k hardware-pages to back 64k linux-page. This allocator is only applicable on powerpc. Signed-off-by: Ram Pai --- tools/testing/selftests/vm/protection_keys.c | 29 ++ 1 files changed, 29 insertions(+), 0 deletions(-) diff --git a/tools/testing/selftests/vm/protection_keys.c b/tools/testing/selftests/vm/protection_keys.c index d9474f9..bffa890 100644 --- a/tools/testing/selftests/vm/protection_keys.c +++ b/tools/testing/selftests/vm/protection_keys.c @@ -758,6 +758,34 @@ void free_pkey_malloc(void *ptr) return ptr; } +void *malloc_pkey_with_mprotect_subpage(long size, int prot, u16 pkey) +{ + void *ptr; + int ret; + +#ifndef __powerpc64__ + return PTR_ERR_ENOTSUP; +#endif /* __powerpc64__ */ + dprintf1("doing %s(size=%ld, prot=0x%x, pkey=%d)\n", __func__, + size, prot, pkey); + pkey_assert(pkey < NR_PKEYS); + ptr = mmap(NULL, size, prot, MAP_ANONYMOUS|MAP_PRIVATE, -1, 0); + pkey_assert(ptr != (void *)-1); + + ret = syscall(__NR_subpage_prot, ptr, size, NULL); + if (ret) { + perror("subpage_perm"); + return PTR_ERR_ENOTSUP; + } + + ret = mprotect_pkey((void *)ptr, PAGE_SIZE, prot, pkey); + pkey_assert(!ret); + record_pkey_malloc(ptr, size); + + dprintf1("%s() for pkey %d @ %p\n", __func__, pkey, ptr); + return ptr; +} + void *malloc_pkey_anon_huge(long size, int prot, u16 pkey) { int ret; @@ -880,6 +908,7 @@ void setup_hugetlbfs(void) void *(*pkey_malloc[])(long size, int prot, u16 pkey) = { malloc_pkey_with_mprotect, + malloc_pkey_with_mprotect_subpage, malloc_pkey_anon_huge, malloc_pkey_hugetlb /* can not do direct with the pkey_mprotect() API: -- 1.7.1 -- To unsubscribe from this list: send the line "unsubscribe linux-doc" in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
[RFC v6 59/62] selftest/vm: detect write violation on a mapped access-denied-key page
detect write-violation on a page to which access-disabled key is associated much after the page is mapped. Signed-off-by: Ram Pai --- tools/testing/selftests/vm/protection_keys.c | 13 + 1 files changed, 13 insertions(+), 0 deletions(-) diff --git a/tools/testing/selftests/vm/protection_keys.c b/tools/testing/selftests/vm/protection_keys.c index 07df8cf..d9474f9 100644 --- a/tools/testing/selftests/vm/protection_keys.c +++ b/tools/testing/selftests/vm/protection_keys.c @@ -1092,6 +1092,18 @@ void test_write_of_access_disabled_region(int *ptr, u16 pkey) *ptr = __LINE__; expected_pkey_fault(pkey); } + +void test_write_of_access_disabled_region_with_page_already_mapped(int *ptr, + u16 pkey) +{ + *ptr = __LINE__; + dprintf1("disabling access; after accessing the page, " + " to PKEY[%02d], doing write\n", pkey); + pkey_access_deny(pkey); + *ptr = __LINE__; + expected_pkey_fault(pkey); +} + void test_kernel_write_of_access_disabled_region(int *ptr, u16 pkey) { int ret; @@ -1377,6 +1389,7 @@ void test_mprotect_pkey_on_unsupported_cpu(int *ptr, u16 pkey) test_write_of_write_disabled_region, test_write_of_write_disabled_region_with_page_already_mapped, test_write_of_access_disabled_region, + test_write_of_access_disabled_region_with_page_already_mapped, test_kernel_write_of_access_disabled_region, test_kernel_write_of_write_disabled_region, test_kernel_gup_of_access_disabled_region, -- 1.7.1 -- To unsubscribe from this list: send the line "unsubscribe linux-doc" in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
[RFC v6 56/62] selftest/vm: detect no key violation on a freed key
a access-denied key should not trigger any key violation after the key has been freed. Signed-off-by: Ram Pai --- tools/testing/selftests/vm/protection_keys.c | 25 + 1 files changed, 25 insertions(+), 0 deletions(-) diff --git a/tools/testing/selftests/vm/protection_keys.c b/tools/testing/selftests/vm/protection_keys.c index bf27bcd..47c23cc 100644 --- a/tools/testing/selftests/vm/protection_keys.c +++ b/tools/testing/selftests/vm/protection_keys.c @@ -1026,6 +1026,30 @@ void test_read_of_access_disabled_region_with_page_already_mapped(int *ptr, expected_pkey_fault(pkey); } +void test_read_of_access_disabled_but_freed_key_region(int *ptr, u16 pkey) +{ + int ptr_contents; + + dprintf1("disabling access to PKEY[%02d], doing read @ %p\n", +pkey, ptr); + + /* read the content */ + ptr_contents = read_ptr(ptr); + do_not_expect_pkey_fault(); + + /* deny key access */ + pkey_access_deny(pkey); + ptr_contents = read_ptr(ptr); + dprintf1("*ptr: %d\n", ptr_contents); + expected_pkey_fault(pkey); + + /* free the key without restoring access */ + pkey_access_deny(pkey); + sys_pkey_free(pkey); + ptr_contents = read_ptr(ptr); + do_not_expect_pkey_fault(); +} + void test_write_of_write_disabled_region(int *ptr, u16 pkey) { dprintf1("disabling write access to PKEY[%02d], doing write\n", pkey); @@ -1333,6 +1357,7 @@ void test_mprotect_pkey_on_unsupported_cpu(int *ptr, u16 pkey) test_pkey_syscalls_on_non_allocated_pkey, test_pkey_syscalls_bad_args, test_pkey_alloc_exhaust, + test_read_of_access_disabled_but_freed_key_region, }; void run_tests_once(void) -- 1.7.1 -- To unsubscribe from this list: send the line "unsubscribe linux-doc" in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
[RFC v6 58/62] selftest/vm: detect no write key-violation on a freed key
a write-denied key should not trigger any key violation after the key has been freed. Signed-off-by: Ram Pai --- tools/testing/selftests/vm/protection_keys.c | 18 ++ 1 files changed, 18 insertions(+), 0 deletions(-) diff --git a/tools/testing/selftests/vm/protection_keys.c b/tools/testing/selftests/vm/protection_keys.c index e35cef5..07df8cf 100644 --- a/tools/testing/selftests/vm/protection_keys.c +++ b/tools/testing/selftests/vm/protection_keys.c @@ -1068,6 +1068,23 @@ void test_write_of_write_disabled_region(int *ptr, u16 pkey) *ptr = __LINE__; expected_pkey_fault(pkey); } + +void test_write_of_write_disabled_but_freed_key_region(int *ptr, u16 pkey) +{ + dprintf1("disabling write access to PKEY[%02d], doing write\n", pkey); + *ptr = __LINE__; + do_not_expect_pkey_fault(); + + pkey_write_deny(pkey); + *ptr = __LINE__; + expected_pkey_fault(pkey); + + pkey_write_deny(pkey); + sys_pkey_free(pkey); + *ptr = __LINE__; + do_not_expect_pkey_fault(); +} + void test_write_of_access_disabled_region(int *ptr, u16 pkey) { dprintf1("disabling access to PKEY[%02d], doing write\n", pkey); @@ -1370,6 +1387,7 @@ void test_mprotect_pkey_on_unsupported_cpu(int *ptr, u16 pkey) test_pkey_syscalls_bad_args, test_pkey_alloc_exhaust, test_read_of_access_disabled_but_freed_key_region, + test_write_of_write_disabled_but_freed_key_region, }; void run_tests_once(void) -- 1.7.1 -- To unsubscribe from this list: send the line "unsubscribe linux-doc" in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
[RFC v6 57/62] selftest/vm: associate key on a mapped page and detect write violation
detect write-violation on a page to which write-disabled key is associated much after the page is mapped. Signed-off-by: Ram Pai --- tools/testing/selftests/vm/protection_keys.c | 12 1 files changed, 12 insertions(+), 0 deletions(-) diff --git a/tools/testing/selftests/vm/protection_keys.c b/tools/testing/selftests/vm/protection_keys.c index 47c23cc..e35cef5 100644 --- a/tools/testing/selftests/vm/protection_keys.c +++ b/tools/testing/selftests/vm/protection_keys.c @@ -1050,6 +1050,17 @@ void test_read_of_access_disabled_but_freed_key_region(int *ptr, u16 pkey) do_not_expect_pkey_fault(); } +void test_write_of_write_disabled_region_with_page_already_mapped(int *ptr, + u16 pkey) +{ + *ptr = __LINE__; + dprintf1("disabling write access; after accessing the page, " + "to PKEY[%02d], doing write\n", pkey); + pkey_write_deny(pkey); + *ptr = __LINE__; + expected_pkey_fault(pkey); +} + void test_write_of_write_disabled_region(int *ptr, u16 pkey) { dprintf1("disabling write access to PKEY[%02d], doing write\n", pkey); @@ -1347,6 +1358,7 @@ void test_mprotect_pkey_on_unsupported_cpu(int *ptr, u16 pkey) test_read_of_access_disabled_region, test_read_of_access_disabled_region_with_page_already_mapped, test_write_of_write_disabled_region, + test_write_of_write_disabled_region_with_page_already_mapped, test_write_of_access_disabled_region, test_kernel_write_of_access_disabled_region, test_kernel_write_of_write_disabled_region, -- 1.7.1 -- To unsubscribe from this list: send the line "unsubscribe linux-doc" in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
[RFC v6 55/62] selftest/vm: associate key on a mapped page and detect access violation
detect access-violation on a page to which access-disabled key is associated much after the page is mapped. Signed-off-by: Ram Pai --- tools/testing/selftests/vm/protection_keys.c | 19 +++ 1 files changed, 19 insertions(+), 0 deletions(-) diff --git a/tools/testing/selftests/vm/protection_keys.c b/tools/testing/selftests/vm/protection_keys.c index 37645a5..bf27bcd 100644 --- a/tools/testing/selftests/vm/protection_keys.c +++ b/tools/testing/selftests/vm/protection_keys.c @@ -1008,6 +1008,24 @@ void test_read_of_access_disabled_region(int *ptr, u16 pkey) dprintf1("*ptr: %d\n", ptr_contents); expected_pkey_fault(pkey); } + +void test_read_of_access_disabled_region_with_page_already_mapped(int *ptr, + u16 pkey) +{ + int ptr_contents; + + dprintf1("disabling access to PKEY[%02d], doing read @ %p\n", + pkey, ptr); + ptr_contents = read_ptr(ptr); + dprintf1("reading ptr before disabling the read : %d\n", + ptr_contents); + rdpkey_reg(); + pkey_access_deny(pkey); + ptr_contents = read_ptr(ptr); + dprintf1("*ptr: %d\n", ptr_contents); + expected_pkey_fault(pkey); +} + void test_write_of_write_disabled_region(int *ptr, u16 pkey) { dprintf1("disabling write access to PKEY[%02d], doing write\n", pkey); @@ -1303,6 +1321,7 @@ void test_mprotect_pkey_on_unsupported_cpu(int *ptr, u16 pkey) void (*pkey_tests[])(int *ptr, u16 pkey) = { test_read_of_write_disabled_region, test_read_of_access_disabled_region, + test_read_of_access_disabled_region_with_page_already_mapped, test_write_of_write_disabled_region, test_write_of_access_disabled_region, test_kernel_write_of_access_disabled_region, -- 1.7.1 -- To unsubscribe from this list: send the line "unsubscribe linux-doc" in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
[RFC v6 52/62] selftest/vm: generic cleanup
cleanup the code to satisfy coding styles. Signed-off-by: Ram Pai --- tools/testing/selftests/vm/protection_keys.c | 81 ++ 1 files changed, 43 insertions(+), 38 deletions(-) diff --git a/tools/testing/selftests/vm/protection_keys.c b/tools/testing/selftests/vm/protection_keys.c index f21e177..fd94449 100644 --- a/tools/testing/selftests/vm/protection_keys.c +++ b/tools/testing/selftests/vm/protection_keys.c @@ -3,7 +3,7 @@ * * There are examples in here of: * * how to set protection keys on memory - * * how to set/clear bits in pkey registers (the rights register) + * * how to set/clear bits in Protection Key registers (the rights register) * * how to handle SEGV_PKUERR signals and extract pkey-relevant *information from the siginfo * @@ -12,13 +12,18 @@ * prefault pages in at malloc, or not * protect MPX bounds tables with protection keys? * make sure VMA splitting/merging is working correctly - * OOMs can destroy mm->mmap (see exit_mmap()), so make sure it is immune to pkeys - * look for pkey "leaks" where it is still set on a VMA but "freed" back to the kernel - * do a plain mprotect() to a mprotect_pkey() area and make sure the pkey sticks + * OOMs can destroy mm->mmap (see exit_mmap()), + * so make sure it is immune to pkeys + * look for pkey "leaks" where it is still set on a VMA + * but "freed" back to the kernel + * do a plain mprotect() to a mprotect_pkey() area and make + * sure the pkey sticks * * Compile like this: - * gcc -o protection_keys-O2 -g -std=gnu99 -pthread -Wall protection_keys.c -lrt -ldl -lm - * gcc -m32 -o protection_keys_32 -O2 -g -std=gnu99 -pthread -Wall protection_keys.c -lrt -ldl -lm + * gcc -o protection_keys-O2 -g -std=gnu99 + * -pthread -Wall protection_keys.c -lrt -ldl -lm + * gcc -m32 -o protection_keys_32 -O2 -g -std=gnu99 + * -pthread -Wall protection_keys.c -lrt -ldl -lm */ #define _GNU_SOURCE #include @@ -251,26 +256,11 @@ void signal_handler(int signum, siginfo_t *si, void *vucontext) dprintf1("signal pkey_reg from pkey_reg: %016lx\n", __rdpkey_reg()); dprintf1("si_pkey from siginfo: %jx\n", si_pkey); *(u64 *)pkey_reg_ptr = 0x; - dprintf1("WARNING: set PRKU=0 to allow faulting instruction to continue\n"); + dprintf1("WARNING: set PKEY_REG=0 to allow faulting instruction " + "to continue\n"); pkey_faults++; dprintf1("==\n"); return; - if (trapno == 14) { - fprintf(stderr, - "ERROR: In signal handler, page fault, trapno = %d, ip = %016lx\n", - trapno, ip); - fprintf(stderr, "si_addr %p\n", si->si_addr); - fprintf(stderr, "REG_ERR: %lx\n", - (unsigned long)uctxt->uc_mcontext.gregs[REG_ERR]); - exit(1); - } else { - fprintf(stderr, "unexpected trap %d! 
at 0x%lx\n", trapno, ip); - fprintf(stderr, "si_addr %p\n", si->si_addr); - fprintf(stderr, "REG_ERR: %lx\n", - (unsigned long)uctxt->uc_mcontext.gregs[REG_ERR]); - exit(2); - } - dprint_in_signal = 0; } int wait_all_children(void) @@ -415,7 +405,7 @@ void pkey_disable_set(int pkey, int flags) { unsigned long syscall_flags = 0; int ret; - int pkey_rights; + u32 pkey_rights; pkey_reg_t orig_pkey_reg = rdpkey_reg(); dprintf1("START->%s(%d, 0x%x)\n", __func__, @@ -453,7 +443,7 @@ void pkey_disable_clear(int pkey, int flags) { unsigned long syscall_flags = 0; int ret; - int pkey_rights = pkey_get(pkey, syscall_flags); + u32 pkey_rights = pkey_get(pkey, syscall_flags); pkey_reg_t orig_pkey_reg = rdpkey_reg(); pkey_assert(flags & (PKEY_DISABLE_ACCESS | PKEY_DISABLE_WRITE)); @@ -516,9 +506,10 @@ int sys_mprotect_pkey(void *ptr, size_t size, unsigned long orig_prot, return sret; } -int sys_pkey_alloc(unsigned long flags, unsigned long init_val) +int sys_pkey_alloc(unsigned long flags, u64 init_val) { int ret = syscall(SYS_pkey_alloc, flags, init_val); + dprintf1("%s(flags=%lx, init_val=%lx) syscall ret: %d errno: %d\n", __func__, flags, init_val, ret, errno); return ret; @@ -542,7 +533,7 @@ void pkey_set_shadow(u32 key, u64 init_val) int alloc_pkey(void) { int ret; - unsigned long init_val = 0x0; + u64 init_val = 0x0; dprintf1("%s()::%d, pkey_reg: 0x%016lx shadow: %016lx\n", __func__, __LINE__, __rdpkey_reg(), shadow_pkey_reg); @@ -692,7 +683,9 @@ void record_pkey_malloc(void *ptr, long size)
[RFC v6 51/62] selftest/vm: pkey register should match the shadow pkey register
expected_pkey_fault() is comparing the contents of pkey register with 0. This may not be true all the time. There could be bits set by default by the architecture which can never be changed. Hence compare the value against shadow pkey register, which is supposed to track the bits accurately all throughout Signed-off-by: Ram Pai --- tools/testing/selftests/vm/protection_keys.c |4 ++-- 1 files changed, 2 insertions(+), 2 deletions(-) diff --git a/tools/testing/selftests/vm/protection_keys.c b/tools/testing/selftests/vm/protection_keys.c index 20bab6d..f21e177 100644 --- a/tools/testing/selftests/vm/protection_keys.c +++ b/tools/testing/selftests/vm/protection_keys.c @@ -926,10 +926,10 @@ void expected_pkey_fault(int pkey) pkey_assert(last_pkey_faults + 1 == pkey_faults); pkey_assert(last_si_pkey == pkey); /* -* The signal handler shold have cleared out PKEY register to let the +* The signal handler shold have cleared out pkey-register to let the * test program continue. We now have to restore it. */ - if (__rdpkey_reg() != 0) + if (__rdpkey_reg() != shadow_pkey_reg) pkey_assert(0); __wrpkey_reg(shadow_pkey_reg); -- 1.7.1 -- To unsubscribe from this list: send the line "unsubscribe linux-doc" in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
[RFC v6 47/62] selftest/vm: fix bugs in pkey_disable_clear()
instead of clearing the bits, pkey_disable_clear() was setting the bits. Fixed it. Also fixed a wrong assertion in that function. When bits are cleared, the resulting bit value will be less than the original. Signed-off-by: Ram Pai --- tools/testing/selftests/vm/protection_keys.c |4 ++-- 1 files changed, 2 insertions(+), 2 deletions(-) diff --git a/tools/testing/selftests/vm/protection_keys.c b/tools/testing/selftests/vm/protection_keys.c index b2d7879..0f2d1ce 100644 --- a/tools/testing/selftests/vm/protection_keys.c +++ b/tools/testing/selftests/vm/protection_keys.c @@ -461,7 +461,7 @@ void pkey_disable_clear(int pkey, int flags) pkey, pkey, pkey_rights); pkey_assert(pkey_rights >= 0); - pkey_rights |= flags; + pkey_rights &= ~flags; ret = pkey_set(pkey, pkey_rights, 0); /* pkey_reg and flags have the same format */ @@ -475,7 +475,7 @@ void pkey_disable_clear(int pkey, int flags) dprintf1("%s(%d) pkey_reg: 0x%016lx\n", __func__, pkey, rdpkey_reg()); if (flags) - assert(rdpkey_reg() > orig_pkey_reg); + assert(rdpkey_reg() < orig_pkey_reg); } void pkey_write_allow(int pkey) -- 1.7.1 -- To unsubscribe from this list: send the line "unsubscribe linux-doc" in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
[RFC v6 48/62] selftest/vm: clear the bits in shadow reg when a pkey is freed.
When a key is freed, the key is no more effective. Clear the bits corresponding to the pkey in the shadow register. Otherwise it will carry some spurious bits which can trigger false-positive asserts. Signed-off-by: Ram Pai --- tools/testing/selftests/vm/protection_keys.c |3 +++ 1 files changed, 3 insertions(+), 0 deletions(-) diff --git a/tools/testing/selftests/vm/protection_keys.c b/tools/testing/selftests/vm/protection_keys.c index 0f2d1ce..4f4ce36 100644 --- a/tools/testing/selftests/vm/protection_keys.c +++ b/tools/testing/selftests/vm/protection_keys.c @@ -582,6 +582,9 @@ int alloc_pkey(void) int sys_pkey_free(unsigned long pkey) { int ret = syscall(SYS_pkey_free, pkey); + + if (!ret) + shadow_pkey_reg &= reset_bits(pkey, PKEY_DISABLE_ACCESS); dprintf1("%s(pkey=%ld) syscall ret: %d\n", __func__, pkey, ret); return ret; } -- 1.7.1 -- To unsubscribe from this list: send the line "unsubscribe linux-doc" in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
[RFC v6 49/62] selftest/vm: fix alloc_random_pkey() to make it really random
alloc_random_pkey() was allocating the same pkey every time. Not all pkeys were geting tested. fixed it. Signed-off-by: Ram Pai --- tools/testing/selftests/vm/protection_keys.c | 10 +++--- 1 files changed, 7 insertions(+), 3 deletions(-) diff --git a/tools/testing/selftests/vm/protection_keys.c b/tools/testing/selftests/vm/protection_keys.c index 4f4ce36..1c8ef39 100644 --- a/tools/testing/selftests/vm/protection_keys.c +++ b/tools/testing/selftests/vm/protection_keys.c @@ -23,6 +23,7 @@ #define _GNU_SOURCE #include #include +#include #include #include #include @@ -602,13 +603,15 @@ int alloc_random_pkey(void) int alloced_pkeys[NR_PKEYS]; int nr_alloced = 0; int random_index; + memset(alloced_pkeys, 0, sizeof(alloced_pkeys)); + srand((unsigned int)time(NULL)); /* allocate every possible key and make a note of which ones we got */ max_nr_pkey_allocs = NR_PKEYS; - max_nr_pkey_allocs = 1; for (i = 0; i < max_nr_pkey_allocs; i++) { int new_pkey = alloc_pkey(); + if (new_pkey < 0) break; alloced_pkeys[nr_alloced++] = new_pkey; @@ -624,13 +627,14 @@ int alloc_random_pkey(void) /* go through the allocated ones that we did not want and free them */ for (i = 0; i < nr_alloced; i++) { int free_ret; + if (!alloced_pkeys[i]) continue; free_ret = sys_pkey_free(alloced_pkeys[i]); pkey_assert(!free_ret); } - dprintf1("%s()::%d, ret: %d pkey_reg: 0x%x shadow: 0x%x\n", __func__, - __LINE__, ret, __rdpkey_reg(), shadow_pkey_reg); + dprintf1("%s()::%d, ret: %d pkey_reg: 0x%x shadow: 0x%016lx\n", + __func__, __LINE__, ret, __rdpkey_reg(), shadow_pkey_reg); return ret; } -- 1.7.1 -- To unsubscribe from this list: send the line "unsubscribe linux-doc" in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
[RFC v6 50/62] selftest/vm: introduce two arch-independent abstractions
open_hugepage_file() <- opens the huge page file get_start_key() <-- provides the first non-reserved key. Signed-off-by: Ram Pai --- tools/testing/selftests/vm/pkey-helpers.h| 11 +++ tools/testing/selftests/vm/protection_keys.c |6 +++--- 2 files changed, 14 insertions(+), 3 deletions(-) diff --git a/tools/testing/selftests/vm/pkey-helpers.h b/tools/testing/selftests/vm/pkey-helpers.h index f50b5f2..5211019 100644 --- a/tools/testing/selftests/vm/pkey-helpers.h +++ b/tools/testing/selftests/vm/pkey-helpers.h @@ -300,3 +300,14 @@ static inline void __page_o_noops(void) } \ } while (0) #define raw_assert(cond) assert(cond) + +static inline int open_hugepage_file(int flag) +{ + return open("/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages", +O_RDONLY); +} + +static inline int get_start_key(void) +{ + return 1; +} diff --git a/tools/testing/selftests/vm/protection_keys.c b/tools/testing/selftests/vm/protection_keys.c index 1c8ef39..20bab6d 100644 --- a/tools/testing/selftests/vm/protection_keys.c +++ b/tools/testing/selftests/vm/protection_keys.c @@ -809,7 +809,7 @@ void setup_hugetlbfs(void) * Now go make sure that we got the pages and that they * are 2M pages. Someone might have made 1G the default. */ - fd = open("/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages", O_RDONLY); + fd = open_hugepage_file(O_RDONLY); if (fd < 0) { perror("opening sysfs 2M hugetlb config"); return; @@ -1087,10 +1087,10 @@ void test_kernel_gup_write_to_write_disabled_region(int *ptr, u16 pkey) void test_pkey_syscalls_on_non_allocated_pkey(int *ptr, u16 pkey) { int err; - int i; + int i = get_start_key(); /* Note: 0 is the default pkey, so don't mess with it */ - for (i = 1; i < NR_PKEYS; i++) { + for (; i < NR_PKEYS; i++) { if (pkey == i) continue; -- 1.7.1 -- To unsubscribe from this list: send the line "unsubscribe linux-doc" in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
[RFC v6 45/62] selftest/vm: generic functions to handle the shadow pkey register
helper functions to handler shadow pkey register Signed-off-by: Ram Pai --- tools/testing/selftests/vm/pkey-helpers.h| 27 tools/testing/selftests/vm/protection_keys.c | 34 - 2 files changed, 49 insertions(+), 12 deletions(-) diff --git a/tools/testing/selftests/vm/pkey-helpers.h b/tools/testing/selftests/vm/pkey-helpers.h index 12260e8..f50b5f2 100644 --- a/tools/testing/selftests/vm/pkey-helpers.h +++ b/tools/testing/selftests/vm/pkey-helpers.h @@ -43,6 +43,33 @@ #define DEBUG_LEVEL 0 #endif #define DPRINT_IN_SIGNAL_BUF_SIZE 4096 + +static inline u32 pkey_to_shift(int pkey) +{ + return pkey * PKEY_BITS_PER_PKEY; +} + +static inline pkey_reg_t reset_bits(int pkey, pkey_reg_t bits) +{ + u32 shift = pkey_to_shift(pkey); + + return ~(bits << shift); +} + +static inline pkey_reg_t left_shift_bits(int pkey, pkey_reg_t bits) +{ + u32 shift = pkey_to_shift(pkey); + + return (bits << shift); +} + +static inline pkey_reg_t right_shift_bits(int pkey, pkey_reg_t bits) +{ + u32 shift = pkey_to_shift(pkey); + + return (bits >> shift); +} + extern int dprint_in_signal; extern char dprint_in_signal_buffer[DPRINT_IN_SIGNAL_BUF_SIZE]; static inline void sigsafe_printf(const char *format, ...) diff --git a/tools/testing/selftests/vm/protection_keys.c b/tools/testing/selftests/vm/protection_keys.c index bd46f87..e5f5535 100644 --- a/tools/testing/selftests/vm/protection_keys.c +++ b/tools/testing/selftests/vm/protection_keys.c @@ -374,7 +374,7 @@ u32 pkey_get(int pkey, unsigned long flags) __func__, pkey, flags, 0, 0); dprintf2("%s() raw pkey_reg: %x\n", __func__, pkey_reg); - shifted_pkey_reg = (pkey_reg >> (pkey * PKEY_BITS_PER_PKEY)); + shifted_pkey_reg = right_shift_bits(pkey, pkey_reg); dprintf2("%s() shifted_pkey_reg: %x\n", __func__, shifted_pkey_reg); masked_pkey_reg = shifted_pkey_reg & mask; dprintf2("%s() masked pkey_reg: %x\n", __func__, masked_pkey_reg); @@ -397,9 +397,9 @@ int pkey_set(int pkey, unsigned long rights, unsigned long flags) /* copy old pkey_reg */ new_pkey_reg = old_pkey_reg; /* mask out bits from pkey in old value: */ - new_pkey_reg &= ~(mask << (pkey * PKEY_BITS_PER_PKEY)); + new_pkey_reg &= reset_bits(pkey, mask); /* OR in new bits for pkey: */ - new_pkey_reg |= (rights << (pkey * PKEY_BITS_PER_PKEY)); + new_pkey_reg |= left_shift_bits(pkey, rights); __wrpkey_reg(new_pkey_reg); @@ -430,7 +430,7 @@ void pkey_disable_set(int pkey, int flags) ret = pkey_set(pkey, pkey_rights, syscall_flags); assert(!ret); /*pkey_reg and flags have the same format */ - shadow_pkey_reg |= flags << (pkey * 2); + shadow_pkey_reg |= left_shift_bits(pkey, flags); dprintf1("%s(%d) shadow: 0x%016lx\n", __func__, pkey, shadow_pkey_reg); @@ -465,7 +465,7 @@ void pkey_disable_clear(int pkey, int flags) ret = pkey_set(pkey, pkey_rights, 0); /* pkey_reg and flags have the same format */ - shadow_pkey_reg &= ~(flags << (pkey * 2)); + shadow_pkey_reg &= reset_bits(pkey, flags); pkey_assert(ret >= 0); pkey_rights = pkey_get(pkey, syscall_flags); @@ -523,6 +523,21 @@ int sys_pkey_alloc(unsigned long flags, unsigned long init_val) return ret; } +void pkey_setup_shadow(void) +{ + shadow_pkey_reg = __rdpkey_reg(); +} + +void pkey_reset_shadow(u32 key) +{ + shadow_pkey_reg &= reset_bits(key, 0x3); +} + +void pkey_set_shadow(u32 key, u64 init_val) +{ + shadow_pkey_reg |= left_shift_bits(key, init_val); +} + int alloc_pkey(void) { int ret; @@ -540,7 +555,7 @@ int alloc_pkey(void) shadow_pkey_reg); if (ret) { /* clear both the bits: */ - shadow_pkey_reg &= ~(0x3 << (ret * 2)); + pkey_reset_shadow(ret); 
dprintf4("%s()::%d, ret: %d pkey_reg: 0x%016lx " "shadow: 0x%016lx\n", __func__, @@ -550,7 +565,7 @@ int alloc_pkey(void) * move the new state in from init_val * (remember, we cheated and init_val == pkey_reg format) */ - shadow_pkey_reg |= (init_val << (ret * 2)); + pkey_set_shadow(ret, init_val); } dprintf4("%s()::%d, ret: %d pkey_reg: 0x%016lx shadow: 0x%016lx\n", __func__, __LINE__, ret, __rdpkey_reg(), @@ -1322,11 +1337,6 @@ void run_tests_once(void) iteration_nr++; } -void pkey_setup_shadow(void) -{ - shadow_pkey_reg = __rdpkey_reg(); -} - int main(void) { int nr_iterations = 22; -- 1.7.1 -- To unsubscribe from this list: send the line "unsubscribe linux-doc" in the body of a message to majord...@vger.kernel.org More majordomo info at h
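One concrete pass through the new helpers, assuming the x86 layout (2 bits per key, key 0 in the low-order bits); the wrapper function is illustrative only:

static void deny_write_pkey2_example(void)
{
	/* reset_bits(2, 0x3) == ~(0x3 << 4) == ~0x30 */
	shadow_pkey_reg &= reset_bits(2, 0x3);

	/* left_shift_bits(2, PKEY_DISABLE_WRITE) == 0x2 << 4 == 0x20 */
	shadow_pkey_reg |= left_shift_bits(2, PKEY_DISABLE_WRITE);
}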
[RFC v6 46/62] selftest/vm: fix the wrong assert in pkey_disable_set()
If the flag is 0, no bits will be set. Hence we cant expect the resulting bitmap to have a higher value than what it was earlier. Signed-off-by: Ram Pai --- tools/testing/selftests/vm/protection_keys.c |2 +- 1 files changed, 1 insertions(+), 1 deletions(-) diff --git a/tools/testing/selftests/vm/protection_keys.c b/tools/testing/selftests/vm/protection_keys.c index e5f5535..b2d7879 100644 --- a/tools/testing/selftests/vm/protection_keys.c +++ b/tools/testing/selftests/vm/protection_keys.c @@ -443,7 +443,7 @@ void pkey_disable_set(int pkey, int flags) dprintf1("%s(%d) pkey_reg: 0x%lx\n", __func__, pkey, rdpkey_reg()); if (flags) - pkey_assert(rdpkey_reg() > orig_pkey_reg); + pkey_assert(rdpkey_reg() >= orig_pkey_reg); dprintf1("END<---%s(%d, 0x%x)\n", __func__, pkey, flags); } -- 1.7.1 -- To unsubscribe from this list: send the line "unsubscribe linux-doc" in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
[RFC v6 43/62] selftest/vm: move generic definitions to header file
Moved all the generic definition and helper functions to the header file Signed-off-by: Ram Pai --- tools/testing/selftests/vm/pkey-helpers.h| 62 +++-- tools/testing/selftests/vm/protection_keys.c | 54 -- 2 files changed, 57 insertions(+), 59 deletions(-) diff --git a/tools/testing/selftests/vm/pkey-helpers.h b/tools/testing/selftests/vm/pkey-helpers.h index 2d9887a..f378bc2 100644 --- a/tools/testing/selftests/vm/pkey-helpers.h +++ b/tools/testing/selftests/vm/pkey-helpers.h @@ -12,8 +12,31 @@ #include #include +/* Define some kernel-like types */ +#define u8 uint8_t +#define u16 uint16_t +#define u32 uint32_t +#define u64 uint64_t + +#ifdef __i386__ +#define SYS_mprotect_key 380 +#define SYS_pkey_alloc 381 +#define SYS_pkey_free 382 +#define REG_IP_IDX REG_EIP +#define si_pkey_offset 0x14 +#else +#define SYS_mprotect_key 329 +#define SYS_pkey_alloc 330 +#define SYS_pkey_free 331 +#define REG_IP_IDX REG_RIP +#define si_pkey_offset 0x20 +#endif + #define NR_PKEYS 16 #define PKEY_BITS_PER_PKEY 2 +#define PKEY_DISABLE_ACCESS0x1 +#define PKEY_DISABLE_WRITE 0x2 +#define HPAGE_SIZE (1UL<<21) #ifndef DEBUG_LEVEL #define DEBUG_LEVEL 0 @@ -137,11 +160,6 @@ static inline void __pkey_write_allow(int pkey, int do_allow_write) dprintf4("pkey_reg now: %08x\n", rdpkey_reg()); } -#define PROT_PKEY0 0x10/* protection key value (bit 0) */ -#define PROT_PKEY1 0x20/* protection key value (bit 1) */ -#define PROT_PKEY2 0x40/* protection key value (bit 2) */ -#define PROT_PKEY3 0x80/* protection key value (bit 3) */ - #define PAGE_SIZE 4096 #define MB (1<<20) @@ -219,4 +237,38 @@ int pkey_reg_xstate_offset(void) return xstate_offset; } +static inline void __page_o_noops(void) +{ + /* 8-bytes of instruction * 512 bytes = 1 page */ + asm(".rept 512 ; nopl 0x7eee(%eax) ; .endr"); +} + #endif /* _PKEYS_HELPER_H */ + +#define ARRAY_SIZE(x) (sizeof(x) / sizeof(*(x))) +#define ALIGN_UP(x, align_to) (((x) + ((align_to)-1)) & ~((align_to)-1)) +#define ALIGN_DOWN(x, align_to) ((x) & ~((align_to)-1)) +#define ALIGN_PTR_UP(p, ptr_align_to) \ + ((typeof(p))ALIGN_UP((unsigned long)(p), ptr_align_to)) +#define ALIGN_PTR_DOWN(p, ptr_align_to) \ + ((typeof(p))ALIGN_DOWN((unsigned long)(p), ptr_align_to)) +#define __stringify_1(x...) #x +#define __stringify(x...) __stringify_1(x) + +#define PTR_ERR_ENOTSUP ((void *)-ENOTSUP) + +int dprint_in_signal; +char dprint_in_signal_buffer[DPRINT_IN_SIGNAL_BUF_SIZE]; + +extern void abort_hooks(void); +#define pkey_assert(condition) do {\ + if (!(condition)) { \ + dprintf0("assert() at %s::%d test_nr: %d iteration: %d\n", \ + __FILE__, __LINE__, \ + test_nr, iteration_nr); \ + dprintf0("errno at assert: %d", errno); \ + abort_hooks(); \ + assert(condition); \ + } \ +} while (0) +#define raw_assert(cond) assert(cond) diff --git a/tools/testing/selftests/vm/protection_keys.c b/tools/testing/selftests/vm/protection_keys.c index 2a237e2..c345ff8 100644 --- a/tools/testing/selftests/vm/protection_keys.c +++ b/tools/testing/selftests/vm/protection_keys.c @@ -48,34 +48,9 @@ int test_nr; unsigned int shadow_pkey_reg; - -#define HPAGE_SIZE (1UL<<21) -#define ARRAY_SIZE(x) (sizeof(x) / sizeof(*(x))) -#define ALIGN_UP(x, align_to) (((x) + ((align_to)-1)) & ~((align_to)-1)) -#define ALIGN_DOWN(x, align_to) ((x) & ~((align_to)-1)) -#define ALIGN_PTR_UP(p, ptr_align_to) ((typeof(p))ALIGN_UP((unsigned long)(p),ptr_align_to)) -#define ALIGN_PTR_DOWN(p, ptr_align_to) ((typeof(p))ALIGN_DOWN((unsigned long)(p), ptr_align_to)) -#define __stringify_1(x...) #x -#define __stringify(x...) 
__stringify_1(x) - -#define PTR_ERR_ENOTSUP ((void *)-ENOTSUP) - int dprint_in_signal; char dprint_in_signal_buffer[DPRINT_IN_SIGNAL_BUF_SIZE]; -extern void abort_hooks(void); -#define pkey_assert(condition) do {\ - if (!(condition)) { \ - dprintf0("assert() at %s::%d test_nr: %d iteration: %d\n", \ - __FILE__, __LINE__, \ - test_nr, iteration_nr); \ - dprintf0("errno at assert: %d", errno); \ - abort_hooks(); \ - assert(condition); \ - } \ -} while (0) -#define raw_assert(cond) assert(cond) - void cat_into_file(char *str, char *file) { int fd = open(file, O_RDWR); @@ -153,12 +128,6 @@ void abort_hooks(void) #endif } -static inline void __page_o_noops(void) -{ - /* 8-bytes of instruction * 512 bytes
[RFC v6 42/62] selftest/vm: rename all references to pkru to a generic name
some pkru references are named to pkey_reg and some prku references are renamed to pkey Signed-off-by: Ram Pai --- tools/testing/selftests/vm/pkey-helpers.h| 85 +- tools/testing/selftests/vm/protection_keys.c | 227 ++ 2 files changed, 164 insertions(+), 148 deletions(-) diff --git a/tools/testing/selftests/vm/pkey-helpers.h b/tools/testing/selftests/vm/pkey-helpers.h index b202939..2d9887a 100644 --- a/tools/testing/selftests/vm/pkey-helpers.h +++ b/tools/testing/selftests/vm/pkey-helpers.h @@ -13,7 +13,7 @@ #include #define NR_PKEYS 16 -#define PKRU_BITS_PER_PKEY 2 +#define PKEY_BITS_PER_PKEY 2 #ifndef DEBUG_LEVEL #define DEBUG_LEVEL 0 @@ -53,85 +53,88 @@ static inline void sigsafe_printf(const char *format, ...) #define dprintf3(args...) dprintf_level(3, args) #define dprintf4(args...) dprintf_level(4, args) -extern unsigned int shadow_pkru; -static inline unsigned int __rdpkru(void) +extern unsigned int shadow_pkey_reg; +static inline unsigned int __rdpkey_reg(void) { unsigned int eax, edx; unsigned int ecx = 0; - unsigned int pkru; + unsigned int pkey_reg; asm volatile(".byte 0x0f,0x01,0xee\n\t" : "=a" (eax), "=d" (edx) : "c" (ecx)); - pkru = eax; - return pkru; + pkey_reg = eax; + return pkey_reg; } -static inline unsigned int _rdpkru(int line) +static inline unsigned int _rdpkey_reg(int line) { - unsigned int pkru = __rdpkru(); + unsigned int pkey_reg = __rdpkey_reg(); - dprintf4("rdpkru(line=%d) pkru: %x shadow: %x\n", - line, pkru, shadow_pkru); - assert(pkru == shadow_pkru); + dprintf4("rdpkey_reg(line=%d) pkey_reg: %x shadow: %x\n", + line, pkey_reg, shadow_pkey_reg); + assert(pkey_reg == shadow_pkey_reg); - return pkru; + return pkey_reg; } -#define rdpkru() _rdpkru(__LINE__) +#define rdpkey_reg() _rdpkey_reg(__LINE__) -static inline void __wrpkru(unsigned int pkru) +static inline void __wrpkey_reg(unsigned int pkey_reg) { - unsigned int eax = pkru; + unsigned int eax = pkey_reg; unsigned int ecx = 0; unsigned int edx = 0; - dprintf4("%s() changing %08x to %08x\n", __func__, __rdpkru(), pkru); + dprintf4("%s() changing %08x to %08x\n", __func__, + __rdpkey_reg(), pkey_reg); asm volatile(".byte 0x0f,0x01,0xef\n\t" : : "a" (eax), "c" (ecx), "d" (edx)); - assert(pkru == __rdpkru()); + assert(pkey_reg == __rdpkey_reg()); } -static inline void wrpkru(unsigned int pkru) +static inline void wrpkey_reg(unsigned int pkey_reg) { - dprintf4("%s() changing %08x to %08x\n", __func__, __rdpkru(), pkru); + dprintf4("%s() changing %08x to %08x\n", __func__, + __rdpkey_reg(), pkey_reg); /* will do the shadow check for us: */ - rdpkru(); - __wrpkru(pkru); - shadow_pkru = pkru; - dprintf4("%s(%08x) pkru: %08x\n", __func__, pkru, __rdpkru()); + rdpkey_reg(); + __wrpkey_reg(pkey_reg); + shadow_pkey_reg = pkey_reg; + dprintf4("%s(%08x) pkey_reg: %08x\n", __func__, + pkey_reg, __rdpkey_reg()); } /* * These are technically racy. since something could - * change PKRU between the read and the write. + * change PKEY register between the read and the write. 
*/ static inline void __pkey_access_allow(int pkey, int do_allow) { - unsigned int pkru = rdpkru(); + unsigned int pkey_reg = rdpkey_reg(); int bit = pkey * 2; if (do_allow) - pkru &= (1<>>>===SIGSEGV\n"); - dprintf1("%s()::%d, pkru: 0x%x shadow: %x\n", __func__, __LINE__, - __rdpkru(), shadow_pkru); + dprintf1("%s()::%d, pkey_reg: 0x%x shadow: %x\n", __func__, __LINE__, + __rdpkey_reg(), shadow_pkey_reg); trapno = uctxt->uc_mcontext.gregs[REG_TRAPNO]; ip = uctxt->uc_mcontext.gregs[REG_IP_IDX]; @@ -263,19 +263,19 @@ void signal_handler(int signum, siginfo_t *si, void *vucontext) */ fpregs += 0x70; #endif - pkru_offset = pkru_xstate_offset(); - pkru_ptr = (void *)(&fpregs[pkru_offset]); + pkey_reg_offset = pkey_reg_xstate_offset(); + pkey_reg_ptr = (void *)(&fpregs[pkey_reg_offset]); dprintf1("siginfo: %p\n", si); dprintf1(" fpregs: %p\n", fpregs); /* -* If we got a PKRU fault, we *HAVE* to have at least one bit set in +* If we got a PKEY fault, we *HAVE* to have at least one bit set in * here. */ - dprintf1("pkru_xstate_offset: %d\n", pkru_xstate_offset()); + dprintf1("pkey_reg_xstate_offset: %d\n", pkey_reg_xstate_offset()); if (DEBUG_LEVEL > 4) - dump_mem(pkru_ptr - 128, 256); - pk
[RFC v6 44/62] selftest/vm: typecast the pkey register
This is in preparation to accomadate a differing size register across architectures. Signed-off-by: Ram Pai --- tools/testing/selftests/vm/pkey-helpers.h| 27 +- tools/testing/selftests/vm/protection_keys.c | 75 ++ 2 files changed, 54 insertions(+), 48 deletions(-) diff --git a/tools/testing/selftests/vm/pkey-helpers.h b/tools/testing/selftests/vm/pkey-helpers.h index f378bc2..12260e8 100644 --- a/tools/testing/selftests/vm/pkey-helpers.h +++ b/tools/testing/selftests/vm/pkey-helpers.h @@ -17,6 +17,7 @@ #define u16 uint16_t #define u32 uint32_t #define u64 uint64_t +#define pkey_reg_t u32 #ifdef __i386__ #define SYS_mprotect_key 380 @@ -76,12 +77,12 @@ static inline void sigsafe_printf(const char *format, ...) #define dprintf3(args...) dprintf_level(3, args) #define dprintf4(args...) dprintf_level(4, args) -extern unsigned int shadow_pkey_reg; -static inline unsigned int __rdpkey_reg(void) +extern pkey_reg_t shadow_pkey_reg; +static inline pkey_reg_t __rdpkey_reg(void) { unsigned int eax, edx; unsigned int ecx = 0; - unsigned int pkey_reg; + pkey_reg_t pkey_reg; asm volatile(".byte 0x0f,0x01,0xee\n\t" : "=a" (eax), "=d" (edx) @@ -90,11 +91,11 @@ static inline unsigned int __rdpkey_reg(void) return pkey_reg; } -static inline unsigned int _rdpkey_reg(int line) +static inline pkey_reg_t _rdpkey_reg(int line) { - unsigned int pkey_reg = __rdpkey_reg(); + pkey_reg_t pkey_reg = __rdpkey_reg(); - dprintf4("rdpkey_reg(line=%d) pkey_reg: %x shadow: %x\n", + dprintf4("rdpkey_reg(line=%d) pkey_reg: %016lx shadow: %016lx\n", line, pkey_reg, shadow_pkey_reg); assert(pkey_reg == shadow_pkey_reg); @@ -103,11 +104,11 @@ static inline unsigned int _rdpkey_reg(int line) #define rdpkey_reg() _rdpkey_reg(__LINE__) -static inline void __wrpkey_reg(unsigned int pkey_reg) +static inline void __wrpkey_reg(pkey_reg_t pkey_reg) { - unsigned int eax = pkey_reg; - unsigned int ecx = 0; - unsigned int edx = 0; + pkey_reg_t eax = pkey_reg; + pkey_reg_t ecx = 0; + pkey_reg_t edx = 0; dprintf4("%s() changing %08x to %08x\n", __func__, __rdpkey_reg(), pkey_reg); @@ -116,7 +117,7 @@ static inline void __wrpkey_reg(unsigned int pkey_reg) assert(pkey_reg == __rdpkey_reg()); } -static inline void wrpkey_reg(unsigned int pkey_reg) +static inline void wrpkey_reg(pkey_reg_t pkey_reg) { dprintf4("%s() changing %08x to %08x\n", __func__, __rdpkey_reg(), pkey_reg); @@ -134,7 +135,7 @@ static inline void wrpkey_reg(unsigned int pkey_reg) */ static inline void __pkey_access_allow(int pkey, int do_allow) { - unsigned int pkey_reg = rdpkey_reg(); + pkey_reg_t pkey_reg = rdpkey_reg(); int bit = pkey * 2; if (do_allow) @@ -148,7 +149,7 @@ static inline void __pkey_access_allow(int pkey, int do_allow) static inline void __pkey_write_allow(int pkey, int do_allow_write) { - long pkey_reg = rdpkey_reg(); + pkey_reg_t pkey_reg = rdpkey_reg(); int bit = pkey * 2 + 1; if (do_allow_write) diff --git a/tools/testing/selftests/vm/protection_keys.c b/tools/testing/selftests/vm/protection_keys.c index c345ff8..bd46f87 100644 --- a/tools/testing/selftests/vm/protection_keys.c +++ b/tools/testing/selftests/vm/protection_keys.c @@ -47,7 +47,7 @@ int iteration_nr = 1; int test_nr; -unsigned int shadow_pkey_reg; +pkey_reg_t shadow_pkey_reg; int dprint_in_signal; char dprint_in_signal_buffer[DPRINT_IN_SIGNAL_BUF_SIZE]; @@ -157,7 +157,7 @@ void dump_mem(void *dumpme, int len_bytes) for (i = 0; i < len_bytes; i += sizeof(u64)) { u64 *ptr = (u64 *)(c + i); - dprintf1("dump[%03d][@%p]: %016jx\n", i, ptr, *ptr); + dprintf1("dump[%03d][@%p]: %016lx\n", i, ptr, 
*ptr); } } @@ -186,15 +186,16 @@ void signal_handler(int signum, siginfo_t *si, void *vucontext) int trapno; unsigned long ip; char *fpregs; - u32 *pkey_reg_ptr; - u64 si_pkey; - u32 *si_pkey_ptr; + pkey_reg_t *pkey_reg_ptr; + u32 si_pkey; + pkey_reg_t *si_pkey_ptr; int pkey_reg_offset; fpregset_t fpregset; dprint_in_signal = 1; dprintf1("===SIGSEGV\n"); - dprintf1("%s()::%d, pkey_reg: 0x%x shadow: %x\n", __func__, __LINE__, + dprintf1("%s()::%d, pkey_reg: 0x%016lx shadow: %016lx\n", + __func__, __LINE__, __rdpkey_reg(), shadow_pkey_reg); trapno = uctxt->uc_mcontext.gregs[REG_TRAPNO]; @@ -202,8 +203,9 @@ void signal_handler(int signum, siginfo_t *si, void *vucontext) fpregset = uctxt->uc_mcontext.fpregs; fpregs = (void *)fpregset; - dprintf2("%s() tra
[RFC v6 41/62] selftest/x86: Move protection key selftest to arch-neutral directory
Signed-off-by: Ram Pai --- tools/testing/selftests/vm/Makefile |1 + tools/testing/selftests/vm/pkey-helpers.h | 219 tools/testing/selftests/vm/protection_keys.c | 1395 + tools/testing/selftests/x86/Makefile |2 +- tools/testing/selftests/x86/pkey-helpers.h| 219 tools/testing/selftests/x86/protection_keys.c | 1395 - 6 files changed, 1616 insertions(+), 1615 deletions(-) create mode 100644 tools/testing/selftests/vm/pkey-helpers.h create mode 100644 tools/testing/selftests/vm/protection_keys.c delete mode 100644 tools/testing/selftests/x86/pkey-helpers.h delete mode 100644 tools/testing/selftests/x86/protection_keys.c diff --git a/tools/testing/selftests/vm/Makefile b/tools/testing/selftests/vm/Makefile index cbb29e4..1d32f78 100644 --- a/tools/testing/selftests/vm/Makefile +++ b/tools/testing/selftests/vm/Makefile @@ -17,6 +17,7 @@ TEST_GEN_FILES += transhuge-stress TEST_GEN_FILES += userfaultfd TEST_GEN_FILES += mlock-random-test TEST_GEN_FILES += virtual_address_range +TEST_GEN_FILES += protection_keys TEST_PROGS := run_vmtests diff --git a/tools/testing/selftests/vm/pkey-helpers.h b/tools/testing/selftests/vm/pkey-helpers.h new file mode 100644 index 000..b202939 --- /dev/null +++ b/tools/testing/selftests/vm/pkey-helpers.h @@ -0,0 +1,219 @@ +#ifndef _PKEYS_HELPER_H +#define _PKEYS_HELPER_H +#define _GNU_SOURCE +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#define NR_PKEYS 16 +#define PKRU_BITS_PER_PKEY 2 + +#ifndef DEBUG_LEVEL +#define DEBUG_LEVEL 0 +#endif +#define DPRINT_IN_SIGNAL_BUF_SIZE 4096 +extern int dprint_in_signal; +extern char dprint_in_signal_buffer[DPRINT_IN_SIGNAL_BUF_SIZE]; +static inline void sigsafe_printf(const char *format, ...) +{ + va_list ap; + + va_start(ap, format); + if (!dprint_in_signal) { + vprintf(format, ap); + } else { + int len = vsnprintf(dprint_in_signal_buffer, + DPRINT_IN_SIGNAL_BUF_SIZE, + format, ap); + /* +* len is amount that would have been printed, +* but actual write is truncated at BUF_SIZE. +*/ + if (len > DPRINT_IN_SIGNAL_BUF_SIZE) + len = DPRINT_IN_SIGNAL_BUF_SIZE; + write(1, dprint_in_signal_buffer, len); + } + va_end(ap); +} +#define dprintf_level(level, args...) do { \ + if (level <= DEBUG_LEVEL) \ + sigsafe_printf(args); \ + fflush(NULL); \ +} while (0) +#define dprintf0(args...) dprintf_level(0, args) +#define dprintf1(args...) dprintf_level(1, args) +#define dprintf2(args...) dprintf_level(2, args) +#define dprintf3(args...) dprintf_level(3, args) +#define dprintf4(args...) 
dprintf_level(4, args) + +extern unsigned int shadow_pkru; +static inline unsigned int __rdpkru(void) +{ + unsigned int eax, edx; + unsigned int ecx = 0; + unsigned int pkru; + + asm volatile(".byte 0x0f,0x01,0xee\n\t" +: "=a" (eax), "=d" (edx) +: "c" (ecx)); + pkru = eax; + return pkru; +} + +static inline unsigned int _rdpkru(int line) +{ + unsigned int pkru = __rdpkru(); + + dprintf4("rdpkru(line=%d) pkru: %x shadow: %x\n", + line, pkru, shadow_pkru); + assert(pkru == shadow_pkru); + + return pkru; +} + +#define rdpkru() _rdpkru(__LINE__) + +static inline void __wrpkru(unsigned int pkru) +{ + unsigned int eax = pkru; + unsigned int ecx = 0; + unsigned int edx = 0; + + dprintf4("%s() changing %08x to %08x\n", __func__, __rdpkru(), pkru); + asm volatile(".byte 0x0f,0x01,0xef\n\t" +: : "a" (eax), "c" (ecx), "d" (edx)); + assert(pkru == __rdpkru()); +} + +static inline void wrpkru(unsigned int pkru) +{ + dprintf4("%s() changing %08x to %08x\n", __func__, __rdpkru(), pkru); + /* will do the shadow check for us: */ + rdpkru(); + __wrpkru(pkru); + shadow_pkru = pkru; + dprintf4("%s(%08x) pkru: %08x\n", __func__, pkru, __rdpkru()); +} + +/* + * These are technically racy. since something could + * change PKRU between the read and the write. + */ +static inline void __pkey_access_allow(int pkey, int do_allow) +{ + unsigned int pkru = rdpkru(); + int bit = pkey * 2; + + if (do_allow) + pkru &= (1
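[Editorial note: the .byte sequences in __rdpkru()/__wrpkru() are the RDPKRU (0f 01 ee) and WRPKRU (0f 01 ef) instructions written out as raw opcodes so the test builds with assemblers that predate the mnemonics. With a new enough binutils the equivalent, illustrative forms would be:

	asm volatile("rdpkru" : "=a" (eax), "=d" (edx) : "c" (0));
	asm volatile("wrpkru" : : "a" (pkru), "c" (0), "d" (0));
]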
[RFC v6 40/62] x86: delete arch_show_smap()
arch_show_smap() function is not needed anymore. Delete it. Signed-off-by: Ram Pai --- arch/x86/kernel/setup.c |8 1 files changed, 0 insertions(+), 8 deletions(-) diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c index f818236..5efe4c3 100644 --- a/arch/x86/kernel/setup.c +++ b/arch/x86/kernel/setup.c @@ -1333,11 +1333,3 @@ static int __init register_kernel_offset_dumper(void) return 0; } __initcall(register_kernel_offset_dumper); - -void arch_show_smap(struct seq_file *m, struct vm_area_struct *vma) -{ - if (!boot_cpu_has(X86_FEATURE_OSPKE)) - return; - - seq_printf(m, "ProtectionKey: %8u\n", vma_pkey(vma)); -} -- 1.7.1 -- To unsubscribe from this list: send the line "unsubscribe linux-doc" in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
[RFC v6 39/62] mm: display pkey in smaps if arch_pkeys_enabled() is true
Currently the architecture specific code is expected to display the protection keys in smap for a given vma. This can lead to redundant code and possibly to divergent formats in which the key gets displayed. This patch changes the implementation. It displays the pkey only if the architecture support pkeys. Signed-off-by: Ram Pai --- fs/proc/task_mmu.c |9 - 1 files changed, 4 insertions(+), 5 deletions(-) diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c index e5710bc..d2b3e75 100644 --- a/fs/proc/task_mmu.c +++ b/fs/proc/task_mmu.c @@ -16,6 +16,7 @@ #include #include #include +#include #include #include @@ -715,10 +716,6 @@ static int smaps_hugetlb_range(pte_t *pte, unsigned long hmask, } #endif /* HUGETLB_PAGE */ -void __weak arch_show_smap(struct seq_file *m, struct vm_area_struct *vma) -{ -} - static int show_smap(struct seq_file *m, void *v, int is_pid) { struct vm_area_struct *vma = v; @@ -804,7 +801,9 @@ static int show_smap(struct seq_file *m, void *v, int is_pid) (vma->vm_flags & VM_LOCKED) ? (unsigned long)(mss.pss >> (10 + PSS_SHIFT)) : 0); - arch_show_smap(m, vma); + if (arch_pkeys_enabled()) + seq_printf(m, "ProtectionKey: %8u\n", vma_pkey(vma)); + show_smap_vma_flags(m, vma); m_cache_vma(m, vma); return 0; -- 1.7.1 -- To unsubscribe from this list: send the line "unsubscribe linux-doc" in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
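[Editorial note: the user-visible result is one extra line per VMA in /proc/<pid>/smaps on any architecture where arch_pkeys_enabled() returns true, for example (illustrative output, key number made up):

	ProtectionKey:        2
]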
[RFC v6 35/62] powerpc: Deliver SEGV signal on pkey violation
The value of the AMR register at the time of exception is made available in gp_regs[PT_AMR] of the siginfo. The value of the pkey, whose protection got violated, is made available in si_pkey field of the siginfo structure. Signed-off-by: Ram Pai --- arch/powerpc/include/uapi/asm/ptrace.h |1 + arch/powerpc/kernel/signal_32.c|5 + arch/powerpc/kernel/signal_64.c|4 arch/powerpc/kernel/traps.c| 15 +++ 4 files changed, 25 insertions(+), 0 deletions(-) diff --git a/arch/powerpc/include/uapi/asm/ptrace.h b/arch/powerpc/include/uapi/asm/ptrace.h index 8036b38..fc9c9c0 100644 --- a/arch/powerpc/include/uapi/asm/ptrace.h +++ b/arch/powerpc/include/uapi/asm/ptrace.h @@ -110,6 +110,7 @@ struct pt_regs { #define PT_RESULT 43 #define PT_DSCR 44 #define PT_REGS_COUNT 44 +#define PT_AMR 45 #define PT_FPR048 /* each FP reg occupies 2 slots in this space */ diff --git a/arch/powerpc/kernel/signal_32.c b/arch/powerpc/kernel/signal_32.c index 97bb138..9c4a7f3 100644 --- a/arch/powerpc/kernel/signal_32.c +++ b/arch/powerpc/kernel/signal_32.c @@ -500,6 +500,11 @@ static int save_user_regs(struct pt_regs *regs, struct mcontext __user *frame, (unsigned long) &frame->tramp[2]); } +#ifdef CONFIG_PPC64_MEMORY_PROTECTION_KEYS + if (__put_user(get_paca()->paca_amr, &frame->mc_gregs[PT_AMR])) + return 1; +#endif /* CONFIG_PPC64_MEMORY_PROTECTION_KEYS */ + return 0; } diff --git a/arch/powerpc/kernel/signal_64.c b/arch/powerpc/kernel/signal_64.c index c83c115..86a4262 100644 --- a/arch/powerpc/kernel/signal_64.c +++ b/arch/powerpc/kernel/signal_64.c @@ -174,6 +174,10 @@ static long setup_sigcontext(struct sigcontext __user *sc, if (set != NULL) err |= __put_user(set->sig[0], &sc->oldmask); +#ifdef CONFIG_PPC64_MEMORY_PROTECTION_KEYS + err |= __put_user(get_paca()->paca_amr, &sc->gp_regs[PT_AMR]); +#endif /* CONFIG_PPC64_MEMORY_PROTECTION_KEYS */ + return err; } diff --git a/arch/powerpc/kernel/traps.c b/arch/powerpc/kernel/traps.c index d4e545d..fe1e7c7 100644 --- a/arch/powerpc/kernel/traps.c +++ b/arch/powerpc/kernel/traps.c @@ -20,6 +20,7 @@ #include #include #include +#include #include #include #include @@ -247,6 +248,15 @@ void user_single_step_siginfo(struct task_struct *tsk, info->si_addr = (void __user *)regs->nip; } +#ifdef CONFIG_PPC64_MEMORY_PROTECTION_KEYS +static void fill_sig_info_pkey(int si_code, siginfo_t *info, unsigned long addr) +{ + if (si_code != SEGV_PKUERR) + return; + info->si_pkey = get_paca()->paca_pkey; +} +#endif /* CONFIG_PPC64_MEMORY_PROTECTION_KEYS */ + void _exception(int signr, struct pt_regs *regs, int code, unsigned long addr) { siginfo_t info; @@ -274,6 +284,11 @@ void _exception(int signr, struct pt_regs *regs, int code, unsigned long addr) info.si_signo = signr; info.si_code = code; info.si_addr = (void __user *) addr; + +#ifdef CONFIG_PPC64_MEMORY_PROTECTION_KEYS + fill_sig_info_pkey(code, &info, addr); +#endif /* CONFIG_PPC64_MEMORY_PROTECTION_KEYS */ + force_sig_info(signr, &info, current); } -- 1.7.1 -- To unsubscribe from this list: send the line "unsubscribe linux-doc" in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
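[Editorial note: a SIGSEGV handler installed with SA_SIGINFO can now tell which key fired and inspect the AMR at the time of the fault. A rough sketch, assuming the libc siginfo_t exposes si_pkey and that PT_AMR comes from the patched uapi ptrace.h; error handling omitted:

	#include <signal.h>
	#include <stdio.h>
	#include <ucontext.h>

	static void segv_handler(int sig, siginfo_t *si, void *ctx)
	{
		ucontext_t *uc = ctx;

		if (si->si_code == SEGV_PKUERR)
			/* a real handler should use something async-signal-safe */
			printf("pkey %d violated, AMR at fault: 0x%llx\n",
			       si->si_pkey,
			       (unsigned long long)uc->uc_mcontext.gp_regs[PT_AMR]);
	}

	int main(void)
	{
		struct sigaction sa = {
			.sa_sigaction = segv_handler,
			.sa_flags = SA_SIGINFO,
		};

		sigaction(SIGSEGV, &sa, NULL);
		/* ... trigger a key-protected access here ... */
		return 0;
	}
]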
[RFC v6 38/62] powerpc: implementation for arch_pkeys_enabled()
arch_pkeys_enabled() returns true if the cpu supports protection key, and the kernel has it enabled. Signed-off-by: Ram Pai --- arch/powerpc/include/asm/pkeys.h |5 + 1 files changed, 5 insertions(+), 0 deletions(-) diff --git a/arch/powerpc/include/asm/pkeys.h b/arch/powerpc/include/asm/pkeys.h index 7a9aade..ea43cb2 100644 --- a/arch/powerpc/include/asm/pkeys.h +++ b/arch/powerpc/include/asm/pkeys.h @@ -201,6 +201,11 @@ static inline int arch_set_user_pkey_access(struct task_struct *tsk, int pkey, return __arch_set_user_pkey_access(tsk, pkey, init_val); } +static inline bool arch_pkeys_enabled(void) +{ + return pkey_inited; +} + static inline void pkey_mm_init(struct mm_struct *mm) { if (!pkey_inited) -- 1.7.1 -- To unsubscribe from this list: send the line "unsubscribe linux-doc" in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
[RFC v6 33/62] powerpc: introduce get_pte_pkey() helper
get_pte_pkey() helper returns the pkey associated with a address corresponding to a given mm_struct. Signed-off-by: Ram Pai --- arch/powerpc/include/asm/book3s/64/mmu-hash.h |5 + arch/powerpc/mm/hash_utils_64.c | 25 + 2 files changed, 30 insertions(+), 0 deletions(-) diff --git a/arch/powerpc/include/asm/book3s/64/mmu-hash.h b/arch/powerpc/include/asm/book3s/64/mmu-hash.h index f7a6ed3..369f9ff 100644 --- a/arch/powerpc/include/asm/book3s/64/mmu-hash.h +++ b/arch/powerpc/include/asm/book3s/64/mmu-hash.h @@ -450,6 +450,11 @@ extern int hash_page(unsigned long ea, unsigned long access, unsigned long trap, int __hash_page_huge(unsigned long ea, unsigned long access, unsigned long vsid, pte_t *ptep, unsigned long trap, unsigned long flags, int ssize, unsigned int shift, unsigned int mmu_psize); + +#ifdef CONFIG_PPC64_MEMORY_PROTECTION_KEYS +u16 get_pte_pkey(struct mm_struct *mm, unsigned long address); +#endif /* CONFIG_PPC64_MEMORY_PROTECTION_KEYS */ + #ifdef CONFIG_TRANSPARENT_HUGEPAGE extern int __hash_page_thp(unsigned long ea, unsigned long access, unsigned long vsid, pmd_t *pmdp, unsigned long trap, diff --git a/arch/powerpc/mm/hash_utils_64.c b/arch/powerpc/mm/hash_utils_64.c index 1e74529..6bc8e91 100644 --- a/arch/powerpc/mm/hash_utils_64.c +++ b/arch/powerpc/mm/hash_utils_64.c @@ -1573,6 +1573,31 @@ void hash_preload(struct mm_struct *mm, unsigned long ea, local_irq_restore(flags); } +#ifdef CONFIG_PPC64_MEMORY_PROTECTION_KEYS +/* + * return the protection key associated with the given address + * and the mm_struct. + */ +u16 get_pte_pkey(struct mm_struct *mm, unsigned long address) +{ + pte_t *ptep; + u16 pkey = 0; + unsigned long flags; + + if (!mm || !mm->pgd) + return 0; + + local_irq_save(flags); + ptep = find_linux_pte_or_hugepte(mm->pgd, address, + NULL, NULL); + if (ptep) + pkey = pte_to_pkey_bits(pte_val(READ_ONCE(*ptep))); + local_irq_restore(flags); + + return pkey; +} +#endif /* CONFIG_PPC64_MEMORY_PROTECTION_KEYS */ + #ifdef CONFIG_PPC_TRANSACTIONAL_MEM static inline void tm_flush_hash_page(int local) { -- 1.7.1 -- To unsubscribe from this list: send the line "unsubscribe linux-doc" in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
[RFC v6 34/62] powerpc: capture the violated protection key on fault
Capture the protection key that got violated in paca. This value will be later used to inform the signal handler. Signed-off-by: Ram Pai --- arch/powerpc/include/asm/paca.h |1 + arch/powerpc/kernel/asm-offsets.c |1 + arch/powerpc/mm/fault.c |8 3 files changed, 10 insertions(+), 0 deletions(-) diff --git a/arch/powerpc/include/asm/paca.h b/arch/powerpc/include/asm/paca.h index c8bd1fc..0c06188 100644 --- a/arch/powerpc/include/asm/paca.h +++ b/arch/powerpc/include/asm/paca.h @@ -94,6 +94,7 @@ struct paca_struct { u64 dscr_default; /* per-CPU default DSCR */ #ifdef CONFIG_PPC64_MEMORY_PROTECTION_KEYS u64 paca_amr; /* value of amr at exception */ + u16 paca_pkey; /* exception causing pkey */ #endif /* CONFIG_PPC64_MEMORY_PROTECTION_KEYS */ #ifdef CONFIG_PPC_STD_MMU_64 diff --git a/arch/powerpc/kernel/asm-offsets.c b/arch/powerpc/kernel/asm-offsets.c index 17f5d8a..7dff862 100644 --- a/arch/powerpc/kernel/asm-offsets.c +++ b/arch/powerpc/kernel/asm-offsets.c @@ -244,6 +244,7 @@ int main(void) #ifdef CONFIG_PPC64_MEMORY_PROTECTION_KEYS OFFSET(PACA_AMR, paca_struct, paca_amr); + OFFSET(PACA_PKEY, paca_struct, paca_pkey); #endif /* CONFIG_PPC64_MEMORY_PROTECTION_KEYS */ OFFSET(ACCOUNT_STARTTIME, paca_struct, accounting.starttime); diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c index a6710f5..6423277 100644 --- a/arch/powerpc/mm/fault.c +++ b/arch/powerpc/mm/fault.c @@ -265,6 +265,7 @@ int do_page_fault(struct pt_regs *regs, unsigned long address, if (error_code & DSISR_KEYFAULT) { code = SEGV_PKUERR; get_paca()->paca_amr = read_amr(); + get_paca()->paca_pkey = get_pte_pkey(current->mm, address); goto bad_area_nosemaphore; } #endif /* CONFIG_PPC64_MEMORY_PROTECTION_KEYS */ @@ -453,6 +454,13 @@ int do_page_fault(struct pt_regs *regs, unsigned long address, if (!arch_vma_access_permitted(vma, flags & FAULT_FLAG_WRITE, is_exec, 0)) { get_paca()->paca_amr = read_amr(); + /* +* The pgd-pdt...pmd-pte tree may not have been fully setup. +* Hence we cannot walk the tree to locate the pte, to locate +* the key. Hence lets call vma_pkey() to get the key here +* instead of get_pte_pkey(). +*/ + get_paca()->paca_pkey = vma_pkey(vma); code = SEGV_PKUERR; goto bad_area; } -- 1.7.1 -- To unsubscribe from this list: send the line "unsubscribe linux-doc" in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
[RFC v6 30/62] powerpc: implementation for arch_vma_access_permitted()
This patch provides the implementation for arch_vma_access_permitted(). Returns true if the requested access is allowed by pkey associated with the vma. Signed-off-by: Ram Pai --- arch/powerpc/include/asm/mmu_context.h |5 +++- arch/powerpc/mm/pkeys.c| 43 2 files changed, 47 insertions(+), 1 deletions(-) diff --git a/arch/powerpc/include/asm/mmu_context.h b/arch/powerpc/include/asm/mmu_context.h index 7232484..635d4a6 100644 --- a/arch/powerpc/include/asm/mmu_context.h +++ b/arch/powerpc/include/asm/mmu_context.h @@ -175,6 +175,10 @@ static inline void arch_bprm_mm_init(struct mm_struct *mm, { } +#ifdef CONFIG_PPC64_MEMORY_PROTECTION_KEYS +bool arch_vma_access_permitted(struct vm_area_struct *vma, + bool write, bool execute, bool foreign); +#else /* CONFIG_PPC64_MEMORY_PROTECTION_KEYS */ static inline bool arch_vma_access_permitted(struct vm_area_struct *vma, bool write, bool execute, bool foreign) { @@ -182,7 +186,6 @@ static inline bool arch_vma_access_permitted(struct vm_area_struct *vma, return true; } -#ifndef CONFIG_PPC64_MEMORY_PROTECTION_KEYS #define pkey_initialize() #define pkey_mm_init(mm) diff --git a/arch/powerpc/mm/pkeys.c b/arch/powerpc/mm/pkeys.c index 1794e17..ce1 100644 --- a/arch/powerpc/mm/pkeys.c +++ b/arch/powerpc/mm/pkeys.c @@ -234,3 +234,46 @@ bool arch_pte_access_permitted(u64 pte, bool write, bool execute) return pkey_access_permitted(pte_to_pkey_bits(pte), write, execute); } + +/* + * We only want to enforce protection keys on the current process + * because we effectively have no access to AMR/IAMR for other + * processes or any way to tell *which * AMR/IAMR in a threaded + * process we could use. + * + * So do not enforce things if the VMA is not from the current + * mm, or if we are in a kernel thread. + */ +static inline bool vma_is_foreign(struct vm_area_struct *vma) +{ + if (!current->mm) + return true; + /* +* if the VMA is from another process, then AMR/IAMR has no +* relevance and should not be enforced. +*/ + if (current->mm != vma->vm_mm) + return true; + + return false; +} + +bool arch_vma_access_permitted(struct vm_area_struct *vma, + bool write, bool execute, bool foreign) +{ + int pkey; + + if (!pkey_inited) + return true; + + /* allow access if the VMA is not one from this process */ + if (foreign || vma_is_foreign(vma)) + return true; + + pkey = vma_pkey(vma); + + if (!pkey) + return true; + + return pkey_access_permitted(pkey, write, execute); +} -- 1.7.1 -- To unsubscribe from this list: send the line "unsubscribe linux-doc" in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
[RFC v6 32/62] powerpc: capture AMR register content on pkey violation
capture AMR register contents, and save it in paca whenever a pkey violation is detected. This value will be needed to deliver pkey-violation signal to the task. Signed-off-by: Ram Pai --- arch/powerpc/include/asm/paca.h |3 +++ arch/powerpc/kernel/asm-offsets.c |5 + arch/powerpc/mm/fault.c |2 ++ 3 files changed, 10 insertions(+), 0 deletions(-) diff --git a/arch/powerpc/include/asm/paca.h b/arch/powerpc/include/asm/paca.h index 1c09f8f..c8bd1fc 100644 --- a/arch/powerpc/include/asm/paca.h +++ b/arch/powerpc/include/asm/paca.h @@ -92,6 +92,9 @@ struct paca_struct { struct dtl_entry *dispatch_log_end; #endif /* CONFIG_PPC_STD_MMU_64 */ u64 dscr_default; /* per-CPU default DSCR */ +#ifdef CONFIG_PPC64_MEMORY_PROTECTION_KEYS + u64 paca_amr; /* value of amr at exception */ +#endif /* CONFIG_PPC64_MEMORY_PROTECTION_KEYS */ #ifdef CONFIG_PPC_STD_MMU_64 /* diff --git a/arch/powerpc/kernel/asm-offsets.c b/arch/powerpc/kernel/asm-offsets.c index 709e234..17f5d8a 100644 --- a/arch/powerpc/kernel/asm-offsets.c +++ b/arch/powerpc/kernel/asm-offsets.c @@ -241,6 +241,11 @@ int main(void) OFFSET(PACAHWCPUID, paca_struct, hw_cpu_id); OFFSET(PACAKEXECSTATE, paca_struct, kexec_state); OFFSET(PACA_DSCR_DEFAULT, paca_struct, dscr_default); + +#ifdef CONFIG_PPC64_MEMORY_PROTECTION_KEYS + OFFSET(PACA_AMR, paca_struct, paca_amr); +#endif /* CONFIG_PPC64_MEMORY_PROTECTION_KEYS */ + OFFSET(ACCOUNT_STARTTIME, paca_struct, accounting.starttime); OFFSET(ACCOUNT_STARTTIME_USER, paca_struct, accounting.starttime_user); OFFSET(ACCOUNT_USER_TIME, paca_struct, accounting.utime); diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c index ea74fe2..a6710f5 100644 --- a/arch/powerpc/mm/fault.c +++ b/arch/powerpc/mm/fault.c @@ -264,6 +264,7 @@ int do_page_fault(struct pt_regs *regs, unsigned long address, #ifdef CONFIG_PPC64_MEMORY_PROTECTION_KEYS if (error_code & DSISR_KEYFAULT) { code = SEGV_PKUERR; + get_paca()->paca_amr = read_amr(); goto bad_area_nosemaphore; } #endif /* CONFIG_PPC64_MEMORY_PROTECTION_KEYS */ @@ -451,6 +452,7 @@ int do_page_fault(struct pt_regs *regs, unsigned long address, #ifdef CONFIG_PPC64_MEMORY_PROTECTION_KEYS if (!arch_vma_access_permitted(vma, flags & FAULT_FLAG_WRITE, is_exec, 0)) { + get_paca()->paca_amr = read_amr(); code = SEGV_PKUERR; goto bad_area; } -- 1.7.1 -- To unsubscribe from this list: send the line "unsubscribe linux-doc" in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
[RFC v6 29/62] powerpc: Macro the mask used for checking DSI exception
Replace the magic number used to check for DSI exception with a meaningful value. Signed-off-by: Ram Pai --- arch/powerpc/include/asm/reg.h |7 ++- arch/powerpc/kernel/exceptions-64s.S |2 +- 2 files changed, 7 insertions(+), 2 deletions(-) diff --git a/arch/powerpc/include/asm/reg.h b/arch/powerpc/include/asm/reg.h index 7e50e47..ee04bc0 100644 --- a/arch/powerpc/include/asm/reg.h +++ b/arch/powerpc/include/asm/reg.h @@ -272,16 +272,21 @@ #define SPRN_DAR 0x013 /* Data Address Register */ #define SPRN_DBCR 0x136 /* e300 Data Breakpoint Control Reg */ #define SPRN_DSISR 0x012 /* Data Storage Interrupt Status Register */ +#define DSISR_BIT32 0x8000 /* not defined */ #define DSISR_NOHPTE 0x4000 /* no translation found */ +#define DSISR_PAGEATTR_CONFLT0x2000 /* page attribute conflict */ +#define DSISR_BIT35 0x1000 /* not defined */ #define DSISR_PROTFAULT 0x0800 /* protection fault */ #define DSISR_BADACCESS 0x0400 /* bad access to CI or G */ #define DSISR_ISSTORE0x0200 /* access was a store */ #define DSISR_DABRMATCH 0x0040 /* hit data breakpoint */ -#define DSISR_NOSEGMENT 0x0020 /* SLB miss */ #define DSISR_KEYFAULT 0x0020 /* Key fault */ +#define DSISR_BIT43 0x0010 /* not defined */ #define DSISR_UNSUPP_MMU 0x0008 /* Unsupported MMU config */ #define DSISR_SET_RC 0x0004 /* Failed setting of R/C bits */ #define DSISR_PGDIRFAULT 0x0002 /* Fault on page directory */ +#define DSISR_PAGE_FAULT_MASK (DSISR_BIT32 | DSISR_PAGEATTR_CONFLT | \ + DSISR_BADACCESS | DSISR_DABRMATCH | DSISR_BIT43) #define SPRN_TBRL 0x10C /* Time Base Read Lower Register (user, R/O) */ #define SPRN_TBRU 0x10D /* Time Base Read Upper Register (user, R/O) */ #define SPRN_CIR 0x11B /* Chip Information Register (hyper, R/0) */ diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S index b886795..e154bfe 100644 --- a/arch/powerpc/kernel/exceptions-64s.S +++ b/arch/powerpc/kernel/exceptions-64s.S @@ -1411,7 +1411,7 @@ USE_TEXT_SECTION() .balign IFETCH_ALIGN_BYTES do_hash_page: #ifdef CONFIG_PPC_STD_MMU_64 - andis. r0,r4,0xa450/* weird error? */ + andis. r0,r4,DSISR_PAGE_FAULT_MASK@h bne-handle_page_fault /* if not, try to insert a HPTE */ CURRENT_THREAD_INFO(r11, r1) lwz r0,TI_PREEMPT(r11) /* If we're in an "NMI" */ -- 1.7.1 -- To unsubscribe from this list: send the line "unsubscribe linux-doc" in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
[RFC v6 28/62] powerpc: check key protection for user page access
Make sure that the kernel does not access user pages without checking their key-protection. Signed-off-by: Ram Pai --- arch/powerpc/include/asm/book3s/64/pgtable.h | 14 ++ 1 files changed, 14 insertions(+), 0 deletions(-) diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h index 0056e58..425d98b 100644 --- a/arch/powerpc/include/asm/book3s/64/pgtable.h +++ b/arch/powerpc/include/asm/book3s/64/pgtable.h @@ -474,6 +474,20 @@ static inline void write_uamor(u64 value) #ifdef CONFIG_PPC64_MEMORY_PROTECTION_KEYS extern bool arch_pte_access_permitted(u64 pte, bool write, bool execute); + +#define pte_access_permitted(pte, write) \ + (pte_present(pte) && \ +((!(write) || pte_write(pte)) && \ + arch_pte_access_permitted(pte_val(pte), !!write, 0))) + +/* + * We store key in pmd for huge tlb pages. So need + * to check for key protection. + */ +#define pmd_access_permitted(pmd, write) \ + (pmd_present(pmd) && \ +((!(write) || pmd_write(pmd)) && \ + arch_pte_access_permitted(pmd_val(pmd), !!write, 0))) #endif /* CONFIG_PPC64_MEMORY_PROTECTION_KEYS */ #define __HAVE_ARCH_PTEP_GET_AND_CLEAR -- 1.7.1 -- To unsubscribe from this list: send the line "unsubscribe linux-doc" in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
[RFC v6 26/62] powerpc: Program HPTE key protection bits
Map the PTE protection key bits to the HPTE key protection bits, while creating HPTE entries. Signed-off-by: Ram Pai --- arch/powerpc/include/asm/book3s/64/mmu-hash.h |5 + arch/powerpc/include/asm/pkeys.h | 12 arch/powerpc/mm/hash_utils_64.c |4 3 files changed, 21 insertions(+), 0 deletions(-) diff --git a/arch/powerpc/include/asm/book3s/64/mmu-hash.h b/arch/powerpc/include/asm/book3s/64/mmu-hash.h index 6981a52..f7a6ed3 100644 --- a/arch/powerpc/include/asm/book3s/64/mmu-hash.h +++ b/arch/powerpc/include/asm/book3s/64/mmu-hash.h @@ -90,6 +90,8 @@ #define HPTE_R_PP0 ASM_CONST(0x8000) #define HPTE_R_TS ASM_CONST(0x4000) #define HPTE_R_KEY_HI ASM_CONST(0x3000) +#define HPTE_R_KEY_BIT0ASM_CONST(0x2000) +#define HPTE_R_KEY_BIT1ASM_CONST(0x1000) #define HPTE_R_RPN_SHIFT 12 #define HPTE_R_RPN ASM_CONST(0x0000) #define HPTE_R_RPN_3_0 ASM_CONST(0x01fff000) @@ -104,6 +106,9 @@ #define HPTE_R_C ASM_CONST(0x0080) #define HPTE_R_R ASM_CONST(0x0100) #define HPTE_R_KEY_LO ASM_CONST(0x0e00) +#define HPTE_R_KEY_BIT2ASM_CONST(0x0800) +#define HPTE_R_KEY_BIT3ASM_CONST(0x0400) +#define HPTE_R_KEY_BIT4ASM_CONST(0x0200) #define HPTE_V_1TB_SEG ASM_CONST(0x4000) #define HPTE_V_VRMA_MASK ASM_CONST(0x4001ff00) diff --git a/arch/powerpc/include/asm/pkeys.h b/arch/powerpc/include/asm/pkeys.h index ad39db0..bbb5d85 100644 --- a/arch/powerpc/include/asm/pkeys.h +++ b/arch/powerpc/include/asm/pkeys.h @@ -41,6 +41,18 @@ static inline u64 vmflag_to_page_pkey_bits(u64 vm_flags) ((vm_flags & VM_PKEY_BIT4) ? H_PAGE_PKEY_BIT0 : 0x0UL)); } +static inline u64 pte_to_hpte_pkey_bits(u64 pteflags) +{ + if (!pkey_inited) + return 0x0UL; + + return (((pteflags & H_PAGE_PKEY_BIT0) ? HPTE_R_KEY_BIT0 : 0x0UL) | + ((pteflags & H_PAGE_PKEY_BIT1) ? HPTE_R_KEY_BIT1 : 0x0UL) | + ((pteflags & H_PAGE_PKEY_BIT2) ? HPTE_R_KEY_BIT2 : 0x0UL) | + ((pteflags & H_PAGE_PKEY_BIT3) ? HPTE_R_KEY_BIT3 : 0x0UL) | + ((pteflags & H_PAGE_PKEY_BIT4) ? HPTE_R_KEY_BIT4 : 0x0UL)); +} + static inline int vma_pkey(struct vm_area_struct *vma) { if (!pkey_inited) diff --git a/arch/powerpc/mm/hash_utils_64.c b/arch/powerpc/mm/hash_utils_64.c index f88423b..1e74529 100644 --- a/arch/powerpc/mm/hash_utils_64.c +++ b/arch/powerpc/mm/hash_utils_64.c @@ -231,6 +231,10 @@ unsigned long htab_convert_pte_flags(unsigned long pteflags) */ rflags |= HPTE_R_M; +#ifdef CONFIG_PPC64_MEMORY_PROTECTION_KEYS + rflags |= pte_to_hpte_pkey_bits(pteflags); +#endif + return rflags; } -- 1.7.1 -- To unsubscribe from this list: send the line "unsubscribe linux-doc" in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
[RFC v6 27/62] powerpc: helper to validate key-access permissions of a pte
helper function that checks if the read/write/execute is allowed on the pte. Signed-off-by: Ram Pai --- arch/powerpc/include/asm/book3s/64/pgtable.h |4 +++ arch/powerpc/include/asm/pkeys.h | 12 + arch/powerpc/mm/pkeys.c | 33 ++ 3 files changed, 49 insertions(+), 0 deletions(-) diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h index 30d7f55..0056e58 100644 --- a/arch/powerpc/include/asm/book3s/64/pgtable.h +++ b/arch/powerpc/include/asm/book3s/64/pgtable.h @@ -472,6 +472,10 @@ static inline void write_uamor(u64 value) mtspr(SPRN_UAMOR, value); } +#ifdef CONFIG_PPC64_MEMORY_PROTECTION_KEYS +extern bool arch_pte_access_permitted(u64 pte, bool write, bool execute); +#endif /* CONFIG_PPC64_MEMORY_PROTECTION_KEYS */ + #define __HAVE_ARCH_PTEP_GET_AND_CLEAR static inline pte_t ptep_get_and_clear(struct mm_struct *mm, unsigned long addr, pte_t *ptep) diff --git a/arch/powerpc/include/asm/pkeys.h b/arch/powerpc/include/asm/pkeys.h index bbb5d85..7a9aade 100644 --- a/arch/powerpc/include/asm/pkeys.h +++ b/arch/powerpc/include/asm/pkeys.h @@ -53,6 +53,18 @@ static inline u64 pte_to_hpte_pkey_bits(u64 pteflags) ((pteflags & H_PAGE_PKEY_BIT4) ? HPTE_R_KEY_BIT4 : 0x0UL)); } +static inline u16 pte_to_pkey_bits(u64 pteflags) +{ + if (!pkey_inited) + return 0x0UL; + + return (((pteflags & H_PAGE_PKEY_BIT0) ? 0x10 : 0x0UL) | + ((pteflags & H_PAGE_PKEY_BIT1) ? 0x8 : 0x0UL) | + ((pteflags & H_PAGE_PKEY_BIT2) ? 0x4 : 0x0UL) | + ((pteflags & H_PAGE_PKEY_BIT3) ? 0x2 : 0x0UL) | + ((pteflags & H_PAGE_PKEY_BIT4) ? 0x1 : 0x0UL)); +} + static inline int vma_pkey(struct vm_area_struct *vma) { if (!pkey_inited) diff --git a/arch/powerpc/mm/pkeys.c b/arch/powerpc/mm/pkeys.c index 403f5ae..1794e17 100644 --- a/arch/powerpc/mm/pkeys.c +++ b/arch/powerpc/mm/pkeys.c @@ -201,3 +201,36 @@ int __arch_override_mprotect_pkey(struct vm_area_struct *vma, int prot, */ return vma_pkey(vma); } + +static bool pkey_access_permitted(int pkey, bool write, bool execute) +{ + int pkey_shift; + u64 amr; + + if (!pkey) + return true; + + pkey_shift = pkeyshift(pkey); + if (!(read_uamor() & (0x3UL << pkey_shift))) + return true; + + if (execute && !(read_iamr() & (IAMR_EX_BIT << pkey_shift))) + return true; + + if (!write) { + amr = read_amr(); + if (!(amr & (AMR_RD_BIT << pkey_shift))) + return true; + } + + amr = read_amr(); /* delay reading amr uptil absolutely needed */ + return (write && !(amr & (AMR_WR_BIT << pkey_shift))); +} + +bool arch_pte_access_permitted(u64 pte, bool write, bool execute) +{ + if (!pkey_inited) + return true; + return pkey_access_permitted(pte_to_pkey_bits(pte), + write, execute); +} -- 1.7.1 -- To unsubscribe from this list: send the line "unsubscribe linux-doc" in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
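[Editorial note: a worked example of the shift arithmetic above. All three registers hold two bits per key, with key 0 occupying the most significant pair:

	/*
	 * For pkey = 2:
	 *   pkeyshift(2) = (arch_max_pkey() - 2 - 1) * AMR_BITS_PER_PKEY
	 *                = 29 * 2 = 58
	 *
	 *   AMR_RD_BIT << 58   -> bit 58 set: reads disabled for key 2
	 *   AMR_WR_BIT << 58   -> bit 59 set: writes disabled for key 2
	 *   IAMR_EX_BIT << 58  -> bit 58 of the IAMR: execute disabled
	 *   0x3UL << 58 clear in UAMOR -> key 2 is not user-managed, so
	 *                                 pkey_access_permitted() returns true
	 */
]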
[RFC v6 25/62] powerpc: sys_pkey_mprotect() system call
Patch provides the ability for a process to associate a pkey with a address range. Signed-off-by: Ram Pai --- arch/powerpc/include/asm/systbl.h |1 + arch/powerpc/include/asm/unistd.h |4 +--- arch/powerpc/include/uapi/asm/unistd.h |1 + 3 files changed, 3 insertions(+), 3 deletions(-) diff --git a/arch/powerpc/include/asm/systbl.h b/arch/powerpc/include/asm/systbl.h index 22dd776..b33b551 100644 --- a/arch/powerpc/include/asm/systbl.h +++ b/arch/powerpc/include/asm/systbl.h @@ -390,3 +390,4 @@ SYSCALL(statx) SYSCALL(pkey_alloc) SYSCALL(pkey_free) +SYSCALL(pkey_mprotect) diff --git a/arch/powerpc/include/asm/unistd.h b/arch/powerpc/include/asm/unistd.h index e0273bc..daf1ba9 100644 --- a/arch/powerpc/include/asm/unistd.h +++ b/arch/powerpc/include/asm/unistd.h @@ -12,12 +12,10 @@ #include -#define NR_syscalls386 +#define NR_syscalls387 #define __NR__exit __NR_exit -#define __IGNORE_pkey_mprotect - #ifndef __ASSEMBLY__ #include diff --git a/arch/powerpc/include/uapi/asm/unistd.h b/arch/powerpc/include/uapi/asm/unistd.h index 7993a07..71ae45e 100644 --- a/arch/powerpc/include/uapi/asm/unistd.h +++ b/arch/powerpc/include/uapi/asm/unistd.h @@ -396,5 +396,6 @@ #define __NR_statx 383 #define __NR_pkey_alloc384 #define __NR_pkey_free 385 +#define __NR_pkey_mprotect 386 #endif /* _UAPI_ASM_POWERPC_UNISTD_H_ */ -- 1.7.1 -- To unsubscribe from this list: send the line "unsubscribe linux-doc" in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
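[Editorial note: with pkey_mprotect() wired up the userspace API is complete. A typical sequence, sketched with the glibc wrappers (fall back to syscall() with the numbers above if the libc does not provide them yet); error handling omitted:

	int pkey = pkey_alloc(0, PKEY_DISABLE_WRITE);
	char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
		       MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);

	pkey_mprotect(p, 4096, PROT_READ | PROT_WRITE, pkey);
	/* loads from p still succeed; a store now faults with SEGV_PKUERR */

	pkey_free(pkey);
]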
[RFC v6 24/62] powerpc: map vma key-protection bits to pte key bits.
map the pkey bits in the pte from the key protection bits of the vma. The pte bits used for pkey are 3,4,5,6 and 57. The first four bits are the same four bits that were freed up initially in this patch series. remember? :-) Without those four bits this patch would'nt be possible. Signed-off-by: Ram Pai --- arch/powerpc/include/asm/book3s/64/pgtable.h | 20 +++- arch/powerpc/include/asm/mman.h |8 arch/powerpc/include/asm/pkeys.h | 12 3 files changed, 39 insertions(+), 1 deletions(-) diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h index d4da0e9..30d7f55 100644 --- a/arch/powerpc/include/asm/book3s/64/pgtable.h +++ b/arch/powerpc/include/asm/book3s/64/pgtable.h @@ -37,6 +37,7 @@ #define _RPAGE_RSV20x0800UL #define _RPAGE_RSV30x0400UL #define _RPAGE_RSV40x0200UL +#define _RPAGE_RSV50x00040UL #define _PAGE_PTE 0x4000UL/* distinguishes PTEs from pointers */ #define _PAGE_PRESENT 0x8000UL/* pte contains a translation */ @@ -56,6 +57,20 @@ /* Max physical address bit as per radix table */ #define _RPAGE_PA_MAX 57 +#ifdef CONFIG_PPC64_MEMORY_PROTECTION_KEYS +#define H_PAGE_PKEY_BIT0 _RPAGE_RSV1 +#define H_PAGE_PKEY_BIT1 _RPAGE_RSV2 +#define H_PAGE_PKEY_BIT2 _RPAGE_RSV3 +#define H_PAGE_PKEY_BIT3 _RPAGE_RSV4 +#define H_PAGE_PKEY_BIT4 _RPAGE_RSV5 +#else /* CONFIG_PPC64_MEMORY_PROTECTION_KEYS */ +#define H_PAGE_PKEY_BIT0 0 +#define H_PAGE_PKEY_BIT1 0 +#define H_PAGE_PKEY_BIT2 0 +#define H_PAGE_PKEY_BIT3 0 +#define H_PAGE_PKEY_BIT4 0 +#endif /* CONFIG_PPC64_MEMORY_PROTECTION_KEYS */ + /* * Max physical address bit we will use for now. * @@ -116,13 +131,16 @@ #define _PAGE_CHG_MASK (PTE_RPN_MASK | _PAGE_HPTEFLAGS | _PAGE_DIRTY | \ _PAGE_ACCESSED | _PAGE_SPECIAL | _PAGE_PTE | \ _PAGE_SOFT_DIRTY) + +#define H_PAGE_PKEY (H_PAGE_PKEY_BIT0 | H_PAGE_PKEY_BIT1 | H_PAGE_PKEY_BIT2 | \ + H_PAGE_PKEY_BIT3 | H_PAGE_PKEY_BIT4) /* * Mask of bits returned by pte_pgprot() */ #define PAGE_PROT_BITS (_PAGE_SAO | _PAGE_NON_IDEMPOTENT | _PAGE_TOLERANT | \ H_PAGE_4K_PFN | _PAGE_PRIVILEGED | _PAGE_ACCESSED | \ _PAGE_READ | _PAGE_WRITE | _PAGE_DIRTY | _PAGE_EXEC | \ -_PAGE_SOFT_DIRTY) +_PAGE_SOFT_DIRTY | H_PAGE_PKEY) /* * We define 2 sets of base prot bits, one for basic pages (ie, * cacheable kernel and user pages) and one for non cacheable diff --git a/arch/powerpc/include/asm/mman.h b/arch/powerpc/include/asm/mman.h index 067eec2..3f7220f 100644 --- a/arch/powerpc/include/asm/mman.h +++ b/arch/powerpc/include/asm/mman.h @@ -32,12 +32,20 @@ static inline unsigned long arch_calc_vm_prot_bits(unsigned long prot, } #define arch_calc_vm_prot_bits(prot, pkey) arch_calc_vm_prot_bits(prot, pkey) + static inline pgprot_t arch_vm_get_page_prot(unsigned long vm_flags) { +#ifdef CONFIG_PPC64_MEMORY_PROTECTION_KEYS + return (vm_flags & VM_SAO) ? + __pgprot(_PAGE_SAO | vmflag_to_page_pkey_bits(vm_flags)) : + __pgprot(0 | vmflag_to_page_pkey_bits(vm_flags)); +#else return (vm_flags & VM_SAO) ? __pgprot(_PAGE_SAO) : __pgprot(0); +#endif } #define arch_vm_get_page_prot(vm_flags) arch_vm_get_page_prot(vm_flags) + static inline bool arch_validate_prot(unsigned long prot) { if (prot & ~(PROT_READ | PROT_WRITE | PROT_EXEC | PROT_SEM | PROT_SAO)) diff --git a/arch/powerpc/include/asm/pkeys.h b/arch/powerpc/include/asm/pkeys.h index 94013af..ad39db0 100644 --- a/arch/powerpc/include/asm/pkeys.h +++ b/arch/powerpc/include/asm/pkeys.h @@ -29,6 +29,18 @@ static inline u64 pkey_to_vmflag_bits(u16 pkey) ((pkey & 0x10UL) ? 
VM_PKEY_BIT4 : 0x0UL)); } +static inline u64 vmflag_to_page_pkey_bits(u64 vm_flags) +{ + if (!pkey_inited) + return 0x0UL; + + return (((vm_flags & VM_PKEY_BIT0) ? H_PAGE_PKEY_BIT4 : 0x0UL) | + ((vm_flags & VM_PKEY_BIT1) ? H_PAGE_PKEY_BIT3 : 0x0UL) | + ((vm_flags & VM_PKEY_BIT2) ? H_PAGE_PKEY_BIT2 : 0x0UL) | + ((vm_flags & VM_PKEY_BIT3) ? H_PAGE_PKEY_BIT1 : 0x0UL) | + ((vm_flags & VM_PKEY_BIT4) ? H_PAGE_PKEY_BIT0 : 0x0UL)); +} + static inline int vma_pkey(struct vm_area_struct *vma) { if (!pkey_inited) -- 1.7.1 -- To unsubscribe from this list: send the line "unsubscribe linux-doc" in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
[RFC v6 21/62] powerpc: introduce execute-only pkey
This patch provides the implementation of execute-only pkey. The architecture-independent expects the ability to create and manage a special key which has execute-only permission. Signed-off-by: Ram Pai --- arch/powerpc/include/asm/book3s/64/mmu.h |1 + arch/powerpc/include/asm/pkeys.h |8 - arch/powerpc/mm/pkeys.c | 57 ++ 3 files changed, 65 insertions(+), 1 deletions(-) diff --git a/arch/powerpc/include/asm/book3s/64/mmu.h b/arch/powerpc/include/asm/book3s/64/mmu.h index 104ad72..0c0a2a8 100644 --- a/arch/powerpc/include/asm/book3s/64/mmu.h +++ b/arch/powerpc/include/asm/book3s/64/mmu.h @@ -116,6 +116,7 @@ struct patb_entry { * bit unset -> key available for allocation */ u32 pkey_allocation_map; + s16 execute_only_pkey; /* key holding execute-only protection */ #endif } mm_context_t; diff --git a/arch/powerpc/include/asm/pkeys.h b/arch/powerpc/include/asm/pkeys.h index 0e744f1..1864148 100644 --- a/arch/powerpc/include/asm/pkeys.h +++ b/arch/powerpc/include/asm/pkeys.h @@ -118,11 +118,15 @@ static inline int mm_pkey_free(struct mm_struct *mm, int pkey) * Try to dedicate one of the protection keys to be used as an * execute-only protection key. */ +extern int __execute_only_pkey(struct mm_struct *mm); static inline int execute_only_pkey(struct mm_struct *mm) { - return 0; + if (!pkey_inited) + return -1; + return __execute_only_pkey(mm); } + static inline int arch_override_mprotect_pkey(struct vm_area_struct *vma, int prot, int pkey) { @@ -144,6 +148,8 @@ static inline void pkey_mm_init(struct mm_struct *mm) if (!pkey_inited) return; mm_pkey_allocation_map(mm) = PKEY_INITIAL_ALLOCAION; + /* -1 means unallocated or invalid */ + mm->context.execute_only_pkey = -1; } static inline void pkey_initialize(void) diff --git a/arch/powerpc/mm/pkeys.c b/arch/powerpc/mm/pkeys.c index b9ad98d..34e8557 100644 --- a/arch/powerpc/mm/pkeys.c +++ b/arch/powerpc/mm/pkeys.c @@ -97,3 +97,60 @@ int __arch_set_user_pkey_access(struct task_struct *tsk, int pkey, init_iamr(pkey, new_iamr_bits); return 0; } + +static inline bool pkey_allows_readwrite(int pkey) +{ + int pkey_shift = pkeyshift(pkey); + + if (!(read_uamor() & (0x3UL << pkey_shift))) + return true; + + return !(read_amr() & ((AMR_RD_BIT|AMR_WR_BIT) << pkey_shift)); +} + +int __execute_only_pkey(struct mm_struct *mm) +{ + bool need_to_set_mm_pkey = false; + int execute_only_pkey = mm->context.execute_only_pkey; + int ret; + + /* Do we need to assign a pkey for mm's execute-only maps? */ + if (execute_only_pkey == -1) { + /* Go allocate one to use, which might fail */ + execute_only_pkey = mm_pkey_alloc(mm); + if (execute_only_pkey < 0) + return -1; + need_to_set_mm_pkey = true; + } + + /* +* We do not want to go through the relatively costly +* dance to set AMR if we do not need to. Check it +* first and assume that if the execute-only pkey is +* readwrite-disabled than we do not have to set it +* ourselves. +*/ + if (!need_to_set_mm_pkey && + !pkey_allows_readwrite(execute_only_pkey)) + return execute_only_pkey; + + /* +* Set up AMR so that it denies access for everything +* other than execution. +*/ + ret = __arch_set_user_pkey_access(current, execute_only_pkey, + (PKEY_DISABLE_ACCESS | PKEY_DISABLE_WRITE)); + /* +* If the AMR-set operation failed somehow, just return +* 0 and effectively disable execute-only support. 
+*/ + if (ret) { + mm_set_pkey_free(mm, execute_only_pkey); + return -1; + } + + /* We got one, store it and use it from here on out */ + if (need_to_set_mm_pkey) + mm->context.execute_only_pkey = execute_only_pkey; + return execute_only_pkey; +} -- 1.7.1 -- To unsubscribe from this list: send the line "unsubscribe linux-doc" in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
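[Editorial note: the intended userspace-visible behaviour, mirroring what x86 already does with its execute-only pkey (illustrative; it depends on the mprotect override added elsewhere in this series):

	mprotect(code, len, PROT_EXEC);
	/*
	 * Instruction fetch from 'code' succeeds because the kernel quietly
	 * backs the mapping with the execute-only key; any load or store to
	 * it now raises SIGSEGV with si_code == SEGV_PKUERR.
	 */
]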
[RFC v6 19/62] powerpc: ability to create execute-disabled pkeys
powerpc has hardware support to disable execute on a pkey. This patch enables the ability to create execute-disabled keys. Signed-off-by: Ram Pai --- arch/powerpc/include/asm/pkeys.h | 12 arch/powerpc/mm/pkeys.c | 10 ++ 2 files changed, 22 insertions(+), 0 deletions(-) diff --git a/arch/powerpc/include/asm/pkeys.h b/arch/powerpc/include/asm/pkeys.h index 1943e6b..0e744f1 100644 --- a/arch/powerpc/include/asm/pkeys.h +++ b/arch/powerpc/include/asm/pkeys.h @@ -2,6 +2,18 @@ #define _ASM_PPC64_PKEYS_H extern bool pkey_inited; +/* override any generic PKEY Permission defines */ +#undef PKEY_DISABLE_ACCESS +#define PKEY_DISABLE_ACCESS0x1 +#undef PKEY_DISABLE_WRITE +#define PKEY_DISABLE_WRITE 0x2 +#undef PKEY_DISABLE_EXECUTE +#define PKEY_DISABLE_EXECUTE 0x4 +#undef PKEY_ACCESS_MASK +#define PKEY_ACCESS_MASK (PKEY_DISABLE_ACCESS |\ + PKEY_DISABLE_WRITE |\ + PKEY_DISABLE_EXECUTE) + #define arch_max_pkey() 32 #define AMR_RD_BIT 0x1UL #define AMR_WR_BIT 0x2UL diff --git a/arch/powerpc/mm/pkeys.c b/arch/powerpc/mm/pkeys.c index 98d0391..b9ad98d 100644 --- a/arch/powerpc/mm/pkeys.c +++ b/arch/powerpc/mm/pkeys.c @@ -73,6 +73,7 @@ int __arch_set_user_pkey_access(struct task_struct *tsk, int pkey, unsigned long init_val) { u64 new_amr_bits = 0x0ul; + u64 new_iamr_bits = 0x0ul; if (!is_pkey_enabled(pkey)) return -1; @@ -85,5 +86,14 @@ int __arch_set_user_pkey_access(struct task_struct *tsk, int pkey, init_amr(pkey, new_amr_bits); + /* +* By default execute is disabled. +* To enable execute, PKEY_ENABLE_EXECUTE +* needs to be specified. +*/ + if ((init_val & PKEY_DISABLE_EXECUTE)) + new_iamr_bits |= IAMR_EX_BIT; + + init_iamr(pkey, new_iamr_bits); return 0; } -- 1.7.1 -- To unsubscribe from this list: send the line "unsubscribe linux-doc" in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
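[Editorial note: usage sketch for the new PKEY_DISABLE_EXECUTE flag, a powerpc extension over the generic PKEY_DISABLE_ACCESS/PKEY_DISABLE_WRITE bits, assuming the pkey_alloc()/pkey_mprotect() calls added elsewhere in this series; error handling omitted:

	int pkey = pkey_alloc(0, PKEY_DISABLE_EXECUTE);

	pkey_mprotect(code, len, PROT_READ | PROT_WRITE | PROT_EXEC, pkey);
	/* loads and stores still work; branching into 'code' now takes
	 * an instruction-fetch key fault */
]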
[RFC v6 17/62] powerpc: implementation for arch_set_user_pkey_access()
This patch provides the detailed implementation for a user to allocate a key and enable it in the hardware. It provides the plumbing, but it cannot be used till the system call is implemented. The next patch will do so. Signed-off-by: Ram Pai --- arch/powerpc/include/asm/pkeys.h | 10 +- arch/powerpc/mm/pkeys.c | 27 +++ 2 files changed, 36 insertions(+), 1 deletions(-) diff --git a/arch/powerpc/include/asm/pkeys.h b/arch/powerpc/include/asm/pkeys.h index 7f5c21d..1943e6b 100644 --- a/arch/powerpc/include/asm/pkeys.h +++ b/arch/powerpc/include/asm/pkeys.h @@ -3,6 +3,10 @@ extern bool pkey_inited; #define arch_max_pkey() 32 +#define AMR_RD_BIT 0x1UL +#define AMR_WR_BIT 0x2UL +#define IAMR_EX_BIT 0x1UL +#define AMR_BITS_PER_PKEY 2 #define ARCH_VM_PKEY_FLAGS (VM_PKEY_BIT0 | VM_PKEY_BIT1 | VM_PKEY_BIT2 | \ VM_PKEY_BIT3 | VM_PKEY_BIT4) #define AMR_BITS_PER_PKEY 2 @@ -113,10 +117,14 @@ static inline int arch_override_mprotect_pkey(struct vm_area_struct *vma, return 0; } +extern int __arch_set_user_pkey_access(struct task_struct *tsk, int pkey, + unsigned long init_val); static inline int arch_set_user_pkey_access(struct task_struct *tsk, int pkey, unsigned long init_val) { - return 0; + if (!pkey_inited) + return -1; + return __arch_set_user_pkey_access(tsk, pkey, init_val); } static inline void pkey_mm_init(struct mm_struct *mm) diff --git a/arch/powerpc/mm/pkeys.c b/arch/powerpc/mm/pkeys.c index 04ee361..98d0391 100644 --- a/arch/powerpc/mm/pkeys.c +++ b/arch/powerpc/mm/pkeys.c @@ -17,6 +17,10 @@ bool pkey_inited; #define pkeyshift(pkey) ((arch_max_pkey()-pkey-1) * AMR_BITS_PER_PKEY) +static bool is_pkey_enabled(int pkey) +{ + return !!(read_uamor() & (0x3ul << pkeyshift(pkey))); +} static inline void init_amr(int pkey, u8 init_bits) { @@ -60,3 +64,26 @@ void __arch_deactivate_pkey(int pkey) { pkey_status_change(pkey, false); } + +/* + * set the access right in AMR IAMR and UAMOR register + * for @pkey to that specified in @init_val. + */ +int __arch_set_user_pkey_access(struct task_struct *tsk, int pkey, + unsigned long init_val) +{ + u64 new_amr_bits = 0x0ul; + + if (!is_pkey_enabled(pkey)) + return -1; + + /* Set the bits we need in AMR: */ + if (init_val & PKEY_DISABLE_ACCESS) + new_amr_bits |= AMR_RD_BIT | AMR_WR_BIT; + else if (init_val & PKEY_DISABLE_WRITE) + new_amr_bits |= AMR_WR_BIT; + + init_amr(pkey, new_amr_bits); + + return 0; +} -- 1.7.1 -- To unsubscribe from this list: send the line "unsubscribe linux-doc" in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
[RFC v6 20/62] powerpc: store and restore the pkey state across context switches
Store and restore the AMR, IAMR and UMOR register state of the task before scheduling out and after scheduling in, respectively. Signed-off-by: Ram Pai --- arch/powerpc/include/asm/processor.h |5 + arch/powerpc/kernel/process.c| 18 ++ 2 files changed, 23 insertions(+), 0 deletions(-) diff --git a/arch/powerpc/include/asm/processor.h b/arch/powerpc/include/asm/processor.h index 1189d04..dcb1cf0 100644 --- a/arch/powerpc/include/asm/processor.h +++ b/arch/powerpc/include/asm/processor.h @@ -309,6 +309,11 @@ struct thread_struct { struct thread_vr_state ckvr_state; /* Checkpointed VR state */ unsigned long ckvrsave; /* Checkpointed VRSAVE */ #endif /* CONFIG_PPC_TRANSACTIONAL_MEM */ +#ifdef CONFIG_PPC64_MEMORY_PROTECTION_KEYS + unsigned long amr; + unsigned long iamr; + unsigned long uamor; +#endif #ifdef CONFIG_KVM_BOOK3S_32_HANDLER void* kvm_shadow_vcpu; /* KVM internal data */ #endif /* CONFIG_KVM_BOOK3S_32_HANDLER */ diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c index 2ad725e..9429361 100644 --- a/arch/powerpc/kernel/process.c +++ b/arch/powerpc/kernel/process.c @@ -1096,6 +1096,11 @@ static inline void save_sprs(struct thread_struct *t) t->tar = mfspr(SPRN_TAR); } #endif +#ifdef CONFIG_PPC64_MEMORY_PROTECTION_KEYS + t->amr = mfspr(SPRN_AMR); + t->iamr = mfspr(SPRN_IAMR); + t->uamor = mfspr(SPRN_UAMOR); +#endif } static inline void restore_sprs(struct thread_struct *old_thread, @@ -1131,6 +1136,14 @@ static inline void restore_sprs(struct thread_struct *old_thread, mtspr(SPRN_TAR, new_thread->tar); } #endif +#ifdef CONFIG_PPC64_MEMORY_PROTECTION_KEYS + if (old_thread->amr != new_thread->amr) + mtspr(SPRN_AMR, new_thread->amr); + if (old_thread->iamr != new_thread->iamr) + mtspr(SPRN_IAMR, new_thread->iamr); + if (old_thread->uamor != new_thread->uamor) + mtspr(SPRN_UAMOR, new_thread->uamor); +#endif } struct task_struct *__switch_to(struct task_struct *prev, @@ -1689,6 +1702,11 @@ void start_thread(struct pt_regs *regs, unsigned long start, unsigned long sp) current->thread.tm_tfiar = 0; current->thread.load_tm = 0; #endif /* CONFIG_PPC_TRANSACTIONAL_MEM */ +#ifdef CONFIG_PPC64_MEMORY_PROTECTION_KEYS + current->thread.amr = 0x0ul; + current->thread.iamr = 0x0ul; + current->thread.uamor = 0x0ul; +#endif /* CONFIG_PPC64_MEMORY_PROTECTION_KEYS */ } EXPORT_SYMBOL(start_thread); -- 1.7.1 -- To unsubscribe from this list: send the line "unsubscribe linux-doc" in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
[RFC v6 16/62] powerpc: cleanup AMR, IAMR when a key is allocated or freed
cleanup the bits corresponding to a key in the AMR, and IAMR register, when the key is newly allocated/activated or is freed. We dont want some residual bits cause the hardware enforce unintended behavior when the key is activated or freed. Signed-off-by: Ram Pai --- arch/powerpc/include/asm/pkeys.h | 12 1 files changed, 12 insertions(+), 0 deletions(-) diff --git a/arch/powerpc/include/asm/pkeys.h b/arch/powerpc/include/asm/pkeys.h index 4327842..7f5c21d 100644 --- a/arch/powerpc/include/asm/pkeys.h +++ b/arch/powerpc/include/asm/pkeys.h @@ -42,6 +42,8 @@ static inline bool mm_pkey_is_allocated(struct mm_struct *mm, int pkey) mm_set_pkey_is_allocated(mm, pkey)); } +extern void __arch_activate_pkey(int pkey); +extern void __arch_deactivate_pkey(int pkey); /* * Returns a positive, 5-bit key on success, or -1 on failure. */ @@ -70,6 +72,12 @@ static inline int mm_pkey_alloc(struct mm_struct *mm) ffz((u32)mm_pkey_allocation_map(mm)) - 1; mm_set_pkey_allocated(mm, ret); + + /* +* enable the key in the hardware +*/ + if (ret > 0) + __arch_activate_pkey(ret); return ret; } @@ -81,6 +89,10 @@ static inline int mm_pkey_free(struct mm_struct *mm, int pkey) if (!mm_pkey_is_allocated(mm, pkey)) return -EINVAL; + /* +* Disable the key in the hardware +*/ + __arch_deactivate_pkey(pkey); mm_set_pkey_free(mm, pkey); return 0; -- 1.7.1 -- To unsubscribe from this list: send the line "unsubscribe linux-doc" in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
[RFC v6 18/62] powerpc: sys_pkey_alloc() and sys_pkey_free() system calls
Finally this patch provides the ability for a process to allocate and free a protection key. Signed-off-by: Ram Pai --- arch/powerpc/include/asm/systbl.h |2 ++ arch/powerpc/include/asm/unistd.h |4 +--- arch/powerpc/include/uapi/asm/unistd.h |2 ++ 3 files changed, 5 insertions(+), 3 deletions(-) diff --git a/arch/powerpc/include/asm/systbl.h b/arch/powerpc/include/asm/systbl.h index 1c94708..22dd776 100644 --- a/arch/powerpc/include/asm/systbl.h +++ b/arch/powerpc/include/asm/systbl.h @@ -388,3 +388,5 @@ COMPAT_SYS_SPU(pwritev2) SYSCALL(kexec_file_load) SYSCALL(statx) +SYSCALL(pkey_alloc) +SYSCALL(pkey_free) diff --git a/arch/powerpc/include/asm/unistd.h b/arch/powerpc/include/asm/unistd.h index 9ba11db..e0273bc 100644 --- a/arch/powerpc/include/asm/unistd.h +++ b/arch/powerpc/include/asm/unistd.h @@ -12,13 +12,11 @@ #include -#define NR_syscalls384 +#define NR_syscalls386 #define __NR__exit __NR_exit #define __IGNORE_pkey_mprotect -#define __IGNORE_pkey_alloc -#define __IGNORE_pkey_free #ifndef __ASSEMBLY__ diff --git a/arch/powerpc/include/uapi/asm/unistd.h b/arch/powerpc/include/uapi/asm/unistd.h index b85f142..7993a07 100644 --- a/arch/powerpc/include/uapi/asm/unistd.h +++ b/arch/powerpc/include/uapi/asm/unistd.h @@ -394,5 +394,7 @@ #define __NR_pwritev2 381 #define __NR_kexec_file_load 382 #define __NR_statx 383 +#define __NR_pkey_alloc384 +#define __NR_pkey_free 385 #endif /* _UAPI_ASM_POWERPC_UNISTD_H_ */ -- 1.7.1 -- To unsubscribe from this list: send the line "unsubscribe linux-doc" in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
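[Editorial note: since libc wrappers for powerpc may lag, callers (including the selftests) can invoke the new system calls directly. A minimal sketch using the numbers defined above; treat the wrapper names as illustrative:

	#include <unistd.h>
	#include <sys/syscall.h>

	static int sys_pkey_alloc(unsigned long flags, unsigned long init_access)
	{
		return syscall(__NR_pkey_alloc, flags, init_access);
	}

	static int sys_pkey_free(int pkey)
	{
		return syscall(__NR_pkey_free, pkey);
	}
]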
[RFC v6 13/62] powerpc: track allocation status of all pkeys
Total 32 keys are supported on powerpc. However pkey 0,1 and 31 are reserved. So effectively we have 29 pkeys. This patch keeps track of reserved keys, allocated keys and keys that are currently free. Also it adds skeletal functions and macros, that the architecture-independent code expects to be available. Signed-off-by: Ram Pai --- arch/powerpc/include/asm/book3s/64/mmu.h |9 +++ arch/powerpc/include/asm/mmu_context.h |1 + arch/powerpc/include/asm/pkeys.h | 81 -- arch/powerpc/mm/mmu_context_book3s64.c |2 + 4 files changed, 89 insertions(+), 4 deletions(-) diff --git a/arch/powerpc/include/asm/book3s/64/mmu.h b/arch/powerpc/include/asm/book3s/64/mmu.h index 77529a3..104ad72 100644 --- a/arch/powerpc/include/asm/book3s/64/mmu.h +++ b/arch/powerpc/include/asm/book3s/64/mmu.h @@ -108,6 +108,15 @@ struct patb_entry { #ifdef CONFIG_SPAPR_TCE_IOMMU struct list_head iommu_group_mem_list; #endif + +#ifdef CONFIG_PPC64_MEMORY_PROTECTION_KEYS + /* +* Each bit represents one protection key. +* bit set -> key allocated +* bit unset -> key available for allocation +*/ + u32 pkey_allocation_map; +#endif } mm_context_t; /* diff --git a/arch/powerpc/include/asm/mmu_context.h b/arch/powerpc/include/asm/mmu_context.h index 4b93547..4705dab 100644 --- a/arch/powerpc/include/asm/mmu_context.h +++ b/arch/powerpc/include/asm/mmu_context.h @@ -184,6 +184,7 @@ static inline bool arch_vma_access_permitted(struct vm_area_struct *vma, #ifndef CONFIG_PPC64_MEMORY_PROTECTION_KEYS #define pkey_initialize() +#define pkey_mm_init(mm) #endif /* CONFIG_PPC64_MEMORY_PROTECTION_KEYS */ #endif /* __KERNEL__ */ diff --git a/arch/powerpc/include/asm/pkeys.h b/arch/powerpc/include/asm/pkeys.h index 203d7de..09b268e 100644 --- a/arch/powerpc/include/asm/pkeys.h +++ b/arch/powerpc/include/asm/pkeys.h @@ -2,21 +2,87 @@ #define _ASM_PPC64_PKEYS_H extern bool pkey_inited; -#define ARCH_VM_PKEY_FLAGS 0 +#define arch_max_pkey() 32 +#define ARCH_VM_PKEY_FLAGS (VM_PKEY_BIT0 | VM_PKEY_BIT1 | VM_PKEY_BIT2 | \ + VM_PKEY_BIT3 | VM_PKEY_BIT4) +/* + * Bits are in BE format. + * NOTE: key 31, 1, 0 are not used. + * key 0 is used by default. It give read/write/execute permission. + * key 31 is reserved by the hypervisor. + * key 1 is recommended to be not used. + * PowerISA(3.0) page 1015, programming note. + */ +#define PKEY_INITIAL_ALLOCAION 0xc001 + +#define pkeybit_mask(pkey) (0x1 << (arch_max_pkey() - pkey - 1)) + +#define mm_pkey_allocation_map(mm) (mm->context.pkey_allocation_map) + +#define mm_set_pkey_allocated(mm, pkey) { \ + mm_pkey_allocation_map(mm) |= pkeybit_mask(pkey); \ +} + +#define mm_set_pkey_free(mm, pkey) { \ + mm_pkey_allocation_map(mm) &= ~pkeybit_mask(pkey); \ +} + +#define mm_set_pkey_is_allocated(mm, pkey) \ + (mm_pkey_allocation_map(mm) & pkeybit_mask(pkey)) + +#define mm_set_pkey_is_reserved(mm, pkey) (PKEY_INITIAL_ALLOCAION & \ + pkeybit_mask(pkey)) static inline bool mm_pkey_is_allocated(struct mm_struct *mm, int pkey) { - return (pkey == 0); + /* a reserved key is never considered as 'explicitly allocated' */ + return ((pkey < arch_max_pkey()) && + !mm_set_pkey_is_reserved(mm, pkey) && + mm_set_pkey_is_allocated(mm, pkey)); } +/* + * Returns a positive, 5-bit key on success, or -1 on failure. + */ static inline int mm_pkey_alloc(struct mm_struct *mm) { - return -1; + /* +* Note: this is the one and only place we make sure +* that the pkey is valid as far as the hardware is +* concerned. The rest of the kernel trusts that +* only good, valid pkeys come out of here. 
+*/ + u32 all_pkeys_mask = (u32)(~(0x0)); + int ret; + + if (!pkey_inited) + return -1; + /* +* Are we out of pkeys? We must handle this specially +* because ffz() behavior is undefined if there are no +* zeros. +*/ + if (mm_pkey_allocation_map(mm) == all_pkeys_mask) + return -1; + + ret = arch_max_pkey() - + ffz((u32)mm_pkey_allocation_map(mm)) + - 1; + mm_set_pkey_allocated(mm, ret); + return ret; } static inline int mm_pkey_free(struct mm_struct *mm, int pkey) { - return -EINVAL; + if (!pkey_inited) + return -1; + + if (!mm_pkey_is_allocated(mm, pkey)) + return -EINVAL; + + mm_set_pkey_free(mm, pkey); + + return 0; } /* @@ -40,6 +106,13 @@ static inline int arch_set_user_pkey_access(struct task_struct *tsk, int pkey, return 0; } +static inline void pkey_mm_init(struct mm_struct *mm) +{ + if (!pkey_inited) + return; + mm_pkey_allocation_map(mm) = PKEY_INITIAL_ALLOCAION; +} +
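[Editorial note: a worked example of the bitmap encoding. Bits are counted from the big-endian end, so:

	/*
	 * pkeybit_mask(0)  = 1 << 31 = 0x80000000   (key 0, the default key)
	 * pkeybit_mask(1)  = 1 << 30 = 0x40000000   (key 1, recommended unused)
	 * pkeybit_mask(31) = 1 << 0  = 0x00000001   (key 31, hypervisor)
	 *
	 * so the initial allocation map works out to 0xc0000001, leaving
	 * 29 keys for mm_pkey_alloc() to hand out.
	 */
]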
[RFC v6 15/62] powerpc: helper functions to initialize AMR, IAMR and UAMOR registers
Introduce helper functions that can initialize the bits in the AMR, IAMR and UMOR register; the bits that correspond to the given pkey. Signed-off-by: Ram Pai --- arch/powerpc/include/asm/pkeys.h |1 + arch/powerpc/mm/pkeys.c | 44 ++ 2 files changed, 45 insertions(+), 0 deletions(-) diff --git a/arch/powerpc/include/asm/pkeys.h b/arch/powerpc/include/asm/pkeys.h index 09b268e..4327842 100644 --- a/arch/powerpc/include/asm/pkeys.h +++ b/arch/powerpc/include/asm/pkeys.h @@ -5,6 +5,7 @@ #define arch_max_pkey() 32 #define ARCH_VM_PKEY_FLAGS (VM_PKEY_BIT0 | VM_PKEY_BIT1 | VM_PKEY_BIT2 | \ VM_PKEY_BIT3 | VM_PKEY_BIT4) +#define AMR_BITS_PER_PKEY 2 /* * Bits are in BE format. * NOTE: key 31, 1, 0 are not used. diff --git a/arch/powerpc/mm/pkeys.c b/arch/powerpc/mm/pkeys.c index c3acee1..04ee361 100644 --- a/arch/powerpc/mm/pkeys.c +++ b/arch/powerpc/mm/pkeys.c @@ -16,3 +16,47 @@ #include /* PKEY_* */ bool pkey_inited; +#define pkeyshift(pkey) ((arch_max_pkey()-pkey-1) * AMR_BITS_PER_PKEY) + +static inline void init_amr(int pkey, u8 init_bits) +{ + u64 new_amr_bits = (((u64)init_bits & 0x3UL) << pkeyshift(pkey)); + u64 old_amr = read_amr() & ~((u64)(0x3ul) << pkeyshift(pkey)); + + write_amr(old_amr | new_amr_bits); +} + +static inline void init_iamr(int pkey, u8 init_bits) +{ + u64 new_iamr_bits = (((u64)init_bits & 0x3UL) << pkeyshift(pkey)); + u64 old_iamr = read_iamr() & ~((u64)(0x3ul) << pkeyshift(pkey)); + + write_amr(old_iamr | new_iamr_bits); +} + +static void pkey_status_change(int pkey, bool enable) +{ + u64 old_uamor; + + /* reset the AMR and IAMR bits for this key */ + init_amr(pkey, 0x0); + init_iamr(pkey, 0x0); + + /* enable/disable key */ + old_uamor = read_uamor(); + if (enable) + old_uamor |= (0x3ul << pkeyshift(pkey)); + else + old_uamor &= ~(0x3ul << pkeyshift(pkey)); + write_uamor(old_uamor); +} + +void __arch_activate_pkey(int pkey) +{ + pkey_status_change(pkey, true); +} + +void __arch_deactivate_pkey(int pkey) +{ + pkey_status_change(pkey, false); +} -- 1.7.1 -- To unsubscribe from this list: send the line "unsubscribe linux-doc" in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
[RFC v6 14/62] powerpc: helper function to read/write AMR, IAMR, UAMOR registers
Implements helper functions to read and write the key related registers; AMR, IAMR, UAMOR. AMR register tracks the read,write permission of a key IAMR register tracks the execute permission of a key UAMOR register enables and disables a key Signed-off-by: Ram Pai --- arch/powerpc/include/asm/book3s/64/pgtable.h | 26 ++ 1 files changed, 26 insertions(+), 0 deletions(-) diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h index 85bc987..d4da0e9 100644 --- a/arch/powerpc/include/asm/book3s/64/pgtable.h +++ b/arch/powerpc/include/asm/book3s/64/pgtable.h @@ -428,6 +428,32 @@ static inline void huge_ptep_set_wrprotect(struct mm_struct *mm, pte_update(mm, addr, ptep, 0, _PAGE_PRIVILEGED, 1); } +#include +static inline u64 read_amr(void) +{ + return mfspr(SPRN_AMR); +} +static inline void write_amr(u64 value) +{ + mtspr(SPRN_AMR, value); +} +static inline u64 read_iamr(void) +{ + return mfspr(SPRN_IAMR); +} +static inline void write_iamr(u64 value) +{ + mtspr(SPRN_IAMR, value); +} +static inline u64 read_uamor(void) +{ + return mfspr(SPRN_UAMOR); +} +static inline void write_uamor(u64 value) +{ + mtspr(SPRN_UAMOR, value); +} + #define __HAVE_ARCH_PTEP_GET_AND_CLEAR static inline pte_t ptep_get_and_clear(struct mm_struct *mm, unsigned long addr, pte_t *ptep) -- 1.7.1 -- To unsubscribe from this list: send the line "unsubscribe linux-doc" in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
[RFC v6 09/62] powerpc: use helper functions in __hash_page_4K() for 4K PTE
replace redundant code with helper functions pte_get_hash_gslot() and pte_set_hash_slot() Signed-off-by: Ram Pai --- arch/powerpc/mm/hash64_4k.c | 14 ++ 1 files changed, 6 insertions(+), 8 deletions(-) diff --git a/arch/powerpc/mm/hash64_4k.c b/arch/powerpc/mm/hash64_4k.c index 6fa450c..a1eebc1 100644 --- a/arch/powerpc/mm/hash64_4k.c +++ b/arch/powerpc/mm/hash64_4k.c @@ -20,6 +20,7 @@ int __hash_page_4K(unsigned long ea, unsigned long access, unsigned long vsid, pte_t *ptep, unsigned long trap, unsigned long flags, int ssize, int subpg_prot) { + real_pte_t rpte; unsigned long hpte_group; unsigned long rflags, pa; unsigned long old_pte, new_pte; @@ -54,6 +55,7 @@ int __hash_page_4K(unsigned long ea, unsigned long access, unsigned long vsid, * need to add in 0x1 if it's a read-only user page */ rflags = htab_convert_pte_flags(new_pte); + rpte = __real_pte(__pte(old_pte), ptep); if (cpu_has_feature(CPU_FTR_NOEXECUTE) && !cpu_has_feature(CPU_FTR_COHERENT_ICACHE)) @@ -64,13 +66,10 @@ int __hash_page_4K(unsigned long ea, unsigned long access, unsigned long vsid, /* * There MIGHT be an HPTE for this pte */ - hash = hpt_hash(vpn, shift, ssize); - if (old_pte & H_PAGE_F_SECOND) - hash = ~hash; - slot = (hash & htab_hash_mask) * HPTES_PER_GROUP; - slot += (old_pte & H_PAGE_F_GIX) >> H_PAGE_F_GIX_SHIFT; + unsigned long gslot = pte_get_hash_gslot(vpn, shift, + ssize, rpte, 0); - if (mmu_hash_ops.hpte_updatepp(slot, rflags, vpn, MMU_PAGE_4K, + if (mmu_hash_ops.hpte_updatepp(gslot, rflags, vpn, MMU_PAGE_4K, MMU_PAGE_4K, ssize, flags) == -1) old_pte &= ~_PAGE_HPTEFLAGS; } @@ -118,8 +117,7 @@ int __hash_page_4K(unsigned long ea, unsigned long access, unsigned long vsid, return -1; } new_pte = (new_pte & ~_PAGE_HPTEFLAGS) | H_PAGE_HASHPTE; - new_pte |= (slot << H_PAGE_F_GIX_SHIFT) & - (H_PAGE_F_SECOND | H_PAGE_F_GIX); + new_pte |= pte_set_hash_slot(ptep, rpte, 0, slot); } *ptep = __pte(new_pte & ~H_PAGE_BUSY); return 0; -- 1.7.1 -- To unsubscribe from this list: send the line "unsubscribe linux-doc" in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
[RFC v6 10/62] powerpc: use helper functions in flush_hash_page()
Replace redundant code in flush_hash_page() with the helper function pte_get_hash_gslot().

Signed-off-by: Ram Pai
---
 arch/powerpc/mm/hash_utils_64.c | 13 -
 1 files changed, 4 insertions(+), 9 deletions(-)

diff --git a/arch/powerpc/mm/hash_utils_64.c b/arch/powerpc/mm/hash_utils_64.c
index d3604da..d863696 100644
--- a/arch/powerpc/mm/hash_utils_64.c
+++ b/arch/powerpc/mm/hash_utils_64.c
@@ -1615,23 +1615,18 @@ unsigned long pte_get_hash_gslot(unsigned long vpn, unsigned long shift,
 void flush_hash_page(unsigned long vpn, real_pte_t pte, int psize, int ssize,
 		     unsigned long flags)
 {
-	unsigned long hash, index, shift, hidx, slot;
+	unsigned long index, shift, gslot;
 	int local = flags & HPTE_LOCAL_UPDATE;
 
 	DBG_LOW("flush_hash_page(vpn=%016lx)\n", vpn);
 	pte_iterate_hashed_subpages(pte, psize, vpn, index, shift) {
-		hash = hpt_hash(vpn, shift, ssize);
-		hidx = __rpte_to_hidx(pte, index);
-		if (hidx & _PTEIDX_SECONDARY)
-			hash = ~hash;
-		slot = (hash & htab_hash_mask) * HPTES_PER_GROUP;
-		slot += hidx & _PTEIDX_GROUP_IX;
-		DBG_LOW(" sub %ld: hash=%lx, hidx=%lx\n", index, slot, hidx);
+		gslot = pte_get_hash_gslot(vpn, shift, ssize, pte, index);
+		DBG_LOW(" sub %ld: gslot=%lx\n", index, gslot);
 		/*
 		 * We use same base page size and actual psize, because we don't
 		 * use these functions for hugepage
 		 */
-		mmu_hash_ops.hpte_invalidate(slot, vpn, psize, psize,
+		mmu_hash_ops.hpte_invalidate(gslot, vpn, psize, psize,
 					     ssize, local);
 	} pte_iterate_hashed_end();
 
--
1.7.1
[RFC v6 12/62] mm: introduce an additional vma bit for powerpc pkey
Currently only 4 bits are allocated in the VMA flags to hold 16 keys. This is sufficient for x86. PowerPC supports 32 keys, which needs 5 bits. This patch allocates an additional bit.

Signed-off-by: Ram Pai
---
 fs/proc/task_mmu.c |  6 --
 include/linux/mm.h | 20 ++--
 2 files changed, 18 insertions(+), 8 deletions(-)

diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index 520802d..e5710bc 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -662,13 +662,15 @@ static void show_smap_vma_flags(struct seq_file *m, struct vm_area_struct *vma)
 		[ilog2(VM_MERGEABLE)]	= "mg",
 		[ilog2(VM_UFFD_MISSING)]= "um",
 		[ilog2(VM_UFFD_WP)]	= "uw",
-#ifdef CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS
+#ifdef CONFIG_ARCH_HAS_PKEYS
 		/* These come out via ProtectionKey: */
 		[ilog2(VM_PKEY_BIT0)]	= "",
 		[ilog2(VM_PKEY_BIT1)]	= "",
 		[ilog2(VM_PKEY_BIT2)]	= "",
 		[ilog2(VM_PKEY_BIT3)]	= "",
-#endif
+		/* Additional bit used by ppc64 */
+		[ilog2(VM_PKEY_BIT4)]	= "",
+#endif /* CONFIG_ARCH_HAS_PKEYS */
 	};
 	size_t i;
 
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 6f543a4..095e2e7 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -208,21 +208,29 @@ extern int overcommit_kbytes_handler(struct ctl_table *, int, void __user *,
 #define VM_HIGH_ARCH_BIT_1	33	/* bit only usable on 64-bit architectures */
 #define VM_HIGH_ARCH_BIT_2	34	/* bit only usable on 64-bit architectures */
 #define VM_HIGH_ARCH_BIT_3	35	/* bit only usable on 64-bit architectures */
+#define VM_HIGH_ARCH_BIT_4	36	/* bit only usable on 64-bit architectures */
 #define VM_HIGH_ARCH_0	BIT(VM_HIGH_ARCH_BIT_0)
 #define VM_HIGH_ARCH_1	BIT(VM_HIGH_ARCH_BIT_1)
 #define VM_HIGH_ARCH_2	BIT(VM_HIGH_ARCH_BIT_2)
 #define VM_HIGH_ARCH_3	BIT(VM_HIGH_ARCH_BIT_3)
+#define VM_HIGH_ARCH_4	BIT(VM_HIGH_ARCH_BIT_4)
 #endif	/* CONFIG_ARCH_USES_HIGH_VMA_FLAGS */
 
-#if defined(CONFIG_X86)
-# define VM_PAT		VM_ARCH_1	/* PAT reserves whole VMA at once (x86) */
-#if defined (CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS)
+#ifdef CONFIG_ARCH_HAS_PKEYS
 # define VM_PKEY_SHIFT	VM_HIGH_ARCH_BIT_0
-# define VM_PKEY_BIT0	VM_HIGH_ARCH_0	/* A protection key is a 4-bit value */
-# define VM_PKEY_BIT1	VM_HIGH_ARCH_1
+# define VM_PKEY_BIT0	VM_HIGH_ARCH_0	/* A protection key is a 4-bit value */
+# define VM_PKEY_BIT1	VM_HIGH_ARCH_1	/* on x86 and 5-bit value on ppc64 */
 # define VM_PKEY_BIT2	VM_HIGH_ARCH_2
 # define VM_PKEY_BIT3	VM_HIGH_ARCH_3
-#endif
+# define VM_PKEY_BIT4	VM_HIGH_ARCH_4
+#endif /* CONFIG_ARCH_HAS_PKEYS */
+
+#if defined(CONFIG_PPC64_MEMORY_PROTECTION_KEYS)
+#endif /* CONFIG_PPC64_MEMORY_PROTECTION_KEYS */
+
+
+#if defined(CONFIG_X86)
+# define VM_PAT		VM_ARCH_1	/* PAT reserves whole VMA at once (x86) */
 #elif defined(CONFIG_PPC)
 # define VM_SAO		VM_ARCH_1	/* Strong Access Ordering (powerpc) */
 #elif defined(CONFIG_PARISC)
--
1.7.1
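To illustrate what the fifth bit is for, here is a hedged sketch of how an architecture with 32 keys could pack and unpack the key number in vma->vm_flags. ARCH_VM_PKEY_FLAGS, vma_pkey() and pkey_to_vmflag_bits() are written out here only to show the encoding, not as part of this patch:

/*
 * Sketch only: with VM_PKEY_BIT0..VM_PKEY_BIT4 defined as above, a
 * 32-key architecture such as ppc64 can recover the key number from
 * the vma flags.  The names below are assumed for illustration.
 */
#define ARCH_VM_PKEY_FLAGS	(VM_PKEY_BIT0 | VM_PKEY_BIT1 | VM_PKEY_BIT2 | \
				 VM_PKEY_BIT3 | VM_PKEY_BIT4)

static inline int vma_pkey(struct vm_area_struct *vma)
{
	/* shift the 5-bit field down to get the key number */
	return (vma->vm_flags & ARCH_VM_PKEY_FLAGS) >> VM_PKEY_SHIFT;
}

static inline unsigned long pkey_to_vmflag_bits(unsigned long pkey)
{
	/* place the key number into the reserved high VMA flag bits */
	return (pkey << VM_PKEY_SHIFT) & ARCH_VM_PKEY_FLAGS;
}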
[RFC v6 07/62] powerpc: use helper functions in __hash_page_huge() for 64K PTE
Replace redundant code in __hash_page_huge() with the helper functions pte_get_hash_gslot() and pte_set_hash_slot().

Signed-off-by: Ram Pai
---
 arch/powerpc/mm/hugetlbpage-hash64.c | 24 
 1 files changed, 4 insertions(+), 20 deletions(-)

diff --git a/arch/powerpc/mm/hugetlbpage-hash64.c b/arch/powerpc/mm/hugetlbpage-hash64.c
index 6f7aee3..e6dcd50 100644
--- a/arch/powerpc/mm/hugetlbpage-hash64.c
+++ b/arch/powerpc/mm/hugetlbpage-hash64.c
@@ -23,7 +23,6 @@ int __hash_page_huge(unsigned long ea, unsigned long access, unsigned long vsid,
 		     int ssize, unsigned int shift, unsigned int mmu_psize)
 {
 	real_pte_t rpte;
-	unsigned long *hidxp;
 	unsigned long vpn;
 	unsigned long old_pte, new_pte;
 	unsigned long rflags, pa, sz;
@@ -74,16 +73,10 @@ int __hash_page_huge(unsigned long ea, unsigned long access, unsigned long vsid,
 	/* Check if pte already has an hpte (case 2) */
 	if (unlikely(old_pte & H_PAGE_HASHPTE)) {
 		/* There MIGHT be an HPTE for this pte */
-		unsigned long hash, slot, hidx;
+		unsigned long gslot;
 
-		hash = hpt_hash(vpn, shift, ssize);
-		hidx = __rpte_to_hidx(rpte, 0);
-		if (hidx & _PTEIDX_SECONDARY)
-			hash = ~hash;
-		slot = (hash & htab_hash_mask) * HPTES_PER_GROUP;
-		slot += hidx & _PTEIDX_GROUP_IX;
-
-		if (mmu_hash_ops.hpte_updatepp(slot, rflags, vpn, mmu_psize,
+		gslot = pte_get_hash_gslot(vpn, shift, ssize, rpte, 0);
+		if (mmu_hash_ops.hpte_updatepp(gslot, rflags, vpn, mmu_psize,
 					       mmu_psize, ssize, flags) == -1)
 			old_pte &= ~_PAGE_HPTEFLAGS;
 	}
@@ -110,16 +103,7 @@ int __hash_page_huge(unsigned long ea, unsigned long access, unsigned long vsid,
 			return -1;
 		}
 
-		/*
-		 * Insert slot number & secondary bit in PTE second half.
-		 */
-		hidxp = (unsigned long *)(ptep + PTRS_PER_PTE);
-		rpte.hidx &= ~(0xfUL);
-		*hidxp = rpte.hidx | (slot & 0xfUL);
-		/*
-		 * check __real_pte for details on matching smp_rmb()
-		 */
-		smp_wmb();
+		new_pte |= pte_set_hash_slot(ptep, rpte, 0, slot);
 	}
 
 	/*
--
1.7.1
[RFC v6 08/62] powerpc: use helper functions in __hash_page_4K() for 64K PTE
Replace redundant code in __hash_page_4K() with the helper functions pte_get_hash_gslot() and pte_set_hash_slot().

Signed-off-by: Ram Pai
---
 arch/powerpc/mm/hash64_64k.c | 34 +-
 1 files changed, 9 insertions(+), 25 deletions(-)

diff --git a/arch/powerpc/mm/hash64_64k.c b/arch/powerpc/mm/hash64_64k.c
index 645f621..c658cb5 100644
--- a/arch/powerpc/mm/hash64_64k.c
+++ b/arch/powerpc/mm/hash64_64k.c
@@ -39,9 +39,8 @@ int __hash_page_4K(unsigned long ea, unsigned long access, unsigned long vsid,
 {
 	real_pte_t rpte;
 	unsigned long hpte_group;
-	unsigned long *hidxp;
 	unsigned int subpg_index;
-	unsigned long rflags, pa, hidx;
+	unsigned long rflags, pa;
 	unsigned long old_pte, new_pte, subpg_pte;
 	unsigned long vpn, hash, slot, gslot;
 	unsigned long shift = mmu_psize_defs[MMU_PAGE_4K].shift;
@@ -114,18 +113,13 @@ int __hash_page_4K(unsigned long ea, unsigned long access, unsigned long vsid,
 	if (__rpte_sub_valid(rpte, subpg_index)) {
 		int ret;
 
-		hash = hpt_hash(vpn, shift, ssize);
-		hidx = __rpte_to_hidx(rpte, subpg_index);
-		if (hidx & _PTEIDX_SECONDARY)
-			hash = ~hash;
-		slot = (hash & htab_hash_mask) * HPTES_PER_GROUP;
-		slot += hidx & _PTEIDX_GROUP_IX;
-
-		ret = mmu_hash_ops.hpte_updatepp(slot, rflags, vpn,
+		gslot = pte_get_hash_gslot(vpn, shift, ssize, rpte,
+					   subpg_index);
+		ret = mmu_hash_ops.hpte_updatepp(gslot, rflags, vpn,
 						 MMU_PAGE_4K, MMU_PAGE_4K,
 						 ssize, flags);
 		/*
-		 *if we failed because typically the HPTE wasn't really here
+		 * if we failed because typically the HPTE wasn't really here
 		 * we try an insertion.
 		 */
 		if (ret == -1)
@@ -221,20 +215,10 @@ int __hash_page_4K(unsigned long ea, unsigned long access, unsigned long vsid,
 						   MMU_PAGE_4K, MMU_PAGE_4K, old_pte);
 			return -1;
 		}
-		/*
-		 * Insert slot number & secondary bit in PTE second half,
-		 * clear H_PAGE_BUSY and set appropriate HPTE slot bit
-		 * Since we have H_PAGE_BUSY set on ptep, we can be sure
-		 * nobody is undating hidx.
-		 */
-		hidxp = (unsigned long *)(ptep + PTRS_PER_PTE);
-		rpte.hidx &= ~(0xfUL << (subpg_index << 2));
-		*hidxp = rpte.hidx | (slot << (subpg_index << 2));
-		/*
-		 * check __real_pte for details on matching smp_rmb()
-		 */
-		smp_wmb();
-		new_pte |= H_PAGE_HASHPTE;
+
+		new_pte |= pte_set_hash_slot(ptep, rpte, subpg_index, slot);
+		new_pte |= H_PAGE_HASHPTE;
+
 	*ptep = __pte(new_pte & ~H_PAGE_BUSY);
 	return 0;
 }
--
1.7.1
[RFC v6 05/62] powerpc: capture the PTE format changes in the dump pte report
H_PAGE_F_SECOND and H_PAGE_F_GIX are no longer part of the 64K main PTE. Capture these changes in the PTE dump report.

Signed-off-by: Ram Pai
---
 arch/powerpc/mm/dump_linuxpagetables.c | 3 ++-
 1 files changed, 2 insertions(+), 1 deletions(-)

diff --git a/arch/powerpc/mm/dump_linuxpagetables.c b/arch/powerpc/mm/dump_linuxpagetables.c
index 44fe483..5627edd 100644
--- a/arch/powerpc/mm/dump_linuxpagetables.c
+++ b/arch/powerpc/mm/dump_linuxpagetables.c
@@ -213,7 +213,7 @@ struct flag_info {
 		.val	= H_PAGE_4K_PFN,
 		.set	= "4K_pfn",
 	}, {
-#endif
+#else /* CONFIG_PPC_64K_PAGES */
 		.mask	= H_PAGE_F_GIX,
 		.val	= H_PAGE_F_GIX,
 		.set	= "f_gix",
@@ -224,6 +224,7 @@ struct flag_info {
 		.val	= H_PAGE_F_SECOND,
 		.set	= "f_second",
 	}, {
+#endif /* CONFIG_PPC_64K_PAGES */
 #endif
 		.mask	= _PAGE_SPECIAL,
 		.val	= _PAGE_SPECIAL,
--
1.7.1
[RFC v6 06/62] powerpc: use helper functions in __hash_page_64K() for 64K PTE
Replace redundant code in __hash_page_64K() with the helper functions pte_get_hash_gslot() and pte_set_hash_slot().

Signed-off-by: Ram Pai
---
 arch/powerpc/mm/hash64_64k.c | 24 
 1 files changed, 4 insertions(+), 20 deletions(-)

diff --git a/arch/powerpc/mm/hash64_64k.c b/arch/powerpc/mm/hash64_64k.c
index 0012618..645f621 100644
--- a/arch/powerpc/mm/hash64_64k.c
+++ b/arch/powerpc/mm/hash64_64k.c
@@ -244,7 +244,6 @@ int __hash_page_64K(unsigned long ea, unsigned long access,
 		    unsigned long flags, int ssize)
 {
 	real_pte_t rpte;
-	unsigned long *hidxp;
 	unsigned long hpte_group;
 	unsigned long rflags, pa;
 	unsigned long old_pte, new_pte;
@@ -289,18 +288,12 @@ int __hash_page_64K(unsigned long ea, unsigned long access,
 
 	vpn = hpt_vpn(ea, vsid, ssize);
 	if (unlikely(old_pte & H_PAGE_HASHPTE)) {
-		unsigned long hash, slot, hidx;
-
-		hash = hpt_hash(vpn, shift, ssize);
-		hidx = __rpte_to_hidx(rpte, 0);
-		if (hidx & _PTEIDX_SECONDARY)
-			hash = ~hash;
-		slot = (hash & htab_hash_mask) * HPTES_PER_GROUP;
-		slot += hidx & _PTEIDX_GROUP_IX;
+		unsigned long gslot;
 		/*
 		 * There MIGHT be an HPTE for this pte
 		 */
-		if (mmu_hash_ops.hpte_updatepp(slot, rflags, vpn, MMU_PAGE_64K,
+		gslot = pte_get_hash_gslot(vpn, shift, ssize, rpte, 0);
+		if (mmu_hash_ops.hpte_updatepp(gslot, rflags, vpn, MMU_PAGE_64K,
 					       MMU_PAGE_64K, ssize,
 					       flags) == -1)
 			old_pte &= ~_PAGE_HPTEFLAGS;
@@ -350,17 +343,8 @@ int __hash_page_64K(unsigned long ea, unsigned long access,
 			return -1;
 		}
 
-		/*
-		 * Insert slot number & secondary bit in PTE second half.
-		 */
-		hidxp = (unsigned long *)(ptep + PTRS_PER_PTE);
-		rpte.hidx &= ~(0xfUL);
-		*hidxp = rpte.hidx | (slot & 0xfUL);
-		/*
-		 * check __real_pte for details on matching smp_rmb()
-		 */
-		smp_wmb();
 		new_pte = (new_pte & ~_PAGE_HPTEFLAGS) | H_PAGE_HASHPTE;
+		new_pte |= pte_set_hash_slot(ptep, rpte, 0, slot);
 	}
 	*ptep = __pte(new_pte & ~H_PAGE_BUSY);
 	return 0;
--
1.7.1
[RFC v6 04/62] powerpc: introduce pte_get_hash_gslot() helper
Introduce pte_get_hash_gslot(), which returns the slot number of the HPTE in the global hash table. This function will come in handy as we work towards re-arranging the PTE bits in the later patches.

Signed-off-by: Ram Pai
---
 arch/powerpc/include/asm/book3s/64/hash.h |  3 +++
 arch/powerpc/mm/hash_utils_64.c           | 18 ++
 2 files changed, 21 insertions(+), 0 deletions(-)

diff --git a/arch/powerpc/include/asm/book3s/64/hash.h b/arch/powerpc/include/asm/book3s/64/hash.h
index d27f885..277158c 100644
--- a/arch/powerpc/include/asm/book3s/64/hash.h
+++ b/arch/powerpc/include/asm/book3s/64/hash.h
@@ -156,6 +156,9 @@ static inline int hash__pte_none(pte_t pte)
 	return (pte_val(pte) & ~H_PTE_NONE_MASK) == 0;
 }
 
+unsigned long pte_get_hash_gslot(unsigned long vpn, unsigned long shift,
+		int ssize, real_pte_t rpte, unsigned int subpg_index);
+
 /* This low level function performs the actual PTE insertion
  * Setting the PTE depends on the MMU type and other factors. It's
  * an horrible mess that I'm not going to try to clean up now but
diff --git a/arch/powerpc/mm/hash_utils_64.c b/arch/powerpc/mm/hash_utils_64.c
index 1b494d0..d3604da 100644
--- a/arch/powerpc/mm/hash_utils_64.c
+++ b/arch/powerpc/mm/hash_utils_64.c
@@ -1591,6 +1591,24 @@ static inline void tm_flush_hash_page(int local)
 }
 #endif
 
+/*
+ * return the global hash slot, corresponding to the given
+ * pte, which contains the hpte.
+ */
+unsigned long pte_get_hash_gslot(unsigned long vpn, unsigned long shift,
+		int ssize, real_pte_t rpte, unsigned int subpg_index)
+{
+	unsigned long hash, slot, hidx;
+
+	hash = hpt_hash(vpn, shift, ssize);
+	hidx = __rpte_to_hidx(rpte, subpg_index);
+	if (hidx & _PTEIDX_SECONDARY)
+		hash = ~hash;
+	slot = (hash & htab_hash_mask) * HPTES_PER_GROUP;
+	slot += hidx & _PTEIDX_GROUP_IX;
+	return slot;
+}
+
 /* WARNING: This is called from hash_low_64.S, if you change this prototype,
  * do not forget to update the assembly call site !
  */
--
1.7.1
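A hedged sketch of the intended call pattern, mirroring how the __hash_page_*() conversions later in this series use the helper; the wrapper name try_update_existing_hpte() is hypothetical and exists only to show the shape of a caller:

/*
 * Sketch only: once the PTE indicates that an HPTE may already exist,
 * look up its global slot with the new helper and try to update the
 * permissions in place (cf. patches 06-10 of this series).
 */
static long try_update_existing_hpte(unsigned long old_pte, unsigned long vpn,
				      unsigned long shift, int ssize,
				      real_pte_t rpte, unsigned long rflags,
				      unsigned long flags)
{
	unsigned long gslot;

	if (!(old_pte & H_PAGE_HASHPTE))
		return -1;	/* nothing cached for this pte yet */

	/* subpage index 0 for a full-page mapping */
	gslot = pte_get_hash_gslot(vpn, shift, ssize, rpte, 0);
	return mmu_hash_ops.hpte_updatepp(gslot, rflags, vpn, MMU_PAGE_64K,
					  MMU_PAGE_64K, ssize, flags);
}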
[RFC v6 03/62] powerpc: introduce pte_set_hash_slot() helper
Introduce pte_set_hash_slot(). For a 4K PTE it sets the (H_PAGE_F_SECOND|H_PAGE_F_GIX) bits at the appropriate location in the PTE. For a 64K PTE, it sets the bits in the second part of the PTE. Though the implementation for the former just needs the slot parameter, it takes some additional parameters to keep the prototype consistent. This function will be handy as we work towards re-arranging the bits in the later patches.

Signed-off-by: Ram Pai
---
 arch/powerpc/include/asm/book3s/64/hash-4k.h  | 15 +++
 arch/powerpc/include/asm/book3s/64/hash-64k.h | 25 +
 2 files changed, 40 insertions(+), 0 deletions(-)

diff --git a/arch/powerpc/include/asm/book3s/64/hash-4k.h b/arch/powerpc/include/asm/book3s/64/hash-4k.h
index d2cf949..dc153c6 100644
--- a/arch/powerpc/include/asm/book3s/64/hash-4k.h
+++ b/arch/powerpc/include/asm/book3s/64/hash-4k.h
@@ -53,6 +53,21 @@ static inline int hash__hugepd_ok(hugepd_t hpd)
 }
 #endif
 
+/*
+ * 4k pte format is different from 64k pte format. Saving the
+ * hash_slot is just a matter of returning the pte bits that need to
+ * be modified. On 64k pte, things are a little more involved and
+ * hence needs many more parameters to accomplish the same.
+ * However we want to abstract this out from the caller by keeping
+ * the prototype consistent across the two formats.
+ */
+static inline unsigned long pte_set_hash_slot(pte_t *ptep, real_pte_t rpte,
+			unsigned int subpg_index, unsigned long slot)
+{
+	return (slot << H_PAGE_F_GIX_SHIFT) &
+		(H_PAGE_F_SECOND | H_PAGE_F_GIX);
+}
+
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 
 static inline char *get_hpte_slot_array(pmd_t *pmdp)
diff --git a/arch/powerpc/include/asm/book3s/64/hash-64k.h b/arch/powerpc/include/asm/book3s/64/hash-64k.h
index c281f18..89ef5a9 100644
--- a/arch/powerpc/include/asm/book3s/64/hash-64k.h
+++ b/arch/powerpc/include/asm/book3s/64/hash-64k.h
@@ -67,6 +67,31 @@ static inline unsigned long __rpte_to_hidx(real_pte_t rpte, unsigned long index)
 	return ((rpte.hidx >> (index<<2)) & 0xfUL);
 }
 
+/*
+ * Commit the hash slot and return pte bits that needs to be modified.
+ * The caller is expected to modify the pte bits accordingly and
+ * commit the pte to memory.
+ */
+static inline unsigned long pte_set_hash_slot(pte_t *ptep, real_pte_t rpte,
+		unsigned int subpg_index, unsigned long slot)
+{
+	unsigned long *hidxp = (unsigned long *)(ptep + PTRS_PER_PTE);
+
+	rpte.hidx &= ~(0xfUL << (subpg_index << 2));
+	*hidxp = rpte.hidx | (slot << (subpg_index << 2));
+	/*
+	 * Commit the hidx bits to memory before returning.
+	 * Anyone reading pte must ensure hidx bits are
+	 * read only after reading the pte by using the
+	 * read-side barrier smp_rmb(). __real_pte() can
+	 * help ensure that.
+	 */
+	smp_wmb();
+
+	/* no pte bits to be modified, return 0x0UL */
+	return 0x0UL;
+}
+
 #define __rpte_to_pte(r)	((r).pte)
 extern bool __rpte_sub_valid(real_pte_t rpte, unsigned long index);
 /*
--
1.7.1
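For reference, a sketch of the caller-side pattern that both variants are meant to share, as used by the __hash_page_*() conversions later in this series; the wrapper name commit_hash_slot() is hypothetical:

/*
 * Sketch only: whatever PTE bits the helper returns (the slot bits on
 * 4K, nothing on 64K) are OR'ed into new_pte, and the PTE is then
 * republished with H_PAGE_BUSY cleared.
 */
static inline void commit_hash_slot(pte_t *ptep, real_pte_t rpte,
				    unsigned int subpg_index,
				    unsigned long slot, unsigned long new_pte)
{
	new_pte = (new_pte & ~_PAGE_HPTEFLAGS) | H_PAGE_HASHPTE;
	new_pte |= pte_set_hash_slot(ptep, rpte, subpg_index, slot);

	*ptep = __pte(new_pte & ~H_PAGE_BUSY);
}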
[RFC v6 01/62] powerpc: Free up four 64K PTE bits in 4K backed HPTE pages
Rearrange 64K PTE bits to free up bits 3, 4, 5 and 6 in the 4K backed HPTE pages. These bits continue to be used for 64K backed HPTE pages in this patch, but will be freed up in the next patch. The bit numbers are big-endian as defined in the ISA 3.0.

The patch makes the following change to the 4K HPTE-backed 64K PTE format:

H_PAGE_BUSY moves from bit 3 to bit 9 (the B bit in the figure below).
V0, which occupied bit 4, is not used anymore.
V1, which occupied bit 5, is not used anymore.
V2, which occupied bit 6, is not used anymore.
V3, which occupied bit 7, is not used anymore.

Before the patch, the 4K backed 64K PTE format was as follows:

  0 1 2 3 4  5  6  7  8 9 10................63
  : : : : :  :  :  :  : : :                  :
  v v v v v  v  v  v  v v v                  v
 ,-,-,-,-,--,--,--,--,-,-,-,-,-,----------,-,-,-,-,
 |x|x|x|B|V0|V1|V2|V3|x| | |x|x|..........|x|x|x|x| <- primary pte
 '_'_'_'_'__'__'__'__'_'_'_'_'_'__________'_'_'_'_'
 |S|G|I|X|S |G |I |X |S|G|I|X|............|S|G|I|X| <- secondary pte
 '_'_'_'_'__'__'__'__'_'_'_'_'____________'_'_'_'_'

After the patch, the 4K backed 64K PTE format is as follows:

  0 1 2 3 4  5  6  7  8 9 10................63
  : : : : :  :  :  :  : : :                  :
  v v v v v  v  v  v  v v v                  v
 ,-,-,-,-,--,--,--,--,-,-,-,-,-,----------,-,-,-,-,
 |x|x|x| |  |  |  |  |x|B| |x|x|..........|.|.|.|.| <- primary pte
 '_'_'_'_'__'__'__'__'_'_'_'_'_'__________'_'_'_'_'
 |S|G|I|X|S |G |I |X |S|G|I|X|............|S|G|I|X| <- secondary pte
 '_'_'_'_'__'__'__'__'_'_'_'_'____________'_'_'_'_'

The four bits S, G, I, X (one quadruplet per 4K HPTE) that cache the hash-bucket slot value are initialized to 1,1,1,1, indicating an invalid slot. If an HPTE gets cached in such a slot (i.e. the 7th slot of the secondary hash bucket), it is released immediately. In other words, even though it is a valid slot value in the hash bucket, we consider it invalid and release the slot and the HPTE. This gives us the opportunity to determine the validity of the S, G, I, X bits based on their contents and not on any of the bits V0, V1, V2 or V3 in the primary PTE.

When we release an HPTE cached in that slot, we also release a legitimate slot in the primary hash bucket and unmap its corresponding HPTE. This is to ensure that we do get an HPTE cached in a slot of the primary hash bucket the next time we retry.

Though treating that slot as invalid reduces the number of available slots in the hash bucket and may have an effect on performance, the probability of hitting such a slot is extremely low.

Compared to the current scheme, the scheme described above reduces the number of false hash table updates significantly and has the added advantage of releasing four valuable PTE bits for other purposes.

NOTE: even though bits 3, 4, 5, 6 and 7 are not used when the 64K PTE is backed by a 4K HPTE, they continue to be used if the PTE gets backed by a 64K HPTE. The next patch will decouple that as well, and truly release the bits.

This idea was jointly developed by Paul Mackerras, Aneesh, Michael Ellerman and myself.

The 4K PTE format remains unchanged currently. The patch makes the following code changes:

a) PTE flags are split between the 64K and 4K header files.
b) __hash_page_4K() is reimplemented to reflect the above logic.
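To make the "all ones means invalid" convention concrete, a small sketch of how a cached slot can be validated purely from its own contents; the helper name hidx_is_valid() is hypothetical and only restates the rule described above:

/*
 * Sketch only: each 4K subpage caches a 4-bit S,G,I,X quadruplet in
 * the second half of the 64K PTE.  Under the scheme above the value
 * 0xF (1,1,1,1) is never handed out as a real slot, so it can stand
 * for "no HPTE cached" without consulting V0..V3 in the primary PTE.
 */
static inline bool hidx_is_valid(unsigned long hidx)
{
	return (hidx & 0xfUL) != 0xfUL;
}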
Signed-off-by: Ram Pai
---
 arch/powerpc/include/asm/book3s/64/hash-4k.h  |  2 +
 arch/powerpc/include/asm/book3s/64/hash-64k.h |  8 +--
 arch/powerpc/include/asm/book3s/64/hash.h     |  1 -
 arch/powerpc/mm/hash64_64k.c                  | 78 -
 arch/powerpc/mm/hash_utils_64.c               |  4 +-
 5 files changed, 57 insertions(+), 36 deletions(-)

diff --git a/arch/powerpc/include/asm/book3s/64/hash-4k.h b/arch/powerpc/include/asm/book3s/64/hash-4k.h
index 0c4e470..f959c00 100644
--- a/arch/powerpc/include/asm/book3s/64/hash-4k.h
+++ b/arch/powerpc/include/asm/book3s/64/hash-4k.h
@@ -16,6 +16,8 @@
 #define H_PUD_TABLE_SIZE	(sizeof(pud_t) << H_PUD_INDEX_SIZE)
 #define H_PGD_TABLE_SIZE	(sizeof(pgd_t) << H_PGD_INDEX_SIZE)
 
+#define H_PAGE_BUSY	_RPAGE_RSV1	/* software: PTE & hash are busy */
+
 /* PTE flags to conserve for HPTE identification */
 #define _PAGE_HPTEFLAGS	(H_PAGE_BUSY | H_PAGE_HASHPTE | \
 			 H_PAGE_F_SECOND | H_PAGE_F_GIX)
diff --git a/arch/powerpc/include/asm/book3s/64/hash-64k.h b/arch/powerpc/include/asm/book3s/64/hash-64k.h
index 9732837..62e580c 100644
--- a/arch/powerpc/include/asm/book3s/64/hash-64k.h
+++ b/arch/powerpc/include/asm/book3s/64/hash-64k.h
@@ -12,18 +12,14 @@
  */
 #define H_PAGE_COMBO	_RPAGE_RPN0 /* this is a combo 4k page */
 #define H_PAGE_4K_PFN	_RPAGE_RPN1 /* PFN is for a single 4k page */
+#define H_PAGE_BUSY	_RPAGE_RPN42