I became aware of a typo in this paragraph and then realized there was more room for editorial changes (grammar, consistency, and clarity).
*Not* pushed.  Honza, Sandra, what do you think?

Gerald

gcc:
	* doc/invoke.texi (Optimize Options) <-fprofile-partial-training>:
	Editorial changes throughout.

diff --git a/gcc/doc/invoke.texi b/gcc/doc/invoke.texi
index 27539a01785..d7463d30a0c 100644
--- a/gcc/doc/invoke.texi
+++ b/gcc/doc/invoke.texi
@@ -15053,16 +15053,16 @@ This option is enabled by @option{-fauto-profile}.
 
 @opindex fprofile-partial-training
 @item -fprofile-partial-training
-With @code{-fprofile-use} all portions of programs not executed during train
-run are optimized agressively for size rather than speed. In some cases it is
-not practical to train all possible hot paths in the program. (For
-example, program may contain functions specific for a given hardware and
-trianing may not cover all hardware configurations program is run on.) With
-@code{-fprofile-partial-training} profile feedback will be ignored for all
-functions not executed during the train run leading them to be optimized as if
-they were compiled without profile feedback. This leads to better performance
-when train run is not representative but also leads to significantly bigger
-code.
+With @code{-fprofile-use} all portions of programs not executed during
+training runs are optimized aggressively for size rather than speed.
+In some cases it is not practical to train all possible hot paths in
+the program.  (For example, a program may contain functions specific
+to particular hardware, and training may not cover all hardware
+configurations it later runs on.)  With @code{-fprofile-partial-training}
+profile feedback is ignored for all functions not executed during the
+training run; these are optimized as if they were compiled without
+profile feedback.  This leads to better performance when the training
+is not representative, at the cost of significantly bigger code.
 
 @opindex fprofile-use
 @item -fprofile-use