I meant to install this a week or so ago, but got sidetracked by other more pressing issues.

Installed on the trunk as obvious.

diff --git a/gcc/ChangeLog b/gcc/ChangeLog
index 5703bb5..ce9c066 100644
--- a/gcc/ChangeLog
+++ b/gcc/ChangeLog
@@ -1,3 +1,7 @@
+2014-02-07  Jeff Law  <l...@redhat.com>
+
+       * ipa-inline.c (inline_small_functions): Fix typos.
+
 2014-02-07  Richard Sandiford  <rsand...@linux.vnet.ibm.com>
 
        * config/s390/s390-protos.h (s390_can_use_simple_return_insn)
diff --git a/gcc/ipa-inline.c b/gcc/ipa-inline.c
index ce24ea5..d304133 100644
--- a/gcc/ipa-inline.c
+++ b/gcc/ipa-inline.c
@@ -1749,9 +1749,9 @@ inline_small_functions (void)
          continue;
        }
 
-      /* Heuristics for inlining small functions works poorly for
-        recursive calls where we do efect similar to loop unrolling.
-        When inliing such edge seems profitable, leave decision on
+      /* Heuristics for inlining small functions work poorly for
+        recursive calls where we do effects similar to loop unrolling.
+        When inlining such edge seems profitable, leave decision on
         specific inliner.  */
       if (cgraph_edge_recursive_p (edge))
        {
@@ -1779,10 +1779,11 @@ inline_small_functions (void)
          struct cgraph_node *outer_node = NULL;
          int depth = 0;
 
-         /* Consider the case where self recursive function A is inlined into B.
-            This is desired optimization in some cases, since it leads to effect
-            similar of loop peeling and we might completely optimize out the
-            recursive call.  However we must be extra selective.  */
+         /* Consider the case where self recursive function A is inlined
+            into B.  This is desired optimization in some cases, since it
+            leads to effect similar of loop peeling and we might completely
+            optimize out the recursive call.  However we must be extra
+            selective.  */
 
          where = edge->caller;
          while (where->global.inlined_to)

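For anyone skimming the comments being corrected, here is a minimal, hypothetical C sketch of the situation the second one describes ("self recursive function A is inlined into B"); the function names are invented for illustration and none of this is part of the patch:

    /* Hypothetical example only, not from the GCC sources.  */
    static int
    sum_to (int n)                /* "A": self-recursive.  */
    {
      if (n <= 0)
        return 0;
      return n + sum_to (n - 1);  /* the recursive call edge */
    }

    int
    call_sum (int n)              /* "B": a single call edge into A.  */
    {
      return sum_to (n);
    }

Inlining A into B copies one level of the recursion into the caller, much like peeling a single loop iteration, and in favorable cases the remaining recursive call can then be optimized away entirely.  Each such inline only pushes the recursion one level deeper while growing B, though, which is why the comment stresses being extra selective (the surrounding code tracks the recursion depth for exactly that reason).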