Fix a spelling mistake and reword a few comments for clarity.


Signed-off-by: Bhaskar Chowdhury <unixbhas...@gmail.com>
---
 kernel/irq/timings.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/kernel/irq/timings.c b/kernel/irq/timings.c
index 773b6105c4ae..72f69e3b1e8d 100644
--- a/kernel/irq/timings.c
+++ b/kernel/irq/timings.c
@@ -478,21 +478,21 @@ static inline void irq_timings_store(int irq, struct irqt_stat *irqs, u64 ts)

        /*
         * The interval type is u64 in order to deal with the same
-        * type in our computation, that prevent mindfuck issues with
+        * type in our computation, that prevents mind-blowing issues with
         * overflow, sign and division.
         */
        interval = ts - old_ts;

        /*
         * The interrupt triggered more than one second apart, that
-        * ends the sequence as predictible for our purpose. In this
+        * ends the sequence as predictable for our purpose. In this
         * case, assume we have the beginning of a sequence and the
         * timestamp is the first value. As it is impossible to
         * predict anything at this point, return.
         *
         * Note the first timestamp of the sequence will always fall
         * in this test because the old_ts is zero. That is what we
         * want as we need another timestamp to compute an interval.
         */
        if (interval >= NSEC_PER_SEC) {
                irqs->count = 0;
@@ -523,7 +523,7 @@ static inline void irq_timings_store(int irq, struct irqt_stat *irqs, u64 ts)
  * thus the count is reinitialized.
  *
  * The array of values **must** be browsed in the time direction, the
- * timestamp must increase between an element and the next one.
+ * timestamps must increase between an element and the next one.
  *
  * Returns a nanosec time based estimation of the earliest interrupt,
  * U64_MAX otherwise.
@@ -556,7 +556,7 @@ u64 irq_timings_next_event(u64 now)
         * type but with the cost of extra computation in the
         * interrupt handler hot path. We choose efficiency.
         *
-        * Inject measured irq/timestamp to the pattern prediction
+        * Inject measured irq/timestamps into the pattern prediction
         * model while decrementing the counter because we consume the
         * data from our circular buffer.
         */
--
2.30.0
