On Tue, 2018-06-26 at 21:41 +0300, Andy Shevchenko wrote:
> > > > @@ -42,9 +41,10 @@ static inline void ratelimit_state_init(struct ratelimit_state *rs,
> > > >  {
> > > >  	memset(rs, 0, sizeof(*rs));
> > > >
> > > > -	raw_spin_lock_init(&rs->lock);
> > > >  	rs->interval = interval;
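
For context, a minimal sketch of where the hunk above appears to be
heading, assuming the removed raw_spinlock is replaced by atomic
counters (the atomic_t fields `printed' and `missed' are illustrative
assumptions, not confirmed by the quoted diff):

static inline void ratelimit_state_init(struct ratelimit_state *rs,
					int interval, int burst)
{
	memset(rs, 0, sizeof(*rs));

	rs->interval = interval;
	rs->burst = burst;
	atomic_set(&rs->printed, 0);	/* assumed lockless counter */
	atomic_set(&rs->missed, 0);	/* assumed lockless counter */
}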
Hi Andy, thanks for the review,
On Tue, 2018-06-26 at 20:04 +0300, Andy Shevchenko wrote:
[..]
> > #define RATELIMIT_STATE_INIT(name, interval_init, burst_init) {	\
> > -		.lock		= __RAW_SPIN_LOCK_UNLOCKED(name.lock),	\
>
> name is now redundant, isn't it?
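
Spelled out, the observation is that once .lock is gone, nothing left
in the initializer references the name argument. A sketch of the
resulting macro, assuming only interval and burst remain to be set:

#define RATELIMIT_STATE_INIT(name, interval_init, burst_init) {	\
		.interval	= interval_init,			\
		.burst		= burst_init,				\
	}

The name parameter could then be dropped entirely, at the cost of
updating every caller of the macro; keeping it as a dummy argument
avoids that churn.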
On Tue, Jun 26, 2018 at 8:46 PM, Dmitry Safonov wrote:
> On Tue, 2018-06-26 at 20:04 +0300, Andy Shevchenko wrote:
>> > #define RATELIMIT_STATE_INIT(name, interval_init, burst_init) {	\
>> > -		.lock		= __RAW_SPIN_LOCK_UNLOCKED(name.lock),	\
>>
>> name is now redundant, isn't it?
On Tue, Jun 26, 2018 at 7:24 PM, Dmitry Safonov wrote:
> Currently ratelimit_state is protected with spin_lock. If the .lock is
> taken at the moment of ___ratelimit() call, the message is suppressed to
> make ratelimiting robust.
>
> That results in the following issue:
> CPU0
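
For reference, the trylock-based suppression the commit message
describes, abridged from the current lib/ratelimit.c (the
RATELIMIT_MSG_ON_RELEASE handling is omitted here):

int ___ratelimit(struct ratelimit_state *rs, const char *func)
{
	unsigned long flags;
	int ret;

	if (!rs->interval)
		return 1;

	/*
	 * If the lock is already held, somebody else is printing:
	 * give up immediately and count this message as suppressed.
	 */
	if (!raw_spin_trylock_irqsave(&rs->lock, flags))
		return 0;

	if (time_is_before_jiffies(rs->begin + rs->interval)) {
		/* Interval elapsed: report what was dropped, reset window. */
		if (rs->missed)
			printk_deferred(KERN_WARNING
					"%s: %d callbacks suppressed\n",
					func, rs->missed);
		rs->begin   = jiffies;
		rs->printed = 0;
		rs->missed  = 0;
	}
	if (rs->burst && rs->burst > rs->printed) {
		rs->printed++;
		ret = 1;
	} else {
		rs->missed++;
		ret = 0;
	}
	raw_spin_unlock_irqrestore(&rs->lock, flags);

	return ret;
}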