On 30/04/15 11:09, Tim Deegan wrote:
> At 00:56 +0100 on 30 Apr (1430355366), Jan Beulich wrote:
> David Vrabel 04/29/15 5:28 PM >>>
>>> On 29/04/15 00:15, Jan Beulich wrote:
>>> David Vrabel 04/28/15 6:16 PM >>>
> Are there any structures whose size you're particularly concerned about?
At 00:56 +0100 on 30 Apr (1430355366), Jan Beulich wrote:
> >>> David Vrabel 04/29/15 5:28 PM >>>
> > On 29/04/15 00:15, Jan Beulich wrote:
> > David Vrabel 04/28/15 6:16 PM >>>
> >>> Are there any structures whose size you're particularly concerned about?
> >>
> >> No specific ones (but of course structures with an inherent size constraint
At 18:00 +0100 on 29 Apr (1430330400), David Vrabel wrote:
> On 29/04/15 17:56, Tim Deegan wrote:
> > At 16:36 +0100 on 29 Apr (1430325362), David Vrabel wrote:
> >> On 23/04/15 15:58, Jan Beulich wrote:
> >> On 23.04.15 at 16:43, wrote:
> At 14:54 +0100 on 23 Apr (1429800874), Jan Beulich wrote:
>>> David Vrabel 04/29/15 5:28 PM >>>
> On 29/04/15 00:15, Jan Beulich wrote:
> David Vrabel 04/28/15 6:16 PM >>>
>>> Are there any structures whose size you're particularly concerned about?
>>
>> No specific ones (but of course structures with an inherent size constraint
>> - like struct dom
>>> David Vrabel 04/29/15 5:39 PM >>>
> On 23/04/15 15:58, Jan Beulich wrote:
> On 23.04.15 at 16:43, wrote:
>>> At 14:54 +0100 on 23 Apr (1429800874), Jan Beulich wrote:
>>> AIUI, the '++' could end up as a word-size read, modify, and word-size
>>> write. If another CPU updates .tail in parallel, its update could be lost.
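To make the quoted concern concrete, here is a hypothetical expansion of lock->tickets.head++ as a word-sized read-modify-write. No particular compiler is claimed to generate this; the sketch (with stdint types standing in for Xen's u16/u32) only illustrates the interleaving that could lose a ticket.

#include <stdint.h>

typedef union { uint32_t head_tail; struct { uint16_t head, tail; }; } spinlock_tickets_t;
typedef struct spinlock { spinlock_tickets_t tickets; } spinlock_t;

/* Hypothetical word-sized expansion of "lock->tickets.head++". */
static void unlock_as_wide_rmw(spinlock_t *lock)
{
    uint32_t tmp = lock->tickets.head_tail;  /* 32-bit load of head and tail */
    tmp += 1;                      /* bump the head half (little-endian,     */
                                   /* ignoring carry out of the low 16 bits) */
    lock->tickets.head_tail = tmp;           /* 32-bit store rewrites tail   */
    /*
     * If another CPU atomically bumps .tail (taking a ticket) between the
     * load and the store, this store puts the old .tail back and that
     * CPU's ticket is lost, so it may spin forever.
     */
}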
On 29/04/15 17:56, Tim Deegan wrote:
> At 16:36 +0100 on 29 Apr (1430325362), David Vrabel wrote:
>> On 23/04/15 15:58, Jan Beulich wrote:
>> On 23.04.15 at 16:43, wrote:
At 14:54 +0100 on 23 Apr (1429800874), Jan Beulich wrote:
On 23.04.15 at 14:03, wrote:
>> At 11:11 +0100 on 21 Apr (1429614687), David Vrabel wrote:
At 16:36 +0100 on 29 Apr (1430325362), David Vrabel wrote:
> On 23/04/15 15:58, Jan Beulich wrote:
> On 23.04.15 at 16:43, wrote:
> >> At 14:54 +0100 on 23 Apr (1429800874), Jan Beulich wrote:
> >> On 23.04.15 at 14:03, wrote:
> At 11:11 +0100 on 21 Apr (1429614687), David Vrabel wrote:
On 23/04/15 15:58, Jan Beulich wrote:
On 23.04.15 at 16:43, wrote:
>> At 14:54 +0100 on 23 Apr (1429800874), Jan Beulich wrote:
>> On 23.04.15 at 14:03, wrote:
At 11:11 +0100 on 21 Apr (1429614687), David Vrabel wrote:
> void _spin_unlock(spinlock_t *lock)
> {
> +    smp_mb();
On 29/04/15 00:15, Jan Beulich wrote:
David Vrabel 04/28/15 6:16 PM >>>
>> On 23/04/15 12:58, Jan Beulich wrote:
+typedef union {
+    u32 head_tail;
+    struct {
+        u16 head;
+        u16 tail;
+    };
+} spinlock_tickets_t;
+
typedef struct spinlock {
>>> David Vrabel 04/28/15 6:16 PM >>>
> On 23/04/15 12:58, Jan Beulich wrote:
>>> +typedef union {
>>> +    u32 head_tail;
>>> +    struct {
>>> +        u16 head;
>>> +        u16 tail;
>>> +    };
>>> +} spinlock_tickets_t;
>>> +
>>> typedef struct spinlock {
>>> -    raw_spinlock_t raw;
>>> +    spinlock_tickets_t tickets;
On 23/04/15 12:58, Jan Beulich wrote:
>
>> +typedef union {
>> +    u32 head_tail;
>> +    struct {
>> +        u16 head;
>> +        u16 tail;
>> +    };
>> +} spinlock_tickets_t;
>> +
>> typedef struct spinlock {
>> -    raw_spinlock_t raw;
>> +    spinlock_tickets_t tickets;
>
> At least for x86
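For readers following the structure-size question, a standalone restatement of the tickets layout with illustrative compile-time checks. The fixed-width stdint types and the _Static_assert lines are additions for illustration, not part of the patch (which uses Xen's u16/u32); the point of the u32 head_tail member is that both halves can be read, compared, or fetch-and-added as one 32-bit value.

#include <stddef.h>
#include <stdint.h>

/* Same shape as the patch's spinlock_tickets_t, with standard types. */
typedef union {
    uint32_t head_tail;    /* both tickets in one word, for 32-bit atomics */
    struct {
        uint16_t head;     /* ticket currently being served */
        uint16_t tail;     /* next ticket to hand out */
    };
} spinlock_tickets_t;

/* Illustrative compile-time checks on size and layout (not in the patch). */
_Static_assert(sizeof(spinlock_tickets_t) == 4,
               "tickets must stay a single 32-bit word");
_Static_assert(offsetof(spinlock_tickets_t, head) == 0 &&
               offsetof(spinlock_tickets_t, tail) == 2,
               "head and tail each occupy one 16-bit half");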
>>> On 23.04.15 at 16:43, wrote:
> At 14:54 +0100 on 23 Apr (1429800874), Jan Beulich wrote:
>> >>> On 23.04.15 at 14:03, wrote:
>> > At 11:11 +0100 on 21 Apr (1429614687), David Vrabel wrote:
>> >> void _spin_unlock(spinlock_t *lock)
>> >> {
>> >> +    smp_mb();
>> >>     preempt_enable();
>>
At 14:45 +0100 on 23 Apr (1429800338), Andrew Cooper wrote:
> On 23/04/15 14:43, David Vrabel wrote:
> > On 23/04/15 13:03, Tim Deegan wrote:
> >> Hi,
> >>
> >> At 11:11 +0100 on 21 Apr (1429614687), David Vrabel wrote:
> >>> void _spin_lock(spinlock_t *lock)
> >>> {
> >>> +    spinlock_tickets_t tickets = { .tail = 1, };
At 15:24 +0100 on 23 Apr (1429802696), Tim Deegan wrote:
> At 14:43 +0100 on 23 Apr (1429800229), David Vrabel wrote:
> > On 23/04/15 13:03, Tim Deegan wrote:
> > > Hi,
> > >
> > > At 11:11 +0100 on 21 Apr (1429614687), David Vrabel wrote:
> > >> void _spin_lock(spinlock_t *lock)
> > >> {
> > >> +    spinlock_tickets_t tickets = { .tail = 1, };
At 14:54 +0100 on 23 Apr (1429800874), Jan Beulich wrote:
> >>> On 23.04.15 at 14:03, wrote:
> > At 11:11 +0100 on 21 Apr (1429614687), David Vrabel wrote:
> >> void _spin_unlock(spinlock_t *lock)
> >> {
> >> +    smp_mb();
> >>     preempt_enable();
> >>     LOCK_PROFILE_REL;
> >> -    _raw_spin_unlock(&lock->raw);
At 14:43 +0100 on 23 Apr (1429800229), David Vrabel wrote:
> On 23/04/15 13:03, Tim Deegan wrote:
> > Hi,
> >
> > At 11:11 +0100 on 21 Apr (1429614687), David Vrabel wrote:
> >> void _spin_lock(spinlock_t *lock)
> >> {
> >> +    spinlock_tickets_t tickets = { .tail = 1, };
> >>     LOCK_PROFILE_VAR;
On 23/04/15 14:43, David Vrabel wrote:
> On 23/04/15 13:03, Tim Deegan wrote:
>> Hi,
>>
>> At 11:11 +0100 on 21 Apr (1429614687), David Vrabel wrote:
>>> void _spin_lock(spinlock_t *lock)
>>> {
>>> +    spinlock_tickets_t tickets = { .tail = 1, };
>>>     LOCK_PROFILE_VAR;
>>>
>>>     check_lock(&lock->debug);
>>> On 23.04.15 at 14:03, wrote:
> At 11:11 +0100 on 21 Apr (1429614687), David Vrabel wrote:
>> void _spin_unlock(spinlock_t *lock)
>> {
>> +    smp_mb();
>>     preempt_enable();
>>     LOCK_PROFILE_REL;
>> -    _raw_spin_unlock(&lock->raw);
>> +    lock->tickets.head++;
>
> This needs to b
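Since the quoted review comment is cut short, the following is only a general sketch of one way to keep the unlock update 16 bits wide so it cannot clobber .tail: a GCC __atomic builtin stands in for Xen's own primitives, release ordering stands in for the smp_mb() in the quoted hunk, and the function name is illustrative.

#include <stdint.h>

typedef union { uint32_t head_tail; struct { uint16_t head, tail; }; } spinlock_tickets_t;
typedef struct spinlock { spinlock_tickets_t tickets; } spinlock_t;

/*
 * Illustrative unlock: a 16-bit atomic add touches only the head half,
 * so a concurrent locker's update of .tail cannot be overwritten.
 */
static void ticket_unlock_sketch(spinlock_t *lock)
{
    __atomic_fetch_add(&lock->tickets.head, 1, __ATOMIC_RELEASE);
}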
On 23/04/15 13:03, Tim Deegan wrote:
> Hi,
>
> At 11:11 +0100 on 21 Apr (1429614687), David Vrabel wrote:
>> void _spin_lock(spinlock_t *lock)
>> {
>> +    spinlock_tickets_t tickets = { .tail = 1, };
>>     LOCK_PROFILE_VAR;
>>
>>     check_lock(&lock->debug);
>> -    while ( unlikely(!_raw_spin_trylock(&lock->raw)) )
Hi,
At 11:11 +0100 on 21 Apr (1429614687), David Vrabel wrote:
> void _spin_lock(spinlock_t *lock)
> {
> +    spinlock_tickets_t tickets = { .tail = 1, };
>     LOCK_PROFILE_VAR;
>
>     check_lock(&lock->debug);
> -    while ( unlikely(!_raw_spin_trylock(&lock->raw)) )
> +    tickets.head_tail
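The hunk above is truncated after tickets.head_tail, so here is a minimal sketch of the acquisition pattern that the { .tail = 1, } initializer sets up, using GCC __atomic builtins as stand-ins for Xen's primitives; names and memory orders are illustrative, not necessarily what the patch does. One atomic fetch-and-add of the whole word takes a ticket, after which the CPU spins on ordinary reads of head.

#include <stdint.h>

typedef union { uint32_t head_tail; struct { uint16_t head, tail; }; } spinlock_tickets_t;
typedef struct spinlock { spinlock_tickets_t tickets; } spinlock_t;

static void ticket_lock_sketch(spinlock_t *lock)
{
    /* Adding this to head_tail bumps .tail by one and leaves .head alone. */
    spinlock_tickets_t tickets = { .tail = 1 };

    /* The old value of head_tail carries our ticket in its .tail half. */
    tickets.head_tail = __atomic_fetch_add(&lock->tickets.head_tail,
                                           tickets.head_tail,
                                           __ATOMIC_RELAXED);

    /* Spin on reads only until the ticket being served reaches ours. */
    while ( __atomic_load_n(&lock->tickets.head, __ATOMIC_ACQUIRE)
            != tickets.tail )
        ;   /* a real implementation would cpu_relax() here */
}

Note that the spin loop issues no atomic operations at all, which is the "spin without an atomic operation" property claimed in the patch description quoted at the end of this page, and serving tickets in order is what makes the lock fair.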
>>> On 21.04.15 at 12:11, wrote:
> @@ -213,27 +211,32 @@ int _spin_trylock(spinlock_t *lock)
>
> void _spin_barrier(spinlock_t *lock)
> {
> +    spinlock_tickets_t sample;
> #ifdef LOCK_PROFILE
>     s_time_t block = NOW();
> -    u64 loop = 0;
> +#endif
>
>     check_barrier(&lock->debug);
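Given the spinlock_tickets_t sample declared in the hunk above, here is a sketch of how a ticket-based _spin_barrier can work: read head and tail as one word, and if the lock was held, wait until the holder observed at that moment has advanced head. GCC builtins and simplified memory ordering stand in for Xen's primitives; this is an illustration, not the patch's exact code.

#include <stdint.h>

typedef union { uint32_t head_tail; struct { uint16_t head, tail; }; } spinlock_tickets_t;
typedef struct spinlock { spinlock_tickets_t tickets; } spinlock_t;

static void ticket_barrier_sketch(spinlock_t *lock)
{
    spinlock_tickets_t sample;

    /* One 32-bit load gives a consistent head/tail pair. */
    sample.head_tail = __atomic_load_n(&lock->tickets.head_tail,
                                       __ATOMIC_SEQ_CST);

    if ( sample.head != sample.tail )   /* lock was held (or waiters queued) */
        while ( __atomic_load_n(&lock->tickets.head, __ATOMIC_SEQ_CST)
                == sample.head )        /* wait for that holder to move on */
            ;   /* a real implementation would cpu_relax() here */
}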
Replace the byte locks with ticket locks. Ticket locks are: a) fair;
and b) perform better when contended since they spin without an atomic
operation.

The lock is split into two ticket values: head and tail. A locker
acquires a ticket by (atomically) increasing tail and using the
previous tail value as its ticket.
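The head/tail split described above also gives a natural trylock (the hunk header earlier on this page mentions _spin_trylock): take a ticket only if the lock is currently free, using a single compare-and-swap of the whole word. This is a sketch under the same assumptions as the earlier examples; it is not claimed to be how the patch actually implements _spin_trylock.

#include <stdbool.h>
#include <stdint.h>

typedef union { uint32_t head_tail; struct { uint16_t head, tail; }; } spinlock_tickets_t;
typedef struct spinlock { spinlock_tickets_t tickets; } spinlock_t;

static bool ticket_trylock_sketch(spinlock_t *lock)
{
    spinlock_tickets_t old, want;

    old.head_tail = __atomic_load_n(&lock->tickets.head_tail,
                                    __ATOMIC_RELAXED);
    if ( old.head != old.tail )
        return false;                /* held, or waiters queued: give up */

    want = old;
    want.tail++;                     /* our ticket would be old.tail */

    /* Succeeds only if nobody took a ticket since the load above. */
    return __atomic_compare_exchange_n(&lock->tickets.head_tail,
                                       &old.head_tail, want.head_tail,
                                       false, __ATOMIC_ACQUIRE,
                                       __ATOMIC_RELAXED);
}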