Hi,

We are receiving a substantial number of errors relating to delays and 
retransmissions; below are two examples:

RADIUS authentication failed for "auser" on J1.31 Message: Cannot open 
RADIUS session (no authentication response)
Fri Feb 18 11:57:45 2000: INFO: Duplicate request id 92 received from 
1.2.3.4: ignored

The main Tigris NAS is connected to the Radiator server via an uncongested 
100Mb switch, so we can assume that network failures are pretty unlikely.  
The PC the RADIUS server runs on is running only one instance of Radiator, 
plus ssh to copy and unzip the configs from another machine.  The PC is a 
Pentium 1 233 with 192Mb RAM running FreeBSD 3.2-RELEASE.  We use flat 
files for passwords and log files, and we currently don't use RADIUS 
proxying.

We currently have the authentication retry interval set to 10 seconds and 
the retry count set to 3.
The accounting retry interval is set to 40 seconds and the retry count set 
to 10.

What would be the optimum retry interval and count?  What should we look at 
in more detail to prevent these types of problems?

We are also getting duplicate Stop entries.  The entries concerned have 
the same session ids but different accounting delay times.  I have noticed 
that the duplicate entries are exactly 40 seconds apart, which is the 
retransmission time set on our NASes.  We can change that timeout to 
anywhere between 1 and 60 seconds.  Do you have any recommendations, or is 
the only solution to filter these out in our accounting DB?
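
In case filtering them out in the accounting DB does turn out to be the only 
option, the sort of thing I have in mind is roughly the sketch below: keep 
the first Stop seen for each NAS/session-id pair and drop later ones.  The 
record format and attribute names here are just assumptions for illustration, 
not what Radiator actually writes to our accounting files.

# Sketch only: assumes each accounting record has already been parsed into a
# dict keyed by RADIUS attribute name; parsing the detail/log file itself is
# left out, and the attribute names may need adjusting.
def filter_duplicate_stops(records):
    seen = set()
    kept = []
    for rec in records:
        if rec.get("Acct-Status-Type") == "Stop":
            key = (rec.get("NAS-IP-Address"), rec.get("Acct-Session-Id"))
            if key in seen:
                continue  # later retransmission of a Stop we already kept
            seen.add(key)
        kept.append(rec)
    return kept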

Cheers,
Richard

Our Radius Client config:
<Client tigris.tassie.net.au>
         Secret   password
</Client>

This will mean that the DupInterval will default to 2 seconds.
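
For reference, if we did want to set it explicitly, I assume the same clause 
with DupInterval spelled out (at its documented default) would look like:

<Client tigris.tassie.net.au>
         Secret         password
         DupInterval    2
</Client>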

Our NAS config:
AUTHENTICATION          Delay 10        Retry 3
ACCOUNTING              Delay 40        Retry 10
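
To spell out how I understand these settings interact with the 2-second 
DupInterval (my reading only, happy to be corrected):

    t=0s   NAS sends an accounting Stop
    t=40s  no response seen, so the NAS retransmits the identical packet;
           40s is well outside the 2s DupInterval, so the retransmission is
           processed as a fresh request and a second Stop entry is logged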

Manual docs:

http://www.open.com.au/radiator/ref.html#pgfId=363701

6.4.4 DupInterval
If more than 1 Radius request from this Client with the same Radius 
Identifier are received within DupInterval seconds, the 2nd and subsequent 
are ignored. A value of 0 means duplicates are always accepted, which might 
not be very wise, except during testing. Default is 2 seconds, which will 
detect and ignore duplicates due to multiple transmission paths. In general 
you should never need to worry about or set this parameter. Ignore it and 
accept the default.

# brian.open.com.au is being tested
<Client brian.open.com.au>
         Secret       666obaFGkmRNs666
         DupInterval  0
</Client>


Email on subject:

 >Date: Thu, 10 Feb 2000 07:55:15 +0100
 >From: Paul Rolland <[EMAIL PROTECTED]>
 >To: Mike McCauley <[EMAIL PROTECTED]>
 >
 > My advice is to reduce the DupInterval to something like 2 seconds. It is
 > really only intended to catch genuine duplicate packets (ie packets sent
 > along duplicate parallel network paths, or from some other pathological
 > network problem). Its really not supposed to catch _retransmissions_ by
 > the NAS. As you have found, when it starts to catch _retransmissions_ (as
 > opposed to duplicates), you start to have problems.
