Hello,

I'm attempting to load some old CDR accounting data into my dev environment
through Radiator, and I'm seeing a problem with dropped records. I saw the note
in the doc about increasing SocketQueueLength and I've done that both in the
Radiator config and in the OS, taking it from 128k to 12M (overkill?).
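For what it's worth, on my Linux box I've been sanity-checking that the kernel actually took the larger buffer, and watching the UDP drop counters while replaying (counter names and paths are Linux-specific, so adjust for other OSes):

```shell
# Confirm the kernel limits actually took the new values
sysctl net.core.rmem_max net.core.rmem_default

# While replaying, watch the UDP counters; a climbing RcvbufErrors /
# InErrors value means the socket queue is still overflowing.
awk '/^Udp:/ {print}' /proc/net/snmp
```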
But even with that, every time I reload the same day's data from disk I'm
getting more and more records into the DB for that day. It's not a big
difference, maybe 200 more records out of 18k each time I resend the data. I'm
using the unique record ID as the primary DB key to avoid duplicate entries.
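To rule out the source files themselves, I've been checking the batch for repeated record IDs before loading (field position and separator here are placeholders for my actual CDR layout); if this comes back empty, the extra rows per replay must be records that were dropped on earlier runs, not duplicates slipping past the primary key:

```python
from collections import Counter

def duplicate_record_ids(cdr_lines, id_field=0, sep=","):
    """Return record IDs that appear more than once in a batch of CDR lines.

    id_field and sep are assumptions about the CDR layout; adjust to suit.
    Blank lines are ignored.
    """
    counts = Counter(
        line.split(sep)[id_field] for line in cdr_lines if line.strip()
    )
    return {rid: n for rid, n in counts.items() if n > 1}
```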
I'm not seeing any errors or timeouts on the client side, which tells me the
packets are being acknowledged as received, and I see a continuous stream of
packets to and from the client. But when tailing a debug log from Radiator,
where I should see one log entry for every record, I see periodic pauses in
the log, sometimes of a few seconds.

Of course this isn't a normal level of activity; this is me blasting the
server with CDR files stored on disk. Any insight on what else I might try
here?
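One thing I've been meaning to try is pacing the replay instead of blasting it, to give the server time to drain its queue between packets. A rough sketch of what I mean, using plain UDP from the stdlib (host, port, and the rate are placeholders, and the packets would be my pre-encoded accounting requests):

```python
import socket
import time

def replay_cdrs(packets, host, port, rate_pps=200):
    """Send pre-encoded accounting packets over UDP at a fixed rate.

    rate_pps is an untuned starting point; the idea is just to keep the
    send rate below whatever the server can drain, so the socket queue
    never fills. A token bucket would allow short bursts if needed.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    interval = 1.0 / rate_pps
    sent = 0
    for pkt in packets:
        sock.sendto(pkt, (host, port))
        sent += 1
        time.sleep(interval)  # simple fixed pacing between datagrams
    sock.close()
    return sent
```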

Thanks,
J

_______________________________________________
radiator mailing list
radiator@open.com.au
http://www.open.com.au/mailman/listinfo/radiator
