I am going way out on a limb here, since you mentioned that file transfers to
and from IBM are involved.
I just closed a ticket that had been open since June. This started around
the time IBM changed their DNS names to redirect to a new set of internal IP
addresses. I assume this was to reflect some kind of internal server
migration to new equipment.
For me, it caused Java 'write' failures and 'broken pipes' at random times for
random RECEIVE ORDER commands.
There was also some hubbub about getting a new certificate, but I do not think
that is the problem.

After dozens of tests, I found the following workaround. The 'usual' way I
was running a set of 9 RECEIVE ORDER requests was in parallel. Of those nine,
the ones with a larger expected payload were the ones that failed most often.
The surprising workaround is to submit the 9 jobs sequentially instead of in
parallel. So far, that brings me back up to the 100% reliability I never had
to worry about before.
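For what it is worth, the shape of the workaround in Java is just serializing the submissions. A minimal sketch, with everything hypothetical except the sequential-vs-parallel idea: submitReceiveOrder stands in for whatever actually drives one transfer, and the single-thread executor is what forces one job to finish before the next starts.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ReceiveOrderRunner {

    // Hypothetical stand-in for one RECEIVE ORDER transfer; the real job
    // would open the connection and stream the payload.
    static int submitReceiveOrder(String orderId) {
        return orderId.length();
    }

    // Run the given orders one at a time: each call waits for the
    // previous transfer to finish before the next one starts.
    static List<Integer> runSequentially(List<String> orders) throws Exception {
        // The single-thread executor is the whole workaround: swap in
        // Executors.newFixedThreadPool(9) and you are back to the old
        // parallel behavior that was hitting broken pipes.
        ExecutorService pool = Executors.newSingleThreadExecutor();
        List<Integer> results = new ArrayList<>();
        try {
            for (String order : orders) {
                Future<Integer> f = pool.submit(() -> submitReceiveOrder(order));
                results.add(f.get()); // block until this transfer completes
            }
        } finally {
            pool.shutdown();
        }
        return results;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(runSequentially(List.of("ORDER1", "ORDER2", "ORDER3")));
    }
}
```

The parallel version differs only in the executor and in collecting all the Futures before waiting on any of them, which is why it was an easy thing to flip back and forth while testing.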

I am mildly curious what kind of trace you are using as proof that you are not
going to the external switch. The only traces I know of are software traces
that capture packets in memory on their way in and out of the stack. Is there
something that will show you the MAC addresses of the intervening nodes (like
external switches)? You said earlier that the network staff could not give you
info on the ins and outs of the switch. In my world, traces of traffic switched
through the OSA and traces of traffic that goes external and back look the same.

There must be a hardware trace involved. Maybe my ignorance is showing.
----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [email protected] with the message: INFO IBM-MAIN