This is the latest ethtool -S output:
beehive:~# ethtool -S eth4
NIC statistics:
     rx_packets: 33491526
     tx_packets: 41410384
     rx_bytes: 28384277429
     tx_bytes: 46178788616
     rx_broadcast: 3144
     tx_broadcast: 2068
     rx_multicast: 79
     tx_multicast: 0
     rx_errors: 0
     tx_errors: 0
     tx_dropped: 0
     multicast: 79
     collisions: 0
     rx_length_errors: 0
     rx_over_errors: 0
     rx_crc_errors: 0
     rx_frame_errors: 0
     rx_no_buffer_count: 0
     rx_missed_errors: 0
     tx_aborted_errors: 0
     tx_carrier_errors: 0
     tx_fifo_errors: 0
     tx_heartbeat_errors: 0
     tx_window_errors: 0
     tx_abort_late_coll: 0
     tx_deferred_ok: 36256
     tx_single_coll_ok: 0
     tx_multi_coll_ok: 0
     tx_timeout_count: 0
     tx_restart_queue: 0
     rx_long_length_errors: 0
     rx_short_length_errors: 0
     rx_align_errors: 0
     tx_tcp_seg_good: 0
     tx_tcp_seg_failed: 0
     rx_flow_control_xon: 37420
     rx_flow_control_xoff: 37420
     tx_flow_control_xon: 0
     tx_flow_control_xoff: 0
     rx_long_byte_count: 28384277429
     rx_csum_offload_good: 33478553
     rx_csum_offload_errors: 0
     rx_header_split: 0
     alloc_rx_buff_failed: 0
     tx_smbus: 0
     rx_smbus: 0
     dropped_smbus: 0
There have been no further deferred frames, in accordance with what was
posted above. However, as Urs quite correctly points out, the problem
is still very much there. I am waiting for new and improved cables
(third batch) to see if that makes a difference.
I am also wondering whether a layer-3 managed switch with per-port
statistics might prove helpful.
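Even without a managed switch, the NIC counters above can be watched for movement between snapshots (e.g. whether tx_deferred_ok or the rx_flow_control_xon/xoff pair keep climbing). A minimal sketch of that, assuming the snapshots are captured to text files with something like `ethtool -S eth4 > snap1.txt` (the helper names here are my own, not part of ethtool):

```python
# Sketch: diff two `ethtool -S` snapshots to see which counters moved.
# Capture snapshots some time apart, then compare the parsed dicts.

def parse_ethtool_stats(text):
    """Parse `ethtool -S` output into a {counter_name: value} dict."""
    stats = {}
    for line in text.splitlines():
        line = line.strip()
        if ":" not in line or line.startswith("NIC statistics"):
            continue  # skip the shell prompt line and the header
        name, _, value = line.partition(":")
        value = value.strip()
        if value.lstrip("-").isdigit():
            stats[name.strip()] = int(value)
    return stats

def diff_stats(before, after):
    """Return only the counters whose values changed between snapshots."""
    return {k: after[k] - before[k]
            for k in after if k in before and after[k] != before[k]}
```

Run against two snapshots taken a few hours apart, an empty diff for the error and deferral counters would confirm the interface itself is quiet while the corruption continues.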
To answer Bill's questions, Memtest86+ 1.70 was run overnight; it
totalled some fifty passes with no errors. I know some maintain that
memtest is not a definitive RAM test, but it certainly catches most
problems.
As for the disk side of the equation, that is of course a possible
concern, and I will be running a second, more intensive batch of tests.
However, VirtualBox (running on the server) reports no errors despite
pushing significant amounts of data every day (10 GB+, passing through
Samba via the tap device that is bridged to the problematic Ethernet
interface), which makes me fairly confident that the disks are not
where the problem lies.
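To separate the disk from the link more directly, one can checksum a file on the server and its copy pulled across the Samba share and compare. A minimal sketch (the paths would be whatever local file and mounted-share copy you pick; nothing here is specific to my setup):

```python
# Sketch: verify end-to-end integrity across the suspect link by
# comparing SHA-256 digests of a source file and its transferred copy.

import hashlib

def file_digest(path, chunk_size=1 << 20):
    """SHA-256 of a file, read in chunks to keep memory usage flat."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def copies_match(local_path, remote_path):
    """True if both copies hash identically, i.e. no corruption in transit."""
    return file_digest(local_path) == file_digest(remote_path)
```

A mismatch on a file that reads back clean locally would point squarely at the network path rather than the disks.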

LF
-
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html