Hi Jack,

# netstat -m
65493/19272/84765 mbufs in use (current/cache/total)
65491/13867/79358/1014370 mbuf clusters in use (current/cache/total/max)
65491/13698 mbuf+clusters out of packet secondary zone in use (current/cache)
0/15/15/507184 4k (page size) jumbo clusters in use (current/cache/total/max)
0/0/0/150276 9k jumbo clusters in use (current/cache/total/max)
0/0/0/84530 16k jumbo clusters in use (current/cache/total/max)
147355K/32612K/179967K bytes allocated to network (current/cache/total)
0/0/0 requests for mbufs denied (mbufs/clusters/mbuf+clusters)
0/0/0 requests for mbufs delayed (mbufs/clusters/mbuf+clusters)
0/0/0 requests for jumbo clusters delayed (4k/9k/16k)
0/0/0 requests for jumbo clusters denied (4k/9k/16k)
0 requests for sfbufs denied
0 requests for sfbufs delayed
0 requests for I/O initiated by sendfile
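
If I'm reading that right, cluster usage (65491) is well below the 1014370 ceiling and nothing was denied or delayed. Just in case, this is roughly how I would check the limit and raise it via /boot/loader.conf (the 2000000 value is only an example, not something I have applied here):

# sysctl kern.ipc.nmbclusters
# echo 'kern.ipc.nmbclusters="2000000"' >> /boot/loader.conf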

Thanks for your help.

On 04/07/2014 04:59, Jack Vogel wrote:
What does a netstat -m show? I noticed you show no_desc counts on
all your queues; perhaps you don't have enough mbufs/clusters available?
Does your message log show any events or messages of significance?
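
Something like this would show whether the cluster pool is the limit (off the top of my head, the zone names can differ slightly between releases):

# netstat -m
# vmstat -z | egrep -i 'mbuf|cluster'
# sysctl kern.ipc.nmbclusters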

I'm not sure about the module compatibility; Jeff would be better positioned
to answer that.

Jack



On Fri, Jul 4, 2014 at 12:38 AM, Marcelo Gondim <gon...@bsdinfo.com.br> wrote:

Hi all,

Both modules are 850nm. Could there be some incompatibility between the
Datacom XFP optical module and the SFP+ optical module in the Intel X520-SR2?
I ask this because I replaced all the hardware and the optical patch cords and
nothing worked. The Datacom is a DM4100.
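
By the way, is there a way to read the transceiver data from the host side? If I'm not mistaken, newer FreeBSD versions can dump the SFP+ EEPROM with something like:

# ifconfig -v ix0

which should show the plugged module type, vendor and wavelength. I have not been able to test that here yet.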

dev.ix.0.%desc: Intel(R) PRO/10GbE PCI-Express Network Driver, Version - 2.5.15
dev.ix.0.%driver: ix
dev.ix.0.%location: slot=0 function=0 handle=\_SB_.PCI1.BR48.S3F0
dev.ix.0.%pnpinfo: vendor=0x8086 device=0x10fb subvendor=0x8086 subdevice=0x7a11 class=0x020000
dev.ix.0.%parent: pci131
dev.ix.0.fc: 3
dev.ix.0.enable_aim: 1
dev.ix.0.advertise_speed: 0
dev.ix.0.dropped: 0
dev.ix.0.mbuf_defrag_failed: 0
dev.ix.0.watchdog_events: 0
dev.ix.0.link_irq: 61846
dev.ix.0.queue0.interrupt_rate: 83333
dev.ix.0.queue0.irqs: 2240081588
dev.ix.0.queue0.txd_head: 1687
dev.ix.0.queue0.txd_tail: 1687
dev.ix.0.queue0.tso_tx: 5
dev.ix.0.queue0.no_tx_dma_setup: 0
dev.ix.0.queue0.no_desc_avail: 2171
dev.ix.0.queue0.tx_packets: 4721544662
dev.ix.0.queue0.rxd_head: 223
dev.ix.0.queue0.rxd_tail: 222
dev.ix.0.queue0.rx_packets: 3939926170
dev.ix.0.queue0.rx_bytes: 708714319872
dev.ix.0.queue0.rx_copies: 1428958853
dev.ix.0.queue0.lro_queued: 0
dev.ix.0.queue0.lro_flushed: 0
dev.ix.0.queue1.interrupt_rate: 100000
dev.ix.0.queue1.irqs: 2164304165
dev.ix.0.queue1.txd_head: 24
dev.ix.0.queue1.txd_tail: 24
dev.ix.0.queue1.tso_tx: 0
dev.ix.0.queue1.no_tx_dma_setup: 0
dev.ix.0.queue1.no_desc_avail: 241832
dev.ix.0.queue1.tx_packets: 4995476933
dev.ix.0.queue1.rxd_head: 1161
dev.ix.0.queue1.rxd_tail: 1160
dev.ix.0.queue1.rx_packets: 3901327164
dev.ix.0.queue1.rx_bytes: 691440627148
dev.ix.0.queue1.rx_copies: 1408409383
dev.ix.0.queue1.lro_queued: 0
dev.ix.0.queue1.lro_flushed: 0
dev.ix.0.queue2.interrupt_rate: 83333
dev.ix.0.queue2.irqs: 2167993136
dev.ix.0.queue2.txd_head: 1329
dev.ix.0.queue2.txd_tail: 1329
dev.ix.0.queue2.tso_tx: 0
dev.ix.0.queue2.no_tx_dma_setup: 0
dev.ix.0.queue2.no_desc_avail: 190120
dev.ix.0.queue2.tx_packets: 5013202508
dev.ix.0.queue2.rxd_head: 2039
dev.ix.0.queue2.rxd_tail: 2038
dev.ix.0.queue2.rx_packets: 3955460159
dev.ix.0.queue2.rx_bytes: 743382421188
dev.ix.0.queue2.rx_copies: 1422295822
dev.ix.0.queue2.lro_queued: 0
dev.ix.0.queue2.lro_flushed: 0
dev.ix.0.queue3.interrupt_rate: 71428
dev.ix.0.queue3.irqs: 2139498119
dev.ix.0.queue3.txd_head: 673
dev.ix.0.queue3.txd_tail: 673
dev.ix.0.queue3.tso_tx: 0
dev.ix.0.queue3.no_tx_dma_setup: 0
dev.ix.0.queue3.no_desc_avail: 94226
dev.ix.0.queue3.tx_packets: 5301360114
dev.ix.0.queue3.rxd_head: 416
dev.ix.0.queue3.rxd_tail: 415
dev.ix.0.queue3.rx_packets: 3951345010
dev.ix.0.queue3.rx_bytes: 723655881546
dev.ix.0.queue3.rx_copies: 1424750061
dev.ix.0.queue3.lro_queued: 0
dev.ix.0.queue3.lro_flushed: 0
dev.ix.0.queue4.interrupt_rate: 100000
dev.ix.0.queue4.irqs: 2027199532
dev.ix.0.queue4.txd_head: 764
dev.ix.0.queue4.txd_tail: 764
dev.ix.0.queue4.tso_tx: 0
dev.ix.0.queue4.no_tx_dma_setup: 0
dev.ix.0.queue4.no_desc_avail: 174621
dev.ix.0.queue4.tx_packets: 5250099331
dev.ix.0.queue4.rxd_head: 780
dev.ix.0.queue4.rxd_tail: 779
dev.ix.0.queue4.rx_packets: 3898505370
dev.ix.0.queue4.rx_bytes: 680413268286
dev.ix.0.queue4.rx_copies: 1415917068
dev.ix.0.queue4.lro_queued: 0
dev.ix.0.queue4.lro_flushed: 0
dev.ix.0.queue5.interrupt_rate: 62500
dev.ix.0.queue5.irqs: 2076140170
dev.ix.0.queue5.txd_head: 553
dev.ix.0.queue5.txd_tail: 553
dev.ix.0.queue5.tso_tx: 23
dev.ix.0.queue5.no_tx_dma_setup: 0
dev.ix.0.queue5.no_desc_avail: 178058
dev.ix.0.queue5.tx_packets: 5266432651
dev.ix.0.queue5.rxd_head: 1870
dev.ix.0.queue5.rxd_tail: 1869
dev.ix.0.queue5.rx_packets: 3876348699
dev.ix.0.queue5.rx_bytes: 684015705094
dev.ix.0.queue5.rx_copies: 1433886240
dev.ix.0.queue5.lro_queued: 0
dev.ix.0.queue5.lro_flushed: 0
dev.ix.0.queue6.interrupt_rate: 100000
dev.ix.0.queue6.irqs: 2106459361
dev.ix.0.queue6.txd_head: 181
dev.ix.0.queue6.txd_tail: 181
dev.ix.0.queue6.tso_tx: 0
dev.ix.0.queue6.no_tx_dma_setup: 0
dev.ix.0.queue6.no_desc_avail: 2112
dev.ix.0.queue6.tx_packets: 4786334585
dev.ix.0.queue6.rxd_head: 1165
dev.ix.0.queue6.rxd_tail: 1164
dev.ix.0.queue6.rx_packets: 3963283094
dev.ix.0.queue6.rx_bytes: 707926383609
dev.ix.0.queue6.rx_copies: 1443393267
dev.ix.0.queue6.lro_queued: 0
dev.ix.0.queue6.lro_flushed: 0
dev.ix.0.queue7.interrupt_rate: 62500
dev.ix.0.queue7.irqs: 2046097590
dev.ix.0.queue7.txd_head: 1928
dev.ix.0.queue7.txd_tail: 1928
dev.ix.0.queue7.tso_tx: 0
dev.ix.0.queue7.no_tx_dma_setup: 0
dev.ix.0.queue7.no_desc_avail: 151682
dev.ix.0.queue7.tx_packets: 4753275982
dev.ix.0.queue7.rxd_head: 971
dev.ix.0.queue7.rxd_tail: 970
dev.ix.0.queue7.rx_packets: 3941412326
dev.ix.0.queue7.rx_bytes: 707141933261
dev.ix.0.queue7.rx_copies: 1438797770
dev.ix.0.queue7.lro_queued: 0
dev.ix.0.queue7.lro_flushed: 0
dev.ix.0.mac_stats.crc_errs: 119
dev.ix.0.mac_stats.ill_errs: 4
dev.ix.0.mac_stats.byte_errs: 20
dev.ix.0.mac_stats.short_discards: 0
dev.ix.0.mac_stats.local_faults: 14
dev.ix.0.mac_stats.remote_faults: 69
dev.ix.0.mac_stats.rec_len_errs: 0
dev.ix.0.mac_stats.xon_txd: 201740142300
dev.ix.0.mac_stats.xon_recvd: 0
dev.ix.0.mac_stats.xoff_txd: 3736066605
dev.ix.0.mac_stats.xoff_recvd: 0
dev.ix.0.mac_stats.total_octets_rcvd: 11577768204079
dev.ix.0.mac_stats.good_octets_rcvd: 11577747489371
dev.ix.0.mac_stats.total_pkts_rcvd: 31435679394
dev.ix.0.mac_stats.good_pkts_rcvd: 18446741966445774380
dev.ix.0.mac_stats.mcast_pkts_rcvd: 1678
dev.ix.0.mac_stats.bcast_pkts_rcvd: 60437
dev.ix.0.mac_stats.rx_frames_64: 488
dev.ix.0.mac_stats.rx_frames_65_127: 22152221688
dev.ix.0.mac_stats.rx_frames_128_255: 1593120232
dev.ix.0.mac_stats.rx_frames_256_511: 689654943
dev.ix.0.mac_stats.rx_frames_512_1023: 1050762441
dev.ix.0.mac_stats.rx_frames_1024_1522: 5949659560
dev.ix.0.mac_stats.recv_undersized: 0
dev.ix.0.mac_stats.recv_fragmented: 0
dev.ix.0.mac_stats.recv_oversized: 0
dev.ix.0.mac_stats.recv_jabberd: 1
dev.ix.0.mac_stats.management_pkts_rcvd: 0
dev.ix.0.mac_stats.management_pkts_drpd: 0
dev.ix.0.mac_stats.checksum_errs: 189336386
dev.ix.0.mac_stats.good_octets_txd: 39184317469340
dev.ix.0.mac_stats.total_pkts_txd: 40087600379
dev.ix.0.mac_stats.good_pkts_txd: 18446743912615910349
dev.ix.0.mac_stats.bcast_pkts_txd: 73107
dev.ix.0.mac_stats.mcast_pkts_txd: 18446743872528672584
dev.ix.0.mac_stats.management_pkts_txd: 0
dev.ix.0.mac_stats.tx_frames_64: 18446743875376966599
dev.ix.0.mac_stats.tx_frames_65_127: 7716639581
dev.ix.0.mac_stats.tx_frames_128_255: 2192043837
dev.ix.0.mac_stats.tx_frames_256_511: 1138023553
dev.ix.0.mac_stats.tx_frames_512_1023: 1236626066
dev.ix.0.mac_stats.tx_frames_1024_1522: 24955610750
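
(If those no_desc_avail counters are the real problem, would it make sense to try larger descriptor rings? If I understand ixgbe(4) correctly they are loader tunables, hw.ix.txd / hw.ix.rxd on recent drivers (possibly hw.ixgbe.* on older ones, I'm not sure which applies to 2.5.15), so something like this in /boot/loader.conf, with 4096 being only a guess on my part:

hw.ix.txd="4096"
hw.ix.rxd="4096"

I haven't changed anything yet.)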


On 02/07/2014 16:23, Marcelo Gondim wrote:

On 02/07/2014 14:07, sth...@nethelp.no wrote:
Is there any way that you can try to reproduce this using a B2B
configuration, or something that doesn't use XFP as a link partner? I'm
thinking that you are correct regarding an incompatibility issue between
SFP+ and XFP.

Why do you believe that? The optical signals are the same for SFP+
and XFP.

We have lots of 10G SFP+ / XFP links in production. It just works...

Steinar Haug, Nethelp consulting, sth...@nethelp.no

I think I found the problem. Our SFP+ optical module is 850nm MMF and
our transport operator is using a 1310nm MMF XFP.
I am waiting for them to exchange the module and see the result. Once the
module is changed, I will post the result here.

Thanks and best regards,
Gondim
