On 2017-06-20 23:44, Baldur Norddahl wrote:
But what foundation do you have for asserting that switch hardware is any
different in this regard? I can say that we are using 80 km modules in
various hardware without any issues. I admittedly do not use any high power
modules in servers, but I will need better evidence than this to assume
that it would not work just fine.
For switches I guess it is the same story as with PoE on them: the total
power budget matters. So if you pack a whole EX4500 with 10G 80 km SFP+
modules it might have problems as well, but for normal use, where only a
few ports are "long distance/high power", the 3.3V supply rail in a switch
should by design handle many SFPs; with 48 ports it should handle at least
72W of peak load by spec (48 ports x the 1.5W Power Level II limit per
module). There might be multiple power rails for groups of ports, but that
is still much better than just 750 mA on a network card.
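A quick Python sketch of that arithmetic, if anyone wants to check it (my
own back-of-envelope; the 1.5W per-port figure is my assumption for "by
specs", taken from the Power Level II limit):

    # Back-of-envelope 3.3V rail budget for a fully populated 48-port switch.
    # Assumes every module draws the full 1.5W Power Level II ceiling.
    PORTS = 48
    P_PER_MODULE_W = 1.5            # Power Level II limit per SFP+
    V_RAIL = 3.3                    # SFP supply voltage

    total_power_w = PORTS * P_PER_MODULE_W      # 72.0 W
    total_current_a = total_power_w / V_RAIL    # ~21.8 A across the rail(s)
    print(f"{total_power_w:.0f} W peak, {total_current_a:.1f} A at {V_RAIL} V")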
But that's just guessing; I have never seen circuit diagrams of good
switches, or even a reference design, as it is all NDA material.
On 20 Jun 2017 22.24, "Denys Fedoryshchenko" <de...@visp.net.lb> wrote:
On 2017-06-20 22:07, Baldur Norddahl wrote:
I would expect anything mounted in a computer to have all the power you
could want. It is not like the ATX power supply cares about an extra watt
or two.
As I understand the issue, it is more about cooling than power, and is
primarily a concern in high density switches where you could have 48 or
more modules to power and cool.
An SFP needs 3.3V; it might be supplied from a regulator on the card or
directly from PCI Express, I can't be absolutely sure. In the reference
design it is just 3.3V_NIA and then a filter. The reference design SFP
power circuit also defines a maximum of 750 mA at 3.3V to the SFP, and
that's only 2.475W (3.3V x 0.75A).
FTLX1471D3BCV (10 km SM): up to 285 mA
FTLX1671D3BCL (40 km SM): up to 400 mA
And a rule of thumb in electronics is that it is better not to exceed 50%
of the designed maximum current, as for many parts the rating is stated
for 25C and similarly ideal operating conditions.
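Putting those numbers side by side (a small sketch; the 50% derating
target is just the rule of thumb above, not anything from a datasheet):

    # Module draws vs. the 750 mA / 3.3V reference-design limit,
    # with a 50% derating applied as a rule-of-thumb "safe" budget.
    LIMIT_MA = 750
    DERATED_MA = 0.5 * LIMIT_MA     # 375 mA comfortable budget
    print(f"limit: {LIMIT_MA} mA = {LIMIT_MA / 1000 * 3.3:.3f} W")  # 2.475 W
    for name, draw_ma in [("FTLX1471D3BCV 10km", 285),
                          ("FTLX1671D3BCL 40km", 400)]:
        verdict = "within" if draw_ma <= DERATED_MA else "exceeds"
        print(f"{name}: {draw_ma} mA {verdict} the {DERATED_MA:.0f} mA budget")

So the 10 km part sits inside the derated budget, while the 40 km part
already exceeds it.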
I expect it might work, but no one knows for how long, or how reliably, if
it is not cooled very well. And the 82599 is sensitive to cooling (it is a
very old card after all); as soon as cooling is insufficient, it starts to
glitch.
On 20 Jun 2017 18.09, "Denys Fedoryshchenko" <de...@visp.net.lb> wrote:
I guess it depends on the NIC; there are many spinoffs of the Intel X520
with much weaker power supply circuitry. It might work with a good NIC,
but you can't rely on it long term, IMHO.
Even a 40 km Finisar SFP+ has a Pdiss of 1.5W. They also mention: "The
typical power consumption of the FTLX1672D3BTL may exceed the limit of
1.5W specified for the Power Level II transceivers"
If we talk about 80 km, Pdiss is 1.8W, while 10GBASE-LR is <1W.
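To see what those Pdiss figures mean as current, a rough conversion (my
simplification: it assumes all the module power is drawn from the 3.3V
rail, and uses 1.0W to stand in for the "<1W" LR figure):

    # Pdiss converted to approximate current draw on the 3.3V supply.
    V = 3.3
    for name, pdiss_w in [("10GBASE-LR (<1W)", 1.0),
                          ("40km FTLX1672D3BTL", 1.5),
                          ("80km", 1.8)]:
        print(f"{name}: {pdiss_w} W -> about {pdiss_w / V * 1000:.0f} mA")

That works out to roughly 303, 455 and 545 mA respectively, so the 80 km
part is well into the 750 mA per-port budget mentioned above.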
On 2017-06-20 16:30, Max Tulyev wrote:
We use Intel NICs with SFP+ cages. They work well with long and short
range SFP+ modules, including CWDM/DWDM.
On 15.06.17 12:10, chiel wrote:
Hello,
We are deploying more and more server based routers (based on BSD). We
have now come to the point where we need 10G uplinks on these devices,
and I would prefer to plug a long range 10G fiber straight into the
server without it going first into a router/switch from vendor x. It
seems to me that all the 10G PCIe cards only support copper 10GBASE-T,
short range 10GBASE-SR, or (but only very few) the 10 km 10GBASE-LR. Are
there any PCIe cards that support 10GBASE-ER and 10GBASE-ZR? I can't
seem to find any.
Chiel