I completely agree, the original "rfq" is super suspicious. There's no need
to be specifically at One Wilshire for a single 1U server (particularly with
only 10GbE interfaces, not 100GbE), since the most effective use of being at
a major interconnect point like that comes only if you're prepared to ...
While it may be a plausible scenario, IMO it is highly unlikely (< 0.0001%)
that this is the case in this situation, given the person who is asking...
Regards,
Christopher Hawker
From: NANOG on behalf of Saku Ytti
Sent: Thursday, August 8, 2024 3:13 PM
CoreSite now charges a disconnect fee for all cross-connects in addition to
the MRC and connection fee.
If you don't plan to cross-connect at CoreSite LA1 (One Wilshire), you may
consider other nearby facilities. Most facilities are backhauled there
anyway.
On Thu, Aug 8, 2024 at 1:15 PM Saku Ytti wrote:
On Wed, 7 Aug 2024 at 20:05, Christopher Morrow wrote:
> I'd bet the real answer is that someone wants to connect a commodity
> server to an IX and pretend to be
> some network/asn and then do some not terrific things with that setup :(
>
> seen this in AMSIX and DECIX ... don't know that I've no ...
On 8/8/24 03:37, William Herrin wrote:
Hi Eric,
All of these are excellent reasons why the DC -operator- should want
to use fiber in 10GE links.
The question was: why does a DC -customer- want 40 gigs of
specifically fiber optic connections in what is otherwise a minimum
server configuration?
On Wed, Aug 7, 2024 at 4:07 PM Eric Kuhnke wrote:
> Your typical cat 6A cable is significantly fatter in diameter, less flexible
> and [...]
Hi Eric,
All of these are excellent reasons why the DC -operator- should want
to use fiber in 10GE links.
The question was: why does a DC -customer- want 40 gigs of specifically
fiber optic connections in what is otherwise a minimum server configuration?
From a strictly physical cabling point of view, while 10GBaseT is likely to
work on ordinary cat5e or cat6 cabling at very short distances, such as
from a server to a top-of-rack aggregation switch, more successful results
will be seen with cat6a.
Your typical cat 6A cable is significantly fatter in diameter, less flexible
and ...
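[Editor's note: as a rough reference for the cabling point above, here is a
small Python sketch listing the commonly cited 10GBASE-T reach per cable
category. The figures and the REACH_10GBASET name are mine, added only for
context; check the 802.3an / TIA guidance for anything borderline.]

    # Commonly cited 10GBASE-T reach by cable category (rule-of-thumb
    # figures for context; not a substitute for the 802.3an / TIA specs).
    REACH_10GBASET = {
        "cat5e": "unspecified -- may work on very short in-rack runs",
        "cat6":  "roughly 37-55 m, depending on alien-crosstalk conditions",
        "cat6a": "100 m (the channel the PHY is actually specified for)",
    }

    for category, reach in REACH_10GBASET.items():
        print(f"{category:>6}: {reach}")

This matches the observation above: runs from a server to a top-of-rack
switch are short enough that lesser cable often works, but cat6a is what the
standard actually specifies for full reach.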
On 8/7/24 18:52, Saku Ytti wrote:
I think this is the least bad explanation. Some explanations are that
copper may not be available, but that doesn't explain the preference. Nor
do I think wattage/heat explains the preference, as it's hosted, so
customers probably shouldn't care. Latency could very well ...
On Wed, Aug 7, 2024 at 12:52 PM Saku Ytti wrote:
>
> budget, the actual hardware becomes very important, so i think lack of
> specificity there implies it's not about latency.
I'd bet the real answer is that someone wants to connect a commodity
server to an IX and pretend to be
some network/asn and then do some not terrific things with that setup :(
On Wed, 7 Aug 2024 at 17:41, Brandon Martin wrote:
> Among the other reasons folks have given, the 10GBASE-T PHY has added
> latency beyond the basic packetization/serialization delay inherent to
> Ethernet due to the use of a relatively long line code plus LDPC. It's
> not much (2-4 µs, which is ...
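[Editor's note: to put the quoted 2-4 µs figure in perspective, here is a
quick back-of-the-envelope Python sketch. The line-rate and fiber-propagation
constants are my own standard assumptions, not numbers from the thread.]

    # Back-of-the-envelope comparison for the quoted 2-4 us 10GBASE-T
    # PHY latency. Constants are assumed values, not thread data.
    LINE_RATE_BPS = 10e9       # 10 Gb/s
    FIBER_NS_PER_M = 5.0       # ~5 ns/m propagation in glass fiber

    def serialization_us(frame_bytes: int) -> float:
        """Time to clock one frame onto a 10 Gb/s link, in microseconds."""
        return frame_bytes * 8 / LINE_RATE_BPS * 1e6

    for size in (64, 512, 1500, 9000):
        print(f"{size:>5} B frame: {serialization_us(size):5.2f} us to serialize")

    for phy_us in (2.0, 4.0):
        metres = phy_us * 1000 / FIBER_NS_PER_M
        print(f"{phy_us:.0f} us of PHY latency ~ {metres:.0f} m of extra fiber")

In other words, the PHY penalty is roughly a couple of full-size frames'
worth of serialization time, or a few hundred metres of path length, which
is consistent with the "it's not much" framing in the quote.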
We offer highly customizable colo arrangements. For small engagements, we
usually price per box, so it doesn't have to be 1U.
Mundelein IL (northern Chicago suburbs)
Sincerely,
Michael Malitsky
Advanced Business Group
847-247-0700
mark@tinka.africa (Mark Tinka) wrote:
> Unless others have done it differently, what I used to do was run fibre to
> whatever the local terminal server's gateway router was, and use copper
> within or between (nearby) racks between the terminal server and the end
> device.
Oh sure, if you have an ...
On 8/7/24 17:38, Elmar K. Bins wrote:
Which is really detrimental if you need to OOB connect a server. IPMI ports are
generally copper; I suppose that will change, but it hasn't yet.
Unless others have done it differently, what I used to do was run fibre
to whatever the local terminal server's gateway router was, and use copper
within or between (nearby) racks between the terminal server and the end
device.
br...@shout.net (Bryan Holloway) wrote:
> Many of the big DCs don't do copper xconns anymore, so if you have a server
> with optical NICs, you don't need a switch or media-converter.
Which is really detrimental if you need to OOB connect a server. IPMI ports are
generally copper; I suppose that will change, but it hasn't yet.
On 8/7/24 16:14, Bryan Holloway wrote:
Many of the big DCs don't do copper xconns anymore, so if you have a
server with optical NICs, you don't need a switch or media-converter.
If it's in-rack or in-cage (or even in-contiguous-row racks), most data
centres may permit your own copper x-connects ...
On 8/7/24 02:01, Saku Ytti wrote:
I can't help you, but I'm just awfully curious and must ask, why
specifically optical ports? Seems very strange and a limiting
requirement for upside that my imagination struggles to find.
Among the other reasons folks have given, the 10GBASE-T PHY has added
latency beyond the basic packetization/serialization delay inherent to
Ethernet due to the use of a relatively long line code plus LDPC. It's
not much (2-4 µs, which is ...
On 8/7/24 08:17, Mark Tinka wrote:
On 8/7/24 08:01, Saku Ytti wrote:
I can't help you, but I'm just awfully curious and must ask, why
specifically optical ports? Seems very strange and a limiting
requirement for upside that my imagination struggles to find.
Many of the reasons I've heard ...
> On 6. Aug 2024, at 07:02, Tim Utschig wrote:
>
> Are there any providers of 1U personal colos these days?
>
> VMs are neat, but they lack the power to experiment with without
> paying an arm and a leg.
>
> I was lucky enough to have my 1U hosted by Dave Rand back in the
> day.
Satisfied customer ...