[KwartzLab] Where to get Cat6 cable?
cedric at ccjclearline.com
Sun Sep 2 12:58:51 EDT 2012
Another thing to keep in mind is whether or not Cat6 actually buys you anything in terms of capability. The main thing with Cat6 is that it allows you to maintain ethernet signal integrity over longer runs, or under more signal-adverse conditions; if you are dealing with lengths of 50m or less (given that your average house is generally less than 15m wide, that's usually a given), then Cat6, with its additional cost, weight and Pain-In-The-Ass (PITA) factor, buys you absolutely *nothing*.
"Oh, but what about 10GigE?", I hear you ask? Nobody runs 10GigE over Twisted Pair. Why? Four reasons:
- limited to 15m, much less if you have patch panels and patch cables that create insertion loss, or anything less than perfect termination.
- 25% higher transceiver latency than CX4 or fibre transceivers (10GigE is standardized for three different physical media: TP, CX4, and various wavelengths on single-mode fibre for long-haul, or multi-mode fibre for short-haul distances)
- 50% higher power cost for TP transceivers vs CX4 or fibre
- since nobody uses 10GigE over TP, the cards cost more
Bottom line: to do 10GigE over TP, you need a network card pulling almost as much power as a nice video card, and it gets you significantly less performance (why did you go 10GigE, if you didn't want performance, again?), at higher cost, and with less assurance that your physical layer isn't going to impact your performance.
10 GigE is still pretty expensive:
- probably about $1000-2000 per port to deploy, for around 4-12 ports, depending on interface cards, what switch you choose, etc., and
- looks to remain pretty expensive for some time yet, insofar as there is no mass-market need for it that anyone has yet discovered.
Yes, very lovely for businesses with racks of gear:
- at one client site, we are *looking at* deploying rack top switches with 10 GigE between the switches, for about 15 racks of misc gear; at another,
- we did six 10 GigE links a few years back for about $21k, at an industrial facility:
- each 10 GigE blade was $2k, and it supported up to 4 connections (MRSP of ~$3.5k, if memory serves),
- each transceiver that went in one of those 4 ports was $1.5k (we bought them from a gray market reseller on eBay; through normal channels, they went for about $2.5k/ea),
- then there was the cost of the fibre runs through the facility at about $1k each, etc;
- that cost does not include the switches themselves that we plugged the blades into;
- bottom line: it was about $4k per port, and while the cost has gotten better since we did that job, it's by no means cheap yet, and probably won't be for a few more years.
Now, granted, we were *very happy* with the results that we got at the industrial facility. Even though none of the links comes close to saturating the 10 GigE connection, the biggest benefit was the lower latency -- the moment a packet hit the 10 GigE port, it was *gone*, done and delivered to the other side, so in effect no packet *ever* had to wait for transmission; compared to the GigE links, transmission was effectively instantaneous. The network had a "feel" that was smooth as glass: latencies were always low, resources available over the network were always snappy, etc. This was primarily the result of very low packet latencies, and had very little to do with the available bandwidth. (So long as you have more bandwidth than you need, available bandwidth makes no difference to the performance of the network; instead you start caring about time-in-flight, interface latencies, and so forth in order to get extra performance.)
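To put some rough numbers on the "packet is *gone* the moment it hits the port" effect, here's a back-of-envelope sketch of serialization delay -- the time it takes just to clock one full-size Ethernet frame onto the wire at each link speed. The frame size and link rates are standard values, not measurements from the network described above:

```python
# Serialization delay: time to clock one frame onto the wire,
# ignoring propagation delay and switch/NIC latency.
FRAME_BYTES = 1518  # maximum standard Ethernet frame

def serialization_us(link_bits_per_sec, frame_bytes=FRAME_BYTES):
    """Microseconds to serialize one frame at the given link speed."""
    return frame_bytes * 8 / link_bits_per_sec * 1e6

for name, speed in [("100M", 100e6), ("GigE", 1e9), ("10GigE", 10e9)]:
    print(f"{name:>7}: {serialization_us(speed):8.2f} us/frame")
# Each 10x step in link speed cuts per-frame wire time 10x:
# roughly 121 us at 100M, 12 us at GigE, 1.2 us at 10GigE.
```

That order-of-magnitude drop in per-frame time (and correspondingly in queueing delay behind other frames) is what gives the network that "instantaneous" feel, even when the pipe is nowhere near full.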
This is why Infiniband networks are so cool, and used for scientific computing: while they can provide between 2 and 300 Gbit/sec communication between connected points (it gets complicated -- see http://en.wikipedia.org/wiki/InfiniBand if you crave details; basically you figure out how much bandwidth you need, and then hope you have enough money in your bank account to buy that particular level of hardware), their biggest claim to fame (as I understand it) is that they have a latency that's around 1 µsec (1000 nanoseconds, or 0.001 milliseconds), which is pretty damn good. Scientists use this to have multiple processes running on physically independent computers share vast amounts of state data in real time. (There is a nice paper talking about what latency can buy you in HPC applications, comparing 10GigE to Infiniband, here: http://www.hpcadvisorycouncil.com/pdf/IB_and_10GigE_in_HPC.pdf )
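A toy model shows why that ~1 µs latency matters so much for the small messages HPC codes trade: total transfer time is roughly latency + size/bandwidth, so for small payloads the latency term dominates completely. The specific latency and bandwidth figures below are illustrative round numbers (roughly 1 µs for Infiniband vs ~10 µs end-to-end for a 10GigE NIC of that era), not benchmarks:

```python
# Toy model: transfer time ~= fixed latency + serialization (size/bandwidth).
def transfer_us(size_bytes, latency_us, bw_bits_per_sec):
    """Approximate one-way transfer time in microseconds."""
    return latency_us + size_bytes * 8 / bw_bits_per_sec * 1e6

for size in (64, 4096, 1_000_000):
    ib = transfer_us(size, 1.0, 40e9)    # assumed ~1 us, 40 Gbit/s Infiniband
    te = transfer_us(size, 10.0, 10e9)   # assumed ~10 us, 10 Gbit/s 10GigE
    print(f"{size:>9} B   IB {ib:10.2f} us   10GigE {te:10.2f} us")
# For a 64-byte message the transfer is almost pure latency,
# so the lower-latency fabric wins by nearly 10x; only at large
# message sizes does raw bandwidth start to matter.
```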
But I digress. In conclusion, my recommendation is to save your money for something fun, and go with Cat5e cable. I would buy the fire-rated cable if installing in walls. PrimeSpec has been my chosen local supplier of such things for years -- generally their prices are good, and they're a local business worth supporting (full disclosure: in the past couple of years, they've also become a client, but I liked them long before that :)
All the Best,
On 2012-09-01, at 4:19 PM, Ben Brown wrote:
> One thing to keep in mind, is if you're planning to run cable in drop
> ceilings (and other spaces without a conduit), the fire code specifies
> using Plenum cable, which is more expensive than the PVC stuff (the
> added factor being it's less likely to kill you when it burns). I dunno
> if you care about code or not :)
> On 9/1/2012 3:35 PM, Darcy Casselman wrote:
>> I'm starting to wire up the house for ethernet, something I've wanted
>> to do since I moved in.
>> Yeah, wifi's great and all, but who doesn't want to cut holes in
>> plaster and fish wires? Srsly.
>> Anyone have a decent supplier for Cat6 networking cable? I don't need
>> much, probably just a couple hundred feet for the whole house. Home
>> Depot seems a touch pricey to me...
>> Discuss mailing list
>> Discuss at kwartzlab.ca