Product Id: 26891889
Description: Mellanox MetroX TX6100 - Switch - managed - 12 x FDR-10 InfiniBand QSFP - rack-mountable
Mfr Part #: MTX6100-2SFS
The TX6100 MetroX series extends InfiniBand from a single-location data center network to a high-performance technology for local, campus and even metro applications.
- 6 long haul (40 Gbps) ports in a 1U switch
- Up to 240 Gbps aggregate long-haul bandwidth
- 6 downlink (56 Gbps) VPI ports
- Compliant with IBTA 1.2.1 and 1.3
- 2 virtual lanes for QoS applications
- Compatible with Mellanox LR4 QSFP+ 40 Gbps transceivers
- Redundant power supplies and fan drawers
- Scaling-out data centers with MetroX
Data centers, compute clusters and supercomputers are overwhelmed by unprecedented growth in data volume, fueled by strong application and technology trends. InfiniBand products have traditionally been deployed for their high-performance interconnect benefits within a single data center. Mellanox MetroX switches, which implement long-haul InfiniBand, connect data centers deployed across multiple geographically distributed sites, extending the world-leading interconnect benefits of InfiniBand beyond local data centers and storage clusters.
- Long haul InfiniBand
MetroX switches, which implement long-haul InfiniBand, can transfer data over distances of up to 10 km. The switches enable consolidated data and storage networking over a single InfiniBand fabric. Long-haul InfiniBand technology guarantees high-performance, high-volume data sharing between distant sites, enabling expansion of existing data centers, disaster recovery, data mirroring and campus connectivity. MetroX enables a campus network to assemble large aggregate clusters, all connected and easily managed by an InfiniBand Subnet Manager - either the embedded manager, OpenSM, or Mellanox's Unified Fabric Manager (UFM). MetroX extends the RDMA capabilities of the InfiniBand protocol beyond the local cluster, bringing RDMA's benefits to data centers and storage clusters alike.
- 40 Gbps across campus, 56 Gbps locally
Mellanox's MetroX supports up to 6 long-haul ports running at 40 Gbps over distances of up to 10 km, plus 6 downlink ports running at 56 Gbps (FDR). This port capacity enables star-like campus deployments and provides a clear CAPEX reduction versus current single port-to-port long-haul solutions. MetroX switch latency is 200 ns, and each kilometer of transmission across optical fiber adds 5 microseconds of latency. MetroX downlink ports support Mellanox's Virtual Protocol Interconnect (VPI) technology, which enables any standard networking, storage or management protocol to operate seamlessly over any converged network.
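The latency figures above combine in a simple way: a fixed 200 ns switch latency plus 5 microseconds of fiber propagation delay per kilometer. A minimal sketch of that arithmetic (the helper name and structure are illustrative, not a Mellanox API):

```python
# Latency figures quoted in the product description:
SWITCH_LATENCY_NS = 200          # MetroX switch latency (nanoseconds)
FIBER_DELAY_NS_PER_KM = 5_000    # 5 microseconds of delay per km of fiber

def long_haul_latency_ns(distance_km: float) -> float:
    """One-way latency through a single MetroX switch plus the fiber span."""
    return SWITCH_LATENCY_NS + FIBER_DELAY_NS_PER_KM * distance_km

# At the maximum supported 10 km reach:
print(long_haul_latency_ns(10))  # 50200.0 ns, i.e. about 50.2 microseconds
```

At the full 10 km reach, fiber propagation delay dominates: the switch itself contributes under half a percent of the total.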