Product Id: 32703288
Description: Mellanox ConnectX-5 Ex VPI - Network adapter - PCIe 4.0 x16 - 100Gb Ethernet / 100Gb InfiniBand QSFP28 x 2
Mfr Part #: MCX556A-EDAT
The ConnectX-5 supports two ports of 100Gb/s InfiniBand and Ethernet connectivity, very high message rates, and PCIe switch and NVMe over Fabrics (NVMf) offloads, providing a high-performance, flexible solution for demanding applications and markets.
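As a back-of-the-envelope check on the host interface, the sketch below compares raw PCIe 4.0 bandwidth at different lane counts against the adapter's two 100Gb/s ports. It assumes PCIe 4.0's 16 GT/s per lane with 128b/130b encoding and ignores protocol overhead (TLP headers, flow control), so the figures are upper bounds, not measured throughput.

```python
# Back-of-the-envelope PCIe vs. port-bandwidth comparison.
# Assumptions: PCIe 4.0 runs at 16 GT/s per lane with 128b/130b
# line encoding; TLP/flow-control overhead is ignored.

GT_PER_LANE = 16e9        # transfers per second per PCIe 4.0 lane
ENCODING = 128 / 130      # 128b/130b encoding efficiency

def pcie_gbps(lanes: int) -> float:
    """Raw PCIe 4.0 bandwidth in Gb/s for a given lane count."""
    return GT_PER_LANE * ENCODING * lanes / 1e9

ports_gbps = 2 * 100      # two QSFP28 ports at 100 Gb/s each

for lanes in (8, 16):
    verdict = "sufficient" if pcie_gbps(lanes) >= ports_gbps else "insufficient"
    print(f"x{lanes}: {pcie_gbps(lanes):.1f} Gb/s raw "
          f"({verdict} for {ports_gbps} Gb/s of port bandwidth)")
```

The arithmetic shows why a wide Gen4 link matters for this class of adapter: an x8 link tops out around 126 Gb/s raw, below the 200 Gb/s of combined port bandwidth, while x16 provides roughly 252 Gb/s.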
- Adaptive routing on reliable transport
- Embedded PCIe switch
- High throughput, low latency and low CPU utilization
- Maximizes data center ROI
- Innovative rack design for storage
- Advanced storage capabilities
- Intelligent network adapter supporting flexible pipeline programmability
- Advanced performance in virtualized networks
HPC environments
The ConnectX-5 delivers high bandwidth, low latency, and high computation efficiency for high-performance, data-intensive, and scalable compute and storage platforms. ConnectX-5 enhances HPC infrastructures with MPI and SHMEM/PGAS rendezvous tag-matching offload, hardware support for out-of-order RDMA write and read operations, and additional PCIe atomic operations. The ConnectX-5 VPI utilizes both IBTA RDMA and RoCE technologies, delivering low latency and high performance. It further enhances RDMA networking by complementing the switch's adaptive-routing capabilities and supporting out-of-order data delivery while maintaining ordered completion semantics, providing multi-path reliability and efficient support for network topologies such as Dragonfly and Dragonfly+.
Storage environments
NVMe storage devices offer very fast storage access, and the evolving NVMe over Fabrics (NVMf) protocol leverages RDMA connectivity for remote access. ConnectX-5 goes further by providing NVMf target offloads, enabling efficient NVMe storage access with no CPU intervention, and thus higher performance and lower latency. Moreover, the embedded PCIe switch lets customers build standalone storage or machine-learning appliances. Standard block and file access protocols can leverage RoCE for high-performance storage access. A consolidated compute and storage network achieves significant cost/performance advantages over multi-fabric networks.