Product Id: 29963951
Description: Mellanox ConnectX-3 VPI - Network adapter low profile - InfiniBand FDR x 1 - for PowerEdge C4130, C6320; PowerEdge C6420, R430, R530, R630, R640, R730, R740, R830, R930
Mfr Part #: 540-BBKI
ConnectX-3 adapter cards with Virtual Protocol Interconnect (VPI), supporting InfiniBand and Ethernet connectivity, provide the highest-performing and most flexible interconnect solution for PCI Express Gen3 servers used in enterprise data centers, high-performance computing, and embedded environments. Clustered databases, parallel processing, transactional services, and high-performance embedded I/O applications achieve significant performance improvements, resulting in reduced completion time and lower cost per operation. ConnectX-3 with VPI also simplifies system development by serving multiple fabrics with one hardware design.
- Virtual protocol interconnect
- 1us MPI ping latency
- Single- and Dual-Port options available
- PCI Express 3.0
- CPU offload of transport operations
- Application offload
- GPU communication acceleration
- Precision clock synchronization
- End-to-end QoS and congestion control
- Hardware-based I/O virtualization
- Ethernet encapsulation (EoIB)
- Virtual protocol interconnect
VPI-enabled adapters enable any standard networking, clustering, storage, and management protocol to seamlessly operate over any converged network leveraging a consolidated software stack. With auto-sense capability, each ConnectX-3 port can identify and operate on InfiniBand, Ethernet, or Data Center Bridging (DCB) fabrics. FlexBoot provides additional flexibility by enabling servers to boot from remote InfiniBand or LAN storage targets. ConnectX-3 with VPI and FlexBoot simplifies I/O system design and makes it easier for IT managers to deploy infrastructure that meets the challenges of a dynamic data center.
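On Linux, the per-port protocol of a ConnectX-3 is typically exposed by the mlx4 driver through sysfs, which is one way the auto-sense/VPI behavior described above can be inspected or overridden. A minimal sketch follows; the PCI address `0000:04:00.0` is a placeholder and must be replaced with the adapter's actual address.

```shell
# Query the current protocol of port 1 on a ConnectX-3 (mlx4) device.
# The PCI address below is an example; find yours with: lspci | grep -i mellanox
cat /sys/bus/pci/devices/0000:04:00.0/mlx4_port1

# Switch port 1 between Ethernet, InfiniBand, and auto-sensing.
echo eth  > /sys/bus/pci/devices/0000:04:00.0/mlx4_port1   # Ethernet
echo ib   > /sys/bus/pci/devices/0000:04:00.0/mlx4_port1   # InfiniBand
echo auto > /sys/bus/pci/devices/0000:04:00.0/mlx4_port1   # auto-sense
```

Writes to this sysfs node require root privileges and take effect per port, so a dual-port card can run InfiniBand on one port and Ethernet on the other.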
- World-class InfiniBand performance
ConnectX-3 delivers low latency, high bandwidth, and computing efficiency for performance-driven server and storage clustering applications. Efficient computing is achieved by offloading protocol processing and data movement overhead, such as RDMA and Send/Receive semantics, from the CPU, leaving more processor power for the application. CORE-Direct brings the next level of performance improvement by offloading application overhead such as data broadcasting and gathering as well as global synchronization communication routines. GPU communication acceleration provides additional efficiency by eliminating unnecessary internal data copies, significantly reducing application run time. ConnectX-3 advanced acceleration technology enables higher cluster efficiency and scalability to tens of thousands of nodes.
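The latency and bandwidth characteristics described above can be measured with the standard `perftest` utilities shipped with most RDMA software stacks. A minimal two-node sketch, assuming both nodes have the adapter configured and the hostname `server-node` is a placeholder:

```shell
# Measure RDMA write bandwidth between two nodes with the perftest suite.
# On the server node, start the listener:
ib_write_bw

# On the client node, connect to it (hostname is a placeholder):
ib_write_bw server-node

# Similarly, ib_send_lat measures round-trip send latency:
ib_send_lat              # server side
ib_send_lat server-node  # client side
```

These tools exercise the RDMA path directly, so the numbers reflect the adapter's offloaded transport rather than the kernel TCP/IP stack.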
- RDMA over Converged Ethernet (RoCE)
ConnectX-3, utilizing IBTA RoCE technology, delivers similar low latency and high performance over Ethernet networks. Leveraging Data Center Bridging capabilities, RoCE provides efficient, low-latency RDMA services over Layer 2 Ethernet. With link-level interoperability in existing Ethernet infrastructure, network administrators can leverage existing data center fabric management solutions.
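Whether a port in Ethernet mode is actually exposed as an RDMA (RoCE) device can be checked with standard Linux tooling. A quick sketch, where `mlx4_0` is the typical device name for the first ConnectX-3 adapter but may differ on a given system:

```shell
# List RDMA-capable links; a ConnectX-3 port in Ethernet mode should
# appear with link_layer Ethernet, indicating RoCE is available.
rdma link show

# Show device attributes, including per-port link_layer, via libibverbs.
ibv_devinfo -d mlx4_0   # device name is an example
```

`rdma` is part of iproute2 and `ibv_devinfo` ships with the libibverbs utilities; both are commonly packaged in distribution RDMA stacks or MLNX_OFED.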
- I/O Virtualization
ConnectX-3 SR-IOV technology provides dedicated adapter resources and guaranteed isolation and protection for virtual machines (VMs) within the server. I/O virtualization with ConnectX-3 gives data center managers better server utilization while reducing cost, power, and cable complexity.
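With the mlx4 driver, SR-IOV virtual functions are classically enabled via module parameters at driver load time, after SR-IOV has been enabled in the adapter firmware and the platform BIOS/IOMMU. A minimal sketch; the VF count is an example value:

```shell
# Classic mlx4 approach: load the core driver with SR-IOV enabled.
# num_vfs sets how many virtual functions to create (example: 4);
# probe_vf controls how many VFs the host itself binds a driver to.
modprobe mlx4_core num_vfs=4 probe_vf=0

# Verify the VFs appeared as additional PCI functions.
lspci | grep -i mellanox
```

Each VF then appears to the hypervisor as an independent PCI function that can be passed through to a VM, giving the guest direct, isolated access to the adapter.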
- Storage accelerated
A consolidated compute and storage network achieves significant cost-performance advantages over multi-fabric networks. Standard block and file access protocols can leverage InfiniBand RDMA for high-performance storage access.
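As one concrete example of a file-access protocol leveraging RDMA, Linux supports NFS over RDMA via the `xprtrdma` transport. A minimal sketch, assuming an NFS server that exports over RDMA; the server name, export path, and mount point are placeholders:

```shell
# Load the NFS-over-RDMA client transport.
modprobe xprtrdma

# Mount an export over RDMA; 20049 is the standard NFS/RDMA port.
# Server name, export path, and mount point are placeholders.
mount -t nfs -o proto=rdma,port=20049 server:/export /mnt/fast
```

Block-access equivalents such as iSER (iSCSI Extensions for RDMA) and SRP follow the same idea: the standard protocol semantics are kept while data movement is offloaded to the adapter.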