Network Interface Cards

  • NVIDIA BlueField-3 P B3220 200 Gigabit Ethernet Card - PCI Express 5.0 x16 - 200 Gbit/s Data Transfer Rate - 2 Port(s) - Optical Fiber - FHHL Bracket Height - QSFP112 - Plug-in Card

    $6,175.00

  • Mellanox 40 Gigabit Ethernet Card - PCI Express 3.0 x8 - 2 Port(s) - Optical Fiber - 40GBase-X - Plug-in Card

    ConnectX®-3 Pro EN for Open Compute Project, supporting OCP Specification 2.0

    Single/Dual-Port 40 Gigabit Ethernet Adapters with PCI Express 3.0

    The Mellanox ConnectX-3 Pro EN 40 Gigabit Ethernet Network Interface Card (NIC) with PCI Express 3.0 delivers high-bandwidth, low-latency, industry-leading Ethernet connectivity for Open Compute Project (OCP) server and storage applications in Web 2.0, Enterprise Data Centers, and Cloud infrastructure.

    Web 2.0, public and private clouds, clustered databases, web infrastructure, and high-frequency trading are just a few applications that will achieve significant throughput and latency improvements, resulting in faster access, real-time response, and more virtual machines hosted per server. ConnectX-3 Pro 40GbE for Open Compute Project (OCP) Specification 2.0 improves network performance by increasing available bandwidth while decreasing the associated transport load on the CPU, especially in virtualized server environments.

    World-Class Ethernet Performance

    Virtualized Overlay Networks

    Infrastructure as a Service (IaaS) cloud demands that data centers host and serve multiple tenants, each with their own isolated network domain over a shared network infrastructure. To achieve maximum efficiency, data center operators create overlay networks that carry traffic from individual Virtual Machines (VMs) in encapsulated formats such as NVGRE and VXLAN over a logical "tunnel," thereby stretching a virtual layer-2 network over the physical layer-3 network. Overlay network architecture introduces an additional layer of packet processing at the hypervisor level, adding and removing protocol headers for the encapsulated traffic. The new encapsulation prevents many of the traditional "offloading" capabilities (e.g., checksum, LSO) from being performed at the NIC.

    ConnectX-3 Pro EN effectively addresses the increasing demand for an overlay network, enabling superior performance by introducing advanced NVGRE and VXLAN hardware offload engines that allow traditional offloads to be performed on the encapsulated traffic. With ConnectX-3 Pro EN, data center operators can decouple the overlay network layer from the physical NIC performance, thus achieving native performance in the new network architecture.
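
    As a hedged illustration of the overlay scenario described above (not vendor documentation), the short Python sketch below creates a VXLAN tunnel endpoint on a Linux host and checks whether the underlying NIC advertises UDP tunnel offloads. It assumes iproute2 and ethtool are installed; the interface name "eth0" and the VNI are placeholders.

    import subprocess

    UNDERLAY_IF = "eth0"   # physical port backed by the NIC (placeholder)
    VNI = 42               # arbitrary VXLAN Network Identifier for the example

    # Create the overlay endpoint; encapsulated traffic uses UDP port 4789 (the IANA default).
    subprocess.run(
        ["ip", "link", "add", "vxlan42", "type", "vxlan",
         "id", str(VNI), "dev", UNDERLAY_IF, "dstport", "4789"],
        check=True,
    )
    subprocess.run(["ip", "link", "set", "vxlan42", "up"], check=True)

    # Inspect the NIC's offload features; hardware tunnel offloads show up as
    # "tx-udp_tnl-segmentation" / "tx-udp_tnl-csum-segmentation" when supported.
    features = subprocess.run(
        ["ethtool", "-k", UNDERLAY_IF], capture_output=True, text=True, check=True
    ).stdout
    for line in features.splitlines():
        if "udp_tnl" in line:
            print(line.strip())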

    RDMA over Converged Ethernet

    ConnectX-3 Pro EN, utilizing IBTA RoCE technology, provides efficient RDMA (Remote Direct Memory Access) services, delivering low latency and high performance to bandwidth- and latency-sensitive applications. With link-level interoperability in existing Ethernet infrastructure, network administrators can leverage existing data center fabric management solutions.

    Sockets Acceleration

    Applications utilizing TCP/UDP/IP transport can achieve industry-leading throughput over 40GbE. The hardware-based stateless offload and flow steering engines in ConnectX-3 Pro EN reduce the CPU overhead of IP packet transport, freeing more processor cycles to work on the application. Sockets acceleration software further increases performance for latency-sensitive applications.
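
    As a rough sketch of the stateless offloads and flow steering mentioned above (an illustrative assumption of a typical Linux setup, not product instructions), the Python snippet below enables common offload features and installs one example steering rule with ethtool; the interface name and port number are placeholders.

    import subprocess

    IFACE = "eth0"  # placeholder interface name

    # Enable hardware checksum, segmentation, and receive offloads so the CPU does
    # not have to compute checksums or split/merge segments per packet.
    subprocess.run(["ethtool", "-K", IFACE, "tx", "on", "rx", "on",
                    "tso", "on", "gro", "on"], check=True)

    # Turn on ntuple filters so flows can be steered to specific RX queues in hardware.
    subprocess.run(["ethtool", "-K", IFACE, "ntuple", "on"], check=True)

    # Example steering rule: pin TCP traffic destined to port 5201 to RX queue 3.
    subprocess.run(["ethtool", "-N", IFACE, "flow-type", "tcp4",
                    "dst-port", "5201", "action", "3"], check=True)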

    $656.00

  • Mellanox ConnectX-4 Lx EN 25 Gigabit Ethernet Card - PCI Express 3.0 x8 - 1 Port(s) - Optical Fiber - 25GBase-X - Plug-in Card

    Mellanox MCX4111A-ACUT ConnectX-4 Lx EN Network Interface Card, 25GbE Single-Port SFP28, PCIe 3.0 x8, UEFI Enabled, Tall Bracket

    1/10/25/40/50 Gigabit Ethernet adapter cards supporting RDMA, Overlay Network Encapsulation/Decapsulation, and more

    The ConnectX-4 Lx EN Network Controller with 1/10/25/40/50Gb/s Ethernet connectivity addresses virtualized infrastructure challenges for today's demanding markets and applications. Providing true hardware-based I/O isolation with unmatched scalability and efficiency, ConnectX-4 Lx provides a very cost-effective and flexible Ethernet adapter solution for Web 2.0, Cloud, data analytics, database, and storage platforms.

    With the exponential increase in data usage and the creation of new applications, the demand for the highest throughput, lowest latency, virtualization, and sophisticated data acceleration engines continues to rise. ConnectX-4 Lx EN enables data centers to leverage the world's leading interconnect adapter to increase operational efficiency, improve server utilization, and maximize application productivity, while reducing total cost of ownership (TCO).

    ConnectX-4 Lx EN provides an unmatched combination of 1, 10, 25, 40, and 50GbE bandwidth, sub-microsecond latency, and a 75 million packets per second message rate. It includes native hardware support for RDMA over Converged Ethernet, Ethernet stateless offload engines, Overlay Networks, and GPUDirect Technology.

    $342.00

  • Mellanox ConnectX-4 MCX4131A-GCAT 50 Gigabit Ethernet Card - PCI Express 3.0 x8 - 1 Port(s) - Optical Fiber - Plug-in Card

    The ConnectX-4 Lx EN Network Controller with 10/25/40/50Gb/s Ethernet connectivity addresses virtualized infrastructure challenges, delivering best-in-class performance to various demanding markets and applications. Providing true hardware-based I/O isolation with unmatched scalability and efficiency, it achieves the most cost-effective and flexible solution for Web 2.0, Cloud, data analytics, database, and storage platforms.

    With the exponential increase in data usage and the creation of new applications, the demand for the highest throughput, lowest latency, virtualization, and sophisticated data acceleration engines continues to rise. ConnectX-4 Lx EN enables data centers to leverage the world's leading interconnect adapter to increase operational efficiency, improve server utilization, and maximize application productivity, while reducing total cost of ownership (TCO).

    ConnectX-4 Lx EN provides an unmatched combination of 10, 25, 40, and 50GbE bandwidth, sub-microsecond latency, and a 75 million packets per second message rate. It includes native hardware support for RDMA over Converged Ethernet, Ethernet stateless offload engines, Overlay Networks, and GPUDirect® Technology.

    High Speed Ethernet Adapter

    ConnectX-4 Lx offers the most cost-effective Ethernet adapter solution for 10, 25, 40, and 50Gb/s Ethernet speeds, enabling seamless networking, clustering, or storage. The adapter reduces application runtime and offers the flexibility and scalability to make infrastructure run as efficiently and productively as possible.

    I/O Virtualization

    ConnectX-4 Lx EN SR-IOV technology provides dedicated adapter resources and guaranteed isolation and protection for virtual machines (VMs) within the server. I/O virtualization with ConnectX-4 Lx EN gives data center administrators better server utilization while reducing cost, power, and cable complexity, allowing more virtual machines and more tenants on the same hardware.
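
    The sketch below illustrates the generic Linux sysfs workflow for exposing SR-IOV Virtual Functions; it is not Mellanox-specific tooling, and the interface name and VF count are assumptions made for the example (root privileges and SR-IOV-enabled firmware are also assumed).

    from pathlib import Path

    PF_IFACE = "eth0"   # physical function's netdev name (placeholder)
    NUM_VFS = 4         # number of virtual functions to expose

    sriov_numvfs = Path(f"/sys/class/net/{PF_IFACE}/device/sriov_numvfs")
    sriov_totalvfs = Path(f"/sys/class/net/{PF_IFACE}/device/sriov_totalvfs")

    print("Max VFs supported:", sriov_totalvfs.read_text().strip())

    # The kernel requires resetting the VF count to 0 before changing it.
    sriov_numvfs.write_text("0")
    sriov_numvfs.write_text(str(NUM_VFS))

    # Each VF now appears as its own PCIe function that can be passed through to a
    # VM (e.g. via vfio-pci), giving the guest dedicated, isolated adapter resources.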

    Overlay Networks

    In order to better scale their networks, data center operators often create overlay networks that carry traffic from individual virtual machines over logical tunnels in encapsulated formats such as NVGRE and VXLAN. While this solves network scalability issues, it hides the TCP packet from the hardware offloading engines, placing higher loads on the host CPU. ConnectX-4 Lx EN effectively addresses this by providing advanced NVGRE, VXLAN and GENEVE hardware offloading engines that encapsulate and de-capsulate the overlay protocol headers, enabling the traditional offloads to be performed on the encapsulated traffic for these and other tunneling protocols (GENEVE, MPLS, QinQ, and so on). With ConnectX-4 Lx EN, data center operators can achieve native performance in the new network architecture.

    RDMA over Converged Ethernet (RoCE)

    ConnectX-4 Lx EN supports the RoCE specifications, delivering low-latency, high-performance RDMA over Ethernet networks. Leveraging data center bridging (DCB) capabilities as well as ConnectX-4 Lx EN's advanced congestion control hardware mechanisms, RoCE provides efficient low-latency RDMA services over Layer 2 and Layer 3 networks.
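
    As a hedged illustration of how RoCE surfaces on a Linux host, the sketch below lists the RDMA devices and the GIDs (which map the Ethernet port's IP addresses into RDMA addressing) exposed under sysfs; it assumes the adapter's RDMA driver is loaded and uses only standard kernel paths.

    from pathlib import Path

    RDMA_ROOT = Path("/sys/class/infiniband")  # present when an RDMA-capable driver is loaded

    if RDMA_ROOT.exists():
        for dev in sorted(RDMA_ROOT.iterdir()):
            print("RDMA device:", dev.name)
            for port in sorted((dev / "ports").iterdir()):
                state = (port / "state").read_text().strip()
                print(f"  port {port.name}: {state}")
                # Non-zero GID entries correspond to RoCE addresses derived from
                # the port's IP/MAC configuration.
                for gid_file in sorted((port / "gids").iterdir(), key=lambda p: int(p.name))[:4]:
                    gid = gid_file.read_text().strip()
                    if gid != "0000:0000:0000:0000:0000:0000:0000:0000":
                        print(f"    gid[{gid_file.name}] = {gid}")
    else:
        print("No RDMA devices found; is the RDMA driver loaded?")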

    Mellanox PeerDirect™

    PeerDirect™ communication provides high efficiency RDMA access by eliminating unnecessary internal data copies between components on the PCIe bus (for example, from GPU to CPU), and therefore significantly reduces application run time. ConnectX-4 Lx EN advanced acceleration technology enables higher cluster efficiency and scalability to tens of thousands of nodes.

    $856.00

  • NVIDIA ConnectX-4 Lx EN 25 Gigabit Ethernet Card - PCI Express 3.0 x8 - 200 Gbit/s Data Transfer Rate - NVIDIA - 2 Port(s) - Optical Fiber - OCP 3.0 Bracket Height - 25GBase-R - SFP28 - Plug-in Card

    $424.00

  • Mellanox ConnectX-5 EN Card - PCI Express 3.0 x16 - 1 Port(s) - Optical Fiber - 100GBase-X - Plug-in Card

    Intelligent RDMA-enabled, single and dual-port network adapter with advanced application offload capabilities for Web 2.0, Cloud, Storage, and Telco platforms

    ConnectX-5 Ethernet adapter cards provide high performance and flexible solutions with up to two ports of 100GbE connectivity, 750ns latency, up to 200 million messages per second, and a record-setting 197 Mpps when running the open-source Data Plane Development Kit (DPDK) over PCIe Gen 4.0. For storage workloads, ConnectX-5 delivers a range of innovative accelerations, such as Signature Handover (T10-DIF) in hardware, an embedded PCIe switch, and NVMe over Fabrics target offloads. ConnectX-5 adapter cards also bring advanced Open vSwitch offloads to telecommunications and cloud data centers to drive extremely high packet rates and throughput with reduced CPU resource consumption, thus boosting data center infrastructure efficiency.

    ConnectX-5 adapter cards are available for PCIe Gen 3.0 and Gen 4.0 servers and provide support for 1, 10, 25, 40, 50 and 100 GbE speeds in stand-up PCIe cards, OCP 2.0, and OCP 3.0 form factors. ConnectX-5 cards also offer advanced Mellanox Multi-Host® and Mellanox Socket Direct® technologies.

    Cloud and Web 2.0 Environments

    ConnectX-5 adapter cards enable data center administrators to benefit from better server utilization and reduced costs, power usage, and cable complexity, allowing for more virtual appliances, virtual machines (VMs) and tenants to co-exist on the same hardware.

    Supported vSwitch/vRouter offload functions include:

    • Overlay Networks (e.g., VXLAN, NVGRE, MPLS, GENEVE, and NSH) header encapsulation & decapsulation.
    • Stateless offloads of inner packets and packet headers' re-write, enabling NAT functionality and more.
    • Flexible and programmable parser and match-action tables, which enable hardware offloads for future protocols.
    • SR-IOV technology, providing dedicated adapter resources, guaranteed isolation and protection for virtual machines (VMs) within the server.
    • Network Function Virtualization (NFV), enabling a VM to be used as a virtual appliance. The full datapath operation offloads, hairpin hardware capability, and service chaining enable data to be handled by the virtual appliance with minimum CPU utilization.

    Cloud and Web 2.0 customers developing platforms on Software Defined Network (SDN) environments are leveraging their servers' operating system virtual-switching capabilities to achieve maximum flexibility. Open vSwitch (OvS) is an example of a virtual switch that allows virtual machines to communicate with each other and with the outside world. Traditionally residing in the hypervisor, where switching is based on twelve-tuple matching on flows, the virtual switch or virtual router software-based solution is CPU-intensive. This can negatively affect system performance and prevent the full utilization of available bandwidth.

    Mellanox ASAP² - Accelerated Switching and Packet Processing® technology enables offloading the vSwitch/vRouter by handling the data plane in the NIC hardware, without modifying the control plane. This results in significantly higher vSwitch/vRouter performance without the associated CPU load.
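
    The sketch below shows one generic way such vSwitch offloading is commonly enabled on Linux with Open vSwitch and TC flower offload; it is an illustrative assumption of a typical setup rather than vendor instructions, and the uplink interface name is a placeholder.

    import subprocess

    PF_IFACE = "eth0"  # uplink port attached to the OvS bridge (placeholder)

    # Allow TC flower classifier rules to be offloaded by the NIC driver.
    subprocess.run(["ethtool", "-K", PF_IFACE, "hw-tc-offload", "on"], check=True)

    # Ask Open vSwitch to push datapath flows down to hardware; restarting
    # ovs-vswitchd is typically required for the setting to take effect.
    subprocess.run(["ovs-vsctl", "set", "Open_vSwitch", ".",
                    "other_config:hw-offload=true"], check=True)

    # Offloaded flows can then be inspected with:
    #   ovs-appctl dpctl/dump-flows type=offloaded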

    Additionally, ConnectX-5's intelligent, flexible pipeline capabilities, including a programmable parser and programmable match-action tables, enable hardware offloads for future protocols.

    $1,446.00
