NVMe over Fabrics: Fibre Channel vs. RDMA

August 15, 2018 by cbn

For enterprises deploying NVMe over Fabric, choosing between Fibre Channel and RDMA can be difficult, because both have advantages and disadvantages.

In the last few years, enterprises have grown hungrier for infrastructure that provides high throughput, low latency, and greater performance for hosted applications. Faster networking with high-speed Ethernet, Fibre Channel, and InfiniBand offers end-to-end speeds ranging from 10 Gb/s to 128 Gb/s.

Enterprises are also starting to realize the performance and latency benefits offered by the NVMe protocol with storage arrays featuring high-speed NAND flash and next-generation SSDs.

But a latency bottleneck has arisen in the implementation of shared storage or storage area networking where data needs to be transferred between the host (initiator) and the NVMe-enabled storage array (target) over Ethernet, RDMA technologies (iWARP/RoCE), or Fibre Channel.

The NVMe bottleneck

Latency rises when SCSI commands transported over Fibre Channel must be interpreted and translated into NVMe commands.

NVMe over Fabrics (NVMe-oF) is a network protocol introduced by NVM Express to address this bottleneck. NVMe-oF offers an alternative to SCSI-based storage networking protocols such as iSCSI, allowing enterprises to realize the full benefits of NVMe-enabled storage arrays. It acts as a messaging layer between the host computer and target SSDs or a shared storage system, carried over ultra-high-speed RDMA or Fibre Channel fabrics.

NVMe-oF supports several transport technologies: RDMA (RoCE and iWARP), Fibre Channel (FC-NVMe), InfiniBand, and Intel Omni-Path Architecture, with the specification leaving room for future fabrics.
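In practice, an NVMe-oF connection is usually established on Linux with the `nvme-cli` tool, where the transport is selected with the `-t` flag. A minimal sketch of how such command lines differ per transport; the addresses, WWN pair, and NQN below are hypothetical placeholders, while the flags (`-t`, `-a`, `-s`, `-n`) are standard `nvme connect` options:

```python
# Sketch: assembling 'nvme connect' invocations for different NVMe-oF
# transports. The target address, WWNN/WWPN pair, and NQN are
# hypothetical placeholders chosen for illustration.

def build_connect_cmd(transport, address, nqn, svcid=None):
    """Assemble an 'nvme connect' command line for a given transport."""
    cmd = ["nvme", "connect", "-t", transport, "-a", address, "-n", nqn]
    if svcid is not None:  # RDMA/TCP targets use an IP service id (port)
        cmd += ["-s", str(svcid)]
    return " ".join(cmd)

# RDMA (RoCE/iWARP) target, conventional NVMe-oF port 4420:
print(build_connect_cmd("rdma", "192.168.1.10",
                        "nqn.2018-08.com.example:nvme:array1", 4420))

# Fibre Channel target, addressed by a WWNN/WWPN pair:
print(build_connect_cmd("fc", "nn-0x20000090fa942779:pn-0x10000090fa942779",
                        "nqn.2018-08.com.example:nvme:array1"))
```

Note how the FC form carries no service id: Fibre Channel addressing is done entirely through the node/port world-wide names.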

In addition, NVMe-oF allows separation of control traffic and data traffic, which simplifies traffic management. It also takes advantage of the internal parallelism of storage devices and lowers I/O overhead, enhancing overall data-access performance and reducing latency.
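The scale of that internal parallelism can be made concrete by comparing the queue limits in the NVMe specification with a legacy interface (spec-level maximums, not what any single device actually exposes; AHCI/SATA figures are shown only for contrast):

```python
# Spec-level queue limits: NVMe allows up to 65,535 I/O queue pairs
# with up to 65,536 commands each, while legacy AHCI/SATA exposes a
# single queue of 32 commands. Real devices expose far fewer NVMe
# queues, but the headroom for parallel per-CPU-core queues is the point.

NVME_MAX_QUEUES = 65_535
NVME_QUEUE_DEPTH = 65_536
AHCI_QUEUES = 1
AHCI_QUEUE_DEPTH = 32

nvme_outstanding = NVME_MAX_QUEUES * NVME_QUEUE_DEPTH
ahci_outstanding = AHCI_QUEUES * AHCI_QUEUE_DEPTH

print(f"NVMe max outstanding commands: {nvme_outstanding:,}")
print(f"AHCI max outstanding commands: {ahci_outstanding:,}")
```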

NVMe-oF offers a performance boost to enterprises deploying machine learning, big data, and Internet of Things (IoT) analytics applications, which demand real-time access to stored data regardless of distance.

Performance evaluation of NVMe-oF over Fibre Channel and RDMA

Recent conferences have sparked debate about which transport delivers the best performance under the NVMe-oF protocol. Some vendors firmly believe that RDMA is the better option for higher throughput, while many others stick with Fibre Channel for its performance advantages.

Both network fabric technologies have their own benefits and pitfalls.

NVMe over Fabrics using Fibre Channel

NVMe over Fibre Channel relies on two standards: NVMe-oF and FC-NVMe. NVMe-oF is the protocol defined by the NVM Express organization for transporting NVMe traffic over a network fabric, and FC-NVMe is the Fibre Channel-specific transport standard. Together they make NVMe over Fibre Channel possible. A majority of enterprises already use Fibre Channel technology to move their critical data to and from storage arrays.

Fibre Channel was purpose-built for storage devices and systems, and it is the de facto standard for enterprise storage area network (SAN) solutions. The main advantage of Fibre Channel is that it carries traffic for the traditional SCSI storage protocol and the new NVMe protocol concurrently, using the same hardware resources in the storage infrastructure. This co-existence of SCSI and NVMe on Fibre Channel benefits most enterprises, because they can enable NVMe operations with a simple software upgrade.

In March 2018, NVM Express added a new feature called Asymmetric Namespace Access (ANA) to the NVMe-oF protocol. This allows multi-path I/O support among multiple hosts and namespaces.
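On Linux, ANA-aware path selection builds on the kernel's native NVMe multipath support, which is exposed as a module parameter. A small hedged check, assuming a Linux host with the `nvme_core` module loaded (the sysfs path below is the standard parameter location, but it is absent on other systems, in which case the function returns `None`):

```python
# Check whether the Linux kernel's native NVMe multipath support is
# enabled, which ANA-aware path handling relies on. Returns None when
# the nvme_core module parameter is not present (non-Linux host, or
# the NVMe driver is not loaded).
from pathlib import Path

def nvme_native_multipath():
    param = Path("/sys/module/nvme_core/parameters/multipath")
    if not param.exists():
        return None
    return param.read_text().strip() == "Y"

state = nvme_native_multipath()
if state is None:
    print("nvme_core not loaded; cannot determine multipath support")
else:
    print(f"native NVMe multipath enabled: {state}")
```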

Gen 5 and Gen 6 are the newer generations of Fibre Channel. Gen 6 supports transfer speeds up to 128 Gb/s, among the highest in storage networking. Additionally, Gen 6 adds monitoring and diagnostics capabilities that give visibility into latency levels and IOPS. NVMe-oF integrates seamlessly with both generations of the Fibre Channel protocol.

According to a Demartek report, NVMe over Fibre Channel delivers 58% higher IOPS and 34% lower latency than the SCSI-based Fibre Channel protocol. Large enterprises favor FC-NVMe for critical workloads due to its simplicity, reliability, predictability, and performance.
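Those relative figures translate into absolute numbers as follows. The SCSI Fibre Channel baseline values here are hypothetical, chosen only to illustrate the percentages; only the 58% and 34% deltas come from the report:

```python
# Illustrating the Demartek-reported deltas: 58% higher IOPS and 34%
# lower latency than SCSI-based Fibre Channel. Baseline values are
# hypothetical; only the percentage changes come from the report.

scsi_fc_iops = 500_000       # hypothetical SCSI FC baseline IOPS
scsi_fc_latency_us = 200.0   # hypothetical baseline latency (microseconds)

fc_nvme_iops = scsi_fc_iops * 1.58          # +58% IOPS
fc_nvme_latency_us = scsi_fc_latency_us * (1 - 0.34)  # -34% latency

print(f"FC-NVMe IOPS:    {fc_nvme_iops:,.0f}")
print(f"FC-NVMe latency: {fc_nvme_latency_us:.0f} us")
```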

However, this implementation requires more expertise at the storage networking level, which may add costs.

NVMe over Fabrics using RDMA

RDMA offers an alternative to Fibre Channel. According to WhatIs.com, “Remote Direct Memory Access (RDMA) is a technology that allows computers in a network to exchange data in main memory without involving the processor, cache, or operating system of either computer.”

In other words, RDMA allows applications to bypass the software stack that normally processes network traffic. Because RDMA transfers consume fewer host resources, RDMA helps enterprises achieve higher throughput and better performance with lower latency. With RDMA, remote NVMe-enabled storage devices appear to the host almost as if they were local.

RDMA can be enabled in storage networking with protocols such as RoCE (RDMA over Converged Ethernet), iWARP (Internet Wide Area RDMA Protocol), and InfiniBand.

iWARP is, roughly, RDMA over TCP/IP. It uses TCP or the Stream Control Transmission Protocol (SCTP) for data transmission.

RoCE enables RDMA over Ethernet and is often described as InfiniBand over Ethernet. There are two versions, RoCE v1 and RoCE v2, which are incompatible with each other because they use different transport mechanisms: v1 is a link-layer Ethernet protocol, while v2 runs over UDP/IP and is routable.

InfiniBand is supported mainly by vendors offering high-performance computing solutions. It is the fastest RDMA storage networking technology, with data transfer speeds around 100 Gb/s, compared with the up to 128 Gb/s offered by Gen 6 FC-NVMe. Like FC-NVMe, InfiniBand is a lossless transmission protocol, providing quality-of-service (QoS) mechanisms along with credit-based flow control.

Some vendors consider RDMA highly compatible with NVMe use cases because both use the same paired-queue structure. Commands require no encapsulation or translation, and data moves between host and storage without CPU intervention. RDMA thereby saves CPU cycles, which lowers latency in data transmission from hosts to storage devices.
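That shared model can be sketched as paired submission/completion queues, the pattern both NVMe and RDMA are built around. A toy model, not driver code:

```python
# Toy model of the submission/completion queue-pair pattern shared by
# NVMe and RDMA. The host posts commands to a submission queue; the
# device consumes them and posts results to the paired completion
# queue. No SCSI-style encapsulation/translation step is involved.
from collections import deque

class QueuePair:
    def __init__(self):
        self.submission = deque()
        self.completion = deque()

    def post(self, command):
        """Host side: place a command on the submission queue."""
        self.submission.append(command)

    def process(self):
        """Device side: consume commands, post completions."""
        while self.submission:
            cmd = self.submission.popleft()
            self.completion.append((cmd, "OK"))

    def reap(self):
        """Host side: drain the completion queue."""
        done = list(self.completion)
        self.completion.clear()
        return done

qp = QueuePair()
for lba in (0, 8, 16):
    qp.post(("READ", lba))
qp.process()
print(qp.reap())
```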

Key differentiators

  • With Fibre Channel, enterprises can preserve their existing hardware investment while taking full advantage of an NVMe-enabled storage infrastructure. NVMe-oF implementations based on InfiniBand or RDMA over Ethernet (iWARP or RoCE) often require new hardware.
  • Fibre Channel fabric has a buffer-to-buffer credit flow-control feature that assures quality of service (QoS) by providing lossless network traffic. RDMA over Ethernet (iWARP and RoCE) requires additional protocol support to achieve this.
  • Compared with other network fabric options, Fibre Channel requires less configuration to initiate network traffic.
  • Fibre Channel fabric can automatically discover and add host initiators, target storage devices, and their properties. RDMA over Ethernet (iWARP and RoCE) and InfiniBand lack this built-in capability.
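The buffer-to-buffer credit mechanism can be illustrated with a toy sender model: a port may transmit only while it holds credits, and each R_RDY returned by the receiver restores one. This is a simplification of the actual FC mechanism, meant only to show why the fabric is lossless:

```python
# Toy model of Fibre Channel buffer-to-buffer credit flow control: the
# transmitter starts with N credits, spends one per frame sent, and
# stalls (rather than dropping frames) at zero credits until the
# receiver returns an R_RDY.

class CreditedLink:
    def __init__(self, credits):
        self.credits = credits
        self.sent = 0

    def send_frame(self):
        """Send only when a credit is available; otherwise stall."""
        if self.credits == 0:
            return False  # transmitter must wait; no frame is dropped
        self.credits -= 1
        self.sent += 1
        return True

    def r_rdy(self):
        """Receiver freed a buffer and returned one credit."""
        self.credits += 1

link = CreditedLink(credits=2)
results = [link.send_frame() for _ in range(3)]  # third send stalls
link.r_rdy()                                     # credit returned
results.append(link.send_frame())
print(results)  # [True, True, False, True]
```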

Summary

According to a 2016 NVMe ecosystem market-sizing report published by G2M Research, the NVMe market will be worth more than $57 billion by 2020, and more than 50% of enterprise servers will be NVMe-enabled by then.

NVMe over Fabrics extends the NVMe performance boost across a network, enabling efficient, reliable, and agile storage networks for advanced use cases such as artificial intelligence/machine learning, IoT, real-time analytics, and mission-critical applications.

Enterprises must weigh their investment capacity against the different NVMe-oF implementations. RDMA offers advantages suited to advanced use cases that need real-time access to storage, while enterprises already on Fibre Channel can leverage FC-NVMe by transitioning to Gen 6, which offers the highest data transfer speeds with low latency.

In the coming years, NVMe integration will be crucial for enterprises transitioning their IT infrastructure for digital transformation.
