.. SPDX-License-Identifier: GPL-2.0
====================
Linux NVMe multipath
====================

This document describes NVMe multipath and its path selection policies supported
by the Linux NVMe host driver.
Introduction
============

The NVMe multipath feature in Linux integrates namespaces with the same
identifier into a single block device. Using multipath enhances the reliability
and stability of I/O access while improving bandwidth performance. When a user
sends I/O to this merged block device, the multipath mechanism selects one of
the underlying block devices (paths) according to the configured policy.
Different policies result in different path selections.
Policies
========

All policies follow the ANA (Asymmetric Namespace Access) mechanism, meaning
that when an optimized path is available, it will be chosen over a non-optimized
one. The current NVMe multipath policies are numa (default), round-robin, and
queue-depth.
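The ANA preference described above can be sketched as follows. This is an
illustrative Python sketch, not driver code; the path names, the ``ana_state``
field, and the ``usable_paths`` helper are all hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Path:
    name: str
    ana_state: str  # "optimized", "non-optimized", or "inaccessible"

def usable_paths(paths):
    """Prefer optimized paths; fall back to non-optimized ones.

    Only after this ANA filtering does the configured iopolicy
    choose among the remaining candidates.
    """
    optimized = [p for p in paths if p.ana_state == "optimized"]
    if optimized:
        return optimized
    return [p for p in paths if p.ana_state == "non-optimized"]

paths = [
    Path("nvme0c0n1", "non-optimized"),
    Path("nvme0c1n1", "optimized"),
    Path("nvme0c2n1", "inaccessible"),
]
print([p.name for p in usable_paths(paths)])  # → ['nvme0c1n1']
```

Inaccessible paths are never candidates; non-optimized paths are used only when
no optimized path exists.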
To set the desired policy (e.g., round-robin), use one of the following methods:

1. echo -n "round-robin" > /sys/module/nvme_core/parameters/iopolicy
2. add "nvme_core.iopolicy=round-robin" to the kernel command line.
NUMA
----

The NUMA policy selects the path closest to the NUMA node of the current CPU for
I/O distribution. This policy maintains the nearest paths to each NUMA node
based on network interface connections.
When to use the NUMA policy:

1. Multi-core Systems: Optimizes memory access in multi-core and
   multi-processor systems, especially under NUMA architecture.
2. High Affinity Workloads: Binds I/O processing to the CPU to reduce
   communication and data transfer delays across nodes.
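The selection idea can be sketched as follows. This is an illustrative Python
sketch, not driver code: the path-to-node mapping, the distance table, and the
``numa_select`` helper are made up for the example. The real driver caches the
nearest path per NUMA node rather than recomputing it per I/O:

```python
def numa_select(paths, node_distance, current_node):
    """Pick the path whose home node is closest to the submitting CPU's node.

    paths: {path_name: NUMA node the path's interface is attached to}
    node_distance: {(node_a, node_b): relative access cost}
    """
    return min(paths, key=lambda p: node_distance[(current_node, paths[p])])

# Two paths, each attached to a different NUMA node; SLIT-style costs.
paths = {"nvme0c0n1": 0, "nvme0c1n1": 1}
node_distance = {(0, 0): 10, (0, 1): 21, (1, 0): 21, (1, 1): 10}
print(numa_select(paths, node_distance, current_node=1))  # → nvme0c1n1
```

A CPU on node 1 gets the node-1 path at cost 10 instead of crossing the
interconnect at cost 21.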
Round-Robin
-----------

The round-robin policy distributes I/O requests evenly across all paths to
enhance throughput and resource utilization. Each I/O operation is sent to the
next path in sequence.
When to use the round-robin policy:

1. Balanced Workloads: Effective for balanced and predictable workloads with
   similar I/O size and type.
2. Homogeneous Path Performance: Utilizes all paths efficiently when
   performance characteristics (e.g., latency, bandwidth) are similar.
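The rotation itself is simple; a minimal Python sketch (not driver code, with
hypothetical path names) shows each I/O taking the next path in sequence and
wrapping around:

```python
from itertools import cycle

class RoundRobin:
    """Hand out paths one after another, wrapping back to the first."""
    def __init__(self, paths):
        self._next = cycle(paths)

    def select(self):
        return next(self._next)

rr = RoundRobin(["nvme0c0n1", "nvme0c1n1", "nvme0c2n1"])
print([rr.select() for _ in range(5)])
# → ['nvme0c0n1', 'nvme0c1n1', 'nvme0c2n1', 'nvme0c0n1', 'nvme0c1n1']
```

Because the rotation ignores path load, it works best when all paths really do
perform alike, as the list above notes.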
Queue-Depth
-----------

The queue-depth policy manages I/O requests based on the current queue depth
of each path, selecting the path with the fewest in-flight I/Os.
When to use the queue-depth policy:

1. High load with small I/Os: Effectively balances load across paths when
   the load is high, and I/O operations consist of small, relatively
   uniform I/Os.
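The least-loaded selection can be sketched as follows. This is an illustrative
Python sketch, not driver code; the per-path counters and the
``queue_depth_select`` helper are invented for the example:

```python
def queue_depth_select(inflight):
    """Pick the path with the fewest outstanding I/Os.

    inflight: {path_name: number of in-flight I/Os on that path}
    """
    return min(inflight, key=inflight.get)

inflight = {"nvme0c0n1": 12, "nvme0c1n1": 3, "nvme0c2n1": 7}
path = queue_depth_select(inflight)
inflight[path] += 1  # account for the newly dispatched I/O
print(path)  # → nvme0c1n1
```

Unlike round-robin, this adapts automatically when one path slows down: its
in-flight count grows, so new I/Os drain toward the faster paths.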