			LIBNVDIMM: Non-Volatile Devices
	  libnvdimm - kernel / libndctl - userspace helper library
			linux-nvdimm@lists.01.org

	LIBNVDIMM PMEM and BLK
	    BLK-REGIONs, PMEM-REGIONs, Atomic Sectors, and DAX
	    Example NVDIMM Platform
	LIBNVDIMM Kernel Device Model and LIBNDCTL Userspace API
	    libndctl: instantiate a new library context example
	    LIBNVDIMM/LIBNDCTL: Bus
		libnvdimm: control class device in /sys/class
		libndctl: bus enumeration example
	    LIBNVDIMM/LIBNDCTL: DIMM (NMEM)
		libnvdimm: DIMM (NMEM)
		libndctl: DIMM enumeration example
	    LIBNVDIMM/LIBNDCTL: Region
		libndctl: region enumeration example
		Why Not Encode the Region Type into the Region Name?
		How Do I Determine the Major Type of a Region?
	    LIBNVDIMM/LIBNDCTL: Namespace
		libndctl: namespace enumeration example
		libndctl: namespace creation example
		Why the Term "namespace"?
	    LIBNVDIMM/LIBNDCTL: Block Translation Table "btt"
		libndctl: btt creation example
	    Summary LIBNDCTL Diagram
Glossary
--------

PMEM: A system-physical-address range where writes are persistent. A
46 block device composed of PMEM is capable of DAX. A PMEM address range
47 may span an interleave of several DIMMs.
BLK: A set of one or more programmable memory mapped apertures provided
by a DIMM to access its media. This indirection precludes the
performance benefit of interleaving, but enables DIMM-bounded failure
modes.
DPA: DIMM Physical Address, a DIMM-relative offset. With one DIMM in
55 the system there would be a 1:1 system-physical-address:DPA association.
56 Once more DIMMs are added a memory controller interleave must be
57 decoded to determine the DPA associated with a given
58 system-physical-address. BLK capacity always has a 1:1 relationship
59 with a single-DIMM's DPA range.
61 DAX: File system extensions to bypass the page cache and block layer to
62 mmap persistent memory, from a PMEM block device, directly into a
63 process address space.
65 BTT: Block Translation Table: Persistent memory is byte addressable.
66 Existing software may have an expectation that the power-fail-atomicity
67 of writes is at least one sector, 512 bytes. The BTT is an indirection
68 table with atomic update semantics to front a PMEM/BLK block device
69 driver and present arbitrary atomic sector sizes.
71 LABEL: Metadata stored on a DIMM device that partitions and identifies
72 (persistently names) storage between PMEM and BLK. It also partitions
73 BLK storage to host BTTs with different parameters per BLK-partition.
Note that traditional partition tables, GPT/MBR, are layered on top of a
BLK or PMEM device.
Overview
--------

The LIBNVDIMM subsystem provides support for three types of NVDIMMs, namely,
82 PMEM, BLK, and NVDIMM devices that can simultaneously support both PMEM
83 and BLK mode access. These three modes of operation are described by
84 the "NVDIMM Firmware Interface Table" (NFIT) in ACPI 6. While the LIBNVDIMM
implementation is generic and supports pre-NFIT platforms, it was guided
by the superset of capabilities needed to support this ACPI 6 definition
for NVDIMM resources. The bulk of the kernel implementation is in place
88 to handle the case where DPA accessible via PMEM is aliased with DPA
89 accessible via BLK. When that occurs a LABEL is needed to reserve DPA
for exclusive access via one mode at a time.
Supporting Documents
ACPI 6: http://www.uefi.org/sites/default/files/resources/ACPI_6.0.pdf
94 NVDIMM Namespace: http://pmem.io/documents/NVDIMM_Namespace_Spec.pdf
95 DSM Interface Example: http://pmem.io/documents/NVDIMM_DSM_Interface_Example.pdf
96 Driver Writer's Guide: http://pmem.io/documents/NVDIMM_Driver_Writers_Guide.pdf
Git Trees
LIBNVDIMM: https://git.kernel.org/cgit/linux/kernel/git/djbw/nvdimm.git
100 LIBNDCTL: https://github.com/pmem/ndctl.git
101 PMEM: https://github.com/01org/prd
LIBNVDIMM PMEM and BLK
----------------------
107 Prior to the arrival of the NFIT, non-volatile memory was described to a
108 system in various ad-hoc ways. Usually only the bare minimum was
109 provided, namely, a single system-physical-address range where writes
110 are expected to be durable after a system power loss. Now, the NFIT
111 specification standardizes not only the description of PMEM, but also
BLK and platform message-passing entry points for control and
configuration.

For each NVDIMM access method (PMEM, BLK), LIBNVDIMM provides a block
device driver:
118 1. PMEM (nd_pmem.ko): Drives a system-physical-address range. This
119 range is contiguous in system memory and may be interleaved (hardware
120 memory controller striped) across multiple DIMMs. When interleaved the
platform may optionally provide details of which DIMMs are participating
in the interleave.
124 Note that while LIBNVDIMM describes system-physical-address ranges that may
125 alias with BLK access as ND_NAMESPACE_PMEM ranges and those without
126 alias as ND_NAMESPACE_IO ranges, to the nd_pmem driver there is no
127 distinction. The different device-types are an implementation detail
128 that userspace can exploit to implement policies like "only interface
129 with address ranges from certain DIMMs". It is worth noting that when
130 aliasing is present and a DIMM lacks a label, then no block device can
131 be created by default as userspace needs to do at least one allocation
132 of DPA to the PMEM range. In contrast ND_NAMESPACE_IO ranges, once
registered, can be immediately attached to nd_pmem. (A short sketch
keying off these namespace types follows the driver list below.)
135 2. BLK (nd_blk.ko): This driver performs I/O using a set of platform
136 defined apertures. A set of apertures will all access just one DIMM.
137 Multiple windows allow multiple concurrent accesses, much like
tagged-command-queuing, and would likely be used by different threads or
different CPUs.
141 The NFIT specification defines a standard format for a BLK-aperture, but
142 the spec also allows for vendor specific layouts, and non-NFIT BLK
implementations may have other designs for BLK I/O. For this reason "nd_blk"
144 calls back into platform-specific code to perform the I/O. One such
implementation is defined in the "Driver Writer's Guide" and "DSM
Interface Example".
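Picking up the note from the PMEM driver description above, here is a
minimal sketch of a policy that keys off the namespace type. It assumes
libndctl's ndctl_namespace_get_type() accessor and the
ND_DEVICE_NAMESPACE_* constants from /usr/include/linux/ndctl.h; the
helper name is made up for this example:

	static struct ndctl_namespace *first_label_backed_pmem(
			struct ndctl_region *region)
	{
		struct ndctl_namespace *ndns;

		/* skip label-less ND_DEVICE_NAMESPACE_IO ranges */
		ndctl_namespace_foreach(region, ndns)
			if (ndctl_namespace_get_type(ndns) == ND_DEVICE_NAMESPACE_PMEM)
				return ndns;

		return NULL;
	}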
Why BLK?
--------

While PMEM provides direct byte-addressable CPU-load/store access to
NVDIMM storage, it does not provide the best system RAS (recovery,
availability, and serviceability) model. An access to a corrupted
system-physical-address causes a CPU exception, while an access to a
corrupted address through a BLK-aperture causes that block window to
raise an error status in a register. The latter is more aligned with
the standard error model that host-bus-adapter attached disks present.
Also, if an administrator ever wants to replace memory it is easier to
service a system at DIMM module boundaries. Compare this to PMEM where
data could be interleaved in an opaque hardware specific manner across
several DIMMs.
PMEM vs BLK
-----------

BLK-apertures solve this RAS problem, but their presence is also the
166 major contributing factor to the complexity of the ND subsystem. They
167 complicate the implementation because PMEM and BLK alias in DPA space.
168 Any given DIMM's DPA-range may contribute to one or more
169 system-physical-address sets of interleaved DIMMs, *and* may also be
170 accessed in its entirety through its BLK-aperture. Accessing a DPA
171 through a system-physical-address while simultaneously accessing the
172 same DPA through a BLK-aperture has undefined results. For this reason,
173 DIMMs with this dual interface configuration include a DSM function to
174 store/retrieve a LABEL. The LABEL effectively partitions the DPA-space
175 into exclusive system-physical-address and BLK-aperture accessible
176 regions. For simplicity a DIMM is allowed a PMEM "region" per each
177 interleave set in which it is a member. The remaining DPA space can be
carved into an arbitrary number of BLK devices with discontiguous
extents.
181 BLK-REGIONs, PMEM-REGIONs, Atomic Sectors, and DAX
182 --------------------------------------------------
One of the few reasons to allow multiple BLK namespaces per REGION is so that each
186 BLK-namespace can be configured with a BTT with unique atomic sector
187 sizes. While a PMEM device can host a BTT the LABEL specification does
188 not provide for a sector size to be specified for a PMEM namespace.
189 This is due to the expectation that the primary usage model for PMEM is
190 via DAX, and the BTT is incompatible with DAX. However, for the cases
191 where an application or filesystem still needs atomic sector update
192 guarantees it can register a BTT on a PMEM device or partition. See
LIBNVDIMM/LIBNDCTL: Block Translation Table "btt" below.
196 Example NVDIMM Platform
197 -----------------------
199 For the remainder of this document the following diagram will be
200 referenced for any example sysfs layouts.
                            (a)                     (b)        DIMM   BLK-REGION
          +-------------------+--------+--------+--------+
+------+  |       pm0.0       | blk2.0 | pm1.0  | blk2.1 |    0      region2
| imc0 +--+- - - region0- - - +--------+        +--------+
+--+---+  |       pm0.0       | blk3.0 | pm1.0  | blk3.1 |    1      region3
   |      +-------------------+--------v        v--------+
+--+---+                               |                 |
| cpu0 |                                     region1
+--+---+                               |                 |
   |      +----------------------------^        ^--------+
+--+---+  |           blk4.0           | pm1.0  | blk4.0 |    2      region4
| imc1 +--+----------------------------|        +--------+
+------+  |           blk5.0           | pm1.0  | blk5.0 |    3      region5
          +----------------------------+--------+--------+
218 In this platform we have four DIMMs and two memory controllers in one
219 socket. Each unique interface (BLK or PMEM) to DPA space is identified
220 by a region device with a dynamically assigned id (REGION0 - REGION5).
222 1. The first portion of DIMM0 and DIMM1 are interleaved as REGION0. A
223 single PMEM namespace is created in the REGION0-SPA-range that spans
224 DIMM0 and DIMM1 with a user-specified name of "pm0.0". Some of that
225 interleaved system-physical-address range is reclaimed as BLK-aperture
226 accessed space starting at DPA-offset (a) into each DIMM. In that
227 reclaimed space we create two BLK-aperture "namespaces" from REGION2 and
228 REGION3 where "blk2.0" and "blk3.0" are just human readable names that
229 could be set to any user-desired name in the LABEL.
231 2. In the last portion of DIMM0 and DIMM1 we have an interleaved
232 system-physical-address range, REGION1, that spans those two DIMMs as
well as DIMM2 and DIMM3. Some of REGION1 is allocated to a PMEM
namespace named "pm1.0", the rest is reclaimed in 4 BLK-aperture
namespaces (for each DIMM in the interleave set), "blk2.1", "blk3.1",
"blk4.0", and "blk5.0".
3. The portions of DIMM2 and DIMM3 that do not participate in the REGION1
interleaved system-physical-address range (i.e. the DPA addresses below
offset (b)) are also included in the "blk4.0" and "blk5.0" namespaces.
241 Note, that this example shows that BLK-aperture namespaces don't need to
242 be contiguous in DPA-space.
244 This bus is provided by the kernel under the device
245 /sys/devices/platform/nfit_test.0 when CONFIG_NFIT_TEST is enabled and
the nfit_test.ko module is loaded. This not only tests LIBNVDIMM but
also the acpi_nfit.ko driver.
250 LIBNVDIMM Kernel Device Model and LIBNDCTL Userspace API
251 ----------------------------------------------------
253 What follows is a description of the LIBNVDIMM sysfs layout and a
254 corresponding object hierarchy diagram as viewed through the LIBNDCTL
255 api. The example sysfs paths and diagrams are relative to the Example
NVDIMM Platform which is also the LIBNVDIMM bus used in the LIBNDCTL unit
test suite.
LIBNDCTL: Context
Every api call in the LIBNDCTL library requires a context that holds the
261 logging parameters and other library instance state. The library is
262 based on the libabc template:
263 https://git.kernel.org/cgit/linux/kernel/git/kay/libabc.git/
265 LIBNDCTL: instantiate a new library context example
	struct ndctl_ctx *ctx;

	if (ndctl_new(&ctx) == 0)
		return ctx;
	else
		return NULL;
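Note that the returned context is reference counted; a caller that is
done with the library can drop its reference with ndctl_unref(ctx).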
LIBNVDIMM/LIBNDCTL: Bus
-----------------------
277 A bus has a 1:1 relationship with an NFIT. The current expectation for
278 ACPI based systems is that there is only ever one platform-global NFIT.
That said, it is trivial to register multiple NFITs; the specification
does not preclude it. The infrastructure supports multiple busses and
we use this capability to test multiple NFIT configurations in the
unit tests.
284 LIBNVDIMM: control class device in /sys/class
This character device accepts DSM messages to be passed to a DIMM
identified by its NFIT handle.
291 |-- device -> ../../../ndbus0
292 |-- subsystem -> ../../../../../../../class/nd
LIBNVDIMM: bus

	struct nvdimm_bus *nvdimm_bus_register(struct device *parent,
			struct nvdimm_bus_descriptor *nfit_desc);
301 /sys/devices/platform/nfit_test.0/ndbus0
320 LIBNDCTL: bus enumeration example
Find the bus handle that describes the bus from Example NVDIMM Platform

	static struct ndctl_bus *get_bus_by_provider(struct ndctl_ctx *ctx,
			const char *provider)
	{
		struct ndctl_bus *bus;

		ndctl_bus_foreach(ctx, bus)
			if (strcmp(provider, ndctl_bus_get_provider(bus)) == 0)
				return bus;

		return NULL;
	}

	bus = get_bus_by_provider(ctx, "nfit_test.0");
338 LIBNVDIMM/LIBNDCTL: DIMM (NMEM)
339 ---------------------------
341 The DIMM device provides a character device for sending commands to
342 hardware, and it is a container for LABELs. If the DIMM is defined by
NFIT then an optional 'nfit' attribute sub-directory is available to add
NFIT-specifics.
346 Note that the kernel device name for "DIMMs" is "nmemX". The NFIT
347 describes these devices via "Memory Device to System Physical Address
348 Range Mapping Structure", and there is no requirement that they actually
349 be physical DIMMs, so we use a more generic name.
351 LIBNVDIMM: DIMM (NMEM)
353 struct nvdimm *nvdimm_create(struct nvdimm_bus *nvdimm_bus, void *provider_data,
354 const struct attribute_group **groups, unsigned long flags,
355 unsigned long *dsm_mask);
357 /sys/devices/platform/nfit_test.0/ndbus0
359 | |-- available_slots
363 | |-- driver -> ../../../../../bus/nd/drivers/nvdimm
374 | |-- subsystem -> ../../../../../bus/nd
380 LIBNDCTL: DIMM enumeration example
382 Note, in this example we are assuming NFIT-defined DIMMs which are
identified by an "nfit_handle", a 32-bit value where:
384 Bit 3:0 DIMM number within the memory channel
385 Bit 7:4 memory channel number
386 Bit 11:8 memory controller ID
387 Bit 15:12 socket ID (within scope of a Node controller if node controller is present)
388 Bit 27:16 Node Controller ID
	static struct ndctl_dimm *get_dimm_by_handle(struct ndctl_bus *bus,
			unsigned int handle)
	{
		struct ndctl_dimm *dimm;

		ndctl_dimm_foreach(bus, dimm)
			if (ndctl_dimm_get_handle(dimm) == handle)
				return dimm;

		return NULL;
	}

	#define DIMM_HANDLE(n, s, i, c, d) \
		(((n & 0xfff) << 16) | ((s & 0xf) << 12) | ((i & 0xf) << 8) \
		 | ((c & 0xf) << 4) | (d & 0xf))

	dimm = get_dimm_by_handle(bus, DIMM_HANDLE(0, 0, 0, 0, 0));
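As a worked example of the encoding, DIMM_HANDLE(0, 0, 1, 2, 3)
evaluates to (1 << 8) | (2 << 4) | 3 = 0x123, i.e. socket 0, memory
controller 1, channel 2, DIMM 3 (a hypothetical handle, not one from
the example platform).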
409 LIBNVDIMM/LIBNDCTL: Region
410 ----------------------
A generic REGION device is registered for each PMEM range or
BLK-aperture set. Per the example there are 6 regions: 2 PMEM and 4
BLK-aperture sets on the "nfit_test.0" bus. The primary role of a
region is to be a
415 container of "mappings". A mapping is a tuple of <DIMM,
416 DPA-start-offset, length>.
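For illustration, a minimal sketch that walks a region's mappings with
libndctl and prints each tuple. The accessors ndctl_mapping_get_offset(),
ndctl_mapping_get_length(), and ndctl_dimm_get_id() are assumed from the
libndctl api; the helper name is made up:

	static void dump_mappings(struct ndctl_region *region)
	{
		struct ndctl_mapping *map;

		ndctl_mapping_foreach(region, map) {
			struct ndctl_dimm *dimm = ndctl_mapping_get_dimm(map);

			/* one line per <DIMM, DPA-start-offset, length> tuple */
			printf("nmem%u: offset: %llu length: %llu\n",
					ndctl_dimm_get_id(dimm),
					ndctl_mapping_get_offset(map),
					ndctl_mapping_get_length(map));
		}
	}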
418 LIBNVDIMM provides a built-in driver for these REGION devices. This driver
419 is responsible for reconciling the aliased DPA mappings across all
420 regions, parsing the LABEL, if present, and then emitting NAMESPACE
421 devices with the resolved/exclusive DPA-boundaries for the nd_pmem or
422 nd_blk device driver to consume.
In addition to the generic attributes of "mapping"s, "interleave_ways",
and "size", the REGION device also exports some convenience attributes.
426 "nstype" indicates the integer type of namespace-device this region
427 emits, "devtype" duplicates the DEVTYPE variable stored by udev at the
428 'add' event, "modalias" duplicates the MODALIAS variable stored by udev
429 at the 'add' event, and finally, the optional "spa_index" is provided in
430 the case where the region is defined by a SPA.
LIBNVDIMM: region

	struct nd_region *nvdimm_pmem_region_create(struct nvdimm_bus *nvdimm_bus,
			struct nd_region_desc *ndr_desc);
	struct nd_region *nvdimm_blk_region_create(struct nvdimm_bus *nvdimm_bus,
			struct nd_region_desc *ndr_desc);
439 /sys/devices/platform/nfit_test.0/ndbus0
445 | |-- driver -> ../../../../../bus/nd/drivers/nd_region
446 | |-- init_namespaces
459 | |-- subsystem -> ../../../../../bus/nd
464 LIBNDCTL: region enumeration example
466 Sample region retrieval routines based on NFIT-unique data like
467 "spa_index" (interleave set id) for PMEM and "nfit_handle" (dimm id) for
	static struct ndctl_region *get_pmem_region_by_spa_index(struct ndctl_bus *bus,
			unsigned int spa_index)
	{
		struct ndctl_region *region;

		ndctl_region_foreach(bus, region) {
			if (ndctl_region_get_type(region) != ND_DEVICE_REGION_PMEM)
				continue;
			if (ndctl_region_get_spa_index(region) == spa_index)
				return region;
		}
		return NULL;
	}

	static struct ndctl_region *get_blk_region_by_dimm_handle(struct ndctl_bus *bus,
			unsigned int handle)
	{
		struct ndctl_region *region;

		ndctl_region_foreach(bus, region) {
			struct ndctl_mapping *map;

			if (ndctl_region_get_type(region) != ND_DEVICE_REGION_BLOCK)
				continue;
			ndctl_mapping_foreach(region, map) {
				struct ndctl_dimm *dimm = ndctl_mapping_get_dimm(map);

				if (ndctl_dimm_get_handle(dimm) == handle)
					return region;
			}
		}
		return NULL;
	}
505 Why Not Encode the Region Type into the Region Name?
506 ----------------------------------------------------
At first glance, since NFIT defines just PMEM and BLK interface types,
it seems we should simply name REGION devices with something derived
from those type names. However, the ND subsystem explicitly keeps the
511 REGION name generic and expects userspace to always consider the
512 region-attributes for 4 reasons:
514 1. There are already more than two REGION and "namespace" types. For
515 PMEM there are two subtypes. As mentioned previously we have PMEM where
the constituent DIMM devices are known, and anonymous PMEM. For BLK
517 regions the NFIT specification already anticipates vendor specific
518 implementations. The exact distinction of what a region contains is in
519 the region-attributes not the region-name or the region-devtype.
521 2. A region with zero child-namespaces is a possible configuration. For
522 example, the NFIT allows for a DCR to be published without a
523 corresponding BLK-aperture. This equates to a DIMM that can only accept
control/configuration messages, but no I/O through a descendant block
525 device. Again, this "type" is advertised in the attributes ('mappings'
526 == 0) and the name does not tell you much.
528 3. What if a third major interface type arises in the future? Outside
529 of vendor specific implementations, it's not difficult to envision a
530 third class of interface type beyond BLK and PMEM. With a generic name
531 for the REGION level of the device-hierarchy old userspace
532 implementations can still make sense of new kernel advertised
533 region-types. Userspace can always rely on the generic region
534 attributes like "mappings", "size", etc and the expected child devices
535 named "namespace". This generic format of the device-model hierarchy
536 allows the LIBNVDIMM and LIBNDCTL implementations to be more uniform and
539 4. There are more robust mechanisms for determining the major type of a
540 region than a device name. See the next section, How Do I Determine the
541 Major Type of a Region?
543 How Do I Determine the Major Type of a Region?
544 ----------------------------------------------
546 Outside of the blanket recommendation of "use libndctl", or simply
547 looking at the kernel header (/usr/include/linux/ndctl.h) to decode the
548 "nstype" integer attribute, here are some other options.
550 1. module alias lookup:
552 The whole point of region/namespace device type differentiation is to
553 decide which block-device driver will attach to a given LIBNVDIMM namespace.
One can simply use the modalias to look up the resulting module. It's
555 important to note that this method is robust in the presence of a
556 vendor-specific driver down the road. If a vendor-specific
557 implementation wants to supplant the standard nd_blk driver it can with
558 minimal impact to the rest of LIBNVDIMM.
560 In fact, a vendor may also want to have a vendor-specific region-driver
561 (outside of nd_region). For example, if a vendor defined its own LABEL
562 format it would need its own region driver to parse that LABEL and emit
563 the resulting namespaces. The output from module resolution is more
564 accurate than a region-name or region-devtype.
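As a concrete, hedged illustration of the modalias lookup, here is a
sketch that reads a namespace's modalias from sysfs and asks modprobe to
resolve it. The path follows the Example NVDIMM Platform, and
"modprobe -R" prints the module name(s) matching an alias:

	#include <stdio.h>
	#include <stdlib.h>
	#include <string.h>

	int main(void)
	{
		char alias[256], cmd[512];
		FILE *f = fopen("/sys/devices/platform/nfit_test.0/ndbus0"
				"/region0/namespace0.0/modalias", "r");

		if (!f || !fgets(alias, sizeof(alias), f))
			return EXIT_FAILURE;
		fclose(f);
		alias[strcspn(alias, "\n")] = '\0';

		/* ask modprobe which driver module claims this alias */
		snprintf(cmd, sizeof(cmd), "modprobe -R %s", alias);
		return system(cmd);
	}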
568 The kernel "devtype" is registered in the udev database
569 # udevadm info --path=/devices/platform/nfit_test.0/ndbus0/region0
570 P: /devices/platform/nfit_test.0/ndbus0/region0
E: DEVPATH=/devices/platform/nfit_test.0/ndbus0/region0
E: DEVTYPE=nd_pmem
576 # udevadm info --path=/devices/platform/nfit_test.0/ndbus0/region4
577 P: /devices/platform/nfit_test.0/ndbus0/region4
E: DEVPATH=/devices/platform/nfit_test.0/ndbus0/region4
E: DEVTYPE=nd_blk
...and is available as a region attribute, but keep in mind that the
"devtype" does not indicate sub-type variations; scripts should really
be checking the other attributes.
587 3. type specific attributes:
589 As it currently stands a BLK-aperture region will never have a
590 "nfit/spa_index" attribute, but neither will a non-NFIT PMEM region. A
591 BLK region with a "mappings" value of 0 is, as mentioned above, a DIMM
592 that does not allow I/O. A PMEM region with a "mappings" value of zero
593 is a simple system-physical-address range.
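Tying the options above together, here is a hedged sketch that decodes a
region's "nstype" through libndctl. It assumes the
ndctl_region_get_nstype() accessor and the nd_device_type constants from
/usr/include/linux/ndctl.h; the helper name is made up:

	static const char *region_major_type(struct ndctl_region *region)
	{
		switch (ndctl_region_get_nstype(region)) {
		case ND_DEVICE_NAMESPACE_IO:	/* label-less PMEM range */
		case ND_DEVICE_NAMESPACE_PMEM:	/* label-backed PMEM */
			return "pmem";
		case ND_DEVICE_NAMESPACE_BLK:
			return "blk";
		default:
			return "unknown";
		}
	}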
596 LIBNVDIMM/LIBNDCTL: Namespace
597 -------------------------
599 A REGION, after resolving DPA aliasing and LABEL specified boundaries,
600 surfaces one or more "namespace" devices. The arrival of a "namespace"
601 device currently triggers either the nd_blk or nd_pmem driver to load
602 and register a disk/block device.
LIBNVDIMM: namespace
Here is a sample layout from the three major types of NAMESPACE where
namespace0.0 represents DIMM-info-backed PMEM (note that it has a 'uuid'
attribute), namespace2.0 represents a BLK namespace (note it has a
'sector_size' attribute), and namespace6.0 represents an anonymous PMEM
namespace (note that it has no 'uuid' attribute due to not supporting a
LABEL).
612 /sys/devices/platform/nfit_test.0/ndbus0/region0/namespace0.0
621 |-- subsystem -> ../../../../../../bus/nd
625 /sys/devices/platform/nfit_test.0/ndbus0/region2/namespace2.0
634 |-- subsystem -> ../../../../../../bus/nd
638 /sys/devices/platform/nfit_test.1/ndbus1/region6/namespace6.0
642 |-- driver -> ../../../../../../bus/nd/drivers/pmem
648 |-- subsystem -> ../../../../../../bus/nd
652 LIBNDCTL: namespace enumeration example
653 Namespaces are indexed relative to their parent region, example below.
These indexes are mostly static from boot to boot, but the subsystem
makes no guarantees in this regard. For a static namespace identifier
use its 'uuid' attribute.
	static struct ndctl_namespace *get_namespace_by_id(struct ndctl_region *region,
			unsigned int id)
	{
		struct ndctl_namespace *ndns;

		ndctl_namespace_foreach(region, ndns)
			if (ndctl_namespace_get_id(ndns) == id)
				return ndns;

		return NULL;
	}
670 LIBNDCTL: namespace creation example
671 Idle namespaces are automatically created by the kernel if a given
672 region has enough available capacity to create a new namespace.
673 Namespace instantiation involves finding an idle namespace and
674 configuring it. For the most part the setting of namespace attributes
can occur in any order; the only constraint is that 'uuid' must be set
676 before 'size'. This enables the kernel to track DPA allocations
677 internally with a static identifier.
	static int configure_namespace(struct ndctl_region *region,
			struct ndctl_namespace *ndns,
			struct namespace_parameters *parameters)
	{
		char devname[50];

		snprintf(devname, sizeof(devname), "namespace%d.%d",
				ndctl_region_get_id(region), parameters->id);

		ndctl_namespace_set_alt_name(ndns, devname);
		/* 'uuid' must be set prior to setting size! */
		ndctl_namespace_set_uuid(ndns, parameters->uuid);
		ndctl_namespace_set_size(ndns, parameters->size);
		/* unlike pmem namespaces, blk namespaces have a sector size */
		if (parameters->lbasize)
			ndctl_namespace_set_sector_size(ndns, parameters->lbasize);

		return ndctl_namespace_enable(ndns);
	}
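A hypothetical caller, reusing get_namespace_by_id() from the
enumeration example above (the namespace id 0 is only an assumption for
illustration):

	ndns = get_namespace_by_id(region, 0);
	if (!ndns || configure_namespace(region, ndns, &parameters) < 0)
		/* no idle namespace, or configuration failed */;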
Why the Term "namespace"?
-------------------------
701 1. Why not "volume" for instance? "volume" ran the risk of confusing ND
702 as a volume manager like device-mapper.
2. The term originated to describe the sub-devices that can be created
within an NVME controller (see the nvme specification:
http://www.nvmexpress.org/specifications/), and NFIT namespaces are
meant to parallel the capabilities and configurability of
NVME-namespaces.
711 LIBNVDIMM/LIBNDCTL: Block Translation Table "btt"
712 ---------------------------------------------
714 A BTT (design document: http://pmem.io/2014/09/23/btt.html) is a stacked
715 block device driver that fronts either the whole block device or a
716 partition of a block device emitted by either a PMEM or BLK NAMESPACE.
718 LIBNVDIMM: btt layout
719 Every region will start out with at least one BTT device which is the
720 seed device. To activate it set the "namespace", "uuid", and
721 "sector_size" attributes and then bind the device to the nd_pmem or
722 nd_blk driver depending on the region type.
724 /sys/devices/platform/nfit_test.1/ndbus0/region0/btt0/
731 |-- subsystem -> ../../../../../bus/nd
735 LIBNDCTL: btt creation example
736 Similar to namespaces an idle BTT device is automatically created per
737 region. Each time this "seed" btt device is configured and enabled a new
seed is created. Creating a BTT configuration involves two steps:
finding an idle BTT and assigning it to consume a PMEM or BLK
namespace.
	static struct ndctl_btt *get_idle_btt(struct ndctl_region *region)
	{
		struct ndctl_btt *btt;

		ndctl_btt_foreach(region, btt)
			if (!ndctl_btt_is_enabled(btt)
					&& !ndctl_btt_is_configured(btt))
				return btt;

		return NULL;
	}

	static int configure_btt(struct ndctl_region *region,
			struct btt_parameters *parameters)
	{
		struct ndctl_btt *btt = get_idle_btt(region);

		if (!btt)
			return -ENXIO;

		ndctl_btt_set_uuid(btt, parameters->uuid);
		ndctl_btt_set_sector_size(btt, parameters->sector_size);
		ndctl_btt_set_namespace(btt, parameters->ndns);
		/* turn off raw mode device */
		ndctl_namespace_disable(parameters->ndns);
		/* turn on btt access */
		return ndctl_btt_enable(btt);
	}
Once instantiated a new inactive btt seed device will appear underneath
the region.
770 Once a "namespace" is removed from a BTT that instance of the BTT device
771 will be deleted or otherwise reset to default values. This deletion is
772 only at the device model level. In order to destroy a BTT the "info
773 block" needs to be destroyed. Note, that to destroy a BTT the media
774 needs to be written in raw mode. By default, the kernel will autodetect
775 the presence of a BTT and disable raw mode. This autodetect behavior
776 can be suppressed by enabling raw mode for the namespace via the
777 ndctl_namespace_set_raw_mode() api.
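A hedged sketch of that destroy sequence, using only the apis named in
this document plus an assumed helper name; the actual overwrite of the
info block on the raw block device is elided:

	static int wipe_btt_info(struct ndctl_namespace *ndns)
	{
		ndctl_namespace_disable(ndns);
		ndctl_namespace_set_raw_mode(ndns, 1);
		if (ndctl_namespace_enable(ndns) < 0)
			return -ENXIO;

		/* ... write zeroes over the btt info block via the raw device ... */

		ndctl_namespace_disable(ndns);
		ndctl_namespace_set_raw_mode(ndns, 0);
		return ndctl_namespace_enable(ndns);
	}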
780 Summary LIBNDCTL Diagram
781 ------------------------
783 For the given example above, here is the view of the objects as seen by the LIBNDCTL api:
                   +---+
                   |CTX|    +---------+   +--------------+  +---------------+
                   +-+-+  +-> REGION0 +---> NAMESPACE0.0 +--> PMEM8 "pm0.0" |
                     |    | +---------+   +--------------+  +---------------+
       +-------+     |    | +---------+   +--------------+  +---------------+
       | DIMM0 <-+   |    +-> REGION1 +---> NAMESPACE1.0 +--> PMEM6 "pm1.0" |
       +-------+ |   |    | +---------+   +--------------+  +---------------+
       | DIMM1 <-+ +-v--+ | +---------+   +--------------+  +---------------+
       +-------+ +-+BUS0+---> REGION2 +-+-> NAMESPACE2.0 +--> ND6  "blk2.0" |
       | DIMM2 <-+ +----+ | +---------+ | +--------------+  +----------------------+
       +-------+ |        |             +-> NAMESPACE2.1 +--> ND5  "blk2.1" | BTT2 |
       | DIMM3 <-+        |               +--------------+  +----------------------+
       +-------+          | +---------+   +--------------+  +---------------+
                          +-> REGION3 +-+-> NAMESPACE3.0 +--> ND4  "blk3.0" |
                          | +---------+ | +--------------+  +----------------------+
                          |             +-> NAMESPACE3.1 +--> ND3  "blk3.1" | BTT1 |
                          |               +--------------+  +----------------------+
                          | +---------+   +--------------+  +---------------+
                          +-> REGION4 +---> NAMESPACE4.0 +--> ND2  "blk4.0" |
                          | +---------+   +--------------+  +---------------+
                          | +---------+   +--------------+  +----------------------+
                          +-> REGION5 +---> NAMESPACE5.0 +--> ND1  "blk5.0" | BTT0 |
                            +---------+   +--------------+  +---------------+------+