Intel Optane DCPMM with QEMU vNVDIMM for KVM Guest

1. Overview

In this setup we use Intel Optane Data Center Persistent Memory Module (DCPMM) as the backend storage for a virtual NVDIMM device exposed to a KVM/QEMU guest.

Intel Optane DC Persistent Memory (DCPMM / PMEM) is a non-volatile memory technology that allows processors to access stored data directly with low latency, while retaining data across power cycles.

QEMU has supported vNVDIMM since version 2.6. For stable usage, SLES 12 SP5 or SLES 15 SP2 and later are recommended.


2. Intel Optane DCPMM Operating Modes

2.1 Memory Mode

  • DCPMM behaves like system memory.
  • DRAM acts as cache for DCPMM.
  • Data is not persistent after power loss.

Example: If the system has 1TB DCPMM and 128GB DRAM, the OS sees 1TB memory. The 128GB DRAM works as cache.
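As a quick sanity check on those numbers, the DRAM-to-DCPMM ratio in the example works out as follows (plain shell arithmetic on the figures above, not a hardware query):

```shell
# DRAM cache vs. DCPMM capacity from the example above.
dram_gb=128
dcpmm_gb=1024   # 1TB
echo "DRAM:DCPMM cache ratio is 1:$((dcpmm_gb / dram_gb))"
```

In Memory Mode, a DRAM cache that is small relative to the DCPMM capacity increases cache misses, so this ratio matters for performance.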

2.2 App Direct Mode

  • DCPMM behaves as an NVDIMM device.
  • Data persists across power cycles.
  • Applications must support persistent memory.

QEMU vNVDIMM requires App Direct Mode.


3. QEMU Virtual NVDIMM

QEMU provides vNVDIMM using memory backend objects:

  • memory-backend-file – backed by a file or DAX device; required for real persistence
  • memory-backend-ram – backed by volatile host RAM; contents are lost on shutdown, so it is only useful for testing

3.1 Basic Example


-machine pc,nvdimm
-m 4G,slots=4,maxmem=32G
-object memory-backend-file,id=mem1,share=on,mem-path=/path/to/file,size=10G
-device nvdimm,id=nv1,memdev=mem1

Important Notes:

  • The nvdimm machine option enables guest NVDIMM (ACPI NFIT) support.
  • slots must accommodate RAM plus all NVDIMM devices.
  • maxmem must be at least the initial RAM plus the total size of all vNVDIMMs.
  • share=on maps the backing file shared, so guest writes reach it and persist; with the default share=off, writes land in a private copy and are lost on shutdown.
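The sizing rule can be sketched with the example's numbers (shell arithmetic only; QEMU itself enforces this at startup):

```shell
# Initial RAM (-m 4G) plus every vNVDIMM backend (10G here) must not
# exceed maxmem (32G); otherwise QEMU fails to plug the device.
ram_gb=4
nvdimm_gb=10
maxmem_gb=32
total_gb=$((ram_gb + nvdimm_gb))
[ "$total_gb" -le "$maxmem_gb" ] && echo "fits: ${total_gb}G of ${maxmem_gb}G"
```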

3.2 Label Support (QEMU ≥ 2.7)


-device nvdimm,memdev=mem1,label-size=128K

Labels store metadata at the end of backend storage. Be careful when reusing backend files created without labels.
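Because the label area is carved out of the tail of the backing store, the capacity the guest sees shrinks by label-size. A quick sketch with the example's numbers:

```shell
# 10G backing file with a 128K label area at its end.
backing_bytes=$((10 * 1024 * 1024 * 1024))
label_bytes=$((128 * 1024))
usable_bytes=$((backing_bytes - label_bytes))
echo "guest-visible capacity: $usable_bytes bytes"
```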


3.3 Hotplug Support (QEMU ≥ 2.8)


(qemu) object_add memory-backend-file,id=mem2,share=on,mem-path=/path,size=4G
(qemu) device_add nvdimm,id=nv2,memdev=mem2

Each hotplugged NVDIMM consumes one memory slot.


3.4 IO Alignment (QEMU ≥ 2.12)

When the backend is a DAX device such as /dev/dax0.0, QEMU must mmap it with an alignment that matches the device's internal alignment. The align option sets this explicitly:


-object memory-backend-file,id=mem1,mem-path=/dev/dax0.0,size=4G,align=2M

3.5 Persistence Model

The nvdimm-persistence machine option tells the guest which platform persistence guarantee applies:


-machine pc,accel=kvm,nvdimm,nvdimm-persistence=cpu
  • mem-ctrl – data is persistent once it reaches the memory controller (ADR)
  • cpu – data is persistent once it reaches the CPU cache, i.e. caches are also flushed on power loss (eADR)

4. Host Configuration

4.1 Install Tools


sudo zypper in ipmctl
sudo zypper in ndctl
  • ipmctl – Configure DCPMM hardware
  • ndctl – Manage Linux libnvdimm

4.2 Configure App Direct Mode


sudo ipmctl create -goal PersistentMemoryType=AppDirect

A reboot is required for the new goal to take effect. After rebooting, verify the regions:


sudo ipmctl show -region

4.3 Create Namespace


sudo ndctl create-namespace --region=region0

Or specify size:


sudo ndctl create-namespace --region=region0 --size=36G

4.4 Format and Mount


sudo mkfs.xfs /dev/pmem0
sudo mkdir /pmemfs0
sudo mount -o dax /dev/pmem0 /pmemfs0

5. Backend File Creation

5.1 Regular File


truncate -s 10G /pmemfs0/nvdimm

A plain file cannot guarantee persistence across a host power failure, so the device should be flagged unarmed=on, which marks the vNVDIMM as not armed in the guest's NFIT.
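Note that truncate creates a sparse file: the apparent size is 10G, but blocks are allocated only as the guest writes. A minimal sketch using a temporary file (the path differs from the /pmemfs0/nvdimm example):

```shell
# Create a sparse 10G file and compare apparent size with allocated blocks.
f=$(mktemp)
truncate -s 10G "$f"
apparent=$(stat -c %s "$f")   # logical size in bytes
blocks=$(stat -c %b "$f")     # 512-byte blocks actually allocated
echo "apparent=$apparent blocks=$blocks"
rm -f "$f"
```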

5.2 DevDAX Device


ndctl create-namespace -f -e namespace0.0 -m devdax

Use /dev/dax0.0 as mem-path.

5.3 FS-DAX


mount -o dax /dev/pmem0p1 /mnt

6. QEMU Guest Example


sudo qemu-system-x86_64 \
  -machine pc,accel=kvm,nvdimm=on \
  -m 4G,slots=4,maxmem=32G \
  -object memory-backend-file,id=mem1,share=on,mem-path=/pmemfs0/nvdimm,size=10G,align=2M \
  -device nvdimm,memdev=mem1,unarmed=on,id=nv1,label-size=2M \
  -hda sles15.qcow2

7. Guest Configuration

7.1 Verify Driver


lsmod | grep libnvdimm
sudo zypper in ndctl
sudo ndctl list

7.2 Create Namespace Inside Guest


sudo ndctl create-namespace -f -e namespace0.0 --mode=fsdax
sudo mkfs.xfs -f /dev/pmem0
sudo mount -o dax /dev/pmem0 /pmemfs0

7.3 Persistence Test


echo "12345" > /pmemfs0/test
reboot

After the guest is back up, remount the filesystem and read the file back:


mount -o dax /dev/pmem0 /pmemfs0
cat /pmemfs0/test

8. Conclusion

We configured Intel Optane DCPMM in App Direct mode on the host and exposed it to a QEMU guest as a vNVDIMM device.

Applications inside the VM can use persistent memory through:

  • Standard file APIs
  • PMDK (Persistent Memory Development Kit)