# SPDX-License-Identifier: GPL-2.0-only
menu "Xen driver support"
	depends on XEN

config XEN_BALLOON
	bool "Xen memory balloon driver"
	help
	  The balloon driver allows the Xen domain to request more memory from
	  the system to expand the domain's memory allocation, or alternatively
	  return unneeded memory to the system.

config XEN_BALLOON_MEMORY_HOTPLUG
	bool "Memory hotplug support for Xen balloon driver"
	depends on XEN_BALLOON && MEMORY_HOTPLUG
	help
	  Memory hotplug support for the Xen balloon driver allows expanding
	  the memory available to the system above the limit declared at
	  system startup. It is very useful for critical systems which
	  require a long uptime without rebooting.

	  Memory can be hotplugged in the following steps:

	  1) target domain: ensure that the memory auto-online policy is in
	     effect by checking the /sys/devices/system/memory/auto_online_blocks
	     file (it should read 'online'),

	  2) control domain: xl mem-max <target-domain> <maxmem>
	     where <maxmem> is >= the requested memory size,

	  3) control domain: xl mem-set <target-domain> <memory>
	     where <memory> is the requested memory size; alternatively,
	     memory can be added by writing a proper value to
	     /sys/devices/system/xen_memory/xen_memory0/target or
	     /sys/devices/system/xen_memory/xen_memory0/target_kb on the
	     target domain.
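
	  For example, to request an 8 GiB target via sysfs in step 3 (an
	  illustrative value; target_kb takes the size in KiB):

	  echo 8388608 > /sys/devices/system/xen_memory/xen_memory0/target_kb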

	  Alternatively, if memory auto-onlining was not requested in step 1,
	  the newly added memory can be onlined manually in the target domain
	  by running the following:

	  for i in /sys/devices/system/memory/memory*/state; do \
	    [ "$(cat "$i")" = offline ] && echo online > "$i"; done

	  or by adding the following line to the udev rules:

	  SUBSYSTEM=="memory", ACTION=="add", RUN+="/bin/sh -c '[ -f /sys$devpath/state ] && echo online > /sys$devpath/state'"

config XEN_BALLOON_MEMORY_HOTPLUG_LIMIT
	int "Hotplugged memory limit (in GiB) for a PV guest"
	depends on XEN_HAVE_PVMMU
	depends on XEN_BALLOON_MEMORY_HOTPLUG
	help
	  Maximum amount of memory (in GiB) that a PV guest can be
	  expanded to when using memory hotplug.

	  A PV guest can have more memory than this limit if it is
	  started with a larger maximum.

	  This value is used to allocate enough space in internal
	  tables needed for physical memory administration.
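
	  For example, with an illustrative setting of
	  CONFIG_XEN_BALLOON_MEMORY_HOTPLUG_LIMIT=512, a PV guest could be
	  expanded by memory hotplug to at most 512 GiB.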

config XEN_SCRUB_PAGES_DEFAULT
	bool "Scrub pages before returning them to system by default"
	depends on XEN_BALLOON
	help
	  Scrub pages before returning them to the system for reuse by
	  other domains. This makes sure that any confidential data
	  is not accidentally visible to other domains. It is more
	  secure, but slightly less efficient. This can be controlled
	  with the xen_scrub_pages=0 boot parameter and with
	  /sys/devices/system/xen_memory/xen_memory0/scrub_pages.
	  This option only sets the default value.
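
	  For example, scrubbing can be disabled at runtime via sysfs (an
	  illustrative override of this default):

	  echo 0 > /sys/devices/system/xen_memory/xen_memory0/scrub_pages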

config XEN_DEV_EVTCHN
	tristate "Xen /dev/xen/evtchn device"
	help
	  The evtchn driver allows a userspace process to trigger event
	  channels and to receive notification of an event channel
	  update.

config XEN_BACKEND
	bool "Backend driver support"
	help
	  Support for backend device drivers that provide I/O services
	  to other virtual machines.

config XENFS
	tristate "Xen filesystem"
	help
	  The xen filesystem provides a way for domains to share
	  information with each other and with the hypervisor.
	  For example, by reading and writing the "xenbus" file, guests
	  may pass arbitrary information to the initial domain.
	  If in doubt, say yes.

config XEN_COMPAT_XENFS
	bool "Create compatibility mount point /proc/xen"
	depends on XENFS
	help
	  The old xenstore userspace tools expect to find "xenbus"
	  under /proc/xen, but "xenbus" is now found at the root of the
	  xenfs filesystem. Selecting this causes the kernel to create
	  the compatibility mount point /proc/xen if it is running on
	  a Xen platform.
	  If in doubt, say yes.
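
	  For example, the compatibility location can also be mounted
	  manually (illustrative):

	  mount -t xenfs xenfs /proc/xen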

config XEN_SYS_HYPERVISOR
	bool "Create xen entries under /sys/hypervisor"
	select SYS_HYPERVISOR
	help
	  Create entries under /sys/hypervisor describing the Xen
	  hypervisor environment. When running native or in another
	  virtual environment, /sys/hypervisor will still be present,
	  but will have no xen contents.
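
	  For example, when running on Xen, reading the type entry reports
	  "xen" (illustrative):

	  cat /sys/hypervisor/type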

config XEN_XENBUS_FRONTEND
	tristate

config XEN_GNTDEV
	tristate "userspace grant access device driver"
	help
	  Allows userspace processes to use grants.

config XEN_GNTDEV_DMABUF
	bool "Add support for dma-buf grant access device driver extension"
	depends on XEN_GNTDEV && XEN_GRANT_DMA_ALLOC
	select DMA_SHARED_BUFFER
	help
	  Allows userspace processes and kernel modules to use the Xen-backed
	  dma-buf implementation. With this extension, grant references to
	  the pages of an imported dma-buf can be exported for use by another
	  domain, and grant references coming from a foreign domain can be
	  converted into a local dma-buf for local export.

config XEN_GRANT_DEV_ALLOC
	tristate "User-space grant reference allocator driver"
	help
	  Allows userspace processes to create pages with access granted
	  to other domains. This can be used to implement frontend drivers
	  or as part of an inter-domain shared memory channel.

config XEN_GRANT_DMA_ALLOC
	bool "Allow allocating DMA capable buffers with grant reference module"
	depends on XEN && HAS_DMA
	help
	  Extends the grant table module API to allow allocating DMA-capable
	  buffers and mapping foreign grant references on top of them.
	  The resulting buffer is similar to one allocated by the balloon
	  driver in that proper memory reservation is made
	  ({increase|decrease}_reservation) and VA mappings are updated if
	  needed.

	  This is useful for sharing foreign buffers with HW drivers which
	  cannot work with scattered buffers provided by the balloon driver,
	  but require DMAable memory instead.

config XEN_PCIDEV_BACKEND
	tristate "Xen PCI-device backend driver"
	depends on PCI && X86 && XEN
	depends on XEN_BACKEND
	help
	  The PCI device backend driver allows the kernel to export arbitrary
	  PCI devices to other guests. If you select this to be a module, you
	  will need to make sure no other driver has bound to the device(s)
	  you want to make visible to other guests.

	  The parameter "passthrough" allows you to specify how you want the
	  PCI devices to appear in the guest. You can choose the default (0),
	  where the PCI topology starts at 00.00.0, or (1) for passthrough if
	  you want the PCI devices' topology to appear the same as in the
	  host.

	  The "hide" parameter (only applicable if the backend driver is
	  compiled into the kernel) allows you to bind PCI devices to this
	  module instead of their default device drivers. The argument is the
	  list of PCI BDFs:
	  xen-pciback.hide=(03:00.0)(04:00.0)
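
	  When the backend is built as a module, devices can instead be
	  handed to it at runtime through sysfs, for example (an illustrative
	  BDF; paths assume the driver registers itself as "pciback"):

	  echo 0000:03:00.0 > /sys/bus/pci/devices/0000:03:00.0/driver/unbind
	  echo 0000:03:00.0 > /sys/bus/pci/drivers/pciback/new_slot
	  echo 0000:03:00.0 > /sys/bus/pci/drivers/pciback/bind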

config XEN_PVCALLS_FRONTEND
	tristate "XEN PV Calls frontend driver"
	depends on INET && XEN
	select XEN_XENBUS_FRONTEND
	help
	  Experimental frontend for the Xen PV Calls protocol
	  (https://xenbits.xen.org/docs/unstable/misc/pvcalls.html). It
	  sends a small set of POSIX calls to the backend, which
	  implements them.

config XEN_PVCALLS_BACKEND
	bool "XEN PV Calls backend driver"
	depends on INET && XEN && XEN_BACKEND
	help
	  Experimental backend for the Xen PV Calls protocol
	  (https://xenbits.xen.org/docs/unstable/misc/pvcalls.html). It
	  allows PV Calls frontends to send POSIX calls to the backend,
	  which implements them.

config XEN_SCSI_BACKEND
	tristate "XEN SCSI backend driver"
	depends on XEN && XEN_BACKEND && TARGET_CORE
	help
	  The SCSI backend driver allows the kernel to export its SCSI
	  devices to other guests via a high-performance shared-memory
	  interface. Only needed for systems running as Xen driver domains
	  (e.g. Dom0) and if guests need generic access to SCSI devices.

config XEN_STUB
	bool "Xen stub drivers"
	depends on XEN && X86_64 && BROKEN
	help
	  Allow the kernel to install stub drivers, to reserve space for Xen
	  drivers, i.e. memory hotplug and CPU hotplug, and to block native
	  drivers from loading, so that the real Xen drivers can be modular.

	  To enable Xen features like CPU and memory hotplug, select Y here.

config XEN_ACPI_HOTPLUG_MEMORY
	tristate "Xen ACPI memory hotplug"
	depends on XEN_DOM0 && XEN_STUB && ACPI
	help
	  This is Xen ACPI memory hotplug.

	  Currently Xen only supports ACPI memory hot-add. If you want
	  to hot-add memory at runtime (the hot-added memory cannot be
	  removed until the machine stops), select Y/M here; otherwise
	  select N.

config XEN_ACPI_HOTPLUG_CPU
	tristate "Xen ACPI cpu hotplug"
	depends on XEN_DOM0 && XEN_STUB && ACPI
	select ACPI_CONTAINER
	help
	  Xen ACPI CPU enumeration and hotplugging.

	  For hotplugging, currently Xen only supports ACPI CPU hot-add.
	  If you want to hot-add CPUs at runtime (a hot-added CPU cannot
	  be removed until the machine stops), select Y/M here.

config XEN_ACPI_PROCESSOR
	tristate "Xen ACPI processor"
	depends on XEN && XEN_DOM0 && X86 && ACPI_PROCESSOR && CPU_FREQ
	help
	  This ACPI processor driver uploads Power Management information
	  to the Xen hypervisor.

	  To do that the driver parses the Power Management data and uploads
	  said information to the Xen hypervisor. Then the Xen hypervisor can
	  select the proper Cx and Pxx states. It also registers itself as the
	  SMM so that other drivers (such as the ACPI cpufreq scaling driver)
	  will not load.

	  To compile this driver as a module, choose M here: the module will
	  be called xen_acpi_processor. If you do not know what to choose,
	  select M here. If the CPUFREQ drivers are built in, select Y here.

config XEN_MCE_LOG
	bool "Xen platform mcelog"
	depends on XEN_DOM0 && X86_MCE
	help
	  Allow the kernel to fetch MCE errors from the Xen platform and
	  convert them into the Linux mcelog format for mcelog tools.

config XEN_HAVE_PVMMU
	bool

config XEN_EFI
	def_bool y
	depends on (ARM || ARM64 || X86_64) && EFI

config XEN_AUTO_XLATE
	def_bool y
	depends on ARM || ARM64 || XEN_PVHVM
	help
	  Support for auto-translated physmap guests.

config XEN_ACPI
	def_bool y
	depends on X86 && ACPI

config XEN_SYMS
	bool "Xen symbols"
	depends on X86 && XEN_DOM0 && XENFS
	default y if KALLSYMS
	help
	  Exports hypervisor symbols (along with their types and addresses)
	  via the /proc/xen/xensyms file, similar to /proc/kallsyms.
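
	  For example, once xenfs is mounted, the exported symbols can be
	  listed with standard tools (illustrative):

	  head /proc/xen/xensyms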

config XEN_FRONT_PGDIR_SHBUF
	tristate