menu "Xen driver support"

config XEN_BALLOON
	bool "Xen memory balloon driver"
	help
	  The balloon driver allows the Xen domain to request more memory from
	  the system to expand the domain's memory allocation, or alternatively
	  return unneeded memory to the system.
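
	  For example (an illustrative sketch; the domain name and sizes below
	  are hypothetical), the allocation of a running guest can be changed
	  either from the control domain or from within the guest itself:

	    control domain: xl mem-set guest1 2048m
	    target domain:  echo 2097152 > /sys/devices/system/xen_memory/xen_memory0/target_kb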

config XEN_SELFBALLOONING
	bool "Dynamically self-balloon kernel memory to target"
	depends on XEN && XEN_BALLOON && CLEANCACHE && SWAP && XEN_TMEM
	help
	  Self-ballooning dynamically balloons available kernel memory driven
	  by the current usage of anonymous memory ("committed AS") and
	  controlled by various sysfs-settable parameters. Configuring
	  FRONTSWAP is highly recommended; if it is not configured, self-
	  ballooning is disabled by default. If FRONTSWAP is configured,
	  frontswap-selfshrinking is enabled by default but can be disabled
	  with the 'tmem.selfshrink=0' kernel boot parameter; and self-ballooning
	  is enabled by default but can be disabled with the 'tmem.selfballooning=0'
	  kernel boot parameter. Note that systems without a sufficiently
	  large swap device should not enable self-ballooning.
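
	  For example, to disable both mechanisms at boot time, the two
	  parameters above can be combined on the kernel command line:

	    tmem.selfballooning=0 tmem.selfshrink=0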

config XEN_BALLOON_MEMORY_HOTPLUG
	bool "Memory hotplug support for Xen balloon driver"
	depends on XEN_BALLOON && MEMORY_HOTPLUG
	help
	  Memory hotplug support for the Xen balloon driver allows expanding
	  the memory available to the system above the limit declared at
	  system startup. It is very useful for critical systems which
	  require long uptimes without rebooting.

	  Memory can be hotplugged in the following steps (a complete example
	  is given at the end of this help text):

	  1) target domain: ensure that memory auto online policy is in
	     effect by checking /sys/devices/system/memory/auto_online_blocks
	     file (should be 'online').

	  2) control domain: xl mem-max <target-domain> <maxmem>
	     where <maxmem> is >= requested memory size,

	  3) control domain: xl mem-set <target-domain> <memory>
	     where <memory> is the requested memory size; alternatively,
	     memory can be added by writing the proper value to
	     /sys/devices/system/xen_memory/xen_memory0/target or
	     /sys/devices/system/xen_memory/xen_memory0/target_kb on the
	     target domain.

	  Alternatively, if memory auto onlining was not requested at step 1,
	  the newly added memory can be manually onlined in the target domain
	  by doing the following:

	  for i in /sys/devices/system/memory/memory*/state; do \
	    [ "`cat "$i"`" = offline ] && echo online > "$i"; done

	  or by adding the following line to udev rules:

	  SUBSYSTEM=="memory", ACTION=="add", RUN+="/bin/sh -c '[ -f /sys$devpath/state ] && echo online > /sys$devpath/state'"
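
	  As a complete illustrative example (the domain name and size are
	  hypothetical), growing a guest named "guest1" to 4096 MiB, assuming
	  memory auto onlining is enabled in the guest:

	    control domain: xl mem-max guest1 4096m
	    control domain: xl mem-set guest1 4096m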

config XEN_BALLOON_MEMORY_HOTPLUG_LIMIT
	int "Hotplugged memory limit (in GiB) for a PV guest"
	depends on XEN_HAVE_PVMMU
	depends on XEN_BALLOON_MEMORY_HOTPLUG
	help
	  Maximum amount of memory (in GiB) that a PV guest can be
	  expanded to when using memory hotplug.

	  A PV guest can have more memory than this limit if it is
	  started with a larger maximum.

	  This value is used to allocate enough space in internal
	  tables needed for physical memory administration.

config XEN_SCRUB_PAGES_DEFAULT
	bool "Scrub pages before returning them to system by default"
	depends on XEN_BALLOON
	help
	  Scrub pages before returning them to the system for reuse by
	  other domains. This makes sure that any confidential data
	  is not accidentally visible to other domains. It is more
	  secure, but slightly less efficient. This can be controlled
	  with the xen_scrub_pages=0 boot parameter and with
	  /sys/devices/system/xen_memory/xen_memory0/scrub_pages.
	  This option only sets the default value.
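
	  For example, scrubbing can be turned off on a running system
	  (using the sysfs node named above):

	    echo 0 > /sys/devices/system/xen_memory/xen_memory0/scrub_pages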

config XEN_DEV_EVTCHN
	tristate "Xen /dev/xen/evtchn device"
	help
	  The evtchn driver allows a userspace process to trigger event
	  channels and to receive notification of an event channel
	  update.

	  If in doubt, say yes.

config XEN_BACKEND
	bool "Backend driver support"
	help
	  Support for backend device drivers that provide I/O services
	  to other virtual machines.

config XENFS
	tristate "Xen filesystem"
	help
	  The xen filesystem provides a way for domains to share
	  information with each other and with the hypervisor.
	  For example, by reading and writing the "xenbus" file, guests
	  may pass arbitrary information to the initial domain.

	  If in doubt, say yes.
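
	  For example (a typical usage sketch; the mount point follows the
	  /proc/xen convention described below), the filesystem is mounted
	  with:

	    mount -t xenfs xenfs /proc/xen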

config XEN_COMPAT_XENFS
	bool "Create compatibility mount point /proc/xen"
	depends on XENFS
	help
	  The old xenstore userspace tools expect to find "xenbus"
	  under /proc/xen, but "xenbus" is now found at the root of the
	  xenfs filesystem. Selecting this causes the kernel to create
	  the compatibility mount point /proc/xen if it is running on
	  a Xen platform.

	  If in doubt, say yes.

config XEN_SYS_HYPERVISOR
	bool "Create xen entries under /sys/hypervisor"
	select SYS_HYPERVISOR
	help
	  Create entries under /sys/hypervisor describing the Xen
	  hypervisor environment. When running native or in another
	  virtual environment, /sys/hypervisor will still be present,
	  but will have no xen contents.
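
	  For example (shown for illustration), a guest running on Xen can
	  read basic hypervisor information with:

	    cat /sys/hypervisor/type
	    cat /sys/hypervisor/version/major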

config XEN_XENBUS_FRONTEND
	tristate

config XEN_GNTDEV
	tristate "userspace grant access device driver"
	help
	  Allows userspace processes to use grants.

config XEN_GNTDEV_DMABUF
	bool "Add support for dma-buf grant access device driver extension"
	depends on XEN_GNTDEV && XEN_GRANT_DMA_ALLOC && DMA_SHARED_BUFFER
	help
	  Allows userspace processes and kernel modules to use the Xen-backed
	  dma-buf implementation. With this extension, grant references to
	  the pages of an imported dma-buf can be exported for use by another
	  domain, and grant references coming from a foreign domain can be
	  converted into a local dma-buf for local export.

config XEN_GRANT_DEV_ALLOC
	tristate "User-space grant reference allocator driver"
	help
	  Allows userspace processes to create pages with access granted
	  to other domains. This can be used to implement frontend drivers
	  or as part of an inter-domain shared memory channel.

config XEN_GRANT_DMA_ALLOC
	bool "Allow allocating DMA capable buffers with grant reference module"
	depends on XEN && HAS_DMA
	help
	  Extends the grant table module API to allow allocating DMA capable
	  buffers and mapping foreign grant references on top of them.
	  The resulting buffer is similar to one allocated by the balloon
	  driver in that proper memory reservation is made by
	  ({increase|decrease}_reservation and VA mappings are updated if
	  needed).

	  This is useful for sharing foreign buffers with HW drivers which
	  cannot work with scattered buffers provided by the balloon driver,
	  but require DMAable memory instead.

config XEN_TMEM
	tristate
	depends on !ARM && !ARM64
	default m if (CLEANCACHE || FRONTSWAP)
	help
	  Shim to interface in-kernel Transcendent Memory hooks
	  (e.g. cleancache and frontswap) to Xen tmem hypercalls.

config XEN_PCIDEV_BACKEND
	tristate "Xen PCI-device backend driver"
	depends on PCI && X86 && XEN
	depends on XEN_BACKEND
	help
	  The PCI device backend driver allows the kernel to export arbitrary
	  PCI devices to other guests. If you select this to be a module, you
	  will need to make sure no other driver has bound to the device(s)
	  you want to make visible to other guests.

	  The parameter "passthrough" allows you to specify how you want the
	  PCI devices to appear in the guest. You can choose the default (0),
	  where the PCI topology starts at 00.00.0, or (1) for passthrough if
	  you want the PCI device topology to appear the same as in the host.

	  The "hide" parameter (only applicable if the backend driver is
	  compiled into the kernel) allows you to bind the PCI devices to
	  this module instead of to their default device drivers. The
	  argument is the list of PCI BDFs:
	  xen-pciback.hide=(03:00.0)(04:00.0)
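
	  For example (an illustrative command line; the BDF value is
	  hypothetical), both parameters can be given on the kernel command
	  line when the driver is built in:

	    xen-pciback.hide=(03:00.0) xen-pciback.passthrough=1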

config XEN_PVCALLS_FRONTEND
	tristate "XEN PV Calls frontend driver"
	depends on INET && XEN
	select XEN_XENBUS_FRONTEND
	help
	  Experimental frontend for the Xen PV Calls protocol
	  (https://xenbits.xen.org/docs/unstable/misc/pvcalls.html). It
	  sends a small set of POSIX calls to the backend, which
	  implements them.

config XEN_PVCALLS_BACKEND
	bool "XEN PV Calls backend driver"
	depends on INET && XEN && XEN_BACKEND
	help
	  Experimental backend for the Xen PV Calls protocol
	  (https://xenbits.xen.org/docs/unstable/misc/pvcalls.html). It
	  allows PV Calls frontends to send POSIX calls to the backend,
	  which implements them.

config XEN_SCSI_BACKEND
	tristate "XEN SCSI backend driver"
	depends on XEN && XEN_BACKEND && TARGET_CORE
	help
	  The SCSI backend driver allows the kernel to export its SCSI devices
	  to other guests via a high-performance shared-memory interface.
	  Only needed for systems running as Xen driver domains (e.g. Dom0) and
	  if guests need generic access to SCSI devices.

config XEN_STUB
	bool "Xen stub drivers"
	depends on XEN && X86_64 && BROKEN
	help
	  Allow the kernel to install stub drivers, to reserve space for Xen
	  drivers, i.e. memory hotplug and cpu hotplug, and to block native
	  drivers from loading, so that the real Xen drivers can be modular.

	  To enable Xen features like cpu and memory hotplug, select Y here.

config XEN_ACPI_HOTPLUG_MEMORY
	tristate "Xen ACPI memory hotplug"
	depends on XEN_DOM0 && XEN_STUB && ACPI
	help
	  This is Xen ACPI memory hotplug.

	  Currently Xen only supports ACPI memory hot-add. If you want
	  to hot-add memory at runtime (the hot-added memory cannot be
	  removed until the machine is stopped), select Y/M here;
	  otherwise select N.

config XEN_ACPI_HOTPLUG_CPU
	tristate "Xen ACPI cpu hotplug"
	depends on XEN_DOM0 && XEN_STUB && ACPI
	select ACPI_CONTAINER
	help
	  Xen ACPI cpu enumerating and hotplugging.

	  For hotplugging, currently Xen only supports ACPI cpu hot-add.
	  If you want to hot-add a cpu at runtime (the hot-added cpu cannot
	  be removed until the machine is stopped), select Y/M here.

config XEN_ACPI_PROCESSOR
	tristate "Xen ACPI processor"
	depends on XEN && XEN_DOM0 && X86 && ACPI_PROCESSOR && CPU_FREQ
	help
	  This ACPI processor driver uploads Power Management information to
	  the Xen hypervisor.

	  To do that the driver parses the Power Management data and uploads
	  said information to the Xen hypervisor. Then the Xen hypervisor can
	  select the proper Cx and Pxx states. It also registers itself as the
	  SMM so that other drivers (such as the ACPI cpufreq scaling driver)
	  will not load.

	  To compile this driver as a module, choose M here: the module will be
	  called xen_acpi_processor. If you do not know what to choose, select
	  M here. If the CPUFREQ drivers are built in, select Y here.

config XEN_MCE_LOG
	bool "Xen platform mcelog"
	depends on XEN_DOM0 && X86_64 && X86_MCE
	help
	  Allow the kernel to fetch MCE errors from the Xen platform and
	  convert them into the Linux mcelog format for mcelog tools.

config XEN_HAVE_PVMMU
	bool

config XEN_EFI
	def_bool y
	depends on (ARM || ARM64 || X86_64) && EFI

config XEN_AUTO_XLATE
	def_bool y
	depends on ARM || ARM64 || XEN_PVHVM
	help
	  Support for auto-translated physmap guests.

config XEN_ACPI
	def_bool y
	depends on X86 && ACPI

config XEN_SYMS
	bool "Xen symbols"
	depends on X86 && XEN_DOM0 && XENFS
	default y if KALLSYMS
	help
	  Exports hypervisor symbols (along with their types and addresses) via
	  the /proc/xen/xensyms file, similar to /proc/kallsyms.

endmenu