1 <?xml version="1.0" encoding="UTF-8"?>
3 PUBLIC "-//OASIS//DTD DocBook XML V4.5//EN"
4 "http://www.oasis-open.org/docbook/xml/4.5/docbookx.dtd">
6 <refentry id="ctdb-script.options.5">
9 <refentrytitle>ctdb-script.options</refentrytitle>
10 <manvolnum>5</manvolnum>
11 <refmiscinfo class="source">ctdb</refmiscinfo>
12 <refmiscinfo class="manual">CTDB - clustered TDB database</refmiscinfo>
16 <refname>ctdb-script.options</refname>
17 <refpurpose>CTDB scripts configuration files</refpurpose>
21 <title>DESCRIPTION</title>
24 <title>Location</title>
Each CTDB script has two possible locations for its configuration options:
33 <filename>/usr/local/etc/ctdb/script.options</filename>
37 This is a catch-all global file for general purpose
scripts and for options that are used in multiple event
scripts.
46 <parameter>SCRIPT</parameter>.options
51 <filename><parameter>SCRIPT</parameter></filename> are
placed in a file alongside the script, with a ".options"
suffix added. This style is usually recommended for event
scripts.
Options in this script-specific file override those in
the global file.
68 <title>Contents</title>
71 These files should include simple shell-style variable
72 assignments and shell-style comments.
77 <title>Monitoring Thresholds</title>
80 Event scripts can monitor resources or services. When a
81 problem is detected, it may be better to warn about a problem
82 rather than to immediately fail monitoring and mark a node as
83 unhealthy. CTDB provides support for event scripts to do
84 threshold-based monitoring.
88 A threshold setting looks like
89 <parameter>WARNING_THRESHOLD<optional>:ERROR_THRESHOLD</optional></parameter>.
90 If the number of problems is ≥ WARNING_THRESHOLD then the
script will log a warning and continue. If the number of
problems is ≥ ERROR_THRESHOLD then the script will log an
error and exit with failure, causing monitoring to fail. Note
that ERROR_THRESHOLD is optional, and follows the optional
colon (:).
102 <title>NETWORK CONFIGURATION</title>
105 <title>10.interface</title>
108 This event script handles public IP address release and
takeover, as well as monitoring interfaces used by public IP
addresses.
117 CTDB_KILLTCP_USE_SS_KILL=yes|try|no
121 Whether to use <command>ss -K/--kill</command> to reset
122 incoming TCP connections to public IP addresses during
123 <command>releaseip</command>.
127 CTDB's standard method of resetting incoming TCP
128 connections during <command>releaseip</command> is via
129 its custom <command>ctdb_killtcp</command> command.
130 This uses network trickery to reset each connection:
131 send a "tickle ACK", capture the reply to extract the
132 TCP sequence number, send a reset (containing the
133 correct sequence number).
137 <command>ss -K</command> has been supported in
138 <command>ss</command> since iproute 4.5 in March 2016
139 and in the Linux kernel since 4.4 in December 2015.
140 However, the required kernel configuration item
141 <code>CONFIG_INET_DIAG_DESTROY</code> is disabled by
142 default. Although enabled in Debian kernels since ~2017
143 and in Ubuntu since at least 18.04, this has only
144 recently been enabled in distributions such as RHEL.
145 There seems to be no way, including running <command>ss
146 -K</command>, to determine if this is supported, so use
147 of this feature needs to be configurable. When
148 available, it should be the fastest, most reliable way
149 of killing connections.
153 Supported values are:
161 Use <command>ss -K</command> and make no other
162 attempt to kill any remaining connections. This
is sane on modern Linux distributions that are known to
have <code>CONFIG_INET_DIAG_DESTROY</code> enabled.
176 Attempt to use <command>ss -K</command> and fall
177 back to <command>ctdb_killtcp</command> for any
178 remaining connections. This may be a good value
179 when <command>ss</command> supports the
180 <command>-K</command> option but it is uncertain
whether <code>CONFIG_INET_DIAG_DESTROY</code> is enabled.
193 Never attempt to use <command>ss -K</command>.
194 Rely only on <command>ctdb_killtcp</command>.
206 CTDB_PARTIALLY_ONLINE_INTERFACES=yes|no
210 Whether one or more offline interfaces should cause a
211 monitor event to fail if there are other interfaces that
212 are up. If this is "yes" and a node has some interfaces
213 that are down then <command>ctdb status</command> will
214 display the node as "PARTIALLYONLINE".
218 Note that CTDB_PARTIALLY_ONLINE_INTERFACES=yes is not
219 generally compatible with NAT gateway or LVS. NAT
220 gateway relies on the interface configured by
CTDB_NATGW_PUBLIC_IFACE to be up and LVS relies on
222 CTDB_LVS_PUBLIC_IFACE to be up. CTDB does not check if
223 these options are set in an incompatible way so care is
224 needed to understand the interaction.
237 <title>11.natgw</title>
240 Provides CTDB's NAT gateway functionality.
244 NAT gateway is used to configure fallback routing for nodes
245 when they do not host any public IP addresses. For example,
246 it allows unhealthy nodes to reliably communicate with
247 external infrastructure. One node in a NAT gateway group will
248 be designated as the NAT gateway leader node and other (follower)
249 nodes will be configured with fallback routes via the NAT
250 gateway leader node. For more information, see the
251 <citetitle>NAT GATEWAY</citetitle> section in
252 <citerefentry><refentrytitle>ctdb</refentrytitle>
253 <manvolnum>7</manvolnum></citerefentry>.
259 <term>CTDB_NATGW_DEFAULT_GATEWAY=<parameter>IPADDR</parameter></term>
262 IPADDR is an alternate network gateway to use on the NAT
263 gateway leader node. If set, a fallback default route
264 is added via this network gateway.
267 No default. Setting this variable is optional - if not
set then no route is created on the NAT gateway leader
node.
275 <term>CTDB_NATGW_NODES=<parameter>FILENAME</parameter></term>
278 FILENAME contains the list of nodes that belong to the
279 same NAT gateway group.
284 <parameter>IPADDR</parameter> <optional>follower-only</optional>
IPADDR is the private IP address of each node in the NAT
gateway group.
292 If "follower-only" is specified then the corresponding node
293 can not be the NAT gateway leader node. In this case
294 <varname>CTDB_NATGW_PUBLIC_IFACE</varname> and
<varname>CTDB_NATGW_PUBLIC_IP</varname> are optional and
unused.
300 <filename>/usr/local/etc/ctdb/natgw_nodes</filename> when enabled.
306 <term>CTDB_NATGW_PRIVATE_NETWORK=<parameter>IPADDR/MASK</parameter></term>
309 IPADDR/MASK is the private sub-network that is
310 internally routed via the NAT gateway leader node. This
is usually the private network that is used for node
communication.
321 <term>CTDB_NATGW_PUBLIC_IFACE=<parameter>IFACE</parameter></term>
324 IFACE is the network interface on which the
325 CTDB_NATGW_PUBLIC_IP will be configured.
334 <term>CTDB_NATGW_PUBLIC_IP=<parameter>IPADDR/MASK</parameter></term>
337 IPADDR/MASK indicates the IP address that is used for
338 outgoing traffic (originating from
339 CTDB_NATGW_PRIVATE_NETWORK) on the NAT gateway leader
340 node. This <emphasis>must not</emphasis> be a
341 configured public IP address.
350 <term>CTDB_NATGW_STATIC_ROUTES=<parameter>IPADDR/MASK[@GATEWAY]</parameter> ...</term>
353 Each IPADDR/MASK identifies a network or host to which
354 NATGW should create a fallback route, instead of
355 creating a single default route. This can be used when
356 there is already a default route, via an interface that
357 can not reach required infrastructure, that overrides
358 the NAT gateway default route.
361 If GATEWAY is specified then the corresponding route on
the NATGW leader node will be via GATEWAY. Such routes are
created even if
<varname>CTDB_NATGW_DEFAULT_GATEWAY</varname> is not
specified. If GATEWAY is not specified for some
networks then routes are only created on the NATGW
leader node for those networks if
<varname>CTDB_NATGW_DEFAULT_GATEWAY</varname> is specified.
372 This should be used with care to avoid causing traffic
373 to unnecessarily double-hop through the NAT gateway
374 leader, even when a node is hosting public IP addresses.
375 Each specified network or host should probably have a
corresponding automatically created link route or static
route.
388 <title>Example</title>
390 CTDB_NATGW_NODES=/usr/local/etc/ctdb/natgw_nodes
391 CTDB_NATGW_PRIVATE_NETWORK=192.168.1.0/24
392 CTDB_NATGW_DEFAULT_GATEWAY=10.0.0.1
393 CTDB_NATGW_PUBLIC_IP=10.0.0.227/24
394 CTDB_NATGW_PUBLIC_IFACE=eth0
398 A variation that ensures that infrastructure (ADS, DNS, ...)
399 directly attached to the public network (10.0.0.0/24) is
400 always reachable would look like this:
403 CTDB_NATGW_NODES=/usr/local/etc/ctdb/natgw_nodes
404 CTDB_NATGW_PRIVATE_NETWORK=192.168.1.0/24
405 CTDB_NATGW_PUBLIC_IP=10.0.0.227/24
406 CTDB_NATGW_PUBLIC_IFACE=eth0
407 CTDB_NATGW_STATIC_ROUTES=10.0.0.0/24
Note that <varname>CTDB_NATGW_DEFAULT_GATEWAY</varname> is
not specified in this example.
418 <title>13.per_ip_routing</title>
421 Provides CTDB's policy routing functionality.
425 A node running CTDB may be a component of a complex network
426 topology. In particular, public addresses may be spread
427 across several different networks (or VLANs) and it may not be
428 possible to route packets from these public addresses via the
429 system's default route. Therefore, CTDB has support for
430 policy routing via the <filename>13.per_ip_routing</filename>
431 eventscript. This allows routing to be specified for packets
432 sourced from each public address. The routes are added and
433 removed as CTDB moves public addresses between nodes.
437 For more information, see the <citetitle>POLICY
438 ROUTING</citetitle> section in
439 <citerefentry><refentrytitle>ctdb</refentrytitle>
440 <manvolnum>7</manvolnum></citerefentry>.
445 <term>CTDB_PER_IP_ROUTING_CONF=<parameter>FILENAME</parameter></term>
448 FILENAME contains elements for constructing the desired
449 routes for each source address.
453 The special FILENAME value
454 <constant>__auto_link_local__</constant> indicates that no
455 configuration file is provided and that CTDB should
generate reasonable link-local routes for each public IP
address.
463 <parameter>IPADDR</parameter> <parameter>DEST-IPADDR/MASK</parameter> <optional><parameter>GATEWAY-IPADDR</parameter></optional>
469 <filename>/usr/local/etc/ctdb/policy_routing</filename>
477 CTDB_PER_IP_ROUTING_RULE_PREF=<parameter>NUM</parameter>
481 NUM sets the priority (or preference) for the routing
482 rules that are added by CTDB.
486 This should be (strictly) greater than 0 and (strictly)
487 less than 32766. A priority of 100 is recommended, unless
this conflicts with a priority already in use on the
system. See
490 <citerefentry><refentrytitle>ip</refentrytitle>
491 <manvolnum>8</manvolnum></citerefentry>, for more details.
498 CTDB_PER_IP_ROUTING_TABLE_ID_LOW=<parameter>LOW-NUM</parameter>,
499 CTDB_PER_IP_ROUTING_TABLE_ID_HIGH=<parameter>HIGH-NUM</parameter>
503 CTDB determines a unique routing table number to use for
504 the routing related to each public address. LOW-NUM and
505 HIGH-NUM indicate the minimum and maximum routing table
506 numbers that are used.
510 <citerefentry><refentrytitle>ip</refentrytitle>
511 <manvolnum>8</manvolnum></citerefentry> uses some
512 reserved routing table numbers below 255. Therefore,
CTDB_PER_IP_ROUTING_TABLE_ID_LOW should be (strictly)
greater than 255.
518 CTDB uses the standard file
519 <filename>/etc/iproute2/rt_tables</filename> to maintain
520 a mapping between the routing table numbers and labels.
521 The label for a public address
522 <replaceable>ADDR</replaceable> will look like
523 ctdb.<replaceable>addr</replaceable>. This means that
the associated rules and routes are easy to read (and
manipulate).
No defaults. Values of 1000 and 9000 are commonly used.
536 <title>Example</title>
538 CTDB_PER_IP_ROUTING_CONF=/usr/local/etc/ctdb/policy_routing
539 CTDB_PER_IP_ROUTING_RULE_PREF=100
540 CTDB_PER_IP_ROUTING_TABLE_ID_LOW=1000
541 CTDB_PER_IP_ROUTING_TABLE_ID_HIGH=9000
548 <title>91.lvs</title>
551 Provides CTDB's LVS functionality.
555 For a general description see the <citetitle>LVS</citetitle>
556 section in <citerefentry><refentrytitle>ctdb</refentrytitle>
557 <manvolnum>7</manvolnum></citerefentry>.
564 CTDB_LVS_NODES=<parameter>FILENAME</parameter>
FILENAME contains the list of nodes that belong to the
same LVS group.
574 <parameter>IPADDR</parameter> <optional>follower-only</optional>
IPADDR is the private IP address of each node in the LVS
group.
582 If "follower-only" is specified then the corresponding node
583 can not be the LVS leader node. In this case
584 <varname>CTDB_LVS_PUBLIC_IFACE</varname> and
<varname>CTDB_LVS_PUBLIC_IP</varname> are optional and
unused.
590 <filename>/usr/local/etc/ctdb/lvs_nodes</filename> when enabled.
597 CTDB_LVS_PUBLIC_IFACE=<parameter>INTERFACE</parameter>
601 INTERFACE is the network interface that clients will use
to connect to <varname>CTDB_LVS_PUBLIC_IP</varname>.
603 This is optional for follower-only nodes.
611 CTDB_LVS_PUBLIC_IP=<parameter>IPADDR</parameter>
CTDB_LVS_PUBLIC_IP is the LVS public address. No
default.
627 <title>SERVICE CONFIGURATION</title>
630 CTDB can be configured to manage and/or monitor various NAS (and
631 other) services via its eventscripts.
635 In the simplest case CTDB will manage a service. This means the
636 service will be started and stopped along with CTDB, CTDB will
637 monitor the service and CTDB will do any required
reconfiguration of the service when public IP addresses are
moved between nodes.
643 <title>20.multipathd</title>
646 Provides CTDB's Linux multipathd service management.
It can monitor multipath devices to ensure that active paths
are available.
657 CTDB_MONITOR_MPDEVICES=<parameter>MP-DEVICE-LIST</parameter>
MP-DEVICE-LIST is a list of multipath devices for CTDB to monitor.
672 <title>31.clamd</title>
This event script provides CTDB's ClamAV anti-virus service
management.
680 This eventscript is not enabled by default. Use <command>ctdb
681 enablescript</command> to enable it.
688 CTDB_CLAMD_SOCKET=<parameter>FILENAME</parameter>
FILENAME is the socket used to monitor ClamAV.
705 <title>40.vsftpd</title>
708 Provides CTDB's vsftpd service management.
714 CTDB_VSFTPD_MONITOR_THRESHOLDS=<parameter>THRESHOLDS</parameter>
718 THRESHOLDS indicates how many consecutive monitoring
719 attempts need to report that vsftpd is not listening on
720 TCP port 21 before a warning is logged and before
721 monitoring fails. See the <citetitle>Monitoring
Thresholds</citetitle> section for a description of how
723 monitoring thresholds work.
736 <title>48.netbios</title>
739 Provides CTDB's NetBIOS service management.
745 CTDB_SERVICE_NMB=<parameter>SERVICE</parameter>
Distribution-specific SERVICE for managing nmbd.
Default is distribution-dependent.
762 <title>49.winbind</title>
765 Provides CTDB's Samba winbind service management.
772 CTDB_SERVICE_WINBIND=<parameter>SERVICE</parameter>
Distribution-specific SERVICE for managing winbindd.
779 Default is "winbind".
786 CTDB_SAMBA_INTERFACES_FILE=<parameter>FILENAME</parameter>
790 Generates FILENAME, containing an smb.conf snippet with
791 an interfaces setting that includes interfaces for
792 configured CTDB public IP addresses. This file then
793 needs to be explicitly included in smb.conf.
796 For example, if public IP addresses are defined on
797 interfaces eth0 and eth1, and this is set to
798 <filename>/etc/samba/interfaces.conf</filename>, then
that file will contain the following before smbd is
started:
803 bind interfaces only = yes
804 interfaces = lo eth0 eth1
This can be useful for limiting the interfaces used by
smbd.
811 Default is to not generate a file.
818 CTDB_SAMBA_INTERFACES_EXTRA=<parameter>INTERFACE-LIST</parameter>
A space-separated list of additional interfaces to bind.
825 Default is empty - no extra interfaces are added.
834 <title>50.samba</title>
837 Provides the core of CTDB's Samba file service management.
844 CTDB_SAMBA_CHECK_PORTS=<parameter>PORT-LIST</parameter>
848 When monitoring Samba, check TCP ports in
849 space-separated PORT-LIST.
852 Default is to monitor ports that Samba is configured to listen on.
859 CTDB_SAMBA_SKIP_SHARE_CHECK=yes|no
863 As part of monitoring, should CTDB skip the check for
the existence of each directory configured as a share in
Samba. This may be desirable if there is a large number of
shares.
876 CTDB_SERVICE_SMB=<parameter>SERVICE</parameter>
Distribution-specific SERVICE for managing smbd.
Default is distribution-dependent.
893 <title>60.nfs</title>
896 This event script provides CTDB's NFS service management.
900 This includes parameters for the kernel NFS server.
901 Alternative NFS subsystems (such as <ulink
902 url="https://github.com/nfs-ganesha/nfs-ganesha/wiki">NFS-Ganesha</ulink>)
903 can be integrated using <varname>CTDB_NFS_CALLOUT</varname>.
910 CTDB_NFS_CALLOUT=<parameter>COMMAND</parameter>
914 COMMAND specifies the path to a callout to handle
915 interactions with the configured NFS system, including
startup, shutdown and monitoring.
919 Default is the included
920 <command>nfs-linux-kernel-callout</command>.
927 CTDB_NFS_CHECKS_DIR=<parameter>DIRECTORY</parameter>
931 Specifies the path to a DIRECTORY containing files that
932 describe how to monitor the responsiveness of NFS RPC
933 services. See the README file for this directory for an
934 explanation of the contents of these "check" files.
937 CTDB_NFS_CHECKS_DIR can be used to point to different
938 sets of checks for different NFS servers.
941 One way of using this is to have it point to, say,
942 <filename>/usr/local/etc/ctdb/nfs-checks-enabled.d</filename>
943 and populate it with symbolic links to the desired check
944 files. This avoids duplication and is upgrade-safe.
948 <filename>/usr/local/etc/ctdb/nfs-checks.d</filename>,
which contains NFS RPC checks suitable for Linux kernel
NFS.
957 CTDB_NFS_EXPORTS_FILE=<parameter>FILE</parameter>
961 Set FILE as the path of the file containing NFS exports,
962 for use by the NFS callout (see CTDB_NFS_CALLOUT,
963 above). This is used for share checks when
964 CTDB_NFS_SKIP_SHARE_CHECK is not set to "yes". This is
965 most useful with NFS-Ganesha, since it supports
configuration include files and exports may be stored in
them.
970 Default is <filename>/var/lib/nfs/etab</filename> for
971 <filename>nfs-linux-kernel-callout</filename>,
972 <filename>/etc/ganesha/ganesha.conf</filename> for
973 <filename>nfs-ganesha-callout</filename>.
980 CTDB_NFS_SHARED_STATE_DIR=<parameter>DIRECTORY</parameter>
984 DIRECTORY where clustered NFS shared state will be
985 located. DIRECTORY should be in a cluster filesystem
986 that is shared between the nodes. No default.
993 CTDB_NFS_SKIP_SHARE_CHECK=yes|no
997 As part of monitoring, should CTDB skip the check for
998 the existence of each directory exported via NFS. This
999 may be desirable if there is a large number of exports.
1009 CTDB_RPCINFO_LOCALHOST=<parameter>IPADDR</parameter>|<parameter>HOSTNAME</parameter>
1013 IPADDR or HOSTNAME indicates the address that
1014 <command>rpcinfo</command> should connect to when doing
1015 <command>rpcinfo</command> check on IPv4 RPC service during
1016 monitoring. Optimally this would be "localhost".
1017 However, this can add some performance overheads.
1020 Default is "127.0.0.1".
1027 CTDB_RPCINFO_LOCALHOST6=<parameter>IPADDR</parameter>|<parameter>HOSTNAME</parameter>
1031 IPADDR or HOSTNAME indicates the address that
1032 <command>rpcinfo</command> should connect to when doing
1033 <command>rpcinfo</command> check on IPv6 RPC service
1034 during monitoring. Optimally this would be "localhost6"
(or similar). However, this can add some performance
overheads.
1046 CTDB_STATD_CALLOUT_SHARED_STORAGE=<parameter>LOCATION</parameter>
LOCATION where NFSv3 statd state will be stored. Valid
values are:
1056 persistent_db<optional>:<parameter>TDB</parameter></optional>
1060 Data is queued to local storage and then dequeued
1061 to TDB during monitor events. This means there is
1062 a window where locking state may be lost.
1063 However, this works around performance limitations
1064 in CTDB's persistent database handling.
1067 If :TDB is omitted then TDB defaults to
1068 <filename>ctdb.tdb</filename>.
1074 shared_dir<optional>:<parameter>DIRECTORY</parameter></optional>
1078 DIRECTORY is a directory in a cluster filesystem
1079 that is shared between the nodes. If DIRECTORY is
1080 relative (i.e. does not start with '/') then it is
1081 appended to CTDB_NFS_SHARED_STATE_DIR. If
1082 :DIRECTORY is omitted then DIRECTORY defaults to
1083 <filename>statd</filename>.
1086 Using a shared directory may result in performance
1087 and/or stability problems. rpc.statd is
1088 single-threaded and its HA callout is called
1089 synchronously, causing any latency introduced by
1090 the callout to be cumulative. Stability issues
1091 are most likely if thousands of clients reclaim
1092 locks after failover and use of the cluster
1093 filesystem introduces too much additional
latency. Too much latency in the HA callout
1095 may cause rpc.statd to fail health monitoring.
1108 <title>70.iscsi</title>
1111 Provides CTDB's Linux iSCSI tgtd service management.
1118 CTDB_START_ISCSI_SCRIPTS=<parameter>DIRECTORY</parameter>
1122 DIRECTORY on shared storage containing scripts to start
1123 tgtd for each public IP address.
1141 CTDB checks the consistency of databases during startup and
provides a facility to back up persistent databases.
1146 <title>95.database</title>
1151 <term>CTDB_MAX_CORRUPT_DB_BACKUPS=<parameter>NUM</parameter></term>
1154 NUM is the maximum number of volatile TDB database
1155 backups to be kept (for each database) when a corrupt
1156 database is found during startup. Volatile TDBs are
1157 zeroed during startup so backups are needed to debug
1158 any corruption that occurs before a restart.
1167 <term>CTDB_PERSISTENT_DB_BACKUP_DIR=<parameter>DIRECTORY</parameter></term>
1170 Create a daily backup tarball for all persistent TDBs
1171 in DIRECTORY. Note that DIRECTORY must exist or no
1172 backups will be created.
1175 Given that persistent databases are fully replicated,
duplication is avoided by only creating backups on the
1177 current leader node. To maintain a complete, single
1178 set of backups, it makes sense for DIRECTORY to be in
1179 a cluster filesystem.
1182 This creates the backup from the
1183 <command>monitor</command> event, which should be fine
1184 because backing up persistent databases is a local
operation. Users who do not wish to create backups
1186 during the <command>monitor</command> event can choose
1187 not to use this option and instead run
1188 <command>/usr/local/etc/ctdb/ctdb-backup-persistent-tdbs.sh
-l <parameter>DIRECTORY</parameter></command> on all
nodes as a
1191 <citerefentry><refentrytitle>cron</refentrytitle>
1192 <manvolnum>8</manvolnum></citerefentry> job, which
1193 will also need to manually manage backup pruning.
1196 No default. No daily backups are created.
1202 <term>CTDB_PERSISTENT_DB_BACKUP_LIMIT=<parameter>COUNT</parameter></term>
1205 Keep at most COUNT backups in
1206 CTDB_PERSISTENT_DB_BACKUP_DIR. Note that if
1207 additional manual backups are created in this
1208 directory then these will count towards the limit.
1222 <title>SYSTEM RESOURCE MONITORING</title>
1230 Provides CTDB's filesystem and memory usage monitoring.
1234 CTDB can experience seemingly random (performance and other)
1235 issues if system resources become too constrained. Options in
1236 this section can be enabled to allow certain system resources
to be checked. They allow warnings to be logged and nodes to
1238 be marked unhealthy when system resource usage reaches the
1239 configured thresholds.
1243 Some checks are enabled by default. It is recommended that
1244 these checks remain enabled or are augmented by extra checks.
1245 There is no supported way of completely disabling the checks.
1252 CTDB_MONITOR_FILESYSTEM_USAGE=<parameter>FS-LIMIT-LIST</parameter>
1256 FS-LIMIT-LIST is a space-separated list of
1257 <parameter>FILESYSTEM</parameter>:<parameter>WARN_LIMIT</parameter><optional>:<parameter>UNHEALTHY_LIMIT</parameter></optional>
1258 triples indicating that warnings should be logged if the
1259 space used on FILESYSTEM reaches WARN_LIMIT%. If usage
1260 reaches UNHEALTHY_LIMIT then the node should be flagged
1261 unhealthy. Either WARN_LIMIT or UNHEALTHY_LIMIT may be
left blank, meaning that the corresponding check will be omitted.
Default is to warn for each filesystem containing a
database directory
1268 (<literal>volatile database directory</literal>,
1269 <literal>persistent database directory</literal>,
1270 <literal>state database directory</literal>)
1271 with a threshold of 90%.
1278 CTDB_MONITOR_MEMORY_USAGE=<parameter>MEM-LIMITS</parameter>
1282 MEM-LIMITS takes the form
1283 <parameter>WARN_LIMIT</parameter><optional>:<parameter>UNHEALTHY_LIMIT</parameter></optional>
1284 indicating that warnings should be logged if memory
1285 usage reaches WARN_LIMIT%. If usage reaches
1286 UNHEALTHY_LIMIT then the node should be flagged
1287 unhealthy. Either WARN_LIMIT or UNHEALTHY_LIMIT may be
left blank, meaning that the corresponding check will be omitted.
Default is 80, so warnings will be logged when memory
usage reaches 80%.
1304 <title>EVENT SCRIPT DEBUGGING</title>
1308 debug-hung-script.sh
1314 <term>CTDB_DEBUG_HUNG_SCRIPT_STACKPAT=<parameter>REGEXP</parameter></term>
1317 REGEXP specifies interesting processes for which stack
1318 traces should be logged when debugging hung eventscripts
1319 and those processes are matched in pstree output.
1320 REGEXP is an extended regexp so choices are separated by
1321 pipes ('|'). However, REGEXP should not contain
1322 parentheses. See also the <citerefentry><refentrytitle>ctdb.conf</refentrytitle>
1323 <manvolnum>5</manvolnum></citerefentry>
1324 [event] "debug script" option.
1327 Default is "exportfs|rpcinfo".
1338 <title>FILES</title>
1341 <member><filename>/usr/local/etc/ctdb/script.options</filename></member>
1346 <title>SEE ALSO</title>
1348 <citerefentry><refentrytitle>ctdbd</refentrytitle>
1349 <manvolnum>1</manvolnum></citerefentry>,
1351 <citerefentry><refentrytitle>ctdb</refentrytitle>
1352 <manvolnum>7</manvolnum></citerefentry>,
1354 <ulink url="http://ctdb.samba.org/"/>
1361 This documentation was written by
1369 <holder>Andrew Tridgell</holder>
1370 <holder>Ronnie Sahlberg</holder>
1374 This program is free software; you can redistribute it and/or
1375 modify it under the terms of the GNU General Public License as
1376 published by the Free Software Foundation; either version 3 of
1377 the License, or (at your option) any later version.
1380 This program is distributed in the hope that it will be
1381 useful, but WITHOUT ANY WARRANTY; without even the implied
1382 warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
1383 PURPOSE. See the GNU General Public License for more details.
1386 You should have received a copy of the GNU General Public
1387 License along with this program; if not, see
1388 <ulink url="http://www.gnu.org/licenses"/>.