1 ftrace - Function Tracer
2 ========================
4 Copyright 2008 Red Hat Inc.
5 Author: Steven Rostedt <srostedt@redhat.com>
6 License: The GNU Free Documentation License, Version 1.2
7 (dual licensed under the GPL v2)
8 Reviewers: Elias Oltmanns, Randy Dunlap, Andrew Morton,
9 John Kacur, and David Teigland.
10 Written for: 2.6.28-rc2
16 Ftrace is an internal tracer designed to help out developers and
17 designers of systems to find what is going on inside the kernel.
18 It can be used for debugging or analyzing latencies and
19 performance issues that take place outside of user-space.
Although ftrace is typically considered the function tracer, it
is really a framework of several assorted tracing utilities.
There is latency tracing to examine what occurs between the time
interrupts are disabled and re-enabled, as well as for preemption,
and from the time a task is woken until it is actually scheduled in.
One of the most common uses of ftrace is event tracing.
Throughout the kernel are hundreds of static event points that
can be enabled via the debugfs file system to see what is
going on in certain parts of the kernel.
33 Implementation Details
34 ----------------------
36 See ftrace-design.txt for details for arch porters and such.
42 Ftrace uses the debugfs file system to hold the control files as
43 well as the files to display output.
45 When debugfs is configured into the kernel (which selecting any ftrace
46 option will do) the directory /sys/kernel/debug will be created. To mount
47 this directory, you can add to your /etc/fstab file:
49 debugfs /sys/kernel/debug debugfs defaults 0 0
51 Or you can mount it at run time with:
53 mount -t debugfs nodev /sys/kernel/debug
For quicker access to that directory you may want to make a soft link to it:
58 ln -s /sys/kernel/debug /debug
60 Any selected ftrace option will also create a directory called tracing
61 within the debugfs. The rest of the document will assume that you are in
62 the ftrace directory (cd /sys/kernel/debug/tracing) and will only concentrate
63 on the files within that directory and not distract from the content with
64 the extended "/sys/kernel/debug/tracing" path name.
66 That's it! (assuming that you have ftrace configured into your kernel)
68 After mounting debugfs, you can see a directory called
69 "tracing". This directory contains the control and output files
70 of ftrace. Here is a list of some of the key files:
73 Note: all time values are in microseconds.
This is used to set or display the current tracer that is configured.
82 This holds the different types of tracers that
83 have been compiled into the kernel. The
84 tracers listed here can be configured by
85 echoing their name into current_tracer.
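For example, to list the compiled-in tracers and select one (the
exact list depends on the kernel configuration):

# cat available_tracers
blk function_graph wakeup_rt wakeup irqsoff function nop
# echo function > current_tracer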
89 This sets or displays whether writing to the trace
90 ring buffer is enabled. Echo 0 into this file to disable
91 the tracer or 1 to enable it. Note, this only disables
writing to the ring buffer; the tracing overhead may still be incurred.
97 This file holds the output of the trace in a human
98 readable format (described below).
102 The output is the same as the "trace" file but this
103 file is meant to be streamed with live tracing.
104 Reads from this file will block until new data is
105 retrieved. Unlike the "trace" file, this file is a
106 consumer. This means reading from this file causes
107 sequential reads to display more current data. Once
108 data is read from this file, it is consumed, and
109 will not be read again with a sequential read. The
110 "trace" file is static, and if the tracer is not
111 adding more data, it will display the same
112 information every time it is read.
116 This file lets the user control the amount of data
117 that is displayed in one of the above output
118 files. Options also exist to modify how a tracer
119 or events work (stack traces, timestamps, etc).
123 This is a directory that has a file for every available
124 trace option (also in trace_options). Options may also be set
125 or cleared by writing a "1" or "0" respectively into the
126 corresponding file with the option name.
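For example, the following two commands are equivalent ways of
turning the print-parent option off:

# echo noprint-parent > trace_options
# echo 0 > options/print-parent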
130 Some of the tracers record the max latency.
131 For example, the time interrupts are disabled.
132 This time is saved in this file. The max trace
133 will also be stored, and displayed by "trace".
134 A new max trace will only be recorded if the
135 latency is greater than the value in this
136 file. (in microseconds)
140 Some latency tracers will record a trace whenever the
141 latency is greater than the number in this file.
142 Only active when the file contains a number greater than 0.
147 This sets or displays the number of kilobytes each CPU
148 buffer holds. By default, the trace buffers are the same size
149 for each CPU. The displayed number is the size of the
150 CPU buffer and not total size of all buffers. The
151 trace buffers are allocated in pages (blocks of memory
152 that the kernel uses for allocation, usually 4 KB in size).
153 If the last page allocated has room for more bytes
154 than requested, the rest of the page will be used,
155 making the actual allocation bigger than requested.
156 ( Note, the size may not be a multiple of the page size
157 due to buffer management meta-data. )
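For example, to request roughly 10 MB per CPU (the kernel may
round the value up slightly, as described above):

# echo 10240 > buffer_size_kb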
159 buffer_total_size_kb:
161 This displays the total combined size of all the trace buffers.
If a process is performing tracing, and the ring buffer should
be shrunk ("freed") when the process is finished, even if it
were to be killed by a signal, this file can be used for that
purpose. On close of this file, the ring buffer will be resized
to its minimum size. Having a process that is tracing also open
this file means that when the process exits, its file descriptor
for this file will be closed, and in doing so, the ring buffer
will be "freed".

It may also stop tracing if the disable_on_free option is set.
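One way to use this from a shell is to hold the file open for the
lifetime of the tracing session, so that the ring buffer is shrunk
automatically when the shell exits or is killed (a rough sketch):

# exec 7>free_buffer
# echo 1 > tracing_on
  ... run the workload ...
# exec 7>&-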
178 This is a mask that lets the user only trace
179 on specified CPUs. The format is a hex string
180 representing the CPUs.
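For example, to restrict tracing to CPUs 0 and 1 only:

# echo 3 > tracing_cpumask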
184 When dynamic ftrace is configured in (see the
185 section below "dynamic ftrace"), the code is dynamically
186 modified (code text rewrite) to disable calling of the
187 function profiler (mcount). This lets tracing be configured
188 in with practically no overhead in performance. This also
189 has a side effect of enabling or disabling specific functions
190 to be traced. Echoing names of functions into this file
191 will limit the trace to only those functions.
193 This interface also allows for commands to be used. See the
194 "Filter commands" section for more details.
198 This has an effect opposite to that of
199 set_ftrace_filter. Any function that is added here will not
200 be traced. If a function exists in both set_ftrace_filter
201 and set_ftrace_notrace, the function will _not_ be traced.
205 Have the function tracer only trace a single thread.
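For example, to trace only the current shell (see the "Single
thread tracing" section below for a longer example):

# echo $$ > set_ftrace_pid
# echo function > current_tracer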
209 Set a "trigger" function where tracing should start
210 with the function graph tracer (See the section
211 "dynamic ftrace" for more details).
213 available_filter_functions:
215 This lists the functions that ftrace
216 has processed and can trace. These are the function
217 names that you can pass to "set_ftrace_filter" or
218 "set_ftrace_notrace". (See the section "dynamic ftrace"
219 below for more details.)
223 This file is more for debugging ftrace, but can also be useful
224 in seeing if any function has a callback attached to it.
Not only does the trace infrastructure use the ftrace function
tracing utility, but other subsystems might as well. This file
227 displays all functions that have a callback attached to them
228 as well as the number of callbacks that have been attached.
229 Note, a callback may also call multiple functions which will
230 not be listed in this count.
If the callback registered to be traced by a function with
the "save regs" attribute (thus even more overhead), an 'R'
will be displayed on the same line as the function that
is returning registers.

If the callback registered to be traced by a function with
the "ip modify" attribute (thus the regs->ip can be changed),
an 'I' will be displayed on the same line as the function that
can be overridden.
242 function_profile_enabled:
When set it will enable all functions with either the function
tracer, or if enabled, the function graph tracer. It will
keep a histogram of the number of times each function was hit,
and if run with the function graph tracer, it will also keep
track of the time spent in those functions. The histogram
249 content can be displayed in the files:
251 trace_stats/function<cpu> ( function0, function1, etc).
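For example, to profile for a while and then inspect the per-CPU
histogram for CPU 0:

# echo 1 > function_profile_enabled
  ... let the workload run ...
# echo 0 > function_profile_enabled
# head trace_stats/function0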
255 A directory that holds different tracing stats.
259 Enable dynamic trace points. See kprobetrace.txt.
263 Dynamic trace points stats. See kprobetrace.txt.
Used with the function graph tracer. This is the max depth
it will trace into a function. Setting this to a value of
one will show only the first kernel function that is called
from user space.
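For example, to see only the functions entered directly from user
space (typically the system calls):

# echo 1 > max_graph_depth
# echo function_graph > current_tracer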
274 This is for tools that read the raw format files. If an event in
275 the ring buffer references a string (currently only trace_printk()
276 does this), only a pointer to the string is recorded into the buffer
277 and not the string itself. This prevents tools from knowing what
278 that string was. This file displays the string and address for
the string, allowing tools to map the pointers to what the
strings actually were.
284 Only the pid of the task is recorded in a trace event unless
285 the event specifically saves the task comm as well. Ftrace
286 makes a cache of pid mappings to comms to try to display
287 comms for events. If a pid for a comm is not listed, then
288 "<...>" is displayed in the output.
292 This displays the "snapshot" buffer and also lets the user
293 take a snapshot of the current running trace.
294 See the "Snapshot" section below for more details.
298 When the stack tracer is activated, this will display the
299 maximum stack size it has encountered.
300 See the "Stack Trace" section below.
304 This displays the stack back trace of the largest stack
305 that was encountered when the stack tracer is activated.
306 See the "Stack Trace" section below.
310 This is similar to "set_ftrace_filter" but it limits what
311 functions the stack tracer will check.
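For example, assuming the stack tracer has been enabled with the
kernel.stack_tracer_enabled sysctl, the checks can be limited to a
specific function:

# sysctl kernel.stack_tracer_enabled=1
# echo kmem_cache_alloc > stack_trace_filter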
315 Whenever an event is recorded into the ring buffer, a
316 "timestamp" is added. This stamp comes from a specified
317 clock. By default, ftrace uses the "local" clock. This
318 clock is very fast and strictly per cpu, but on some
319 systems it may not be monotonic with respect to other
320 CPUs. In other words, the local clocks may not be in sync
321 with local clocks on other CPUs.
323 Usual clocks for tracing:
326 [local] global counter x86-tsc
328 local: Default clock, but may not be in sync across CPUs
330 global: This clock is in sync with all CPUs but may
331 be a bit slower than the local clock.
333 counter: This is not a clock at all, but literally an atomic
334 counter. It counts up one by one, but is in sync
335 with all CPUs. This is useful when you need to
336 know exactly the order events occurred with respect to
337 each other on different CPUs.
339 uptime: This uses the jiffies counter and the time stamp
340 is relative to the time since boot up.
342 perf: This makes ftrace use the same clock that perf uses.
343 Eventually perf will be able to read ftrace buffers
344 and this will help out in interleaving the data.
346 x86-tsc: Architectures may define their own clocks. For
347 example, x86 uses its own TSC cycle clock here.
349 To set a clock, simply echo the clock name into this file.
351 echo global > trace_clock
This is a very useful file for synchronizing user space
with events happening in the kernel. Strings written into
this file will be recorded in the ftrace buffer.

It is useful for an application to open this file at its start
and then just reference the file descriptor for the file.

void trace_write(const char *fmt, ...)
{
	va_list ap;
	char buf[256];
	int n;

	va_start(ap, fmt);
	n = vsnprintf(buf, 256, fmt, ap);
	va_end(ap);

	write(trace_fd, buf, n);
}

start:

	trace_fd = open("trace_marker", O_WRONLY);
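Writing to trace_marker from a shell works just as well; the
string shows up in the trace output (as a tracing_mark_write
entry):

# echo 1 > tracing_on
# echo 'hello world' > trace_marker
# grep tracing_mark_write trace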
Add dynamic tracepoints in programs. See uprobetrace.txt.
390 Uprobe statistics. See uprobetrace.txt
394 This is a way to make multiple trace buffers where different
395 events can be recorded in different buffers.
396 See "Instances" section below.
400 This is the trace event directory. It holds event tracepoints
401 (also known as static tracepoints) that have been compiled
402 into the kernel. It shows what event tracepoints exist
403 and how they are grouped by system. There are "enable"
404 files at various levels that can enable the tracepoints
405 when a "1" is written to them.
407 See events.txt for more information.
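For example, all of the following are valid, from most specific to
least specific:

# echo 1 > events/sched/sched_switch/enable
# echo 1 > events/sched/enable
# echo 1 > events/enable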
411 This is a directory that contains the trace per_cpu information.
413 per_cpu/cpu0/buffer_size_kb:
415 The ftrace buffer is defined per_cpu. That is, there's a separate
416 buffer for each CPU to allow writes to be done atomically,
and free from cache bouncing. These buffers may be of different
sizes. This file is similar to the buffer_size_kb
file, but it only displays or sets the buffer size for the
specific CPU (here, cpu0).
424 This is similar to the "trace" file, but it will only display
the data specific to the CPU. If written to, it only clears
426 the specific CPU buffer.
428 per_cpu/cpu0/trace_pipe
This is similar to the "trace_pipe" file, and is a consuming
read, but it will only display (and consume) the data specific
to that CPU (here, cpu0).
434 per_cpu/cpu0/trace_pipe_raw
436 For tools that can parse the ftrace ring buffer binary format,
437 the trace_pipe_raw file can be used to extract the data
438 from the ring buffer directly. With the use of the splice()
439 system call, the buffer data can be quickly transferred to
a file or to the network where a server is collecting the data.
443 Like trace_pipe, this is a consuming reader, where multiple
444 reads will always produce different data.
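A rough sketch of capturing one CPU's raw buffer into a file while
tracing is active (dedicated tools such as trace-cmd use splice()
for this; interrupt the command once enough data has been
collected):

# dd if=per_cpu/cpu0/trace_pipe_raw of=/tmp/cpu0.raw bs=4096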
446 per_cpu/cpu0/snapshot:
448 This is similar to the main "snapshot" file, but will only
449 snapshot the current CPU (if supported). It only displays
450 the content of the snapshot for a given CPU, and if
451 written to, only clears this CPU buffer.
453 per_cpu/cpu0/snapshot_raw:
455 Similar to the trace_pipe_raw, but will read the binary format
456 from the snapshot buffer for the given CPU.
460 This displays certain stats about the ring buffer:
462 entries: The number of events that are still in the buffer.
overrun: The number of lost events due to overwriting when
the buffer was full.
467 commit overrun: Should always be zero.
468 This gets set if so many events happened within a nested
469 event (ring buffer is re-entrant), that it fills the
470 buffer and starts dropping events.
472 bytes: Bytes actually read (not overwritten).
474 oldest event ts: The oldest timestamp in the buffer
476 now ts: The current timestamp
478 dropped events: Events lost due to overwrite option being off.
480 read events: The number of events read.
485 Here is the list of current tracers that may be configured.
489 Function call tracer to trace all kernel functions.
493 Similar to the function tracer except that the
494 function tracer probes the functions on their entry
495 whereas the function graph tracer traces on both entry
496 and exit of the functions. It then provides the ability
to draw a graph of function calls similar to C source code.
502 Traces the areas that disable interrupts and saves
503 the trace with the longest max latency.
504 See tracing_max_latency. When a new max is recorded,
505 it replaces the old trace. It is best to view this
506 trace with the latency-format option enabled.
510 Similar to irqsoff but traces and records the amount of
511 time for which preemption is disabled.
Similar to irqsoff and preemptoff, but traces and
records the largest time for which irqs and/or preemption
are disabled.
521 Traces and records the max latency that it takes for
522 the highest priority task to get scheduled after
523 it has been woken up.
524 Traces all tasks as an average developer would expect.
528 Traces and records the max latency that it takes for just
529 RT tasks (as the current "wakeup" does). This is useful
530 for those interested in wake up timings of RT tasks.
This is the "trace nothing" tracer. To remove all
tracers from tracing simply echo "nop" into
current_tracer.
539 Examples of using the tracer
540 ----------------------------
542 Here are typical examples of using the tracers when controlling
543 them only with the debugfs interface (without using any
544 user-land utilities).
549 Here is an example of the output format of the file "trace"
554 # entries-in-buffer/entries-written: 140080/250280 #P:4
557 # / _----=> need-resched
558 # | / _---=> hardirq/softirq
559 # || / _--=> preempt-depth
561 # TASK-PID CPU# |||| TIMESTAMP FUNCTION
563 bash-1977 [000] .... 17284.993652: sys_close <-system_call_fastpath
564 bash-1977 [000] .... 17284.993653: __close_fd <-sys_close
565 bash-1977 [000] .... 17284.993653: _raw_spin_lock <-__close_fd
566 sshd-1974 [003] .... 17284.993653: __srcu_read_unlock <-fsnotify
567 bash-1977 [000] .... 17284.993654: add_preempt_count <-_raw_spin_lock
568 bash-1977 [000] ...1 17284.993655: _raw_spin_unlock <-__close_fd
569 bash-1977 [000] ...1 17284.993656: sub_preempt_count <-_raw_spin_unlock
570 bash-1977 [000] .... 17284.993657: filp_close <-__close_fd
571 bash-1977 [000] .... 17284.993657: dnotify_flush <-filp_close
572 sshd-1974 [003] .... 17284.993658: sys_select <-system_call_fastpath
575 A header is printed with the tracer name that is represented by
576 the trace. In this case the tracer is "function". Then it shows the
577 number of events in the buffer as well as the total number of entries
578 that were written. The difference is the number of entries that were
lost due to the buffer filling up (250280 - 140080 = 110200 events
lost).
582 The header explains the content of the events. Task name "bash", the task
583 PID "1977", the CPU that it was running on "000", the latency format
584 (explained below), the timestamp in <secs>.<usecs> format, the
585 function name that was traced "sys_close" and the parent function that
586 called this function "system_call_fastpath". The timestamp is the time
587 at which the function was entered.
592 When the latency-format option is enabled or when one of the latency
593 tracers is set, the trace file gives somewhat more information to see
594 why a latency happened. Here is a typical trace.
598 # irqsoff latency trace v1.1.5 on 3.8.0-test+
599 # --------------------------------------------------------------------
600 # latency: 259 us, #4/4, CPU#2 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:4)
602 # | task: ps-6143 (uid:0 nice:0 policy:0 rt_prio:0)
604 # => started at: __lock_task_sighand
605 # => ended at: _raw_spin_unlock_irqrestore
609 # / _-----=> irqs-off
610 # | / _----=> need-resched
611 # || / _---=> hardirq/softirq
612 # ||| / _--=> preempt-depth
614 # cmd pid ||||| time | caller
616 ps-6143 2d... 0us!: trace_hardirqs_off <-__lock_task_sighand
617 ps-6143 2d..1 259us+: trace_hardirqs_on <-_raw_spin_unlock_irqrestore
618 ps-6143 2d..1 263us+: time_hardirqs_on <-_raw_spin_unlock_irqrestore
619 ps-6143 2d..1 306us : <stack trace>
620 => trace_hardirqs_on_caller
622 => _raw_spin_unlock_irqrestore
629 => system_call_fastpath
632 This shows that the current tracer is "irqsoff" tracing the time
for which interrupts were disabled. It gives the trace version (which
never changes) and the version of the kernel upon which this was
executed (3.8). Then it displays the max latency in microseconds (259 us). The number
636 of trace entries displayed and the total number (both are four: #4/4).
637 VP, KP, SP, and HP are always zero and are reserved for later use.
638 #P is the number of online CPUs (#P:4).
640 The task is the process that was running when the latency
641 occurred. (ps pid: 6143).
643 The start and stop (the functions in which the interrupts were
644 disabled and enabled respectively) that caused the latencies:
646 __lock_task_sighand is where the interrupts were disabled.
647 _raw_spin_unlock_irqrestore is where they were enabled again.
649 The next lines after the header are the trace itself. The header
650 explains which is which.
652 cmd: The name of the process in the trace.
654 pid: The PID of that process.
656 CPU#: The CPU which the process was running on.
658 irqs-off: 'd' interrupts are disabled. '.' otherwise.
Note: If the architecture does not support a way to
read the irq flags variable, an 'X' will always
be printed here.
'N' both TIF_NEED_RESCHED and PREEMPT_NEED_RESCHED are set,
'n' only TIF_NEED_RESCHED is set,
'p' only PREEMPT_NEED_RESCHED is set,
'.' otherwise.
670 'H' - hard irq occurred inside a softirq.
671 'h' - hard irq is running
672 's' - soft irq is running
673 '.' - normal context.
preempt-depth: The level of preempt_disable nesting.
677 The above is mostly meaningful for kernel developers.
679 time: When the latency-format option is enabled, the trace file
680 output includes a timestamp relative to the start of the
681 trace. This differs from the output when latency-format
682 is disabled, which includes an absolute timestamp.
684 delay: This is just to help catch your eye a bit better. And
685 needs to be fixed to be only relative to the same CPU.
686 The marks are determined by the difference between this
687 current trace and the next trace.
688 '$' - greater than 1 second
'#' - greater than 1000 microseconds
'!' - greater than 100 microseconds
'+' - greater than 10 microseconds
' ' - less than or equal to 10 microseconds.
694 The rest is the same as the 'trace' file.
696 Note, the latency tracers will usually end with a back trace
697 to easily find where the latency occurred.
702 The trace_options file (or the options directory) is used to control
703 what gets printed in the trace output, or manipulate the tracers.
704 To see what is available, simply cat the file:
To disable one of the options, echo in the option prepended with "no".
737 echo noprint-parent > trace_options
739 To enable an option, leave off the "no".
741 echo sym-offset > trace_options
743 Here are the available options:
745 print-parent - On function traces, display the calling (parent)
746 function as well as the function being traced.
749 bash-4000 [01] 1477.606694: simple_strtoul <-kstrtoul
752 bash-4000 [01] 1477.606694: simple_strtoul
755 sym-offset - Display not only the function name, but also the
756 offset in the function. For example, instead of
757 seeing just "ktime_get", you will see
758 "ktime_get+0xb/0x20".
761 bash-4000 [01] 1477.606694: simple_strtoul+0x6/0xa0
763 sym-addr - this will also display the function address as well
764 as the function name.
767 bash-4000 [01] 1477.606694: simple_strtoul <c0339346>
769 verbose - This deals with the trace file when the
770 latency-format option is enabled.
772 bash 4000 1 0 00000000 00010a95 [58127d26] 1720.415ms \
773 (+0.000ms): simple_strtoul (kstrtoul)
775 raw - This will display raw numbers. This option is best for
776 use with user applications that can translate the raw
777 numbers better than having it done in the kernel.
hex - Similar to raw, but the numbers will be in a hexadecimal format.
782 bin - This will print out the formats in raw binary.
784 block - When set, reading trace_pipe will not block when polled.
786 stacktrace - This is one of the options that changes the trace
787 itself. When a trace is recorded, so is the stack
of functions. This allows for back traces of trace sites.
791 trace_printk - Can disable trace_printk() from writing into the buffer.
793 branch - Enable branch tracing with the tracer.
795 annotate - It is sometimes confusing when the CPU buffers are full
796 and one CPU buffer had a lot of events recently, thus
a shorter time frame, whereas another CPU may have only had
798 a few events, which lets it have older events. When
799 the trace is reported, it shows the oldest events first,
800 and it may look like only one CPU ran (the one with the
801 oldest events). When the annotate option is set, it will
802 display when a new CPU buffer started:
804 <idle>-0 [001] dNs4 21169.031481: wake_up_idle_cpu <-add_timer_on
805 <idle>-0 [001] dNs4 21169.031482: _raw_spin_unlock_irqrestore <-add_timer_on
806 <idle>-0 [001] .Ns4 21169.031484: sub_preempt_count <-_raw_spin_unlock_irqrestore
807 ##### CPU 2 buffer started ####
808 <idle>-0 [002] .N.1 21169.031484: rcu_idle_exit <-cpu_idle
809 <idle>-0 [001] .Ns3 21169.031484: _raw_spin_unlock <-clocksource_watchdog
810 <idle>-0 [001] .Ns3 21169.031485: sub_preempt_count <-_raw_spin_unlock
812 userstacktrace - This option changes the trace. It records a
813 stacktrace of the current userspace thread.
sym-userobj - when user stacktraces are enabled, look up which
object the address belongs to, and print a
relative address. This is especially useful when
ASLR is on; otherwise you don't get a chance to
resolve the address to object/file/line after
the app is no longer running.
822 The lookup is performed when you read
trace or trace_pipe. Example:
825 a.out-1623 [000] 40874.465068: /root/a.out[+0x480] <-/root/a.out[+0
826 x494] <- /root/a.out[+0x4a8] <- /lib/libc-2.7.so[+0x1e1a6]
829 printk-msg-only - When set, trace_printk()s will only show the format
830 and not their parameters (if trace_bprintk() or
831 trace_bputs() was used to save the trace_printk()).
context-info - Show the comm, PID, timestamp, CPU, and other
context data for each event. When disabled, only the
event data itself is shown.
836 latency-format - This option changes the trace. When
837 it is enabled, the trace displays
838 additional information about the
latencies, as described in "Latency trace format".
sleep-time - When running the function graph tracer, include
the time a task schedules out in its function.
When enabled, the time a task has been scheduled
out is accounted as part of the function call.
847 graph-time - When running function graph tracer, to include the
848 time to call nested functions. When this is not set,
849 the time reported for the function will only include
850 the time the function itself executed for, not the time
851 for functions that it called.
853 record-cmd - When any event or tracer is enabled, a hook is enabled
854 in the sched_switch trace point to fill comm cache
855 with mapped pids and comms. But this may cause some
856 overhead, and if you only care about pids, and not the
name of the task, disabling this option can lower the overhead.
860 overwrite - This controls what happens when the trace buffer is
861 full. If "1" (default), the oldest events are
862 discarded and overwritten. If "0", then the newest
863 events are discarded.
864 (see per_cpu/cpu0/stats for overrun and dropped)
866 disable_on_free - When the free_buffer is closed, tracing will
867 stop (tracing_on set to 0).
869 irq-info - Shows the interrupt, preempt count, need resched data.
870 When disabled, the trace looks like:
874 # entries-in-buffer/entries-written: 144405/9452052 #P:4
876 # TASK-PID CPU# TIMESTAMP FUNCTION
878 <idle>-0 [002] 23636.756054: ttwu_do_activate.constprop.89 <-try_to_wake_up
879 <idle>-0 [002] 23636.756054: activate_task <-ttwu_do_activate.constprop.89
880 <idle>-0 [002] 23636.756055: enqueue_task <-activate_task
883 markers - When set, the trace_marker is writable (only by root).
When disabled, the trace_marker will error with EINVAL on write.
888 function-trace - The latency tracers will enable function tracing
889 if this option is enabled (default it is). When
890 it is disabled, the latency tracers do not trace
891 functions. This keeps the overhead of the tracer down
892 when performing latency tests.
894 Note: Some tracers have their own options. They only appear
895 when the tracer is active.
902 When interrupts are disabled, the CPU can not react to any other
903 external event (besides NMIs and SMIs). This prevents the timer
904 interrupt from triggering or the mouse interrupt from letting
the kernel know of a new mouse event. The result is a delayed
reaction time.
908 The irqsoff tracer tracks the time for which interrupts are
909 disabled. When a new maximum latency is hit, the tracer saves
910 the trace leading up to that latency point so that every time a
new maximum is reached, the old saved trace is discarded and the
new trace is saved.
To reset the maximum, echo 0 into tracing_max_latency. Here is an example:
917 # echo 0 > options/function-trace
918 # echo irqsoff > current_tracer
919 # echo 1 > tracing_on
920 # echo 0 > tracing_max_latency
923 # echo 0 > tracing_on
927 # irqsoff latency trace v1.1.5 on 3.8.0-test+
928 # --------------------------------------------------------------------
929 # latency: 16 us, #4/4, CPU#0 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:4)
931 # | task: swapper/0-0 (uid:0 nice:0 policy:0 rt_prio:0)
933 # => started at: run_timer_softirq
934 # => ended at: run_timer_softirq
938 # / _-----=> irqs-off
939 # | / _----=> need-resched
940 # || / _---=> hardirq/softirq
941 # ||| / _--=> preempt-depth
943 # cmd pid ||||| time | caller
945 <idle>-0 0d.s2 0us+: _raw_spin_lock_irq <-run_timer_softirq
946 <idle>-0 0dNs3 17us : _raw_spin_unlock_irq <-run_timer_softirq
947 <idle>-0 0dNs3 17us+: trace_hardirqs_on <-run_timer_softirq
948 <idle>-0 0dNs3 25us : <stack trace>
949 => _raw_spin_unlock_irq
955 => smp_apic_timer_interrupt
956 => apic_timer_interrupt
961 => x86_64_start_reservations
962 => x86_64_start_kernel
Here we see that we had a latency of 16 microseconds (which is
965 very good). The _raw_spin_lock_irq in run_timer_softirq disabled
966 interrupts. The difference between the 16 and the displayed
967 timestamp 25us occurred because the clock was incremented
968 between the time of recording the max latency and the time of
969 recording the function that had that latency.
971 Note the above example had function-trace not set. If we set
972 function-trace, we get a much larger output:
974 with echo 1 > options/function-trace
978 # irqsoff latency trace v1.1.5 on 3.8.0-test+
979 # --------------------------------------------------------------------
980 # latency: 71 us, #168/168, CPU#3 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:4)
982 # | task: bash-2042 (uid:0 nice:0 policy:0 rt_prio:0)
984 # => started at: ata_scsi_queuecmd
985 # => ended at: ata_scsi_queuecmd
989 # / _-----=> irqs-off
990 # | / _----=> need-resched
991 # || / _---=> hardirq/softirq
992 # ||| / _--=> preempt-depth
994 # cmd pid ||||| time | caller
996 bash-2042 3d... 0us : _raw_spin_lock_irqsave <-ata_scsi_queuecmd
997 bash-2042 3d... 0us : add_preempt_count <-_raw_spin_lock_irqsave
998 bash-2042 3d..1 1us : ata_scsi_find_dev <-ata_scsi_queuecmd
999 bash-2042 3d..1 1us : __ata_scsi_find_dev <-ata_scsi_find_dev
1000 bash-2042 3d..1 2us : ata_find_dev.part.14 <-__ata_scsi_find_dev
1001 bash-2042 3d..1 2us : ata_qc_new_init <-__ata_scsi_queuecmd
1002 bash-2042 3d..1 3us : ata_sg_init <-__ata_scsi_queuecmd
1003 bash-2042 3d..1 4us : ata_scsi_rw_xlat <-__ata_scsi_queuecmd
1004 bash-2042 3d..1 4us : ata_build_rw_tf <-ata_scsi_rw_xlat
1006 bash-2042 3d..1 67us : delay_tsc <-__delay
1007 bash-2042 3d..1 67us : add_preempt_count <-delay_tsc
1008 bash-2042 3d..2 67us : sub_preempt_count <-delay_tsc
1009 bash-2042 3d..1 67us : add_preempt_count <-delay_tsc
1010 bash-2042 3d..2 68us : sub_preempt_count <-delay_tsc
1011 bash-2042 3d..1 68us+: ata_bmdma_start <-ata_bmdma_qc_issue
1012 bash-2042 3d..1 71us : _raw_spin_unlock_irqrestore <-ata_scsi_queuecmd
1013 bash-2042 3d..1 71us : _raw_spin_unlock_irqrestore <-ata_scsi_queuecmd
1014 bash-2042 3d..1 72us+: trace_hardirqs_on <-ata_scsi_queuecmd
1015 bash-2042 3d..1 120us : <stack trace>
1016 => _raw_spin_unlock_irqrestore
1017 => ata_scsi_queuecmd
1018 => scsi_dispatch_cmd
1020 => __blk_run_queue_uncond
1023 => generic_make_request
1026 => __ext3_get_inode_loc
1035 => user_path_at_empty
1040 => system_call_fastpath
1043 Here we traced a 71 microsecond latency. But we also see all the
1044 functions that were called during that time. Note that by
1045 enabling function tracing, we incur an added overhead. This
1046 overhead may extend the latency times. But nevertheless, this
1047 trace has provided some very helpful debugging information.
1053 When preemption is disabled, we may be able to receive
1054 interrupts but the task cannot be preempted and a higher
1055 priority task must wait for preemption to be enabled again
1056 before it can preempt a lower priority task.
1058 The preemptoff tracer traces the places that disable preemption.
1059 Like the irqsoff tracer, it records the maximum latency for
1060 which preemption was disabled. The control of preemptoff tracer
1061 is much like the irqsoff tracer.
1063 # echo 0 > options/function-trace
1064 # echo preemptoff > current_tracer
1065 # echo 1 > tracing_on
1066 # echo 0 > tracing_max_latency
1069 # echo 0 > tracing_on
1071 # tracer: preemptoff
1073 # preemptoff latency trace v1.1.5 on 3.8.0-test+
1074 # --------------------------------------------------------------------
1075 # latency: 46 us, #4/4, CPU#1 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:4)
1077 # | task: sshd-1991 (uid:0 nice:0 policy:0 rt_prio:0)
1079 # => started at: do_IRQ
1080 # => ended at: do_IRQ
1084 # / _-----=> irqs-off
1085 # | / _----=> need-resched
1086 # || / _---=> hardirq/softirq
1087 # ||| / _--=> preempt-depth
1089 # cmd pid ||||| time | caller
1091 sshd-1991 1d.h. 0us+: irq_enter <-do_IRQ
1092 sshd-1991 1d..1 46us : irq_exit <-do_IRQ
1093 sshd-1991 1d..1 47us+: trace_preempt_on <-do_IRQ
1094 sshd-1991 1d..1 52us : <stack trace>
1095 => sub_preempt_count
1101 This has some more changes. Preemption was disabled when an
1102 interrupt came in (notice the 'h'), and was enabled on exit.
1103 But we also see that interrupts have been disabled when entering
1104 the preempt off section and leaving it (the 'd'). We do not know if
interrupts were enabled in the meantime or shortly after this was over.
1108 # tracer: preemptoff
1110 # preemptoff latency trace v1.1.5 on 3.8.0-test+
1111 # --------------------------------------------------------------------
1112 # latency: 83 us, #241/241, CPU#1 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:4)
1114 # | task: bash-1994 (uid:0 nice:0 policy:0 rt_prio:0)
1116 # => started at: wake_up_new_task
1117 # => ended at: task_rq_unlock
1121 # / _-----=> irqs-off
1122 # | / _----=> need-resched
1123 # || / _---=> hardirq/softirq
1124 # ||| / _--=> preempt-depth
1126 # cmd pid ||||| time | caller
1128 bash-1994 1d..1 0us : _raw_spin_lock_irqsave <-wake_up_new_task
1129 bash-1994 1d..1 0us : select_task_rq_fair <-select_task_rq
1130 bash-1994 1d..1 1us : __rcu_read_lock <-select_task_rq_fair
1131 bash-1994 1d..1 1us : source_load <-select_task_rq_fair
1132 bash-1994 1d..1 1us : source_load <-select_task_rq_fair
1134 bash-1994 1d..1 12us : irq_enter <-smp_apic_timer_interrupt
1135 bash-1994 1d..1 12us : rcu_irq_enter <-irq_enter
1136 bash-1994 1d..1 13us : add_preempt_count <-irq_enter
1137 bash-1994 1d.h1 13us : exit_idle <-smp_apic_timer_interrupt
1138 bash-1994 1d.h1 13us : hrtimer_interrupt <-smp_apic_timer_interrupt
1139 bash-1994 1d.h1 13us : _raw_spin_lock <-hrtimer_interrupt
1140 bash-1994 1d.h1 14us : add_preempt_count <-_raw_spin_lock
1141 bash-1994 1d.h2 14us : ktime_get_update_offsets <-hrtimer_interrupt
1143 bash-1994 1d.h1 35us : lapic_next_event <-clockevents_program_event
1144 bash-1994 1d.h1 35us : irq_exit <-smp_apic_timer_interrupt
1145 bash-1994 1d.h1 36us : sub_preempt_count <-irq_exit
1146 bash-1994 1d..2 36us : do_softirq <-irq_exit
1147 bash-1994 1d..2 36us : __do_softirq <-call_softirq
1148 bash-1994 1d..2 36us : __local_bh_disable <-__do_softirq
1149 bash-1994 1d.s2 37us : add_preempt_count <-_raw_spin_lock_irq
1150 bash-1994 1d.s3 38us : _raw_spin_unlock <-run_timer_softirq
1151 bash-1994 1d.s3 39us : sub_preempt_count <-_raw_spin_unlock
1152 bash-1994 1d.s2 39us : call_timer_fn <-run_timer_softirq
1154 bash-1994 1dNs2 81us : cpu_needs_another_gp <-rcu_process_callbacks
1155 bash-1994 1dNs2 82us : __local_bh_enable <-__do_softirq
1156 bash-1994 1dNs2 82us : sub_preempt_count <-__local_bh_enable
1157 bash-1994 1dN.2 82us : idle_cpu <-irq_exit
1158 bash-1994 1dN.2 83us : rcu_irq_exit <-irq_exit
1159 bash-1994 1dN.2 83us : sub_preempt_count <-irq_exit
1160 bash-1994 1.N.1 84us : _raw_spin_unlock_irqrestore <-task_rq_unlock
1161 bash-1994 1.N.1 84us+: trace_preempt_on <-task_rq_unlock
1162 bash-1994 1.N.1 104us : <stack trace>
1163 => sub_preempt_count
1164 => _raw_spin_unlock_irqrestore
1172 The above is an example of the preemptoff trace with
1173 function-trace set. Here we see that interrupts were not disabled
1174 the entire time. The irq_enter code lets us know that we entered
1175 an interrupt 'h'. Before that, the functions being traced still
1176 show that it is not in an interrupt, but we can see from the
1177 functions themselves that this is not the case.
1182 Knowing the locations that have interrupts disabled or
1183 preemption disabled for the longest times is helpful. But
1184 sometimes we would like to know when either preemption and/or
1185 interrupts are disabled.
Consider the following code:

    local_irq_disable();
    call_function_with_irqs_off();
    preempt_disable();
    call_function_with_irqs_and_preemption_off();
    local_irq_enable();
    call_function_with_preemption_off();
    preempt_enable();
1197 The irqsoff tracer will record the total length of
1198 call_function_with_irqs_off() and
1199 call_function_with_irqs_and_preemption_off().
1201 The preemptoff tracer will record the total length of
1202 call_function_with_irqs_and_preemption_off() and
1203 call_function_with_preemption_off().
1205 But neither will trace the time that interrupts and/or
1206 preemption is disabled. This total time is the time that we can
not schedule. To record this time, use the preemptirqsoff tracer.
Again, using this tracer is much like the irqsoff and preemptoff tracers.
1213 # echo 0 > options/function-trace
1214 # echo preemptirqsoff > current_tracer
1215 # echo 1 > tracing_on
1216 # echo 0 > tracing_max_latency
1219 # echo 0 > tracing_on
1221 # tracer: preemptirqsoff
1223 # preemptirqsoff latency trace v1.1.5 on 3.8.0-test+
1224 # --------------------------------------------------------------------
1225 # latency: 100 us, #4/4, CPU#3 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:4)
1227 # | task: ls-2230 (uid:0 nice:0 policy:0 rt_prio:0)
1229 # => started at: ata_scsi_queuecmd
1230 # => ended at: ata_scsi_queuecmd
1234 # / _-----=> irqs-off
1235 # | / _----=> need-resched
1236 # || / _---=> hardirq/softirq
1237 # ||| / _--=> preempt-depth
1239 # cmd pid ||||| time | caller
1241 ls-2230 3d... 0us+: _raw_spin_lock_irqsave <-ata_scsi_queuecmd
1242 ls-2230 3...1 100us : _raw_spin_unlock_irqrestore <-ata_scsi_queuecmd
1243 ls-2230 3...1 101us+: trace_preempt_on <-ata_scsi_queuecmd
1244 ls-2230 3...1 111us : <stack trace>
1245 => sub_preempt_count
1246 => _raw_spin_unlock_irqrestore
1247 => ata_scsi_queuecmd
1248 => scsi_dispatch_cmd
1250 => __blk_run_queue_uncond
1253 => generic_make_request
1258 => htree_dirblock_to_tree
1259 => ext3_htree_fill_tree
1263 => system_call_fastpath
1266 The trace_hardirqs_off_thunk is called from assembly on x86 when
1267 interrupts are disabled in the assembly code. Without the
1268 function tracing, we do not know if interrupts were enabled
within the preemption points. We do see that it started with
preemption enabled.
1272 Here is a trace with function-trace set:
1274 # tracer: preemptirqsoff
1276 # preemptirqsoff latency trace v1.1.5 on 3.8.0-test+
1277 # --------------------------------------------------------------------
1278 # latency: 161 us, #339/339, CPU#3 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:4)
1280 # | task: ls-2269 (uid:0 nice:0 policy:0 rt_prio:0)
1282 # => started at: schedule
1283 # => ended at: mutex_unlock
1287 # / _-----=> irqs-off
1288 # | / _----=> need-resched
1289 # || / _---=> hardirq/softirq
1290 # ||| / _--=> preempt-depth
1292 # cmd pid ||||| time | caller
1294 kworker/-59 3...1 0us : __schedule <-schedule
1295 kworker/-59 3d..1 0us : rcu_preempt_qs <-rcu_note_context_switch
1296 kworker/-59 3d..1 1us : add_preempt_count <-_raw_spin_lock_irq
1297 kworker/-59 3d..2 1us : deactivate_task <-__schedule
1298 kworker/-59 3d..2 1us : dequeue_task <-deactivate_task
1299 kworker/-59 3d..2 2us : update_rq_clock <-dequeue_task
1300 kworker/-59 3d..2 2us : dequeue_task_fair <-dequeue_task
1301 kworker/-59 3d..2 2us : update_curr <-dequeue_task_fair
1302 kworker/-59 3d..2 2us : update_min_vruntime <-update_curr
1303 kworker/-59 3d..2 3us : cpuacct_charge <-update_curr
1304 kworker/-59 3d..2 3us : __rcu_read_lock <-cpuacct_charge
1305 kworker/-59 3d..2 3us : __rcu_read_unlock <-cpuacct_charge
1306 kworker/-59 3d..2 3us : update_cfs_rq_blocked_load <-dequeue_task_fair
1307 kworker/-59 3d..2 4us : clear_buddies <-dequeue_task_fair
1308 kworker/-59 3d..2 4us : account_entity_dequeue <-dequeue_task_fair
1309 kworker/-59 3d..2 4us : update_min_vruntime <-dequeue_task_fair
1310 kworker/-59 3d..2 4us : update_cfs_shares <-dequeue_task_fair
1311 kworker/-59 3d..2 5us : hrtick_update <-dequeue_task_fair
1312 kworker/-59 3d..2 5us : wq_worker_sleeping <-__schedule
1313 kworker/-59 3d..2 5us : kthread_data <-wq_worker_sleeping
1314 kworker/-59 3d..2 5us : put_prev_task_fair <-__schedule
1315 kworker/-59 3d..2 6us : pick_next_task_fair <-pick_next_task
1316 kworker/-59 3d..2 6us : clear_buddies <-pick_next_task_fair
1317 kworker/-59 3d..2 6us : set_next_entity <-pick_next_task_fair
1318 kworker/-59 3d..2 6us : update_stats_wait_end <-set_next_entity
1319 ls-2269 3d..2 7us : finish_task_switch <-__schedule
1320 ls-2269 3d..2 7us : _raw_spin_unlock_irq <-finish_task_switch
1321 ls-2269 3d..2 8us : do_IRQ <-ret_from_intr
1322 ls-2269 3d..2 8us : irq_enter <-do_IRQ
1323 ls-2269 3d..2 8us : rcu_irq_enter <-irq_enter
1324 ls-2269 3d..2 9us : add_preempt_count <-irq_enter
1325 ls-2269 3d.h2 9us : exit_idle <-do_IRQ
1327 ls-2269 3d.h3 20us : sub_preempt_count <-_raw_spin_unlock
1328 ls-2269 3d.h2 20us : irq_exit <-do_IRQ
1329 ls-2269 3d.h2 21us : sub_preempt_count <-irq_exit
1330 ls-2269 3d..3 21us : do_softirq <-irq_exit
1331 ls-2269 3d..3 21us : __do_softirq <-call_softirq
1332 ls-2269 3d..3 21us+: __local_bh_disable <-__do_softirq
1333 ls-2269 3d.s4 29us : sub_preempt_count <-_local_bh_enable_ip
1334 ls-2269 3d.s5 29us : sub_preempt_count <-_local_bh_enable_ip
1335 ls-2269 3d.s5 31us : do_IRQ <-ret_from_intr
1336 ls-2269 3d.s5 31us : irq_enter <-do_IRQ
1337 ls-2269 3d.s5 31us : rcu_irq_enter <-irq_enter
1339 ls-2269 3d.s5 31us : rcu_irq_enter <-irq_enter
1340 ls-2269 3d.s5 32us : add_preempt_count <-irq_enter
1341 ls-2269 3d.H5 32us : exit_idle <-do_IRQ
1342 ls-2269 3d.H5 32us : handle_irq <-do_IRQ
1343 ls-2269 3d.H5 32us : irq_to_desc <-handle_irq
1344 ls-2269 3d.H5 33us : handle_fasteoi_irq <-handle_irq
1346 ls-2269 3d.s5 158us : _raw_spin_unlock_irqrestore <-rtl8139_poll
1347 ls-2269 3d.s3 158us : net_rps_action_and_irq_enable.isra.65 <-net_rx_action
1348 ls-2269 3d.s3 159us : __local_bh_enable <-__do_softirq
1349 ls-2269 3d.s3 159us : sub_preempt_count <-__local_bh_enable
1350 ls-2269 3d..3 159us : idle_cpu <-irq_exit
1351 ls-2269 3d..3 159us : rcu_irq_exit <-irq_exit
1352 ls-2269 3d..3 160us : sub_preempt_count <-irq_exit
1353 ls-2269 3d... 161us : __mutex_unlock_slowpath <-mutex_unlock
1354 ls-2269 3d... 162us+: trace_hardirqs_on <-mutex_unlock
1355 ls-2269 3d... 186us : <stack trace>
1356 => __mutex_unlock_slowpath
1363 => system_call_fastpath
1365 This is an interesting trace. It started with kworker running and
1366 scheduling out and ls taking over. But as soon as ls released the
1367 rq lock and enabled interrupts (but not preemption) an interrupt
1368 triggered. When the interrupt finished, it started running softirqs.
1369 But while the softirq was running, another interrupt triggered.
1370 When an interrupt is running inside a softirq, the annotation is 'H'.
1376 One common case that people are interested in tracing is the
1377 time it takes for a task that is woken to actually wake up.
Now for non Real-Time tasks, this can be arbitrary. But tracing
it nonetheless can be interesting.
1381 Without function tracing:
1383 # echo 0 > options/function-trace
1384 # echo wakeup > current_tracer
1385 # echo 1 > tracing_on
1386 # echo 0 > tracing_max_latency
1388 # echo 0 > tracing_on
1392 # wakeup latency trace v1.1.5 on 3.8.0-test+
1393 # --------------------------------------------------------------------
1394 # latency: 15 us, #4/4, CPU#3 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:4)
1396 # | task: kworker/3:1H-312 (uid:0 nice:-20 policy:0 rt_prio:0)
1400 # / _-----=> irqs-off
1401 # | / _----=> need-resched
1402 # || / _---=> hardirq/softirq
1403 # ||| / _--=> preempt-depth
1405 # cmd pid ||||| time | caller
1407 <idle>-0 3dNs7 0us : 0:120:R + [003] 312:100:R kworker/3:1H
1408 <idle>-0 3dNs7 1us+: ttwu_do_activate.constprop.87 <-try_to_wake_up
1409 <idle>-0 3d..3 15us : __schedule <-schedule
1410 <idle>-0 3d..3 15us : 0:120:R ==> [003] 312:100:R kworker/3:1H
1412 The tracer only traces the highest priority task in the system
1413 to avoid tracing the normal circumstances. Here we see that
1414 the kworker with a nice priority of -20 (not very nice), took
just 15 microseconds from the time it woke up, to the time it ran.
1418 Non Real-Time tasks are not that interesting. A more interesting
1419 trace is to concentrate only on Real-Time tasks.
1424 In a Real-Time environment it is very important to know the
1425 wakeup time it takes for the highest priority task that is woken
1426 up to the time that it executes. This is also known as "schedule
1427 latency". I stress the point that this is about RT tasks. It is
1428 also important to know the scheduling latency of non-RT tasks,
1429 but the average schedule latency is better for non-RT tasks.
Tools like LatencyTop are more appropriate for such measurements.
1433 Real-Time environments are interested in the worst case latency.
1434 That is the longest latency it takes for something to happen,
1435 and not the average. We can have a very fast scheduler that may
1436 only have a large latency once in a while, but that would not
1437 work well with Real-Time tasks. The wakeup_rt tracer was designed
1438 to record the worst case wakeups of RT tasks. Non-RT tasks are
1439 not recorded because the tracer only records one worst case and
1440 tracing non-RT tasks that are unpredictable will overwrite the
1441 worst case latency of RT tasks (just run the normal wakeup
1442 tracer for a while to see that effect).
1444 Since this tracer only deals with RT tasks, we will run this
1445 slightly differently than we did with the previous tracers.
1446 Instead of performing an 'ls', we will run 'sleep 1' under
1447 'chrt' which changes the priority of the task.
1449 # echo 0 > options/function-trace
1450 # echo wakeup_rt > current_tracer
1451 # echo 1 > tracing_on
1452 # echo 0 > tracing_max_latency
1454 # echo 0 > tracing_on
1460 # wakeup_rt latency trace v1.1.5 on 3.8.0-test+
1461 # --------------------------------------------------------------------
1462 # latency: 5 us, #4/4, CPU#3 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:4)
1464 # | task: sleep-2389 (uid:0 nice:0 policy:1 rt_prio:5)
1468 # / _-----=> irqs-off
1469 # | / _----=> need-resched
1470 # || / _---=> hardirq/softirq
1471 # ||| / _--=> preempt-depth
1473 # cmd pid ||||| time | caller
1475 <idle>-0 3d.h4 0us : 0:120:R + [003] 2389: 94:R sleep
1476 <idle>-0 3d.h4 1us+: ttwu_do_activate.constprop.87 <-try_to_wake_up
1477 <idle>-0 3d..3 5us : __schedule <-schedule
1478 <idle>-0 3d..3 5us : 0:120:R ==> [003] 2389: 94:R sleep
1481 Running this on an idle system, we see that it only took 5 microseconds
1482 to perform the task switch. Note, since the trace point in the schedule
1483 is before the actual "switch", we stop the tracing when the recorded task
1484 is about to schedule in. This may change if we add a new marker at the
1485 end of the scheduler.
1487 Notice that the recorded task is 'sleep' with the PID of 2389
1488 and it has an rt_prio of 5. This priority is user-space priority
1489 and not the internal kernel priority. The policy is 1 for
1490 SCHED_FIFO and 2 for SCHED_RR.
Note that the trace data shows the internal priority (99 - rtprio).
1494 <idle>-0 3d..3 5us : 0:120:R ==> [003] 2389: 94:R sleep
The 0:120:R means idle was running with a nice priority of 0 (kernel priority 120)
1497 and in the running state 'R'. The sleep task was scheduled in with
1498 2389: 94:R. That is the priority is the kernel rtprio (99 - 5 = 94)
1499 and it too is in the running state.
1501 Doing the same with chrt -r 5 and function-trace set.
1503 echo 1 > options/function-trace
1507 # wakeup_rt latency trace v1.1.5 on 3.8.0-test+
1508 # --------------------------------------------------------------------
1509 # latency: 29 us, #85/85, CPU#3 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:4)
1511 # | task: sleep-2448 (uid:0 nice:0 policy:1 rt_prio:5)
1515 # / _-----=> irqs-off
1516 # | / _----=> need-resched
1517 # || / _---=> hardirq/softirq
1518 # ||| / _--=> preempt-depth
1520 # cmd pid ||||| time | caller
1522 <idle>-0 3d.h4 1us+: 0:120:R + [003] 2448: 94:R sleep
1523 <idle>-0 3d.h4 2us : ttwu_do_activate.constprop.87 <-try_to_wake_up
1524 <idle>-0 3d.h3 3us : check_preempt_curr <-ttwu_do_wakeup
1525 <idle>-0 3d.h3 3us : resched_curr <-check_preempt_curr
1526 <idle>-0 3dNh3 4us : task_woken_rt <-ttwu_do_wakeup
1527 <idle>-0 3dNh3 4us : _raw_spin_unlock <-try_to_wake_up
1528 <idle>-0 3dNh3 4us : sub_preempt_count <-_raw_spin_unlock
1529 <idle>-0 3dNh2 5us : ttwu_stat <-try_to_wake_up
1530 <idle>-0 3dNh2 5us : _raw_spin_unlock_irqrestore <-try_to_wake_up
1531 <idle>-0 3dNh2 6us : sub_preempt_count <-_raw_spin_unlock_irqrestore
1532 <idle>-0 3dNh1 6us : _raw_spin_lock <-__run_hrtimer
1533 <idle>-0 3dNh1 6us : add_preempt_count <-_raw_spin_lock
1534 <idle>-0 3dNh2 7us : _raw_spin_unlock <-hrtimer_interrupt
1535 <idle>-0 3dNh2 7us : sub_preempt_count <-_raw_spin_unlock
1536 <idle>-0 3dNh1 7us : tick_program_event <-hrtimer_interrupt
1537 <idle>-0 3dNh1 7us : clockevents_program_event <-tick_program_event
1538 <idle>-0 3dNh1 8us : ktime_get <-clockevents_program_event
1539 <idle>-0 3dNh1 8us : lapic_next_event <-clockevents_program_event
1540 <idle>-0 3dNh1 8us : irq_exit <-smp_apic_timer_interrupt
1541 <idle>-0 3dNh1 9us : sub_preempt_count <-irq_exit
1542 <idle>-0 3dN.2 9us : idle_cpu <-irq_exit
1543 <idle>-0 3dN.2 9us : rcu_irq_exit <-irq_exit
1544 <idle>-0 3dN.2 10us : rcu_eqs_enter_common.isra.45 <-rcu_irq_exit
1545 <idle>-0 3dN.2 10us : sub_preempt_count <-irq_exit
1546 <idle>-0 3.N.1 11us : rcu_idle_exit <-cpu_idle
1547 <idle>-0 3dN.1 11us : rcu_eqs_exit_common.isra.43 <-rcu_idle_exit
1548 <idle>-0 3.N.1 11us : tick_nohz_idle_exit <-cpu_idle
1549 <idle>-0 3dN.1 12us : menu_hrtimer_cancel <-tick_nohz_idle_exit
1550 <idle>-0 3dN.1 12us : ktime_get <-tick_nohz_idle_exit
1551 <idle>-0 3dN.1 12us : tick_do_update_jiffies64 <-tick_nohz_idle_exit
1552 <idle>-0 3dN.1 13us : update_cpu_load_nohz <-tick_nohz_idle_exit
1553 <idle>-0 3dN.1 13us : _raw_spin_lock <-update_cpu_load_nohz
1554 <idle>-0 3dN.1 13us : add_preempt_count <-_raw_spin_lock
1555 <idle>-0 3dN.2 13us : __update_cpu_load <-update_cpu_load_nohz
1556 <idle>-0 3dN.2 14us : sched_avg_update <-__update_cpu_load
1557 <idle>-0 3dN.2 14us : _raw_spin_unlock <-update_cpu_load_nohz
1558 <idle>-0 3dN.2 14us : sub_preempt_count <-_raw_spin_unlock
1559 <idle>-0 3dN.1 15us : calc_load_exit_idle <-tick_nohz_idle_exit
1560 <idle>-0 3dN.1 15us : touch_softlockup_watchdog <-tick_nohz_idle_exit
1561 <idle>-0 3dN.1 15us : hrtimer_cancel <-tick_nohz_idle_exit
1562 <idle>-0 3dN.1 15us : hrtimer_try_to_cancel <-hrtimer_cancel
1563 <idle>-0 3dN.1 16us : lock_hrtimer_base.isra.18 <-hrtimer_try_to_cancel
1564 <idle>-0 3dN.1 16us : _raw_spin_lock_irqsave <-lock_hrtimer_base.isra.18
1565 <idle>-0 3dN.1 16us : add_preempt_count <-_raw_spin_lock_irqsave
1566 <idle>-0 3dN.2 17us : __remove_hrtimer <-remove_hrtimer.part.16
1567 <idle>-0 3dN.2 17us : hrtimer_force_reprogram <-__remove_hrtimer
1568 <idle>-0 3dN.2 17us : tick_program_event <-hrtimer_force_reprogram
1569 <idle>-0 3dN.2 18us : clockevents_program_event <-tick_program_event
1570 <idle>-0 3dN.2 18us : ktime_get <-clockevents_program_event
1571 <idle>-0 3dN.2 18us : lapic_next_event <-clockevents_program_event
1572 <idle>-0 3dN.2 19us : _raw_spin_unlock_irqrestore <-hrtimer_try_to_cancel
1573 <idle>-0 3dN.2 19us : sub_preempt_count <-_raw_spin_unlock_irqrestore
1574 <idle>-0 3dN.1 19us : hrtimer_forward <-tick_nohz_idle_exit
1575 <idle>-0 3dN.1 20us : ktime_add_safe <-hrtimer_forward
1576 <idle>-0 3dN.1 20us : ktime_add_safe <-hrtimer_forward
1577 <idle>-0 3dN.1 20us : hrtimer_start_range_ns <-hrtimer_start_expires.constprop.11
1578 <idle>-0 3dN.1 20us : __hrtimer_start_range_ns <-hrtimer_start_range_ns
1579 <idle>-0 3dN.1 21us : lock_hrtimer_base.isra.18 <-__hrtimer_start_range_ns
1580 <idle>-0 3dN.1 21us : _raw_spin_lock_irqsave <-lock_hrtimer_base.isra.18
1581 <idle>-0 3dN.1 21us : add_preempt_count <-_raw_spin_lock_irqsave
1582 <idle>-0 3dN.2 22us : ktime_add_safe <-__hrtimer_start_range_ns
1583 <idle>-0 3dN.2 22us : enqueue_hrtimer <-__hrtimer_start_range_ns
1584 <idle>-0 3dN.2 22us : tick_program_event <-__hrtimer_start_range_ns
1585 <idle>-0 3dN.2 23us : clockevents_program_event <-tick_program_event
1586 <idle>-0 3dN.2 23us : ktime_get <-clockevents_program_event
1587 <idle>-0 3dN.2 23us : lapic_next_event <-clockevents_program_event
1588 <idle>-0 3dN.2 24us : _raw_spin_unlock_irqrestore <-__hrtimer_start_range_ns
1589 <idle>-0 3dN.2 24us : sub_preempt_count <-_raw_spin_unlock_irqrestore
1590 <idle>-0 3dN.1 24us : account_idle_ticks <-tick_nohz_idle_exit
1591 <idle>-0 3dN.1 24us : account_idle_time <-account_idle_ticks
1592 <idle>-0 3.N.1 25us : sub_preempt_count <-cpu_idle
1593 <idle>-0 3.N.. 25us : schedule <-cpu_idle
1594 <idle>-0 3.N.. 25us : __schedule <-preempt_schedule
1595 <idle>-0 3.N.. 26us : add_preempt_count <-__schedule
1596 <idle>-0 3.N.1 26us : rcu_note_context_switch <-__schedule
1597 <idle>-0 3.N.1 26us : rcu_sched_qs <-rcu_note_context_switch
1598 <idle>-0 3dN.1 27us : rcu_preempt_qs <-rcu_note_context_switch
1599 <idle>-0 3.N.1 27us : _raw_spin_lock_irq <-__schedule
1600 <idle>-0 3dN.1 27us : add_preempt_count <-_raw_spin_lock_irq
1601 <idle>-0 3dN.2 28us : put_prev_task_idle <-__schedule
1602 <idle>-0 3dN.2 28us : pick_next_task_stop <-pick_next_task
1603 <idle>-0 3dN.2 28us : pick_next_task_rt <-pick_next_task
1604 <idle>-0 3dN.2 29us : dequeue_pushable_task <-pick_next_task_rt
1605 <idle>-0 3d..3 29us : __schedule <-preempt_schedule
1606 <idle>-0 3d..3 30us : 0:120:R ==> [003] 2448: 94:R sleep
1608 This isn't that big of a trace, even with function tracing enabled,
1609 so I included the entire trace.
The interrupt went off when the system was idle. Somewhere
before task_woken_rt() was called, the NEED_RESCHED flag was set;
this is indicated by the first occurrence of the 'N' flag.
1615 Latency tracing and events
1616 --------------------------
Function tracing can induce a much larger latency, but without
seeing what happens within the latency it is hard to know what
caused it. There is a middle ground, and that is to enable events.
1622 # echo 0 > options/function-trace
1623 # echo wakeup_rt > current_tracer
1624 # echo 1 > events/enable
1625 # echo 1 > tracing_on
1626 # echo 0 > tracing_max_latency
1628 # echo 0 > tracing_on
1632 # wakeup_rt latency trace v1.1.5 on 3.8.0-test+
1633 # --------------------------------------------------------------------
1634 # latency: 6 us, #12/12, CPU#2 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:4)
1636 # | task: sleep-5882 (uid:0 nice:0 policy:1 rt_prio:5)
1640 # / _-----=> irqs-off
1641 # | / _----=> need-resched
1642 # || / _---=> hardirq/softirq
1643 # ||| / _--=> preempt-depth
1645 # cmd pid ||||| time | caller
1647 <idle>-0 2d.h4 0us : 0:120:R + [002] 5882: 94:R sleep
1648 <idle>-0 2d.h4 0us : ttwu_do_activate.constprop.87 <-try_to_wake_up
1649 <idle>-0 2d.h4 1us : sched_wakeup: comm=sleep pid=5882 prio=94 success=1 target_cpu=002
1650 <idle>-0 2dNh2 1us : hrtimer_expire_exit: hrtimer=ffff88007796feb8
1651 <idle>-0 2.N.2 2us : power_end: cpu_id=2
1652 <idle>-0 2.N.2 3us : cpu_idle: state=4294967295 cpu_id=2
1653 <idle>-0 2dN.3 4us : hrtimer_cancel: hrtimer=ffff88007d50d5e0
1654 <idle>-0 2dN.3 4us : hrtimer_start: hrtimer=ffff88007d50d5e0 function=tick_sched_timer expires=34311211000000 softexpires=34311211000000
1655 <idle>-0 2.N.2 5us : rcu_utilization: Start context switch
1656 <idle>-0 2.N.2 5us : rcu_utilization: End context switch
1657 <idle>-0 2d..3 6us : __schedule <-schedule
1658 <idle>-0 2d..3 6us : 0:120:R ==> [002] 5882: 94:R sleep
1664 This tracer is the function tracer. Enabling the function tracer
1665 can be done from the debug file system. Make sure the
ftrace_enabled sysctl is set; otherwise this tracer is a nop.
1667 See the "ftrace_enabled" section below.
1669 # sysctl kernel.ftrace_enabled=1
1670 # echo function > current_tracer
1671 # echo 1 > tracing_on
1673 # echo 0 > tracing_on
1677 # entries-in-buffer/entries-written: 24799/24799 #P:4
1680 # / _----=> need-resched
1681 # | / _---=> hardirq/softirq
1682 # || / _--=> preempt-depth
1684 # TASK-PID CPU# |||| TIMESTAMP FUNCTION
1686 bash-1994 [002] .... 3082.063030: mutex_unlock <-rb_simple_write
1687 bash-1994 [002] .... 3082.063031: __mutex_unlock_slowpath <-mutex_unlock
1688 bash-1994 [002] .... 3082.063031: __fsnotify_parent <-fsnotify_modify
1689 bash-1994 [002] .... 3082.063032: fsnotify <-fsnotify_modify
1690 bash-1994 [002] .... 3082.063032: __srcu_read_lock <-fsnotify
1691 bash-1994 [002] .... 3082.063032: add_preempt_count <-__srcu_read_lock
1692 bash-1994 [002] ...1 3082.063032: sub_preempt_count <-__srcu_read_lock
1693 bash-1994 [002] .... 3082.063033: __srcu_read_unlock <-fsnotify
1697 Note: function tracer uses ring buffers to store the above
1698 entries. The newest data may overwrite the oldest data.
1699 Sometimes using echo to stop the trace is not sufficient because
1700 the tracing could have overwritten the data that you wanted to
1701 record. For this reason, it is sometimes better to disable
1702 tracing directly from a program. This allows you to stop the
1703 tracing at the point that you hit the part that you are
interested in. To disable the tracing directly from a C program,
something like the following code snippet can be used:
int trace_fd;
int main(int argc, char *argv[]) {
	trace_fd = open(tracing_file("tracing_on"), O_WRONLY);
	[...]
	if (condition_hit())
		write(trace_fd, "0", 1);
	[...]
}
1720 Single thread tracing
1721 ---------------------
1723 By writing into set_ftrace_pid you can trace a
1724 single thread. For example:
# cat set_ftrace_pid
no pid
# echo 3111 > set_ftrace_pid
# cat set_ftrace_pid
3111
# echo function > current_tracer
# cat trace | head
1735 # TASK-PID CPU# TIMESTAMP FUNCTION
1737 yum-updatesd-3111 [003] 1637.254676: finish_task_switch <-thread_return
1738 yum-updatesd-3111 [003] 1637.254681: hrtimer_cancel <-schedule_hrtimeout_range
1739 yum-updatesd-3111 [003] 1637.254682: hrtimer_try_to_cancel <-hrtimer_cancel
1740 yum-updatesd-3111 [003] 1637.254683: lock_hrtimer_base <-hrtimer_try_to_cancel
1741 yum-updatesd-3111 [003] 1637.254685: fget_light <-do_sys_poll
1742 yum-updatesd-3111 [003] 1637.254686: pipe_poll <-do_sys_poll
# echo > set_ftrace_pid
# cat trace | head
1747 # TASK-PID CPU# TIMESTAMP FUNCTION
1749 ##### CPU 3 buffer started ####
1750 yum-updatesd-3111 [003] 1701.957688: free_poll_entry <-poll_freewait
1751 yum-updatesd-3111 [003] 1701.957689: remove_wait_queue <-free_poll_entry
1752 yum-updatesd-3111 [003] 1701.957691: fput <-free_poll_entry
1753 yum-updatesd-3111 [003] 1701.957692: audit_syscall_exit <-sysret_audit
1754 yum-updatesd-3111 [003] 1701.957693: path_put <-audit_syscall_exit
If you want to trace a program from the moment it starts
executing, you could use something like this simple program:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/types.h>
#include <sys/stat.h>

#define _STR(x) #x
#define STR(x) _STR(x)
#define MAX_PATH 256

const char *find_debugfs(void)
{
	static char debugfs[MAX_PATH+1];
	static int debugfs_found;
	char type[100];
	FILE *fp;

	if (debugfs_found)
		return debugfs;

	if ((fp = fopen("/proc/mounts","r")) == NULL) {
		perror("/proc/mounts");
		return NULL;
	}

	while (fscanf(fp, "%*s %"
		      STR(MAX_PATH)
		      "s %99s %*s %*d %*d\n",
		      debugfs, type) == 2) {
		if (strcmp(type, "debugfs") == 0)
			break;
	}
	fclose(fp);

	if (strcmp(type, "debugfs") != 0) {
		fprintf(stderr, "debugfs not mounted");
		return NULL;
	}

	strcat(debugfs, "/tracing/");
	debugfs_found = 1;

	return debugfs;
}

const char *tracing_file(const char *file_name)
{
	static char trace_file[MAX_PATH+1];
	snprintf(trace_file, MAX_PATH, "%s/%s", find_debugfs(), file_name);
	return trace_file;
}

int main (int argc, char **argv)
{
	char line[64];
	int fd, ffd, s;

	if (argc < 2)
		exit(-1);

	/* reset the tracer before setting the pid filter */
	ffd = open(tracing_file("current_tracer"), O_WRONLY);
	if (ffd < 0)
		exit(-1);
	write(ffd, "nop", 3);

	/* limit tracing to this process, then start the function tracer */
	fd = open(tracing_file("set_ftrace_pid"), O_WRONLY);
	s = sprintf(line, "%d\n", getpid());
	write(fd, line, s);

	write(ffd, "function", 8);

	close(fd);
	close(ffd);

	/* replace this process with the command to be traced */
	execvp(argv[1], argv+1);

	return 0;
}
1843 Or this simple script!
#!/bin/bash

debugfs=`sed -ne 's/^debugfs \(.*\) debugfs.*/\1/p' /proc/mounts`
echo nop > $debugfs/tracing/current_tracer
echo 0 > $debugfs/tracing/tracing_on
echo $$ > $debugfs/tracing/set_ftrace_pid
echo function > $debugfs/tracing/current_tracer
echo 1 > $debugfs/tracing/tracing_on
exec "$@"
1858 function graph tracer
1859 ---------------------------
This tracer is similar to the function tracer except that it
probes a function on its entry and its exit. This is done by
using a dynamically allocated stack of return addresses in each
task_struct. On function entry the tracer overwrites the return
address of each function traced to set a custom probe. Thus the
original return address is stored on the stack of return
addresses in the task_struct.
Probing on both ends of a function leads to special features
such as:

- measuring a function's execution time
- having a reliable call stack from which to draw a graph of
  function calls
This tracer is useful in several situations:

- you want to find the reason for a strange kernel behavior and
  need to see what happens in detail in any area (or in specific
  ones).

- you are experiencing weird latencies but it's difficult to
  find their origin.

- you want to find quickly which path is taken by a specific
  function.

- you just want to peek inside a working kernel and want to see
  what happens there.
1890 # tracer: function_graph
1892 # CPU DURATION FUNCTION CALLS
1896 0) | do_sys_open() {
1898 0) | kmem_cache_alloc() {
1899 0) 1.382 us | __might_sleep();
1901 0) | strncpy_from_user() {
1902 0) | might_fault() {
1903 0) 1.389 us | __might_sleep();
1908 0) 0.668 us | _spin_lock();
1909 0) 0.570 us | expand_files();
1910 0) 0.586 us | _spin_unlock();
1913 There are several columns that can be dynamically
1914 enabled/disabled. You can use every combination of options you
1915 want, depending on your needs.
- The cpu number on which the function executed is default
  enabled. It is sometimes better to only trace one cpu (see
  the tracing_cpumask file) or you might sometimes see unordered
  function calls when tracing switches between cpus.
1922 hide: echo nofuncgraph-cpu > trace_options
1923 show: echo funcgraph-cpu > trace_options
- The duration (function's time of execution) is displayed on
  the closing bracket line of a function or on the same line
  as the current function in the case of a leaf function. It is
  default enabled.
1930 hide: echo nofuncgraph-duration > trace_options
1931 show: echo funcgraph-duration > trace_options
- The overhead field precedes the duration field when the
  duration exceeds certain thresholds.
1936 hide: echo nofuncgraph-overhead > trace_options
1937 show: echo funcgraph-overhead > trace_options
1938 depends on: funcgraph-duration
1943 0) 0.646 us | _spin_lock_irqsave();
1944 0) 0.684 us | _spin_unlock_irqrestore();
1946 0) 0.548 us | fput();
1952 0) | kmem_cache_free() {
1953 0) 0.518 us | __phys_addr();
1959 + means that the function exceeded 10 usecs.
1960 ! means that the function exceeded 100 usecs.
1961 # means that the function exceeded 1000 usecs.
1962 $ means that the function exceeded 1 sec.
1965 - The task/pid field displays the thread cmdline and pid which
1966 executed the function. It is default disabled.
1968 hide: echo nofuncgraph-proc > trace_options
1969 show: echo funcgraph-proc > trace_options
1973 # tracer: function_graph
1975 # CPU TASK/PID DURATION FUNCTION CALLS
1977 0) sh-4802 | | d_free() {
1978 0) sh-4802 | | call_rcu() {
1979 0) sh-4802 | | __call_rcu() {
1980 0) sh-4802 | 0.616 us | rcu_process_gp_end();
1981 0) sh-4802 | 0.586 us | check_for_new_grace_period();
1982 0) sh-4802 | 2.899 us | }
1983 0) sh-4802 | 4.040 us | }
1984 0) sh-4802 | 5.151 us | }
1985 0) sh-4802 | + 49.370 us | }
- The absolute time field is an absolute timestamp given by the
  system clock since it started. A snapshot of this time is
  given on each entry/exit of functions.
1992 hide: echo nofuncgraph-abstime > trace_options
1993 show: echo funcgraph-abstime > trace_options
1998 # TIME CPU DURATION FUNCTION CALLS
2000 360.774522 | 1) 0.541 us | }
2001 360.774522 | 1) 4.663 us | }
2002 360.774523 | 1) 0.541 us | __wake_up_bit();
2003 360.774524 | 1) 6.796 us | }
2004 360.774524 | 1) 7.952 us | }
2005 360.774525 | 1) 9.063 us | }
2006 360.774525 | 1) 0.615 us | journal_mark_dirty();
2007 360.774527 | 1) 0.578 us | __brelse();
2008 360.774528 | 1) | reiserfs_prepare_for_journal() {
2009 360.774528 | 1) | unlock_buffer() {
2010 360.774529 | 1) | wake_up_bit() {
2011 360.774529 | 1) | bit_waitqueue() {
2012 360.774530 | 1) 0.594 us | __phys_addr();
The function name is always displayed after the closing bracket
for a function if the start of that function is not in the
trace buffer.
2019 Display of the function name after the closing bracket may be
2020 enabled for functions whose start is in the trace buffer,
2021 allowing easier searching with grep for function durations.
2022 It is default disabled.
2024 hide: echo nofuncgraph-tail > trace_options
2025 show: echo funcgraph-tail > trace_options
2027 Example with nofuncgraph-tail (default):
2029 0) | kmem_cache_free() {
2030 0) 0.518 us | __phys_addr();
2034 Example with funcgraph-tail:
2036 0) | kmem_cache_free() {
2037 0) 0.518 us | __phys_addr();
2038 0) 1.757 us | } /* kmem_cache_free() */
2039 0) 2.861 us | } /* putname() */
You can put some comments on specific functions by using
trace_printk(). For example, if you want to put a comment inside
the __might_sleep() function, you just have to include
<linux/ftrace.h> and call trace_printk() inside __might_sleep():

trace_printk("I'm a comment!\n")

will produce:
2050 1) | __might_sleep() {
2051 1) | /* I'm a comment! */
You might find other useful features for this tracer in the
following "dynamic ftrace" section such as tracing only specific
functions or tasks.
dynamic ftrace
--------------

If CONFIG_DYNAMIC_FTRACE is set, the system will run with
virtually no overhead when function tracing is disabled. The way
this works is the mcount function call (placed at the start of
every kernel function, produced by the -pg switch in gcc),
starts off pointing to a simple return. (Enabling FTRACE will
include the -pg switch in the compiling of the kernel.)
2069 At compile time every C file object is run through the
2070 recordmcount program (located in the scripts directory). This
2071 program will parse the ELF headers in the C object to find all
2072 the locations in the .text section that call mcount. (Note, only
2073 white listed .text sections are processed, since processing other
2074 sections like .init.text may cause races due to those sections
2075 being freed unexpectedly).
2077 A new section called "__mcount_loc" is created that holds
2078 references to all the mcount call sites in the .text section.
2079 The recordmcount program re-links this section back into the
2080 original object. The final linking stage of the kernel will add all these
2081 references into a single table.
2083 On boot up, before SMP is initialized, the dynamic ftrace code
2084 scans this table and updates all the locations into nops. It
2085 also records the locations, which are added to the
2086 available_filter_functions list. Modules are processed as they
2087 are loaded and before they are executed. When a module is
2088 unloaded, it also removes its functions from the ftrace function
2089 list. This is automatic in the module unload code, and the
2090 module author does not need to worry about it.
2092 When tracing is enabled, the process of modifying the function
2093 tracepoints is dependent on architecture. The old method is to use
2094 kstop_machine to prevent races with the CPUs executing code being
2095 modified (which can cause the CPU to do undesirable things, especially
2096 if the modified code crosses cache (or page) boundaries), and the nops are
2097 patched back to calls. But this time, they do not call mcount
(which is just a function stub). They now call into the ftrace
infrastructure.
The new method of modifying the function tracepoints is to place
a breakpoint at the location to be modified, sync all CPUs, and
modify the rest of the instruction not covered by the breakpoint.
The CPUs are synced again, and then the breakpoint is removed by
writing in the finished version of the ftrace call site.
2107 Some archs do not even need to monkey around with the synchronization,
2108 and can just slap the new code on top of the old without any
2109 problems with other CPUs executing it at the same time.
One special side-effect of recording the functions being traced
is that we can now selectively choose which functions we wish to
trace and which ones we want the mcount calls to remain as nops.
Two files are used, one for enabling and one for disabling the
tracing of specified functions. They are:

  set_ftrace_filter

and

  set_ftrace_notrace

A list of available functions that you can add to these files is
listed in:

  available_filter_functions
2130 # cat available_filter_functions
2139 If I am only interested in sys_nanosleep and hrtimer_interrupt:
2141 # echo sys_nanosleep hrtimer_interrupt > set_ftrace_filter
2142 # echo function > current_tracer
2143 # echo 1 > tracing_on
2145 # echo 0 > tracing_on
2149 # entries-in-buffer/entries-written: 5/5 #P:4
2152 # / _----=> need-resched
2153 # | / _---=> hardirq/softirq
2154 # || / _--=> preempt-depth
2156 # TASK-PID CPU# |||| TIMESTAMP FUNCTION
2158 usleep-2665 [001] .... 4186.475355: sys_nanosleep <-system_call_fastpath
2159 <idle>-0 [001] d.h1 4186.475409: hrtimer_interrupt <-smp_apic_timer_interrupt
2160 usleep-2665 [001] d.h1 4186.475426: hrtimer_interrupt <-smp_apic_timer_interrupt
2161 <idle>-0 [003] d.h1 4186.475426: hrtimer_interrupt <-smp_apic_timer_interrupt
2162 <idle>-0 [002] d.h1 4186.475427: hrtimer_interrupt <-smp_apic_timer_interrupt
2164 To see which functions are being traced, you can cat the file:
2166 # cat set_ftrace_filter
2171 Perhaps this is not enough. The filters also allow simple wild
2172 cards. Only the following are currently available
2174 <match>* - will match functions that begin with <match>
2175 *<match> - will match functions that end with <match>
2176 *<match>* - will match functions that have <match> in it
2178 These are the only wild cards which are supported.
2180 <match>*<match> will not work.
2182 Note: It is better to use quotes to enclose the wild cards,
2183 otherwise the shell may expand the parameters into names
2184 of files in the local directory.
2186 # echo 'hrtimer_*' > set_ftrace_filter
2192 # entries-in-buffer/entries-written: 897/897 #P:4
2195 # / _----=> need-resched
2196 # | / _---=> hardirq/softirq
2197 # || / _--=> preempt-depth
2199 # TASK-PID CPU# |||| TIMESTAMP FUNCTION
2201 <idle>-0 [003] dN.1 4228.547803: hrtimer_cancel <-tick_nohz_idle_exit
2202 <idle>-0 [003] dN.1 4228.547804: hrtimer_try_to_cancel <-hrtimer_cancel
2203 <idle>-0 [003] dN.2 4228.547805: hrtimer_force_reprogram <-__remove_hrtimer
2204 <idle>-0 [003] dN.1 4228.547805: hrtimer_forward <-tick_nohz_idle_exit
2205 <idle>-0 [003] dN.1 4228.547805: hrtimer_start_range_ns <-hrtimer_start_expires.constprop.11
2206 <idle>-0 [003] d..1 4228.547858: hrtimer_get_next_event <-get_next_timer_interrupt
2207 <idle>-0 [003] d..1 4228.547859: hrtimer_start <-__tick_nohz_idle_enter
2208 <idle>-0 [003] d..2 4228.547860: hrtimer_force_reprogram <-__rem
2210 Notice that we lost the sys_nanosleep.
2212 # cat set_ftrace_filter
2217 hrtimer_try_to_cancel
2221 hrtimer_force_reprogram
2222 hrtimer_get_next_event
2226 hrtimer_get_remaining
2228 hrtimer_init_sleeper
2231 This is because the '>' and '>>' act just like they do in bash.
2232 To rewrite the filters, use '>'
2233 To append to the filters, use '>>'
To clear out a filter so that all functions will be recorded
again:
2238 # echo > set_ftrace_filter
2239 # cat set_ftrace_filter
2242 Again, now we want to append.
2244 # echo sys_nanosleep > set_ftrace_filter
2245 # cat set_ftrace_filter
2247 # echo 'hrtimer_*' >> set_ftrace_filter
2248 # cat set_ftrace_filter
2253 hrtimer_try_to_cancel
2257 hrtimer_force_reprogram
2258 hrtimer_get_next_event
2263 hrtimer_get_remaining
2265 hrtimer_init_sleeper
The set_ftrace_notrace prevents those functions from being
traced.
2271 # echo '*preempt*' '*lock*' > set_ftrace_notrace
2277 # entries-in-buffer/entries-written: 39608/39608 #P:4
2280 # / _----=> need-resched
2281 # | / _---=> hardirq/softirq
2282 # || / _--=> preempt-depth
2284 # TASK-PID CPU# |||| TIMESTAMP FUNCTION
2286 bash-1994 [000] .... 4342.324896: file_ra_state_init <-do_dentry_open
2287 bash-1994 [000] .... 4342.324897: open_check_o_direct <-do_last
2288 bash-1994 [000] .... 4342.324897: ima_file_check <-do_last
2289 bash-1994 [000] .... 4342.324898: process_measurement <-ima_file_check
2290 bash-1994 [000] .... 4342.324898: ima_get_action <-process_measurement
2291 bash-1994 [000] .... 4342.324898: ima_match_policy <-ima_get_action
2292 bash-1994 [000] .... 4342.324899: do_truncate <-do_last
2293 bash-1994 [000] .... 4342.324899: should_remove_suid <-do_truncate
2294 bash-1994 [000] .... 4342.324899: notify_change <-do_truncate
2295 bash-1994 [000] .... 4342.324900: current_fs_time <-notify_change
2296 bash-1994 [000] .... 4342.324900: current_kernel_time <-current_fs_time
2297 bash-1994 [000] .... 4342.324900: timespec_trunc <-current_fs_time
2299 We can see that there's no more lock or preempt tracing.
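The same filtering can also be driven from a program instead of a
shell. Here is a minimal sketch (the trace_write() helper is made
up for this example, the paths assume debugfs is mounted at
/sys/kernel/debug as described earlier, and error handling is
kept to a minimum):

#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>

/* write a string into a file below the tracing directory */
static void trace_write(const char *file, const char *val)
{
	char path[256];
	int fd;

	snprintf(path, sizeof(path), "/sys/kernel/debug/tracing/%s", file);
	fd = open(path, O_WRONLY);
	if (fd < 0)
		return;
	write(fd, val, strlen(val));
	close(fd);
}

int main(void)
{
	/* no shell expansion here, so the wildcards need no quoting */
	trace_write("set_ftrace_filter", "hrtimer_*");
	trace_write("set_ftrace_notrace", "*lock*");
	trace_write("current_tracer", "function");
	trace_write("tracing_on", "1");
	return 0;
}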
2302 Dynamic ftrace with the function graph tracer
2303 ---------------------------------------------
2305 Although what has been explained above concerns both the
2306 function tracer and the function-graph-tracer, there are some
2307 special features only available in the function-graph tracer.
2309 If you want to trace only one function and all of its children,
2310 you just have to echo its name into set_graph_function:
2312 echo __do_fault > set_graph_function
will produce the following "expanded" trace of the __do_fault()
function:
2318 0) | filemap_fault() {
2319 0) | find_lock_page() {
2320 0) 0.804 us | find_get_page();
2321 0) | __might_sleep() {
2325 0) 0.653 us | _spin_lock();
2326 0) 0.578 us | page_add_file_rmap();
2327 0) 0.525 us | native_set_pte_at();
2328 0) 0.585 us | _spin_unlock();
2329 0) | unlock_page() {
2330 0) 0.541 us | page_waitqueue();
2331 0) 0.639 us | __wake_up_bit();
2335 0) | filemap_fault() {
2336 0) | find_lock_page() {
2337 0) 0.698 us | find_get_page();
2338 0) | __might_sleep() {
2342 0) 0.631 us | _spin_lock();
2343 0) 0.571 us | page_add_file_rmap();
2344 0) 0.526 us | native_set_pte_at();
2345 0) 0.586 us | _spin_unlock();
2346 0) | unlock_page() {
2347 0) 0.533 us | page_waitqueue();
2348 0) 0.638 us | __wake_up_bit();
2352 You can also expand several functions at once:
2354 echo sys_open > set_graph_function
2355 echo sys_close >> set_graph_function
2357 Now if you want to go back to trace all functions you can clear
2358 this special filter via:
2360 echo > set_graph_function
ftrace_enabled
--------------

Note, the proc sysctl ftrace_enabled is a big on/off switch for the
function tracer. By default it is enabled (when function tracing is
enabled in the kernel). If it is disabled, all function tracing is
disabled. This includes not only the function tracers for ftrace, but
also for any other uses (perf, kprobes, stack tracing, profiling, etc).

Please disable this with care.

This can be disabled (and enabled) with:
2376 sysctl kernel.ftrace_enabled=0
2377 sysctl kernel.ftrace_enabled=1
or

echo 0 > /proc/sys/kernel/ftrace_enabled
2382 echo 1 > /proc/sys/kernel/ftrace_enabled
Filter commands
---------------

A few commands are supported by the set_ftrace_filter interface.
Trace commands have the following format:
2391 <function>:<command>:<parameter>
2393 The following commands are supported:
- mod
This command enables function filtering per module. The
2397 parameter defines the module. For example, if only the write*
2398 functions in the ext3 module are desired, run:
2400 echo 'write*:mod:ext3' > set_ftrace_filter
2402 This command interacts with the filter in the same way as
2403 filtering based on function names. Thus, adding more functions
2404 in a different module is accomplished by appending (>>) to the
filter file. Remove specific module functions by prepending
'!':
2408 echo '!writeback*:mod:ext3' >> set_ftrace_filter
- traceon/traceoff
These commands turn tracing on and off when the specified
2412 functions are hit. The parameter determines how many times the
2413 tracing system is turned on and off. If unspecified, there is
2414 no limit. For example, to disable tracing when a schedule bug
2415 is hit the first 5 times, run:
2417 echo '__schedule_bug:traceoff:5' > set_ftrace_filter
2419 To always disable tracing when __schedule_bug is hit:
2421 echo '__schedule_bug:traceoff' > set_ftrace_filter
2423 These commands are cumulative whether or not they are appended
2424 to set_ftrace_filter. To remove a command, prepend it by '!'
2425 and drop the parameter:
2427 echo '!__schedule_bug:traceoff:0' > set_ftrace_filter
The above removes the traceoff command for __schedule_bug
that has a counter. To remove commands without counters:
2432 echo '!__schedule_bug:traceoff' > set_ftrace_filter
- snapshot
Will cause a snapshot to be triggered when the function is hit.
2437 echo 'native_flush_tlb_others:snapshot' > set_ftrace_filter
2439 To only snapshot once:
2441 echo 'native_flush_tlb_others:snapshot:1' > set_ftrace_filter
2443 To remove the above commands:
2445 echo '!native_flush_tlb_others:snapshot' > set_ftrace_filter
2446 echo '!native_flush_tlb_others:snapshot:0' > set_ftrace_filter
2448 - enable_event/disable_event
2449 These commands can enable or disable a trace event. Note, because
2450 function tracing callbacks are very sensitive, when these commands
2451 are registered, the trace point is activated, but disabled in
2452 a "soft" mode. That is, the tracepoint will be called, but
2453 just will not be traced. The event tracepoint stays in this mode
2454 as long as there's a command that triggers it.
echo 'try_to_wake_up:enable_event:sched:sched_switch:2' > \
	set_ftrace_filter

The format is:

<function>:enable_event:<system>:<event>[:count]
<function>:disable_event:<system>:<event>[:count]
To remove the events commands:

echo '!try_to_wake_up:enable_event:sched:sched_switch:0' > \
	set_ftrace_filter
echo '!schedule:disable_event:sched:sched_switch' > \
	set_ftrace_filter
- dump
When the function is hit, it will dump the contents of the ftrace
ring buffer to the console. This is useful if you need to debug
something, and want to dump the trace when a certain function
is hit. Perhaps it's a function that is called before a triple
fault happens and does not allow you to get a regular dump.
- cpudump
When the function is hit, it will dump the contents of the ftrace
2481 ring buffer for the current CPU to the console. Unlike the "dump"
2482 command, it only prints out the contents of the ring buffer for the
2483 CPU that executed the function that triggered the dump.
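No example is shown above for dump and cpudump, but they follow
the same <function>:<command> format as the other commands. As a
minimal sketch (the function name is only an example; anything
listed in available_filter_functions works), a program could arm
a console dump like this:

#include <string.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
	const char *cmd = "__schedule_bug:dump";
	int fd = open("/sys/kernel/debug/tracing/set_ftrace_filter", O_WRONLY);

	if (fd < 0)
		return 1;
	/* dump the ftrace ring buffer to the console when __schedule_bug() runs */
	write(fd, cmd, strlen(cmd));
	close(fd);
	return 0;
}

The equivalent shell form is simply echoing the same string into
set_ftrace_filter.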
trace_pipe
----------

The trace_pipe outputs the same content as the trace file, but
the effect on the tracing is different. Every read from
trace_pipe is consumed. This means that subsequent reads will be
different. The trace is live.
2493 # echo function > current_tracer
2494 # cat trace_pipe > /tmp/trace.out &
2496 # echo 1 > tracing_on
2498 # echo 0 > tracing_on
2502 # entries-in-buffer/entries-written: 0/0 #P:4
2505 # / _----=> need-resched
2506 # | / _---=> hardirq/softirq
2507 # || / _--=> preempt-depth
2509 # TASK-PID CPU# |||| TIMESTAMP FUNCTION
2513 # cat /tmp/trace.out
2514 bash-1994 [000] .... 5281.568961: mutex_unlock <-rb_simple_write
2515 bash-1994 [000] .... 5281.568963: __mutex_unlock_slowpath <-mutex_unlock
2516 bash-1994 [000] .... 5281.568963: __fsnotify_parent <-fsnotify_modify
2517 bash-1994 [000] .... 5281.568964: fsnotify <-fsnotify_modify
2518 bash-1994 [000] .... 5281.568964: __srcu_read_lock <-fsnotify
2519 bash-1994 [000] .... 5281.568964: add_preempt_count <-__srcu_read_lock
2520 bash-1994 [000] ...1 5281.568965: sub_preempt_count <-__srcu_read_lock
2521 bash-1994 [000] .... 5281.568965: __srcu_read_unlock <-fsnotify
2522 bash-1994 [000] .... 5281.568967: sys_dup2 <-system_call_fastpath
Note, reading the trace_pipe file will block until more input is
added.
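Because trace_pipe is a blocking, consuming stream, it is also
easy to drain it from a small program instead of cat. A minimal
sketch (the path assumes debugfs is mounted at /sys/kernel/debug
as described earlier):

#include <fcntl.h>
#include <unistd.h>

int main(void)
{
	char buf[4096];
	ssize_t n;
	int fd = open("/sys/kernel/debug/tracing/trace_pipe", O_RDONLY);

	if (fd < 0)
		return 1;
	/* read() blocks until new trace data is produced */
	while ((n = read(fd, buf, sizeof(buf))) > 0)
		write(STDOUT_FILENO, buf, n);	/* forward to stdout or a file */
	close(fd);
	return 0;
}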
trace entries
-------------

Having too much or not enough data can be troublesome in
diagnosing an issue in the kernel. The file buffer_size_kb is
used to modify the size of the internal trace buffers. The
number listed is the number of kilobytes that can be recorded
per CPU. To know the full size, multiply the number of possible
CPUs by this number (for example, 1408 kb per CPU on a machine
with 4 CPUs gives 5632 kb in total).
2538 # cat buffer_size_kb
2539 1408 (units kilobytes)
Or simply read buffer_total_size_kb

# cat buffer_total_size_kb
5632
To modify the buffer, simply echo in a number (in 1024 byte segments).
2548 # echo 10000 > buffer_size_kb
2549 # cat buffer_size_kb
2550 10000 (units kilobytes)
It will try to allocate as much as possible. If you allocate too
much, it can trigger the Out-Of-Memory killer.
2555 # echo 1000000000000 > buffer_size_kb
2556 -bash: echo: write error: Cannot allocate memory
2557 # cat buffer_size_kb
2560 The per_cpu buffers can be changed individually as well:
2562 # echo 10000 > per_cpu/cpu0/buffer_size_kb
2563 # echo 100 > per_cpu/cpu1/buffer_size_kb
2565 When the per_cpu buffers are not the same, the buffer_size_kb
at the top level will just show an X:

# cat buffer_size_kb
X
2571 This is where the buffer_total_size_kb is useful:
2573 # cat buffer_total_size_kb
2576 Writing to the top level buffer_size_kb will reset all the buffers
2577 to be the same again.
Snapshot
--------

CONFIG_TRACER_SNAPSHOT makes a generic snapshot feature
available to all non latency tracers. (Latency tracers which
record the max latency, such as "irqsoff" or "wakeup", can't use
this feature, since those are already using the snapshot
mechanism internally.)
2587 Snapshot preserves a current trace buffer at a particular point
2588 in time without stopping tracing. Ftrace swaps the current
2589 buffer with a spare buffer, and tracing continues in the new
2590 current (=previous spare) buffer.
The following debugfs files in "tracing" are related to this
feature:

  snapshot

This is used to take a snapshot and to read the output
2598 of the snapshot. Echo 1 into this file to allocate a
2599 spare buffer and to take a snapshot (swap), then read
2600 the snapshot from this file in the same format as
2601 "trace" (described above in the section "The File
System"). Reading the snapshot and tracing can proceed
in parallel. When the spare buffer is allocated, echoing
0 frees it, and echoing other (positive) values clear the
snapshot contents.
2606 More details are shown in the table below.
2608 status\input | 0 | 1 | else |
2609 --------------+------------+------------+------------+
2610 not allocated |(do nothing)| alloc+swap |(do nothing)|
2611 --------------+------------+------------+------------+
2612 allocated | free | swap | clear |
2613 --------------+------------+------------+------------+
Here is an example of using the snapshot feature.

# echo 1 > events/sched/enable
# echo 1 > snapshot
# cat snapshot
2622 # entries-in-buffer/entries-written: 71/71 #P:8
2625 # / _----=> need-resched
2626 # | / _---=> hardirq/softirq
2627 # || / _--=> preempt-depth
2629 # TASK-PID CPU# |||| TIMESTAMP FUNCTION
2631 <idle>-0 [005] d... 2440.603828: sched_switch: prev_comm=swapper/5 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=snapshot-test-2 next_pid=2242 next_prio=120
2632 sleep-2242 [005] d... 2440.603846: sched_switch: prev_comm=snapshot-test-2 prev_pid=2242 prev_prio=120 prev_state=R ==> next_comm=kworker/5:1 next_pid=60 next_prio=120
2634 <idle>-0 [002] d... 2440.707230: sched_switch: prev_comm=swapper/2 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=snapshot-test-2 next_pid=2229 next_prio=120
# cat trace
# entries-in-buffer/entries-written: 77/77 #P:8
2642 # / _----=> need-resched
2643 # | / _---=> hardirq/softirq
2644 # || / _--=> preempt-depth
2646 # TASK-PID CPU# |||| TIMESTAMP FUNCTION
2648 <idle>-0 [007] d... 2440.707395: sched_switch: prev_comm=swapper/7 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=snapshot-test-2 next_pid=2243 next_prio=120
2649 snapshot-test-2-2229 [002] d... 2440.707438: sched_switch: prev_comm=snapshot-test-2 prev_pid=2229 prev_prio=120 prev_state=S ==> next_comm=swapper/2 next_pid=0 next_prio=120
2653 If you try to use this snapshot feature when current tracer is
2654 one of the latency tracers, you will get the following results.
# echo wakeup > current_tracer
# echo 1 > snapshot
bash: echo: write error: Device or resource busy
# cat snapshot
cat: snapshot: Device or resource busy
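Like tracing_on earlier, the snapshot file can also be driven
from a program, which is handy when you want to capture the
buffer exactly when some condition in your own code is hit. A
minimal sketch (condition_hit() is a stand-in for your own check,
and the path assumes the usual debugfs mount point):

#include <fcntl.h>
#include <unistd.h>

/* stand-in for a real condition check */
static int condition_hit(void)
{
	return 1;
}

int main(void)
{
	int fd = open("/sys/kernel/debug/tracing/snapshot", O_WRONLY);

	if (fd < 0)
		return 1;
	if (condition_hit())
		write(fd, "1", 1);	/* allocate the spare buffer if needed and swap */
	close(fd);
	return 0;
}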
Instances
---------

In the debugfs tracing directory is a directory called "instances".
New directories can be created inside of it using mkdir, and
removed with rmdir. A directory created with mkdir here will
already contain files and other directories after it is created.
# mkdir instances/foo
# ls instances/foo
buffer_size_kb buffer_total_size_kb events free_buffer per_cpu
2674 set_event snapshot trace trace_clock trace_marker trace_options
2675 trace_pipe tracing_on
As you can see, the new directory looks similar to the tracing directory
itself. In fact, it is very similar, except that the buffer and
events are independent of the main directory, and of any other
instances that are created.
2682 The files in the new directory work just like the files with the
2683 same name in the tracing directory except the buffer that is used
2684 is a separate and new buffer. The files affect that buffer but do not
2685 affect the main buffer with the exception of trace_options. Currently,
2686 the trace_options affect all instances and the top level buffer
2687 the same, but this may change in future releases. That is, options
2688 may become specific to the instance they reside in.
2690 Notice that none of the function tracer files are there, nor is
2691 current_tracer and available_tracers. This is because the buffers
2692 can currently only have events enabled for them.
2694 # mkdir instances/foo
2695 # mkdir instances/bar
2696 # mkdir instances/zoot
2697 # echo 100000 > buffer_size_kb
2698 # echo 1000 > instances/foo/buffer_size_kb
2699 # echo 5000 > instances/bar/per_cpu/cpu1/buffer_size_kb
# echo function > current_tracer
2701 # echo 1 > instances/foo/events/sched/sched_wakeup/enable
2702 # echo 1 > instances/foo/events/sched/sched_wakeup_new/enable
2703 # echo 1 > instances/foo/events/sched/sched_switch/enable
2704 # echo 1 > instances/bar/events/irq/enable
2705 # echo 1 > instances/zoot/events/syscalls/enable
# cat trace_pipe
CPU:2 [LOST 11745 EVENTS]
2708 bash-2044 [002] .... 10594.481032: _raw_spin_lock_irqsave <-get_page_from_freelist
2709 bash-2044 [002] d... 10594.481032: add_preempt_count <-_raw_spin_lock_irqsave
2710 bash-2044 [002] d..1 10594.481032: __rmqueue <-get_page_from_freelist
2711 bash-2044 [002] d..1 10594.481033: _raw_spin_unlock <-get_page_from_freelist
2712 bash-2044 [002] d..1 10594.481033: sub_preempt_count <-_raw_spin_unlock
2713 bash-2044 [002] d... 10594.481033: get_pageblock_flags_group <-get_pageblock_migratetype
2714 bash-2044 [002] d... 10594.481034: __mod_zone_page_state <-get_page_from_freelist
2715 bash-2044 [002] d... 10594.481034: zone_statistics <-get_page_from_freelist
2716 bash-2044 [002] d... 10594.481034: __inc_zone_state <-zone_statistics
2717 bash-2044 [002] d... 10594.481034: __inc_zone_state <-zone_statistics
2718 bash-2044 [002] .... 10594.481035: arch_dup_task_struct <-copy_process
2721 # cat instances/foo/trace_pipe
2722 bash-1998 [000] d..4 136.676759: sched_wakeup: comm=kworker/0:1 pid=59 prio=120 success=1 target_cpu=000
2723 bash-1998 [000] dN.4 136.676760: sched_wakeup: comm=bash pid=1998 prio=120 success=1 target_cpu=000
2724 <idle>-0 [003] d.h3 136.676906: sched_wakeup: comm=rcu_preempt pid=9 prio=120 success=1 target_cpu=003
2725 <idle>-0 [003] d..3 136.676909: sched_switch: prev_comm=swapper/3 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=rcu_preempt next_pid=9 next_prio=120
2726 rcu_preempt-9 [003] d..3 136.676916: sched_switch: prev_comm=rcu_preempt prev_pid=9 prev_prio=120 prev_state=S ==> next_comm=swapper/3 next_pid=0 next_prio=120
2727 bash-1998 [000] d..4 136.677014: sched_wakeup: comm=kworker/0:1 pid=59 prio=120 success=1 target_cpu=000
2728 bash-1998 [000] dN.4 136.677016: sched_wakeup: comm=bash pid=1998 prio=120 success=1 target_cpu=000
2729 bash-1998 [000] d..3 136.677018: sched_switch: prev_comm=bash prev_pid=1998 prev_prio=120 prev_state=R+ ==> next_comm=kworker/0:1 next_pid=59 next_prio=120
2730 kworker/0:1-59 [000] d..4 136.677022: sched_wakeup: comm=sshd pid=1995 prio=120 success=1 target_cpu=001
2731 kworker/0:1-59 [000] d..3 136.677025: sched_switch: prev_comm=kworker/0:1 prev_pid=59 prev_prio=120 prev_state=S ==> next_comm=bash next_pid=1998 next_prio=120
2734 # cat instances/bar/trace_pipe
2735 migration/1-14 [001] d.h3 138.732674: softirq_raise: vec=3 [action=NET_RX]
2736 <idle>-0 [001] dNh3 138.732725: softirq_raise: vec=3 [action=NET_RX]
2737 bash-1998 [000] d.h1 138.733101: softirq_raise: vec=1 [action=TIMER]
2738 bash-1998 [000] d.h1 138.733102: softirq_raise: vec=9 [action=RCU]
2739 bash-1998 [000] ..s2 138.733105: softirq_entry: vec=1 [action=TIMER]
2740 bash-1998 [000] ..s2 138.733106: softirq_exit: vec=1 [action=TIMER]
2741 bash-1998 [000] ..s2 138.733106: softirq_entry: vec=9 [action=RCU]
2742 bash-1998 [000] ..s2 138.733109: softirq_exit: vec=9 [action=RCU]
2743 sshd-1995 [001] d.h1 138.733278: irq_handler_entry: irq=21 name=uhci_hcd:usb4
2744 sshd-1995 [001] d.h1 138.733280: irq_handler_exit: irq=21 ret=unhandled
2745 sshd-1995 [001] d.h1 138.733281: irq_handler_entry: irq=21 name=eth0
2746 sshd-1995 [001] d.h1 138.733283: irq_handler_exit: irq=21 ret=handled
2749 # cat instances/zoot/trace
2752 # entries-in-buffer/entries-written: 18996/18996 #P:4
2755 # / _----=> need-resched
2756 # | / _---=> hardirq/softirq
2757 # || / _--=> preempt-depth
2759 # TASK-PID CPU# |||| TIMESTAMP FUNCTION
2761 bash-1998 [000] d... 140.733501: sys_write -> 0x2
2762 bash-1998 [000] d... 140.733504: sys_dup2(oldfd: a, newfd: 1)
2763 bash-1998 [000] d... 140.733506: sys_dup2 -> 0x1
2764 bash-1998 [000] d... 140.733508: sys_fcntl(fd: a, cmd: 1, arg: 0)
2765 bash-1998 [000] d... 140.733509: sys_fcntl -> 0x1
2766 bash-1998 [000] d... 140.733510: sys_close(fd: a)
2767 bash-1998 [000] d... 140.733510: sys_close -> 0x0
2768 bash-1998 [000] d... 140.733514: sys_rt_sigprocmask(how: 0, nset: 0, oset: 6e2768, sigsetsize: 8)
2769 bash-1998 [000] d... 140.733515: sys_rt_sigprocmask -> 0x0
2770 bash-1998 [000] d... 140.733516: sys_rt_sigaction(sig: 2, act: 7fff718846f0, oact: 7fff71884650, sigsetsize: 8)
2771 bash-1998 [000] d... 140.733516: sys_rt_sigaction -> 0x0
You can see that the trace of the top most trace buffer shows only
the function tracing. The foo instance displays wakeups and task
switches.
2777 To remove the instances, simply delete their directories:
2779 # rmdir instances/foo
2780 # rmdir instances/bar
2781 # rmdir instances/zoot
2783 Note, if a process has a trace file open in one of the instance
2784 directories, the rmdir will fail with EBUSY.
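Instances can be created and torn down from a program as well; a
mkdir()/rmdir() on the instances directory behaves just like the
shell commands above. A minimal sketch (the instance name is
arbitrary and the path assumes the usual debugfs mount point):

#include <stdio.h>
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/stat.h>

int main(void)
{
	const char *inst = "/sys/kernel/debug/tracing/instances/foo";
	char path[256];
	int fd;

	/* creating the directory creates a complete new trace buffer */
	if (mkdir(inst, 0755) < 0 && errno != EEXIST)
		return 1;

	/* enable one event group in the new instance only */
	snprintf(path, sizeof(path), "%s/events/sched/enable", inst);
	fd = open(path, O_WRONLY);
	if (fd >= 0) {
		write(fd, "1", 1);
		close(fd);
	}

	/* rmdir(inst) removes it again (EBUSY if a trace file is still open) */
	return 0;
}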
Stack trace
-----------

Since the kernel has a fixed sized stack, it is important not to
waste it in functions. A kernel developer must be conscious of
what they allocate on the stack. If they add too much, the system
can be in danger of a stack overflow, and corruption will occur,
usually leading to a system panic.
2795 There are some tools that check this, usually with interrupts
2796 periodically checking usage. But if you can perform a check
at every function call, that becomes very useful. As ftrace provides
2798 a function tracer, it makes it convenient to check the stack size
2799 at every function call. This is enabled via the stack tracer.
2801 CONFIG_STACK_TRACER enables the ftrace stack tracing functionality.
2802 To enable it, write a '1' into /proc/sys/kernel/stack_tracer_enabled.
2804 # echo 1 > /proc/sys/kernel/stack_tracer_enabled
2806 You can also enable it from the kernel command line to trace
2807 the stack size of the kernel during boot up, by adding "stacktrace"
to the kernel command line.
After running it for a few minutes, the output looks like:

# cat stack_max_size
2928

# cat stack_trace
        Depth    Size   Location    (18 entries)
        -----    ----   --------
2818 0) 2928 224 update_sd_lb_stats+0xbc/0x4ac
2819 1) 2704 160 find_busiest_group+0x31/0x1f1
2820 2) 2544 256 load_balance+0xd9/0x662
2821 3) 2288 80 idle_balance+0xbb/0x130
2822 4) 2208 128 __schedule+0x26e/0x5b9
2823 5) 2080 16 schedule+0x64/0x66
2824 6) 2064 128 schedule_timeout+0x34/0xe0
2825 7) 1936 112 wait_for_common+0x97/0xf1
2826 8) 1824 16 wait_for_completion+0x1d/0x1f
2827 9) 1808 128 flush_work+0xfe/0x119
2828 10) 1680 16 tty_flush_to_ldisc+0x1e/0x20
2829 11) 1664 48 input_available_p+0x1d/0x5c
2830 12) 1616 48 n_tty_poll+0x6d/0x134
2831 13) 1568 64 tty_poll+0x64/0x7f
2832 14) 1504 880 do_select+0x31e/0x511
2833 15) 624 400 core_sys_select+0x177/0x216
2834 16) 224 96 sys_select+0x91/0xb9
2835 17) 128 128 system_call_fastpath+0x16/0x1b
2837 Note, if -mfentry is being used by gcc, functions get traced before
2838 they set up the stack frame. This means that leaf level functions
2839 are not tested by the stack tracer when -mfentry is used.
2841 Currently, -mfentry is used by gcc 4.6.0 and above on x86 only.
2845 More details can be found in the source code, in the
2846 kernel/trace/*.c files.