1 ftrace - Function Tracer
2 ========================
4 Copyright 2008 Red Hat Inc.
5 Author: Steven Rostedt <srostedt@redhat.com>
6 License: The GNU Free Documentation License, Version 1.2
7 (dual licensed under the GPL v2)
8 Reviewers: Elias Oltmanns, Randy Dunlap, Andrew Morton,
9 John Kacur, and David Teigland.
10 Written for: 2.6.28-rc2
Ftrace is an internal tracer designed to help developers and
system designers find what is going on inside the kernel.
17 It can be used for debugging or analyzing latencies and
18 performance issues that take place outside of user-space.
20 Although ftrace is the function tracer, it also includes an
21 infrastructure that allows for other types of tracing. Some of
22 the tracers that are currently in ftrace include a tracer to
23 trace context switches, the time it takes for a high priority
24 task to run after it was woken up, the time interrupts are
25 disabled, and more (ftrace allows for tracer plugins, which
26 means that the list of tracers can always grow).
29 Implementation Details
30 ----------------------
32 See ftrace-design.txt for details for arch porters and such.
38 Ftrace uses the debugfs file system to hold the control files as
39 well as the files to display output.
When debugfs is configured into the kernel (which selecting any
ftrace option will do), the directory /sys/kernel/debug will be
created. To mount this directory, you can add the following line to
your /etc/fstab file:
45 debugfs /sys/kernel/debug debugfs defaults 0 0
47 Or you can mount it at run time with:
49 mount -t debugfs nodev /sys/kernel/debug
For quicker access to that directory you may want to make a soft
link to it:
54 ln -s /sys/kernel/debug /debug
56 Any selected ftrace option will also create a directory called tracing
57 within the debugfs. The rest of the document will assume that you are in
58 the ftrace directory (cd /sys/kernel/debug/tracing) and will only concentrate
59 on the files within that directory and not distract from the content with
60 the extended "/sys/kernel/debug/tracing" path name.
62 That's it! (assuming that you have ftrace configured into your kernel)
64 After mounting the debugfs, you can see a directory called
65 "tracing". This directory contains the control and output files
66 of ftrace. Here is a list of some of the key files:
69 Note: all time values are in microseconds.
current_tracer:

This is used to set or display the current tracer
that is configured.
available_tracers:

This holds the different types of tracers that
79 have been compiled into the kernel. The
80 tracers listed here can be configured by
81 echoing their name into current_tracer.
tracing_enabled:

This sets or displays whether the current_tracer
86 is activated and tracing or not. Echo 0 into this
87 file to disable the tracer or 1 to enable it.
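For example, a tracer can be selected and started like this (the
output of available_tracers shown below is only illustrative; the
actual list depends on which tracers were compiled into your
kernel):

  # cat available_tracers
  function_graph function sched_switch nop
  # echo function > current_tracer
  # echo 1 > tracing_enabled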
trace:

This file holds the output of the trace in a human
92 readable format (described below).
trace_pipe:

The output is the same as the "trace" file but this
97 file is meant to be streamed with live tracing.
98 Reads from this file will block until new data is
99 retrieved. Unlike the "trace" file, this file is a
100 consumer. This means reading from this file causes
101 sequential reads to display more current data. Once
102 data is read from this file, it is consumed, and
103 will not be read again with a sequential read. The
104 "trace" file is static, and if the tracer is not
105 adding more data,they will display the same
106 information every time they are read.
trace_options:

This file lets the user control the amount of data
that is displayed in one of the above output files.
tracing_max_latency:

Some of the tracers record the max latency.
117 For example, the time interrupts are disabled.
118 This time is saved in this file. The max trace
119 will also be stored, and displayed by "trace".
120 A new max trace will only be recorded if the
121 latency is greater than the value in this
122 file. (in microseconds)
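For example, with one of the latency tracers described below
selected, the saved maximum can be inspected and then reset before
taking a new measurement (the value shown is only illustrative):

  # cat tracing_max_latency
  12
  # echo 0 > tracing_max_latency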
buffer_size_kb:

This sets or displays the number of kilobytes each CPU
127 buffer can hold. The tracer buffers are the same size
128 for each CPU. The displayed number is the size of the
129 CPU buffer and not total size of all buffers. The
130 trace buffers are allocated in pages (blocks of memory
131 that the kernel uses for allocation, usually 4 KB in size).
132 If the last page allocated has room for more bytes
133 than requested, the rest of the page will be used,
134 making the actual allocation bigger than requested.
135 ( Note, the size may not be a multiple of the page size
136 due to buffer management overhead. )
This can only be updated when the current_tracer
is set to "nop".
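For example (the same steps are shown in the "trace entries"
discussion near the end of this document):

  # echo nop > current_tracer
  # echo 10000 > buffer_size_kb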
tracing_cpumask:

This is a mask that lets the user only trace
144 on specified CPUS. The format is a hex string
145 representing the CPUS.
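For example, to limit tracing to CPUs 0 and 1, set bits 0 and 1 of
the mask (hex value 3). This is only a sketch; adjust the mask for
your own CPU layout:

  # echo 3 > tracing_cpumask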
set_ftrace_filter:

When dynamic ftrace is configured in (see the
150 section below "dynamic ftrace"), the code is dynamically
151 modified (code text rewrite) to disable calling of the
152 function profiler (mcount). This lets tracing be configured
153 in with practically no overhead in performance. This also
154 has a side effect of enabling or disabling specific functions
155 to be traced. Echoing names of functions into this file
156 will limit the trace to only those functions.
set_ftrace_notrace:

This has an effect opposite to that of
161 set_ftrace_filter. Any function that is added here will not
162 be traced. If a function exists in both set_ftrace_filter
163 and set_ftrace_notrace, the function will _not_ be traced.
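As a quick illustration (the "dynamic ftrace" section below walks
through these two files in much more detail):

  # echo sys_nanosleep > set_ftrace_filter
  # echo '*lock*' > set_ftrace_notrace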
set_ftrace_pid:

Have the function tracer only trace a single thread.
171 Set a "trigger" function where tracing should start
172 with the function graph tracer (See the section
173 "dynamic ftrace" for more details).
175 available_filter_functions:
177 This lists the functions that ftrace
178 has processed and can trace. These are the function
179 names that you can pass to "set_ftrace_filter" or
180 "set_ftrace_notrace". (See the section "dynamic ftrace"
181 below for more details.)
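Since this is a plain text file, it can simply be searched. For
example, to see which hrtimer related functions can be traced:

  # grep hrtimer available_filter_functions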
The Tracers
-----------

Here is the list of current tracers that may be configured.
"function"

Function call tracer to trace all kernel functions.
"function_graph"

Similar to the function tracer except that the
196 function tracer probes the functions on their entry
197 whereas the function graph tracer traces on both entry
198 and exit of the functions. It then provides the ability
to draw a graph of function calls similar to C code source.
"sched_switch"

Traces the context switches and wakeups between tasks.
"irqsoff"

Traces the areas that disable interrupts and saves
209 the trace with the longest max latency.
210 See tracing_max_latency. When a new max is recorded,
211 it replaces the old trace. It is best to view this
212 trace with the latency-format option enabled.
"preemptoff"

Similar to irqsoff but traces and records the amount of
217 time for which preemption is disabled.
"preemptirqsoff"

Similar to irqsoff and preemptoff, but traces and
records the largest time for which irqs and/or preemption
are disabled.
"wakeup"

Traces and records the max latency that it takes for
228 the highest priority task to get scheduled after
229 it has been woken up.
"hw-branch-tracer"

Uses the BTS CPU feature on x86 CPUs to trace all
branches executed.
238 This is the "trace nothing" tracer. To remove all
239 tracers from tracing simply echo "nop" into
243 Examples of using the tracer
244 ----------------------------
246 Here are typical examples of using the tracers when controlling
247 them only with the debugfs interface (without using any
248 user-land utilities).
Output format:
--------------

Here is an example of the output format of the file "trace":
258 # TASK-PID CPU# TIMESTAMP FUNCTION
260 bash-4251 [01] 10152.583854: path_put <-path_walk
261 bash-4251 [01] 10152.583855: dput <-path_put
262 bash-4251 [01] 10152.583855: _atomic_dec_and_lock <-dput
265 A header is printed with the tracer name that is represented by
266 the trace. In this case the tracer is "function". Then a header
showing the format follows: the task name "bash", the task PID "4251", the
268 CPU that it was running on "01", the timestamp in <secs>.<usecs>
269 format, the function name that was traced "path_put" and the
270 parent function that called this function "path_walk". The
271 timestamp is the time at which the function was entered.
273 The sched_switch tracer also includes tracing of task wakeups
274 and context switches.
276 ksoftirqd/1-7 [01] 1453.070013: 7:115:R + 2916:115:S
277 ksoftirqd/1-7 [01] 1453.070013: 7:115:R + 10:115:S
278 ksoftirqd/1-7 [01] 1453.070013: 7:115:R ==> 10:115:R
279 events/1-10 [01] 1453.070013: 10:115:S ==> 2916:115:R
280 kondemand/1-2916 [01] 1453.070013: 2916:115:S ==> 7:115:R
281 ksoftirqd/1-7 [01] 1453.070013: 7:115:S ==> 0:140:R
283 Wake ups are represented by a "+" and the context switches are
284 shown as "==>". The format is:
Context switches:

       Previous task              Next Task

  <pid>:<prio>:<state>  ==>  <pid>:<prio>:<state>

Wake ups:

       Current task               Task waking up

  <pid>:<prio>:<state>    +  <pid>:<prio>:<state>
298 The prio is the internal kernel priority, which is the inverse
299 of the priority that is usually displayed by user-space tools.
300 Zero represents the highest priority (99). Prio 100 starts the
301 "nice" priorities with 100 being equal to nice -20 and 139 being
302 nice 19. The prio "140" is reserved for the idle task which is
303 the lowest priority thread (pid 0).
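For example, a task at nice 0 is shown with prio 120 (100 plus the
offset of 20 from nice -20), and a task with a user-space RT
priority of 50 is shown with prio 49 (99 - 50).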
Latency trace format
--------------------

When the latency-format option is enabled, the trace file gives
310 somewhat more information to see why a latency happened.
311 Here is a typical trace.
315 irqsoff latency trace v1.1.5 on 2.6.26-rc8
316 --------------------------------------------------------------------
317 latency: 97 us, #3/3, CPU#0 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:2)
319 | task: swapper-0 (uid:0 nice:0 policy:0 rt_prio:0)
321 => started at: apic_timer_interrupt
322 => ended at: do_softirq
325 # / _-----=> irqs-off
326 # | / _----=> need-resched
327 # || / _---=> hardirq/softirq
328 # ||| / _--=> preempt-depth
331 # cmd pid ||||| time | caller
333 <idle>-0 0d..1 0us+: trace_hardirqs_off_thunk (apic_timer_interrupt)
334 <idle>-0 0d.s. 97us : __do_softirq (do_softirq)
335 <idle>-0 0d.s1 98us : trace_hardirqs_on (do_softirq)
338 This shows that the current tracer is "irqsoff" tracing the time
339 for which interrupts were disabled. It gives the trace version
and the version of the kernel on which this was executed
(2.6.26-rc8). Then it displays the max latency in microsecs (97
us), the number of trace entries displayed and the total number
recorded (both are three: #3/3), and the type of preemption that
was used (PREEMPT). VP, KP, SP, and HP are always zero and are
345 reserved for later use. #P is the number of online CPUS (#P:2).
347 The task is the process that was running when the latency
348 occurred. (swapper pid: 0).
350 The start and stop (the functions in which the interrupts were
351 disabled and enabled respectively) that caused the latencies:
353 apic_timer_interrupt is where the interrupts were disabled.
354 do_softirq is where they were enabled again.
356 The next lines after the header are the trace itself. The header
357 explains which is which.
359 cmd: The name of the process in the trace.
361 pid: The PID of that process.
363 CPU#: The CPU which the process was running on.
365 irqs-off: 'd' interrupts are disabled. '.' otherwise.
366 Note: If the architecture does not support a way to
read the irq flags variable, an 'X' will always
be printed here.
370 need-resched: 'N' task need_resched is set, '.' otherwise.
hardirq/softirq:
'H' - hard irq occurred inside a softirq.
374 'h' - hard irq is running
375 's' - soft irq is running
376 '.' - normal context.
378 preempt-depth: The level of preempt_disabled
380 The above is mostly meaningful for kernel developers.
382 time: When the latency-format option is enabled, the trace file
383 output includes a timestamp relative to the start of the
384 trace. This differs from the output when latency-format
385 is disabled, which includes an absolute timestamp.
delay: This is just to help catch your eye a bit better (and
needs to be fixed to be only relative to the same CPU).
The marks are determined by the difference between the
current trace entry and the next one:
391 '!' - greater than preempt_mark_thresh (default 100)
392 '+' - greater than 1 microsecond
393 ' ' - less than or equal to 1 microsecond.
395 The rest is the same as the 'trace' file.
trace_options
-------------

The trace_options file is used to control what gets printed in
402 the trace output. To see what is available, simply cat the file:
# cat trace_options
print-parent nosym-offset nosym-addr noverbose noraw nohex nobin \
406 noblock nostacktrace nosched-tree nouserstacktrace nosym-userobj
To disable one of the options, echo in the option prepended with
"no".
411 echo noprint-parent > trace_options
413 To enable an option, leave off the "no".
415 echo sym-offset > trace_options
417 Here are the available options:
419 print-parent - On function traces, display the calling (parent)
420 function as well as the function being traced.
423 bash-4000 [01] 1477.606694: simple_strtoul <-strict_strtoul
426 bash-4000 [01] 1477.606694: simple_strtoul
429 sym-offset - Display not only the function name, but also the
430 offset in the function. For example, instead of
431 seeing just "ktime_get", you will see
432 "ktime_get+0xb/0x20".
435 bash-4000 [01] 1477.606694: simple_strtoul+0x6/0xa0
437 sym-addr - this will also display the function address as well
438 as the function name.
441 bash-4000 [01] 1477.606694: simple_strtoul <c0339346>
443 verbose - This deals with the trace file when the
444 latency-format option is enabled.
446 bash 4000 1 0 00000000 00010a95 [58127d26] 1720.415ms \
447 (+0.000ms): simple_strtoul (strict_strtoul)
449 raw - This will display raw numbers. This option is best for
450 use with user applications that can translate the raw
451 numbers better than having it done in the kernel.
hex - Similar to raw, but the numbers will be in a hexadecimal
format.
456 bin - This will print out the formats in raw binary.
458 block - TBD (needs update)
460 stacktrace - This is one of the options that changes the trace
461 itself. When a trace is recorded, so is the stack
of functions. This allows for back traces of
trace sites.
465 userstacktrace - This option changes the trace. It records a
466 stacktrace of the current userspace thread.
468 sym-userobj - when user stacktrace are enabled, look up which
469 object the address belongs to, and print a
470 relative address. This is especially useful when
471 ASLR is on, otherwise you don't get a chance to
472 resolve the address to object/file/line after
473 the app is no longer running
475 The lookup is performed when you read
476 trace,trace_pipe. Example:
a.out-1623 [000] 40874.465068: /root/a.out[+0x480] <- /root/a.out[+0x494] <- /root/a.out[+0x4a8] <- /lib/libc-2.7.so[+0x1e1a6]
481 sched-tree - trace all tasks that are on the runqueue, at
482 every scheduling event. Will add overhead if
there are a lot of tasks running at once.
485 latency-format - This option changes the trace. When
486 it is enabled, the trace displays
487 additional information about the
latencies, as described in "Latency trace format".
sched_switch
------------

This tracer simply records schedule switches. Here is an example
of how to use it:
497 # echo sched_switch > current_tracer
# echo 1 > tracing_enabled
# sleep 1
# echo 0 > tracing_enabled
# cat trace
503 # tracer: sched_switch
505 # TASK-PID CPU# TIMESTAMP FUNCTION
507 bash-3997 [01] 240.132281: 3997:120:R + 4055:120:R
508 bash-3997 [01] 240.132284: 3997:120:R ==> 4055:120:R
509 sleep-4055 [01] 240.132371: 4055:120:S ==> 3997:120:R
510 bash-3997 [01] 240.132454: 3997:120:R + 4055:120:S
511 bash-3997 [01] 240.132457: 3997:120:R ==> 4055:120:R
512 sleep-4055 [01] 240.132460: 4055:120:D ==> 3997:120:R
513 bash-3997 [01] 240.132463: 3997:120:R + 4055:120:D
514 bash-3997 [01] 240.132465: 3997:120:R ==> 4055:120:R
515 <idle>-0 [00] 240.132589: 0:140:R + 4:115:S
516 <idle>-0 [00] 240.132591: 0:140:R ==> 4:115:R
517 ksoftirqd/0-4 [00] 240.132595: 4:115:S ==> 0:140:R
518 <idle>-0 [00] 240.132598: 0:140:R + 4:115:S
519 <idle>-0 [00] 240.132599: 0:140:R ==> 4:115:R
520 ksoftirqd/0-4 [00] 240.132603: 4:115:S ==> 0:140:R
521 sleep-4055 [01] 240.133058: 4055:120:S ==> 3997:120:R
525 As we have discussed previously about this format, the header
526 shows the name of the trace and points to the options. The
527 "FUNCTION" is a misnomer since here it represents the wake ups
528 and context switches.
530 The sched_switch file only lists the wake ups (represented with
531 '+') and context switches ('==>') with the previous task or
532 current task first followed by the next task or task waking up.
533 The format for both of these is PID:KERNEL-PRIO:TASK-STATE.
534 Remember that the KERNEL-PRIO is the inverse of the actual
535 priority with zero (0) being the highest priority and the nice
536 values starting at 100 (nice -20). Below is a quick chart to map
537 the kernel priority to user land priorities.
  Kernel Space                     User Space
 ===============================================================
  0(high) to 98(low)               user RT priority 99(high) to 1(low)
                                   with SCHED_RR or SCHED_FIFO
 ---------------------------------------------------------------
  99                               sched_priority is not used in scheduling
                                   decisions(it must be specified as 0)
 ---------------------------------------------------------------
  100(high) to 139(low)            user nice -20(high) to 19(low)
 ---------------------------------------------------------------
  140                              idle task priority
 ---------------------------------------------------------------
The task states are:

R - running : wants to run, may not actually be running
555 S - sleep : process is waiting to be woken up (handles signals)
556 D - disk sleep (uninterruptible sleep) : process must be woken up
558 T - stopped : process suspended
559 t - traced : process is being traced (with something like gdb)
560 Z - zombie : process waiting to be cleaned up
ftrace_enabled
--------------

The following tracers (listed below) give different output
568 depending on whether or not the sysctl ftrace_enabled is set. To
569 set ftrace_enabled, one can either use the sysctl function or
570 set it via the proc file system interface.
572 sysctl kernel.ftrace_enabled=1
576 echo 1 > /proc/sys/kernel/ftrace_enabled
To disable ftrace_enabled simply replace the '1' with '0' in the
above commands.
581 When ftrace_enabled is set the tracers will also record the
582 functions that are within the trace. The descriptions of the
583 tracers will also show an example with ftrace enabled.
irqsoff
-------

When interrupts are disabled, the CPU can not react to any other
590 external event (besides NMIs and SMIs). This prevents the timer
591 interrupt from triggering or the mouse interrupt from letting
592 the kernel know of a new mouse event. The result is a latency
593 with the reaction time.
595 The irqsoff tracer tracks the time for which interrupts are
596 disabled. When a new maximum latency is hit, the tracer saves
597 the trace leading up to that latency point so that every time a
new maximum is reached, the old saved trace is discarded and the
new trace is saved.

To reset the maximum, echo 0 into tracing_max_latency. Here is
an example:
604 # echo irqsoff > current_tracer
605 # echo latency-format > trace_options
606 # echo 0 > tracing_max_latency
# echo 1 > tracing_enabled
# ls
# echo 0 > tracing_enabled
# cat trace
614 irqsoff latency trace v1.1.5 on 2.6.26
615 --------------------------------------------------------------------
616 latency: 12 us, #3/3, CPU#1 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:2)
618 | task: bash-3730 (uid:0 nice:0 policy:0 rt_prio:0)
620 => started at: sys_setpgid
621 => ended at: sys_setpgid
624 # / _-----=> irqs-off
625 # | / _----=> need-resched
626 # || / _---=> hardirq/softirq
627 # ||| / _--=> preempt-depth
630 # cmd pid ||||| time | caller
632 bash-3730 1d... 0us : _write_lock_irq (sys_setpgid)
633 bash-3730 1d..1 1us+: _write_unlock_irq (sys_setpgid)
634 bash-3730 1d..2 14us : trace_hardirqs_on (sys_setpgid)
Here we see that we had a latency of 12 microsecs (which is
638 very good). The _write_lock_irq in sys_setpgid disabled
639 interrupts. The difference between the 12 and the displayed
640 timestamp 14us occurred because the clock was incremented
641 between the time of recording the max latency and the time of
642 recording the function that had that latency.
Note the above example had ftrace_enabled not set. If we set
ftrace_enabled, we get a much larger output:
649 irqsoff latency trace v1.1.5 on 2.6.26-rc8
650 --------------------------------------------------------------------
651 latency: 50 us, #101/101, CPU#0 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:2)
653 | task: ls-4339 (uid:0 nice:0 policy:0 rt_prio:0)
655 => started at: __alloc_pages_internal
656 => ended at: __alloc_pages_internal
659 # / _-----=> irqs-off
660 # | / _----=> need-resched
661 # || / _---=> hardirq/softirq
662 # ||| / _--=> preempt-depth
665 # cmd pid ||||| time | caller
667 ls-4339 0...1 0us+: get_page_from_freelist (__alloc_pages_internal)
668 ls-4339 0d..1 3us : rmqueue_bulk (get_page_from_freelist)
669 ls-4339 0d..1 3us : _spin_lock (rmqueue_bulk)
670 ls-4339 0d..1 4us : add_preempt_count (_spin_lock)
671 ls-4339 0d..2 4us : __rmqueue (rmqueue_bulk)
672 ls-4339 0d..2 5us : __rmqueue_smallest (__rmqueue)
673 ls-4339 0d..2 5us : __mod_zone_page_state (__rmqueue_smallest)
674 ls-4339 0d..2 6us : __rmqueue (rmqueue_bulk)
675 ls-4339 0d..2 6us : __rmqueue_smallest (__rmqueue)
676 ls-4339 0d..2 7us : __mod_zone_page_state (__rmqueue_smallest)
677 ls-4339 0d..2 7us : __rmqueue (rmqueue_bulk)
678 ls-4339 0d..2 8us : __rmqueue_smallest (__rmqueue)
680 ls-4339 0d..2 46us : __rmqueue_smallest (__rmqueue)
681 ls-4339 0d..2 47us : __mod_zone_page_state (__rmqueue_smallest)
682 ls-4339 0d..2 47us : __rmqueue (rmqueue_bulk)
683 ls-4339 0d..2 48us : __rmqueue_smallest (__rmqueue)
684 ls-4339 0d..2 48us : __mod_zone_page_state (__rmqueue_smallest)
685 ls-4339 0d..2 49us : _spin_unlock (rmqueue_bulk)
686 ls-4339 0d..2 49us : sub_preempt_count (_spin_unlock)
687 ls-4339 0d..1 50us : get_page_from_freelist (__alloc_pages_internal)
688 ls-4339 0d..2 51us : trace_hardirqs_on (__alloc_pages_internal)
692 Here we traced a 50 microsecond latency. But we also see all the
693 functions that were called during that time. Note that by
694 enabling function tracing, we incur an added overhead. This
695 overhead may extend the latency times. But nevertheless, this
696 trace has provided some very helpful debugging information.
preemptoff
----------

When preemption is disabled, we may be able to receive
703 interrupts but the task cannot be preempted and a higher
704 priority task must wait for preemption to be enabled again
705 before it can preempt a lower priority task.
707 The preemptoff tracer traces the places that disable preemption.
708 Like the irqsoff tracer, it records the maximum latency for
which preemption was disabled. The control of the preemptoff
tracer is much like that of the irqsoff tracer.
712 # echo preemptoff > current_tracer
713 # echo latency-format > trace_options
714 # echo 0 > tracing_max_latency
715 # echo 1 > tracing_enabled
718 # echo 0 > tracing_enabled
722 preemptoff latency trace v1.1.5 on 2.6.26-rc8
723 --------------------------------------------------------------------
724 latency: 29 us, #3/3, CPU#0 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:2)
726 | task: sshd-4261 (uid:0 nice:0 policy:0 rt_prio:0)
728 => started at: do_IRQ
729 => ended at: __do_softirq
732 # / _-----=> irqs-off
733 # | / _----=> need-resched
734 # || / _---=> hardirq/softirq
735 # ||| / _--=> preempt-depth
738 # cmd pid ||||| time | caller
740 sshd-4261 0d.h. 0us+: irq_enter (do_IRQ)
741 sshd-4261 0d.s. 29us : _local_bh_enable (__do_softirq)
742 sshd-4261 0d.s1 30us : trace_preempt_on (__do_softirq)
This has some more changes. Preemption was disabled when an
interrupt came in (notice the 'h'), and was enabled while doing
a softirq (notice the 's'). But we also see that interrupts
have been disabled when entering the preempt off section and
leaving it (the 'd'). We do not know if interrupts were enabled
in the mean time. Here is the same trace with ftrace_enabled set:
755 --------------------------------------------------------------------
756 latency: 63 us, #87/87, CPU#0 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:2)
758 | task: sshd-4261 (uid:0 nice:0 policy:0 rt_prio:0)
760 => started at: remove_wait_queue
761 => ended at: __do_softirq
764 # / _-----=> irqs-off
765 # | / _----=> need-resched
766 # || / _---=> hardirq/softirq
767 # ||| / _--=> preempt-depth
770 # cmd pid ||||| time | caller
772 sshd-4261 0d..1 0us : _spin_lock_irqsave (remove_wait_queue)
773 sshd-4261 0d..1 1us : _spin_unlock_irqrestore (remove_wait_queue)
774 sshd-4261 0d..1 2us : do_IRQ (common_interrupt)
775 sshd-4261 0d..1 2us : irq_enter (do_IRQ)
776 sshd-4261 0d..1 2us : idle_cpu (irq_enter)
777 sshd-4261 0d..1 3us : add_preempt_count (irq_enter)
778 sshd-4261 0d.h1 3us : idle_cpu (irq_enter)
779 sshd-4261 0d.h. 4us : handle_fasteoi_irq (do_IRQ)
781 sshd-4261 0d.h. 12us : add_preempt_count (_spin_lock)
782 sshd-4261 0d.h1 12us : ack_ioapic_quirk_irq (handle_fasteoi_irq)
783 sshd-4261 0d.h1 13us : move_native_irq (ack_ioapic_quirk_irq)
784 sshd-4261 0d.h1 13us : _spin_unlock (handle_fasteoi_irq)
785 sshd-4261 0d.h1 14us : sub_preempt_count (_spin_unlock)
786 sshd-4261 0d.h1 14us : irq_exit (do_IRQ)
787 sshd-4261 0d.h1 15us : sub_preempt_count (irq_exit)
788 sshd-4261 0d..2 15us : do_softirq (irq_exit)
789 sshd-4261 0d... 15us : __do_softirq (do_softirq)
790 sshd-4261 0d... 16us : __local_bh_disable (__do_softirq)
791 sshd-4261 0d... 16us+: add_preempt_count (__local_bh_disable)
792 sshd-4261 0d.s4 20us : add_preempt_count (__local_bh_disable)
793 sshd-4261 0d.s4 21us : sub_preempt_count (local_bh_enable)
794 sshd-4261 0d.s5 21us : sub_preempt_count (local_bh_enable)
796 sshd-4261 0d.s6 41us : add_preempt_count (__local_bh_disable)
797 sshd-4261 0d.s6 42us : sub_preempt_count (local_bh_enable)
798 sshd-4261 0d.s7 42us : sub_preempt_count (local_bh_enable)
799 sshd-4261 0d.s5 43us : add_preempt_count (__local_bh_disable)
800 sshd-4261 0d.s5 43us : sub_preempt_count (local_bh_enable_ip)
801 sshd-4261 0d.s6 44us : sub_preempt_count (local_bh_enable_ip)
802 sshd-4261 0d.s5 44us : add_preempt_count (__local_bh_disable)
803 sshd-4261 0d.s5 45us : sub_preempt_count (local_bh_enable)
805 sshd-4261 0d.s. 63us : _local_bh_enable (__do_softirq)
806 sshd-4261 0d.s1 64us : trace_preempt_on (__do_softirq)
809 The above is an example of the preemptoff trace with
810 ftrace_enabled set. Here we see that interrupts were disabled
811 the entire time. The irq_enter code lets us know that we entered
812 an interrupt 'h'. Before that, the functions being traced still
813 show that it is not in an interrupt, but we can see from the
814 functions themselves that this is not the case.
816 Notice that __do_softirq when called does not have a
817 preempt_count. It may seem that we missed a preempt enabling.
818 What really happened is that the preempt count is held on the
819 thread's stack and we switched to the softirq stack (4K stacks
820 in effect). The code does not copy the preempt count, but
821 because interrupts are disabled, we do not need to worry about
822 it. Having a tracer like this is good for letting people know
823 what really happens inside the kernel.
preemptirqsoff
--------------

Knowing the locations that have interrupts disabled or
830 preemption disabled for the longest times is helpful. But
831 sometimes we would like to know when either preemption and/or
832 interrupts are disabled.
834 Consider the following code:
local_irq_disable();
call_function_with_irqs_off();
preempt_disable();
call_function_with_irqs_and_preemption_off();
local_irq_enable();
call_function_with_preemption_off();
preempt_enable();
844 The irqsoff tracer will record the total length of
845 call_function_with_irqs_off() and
846 call_function_with_irqs_and_preemption_off().
848 The preemptoff tracer will record the total length of
849 call_function_with_irqs_and_preemption_off() and
850 call_function_with_preemption_off().
852 But neither will trace the time that interrupts and/or
853 preemption is disabled. This total time is the time that we can
not schedule. To record this time, use the preemptirqsoff
tracer.

Again, using this trace is much like the irqsoff and preemptoff
tracers.
860 # echo preemptirqsoff > current_tracer
861 # echo latency-format > trace_options
862 # echo 0 > tracing_max_latency
# echo 1 > tracing_enabled
# ls
# echo 0 > tracing_enabled
# cat trace
868 # tracer: preemptirqsoff
870 preemptirqsoff latency trace v1.1.5 on 2.6.26-rc8
871 --------------------------------------------------------------------
872 latency: 293 us, #3/3, CPU#0 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:2)
874 | task: ls-4860 (uid:0 nice:0 policy:0 rt_prio:0)
876 => started at: apic_timer_interrupt
877 => ended at: __do_softirq
880 # / _-----=> irqs-off
881 # | / _----=> need-resched
882 # || / _---=> hardirq/softirq
883 # ||| / _--=> preempt-depth
886 # cmd pid ||||| time | caller
888 ls-4860 0d... 0us!: trace_hardirqs_off_thunk (apic_timer_interrupt)
889 ls-4860 0d.s. 294us : _local_bh_enable (__do_softirq)
890 ls-4860 0d.s1 294us : trace_preempt_on (__do_softirq)
894 The trace_hardirqs_off_thunk is called from assembly on x86 when
895 interrupts are disabled in the assembly code. Without the
896 function tracing, we do not know if interrupts were enabled
within the preemption points. We do see that it started with
preemption enabled.
900 Here is a trace with ftrace_enabled set:
903 # tracer: preemptirqsoff
905 preemptirqsoff latency trace v1.1.5 on 2.6.26-rc8
906 --------------------------------------------------------------------
907 latency: 105 us, #183/183, CPU#0 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:2)
909 | task: sshd-4261 (uid:0 nice:0 policy:0 rt_prio:0)
911 => started at: write_chan
912 => ended at: __do_softirq
915 # / _-----=> irqs-off
916 # | / _----=> need-resched
917 # || / _---=> hardirq/softirq
918 # ||| / _--=> preempt-depth
921 # cmd pid ||||| time | caller
923 ls-4473 0.N.. 0us : preempt_schedule (write_chan)
924 ls-4473 0dN.1 1us : _spin_lock (schedule)
925 ls-4473 0dN.1 2us : add_preempt_count (_spin_lock)
926 ls-4473 0d..2 2us : put_prev_task_fair (schedule)
928 ls-4473 0d..2 13us : set_normalized_timespec (ktime_get_ts)
929 ls-4473 0d..2 13us : __switch_to (schedule)
930 sshd-4261 0d..2 14us : finish_task_switch (schedule)
931 sshd-4261 0d..2 14us : _spin_unlock_irq (finish_task_switch)
932 sshd-4261 0d..1 15us : add_preempt_count (_spin_lock_irqsave)
933 sshd-4261 0d..2 16us : _spin_unlock_irqrestore (hrtick_set)
934 sshd-4261 0d..2 16us : do_IRQ (common_interrupt)
935 sshd-4261 0d..2 17us : irq_enter (do_IRQ)
936 sshd-4261 0d..2 17us : idle_cpu (irq_enter)
937 sshd-4261 0d..2 18us : add_preempt_count (irq_enter)
938 sshd-4261 0d.h2 18us : idle_cpu (irq_enter)
939 sshd-4261 0d.h. 18us : handle_fasteoi_irq (do_IRQ)
940 sshd-4261 0d.h. 19us : _spin_lock (handle_fasteoi_irq)
941 sshd-4261 0d.h. 19us : add_preempt_count (_spin_lock)
942 sshd-4261 0d.h1 20us : _spin_unlock (handle_fasteoi_irq)
943 sshd-4261 0d.h1 20us : sub_preempt_count (_spin_unlock)
945 sshd-4261 0d.h1 28us : _spin_unlock (handle_fasteoi_irq)
946 sshd-4261 0d.h1 29us : sub_preempt_count (_spin_unlock)
947 sshd-4261 0d.h2 29us : irq_exit (do_IRQ)
948 sshd-4261 0d.h2 29us : sub_preempt_count (irq_exit)
949 sshd-4261 0d..3 30us : do_softirq (irq_exit)
950 sshd-4261 0d... 30us : __do_softirq (do_softirq)
951 sshd-4261 0d... 31us : __local_bh_disable (__do_softirq)
952 sshd-4261 0d... 31us+: add_preempt_count (__local_bh_disable)
953 sshd-4261 0d.s4 34us : add_preempt_count (__local_bh_disable)
955 sshd-4261 0d.s3 43us : sub_preempt_count (local_bh_enable_ip)
956 sshd-4261 0d.s4 44us : sub_preempt_count (local_bh_enable_ip)
957 sshd-4261 0d.s3 44us : smp_apic_timer_interrupt (apic_timer_interrupt)
958 sshd-4261 0d.s3 45us : irq_enter (smp_apic_timer_interrupt)
959 sshd-4261 0d.s3 45us : idle_cpu (irq_enter)
960 sshd-4261 0d.s3 46us : add_preempt_count (irq_enter)
961 sshd-4261 0d.H3 46us : idle_cpu (irq_enter)
962 sshd-4261 0d.H3 47us : hrtimer_interrupt (smp_apic_timer_interrupt)
963 sshd-4261 0d.H3 47us : ktime_get (hrtimer_interrupt)
965 sshd-4261 0d.H3 81us : tick_program_event (hrtimer_interrupt)
966 sshd-4261 0d.H3 82us : ktime_get (tick_program_event)
967 sshd-4261 0d.H3 82us : ktime_get_ts (ktime_get)
968 sshd-4261 0d.H3 83us : getnstimeofday (ktime_get_ts)
969 sshd-4261 0d.H3 83us : set_normalized_timespec (ktime_get_ts)
970 sshd-4261 0d.H3 84us : clockevents_program_event (tick_program_event)
971 sshd-4261 0d.H3 84us : lapic_next_event (clockevents_program_event)
972 sshd-4261 0d.H3 85us : irq_exit (smp_apic_timer_interrupt)
973 sshd-4261 0d.H3 85us : sub_preempt_count (irq_exit)
974 sshd-4261 0d.s4 86us : sub_preempt_count (irq_exit)
975 sshd-4261 0d.s3 86us : add_preempt_count (__local_bh_disable)
977 sshd-4261 0d.s1 98us : sub_preempt_count (net_rx_action)
978 sshd-4261 0d.s. 99us : add_preempt_count (_spin_lock_irq)
979 sshd-4261 0d.s1 99us+: _spin_unlock_irq (run_timer_softirq)
980 sshd-4261 0d.s. 104us : _local_bh_enable (__do_softirq)
981 sshd-4261 0d.s. 104us : sub_preempt_count (_local_bh_enable)
982 sshd-4261 0d.s. 105us : _local_bh_enable (__do_softirq)
983 sshd-4261 0d.s1 105us : trace_preempt_on (__do_softirq)
986 This is a very interesting trace. It started with the preemption
987 of the ls task. We see that the task had the "need_resched" bit
988 set via the 'N' in the trace. Interrupts were disabled before
989 the spin_lock at the beginning of the trace. We see that a
990 schedule took place to run sshd. When the interrupts were
991 enabled, we took an interrupt. On return from the interrupt
992 handler, the softirq ran. We took another interrupt while
993 running the softirq as we see from the capital 'H'.
wakeup
------

In a Real-Time environment it is very important to know the
time it takes from when the highest priority task is woken up
to when it actually executes. This is also known as "schedule
1002 latency". I stress the point that this is about RT tasks. It is
1003 also important to know the scheduling latency of non-RT tasks,
1004 but the average schedule latency is better for non-RT tasks.
Tools like LatencyTop are more appropriate for such
measurements.
1008 Real-Time environments are interested in the worst case latency.
1009 That is the longest latency it takes for something to happen,
1010 and not the average. We can have a very fast scheduler that may
1011 only have a large latency once in a while, but that would not
1012 work well with Real-Time tasks. The wakeup tracer was designed
1013 to record the worst case wakeups of RT tasks. Non-RT tasks are
1014 not recorded because the tracer only records one worst case and
1015 tracing non-RT tasks that are unpredictable will overwrite the
1016 worst case latency of RT tasks.
1018 Since this tracer only deals with RT tasks, we will run this
1019 slightly differently than we did with the previous tracers.
1020 Instead of performing an 'ls', we will run 'sleep 1' under
1021 'chrt' which changes the priority of the task.
1023 # echo wakeup > current_tracer
1024 # echo latency-format > trace_options
1025 # echo 0 > tracing_max_latency
# echo 1 > tracing_enabled
# chrt -f 5 sleep 1
# echo 0 > tracing_enabled
# cat trace
1032 wakeup latency trace v1.1.5 on 2.6.26-rc8
1033 --------------------------------------------------------------------
1034 latency: 4 us, #2/2, CPU#1 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:2)
1036 | task: sleep-4901 (uid:0 nice:0 policy:1 rt_prio:5)
1040 # / _-----=> irqs-off
1041 # | / _----=> need-resched
1042 # || / _---=> hardirq/softirq
1043 # ||| / _--=> preempt-depth
1046 # cmd pid ||||| time | caller
1048 <idle>-0 1d.h4 0us+: try_to_wake_up (wake_up_process)
1049 <idle>-0 1d..4 4us : schedule (cpu_idle)
1052 Running this on an idle system, we see that it only took 4
1053 microseconds to perform the task switch. Note, since the trace
1054 marker in the schedule is before the actual "switch", we stop
1055 the tracing when the recorded task is about to schedule in. This
1056 may change if we add a new marker at the end of the scheduler.
1058 Notice that the recorded task is 'sleep' with the PID of 4901
1059 and it has an rt_prio of 5. This priority is user-space priority
1060 and not the internal kernel priority. The policy is 1 for
1061 SCHED_FIFO and 2 for SCHED_RR.
1063 Doing the same with chrt -r 5 and ftrace_enabled set.
1067 wakeup latency trace v1.1.5 on 2.6.26-rc8
1068 --------------------------------------------------------------------
1069 latency: 50 us, #60/60, CPU#1 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:2)
1071 | task: sleep-4068 (uid:0 nice:0 policy:2 rt_prio:5)
1075 # / _-----=> irqs-off
1076 # | / _----=> need-resched
1077 # || / _---=> hardirq/softirq
1078 # ||| / _--=> preempt-depth
1081 # cmd pid ||||| time | caller
1083 ksoftirq-7 1d.H3 0us : try_to_wake_up (wake_up_process)
1084 ksoftirq-7 1d.H4 1us : sub_preempt_count (marker_probe_cb)
1085 ksoftirq-7 1d.H3 2us : check_preempt_wakeup (try_to_wake_up)
1086 ksoftirq-7 1d.H3 3us : update_curr (check_preempt_wakeup)
1087 ksoftirq-7 1d.H3 4us : calc_delta_mine (update_curr)
1088 ksoftirq-7 1d.H3 5us : __resched_task (check_preempt_wakeup)
1089 ksoftirq-7 1d.H3 6us : task_wake_up_rt (try_to_wake_up)
1090 ksoftirq-7 1d.H3 7us : _spin_unlock_irqrestore (try_to_wake_up)
1092 ksoftirq-7 1d.H2 17us : irq_exit (smp_apic_timer_interrupt)
1093 ksoftirq-7 1d.H2 18us : sub_preempt_count (irq_exit)
1094 ksoftirq-7 1d.s3 19us : sub_preempt_count (irq_exit)
1095 ksoftirq-7 1..s2 20us : rcu_process_callbacks (__do_softirq)
1097 ksoftirq-7 1..s2 26us : __rcu_process_callbacks (rcu_process_callbacks)
1098 ksoftirq-7 1d.s2 27us : _local_bh_enable (__do_softirq)
1099 ksoftirq-7 1d.s2 28us : sub_preempt_count (_local_bh_enable)
1100 ksoftirq-7 1.N.3 29us : sub_preempt_count (ksoftirqd)
1101 ksoftirq-7 1.N.2 30us : _cond_resched (ksoftirqd)
1102 ksoftirq-7 1.N.2 31us : __cond_resched (_cond_resched)
1103 ksoftirq-7 1.N.2 32us : add_preempt_count (__cond_resched)
1104 ksoftirq-7 1.N.2 33us : schedule (__cond_resched)
1105 ksoftirq-7 1.N.2 33us : add_preempt_count (schedule)
1106 ksoftirq-7 1.N.3 34us : hrtick_clear (schedule)
1107 ksoftirq-7 1dN.3 35us : _spin_lock (schedule)
1108 ksoftirq-7 1dN.3 36us : add_preempt_count (_spin_lock)
1109 ksoftirq-7 1d..4 37us : put_prev_task_fair (schedule)
1110 ksoftirq-7 1d..4 38us : update_curr (put_prev_task_fair)
1112 ksoftirq-7 1d..5 47us : _spin_trylock (tracing_record_cmdline)
1113 ksoftirq-7 1d..5 48us : add_preempt_count (_spin_trylock)
1114 ksoftirq-7 1d..6 49us : _spin_unlock (tracing_record_cmdline)
1115 ksoftirq-7 1d..6 49us : sub_preempt_count (_spin_unlock)
1116 ksoftirq-7 1d..4 50us : schedule (__cond_resched)
1118 The interrupt went off while running ksoftirqd. This task runs
at SCHED_OTHER. Why did we not see the 'N' set early? This may
1120 be a harmless bug with x86_32 and 4K stacks. On x86_32 with 4K
1121 stacks configured, the interrupt and softirq run with their own
1122 stack. Some information is held on the top of the task's stack
1123 (need_resched and preempt_count are both stored there). The
1124 setting of the NEED_RESCHED bit is done directly to the task's
1125 stack, but the reading of the NEED_RESCHED is done by looking at
1126 the current stack, which in this case is the stack for the hard
1127 interrupt. This hides the fact that NEED_RESCHED has been set.
We do not see the 'N' until we switch back to the task's
assigned stack.
function
--------

This tracer is the function tracer. Enabling the function tracer
can be done from the debug file system. Make sure
ftrace_enabled is set; otherwise this tracer is a nop.
1138 # sysctl kernel.ftrace_enabled=1
1139 # echo function > current_tracer
1140 # echo 1 > tracing_enabled
1142 # echo 0 > tracing_enabled
1146 # TASK-PID CPU# TIMESTAMP FUNCTION
1148 bash-4003 [00] 123.638713: finish_task_switch <-schedule
1149 bash-4003 [00] 123.638714: _spin_unlock_irq <-finish_task_switch
1150 bash-4003 [00] 123.638714: sub_preempt_count <-_spin_unlock_irq
1151 bash-4003 [00] 123.638715: hrtick_set <-schedule
1152 bash-4003 [00] 123.638715: _spin_lock_irqsave <-hrtick_set
1153 bash-4003 [00] 123.638716: add_preempt_count <-_spin_lock_irqsave
1154 bash-4003 [00] 123.638716: _spin_unlock_irqrestore <-hrtick_set
1155 bash-4003 [00] 123.638717: sub_preempt_count <-_spin_unlock_irqrestore
1156 bash-4003 [00] 123.638717: hrtick_clear <-hrtick_set
1157 bash-4003 [00] 123.638718: sub_preempt_count <-schedule
1158 bash-4003 [00] 123.638718: sub_preempt_count <-preempt_schedule
1159 bash-4003 [00] 123.638719: wait_for_completion <-__stop_machine_run
1160 bash-4003 [00] 123.638719: wait_for_common <-wait_for_completion
1161 bash-4003 [00] 123.638720: _spin_lock_irq <-wait_for_common
1162 bash-4003 [00] 123.638720: add_preempt_count <-_spin_lock_irq
1166 Note: function tracer uses ring buffers to store the above
1167 entries. The newest data may overwrite the oldest data.
1168 Sometimes using echo to stop the trace is not sufficient because
1169 the tracing could have overwritten the data that you wanted to
1170 record. For this reason, it is sometimes better to disable
1171 tracing directly from a program. This allows you to stop the
1172 tracing at the point that you hit the part that you are
1173 interested in. To disable the tracing directly from a C program,
something like the following code snippet can be used:
int trace_fd;
[...]
int main(int argc, char *argv[]) {
	[...]
	trace_fd = open(tracing_file("tracing_enabled"), O_WRONLY);
	[...]
	if (condition_hit()) {
		write(trace_fd, "0", 1);
	}
	[...]
}
1189 Single thread tracing
1190 ---------------------
1192 By writing into set_ftrace_pid you can trace a
1193 single thread. For example:
# cat set_ftrace_pid
no pid
# echo 3111 > set_ftrace_pid
# cat set_ftrace_pid
3111
# echo function > current_tracer
# cat trace | head
1204 # TASK-PID CPU# TIMESTAMP FUNCTION
1206 yum-updatesd-3111 [003] 1637.254676: finish_task_switch <-thread_return
1207 yum-updatesd-3111 [003] 1637.254681: hrtimer_cancel <-schedule_hrtimeout_range
1208 yum-updatesd-3111 [003] 1637.254682: hrtimer_try_to_cancel <-hrtimer_cancel
1209 yum-updatesd-3111 [003] 1637.254683: lock_hrtimer_base <-hrtimer_try_to_cancel
1210 yum-updatesd-3111 [003] 1637.254685: fget_light <-do_sys_poll
1211 yum-updatesd-3111 [003] 1637.254686: pipe_poll <-do_sys_poll
1212 # echo -1 > set_ftrace_pid
1216 # TASK-PID CPU# TIMESTAMP FUNCTION
1218 ##### CPU 3 buffer started ####
1219 yum-updatesd-3111 [003] 1701.957688: free_poll_entry <-poll_freewait
1220 yum-updatesd-3111 [003] 1701.957689: remove_wait_queue <-free_poll_entry
1221 yum-updatesd-3111 [003] 1701.957691: fput <-free_poll_entry
1222 yum-updatesd-3111 [003] 1701.957692: audit_syscall_exit <-sysret_audit
1223 yum-updatesd-3111 [003] 1701.957693: path_put <-audit_syscall_exit
If you want to trace the kernel functions executed while running
a specific command, you could use something like this simple
program:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/stat.h>
#define _STR(x) #x
#define STR(x) _STR(x)
1237 #define MAX_PATH 256
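/*
 * find_debugfs() scans /proc/mounts for a file system of type
 * "debugfs" and returns the directory it is mounted on.
 */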
const char *find_debugfs(void)
{
	static char debugfs[MAX_PATH+1];
	static int debugfs_found;
	char type[100];
	FILE *fp;
	if ((fp = fopen("/proc/mounts","r")) == NULL) {
		perror("/proc/mounts");
		return NULL;
	}
	while (fscanf(fp, "%*s %"
		      STR(MAX_PATH)
		      "s %99s %*s %*d %*d\n",
		      debugfs, type) == 2) {
1258 if (strcmp(type, "debugfs") == 0)
1263 if (strcmp(type, "debugfs") != 0) {
1264 fprintf(stderr, "debugfs not mounted");
const char *tracing_file(const char *file_name)
{
	static char trace_file[MAX_PATH+1];
	snprintf(trace_file, MAX_PATH, "%s/%s", find_debugfs(), file_name);
	return trace_file;
}
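/*
 * main(): reset the tracer to "nop", write our own pid into
 * set_ftrace_pid, enable the function tracer and then exec the
 * command given on the command line so that only it is traced.
 */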
1280 int main (int argc, char **argv)
1290 ffd = open(tracing_file("current_tracer"), O_WRONLY);
1293 write(ffd, "nop", 3);
1295 fd = open(tracing_file("set_ftrace_pid"), O_WRONLY);
	s = sprintf(line, "%d\n", getpid());
	write(fd, line, s);

	write(ffd, "function", 8);
1304 execvp(argv[1], argv+1);
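Assuming the program above is compiled into a binary called, say,
ftrace-me (the name is only for illustration), it can be used to
trace the kernel functions executed on behalf of a single command:

  # ./ftrace-me ls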
1311 hw-branch-tracer (x86 only)
1312 ---------------------------
1314 This tracer uses the x86 last branch tracing hardware feature to
1315 collect a branch trace on all cpus with relatively low overhead.
The tracer uses a fixed-size circular buffer per cpu and only
traces ring 0 branches. The trace file dumps that buffer in the
following format:
1321 # tracer: hw-branch-tracer
1324 0 scheduler_tick+0xb5/0x1bf <- task_tick_idle+0x5/0x6
1325 2 run_posix_cpu_timers+0x2b/0x72a <- run_posix_cpu_timers+0x25/0x72a
1326 0 scheduler_tick+0x139/0x1bf <- scheduler_tick+0xed/0x1bf
1327 0 scheduler_tick+0x17c/0x1bf <- scheduler_tick+0x148/0x1bf
1328 2 run_posix_cpu_timers+0x9e/0x72a <- run_posix_cpu_timers+0x5e/0x72a
1329 0 scheduler_tick+0x1b6/0x1bf <- scheduler_tick+0x1aa/0x1bf
1332 The tracer may be used to dump the trace for the oops'ing cpu on
1333 a kernel oops into the system log. To enable this,
1334 ftrace_dump_on_oops must be set. To set ftrace_dump_on_oops, one
can either use the sysctl function or set it via the proc system
interface.
1338 sysctl kernel.ftrace_dump_on_oops=1
1342 echo 1 > /proc/sys/kernel/ftrace_dump_on_oops
1345 Here's an example of such a dump after a null pointer
1346 dereference in a kernel module:
1348 [57848.105921] BUG: unable to handle kernel NULL pointer dereference at 0000000000000000
1349 [57848.106019] IP: [<ffffffffa0000006>] open+0x6/0x14 [oops]
1350 [57848.106019] PGD 2354e9067 PUD 2375e7067 PMD 0
1351 [57848.106019] Oops: 0002 [#1] SMP
1352 [57848.106019] last sysfs file: /sys/devices/pci0000:00/0000:00:1e.0/0000:20:05.0/local_cpus
1353 [57848.106019] Dumping ftrace buffer:
1354 [57848.106019] ---------------------------------
1356 [57848.106019] 0 chrdev_open+0xe6/0x165 <- cdev_put+0x23/0x24
1357 [57848.106019] 0 chrdev_open+0x117/0x165 <- chrdev_open+0xfa/0x165
1358 [57848.106019] 0 chrdev_open+0x120/0x165 <- chrdev_open+0x11c/0x165
1359 [57848.106019] 0 chrdev_open+0x134/0x165 <- chrdev_open+0x12b/0x165
1360 [57848.106019] 0 open+0x0/0x14 [oops] <- chrdev_open+0x144/0x165
1361 [57848.106019] 0 page_fault+0x0/0x30 <- open+0x6/0x14 [oops]
1362 [57848.106019] 0 error_entry+0x0/0x5b <- page_fault+0x4/0x30
1363 [57848.106019] 0 error_kernelspace+0x0/0x31 <- error_entry+0x59/0x5b
1364 [57848.106019] 0 error_sti+0x0/0x1 <- error_kernelspace+0x2d/0x31
1365 [57848.106019] 0 page_fault+0x9/0x30 <- error_sti+0x0/0x1
1366 [57848.106019] 0 do_page_fault+0x0/0x881 <- page_fault+0x1a/0x30
1368 [57848.106019] 0 do_page_fault+0x66b/0x881 <- is_prefetch+0x1ee/0x1f2
1369 [57848.106019] 0 do_page_fault+0x6e0/0x881 <- do_page_fault+0x67a/0x881
1370 [57848.106019] 0 oops_begin+0x0/0x96 <- do_page_fault+0x6e0/0x881
1371 [57848.106019] 0 trace_hw_branch_oops+0x0/0x2d <- oops_begin+0x9/0x96
1373 [57848.106019] 0 ds_suspend_bts+0x2a/0xe3 <- ds_suspend_bts+0x1a/0xe3
1374 [57848.106019] ---------------------------------
1375 [57848.106019] CPU 0
1376 [57848.106019] Modules linked in: oops
1377 [57848.106019] Pid: 5542, comm: cat Tainted: G W 2.6.28 #23
1378 [57848.106019] RIP: 0010:[<ffffffffa0000006>] [<ffffffffa0000006>] open+0x6/0x14 [oops]
1379 [57848.106019] RSP: 0018:ffff880235457d48 EFLAGS: 00010246
1383 function graph tracer
1384 ---------------------------
1386 This tracer is similar to the function tracer except that it
1387 probes a function on its entry and its exit. This is done by
1388 using a dynamically allocated stack of return addresses in each
1389 task_struct. On function entry the tracer overwrites the return
1390 address of each function traced to set a custom probe. Thus the
original return address is stored on the stack of return
addresses in the task_struct.
Probing on both ends of a function leads to special features
such as:

- measuring a function's execution time
- having a reliable call stack to draw the function call graph
This tracer is useful in several situations:

- you want to find the reason for a strange kernel behavior and
  need to see what happens in detail in any area (or in specific
  ones)

- you are experiencing weird latencies but it's difficult to
  find their origin

- you want to find quickly which path is taken by a specific
  function

- you just want to peek inside a working kernel and want to see
  what happens there
1415 # tracer: function_graph
1417 # CPU DURATION FUNCTION CALLS
1421 0) | do_sys_open() {
1423 0) | kmem_cache_alloc() {
1424 0) 1.382 us | __might_sleep();
1426 0) | strncpy_from_user() {
1427 0) | might_fault() {
1428 0) 1.389 us | __might_sleep();
1433 0) 0.668 us | _spin_lock();
1434 0) 0.570 us | expand_files();
1435 0) 0.586 us | _spin_unlock();
1438 There are several columns that can be dynamically
1439 enabled/disabled. You can use every combination of options you
1440 want, depending on your needs.
- The cpu number on which the function executed is default
  enabled. It is sometimes better to only trace one cpu (see
  the tracing_cpumask file) or you might sometimes see unordered
  function calls while cpu tracing switches.
1447 hide: echo nofuncgraph-cpu > trace_options
1448 show: echo funcgraph-cpu > trace_options
- The duration (function's time of execution) is displayed on
  the closing bracket line of a function or on the same line
  as the current function in the case of a leaf one. It is
  default enabled.
1455 hide: echo nofuncgraph-duration > trace_options
1456 show: echo funcgraph-duration > trace_options
- The overhead field precedes the duration field when a
  duration threshold is reached.
1461 hide: echo nofuncgraph-overhead > trace_options
1462 show: echo funcgraph-overhead > trace_options
1463 depends on: funcgraph-duration
1468 0) 0.646 us | _spin_lock_irqsave();
1469 0) 0.684 us | _spin_unlock_irqrestore();
1471 0) 0.548 us | fput();
1477 0) | kmem_cache_free() {
1478 0) 0.518 us | __phys_addr();
1484 + means that the function exceeded 10 usecs.
1485 ! means that the function exceeded 100 usecs.
1488 - The task/pid field displays the thread cmdline and pid which
1489 executed the function. It is default disabled.
1491 hide: echo nofuncgraph-proc > trace_options
1492 show: echo funcgraph-proc > trace_options
1496 # tracer: function_graph
1498 # CPU TASK/PID DURATION FUNCTION CALLS
1500 0) sh-4802 | | d_free() {
1501 0) sh-4802 | | call_rcu() {
1502 0) sh-4802 | | __call_rcu() {
1503 0) sh-4802 | 0.616 us | rcu_process_gp_end();
1504 0) sh-4802 | 0.586 us | check_for_new_grace_period();
1505 0) sh-4802 | 2.899 us | }
1506 0) sh-4802 | 4.040 us | }
1507 0) sh-4802 | 5.151 us | }
1508 0) sh-4802 | + 49.370 us | }
1511 - The absolute time field is an absolute timestamp given by the
1512 system clock since it started. A snapshot of this time is
1513 given on each entry/exit of functions
1515 hide: echo nofuncgraph-abstime > trace_options
1516 show: echo funcgraph-abstime > trace_options
1521 # TIME CPU DURATION FUNCTION CALLS
1523 360.774522 | 1) 0.541 us | }
1524 360.774522 | 1) 4.663 us | }
1525 360.774523 | 1) 0.541 us | __wake_up_bit();
1526 360.774524 | 1) 6.796 us | }
1527 360.774524 | 1) 7.952 us | }
1528 360.774525 | 1) 9.063 us | }
1529 360.774525 | 1) 0.615 us | journal_mark_dirty();
1530 360.774527 | 1) 0.578 us | __brelse();
1531 360.774528 | 1) | reiserfs_prepare_for_journal() {
1532 360.774528 | 1) | unlock_buffer() {
1533 360.774529 | 1) | wake_up_bit() {
1534 360.774529 | 1) | bit_waitqueue() {
1535 360.774530 | 1) 0.594 us | __phys_addr();
You can put some comments on specific functions by using
trace_printk(). For example, if you want to put a comment inside
the __might_sleep() function, you just have to include
<linux/ftrace.h> and call trace_printk() inside __might_sleep():

trace_printk("I'm a comment!\n");
1547 1) | __might_sleep() {
1548 1) | /* I'm a comment! */
You might find other useful features for this tracer in the
following "dynamic ftrace" section such as tracing only specific
functions or tasks.
dynamic ftrace
--------------

If CONFIG_DYNAMIC_FTRACE is set, the system will run with
virtually no overhead when function tracing is disabled. The way
this works is the mcount function call (placed at the start of
every kernel function, produced by the -pg switch in gcc),
starts off pointing to a simple return. (Enabling FTRACE will
include the -pg switch in the compiling of the kernel.)
1566 At compile time every C file object is run through the
1567 recordmcount.pl script (located in the scripts directory). This
1568 script will process the C object using objdump to find all the
1569 locations in the .text section that call mcount. (Note, only the
1570 .text section is processed, since processing other sections like
1571 .init.text may cause races due to those sections being freed).
1573 A new section called "__mcount_loc" is created that holds
1574 references to all the mcount call sites in the .text section.
1575 This section is compiled back into the original object. The
1576 final linker will add all these references into a single table.
1578 On boot up, before SMP is initialized, the dynamic ftrace code
1579 scans this table and updates all the locations into nops. It
1580 also records the locations, which are added to the
1581 available_filter_functions list. Modules are processed as they
1582 are loaded and before they are executed. When a module is
1583 unloaded, it also removes its functions from the ftrace function
1584 list. This is automatic in the module unload code, and the
1585 module author does not need to worry about it.
1587 When tracing is enabled, kstop_machine is called to prevent
1588 races with the CPUS executing code being modified (which can
cause the CPU to do undesirable things), and the nops are
patched back to calls. But this time, they do not call mcount
(which is just a function stub). They now call into the ftrace
infrastructure.
1594 One special side-effect to the recording of the functions being
1595 traced is that we can now selectively choose which functions we
wish to trace and which ones we want the mcount calls to remain
as nops.
Two files are used, one for enabling and one for disabling the
tracing of specified functions. They are:

  set_ftrace_filter

and

  set_ftrace_notrace
A list of available functions that you can add to these files is
listed in:
1611 available_filter_functions
1613 # cat available_filter_functions
1622 If I am only interested in sys_nanosleep and hrtimer_interrupt:
# echo sys_nanosleep hrtimer_interrupt \
		> set_ftrace_filter
# echo function > current_tracer
# echo 1 > tracing_enabled
# usleep 1
# echo 0 > tracing_enabled
# cat trace
1633 # TASK-PID CPU# TIMESTAMP FUNCTION
1635 usleep-4134 [00] 1317.070017: hrtimer_interrupt <-smp_apic_timer_interrupt
1636 usleep-4134 [00] 1317.070111: sys_nanosleep <-syscall_call
1637 <idle>-0 [00] 1317.070115: hrtimer_interrupt <-smp_apic_timer_interrupt
1639 To see which functions are being traced, you can cat the file:
# cat set_ftrace_filter
sys_nanosleep
hrtimer_interrupt
1646 Perhaps this is not enough. The filters also allow simple wild
1647 cards. Only the following are currently available
1649 <match>* - will match functions that begin with <match>
1650 *<match> - will match functions that end with <match>
1651 *<match>* - will match functions that have <match> in it
1653 These are the only wild cards which are supported.
1655 <match>*<match> will not work.
1657 Note: It is better to use quotes to enclose the wild cards,
1658 otherwise the shell may expand the parameters into names
1659 of files in the local directory.
1661 # echo 'hrtimer_*' > set_ftrace_filter
1667 # TASK-PID CPU# TIMESTAMP FUNCTION
1669 bash-4003 [00] 1480.611794: hrtimer_init <-copy_process
1670 bash-4003 [00] 1480.611941: hrtimer_start <-hrtick_set
1671 bash-4003 [00] 1480.611956: hrtimer_cancel <-hrtick_clear
1672 bash-4003 [00] 1480.611956: hrtimer_try_to_cancel <-hrtimer_cancel
1673 <idle>-0 [00] 1480.612019: hrtimer_get_next_event <-get_next_timer_interrupt
1674 <idle>-0 [00] 1480.612025: hrtimer_get_next_event <-get_next_timer_interrupt
1675 <idle>-0 [00] 1480.612032: hrtimer_get_next_event <-get_next_timer_interrupt
1676 <idle>-0 [00] 1480.612037: hrtimer_get_next_event <-get_next_timer_interrupt
1677 <idle>-0 [00] 1480.612382: hrtimer_get_next_event <-get_next_timer_interrupt
1680 Notice that we lost the sys_nanosleep.
1682 # cat set_ftrace_filter
1687 hrtimer_try_to_cancel
1691 hrtimer_force_reprogram
1692 hrtimer_get_next_event
1696 hrtimer_get_remaining
1698 hrtimer_init_sleeper
1701 This is because the '>' and '>>' act just like they do in bash.
1702 To rewrite the filters, use '>'
1703 To append to the filters, use '>>'
To clear out a filter so that all functions will be recorded
again:
1708 # echo > set_ftrace_filter
1709 # cat set_ftrace_filter
1712 Again, now we want to append.
1714 # echo sys_nanosleep > set_ftrace_filter
1715 # cat set_ftrace_filter
1717 # echo 'hrtimer_*' >> set_ftrace_filter
1718 # cat set_ftrace_filter
1723 hrtimer_try_to_cancel
1727 hrtimer_force_reprogram
1728 hrtimer_get_next_event
1733 hrtimer_get_remaining
1735 hrtimer_init_sleeper
The set_ftrace_notrace prevents those functions from being
traced.
1741 # echo '*preempt*' '*lock*' > set_ftrace_notrace
1747 # TASK-PID CPU# TIMESTAMP FUNCTION
1749 bash-4043 [01] 115.281644: finish_task_switch <-schedule
1750 bash-4043 [01] 115.281645: hrtick_set <-schedule
1751 bash-4043 [01] 115.281645: hrtick_clear <-hrtick_set
1752 bash-4043 [01] 115.281646: wait_for_completion <-__stop_machine_run
1753 bash-4043 [01] 115.281647: wait_for_common <-wait_for_completion
1754 bash-4043 [01] 115.281647: kthread_stop <-stop_machine_run
1755 bash-4043 [01] 115.281648: init_waitqueue_head <-kthread_stop
1756 bash-4043 [01] 115.281648: wake_up_process <-kthread_stop
1757 bash-4043 [01] 115.281649: try_to_wake_up <-wake_up_process
1759 We can see that there's no more lock or preempt tracing.
1762 Dynamic ftrace with the function graph tracer
1763 ---------------------------------------------
1765 Although what has been explained above concerns both the
1766 function tracer and the function-graph-tracer, there are some
1767 special features only available in the function-graph tracer.
1769 If you want to trace only one function and all of its children,
1770 you just have to echo its name into set_graph_function:
1772 echo __do_fault > set_graph_function
will produce the following "expanded" trace of the __do_fault()
function:
1778 0) | filemap_fault() {
1779 0) | find_lock_page() {
1780 0) 0.804 us | find_get_page();
1781 0) | __might_sleep() {
1785 0) 0.653 us | _spin_lock();
1786 0) 0.578 us | page_add_file_rmap();
1787 0) 0.525 us | native_set_pte_at();
1788 0) 0.585 us | _spin_unlock();
1789 0) | unlock_page() {
1790 0) 0.541 us | page_waitqueue();
1791 0) 0.639 us | __wake_up_bit();
1795 0) | filemap_fault() {
1796 0) | find_lock_page() {
1797 0) 0.698 us | find_get_page();
1798 0) | __might_sleep() {
1802 0) 0.631 us | _spin_lock();
1803 0) 0.571 us | page_add_file_rmap();
1804 0) 0.526 us | native_set_pte_at();
1805 0) 0.586 us | _spin_unlock();
1806 0) | unlock_page() {
1807 0) 0.533 us | page_waitqueue();
1808 0) 0.638 us | __wake_up_bit();
1812 You can also expand several functions at once:
1814 echo sys_open > set_graph_function
1815 echo sys_close >> set_graph_function
1817 Now if you want to go back to trace all functions you can clear
1818 this special filter via:
1820 echo > set_graph_function
trace_pipe
----------

The trace_pipe outputs the same content as the trace file, but
1827 the effect on the tracing is different. Every read from
1828 trace_pipe is consumed. This means that subsequent reads will be
1829 different. The trace is live.
1831 # echo function > current_tracer
1832 # cat trace_pipe > /tmp/trace.out &
1834 # echo 1 > tracing_enabled
# echo 0 > tracing_enabled
# cat trace
1840 # TASK-PID CPU# TIMESTAMP FUNCTION
1844 # cat /tmp/trace.out
1845 bash-4043 [00] 41.267106: finish_task_switch <-schedule
1846 bash-4043 [00] 41.267106: hrtick_set <-schedule
1847 bash-4043 [00] 41.267107: hrtick_clear <-hrtick_set
1848 bash-4043 [00] 41.267108: wait_for_completion <-__stop_machine_run
1849 bash-4043 [00] 41.267108: wait_for_common <-wait_for_completion
1850 bash-4043 [00] 41.267109: kthread_stop <-stop_machine_run
1851 bash-4043 [00] 41.267109: init_waitqueue_head <-kthread_stop
1852 bash-4043 [00] 41.267110: wake_up_process <-kthread_stop
1853 bash-4043 [00] 41.267110: try_to_wake_up <-wake_up_process
1854 bash-4043 [00] 41.267111: select_task_rq_rt <-try_to_wake_up
1857 Note, reading the trace_pipe file will block until more input is
1858 added. By changing the tracer, trace_pipe will issue an EOF. We
needed to set the function tracer _before_ we "cat" the
trace_pipe file.
trace entries
-------------

Having too much or not enough data can be troublesome in
diagnosing an issue in the kernel. The file buffer_size_kb is
used to modify the size of the internal trace buffers. The
number listed is the number of kilobytes each per-CPU buffer
can hold. To know the full size, multiply the number of
possible CPUS by this number.
1873 # cat buffer_size_kb
1874 1408 (units kilobytes)
1876 Note, to modify this, you must have tracing completely disabled.
1877 To do that, echo "nop" into the current_tracer. If the
current_tracer is not set to "nop", an EINVAL error will be
returned.
1881 # echo nop > current_tracer
1882 # echo 10000 > buffer_size_kb
1883 # cat buffer_size_kb
1884 10000 (units kilobytes)
1886 The number of pages which will be allocated is limited to a
percentage of available memory. Allocating too much will produce
an error:
1890 # echo 1000000000000 > buffer_size_kb
1891 -bash: echo: write error: Cannot allocate memory
1892 # cat buffer_size_kb
1897 More details can be found in the source code, in the
1898 kernel/trace/*.c files.