diff --git a/README.md b/README.md index 14f725fa3bd0..fb35f5d79531 100644 --- a/README.md +++ b/README.md @@ -100,6 +100,7 @@ Examples: - tools/[hardirqs](tools/hardirqs.py): Measure hard IRQ (hard interrupt) event time. [Examples](tools/hardirqs_example.txt). - tools/[killsnoop](tools/killsnoop.py): Trace signals issued by the kill() syscall. [Examples](tools/killsnoop_example.txt). - tools/[slabratetop](tools/slabratetop.py): Kernel SLAB/SLUB memory cache allocation rate top. [Examples](tools/slabratetop_example.txt). +- tools/[llcstat](tools/llcstat.py): Summarize CPU cache references and misses by process. [Examples](tools/llcstat_example.txt). - tools/[mdflush](tools/mdflush.py): Trace md flush events. [Examples](tools/mdflush_example.txt). - tools/[mysqld_qslower](tools/mysqld_qslower.py): Trace MySQL server queries slower than a threshold. [Examples](tools/mysqld_qslower_example.txt). - tools/[memleak](tools/memleak.py): Display outstanding memory allocations to find memory leaks. [Examples](tools/memleak_example.txt). diff --git a/man/man8/llcstat.8 b/man/man8/llcstat.8 index 75706716f9dc..d6fdfe9e45da 100644 --- a/man/man8/llcstat.8 +++ b/man/man8/llcstat.8 @@ -1,12 +1,12 @@ .TH llcstat 8 "2015-08-18" "USER COMMANDS" .SH NAME -llcstat \- Trace cache references and cache misses. Uses Linux eBPF/bcc. +llcstat \- Summarize CPU cache references and misses by process. Uses Linux eBPF/bcc. .SH SYNOPSIS .B llcstat [\-h] [\-c SAMPLE_PERIOD] [duration] .SH DESCRIPTION -llcstat traces cache references and cache misses system-side, and summarizes -them by PID and CPU. These events have different meanings on different -architecture. For x86-64, they mean misses and references to LLC. +llcstat instruments CPU cache references and cache misses system-wide, and +summarizes them by PID and CPU. These events have different meanings on +different architectures. For x86-64, they mean misses and references to LLC. 
This can be useful to locate and debug performance issues caused by cache hit rate. diff --git a/man/man8/profile.8 b/man/man8/profile.8 index 9eb3aeed1121..b9fad4a67108 100644 --- a/man/man8/profile.8 +++ b/man/man8/profile.8 @@ -3,7 +3,7 @@ profile \- Profile CPU usage by sampling stack traces. Uses Linux eBPF/bcc. .SH SYNOPSIS .B profile [\-adfh] [\-p PID] [\-U | \-k] [\-F FREQUENCY] -.B [\-\-stack\-storage\-size COUNT] [\-S FRAMES] [duration] +.B [\-\-stack\-storage\-size COUNT] [duration] .SH DESCRIPTION This is a CPU profiler. It works by taking samples of stack traces at timed intervals. It will help you understand and quantify CPU usage: which code is @@ -17,17 +17,11 @@ This is also an efficient profiler, as stack traces are frequency counted in kernel context, rather than passing each stack to user space for frequency counting there. Only the unique stacks and counts are passed to user space at the end of the profile, greatly reducing the kernel<->user transfer. - -Note: if another perf-based sampling or tracing session is active, the output -may become polluted with their events. This will be fixed for Linux 4.9. .SH REQUIREMENTS CONFIG_BPF and bcc. -This also requires Linux 4.6+ (BPF_MAP_TYPE_STACK_TRACE support), and the -perf_misc_flags() function symbol to exist. The latter may or may not -exist depending on your kernel build, and if it doesn't exist, this tool -will not work. Linux 4.9 provides a proper solution to this (this tool will -be updated). +This also requires Linux 4.9+ (BPF_PROG_TYPE_PERF_EVENT support). See tools/old +for an older version that may work on Linux 4.6 - 4.8. .SH OPTIONS .TP \-h @@ -57,14 +51,6 @@ Show stacks from kernel space only (no user space stacks). The maximum number of unique stack traces that the kernel will count (default 2048). If the sampled count exceeds this, a warning will be printed. .TP -\-S FRAMES -A fixed number of kernel frames to skip. 
By default, extra registers are -recorded so that the interrupt framework stack can be identified and excluded -from the output. If this isn't working on your architecture, or, if you'd -like to improve performance a tiny amount, then you can specify a fixed count -to skip. Note for debugging that the IP address is printed as the first frame, -followed by the captured stack. -.TP duration Duration to trace, in seconds. .SH EXAMPLES diff --git a/tools/llcstat_example.txt b/tools/llcstat_example.txt index b196f870844b..ef2aec10f6f6 100644 --- a/tools/llcstat_example.txt +++ b/tools/llcstat_example.txt @@ -1,7 +1,9 @@ Demonstrations of llcstat. + llcstat traces cache reference and cache miss events system-wide, and summarizes them by PID and CPU. + These events, defined in uapi/linux/perf_event.h, have different meanings on different architecture. For x86-64, they mean misses and references to LLC. @@ -25,6 +27,18 @@ Total References: 518920000 Total Misses: 90265000 Hit Rate: 82.61% This shows each PID's cache hit rate during the 20 seconds run period. +A count of 5000 was used in this example, which means that one in every 5,000 +events will trigger an in-kernel counter to be incremented. This is reflected +in the output, which is why it is always in multiples of 5,000. + +We don't instrument every single event since the overhead would be prohibitive, +nor do we need to: this is a type of sampling profiler. Because of this, which +process is tallied for the 5,000th cache reference or miss is determined to +some degree by chance. Overall it should make sense. But for low counts, +you might find a case where -- by chance -- a process has been tallied with +more misses than references, which would seem impossible. 
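The scaling described above can be sketched in a few lines of Python. This is an illustration of the arithmetic only, not llcstat's implementation; SAMPLE_PERIOD stands in for the -c value (5000 in this example):

```python
# Sketch of how in-kernel sample counters become the totals shown above.
# Illustration only, not llcstat's code; SAMPLE_PERIOD is the -c value.
SAMPLE_PERIOD = 5000

def scale(sampled_refs, sampled_misses):
    """Convert in-kernel sample counts back to estimated event totals."""
    refs = sampled_refs * SAMPLE_PERIOD
    misses = sampled_misses * SAMPLE_PERIOD
    hit_rate = 100.0 * (refs - misses) / refs if refs else 0.0
    return refs, misses, hit_rate

# 103784 and 18053 sampled events scale to the totals shown above:
# 518920000 references, 90265000 misses, and an 82.61% hit rate.
refs, misses, rate = scale(103784, 18053)
```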
+ + USAGE message: # ./llcstat.py --help diff --git a/tools/old/profile.py b/tools/old/profile.py new file mode 100755 index 000000000000..6f28eed5a56d --- /dev/null +++ b/tools/old/profile.py @@ -0,0 +1,364 @@ +#!/usr/bin/python +# @lint-avoid-python-3-compatibility-imports +# +# profile Profile CPU usage by sampling stack traces at a timed interval. +# For Linux, uses BCC, BPF, perf_events. Embedded C. +# +# This is an efficient profiler, as stack traces are frequency counted in +# kernel context, rather than passing every stack to user space for frequency +# counting there. Only the unique stacks and counts are passed to user space +# at the end of the profile, greatly reducing the kernel<->user transfer. +# +# This uses perf_event_open to set up a timer which is instrumented by BPF, +# and for efficiency it does not initialize the perf ring buffer, so the +# redundant perf samples are not collected. +# +# Kernel stacks are post-processed in user-land to skip the interrupt framework +# frames. You can improve efficiency a little by specifying the exact number +# of frames to skip with -S, provided you know what that is. If you get -S +# wrong, note that the first line is the IP, and then the (skipped) stack. +# +# Note: if another perf-based sampling session is active, the output may become +# polluted with their events. On older kernels, the output may also become +# polluted with tracing sessions (when the kprobe is used instead of the +# tracepoint). If this becomes a problem, logic can be added to filter events. +# +# REQUIRES: Linux 4.6+ (BPF_MAP_TYPE_STACK_TRACE support), and the +# perf_misc_flags() function symbol to exist. The latter may or may not +# exist depending on your kernel build. Linux 4.9 provides a proper solution +# to this (this tool will be updated). +# +# Copyright 2016 Netflix, Inc. 
+# Licensed under the Apache License, Version 2.0 (the "License") +# +# THANKS: Sasha Goldshtein, Andrew Birchall, and Evgeny Vereshchagin, who wrote +# much of the code here, borrowed from tracepoint.py and offcputime.py. +# +# 15-Jul-2016 Brendan Gregg Created this. + +from __future__ import print_function +from bcc import BPF, Perf +from sys import stderr +from time import sleep +import argparse +import signal +import os +import errno +import multiprocessing +import ctypes as ct + +# +# Process Arguments +# + +# arg validation +def positive_int(val): + try: + ival = int(val) + except ValueError: + raise argparse.ArgumentTypeError("must be an integer") + + if ival < 0: + raise argparse.ArgumentTypeError("must be positive") + return ival + +def positive_nonzero_int(val): + ival = positive_int(val) + if ival == 0: + raise argparse.ArgumentTypeError("must be nonzero") + return ival + +# arguments +examples = """examples: + ./profile # profile stack traces at 49 Hertz until Ctrl-C + ./profile -F 99 # profile stack traces at 99 Hertz + ./profile 5 # profile at 49 Hertz for 5 seconds only + ./profile -f 5 # output in folded format for flame graphs + ./profile -p 185 # only profile threads for PID 185 + ./profile -U # only show user space stacks (no kernel) + ./profile -K # only show kernel space stacks (no user) + ./profile -S 11 # always skip 11 frames of kernel stack +""" +parser = argparse.ArgumentParser( + description="Profile CPU stack traces at a timed interval", + formatter_class=argparse.RawDescriptionHelpFormatter, + epilog=examples) +thread_group = parser.add_mutually_exclusive_group() +thread_group.add_argument("-p", "--pid", type=positive_int, + help="profile this PID only") +# TODO: add options for user/kernel threads only +stack_group = parser.add_mutually_exclusive_group() +stack_group.add_argument("-U", "--user-stacks-only", action="store_true", + help="show stacks from user space only (no kernel space stacks)") +stack_group.add_argument("-K", 
"--kernel-stacks-only", action="store_true", + help="show stacks from kernel space only (no user space stacks)") +parser.add_argument("-F", "--frequency", type=positive_int, default=49, + help="sample frequency, Hertz (default 49)") +parser.add_argument("-d", "--delimited", action="store_true", + help="insert delimiter between kernel/user stacks") +parser.add_argument("-a", "--annotations", action="store_true", + help="add _[k] annotations to kernel frames") +parser.add_argument("-f", "--folded", action="store_true", + help="output folded format, one line per stack (for flame graphs)") +parser.add_argument("--stack-storage-size", default=2048, + type=positive_nonzero_int, + help="the number of unique stack traces that can be stored and " + "displayed (default 2048)") +parser.add_argument("-S", "--kernel-skip", type=positive_int, default=0, + help="skip this many kernel frames (default 3)") +parser.add_argument("duration", nargs="?", default=99999999, + type=positive_nonzero_int, + help="duration of trace, in seconds") + +# option logic +args = parser.parse_args() +skip = args.kernel_skip +pid = int(args.pid) if args.pid is not None else -1 +duration = int(args.duration) +debug = 0 +need_delimiter = args.delimited and not (args.kernel_stacks_only or + args.user_stacks_only) +# TODO: add stack depth, and interval + +# +# Setup BPF +# + +# define BPF program +bpf_text = """ +#include +#include + +struct key_t { + u32 pid; + u64 kernel_ip; + u64 kernel_ret_ip; + int user_stack_id; + int kernel_stack_id; + char name[TASK_COMM_LEN]; +}; +BPF_HASH(counts, struct key_t); +BPF_HASH(start, u32); +BPF_STACK_TRACE(stack_traces, STACK_STORAGE_SIZE) + +// This code gets a bit complex. Probably not suitable for casual hacking. 
+ +PERF_TRACE_EVENT { + u32 pid = bpf_get_current_pid_tgid(); + if (!(THREAD_FILTER)) + return 0; + + // create map key + u64 zero = 0, *val; + struct key_t key = {.pid = pid}; + bpf_get_current_comm(&key.name, sizeof(key.name)); + + // get stacks + key.user_stack_id = USER_STACK_GET; + key.kernel_stack_id = KERNEL_STACK_GET; + + if (key.kernel_stack_id >= 0) { + // populate extras to fix the kernel stack + struct pt_regs regs = {}; + bpf_probe_read(&regs, sizeof(regs), (void *)REGS_LOCATION); + u64 ip = PT_REGS_IP(&regs); + + // if ip isn't sane, leave key ips as zero for later checking +#ifdef CONFIG_RANDOMIZE_MEMORY + if (ip > __PAGE_OFFSET_BASE) { +#else + if (ip > PAGE_OFFSET) { +#endif + key.kernel_ip = ip; + if (DO_KERNEL_RIP) { + /* + * User didn't specify a skip value (-S), so we will figure + * out how many interrupt framework frames to skip by recording + * the kernel rip, then later scanning for it on the stack. + * This is likely x86_64 specific; can use -S as a workaround + * until this supports your architecture. 
+ */ + bpf_probe_read(&key.kernel_ret_ip, sizeof(key.kernel_ret_ip), + (void *)(regs.bp + 8)); + } + } + } + + val = counts.lookup_or_init(&key, &zero); + (*val)++; + return 0; +} +""" + +# set thread filter +thread_context = "" +perf_filter = "-a" +if args.pid is not None: + thread_context = "PID %s" % args.pid + thread_filter = 'pid == %s' % args.pid + perf_filter = '-p %s' % args.pid +else: + thread_context = "all threads" + thread_filter = '1' +bpf_text = bpf_text.replace('THREAD_FILTER', thread_filter) + +# set stack storage size +bpf_text = bpf_text.replace('STACK_STORAGE_SIZE', str(args.stack_storage_size)) + +# handle stack args +kernel_stack_get = "stack_traces.get_stackid(args, " \ + "%d | BPF_F_REUSE_STACKID)" % skip +user_stack_get = \ + "stack_traces.get_stackid(args, BPF_F_REUSE_STACKID | BPF_F_USER_STACK)" +stack_context = "" +if args.user_stacks_only: + stack_context = "user" + kernel_stack_get = "-1" +elif args.kernel_stacks_only: + stack_context = "kernel" + user_stack_get = "-1" +else: + stack_context = "user + kernel" +bpf_text = bpf_text.replace('USER_STACK_GET', user_stack_get) +bpf_text = bpf_text.replace('KERNEL_STACK_GET', kernel_stack_get) +if skip: + # don't record the rip, as we won't use it + bpf_text = bpf_text.replace('DO_KERNEL_RIP', '0') +else: + # rip is used to skip interrupt infrastructure frames + bpf_text = bpf_text.replace('DO_KERNEL_RIP', '1') + +# header +if not args.folded: + print("Sampling at %d Hertz of %s by %s stack" % + (args.frequency, thread_context, stack_context), end="") + if duration < 99999999: + print(" for %d secs." % duration) + else: + print("... Hit Ctrl-C to end.") + +# kprobe perf_misc_flags() +bpf_text = bpf_text.replace('PERF_TRACE_EVENT', + 'int kprobe__perf_misc_flags(struct pt_regs *args)') +bpf_text = bpf_text.replace('REGS_LOCATION', 'PT_REGS_PARM1(args)') +if debug: + print(bpf_text) + +# initialize BPF +try: + b = BPF(text=bpf_text) +except: + print("BPF initialization failed. 
perf_misc_flags() may be inlined in " + "your kernel build.\nThis tool will be updated in the future to " + "support Linux 4.9, which has reliable profiling support. Exiting.") + exit() + +# signal handler +def signal_ignore(signal, frame): + print() + +# +# Setup perf_events +# + +# use perf_events to sample +try: + Perf.perf_event_open(0, pid=-1, ptype=Perf.PERF_TYPE_SOFTWARE, + freq=args.frequency) +except: + print("ERROR: initializing perf_events for sampling.\n" + "To debug this, try running the following command:\n" + " perf record -F 49 -e cpu-clock %s -- sleep 1\n" + "If that also doesn't work, fix it first." % perf_filter, file=stderr) + exit(0) + +# +# Output Report +# + +# collect samples +try: + sleep(duration) +except KeyboardInterrupt: + # as cleanup can take some time, trap Ctrl-C: + signal.signal(signal.SIGINT, signal_ignore) + +if not args.folded: + print() + +def aksym(addr): + if args.annotations: + return b.ksym(addr) + "_[k]" + else: + return b.ksym(addr) + +# output stacks +missing_stacks = 0 +has_enomem = False +counts = b.get_table("counts") +stack_traces = b.get_table("stack_traces") +for k, v in sorted(counts.items(), key=lambda counts: counts[1].value): + # handle get_stackid errors + if (not args.user_stacks_only and k.kernel_stack_id < 0 and + k.kernel_stack_id != -errno.EFAULT) or \ + (not args.kernel_stacks_only and k.user_stack_id < 0 and + k.user_stack_id != -errno.EFAULT): + missing_stacks += 1 + # check for an ENOMEM error + if k.kernel_stack_id == -errno.ENOMEM or \ + k.user_stack_id == -errno.ENOMEM: + has_enomem = True + + user_stack = [] if k.user_stack_id < 0 else \ + stack_traces.walk(k.user_stack_id) + kernel_tmp = [] if k.kernel_stack_id < 0 else \ + stack_traces.walk(k.kernel_stack_id) + + # fix kernel stack + kernel_stack = [] + if k.kernel_stack_id >= 0: + if skip: + # fixed skip + for addr in kernel_tmp: + kernel_stack.append(addr) + kernel_stack = kernel_stack[skip:] + else: + # skip the interrupt framework stack 
by searching for our RIP + skipping = 1 + for addr in kernel_tmp: + if k.kernel_ret_ip == addr: + skipping = 0 + if not skipping: + kernel_stack.append(addr) + if k.kernel_ip: + kernel_stack.insert(0, k.kernel_ip) + + do_delimiter = need_delimiter and kernel_stack + + if args.folded: + # print folded stack output + user_stack = list(user_stack) + kernel_stack = list(kernel_stack) + line = [k.name.decode()] + \ + [b.sym(addr, k.pid) for addr in reversed(user_stack)] + \ + (do_delimiter and ["-"] or []) + \ + [aksym(addr) for addr in reversed(kernel_stack)] + print("%s %d" % (";".join(line), v.value)) + else: + # print default multi-line stack output. + for addr in kernel_stack: + print(" %016x %s" % (addr, aksym(addr))) + if do_delimiter: + print(" --") + for addr in user_stack: + print(" %016x %s" % (addr, b.sym(addr, k.pid))) + print(" %-16s %s (%d)" % ("-", k.name, k.pid)) + print(" %d\n" % v.value) + +# check missing +if missing_stacks > 0: + enomem_str = "" if not has_enomem else \ + " Consider increasing --stack-storage-size." + print("WARNING: %d stack traces could not be displayed.%s" % + (missing_stacks, enomem_str), + file=stderr) diff --git a/tools/old/profile_example.txt b/tools/old/profile_example.txt new file mode 100644 index 000000000000..cd0c5ef5341f --- /dev/null +++ b/tools/old/profile_example.txt @@ -0,0 +1,788 @@ +Demonstrations of profile, the Linux eBPF/bcc version. + + +This is a CPU profiler. It works by taking samples of stack traces at timed +intervals, and frequency counting them in kernel context for efficiency. + +Example output: + +# ./profile +Sampling at 49 Hertz of all threads by user + kernel stack... Hit Ctrl-C to end. 
+^C + ffffffff81189249 filemap_map_pages + ffffffff811bd3f5 handle_mm_fault + ffffffff81065990 __do_page_fault + ffffffff81065caf do_page_fault + ffffffff817ce228 page_fault + 00007fed989afcc0 [unknown] + - cp (9036) + 1 + + 00007f31d76c3251 [unknown] + 47a2c1e752bf47f7 [unknown] + - sign-file (8877) + 1 + + ffffffff813d0af8 __clear_user + ffffffff813d5277 iov_iter_zero + ffffffff814ec5f2 read_iter_zero + ffffffff8120be9d __vfs_read + ffffffff8120c385 vfs_read + ffffffff8120d786 sys_read + ffffffff817cc076 entry_SYSCALL_64_fastpath + 00007fc5652ad9b0 read + - dd (25036) + 4 + + 0000000000400542 func_a + 0000000000400598 main + 00007f12a133e830 __libc_start_main + 083e258d4c544155 [unknown] + - func_ab (13549) + 5 + +[...] + + ffffffff8105eb66 native_safe_halt + ffffffff8103659e default_idle + ffffffff81036d1f arch_cpu_idle + ffffffff810bba5a default_idle_call + ffffffff810bbd07 cpu_startup_entry + ffffffff817bf4a7 rest_init + ffffffff81d65f58 start_kernel + ffffffff81d652db x86_64_start_reservations + ffffffff81d65418 x86_64_start_kernel + - swapper/0 (0) + 72 + + ffffffff8105eb66 native_safe_halt + ffffffff8103659e default_idle + ffffffff81036d1f arch_cpu_idle + ffffffff810bba5a default_idle_call + ffffffff810bbd07 cpu_startup_entry + ffffffff8104df55 start_secondary + - swapper/1 (0) + 75 + +The output was long; I truncated some lines ("[...]"). + +This default output prints stack traces as two columns (raw addresses, and +then translated symbol names), followed by a line to describe the process (a +dash, the process name, and a PID in parenthesis), and then an integer count +of how many times this stack trace was sampled. + +The output above shows the most frequent stack was from the "swapper/1" +process (PID 0), running the native_safe_halt() function, which was called +by default_idle(), which was called by arch_cpu_idle(), and so on. This is +the idle thread. Stacks can be read top-down, to follow ancestry: child, +parent, grandparent, etc. 
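The multi-line layout described above (address and symbol columns, a "- comm (pid)" line, then a count) is easy to post-process. A rough parser sketch, based only on the layout shown above and not part of bcc:

```python
import re

def parse_profile(lines):
    """Parse profile's default multi-line output into
    (frames, comm, pid, count) tuples. A rough sketch based on the
    layout shown above; not part of bcc."""
    frames, comm, pid = [], None, None
    for line in lines:
        line = line.strip()
        m = re.match(r"-\s+(.+)\s+\((\d+)\)$", line)  # "- comm (pid)"
        if m:
            comm, pid = m.group(1), int(m.group(2))
        elif line.isdigit() and comm is not None:     # the sample count
            yield frames, comm, pid, int(line)
            frames, comm, pid = [], None, None
        elif re.match(r"[0-9a-f]{16}\s+\S", line):    # "address symbol"
            addr, sym = line.split(None, 1)
            frames.append((int(addr, 16), sym))
```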
+ +The func_ab process is running the func_a() function, called by main(), +called by __libc_start_main(), and called by "[unknown]" with what looks +like a bogus address (1st column). That's evidence of a broken stack trace. +It's common for user-level software that hasn't been compiled with frame +pointers (in this case, libc). + +The dd process has called read(), and then enters the kernel via +entry_SYSCALL_64_fastpath(), calling sys_read(), and so on. Yes, I'm now +reading it bottom up. That way follows the code flow. + + +The dd process is actually "dd if=/dev/zero of=/dev/null": it's a simple +workload to analyze that just moves bytes from /dev/zero to /dev/null. +Profiling just that process: + +# ./profile -p 25036 +Sampling at 49 Hertz of PID 25036 by user + kernel stack... Hit Ctrl-C to end. +^C + 0000000000402748 [unknown] + 00007fc56561422c [unknown] + - dd (25036) + 1 + + 00007fc5652ada0e __write + - dd (25036) + 1 + + 00007fc5652ad9b0 read + - dd (25036) + 1 + +[...] + + 00000000004047b2 [unknown] + 00007fc56561422c [unknown] + - dd (25036) + 2 + + ffffffff817cc060 entry_SYSCALL_64_fastpath + 00007fc5652ada10 __write + 00007fc56561422c [unknown] + - dd (25036) + 3 + + ffffffff817cc060 entry_SYSCALL_64_fastpath + 00007fc5652ad9b0 read + - dd (25036) + 3 + + ffffffff813d0af8 __clear_user + ffffffff813d5277 iov_iter_zero + ffffffff814ec5f2 read_iter_zero + ffffffff8120be9d __vfs_read + ffffffff8120c385 vfs_read + ffffffff8120d786 sys_read + ffffffff817cc076 entry_SYSCALL_64_fastpath + 00007fc5652ad9b0 read + 00007fc56561422c [unknown] + - dd (25036) + 3 + + ffffffff813d0af8 __clear_user + ffffffff813d5277 iov_iter_zero + ffffffff814ec5f2 read_iter_zero + ffffffff8120be9d __vfs_read + ffffffff8120c385 vfs_read + ffffffff8120d786 sys_read + ffffffff817cc076 entry_SYSCALL_64_fastpath + 00007fc5652ad9b0 read + - dd (25036) + 7 + +Again, I've truncated some lines. Now we're just analyzing the dd process. 
+The filtering is performed in kernel context, for efficiency. + +This output has some "[unknown]" frames that probably have valid addresses, +but we're lacking the symbol translation. This is common for all profilers +on Linux, and is usually fixable. See the DEBUGGING section of the profile(8) +man page. + + +Let's add delimiters between the user and kernel stacks, using -d: + +# ./profile -p 25036 -d +^C + ffffffff8120b385 __vfs_write + ffffffff8120d826 sys_write + ffffffff817cc076 entry_SYSCALL_64_fastpath + -- + 00007fc5652ada10 __write + - dd (25036) + 1 + + -- + 00007fc565255ef3 [unknown] + 00007fc56561422c [unknown] + - dd (25036) + 1 + + ffffffff813d4569 iov_iter_init + ffffffff8120be8e __vfs_read + ffffffff8120c385 vfs_read + ffffffff8120d786 sys_read + ffffffff817cc076 entry_SYSCALL_64_fastpath + -- + 00007fc5652ad9b0 read + - dd (25036) + 1 + +[...] + + ffffffff813d0af8 __clear_user + ffffffff813d5277 iov_iter_zero + ffffffff814ec5f2 read_iter_zero + ffffffff8120be9d __vfs_read + ffffffff8120c385 vfs_read + ffffffff8120d786 sys_read + ffffffff817cc076 entry_SYSCALL_64_fastpath + -- + 00007fc5652ad9b0 read + - dd (25036) + 9 + +In this mode, the delimiters are "--". + + + +Here's another example, a func_ab program that runs two functions, func_a() and +func_b(). Profiling it for 5 seconds: + +# ./profile -p `pgrep -n func_ab` 5 +Sampling at 49 Hertz of PID 2930 by user + kernel stack for 5 secs. 
+ + 000000000040053e func_a + 0000000000400598 main + 00007f0458819830 __libc_start_main + 083e258d4c544155 [unknown] + - func_ab (2930) + 2 + + 0000000000400566 func_b + 00000000004005ac main + 00007f0458819830 __libc_start_main + 083e258d4c544155 [unknown] + - func_ab (2930) + 3 + + 000000000040053a func_a + 0000000000400598 main + 00007f0458819830 __libc_start_main + 083e258d4c544155 [unknown] + - func_ab (2930) + 5 + + 0000000000400562 func_b + 00000000004005ac main + 00007f0458819830 __libc_start_main + 083e258d4c544155 [unknown] + - func_ab (2930) + 12 + + 000000000040056a func_b + 00000000004005ac main + 00007f0458819830 __libc_start_main + 083e258d4c544155 [unknown] + - func_ab (2930) + 19 + + 0000000000400542 func_a + 0000000000400598 main + 00007f0458819830 __libc_start_main + 083e258d4c544155 [unknown] + - func_ab (2930) + 22 + + 0000000000400571 func_b + 00000000004005ac main + 00007f0458819830 __libc_start_main + 083e258d4c544155 [unknown] + - func_ab (2930) + 64 + + 0000000000400549 func_a + 0000000000400598 main + 00007f0458819830 __libc_start_main + 083e258d4c544155 [unknown] + - func_ab (2930) + 72 + +Note that the same stack (2nd column) seems to be repeated. Weren't we doing +frequency counting and only printing unique stacks? We are, but in terms of +the raw addresses, not the symbols. See the 1st column: those stacks are +all unique. + + +We can output in "folded format", which puts the stack trace on one line, +separating frames with semi-colons. 
Eg: + +# ./profile -f -p `pgrep -n func_ab` 5 +func_ab;[unknown];__libc_start_main;main;func_a 2 +func_ab;[unknown];__libc_start_main;main;func_b 2 +func_ab;[unknown];__libc_start_main;main;func_a 11 +func_ab;[unknown];__libc_start_main;main;func_b 12 +func_ab;[unknown];__libc_start_main;main;func_a 23 +func_ab;[unknown];__libc_start_main;main;func_b 28 +func_ab;[unknown];__libc_start_main;main;func_b 57 +func_ab;[unknown];__libc_start_main;main;func_a 64 + +I find this pretty useful for writing to files and later grepping. + + +Folded format can also be used by flame graph stack visualizers, including +the original implementation: + + https://github.com/brendangregg/FlameGraph + +I'd include delimiters, -d. For example: + +# ./profile -df -p `pgrep -n func_ab` 5 > out.profile +# git clone https://github.com/brendangregg/FlameGraph +# ./FlameGraph/flamegraph.pl < out.profile > out.svg + +(Yes, I could pipe profile directly into flamegraph.pl, however, I like to +keep the raw folded profiles around: can be useful for regenerating flamegraphs +with different options, and, for differential flame graphs.) + + +Some flamegraph.pl palettes recognize kernel annotations, which can be added +with -a. It simply adds a "_[k]" at the end of kernel function names. 
+For example: + +# ./profile -adf -p `pgrep -n dd` 10 +dd;[unknown] 1 +dd;[unknown];[unknown] 1 +dd;[unknown];[unknown] 1 +dd;[unknown];read;-;entry_SYSCALL_64_fastpath_[k];SyS_read_[k];vfs_read_[k];rw_verify_area_[k];security_file_permission_[k];__fsnotify_parent_[k] 1 +dd;[unknown];__write;-;entry_SYSCALL_64_fastpath_[k];sys_write_[k];vfs_write_[k];__fsnotify_parent_[k] 1 +dd;[unknown];read;-;entry_SYSCALL_64_fastpath_[k];SyS_read_[k];__fdget_pos_[k] 1 +dd;read;-;entry_SYSCALL_64_fastpath_[k];SyS_read_[k];vfs_read_[k];rw_verify_area_[k];apparmor_file_permission_[k] 1 +dd;[unknown] 1 +dd;[unknown];[unknown] 1 +dd;__write;-;entry_SYSCALL_64_fastpath_[k];sys_write_[k];vfs_write_[k] 1 +dd;[unknown] 1 +dd;read;-;entry_SYSCALL_64_fastpath_[k];SyS_read_[k];__fget_light_[k] 1 +dd;read;-;entry_SYSCALL_64_fastpath_[k];SyS_read_[k];vfs_read_[k];rw_verify_area_[k];__fsnotify_parent_[k] 1 +dd;read;-;entry_SYSCALL_64_fastpath_[k];SyS_read_[k];vfs_read_[k];__vfs_read_[k];read_iter_zero_[k];iov_iter_zero_[k] 1 +dd;[unknown];__write;-;entry_SYSCALL_64_fastpath_[k];sys_write_[k];__fget_light_[k] 1 +dd;[unknown];[unknown] 1 +dd;read;-;entry_SYSCALL_64_fastpath_[k];SyS_read_[k] 1 +dd;[unknown];[unknown] 1 +dd;read;-;entry_SYSCALL_64_fastpath_[k];SyS_read_[k];vfs_read_[k];__vfs_read_[k];read_iter_zero_[k];iov_iter_zero_[k] 1 +dd;[unknown];read;-;entry_SYSCALL_64_fastpath_[k];SyS_read_[k];vfs_read_[k];__vfs_read_[k];read_iter_zero_[k];iov_iter_zero_[k] 1 +dd;read;-;entry_SYSCALL_64_fastpath_[k];SyS_read_[k];vfs_read_[k];read_iter_zero_[k] 1 +dd;[unknown];read;-;entry_SYSCALL_64_fastpath_[k];SyS_read_[k];vfs_read_[k];__fsnotify_parent_[k] 1 +dd;[unknown];[unknown] 1 +dd;read;-;entry_SYSCALL_64_fastpath_[k];SyS_read_[k];vfs_read_[k];rw_verify_area_[k];security_file_permission_[k];apparmor_file_permission_[k];common_file_perm_[k] 1 +dd;read;-;entry_SYSCALL_64_fastpath_[k];SyS_read_[k];__fsnotify_parent_[k] 1 
+dd;__write;-;entry_SYSCALL_64_fastpath_[k];sys_write_[k];vfs_write_[k];fsnotify_[k] 1 +dd;__write;-;entry_SYSCALL_64_fastpath_[k];sys_write_[k];vfs_write_[k];rw_verify_area_[k];security_file_permission_[k] 1 +dd;[unknown];__write;-;entry_SYSCALL_64_fastpath_[k];sys_write_[k];__fdget_pos_[k] 1 +dd;[unknown];__write;-;entry_SYSCALL_64_fastpath_[k] 1 +dd;[unknown];[unknown] 1 +dd;read;-;entry_SYSCALL_64_fastpath_[k];SyS_read_[k];vfs_read_[k];rw_verify_area_[k];security_file_permission_[k] 1 +dd;__write;-;entry_SYSCALL_64_fastpath_[k];sys_write_[k];vfs_write_[k] 1 +dd;[unknown];read;-;entry_SYSCALL_64_fastpath_[k];SyS_read_[k];__fget_light_[k] 1 +dd;[unknown] 1 +dd;[unknown];__write;-;entry_SYSCALL_64_fastpath_[k];sys_write_[k];vfs_write_[k] 1 +dd;read;-;entry_SYSCALL_64_fastpath_[k];SyS_read_[k];vfs_read_[k];rw_verify_area_[k];security_file_permission_[k];apparmor_file_permission_[k] 1 +dd;[unknown];[unknown] 1 +dd;read;-;entry_SYSCALL_64_fastpath_[k];SyS_read_[k];vfs_read_[k];rw_verify_area_[k];security_file_permission_[k] 1 +dd;[unknown];read;-;entry_SYSCALL_64_fastpath_[k];SyS_read_[k];vfs_read_[k];rw_verify_area_[k];__fsnotify_parent_[k] 1 +dd;read;-;entry_SYSCALL_64_fastpath_[k];SyS_read_[k];vfs_read_[k];__vfs_read_[k];read_iter_zero_[k];iov_iter_zero_[k] 1 +dd;__write;-;entry_SYSCALL_64_fastpath_[k];sys_write_[k];vfs_write_[k];rw_verify_area_[k];security_file_permission_[k] 1 +dd;read;-;entry_SYSCALL_64_fastpath_[k];SyS_read_[k];vfs_read_[k];__vfs_read_[k];read_iter_zero_[k];iov_iter_zero_[k];__clear_user_[k] 1 +dd;[unknown];[unknown] 1 +dd;__write;-;entry_SYSCALL_64_fastpath_[k];sys_write_[k];vfs_write_[k] 1 +dd;[unknown];[unknown] 1 +dd;read;-;entry_SYSCALL_64_fastpath_[k];SyS_read_[k];vfs_read_[k];rw_verify_area_[k];security_file_permission_[k];apparmor_file_permission_[k];common_file_perm_[k] 1 +dd;read 1 +dd;__write;-;entry_SYSCALL_64_fastpath_[k];sys_write_[k];vfs_write_[k];security_file_permission_[k] 1 
+dd;read;-;entry_SYSCALL_64_fastpath_[k];SyS_read_[k];vfs_read_[k];fsnotify_[k] 1 +dd;read;-;entry_SYSCALL_64_fastpath_[k];SyS_read_[k];vfs_read_[k];fsnotify_[k] 1 +dd;__write;-;entry_SYSCALL_64_fastpath_[k];sys_write_[k];vfs_write_[k];rw_verify_area_[k];apparmor_file_permission_[k] 1 +dd;[unknown];__write;-;entry_SYSCALL_64_fastpath_[k];sys_write_[k];vfs_write_[k];__fsnotify_parent_[k] 1 +dd;[unknown];__write;-;entry_SYSCALL_64_fastpath_[k];sys_write_[k];vfs_write_[k];rw_verify_area_[k];apparmor_file_permission_[k] 1 +dd;[unknown];read;-;entry_SYSCALL_64_fastpath_[k];SyS_read_[k];vfs_read_[k] 1 +dd;read;-;entry_SYSCALL_64_fastpath_[k];SyS_read_[k];vfs_read_[k];__vfs_read_[k];iov_iter_init_[k] 1 +dd;[unknown];read;-;entry_SYSCALL_64_fastpath_[k];SyS_read_[k];vfs_read_[k];rw_verify_area_[k];security_file_permission_[k];__fsnotify_parent_[k] 1 +dd;[unknown];__write;-;entry_SYSCALL_64_fastpath_[k];sys_write_[k];vfs_write_[k];__vfs_write_[k];write_null_[k] 1 +dd;[unknown];read;-;entry_SYSCALL_64_fastpath_[k];SyS_read_[k];vfs_read_[k];__vfs_read_[k];read_iter_zero_[k];__clear_user_[k] 1 +dd;[unknown];read;-;entry_SYSCALL_64_fastpath_[k];SyS_read_[k];vfs_read_[k];security_file_permission_[k] 1 +dd;read;-;entry_SYSCALL_64_fastpath_[k];SyS_read_[k];vfs_read_[k];rw_verify_area_[k];security_file_permission_[k];apparmor_file_permission_[k];common_file_perm_[k] 1 +dd;read;-;entry_SYSCALL_64_fastpath_[k];SyS_read_[k];vfs_read_[k];__vfs_read_[k];read_iter_zero_[k] 1 +dd;[unknown];__write;-;entry_SYSCALL_64_fastpath_[k];sys_write_[k];vfs_write_[k] 1 +dd;__write;-;entry_SYSCALL_64_fastpath_[k];sys_write_[k];vfs_write_[k] 1 +dd;[unknown];read;-;entry_SYSCALL_64_fastpath_[k];SyS_read_[k];vfs_read_[k];rw_verify_area_[k] 1 +dd;read;-;entry_SYSCALL_64_fastpath_[k];SyS_read_[k];__fget_light_[k] 1 +dd;[unknown];__write;-;entry_SYSCALL_64_fastpath_[k];sys_write_[k];vfs_write_[k];rw_verify_area_[k] 1 +dd;read;-;entry_SYSCALL_64_fastpath_[k];SyS_read_[k];vfs_read_[k];__vfs_read_[k] 1 
+dd;[unknown];read;-;entry_SYSCALL_64_fastpath_[k];SyS_read_[k];__vfs_read_[k] 1 +dd;[unknown];__write;-;entry_SYSCALL_64_fastpath_[k];sys_write_[k];vfs_write_[k];__vfs_write_[k] 1 +dd;[unknown] 1 +dd;[unknown];read;-;entry_SYSCALL_64_fastpath_[k];SyS_read_[k];vfs_read_[k];__vfs_read_[k];read_iter_zero_[k];iov_iter_zero_[k] 1 +dd;[unknown] 1 +dd;read;-;entry_SYSCALL_64_fastpath_[k];SyS_read_[k];vfs_read_[k];__vfs_read_[k] 1 +dd;__write;-;entry_SYSCALL_64_fastpath_[k];sys_write_[k];__fsnotify_parent_[k] 1 +dd;__write;-;entry_SYSCALL_64_fastpath_[k];sys_write_[k];vfs_write_[k] 1 +dd;[unknown];[unknown] 1 +dd;read;-;entry_SYSCALL_64_fastpath_[k];SyS_read_[k];vfs_read_[k];__vfs_read_[k];read_iter_zero_[k];iov_iter_zero_[k] 1 +dd;read;-;entry_SYSCALL_64_fastpath_[k];SyS_read_[k];vfs_read_[k];__vfs_read_[k];read_iter_zero_[k];iov_iter_zero_[k] 1 +dd;[unknown];__write;-;sys_write_[k] 1 +dd;read;-;entry_SYSCALL_64_fastpath_[k];SyS_read_[k];__fsnotify_parent_[k] 1 +dd;read;-;entry_SYSCALL_64_fastpath_[k];SyS_read_[k];vfs_read_[k];rw_verify_area_[k];security_file_permission_[k];common_file_perm_[k] 1 +dd;[unknown];read;-;entry_SYSCALL_64_fastpath_[k];SyS_read_[k];vfs_read_[k];__vfs_read_[k];read_iter_zero_[k];iov_iter_zero_[k] 1 +dd;[unknown];[unknown] 1 +dd;read;-;entry_SYSCALL_64_fastpath_[k];SyS_read_[k];vfs_read_[k];__vfs_read_[k];read_iter_zero_[k];iov_iter_zero_[k];__clear_user_[k] 1 +dd;__write;-;entry_SYSCALL_64_fastpath_[k];sys_write_[k];vfs_write_[k];rw_verify_area_[k];security_file_permission_[k];apparmor_file_permission_[k] 1 +dd;__write;-;entry_SYSCALL_64_fastpath_[k];sys_write_[k];__fget_light_[k] 1 +dd;read;-;entry_SYSCALL_64_fastpath_[k];SyS_read_[k];vfs_read_[k] 1 +dd;read;-;entry_SYSCALL_64_fastpath_[k];vfs_read_[k] 1 +dd;__write 1 +dd;read;-;entry_SYSCALL_64_fastpath_[k];vfs_read_[k] 1 +dd;__write;-;entry_SYSCALL_64_fastpath_[k];sys_write_[k];vfs_write_[k];rw_verify_area_[k];security_file_permission_[k];apparmor_file_permission_[k];common_file_perm_[k] 1 
+dd;read;-;entry_SYSCALL_64_fastpath_[k];SyS_read_[k];vfs_read_[k];__vfs_read_[k];read_iter_zero_[k];iov_iter_zero_[k] 1
+dd;[unknown];__write;-;entry_SYSCALL_64_fastpath_[k];sys_write_[k];__fget_light_[k] 1
+dd;[unknown];[unknown] 1
+dd;[unknown] 1
+dd;[unknown];read;-;entry_SYSCALL_64_fastpath_[k];SyS_read_[k];vfs_read_[k];__vfs_read_[k];read_iter_zero_[k];iov_iter_zero_[k] 1
+dd;[unknown] 1
+dd;[unknown] 1
+dd;[unknown];[unknown] 1
+dd;__write;-;entry_SYSCALL_64_fastpath_[k];sys_write_[k];vfs_write_[k] 1
+dd;[unknown];read;-;entry_SYSCALL_64_fastpath_[k];SyS_read_[k];vfs_read_[k];__vfs_read_[k];read_iter_zero_[k];iov_iter_zero_[k] 1
+dd;read;-;entry_SYSCALL_64_fastpath_[k];SyS_read_[k] 1
+dd;[unknown];read;-;entry_SYSCALL_64_fastpath_[k];SyS_read_[k];vfs_read_[k];__vfs_read_[k];read_iter_zero_[k] 1
+dd;read;-;entry_SYSCALL_64_fastpath_[k];SyS_read_[k];vfs_read_[k] 1
+dd;__write;-;entry_SYSCALL_64_fastpath_[k];sys_write_[k];vfs_write_[k];rw_verify_area_[k];security_file_permission_[k];apparmor_file_permission_[k];common_file_perm_[k] 1
+dd;__write 1
+dd;[unknown];read;-;entry_SYSCALL_64_fastpath_[k];SyS_read_[k];__fget_light_[k] 1
+dd;read;-;entry_SYSCALL_64_fastpath_[k];SyS_read_[k];vfs_read_[k];rw_verify_area_[k];security_file_permission_[k] 1
+dd;__write;-;entry_SYSCALL_64_fastpath_[k];sys_write_[k] 1
+dd;[unknown] 1
+dd;[unknown];read;-;entry_SYSCALL_64_fastpath_[k];SyS_read_[k];__fget_light_[k] 1
+dd;read;-;entry_SYSCALL_64_fastpath_[k];SyS_read_[k];vfs_read_[k];__vfs_read_[k];read_iter_zero_[k] 1
+dd;[unknown];[unknown] 1
+dd;__write;-;entry_SYSCALL_64_fastpath_[k];sys_write_[k];__fdget_pos_[k] 1
+dd;read;-;entry_SYSCALL_64_fastpath_[k];SyS_read_[k];vfs_read_[k];__vfs_read_[k];read_iter_zero_[k] 1
+dd;[unknown];read;-;entry_SYSCALL_64_fastpath_[k];SyS_read_[k];vfs_read_[k];__vfs_read_[k];read_iter_zero_[k] 1
+dd;read;-;entry_SYSCALL_64_fastpath_[k];SyS_read_[k];vfs_read_[k];__vfs_read_[k];_cond_resched_[k] 1
+dd;[unknown];read;-;entry_SYSCALL_64_fastpath_[k];SyS_read_[k];vfs_read_[k];iov_iter_init_[k] 1
+dd;read;-;entry_SYSCALL_64_fastpath_[k];SyS_read_[k];vfs_read_[k];rw_verify_area_[k];security_file_permission_[k];__fsnotify_parent_[k] 1
+dd;[unknown];read;-;entry_SYSCALL_64_fastpath_[k];SyS_read_[k];vfs_read_[k];__vfs_read_[k] 1
+dd;__write;-;entry_SYSCALL_64_fastpath_[k];sys_write_[k];rw_verify_area_[k] 1
+dd;[unknown];read;-;entry_SYSCALL_64_fastpath_[k];SyS_read_[k];vfs_read_[k];rw_verify_area_[k];apparmor_file_permission_[k] 1
+dd;[unknown];read;-;entry_SYSCALL_64_fastpath_[k];SyS_read_[k];vfs_read_[k];__vfs_read_[k];read_iter_zero_[k];iov_iter_zero_[k] 1
+dd;read;-;entry_SYSCALL_64_fastpath_[k];SyS_read_[k];vfs_read_[k] 1
+dd;[unknown] 1
+dd;[unknown];read;-;entry_SYSCALL_64_fastpath_[k];SyS_read_[k];vfs_read_[k];rw_verify_area_[k];security_file_permission_[k];fsnotify_[k] 1
+dd;read;-;entry_SYSCALL_64_fastpath_[k];SyS_read_[k];vfs_read_[k];__vfs_read_[k] 1
+dd;__write;-;entry_SYSCALL_64_fastpath_[k];sys_write_[k];__fdget_pos_[k] 1
+dd;[unknown];read;-;entry_SYSCALL_64_fastpath_[k];SyS_read_[k];vfs_read_[k];rw_verify_area_[k];security_file_permission_[k] 1
+dd;__write;-;entry_SYSCALL_64_fastpath_[k];sys_write_[k];vfs_write_[k] 1
+dd;[unknown];__write;-;entry_SYSCALL_64_fastpath_[k];sys_write_[k];vfs_write_[k];__vfs_write_[k] 1
+dd;__write;-;entry_SYSCALL_64_fastpath_[k];sys_write_[k];vfs_write_[k];rw_verify_area_[k];apparmor_file_permission_[k] 1
+dd;read;-;entry_SYSCALL_64_fastpath_[k];SyS_read_[k];vfs_read_[k];rw_verify_area_[k];security_file_permission_[k] 1
+dd;__write;-;entry_SYSCALL_64_fastpath_[k];sys_write_[k];vfs_write_[k];rw_verify_area_[k];security_file_permission_[k];apparmor_file_permission_[k];common_file_perm_[k] 1
+dd;__write;-;entry_SYSCALL_64_fastpath_[k];sys_write_[k];vfs_write_[k] 1
+dd;__write;-;entry_SYSCALL_64_fastpath_[k];sys_write_[k];vfs_write_[k] 1
+dd;read;-;entry_SYSCALL_64_fastpath_[k];SyS_read_[k];__fget_light_[k] 1
+dd;[unknown] 1
+dd;[unknown];__write;-;entry_SYSCALL_64_fastpath_[k];sys_write_[k];vfs_write_[k];fsnotify_[k] 1
+dd;[unknown];__write;-;entry_SYSCALL_64_fastpath_[k];sys_write_[k];vfs_write_[k];rw_verify_area_[k];security_file_permission_[k];apparmor_file_permission_[k];common_file_perm_[k] 1
+dd;__write;-;entry_SYSCALL_64_fastpath_[k];sys_write_[k];vfs_write_[k];rw_verify_area_[k] 1
+dd;[unknown];read;-;entry_SYSCALL_64_fastpath_[k];SyS_read_[k];vfs_read_[k];__vfs_read_[k];read_iter_zero_[k];iov_iter_zero_[k] 1
+dd;read;-;entry_SYSCALL_64_fastpath_[k];SyS_read_[k];fsnotify_[k] 1
+dd;[unknown];__write;-;entry_SYSCALL_64_fastpath_[k];sys_write_[k];vfs_write_[k] 1
+dd;__write;-;entry_SYSCALL_64_fastpath_[k];vfs_write_[k] 1
+dd;[unknown];read;-;entry_SYSCALL_64_fastpath_[k];SyS_read_[k];vfs_read_[k];rw_verify_area_[k] 1
+dd;read;-;entry_SYSCALL_64_fastpath_[k];SyS_read_[k];vfs_read_[k];__vfs_read_[k] 1
+dd;read;-;entry_SYSCALL_64_fastpath_[k];SyS_read_[k];vfs_read_[k];rw_verify_area_[k];security_file_permission_[k];apparmor_file_permission_[k];common_file_perm_[k] 1
+dd;[unknown];read;-;entry_SYSCALL_64_fastpath_[k];SyS_read_[k];vfs_read_[k];__vfs_read_[k];read_iter_zero_[k] 1
+dd;[unknown];read;-;entry_SYSCALL_64_fastpath_[k];SyS_read_[k];vfs_read_[k];rw_verify_area_[k];fsnotify_[k] 1
+dd;[unknown];__write;-;entry_SYSCALL_64_fastpath_[k];sys_write_[k];vfs_write_[k];rw_verify_area_[k];apparmor_file_permission_[k] 2
+dd;read;-;entry_SYSCALL_64_fastpath_[k];__fdget_pos_[k] 2
+dd;[unknown];[unknown] 2
+dd;read;-;entry_SYSCALL_64_fastpath_[k];SyS_read_[k];__fdget_pos_[k] 2
+dd;[unknown];read;-;entry_SYSCALL_64_fastpath_[k];SyS_read_[k];vfs_read_[k];rw_verify_area_[k];security_file_permission_[k];apparmor_file_permission_[k];common_file_perm_[k] 2
+dd;[unknown];[unknown] 2
+dd;[unknown];[unknown] 2
+dd;[unknown];[unknown] 2
+dd;[unknown];read;-;entry_SYSCALL_64_fastpath_[k];SyS_read_[k];vfs_read_[k] 2
+dd;[unknown];[unknown] 2
+dd;read;-;entry_SYSCALL_64_fastpath_[k];SyS_read_[k];vfs_read_[k];__vfs_read_[k];read_iter_zero_[k];__clear_user_[k] 2
+dd;__write;-;entry_SYSCALL_64_fastpath_[k];__fdget_pos_[k] 2
+dd;[unknown];read;-;entry_SYSCALL_64_fastpath_[k];SyS_read_[k];vfs_read_[k];__vfs_read_[k] 2
+dd;[unknown];read;-;entry_SYSCALL_64_fastpath_[k];SyS_read_[k];vfs_read_[k];__vfs_read_[k] 2
+dd;[unknown];read;-;entry_SYSCALL_64_fastpath_[k];SyS_read_[k];vfs_read_[k];__vfs_read_[k];read_iter_zero_[k];iov_iter_zero_[k];__clear_user_[k] 2
+dd;[unknown];read;-;entry_SYSCALL_64_fastpath_[k];SyS_read_[k];vfs_read_[k];__vfs_read_[k];read_iter_zero_[k];iov_iter_zero_[k];__clear_user_[k] 2
+dd;[unknown];[unknown] 2
+dd;__write;-;entry_SYSCALL_64_fastpath_[k];sys_write_[k];__fget_light_[k] 2
+dd;read;-;entry_SYSCALL_64_fastpath_[k];SyS_read_[k];vfs_read_[k];rw_verify_area_[k];security_file_permission_[k];fsnotify_[k] 2
+dd;__write;-;sys_write_[k] 2
+dd;[unknown];read;-;entry_SYSCALL_64_fastpath_[k];SyS_read_[k];vfs_read_[k];fsnotify_[k] 2
+dd;[unknown];[unknown] 2
+dd;[unknown];read;-;entry_SYSCALL_64_fastpath_[k];SyS_read_[k];vfs_read_[k];__vfs_read_[k] 2
+dd;read;-;entry_SYSCALL_64_fastpath_[k];SyS_read_[k];vfs_read_[k];__vfs_read_[k];read_iter_zero_[k];iov_iter_zero_[k] 2
+dd;read;-;SyS_read_[k] 2
+dd;[unknown] 2
+dd;__write;-;entry_SYSCALL_64_fastpath_[k];sys_write_[k];vfs_write_[k];rw_verify_area_[k] 2
+dd;[unknown];__write;-;entry_SYSCALL_64_fastpath_[k];sys_write_[k];__fget_light_[k] 2
+dd;read;-;entry_SYSCALL_64_fastpath_[k];SyS_read_[k];vfs_read_[k];__vfs_read_[k] 2
+dd;__write;-;entry_SYSCALL_64_fastpath_[k];sys_write_[k];vfs_write_[k];rw_verify_area_[k];security_file_permission_[k];apparmor_file_permission_[k] 2
+dd;read;-;entry_SYSCALL_64_fastpath_[k];SyS_read_[k];vfs_read_[k];__vfs_read_[k];read_iter_zero_[k];__clear_user_[k] 2
+dd;[unknown];__write;-;entry_SYSCALL_64_fastpath_[k];sys_write_[k];rw_verify_area_[k] 2
+dd;[unknown];[unknown] 3
+dd;[unknown];read;-;entry_SYSCALL_64_fastpath_[k];SyS_read_[k];rw_verify_area_[k] 3
+dd;[unknown];[unknown] 3
+dd;read;-;entry_SYSCALL_64_fastpath_[k];SyS_read_[k];vfs_read_[k];__vfs_read_[k];read_iter_zero_[k];iov_iter_zero_[k];__clear_user_[k] 3
+dd;read;-;entry_SYSCALL_64_fastpath_[k];SyS_read_[k];vfs_read_[k];__vfs_read_[k];read_iter_zero_[k];iov_iter_zero_[k] 3
+dd;[unknown];[unknown] 3
+dd;read;-;entry_SYSCALL_64_fastpath_[k];SyS_read_[k];vfs_read_[k];__vfs_read_[k];read_iter_zero_[k];iov_iter_zero_[k] 3
+dd;[unknown];[unknown] 3
+dd;[unknown];[unknown] 3
+dd;__write;-;entry_SYSCALL_64_fastpath_[k];sys_write_[k];vfs_write_[k] 3
+dd;[unknown];read;-;entry_SYSCALL_64_fastpath_[k];SyS_read_[k];vfs_read_[k] 3
+dd;[unknown];read;-;entry_SYSCALL_64_fastpath_[k];SyS_read_[k];vfs_read_[k];__vfs_read_[k];read_iter_zero_[k];iov_iter_zero_[k];__clear_user_[k] 3
+dd;[unknown] 4
+dd;[unknown];read;-;entry_SYSCALL_64_fastpath_[k];SyS_read_[k];vfs_read_[k] 4
+dd;[unknown];[unknown] 4
+dd;read;-;entry_SYSCALL_64_fastpath_[k];SyS_read_[k];vfs_read_[k] 4
+dd;[unknown] 4
+dd;[unknown];[unknown] 4
+dd;[unknown];read;-;entry_SYSCALL_64_fastpath_[k];SyS_read_[k] 4
+dd;read;-;entry_SYSCALL_64_fastpath_[k];SyS_read_[k];vfs_read_[k];__vfs_read_[k];read_iter_zero_[k];iov_iter_zero_[k];__clear_user_[k] 5
+dd;[unknown];__write;-;entry_SYSCALL_64_fastpath_[k];sys_write_[k];vfs_write_[k] 5
+dd;[unknown];[unknown] 5
+dd;[unknown];[unknown] 5
+dd;[unknown];read;-;entry_SYSCALL_64_fastpath_[k];SyS_read_[k];vfs_read_[k];__vfs_read_[k];read_iter_zero_[k];iov_iter_zero_[k] 6
+dd;read 15
+dd;[unknown];read;-;entry_SYSCALL_64_fastpath_[k];SyS_read_[k];vfs_read_[k];__vfs_read_[k];read_iter_zero_[k];iov_iter_zero_[k];__clear_user_[k] 19
+dd;[unknown];__write;-;entry_SYSCALL_64_fastpath_[k] 20
+dd;read;-;entry_SYSCALL_64_fastpath_[k] 23
+dd;read;-;entry_SYSCALL_64_fastpath_[k];SyS_read_[k];vfs_read_[k];__vfs_read_[k];read_iter_zero_[k];iov_iter_zero_[k];__clear_user_[k] 24
+dd;__write;-;entry_SYSCALL_64_fastpath_[k] 25
+dd;__write 29
+dd;[unknown];read;-;entry_SYSCALL_64_fastpath_[k] 31
+
+This can be made into a flamegraph. Eg:
+
+# ./profile -adf -p `pgrep -n func_ab` 10 > out.profile
+# git clone https://github.com/brendangregg/FlameGraph
+# ./FlameGraph/flamegraph.pl --color=java < out.profile > out.svg
+
+It will highlight the kernel frames in orange, and user-level in red (and Java
+in green, and C++ in yellow). If you copy-n-paste the above output into a
+out.profile file, you can try it out.
+
+
+You can increase or decrease the sample frequency. Eg, sampling at 9 Hertz:
+
+# ./profile -F 9
+Sampling at 9 Hertz of all threads by user + kernel stack... Hit Ctrl-C to end.
+^C
+    000000000040056a func_b
+    00000000004005ac main
+    00007f0458819830 __libc_start_main
+    083e258d4c544155 [unknown]
+    -                func_ab (2930)
+        1
+
+[...]
+
+    ffffffff8105eb66 native_safe_halt
+    ffffffff8103659e default_idle
+    ffffffff81036d1f arch_cpu_idle
+    ffffffff810bba5a default_idle_call
+    ffffffff810bbd07 cpu_startup_entry
+    ffffffff8104df55 start_secondary
+    -                swapper/3 (0)
+        8
+
+    ffffffff8105eb66 native_safe_halt
+    ffffffff8103659e default_idle
+    ffffffff81036d1f arch_cpu_idle
+    ffffffff810bba5a default_idle_call
+    ffffffff810bbd07 cpu_startup_entry
+    ffffffff817bf497 rest_init
+    ffffffff81d65f58 start_kernel
+    ffffffff81d652db x86_64_start_reservations
+    ffffffff81d65418 x86_64_start_kernel
+    -                swapper/0 (0)
+        8
+
+
+You can also restrict profiling to just kernel stacks (-K) or user stacks (-U).
+For example, just user stacks:
+
+# ./profile -U
+Sampling at 49 Hertz of all threads by user stack... Hit Ctrl-C to end.
+^C
+    0000000000402ccc [unknown]
+    00007f45a624422c [unknown]
+    -                dd (2931)
+        1
+
+    0000000000404b80 [unknown]
+    00007f45a624422c [unknown]
+    -                dd (2931)
+        1
+
+    0000000000404d77 [unknown]
+    00007f45a624422c [unknown]
+    -                dd (2931)
+        1
+
+    00007f45a5e85e5e [unknown]
+    00007f45a624422c [unknown]
+    -                dd (2931)
+        1
+
+    0000000000402d12 [unknown]
+    00007f45a624422c [unknown]
+    -                dd (2931)
+        1
+
+    0000000000400562 func_b
+    00000000004005ac main
+    00007f0458819830 __libc_start_main
+    083e258d4c544155 [unknown]
+    -                func_ab (2930)
+        1
+
+    0000000000404805 [unknown]
+    -                dd (2931)
+        1
+
+    00000000004047de [unknown]
+    -                dd (2931)
+        1
+
+    0000000000400542 func_a
+    0000000000400598 main
+    00007f0458819830 __libc_start_main
+    083e258d4c544155 [unknown]
+    -                func_ab (2930)
+        3
+
+    00007f45a5edda10 __write
+    00007f45a624422c [unknown]
+    -                dd (2931)
+        3
+
+    000000000040053a func_a
+    0000000000400598 main
+    00007f0458819830 __libc_start_main
+    083e258d4c544155 [unknown]
+    -                func_ab (2930)
+        4
+
+    000000000040056a func_b
+    00000000004005ac main
+    00007f0458819830 __libc_start_main
+    083e258d4c544155 [unknown]
+    -                func_ab (2930)
+        7
+
+    -                swapper/6 (0)
+        10
+
+    0000000000400571 func_b
+    00000000004005ac main
+    00007f0458819830 __libc_start_main
+    083e258d4c544155 [unknown]
+    -                func_ab (2930)
+        10
+
+    00007f45a5edda10 __write
+    -                dd (2931)
+        10
+
+    0000000000400549 func_a
+    0000000000400598 main
+    00007f0458819830 __libc_start_main
+    083e258d4c544155 [unknown]
+    -                func_ab (2930)
+        11
+
+    00007f45a5edd9b0 read
+    -                dd (2931)
+        12
+
+    00007f45a5edd9b0 read
+    00007f45a624422c [unknown]
+    -                dd (2931)
+        14
+
+    -                swapper/7 (0)
+        46
+
+    -                swapper/0 (0)
+        46
+
+    -                swapper/2 (0)
+        46
+
+    -                swapper/1 (0)
+        46
+
+    -                swapper/3 (0)
+        46
+
+    -                swapper/4 (0)
+        46
+
+
+If there are too many unique stack traces for the kernel to save, a warning
+will be printed. Eg:
+
+# ./profile
+[...]
+WARNING: 8 stack traces could not be displayed. Consider increasing --stack-storage-size.
+
+Run ./profile -h to see the default.
+
+
+There is a -S option to skip kernel frames. You probably don't need to mess
+with this. Here's why it exists: consider the following kernel stack trace,
+and IP:
+
+    ffffffff81174e78 perf_swevent_hrtimer
+    ffffffff810e6984 __hrtimer_run_queues
+    ffffffff810e70f8 hrtimer_interrupt
+    ffffffff81022c69 xen_timer_interrupt
+    ffffffff810d2942 handle_irq_event_percpu
+    ffffffff810d62da handle_percpu_irq
+    ffffffff810d1f52 generic_handle_irq
+    ffffffff814a5137 evtchn_2l_handle_events
+    ffffffff814a2853 __xen_evtchn_do_upcall
+    ffffffff814a4740 xen_evtchn_do_upcall
+    ffffffff817cd50c xen_hvm_callback_vector
+    ffffffff8103663e default_idle
+    ffffffff81036dbf arch_cpu_idle
+    ffffffff810bb8ea default_idle_call
+    ffffffff810bbb97 cpu_startup_entry
+    ffffffff8104df85 start_secondary
+
+IP: ffffffff8105eb66 native_safe_halt
+
+This is the idle thread. The first function is native_safe_halt(), and its
+parent is default_idle(). But what you see there is really what we are
+profiling. All that stuff above default_idle()? Interrupt framework stack.
+
+So we have to exclude those interrupt frames. I do this by fetching the ret IP
+from the kernel stack, and then scanning for it in user-level: in this case
+it would be default_idle(). Ok.
+
+If this doesn't work on your architecture (and your kernel stacks are a
+single line, the IP), then you might consider setting a fixed skip count,
+which avoids this ret IP logic. For the above stack, I'd set "-S 11", and
+it would slice off those 11 interrupt frames nicely. It also does this in
+kernel context for efficiency.
+
+So how do you figure out what number to use? 11? 14? 5? Well.. Try "-S 1",
+and then see how much higher you need to set it. Remember on the real
+profile output that the IP line is printed on top of the sliced stack.
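The ret-IP slicing described above can be sketched in plain Python. This is a hypothetical helper name; it mirrors the user-land loop the tool runs when -S is not given, not the tool's exact code:

```python
def slice_interrupt_frames(kernel_tmp, kernel_ret_ip):
    """Drop interrupt-framework frames from a kernel stack.

    kernel_tmp: frame addresses, innermost first.
    kernel_ret_ip: the return IP recorded in kernel context. Frames
    above it are interrupt machinery and are skipped; the matching
    frame and everything below it are kept.
    """
    kernel_stack = []
    skipping = True
    for addr in kernel_tmp:
        if addr == kernel_ret_ip:
            skipping = False  # found the ret IP; keep from here down
        if not skipping:
            kernel_stack.append(addr)
    return kernel_stack
```

With the Xen stack above, the recorded ret IP would match the default_idle() frame, so the eleven interrupt frames above it are dropped, which is exactly what "-S 11" would do by fixed count.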
+
+
+USAGE message:
+
+# ./profile -h
+usage: profile [-h] [-p PID] [-U | -K] [-F FREQUENCY] [-d] [-a] [-f]
+               [--stack-storage-size STACK_STORAGE_SIZE] [-S KERNEL_SKIP]
+               [duration]
+
+Profile CPU stack traces at a timed interval
+
+positional arguments:
+  duration              duration of trace, in seconds
+
+optional arguments:
+  -h, --help            show this help message and exit
+  -p PID, --pid PID     profile this PID only
+  -U, --user-stacks-only
+                        show stacks from user space only (no kernel space
+                        stacks)
+  -K, --kernel-stacks-only
+                        show stacks from kernel space only (no user space
+                        stacks)
+  -F FREQUENCY, --frequency FREQUENCY
+                        sample frequency, Hertz (default 49)
+  -d, --delimited       insert delimiter between kernel/user stacks
+  -a, --annotations     add _[k] annotations to kernel frames
+  -f, --folded          output folded format, one line per stack (for flame
+                        graphs)
+  --stack-storage-size STACK_STORAGE_SIZE
+                        the number of unique stack traces that can be stored
+                        and displayed (default 2048)
+  -S KERNEL_SKIP, --kernel-skip KERNEL_SKIP
+                        skip this many kernel frames (default 3)
+
+examples:
+    ./profile             # profile stack traces at 49 Hertz until Ctrl-C
+    ./profile -F 99       # profile stack traces at 99 Hertz
+    ./profile 5           # profile at 49 Hertz for 5 seconds only
+    ./profile -f 5        # output in folded format for flame graphs
+    ./profile -p 185      # only profile threads for PID 185
+    ./profile -U          # only show user space stacks (no kernel)
+    ./profile -K          # only show kernel space stacks (no user)
+    ./profile -S 11       # always skip 11 frames of kernel stack
diff --git a/tools/profile.py b/tools/profile.py
index 6f28eed5a56d..30c4d259d1e3 100755
--- a/tools/profile.py
+++ b/tools/profile.py
@@ -13,31 +13,22 @@
 # and for efficiency it does not initialize the perf ring buffer, so the
 # redundant perf samples are not collected.
 #
-# Kernel stacks are post-process in user-land to skip the interrupt framework
-# frames. You can improve efficiency a little by specifying the exact number
-# of frames to skip with -s, provided you know what that is. If you get -s
-# wrong, note that the first line is the IP, and then the (skipped) stack.
-#
-# Note: if another perf-based sampling session is active, the output may become
-# polluted with their events. On older kernels, the ouptut may also become
-# polluted with tracing sessions (when the kprobe is used instead of the
-# tracepoint). If this becomes a problem, logic can be added to filter events.
-#
-# REQUIRES: Linux 4.6+ (BPF_MAP_TYPE_STACK_TRACE support), and the
-# perf_misc_flags() function symbol to exist. The latter may or may not
-# exist depending on your kernel build. Linux 4.9 provides a proper solution
-# to this (this tool will be updated).
+# REQUIRES: Linux 4.9+ (BPF_PROG_TYPE_PERF_EVENT support). Under tools/old is
+# a version of this tool that may work on Linux 4.6 - 4.8.
 #
 # Copyright 2016 Netflix, Inc.
 # Licensed under the Apache License, Version 2.0 (the "License")
 #
-# THANKS: Sasha Goldshtein, Andrew Birchall, and Evgeny Vereshchagin, who wrote
-# much of the code here, borrowed from tracepoint.py and offcputime.py.
+# THANKS: Alexei Starovoitov, who added proper BPF profiling support to Linux;
+# Sasha Goldshtein, Andrew Birchall, and Evgeny Vereshchagin, who wrote much
+# of the code here, borrowed from tracepoint.py and offcputime.py; and
+# Teng Qin, who added perf support in bcc.
 #
 # 15-Jul-2016   Brendan Gregg   Created this.
+# 20-Oct-2016      "      "     Switched to use the new 4.9 support.
 from __future__ import print_function
-from bcc import BPF, Perf
+from bcc import BPF, PerfType, PerfSWConfig
 from sys import stderr
 from time import sleep
 import argparse
@@ -77,7 +68,6 @@ def positive_nonzero_int(val):
     ./profile -p 185      # only profile threads for PID 185
     ./profile -U          # only show user space stacks (no kernel)
     ./profile -K          # only show kernel space stacks (no user)
-    ./profile -S 11       # always skip 11 frames of kernel stack
 """
 parser = argparse.ArgumentParser(
     description="Profile CPU stack traces at a timed interval",
@@ -104,15 +94,12 @@ def positive_nonzero_int(val):
     type=positive_nonzero_int,
     help="the number of unique stack traces that can be stored and "
         "displayed (default 2048)")
-parser.add_argument("-S", "--kernel-skip", type=positive_int, default=0,
-    help="skip this many kernel frames (default 3)")
 parser.add_argument("duration", nargs="?", default=99999999,
     type=positive_nonzero_int,
     help="duration of trace, in seconds")
 
 # option logic
 args = parser.parse_args()
-skip = args.kernel_skip
 pid = int(args.pid) if args.pid is not None else -1
 duration = int(args.duration)
 debug = 0
@@ -127,6 +114,7 @@ def positive_nonzero_int(val):
 # define BPF program
 bpf_text = """
 #include <uapi/linux/ptrace.h>
+#include <uapi/linux/bpf_perf_event.h>
 #include <linux/sched.h>
 
 struct key_t {
@@ -143,7 +131,7 @@ def positive_nonzero_int(val):
 // This code gets a bit complex. Probably not suitable for casual hacking.
-PERF_TRACE_EVENT {
+int do_perf_event(struct bpf_perf_event_data *ctx) {
     u32 pid = bpf_get_current_pid_tgid();
     if (!(THREAD_FILTER))
         return 0;
@@ -160,7 +148,7 @@ def positive_nonzero_int(val):
     if (key.kernel_stack_id >= 0) {
         // populate extras to fix the kernel stack
         struct pt_regs regs = {};
-        bpf_probe_read(&regs, sizeof(regs), (void *)REGS_LOCATION);
+        bpf_probe_read(&regs, sizeof(regs), (void *)&ctx->regs);
         u64 ip = PT_REGS_IP(&regs);
 
         // if ip isn't sane, leave key ips as zero for later checking
@@ -170,17 +158,6 @@ def positive_nonzero_int(val):
         if (ip > PAGE_OFFSET) {
 #endif
             key.kernel_ip = ip;
-            if (DO_KERNEL_RIP) {
-                /*
-                 * User didn't specify a skip value (-s), so we will figure
-                 * out how many interrupt framework frames to skip by recording
-                 * the kernel rip, then later scanning for it on the stack.
-                 * This is likely x86_64 specific; can use -s as a workaround
-                 * until this supports your architecture.
-                 */
-                bpf_probe_read(&key.kernel_ret_ip, sizeof(key.kernel_ret_ip),
-                    (void *)(regs.bp + 8));
-            }
         }
     }
 
@@ -206,10 +183,11 @@ def positive_nonzero_int(val):
 bpf_text = bpf_text.replace('STACK_STORAGE_SIZE',
     str(args.stack_storage_size))
 
 # handle stack args
-kernel_stack_get = "stack_traces.get_stackid(args, " \
-    "%d | BPF_F_REUSE_STACKID)" % skip
+kernel_stack_get = \
+    "stack_traces.get_stackid(&ctx->regs, 0 | BPF_F_REUSE_STACKID)"
 user_stack_get = \
-    "stack_traces.get_stackid(args, BPF_F_REUSE_STACKID | BPF_F_USER_STACK)"
+    "stack_traces.get_stackid(&ctx->regs, 0 | BPF_F_REUSE_STACKID | " \
+    "BPF_F_USER_STACK)"
 stack_context = ""
 if args.user_stacks_only:
     stack_context = "user"
@@ -221,12 +199,6 @@ def positive_nonzero_int(val):
     stack_context = "user + kernel"
 bpf_text = bpf_text.replace('USER_STACK_GET', user_stack_get)
 bpf_text = bpf_text.replace('KERNEL_STACK_GET', kernel_stack_get)
-if skip:
-    # don't record the rip, as we won't use it
-    bpf_text = bpf_text.replace('DO_KERNEL_RIP', '0')
-else:
-    # rip is used to skip interrupt infrastructure frames
-    bpf_text = bpf_text.replace('DO_KERNEL_RIP', '1')
 
 # header
 if not args.folded:
@@ -237,41 +209,19 @@ def positive_nonzero_int(val):
 else:
     print("... Hit Ctrl-C to end.")
 
-# kprobe perf_misc_flags()
-bpf_text = bpf_text.replace('PERF_TRACE_EVENT',
-    'int kprobe__perf_misc_flags(struct pt_regs *args)')
-bpf_text = bpf_text.replace('REGS_LOCATION', 'PT_REGS_PARM1(args)')
 if debug:
     print(bpf_text)
 
-# initialize BPF
-try:
-    b = BPF(text=bpf_text)
-except:
-    print("BPF initialization failed. perf_misc_flags() may be inlined in " +
-        "your kernel build.\nThis tool will be updated in the future to " +
-        "support Linux 4.9, which has reliable profiling support. Exiting.")
-    exit()
+# initialize BPF & perf_events
+b = BPF(text=bpf_text)
+b.attach_perf_event(ev_type=PerfType.SOFTWARE,
+    ev_config=PerfSWConfig.CPU_CLOCK, fn_name="do_perf_event",
+    sample_period=0, sample_freq=args.frequency)
 
 # signal handler
 def signal_ignore(signal, frame):
     print()
 
-#
-# Setup perf_events
-#
-
-# use perf_events to sample
-try:
-    Perf.perf_event_open(0, pid=-1, ptype=Perf.PERF_TYPE_SOFTWARE,
-        freq=args.frequency)
-except:
-    print("ERROR: initializing perf_events for sampling.\n"
-        "To debug this, try running the following command:\n"
-        "    perf record -F 49 -e cpu-clock %s -- sleep 1\n"
-        "If that also doesn't work, fix it first." % perf_filter, file=stderr)
-    exit(0)
-
 #
 # Output Report
 #
@@ -317,19 +267,9 @@ def aksym(addr):
         # fix kernel stack
         kernel_stack = []
         if k.kernel_stack_id >= 0:
-            if skip:
-                # fixed skip
-                for addr in kernel_tmp:
-                    kernel_stack.append(addr)
-                kernel_stack = kernel_stack[skip:]
-            else:
-                # skip the interrupt framework stack by searching for our RIP
-                skipping = 1
-                for addr in kernel_tmp:
-                    if k.kernel_ret_ip == addr:
-                        skipping = 0
-                    if not skipping:
-                        kernel_stack.append(addr)
+            for addr in kernel_tmp:
+                kernel_stack.append(addr)
+            # the later IP checking
             if k.kernel_ip:
                 kernel_stack.insert(0, k.kernel_ip)
diff --git a/tools/profile_example.txt b/tools/profile_example.txt
index cd0c5ef5341f..ab1c4eba7f93 100644
--- a/tools/profile_example.txt
+++ b/tools/profile_example.txt
@@ -702,53 +702,11 @@ WARNING: 8 stack traces could not be displayed. Consider increasing --stack-stor
 Run ./profile -h to see the default.
 
 
-There is a -S option to skip kernel frames. You probably don't need to mess
-with this. Here's why it exists: consider the following kernel stack trace,
-and IP:
-
-    ffffffff81174e78 perf_swevent_hrtimer
-    ffffffff810e6984 __hrtimer_run_queues
-    ffffffff810e70f8 hrtimer_interrupt
-    ffffffff81022c69 xen_timer_interrupt
-    ffffffff810d2942 handle_irq_event_percpu
-    ffffffff810d62da handle_percpu_irq
-    ffffffff810d1f52 generic_handle_irq
-    ffffffff814a5137 evtchn_2l_handle_events
-    ffffffff814a2853 __xen_evtchn_do_upcall
-    ffffffff814a4740 xen_evtchn_do_upcall
-    ffffffff817cd50c xen_hvm_callback_vector
-    ffffffff8103663e default_idle
-    ffffffff81036dbf arch_cpu_idle
-    ffffffff810bb8ea default_idle_call
-    ffffffff810bbb97 cpu_startup_entry
-    ffffffff8104df85 start_secondary
-
-IP: ffffffff8105eb66 native_safe_halt
-
-This is the idle thread. The first function is native_safe_halt(), and its
-parent is default_idle(). But what you see there is really what we are
-profiling. All that stuff above default_idle()? Interrupt framework stack.
-
-So we have to exclude those interrupt frames. I do this by fetching the ret IP
-from the kernel stack, and then scanning for it in user-level: in this case
-it would be default_idle(). Ok.
-
-If this doesn't work on your architecture (and your kernel stacks are a
-single line, the IP), then you might consider setting a fixed skip count,
-which avoids this ret IP logic. For the above stack, I'd set "-S 11", and
-it would slice off those 11 interrupt frames nicely. It also does this in
-kernel context for efficiency.
-
-So how do you figure out what number to use? 11? 14? 5? Well.. Try "-S 1",
-and then see how much higher you need to set it. Remember on the real
-profile output that the IP line is printed on top of the sliced stack.
-
-
 USAGE message:
 
 # ./profile -h
 usage: profile [-h] [-p PID] [-U | -K] [-F FREQUENCY] [-d] [-a] [-f]
-               [--stack-storage-size STACK_STORAGE_SIZE] [-S KERNEL_SKIP]
+               [--stack-storage-size STACK_STORAGE_SIZE]
                [duration]
 
 Profile CPU stack traces at a timed interval
 
@@ -774,8 +732,6 @@ optional arguments:
   --stack-storage-size STACK_STORAGE_SIZE
                         the number of unique stack traces that can be stored
                         and displayed (default 2048)
-  -S KERNEL_SKIP, --kernel-skip KERNEL_SKIP
-                        skip this many kernel frames (default 3)
 
 examples:
     ./profile             # profile stack traces at 49 Hertz until Ctrl-C
@@ -785,4 +741,3 @@ examples:
     ./profile -p 185      # only profile threads for PID 185
     ./profile -U          # only show user space stacks (no kernel)
     ./profile -K          # only show kernel space stacks (no user)
-    ./profile -S 11       # always skip 11 frames of kernel stack
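For reference, the folded format used throughout the example output above is one stack per line: semicolon-joined frames (process name first, leaf frame last), a space, then a sample count. A minimal aggregator might look like the following sketch (hypothetical helper; flamegraph.pl does the real parsing and rendering):

```python
from collections import Counter

def aggregate_folded(lines):
    # Sum sample counts per unique stack; each line is
    # "proc;frame1;...;frameN <count>".
    totals = Counter()
    for line in lines:
        stack, _, count = line.rpartition(" ")
        totals[stack] += int(count)
    return totals

# Sample lines taken from the folded output above:
folded = [
    "dd;read;-;entry_SYSCALL_64_fastpath_[k] 23",
    "dd;__write;-;entry_SYSCALL_64_fastpath_[k] 25",
    "dd;read;-;entry_SYSCALL_64_fastpath_[k] 1",
]
totals = aggregate_folded(folded)
```

Duplicate stacks (as can appear when [unknown] user frames collapse to the same string) are merged by summing their counts, which is also what flamegraph.pl does.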