path: root/tools
2019-11-22  perf intel-pt: Add support for decoding AUX area samples  (Adrian Hunter)
Add support for dumping, queuing and decoding AUX area samples. Decoding samples is the same as regular decoding, except in the case where there are no timestamps, in which case buffers are decoded immediately before the sample event. Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> Cc: Jiri Olsa <jolsa@redhat.com> Link: http://lore.kernel.org/lkml/20191115124225.5247-15-adrian.hunter@intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2019-11-22  perf intel-pt: Add support for recording AUX area samples  (Adrian Hunter)
Set up the default number of mmap pages, default sample size and default psb_period for AUX area sampling. Add documentation also. Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> Cc: Jiri Olsa <jolsa@redhat.com> Link: http://lore.kernel.org/lkml/20191115124225.5247-14-adrian.hunter@intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2019-11-22  perf pmu: When using default config, record which bits of config were changed by the user  (Adrian Hunter)
Default config for a PMU is defined before selected events are parsed. That allows the user-entered config to override the default config. However that does not allow for changing the default config based on other options. For example, if the user chooses AUX area sampling mode, in the case of Intel PT, the psb_period needs to be small for sampling, so there is a need to set the default psb_period to 0 (2 KiB) in that case. However that should not override a value set by the user. To allow for that, when using default config, record which bits of config were changed by the user. Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> Cc: Jiri Olsa <jolsa@redhat.com> Link: http://lore.kernel.org/lkml/20191115124225.5247-13-adrian.hunter@intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2019-11-22  perf auxtrace: Add support for queuing AUX area samples  (Adrian Hunter)
Add functions to queue AUX area samples in advance (auxtrace_queue_data()) or individually (auxtrace_queues__add_sample()) or find out what queue a sample belongs on (auxtrace_queues__sample_queue()). auxtrace_queue_data() can also queue snapshot data which keeps snapshots and samples ordered with respect to each other in case support for that is desired. Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> Cc: Jiri Olsa <jolsa@redhat.com> Link: http://lore.kernel.org/lkml/20191115124225.5247-12-adrian.hunter@intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2019-11-22  perf session: Add facility to peek at all events  (Adrian Hunter)
AUX area samples are not limited in how far back in time the sample could start. Consequently samples must be queued in advance to allow for time-ordered processing. To achieve that, add perf_session__peek_events() that walks and peeks at all the events. Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> Cc: Jiri Olsa <jolsa@redhat.com> Link: http://lore.kernel.org/lkml/20191115124225.5247-11-adrian.hunter@intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2019-11-22  perf auxtrace: Add support for dumping AUX area samples  (Adrian Hunter)
Add support for dumping AUX area samples i.e. via the perf script/report -D (--dump-raw-trace) option. Committer notes: Add __maybe_unused to the two args for auxtrace__dump_auxtrace_sample() for when we don't HAVE_AUXTRACE_SUPPORT. Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> Cc: Jiri Olsa <jolsa@redhat.com> Link: http://lore.kernel.org/lkml/20191115124225.5247-10-adrian.hunter@intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2019-11-22  perf inject: Cut AUX area samples  (Adrian Hunter)
After decoding AUX area samples, the AUX area data is no longer needed (having been replaced by synthesized events) so cut it out. Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> Cc: Jiri Olsa <jolsa@redhat.com> Link: http://lore.kernel.org/lkml/20191115124225.5247-9-adrian.hunter@intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2019-11-22  perf record: Add aux-sample-size config term  (Adrian Hunter)
To allow individual events to be selected for AUX area sampling, add aux-sample-size config term. attr.aux_sample_size is updated by auxtrace_parse_sample_options() so that the existing validation will see the value. Any event that has a non-zero aux_sample_size will cause AUX area sampling to be configured, irrespective of the --aux-sample option. Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> Cc: Jiri Olsa <jolsa@redhat.com> Link: http://lore.kernel.org/lkml/20191115124225.5247-8-adrian.hunter@intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2019-11-22  perf record: Add support for AUX area sampling  (Adrian Hunter)
Add a 'perf record' option '--aux-sample' to request AUX area sampling. AUX area sampling uses an overwriting buffer much like snapshot mode, so adjust the AUX buffer mmapping accordingly. To make it easy to queue samples for decoding, synthesize an ID index. Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> Cc: Jiri Olsa <jolsa@redhat.com> Link: http://lore.kernel.org/lkml/20191115124225.5247-7-adrian.hunter@intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2019-11-22  perf auxtrace: Add support for AUX area sample recording  (Adrian Hunter)
Add support for parsing and validating AUX area sample options. At present, the only option is the sample size, but it is also necessary to ensure that events are in a group with an AUX area event as the leader. Committer note: Add missing 'static inline' in front of auxtrace_parse_sample_options() for when we don't HAVE_AUXTRACE_SUPPORT. Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> Cc: Jiri Olsa <jolsa@redhat.com> Link: http://lore.kernel.org/lkml/20191115124225.5247-6-adrian.hunter@intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2019-11-22  perf auxtrace: Move perf_evsel__find_pmu()  (Adrian Hunter)
Move perf_evsel__find_pmu() so it can be used without forward declaration. Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> Cc: Jiri Olsa <jolsa@redhat.com> Link: http://lore.kernel.org/lkml/20191115124225.5247-5-adrian.hunter@intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2019-11-22  perf record: Add a function to test for kernel support for AUX area sampling  (Adrian Hunter)
Architectures are expected to know if AUX area sampling is supported by the hardware. Add a function perf_can_aux_sample() which will determine whether the kernel supports it.

Committer notes:

I reported that this message was taking place on a kernel without the required bits:

  # perf record --aux-sample -e '{intel_pt//u,branch-misses:u}'
  Error:
  The sys_perf_event_open() syscall returned with 7 (Argument list too long) for event (branch-misses:u).
  /bin/dmesg | grep -i perf may provide additional information.

Adrian sent a patch addressing it, with this explanation:

  ----
  perf_can_aux_sample_size() always returned true because it did not pass the attribute size to sys_perf_event_open, nor correctly check the return value and errno.
  ----

After applying it I get, later in the series, when --aux-sample is added:

  # perf record --aux-sample -e '{intel_pt//u,branch-misses:u}'
  AUX area sampling is not supported by kernel

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> Cc: Jiri Olsa <jolsa@redhat.com> Link: http://lore.kernel.org/lkml/20191115124225.5247-4-adrian.hunter@intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
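The probe that such a capability test performs can be sketched roughly as follows: open a throwaway event whose attr.size covers the new aux_sample_size field and treat E2BIG (kernel attribute too small to know the field) as "not supported". This is a hedged illustration, not the actual tools/perf code, and it assumes a uapi perf_event.h new enough to declare aux_sample_size:

  #include <errno.h>
  #include <string.h>
  #include <unistd.h>
  #include <sys/syscall.h>
  #include <linux/perf_event.h>

  static int kernel_knows_aux_sample_size(void)
  {
          struct perf_event_attr attr;
          int fd;

          memset(&attr, 0, sizeof(attr));
          attr.size = sizeof(attr);        /* advertise the full (new) attribute size */
          attr.type = PERF_TYPE_SOFTWARE;
          attr.config = PERF_COUNT_SW_DUMMY;
          attr.exclude_kernel = 1;
          attr.aux_sample_size = 1;        /* non-zero so the kernel has to look at it */

          fd = syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
          if (fd < 0 && errno == E2BIG)
                  return 0;                /* kernel attribute too small: field unknown */
          if (fd >= 0)
                  close(fd);
          return 1;                        /* success, or another errno: field is known */
  }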
2019-11-21  selftests/x86/sigreturn/32: Invalidate DS and ES when abusing the kernel  (Andy Lutomirski)
If the kernel accidentally uses DS or ES while the user values are loaded, it will work fine for sane userspace. In the interest of simulating maximally insane userspace, make sigreturn_32 zero out DS and ES for the nasty parts so that inadvertent use of these segments will crash. Signed-off-by: Andy Lutomirski <luto@kernel.org> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: stable@kernel.org
2019-11-21  selftests/x86/mov_ss_trap: Fix the SYSENTER test  (Andy Lutomirski)
For reasons that I haven't quite fully diagnosed, running mov_ss_trap_32 on a 32-bit kernel results in an infinite loop in userspace. This appears to be because the hacky SYSENTER test doesn't segfault as desired; instead it corrupts the program state such that it infinite loops. Fix it by explicitly clearing EBP before doing SYSENTER. This will give a more reliable segfault. Fixes: 59c2a7226fc5 ("x86/selftests: Add mov_to_ss test") Signed-off-by: Andy Lutomirski <luto@kernel.org> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: stable@kernel.org
2019-11-21  Merge tag 'gpio-v5.4-5' of git://git.kernel.org/pub/scm/linux/kernel/git/linusw/linux-gpio  (Linus Torvalds)
Pull GPIO fixes from Linus Walleij:
 "A last set of small fixes for GPIO, this cycle was quite busy.

  - Fix debounce delays on the MAX77620 GPIO expander

  - Use the correct unit for debounce times on the BD70528 GPIO expander

  - Get proper deps for parallel builds of the GPIO tools

  - Add a specific ACPI quirk for the Terra Pad 1061"

* tag 'gpio-v5.4-5' of git://git.kernel.org/pub/scm/linux/kernel/git/linusw/linux-gpio:
  gpiolib: acpi: Add Terra Pad 1061 to the run_edge_events_on_boot_blacklist
  tools: gpio: Correctly add make dependencies for gpio_utils
  gpio: bd70528: Use correct unit for debounce times
  gpio: max77620: Fixup debounce delays
2019-11-21  perf tools: Add kernel AUX area sampling definitions  (Adrian Hunter)
Add kernel AUX area sampling definitions, which brings perf_event.h into line with the kernel version. New sample type PERF_SAMPLE_AUX requests a sample of the AUX area buffer. New perf_event_attr member 'aux_sample_size' specifies the desired size of the sample. Also add support for parsing samples containing AUX area data, i.e. PERF_SAMPLE_AUX.

Committer notes:

I squashed the first two patches in this series to avoid breaking automatic bisection, i.e. after applying only the original first patch in this series we would have:

  # perf test -v parsing
  26: Sample parsing :
  --- start ---
  test child forked, pid 17018
  sample format has changed, some new PERF_SAMPLE_ bit was introduced - test needs updating
  test child finished with -1
  ---- end ----
  Sample parsing: FAILED!
  #

With the two patches combined:

  # perf test parsing
  26: Sample parsing : Ok
  #

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Jiri Olsa <jolsa@redhat.com> Link: http://lore.kernel.org/lkml/20191115124225.5247-3-adrian.hunter@intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
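From a caller's perspective the two new UAPI pieces look roughly like this; a hedged sketch (hypothetical helper name) that assumes a uapi perf_event.h new enough to declare PERF_SAMPLE_AUX and aux_sample_size. The AUX data itself is taken from the AUX area of the event's group leader:

  #include <linux/perf_event.h>

  static void request_aux_sample(struct perf_event_attr *attr, unsigned int bytes)
  {
          attr->size = sizeof(*attr);           /* so the kernel sees the new field */
          attr->sample_type |= PERF_SAMPLE_AUX; /* ask for an AUX area snapshot per sample */
          attr->aux_sample_size = bytes;        /* desired snapshot size, e.g. 4096 */
  }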
2019-11-21  Merge branch 'kvm-tsx-ctrl' into HEAD  (Paolo Bonzini)
Conflicts: arch/x86/kvm/vmx/vmx.c
2019-11-20  Merge git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next  (David S. Miller)
Daniel Borkmann says: ==================== pull-request: bpf-next 2019-11-20 The following pull-request contains BPF updates for your *net-next* tree. We've added 81 non-merge commits during the last 17 day(s) which contain a total of 120 files changed, 4958 insertions(+), 1081 deletions(-). There are 3 trivial conflicts, resolve it by always taking the chunk from 196e8ca74886c433: <<<<<<< HEAD ======= void *bpf_map_area_mmapable_alloc(u64 size, int numa_node); >>>>>>> 196e8ca74886c433dcfc64a809707074b936aaf5 <<<<<<< HEAD void *bpf_map_area_alloc(u64 size, int numa_node) ======= static void *__bpf_map_area_alloc(u64 size, int numa_node, bool mmapable) >>>>>>> 196e8ca74886c433dcfc64a809707074b936aaf5 <<<<<<< HEAD if (size <= (PAGE_SIZE << PAGE_ALLOC_COSTLY_ORDER)) { ======= /* kmalloc()'ed memory can't be mmap()'ed */ if (!mmapable && size <= (PAGE_SIZE << PAGE_ALLOC_COSTLY_ORDER)) { >>>>>>> 196e8ca74886c433dcfc64a809707074b936aaf5 The main changes are: 1) Addition of BPF trampoline which works as a bridge between kernel functions, BPF programs and other BPF programs along with two new use cases: i) fentry/fexit BPF programs for tracing with practically zero overhead to call into BPF (as opposed to k[ret]probes) and ii) attachment of the former to networking related programs to see input/output of networking programs (covering xdpdump use case), from Alexei Starovoitov. 2) BPF array map mmap support and use in libbpf for global data maps; also a big batch of libbpf improvements, among others, support for reading bitfields in a relocatable manner (via libbpf's CO-RE helper API), from Andrii Nakryiko. 3) Extend s390x JIT with usage of relative long jumps and loads in order to lift the current 64/512k size limits on JITed BPF programs there, from Ilya Leoshkevich. 4) Add BPF audit support and emit messages upon successful prog load and unload in order to have a timeline of events, from Daniel Borkmann and Jiri Olsa. 5) Extension to libbpf and xdpsock sample programs to demo the shared umem mode (XDP_SHARED_UMEM) as well as RX-only and TX-only sockets, from Magnus Karlsson. 6) Several follow-up bug fixes for libbpf's auto-pinning code and a new API call named bpf_get_link_xdp_info() for retrieving the full set of prog IDs attached to XDP, from Toke Høiland-Jørgensen. 7) Add BTF support for array of int, array of struct and multidimensional arrays and enable it for skb->cb[] access in kfree_skb test, from Martin KaFai Lau. 8) Fix AF_XDP by using the correct number of channels from ethtool, from Luigi Rizzo. 9) Two fixes for BPF selftest to get rid of a hang in test_tc_tunnel and to avoid xdping to be run as standalone, from Jiri Benc. 10) Various BPF selftest fixes when run with latest LLVM trunk, from Yonghong Song. 11) Fix a memory leak in BPF fentry test run data, from Colin Ian King. 12) Various smaller misc cleanups and improvements mostly all over BPF selftests and samples, from Daniel T. Lee, Andre Guedes, Anders Roxell, Mao Wenan, Yue Haibing. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
2019-11-19  selftests/bpf: Enforce no-ALU32 for test_progs-no_alu32  (Andrii Nakryiko)
With the most recent Clang, alu32 is enabled by default if -mcpu=probe or -mcpu=v3 is specified. Use a separate build rule with -mcpu=v2 to enforce no ALU32 mode. Suggested-by: Yonghong Song <yhs@fb.com> Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Yonghong Song <yhs@fb.com> Link: https://lore.kernel.org/bpf/20191120002510.4130605-1-andriin@fb.com
2019-11-19  libbpf: Fix call relocation offset calculation bug  (Andrii Nakryiko)
When relocating subprogram call, libbpf doesn't take into account relo->text_off, which comes from symbol's value. This generally works fine for subprograms implemented as static functions, but breaks for global functions. Taking a simplified test_pkt_access.c as an example: __attribute__ ((noinline)) static int test_pkt_access_subprog1(volatile struct __sk_buff *skb) { return skb->len * 2; } __attribute__ ((noinline)) static int test_pkt_access_subprog2(int val, volatile struct __sk_buff *skb) { return skb->len + val; } SEC("classifier/test_pkt_access") int test_pkt_access(struct __sk_buff *skb) { if (test_pkt_access_subprog1(skb) != skb->len * 2) return TC_ACT_SHOT; if (test_pkt_access_subprog2(2, skb) != skb->len + 2) return TC_ACT_SHOT; return TC_ACT_UNSPEC; } When compiled, we get two relocations, pointing to '.text' symbol. .text has st_value set to 0 (it points to the beginning of .text section): 0000000000000008 000000050000000a R_BPF_64_32 0000000000000000 .text 0000000000000040 000000050000000a R_BPF_64_32 0000000000000000 .text test_pkt_access_subprog1 and test_pkt_access_subprog2 offsets (targets of two calls) are encoded within call instruction's imm32 part as -1 and 2, respectively: 0000000000000000 test_pkt_access_subprog1: 0: 61 10 00 00 00 00 00 00 r0 = *(u32 *)(r1 + 0) 1: 64 00 00 00 01 00 00 00 w0 <<= 1 2: 95 00 00 00 00 00 00 00 exit 0000000000000018 test_pkt_access_subprog2: 3: 61 10 00 00 00 00 00 00 r0 = *(u32 *)(r1 + 0) 4: 04 00 00 00 02 00 00 00 w0 += 2 5: 95 00 00 00 00 00 00 00 exit 0000000000000000 test_pkt_access: 0: bf 16 00 00 00 00 00 00 r6 = r1 ===> 1: 85 10 00 00 ff ff ff ff call -1 2: bc 01 00 00 00 00 00 00 w1 = w0 3: b4 00 00 00 02 00 00 00 w0 = 2 4: 61 62 00 00 00 00 00 00 r2 = *(u32 *)(r6 + 0) 5: 64 02 00 00 01 00 00 00 w2 <<= 1 6: 5e 21 08 00 00 00 00 00 if w1 != w2 goto +8 <LBB0_3> 7: bf 61 00 00 00 00 00 00 r1 = r6 ===> 8: 85 10 00 00 02 00 00 00 call 2 9: bc 01 00 00 00 00 00 00 w1 = w0 10: 61 62 00 00 00 00 00 00 r2 = *(u32 *)(r6 + 0) 11: 04 02 00 00 02 00 00 00 w2 += 2 12: b4 00 00 00 ff ff ff ff w0 = -1 13: 1e 21 01 00 00 00 00 00 if w1 == w2 goto +1 <LBB0_3> 14: b4 00 00 00 02 00 00 00 w0 = 2 0000000000000078 LBB0_3: 15: 95 00 00 00 00 00 00 00 exit Now, if we compile example with global functions, the setup changes. 
Relocations are now against specifically test_pkt_access_subprog1 and test_pkt_access_subprog2 symbols, with test_pkt_access_subprog2 pointing 24 bytes into its respective section (.text), i.e., 3 instructions in: 0000000000000008 000000070000000a R_BPF_64_32 0000000000000000 test_pkt_access_subprog1 0000000000000048 000000080000000a R_BPF_64_32 0000000000000018 test_pkt_access_subprog2 Calls instructions now encode offsets relative to function symbols and are both set ot -1: 0000000000000000 test_pkt_access_subprog1: 0: 61 10 00 00 00 00 00 00 r0 = *(u32 *)(r1 + 0) 1: 64 00 00 00 01 00 00 00 w0 <<= 1 2: 95 00 00 00 00 00 00 00 exit 0000000000000018 test_pkt_access_subprog2: 3: 61 20 00 00 00 00 00 00 r0 = *(u32 *)(r2 + 0) 4: 0c 10 00 00 00 00 00 00 w0 += w1 5: 95 00 00 00 00 00 00 00 exit 0000000000000000 test_pkt_access: 0: bf 16 00 00 00 00 00 00 r6 = r1 ===> 1: 85 10 00 00 ff ff ff ff call -1 2: bc 01 00 00 00 00 00 00 w1 = w0 3: b4 00 00 00 02 00 00 00 w0 = 2 4: 61 62 00 00 00 00 00 00 r2 = *(u32 *)(r6 + 0) 5: 64 02 00 00 01 00 00 00 w2 <<= 1 6: 5e 21 09 00 00 00 00 00 if w1 != w2 goto +9 <LBB2_3> 7: b4 01 00 00 02 00 00 00 w1 = 2 8: bf 62 00 00 00 00 00 00 r2 = r6 ===> 9: 85 10 00 00 ff ff ff ff call -1 10: bc 01 00 00 00 00 00 00 w1 = w0 11: 61 62 00 00 00 00 00 00 r2 = *(u32 *)(r6 + 0) 12: 04 02 00 00 02 00 00 00 w2 += 2 13: b4 00 00 00 ff ff ff ff w0 = -1 14: 1e 21 01 00 00 00 00 00 if w1 == w2 goto +1 <LBB2_3> 15: b4 00 00 00 02 00 00 00 w0 = 2 0000000000000080 LBB2_3: 16: 95 00 00 00 00 00 00 00 exit Thus the right formula to calculate target call offset after relocation should take into account relocation's target symbol value (offset within section), call instruction's imm32 offset, and (subtracting, to get relative instruction offset) instruction index of call instruction itself. All that is shifted by number of instructions in main program, given all sub-programs are copied over after main program. Convert few selftests relying on bpf-to-bpf calls to use global functions instead of static ones. Fixes: 48cca7e44f9f ("libbpf: add support for bpf_call") Reported-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Andrii Nakryiko <andriin@fb.com> Acked-by: Yonghong Song <yhs@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20191119224447.3781271-1-andriin@fb.com
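The corrected arithmetic spelled out above can be transcribed as a small helper; the name and parameters are illustrative only, not libbpf's internal code. BPF instructions are 8 bytes, hence the division to convert the symbol's byte offset into an instruction offset:

  /* New imm32 for a BPF-to-BPF call after .text is appended behind the main program. */
  static int reloc_call_imm(long sym_off,           /* symbol value from the relocation, in bytes */
                            int old_imm,            /* imm32 already encoded in the call insn */
                            int call_insn_idx,      /* index of the call insn in the main program */
                            int main_prog_insn_cnt) /* number of instructions in the main program */
  {
          return old_imm + (int)(sym_off / 8) + main_prog_insn_cnt - call_insn_idx;
  }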
2019-11-19  perf report: Jump to symbol source view from total cycles view  (Jin Yao)
This patch supports jumping from tui total cycles view to symbol source view. For example, perf record -b ./div perf report --total-cycles In total cycles view, we can select one entry and press 'a' or press ENTER key to jump to symbol source view. This patch also sets sort_order to NULL in cmd_report() which will use the default branch sort order. The percent value in new annotate view will be consistent with the percent in annotate view switched from perf report (we observed the original percent gap with previous patches). v2: --- Fix the 'make NO_SLANG=1' error. (set __maybe_unused to annotation_opts in block_hists_tui_browse()). Signed-off-by: Jin Yao <yao.jin@linux.intel.com> Acked-by: Jiri Olsa <jolsa@kernel.org> Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Jin Yao <yao.jin@intel.com> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Peter Zijlstra <peterz@infradead.org> Link: http://lore.kernel.org/lkml/20191118140849.20714-2-yao.jin@linux.intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2019-11-19  perf util: Move block TUI function to ui browsers  (Jin Yao)
It would be nice if we could jump to the assembler/source view (like the normal perf report) from total cycles view. This patch moves the block_hists_tui_browse from block-info.c to ui/browsers/hists.c in order to reuse some browser codes (i.e do_annotate) for implementing new annotation view. v2: --- Fix the 'make NO_SLANG=1' error. (Change 'int block_hists_tui_browse()' to 'static inline int block_hists_tui_browse()') Signed-off-by: Jin Yao <yao.jin@linux.intel.com> Acked-by: Jiri Olsa <jolsa@kernel.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Jin Yao <yao.jin@intel.com> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Peter Zijlstra <peterz@infradead.org> Link: http://lore.kernel.org/lkml/20191118140849.20714-1-yao.jin@linux.intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2019-11-19  perf session: Fix decompression of PERF_RECORD_COMPRESSED records  (Alexey Budankov)
Avoid termination of trace loading in case the last record in the decompressed buffer partly resides in the following mmaped PERF_RECORD_COMPRESSED record. In this case the NULL value returned by fetch_mmaped_event() means to proceed to the next mmaped record, then decompress it and load the compressed events.

The issue can be reproduced like this:

  $ perf record -z -- some_long_running_workload
  $ perf report --stdio -vv
  decomp (B): 44519 to 163000
  decomp (B): 48119 to 174800
  decomp (B): 65527 to 131072
  fetch_mmaped_event: head=0x1ffe0 event->header_size=0x28, mmap_size=0x20000: fuzzed perf.data?
  Error:
  failed to process sample
  ...

Testing:

  71: Zstd perf.data compression/decompression : Ok

  $ tools/perf/perf report -vv --stdio
  decomp (B): 59593 to 262160
  decomp (B): 4438 to 16512
  decomp (B): 285 to 880
  Looking at the vmlinux_path (8 entries long)
  Using vmlinux for symbols
  decomp (B): 57474 to 261248
  prefetch_event: head=0x3fc78 event->header_size=0x28, mmap_size=0x3fc80: fuzzed or compressed perf.data?
  decomp (B): 25 to 32
  decomp (B): 52 to 120
  ...

Fixes: 57fc032ad643 ("perf session: Avoid infinite loop when seeing invalid header.size") Link: https://marc.info/?l=linux-kernel&m=156580812427554&w=2 Co-developed-by: Jiri Olsa <jolsa@kernel.org> Acked-by: Jiri Olsa <jolsa@kernel.org> Signed-off-by: Alexey Budankov <alexey.budankov@linux.intel.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Link: http://lore.kernel.org/lkml/cf782c34-f3f8-2f9f-d6ab-145cee0d5322@linux.intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
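The shape of the fix can be sketched as a bounds check that treats "record does not fit in the current decompressed buffer" as "fetch and decompress the next chunk" rather than as a fatal error. This is a hedged illustration with simplified names, not the actual fetch_mmaped_event()/prefetch_event() code:

  #include <stddef.h>
  #include <linux/perf_event.h>

  /* Returns NULL when the record at 'head' is split across buffers,
   * i.e. the caller should decompress the next PERF_RECORD_COMPRESSED
   * chunk and retry, instead of declaring the file fuzzed/corrupted. */
  static struct perf_event_header *peek_record(void *buf, size_t buf_size, size_t head)
  {
          struct perf_event_header *hdr = (struct perf_event_header *)((char *)buf + head);

          if (head + sizeof(*hdr) > buf_size)
                  return NULL;   /* even the header is incomplete here */
          if (head + hdr->size > buf_size)
                  return NULL;   /* payload continues in the next chunk */
          return hdr;
  }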
2019-11-19  perf dso: Move dso_id from 'struct map' to 'struct dso'  (Arnaldo Carvalho de Melo)
And take it into account when looking up DSOs when we have the dso_id fields obtained from somewhere, like from PERF_RECORD_MMAP2 records. Instances of struct map pointing to the same DSO pathname but with anything in dso_id different are in fact different DSOs, so better have different 'struct dso' instances to reflect that. At some point we may want to get copies of the contents of the different objects if we want to do correct annotation or other analysis. With this we get 'struct map' 24 bytes leaner: $ pahole -C map ~/bin/perf struct map { union { struct rb_node rb_node __attribute__((__aligned__(8))); /* 0 24 */ struct list_head node; /* 0 16 */ } __attribute__((__aligned__(8))); /* 0 24 */ u64 start; /* 24 8 */ u64 end; /* 32 8 */ _Bool erange_warned:1; /* 40: 0 1 */ _Bool priv:1; /* 40: 1 1 */ /* XXX 6 bits hole, try to pack */ /* XXX 3 bytes hole, try to pack */ u32 prot; /* 44 4 */ u64 pgoff; /* 48 8 */ u64 reloc; /* 56 8 */ /* --- cacheline 1 boundary (64 bytes) --- */ u64 (*map_ip)(struct map *, u64); /* 64 8 */ u64 (*unmap_ip)(struct map *, u64); /* 72 8 */ struct dso * dso; /* 80 8 */ refcount_t refcnt; /* 88 4 */ u32 flags; /* 92 4 */ /* size: 96, cachelines: 2, members: 13 */ /* sum members: 92, holes: 1, sum holes: 3 */ /* sum bitfield members: 2 bits, bit holes: 1, sum bit holes: 6 bits */ /* forced alignments: 1 */ /* last cacheline: 32 bytes */ } __attribute__((__aligned__(8))); $ Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Link: https://lkml.kernel.org/n/tip-g4hxxmraplo7wfjmk384mfsb@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2019-11-19  net-af_xdp: Use correct number of channels from ethtool  (Luigi Rizzo)
Drivers use different fields to report the number of channels, so take the maximum of all data channels (rx, tx, combined) when determining the size of the xsk map. The current code used only 'combined' which was set to 0 in some drivers e.g. mlx4. Tested: compiled and run xdpsock -q 3 -r -S on mlx4 Signed-off-by: Luigi Rizzo <lrizzo@google.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com> Acked-by: Magnus Karlsson <magnus.karlsson@intel.com> Link: https://lore.kernel.org/bpf/20191119001951.92930-1-lrizzo@google.com
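The sizing rule can be illustrated with a small helper over the ethtool channels structure; the function name is hypothetical and it assumes the counts have already been read via the ETHTOOL_GCHANNELS ioctl:

  #include <linux/ethtool.h>

  static __u32 num_data_channels(const struct ethtool_channels *ch)
  {
          __u32 n = ch->combined_count;    /* may be 0 on drivers such as mlx4 */

          if (ch->rx_count > n)
                  n = ch->rx_count;
          if (ch->tx_count > n)
                  n = ch->tx_count;
          return n ? n : 1;                /* fall back to a single queue */
  }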
2019-11-19  perf dsos: Remove unused dsos__find() method  (Arnaldo Carvalho de Melo)
Not used anywhere, nuke it. Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Link: https://lkml.kernel.org/n/tip-teqz0eqcw43mnt7i3me44esw@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2019-11-19  perf map: Move comparison of map's dso_id to a separate function  (Arnaldo Carvalho de Melo)
We'll use it when doing DSO lookups using dso_ids. Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Link: https://lkml.kernel.org/n/tip-u2nr1oq03o0i29w2ay9jx03s@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2019-11-19  perf map: Pass a dso_id to map__new()  (Arnaldo Carvalho de Melo)
Instead of the 4 fields, a step in the direction of moving this to struct dso. Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Link: https://lkml.kernel.org/n/tip-gp5s1xgxacurmih5d1l94ymy@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2019-11-19  perf map: Move maj/min/ino/ino_generation to separate struct  (Arnaldo Carvalho de Melo)
And this patch highlights where these fields are being used: in the sort order where it uses it to compare maps and classify samples taking into account not just the DSO, but those DSO id fields. I think these should be used to differentiate DSOs with the same name but different 'struct dso_id' fields, i.e. these fields should move to 'struct dso' and then be used as part of the key when doing lookups for DSOs, in addition to the DSO name. Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Link: https://lkml.kernel.org/n/tip-8v5isitqy0dup47nnwkpc80f@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2019-11-18  selftests: forwarding: Add speed and auto-negotiation test  (Amit Cohen)
Check configuration and packet transfer with different combinations of autoneg and speed.

Test plan:
 1. Test forcing of the same speed with autoneg off
 2. Test forcing of different speeds with autoneg off (should fail)
 3. One side has autoneg on and the other side forces one of the common speeds
 4. One side has autoneg on and the other side only advertises a subset of the common speeds (one speed of the subset)
 5. One side has autoneg on and the other side only advertises a subset of the common speeds. Check that the highest speed is negotiated
 6. Test autoneg on, but each side advertises different speeds (should fail)

Signed-off-by: Amit Cohen <amitc@mellanox.com> Signed-off-by: Ido Schimmel <idosch@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-11-18  selftests: forwarding: lib.sh: Add wait for dev with timeout  (Amit Cohen)
Add a function that waits for a device with a maximum number of iterations, so the wait is bounded and cannot turn into an infinite loop. This will be used by the subsequent patch, which sets two ports to different speeds in order to make sure they cannot negotiate a link. Waiting for the whole setup is limited to 10 minutes for each device. Signed-off-by: Amit Cohen <amitc@mellanox.com> Signed-off-by: Ido Schimmel <idosch@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-11-18  selftests: forwarding: Add ethtool_lib.sh  (Amit Cohen)
Functions:

 1. speeds_arr_get
    Returns an array of speed values from /usr/include/linux/ethtool.h.
    The array looks as follows:
      [10baseT/Half] = 0,
      [10baseT/Full] = 1,
      ...

 2. ethtool_set
    params: cmd
    Runs ethtool by cmd (ethtool -s cmd) and checks whether there was an error in configuration.

 3. dev_speeds_get
    params: dev, with_mode (0 or 1), adver (0 or 1)
    return value: array of supported/advertised link modes, with or without mode
    Example 1: speeds_get swp1 0 0
      return: 1000 10000 40000
    Example 2: speeds_get swp1 1 1
      return: 1000baseKX/Full 10000baseKR/Full 40000baseCR4/Full

 4. common_speeds_get
    params: dev1, dev2, with_mode (0 or 1), adver (0 or 1)
    return value: array of common speeds of dev1 and dev2
    Example: common_speeds_get swp1 swp2 0 0
      return: 1000 10000
      (assuming that swp1 supports 1000 10000 40000 and swp2 supports 1000 10000)

Signed-off-by: Amit Cohen <amitc@mellanox.com> Signed-off-by: Ido Schimmel <idosch@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-11-18  selftests: mlxsw: Check devlink device before running test  (Danielle Ratson)
The scale test for Spectrum-2 should only be invoked for Spectrum-2. Skip the test otherwise. Signed-off-by: Danielle Ratson <danieller@mellanox.com> Signed-off-by: Ido Schimmel <idosch@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-11-18  selftests: mlxsw: Add router scale test for Spectrum-2  (Danielle Ratson)
Same as for Spectrum-1, test the ability to add the maximum number of routes possible to the switch. Invoke the test from the 'resource_scale' wrapper script. Signed-off-by: Danielle Ratson <danieller@mellanox.com> Signed-off-by: Ido Schimmel <idosch@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-11-18  perf parse: Report initial event parsing error  (Ian Rogers)
Record the first event parsing error and report it. Implementing feedback from Jiri Olsa: https://lkml.org/lkml/2019/10/28/680

An example error is:

  $ tools/perf/perf stat -e c/c/
  WARNING: multiple event parsing errors
  event syntax error: 'c/c/'
                         \___ unknown term

  valid terms: event,filter_rem,filter_opc0,edge,filter_isoc,filter_tid,filter_loc,filter_nc,inv,umask,filter_opc1,tid_en,thresh,filter_all_op,filter_not_nm,filter_state,filter_nm,config,config1,config2,name,period,percore

  Initial error:
  event syntax error: 'c/c/'
                       \___ Cannot find PMU `c'. Missing kernel support?
  Run 'perf list' for a list of valid events

   Usage: perf stat [<options>] [<command>]

      -e, --event <event>   event selector. use 'perf list' to list available events

Signed-off-by: Ian Rogers <irogers@google.com> Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Allison Randal <allison@lohutok.net> Cc: Andi Kleen <ak@linux.intel.com> Cc: Anju T Sudhakar <anju@linux.vnet.ibm.com> Cc: Christian Borntraeger <borntraeger@de.ibm.com> Cc: Davidlohr Bueso <dave@stgolabs.net> Cc: Jin Yao <yao.jin@linux.intel.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Ravi Bangoria <ravi.bangoria@linux.ibm.com> Cc: Stephane Eranian <eranian@google.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Thomas Richter <tmricht@linux.ibm.com> Link: http://lore.kernel.org/lkml/20191116074652.9960-1-irogers@google.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2019-11-18  perf probe: Trace a magic number if variable is not found  (Masami Hiramatsu)
Trace a magic number as immediate value if the target variable is not found at some probe points which is based on one probe event. This feature is good for the case if you trace a source code line with some local variables, which is compiled into several instructions and some of the variables are optimized out on some instructions. Even if so, with this feature, perf probe trace a magic number instead of such disappeared variables and fold those probes on one event. E.g. without this patch: # perf probe -D "pud_page_vaddr pud" Failed to find 'pud' in this function. Failed to find 'pud' in this function. Failed to find 'pud' in this function. Failed to find 'pud' in this function. Failed to find 'pud' in this function. Failed to find 'pud' in this function. Failed to find 'pud' in this function. Failed to find 'pud' in this function. Failed to find 'pud' in this function. Failed to find 'pud' in this function. Failed to find 'pud' in this function. Failed to find 'pud' in this function. Failed to find 'pud' in this function. Failed to find 'pud' in this function. Failed to find 'pud' in this function. Failed to find 'pud' in this function. p:probe/pud_page_vaddr _text+23480787 pud=%ax:x64 p:probe/pud_page_vaddr _text+23808453 pud=%bp:x64 p:probe/pud_page_vaddr _text+23558082 pud=%ax:x64 p:probe/pud_page_vaddr _text+328373 pud=%r8:x64 p:probe/pud_page_vaddr _text+348448 pud=%bx:x64 p:probe/pud_page_vaddr _text+23816818 pud=%bx:x64 With this patch: # perf probe -D "pud_page_vaddr pud" | head spurious_kernel_fault is blacklisted function, skip it. vmalloc_fault is blacklisted function, skip it. p:probe/pud_page_vaddr _text+23480787 pud=%ax:x64 p:probe/pud_page_vaddr _text+149051 pud=\deade12d:x64 p:probe/pud_page_vaddr _text+23808453 pud=%bp:x64 p:probe/pud_page_vaddr _text+315926 pud=\deade12d:x64 p:probe/pud_page_vaddr _text+23807209 pud=\deade12d:x64 p:probe/pud_page_vaddr _text+23557365 pud=%ax:x64 p:probe/pud_page_vaddr _text+314097 pud=%di:x64 p:probe/pud_page_vaddr _text+314015 pud=\deade12d:x64 p:probe/pud_page_vaddr _text+313893 pud=\deade12d:x64 p:probe/pud_page_vaddr _text+324083 pud=\deade12d:x64 Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Ravi Bangoria <ravi.bangoria@linux.ibm.com> Cc: Steven Rostedt (VMware) <rostedt@goodmis.org> Cc: Tom Zanussi <tom.zanussi@linux.intel.com> Link: http://lore.kernel.org/lkml/157406476931.24476.6261475888681844285.stgit@devnote2 Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2019-11-18  perf probe: Support DW_AT_const_value constant value  (Masami Hiramatsu)
Support DW_AT_const_value for variable assignment instead of location. Note that this requires ftrace supporting immediate value. Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Ravi Bangoria <ravi.bangoria@linux.ibm.com> Cc: Steven Rostedt (VMware) <rostedt@goodmis.org> Cc: Tom Zanussi <tom.zanussi@linux.intel.com> Link: http://lore.kernel.org/lkml/157406476012.24476.16096289871757175775.stgit@devnote2 Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
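Reading such a constant with libdw looks roughly like the following; a hedged sketch of the DWARF side only, with a hypothetical helper name (the resulting probe argument becomes an immediate value, which is why ftrace support for immediates is required):

  #include <dwarf.h>
  #include <elfutils/libdw.h>

  /* Returns 0 and fills *val if the variable DIE carries DW_AT_const_value
   * instead of a location; non-zero otherwise. */
  static int die_get_const_value(Dwarf_Die *var_die, Dwarf_Sword *val)
  {
          Dwarf_Attribute attr;

          if (!dwarf_attr(var_die, DW_AT_const_value, &attr))
                  return -1;
          return dwarf_formsdata(&attr, val);
  }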
2019-11-18  perf probe: Support multiprobe event  (Masami Hiramatsu)
Support multiprobe event if the event is based on function and lines and kernel supports it. In this case, perf probe creates the first probe with an event, and tries to append following probes on that event, since those probes must be on the same source code line. Before this patch; # perf probe -a vfs_read:18 Added new events: probe:vfs_read_L18 (on vfs_read:18) probe:vfs_read_L18_1 (on vfs_read:18) You can now use it in all perf tools, such as: perf record -e probe:vfs_read_L18_1 -aR sleep 1 # After this patch (on multiprobe supported kernel) # perf probe -a vfs_read:18 Added new events: probe:vfs_read_L18 (on vfs_read:18) probe:vfs_read_L18 (on vfs_read:18) You can now use it in all perf tools, such as: perf record -e probe:vfs_read_L18 -aR sleep 1 # Committer testing: On a kernel that doesn't support multiprobe events, after this patch: # uname -a Linux quaco 5.3.8-200.fc30.x86_64 #1 SMP Tue Oct 29 14:46:22 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux # grep append /sys/kernel/debug/tracing/README be modified by appending '.descending' or '.ascending' to a can be modified by appending any of the following modifiers # # perf probe -a vfs_read:18 Added new events: probe:vfs_read_L18 (on vfs_read:18) probe:vfs_read_L18_1 (on vfs_read:18) You can now use it in all perf tools, such as: perf record -e probe:vfs_read_L18_1 -aR sleep 1 # perf probe -l probe:vfs_read_L18 (on vfs_read:18@fs/read_write.c) probe:vfs_read_L18_1 (on vfs_read:18@fs/read_write.c) # Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org> Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Ravi Bangoria <ravi.bangoria@linux.ibm.com> Cc: Steven Rostedt (VMware) <rostedt@goodmis.org> Cc: Tom Zanussi <tom.zanussi@linux.intel.com> Link: http://lore.kernel.org/lkml/157406475010.24476.586290752591512351.stgit@devnote2 Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2019-11-18  perf probe: Generate event name with line number  (Masami Hiramatsu)
Generate event name from function name with line number as <function>_L<line_number>. Note that this is only for the new event which is defined by the line number of function (except for line 0). If there is another event on same line, you have to use "-f" option. In that case, the new event has "_1" suffix. e.g. # perf probe -a kernel_read:2 Added new event: probe:kernel_read_L2 (on kernel_read:2) You can now use it in all perf tools, such as: perf record -e probe:kernel_read_L2 -aR sleep 1 But if we omit the line number or 0th line, it will have no suffix. # perf probe -a kernel_read:0 Added new event: probe:kernel_read (on kernel_read) You can now use it in all perf tools, such as: perf record -e probe:kernel_read -aR sleep 1 probe:kernel_read (on kernel_read@linux-5.0.0/fs/read_write.c) probe:kernel_read_L2 (on kernel_read:2@linux-5.0.0/fs/read_write.c) Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org> Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Ravi Bangoria <ravi.bangoria@linux.ibm.com> Cc: Steven Rostedt (VMware) <rostedt@goodmis.org> Cc: Tom Zanussi <tom.zanussi@linux.intel.com> Link: http://lore.kernel.org/lkml/157406474026.24476.2828897745502059569.stgit@devnote2 Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2019-11-18  perf probe: Do not show non-representative lines by perf-probe -L  (Masami Hiramatsu)
Since perf probe -L shows non representive lines, it can be mislead users where user can put probes. This prevents to show such non representive lines so that user can understand which lines user can probe. # perf probe -L kernel_read <kernel_read@/build/linux-pvZVvI/linux-5.0.0/fs/read_write.c:0> 0 ssize_t kernel_read(struct file *file, void *buf, size_t count, loff_t *pos) { 2 mm_segment_t old_fs; ssize_t result; old_fs = get_fs(); 6 set_fs(get_ds()); /* The cast to a user pointer is valid due to the set_fs() */ 8 result = vfs_read(file, (void __user *)buf, count, pos); 9 set_fs(old_fs); 10 return result; } EXPORT_SYMBOL(kernel_read); Committer testing: Before: # perf probe -L kernel_read <kernel_read@/usr/src/debug/kernel-5.3.fc30/linux-5.3.8-200.fc30.x86_64/fs/read_write.c:0> 0 ssize_t kernel_read(struct file *file, void *buf, size_t count, loff_t *pos) 1 { 2 mm_segment_t old_fs; 3 ssize_t result; 5 old_fs = get_fs(); 6 set_fs(KERNEL_DS); /* The cast to a user pointer is valid due to the set_fs() */ 8 result = vfs_read(file, (void __user *)buf, count, pos); 9 set_fs(old_fs); 10 return result; } EXPORT_SYMBOL(kernel_read); # See the 1, 3, 5 lines? They shouldn't be there, after this patch: # perf probe -L kernel_read <kernel_read@/usr/src/debug/kernel-5.3.fc30/linux-5.3.8-200.fc30.x86_64/fs/read_write.c:0> 0 ssize_t kernel_read(struct file *file, void *buf, size_t count, loff_t *pos) { 2 mm_segment_t old_fs; ssize_t result; old_fs = get_fs(); 6 set_fs(KERNEL_DS); /* The cast to a user pointer is valid due to the set_fs() */ 8 result = vfs_read(file, (void __user *)buf, count, pos); 9 set_fs(old_fs); 10 return result; } EXPORT_SYMBOL(kernel_read); # Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org> Reported-by: Arnaldo Carvalho de Melo <acme@redhat.com> Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Ravi Bangoria <ravi.bangoria@linux.ibm.com> Cc: Steven Rostedt (VMware) <rostedt@goodmis.org> Cc: Tom Zanussi <tom.zanussi@linux.intel.com> Link: http://lore.kernel.org/lkml/157406473064.24476.2913278267727587314.stgit@devnote2 Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2019-11-18  perf probe: Verify given line is a representative line  (Masami Hiramatsu)
Verify user given probe line is a representive line (which doesn't share the address with other lines or the line is the least line among the lines which shares same address), and if not, it shows what is the representive line. Without this fix, user can put a probe on the lines which is not a a representive line. But since this is not a representive line, perf probe -l shows a representive line number instead of user given line number. e.g. (put kernel_read:3, but listed as kernel_read:2) # perf probe -a kernel_read:3 Added new event: probe:kernel_read (on kernel_read:3) You can now use it in all perf tools, such as: perf record -e probe:kernel_read -aR sleep 1 # perf probe -l probe:kernel_read (on kernel_read:2@linux-5.0.0/fs/read_write.c) With this fix, perf probe doesn't allow user to put a probe on a representive line, and tell what is the representive line. # perf probe -a kernel_read:3 This line is sharing the addrees with other lines. Please try to probe at kernel_read:2 instead. Error: Failed to add events. Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org> Reported-by: Arnaldo Carvalho de Melo <acme@redhat.com> Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Ravi Bangoria <ravi.bangoria@linux.ibm.com> Cc: Steven Rostedt (VMware) <rostedt@goodmis.org> Cc: Tom Zanussi <tom.zanussi@linux.intel.com> Link: http://lore.kernel.org/lkml/157406472071.24476.14915451439785001021.stgit@devnote2 Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2019-11-18  perf probe: Show correct statement line number by perf probe -l  (Masami Hiramatsu)
The dwarf_getsrc_die() can return the line which is not a statement nor the least line number among the lines which shares same address. This can lead perf probe --list shows incorrect line number for probed address. To fix this, this introduces cu_getsrc_die() which returns only a statement line and which is the least line number (we call it the representive line for an address), and use it in cu_find_lineinfo(). Also, if the given address is the entry address of a real function, cu_find_lineinfo() returns the function declared line number instead of the start line number of the function body. For example, without this change perf probe -l shows incorrect line as below. # perf probe -a kernel_read:2 Added new event: probe:kernel_read (on kernel_read:2) You can now use it in all perf tools, such as: perf record -e probe:kernel_read -aR sleep 1 # perf probe -l probe:kernel_read (on kernel_read:1@linux-5.0.0/fs/read_write.c) With this fix, it shows correct line number as below; # perf probe -l probe:kernel_read (on kernel_read:2@linux-5.0.0/fs/read_write.c) Reported-by: Arnaldo Carvalho de Melo <acme@redhat.com> Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org> Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Ravi Bangoria <ravi.bangoria@linux.ibm.com> Cc: Steven Rostedt (VMware) <rostedt@goodmis.org> Cc: Tom Zanussi <tom.zanussi@linux.intel.com> Link: http://lore.kernel.org/lkml/157406471067.24476.17463149618465494448.stgit@devnote2 Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2019-11-18  x86/insn: Add some Intel instructions to the opcode map  (Adrian Hunter)
Add to the opcode map the following instructions: cldemote tpause umonitor umwait movdiri movdir64b enqcmd enqcmds encls enclu enclv pconfig wbnoinvd

For information about the instructions, refer to the Intel SDM May 2019 (325462-070US) and Intel Architecture Instruction Set Extensions May 2019 (319433-037).

The instruction decoding can be tested using the perf tools' "x86 instruction decoder - new instructions" test as follows:

  $ perf test -v "new " 2>&1 | grep -i cldemote
  Decoded ok: 0f 1c 00 cldemote (%eax)
  Decoded ok: 0f 1c 05 78 56 34 12 cldemote 0x12345678
  Decoded ok: 0f 1c 84 c8 78 56 34 12 cldemote 0x12345678(%eax,%ecx,8)
  Decoded ok: 0f 1c 00 cldemote (%rax)
  Decoded ok: 41 0f 1c 00 cldemote (%r8)
  Decoded ok: 0f 1c 04 25 78 56 34 12 cldemote 0x12345678
  Decoded ok: 0f 1c 84 c8 78 56 34 12 cldemote 0x12345678(%rax,%rcx,8)
  Decoded ok: 41 0f 1c 84 c8 78 56 34 12 cldemote 0x12345678(%r8,%rcx,8)

  $ perf test -v "new " 2>&1 | grep -i tpause
  Decoded ok: 66 0f ae f3 tpause %ebx
  Decoded ok: 66 0f ae f3 tpause %ebx
  Decoded ok: 66 41 0f ae f0 tpause %r8d

  $ perf test -v "new " 2>&1 | grep -i umonitor
  Decoded ok: 67 f3 0f ae f0 umonitor %ax
  Decoded ok: f3 0f ae f0 umonitor %eax
  Decoded ok: 67 f3 0f ae f0 umoni