Age | Commit message | Author |
|
Before all the different stack tracers were merged, the printed call
traces had an indicator of whether an entry can be considered reliable
or not: unreliable entries were put in braces, reliable ones were not.
Currently all lines contain these extra braces.
This patch restores the old behaviour by adding an extra "reliable"
parameter to the callback functions. Currently only show_trace makes
use of it.
Before:
[ 0.804751] Call Trace:
[ 0.804753] ([<000000000017d0e0>] try_to_wake_up+0x318/0x5e0)
[ 0.804756] ([<0000000000161d64>] create_worker+0x174/0x1c0)
After:
[ 0.804751] Call Trace:
[ 0.804753] ([<000000000017d0e0>] try_to_wake_up+0x318/0x5e0)
[ 0.804756] [<0000000000161d64>] create_worker+0x174/0x1c0
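A rough sketch of how the callback shape changes (assumed for
illustration, not quoted from the patch): the walker hands a "reliable"
flag to the callback for every frame, and show_trace's callback prints
unreliable entries in braces only.

static int show_address(void *data, unsigned long address, int reliable)
{
        if (reliable)
                printk(" [<%016lx>] %pSR \n", address, (void *)address);
        else
                printk("([<%016lx>] %pSR)\n", address, (void *)address);
        return 0;
}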
Fixes: 758d39ebd3d5 ("s390/dumpstack: merge all four stack tracers")
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
|
|
Now that the oprofile sampling code is gone there is only one user of
the sampling facility left. Therefore the reserve and release
functions can be removed.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Reviewed-by: Hendrik Brueckner <brueckner@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
|
|
This makes perf_callchain_{user,kernel}() receive the max stack
as context for the perf_callchain_entry, instead of accessing
the global sysctl_perf_event_max_stack.
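A minimal sketch of the resulting shape on an arch side (simplified,
s390-style; helper names are assumptions for this illustration): the
entry context carries the per-event limit, and perf_callchain_store()
stops storing once it is reached, so the arch code no longer reads
sysctl_perf_event_max_stack.

static int callchain_address(void *data, unsigned long address)
{
        struct perf_callchain_entry_ctx *entry = data;

        perf_callchain_store(entry, address);   /* bounded by entry->max_stack */
        return 0;
}

void perf_callchain_kernel(struct perf_callchain_entry_ctx *entry,
                           struct pt_regs *regs)
{
        if (user_mode(regs))
                return;
        dump_trace(callchain_address, entry, NULL, regs->gprs[15]);
}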
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Alexei Starovoitov <ast@kernel.org>
Cc: Brendan Gregg <brendan.d.gregg@gmail.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: He Kuang <hekuang@huawei.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Milian Wolff <milian.wolff@kdab.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vince Weaver <vincent.weaver@maine.edu>
Cc: Wang Nan <wangnan0@huawei.com>
Cc: Zefan Li <lizefan@huawei.com>
Link: http://lkml.kernel.org/n/tip-kolmn1yo40p7jhswxwrc7rrd@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
|
|
Signed-off-by: Adam Buchbinder <adam.buchbinder@gmail.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
|
|
We have four different stack tracers, of which three had bugs. So it's
time to merge them into a single stack tracer that allows specifying a
callback function which will be called for each step.
This patch changes behavior a bit:
- the "nosched" and "in_sched_functions" check within
  save_stack_trace_tsk worked only for the last stack frame within a
  context. Now the check is applied to each stack frame, as it should
  be.
- both the oprofile variant and the perf_events variant saved a return
  address twice if a zero back chain was detected, which indicates an
  interrupt frame. The new dump_trace function will instead call the
  oprofile and perf_events backends with the psw address that is
  contained within the corresponding pt_regs structure.
- the original show_trace and save_context_stack functions already used
  the psw address of the pt_regs structure if a zero back chain was
  detected. However, we now ignore the psw address if it is a user
  space address. After all, we trace the kernel stack and not the user
  space stack. This way we also get rid of the garbage user space
  address in warning and/or panic call traces.
So this should make life easier, since now there is only one stack
tracer left that we can break.
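A sketch of the unified interface this results in (prototypes assumed
from the description above, not quoted from the patch): every former
tracer becomes a small callback handed to one common walker.

typedef int (*dump_trace_func_t)(void *data, unsigned long address);

void dump_trace(dump_trace_func_t func, void *data,
                struct task_struct *task, unsigned long sp);

/* e.g. save_stack_trace() just feeds addresses into a struct stack_trace */
static int save_address(void *data, unsigned long address)
{
        struct stack_trace *trace = data;

        if (trace->nr_entries < trace->max_entries)
                trace->entries[trace->nr_entries++] = address;
        return 0;
}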
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
|
|
git commit dc7ee00d4771 ("s390: lowcore stack pointer offsets")
introduced a regression in regard to perf_callchain_kernel(). The
stack pointer for the asynchronous stack in the lowcore now has an
additional offset applied. This offset needs to be taken into account
in the calculation for the low and high address for the stack.
This bug was already partially fixed with 9cc5c206d9b4
("s390/dumpstack: fix address ranges for asynchronous and panic
stack"). This patch fixes it also for the perf_event code.
Fixes: dc7ee00d4771 ("s390: lowcore stack pointer offsets")
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
|
|
Yet another leftover from the 31 bit era. The usual operation
"y = x & PSW_ADDR_INSN" with the PSW_ADDR_INSN mask is a nop for
CONFIG_64BIT.
Therefore remove all usages and hope the code is a bit less confusing.
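For reference, the definitions that make the masking a no-op on 64 bit
look roughly like this:

#ifndef CONFIG_64BIT
#define PSW_ADDR_INSN           0x7fffffffUL            /* 31-bit address part */
#else
#define PSW_ADDR_INSN           0xffffffffffffffffUL    /* all bits set: x & mask == x */
#endif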
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Reviewed-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
|
|
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
|
|
Signed-off-by: Hendrik Brueckner <brueckner@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
|
|
This is shorter and should be used instead of the longer form
which checks for both possible config options.
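The log entry does not show the call site; assuming the usual pattern
(the CONFIG_KVM symbol is only an example here), the change looks like
this, with IS_ENABLED() covering both the built-in and the module case:

/* before */
#if defined(CONFIG_KVM) || defined(CONFIG_KVM_MODULE)

/* after */
#if IS_ENABLED(CONFIG_KVM)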
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
|
|
Also support the diagnostic-sampling function in addition to the
basic-sampling function. Diagnostic-sampling data entries contain
hardware-model-specific sampling data, and additional programs are
required to analyze the data.
To deliver diagnostic-sampling as well as basic-sampling data entries
to user space, introduce support for sampling "raw data". If this
particular perf sampling type (PERF_SAMPLE_RAW) is used, sampling data
entries are copied to user space. External programs can then analyze
the data.
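A hedged user-space sketch of requesting raw sampling data (the PMU
type value has to be read from /sys/devices/cpum_sf/type at runtime,
and the sample period below is only an example):

#include <linux/perf_event.h>
#include <string.h>

static void setup_raw_sampling(struct perf_event_attr *attr, __u32 cpum_sf_type)
{
        memset(attr, 0, sizeof(*attr));
        attr->size = sizeof(*attr);
        attr->type = cpum_sf_type;              /* from /sys/devices/cpum_sf/type */
        attr->sample_period = 10000;            /* example period */
        /* PERF_SAMPLE_RAW: copy the sample-data entries to user space */
        attr->sample_type = PERF_SAMPLE_IP | PERF_SAMPLE_RAW;
}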
Signed-off-by: Hendrik Brueckner <brueckner@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
|
|
The host-program-parameter (hpp) value of basic sample-data-entries designates
a SIE control block that is set by the LPP instruction in sie64a().
Non-zero values indicate guest samples, a value of zero indicates a host sample.
For perf samples, host and guest samples are distinguished using particular
PERF_MISC_* flags. The perf layer calls perf_misc_flags() to set the flags
based on the pt_regs content. For each sample-data-entry, the cpum_sf PMU
creates a pt_regs structure with the sample-data information. An additional
flag structure is added to easily distinguish between host and guest samples.
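A sketch of the additional flag structure (field names assumed for
illustration): the sampler marks each constructed pt_regs according to
the hpp value, so perf_misc_flags() does not have to reparse the
sample-data-entry.

struct perf_sf_sde_regs {
        unsigned long in_guest : 1;     /* non-zero hpp -> guest sample */
        unsigned long reserved : 63;
};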
Signed-off-by: Hendrik Brueckner <brueckner@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
|
|
Register a service level handler to report information about available
CPU-Measurement facilities.
Signed-off-by: Hendrik Brueckner <brueckner@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
|
|
Introduce reserve/release functions to share the sampling facility
between perf and oprofile.
Also improve error handling for the sampling facility support in perf.
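A sketch of the shared-ownership interface (prototypes assumed from the
description): whichever subsystem reserves the facility first owns it
until it calls release; the other one gets an error in the meantime.

/* returns 0 on success, or an error if the facility is already in use */
int perf_reserve_sampling(void);
void perf_release_sampling(void);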
Signed-off-by: Hendrik Brueckner <brueckner@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
|
|
Introduce a perf PMU, "cpum_sf", to support the CPU-Measurement
Sampling Facility. You can control the sampling facility through
this perf PMU's interfaces. Perf sampling events are created for
hardware samples.
For details about the CPU-Measurement Sampling Facility, see
"The Load-Program-Parameter and the CPU-Measurement Facilities" (SA23-2260).
Signed-off-by: Hendrik Brueckner <brueckner@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
|
|
Provide PMU event attributes for supported counters and export their symbolic
names to the sysfs "events" directory.
See the /sys/devices/cpum_cf/events/ directory for a list of available counters.
Note that you might require counter set authorizations for the LPAR to use them.
Signed-off-by: Hendrik Brueckner <brueckner@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
|
|
Signed-off-by: Hendrik Brueckner <brueckner@linux.vnet.ibm.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
|
|
The perf_event code references sie_exit even if KVM is not available.
So add proper ifdefs to fix this one:
arch/s390/built-in.o: In function `sys_call_table_emu':
(.rodata+0x2b98): undefined reference to `sie_exit'
arch/s390/built-in.o: In function `sys_call_table_emu':
(.rodata+0x2ba0): undefined reference to `sie_exit'
make: *** [vmlinux] Error 1
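The guard roughly looks like this (helper name and exact condition
assumed for illustration): sie_exit is only referenced when KVM support
is actually built.

static bool is_in_guest(struct pt_regs *regs)
{
        if (user_mode(regs))
                return false;
#if defined(CONFIG_KVM) || defined(CONFIG_KVM_MODULE)
        return instruction_pointer(regs) == (unsigned long) &sie_exit;
#else
        return false;
#endif
}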
Reported-by: Zhouping Liu <zliu@redhat.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
|
|
This patch is based on an original patch by David Hildenbrand.
The perf core implementation calls architecture-specific code in order
to ask for specific information for a particular sample:

perf_instruction_pointer()
When perf core code asks for the instruction pointer,
architecture-specific code must detect if a KVM guest was running when
the sample was taken. A sample can be associated with a KVM guest when
the PSW supervisor state bit is set and the PSW instruction pointer
part contains the address of 'sie_exit'.
A KVM guest's instruction pointer information is then retrieved via
the gpsw entry of the SIE control block.

perf_misc_flags()
perf core code calls this function in order to associate the kernel
vs. user state information with a particular sample.
Architecture-specific code must also first detect if a KVM guest was
running at the time the sample was taken.
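A simplified sketch of the two hooks (is_in_guest(), guest_is_user_mode()
and sie_block() are helper names assumed for this illustration):

unsigned long perf_instruction_pointer(struct pt_regs *regs)
{
        /* report the guest PSW address for samples taken inside SIE */
        return is_in_guest(regs) ? sie_block(regs)->gpsw.addr
                                 : instruction_pointer(regs);
}

unsigned long perf_misc_flags(struct pt_regs *regs)
{
        if (is_in_guest(regs))
                /* the guest's own PSW state decides user vs. kernel */
                return guest_is_user_mode(regs) ? PERF_RECORD_MISC_GUEST_USER
                                                : PERF_RECORD_MISC_GUEST_KERNEL;
        return user_mode(regs) ? PERF_RECORD_MISC_USER
                               : PERF_RECORD_MISC_KERNEL;
}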
Signed-off-by: Heinz Graalfs <graalfs@linux.vnet.ibm.com>
Reviewed-by: Hendrik Brueckner <brueckner@linux.vnet.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
|
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
|
|
Add a perf PMU to access the CPU-measurement counter facility CPUM CF.
CPUM CF provides multiple counter sets for measuring generic,
problem-state, and crypto activities. An extended counter set for the
IBM System z10 and IBM z196 mainframes is also available.
Counters from the basic and problem-state counter set are mapped to
generic perf hardware events. Other counters are accessible through
raw events.
For a list of available counter sets and counters, see:
- The Load-Program-Parameter and the CPU-Measurement Facilities (SA23-2260)
- The CPU-Measurement Facility Extended Counters Definition for
z10 and z196 (SA23-2261)
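A hedged user-space illustration of the two access paths (the raw
counter number is an arbitrary example; the real numbers come from the
documents above):

#include <linux/perf_event.h>

/* generic event: mapped by cpum_cf onto a basic-set counter */
struct perf_event_attr cycles = {
        .size   = sizeof(struct perf_event_attr),
        .type   = PERF_TYPE_HARDWARE,
        .config = PERF_COUNT_HW_CPU_CYCLES,
};

/* raw event: counter number taken from the counter definitions */
struct perf_event_attr raw_counter = {
        .size   = sizeof(struct perf_event_attr),
        .type   = PERF_TYPE_RAW,
        .config = 0x80,         /* assumed example counter number */
};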
Reviewed-by: Jan Glauber <jang@linux.vnet.ibm.com>
Signed-off-by: Hendrik Brueckner <brueckner@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
|