|
A PVR value of 0x0F000005 means we are arch v3.00 compliant (i.e. POWER9).
Acked-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Russell Currey <ruscur@russell.cc>
[mpe: Don't set num_pmcs, so we keep the PMU fields from the raw entry]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
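A minimal sketch of the kind of cputable entry this describes; only the PVR value comes from the text above, the remaining fields are illustrative assumptions:
    {   /* ISA v3.00 compliant mode, i.e. POWER9 "architected" */
        .pvr_mask     = 0xffffffff,
        .pvr_value    = 0x0f000005,
        .cpu_name     = "POWER9 (architected)",
        .cpu_features = CPU_FTRS_POWER9,   /* illustrative */
        /* num_pmcs deliberately not set, per the note above */
    },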
|
|
Clean up to use the is_kernel_addr() macro.
Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
|
|
Since power9 does not support the FAB_*_MATCH bits in MMCR1,
avoid these checks for power9. To do so, factor out the code in
isa207_get_constraint() so that these checks are retained only
for power8.
Also update the comment in the power9-pmu raw event encoding
layout to remove FAB_*_MATCH.
Finally, for power9, add an additional check for threshold
events when adding the thresh mask and value in
isa207_get_constraint().
Fixes: 7ffd948fae4c ("powerpc/perf: factor out power8 pmu functions")
Fixes: 18201b204286 ("powerpc/perf: power9 raw event format encoding")
Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.vnet.ibm.com>
Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
|
|
PMC5 on POWER9 DD1 may not provide the right counts in all
sampling scenarios, so prefer the PM_INST_DISP event in PMC2
or PMC3 instead.
Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
|
|
Since PM_INST_DISP includes speculative instructions, the
dispatch count can vary considerably depending on the workload.
Hence, as an alternative for counting completed instructions,
program the PM_INST_DISP event into the MMCR* registers but use
the Instruction Counter register value.
Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
|
|
Since PM_INST_CMPL may not provide the right counts in all
sampling scenarios on power9 DD1, use PM_INST_DISP instead.
Also update generic instruction sampling to do the same.
Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
|
|
Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
|
|
Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
|
|
Factor out the power8 event_alternative function to share
the code with power9.
Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
|
|
Indexed-count remove for memory hotplug guarantees that a contiguous block
of <count> LMBs beginning at a specified <index> will be unassigned (NOT
that <count> LMBs will be removed). Because of Qemu's per-DIMM memory
management, the removal of a contiguous block of memory currently
requires a series of individual calls. Indexed-count remove reduces
this series to a single call.
Signed-off-by: Sahil Mehta <sahilmehta17@gmail.com>
Signed-off-by: Nathan Fontenot <nfont@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
|
|
Indexed-count add for memory hotplug guarantees that a contiguous block
of <count> LMBs beginning at a specified <drc index> will be assigned;
any LMBs in this range that are not already assigned will be DLPAR added.
Because of Qemu's per-DIMM memory management, the addition of a contiguous
block of memory currently requires a series of individual calls to add
each LMB in the block. Indexed-count add reduces this series of calls to
a single call for the entire block.
Signed-off-by: Sahil Mehta <sahilmehta17@gmail.com>
Signed-off-by: Nathan Fontenot <nfont@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
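A simplified sketch of the indexed-count idea, assuming a hypothetical per-LMB helper; the real code in hotplug-memory.c walks the device-tree LMB array rather than calling such a helper:
    static int dlpar_memory_add_by_ic(u32 count, u32 drc_index)
    {
        u32 i;
        int rc = 0;

        /* One request covers <count> LMBs starting at <drc index>,
         * instead of <count> separate add requests. */
        for (i = 0; i < count && !rc; i++)
            rc = dlpar_add_lmb_by_drc_index(drc_index + i); /* hypothetical helper */

        return rc;
    }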
|
|
We can't sensibly take a trap at this point. So, blacklist these
symbols.
Reported-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
|
|
slb_finish_load and slb_finish_load_1T are both only used within
slb_low.S, so make them local symbols.
This makes the code a little clearer, as it's more obvious neither is
intended to be an entry point from arbitrary other code, only the uses
in this file.
It also prevents them being used with kprobes and other tracing tools,
which is good because we're not able to safely take traps at these
locations, so making them local symbols avoids us needing to blacklist
them.
Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
|
|
We broke the build when CONFIG_MEMORY_HOTREMOVE=n:
arch/powerpc/platforms/pseries/hotplug-memory.c:821:8: error: implicit
declaration of function 'dlpar_memory_readd_by_index'
Add a dummy to fix it.
Fixes: e70d59700fc3 ("powerpc/pseries: Introduce memory hotplug READD operation")
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
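The dummy is along these lines (the exact signature and return value are assumptions based on the error message above):
    #ifndef CONFIG_MEMORY_HOTREMOVE
    /* Keep callers compiling when hot-remove support is not configured. */
    static inline int dlpar_memory_readd_by_index(u32 drc_index)
    {
        return -EOPNOTSUPP;
    }
    #endif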
|
|
There are quite a few entries in asm-offsets.c which look like:
DEFINE(REG, STACK_FRAME_OVERHEAD+offsetof(struct pt_regs, reg));
So define a macro to do it once.
Signed-off-by: Rashmica Gupta <rashmicy@gmail.com>
[mpe: Rename to STACK_PT_REGS_OFFSET for excruciating explicitness]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
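A sketch of the helper and its effect (illustrative; the in-tree definition may differ in minor details):
    #define STACK_PT_REGS_OFFSET(sym, val) \
            DEFINE(sym, STACK_FRAME_OVERHEAD + offsetof(struct pt_regs, val))

    /* so an entry such as */
    DEFINE(GPR0, STACK_FRAME_OVERHEAD + offsetof(struct pt_regs, gpr[0]));
    /* becomes */
    STACK_PT_REGS_OFFSET(GPR0, gpr[0]);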
|
|
A lot of entries in asm-offsets.c look like this:
DEFINE(TI_FLAGS, offsetof(struct thread_info, flags));
But there is a common macro, OFFSET, which makes this cleaner:
OFFSET(TI_flags, thread_info, flags)
So use it.
Signed-off-by: Rashmica Gupta <rashmicy@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
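For reference, the generic helper (defined in include/linux/kbuild.h) is roughly:
    #define OFFSET(sym, str, mem) \
            DEFINE(sym, offsetof(struct str, mem))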
|
|
In all the years it's been in the tree it had never been used by
anything - it would instantly trigger BUG_ON() in fs/fcntl.c due to
bogus band argument (ie. POLLIN not POLL_IN) passed to kill_fasync().
Since nobody had ever used it in ten years, let's just rip it out and be
done with that.
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
|
|
WARN_ONCE() takes a condition and a format string. We were passing a
constant string as the condition, and the function name as the format
string. It would work, but the message would be just the function name.
Fix it by just using WARN_ONCE() directly instead of if (x) WARN_ONCE().
Noticed-by: Geliang Tang <geliangtang@163.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
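A hedged before/after sketch of the pattern being fixed; the condition and message are placeholders, not the actual code:
    /* before: a string ends up in the condition slot, __func__ in the format slot */
    if (cond)
        WARN_ONCE("something went wrong", __func__);

    /* after: pass the condition and a real format string to WARN_ONCE() */
    WARN_ONCE(cond, "%s: something went wrong\n", __func__);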
|
|
We're supporting surprise hotplug on PCI slots behind root ports
or PCIe switch downstream ports which don't claim the capability
in the hardware register (offset: PCIe cap + PCI_EXP_SLTCAP). The
PEX8718 is one example. For those PCI slots, the PDC (Presence
Detection Change) event isn't reliable and the underlying (skiboot)
firmware has the best judgement.
This masks the PDC event when skiboot requests it, via the
"ibm,slot-broken-pdc" property in the PCI slot's device-tree node.
Reported-by: Hank Chang <hankmax0000@gmail.com>
Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
Tested-by: Willie Liauw <williel@supermicro.com.tw>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
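A rough sketch of how the property might be consulted; of_property_read_bool() is the standard device-tree helper, but the flag and surrounding structure are assumptions:
    /* In the slot probe path, mask PDC events if skiboot marked the slot. */
    if (of_property_read_bool(dn, "ibm,slot-broken-pdc"))
        slot->flags |= PNV_PHP_FLAG_BROKEN_PDC;   /* hypothetical flag */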
|
|
After updating ppc-dis.c, ppc-opc.c and ppc.h the following changes were
made to enable compilation and working of xmon:
1. Remove all disassembler_info
2. Use xmon's printf/print_address to output data and addresses
respectively.
3. All bfd_* types and casts have been removed.
4. Optimizations related to opcd_indices have been removed.
5. The dialect is set based on cpu features.
6. PPC_OPCODE_CLASSIC is no longer supported in the new
disassembler.
7. VLE opcode parsing and printing has been stripped.
8. Coding style conventions used for those routines have
been retained, and they do not match our CodingStyle.
9. The highest supported dialect is POWER9.
10. Defined ATTRIBUTE_UNUSED in ppc-dis.c.
11. Defined _(x) in ppc-dis.c.
Finally, we remove the dependency on BROKEN so that XMON_DISASSEMBLY can
be enabled again.
Signed-off-by: Balbir Singh <bsingharora@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
|
|
The following commit-ids from the binutils project were applied on the
xmon branch and relicensed with the permission of the authors under
GPLv2 for the following files:
ppc-opc.c
ppc-dis.c
ppc.h
Working off of binutils commit 65b650b4c746 we have now moved up to
binutils commit a5721ba270dd.
Some commit logs have been taken verbatim, some are summarized for ease
of understanding.
Here is a summary of the commits:
33e8d5ac613d PPC7450 New. (powerpc_opcodes): Use it in dcba.
c3d65c1ced61 New opcodes and mask
8dbcd839b1bb Instruction Sorting
91eb7075e370 (powerpc_opcodes): Fix the first two operands of dquaiq.
548b1dcfcbab ppc-opc.c (powerpc_opcodes): Remove the dcffix and dcffix.
930bb4cfae30 Support optional L form mtmsr.
de866fccd87d (powerpc_opcodes): Order and format.
19a6653ce8c6 ppc e500mc support
fa452fa6833c (ppc_cpu_t): New typedef.
c8187e1509b2 (parse_cpu): Handle -m464.
081ba1b3c08b Define. (PPC_OPERAND_FSL, PPC_OPERAND_FCR, PPC_OPERAND_UDI)
9b4e57660d38 Rename altivec_or_spe to retain_flags. Handle -mvsx and -mpower7.
899d85beadd0 (powerpc_opcodes): Enable rfci, mfpmr, mtpmr for e300.
e1c93c699b7d (extract_sprg): Correct operand range check.
2f3bb96af796 (powerpc_init_dialect): Do not set PPC_OPCODE_BOOKE
1cb0a7674666 (ppc_setup_opcodes): Remove PPC_OPCODE_NOPOWER4 test
21169fcfadfa (print_insn_powerpc): Skip insn if it is deprecated
80890a619b85 ("dcbt", "dcbtst")
0e55be1624c2 ("lfdepx", "stfdepx")
066be9f7bd8e (parse_cpu): Extend -mpower7 to accept power7 and isel instructions.
c72ab5f2c55d (powerpc_opcodes): Reorder the opcode table so that instructions
69fe9ce501f5 (ppc_parse_cpu): New function. (powerpc_init_dialect)
e401b04ca7cd (powerpc_opcodes) <"dcbzl">: Merge the POWER4 and E500MC entries.
70dc4e324b9a (powerpc_init_dialect): Do not choose a default dialect due to -many/-Many.
858d7a6db20b (powerpc_opcodes) <"tlbilxlpid", "tlbilxpid", "tlbilxva", "tlbilx"
bdc7fcfe59f1 (powerpc_macros <extrdi>): Allow n+b of 64
e0d602ecffb0 (md_show_usage): Document -mpcca2
b961e85b6ebe (ppc_cpu_t): Typedef to uint64_t
8765b5569284 (powerpc_opcodes): Remove support for the the "lxsdux", "lxvd2ux"
634b50f2a623 Rename "ppca2" to "a2"
9fe54b1ca1c0 (md_show_usage): Document -m476
0dc9305793c8 Add bfd_mach_ppc_e500mc64
ce3d2015b21b Define. bfd/ * archures.c (bfd_mach_ppc_titan)
cdc51b0748c4 Add -mpwr4, -mpwr5, -mpwr5x, -mpwr6 and -mpwr7
63d0fa4e9e57 Add PPC_OPCODE_E500MC for "e500mc64"
cee62821d472 New Define. ("dccci"): Enable for PPCA2
85d4ac0b3c0b Correct wclr encoding.
51b5d4a8c5e5 (powerpc_opcodes): Enable divdeu, devweu, divde, divwe, divdeuo
e01d869a3be2 (md_assemble): Emit APUinfo section for PPC_OPCODE_E500
09a8ad8d8f56 (powerpc_opcodes): Revert deprecation of mfocrf, mtcrf and mtocrf on EFS.
f2bae120dcef (PPC_OPCODE_COMMON): Expand comment.
81a0b7e2ae09 (PPCPWR2): Add PPC_OPCODE_COMMON. (powerpc_opcodes): Add "subc"
bdc70b4a03fd (PPC_OPCODE_32, PPC_OPCODE_BOOKE64, PPC_OPCODE_CLASSIC)
7102e95e4943 (ppc_set_cpu): Cast PPC_OPCODE_xxx to ppc_cpu_t before inverting
f383de6633cb (powerpc_opcodes) [lswx,lswi,stswx,stswi]: Deprecate on E500 and E500MC
6b069ee70de3 Remove PPC_OPCODE_PPCPS
2f7f77101279 (powerpc_opcodes): Enable icswx for POWER7
989993d80a97 (insert_nbi, insert_rbx, FRAp, FRBp, FRSp, FRTp, NBI, RAX, RBX)
a08fc94222d1 <drrndq, drrndq., dtstexq, dctqpq, dctqpq., dctfixq, dctfixq.
8ebac3aae962 (ISA_V2): Define and use for relevant BO field tests
aea77599d0db Add PPC_OPCODE_ALTIVEC2, PPC_OPCODE_E6500, PPC_OPCODE_TMR
b240011aba98 (disassemble_init_for_target): Handle ppc init.
d668828207c2 (powerpc_opcd_indices): Bump array size
b9c361e0ad33 Add support for PowerPC VLE.
e1dad58d73dc (has_tls_reloc, has_tls_get_addr_call, has_vle_insns, is_ppc_vle)
df7b86aa4cb6 Add check that sysdep.h has been included before
98c76446ea6b (extract_sprg): Use ALLOW8_SPRG to include VLE.
a4ebc835cbcb (powerpc_macros): Add entries for e_extlwi to e_clrlslwi
94caa966375d (has_vle_insns, is_ppc_vle): Delete
c7a8dbf91f37 Change RA to RA0
d908c8af5a1d Add necessary casts for printing integer values
03edbe3bfb93 Add/remove PPCVLE for some 32-bit insns
9f6a6cc022e1 <xnop, yield, mdoio, mdoom>: New extended mnemonics
588925d06545 <RSQ, RTQ>: Use PPC_OPERAND_GPR
8baf7b78b5d9 <"lswx">: Use RAX for the second and RBX for the third operand
e67ed0e885d6 Changed opcode for vabsdub, vabsduh, vabsduw, mviwsplt
fb048c26f19f (UIMM4, UIMM3, UIMM2, VXVA_MASK, VXVB_MASK, VXVAVB_MASK, VXVDVA_MASK
382c72e90441 (VXASHB_MASK): New define
c7a5aa9c64fc (ppc_opts) <altivec>: Use PPC_OPCODE_ALTIVEC2
ab4437c3224f <vcfpsxws>: Fix opcode spelling
62082a42b9cd "lfdp" and "stfdp" use DS offset.
776fc41826bb (ppc_parse_cpu): Update prototype
943d398f4c52 (insert_sci8, extract_sci8): Rewrite.
5817ffd1f81c New define (PPC_OPCODE_HTM/POWER8)
9f0682fe89d9 (extract_vlesi): Properly sign extend
c0637f3af686 (powerpc_init_dialect): Set default dialect to power8.
58ae08f29af8 (powerpc_opcodes): Add tdui, twui, tdu, twu, tui, tu
4f6ffcd38d90 (powerpc_init_dialect): Use ppc_parse_cpu() to set dialect
4b95cf5c0c75 Update copyright years
a47622ac1bad Allow both signed and unsigned fields in PowerPC cmpli insn
12e87fac5c76 ppc: enable msgclr and msgsnd on Power8
8514e4db84cc Don't deprecate powerpc mftb insn
db76a70026ab Power4 should treat mftb as extended mfspr mnemonic
b90efa5b79ac ChangeLog rotatation and copyright year update
c4e676f19656 powerpc: Add slbfee. instruction
27c49e9a8fc0 powerpc: Only initialise opcode indices once
4fff86c517ab DCBT_EO): New define
4bc0608a8b69 Fix some PPC assembler errors
dc302c00611b Add hwsync extended mnemonic
99a2c5612124 Remove unused MTMSRD_L macro and re-add accidentally deleted comment
11a0cf2ec0ed Allow for optional operands with non-zero default values
7b9341139a69 PPC sync instruction accepts invalid and incompatible operands
ef5a96d564a2 Remove ppc860, ppc750cl, ppc7450 insns from common ppc
43e65147c07b Remove trailing spaces in opcodes
6dca4fd141fd Add dscr and ctrl SPR mnemonics
b6518b387185 Fix compile time warnings generated when compiling with clang
36f7a9411dcd Patches for illegal ppc 500 instructions
a680de9a980e Add assembler, disassembler and linker support for power9
dd2887fc3de4 Reorder some power9 insns
b817670b52b7 Enable 2 operand form of powerpc mfcr with -many
6f2750feaf28 Copyright update for binutils
afa8d4054b8e Delete opcodes that have been removed from ISA 3.0
1178da445ad5 Accept valid one byte signed and unsigned values for the IMM8 operand
e43de63c8fd1 Fix powerpc subis range
514e58b72633 Correct "Fix powerpc subis range"
19dfcc89e8d9 Add support for new POWER ISA 3.0 instructions
1fe0971e41a4 add more extern C
026122a67044 Re-add support for lbarx, lharx, stbcx. and sthcx. insns back to the E6500 cpu
14b57c7c6a53 PowerPC VLE
6fd3a02da554 Add support for yet some more new ISA 3.0 instructions
dfdaec14b0db Fix some PowerPC VLE BFD issues and add some PowerPC VLE instructions
fd486b633e87 Modify POWER9 support to match final ISA 3.0 documentation
a5721ba270dd Disallow 3-operand cmp[l][i] for ppc64
This updates the disassembly capabilities to add support for newer
processors.
Signed-off-by: Balbir Singh <bsingharora@gmail.com>
[mpe: Reformat commit list for brevity]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
|
|
Upgrade ppc-opc.c, ppc-dis.c and ppc.h to the versions belonging to the
following binutils commit:
65b650b4c7463f4508bed523c24ab0031a5ae5cd
* ppc-dis.c (print_insn_powerpc): Don't skip all operands after
setting skip_optional.
That is the last version of those files that were licensed under GPLv2.
This leaves the code in a state that does not compile, because the
binutils code needs to be tweaked to work in the kernel. We don't fix
that in this commit, because we want to import more binutils changes in
subsequent commits. So for now we mark XMON_DISASSEMBLY as BROKEN, so it
can't be built.
Signed-off-by: Balbir Singh <bsingharora@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
|
|
We do the ptesync at the start of the TLB flush, and we are sure a pte
update will be followed by a tlbflush. Hence we can skip the ptesync in
the pte update helpers.
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Tested-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
|
|
This helps us to do some optimization for the application exit case,
where we can skip the DD1-style pte update sequence.
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Tested-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
|
|
In the kernel we follow the below sequence in different code paths:
pte = ptep_get_and_clear(ptep)
....
set_pte_at(ptep, pte)
We do that for mremap, autonuma protection updates and softdirty clearing. This
implies our optimization of skipping a tlb flush when clearing a pte is
not valid, because on a DD1 system that follow-up set_pte_at() will be done
without doing the required tlbflush. Fix that by always doing the DD1-style pte
update irrespective of the new_pte value. In a later patch we will optimize the
application exit case.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Tested-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
|
|
With radix, we can get a page fault with the DSISR_PROTFAULT value set in case
of a PROT_NONE or autonuma mapping. The PROT_NONE case is handled by the vma
check, where we consider the access bad. For autonuma we should fall through
and fix up the access mask correctly.
Without this patch we trigger the WARN_ON() on radix. This patch moves that
WARN_ON() within a radix_enabled() check. I also moved the WARN_ON() outside
the if condition, making it apply to all types of faults (exec/write/read). It
is also conditionalized for book3s, because BOOK3E can also get a PROTFAULT to
handle the D/I cache sync.
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
|
|
Currently the xmon data-breakpoint feature is broken.
Whenever a watchpoint match occurs, hw_breakpoint_handler() is called by
do_break() via the notifier chain mechanism. If the watchpoint was
registered by xmon, hw_breakpoint_handler() won't find any associated
perf_event and returns immediately with NOTIFY_STOP. Consequently,
do_break() also returns without notifying xmon.
Solve this by returning NOTIFY_DONE, rather than NOTIFY_STOP, when
hw_breakpoint_handler() does not find any perf_event associated with the
matched watchpoint; this tells the core code to continue calling the other
breakpoint handlers, including the xmon one.
Cc: stable@vger.kernel.org
Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
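A sketch of the change in hw_breakpoint_handler(), with surrounding code elided and variable names assumed:
    bp = __this_cpu_read(bp_per_reg);
    if (!bp) {
        /* No perf_event: not ours, let other handlers (e.g. xmon) see it. */
        rc = NOTIFY_DONE;    /* previously NOTIFY_STOP */
        goto out;
    }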
|
|
The recently merged HPT (Hash Page Table) resize support broke the build
when BOOK3S_64=n (ie. 32-bit or 64-bit Book3E) and MEMORY_HOTPLUG=y:
arch/powerpc/mm/mem.o: In function `.arch_add_memory':
(.text+0x4e4): undefined reference to `.resize_hpt_for_hotplug'
Fix it by adding a dummy version.
Fixes: 438cc81a41e8 ("powerpc/pseries: Automatically resize HPT for memory hot add/remove")
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
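The dummy is along the lines of the following sketch; placement and exact signature are assumptions:
    #ifndef CONFIG_PPC_BOOK3S_64
    /* No hash page table, so nothing to resize. */
    static inline void resize_hpt_for_hotplug(unsigned long new_mem_size) { }
    #endif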
|
|
Merge the topic branch we're sharing with the kvm-ppc tree.
|
|
Currently the build breaks if CMA=n and SPAPR_TCE_IOMMU=y:
arch/powerpc/mm/mmu_context_iommu.c: In function ‘mm_iommu_get’:
arch/powerpc/mm/mmu_context_iommu.c:193:42: error: ‘MIGRATE_CMA’ undeclared (first use in this function)
if (get_pageblock_migratetype(page) == MIGRATE_CMA) {
^~~~~~~~~~~
Fix it by using the existing is_migrate_cma_page(), which evaluates to
false when CMA=n.
Fixes: 2e5bbb5461f1 ("KVM: PPC: Book3S HV: Migrate pinned pages out of CMA")
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
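The fix amounts to the following substitution (sketch):
    /* before: MIGRATE_CMA is not declared when CONFIG_CMA=n */
    if (get_pageblock_migratetype(page) == MIGRATE_CMA) {

    /* after: is_migrate_cma_page() is always available and is false when CMA=n */
    if (is_migrate_cma_page(page)) {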
|
|
If we enable RADIX but disable HUGETLBFS, the build breaks with:
arch/powerpc/mm/pgtable-radix.c:557:7: error: implicit declaration of function 'pmd_huge'
arch/powerpc/mm/pgtable-radix.c:588:7: error: implicit declaration of function 'pud_huge'
Fix it by stubbing those functions when HUGETLBFS=n.
Fixes: 4b5d62ca17a1 ("powerpc/mm: add radix__remove_section_mapping()")
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
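A minimal sketch of the stubs; the header they live in and their exact form are assumptions:
    #ifndef CONFIG_HUGETLB_PAGE
    static inline int pmd_huge(pmd_t pmd) { return 0; }
    static inline int pud_huge(pud_t pud) { return 0; }
    #endif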
|
|
Fix typo in "hotplug_delay" parameter description. This allows modinfo
to match the help text to the parameter.
Signed-off-by: Wei Yongjun <weiyongjun1@huawei.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
|
|
... as the generic weak variant will do.
Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
|
|
The final paragraph of the help text is reversed. We want to enable
this option by default, and disable it if the toolchain has a working
-mprofile-kernel.
Fixes: 8c50b72a3b4f ("powerpc/ftrace: Add Kconfig & Make glue for mprofile-kernel")
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
|
|
Currently the opal_exit tracepoint usually shows the opcode as 0:
<idle>-0 [047] d.h. 635.654292: opal_entry: opcode=63
<idle>-0 [047] d.h. 635.654296: opal_exit: opcode=0 retval=0
kopald-1209 [019] d... 636.420943: opal_entry: opcode=10
kopald-1209 [019] d... 636.420959: opal_exit: opcode=0 retval=0
This is because we incorrectly load the opcode into r0 before calling
__trace_opal_exit(), whereas it expects the opcode in r3 (first function
parameter). In fact we are leaving the retval in r3, so opcode and
retval will always show the same value.
Instead load the opcode into r3, resulting in:
<idle>-0 [040] d.h. 636.618625: opal_entry: opcode=63
<idle>-0 [040] d.h. 636.618627: opal_exit: opcode=63 retval=0
Fixes: c49f63530bb6 ("powernv: Add OPAL tracepoints")
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
|
|
Currently we get a warning that _mcount() can't be versioned:
WARNING: EXPORT symbol "_mcount" [vmlinux] version generation failed, symbol will not be versioned.
Add a prototype to asm-prototypes.h to fix it.
The prototype is not really correct, mcount() is not a normal function,
it has a special ABI. But for the purpose of versioning it doesn't
matter.
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
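The addition to asm-prototypes.h is essentially just a bare declaration, something like:
    /* _mcount has a special ABI; this prototype exists only so the export
     * can be versioned. */
    void _mcount(void);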
|
|
The generic implementation of of_node_to_nid() is EXPORT_SYMBOL, added
in commit 298535c00a2c ("of, numa: Add NUMA of binding
implementation.").
The powerpc implementation added in commit 953039c8df7b ("[PATCH]
powerpc: Allow devices to register with numa topology") is
EXPORT_SYMBOL_GPL.
This creates an inconsistency for of_node_to_nid() callers across
architectures.
Update the powerpc implementation to be exported consistently with the
generic implementation.
Signed-off-by: Shailendra Singh <shailendras@nvidia.com>
Reviewed-by: Andy Ritger <aritger@nvidia.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
|
|
A kprobe placed on kretprobe_trampoline() during boot can be
optimized, since the instruction at the probe point is a 'nop'.
Signed-off-by: Anju T Sudhakar <anju@linux.vnet.ibm.com>
Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
|
|
The current kprobe infrastructure uses an unconditional trap instruction
to probe a running kernel. Optprobes allow kprobes to replace the trap
with a branch instruction to a detour buffer. The detour buffer contains
instructions to create an in-memory pt_regs. The detour buffer also has a
call to optimized_callback(), which in turn calls the pre_handler(). After
the execution of the pre-handler, a call is made for instruction
emulation. The NIP is determined in advance through dummy instruction
emulation, and a branch instruction to that NIP is created at the end of
the trampoline.
To address the branch-range limitation of the POWER architecture, the
detour buffer slot is allocated from a reserved area. For the time
being, 64KB is reserved in memory for this purpose.
Instructions which can be emulated using analyse_instr() are the
candidates for optimization. Before optimizing, ensure that the address
range between the allocated detour buffer and the instruction being
probed is within +/- 32MB.
Signed-off-by: Anju T Sudhakar <anju@linux.vnet.ibm.com>
Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
|
|
Fix two issues with kprobes.h on BE which were exposed with the
optprobes work:
- one, having to do with a missing include for linux/module.h for
MODULE_NAME_LEN -- this didn't show up previously since the only
users of kprobe_lookup_name were in kprobes.c, which included
linux/module.h through other headers, and
- two, with a missing const qualifier for a local variable which ends
up referring to a string literal. Again, this is unique to how
kprobe_lookup_name is being invoked in optprobes.c.
Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
|
|
To permit the use of a relative branch instruction on powerpc, the target
address has to be relatively nearby, since the address is specified in an
immediate field (a 24-bit field) in the instruction opcode itself. Here
nearby refers to 32MB on either side of the current instruction.
This patch verifies that the target address is within the +/- 32MB
range.
Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Signed-off-by: Anju T Sudhakar <anju@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
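A sketch of the range check, assuming the field layout described above (24-bit, word-aligned immediate giving a reach of +/- 32MB); the in-tree helper may differ in name and exact bounds:
    static bool offset_in_branch_range(long offset)
    {
        /* 26-bit signed reach (24-bit field, word aligned): -32MB .. +32MB - 4 */
        return offset >= -0x2000000 && offset <= 0x1fffffc && !(offset & 0x3);
    }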
|
|
Introduce __PPC_SH64() as a 64-bit variant to encode shift field in some
of the shift and rotate instructions operating on double-words. Convert
some of the BPF instruction macros to use the same.
Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
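A sketch of the encoding helper in terms of the existing __PPC_SH(); the actual definition in ppc-opcode.h may differ slightly:
    /* Doubleword shift/rotate forms split the 6-bit shift amount: the low
     * five bits go in the usual SH field and bit 5 is encoded separately. */
    #define __PPC_SH64(s)   (__PPC_SH(s) | (((s) & 0x20) >> 4))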
|
|
We've now implemented code in the pseries platform to use the new PAPR
interface to allow resizing the hash page table (HPT) at runtime.
This patch uses that interface to automatically attempt to resize the HPT
when memory is hot added or removed. This tries to always keep the HPT at
a reasonable size for our current memory size.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
|
|
The hypervisor needs to know a guest is capable of using the HPT resizing
PAPR extension in order to take full advantage of it for memory hotplug.
If the hypervisor knows the guest is HPT resize aware, it can size the
initial HPT based on the initial guest RAM size, relying on the guest to
resize the HPT when more memory is hot-added. Without this, the hypervisor
must size the HPT for the maximum possible guest RAM, which can lead to
a huge waste of space if the guest never actually expands to that maximum
size.
This patch advertises the guest's support for HPT resizing via the
ibm,client-architecture-support OF interface. We use bit 5 of byte 6 of
option vector 5 for this purpose, as defined in the PAPR ACR "HPT
resizing option".
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Anshuman Khandual <khandual@linux.vnet.ibm.com>
Reviewed-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
|
|
This adds support for using two hypercalls to change the size of the
main hash page table while running as a PAPR guest. For now these
hypercalls are only in experimental qemu versions.
The interface is two part: first H_RESIZE_HPT_PREPARE is used to
allocate and prepare the new hash table. This may be slow, but can be
done asynchronously. Then, H_RESIZE_HPT_COMMIT is used to switch to the
new hash table. This requires that no CPUs be concurrently updating the
HPT, and so must be run under stop_machine().
This also adds a debugfs file which can be used to manually control
HPT resizing, for testing purposes.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Paul Mackerras <paulus@samba.org>
[mpe: Rename the debugfs file to "hpt_order"]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
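A rough sketch of the two-phase flow described above; wrapper names, arguments and error handling are simplified assumptions:
    /* Phase 1: may be slow, can be retried if the hypervisor is busy. */
    rc = plpar_resize_hpt_prepare(0, new_shift);
    if (rc != H_SUCCESS)
        return -EIO;

    /* Phase 2: no CPU may touch the HPT concurrently, so run under stop_machine(). */
    rc = stop_machine(pseries_lpar_resize_hpt_commit, &new_shift, NULL);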
|
|
This adds the hypercall numbers and wrapper functions for the hash page
table resizing hypercalls.
These hypercall numbers are defined in the PAPR ACR "HPT resizing
option".
It also adds a new firmware feature flag to track the presence of the
HPT resizing calls.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
|
|
We don't need asm/xics.h
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
|
|
Recent versions of OPAL can provide names for the various OPAL interrupts,
so let's use them. This also modernises the code that fetches the
interrupt array to use the helpers provided by the generic code instead
of hand-parsing the property.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
[mpe: Free irqs on error, check allocation of names, consolidate error
handling, whitespace.]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
|
|
On some CAPP errors we see console messages that print unknown HMIs for
which CAPI recovery is in progress. This patch fixes this by printing the
correct error info for HMIs generated due to CAPP recovery.
Signed-off-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>
Tested-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
|
|
These are common on bare metal machines, so put them in the defconfig.
This adds 216KB to the vmlinux size.
Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
|