path: root/arch
2017-07-12  kernel/watchdog: introduce arch_touch_nmi_watchdog()  (Nicholas Piggin)

For architectures that define HAVE_NMI_WATCHDOG, instead of having them
provide the complete touch_nmi_watchdog() function, just have them provide
arch_touch_nmi_watchdog().

This gives the generic code more flexibility in implementing this function,
and arch implementations don't miss out on touching the softlockup watchdog
or other generic details.

Link: http://lkml.kernel.org/r/20170616065715.18390-3-npiggin@gmail.com
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Don Zickus <dzickus@redhat.com>
Reviewed-by: Babu Moger <babu.moger@oracle.com>
Tested-by: Babu Moger <babu.moger@oracle.com> [sparc]
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
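A minimal sketch of the resulting split, assuming the usual config guards
(illustrative, not the literal patch):

    /* Generic code keeps the softlockup handling; only the hard-NMI
     * part is delegated to the architecture. */
    #if defined(CONFIG_HAVE_NMI_WATCHDOG) || defined(CONFIG_HARDLOCKUP_DETECTOR)
    void arch_touch_nmi_watchdog(void);
    #else
    static inline void arch_touch_nmi_watchdog(void) { }
    #endif

    static inline void touch_nmi_watchdog(void)
    {
        arch_touch_nmi_watchdog();      /* arch-specific hard-NMI part */
        touch_softlockup_watchdog();    /* generic detail no arch now misses */
    }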
2017-07-12  powerpc/fadump: use the correct VMCOREINFO_NOTE_SIZE for phdr  (Xunlei Pang)

vmcoreinfo_max_size is the size of vmcoreinfo_data; the correct size to use
here is that of vmcoreinfo_note, i.e. VMCOREINFO_NOTE_SIZE. As explained in
commit 77019967f06b ("kdump: fix exported size of vmcoreinfo note"), it
should not affect the actual function, but we had better fix it anyway; the
change is safe and backward compatible.

After this, we can get rid of the variable vmcoreinfo_max_size and use the
corresponding macros directly; fewer variables means more safety for
vmcoreinfo operations.

[xlpang@redhat.com: fix build warning]
Link: http://lkml.kernel.org/r/1494830606-27736-1-git-send-email-xlpang@redhat.com
Link: http://lkml.kernel.org/r/1493281021-20737-2-git-send-email-xlpang@redhat.com
Signed-off-by: Xunlei Pang <xlpang@redhat.com>
Reviewed-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>
Reviewed-by: Dave Young <dyoung@redhat.com>
Cc: Hari Bathini <hbathini@linux.vnet.ibm.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Eric Biederman <ebiederm@xmission.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: Michael Holzheu <holzheu@linux.vnet.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2017-07-12  kexec: move vmcoreinfo out of the kernel's .bss section  (Xunlei Pang)

As Eric said, "what we need to do is move the variable vmcoreinfo_note out
of the kernel's .bss section. And modify the code to regenerate and keep
this information in something like the control page. Definitely something
like this needs a page all to itself, and ideally far away from any other
kernel data structures. I clearly was not watching closely the day someone
decided to keep this silly thing in the kernel's .bss section."

This patch allocates extra pages for these vmcoreinfo_XXX variables. One
advantage is that it improves the safety of vmcoreinfo, because vmcoreinfo
is now kept far away from other kernel data structures.

Link: http://lkml.kernel.org/r/1493281021-20737-1-git-send-email-xlpang@redhat.com
Signed-off-by: Xunlei Pang <xlpang@redhat.com>
Tested-by: Michael Holzheu <holzheu@linux.vnet.ibm.com>
Reviewed-by: Juergen Gross <jgross@suse.com>
Suggested-by: Eric Biederman <ebiederm@xmission.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Dave Young <dyoung@redhat.com>
Cc: Hari Bathini <hbathini@linux.vnet.ibm.com>
Cc: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
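A hedged sketch of the allocation this describes; the init function name is
from the kexec code, but the exact error handling is an assumption:

    static int __init crash_save_vmcoreinfo_init(void)
    {
        /* dedicated pages, no longer .bss storage */
        vmcoreinfo_data = (unsigned char *)get_zeroed_page(GFP_KERNEL);
        if (!vmcoreinfo_data)
            return -ENOMEM;

        vmcoreinfo_note = alloc_pages_exact(VMCOREINFO_NOTE_SIZE,
                                            GFP_KERNEL | __GFP_ZERO);
        if (!vmcoreinfo_note) {
            free_page((unsigned long)vmcoreinfo_data);
            vmcoreinfo_data = NULL;
            return -ENOMEM;
        }

        /* ... append the usual VMCOREINFO_* records ... */
        return 0;
    }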
2017-07-12  KVM: SVM: Enable Virtual VMLOAD VMSAVE feature  (Janakarajan Natarajan)

Enable the Virtual VMLOAD VMSAVE feature. This is done by setting bit 1 at
position B8h in the vmcb.

The processor must have nested paging enabled, be in 64-bit mode and have
support for the Virtual VMLOAD VMSAVE feature for the bit to be set in the
vmcb.

Signed-off-by: Janakarajan Natarajan <Janakarajan.Natarajan@amd.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
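What setting that bit looks like, as a sketch; the mask and feature-flag
names follow this series' naming (virt_ext is the renamed lbr_ctl field,
see below) and should be treated as assumptions:

    #define VIRTUAL_VMLOAD_VMSAVE_ENABLE_MASK   BIT(1)  /* bit 1 at VMCB offset 0xB8 */

    /* while initializing the VMCB control area */
    if (npt_enabled && boot_cpu_has(X86_FEATURE_V_VMSAVE_VMLOAD))
        svm->vmcb->control.virt_ext |= VIRTUAL_VMLOAD_VMSAVE_ENABLE_MASK;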
2017-07-12  KVM: SVM: Add Virtual VMLOAD VMSAVE feature definition  (Janakarajan Natarajan)

Define a new cpufeature definition for Virtual VMLOAD VMSAVE.

Signed-off-by: Janakarajan Natarajan <Janakarajan.Natarajan@amd.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
2017-07-12  KVM: SVM: Rename lbr_ctl field in the vmcb control area  (Janakarajan Natarajan)

Rename the lbr_ctl variable to better reflect the purpose of the field -
provide support for virtualization extensions.

Signed-off-by: Janakarajan Natarajan <Janakarajan.Natarajan@amd.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
2017-07-12  KVM: SVM: Prepare for new bit definition in lbr_ctl  (Janakarajan Natarajan)

The lbr_ctl variable in the vmcb control area is used to enable or disable
Last Branch Record (LBR) virtualization. However, this is to be done using
only bit 0 of the variable. To correct this and to prepare for a new
feature, change the current usage to work only on a particular bit.

Signed-off-by: Janakarajan Natarajan <Janakarajan.Natarajan@amd.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
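A minimal sketch of the bit-restricted usage (mask name assumed; the field
is shown under its post-rename name virt_ext):

    #define LBR_CTL_ENABLE_MASK     BIT(0)

    static void svm_enable_lbrv(struct vcpu_svm *svm)
    {
        svm->vmcb->control.virt_ext |= LBR_CTL_ENABLE_MASK;
    }

    static void svm_disable_lbrv(struct vcpu_svm *svm)
    {
        svm->vmcb->control.virt_ext &= ~LBR_CTL_ENABLE_MASK;
    }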
2017-07-12  KVM: SVM: handle singlestep exception when skipping emulated instructions  (Ladi Prosek)

kvm_skip_emulated_instruction handles the singlestep debug exception, which
is something we almost always want. This commit (specifically the change in
rdmsr_interception) makes the debug.flat KVM unit test pass on AMD.

Two call sites still call skip_emulated_instruction directly:

 * In svm_queue_exception, where it's used only for moving the rip forward

 * In task_switch_interception, which is analogous to handle_task_switch
   in VMX

Signed-off-by: Ladi Prosek <lprosek@redhat.com>
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
2017-07-12  KVM: x86: take slots_lock in kvm_free_pit  (Radim Krčmář)

kvm_vm_release() did not have slots_lock when calling
kvm_io_bus_unregister_dev() and this went unnoticed until 4a12f9517728
("KVM: mark kvm->busses as rcu protected") added dynamic checks. Luckily,
there should be no race at that point:

  =============================
  WARNING: suspicious RCU usage
  4.12.0.kvm+ #0 Not tainted
  -----------------------------
  ./include/linux/kvm_host.h:479 suspicious rcu_dereference_check() usage!

   lockdep_rcu_suspicious+0xc5/0x100
   kvm_io_bus_unregister_dev+0x173/0x190 [kvm]
   kvm_free_pit+0x28/0x80 [kvm]
   kvm_arch_sync_events+0x2d/0x30 [kvm]
   kvm_put_kvm+0xa7/0x2a0 [kvm]
   kvm_vm_release+0x21/0x30 [kvm]

Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
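The shape of the fix, as a sketch; kvm_free_pit() unregisters both the PIT
and the speaker device, with the rest of the teardown abbreviated:

    void kvm_free_pit(struct kvm *kvm)
    {
        struct kvm_pit *pit = kvm->arch.vpit;

        if (pit) {
            mutex_lock(&kvm->slots_lock);   /* required by kvm_io_bus_unregister_dev() */
            kvm_io_bus_unregister_dev(kvm, KVM_PIO_BUS, &pit->dev);
            kvm_io_bus_unregister_dev(kvm, KVM_PIO_BUS, &pit->speaker_dev);
            mutex_unlock(&kvm->slots_lock);
            /* ... remaining teardown ... */
        }
    }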
2017-07-12  kvm: vmx: Properly handle machine check during VM-entry  (Jim Mattson)

vmx_complete_atomic_exit should call kvm_machine_check for any VM-entry
failure due to a machine-check event. Such an exit should be recognized
solely by its basic exit reason (i.e. the low 16 bits of the VMCS exit
reason field). None of the other VMCS exit information fields contain valid
information when the VM-exit is due to "VM-entry failure due to
machine-check event".

Signed-off-by: Jim Mattson <jmattson@google.com>
Reviewed-by: Xiao Guangrong <xiaoguangrong@tencent.com>
[Changed VM_EXIT_INTR_INFO condition to better describe its reason.]
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
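A sketch of the check this implies, masking the exit reason down to its
basic (low 16-bit) part; the other cases the function handles are omitted:

    static void vmx_complete_atomic_exit(struct vcpu_vmx *vmx)
    {
        /* only the basic exit reason is valid for this kind of exit */
        u16 basic_exit_reason = (u16)vmx->exit_reason;

        if (basic_exit_reason == EXIT_REASON_MCE_DURING_VMENTRY)
            kvm_machine_check();
        /* ... NMI/exception handling elided ... */
    }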
2017-07-12  KVM: x86: update master clock before computing kvmclock_offset  (Radim Krčmář)

kvm master clock usually has a different frequency than the kernel boot
clock. This is not a problem until the master clock is updated; the update
uses the current kernel boot clock to compute the new kvm clock, which
erases any kvm clock cycles that might have built up due to the frequency
difference over a long period.

KVM_SET_CLOCK is one of the places where we can safely update the master
clock, as the guest-visible clock is going to be shifted anyway.

The problem with the current code is that it updates the kvm master clock
after updating the offset. If the master clock was enabled before calling
KVM_SET_CLOCK, then it might have built up a significant delta from the
kernel boot clock. In the worst case, the time set by userspace would be
shifted by so much that it couldn't have been set at any point during
KVM_SET_CLOCK.

To fix this, move kvm_gen_update_masterclock() before computing
kvmclock_offset, which means that the master clock and kernel boot clock
will be sufficiently close together.

Another solution would be to replace get_kvmclock_ns() with
"ktime_get_boot_ns() + ka->kvmclock_offset", which is marginally more
accurate, but would break symmetry with KVM_GET_CLOCK.

Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
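A condensed sketch of the reordered KVM_SET_CLOCK path (ioctl plumbing and
flag checks omitted):

    case KVM_SET_CLOCK: {
        struct kvm_clock_data user_ns;
        u64 now_ns;

        /* ... copy user_ns in from userspace ... */

        kvm_gen_update_masterclock(kvm);    /* now runs before the offset math */
        now_ns = get_kvmclock_ns(kvm);
        kvm->arch.kvmclock_offset += user_ns.clock - now_ns;
        break;
    }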
2017-07-12  kvm: nVMX: Shadow "high" parts of shadowed 64-bit VMCS fields  (Jim Mattson)

Inconsistencies result from shadowing only accesses to the full 64 bits of
a 64-bit VMCS field, but not shadowing accesses to the high 32 bits of the
field. The "high" part of a 64-bit field should be shadowed whenever the
full 64-bit field is shadowed.

Signed-off-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2017-07-12  kvm: nVMX: Fix nested_vmx_check_msr_bitmap_controls  (Jim Mattson)

Allow the L1 guest to specify the last page of addressable guest physical
memory for an L2 MSR permission bitmap. Also remove the vmcs12_read_any()
check that should never fail.

Fixes: 3af18d9c5fe95 ("KVM: nVMX: Prepare for using hardware MSR bitmap")
Signed-off-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2017-07-12  kvm: nVMX: Validate the I/O bitmaps on nested VM-entry  (Jim Mattson)

According to the SDM, if the "use I/O bitmaps" VM-execution control is 1,
bits 11:0 of each I/O-bitmap address must be 0. Neither address should set
any bits beyond the processor's physical-address width.

Signed-off-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
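A sketch of both SDM checks; the helper name is an assumption:

    static bool page_address_valid(struct kvm_vcpu *vcpu, gpa_t gpa)
    {
        /* bits 11:0 clear, and within the physical-address width */
        return PAGE_ALIGNED(gpa) && !(gpa >> cpuid_maxphyaddr(vcpu));
    }

    static int nested_vmx_check_io_bitmap_controls(struct kvm_vcpu *vcpu,
                                                   struct vmcs12 *vmcs12)
    {
        if (!nested_cpu_has(vmcs12, CPU_BASED_USE_IO_BITMAPS))
            return 0;

        if (!page_address_valid(vcpu, vmcs12->io_bitmap_a) ||
            !page_address_valid(vcpu, vmcs12->io_bitmap_b))
            return -EINVAL;

        return 0;
    }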
2017-07-12  kvm: nVMX: Don't set vmcs12 to "launched" when VMLAUNCH fails  (Jim Mattson)

The VMCS launch state is not set to "launched" unless the VMLAUNCH actually
succeeds. VMLAUNCH failure includes VM-exits with bit 31 set.

Note that this change does not address the general problem that a failure
to launch/resume vmcs02 (i.e. vmx->fail) is not handled correctly.

Signed-off-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2017-07-12  SPARC64: Fix sun4v DMA panic  (Tushar Dave)

64-bit DMA is only supported on sun4v machines equipped with ATU IOMMU
hardware. Commit b02c2b0bfd7ae ("sparc: remove arch specific dma_supported
implementations") introduced code that incorrectly allows dma_supported()
to succeed for a 64-bit DMA mask even if the system doesn't have an ATU
IOMMU, which results in a panic. Fix it.

Reported-by: Meelis Roos <mroos@linux.ee>
Signed-off-by: Tushar Dave <tushar.n.dave@oracle.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-07-12  powerpc/64: Fix atomic64_inc_not_zero() to return an int  (Michael Ellerman)

Although it's not documented anywhere, there is an expectation that
atomic64_inc_not_zero() returns a result which fits in an int. This is the
behaviour implemented on all arches except powerpc.

This has caused at least one bug in practice, in the percpu-refcount code,
where the long result from our atomic64_inc_not_zero() was truncated to an
int leading to lost references and stuck systems. That was worked around
in that code in commit 966d2b04e070 ("percpu-refcount: fix reference leak
during percpu-atomic transition").

To the best of my grepping abilities there are no other callers in-tree
which truncate the value, but we should fix it anyway. Because the breakage
is subtle and potentially very harmful I'm also tagging it for stable.

Code generation is largely unaffected because in most cases the callers are
just using the result for a test anyway. In particular the case of fget()
that was mentioned in commit a6cf7ed5119f ("powerpc/atomic: Implement
atomic*_inc_not_zero") generates exactly the same code.

Fixes: a6cf7ed5119f ("powerpc/atomic: Implement atomic*_inc_not_zero")
Cc: stable@vger.kernel.org # v3.4
Noticed-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
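A toy illustration of the truncation hazard (not the percpu-refcount code
itself; the long return value reflects the pre-fix powerpc behaviour):

    atomic64_t v = ATOMIC64_INIT(0xffffffffULL);

    long ret = atomic64_inc_not_zero(&v);   /* pre-fix powerpc: a long whose
                                             * low 32 bits can be zero, e.g.
                                             * 0x100000000 */
    int ok = ret;                           /* truncates to 0, so a successful
                                             * increment is misread as failure */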
2017-07-12  powerpc: Fix emulation of mfocrf in emulate_step()  (Anton Blanchard)

From POWER4 onwards, mfocrf() only places the specified CR field into the
destination GPR, and the rest of it is set to 0. The PowerPC AS from
version 3.0 now requires this behaviour.

The emulation code currently puts the entire CR into the destination GPR.
Fix it.

Fixes: 6888199f7fe5 ("[POWERPC] Emulate more instructions in software")
Cc: stable@vger.kernel.org # v2.6.22+
Signed-off-by: Anton Blanchard <anton@samba.org>
Acked-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2017-07-12  powerpc: Fix emulation of mcrf in emulate_step()  (Anton Blanchard)

The mcrf emulation code was using the CR field number directly as the shift
value, without taking into account that CR fields are numbered from 0-7
starting at the high bits. That meant it was looking at the CR fields in
the reverse order.

Fixes: cf87c3f6b647 ("powerpc: Emulate icbi, mcrf and conditional-trap instructions")
Cc: stable@vger.kernel.org # v3.18+
Signed-off-by: Anton Blanchard <anton@samba.org>
Acked-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
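The numbering detail as a small self-contained helper (illustrative, not
the patch itself): field n occupies bits (7 - n) * 4 up to (7 - n) * 4 + 3
of the CR.

    /* extract 4-bit CR field n (0 = most significant) from the 32-bit CR */
    static unsigned int cr_field(unsigned long ccr, int n)
    {
        return (ccr >> ((7 - n) * 4)) & 0xf;
    }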
2017-07-11  Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/sparc  (Linus Torvalds)

Pull sparc fixes from David Miller:

 - Fix symbol version generation for assembler on sparc, from
   Nagarathnam Muthusamy.

 - Fix compound page handling in gup_huge_pmd(), from Nitin Gupta.

* git://git.kernel.org/pub/scm/linux/kernel/git/davem/sparc:
  sparc64: Fix gup_huge_pmd
  Adding the type of exported symbols
  sed regex in Makefile.build requires line break between exported symbols
  Adding asm-prototypes.h for genksyms to generate crc
2017-07-12  powerpc/perf: Add POWER9 alternate PM_RUN_CYC and PM_RUN_INST_CMPL events  (Anton Blanchard)

Similar to POWER8, POWER9 can count run cycles and run instructions
completed on more than one PMU.

Signed-off-by: Anton Blanchard <anton@samba.org>
Acked-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2017-07-11  Merge branch 'for-next' of git://git.kernel.org/pub/scm/linux/kernel/git/gerg/m68knommu  (Linus Torvalds)

Pull m68knommu update from Greg Ungerer:
 "Only a single change, to remove old Kconfig options from defconfigs"

* 'for-next' of git://git.kernel.org/pub/scm/linux/kernel/git/gerg/m68knommu:
  m68k: defconfig: Cleanup from old Kconfig options
2017-07-11  xtensa: move generic-y of exported headers to uapi/asm/Kbuild  (Masahiro Yamada)

Since commit fcc8487d477a ("uapi: export all headers under uapi
directories"), all (and only) headers under uapi directories are exported,
but asm-generic wrappers are still exceptions.

To complete de-coupling the uapi from kernel headers, move generic-y of
exported headers to uapi/asm/Kbuild.

With this change, "make headers_install" will just need to parse
uapi/asm/Kbuild to build up exported headers.

Also, move "generic-y += kprobes.h" up in order to keep the entries sorted.

Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
2017-07-11  unicore32: move generic-y of exported headers to uapi/asm/Kbuild  (Masahiro Yamada)

Since commit fcc8487d477a ("uapi: export all headers under uapi
directories"), all (and only) headers under uapi directories are exported,
but asm-generic wrappers are still exceptions.

To complete de-coupling the uapi from kernel headers, move generic-y of
exported headers to uapi/asm/Kbuild.

With this change, "make headers_install" will just need to parse
uapi/asm/Kbuild to build up exported headers.

Also, move "generic-y += kprobes.h" up in order to keep the entries sorted.

Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
2017-07-11  tile: move generic-y of exported headers to uapi/asm/Kbuild  (Masahiro Yamada)

Since commit fcc8487d477a ("uapi: export all headers under uapi
directories"), all (and only) headers under uapi directories are exported,
but asm-generic wrappers are still exceptions.

To complete de-coupling the uapi from kernel headers, move generic-y of
exported headers to uapi/asm/Kbuild.

With this change, "make headers_install" will just need to parse
uapi/asm/Kbuild to build up exported headers.

Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
2017-07-11  sparc: move generic-y of exported headers to uapi/asm/Kbuild  (Masahiro Yamada)

Since commit fcc8487d477a ("uapi: export all headers under uapi
directories"), all (and only) headers under uapi directories are exported,
but asm-generic wrappers are still exceptions.

To complete de-coupling the uapi from kernel headers, move generic-y of
exported headers to uapi/asm/Kbuild.

With this change, "make headers_install" will just need to parse
uapi/asm/Kbuild to build up exported headers.

Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
2017-07-11  sh: move generic-y of exported headers to uapi/asm/Kbuild  (Masahiro Yamada)

Since commit fcc8487d477a ("uapi: export all headers under uapi
directories"), all (and only) headers under uapi directories are exported,
but asm-generic wrappers are still exceptions.

To complete de-coupling the uapi from kernel headers, move generic-y of
exported headers to uapi/asm/Kbuild.

With this change, "make headers_install" will just need to parse
uapi/asm/Kbuild to build up exported headers.

Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
2017-07-11  parisc: move generic-y of exported headers to uapi/asm/Kbuild  (Masahiro Yamada)

Since commit fcc8487d477a ("uapi: export all headers under uapi
directories"), all (and only) headers under uapi directories are exported,
but asm-generic wrappers are still exceptions.

To complete de-coupling the uapi from kernel headers, move generic-y of
exported headers to uapi/asm/Kbuild.

With this change, "make headers_install" will just need to parse
uapi/asm/Kbuild to build up exported headers.

Also, move "generic-y += kprobes.h" up in order to keep the entries sorted.

Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
2017-07-11  openrisc: move generic-y of exported headers to uapi/asm/Kbuild  (Masahiro Yamada)

Since commit fcc8487d477a ("uapi: export all headers under uapi
directories"), all (and only) headers under uapi directories are exported,
but asm-generic wrappers are still exceptions.

To complete de-coupling the uapi from kernel headers, move generic-y of
exported headers to uapi/asm/Kbuild.

With this change, "make headers_install" will just need to parse
uapi/asm/Kbuild to build up exported headers.

Also, move "generic-y += kprobes.h" up in order to keep the entries sorted.

Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Acked-by: Stafford Horne <shorne@gmail.com>
2017-07-11  nios2: move generic-y of exported headers to uapi/asm/Kbuild  (Masahiro Yamada)

Since commit fcc8487d477a ("uapi: export all headers under uapi
directories"), all (and only) headers under uapi directories are exported,
but asm-generic wrappers are still exceptions.

To complete de-coupling the uapi from kernel headers, move generic-y of
exported headers to uapi/asm/Kbuild.

With this change, "make headers_install" will just need to parse
uapi/asm/Kbuild to build up exported headers.

Also, move "generic-y += kprobes.h" up in order to keep the entries sorted.

Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Reviewed-by: Tobias Klauser <tklauser@distanz.ch>
2017-07-11  nios2: remove unneeded arch/nios2/include/(generated/)asm/signal.h  (Masahiro Yamada)

Currently, NIOS2 has three signal.h files under arch/nios2/include:

  [1] arch/nios2/include/asm/signal.h
  [2] arch/nios2/include/uapi/asm/signal.h
  [3] arch/nios2/include/generated/asm/signal.h

[3] is build-time generated by scripts/Makefile.asm-generic. However, the
-I$(srctree)/arch/$(hdr-arch)/include search path is listed before
-I$(objtree)/arch/$(hdr-arch)/include/generated in LINUXINCLUDE, so [1] is
always included instead of [3]. Remove [3], which is never included.

If we look at [1], it just includes [2]. So, [1] can be removed as well.

Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Reviewed-by: Tobias Klauser <tklauser@distanz.ch>
2017-07-11  powerpc/perf: Fix SDAR_MODE value for continuous sampling on Power9  (Madhavan Srinivasan)

In case of continuous sampling (non-marked), the code currently sets
MMCRA[SDAR_MODE] to 0b01 (Update on TLB miss) for Power9 DD1. On DD2 and
later it copies the sdar_mode value from the event code, which for most
events is 0b00 (No updates).

However we must set a non-zero value for SDAR_MODE when doing continuous
sampling, so honor the event code, unless it's zero, in which case we use
0b01 (Update on TLB miss).

Fixes: 78b4416aa249 ("powerpc/perf: Handle sdar_mode for marked event in power9")
Cc: stable@vger.kernel.org # v4.11+
Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
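A sketch of the resulting selection logic; macro and feature names are
assumptions based on the power9 PMU code:

    static u64 mmcra_sdar_mode(u64 event)
    {
        u64 mode = p9_SDAR_MODE(event) << MMCRA_SDAR_MODE_SHIFT;

        /* honor the event code unless it encodes 0b00 (no updates) */
        if (!cpu_has_feature(CPU_FTR_POWER9_DD1) && mode)
            return mode;

        return MMCRA_SDAR_MODE_TLB;     /* 0b01: update SDAR on TLB miss */
    }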
2017-07-11  MIPS: Fix MIPS I ISA /proc/cpuinfo reporting  (Maciej W. Rozycki)

Correct a commit 515a6393dbac ("MIPS: kernel: proc: Add MIPS R6 support to
/proc/cpuinfo") regression that caused MIPS I systems to show no ISA levels
supported in /proc/cpuinfo, e.g.:

  system type             : Digital DECstation 2100/3100
  machine                 : Unknown
  processor               : 0
  cpu model               : R3000 V2.0 FPU V2.0
  BogoMIPS                : 10.69
  wait instruction        : no
  microsecond timers      : no
  tlb_entries             : 64
  extra interrupt vector  : no
  hardware watchpoint     : no
  isa                     :
  ASEs implemented        :
  shadow register sets    : 1
  kscratch registers      : 0
  package                 : 0
  core                    : 0
  VCED exceptions         : not available
  VCEI exceptions         : not available

and similarly exclude `mips1' from the ISA list for any processors below
MIPSr1. This is because the condition to show `mips1' on has been made
`cpu_has_mips_r1' rather than newly-introduced `cpu_has_mips_1'. Use the
correct condition then.

Fixes: 515a6393dbac ("MIPS: kernel: proc: Add MIPS R6 support to /proc/cpuinfo")
Signed-off-by: Maciej W. Rozycki <macro@linux-mips.org>
Reviewed-by: James Hogan <james.hogan@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: stable@vger.kernel.org # 3.19+
Patchwork: https://patchwork.linux-mips.org/patch/16758/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
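In show_cpuinfo() the change amounts to something like this (sketch; the
surrounding output code is abbreviated):

    seq_printf(m, "isa\t\t\t:");
    if (cpu_has_mips_1)             /* was: cpu_has_mips_r1 */
        seq_printf(m, " mips1");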
2017-07-11  MIPS: Fix minimum alignment requirement of IRQ stack  (Matt Redfearn)

Commit db8466c581cc ("MIPS: IRQ Stack: Unwind IRQ stack onto task stack")
erroneously set the initial stack pointer of the IRQ stack to a value with
a 4-byte alignment. The MIPS32 ABI requires that the minimum stack
alignment is 8 bytes, and the MIPS64 ABIs (n32/n64) require 16-byte
minimum alignment.

Fix IRQ_STACK_START such that it leaves space for the dummy stack frame
(containing the interrupted task's kernel stack pointer) while also meeting
the minimum alignment requirements.

Fixes: db8466c581cc ("MIPS: IRQ Stack: Unwind IRQ stack onto task stack")
Reported-by: Darius Ivanauskas <dasilt@yahoo.com>
Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com>
Cc: Chris Metcalf <cmetcalf@mellanox.com>
Cc: Petr Mladek <pmladek@suse.com>
Cc: Aaron Tomlin <atomlin@redhat.com>
Cc: Jason A. Donenfeld <jason@zx2c4.com>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/16760/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
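A hedged sketch of the corrected constant (the exact expression in the
patch may differ): leave 16 bytes at the top for the dummy frame, which
also satisfies the strictest (n32/n64) 16-byte requirement.

    #define IRQ_STACK_SIZE      THREAD_SIZE
    #define IRQ_STACK_START     (IRQ_STACK_SIZE - 16)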
2017-07-11  MIPS: generic: Support MIPS Boston development boards  (Paul Burton)

Add support for the MIPS Boston development board to generic kernels, which
essentially amounts to:

 - Adding the device tree source for the MIPS Boston board.

 - Adding a Kconfig fragment which enables the appropriate drivers for the
   MIPS Boston board.

With these changes in place generic kernels will support the board by
default, and kernels with only the drivers needed for Boston enabled can be
configured by setting BOARDS=boston during configuration. For example:

  $ make ARCH=mips 64r6el_defconfig BOARDS=boston

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Reviewed-by: James Hogan <james.hogan@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/16485/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2017-07-11  MIPS: DTS: img: Don't attempt to build-in all .dtb files  (Paul Burton)

When building a FIT image we may want the kernel to build multiple .dtb
files, but we don't want to build them all into the kernel binary as object
files since they'll instead be included in the FIT image. Commit
daa10170da27 ("MIPS: DTS: img: add device tree for Marduk board") however
created arch/mips/boot/dts/img/Makefile with a line that builds any enabled
.dtb files into the kernel.

Remove this & build the pistachio object specifically, in preparation for
adding .dtb targets which we don't want to build into the kernel.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: Rahul Bedarkar <rahulbedarkar89@gmail.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/16484/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2017-07-11  MIPS: Traced negative syscalls should return -ENOSYS  (James Hogan)

If a negative system call number is used when system call tracing is
enabled, syscall_trace_enter() will return that negative system call number
without having written the return value and error flag into the pt_regs.

The caller then treats it as a cancelled system call and assumes that the
return value and error flag are already written, leaving the negative
system call number in the return register ($v0), and the 4th system call
argument in the error register ($a3).

Add a special case to detect this at the end of syscall_trace_enter(), to
set the return value to error -ENOSYS when this happens.

Fixes: d218af78492a ("MIPS: scall: Always run the seccomp syscall filters")
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/16653/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
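A sketch of the special case at the end of syscall_trace_enter();
surrounding tracing/seccomp handling is elided:

    asmlinkage long syscall_trace_enter(struct pt_regs *regs, long syscall)
    {
        /* ... ptrace/seccomp/audit handling ... */

        /* a negative number was never dispatched, so fill in the error
         * return instead of leaking $v0/$a3 as-is */
        if (syscall < 0)
            syscall_set_return_value(current, regs, -ENOSYS, 0);

        return syscall;
    }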
2017-07-11  MIPS: Correct forced syscall errors  (James Hogan)

When the system call return value is forced to be an error (for example due
to SECCOMP_RET_ERRNO), syscall_set_return_value() puts the error code in
the return register $v0 and -1 in the error register $a3. However normally
executed system calls put 1 in the error register rather than -1, so fix
syscall_set_return_value() to be consistent with that.

I don't anticipate that anything would have been broken by this, since the
most natural way to check the error register on MIPS would be a conditional
branch if the error register is [not] equal to zero (bnez or beqz).

Fixes: 1d7bf993e073 ("MIPS: ftrace: Add support for syscall tracepoints.")
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/16652/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
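The consistent convention, sketched (positive error code in $v0, 1 rather
than -1 in $a3):

    static inline void syscall_set_return_value(struct task_struct *task,
                                                struct pt_regs *regs,
                                                int error, long val)
    {
        if (error) {
            regs->regs[2] = -error; /* $v0: positive error code */
            regs->regs[7] = 1;      /* $a3: error flag, matching real syscalls */
        } else {
            regs->regs[2] = val;    /* $v0: result */
            regs->regs[7] = 0;      /* $a3: success */
        }
    }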
2017-07-11  MIPS: Negate error syscall return in trace  (James Hogan)

The sys_exit trace event takes a single return value for the system call,
for which MIPS passes the value of the $v0 (result) register. However MIPS
returns positive error codes in $v0, with $a3 specifying that $v0 contains
an error code. As a result, erroring system calls are traced returning
positive error numbers that can't always be distinguished from success.

Use regs_return_value() to negate the error code if $a3 is set.

Fixes: 1d7bf993e073 ("MIPS: ftrace: Add support for syscall tracepoints.")
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: linux-mips@linux-mips.org
Cc: <stable@vger.kernel.org> # 3.13+
Patchwork: https://patchwork.linux-mips.org/patch/16651/
Acked-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
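A simplified sketch of regs_return_value() on MIPS and its use at syscall
exit (kernel-mode special cases omitted):

    static inline long regs_return_value(struct pt_regs *regs)
    {
        if (regs->regs[7])          /* $a3 set: $v0 holds a positive errno */
            return -regs->regs[2];
        return regs->regs[2];
    }

    /* at syscall exit: */
    trace_sys_exit(regs, regs_return_value(regs));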
2017-07-11  MIPS: Drop duplicate HAVE_SYSCALL_TRACEPOINTS select  (James Hogan)

MIPS selects HAVE_SYSCALL_TRACEPOINTS twice. The first was added back in
v3.13 by commit 1d7bf993e073 ("MIPS: ftrace: Add support for syscall
tracepoints."), but then a second redundant one was added in v4.2 by commit
fb59e394c30c ("MIPS: ftrace: Enable support for syscall tracepoints.").
Drop the duplicate select.

Fixes: fb59e394c30c ("MIPS: ftrace: Enable support for syscall tracepoints.")
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/16654/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2017-07-11  MIPS16e2: Provide feature overrides for non-MIPS16 systems  (Maciej W. Rozycki)

Hardcode the absence of the MIPS16e2 ASE for all the systems that do so for
the MIPS16 ASE already, providing for code to be optimized away.

Signed-off-by: Maciej W. Rozycki <macro@imgtec.com>
Reviewed-by: James Hogan <james.hogan@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/16097/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
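The shape of such an override in a platform's cpu-feature-overrides.h, as
an illustration (exact file placement assumed):

    /* hardwire the ASEs absent so dependent code is optimized away */
    #define cpu_has_mips16      0
    #define cpu_has_mips16e2    0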
2017-07-11  MIPS: MIPS16e2: Report ASE presence in /proc/cpuinfo  (Maciej W. Rozycki)

Now that both feature determination and unaligned emulation are in place,
add reporting to /proc/cpuinfo, so that the presence of "mips16e2" there
not only indicates our recognition of the hardware feature, but correct
unaligned emulation as well.

Signed-off-by: Maciej W. Rozycki <macro@imgtec.com>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/16757/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2017-07-11  spufs: Implement show_options  (David Howells)

Implement the show_options superblock op for spufs as part of a bid to get
rid of s_options and generic_show_options(), to make it easier to implement
a context-based mount where the mount options can be passed individually
over a file descriptor.

Signed-off-by: David Howells <dhowells@redhat.com>
cc: Jeremy Kerr <jk@ozlabs.org>
cc: linuxppc-dev@lists.ozlabs.org
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2017-07-11  powerpc/asm: Mark cr0 as clobbered in mftb()  (Oliver O'Halloran)

The workaround for the CELL timebase bug does not correctly mark cr0 as
being clobbered. This means GCC doesn't know that the asm block changes
cr0 and might leave the result of an unrelated comparison in cr0 across
the block, which we then trash, leading to basically random behaviour.

Fixes: 859deea949c3 ("[POWERPC] Cell timebase bug workaround")
Cc: stable@vger.kernel.org # v2.6.19+
Signed-off-by: Oliver O'Halloran <oohall@gmail.com>
[mpe: Tweak change log and flag for stable]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
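A simplified sketch of the workaround with the clobber added; the real
macro wraps the retry loop in CPU-feature-section plumbing, omitted here:

    #define mftb()  ({unsigned long rval;                       \
        asm volatile(                                           \
            "90:    mfspr %0, %1;\n"                            \
            "       cmpwi %0, 0;\n"     /* compares in cr0 */   \
            "       beq-  90b;\n"                               \
            : "=r" (rval)                                       \
            : "i" (SPRN_TBRL)                                   \
            : "cr0");   /* the previously missing clobber */    \
        rval;})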
2017-07-11  powerpc/powernv: Fix local TLB flush for boot and MCE on POWER9  (Nicholas Piggin)

There are two cases outside the normal address space management where a
CPU's local TLB is to be flushed:

 1. Host boot; in case something has left stale entries in the TLB (e.g.,
    kexec).

 2. Machine check; to clean corrupted TLB entries.

CPU state restore from deep idle states also flushes the TLB. However this
seems to be a side effect of reusing the boot code to set CPU state, rather
than a requirement itself.

The current flushing has a number of problems with ISA v3.0B:

 - The current radix mode of the MMU is not taken into account. tlbiel is
   undefined if the R field does not match the current radix mode.

 - ISA v3.0B hash must flush the partition and process table caches.

 - ISA v3.0B radix must flush partition and process scoped translations,
   partition and process table caches, and also the page walk cache.

Add POWER9 cases to handle these, with radix vs hash determined by the host
MMU mode.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2017-07-10  s390: reduce ELF_ET_DYN_BASE  (Kees Cook)

Now that explicitly executed loaders are loaded in the mmap region, we have
more freedom to decide where we position PIE binaries in the address space
to avoid possible collisions with mmap or stack regions.

For 64-bit, align to 4GB to allow runtimes to use the entire 32-bit address
space for 32-bit pointers. On 32-bit use 4MB, which is the traditional x86
minimum load location, likely to avoid historically requiring a 4MB page
table entry when only a portion of the first 4MB would be used (since the
NULL address is avoided).

For s390 the position could be 0x10000, but that is needlessly close to the
NULL address.

Link: http://lkml.kernel.org/r/1498154792-49952-5-git-send-email-keescook@chromium.org
Signed-off-by: Kees Cook <keescook@chromium.org>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: Pratyush Anand <panand@redhat.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2017-07-10  powerpc: move ELF_ET_DYN_BASE to 4GB / 4MB  (Kees Cook)

Now that explicitly executed loaders are loaded in the mmap region, we have
more freedom to decide where we position PIE binaries in the address space
to avoid possible collisions with mmap or stack regions.

For 64-bit, align to 4GB to allow runtimes to use the entire 32-bit address
space for 32-bit pointers. On 32-bit use 4MB, which is the traditional x86
minimum load location, likely to avoid historically requiring a 4MB page
table entry when only a portion of the first 4MB would be used (since the
NULL address is avoided).

Link: http://lkml.kernel.org/r/1498154792-49952-4-git-send-email-keescook@chromium.org
Signed-off-by: Kees Cook <keescook@chromium.org>
Tested-by: Michael Ellerman <mpe@ellerman.id.au>
Acked-by: Michael Ellerman <mpe@ellerman.id.au>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: Pratyush Anand <panand@redhat.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2017-07-10  arm64: move ELF_ET_DYN_BASE to 4GB / 4MB  (Kees Cook)

Now that explicitly executed loaders are loaded in the mmap region, we have
more freedom to decide where we position PIE binaries in the address space
to avoid possible collisions with mmap or stack regions.

For 64-bit, align to 4GB to allow runtimes to use the entire 32-bit address
space for 32-bit pointers. On 32-bit use 4MB, to match ARM. This could be
0x8000, the standard ET_EXEC load address, but that is needlessly close to
the NULL address, and anyone running arm compat PIE will have an MMU, so
the tight mapping is not needed.

Link: http://lkml.kernel.org/r/1498251600-132458-4-git-send-email-keescook@chromium.org
Signed-off-by: Kees Cook <keescook@chromium.org>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2017-07-10  arm: move ELF_ET_DYN_BASE to 4MB  (Kees Cook)

Now that explicitly executed loaders are loaded in the mmap region, we have
more freedom to decide where we position PIE binaries in the address space
to avoid possible collisions with mmap or stack regions.

4MB is chosen here mainly to have parity with x86, where this is the
traditional minimum load location, likely to avoid historically requiring
a 4MB page table entry when only a portion of the first 4MB would be used
(since the NULL address is avoided).

For ARM the position could be 0x8000, the standard ET_EXEC load address,
but that is needlessly close to the NULL address, and anyone running PIE
on 32-bit ARM will have an MMU, so the tight mapping is not needed.

Link: http://lkml.kernel.org/r/1498154792-49952-2-git-send-email-keescook@chromium.org
Signed-off-by: Kees Cook <keescook@chromium.org>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: Pratyush Anand <panand@redhat.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Daniel Micay <danielmicay@gmail.com>
Cc: Dmitry Safonov <dsafonov@virtuozzo.com>
Cc: Grzegorz Andrejczuk <grzegorz.andrejczuk@intel.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Masahiro Yamada <yamada.masahiro@socionext.com>
Cc: Qualys Security Advisory <qsa@qualys.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
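The new arm definition amounts to the following (value from the text above;
header placement assumed):

    /* 4MB: below mmap, away from NULL, no MMU-less users to worry about */
    #define ELF_ET_DYN_BASE     0x400000UL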
2017-07-10  binfmt_elf: use ELF_ET_DYN_BASE only for PIE  (Kees Cook)

The ELF_ET_DYN_BASE position was originally intended to keep loaders away
from ET_EXEC binaries. (For example, running "/lib/ld-linux.so.2 /bin/cat"
might cause the subsequent load of /bin/cat into where the loader had been
loaded.)

With the advent of PIE (ET_DYN binaries with an INTERP Program Header),
ELF_ET_DYN_BASE continued to be used since the kernel was only looking at
ET_DYN. However, since ELF_ET_DYN_BASE is traditionally set at the top
1/3rd of the TASK_SIZE, a substantial portion of the address space is
unused.

For 32-bit tasks when RLIMIT_STACK is set to RLIM_INFINITY, programs are
loaded above the mmap region. This means they can be made to collide
(CVE-2017-1000370) or nearly collide (CVE-2017-1000371) with pathological
stack regions.

Lowering ELF_ET_DYN_BASE solves both by moving programs below the mmap
region in all cases, and will now additionally avoid programs falling back
to the mmap region by enforcing MAP_FIXED for program loads (i.e. if it
would have collided with the stack, now it will fail to load instead of
falling back to the mmap region).

To allow for a lower ELF_ET_DYN_BASE, loaders (ET_DYN without INTERP) are
loaded into the mmap region, leaving space available for either an ET_EXEC
binary with a fixed location or PIE being loaded into mmap by the loader.
Only PIE programs are loaded offset from ELF_ET_DYN_BASE, which means
architectures can now safely lower their values without risk of loaders
colliding with their subsequently loaded programs.

For 64-bit, ELF_ET_DYN_BASE is best set to 4GB to allow runtimes to use the
entire 32-bit address space for 32-bit pointers.

Thanks to PaX Team, Daniel Micay, and Rik van Riel for inspiration and
suggestions on how to implement this solution.

Fixes: d1fd836dcf00 ("mm: split ET_DYN ASLR from mmap ASLR")
Link: http://lkml.kernel.org/r/20170621173201.GA114489@beast
Signed-off-by: Kees Cook <keescook@chromium.org>
Acked-by: Rik van Riel <riel@redhat.com>
Cc: Daniel Micay <danielmicay@gmail.com>
Cc: Qualys Security Advisory <qsa@qualys.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Dmitry Safonov <dsafonov@virtuozzo.com>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Grzegorz Andrejczuk <grzegorz.andrejczuk@intel.com>
Cc: Masahiro Yamada <yamada.masahiro@socionext.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Pratyush Anand <panand@redhat.com>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Will Deacon <will.deacon@arm.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
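A condensed sketch of the resulting load_elf_binary() logic for ET_DYN
(page-alignment arithmetic omitted):

    if (interpreter) {
        /* PIE: load offset from ELF_ET_DYN_BASE, never into mmap */
        load_bias = ELF_ET_DYN_BASE;
        if (current->flags & PF_RANDOMIZE)
            load_bias += arch_mmap_rnd();
        elf_flags |= MAP_FIXED;     /* fail rather than fall back to mmap */
    } else {
        /* bare loader: place it in the mmap region */
        load_bias = 0;
    }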