The array of names in hugetlbpage.c no longer exists.
Signed-off-by: Scott Wood <scottwood@freescale.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
|
|
Commit 430b01e8f5e524a2bfa50074d97d0bdc2505807b ("[POWERPC] Kill
flatdevtree.c") killed the two files that included flatdevtree_env.h. It was
apparently just an oversight not to kill that header too. Kill it now.
Signed-off-by: Paul Bolle <pebolle@tiscali.nl>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
|
|
Warning(arch/powerpc/kernel/pci_of_scan.c:210): Excess function parameter 'node' description in 'of_scan_pci_bridge'
Warning(arch/powerpc/kernel/vio.c:636): No description found for parameter 'desired'
Warning(arch/powerpc/kernel/vio.c:636): Excess function parameter 'new_desired' description in 'vio_cmo_set_dev_desired'
Warning(arch/powerpc/kernel/vio.c:1270): No description found for parameter 'viodrv'
Warning(arch/powerpc/kernel/vio.c:1270): Excess function parameter 'drv' description in '__vio_register_driver'
Warning(arch/powerpc/kernel/vio.c:1289): No description found for parameter 'viodrv'
Warning(arch/powerpc/kernel/vio.c:1289): Excess function parameter 'driver' description in 'vio_unregister_driver'
Signed-off-by: Wanpeng Li <liwp@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
|
|
memblock_end_of_DRAM() returns end_address + 1, not the end address,
while some code assumes that it returns the end address.
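As a rough illustration of the distinction (the surrounding check is
hypothetical, not the code being fixed):

    /* memblock_end_of_DRAM() is one past the last byte of DRAM */
    phys_addr_t limit = memblock_end_of_DRAM();       /* exclusive end */
    phys_addr_t last  = memblock_end_of_DRAM() - 1;   /* last valid address */

    if (addr >= limit)      /* correct: compare against the exclusive end */
            return -EINVAL;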
Signed-off-by: Bharat Bhushan <bharat.bhushan@freescale.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
|
|
I blame Mikey for this. He elevated my slightly dubious testcase to
benchmark status. And naturally we need to be number 1 at creating
zeros. So let's improve __clear_user some more.
As Paul suggests we can use dcbz for large lengths. This patch gets
the destination cacheline aligned then uses dcbz on whole cachelines.
Before:
10485760000 bytes (10 GB) copied, 0.414744 s, 25.3 GB/s
After:
10485760000 bytes (10 GB) copied, 0.268597 s, 39.0 GB/s
39 GB/s, a new record.
Signed-off-by: Anton Blanchard <anton@samba.org>
Tested-by: Olof Johansson <olof@lixom.net>
Acked-by: Olof Johansson <olof@lixom.net>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
|
|
At the moment all queues in a multiqueue adapter will serialise
against the IOMMU table lock. This is proving to be a big issue,
especially with 10Gbit ethernet.
This patch creates 4 pools and tries to spread the load across
them. If the table is under 1GB in size we revert back to the
original behaviour of 1 pool and 1 largealloc pool.
We create a hash to map CPUs to pools. Since we prefer interrupts to
be affinitised to primary CPUs, without some form of hashing we are
very likely to end up using the same pool. As an example, POWER7
has 4 way SMT and with 4 pools all primary threads will map to the
same pool.
The largealloc pool is reduced from 1/2 to 1/4 of the space to
partially offset the overhead of breaking the table up into pools.
Some performance numbers were obtained with a Chelsio T3 adapter on
two POWER7 boxes, running a 100 session TCP round robin test.
Performance improved 69% with this patch applied.
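A minimal sketch of the hashing idea (the helper name is hypothetical,
not the actual implementation):

    #include <linux/hash.h>

    #define IOMMU_POOL_HASHBITS     2       /* 2^2 = 4 pools */

    /* hash_32() spreads cpu numbers so that primary SMT threads
     * (0, 4, 8, ...) do not all map to the same pool */
    static unsigned int cpu_to_iommu_pool(unsigned int cpu)
    {
            return hash_32(cpu, IOMMU_POOL_HASHBITS);
    }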
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
|
|
In preparation for IOMMU pools, push the spinlock into
iommu_range_alloc and __iommu_free.
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
|
|
This patch moves tce_free outside of the lock in iommu_free.
Some performance numbers were obtained with a Chelsio T3 adapter on
two POWER7 boxes, running a 100 session TCP round robin test.
Performance improved 25% with this patch applied.
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
|
|
We currently hold the IOMMU spinlock around tce_build and tce_flush.
This causes our spinlock hold times to be much higher than required
and can impact multiqueue adapters.
This patch moves tce_build and tce_flush outside of the lock in
iommu_alloc, and tce_flush outside of the lock in iommu_free.
Some performance numbers were obtained with a Chelsio T3 adapter on
two POWER7 boxes, running a 100 session TCP round robin test.
Performance improved 32% with this patch applied.
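Roughly, the change in lock scope looks like this (simplified sketch
with argument lists elided; not the literal kernel code):

    /* before: TCE work done while holding the table lock */
    spin_lock_irqsave(&tbl->it_lock, flags);
    entry = iommu_range_alloc(dev, tbl, npages, ...);
    ppc_md.tce_build(tbl, entry, npages, ...);
    ppc_md.tce_flush(tbl);
    spin_unlock_irqrestore(&tbl->it_lock, flags);

    /* after: only the allocator is serialised by the lock */
    spin_lock_irqsave(&tbl->it_lock, flags);
    entry = iommu_range_alloc(dev, tbl, npages, ...);
    spin_unlock_irqrestore(&tbl->it_lock, flags);
    ppc_md.tce_build(tbl, entry, npages, ...);
    ppc_md.tce_flush(tbl);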
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
|
|
tce_buildmulti_pSeriesLP uses a per cpu page to communicate with the
hypervisor. We currently rely on the IOMMU table spinlock but
subsequent patches will be removing that so disable interrupts
around all accesses of tce_page.
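The pattern is roughly this (illustrative sketch, not the exact
pSeries hunk):

    u64 *tcep;
    unsigned long flags;

    /* the per-cpu tce_page must not be reused from another context
     * while we fill it, so hard-disable interrupts around its use */
    local_irq_save(flags);
    tcep = __get_cpu_var(tce_page);
    /* ... build the TCE list in tcep and hand it to the hypervisor ... */
    local_irq_restore(flags);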
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
|
|
Implement a POWER7 optimised memcpy using VMX and enhanced prefetch
instructions.
This is a copy of the POWER7 optimised copy_to_user/copy_from_user
loop. Implementation and performance details can be found in
commit a66086b8197d ("powerpc: POWER7 optimised
copy_to_user/copy_from_user using VMX").
I noticed memcpy issues when profiling a RAID6 workload:
.memcpy
.async_memcpy
.async_copy_data
.__raid_run_ops
.handle_stripe
.raid5d
.md_thread
I created a simplified testcase by building a RAID6 array with 4 1GB
ramdisks (booting with brd.rd_size=1048576):
# mdadm -CR -e 1.2 /dev/md0 --level=6 -n4 /dev/ram[0-3]
I then timed how long it took to write to the entire array:
# dd if=/dev/zero of=/dev/md0 bs=1M
Before: 892 MB/s
After: 999 MB/s
A 12% improvement.
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
|
|
Version 2.06 of the POWER ISA introduced enhanced touch instructions,
allowing us to specify a number of attributes including the length of
a stream.
This patch adds a software stream for both loads and stores in the
POWER7 copy_tofrom_user loop. Since the setup is quite complicated
and we have to use an eieio to ensure correct ordering of the "GO"
command we only do this for copies above 4kB.
To quantify any performance improvements we need a working set
bigger than the caches so we operate on a 1GB file:
# dd if=/dev/zero of=/tmp/foo bs=1M count=1024
And we compare how fast we can read the file:
# dd if=/tmp/foo of=/dev/null bs=1M
before: 7.7 GB/s
after: 9.6 GB/s
A 25% improvement.
The worst case for this patch will be a completely L1 cache contained
copy of just over 4kB. We can test this with the copy_to_user
testcase we used to tune copy_tofrom_user originally:
http://ozlabs.org/~anton/junkcode/copy_to_user.c
# time ./copy_to_user2 -l 4224 -i 10000000
before: 6.807 s
after: 6.946 s
A 2% slowdown, which seems reasonable considering our data is unlikely
to be completely L1 contained.
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
|
|
When the PCI core creates the PCI root bus through pci_create_root_bus(),
it already assigns the secondary bus number for the newly created root
bus. Thus we needn't explicitly assign the secondary bus number again
in pcibios_scan_phb().
Signed-off-by: Gavin Shan <shangw@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
|
|
The NUMA form affinity is set to 1 if the firmware supports OPAL.
Otherwise, we have to retrieve it from the OF node "/chosen". In the
latter case, the reference count on the "/chosen" node was never
decreased.
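The usual way to balance the reference looks like this (sketch, not the
literal hunk being fixed):

    struct device_node *chosen;

    chosen = of_find_node_by_path("/chosen");   /* takes a reference */
    if (chosen) {
            /* ... read the property that selects the affinity form ... */
            of_node_put(chosen);                /* drop the reference when done */
    }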
Signed-off-by: Gavin Shan <shangw@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
|
|
Implement a POWER7 optimised copy_page using VMX and enhanced
prefetch instructions. We use enhanced prefetch hints to prefetch
both the load and store side. We copy a cacheline at a time and
fall back to regular loads and stores if we are unable to use VMX
(eg we are in an interrupt).
The following microbenchmark was used to assess the impact of
the patch:
http://ozlabs.org/~anton/junkcode/page_fault_file.c
We test MAP_PRIVATE page faults across a 1GB file, 100 times:
# time ./page_fault_file -p -l 1G -i 100
Before: 22.25s
After: 18.89s
17% faster
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
|
|
Subsequent patches will add more VMX library functions and it makes
sense to keep all the c-code helper functions in the one file.
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
|
|
mtmsrd is an expensive instruction; we save a few cycles by
doing it once instead of twice.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
|
|
1) call_function.lock used in smp_call_function_many() only protects
call_function.queue and &data->refs; cpu_online_mask is read outside of
the lock. It is not necessary to protect cpu_online_mask, because
data->cpumask is pre-calculated, and even if a cpu is brought up while
arch_send_call_function_ipi_mask() is being called, it is harmless: the
validation test in generic_smp_call_function_interrupt() will take care
of it.
2) For the cpu-down case, stop_machine() guarantees that no concurrent
smp_call_function() is in progress.
Signed-off-by: Yong Zhang <yong.zhang0@gmail.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
|
|
I noticed __clear_user high up in a profile of one of my RAID stress
tests. The testcase was doing a dd from /dev/zero which ends up
calling __clear_user.
__clear_user is basically a loop with a single 4 byte store which
is horribly slow. We can do much better by aligning the destination
and doing 32 bytes of 8 byte stores in a loop.
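A C sketch of the idea (the real implementation is assembler with fault
handling on every store; variable names here are illustrative):

    char *p = to;
    while (len && ((unsigned long)p & 7)) {     /* align the destination */
            *p++ = 0;
            len--;
    }
    while (len >= 32) {                         /* 32 bytes of 8 byte stores */
            *(u64 *)(p + 0)  = 0;
            *(u64 *)(p + 8)  = 0;
            *(u64 *)(p + 16) = 0;
            *(u64 *)(p + 24) = 0;
            p += 32;
            len -= 32;
    }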
The following testcase was used to verify the patch:
http://ozlabs.org/~anton/junkcode/stress_clear_user.c
To show the improvement in performance I ran a dd from /dev/zero
to /dev/null on a POWER7 box:
Before:
# dd if=/dev/zero of=/dev/null bs=1M count=10000
10485760000 bytes (10 GB) copied, 3.72379 s, 2.8 GB/s
After:
# time dd if=/dev/zero of=/dev/null bs=1M count=10000
10485760000 bytes (10 GB) copied, 0.728318 s, 14.4 GB/s
Over 5x faster.
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
|
|
irq_entry, irq_exit, timer_interrupt_entry and timer_interrupt_exit
all do the same thing so use DECLARE_EVENT_CLASS to avoid duplicating
everything 4 times.
This saves quite a lot of space in both instruction text and data:
   text    data   bss     dec    hex  filename
   9265   19622    16   28903   70e7  arch/powerpc/kernel/irq.o  (before)
   6817   19019    16   25852   64fc  arch/powerpc/kernel/irq.o  (after)
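The shape of the change is roughly the following (class name and field
layout are illustrative):

    DECLARE_EVENT_CLASS(ppc64_interrupt_class,
            TP_PROTO(struct pt_regs *regs),
            TP_ARGS(regs),
            TP_STRUCT__entry(__field(struct pt_regs *, regs)),
            TP_fast_assign(__entry->regs = regs;),
            TP_printk("pt_regs=%p", __entry->regs)
    );

    /* each old event becomes a one-line DEFINE_EVENT of the class */
    DEFINE_EVENT(ppc64_interrupt_class, irq_entry,
            TP_PROTO(struct pt_regs *regs),
            TP_ARGS(regs)
    );

    DEFINE_EVENT(ppc64_interrupt_class, irq_exit,
            TP_PROTO(struct pt_regs *regs),
            TP_ARGS(regs)
    );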
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
|
|
When looking through some instruction traces I noticed our tracepoint
checks were inline. It turns out we don't have CONFIG_JUMP_LABEL
enabled.
By enabling CONFIG_JUMP_LABEL we replace a load/compare/branch with
a nop at every tracepoint call. For example in do_IRQ:
CONFIG_JUMP_LABEL disabled:
stdx 3,11,9
lwz 0,8(29)
cmpwi 7,0,0
bne- 7,.L124
bl .irq_enter
CONFIG_JUMP_LABEL enabled:
stdx 3,11,9
nop
bl .irq_enter
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
|
|
This patch removes the pseries_notify_add_cpu() call and replaces it
with a hotplug notifier. This prevents cpuidle resources from being
released and re-allocated each time a cpu comes online on pseries.
The earlier design was causing a lockdep problem in start_secondary,
as reported in this thread: https://lkml.org/lkml/2012/5/17/2
This applies on top of 3.4-rc7.
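A minimal sketch of the notifier shape (function and variable names are
hypothetical):

    static int pseries_cpuidle_cpu_notify(struct notifier_block *nb,
                                          unsigned long action, void *hcpu)
    {
            /* enable/disable the cpuidle device for the cpu going
             * online/offline instead of tearing everything down */
            return NOTIFY_OK;
    }

    static struct notifier_block pseries_cpuidle_nb = {
            .notifier_call = pseries_cpuidle_cpu_notify,
    };

    /* at driver init time */
    register_cpu_notifier(&pseries_cpuidle_nb);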
Signed-off-by: Deepthi Dharwar <deepthi@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
|
|
An upcoming release of firmware will add DDW extensions, in particular
an API to "reset" the DMA window to the original configuration (32-bit,
2GB in size). With that API available, we can safely remove the default
window, increasing the resources available to firmware for creation of
larger windows for the slot in question -- if we encounter an error, we
can use the new API to reset the state of the slot.
Further, this same release of firmware will make it a hard requirement
for OSes to release the existing window before any other windows will be
shown as available, to avoid conflicts in addressing between the two
windows.
In anticipation of these changes, always remove the default window
before we do any DDW manipulations.
Signed-off-by: Nishanth Aravamudan <nacc@us.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
|
|
The patch_instruction() interface is made to modify kernel text. It is
safer to use it than probe_kernel_write() when modifying kernel
code.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
|
|
For ftrace to use the patch_instruction code, it needs to check for
faults on write. Ftrace updates code all over the kernel, and we need to
know if code is updated or not due to protections that are placed on
some portions of the kernel. If ftrace does not detect a fault, it will
error later on, and it will be much more difficult to find the problem.
By changing patch_instruction() to detect faults, then ftrace will be
able to make use of it too.
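With the fault detection in place, a caller can do something like this
(illustrative):

    /* patch_instruction() now returns non-zero if the write faulted */
    if (patch_instruction(ip, new_instr))
            return -EPERM;  /* report the failing site instead of erroring later */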
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
|
|
PowerPC does not have the synchronization issues that x86 has with
modifying code on one CPU while another CPU is executing it.
The other CPU will either see the old or new code without any
issues, unlike x86 which may issue a GPF.
Instead of calling the heavy stop_machine, just update the code.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
|
|
Currently we build all board files regardless of the final zImage
target. This is sub-optimal (in terms of compilation) and leads to
problems in one platform needlessly causing build failures for other
platforms.
Use the Kconfig variables to selectively construct the list of board
files to build.
Signed-off-by: Tony Breeds <tony@bakeyournoodle.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/benh/powerpc
Pull a couple more powerpc fixes from Benjamin Herrenschmidt:
"Here are two more fixes that I "missed" when scrubbing patchwork last
week which are worth still having in 3.5."
* 'merge' of git://git.kernel.org/pub/scm/linux/kernel/git/benh/powerpc:
powerpc/kvm: sldi should be sld
powerpc/xmon: Use cpumask iterator to avoid warning
|
|
The SYS_NIRQ1 pin is the interrupt line for the PMIC part of the TWL6030,
and interrupts from the PMIC are needed as wakeup sources.
Ensure this pin is mux'd as input and has wakeup enabled so PMIC
interrupts (e.g. RTC) can be used as wakeup sources.
Tested on OMAP4430/Panda.
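The mux setup amounts to something like this (sketch; the exact flags
belong in the board code):

    omap_mux_init_signal("sys_nirq1",
                         OMAP_PIN_INPUT_PULLUP | OMAP_PIN_OFF_WAKEUPENABLE);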
Signed-off-by: Kevin Hilman <khilman@ti.com>
Signed-off-by: Tony Lindgren <tony@atomide.com>
|
|
In order for suspend/resume dependencies to work correctly, I2C has to
be initialized (more specifically, registered with the driver core)
before MMC. Without this, the MMC driver fails to adjust the VMMC
regulator (using i2c writes) during the suspend path.
Problem found testing suspend/resume on 3730/OveroSTORM platform.
Signed-off-by: Kevin Hilman <khilman@ti.com>
Signed-off-by: Tony Lindgren <tony@atomide.com>
|
|
commit 435ca24 (ARM i.MX: Visstrim_M10: Add board version detection)
included <asm/system.h>, a header file about to be deleted according to
commit 9f97da (Disintegrate asm/system.h for ARM).
Include <asm/system_info.h> instead.
Reported-by: Russell King <linux@arm.linux.org.uk>
Signed-off-by: Fabio Estevam <fabio.estevam@freescale.com>
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
|
|
Since we are taking a register, this should never have been an sldi.
Talking to paulus offline, this is the correct fix.
Was introduced by:
commit 19ccb76a1938ab364a412253daec64613acbf3df
Author: Paul Mackerras <paulus@samba.org>
Date: Sat Jul 23 17:42:46 2011 +1000
Talking to paulus, this shouldn't be a literal.
Signed-off-by: Michael Neuling <mikey@neuling.org>
CC: <stable@kernel.org> [v3.2+]
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
|
|
We have a bug report where the kernel hits a warning in the cpumask
code:
WARNING: at include/linux/cpumask.h:107
Which is:
WARN_ON_ONCE(cpu >= nr_cpumask_bits);
The backtrace is:
cpu_cmd
cmds
xmon_core
xmon
die
xmon is iterating through 0 to NR_CPUS. I'm not sure why we are still
open coding this but iterating above nr_cpu_ids is definitely a bug.
This patch iterates through all possible cpus, in case we issue a
system reset and CPUs in an offline state call in.
Perhaps the old code was trying to handle CPUs that were in the
partition but were never started (eg kexec into a kernel with an
nr_cpus= boot option). They are going to die way before we get into
xmon since we haven't set any kernel state up for them.
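In other words, the loop becomes something like this (sketch;
cpus_in_xmon is xmon's mask of cpus currently in xmon):

    unsigned long cpu;

    /* iterate over cpus that can ever exist, not 0..NR_CPUS-1 */
    for_each_possible_cpu(cpu) {
            if (cpumask_test_cpu(cpu, &cpus_in_xmon))
                    /* act on this cpu */;
    }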
Signed-off-by: Anton Blanchard <anton@samba.org>
CC: <stable@kernel.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
|
|
Pull two ARM fixes from Russell King:
"It's been fairly quiet with the fixes. Just two this time. One fixes
a long standing problem with KALLSYMS needing an additional pass, and
the other sorts a problem with the vmalloc space interacting with
static IO mappings."
* 'fixes' of git://git.linaro.org/people/rmk/linux-arm:
ARM: 7438/1: fill possible PMD empty section gaps
ARM: 7428/1: Prevent KALLSYM size mismatch on ARM.
|
|
On ARM with the 2-level page table format, a PMD entry is represented by
two consecutive section entries covering 2MB of virtual space.
However, static mappings always were allowed to use separate 1MB section
entries. This means in practice that a static mapping may create half
populated PMDs via create_mapping().
Since commit 0536bdf33f (ARM: move iotable mappings within the vmalloc
region) those static mappings are located in the vmalloc area. We must
ensure no such half populated PMDs are accessible once vmalloc() or
ioremap() start looking at the vmalloc area for nearby free virtual
address ranges, or various things leading to a kernel crash will happen.
Signed-off-by: Nicolas Pitre <nico@linaro.org>
Reported-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Tested-by: "R, Sricharan" <r.sricharan@ti.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: stable@vger.kernel.org
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
Basic suspend/resume is fixed by ensuring that the PGSR registers are
set correctly before sleep mode is entered. In particular four of the
active low resets need to be driven high while in sleep mode, otherwise
the unit resets itself instead of suspending. Another problem was that
the PCFR_GPROD bit is set by the HTC bootloader; this caused GPIO reset
(i.e. the reset button) to fail immediately after returning from sleep
mode.
Signed-off-by: Paul Parsons <lost.distance@yahoo.com>
Cc: Philipp Zabel <philipp.zabel@gmail.com>
Signed-off-by: Haojian Zhuang <haojian.zhuang@gmail.com>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc
Pull ARM SoC fixes from Olof Johansson:
"Another week, another batch of fixes.
All are small, contained, targeted fixes for explicit problems --
mostly build and boot failures across i.MX, OMAP, Renesas/Shmobile and
Samsung."
* tag 'fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc:
ARM: imx6q: fix suspend regression caused by common clk migration
ARM: OMAP4470: Fix OMAP4470 boot failure
ARM: EXYNOS: Fix EXYNOS_DEV_DMA Kconfig entry
ARM: OMAP2+: nand: fix build error when CONFIG_MTD_ONENAND_OMAP2=n
ARM: shmobile: r8a7779: Route all interrupts to ARM
ARM: shmobile: kzm9d: use late init machine hook
ARM: shmobile: kzm9g: use late init machine hook
ARM: mach-shmobile: armadillo800eva: Use late init machine hook
ARM: SAMSUNG: Fix for S3C2412 EBI memory mapping
ARM: mach-shmobile: add missing GPIO IRQ configuration on mackerel
ARM: mach-shmobile: Fix build when SMP is enabled and EMEV2 is not enabled
ARM: shmobile: sh7372: bugfix: chclr_offset base
ARM: shmobile: sh73a0: bugfix: SY-DMAC number
ARM: SAMSUNG: Should check for IS_ERR(clk) instead of NULL
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/kgene/linux-samsung into fixes
* 'v3.5-samsung-fixes-1' of git://git.kernel.org/pub/scm/linux/kernel/git/kgene/linux-samsung:
ARM: EXYNOS: Fix EXYNOS_DEV_DMA Kconfig entry
ARM: SAMSUNG: Fix for S3C2412 EBI memory mapping
ARM: SAMSUNG: Should check for IS_ERR(clk) instead of NULL
|
|
When moving to the common clk framework, the imx6q clks rom and mmdc_ch1_axi
get different on/off states than in the old clk driver, which breaks the
suspend function. There might be a better way to manage these clocks, but
let's take the old clk driver approach to fix the regression first.
Signed-off-by: Shawn Guo <shawn.guo@linaro.org>
Signed-off-by: Olof Johansson <olof@lixom.net>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/tmlind/linux-omap into fixes
From Tony Lindgren:
"Here's one more regression fix that I missed earlier, and a
trivial fix to get omap4470 booting."
* tag 'omap-fixes-for-v3.5-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/tmlind/linux-omap:
ARM: OMAP4470: Fix OMAP4470 boot failure
ARM: OMAP2+: nand: fix build error when CONFIG_MTD_ONENAND_OMAP2=n
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/lenb/linux
Pull ACPI & Power Management patches from Len Brown.
* 'release' of git://git.kernel.org/pub/scm/linux/kernel/git/lenb/linux:
acpi_pad: fix power_saving thread deadlock
ACPI video: Still use ACPI backlight control if _DOS doesn't exist
ACPI, APEI, Avoid too much error reporting in runtime
ACPI: Add a quirk for "AMILO PRO V2030" to ignore the timer overriding
ACPI: Remove one board specific WARN when ignoring timer overriding
ACPI: Make acpi_skip_timer_override cover all source_irq==0 cases
ACPI, x86: fix Dell M6600 ACPI reboot regression via DMI
ACPI sysfs.c strlen fix
|
|
'video-bugzilla-43168', 'bugzilla-40002' and 'bugfix-misc' into release
bug fixes
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/benh/powerpc
Pull powerpc fixes from Benjamin Herrenschmidt:
"Here are a few powerpc fixes. Arguably some of this should have come
to you earlier but I'm only just catching up after my medical leave.
Mostly these fixes regressions, a couple are long standing bugs."
* 'merge' of git://git.kernel.org/pub/scm/linux/kernel/git/benh/powerpc:
powerpc/pseries: Fix software invalidate TCE
powerpc: check_and_cede_processor() never cedes
powerpc/ftrace: Do not trace restore_interrupts()
powerpc: Fix Section mismatch warnings in prom_init.c
ppc64: fix missing to check all bits of _TIF_USER_WORK_MASK in preempt
powerpc: Fix uninitialised error in numa.c
powerpc: Fix BPF_JIT code to link with multiple TOCs
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull x86 fixes from Ingo Molnar.
* 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
x86, cpufeature: Remove stray %s, add -w to mkcapflags.pl
x86, cpufeature: Catch duplicate CPU feature strings
x86, cpufeature: Rename X86_FEATURE_DTS to X86_FEATURE_DTHERM
x86: Fix kernel-doc warnings
x86, compat: Use test_thread_flag(TIF_IA32) in compat signal delivery
|
|
The following added support for powernv but broke pseries/BML:
1f1616e powerpc/powernv: Add TCE SW invalidation support
TCE_PCI_SW_INVAL was split into FREE and CREATE flags but the tests in
the pseries code were not updated to reflect this.
Signed-off-by: Michael Neuling <mikey@neuling.org>
cc: stable@kernel.org [v3.3+]
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
|
|
Commit f948501b36c6 ("Make hard_irq_disable() actually hard-disable
interrupts") caused check_and_cede_processor to stop working.
->irq_happened will never be zero right after a hard_irq_disable
so the compiler removes the call to cede_processor completely.
The bug was introduced back in the lazy interrupt handling rework
of 3.4 but was hidden until recently because hard_irq_disable did
nothing.
This issue will eventually appear in 3.4 stable since the
hard_irq_disable fix is marked stable, so mark this one for stable
too.
Signed-off-by: Anton Blanchard <anton@samba.org>
Cc: stable@vger.kernel.org
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
|
|
As I was adding code that affects all archs, I started testing the function
tracer against PPC64 and found that it currently locks up with the 3.4
kernel. I figured it was due to tracing a function that shouldn't be, so
I went through the following process to bisect and find the culprit:
cat /debug/tracing/available_filter_functions > t
num=`wc -l < t`
sed -ne "1,${num}p" t > t1
let num=num+1
sed -ne "${num},\$p" t > t2
cat t1 > /debug/tracing/set_ftrace_filter
echo function > /debug/tracing/current_tracer
<failed? bisect t1, if not bisect t2>
It finally came down to this function: restore_interrupts()
I'm not sure why this locks up the system. It just seems to prevent
scheduling from occurring. Interrupts seem to still work, as I can ping
the box. But all user processes freeze.
When restore_interrupts() is not traced, function tracing works fine.
Cc: stable@kernel.org
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
|
|
This patch tries to fix a couple of Section mismatch warnings like the
following one:
WARNING: arch/powerpc/kernel/built-in.o(.text+0x2923c): Section mismatch
in reference from the function .prom_query_opal() to the
function .init.text:.call_prom()
The function .prom_query_opal() references
the function __init .call_prom().
This is often because .prom_query_opal lacks a __init
annotation or the annotation of .call_prom is wrong.
Signed-off-by: Li Zhong <zhong@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
|
|
In the entry_64.S version of ret_from_except_lite, you'll notice that
in the !preempt case, after we've checked MSR_PR we test for any
TIF flag in _TIF_USER_WORK_MASK to decide whether to go to do_work
or not. However, in the preempt case, we do a convoluted trick to
test SIGPENDING only if PR was set and always test NEED_RESCHED...
but we forget to test any other bit of _TIF_USER_WORK_MASK !!! So
that means that with preempt, we completely fail to test for things
like single step, syscall tracing, etc...
This should be fixed as follows:
- Test PR. If not set, go to resume_kernel, else continue.
- In the resume_kernel path, do the original do_work.
- Otherwise, always test _TIF_USER_WORK_MASK to decide whether to do
  the original user_work, else restore directly.
Signed-off-by: Tiejun Chen <tiejun.chen@windriver.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
|