Commit history for path arch/ (Linux kernel source tree)
2009-09-05  Merge branch 'perfcounters-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip  (Linus Torvalds)
* 'perfcounters-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
  perf_counter/powerpc: Fix cache event codes for POWER7
  perf_counter: Fix /0 bug in swcounters
  perf_counters: Increase paranoia level
2009-09-05  crypto: sha-s390 - Fix warnings in import function  (Jan Glauber)
This patch fixes the warnings in the sha-s390 import function. Signed-off-by: Jan Glauber <jang@linux.vnet.ibm.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-09-04  kmemleak: Don't scan uninitialized memory when kmemcheck is enabled  (Pekka Enberg)
Ingo Molnar reported the following kmemcheck warning when running with both kmemleak and kmemcheck enabled:

  PM: Adding info for No Bus:vcsa7
  WARNING: kmemcheck: Caught 32-bit read from uninitialized memory (f6f6e1a4)
  d873f9f600000000c42ae4c1005c87f70000000070665f666978656400000000
   i i i i u u u u i i i i i i i i i i i i i i i i i i i i i u u u
           ^
  Pid: 3091, comm: kmemleak Not tainted (2.6.31-rc7-tip #1303) P4DC6
  EIP: 0060:[<c110301f>] EFLAGS: 00010006 CPU: 0
  EIP is at scan_block+0x3f/0xe0
  EAX: f40bd700 EBX: f40bd780 ECX: f16b46c0 EDX: 00000001
  ESI: f6f6e1a4 EDI: 00000000 EBP: f10f3f4c ESP: c2605fcc
  DS: 007b ES: 007b FS: 00d8 GS: 00e0 SS: 0068
  CR0: 8005003b CR2: e89a4844 CR3: 30ff1000 CR4: 000006f0
  DR0: 00000000 DR1: 00000000 DR2: 00000000 DR3: 00000000
  DR6: ffff4ff0 DR7: 00000400
   [<c110313c>] scan_object+0x7c/0xf0
   [<c1103389>] kmemleak_scan+0x1d9/0x400
   [<c1103a3c>] kmemleak_scan_thread+0x4c/0xb0
   [<c10819d4>] kthread+0x74/0x80
   [<c10257db>] kernel_thread_helper+0x7/0x3c
   [<ffffffff>] 0xffffffff
  kmemleak: 515 new suspected memory leaks (see /sys/kernel/debug/kmemleak)
  kmemleak: 42 new suspected memory leaks (see /sys/kernel/debug/kmemleak)

The problem is that kmemleak will scan partially initialized objects, which makes kmemcheck complain. Fix that up by skipping uninitialized memory regions when kmemcheck is enabled.

Reported-by: Ingo Molnar <mingo@elte.hu> Acked-by: Ingo Molnar <mingo@elte.hu> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
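The fix boils down to the kmemleak scanner asking kmemcheck whether a word is initialized before reading it. A minimal user-space sketch of that idea, with a hypothetical per-word shadow array standing in for kmemcheck's byte tracking (scan_block and the shadow helpers here are illustrative, not the kernel functions):

    #include <stdint.h>
    #include <stdio.h>

    #define BLOCK_WORDS 8

    /* Hypothetical shadow state: one flag per word, 1 = initialized. */
    static unsigned char shadow[BLOCK_WORDS];

    static int word_is_initialized(size_t idx)
    {
        return shadow[idx];
    }

    /* Scan a block for pointer-sized values, skipping words the shadow
     * map says were never written -- the equivalent of kmemleak
     * checking with kmemcheck before dereferencing. */
    static void scan_block(const uintptr_t *block, size_t words)
    {
        for (size_t i = 0; i < words; i++) {
            if (!word_is_initialized(i))
                continue;   /* would trip kmemcheck otherwise */
            printf("scanning word %zu: %#lx\n", i,
                   (unsigned long)block[i]);
        }
    }

    int main(void)
    {
        uintptr_t block[BLOCK_WORDS];

        /* Only half of the block is written and marked initialized. */
        for (size_t i = 0; i < BLOCK_WORDS / 2; i++) {
            block[i] = 0xc0de0000 + i;
            shadow[i] = 1;
        }
        scan_block(block, BLOCK_WORDS);
        return 0;
    }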
2009-09-04  Merge branch 'amd-iommu/2.6.32' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/linux-2.6-iommu into core/iommu  (Ingo Molnar)
2009-09-04  sparc64: Fix bootup with mcount in some configs.  (David S. Miller)
Functions invoked early when booting up a cpu can't use tracing because mcount requires a valid 'current_thread_info()' and TLB mappings to be set up. The code path of sun4v_register_mondo_queues --> register_one_mondo is one such case. sun4v_register_mondo_queues already has the necessary 'notrace' annotation, but register_one_mondo does not. Normally register_one_mondo is inlined so the bug doesn't trigger, but with some config/compiler combinations it won't be, so we must properly mark it notrace. While we're here, add 'notrace' annotations to prom_printf and prom_halt so that early error handling won't have the same problem. Reported-by: Alexander Beregalov <a.beregalov@gmail.com> Reported-by: Leif Sawyer <lsawyer@gci.com> Signed-off-by: David S. Miller <davem@davemloft.net>
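In the kernel, notrace expands to GCC's no_instrument_function attribute. A user-space sketch of the same effect, using -finstrument-functions instead of the kernel's -pg/mcount machinery (the helper names are made up for the demo):

    /* Build with: gcc -finstrument-functions notrace_demo.c -o notrace_demo */
    #include <stdio.h>

    #define notrace __attribute__((no_instrument_function))

    /* The profiling hooks themselves must not be instrumented or they
     * would recurse -- the same reason code running before
     * current_thread_info() and the TLB are usable has to be notrace. */
    void notrace __cyg_profile_func_enter(void *fn, void *caller)
    {
        fprintf(stderr, "enter %p\n", fn);
    }

    void notrace __cyg_profile_func_exit(void *fn, void *caller)
    {
        fprintf(stderr, "exit  %p\n", fn);
    }

    /* Instrumented: every call emits enter/exit events. */
    static int traced_helper(int x)
    {
        return x * 2;
    }

    /* Not instrumented: no profiling calls are emitted here, which is
     * what the notrace annotation buys early boot code. */
    static int notrace early_helper(int x)
    {
        return x + 1;
    }

    int main(void)
    {
        printf("%d %d\n", traced_helper(20), early_helper(41));
        return 0;
    }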
2009-09-04  sched: Turn on SD_BALANCE_NEWIDLE  (Ingo Molnar)
Start the re-tuning of the balancer by turning on newidle. It improves hackbench performance and parallelism on a 4x4 box. The "perf stat --repeat 10" measurements give us:

  domain0              domain1
  .......................................
  -SD_BALANCE_NEWIDLE  -SD_BALANCE_NEWIDLE:
     2041.273208  task-clock-msecs  #  9.354 CPUs  ( +- 0.363% )

  +SD_BALANCE_NEWIDLE  -SD_BALANCE_NEWIDLE:
     2086.326925  task-clock-msecs  # 11.934 CPUs  ( +- 0.301% )

  +SD_BALANCE_NEWIDLE  +SD_BALANCE_NEWIDLE:
     2115.289791  task-clock-msecs  # 12.158 CPUs  ( +- 0.263% )

Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Andreas Herrmann <andreas.herrmann3@amd.com> Cc: Gautham R Shenoy <ego@in.ibm.com> Cc: Balbir Singh <balbir@in.ibm.com> LKML-Reference: <new-submission> Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-09-04  sched: Clean up topology.h  (Ingo Molnar)
Re-organize the flag settings so that it's visible at a glance which sched-domains flags are set and which are not. With the new balancer code we'll need to re-tune these details anyway, so make it cleaner to make fewer mistakes down the road ;-) Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Andreas Herrmann <andreas.herrmann3@amd.com> Cc: Gautham R Shenoy <ego@in.ibm.com> Cc: Balbir Singh <balbir@in.ibm.com> LKML-Reference: <new-submission> Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-09-04  Merge branch 'linus' into core/rcu  (Ingo Molnar)
Merge reason: Avoid fuzz in init/main.c and update from rc6 to rc8. Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-09-04  x86, perf_counter, bts: Do not allow kernel BTS tracing for now  (markus.t.metzger@intel.com)
Kernel BTS tracing generates too much data too fast for us to handle, causing the kernel to hang. Fail BTS requests for kernel code. Signed-off-by: Markus Metzger <markus.t.metzger@intel.com> Acked-by: Peter Zijlstra <a.p.zjilstra@chello.nl> LKML-Reference: <20090902140616.901253000@intel.com> [ This is really a workaround - but we want BTS tracing in .32 so make sure we don't regress. The lockup should be fixed ASAP. ] Signed-off-by: Ingo Molnar <mingo@elte.hu>
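Conceptually the workaround is a gate at counter-setup time: if a BTS request does not exclude kernel code, refuse it. A hedged stand-alone sketch of that check (struct and error value chosen for illustration, not the perf_counter internals):

    #include <errno.h>
    #include <stdbool.h>
    #include <stdio.h>

    /* Simplified stand-in for the relevant counter attributes. */
    struct bts_request {
        bool exclude_kernel;   /* caller will not trace kernel code */
        bool exclude_user;
    };

    /* Kernel-side branch tracing produces data faster than it can be
     * drained and wedges the machine, so only user-only requests are
     * accepted for now. */
    static int bts_request_allowed(const struct bts_request *req)
    {
        return req->exclude_kernel ? 0 : -EOPNOTSUPP;
    }

    int main(void)
    {
        struct bts_request user_only = { .exclude_kernel = true };
        struct bts_request with_kernel = { .exclude_kernel = false };

        printf("user-only BTS: %d\n", bts_request_allowed(&user_only));
        printf("kernel BTS:    %d\n", bts_request_allowed(&with_kernel));
        return 0;
    }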
2009-09-04  x86, perf_counter, bts: Correct pointer-to-u64 casts  (markus.t.metzger@intel.com)
On 32-bit, pointers in the DS AREA configuration are cast to u64. The current (long) cast used to avoid compiler warnings results in a signed 64-bit address. Signed-off-by: Markus Metzger <markus.t.metzger@intel.com> Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl> LKML-Reference: <20090902140615.305889000@intel.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
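The underlying C pitfall is easy to reproduce: widening an address through a signed 32-bit intermediate sign-extends it into the upper half of the u64, while an unsigned intermediate does not. A small demo in which uint32_t plays the role of a 32-bit kernel pointer (cast through long on i386):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* A 32-bit address above 2 GiB, as i386 kernel pointers are. */
        uint32_t addr = 0xc1000000u;

        /* Widening via a signed 32-bit intermediate -- what a (long)
         * cast amounts to on a 32-bit build -- sign-extends the value. */
        uint64_t via_signed = (uint64_t)(int32_t)addr;

        /* An unsigned intermediate keeps the address intact. */
        uint64_t via_unsigned = (uint64_t)(uint32_t)addr;

        printf("signed   intermediate: 0x%016llx\n",
               (unsigned long long)via_signed);
        printf("unsigned intermediate: 0x%016llx\n",
               (unsigned long long)via_unsigned);
        return 0;
    }

On a typical two's-complement toolchain this prints 0xffffffffc1000000 for the signed path and 0x00000000c1000000 for the unsigned one, which is exactly the kind of bogus DS AREA address the patch avoids.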
2009-09-04  x86, perf_counter, bts: Fail if BTS is not available  (markus.t.metzger@intel.com)
Reserve PERF_COUNT_HW_BRANCH_INSTRUCTIONS with sample_period == 1 for BTS tracing and fail if BTS is not available. Signed-off-by: Markus Metzger <markus.t.metzger@intel.com> Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl> LKML-Reference: <20090902140612.943801000@intel.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-09-03  Merge branch 'amd-iommu/pagetable' into amd-iommu/2.6.32  (Joerg Roedel)
Conflicts:
  arch/x86/kernel/amd_iommu.c
2009-09-03  Merge branch 'amd-iommu/passthrough' into amd-iommu/2.6.32  (Joerg Roedel)
Conflicts:
  arch/x86/kernel/amd_iommu.c
  arch/x86/kernel/amd_iommu_init.c
2009-09-03  Merge branches 'gart/fixes', 'amd-iommu/fixes+cleanups' and 'amd-iommu/fault-handling' into amd-iommu/2.6.32  (Joerg Roedel)
2009-09-03  x86/gart: Do not select AGP for GART_IOMMU  (Pavel Vasilyev)
There is no dependency from the GART code on the AGP code, and since a lot of systems today do not have AGP anymore, remove this dependency from the kernel configuration. Signed-off-by: Pavel Vasilyev <pavel@pavlinux.ru> Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
2009-09-03  x86/amd-iommu: Initialize passthrough mode when requested  (Joerg Roedel)
This patch enables the passthrough mode for AMD IOMMU by running the initialization function when iommu=pt is passed on the kernel command line. Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
2009-09-03  x86/amd-iommu: Don't detach device from pt domain on driver unbind  (Joerg Roedel)
This patch makes sure a device is not detached from the passthrough domain when the device driver is unloaded or otherwise releases the device. Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
2009-09-03  x86/amd-iommu: Make sure a device is assigned in passthrough mode  (Joerg Roedel)
When the IOMMU driver runs in passthrough mode it has to make sure that every device not assigned to an IOMMU-API domain is put into the passthrough domain instead of being left unassigned. Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
2009-09-03  x86/amd-iommu: Align locking between attach_device and detach_device  (Joerg Roedel)
This patch makes the locking behavior between the functions attach_device and __attach_device consistent with the locking behavior between detach_device and __detach_device. Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
2009-09-03  x86/amd-iommu: Fix device table write order  (Joerg Roedel)
The V bit of the device table entry has to be set after the rest of the entry is written to not confuse the hardware. Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
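This is the usual "publish last" ordering for descriptors the hardware reads asynchronously: fill in every other field, make those writes visible, then set the valid bit. A generic sketch of the idea with a toy two-word entry (not the real AMD DTE layout, and using a C11 release store where the kernel would use a write barrier):

    #include <stdatomic.h>
    #include <stdint.h>
    #include <stdio.h>

    #define DTE_VALID (1ull << 0)   /* toy position of the V bit */

    /* Toy descriptor standing in for a device table entry. */
    struct dte {
        uint64_t hi;            /* payload read once the entry is valid */
        _Atomic uint64_t lo;    /* holds the V bit in this toy layout */
    };

    static void dte_install(struct dte *e, uint64_t domain_info)
    {
        /* 1. Write every field the IOMMU will consult after V is set. */
        e->hi = domain_info;

        /* 2. Only then publish the entry by setting the V bit; release
         * ordering keeps the data store above from being reordered
         * past it. */
        atomic_store_explicit(&e->lo, DTE_VALID, memory_order_release);
    }

    int main(void)
    {
        struct dte e = { 0 };

        dte_install(&e, 42);
        printf("hi=%llu lo=%#llx\n", (unsigned long long)e.hi,
               (unsigned long long)atomic_load(&e.lo));
        return 0;
    }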
2009-09-03  x86/amd-iommu: Add passthrough mode initialization functions  (Joerg Roedel)
When iommu=pt is passed on the kernel command line the devices should run untranslated. This requires the allocation of a special domain for that purpose. This patch implements the allocation and initialization path for iommu=pt. Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
2009-09-03  x86/amd-iommu: Add core functions for pd allocation/freeing  (Joerg Roedel)
This patch factors some of the protection domain allocation code into separate functions. This way the logic can be used to allocate the passthrough domain later. As a side effect this patch fixes an unlikely domain id leakage bug. Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
2009-09-03  x86/dma: Mark iommu_pass_through as __read_mostly  (Joerg Roedel)
This variable is read most of the time. This patch marks it as such. It also documents the meaning of this variable while at it. Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
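__read_mostly simply places the variable in a dedicated data section so it shares cache lines with other rarely-written data rather than with hot, frequently-written variables. A rough user-space approximation with a GCC section attribute (section name and variable shown for illustration only):

    #include <stdio.h>

    /* Stand-in for the kernel's __read_mostly marker: group read-mostly
     * globals in their own section to avoid false sharing with data
     * that is written often. */
    #define __read_mostly __attribute__((__section__(".data.read_mostly")))

    static int iommu_pass_through __read_mostly = 0;

    int main(void)
    {
        printf("iommu_pass_through = %d\n", iommu_pass_through);
        return 0;
    }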
2009-09-03  x86/amd-iommu: Change iommu_map_page to support multiple page sizes  (Joerg Roedel)
This patch adds a map_size parameter to the iommu_map_page function which makes it generic enough to handle multiple page sizes. This also requires a change to alloc_pte which is also done in this patch. Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
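Multi-level IO page tables slice the bus address into fixed-width index fields, one per level; supporting larger map sizes means stopping the walk at a higher level, where one entry covers a correspondingly bigger region. A generic sketch of the index arithmetic (9 bits per level and 4 KiB base pages, as in x86-style tables; illustrative, not the driver's code):

    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_SHIFT 12   /* 4 KiB base pages */
    #define LEVEL_BITS 9    /* 512 entries per table */

    /* Index into the page table at 'level' for a given bus address.
     * Level 0 holds the 4 KiB leaf entries; every level above covers
     * 512 times as much address space per entry. */
    static unsigned int pt_index(uint64_t bus_addr, unsigned int level)
    {
        return (bus_addr >> (PAGE_SHIFT + LEVEL_BITS * level)) &
               ((1u << LEVEL_BITS) - 1);
    }

    /* Size of the region a single entry maps at a given level. */
    static uint64_t pt_entry_size(unsigned int level)
    {
        return 1ull << (PAGE_SHIFT + LEVEL_BITS * level);
    }

    int main(void)
    {
        uint64_t addr = 0x12345678000ull;

        for (unsigned int level = 0; level < 4; level++)
            printf("level %u: index %3u, entry maps %llu KiB\n",
                   level, pt_index(addr, level),
                   (unsigned long long)(pt_entry_size(level) >> 10));
        return 0;
    }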
2009-09-03  x86/amd-iommu: Support higher level PTEs in iommu_page_unmap  (Joerg Roedel)
This patch changes fetch_pte and iommu_page_unmap to support different page sizes too. Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
2009-09-03  x86/amd-iommu: Remove old page table handling macros  (Joerg Roedel)
These macros are no longer required, so remove them. Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
2009-09-03  x86/amd-iommu: Use 2-level page tables for dma_ops domains  (Joerg Roedel)
The driver now supports a dynamic number of levels for IO page tables. This makes it possible to reduce the number of levels for dma_ops domains by one, because a dma_ops domain usually has an address space size between 128MB and 4GB. Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
2009-09-03  x86/amd-iommu: Remove bus_addr check in iommu_map_page  (Joerg Roedel)
The driver now supports full 64-bit device address spaces, so this check is no longer required. Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
2009-09-03  x86/amd-iommu: Remove last usages of IOMMU_PTE_L0_INDEX  (Joerg Roedel)
This change allows these old macros to be removed later. Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
2009-09-03  x86/amd-iommu: Change alloc_pte to support 64 bit address space  (Joerg Roedel)
This patch changes the alloc_pte function to be able to map pages into the whole 64-bit address space supported by AMD IOMMU hardware, up from the old limit of 2**39 bytes. Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
2009-09-03  x86/amd-iommu: Introduce increase_address_space function  (Joerg Roedel)
This function will be used to increase the address space size of a protection domain. Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
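Growing a domain's address space does not require touching existing mappings: allocate a new top-level table, make its entry 0 point at the old root, and bump the level count; everything already mapped keeps translating as before. A toy single-threaded sketch of that operation (simplified structures, no locking or PTE encoding):

    #include <stdio.h>
    #include <stdlib.h>

    #define ENTRIES_PER_TABLE 512

    struct pt_table {
        struct pt_table *entries[ENTRIES_PER_TABLE]; /* toy "PTEs" */
    };

    struct domain {
        struct pt_table *root;
        unsigned int levels;
    };

    /* Add one paging level on top of the existing tree: the old root
     * becomes entry 0 of the new root, so every existing mapping still
     * resolves to the same bus address. */
    static int increase_address_space(struct domain *dom)
    {
        struct pt_table *new_root = calloc(1, sizeof(*new_root));

        if (!new_root)
            return -1;
        new_root->entries[0] = dom->root;
        dom->root = new_root;
        dom->levels++;
        return 0;
    }

    int main(void)
    {
        struct domain dom = { .root = calloc(1, sizeof(struct pt_table)),
                              .levels = 1 };

        increase_address_space(&dom);
        printf("levels now: %u\n", dom.levels);
        return 0;
    }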
2009-09-03  x86/amd-iommu: Flush domains if address space size was increased  (Joerg Roedel)
This patch introduces the update_domain function which propagates the larger address space of a protection domain to the device table and flushes all relevant DTEs and the domain TLB. Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
2009-09-03  x86/amd-iommu: Introduce set_dte_entry function  (Joerg Roedel)
This patch factors out some logic of attach_device into a separate function. This new function will be used to update device table entries when necessary. Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
2009-09-03  x86/amd-iommu: Add a generic version of amd_iommu_flush_all_devices  (Joerg Roedel)
This patch adds a generic variant of the amd_iommu_flush_all_devices function which flushes only the DTEs for a given protection domain. Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
2009-09-03  x86/amd-iommu: Use fetch_pte in amd_iommu_iova_to_phys  (Joerg Roedel)
Don't reimplement the page table walker in this function. Use the generic one. Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
2009-09-03  x86/amd-iommu: Use fetch_pte in iommu_unmap_page  (Joerg Roedel)
Instead of reimplementing existing logic use fetch_pte to walk the page table in iommu_unmap_page. Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
2009-09-03  x86/amd-iommu: Make fetch_pte aware of dynamic mapping levels  (Joerg Roedel)
This patch changes the fetch_pte function in the AMD IOMMU driver to support dynamic mapping levels. Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
2009-09-03  x86/amd-iommu: Reset command buffer if wait loop fails  (Joerg Roedel)
Instead of panicking on a completion wait loop failure, try to recover from that event by resetting the command buffer. Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
2009-09-03  x86/amd-iommu: Panic if IOMMU command buffer reset fails  (Joerg Roedel)
To prevent the driver from doing recursive command buffer resets, just panic when that recursion happens. Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
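The interesting part is the recursion guard: if the reset path itself hits another failed completion wait, retrying would loop forever, so a flag turns the second failure into a hard stop. A hedged sketch of the pattern with the hardware poll and panic() stubbed out (names are illustrative):

    #include <stdbool.h>
    #include <stdio.h>
    #include <stdlib.h>

    static bool reset_in_progress;

    static void panic(const char *msg)
    {
        fprintf(stderr, "panic: %s\n", msg);
        exit(1);
    }

    /* Stub for "send a completion-wait command and poll for it". */
    static bool completion_wait_ok(void)
    {
        return false;   /* simulate a wedged command buffer */
    }

    static void reset_command_buffer(void)
    {
        /* A failure while a reset is already in flight means the
         * hardware cannot be recovered -- give up instead of recursing. */
        if (reset_in_progress)
            panic("IOMMU command buffer reset failed");
        reset_in_progress = true;

        /* ... re-initialize the command buffer here ... */

        if (!completion_wait_ok())
            reset_command_buffer();   /* second failure -> panic above */

        reset_in_progress = false;
    }

    int main(void)
    {
        reset_command_buffer();
        return 0;
    }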
2009-09-03  x86/amd-iommu: Reset command buffer on ILLEGAL_COMMAND_ERROR  (Joerg Roedel)
On an ILLEGAL_COMMAND_ERROR the IOMMU stops executing further commands. This patch changes the code to handle this case better by resetting the command buffer in the IOMMU. Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
2009-09-03  x86/amd-iommu: Add reset function for command buffers  (Joerg Roedel)
This patch factors parts of the command buffer initialization code into a separate function which can be used to reset the command buffer later. Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
2009-09-03  x86/amd-iommu: Add function to flush all DTEs on one IOMMU  (Joerg Roedel)
This function flushes the DTEs of all devices behind one IOMMU. This is required for command buffer resetting later. Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
2009-09-03  x86/amd-iommu: fix broken check in amd_iommu_flush_all_devices  (Joerg Roedel)
The amd_iommu_pd_table is indexed by protection domain number and not by device id. So this check is broken and must be removed. Cc: stable@kernel.org Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
2009-09-03  x86/amd-iommu: Remove redundant 'IOMMU' string  (Joerg Roedel)
The 'IOMMU: ' prefix is not necessary because the DUMP_printk macro already prints its own prefix. Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
2009-09-03  x86/amd-iommu: replace "AMD IOMMU" by "AMD-Vi"  (Joerg Roedel)
This patch replaces the "AMD IOMMU" printk strings with the official name for the hardware: "AMD-Vi". Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
2009-09-03  x86/amd-iommu: Remove some merge helper code  (Joerg Roedel)
This patch removes some left-overs which were put into the code to simplify merging with code that also depends on changes in other trees. Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
2009-09-03  x86/amd-iommu: Introduce function for iommu-local domain flush  (Joerg Roedel)
This patch introduces a function to flush the TLBs of all domains on one given IOMMU. This is required later to reset the command buffer on one IOMMU. Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
2009-09-03  x86/amd-iommu: Dump illegal command on ILLEGAL_COMMAND_ERROR  (Joerg Roedel)
This patch adds code to dump the command which caused an ILLEGAL_COMMAND_ERROR raised by the IOMMU hardware. Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
2009-09-03  x86/amd-iommu: Dump fault entry on DTE error  (Joerg Roedel)
This patch adds code to dump the content of the device table entry which caused an ILLEGAL_DEV_TABLE_ENTRY error from the IOMMU hardware. Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
2009-09-03  sparc64: Kill spurious NMI watchdog triggers by increasing limit to 30 seconds.  (David S. Miller)
This is a compromise and a temporary workaround for bootup NMI watchdog triggers some people see with qla2xxx devices present.

This happens when, for example: CPU 0 is in the driver init and looping submitting mailbox commands to load the firmware, then waiting for completion. CPU 1 is receiving the device interrupts. CPU 1 is where the NMI watchdog triggers.

CPU 0 is submitting mailbox commands fast enough that by the time CPU 1 returns from the device interrupt handler, a new one is pending. This sequence runs for more than 5 seconds.

The problematic case is CPU 1's timer interrupt running when the barrage of device interrupts begin. Then we have:

  timer interrupt
  return for softirq checking
  pending, thus enable interrupts

  qla2xxx interrupt
  return
  qla2xxx interrupt
  return
  ... 5+ seconds pass
  final qla2xxx interrupt for fw load
  return

  run timer softirq
  return

At some point in the multi-second qla2xxx interrupt storm we trigger the NMI watchdog on CPU 1 from the NMI interrupt handler.

The timer softirq, once we get back to running it, is smart enough to run the timer work enough times to make up for the missed timer interrupts.

However, the NMI watchdogs (both x86 and sparc) use the timer interrupt count to notice the cpu is wedged. But in the above scenario we'll receive only one such timer interrupt even if we last all the way back to running the timer softirq.

The default watchdog trigger point is only 5 seconds, which is pretty low (the softwatchdog triggers at 60 seconds). So increase it to 30 seconds for now.

Signed-off-by: David S. Miller <davem@davemloft.net>
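Both NMI watchdogs work by snapshotting a per-CPU timer-interrupt counter from the NMI handler: if the counter stops advancing for too many consecutive NMI ticks, the CPU is considered wedged. A simplified sketch of that check with the new 30-second trigger point (illustrative, not the arch code; the NMI rate is assumed to be one per second here):

    #include <stdbool.h>
    #include <stdio.h>

    #define NMI_HZ                1    /* assumed NMIs per second */
    #define WATCHDOG_TIMEOUT_SECS 30   /* raised from 5 to 30 seconds */

    struct cpu_watchdog_state {
        unsigned long last_timer_count; /* last seen timer-irq count */
        unsigned int alert_ticks;       /* NMIs with no timer progress */
    };

    /* Called from the (simulated) NMI handler with the CPU's current
     * timer-interrupt count; returns true when the watchdog fires. */
    static bool nmi_watchdog_tick(struct cpu_watchdog_state *st,
                                  unsigned long timer_count)
    {
        if (timer_count != st->last_timer_count) {
            /* Timer interrupts are being serviced: all is well. */
            st->last_timer_count = timer_count;
            st->alert_ticks = 0;
            return false;
        }
        return ++st->alert_ticks > WATCHDOG_TIMEOUT_SECS * NMI_HZ;
    }

    int main(void)
    {
        struct cpu_watchdog_state st = { 0 };
        unsigned long stuck_timer_count = 0;  /* interrupt storm: no ticks */

        for (int second = 1; second <= 40; second++) {
            if (nmi_watchdog_tick(&st, stuck_timer_count)) {
                printf("watchdog fired after %d seconds\n", second);
                return 0;
            }
        }
        printf("watchdog did not fire\n");
        return 0;
    }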