path: root/arch
Age         Commit message                                              Author
2009-09-17  Merge branch 'x86/pat' into x86/urgent  (H. Peter Anvin)
Merge reason:

  Suresh Siddha (1):
    x86, pat: don't use rb-tree based lookup in reserve_memtype()

  ... requires previous x86/pat commits already pushed to Linus.

Signed-off-by: H. Peter Anvin <hpa@zytor.com>
2009-09-17  x86, pat: don't use rb-tree based lookup in reserve_memtype()  (Suresh Siddha)
The recent enhancement of the rb-tree based lookup exposed a bug in the lookup mechanism of reserve_memtype(), which ensures that there are no conflicting memtype requests for the memory range.

memtype_rb_search() returns an entry whose start address is <= the new start address, and from there we traverse the linear linked list to check whether there are any conflicts with the existing mappings. As the rbtree is keyed on the start address of the memory range, it is quite possible that several overlapping mappings have a start address much less than the new requested start but an end that is >= the new requested end. This results in conflicting memtype mappings.

The same bug exists in the old code, which traverses the linear linked list from cached_entry, but the new rb-tree code exposes it fairly easily.

For now, don't use memtype_rb_search() and always start the search from the head of the linear linked list in reserve_memtype(). On most systems the linear linked list only grows to a few tens of entries (as we track the memory type of RAM pages using struct page), so we should be OK for now.

We still retain the rbtree and use it to speed up free_memtype(), which doesn't have the same bug (as we know exactly what we are searching for in free_memtype). Also use list_for_each_entry_from() in free_memtype() so that we start the search from the rb-tree lookup result.

Reported-by: Markus Trippelsdorf <markus@trippelsdorf.de>
Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Cc: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
LKML-Reference: <1253136483.4119.12.camel@sbs-t61.sc.intel.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
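A minimal sketch of the conservative check described above - always walk the whole sorted list and flag any range that intersects the new request with a different memtype. Types and names here are illustrative, not the kernel's actual structures:

    #include <stddef.h>

    struct memtype_entry {
            unsigned long start, end;
            int type;
            struct memtype_entry *next;
    };

    /* Scan from the head; report the first overlapping entry whose
     * memtype differs from the requested one. */
    static struct memtype_entry *find_conflict(struct memtype_entry *head,
                                               unsigned long start,
                                               unsigned long end, int type)
    {
            struct memtype_entry *e;

            for (e = head; e; e = e->next) {
                    if (e->start < end && e->end > start && e->type != type)
                            return e;   /* conflicting, overlapping range */
            }
            return NULL;
    }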
2009-09-10  x86: Increase MIN_GAP to include randomized stack  (Michal Hocko)
Currently we do not include the randomized stack size when calculating mmap_base in arch_pick_mmap_layout() for the topdown case. This can cause mmap_base to start inside the stack's reserved area, because the stack is randomized by up to 1GB on 64-bit (8MB on 32-bit) while the minimum gap is only 128MB. If the stack really grows down to mmap_base, the stack can silently overwrite mmap regions.

Include the maximum stack randomization size in MIN_GAP, which is used as the lower bound for the gap in mmap.

Signed-off-by: Michal Hocko <mhocko@suse.cz>
LKML-Reference: <1252400515-6866-1-git-send-email-mhocko@suse.cz>
Acked-by: Jiri Kosina <jkosina@suse.cz>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Cc: Stable Team <stable@kernel.org>
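A minimal sketch of the idea (illustrative constants and names, not the kernel's actual code): the lower bound on the stack/mmap gap must cover the worst-case stack randomization, otherwise mmap_base can land inside the area a fully randomized stack may still grow into.

    /* Worst-case stack randomization on 64-bit, illustrative value. */
    #define STACK_RND_MAX   (1UL << 30)
    #define MIN_GAP         (128UL * 1024 * 1024 + STACK_RND_MAX)

    static unsigned long pick_mmap_base(unsigned long task_size,
                                        unsigned long stack_rlimit)
    {
            unsigned long gap = stack_rlimit;

            if (gap < MIN_GAP)
                    gap = MIN_GAP;  /* keep mmap_base below the stack's reserved area */
            return task_size - gap;
    }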
2009-09-09  x86, pat: Fix cacheflush address in change_page_attr_set_clr()  (Jack Steiner)
Fix the address passed to cpa_flush_range() when changing page attributes from WB to UC. The address (*addr) is modified by __change_page_attr_set_clr(). The result is that the pages being flushed start at the _end_ of the changed range instead of the beginning.

This should be considered for 2.6.30-stable and 2.6.31-stable.

Signed-off-by: Jack Steiner <steiner@sgi.com>
Acked-by: Suresh Siddha <suresh.b.siddha@intel.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Cc: Stable team <stable@kernel.org>
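A toy model of this bug class (all names made up, not the patched kernel code): a helper advances the caller's address cursor, and a later flush mistakenly starts from the advanced cursor instead of a saved copy of the original start.

    #include <stdio.h>

    /* Advances the caller's cursor past the processed range. */
    static void change_attr_walk(unsigned long *addr, int numpages)
    {
            *addr += (unsigned long)numpages * 4096;
    }

    int main(void)
    {
            unsigned long addr = 0x100000, start = addr;

            change_attr_walk(&addr, 16);
            printf("flushing from %#lx misses the range; flush from %#lx instead\n",
                   addr, start);
            return 0;
    }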
2009-09-05  powerpc: Fix i8259 interrupt driver kernel crash on ML510  (Roderick Colenbrander)
This patch fixes a NULL pointer dereference caused by the removal of the 'ack()' hook for level interrupts in the Xilinx interrupt driver: a recent change to the Xilinx interrupt controller removed the ack hook for level IRQs.

Signed-off-by: Roderick Colenbrander <thunderbird2k@gmail.com>
Signed-off-by: Grant Likely <grant.likely@secretlab.ca>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2009-09-05  Merge branch 'release' of git://git.kernel.org/pub/scm/linux/kernel/git/aegl/linux-2.6  (Linus Torvalds)
* 'release' of git://git.kernel.org/pub/scm/linux/kernel/git/aegl/linux-2.6:
  [IA64] fix csum_ipv6_magic()
  [IA64] Fix warning in dma-mapping.c
2009-09-05  Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/sparc-2.6  (Linus Torvalds)
* git://git.kernel.org/pub/scm/linux/kernel/git/davem/sparc-2.6:
  sparc64: Fix bootup with mcount in some configs.
  sparc64: Kill spurious NMI watchdog triggers by increasing limit to 30 seconds.
2009-09-05  Merge branch 'perfcounters-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip  (Linus Torvalds)
* 'perfcounters-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
  perf_counter/powerpc: Fix cache event codes for POWER7
  perf_counter: Fix /0 bug in swcounters
  perf_counters: Increase paranoia level
2009-09-04  sparc64: Fix bootup with mcount in some configs.  (David S. Miller)
Functions invoked early when booting up a cpu can't use tracing, because mcount requires a valid 'current_thread_info()' and TLB mappings to be set up.

The code path of sun4v_register_mondo_queues --> register_one_mondo is one such case. sun4v_register_mondo_queues already has the necessary 'notrace' annotation, but register_one_mondo does not.

Normally register_one_mondo is inlined so the bug doesn't trigger, but with some config/compiler combinations it won't be, so we must properly mark it notrace.

While we're here, add 'notrace' annotations to prom_printf and prom_halt so that early error handling won't have the same problem.

Reported-by: Alexander Beregalov <a.beregalov@gmail.com>
Reported-by: Leif Sawyer <lsawyer@gci.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2009-09-03  sparc64: Kill spurious NMI watchdog triggers by increasing limit to 30 seconds.  (David S. Miller)
This is a compromise and a temporary workaround for bootup NMI watchdog triggers some people see with qla2xxx devices present.

This happens when, for example:

CPU 0 is in the driver init, looping submitting mailbox commands to load the firmware, then waiting for completion.

CPU 1 is receiving the device interrupts. CPU 1 is where the NMI watchdog triggers.

CPU 0 is submitting mailbox commands fast enough that by the time CPU 1 returns from the device interrupt handler, a new one is pending. This sequence runs for more than 5 seconds.

The problematic case is CPU 1's timer interrupt running when the barrage of device interrupts begins. Then we have:

  timer interrupt
  return for softirq checking
  pending, thus enable interrupts

  qla2xxx interrupt
  return
  qla2xxx interrupt
  return
  ... 5+ seconds pass
  final qla2xxx interrupt for fw load
  return

  run timer softirq
  return

At some point in the multi-second qla2xxx interrupt storm we trigger the NMI watchdog on CPU 1 from the NMI interrupt handler.

The timer softirq, once we get back to running it, is smart enough to run the timer work enough times to make up for the missed timer interrupts.

However, the NMI watchdogs (both x86 and sparc) use the timer interrupt count to notice that the cpu is wedged. In the above scenario we will have received only one such timer interrupt, even though we eventually get all the way back to running the timer softirq.

The default watchdog trigger point is only 5 seconds, which is pretty low (the softwatchdog triggers at 60 seconds). So increase it to 30 seconds for now.

Signed-off-by: David S. Miller <davem@davemloft.net>
2009-09-03  perf_counter/powerpc: Fix cache event codes for POWER7  (Paul Mackerras)
I had the codes for L1 D-cache load accesses and misses swapped around, and the wrong codes for LL-cache accesses and misses. This corrects them.

Reported-by: Corey Ashford <cjashfor@linux.vnet.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: <stable@kernel.org>
LKML-Reference: <19103.8514.709300.585484@cargo.ozlabs.ibm.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-09-02  [IA64] fix csum_ipv6_magic()  (Jiri Bohac)
The 32-bit parameters (len and csum) of csum_ipv6_magic() are passed in the 64-bit registers in2 and in4. The high-order 32 bits of the registers were never cleared, and garbage was sometimes calculated into the checksum.

Fix this by clearing the high-order 32 bits of these registers.

Signed-off-by: Jiri Bohac <jbohac@suse.cz>
Signed-off-by: Tony Luck <tony.luck@intel.com>
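The failure mode, illustrated in plain C rather than the ia64 assembly: a 32-bit value handed over in a 64-bit slot with stale upper bits must be explicitly zero-extended before 64-bit arithmetic uses it.

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
            uint64_t reg   = 0xdeadbeef00000428ULL; /* upper 32 bits are garbage */
            uint64_t wrong = reg;                   /* garbage folds into the checksum */
            uint64_t right = (uint32_t)reg;         /* zero-extend: only the real 32-bit value */

            printf("wrong=%#llx right=%#llx\n",
                   (unsigned long long)wrong, (unsigned long long)right);
            return 0;
    }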
2009-09-02  [IA64] Fix warning in dma-mapping.c  (Luck, Tony)
  arch/ia64/kernel/dma-mapping.c:14: warning: control reaches end of non-void function
  arch/ia64/kernel/dma-mapping.c:14: warning: no return statement in function returning non-void

This warning was introduced by commit 390bd132b2831a2ad0268e84bffbfc0680debfe5 ("Add dma_debug_init() for ia64").

Signed-off-by: Tony Luck <tony.luck@intel.com>
2009-08-29  x86: Fix earlyprintk=dbgp for machines without NX  (Jan Beulich)
Since parse_early_param() may (e.g. for earlyprintk=dbgp) involve calls to page table manipulation functions (here set_fixmap_nocache()), NX hardware support must be determined before calling that function (so that __supported_pte_mask gets properly set up).

But the call after parse_early_param() cannot go away either, as that honors any command-line-specified disabling of the NX functionality. (This will then just result in whatever mappings got established during parse_early_param() having the NX bit set despite it being disabled on the command line, but I think that's tolerable.)

Signed-off-by: Jan Beulich <jbeulich@novell.com>
Cc: Yinghai Lu <yhlu.kernel@gmail.com>
LKML-Reference: <4A97F3BD02000078000121B9@vpn.id2.novell.com>
[ merged to x86/pat to resolve a conflict. ]
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-08-28  parisc: fix warning in traps.c  (Grant Grundler)
On Tue, Aug 18, 2009 at 01:45:17PM -0400, John David Anglin wrote:
>   CC      arch/parisc/kernel/traps.o
> arch/parisc/kernel/traps.c: In function 'handle_interruption':
> arch/parisc/kernel/traps.c:535:18: warning: operation on 'regs->iasq[0]' may be undefined

Yes - line 535 should use both [0] and [1].

Reported-by: John David Anglin <dave@hiauly1.hia.nrc.ca>
Signed-off-by: Grant Grundler <grundler@parisc-linux.org>
Signed-off-by: Kyle McMartin <kyle@mcmartin.ca>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
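gcc's diagnostic here is the unsequenced-modification warning. A tiny standalone example that triggers the same class of warning (unrelated to the actual parisc code, which simply needed to reference both iasq[0] and iasq[1]):

    int a[2] = {1, 2};

    void demo(void)
    {
            a[0] = a[0]++;  /* undefined: a[0] modified twice without a sequence point */
            a[1] = a[0];    /* well-defined: read one slot, write the other */
    }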
2009-08-28  Merge branch 'x86-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip  (Linus Torvalds)
* 'x86-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
  x86: Fix vSMP boot crash
  x86, xen: Initialize cx to suppress warning
  x86, xen: Suppress WP test on Xen
2009-08-26  Merge branch 'merge' of git://git.kernel.org/pub/scm/linux/kernel/git/benh/powerpc  (Linus Torvalds)
* 'merge' of git://git.kernel.org/pub/scm/linux/kernel/git/benh/powerpc:
  powerpc/ps3: Update ps3_defconfig
  powerpc/ps3: Add missing check for PS3 to rtc-ps3 platform device registration
2009-08-27  powerpc/ps3: Update ps3_defconfig  (Geoff Levand)
Update ps3_defconfig:

 o Refresh for 2.6.31.
 o Remove MTD support.
 o Add more HID drivers.

Signed-off-by: Geoff Levand <geoffrey.levand@am.sony.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2009-08-27  powerpc/ps3: Add missing check for PS3 to rtc-ps3 platform device registration  (Geert Uytterhoeven)
On non-PS3, we get:

| kernel BUG at drivers/rtc/rtc-ps3.c:36!

because the rtc-ps3 platform device is registered unconditionally in a kernel with builtin support for PS3.

Reported-by: Sachin Sant <sachinp@in.ibm.com>
Signed-off-by: Geert Uytterhoeven <Geert.Uytterhoeven@sonycom.com>
Acked-by: Geoff Levand <geoffrey.levand@am.sony.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2009-08-26  Merge branch 'x86/urgent' into x86/pat  (H. Peter Anvin)
Reason: change to is_new_memtype_allowed() in x86/urgent

Resolved semantic conflicts in:

  arch/x86/mm/pat.c
  arch/x86/mm/ioremap.c

Signed-off-by: H. Peter Anvin <hpa@zytor.com>
2009-08-26  x86, pat: Sanity check remap_pfn_range for RAM region  (Venkatesh Pallipadi)
Add a sanity check for remap_pfn_range() of RAM regions using lookup_memtype(). Previously, we did not have any way to get the type of RAM memory regions, as they were tracked using a single bit in struct page (WB vs non-WB). Now we can get the actual type from struct page (WB, WC, UC_MINUS) and make sure the requester gets that type.

Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
2009-08-26  x86, pat: Lookup the protection from memtype list on vm_insert_pfn()  (Venkatesh Pallipadi)
Look up the reserved memtype during vm_insert_pfn() and use that memtype for the new mapping. This takes care of handling the vm_insert_pfn() interface in track_pfn_vma*/untrack_pfn_vma.

Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
2009-08-26  x86, pat: Add lookup_memtype to get the current memtype of a paddr  (Venkatesh Pallipadi)
Add a new routine lookup_memtype() to get the current memtype based on the PAT reserves and frees. Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com> Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com> Signed-off-by: H. Peter Anvin <hpa@zytor.com>
2009-08-26  x86, pat: Use page flags to track memtypes of RAM pages  (Venkatesh Pallipadi)
Change reserve_ram_pages_type() and free_ram_pages_type() to use 2 page flags to track the UC_MINUS, WC, WB and default types. The previous RAM tracking only recorded WB vs non-WB, which was incomplete: it did not allow RAM to be tracked fully, and there was no way to get the actual type reserved by looking at the page flags.

We use the memtype_lock spinlock for atomicity when dealing with memtype tracking in struct page.

Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
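A toy sketch of the idea (bit positions, names and the plain unsigned flags word are illustrative, not the kernel's struct page): with two flag bits per page, four RAM memtypes can be encoded instead of the old single "WB vs non-WB" bit.

    #include <stdio.h>

    enum memtype { MT_DEFAULT, MT_WB, MT_WC, MT_UC_MINUS };

    #define MT_BITS 0x3u    /* two low bits encode four states */

    static void set_page_memtype(unsigned *flags, enum memtype mt)
    {
            *flags = (*flags & ~MT_BITS) | (unsigned)mt;
    }

    static enum memtype get_page_memtype(unsigned flags)
    {
            return (enum memtype)(flags & MT_BITS);
    }

    int main(void)
    {
            unsigned flags = 0;

            set_page_memtype(&flags, MT_WC);
            printf("memtype = %d\n", get_page_memtype(flags));
            return 0;
    }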
2009-08-26  x86, pat: Generalize the use of page flag PG_uncached  (Venkatesh Pallipadi)
Until now, only ia64 was using PG_uncached. We now intend to use this bit in x86 as well, to keep track of the memory type of addresses that have a struct page backing them. So, generalize the use of that bit across ia64 and x86.

Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
2009-08-26  x86, pat: Add rbtree to do quick lookup in memtype tracking  (Venkatesh Pallipadi)
PAT memtype tracking uses a linear linked list to keep track of IO (non-RAM) regions and their memtypes. The code used a last_accessed pointer as a cache to speed up the lookup.

As per discussions with H. Peter Anvin a while back, having an rbtree here avoids bad performance in pathological cases where we may end up with a huge linked list. This may not add any noticeable speedup in the normal case, as the number of entries in the PAT memtype list tends to be in the ~20-30 range.

The patch removes the "cached_entry" logic, as the rbtree gives us a more generic way of speeding up the lookup.

With this patch, we use the rbtree to do the quick lookup. We still use the linked list, as the tracked memtype ranges can be of different sizes and can overlap in different ways. We also keep track of usage counts with the linked list.

Example: Multiple ioremaps with different sizes
  uncached-minus @ 0xfffff00000-0xfffff04000
  uncached-minus @ 0xfffff02000-0xfffff03000

And one userlevel mmap, where the thread forks a new process
  uncached-minus @ 0xbf453000-0xbf454000
  uncached-minus @ 0xbf453000-0xbf454000

Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
2009-08-26  x86, pat: Add PAT reserve free to io_mapping* APIs  (Venkatesh Pallipadi)
io_mapping_* interfaces were added, mainly for graphics drivers. Make this interface go through the PAT reserve/free, instead of hardcoding WC mapping. This makes sure that there are no aliases due to unconditional WC setting. Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com> Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com> Signed-off-by: H. Peter Anvin <hpa@zytor.com>
2009-08-26  x86, pat: New i/f for driver to request memtype for IO regions  (Venkatesh Pallipadi)
Add new routines to request a memtype for IO regions. This will currently be a backend for the io_mapping_* routines, but it can also be made available to drivers directly in the future, in case it is needed.

The reserve interface reserves the memory, makes sure we have a compatible memory type available, and keeps the identity map in sync when needed.

Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
2009-08-26  x86, pat: ioremap to follow same PAT restrictions as other PAT users  (Venkatesh Pallipadi)
ioremap has a hard-coded check of the new type against the requested type. That check differs from other PAT users (like /dev/mem mmap and remap_pfn_range) in only one condition: when the requested type is UC_MINUS and the new type is WC. Under that condition ioremap fails, but the other PAT interfaces succeed with a WC mapping.

Change ioremap to be in sync with the other PAT APIs and use the same macro as the others. Also change the error print to KERN_ERR instead of pr_debug.

Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
2009-08-26  x86, pat: Keep identity maps consistent with mmaps even when pat_disabled  (Venkatesh Pallipadi)
Make reserve_memtype() internally take care of the PAT-disabled case and fall back to the default return values. Remove the specific pat_disabled checks in the track_* routines. Change kernel_map_sync_memtype() to sync the identity map even when PAT is disabled.

This change ensures that, even in the PAT-disabled case, we take care of keeping the identity map in sync. Before this patch, with PAT disabled, ioremap() kept the identity maps in sync while other APIs like pci and /dev/mem mmap did not, which is not very consistent behavior.

Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
2009-08-26  m68k,m68knommu: Wire up rt_tgsigqueueinfo and perf_counter_open  (Geert Uytterhoeven)
Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org> Acked-by: Greg Ungerer <gerg@uclinux.org>
2009-08-26  m68k: Fix redefinition of pgprot_noncached  (Alexey Dobriyan)
arch/m68k/include/asm/pgtable_mm.h:148:1: warning: "pgprot_noncached" redefined
In file included from arch/m68k/include/asm/pgtable_mm.h:138,
                 from arch/m68k/include/asm/pgtable.h:4,
                 from include/linux/mm.h:40,
                 from include/linux/pagemap.h:7,
                 from include/linux/blkdev.h:12,
                 from arch/m68k/emu/nfblock.c:17:
include/asm-generic/pgtable.h:133:1: warning: this is the location of the previous definition

pgprot_noncached() should be defined _before_ including asm-generic/pgtable.h.

Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
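A sketch of why the ordering matters (file layout and the arch-side macro body are simplified and illustrative): the generic header only supplies a fallback when the arch has not already defined the macro, so the arch definition has to be seen first.

    /* arch header, must come first: */
    #define pgprot_noncached(prot)  arch_specific_noncached(prot)

    /* asm-generic/pgtable.h style fallback, included afterwards;
     * skipped because the arch definition is already visible: */
    #ifndef pgprot_noncached
    #define pgprot_noncached(prot)  (prot)  /* no-op fallback */
    #endif

If the includes are ordered the other way around, the fallback takes effect first and the later arch definition produces the "redefined" warning seen above.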
2009-08-26  arch/m68k/include/asm/motorola_pgalloc.h: fix kunmap arg  (Andrew Morton)
arch/m68k/include/asm/motorola_pgalloc.h: In function 'pte_alloc_one':
arch/m68k/include/asm/motorola_pgalloc.h:44: warning: passing argument 1 of 'kunmap' from incompatible pointer type

Also, remove the unneeded test for kmap() failure.

Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
2009-08-26  m68k: cnt reaches -1, not 0  (Roel Kluin)
With the postfix decrement, cnt reaches -1 rather than 0.

Signed-off-by: Roel Kluin <roel.kluin@gmail.com>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
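A minimal standalone example of the pitfall this commit describes (not the patched driver code itself): with a postfix decrement in the loop condition, the counter ends at -1 after the loop, so a later "if (cnt == 0)" timeout check never fires.

    #include <stdio.h>

    int main(void)
    {
            int cnt = 3;

            while (cnt--)
                    ;       /* poll for some condition */

            printf("cnt after loop: %d\n", cnt);    /* prints -1, not 0 */
            return 0;
    }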
2009-08-26  x86: Fix vSMP boot crash  (Yinghai Lu)
2.6.31-rc7 does not boot on vSMP systems:

  [    8.501108] CPU31: Thermal monitoring enabled (TM1)
  [    8.501127] CPU 31 MCA banks SHD:2 SHD:3 SHD:5 SHD:6 SHD:8
  [    8.650254] CPU31: Intel(R) Xeon(R) CPU E5540 @ 2.53GHz stepping 04
  [    8.710324] Brought up 32 CPUs
  [    8.713916] Total of 32 processors activated (162314.96 BogoMIPS).
  [    8.721489] ERROR: parent span is not a superset of domain->span
  [    8.727686] ERROR: domain->groups does not contain CPU0
  [    8.733091] ERROR: groups don't span domain->span
  [    8.737975] ERROR: domain->cpu_power not set
  [    8.742416]

Ravikiran Thirumalai bisected it to:

| commit 2759c3287de27266e06f1f4e82cbd2d65f6a044c
| x86: don't call read_apic_id if !cpu_has_apic

The problem is that on vSMP systems the CPUID-derived initial APIC IDs overlap - so we need to fall back on hard_smp_processor_id(), which reads the local APIC. Both come from the hardware (influenced by firmware though), so it's a tough call which one to trust.

Doing the quirk expresses the vSMP property properly and also does not affect other systems, so we go for this solution instead of a revert.

Reported-and-Tested-by: Ravikiran Thirumalai <kiran@scalex86.org>
Signed-off-by: Yinghai Lu <yinghai@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Cyrill Gorcunov <gorcunov@gmail.com>
Cc: Shai Fultheim <shai@scalex86.org>
Cc: Suresh Siddha <suresh.b.siddha@intel.com>
LKML-Reference: <4A944D3C.5030100@kernel.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-08-25  Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/sparc-2.6  (Linus Torvalds)
* git://git.kernel.org/pub/scm/linux/kernel/git/davem/sparc-2.6:
  sparc64: Validate linear D-TLB misses.
  sparc64: Update defconfig.
  sparc32: Update defconfig.
  sparc32: Kill trap table freeing code.
  sparc: sys32.S incorrect compat-layer splice() system call
  sparc: Use page_fault_out_of_memory() for VM_FAULT_OOM.
  sparc64: Sign extend length arg to truncate syscalls when compat.
  sparc: Fix cleanup crash in bbc_envctrl_cleanup()
2009-08-25  x86, xen: Initialize cx to suppress warning  (H. Peter Anvin)
Initialize cx before calling xen_cpuid(), in order to suppress the "may be used uninitialized in this function" warning. Signed-off-by: H. Peter Anvin <hpa@zytor.com> Cc: Jeremy Fitzhardinge <jeremy@goop.org>
2009-08-25  x86, xen: Suppress WP test on Xen  (Jeremy Fitzhardinge)
Xen always runs on CPUs which properly support WP enforcement in privileged mode, so there's no need to test for it. This also works around a crash reported by Arnd Hannemann, though I think it's just a band-aid for that case.

Reported-by: Arnd Hannemann <hannemann@nets.rwth-aachen.de>
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Acked-by: Pekka Enberg <penberg@cs.helsinki.fi>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
2009-08-25  sparc64: Validate linear D-TLB misses.  (David S. Miller)
When page alloc debugging is not enabled, we essentially accept any virtual address for linear kernel TLB misses. But with kgdb, kernel address probing, and other facilities we can try to access arbitrary crap.

So, make sure the address we miss on will translate to physical memory that actually exists.

In order to make this work we have to embed the valid address bitmap into the kernel image. And in order to make that less expensive we make an adjustment, in that the max physical memory address is decreased to "1 << 41", even on the chips that support a 42-bit physical address space. We can do this because bit 41 indicates "I/O space" and thus covers non-memory ranges.

The result of this is that:

  1) kpte_linear_bitmap shrinks from 2K to 1K in size

  2) we need 64K more for the valid address bitmap

We can't let the valid address bitmap be dynamically allocated once we start using it to validate TLB misses, otherwise we have crazy issues to deal with wrt. recursive TLB misses and such. If we're in a TLB miss it could be the deepest trap level that's legal inside of the cpu. So if we TLB miss referencing the bitmap, the cpu will be out of trap levels and enter RED state.

To guard against out-of-range accesses to the bitmap, we have to check to make sure no bits in the physical address above bit 40 are set. We could export and use last_valid_pfn for this check, but that's just an unnecessary extra memory reference.

On the plus side of all this, since we load all of these translations into the special 4MB mapping TSB, and we check the TSB first for TLB misses, there should be absolutely no real cost for these new checks in the TLB miss path.

Reported-by: heyongli@gmail.com
Signed-off-by: David S. Miller <davem@davemloft.net>
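A standalone sketch of the validity check described above (granularity, constants and names are illustrative, not the sparc64 implementation): reject any physical address with bits set at or above bit 41, then consult a statically sized bitmap, so the bitmap index itself can never go out of range.

    #include <stdint.h>
    #include <stdbool.h>

    #define MAX_PHYS_BITS   41      /* addresses at/above 1<<41 are I/O, not RAM */
    #define CHUNK_SHIFT     22      /* one bit per 4MB chunk */

    /* 2^(41-22) bits / 64 = 8192 words = 64K of static storage. */
    static uint64_t valid_addr_bitmap[1UL << (MAX_PHYS_BITS - CHUNK_SHIFT - 6)];

    static bool paddr_is_valid_ram(uint64_t paddr)
    {
            uint64_t bit;

            if (paddr >> MAX_PHYS_BITS)
                    return false;   /* guards the bitmap index itself */
            bit = paddr >> CHUNK_SHIFT;
            return valid_addr_bitmap[bit >> 6] & (1ULL << (bit & 63));
    }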
2009-08-25  Merge branch 'timers-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip  (Linus Torvalds)
* 'timers-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
  clockevent: Prevent dead lock on clockevents_lock
  timers: Drop write permission on /proc/timer_list
2009-08-25  Merge branch 'x86-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip  (Linus Torvalds)
* 'x86-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
  x86: Fix build with older binutils and consolidate linker script
  x86: Fix an incorrect argument of reserve_bootmem()
  x86: add vmlinux.lds to targets in arch/x86/boot/compressed/Makefile
  xen: rearrange things to fix stackprotector
  x86: make sure load_percpu_segment has no stackprotector
  i386: Fix section mismatches for init code with !HOTPLUG_CPU
  x86, pat: Allow ISA memory range uncacheable mapping requests
2009-08-25  x86: Fix build with older binutils and consolidate linker script  (Jan Beulich)
binutils prior to 2.17 can't deal with the currently possible situation of a new segment following the per-CPU segment while that new segment is empty - objcopy misplaces the .bss (and perhaps also the .brk) sections outside of any segment.

However, the current ordering of sections really just appears to be the effect of cumulative unrelated changes; re-ordering things makes it easy to guarantee that the segment following the per-CPU one is non-empty, and at once eliminates the need for the bogus data.init2 segment.

Once touching this code, also use the various data section helper macros from include/asm-generic/vmlinux.lds.h.

-v2: fix !SMP builds.

Signed-off-by: Jan Beulich <jbeulich@novell.com>
Cc: <sam@ravnborg.org>
LKML-Reference: <4A94085D02000078000119A5@vpn.id2.novell.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-08-24  Merge branch 'fixes' of git://git.marvell.com/orion  (Linus Torvalds)
* 'fixes' of git://git.marvell.com/orion:
  [ARM] Orion NAND: Make asm volatile avoid GCC pushing ldrd out of the loop
  [ARM] Kirkwood: enable eSATA on QNAP TS-219P
  [ARM] Kirkwood: __init requires linux/init.h
2009-08-24  Merge git://git.kernel.org/pub/scm/linux/kernel/git/hskinnemoen/avr32-2.6  (Linus Torvalds)
* git://git.kernel.org/pub/scm/linux/kernel/git/hskinnemoen/avr32-2.6:
  favr32: improve touchscreen response
  avr32/lib: fix unaligned memcpy where len < 4
  avr32/lib: fix unaligned memcpy()
2009-08-24  x86: Fix an incorrect argument of reserve_bootmem()  (Amerigo Wang)
This line looks suspicious, because if this is true, then the 'flags' parameter of the function reserve_bootmem_generic() will be unused when !CONFIG_NUMA. I don't think this is what we want.

Signed-off-by: WANG Cong <amwang@redhat.com>
Cc: Yinghai Lu <yinghai@kernel.org>
Cc: akpm@linux-foundation.org
LKML-Reference: <20090821083709.5098.52505.sendpatchset@localhost.localdomain>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-08-24  [ARM] Kirkwood: enable eSATA on QNAP TS-219P  (John Holland)
Initialize PCI/PCIe on the QNAP TS-119, TS-219 and TS-219P hardware allowing the use of the discrete eSATA controller connected to the PCIe bus in the TS-219P. Signed-off-by: John Holland <john.holland@cellent-fs.de> Tested-by: Thomas Reitmayr <treitmayr@devbase.at> Signed-off-by: Martin Michlmayr <tbm@cyrius.com> Signed-off-by: Nicolas Pitre <nico@marvell.com>
2009-08-24  [ARM] Kirkwood: __init requires linux/init.h  (Martin Michlmayr)
Include linux/init.h for __init to fix this error:

  CC [M]  drivers/net/wireless/wl12xx/boot.o
In file included from arch/arm/mach-kirkwood/include/mach/gpio.h:13,
                 from arch/arm/include/asm/gpio.h:5,
                 from include/linux/gpio.h:7,
                 from drivers/net/wireless/wl12xx/boot.c:24:
arch/arm/plat-orion/include/plat/gpio.h:32: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘orion_gpio_init’
make[6]: *** [drivers/net/wireless/wl12xx/boot.o] Error 1
make[5]: *** [drivers/net/wireless/wl12xx] Error 2

Signed-off-by: Martin Michlmayr <tbm@cyrius.com>
Signed-off-by: Nicolas Pitre <nico@marvell.com>
2009-08-23  [S390] set preferred console based on conmode  (Hendrik Brueckner)
setup_arch() unconditionally sets the preferred console to ttyS. This breaks the use of 3270 devices as the console. Provide a new function to set the default preferred console for s390. The preferred console depends on the conmode parameter that is used to switch between 3270 and 3215 terminal/console mode. Signed-off-by: Hendrik Brueckner <brueckner@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2009-08-21  x86, mtrr: make mtrr_aps_delayed_init static bool  (H. Peter Anvin)
mtrr_aps_delayed_init was declared u32 and made global, but it only ever takes boolean values and is only ever used in arch/x86/kernel/cpu/mtrr/main.c. Declare it "static bool" and remove the external references.

Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Cc: Suresh Siddha <suresh.b.siddha@intel.com>
2009-08-21  x86, pat/mtrr: Rendezvous all the cpus for MTRR/PAT init  (Suresh Siddha)
SDM Vol 3a, in the section titled "MTRR considerations in MP systems", specifies the need to synchronize the logical cpus while initializing/updating MTRRs.

Currently the Linux kernel does this synchronization of all cpus only when a single MTRR register is programmed/updated. During an AP online (during boot/cpu-online/resume), where we initialize all the MTRR/PAT registers, we don't follow this synchronization algorithm.

This can lead to scenarios where, during a dynamic cpu online, the onlining logical cpu is initializing MTRR/PAT with cache disabled (cr0.cd=1) while its HT sibling continues to run (also with cache disabled, because of cr0.cd=1 on the sibling).

Starting from Westmere, VMX transitions with cr0.cd=1 don't work properly (because of some VMX performance optimizations), and the above scenario (with one logical cpu doing VMX activity and another logical cpu coming online) can result in a system crash.

Fix the MTRR initialization by doing a rendezvous of all the cpus. During boot and resume, we delay the MTRR/PAT init for APs until all the logical cpus have come online, and the rendezvous process at the end of the APs' bringup initializes MTRR/PAT for all APs. For a dynamic single cpu online, we synchronize all the logical cpus and do the MTRR/PAT init on the AP that is coming online.

Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
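A generic userspace sketch of the rendezvous property being asked for (threads standing in for cpus; this is not the kernel's stop_machine-based implementation): nobody starts its local init step until every participant has arrived.

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>

    #define NCPUS 4

    static atomic_int arrived;

    static void *cpu_thread(void *arg)
    {
            long cpu = (long)arg;

            atomic_fetch_add(&arrived, 1);
            while (atomic_load(&arrived) < NCPUS)
                    ;       /* spin until all cpus reach the rendezvous point */

            printf("cpu %ld: init MTRR/PAT\n", cpu);    /* local init, now in lockstep */
            return NULL;
    }

    int main(void)
    {
            pthread_t t[NCPUS];
            long i;

            for (i = 0; i < NCPUS; i++)
                    pthread_create(&t[i], NULL, cpu_thread, (void *)i);
            for (i = 0; i < NCPUS; i++)
                    pthread_join(t[i], NULL);
            return 0;
    }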