path: root/drivers/firmware/efi/libstub
Age          Commit message                                                        Author
2015-04-01  efi/libstub: Retrieve FDT size when loaded from UEFI config table  (Ard Biesheuvel)
When allocating memory for the copy of the FDT that the stub modifies and passes to the kernel, the stub uses the current size as an estimate of how much memory to allocate, and increases it page by page if it turns out to be too small.

However, when loading the FDT from a UEFI configuration table, the estimated size is left at its default value of zero, and the allocation loop runs starting from zero all the way up to the allocation size that finally fits the updated FDT.

Instead, retrieve the size of the FDT from the FDT header when loading it from the UEFI config table.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Roy Franz <roy.franz@linaro.org>
Signed-off-by: Matt Fleming <matt.fleming@intel.com>
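[Editorial note: as a rough illustration of the idea above, a minimal C sketch of reading the total size out of an FDT header so the copy buffer can be sized up front; fdt_size_estimate() and be32_read() are hypothetical names, not the stub's actual code.]

    #include <stdint.h>

    /* read a big-endian 32-bit value (FDT header fields are big-endian) */
    static uint32_t be32_read(const uint8_t *p)
    {
            return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16) |
                   ((uint32_t)p[2] << 8)  |  (uint32_t)p[3];
    }

    /*
     * Sketch only: the second header field of a flattened device tree is
     * 'totalsize', so the allocation estimate can start from the real FDT
     * size instead of growing from zero page by page.
     */
    static unsigned long fdt_size_estimate(const void *fdt_blob)
    {
            const uint8_t *hdr = fdt_blob;

            if (be32_read(hdr) != 0xd00dfeed)       /* FDT magic */
                    return 0;                       /* not an FDT: no estimate */

            return be32_read(hdr + 4);              /* totalsize field */
    }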
2015-03-02  Merge tag 'efi-urgent' of git://git.kernel.org/pub/scm/linux/kernel/git/mfleming/efi into x86/urgent  (Ingo Molnar)

Pull EFI fixes from Matt Fleming:

 " - Fix regression in DMI sysfs code for handling "End of Table" entry
     and a type bug that could lead to integer overflow. (Ivan Khoronzhuk)

   - Fix boundary checking in efi_high_alloc() which can lead to memory
     corruption in the EFI boot stubs. (Yinghai Lu)"

Signed-off-by: Ingo Molnar <mingo@kernel.org>
2015-02-24  efi/libstub: Fix boundary checking in efi_high_alloc()  (Yinghai Lu)
While adding support for loading the kernel and initrd above 4G to grub2 in legacy mode, I was referring to efi_high_alloc(). That function allocates a buffer for the kernel and then one for the initrd, and the initrd allocation uses the start of the kernel buffer as its upper limit. During testing I found that the two buffers overlap when the initrd is very big, e.g. 400M.

It turns out the boundary checking in efi_high_alloc() is not right: 'end - size' becomes the new start, so instead of comparing the new start with 'max' we need to make sure 'end' is not larger than 'max'.

[ Basically, with the current efi_high_alloc() code it's possible to
  allocate memory above 'max', because efi_high_alloc() doesn't check
  that the tail of the allocation is below 'max'.

  If you have an EFI memory map with a single entry that looks like so,

      [0xc0000000-0xc0004000]

  and want to allocate 0x3000 bytes below 0xc0003000, the current code
  will allocate [0xc0001000-0xc0004000], not [0xc0000000-0xc0003000]
  like you would expect. - Matt ]

Signed-off-by: Yinghai Lu <yinghai@kernel.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Matt Fleming <matt.fleming@intel.com>
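[Editorial note: a minimal sketch of the corrected boundary check, not the kernel's actual code; pick_high_alloc() and its arguments are hypothetical. The key point is clamping 'end' to 'max' before computing the candidate start, so the tail of the allocation can never cross the limit.]

    #include <stdint.h>

    typedef uint64_t u64;

    /*
     * 'start'/'end' describe a free memory-map region, 'size' the request,
     * 'align' the required alignment (non-zero power of two), 'max' one
     * past the highest address the allocation may reach.
     */
    static u64 pick_high_alloc(u64 start, u64 end, u64 size, u64 align, u64 max)
    {
            u64 candidate;

            if (end > max)
                    end = max;      /* clamp the region to the limit first */

            if (end <= start || end - start < size)
                    return 0;       /* region empty or too small */

            /* place the allocation as high as possible, aligned downwards */
            candidate = (end - size) & ~(align - 1);

            /*
             * The buggy variant only compared 'candidate' against 'max',
             * which lets candidate + size extend beyond the limit; clamping
             * 'end' above (or checking candidate + size <= max) avoids that.
             */
            return (candidate >= start) ? candidate : 0;
    }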
2015-02-21  Merge branch 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Linus Torvalds)

Pull misc x86 fixes from Ingo Molnar:
 "This contains:

  - EFI fixes
  - a boot printout fix
  - ASLR/kASLR fixes
  - intel microcode driver fixes
  - other misc fixes

  Most of the linecount comes from an EFI revert"

* 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/mm/ASLR: Avoid PAGE_SIZE redefinition for UML subarch
  x86/microcode/intel: Handle truncated microcode images more robustly
  x86/microcode/intel: Guard against stack overflow in the loader
  x86, mm/ASLR: Fix stack randomization on 64-bit systems
  x86/mm/init: Fix incorrect page size in init_memory_mapping() printks
  x86/mm/ASLR: Propagate base load address calculation
  Documentation/x86: Fix path in zero-page.txt
  x86/apic: Fix the devicetree build in certain configs
  Revert "efi/libstub: Call get_memory_map() to obtain map and desc sizes"
  x86/efi: Avoid triple faults during EFI mixed mode calls
2015-02-18  Revert "efi/libstub: Call get_memory_map() to obtain map and desc sizes"  (Matt Fleming)
This reverts commit d1a8d66b9177105e898e73716f97eb61842c457a.

Ard reported a boot failure when running UEFI under Qemu and Xen and experimenting with various Tianocore build options:

 "As it turns out, when allocating room for the UEFI memory map using
  UEFI's AllocatePool(), it may result in two new memory map entries
  being created, for instance, when using Tianocore's preallocated
  region feature. For example, the following region

      0x00005ead5000-0x00005ebfffff [Conventional Memory| | | | | |WB|WT|WC|UC]

  may be split like this

      0x00005ead5000-0x00005eae2fff [Conventional Memory| | | | | |WB|WT|WC|UC]
      0x00005eae3000-0x00005eae4fff [Loader Data        | | | | | |WB|WT|WC|UC]
      0x00005eae5000-0x00005ebfffff [Conventional Memory| | | | | |WB|WT|WC|UC]

  if the preallocated Loader Data region was chosen to be right in the
  middle of the original free space.

  After patch d1a8d66b9177 ("efi/libstub: Call get_memory_map() to
  obtain map and desc sizes"), this is not being dealt with correctly
  anymore, as the existing logic to allocate room for a single
  additional entry has become insufficient."

Mark requested to reinstate the old loop we had before commit d1a8d66b9177, which grows the memory map buffer until it's big enough to hold the EFI memory map.

Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Matt Fleming <matt.fleming@intel.com>
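[Editorial note: a rough sketch of the "grow until it fits" pattern being reinstated; fw_get_memory_map(), fw_alloc() and fw_free() are assumed wrapper names, not the stub's symbols. Because allocating the buffer can itself split regions and add entries, the loop retries with extra headroom instead of assuming a single additional descriptor is enough.]

    #include <stddef.h>

    #define EFI_SUCCESS              0
    #define EFI_BUFFER_TOO_SMALL     5       /* UEFI status code */

    typedef unsigned long efi_status_t;

    /* assumed firmware wrappers, for illustration only */
    extern efi_status_t fw_get_memory_map(unsigned long *size, void *map,
                                          unsigned long *key,
                                          unsigned long *desc_size);
    extern void *fw_alloc(unsigned long size);
    extern void fw_free(void *p);            /* assumed to accept NULL */

    static void *get_map_retrying(unsigned long *size, unsigned long *key,
                                  unsigned long *desc_size)
    {
            void *map = NULL;
            efi_status_t status;

            *size = 0;
            status = fw_get_memory_map(size, NULL, key, desc_size);

            while (status == EFI_BUFFER_TOO_SMALL) {
                    fw_free(map);
                    *size += 2 * *desc_size;        /* headroom for new entries */
                    map = fw_alloc(*size);
                    if (!map)
                            return NULL;
                    status = fw_get_memory_map(size, map, key, desc_size);
            }

            if (status != EFI_SUCCESS) {
                    fw_free(map);
                    return NULL;
            }
            return map;
    }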
2015-02-13  x86_64: kasan: add interceptors for memset/memmove/memcpy functions  (Andrey Ryabinin)
Recently, instrumentation of builtin function calls was removed from GCC 5.0. To check the memory accessed by such functions, userspace asan always uses interceptors for them. So now we should do this as well.

This patch declares memset/memmove/memcpy as weak symbols. In mm/kasan/kasan.c we have our own implementation of those functions which checks memory before accessing it.

The default memset/memmove/memcpy now always have aliases with a '__' prefix. For files built without kasan instrumentation (e.g. mm/slub.c), the original mem* functions are replaced (via #define) with the prefixed variants, because we don't want to check memory accesses there.

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Konstantin Serebryany <kcc@google.com>
Cc: Dmitry Chernenkov <dmitryc@google.com>
Signed-off-by: Andrey Konovalov <adech.fo@gmail.com>
Cc: Yuri Gribov <tetra2005@gmail.com>
Cc: Konstantin Khlebnikov <koct9i@gmail.com>
Cc: Sasha Levin <sasha.levin@oracle.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
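[Editorial note: a minimal sketch of the weak-symbol/alias arrangement described above, not the kernel's actual code; check_memory_region() is an assumed checker name. In practice such code is built with -ffreestanding/-fno-builtin so the compiler does not turn the copy loop back into a memcpy call.]

    #include <stddef.h>

    /* unchecked implementation, always reachable under the '__' name */
    void *__memcpy(void *dst, const void *src, size_t n)
    {
            char *d = dst;
            const char *s = src;

            while (n--)
                    *d++ = *s++;
            return dst;
    }

    /* assumed shadow-memory checker provided by the sanitizer runtime */
    extern void check_memory_region(const void *p, size_t n, int write);

    /* checking variant exported under the plain name as a weak symbol */
    __attribute__((weak)) void *memcpy(void *dst, const void *src, size_t n)
    {
            check_memory_region(src, n, 0);  /* validate the read */
            check_memory_region(dst, n, 1);  /* validate the write */
            return __memcpy(dst, src, n);
    }

    /* Files built without instrumentation can opt out of the checks: */
    /* #define memcpy(d, s, n) __memcpy(d, s, n) */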
2015-02-13  kasan: add kernel address sanitizer infrastructure  (Andrey Ryabinin)
Kernel Address Sanitizer (KASan) is a dynamic memory error detector. It provides a fast and comprehensive solution for finding use-after-free and out-of-bounds bugs.

KASAN uses compile-time instrumentation for checking every memory access, therefore GCC > v4.9.2 is required. v4.9.2 almost works, but has issues with putting symbol aliases into the wrong section, which breaks kasan instrumentation of globals.

This patch only adds infrastructure for the kernel address sanitizer. It's not available for use yet. The idea and some code was borrowed from [1].

Basic idea:

The main idea of KASAN is to use shadow memory to record whether each byte of memory is safe to access or not, and use the compiler's instrumentation to check the shadow memory on each memory access.

Address sanitizer uses 1/8 of the memory addressable in the kernel for shadow memory and uses direct mapping with a scale and offset to translate a memory address to its corresponding shadow address.

Here is the function to translate an address to its corresponding shadow address:

	unsigned long kasan_mem_to_shadow(unsigned long addr)
	{
		return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
	}

where KASAN_SHADOW_SCALE_SHIFT = 3.

So for every 8 bytes there is one corresponding byte of shadow memory. The following encoding is used for each shadow byte:

  0 means that all 8 bytes of the corresponding memory region are valid for access;
  k (1 <= k <= 7) means that the first k bytes are valid for access, and the other (8 - k) bytes are not;
  any negative value indicates that the entire 8-byte region is inaccessible. Different negative values are used to distinguish between different kinds of inaccessible memory (redzones, freed memory) (see mm/kasan/kasan.h).

To be able to detect accesses to bad memory we need a special compiler. Such a compiler inserts specific function calls (__asan_load*(addr), __asan_store*(addr)) before each memory access of size 1, 2, 4, 8 or 16. These functions check whether the memory region is valid to access or not by checking the corresponding shadow memory. If the access is not valid, an error is printed.

Historical background of the address sanitizer from Dmitry Vyukov:

 "We've developed the set of tools, AddressSanitizer (Asan), ThreadSanitizer and MemorySanitizer, for user space. We actively use them for testing inside of Google (continuous testing, fuzzing, running prod services). To date the tools have found more than 10'000 scary bugs in Chromium, Google internal codebase and various open-source projects (Firefox, OpenSSL, gcc, clang, ffmpeg, MySQL and lots of others): [2] [3] [4]. The tools are part of both gcc and clang compilers.

  We have not yet done massive testing under the Kernel AddressSanitizer (it's kind of a chicken and egg problem, you need it to be upstream to start applying it extensively). To date it has found about 50 bugs. Bugs that we've found in the upstream kernel are listed in [5]. We've also found ~20 bugs in our internal version of the kernel. Also people from Samsung and Oracle have found some.

  [...]

  As others noted, the main feature of AddressSanitizer is its performance due to inline compiler instrumentation and simple linear shadow memory. User-space Asan has ~2x slowdown on computational programs and ~2x memory consumption increase. Taking into account that the kernel usually consumes only a small fraction of CPU and memory when running real user-space programs, I would expect that kernel Asan will have ~10-30% slowdown and a similar memory consumption increase (when we finish all tuning).

  I agree that Asan can well replace kmemcheck. We have plans to start working on Kernel MemorySanitizer that finds uses of uninitialized memory. Asan+Msan will provide feature-parity with kmemcheck. As others noted, Asan will unlikely replace debug slab and pagealloc that can be enabled at runtime. Asan uses compiler instrumentation, so even if it is disabled, it still incurs visible overheads.

  Asan technology is easily portable to other architectures. Compiler instrumentation is fully portable. The runtime has some arch-dependent parts like shadow mapping and atomic operation interception. They are relatively easy to port."

Comparison with other debugging features:
========================================

KMEMCHECK:

  - KASan can do almost everything that kmemcheck can. KASan uses compile-time instrumentation, which makes it significantly faster than kmemcheck. The only advantage of kmemcheck over KASan is detection of uninitialized memory reads.

    Some brief performance testing showed that kasan could be x500-x600 times faster than kmemcheck:

	$ netperf -l 30
		MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to localhost (127.0.0.1) port 0 AF_INET
		Recv   Send    Send
		Socket Socket  Message  Elapsed
		Size   Size    Size     Time     Throughput
		bytes  bytes   bytes    secs.    10^6bits/sec

	no debug:	87380  16384   16384    30.00    41624.72
	kasan inline:	87380  16384   16384    30.00    12870.54
	kasan outline:	87380  16384   16384    30.00    10586.39
	kmemcheck:	87380  16384   16384    30.03    20.23

  - Also, kmemcheck couldn't work on several CPUs: it always sets the number of CPUs to 1. KASan doesn't have such a limitation.

DEBUG_PAGEALLOC:

  - KASan is slower than DEBUG_PAGEALLOC, but KASan works at sub-page granularity, so it is able to find more bugs.

SLUB_DEBUG (poisoning, redzones):

  - SLUB_DEBUG has lower overhead than KASan.
  - SLUB_DEBUG is in most cases not able to detect bad reads; KASan is able to detect both bad reads and writes.
  - In some cases (e.g. a redzone overwrite) SLUB_DEBUG detects bugs only on allocation/freeing of an object. KASan catches the bug right when it happens, so we always know the exact place of the first bad read/write.

[1] https://code.google.com/p/address-sanitizer/wiki/AddressSanitizerForKernel
[2] https://code.google.com/p/address-sanitizer/wiki/FoundBugs
[3] https://code.google.com/p/thread-sanitizer/wiki/FoundBugs
[4] https://code.google.com/p/memory-sanitizer/wiki/FoundBugs
[5] https://code.google.com/p/address-sanitizer/wiki/AddressSanitizerForKernel#Trophies

Based on work by Andrey Konovalov.

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
Acked-by: Michal Marek <mmarek@suse.cz>
Signed-off-by: Andrey Konovalov <adech.fo@gmail.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Konstantin Serebryany <kcc@google.com>
Cc: Dmitry Chernenkov <dmitryc@google.com>
Cc: Yuri Gribov <tetra2005@gmail.com>
Cc: Konstantin Khlebnikov <koct9i@gmail.com>
Cc: Sasha Levin <sasha.levin@oracle.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
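[Editorial note: a sketch combining the address-to-shadow translation quoted above with the per-byte encoding, to show how an up-to-8-byte access can be validated. KASAN_SHADOW_OFFSET here is an example value, and the helper names are illustrative, not the kernel's actual implementation.]

    #include <stdbool.h>

    #define KASAN_SHADOW_SCALE_SHIFT    3
    #define KASAN_SHADOW_SCALE_SIZE     (1UL << KASAN_SHADOW_SCALE_SHIFT)
    #define KASAN_SHADOW_OFFSET         0xdffffc0000000000UL    /* example only */

    static unsigned long kasan_mem_to_shadow(unsigned long addr)
    {
            return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
    }

    /* validity check for an access of 1..8 bytes starting at 'addr' */
    static bool access_is_valid(unsigned long addr, unsigned long size)
    {
            signed char shadow = *(signed char *)kasan_mem_to_shadow(addr);

            if (shadow == 0)
                    return true;    /* whole 8-byte granule accessible */
            if (shadow < 0)
                    return false;   /* redzone, freed memory, etc. */

            /* only the first 'shadow' bytes of the granule are accessible */
            return (addr & (KASAN_SHADOW_SCALE_SIZE - 1)) + size
                   <= (unsigned long)shadow;
    }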
2015-02-11  Merge tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux  (Linus Torvalds)

Pull arm64 updates from Catalin Marinas:
 "arm64 updates for 3.20:

  - reimplementation of the virtual remapping of UEFI Runtime Services
    in a way that is stable across kexec
  - emulation of the "setend" instruction for 32-bit tasks (user
    endianness switching trapped in the kernel, SCTLR_EL1.E0E bit set
    accordingly)
  - compat_sys_call_table implemented in C (from asm) and made it a
    constant array together with sys_call_table
  - export CPU cache information via /sys (like other architectures)
  - DMA API implementation clean-up in preparation for IOMMU support
  - macros clean-up for KVM
  - dropped some unnecessary cache+tlb maintenance
  - CONFIG_ARM64_CPU_SUSPEND clean-up
  - defconfig update (CPU_IDLE)

  The EFI changes going via the arm64 tree have been acked by Matt
  Fleming. There is also a patch adding sys_*stat64 prototypes to
  include/linux/syscalls.h, acked by Andrew Morton"

* tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (47 commits)
  arm64: compat: Remove incorrect comment in compat_siginfo
  arm64: Fix section mismatch on alloc_init_p[mu]d()
  arm64: Avoid breakage caused by .altmacro in fpsimd save/restore macros
  arm64: mm: use *_sect to check for section maps
  arm64: drop unnecessary cache+tlb maintenance
  arm64:mm: free the useless initial page table
  arm64: Enable CPU_IDLE in defconfig
  arm64: kernel: remove ARM64_CPU_SUSPEND config option
  arm64: make sys_call_table const
  arm64: Remove asm/syscalls.h
  arm64: Implement the compat_sys_call_table in C
  syscalls: Declare sys_*stat64 prototypes if __ARCH_WANT_(COMPAT_)STAT64
  compat: Declare compat_sys_sigpending and compat_sys_sigprocmask prototypes
  arm64: uapi: expose our struct ucontext to the uapi headers
  smp, ARM64: Kill SMP single function call interrupt
  arm64: Emulate SETEND for AArch32 tasks
  arm64: Consolidate hotplug notifier for instruction emulation
  arm64: Track system support for mixed endian EL0
  arm64: implement generic IOMMU configuration
  arm64: Combine coherent and non-coherent swiotlb dma_ops
  ...
2015-01-29  Merge tag 'efi-next' of git://git.kernel.org/pub/scm/linux/kernel/git/mfleming/efi into x86/efi  (Ingo Molnar)

Pull EFI updates from Matt Fleming:

 " - Move efivarfs from the misc filesystem section to pseudo filesystem,
     since that's a more logical and accurate place - Leif Lindholm

   - Update efibootmgr URL in Kconfig help - Peter Jones

   - Improve accuracy of EFI guid function names - Borislav Petkov

   - Expose firmware platform size in sysfs for the benefit of EFI boot
     loader installers and other utilities - Steve McIntyre

   - Clean up __init annotations for arm64/efi code - Ard Biesheuvel

   - Mark the UIE as unsupported for rtc-efi - Ard Biesheuvel

   - Fix memory leak in error code path of runtime map code - Dan Carpenter

   - Improve robustness of get_memory_map() by removing assumptions on
     the size of efi_memory_desc_t (which could change in future spec
     versions) and querying the firmware instead of guessing about the
     memmap size - Ard Biesheuvel

   - Remove superfluous guid unparse calls - Ivan Khoronzhuk

   - Delete unnecessary chosen@0 DT node FDT code, since it was duplicated
     from code in drivers/of and is entirely unnecessary - Leif Lindholm

   There's nothing super scary, mainly cleanups, and a merge from Ricardo
   who kindly picked up some patches from the linux-efi mailing list while
   I was out on annual leave in December.

   Perhaps the biggest risk is the get_memory_map() change from Ard, which
   changes the way that both the arm64 and x86 EFI boot stubs build the
   early memory map. It would be good to have it bake in linux-next for a
   while."

Signed-off-by: Ingo Molnar <mingo@kernel.org>
2015-01-20  efi/libstub: Call get_memory_map() to obtain map and desc sizes  (Ard Biesheuvel)
This fixes two minor issues in the implementation of get_memory_map():

 - Currently, it assumes that sizeof(efi_memory_desc_t) == desc_size,
   which is usually true, but not mandated by the spec. (This was added
   intentionally to allow future additions to the definition of
   efi_memory_desc_t.) The way the loop is implemented currently, the
   added slack space may be insufficient if desc_size is larger, which
   in some corner cases could result in the loop never terminating.

 - It allocates 32 efi_memory_desc_t entries first (again, using the
   size of the struct instead of desc_size), and frees and reallocates
   if that turns out to be insufficient. Few UEFI implementations have
   such small memory maps, which results in an unnecessary allocate/free
   pair on each invocation.

Fix this by calling the get_memory_map() boot service first with a '0' input value for map size to retrieve the map size and desc size from the firmware, and only then perform the allocation, using desc_size rather than sizeof(efi_memory_desc_t).

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Matt Fleming <matt.fleming@intel.com>
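[Editorial note: a short sketch of the query-first pattern described above; fw_get_memory_map() and fw_alloc() are hypothetical wrappers. The first call passes a zero-sized buffer purely to learn the map size and desc_size from the firmware, and the buffer is then sized in units of desc_size, never sizeof(efi_memory_desc_t).]

    typedef unsigned long efi_status_t;

    extern efi_status_t fw_get_memory_map(unsigned long *size, void *map,
                                          unsigned long *key,
                                          unsigned long *desc_size);
    extern void *fw_alloc(unsigned long size);

    static void *query_then_alloc_map(unsigned long *map_size,
                                      unsigned long *desc_size)
    {
            unsigned long key;
            void *map;

            *map_size = 0;
            /* expected to return EFI_BUFFER_TOO_SMALL and fill in the sizes */
            fw_get_memory_map(map_size, NULL, &key, desc_size);

            *map_size += 8 * *desc_size;    /* slack for entries added later */
            map = fw_alloc(*map_size);
            if (!map)
                    return NULL;

            if (fw_get_memory_map(map_size, map, &key, desc_size) != 0)
                    return NULL;            /* 0 == EFI_SUCCESS */
            return map;
    }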
2015-01-15  arm64/efi: efistub: Apply __init annotation  (Ard Biesheuvel)
This ensures all stub components are freed when the kernel proper is done booting, by prefixing the names of all ELF sections that have the SHF_ALLOC attribute with ".init". This approach ensures that even implicitly emitted allocated data (like initializer values and string literals) is covered.

At the same time, remove some __init annotations in the stub that have now become redundant, and add the __init annotation to handle_kernel_image, which would otherwise now trigger a section mismatch warning.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Matt Fleming <matt.fleming@intel.com>
2015-01-12  arm64/efi: move SetVirtualAddressMap() to UEFI stub  (Ard Biesheuvel)
In order to support kexec, the kernel needs to be able to deal with the state of the UEFI firmware after SetVirtualAddressMap() has been called. To avoid having separate code paths for non-kexec and kexec, let's move the call to SetVirtualAddressMap() to the stub: this guarantees that it will only be called once (since the stub is not executed during kexec), and ensures that the UEFI state is identical between kexec and normal boot.

This implies that the layout of the virtual mapping needs to be created by the stub as well. All regions are rounded up to a naturally aligned multiple of 64 KB (for compatibility with 64k-page kernels) and recorded in the UEFI memory map. The kernel proper reads those values and installs the mappings in a dedicated set of page tables that are swapped in during UEFI Runtime Services calls.

Acked-by: Leif Lindholm <leif.lindholm@linaro.org>
Acked-by: Matt Fleming <matt.fleming@intel.com>
Tested-by: Leif Lindholm <leif.lindholm@linaro.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
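[Editorial note: a sketch of the layout idea only, with an illustrative region descriptor standing in for efi_memory_desc_t; this is not the patch's code. Runtime regions are packed back to back in virtual space, each rounded up to a 64 KB multiple, and the chosen virtual address is written back into the map for the kernel proper to consume.]

    #include <stdint.h>

    typedef uint64_t u64;

    #define EFI_PAGE_SIZE   0x1000UL        /* UEFI maps count 4 KB pages */
    #define SZ_64K          0x10000UL

    struct region_sketch {
            u64 phys_addr;
            u64 num_pages;
            u64 virt_addr;
            int runtime;    /* EFI_MEMORY_RUNTIME attribute set? */
    };

    static void assign_virt_addrs(struct region_sketch *map, unsigned int count,
                                  u64 virt_base)
    {
            unsigned int i;

            for (i = 0; i < count; i++) {
                    u64 size = map[i].num_pages * EFI_PAGE_SIZE;

                    if (!map[i].runtime)
                            continue;

                    map[i].virt_addr = virt_base;
                    /* advance by the region size rounded up to 64 KB */
                    virt_base += (size + SZ_64K - 1) & ~(SZ_64K - 1);
            }
    }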
2015-01-12  efi: efistub: allow allocation alignment larger than EFI_PAGE_SIZE  (Ard Biesheuvel)
On systems with 64 KB pages, it is preferable for UEFI memory map entries to be 64 KB aligned multiples of 64 KB, because it relieves us of having to deal with the residues. So, if EFI_ALLOC_ALIGN is #define'd by the platform, use it to round up all memory allocations made.

Acked-by: Matt Fleming <matt.fleming@intel.com>
Acked-by: Borislav Petkov <bp@suse.de>
Tested-by: Leif Lindholm <leif.lindholm@linaro.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
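[Editorial note: a minimal sketch of the rounding rule, assuming EFI_ALLOC_ALIGN is a power of two; efi_alloc_size() is an illustrative name, not the stub's helper.]

    #define EFI_PAGE_SIZE   0x1000UL

    /* platforms may override this; default to the 4 KB EFI page size */
    #ifndef EFI_ALLOC_ALIGN
    #define EFI_ALLOC_ALIGN EFI_PAGE_SIZE
    #endif

    static unsigned long efi_alloc_size(unsigned long size)
    {
            /* round the request up to the platform's allocation alignment */
            return (size + EFI_ALLOC_ALIGN - 1) & ~(EFI_ALLOC_ALIGN - 1);
    }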
2014-11-05  efi: efi-stub: notify on DTB absence  (Mark Rutland)
In the absence of a DTB configuration table, the EFI stub will happily continue attempting to boot a kernel, despite the fact that this kernel may not function without a description of the hardware. In this case, as with a typo'd "dtb=" option (e.g. "dbt=") or many other possible failures, the only output seen by the user will be the rather terse output from the EFI stub:

  EFI stub: Booting Linux Kernel...

To aid those attempting to debug such failures, this patch adds a notice when no DTB is found, making the output more helpful:

  EFI stub: Booting Linux Kernel...
  EFI stub: Generating empty DTB

Additionally, a positive acknowledgement is added when a user-specified DTB is in use:

  EFI stub: Booting Linux Kernel...
  EFI stub: Using DTB from command line

Similarly, a positive acknowledgement is added when a DTB from a configuration table is in use:

  EFI stub: Booting Linux Kernel...
  EFI stub: Using DTB from configuration table

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Leif Lindholm <leif.lindholm@linaro.org>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Roy Franz <roy.franz@linaro.org>
Acked-by: Matt Fleming <matt.fleming@intel.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
2014-10-03  Merge branch 'next' into efi-next-merge  (Matt Fleming)
Conflicts:
	arch/x86/boot/compressed/eboot.c
2014-10-03  efi: Add efi= parameter parsing to the EFI boot stub  (Matt Fleming)
We need a way to customize the behaviour of the EFI boot stub, in particular, we need a way to disable the "chunking" workaround used when reading files from the EFI System Partition.

One of my machines doesn't cope well when reading files in 1MB chunks to a buffer above the 4GB mark - it appears that the "chunking" bug workaround triggers another firmware bug. This was only discovered with commit 4bf7111f5016 ("x86/efi: Support initrd loaded above 4G"), and that commit is perfectly valid. The symptom I observed was a corrupt initrd rather than any kind of crash.

efi= is now used to specify EFI parameters in two very different execution environments, the EFI boot stub and during kernel boot.

There is also a slight performance optimization by enabling efi=nochunk, but that's offset by the fact that you're more likely to run into firmware issues, at least on x86. This is the rationale behind leaving the workaround enabled by default.

Also provide some documentation for EFI_READ_CHUNK_SIZE and why we're using the current value of 1MB.

Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Roy Franz <roy.franz@linaro.org>
Cc: Maarten Lankhorst <m.b.lankhorst@gmail.com>
Cc: Leif Lindholm <leif.lindholm@linaro.org>
Cc: Borislav Petkov <bp@suse.de>
Signed-off-by: Matt Fleming <matt.fleming@intel.com>
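[Editorial note: a sketch of the chunked-read idea with an option to disable the cap, not the stub's actual code; fw_file_read(), read_file() and READ_CHUNK_SIZE are assumed names standing in for the firmware file protocol and EFI_READ_CHUNK_SIZE.]

    #include <stddef.h>

    #define READ_CHUNK_SIZE (1024 * 1024)   /* 1 MB, as described above */

    /* assumed firmware wrapper: reads up to *len bytes, updates *len */
    extern int fw_file_read(void *handle, unsigned long *len, void *buf);

    static int read_file(void *handle, unsigned long size, char *buf,
                         int nochunk)
    {
            while (size) {
                    unsigned long chunk = size;

                    /* cap each read unless chunking has been disabled */
                    if (!nochunk && chunk > READ_CHUNK_SIZE)
                            chunk = READ_CHUNK_SIZE;

                    if (fw_file_read(handle, &chunk, buf) != 0)
                            return -1;

                    buf += chunk;
                    size -= chunk;
            }
            return 0;
    }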
2014-09-09  efi/arm64: Fix fdt-related memory reservation  (Mark Salter)
Commit 86c8b27a01cf ("arm64: ignore DT memreserve entries when booting in UEFI mode") prevents early_init_fdt_scan_reserved_mem() from being called for arm64 kernels booting via UEFI. This was done because the kernel will use the UEFI memory map to determine reserved memory regions.

That approach has problems in that early_init_fdt_scan_reserved_mem() also reserves the FDT itself and any node-specific reserved memory. With some kernel configurations, the FDT may by chance be overwritten before it can be unflattened and the kernel will fail to boot. More subtle problems will result if the FDT has node-specific reserved memory which is not really reserved.

This patch has the UEFI stub remove the memory reserve map entries from the FDT, as it does with the memory nodes. This allows early_init_fdt_scan_reserved_mem() to be called unconditionally, so that the other needed reservations are made.

Signed-off-by: Mark Salter <msalter@redhat.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Matt Fleming <matt.fleming@intel.com>
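[Editorial note: a sketch using libfdt's public memory-reserve API to show the kind of operation described above; this exact helper is illustrative, not the patch itself.]

    #include <libfdt.h>

    /*
     * Drop every entry from the FDT's memory reserve map, on the basis
     * that the UEFI memory map already describes those reservations,
     * so early_init_fdt_scan_reserved_mem() can then run unconditionally
     * and make only the reservations that are still needed.
     */
    static int drop_memreserve_entries(void *fdt)
    {
            int n = fdt_num_mem_rsv(fdt);

            while (n-- > 0) {
                    int err = fdt_del_mem_rsv(fdt, n);  /* delete from the end */

                    if (err)
                            return err;
            }
            return 0;
    }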
2014-08-04  Merge branch 'x86-efi-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Linus Torvalds)

Pull EFI changes from Ingo Molnar:
 "Main changes in this cycle are:

  - arm64 efi stub fixes, preservation of FP/SIMD registers across
    firmware calls, and conversion of the EFI stub code into a static
    library - Ard Biesheuvel

  - Xen EFI support - Daniel Kiper

  - Support for autoloading the efivars driver - Lee, Chun-Yi

  - Use the PE/COFF headers in the x86 EFI boot stub to request that the
    stub be loaded with CONFIG_PHYSICAL_ALIGN alignment - Michael Brown

  - Consolidate all the x86 EFI quirks into one file - Saurabh Tangri

  - Additional error logging in x86 EFI boot stub - Ulf Winkelvos

  - Support loading initrd above 4G in EFI boot stub - Yinghai Lu

  - EFI reboot patches for ACPI hardware reduced platforms"

* 'x86-efi-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (31 commits)
  efi/arm64: Handle missing virtual mapping for UEFI System Table
  arch/x86/xen: Silence compiler warnings
  xen: Silence compiler warnings
  x86/efi: Request desired alignment via the PE/COFF headers
  x86/efi: Add better error logging to EFI boot stub
  efi: Autoload efivars
  efi: Update stale locking comment for struct efivars
  arch/x86: Remove efi_set_rtc_mmss()
  arch/x86: Replace plain strings with constants
  xen: Put EFI machinery in place
  xen: Define EFI related stuff
  arch/x86: Remove redundant set_bit(EFI_MEMMAP) call
  arch/x86: Remove redundant set_bit(EFI_SYSTEM_TABLES) call
  efi: Introduce EFI_PARAVIRT flag
  arch/x86: Do not access EFI memory map if it is not available
  efi: Use early_mem*() instead of early_io*()
  arch/ia64: Define early_memunmap()
  x86/reboot: Add EFI reboot quirk for ACPI Hardware Reduced flag
  efi/reboot: Allow powering off machines using EFI
  efi/reboot: Add generic wrapper around EfiResetSystem()
  ...
2014-07-18  efi: efistub: Convert into static library  (Ard Biesheuvel)
This patch changes both the x86 and arm64 efistub implementations from #including shared .c files under drivers/firmware/efi to building the shared code as a static library.

The x86 code uses a stub built into the boot executable which uncompresses the kernel at boot time. In this case, the library is linked into the decompressor. In the arm64 case, the stub is part of the kernel proper, so the library is linked into the kernel proper as well.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Matt Fleming <matt.fleming@intel.com>