Age  Commit message  Author
2008-12-31  KVM: ppc: Implement in-kernel exit timing statistics  (Hollis Blanchard)
Existing KVM statistics are either plain counters (kvm_stat) reported for KVM generally, or trace-based approaches like kvm_trace. For KVM on powerpc we needed to track the timings of the different exit types. While this could be achieved by parsing data created with a kvm_trace extension, that adds too much overhead (at least on embedded PowerPC), slowing down the workloads we wanted to measure. Therefore this patch adds an in-kernel exit timing statistic to the powerpc kvm code. The statistic is available per VM and per VCPU under the kvm debugfs directory. As it adds low but still measurable overhead, it can be enabled via a .config entry and is off by default. Since this patch touched all powerpc kvm_stat code anyway, that code is now merged and simplified together with the exit timing statistic code (and still works with exit timing disabled in .config). Signed-off-by: Christian Ehrhardt <ehrhardt@linux.vnet.ibm.com> Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com> Signed-off-by: Avi Kivity <avi@redhat.com>
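As a rough illustration of the per-exit accounting such a statistic needs, here is a minimal sketch in C; the structure and function names are illustrative assumptions, not the arch/powerpc/kvm code:

#include <linux/types.h>

/* Illustrative per-exit-type record: min/max/sum/count per exit reason. */
struct exit_timing {
	u64 min;
	u64 max;
	u64 sum;
	u64 count;
};

/* Account one exit, measured in timebase ticks between guest exit and
 * re-entry (names and measurement points are assumptions). */
static void account_exit(struct exit_timing *t, u64 start_tb, u64 end_tb)
{
	u64 d = end_tb - start_tb;

	t->count++;
	t->sum += d;
	if (t->count == 1 || d < t->min)
		t->min = d;
	if (d > t->max)
		t->max = d;
}

A debugfs file can then dump one such record per exit type for each vcpu.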
2008-12-31  KVM: ppc: save and restore guest mappings on context switch  (Hollis Blanchard)
Store shadow TLB entries in memory, but only load them on host context switch (instead of on every guest entry). This improves performance for most workloads on 440 by reducing the guest TLB miss rate. Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com> Signed-off-by: Avi Kivity <avi@redhat.com>
2008-12-31  KVM: ppc: directly insert shadow mappings into the hardware TLB  (Hollis Blanchard)
Formerly, we maintained a per-vcpu shadow TLB and on every entry to the guest would load this array into the hardware TLB. This consumed 1280 bytes of memory (64 entries of 16 bytes plus a struct page pointer each), and also required some assembly to loop over the array on every entry. Instead of saving a copy in memory, we can store shadow mappings directly into the hardware TLB, accepting that the host kernel will clobber these as part of the normal 440 TLB round robin. When we do that we need less than half the memory, and we have decreased the exit handling time for all guest exits, at the cost of an increased number of TLB misses because the host overwrites some guest entries. These savings will be increased on processors with larger TLBs or which implement intelligent flush instructions like tlbivax (which will avoid the need to walk arrays in software). In addition to that and to the code simplification, we have a greater chance of leaving other host userspace mappings in the TLB, instead of forcing all subsequent tasks to re-fault all their mappings. Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com> Signed-off-by: Avi Kivity <avi@redhat.com>
2008-12-31  powerpc/44x: declare tlb_44x_index for use in C code  (Hollis Blanchard)
KVM currently ignores the host's round robin TLB eviction selection, instead maintaining its own TLB state and its own round robin index. However, by participating in the normal 44x TLB selection, we can drop the alternate TLB processing in KVM. This results in a significant performance improvement, since that processing currently must be done on *every* guest exit. Accordingly, KVM needs to be able to access and increment tlb_44x_index. (KVM on 440 cannot be a module, so there is no need to export this symbol.) Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com> Acked-by: Josh Boyer <jwboyer@linux.vnet.ibm.com> Signed-off-by: Avi Kivity <avi@redhat.com>
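A minimal sketch of what "declaring tlb_44x_index for use in C code" amounts to; the exact header placement and type are assumptions based on the commit description:

/* The round-robin index is maintained by the 44x TLB miss handling code;
 * exposing it lets KVM participate in the host's eviction order. */
extern unsigned int tlb_44x_index;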
2008-12-31  KVM: ppc: support large host pages  (Hollis Blanchard)
KVM on 440 has always been able to handle large guest mappings with 4K host pages -- we must, since the guest kernel uses 256MB mappings. This patch makes KVM work when the host has large pages too (tested with 64K). Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com> Signed-off-by: Avi Kivity <avi@redhat.com>
2008-12-31  KVM: split out kvm_free_assigned_irq()  (Mark McLoughlin)
Split out the logic corresponding to undoing assign_irq() and clean it up a bit. Signed-off-by: Mark McLoughlin <markmc@redhat.com> Signed-off-by: Avi Kivity <avi@redhat.com>
2008-12-31  KVM: add KVM_USERSPACE_IRQ_SOURCE_ID assertions  (Mark McLoughlin)
Make sure kvm_request_irq_source_id() never returns KVM_USERSPACE_IRQ_SOURCE_ID. Likewise, check that kvm_free_irq_source_id() never accepts KVM_USERSPACE_IRQ_SOURCE_ID. Signed-off-by: Mark McLoughlin <markmc@redhat.com> Signed-off-by: Avi Kivity <avi@redhat.com>
2008-12-31  KVM: don't free an unallocated irq source id  (Mark McLoughlin)
Set assigned_dev->irq_source_id to -1 so that we can avoid freeing a source ID which we never allocated. Signed-off-by: Mark McLoughlin <markmc@redhat.com> Signed-off-by: Avi Kivity <avi@redhat.com>
2008-12-31  KVM: make kvm_unregister_irq_ack_notifier() safe  (Mark McLoughlin)
We never pass a NULL notifier pointer here, but we may well pass a notifier struct which hasn't previously been registered. Guard against this by using hlist_del_init(), which will not do anything if the node hasn't been added to the list and, when removing the node, will ensure that a subsequent call to hlist_del_init() is fine too. Fixes an oops seen when an assigned device is freed before an IRQ is assigned to it. Signed-off-by: Mark McLoughlin <markmc@redhat.com> Signed-off-by: Avi Kivity <avi@redhat.com>
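A small standalone sketch of why hlist_del_init() is the safe choice here; the struct layout is simplified and the demo function is purely illustrative:

#include <linux/list.h>

/* Simplified notifier: only the list linkage matters for this example. */
struct kvm_irq_ack_notifier {
	struct hlist_node link;
};

static void hlist_del_init_demo(struct hlist_head *list,
				struct kvm_irq_ack_notifier *kian)
{
	INIT_HLIST_NODE(&kian->link);
	hlist_del_init(&kian->link);	/* never registered: no-op, no oops */
	hlist_add_head(&kian->link, list);
	hlist_del_init(&kian->link);	/* removed and re-initialized */
	hlist_del_init(&kian->link);	/* double unregister: still a no-op */
}

A plain hlist_del() on the never-registered node would dereference its NULL pprev pointer and oops, which is exactly the failure described above.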
2008-12-31  KVM: remove the IRQ ACK notifier assertions  (Mark McLoughlin)
We will obviously never pass a NULL struct kvm_irq_ack_notifier* to these functions. They are always embedded in the assigned device structure, so the assertions add nothing. The irqchip_in_kernel() assertion is very out of place - clearly this little abstraction needs to know nothing about the upper layer details. Signed-off-by: Mark McLoughlin <markmc@redhat.com> Signed-off-by: Avi Kivity <avi@redhat.com>
2008-12-31  KVM: VMX: fix sparse warning  (Hannes Eder)
Impact: make a global variable static. arch/x86/kvm/vmx.c:134:3: warning: symbol 'vmx_capability' was not declared. Should it be static? Signed-off-by: Hannes Eder <hannes@hanneseder.net> Signed-off-by: Avi Kivity <avi@redhat.com>
2008-12-31  KVM: fix sparse warning  (Hannes Eder)
Impact: make a global variable static. virt/kvm/kvm_main.c:85:6: warning: symbol 'kvm_rebooting' was not declared. Should it be static? Signed-off-by: Hannes Eder <hannes@hanneseder.net> Signed-off-by: Avi Kivity <avi@redhat.com>
2008-12-31  KVM: Remove extraneous semicolon after do/while  (Avi Kivity)
Noticed by Guillaume Thouvenin. Signed-off-by: Avi Kivity <avi@redhat.com>
2008-12-31  KVM: x86 emulator: fix popf emulation  (Avi Kivity)
Set operand type and size to get correct writeback behavior. Signed-off-by: Avi Kivity <avi@redhat.com>
2008-12-31  KVM: x86 emulator: fix ret emulation  (Avi Kivity)
'ret' did not set the operand type or size for the destination, so writeback ignored it. Signed-off-by: Avi Kivity <avi@redhat.com>
2008-12-31  KVM: x86 emulator: switch 'pop reg' instruction to emulate_pop()  (Avi Kivity)
Signed-off-by: Avi Kivity <avi@redhat.com>
2008-12-31  KVM: x86 emulator: allow pop from mmio  (Avi Kivity)
Signed-off-by: Avi Kivity <avi@redhat.com>
2008-12-31  KVM: x86 emulator: Extract 'pop' sequence into a function  (Avi Kivity)
Switch 'pop r/m' instruction to use the new function. Signed-off-by: Avi Kivity <avi@redhat.com>
2008-12-31  KVM: Prevent trace call into unloaded module text  (Wu Fengguang)
Add marker_synchronize_unregister() before module unloading. This prevents possible trace calls into unloaded module text. Signed-off-by: Wu Fengguang <wfg@linux.intel.com> Signed-off-by: Avi Kivity <avi@redhat.com>
2008-12-31  KVM: s390: Fix memory leak of vcpu->run  (Christian Borntraeger)
The s390 backend of kvm never calls kvm_vcpu_uninit. This causes a memory leak of vcpu->run pages. Let's call kvm_vcpu_uninit in kvm_arch_vcpu_destroy to free vcpu->run. Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com> Acked-by: Carsten Otte <cotte@de.ibm.com> Signed-off-by: Avi Kivity <avi@redhat.com>
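A minimal sketch of the shape of the fix; only the kvm_vcpu_uninit() call comes from the commit description, the surrounding s390 teardown is assumed and elided:

void kvm_arch_vcpu_destroy(struct kvm_vcpu *vcpu)
{
	/* Releases the shared vcpu->run page that used to leak. */
	kvm_vcpu_uninit(vcpu);

	/* Arch-specific teardown and freeing of the vcpu itself follows
	 * (elided; the details are s390-specific). */
}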
2008-12-31  KVM: s390: Fix refcounting and allow module unload  (Christian Borntraeger)
Currently it is impossible to unload the kvm module on s390. This patch fixes kvm_arch_destroy_vm to release all cpus, which makes it possible to unload the module. In addition we stop messing with the module refcount in arch code. Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com> Acked-by: Carsten Otte <cotte@de.ibm.com> Signed-off-by: Avi Kivity <avi@redhat.com>
2008-12-31  KVM: x86 emulator: consolidate emulation of two operand instructions  (Avi Kivity)
No need to repeat the same assembly block over and over. Signed-off-by: Avi Kivity <avi@redhat.com>
2008-12-31  KVM: x86 emulator: reduce duplication in one operand emulation thunks  (Avi Kivity)
Signed-off-by: Avi Kivity <avi@redhat.com>
2008-12-31  KVM: MMU: optimize set_spte for page sync  (Marcelo Tosatti)
The write protect verification in set_spte is unnecessary for page sync. It's guaranteed that, if the unsync spte was writable, the target page does not have a write-protected shadow (if it had, the spte would have been write protected under mmu_lock by rmap_write_protect before). The same reasoning applies to mark_page_dirty: the gfn has been marked as dirty via the page fault path. The cost of hash table and memslot lookups is quite significant if the workload is pagetable write intensive, resulting in increased mmu_lock contention. Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com> Signed-off-by: Avi Kivity <avi@redhat.com>
2008-12-31  KVM: MSI to INTx translate  (Sheng Yang)
Now we use MSI as the default and translate MSI to INTx when the guest needs INTx rather than MSI. For legacy devices, we provide support for a non-shared host IRQ. A module parameter, msi2intx, controls this method; its value is true by default on the x86 architecture. We can't guarantee this mode works with every device, but it worked for most of the devices we tested. If your device encounters trouble with this mode, you can try setting the msi2intx module parameter to 0. If the device is OK with msi2intx=0, then please report it to the KVM mailing list or to me; we may prepare a blacklist of devices that can't work in this mode. Signed-off-by: Sheng Yang <sheng@linux.intel.com> Signed-off-by: Avi Kivity <avi@redhat.com>
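How such a knob is typically declared is sketched below; only the name msi2intx and its default come from the commit message, while the exact type, permission bits, and file placement are assumptions:

#include <linux/module.h>
#include <linux/moduleparam.h>

/* Default on: assigned devices use MSI on the host side, translated to
 * INTx for guests that only program INTx. */
static int msi2intx = 1;
module_param(msi2intx, int, 0444);
MODULE_PARM_DESC(msi2intx,
		 "Use host MSI and translate to guest INTx for assigned devices");

Loading with msi2intx=0 (for example, modprobe kvm msi2intx=0) would then disable the translation for a device that misbehaves.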
2008-12-31  KVM: Enable MSI for device assignment  (Sheng Yang)
We enable guest MSI and host MSI support in this patch. Userspace that wants to enable MSI should set KVM_DEV_IRQ_ASSIGN_ENABLE_MSI in the assigned_irq's flags. The function returns -ENOTTY if it can't enable MSI; userspace shouldn't set the device's MSI enable bit when KVM_ASSIGN_IRQ returns -ENOTTY for KVM_DEV_IRQ_ASSIGN_ENABLE_MSI. Userspace can tell whether MSI device assignment is supported from #ifdef KVM_CAP_DEVICE_MSI. Signed-off-by: Sheng Yang <sheng@linux.intel.com> Signed-off-by: Avi Kivity <avi@redhat.com>
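A hedged userspace sketch of the flow just described; the struct and ioctl names follow the KVM headers of that era, but the field usage and error handling here are illustrative assumptions:

#include <errno.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Try to wire an assigned device's interrupt up as MSI; fall back to INTx
 * (and leave the device's MSI enable bit clear) if the kernel says ENOTTY. */
static int assign_irq_msi(int vm_fd, __u32 dev_id, __u32 guest_irq)
{
#ifdef KVM_CAP_DEVICE_MSI
	struct kvm_assigned_irq irq = {
		.assigned_dev_id = dev_id,
		.guest_irq = guest_irq,
		.flags = KVM_DEV_IRQ_ASSIGN_ENABLE_MSI,
	};

	if (ioctl(vm_fd, KVM_ASSIGN_IRQ, &irq) == 0)
		return 0;
	if (errno == ENOTTY)
		return -ENOTTY;		/* MSI not available for this setup */
	return -errno;
#else
	return -ENOTTY;			/* headers too old: no MSI assignment support */
#endif
}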
2008-12-31  KVM: Add assigned_device_msi_dispatch()  (Sheng Yang)
The function is used to dispatch an MSI to the local APIC according to the MSI message address and message data. Signed-off-by: Sheng Yang <sheng@linux.intel.com> Signed-off-by: Avi Kivity <avi@redhat.com>
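For reference, the fields such a dispatch has to decode follow the x86 MSI layout; the macro names below are made up for this sketch, only the bit positions are architectural:

/* MSI address register: destination APIC ID lives in bits 19:12. */
#define MSI_SKETCH_ADDR_DEST_ID(addr)	(((addr) >> 12) & 0xff)

/* MSI data register: vector in bits 7:0, delivery mode in bits 10:8,
 * level in bit 14, trigger mode in bit 15. */
#define MSI_SKETCH_DATA_VECTOR(data)	((data) & 0xff)
#define MSI_SKETCH_DATA_DELIVERY(data)	(((data) >> 8) & 0x7)
#define MSI_SKETCH_DATA_LEVEL(data)	(((data) >> 14) & 0x1)
#define MSI_SKETCH_DATA_TRIGGER(data)	(((data) >> 15) & 0x1)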
2008-12-31  KVM: Export ioapic_get_delivery_bitmask  (Sheng Yang)
It will be used for MSI dispatch in device assignment. Signed-off-by: Sheng Yang <sheng@linux.intel.com> Signed-off-by: Avi Kivity <avi@redhat.com>
2008-12-31  KVM: Add fields for MSI device assignment  (Sheng Yang)
Preparation for kvm_arch_assigned_device_msi_dispatch(). Signed-off-by: Sheng Yang <sheng@linux.intel.com> Signed-off-by: Avi Kivity <avi@redhat.com>
2008-12-31  KVM: Clean up assigned_device_update_irq  (Sheng Yang)
Signed-off-by: Sheng Yang <sheng@linux.intel.com> Signed-off-by: Avi Kivity <avi@redhat.com>
2008-12-31  KVM: Replace irq_requested with more generic irq_requested_type  (Sheng Yang)
Separate the guest IRQ type from the host IRQ type, so that we can support the guest using INTx while the host uses MSI (but not the opposite combination). Signed-off-by: Sheng Yang <sheng@linux.intel.com> Signed-off-by: Avi Kivity <avi@redhat.com>
2008-12-31  KVM: Separate update irq to a single function  (Sheng Yang)
Separate the INTx enabling part into an independent function, so that we can add the MSI enabling part easily. Signed-off-by: Sheng Yang <sheng@linux.intel.com> Signed-off-by: Avi Kivity <avi@redhat.com>
2008-12-31  KVM: Move ack notifier register and IRQ source ID request  (Sheng Yang)
Distinguish the common device assignment part from the INTx part, preparing for a later refactoring. Signed-off-by: Sheng Yang <sheng@linux.intel.com> Signed-off-by: Avi Kivity <avi@redhat.com>
2008-12-31  x86: KVM guest: sign kvmclock as paravirt  (Glauber Costa)
Currently, we only set the KVM paravirt signature in case of CONFIG_KVM_GUEST. However, it is possible to have it turned off, while CONFIG_KVM_CLOCK is turned on. This is also a paravirt case, and should be shown accordingly. Signed-off-by: Glauber Costa <glommer@redhat.com> Signed-off-by: Avi Kivity <avi@redhat.com>
2008-12-31  KVM: VMX: Conditionally request interrupt window after injecting irq  (Avi Kivity)
If we're injecting an interrupt, and another one is pending, request an interrupt window notification so we don't have excess latency on the second interrupt. This shouldn't happen in practice since an EOI will be issued, giving a second chance to request an interrupt window, but... Signed-off-by: Avi Kivity <avi@redhat.com>
2008-12-31  KVM: ia64: Clean up vmm_ivt.S using tab to indent every line  (Xiantao Zhang)
Use tabs for indentation in vmm_ivt.S. Signed-off-by: Xiantao Zhang <xiantao.zhang@intel.com> Signed-off-by: Avi Kivity <avi@redhat.com>
2008-12-31  KVM: ia64: Add handler for crashed vmm  (Xiantao Zhang)
Since the vmm runs in an isolated address space and is just a copy of the host's kvm-intel module, once the vmm crashes we just crash all guests running on it instead of crashing the whole kernel. Signed-off-by: Xiantao Zhang <xiantao.zhang@intel.com> Signed-off-by: Avi Kivity <avi@redhat.com>
2008-12-31  KVM: ia64: Add some debug points to provide crash information  (Xiantao Zhang)
Use the printk infrastructure to print out some debug info once a VM crashes. Signed-off-by: Xiantao Zhang <xiantao.zhang@intel.com> Signed-off-by: Avi Kivity <avi@redhat.com>
2008-12-31  KVM: ia64: Define printk function for kvm-intel module  (Xiantao Zhang)
The kvm-intel module is relocated to an address space isolated from the kernel, so it can't call the host kernel's printk for debugging purposes. In the module, we implement printk to output vmm debug info. Signed-off-by: Xiantao Zhang <xiantao.zhang@intel.com> Signed-off-by: Avi Kivity <avi@redhat.com>
2008-12-31  x86: disable VMX on all CPUs on reboot  (Eduardo Habkost)
On emergency_restart, we may need to use an NMI to disable virtualization on all CPUs. We do that using nmi_shootdown_cpus() if VMX is enabled.

Note: With this patch, we will run the NMI stuff only when the CPU where emergency_restart() was called has VMX enabled. This should work in most cases because KVM enables VMX on all CPUs, but we may miss the small window where KVM is doing that. Also, I don't know if all code using VMX out there always enables VMX on all CPUs like KVM does. We have two other alternatives for that:

a) Have an API that all code that enables VMX on any CPU should use to tell the kernel core that it is going to enable VMX on the CPUs.

b) Always call nmi_shootdown_cpus() if the CPU supports VMX. This is a bit intrusive and more risky, as it would run nmi_shootdown_cpus() on emergency_reboot() even on systems where virtualization is never enabled.

Finding a proper point to hook the nmi_shootdown_cpus() call isn't trivial, as the non-emergency machine_restart() (which doesn't need the NMI tricks) uses machine_emergency_restart() directly. The solution that makes this work without adding a new function or argument to machine_ops was setting a 'reboot_emergency' flag that tells native_machine_emergency_restart() whether it needs to do the virt cleanup or not (a rough sketch follows).

Signed-off-by: Eduardo Habkost <ehabkost@redhat.com> Signed-off-by: Avi Kivity <avi@redhat.com>
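The flag pattern described in the last paragraph, as a hedged sketch; the real code in arch/x86/kernel/reboot.c differs in detail, and emergency_vmx_disable_all() is an assumed helper name:

/* Set only on the true emergency path; the shared restart code checks it. */
static int reboot_emergency;

static void __machine_emergency_restart(int emergency)
{
	reboot_emergency = emergency;
	machine_ops.emergency_restart();
}

static void native_machine_restart(char *cmd)
{
	/* The non-emergency restart reuses the emergency path, minus the
	 * NMI tricks. */
	__machine_emergency_restart(0);
}

void machine_emergency_restart(void)
{
	__machine_emergency_restart(1);
}

static void native_machine_emergency_restart(void)
{
	if (reboot_emergency)
		emergency_vmx_disable_all();	/* assumed helper: NMI-based VMX shutdown */

	/* ... existing restart sequence ... */
}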
2008-12-31  kdump: forcibly disable VMX and SVM on machine_crash_shutdown()  (Eduardo Habkost)
We need to disable virtualization extensions on all CPUs before booting the kdump kernel, otherwise the kdump kernel will fail to boot, and rebooting after the kdump kernel has done its task may also fail. We do it using cpu_emergency_vmxoff() and cpu_emergency_svm_disable(), which should always work, because those functions check whether the CPU supports SVM or VMX before doing their tasks. Signed-off-by: Eduardo Habkost <ehabkost@redhat.com> Signed-off-by: Avi Kivity <avi@redhat.com>
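A minimal sketch of the per-CPU hook this describes, assuming it runs on each CPU during crash shutdown; the two helpers are the ones named in the commit message:

#include <asm/virtext.h>

/* Per-CPU cleanup before jumping into the kdump kernel: both helpers are
 * no-ops on CPUs without the corresponding virtualization extension. */
static void crash_disable_virtualization(void)
{
	cpu_emergency_vmxoff();
	cpu_emergency_svm_disable();
}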
2008-12-31  x86: cpu_emergency_svm_disable() function  (Eduardo Habkost)
This function can be used by the reboot or kdump code to forcibly disable SVM on the CPU. Signed-off-by: Eduardo Habkost <ehabkost@redhat.com> Signed-off-by: Avi Kivity <avi@redhat.com>
2008-12-31  KVM: SVM: move svm_hardware_disable() code to asm/virtext.h  (Eduardo Habkost)
Create a cpu_svm_disable() function. Signed-off-by: Eduardo Habkost <ehabkost@redhat.com> Signed-off-by: Avi Kivity <avi@redhat.com>
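A hedged sketch of what cpu_svm_disable() has to do, based on the SVM architecture: drop the host save area pointer and clear EFER.SVME. The in-tree version in asm/virtext.h may differ in detail:

#include <linux/types.h>
#include <asm/msr.h>

static inline void cpu_svm_disable_sketch(void)
{
	u64 efer;

	wrmsrl(MSR_VM_HSAVE_PA, 0);		/* drop the host state-save area */
	rdmsrl(MSR_EFER, efer);
	wrmsrl(MSR_EFER, efer & ~EFER_SVME);	/* turn SVM off */
}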
2008-12-31  KVM: SVM: move has_svm() code to asm/virtext.h  (Eduardo Habkost)
Use a trick to keep the printk()s in has_svm() working as before: gcc will take care of not generating code for the 'msg' handling when the function is called with a NULL msg argument. Signed-off-by: Eduardo Habkost <ehabkost@redhat.com> Signed-off-by: Avi Kivity <avi@redhat.com>
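A hedged, self-contained sketch of that trick (not the asm/virtext.h code); the vendor check and CPUID bit are architectural facts, and the function name is changed to mark it as an illustration:

#include <asm/processor.h>

static inline int has_svm_sketch(const char **msg)
{
	if (boot_cpu_data.x86_vendor != X86_VENDOR_AMD) {
		if (msg)
			*msg = "not amd";
		return 0;
	}
	/* SVM is advertised in CPUID.80000001H:ECX, bit 2. */
	if (!(cpuid_ecx(0x80000001) & (1 << 2))) {
		if (msg)
			*msg = "svm not available";
		return 0;
	}
	return 1;
}

Callers like KVM pass a real msg pointer and print the string; the reboot/kdump path passes NULL, and with a constant NULL argument to an inline function gcc drops the string handling entirely.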
2008-12-31  x86: cpu_emergency_vmxoff() function  (Eduardo Habkost)
Add cpu_emergency_vmxoff() and its friends: cpu_vmx_enabled() and __cpu_emergency_vmxoff(). Signed-off-by: Eduardo Habkost <ehabkost@redhat.com> Signed-off-by: Avi Kivity <avi@redhat.com>
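The shape of the three helpers, as a hedged sketch; the bodies are guesses consistent with the commit description, the CR4 accessor names follow the kernel of that era, and cpu_vmxoff()/cpu_has_vmx() are the companion helpers from the neighbouring commits:

#include <asm/processor.h>
#include <asm/system.h>		/* read_cr4() in kernels of that era */

static inline int cpu_vmx_enabled(void)
{
	return read_cr4() & X86_CR4_VMXE;	/* is VMX currently turned on? */
}

/* Disable VMX if it is on; assumes the caller knows the CPU supports VMX. */
static inline void __cpu_emergency_vmxoff(void)
{
	if (cpu_vmx_enabled())
		cpu_vmxoff();
}

/* Safe to call anywhere: checks CPUID support first. */
static inline void cpu_emergency_vmxoff(void)
{
	if (cpu_has_vmx())
		__cpu_emergency_vmxoff();
}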
2008-12-31  KVM: VMX: extract kvm_cpu_vmxoff() from hardware_disable()  (Eduardo Habkost)
Along with some comments on why it is different from the core cpu_vmxoff() function. Signed-off-by: Eduardo Habkost <ehabkost@redhat.com> Signed-off-by: Avi Kivity <avi@redhat.com>
2008-12-31  x86: asm/virtext.h: add cpu_vmxoff() inline function  (Eduardo Habkost)
Unfortunately we can't use exactly the same code from vmx hardware_disable(), because the KVM function uses the __kvm_handle_fault_on_reboot() tricks. Signed-off-by: Eduardo Habkost <ehabkost@redhat.com> Signed-off-by: Avi Kivity <avi@redhat.com>
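A hedged sketch of what cpu_vmxoff() does: execute VMXOFF and clear CR4.VMXE, with no fault-on-reboot fixup attached (which is exactly why KVM's variant has to differ):

static inline void cpu_vmxoff(void)
{
	asm volatile ("vmxoff" : : : "cc");	/* leave VMX root operation */
	write_cr4(read_cr4() & ~X86_CR4_VMXE);	/* allow a clean later VMXON */
}

The in-tree header spells the instruction through the ASM_VMX_* byte-encoding macros (see the commit below) so that old assemblers can build it; the mnemonic is used here only for readability.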
2008-12-31  KVM: VMX: move cpu_has_kvm_support() to an inline on asm/virtext.h  (Eduardo Habkost)
It will be used by core code on kdump and reboot, to disable vmx if needed. Signed-off-by: Eduardo Habkost <ehabkost@redhat.com> Signed-off-by: Avi Kivity <avi@redhat.com>
2008-12-31  KVM: VMX: move ASM_VMX_* definitions from asm/kvm_host.h to asm/vmx.h  (Eduardo Habkost)
Those definitions will be used by code outside KVM, so move them out of the KVM-specific source file. They are currently used only in kvm/vmx.c, which already includes asm/vmx.h, so they can be moved safely. Signed-off-by: Eduardo Habkost <ehabkost@redhat.com> Signed-off-by: Avi Kivity <avi@redhat.com>
2008-12-31  KVM: SVM: move svm.h to include/asm  (Eduardo Habkost)
svm.h will be used by core code that is independent of KVM, so I am moving it outside the arch/x86/kvm directory. Signed-off-by: Eduardo Habkost <ehabkost@redhat.com> Signed-off-by: Avi Kivity <avi@redhat.com>