| author | Peter Chubb <peterc@gelato.unsw.edu.au> | 2005-06-23 21:14:00 -0700 |
|---|---|---|
| committer | Tony Luck <tony.luck@intel.com> | 2005-06-28 10:01:19 -0700 |
| commit | a68db763af9b676590c3fe9ec3f17bf18015eb2f (patch) | |
| tree | 9862af81932797f0e68f7053d253e6b6b581ea3d /arch/ia64 | |
| parent | 819c67e69c4e0062787e27dd989f5f9d00d4834e (diff) | |
[IA64] Fix another IA64 preemption problem
Ingo's recent patch to make smp_processor_id() complain if it is
called with preemption enabled has shown up another problem.
local_finish_flush_tlb_mm() calls activate_context() in a situation
where the task could be rescheduled to another processor. This patch
disables preemption around the call.
Signed-off-by: Peter Chubb <peterc@gelato.unsw.edu.au>
Signed-off-by: Tony Luck <tony.luck@intel.com>
Diffstat (limited to 'arch/ia64')
-rw-r--r-- | arch/ia64/kernel/smp.c | 3 |
1 file changed, 3 insertions, 0 deletions
diff --git a/arch/ia64/kernel/smp.c b/arch/ia64/kernel/smp.c
index b49d4ddaab93..0166a9847095 100644
--- a/arch/ia64/kernel/smp.c
+++ b/arch/ia64/kernel/smp.c
@@ -231,13 +231,16 @@ smp_flush_tlb_all (void)
 void
 smp_flush_tlb_mm (struct mm_struct *mm)
 {
+	preempt_disable();
 	/* this happens for the common case of a single-threaded fork():  */
 	if (likely(mm == current->active_mm && atomic_read(&mm->mm_users) == 1))
 	{
 		local_finish_flush_tlb_mm(mm);
+		preempt_enable();
 		return;
 	}
 
+	preempt_enable();
 	/*
 	 * We could optimize this further by using mm->cpu_vm_mask to track which CPUs
 	 * have been running in the address space.  It's not clear that this is worth the
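For context, the pattern the fix relies on is the standard preempt_disable()/preempt_enable() bracket: any path that looks up the current CPU (directly via smp_processor_id() or indirectly via something like activate_context()) and then acts on that CPU's state must not be migrated in between. Below is a minimal sketch of the idea using a hypothetical per-CPU counter, not the real TLB-flush path:

```c
#include <linux/preempt.h>
#include <linux/smp.h>
#include <linux/percpu.h>

/* Hypothetical per-CPU counter, for illustration only. */
static DEFINE_PER_CPU(unsigned long, flush_count);

static void count_local_flush(void)
{
	/*
	 * Without the preempt_disable()/preempt_enable() bracket, the task
	 * could be migrated between reading smp_processor_id() and touching
	 * the per-CPU data, so the update could land on the wrong CPU (and
	 * with DEBUG_PREEMPT, smp_processor_id() would warn).
	 */
	preempt_disable();
	per_cpu(flush_count, smp_processor_id())++;
	preempt_enable();
}
```

In the actual fix above, the bracket spans the whole fast path so the task cannot migrate between the check of current->active_mm and the call to local_finish_flush_tlb_mm().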