author     David Hildenbrand <david@redhat.com>     2018-08-06 17:54:07 +0200
committer  Janosch Frank <frankja@linux.ibm.com>    2018-10-01 08:52:24 +0200
commit     af4bf6c3d9b45b62da86d928d3fd2d4ddc8549e6
tree       80ed16c57af5bb4ae68a1f12c6977143114ed186
parent     67d49d52ae502eaea8858fbcb97e3c2891f78da9
s390/mm: optimize locking without huge pages in gmap_pmd_op_walk()
Right now we temporarily take the page table lock in
gmap_pmd_op_walk() even though we know we won't need it (when 1 MB
pages can never be mapped into the gmap).

Let's make this a special case, so gmap_protect_range() and
gmap_sync_dirty_log_pmd() will not take the lock when huge pages are
not allowed.

gmap_protect_range() is called quite frequently for managing shadow
page tables in vSIE environments.
Signed-off-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Janosch Frank <frankja@linux.ibm.com>
Message-Id: <20180806155407.15252-1-david@redhat.com>
Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
 arch/s390/mm/gmap.c | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)
diff --git a/arch/s390/mm/gmap.c b/arch/s390/mm/gmap.c
index bb44990c8212..d4fa0a4514e0 100644
--- a/arch/s390/mm/gmap.c
+++ b/arch/s390/mm/gmap.c
@@ -905,10 +905,16 @@ static inline pmd_t *gmap_pmd_op_walk(struct gmap *gmap, unsigned long gaddr)
 	pmd_t *pmdp;
 
 	BUG_ON(gmap_is_shadow(gmap));
-	spin_lock(&gmap->guest_table_lock);
 	pmdp = (pmd_t *) gmap_table_walk(gmap, gaddr, 1);
+	if (!pmdp)
+		return NULL;
 
-	if (!pmdp || pmd_none(*pmdp)) {
+	/* without huge pages, there is no need to take the table lock */
+	if (!gmap->mm->context.allow_gmap_hpage_1m)
+		return pmd_none(*pmdp) ? NULL : pmdp;
+
+	spin_lock(&gmap->guest_table_lock);
+	if (pmd_none(*pmdp)) {
 		spin_unlock(&gmap->guest_table_lock);
 		return NULL;
 	}
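For readers less familiar with the pattern behind this change, here is a
minimal, self-contained sketch in plain C with pthreads. It is not kernel
code: struct table, entry_lookup(), hugepages_allowed and so on are made-up
names used only to illustrate the idea of skipping lock acquisition on a
lookup when a flag that never changes after initialization guarantees the
entry cannot be modified concurrently.

/*
 * Illustrative sketch only -- not the kernel implementation. Shows the
 * "skip the lock when a precondition rules out concurrent modification"
 * pattern applied by this patch. All names here are hypothetical.
 */
#include <pthread.h>
#include <stdbool.h>
#include <stddef.h>

struct table {
	pthread_mutex_t lock;	/* protects entries that may be rewritten */
	bool hugepages_allowed;	/* set once at init, never changed later */
	long *entries;
	size_t nr_entries;
};

/* Return a pointer to a present entry, or NULL if absent. */
long *entry_lookup(struct table *t, size_t idx)
{
	long *e;

	if (idx >= t->nr_entries)
		return NULL;
	e = &t->entries[idx];

	/*
	 * Fast path: if huge pages can never appear, nobody will rewrite
	 * this entry under us, so taking the lock is unnecessary.
	 */
	if (!t->hugepages_allowed)
		return *e ? e : NULL;

	/* Slow path: hold the lock so the entry cannot change while in use. */
	pthread_mutex_lock(&t->lock);
	if (!*e) {
		pthread_mutex_unlock(&t->lock);
		return NULL;
	}
	return e;	/* caller must drop t->lock when done */
}

The sketch mirrors the patch's reasoning: the fast path is safe only because
the flag (like allow_gmap_hpage_1m in the real code) is fixed before the
lookups start, so callers on the fast path never need the matching unlock in
their exit paths.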