author    Mauro Carvalho Chehab <mchehab+huawei@kernel.org>    2020-10-04 12:19:12 +0200
committer Mauro Carvalho Chehab <mchehab+huawei@kernel.org>    2020-10-04 12:19:12 +0200
commit    463c43fcd97e493d8a17242f4f000c86fe642ed6
tree      95deff935b99d5ef6c87bd67b7f7be608a120951  /drivers/media/cec
parent    7c9df3ec493e353166527f56ad1540d052b92cd4
parent    a1b8638ba1320e6684aa98233c15255eb803fac7
Merge tag 'v5.9-rc7' into patchwork
Linux 5.9-rc7
* tag 'v5.9-rc7': (683 commits)
Linux 5.9-rc7
mm/thp: Split huge pmds/puds if they're pinned when fork()
mm: Do early cow for pinned pages during fork() for ptes
mm/fork: Pass new vma pointer into copy_page_range()
mm: Introduce mm_struct.has_pinned
mm: validate pmd after splitting
mm: don't rely on system state to detect hot-plug operations
mm: replace memmap_context by meminit_context
arch/x86/lib/usercopy_64.c: fix __copy_user_flushcache() cache writeback
lib/memregion.c: include memregion.h
lib/string.c: implement stpcpy
mm/migrate: correct thp migration stats
mm/gup: fix gup_fast with dynamic page table folding
mm: memcontrol: fix missing suffix of workingset_restore
mm, THP, swap: fix allocating cluster for swapfile by mistake
mm: slab: fix potential double free in ___cache_free
Documentation/llvm: Fix clang target examples
io_uring: ensure async buffered read-retry is setup properly
KVM: SVM: Add a dedicated INVD intercept routine
io_uring: don't unconditionally set plug->nowait = true
...
Diffstat (limited to 'drivers/media/cec')
-rw-r--r--   drivers/media/cec/core/cec-adap.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/media/cec/core/cec-adap.c b/drivers/media/cec/core/cec-adap.c
index 3f016caa45ba..d5d5d28d0b36 100644
--- a/drivers/media/cec/core/cec-adap.c
+++ b/drivers/media/cec/core/cec-adap.c
@@ -1205,7 +1205,7 @@ void cec_received_msg_ts(struct cec_adapter *adap,
 	/* Cancel the pending timeout work */
 	if (!cancel_delayed_work(&data->work)) {
 		mutex_unlock(&adap->lock);
-		flush_scheduled_work();
+		cancel_delayed_work_sync(&data->work);
 		mutex_lock(&adap->lock);
 	}
 	/*
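
The one-line change above swaps flush_scheduled_work(), which blocks until every item queued on the system workqueue has finished, for cancel_delayed_work_sync(), which waits only for the one timeout work item that could not be cancelled. Below is a minimal sketch of that cancel-under-lock pattern; it is not the driver's actual code, and the names my_adap, my_timeout_fn, my_received_msg, and my_adap_init are hypothetical, kept only to show why the mutex is dropped around the synchronous cancel (the work handler may need to take the same lock).

#include <linux/mutex.h>
#include <linux/workqueue.h>

struct my_adap {
	struct mutex lock;
	struct delayed_work timeout_work;
};

/* Runs if the message times out before a reply arrives. */
static void my_timeout_fn(struct work_struct *work)
{
}

static void my_adap_init(struct my_adap *adap)
{
	mutex_init(&adap->lock);
	INIT_DELAYED_WORK(&adap->timeout_work, my_timeout_fn);
}

static void my_received_msg(struct my_adap *adap)
{
	mutex_lock(&adap->lock);

	/*
	 * Try to cancel the pending timeout work. If it is already
	 * executing, drop the lock so the handler cannot deadlock on
	 * it, then wait for this one work item only. This replaces the
	 * heavier flush_scheduled_work(), which would wait for every
	 * item on the shared system workqueue.
	 */
	if (!cancel_delayed_work(&adap->timeout_work)) {
		mutex_unlock(&adap->lock);
		cancel_delayed_work_sync(&adap->timeout_work);
		mutex_lock(&adap->lock);
	}

	mutex_unlock(&adap->lock);
}

The narrower call is also the safer one: flushing the whole system workqueue can stall on unrelated work items and is discouraged in drivers, whereas cancel_delayed_work_sync() only synchronizes with the specific delayed_work being torn down.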