|
If the driver is used on an ARM platform with SPARSE_IRQ defined, the
semantics of NR_IRQS are different (it is the minimal number of virtual
irqs) and by default it is set to 16, see arch/arm/include/asm/irq.h.
This value may be less than the actual number of virtual irqs, which
may break the driver's initialization. Removing the check allows the
driver to be used on such platforms, and, as long as the irq controller
driver works correctly, the check is not needed on legacy platforms
either.
Fixes a runtime problem:
lpc-eth 31060000.ethernet: error getting resources.
lpc_eth: lpc-eth: not found (-6).
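For illustration, a sketch of the kind of check being removed (shape
only, not the verbatim driver code):

    irq = platform_get_irq(pdev, 0);
    /* before: rejected valid irqs, since with SPARSE_IRQ the value
     * of NR_IRQS is only a floor for the virtual irq space */
    if (!res || irq < 0 || irq >= NR_IRQS)
            goto err_out;   /* -ENXIO, "error getting resources." */
    /* after: the NR_IRQS comparison is simply dropped */
    if (!res || irq < 0)
            goto err_out;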
Signed-off-by: Vladimir Zapolskiy <vz@mleia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
qdisc_tree_decrease_qlen() suffers from two problems on multiqueue
devices.
The first problem is that it updates sch->q.qlen and sch->qstats.drops
on the mq/mqprio root qdisc, while it should not: Daniele
reported underflow errors:
[ 681.774821] PAX: sch->q.qlen: 0 n: 1
[ 681.774825] PAX: size overflow detected in function qdisc_tree_decrease_qlen net/sched/sch_api.c:769 cicus.693_49 min, count: 72, decl: qlen; num: 0; context: sk_buff_head;
[ 681.774954] CPU: 2 PID: 19 Comm: ksoftirqd/2 Tainted: G O 4.2.6.201511282239-1-grsec #1
[ 681.774955] Hardware name: ASUSTeK COMPUTER INC. X302LJ/X302LJ, BIOS X302LJ.202 03/05/2015
[ 681.774956] ffffffffa9a04863 0000000000000000 0000000000000000 ffffffffa990ff7c
[ 681.774959] ffffc90000d3bc38 ffffffffa95d2810 0000000000000007 ffffffffa991002b
[ 681.774960] ffffc90000d3bc68 ffffffffa91a44f4 0000000000000001 0000000000000001
[ 681.774962] Call Trace:
[ 681.774967] [<ffffffffa95d2810>] dump_stack+0x4c/0x7f
[ 681.774970] [<ffffffffa91a44f4>] report_size_overflow+0x34/0x50
[ 681.774972] [<ffffffffa94d17e2>] qdisc_tree_decrease_qlen+0x152/0x160
[ 681.774976] [<ffffffffc02694b1>] fq_codel_dequeue+0x7b1/0x820 [sch_fq_codel]
[ 681.774978] [<ffffffffc02680a0>] ? qdisc_peek_dequeued+0xa0/0xa0 [sch_fq_codel]
[ 681.774980] [<ffffffffa94cd92d>] __qdisc_run+0x4d/0x1d0
[ 681.774983] [<ffffffffa949b2b2>] net_tx_action+0xc2/0x160
[ 681.774985] [<ffffffffa90664c1>] __do_softirq+0xf1/0x200
[ 681.774987] [<ffffffffa90665ee>] run_ksoftirqd+0x1e/0x30
[ 681.774989] [<ffffffffa90896b0>] smpboot_thread_fn+0x150/0x260
[ 681.774991] [<ffffffffa9089560>] ? sort_range+0x40/0x40
[ 681.774992] [<ffffffffa9085fe4>] kthread+0xe4/0x100
[ 681.774994] [<ffffffffa9085f00>] ? kthread_worker_fn+0x170/0x170
[ 681.774995] [<ffffffffa95d8d1e>] ret_from_fork+0x3e/0x70
mq/mqprio have their own ways to report qlen/drops by folding stats on
all their queues, with appropriate locking.
The second problem is that qdisc_tree_decrease_qlen() calls qdisc_lookup()
without proper locking: concurrent qdisc updates could corrupt the list
that qdisc_match_from_root() parses to find a qdisc given its handle.
Fix the first problem by adding a TCQ_F_NOPARENT qdisc flag that lets
qdisc_tree_decrease_qlen() abort its tree traversal as soon as it meets
an mq/mqprio qdisc child.
The second problem can be fixed with RCU protection: qdiscs are already
freed after an RCU grace period, so qdisc_list_add() and
qdisc_list_del() simply have to use the appropriate rcu list variants.
A future patch will add a per struct netdev_queue list anchor, so that
qdisc_tree_decrease_qlen() can have more efficient lookups.
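A hedged sketch of both fixes (simplified from the actual patch):

    /* mq/mqprio flag their children at attach time */
    qdisc->flags |= TCQ_F_ONETXQUEUE | TCQ_F_NOPARENT;

    /* the traversal in qdisc_tree_decrease_qlen() aborts before it
     * can ever touch the mq/mqprio root */
    while ((parentid = sch->parent)) {
            if (sch->flags & TCQ_F_NOPARENT)
                    break;
            sch = qdisc_lookup(qdisc_dev(sch), TC_H_MAJ(parentid));
            /* update the parent's qlen/drops as before */
    }

    /* qdisc_list_add()/qdisc_list_del() switch to RCU list variants,
     * so lookups can run under rcu_read_lock() */
    list_add_tail_rcu(&q->list, &root->list);
    list_del_rcu(&q->list);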
Reported-by: Daniele Fucini <dfucini@gmail.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Cong Wang <cwang@twopensource.com>
Cc: Jamal Hadi Salim <jhs@mojatatu.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Each openvswitch tunnel vport (vxlan, gre, geneve) holds a reference
to the underlying tunnel device, but never releases it when such a
device is deleted.
Deleting the underlying device via the ip tool causes the kernel to
hang in the netdev_wait_allrefs() loop.
This commit ensures that on device unregistration dp_detach_port_notify()
is called for all vports that hold the device reference, properly
releasing it.
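A sketch of the notifier-side handling (function names are from the
ovs code; the exact matching logic is simplified here):

    static int dp_device_event(struct notifier_block *unused,
                               unsigned long event, void *ptr)
    {
            struct net_device *dev = netdev_notifier_info_to_dev(ptr);
            struct vport *vport;

            if (event != NETDEV_UNREGISTER)
                    return NOTIFY_DONE;

            vport = ovs_netdev_get_vport(dev);
            if (vport && !ovs_is_internal_dev(dev))
                    dp_detach_port_notify(vport); /* drops the dev ref */
            return NOTIFY_DONE;
    }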
Fixes: 614732eaa12d ("openvswitch: Use regular VXLAN net_device device")
Fixes: b2acd1dc3949 ("openvswitch: Use regular GRE net_device instead of vport")
Fixes: 6b001e682e90 ("openvswitch: Use Geneve device.")
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Acked-by: Flavio Leitner <fbl@sysclose.org>
Acked-by: Pravin B Shelar <pshelar@nicira.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The non-PCI builds of the 0-day test project are failing:
On Thu, 2015-12-03 at 05:02 +0800, kbuild test robot wrote:
> warning: (SCSI_MPT2SAS) selects SCSI_MPT3SAS which has unmet direct
> dependencies (SCSI_LOWLEVEL && PCI && SCSI)
The problem is that select and depends don't interact because Kconfig doesn't
have a SAT solver: depends picks up dependencies and select does onward
selects, but select doesn't pick up dependencies. To fix this, we need to add
the correct dependencies to the MPT2SAS option, like this:
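A sketch of the resulting Kconfig entry (the option text is
illustrative; the point is the added depends line):

    config SCSI_MPT2SAS
            tristate "Legacy MPT2SAS config option"
            depends on PCI && SCSI
            select SCSI_MPT3SAS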
Reported-by: kbuild test robot <fengguang.wu@intel.com>
Fixes: b840c3627b6f4f856b333a14a72f8ed86da2f86c
Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
|
|
When a multicast group is joined on a socket, a struct ip_mc_socklist
is appended to the socket's mc_list, containing information about the
joined group.
If the interface is hot-unplugged, this entry becomes stale. Prior to
commit 52ad353a5344f ("igmp: fix the problem when mc leave group") it
was possible to remove the stale entry by performing an
IP_DROP_MEMBERSHIP, passing either the old ifindex or the ip address of
the interface. However, that fix enforces that the interface must
still exist. Thus with time the number of stale entries grows, until
sysctl_igmp_max_memberships is reached and it is then not possible to
join any more groups.
The previous patch fixes the case where an IP_DROP_MEMBERSHIP is
performed without specifying the interface, either by ifindex or ip
address. Here, however, we do supply one of these. So loosen the
restriction on device existence to only apply when the interface has
not been specified. This restores the ability to clean up the stale
entries.
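A minimal sketch of the loosened check in ip_mc_leave_group() (field
and helper names are from the real code; the control flow is
simplified):

    in_dev = ip_mc_find_dev(net, imr);
    /* fail with -ENODEV only when the caller named no interface;
     * if an ifindex or address was given, allow removal of the
     * stale entry even though the device is gone */
    if (!in_dev && !imr->imr_ifindex && !imr->imr_address.s_addr)
            return -ENODEV;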
Signed-off-by: Andrew Lunn <andrew@lunn.ch>
Fixes: 52ad353a5344f ("igmp: fix the problem when mc leave group")
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Dmitry Vyukov reported a memory leak with IPv6 SCTP sockets.
We need to call inet6_destroy_sock() to properly release
inet6 specific fields.
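The fix amounts to giving the IPv6 SCTP code a destroy callback that
also tears down the inet6 state; a sketch, close to the actual change:

    static void sctp_v6_destroy_sock(struct sock *sk)
    {
            sctp_destroy_sock(sk);
            inet6_destroy_sock(sk); /* releases inet6-specific fields */
    }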
Reported-by: Dmitry Vyukov <dvyukov@google.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/bluetooth/bluetooth
Johan Hedberg says:
====================
pull request: bluetooth 2015-12-01
Here's a Bluetooth fix for the 4.4-rc series that fixes a memory leak of
the Security Manager L2CAP channel that'll happen for every LE
connection.
Please let me know if there are any issues pulling. Thanks.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
aarch64 doesn't have a native store-immediate instruction; such an
operation has to be implemented with the instruction sequence below:
Load immediate to register
Store register
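In the JIT this looks roughly like the following (close to, but not
necessarily identical to, the actual patch):

    case BPF_ST | BPF_MEM | BPF_W:
            /* load the offset and the immediate into scratch
             * registers, then do a register store */
            emit_a64_mov_i(1, tmp2, off, ctx);
            emit_a64_mov_i(1, tmp, imm, ctx);
            emit(A64_STR32(tmp, dst, tmp2), ctx);
            break;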
Signed-off-by: Yang Shi <yang.shi@linaro.org>
CC: Zi Shen Lim <zlim.lnx@gmail.com>
CC: Xi Wang <xi.wang@gmail.com>
Reviewed-by: Zi Shen Lim <zlim.lnx@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
While testing the np->opt RCU conversion, I found that UDP/IPv6 was
using a mixture of xchg() and sk_dst_lock to protect concurrent changes
to sk->sk_dst_cache, leading to possible corruptions and crashes.
ip6_sk_dst_lookup_flow() uses sk_dst_check() anyway, so the simplest
way to fix the mess is to remove sk_dst_lock completely, as we did for
IPv4.
__ip6_dst_store() and ip6_dst_store() share the same implementation.
Since sk_setup_caps() can be called with or without the socket lock
held, we have to use sk_dst_set() instead of __sk_dst_set().
Note that I had to move the "np->dst_cookie = rt6_get_cookie(rt);"
in ip6_dst_store() before the sk_setup_caps(sk, dst) call.
This is because ip6_dst_store() can be called from process context,
without any lock held.
As soon as the dst is installed in sk->sk_dst_cache, it can be freed
by another cpu doing a concurrent ip6_dst_store().
Dereferencing the dst before installing it is needed to make sure no
use-after-free can trigger.
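A sketch of the reordering (helper names are real; the body is
simplified from the actual inline):

    static inline void ip6_dst_store(struct sock *sk, struct dst_entry *dst,
                                     const struct in6_addr *daddr,
                                     const struct in6_addr *saddr)
    {
            struct ipv6_pinfo *np = inet6_sk(sk);
            struct rt6_info *rt = (struct rt6_info *)dst;

            /* dereference dst *before* publishing it: once visible in
             * sk->sk_dst_cache, a concurrent ip6_dst_store() on
             * another cpu may free it */
            np->dst_cookie = rt6_get_cookie(rt);
            sk_dst_set(sk, dst); /* lockless-safe, unlike __sk_dst_set() */
    }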
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reported-by: Dmitry Vyukov <dvyukov@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This patch completes the work I did in commit 45f6fad84cc3
("ipv6: add complete rcu protection around np->opt"), as I missed
the sctp part.
This simply makes sure np->opt is used with proper RCU locking
and accessors.
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
As the comment indicates, this can only fail gracefully when it is
called from compute_config. Fortunately that is now what happens, so
the FIXME can be removed and the DRM_ERROR downgraded.
Link: http://patchwork.freedesktop.org/patch/msgid/1448360945-5723-3-git-send-email-maarten.lankhorst@linux.intel.com
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
|
|
In intel_prepare_plane_fb, if the fb is backed by a dma-buf, wait for
its exclusive fence.
v2: First commit
v3: Remove object_name_lock acquire
Move wait from intel_atomic_commit() to intel_prepare_plane_fb()
v4: Wait only on exclusive fences, interruptible with no timeout
v5: Style tweaks to more closely match rest of file
v6: Properly handle interrupted waits
v7: No change
v8: No change
Link: https://patchwork.kernel.org/patch/7704181/
Signed-off-by: Alex Goins <agoins@nvidia.com>
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
|
|
If a buffer is backed by dmabuf, wait on its reservation object's exclusive
fence before flipping.
v2: First commit
v3: Remove object_name_lock acquire
v4: Move wait ahead of mark_page_flip_active
Use crtc->primary->fb to get GEM object instead of pending_flip_obj
use_mmio_flip() return true when exclusive fence is attached
Wait only on exclusive fences, interruptible with no timeout
v5: Move wait from do_mmio_flip to mmio_flip_work_func
Style tweaks to more closely match rest of file
v6: Change back to uninterruptible wait to match __i915_wait_request due to
inability to properly handle interrupted wait.
Warn on error code from waiting.
v7: No change
v8: Test for !reservation_object_signaled_rcu(test_all=FALSE) instead of
obj->base.dma_buf->resv->fence_excl
Link: https://patchwork.kernel.org/patch/7704181/
Signed-off-by: Alex Goins <agoins@nvidia.com>
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
|
|
Use the first retired request on a new context to unpin
the old context. This ensures that the hw context remains
bound until it has been written back to by the GPU.
Now that the context is pinned until later in the request/context
lifecycle, it no longer needs to be pinned from context_queue to
retire_requests.
This fixes an issue with GuC submission where the GPU might not
have finished writing back the context before it is unpinned. This
results in a GPU hang.
v2: Moved the new pin to cover GuC submission (Alex Dai)
Moved the new unpin to request_retire to fix coverage leak
v3: Added switch to default context if freeing a still pinned
context just in case the hw was actually still using it
v4: Unwrapped context unpin to allow calling without a request
v5: Only create a switch to idle context if the ring doesn't
already have a request pending on it (Alex Dai)
Rename unsaved to dirty to avoid double negatives (Dave Gordon)
Changed _no_req postfix to __ prefix for consistency (Dave Gordon)
Split out per engine cleanup from context_free as it
was getting unwieldy
Corrected locking (Dave Gordon)
v6: Removed some bikeshedding (Mika Kuoppala)
Added explanation of the GuC hang that this fixes (Daniel Vetter)
v7: Removed extra per request pinning from ring reset code (Alex Dai)
Added forced ring unpin/clean in error case in context free (Alex Dai)
Signed-off-by: Nick Hoath <nicholas.hoath@intel.com>
Issue: VIZ-4277
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: David Gordon <david.s.gordon@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Alex Dai <yu.dai@intel.com>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Reviewed-by: Alex Dai <yu.dai@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
|
|
For now, remove the spinlocks that protected the GuC's
statistics block and work queue; they are only accessed
by code that already holds the global struct_mutex, and
so are redundant (until the big struct_mutex rewrite!).
The specific problem that the spinlocks caused was that
if the work queue was full, the driver would try to
spinwait for one jiffy, but with interrupts disabled the
jiffy count would not advance, leading to a system hang.
The issue was found using test case igt/gem_close_race.
The new version will usleep() instead, still holding
the struct_mutex but without any spinlocks.
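A sketch of the new wait (the helper and the bound are illustrative,
not the exact driver code):

    unsigned long timeout = jiffies + msecs_to_jiffies(10);

    /* we hold struct_mutex only; no spinlock is held and irqs are
     * enabled, so sleeping is fine and jiffies keep advancing */
    while (guc_wq_space(guc) < wqi_size) {  /* hypothetical helper */
            if (time_after(jiffies, timeout))
                    return -ETIMEDOUT;
            usleep_range(1000, 2000);
    }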
v4: Reorganize commit message (Dave Gordon)
v3: Remove unnecessary whitespace churn
v2: Clean up wq_lock too
v1: Clean up host2guc lock as well
Signed-off-by: Alex Dai <yu.dai@intel.com>
Reviewed-by: Dave Gordon <david.s.gordon@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1449104189-27591-1-git-send-email-yu.dai@intel.com
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
|
|
There's no need to stop and restart FBC, which is quite expensive as
we have to revalidate the CRTC state. After flushing a drawing
operation we know the CRTC state hasn't changed, so a nuke
(recompress) should be fine.
v2: Make it simpler (Chris).
v3: Rewrite the patch again due to patch order changes.
v4: Rewrite commit message (Chris).
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/
|
|
When running Cinnamon I see way too many pairs of these messages: many
per second. Get rid of them as they're just telling us FBC is working
as expected. We already have the messages for enable/disable, so we
don't really need messages for activation/deactivation.
v2: Rebase after changing the patch order.
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/
|
|
Directly call intel_fbc_calculate_cfb_size() in the only place that
actually needs it, and use the proper check before removing the stolen
node. IMHO, this change makes our code easier to understand.
v2: Use drm_mm_node_allocated() (Chris).
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/
|
|
This was already on my TODO list, and was requested both by Chris and
Ville, for different reasons. The advantages are avoiding a frequent
malloc/free pair, and the locality of having the work structure
embedded in dev_priv. The maximum used memory is also smaller since
previously we could have multiple allocated intel_fbc_work structs at
the same time, and now we'll always have a single one - the one
embedded in dev_priv. Of course, we're now using a little more memory
in the cases where there's nothing scheduled.
The biggest challenge here is to keep everything synchronized the way
it was before.
Currently, when we try to activate FBC, we allocate a new
intel_fbc_work structure. Then later when we conclude we must delay
the FBC activation a little more, we allocate a new intel_fbc_work
struct, and then adjust dev_priv->fbc.fbc_work to point to the new
struct. So when the old work runs - at intel_fbc_work_fn() - it will
check that dev_priv->fbc.fbc_work points to something else, so it does
nothing. Everything is also protected by fbc.lock.
Just cancelling the old delayed work doesn't work because we might
just cancel it after the work function already started to run, but
while it is still waiting to grab fbc.lock. That's why we use the
"dev_priv->fbc.fbc_work == work" check described in the paragraph
above.
So now that we have a single work struct we have to introduce a new
way to synchronize everything. So we're making the work function a
normal work instead of a delayed work, and it will be responsible for
sleeping the appropriate amount of time itself. This way, after it
wakes up it can grab the lock, ask "were we delayed or cancelled?" and
then go back to sleep, enable FBC or give up.
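The resulting work function, as a hedged sketch (field names such as
scheduled and scheduled_jiffies, and the low-level activate helper,
are illustrative rather than the exact ones used):

    static void intel_fbc_work_fn(struct work_struct *__work)
    {
            struct drm_i915_private *dev_priv =
                    container_of(__work, struct drm_i915_private,
                                 fbc.work.work);
            struct intel_fbc_work *work = &dev_priv->fbc.work;

    retry:
            /* sleep outside fbc.lock until the scheduled time */
            wait_remaining_ms_from_jiffies(work->scheduled_jiffies, 50);

            mutex_lock(&dev_priv->fbc.lock);
            if (!work->scheduled) {
                    /* cancelled while we slept: give up */
                    mutex_unlock(&dev_priv->fbc.lock);
                    return;
            }
            if (time_before(jiffies, work->scheduled_jiffies +
                                     msecs_to_jiffies(50))) {
                    /* re-armed while we slept: go back to sleep */
                    mutex_unlock(&dev_priv->fbc.lock);
                    goto retry;
            }
            work->scheduled = false;
            __intel_fbc_activate(dev_priv); /* hypothetical helper */
            mutex_unlock(&dev_priv->fbc.lock);
    }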
v2:
- Spelling fixes.
- Rebase after changing the patch order.
- Fix ms/jiffies confusion.
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> (v1)
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/
|
|
This moves the pre-gen4 check from update() to enable(). The HAS_DDI
in the original code is not needed since only gen 2/3 have the plane
swapping code.
v2: Rebase.
v3: Extract fbc_on_plane_a_only() (Chris).
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/
|
|
One of the problems with the current code is that it frees the CFB and
releases its drm_mm node as soon as we flip FBC's enable bit. This is
bad because after we disable FBC the hardware may still use the CFB
for the rest of the frame, so in theory we should only release the
drm_mm node one frame after we disable FBC. Otherwise, a stolen memory
allocation done right after an FBC disable may result in either
corrupted memory for the new owner of that memory region or corrupted
screen/underruns in case the new owner changes it while the hardware
is still reading it. This case is not exactly easy to reproduce since
we currently don't do a lot of stolen memory allocations, but I see
patches on the mailing list trying to expose stolen memory to user
space, so races will be possible.
I thought about three different approaches to solve this, and they all
have downsides.
The first approach would be to simply use multiple drm_mm nodes and
freeing the unused ones only after a frame has passed. The problem
with this approach is that since stolen memory is rather small,
there's a risk we just won't be able to allocate a new CFB from stolen
if the previous one was not freed yet. This could happen in case we
quickly disable FBC from pipe A and decide to enable it on pipe B, or
just if we change pipe A's fb stride while FBC is enabled.
The second approach would be similar to the first one, but maintaining
a single drm_mm node and keeping track of when it can be reused. This
would remove the disadvantage of not having enough space for two
nodes, but would create the new problem where we may not be able to
enable FBC at the point intel_fbc_update() is called, so we would have
to add more code to retry updating FBC after the time has passed. And
that can quickly get too complex since we can get invalidate, flush,
disable and other calls in the middle of the wait.
Both solutions above - and also the current code - have the problem
that we unnecessarily free+realloc FBC during invalidate+flush
operations even if the CFB size doesn't change.
The third option would be to move the allocation/deallocation to
enable/disable. This makes sure that the pipe is always disabled when
we allocate/deallocate the CFB, so there's no risk that the FBC
hardware may read or write to the memory right after it is freed from
drm_mm. The downside is that it is possible for user space to change
the buffer stride without triggering a disable/enable - only
deactivate/activate -, so we'll have to handle this case somehow - see
igt's kms_frontbuffer_tracking test, fbc-stridechange subtest. It
could be possible to implement a way to free+alloc the CFB during said
stride change, but it would involve a lot of book-keeping - exactly as
mentioned above - just for one case, so for now I'll keep it simple and
just deactivate FBC. Besides, we may not even need to disable FBC
since we do CFB over-allocation.
Note from Chris: "Starting a fullscreen client that covers a single
monitor in a multi-monitor setup will trigger a change in stride on
one of the CRTCs (the monitors will be flipped independently).". It
shouldn't be a huge problem if we lose FBC on multi-monitor setups
since these setups already have problems reaching deep PC states
anyway.
v2: Rebase after changing the patch order.
v3:
- Remove references to the stride change case being "uncommon" and
paste Chris' example.
- Rebase after a change in a previous patch.
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/
|
|
The goal is to call FBC enable/disable only once per modeset, while
activate/deactivate/update will be called multiple times.
The enable() function will be responsible for deciding if a CRTC will
have FBC on it and then it will "lock" FBC on this CRTC: it won't be
possible to change FBC's CRTC until disable(). With this, all checks
and resource acquisition that only need to be done once per modeset
can be moved from update() to enable(). And then the update(),
activate() and deactivate() code will also get simpler since they
won't need to worry about the CRTC being changed.
The disable() function will do the reverse operation of enable(). One
of its features is that it should only be called while the pipe is
already off. This guarantees that FBC is stopped and nothing is
using the CFB.
With this, the activate() and deactivate() functions just start and
temporarily stop FBC. They are the ones touching the hardware enable
bit, so HW state reflects dev_priv->crtc.active.
The last function remaining is update(). A lot of times I thought
about renaming update() to activate() or try_to_activate() since it's
called when we want to activate FBC. The thing is that update() may
not only decide to activate FBC, but also deactivate or keep it on the
same state, so I'll leave this name for now.
Moving code to enable() and disable() will also help in case we decide
to move FBC to pipe_config or something else later.
The current patch only puts the very basic code on enable() and
disable(). The next commits will take care of moving more stuff from
update() to the new functions.
v2:
- Rebase.
- Improve commit message (Chris).
v3: Rebase after changing the patch order.
v4: Rebase again after upstream changes.
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/
|
|
The long term goal is to have enable/disable as the higher level
functions and activate/deactivate as the lower level functions, just
like we do for PSR and for the CRTC. This way, we'll run enable and
disable once per modeset, while update, activate and deactivate will
be run many times. With this, we can move the checks and code that
need to run only once per modeset to enable(), making the code simpler
and possibly a little faster.
This patch is just the first step on the conversion: it starts by
converting the current low level functions from enable/disable to
activate/deactivate. This patch by itself has no benefits other than
making review and rebase easier. Please see the next patches for more
details on the conversion.
v2:
- Rebase.
- Improve commit message (Chris).
v3: Rebase after changing the patch order.
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/
|
|
There's no need to reevaluate the status of every single crtc when a
single crtc changes its state.
With this, we're cutting the case where due to a change in pipe B,
intel_fbc_update() is called, then intel_fbc_find_crtc() concludes FBC
should be enabled on pipe A, then it completely rechecks the state of
pipe A only to conclude FBC should remain enabled on pipe A. If any
change on pipe A triggers a need to recompute whether FBC is valid on
pipe A, then at some point someone is going to call
intel_fbc_update(PIPE_A).
The addition of intel_fbc_deactivate() is necessary so we keep track
of the previously selected CRTC when we do invalidate/flush. We're
also going to continue the enable/disable/activate/deactivate concept
in the next patches.
v2: Rebase.
v3: Rebase after changing the patch order.
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/
|
|
This thing where we need to get the crtc either from the work
structure or the fbc structure itself is confusing and unnecessary.
Set fbc.crtc right when scheduling the enable work so we can always
use it.
The problem is not what gets passed and how to retrieve it. The
problem is that when we're in the other parts of the code we always
have to keep in mind that if FBC is already enabled we have to get the
CRTC from place A, if FBC is scheduled we have to get the CRTC from
place B, and if it's disabled there's no CRTC. Having a single place
to retrieve the CRTC from allows us to treat the "is enabled" and "is
scheduled" cases as the same case, reducing the mistake surface. I
guess I should add this to the commit message.
Besides the immediate advantages, this is also going to make one of
the next commits much simpler. And even later, when we introduce
enable/disable + activate/deactivate, this will be even simpler as
we'll set the CRTC at enable time. So all the
activate/deactivate/update code can just look at the single CRTC
variable regardless of the current state.
v2: Improve commit message (Chris).
v3: Rebase after changing the patch order.
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/
|
|
In function find_compression_threshold() we try to over-allocate CFB
space in order to reduce reallocations and fragmentation, and we're
not considering that at the CFB size check. Consider it.
There is also a longer-term plan to kill
dev_priv->fbc.uncompressed_size, but this will come later.
v2: Use drm_mm_node_allocated() (Chris).
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/
|
|
This can happen when we run out of encoders for a multi-crtc modeset,
or also when userspace is silly and tries to clone multiple connectors
that need the same encoder on the same crtc.
Reported-and-Tested-and-Reviewed-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1449136154-11588-1-git-send-email-daniel.vetter@ffwll.ch
|
|
Proxy entries can have a null net_device pointer.
Signed-off-by: Konstantin Khlebnikov <koct9i@gmail.com>
Fixes: 84920c1420e2 ("net: Allow ipv6 proxies and arp proxies be shown with iproute2")
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Dmitry Vyukov reported that the user could trigger a kernel warning by
using a large len value for getsockopt SCTP_GET_LOCAL_ADDRS, as that
value directly affects the value used as a kmalloc() parameter.
This patch thus switches the allocation flags for all user-controllable
kmalloc sizes to GFP_USER, to put some more restrictions on them, and
also disables the warning, as it is not necessary.
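In code this is just the flag change, sketched here (assuming the
warning is silenced via __GFP_NOWARN):

    addrs = kmalloc(space_left, GFP_USER | __GFP_NOWARN);
    if (!addrs)
            return -ENOMEM; /* len was user-controlled; fail quietly */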
Signed-off-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
They don't need to be any bigger than that, and with this we start a new
bitfield for tracking association runtime state, such as a zero-window
situation.
Signed-off-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
Acked-by: Vlad Yasevich <vyasevich@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This patch addresses multiple problems:
UDP/RAW sendmsg() needs to get a stable struct ipv6_txoptions
while the socket is not locked: other threads can change np->opt
concurrently. Dmitry posted a syzkaller
(http://github.com/google/syzkaller) program demonstrating a
use-after-free.
Starting with TCP/DCCP lockless listeners, tcp_v6_syn_recv_sock()
and dccp_v6_request_recv_sock() also need to use RCU protection
to dereference np->opt once (before calling ipv6_dup_options()).
This patch adds full RCU protection to np->opt.
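The access pattern ends up looking like this refcounting accessor, a
sketch close to the helper the patch introduces but simplified:

    static inline struct ipv6_txoptions *txopt_get(const struct ipv6_pinfo *np)
    {
            struct ipv6_txoptions *opt;

            rcu_read_lock();
            opt = rcu_dereference(np->opt);
            if (opt && !atomic_inc_not_zero(&opt->refcnt))
                    opt = NULL;     /* being freed; act as if unset */
            rcu_read_unlock();
            return opt;             /* caller releases with txopt_put() */
    }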
Reported-by: Dmitry Vyukov <dvyukov@google.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Acked-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
For large map->value_size the user space can trigger memory allocation warnings like:
WARNING: CPU: 2 PID: 11122 at mm/page_alloc.c:2989
__alloc_pages_nodemask+0x695/0x14e0()
Call Trace:
[< inline >] __dump_stack lib/dump_stack.c:15
[<ffffffff82743b56>] dump_stack+0x68/0x92 lib/dump_stack.c:50
[<ffffffff81244ec9>] warn_slowpath_common+0xd9/0x140 kernel/panic.c:460
[<ffffffff812450f9>] warn_slowpath_null+0x29/0x30 kernel/panic.c:493
[< inline >] __alloc_pages_slowpath mm/page_alloc.c:2989
[<ffffffff81554e95>] __alloc_pages_nodemask+0x695/0x14e0 mm/page_alloc.c:3235
[<ffffffff816188fe>] alloc_pages_current+0xee/0x340 mm/mempolicy.c:2055
[< inline >] alloc_pages include/linux/gfp.h:451
[<ffffffff81550706>] alloc_kmem_pages+0x16/0xf0 mm/page_alloc.c:3414
[<ffffffff815a1c89>] kmalloc_order+0x19/0x60 mm/slab_common.c:1007
[<ffffffff815a1cef>] kmalloc_order_trace+0x1f/0xa0 mm/slab_common.c:1018
[< inline >] kmalloc_large include/linux/slab.h:390
[<ffffffff81627784>] __kmalloc+0x234/0x250 mm/slub.c:3525
[< inline >] kmalloc include/linux/slab.h:463
[< inline >] map_update_elem kernel/bpf/syscall.c:288
[< inline >] SYSC_bpf kernel/bpf/syscall.c:744
To avoid a kmalloc that can never succeed (order >= MAX_ORDER), check that
elem->value_size and the computed elem_size are within limits for both hash
and array type maps.
Also add __GFP_NOWARN to kmalloc(value_size | elem_size) to avoid OOM warnings.
Note kmalloc(key_size) is highly unlikely to trigger OOM, since key_size <= 512,
so keep those kmalloc-s as-is.
Large value_size can cause integer overflows in elem_size and map.pages
formulas, so check for that as well.
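A sketch of the added limits (the exact bound in the patch differs;
this shows the shape):

    /* reject sizes that kmalloc() can never satisfy or that would
     * overflow the elem_size / map.pages math */
    if (attr->value_size >= KMALLOC_MAX_SIZE -
        MAX_BPF_STACK - sizeof(struct htab_elem))
            return ERR_PTR(-E2BIG);

    /* user-driven allocation sizes no longer warn on failure */
    value = kmalloc(map->value_size, GFP_USER | __GFP_NOWARN);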
Fixes: aaac3ba95e4c ("bpf: charge user for creation of BPF maps and programs")
Reported-by: Dmitry Vyukov <dvyukov@google.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Marcin Wojtas says:
====================
Marvell Armada XP/370/38X Neta fixes
I'm sending v4 with corrected commit log of the last patch, in order to
avoid possible conflicts between the branches as suggested by Gregory
Clement.
Best regards,
Marcin Wojtas
Changes from v4:
* Correct commit log of patch 6/6
Changes from v2:
* Style fixes in patch updating mbus protection
* Remove redundant stable notifications except for patch 4/6
Changes from v1:
* update MBUS windows access protection register once, after whole loop
* add fixing value of MVNETA_RXQ_INTR_ENABLE_ALL_MASK
* add fixing error path for skb_build()
* add possibility of setting custom TX IP checksum limit in DT property
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The Ethernet controller found in the Armada 38x SoC family supports
TCP/IP checksumming with frame sizes larger than 1600 bytes, but
only on port 0.
This commit enables it by setting 'tx-csum-limit' to 9800B in
'ethernet@70000' node.
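The node change, sketched (surrounding properties omitted):

    ethernet@70000 {
            compatible = "marvell,armada-370-neta";
            tx-csum-limit = <9800>; /* bytes; port 0 only */
    };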
Signed-off-by: Marcin Wojtas <mw@semihalf.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Since Armada 38x SoC can support IP checksum for jumbo frames only on
a single port, it means that this feature should be enabled per-port,
rather than for the whole SoC.
This patch enables setting a custom TX IP checksum limit by adding a new
optional property to the mvneta device tree node. If not used, by
default 1600B is set for "marvell,armada-370-neta" and 9800B for other
strings, which ensures backward compatibility. Binding documentation
is updated accordingly.
Signed-off-by: Marcin Wojtas <mw@semihalf.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
In the RX processing path there is a single error path shared by both a
descriptor ring refill failure and an skb build failure. This is not
correct, because after a successful refill the ring is already updated
with the newly allocated buffer. Then, in case build_skb() failed, the
hitherto code left the original buffer unmapped.
This patch fixes the above situation by swapping the error check of the
skb build with the DMA-unmap of the original buffer.
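In rough outline (names from the driver; details elided):

    skb = build_skb(data, frag_size);
    if (!skb)
            goto err_drop_frame;    /* original buffer stays mapped,
                                     * still owned by the ring */

    /* unmap the original buffer only once the skb exists */
    dma_unmap_single(dev->dev.parent, phys_addr,
                     MVNETA_RX_BUF_SIZE(pp->pkt_size),
                     DMA_FROM_DEVICE);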
Signed-off-by: Marcin Wojtas <mw@semihalf.com>
Acked-by: Simon Guinot <simon.guinot@sequanux.org>
Cc: <stable@vger.kernel.org> # v4.2+
Fixes: a84e32894191 ("net: mvneta: fix refilling for Rx DMA buffers")
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
A value originally defined in the driver was inappropriate. Even though
ingress was somehow working, writing MVNETA_RXQ_INTR_ENABLE_ALL_MASK
to MVNETA_INTR_ENABLE didn't have any effect, because bits [31:16]
are reserved and read-only.
This commit updates MVNETA_RXQ_INTR_ENABLE_ALL_MASK to be compliant with
the controller's documentation.
Signed-off-by: Marcin Wojtas <mw@semihalf.com>
Fixes: c5aff18204da ("net: mvneta: driver for Marvell Armada 370/XP network unit")
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The MVNETA_RXQ_HW_BUF_ALLOC bit, which controls enabling hardware buffer
allocation, was mistakenly set as BIT(1). This commit fixes the assignment.
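As a sketch, the fix is the one-liner below (assuming, per the commit,
that bit 0 is the correct position):

    #define MVNETA_RXQ_HW_BUF_ALLOC  BIT(0)  /* was mistakenly BIT(1) */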
Signed-off-by: Marcin Wojtas <mw@semihalf.com>
Reviewed-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Fixes: c5aff18204da ("net: mvneta: driver for Marvell Armada 370/XP network unit")
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This commit adds the missing configuration of MBUS windows access
protection in the mvneta_conf_mbus_windows() function - a dedicated
variable for that purpose had remained unused since the initial mvneta
support in v3.8. Because of that, the register contents were inherited
from the bootloader.
Signed-off-by: Marcin Wojtas <mw@semihalf.com>
Reviewed-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Fixes: c5aff18204da ("net: mvneta: driver for Marvell Armada 370/XP network unit")
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/broonie/spi
Pull spi fixes from Mark Brown:
"There's one fix for the core here, we weren't reinitialising the
actual transferred length in messages when they get reused which meant
that we'd just keep adding to the length if a message is reused. This
has limited impact since it's only used in error handling cases, but
it will really mess up anything that tries to use it when it triggers.
As ever there's a small collection of driver specific fixes too"
* tag 'spi-fix-v4.4-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/spi:
spi: bugfix: spi_message.transfer_length does not get reset
spi: pl022: handle EPROBE_DEFER for dma
spi: bcm63xx: use correct format string for printing a resource
spi: mediatek: single device does not require cs_gpios
spi: Add missing kerneldoc description for parameter
|
|
For cpufreq drivers which use the setpolicy interface, the policy is
reset to the default after offline->online. This can be reproduced by
setting the default policy of intel_pstate or longrun to ondemand and
then changing it to "performance". After offline and online, setpolicy
will be called with policy=ondemand.
For drivers using governors this condition is handled by storing
last_governor during offline and restoring it during online. The same
should be done for drivers using the setpolicy interface: store
last_policy during offline and restore it during online.
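A sketch mirroring the last_governor logic (the last_policy field
matches the patch; the surrounding code is simplified):

    /* on offline */
    if (has_target())
            strncpy(policy->last_governor, policy->governor->name,
                    CPUFREQ_NAME_LEN);
    else
            policy->last_policy = policy->policy;

    /* on online */
    if (!has_target() && policy->last_policy)
            new_policy.policy = policy->last_policy;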
Signed-off-by: Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
|
|
Provide subsystem-level suspend and resume helpers that can be used to
implement suspend/resume on atomic mode-setting enabled drivers.
v2: simplify locking, enhance kerneldoc comments
v3: pass lock acquisition context by parameter, improve kerneldoc
v4: - remove redundant code (already provided by atomic helpers)
(Maarten Lankhorst)
- move backoff dance from drm_modeset_lock_all_ctx() into suspend
helper (Daniel Vetter)
v5: handle potential EDEADLK from drm_atomic_helper_duplicate_state()
and drm_atomic_helper_disable_all() (Daniel Vetter)
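The intended driver-side pattern, as an illustrative sketch (drm is
the driver's struct drm_device; the saved_state field is a stand-in
for wherever the driver keeps it):

    struct drm_atomic_state *state;

    /* suspend */
    state = drm_atomic_helper_suspend(drm);
    if (IS_ERR(state))
            return PTR_ERR(state);
    priv->saved_state = state;

    /* resume */
    err = drm_atomic_helper_resume(drm, priv->saved_state);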
Signed-off-by: Thierry Reding <treding@nvidia.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1449075005-13937-2-git-send-email-thierry.reding@gmail.com
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
|
|
This function is like drm_modeset_lock_all(), but it takes the lock
acquisition context as a parameter rather than storing it in the DRM
device's mode_config structure.
Implement drm_modeset_{,un}lock_all() in terms of the new function for
better code reuse, and add a note to the kerneldoc that new code should
use the new functions.
v2: improve kerneldoc
v4: rename drm_modeset_lock_all_crtcs() to drm_modeset_lock_all_ctx()
and take mode_config's .connection_mutex instead of .mutex lock to
avoid lock inversion (Daniel Vetter), use drm_modeset_drop_locks()
which is now the equivalent of drm_modeset_unlock_all_ctx()
v5: do not take the dev->mode_config.connection_mutex in
drm_atomic_legacy_backoff() since drm_modeset_lock_all_ctx()
already keeps it, enhance kerneldoc for drm_modeset_lock_all_ctx()
(Daniel Vetter)
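Illustrative usage with an explicit acquire context (the backoff dance
shown here is the standard one):

    struct drm_modeset_acquire_ctx ctx;
    int ret;

    drm_modeset_acquire_init(&ctx, 0);
    retry:
    ret = drm_modeset_lock_all_ctx(dev, &ctx);
    if (ret == -EDEADLK) {
            drm_modeset_backoff(&ctx);
            goto retry;
    }
    /* ... touch modeset state ... */
    drm_modeset_drop_locks(&ctx);
    drm_modeset_acquire_fini(&ctx);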
Signed-off-by: Thierry Reding <treding@nvidia.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1449075005-13937-1-git-send-email-thierry.reding@gmail.com
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
|
|
In the last change here, I neglected to update the cookie in one code
path: when a mgmt-tx has no real cookie sent to userspace as it doesn't
wait for a response, but is off-channel. The original code used the SKB
pointer as the cookie and always assigned the cookie to the TX SKB in
ieee80211_start_roc_work(), but my change turned this around and made
the code rely on a valid cookie being passed in.
Unfortunately, the off-channel no-wait TX path wasn't assigning one at
all, resulting in an uninitialized stack value being used. This wasn't
handed back to userspace as a cookie (since in the no-wait case there
isn't a cookie), but it was tested for non-zero to distinguish between
mgmt-tx and off-channel.
Fix this by assigning a dummy non-zero cookie unconditionally, and get
rid of a misleading comment and some dead code while at it. I'll clean
up the ACK SKB handling separately later.
Fixes: 3b79af973cf4 ("mac80211: stop using pointers as userspace cookies")
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
|
|
DFS channels should not be actively scanned, as we can't be sure
whether we are allowed to or not.
If the current channel is in the DFS band, an active scan might be
performed after CSA, but we have no guarantee about other channels;
therefore it is safer to prevent active scanning entirely.
Cc: stable@vger.kernel.org
Signed-off-by: Antonio Quartulli <antonio@open-mesh.com>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
|
|
Interfaces are being initialized (setup) on addition,
and torn down on removal.
However, the p2p device is being torn down when stopped,
resulting in the next p2p start operation being done
on an uninitialized interface.
Solve it by calling ieee80211_teardown_sdata() only
on interface removal (for the non-netdev case).
Signed-off-by: Eliad Peller <eliadx.peller@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
[squashed in fix to call teardown after unregister]
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
|
|
Sunil Goutham says:
====================
thunderx: miscellaneous fixes
This patch series contains fixes for various issues observed
with BGX and NIC drivers.
Changes from v1:
- Fixed comment style in the first patch of the series
- Removed 'Increase transmit queue length' patch from the series,
will recheck if it's a driver or system issue and resubmit.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Enable or disable a BGX LMAC's RX/TX based on the corresponding VF's
status. Otherwise, when multiple LMACs' physical links are up,
packets from all LMACs whose corresponding VF is not yet
initialized will get forwarded to VF0. This is due to VNIC's default
configuration, where CPI, RSSI etc. point to VF0/QSET0/RQ0.
This patch prevents multiple copies of packets on VF0.
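A sketch of the LMAC toggle (register and field names as in the bgx
driver; treat the details as illustrative):

    void bgx_lmac_rx_tx_enable(int node, int bgx_idx, int lmacid,
                               bool enable)
    {
            struct bgx *bgx = bgx_vnic[(node * MAX_BGX_PER_CN88XX) + bgx_idx];
            u64 cfg;

            if (!bgx)
                    return;

            cfg = bgx_reg_read(bgx, lmacid, BGX_CMRX_CFG);
            if (enable)
                    cfg |= CMR_PKT_RX_EN | CMR_PKT_TX_EN;
            else
                    cfg &= ~(CMR_PKT_RX_EN | CMR_PKT_TX_EN);
            bgx_reg_write(bgx, lmacid, BGX_CMRX_CFG, cfg);
    }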
Signed-off-by: Sunil Goutham <sgoutham@cavium.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Call netif_carrier_on() only if the interface's link is up. Switching
this on upon IFF_UP by default was causing issues with ethernet channel
bonding in LACP mode: the initial NETDEV_CHANGE notification was being
skipped.
Also fixed some issues with link/speed/duplex reporting via ethtool.
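Conceptually (the link_up field is an assumption about the driver's
private state):

    /* report carrier only once the link is actually up */
    if (nic->link_up) {
            netif_carrier_on(netdev);
            netif_tx_wake_all_queues(netdev);
    }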
Signed-off-by: Sunil Goutham <sgoutham@cavium.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|