|
Currently we do an entity reset when we detect an MC reboot.
This messes up SRIOV because it leaves VFs orphaned. The extra
reset is rather redundant anyway, since the MC reboot will have
basically reset everything.
This change replaces the entity reset after MC reboot with a
simpler datapath reset that reallocates resources but doesn't
perform the entity reset.
Signed-off-by: Shradha Shah <sshah@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
rtnetlink calls ndo_get_vf_config when compiling information
about a network interface, so that the VFs associated with a PF
can be listed (e.g. ip link show).
Implement a response to this entry point and return the PF-set MAC
address for the VF in ndo_get_vf_config.
Signed-off-by: Shradha Shah <sshah@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Implement a response to this entrypoint.
The ndo_set_vf_mac() entrypoint is only exposed in the driver if
CONFIG_SFC_SRIOV is defined.
Signed-off-by: Shradha Shah <sshah@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
In order to avoid MC bugs the flags field needs to be set to 0.
Instead of explicitly clearing out the flags individually, a
better way to do this is to memset the MCDI_BUF to 0.
Signed-off-by: Shradha Shah <sshah@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
A VF's MAC address is set by its parent PF and added to its vport.
To get this MAC address, the VF must use MC_CMD_VPORT_GET_MAC_ADDRESSES.
In the current scheme, a VF's vport should only have one MAC address,
so warn if this is not the case.
Signed-off-by: Shradha Shah <sshah@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
If MCDI timeouts are encountered during efx_ef10_filter_table_remove(),
an FLR will be queued, but efx->filter_state will still be kfree()d.
The queued FLR will then call efx_ef10_filter_table_restore(), which
will try to use efx->filter_state. This previously caused a panic.
This patch adds an rwsem to protect the existence of efx->filter_state,
separately from the spinlock protecting its contents. Users which can
race against efx_ef10_filter_table_remove() should down_read this rwsem.
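For illustration only, a minimal userspace analogue of the scheme (pthreads
stand in for the kernel rwsem and spinlock, and all names here are made up,
not the sfc driver's):

#include <pthread.h>
#include <stdlib.h>

/* The rwlock protects the table's *existence*; the per-table mutex
 * protects its *contents*. */
struct filter_table {
        pthread_mutex_t lock;
        int nfilters;
};

static pthread_rwlock_t table_rwsem = PTHREAD_RWLOCK_INITIALIZER;
static struct filter_table *table;

static void table_remove(void)             /* like the remove path */
{
        pthread_rwlock_wrlock(&table_rwsem);   /* exclude all readers */
        free(table);
        table = NULL;
        pthread_rwlock_unlock(&table_rwsem);
}

static void table_use(void)                /* callers racing with remove */
{
        pthread_rwlock_rdlock(&table_rwsem);   /* pin the table's existence */
        if (table) {
                pthread_mutex_lock(&table->lock);
                table->nfilters++;             /* ... work on contents ... */
                pthread_mutex_unlock(&table->lock);
        }
        pthread_rwlock_unlock(&table_rwsem);
}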
Signed-off-by: Shradha Shah <sshah@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
vf->efx is initialised in efx_probe_vf and removal is dealt with in
efx_ef10_remove.
vf->efx is needed in future patches to change the MAC address
of the VF via the parent PF, while the driver is bound to the
VF.
Example: ip link set dev vf NUM mac LLADDR
Signed-off-by: Shradha Shah <sshah@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Otherwise the PF and VF can disagree on the VF's MAC address and
this leads to strange behaviour, up to and including kernel panics.
Signed-off-by: Shradha Shah <sshah@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Added the function efx_ef10_get_vf_index to store the vf_index
in nic_data during probe.
vf_index is needed in future patches to access a particular
VF in the VF data structure.
Moved efx_ef10_probe_pf and efx_ef10_probe_vf in order to
use efx_ef10_remove.
Signed-off-by: Shradha Shah <sshah@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
MC_CMD_SET_MAC is privileged and can only be called by the link
control function.
This patch adds efx_ef10_mac_reconfigure_vf, which avoids the call
to MC_CMD_SET_MAC by the virtual function.
Signed-off-by: Shradha Shah <sshah@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Signed-off-by: Shradha Shah <sshah@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
There is one primary function per adaptor, one link control function
per port and the rest are categorised as general.
This patch adds privileges to the MCDI commands based on which
functions are allowed to call them.
Signed-off-by: Shradha Shah <sshah@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This also matches the sibling call netdev_alloc_skb_ip_align() made in
the rx fast path.
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
30 usecs (or really, 1 jiffy) can go by pretty fast.
Move the setting of the timeout to immediately before the loop.
Remove the unnecessary max(1ul, usecs_to_jiffies(30)) as
usecs_to_jiffies with a non-zero constant is guaranteed
to be non-zero.
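A userspace analogue of the fixed pattern, for illustration only (the driver
itself works with jiffies; names below are made up):

#include <stdbool.h>
#include <time.h>

static bool done(void) { return true; }     /* stand-in for the polled condition */

static long long now_ns(void)
{
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec * 1000000000LL + ts.tv_nsec;
}

static bool poll_briefly(void)
{
        /* Compute the deadline immediately before the loop, so earlier
         * setup work cannot eat into the ~30 usec budget. */
        long long deadline = now_ns() + 30 * 1000;

        do {
                if (done())
                        return true;
        } while (now_ns() < deadline);
        return false;
}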
Signed-off-by: Joe Perches <joe@perches.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Simon Horman says:
====================
rocker: transaction fixes
This series addresses what appear to be errors in the handling of
prepare and then commit transactions in the rocker driver.
In all cases the problem is that data structures visible outside of
the transaction are modified during the prepare phase.
In the case of the first two patches this results in the kernel reporting a
BUG. I have noted test-cases in the change logs.
The third patch is also a bug fix, as noted by Toshiaki Makita;
however, I have not been able to reliably reproduce the problem and
thus have not provided a test case.
The last patch is a correctness fix; as far as I can tell it does not
fix a bug that manifests in practice.
Changes: v3->v4
* All patches
- Add Jiri Pirko's ack
* "rocker: do not make neighbour entry changes when preparing transactions"
- Setting of entry values in all transaction phases
as suggested by Toshiaki Makita
* "rocker: make rocker_port_internal_vlan_id_{get,put}() non-transactional"
- Remove Fixes tag as I believe this is a correctness rather than a bug fix
Changes: v2->v3
* "rocker: do not make neighbour entry changes when preparing transactions"
- Correct inverted logic
- Added ack from Scott Feldman
Changes: v1->v2
* "rocker: do not make neighbour entry changes when preparing transactions"
- Revised changelog to reflect information from Toshiaki Makita
that there is a bug that can manifest
- Update address and ttl regardless of the value of the transaction state
* All other patches
- Added acks from Scott Feldman
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The motivation for this is that rocker_port_internal_vlan_id_{get,put} appear
to only partially implement the transaction model: memory allocation
and freeing is transactional, but hash and bitmap manipulation is not.
The latter could be fixed; however, as it is not currently exercised
(trans is always SWITCHDEV_TRANS_NONE), it seems cleaner
to make rocker_port_internal_vlan_id_get non-transactional.
This problem was introduced by c4f20321d968 ("rocker: support
prepare-commit transaction model").
Found by inspection.
I do not believe that this change should have any run-time effect.
Acked-by: Scott Feldman <sfeldma@gmail.com>
Acked-by: Jiri Pirko <jiri@resnulli.us>
Signed-off-by: Simon Horman <simon.horman@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
rocker_port_ipv4_nh() and in turn rocker_port_ipv4_neigh() may be
called with trans == SWITCHDEV_TRANS_PREPARE and then
trans == SWITCHDEV_TRANS_COMMIT from switchdev_port_obj_set() via
fib_table_insert().
The first time that rocker_port_ipv4_nh() is called, with
trans == SWITCHDEV_TRANS_PREPARE, _rocker_neigh_add() adds a new entry to
the neigh table.
And the second time rocker_port_ipv4_nh() is called, with
trans == SWITCHDEV_TRANS_COMMIT, that entry is found. This causes
rocker_port_ipv4_nh() to believe it is not adding an entry, and thus it
frees "entry", which is still present in the rocker driver's neigh table.
This problem does not appear to affect deletion as my analysis is that
deletion is always performed with trans == SWITCHDEV_TRANS_NONE.
For completeness _rocker_neigh_{add,del,prepare} are updated not to
manipulate fib table entries if trans == SWITCHDEV_TRANS_PREPARE.
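For illustration, the rule being enforced, sketched in plain C (stand-in
names, not the rocker API; the real driver keeps prepare-phase allocations
on a transaction list rather than freeing them as done below):

#include <stdlib.h>

enum trans_phase { TRANS_NONE, TRANS_PREPARE, TRANS_COMMIT };

struct neigh_entry { int key; struct neigh_entry *next; };
static struct neigh_entry *neigh_table;     /* visible outside the transaction */

static int neigh_add(enum trans_phase trans, int key)
{
        struct neigh_entry *e = calloc(1, sizeof(*e));
        if (!e)
                return -1;                  /* prepare may fail: that is its job */
        e->key = key;
        if (trans == TRANS_PREPARE) {
                free(e);                    /* sketch simplification, see note above */
                return 0;                   /* do NOT touch the shared table */
        }
        e->next = neigh_table;              /* commit (or none): do the real update */
        neigh_table = e;
        return 0;
}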
Fixes: c4f20321d968 ("rocker: support prepare-commit transaction model")
Reported-by: Toshiaki Makita <makita.toshiaki@lab.ntt.co.jp>
Acked-by: Scott Feldman <sfeldma@gmail.com>
Acked-by: Jiri Pirko <jiri@resnulli.us>
Signed-off-by: Simon Horman <simon.horman@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
rocker_port_fdb_flush() may be called with
trans == SWITCHDEV_TRANS_PREPARE and then trans == SWITCHDEV_TRANS_COMMIT from
switchdev_port_attr_set() via switchdev_port_obj_add().
Adding the new entry to the FDB table when trans == SWITCHDEV_TRANS_PREPARE
may result in a memory leak because when trans == SWITCHDEV_TRANS_PREPARE
rocker_flow_tbl_bridge() will allocate memory when called via
rocker_port_fdb_learn(). However, when trans == SWITCHDEV_TRANS_COMMIT
the presence of the FDB entry in the FDB table causes
rocker_port_fdb() to set the ROCKER_OP_FLAG_REFRESH flag which results
in rocker_port_fdb_learn() skipping the call to rocker_flow_tbl_bridge()
which would free the memory allocated by it when
trans == SWITCHDEV_TRANS_PREPARE.
ip link add br0 type bridge
ip link set up dev eth0
ip link set dev eth0 master br0
bridge fdb add 52:54:00:12:35:08 dev eth0
bridge fdb add 52:54:00:12:35:09 dev eth0
[ 2.600730] ------------[ cut here ]------------
[ 2.601002] kernel BUG at drivers/net/ethernet/rocker/rocker.c:4369!
[ 2.601373] invalid opcode: 0000 [#1] SMP
[ 2.601963] Modules linked in:
[ 2.602355] CPU: 0 PID: 64 Comm: bridge Not tainted 4.1.0-rc3-01048-g6d0f50c50211-dirty #1075
[ 2.602721] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.8.0-0-g4c59f5d-20150219_092859-nilsson.home.kraxel.org 04/01/2014
[ 2.602721] task: ffff880019facef0 ti: ffff88001f96c000 task.ti: ffff88001f96c000
[ 2.602721] RIP: 0010:[<ffffffff811f1470>] [<ffffffff811f1470>] rocker_port_obj_add+0x150/0x160
[ 2.602721] RSP: 0018:ffff88001f96fa98 EFLAGS: 00000212
[ 2.602721] RAX: ffff880019d4fa68 RBX: ffff88001f96fb18 RCX: 0000000000000000
[ 2.602721] RDX: ffff880019d4f000 RSI: ffff88001f96fb18 RDI: ffff880019d4f000
[ 2.602721] RBP: 0000000000000001 R08: 0000000000000000 R09: ffff88001f904620
[ 2.602721] R10: ffff88001f96fb60 R11: ffff880019e9d100 R12: ffff88001f96fb18
[ 2.602721] R13: ffff880019d4f680 R14: ffff88001f904610 R15: ffff8800198f7b80
[ 2.602721] FS: 00007f3eee917700(0000) GS:ffff88001b000000(0000) knlGS:0000000000000000
[ 2.602721] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 2.602721] CR2: 00007f3eee4a15cb CR3: 000000001f933000 CR4: 00000000000006b0
[ 2.602721] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 2.602721] DR3: 0000000000000000 DR6: 0000000000000000 DR7: 0000000000000000
[ 2.602721] Stack:
[ 2.602721] 0000000000000000 ffff88001f96fb18 ffff880019d4f000 ffff88001f96fb18
[ 2.602721] ffff880019d4f000 ffffffff81332105 ffff88001f96fb50 ffffffff814464c0
[ 2.602721] ffff88001f96fb18 ffff88001f904600 ffff880019d4f000 ffffffff813326e5
[ 2.602721] Call Trace:
[ 2.602721] [<ffffffff81332105>] ? __switchdev_port_obj_add+0x25/0x90
[ 2.602721] [<ffffffff813326e5>] ? switchdev_port_obj_add+0x25/0xc0
[ 2.602721] [<ffffffff813327b1>] ? switchdev_port_fdb_add+0x31/0x40
[ 2.602721] [<ffffffff8123911f>] ? rtnl_fdb_add+0xff/0x1e0
[ 2.602721] [<ffffffff81237d8e>] ? rtnetlink_rcv_msg+0x7e/0x250
[ 2.602721] [<ffffffff8121d1ce>] ? __skb_recv_datagram+0xfe/0x4b0
[ 2.602721] [<ffffffff81237d10>] ? rtnetlink_rcv+0x30/0x30
[ 2.602721] [<ffffffff81247958>] ? netlink_rcv_skb+0xa8/0xd0
[ 2.602721] [<ffffffff81237cff>] ? rtnetlink_rcv+0x1f/0x30
[ 2.602721] [<ffffffff81247220>] ? netlink_unicast+0x150/0x200
[ 2.602721] [<ffffffff81247714>] ? netlink_sendmsg+0x374/0x3e0
[ 2.602721] [<ffffffff8120f8df>] ? sock_sendmsg+0xf/0x30
[ 2.602721] [<ffffffff8120ffd3>] ? ___sys_sendmsg+0x1f3/0x200
[ 2.602721] [<ffffffff812100e5>] ? ___sys_recvmsg+0x105/0x140
[ 2.602721] [<ffffffff810a36f0>] ? SyS_readahead+0x90/0x90
[ 2.602721] [<ffffffff81098dfd>] ? filemap_map_pages+0x1ed/0x210
[ 2.602721] [<ffffffff810b77fc>] ? handle_mm_fault+0x5fc/0xe50
[ 2.602721] [<ffffffff81210ef9>] ? __sys_sendmsg+0x39/0x70
[ 2.602721] [<ffffffff8133ce17>] ? system_call_fastpath+0x12/0x6a
[ 2.602721] Code: b7 8f a0 06 00 00 48 83 bf 88 06 00 00 00 74 1d 48 83 c4 08 89 ee 4c 89 ef 5b 5d 41 5c 41 5d 0f b7 c9 45 31 c0 e9 51 db ff ff 90 <0f> 0b b8 ea ff ff ff e9 cf fe ff ff 0f 1f 40 00 41 57 41 56 b9
[ 2.602721] RIP [<ffffffff811f1470>] rocker_port_obj_add+0x150/0x160
[ 2.602721] RSP <ffff88001f96fa98>
[ 2.615848] ---[ end trace 4f7b4f1c98077108 ]---
The above is resolved by not adding the new FDB entry to the FDB table
if trans == SWITCHDEV_TRANS_PREPARE.
For symmetry this patch also skips deleting FDB entries from the FDB
table when trans == SWITCHDEV_TRANS_PREPARE. However, my analysis is that
this never occurs as trans is always SWITCHDEV_TRANS_NONE when removing
FDB entries.
Fixes: c4f20321d968 ("rocker: support prepare-commit transaction model")
Acked-by: Scott Feldman <sfeldma@gmail.com>
Acked-by: Jiri Pirko <jiri@resnulli.us>
Signed-off-by: Simon Horman <simon.horman@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
rocker_port_fdb_flush() is called by rocker_port_stp_update() which in
turn may be called with trans == SWITCHDEV_TRANS_PREPARE and then
trans == SWITCHDEV_TRANS_COMMIT from switchdev_port_attr_set() via
br_set_state().
When rocker_port_fdb_flush() is called with trans == SWITCHDEV_TRANS_PREPARE
it calls rocker_port_fdb_learn() for each entry in the FDB table which in
turn calls rocker_flow_tbl_bridge() which will allocate memory using
rocker_port_kzalloc(). rocker_port_fdb_learn() will then remove the entry
from the FDB table.
Then when rocker_port_fdb_flush() is called with
trans == SWITCHDEV_TRANS_COMMIT, no calls are made to rocker_port_fdb_learn()
because there are no longer any entries present in the FDB table. Thus the
memory previously allocated by rocker_port_fdb_learn() is leaked resulting
in the kernel BUG() below.
Furthermore, it looks like the driver ends up with an incorrect view of the
fdb table as the FDB entries are purged from the driver's table but not the
hardware's table.
ip link add br0 type bridge
ip link set up dev eth0
sleep 1
ip link set dev eth0 master br0
[ 3.704360] ------------[ cut here ]------------
[ 3.704611] kernel BUG at drivers/net/ethernet/rocker/rocker.c:4289!
[ 3.704962] invalid opcode: 0000 [#1] SMP
[ 3.705537] Modules linked in:
[ 3.705919] CPU: 0 PID: 63 Comm: ip Not tainted 4.1.0-rc3-01046-gb9fbe709de4d #1044
[ 3.706191] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.8.0-0-g4c59f5d-20150219_092859-nilsson.home.kraxel.org 04/01/2014
[ 3.706820] task: ffff880019f70150 ti: ffff88001f92c000 task.ti: ffff88001f92c000
[ 3.707138] RIP: 0010:[<ffffffff811f0080>] [<ffffffff811f0080>] rocker_port_attr_set+0xe0/0xf0
[ 3.707990] RSP: 0018:ffff88001f92f808 EFLAGS: 00000212
[ 3.708200] RAX: ffff880019d4fa68 RBX: ffff880019d4f000 RCX: 0000000000000000
[ 3.708471] RDX: 000000000000000c RSI: ffff88001f92f890 RDI: ffff880019d4f680
[ 3.708740] RBP: 0000000000000001 R08: 0000000000000000 R09: 0000000000000004
[ 3.708999] R10: ffff880000034024 R11: 0000000000000000 R12: ffff88001f92f890
[ 3.709276] R13: ffff88001f8f1c00 R14: 000000000000000b R15: 0000000000000000
[ 3.709303] FS: 00007f8ab66bd700(0000) GS:ffff88001b000000(0000) knlGS:0000000000000000
[ 3.709303] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 3.709303] CR2: 0000000000654988 CR3: 000000001f8f3000 CR4: 00000000000006b0
[ 3.709303] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 3.709303] DR3: 0000000000000000 DR6: 0000000000000000 DR7: 0000000000000000
[ 3.709303] Stack:
[ 3.709303] ffff88001f8f1c00 000000000000000b ffff88001f92f890 ffff880019d4f000
[ 3.709303] ffff88001f92f890 ffffffff813332f5 ffff88001f92f880 0000000000000000
[ 3.709303] ffff88001f92f890 0000000000000001 ffff880019d4f000 ffffffff81333627
[ 3.709303] Call Trace:
[ 3.709303] [<ffffffff813332f5>] ? __switchdev_port_attr_set+0x25/0x90
[ 3.709303] [<ffffffff81333627>] ? switchdev_port_attr_set+0x27/0x120
[ 3.709303] [<ffffffff81318e86>] ? br_set_state+0x36/0x50
[ 3.709303] [<ffffffff8131795c>] ? br_add_if+0x37c/0x400
[ 3.709303] [<ffffffff81238ce1>] ? do_setlink+0x7e1/0x800
[ 3.709303] [<ffffffff8111f980>] ? radix_tree_lookup_slot+0x10/0x30
[ 3.709303] [<ffffffff81136fba>] ? nla_parse+0xaa/0x110
[ 3.709303] [<ffffffff81239c98>] ? rtnl_newlink+0x548/0x870
[ 3.709303] [<ffffffff8111f900>] ? __radix_tree_lookup+0x40/0xb0
[ 3.709303] [<ffffffff81136f3e>] ? nla_parse+0x2e/0x110
[ 3.709303] [<ffffffff81237d7e>] ? rtnetlink_rcv_msg+0x7e/0x250
[ 3.709303] [<ffffffff8121d1be>] ? __skb_recv_datagram+0xfe/0x4b0
[ 3.709303] [<ffffffff81237d00>] ? rtnetlink_rcv+0x30/0x30
[ 3.709303] [<ffffffff81247948>] ? netlink_rcv_skb+0xa8/0xd0
[ 3.709303] [<ffffffff81237cef>] ? rtnetlink_rcv+0x1f/0x30
[ 3.709303] [<ffffffff81247210>] ? netlink_unicast+0x150/0x200
[ 3.709303] [<ffffffff81247704>] ? netlink_sendmsg+0x374/0x3e0
[ 3.709303] [<ffffffff8120f8cf>] ? sock_sendmsg+0xf/0x30
[ 3.709303] [<ffffffff8120ffc3>] ? ___sys_sendmsg+0x1f3/0x200
[ 3.709303] [<ffffffff812100d5>] ? ___sys_recvmsg+0x105/0x140
[ 3.709303] [<ffffffff812228d9>] ? dev_get_by_name_rcu+0x69/0x90
[ 3.709303] [<ffffffff812228d9>] ? dev_get_by_name_rcu+0x69/0x90
[ 3.709303] [<ffffffff81217b7d>] ? skb_dequeue+0x4d/0x60
[ 3.709303] [<ffffffff81217bb0>] ? skb_queue_purge+0x20/0x30
[ 3.709303] [<ffffffff810ebdcf>] ? __inode_wait_for_writeback+0x5f/0xb0
[ 3.709303] [<ffffffff810648b0>] ? autoremove_wake_function+0x30/0x30
[ 3.709303] [<ffffffff81210ee9>] ? __sys_sendmsg+0x39/0x70
[ 3.709303] [<ffffffff8133e097>] ? system_call_fastpath+0x12/0x6a
[ 3.709303] Code: bb 90 06 00 00 48 c7 04 24 00 00 00 00 45 31 c9 45 31 c0 48 c7 c1 c0 b7 1e 81 89 ea e8 da da ff ff eb 95 0f 1f 84 00 00 00 00 00 <0f> 0b 66 66 66 66 66 2e 0f 1f 84 00 00 00 00 00 48 83 fe 15 75
[ 3.709303] RIP [<ffffffff811f0080>] rocker_port_attr_set+0xe0/0xf0
[ 3.709303] RSP <ffff88001f92f808>
[ 3.721409] ---[ end trace b7481fcb7cb032aa ]---
Segmentation fault
Fixes: c4f20321d968 ("rocker: support prepare-commit transaction model")
Acked-by: Scott Feldman <sfeldma@gmail.com>
Acked-by: Jiri Pirko <jiri@resnulli.us>
Signed-off-by: Simon Horman <simon.horman@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Use the generic mechanism to declare a bitmap instead of unsigned long.
Signed-off-by: Joe Perches <joe@perches.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Alexei Starovoitov says:
====================
bpf: introduce bpf_tail_call() helper
introduce bpf_tail_call(ctx, &jmp_table, index) helper function
which can be used from BPF programs like:
int bpf_prog(struct pt_regs *ctx)
{
        ...
        bpf_tail_call(ctx, &jmp_table, index);
        ...
}
that is roughly equivalent to:
int bpf_prog(struct pt_regs *ctx)
{
        ...
        if (jmp_table[index])
                return (*jmp_table[index])(ctx);
        ...
}
The important detail is that it's not a normal call, but a tail call.
The kernel stack is precious, so this helper reuses the current
stack frame and jumps into another BPF program without adding
extra call frame.
It's trivially done in interpreter and a bit trickier in JITs.
Use cases:
- simplify complex programs
- dispatch into other programs
(for example: index in jump table can be syscall number or network protocol)
- build dynamic chains of programs
The chain of tail calls can form unpredictable dynamic loops; therefore
tail_call_cnt is used to limit the number of calls and is currently set to 32.
patch 1 - support bpf_tail_call() in interpreter
patch 2 - support in x64 JIT
We've discussed what's necessary to support it in arm64/s390 JITs
and it looks fine.
patch 3 - sample example for tracing
patch 4 - sample example for networking
More details in every patch.
This set went through several iterations of reviews/fixes and older
attempts can be seen:
https://git.kernel.org/cgit/linux/kernel/git/ast/bpf.git/log/?h=tail_call_v[123456]
- tail_call_v1 does it without touching JITs but introduces overhead
for all programs that don't use this helper function.
- tail_call_v2 still has some overhead and x64 JIT does full stack
unwind (prologue skipping optimization wasn't there)
- tail_call_v3 reuses 'call' instruction encoding and has interpreter
overhead for every normal call
- tail_call_v4 fixes the above architectural shortcomings, and v5,v6 fix a few
more bugs
This last tail_call_v6 approach seems to be the best.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Usage:
$ sudo ./sockex3
IP src.port -> dst.port bytes packets
127.0.0.1.42010 -> 127.0.0.1.12865 1568 8
127.0.0.1.59526 -> 127.0.0.1.33778 11422636 173070
127.0.0.1.33778 -> 127.0.0.1.59526 11260224828 341974
127.0.0.1.12865 -> 127.0.0.1.42010 1832 12
IP src.port -> dst.port bytes packets
127.0.0.1.42010 -> 127.0.0.1.12865 1568 8
127.0.0.1.59526 -> 127.0.0.1.33778 23198092 351486
127.0.0.1.33778 -> 127.0.0.1.59526 22972698518 698616
127.0.0.1.12865 -> 127.0.0.1.42010 1832 12
This example is similar to sockex2 in that it accumulates per-flow
statistics, but it does packet parsing differently.
sockex2 inlines the full packet parser routine into a single bpf program.
This sockex3 example has 4 independent programs that parse vlan, mpls, ip, ipv6
and one main program that starts the process.
bpf_tail_call() mechanism allows each program to be small and be called
on demand potentially multiple times, so that many vlan, mpls, ip in ip,
gre encapsulations can be parsed. These and other protocol parsers can
be added or removed at runtime. TLVs can be parsed in similar manner.
Note, tail_call_cnt dynamic check limits the number of tail calls to 32.
Signed-off-by: Alexei Starovoitov <ast@plumgrid.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
kprobe example that demonstrates what future seccomp programs may look like.
It attaches to the seccomp_phase1() function and tail-calls other BPF programs
depending on syscall number.
Existing optimized classic BPF seccomp programs generated by Chrome look like:
if (sd.nr < 121) {
  if (sd.nr < 57) {
    if (sd.nr < 22) {
      if (sd.nr < 7) {
        if (sd.nr < 4) {
          if (sd.nr < 1) {
            check sys_read
          } else {
            if (sd.nr < 3) {
              check sys_write and sys_open
            } else {
              check sys_close
            }
          }
        } else {
      } else {
    } else {
  } else {
} else {
}
the future seccomp using native eBPF may look like:
bpf_tail_call(&sd, &syscall_jmp_table, sd.nr);
which is simpler, faster and leaves more room for per-syscall checks.
Usage:
$ sudo ./tracex5
<...>-366 [001] d... 4.870033: : read(fd=1, buf=00007f6d5bebf000, size=771)
<...>-369 [003] d... 4.870066: : mmap
<...>-369 [003] d... 4.870077: : syscall=110 (one of get/set uid/pid/gid)
<...>-369 [003] d... 4.870089: : syscall=107 (one of get/set uid/pid/gid)
sh-369 [000] d... 4.891740: : read(fd=0, buf=00000000023d1000, size=512)
sh-369 [000] d... 4.891747: : write(fd=1, buf=00000000023d3000, size=512)
sh-369 [000] d... 4.891747: : read(fd=1, buf=00000000023d3000, size=512)
Signed-off-by: Alexei Starovoitov <ast@plumgrid.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
bpf_tail_call() arguments:
ctx - context pointer
jmp_table - one of BPF_MAP_TYPE_PROG_ARRAY maps used as the jump table
index - index in the jump table
In this implementation x64 JIT bypasses stack unwind and jumps into the
callee program after prologue, so the callee program reuses the same stack.
The logic can be roughly expressed in C like:
u32 tail_call_cnt;
void *jumptable[2] = { &&label1, &&label2 };

int bpf_prog1(void *ctx)
{
label1:
        ...
}

int bpf_prog2(void *ctx)
{
label2:
        ...
}

int bpf_prog1(void *ctx)
{
        ...
        if (tail_call_cnt++ < MAX_TAIL_CALL_CNT)
                goto *jumptable[index]; ... and pass my 'ctx' to callee ...
        ... fall through if no entry in jumptable ...
}
Note that 'skip current program epilogue and next program prologue' is
an optimization. Other JITs don't have to do it the same way.
From a safety point of view it's valid as well, since programs always
initialize the stack before use, so any residue in the stack left by
the current program is not going to be read. The same verifier checks are
done for the calls from the kernel into all bpf programs.
Signed-off-by: Alexei Starovoitov <ast@plumgrid.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
introduce bpf_tail_call(ctx, &jmp_table, index) helper function
which can be used from BPF programs like:
int bpf_prog(struct pt_regs *ctx)
{
        ...
        bpf_tail_call(ctx, &jmp_table, index);
        ...
}
that is roughly equivalent to:
int bpf_prog(struct pt_regs *ctx)
{
        ...
        if (jmp_table[index])
                return (*jmp_table[index])(ctx);
        ...
}
The important detail is that it's not a normal call, but a tail call.
The kernel stack is precious, so this helper reuses the current
stack frame and jumps into another BPF program without adding
extra call frame.
It's trivially done in interpreter and a bit trickier in JITs.
In case of x64 JIT the bigger part of generated assembler prologue
is common for all programs, so it is simply skipped while jumping.
Other JITs can do similar prologue-skipping optimization or
do stack unwind before jumping into the next program.
bpf_tail_call() arguments:
ctx - context pointer
jmp_table - one of BPF_MAP_TYPE_PROG_ARRAY maps used as the jump table
index - index in the jump table
Since all BPF programs are identified by file descriptor, user space
needs to populate the jmp_table with FDs of other BPF programs.
If jmp_table[index] is empty, bpf_tail_call() doesn't jump anywhere
and program execution continues as normal.
New BPF_MAP_TYPE_PROG_ARRAY map type is introduced so that user space can
populate this jmp_table array with FDs of other bpf programs.
Programs can share the same jmp_table array or use multiple jmp_tables.
The chain of tail calls can form unpredictable dynamic loops; therefore
tail_call_cnt is used to limit the number of calls and is currently set to 32.
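For illustration only, the counting rule sketched as self-contained C
(an ordinary function call stands in for the real tail call, which reuses
the caller's stack frame; all names below are made up):

#include <stdio.h>

#define MAX_TAIL_CALL_CNT 32

typedef int (*bpf_prog_t)(void *ctx, int *tail_call_cnt);

static bpf_prog_t jmp_table[4];

static int tail_call(void *ctx, bpf_prog_t *table, int index, int *cnt)
{
        if (index < 0 || index >= 4 || !table[index])
                return 0;                  /* empty slot: fall through */
        if (*cnt >= MAX_TAIL_CALL_CNT)
                return 0;                  /* break unpredictable loops */
        (*cnt)++;
        return table[index](ctx, cnt);     /* a real JIT jumps instead of calling */
}

static int prog_a(void *ctx, int *cnt)
{
        return tail_call(ctx, jmp_table, 0, cnt);   /* tail-calls itself via slot 0 */
}

int main(void)
{
        int cnt = 0;

        jmp_table[0] = prog_a;
        prog_a(NULL, &cnt);
        printf("stopped after %d tail calls\n", cnt);
        return 0;
}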
Use cases:
==========
- simplify complex programs by splitting them into a sequence of small programs
- dispatch routine
For tracing and future seccomp the program may be triggered on all system
calls, but processing of syscall arguments will be different. It's more
efficient to implement them as:
int syscall_entry(struct seccomp_data *ctx)
{
        bpf_tail_call(ctx, &syscall_jmp_table, ctx->nr /* syscall number */);
        ... default: process unknown syscall ...
}
int sys_write_event(struct seccomp_data *ctx) {...}
int sys_read_event(struct seccomp_data *ctx) {...}
syscall_jmp_table[__NR_write] = sys_write_event;
syscall_jmp_table[__NR_read] = sys_read_event;
For networking the program may call into different parsers depending on
packet format, like:
int packet_parser(struct __sk_buff *skb)
{
        ... parse L2, L3 here ...
        __u8 ipproto = load_byte(skb, ... offsetof(struct iphdr, protocol));
        bpf_tail_call(skb, &ipproto_jmp_table, ipproto);
        ... default: process unknown protocol ...
}
int parse_tcp(struct __sk_buff *skb) {...}
int parse_udp(struct __sk_buff *skb) {...}
ipproto_jmp_table[IPPROTO_TCP] = parse_tcp;
ipproto_jmp_table[IPPROTO_UDP] = parse_udp;
- for the TC use case, bpf_tail_call() makes it possible to implement reclassify-like logic
- bpf_map_update_elem/delete calls into BPF_MAP_TYPE_PROG_ARRAY jump table
are atomic, so user space can build chains of BPF programs on the fly
Implementation details:
=======================
- high performance of bpf_tail_call() is the goal.
It could have been implemented without JIT changes as a wrapper on top of
BPF_PROG_RUN() macro, but with two downsides:
. all programs would have to pay a performance penalty for this feature and
the tail call itself would be slower, since a mandatory stack unwind, return,
and stack allocation would be done for every tail call.
. the tail call would be limited to programs running with preemption disabled, since
the generic 'void *ctx' doesn't have room for 'tail_call_cnt', which would
need to be either a global per-cpu variable accessed by the helper and the wrapper
or a global variable protected by locks.
In this implementation x64 JIT bypasses stack unwind and jumps into the
callee program after prologue.
- bpf_prog_array_compatible() ensures that the prog_type of callee and caller
are the same and the JITed/non-JITed flag is the same, since calling a JITed
program from a non-JITed one is invalid because their stack frames are different.
Similarly, calling a kprobe type program from a socket type program is invalid.
- jump table is implemented as BPF_MAP_TYPE_PROG_ARRAY to reuse 'map'
abstraction, its user space API and all of verifier logic.
It's in the existing arraymap.c file, since several functions are
shared with regular array map.
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Alexei Starovoitov <ast@plumgrid.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Reduce ifdef pollution slightly, no functional change. We can simply
remove the extra alternative definition of handle_ing() and nf_ingress().
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Pablo Neira Ayuso <pablo@netfilter.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
In commit 8e4d980ac215 ("tcp: fix behavior for epoll edge trigger")
we fixed a possible hang of TCP sockets under memory pressure,
by allowing sk_stream_alloc_skb() to use sk_forced_mem_schedule()
if no packet is in socket write queue.
It turns out there are other cases where we want to force memory
scheduling:
tcp_fragment() & tso_fragment() need to split a big TSO packet into
two smaller ones. If we block here because of TCP memory pressure,
we can effectively block the TCP socket from sending new data.
If no further ACK is coming, this hang would be definitive, and the socket
would have no chance to effectively reduce its memory usage.
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
[1] When entering NUD_PROBE state via neigh_update(), perhaps received
from userspace, correctly (re)initialize the probes count to zero.
This is useful for forcing revalidation of a neighbor (for example
if the host is attempting to do DNA [RFC 4436 for IPv4, RFC 6059 for IPv6]).
[2] Notify listeners when a neighbor goes into NUD_PROBE state.
By sending notifications on entry to NUD_PROBE state listeners get
more timely warnings of imminent connectivity issues.
The current notifications on entry to NUD_STALE have somewhat
limited usefulness: NUD_STALE is a perfectly normal state, as is
NUD_DELAY, whereas notifications on entry to NUD_FAILURE come after
a neighbor reachability problem has been confirmed (typically after
three probes).
Signed-off-by: Erik Kline <ek@google.com>
Acked-By: Lorenzo Colitti <lorenzo@google.com>
Acked-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
That atom table does not check these bits. Fixes aux
regressions on some boards.
Reported-by: Malte Schröder <malte@tnxip.de>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Cc: stable@vger.kernel.org
|
|
Retry the dpcd fetch several times. Some eDP panels
fail several times before the fetch is successful.
bug:
https://bugs.freedesktop.org/show_bug.cgi?id=73530
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Cc: stable@vger.kernel.org
|
|
The index of the ->planes[] array (3rd parameter) cannot be equal to MAX_PLANE.
This looks like a typo that is now fixed.
Signed-off-by: Stephane Viau <sviau@codeaurora.org>
Acked-by: Rob Clark <robdclark@gmail.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
|
|
Evaluating "&mddev->disks" is simple pointer arithmetic, so
it does not need 'rcu' annotations - no dereferencing is happening.
Also enhance the comment to explain that 'rdev' in that case
is not actually a pointer to an rdev.
Reported-by: Patrick Marlier <patrick.marlier@gmail.com>
Signed-off-by: NeilBrown <neilb@suse.de>
|
|
The variable "sector" in "raid0_make_request()" was improperly updated
by a call to "sector_div()" which modifies its first argument in place.
Commit 47d68979cc968535cb87f3e5f2e6a3533ea48fbd restored this variable
after the call for later re-use. Unfortunetly the restore was done after
the referenced variable "bio" was advanced. This lead to the original
value and the restored value being different. Here we move this line to
the proper place.
One observed side effect of this bug was that discarding a file through
unlinking would cause an unrelated file's contents to be discarded.
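To illustrate the pitfall only, a plain stand-in for sector_div(), which
likewise updates its first argument in place, so any value needed later
must be captured before dependent code runs (not the md code):

#include <stdio.h>

/* Divides *sector in place and returns the remainder, mirroring the
 * behaviour of the kernel's sector_div(). */
static unsigned int div_in_place(unsigned long long *sector, unsigned int div)
{
        unsigned int rem = *sector % div;
        *sector /= div;
        return rem;
}

int main(void)
{
        unsigned long long sector = 1000003;
        unsigned long long saved = sector;      /* save BEFORE it is clobbered */
        unsigned int rem = div_in_place(&sector, 8);

        printf("quotient=%llu remainder=%u original=%llu\n", sector, rem, saved);
        return 0;
}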
Signed-off-by: NeilBrown <neilb@suse.de>
Fixes: 47d68979cc96 ("md/raid0: fix bug with chunksize not a power of 2.")
Cc: stable@vger.kernel.org (any that received above backport)
URL: https://bugzilla.kernel.org/show_bug.cgi?id=98501
|
|
ops_run_reconstruct6() doesn't correctly chain async operations. The tx returned
by async_gen_syndrome should be added as the dependent tx of the next stripe.
The issue was introduced by commit 59fc630b8b5f9f21c8ce3ba153341c107dce1b0c
("RAID5: batch adjacent full stripe write").
Reported-and-tested-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Signed-off-by: Shaohua Li <shli@fb.com>
Signed-off-by: NeilBrown <neilb@suse.de>
|
|
The vmmouse Kconfig help text was referring to an incorrect user-space
driver version. Fix this.
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
|
|
On v7 touchpads, sometimes when 2 fingers are moved down on the touchpad
until they "fall off" the touchpad, the second touch will report 0 for y
(max y really, since the y axis is inverted) and max x as coordinates,
rather than reporting 0, 0 as is expected for a non-touching finger.
This commit detects this and treats these touches as non-touching.
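Roughly the check being added, sketched with made-up field names and an
assumed axis maximum (not the driver's actual values):

#include <stdbool.h>

#define ASSUMED_MAX_X 4095      /* illustrative axis maximum only */

struct touch_point {
        int x, y;               /* raw coordinates, y not yet inverted */
        bool touching;
};

/* A finger sliding off the bottom edge can be reported at x == max,
 * y == 0 (i.e. max y after inversion); treat that as a non-touch. */
static void drop_phantom_touch(struct touch_point *t)
{
        if (t->x == ASSUMED_MAX_X && t->y == 0)
                t->touching = false;
}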
See the evemu-recording here:
https://bugzilla.redhat.com/attachment.cgi?id=1025058
BugLink: https://bugzilla.redhat.com/show_bug.cgi?id=1221200
Signed-off-by: Hans de Goede <hdegoede@redhat.com>
Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
|
|
Support for using UD and AF_IB is currently broken. The
IB_CM_SIDR_REQ_RECEIVED message is not handled properly in
cma_save_net_info() and we end up falling into code that will try to
process the request as ipv4/ipv6, which will end up failing.
The resolution is to add a check for the SIDR_REQ and call
cma_save_ib_info() with a NULL path record. Change cma_save_ib_info()
to copy the src sib info from the listen_id when the path record is NULL.
Reported-by: Hari Shankar <Hari.Shankar@netapp.com>
Signed-off-by: Matt Finlay <matt@mellanox.com>
Acked-by: Sean Hefty <sean.hefty@intel.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
|
|
Problem reported by: Ted Kim <ted.h.kim@oracle.com>:
We have a case where a Linux system and a non-Linux system are
trying to interoperate. The Linux host is the active side and
starts the connection establishment, but later decides to not go
through with the connection setup and does rdma_destroy_id().
The rdma_destroy_id() eventually works its way down to cm_destroy_id()
in core/cm.c, where a REJ is sent. The non-Linux system
has some trouble recognizing the REJ because of:
A. CM states which can't receive the REJ
B. Some issues about REJ formatting (missing comm ID)
ISSUE A: That part of the spec says, a Consumer Reject REJ can be
sent for a connection abort, but it goes further
and says: can send a REJ message with a "Consumer Reject"
Reason code if they are in a CM state (i.e. REP
Rcvd, MRA(REP) Sent, REQ Rcvd, MRA Sent) that allows
a REJ to be sent (lines 35-38).
Of the states listed there in that sentence, it would
seem to limit the active side to using the Consumer Reject
(for the abort case) in just the REP-Rcvd and MRA-REP-Sent
states. That is basically only after the active side
sees a REP (or alternatively goes down the state transitions
to timeout in which case a Timeout REJ is sent).
As a fix, in cm_destroy_id() move the IB_CM_MRA_REQ_RCVD case
to the same handling as IB_CM_REQ_SENT. Essentially, make a REJ sent after
getting an MRA on the active side a timeout rather than a Consumer
Reject, which is arguably more consistent with the CM state
diagrams prior to getting a REP.
Signed-off-by: Ted Kim <ted.h.kim@oracle.com>
Signed-off-by: Sean Hefty <sean.hefty@intel.com>
|
|
This is an alternative way of fixing:
commit db9683fb412d ("net: phy: Make sure PHY_RESUMING state change
is always processed")
When the PHY state transitions from PHY_HALTED to PHY_RESUMING, there are
two things we need to do:
1). Re-enable interrupts (and power up the physical link, if powered down)
2). Update the PHY state and net-device based on the link status.
There's no strict reason why #1 has to be done from within the main
phy_state_machine() function. There is a risk that other changes to the
PHY (e.g. setting speed/duplex, which calls phy_start_aneg()) could cause
a subsequent state transition before phy_state_machine() has processed
the PHY_RESUMING state change. This would leave the PHY with interrupts
disabled and/or still in the BMCR_PDOWN/low-power mode.
Moving the enabling of interrupts and the phy_resume() call into phy_start()
guarantees this work always gets done. As the PHY is already in the HALTED
state and interrupts are disabled, it shouldn't conflict with any work
being done in phy_state_machine(). The downside of this change is that if
the PHY_RESUMING state is ever entered from anywhere else, it'll also have
to repeat this work.
Signed-off-by: Tim Beale <tim.beale@alliedtelesis.co.nz>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Michal Kubecek says:
====================
IPv6 ECMP route add/replace fixes
(1) When adding a nexthop of a multipath route fails (e.g. because of a
conflict with an existing route), we are supposed to delete the nexthops
already added. However, currently we also try to delete all nexthops we
haven't even tried to add yet, so that an "ip route add" command can
actually remove pre-existing routes if it fails.
(2) An attempt to replace a multipath route results in a broken siblings
linked list. Subsequent commands (like "ip route del") can then either
follow a link into freed memory or end in an infinite loop (if the slab
object has been reused).
v2: fix an omission in first patch
v3: change the semantics of replace operation to better match IPv4
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
When replacing an IPv6 multipath route with "ip route replace", i.e.
NLM_F_CREATE | NLM_F_REPLACE, fib6_add_rt2node() replaces only first
matching route without fixing its siblings, resulting in corrupted
siblings linked list; removing one of the siblings can then end in an
infinite loop.
The IPv6 ECMP implementation is a bit different from IPv4's, so route
replacement cannot work in exactly the same way. This should be a
reasonable approximation (also restated as a small sketch below):
1. If the new route is ECMP-able and there is a matching ECMP-able one
already, replace it and all its siblings (if any).
2. If the new route is ECMP-able and no matching ECMP-able route exists,
replace first matching non-ECMP-able (if any) or just add the new one.
3. If the new route is not ECMP-able, replace first matching
non-ECMP-able route (if any) or add the new route.
We also need to remove the NLM_F_REPLACE flag after replacing the old
route(s) by the first nexthop of an ECMP route, so that each subsequent
nexthop does not replace the previous one.
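The three rules above, restated as a small decision helper (plain C sketch,
not the actual fib6_add_rt2node() logic; names are made up):

#include <stdbool.h>

enum replace_action {
        REPLACE_ECMP_GROUP,       /* rule 1: replace the match and its siblings */
        REPLACE_FIRST_NON_ECMP,   /* rules 2 and 3: replace first non-ECMP match */
        ADD_NEW,                  /* no suitable match: just add the route */
};

static enum replace_action choose_action(bool new_is_ecmp,
                                         bool ecmp_match_exists,
                                         bool non_ecmp_match_exists)
{
        if (new_is_ecmp && ecmp_match_exists)
                return REPLACE_ECMP_GROUP;
        if (non_ecmp_match_exists)
                return REPLACE_FIRST_NON_ECMP;
        return ADD_NEW;
}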
Fixes: 51ebd3181572 ("ipv6: add support of equal cost multipath (ECMP)")
Signed-off-by: Michal Kubecek <mkubecek@suse.cz>
Acked-by: Nicolas Dichtel <nicolas.dichtel@6wind.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
If adding a nexthop of an IPv6 multipath route fails, a comment in
ip6_route_multipath() says we are going to delete all nexthops already
added. However, the current implementation deletes even the routes it
hasn't tried to add yet. For example, running
ip route add 1234:5678::/64 \
nexthop via fe80::aa dev dummy1 \
nexthop via fe80::bb dev dummy1 \
nexthop via fe80::cc dev dummy1
twice results in removing all routes the first command added.
Limit the second (delete) run to nexthops that succeeded in the first
(add) run.
Fixes: 51ebd3181572 ("ipv6: add support of equal cost multipath (ECMP)")
Signed-off-by: Michal Kubecek <mkubecek@suse.cz>
Acked-by: Nicolas Dichtel <nicolas.dichtel@6wind.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This reverts commit c055d5b03bb4cb69d349d787c9787c0383abd8b2.
There are two issues:
'dnat_took_place' made me think that this is related to
-j DNAT/MASQUERADE.
But that's only one part of the story. This is also relevant for SNAT
when we undo the snat translation in the reverse/reply direction.
Furthermore, I originally wanted to do this mainly to avoid
storing ipv6 addresses once we make DNAT/REDIRECT work
for ipv6 on bridges.
However, I forgot about SNPT/DNPT which is stateless.
So we can't escape storing the address for ipv6 anyway. Might as
well do it for ipv4 too.
Reported-and-tested-by: Bernhard Thaler <bernhard.thaler@wvnet.at>
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
|
|
After improving setsockopt() coverage in trinity, I started triggering
vmalloc failures pretty reliably from this code path:
warn_alloc_failed+0xe9/0x140
__vmalloc_node_range+0x1be/0x270
vzalloc+0x4b/0x50
__do_replace+0x52/0x260 [ip_tables]
do_ipt_set_ctl+0x15d/0x1d0 [ip_tables]
nf_setsockopt+0x65/0x90
ip_setsockopt+0x61/0xa0
raw_setsockopt+0x16/0x60
sock_common_setsockopt+0x14/0x20
SyS_setsockopt+0x71/0xd0
It turns out we don't validate that the num_counters field in the
struct we pass in from userspace is initialized.
The same problem also exists in ebtables, arptables, ipv6, and the
compat variants.
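A rough sketch of the kind of validation meant here (structure and names are
simplified stand-ins, not the real xtables uapi):

#include <errno.h>
#include <stdlib.h>

struct fake_replace {
        unsigned int num_counters;      /* copied from userspace: untrusted */
};

static int alloc_counters(const struct fake_replace *req, void **counters)
{
        /* Validate the user-supplied count before trusting it to size an
         * allocation; an uninitialised value here is what triggered the
         * vmalloc warnings above. */
        if (req->num_counters == 0)
                return -EINVAL;

        *counters = calloc(req->num_counters, 16);
        return *counters ? 0 : -ENOMEM;
}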
Signed-off-by: Dave Jones <davej@codemonkey.org.uk>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
|
|
nfnetlink_{log,queue}_init() register the netlink callback nf*_rcv_nl_event
before registering the pernet_subsys, but the callback relies on data
structures allocated by pernet init functions.
When nfnetlink_{log,queue} is loaded, if a netlink message is received after
the netlink callback is registered but before the pernet_subsys is registered,
the kernel will panic in the sequence
nfulnl_rcv_nl_event
    nfnl_log_pernet
        net_generic
            BUG_ON(id == 0), where id is nfnl_log_net_id.
The panic can be easily reproduced in 4.0.3 by:
while true ;do modprobe nfnetlink_log ; rmmod nfnetlink_log ; done &
while true ;do ip netns add dummy ; ip netns del dummy ; done &
This patch moves register_pernet_subsys to earlier in nfnetlink_log_init.
Notice that the BUG_ON hit in 4.0.3 was recently removed in 2591ffd308
["netns: remove BUG_ONs from net_generic()"].
Signed-off-by: Francesco Ruggeri <fruggeri@arista.com>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
|
|
The MPX feature requires eager KVM FPU restore support. We have verified
that MPX cannot work correctly with the current lazy KVM FPU restore
mechanism. Eager KVM FPU restore should be enabled if the MPX feature is
exposed to the VM.
Signed-off-by: Yang Zhang <yang.z.zhang@intel.com>
Signed-off-by: Liang Li <liang.z.li@intel.com>
[Also activate the FPU on AMD processors. - Paolo]
Cc: stable@vger.kernel.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
|
This reverts commit 4473b570a7ebb502f63f292ccfba7df622e5fdd3. We'll
use the hook again.
Cc: stable@vger.kernel.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
|
memslot->userfault_addr is set by the kernel with an mmap executed
from the kernel, but userland can still munmap it and lead to the
oops below, where memslot->userfault_addr points to a host virtual
address that has no vma or mapping.
[ 327.538306] BUG: unable to handle kernel paging request at fffffffffffffffe
[ 327.538407] IP: [<ffffffff811a7b55>] put_page+0x5/0x50
[ 327.538474] PGD 1a01067 PUD 1a03067 PMD 0
[ 327.538529] Oops: 0000 [#1] SMP
[ 327.538574] Modules linked in: macvtap macvlan xt_CHECKSUM iptable_mangle ipt_MASQUERADE nf_nat_masquerade_ipv4 iptable_nat nf_nat_ipv4 nf_nat nf_conntrack_ipv4 nf_defrag_ipv4 xt_conntrack nf_conntrack ipt_REJECT iptable_filter ip_tables tun bridge stp llc rpcsec_gss_krb5 nfsv4 dns_resolver nfs fscache xprtrdma ib_isert iscsi_target_mod ib_iser libiscsi scsi_transport_iscsi ib_srpt target_core_mod ib_srp scsi_transport_srp scsi_tgt ib_ipoib rdma_ucm ib_ucm ib_uverbs ib_umad rdma_cm ib_cm iw_cm ipmi_devintf iTCO_wdt iTCO_vendor_support intel_powerclamp coretemp dcdbas intel_rapl kvm_intel kvm crct10dif_pclmul crc32_pclmul ghash_clmulni_intel aesni_intel lrw gf128mul glue_helper ablk_helper cryptd pcspkr sb_edac edac_core ipmi_si ipmi_msghandler acpi_pad wmi acpi_power_meter lpc_ich mfd_core mei_me
[ 327.539488] mei shpchp nfsd auth_rpcgss nfs_acl lockd grace sunrpc mlx4_ib ib_sa ib_mad ib_core mlx4_en vxlan ib_addr ip_tunnel xfs libcrc32c sd_mod crc_t10dif crct10dif_common crc32c_intel mgag200 syscopyarea sysfillrect sysimgblt i2c_algo_bit drm_kms_helper ttm drm ahci i2c_core libahci mlx4_core libata tg3 ptp pps_core megaraid_sas ntb dm_mirror dm_region_hash dm_log dm_mod
[ 327.539956] CPU: 3 PID: 3161 Comm: qemu-kvm Not tainted 3.10.0-240.el7.userfault19.4ca4011.x86_64.debug #1
[ 327.540045] Hardware name: Dell Inc. PowerEdge R420/0CN7CM, BIOS 2.1.2 01/20/2014
[ 327.540115] task: ffff8803280ccf00 ti: ffff880317c58000 task.ti: ffff880317c58000
[ 327.540184] RIP: 0010:[<ffffffff811a7b55>] [<ffffffff811a7b55>] put_page+0x5/0x50
[ 327.540261] RSP: 0018:ffff880317c5bcf8 EFLAGS: 00010246
[ 327.540313] RAX: 00057ffffffff000 RBX: ffff880616a20000 RCX: 0000000000000000
[ 327.540379] RDX: 0000000000002014 RSI: 00057ffffffff000 RDI: fffffffffffffffe
[ 327.540445] RBP: ffff880317c5bd10 R08: 0000000000000103 R09: 0000000000000000
[ 327.540511] R10: 0000000000000000 R11: 0000000000000000 R12: fffffffffffffffe
[ 327.540576] R13: 0000000000000000 R14: ffff880317c5bd70 R15: ffff880317c5bd50
[ 327.540643] FS: 00007fd230b7f700(0000) GS:ffff880630800000(0000) knlGS:0000000000000000
[ 327.540717] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 327.540771] CR2: fffffffffffffffe CR3: 000000062a2c3000 CR4: 00000000000427e0
[ 327.540837] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 327.540904] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
[ 327.540974] Stack:
[ 327.541008] ffffffffa05d6d0c ffff880616a20000 0000000000000000 ffff880317c5bdc0
[ 327.541093] ffffffffa05ddaa2 0000000000000000 00000000002191bf 00000042f3feab2d
[ 327.541177] 00000042f3feab2d 0000000000000002 0000000000000001 0321000000000000
[ 327.541261] Call Trace:
[ 327.541321] [<ffffffffa05d6d0c>] ? kvm_vcpu_reload_apic_access_page+0x6c/0x80 [kvm]
[ 327.543615] [<ffffffffa05ddaa2>] vcpu_enter_guest+0x3f2/0x10f0 [kvm]
[ 327.545918] [<ffffffffa05e2f10>] kvm_arch_vcpu_ioctl_run+0x2b0/0x5a0 [kvm]
[ 327.548211] [<ffffffffa05e2d02>] ? kvm_arch_vcpu_ioctl_run+0xa2/0x5a0 [kvm]
[ 327.550500] [<ffffffffa05ca845>] kvm_vcpu_ioctl+0x2b5/0x680 [kvm]
[ 327.552768] [<ffffffff810b8d12>] ? creds_are_invalid.part.1+0x12/0x50
[ 327.555069] [<ffffffff810b8d71>] ? creds_are_invalid+0x21/0x30
[ 327.557373] [<ffffffff812d6066>] ? inode_has_perm.isra.49.constprop.65+0x26/0x80
[ 327.559663] [<ffffffff8122d985>] do_vfs_ioctl+0x305/0x530
[ 327.561917] [<ffffffff8122dc51>] SyS_ioctl+0xa1/0xc0
[ 327.564185] [<ffffffff816de829>] system_call_fastpath+0x16/0x1b
[ 327.566480] Code: 0b 31 f6 4c 89 e7 e8 4b 7f ff ff 0f 0b e8 24 fd ff ff e9 a9 fd ff ff 66 66 66 66 66 66 2e 0f 1f 84 00 00 00 00 00 66 66 66 66 90 <48> f7 07 00 c0 00 00 55 48 89 e5 75 2a 8b 47 1c 85 c0 74 1e f0
Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
Cc: stable@vger.kernel.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
|
We assumed all touch interfaces report touch data. But Bamboo
and Intuos non-touch devices report express keys on the touch
interface. We need to check touch_max before counting touches.
Reported-by: Tasos Sahanidis <tasos@tasossah.com>
Signed-off-by: Ping Cheng <pingc@wacom.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
|
|
We've got reports that the ALC3226 (a Dell variant of the ALC292) gives click
noises at the transition from D3 to D0 when widget power-saving is
enabled. Further debugging sessions showed that avoiding it isn't
trivial, unfortunately, since paths are basically activated
dynamically while the pins have already been enabled.
This patch disables the widget power-saving for such codecs.
Reported-by: Jonathan McDowell <noodles@earth.li>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
|