For test purposes, provide a generic NCI loopback function.
Signed-off-by: Christophe Ricard <christophe-h.ricard@st.com>
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
|
|
According to the NCI specification, the destination type and the
destination-specific parameters shall uniquely identify a single
destination for the Logical Connection.
Signed-off-by: Christophe Ricard <christophe-h.ricard@st.com>
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
|
|
nci_core_conn_close was not retrieving a conn_info using the correct
connection id.
Signed-off-by: Christophe Ricard <christophe-h.ricard@st.com>
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
|
|
NCI_CORE_CONN_CREATE may not have any destination type parameter.
Signed-off-by: Christophe Ricard <christophe-h.ricard@st.com>
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
|
|
A stupid refactoring bug in inet6_lookup_listener() needs to be fixed
in order to get proper SO_REUSEPORT behavior.
Fixes: 3b24d854cb35 ("tcp/dccp: do not touch listener sk_refcnt under synflood")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reported-by: Maciej Żenczykowski <maze@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The switchdev design implies that a software error should not happen in
the commit phase since it must have been previously reported in the
prepare phase. If a hardware error occurs during the commit phase,
there is nothing switchdev can do about it.
The DSA layer separates port_vlan_prepare and port_vlan_add for
simplicity and convenience. If a hardware error occurs during the
commit phase, there is no need to report it outside the driver itself.
Make the DSA port_vlan_add routine return void for explicitness.
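As a sketch of the prepare/commit pattern this relies on (hook names
and argument lists are illustrative, not the exact DSA prototypes):

    static int foo_port_vlan_prepare(struct dsa_switch *ds, int port,
                                     const struct switchdev_obj_port_vlan *vlan)
    {
            /* prepare phase: all software/capacity errors reported here */
            if (!foo_vlan_supported(ds, port, vlan))
                    return -EOPNOTSUPP;
            return 0;
    }

    static void foo_port_vlan_add(struct dsa_switch *ds, int port,
                                  const struct switchdev_obj_port_vlan *vlan)
    {
            /* commit phase: prepare already validated this, so the hook
             * returns void; a hardware failure can only be logged here */
            foo_hw_write_vlan(ds, port, vlan);
    }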
Signed-off-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The switchdev design implies that a software error should not happen in
the commit phase since it must have been previously reported in the
prepare phase. If a hardware error occurs during the commit phase,
there is nothing switchdev can do about it.
The DSA layer separates port_fdb_prepare and port_fdb_add for simplicity
and convenience. If a hardware error occurs during the commit phase,
there is no need to report it outside the DSA driver itself.
Make the DSA port_fdb_add routine return void for explicitness.
Signed-off-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The DSA layer doesn't care about the return code of the port_stp_update
routine, so make it void in the layer and the DSA drivers.
Replace the useless dsa_slave_stp_update function with a
dsa_slave_stp_state function used to reply to the switchdev
SWITCHDEV_ATTR_ID_PORT_STP_STATE attribute.
While at it, rename port_stp_update to port_stp_state_set to make the
state change explicit.
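A minimal sketch of the renamed hook for a hypothetical driver (the
BR_STATE_* values are the standard bridge port states; prototypes are
approximate):

    static void foo_port_stp_state_set(struct dsa_switch *ds, int port,
                                       u8 state)
    {
            switch (state) {
            case BR_STATE_DISABLED:
            case BR_STATE_BLOCKING:
            case BR_STATE_LISTENING:
                    foo_hw_set_forwarding(ds, port, false);
                    break;
            case BR_STATE_LEARNING:
                    foo_hw_set_learning(ds, port, true);
                    break;
            case BR_STATE_FORWARDING:
                    foo_hw_set_forwarding(ds, port, true);
                    break;
            }
    }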
Signed-off-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/jberg/mac80211-next
Johannes Berg says:
====================
For the 4.7 cycle, we have a number of changes:
* Bob's mesh mode rhashtable conversion, this includes
the rhashtable API change for allocation flags
* BSSID scan, connect() command reassoc support (Jouni)
* a fast (optimised, data-only) RX path, and RSS support in mac80211 (myself)
* various smaller changes
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
A pointer to the devlink structure can easily be obtained from
devlink_port->devlink, so share the user_ptr[0] pointer for both and
leave user_ptr[1] free for other users.
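In a netlink doit handler this amounts to something like the following
sketch (variable names illustrative):

    struct devlink_port *devlink_port = info->user_ptr[0];
    struct devlink *devlink = devlink_port->devlink; /* no second slot */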
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Reviewed-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
As we rely on the caller zeroing or correctly setting the struct before
the call, this implicit type set is either a no-op
(DEVLINK_PORT_TYPE_NOTSET is 0) or it overwrites the wanted value. So
remove it.
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Resetting a bearer/interface, with the consequence of resetting all its
pertaining links, is not an atomic action. This becomes particularly
evident in very large clusters, where a lot of traffic may happen on the
remaining links while we are busy shutting them down. In extreme cases,
we may even see links being re-created and re-established before we are
finished with the job.
To solve this, we now temporarily detach the bearer from the interface
when the bearer is reset. This inhibits all packet reception, while
sending is still possible. For the latter, we use the fact that the
device's user pointer is now zero to filter out which packets can be
sent during this situation, i.e., outgoing RESET messages only. This
filtering serves to speed up the neighbors' detection of the loss
event, and saves us from unnecessary probing.
Acked-by: Ying Xue <ying.xue@windriver.com>
Signed-off-by: Jon Maloy <jon.maloy@ericsson.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
When enabling a bearer we create a 'neighbor discoverer' instance by
calling the function tipc_disc_create() before the bearer is actually
registered in the list of enabled bearers. Because of this, the very
first discovery broadcast message, created by the mentioned function,
is lost, since it cannot find any valid bearer to use. Furthermore,
the send function used, tipc_bearer_xmit_skb(), does not free the given
buffer when it cannot find a bearer, resulting in the leak of exactly
one send buffer each time a bearer is enabled.
This commit fixes this problem by introducing two changes:
1) Instead of attempting to send the discovery message directly, we let
tipc_disc_create() return the discovery buffer to the calling
function, tipc_enable_bearer(), so that the latter can send it
when the enabling sequence is finished.
2) In tipc_bearer_xmit_skb(), as well as in the two other transmit
functions at the bearer layer, we now free the indicated buffer or
buffer chain when a valid bearer cannot be found.
Acked-by: Ying Xue <ying.xue@windriver.com>
Signed-off-by: Jon Maloy <jon.maloy@ericsson.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Now that the UDP encapsulation GRO functions have been moved to the UDP
socket, we no longer need the udp_offload infrastructure, so remove it.
Signed-off-by: Tom Herbert <tom@herbertland.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Adapt gue_gro_receive and gue_gro_complete to take a socket argument.
Don't set udp_offloads any more.
Signed-off-by: Tom Herbert <tom@herbertland.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Add gro_receive and gro_complete to struct udp_tunnel_sock_cfg.
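A sketch of how a tunnel driver might fill in the extended config; the
two new fields follow this commit, the handler names are hypothetical:

    struct udp_tunnel_sock_cfg cfg = {
            .sk_user_data = tunnel,
            .encap_type   = 1,
            .encap_rcv    = my_encap_recv,
            .gro_receive  = my_gro_receive,  /* new */
            .gro_complete = my_gro_complete, /* new */
    };

    setup_udp_tunnel_sock(net, sock, &cfg);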
Signed-off-by: Tom Herbert <tom@herbertland.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This patch adds GRO functions (gro_receive and gro_complete) to UDP
sockets. udp_gro_receive is changed to perform socket lookup on a
packet. If a socket is found the related GRO functions are called.
This feature obsoletes using the UDP offload infrastructure for GRO
(udp_offload). This has the advantage of not being limited to providing
offload on a per-port basis: GRO is now applied per individual bound
UDP socket. This also allows the possibility of
"application defined GRO" -- that is, we can attach something like
a BPF program to a UDP socket to perform GRO on an application
layer protocol.
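Conceptually, the new receive flow looks roughly like this (simplified
from the patch; the lookup helper is the protocol-specific one added in
a companion commit):

    struct sock *sk = udp4_lib_lookup_skb(skb, uh->source, uh->dest);

    if (sk && udp_sk(sk)->gro_receive) {
            /* a bound socket was found: run its GRO handler */
            pp = udp_sk(sk)->gro_receive(sk, head, skb);
            goto out;
    }
    /* otherwise fall through to the normal, non-GRO path */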
Signed-off-by: Tom Herbert <tom@herbertland.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Add externally visible functions to lookup a UDP socket by skb. This
will be used for GRO in UDP sockets. These functions also check
whether skb->dst is set, and if it is not, skb->dev is used to get
dev_net.
This allows calling lookup functions before dst has been set on the
skbuff.
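The dst-or-dev logic reduces to a sketch like this (helper name
illustrative):

    static struct net *skb_lookup_net(const struct sk_buff *skb)
    {
            /* prefer the dst's device; fall back to the input device
             * when the lookup runs before dst has been set */
            const struct net_device *dev =
                    skb_dst(skb) ? skb_dst(skb)->dev : skb->dev;

            return dev_net(dev);
    }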
Signed-off-by: Tom Herbert <tom@herbertland.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This reverts commit 5a5abb1fa3b05dd ("tun, bpf: fix suspicious RCU usage
in tun_{attach, detach}_filter"), replacing it with lock_sock around
sk_{attach,detach}_filter. The checks inside filter.c are updated with
lockdep_sock_is_held to check for proper socket locks.
It keeps the code cleaner by ensuring that only one lock governs the
socket filter instead of two independent locks.
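The resulting pattern, sketched with the tun-side details simplified:

    lock_sock(sk);
    ret = sk_attach_filter(&tun->fprog, sk); /* attach under sk lock */
    release_sock(sk);
    ...
    lock_sock(sk);
    sk_detach_filter(sk);                    /* detach under same lock */
    release_sock(sk);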
Cc: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The socket is locked either if we hold the slock spinlock (as in
lock_sock_fast and unlock_sock_fast) or if we own the lock
(sk_lock.owned != 0). Check for both, and at the same time verify that
the current thread/CPU really holds the lock.
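The check then boils down to essentially the following sketch of the
helper:

    static inline bool lockdep_sock_is_held(const struct sock *csk)
    {
            struct sock *sk = (struct sock *)csk;

            /* owned mutex-style lock, or the underlying spinlock */
            return lockdep_is_held(&sk->sk_lock) ||
                   lockdep_is_held(&sk->sk_lock.slock);
    }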
Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
During release_sock we use callbacks to finish the processing
of outstanding skbs on the socket. We actually are still locked,
sk_lock.owned == 1, but we have already told lockdep that the mutex
is released. This could lead to false positives in lockdep for
lockdep_sock_is_held (we don't hold the slock spinlock while processing
the outstanding skbs).
I took over this patch from Eric Dumazet and tested it.
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
David Ahern reported panics in __inet_hash() caused by my recent commit.
The reason is inet_reuseport_add_sock() was still using
sk_nulls_for_each_rcu() instead of sk_for_each_rcu().
SO_REUSEPORT enabled listeners were causing an instant crash.
While chasing this bug, I found that I forgot to clear SOCK_RCU_FREE
flag, as it is inherited from the parent at clone time.
Fixes: 3b24d854cb35 ("tcp/dccp: do not touch listener sk_refcnt under synflood")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reported-by: David Ahern <dsa@cumulusnetworks.com>
Tested-by: David Ahern <dsa@cumulusnetworks.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Allow iptunnel_pull_header to be called without special-casing the
ETH_P_TEB inner protocol.
Signed-off-by: Jiri Benc <jbenc@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
If user space issues a NL80211_CMD_CONNECT with
NL80211_ATTR_PREV_BSSID when there is already a connection, allow this
to proceed as a reassociation instead of rejecting the new connect
command with EALREADY.
Signed-off-by: Jouni Malinen <jouni@qca.qualcomm.com>
[validate prev_bssid]
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
|
|
This extends NL80211_CMD_CONNECT to allow the NL80211_ATTR_PREV_BSSID
attribute to be used similarly to the way this was already allowed with
NL80211_CMD_ASSOCIATE. This allows user space to request reassociation
(instead of association) when already connected to an AP. This provides
an option to reassociate within an ESS without having to disconnect and
associate with the AP.
Signed-off-by: Jouni Malinen <jouni@qca.qualcomm.com>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
|
|
Prevents excessive A-MSDU aggregation at low data rates or in bad
conditions.
Signed-off-by: Felix Fietkau <nbd@openwrt.org>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
|
|
Requires software tx queueing and fast-xmit support. For good
performance, drivers need frag_list support as well. This avoids the
need for copying data of aggregated frames. Running without it is only
supported for debugging purposes.
To avoid performance and packet size issues, the rate control module or
driver needs to limit the maximum A-MSDU size by setting
max_rc_amsdu_len in struct ieee80211_sta.
Signed-off-by: Felix Fietkau <nbd@openwrt.org>
[fix locking issue]
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
|
|
If the driver advertises the new HW flag USE_RSS, make the
station statistics on the fast-rx path per-CPU. This will
enable running the RX path in parallel, only hitting locking or
shared cachelines when the fast-RX path isn't available.
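A sketch of the per-CPU allocation this implies; the field and struct
names here are approximations of the mac80211 ones:

    if (ieee80211_hw_check(&local->hw, USE_RSS)) {
            sta->pcpu_rx_stats =
                    alloc_percpu(struct ieee80211_sta_rx_stats);
            if (!sta->pcpu_rx_stats)
                    return -ENOMEM;
    }

    /* fast-rx path: touch only this CPU's copy, no shared cachelines */
    struct ieee80211_sta_rx_stats *stats =
            this_cpu_ptr(sta->pcpu_rx_stats);
    stats->packets++;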
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
|
|
The regular RX path has a lot of code, but with a few
assumptions on the hardware it's possible to reduce the
amount of code significantly. Currently the assumptions
on the driver are the following:
* hardware/driver reordering buffer (if supporting aggregation)
* hardware/driver decryption & PN checking (if using encryption)
* hardware/driver did de-duplication
* hardware/driver did A-MSDU deaggregation
* AP_LINK_PS is used (in AP mode)
* no client powersave handling in mac80211 (in client mode)
of which some are actually checked per packet:
* de-duplication
* PN checking
* decryption
and additionally packets must
* not be A-MSDU (have been deaggregated by driver/device)
* be data packets
* not be fragmented
* be unicast
* have RFC 1042 header
Additionally, we dynamically assume:
* no encryption or CCMP/GCMP, TKIP/WEP/other not allowed
* station must be authorized
* 4-addr format not enabled
Some data needed for the RX path is cached in a new per-station
"fast_rx" structure, so that we only need to look at this and
the packet, no other memory when processing packets on the fast
RX path.
After doing the above per-packet checks, the data path collapses
down to a pretty simple conversion function taking advantage of
the data cached in the small fast_rx struct.
This should speed up the RX processing, and will make it easier
to reason about parallelizing RX (for which statistics will need
to be per-CPU still.)
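Purely to illustrate the idea of the per-station cache (this is not
the actual struct ieee80211_fast_rx layout):

    struct fast_rx_cache {            /* hypothetical layout */
            struct net_device *dev;   /* where frames get delivered */
            u8 da_offs, sa_offs;      /* offsets of DA/SA in the header */
            u8 vif_type;              /* interface-mode checks */
            bool uses_rss;            /* statistics are per-CPU */
    };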
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
|
|
On 32-bit platforms, the 64-bit counters we keep need to be protected
to be consistently read. Use the u64_stats_sync mechanism to do that.
In order to not end up with overly long lines, refactor the tidstats
assignments a bit.
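The u64_stats_sync pattern in question, as a sketch with illustrative
struct and field names: writers wrap each 64-bit update, readers retry
until they observe a consistent snapshot. On 64-bit builds these
helpers compile away to plain accesses.

    struct tid_rx_stats {
            u64 msdu;
            struct u64_stats_sync syncp;
    };

    /* writer (RX path), ts points at a struct tid_rx_stats */
    u64_stats_update_begin(&ts->syncp);
    ts->msdu++;
    u64_stats_update_end(&ts->syncp);

    /* reader (statistics dump) */
    unsigned int start;
    u64 msdu;

    do {
            start = u64_stats_fetch_begin(&ts->syncp);
            msdu = ts->msdu;
    } while (u64_stats_fetch_retry(&ts->syncp, start));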
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
|
|
When storing the last_rate_* values in the RX code, there's nothing
to guarantee consistency, so a concurrent reader could see, e.g.
last_rate_idx on the new value, but last_rate_flag still on the old,
getting completely bogus values in the end.
To fix this, I lifted the sta_stats_encode_rate() function from my
old rate statistics code, which encodes the entire rate data into a
single 16-bit value, avoiding the consistency issue.
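The encoding trick can be sketched as follows; the field split is
illustrative (the real encoding also packs bandwidth, NSS, etc.):

    static u16 encode_rate(u8 flags, u8 idx)
    {
            /* one 16-bit store: index and flags cannot be torn apart */
            return ((u16)flags << 8) | idx;
    }

    WRITE_ONCE(sta->last_rate, encode_rate(f, i)); /* RX path */
    rate = READ_ONCE(sta->last_rate);              /* reader */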
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
|
|
Instead of touching the rx_stats.last_rx from the status path, introduce
and use a status_stats.last_ack variable. This will make rx_stats.last_rx
indicate when the last frame was received, making it available for real
"last_rx" and statistics gathering; statistics, when done per-CPU, will
need to figure out which place was updated last for those items where the
"last" value is exposed.
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
|
|
There's no need to update rx_stats.last_rx after allocating
a station since it's already updated during allocation.
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
|
|
Move the averaged values out of rx_stats and into rx_stats_avg,
to cleanly split them out. The averaged ones cannot be supported
for parallel RX in a per-CPU fashion, while the other values can
be collected per CPU and then combined/selected when needed.
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
|
|
Move the semicolon; people typically assume that, and one line already
put a semicolon behind the "call".
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
|
|
For the RX MSDU statistics, we need to count the number of
MSDUs created and accepted from an A-MSDU. Right now, all
frames in A-MSDUs are completely ignored.
moving the RX MSDU statistics accounting into the deliver
function.
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
|
|
Sometimes drivers already looked up, or know out-of-band
from their device, which station transmitted a given RX
frame. Allow them to pass the station pointer to mac80211
to save the extra lookup.
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
|
|
When signaling that a GRO frame is ready to be processed, the network
stack correctly checks the length and aborts processing when a frame is
less than 14 bytes. However, such a condition is really indicative of a
broken driver, and should be loudly signaled, rather than silently
dropped as is the case today.
Convert the condition to use net_warn_ratelimited() to ensure the stack
loudly complains about such broken drivers.
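The shape of the change, sketched (the exact message text and drop
path in the patch may differ):

    if (unlikely(skb->len < ETH_HLEN)) { /* less than 14 bytes */
            net_warn_ratelimited("%s: dropped runt frame from %s\n",
                                 __func__,
                                 skb->dev ? skb->dev->name : "(no dev)");
            napi_reuse_skb(napi, skb);   /* still dropped, but loudly */
            return NULL;
    }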
Signed-off-by: Aaron Conole <aconole@bytheb.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Enable peeking at UDP datagrams at the offset specified with socket
option SOL_SOCKET/SO_PEEK_OFF. Peek at any datagram in the queue, up
to the end of the given datagram.
Implement the SO_PEEK_OFF semantics introduced in commit ef64a54f6e55
("sock: Introduce the SO_PEEK_OFF sock option"). Increase the offset
on peek, decrease it on regular reads.
When peeking, always checksum the packet immediately, to avoid
recomputation on subsequent peeks and final read.
The socket lock is not held for the duration of udp_recvmsg, so
peek and read operations can run concurrently. Only the last store
to sk_peek_off is preserved.
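From user space, the semantics look like this hedged example (not part
of the patch):

    #include <stdio.h>
    #include <sys/socket.h>

    void peek_demo(int udp_fd)
    {
            char buf[1500];
            int off = 0;
            ssize_t n1, n2, n3;

            /* enable peeking with an offset, starting at 0 */
            setsockopt(udp_fd, SOL_SOCKET, SO_PEEK_OFF, &off, sizeof(off));

            /* each MSG_PEEK starts at the offset and advances it,
             * capped at the end of the current datagram */
            n1 = recv(udp_fd, buf, 100, MSG_PEEK);
            n2 = recv(udp_fd, buf, 100, MSG_PEEK); /* next chunk */

            /* a regular read consumes data and decreases the offset */
            n3 = recv(udp_fd, buf, sizeof(buf), 0);
            printf("peeked %zd then %zd bytes, read %zd\n", n1, n2, n3);
    }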
Signed-off-by: Sam Kumar <samanthakumar@google.com>
Signed-off-by: Willem de Bruijn <willemb@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Remove UDP transport headers before queueing packets for reception.
This change simplifies a follow-up patch to add MSG_PEEK support.
Signed-off-by: Sam Kumar <samanthakumar@google.com>
Signed-off-by: Willem de Bruijn <willemb@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Lockdep warned of a lock dependency between the mesh_plink lock
and the internal lock for the rhashtable. The problem is that
the rhashtable code uses a spin lock with softirqs enabled, while
mesh_plink_timer executes a walk (to flush paths on a state change)
inside a softirq with the plink lock held.
This leads to the following deadlock if the timer fires while the rht
lock is held on this CPU and the plink lock is held on another CPU:
       CPU0                    CPU1
       ----                    ----
  lock(&(&ht->lock)->rlock);
                               local_irq_disable();
                               lock(&(&sta->mesh->plink_lock)->rlock);
                               lock(&(&ht->lock)->rlock);
  <Interrupt>
    lock(&(&sta->mesh->plink_lock)->rlock);

 *** DEADLOCK ***
Fix by waiting until we drop the plink lock to flush paths.
Fixes: d48a1b7cd439 ("mac80211: mesh: convert path table to rhashtable")
Signed-off-by: Bob Copeland <me@bobcopeland.com>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
|
|
The mesh path table needs to be around for the entire time the
interface is in mesh mode, as users can perform an mpath dump
at any time. The existing path table lifetime is instead tied
to the mesh BSS, which can cause crashes when different MBSSes
are joined in the context of a single interface, or when the
path table is dumped when no MBSS is joined.
Introduce a new function to perform the final teardown of the
interface and perform path table cleanup there. We already
free the individual path elements when leaving the mesh,
so no additional cleanup is needed there. This fixes the
following crash:
[ 47.753026] BUG: unable to handle kernel paging request at fffffff0
[ 47.753026] IP: [<c0239765>] kthread_data+0xa/0xe
[ 47.753026] *pde = 00741067 *pte = 00000000
[ 47.753026] Oops: 0000 [#4] PREEMPT
[ 47.753026] Modules linked in: ppp_generic slhc 8021q garp mrp sch_fq_codel iptable_mangle ipt_MASQUERADE nf_nat_masquerade_ipv4 iptable_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 nf_nat ip_tables ath9k_htc ath5k 8139too ath10k_pci ath10k_core arc4 ath9k ath9k_common ath9k_hw mac80211 ath cfg80211 cpufreq_powersave br_netfilter bridge stp llc ipw usb_wwan sierra_net usbnet af_alg natsemi via_rhine mii iTCO_wdt iTCO_vendor_support gpio_ich sierra coretemp pcspkr i2c_i801 lpc_ich ata_generic ata_piix libata ide_pci_generic piix e1000e igb i2c_algo_bit ptp pps_core [last unloaded: 8139too]
[ 47.753026] CPU: 0 PID: 12 Comm: kworker/u2:1 Tainted: G D W 4.5.0-wt-V3 #6
[ 47.753026] Hardware name: To Be Filled By O.E.M./To be filled by O.E.M., BIOS 080016 11/07/2014
[ 47.753026] task: f645a0c0 ti: f6462000 task.ti: f6462000
[ 47.753026] EIP: 0060:[<c0239765>] EFLAGS: 00010002 CPU: 0
[ 47.753026] EIP is at kthread_data+0xa/0xe
[ 47.753026] EAX: 00000000 EBX: 00000000 ECX: 00000000 EDX: 00000000
[ 47.753026] ESI: f645a0c0 EDI: f645a2fc EBP: f6463a80 ESP: f6463a78
[ 47.753026] DS: 007b ES: 007b FS: 0000 GS: 0000 SS: 0068
[ 47.753026] CR0: 8005003b CR2: 00000014 CR3: 353e5000 CR4: 00000690
[ 47.753026] Stack:
[ 47.753026] c0236866 00000000 f6463aac c05768b4 00000009 f6463ba8 f6463ab0 c0247010
[ 47.753026] 00000000 f645a0c0 f6464000 00000009 f6463ba8 f6463ab8 c0576eb2 f645a0c0
[ 47.753026] f6463aec c0228be4 c06335a4 f6463adc f6463ad0 c06c06d4 f6463ae4 c02471b0
[ 47.753026] Call Trace:
[ 47.753026] [<c0236866>] ? wq_worker_sleeping+0xb/0x78
[ 47.753026] [<c05768b4>] __schedule+0xda/0x587
[ 47.753026] [<c0247010>] ? vprintk_default+0x12/0x14
[ 47.753026] [<c0576eb2>] schedule+0x72/0x89
[ 47.753026] [<c0228be4>] do_exit+0xb8/0x71d
[ 47.753026] [<c02471b0>] ? kmsg_dump+0xa9/0xae
[ 47.753026] [<c0203576>] oops_end+0x69/0x70
[ 47.753026] [<c021dcdb>] no_context+0x1bb/0x1c5
[ 47.753026] [<c021de1b>] __bad_area_nosemaphore+0x136/0x140
[ 47.753026] [<c021e2ef>] ? vmalloc_sync_all+0x19a/0x19a
[ 47.753026] [<c021de32>] bad_area_nosemaphore+0xd/0x10
[ 47.753026] [<c021e0a1>] __do_page_fault+0x26c/0x320
[ 47.753026] [<c021e2ef>] ? vmalloc_sync_all+0x19a/0x19a
[ 47.753026] [<c021e2fa>] do_page_fault+0xb/0xd
[ 47.753026] [<c05798f8>] error_code+0x58/0x60
[ 47.753026] [<c021e2ef>] ? vmalloc_sync_all+0x19a/0x19a
[ 47.753026] [<c0239765>] ? kthread_data+0xa/0xe
[ 47.753026] [<c0236866>] ? wq_worker_sleeping+0xb/0x78
[ 47.753026] [<c05768b4>] __schedule+0xda/0x587
[ 47.753026] [<c0247010>] ? vprintk_default+0x12/0x14
[ 47.753026] [<c0576eb2>] schedule+0x72/0x89
[ 47.753026] [<c0228be4>] do_exit+0xb8/0x71d
[ 47.753026] [<c02471b0>] ? kmsg_dump+0xa9/0xae
[ 47.753026] [<c0203576>] oops_end+0x69/0x70
[ 47.753026] [<c021dcdb>] no_context+0x1bb/0x1c5
[ 47.753026] [<c021de1b>] __bad_area_nosemaphore+0x136/0x140
[ 47.753026] [<c021e2ef>] ? vmalloc_sync_all+0x19a/0x19a
[ 47.753026] [<c021de32>] bad_area_nosemaphore+0xd/0x10
[ 47.753026] [<c021e0a1>] __do_page_fault+0x26c/0x320
[ 47.753026] [<c021e2ef>] ? vmalloc_sync_all+0x19a/0x19a
[ 47.753026] [<c021e2fa>] do_page_fault+0xb/0xd
[ 47.753026] [<c05798f8>] error_code+0x58/0x60
[ 47.753026] [<c021e2ef>] ? vmalloc_sync_all+0x19a/0x19a
[ 47.753026] [<c0239765>] ? kthread_data+0xa/0xe
[ 47.753026] [<c0236866>] ? wq_worker_sleeping+0xb/0x78
[ 47.753026] [<c05768b4>] __schedule+0xda/0x587
[ 47.753026] [<c0391e32>] ? put_io_context_active+0x6d/0x95
[ 47.753026] [<c0576eb2>] schedule+0x72/0x89
[ 47.753026] [<c02291f8>] do_exit+0x6cc/0x71d
[ 47.753026] [<c0203576>] oops_end+0x69/0x70
[ 47.753026] [<c021dcdb>] no_context+0x1bb/0x1c5
[ 47.753026] [<c021de1b>] __bad_area_nosemaphore+0x136/0x140
[ 47.753026] [<c021e2ef>] ? vmalloc_sync_all+0x19a/0x19a
[ 47.753026] [<c021de32>] bad_area_nosemaphore+0xd/0x10
[ 47.753026] [<c021e0a1>] __do_page_fault+0x26c/0x320
[ 47.753026] [<c03b9160>] ? debug_smp_processor_id+0x12/0x16
[ 47.753026] [<c02015e2>] ? __switch_to+0x24/0x40e
[ 47.753026] [<c021e2ef>] ? vmalloc_sync_all+0x19a/0x19a
[ 47.753026] [<c021e2fa>] do_page_fault+0xb/0xd
[ 47.753026] [<c05798f8>] error_code+0x58/0x60
[ 47.753026] [<c021e2ef>] ? vmalloc_sync_all+0x19a/0x19a
[ 47.753026] [<c03b59d2>] ? rhashtable_walk_init+0x5c/0x93
[ 47.753026] [<f9843221>] mesh_path_tbl_expire.isra.24+0x19/0x82 [mac80211]
[ 47.753026] [<f984408b>] mesh_path_expire+0x11/0x1f [mac80211]
[ 47.753026] [<f9842bb7>] ieee80211_mesh_work+0x73/0x1a9 [mac80211]
[ 47.753026] [<f98207d1>] ieee80211_iface_work+0x2ff/0x311 [mac80211]
[ 47.753026] [<c0235fa3>] process_one_work+0x14b/0x24e
[ 47.753026] [<c0236313>] worker_thread+0x249/0x343
[ 47.753026] [<c02360ca>] ? process_scheduled_works+0x24/0x24
[ 47.753026] [<c0239359>] kthread+0x9e/0xa3
[ 47.753026] [<c0578e50>] ret_from_kernel_thread+0x20/0x40
[ 47.753026] [<c02392bb>] ? kthread_parkme+0x18/0x18
[ 47.753026] Code: 6b c0 85 c0 75 05 e8 fb 74 fc ff 89 f8 84 c0 75 08 8d 45 e8 e8 34 dd 33 00 83 c4 28 5b 5e 5f 5d c3 55 8b 80 10 02 00 00 89 e5 5d <8b> 40 f0 c3 55 b9 04 00 00 00 89 e5 52 8b 90 10 02 00 00 8d 45
[ 47.753026] EIP: [<c0239765>] kthread_data+0xa/0xe SS:ESP 0068:f6463a78
[ 47.753026] CR2: 00000000fffffff0
[ 47.753026] ---[ end trace 867ca0bdd0767790 ]---
Fixes: 3b302ada7f0a ("mac80211: mesh: move path tables into if_mesh")
Reported-by: Fred Veldini <fred.veldini@gmail.com>
Signed-off-by: Bob Copeland <me@bobcopeland.com>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
|
|
Several of the mesh path fields are undocumented and some
of the documentation is no longer correct or relevant after
the switch to rhashtable. Clean up the kernel doc
accordingly and reorder some fields to match the structure
layout.
Signed-off-by: Bob Copeland <me@bobcopeland.com>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
|
|
Reduce padding waste in struct mesh_table and struct rmc_entry by
moving the smaller fields to the end.
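To illustrate the kind of waste being fixed (these are not the actual
mesh structs): on a 64-bit build the first layout takes 32 bytes, the
second only 24, because the small fields no longer punch 4-byte holes
before each pointer.

    struct before {
            u32 x;    /* 4 bytes + 4 bytes padding */
            void *a;  /* 8 bytes */
            u32 y;    /* 4 bytes + 4 bytes padding */
            void *b;  /* 8 bytes */
    };                /* sizeof == 32 */

    struct after {
            void *a;
            void *b;
            u32 x, y; /* packed together at the end */
    };                /* sizeof == 24 */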
Signed-off-by: Bob Copeland <me@bobcopeland.com>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
|
|
Since we have converted the mesh path tables to rhashtable, we are
no longer swapping out the entire mesh_pathtbl pointer with RCU.
As a result, we no longer need indirection to the hlist head for
the gates list and can simply embed it, saving a pair of
pointer-sized allocations.
Signed-off-by: Bob Copeland <me@bobcopeland.com>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
|
|
The RMC cache has 256 list heads plus a u32, which puts it at the
unfortunate size of 4104 bytes with padding. kmalloc() will then
round this up to the next power-of-two, so we wind up actually
using two pages here where most of the second is wasted.
Switch to hlist heads here to reduce the structure size down to
fit within a page.
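The arithmetic, assuming a 64-bit build (struct names illustrative):

    struct rmc_before {
            struct list_head bucket[256];  /* 256 * 16 = 4096 bytes  */
            u32 idx_mask;                  /* + 4, padded -> 4104    */
    };                                     /* kmalloc rounds to 8192 */

    struct rmc_after {
            struct hlist_head bucket[256]; /* 256 * 8 = 2048 bytes   */
            u32 idx_mask;                  /* + 4 -> 2052            */
    };                                     /* fits within one page   */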
Signed-off-by: Bob Copeland <me@bobcopeland.com>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
|
|
In the unlikely case that mesh_rmc_init() fails with -ENOMEM,
the rmc pointer will be left as NULL but the interface is still
operational because ieee80211_mesh_init_sdata() is not allowed
to fail.
If this happens, we would blindly dereference rmc when checking
whether a multicast frame is in the cache. Instead just drop the
frames in the forwarding path.
Signed-off-by: Bob Copeland <me@bobcopeland.com>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
|
|
The mesh_path_reclaim() function, called from an rcu callback, cancels
the mesh_path_timer associated with a mesh path. Unfortunately, this
call can happen much later, perhaps after the hash table itself is
destroyed.
Such a situation led to the following crash in mesh_path_send_to_gates()
when dereferencing the tbl pointer:
[ 23.901661] BUG: unable to handle kernel NULL pointer dereference at 0000000000000008
[ 23.905516] IP: [<ffffffff814c910b>] mesh_path_send_to_gates+0x2b/0x740
[ 23.908757] PGD 99ca067 PUD 99c4067 PMD 0
[ 23.910789] Oops: 0000 [#1] PREEMPT SMP DEBUG_PAGEALLOC
[ 23.913485] CPU: 0 PID: 0 Comm: swapper/0 Not tainted 4.5.0-rc6-wt+ #43
[ 23.916675] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Debian-1.8.2-1 04/01/2014
[ 23.920471] task: ffffffff81685500 ti: ffffffff81678000 task.ti: ffffffff81678000
[ 23.922619] RIP: 0010:[<ffffffff814c910b>] [<ffffffff814c910b>] mesh_path_send_to_gates+0x2b/0x740
[ 23.925237] RSP: 0018:ffff88000b403d30 EFLAGS: 00010286
[ 23.926739] RAX: 0000000000000000 RBX: ffff880009bc0d20 RCX: 0000000000000102
[ 23.928796] RDX: 000000000000002e RSI: 0000000000000001 RDI: ffff880009bc0d20
[ 23.930895] RBP: ffff88000b403e18 R08: 0000000000000001 R09: 0000000000000001
[ 23.932917] R10: 0000000000000000 R11: 0000000000000001 R12: ffff880009c20940
[ 23.936370] R13: ffff880009bc0e70 R14: ffff880009c21c40 R15: ffff880009bc0d20
[ 23.939823] FS: 0000000000000000(0000) GS:ffff88000b400000(0000) knlGS:0000000000000000
[ 23.943688] CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b
[ 23.946429] CR2: 0000000000000008 CR3: 00000000099c5000 CR4: 00000000000006b0
[ 23.949861] Stack:
[ 23.950840] 000000000000002e ffff880009c20940 ffff88000b403da8 ffffffff8109e551
[ 23.954467] ffffffff82711be2 000000000000002e 0000000000000000 ffffffff8166a5f5
[ 23.958141] 0000000000685ce8 0000000000000246 ffff880009bc0d20 ffff880009c20940
[ 23.961801] Call Trace:
[ 23.962987] <IRQ>
[ 23.963963] [<ffffffff8109e551>] ? vprintk_emit+0x351/0x5e0
[ 23.966782] [<ffffffff8109e8ff>] ? vprintk_default+0x1f/0x30
[ 23.969529] [<ffffffff810ffa41>] ? printk+0x48/0x50
[ 23.971956] [<ffffffff814ceef3>] mesh_path_timer+0x133/0x160
[ 23.974707] [<ffffffff814cedc0>] ? mesh_nexthop_resolve+0x230/0x230
[ 23.977775] [<ffffffff810b04ee>] call_timer_fn+0xce/0x330
[ 23.980448] [<ffffffff810b0425>] ? call_timer_fn+0x5/0x330
[ 23.983126] [<ffffffff814cedc0>] ? mesh_nexthop_resolve+0x230/0x230
[ 23.986091] [<ffffffff810b097c>] run_timer_softirq+0x22c/0x390
Instead of cancelling in the RCU callback, set a new flag to prevent the
timer from being rearmed, and then cancel the timer synchronously when
freeing the mesh path. This leaves mesh_path_reclaim() doing nothing
but kfree, so switch to kfree_rcu().
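The shape of the fix, as a sketch; the flag name and locking details
are illustrative, the key points being the synchronous cancel and
kfree_rcu():

    static void mesh_path_free(struct mesh_path *mpath)
    {
            spin_lock_bh(&mpath->state_lock);
            mpath->flags |= MESH_PATH_DELETED; /* timer must not rearm */
            spin_unlock_bh(&mpath->state_lock);

            del_timer_sync(&mpath->timer);     /* wait out running timer */
            kfree_rcu(mpath, rcu);             /* RCU callback only frees */
    }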
Fixes: 3b302ada7f0a ("mac80211: mesh: move path tables into if_mesh")
Signed-off-by: Bob Copeland <me@bobcopeland.com>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
|
|
Legacy clients don't support P2P power save mechanisms, and thus if a P2P GO
has a legacy client connected to it, it should disable P2P PS mechanisms.
Let the driver know about this with a new bss_conf parameter.
Signed-off-by: Ayala Beker <ayala.beker@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
|
|
Legacy clients don't support P2P power save mechanisms, and thus
if a P2P GO has a legacy client connected to it, it has to make
some changes in the PS behavior.
To handle this, add an attribute to specify whether a station supports
P2P PS or not. If the attribute is not specified, cfg80211 will assume
that the station supports it for a P2P GO interface, and does NOT
support it for an AP interface, matching the current assumptions in the
code.
Signed-off-by: Ayala Beker <ayala.beker@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
|