Use kmemdup when another buffer is immediately copied into the
allocated region. This replaces a call to allocation followed by
memcpy with a single call to kmemdup.
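For example, a conversion of this shape (a generic sketch, not the
exact hunk from this patch):

  /* before: allocate, then copy */
  p = kmalloc(len, GFP_KERNEL);
  if (!p)
          return -ENOMEM;
  memcpy(p, src, len);

  /* after: one call does both */
  p = kmemdup(src, len, GFP_KERNEL);
  if (!p)
          return -ENOMEM;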
Signed-off-by: Muhammad Falak R Wani <falakreyaz@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
---
Use kmemdup when another buffer is immediately copied into the
allocated region. This replaces a call to allocation followed by
memcpy with a single call to kmemdup.
Signed-off-by: Muhammad Falak R Wani <falakreyaz@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
---
Use kmemdup when another buffer is immediately copied into the
allocated region. This replaces a call to allocation followed by
memcpy with a single call to kmemdup.
Signed-off-by: Muhammad Falak R Wani <falakreyaz@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
---
While reviewing the filesystems that set FS_USERNS_MOUNT I spotted the
bpf filesystem. Looking at the code I saw a broken usage of mount_ns
with current->nsproxy->mnt_ns. As the code does not acquire a
reference to the mount namespace, it cannot possibly be correct to
store the mount namespace on the superblock as it does.
Replace mount_ns with mount_nodev so that each mount of the bpf
filesystem returns a distinct instance, and the code is not buggy.
In discussion with Hannes Frederic Sowa it was reported that the use
of mount_ns was an attempt to have one bpf instance per mount
namespace, in an attempt to keep resources that pin other resources
from hiding. That intent simply does not work: the vfs is not built
to allow that kind of behavior. Which means that the bpf filesystem
really is buggy, both semantically and in its implementation, as it
does not, nor can it, implement the original intent.
This change is userspace visible, but my experience with similar
filesystems leads me to believe nothing will break with a model in
which each mount of the bpf filesystem is distinct from all others.
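Concretely, the mount callback becomes something like the following (a
minimal sketch, assuming the existing bpf_fill_super helper):

  /* kernel/bpf/inode.c: each mount now gets its own superblock */
  static struct dentry *bpf_mount(struct file_system_type *type, int flags,
                                  const char *dev_name, void *data)
  {
          return mount_nodev(type, flags, data, bpf_fill_super);
  }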
Fixes: b2197755b263 ("bpf: add support for persistent maps/progs")
Cc: Hannes Frederic Sowa <hannes@stressinduktion.org>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
Acked-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
---
git://git.kernel.org/pub/scm/linux/kernel/git/kvalo/wireless-drivers-next
Kalle Valo says:
====================
wireless-drivers patches for 4.7
Major changes:
iwlwifi
* remove IWLWIFI_DEBUG_EXPERIMENTAL_UCODE kconfig option
* work for RX multiqueue continues
* dynamic queue allocation work continues
* add Luca as maintainer
* a bunch of fixes and improvements all over
brcmfmac
* add 4356 sdio support
ath6kl
* add ability to set debug uart baud rate with a module parameter
wil6210
* add debugfs file to configure firmware led functionality
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
---
The current implementation updates the MTU size and notifies the
cdc_ncm device about the datagram size change using a
USB_CDC_SET_MAX_DATAGRAM_SIZE request, instead of changing rx_urb_size.
Whenever the MTU is changed, the datagram size should also be updated.
Also update the maxmtu formula so that it takes max_datagram_size via
cdc_ncm_max_dgram_size() and not from ctx.
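A sketch of the resulting ndo_change_mtu handler, assuming helpers
cdc_ncm_eth_hlen() and a cdc_ncm_set_dgram_size() that issues the
USB_CDC_SET_MAX_DATAGRAM_SIZE request (helper names assumed here):

  static int cdc_ncm_change_mtu(struct net_device *net, int new_mtu)
  {
          struct usbnet *dev = netdev_priv(net);
          int maxmtu = cdc_ncm_max_dgram_size(dev) - cdc_ncm_eth_hlen(dev);

          if (new_mtu <= 0 || new_mtu > maxmtu)
                  return -EINVAL;

          net->mtu = new_mtu;
          /* keep the device's datagram size in sync with the new MTU */
          cdc_ncm_set_dgram_size(dev, new_mtu + cdc_ncm_eth_hlen(dev));
          return 0;
  }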
Signed-off-by: Robert Dobrowolski <robert.dobrowolski@linux.intel.com>
Signed-off-by: Rafal Redzimski <rafal.f.redzimski@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
---
We used to check dev->reg_state against NETREG_REGISTERED each time we
were woken up. But after commit 9e641bdcfa4e ("net-tun:
restructure tun_do_read for better sleep/wakeup efficiency"), it uses
skb_recv_datagram() which does not check dev->reg_state. As a result,
if we delete a tun/tap device while a process is blocked in a read,
the device will wait forever for the reference count held by that
process.
Fix this by setting RCV_SHUTDOWN, which is checked by
skb_recv_datagram(), before trying to wake up the process during uninit.
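A minimal sketch of the uninit-side change (placement and locking
elided; tfile is the tun queue's file structure):

  /* mark the queue's socket as shut down for receive, then wake the
   * reader so skb_recv_datagram() sees RCV_SHUTDOWN and returns
   */
  struct sock *sk = tfile->socket.sk;

  sk->sk_shutdown |= RCV_SHUTDOWN;
  wake_up_interruptible_poll(sk_sleep(sk), POLLIN | POLLRDNORM | POLLRDHUP);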
Fixes: 9e641bdcfa4e ("net-tun: restructure tun_do_read for better
sleep/wakeup efficiency")
Cc: Eric Dumazet <edumazet@google.com>
Cc: Xi Wang <xii@google.com>
Cc: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
Acked-by: Eric Dumazet <edumazet@google.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
---
Alexander Duyck says:
====================
Follow-ups for GUEoIPv6 patches
This patch series is meant to be applied after:
[PATCH v7 net-next 00/16] ipv6: Enable GUEoIPv6 and more fixes for v6 tunneling
The first patch addresses an issue we had already resolved in GREv4 and
is now present in GREv6 with the introduction of FOU/GUE for IPv6-based
GRE tunnels.
The second patch goes through and enables IPv6 tunnel offloads for the Intel
NICs that already support the IPv4 based IP-in-IP tunnel offloads. I have
only done a bit of touch testing but have seen ~20 Gb/s over an i40e
interface using a v4-in-v6 tunnel, and I have verified IPv6 GRE is still
passing traffic at around the same rate. I plan to do further testing but
with these patches present it should enable a wider audience to be able to
test the new features introduced in Tom's patchset with hardware offloads.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
---
This patch adds support for offloading IPXIP6 type packets that represent
either IPv4 or IPv6 encapsulated inside an IPv6 outer IP header. In
addition, with this change we should also be able to support
FOU-encapsulated traffic with outer IPv6 headers.
Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
---
This patch addresses the same issue we had for IPv4, where enabling GRE
with an inner checksum cannot be supported with FOU/GUE due to the fact
that they will jump past the GRE header as it is treated like a tunnel
header.
Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
---
Sowmini Varadhan says:
====================
RDS: TCP: connection spamming fixes
We have been testing the RDS-TCP code with a connection spammer
that sends incoming SYNs to the RDS listen port well after
an rds-tcp connection has been established, and found a few
race windows that are fixed by this patch series.
Patch 1 avoids a null pointer deref when an incoming SYN
shows up when a netns is being dismantled, or when the
rds-tcp module is being unloaded.
Patch 2 addresses the case when a SYN is received after the
connection arbitration algorithm has converged: the incoming
SYN should not needlessly quiesce the transmit path, and it
should not result in needless TCP connection resets due to
re-execution of the connection arbitration logic.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
---
When a rogue SYN is received after the connection arbitration
algorithm has converged, the incoming SYN should not needlessly
quiesce the transmit path, and it should not result in needless
TCP connection resets due to re-execution of the connection
arbitration logic.
Signed-off-by: Sowmini Varadhan <sowmini.varadhan@oracle.com>
Acked-by: Santosh Shilimkar <santosh.shilimkar@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
---
There are two instances where we want to terminate RDS-TCP: when
exiting the netns or during module unload. In either case, the
termination sequence is to stop the listen socket, mark the
rtn->rds_tcp_listen_sock as NULL, and flush any accept workqueues.
Any work items flushed at this point will thus encounter a
NULL rds_tcp_listen_sock, and must exit gracefully to allow
the RDS-TCP termination to complete successfully.
Signed-off-by: Sowmini Varadhan <sowmini.varadhan@oracle.com>
Acked-by: Santosh Shilimkar <santosh.shilimkar@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
---
->sk_shutdown bits share one bitfield with some other bits in the sock
struct, such as ->sk_no_check_[r,t]x, ->sk_userlocks ...
sock_setsockopt() may write to these bits while holding the socket lock.
In the case of AF_UNIX sockets, we change the ->sk_shutdown bits while
holding only unix_state_lock(), so concurrent setsockopt() and
shutdown() may lead to corruption of these bits.
Fix this by moving the ->sk_shutdown bits out of the bitfield into a
separate byte. This does not change the size of 'struct sock', since
->sk_shutdown moves into a previously unused 16-bit hole.
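An abbreviated sketch of the resulting layout (field list trimmed; the
padding bits filling the freed space are my reconstruction and details
may differ):

  /* struct sock (sketch) */
  unsigned int    sk_padding     : 2,
                  sk_no_check_tx : 1,
                  sk_no_check_rx : 1,
                  sk_userlocks   : 4,
                  sk_protocol    : 8,
                  sk_type        : 16;
  u8              sk_shutdown;   /* its own byte, in the old 16-bit hole */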
Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
Suggested-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
---
Tom Herbert says:
====================
ipv6: Enable GUEoIPv6 and more fixes for v6 tunneling
This patch set:
- Fixes GRE6 to process translate flags correctly from configuration
- Adds support for GSO and GRO for ip6ip6 and ip4ip6
- Add support for FOU and GUE in IPv6
- Support GRE, ip6ip6 and ip4ip6 over FOU/GUE
- Fixes ip6_input to deal with UDP encapsulations
- Some other minor fixes
v2:
- Removed a check of GSO types in MPLS
- Define GSO type SKB_GSO_IPXIP6 and SKB_GSO_IPXIP4 (based on input
from Alexander)
- Don't define GSO types specifically for IP6IP6 and IP4IP6, above
fix makes that unnecessary
- Don't bother clearing encapsulation flag in UDP tunnel segment
(another item suggested by Alexander).
v3:
- Address some minor comments from Alexander
v4:
- Rebase on changes to fix IP TX tunnels
- Fix MTU issues in ip4ip6, ip6ip6
- Add test data for above
v5:
- Address feedback from Shmulik Ladkani regarding extension header
code that does not return the next header but instead relies
on returning the value via nhoff. The solution here is to fix EH
processing to return the nexthdr value.
- Refactored IPv4 encaps so that we won't need to create
an ip6_tunnel_core.c when adding encap support to IPv6.
v6:
- Fix build issues with regard to new GSO constants
- Fix MTU calculation issues in ip6_tunnel.c pointed out by Alex
- Add encap_hlen into headroom for GREv6 to work with FOU/GUE
v7:
- Added skb_set_inner_ipproto to ip4ip6 and ip6ip6
- Clarified max_headroom in ip6_tnl_xmit
- Set features for IPv6 tunnels
- Other cleanup suggested by Alexander
- Above fixes throughput performance issues in ip4ip6 and ip6ip6,
updated test results to reflect that
Tested: Various cases of IP tunnels with netperf TCP_STREAM and TCP_RR.
- IPv4/GRE/GUE/IPv6 with RCO
  1 TCP_STREAM: 6616 Mbps
  200 TCP_RR: 1244043 tps; 141/243/446 90/95/99% latencies; 86.61% CPU utilization
- IPv6/GRE/GUE/IPv6 with RCO
  1 TCP_STREAM: 6940 Mbps
  200 TCP_RR: 1270903 tps; 138/236/440 90/95/99% latencies; 87.51% CPU utilization
- IP6IP6
  1 TCP_STREAM: 5307 Mbps
  200 TCP_RR: 498981 tps; 388/498/631 90/95/99% latencies; 19.75% CPU utilization (1 CPU saturated)
- IP6IP6/GUE with RCO
  1 TCP_STREAM: 5575 Mbps
  200 TCP_RR: 1233818 tps; 143/244/451 90/95/99% latencies; 87.57% CPU utilization
- IP4IP6
  1 TCP_STREAM: 5235 Mbps
  200 TCP_RR: 763774 tps; 250/318/466 90/95/99% latencies; 35.25% CPU utilization (1 CPU saturated)
- IP4IP6/GUE with RCO
  1 TCP_STREAM: 5337 Mbps
  200 TCP_RR: 1196385 tps; 148/251/460 90/95/99% latencies; 87.56% CPU utilization
- GRE with keyid
  200 TCP_RR: 744173 tps; 258/332/461 90/95/99% latencies; 34.59% CPU utilization (1 CPU saturated)
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
---
Since iptunnel_handle_offloads() is called in all paths we can
probably drop the block in ip6_tnl_xmit that was checking for
skb->encapsulation and resetting the inner headers.
Signed-off-by: Tom Herbert <tom@herbertland.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
---
Signed-off-by: Tom Herbert <tom@herbertland.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
---
Signed-off-by: Tom Herbert <tom@herbertland.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
---
Need to set dev features; use the same values that are used in GREv6.
Signed-off-by: Tom Herbert <tom@herbertland.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
---
Add netlink and setup for encapsulation
Signed-off-by: Tom Herbert <tom@herbertland.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
---
Add netlink and setup for encapsulation
Signed-off-by: Tom Herbert <tom@herbertland.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
---
This patch adds a new fou6 module that provides encapsulation
operations for IPv6.
Signed-off-by: Tom Herbert <tom@herbertland.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
---
Add encap_hlen and an ip_tunnel_encap structure to ip6_tnl. Add
functions for getting the encap hlen, setting up encap on a tunnel,
and performing the encapsulation operation.
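The new per-tunnel state, roughly (a sketch; surrounding fields
elided):

  struct ip6_tnl {
          /* ... */
          int                     encap_hlen;     /* encap header length */
          struct ip_tunnel_encap  encap;          /* FOU/GUE parameters */
          /* ... */
  };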
Signed-off-by: Tom Herbert <tom@herbertland.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
---
This patch adds receive path support for IPv6 with fou.
- Add address family to fou structure for open sockets. This supports
AF_INET and AF_INET6. Lookups for fou ports are performed on both the
port number and family.
- In the fou and gue receive paths, adjust tot_len in the IPv4 header or
payload_len in the IPv6 header, based on address family.
- Allow AF_INET6 in FOU_ATTR_AF netlink attribute.
Signed-off-by: Tom Herbert <tom@herbertland.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
---
Create __fou_build_header and __gue_build_header. These implement the
protocol-generic parts of building the fou and gue headers.
fou_build_header and gue_build_header implement the IPv4-specific
parts and call the __*_build_header functions.
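The resulting shape, as a sketch (signatures abbreviated from memory
and may differ in detail):

  static int fou_build_header(struct sk_buff *skb, struct ip_tunnel_encap *e,
                              u8 *protocol, struct flowi4 *fl4)
  {
          /* IPv4-only pieces (GSO type, sport into fl4), then delegate
           * the address-family independent work:
           */
          return __fou_build_header(skb, e, protocol, &fl4->fl4_sport,
                                    SKB_GSO_UDP_TUNNEL);
  }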
Signed-off-by: Tom Herbert <tom@herbertland.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
---
Use helper function to set up UDP tunnel related information for a fou
socket.
Signed-off-by: Tom Herbert <tom@herbertland.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
---
Consolidate all the ip_tunnel_encap definitions in one spot in the
header file. Also, move ip_encap_hlen and ip_tunnel_encap from
ip_tunnel.c to ip_tunnels.h so they can be called without a dependency
on the ip_tunnel module. Similarly, move iptun_encaps to ip_tunnel_core.c.
Signed-off-by: Tom Herbert <tom@herbertland.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
---
When performing foo-over-UDP, UDP packets are processed by the
encapsulation handler, which returns another protocol to process.
This may result in processing two (or more) protocols in the
loop that are marked as INET6_PROTO_FINAL. The actions taken
on hitting a final protocol, in particular skb_postpull_rcsum,
can only be performed once.
This patch adds a check of whether a final protocol has been seen. The
rules are (see the sketch after this list):
- If a final protocol has not been seen, any protocol is processed
(final and non-final). In the case of a final protocol, the final
actions are taken (like skb_postpull_rcsum).
- If a final protocol has been seen (e.g. an encapsulating UDP
header), then no further non-final protocols are allowed
(e.g. extension headers). For subsequent final protocols the
final actions are not taken (e.g. skb_postpull_rcsum).
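A condensed sketch of that check in the receive loop (names
illustrative; the real control flow has more cases):

  /* inside the ip6_input_finish() resubmit loop (sketch) */
  if (ipprot->flags & INET6_PROTO_FINAL) {
          if (!have_final) {
                  /* final actions happen exactly once */
                  skb_postpull_rcsum(skb, skb_network_header(skb),
                                     skb_network_header_len(skb));
                  have_final = true;
          }
  } else if (have_final) {
          /* no non-final (e.g. extension) headers after a final one */
          goto discard;
  }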
Signed-off-by: Tom Herbert <tom@herbertland.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
---
In ip6_input_finish the nexthdr protocol is retrieved from the
next header offset that is returned in the cb of the skb.
This method does not work for UDP encapsulations, which may not
even have a concept of a nexthdr field (e.g. FOU).
This patch checks for a final protocol (INET6_PROTO_FINAL) when a
protocol handler returns > 0. If the protocol is not final, then
resubmission is performed on the nhoff value. If the protocol is
final, the return value is taken to be the nexthdr.
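A minimal sketch of the dispatch change (variable names as used in
ip6_input_finish):

  ret = ipprot->handler(skb);
  if (ret > 0) {
          if (ipprot->flags & INET6_PROTO_FINAL) {
                  /* e.g. a UDP encap: the handler told us the inner protocol */
                  nexthdr = ret;
          } else {
                  /* classic path: re-read the next header at nhoff */
                  nexthdr = skb_network_header(skb)[nhoff];
          }
          goto resubmit;
  }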
Signed-off-by: Tom Herbert <tom@herbertland.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
---
This patch defines two new GSO definitions, SKB_GSO_IPXIP4 and
SKB_GSO_IPXIP6, along with the corresponding NETIF_F_GSO_IPXIP4 and
NETIF_F_GSO_IPXIP6. These are used to describe IP-in-IP tunnels and
what the outer protocol is. The inner protocol can be deduced from
other GSO types (e.g. SKB_GSO_TCPV4 and SKB_GSO_TCPV6). The GSO types
SKB_GSO_IPIP and SKB_GSO_SIT are removed (these are both instances of
SKB_GSO_IPXIP4). SKB_GSO_IPXIP6 will be used when support for GSO with
IP encapsulation over IPv6 is added.
Signed-off-by: Tom Herbert <tom@herbertland.com>
Acked-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
---
In several gso_segment functions there are checks of gso_type against
a seemingly arbitrary list of SKB_GSO_* flags. This seems like an
attempt to identify unsupported GSO types, but since the stack is
the one that set these GSO types in the first place this seems
unnecessary to do. If a combination isn't valid in the first
place, the stack should not allow setting it.
This is a code simplification, especially for adding new GSO types.
Signed-off-by: Tom Herbert <tom@herbertland.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
---
Commit da47b4572056 ("phy: add support for a reset-gpio specification")
causes the following xtensa qemu crash according to Guenter Roeck:
[ 9.366256] libphy: ethoc-mdio: probed
[ 9.367389] (null): could not attach to PHY
[ 9.368555] (null): failed to probe MDIO bus
[ 9.371540] Unable to handle kernel paging request at virtual address 0000001c
[ 9.371540] pc = d0320926, ra = 903209d1
[ 9.375358] Oops: sig: 11 [#1]
This reverts commit da47b4572056487fd7941c26f73b3e8815ff712a.
Reported-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Fabio Estevam <fabio.estevam@nxp.com>
Acked-by: Florian Fainelli <f.fainelli@gmail.com>
Tested-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
---
A domain with a frontend that does not implement a control ring has been
seen to cause a crash during domain save. This was apparently because
the call to xenvif_deinit_hash() in xenvif_disconnect_ctrl() is made
regardless of whether a control ring was connected, and hence
regardless of whether xenvif_init_hash() was called.
This patch brings the call to xenvif_deinit_hash() in
xenvif_disconnect_ctrl() inside the if clause that checks whether the
control ring event channel was connected. This is sufficient to ensure
it is only called if xenvif_init_hash() was called previously.
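A sketch of the guarded teardown, assuming the event-channel irq is
the connect indicator (field names assumed):

  void xenvif_disconnect_ctrl(struct xenvif *vif)
  {
          if (vif->ctrl_irq) {
                  xenvif_deinit_hash(vif);
                  unbind_from_irqhandler(vif->ctrl_irq, vif);
                  vif->ctrl_irq = 0;
          }
          /* ... unmap the control ring, etc. ... */
  }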
Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
Reported-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Tested-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Wei Liu <wei.liu2@citrix.com>
Acked-by: Wei Liu <wei.liu2@citrix.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
---
When CONFIG_NET_CLS_ACT is disabled, we get a new warning in the mlx5
ethernet driver because the tc_for_each_action() loop never references
the iterator:
mellanox/mlx5/core/en_tc.c: In function 'mlx5e_stats_flower':
mellanox/mlx5/core/en_tc.c:431:20: error: unused variable 'a' [-Werror=unused-variable]
struct tc_action *a;
This changes the dummy tc_for_each_action() macro by adding a
cast to void, letting the compiler know that the variable is
intentionally declared but not used here. I could not come up
with a nicer workaround, but this seems to do the trick.
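A sketch of the fixed dummy macro (the exact body upstream may differ
slightly):

  /* CONFIG_NET_CLS_ACT=n: never iterates, but now references _a */
  #define tc_for_each_action(_a, _exts) \
          for (; (void)(_a), 0; )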
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Fixes: aad7e08d39bd ("net/mlx5e: Hardware offloaded flower filter statistics support")
Fixes: 00175aec941e ("net/sched: Macro instead of CONFIG_NET_CLS_ACT ifdef")
Acked-By: Amir Vadai <amir@vadai.me>
Signed-off-by: David S. Miller <davem@davemloft.net>
---
Start address randomization and blinding in BPF currently use
prandom_u32(). prandom_u32() values are not exposed to unprivileged
user space to my knowledge, but given that other kernel facilities such
as ASLR, stack canaries, etc. make use of the stronger get_random_int(),
we had better make use of it here as well, given blinding successively
requests new random values. get_random_int() has minimal entropy pool
depletion; it is not cryptographically secure, but doesn't need to be
for our use cases here.
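For illustration, the randomization site then looks roughly like this
(a sketch modelled on bpf_jit_binary_alloc(); exact context may
differ):

  /* pick a random, aligned start offset within the spare room */
  hole = min_t(unsigned int, size - (proglen + sizeof(*hdr)),
               PAGE_SIZE - sizeof(*hdr));
  start = (get_random_int() % hole) & ~(alignment - 1);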
Suggested-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
---
The driver incorrectly uses dma_unmap_addr_set() to set
a variable which is in truth a dma_addr_t
[i.e. not defined using DEFINE_DMA_UNMAP_ADDR()] and is
used by driver flows other than unmapping
physical addresses. This patch fixes the driver fastpath
where CONFIG_NEED_DMA_MAP_STATE is not set: in that configuration
dma_unmap_addr_set() compiles to a no-op, so the address was never
stored.
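The failure mode follows from the stub that is used when
CONFIG_NEED_DMA_MAP_STATE is not set:

  /* include/linux/dma-mapping.h (CONFIG_NEED_DMA_MAP_STATE=n) */
  #define dma_unmap_addr_set(PTR, ADDR_NAME, VAL)  do { } while (0)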
Signed-off-by: Manish Chopra <manish.chopra@qlogic.com>
Signed-off-by: Yuval Mintz <Yuval.Mintz@qlogic.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
---
In my last commit I mistakenly replaced MACSEC_SA_ATTR_KEYID with
MACSEC_SA_ATTR_KEY.
Fixes: 8acca6acebd0 ("macsec: key identifier is 128 bits, not 64")
Signed-off-by: Sabrina Dubroca <sd@queasysnail.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
---
The length checks on the grant table copy_ops for setting hash key and
hash mapping are checking the local 'len' value which is correct in
the case of the former but not the latter. This was picked up by
static analysis checks.
This patch replaces checks of 'len' with 'copy_op.len' in both cases
to correct the incorrect check, keep the two checks consistent, and to
make it clear what the checks are for.
Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Cc: Wei Liu <wei.liu2@citrix.com>
Acked-by: Wei Liu <wei.liu2@citrix.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
---
Since e7f4dc3536a ("mdio: Move allocation of interrupts into core"),
platforms which call fixed_phy_add() before fixed_mdio_bus_init() is
called (for example, because the platform code and the fixed_phy driver
use the same initcall level) crash in fixed_phy_add() since the
->mii_bus is not allocated.
Also since e7f4dc3536a, these interrupts are initialized to polling by
default. The few (old) platforms which directly use fixed_phy_add()
from their platform code all pass PHY_POLL for the irq argument, so we
can keep these platforms not crashing by simply not attempting to set
the irq if PHY_POLL is passed.
Also, even though problems have not been reported on more modern
platforms, which use fixed_phy_register() from drivers' probe functions,
we return -EPROBE_DEFER if the MDIO bus is not yet registered so that
the probe is retried later.
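A sketch of the two guards (exact placement in fixed_phy_add() and
fixed_phy_register() assumed):

  /* in fixed_phy_add(): don't touch the irq table for PHY_POLL */
  if (irq != PHY_POLL)
          fmb->mii_bus->irq[phy_addr] = irq;

  /* in fixed_phy_register(): defer until the fixed MDIO bus exists */
  if (!fmb->mii_bus || fmb->mii_bus->state != MDIOBUS_REGISTERED)
          return ERR_PTR(-EPROBE_DEFER);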
Fixes: e7f4dc3536a400 ("mdio: Move allocation of interrupts into core")
Signed-off-by: Rabin Vincent <rabinv@axis.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
---
This reverts commit e00be9e4d0ffcc0121606229f0aa4b246d6881d7.
It causes warnings and has several problems.
Signed-off-by: David S. Miller <davem@davemloft.net>
---
Antonio Quartulli says:
====================
During the Wireless Battle Mesh v9 in Porto (PT) at the beginning of
May, we managed to uncover and fix some important bugs in our
new B.A.T.M.A.N. V algorithm. These are the fixes we came up with
together with others that I collected in the past weeks:
- avoid potential crash due to NULL pointer dereference in
B.A.T.M.A.N. V routine when a neigh_ifinfo object is not found, by
Sven Eckelmann
- avoid use-after-free of skb when counting outgoing bytes, by Florian
Westphal
- fix neigh_ifinfo object reference counting imbalance when using
B.A.T.M.A.N. V, by Sven Eckelmann. Such imbalance may lead to the
impossibility of releasing the related netdev object on shutdown
- avoid invalid memory access in case of error while allocating
bcast_own_sum when a new hard-interface is added, by Sven Eckelmann
- ensure originator address is updated in OGM/ELP packet content upon
primary interface address change, by Antonio Quartulli
- fix integer overflow when computing TQ metric (B.A.T.M.A.N. IV), by
Sven Eckelmann
- avoid race condition while adding new neigh_node which would result
in having two objects mapping to the same physical neighbour, by
Linus Lüssing
- ensure originator address is initialized in ELP packet content on
secondary interfaces, by Marek Lindner
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
---
This is a patch to clean checkpatch warnings and errors
in the Space.c file.
Clean up the following warnings and errors.
WARNING:
* Block comments use * on subsequent lines
* Missing a blank line after declarations
* networking block comments don't use an empty /* line, use /*
* please, no space before tabs
* please, no spaces at the start of a line
* line over 80 characters
ERROR:
* code indent should use tabs where possible
* space prohibited after that open parenthesis '('
Signed-off-by: Amit Ghadge <amitg.b14@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
---
Eric Dumazet says:
====================
net: block BH in TCP callbacks
Four layers using the TCP stack were assuming sk_callback_lock could
be locked using read_lock() in their handlers, because the TCP stack
was running with BH disabled.
This is no longer the case. Since presumably the rest could
also depend on BH being disabled, just use read_lock_bh().
Then each layer might consider switching to RCU protection
and no longer depend on BH.
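As a sketch, an affected callback then takes the form (handler name
hypothetical):

  static void xyz_data_ready(struct sock *sk)
  {
          read_lock_bh(&sk->sk_callback_lock);
          /* ... handler body, safe even when called from process context ... */
          read_unlock_bh(&sk->sk_callback_lock);
  }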
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
---
TCP stack can now run from process context.
Use read_lock_bh(&sk->sk_callback_lock) variant to restore previous
assumption.
Fixes: 5413d1babe8f ("net: do not block BH while processing socket backlog")
Fixes: d41a69f1d390 ("tcp: make tcp_sendmsg() aware of socket backlog")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Jon Maloy <jon.maloy@ericsson.com>
Cc: Ying Xue <ying.xue@windriver.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
---
TCP stack can now run from process context.
Use read_lock_bh(&sk->sk_callback_lock) variant to restore previous
assumption.
Fixes: 5413d1babe8f ("net: do not block BH while processing socket backlog")
Fixes: d41a69f1d390 ("tcp: make tcp_sendmsg() aware of socket backlog")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
---
TCP stack can now run from process context.
Use read_lock_bh() variant to restore previous assumption.
Fixes: 5413d1babe8f ("net: do not block BH while processing socket backlog")
Fixes: d41a69f1d390 ("tcp: make tcp_sendmsg() aware of socket backlog")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
---
iscsi_sw_tcp_data_ready() and iscsi_sw_tcp_state_change() were
using read_lock(&sk->sk_callback_lock), which is fine if the caller
has disabled BH.
The TCP stack no longer has this requirement and can run from
process context.
Use the read_lock_bh() variant to restore the previous assumption.
Ideally this code could use RCU instead...
Ideally this code could use RCU instead...
Fixes: 5413d1babe8f ("net: do not block BH while processing socket backlog")
Fixes: d41a69f1d390 ("tcp: make tcp_sendmsg() aware of socket backlog")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Mike Christie <michaelc@cs.wisc.edu>
Cc: Venkatesh Srinivas <venkateshs@google.com>
Acked-by: Mike Christie <michaelc@cs.wisc.edu>
Signed-off-by: David S. Miller <davem@davemloft.net>
---
(!count || count < 4) is always true.
So let's remove this code, which has been dead since at least 2005.
Signed-off-by: Heinrich Schuchardt <xypron.glpk@gmx.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
---
(a && a > 0) is equivalent to (a > 0).
Signed-off-by: Heinrich Schuchardt <xypron.glpk@gmx.de>
Acked-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
---
Only dereference variable self after checking it is not NULL.
Signed-off-by: Heinrich Schuchardt <xypron.glpk@gmx.de>
Signed-off-by: David S. Miller <davem@davemloft.net>