Age  Commit message  Author
2015-12-16  net: Pass ndm_state to route netlink FDB notifications. (Hubert Sokolowski)
Before this change, applications monitoring FDB notifications were not able to determine whether a new FDB entry is permanent or not:
  bridge fdb add f1:f2:f3:f4:f5:f8 dev sw0p1 temp self
  bridge fdb add f1:f2:f3:f4:f5:f9 dev sw0p1 self
  bridge monitor fdb
  f1:f2:f3:f4:f5:f8 dev sw0p1 self permanent
  f1:f2:f3:f4:f5:f9 dev sw0p1 self permanent
With this change, ndm_state from the original netlink message is passed to the new netlink message sent as the notification:
  bridge fdb add f1:f2:f3:f4:f5:f6 dev sw0p1 self
  bridge fdb add f1:f2:f3:f4:f5:f7 dev sw0p1 temp self
  bridge monitor fdb
  f1:f2:f3:f4:f5:f6 dev sw0p1 self permanent
  f1:f2:f3:f4:f5:f7 dev sw0p1 self static
Signed-off-by: Hubert Sokolowski <hubert.sokolowski@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
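To illustrate the shape of the change: the notification path carries the ndm_state that the request supplied instead of hard-coding NUD_PERMANENT. A minimal sketch with a hypothetical helper (the real kernel function names and signatures differ):

    #include <linux/socket.h>
    #include <linux/netdevice.h>
    #include <linux/neighbour.h>

    /* Sketch: carry the requested state into the notification message. */
    static void fdb_notify_sketch(struct net_device *dev, u16 ndm_state)
    {
            struct ndmsg ndm = {
                    .ndm_family  = AF_BRIDGE,
                    .ndm_state   = ndm_state,   /* was hard-coded NUD_PERMANENT */
                    .ndm_ifindex = dev->ifindex,
            };
            /* ... fill the rest of the netlink message and send it ... */
            (void)ndm;
    }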
2015-12-16  Merge tag 'mac80211-for-davem-2015-12-15' of git://git.kernel.org/pub/scm/linux/kernel/git/jberg/mac80211 (David S. Miller)
Johannes Berg says: ==================== Another set of fixes: * memory leak fixes (from Ola) * operating mode notification spec compliance fix (from Eyal) * copy rfkill names in case pointer becomes invalid (myself) * two hardware restart fixes (myself) * get rid of "limiting TX power" log spam (myself) ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
2015-12-16  82xx: FCC: Fix a bug causing FCC port lock-up (Martin Roth)
The patch fixes an FCC port lock-up which occurs as a result of a bug during underrun/collision handling. Within the tx_startup() function in mac-fcc.c, the address of the last BD is not calculated correctly. As a result of the wrong calculation of the last BD address, the next transmitted BD may be set to an area outside the transmit BD ring. This causes a port lock-up, and it is not recoverable. Signed-off-by: Martin Roth <martin.roth@motorolasolutions.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2015-12-16  gianfar: Don't enable RX Filer if not supported (Hamish Martin)
After commit 15bf176db1fb ("gianfar: Don't enable the Filer w/o the Parser"), 'TSEC' model controllers (for example as seen on MPC8541E) always have 8 bytes stripped from the front of received frames. Only 'eTSEC' gianfar controllers have the RX Filer capability (amongst other enhancements). Previously this was treated as always enabled for both 'TSEC' and 'eTSEC' controllers. In commit 15bf176db1fb ("gianfar: Don't enable the Filer w/o the Parser") a subtle change was made to the setting of 'uses_rxfcb' to effectively always set it (since 'rx_filer_enable' was always true). This had the side-effect of always stripping 8 bytes from the front of received frames on 'TSEC' type controllers. We now only enable the RX Filer capability on controller types that support it, thereby avoiding the issue for 'TSEC' type controllers. Reviewed-by: Chris Packham <chris.packham@alliedtelesis.co.nz> Reviewed-by: Mark Tomlinson <mark.tomlinson@alliedtelesis.co.nz> Signed-off-by: Hamish Martin <hamish.martin@alliedtelesis.co.nz> Reviewed-by: Claudiu Manoil <claudiu.manoil@freescale.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2015-12-16  dma-debug: Fix dma_debug_entry offset calculation (Daniel Mentz)
dma-debug uses struct dma_debug_entry to keep track of dma coherent memory allocation requests. The virtual address is converted into a pfn and an offset. Previously, the offset was calculated using an incorrect bit mask. As a result, we saw incorrect error messages from dma-debug like the following: "DMA-API: exceeded 7 overlapping mappings of cacheline 0x03e00000" Cacheline 0x03e00000 does not exist on our platform. Cc: <stable@vger.kernel.org> Fixes: 0abdd7a81b7e ("dma-debug: introduce debug_dma_assert_idle()") Signed-off-by: Daniel Mentz <danielmentz@google.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
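The pfn/offset split dma-debug needs is the standard one; a minimal sketch of the intended calculation (illustrative, not the literal patch):

    #include <linux/mm.h>

    /* Split a coherent allocation's virtual address into pfn + offset.
     * The offset must keep only the low bits inside the page (~PAGE_MASK);
     * masking with PAGE_MASK instead keeps the wrong bits and produces
     * bogus cacheline numbers in the overlap tracking.
     */
    static void dma_debug_split_vaddr(void *virt, unsigned long *pfn,
                                      size_t *offset)
    {
            *pfn    = page_to_pfn(virt_to_page(virt));
            *offset = (size_t)virt & ~PAGE_MASK;
    }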
2015-12-16  Merge branch 'fixes' of git://ftp.arm.linux.org.uk/~rmk/linux-arm (Linus Torvalds)
Pull ARM fixes from Russell King: "Further ARM fixes: - Anson Huang noticed that we were corrupting a register we shouldn't be during suspend on some CPUs. - Shengjiu Wang spotted a bug in the 'swp' instruction emulation. - Will Deacon fixed a bug in the ASID allocator. - Laura Abbott fixed the kernel permission protection to apply to all threads running in the system. - I've fixed two bugs with the domain access control register handling, one to do with printing an appropriate value at oops time, and the other to further fix the uaccess_with_memcpy code" * 'fixes' of git://ftp.arm.linux.org.uk/~rmk/linux-arm: ARM: 8475/1: SWP emulation: Restore original *data when failed ARM: 8471/1: need to save/restore arm register(r11) when it is corrupted ARM: fix uaccess_with_memcpy() with SW_DOMAIN_PAN ARM: report proper DACR value in oops dumps ARM: 8464/1: Update all mm structures with section adjustments ARM: 8465/1: mm: keep reserved ASIDs in sync with mm after multiple rollovers
2015-12-16  net: fix warnings in 'make htmldocs' by moving macro definition out of field declaration (Hannes Frederic Sowa)
Docbook does not like the definition of macros inside a field declaration and adds a warning. Move the definition out. Fixes: 79462ad02e86180 ("net: add validation for the socket syscall protocol argument") Reported-by: kbuild test robot <lkp@intel.com> Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2015-12-16  rhashtable: Fix walker list corruption (Herbert Xu)
The commit ba7c95ea3870fe7b847466d39a049ab6f156aa2c ("rhashtable: Fix sleeping inside RCU critical section in walk_stop") introduced a new spinlock for the walker list. However, it did not convert all existing users of the list over to the new spinlock. Some continued to use the old mutex for this purpose. This obviously led to corruption of the list. The fix is to use the spinlock everywhere we touch the list. This also allows us to do rcu_read_lock before we take the lock in rhashtable_walk_start. With the old mutex this would've deadlocked, but it's safe with the new spinlock. Fixes: ba7c95ea3870 ("rhashtable: Fix sleeping inside RCU...") Reported-by: Colin Ian King <colin.king@canonical.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: David S. Miller <davem@davemloft.net>
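The shape of the fix, sketched (the walker list actually hangs off the bucket table; field names here are approximate): every list add/remove takes the same spinlock, which, unlike the old mutex, may also be taken under rcu_read_lock() in rhashtable_walk_start():

    /* Sketch only: guard every walker-list access with ht->lock. */
    spin_lock(&ht->lock);
    list_add(&walker->list, &tbl->walkers);
    spin_unlock(&ht->lock);

    /* ... and in the walk-start path: */
    rcu_read_lock();
    spin_lock(&ht->lock);
    list_del(&walker->list);   /* detach or re-attach the walker as needed */
    spin_unlock(&ht->lock);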
2015-12-16  Merge tag 'batman-adv-for-davem' of git://git.open-mesh.org/linux-merge (David S. Miller)
Antonio Quartulli says: ==================== Included changes: - change my email in MAINTAINERS and Doc files - create and export list of single hop neighs per interface - protect CRC in the BLA code by means of its own lock - minor fixes and code cleanups ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
2015-12-16  Merge branch 'geneve-udp-port-offload' (David S. Miller)
Anjali Singhai Jain says: ==================== Add support for Geneve udp port offload
This patch series adds new ndo ops for Geneve add/del port, so as to help offload Geneve tunnel functionalities such as RX checksum, RSS, filters etc. The i40e driver has been tested with the changes to make sure the offloads happen. We do understand that this is not the ideal solution and most likely it will be redone with a more generic offload framework, but this certainly will enable us to start seeing the benefits of the accelerations for Geneve tunnels. As a side note, we did find an existing issue in the i40e driver where a service task can modify tunnel data structures with no locks held to help linearize access. A separate patch will take care of that issue. A question out to the community is regarding the driver Kconfig parameters for VxLAN and Geneve: it would be ideal to drop those if there is a way to resolve the vxlan/geneve_get_rx_port symbols while the tunnel modules are not loaded.
Performance numbers, with the offloads enabled on X722 devices, remote checksum enabled and no other tuning in terms of cpu governor etc on my test machine:
With offload: Throughput: 5527 Mbits/sec with a single thread; %cpu: ~43% per core with 4 threads
Without offload: Throughput: 2364 Mbits/sec with a single thread; %cpu: ~99% per core with 4 threads
These numbers will get better for X722 as it is still being worked on, but this does bring out the delta between the stack being notified with csum_level 1 and CHECKSUM_UNNECESSARY versus not, without the RX offload. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
2015-12-16  i40e: Call geneve_get_rx_port to get the existing Geneve ports (Singhai, Anjali)
This patch adds a call to geneve_get_rx_port in i40e so that when it comes up it can learn about the existing geneve tunnels. Signed-off-by: Anjali Singhai Jain <anjali.singhai@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2015-12-16  geneve: Add geneve_get_rx_port support (Singhai, Anjali)
This patch adds an op that the drivers can call into to get existing geneve ports. Signed-off-by: Anjali Singhai Jain <anjali.singhai@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2015-12-16  i40e: Kernel dependency update for i40e to support geneve offload (Singhai, Anjali)
Update the Kconfig file with dependency for supporting GENEVE tunnel offloads. Signed-off-by: Anjali Singhai Jain <anjali.singhai@intel.com> Signed-off-by: Kiran Patil <kiran.patil@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2015-12-16  i40e: geneve tunnel offload support (Singhai, Anjali)
This patch adds driver hooks to implement ndo_ops to add/del udp port in the HW to identify GENEVE tunnels. Signed-off-by: Anjali Singhai Jain <anjali.singhai@intel.com> Signed-off-by: Kiran Patil <kiran.patil@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2015-12-16  geneve: Add geneve udp port offload for ethernet devices (Singhai, Anjali)
Add ndo_ops to add/del UDP ports to a device that supports geneve offload. v2: Comment fix. Signed-off-by: Anjali Singhai Jain <anjali.singhai@intel.com> Signed-off-by: Kiran Patil <kiran.patil@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
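A hedged sketch of how a NIC driver wires these up; the ndo member names follow the series' description, but treat the exact signatures as an assumption, and the "foo" driver is hypothetical:

    #include <linux/netdevice.h>

    static void foo_add_geneve_port(struct net_device *dev,
                                    sa_family_t sa_family, __be16 port)
    {
            /* program 'port' into the HW tunnel-parsing table */
    }

    static void foo_del_geneve_port(struct net_device *dev,
                                    sa_family_t sa_family, __be16 port)
    {
            /* remove 'port' from the HW tunnel-parsing table */
    }

    static const struct net_device_ops foo_netdev_ops = {
            /* ... */
            .ndo_add_geneve_port = foo_add_geneve_port,
            .ndo_del_geneve_port = foo_del_geneve_port,
    };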
2015-12-16  net: sctp: dynamically enable or disable pf state (Zhu Yanjun)
As we all know, setting pf_retrans >= max_retrans_path disables the pf state. The pf_retrans and max_retrans_path variables can be changed by a userspace application. Sometimes the user expects the pf state to stay disabled even while the two variables are changed to values that would enable it, so it is necessary to introduce a new variable to disable the pf state. Following the suggestions from Vlad Yasevich, extra1 and extra2 are removed and the initialization of pf_enable is added. Acked-by: Vlad Yasevich <vyasevich@gmail.com> Signed-off-by: Zhu Yanjun <zyjzyj2000@gmail.com> Acked-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2015-12-16  rhashtable: Enforce minimum size on initial hash table (Herbert Xu)
William Hua <william.hua@canonical.com> wrote: > > I wasn't aware there was an enforced minimum size. I simply set the > nelem_hint in the rhashtable_params struct to 1, expecting it to grow as > needed. This caused a segfault afterwards when trying to insert an > element. OK, we're doing the size computation before we enforce the limit on min_size. ---8<--- We need to do the initial hash table size computation after we have obtained the correct min_size/max_size parameters. Otherwise we may end up with a hash table whose size is outside the allowed envelope. Fixes: a998f712f77e ("rhashtable: Round up/down min/max_size to...") Reported-by: William Hua <william.hua@canonical.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: David S. Miller <davem@davemloft.net>
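For reference, the initial size is derived from the rhashtable_params passed at init time; the fix clamps the nelem_hint-derived size into [min_size, max_size] before the first allocation. A minimal usage sketch:

    #include <linux/rhashtable.h>

    struct my_obj {
            u32               key;
            struct rhash_head node;
    };

    static const struct rhashtable_params my_params = {
            .key_len     = sizeof(u32),
            .key_offset  = offsetof(struct my_obj, key),
            .head_offset = offsetof(struct my_obj, node),
            .nelem_hint  = 1,  /* tiny hint; must still be rounded up to min_size */
    };

    /* err = rhashtable_init(&ht, &my_params); */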
2015-12-16  batman-adv: lock crc access in bridge loop avoidance (Simon Wunderlich)
We have found some networks in which nodes were constantly requesting other nodes' BLA claim tables to synchronize, just to ask for that again once completed. The reason was that the crc checksum of the asked nodes was out of sync due to missing locking and multiple writes to the same crc checksum when adding/removing entries. Therefore the asked nodes constantly reported the wrong crc, which caused repeating requests. To avoid multiple functions changing a backbone gateway's crc entry at the same time, lock it using a spinlock. Signed-off-by: Simon Wunderlich <sw@simonwunderlich.de> Tested-by: Alfons Name <AlfonsName@web.de> Signed-off-by: Marek Lindner <mareklindner@neomailbox.ch> Signed-off-by: Antonio Quartulli <antonio@meshcoding.com>
2015-12-16  batman-adv: Fix typo 'wether' -> 'whether' (Sven Eckelmann)
Signed-off-by: Sven Eckelmann <sven@narfation.org> Signed-off-by: Marek Lindner <mareklindner@neomailbox.ch> Signed-off-by: Antonio Quartulli <antonio@meshcoding.com>
2015-12-16  batman-adv: Use chain pointer when purging fragments (Sven Eckelmann)
The chain pointer was already created in batadv_frag_purge_orig to make the checks more readable. Just use the chain pointer everywhere instead of having the same dereference + array access in most lines of this function. Signed-off-by: Sven Eckelmann <sven@narfation.org> Acked-by: Martin Hundebøll <martin@hundeboll.net> Signed-off-by: Marek Lindner <mareklindner@neomailbox.ch> Signed-off-by: Antonio Quartulli <antonio@meshcoding.com>
2015-12-16  batman-adv: unify flags access style in tt global add (Simon Wunderlich)
This should slightly improve readability. Signed-off-by: Simon Wunderlich <sw@simonwunderlich.de> Signed-off-by: Marek Lindner <mareklindner@neomailbox.ch> Signed-off-by: Antonio Quartulli <antonio@meshcoding.com>
2015-12-16  batman-adv: detect local excess vlans in TT request (Simon Wunderlich)
If the local representation of the global TT table of one originator has more VLAN entries than the respective TT update, there is some inconsistency present. By detecting and reporting this inconsistency, the global table gets updated and the excess VLAN will get removed in the process. Reported-by: Alessandro Bolletta <alessandro@mediaspot.net> Signed-off-by: Simon Wunderlich <sw@simonwunderlich.de> Acked-by: Antonio Quartulli <antonio@meshcoding.com> Signed-off-by: Marek Lindner <mareklindner@neomailbox.ch> Signed-off-by: Antonio Quartulli <antonio@meshcoding.com>
2015-12-16  inet: tcp: fix inetpeer_set_addr_v4() (Eric Dumazet)
David Ahern added a vif field in the a4 part of the inetpeer_addr struct. This broke the IPv4 TCP fast open client side and, more generally, the tcp metrics cache, because inetpeer_addr_cmp() now compares two u32s instead of one. inetpeer_set_addr_v4() needs to properly init the vif field, otherwise the comparison result depends on uninitialized data. Fixes: 192132b9a034 ("net: Add support for VRFs to inetpeer cache") Reported-by: Yuchung Cheng <ycheng@google.com> Signed-off-by: Eric Dumazet <edumazet@google.com> Cc: Neal Cardwell <ncardwell@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
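The essence of the fix, sketched (struct layout abbreviated): initialize vif so the comparison never sees uninitialized bytes:

    /* Sketch of the fixed v4 setter. */
    static inline void inetpeer_set_addr_v4(struct inetpeer_addr *iaddr,
                                            __be32 ip)
    {
            iaddr->a4.addr = ip;
            iaddr->a4.vif  = 0;       /* previously left uninitialized */
            iaddr->family  = AF_INET;
    }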
2015-12-15  ipv6: automatically enable stable privacy mode if stable_secret set (Hannes Frederic Sowa)
Bjørn reported that while we switch all interfaces to privacy stable mode when setting the secret, we don't set this mode for new interfaces. This does not make sense, so change this behaviour. Fixes: 622c81d57b392cc ("ipv6: generation of stable privacy addresses for link-local and autoconf") Reported-by: Bjørn Mork <bjorn@mork.no> Cc: Bjørn Mork <bjorn@mork.no> Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2015-12-15  sctp: use GFP_KERNEL in sctp_init() (Eric Dumazet)
Module init functions are called from process context, so we had better use GFP_KERNEL allocations to increase our chances of getting the high-order pages we want for the SCTP hash tables. This mostly matters if the SCTP module is loaded after memory has become fragmented. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2015-12-15  Merge branch 'sock-diag-destroy' (David S. Miller)
Lorenzo Colitti says: ==================== Support administratively closing application sockets
This patchset adds the ability to administratively close a socket without any action from the process owning the socket or the socket protocol. It implements this by adding a new diag_destroy function pointer to struct proto. In-kernel callers can access this functionality directly by calling sk->sk_prot->diag_destroy(sk, err). It also exposes this functionality to userspace via a new SOCK_DESTROY operation in NETLINK_SOCK_DIAG sockets.
This allows a privileged userspace process, such as a connection manager or system administration tool, to close sockets belonging to other apps when the network they were established on has disconnected. It is needed on laptops and mobile hosts to ensure that network switches / disconnects do not result in applications being blocked for long periods of time (minutes) in read or connect calls on TCP sockets that will never succeed because the IP address they are bound to is no longer on the system. Closing the sockets causes these calls to fail fast and allows the apps to reconnect on another network. Userspace intervention is necessary because in many cases the kernel does not have enough information to know that a connection is now inoperable. The kernel can know if a packet can't be routed, but in general it won't know if a TCP connection is stuck because it is now routed to a network where its source address is no longer valid [5][6].
Many other operating systems offer similar functionality:
- FreeBSD has had this since 5.4 in 2005 [2]. It is available to privileged userspace and there is a tool to use it [3]. The FreeBSD commit description states that the idea came from OpenBSD.
- iOS has been administratively closing app sockets since iOS 4 (see [4], which states that a socket "might get reclaimed by the kernel" and after that will return EBADF).
- For many years Android kernels have supported this via an out-of-tree SIOCKILLADDR ioctl that is called when a network disconnects or an RTM_DELADDR event is received. This solution is cleaner, more robust and more flexible: the connection manager can iterate over all connections on the deleted IP address and close all of them. It can also be used to close all sockets opened by a given app process (for example if the user has restricted that app from using the network), or to close all of a user's TCP connections if a secure network such as a VPN has connected and security policy requires all of an application's connections to be routed via the VPN, etc.
Alternative schemes, such as TCP keepalives in combination with "iptables -j REJECT --reject-with tcp-reset", could be used to achieve similar results, but on mobile devices TCP keepalives are very expensive, and in such a scheme detecting stuck connections has to wait for a keepalive to be sent or the application to perform a write. An explicit notification from userspace is cheaper and faster in the common case where an application is blocked on read.
SOCK_DESTROY is placed behind an INET_DIAG_DESTROY configuration option, which is currently off by default.
The TCP implementation of diag_destroy causes a TCP ABORT as specified by RFC 793 [1]: immediately send a RST and clear local connection state. This is what happens today if an application enables SO_LINGER with a timeout of 0 and then calls close. The first versions of the patchset did not send a RST, but that is not graceful/correct TCP behaviour. tcp_abort now does a proper RFC 793 ABORT and sends a RST to the peer. This is consistent with BSD's tcpdrop, and is more correct in general, even though in many use cases tcp_abort will only be called when sending a RST is no longer possible (e.g., the network has disconnected). The original patchset also behaved like SIOCKILLADDR and closed TCP sockets with ETIMEDOUT. Tom Herbert pointed out that it would be better if applications could distinguish between a timeout and an administrative close; ECONNABORTED was chosen because it is consistent with BSD.
[1] http://tools.ietf.org/html/rfc793#page-50
[2] http://svnweb.freebsd.org/base?view=revision&revision=141381
[3] https://www.freebsd.org/cgi/man.cgi?query=tcpdrop&sektion=8&manpath=FreeBSD+5.4-RELEASE
[4] https://developer.apple.com/library/ios/technotes/tn2277/_index.html#//apple_ref/doc/uid/DTS40010841-CH1-SUBSECTION3
[5] http://www.spinics.net/lists/netdev/msg352775.html
[6] http://www.spinics.net/lists/netdev/msg352952.html
==================== Signed-off-by: David S. Miller <davem@davemloft.net>
2015-12-15  net: diag: Support destroying TCP sockets. (Lorenzo Colitti)
This implements SOCK_DESTROY for TCP sockets. It causes all blocking calls on the socket to fail fast with ECONNABORTED and causes a protocol close of the socket. It informs the other end of the connection by sending a RST, i.e., initiating a TCP ABORT as per RFC 793. ECONNABORTED was chosen for consistency with FreeBSD. Signed-off-by: Lorenzo Colitti <lorenzo@google.com> Acked-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2015-12-15  net: diag: Support SOCK_DESTROY for inet sockets. (Lorenzo Colitti)
This passes the SOCK_DESTROY operation to the underlying protocol diag handler, or returns -EOPNOTSUPP if that handler does not define a destroy operation. Most of this patch is just renaming functions. This is not strictly necessary, but it would be fairly counterintuitive to have the code to destroy inet sockets be in a function whose name starts with inet_diag_get. Signed-off-by: Lorenzo Colitti <lorenzo@google.com> Acked-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2015-12-15  net: diag: Add the ability to destroy a socket. (Lorenzo Colitti)
This patch adds a SOCK_DESTROY operation, a destroy function pointer to sock_diag_handler, and a diag_destroy function pointer. It does not include any implementation code. Signed-off-by: Lorenzo Colitti <lorenzo@google.com> Acked-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
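Conceptually, the plumbing looks like this (a sketch; the surrounding code is an assumption, only the int (*)(struct sock *, int) shape of the hook comes from the series):

    /* struct proto gains:  int (*diag_destroy)(struct sock *sk, int err); */

    /* In-kernel callers can then invoke it directly: */
    static int destroy_sock_sketch(struct sock *sk)
    {
            if (!sk->sk_prot->diag_destroy)
                    return -EOPNOTSUPP;
            return sk->sk_prot->diag_destroy(sk, ECONNABORTED);
    }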
2015-12-15  net: diag: split inet_diag_dump_one_icsk into two (Lorenzo Colitti)
Currently, inet_diag_dump_one_icsk finds a socket and then dumps its information to userspace. Split it into a part that finds the socket and a part that dumps the information. Signed-off-by: Lorenzo Colitti <lorenzo@google.com> Acked-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2015-12-15  Merge branch 'ila-early-demux' (David S. Miller)
Tom Herbert says: ==================== ila: Optimization to preserve value of early demux
In the current implementation of ILA, LWT is used to perform translation on both the input and output paths. This is functional, however there is a big performance hit in the receive path. Early demux occurs before the routing lookup (a hit actually obviates the route lookup), but translation currently happens after early demux, so a local connection with ILA addresses is never matched by early demux. Note that this issue is not just with ILA; pretty much any translated or encapsulated packet handled by LWT misses the opportunity for early demux. Solving the general problem seems non-trivial, since we would need to move the route lookup before early demux, thereby mitigating its value.
This patch set addresses the issue for ILA by adding a fast locator lookup that occurs before early demux. This is done by hooking into NF_INET_PRE_ROUTING. For the backend we implement an rhashtable that contains identifier-to-locator mappings. The table also allows more specific matches that include the original locator and interface.
This patch set:
- Adds an rhashtable function to atomically replace an element. This is useful to implement sub-trees from a table entry without needing to use a special anchor structure as the table entry.
- Adds a start callback for starting a netlink dump.
- Creates an ila directory under net/ipv6 and moves ila.c to it. ila.c is split into ila_common.c and ila_lwt.c.
- Implements a table to do identifier->locator mapping. This is an rhashtable (in ila_xlat.c).
- Adds configuration for the table with netlink.
- Adds a hook into NF_INET_PRE_ROUTING to perform ILA translation before early demux.
Changes in v2: - Use iptables targets instead of a new xfrm function
Changes in v3: - Add __rcu to next pointer in struct ila_map
Changes in v4: - Use hook for NF_INET_PRE_ROUTING
Changes in v5: - Register hooks per namespace using nf_register_net_hooks - Only register hooks when the first mapping is actually added
Changes in v6: - Remove gfp argument in alloc_ila_locks, it is unnecessary - Set registered_hooks properly when hooks are registered
Testing: Running 200 netperf TCP_RR streams
No ILA, baseline: 79.26% CPU utilization, 1678282 tps, 104/189/390 50/90/99% latencies
ILA before fix (LWT on both input and output): 81.91% CPU utilization, 1464723 tps (-14.5% from baseline), 121/215/411 50/90/99% latencies
ILA after fix: 80.62% CPU utilization, 1622985 tps (-3.4% from baseline), 110/191/347 50/90/99% latencies
==================== Signed-off-by: David S. Miller <davem@davemloft.net>
2015-12-15  ila: Add generic ILA translation facility (Tom Herbert)
This patch implements an ILA translation table. This table can be configured with identifier-to-locator mappings, and can be queried to resolve a mapping. Queries can be parameterized based on interface, direction (incoming or outgoing), and matching locator. The table is implemented using rhashtable and is configured via netlink (through "ip ila .." in iproute). The table may be used as an alternative means to do ILA translations other than the LWT tunnels. Signed-off-by: Tom Herbert <tom@herbertland.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2015-12-15  netlink: add a start callback for starting a netlink dump (Tom Herbert)
The start callback allows the caller to set up a context for the dump callbacks. Presumably, the context can then be destroyed in the done callback. Signed-off-by: Tom Herbert <tom@herbertland.com> Signed-off-by: David S. Miller <davem@davemloft.net>
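A hedged example of a dumper using the new callback via netlink_dump_control (other members and error handling elided; the my_* names are illustrative):

    static int my_dump_start(struct netlink_callback *cb)
    {
            /* allocate/parse per-dump context, stash it in cb->args[] */
            return 0;
    }

    static int my_dump(struct sk_buff *skb, struct netlink_callback *cb)
    {
            return 0;   /* emit records */
    }

    static int my_done(struct netlink_callback *cb)
    {
            return 0;   /* free the context set up in .start */
    }

    /*
     * struct netlink_dump_control c = {
     *         .start = my_dump_start,
     *         .dump  = my_dump,
     *         .done  = my_done,
     * };
     * return netlink_dump_start(sock, skb, nlh, &c);
     */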
2015-12-15  rhashtable: add function to replace an element (Tom Herbert)
Add the rhashtable_replace_fast function. This replaces one object in the table with another atomically. The hashes of the new and old objects must be equal. Signed-off-by: Tom Herbert <tom@herbertland.com> Signed-off-by: David S. Miller <davem@davemloft.net>
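Usage mirrors the other *_fast helpers; a short sketch reusing 'ht' and 'my_params' from an existing rhashtable user (hypothetical names):

    /* Both objects must have equal keys/hashes. */
    int err = rhashtable_replace_fast(&ht, &old_obj->node,
                                      &new_obj->node, my_params);
    if (err)            /* e.g. -ENOENT if old_obj was not found */
            pr_warn("rhashtable replace failed: %d\n", err);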
2015-12-15  ila: Create net/ipv6/ila directory (Tom Herbert)
Create ila directory in preparation for supporting other hooks in the kernel than LWT for doing ILA. This includes: - Moving ila.c to ila/ila_lwt.c - Splitting out some common functions into ila_common.c Signed-off-by: Tom Herbert <tom@herbertland.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2015-12-15  Merge branch 'stmmac-mdio-compat' (David S. Miller)
Phil Reid says: ==================== stmmac: create of compatible mdio bus for stmmac driver
Provide the ability to specify a fixed phy in the device tree and retain the mdio bus if no phy is found. This is needed where a dsa is connected via a fixed phy and uses the mdio bus for config. Fixed ptp ref clock calculations for the stmmac when the ptp ref clock is running at <= 50 MHz. Also add a device tree setting to configure the ptp clk source on socfpga platforms.
Changes from V5: - Restore the behaviour of unregistering the mdio bus when no phys are found, if there was no device tree node to create the bus. - Make allocation of mdio_base_data conditional on fixed phy presence as well; this maintains existing behaviour where a fixed phy is not present.
Changes from V4: - Restore #ifdef CONFIG_OF around setting of reset_gpio. Member doesn't exist when this isn't defined.
Changes from V3: - Use if (IS_ENABLED(CONFIG_OF)) instead of #if. Reorder some code to reduce if statements. - of_mdiobus_register already falls back to mdiobus_register. - Tested on a system with CONFIG_OF.
Changes from V2: - Formatting, spaces & lines > 80 chars, using checkpatch. - Drop PTP register debugfs patch.
Changes from V1: - Fixed mismatch between doc and code for the ptp_ref_clk dt node. - Remove unit address from doc example.
==================== Signed-off-by: David S. Miller <davem@davemloft.net>
2015-12-15  stmmac: socfpga: Provide dt node to config ptp clk source. (Phil Reid)
Provides an option to use the ptp clock routed from the Altera FPGA fabric, instead of the default eosc1 clock connected to the ARM HPS core. This setting affects all emacs in the core as the ptp clock is common. Acked-by: Rob Herring <robh@kernel.org> Signed-off-by: Phil Reid <preid@electromag.com.au> Acked-by: Dinh Nguyen <dinguyen@opensource.altera.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2015-12-15  stmmac: Fix calculations for ptp counters when clock input = 50Mhz. (Phil Reid)
stmmac_config_sub_second_increment set the sub-second increment to 20ns. The driver is configured to use the fine adjustment method, where the sub-second register is incremented when the accumulator, which is incremented by the addend register, overflows. This accumulator is updated on every ptp clk cycle. If a ptp clk with a period of greater than 20ns was used, the sub-second register would not get updated correctly. Instead, set the sub-second increment to twice the period of the ptp clk. This results in the addend register being set to mid-range and the accumulator overflowing every 2 clock cycles. Signed-off-by: Phil Reid <preid@electromag.com.au> Signed-off-by: David S. Miller <davem@davemloft.net>
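A worked example for a 50 MHz ptp_ref clock (illustrative helper, not the driver's actual function):

    #include <linux/math64.h>

    /*
     * period = 20 ns, so ss_inc = 2 * 20 ns = 40 ns
     * addend = 2^32 / (ptp_clk_hz * ss_inc_sec)
     *        = 2^32 / (50e6 * 40e-9) = 2^32 / 2 = 0x80000000
     * i.e. the accumulator overflows every 2 ptp clock cycles, advancing
     * the sub-second register by 40 ns per 40 ns of real time. With the
     * old fixed 20 ns increment the required addend would be 2^32, which
     * does not fit in the 32-bit register.
     */
    static u32 ptp_addend(u32 ptp_clk_hz, u32 ss_inc_ns)
    {
            return div64_u64(1000000000ULL << 32,
                             (u64)ptp_clk_hz * ss_inc_ns);
    }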
2015-12-15  stmmac: Correct documentation on stmmac clocks. (Phil Reid)
devm_clk_get looks in the clock-names property for a matching clock; the ptp_ref_clk property is ignored. Acked-by: Rob Herring <robh@kernel.org> Signed-off-by: Phil Reid <preid@electromag.com.au> Signed-off-by: David S. Miller <davem@davemloft.net>
2015-12-15  stmmac: create of compatible mdio bus for stmmac driver (Phil Reid)
The DSA driver needs to be passed a reference to an mdio bus. Typically the mac is configured to use a fixed link, but the mdio bus still needs to be registered so that it can configure the switch. This patch follows the same process as the altera tse ethernet driver for creation of the mdio bus. Acked-by: Rob Herring <robh@kernel.org> Signed-off-by: Phil Reid <preid@electromag.com.au> Signed-off-by: David S. Miller <davem@davemloft.net>
2015-12-15  Merge branch 'end-of-ip-csum' (David S. Miller)
Tom Herbert says: ==================== net: The beginning of the end for NETIF_F_IP_CSUM and NETIF_F_IPV6_CSUM
Background: This patch set starts to address one front in the battle against protocol ossification. Protocol ossification describes the state that we have arrived at in the evolution of the Internet where we are materially limited to only using a very narrow range of protocols and protocol features. For instance, only TCP and UDP are sufficiently supported on the Internet, so that deploying alternative protocols, such as SCTP and DCCP, is a non-starter. Similarly, IP options and IPv6 extension headers are typically not considered feasible for wide deployment, so we have lost the extensibility of IP protocols. Protocol ossification is not only a problem on the Internet, but in the data center as well. A root cause of this seems to be narrow, protocol specific optimizations implemented in switches (for doing ECMP) and in NICs (NIC offloads). These tend to be performance optimizations around TCP and UDP packets, and these have become requirements to implement performant network solutions at scale. Attempts to deal with protocol ossification in the data center have yielded ad hoc, sub-optimal solutions. A main driver of foo-over-UDP (e.g. GRE/UDP, MPLS/UDP) is to leverage the existing ECMP and RSS support for UDP by setting the source port as an entropy value. This has seen some success, but the cost of additional overhead and layering limits its usefulness. An even more extreme solution is STT, where non-TCP packets are spoofed as TCP to leverage NIC offloads. This patch set endeavours to address protocol ossification caused by techniques used in transmit checksum offload for NICs. Future work will address protocol ossification in the other primary NIC offloads, namely receive checksum offload, LSO, LRO, and RSS.
NETIF_F_IP_CSUM and NETIF_F_IPV6_CSUM: NETIF_F_IP_CSUM and NETIF_F_IPV6_CSUM exemplify the problem of protocol ossification. These features are relics from a simpler time in the Internet, before encapsulation, before GRE and IPIP. Many hardware vendors only saw the need to provide checksum offload for simple UDP and TCP packets over IPv4 (IPv6 support was also an afterthought). In today's Internet and data centers, checksum offload is well established as a valuable feature, but we can no longer afford to be constrained to use a handful of protocols and features that are supported at the discretion of NIC vendors. Generic and protocol agnostic methods are needed. The actual interface that the stack uses with drivers for checksum offload is CHECKSUM_PARTIAL. This is a generic and protocol agnostic interface. A driver for a device that supports this generic interface advertises NETIF_F_HW_CSUM.
Goals of this patch set: We propose that drivers advertise NETIF_F_HW_CSUM instead of the protocol specific values NETIF_F_IP_CSUM and NETIF_F_IPV6_CSUM. If the driver's device is constrained (for instance it can only offload simple IPv4 and IPv6 packets), then these constraints can be checked in the transmit path and skb_checksum_help would be called for packets that the driver is unable to offload. In order to facilitate this, we add some helper functions that take a specification argument indicating the type of packets a device is able to offload. If a packet does not match the specification, the helper function calls skb_checksum_help.
Benefits of this approach are:
- Simplify the stack and clarify the interface for checksum offload
- Encourage NIC vendors to implement the generic, protocol agnostic checksum offload methods in hardware
- Encourage feature parity in NIC offloads for IPv4 and IPv6
Many drivers advertise NETIF_F_IP_CSUM and NETIF_F_IPV6_CSUM and it probably isn't feasible to convert them all in a given time frame (although if we could, this would be a great simplification to the stack). A reasonable direction may be to declare that new drivers must use NETIF_F_HW_CSUM, as NETIF_F_IP_CSUM and NETIF_F_IPV6_CSUM are considered deprecated. There is a class of drivers that should now be converted to advertise NETIF_F_HW_CSUM, namely those that support offload of encapsulated checksums. These drivers have to date been using skb->encapsulation to infer that checksum offload is being performed for an encapsulated checksum. This is strictly not correct. skb->encapsulation indicates that the inner headers are valid in the skbuff, whereas the stack indicates checksum offload arguments exclusively in csum_start and csum_offset. At some point we may want to set the inner headers for an skbuff but offload the outer transport checksum, so this needs to be fixed.
In this patch set:
- Rename some of the constants involved in checksum offload to be more reflective of their function
- Eliminate NETIF_F_GEN_CSUM and NETIF_F_V[46]_CSUM entirely as unnecessary convolutions
- Fix conditions in tcp_sendpage and tcp_sendmsg to take the IP protocol into account when determining if checksum offload can be done
- Add driver helper functions for determining if a checksum can be offloaded to a device. If not, the helper function can call skb_checksum_help
- Document the checksum offload interface between the stack and drivers with detail and specifics
Testing: Have been testing ixgbe and mlx4. No noticeable regressions seen yet.
==================== Signed-off-by: David S. Miller <davem@davemloft.net>
2015-12-15  net: Elaborate on checksum offload interface description (Tom Herbert)
Add specifics and details to the description of the interface between the stack and drivers for doing checksum offload. This description is meant to be as specific and complete as possible. Signed-off-by: Tom Herbert <tom@herbertland.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2015-12-15  net: Add driver helper functions to determine checksum offloadability (Tom Herbert)
Add the skb_csum_offload_chk driver helper function to determine if a device with limited checksum offload capabilities is able to offload the checksum for a given packet. This patch includes:
- The skb_csum_offload_chk function. Returns true if the checksum is offloadable, else false. Optionally, in the case that the checksum is not offloadable, the function can call skb_checksum_help to resolve the checksum. skb_csum_offload_chk also returns whether the checksum refers to an encapsulated checksum.
- Definition of the skb_csum_offl_spec structure that the caller uses to indicate rules about what it can offload (e.g. IPv4/v6, TCP/UDP only, whether encapsulated checksums can be offloaded, whether checksums with IPv6 extension headers can be offloaded).
- Ancillary functions called skb_csum_offload_chk_help, skb_csum_off_chk_help_cmn, skb_csum_off_chk_help_cmn_v4_only.
Signed-off-by: Tom Herbert <tom@herbertland.com> Signed-off-by: David S. Miller <davem@davemloft.net>
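A heavily hedged sketch of how a driver with IPv4-only, TCP/UDP-only checksum hardware might use the helper in its xmit path; the spec field names and the helper's exact signature are approximations of what the patch describes, and the foo_* names are hypothetical:

    /* Driver declares what its hardware can offload ... */
    static const struct skb_csum_offl_spec foo_csum_spec = {
            .ipv4_okay = 1,
            .tcp_okay  = 1,
            .udp_okay  = 1,
            /* no IPv6, no encapsulation, no extension headers */
    };

    /* ... and checks each skb, falling back to software help. */
    static netdev_tx_t foo_xmit(struct sk_buff *skb, struct net_device *dev)
    {
            bool encap;

            if (skb->ip_summed == CHECKSUM_PARTIAL &&
                !skb_csum_offload_chk(skb, &foo_csum_spec, &encap, true))
                    netdev_dbg(dev, "checksum resolved in software\n");

            /* ... hand skb to hardware ... */
            return NETDEV_TX_OK;
    }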
2015-12-15  tcp: Fix conditions to determine checksum offload (Tom Herbert)
In tcp_sendpage and tcp_sendmsg we check the route capabilities to determine if checksum offload can be performed. This check currently does not take the IP protocol into account for devices that advertise only one of NETIF_F_IPV6_CSUM or NETIF_F_IP_CSUM. This patch adds a function to check capabilities for checksum offload with a socket, called sk_check_csum_caps. This function checks for specific IPv4 or IPv6 offload support based on the family of the socket. Signed-off-by: Tom Herbert <tom@herbertland.com> Signed-off-by: David S. Miller <davem@davemloft.net>
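The check boils down to something like this (a sketch of the logic, not necessarily the literal helper):

    /* Can this socket's route offload a checksum for the socket's family? */
    static inline bool sk_check_csum_caps_sketch(const struct sock *sk)
    {
            return (sk->sk_route_caps & NETIF_F_HW_CSUM) ||
                   (sk->sk_family == PF_INET &&
                    (sk->sk_route_caps & NETIF_F_IP_CSUM)) ||
                   (sk->sk_family == PF_INET6 &&
                    (sk->sk_route_caps & NETIF_F_IPV6_CSUM));
    }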
2015-12-15  net: Eliminate NETIF_F_GEN_CSUM and NETIF_F_V[46]_CSUM (Tom Herbert)
These netif flags are unnecessary convolutions. It is more straightforward to just use NETIF_F_HW_CSUM, NETIF_F_IP_CSUM, and NETIF_F_IPV6_CSUM directly. This patch also: - Cleans up can_checksum_protocol - Simplifies netdev_intersect_features Signed-off-by: Tom Herbert <tom@herbertland.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2015-12-15  net: Rename NETIF_F_ALL_CSUM to NETIF_F_CSUM_MASK (Tom Herbert)
The name NETIF_F_ALL_CSUM is a misnomer. This does not correspond to the set of features for offloading all checksums; it is a mask of the checksum offload related feature bits. It is incorrect to set both NETIF_F_HW_CSUM and NETIF_F_IP_CSUM or NETIF_F_IPV6_CSUM at the same time for the features of a device. This patch:
- Changes instances of NETIF_F_ALL_CSUM to NETIF_F_CSUM_MASK (where NETIF_F_ALL_CSUM is being used as a mask).
- Changes the bonding, sfc/efx, ipvlan, macvlan, vlan, and team drivers to use NETIF_F_HW_CSUM in their features list instead of NETIF_F_ALL_CSUM.
Signed-off-by: Tom Herbert <tom@herbertland.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2015-12-15  fcoe: Use CHECKSUM_PARTIAL to indicate CRC offload (Tom Herbert)
When setting up CRC offload, set ip_summed to CHECKSUM_PARTIAL instead of CHECKSUM_UNNECESSARY. This is consistent with the definition of CHECKSUM_PARTIAL. The only driver that seems to be advertising NETIF_F_FCOE_CRC is ixgbe. AFAICT the driver does not look at ip_summed for FCoE and just assumes that the CRC is being offloaded. Signed-off-by: Tom Herbert <tom@herbertland.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2015-12-15  sctp: Rename NETIF_F_SCTP_CSUM to NETIF_F_SCTP_CRC (Tom Herbert)
The SCTP checksum is really a CRC and is very different from the standard 1's complement checksum that serves as the checksum for IP protocols. This offload interface is also very different. Rename NETIF_F_SCTP_CSUM to NETIF_F_SCTP_CRC to highlight these differences. The term CSUM should be reserved in the stack to refer to the standard 1's complement IP checksum. Signed-off-by: Tom Herbert <tom@herbertland.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2015-12-15  net: Add skb_inner_transport_offset function (Tom Herbert)
Same thing as skb_transport_offset, but returns the offset of the inner transport header (when skb->encapsulation is set). Signed-off-by: Tom Herbert <tom@herbertland.com> Signed-off-by: David S. Miller <davem@davemloft.net>
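The helper mirrors skb_transport_offset(); sketched from the description above:

    /* Offset of the inner transport header from skb->data
     * (meaningful when skb->encapsulation is set).
     */
    static inline int skb_inner_transport_offset(const struct sk_buff *skb)
    {
            return skb_inner_transport_header(skb) - skb->data;
    }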
2015-12-15  Revert "scatterlist: use sg_phys()" (Dan Williams)
commit db0fa0cb0157 ("scatterlist: use sg_phys()") did replacements of the form:
    phys_addr_t phys = page_to_phys(sg_page(s));
=>
    phys_addr_t phys = sg_phys(s) & PAGE_MASK;
However, this breaks platforms where sizeof(phys_addr_t) > sizeof(unsigned long). Revert for 4.3 and 4.4 to make room for a combined helper in 4.5. Cc: <stable@vger.kernel.org> Cc: Jens Axboe <axboe@fb.com> Cc: Christoph Hellwig <hch@lst.de> Cc: Russell King <linux@arm.linux.org.uk> Cc: David Woodhouse <dwmw2@infradead.org> Cc: Andrew Morton <akpm@linux-foundation.org> Fixes: db0fa0cb0157 ("scatterlist: use sg_phys()") Suggested-by: Joerg Roedel <joro@8bytes.org> Reported-by: Vitaly Lavrov <vel21ripn@gmail.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
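The truncation problem, illustrated with made-up values:

    /*
     * On a platform where phys_addr_t is 64-bit but unsigned long is
     * 32-bit (e.g. ARM LPAE), PAGE_MASK is an unsigned long, so the AND
     * clears the upper physical address bits:
     *
     *   page_to_phys(sg_page(s))   -> 0x1_2345_6000   (correct)
     *   sg_phys(s) & PAGE_MASK     -> 0x0_2345_6000   (high bits lost)
     */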