2014-03-31  rds: prevent dereference of a NULL device in rds_iw_laddr_check  (Sasha Levin)
Binding might result in a NULL device which is later dereferenced without checking. Signed-off-by: Sasha Levin <sasha.levin@oracle.com> Signed-off-by: David S. Miller <davem@davemloft.net>
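For illustration, a minimal sketch of the guard this fix describes; the surrounding rds_iw_laddr_check() details (variable names, error paths) are assumed rather than quoted from the patch:

  /* Sketch: rdma_bind_addr() can succeed without resolving to a device,
   * so check cm_id->device before touching any of its members. */
  ret = rdma_bind_addr(cm_id, (struct sockaddr *)&sin);
  if (ret || !cm_id->device) {
          ret = ret ? ret : -EADDRNOTAVAIL;
          goto out;
  }
  /* only now is it safe to dereference cm_id->device */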
2014-03-31  net-sysfs: expose number of carrier on/off changes  (david decotigny)
This makes it possible to monitor carrier on/off transitions and detect link-flapping issues: - new /sys/class/net/X/carrier_changes - new rtnetlink IFLA_CARRIER_CHANGES (getlink) Tested: - grep . /sys/class/net/*/carrier_changes + ip link set dev X down/up + plug/unplug cable - updated iproute2: prints IFLA_CARRIER_CHANGES - iproute2 20121211-2 (debian): unchanged behavior Signed-off-by: David Decotigny <decot@googlers.com> Signed-off-by: David S. Miller <davem@davemloft.net>
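As a usage illustration, a small self-contained reader for the new attribute (the interface name is just an example):

  /* Read /sys/class/net/<ifname>/carrier_changes and return the count,
   * or -1 on error. */
  #include <stdio.h>

  static long read_carrier_changes(const char *ifname)
  {
          char path[128];
          long count;
          FILE *f;

          snprintf(path, sizeof(path),
                   "/sys/class/net/%s/carrier_changes", ifname);
          f = fopen(path, "r");
          if (!f)
                  return -1;
          if (fscanf(f, "%ld", &count) != 1)
                  count = -1;
          fclose(f);
          return count;
  }

  int main(void)
  {
          printf("eth0 carrier changes: %ld\n", read_carrier_changes("eth0"));
          return 0;
  }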
2014-03-31  ipv6: tcp_ipv6 policy route issue  (Wang Yufen)
The issue arises when adding a policy route that specifies a particular NIC as oif: the policy route does not take effect. The reason is that fl6.oif is not set, so the route lookup fails to match. In the tcp_v6_send_response function, fl6.oif is set when the bound address is link-local, but not for global addresses. Acked-by: Hannes Frederic Sowa <hannes@stressinduktion.org> Signed-off-by: Wang Yufen <wangyufen@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
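A hedged sketch of the kind of change this implies in tcp_v6_send_response(); the exact condition in the patch may differ, so treat this as illustrative only:

  /* Illustrative: set the flow's output interface for global addresses
   * too (not only link-local ones), so that a policy route keyed on oif
   * can match. */
  if (!fl6.flowi6_oif)
          fl6.flowi6_oif = inet6_iif(skb);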
2014-03-31  ipv6: reuse rt6_need_strict  (Wang Yufen)
Move rt6_need_strict into ip6_route.h as a static inline so that it can be reused. Signed-off-by: Wang Yufen <wangyufen@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
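For reference, the helper being moved is small enough to live in a header; as a static inline in include/net/ip6_route.h it would look roughly like its long-standing definition in net/ipv6/route.c:

  /* Strict routing is needed for multicast, link-local and loopback
   * destinations, where the output interface matters. */
  static inline int rt6_need_strict(const struct in6_addr *daddr)
  {
          return ipv6_addr_type(daddr) &
                 (IPV6_ADDR_MULTICAST | IPV6_ADDR_LINKLOCAL | IPV6_ADDR_LOOPBACK);
  }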
2014-03-31  ipv6: tcp_ipv6 do some cleanup  (Wang Yufen)
Signed-off-by: Wang Yufen <wangyufen@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2014-03-31  Merge branch 'net_sysfs_doc'  (David S. Miller)
Florian Fainelli says: ==================== net: document sysfs entries This patchset attempts to document the basic set of sysfs entries that are exposed by netdevices in /sys/class/net/<iface>/. I did not go back into the pre-git era, so the oldest entries are marked with the 2.6.12 kernel version and dated April 2005. Future patches will document the queues/ and statistics/ directories as well. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
2014-03-31  net: export NET_ADDR_* values to user-space API  (Florian Fainelli)
NET_ADDR_* values are exported in the /sys/class/net/<iface>/addr_assign_type sysfs attribute, and as such constitute a user-space ABI. Move the NET_ADDR_* definitions from include/linux/netdevice.h to include/uapi/linux/netdevice.h. Signed-off-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
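For context, the definitions being moved are the addr_assign_type values; they are expected to look like the following in the uapi header (reproduced from memory, so treat as illustrative):

  /* Hardware address assignment types, reported through
   * /sys/class/net/<iface>/addr_assign_type */
  #define NET_ADDR_PERM      0  /* address is permanent (default) */
  #define NET_ADDR_RANDOM    1  /* address is generated randomly */
  #define NET_ADDR_STOLEN    2  /* address is stolen from another device */
  #define NET_ADDR_SET       3  /* address is set using dev_set_mac_address() */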
2014-03-31  net: sysfs: add Documentation entries for basic set of attributes  (Florian Fainelli)
Add sysfs attributes Documentation entries for the basic set of attributes that are exposed by a network device in /sys/class/net/<iface>/ Signed-off-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2014-03-31  qmi_wwan/cdc_ether: move Novatel E371 (1410:9011) to qmi_wwan  (Yegor Yefremov)
This device provides QMI and ethernet functionality via a standard CDC ethernet descriptor. But when driven by cdc_ether, the QMI functionality is unavailable because only cdc_ether can claim the USB interface. Thus blacklist the device in cdc_ether and add its IDs to qmi_wwan, which enables both QMI and ethernet simultaneously. Signed-off-by: Yegor Yefremov <yegorslists@googlemail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2014-03-31  Merge branch 'bridge_8021AD'  (David S. Miller)
Vlad Yasevich says: ==================== bridge: Fix forwarding of 8021AD frames The bridge has its own way to determine whether a packet is forwardable, and it doesn't handle 802.1ad tags correctly. Instead, just allow the bridge to use the existing is_skb_forwardable() function. v2: Fix missing hunk in patch 2/2 to make it build. v3: Fix indent for is_skb_forwardable ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
2014-03-31  bridge: use is_skb_forwardable in forward path  (Vlad Yasevich)
Use existing function instead of trying to use our own. Signed-off-by: Vlad Yasevich <vyasevic@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2014-03-31  net: Allow modules to use is_skb_forwardable  (Vlad Yasevich)
Signed-off-by: Vlad Yasevich <vyasevic@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
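Taken together, the two patches amount to un-static-ing the helper in net/core/dev.c and exporting it so the bridge module can call it; a sketch of the likely shape (the exact body and return type are assumed):

  /* net/core/dev.c: let modules such as the bridge reuse this check
   * instead of open-coding an MTU/length test of their own. */
  bool is_skb_forwardable(struct net_device *dev, struct sk_buff *skb)
  {
          unsigned int len;

          if (!(dev->flags & IFF_UP))
                  return false;

          len = dev->mtu + dev->hard_header_len + VLAN_HLEN;
          if (skb->len <= len)
                  return true;

          /* GSO packets may legitimately exceed the MTU */
          if (skb_is_gso(skb))
                  return true;

          return false;
  }
  EXPORT_SYMBOL_GPL(is_skb_forwardable);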
2014-03-31  Merge branch 'filter-next'  (David S. Miller)
Daniel Borkmann says: ==================== BPF updates We sat down and have heavily reworked the whole previous patchset from v10 [1] to address all comments/concerns. This patchset therefore *replaces* the internal BPF interpreter with the new layout as discussed in [1], and migrates some exotic callers to properly use the BPF API for a transparent upgrade. All other callers that already use the BPF API in a way it should be used, need no further changes to run the new internals. We also removed the sysctl knob entirely, and do not expose any structure to userland, so that implementation details only reside in kernel space. Since we are replacing the interpreter we had to migrate seccomp in one patch along with the interpreter to not break anything. When attaching a new filter, the flow can be described as following: i) test if jit compiler is enabled and can compile the user BPF, ii) if so, then go for it, iii) if not, then transparently migrate the filter into the new representation, and run it in the interpreter. Also, we have scratched the jit flag from the len attribute and made it as initial patch in this series as Pablo has suggested in the last feedback, thanks. For details, please refer to the patches themselves. We did extensive testing of BPF and seccomp on the new interpreter itself and also on the user ABIs and could not find any issues; new performance numbers as posted in patch 8 are also still the same. Please find more details in the patches themselves. For all the previous history from v1 to v10, see [1]. We have decided to drop the v11 as we have pedantically reworked the set, but of course, included all previous feedback. v3 -> v4: - Applied feedback from Dave regarding swap insns - Rebased on net-next v2 -> v3: - Rebased to latest net-next (i.e. w/ rxhash->hash rename) - Fixed patch 8/9 commit message/doc as suggested by Dave - Rest is unchanged v1 -> v2: - Rebased to latest net-next - Added static to ptp_filter as suggested by Dave - Fixed a typo in patch 8's commit message - Rest unchanged Thanks ! [1] http://thread.gmane.org/gmane.linux.kernel/1665858 ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
2014-03-31  doc: filter: extend BPF documentation to document new internals  (Alexei Starovoitov)
Further extend the current BPF documentation to document new BPF engine internals. Joint work with Daniel Borkmann. Signed-off-by: Alexei Starovoitov <ast@plumgrid.com> Signed-off-by: Daniel Borkmann <dborkman@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2014-03-31  net: filter: rework/optimize internal BPF interpreter's instruction set  (Alexei Starovoitov)
This patch replaces/reworks the kernel-internal BPF interpreter with an optimized BPF instruction set format that is modelled closer to mimic native instruction sets and is designed to be JITed with one to one mapping. Thus, the new interpreter is noticeably faster than the current implementation of sk_run_filter(); mainly for two reasons: 1. Fall-through jumps: BPF jump instructions are forced to go either 'true' or 'false' branch which causes branch-miss penalty. The new BPF jump instructions have only one branch and fall-through otherwise, which fits the CPU branch predictor logic better. `perf stat` shows drastic difference for branch-misses between the old and new code. 2. Jump-threaded implementation of interpreter vs switch statement: Instead of single table-jump at the top of 'switch' statement, gcc will now generate multiple table-jump instructions, which helps CPU branch predictor logic. Note that the verification of filters is still being done through sk_chk_filter() in classical BPF format, so filters from user- or kernel space are verified in the same way as we do now, and same restrictions/constraints hold as well. We reuse current BPF JIT compilers in a way that this upgrade would even be fine as is, but nevertheless allows for a successive upgrade of BPF JIT compilers to the new format. The internal instruction set migration is being done after the probing for JIT compilation, so in case JIT compilers are able to create a native opcode image, we're going to use that, and in all other cases we're doing a follow-up migration of the BPF program's instruction set, so that it can be transparently run in the new interpreter. In short, the *internal* format extends BPF in the following way (more details can be taken from the appended documentation): - Number of registers increase from 2 to 10 - Register width increases from 32-bit to 64-bit - Conditional jt/jf targets replaced with jt/fall-through - Adds signed > and >= insns - 16 4-byte stack slots for register spill-fill replaced with up to 512 bytes of multi-use stack space - Introduction of bpf_call insn and register passing convention for zero overhead calls from/to other kernel functions - Adds arithmetic right shift and endianness conversion insns - Adds atomic_add insn - Old tax/txa insns are replaced with 'mov dst,src' insn Performance of two BPF filters generated by libpcap resp. 
bpf_asm was measured on x86_64, i386 and arm32 (other libpcap programs have similar performance differences):

fprog #1 is taken from Documentation/networking/filter.txt:
  tcpdump -i eth0 port 22 -dd

fprog #2 is taken from 'man tcpdump':
  tcpdump -i eth0 'tcp port 22 and (((ip[2:2] - ((ip[0]&0xf)<<2)) - ((tcp[12]&0xf0)>>2)) != 0)' -dd

Raw performance data from BPF micro-benchmark: SK_RUN_FILTER on the same SKB (cache-hit) or 10k SKBs (cache-miss); time in ns per call, smaller is better:

--x86_64--
                fprog #1    fprog #1    fprog #2    fprog #2
                cache-hit   cache-miss  cache-hit   cache-miss
  old BPF           90         101         192         202
  new BPF           31          71          47          97
  old BPF jit       12          34          17          44
  new BPF jit      TBD

--i386--
                fprog #1    fprog #1    fprog #2    fprog #2
                cache-hit   cache-miss  cache-hit   cache-miss
  old BPF          107         136         227         252
  new BPF           40         119          69         172

--arm32--
                fprog #1    fprog #1    fprog #2    fprog #2
                cache-hit   cache-miss  cache-hit   cache-miss
  old BPF          202         300         475         540
  new BPF          180         270         330         470
  old BPF jit       26         182          37         202
  new BPF jit      TBD

Thus, without changing any userland BPF filters, applications on top of AF_PACKET (or other families) such as libpcap/tcpdump, cls_bpf classifier, netfilter's xt_bpf, team driver's load-balancing mode, and many more will have better interpreter filtering performance. While we are replacing the internal BPF interpreter, we also need to convert seccomp BPF in the same step to make use of the new internal structure since it makes use of lower-level API details without being further decoupled through higher-level calls like sk_unattached_filter_{create,destroy}(), for example. Just as for normal socket filtering, also seccomp BPF experiences a time-to-verdict speedup: 05-sim-long_jumps.c of libseccomp was used as micro-benchmark:

  seccomp_rule_add_exact(ctx,...
  seccomp_rule_add_exact(ctx,...
  rc = seccomp_load(ctx);
  for (i = 0; i < 10000000; i++)
      syscall(199, 100);

'short filter' has 2 rules
'large filter' has 200 rules

'short filter' performance is slightly better on x86_64/i386/arm32
'large filter' is much faster on x86_64 and i386 and shows no difference on arm32

--x86_64-- short filter
old BPF: 2.7 sec
  39.12% bench libc-2.15.so       [.] syscall
   8.10% bench [kernel.kallsyms]  [k] sk_run_filter
   6.31% bench [kernel.kallsyms]  [k] system_call
   5.59% bench [kernel.kallsyms]  [k] trace_hardirqs_on_caller
   4.37% bench [kernel.kallsyms]  [k] trace_hardirqs_off_caller
   3.70% bench [kernel.kallsyms]  [k] __secure_computing
   3.67% bench [kernel.kallsyms]  [k] lock_is_held
   3.03% bench [kernel.kallsyms]  [k] seccomp_bpf_load
new BPF: 2.58 sec
  42.05% bench libc-2.15.so       [.] syscall
   6.91% bench [kernel.kallsyms]  [k] system_call
   6.25% bench [kernel.kallsyms]  [k] trace_hardirqs_on_caller
   6.07% bench [kernel.kallsyms]  [k] __secure_computing
   5.08% bench [kernel.kallsyms]  [k] sk_run_filter_int_seccomp

--arm32-- short filter
old BPF: 4.0 sec
  39.92% bench [kernel.kallsyms]  [k] vector_swi
  16.60% bench [kernel.kallsyms]  [k] sk_run_filter
  14.66% bench libc-2.17.so       [.] syscall
   5.42% bench [kernel.kallsyms]  [k] seccomp_bpf_load
   5.10% bench [kernel.kallsyms]  [k] __secure_computing
new BPF: 3.7 sec
  35.93% bench [kernel.kallsyms]  [k] vector_swi
  21.89% bench libc-2.17.so       [.] syscall
  13.45% bench [kernel.kallsyms]  [k] sk_run_filter_int_seccomp
   6.25% bench [kernel.kallsyms]  [k] __secure_computing
   3.96% bench [kernel.kallsyms]  [k] syscall_trace_exit

--x86_64-- large filter
old BPF: 8.6 seconds
  73.38% bench [kernel.kallsyms]  [k] sk_run_filter
  10.70% bench libc-2.15.so       [.] syscall
   5.09% bench [kernel.kallsyms]  [k] seccomp_bpf_load
   1.97% bench [kernel.kallsyms]  [k] system_call
new BPF: 5.7 seconds
  66.20% bench [kernel.kallsyms]  [k] sk_run_filter_int_seccomp
  16.75% bench libc-2.15.so       [.] syscall
   3.31% bench [kernel.kallsyms]  [k] system_call
   2.88% bench [kernel.kallsyms]  [k] __secure_computing

--i386-- large filter
old BPF: 5.4 sec
new BPF: 3.8 sec

--arm32-- large filter
old BPF: 13.5 sec
  73.88% bench [kernel.kallsyms]  [k] sk_run_filter
  10.29% bench [kernel.kallsyms]  [k] vector_swi
   6.46% bench libc-2.17.so       [.] syscall
   2.94% bench [kernel.kallsyms]  [k] seccomp_bpf_load
   1.19% bench [kernel.kallsyms]  [k] __secure_computing
   0.87% bench [kernel.kallsyms]  [k] sys_getuid
new BPF: 13.5 sec
  76.08% bench [kernel.kallsyms]  [k] sk_run_filter_int_seccomp
  10.98% bench [kernel.kallsyms]  [k] vector_swi
   5.87% bench libc-2.17.so       [.] syscall
   1.77% bench [kernel.kallsyms]  [k] __secure_computing
   0.93% bench [kernel.kallsyms]  [k] sys_getuid

BPF filters generated by seccomp are very branchy, so the new internal BPF performance is better than the old one. Performance gains will be even higher when BPF JIT is committed for the new structure, which is planned in future work (as successive JIT migrations). BPF has also been stress-tested with trinity's BPF fuzzer. Joint work with Daniel Borkmann. Signed-off-by: Alexei Starovoitov <ast@plumgrid.com> Signed-off-by: Daniel Borkmann <dborkman@redhat.com> Cc: Hagen Paul Pfeifer <hagen@jauu.net> Cc: Kees Cook <keescook@chromium.org> Cc: Paul Moore <pmoore@redhat.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: H. Peter Anvin <hpa@linux.intel.com> Cc: linux-kernel@vger.kernel.org Acked-by: Kees Cook <keescook@chromium.org> Signed-off-by: David S. Miller <davem@davemloft.net>
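Since the point of the conversion is that the classic BPF user ABI stays untouched, here is a small self-contained reminder of what that unchanged user-space path looks like; the filter below is a generic 'accept IPv4 only' example, not one of the two benchmarked fprog programs:

  /* Attach a classic BPF filter to a packet socket via SO_ATTACH_FILTER;
   * this user-visible interface keeps working as-is while the kernel
   * migrates the program to its new internal representation.
   * (Needs CAP_NET_RAW to open the AF_PACKET socket.) */
  #include <stdio.h>
  #include <sys/socket.h>
  #include <arpa/inet.h>
  #include <linux/if_ether.h>
  #include <linux/filter.h>

  int main(void)
  {
          struct sock_filter code[] = {
                  BPF_STMT(BPF_LD  | BPF_H   | BPF_ABS, 12),           /* A = ethertype */
                  BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, ETH_P_IP, 0, 1), /* IPv4? */
                  BPF_STMT(BPF_RET | BPF_K, 0xffffU),                  /* accept */
                  BPF_STMT(BPF_RET | BPF_K, 0),                        /* drop */
          };
          struct sock_fprog prog = {
                  .len    = sizeof(code) / sizeof(code[0]),
                  .filter = code,
          };
          int fd = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));

          if (fd < 0 || setsockopt(fd, SOL_SOCKET, SO_ATTACH_FILTER,
                                   &prog, sizeof(prog)) < 0) {
                  perror("socket/setsockopt");
                  return 1;
          }
          printf("classic BPF filter attached\n");
          return 0;
  }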
2014-03-31  net: isdn: use sk_unattached_filter api  (Daniel Borkmann)
As in ppp, we need to migrate the ISDN/PPP code to make use of the sk_unattached_filter API in order to avoid direct access to the filter structure. By using sk_unattached_filter_{create,destroy}, we also open up the possibility of jit compiling filters for faster filter verdicts. Joint work with Alexei Starovoitov. Signed-off-by: Daniel Borkmann <dborkman@redhat.com> Signed-off-by: Alexei Starovoitov <ast@plumgrid.com> Cc: Karsten Keil <isdn@linux-pingi.de> Cc: isdn4linux@listserv.isdn4linux.de Signed-off-by: David S. Miller <davem@davemloft.net>
2014-03-31  net: ppp: use sk_unattached_filter api  (Daniel Borkmann)
For the ppp driver, there are currently two open-coded BPF filters in use, that is, pass_filter and active_filter. Migrate both to make proper use of the sk_unattached_filter_{create,destroy} API so that the actual BPF code is decoupled from direct access, and filters can be jited as a side effect by the internal filter compiler. Joint work with Alexei Starovoitov. Signed-off-by: Daniel Borkmann <dborkman@redhat.com> Signed-off-by: Alexei Starovoitov <ast@plumgrid.com> Cc: Paul Mackerras <paulus@samba.org> Cc: linux-ppp@vger.kernel.org Signed-off-by: David S. Miller <davem@davemloft.net>
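In API terms, both this conversion and the ISDN/PPP one boil down to something of the following shape; this is a sketch against the interface named in the commit, with the copied-in program ('uprog') and the drop label being hypothetical placeholders:

  /* Sketch: wrap a classic BPF program via the unattached-filter API
   * instead of storing raw sock_filter insns in the driver. */
  struct sock_fprog fprog = {
          .len    = uprog.len,     /* hypothetical user-supplied program */
          .filter = uprog.filter,
  };
  struct sk_filter *pass_filter = NULL;
  int err = sk_unattached_filter_create(&pass_filter, &fprog);

  if (err)
          return err;

  /* data path: run the (possibly jit compiled) filter */
  if (pass_filter && SK_RUN_FILTER(pass_filter, skb) == 0)
          goto drop;

  /* teardown */
  sk_unattached_filter_destroy(pass_filter);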
2014-03-31  net: ptp: do not reimplement PTP/BPF classifier  (Daniel Borkmann)
The pch_gbe, cpts, and ixp4xx_eth drivers currently open-code and reimplement a BPF classifier for the PTP protocol. Since all of them effectively do the very same thing and load the very same PTP/BPF filter, we can consolidate that code by introducing ptp_classify_raw() in the time-stamping core framework, which can then be used by drivers. As drivers get initialized after bootstrapping the core networking subsystem, they can make use of ptp_insns wrapped through ptp_classify_raw(), which allows the PTP classifier setup code in drivers to be simplified and removed. Joint work with Alexei Starovoitov. Signed-off-by: Daniel Borkmann <dborkman@redhat.com> Signed-off-by: Alexei Starovoitov <ast@plumgrid.com> Cc: Richard Cochran <richard.cochran@omicron.at> Cc: Jiri Benc <jbenc@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
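With the helper in place, a driver's receive/timestamp hook can classify in one call; a hypothetical sketch (the surrounding function is invented, the classifier call and the PTP_CLASS_NONE constant come from the existing ptp_classify machinery):

  /* Hypothetical driver helper: use the shared PTP/BPF classifier
   * instead of carrying a private copy of the filter program. */
  static bool my_drv_is_ptp_frame(struct sk_buff *skb)
  {
          unsigned int type = ptp_classify_raw(skb);

          return type != PTP_CLASS_NONE;
  }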
2014-03-31  net: ptp: use sk_unattached_filter_create() for BPF  (Daniel Borkmann)
This patch replaces an open-coded sk_run_filter() implementation with proper use of the BPF API, that is, sk_unattached_filter_create(). This migration is needed because we will be internally transforming the filter to a different representation, so it needs to be decoupled from direct access. It is okay to do so as skb_timestamping_init() is called during initialization of the network stack in a core initcall via sock_init(). This also effectively allows PTP filters to be jit compiled if bpf_jit_enable is set. For better readability, some newlines are also introduced, and ptp_classify.h is only used in kernel space. Joint work with Alexei Starovoitov. Signed-off-by: Daniel Borkmann <dborkman@redhat.com> Signed-off-by: Alexei Starovoitov <ast@plumgrid.com> Cc: Richard Cochran <richard.cochran@omicron.at> Cc: Jiri Benc <jbenc@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2014-03-31  net: filter: move filter accounting to filter core  (Daniel Borkmann)
This patch basically does two things: i) removes the extern keyword from the include/linux/filter.h file to be more consistent with the rest of Joe's changes, and ii) moves filter accounting into the filter core framework. Filter accounting, mainly done through sk_filter_{un,}charge(), takes care of the case when sockets are being cloned through sk_clone_lock(), so that removal of the filter on one socket won't result in eviction while it is still referenced by the other. These functions actually belong in net/core/filter.c and not include/net/sock.h, as we want to keep all that in a central place. It's also not in the fast path, so uninlining them is fine and even allows us to get rid of sk_filter_release_rcu()'s EXPORT_SYMBOL and a forward declaration. Joint work with Alexei Starovoitov. Signed-off-by: Daniel Borkmann <dborkman@redhat.com> Signed-off-by: Alexei Starovoitov <ast@plumgrid.com> Cc: Pavel Emelyanov <xemul@parallels.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2014-03-31  net: filter: keep original BPF program around  (Daniel Borkmann)
In order to open up the possibility to internally transform a BPF program into an alternative and possibly non-trivial reversible representation, we need to keep the original BPF program around, so that it can be passed back to user space w/o the need of a complex decoder. The reason for that use case resides in commit a8fc92778080 ("sk-filter: Add ability to get socket filter program (v2)"), that is, the ability to retrieve the currently attached BPF filter from a given socket used mainly by the checkpoint-restore project, for example. Therefore, we add two helpers sk_{store,release}_orig_filter for taking care of that. In the sk_unattached_filter_create() case, there's no such possibility/requirement to retrieve a loaded BPF program. Therefore, we can spare us the work in that case. This approach will simplify and slightly speed up both, sk_get_filter() and sock_diag_put_filterinfo() handlers as we won't need to successively decode filters anymore through sk_decode_filter(). As we still need sk_decode_filter() later on, we're keeping it around. Joint work with Alexei Starovoitov. Signed-off-by: Alexei Starovoitov <ast@plumgrid.com> Signed-off-by: Daniel Borkmann <dborkman@redhat.com> Cc: Pavel Emelyanov <xemul@parallels.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2014-03-31  net: filter: add jited flag to indicate jit compiled filters  (Daniel Borkmann)
This patch adds a jited flag into sk_filter struct in order to indicate whether a filter is currently jited or not. The size of sk_filter is not being expanded as the 32 bit 'len' member allows upper bits to be reused since a filter can currently only grow as large as BPF_MAXINSNS. Therefore, there's enough room also for other in future needed flags to reuse 'len' field if necessary. The jited flag also allows for having alternative interpreter functions running as currently, we can only detect jit compiled filters by testing fp->bpf_func to not equal the address of sk_run_filter(). Joint work with Alexei Starovoitov. Signed-off-by: Alexei Starovoitov <ast@plumgrid.com> Signed-off-by: Daniel Borkmann <dborkman@redhat.com> Cc: Pablo Neira Ayuso <pablo@netfilter.org> Signed-off-by: David S. Miller <davem@davemloft.net>
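The layout change is roughly the following split of the existing 32-bit length word rather than a new member (a sketch, not the verbatim diff):

  /* Sketch: carve one flag bit out of 'len'; BPF_MAXINSNS is far below
   * 2^31, so 31 bits remain plenty for the instruction count. */
  struct sk_filter {
          atomic_t        refcnt;
          u32             jited:1,   /* is the filter jit compiled? */
                          len:31;    /* number of filter blocks */
          /* ... bpf_func, rcu head, instruction array, etc. ... */
  };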
2014-03-29  Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net  (David S. Miller)
Conflicts: drivers/net/ethernet/marvell/mvneta.c The mvneta.c conflict is a case of overlapping changes, a conversion to devm_ioremap_resource() vs. a conversion to netdev_alloc_pcpu_stats. Signed-off-by: David S. Miller <davem@davemloft.net>
2014-03-29  phy/at8031: enable at8031 to work on interrupt mode  (Zhao Qiang)
The at8031 can work in polling mode or interrupt mode. Add ack_interrupt and config_intr functions to enable interrupt mode for it. Signed-off-by: Zhao Qiang <B45475@freescale.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2014-03-29  ipv6: fix checkpatch errors of "foo*" and "foo * bar"  (Wang Yufen)
ERROR: "(foo*)" should be "(foo *)" ERROR: "foo * bar" should be "foo *bar" Suggested-by: Sergei Shtylyov <sergei.shtylyov@cogentembedded.com> Signed-off-by: Wang Yufen <wangyufen@huawei.com> Acked-by: Sergei Shtylyov <sergei.shtylyov@cogentembedded.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2014-03-29  ipv6: fix checkpatch errors of brace and trailing statements  (Wang Yufen)
ERROR: open brace '{' following enum go on the same line ERROR: open brace '{' following struct go on the same line ERROR: trailing statements should be on next line Signed-off-by: Wang Yufen <wangyufen@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2014-03-29  ipv6: fix checkpatch errors comments and space  (Wang Yufen)
WARNING: please, no space before tabs WARNING: please, no spaces at the start of a line ERROR: spaces required around that ':' (ctx:VxW) ERROR: spaces required around that '>' (ctx:VxV) ERROR: spaces required around that '>=' (ctx:VxV) Signed-off-by: Wang Yufen <wangyufen@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2014-03-29  Merge branch 'netpoll-next'  (David S. Miller)
Eric W. Biederman says: ==================== netpoll: Cleanups and fixes This should be a small set of safe cleanups and fixes to netpoll. The fixes are: vlan headers are now always inserted when needed, and napi polling is always avoided when network devices are closed. There are a bunch of little cleanups removing unnecessary code, fixing function naming, not taking unnecessary locks and removing general silliness. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
2014-03-29  netpoll: Respect NETIF_F_LLTX  (Eric W. Biederman)
Stop taking the transmit lock when a network device has specified NETIF_F_LLTX. If no locks are needed to transmit a packet, this is the ideal scenario for netpoll, as all packets can be transmitted immediately. Even if some locks are needed in ndo_start_xmit, skipping any unnecessary serialization is desirable for netpoll, as it makes it more likely that a debugging packet can be transmitted immediately instead of being deferred until later. Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com> Signed-off-by: David S. Miller <davem@davemloft.net>
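Conceptually, the transmit path then behaves like the following sketch (illustrative only, not the literal netpoll diff):

  /* Illustrative: take the tx queue lock only when the driver does not
   * declare itself lockless via NETIF_F_LLTX. */
  static netdev_tx_t netpoll_xmit_sketch(struct sk_buff *skb,
                                         struct net_device *dev,
                                         struct netdev_queue *txq)
  {
          const struct net_device_ops *ops = dev->netdev_ops;
          bool lockless = dev->features & NETIF_F_LLTX;
          netdev_tx_t status;

          if (!lockless)
                  __netif_tx_lock(txq, smp_processor_id());
          status = ops->ndo_start_xmit(skb, dev);
          if (!lockless)
                  __netif_tx_unlock(txq);

          return status;
  }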
2014-03-29  netpoll: Remove strong unnecessary assumptions about skbs  (Eric W. Biederman)
Remove the assumption that the skbs that make it to netpoll_send_skb_on_dev are allocated with find_skb, such that skb->users == 1 and nothing is attached that would prevent the skbs from being freed from hard irq context. Remove this assumption by replacing __kfree_skb on error paths with dev_kfree_skb_irq (in hard irq context) and kfree_skb (in process context). Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com> Signed-off-by: David S. Miller <davem@davemloft.net>
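The replacement pattern is, in spirit, the following; the real call sites know their context statically, and the runtime test here only makes the distinction explicit (sketch):

  /* Sketch: free an skb correctly from either hard irq or process
   * context instead of assuming __kfree_skb() is always safe. */
  static void netpoll_free_skb_sketch(struct sk_buff *skb)
  {
          if (in_irq() || irqs_disabled())
                  dev_kfree_skb_irq(skb);  /* defer the free via softirq */
          else
                  kfree_skb(skb);
  }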
2014-03-29  netpoll: Rename netpoll_rx_enable/disable to netpoll_poll_disable/enable  (Eric W. Biederman)
The netpoll_rx_enable and netpoll_rx_disable functions have always controlled polling of the network driver's transmit and receive queues. Rename them to netpoll_poll_enable and netpoll_poll_disable to make their functionality clear. Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2014-03-29  netpoll: Move rx enable/disable into __dev_close_many  (Eric W. Biederman)
Today netpoll_rx_enable and netpoll_rx_disable are called from dev_close and __dev_close, and not from dev_close_many. Move the calls into __dev_close_many so that we have a single call site to maintain, and so that dev_close_many gains this protection as well, which importantly makes batched network device deletes safe. Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2014-03-29  netpoll: Only call ndo_start_xmit from a single place  (Eric W. Biederman)
Factor out the code that needs to surround ndo_start_xmit from netpoll_send_skb_on_dev into netpoll_start_xmit. It is an unfortunate fact that, as the netpoll code has been maintained, the primary call site of ndo_start_xmit learned how to handle vlans and timestamps, but the second call of ndo_start_xmit in queue_process did not. With the introduction of netpoll_start_xmit, this associated logic now happens at both call sites of ndo_start_xmit, which should make it easy for that to continue into the future. Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2014-03-29  netpoll: Remove gfp parameter from __netpoll_setup  (Eric W. Biederman)
The gfp parameter was added in: commit 47be03a28cc6c80e3aa2b3e8ed6d960ff0c5c0af Author: Amerigo Wang <amwang@redhat.com> Date: Fri Aug 10 01:24:37 2012 +0000 netpoll: use GFP_ATOMIC in slave_enable_netpoll() and __netpoll_setup() slave_enable_netpoll() and __netpoll_setup() may be called with read_lock() held, so should use GFP_ATOMIC to allocate memory. Eric suggested to pass gfp flags to __netpoll_setup(). Cc: Eric Dumazet <eric.dumazet@gmail.com> Cc: "David S. Miller" <davem@davemloft.net> Reported-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: Cong Wang <amwang@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net> The reason for the gfp parameter was removed in: commit c4cdef9b7183159c23c7302aaf270d64c549f557 Author: dingtianhong <dingtianhong@huawei.com> Date: Tue Jul 23 15:25:27 2013 +0800 bonding: don't call slave_xxx_netpoll under spinlocks The slave_xxx_netpoll will call synchronize_rcu_bh(), so the function may schedule and sleep, it should't be called under spinlocks. bond_netpoll_setup() and bond_netpoll_cleanup() are always protected by rtnl lock, it is no need to take the read lock, as the slave list couldn't be changed outside rtnl lock. Signed-off-by: Ding Tianhong <dingtianhong@huawei.com> Cc: Jay Vosburgh <fubar@us.ibm.com> Cc: Andy Gospodarek <andy@greyhouse.net> Signed-off-by: David S. Miller <davem@davemloft.net> Nothing else that calls __netpoll_setup or ndo_netpoll_setup requires a gfp parameter, so remove the gfp parameter from both of these functions, making the code clearer. Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2014-03-29  Merge branch 'skb_cow_head'  (David S. Miller)
Francois Romieu says: ==================== remove open-coded skb_cow_head. As per http://marc.info/?l=linux-netdev&m=139440579104701. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
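For reference, the open-coded pattern being removed across these drivers and its one-line replacement typically look like this (a generic sketch, not any single driver's diff):

  /* Before: make the headers writable by hand. */
  if (skb_header_cloned(skb) &&
      pskb_expand_head(skb, 0, 0, GFP_ATOMIC))
          goto drop;

  /* After: skb_cow_head() performs the same check-and-copy in one call. */
  if (skb_cow_head(skb, 0))
          goto drop;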
2014-03-29  wimax/i2400m: remove open-coded skb_cow_head.  (françois romieu)
Signed-off-by: Francois Romieu <romieu@fr.zoreil.com> Cc: Inaky Perez-Gonzalez <inaky.perez-gonzalez@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2014-03-29  tg3: remove open-coded skb_cow_head.  (françois romieu)
Signed-off-by: Francois Romieu <romieu@fr.zoreil.com> Cc: Nithin Nayak Sujir <nsujir@broadcom.com> Cc: Michael Chan <mchan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2014-03-29  bna: remove open-coded skb_cow_head.  (françois romieu)
Signed-off-by: Francois Romieu <romieu@fr.zoreil.com> Cc: Rasesh Mody <rmody@brocade.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2014-03-29  qlge: remove open-coded skb_cow_head.  (françois romieu)
Signed-off-by: Francois Romieu <romieu@fr.zoreil.com> Cc: Jitendra Kalsaria <jitendra.kalsaria@qlogic.com> Cc: Shahed Shaikh <shahed.shaikh@qlogic.com> Cc: Ron Mercer <ron.mercer@qlogic.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2014-03-29  jme: remove open-coded skb_cow_head.  (françois romieu)
Signed-off-by: Francois Romieu <romieu@fr.zoreil.com> Cc: Guo-Fu Tseng <cooldavid@cooldavid.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2014-03-29  atl1e: remove open-coded skb_cow_head.  (françois romieu)
Signed-off-by: Francois Romieu <romieu@fr.zoreil.com> Cc: Chris Snook <chris.snook@gmail.com> Cc: Jay Cliburn <jcliburn@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2014-03-29  atl1c: remove open-coded skb_cow_head.  (françois romieu)
Signed-off-by: Francois Romieu <romieu@fr.zoreil.com> Cc: Jay Cliburn <jcliburn@gmail.com> Cc: Chris Snook <chris.snook@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2014-03-29  atl1: remove open-coded skb_cow_head.  (françois romieu)
Signed-off-by: Francois Romieu <romieu@fr.zoreil.com> Cc: Chris Snook <chris.snook@gmail.com> Cc: Jay Cliburn <jcliburn@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2014-03-29  net: mvneta: use devm_ioremap_resource() instead of of_iomap()  (Thomas Petazzoni)
The mvneta driver currently uses of_iomap(), which has two drawbacks: it doesn't request the resource, and it isn't devm-style, so some error handling is needed. This commit switches to devm_ioremap_resource() instead, which automatically requests the resource (so the I/O registers region shows up properly in /proc/iomem) and is also devm-style, which allows us to get rid of the error handling needed to unmap the I/O registers region. Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com> Signed-off-by: David S. Miller <davem@davemloft.net>
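The idiom being converted to is the standard probe-path pattern (a sketch of the shape, not the mvneta diff itself):

  /* Sketch: request and map the register window in one devm-managed step;
   * no explicit iounmap()/release is needed on error or remove paths. */
  struct resource *res;
  void __iomem *base;

  res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
  base = devm_ioremap_resource(&pdev->dev, res);
  if (IS_ERR(base))
          return PTR_ERR(base);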
2014-03-28  Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net  (Linus Torvalds)
Pull networking fixes from David Miller: 1) We've discovered a common error in several networking drivers, they put VLAN offload features into ->vlan_features, which would suggest that they support offloading 2 or more levels of VLAN encapsulation. Not only do these devices not do that, but we don't have the infrastructure yet to handle that at all. Fixes from Vlad Yasevich. 2) Fix tcpdump crash with bridging and vlans, also from Vlad. 3) Some MAINTAINERS updates for random32 and bonding. 4) Fix late reseeds of prandom generator, from Sasha Levin. 5) Bridge doesn't handle stacked vlans properly, fix from Toshiaki Makita. 6) Fix deadlock in openvswitch, from Flavio Leitner. 7) get_timewait4_sock() doesn't report delay times correctly, fix from Eric Dumazet. 8) Duplicate address detection and addrconf verification need to run in contexts where RTNL can be obtained. Move them to run from a workqueue. From Hannes Frederic Sowa. 9) Fix route refcount leaking in ip tunnels, from Pravin B Shelar. 10) Don't return -EINTR from non-blocking recvmsg() on AF_UNIX sockets, from Eric Dumazet. * git://git.kernel.org/pub/scm/linux/kernel/git/davem/net: (28 commits) vlan: Warn the user if lowerdev has bad vlan features. veth: Turn off vlan rx acceleration in vlan_features ifb: Remove vlan acceleration from vlan_features qlge: Do not propaged vlan tag offloads to vlans bridge: Fix crash with vlan filtering and tcpdump net: Account for all vlan headers in skb_mac_gso_segment MAINTAINERS: bonding: change email address MAINTAINERS: bonding: change email address ipv6: move DAD and addrconf_verify processing to workqueue tcp: fix get_timewait4_sock() delay computation on 64bit openvswitch: fix a possible deadlock and lockdep warning bridge: Fix handling stacked vlan tags bridge: Fix inabillity to retrieve vlan tags when tx offload is disabled vhost: validate vhost_get_vq_desc return value vhost: fix total length when packets are too short random32: avoid attempt to late reseed if in the middle of seeding random32: assign to network folks in MAINTAINERS net/mlx4_core: pass pci_device_id.driver_data to __mlx4_init_one during reset core, nfqueue, openvswitch: Orphan frags in skb_zerocopy and handle errors vlan: Set hard_header_len according to available acceleration ...
2014-03-28  Merge branch 'vlan_offloads'  (David S. Miller)
Vlad Yasevich says: ==================== Audit all drivers for correct vlan_features. Some drivers set vlan acceleration features in vlan_features. This causes issues with Q-in-Q/802.1ad configurations. Audit all the drivers for correct vlan_features. Fix broken ones. Add a warning to vlan code to help catch future offenders. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
2014-03-28  vlan: Warn the user if lowerdev has bad vlan features.  (Vlad Yasevich)
Some drivers incorrectly assign vlan acceleration features to vlan_features thus causing issues for Q-in-Q vlan configurations. Warn the user of such cases. Signed-off-by: Vlad Yasevich <vyasevic@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
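The warning presumably keys off the CTAG/STAG acceleration bits showing up in the lower device's vlan_features, roughly as follows (sketch; the exact message and placement are assumed):

  /* Sketch: flag lower devices that leak vlan acceleration bits into
   * vlan_features, which breaks Q-in-Q configurations. */
  #define VLAN_ACCEL_FEATURES (NETIF_F_HW_VLAN_CTAG_TX | \
                               NETIF_F_HW_VLAN_CTAG_RX | \
                               NETIF_F_HW_VLAN_STAG_TX | \
                               NETIF_F_HW_VLAN_STAG_RX)

  if (real_dev->vlan_features & VLAN_ACCEL_FEATURES)
          netdev_warn(real_dev, "VLAN features are set incorrectly. Q-in-Q configurations may not work correctly.\n");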
2014-03-28  veth: Turn off vlan rx acceleration in vlan_features  (Vlad Yasevich)
For completeness, turn off vlan rx acceleration in vlan_features so that it doesn't show up on q-in-q setups. Signed-off-by: Vlad Yasevich <vyasevic@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2014-03-28  ifb: Remove vlan acceleration from vlan_features  (Vlad Yasevich)
Do not include vlan acceleration features in vlan_features as that precludes correct Q-in-Q operation. Signed-off-by: Vlad Yasevich <vyasevic@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2014-03-28  qlge: Do not propaged vlan tag offloads to vlans  (Vlad Yasevich)
qlge driver turns off NETIF_F_HW_CTAG_FILTER, but forgets to turn off HW_CTAG_TX and HW_CTAG_RX on vlan devices. With the current settings, q-in-q will only generate a single vlan header. Remember to mask off CTAG_TX and CTAG_RX features in vlan_features. CC: Shahed Shaikh <shahed.shaikh@qlogic.com> CC: Jitendra Kalsaria <jitendra.kalsaria@qlogic.com> CC: Ron Mercer <ron.mercer@qlogic.com> Signed-off-by: Vlad Yasevich <vyasevic@redhat.com> Acked-by: Jitendra Kalsaria <jitendra.kalsaria@qlogic.com> Signed-off-by: David S. Miller <davem@davemloft.net>
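The fix described amounts to masking those bits out of vlan_features at device setup, along these lines (sketch):

  /* Sketch: keep CTAG offloads on the device itself, but do not let
   * stacked vlan devices inherit the tag acceleration bits. */
  ndev->vlan_features &= ~(NETIF_F_HW_VLAN_CTAG_TX |
                           NETIF_F_HW_VLAN_CTAG_RX);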