|
Make the code more readable by moving the initial checksum setup
and the command/status preparation into a separate function.
Signed-off-by: Ezequiel Garcia <ezequiel.garcia@free-electrons.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Now that the TSO helper API has been introduced, this commit makes use
of it to implement TSO in this driver.
Testing with iperf while checking CPU usage with vmstat shows a substantial
CPU usage drop when TSO is on (~15% vs. ~25%). HTTP-based tests performed
by Willy Tarreau have shown performance improvements.
Signed-off-by: Ezequiel Garcia <ezequiel.garcia@free-electrons.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Rework mvneta_tx() so that the final handling performed before an sk_buff
is transmitted is done only if the number of processed fragments is
positive.
This is preparation work to add support for software TSO.
Signed-off-by: Ezequiel Garcia <ezequiel.garcia@free-electrons.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
In order to ease the addition of new features, let's factorize the
feature list.
Signed-off-by: Ezequiel Garcia <ezequiel.garcia@free-electrons.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Although the implementation probably needs a lot of work, this initial API
allows software TSO to be implemented in the mvneta and mv643xx_eth drivers
in a fairly non-intrusive way.
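As a hedged illustration of how a driver might use such an API (helper
names follow the TSO helpers described here; the descriptor handling and
the per-segment header buffer are placeholders for this sketch):
static void example_xmit_tso(struct sk_buff *skb, int hdr_len)
{
	struct tso_t tso;
	int total_len = skb->len - hdr_len;
	char hdr[128];	/* placeholder per-segment header buffer */

	/* tso_count_descs(skb) tells how many descriptors the skb needs,
	 * which the driver would check against ring space first */
	tso_start(skb, &tso);
	while (total_len > 0) {
		int data_left = min_t(int, skb_shinfo(skb)->gso_size, total_len);

		total_len -= data_left;
		/* build the headers for this segment */
		tso_build_hdr(skb, hdr, &tso, data_left, total_len == 0);

		/* then map the payload of the segment chunk by chunk */
		while (data_left > 0) {
			int size = min_t(int, tso.size, data_left);

			/* hand tso.data / size to a hardware descriptor here */
			data_left -= size;
			tso_build_data(skb, &tso, size);
		}
	}
}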
Signed-off-by: Ezequiel Garcia <ezequiel.garcia@free-electrons.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Pablo Neira Ayuso says:
====================
Netfilter/nftables updates for net-next
The following patchset contains Netfilter/nftables updates for net-next,
most relevantly they are:
1) Add set element update notification via netlink, from Arturo Borrero.
2) Put all object updates in one single message batch that is sent to
kernel-space. Before this patch only rules were included in the batch.
This series also introduces the generic transaction infrastructure so
updates to all objects (tables, chains, rules and sets) are applied in
an all-or-nothing fashion; this series is from me.
3) Defer release of objects via call_rcu to reduce the time required to
commit changes. The assumption is that all objects are destroyed in
reverse order to ensure that dependencies between them are fulfilled
(i.e. rules and sets are destroyed first, then chains, and finally
tables).
4) Allow matching by bridge port name, from Tomasz Bursztyka. This series
includes two patches to prepare for this new feature.
5) Implement the proper set selection based on the characteristics of the
data. The new infrastructure also allows you to specify your preferences
in terms of memory and computational complexity so the underlying set
type is also selected according to your needs, from Patrick McHardy.
6) Several cleanup patches for nft expressions, including one minor possible
compilation breakage due to missing mark support, also from Patrick.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/net-next
Jeff Kirsher says:
====================
Intel Wired LAN Driver Updates
This series contains updates to i40e and i40evf.
Shannon makes minor changes to the AdminQ interface to bring it up to
date, and removes the hard-coding of the stats struct size in ethtool, in
preparation for adding data fields which are configuration dependent.
Catherine removes some unused and unneeded PCI bus defines.
Jesse fixes the copyright headers and finishes up the removal of the PTP
Tx work functionality, which allows us to rely on the Tx timesync interrupt.
Mitch provides a number of fixes and cleanups for i40e/i40evf based on
suggestions from Ben Hutchings. First is to use a macro parameter for
ethtool stats instead of just assuming that a valid netdev variable
exists. Second is not to tell ethtool that the VF can do 10GbaseT, when
it really has no idea what its link speed is, so set the supported value
to 0 instead. Make the ethtool_ops structure constant since it is
extremely unlikely to change at runtime. Ethtool consistently reports
0 values for our ITR settings because we never actually set them, so
fix this by initializing them to the specified default values.
Greg avoids a compile error by wrapping the call to i40e_alloc_vfs() in
CONFIG_PCI_IOV because the function itself is wrapped in the same
conditional compile block.
Alexander Gordeev updates the driver to use the new pci_enable_msi_range()
and pci_enable_msix_range() or pci_enable_msi_exact() and
pci_enable_msix_exact().
Jean Sacren provides a fix where the wrong error code was being passed to
i40e_open().
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Experience with the recent e114a710aa50 ("tcp: fix cwnd limited
checking to improve congestion control") has shown that there are
common cases where that commit can cause cwnd to be much larger than
necessary. This leads to TSO autosizing cooking skbs that are too
large, among other things.
The main problems seemed to be:
(1) That commit attempted to predict the future behavior of the
connection by looking at the write queue (if TSO or TSQ limit
sending). That prediction sometimes overestimated future outstanding
packets.
(2) That commit always allowed cwnd to grow to twice the number of
outstanding packets (even in congestion avoidance, where this is not
needed).
This commit improves both of these, by:
(1) Switching to a measurement-based approach where we explicitly
track the largest number of packets in flight during the past window
("max_packets_out"), and remember whether we were cwnd-limited at the
moment we finished sending that flight.
(2) Only allowing cwnd to grow to twice the number of outstanding
packets ("max_packets_out") in slow start. In congestion avoidance
mode we now only allow cwnd to grow if it was fully utilized.
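A minimal sketch of the check described above, assuming the tcp_sock
fields max_packets_out and is_cwnd_limited introduced by this change (an
illustration of the idea, not the exact upstream helper):
static bool example_is_cwnd_limited(const struct tcp_sock *tp)
{
	/* slow start: let cwnd grow up to twice the peak number of packets
	 * that were in flight during the last window */
	if (tp->snd_cwnd < tp->snd_ssthresh)
		return tp->snd_cwnd < 2 * tp->max_packets_out;

	/* congestion avoidance: grow only if cwnd was fully utilized */
	return tp->is_cwnd_limited;
}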
Signed-off-by: Neal Cardwell <ncardwell@google.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Delete unnecessary local variable whose value is always 0 and that hides
the fact that the result is always 0.
A simplified version of the semantic patch that fixes this problem is as
follows: (http://coccinelle.lip6.fr/)
// <smpl>
@r exists@
local idexpression ret;
expression e;
position p;
@@
-ret = 0;
... when != ret = e
return
- ret
+ 0
;
// </smpl>
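A hypothetical before/after illustration of the change (the function name
is made up):
/* before */
static int example_status(void)
{
	int ret = 0;
	/* ... nothing ever assigns to ret ... */
	return ret;
}
/* after */
static int example_status(void)
{
	return 0;
}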
Signed-off-by: Julia Lawall <Julia.Lawall@lip6.fr>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Kernel API for classic BPF socket filters is:
sk_unattached_filter_create() - validate classic BPF, convert, JIT
SK_RUN_FILTER() - run it
sk_unattached_filter_destroy() - destroy socket filter
Clean up the internal BPF kernel API as follows:
sk_filter_select_runtime() - final step of internal BPF creation.
Try to JIT internal BPF program, if JIT is not available select interpreter
SK_RUN_FILTER() - run it
sk_filter_free() - free internal BPF program
Disallow direct calls to BPF interpreter. Execution of the BPF program should
be done with SK_RUN_FILTER() macro.
Example of internal BPF create, run, destroy:
struct sk_filter *fp;
fp = kzalloc(sk_filter_size(prog_len), GFP_KERNEL);
memcpy(fp->insnsi, prog, prog_len * sizeof(fp->insnsi[0]));
fp->len = prog_len;
sk_filter_select_runtime(fp);
SK_RUN_FILTER(fp, ctx);
sk_filter_free(fp);
Sockets, seccomp, the testsuite and tracing use different ways to populate
sk_filter, so the first steps of program creation are not common.
Signed-off-by: Alexei Starovoitov <ast@plumgrid.com>
Acked-by: Daniel Borkmann <dborkman@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Govindarajulu Varadarajan says:
====================
enic: Add adaptive coalescing interrupt support
This series adds support for adaptive interrupt coalescing and updates
the enic MAINTAINERS entry.
v1->v2:
* Add commit log
* do vnic_intr_coalescing_timer_set only while enabling intr
* use ktime_get instead of hrtimer
* make enic_set_rx_coal_setting return type void
* change func name enic_apply_int_moderation to enic_calc_int_moderation
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Cc: Sujith Sankar <ssujith@cisco.com>
Cc: Christian Benvenuti <benve@cisco.com>
Cc: Neel Patel <neepatel@cisco.com>
Signed-off-by: Govindarajulu Varadarajan <_govind@gmx.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This patch adds support for adaptive interrupt coalescing.
For small packets at a low packet rate, we can decrease the coalescing
interrupt timer dynamically, which decreases latency. This however increases
the cpu utilization. Based on testing with different coalescing interrupt
values and packet rates we came up with a table (mod_table) of rx_rate and
coalescing interrupt values where we get low latency without a significant
increase in cpu usage. The mod_table table stores the coalescing timer
percentage value for different throughputs.
Function enic_calc_int_moderation() calculates the desired coalescing
interrupt timer value. This function is called from the driver rx napi_poll.
The actual value is set by enic_set_int_moderation(), which is called when
napi_poll is complete, i.e. when we unmask the rx interrupt.
Adaptive coalescing is supported only when the driver is using MSI-X
interrupts, because the interrupt is not shared.
Struct mod_range is used to store only the default adaptive coalescing
interrupt values.
The adaptive coalescing interrupt value is calculated by
timer = range_start + ((rx_coal->range_end - range_start) *
mod_table[index].range_percent / 100);
rx_coal->range_end is the rx-usecs-high value set using ethtool.
range_start is rx-usecs-low (set using ethtool) if rx_small_pkt_bytes_cnt is
greater than 2 * rx_large_pkt_bytes_cnt, i.e. small packets are dominant.
Otherwise it is rx-usecs-low + 3.
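A hedged sketch of that calculation (the function and its parameters are
assumptions made for illustration; only range_end and range_percent come
from the text above):
static u32 example_calc_coal_timer(u32 range_start, u32 range_end,
				   u32 range_percent)
{
	/* interpolate between the ethtool rx-usecs-low/high bounds using
	 * the percentage picked from mod_table for the current rx rate */
	return range_start + ((range_end - range_start) * range_percent / 100);
}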
Cc: Christian Benvenuti <benve@cisco.com>
Cc: Neel Patel <neepatel@cisco.com>
Signed-off-by: Sujith Sankar <ssujith@cisco.com>
Signed-off-by: Govindarajulu Varadarajan <_govind@gmx.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
To be future-proof and for better readability the time comparisons are modified
to use time_before() instead of plain, error-prone math.
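For illustration, a minimal sketch of the wrap-safe form (the helper is
hypothetical; time_before() comes from <linux/jiffies.h>):
#include <linux/jiffies.h>

static bool example_not_expired(unsigned long deadline)
{
	/* plain "jiffies < deadline" misbehaves when the jiffies counter
	 * wraps; time_before() compares correctly across the wrap */
	return time_before(jiffies, deadline);
}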
Signed-off-by: Manuel Schölling <manuel.schoelling@gmx.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This patch moves data allocated using kzalloc to managed data allocated
using devm_kzalloc and removes the now-unnecessary kfrees in the probe and
remove functions. An explicit linux/device.h include is added to make sure
the devm_*() routine declarations are unambiguously available.
The following Coccinelle semantic patch was used for making the change:
@platform@
identifier p, probefn, removefn;
@@
struct platform_driver p = {
.probe = probefn,
.remove = removefn,
};
@prb@
identifier platform.probefn, pdev;
expression e, e1, e2;
@@
probefn(struct platform_device *pdev, ...) {
<+...
- e = kzalloc(e1, e2)
+ e = devm_kzalloc(&pdev->dev, e1, e2)
...
?-kfree(e);
...+>
}
@rem depends on prb@
identifier platform.removefn;
expression e;
@@
removefn(...) {
<...
- kfree(e);
...>
}
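A hypothetical before/after illustration of the resulting driver code
(driver and struct names are made up):
struct example_priv {
	int dummy;	/* placeholder member */
};

static int example_probe(struct platform_device *pdev)
{
	struct example_priv *priv;

	/* before: priv = kzalloc(sizeof(*priv), GFP_KERNEL); with a matching
	 * kfree() in the remove function */
	priv = devm_kzalloc(&pdev->dev, sizeof(*priv), GFP_KERNEL);
	if (!priv)
		return -ENOMEM;

	platform_set_drvdata(pdev, priv);
	return 0;
	/* no kfree() needed: the managed allocation is released automatically
	 * when the device is unbound */
}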
Signed-off-by: Himangi Saraogi <himangi774@gmail.com>
Acked-by: Julia Lawall <julia.lawall@lip6.fr>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Remove double checks, convert printk to pr_warn, and move the call to
pr_warn to the first check. The simplified version of the Coccinelle
semantic patch that finds this issue is as follows:
// <smpl>
@@
expression E; identifier pr; expression list es;
@@
while(...){
...
- if (E) break;
+ if (E){
+ pr(es);
+ break;
+ }
...
}
- if(E) pr(es);
// </smpl>
Tested by compilation only.
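A hypothetical sketch of the pattern before and after the change (the
status register access, flag and retry count are made up):
#define EXAMPLE_ERR 0x1
static u32 example_read_status(void) { return 0; }	/* stub for illustration */

static void example_wait(void)
{
	int retries = 100;
	u32 status = 0;

	/* before, the condition was tested twice:
	 *   while (retries--) { status = example_read_status();
	 *                       if (status & EXAMPLE_ERR) break; }
	 *   if (status & EXAMPLE_ERR) printk("example: error\n");
	 * after, the warning is issued at the point of the break: */
	while (retries--) {
		status = example_read_status();
		if (status & EXAMPLE_ERR) {
			pr_warn("example: error status 0x%x\n", status);
			break;
		}
	}
}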
Signed-off-by: Peter Senna Tschudin <peter.senna@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
entries is always greater than rt_max_size here, since if entries is less
than rt_max_size, the fib6_run_gc function will be skipped
Signed-off-by: Li RongQing <roy.qing.li@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
tun_do_read always adds current thread to wait queue, even if a packet
is ready to read. This is inefficient because both sleeper and waker
want to acquire the wait queue spin lock when packet rate is high.
We restructure the read function and use common kernel networking
routines to handle receive, sleep and wakeup. With the change,
available packets are checked first before the reading thread is added
to the wait queue.
Ran performance tests with the following configuration:
- my packet generator -> tap1 -> br0 -> tap0 -> my packet consumer
- sender pinned to one core and receiver pinned to another core
- sender sends small UDP packets (64 bytes total) as fast as it can
- Sandy Bridge cores
- throughput numbers are receiver-side goodput
The results are
baseline: 731k pkts/sec, cpu utilization at 1.50 cpus
changed: 783k pkts/sec, cpu utilization at 1.53 cpus
The performance difference is largely determined by packet rate and
inter-cpu communication cost. For example, if the sender and
receiver are pinned to different cpu sockets, the results are
baseline: 558k pkts/sec, cpu utilization at 1.71 cpus
changed: 690k pkts/sec, cpu utilization at 1.67 cpus
Co-authored-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Xi Wang <xii@google.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Enable the module alias hookup to allow tunnel modules to be autoloaded on demand.
This is in line with how most other netdev kinds work, and will allow userspace
to create tunnels without having CAP_SYS_MODULE.
Signed-off-by: Tom Gundersen <teg@jklm.no>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The commit 6c167f582ea9 ("i40e: Refactor and cleanup i40e_open(),
adding i40e_vsi_open()") introduced a new function i40e_vsi_open()
with a regression caused by a typo. Due to that commit, the wrong
error code would be passed to i40e_open(). Fix this error in
i40e_vsi_open() by turning the macro into a negative value so that
i40e_open() can return the pertinent error code correctly.
Fixes: 6c167f582ea9 ("i40e: Refactor and cleanup i40e_open(), adding i40e_vsi_open()")
Signed-off-by: Jean Sacren <sakiwit@gmail.com>
Tested-by: Kavindya Deegala <kavindya.s.deegala@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
|
|
As a result of the deprecation of the MSI-X/MSI enablement functions
pci_enable_msix() and pci_enable_msi_block(), all drivers using these
two interfaces need to be updated to use the new pci_enable_msi_range()
or pci_enable_msi_exact() and pci_enable_msix_range() or
pci_enable_msix_exact() interfaces.
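A hedged sketch of the MSI-X side of the conversion (the wrapper function
and its parameters are assumptions; pci_enable_msix_range() returns the
number of vectors allocated or a negative errno):
static int example_setup_msix(struct pci_dev *pdev,
			      struct msix_entry *entries,
			      int min_vecs, int max_vecs)
{
	int nvec;

	/* request between min_vecs and max_vecs vectors in a single call */
	nvec = pci_enable_msix_range(pdev, entries, min_vecs, max_vecs);
	if (nvec < 0)
		return nvec;	/* not even min_vecs could be allocated */

	/* nvec holds the number of vectors actually granted */
	return nvec;
}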
Cc: Jesse Brandeburg <jesse.brandeburg@intel.com>
Cc: linux-pci@vger.kernel.org
Signed-off-by: Alexander Gordeev <agordeev@redhat.com>
Tested-by: Sibai Li <sibai.li@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
|
|
The call to i40e_alloc_vfs needs to be wrapped in CONFIG_PCI_IOV because
the function itself is wrapped in the same conditional compile block.
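A minimal illustration of the guard (the caller and the i40e_alloc_vfs()
arguments shown are assumptions for this sketch):
#ifdef CONFIG_PCI_IOV
static void example_enable_sriov(struct i40e_pf *pf, u16 num_vfs)
{
	/* i40e_alloc_vfs() is only compiled when CONFIG_PCI_IOV is set,
	 * so its call site must be guarded the same way */
	i40e_alloc_vfs(pf, num_vfs);
}
#endif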
Change-ID: I663c5f1b85e5cfba0b36da8966f7db1a034f408b
Signed-off-by: Greg Rose <gregory.v.rose@intel.com>
Tested-by: Kavindya Deegala <kavindya.s.deegala@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
|
|
The previous removal of the PTP Tx work functionality was
incomplete, as noted by Jake Keller. Completing that removal allows
us to rely on the Tx timesync interrupt.
CC: Jacob Keller <jacob.e.keller@intel.com>
Change-ID: Id4faaf275a3688053ebbf07bef08072f9fd11aa9
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Tested-by: Kavindya Deegala <kavindya.s.deegala@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
|
|
When VFs are assigned to active VMs and we disable SR-IOV out from under them,
bad things happen. Currently, the VM does not crash, but the VFs lose all
resources and have no way to get them back.
Add an additional check for when the user is disabling through sysfs, and add a
comment to clarify why we check twice.
Change-ID: Icad78eef516e4e1e4a87874d59132bc3baa058d4
Signed-off-by: Mitch Williams <mitch.a.williams@intel.com>
Tested-by: Sibai Li <sibai.li@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
|
|
Base the queue stats length on the queue stats struct rather than
assuming it is 2 fields. This is in prep for adding data fields
which are configuration dependent.
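One common way to express this (a hedged illustration, not necessarily the
exact macro used by the driver):
/* number of u64 counters per queue, derived from the struct layout rather
 * than a hard-coded 2 */
#define EXAMPLE_QUEUE_STATS_LEN \
	(sizeof(struct i40e_queue_stats) / sizeof(u64))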
Change-ID: I937f471f389d2e0f8cec733960c5d9a06b14f3ec
Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>
Tested-by: Kavindya Deegala <kavindya.s.deegala@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
|
|
For all of our supported kernels, ethtool allows us to directly control
adaptive ITR instead of just faking it with an ITR value. Support this
capability so that the user knows explicitly when ITR is being controlled
dynamically. Suggested by Ben Hutchings.
CC: Ben Hutchings <ben@decadent.org.uk>
Change-ID: Iae6b79c5db767a63d22ecd9a9c24acaff02a096e
Signed-off-by: Mitch Williams <mitch.a.williams@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
|
|
Ethtool consistently reports 0 values for our ITR settings because
we never actually set them. Fix this by initializing them to the
specified default values.
Change-ID: I2832406a66f7140f2b1230945d6ff6cbf77467c8
Signed-off-by: Mitch Williams <mitch.a.williams@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
|
|
Const-ify the ethtool_ops structure, as it is extremely unlikely to
change at runtime. Suggested by Ben Hutchings.
CC: Ben Hutchings <ben@decadent.org.uk>
Change-ID: I1ccb1b7c3ea801cc934447599a35910e7c93d321
Signed-off-by: Mitch Williams <mitch.a.williams@intel.com>
Tested-by: Sibai Li <sibai.li@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
|
|
Don't tell ethtool that the VF can do 10GbaseT, when it really has no
idea what its link speed is. Set the supported values to 0 instead.
Suggested by Ben Hutchings.
CC: Ben Hutchings <ben@decadent.org.uk>
Change-ID: Iceb0d8af68fe5d8dc13224366979ba701ba89c39
Signed-off-by: Mitch Williams <mitch.a.williams@intel.com>
Tested-by: Sibai Li <sibai.li@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
|
|
Use a macro parameter for ethtool stats instead of just assuming
that a valid netdev variable exists. Suggested by Ben Hutchings.
CC: Ben Hutchings <ben@decadent.org.uk>
Change-ID: I66681698573c1549f95fdea310149d8a7e96a60f
Signed-off-by: Mitch Williams <mitch.a.williams@intel.com>
Tested-by: Sibai Li <sibai.li@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
|
|
On some architectures, this header must be explicitly included.
Change-ID: I4bc2eb0531956a7b676489f79d347d55cfe12421
Signed-off-by: Mitch Williams <mitch.a.williams@intel.com>
Tested-by: Kavindya Deegala <kavindya.s.deegala@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
|
|
Add the appropriate GNU General Public License header and
update the copyright year to 2014.
Change-ID: I769dd2d37d70350afd0c8727ae2859c0fd340361
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Signed-off-by: Catherine Sullivan <catherine.sullivan@intel.com>
Tested-by: Sibai Li <sibai.li@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
|
|
Remove the defines for PCI bus info that are never used.
Signed-off-by: Catherine Sullivan <catherine.sullivan@intel.com>
Tested-by: Sibai Li <sibai.li@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
|
|
Minor changes to the AdminQ interface to bring it up-to-date.
Change-ID: Ie31a4cc4911b2d9d3b7f9af2e56fb0ae674f6345
Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>
Signed-off-by: Kevin Scott <kevin.c.scott@intel.com>
Tested-by: Sibai Li <sibai.li@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
|
|
git://gitorious.org/linux-can/linux-can-next
Marc Kleine-Budde says:
====================
pull-request: can-next 2014-05-19
this is a pull request of 13 patches for net-next/master.
A patch by Dan Carpenter fixes a coccinelle warning in the mcp251x
driver. Jean Delvare contributes three patches to tighten the
Kconfig dependencies for some drivers. Then come three patches by Pavel
Machek that improve the c_can driver support on the socfpga platform.
Sergei Shtylyov's patch brings support for the Renesas R-Car CAN
controller. Of the four patches by Oliver Hartkopp, the
first cleans up the guard macros in the CAN headers and the other three
improve the EFF frame filtering. Maximilian Schneider's patch adds
support for the GS_USB CAN devices.
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The upper timer_interval limit is arbitrary and much higher
than anything usable in the real world. Reducing it from 15s
to ~4s to make the timer_interval fit in a u32 does not make
much difference. The limit is still outside the practical
bounds.
This eliminates the need for a 64bit timer_interval, fixing a
build error related to 64bit division:
drivers/built-in.o: In function `cdc_ncm_get_coalesce':
ak8975.c:(.text+0x1ac994): undefined reference to `__aeabi_uldivmod'
Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Bjørn Mork <bjorn@mork.no>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Now that all objects are released in the reverse order via the
transaction infrastructure, we can enqueue the release via
call_rcu to save one synchronize_rcu. For small rule-sets loaded
via nft -f, it now takes around 50ms less here.
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
|
|
Instead of caching the original skbuff that contains the netlink
messages, this stores the netlink message sequence number, the
netlink portID and the report flag. This helps to prepare the
introduction of the object release via call_rcu.
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
|
|
Now that all these functions are called from the commit path, we can
pass the context structure to reduce the amount of parameters in all
of the nf_tables_*_notify functions. This patch also removes unneeded
branches to check for skb, nlh and net that should be always set in
the context structure.
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
|
|
Leave the set content in consistent state if we fail to load the
batch. Use the new generic transaction infrastructure to achieve
this.
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
|
|
This patch speeds up rule-set updates and it also provides a way
to revert updates and leave things in consistent state in case that
the batch needs to be aborted.
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
|
|
So nf_tables_updtable() takes only a single parameter.
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
|
|
nf_tables_table_disable() always succeeds, make this function void.
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
|
|
This patch speeds up rule-set updates and it also introduces a way to
revert chain updates if the batch is aborted. The idea is to store the
changes in the transaction to apply that in the commit step.
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
|
|
Add new routines to encapsulate chain statistics allocation and
replacement.
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
|
|
This patch reworks the nf_tables API so set updates are included in
the same batch that contains rule updates. This speeds up rule-set
updates since we skip a dialog of four messages between kernel and
user-space (two in each direction), from:
1) create the set and send netlink message to the kernel
2) process the response from the kernel that contains the allocated name.
3) add the set elements and send netlink message to the kernel.
4) process the response from the kernel (to check for errors).
To:
1) add the set to the batch.
2) add the set elements to the batch.
3) add the rule that points to the set.
4) send batch to the kernel.
This also introduces an internal set ID (NFTA_SET_ID) that is unique
in the batch so set elements and rules can refer to new sets.
Backward compatibility has been retained only in userspace; this
means that new nft versions can talk to the kernel in both the new
and the old fashion.
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
|
|
The patch adds a message type to the transaction to simplify the
commit and abort routines. Yet another step forward in the
generalisation of the transaction infrastructure.
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
|
|
Move the commit and abort routines to the bottom of the source code
file. This change is required by the follow up patches that add the
set, chain and table transaction support.
This patch is just a cleanup to access several functions without
having to declare their prototypes.
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
|
|
This patch generalises the existing rule transaction infrastructure
so it can be used to handle set, table and chain object transactions
as well. The transaction provides a data area that stores private
information depending on the transaction type.
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
|
|
The new transaction infrastructure updates the family, table and chain
objects in the context structure, so let's deconstify them. While at it,
move the context structure initialization routine to the top of the
source file as it will be also used from the table and chain routines.
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
|