* remove interrupt.h inclusion from netdevice.h -- not needed
* fixup fallout, add interrupt.h and hardirq.h back where needed.
Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Merge master.kernel.org:/pub/scm/linux/kernel/git/davem/net-2.6
Conflicts:
drivers/net/vmxnet3/vmxnet3_ethtool.c
net/core/dev.c
|
|
Commit 747df2258b1b9a2e25929ef496262c339c380009 ('sfc: Always map MCDI
shared memory as uncacheable') introduced a separate mapping for the
MCDI shared memory (MC_TREG_SMEM). This means we can no longer easily
include it in the register dump. Since it is not particularly useful
in debugging, substitute a recognisable dummy value.
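A minimal sketch of the substitution, assuming the dump is assembled in
a word buffer; the marker value and names are illustrative, not the
driver's actual choice:

    #include <linux/types.h>

    /* Fill the MC_TREG_SMEM region of the register dump with a
     * recognisable dummy pattern instead of reading the hardware. */
    static void sketch_dump_smem_placeholder(u32 *buf, size_t n_words)
    {
            size_t i;

            for (i = 0; i < n_words; i++)
                    buf[i] = 0xDEADBEEF;    /* recognisable marker */
    }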
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Merge master.kernel.org:/pub/scm/linux/kernel/git/davem/net-2.6
Conflicts:
drivers/net/bnx2x/bnx2x_ethtool.c
|
|
During self-tests we use efx_process_channel_now() to handle
completion and other events synchronously. This disables interrupts
and NAPI processing for the channel in question, but it may still be
interrupted by another channel. A single socket may receive packets
from multiple net devices or even multiple channels of the same net
device, so this can result in deadlock on a socket lock.
Receiving packets in process context will also result in incorrect
classification by the network cgroup classifier.
Therefore, we must only use efx_process_channel_now() in the offline
loopback tests (which never deliver packets up the stack) and not for
the online interrupt and event tests.
For the interrupt test, there is no reason to process events. We
only care that an interrupt is raised.
For the event test, we want to know whether events have been received,
and there may be many events ahead of the one we inject. Therefore
remove efx_channel::magic_count and instead test whether
efx_channel::eventq_read_ptr advances. This is currently an event
queue index and might wrap around to exactly the same value, resulting
in a false negative. Therefore move the masking to efx_event() and
efx_nic_eventq_read_ack() so that it cannot wrap within the time of
the test.
The event test also tries to diagnose failures by checking whether an
event was delivered without causing an interrupt. Add and use a
helper function that performs only this check.
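A minimal sketch of the read-pointer check used by the event test,
assuming the pointer is kept free-running and masked only where it
indexes the ring; only eventq_read_ptr comes from the text above:

    #include <linux/types.h>

    struct sketch_channel {
            unsigned int eventq_read_ptr;   /* free-running, unmasked */
    };

    /* Event test: has any event been received since the test started?
     * Because the pointer is no longer masked to the ring size here,
     * it cannot wrap back to the same value within the test. */
    static bool sketch_eventq_advanced(const struct sketch_channel *channel,
                                       unsigned int read_ptr_before)
    {
            return channel->eventq_read_ptr != read_ptr_before;
    }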
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
|
|
This is preparation for using the generic netdev features interface,
and should have no effect in itself.
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
|
|
In Falcon we can configure the fill levels of the RX data FIFO which
trigger the generation of pause frames (if enabled), and we have
module parameters for this.
Siena does not allow the levels to be configured (or, if it does, this
is done by the MC firmware and is not configurable by drivers).
So far as I can tell, the module parameters are not used by our
internal scripts and have not been documented (with the exception of
the short parameter descriptions). Therefore, remove them and always
initialise Falcon with the default values.
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
|
|
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
|
|
Implement the ndo_setup_tc() operation with 2 traffic classes.
Current Solarstorm controllers do not implement TX queue priority, but
they do allow queues to be 'paced' with an enforced delay between
packets. Paced and unpaced queues are scheduled in round-robin within
two separate hardware bins (paced queues with a large delay may be
placed into a third bin temporarily, but we won't use that). If there
are queues in both bins, the TX scheduler will alternate between them.
If we make high-priority queues unpaced and best-effort queues paced,
and high-priority queues are mostly empty, a single high-priority queue
can then instantly take 50% of the packet rate regardless of how many
of the best-effort queues have descriptors outstanding.
We do not actually want an enforced delay between packets on best-
effort queues, so we set the pace value to a reserved value that
actually results in a delay of 0.
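A hedged sketch of a two-class ndo_setup_tc() using the netdev TC
helpers of that kernel generation; the queue-to-class mapping below is
illustrative, not the driver's actual layout:

    #include <linux/netdevice.h>

    static int sketch_setup_tc(struct net_device *net_dev, u8 num_tc)
    {
            unsigned int tc, per_tc;

            if (num_tc > 2)         /* the pacing trick gives two bins */
                    return -EINVAL;

            if (num_tc == 0) {
                    netdev_reset_tc(net_dev);
                    return 0;
            }

            netdev_set_num_tc(net_dev, num_tc);

            /* Illustrative mapping: split the real TX queues into one
             * consecutive block per traffic class. */
            per_tc = net_dev->real_num_tx_queues / num_tc;
            for (tc = 0; tc < num_tc; tc++)
                    netdev_set_tc_queue(net_dev, tc, per_tc, tc * per_tc);
            return 0;
    }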
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
|
|
Merge git://git.kernel.org/pub/scm/linux/kernel/git/bwh/sfc-next-2.6
|
|
Merge master.kernel.org:/pub/scm/linux/kernel/git/davem/net-2.6
Conflicts:
drivers/net/wireless/ath/ath9k/ar9003_eeprom.c
net/llc/af_llc.c
|
|
If we are using a legacy interrupt, our IRQ may be shared and our
interrupt handler may be called even though interrupts are disabled on
the NIC. When we change ring sizes, we reallocate the event queue and
the interrupt handler may use an invalid pointer when called for
another device's interrupt.
Maintain a legacy_irq_enabled flag and test that at the top of the
interrupt handler. Note that this problem results from the need to
work around broken INT_ISR0 reads, and does not affect the legacy
interrupt handler for Falcon A1.
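A minimal sketch of the guard; legacy_irq_enabled is the flag named
above, everything else is illustrative:

    #include <linux/interrupt.h>

    struct sketch_nic {
            bool legacy_irq_enabled; /* cleared while rings reallocate */
    };

    static irqreturn_t sketch_legacy_interrupt(int irq, void *dev_id)
    {
            struct sketch_nic *efx = dev_id;

            /* On a shared line we may be called for another device's
             * interrupt while ours are disabled; the event queue may
             * be mid-reallocation, so bail out before touching it. */
            if (!efx->legacy_irq_enabled)
                    return IRQ_NONE;

            /* ... normal ISR read and event queue scheduling ... */
            return IRQ_HANDLED;
    }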
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
|
|
Whenever we add DMA descriptors to a TX ring and update the ring
pointer, the TX DMA engine must first read the new DMA descriptors and
then start reading packet data. However, all released Solarflare 10G
controllers have a 'TX push' feature that allows us to reduce latency
by writing the first new DMA descriptor along with the pointer update.
This is only useful when the queue is empty. The hardware should
ignore the pushed descriptor if the queue is not empty, but this check
is buggy, so we must do it in software.
In order to tell whether a TX queue is empty, we need to compare the
previous transmission count (write_count) and completion count
(read_count). However, if we do that every time we update the ring
pointer then read_count may ping-pong between the caches of two CPUs
running the transmission and completion paths for the queue.
Therefore, we split the check for an empty queue between the
completion path and the transmission path (a sketch follows the list):
- Add an empty_read_count field representing a point at which the
completion path saw the TX queue as empty.
- Add an old_write_count field for use on the completion path.
- On the completion path, whenever read_count reaches or passes
old_write_count the TX queue may be empty. We then read
write_count, set empty_read_count if read_count == write_count,
and update old_write_count.
- On the transmission path, we read empty_read_count. If it's set, we
compare it with the value of write_count before the current set of
descriptors was added. If they match, the queue really is empty and
we can use TX push.
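A minimal sketch of this split, using a separate validity flag for
clarity (the real driver may pack validity into the counter itself);
the four counter names come from the text above:

    #include <linux/types.h>

    struct sketch_tx_queue {
            unsigned int write_count;       /* transmission path */
            unsigned int read_count;        /* completion path */
            unsigned int old_write_count;   /* completion path cache */
            unsigned int empty_read_count;  /* set by completion path */
            bool empty_valid;               /* illustrative marker */
    };

    /* Completion path: note the point at which the queue drained. */
    static void sketch_note_if_empty(struct sketch_tx_queue *txq)
    {
            if ((int)(txq->read_count - txq->old_write_count) >= 0) {
                    unsigned int write_count = txq->write_count;

                    if (txq->read_count == write_count) {
                            txq->empty_read_count = write_count;
                            txq->empty_valid = true;
                    }
                    txq->old_write_count = write_count;
            }
    }

    /* Transmission path: the queue is still empty only if nothing has
     * been transmitted since the completion path saw it empty. */
    static bool sketch_may_push(const struct sketch_tx_queue *txq,
                                unsigned int write_count_before)
    {
            return txq->empty_valid &&
                   txq->empty_read_count == write_count_before;
    }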
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
|
|
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Make local functions and variables static. Rearrange the string
table code to put it where it gets used.
Signed-off-by: Stephen Hemminger <shemminger@vyatta.com>
Acked-by: Ben Hutchings <bhutchings@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Change "return (EXPR);" to "return EXPR;"
return is not a function, parentheses are not required.
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
For backward compatibility, add it at the end.
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This requires some reorganisation of channel setup and teardown to
ensure that we can always roll back a failed change.
Based on work by Steve Hodgson <shodgson@solarflare.com>
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
- Allow the ring size to be specified in non-power-of-two sizes (for
instance, to limit the number of receive buffers).
- Automatically size the event queue.
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
In preparation for changes to the way channels and queue structures
are allocated, revise the macros and functions used to look up and
iterate over them (a sketch follows the list):
- Replace efx_for_each_tx_queue() with iteration over channels then TX
queues
- Replace efx_for_each_rx_queue() with iteration over channels then RX
queues (with one exception, shortly to be removed)
- Introduce efx_get_{channel,rx_queue,tx_queue}() functions to look up
channels and queues by index
- Introduce efx_channel_get_{rx,tx}_queue() functions to look up a
channel's queues
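A hedged sketch of the new lookup pattern; the function shapes follow
the text above (cf. efx_get_channel(), efx_channel_get_tx_queue()),
while the data layout is an illustrative stand-in:

    struct sketch_tx_queue {
            unsigned int ptr;
    };

    struct sketch_channel {
            struct sketch_tx_queue tx_queue[2];
    };

    struct sketch_nic {
            struct sketch_channel *channel[16];
    };

    static struct sketch_channel *
    sketch_get_channel(struct sketch_nic *efx, unsigned int index)
    {
            return efx->channel[index];
    }

    static struct sketch_tx_queue *
    sketch_channel_get_tx_queue(struct sketch_channel *channel,
                                unsigned int type)
    {
            return &channel->tx_queue[type];
    }

    /* Iteration now nests: for each channel, then for each of that
     * channel's queues, rather than iterating all queues directly. */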
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Currently we allocate DMA descriptor rings and event rings using
pci_alloc_consistent() which selects non-blocking behaviour from the
page allocator (GFP_ATOMIC). This is unnecessary, and since we
currently allocate a single contiguous block for each ring (up to 32
pages!) these allocations are likely to fail if there is any
significant memory pressure. Use dma_alloc_coherent() and GFP_KERNEL
instead.
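A minimal sketch of the change; dma_alloc_coherent() with GFP_KERNEL
may sleep, so large contiguous ring allocations succeed far more often
under memory pressure than with pci_alloc_consistent(), which implies
GFP_ATOMIC. Names other than the DMA API calls are illustrative:

    #include <linux/dma-mapping.h>

    struct sketch_ring {
            void *addr;             /* CPU virtual address of the ring */
            dma_addr_t dma_addr;    /* bus address for the NIC */
            size_t len;
    };

    static int sketch_alloc_ring(struct device *dev,
                                 struct sketch_ring *ring, size_t len)
    {
            ring->len = len;
            /* GFP_KERNEL: process context, blocking is allowed. */
            ring->addr = dma_alloc_coherent(dev, len, &ring->dma_addr,
                                            GFP_KERNEL);
            return ring->addr ? 0 : -ENOMEM;
    }

    static void sketch_free_ring(struct device *dev,
                                 struct sketch_ring *ring)
    {
            dma_free_coherent(dev, ring->len, ring->addr, ring->dma_addr);
    }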
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Allow ethtool to query the number of RX rings, the fields used in RX
flow hashing and the hash indirection table.
Allow ethtool to update the RX flow hash indirection table.
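To illustrate what the indirection table does (not the driver's actual
code): the packet's flow hash indexes a table whose entries select an
RX queue, so spreading can be rebalanced by rewriting the table:

    #include <linux/types.h>

    static unsigned int sketch_rxq_from_hash(const u32 *indir_table,
                                             unsigned int table_entries,
                                             u32 flow_hash)
    {
            return indir_table[flow_hash % table_entries];
    }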
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Replace EFX_ERR() with netif_err(), EFX_INFO() with netif_info(),
EFX_LOG() with netif_dbg() and EFX_TRACE() and EFX_REGDUMP() with
netif_vdbg().
Replace EFX_ERR_RL(), EFX_INFO_RL() and EFX_LOG_RL() with explicit
calls to net_ratelimit().
Implement the ethtool operations to get and set message level flags,
and add a 'debug' module parameter for the initial value.
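A hedged sketch of the message-level plumbing; the 'debug' parameter
name comes from the text above, while the default mask and the private
structure are illustrative:

    #include <linux/module.h>
    #include <linux/netdevice.h>

    /* Initial value of each device's msg_enable bitmap (assumed). */
    static unsigned int debug = NETIF_MSG_DRV | NETIF_MSG_PROBE |
                                NETIF_MSG_LINK;
    module_param(debug, uint, 0);
    MODULE_PARM_DESC(debug, "Initial message level (NETIF_MSG_* bitmap)");

    struct sketch_priv {
            u32 msg_enable; /* consulted by netif_err(), netif_dbg() ... */
    };

    static u32 sketch_get_msglevel(struct net_device *net_dev)
    {
            struct sketch_priv *priv = netdev_priv(net_dev);

            return priv->msg_enable;
    }

    static void sketch_set_msglevel(struct net_device *net_dev, u32 value)
    {
            struct sketch_priv *priv = netdev_priv(net_dev);

            priv->msg_enable = value;
    }

    /* With this in place, e.g. netif_err(priv, drv, net_dev, "...")
     * logs only when NETIF_MSG_DRV is set in msg_enable. */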
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
Acked-by: Jeff Garzik <jgarzik@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Ensure that efx_fast_push_rx_descriptors() only runs from
efx_process_channel() [NAPI], or after napi_disable() has been
executed.
Reimplement the slow fill by sending an event to the channel, so that
NAPI runs, and hanging the subsequent fast fill off the event handler.
Replace the sfc_refill workqueue and delayed work items with a timer.
We do not need to stop this timer in efx_flush_all() because it is
always safe to send the event; receiving it will be delayed until NAPI
is restarted.
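A minimal sketch of the timer-driven slow fill; the callback signature
matches the timer API of that era, while the event helper and the
100ms interval are illustrative:

    #include <linux/timer.h>
    #include <linux/jiffies.h>

    struct sketch_rx_queue {
            struct timer_list slow_fill;
    };

    /* Illustrative: pushes a driver-generated event onto the channel's
     * event queue so the refill happens in NAPI context. */
    static void sketch_generate_fill_event(struct sketch_rx_queue *rx_queue);

    /* Timer callback: do not refill here, just generate the event.
     * It is always safe to send; if NAPI is disabled the event simply
     * waits in the queue until NAPI is restarted. */
    static void sketch_rx_slow_fill(unsigned long context)
    {
            struct sketch_rx_queue *rx_queue =
                    (struct sketch_rx_queue *)context;

            sketch_generate_fill_event(rx_queue);
    }

    /* Armed from the fast-fill path when allocation falls short: */
    static void sketch_schedule_slow_fill(struct sketch_rx_queue *rx_queue)
    {
            mod_timer(&rx_queue->slow_fill,
                      jiffies + msecs_to_jiffies(100));
    }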
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Formerly, efx_test_eventq_irq() assumed it was the only user of
driver-generated events. Allow it to interoperate with other users.
We can create more than 16 channels, so align event codes to a
multiple of 256, not 16.
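A small sketch of the spacing; the macro shape and the code values are
illustrative:

    /* Driver-generated event codes: one 256-entry range per channel,
     * so codes cannot collide even with more than 16 channels. */
    #define SKETCH_CHANNEL_MAGIC(channel_nr, code) \
            ((channel_nr) * 256 + (code))

    enum {
            SKETCH_CODE_TEST = 1,   /* event self-test */
            SKETCH_CODE_FILL = 2,   /* RX slow fill */
    };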
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Under certain conditions a PHY may backpressure Falcon B0
in such a way that flushes time out. In normal circumstances
the PHY poller would fix the PHY, and the flush could complete.
But efx_nic_flush_queues() is always called after efx_stop_all(),
so the poller has been stopped. Even if this weren't the case,
how long would we have to wait for the poller to fix this? And
several callers of efx_nic_flush_queues() are about to reset
the device anyway - so we don't need to do anything.
Work around this bug by scheduling a reset. Ensure that the
MAC is never rewired back into the datapath before the reset
runs (we already ignore all RX events anyway).
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Create a core TX queue and 2 hardware TX queues for each channel.
If separate_tx_channels is set, create equal numbers of RX and TX
channels instead.
Rewrite the channel and queue iteration macros accordingly.
Eliminate efx_channel::used_flags as redundant.
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Currently TX completions do not count towards the NAPI budget. This
means a continuous stream of TX completions can cause the polling
function to loop indefinitely with scheduling disabled. To avoid
this, follow the common practice of reporting the budget spent after
processing one ring-full of TX completions.
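A minimal sketch of the reporting; all names are illustrative:

    #include <linux/netdevice.h>

    struct sketch_channel {
            struct napi_struct napi;
    };

    /* Illustrative event loop: returns RX packets processed, or the
     * full budget once a ring-full of TX completions has been handled,
     * so a pure completion stream cannot pin the CPU in poll. */
    static int sketch_process_events(struct sketch_channel *channel,
                                     int budget);

    static int sketch_poll(struct napi_struct *napi, int budget)
    {
            struct sketch_channel *channel =
                    container_of(napi, struct sketch_channel, napi);
            int spent = sketch_process_events(channel, budget);

            if (spent < budget) {
                    napi_complete(napi);
                    /* ... re-enable event queue interrupts ... */
            }
            return spent;
    }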
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Parity errors in different blocks of SRAM may set one of two different
interrupt flags.
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Siena has two problems with legacy interrupts:
1. There is no synchronisation between the ISR read completion
and the interrupt deassert message.
2. A downstream read at the "wrong" moment can return 0, and
suppress generating the next interrupt.
Falcon should suffer from both of these, and it appears it does.
Enable EFX_WORKAROUND_15783 on Falcon as well.
Also, when we see queues == 0, ensure we always schedule or rearm
every event queue.
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
'Fatal' errors set an interrupt flag associated with a specific event
queue; only read the syndrome vector if we see that queue's flag set
(legacy interrupts) or in the interrupt handler for that queue (MSI).
Do not ignore an interrupt if the fatal error flag is set but specific
error flags are all zero. Even if we don't schedule a reset, we must
respect the queue mask and rearm the appropriate event queues.
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Siena has a separate SRAM bank for each port. On single-port boards
these can be merged together, so each port has an interrupt flag for
parity errors in the other port's SRAM. Currently we do not enable
such merging and should mask this interrupt source.
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
In particular, the comment about EVQ_RPTR_REG is based on inconsistent
preliminary hardware documentation, though the following code was
fixed long before release.
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This hardware watchdog can misfire, so it does more harm than good.
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This driver has been mostly rewritten since Michael Brown's initial
work, so swap the order of the authors.
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This integrates support for the SFC9000 family of 10G Ethernet
controllers and LAN-on-motherboard chips, starting with the SFL9021
'Siena' and SFC9020 'Bethpage'.
Credit for this code is largely due to my colleagues at Solarflare:
Guido Barzini
Steve Hodgson
Kieran Mansley
Matthew Slattery
Neil Turton
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|