Pull AFS updates from Al Viro:
"The AFS series posted by dhowells depended upon lookup_one_len()
rework; now that prereq is in the mainline, that series had been
rebased on top of it and got some exposure and testing..."
* 'afs-dh' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs:
afs: Do better accretion of small writes on newly created content
afs: Add stats for data transfer operations
afs: Trace protocol errors
afs: Locally edit directory data for mkdir/create/unlink/...
afs: Adjust the directory XDR structures
afs: Split the directory content defs into a header
afs: Fix directory handling
afs: Split the dynroot stuff out and give it its own ops tables
afs: Keep track of invalid-before version for dentry coherency
afs: Rearrange status mapping
afs: Make it possible to get the data version in readpage
afs: Init inode before accessing cache
afs: Introduce a statistics proc file
afs: Dump bad status record
afs: Implement @cell substitution handling
afs: Implement @sys substitution handling
afs: Prospectively look up extra files when doing a single lookup
afs: Don't over-increment the cell usage count when pinning it
afs: Fix checker warnings
vfs: Remove the const from dir_context::actor
|
|
Remove the address_space ->tree_lock and use the xa_lock newly added to
the radix_tree_root. Rename the address_space ->page_tree to ->i_pages,
since we don't really care that it's a tree.
[willy@infradead.org: fix nds32, fs/dax.c]
Link: http://lkml.kernel.org/r/20180406145415.GB20605@bombadil.infradead.org
Link: http://lkml.kernel.org/r/20180313132639.17387-9-willy@infradead.org
Signed-off-by: Matthew Wilcox <mawilcox@microsoft.com>
Acked-by: Jeff Layton <jlayton@redhat.com>
Cc: Darrick J. Wong <darrick.wong@oracle.com>
Cc: Dave Chinner <david@fromorbit.com>
Cc: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
Processes like ld that do lots of small writes that aren't necessarily
contiguous result in a lot of small StoreData operations to the server. The
idea is that if someone else changes the data on the server, we only write
our changes over that and not the space in between. Further, we don't want
to write back empty space if we can avoid it, to make it easier for the
server to do sparse files.
However, making lots of tiny RPC ops is a lot less efficient for the server
than one big one because each op requires allocation of resources and the
taking of locks, so we want to compromise a bit.
Reduce the load by the following:
(1) If a file is just created locally or has just been truncated with
O_TRUNC locally, allow subsequent writes to the file to be merged with
intervening space if that space doesn't cross an entire intervening
page.
(2) Don't flush the file on ->flush() but rather on ->release() if the
file was open for writing.
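As a rough sketch of the merge decision in (1) — function and parameter
names are illustrative, not the actual kAFS code, and offsets are byte
positions within a single page, which sidesteps the cross-page case:
	static bool afs_may_merge_write(bool new_content,
					unsigned int dirty_from, unsigned int dirty_to,
					unsigned int from, unsigned int to)
	{
		/* Overlapping or contiguous ranges can always be merged. */
		if (from <= dirty_to && to >= dirty_from)
			return true;
		/* On locally created or O_TRUNC'd content the gap is known
		 * to be zeroes, so merging across it within the page is
		 * safe. */
		return new_content;
	}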
Just linking vmlinux.o without this patch, /proc/fs/afs/stats shows:
file-wr : n=441 nb=513581204
and after the patch:
file-wr : n=62 nb=513668555
That's 379 fewer StoreData RPC operations at the expense of an extra ~87K
being written.
Signed-off-by: David Howells <dhowells@redhat.com>
|
|
Add statistics to /proc/fs/afs/stats for data transfer RPC operations. New
lines are added that look like:
file-rd : n=55794 nb=10252282150
file-wr : n=9789 nb=3247763645
where n= indicates the number of ops completed and nb= indicates the number
of bytes successfully transferred. file-rd is the counts for read/fetch
operations and file-wr the counts for write/store operations.
Note that directory and symlink downloading are included in the file-rd
stats at the moment.
Signed-off-by: David Howells <dhowells@redhat.com>
|
|
Trace protocol errors detected in afs.
Signed-off-by: David Howells <dhowells@redhat.com>
|
|
Locally edit the contents of an AFS directory upon a successful inode
operation that modifies that directory (such as mkdir, create and unlink)
so that we can avoid the current practice of re-downloading the directory
after each change.
This is viable provided that the directory version number we get back from
the modifying RPC op is exactly incremented by 1 from what we had
previously. The data in the directory contents is in a defined format that
we have to parse locally to perform lookups and readdir, so modifying isn't
a problem.
If the edit fails, we just clear the VALID flag on the directory and it
will be reloaded next time it is needed.
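A minimal sketch of that version check, assuming a hypothetical
afs_edit_dir() helper and a directory-valid flag name (the real code splits
this across the individual operations):
	static void afs_update_dir(struct afs_vnode *dvnode,
				   u64 old_version, u64 new_version)
	{
		if (new_version == old_version + 1 && afs_edit_dir(dvnode))
			return;		/* local copy brought up to date */
		/* Version jumped or the edit failed: force a re-download. */
		clear_bit(AFS_VNODE_DIR_VALID, &dvnode->flags);
	}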
Signed-off-by: David Howells <dhowells@redhat.com>
|
|
Adjust the AFS directory XDR structures in a number of superficial ways:
(1) Rename them to all begin afs_xdr_.
(2) Use u8 instead of uint8_t.
(3) Mark the structures as __packed so the compiler doesn't insert padding
between the fields.
(4) Rename the hdr member of afs_xdr_dir_block to meta.
(5) Rename the pagehdr member of afs_xdr_dir_block to hdr.
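For illustration only (the field names below are made up, not the real
afs_xdr_ definitions): marking an on-the-wire structure __packed keeps the
in-memory layout identical to the wire format, byte for byte:
	struct afs_xdr_example {
		u8	tag;		/* without __packed, 3 bytes of	 */
		__be32	vnode;		/* padding would be inserted here */
		__be16	next_entry;
	} __packed;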
Signed-off-by: David Howells <dhowells@redhat.com>
|
|
Split the directory content definitions into a header file so that they can
be used by multiple .c files.
Signed-off-by: David Howells <dhowells@redhat.com>
|
|
AFS directories are structured blobs that are downloaded just like files
and then parsed by the lookup and readdir code and, as such, are currently
handled in the pagecache like any other file, with the entire directory
content being thrown away each time the directory changes.
However, since the blob is a known structure and since the data version
counter on a directory increases by exactly one for each change committed
to that directory, we can actually edit the directory locally rather than
fetching it from the server after each locally-induced change.
What we can't do, though, is mix data from the server and data from the
client since the server is technically at liberty to rearrange or compress
a directory if it sees fit, provided it updates the data version number
when it does so and breaks the callback (ie. sends a notification).
Further, lookup with lookup-ahead, readdir and, when it arrives, local
editing are all likely to want to scan the whole of a directory.
So directory handling needs to be improved to maintain the coherency of the
directory blob prior to permitting local directory editing.
To this end:
(1) If any directory page gets discarded, invalidate and reread the entire
directory.
(2) If readpage notes, when fetching a single page, that the version number
has changed, the entire directory is flagged for invalidation.
(3) Read as much of the directory in one go as we can.
Note that this removes local caching of directories in fscache for the
moment as we can't pass the pages to fscache_read_or_alloc_pages() since
page->lru is in use by the LRU.
Signed-off-by: David Howells <dhowells@redhat.com>
|
|
Split the AFS dynamic root stuff out of the main directory handling file
and into its own file as they share little in common.
The dynamic root code also gets its own dentry and inode ops tables.
Signed-off-by: David Howells <dhowells@redhat.com>
|
|
Each afs dentry is tagged with the version that the parent directory was at
last time it was validated and, currently, if this differs, the directory
is scanned and the dentry is refreshed.
However, this leads to an excessive amount of revalidation on directories
that get modified on the client without conflict with another client. We
know there's no conflict because the parent directory's data version number
got incremented by exactly 1 on any create, mkdir, unlink, etc., so we can
trust the current state of the unaffected dentries when we perform a local
directory modification.
Optimise by keeping track of the last version of the parent directory that
was changed outside of the client in the parent directory's vnode and using
that to validate the dentries rather than the current version.
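The comparison this enables is roughly the following (names are
illustrative: the version noted in the dentry when it was last validated
against the directory's invalid-before version):
	static bool afs_dentry_still_valid(u64 noted_version, u64 invalid_before)
	{
		/* Only changes made from outside this client can invalidate
		 * the dentry; purely local edits bump the directory version
		 * but not invalid_before. */
		return noted_version >= invalid_before;
	}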
Signed-off-by: David Howells <dhowells@redhat.com>
|
|
Rearrange the AFSFetchStatus to inode attribute mapping code in a number of
ways:
(1) Use an XDR structure rather than a series of incremented pointer
accesses when decoding an AFSFetchStatus object. This allows
out-of-order decode.
(2) Don't store the if_version value but rather just check it and abort if
it's not something we can handle.
(3) Store the owner and group in the status record as raw values rather
than converting them to kuid/kgid. Do that when they're mapped into
i_uid/i_gid.
(4) Validate the type and abort code up front and abort if they're wrong.
(5) Split the inode attribute setting out into its own function from the
XDR decode of an AFSFetchStatus object. This allows it to be called
from elsewhere too.
(6) Differentiate changes to data from changes to metadata.
(7) Use the split-out attribute mapping function from afs_iget().
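Point (3) as a sketch, using the kernel's make_kuid()/make_kgid() helpers
from <linux/uidgid.h>; the field names passed in are assumptions:
	static void afs_apply_status_owner(struct inode *inode,
					   u32 owner, u32 group)
	{
		/* The wire values stay raw in the status record; conversion
		 * to kuid/kgid happens only when the inode is filled in. */
		inode->i_uid = make_kuid(&init_user_ns, owner);
		inode->i_gid = make_kgid(&init_user_ns, group);
	}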
Signed-off-by: David Howells <dhowells@redhat.com>
|
|
Store the data version number indicated by an FS.FetchData op into the read
request structure so that it's accessible by the page reader.
Signed-off-by: David Howells <dhowells@redhat.com>
|
|
We no longer parse symlinks when we get the inode to determine if this
symlink is actually a mountpoint as we detect that by examining the mode
instead (symlinks are always 0777 and mountpoints 0644).
Access the cache after mapping the status so that we don't have to manually
set the inode size now.
Note that this may need adjusting if the disconnected operation is
implemented as the file metadata may have to be obtained from the cache.
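The mode-based mountpoint test mentioned above amounts to something like
this (a sketch; the mode values are the ones quoted in this message):
	static bool afs_looks_like_mountpoint(umode_t mode)
	{
		/* AFS symlinks are created 0777; a "symlink" carrying 0644
		 * permissions is really a mountpoint. */
		return S_ISLNK(mode) && (mode & 0777) == 0644;
	}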
Signed-off-by: David Howells <dhowells@redhat.com>
|
|
Introduce a proc file that displays a bunch of statistics for the AFS
filesystem in the current network namespace.
Signed-off-by: David Howells <dhowells@redhat.com>
|
|
Dump an AFS FileStatus record that is detected as invalid.
Signed-off-by: David Howells <dhowells@redhat.com>
|
|
Implement @cell substitution handling such that if @cell is seen as a name
in a dynamic root mount, then the name of the root cell for that network
namespace will be substituted for @cell during lookup.
The substitution of @cell for the current net namespace is set by writing
the cell name to /proc/fs/afs/rootcell. The value can be obtained by
reading the file.
For example:
# mount -t afs none /kafs -o dyn
# echo grand.central.org >/proc/fs/afs/rootcell
# ls /kafs/@cell
archive/ cvs/ doc/ local/ project/ service/ software/ user/ www/
# cat /proc/fs/afs/rootcell
grand.central.org
Signed-off-by: David Howells <dhowells@redhat.com>
|
|
Implement the AFS feature by which @sys at the end of a pathname component
may be substituted for one of a list of values, typically naming the
operating system. Up to 16 alternatives may be specified and these are
tried in turn until one works. Each network namespace has[*] a separate
independent list.
Upon creation of a new network namespace, the list of values is
initialised[*] to a single OpenAFS-compatible string representing arch type
plus "_linux26". For example, on x86_64, the sysname is "amd64_linux26".
[*] Or will, once network namespace support is finalised in kAFS.
The list may be set by:
# for i in foo bar linux-x86_64; do echo $i; done >/proc/fs/afs/sysname
for which separate writes to the same fd are amalgamated and applied on
close. The LF character may be used as a separator to specify multiple
items in the same write() call.
The list may be cleared by:
# echo >/proc/fs/afs/sysname
and read by:
# cat /proc/fs/afs/sysname
foo
bar
linux-x86_64
Signed-off-by: David Howells <dhowells@redhat.com>
|
|
When afs_lookup() is called, also prospectively look up the next 50 uncached
fids from the same directory and cache the results, rather than just looking
up the one file requested.
This allows us to use the FS.InlineBulkStatus RPC op to increase efficiency
by fetching up to 50 file statuses at a time.
Signed-off-by: David Howells <dhowells@redhat.com>
|
|
AFS cells that are added or set as the workstation cell through /proc are
pinned against removal by setting the AFS_CELL_FL_NO_GC flag on them and
taking a ref. The ref should only be taken if the flag wasn't already set.
Fix this by making the ref acquisition conditional on that.
Without this an assertion failure will occur during module removal
indicating that the refcount is too elevated.
Signed-off-by: David Howells <dhowells@redhat.com>
|
|
Fix warnings raised by checker, including:
(*) Warnings raised by unequal comparison for the purposes of sorting,
where the endianness doesn't matter:
fs/afs/addr_list.c:246:21: warning: restricted __be16 degrades to integer
fs/afs/addr_list.c:246:30: warning: restricted __be16 degrades to integer
fs/afs/addr_list.c:248:21: warning: restricted __be32 degrades to integer
fs/afs/addr_list.c:248:49: warning: restricted __be32 degrades to integer
fs/afs/addr_list.c:283:21: warning: restricted __be16 degrades to integer
fs/afs/addr_list.c:283:30: warning: restricted __be16 degrades to integer
(*) afs_set_cb_interest() is not actually used and can be removed.
(*) afs_cell_gc_delay() should be provided with a sysctl.
(*) afs_cell_destroy() needs to use rcu_access_pointer() to read
cell->vl_addrs.
(*) afs_init_fs_cursor() should be static.
(*) struct afs_vnode::permit_cache needs to be marked __rcu.
(*) afs_server_rcu() needs to use rcu_access_pointer().
(*) afs_destroy_server() should use rcu_access_pointer() on
server->addresses as the server object is no longer accessible.
(*) afs_find_server() casts __be16/__be32 values to int in order to
directly compare them for the purpose of finding a match in a list,
but it should also annotate the cast with __force to avoid checker
warnings (see the sketch at the end of this message).
(*) afs_check_permit() accesses vnode->permit_cache outside of the RCU
readlock, though it doesn't then access the value; the extraneous
access is deleted.
False positives:
(*) Conditional locking around the code in xdr_decode_AFSFetchStatus. This
can be dealt with in a separate patch.
fs/afs/fsclient.c:148:9: warning: context imbalance in 'xdr_decode_AFSFetchStatus' - different lock contexts for basic block
(*) Incorrect handling of seq-retry lock context balance:
fs/afs/inode.c:455:38: warning: context imbalance in 'afs_getattr' - different
lock contexts for basic block
fs/afs/server.c:52:17: warning: context imbalance in 'afs_find_server' - different lock contexts for basic block
fs/afs/server.c:128:17: warning: context imbalance in 'afs_find_server_by_uuid' - different lock contexts for basic block
Errors:
(*) afs_lookup_cell_rcu() needs to break out of the seq-retry loop, not go
round again if it successfully found the workstation cell.
(*) Fix UUID decode in afs_deliver_cb_probe_uuid().
(*) afs_cache_permit() has a missing rcu_read_unlock() before one of the
jumps to the someone_else_changed_it label. Move the unlock to after
the label.
(*) afs_vl_get_addrs_u() is using ntohl() rather than htonl() when
encoding to XDR.
(*) afs_deliver_yfsvl_get_endpoints() is using htonl() rather than ntohl()
when decoding from XDR.
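The __force annotation mentioned above looks roughly like this; the cast
deliberately discards the byte-order annotation because only a stable
ordering is needed, not host-order values (function name is illustrative):
	static int afs_cmp_port(__be16 a, __be16 b)
	{
		return (int)(__force u16)a - (int)(__force u16)b;
	}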
Signed-off-by: David Howells <dhowells@redhat.com>
|
|
Pass the object size in to fscache_acquire_cookie() and
fscache_write_page() rather than the netfs providing a callback by which it
can be received. This makes it easier to update the size of the object
when a new page is written that extends the object.
The current object size is also passed by fscache to the check_aux
function, obviating the need to store it in the aux data.
Signed-off-by: David Howells <dhowells@redhat.com>
Acked-by: Anna Schumaker <anna.schumaker@netapp.com>
Tested-by: Steve Dickson <steved@redhat.com>
|
|
Attach copies of the index key and auxiliary data to the fscache cookie so
that:
(1) The callbacks to the netfs for this stuff can be eliminated. This
can simplify things in the cache as the information is still
available, even after the cache has relinquished the cookie.
(2) Simplifies the locking requirements of accessing the information as we
don't have to worry about the netfs object going away on us.
(3) The cache can do lazy updating of the coherency information on disk.
As long as the cache is flushed before reboot/poweroff, there's no
need to update the coherency info on disk every time it changes.
(4) Cookies can be hashed or put in a tree as the index key is easily
available. This allows:
(a) Checks for duplicate cookies can be made at the top fscache layer
rather than down in the bowels of the cache backend.
(b) Caching can be added to a netfs object that has a cookie if the
cache is brought online after the netfs object is allocated.
A certain amount of space is made in the cookie for inline copies of the
data, but if it won't fit there, extra memory will be allocated for it.
The downside of this is that live cache operation requires more memory.
Signed-off-by: David Howells <dhowells@redhat.com>
Acked-by: Anna Schumaker <anna.schumaker@netapp.com>
Tested-by: Steve Dickson <steved@redhat.com>
|
|
When relinquishing cookies, either due to iget failure or to inode
eviction, retire a cookie if we think the corresponding vnode got deleted
on the server rather than just letting it lie in the cache.
Signed-off-by: David Howells <dhowells@redhat.com>
|
|
AFS vnodes (files) are referenced by a triplet of { volume ID, vnode ID,
uniquifier }. Currently, kafs is only using the vnode ID as the file key
in the volume fscache index and checking the uniquifier on cookie
acquisition against the contents of the auxiliary data stored in the cache.
Unfortunately, this is subject to a race in which an FS.RemoveFile or
FS.RemoveDir op is issued against the server but the local afs inode isn't
torn down and disposed of before another thread issues something like
FS.CreateFile. The latter then gets given the vnode ID that just got
removed, but with a new uniquifier, and a cookie collision occurs in the
cache because the cookie is only keyed on the vnode ID whereas the inode is
keyed on the vnode ID plus the uniquifier.
Fix this by keying the cookie on the uniquifier in addition to the vnode ID
and dropping the uniquifier from the auxiliary data supplied.
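Conceptually the cache key becomes a pair rather than a single ID, something
like the following (layout assumed for illustration only):
	struct afs_vnode_cache_key {
		u32	vnode_id;	/* previously the whole key */
		u32	vnode_unique;	/* uniquifier, now included so a
					 * recycled vnode ID can't collide */
	};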
Signed-off-by: David Howells <dhowells@redhat.com>
|
|
Invalidate any data stored in fscache for a vnode that changes on the
server so that we don't end up with the cache in a bad state locally.
Signed-off-by: David Howells <dhowells@redhat.com>
|
|
Pull networking updates from David Miller:
1) Support offloading wireless authentication to userspace via
NL80211_CMD_EXTERNAL_AUTH, from Srinivas Dasari.
2) A lot of work on network namespace setup/teardown from Kirill Tkhai.
Setup and cleanup of namespaces now all run asynchronously and thus
performance is significantly increased.
3) Add rx/tx timestamping support to mv88e6xxx driver, from Brandon
Streiff.
4) Support zerocopy on RDS sockets, from Sowmini Varadhan.
5) Use denser instruction encoding in x86 eBPF JIT, from Daniel
Borkmann.
6) Support hw offload of vlan filtering in mvpp2 driver, from Maxime
Chevallier.
7) Support grafting of child qdiscs in mlxsw driver, from Nogah
Frankel.
8) Add packet forwarding tests to selftests, from Ido Schimmel.
9) Deal with sub-optimal GSO packets better in BBR congestion control,
from Eric Dumazet.
10) Support 5-tuple hashing in ipv6 multipath routing, from David Ahern.
11) Add path MTU tests to selftests, from Stefano Brivio.
12) Various bits of IPSEC offloading support for mlx5, from Aviad
Yehezkel, Yossi Kuperman, and Saeed Mahameed.
13) Support RSS spreading on ntuple filters in SFC driver, from Edward
Cree.
14) Lots of sockmap work from John Fastabend. Applications can use eBPF
to filter sendmsg and sendpage operations.
15) In-kernel receive TLS support, from Dave Watson.
16) Add XDP support to ixgbevf, this is significant because it should
allow optimized XDP usage in various cloud environments. From Tony
Nguyen.
17) Add new Intel E800 series "ice" ethernet driver, from Anirudh
Venkataramanan et al.
18) IP fragmentation match offload support in nfp driver, from Pieter
Jansen van Vuuren.
19) Support XDP redirect in i40e driver, from Björn Töpel.
20) Add BPF_RAW_TRACEPOINT program type for accessing the arguments of
tracepoints in their raw form, from Alexei Starovoitov.
21) Lots of striding RQ improvements to mlx5 driver with many
performance improvements, from Tariq Toukan.
22) Use rhashtable for inet frag reassembly, from Eric Dumazet.
* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next: (1678 commits)
net: mvneta: improve suspend/resume
net: mvneta: split rxq/txq init and txq deinit into SW and HW parts
ipv6: frags: fix /proc/sys/net/ipv6/ip6frag_low_thresh
net: bgmac: Fix endian access in bgmac_dma_tx_ring_free()
net: bgmac: Correctly annotate register space
route: check sysctl_fib_multipath_use_neigh earlier than hash
fix typo in command value in drivers/net/phy/mdio-bitbang.
sky2: Increase D3 delay to sky2 stops working after suspend
net/mlx5e: Set EQE based as default TX interrupt moderation mode
ibmvnic: Disable irqs before exiting reset from closed state
net: sched: do not emit messages while holding spinlock
vlan: also check phy_driver ts_info for vlan's real device
Bluetooth: Mark expected switch fall-throughs
Bluetooth: Set HCI_QUIRK_SIMULTANEOUS_DISCOVERY for BTUSB_QCA_ROME
Bluetooth: btrsi: remove unused including <linux/version.h>
Bluetooth: hci_bcm: Remove DMI quirk for the MINIX Z83-4
sh_eth: kill useless check in __sh_eth_get_regs()
sh_eth: add sh_eth_cpu_data::no_xdfar flag
ipv6: factorize sk_wmem_alloc updates done by __ip6_append_data()
ipv4: factorize sk_wmem_alloc updates done by __ip_append_data()
...
|
|
In rxrpc and afs, use the debug_ids that are monotonically allocated to
various objects as they're allocated rather than pointers as kernel
pointers are now hashed making them less useful. Further, the debug ids
aren't reused anywhere nearly as quickly.
In addition, allow kernel services that use rxrpc, such as afs, to take
numbers from the rxrpc counter, assign them to their own call struct and
pass them in to rxrpc for both client and service calls so that the trace
lines for each will have the same ID tag.
Signed-off-by: David Howells <dhowells@redhat.com>
|
|
wait_var_event() API
The old wait_on_atomic_t() is going to get removed, use the more
flexible wait_var_event() API instead.
No change in functionality.
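The conversion pattern in general form (struct and field names are
illustrative; the helpers come from <linux/wait_bit.h>):
	struct foo { atomic_t usage; };

	static void foo_put(struct foo *obj)
	{
		if (atomic_dec_and_test(&obj->usage))
			wake_up_var(&obj->usage);
	}

	static void foo_wait_until_unused(struct foo *obj)
	{
		wait_var_event(&obj->usage, atomic_read(&obj->usage) == 0);
	}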
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: David Howells <dhowells@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
|
|
Support the AFS dynamic root which is a pseudo-volume that doesn't connect
to any server resource, but rather is just a root directory that
dynamically creates mountpoint directories where the name of such a
directory is the name of the cell.
Such a mount can be created thus:
mount -t afs none /afs -o dyn
Dynamic root superblocks aren't shared except by bind mounts and
propagation. Cell root volumes can then be mounted by referring to them by
name, e.g.:
ls /afs/grand.central.org/
ls /afs/.grand.central.org/
The kernel will upcall to consult the DNS if the address wasn't supplied
directly.
Signed-off-by: David Howells <dhowells@redhat.com>
|
|
Rearrange afs_select_fileserver() a little to put the use_server chunk
before the next_server chunk so that with the removal of a couple of gotos
the main path through the function is all one sequence.
Signed-off-by: David Howells <dhowells@redhat.com>
|
|
Remove some old unused code.
Signed-off-by: David Howells <dhowells@redhat.com>
|
|
Fix server list handling in the following ways:
(1) In afs_alloc_volume(), remove duplicate server list build code. This
was already done by afs_alloc_server_list() which afs_alloc_volume()
previously called. This just results in twice as many VL RPCs.
(2) In afs_deliver_vl_get_entry_by_name_u(), use the number of server
records indicated by ->nServers in the UVLDB record returned by the
VL.GetEntryByNameU RPC call rather than scanning all NMAXNSERVERS
slots. Unused slots may contain garbage.
(3) In afs_alloc_server_list(), don't stop converting a UVLDB record into
a server list just because we can't look up one of the servers. Just
skip that server and go on to the next. If we can't look up any of
the servers then we'll fail at the end.
Without this patch, an attempt to view the umich.edu root cell using
something like "ls /afs/umich.edu" on a dynamic root (future patch) mount
or an autocell mount will result in ENOMEDIUM. The failure is due to kafs
not stopping after nServers' worth of records have been read, but then
trying to access a server with a garbage UUID and getting an error, which
aborts the server list build.
Fixes: d2ddc776a458 ("afs: Overhaul volume and server record caching and fileserver rotation")
Reported-by: Jonathan Billings <jsbillings@jsbillings.org>
Signed-off-by: David Howells <dhowells@redhat.com>
cc: stable@vger.kernel.org
|
|
In afs_select_fileserver(), we need to clear the ->responded flag in the
address list when reusing it. We should also clear it in
afs_select_current_fileserver().
To this end, just memset() the object before initialising it.
Fixes: d2ddc776a458 ("afs: Overhaul volume and server record caching and fileserver rotation")
Signed-off-by: David Howells <dhowells@redhat.com>
cc: stable@vger.kernel.org
|
|
afs_select_fileserver() ends the address cursor it is using in the case in
which we get some sort of network error and run out of addresses to iterate
through, before it jumps to try the next server. This also needs to be
done when the server aborts with some sort of error that means we should
try the next server.
Fix this by:
(1) Move the iterate_address afs_end_cursor() call to the next_server
case.
(2) End the cursor in the failed case.
(3) Make afs_end_cursor() clear the ->begun flag and ->addr pointer in the
address cursor.
(4) Make afs_end_cursor() able to be called on an already cleared cursor.
Without this, something like the following oops may occur:
AFS: Assertion failed
18446612134397189888 == 0 is false
0xffff88007c279f00 == 0x0 is false
------------[ cut here ]------------
kernel BUG at fs/afs/rotate.c:360!
RIP: 0010:afs_select_fileserver+0x79b/0xa30 [kafs]
Call Trace:
afs_statfs+0xcc/0x180 [kafs]
? p9_client_statfs+0x9e/0x110 [9pnet]
? _cond_resched+0x19/0x40
statfs_by_dentry+0x6d/0x90
vfs_statfs+0x1b/0xc0
user_statfs+0x4b/0x80
SYSC_statfs+0x15/0x30
SyS_statfs+0xe/0x10
entry_SYSCALL_64_fastpath+0x20/0x83
Fixes: d2ddc776a458 ("afs: Overhaul volume and server record caching and fileserver rotation")
Reported-by: Marc Dionne <marc.dionne@auristor.com>
Signed-off-by: David Howells <dhowells@redhat.com>
cc: stable@vger.kernel.org
|
|
afs_alloc_volume() needs to release the cell ref it obtained in the case of
an error. Fix this by adding an afs_put_cell() call into the error path.
This can be triggered when a lookup for a cell in a dynamic root or an
autocell mount returns an error whilst trying to look up the server (such
as ENOMEDIUM). This results in an assertion failure oops when the module
is unloaded due to outstanding refs on a cell record.
Fixes: d2ddc776a458 ("afs: Overhaul volume and server record caching and fileserver rotation")
Signed-off-by: David Howells <dhowells@redhat.com>
cc: stable@vger.kernel.org
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/jlayton/linux
Pull inode->i_version rework from Jeff Layton:
"This pile of patches is a rework of the inode->i_version field. We
have traditionally incremented that field on every inode data or
metadata change. Typically this increment needs to be logged on disk
even when nothing else has changed, which is rather expensive.
It turns out though that none of the consumers of that field actually
require this behavior. The only real requirement for all of them is
that it be different iff the inode has changed since the last time the
field was checked.
Given that, we can optimize away most of the i_version increments and
avoid dirtying inode metadata when the only change is to the i_version
and no one is querying it. Queries of the i_version field are rather
rare, so we can help write performance under many common workloads.
This patch series converts existing accesses of the i_version field to
a new API, and then converts all of the in-kernel filesystems to use
it. The last patch in the series then converts the backend
implementation to a scheme that optimizes away a large portion of the
metadata updates when no one is looking at it.
In my own testing this series significantly helps performance with
small I/O sizes. I also got this email for Christmas this year from
the kernel test robot (a 244% r/w bandwidth improvement with XFS over
DAX, with 4k writes):
https://lkml.org/lkml/2017/12/25/8
A few of the earlier patches in this pile are also flowing to you via
other trees (mm, integrity, and nfsd trees in particular)".
* tag 'iversion-v4.16-1' of git://git.kernel.org/pub/scm/linux/kernel/git/jlayton/linux: (22 commits)
fs: handle inode->i_version more efficiently
btrfs: only dirty the inode in btrfs_update_time if something was changed
xfs: avoid setting XFS_ILOG_CORE if i_version doesn't need incrementing
fs: only set S_VERSION when updating times if necessary
IMA: switch IMA over to new i_version API
xfs: convert to new i_version API
ufs: use new i_version API
ocfs2: convert to new i_version API
nfsd: convert to new i_version API
nfs: convert to new i_version API
ext4: convert to new i_version API
ext2: convert to new i_version API
exofs: switch to new i_version API
btrfs: convert to new i_version API
afs: convert to new i_version API
affs: convert to new i_version API
fat: convert to new i_version API
fs: don't take the i_lock in inode_inc_iversion
fs: new API for handling inode->i_version
ntfs: remove i_version handling
...
|
|
For AFS, i_version is generally treated as an opaque value, so we use the
*_raw variants of the API here.
Note that AFS has quite a different definition for this counter. AFS only
increments it on changes to the data in regular files and to the contents
of directories. Inode metadata changes do not result in a version
increment.
We'll need to reconcile that somehow if we ever want to present this to
userspace via statx.
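In terms of the new API, the raw accessors from <linux/iversion.h> are used
roughly like this (function names and the status field name are
illustrative):
	/* Store the server's data version untouched... */
	static void afs_note_data_version(struct inode *inode, u64 data_version)
	{
		inode_set_iversion_raw(inode, data_version);
	}

	/* ...and compare it untouched; no local increments are made. */
	static bool afs_data_changed(struct inode *inode, u64 data_version)
	{
		return inode_peek_iversion_raw(inode) != data_version;
	}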
Signed-off-by: Jeff Layton <jlayton@redhat.com>
|
|
afs_write_end() is missing page unlock and put if afs_fill_page() fails.
Reported-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: David Howells <dhowells@redhat.com>
|
|
Repeating creation and deletion of a file on an afs mount will run the box
out of memory, e.g.:
dd if=/dev/zero of=/afs/scratch/m0 bs=$((1024*1024)) count=512
rm /afs/scratch/m0
The problem seems to be that it's not properly decrementing the nlink count
so that the inode can be scrapped.
Note that this doesn't fix local creation followed by remote deletion.
That's harder to handle and will require a separate patch as we're not told
that the file has been deleted - only that the directory has changed.
Reported-by: Marc Dionne <marc.dionne@auristor.com>
Signed-off-by: David Howells <dhowells@redhat.com>
|
|
Smatch warns that:
fs/afs/rxrpc.c:922 afs_extract_data()
error: uninitialized symbol 'remote_abort'.
Smatch is right that "remote_abort" might be uninitialized when we pass
it to afs_set_call_complete(). I don't know if that function uses the
uninitialized variable. Anyway, the comment for rxrpc_kernel_recv_data()
says that "*_abort should also be initialised to 0." and this patch does
that.
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: David Howells <dhowells@redhat.com>
|
|
When an AFS inode is allocated by afs_alloc_inode(), the allocated
afs_vnode struct isn't necessarily reset from the last time it was used as
an inode because the slab constructor is only invoked once when the memory
is obtained from the page allocator.
This means that information can leak from one inode to the next because
we're not calling kmem_cache_zalloc(). Some of the information isn't
reset, in particular the permit cache pointer.
Bring the clearances up to date.
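A cut-down sketch of the pattern (only permit_cache is named in the text;
the cache name and other details are assumptions):
	struct inode *afs_alloc_inode(struct super_block *sb)
	{
		struct afs_vnode *vnode;

		vnode = kmem_cache_alloc(afs_inode_cachep, GFP_KERNEL);
		if (!vnode)
			return NULL;

		/* The slab constructor only ran when this memory was first
		 * handed out by the page allocator, so anything a previous
		 * inode set must be cleared here explicitly. */
		vnode->permit_cache = NULL;
		return &vnode->vfs_inode;
	}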
Signed-off-by: David Howells <dhowells@redhat.com>
Tested-by: Marc Dionne <marc.dionne@auristor.com>
|
|
Fix four refcount bugs in afs_cache_permit():
(1) When checking the result of the kzalloc(), we can't just return, but
must put 'permits'.
(2) We shouldn't put permits immediately after hashing a new permit as we
need to keep the pointer stable so that we can check to see if
vnode->permit_cache has changed before we decide whether to assign to
it.
(3) 'permits' is being put twice.
(4) We need to put either the replacement or the thing replaced after the
assignment to vnode->permit_cache.
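Point (4) sketched, assuming the update is serialised by a lock on the vnode
(helper and pointer names are illustrative apart from afs_put_permits(),
which this message itself mentions):
	/* Caller holds the lock that serialises permit_cache updates. */
	if (rcu_access_pointer(vnode->permit_cache) == old) {
		rcu_assign_pointer(vnode->permit_cache, new);
		if (old)
			afs_put_permits(old);	/* put the thing replaced */
	} else {
		afs_put_permits(new);	/* lost the race; put the replacement */
	}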
Without this, lots of the following are seen:
Kernel BUG at ffffffffa039857b [verbose debug info unavailable]
------------[ cut here ]------------
Kernel BUG at ffffffffa039858a [verbose debug info unavailable]
------------[ cut here ]------------
The addresses are in the .text..refcount section of the kafs.ko module.
Following the relocation records for the __ex_table section shows one to be
due to the decrement in afs_put_permits() and the other to be key_get() in
afs_cache_permit().
Occasionally, the following is seen:
refcount_t overflow at afs_cache_permit+0x57d/0x5c0 [kafs] in cc1[562], uid/euid: 0/0
WARNING: CPU: 0 PID: 562 at kernel/panic.c:657 refcount_error_report+0x9c/0xac
...
Reported-by: Marc Dionne <marc.dionne@auristor.com>
Signed-off-by: David Howells <dhowells@redhat.com>
Tested-by: Marc Dionne <marc.dionne@auristor.com>
|
|
This is a pure automated search-and-replace of the internal kernel
superblock flags.
The s_flags are now called SB_*, with the names and the values for the
moment mirroring the MS_* flags that they're equivalent to.
Note how the MS_xyz flags are the ones passed to the mount system call,
while the SB_xyz flags are what we then use in sb->s_flags.
The script to do this was:
# places to look in; re security/*: it generally should *not* be
# touched (that stuff parses mount(2) arguments directly), but
# there are two places where we really deal with superblock flags.
FILES="drivers/mtd drivers/staging/lustre fs ipc mm \
include/linux/fs.h include/uapi/linux/bfs_fs.h \
security/apparmor/apparmorfs.c security/apparmor/include/lib.h"
# the list of MS_... constants
SYMS="RDONLY NOSUID NODEV NOEXEC SYNCHRONOUS REMOUNT MANDLOCK \
DIRSYNC NOATIME NODIRATIME BIND MOVE REC VERBOSE SILENT \
POSIXACL UNBINDABLE PRIVATE SLAVE SHARED RELATIME KERNMOUNT \
I_VERSION STRICTATIME LAZYTIME SUBMOUNT NOREMOTELOCK NOSEC BORN \
ACTIVE NOUSER"
SED_PROG=
for i in $SYMS; do SED_PROG="$SED_PROG -e s/MS_$i/SB_$i/g"; done
# we want files that contain at least one of MS_...,
# with fs/namespace.c and fs/pnode.c excluded.
L=$(for i in $SYMS; do git grep -w -l MS_$i $FILES; done| sort|uniq|grep -v '^fs/namespace.c'|grep -v '^fs/pnode.c')
for f in $L; do sed -i $f $SED_PROG; done
Requested-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
The assignment of dvnode to itself is redundant and can be removed.
Cleans up warning detected by cppcheck:
fs/afs/dir.c:975: (warning) Redundant assignment of 'dvnode' to itself.
Fixes: d2ddc776a458 ("afs: Overhaul volume and server record caching and fileserver rotation")
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: David Howells <dhowells@redhat.com>
|
|
Due to recent changes this piece of code is no longer needed.
Addresses-Coverity-ID: 1462033
Link: https://lkml.kernel.org/r/4923.1510957307@warthog.procyon.org.uk
Signed-off-by: Gustavo A. R. Silva <garsilva@embeddedor.com>
Signed-off-by: David Howells <dhowells@redhat.com>
|
|
afs_mkdir(), afs_create(), afs_link() and afs_symlink() all need to drop
the target dentry if a signal causes the operation to be killed immediately
before we try to contact the server.
Signed-off-by: David Howells <dhowells@redhat.com>
|
|
Fix some of dentry handling in AFS directory ops:
(1) Do d_drop() on the new_dentry before assigning a new inode to it in
afs_vnode_new_inode(). It's fine to do this before calling afs_iget()
because the operation has taken place on the server.
(2) Replace d_instantiate()/d_rehash() with d_add().
(3) Don't d_drop() the new_dentry in afs_rename() on error.
Also fix afs_link() and afs_rename() to call key_put() on all error paths
where the key is taken.
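Point (2) above in diff form; d_add() is the helper that instantiates and
hashes the dentry in one call:
	-	d_instantiate(dentry, inode);
	-	d_rehash(dentry);
	+	d_add(dentry, inode);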
Signed-off-by: David Howells <dhowells@redhat.com>
|
|
Make afs_write_begin() wait for a page that's marked PG_writeback because:
(1) We need to avoid interference with the data being stored so that the
data on the server ends up in a defined state.
(2) page->private is used to track the window of dirty data within a page,
but it's also used by the storage code to track what's being written,
being cleared by the completion notification. Ownership can't be
relinquished by the storage code until completion because if a store
fails, the data must be remarked dirty.
Tracing shows something like the following (edited):
x86_64-linux-gn-15940 [1] afs_page_dirty: vn=ffff8800bef33800 9c75 begin 0-125
kworker/u8:3-114 [2] afs_page_dirty: vn=ffff8800bef33800 9c75 store+ 0-125
x86_64-linux-gn-15940 [1] afs_page_dirty: vn=ffff8800bef33800 9c75 begin 0-2052
kworker/u8:3-114 [2] afs_page_dirty: vn=ffff8800bef33800 9c75 clear 0-2052
kworker/u8:3-114 [2] afs_page_dirty: vn=ffff8800bef33800 9c75 store 0-0
kworker/u8:3-114 [2] afs_page_dirty: vn=ffff8800bef33800 9c75 WARN 0-0
The clear (completion) corresponding to the store+ (store continuation from
a previous page) happens between the second begin (afs_write_begin) and the
store corresponding to that. This results in the second store not seeing
any data to write back, leading to the following warning:
WARNING: CPU: 2 PID: 114 at ../fs/afs/write.c:403 afs_write_back_from_locked_page+0x19d/0x76c [kafs]
Modules linked in: kafs(E)
CPU: 2 PID: 114 Comm: kworker/u8:3 Tainted: G E 4.14.0-fscache+ #242
Hardware name: ASUS All Series/H97-PLUS, BIOS 2306 10/09/2014
Workqueue: writeback wb_workfn (flush-afs-2)
task: ffff8800cad72600 task.stack: ffff8800cad44000
RIP: 0010:afs_write_back_from_locked_page+0x19d/0x76c [kafs]
RSP: 0018:ffff8800cad47aa0 EFLAGS: 00010246
RAX: 0000000000000001 RBX: ffff8800bef33a20 RCX: 0000000000000000
RDX: 000000000000000f RSI: ffffffff81c5d0e0 RDI: ffff8800cad72e78
RBP: ffff8800d31ea1e8 R08: ffff8800c1358000 R09: ffff8800ca00e400
R10: ffff8800cad47a38 R11: ffff8800c5d9e400 R12: 0000000000000000
R13: ffffea0002d9df00 R14: ffffffffa0023c1c R15: 0000000000007fdf
FS: 0000000000000000(0000) GS:ffff8800ca700000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f85ac6c4000 CR3: 0000000001c10001 CR4: 00000000001606e0
Call Trace:
? clear_page_dirty_for_io+0x23a/0x267
afs_writepages_region+0x1be/0x286 [kafs]
afs_writepages+0x60/0x127 [kafs]
do_writepages+0x36/0x70
__writeback_single_inode+0x12f/0x635
writeback_sb_inodes+0x2cc/0x452
__writeback_inodes_wb+0x68/0x9f
wb_writeback+0x208/0x470
? wb_workfn+0x22b/0x565
wb_workfn+0x22b/0x565
? worker_thread+0x230/0x2ac
process_one_work+0x2cc/0x517
? worker_thread+0x230/0x2ac
worker_thread+0x1d4/0x2ac
? rescuer_thread+0x29b/0x29b
kthread+0x15d/0x165
? kthread_create_on_node+0x3f/0x3f
? call_usermodehelper_exec_async+0x118/0x11f
ret_from_fork+0x24/0x30
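The fix boils down to waiting in afs_write_begin() before the page is reused
for a new dirty range, something like the following (a minimal sketch; the
real code may need to unlock, wait and retry):
	/* Don't start a new dirty window over data that the storage code
	 * is still writing back and still owns via page->private. */
	if (PageWriteback(page))
		wait_on_page_writeback(page);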
Signed-off-by: David Howells <dhowells@redhat.com>
|
|
Fix AFS file locking, which was broken when the big kernel lock (which could
be slept under) was replaced by a spinlock (which can't be). The problem is
that the AFS code was doing things inside the critical section that might
call schedule(), so this was a broken transformation.
Fix this by the following means:
(1) Use a state machine with a proper state that can only be changed under
the spinlock rather than using a collection of bit flags.
(2) Cache the key used for the lock and the lock type in the afs_vnode
struct so that the manager work function doesn't have to refer to a
file_lock struct that's been dequeued. This makes signal handling
safer.
(4) Move the unlock from afs_do_unlk() to afs_fl_release_private() which
means that unlock is achieved in other circumstances too.
(5) Unlock the file on the server before taking the next conflicting lock.
Also change:
(1) Check the permits on a file before actually trying the lock.
(2) fsync the file before effecting an explicit unlock operation. We
don't fsync when the lock is erased by other means, as we might not be
in a context where we can actually do that.
Further fixes:
(1) Fixed-fileserver address rotation is made to work. It's only used by
the locking functions, so couldn't be tested before.
Fixes: 72f98e72551f ("locks: turn lock_flocks into a spinlock")
Signed-off-by: David Howells <dhowells@redhat.com>
cc: jlayton@redhat.com
|