Age    Commit message    Author
2020-05-25btrfs: simplify error handling of clean_pinned_extents()Filipe Manana
At clean_pinned_extents(), whether we end up returning success or failure, we pretty much have to do the same things:

1) unlock unused_bg_unpin_mutex
2) decrement the reference count on the previous transaction

We also call btrfs_dec_block_group_ro() in case of failure, but that is better done in its caller, btrfs_delete_unused_bgs(), since it's the caller that calls inc_block_group_ro(), so it should be responsible for the decrement operation, as it is in case any of the other functions it calls fail.

So move the call to btrfs_dec_block_group_ro() from clean_pinned_extents() into btrfs_delete_unused_bgs(), and unify the error and success return paths of clean_pinned_extents(), reducing duplicated code and making it simpler.

Signed-off-by: Filipe Manana <fdmanana@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2020-05-25btrfs: remove the redundant parameter level in btrfs_bin_search()Qu Wenruo
All callers pass eb::level, so we can read it directly inside btrfs_bin_search() and key_search(). This is inspired by the work of Marek in U-boot. CC: Marek Behun <marek.behun@nic.cz> Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com> Reviewed-by: Nikolay Borisov <nborisov@suse.com> Signed-off-by: Qu Wenruo <wqu@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2020-05-25btrfs: make btrfs_read_disk_super return struct btrfs_disk_superNikolay Borisov
Instead of returning both the page and the super block structure, make btrfs_read_disk_super just return a pointer to struct btrfs_disk_super. As a result the function signature is simplified. Also, read_cache_page_gfp can never return NULL so check its return value only for IS_ERR. Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com> Signed-off-by: Nikolay Borisov <nborisov@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
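As a rough, hedged sketch of the simplified signature and the IS_ERR-only error handling described above (the function name, arguments and body here are illustrative assumptions, not the exact kernel hunk):

	#include <linux/err.h>
	#include <linux/pagemap.h>

	static struct btrfs_super_block *demo_read_disk_super(struct block_device *bdev,
							      u64 bytenr)
	{
		pgoff_t index = bytenr >> PAGE_SHIFT;
		struct page *page;

		/* read_cache_page_gfp() never returns NULL, only an ERR_PTR(). */
		page = read_cache_page_gfp(bdev->bd_inode->i_mapping, index, GFP_NOFS);
		if (IS_ERR(page))
			return ERR_CAST(page);

		/* Hand back the superblock pointer directly, not a (page, sb) pair. */
		return page_address(page) + offset_in_page(bytenr);
	}

Callers then only need a single IS_ERR()/PTR_ERR() check on the returned pointer.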
2020-05-25btrfs: use list_for_each_entry_safe in free_reloc_rootsNikolay Borisov
The function always works on a local copy of the reloc root list, which cannot be modified outside of it so using list_for_each_entry is fine. Additionally the macro handles empty lists so drop list_empty checks of callers. No semantic changes. Reviewed-by: Qu Wenruo <wqu@suse.com> Signed-off-by: Nikolay Borisov <nborisov@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
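For readers less familiar with the list helpers, here is a minimal, self-contained illustration of the pattern (the struct and function are hypothetical, not the actual free_reloc_roots()): the _safe variant keeps a lookahead pointer so entries can be deleted while walking the list, and it naturally handles an empty list, which is why the callers' list_empty() checks can be dropped.

	#include <linux/list.h>
	#include <linux/slab.h>

	struct demo_root {
		struct list_head list;
	};

	static void free_demo_roots(struct list_head *head)
	{
		struct demo_root *root, *next;

		/* Safe against deletion of the current entry; no-op on an empty list. */
		list_for_each_entry_safe(root, next, head, list) {
			list_del_init(&root->list);
			kfree(root);
		}
	}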
2020-05-25btrfs: don't force read-only after error in drop snapshotDavid Sterba
Deleting a subvolume on a full filesystem leads to ENOSPC followed by a forced read-only. This is not a transaction abort and the filesystem is otherwise ok, so the error should be just propagated to the callers.

This is caused by an unnecessary call to btrfs_handle_fs_error for all errors except EAGAIN. This does not make sense, as the standard transaction abort mechanism is in btrfs_drop_snapshot, so all relevant failures are handled.

Originally, in commit cb1b69f4508a ("Btrfs: forced readonly when btrfs_drop_snapshot() fails"), there was no return value at all, so the btrfs_std_error made some sense, but once the error handling and propagation were implemented we don't need it anymore.

Signed-off-by: David Sterba <dsterba@suse.com>
2020-05-25btrfs: remove pointless assertion on reclaim_size counterFilipe Manana
The reclaim_size counter of a space_info object is unsigned. So its value can never be negative, it's pointless to have an assertion that checks its value is >= 0, therefore remove it. Reviewed-by: Nikolay Borisov <nborisov@suse.com> Signed-off-by: Filipe Manana <fdmanana@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2020-05-25btrfs: tree-checker: remove duplicate definition of 'inode_item_err'Zheng Wei
Remove the duplicate definition of 'inode_item_err' in the file tree-checker.c that got there by accident in c23c77b097dc ("btrfs: tree-checker: Refactor inode key check into seperate function"). Reviewed-by: Nikolay Borisov <nborisov@suse.com> Signed-off-by: Zheng Wei <wei.zheng@vivo.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2020-05-25btrfs: force chunk allocation if our global rsv is larger than metadataJosef Bacik
Nikolay noticed a bunch of test failures with my global rsv steal patches. At first he thought they were introduced by them, but they've been failing for a while with 64k nodes.

The problem is with 64k nodes we have a global reserve that calculates out to 13MiB on a freshly made file system, which only has 8MiB of metadata space. Because of changes I previously made we no longer account for the global reserve in the overcommit logic, which means we correctly allow overcommit to happen even though we are already overcommitted.

However in some corner cases, for example btrfs/170, we will allocate the entire file system up with data chunks before we have enough space pressure to allocate a metadata chunk. Then once the fs is full we ENOSPC out because we cannot overcommit and the global reserve is taking up all of the available space.

The most ideal way to deal with this is to change our space reservation stuff to take into account the height of the trees that we're modifying, so that our global reserve calculation does not end up so obscenely large. However that is a huge undertaking. Instead fix this by forcing a chunk allocation if the global reserve is larger than the total metadata space. This gives us essentially the same behavior that happened before: we get a chunk allocated and these tests can pass.

This is meant to be a stop-gap measure until we can tackle the "tree height only" project.

Fixes: 0096420adb03 ("btrfs: do not account global reserve in can_overcommit")
CC: stable@vger.kernel.org # 5.4+
Reviewed-by: Nikolay Borisov <nborisov@suse.com> Tested-by: Nikolay Borisov <nborisov@suse.com> Signed-off-by: Josef Bacik <josef@toxicpanda.com> Signed-off-by: David Sterba <dsterba@suse.com>
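A minimal sketch of the decision this patch adds, with hypothetical names (the real check lives in the chunk allocation path and works off the space_info and global block reserve fields):

	#include <linux/types.h>

	/*
	 * If the global reserve alone is larger than all the metadata space we
	 * have (e.g. a ~13MiB reserve vs 8MiB of metadata on a fresh 64k-node
	 * fs), force a metadata chunk allocation instead of relying on
	 * overcommit.
	 */
	static bool should_force_meta_chunk_alloc(u64 global_rsv_size,
						  u64 meta_total_bytes)
	{
		return global_rsv_size >= meta_total_bytes;
	}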
2020-05-25btrfs: run btrfs_try_granting_tickets if a priority ticket failsJosef Bacik
With normal tickets we could have a large reservation at the front of the list that is unable to be satisfied, but a smaller ticket later on that can be satisfied. The way we handle this is to run btrfs_try_granting_tickets() in maybe_fail_all_tickets(). However no such protection exists for priority tickets. Fix this by handling it in handle_reserve_ticket(). If we've returned after attempting to flush space in a priority related way, we'll still be on the priority list and need to be removed. We rely on the flushing to free up space and wake the ticket, but if there is not enough space to reclaim _but_ there's enough space in the space_info to handle subsequent reservations then we would have gotten an ENOSPC erroneously. Address this by catching where we are still on the list, meaning we were a priority ticket, and removing ourselves and then running btrfs_try_granting_tickets(). This will handle this particular corner case. Reviewed-by: Nikolay Borisov <nborisov@suse.com> Tested-by: Nikolay Borisov <nborisov@suse.com> Signed-off-by: Josef Bacik <josef@toxicpanda.com> Signed-off-by: David Sterba <dsterba@suse.com>
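A hedged sketch of the tail of handle_reserve_ticket() that this commit describes; the locking and field names are assumptions based on the text, not the verbatim diff:

	spin_lock(&space_info->lock);
	if (!list_empty(&ticket->list)) {
		/*
		 * We were a priority ticket that flushing never woke: take
		 * ourselves off the priority list and let any space we leave
		 * behind be handed to the remaining tickets.
		 */
		list_del_init(&ticket->list);
		btrfs_try_granting_tickets(fs_info, space_info);
	}
	spin_unlock(&space_info->lock);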
2020-05-25btrfs: only check priority tickets for priority flushingJosef Bacik
In debugging a generic/320 failure on ppc64, Nikolay noticed that sometimes we'd ENOSPC out with plenty of space to reclaim if we had committed the transaction. He further discovered that this was because there was a priority ticket that was small enough to fit in the free space currently in the space_info.

Consider the following scenario. There is no more space to reclaim in the fs without committing the transaction. Assume there's 1MiB of space free in the space info, but there are pending normal tickets with 2MiB reservations. Now a priority ticket comes in with a .5MiB reservation. Because we have normal tickets pending we add ourselves to the priority list, despite the fact that we could satisfy this reservation.

The flushing machinery now gets to the point where it wants to commit the transaction, but because there's a .5MiB ticket on the priority list and we have 1MiB of free space we assume the ticket will be granted soon, so we bail without committing the transaction. Meanwhile the priority flushing does not commit the transaction, and eventually fails with an ENOSPC. Then all other tickets are failed with ENOSPC because we were never able to actually commit the transaction.

The fix for this is that we should have simply granted the priority flusher its reservation, because there was space to make the reservation. Priority flushers by definition take priority, so they are allowed to make their reservations before any previous normal tickets. By not adding this priority ticket to the list, the normal flushing mechanisms will then commit the transaction and everything will continue normally.

We still need to serialize ourselves with other priority tickets, so if there are any tickets on the priority list then we need to add ourselves to that list in order to maintain the serialization between priority tickets.

Reviewed-by: Nikolay Borisov <nborisov@suse.com> Tested-by: Nikolay Borisov <nborisov@suse.com> Signed-off-by: Josef Bacik <josef@toxicpanda.com> Signed-off-by: David Sterba <dsterba@suse.com>
2020-05-25btrfs: account for trans_block_rsv in may_commit_transactionJosef Bacik
On ppc64le with 64k page size (respectively 64k block size) generic/320 was failing and debug output showed we were getting a premature ENOSPC with a bunch of space in btrfs_fs_info::trans_block_rsv. This meant there were still open transaction handles holding space, yet the flusher didn't commit the transaction because it deemed the freed space won't be enough to satisfy the current reserve ticket. Fix this by accounting for space in trans_block_rsv when deciding whether the current transaction should be committed or not. Reviewed-by: Nikolay Borisov <nborisov@suse.com> Tested-by: Nikolay Borisov <nborisov@suse.com> Signed-off-by: Josef Bacik <josef@toxicpanda.com> Signed-off-by: David Sterba <dsterba@suse.com>
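As a plain, hedged illustration of the accounting change (the names are made up for the example; the real code adjusts the estimate inside may_commit_transaction()):

	#include <linux/types.h>

	static bool commit_would_satisfy_ticket(u64 ticket_bytes, u64 pinned_bytes,
						u64 delayed_refs_reserved,
						u64 delayed_items_reserved,
						u64 trans_reserved)
	{
		/*
		 * Space held in trans_block_rsv by still-open transaction
		 * handles is returned when the transaction commits, so count
		 * it as reclaimable when deciding whether to commit.
		 */
		u64 reclaimable = pinned_bytes + delayed_refs_reserved +
				  delayed_items_reserved + trans_reserved;

		return reclaimable >= ticket_bytes;
	}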
2020-05-25btrfs: allow to use up to 90% of the global block rsv for unlinkJosef Bacik
We previously had a limit of stealing 50% of the global reserve for unlink. This was from a time when the global reserve was used for the delayed refs as well. However now those reservations are kept separate, so the global reserve can be depleted much more to allow us to make progress for space restoring operations like unlink. Change the minimum amount of space required to be left in the global reserve to 10%. Reviewed-by: Nikolay Borisov <nborisov@suse.com> Tested-by: Nikolay Borisov <nborisov@suse.com> Signed-off-by: Josef Bacik <josef@toxicpanda.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2020-05-25btrfs: improve global reserve stealing logicJosef Bacik
For unlink transactions and block group removal, btrfs_start_transaction_fallback_global_rsv will first try to start an ordinary transaction, and if that fails it will fall back to reserving the required amount by stealing from the global reserve. This is problematic because of all the same reasons we had with previous iterations of the ENOSPC handling: thundering herd. We get a bunch of failures all at once, everybody tries to allocate from the global reserve, some win and some lose, and we get an ENOSPC.

Fix this behavior by introducing BTRFS_RESERVE_FLUSH_ALL_STEAL, which is used to mark unlink reservations, and by integrating this logic into the normal ENOSPC infrastructure. We still go through all of the normal flushing work, and at the moment we begin to fail all the tickets we try to satisfy any tickets that are allowed to steal by stealing from the global reserve. If this works we start the flushing system over again, just like we would with a normal ticket satisfaction. This serializes our global reserve stealing, so we don't have the thundering herd problem.

Reviewed-by: Nikolay Borisov <nborisov@suse.com> Tested-by: Nikolay Borisov <nborisov@suse.com> Signed-off-by: Josef Bacik <josef@toxicpanda.com> Signed-off-by: David Sterba <dsterba@suse.com>
2020-05-25btrfs: backref: distinguish reloc and non-reloc use of indirect resolutionQu Wenruo
For relocation tree detection, the relocation backref cache uses btrfs_should_ignore_reloc_root(), which uses relocation-specific checks like checking the DEAD_RELOC_ROOT bit. However for the general purpose backref cache, we can't rely on that check, as it's possible that relocation is also running. For the generic purpose backref cache, we detect a reloc root by its SHARED_BLOCK_REF item: only a reloc root node has its parent bytenr pointing back to itself. And in that case, the backref cache will mark the reloc root node useless, dropping any child orphan nodes.

So only call btrfs_should_ignore_reloc_root() if the backref cache is for relocation.

Signed-off-by: Qu Wenruo <wqu@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2020-05-25btrfs: reloc: move error handling of build_backref_tree() to backref.cQu Wenruo
The error cleanup will be extracted as a new function, btrfs_backref_error_cleanup(), and moved to backref.c and exported for later usage. Signed-off-by: Qu Wenruo <wqu@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2020-05-25btrfs: backref: rename and move finish_upper_links()Qu Wenruo
This is the second major part of the generic backref cache. Move it to backref.c so we can reuse it. Signed-off-by: Qu Wenruo <wqu@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2020-05-25btrfs: backref: rename and move handle_one_tree_block()Qu Wenruo
This function is the major part of backref cache build process, move it to backref.c so we can reuse it later. Signed-off-by: Qu Wenruo <wqu@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2020-05-25btrfs: reloc: open code read_fs_root() for handle_indirect_tree_backref()Qu Wenruo
The backref code is going to be moved to backref.c, and read_fs_root() is just a simple wrapper, so open code it to prepare for the incoming code move. Signed-off-by: Qu Wenruo <wqu@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2020-05-25btrfs: backref: rename and move should_ignore_root()Qu Wenruo
This function mostly serves a single purpose for the relocation backref cache, but since we're moving the main part of the backref cache to backref.c, we need to export it. To avoid confusion, also rename the function to btrfs_should_ignore_reloc_root() to make the name a little more clear. Signed-off-by: Qu Wenruo <wqu@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2020-05-25btrfs: backref: rename and move backref_tree_panic()Qu Wenruo
Also change the parameter, since all callers can easily grab an fs_info, there is no need for all the pointer chasing. Signed-off-by: Qu Wenruo <wqu@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2020-05-25btrfs: backref: rename and move backref_cache_cleanup()Qu Wenruo
Since we're releasing all existing nodes/edges rather than cleaning up the mess after an error, "release" is a more fitting name here. Signed-off-by: Qu Wenruo <wqu@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2020-05-25btrfs: backref: rename and move remove_backref_node()Qu Wenruo
Also add a comment explaining the cleanup process, to distinguish it from btrfs_backref_drop_node(). Signed-off-by: Qu Wenruo <wqu@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2020-05-25btrfs: backref: rename and move drop_backref_node()Qu Wenruo
Add an extra comment for drop_backref_node(), as it is similar to remove_backref_node() and the difference needs explaining. Signed-off-by: Qu Wenruo <wqu@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2020-05-25btrfs: backref: rename and move free_backref_(node|edge)Qu Wenruo
Signed-off-by: Qu Wenruo <wqu@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2020-05-25btrfs: backref: rename and move link_backref_edge()Qu Wenruo
Signed-off-by: Qu Wenruo <wqu@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2020-05-25btrfs: backref: rename and move alloc_backref_edge()Qu Wenruo
Signed-off-by: Qu Wenruo <wqu@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2020-05-25btrfs: backref: rename and move alloc_backref_node()Qu Wenruo
Signed-off-by: Qu Wenruo <wqu@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2020-05-25btrfs: backref: rename and move backref_cache_init()Qu Wenruo
Signed-off-by: Qu Wenruo <wqu@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2020-05-25btrfs: rename tree_entry to rb_simple_node and export itQu Wenruo
Structure tree_entry provides a very simple rb_tree which only uses bytenr as the search index. That tree_entry is used in 3 structures: backref_node, mapping_node and tree_block. Since we're going to make backref_node independent from relocation, it's a good time to extract the tree_entry into rb_simple_node, and export it into misc.h. Signed-off-by: Qu Wenruo <wqu@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
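A hedged sketch of the extracted structure and a lookup helper, following the description above (the real definitions land in fs/btrfs/misc.h; the exact helper names are assumptions):

	#include <linux/rbtree.h>
	#include <linux/types.h>

	/* Simple rb_tree node keyed only by bytenr. */
	struct rb_simple_node {
		struct rb_node rb_node;
		u64 bytenr;
	};

	static inline struct rb_node *rb_simple_search(struct rb_root *root, u64 bytenr)
	{
		struct rb_node *node = root->rb_node;
		struct rb_simple_node *entry;

		while (node) {
			entry = rb_entry(node, struct rb_simple_node, rb_node);

			if (bytenr < entry->bytenr)
				node = node->rb_left;
			else if (bytenr > entry->bytenr)
				node = node->rb_right;
			else
				return node;
		}
		return NULL;
	}

backref_node, mapping_node and tree_block can then embed struct rb_simple_node instead of each carrying its own rb_node + bytenr pair.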
2020-05-25btrfs: backref: move btrfs_backref_(node|edge|cache) structures to backref.hQu Wenruo
These 3 structures are the main part of btrfs backref cache, move them to backref.h to build the basis for later reuse. Signed-off-by: Qu Wenruo <wqu@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2020-05-25btrfs: reloc: add btrfs_ prefix for backref_node/edge/cacheQu Wenruo
Those three structures are the main elements of backref cache. Add the "btrfs_" prefix for later export. Signed-off-by: Qu Wenruo <wqu@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2020-05-25btrfs: reloc: refactor useless nodes handling into its own functionQu Wenruo
This patch also adds some comments for the cleanup. Reviewed-by: Josef Bacik <josef@toxicpanda.com> Signed-off-by: Qu Wenruo <wqu@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2020-05-25btrfs: reloc: refactor finishing part of upper linkage into finish_upper_links()Qu Wenruo
After handle_one_tree_backref(), all newly added (not cached) edges and nodes have the following features:

- Only backref_edge::list[LOWER] is linked. This means we can only iterate from bottom to top, not the other direction.

- Newly added nodes are not yet added to the cache rb_tree.

So to finish the backref cache, we still need to finish the links and add all nodes into the backref cache rb_tree. This patch will refactor the existing code into finish_upper_links(), add more comments for each branch, and explain why we need to do all the work.

Reviewed-by: Josef Bacik <josef@toxicpanda.com> Signed-off-by: Qu Wenruo <wqu@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2020-05-25btrfs: reloc: remove the open-coded goto loop for breadth-first searchQu Wenruo
build_backref_tree() uses "goto again;" to implement a breadth-first search to build backref cache. This patch will extract most of its work into a wrapper, handle_one_tree_block(), and use a do {} while() loop to implement the same thing. Reviewed-by: Josef Bacik <josef@toxicpanda.com> Signed-off-by: Qu Wenruo <wqu@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2020-05-25btrfs: reloc: pass essential members for alloc_backref_node()Qu Wenruo
Bytenr and level are essential parameters for backref_node, thus it makes sense to initialize them at allocation time. Reviewed-by: Josef Bacik <josef@toxicpanda.com> Reviewed-by: Nikolay Borisov <nborisov@suse.com> Signed-off-by: Qu Wenruo <wqu@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2020-05-25btrfs: reloc: use wrapper to replace open-coded edge linkingQu Wenruo
Since backref_edge is used to connect upper and lower backref nodes, and needs to access both nodes, some code can look pretty nasty:

	list_add_tail(&edge->list[LOWER], &cur->upper);

The above code links @cur to the LOWER side of the edge, while both the "LOWER" and "upper" words show up, which can be very confusing for the reader to grasp.

This patch introduces a new wrapper, link_backref_edge(), to handle the linking behavior, with an extra ASSERT() to ensure callers don't pass the wrong nodes.

Also, this updates the comments of the related lists of backref_node and backref_edge, to make it clearer what each list points to.

Reviewed-by: Josef Bacik <josef@toxicpanda.com> Reviewed-by: Nikolay Borisov <nborisov@suse.com> Signed-off-by: Qu Wenruo <wqu@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
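A hedged sketch of the wrapper, assuming the backref_node/backref_edge layout described in this series (node[2] and list[2] arrays indexed by LOWER/UPPER); the exact signature and the LINK_LOWER/LINK_UPPER flag names are assumptions:

	static void link_backref_edge(struct backref_edge *edge,
				      struct backref_node *lower,
				      struct backref_node *upper,
				      int link_which)
	{
		/* Catch callers passing the nodes in the wrong order. */
		ASSERT(upper && lower && upper->level == lower->level + 1);
		edge->node[LOWER] = lower;
		edge->node[UPPER] = upper;
		if (link_which & LINK_LOWER)
			list_add_tail(&edge->list[LOWER], &lower->upper);
		if (link_which & LINK_UPPER)
			list_add_tail(&edge->list[UPPER], &upper->lower);
	}

With the wrapper, call sites only name the two nodes being connected and which side(s) to link, instead of juggling the LOWER/upper list indices by hand.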
2020-05-25btrfs: reloc: refactor indirect tree backref processing into its own functionQu Wenruo
The processing of indirect tree backref (TREE_BLOCK_REF) is the most complex work. We need to grab the fs root, do a tree search to locate all its parent nodes, link all needed edges, and put all uncached edges to pending edge list. This is definitely worth a helper function. Reviewed-by: Josef Bacik <josef@toxicpanda.com> Signed-off-by: Qu Wenruo <wqu@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2020-05-25btrfs: reloc: refactor direct tree backref processing into its own functionQu Wenruo
For BTRFS_SHARED_BLOCK_REF_KEY, its processing is straightforward, as we know the parent node bytenr directly. If the parent is already cached, or is a root, call it a day. If the parent is not cached, add it to the pending list. This patch just refactors this part into its own function, handle_direct_tree_backref(), and adds some comments explaining the @ref_key parameter. Reviewed-by: Josef Bacik <josef@toxicpanda.com> Signed-off-by: Qu Wenruo <wqu@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2020-05-25btrfs: reloc: make reloc root search-specific for relocation backref cacheQu Wenruo
find_reloc_root() searches reloc_control::reloc_root_tree to find the reloc root. This behavior is only useful for the relocation backref cache. For the incoming more generic purpose backref cache, we don't care about who owns the reloc root, but only care if it's a reloc root.

So this patch makes the following modifications to make the reloc root search more specific to relocation backref:

- Add backref_node::is_reloc_root
  This will be an extra indicator for the generic purpose backref cache. Users don't need to read the root key from backref_node::root to determine if it's a reloc root. Also, a reloc tree root is useless and will be queued to the useless list.

- Add backref_cache::is_reloc
  This will allow the backref cache code to behave differently for the generic purpose backref cache and the relocation backref cache.

- Pass fs_info to find_reloc_root()

- Export find_reloc_root()
  So backref.c can utilize this function.

Reviewed-by: Josef Bacik <josef@toxicpanda.com> Signed-off-by: Qu Wenruo <wqu@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2020-05-25btrfs: reloc: add backref_cache::fs_info memberQu Wenruo
Add this member so that we can grab fs_info without the help from reloc_control. Reviewed-by: Josef Bacik <josef@toxicpanda.com> Signed-off-by: Qu Wenruo <wqu@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2020-05-25btrfs: reloc: add backref_cache::pending_edge and backref_cache::useless_nodeQu Wenruo
These two new members will act the same as the existing local lists, @useless and @list in build_backref_tree(). Currently build_backref_tree() is only executed serially, thus moving such local list into backref_cache is still safe. Also since we're here, use list_first_entry() to replace a lot of list_entry() calls after !list_empty(). Reviewed-by: Josef Bacik <josef@toxicpanda.com> Signed-off-by: Qu Wenruo <wqu@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2020-05-25btrfs: reloc: rename mark_block_processed and __mark_block_processedQu Wenruo
These two functions are weirdly named: mark_block_processed() in fact just marks a range dirty unconditionally, while __mark_block_processed() does an extra check before doing the marking. This patch will open code the old mark_block_processed(), and rename __mark_block_processed() to remove the "__" prefix. Since we're here, also kill the forward declaration, which in turn allows killing in_block_group() in favor of the in_range() macro. Reviewed-by: Nikolay Borisov <nborisov@suse.com> Reviewed-by: Josef Bacik <josef@toxicpanda.com> Signed-off-by: Qu Wenruo <wqu@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
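A hedged sketch of what the merged helper might look like after the rename (the field names, the in_range() containment macro usage, and the extent bit call are assumptions based on the surrounding reloc code, not the verbatim result):

	static void mark_block_processed(struct reloc_control *rc,
					 struct backref_node *node)
	{
		u32 blocksize;

		/* Only leaves and blocks inside the relocated block group get marked. */
		if (node->level == 0 ||
		    in_range(node->bytenr, rc->block_group->start,
			     rc->block_group->length)) {
			blocksize = rc->extent_root->fs_info->nodesize;
			set_extent_bits(&rc->processed_blocks, node->bytenr,
					node->bytenr + blocksize - 1, EXTENT_DIRTY);
		}
		node->processed = 1;
	}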
2020-05-25btrfs: reloc: use btrfs_backref_iter infrastructureQu Wenruo
The core function of relocation, build_backref_tree(), needs to iterate all backref items of one tree block. Use the btrfs_backref_iter infrastructure to do the loop and make the code more readable.

The backref item iteration becomes much easier to read:

	ret = btrfs_backref_iter_start(iter, cur->bytenr);
	for (; ret == 0; ret = btrfs_backref_iter_next(iter)) {
		/* The really important work */
	}

Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com> Reviewed-by: Josef Bacik <josef@toxicpanda.com> Signed-off-by: Qu Wenruo <wqu@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2020-05-25btrfs: backref: implement btrfs_backref_iter_next()Qu Wenruo
This function will go to the next inline/keyed backref for btrfs_backref_iter infrastructure. Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com> Reviewed-by: Josef Bacik <josef@toxicpanda.com> Signed-off-by: Qu Wenruo <wqu@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2020-05-25btrfs: backref: introduce the skeleton of btrfs_backref_iterQu Wenruo
Due to the complex nature of the btrfs extent tree, when we want to iterate all backrefs of one extent, this involves quite a lot of work, like searching the EXTENT_ITEM/METADATA_ITEM and iterating through inline and keyed backrefs.

Normally this would result in complex code, something like:

	btrfs_search_slot()
	/* Ensure we are at EXTENT_ITEM/METADATA_ITEM */
	while (1) {			/* Loop for extent tree items */
		while (ptr < end) {	/* Loop for inlined items */
			/* Real work here */
		}
next:
		ret = btrfs_next_item()
		/* Ensure we're still at keyed item for specified bytenr */
	}

The idea of btrfs_backref_iter is to avoid such complex and hard to read code structure, and use something like the following instead:

	iter = btrfs_backref_iter_alloc();
	ret = btrfs_backref_iter_start(iter, bytenr);
	if (ret < 0)
		goto out;
	for (; ; ret = btrfs_backref_iter_next(iter)) {
		/* Real work here */
	}
out:
	btrfs_backref_iter_free(iter);

This patch is just the skeleton + btrfs_backref_iter_start() code.

Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com> Reviewed-by: Josef Bacik <josef@toxicpanda.com> Signed-off-by: Qu Wenruo <wqu@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2020-05-25btrfs: add missing annotation for btrfs_tree_lock()Jules Irenge
Sparse reports a warning at btrfs_tree_lock():

	warning: context imbalance in btrfs_tree_lock() - wrong count at exit

The root cause is the missing annotation at btrfs_tree_lock(). Add the missing __acquires(&eb->lock) annotation.

Signed-off-by: Jules Irenge <jbi.octave@gmail.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2020-05-25btrfs: add missing annotation for btrfs_lock_cluster()Jules Irenge
Sparse reports a warning at btrfs_lock_cluster():

	warning: context imbalance in btrfs_lock_cluster() - wrong count

The root cause is the missing annotation at btrfs_lock_cluster(). Add the missing __acquires(&cluster->refill_lock) annotation.

Signed-off-by: Jules Irenge <jbi.octave@gmail.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
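For context, a minimal illustration of how such a sparse annotation is written (the struct and function here are hypothetical; only the __acquires() usage mirrors the two patches above):

	#include <linux/spinlock.h>

	struct demo {
		spinlock_t lock;
	};

	static void demo_lock(struct demo *d)
		__acquires(&d->lock)
	{
		/* Tell sparse this function exits with the lock held. */
		spin_lock(&d->lock);
	}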
2020-05-24Linux 5.7-rc7Linus Torvalds
2020-05-24Merge tag 'efi-urgent-2020-05-24' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tipLinus Torvalds
Pull EFI fixes from Thomas Gleixner:
 "A set of EFI fixes:

  - Don't return a garbage screen info when EFI framebuffer is not available

  - Make the early EFI console work properly with wider fonts instead of drawing garbage

  - Prevent a memory buffer leak in allocate_e820()

  - Print the firmware error record properly so it can be decoded by users

  - Fix a symbol clash in the host tool build which only happens with newer compilers

  - Add a missing check for the event log version of TPM which caused boot failures on several Dell systems due to an attempt to decode SHA-1 format with the crypto agile algorithm"

* tag 'efi-urgent-2020-05-24' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  tpm: check event log version before reading final events
  efi: Pull up arch-specific prototype efi_systab_show_arch()
  x86/boot: Mark global variables as static
  efi: cper: Add support for printing Firmware Error Record Reference
  efi/libstub/x86: Avoid EFI map buffer alloc in allocate_e820()
  efi/earlycon: Fix early printk for wider fonts
  efi/libstub: Avoid returning uninitialized data from setup_graphics()
2020-05-24Merge tag 'x86-urgent-2020-05-24' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tipLinus Torvalds
Pull x86 fixes from Thomas Gleixner:
 "Two fixes for x86:

  - Unbreak stack dumps for inactive tasks by interpreting the special first frame left by __switch_to_asm() correctly. The recent change not to skip the first frame so ORC and frame unwinder behave in the same way caused all entries to be unreliable, i.e. prepended with '?'.

  - Use cpumask_available() instead of an implicit NULL check of a cpumask_var_t in mmio trace to prevent a Clang build warning"

* tag 'x86-urgent-2020-05-24' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/unwind/orc: Fix unwind_get_return_address_ptr() for inactive tasks
  x86/mmiotrace: Use cpumask_available() for cpumask_var_t variables